
4.2 Discrete and Continuous Probabilities

The target space $T$ may be continuous or discrete. When the target space $T$ is discrete, we can specify the probability that the random variable $X$ takes a particular value $x \in T$, denoted as $P(X = x)$. The expression $P(X = x)$ for a discrete random variable $X$ is known as the probability mass function.

When the target space $T$ is continuous, e.g., the real line $\mathbb{R}$, it is more natural to specify the probability that a random variable $X$ is in an interval, denoted by $P(a \le X \le b)$ for $a < b$. By convention, we specify the probability that a random variable $X$ is less than a particular value $x$, denoted by $P(X \le x)$. The expression $P(X \le x)$ for a continuous random variable $X$ is known as the cumulative distribution function.

Remark: We will use the phrase univariate distribution to refer to distributions of a single random variable. We will refer to distributions of more than one random variable as multivariate distributions.

Discrete Probabilities
When the target space is discrete, we can imagine the probability distribution of multiple random variables as filling out a (multidimensional) array of numbers. The figure shows an example.

The target space of the joint probability is the Cartesian product of the target spaces of each of the random variables. We define the joint probability as the entry in this array corresponding to both values jointly:
$P(X = x_i, Y = y_j) = \frac{n_{ij}}{N}$

where $n_{ij}$ is the number of events with state $x_i$ and $y_j$, and $N$ is the total number of events. The joint probability is the probability of the intersection of both events, that is,
$P(X = x_i,Y = y_j) = P(X = x_i \cap Y = y_j)$
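This counting view can be sketched in a few lines of Python; the count table $n_{ij}$ below is made up purely for illustration:

```python
# Hypothetical count table n_ij for two discrete random variables:
# rows index the 3 states of Y, columns index the 5 states of X.
counts = [
    [3, 1, 4, 2, 5],
    [2, 6, 1, 3, 2],
    [1, 2, 3, 1, 4],
]

N = sum(sum(row) for row in counts)  # total number of events

# Joint probability P(X = x_i, Y = y_j) = n_ij / N
joint = [[n / N for n in row] for row in counts]

# All joint probabilities sum to 1.
print(round(sum(sum(row) for row in joint), 10))  # 1.0
```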

The figure illustrates the probability mass function (pmf) of a discrete probability distribution. For two random variables $X$ and $Y$, the probability that $X = x$ and $Y = y$ is (lazily) written as $p(x,y)$ and is called the joint probability.

The marginal probability that X takes the value $x$ irrespective of the value of random variable $Y$ is (lazily) written as $p(x)$. We write $X \sim p(x)$ to denote that the random variable $X$ is distributed according to $p(x)$. If we consider only the instances where $X = x$, then the fraction of instances (the conditional probability) for which $Y = y$ is written (lazily) as $p(y | x)$.
Consider two random variables $X$ and $Y$, where $X$ has five possible states and $Y$ has three possible states, as shown in the figure. We denote by $n_{ij}$ the number of events with state $X = x_i$ and $Y = y_j$, and denote by $N$ the total number of events. The value $c_i$ is the sum of the individual frequencies for the $i$th column, that is,
$c_i =\sum_{j=1}^{3}n_{ij}$
Similarly, the value $r_j$ is the row sum, that is, 
$r_j =\sum_{i=1}^{5}n_{ij}$
Using these definitions, we can compactly express the distribution of $X$ and $Y$.
The probability distribution of each random variable, the marginal probability, can be seen as the sum over a row or column:
$P(X=x_i)=\frac{c_i}{N}=\frac{\sum_{j=1}^{3} n_{ij}}{N}$
and
$P(Y=y_j)=\frac{r_j}{N}=\frac{\sum_{i=1}^{5} n_{ij}}{N}$

where $c_i$ and $r_j$ are the $i$th column sum and $j$th row sum of the probability table, respectively. By convention, for discrete random variables with a finite number of events, we assume that probabilities sum up to one, that is,
$\sum_{i=1}^{5}P(X=x_i)=1$ and $\sum_{j=1}^{3}P(Y=y_j)=1$

The conditional probability is the fraction of a row or column contained in a particular cell. For example, the conditional probability of $Y$ given $X$ is
$P(Y=y_j|X=x_i)=\frac{n_{ij}}{c_i}$
and
$P(X=x_i|Y=y_j)=\frac{n_{ij}}{r_j}$
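These row and column manipulations are easy to sketch in Python. The joint probability table below is illustrative, not taken from the figure:

```python
# Illustrative joint probability table p[j][i] = P(X = x_i, Y = y_j),
# with 5 states for X (columns) and 3 states for Y (rows).
p = [
    [0.01, 0.02, 0.03, 0.10, 0.10],   # Y = y_1
    [0.05, 0.10, 0.05, 0.07, 0.20],   # Y = y_2
    [0.10, 0.08, 0.03, 0.05, 0.01],   # Y = y_3
]

# Marginals: sum out the other variable (a column sum or a row sum).
p_x = [sum(p[j][i] for j in range(3)) for i in range(5)]  # P(X = x_i)
p_y = [sum(p[j][i] for i in range(5)) for j in range(3)]  # P(Y = y_j)

# Conditional P(Y = y_j | X = x_i): divide a cell by its column marginal.
p_y_given_x = [[p[j][i] / p_x[i] for j in range(3)] for i in range(5)]

# Each conditional distribution sums to 1, and so does each marginal.
print(all(abs(sum(col) - 1) < 1e-9 for col in p_y_given_x))  # True
print(abs(sum(p_x) - 1) < 1e-9, abs(sum(p_y) - 1) < 1e-9)    # True True
```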
In machine learning, we use discrete probability distributions to model categorical variables, i.e., variables that take a finite set of unordered values. They could be categorical features, such as the degree taken at university when used for predicting the salary of a person, or categorical labels, such as letters of the alphabet when doing handwriting recognition. Discrete distributions are also often used to construct probabilistic models that combine a finite number of continuous distributions.
Example (university question): Given the joint probability table in the figure, the conditional distributions are
$p(x \mid Y=y_1)=\left(\frac{0.01}{0.26},\frac{0.02}{0.26},\frac{0.03}{0.26},\frac{0.1}{0.26},\frac{0.1}{0.26}\right)$
$p(y \mid X=x_3)=\left(\frac{0.03}{0.11},\frac{0.05}{0.11},\frac{0.03}{0.11}\right)$
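The normalisation in this example can be checked numerically. Only the row and column values quoted above are used; the rest of the figure's table is not reproduced here:

```python
# Row Y = y_1 and column X = x_3 of the joint table (values quoted above).
row_y1 = [0.01, 0.02, 0.03, 0.10, 0.10]   # P(X = x_i, Y = y_1) for i = 1..5
col_x3 = [0.03, 0.05, 0.03]               # P(X = x_3, Y = y_j) for j = 1..3

# Conditioning divides each joint value by the corresponding marginal,
# which is just the row or column sum (0.26 and 0.11 respectively).
p_x_given_y1 = [v / sum(row_y1) for v in row_y1]
p_y_given_x3 = [v / sum(col_x3) for v in col_x3]

# Each conditional distribution sums to 1.
print(round(sum(p_x_given_y1), 10), round(sum(p_y_given_x3), 10))
```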
Continuous Probabilities
For real-valued random variables, the target spaces are intervals of the real line $\mathbb{R}$.
For continuous probabilities, we define a probability density function.
Definition (Probability Density Function). A function $f : \mathbb{R}^D \to \mathbb{R}$ is called a probability density function (pdf) if
1. $\forall x \in \mathbb{R}^D: f(x) \ge 0$
2. its integral exists and $\int_{\mathbb{R}^D} f(x)\,dx = 1$.
Observe that the probability density function is any function $f$ that is non-negative and integrates to one. We associate a random variable $X$ with this function $f$ by
$P(a \le X \le b)=\int_{a}^bf(x)dx$

where $a, b \in \mathbb{R}$ and $x \in \mathbb{R}$ are outcomes of the continuous random variable $X$. States $x \in \mathbb{R}^D$ are defined analogously by considering a vector of outcomes in $\mathbb{R}$. This association is called the law or distribution of the random variable $X$.

Remark: In contrast to discrete random variables, the probability that a continuous random variable $X$ takes a particular value, $P(X = x)$, is zero. This is like trying to specify an interval where $a = b$.
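To make the definition of a pdf concrete, here is a small numerical sketch: a triangular density on $[0, 2]$ (chosen purely for illustration) is non-negative by construction, and its total integral and an interval probability $P(a \le X \le b)$ are computed with the trapezoidal rule:

```python
# Triangular density on [0, 2]: f(x) = 1 - |x - 1| there, 0 elsewhere.
def f(x):
    return max(0.0, 1.0 - abs(x - 1.0))

def integrate(g, a, b, n=100_000):
    """Trapezoidal-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    s = 0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n))
    return s * h

# f is non-negative everywhere and integrates to (approximately) 1 ...
print(round(integrate(f, -1, 3), 6))     # 1.0
# ... so P(a <= X <= b) is simply the integral of f over [a, b].
print(round(integrate(f, 0.5, 1.5), 6))  # 0.75
```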

Cumulative Distribution Function.

A cumulative distribution function (cdf) of a multivariate real-valued random variable $X$ with states $x \in \mathbb{R}^D$ is given by
$F_X(x)=P(X_1 \le x_1,\ldots,X_D \le x_D)$
where $X=[X_1,\ldots,X_D]^T$, $x=[x_1,\ldots,x_D]^T$, and the right-hand side represents the probability that each random variable $X_i$ takes a value smaller than or equal to $x_i$.

The cdf can also be expressed as the integral of the probability density function $f(x)$, so that
$F_X(x)=\int_{-\infty}^{x_1}\cdots \int_{-\infty}^{x_D}f(z_1,\ldots,z_D)\,dz_1 \cdots dz_D$
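In one dimension, this relationship can be checked numerically. As a sketch, the standard normal pdf is integrated with the midpoint rule and compared against the exact cdf obtained from `math.erf`:

```python
import math

# Standard normal pdf; its cdf has no closed form, but math.erf gives it.
def pdf(z):
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def cdf_numeric(x, lo=-10.0, n=100_000):
    """Midpoint-rule approximation of the integral of pdf over (-inf, x]."""
    h = (x - lo) / n
    return sum(pdf(lo + (i + 0.5) * h) for i in range(n)) * h

def cdf_exact(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

for x in (-1.0, 0.0, 1.96):
    # The numeric and exact values agree to many decimal places.
    print(round(cdf_numeric(x), 5), round(cdf_exact(x), 5))
```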
Example:
We consider two examples of the uniform distribution, where each state is equally likely to occur. This example illustrates some differences between discrete and continuous probability distributions. Let $Z$ be a discrete uniform random variable with three states $\{z=-1.1, z=0.3, z=1.5\}$. The probability mass function can be represented as a table of probability values.

Alternatively, we can think of this as a graph, where we use the fact that the states can be located on the $x$-axis, and the $y$-axis represents the probability of a particular state.

Let $X$ be a continuous random variable taking values in the range $0.9 \le X \le 1.6$, as represented in the figure. Observe that the height of the density can be greater than 1. However, it needs to hold that

$\int_{0.9}^{1.6} p(x)dx=1$
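A quick numerical illustration of this point, using the numbers from the uniform example above:

```python
# Uniform density on [0.9, 1.6]: constant height 1 / 0.7, which exceeds 1.
# That is allowed, because only the *integral* of the density must equal 1.
a, b = 0.9, 1.6
height = 1 / (b - a)

print(round(height, 4))            # 1.4286 -- a density value above 1
print(round(height * (b - a), 4))  # 1.0    -- total probability
```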


Example (University Question)
A random variable $X$ has the following probability distribution:
$\begin{array}{c|cccccc}X & -2 & -1 & 0 & 1 & 2 & 3 \\ \hline f(X) & 1/10 & 15k^2 & 1/5 & 2k & 3/10 & 3k\end{array}$
a) Find $k$
The probabilities must add up to 1:
$1/10+15k^2+1/5+2k+3/10+3k=1$
$150k^2+50k-4=0$
$k=\frac{-50\pm \sqrt{2500+2400}}{300}$
The positive value of $k=20/300=1/15$
b) $P(X<2)$
$P(X<2)=1/10+1/15+1/5+2/15=\frac{15+10+30+20}{150}=\frac{1}{2}$
c) $P(-2<X<2)$
$P(-2<X<2)=1/15+1/5+2/15=\frac{1+3+2}{15}=\frac{6}{15}=\frac{2}{5}$
d) $P(X \le 2 \mid X>0)$
$P(0<X \le 2)=2/15+3/10=\frac{20+45}{150}=\frac{13}{30}$
$P(X>0)=2/15+3/10+1/5=\frac{20+45+30}{150}=\frac{19}{30}$
$P(X \le 2 \mid X>0)=\frac{P(0<X \le 2)}{P(X>0)}=\frac{13/30}{19/30}=\frac{13}{19}$
e) Find the Mean
With $k=\tfrac{1}{15}$, we have $15k^2=\tfrac{1}{15}$, $2k=\tfrac{2}{15}$, and $3k=\tfrac{1}{5}$, so
$E(X)=\sum_i x_i P(X=x_i)=-2\cdot\tfrac{1}{10}-1\cdot\tfrac{1}{15}+0\cdot\tfrac{1}{5}+1\cdot\tfrac{2}{15}+2\cdot\tfrac{3}{10}+3\cdot\tfrac{1}{5}=\frac{-6-2+0+4+18+18}{30}=\frac{32}{30}=\frac{16}{15}$

f) Find the Variance
$E(X^2)=4\cdot\tfrac{1}{10}+1\cdot\tfrac{1}{15}+0+1\cdot\tfrac{2}{15}+4\cdot\tfrac{3}{10}+9\cdot\tfrac{1}{5}=\frac{12+2+0+4+36+54}{30}=\frac{108}{30}=\frac{18}{5}$
$Var(X)=E(X^2)-[E(X)]^2=\frac{18}{5}-\frac{256}{225}=\frac{810-256}{225}=\frac{554}{225}\approx 2.46$
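All parts of this question can be verified with exact rational arithmetic (a sanity check, not part of the original solution):

```python
from fractions import Fraction as F

k = F(1, 15)
xs = [-2, -1, 0, 1, 2, 3]
ps = [F(1, 10), 15 * k**2, F(1, 5), 2 * k, F(3, 10), 3 * k]

assert sum(ps) == 1                   # a) with k = 1/15, probabilities sum to 1
p = dict(zip(xs, ps))

print(sum(p[x] for x in xs if x < 2))        # b) P(X < 2)      -> 1/2
print(sum(p[x] for x in xs if -2 < x < 2))   # c) P(-2 < X < 2) -> 2/5

num = sum(p[x] for x in xs if 0 < x <= 2)    # P(0 < X <= 2)
den = sum(p[x] for x in xs if x > 0)         # P(X > 0)
print(num / den)                             # d) P(X<=2 | X>0) -> 13/19

mean = sum(x * p[x] for x in xs)
var = sum(x**2 * p[x] for x in xs) - mean**2
print(mean, var)                             # e), f) -> 16/15 554/225
```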
Given that $E(X)=2.8$, find $a$ and $b$, and also compute $P(X>2)$, if the random variable $X$ has the probability distribution shown in the following table. (University Question)
$\begin{array}{c|ccccc}X & 1 & 2 & 3 & 4 & 5 \\ \hline P(X=x) & 0.35 & a & 0.15 & b & 0.20\end{array}$

Finding $a$ and $b$
$E(X)=\sum x \cdot P(x)$
$2.8=1\times 0.35+2a+3\times 0.15+4b+5\times 0.20$
$2a+4b=1 \qquad (1)$
The probability values add up to 1, so
$0.35+a+0.15+b+0.20=1$
$a+b=0.3 \qquad (2)$

Solving (1) and (2) we have $a=0.1$ and $b=0.2$

$P(X>2)$ is
$P(X>2)=P(X=3)+P(X=4)+P(X=5)$
$P(X>2)=0.15+0.20+0.20$
$P(X>2)=0.55$
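A quick numerical check of this solution:

```python
# Verify a = 0.1, b = 0.2 against both constraints, then compute P(X > 2).
a, b = 0.1, 0.2
xs = [1, 2, 3, 4, 5]
ps = [0.35, a, 0.15, b, 0.20]

assert abs(sum(ps) - 1) < 1e-9                              # probabilities sum to 1
assert abs(sum(x * p for x, p in zip(xs, ps)) - 2.8) < 1e-9  # E(X) = 2.8

print(round(sum(p for x, p in zip(xs, ps) if x > 2), 4))     # 0.55
```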


