The target space $T$ may be continuous or discrete. When the target space $T$ is discrete, we can specify the probability that the random variable $X$ takes a particular value $x \in T$, denoted as $P(X = x)$. The expression $P(X = x)$ for a discrete random variable $X$ is known as the probability mass function.
When the target space $T$ is continuous, e.g., the real line $\mathbb{R}$, it is more natural to specify the probability that a random variable $X$ is in an interval, denoted by $P(a \le X \le b)$ for $a < b$. By convention, we specify the probability that a random variable $X$ is less than a particular value $x$, denoted by $P(X \le x)$. The expression $P(X \le x)$ for a continuous random variable $X$ is known as the cumulative distribution function.
Remark: We will use the phrase univariate distribution to refer to distributions of a single random variable. We will refer to distributions of more than one random variable as multivariate distributions.
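As a quick sketch of the two definitions, the pmf of a discrete random variable can be tabulated directly and its cdf obtained by summation (a fair six-sided die is used here purely as a made-up example):

```python
from fractions import Fraction

# pmf of a fair six-sided die: P(X = x) = 1/6 for each x in {1, ..., 6}
pmf = {x: Fraction(1, 6) for x in range(1, 7)}

# cdf evaluated at a point: P(X <= x) is the sum of pmf values up to x
def cdf(x):
    return sum(p for value, p in pmf.items() if value <= x)

print(pmf[3])   # P(X = 3) = 1/6
print(cdf(4))   # P(X <= 4) = 4/6 = 2/3
```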
Discrete Probabilities
When the target space is discrete, we can imagine the probability distribution of multiple random variables as filling out a (multidimensional) array of numbers. Figure shows an example.
The target space of the joint probability is the Cartesian product of the target spaces of each of the random variables. We define the joint probability as the entry corresponding to both values jointly:
$P(X = x_i, Y = y_j) = \frac{n_{ij}}{N}$
where $n_{ij}$ is the number of events with state $x_i$ and $y_j$ and $N$ the total number of events. The joint probability is the probability of the intersection of both events, that is,
$P(X = x_i,Y = y_j) = P(X = x_i \cap Y = y_j)$
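A minimal sketch of this count-based construction, using made-up counts $n_{ij}$:

```python
from fractions import Fraction

# Hypothetical counts (made-up numbers): n[(i, j)] is the number of
# events with state X = x_i and Y = y_j.
n = {(1, 1): 3, (1, 2): 2, (2, 1): 1, (2, 2): 6, (3, 1): 4, (3, 2): 4}
N = sum(n.values())  # total number of events

# Joint probability P(X = x_i, Y = y_j) = n_ij / N
joint = {state: Fraction(count, N) for state, count in n.items()}
print(joint[(1, 1)])        # 3/20
print(sum(joint.values()))  # 1
```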
Figure illustrates the probability mass function (pmf) of a discrete probability distribution. For two random variables $X$ and $Y$ , the probability that $X = x$ and $Y = y$ is (lazily) written as $p(x,y)$ and is called the joint probability.
The marginal probability that X takes the value $x$ irrespective of the value of random variable $Y$ is (lazily) written as $p(x)$. We write $X \sim p(x)$ to denote that the random variable $X$ is distributed according to $p(x)$. If we consider only the instances where $X = x$, then the fraction of instances (the conditional probability) for which $Y = y$ is written (lazily) as $p(y | x)$.
Consider two random variables $X$ and $Y$ , where $X$ has five possible states and $Y$ has three possible states, as shown in Figure . We denote by $n_{ij}$ the number of events with state $X = x_i$ and $Y = y_j$ , and denote by $N$ the total number of events. The value $c_i$ is the sum of the individual frequencies for the $i$th column, that is,
$c_i =\sum_{j=1}^{3}n_{ij}$
Similarly, the value $r_j$ is the row sum, that is,
$r_j =\sum_{i=1}^{5}n_{ij}$
Using these definitions, we can compactly express the distribution of $X$ and $Y$ .
The probability distribution of each random variable, the marginal probability, can be seen as the sum over a row or column
$P(X=x_i)=\frac{c_i}{N}=\frac{\sum_{j=1}^{3} n_{ij}}{N}$
and
$P(Y=y_j)=\frac{r_j}{N}=\frac{\sum_{i=1}^{5} n_{ij}}{N}$
where $c_i$ and $r_j$ are the $i$th column and $j$th row of the probability table, respectively. By convention, for discrete random variables with a finite number of events, we assume that probabilities sum up to one, that is,
$\sum_{i=1}^{5}P(X=x_i)=1$ and $\sum_{j=1}^{3}P(Y=y_j)=1$
The conditional probability is the fraction of a row or column in a particular cell. For example, the conditional probability of $Y$ given $X$ is
$P(Y=y_j|X=x_i)=\frac{n_{ij}}{c_i}$
and
$P(X=x_i|Y=y_j)=\frac{n_{ij}}{r_j}$
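The marginal and conditional formulas above can be sketched with a hypothetical $3 \times 5$ count table (the counts themselves are invented for illustration):

```python
import numpy as np

# Hypothetical count table: n[j, i] = number of events with X = x_i (column)
# and Y = y_j (row); X has five states, Y has three.
n = np.array([[1, 2, 3, 10, 10],
              [5, 10, 5, 7, 3],
              [10, 5, 3, 5, 1]])
N = n.sum()                     # total number of events

c = n.sum(axis=0)               # column sums c_i
r = n.sum(axis=1)               # row sums r_j

P_X = c / N                     # marginal P(X = x_i) = c_i / N
P_Y = r / N                     # marginal P(Y = y_j) = r_j / N

P_Y_given_X = n / c             # conditional P(Y = y_j | X = x_i) = n_ij / c_i
P_X_given_Y = n / r[:, None]    # conditional P(X = x_i | Y = y_j) = n_ij / r_j

print(P_X.sum(), P_Y.sum())    # both marginals sum to 1
```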
In machine learning, we use discrete probability distributions to model categorical variables, i.e., variables that take a finite set of unordered values. They could be categorical features, such as the degree taken at university when used for predicting the salary of a person, or categorical labels, such as letters of the alphabet when doing handwriting recognition. Discrete distributions are also often used to construct probabilistic models that combine a finite number of continuous distributions.
Example: ( university question)
$p(x \mid Y=y_1)=\left(\frac{0.01}{0.26},\frac{0.02}{0.26},\frac{0.03}{0.26},\frac{0.1}{0.26},\frac{0.1}{0.26}\right)$
$p(y \mid X=x_3)=\left(\frac{0.03}{0.11},\frac{0.05}{0.11},\frac{0.03}{0.11}\right)$
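This example can be checked numerically. Only the joint values for the row $Y = y_1$ and the column $X = x_3$ are given in the question, so only those slices appear here:

```python
# Conditioning renormalizes the relevant row/column of the joint table.
row_y1 = [0.01, 0.02, 0.03, 0.1, 0.1]   # joint values p(x_i, y_1)
col_x3 = [0.03, 0.05, 0.03]             # joint values p(x_3, y_j)

p_x_given_y1 = [v / sum(row_y1) for v in row_y1]
p_y_given_x3 = [v / sum(col_x3) for v in col_x3]

print(round(sum(row_y1), 2))  # marginal p(y1) = 0.26
print(round(sum(col_x3), 2))  # marginal p(x3) = 0.11
```

Each conditional distribution sums to one by construction, since dividing by the row (or column) total is exactly the renormalization step.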
Continuous Probabilities
For real-valued random variables, the target spaces are intervals of the real line $\mathbb{R}$. For continuous probabilities, we define the probability density function.
Definition (Probability Density Function). A function $f : \mathbb{R}^D \to \mathbb{R}$ is called a probability density function (pdf) if
1. $\forall x \in \mathbb{R}^D: f(x) \ge 0$
2. Its integral exists and $\int_{\mathbb{R}^D} f(x)\,dx = 1$
Observe that the probability density function is any function $f$ that is non-negative and integrates to one. We associate a random variable $X$ with this function $f$ by
$P(a \le X \le b)=\int_{a}^bf(x)dx$
where $a, b \in \mathbb{R}$ and $x\in \mathbb{R}$ are outcomes of the continuous random variable $X$. States $x \in \mathbb{R}^D$ are defined analogously by considering a vector of $x \in \mathbb{R}$. This association is called the law or distribution of the random variable $X$.
Remark: In contrast to discrete random variables, the probability of a continuous random variable $X$ taking a particular value, $P(X = x)$, is zero. This is like trying to specify an interval where $a = b$.
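A numerical sketch of $P(a \le X \le b)=\int_a^b f(x)\,dx$; the standard normal density is chosen here purely as an illustrative pdf. Note that an interval with $a = b$ yields probability zero, as the remark states:

```python
import math

# Standard normal density, chosen purely as an illustrative pdf f(x).
def f(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# Midpoint-rule approximation of P(a <= X <= b) = integral of f over [a, b]
def prob(a, b, steps=10_000):
    h = (b - a) / steps
    return sum(f(a + (k + 0.5) * h) for k in range(steps)) * h

print(round(prob(-1, 1), 3))   # ≈ 0.683, the familiar one-sigma probability
print(prob(0.5, 0.5))          # a = b: the interval has zero width, so P = 0.0
```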
Cumulative Distribution Function.
A cumulative distribution function (cdf) of a multivariate real-valued random variable $X$ with states $x \in \mathbb{R}^D$ is given by
$F_X(x)=P(X_1 \le x_1,\ldots,X_D \le x_D)$
where $X=[X_1,\ldots,X_D]^T$, $x=[x_1,\ldots,x_D]^T$, and the right-hand side represents the probability that each random variable $X_i$ takes a value smaller than or equal to $x_i$.
The cdf can also be expressed as the integral of the probability density function $f(x)$, so that $F_X(x)=\int_{-\infty}^{x_1}\cdots \int_{-\infty}^{x_D}f(z_1,\ldots,z_D)\,dz_1 \cdots dz_D$
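In one dimension, the relationship $F_X(x)=\int_{-\infty}^{x} f(z)\,dz$ can be sketched numerically. The exponential density is used here only because its cdf, $1-e^{-x}$, is known in closed form for comparison:

```python
import math

# Exponential density f(z) = e^{-z} for z >= 0, used only because its cdf
# F(x) = 1 - e^{-x} is available in closed form for comparison.
def f(z):
    return math.exp(-z) if z >= 0 else 0.0

# Midpoint-rule approximation of F(x) = integral of f from -inf to x
# (the lower limit is truncated at -10; f is zero below 0 anyway).
def F(x, lower=-10.0, steps=100_000):
    h = (x - lower) / steps
    return sum(f(lower + (k + 0.5) * h) for k in range(steps)) * h

print(round(F(1.0), 3))   # ≈ 0.632 = 1 - e^{-1}
```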
Example:
We consider two examples of the uniform distribution, where each state is equally likely to occur. This example illustrates some differences between discrete and continuous probability distributions. Let $Z$ be a discrete uniform random variable with three states $\{z=-1.1,\, z=0.3,\, z=1.5\}$. The probability mass function can be represented as a table of probability values.
Alternatively, we can think of this as a graph, where we use the fact that the states can be located on the $x$-axis, and the $y$-axis represents the probability of a particular state.
Let $X$ be a continuous random variable taking values in the range $0.9 \le X \le 1.6$, as represented in the figure. Observe that the height of the density can be greater than 1. However, it needs to hold that
$\int_{0.9}^{1.6} p(x)dx=1$
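A small check of this example: the uniform density on $[0.9, 1.6]$ has constant height $1/0.7 \approx 1.43 > 1$, yet still integrates to one:

```python
# Uniform density on [0.9, 1.6]: constant height 1/(1.6 - 0.9) ≈ 1.43 > 1.
a, b = 0.9, 1.6
height = 1 / (b - a)

def p(x):
    return height if a <= x <= b else 0.0

# Midpoint-rule approximation of the total area under the density.
steps = 10_000
h = (b - a) / steps
area = sum(p(a + (k + 0.5) * h) for k in range(steps)) * h

print(round(height, 4))   # 1.4286, a valid density value above 1
print(round(area, 6))     # 1.0, since the total probability mass is one
```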
Example (University Question)
A random variable X has the following probability distribution
$\begin{array}{|c|cccccc|} \hline X & -2 & -1 & 0 & 1 & 2 & 3 \\ f(X) & 1/10 & 15k^2 & 1/5 & 2k & 3/10 & 3k \\ \hline \end{array}$
a) Find $k$
The probabilities must add up to 1:
$1/10+15k^2+1/5+2k+3/10+3k=1$
$150k^2+50k-4=0$
$k=\frac{-50\pm \sqrt{2500+2400}}{300}$
The positive value of $k=20/300=1/15$
b) $P(X<2)$
$P(X<2)=P(X=-2)+P(X=-1)+P(X=0)+P(X=1)=1/10+1/15+1/5+2/15$
$\frac{15+10+30+20}{150}=\frac{1}{2}$
c) $P(-2<X<2)$
$P(-2<X<2)=P(X=-1)+P(X=0)+P(X=1)=1/15+1/5+2/15$
$\frac{1+3+2}{15}=\frac{6}{15}=\frac{2}{5}$
d) $P(X \le 2 \mid X>0)$
$P(X \le 2 \mid X>0)=\dfrac{P(0 < X \le 2)}{P(X>0)}=\dfrac{2/15+3/10}{2/15+3/10+3/15}=\dfrac{13/30}{19/30}=\dfrac{13}{19}$
e) Find the mean
$E(X)=\sum x \, P(X=x)=(-2)\cdot\frac{1}{10}+(-1)\cdot\frac{1}{15}+0\cdot\frac{1}{5}+1\cdot\frac{2}{15}+2\cdot\frac{3}{10}+3\cdot\frac{3}{15}=\frac{-3-1+0+2+9+9}{15}=\frac{16}{15}$
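The answers above can be verified with exact arithmetic (a sketch using Python's `fractions`):

```python
from fractions import Fraction

# The pmf from the question with k = 1/15 substituted in.
k = Fraction(1, 15)
pmf = {-2: Fraction(1, 10), -1: 15 * k**2, 0: Fraction(1, 5),
       1: 2 * k, 2: Fraction(3, 10), 3: 3 * k}

assert sum(pmf.values()) == 1  # sanity check: probabilities sum to 1

print(sum(p for x, p in pmf.items() if x < 2))        # P(X < 2) = 1/2
print(sum(p for x, p in pmf.items() if -2 < x < 2))   # P(-2 < X < 2) = 2/5
cond = (sum(p for x, p in pmf.items() if 0 < x <= 2)
        / sum(p for x, p in pmf.items() if x > 0))
print(cond)                                           # P(X <= 2 | X > 0) = 13/19
print(sum(x * p for x, p in pmf.items()))             # E(X) = 16/15
```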
Given that $E(X)=2.8$, find $a$ and $b$ and also compute $P(X>2)$, if the random variable $X$ has the probability distribution shown in the following table. (University Question)
$\begin{array}{|c|ccccc|} \hline X & 1 & 2 & 3 & 4 & 5 \\ P(X=x) & 0.35 & a & 0.15 & b & 0.20 \\ \hline \end{array}$
Finding $a$ and $b$
$E(X)=\sum x \, P(X=x)$
$2.8=1\times 0.35+2a+3\times 0.15+4b+5\times 0.20$
$2a+4b=1 \qquad (1)$
The probability values will add up to 1, so
$0.35+a+0.15+b+0.20=1$
$a+b=0.3 \qquad (2)$
Solving (1) and (2) we have $a=0.1$ and $b=0.2$
$P(X>2)$ is
$P(X>2)=P(X=3)+P(X=4)+P(X=5)$
$P(X>2)=0.15+0.20+0.20$
$P(X>2)=0.55$
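A quick numerical check of this solution (a sketch in plain Python):

```python
# Solve 2a + 4b = 1 (from E(X) = 2.8) together with a + b = 0.3:
# substituting a = 0.3 - b gives 2(0.3 - b) + 4b = 1, so 2b = 0.4.
b = (1 - 2 * 0.3) / 2
a = 0.3 - b
pmf = {1: 0.35, 2: a, 3: 0.15, 4: b, 5: 0.20}

mean = sum(x * p for x, p in pmf.items())
p_gt_2 = sum(p for x, p in pmf.items() if x > 2)

print(round(a, 2), round(b, 2))   # 0.1 0.2
print(round(mean, 2))             # 2.8
print(round(p_gt_2, 2))           # 0.55
```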