
4.1 Probability Space


Probability, loosely speaking, concerns the study of uncertainty. Probability can be thought of as the fraction of times an event occurs, or as a degree of belief about an event. We would then like to use this probability to measure the chance of something occurring in an experiment.

We often quantify uncertainty in the data, uncertainty in the machine learning model, and uncertainty in the predictions produced by the model. Quantifying uncertainty requires the idea of a random variable, which is a function that maps outcomes of random experiments to a set of properties that we are interested in. Associated with the random variable is a function that measures the probability that a particular outcome (or set of outcomes) will occur; this is called the probability distribution.

Probability and Random Variables
Modern probability is based on a set of axioms proposed by Kolmogorov (Grinstead and Snell, 1997; Jaynes, 2003) that introduce the three concepts of sample space, event space, and probability measure. Together, these define the probability space, which models a real-world process (referred to as an experiment) with random outcomes.

The sample space $\Omega$

The sample space is the set of all possible outcomes of the experiment, usually denoted by $\Omega$. For example, two successive coin tosses have a sample space of $\{hh, tt, ht, th\}$, where $h$ denotes “heads” and $t$ denotes “tails”.
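To make this concrete, the sample space for two coin tosses can be enumerated programmatically; a minimal Python sketch (the variable name `omega` is our own):

```python
from itertools import product

# Enumerate the sample space for two successive coin tosses:
# all ordered pairs of 'h' (heads) and 't' (tails).
omega = [''.join(pair) for pair in product('ht', repeat=2)]
print(omega)  # ['hh', 'ht', 'th', 'tt']
```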

The event space $\mathbb{A}$
The event space is the space of potential results of the experiment. A subset $A$ of the sample space $\Omega$ is in the event space $\mathbb{A}$ if, at the end of the experiment, we can observe whether a particular outcome $\omega \in \Omega$ is in $A$. The event space $\mathbb{A}$ is obtained by considering a collection of subsets of $\Omega$, and for discrete probability distributions $\mathbb{A}$ is often the power set of $\Omega$.
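As a small illustration of the power-set construction, the following sketch (reusing the `omega` list from the previous example, an assumption of ours) lists all $2^{|\Omega|} = 16$ possible events for the two-coin experiment:

```python
from itertools import chain, combinations

omega = ['hh', 'ht', 'th', 'tt']

# Every subset of the sample space is an event; the power set collects them all.
events = list(chain.from_iterable(
    combinations(omega, r) for r in range(len(omega) + 1)))
print(len(events))  # 16 = 2**4 events, from the empty set up to omega itself
```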

The probability $P$
With each event $A \in \mathbb{A}$, we associate a number $P(A)$ that measures the probability or degree of belief that the event will occur. $P(A)$ is called the probability of $A$.

The probability of a single event must lie in the interval $[0, 1]$, and the total probability over all outcomes in the sample space must be 1, i.e., $P(\Omega) = 1$.
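A minimal sketch of a probability measure for the two-coin experiment, assuming fair coins so that each outcome gets probability $\frac{1}{4}$ (the dictionary `P` and helper `prob` are illustrative names of our own):

```python
# Fair coins: each of the four outcomes is equally likely.
P = {'hh': 0.25, 'ht': 0.25, 'th': 0.25, 'tt': 0.25}

def prob(event):
    """P(A): sum the probabilities of the outcomes that make up the event A."""
    return sum(P[outcome] for outcome in event)

print(prob({'hh', 'ht'}))  # 0.5, a value in [0, 1]
print(prob(set(P)))        # 1.0, i.e. P(Omega) = 1
```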

Given a probability space $(\Omega, \mathbb{A}, P)$, we want to use it to model some real-world phenomenon. In machine learning, we often avoid explicitly referring to the probability space, and instead refer to probabilities on quantities of interest, which we denote by $T$. We refer to $T$ as the target space and to elements of $T$ as states.

We introduce a function $X : \Omega \to T$ that takes an element of $\Omega$ (an outcome) and returns a particular quantity of interest $x$, a value in $T$. This association/mapping from $\Omega$ to $T$ is called a random variable.

For example, in the case of tossing two coins and counting the number of heads, a random variable $X$ maps to the three possible outcomes: $X(hh) = 2$, $X(ht) = 1$, $X(th) = 1$, and $X(tt) = 0$. In this particular case, $T = \{0, 1, 2\}$, and it is the probabilities on elements of $T$ that we are interested in. For any subset $S \subseteq T$, we associate $P_X(S) \in [0, 1]$ with the probability that the random variable $X$ takes a value in $S$.

Assuming fair coins, so that $P(h) = P(t) = \frac{1}{2}$ and the two tosses are independent:

$P(X=2)=P(hh)=P(h)\cdot P(h)=\frac{1}{2}\cdot\frac{1}{2}=\frac{1}{4}$
$P(X=1)=P(ht)+P(th)=\frac{1}{2}\cdot\frac{1}{2}+\frac{1}{2}\cdot\frac{1}{2}=\frac{1}{4}+\frac{1}{4}=\frac{1}{2}$
$P(X=0)=P(tt)=P(t)\cdot P(t)=\frac{1}{2}\cdot\frac{1}{2}=\frac{1}{4}$
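These three probabilities can be reproduced by pushing the fair-coin measure on $\Omega$ through $X$; a small sketch, with `P_X` an illustrative name for the induced distribution on $T = \{0, 1, 2\}$:

```python
from collections import defaultdict

# Fair coins: uniform probability measure on the sample space.
P = {'hh': 0.25, 'ht': 0.25, 'th': 0.25, 'tt': 0.25}

def X(outcome):
    """The random variable: count the number of heads in an outcome."""
    return outcome.count('h')

# P_X(x) = P({omega in Omega : X(omega) = x}), the distribution on T = {0, 1, 2}.
P_X = defaultdict(float)
for outcome, p in P.items():
    P_X[X(outcome)] += p

print(dict(P_X))  # {2: 0.25, 1: 0.5, 0: 0.25}
```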

Remark. The target space, that is, the range $T$ of the random variable $X$, is used to indicate the kind of probability space, i.e., a $T$-valued random variable. When $T$ is finite or countably infinite, $X$ is called a discrete random variable. For continuous random variables, we only consider $T = \mathbb{R}$ or $T = \mathbb{R}^D$.

In machine learning systems, we are interested in the generalization error. This means that we are actually interested in the performance of our system on instances that we will observe in the future, which are not identical to the instances that we have seen so far. This analysis of future performance relies on probability and statistics.
