
4.1 Probability Space


Probability, loosely speaking, concerns the study of uncertainty. Probability can be thought of as the fraction of times an event occurs, or as a degree of belief about an event. We would then like to use this probability to measure the chance of something occurring in an experiment.

We often quantify uncertainty in the data, uncertainty in the machine learning model, and uncertainty in the predictions produced by the model. Quantifying uncertainty requires the idea of a random variable, which is a function that maps outcomes of random experiments to a set of properties that we are interested in. Associated with the random variable is a function that measures the probability that a particular outcome (or set of outcomes) will occur; this is called the probability distribution.

Probability and Random Variables
Modern probability is based on a set of axioms proposed by Kolmogorov (Grinstead and Snell, 1997; Jaynes, 2003) that introduce the three concepts of sample space, event space, and probability measure. The probability space models a real-world process (referred to as an experiment) with random outcomes.

The sample space $\Omega$

The sample space is the set of all possible outcomes of the experiment, usually denoted by $\Omega$. For example, two successive coin tosses have a sample space of $\{hh, tt, ht, th\}$, where $h$ denotes “heads” and $t$ denotes “tails”.
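
To make this concrete, here is a minimal sketch in Python (encoding each outcome as a pair of 'h'/'t' characters is a choice made here for illustration) that enumerates the sample space of two successive coin tosses:

```python
# A minimal sketch: enumerate the sample space of two successive coin tosses.
# Encoding outcomes as pairs of 'h'/'t' characters is a choice made for illustration.
from itertools import product

Omega = set(product("ht", repeat=2))
print(Omega)
# contains ('h', 'h'), ('h', 't'), ('t', 'h'), ('t', 't') -- i.e., {hh, ht, th, tt}
```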

The event space $\mathbb{A}$
The event space is the space of potential results of the experiment. A subset $A$ of the sample space $\Omega$ is in the event space $\mathbb{A}$ if at the end of the experiment we can observe whether a particular outcome $\omega \in \Omega$ is in $A$. The event space $\mathbb{A}$ is obtained by considering a collection of subsets of $\Omega$, and for discrete probability distributions $\mathbb{A}$ is often the power set of $\Omega$.
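
For the discrete case, the power set can be constructed explicitly. The sketch below (assuming the two-coin sample space from above) lists every subset of $\Omega$, i.e., every event in $\mathbb{A}$ when $\mathbb{A}$ is taken to be the power set:

```python
# A sketch: the event space as the power set of Omega (discrete case).
from itertools import combinations, product

Omega = list(product("ht", repeat=2))  # the two-coin sample space

def power_set(omega):
    """All subsets of the sample space, each returned as a frozenset (an event)."""
    return [frozenset(s)
            for r in range(len(omega) + 1)
            for s in combinations(omega, r)]

events = power_set(Omega)
print(len(events))  # 2**4 = 16 events, including the empty set and Omega itself
```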

The probability $P$
With each event $A \in \mathbb{A}$, we associate a number $P(A)$ that measures the probability or degree of belief that the event will occur. $P(A)$ is called the probability of $A$.

The probability of a single event must lie in the interval $[0, 1]$, and the total probability over all outcomes in the sample space must be 1, i.e., $P(\Omega) = 1$.
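
As a quick numerical check (a sketch assuming a fair coin tossed twice, so that each of the four outcomes is assigned probability $\frac{1}{4}$), these two requirements can be verified directly:

```python
# A sketch assuming a fair coin tossed twice: each outcome gets probability 1/4.
from itertools import product

Omega = list(product("ht", repeat=2))
P = {omega: 1 / len(Omega) for omega in Omega}  # uniform probability measure on Omega

# Every individual probability lies in [0, 1] ...
assert all(0.0 <= p <= 1.0 for p in P.values())
# ... and the total probability of the whole sample space is 1, i.e., P(Omega) = 1.
assert abs(sum(P.values()) - 1.0) < 1e-12
```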

Given a probability space $(\Omega, \mathbb{A}, P)$, we want to use it to model some real-world phenomenon. In machine learning, we often avoid explicitly referring to the probability space, but instead refer to probabilities on quantities of interest, which we denote by $T$. We refer to $T$ as the target space and refer to elements of $T$ as states.

We introduce a function $X : \Omega \to T$ that takes an element of $\Omega$ (an outcome) and returns a particular quantity of interest $x$, a value in $T$. This association/mapping from $\Omega$ to $T$ is called a random variable.

For example, in the case of tossing two coins and counting the number of heads, the random variable $X$ maps the four outcomes to three possible values: $X(hh) = 2$, $X(ht) = 1$, $X(th) = 1$, and $X(tt) = 0$. In this particular case, $T = \{0, 1, 2\}$, and it is the probabilities on elements of $T$ that we are interested in. For any subset $S \subseteq T$, we associate a probability $P_X(S) \in [0, 1]$ with the event that the random variable $X$ takes a value in $S$.

Assuming a fair coin, so that $P(h) = P(t) = \frac{1}{2}$, and that the two tosses are independent:

$P(X=2)=P(hh)=P(h)\cdot P(h)=\frac{1}{2}\cdot\frac{1}{2}=\frac{1}{4}$
$P(X=1)=P(ht)+P(th)=\frac{1}{2}\cdot\frac{1}{2}+\frac{1}{2}\cdot\frac{1}{2}=\frac{1}{4}+\frac{1}{4}=\frac{1}{2}$
$P(X=0)=P(tt)=P(t)\cdot P(t)=\frac{1}{2}\cdot\frac{1}{2}=\frac{1}{4}$
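
The same distribution can be computed programmatically by pushing the probabilities on $\Omega$ through the map $X$. The sketch below assumes a fair coin (every outcome has probability $\frac{1}{4}$), which is the assumption behind the calculation above:

```python
# A minimal sketch, assuming a fair coin: push the uniform measure on Omega
# through the random variable X(omega) = number of heads.
from collections import defaultdict
from fractions import Fraction
from itertools import product

Omega = list(product("ht", repeat=2))
P = {omega: Fraction(1, len(Omega)) for omega in Omega}  # each outcome has probability 1/4

def X(omega):
    """Random variable: map an outcome to the number of heads it contains."""
    return omega.count("h")

# P_X(x) is the total probability of all outcomes omega with X(omega) = x.
P_X = defaultdict(Fraction)
for omega in Omega:
    P_X[X(omega)] += P[omega]

print(dict(P_X))  # {2: Fraction(1, 4), 1: Fraction(1, 2), 0: Fraction(1, 4)}
```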

Remark. The target space, that is, the range $T$ of the random variable $X$, is used to indicate the kind of probability space, i.e., a $T$-valued random variable. When $T$ is finite or countably infinite, this is called a discrete random variable. For continuous random variables, we only consider $T = \mathbb{R}$ or $T = \mathbb{R}^D$.

In machine learning systems we are interested in generalization error. This means that we are actually interested in the performance of our system on instances that we will observe in the future, which are not identical to the instances that we have seen so far. This analysis of future performance relies on probability and statistics.
