
2.3 Lengths of Vectors and Distances

Norms can be used to compute the length of a vector. Inner products and norms are closely related: any inner product induces a norm

$\| x \| := \sqrt{\langle x,x \rangle}$

This shows that we can compute the length of a vector using an inner product. However, not every norm is induced by an inner product; the Manhattan norm is an example of a norm without a corresponding inner product. Norms induced by inner products need special attention.
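To make the distinction concrete, here is a minimal sketch (not part of the original notes) computing both the norm induced by the dot product and the Manhattan norm for the same vector:

# sketch: Euclidean norm (induced by the dot product) vs. Manhattan norm
import numpy as np

x = np.array([1, 1])

# norm induced by the dot product: ||x|| = sqrt(<x,x>)
euclidean = np.sqrt(x.dot(x))

# Manhattan (l1) norm: sum of absolute values; not induced by any inner product
manhattan = np.abs(x).sum()

print(euclidean)   # 1.4142135623730951
print(manhattan)   # 2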

Cauchy-Schwarz Inequality

For an inner product vector space $(V, \langle \cdot,\cdot \rangle)$, the induced norm $\| \cdot \|$ satisfies the Cauchy-Schwarz inequality: $|\langle x,y \rangle| \le \| x \| \, \| y \|$
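The inequality is easy to check numerically. The sketch below (illustrative only, using the dot product as the inner product and arbitrarily chosen random vectors) verifies it for a few samples:

# sketch: numerical check of the Cauchy-Schwarz inequality
import numpy as np

rng = np.random.default_rng(0)   # seed chosen arbitrarily
for _ in range(5):
    x = rng.standard_normal(3)
    y = rng.standard_normal(3)
    lhs = abs(x.dot(y))                          # |<x,y>|
    rhs = np.linalg.norm(x) * np.linalg.norm(y)  # ||x|| ||y||
    print(lhs <= rhs)   # prints True in every trial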

Example: Length of vectors using inner products

In geometry, we are often interested in lengths of vectors. We can now use an inner product to compute the length. Let us take $x = [1,1]^T \in  \mathbb{R}^2$. If we use the dot product as the inner product, we obtain
$\| x \| = \sqrt{x^Tx}=\sqrt{1^2+1^2}=\sqrt{2}$
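In code, this length computation is a one-liner (a small sketch using numpy's dot product):

# sketch: length of x = [1,1]^T via the dot product
import numpy as np

x = np.array([1, 1])
print(np.sqrt(x.dot(x)))   # 1.4142135623730951 = sqrt(2)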

Distance and Metric
Consider an inner product space $(V, \langle \cdot,\cdot \rangle)$. Then
$d(x,y):=\| x-y \| = \sqrt{\langle x-y,x-y \rangle}$
is called the distance between $x$ and $y$ for $x,y \in V$.
If we use the dot product as the inner product, then the distance is called the Euclidean distance.

The mapping
$d: V \times V \to \mathbb{R}$
$(x,y) \mapsto d(x,y)$
is called a metric.

The metric satisfies the following properties (checked numerically in the sketch after this list):
d is positive definite, i.e., $d(x,y) \ge 0$ for all $x,y \in V$ and $d(x,y)= 0 \Leftrightarrow x=y$
d is symmetric, i.e., $d(x,y)=d(y,x)$ for all $x,y \in V$
d satisfies the triangle inequality: $d(x,z) \le d(x,y) + d(y,z)$ for all $x,y,z \in V$
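The sketch below (illustrative, with the Euclidean distance and a few fixed vectors chosen for this example) checks each property numerically:

# sketch: checking the metric axioms for the Euclidean distance
import numpy as np

def d(x, y):
    return np.linalg.norm(x - y)

x = np.array([1.0, 2.0])
y = np.array([2.0, 3.0])
z = np.array([0.0, 1.0])

print(d(x, y) >= 0 and d(x, x) == 0)   # positive definiteness: True
print(np.isclose(d(x, y), d(y, x)))    # symmetry: True
print(d(x, z) <= d(x, y) + d(y, z))    # triangle inequality: True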

Example:
If $x=[1,2]^T$ and $y=[2,3]^T$ in $\mathbb{R}^2$, then $x-y=[-1,-1]^T$ and
$d(x,y)=\sqrt{(-1)^2+(-1)^2}=\sqrt{2}$

Compute the distance between $x=\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$ and $y=\begin{bmatrix} -1 \\ -1 \\ 0 \end{bmatrix}$ (university question).

$x-y=[2,3,3]^T$, so the distance is

$d(x,y)=\sqrt{2^2+3^2+3^2}=\sqrt{22}$

# calculating the Euclidean distance between two vectors
from math import sqrt
import numpy as np

# define the data
x = np.array([1, 2])
y = np.array([2, 3])

# difference vector
d = x - y
print("x")
print(x)
print("y")
print(y)
print("x-y")
print(d)

# distance = square root of the dot product of d with itself
dist = sqrt(d.dot(d))
print("distance")
print(dist)

O/P
x
[1 2]
y
[2 3]
x-y
[-1 -1]
distance
1.4142135623730951
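As a cross-check (a sketch, not part of the original notes), np.linalg.norm computes the same distances directly, including for the university question above:

# sketch: the same distances via np.linalg.norm
import numpy as np

print(np.linalg.norm(np.array([1, 2]) - np.array([2, 3])))          # 1.4142... = sqrt(2)
print(np.linalg.norm(np.array([1, 2, 3]) - np.array([-1, -1, 0])))  # 4.6904... = sqrt(22)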
