
2.2 Inner products

Inner products allow the introduction of intuitive geometric concepts, such as the length of a vector and the angle or distance between two vectors. A major purpose of inner products is to determine whether vectors are orthogonal to each other.

Dot Product

The dot product is a special type of inner product. The scalar product (dot product) of two vectors $x, y \in \mathbb{R}^n$ is given by

$x^T y= \sum_{i=1}^n x_iy_i $

Example
from numpy import array

# two vectors in R^3
a = array([1, 2, 3])
b = array([2, 3, 4])
print(a)
print(b)
print("dot product")
# 1*2 + 2*3 + 3*4 = 20
print(a.dot(b))

O/P
[1 2 3]
[2 3 4] 
dot product 
20
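
The same value can be obtained directly from the summation formula above. Below is a minimal sketch (re-declaring the same arrays for self-containment) that compares NumPy's dot with an explicit sum over the components:

from numpy import array

a = array([1, 2, 3])
b = array([2, 3, 4])
# dot product via NumPy
print(a.dot(b))
# same value from the formula: sum of x_i * y_i
print(sum(a[i] * b[i] for i in range(len(a))))

O/P
20
20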



General Inner Products
Let $V$ be a vector space and $\Omega : V \times V \to \mathbb{R}$ be a bilinear mapping that takes two vectors and maps them to a real number. Then

  • $\Omega$ is called symmetric if $\Omega(x,y)=\Omega(y,x)$ for all $x,y \in V$, i.e., the order of the arguments does not matter.
  • $\Omega$ is called positive definite if $\forall x \in V \setminus \{0\}: \Omega(x,x) > 0$ and $\Omega(0,0)=0$.

Definition: Let $V$ be a vector space and $\Omega : V \times V \to \mathbb{R}$ be a bilinear mapping. Then

  • A positive definite, symmetric bilinear mapping $\Omega : V \times V \to \mathbb{R}$ is called an inner product on $V$. We typically write $<x,y>$ instead of $\Omega(x,y)$.
  • The pair $(V,<\cdot,\cdot>)$ is called an inner product space or a (real) vector space with inner product. If we use the dot product $<x,y> = x^Ty$, we call $(V,<\cdot,\cdot>)$ a Euclidean vector space.
An Inner Product that is not the Dot Product

Consider $V=\mathbb{R}^2$. If we define
$<x,y>:=x_1y_1-(x_1y_2+x_2y_1)+2x_2y_2$
then $<\cdot,\cdot>$ is an inner product, but it is different from the dot product.

Since $x_1y_2+x_2y_1 = y_1x_2+y_2x_1$, we have
$<x,y> = x_1y_1-(x_1y_2+x_2y_1)+2x_2y_2$
$\qquad= y_1x_1 - (y_1x_2 + y_2x_1) + 2y_2x_2 = <y,x>$

Thus $<\cdot,\cdot>$ is symmetric.

It also holds that
$<x,x> = x_1^2 - 2x_1x_2 + 2x_2^2$
$\qquad= (x_1 - x_2)^2 + x_2^2$

This is a sum of squares and hence nonnegative; it equals zero only when $x_1 - x_2 = 0$ and $x_2 = 0$, i.e., only when $x = 0$. Hence $<\cdot,\cdot>$ is positive definite.
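
In matrix form, this inner product can be written as $<x,y> = x^TAy$ with the symmetric matrix $A = \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}$, whose entries are read off from the coefficients above. The following is a minimal numerical sketch (the vectors x and y are arbitrary example choices): it checks that the matrix form agrees with the coefficient form, and that the eigenvalues of $A$ are positive, consistent with positive definiteness.

from numpy import array
from numpy.linalg import eigvalsh

A = array([[1, -1],
           [-1, 2]])
x = array([1, 2])
y = array([2, 3])

# inner product <x,y> = x^T A y
print(x.dot(A).dot(y))
# coefficient form: x1*y1 - (x1*y2 + x2*y1) + 2*x2*y2
print(x[0]*y[0] - (x[0]*y[1] + x[1]*y[0]) + 2*x[1]*y[1])
# eigenvalues of the symmetric matrix A (both positive)
print(eigvalsh(A))

O/P
7
7
[0.38196601 2.61803399]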

