
2.4 Angles between Vectors and Orthogonality

In addition to enabling the definition of lengths of vectors, as well as the distance between two vectors, inner products also capture the geometry of a vector space by defining the angle $\omega$ between two vectors.
Assume $x \ne 0$ and $y \ne 0$. Then, by the Cauchy-Schwarz inequality,

$-1 \le \frac{<x,y>}{\left\|x\right\|\left\|y\right\|} \le 1$

Therefore, there exists a unique $\omega \in [0,\pi]$ with

$\cos \omega = \frac{<x,y>}{\left\|x\right\|\left\|y\right\|}$

The number $\omega$ is the angle between the vectors $x$ and $y$. Intuitively, the angle between two vectors tells us how similar their orientations are. For example, using the dot product, the angle between $x$ and $y = 4x$, i.e., $y$ is a scaled version of $x$, is $0$: their orientations are the same.
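To see why, apply the definition directly with $y = 4x$, using the linearity of the inner product:

$\cos \omega = \frac{<x,4x>}{\left\|x\right\|\left\|4x\right\|} = \frac{4<x,x>}{4\left\|x\right\|^2} = 1$

so $\omega = \arccos(1) = 0$.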

Example (university question)
Let's consider the angle between $x=[1,1]^T \in \mathbb{R}^2$ and $y=[1,2]^T \in \mathbb{R}^2$. If we use the dot product as the inner product, then

$\cos \omega=\frac{<x,y>}{\sqrt{<x,x><y,y>}}=\frac{x^Ty}{\sqrt{x^Tx\,y^Ty}}=\frac{3}{\sqrt{10}}$

and the angle between the two vectors is $\arccos\left(\frac{3}{\sqrt{10}}\right) \approx 0.32$ rad, which corresponds to about $18.4^{\circ}$.

# calculating the angle between two vectors using the dot product
import numpy as np

# define the vectors
x = np.array([1, 1])
y = np.array([1, 2])
print("x")
print(x)
print("y")
print(y)

# cos(w) = <x, y> / (||x|| ||y||)
cosw = x.dot(y) / np.sqrt(x.dot(x) * y.dot(y))
print("angle between vectors in radians")
print(np.arccos(cosw))
print("angle between vectors in degrees")
print(np.arccos(cosw) * 180.0 / np.pi)

Output:
x
[1 1]
y
[1 2]
angle between vectors in radians
0.3217505543966423
angle between vectors in degrees
18.434948822922017

Orthogonality
Two vectors $x$ and $y$ are orthogonal if and only if $<x,y>=0$, and we write $x \perp y$. If additionally $\left \| x \right \| = 1 = \left \| y \right \|$, i.e., the vectors are unit vectors, then $x$ and $y$ are orthonormal.
An implication of this definition is that the 0-vector is orthogonal to every vector in the vector space.

Note: vectors that are orthogonal with respect to one inner product do not have to be orthogonal with respect to a different inner product.
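A minimal sketch of this in Python, assuming a second inner product of the form $<x,y>_A = x^TAy$ for a symmetric, positive definite matrix $A$ (the particular $A$ below is a hypothetical choice for illustration):

# orthogonality depends on the choice of inner product
import numpy as np

x = np.array([1, 1])
y = np.array([-1, 1])

# standard dot product: <x, y> = 0, so x and y are orthogonal
print(x.dot(y))          # 0

# inner product <x, y>_A = x^T A y, with a symmetric positive definite A
# (this A is chosen purely for illustration)
A = np.array([[2, 0],
              [0, 1]])
print(x.dot(A).dot(y))   # -1, so x and y are NOT orthogonal w.r.t. <.,.>_A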

Example (university question)
Are the vectors $x = [2, 5, 1]^T$ and $y = [9, -3, 6]^T$ orthogonal? Justify your answer.

Their dot product is $2 \cdot 9 + 5 \cdot (-3) + 1 \cdot 6 = 18 - 15 + 6 = 9$, which is not equal to zero.
So the vectors are not orthogonal.

The angle between them is $\theta=\arccos\left(\frac{9}{\sqrt{30}\,\sqrt{126}}\right)$, which is not $90^{\circ}$.
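A quick numerical check (a sketch using NumPy, mirroring the earlier code):

import numpy as np

x = np.array([2, 5, 1])
y = np.array([9, -3, 6])
print(x.dot(y))                        # 9, nonzero, so not orthogonal

cosw = x.dot(y) / np.sqrt(x.dot(x) * y.dot(y))
print(np.degrees(np.arccos(cosw)))     # about 81.6 degrees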

Example (university question)
Determine whether the vectors
$a=\begin{bmatrix} 3\\ 0\\ -2 \end{bmatrix}, \quad b=\begin{bmatrix} 2\\ 0\\ 3 \end{bmatrix}$ are orthogonal or not.

The dot product is $a \cdot b = 3 \cdot 2 + 0 \cdot 0 + (-2) \cdot 3 = 6 + 0 - 6 = 0$. Hence the vectors are orthogonal.
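The same check in NumPy (a minimal sketch):

import numpy as np

a = np.array([3, 0, -2])
b = np.array([2, 0, 3])
print(a.dot(b))   # 0, so the vectors are orthogonal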
