

Showing posts from July, 2021

2.4 Angles between vectors and Orthogonality

In addition to enabling the definition of lengths of vectors, as well as the distance between two vectors, inner products also capture the geometry of a vector space by defining the angle $\omega$ between two vectors. Assume $x \ne 0, y \ne 0$; then $-1 \le \frac{\langle x,y \rangle}{\left \| x \right \| \left \| y \right \|} \le 1$, so there exists a unique $\omega \in [0,\pi]$ with $\cos \omega = \frac{\langle x,y \rangle}{\left \| x \right \| \left \| y \right \|}$. The number $\omega$ is the angle between the vectors $x$ and $y$. Intuitively, the angle between two vectors tells us how similar their orientations are. For example, using the dot product, the angle between $x$ and $y = 4x$, i.e., $y$ is a scaled version of $x$, is 0: their orientation is the same.

Example (university question): Let us consider the angle between $x = [1,1]^T \in \mathbb{R}^2$ and $y = [1,2]^T \in \mathbb{R}^2$. If we use the dot product as the inner product, $\cos \omega = \frac{\langle x,y \rangle}{\sqrt{\langle x,x \rangle \langle y,y \rangle}} = \frac{3}{\sqrt{2 \cdot 5}} = \frac{3}{\sqrt{10}} \approx 0.95$, so $\omega = \arccos \frac{3}{\sqrt{10}} \approx 0.32$ rad, i.e., about $18.4^\circ$.
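As a quick check of this example, here is a minimal numpy sketch (variable names are mine) that computes the angle between $x = [1,1]^T$ and $y = [1,2]^T$ using the dot product as the inner product:

import numpy as np

x = np.array([1.0, 1.0])
y = np.array([1.0, 2.0])

# cos(omega) = <x, y> / (||x|| ||y||), with the dot product as inner product
cos_omega = x.dot(y) / (np.linalg.norm(x) * np.linalg.norm(y))
omega = np.arccos(cos_omega)

print(cos_omega)  # approx 0.9487
print(omega)      # approx 0.3217 rad, i.e. about 18.43 degrees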

2.3 Lengths of Vectors and Distances

Norms can be used to compute the length of a vector. Inner products and norms are closely related in the sense that any inner product induces a norm $\left \| x \right \| := \sqrt{\langle x,x \rangle}$, so we can compute lengths of vectors using an inner product. However, not every norm is induced by an inner product; the Manhattan norm is an example of a norm without a corresponding inner product. Norms induced by inner products need special attention.

Cauchy-Schwarz Inequality: For an inner product vector space $(V, \langle \cdot,\cdot \rangle)$, the induced norm $\left \| \cdot \right \|$ satisfies the Cauchy-Schwarz inequality $|\langle x,y \rangle| \le \left \| x \right \| \left \| y \right \|$.

Example: Length of a vector using the inner product. In geometry, we are often interested in lengths of vectors. We can now use an inner product to compute the length. Let us take $x = [1,1]^T \in \mathbb{R}^2$. If we use the dot product as the inner product, we obtain $\left \| x \right \| = \sqrt{x^T x} = \sqrt{1^2 + 1^2} = \sqrt{2}$.
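The induced norm and the Cauchy-Schwarz inequality can be verified numerically; a minimal numpy sketch using the same $x = [1,1]^T$ and an arbitrary second vector $y$ chosen for illustration:

import numpy as np

x = np.array([1.0, 1.0])
y = np.array([1.0, 2.0])

# norm induced by the dot product: ||x|| = sqrt(<x, x>)
norm_x = np.sqrt(x.dot(x))
print(norm_x)  # sqrt(2), approx 1.4142

# Cauchy-Schwarz: |<x, y>| <= ||x|| ||y||
lhs = abs(x.dot(y))
rhs = np.sqrt(x.dot(x)) * np.sqrt(y.dot(y))
print(lhs, rhs, lhs <= rhs)  # 3.0  approx 3.1623  True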

2.2 Inner products

Inner products allow for the introduction of intuitive geometrical concepts, such as the length of a vector and the angle or distance between two vectors. A major purpose of inner products is to determine whether vectors are orthogonal to each other.

Dot Product: It is a special type of inner product. The scalar product/dot product of two vectors in $\mathbb{R}^n$ is given by $x^T y = \sum_{i=1}^n x_i y_i$.

Example

from numpy import array
a = array([1, 2, 3])
b = array([2, 3, 4])
print(a)
print(b)
print("dot product")
print(a.dot(b))

O/P
[1 2 3]
[2 3 4]
dot product
20

General Inner Products: Let $V$ be a vector space and $\Omega : V \times V \to \mathbb{R}$ be a bilinear mapping that takes two vectors and maps them onto a real number. Then $\Omega$ is called symmetric if $\Omega(x,y) = \Omega(y,x)$ for all $x,y \in V$, i.e., the order of the arguments does not matter. $\Omega$ is called positive definite if $\forall x \in V \setminus \{0\}: \Omega(x,x) > 0$, and $\Omega(0,0) = 0$. A positive definite, symmetric bilinear mapping $\Omega : V \times V \to \mathbb{R}$ is called an inner product on $V$.
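To illustrate a general inner product beyond the dot product, here is a small sketch where the inner product is defined through a symmetric, positive definite matrix $A$ as $\langle x,y \rangle = x^T A y$ (this particular matrix is an illustrative choice, not taken from the post):

import numpy as np

# symmetric, positive definite matrix defining <x, y> = x^T A y
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

def inner(x, y):
    return x @ A @ y

x = np.array([1.0, 2.0])
y = np.array([2.0, 3.0])

print(inner(x, y))                  # 23.0
print(inner(x, y) == inner(y, x))   # symmetric: True
print(inner(x, x) > 0)              # positive for this nonzero x: True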

1.12 Image and Kernel

The image and kernel of a linear mapping are vector subspaces with certain important properties.

Definition: Image and Kernel. For $\Phi: V \to W$, we define the kernel or null space $\ker(\Phi) = \Phi^{-1}(0_W) = \{v \in V : \Phi(v) = 0_W\}$ and the image or range $\text{Im}(\Phi) = \Phi(V) = \{w \in W \mid \exists v \in V : \Phi(v) = w\}$. We call $V$ and $W$ the domain and codomain of $\Phi$, respectively. Intuitively, the kernel is the set of vectors $v \in V$ that $\Phi$ maps onto the neutral element $0_W \in W$. The image is the set of vectors $w \in W$ that can be “reached” by $\Phi$ from some vector in $V$.

Remark: Consider a linear mapping $\Phi: V \to W$, where $V, W$ are vector spaces. It always holds that $\Phi(0_V) = 0_W$ and therefore $0_V \in \ker(\Phi)$; in particular, the null space is never empty. $\text{Im}(\Phi) \subseteq W$ is a subspace of $W$, and $\ker(\Phi) \subseteq V$ is a subspace of $V$.

Remark (Null space and Column space): Let us consider $A \in \mathbb{R}^{m \times n}$ and the linear mapping $\Phi: \mathbb{R}^n \to \mathbb{R}^m$, $x \mapsto Ax$. Then $\ker(\Phi)$ is the null space of $A$, and $\text{Im}(\Phi)$ is the column space of $A$, i.e., the span of the columns of $A$.
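For the matrix case in the last remark, the kernel and image can be computed symbolically; a minimal sketch with sympy (the matrix is only an illustrative choice):

from sympy import Matrix

# For Phi(x) = A x, ker(Phi) is the null space of A and Im(Phi) is the
# column space of A.
A = Matrix([[1, 2, 3],
            [2, 4, 6]])

print(A.nullspace())     # basis of the kernel: [-2, 1, 0]^T and [-3, 0, 1]^T
print(A.columnspace())   # basis of the image: [1, 2]^T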

1.7 Generating set, basis and span

Generating set and span: Consider a vector space $\mathbb{V} = (V,+,\cdot)$ and a set of vectors $A = \{x_1,\ldots,x_k\} \subseteq V$. If every vector $v \in V$ can be expressed as a linear combination of $x_1,\ldots,x_k$, then $A$ is called a generating set of $V$. The set of all linear combinations of vectors in $A$ is called the span of $A$. If $A$ spans the vector space $V$, we write $V = span[A]$ or $V = span[x_1,\ldots,x_k]$. Generating sets are sets of vectors that span vector (sub)spaces, i.e., every vector can be represented as a linear combination of the vectors in the generating set. Now, we will be more specific and characterize the smallest generating set that spans a vector (sub)space.

Basis: Consider a vector space $\mathbb{V} = (V,+,\cdot)$ and $A \subseteq V$. A generating set $A$ of $V$ is called minimal if there exists no smaller set $\hat{A} \subsetneq A \subseteq V$ that spans $V$. Every linearly independent generating set of $V$ is minimal and is called a basis of $V$.
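Whether a given set of vectors forms a basis of $\mathbb{R}^n$ can be checked via the rank of the matrix whose columns are those vectors; a small numpy sketch with illustrative vectors:

import numpy as np

# columns x1 = [1, 1]^T and x2 = [1, 2]^T
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])

# rank 2 equals the dimension of R^2, so the columns are linearly
# independent, span R^2, and therefore form a basis
print(np.linalg.matrix_rank(A))  # 2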

1.5 Vector Spaces and subspaces

Vector Spaces: A real-valued vector space $\mathbb{V} = (V, +, \cdot)$ is a set $V$ with two operations
$+: V \times V \to V$
$\cdot: \mathbb{R} \times V \to V$
where $+$ and $\cdot$ are vector addition and scalar multiplication, such that
1. $(V, +)$ is an Abelian group
2. Distributivity of scalar multiplication:
$\forall \lambda \in \mathbb{R}$ and $x, y \in V: \lambda \cdot (x + y) = \lambda \cdot x + \lambda \cdot y$
$\forall \lambda, \psi \in \mathbb{R}$ and $x \in V: (\lambda + \psi) \cdot x = \lambda \cdot x + \psi \cdot x$
3. Associativity of scalar multiplication:
$\forall \lambda, \psi \in \mathbb{R}$ and $x \in V: \lambda \cdot (\psi \cdot x) = (\lambda \psi) \cdot x$
4. Identity element with respect to scalar multiplication:
$\forall x \in V: 1 \cdot x = x$
The elements $x \in V$ are called vectors. The identity element of $(V, +)$ is the zero vector $0 = [0,\ldots,0]^T$, and the operation $+$ is vector addition. The elements $\lambda \in \mathbb{R}$ are called scalars, and the operation $\cdot$ is scalar multiplication.
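For $\mathbb{R}^n$ with the usual operations, these axioms can be checked numerically on concrete vectors; a small sketch with arbitrary illustrative values:

import numpy as np

lam, psi = 2.0, 3.0
x = np.array([1.0, 2.0])
y = np.array([3.0, 4.0])

# distributivity over vector addition: lam*(x + y) = lam*x + lam*y
print(np.allclose(lam * (x + y), lam * x + lam * y))    # True
# distributivity over scalar addition: (lam + psi)*x = lam*x + psi*x
print(np.allclose((lam + psi) * x, lam * x + psi * x))  # True
# associativity of scalar multiplication
print(np.allclose(lam * (psi * x), (lam * psi) * x))    # True
# identity element of scalar multiplication
print(np.allclose(1.0 * x, x))                          # True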

1.4 Gaussian Elimination to find the Inverse of a square matrix

A linear system is said to be square if the number of equations matches the number of unknowns. If the system $Ax = b$ is square, then the coefficient matrix $A$ is square. If $A$ has an inverse, then the solution to the system $Ax = b$ can be found by multiplying both sides by $A^{-1}$:
$Ax = b$
$A^{-1}Ax = A^{-1}b$
$x = A^{-1}b$
If $A$ is an invertible $n \times n$ matrix, then the system $Ax = b$ has a unique solution for every $n$-vector $b$, and this solution equals $A^{-1}b$. Since determining $A^{-1}$ typically requires more calculation than performing Gaussian elimination and back substitution, this is not necessarily an improved method of solving $Ax = b$ (and, of course, if $A$ is not square, then it has no inverse, so this method is not even an option for non-square systems). However, if the coefficient matrix $A$ is square, and if $A^{-1}$ is already known or the solution of $Ax = b$ is required for several different vectors $b$, then this method is indeed useful.
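A small numpy sketch of both approaches for an illustrative 2 x 2 system (the matrix and right-hand side are my own example):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

x_inv = np.linalg.inv(A) @ b      # x = A^{-1} b
x_solve = np.linalg.solve(A, b)   # Gaussian elimination, usually preferred

print(x_inv)                        # [0.8 1.4]
print(np.allclose(x_inv, x_solve))  # True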

1.1 Solving system of equations using Gauss Elimination Method

Elementary Transformations: Key to solving a system of linear equations are elementary transformations that keep the solution set the same, but transform the equation system into a simpler form:
Exchange of two equations (rows in the matrix representing the system of equations)
Multiplication of an equation (row) by a nonzero constant
Addition of two equations (rows), in particular adding a scalar multiple of one row to another

Row Echelon Form: A matrix is in row-echelon form if
All rows that contain only zeros are at the bottom of the matrix; correspondingly, all rows that contain at least one nonzero element are above rows that contain only zeros.
Looking at nonzero rows only, the first nonzero number from the left (called the pivot or the leading coefficient) is always strictly to the right of the pivot of the row above it.
Equivalently, the row-echelon form is the form in which the leading (first nonzero) entry of each row has only zeros below it; these leading entries are called pivots. Example: a small worked reduction is sketched below.
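As a sketch of the example referred to above, sympy's rref() applies elementary row operations to bring an augmented matrix into (reduced) row-echelon form; the small system here is an illustrative choice:

from sympy import Matrix

# augmented matrix [A | b] for the system x + 2y = 5, 3x + 4y = 11
Ab = Matrix([[1, 2, 5],
             [3, 4, 11]])

rref_matrix, pivot_cols = Ab.rref()
print(rref_matrix)   # Matrix([[1, 0, 1], [0, 1, 2]])  ->  x = 1, y = 2
print(pivot_cols)    # (0, 1): the pivot columns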