Posts

Orthogonal and Orthonormal Basis, Orthogonal Matrix, Orthogonal Complement

Orthogonal Vectors

Two vectors are orthogonal if the angle between them is 90 degrees, or equivalently if their dot product is zero.

Eg: $[1,0]$ and $[0,1]$ are two orthogonal vectors. $[1,1]$ and $[-1,1]$ are two orthogonal vectors.

Orthonormal Vectors

Two vectors are orthonormal if they are orthogonal and the length of each vector is 1.

Eg: $[1,0]$ and $[0,1]$ are two orthonormal vectors (length 1 and dot product zero). $[1,1]$ and $[-1,1]$ are two orthogonal vectors but not orthonormal (their lengths are not 1).

Definition: Let $S = \{v_1, v_2, \ldots, v_k\}$ be a set of vectors in $\mathbb{R}^n$. Then $S$ is called an orthogonal set if $v_i \cdot v_j = 0$ for all $i \neq j$. An orthogonal set of vectors is called orthonormal if all vectors in $S$ are unit vectors.

Theorem: Any orthogonal set of nonzero vectors $S = \{v_1, v_2, \ldots, v_k\}$ is linearly independent.

Proof: Let $c_1 v_1 + \cdots + c_k v_k = 0$. Since the vectors are orthogonal, we have $v_i \cdot v_j = 0$ for $i \neq j$, so when we dot both sides of the equation with $v_i$ all the terms drop out except $c_i (v_i \cdot v_i) = 0$. Because $v_i$ is nonzero, $v_i \cdot v_i \neq 0$, and therefore $c_i = 0$ for every $i$.
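As a quick numerical check, here is a minimal sketch (assuming numpy is available) that tests the example vectors above for orthogonality and orthonormality:

import numpy as np

u = np.array([1, 1])
v = np.array([-1, 1])

# Orthogonal: the dot product is zero
print(np.dot(u, v))          # 0 -> orthogonal

# Orthonormal additionally requires unit length
print(np.linalg.norm(u))     # sqrt(2) -> not a unit vector, so not orthonormal

# Normalizing each vector turns the orthogonal pair into an orthonormal pair
u_hat = u / np.linalg.norm(u)
v_hat = v / np.linalg.norm(v)
print(np.dot(u_hat, v_hat), np.linalg.norm(u_hat))   # 0.0 and 1.0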

1.11 Transformation Matrix in New Basis

Let's look at how the transformation matrix of a linear mapping $\Phi: V \to W$ changes if we change the bases in $V$ and $W$.

Consider two ordered bases of $V$:

$B = (b_1, b_2, \ldots, b_n)$ and $\tilde{B} = (\tilde{b}_1, \tilde{b}_2, \ldots, \tilde{b}_n)$

Consider two ordered bases of $W$:

$C = (c_1, c_2, \ldots, c_m)$ and $\tilde{C} = (\tilde{c}_1, \tilde{c}_2, \ldots, \tilde{c}_m)$

Let $A \in \mathbb{R}^{m \times n}$ be the transformation matrix from $V \to W$ with respect to bases $B$ and $C$, and $\tilde{A} \in \mathbb{R}^{m \times n}$ be the transformation matrix from $V \to W$ with respect to bases $\tilde{B}$ and $\tilde{C}$. We will find out how $A$ and $\tilde{A}$ are related, i.e., how we can transform $A$ into $\tilde{A}$. The transformation matrix is

$\tilde{A} = T^{-1} A S$

Here $S \in \mathbb{R}^{n \times n}$ is the transformation matrix that maps coordinates with respect to $\tilde{B}$ into coordinates with respect to $B$, and $T \in \mathbb{R}^{m \times m}$ is the transformation matrix that maps coordinates with respect to $\tilde{C}$ into coordinates with respect to $C$.
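A minimal sketch of the formula $\tilde{A} = T^{-1} A S$, using sympy; the concrete values of A, S and T below are illustrative assumptions, not from the post:

from sympy import Matrix

# A: transformation matrix w.r.t. bases B (domain) and C (codomain); maps R^2 -> R^3
A = Matrix([[1, 2], [0, 1], [3, 0]])

# S: columns hold the coordinates of the new basis vectors of B~ w.r.t. B
S = Matrix([[1, 1], [0, 1]])

# T: columns hold the coordinates of the new basis vectors of C~ w.r.t. C
T = Matrix([[1, 0, 0], [1, 1, 0], [0, 0, 2]])

# Transformation matrix w.r.t. the new bases B~ and C~
A_tilde = T.inv() * A * S
print(A_tilde)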

1.9 Linear Mapping and Matrix Representation for Linear Mapping

Linear Mapping (Linear Transformation)

A linear mapping (or simply a transformation, sometimes called a linear transformation) is a mapping between two vector spaces: it takes a vector as input and transforms it into a new output vector. A function is said to be linear if the properties of additivity and scalar multiplication are preserved, that is, the same result is obtained whether these operations are done before or after the transformation. Linear functions are synonymously called linear transformations.

A mapping $\Phi: V \to W$ preserves the structure of the vector space if

$\Phi(x+y) = \Phi(x) + \Phi(y)$

$\Phi(\lambda x) = \lambda \Phi(x)$

for all $x, y \in V$ and $\lambda \in \mathbb{R}$.

Definition: For vector spaces $V$ and $W$, a mapping $\Phi: V \to W$ is called a linear mapping (or vector space homomorphism/linear transformation) if

$\forall x, y \in V \quad \forall \lambda, \psi \in \mathbb{R}: \Phi(\lambda x + \psi y) = \lambda \Phi(x) + \psi \Phi(y)$
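A small sketch of these two properties, assuming numpy is available: every matrix $A$ defines a linear mapping $\Phi(x) = Ax$, and we can check additivity and homogeneity numerically (the matrix and vectors here are arbitrary example values):

import numpy as np

A = np.array([[2, 0], [1, 3]])
Phi = lambda x: A @ x        # the mapping Phi(x) = A x

x, y = np.array([1, 2]), np.array([3, -1])
lam = 5.0

print(np.allclose(Phi(x + y), Phi(x) + Phi(y)))   # additivity: True
print(np.allclose(Phi(lam * x), lam * Phi(x)))    # homogeneity: True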

1.10 Basis and Change of Basis

Basis

A basis is a coordinate system used to describe a vector space (a set of vectors). To be considered a basis, a set of vectors must:

Be linearly independent.
Span the space.

Every vector in the space is a unique combination of the basis vectors. The dimension of a space is defined to be the size of a basis set. For instance, there are two basis vectors in $\mathbb{R}^2$ (corresponding to the x- and y-axes in the Cartesian plane), and three in $\mathbb{R}^3$. If the number of vectors in a set is larger than the dimension of the space, they can't be linearly independent. If a set contains fewer vectors than the number of dimensions, these vectors can't span the whole space.

In the Cartesian plane, the basis vectors are orthogonal unit vectors (length of one), generally denoted as $i$ and $j$.

Figure 1: The basis vectors in the Cartesian plane.

For instance, in Figure 1, the basis vectors $i$ and $j$ point in the direction of the $x$- and $y$-axes respectively.
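A minimal sketch, assuming sympy is available, of checking whether a candidate set of vectors forms a basis of $\mathbb{R}^2$ (the vectors are example values): $n$ vectors form a basis of $\mathbb{R}^n$ exactly when the matrix with those vectors as columns has full rank.

from sympy import Matrix

# Candidate basis vectors [1, 1] and [-1, 1] as columns
V = Matrix([[1, -1], [1, 1]])

print(V.rank())   # 2 -> linearly independent and spanning, hence a basis
print(V.det())    # nonzero determinant confirms the same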

Row space, Column space and Null space

Row Space

The span of the row vectors of a matrix, represented as a vector space, is called the row space of that matrix. In other words, if we represent each row as a vector, then the vector space formed by the set of all linear combinations of those vectors is called the row space of the matrix. Assuming a 3x3 matrix

$A=\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}$

the row space is the span of the row vectors, $Span(v_1, v_2, v_3)$, where $v_1=(1,2,3)$, $v_2=(4,5,6)$, $v_3=(7,8,9)$.

If there are linearly dependent vectors, we have to leave those vectors out. The linearly independent vectors can be found by converting the matrix into reduced row echelon form. For example, consider

$A=\begin{bmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 1 & 1 & 0 \end{bmatrix}$

After row reduction we get

$\begin{bmatrix} 1 & 0 & -3 \\ 0 & 1 & 3 \\ 0 & 0 & 0 \end{bmatrix}$

So the row space is the span of $[1,0,-3]$ and $[0,1,3]$.

Note: we can take the transpose of $A$ and do the same row reduction to find a basis for the column space.
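sympy can compute bases for all three spaces directly; a minimal sketch with the example matrix from above:

from sympy import Matrix

A = Matrix([[1, 2, 3], [2, 4, 6], [1, 1, 0]])

print(A.rowspace())     # list of basis vectors for the row space
print(A.columnspace())  # list of basis vectors for the column space
print(A.nullspace())    # list of basis vectors for the null space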

1.6 Linear Independence and Dependence of Vectors, Linear Combination

Occasionally we have a set of vectors and we need to determine whether the vectors are linearly independent of each other. This may be necessary to determine whether the vectors form a basis, or to determine how many independent equations there are. In order to define linear dependence and independence, let us first clarify what a linear combination is.

Linear Combination

Consider a vector space $\mathbb{V}$ and a finite number of vectors $x_1, x_2, \ldots, x_k \in \mathbb{V}$. Then every $v \in \mathbb{V}$ of the form

$v = \lambda_1 x_1 + \cdots + \lambda_k x_k = \sum_{i=1}^{k} \lambda_i x_i \in \mathbb{V}$

with $\lambda_1, \ldots, \lambda_k \in \mathbb{R}$ is a linear combination of the vectors $x_1, \ldots, x_k$. For example, if $V=\{[1,2],[2,1]\}$, then $2x_1 - x_2 = 2[1,2] - [2,1] = [0,3]$ is a linear combination of the vectors in $V$.

Linear Independence

Let us consider a vector space $\mathbb{V}$ with $k \in \mathbb{N}$ and $x_1, \ldots, x_k \in \mathbb{V}$. If there is a non-trivial linear combination such that $0 = \sum_{i=1}^{k} \lambda_i x_i$ with at least one $\lambda_i \neq 0$, the vectors $x_1, \ldots, x_k$ are linearly dependent. If only the trivial solution exists, i.e., $\lambda_1 = \cdots = \lambda_k = 0$, the vectors are linearly independent.
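In practice the independence check reduces to a rank computation: stack the vectors as rows of a matrix, and they are linearly independent exactly when the rank equals the number of vectors. A minimal sketch with the example vectors from above, assuming sympy is available:

from sympy import Matrix

x1, x2 = [1, 2], [2, 1]
print(Matrix([x1, x2]).rank())        # 2 = number of vectors -> independent

x3 = [0, 3]                           # x3 = 2*x1 - x2, so the set becomes dependent
print(Matrix([x1, x2, x3]).rank())    # still 2 < 3 -> dependent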

1.2 Row Echelon form and Reduced Row Echelon form

Row Echelon Form

A matrix is in row-echelon form if:

All rows that contain only zeros are at the bottom of the matrix; correspondingly, all rows that contain at least one nonzero element are above the rows that contain only zeros.

Looking at the nonzero rows only, the first nonzero number from the left (also called the pivot or the leading coefficient) is always strictly to the right of the pivot of the row above it.

The variables corresponding to the pivots in the row-echelon form are called basic variables and the other variables are free variables. The columns which contain the pivot elements are called pivot columns.

from sympy import Matrix
M = Matrix([[1, 2, 3, 4], [5, 6, 3, 4], [7, 8, 1, 5]])
display(M.echelon_form())

$\begin{bmatrix} 1 & 2 & 3 & 4 \\ 0 & -4 & -12 & -16 \\ 0 & 0 & 8 & -4 \end{bmatrix}$

Reduced Row Echelon Form

A matrix is in reduced row-echelon form (row reduced echelon form or row canonical form) if:

It is in row-echelon form.
Every pivot is 1.
The pivot is the only nonzero entry in its column.
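Continuing the same example, sympy's rref() returns the reduced row-echelon form together with the pivot column indices; the expected output shown in the comments was worked out by hand for this matrix:

from sympy import Matrix

M = Matrix([[1, 2, 3, 4], [5, 6, 3, 4], [7, 8, 1, 5]])

R, pivots = M.rref()
print(R)        # Matrix([[1, 0, 0, -11/2], [0, 1, 0, 11/2], [0, 0, 1, -1/2]])
print(pivots)   # (0, 1, 2) -> the first three columns are pivot columns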