

2.10 Determinant and Trace of a Matrix - Importance

Determinant

The determinant is defined only for square matrices; it is a function that maps a matrix to a real number. Determinants are an important concept in linear algebra and are used in the analysis and solution of systems of linear equations. The determinant of a matrix $A$ is written $\det(A)$ or $|A|$.

Determinants are used for testing invertibility: a square matrix $A \in \mathbb{R}^{n \times n}$ is invertible if and only if $\det(A) \ne 0$. For a diagonal, upper triangular, or lower triangular matrix, the determinant is the product of its diagonal elements.

The determinant also measures volume. If the two sides of a parallelogram are taken as the two columns of a matrix, then the absolute value of the determinant is the area of the parallelogram. For a parallelepiped with three edges $r, b, g$, the absolute value of the determinant of the $3 \times 3$ matrix $[\, r\ b\ g \,]$ is the volume of the solid.

The Laplace expansion can be used to compute the determinant. It reduces the problem of computing the determinant of an $n \times n$ matrix to computing determinants of $(n-1) \times (n-1)$ submatrices.
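As a quick numerical check of these properties, here is a small NumPy sketch (the matrices are made up for illustration, not taken from the text):

import numpy as np

# Columns of A are the two sides of a parallelogram; |det(A)| is its area
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
print(abs(np.linalg.det(A)))      # 6.0

# Invertibility test: det(A) != 0 means A has an inverse
print(np.linalg.det(A) != 0)      # True, so A is invertible

# For a triangular matrix the determinant is the product of the diagonal
T = np.array([[2.0, 5.0, 1.0],
              [0.0, 3.0, 4.0],
              [0.0, 0.0, 7.0]])
print(np.linalg.det(T))           # 42.0 = 2 * 3 * 7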

2.9 Gram-Schmidt Orthogonalization

The Gram-Schmidt method allows us to constructively transform any basis $(b_1, b_2, \ldots, b_n)$ of an $n$-dimensional vector space $V$ into an orthogonal/orthonormal basis $(u_1, u_2, \ldots, u_n)$ of $V$. The method iteratively constructs an orthogonal basis as follows:

$u_1 = b_1$

$u_2 = b_2 - \dfrac{u_1 u_1^T}{u_1^T u_1}\, b_2$

$\vdots$

$u_k = b_k - $ projection of $b_k$ onto the span of $(u_1, u_2, \ldots, u_{k-1})$

The $k$-th basis vector $b_k$ is projected onto the subspace spanned by the first $k-1$ constructed orthogonal vectors $u_1, u_2, \ldots, u_{k-1}$. This projection is then subtracted from $b_k$, yielding a vector $u_k$ that is orthogonal to the $(k-1)$-dimensional subspace spanned by $u_1, u_2, \ldots, u_{k-1}$. Repeating this procedure for all $n$ basis vectors $(b_1, b_2, \ldots, b_n)$ yields an orthogonal basis $(u_1, u_2, \ldots, u_n)$ of $V$. If we normalize each $u_k$, we obtain an orthonormal basis with $\|u_k\| = 1$ for $k = 1, \ldots, n$.

Consider a simple example:

import numpy as np
b1 = np.array([2.0, 0.0])
b2 = np.array([1.0, 1.0])
u1 = b1
u2 = b2 - (u1 @ b2) / (u1 @ u1) * u1   # subtract projection of b2 onto u1
print(u1, u2)                           # [2. 0.] [0. 1.]
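A more general sketch of the same idea (the function name gram_schmidt and the choice to normalize each vector are my own, not from the text) applies the projection-and-subtract step to every column of a basis matrix:

import numpy as np

def gram_schmidt(B):
    """Orthonormalize the columns of B (assumed linearly independent)."""
    U = []
    for k in range(B.shape[1]):
        u = B[:, k].astype(float)
        # subtract the projection of b_k onto each previously built direction
        for q in U:
            u = u - (q @ u) * q
        U.append(u / np.linalg.norm(u))   # normalize for an orthonormal basis
    return np.column_stack(U)

B = np.array([[2.0, 1.0],
              [0.0, 1.0]])
Q = gram_schmidt(B)
print(Q)
print(np.round(Q.T @ Q, 10))   # identity matrix confirms orthonormal columns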

2.8 Orthogonal Projections: Projection onto Lines and General Subspaces

Projections are an important class of linear transformations (besides rotations and reflections) and play an important role in graphics, coding theory, statistics, and machine learning. In machine learning, we often deal with data that is high-dimensional. High-dimensional data is often hard to analyze or visualize. However, high-dimensional data quite often possesses the property that only a few dimensions contain most of the information, and most other dimensions are not essential to describe key properties of the data.

When we compress or visualize high-dimensional data, we will lose information. To minimize this compression loss, we ideally find the most informative dimensions in the data. More specifically, we can project the original high-dimensional data onto a lower-dimensional feature space and work in this lower-dimensional space to learn more about the dataset and extract relevant patterns. Machine learning algorithms, such as principal component analysis (PCA) by Pearson (1901), exploit this idea of dimensionality reduction.
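As a small illustration (the vectors and matrices here are my own, not from the text), projecting onto the line spanned by $b$ uses the projection matrix $P = \dfrac{b b^T}{b^T b}$, and projecting onto the subspace spanned by the columns of a basis matrix $B$ uses $P = B (B^T B)^{-1} B^T$:

import numpy as np

# Projection onto the line spanned by b: P = b b^T / (b^T b)
b = np.array([[1.0], [2.0]])
P_line = (b @ b.T) / (b.T @ b)
x = np.array([[3.0], [1.0]])
print(P_line @ x)             # orthogonal projection of x onto span{b}

# Projection onto the subspace spanned by the columns of B:
# P = B (B^T B)^{-1} B^T
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
P_sub = B @ np.linalg.inv(B.T @ B) @ B.T
y = np.array([[1.0], [2.0], [0.0]])
print(P_sub @ y)              # closest point to y in the column space of B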

2.11 Eigenvalues and Eigenvectors

Geometrically, an eigenvector corresponding to a real nonzero eigenvalue points in a direction that is stretched by the transformation, and the eigenvalue is the factor by which it is stretched. If the eigenvalue is negative, the direction is reversed.

Definition: Let $A \in \mathbb{R}^{n \times n}$ be a square matrix. Then $\lambda \in \mathbb{R}$ is an eigenvalue of $A$ and $x \in \mathbb{R}^n \setminus \{0\}$ is a corresponding eigenvector of $A$ if

$Ax = \lambda x$

We call this the eigenvalue equation.

The following statements are equivalent:

$\lambda$ is an eigenvalue of $A$.

There exists an $x \in \mathbb{R}^n \setminus \{0\}$ with $Ax = \lambda x$, or equivalently, $(A - \lambda I_n)x = 0$ can be solved non-trivially, i.e., $x \ne 0$.

$\mathrm{rank}(A - \lambda I_n) < n$.

$\det(A - \lambda I_n) = 0$.

Collinearity and codirection: Two vectors that point in the same direction are called codirected. Two vectors are collinear if they point in the same or opposite direction. If $x$ is an eigenvector of $A$ associated with eigenvalue $\lambda$, then for any $c \in \mathbb{R} \setminus \{0\}$, the vector $cx$ is also an eigenvector of $A$ with the same eigenvalue, since $A(cx) = cAx = c\lambda x = \lambda(cx)$. Hence all vectors collinear with $x$ are eigenvectors of $A$.
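A short NumPy check of the eigenvalue equation (the matrix is an arbitrary illustration, not from the text):

import numpy as np

A = np.array([[4.0, 2.0],
              [1.0, 3.0]])
eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)                            # eigenvalues of A (5 and 2 here)

# Verify A x = lambda x for the first eigenpair
lam = eigvals[0]
x = eigvecs[:, 0]
print(np.allclose(A @ x, lam * x))        # True

# det(A - lambda I) is (numerically) zero at each eigenvalue
print(np.linalg.det(A - lam * np.eye(2))) # ~0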

Symmetric Positive Definite Matrix

A real matrix $A \in \mathbb{R}^{n \times n}$ is symmetric positive definite if it is symmetric ($A$ is equal to its transpose, $A = A^T$) and

$x^T A x > 0$ for all nonzero vectors $x \in \mathbb{R}^n$.

By making particular choices of $x$ in this definition we can derive the inequalities

$a_{ii} > 0$ for all $i$, and $|a_{ij}| < \sqrt{a_{ii}\, a_{jj}}$ for all $i \ne j$.

Satisfying these inequalities is not sufficient for positive definiteness: a matrix can satisfy all of them and still have $x^T A x \le 0$ for some nonzero $x$. A sufficient condition for a symmetric matrix to be positive definite is that it has positive diagonal elements and is diagonally dominant, that is, $a_{ii} > \sum_{j \ne i} |a_{ij}|$ for all $i$.

The definition requires the positivity of the quadratic form $x^T A x$. Sometimes this condition can be confirmed from the definition of $A$. For example, if $A = B^T B$ and $B$ has linearly independent columns, then $x^T A x = \|Bx\|_2^2 > 0$ for $x \ne 0$. Generally, though, this condition is not easy to check.

Two equivalent conditions to $A$ being symmetric positive definite are:

every leading principal minor $\det(A_k)$, where the submatrix $A_k$ comprises the intersection of rows and columns $1$ to $k$, is positive;

the eigenvalues of $A$ are all positive.
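A small sketch of how these checks look in NumPy (the test matrix is arbitrary, and the choice to also try a Cholesky factorization is my own, not from the text):

import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Eigenvalue test: a symmetric A is positive definite iff all eigenvalues > 0
print(np.all(np.linalg.eigvalsh(A) > 0))                 # True for this matrix

# Leading principal minors must all be positive (Sylvester's criterion)
print([np.linalg.det(A[:k, :k]) for k in range(1, 4)])   # [4.0, 11.0, 18.0]

# In practice a Cholesky factorization is a common test: it succeeds
# only for (numerically) positive definite matrices.
try:
    np.linalg.cholesky(A)
    print("A is positive definite")
except np.linalg.LinAlgError:
    print("A is not positive definite")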