c_1 & \cdots & 0\\
\vdots&\ddots & \vdots \\
0& \cdots & c_n
\end{bmatrix}$
Eigen decomposition is defined only for square matrices.
Let $A \in \mathbb{R}^{n \times n}$, let $\lambda_1,\lambda_2,\ldots,\lambda_n$ be a set of scalars, and let $p_1,p_2,\ldots,p_n$ be a set of vectors in $\mathbb{R}^n$. We define $P=[p_1,p_2,\ldots,p_n]$ and let $D \in \mathbb{R}^{n \times n}$ be a diagonal matrix with diagonal entries $\lambda_1,\lambda_2,\ldots,\lambda_n$. Then we can show that
$A=PDP^{-1}$
if and only if $\lambda_1,\lambda_2,\ldots,\lambda_n$ are the eigen values of $A$ and $p_1,p_2,\ldots,p_n$ are the corresponding eigen vectors of $A$. This requires that $P$ is invertible, i.e., that $P$ has full rank. Equivalently, we need $n$ linearly independent eigen vectors $p_1,\ldots,p_n$ that form a basis of $\mathbb{R}^n$.
Theorem (Eigen Decomposition): A square matrix $A \in \mathbb{R}^{n \times n}$ can be factored into
$A=PDP^{-1}$
where $P \in \mathbb{R}^{n \times n}$ and $D$ is a diagonal matrix whose diagonal entries are the eigenvalues of $A$, if and only if the eigen vectors of $A$ form a basis of $\mathbb{R}^n$.
This theorem implies that only a non-defective matrix can be diagonalized and that the columns of $P$ are the $n$ eigen vectors of $A$.
A symmetric matrix $S \in \mathbb{R}^{n \times n}$ can always be diagonalized.
The spectral theorem states that for a symmetric matrix we can always find orthonormal eigen vectors. This makes $P$ an orthogonal matrix, so that $A=PDP^T$ and $D=P^TAP$.
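The spectral theorem can be checked numerically. A minimal sketch, assuming a small symmetric example matrix: NumPy's `np.linalg.eigh` is designed for symmetric matrices and returns real eigenvalues with orthonormal eigenvectors, so the resulting $P$ is orthogonal.

```python
import numpy as np

# A symmetric example matrix (chosen for illustration)
S = np.array([[2.0, 1.0], [1.0, 2.0]])

# eigh is specialized for symmetric matrices: real eigenvalues,
# orthonormal eigenvectors in the columns of P
eigvals, P = np.linalg.eigh(S)
D = np.diag(eigvals)

# P is orthogonal: P^T P = I, hence P^{-1} = P^T
print(np.allclose(P.T @ P, np.eye(2)))   # True

# Spectral theorem factorization: S = P D P^T
print(np.allclose(P @ D @ P.T, S))       # True
```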
Geometric Intuition
We can interpret the eigendecomposition of a matrix as follows. Let $A$ be the transformation matrix of a linear mapping with respect to the standard basis. $P^{-1}$ performs a basis change from the standard basis into the eigen basis. Then the diagonal matrix $D$ scales the vectors along these axes by the eigenvalues $\lambda_i$. Finally, $P$ transforms these scaled vectors back into standard/canonical coordinates.
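The three steps above can be traced on a concrete vector. A minimal sketch, reusing the symmetric example matrix from this section (an assumption for illustration): applying $P^{-1}$, then $D$, then $P$ to a vector gives the same result as applying $A$ directly.

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)
P_inv = np.linalg.inv(P)

x = np.array([1.0, 0.0])       # a vector in standard coordinates
x_eigen = P_inv @ x            # 1) change of basis into the eigen basis
x_scaled = D @ x_eigen         # 2) scale each coordinate by lambda_i
x_back = P @ x_scaled          # 3) map back to standard coordinates

print(np.allclose(x_back, A @ x))   # True: same as applying A directly
```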
Advantages
1. A diagonal matrix $D$ can be raised to a power efficiently. Therefore we can compute a matrix power for a matrix $A \in \mathbb{R}^{n \times n}$ via the eigen value decomposition (if it exists), so that
$A^k= ( PDP^{-1})^k= PD^kP^{-1}$
Computing $D^k$ is efficient because we apply the power individually to each diagonal element.
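This identity is easy to verify numerically. A minimal sketch, again assuming the small symmetric example matrix from this section: $D^k$ is computed by raising only the eigenvalues to the power $k$, and the result matches NumPy's direct matrix power.

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
eigvals, P = np.linalg.eig(A)
P_inv = np.linalg.inv(P)
k = 5

Dk = np.diag(eigvals ** k)   # elementwise power of the eigenvalues
Ak = P @ Dk @ P_inv          # A^k = P D^k P^{-1}

print(np.allclose(Ak, np.linalg.matrix_power(A, k)))   # True
```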
2. Assume that the eigen decomposition $A=PDP^{-1}$ exists. Then
$\det(A)=\det(PDP^{-1})=\det(P)\det(D)\det(P^{-1})=\det(D)=\prod_i d_{ii}$
This allows efficient computation of the determinant of $A$.
3. The inverse of $A$ is $A^{-1}=(PDP^{-1})^{-1}=PD^{-1}P^{-1}$
$P^{-1}=P^T$ for orthonormal eigen vectors, and $D^{-1}$ can be found by taking the reciprocals $1/\lambda_i$ of the diagonal entries.
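Both shortcuts can be confirmed numerically. A minimal sketch, assuming the section's symmetric example matrix: the determinant is the product of the eigenvalues, and the inverse is recovered as $PD^{-1}P^{-1}$ with $D^{-1}$ built from reciprocals.

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
eigvals, P = np.linalg.eig(A)

# det(A) equals the product of the eigenvalues
print(np.isclose(np.prod(eigvals), np.linalg.det(A)))   # True

# A^{-1} = P D^{-1} P^{-1}; D^{-1} just takes reciprocals 1/lambda_i
D_inv = np.diag(1.0 / eigvals)
A_inv = P @ D_inv @ np.linalg.inv(P)
print(np.allclose(A_inv, np.linalg.inv(A)))             # True
```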
Eg: let us compute the eigen decomposition of (university question)
$A=\begin{bmatrix}2 &1 \\1 & 2
\end{bmatrix}$
step1: Compute the eigen values and eigen vectors
The characteristic polynomial is
$\det(A-\lambda I)=\det\left(\begin{bmatrix}
2-\lambda & 1 \\
1 & 2-\lambda
\end{bmatrix}\right)=(2-\lambda)^2-1=\lambda^2-4\lambda+3=(\lambda-1)(\lambda-3)$
Therefore the eigen values of $A$ are $\lambda_1=1$ and $\lambda_2=3$ and the corresponding normalized eigen vectors are
$p_1=\frac{1}{\sqrt{2}}\begin{bmatrix}1 \\
-1
\end{bmatrix}$ and
$p_2=\frac{1}{\sqrt{2}}\begin{bmatrix}1 \\
1
\end{bmatrix}$
step2: Check for existence
The eigen vectors $p_1,p_2$ form a basis of $\mathbb{R}^2$. Therefore $A$ can be diagonalized.
step3: construct the matrix P to diagonalize A
We collect the eigenvectors of $A$ in $P$ so that
$P=[p_1,p_2]=\frac{1}{\sqrt{2}}\begin{bmatrix}1 &1 \\-1 &1
\end{bmatrix}$
import numpy as np
# define matrix
A = np.array([[2, 1], [1, 2]])
print("Original Matrix A")
print(A)
# factorize
values, vectors = np.linalg.eig(A)
# create matrix from eigenvectors
P = vectors
print("Normalized Eigen Vectors P")
print(P)
# create inverse of eigenvectors matrix
Pt = np.linalg.inv(P)
print("Inverse of P which is P^T")
print(Pt)
# create diagonal matrix from eigenvalues
D = np.diag(values)
print("diagonal Matrix D with eigen values on the diagonal")
print(D)
# reconstruct the original matrix
print("reconstructed original matrix PDP^T")
B = P.dot(D).dot(Pt)
print(B)
Eg: Compute the eigen decomposition of (university question)
$A=\begin{bmatrix}
1 & -2 & 0 \\
-2 & 0 & 2 \\
0 & 2 & -1 \\
\end{bmatrix}$
The eigen values are $\lambda_1=0,\lambda_2=3,\lambda_3=-3$
The corresponding eigen vectors (the columns of $P$) are $(2, 1, 2), (-2, 2, 1), (1, 2, -2)$
So the diagonalized matrix is $D=P^{-1}AP$
$\begin{bmatrix}
0 & 0 & 0 \\
0 & 3 & 0 \\
0 & 0 & -3 \\
\end{bmatrix}$
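This example can be checked numerically. A minimal sketch, using `np.linalg.eigh` since this $A$ is symmetric (note that `eigh` returns the eigenvalues in ascending order, so the ordering of $D$'s diagonal may differ from the hand computation above).

```python
import numpy as np

A = np.array([[1.0, -2.0, 0.0],
              [-2.0, 0.0, 2.0],
              [0.0, 2.0, -1.0]])

# eigh: ascending eigenvalues, orthonormal eigenvectors in the columns of P
eigvals, P = np.linalg.eigh(A)
print(np.allclose(eigvals, [-3.0, 0.0, 3.0]))   # True

# Diagonalization: P^{-1} A P = P^T A P = D for the orthonormal P
D = np.diag(eigvals)
print(np.allclose(P.T @ A @ P, D))              # True
```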