Eigendecomposition can be performed only on a square matrix.
Let A ∈ R^(n×n), let λ1, λ2, …, λn be a set of scalars, and let p1, p2, …, pn be a set of vectors in R^n. We define P = [p1, p2, …, pn] and let D ∈ R^(n×n) be the diagonal matrix with diagonal entries λ1, λ2, …, λn. Then we can show that
A = P D P^(-1)
if and only if λ1, λ2, …, λn are the eigenvalues of A and p1, p2, …, pn are the corresponding eigenvectors of A. This requires P to be invertible, i.e., P must have full rank. In other words, we need n linearly independent eigenvectors p1, …, pn, which then form a basis of R^n.
Theorem (Eigendecomposition). A square matrix A ∈ R^(n×n) can be factored into
A = P D P^(-1)
where P ∈ R^(n×n) and D is a diagonal matrix whose diagonal entries are the eigenvalues of A, if and only if the eigenvectors of A form a basis of R^n.
This theorem implies that only a non-defective matrix can be diagonalized and that the columns of P are the n eigenvectors of A.
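For instance, the matrix [1 1; 0 1] is defective: its only eigenvalue is 1, with a single linearly independent eigenvector, so no invertible matrix P of eigenvectors exists and it cannot be diagonalized. A quick numpy check (a small sketch, not part of the original example) makes this visible:

import numpy as np

# a defective matrix: eigenvalue 1 has algebraic multiplicity 2
# but only one linearly independent eigenvector
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
values, vectors = np.linalg.eig(A)
print(values)                          # [1. 1.]
# the two eigenvector columns are (numerically) linearly dependent,
# so the matrix of eigenvectors is not invertible
print(np.linalg.matrix_rank(vectors))  # 1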
A symmetric matrix S ∈ R^(n×n) can always be diagonalized.
The spectral theorem states that we can find orthonormal eigenvectors. This makes P an orthogonal matrix (P^(-1) = P^T), so that A = P D P^T and D = P^T A P.
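As a minimal sketch of this, numpy's np.linalg.eigh (intended for symmetric/Hermitian matrices) returns orthonormal eigenvectors, so the reconstruction A = P D P^T can be checked directly (here on the 2×2 matrix used in the worked example below):

import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])
values, P = np.linalg.eigh(S)   # orthonormal eigenvectors for a symmetric matrix
D = np.diag(values)

print(np.allclose(P.T @ P, np.eye(2)))  # True: P is orthogonal, so P^(-1) = P^T
print(np.allclose(P @ D @ P.T, S))      # True: S = P D P^T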
Geometric Intuition
We can interpret the eigendecomposition of a matrix as follows. Let A be the transformation matrix of a linear mapping with respect to the standard basis. P^(-1) performs a basis change from the standard basis into the eigenbasis. The diagonal matrix D then scales the vectors along these axes by the eigenvalues λi. Finally, P transforms these scaled vectors back into standard/canonical coordinates.
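The three steps can be traced numerically. The sketch below (an illustration using the example matrix from later in this post and an arbitrarily chosen vector x) applies the basis change, the scaling, and the basis change back, and reproduces A x:

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
values, P = np.linalg.eig(A)
D = np.diag(values)

x = np.array([1.0, 2.0])       # an arbitrary vector in standard coordinates
y = np.linalg.inv(P) @ x       # 1. change of basis into the eigenbasis
y = D @ y                      # 2. scale along the eigen-axes by the eigenvalues
y = P @ y                      # 3. change of basis back to standard coordinates
print(np.allclose(y, A @ x))   # True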
Advantages
1. A diagonal matrix D can be raised to a power efficiently. Therefore we can compute a power of a matrix A ∈ R^(n×n) via its eigendecomposition (if it exists), so that
A^k = (P D P^(-1))^k = P D^k P^(-1)
Computing D^k is efficient because we apply the operation individually to each diagonal element (see the sketch after this list).
2. Assume that the eigendecomposition A = P D P^(-1) exists. Then
det(A) = det(P D P^(-1)) = det(P) det(D) det(P^(-1)) = det(D) = ∏_i d_ii
This allows efficient computation of the determinant of A.
3. The inverse of A is A^(-1) = (P D P^(-1))^(-1) = P D^(-1) P^(-1).
For orthonormal eigenvectors, P^(-1) = P^T, and D^(-1) is obtained by taking the reciprocals 1/λi of the diagonal entries.
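A minimal numpy sketch of all three shortcuts, using the symmetric example matrix from below (so the eigendecomposition is guaranteed to exist and no eigenvalue is zero):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
values, P = np.linalg.eig(A)
P_inv = np.linalg.inv(P)

# 1. matrix power: A^k = P D^k P^(-1)
k = 5
Ak = P @ np.diag(values**k) @ P_inv
print(np.allclose(Ak, np.linalg.matrix_power(A, k)))   # True

# 2. determinant: det(A) = det(D) = product of the eigenvalues
print(np.isclose(np.prod(values), np.linalg.det(A)))   # True

# 3. inverse: A^(-1) = P D^(-1) P^(-1), where D^(-1) = diag(1/λi)
A_inv = P @ np.diag(1.0 / values) @ P_inv
print(np.allclose(A_inv, np.linalg.inv(A)))            # True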
Eg: Let us compute the eigendecomposition of (university question)
A = [2 1; 1 2]
Step 1: Compute the eigenvalues and eigenvectors
The characteristic polynomial is det(A − λI) = (2 − λ)^2 − 1 = (λ − 1)(λ − 3). Therefore the eigenvalues of A are λ1 = 1 and λ2 = 3, and the corresponding normalized eigenvectors are
p1 = (1/√2) [1, −1]^T and
p2 = (1/√2) [1, 1]^T
Step 2: Check for existence
The eigenvectors p1, p2 form a basis of R^2. Therefore A can be diagonalized.
Step 3: Construct the matrix P to diagonalize A
We collect the eigenvectors of A in P so that
P = [p1, p2] = (1/√2) [1 1; −1 1]
With D = diag(1, 3) we then have A = P D P^(-1) = P D P^T.

The same computation can be done with numpy:

import numpy as np
# define matrix
A = np.array([[2, 1], [1, 2]])
print("Original Matrix A")
print(A)
# factorize
values, vectors = np.linalg.eig(A)
# create matrix from eigenvectors
P = vectors
print("Normalized Eigen Vectors P")
print(P)
# create inverse of eigenvectors matrix
Pt = np.linalg.inv(P)
print("Inverse of P which is P^T")
print(Pt)
# create diagonal matrix from eigenvalues
D = np.diag(values)
print("diagonal Matrix D with eigen values on the diagonal")
print(D)
# reconstruct the original matrix
print("reconstructed original matrix PDP^T")
B = P.dot(D).dot(Pt)
print(B)
Output:
The script prints the original matrix A, the normalized eigenvector matrix P, its inverse P^(-1) (which equals P^T here), the diagonal matrix D of eigenvalues, and the reconstruction P D P^(-1), which recovers A.
Another university question: a 3×3 matrix A has eigenvalues λ1 = 0, λ2 = 3, λ3 = −3, with corresponding eigenvectors (−1, 2, −2), (−2, 1, 2), (2, 2, 1). Collecting the eigenvectors as the columns of P, the diagonalized matrix P^(-1) A P is the diagonal matrix of eigenvalues
[−3 0 0; 0 0 0; 0 0 3]