
1.8 Rank of a Matrix


The number of linearly independent columns of a matrix $A \in \mathbb{R}^{m \times n}$ equals the number of linearly independent rows and is called the rank of $A$, denoted by $rk(A)$.

The rank of a matrix has some important properties (a few of them are verified in the code sketch after this list):

  • $rk(A) = rk(A^T)$, i.e., the column rank equals the row rank.
  • The columns of $A \in \mathbb{R}^{m \times n}$ span a subspace $U \subseteq \mathbb{R}^m$ with $dim(U) = rk(A)$. Later we will call this subspace the image or range. A basis of $U$ can be found by applying Gaussian elimination to $A$ to identify the pivot columns.
  • The rows of $A \in \mathbb{R}^{m \times n}$ span a subspace $W \subseteq \mathbb{R}^n$ with $dim(W) = rk(A)$. A basis of $W$ can be found by applying Gaussian elimination to $A^T$.
  • For all $A \in \mathbb{R}^{n \times n}$ it holds that $A$ is regular (invertible) if and only if $rk(A) = n$.
  • For all $A \in \mathbb{R}^{m \times n}$ and all $b \in \mathbb{R}^m$ it holds that the linear equation system $Ax = b$ can be solved if and only if $rk(A) = rk(A|b)$, where $A|b$ denotes the augmented system.
  • For $A \in \mathbb{R}^{m \times n}$ the subspace $U \subseteq \mathbb{R}^n$ of solutions for $Ax = 0$ possesses dimension $n - rk(A)$. Later, we will call this subspace the kernel or the null space.
  • A matrix $A \in \mathbb{R}^{m \times n}$ has full rank if its rank equals the largest possible rank for a matrix of the same dimensions. This means that the rank of a full-rank matrix is the lesser of the number of rows and columns, i.e., $rk(A) = \min(m, n)$. A matrix is said to be rank deficient if it does not have full rank.
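As a quick check, a few of these properties can be verified in SymPy. The following is a minimal sketch; the matrix and the right-hand side $b$ are chosen purely for illustration.
import sympy as sp

# illustrative matrix (the same one used in the example below)
A = sp.Matrix([[1, 0, 1],
               [0, 1, 1],
               [0, 0, 0]])

print(A.rank())            # 2
print(A.T.rank())          # 2 -> rk(A) = rk(A^T)
print(len(A.nullspace()))  # 1 -> kernel dimension = n - rk(A) = 3 - 2

# Solvability: Ax = b has a solution iff rk(A) = rk(A|b)
b = sp.Matrix([1, 1, 0])      # illustrative right-hand side
Ab = A.row_join(b)            # augmented matrix (A|b)
print(A.rank() == Ab.rank())  # True -> solvable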
Example:
$A=\displaystyle \left[\begin{matrix}1 & 0 & 1\\0 & 1 & 1\\0 & 0 & 0\end{matrix}\right]$
$A$ has two linearly independent rows/columns, so $rk(A) = 2$.
Example: Find the rank of
$A=\displaystyle \left[\begin{matrix}1 & 2 & 1\\-2 & -3 & 1\\3 & 5 & 0\end{matrix}\right]$
We can use Gaussian elimination to determine the rank:
import sympy as sp

# build the coefficient matrix from its rows
row1 = [1, 2, 1]
row2 = [-2, -3, 1]
row3 = [3, 5, 0]
M = sp.Matrix([row1, row2, row3])
print("Coefficient Matrix")
display(M)
print("Echelon Form")
display(M.echelon_form())

O/P
Coefficient Matrix
$\displaystyle \left[\begin{matrix}1 & 2 & 1\\-2 & -3 & 1\\3 & 5 & 0\end{matrix}\right]$
Echelon Form
$\displaystyle \left[\begin{matrix}1 & 2 & 1\\0 & 1 & 3\\0 & 0 & 0\end{matrix}\right]$
Here, we see that the number of linearly independent rows and columns is 2, so $rk(A) = 2$.
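SymPy can also report the rank directly, without inspecting the echelon form; a one-line check using the matrix M defined above:
print(M.rank())  # 2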


An intuition for rank is to consider it the number of dimensions spanned by all of the vectors within a matrix. For example, a rank of 0 suggests all vectors span a point, a rank of 1 suggests all vectors span a line, and a rank of 2 suggests all vectors span a two-dimensional plane. The rank is often estimated numerically using a matrix decomposition method; a common approach is the Singular-Value Decomposition, or SVD for short. NumPy provides the matrix_rank() function for calculating the rank of an array, and it uses the SVD method to estimate the rank. The example below demonstrates calculating the rank of a vector with scalar values and of another vector with all zero values.
# vector rank
from numpy import array
from numpy.linalg import matrix_rank

# rank of a nonzero vector
v1 = array([1, 2, 3])
print("v1")
print(v1)
vr1 = matrix_rank(v1)
print("Rank of v1")
print(vr1)

# rank of the zero vector
v2 = array([0, 0, 0, 0, 0])
print("v2")
print(v2)
vr2 = matrix_rank(v2)
print("Rank of v2")
print(vr2)
o/p:
v1
[1 2 3]
Rank of v1
1
v2
[0 0 0 0 0]
Rank of v2
0
# matrix rank
from numpy import array
from numpy.linalg import matrix_rank
# rank 0
M0 = array([[0,0],[0,0]])
mr0 = matrix_rank(M0)
print("M0 and Rank of M0")
print(M0)
print(mr0)
# rank 1
M1 = array([[1,2],[1,2]])
mr1 = matrix_rank(M1)
print("M1 and Rank of M1")
print(M1)
print(mr1)
# rank 2
M2 = array([[1,2],[3,4]])
mr2 = matrix_rank(M2)
print("M2 and Rank of M2")
print(M2)
print(mr2)
o/p:
M0 and Rank of M0
[[0 0]
[0 0]]
0
M1 and Rank of M1
[[1 2]
[1 2]]
1
M2 and Rank of M2
[[1 2]
[3 4]]
2
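To see roughly what matrix_rank() does internally, the following sketch estimates the rank from the SVD by counting the singular values above a small tolerance; the tolerance below mirrors NumPy's documented default.
import numpy as np

# SVD-based rank estimate: count singular values above a tolerance
A = np.array([[1, 2], [1, 2]])
s = np.linalg.svd(A, compute_uv=False)  # singular values
tol = s.max() * max(A.shape) * np.finfo(float).eps
print(np.sum(s > tol))  # 1, same as matrix_rank(A)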

Rank and determinant using SymPy

import sympy

M = sympy.Matrix([[1, 2], [4, 5]])
display(M)
print("determinant")
print(sympy.det(M))
print("rank")
print(M.rank())

o/p
$\displaystyle \left[\begin{matrix}1 & 2\\4 & 5\end{matrix}\right]$
determinant
-3 
rank 
2
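For contrast, a square matrix with zero determinant is rank deficient, consistent with the property that $A \in \mathbb{R}^{n \times n}$ is invertible if and only if $rk(A) = n$. A minimal sketch with an illustrative singular matrix:
S = sympy.Matrix([[1, 2], [2, 4]])  # second row is twice the first
print(sympy.det(S))  # 0
print(S.rank())      # 1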

Example (University question)
Find the rank of the matrix
$A=\begin{bmatrix}
1 & 2 & 3\\
2 & 3 & 4\\
3 & 5 & 7
\end{bmatrix}$
Perform elementary row operations to convert the matrix into echelon form:
$R_2=R_2-2R_1$
$R_3=R_3-3R_1$
$A \sim \begin{bmatrix}
1 & 2 & 3\\
0 & -1 & -2\\
0 & -1 & -2
\end{bmatrix}$

$R_3=R_3-R_2$
$A \sim \begin{bmatrix}
1 & 2 & 3\\
0 & -1 & -2\\
0 & 0 & 0
\end{bmatrix}$

From the echelon form, the number of nonzero rows is 2, so the rank is 2.
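The hand computation can be verified with SymPy; a minimal sketch:
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [2, 3, 4],
               [3, 5, 7]])
display(A.echelon_form())  # matches the echelon form obtained by hand
print(A.rank())            # 2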
