
5.7 Quadratic Programming

 

Consider the case of a convex quadratic objective function subject to affine constraints, i.e.,

$\min_{x \in \mathbb{R}^d} \quad \frac{1}{2}x^TQx+c^Tx$

subject to $Ax \le b$

where $A \in \mathbb{R}^{m \times d}$, $b \in \mathbb{R}^m$ and $c \in \mathbb{R}^d$. The square symmetric matrix $Q \in \mathbb{R}^{d \times d}$ is positive definite, and therefore the objective function is convex. This is known as a quadratic program. Observe that it has $d$ variables and $m$ linear inequality constraints.
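As a quick illustration (this snippet is an addition to the notes and assumes the cvxpy package is available), a quadratic program in this standard form can be handed directly to an off-the-shelf convex solver; the data below are randomly generated placeholders.

```python
# Minimal sketch of the general QP  min_x 0.5 x^T Q x + c^T x  s.t.  A x <= b,
# using cvxpy (an assumed dependency, not part of these notes).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
d, m = 3, 5                               # d variables, m linear constraints
M = rng.standard_normal((d, d))
Q = M.T @ M + np.eye(d)                   # random symmetric positive definite matrix
c = rng.standard_normal(d)
A = rng.standard_normal((m, d))
b = np.ones(m)                            # x = 0 is feasible, so the QP is solvable

x = cp.Variable(d)
objective = cp.Minimize(0.5 * cp.quad_form(x, Q) + c @ x)
problem = cp.Problem(objective, [A @ x <= b])
problem.solve()

print("optimal x:", x.value)
print("optimal value:", problem.value)
```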

Consider the following quadratic program in two variables:

$\min_{x \in \mathbb{R}^2} \quad \frac{1}{2}\begin{bmatrix} x_1\\ x_2 \end{bmatrix}^T \begin{bmatrix} 2 & 1\\ 1 & 4 \end{bmatrix} \begin{bmatrix} x_1\\ x_2 \end{bmatrix} + \begin{bmatrix} 5\\ 3 \end{bmatrix}^T \begin{bmatrix} x_1\\ x_2 \end{bmatrix}$

subject to

$\begin{bmatrix} 1 & 0\\ -1 & 0\\ 0 & 1\\ 0 & -1 \end{bmatrix} \begin{bmatrix} x_1\\ x_2 \end{bmatrix} \le \begin{bmatrix} 1\\ 1\\ 1\\ 1 \end{bmatrix}$
This is illustrated in the figure below.

The objective function is quadratic with a positive definite matrix $Q$, resulting in elliptical contour lines. The optimal solution must lie in the shaded (feasible) region, and is indicated by the star.
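As a small sanity check (an addition to the notes, not part of the original example), plain numpy shows that the unconstrained minimizer $-Q^{-1}c$ lies outside the feasible box, so the constrained minimizer sits on the boundary $x_1 = -1$:

```python
# Sanity check for the two-variable example (illustrative addition).
import numpy as np

Q = np.array([[2.0, 1.0], [1.0, 4.0]])
c = np.array([5.0, 3.0])

f = lambda x: 0.5 * x @ Q @ x + c @ x

# Unconstrained minimizer solves Q x = -c.
x_unc = np.linalg.solve(Q, -c)
print("unconstrained minimizer:", x_unc)   # approx [-2.43, -0.14], violates x1 >= -1

# Fix the active constraint x1 = -1 and minimize over x2:
# d/dx2 f(-1, x2) = -1 + 4*x2 + 3 = 0  =>  x2 = -0.5
x_star = np.array([-1.0, -0.5])
print("constrained minimizer (on the face x1 = -1):", x_star, "value:", f(x_star))
```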

The Lagrangian is given by

$L(x,\lambda)=\frac{1}{2}x^TQx+c^Tx+\lambda^T(Ax-b)$
$\quad=\frac{1}{2}x^TQx+(c+A^T\lambda)^Tx-\lambda^Tb$

where again, we have rearranged the terms. Taking the derivative of $L(x,\lambda)$ with respect to $x$ and setting it to zero gives

$Qx+(c+A^T\lambda)=0$

Assuming that $Q$ is invertible, we get
$x = -Q^{-1}(c + A^T\lambda)$

Substituting this expression for $x$ into the primal Lagrangian $L(x,\lambda)$, we get the dual Lagrangian
$D(\lambda)=-\frac{1}{2}(c+A^T\lambda)^TQ^{-1}(c+A^T\lambda)-\lambda^Tb$

Therefore the dual optimization problem is given by

$\max_{\lambda \in \mathbb{R}^m}\quad  -\frac{1}{2}(c+A^T\lambda)^TQ^{-1}(c+A^T\lambda)-\lambda^Tb$

subject to $\lambda \ge 0$
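To make the duality concrete, here is a short sketch (an addition to the notes; numpy and scipy are assumed dependencies) that maximizes $D(\lambda)$ over $\lambda \ge 0$ for the two-variable example above and then recovers the primal point via $x = -Q^{-1}(c+A^T\lambda)$:

```python
# Sketch: maximize the dual D(lambda) of the two-variable example over lambda >= 0,
# then recover the primal solution x = -Q^{-1}(c + A^T lambda).
import numpy as np
from scipy.optimize import minimize

Q = np.array([[2.0, 1.0], [1.0, 4.0]])
c = np.array([5.0, 3.0])
A = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
b = np.ones(4)
Q_inv = np.linalg.inv(Q)

def neg_dual(lam):
    v = c + A.T @ lam
    return 0.5 * v @ Q_inv @ v + lam @ b      # this is -D(lambda)

res = minimize(neg_dual, x0=np.zeros(4), bounds=[(0, None)] * 4)
lam_star = res.x
x_star = -Q_inv @ (c + A.T @ lam_star)
print("lambda*:", lam_star)
print("recovered x*:", x_star)                # should be close to the primal minimizer
```

Only the multiplier of the active constraint $-x_1 \le 1$ should come out (approximately) nonzero, consistent with complementary slackness.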

Example: Provide necessary and sufficient conditions under which a quadratic optimization problem can be written as a linear least squares problem.

Solution
Consider $\min_{x \in \mathbb{R}^n} \frac{1}{2} x^TQx + g^T x$. The necessary and sufficient condition is that $Q$ is positive semidefinite (PSD) and $g \in Ran(Q)$. Indeed, if $Q$ is PSD, then $Q$ has a Cholesky-type factorization $Q = LL^T$ where $L \in \mathbb{R}^{n \times k}$ with $k = rank(Q)$. Since $g \in Ran(Q) = Ran(L)$, there is a vector $b \in \mathbb{R}^k$ such that $-g = Lb$.
Then
$\frac{1}{2} x^TQx + g^T x = \frac{1}{2} x^TLL^T x - (Lb)^T x$
$\quad = \frac{1}{2} (L^T x)^T (L^T x) - b^T (L^T x) + \frac{1}{2} b^T b - \frac{1}{2} b^T b$
$\quad = \frac{1}{2} \|L^T x - b\|^2_2 - \frac{1}{2} b^T b$
so minimizing the quadratic is the linear least squares problem $\min_x \frac{1}{2}\|L^T x - b\|_2^2$ up to the additive constant $-\frac{1}{2}b^Tb$.
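This reduction can be checked numerically; the sketch below (an addition to the notes) builds a rank-deficient PSD matrix $Q$, chooses $g \in Ran(Q)$, factors $Q = LL^T$ via an eigendecomposition, and verifies that the least squares solution of $L^Tx \approx b$ satisfies $Qx = -g$, the stationarity condition of the original quadratic:

```python
# Numerical check of the reduction to linear least squares (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)
n, k = 5, 3
M = rng.standard_normal((n, k))
Q = M @ M.T                        # PSD with rank k < n
g = Q @ rng.standard_normal(n)     # guarantees g in Ran(Q)

# Factor Q = L L^T with L of shape (n, rank) from the eigendecomposition.
w, V = np.linalg.eigh(Q)
keep = w > 1e-10
L = V[:, keep] * np.sqrt(w[keep])

# -g = L b has a solution because g lies in Ran(Q) = Ran(L).
b = np.linalg.lstsq(L, -g, rcond=None)[0]

# Least squares problem  min_x ||L^T x - b||_2^2.
x_ls = np.linalg.lstsq(L.T, b, rcond=None)[0]

# Its normal equations L L^T x = L b reproduce Q x = -g,
# the condition for a minimizer of 0.5 x^T Q x + g^T x.
print("Q x == -g:", np.allclose(Q @ x_ls, -g))
```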
