
Showing posts from October, 2021

Orthogonal Subspace

We said that two vectors $v$ and $w$ are orthogonal if their dot product, $v \cdot w$, is 0. In $\mathbb{R}^2$ or $\mathbb{R}^3$ this matches our geometric understanding of orthogonality, and in higher dimensions the idea still applies, even though we cannot visualize it. Consider the vectors $a=\begin{pmatrix} 1 \\ 2 \\ 3 \\ 4 \end{pmatrix}, \quad b=\begin{pmatrix} 2 \\ 1 \\ 0 \\ -1 \end{pmatrix}$. These two vectors are orthogonal because their dot product is $(1)(2)+(2)(1)+(3)(0)+(4)(-1)=0$. Now we can extend this definition to subspaces of a vector space. Definition: Two subspaces $V$ and $W$ of a vector space are orthogonal if every vector $v \in V$ is perpendicular to every vector $w \in W$. As a simple example, in $\mathbb{R}^2$ the span of $\begin{pmatrix}1 \\ 0 \end{pmatrix}$ is the set of all vectors of the form $\begin{pmatrix}c \\ 0 \end{pmatrix}$, where $c$ is some real number, while the span of $\begin{pmatrix}0 \\ 1 \end{pmatrix}$ is the set of all vectors of the form $\begin{pmatrix}0 \\ d \end{pmatrix}$, where $d$ is some real number.
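A quick numerical check of this example (a minimal sketch using NumPy; the names `a` and `b` refer to the vectors above):

```python
import numpy as np

# The two four-dimensional vectors from the example above.
a = np.array([1, 2, 3, 4])
b = np.array([2, 1, 0, -1])

# Two vectors are orthogonal when their dot product is zero.
print(np.dot(a, b))  # prints 0, so a and b are orthogonal
```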

5.7 Quadratic Programming

Consider the case of a convex quadratic objective function where the constraints are affine, i.e., $\min_{x \in \mathbb{R}^d} \quad \frac{1}{2}x^TQx+c^Tx$ subject to $Ax \le b$, where $A \in \mathbb{R}^{m \times d}$, $b \in \mathbb{R}^m$ and $c \in \mathbb{R}^d$. The square symmetric matrix $Q \in \mathbb{R}^{d \times d}$ is positive definite, and therefore the objective function is convex. This is known as a quadratic program. Observe that it has $d$ variables and $m$ linear constraints. Consider the quadratic program in two variables $\min_{x \in \mathbb{R}^2} \quad \frac{1}{2}\begin{bmatrix} x_1\\ x_2 \end{bmatrix}^T\begin{bmatrix} 2 & 1\\ 1 & 4 \end{bmatrix} \begin{bmatrix} x_1\\ x_2 \end{bmatrix}+\begin{bmatrix} 5\\ 3 \end{bmatrix}^T\begin{bmatrix} x_1\\ x_2 \end{bmatrix}$ subject to $\begin{bmatrix} 1 & 0\\ -1 & 0\\ 0 & 1\\ 0 & -1 \end{bmatrix}\begin{bmatrix} x_1\\ x_2 \end{bmatrix} \le \begin{bmatrix} 1\\ 1\\ 1\\ 1 \end{bmatrix}$. This is illustrated in the figure.
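A small numerical sketch of this two-variable example, assuming SciPy is available. Note that the constraint matrix above is just the box $-1 \le x_1 \le 1$, $-1 \le x_2 \le 1$, so it can be passed to the solver directly as bounds:

```python
import numpy as np
from scipy.optimize import minimize

Q = np.array([[2.0, 1.0], [1.0, 4.0]])
c = np.array([5.0, 3.0])

def objective(x):
    # (1/2) x^T Q x + c^T x
    return 0.5 * x @ Q @ x + c @ x

# The four linear constraints A x <= b from the example are equivalent to
# the box -1 <= x_1 <= 1, -1 <= x_2 <= 1, expressed here as bounds.
result = minimize(objective, x0=np.zeros(2), bounds=[(-1.0, 1.0), (-1.0, 1.0)])
print(result.x, result.fun)
```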

5.6 Linear Programming

In mathematics, linear programming is a method of optimising operations subject to constraints. The main objective of linear programming is to maximise or minimise a numerical value. It consists of linear functions that are subject to constraints in the form of linear equations or inequalities. Linear programming is considered an important technique for finding the optimum use of resources. The term "linear programming" consists of the two words "linear" and "programming": the word "linear" refers to relationships between variables of degree one, and the word "programming" refers to the process of selecting the best solution from various alternatives. Linear programming is widely used in mathematics and in other fields such as economics, business, telecommunications and manufacturing. In this article, we discuss the definition of linear programming, its components, and different methods of solving linear programming problems. A small worked example is sketched below.
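As an illustration, here is a minimal sketch of a small linear program solved with SciPy. The problem data are made up for illustration: maximise $3x_1 + 2x_2$ subject to $x_1 + x_2 \le 4$, $x_1 \le 3$ and $x_1, x_2 \ge 0$:

```python
from scipy.optimize import linprog

# linprog minimises, so we negate the objective to maximise 3*x1 + 2*x2.
c = [-3, -2]

# Inequality constraints A_ub @ x <= b_ub:  x1 + x2 <= 4  and  x1 <= 3.
A_ub = [[1, 1], [1, 0]]
b_ub = [4, 3]

# Both variables are non-negative.
result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(result.x)  # the optimal (x1, x2)
```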

5.5 Convex Optimization

Convex optimization problems are a particularly useful class of optimization problems, because we can guarantee global optimality. When $f(\cdot)$ is a convex function, and when the constraints involving $g(\cdot)$ and $h(\cdot)$ define convex sets, this is called a convex optimization problem. In this setting we have strong duality: the optimal value of the dual problem is the same as the optimal value of the primal problem. A set $C$ is a convex set if for any $x, y \in C$ and for any scalar $\theta$ with $0 \le \theta \le 1$, we have $\theta x + (1 - \theta)y \in C$. In other words, convex sets are sets such that the straight line segment connecting any two elements of the set lies inside the set. Figures 7.5 and 7.6 illustrate convex and non-convex sets, respectively. Convex functions are functions such that a straight line between any two points of the function lies above the function. Definition: Let $f : \mathbb{R}^D \to \mathbb{R}$ be a function whose domain is a convex set. The function $f$ is a convex function if for all $x, y$ in its domain and for any scalar $\theta$ with $0 \le \theta \le 1$, we have $f(\theta x + (1-\theta)y) \le \theta f(x) + (1-\theta)f(y)$.
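A quick numerical sanity check of this definition (a minimal sketch; the function $f(x) = x^2$ below is just an illustrative example of a convex function):

```python
import numpy as np

def f(x):
    # A simple convex function used only for illustration.
    return x ** 2

rng = np.random.default_rng(0)
x, y = rng.uniform(-5, 5, size=2)

# Check f(theta*x + (1-theta)*y) <= theta*f(x) + (1-theta)*f(y) for several theta.
for theta in np.linspace(0, 1, 11):
    lhs = f(theta * x + (1 - theta) * y)
    rhs = theta * f(x) + (1 - theta) * f(y)
    assert lhs <= rhs + 1e-12  # holds for every theta when f is convex
```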

5.4 Constrained Optimization and Lagrange Multipliers

In unconstrained optimization we considered the problem of finding the minimum of a function, $\min_x f(x)$, where $f : \mathbb{R}^D \to \mathbb{R}$. In constrained optimization we have additional constraints. That is, for real-valued functions $g_i : \mathbb{R}^D \to \mathbb{R}$ for $i = 1,\ldots,m$, we consider the constrained optimization problem $\min_x f(x)$ subject to $g_i(x) \le 0$ for all $i = 1,\ldots,m$. In general, the functions $f$ and $g_i$ could be non-convex. One obvious, but not very practical, way of converting the constrained problem into an unconstrained one is to use an indicator function $J(x)=f(x)+ \sum_{i=1}^{m} \mathbf{1}(g_i(x))$, where $\mathbf{1}(z)$ is an infinite step function $\mathbf{1}(z)=\begin{cases} 0 & \text{if } z \le 0\\ \infty & \text{otherwise} \end{cases}$. This gives an infinite penalty whenever a constraint is not satisfied and hence provides the same solution. However, this infinite step function is equally difficult to optimize. We can overcome this difficulty by introducing Lagrange multipliers.
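As a small illustration of the constrained form $\min_x f(x)$ subject to $g(x) \le 0$, here is a sketch using SciPy's SLSQP solver; the particular $f$ and $g$ below are made up for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Objective: squared distance from the point (2, 2).
    return (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2

def g(x):
    # Constraint g(x) <= 0: the unit disk x1^2 + x2^2 - 1 <= 0.
    return x[0] ** 2 + x[1] ** 2 - 1.0

# SciPy expects inequality constraints in the form fun(x) >= 0, so we pass -g.
constraints = [{"type": "ineq", "fun": lambda x: -g(x)}]

result = minimize(f, x0=np.zeros(2), method="SLSQP", constraints=constraints)
print(result.x)  # constrained minimizer, which lies on the boundary of the disk
```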