In unconstrained optimization we considered the problem of finding the minimum of a function, $\min_x f(x)$, where $f : \mathbb{R}^D \to \mathbb{R}$. In constrained optimization we have additional constraints. That is, for real-valued functions $g_i : \mathbb{R}^D \to \mathbb{R}$, $i = 1,\ldots,m$, we consider the constrained optimization problem $\min_x f(x)$ subject to $g_i(x) \le 0$ for all $i = 1,\ldots,m$. In general, the functions $f$ and $g_i$ could be non-convex.

One obvious, but not very practical, way of converting the constrained problem into an unconstrained one is to use an indicator function, $J(x) = f(x) + \sum_{i=1}^{m} \mathbf{1}(g_i(x))$, where $\mathbf{1}(z)$ is an infinite step function $\mathbf{1}(z) = \begin{cases} 0 & \text{if } z \le 0 \\ \infty & \text{otherwise.} \end{cases}$ This gives an infinite penalty whenever a constraint is violated, so the unconstrained problem has the same solution as the constrained one. However, this infinite step function is itself equally difficult to optimize.
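Since this blog works through the material in Python, here is a minimal sketch of the idea using `scipy.optimize`. The particular objective $f$, the single constraint $g$, and the starting point are illustrative assumptions, not part of the text above. The sketch also shows why $J(x)$ is impractical: the $\infty$ penalty makes $J$ discontinuous, so in practice we hand the constraint to a constrained solver instead. Note that SciPy's `'ineq'` convention is `fun(x) >= 0`, so our $g_i(x) \le 0$ constraint is passed as $-g(x) \ge 0$.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative example (chosen for this sketch, not from the text):
# minimize f(x) = (x1 - 1)^2 + (x2 - 2.5)^2
# subject to g(x) = x1 + x2 - 2 <= 0

def f(x):
    return (x[0] - 1)**2 + (x[1] - 2.5)**2

def g(x):
    return x[0] + x[1] - 2  # constraint satisfied when g(x) <= 0

# Indicator-function reformulation J(x) = f(x) + 1(g(x)):
def J(x):
    return f(x) + (0.0 if g(x) <= 0 else np.inf)

print(J(np.array([0.0, 0.0])))  # feasible point  -> finite value 7.25
print(J(np.array([2.0, 2.0])))  # infeasible point -> inf

# J is discontinuous, so gradient-based solvers cannot use it directly.
# Instead, pass the constraint to the solver (SciPy wants fun(x) >= 0):
res = minimize(f, x0=np.array([0.0, 0.0]),
               constraints=[{'type': 'ineq', 'fun': lambda x: -g(x)}])
print(res.x)  # constrained minimizer, approx (0.25, 1.75), on x1 + x2 = 2
```

The unconstrained minimizer $(1, 2.5)$ violates $g(x) \le 0$, so the solver returns the closest feasible point on the boundary $x_1 + x_2 = 2$. The next step in the theory is to replace the stiff step function with a smoother relaxation via Lagrange multipliers.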
This blog is written for the following two courses of KTU using Python: CST284 Mathematics for Machine Learning (KTU Minor course) and CST294 Computational Fundamentals for Machine Learning (KTU Honors course). Queries can be sent to Dr Binu V P, 9847390760.