
5.6 Linear Programming


In mathematics, linear programming is a method of optimising an objective subject to constraints. The main objective of linear programming is to maximise or minimise a numerical value. It deals with linear functions subject to constraints in the form of linear equations or inequalities. Linear programming is an important technique for finding the optimum utilisation of resources. The term “linear programming” consists of two words, linear and programming. The word “linear” refers to relationships between variables of degree one. The word “programming” refers to the process of selecting the best solution from various alternatives.

Linear programming is widely used in mathematics and in fields such as economics, business, telecommunications, and manufacturing. In this article, let us discuss the definition of linear programming, its components, and different methods to solve linear programming problems.

What is Linear Programming?

Linear programming (LP), or linear optimisation, is the problem of maximising or minimising a linear function subject to linear constraints. The constraints may be equalities or inequalities. Such optimisation problems typically involve quantities such as profit and cost. Linear programming problems are an important class of optimisation problems that help to find the feasible region and optimise the solution so as to attain the highest or lowest value of the function.

In other words, linear programming is an optimisation method for maximising or minimising the objective function of a given mathematical model subject to a set of requirements expressed as linear relationships. The main aim of a linear programming problem is to find the optimal solution.

Linear programming is the method of considering the inequalities relevant to a situation and calculating the best value obtainable under those conditions. Some of the assumptions made while working with linear programming are:
  • The constraints should be expressed in quantitative terms
  • The relationship between the constraints and the objective function should be linear
  • The linear function (i.e., the objective function) is to be optimised
Components of Linear Programming

The basic components of an LP are as follows:
  • Decision variables
  • Constraints
  • Data
  • Objective function

Characteristics of Linear Programming

The following are the main characteristics of a linear programming problem:

Constraints – The limitations on the resources should be expressed in mathematical form.
Objective Function – The objective of the problem should be specified quantitatively.
Linearity – The relationship between two or more variables in the function must be linear; that is, the degree of each variable is one.
Finiteness – There should be a finite number of input and output values. If the function involves infinitely many factors, finding an optimal solution is not feasible.
Non-negativity – The variable values should be positive or zero; they should not be negative.
Decision Variables – The decision variables determine the output and give the ultimate solution of the problem. For any problem, the first step is to identify the decision variables.

Linear Programming Problems

A linear programming problem (LPP) is concerned with finding the optimal value of a given linear function. The optimal value can be either a maximum or a minimum. Here, the given linear function is the objective function. The objective function can contain several variables, which are subject to conditions: it has to satisfy a set of linear inequalities called linear constraints. Linear programming problems can be used to obtain the optimal solution in scenarios such as manufacturing problems, diet problems, transportation problems, allocation problems, and so on.

Methods to Solve Linear Programming Problems

A linear programming problem can be solved using different methods, such as the graphical method, the simplex method, or tools such as R and OpenSolver. Here, we will discuss the two most important techniques, the simplex method and the graphical method, in detail.

Consider the special case when all the functions are linear, i.e.,

$\min_{x \in \mathbb{R}^d} \quad c^Tx$

subject to  $Ax \le b$

where $A \in \mathbb{R}^{m \times d}$ and $b \in \mathbb{R}^m$. This is known as a linear program. It has $d$ variables and $m$ linear constraints. The Lagrangian is given by

$L(x,\lambda)=c^Tx+\lambda^T(Ax-b)$

where $\lambda \in \mathbb{R}^m$ is the vector of non-negative Lagrange multipliers. Rearranging the terms involving $x$ yields

$L(x,\lambda)=(c+A^T\lambda)^Tx-\lambda^Tb$

Taking the derivative of $L(x,\lambda)$ with respect to $x$ and setting it to zero gives us

$c+A^T\lambda=0$

Therefore the dual Lagrangian is $D(\lambda)=-\lambda^Tb$. We would like to maximize $D(\lambda)$. In addition to the constraint arising from the derivative of $L(x,\lambda)$ being zero, we also have $\lambda \ge 0$, resulting in the following dual optimization problem

$\max_{\lambda \in \mathbb{R}^m} \quad -b^T\lambda$

subject to 
$c+A^T\lambda=0$
$\lambda \ge 0$

This is also a linear program, but with $m$ variables. We have the choice of solving the primal or the dual program depending on whether $m$ or $d$ is larger. Recall that $d$ is the number of variables and $m$ is the number of constraints in the primal linear program.

Example:
Consider the following linear program with two variables:

$\min_{x \in \mathbb{R}^2} \quad -\begin{bmatrix}
5\\
3
\end{bmatrix}^T\begin{bmatrix}
x_1\\
x_2
\end{bmatrix}$

subject to
$\begin{bmatrix}
2 & 2\\
2 & -4\\
-2& 1\\
0& -1\\
0 & 1
\end{bmatrix}\begin{bmatrix}
x_1\\
x_2
\end{bmatrix} \le \begin{bmatrix}
33\\
8\\
5\\
-1\\
8
\end{bmatrix}$

The following figure shows this. The objective function is linear, resulting in linear contour lines. The optimal value must lie in the shaded (feasible) region, and is indicated by the star.

We can derive the dual linear program using Lagrange duality.

$L(x,\lambda)=c^Tx+\lambda^T(Ax-b)$

$L(x,\lambda)= -\begin{bmatrix}
5\\
3
\end{bmatrix}^T x + \lambda^T \left(\begin{bmatrix}
2 & 2\\
2 & -4\\
-2& 1\\
0& -1\\
0 & 1
\end{bmatrix} x - \begin{bmatrix}
33\\
8\\
5\\
-1\\
8
\end{bmatrix}\right)$

$L(x,\lambda)=\left( -\begin{bmatrix}
5\\
3
\end{bmatrix}^T + \lambda^T \begin{bmatrix}
2 & 2\\
2 & -4\\
-2& 1\\
0& -1\\
0 & 1
\end{bmatrix} \right) x- \lambda^T\begin{bmatrix}
33\\
8\\
5\\
-1\\
8
\end{bmatrix}$

Differentiate with respect to $x$ and set to zero:
$\nabla_x L=\left( -\begin{bmatrix}
5\\
3
\end{bmatrix}^T + \lambda^T \begin{bmatrix}
2 & 2\\
2 & -4\\
-2& 1\\
0& -1\\
0 & 1
\end{bmatrix} \right)=0$

Then substitute back into the Lagrangian to obtain the dual Lagrangian

$D(\lambda)=- \lambda^T\begin{bmatrix}
33\\
8\\
5\\
-1\\
8
\end{bmatrix}$
The dual optimization problem is therefore

$\max_{\lambda \in \mathbb{R}^5} \quad - \lambda^T\begin{bmatrix}
33\\
8\\
5\\
-1\\
8
\end{bmatrix}$

subject to 

$ -\begin{bmatrix}
5\\
3
\end{bmatrix}^T + \lambda^T \begin{bmatrix}
2 & 2\\
2 & -4\\
-2& 1\\
0& -1\\
0 & 1
\end{bmatrix} =0 \\
\lambda \ge 0$
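As a numerical cross-check, the primal and dual programs above can be solved with an off-the-shelf solver. The sketch below assumes SciPy is available; it solves both problems with `linprog` and verifies that strong duality holds, i.e. that the two optimal values coincide.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([-5.0, -3.0])                  # objective: min c^T x
A = np.array([[ 2.0,  2.0],
              [ 2.0, -4.0],
              [-2.0,  1.0],
              [ 0.0, -1.0],
              [ 0.0,  1.0]])
b = np.array([33.0, 8.0, 5.0, -1.0, 8.0])

# Primal: min c^T x  subject to  A x <= b  (x unrestricted in sign)
primal = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * 2)

# Dual: max -b^T lambda  <=>  min b^T lambda,
# subject to  c + A^T lambda = 0  and  lambda >= 0
dual = linprog(b, A_eq=A.T, b_eq=-c, bounds=[(0, None)] * 5)

print(primal.x, primal.fun)                 # optimal x and primal value
print(dual.x, -dual.fun)                    # lambda and dual value
assert np.isclose(primal.fun, -dual.fun)    # strong duality: the values agree
```

Both values come out to roughly $-74.17$, and the optimal $x$ lands at the vertex where the first two constraints are active.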



Graphical Method

The graphical method is used to optimise a two-variable linear program. If the problem has two decision variables, the graphical method is the best way to find the optimal solution. In this method, the set of inequalities defines the constraints. The inequalities are plotted in the XY-plane, and once all of them are plotted, the intersecting region determines the feasible region. The feasible region provides the optimal solution and also shows all the values our model can take. Let us see an example to understand the concept of linear programming better.


Example Problem using graphical Method

A furniture company produces inexpensive tables and chairs. The production process for each is similar in that both require a certain number of hours of carpentry work and a certain number of labour hours in the painting department. Each table takes 4 hours of carpentry and 2 hours in the painting department. Each chair requires 3 hours of carpentry and 1 hour in the painting department. During the current production period, 240 hours of carpentry time and 100 hours of painting time are available. Each table sold yields a profit of $7$; each chair sold yields a profit of $5$. Find the best combination of tables and chairs to manufacture in order to reach the maximum profit.

The decision variables can be defined as 
$X$ = number of tables to be produced 
$Y$ = number of chairs to be produced.

Now the linear programming (LP) problem can be formulated in terms of $X$, $Y$, and the profit $P$:
maximize $P = 7X + 5Y$ (objective function)
subject to 
$4X + 3Y ≤ 240$ (hours of carpentry constraint)
$2X + Y ≤ 100$ (hours of painting constraint)
$X ≥ 0, Y ≥ 0$ (non-negativity constraint).

Therefore the mathematical formulation of the LPP is:
Maximize: $P = 7X + 5Y$
Subject to: 
 $4X + 3Y ≤ 240$
$2X + Y ≤ 100$
$X ≥ 0 , Y ≥ 0$

To find the optimal solution to this LP using the graphical method, we first identify the region of feasible solutions and the corner points of the feasible region. We then plot the graph, identify the corner points, and evaluate the profit function at each of them to find the maximum.
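The corner-point evaluation can be cross-checked with a solver. A minimal sketch assuming SciPy is available (`linprog` minimises, so the profit is negated):

```python
from scipy.optimize import linprog

# maximize P = 7X + 5Y  <=>  minimize -7X - 5Y
res = linprog(c=[-7, -5],
              A_ub=[[4, 3],    # 4X + 3Y <= 240 (carpentry hours)
                    [2, 1]],   # 2X +  Y <= 100 (painting hours)
              b_ub=[240, 100],
              bounds=[(0, None), (0, None)])  # X, Y >= 0

print(res.x)       # [30. 40.] -> 30 tables, 40 chairs
print(-res.fun)    # 410.0     -> maximum profit
```

The optimum sits at the corner where both constraints are binding: solving $4X+3Y=240$ and $2X+Y=100$ simultaneously gives $X=30$, $Y=40$, and profit $P=7(30)+5(40)=410$.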


Example ( university question)
A manufacturer of furniture makes two products, chairs and tables. These products are processed on two machines, A and B. A chair requires 2 hours on machine A and 6 hours on machine B. A table requires 5 hours on machine A and no time on machine B. There are 16 hours per day available on machine A and 30 hours on machine B. The profit gained by the manufacturer from a chair is Rs. 1 and from a table is Rs. 5. Formulate the problem as an LPP in order to maximize the total profit.

Machine    Chairs (hrs required)    Tables (hrs required)    Available hours
A          2                        5                        16
B          6                        0                        30
Profit     Rs. 1                    Rs. 5


The decision variables can be defined as 
$X$ = number of chairs to be produced 
$Y$ = number of tables to be produced.
Now the linear programming (LP) problem can be formulated in terms of $X$, $Y$, and the profit $P$.
Therefore the mathematical formulation of the LPP is:
Maximize: $P = X + 5Y$
Subject to:
$2X + 5Y ≤ 16$
$6X ≤ 30$
$X ≥ 0 , Y ≥ 0$
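Treating the production quantities as continuous, this formulation can also be checked numerically. A sketch assuming SciPy is available:

```python
from scipy.optimize import linprog

# maximize P = X + 5Y  <=>  minimize -X - 5Y
res = linprog(c=[-1, -5],
              A_ub=[[2, 5],   # machine A: 2X + 5Y <= 16
                    [6, 0]],  # machine B: 6X      <= 30
              b_ub=[16, 30],
              bounds=[(0, None), (0, None)])  # X, Y >= 0

print(res.x, -res.fun)   # X = 0, Y = 3.2, profit Rs. 16
```

Note that the LP relaxation yields a fractional number of tables ($Y = 3.2$); producing whole units would require an additional integer-programming step.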


Example ( University Question)
Indicate on the graph the region satisfying the following constraints:
$x \ge 0,\; y \ge 0,\; 2x+y \le 20,\; x+2y \le 20$
Under these conditions, maximize the function $x+3y$.
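For the maximization step of this question, here is a quick numerical check (a sketch assuming SciPy is available):

```python
from scipy.optimize import linprog

# maximize x + 3y  <=>  minimize -x - 3y
res = linprog(c=[-1, -3],
              A_ub=[[2, 1],   # 2x +  y <= 20
                    [1, 2]],  #  x + 2y <= 20
              b_ub=[20, 20],
              bounds=[(0, None), (0, None)])  # x, y >= 0

print(res.x, -res.fun)   # x = 0, y = 10, maximum value 30
```

Evaluating $x+3y$ at the corner points $(0,0)$, $(10,0)$, $(20/3,20/3)$, and $(0,10)$ confirms that the maximum, 30, occurs at $(0, 10)$.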

The graphical method is one of the easiest ways to solve a small LP problem. However, it is useful only when there are no more than two decision variables. It is not possible to plot the solution on a two-dimensional graph when there are more than two variables, and we must turn to more complex methods. Another limitation of the graphical method is that an incorrect or inconsistent graph will produce inaccurate answers, so one needs to be very careful while drawing and plotting the graph. A very useful method for solving linear programming problems of any size is the simplex method.

Simplex Method

The simplex method is one of the most popular methods to solve linear programming problems. It is an iterative process for obtaining the feasible optimal solution. In this method, the set of basic variables is updated step by step to increase the value of the objective function. The algorithm for the linear programming simplex method is given below:

Step 1: Set up the given problem, i.e., write the inequality constraints and the objective function.

Step 2: Convert the given inequalities to equations by adding the slack variable to each inequality expression.

Step 3: Create the initial simplex tableau. Write the objective function in the bottom row; each inequality constraint appears in its own row. We can now represent the problem as an augmented matrix, which is called the initial simplex tableau.

Step 4: Identify the most negative entry in the bottom row, which identifies the pivot column. The most negative entry in the bottom row corresponds to the largest coefficient in the objective function and lets us increase the objective value as quickly as possible.

Step 5: Compute the quotients. To calculate a quotient, divide each entry in the far-right column by the corresponding entry in the pivot column, excluding the bottom row. The smallest non-negative quotient identifies the pivot row; the element at the intersection of the pivot row and pivot column is the pivot element.

Step 6: Carry out pivoting to make all other entries in the pivot column zero.

Step 7: If there are no negative entries in the bottom row, end the process. Otherwise, start from step 4.

Step 8: Finally, determine the solution associated with the final simplex tableau.
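The steps above can be sketched as a short NumPy routine. This is a minimal teaching implementation (the name `simplex_max` is ours), assuming a maximization problem with only $\le$ constraints and non-negative right-hand sides so that the slack variables give a feasible starting basis; it omits the anti-cycling safeguards a production solver needs.

```python
import numpy as np

def simplex_max(c, A, b):
    """Tableau simplex for: maximize c^T x  subject to  A x <= b, x >= 0.
    Teaching sketch: assumes b >= 0 and no degenerate cycling."""
    m, n = A.shape
    # Step 2-3: tableau [ A | I | b ] with objective row [ -c | 0 | 0 ] at the bottom
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[-1, :n] = -c
    basis = list(range(n, n + m))            # slack variables start in the basis
    while True:
        col = int(np.argmin(T[-1, :-1]))     # Step 4: most negative entry -> pivot column
        if T[-1, col] >= 0:
            break                            # Step 7: no negative entries -> optimal
        ratios = [T[i, -1] / T[i, col] if T[i, col] > 1e-12 else np.inf
                  for i in range(m)]         # Step 5: quotients
        row = int(np.argmin(ratios))         # smallest quotient -> pivot row
        if not np.isfinite(ratios[row]):
            raise ValueError("LP is unbounded")
        T[row] /= T[row, col]                # Step 6: pivot
        for i in range(m + 1):
            if i != row:
                T[i] -= T[i, col] * T[row]
        basis[row] = col
    x = np.zeros(n)                          # Step 8: read off the solution
    for i, v in enumerate(basis):
        if v < n:
            x[v] = T[i, -1]
    return x, T[-1, -1]

# Demo on the worked example that follows: maximize Z = 7x1 + 6x2
# subject to x1 + x2 <= 4 and 2x1 + x2 <= 6
x, z = simplex_max(np.array([7.0, 6.0]),
                   np.array([[1.0, 1.0], [2.0, 1.0]]),
                   np.array([4.0, 6.0]))
print(x, z)   # [2. 2.] 26.0
```

The demo reproduces the tableau computation worked out below: $x_1 = x_2 = 2$ with $Z = 26$.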



Example (simplex method)
Maximize $Z=7x_1+6x_2$
Subject to, $x_1+x_2 \le 4$
$2x_1+x_2 \le 6$
where $x_1,x_2 \ge 0$

Introduce slack variables and obtain the standard form
Maximize $Z=7x_1+6x_2+0s_1+0s_2$
Subject to, $x_1+x_2 +s_1= 4$
$2x_1+x_2 +s_2= 6$
where $x_1,x_2 ,s_1,s_2 \ge 0$

Simplex Table

The key column and key row are marked in red; the key element is 2, and the new incoming variable is $x_1$.


The key column and key row are marked in red; the key element is 1/2, and the new incoming variable is $x_2$.

All values of $C_j-Z_j$ are less than or equal to zero, so there is no scope for further improvement and we can stop the iteration.
The optimal values are
$x_1=2,x_2=2$
So 
$Z_{max}=7 \times 2+6 \times 2+0+0=26$

Example ( university question)
Maximize $z=3x_1+2x_2+5x_3$
subject to
$x_1+2x_2+x_3 \le 430$
$3x_1+2x_3 \le 460$
$x_1+4x_2 \le 420$
$x_1,x_2,x_3 \ge 0$
Introduce slack variables $s_1,s_2,s_3$ and convert into standard form
Maximize $z=3x_1+2x_2+5x_3+0s_1+0s_2+0s_3$
subject to
$x_1+2x_2+x_3 +s_1= 430$
$3x_1+2x_3+s_2= 460$
$x_1+4x_2 +s_3= 420$
$x_1,x_2,x_3,s_1,s_2,s_3 \ge 0$

Write the values of initial basic feasible solution
$x_1=x_2=x_3=0$ so $s_1=430, s_2=460,s_3=420$

Write in matrix form $Ax=B$, where the columns of $A$ correspond to $x_1, x_2, x_3, s_1, s_2, s_3$:
$\begin{bmatrix}
1 & 2 & 1 & 1 & 0 & 0\\
3 & 0 & 2 & 0 & 1 & 0\\
1 & 4 & 0 & 0 & 0 & 1
\end{bmatrix}\begin{bmatrix}
x_1\\
x_2\\
x_3\\
s_1\\
s_2\\
s_3
\end{bmatrix}=\begin{bmatrix}
430\\
460\\
420
\end{bmatrix}$

Construct the initial simplex table and do the iterations


Since all $c_j - z_j \le 0$, the optimal solution is reached:
$x_1=0,x_2=100,x_3=230$

$Z_{max}=2 \times 100+5 \times 230=1350$
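The result of the tableau iterations can be verified with a solver. A sketch assuming SciPy is available:

```python
from scipy.optimize import linprog

# maximize z = 3x1 + 2x2 + 5x3  <=>  minimize the negation
res = linprog(c=[-3, -2, -5],
              A_ub=[[1, 2, 1],   # x1 + 2x2 + x3 <= 430
                    [3, 0, 2],   # 3x1 + 2x3     <= 460
                    [1, 4, 0]],  # x1 + 4x2      <= 420
              b_ub=[430, 460, 420],
              bounds=[(0, None)] * 3)   # x1, x2, x3 >= 0

print(res.x, -res.fun)   # x = (0, 100, 230), z_max = 1350
```

This matches the tableau result: $x_1=0$, $x_2=100$, $x_3=230$, with $z_{max}=1350$.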

Example
Solve the following LP problem with the simplex method.

$\max \quad 5x_1+6x_2+9x_3+8x_4$

subject to the constraints
$x_1+2x_2+3x_3+x_4 \le 5$
$x_1+x_2+2x_3+3x_4 \le 3$

$x_1,x_2,x_3,x_4 \ge 0$

Construct the simplex table and do the iterations. The key column, key row, and key elements are highlighted in red.


No further improvement is possible since all $C_j-Z_j$ values are less than or equal to 0.
So the optimum values are $x_1=1,x_2=2,x_3=0,x_4=0$

The maximum value of the function is $5 \times 1+ 6 \times 2=17$.
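Again, the answer can be cross-checked numerically (a sketch assuming SciPy is available):

```python
from scipy.optimize import linprog

# maximize 5x1 + 6x2 + 9x3 + 8x4  <=>  minimize the negation
res = linprog(c=[-5, -6, -9, -8],
              A_ub=[[1, 2, 3, 1],   # x1 + 2x2 + 3x3 +  x4 <= 5
                    [1, 1, 2, 3]],  # x1 +  x2 + 2x3 + 3x4 <= 3
              b_ub=[5, 3],
              bounds=[(0, None)] * 4)   # all variables >= 0

print(res.x, -res.fun)   # x = (1, 2, 0, 0), optimum 17
```

The solver agrees with the tableau: $x_1=1$, $x_2=2$, $x_3=x_4=0$, with maximum value 17.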
