## Some Uses of Duality in Linear Programming

For most of my students, the first reaction on seeing duality in linear programming is “What’s the big deal?”  There was a recent post on math.SE asking the same question.  Duality in linear programming seems to me analogous to eigenvalues and eigenvectors in linear algebra, in the sense that at first the concept doesn’t seem like there’s much there, but as you learn more and more you realize how fundamental it is to the entire subject.

This post will describe some of the uses of duality in linear programming.  There are others, of course.  The post draws heavily on my answer to the math.SE question about the uses of duality.

1. Any feasible solution to the dual problem gives a bound on the optimal objective function value in the primal problem.  This is how I motivate the dual problem in class, in fact.  The formal statement of this is the weak duality theorem.
2. Understanding the dual problem leads to specialized algorithms for some important classes of linear programming problems. Examples include the transportation simplex method, the Hungarian algorithm for the assignment problem, and the network simplex method. Even column generation relies partly on duality.
3. The dual can be helpful for sensitivity analysis.  Changing the primal’s right-hand side constraint vector or adding a new constraint to it can make the original primal optimal solution infeasible. However, this only changes the objective function or adds a new variable to the dual, respectively, so the original dual optimal solution is still feasible (and is usually not far from the new dual optimal solution).
4. Sometimes finding an initial feasible solution to the dual is much easier than finding one for the primal.  For example, if the primal is a minimization problem, the constraints are often of the form $A {\bf x} \geq {\bf b}$, ${\bf x} \geq {\bf 0}$, for ${\bf b} \geq {\bf 0}$. The dual constraints would then likely be of the form $A^T {\bf y} \leq {\bf c}$, ${\bf y} \geq {\bf 0}$, for ${\bf c} \geq {\bf 0}$. The origin is feasible for the latter problem but not for the former.
5. The dual variables give the shadow prices for the primal constraints. Suppose you have a profit maximization problem with a resource constraint $i$. Then the value $y_i$ of the corresponding dual variable in the optimal solution tells you that you get an increase of $y_i$ in the maximum profit for each unit increase in the amount of resource $i$ (absent degeneracy and for small increases in resource $i$).
6. Sometimes the dual is easier to solve.  A primal problem with many constraints and few variables can be converted into a dual problem with few constraints and many variables.  Fewer constraints are nice in linear programs because the basis matrix is an $n \times n$ matrix, where $n$ is the number of constraints.  Thus the fewer the constraints, the smaller the basis matrix, and the fewer computations required in each iteration of the simplex method.
7. The dual can be used to detect primal infeasibility.  This is a consequence of weak duality: suppose the dual is a minimization problem whose objective function value can be made arbitrarily small (i.e., the dual is unbounded).  Since any feasible solution to the dual gives an upper bound on the optimal objective function value in the primal, the primal problem cannot have any feasible solutions.
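To make point 1 concrete, here is a minimal sketch in Python using SciPy’s `linprog` (the numbers are illustrative, not from any particular model): any dual-feasible point gives an upper bound on the optimum of a primal maximization problem.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative primal:  max 3x1 + 5x2
#   s.t.  x1 <= 4,  2x2 <= 12,  3x1 + 2x2 <= 18,  x >= 0
c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

# linprog minimizes, so negate c to maximize (default bounds are x >= 0)
res = linprog(-c, A_ub=A, b_ub=b, method="highs")
primal_opt = -res.fun                      # 36.0

# Dual:  min b^T y  s.t.  A^T y >= c,  y >= 0.
# y below is dual-feasible but not dual-optimal.
y = np.array([3.0, 2.5, 0.0])
assert np.all(A.T @ y >= c) and np.all(y >= 0)

bound = b @ y                              # 42.0, an upper bound on primal_opt
```

Weak duality guarantees `bound >= primal_opt` for every dual-feasible `y`; the bound is tight exactly at a dual optimum.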
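Point 5 can be checked numerically as well. A sketch with the same kind of illustrative numbers: solve the primal, solve the dual explicitly, and confirm that one extra unit of a binding resource raises the maximum profit by the corresponding dual value (the problem below is nondegenerate, so the shadow-price interpretation applies).

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative profit maximization:  max 3x1 + 5x2
#   s.t.  x1 <= 4,  2x2 <= 12,  3x1 + 2x2 <= 18,  x >= 0
c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

res = linprog(-c, A_ub=A, b_ub=b, method="highs")
base_profit = -res.fun                     # 36.0

# Solve the dual directly:  min b^T y  s.t.  A^T y >= c,  y >= 0
dual = linprog(b, A_ub=-A.T, b_ub=-c, method="highs")
y = dual.x                                 # approx [0, 1.5, 1]

# One extra unit of resource 3 should raise the maximum profit by y[2] = 1
b_plus = b.copy()
b_plus[2] += 1.0
res2 = linprog(-c, A_ub=A, b_ub=b_plus, method="highs")
profit_gain = -res2.fun - base_profit      # approx 1.0
```

Note that `y[0]` is zero: the first resource constraint is not binding at the optimum, so extra units of that resource are worth nothing, exactly as the shadow-price interpretation predicts.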
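Finally, a sketch of point 7, again with made-up numbers: the primal below demands $x \leq 1$ and $x \geq 2$ simultaneously, so it is infeasible, and the solver certifies this on the dual side by finding the dual unbounded.

```python
import numpy as np
from scipy.optimize import linprog

# Infeasible primal (illustrative):  max x  s.t.  x <= 1,  x >= 2,  x >= 0
c = np.array([1.0])
A = np.array([[1.0], [-1.0]])              # second row encodes x >= 2 as -x <= -2
b = np.array([1.0, -2.0])

primal = linprog(-c, A_ub=A, b_ub=b, method="highs")
# primal.status == 2: infeasible

# Dual:  min y1 - 2*y2  s.t.  y1 - y2 >= 1,  y >= 0
dual = linprog(b, A_ub=-A.T, b_ub=-c, method="highs")
# Along y = (1 + t, t) the dual objective 1 - t goes to -infinity, so the
# dual is unbounded; by weak duality the primal has no feasible solution.
```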