Intuition for the Dual in Linear Programming

One of the most important theoretical results in linear programming is that every LP has a corresponding dual program. Where, exactly, this dual comes from can often seem mysterious. Several years ago I answered a question on a couple of Stack Exchange sites giving an intuitive explanation for where the dual comes from. Those posts seem to have been appreciated, so I thought I would reproduce my answer here.


Suppose we have a primal problem as follows.

Primal = \begin{Bmatrix} \max &5x_1 - 6x_2 \\ \text{s.t. } &2x_1 - x_2 = 1 \\ &x_1 + 3x_2 \leq 9 \\ &x_1 \geq 0 \end{Bmatrix}


Now, suppose we want to use the primal’s constraints as a way to find an upper bound on the optimal value of the primal. If we multiply the first constraint by 9, the second constraint by 1, and add them together, we get 9(2x_1 - x_2) + 1(x_1 +3 x_2) for the left-hand side and 9(1) + 1(9) for the right-hand side. Since the first constraint is an equality and the second is an inequality, this implies

19x_1 - 6x_2 \leq 18.

But since x_1 \geq 0, it’s also true that 5x_1 \leq 19x_1, and so

\displaystyle 5x_1 - 6x_2 \leq 19x_1 - 6x_2 \leq 18.

Therefore, 18 is an upper bound on the optimal value of the primal problem.
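
If you want to check this numerically, here’s a minimal sketch using SciPy’s linprog (the solver choice is mine, not part of the original answer). It solves the primal and confirms that its optimal value, 6, does indeed sit below the bound of 18.

    from scipy.optimize import linprog

    # Primal: max 5*x1 - 6*x2. linprog minimizes, so negate the objective.
    res = linprog(
        c=[-5, 6],
        A_eq=[[2, -1]], b_eq=[1],          # 2*x1 - x2 = 1
        A_ub=[[1, 3]], b_ub=[9],           # x1 + 3*x2 <= 9
        bounds=[(0, None), (None, None)],  # x1 >= 0, x2 unrestricted
    )
    print(-res.fun)  # 6.0, comfortably below the upper bound of 18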

Surely we can do better than that, though. Instead of just guessing 9 and 1 as the multipliers, let’s let them be variables. Thus we’re looking for multipliers y_1 and y_2 to force

\displaystyle 5x_1 - 6x_2 \leq y_1(2x_1-x_2) + y_2(x_1 + 3x_2) \leq y_1(1) + y_2(9).

Now, in order for this pair of inequalities to hold, what has to be true about y_1 and y_2? Let’s take the two inequalities one at a time.


The first inequality: 5x_1 - 6x_2 \leq y_1(2x_1-x_2) + y_2(x_1 + 3x_2)

We have to track the coefficients of the x_1 and x_2 variables separately. First, we need the total x_1 coefficient on the right-hand side to be at least 5. Getting exactly 5 would be great, but since x_1 \geq 0, anything larger than 5 would also satisfy the inequality for x_1. Mathematically speaking, this means that we need 2y_1 + y_2 \geq 5.

On the other hand, to ensure the inequality for the x_2 variable we need the total x_2 coefficient on the right-hand side to be exactly -6. Since x_2 could be positive, we can’t go lower than -6, and since x_2 could be negative, we can’t go higher than -6 (as the negative value for x_2 would flip the direction of the inequality). So for the first inequality to work for the x_2 variable, we’ve got to have -y_1 + 3y_2 = -6.
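
If it helps, this coefficient bookkeeping can be verified symbolically. The sketch below uses SymPy (again my choice of tool, not something from the original posts) to collect the x_1 and x_2 coefficients of the gap between the two sides of the first inequality:

    import sympy as sp

    x1, x2, y1, y2 = sp.symbols("x1 x2 y1 y2")

    # Gap between the right- and left-hand sides of the first inequality
    gap = y1*(2*x1 - x2) + y2*(x1 + 3*x2) - (5*x1 - 6*x2)

    # Group the gap by x1 and x2 to read off the conditions on y1 and y2
    print(sp.collect(sp.expand(gap), [x1, x2]))
    # x1*(2*y1 + y2 - 5) + x2*(-y1 + 3*y2 + 6), up to term ordering

For the gap to be nonnegative for every x_1 \geq 0 and every x_2 of either sign, the x_1 coefficient must be nonnegative (2y_1 + y_2 \geq 5) and the x_2 coefficient must vanish (-y_1 + 3y_2 = -6), exactly as argued above.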


The second inequality: y_1(2x_1-x_2) + y_2(x_1 + 3x_2) \leq y_1(1) + y_2(9)

Here we have to track the y_1 and y_2 variables separately. The y_1 variable comes from the first constraint, which is an equality constraint. It doesn’t matter whether y_1 is positive or negative; the equality constraint still holds. Thus y_1 is unrestricted in sign. However, the y_2 variable comes from the second constraint, which is a less-than-or-equal-to constraint. If we were to multiply the second constraint by a negative number, that would flip its direction and change it to a greater-than-or-equal-to constraint. To keep with our goal of upper-bounding the primal objective, we can’t let that happen. So the y_2 variable can’t be negative. Thus we must have y_2 \geq 0.

Finally, we want to make the right-hand side of the second inequality as small as possible, since we want the tightest possible upper bound on the primal objective. So we want to minimize y_1 + 9y_2.


Putting all of these restrictions on y_1 and y_2 together, we find that using the primal’s constraints to find the best upper bound on the optimal primal objective entails solving the following linear program:

\begin{matrix} \text{Minimize } &y_1 + 9y_2 \\ \text{ subject to } &2y_1 + y_2 \geq 5 \\ &-y_1 + 3y_2 = -6\\ &y_2 \geq 0 \end{matrix}

And that’s the dual.
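
As one last sanity check (once more with SciPy, my own addition), we can solve the dual directly and compare it with the primal. Both problems turn out to have optimal value 6, which is strong duality at work:

    from scipy.optimize import linprog

    # Dual: min y1 + 9*y2, with y1 unrestricted and y2 >= 0.
    # linprog only accepts <= rows, so 2*y1 + y2 >= 5 becomes -2*y1 - y2 <= -5.
    dual = linprog(
        c=[1, 9],
        A_ub=[[-2, -1]], b_ub=[-5],
        A_eq=[[-1, 3]], b_eq=[-6],         # -y1 + 3*y2 = -6
        bounds=[(None, None), (0, None)],  # y1 unrestricted, y2 >= 0
    )
    print(dual.fun)  # 6.0, matching the primal optimum

Notice also that our original guess of multipliers, (y_1, y_2) = (9, 1), is a feasible point of this dual: 2(9) + 1 = 19 \geq 5 and -9 + 3(1) = -6. Its dual objective is 9 + 9(1) = 18, which is exactly the upper bound we computed by hand. Every dual-feasible point gives such a bound; the dual just asks for the best one.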


It’s probably worth summarizing the implications of this argument for all possible forms of the primal and dual. The following table is taken from p. 214 of Introduction to Operations Research, 8th edition, by Hillier and Lieberman. They refer to this as the SOB method, where SOB stands for Sensible, Odd, or Bizarre, depending on how likely one is to find that particular constraint or variable restriction in a maximization or minimization problem.

             Primal Problem                           Dual Problem
             (or Dual Problem)                        (or Primal Problem)

             Maximization                             Minimization

Sensible     <= constraint            paired with     nonnegative variable
Odd          =  constraint            paired with     unconstrained variable
Bizarre      >= constraint            paired with     nonpositive variable

Sensible     nonnegative variable     paired with     >= constraint
Odd          unconstrained variable   paired with     = constraint
Bizarre      nonpositive variable     paired with     <= constraint