Lecture 8. Duality of Linear Programming
...


8.1 Primal and dual problems
...

Consider the LP: Trivially is optimal. How about ?
It is also easy to see that upper bounds of this kind can be obtained by combining the constraints with nonnegative coefficients. In general, given $c \in \mathbb{R}^n$, $b \in \mathbb{R}^m$, and the LP $\max c^\top x$ subject to the constraints $Ax \le b$, $x \ge 0$, we can assign each constraint $i$ a coefficient $y_i$ as follows. If $y \ge 0$ and $\sum_{i=1}^m y_i a_{ij} \ge c_j$ for all $j$ (that is, $A^\top y \ge c$), then $b^\top y$ is an upper bound, which further implies that $\max\{c^\top x : Ax \le b,\ x \ge 0\} \le b^\top y$. We want the upper bound as small as possible. Therefore, we would like to solve the following LP:
$$\min b^\top y \quad \text{s.t.} \quad A^\top y \ge c,\ y \ge 0,$$
which is termed the dual linear program.
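The bounding argument above can be checked numerically. A minimal sketch in plain Python (the matrix $A$ and the vectors $b$, $c$, $x$, $y$ below are invented for illustration): any $y \ge 0$ with $A^\top y \ge c$ certifies the upper bound $b^\top y$ on the objective of every primal-feasible $x$.

```python
# Weak-duality sanity check on a small made-up LP:
#   maximize c^T x  subject to  A x <= b, x >= 0.
A = [[1.0, 2.0], [3.0, 1.0]]
b = [4.0, 6.0]
c = [3.0, 2.0]

x = [1.0, 1.0]   # primal feasible: A x = [3, 4] <= b and x >= 0
y = [1.0, 1.0]   # dual feasible:  A^T y = [4, 3] >= c and y >= 0

def dot(u, v):
    return sum(s * t for s, t in zip(u, v))

def mat_vec(M, v):
    return [dot(row, v) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

# Verify feasibility of both points.
assert all(xi >= 0 for xi in x)
assert all(lhs <= rhs for lhs, rhs in zip(mat_vec(A, x), b))
assert all(yi >= 0 for yi in y)
assert all(lhs >= ci for lhs, ci in zip(mat_vec(transpose(A), y), c))

# The primal objective never exceeds the dual objective.
print(dot(c, x), "<=", dot(b, y))  # 5.0 <= 10.0
```

Any other dual-feasible $y$ would give another upper bound; the dual LP searches for the smallest one.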

Proposition

The dual of the dual is the primal.

Proof

The dual program can be rewritten as: $\max (-b)^\top y$ subject to $(-A^\top) y \le -c$, $y \ge 0$. It is clear that the dual of the dual is: $\min (-c)^\top x$ subject to $(-A) x \ge -b$, $x \ge 0$, which is equivalent to the primal $\max\{c^\top x : Ax \le b,\ x \ge 0\}$.

We note that the primal problem may have different forms. If the primal has a constraint $a_i^\top x \ge b_i$, then we have the corresponding variable $y_i \le 0$ in the dual. If the primal has a constraint $a_i^\top x = b_i$, then we have the corresponding variable $y_i$ unconstrained in sign in the dual.

Now we use the form $\max c^\top x$, subject to $Ax \le b$, $x \ge 0$. By the discussion above, we know that the optimal value of the dual problem gives an upper bound on the optimal value of the primal problem. This property is called weak duality.

Theorem (Weak duality theorem)

If $x$ is feasible for the primal and $y$ is feasible for the dual, then $c^\top x \le b^\top y$.

Proof

Since $A^\top y \ge c$, $x \ge 0$, $Ax \le b$, and $y \ge 0$, we have $c^\top x \le (A^\top y)^\top x = y^\top (Ax) \le y^\top b = b^\top y$.

Corollary

If the primal problem is unbounded above, then the dual problem is infeasible.

However, if the primal problem is infeasible, we cannot conclude that the dual problem is unbounded below. It is possible that both primal and dual problems are infeasible.

Example

Consider the following problem: Its dual problem is It is easy to check that both of them are infeasible.

8.2 Strong duality
...

We have already seen that any feasible solution of the dual gives an upper bound on the optimal value of the primal. The question is: is there any gap between the optimal value of the primal and the optimal value of the dual?

Theorem (Strong duality theorem)

If the primal problem has a finite optimal solution $x^*$, then the dual problem also has a finite optimal solution $y^*$ with the same optimal value as the primal. Namely, it always holds that $c^\top x^* = b^\top y^*$.

Now we can complete the following table, where ✓ means the combination of primal and dual statuses is possible and × means it is impossible.

| primal \ dual | unbounded | infeasible | optimal |
| --- | --- | --- | --- |
| unbounded | × | ✓ | × |
| infeasible | ✓ | ✓ | × |
| optimal | × | × | ✓ |
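The "optimal / optimal" cell can be illustrated numerically. A quick sketch using `scipy.optimize.linprog` (the LP data below are made up; `linprog` minimizes, so the primal maximum is obtained by negating $c$):

```python
import numpy as np
from scipy.optimize import linprog  # HiGHS-based LP solver

# A small made-up primal:  max c^T x  s.t.  A x <= b, x >= 0.
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([3.0, 2.0])

# linprog minimizes, so maximize c^T x by minimizing -c^T x.
primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")

# Dual:  min b^T y  s.t.  A^T y >= c, y >= 0  (written as -A^T y <= -c).
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 2, method="highs")

# Both optimal values are 7.2 (up to solver tolerance), as strong duality predicts.
print(-primal.fun, dual.fun)
```

Solving both sides is of course redundant in practice; modern solvers report the dual solution alongside the primal one.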

The proof of the strong duality theorem is an application of Farkas' lemma, which we introduced in Lecture 4.

Theorem (Farkas' lemma)

Let $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. Then exactly one of the following sets is empty:

  1. $\{x \in \mathbb{R}^n : Ax = b,\ x \ge 0\}$;
  2. $\{y \in \mathbb{R}^m : A^\top y \ge 0,\ b^\top y < 0\}$.

To apply this lemma, we consider the following corollary.

Corollary

Suppose $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. Then exactly one of the following is true:

  1. There exists $x$ such that $Ax \le b$ and $x \ge 0$.
  2. There exists $y$ such that $y \ge 0$, $A^\top y \ge 0$, and $b^\top y < 0$.


Let $C = \{Ax : x \ge 0\}$ be the cone generated by the column vectors of $A$. The corollary tells us that either $C$ intersects the region $D = \{z : z \le b\}$, or there exists a hyperplane $H = \{z : y^\top z = 0\}$ strictly separating $C$ and $D$, where $y^\top z \ge 0$ for all $z \in C$ and, since $y \ge 0$, $y^\top z \le y^\top b < 0$ for all $z \in D$.

Proof of the corollary

Let $\hat{A} = (A \;\; I_m)$. Applying the Farkas' lemma to $\hat{A}$ and $b$, it gives that exactly one of the following is true:

  1. There exists $\hat{x} = (x, s) \ge 0$ such that $\hat{A}\hat{x} = Ax + s = b$;
  2. There exists $y$ such that $\hat{A}^\top y \ge 0$ and $b^\top y < 0$.

Note that item 1 is equivalent to the existence of $x \ge 0$ such that $Ax \le b$ (with $s = b - Ax$ as the slack), and item 2 is equivalent to the existence of $y$ with $A^\top y \ge 0$, $y \ge 0$, and $b^\top y < 0$, since $\hat{A}^\top y \ge 0$ reads $A^\top y \ge 0$ and $I_m y = y \ge 0$. So we are done.
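The dichotomy in the corollary can also be checked computationally. A sketch using `scipy.optimize.linprog` (the helper `farkas_alternative` and the one-constraint system below are invented for illustration): system 1 is tested by a feasibility LP, and when it is empty a certificate $y$ for system 2 is found by normalizing $b^\top y \le -1$, which is valid because the certificates form a cone.

```python
import numpy as np
from scipy.optimize import linprog

def farkas_alternative(A, b):
    """Return ('x', x) if some x >= 0 satisfies A x <= b; otherwise
    return ('y', y) with y >= 0, A^T y >= 0, b^T y < 0 (a Farkas certificate)."""
    m, n = A.shape
    # System 1: pure feasibility LP (zero objective).
    r1 = linprog(np.zeros(n), A_ub=A, b_ub=b, bounds=[(0, None)] * n, method="highs")
    if r1.success:
        return "x", r1.x
    # System 2: certificates form a cone, so normalize b^T y <= -1
    # and solve another feasibility LP; any feasible point is a certificate.
    r2 = linprog(np.zeros(m),
                 A_ub=np.vstack([-A.T, b.reshape(1, -1)]),
                 b_ub=np.concatenate([np.zeros(n), [-1.0]]),
                 bounds=[(0, None)] * m, method="highs")
    return "y", r2.x

# Made-up infeasible system: x <= -1 with x >= 0 has no solution,
# so the certificate branch is taken.
which, cert = farkas_alternative(np.array([[1.0]]), np.array([-1.0]))
print(which)  # prints: y
```

Here the certificate is any $y \ge 1$: it satisfies $A^\top y = y \ge 0$ and $b^\top y = -y < 0$.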

We are now ready to prove the strong duality theorem.

Proof of the strong duality theorem

Without loss of generality, we assume that the dual problem has an optimal solution $y^*$ (instead of the primal problem in the statement; this is enough since the dual of the dual is the primal). Suppose the strong duality is not true. Then there does not exist a feasible solution $x$ of the primal such that $c^\top x \ge b^\top y^*$.
Let $\gamma = b^\top y^*$. It is equivalent to that there does not exist $x \ge 0$ such that
$$\begin{pmatrix} A \\ -c^\top \end{pmatrix} x \le \begin{pmatrix} b \\ -\gamma \end{pmatrix}.$$
Then the corollary of the Farkas' lemma shows that there exist $y \ge 0$ and $\lambda \ge 0$ such that
$$A^\top y - \lambda c \ge 0 \quad \text{and} \quad b^\top y - \lambda \gamma < 0.$$
Now we can claim that $y^*$ is not an optimal solution to the dual problem, which contradicts our assumption. Consider the following two cases.

  • Case 1. $\lambda = 0$. Then we have $A^\top y \ge 0$ and $b^\top y < 0$. Thus, $A^\top (y^* + y) \ge c$ and $y^* + y \ge 0$. Noting that $y^* + y$ is also a feasible solution to the dual problem. But $b^\top (y^* + y) = b^\top y^* + b^\top y < b^\top y^*$.
  • Case 2. $\lambda > 0$. Dividing by $\lambda$ on both sides, it leads to $A^\top (y/\lambda) \ge c$ and $b^\top (y/\lambda) < \gamma$, which implies that $y/\lambda \ge 0$ and $A^\top (y/\lambda) \ge c$. So $y/\lambda$ is a feasible solution to the dual problem, and better than $y^*$.

The proof is concluded by the contradictions in both cases.

In the proof of the corollary of the Farkas' lemma, we employed the matrix $\hat{A} = (A \;\; I_m)$, which looks similar to the constraint matrix of a linear program in standard form (with equality constraints $Ax = b$ and $x \ge 0$). In fact, if we consider the standard form, the proof of the strong duality can apply the Farkas' lemma directly, instead of using the corollary.

Complementary slackness
...

As an application of the strong duality of linear programming, the following theorem reveals some relations between the optimal solutions to the primal and the dual.

Theorem (Complementary slackness)

Suppose $x$ and $y$ are feasible solutions to the primal problem and the dual problem, respectively. Then $x$ and $y$ are optimal solutions if and only if
$$y^\top (b - Ax) = 0 \quad \text{and} \quad x^\top (A^\top y - c) = 0.$$

The complementary slackness shows that if the $i$-th constraint of the primal at the optimal solution is not tight, i.e., $(Ax)_i < b_i$, then the corresponding variable $y_i = 0$ in the optimal solution of the dual, and vice versa. Namely, we have that
$$y_i > 0 \implies (Ax)_i = b_i \quad \text{and} \quad x_j > 0 \implies (A^\top y)_j = c_j.$$

Proof

It is clear that $c^\top x \le (A^\top y)^\top x = y^\top A x \le b^\top y$. Then, by the weak and strong duality, $x$ and $y$ are optimal solutions if and only if $c^\top x = b^\top y$, that is, $x^\top (A^\top y - c) = 0$ and $y^\top (b - Ax) = 0$.
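The two slackness conditions can be verified numerically on a small instance. A sketch with `scipy.optimize.linprog` (the LP data are invented; the third primal constraint is deliberately slack at the optimum, so its dual variable must vanish):

```python
import numpy as np
from scipy.optimize import linprog

# Made-up primal:  max c^T x  s.t.  A x <= b, x >= 0.
A = np.array([[1.0, 2.0], [3.0, 1.0], [1.0, 0.0]])
b = np.array([4.0, 6.0, 3.0])
c = np.array([3.0, 2.0])

# Optimal primal x* (max via minimizing -c^T x) and optimal dual y*.
x = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs").x
y = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 3, method="highs").x

# Complementary slackness: slack in one problem forces zero in the other.
primal_slack = b - A @ x   # third entry is positive (constraint not tight)
dual_slack = A.T @ y - c
print(np.allclose(y * primal_slack, 0), np.allclose(x * dual_slack, 0))  # True True
```

In this instance the constraint $x_1 \le 3$ has positive slack at the optimum, and indeed the solver returns $y_3 = 0$.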

8.3 Applications of linear programming duality
...

We now consider an application of the strong duality in linear programming.

Given an undirected graph , a matching is a set of edges such that no edge in the set is a loop and no two edges share common vertices, and a vertex cover is a set of vertices that includes at least one endpoint of every edge. Here are some examples of matchings and vertex covers.

In the examples above, the marked edge set is a matching and the marked vertex set is a vertex cover. So we consider the problems of maximum matching and minimum vertex cover.

Theorem (Kőnig's theorem)

Suppose the graph $G = (V, E)$ is a bipartite graph, namely, $V = L \cup R$ for some disjoint $L$ and $R$, such that $E \subseteq L \times R$. Then the size of its maximum matching equals the size of its minimum vertex cover.

We can write an integer programming formulation of the maximum matching problem. Any matching $M$ can be represented by variables $\{x_e\}_{e \in E}$ such that $x_e = 1$ if the edge $e \in M$ and $x_e = 0$ otherwise. Conversely, any assignment to $\{x_e\}$ can represent a matching if $x_e \in \{0, 1\}$ and $\sum_{e \ni v} x_e \le 1$ for all $v \in V$. So the problem can be formulated as
$$\max \sum_{e \in E} x_e \quad \text{s.t.} \quad \sum_{e \ni v} x_e \le 1 \ \forall v \in V, \quad x_e \in \{0, 1\} \ \forall e \in E.$$
If we relax the constraints $x_e \in \{0, 1\}$ to be $x_e \ge 0$, it can be reformulated as a linear program
$$\max \sum_{e \in E} x_e \quad \text{s.t.} \quad \sum_{e \ni v} x_e \le 1 \ \forall v \in V, \quad x_e \ge 0 \ \forall e \in E.$$
The relaxed problem is called the maximum fractional matching problem.

Analogously, for each vertex $v$ we can assign a variable $y_v \in \{0, 1\}$ to represent whether $v$ is in the vertex cover. We relax the constraints again. Then we obtain the problem of minimum fractional vertex cover as follows:
$$\min \sum_{v \in V} y_v \quad \text{s.t.} \quad y_u + y_v \ge 1 \ \forall (u, v) \in E, \quad y_v \ge 0 \ \forall v \in V.$$
It is easy to verify that the fractional minimum vertex cover problem is the dual of the fractional maximum matching problem. So we know that for any graph, the size of the maximum fractional matching equals the size of the minimum fractional vertex cover.
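The duality of the two relaxations can be seen on a small example. A sketch with `scipy.optimize.linprog` (the bipartite graph below is invented; the vertex–edge incidence matrix of the matching LP is exactly the transposed constraint matrix of the cover LP):

```python
import numpy as np
from scipy.optimize import linprog

# A small made-up bipartite graph: L = {0, 1}, R = {2, 3}.
vertices = [0, 1, 2, 3]
edges = [(0, 2), (0, 3), (1, 2)]

# Vertex-edge incidence matrix: inc[v][e] = 1 iff v is an endpoint of e.
inc = np.array([[1.0 if v in e else 0.0 for e in edges] for v in vertices])

# Fractional matching LP: max sum_e x_e  s.t.  sum_{e incident to v} x_e <= 1, x >= 0.
matching = linprog(-np.ones(len(edges)), A_ub=inc, b_ub=np.ones(len(vertices)),
                   bounds=[(0, None)] * len(edges), method="highs")

# Fractional vertex cover LP (the dual): min sum_v y_v  s.t.  y_u + y_v >= 1 per edge.
cover = linprog(np.ones(len(vertices)), A_ub=-inc.T, b_ub=-np.ones(len(edges)),
                bounds=[(0, None)] * len(vertices), method="highs")

# Both optimal values equal 2: e.g. matching {(0,3), (1,2)} and cover {0, 2}.
print(-matching.fun, cover.fun)
```

On this graph the LP optima are already integral, in line with Kőnig's theorem for bipartite graphs.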

In a bipartite graph $G$, we can further show that the size of the maximum fractional matching equals the size of the maximum matching. Given a fractional matching $\{x_e\}$, consider the subgraph consisting of the fractional edges $e$ where $0 < x_e < 1$.

  • Case 1. There exists a cycle $v_1 v_2 \cdots v_k v_1$. Note that $k$ is an even number since $G$ is bipartite. Write $e_i = (v_i, v_{i+1})$ (we assume that $v_{k+1} = v_1$), and let $\varepsilon = \min\{\min_{\text{odd } i} (1 - x_{e_i}), \min_{\text{even } i} x_{e_i}\}$. Then add $\varepsilon$ to $x_{e_i}$ for all odd $i$, and subtract $\varepsilon$ from $x_{e_i}$ for all even $i$. The resulting $\{x_e\}$ satisfy all constraints and the size of the fractional matching remains the same.
  • Case 2. There is no cycle. Then choose a maximal path $v_1 v_2 \cdots v_k$ with edges $e_i = (v_i, v_{i+1})$. Note that all edges incident to $v_1$ have $x_e = 0$ except $e_1$: by the maximality of the path, $e_1$ is the only fractional edge at $v_1$, and any edge $e$ at $v_1$ with $x_e = 1$ would violate the constraint at $v_1$ since $x_{e_1} > 0$. So the constraint at $v_1$ is not tight, and similarly for $v_k$. Again, let $\varepsilon = \min\{\min_{\text{odd } i} (1 - x_{e_i}), \min_{\text{even } i} x_{e_i}\}$. Then add $\varepsilon$ to $x_{e_i}$ for all odd $i$, and subtract $\varepsilon$ from $x_{e_i}$ for all even $i$. Since a path has at least as many odd-indexed edges as even-indexed ones, the resulting $\{x_e\}$ satisfy all constraints and the size of the fractional matching is nondecreasing.
    Each operation decreases the number of fractional edges, since the choice of $\varepsilon$ makes at least one $x_{e_i}$ equal to $0$ or $1$. So there are no fractional edges after finitely many operations. Consequently, any fractional matching can be converted into an integral matching that is not worse, which implies that the size of the maximum fractional matching equals the size of the maximum matching.
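One augmentation step can be traced on a tiny concrete instance. A minimal sketch in Python (the path and its fractional values are invented; $\varepsilon$ is added on odd-indexed edges and subtracted on even-indexed ones, so the matching size cannot decrease):

```python
# One step on a path 0-1-2-3 whose three edges are all fractional.
x = {(0, 1): 0.5, (1, 2): 0.5, (2, 3): 0.5}
path = [(0, 1), (1, 2), (2, 3)]

# epsilon = min of (1 - x_e) over odd-indexed edges and x_e over even-indexed ones.
eps = min(min(1 - x[e] for i, e in enumerate(path, 1) if i % 2 == 1),
          min(x[e] for i, e in enumerate(path, 1) if i % 2 == 0))

# Alternately raise and lower the values along the path.
for i, e in enumerate(path, 1):
    x[e] += eps if i % 2 == 1 else -eps

print(x)                 # {(0, 1): 1.0, (1, 2): 0.0, (2, 3): 1.0}
print(sum(x.values()))   # size grew from 1.5 to 2.0, and all values are integral
```

Every vertex constraint still holds (each internal vertex sees one raised and one lowered edge), and after this single step no fractional edge remains.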

We can also show that the size of the minimum fractional vertex cover equals the size of the minimum vertex cover in any bipartite graph $G$. Suppose $\{y^*_v\}_{v \in V}$ is an optimal fractional vertex cover, where $V = L \cup R$ and $E \subseteq L \times R$. We construct a random vertex cover $C$ as follows.
Uniformly choose a real number $\theta \in [0, 1)$. For every $u \in L$, let $u \in C$ if $y^*_u \ge \theta$, and $u \notin C$ otherwise. For every $v \in R$, let $v \in C$ if $y^*_v > 1 - \theta$, and $v \notin C$ otherwise. For every edge $(u, v) \in E$ with $u \in L$ and $v \in R$, if $y^*_u < \theta$, then $y^*_v \ge 1 - y^*_u > 1 - \theta$. So at least one of $u, v$ is in $C$, which gives that $C$ is a vertex cover.
Now we calculate the expected size of $C$. For any $v$, $\Pr[v \in C] = y^*_v$ (we may assume $y^*_v \le 1$). Thus, by the linearity of expectation, $\mathbb{E}[|C|] = \sum_{v \in V} y^*_v$, which is the size of the minimum fractional vertex cover. In addition, there exists $\theta$ such that the vertex cover constructed from $\theta$ has size at most $\sum_{v \in V} y^*_v$.
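The threshold rounding can be sketched in a few lines of Python (the bipartite graph and the fractional cover values are invented for illustration):

```python
import random

# Made-up bipartite graph and a feasible fractional vertex cover y:
# y_u + y_v >= 1 holds on every edge.
L, R = [0, 1], [2, 3]
edges = [(0, 2), (0, 3), (1, 2)]
y = {0: 0.5, 1: 0.5, 2: 0.5, 3: 0.5}

theta = random.uniform(0, 1)

# Threshold rounding: left vertices enter the cover when y_v >= theta,
# right vertices when y_v > 1 - theta.
C = {v for v in L if y[v] >= theta} | {v for v in R if y[v] > 1 - theta}

# C is always a vertex cover: on any edge (u, v), y_u < theta forces y_v > 1 - theta.
assert all(u in C or v in C for u, v in edges)
print(sorted(C))
```

With these values every draw of $\theta$ yields a cover of size $2 = \sum_v y_v$, matching the expectation computed above.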

Overall, we conclude that in any bipartite graph, the size of the maximum matching equals the size of the minimum vertex cover.