Lecture 7. Linear Programming
...


7.1 Linear program and standard form
...

A linear program may have both equality and inequality constraints. It is easy to rewrite an equality constraint as a pair of inequalities. Can we go the other way and rewrite inequality constraints as equalities? In fact, the standard form of linear programming only allows equality constraints and one special type of inequality: non-negativity.

Definition (Standard form)

We say that a linear program is in standard form if the following are all true:

  1. Non-negativity constraints for all variables;
  2. All remaining constraints are expressed as equality constraints;
  3. The right-hand-side vector, $b$, is non-negative.

Namely, the standard form can be expressed as follows:

$$\min\ c^\mathsf{T} x \quad \text{subject to} \quad Ax = b, \quad x \ge 0, \quad b \ge 0.$$

Now the question is how to convert a linear program into the standard form.
Clearly the third requirement is trivial: if some $b_i < 0$, multiply the $i$-th constraint by $-1$. We only need to consider the first two requirements.
First, for the second requirement, consider a simple example $a^\mathsf{T} x \le b$. We can add a slack variable $s$: $a^\mathsf{T} x + s = b$ together with $s \ge 0$. Note that this is equivalent to the original inequality. Suppose we have another example $a^\mathsf{T} x \ge b$. Then multiply the inequality by $-1$ and add a slack variable: $-a^\mathsf{T} x + s = -b$ with $s \ge 0$.
Next we consider the first requirement. If a variable has a non-positivity constraint, such as $x_i \le 0$, we can easily let $x_i' = -x_i \ge 0$ and substitute $-x_i'$ for $x_i$ in the linear program. If some variable, for example $x_j$, is unconstrained in sign, we can replace it by $x_j = x_j^{+} - x_j^{-}$ and require both $x_j^{+}$ and $x_j^{-}$ to be non-negative.
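Both tricks can be seen in a small worked conversion (a hypothetical two-variable program, chosen here for illustration):

$$\min\ x_1 - x_2 \quad \text{s.t.} \quad x_1 + x_2 \le 4, \quad x_1 \ge 0, \quad x_2 \text{ free}.$$

Splitting $x_2 = x_2^{+} - x_2^{-}$ and adding a slack variable $s \ge 0$ gives the standard form

$$\min\ x_1 - x_2^{+} + x_2^{-} \quad \text{s.t.} \quad x_1 + x_2^{+} - x_2^{-} + s = 4, \quad x_1,\ x_2^{+},\ x_2^{-},\ s \ge 0.$$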

Example

Consider the following linear program: By adding slack variables and splitting free variables, it can be converted into the following standard form:
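This conversion is mechanical enough to automate. Below is a minimal sketch in NumPy (the function name and interface are mine, not from the lecture), assuming the input has only $\le$ constraints and fully free variables:

```python
import numpy as np

def to_standard_form(c, A, b):
    """Convert  min c^T x  s.t.  A x <= b  (x free)  into standard form
    min c'^T y  s.t.  A' y = b',  y >= 0,  b' >= 0,
    by splitting x = x+ - x- and adding one slack variable per inequality.
    (A sketch; bounded variables are not handled specially.)"""
    m, n = A.shape
    # Columns: [x+ | x- | slacks].  x = x+ - x-  gives blocks [A, -A];
    # slacks get an identity block so that A x + s = b.
    A_std = np.hstack([A, -A, np.eye(m)]).astype(float)
    b_std = np.asarray(b, dtype=float).copy()
    c_std = np.concatenate([c, -c, np.zeros(m)]).astype(float)
    # Enforce b >= 0 by negating equality rows with a negative right-hand side.
    neg = b_std < 0
    A_std[neg] *= -1
    b_std[neg] *= -1
    return c_std, A_std, b_std
```

Splitting every variable doubles the variable count; practical solvers track free variables directly instead of splitting them.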

The standard form is useful in algorithm design and analysis, especially in algorithms based on the primal-dual method.

7.2 Solving linear programming
...

We now use the form $\min\ c^\mathsf{T} x$ subject to $Ax \le b$, with feasible set $P = \{x \in \mathbb{R}^n : Ax \le b\}$. Note that the feasible set is a polyhedron since it is the intersection of finitely many halfspaces.

Example

Consider the following linear program: We can sketch its feasible set as follows:

When a linear program achieves its optimal value $v^*$, intuitively the hyperplane $c^\mathsf{T} x = v^*$ passes through a vertex of the feasible set. We may use this observation to solve linear programs geometrically.

Question

Is this observation always true?

Fundamental Theorem of Linear Programming
...

Theorem (Fundamental theorem of linear programming)

Suppose a linear program has an optimal solution. Then there exists an optimal solution at a vertex (extreme point).

First, we define what a vertex and an extreme point are.

Definition (Vertex)

A point $x \in P \subseteq \mathbb{R}^n$ is a vertex if at least $n$ linearly independent constraints are tight at $x$.

Definition (Extreme point)

A point $x$ is called an extreme point of a convex set $C$, if there do not exist $y, z \in C \setminus \{x\}$ and $\lambda \in (0, 1)$ such that $x = \lambda y + (1 - \lambda) z$. In other words, $x$ cannot be expressed as a convex combination of other points in $C$.

An important fact is that these two types of points coincide for polyhedra.

Proposition

For any polyhedron $P = \{x \in \mathbb{R}^n : Ax \le b\}$, $x \in P$ is an extreme point if and only if $x$ is a vertex.

Proof

Suppose $P = \{x \in \mathbb{R}^n : Ax \le b\}$ and $x \in P$.

  • Sufficiency: Suppose $x$ is a vertex. Then there exist indices $i_1, \dots, i_n$ such that $a_{i_1}, \dots, a_{i_n}$ are linearly independent and satisfy $a_{i_k}^\mathsf{T} x = b_{i_k}$ for $k = 1, \dots, n$. Write $\hat{A}$ for the matrix with rows $a_{i_1}, \dots, a_{i_n}$ and $\hat{b}$ for the corresponding right-hand sides, so $\hat{A} x = \hat{b}$. The constraints are linearly independent, so $\hat{A}$ is invertible.
    Assume $x$ is not an extreme point, that is, there exist $y, z \in P \setminus \{x\}$ and $\lambda \in (0, 1)$ such that $x = \lambda y + (1 - \lambda) z$. After substituting we have $\hat{b} = \hat{A} x = \lambda \hat{A} y + (1 - \lambda) \hat{A} z$. Note that $\hat{A} y \le \hat{b}$ and $\hat{A} z \le \hat{b}$, and a convex combination with positive weights of vectors at most $\hat{b}$ can equal $\hat{b}$ only if each equals $\hat{b}$. So it holds that $\hat{A} y = \hat{b}$ and $\hat{A} z = \hat{b}$. Thus, we conclude that $\hat{A} y = \hat{A} z = \hat{A} x$. Since $\hat{A}$ is invertible, it yields $y = z = x$, which leads to a contradiction.

  • Necessity: Assume $x$ is an extreme point but not a vertex of $P$. Let $I = \{i : a_i^\mathsf{T} x = b_i\}$ be the index set of tight constraints at $x$. Since $x$ is not a vertex, there do not exist $n$ linearly independent constraints that are tight at $x$. Hence there exists $d \ne 0$, such that $a_i^\mathsf{T} d = 0$ for all $i \in I$.
    Let $y = x + \epsilon d$ and $z = x - \epsilon d$. We argue that $y, z \in P$ for some sufficiently small $\epsilon > 0$ as follows.

    • For $i \in I$, note that $a_i^\mathsf{T} y = a_i^\mathsf{T} x = b_i$ (since $a_i^\mathsf{T} d = 0$), and similarly for $z$.
    • For $i \notin I$, we have $a_i^\mathsf{T} x < b_i$. Then there exists $\epsilon_i > 0$ such that $a_i^\mathsf{T} (x + \epsilon_i d) \le b_i$ and $a_i^\mathsf{T} (x - \epsilon_i d) \le b_i$.

    Taking $\epsilon = \min_{i \notin I} \epsilon_i$, we have $y, z \in P$ and $x = \frac{1}{2} y + \frac{1}{2} z$. Hence $x$ is not an extreme point, which leads to a contradiction.
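The vertex characterization is easy to check numerically: collect the constraints tight at $x$ and test whether they contain $n$ linearly independent ones. A small sketch (the function name is mine), assuming $P = \{x : Ax \le b\}$:

```python
import numpy as np

def is_vertex(A, b, x, tol=1e-9):
    """Check whether x is a vertex of P = {x : A x <= b}:
    the constraints tight at x must have rank n = dim(x)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.asarray(x, dtype=float)
    assert (A @ x <= b + tol).all(), "x must be feasible"
    tight = np.abs(A @ x - b) <= tol   # constraints satisfied with equality
    if not tight.any():
        return False                   # interior point: nothing is tight
    return np.linalg.matrix_rank(A[tight]) == A.shape[1]
```

On the unit square, for instance, a corner passes the test while an edge midpoint (only one tight constraint, rank 1) does not.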

However, a polyhedron may not contain any vertex.

Proposition

A polyhedron $P = \{x : Ax \le b\}$ has an extreme point if and only if $P$ does not contain a line, and $P \ne \emptyset$.

Proof
  • Necessity: Assume there exists a line $\ell = \{y + t d : t \in \mathbb{R}\} \subseteq P$ for some $y \in P$ and $d \ne 0$.
    1. If $x$ is on the line $\ell$, it is obvious that $x$ cannot be an extreme point.
    2. If $x$ is not on the line $\ell$, we claim that $x + d$ and $x - d$ are also in $P$. For any $t > 0$ and $\lambda \in [0, 1]$, $\lambda (y + t d) + (1 - \lambda) x \in P$ by convexity. Let $\lambda = 1/t$. It follows that $\frac{1}{t}(y + t d) + (1 - \frac{1}{t}) x = x + d + \frac{1}{t}(y - x) \to x + d$ as $t \to \infty$. Thus, since $P$ is closed, $x + d \in P$. Similarly, we have $x - d \in P$. Hence, $x = \frac{1}{2}(x + d) + \frac{1}{2}(x - d)$ cannot be an extreme point.
  • Sufficiency: If $P \ne \emptyset$ contains no extreme point, pick $x \in P$ whose tight set $I = \{i : a_i^\mathsf{T} x = b_i\}$ is maximal. Since $x$ is not a vertex, there exists $d \ne 0$ such that $a_i^\mathsf{T} d = 0$ for all $i \in I$; by the maximality of $I$, moving along $d$ can never make a new constraint tight, so in fact $a_i^\mathsf{T} d = 0$ for all $i$. Thus, for all $t \in \mathbb{R}$, we have $a_i^\mathsf{T} (x + t d) \le b_i$ for all $i$, which gives that the line $\{x + t d : t \in \mathbb{R}\}$ is contained in $P$.

We now prove the fundamental theorem of linear programming.
Let $P$ be the feasible set of the linear program and $Q = \{x \in P : c^\mathsf{T} x = v^*\}$ be the set of optimal solutions. Assume $P$ contains no line (a program in standard form satisfies this, since $x \ge 0$). Since $Q \subseteq P$, there is no line in $P$ and also no line in $Q$. By the above proposition, we know that $Q$ has an extreme point $x^*$. Now it suffices to show that $x^*$ is also an extreme point of $P$.
Suppose $x^*$ is not an extreme point of $P$; then there exist $y, z \in P \setminus \{x^*\}$ and $\lambda \in (0, 1)$ such that $x^* = \lambda y + (1 - \lambda) z$. We have $c^\mathsf{T} y \ge c^\mathsf{T} x^*$ and $c^\mathsf{T} z \ge c^\mathsf{T} x^*$ since $x^*$ is an optimal solution. Thus, $c^\mathsf{T} x^* = \lambda c^\mathsf{T} y + (1 - \lambda) c^\mathsf{T} z$ forces $c^\mathsf{T} y = c^\mathsf{T} z = c^\mathsf{T} x^*$. It implies that $y, z \in Q$, which contradicts the fact that $x^*$ is an extreme point of $Q$.

The fundamental theorem of linear programming gives us an algorithm to solve linear programs by enumerating all (at most $\binom{m}{n}$) vertices of the feasible set.

However, consider the $n$-dimensional cube $\{x \in \mathbb{R}^n : 0 \le x_i \le 1 \text{ for all } i\}$. Only $2n$ constraints produce a polyhedron with $2^n$ vertices, so this enumeration can take exponential time.
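Even though it can be exponentially slow, the enumeration algorithm is worth writing down, since it is exactly the fundamental theorem turned into code: try every $n$-subset of constraints as the tight set, solve the resulting linear system, and keep the best feasible solution. A sketch (names are mine) for $\min c^\mathsf{T} x$ over $\{x : Ax \le b\}$, assuming the optimum is attained:

```python
from itertools import combinations
import numpy as np

def solve_lp_by_vertices(c, A, b, tol=1e-9):
    """Brute-force LP solver for  min c^T x  s.t.  A x <= b:
    if the optimum is attained, it is attained at a vertex, so it suffices
    to try every n-subset of constraints as tight.  O(C(m, n)) subsets --
    an illustration of the fundamental theorem, not a practical algorithm."""
    m, n = A.shape
    best_x, best_val = None, np.inf
    for rows in combinations(range(m), n):
        sub_A, sub_b = A[list(rows)], b[list(rows)]
        if np.linalg.matrix_rank(sub_A) < n:
            continue                     # chosen constraints not independent
        x = np.linalg.solve(sub_A, sub_b)  # candidate vertex
        if (A @ x <= b + tol).all() and c @ x < best_val:
            best_x, best_val = x, float(c @ x)
    return best_x, best_val
```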

7.3 Simplex method
...

We now introduce a (usually) efficient algorithm: the simplex method. We remark here that the simplex method is not a polynomial-time algorithm. However, it runs fast except on some artificially designed cases.

The key idea is that once we find a vertex of the feasible set, we move from the current vertex to a "better" neighbor, where two vertices are neighbors if they share $n - 1$ tight constraints.
Assume $P = \{x : Ax \le b, x \ge 0\}$ with $b \ge 0$ is the feasible set. Then we can start from the origin $x = 0$. It is clear that there are $n$ neighbors of the origin, and each of them has $n - 1$ zero coordinates.
How can we know whether a neighbor is "better"? Note that our goal is to compute $\min\ c^\mathsf{T} x$. If $c_j < 0$, the objective function becomes "better" as long as $x_j$ increases. To this end, we can choose $j$ such that $c_j < 0$ and increase $x_j$ until some constraint $a_i^\mathsf{T} x \le b_i$ becomes tight. Now there are $n$ tight constraints: $a_i^\mathsf{T} x = b_i$ together with $x_k = 0$ for all $k \ne j$. Then we shift coordinates so that the new vertex becomes the origin. It suffices to substitute the slack of the newly tight constraint for $x_j$: let $x_j' = b_i - a_i^\mathsf{T} x$, keep $x_k' = x_k$ for $k \ne j$, and rewrite the objective and constraints in terms of $x'$.

Definition (Neighbor)

Two vertices are neighbors if they share $n - 1$ tight constraints.

Example

Consider the following linear program:

  • Suppose we start from the origin. Since the coefficient of the chosen variable in the objective is negative, we increase it and arrive at a new vertex, where one constraint becomes tight.
  • At the new vertex, we create a new coordinate system: every point in the original coordinate system is rewritten in the new coordinates.
  • We introduce the new variables by substitution and then rewrite the linear program accordingly.
  • Now we repeat the above process: we increase the next variable with a negative coefficient, substitute again, and rewrite the objective function.
  • Since the coefficients of all variables in the rewritten objective are now positive, we know that the current vertex is an optimal solution. Substituting back yields the optimal solution in the original coordinates.
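The walk described above can be implemented compactly with a tableau, where each coordinate change is a pivot step. The following is a minimal sketch (my own code, not the lecture's), for $\min c^\mathsf{T} x$ subject to $Ax \le b$ and $x \ge 0$, assuming $b \ge 0$ so that the origin is a feasible starting vertex; it uses the most-negative-coefficient rule and no anti-cycling safeguard, so it may loop on degenerate inputs:

```python
import numpy as np

def simplex(c, A, b, max_iter=1000):
    """Minimal tableau simplex for  min c^T x  s.t.  A x <= b, x >= 0,
    assuming b >= 0 (the origin is a feasible vertex).  Returns (x, value);
    raises ValueError on unboundedness.  Illustrative sketch only."""
    m, n = A.shape
    # Tableau rows 0..m-1: [A | I | b]; last row: reduced costs [c | 0 | 0].
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n], T[:m, n:n + m], T[:m, -1] = A, np.eye(m), b
    T[-1, :n] = c
    basis = list(range(n, n + m))          # the slack variables start basic
    for _ in range(max_iter):
        j = int(np.argmin(T[-1, :-1]))     # entering column: most negative cost
        if T[-1, j] >= -1e-9:
            break                          # all reduced costs >= 0: optimal
        pos = T[:m, j] > 1e-9
        if not pos.any():
            raise ValueError("LP is unbounded")
        ratios = np.where(pos, T[:m, -1] / np.where(pos, T[:m, j], 1.), np.inf)
        i = int(np.argmin(ratios))         # leaving row by the ratio test
        T[i] /= T[i, j]                    # pivot: normalize the pivot row...
        for r in range(m + 1):
            if r != i:
                T[r] -= T[r, j] * T[i]     # ...and eliminate the column elsewhere
        basis[i] = j
    x = np.zeros(n + m)
    for i, var in enumerate(basis):
        x[var] = T[i, -1]
    return x[:n], float(c @ x[:n])
```

Updating the last (objective) row during each pivot is exactly the "rewrite the objective in the new coordinates" step from the example.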

As shown above, the first step is to choose a variable whose coefficient is negative, and then increase it. What should we do if multiple variables have negative coefficients?

In fact, the following example shows that we may encounter some tricky problems if we choose a wrong variable. Consider the linear program with the same objective function as above, but with different constraints.

  • Suppose in the first step, we choose to increase the wrong variable. Then a different constraint becomes tight.
  • We substitute the new variables as before. The new constraints have the same shape as the old ones.
  • Unfortunately, we again choose to increase the wrong variable, and the process repeats again and again, cycling forever.

Clearly it is possible to fail in this case, which is called degeneracy. One way to break cycles is to add a perturbation to $b$, that is, let $b_i' = b_i + \epsilon_i$ where each $\epsilon_i$ is a small Gaussian random variable.

Now the question is, what if the origin is not feasible? If there exists a known feasible solution $x_0$, then let $y = x - x_0$ so that the origin becomes feasible in the new coordinates, and further split each $y_i$ into $y_i = y_i^{+} - y_i^{-}$ with $y_i^{+}, y_i^{-} \ge 0$ to guarantee that all variables are non-negative.

However, what if no feasible solution is known? The following two-phase simplex method gives an algorithm to find a feasible solution of a linear program.

  • First, convert the linear program into the standard form: minimize $c^\mathsf{T} x$ subject to $Ax = b$ and $x \ge 0$, where $b \ge 0$.
  • Next, add artificial variables $z$ for the constraints. Then the constraints are $Ax + z = b$, $x \ge 0$ and $z \ge 0$.
  • Now it is clear that there exists a trivial solution $x = 0$ and $z = b$.
  • Finally, we can check if $z = 0$ is possible by solving the linear program $\min \sum_i z_i$ subject to $Ax + z = b$, $x \ge 0$, $z \ge 0$: the original program is feasible if and only if the optimal value is $0$.
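The phase-one construction can be sketched as follows (the function name is mine, and SciPy's `linprog` stands in for an off-the-shelf solver of the auxiliary program):

```python
import numpy as np
from scipy.optimize import linprog

def find_feasible_point(A, b):
    """Phase one of the two-phase simplex method: to test whether
    {x >= 0 : A x = b} is non-empty (with b >= 0), solve the auxiliary LP
        min  sum(z)   s.t.  A x + z = b,  x >= 0,  z >= 0,
    which always has the trivial feasible point x = 0, z = b.
    The original program is feasible iff the auxiliary optimum is 0."""
    m, n = A.shape
    c_aux = np.concatenate([np.zeros(n), np.ones(m)])  # minimize sum of z
    A_eq = np.hstack([A, np.eye(m)])                   # A x + z = b
    res = linprog(c_aux, A_eq=A_eq, b_eq=b)            # default bounds: vars >= 0
    if res.fun > 1e-9:
        return None                                    # original LP is infeasible
    return res.x[:n]                                   # a feasible point, z = 0
```

The returned point can then serve as the starting vertex for phase two.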