For convex sets, we have seen that if
If
Furthermore, if a two-dimensional function
We can verify the convexity of objective functions via Jensen's inequality.
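For concreteness, the basic two-point form of Jensen's inequality, which coincides with the definition of convexity, reads
\[
f(\theta x + (1-\theta) y) \le \theta f(x) + (1-\theta) f(y)
\quad \text{for all } x, y \in \operatorname{dom} f \text{ and } \theta \in [0,1].
\]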
If
Furthermore, if
The proof is immediate by noting that the epigraph of the pointwise maximum function is the intersection of the epigraphs of all the component functions.
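Concretely, if $f(x) = \max_i f_i(x)$, then
\[
\operatorname{epi} f = \bigcap_i \operatorname{epi} f_i,
\]
because $t \ge \max_i f_i(x)$ holds if and only if $t \ge f_i(x)$ for every $i$. Since each $\operatorname{epi} f_i$ is convex and an intersection of convex sets is convex, $f$ is convex.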
Suppose
By the triangle inequality, it holds that
Given two convex and differentiable functions
When
We only prove case 2; the other three cases can reuse this argument. If
Note that for the other four cases, we cannot determine whether
We should say more about the log-sum-exp function. This function is very useful for approximately computing the maximum of a finite collection of numbers.
We usually hope the objective function of an optimization problem is differentiable. However, the pointwise maximum is not differentiable everywhere, and the log-sum-exp function provides a smooth approximation of it, as the numerical sketch below illustrates.
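As a quick numerical illustration (a minimal NumPy sketch, not part of the original notes; the helper log_sum_exp is defined here for the example), the following compares the log-sum-exp value with the true maximum and checks the standard sandwich bound $\max_i x_i \le \operatorname{LSE}(x) \le \max_i x_i + \log n$:

\begin{verbatim}
import numpy as np

def log_sum_exp(x):
    # Shift by the maximum for numerical stability:
    # LSE(x) = m + log(sum(exp(x - m))) with m = max(x).
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

x = np.array([1.0, 3.0, 2.5, -0.5])
lse = log_sum_exp(x)
mx = x.max()
n = x.size

print(lse, mx)                      # LSE lies slightly above the max
assert mx <= lse <= mx + np.log(n)  # the sandwich bound above
\end{verbatim}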
Given a set of points
Suppose
Suppose
We prove this theorem by verifying Jensen's inequality. In other words, we want to show that
Now we consider the problem of verifying the triangle inequality for general
To verify the triangle inequality for all
Let
Recall the details of the proof of
When considering general
Let
Without loss of generality, we assume that
We first claim that
Next, applying this claim to
Now we are going to show the triangle inequality for
For any two vectors
Assuming
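To summarize this subsection (assuming, as the surrounding discussion indicates, that the norm in question is the $\ell_p$ norm), the result being established is Minkowski's inequality: for all $x, y \in \mathbb{R}^n$ and $p \ge 1$,
\[
\Big(\sum_{i=1}^n |x_i + y_i|^p\Big)^{1/p}
\le \Big(\sum_{i=1}^n |x_i|^p\Big)^{1/p} + \Big(\sum_{i=1}^n |y_i|^p\Big)^{1/p},
\]
which is exactly the triangle inequality $\|x + y\|_p \le \|x\|_p + \|y\|_p$.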
After defining and discussing properties of convex sets and convex functions, we now introduce the types of optimization problems we consider in this course.
Recall that, in general, an optimization problem asks for the minimum value of
The following problem
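One conventional way to write such a problem (using the standard names $f_0$ for the objective, $f_i$ for the inequality constraints, and $h_j$ for the equality constraints, which may differ from the notation originally used here) is
\[
\begin{array}{ll}
\text{minimize} & f_0(x) \\
\text{subject to} & f_i(x) \le 0, \quad i = 1, \dots, m, \\
& h_j(x) = 0, \quad j = 1, \dots, p.
\end{array}
\]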
For convenience, we usually allow
In particular, in this course we mainly consider convex optimization.
Given an optimization problem
Clearly, the domain of
We also note that the feasible set
For any two optimal solutions
In fact, we can show that the
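Assuming the claim here is that the set of optimal solutions is convex (as the context suggests), the argument runs as follows: let $x^\star$ and $y^\star$ be optimal with common value $p^\star$, and let $\theta \in [0,1]$. The point $z = \theta x^\star + (1-\theta) y^\star$ is feasible because the feasible set is convex, and by Jensen's inequality
\[
f(z) \le \theta f(x^\star) + (1-\theta) f(y^\star) = p^\star,
\]
so $f(z) = p^\star$ and $z$ is optimal as well.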
Similarly, we can also define the
Thus, another proof of item
Now we can say that every convex optimization problem computes the minimum value of a convex function over a convex set. However, the converse is not true: minimizing a convex function over a convex set is not always a convex optimization problem, because the standard form also requires the constraint functions themselves to be convex (and the equality-constraint functions affine), not merely the feasible set to be convex. Consider the following example:
The following optimization problem has a convex objective function and a convex feasible set, but it is not a convex optimization problem.
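One standard example of this phenomenon (from Boyd and Vandenberghe; the example in the original notes may differ) is
\[
\begin{array}{ll}
\text{minimize} & x_1^2 + x_2^2 \\
\text{subject to} & x_1 / (1 + x_2^2) \le 0, \\
& (x_1 + x_2)^2 = 0.
\end{array}
\]
The feasible set $\{(x_1, x_2) : x_1 \le 0,\ x_1 = -x_2\}$ is convex and the objective is convex, yet the inequality-constraint function is not convex and the equality-constraint function is not affine, so the problem is not convex in the standard sense.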
Here are some canonical types of convex optimization problems.
A linear program (LP) is a convex optimization problem in which the objective function and constraint functions are all affine (linear).
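As a minimal computational sketch (using SciPy's linprog on a made-up instance, purely for illustration):

\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

# minimize c^T x  subject to  A_ub x <= b_ub,  x >= 0
c = np.array([1.0, 2.0])
A_ub = np.array([[-1.0, -1.0]])   # encodes x1 + x2 >= 1
b_ub = np.array([-1.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)  # expected optimum: x = (1, 0), value 1
\end{verbatim}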
A quadratic program (QP) is a convex optimization problem in which the objective function is a convex quadratic and the constraint functions are all affine.
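In one common standard form (a conventional choice of notation),
\[
\begin{array}{ll}
\text{minimize} & \tfrac{1}{2} x^\top P x + q^\top x + r \\
\text{subject to} & Gx \le h, \\
& Ax = b,
\end{array}
\]
where $P \succeq 0$ (positive semidefinite), so that the objective is convex.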
A quadratically constrained quadratic program (QCQP) is a convex program in which the objective function and inequality-constraint functions are all convex quadratic functions.
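Analogously (again in one conventional notation), a QCQP can be written as
\[
\begin{array}{ll}
\text{minimize} & \tfrac{1}{2} x^\top P_0 x + q_0^\top x + r_0 \\
\text{subject to} & \tfrac{1}{2} x^\top P_i x + q_i^\top x + r_i \le 0, \quad i = 1, \dots, m, \\
& Ax = b,
\end{array}
\]
with $P_i \succeq 0$ for $i = 0, 1, \dots, m$.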
Linear least-squares regression is a typical QP (or QCQP). Given a data matrix $A \in \mathbb{R}^{m \times n}$ and a target vector $b \in \mathbb{R}^m$, we seek $x \in \mathbb{R}^n$ minimizing $\|Ax - b\|_2^2$.
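A minimal numerical sketch (NumPy on synthetic data, not from the original notes) compares a dedicated least-squares solver with the normal-equations solution $x = (A^\top A)^{-1} A^\top b$:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))  # synthetic data matrix
b = rng.standard_normal(50)       # synthetic targets

# Least-squares solution via a dedicated solver ...
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

# ... and via the normal equations A^T A x = A^T b
# (valid here since A has full column rank).
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

assert np.allclose(x_lstsq, x_normal)
print(x_lstsq)
\end{verbatim}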