Applications: data fitting, missile guidance, target tracking, portfolio optimization, in-painting.
Optimal value: $$ p^* = \inf \{ f_0(x) \ | \ f_i(x)\le 0, i=1,\dots,m, h_i(x)=0, i=1,\dots,p \} $$
Optimal points: $x$ is an optimal point if $x$ is feasible and $f_0(x) = p^*$.
A convex optimization problem:
$$ \begin{aligned} \underset{x}{\minimize} \quad & f(x) \\ \text{subject to} \quad & x \in \mathcal{C} \end{aligned} $$or
$$ \begin{aligned} \underset{x}{\minimize} \quad & f_0(x) \\ \text{subject to} \quad & f_i(x) \le 0, & i=1,\dots,m \\ & a_i^Tx = b_i, & i=1,\dots,p \end{aligned} $$Usually written as
$$ \begin{aligned} \underset{x}{\minimize} \quad & f_0(x) \\ \text{subject to} \quad & f_i(x) \le 0, &i=1,\dots,m \\ & Ax = b \end{aligned} $$Consider the following nonconvex problem
$$ \begin{aligned} \underset{x}{\minimize} \quad & x_1^2 + x_2^2 \\ \text{subject to} \quad & \frac{x_1}{1+x_2^2} \le 0 \\ & \left(x_1+x_2\right)^2=0 \end{aligned} $$which is equivalent to the following convex problem
$$ \begin{aligned} \underset{x}{\minimize} \quad & x_1^2 + x_2^2 \\ \text{subject to} \quad & x_1 \le 0 \\ & x_1+x_2=0 \end{aligned} $$Note that this equivalent convexification is not always possible.
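As a sanity check, the convex reformulation can be handed to an off-the-shelf solver. A minimal sketch, assuming the cvxpy package (not part of these notes); the optimum is $x^*=(0,0)$ with value $0$.

```python
# Solve the convexified problem; the optimum should be x = (0, 0), value 0.
import cvxpy as cp

x = cp.Variable(2)
prob = cp.Problem(
    cp.Minimize(cp.sum_squares(x)),   # x1^2 + x2^2
    [x[0] <= 0, x[0] + x[1] == 0],
)
prob.solve()
print(prob.value, x.value)            # ~0.0, ~[0, 0]
```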
We can show that any locally optimal point of a convex problem is globally optimal. The following is a proof by contradiction.
Suppose $x$ is locally optimal but not globally, so there exists a feasible $y$ with $f_0(y)<f_0(x)$.
The point $x$ being locally optimal means there exists a ball of radius $R>0$ around $x$ such that every feasible $z$ with $\|z-x\|_2 \le R$ satisfies $f_0(z) \ge f_0(x)$. The feasible point $y$ must then lie outside the ball, $\|y-x\|_2>R$, since we assumed $f_0(y)<f_0(x)$.
Now consider $z = \theta y + (1-\theta)x$ with $\theta$ satisfying $0<\theta < R/\|y-x\|_2<1$.
This implies that $z$ is feasible, since $z$ is a convex combination of two feasible points, and therefore $f_0(z) \le \theta f_0(y) + (1-\theta)f_0(x) < f_0(x)$.
On the other hand, $\|z-x\|_2 = \theta\|y-x\|_2 < R$, so $z$ lies inside the ball, and local optimality gives $f_0(z) \ge f_0(x)$.
This contradicts our assumption that $x$ is locally optimal.
Unconstrained problem:
$x$ is optimal for
$$ \begin{aligned} \underset{x}{\minimize} \quad & f_0(x) \end{aligned} $$if and only if
$$ x\in\dom f_0, \quad \nabla f_0(x)=0 $$Equality constrained problem:
$x$ is optimal for
$$ \begin{aligned} \underset{x}{\minimize} \quad & f_0(x) \\ \text{subject to} \quad & Ax=b \end{aligned} $$if and only if there exists $\nu$ such that
$$ x\in\dom f_0, \quad Ax=b, \quad \nabla f_0(x)+ A^T \nu =0 $$Inequality constrained problem:
$x$ is optimal for
$$ \begin{aligned} \underset{x}{\minimize} \quad & f_0(x) \\ \text{subject to} \quad & f_i(x)\le 0, \quad\text{ for }i=1,\dots,m \end{aligned} $$if and only if there exists $\lambda_i\ge 0$, $i=1,\dots,m$, such that
$$ x\in\dom f_0, \quad f_i(x)\le 0,\ \lambda_i f_i(x) = 0,\ i=1,\dots,m, \quad \nabla f_0(x) + \sum_{i=1}^m \lambda_i \nabla f_i(x) =0 $$
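These conditions are easy to verify numerically with a solver's dual variables. A minimal sketch, assuming cvxpy and a made-up problem $\min\, x_1^2+x_2^2$ subject to $f_1(x) = 1-x_1-x_2 \le 0$ (optimum $x=(1/2,1/2)$ with $\lambda=1$):

```python
# Verify stationarity and complementary slackness at the solver's optimum.
import cvxpy as cp
import numpy as np

x = cp.Variable(2)
con = [1 - x[0] - x[1] <= 0]              # f_1(x) <= 0
cp.Problem(cp.Minimize(cp.sum_squares(x)), con).solve()

lam = con[0].dual_value                   # lambda >= 0
grad_f0 = 2 * x.value                     # gradient of x1^2 + x2^2
grad_f1 = np.array([-1.0, -1.0])          # gradient of 1 - x1 - x2
print(grad_f0 + lam * grad_f1)            # ~[0, 0]  (stationarity)
print(lam * (1 - x.value.sum()))          # ~0       (complementary slackness)
```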
Eliminating equality constraints
$$ \begin{aligned} \underset{x}{\minimize} \quad & f_0(x) \\ \text{subject to} \quad & f_i(x) \le 0, & i=1,\dots,m \\ & Ax = b \end{aligned} $$is equivalent to
$$ \begin{aligned} \underset{z}{\minimize} \quad & f_0(Fz+x_0) \\ \text{subject to} \quad & f_i(Fz+x_0) \le 0, & i=1,\dots,m \end{aligned} $$where $F$ and $x_0$ are such that
$$ Ax=b \quad \Longleftrightarrow \quad x=Fz+x_0 \text{ for some } z $$In other words, $x_0$ is some particular solution of $Ax=b$, and the columns of $F$ span the null space of $A$, that is, $\mc{N}(A) = \range(F)$.
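In practice $x_0$ and $F$ can be computed with standard linear algebra routines. A sketch assuming numpy and scipy, with random data for illustration:

```python
# Build the change of variables x = F z + x0 that eliminates Ax = b.
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))              # fat, full row rank
b = rng.standard_normal(3)

x0, *_ = np.linalg.lstsq(A, b, rcond=None)   # a particular solution of Ax = b
F = null_space(A)                            # columns span null(A)

z = rng.standard_normal(F.shape[1])          # any z gives a feasible x
print(np.allclose(A @ (F @ z + x0), b))      # True
```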
Introducing equality constraints
$$ \begin{aligned} \underset{x}{\minimize} \quad & f_0(A_0 x+b_0) \\ \text{subject to} \quad & f_i(A_i x+b_i) \le 0, & i=1,\dots,m \end{aligned} $$is equivalent to
$$ \begin{aligned} \underset{x,y_i}{\minimize} \quad & f_0(y_0) \\ \text{subject to} \quad & f_i(y_i) \le 0, & i=1,\dots,m \\ & y_i = A_i x + b_i, & i=0,1,\dots,m \end{aligned} $$Introducing slack variables
$$ \begin{aligned} \underset{x}{\minimize} \quad & f_0(x) \\ \text{subject to} \quad & a_i^T x \le b_i, & i=1,\dots,m \end{aligned} $$is equivalent to
$$ \begin{aligned} \underset{x, s}{\minimize} \quad & f_0(x) \\ \text{subject to} \quad & a_i^T x +s_i = b_i, & i=1,\dots,m \\ & s_i\ge 0, & i=1,\dots,m \end{aligned} $$Epigraph form
$$ \begin{aligned} \underset{x}{\minimize} \quad & f_0(x) \\ \text{subject to} \quad & f_i(x) \le 0, & i=1,\dots,m \\ & Ax = b \end{aligned} $$is equivalent to
$$ \begin{aligned} \underset{x,t}{\minimize} \quad & t \\ \text{subject to} \quad & f_0(x) \le t \\ & f_i(x) \le 0, & i=1,\dots,m \\ & Ax = b \end{aligned} $$Indicator functions
$$ \begin{aligned} \underset{x}{\minimize} \quad & f_0(x) \\ \text{subject to} \quad & x \in C, & C \text{ convex} \end{aligned} $$is equivalent to
$$ \begin{aligned} \underset{x}{\minimize} \quad & f_0(x) + I_C(x) \end{aligned} $$where
$$ I_C(x) = \begin{cases} 0 & \quad \text{if } x \in C \\ +\infty & \quad \text{otherwise} \end{cases} $$Diet problem:
Choose quantities $x_1,\dots, x_n$ of $n$ foods, where one unit of food $j$ costs $c_j$ and contains the amount $G_{ij}$ of nutrient $i$; a healthy diet requires at least $h_i$ units of nutrient $i$.
The cheapest healthy diet can be found by
$$ \begin{aligned} \underset{x}{\minimize} \quad & c^T x \\ \text{subject to} \quad & Gx \succeq h \\ & x \succeq 0 \end{aligned} $$
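A small numerical sketch of the diet LP, assuming cvxpy; all data below are invented for illustration:

```python
# Hypothetical diet problem: 4 foods, 3 nutrients (all numbers made up).
import cvxpy as cp
import numpy as np

c = np.array([2.0, 3.5, 1.0, 4.0])       # cost per unit of each food
G = np.array([[10.0, 2.0, 4.0, 8.0],     # nutrient i per unit of food j
              [ 1.0, 9.0, 3.0, 2.0],
              [ 5.0, 5.0, 5.0, 5.0]])
h = np.array([50.0, 20.0, 30.0])         # minimum required amount of each nutrient

x = cp.Variable(4)
prob = cp.Problem(cp.Minimize(c @ x), [G @ x >= h, x >= 0])
prob.solve()
print(prob.value, x.value)
```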
Piecewise-linear minimization
$$ \begin{aligned} \underset{x}{\minimize} \quad & \underset{i=1,\dots,m}{\max}\left( a_i^T x + b_i \right) \end{aligned} $$is equivalent to
$$ \begin{aligned} \underset{x,t}{\minimize} \quad & t \\ \text{subject to} \quad & a_i^T x + b_i \le t, & i=1,\dots,m \end{aligned} $$
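The epigraph reformulation maps directly onto a solver. A sketch assuming cvxpy with random data; the epigraph form and the direct $\max$ formulation should return the same value (a box constraint, added here only for illustration, keeps the random instance bounded):

```python
# Piecewise-linear minimization via the epigraph reformulation.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 5
A = rng.standard_normal((m, n))           # rows are a_i^T
b = rng.standard_normal(m)

x, t = cp.Variable(n), cp.Variable()
box = [cp.norm(x, "inf") <= 1]            # keeps the random instance bounded
epi = cp.Problem(cp.Minimize(t), [A @ x + b <= t] + box)
direct = cp.Problem(cp.Minimize(cp.max(A @ x + b)), box)
print(epi.solve(), direct.solve())        # same optimal value
```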
Linear-fractional program
$$ \begin{aligned} \underset{x}{\minimize} \quad & \frac{c^Tx+d}{e^Tx+f} \\ \text{subject to} \quad & Gx \preceq h \\ & e^Tx+f>0 \\ & Ax=b \end{aligned} $$is equivalent to
$$ \begin{aligned} \underset{y,z}{\minimize} \quad & {c^Ty+dz}\\ \text{subject to} \quad & Gy \preceq hz \\ & e^Ty+fz=1 \\ & Ay=bz \\ & z\ge 0 \end{aligned} $$Quadratic program (QP)
$$ \begin{aligned} \underset{x}{\minimize} \quad & \frac{1}{2}x^TPx + q^Tx + r \\ \text{subject to} \quad & Gx \preceq h \\ & Ax=b \end{aligned} $$where $P\in\SM^{n}_+$, thus minimizing a convex quadratic function over a polyhedron.
Least squares problem
$$ \begin{aligned} \underset{x}{\minimize} \quad& \| Ax-b \|_2^2 \end{aligned} $$with $A$ being tall and full rank.
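For a tall, full-rank $A$ the solution has the well-known closed form $x^* = (A^TA)^{-1}A^Tb$. A quick numpy check on random data:

```python
# Least squares: normal equations vs. numpy's solver.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))              # tall, full rank
b = rng.standard_normal(20)

x_closed = np.linalg.solve(A.T @ A, A.T @ b)  # (A^T A)^{-1} A^T b
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_closed, x_lstsq))         # True
```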
Regularized least squares problem
$$ \begin{aligned} \underset{x}{\minimize} \quad& \| Ax-b \|_2^2 + \lambda \|x\|_2^2 \end{aligned} $$with some $\lambda>0$. This is equivalent to
$$ \begin{aligned} \underset{x}{\minimize} \quad& \left\| \bmat{A \\ \sqrt{\lambda}I}x-\bmat{b\\0} \right\|_2^2 \end{aligned} $$Least norm problem
$$ \begin{aligned} \underset{x}{\minimize} \quad& \|x \|_2^2 \\ \text{subject to} \quad& Ax=b \end{aligned} $$with $A$ being fat and full rank.
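Analogously, for a fat, full-rank $A$ the least norm solution has the closed form $x^* = A^T(AA^T)^{-1}b$. A numpy check on random data:

```python
# Least norm: closed form vs. the pseudoinverse solution.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 8))                   # fat, full rank
b = rng.standard_normal(3)

x_star = A.T @ np.linalg.solve(A @ A.T, b)        # A^T (A A^T)^{-1} b
print(np.allclose(A @ x_star, b))                 # feasible
print(np.allclose(x_star, np.linalg.pinv(A) @ b)) # minimum-norm solution
```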
General norm minimization with equality constraints
$$ \begin{aligned} \underset{x}{\minimize} \quad& \| Ax-b \|_2^2\\ \text{subject to} \quad& Cx=d \end{aligned} $$with $C$ being full row rank and $$ \bmat{A \\ C} $$ being full column rank.
The solution is characterized by the KKT system
$$ \bmat{2A^TA & C^T \\ C & 0} \bmat{x^* \\ \nu^*} = \bmat{2A^Tb \\ d} $$or explicitly
$$ x^* = (A^T A)^{-1} \left( A^T b - C^T \left(C(A^T A)^{-1}C^T \right)^{-1} \left(C(A^T A)^{-1}A^T b - d\right)\right) $$when $A$ is tall and full rank.
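In practice one solves the KKT system directly rather than forming the explicit expression. A numpy sketch on random data checking that the two agree:

```python
# Equality-constrained least squares: KKT system vs. explicit formula.
import numpy as np

rng = np.random.default_rng(0)
n, p = 4, 2
A = rng.standard_normal((10, n)); b = rng.standard_normal(10)
C = rng.standard_normal((p, n));  d = rng.standard_normal(p)

K = np.block([[2 * A.T @ A, C.T],
              [C, np.zeros((p, p))]])
x_kkt = np.linalg.solve(K, np.concatenate([2 * A.T @ b, d]))[:n]

AtA_inv = np.linalg.inv(A.T @ A)
S = C @ AtA_inv @ C.T
x_exp = AtA_inv @ (A.T @ b - C.T @ np.linalg.solve(S, C @ AtA_inv @ A.T @ b - d))
print(np.allclose(x_kkt, x_exp))   # True
```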
Linear program with random cost
$$ \begin{aligned} \underset{x}{\minimize} \quad& \E\left(c^T x\right) + \gamma \var\left(c^Tx\right) \\ \text{subject to} \quad& Gx\preceq h \\ & Ax=b \end{aligned} $$where the cost vector $c$ is uncertain, with mean $\E\left(c\right) =\bar{c}$ and covariance $\E\left((c-\bar{c})(c-\bar{c})^T\right)=\Sigma$, and $\gamma>0$ trades off expected cost against variance. Then the problem is the QP
$$ \begin{aligned} \underset{x}{\minimize} \quad& \bar{c}^T x + \gamma x^T\Sigma x \\ \text{subject to} \quad& Gx\preceq h \\ & Ax=b \end{aligned} $$
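A cvxpy sketch of this risk-adjusted problem; the data and the constraints standing in for $Gx\preceq h$, $Ax=b$ are invented for illustration:

```python
# Expected cost plus gamma times variance, as the QP above.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n = 5
c_bar = rng.standard_normal(n)
M = rng.standard_normal((n, n))
Sigma = M @ M.T                          # a valid (PSD) covariance
gamma = 0.1

x = cp.Variable(n)
prob = cp.Problem(
    cp.Minimize(c_bar @ x + gamma * cp.quad_form(x, Sigma)),
    [cp.sum(x) == 1, x >= 0],            # illustrative stand-ins for Gx <= h, Ax = b
)
prob.solve()
print(prob.value, x.value)
```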
Quadratically constrained quadratic program (QCQP)
$$ \begin{aligned} \underset{x}{\minimize} \quad & \frac{1}{2}x^TP_0x + q_0^Tx + r_0 \\ \text{subject to} \quad & \frac{1}{2}x^TP_ix + q_i^Tx + r_i \le 0, & i=1,\dots,m \\ & Ax=b \end{aligned} $$where $P_i\in\SM^{n}_+$. The objective and constraints are convex quadratic. If $P_i\in\SM^{n}_{++}$, the feasible region is the intersection of $m$ ellipsoids and an affine set.
Second-order cone program (SOCP)
$$ \begin{aligned} \underset{x}{\minimize} \quad & f^Tx \\ \text{subject to} \quad & \left\| A_ix+b_i \right\|_2 \le c_i^Tx+d_i, & i=1,\dots,m \\ & Fx=g \end{aligned} $$where $A_i\in\R^{n_i \times n}$ and $F\in\R^{p\times n}$.
The parameters in optimization problems are often uncertain. For example, in the LP
$$ \begin{aligned} \underset{x}{\minimize} \quad & c^Tx\\ \text{subject to} \quad & a^T x \le b \end{aligned} $$the parameters $c, a, b$ can be uncertain. Suppose, for example, that $a$ is only known to lie in the ellipsoid
$$ a \in \mc{E} = \{ \bar{a}+ Pu\ | \ \|u\|_2\le 1 \} $$We require that the constraint hold for all $a \in \mc{E}$,
$$ \begin{aligned} \underset{x}{\minimize} \quad & c^Tx\\ \text{subject to} \quad & a^T x \le b, \quad \forall a\in\mc{E} \end{aligned} $$which is equivalent to
$$ \begin{aligned} \underset{x}{\minimize} \quad & c^Tx\\ \text{subject to} \quad & \sup_{\|u\|_2\le 1}\left(\bar{a}+Pu\right)^Tx\le b \end{aligned} $$and this in turn is equivalent to the following SOCP.
$$ \begin{aligned} \underset{x}{\minimize} \quad & c^Tx\\ \text{subject to} \quad & \bar{a}^T x + \|P^Tx\|_2 \le b \end{aligned} $$
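A cvxpy sketch of this robust LP, writing the constraint exactly as derived; the data and the box constraint (added only to keep the made-up instance bounded) are invented:

```python
# Robust LP constraint: a_bar^T x + ||P^T x||_2 <= b.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n = 4
c = rng.standard_normal(n)
a_bar = rng.standard_normal(n)
P = 0.1 * rng.standard_normal((n, n))    # shape of the uncertainty ellipsoid
b = 1.0

x = cp.Variable(n)
prob = cp.Problem(
    cp.Minimize(c @ x),
    [a_bar @ x + cp.norm(P.T @ x, 2) <= b,
     cp.norm(x, "inf") <= 1],            # box keeps the instance bounded
)
prob.solve()
print(prob.value, x.value)
```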
Geometric program (GP)
$$ \begin{aligned} \underset{x}{\minimize} \quad & f_0(x) \\ \text{subject to} \quad & f_i(x) \le 1, & i=1,\dots,m \\ & h_i(x) = 1, & i=1,\dots,p \end{aligned} $$where the $f_i$ are posynomials, $f_i(x) = \sum_k c_{ik}\, x^{a_{ik}}$ with $c_{ik}>0$ and $x^a$ denoting $\prod_j x_j^{a_j}$, the $h_i$ are monomials (single such terms), and everything is defined for $x\succ 0$. Changes of variables to $y_i= \log x_i$ and taking the logarithms of the cost and the constraints leads to
$$ \begin{aligned} \underset{y}{\minimize} \quad & \log \left( \sum_{k} \exp\left(a^T_{0k}y + b_{0k}\right)\right)\\ \text{subject to} \quad & \log \left( \sum_{k} \exp\left(a^T_{ik}y + b_{ik}\right)\right)\le 0, \quad i=1,\dots,m\\ \quad & Gy+d = 0 \end{aligned} $$with $b_{ik} = \log c_{ik}$; the log-sum-exp functions are convex, so this is a convex problem.
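The convex form maps directly onto cvxpy's log_sum_exp atom. A sketch with random data; $b_1$ is chosen so that $y=0$ is strictly feasible, and the box constraint only keeps the made-up instance bounded:

```python
# GP in convex form: minimize log-sum-exp of affine expressions.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, K = 3, 4
A0 = rng.standard_normal((K, n)); b0 = rng.standard_normal(K)
A1 = rng.standard_normal((K, n))
b1 = -np.log(K) - np.abs(rng.standard_normal(K))  # y = 0 is strictly feasible

y = cp.Variable(n)
prob = cp.Problem(
    cp.Minimize(cp.log_sum_exp(A0 @ y + b0)),
    [cp.log_sum_exp(A1 @ y + b1) <= 0,
     cp.norm(y, "inf") <= 1],            # box keeps the instance bounded
)
prob.solve()
print(prob.value, y.value)
```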
Semidefinite program (SDP)
$$ \begin{aligned} \underset{x}{\minimize} \quad & c^Tx \\ \text{subject to} \quad & x_1F_1 + \cdots + x_nF_n + G \preceq 0 \\ & Ax=b \end{aligned} $$with $F_i, G \in\SM^k$. The inequality is a linear matrix inequality (LMI).
Eigenvalue minimization
$$ \begin{aligned} \underset{x}{\minimize} \quad& \lambda_\max \left( A(x) \right) \end{aligned} $$where $A(x) = A_0 + x_1A_1 + \cdots + x_nA_n$ (with $A_i\in\SM^k$). This is equivalent to
$$ \begin{aligned} \underset{x,t}{\minimize} \quad& t \\ \text{subject to} \quad& \lambda_\max \left( A(x) \right) \le t \end{aligned} $$and again equivalent to
$$ \begin{aligned} \underset{x,t}{\minimize} \quad& t \\ \text{subject to} \quad& A(x) \preceq tI \end{aligned} $$Matrix norm minimization
$$ \begin{aligned} \underset{x}{\minimize} \quad& \| A(x) \|_2 = \left(\lambda_\max\left(A(x)^TA(x)\right)\right)^{1/2} \end{aligned} $$where $A(x) = A_0 + x_1A_1 + \cdots + x_nA_n$ (with $A_i\in\R^{p\times q}$). This is equivalent to
$$ \begin{aligned} \underset{x,t}{\minimize} \quad& t \\ \text{subject to} \quad& \lambda_\max \left( A(x)^TA(x) \right) \le t^2 \end{aligned} $$and
$$ \begin{aligned} \underset{x,t}{\minimize} \quad& t \\ \text{subject to} \quad& A(x)^TA(x) \preceq t^2I \end{aligned} $$and
$$ \begin{aligned} \underset{x,t}{\minimize} \quad& t \\ \text{subject to} \quad& \bmat{tI & A(x) \\ A(x)^T & tI} \succeq 0 \end{aligned} $$
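To close, a cvxpy sketch of eigenvalue minimization with random symmetric data; the symmetric variable $Z$ stands in for $A(x)$, and the lambda_max atom is reduced by the solver to an LMI of the form above:

```python
# Eigenvalue minimization: min_x lambda_max(A0 + x1*A1 + x2*A2) as an SDP.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
k, n = 4, 2
def sym(M):
    return (M + M.T) / 2                       # random symmetric matrices
A = [sym(rng.standard_normal((k, k))) for _ in range(n + 1)]

x = cp.Variable(n)
Z = cp.Variable((k, k), symmetric=True)        # Z stands in for A(x)
prob = cp.Problem(
    cp.Minimize(cp.lambda_max(Z)),
    [Z == A[0] + x[0] * A[1] + x[1] * A[2]],
)
prob.solve()
print(prob.value, x.value)
```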