A function $f: \mathbb{R}^n_{++} \to \mathbb{R}$ of the form \begin{eqnarray*} f(\mathbf{x}) = c x_1^{a_1} x_2^{a_2} \cdots x_n^{a_n}, \end{eqnarray*} where $c>0$ and $a_i \in \mathbb{R}$, is called a monomial.
A sum of monomials \begin{eqnarray*} f(\mathbf{x}) = \sum_{k=1}^K c_k x_1^{a_{1k}} x_2^{a_{2k}} \cdots x_n^{a_{nk}}, \end{eqnarray*} where $c_k > 0$, is called a posynomial.
Posynomials are closed under addition, multiplication, and nonnegative scaling.
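For concreteness (examples chosen here for illustration): $2.3\, x_1^2 x_2^{-0.15} x_3$ is a monomial, $0.23\, x_1 x_3 + x_2^{0.3} x_3^{-1.7}$ is a posynomial, while $2 x_1 - x_2^{0.3}$ and $-1.1\, x_1 x_2$ are neither, since negative coefficients are not allowed.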
A geometric program is of the form \begin{eqnarray*} &\text{minimize}& f_0(\mathbf{x}) \\ &\text{subject to}& f_i(\mathbf{x}) \le 1, \quad i = 1,\ldots,m \\ & & h_i(\mathbf{x}) = 1, \quad i = 1,\ldots,p, \end{eqnarray*}
where $f_0, \ldots, f_m$ are posynomials and $h_1, \ldots, h_p$ are monomials. The constraint $\mathbf{x} \succ \mathbf{0}$ is implicit.
Is GP a convex optimization problem?
With the change of variables $y_i = \ln x_i$, a monomial $f(\mathbf{x}) = c x_1^{a_1} \cdots x_n^{a_n}$ can be written as \begin{eqnarray*} f(\mathbf{x}) = f(e^{y_1}, \ldots, e^{y_n}) = c (e^{y_1})^{a_1} \cdots (e^{y_n})^{a_n} = e^{\mathbf{a}^T \mathbf{y} + b}, \end{eqnarray*} where $b = \ln c$. Similarly, we can write a posynomial as \begin{eqnarray*} f(\mathbf{x}) &=& \sum_{k=1}^K c_k x_1^{a_{1k}} x_2^{a_{2k}} \cdots x_n^{a_{nk}} \\ &=& \sum_{k=1}^K e^{\mathbf{a}_k^T \mathbf{y} + b_k}, \end{eqnarray*} where $\mathbf{a}_k = (a_{1k}, \ldots, a_{nk})$ and $b_k = \ln c_k$.
With this change of variables, the GP becomes \begin{eqnarray*} &\text{minimize}& \sum_{k=1}^{K_0} e^{\mathbf{a}_{0k}^T \mathbf{y} + b_{0k}} \\ &\text{subject to}& \sum_{k=1}^{K_i} e^{\mathbf{a}_{ik}^T \mathbf{y} + b_{ik}} \le 1, \quad i = 1,\ldots,m \\ & & e^{\mathbf{g}_i^T \mathbf{y} + h_i} = 1, \quad i=1,\ldots,p, \end{eqnarray*} where $\mathbf{a}_{ik}, \mathbf{g}_i \in \mathbb{R}^n$. Taking log of both objective and constraint functions, we obtain the geometric program in convex form \begin{eqnarray*} &\text{minimize}& \ln \left(\sum_{k=1}^{K_0} e^{\mathbf{a}_{0k}^T \mathbf{y} + b_{0k}}\right) \\ &\text{subject to}& \ln \left(\sum_{k=1}^{K_i} e^{\mathbf{a}_{ik}^T \mathbf{y} + b_{ik}}\right) \le 0, \quad i = 1,\ldots,m \\ & & \mathbf{g}_i^T \mathbf{y} + h_i = 0, \quad i=1,\ldots,p. \end{eqnarray*} Since the log-sum-exp function is convex and convexity is preserved under composition with affine mappings, the objective and inequality constraint functions are convex and the equality constraints are affine, so GP in convex form is indeed a convex optimization problem.
Mosek is capable of solving GPs. cvx has a GP mode that recognizes and transforms GP problems.
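For illustration, here is a minimal sketch of the same workflow using CVXPY's geometric-programming mode (`gp=True`), a Python analogue of cvx's GP mode; the toy problem data below are made up for the example.

```python
import cvxpy as cp

# Variables must be declared positive under the GP (log-log convex) ruleset.
x = cp.Variable(pos=True)
y = cp.Variable(pos=True)
z = cp.Variable(pos=True)

# A toy GP: maximize a monomial subject to posynomial and monomial constraints.
objective = cp.Maximize(x * y * z)
constraints = [
    4 * x * y * z + 2 * x * z <= 10,  # posynomial <= constant
    x <= 2 * y,                       # monomial <= monomial
    y <= 2 * x,
    z >= 1,
]
problem = cp.Problem(objective, constraints)
problem.solve(gp=True)   # gp=True applies the log change of variables internally
print(problem.value, x.value, y.value, z.value)
```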
Example. Logistic regression as GP. Given data $(\mathbf{x}_i, y_i)$, $i=1,\ldots,n$, where $y_i \in \{0, 1\}$ and $\mathbf{x}_i \in \mathbb{R}^p$, the likelihood of the logistic regression model is \begin{eqnarray*} \prod_{i=1}^n p_i^{y_i} (1 - p_i)^{1 - y_i} &=& \prod_{i=1}^n \left( \frac{e^{\mathbf{x}_i^T \beta}}{1 + e^{\mathbf{x}_i^T \beta}} \right)^{y_i} \left( \frac{1}{1 + e^{\mathbf{x}_i^T \beta}} \right)^{1 - y_i} \\ &=& \prod_{i: y_i = 1} e^{\mathbf{x}_i^T \beta} \prod_{i=1}^n \left( 1 + e^{\mathbf{x}_i^T \beta} \right)^{-1}. \end{eqnarray*}
Maximizing the likelihood is equivalent to minimizing its reciprocal; thus the MLE solves \begin{eqnarray*} &\text{minimize}& \prod_{i:y_i=1} e^{-\mathbf{x}_i^T \beta} \prod_{i=1}^n \left( 1 + e^{\mathbf{x}_i^T \beta} \right). \end{eqnarray*} Let $z_j = e^{\beta_j}$, $j=1,\ldots,p$. The objective becomes \begin{eqnarray*} \prod_{i:y_i=1} \prod_{j=1}^p z_j^{-x_{ij}} \prod_{i=1}^n \left( 1 + \prod_{j=1}^p z_j^{x_{ij}} \right). \end{eqnarray*} This leads to a GP \begin{eqnarray*} &\text{minimize}& \prod_{i:y_i=1} s_i \prod_{i=1}^n t_i \\ &\text{subject to}& \prod_{j=1}^p z_j^{-x_{ij}} \le s_i, \quad i = 1,\ldots,m \\ & & 1 + \prod_{j=1}^p z_j^{x_{ij}} \le t_i, \quad i = 1, \ldots, n, \end{eqnarray*} in variables $\mathbf{s} \in \mathbb{R}^{m}$, $\mathbf{t} \in \mathbb{R}^n$, and $\mathbf{z} \in \mathbb{R}^p$. Here $m$ is the number of observations with $y_i=1$.
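As a sketch of how this GP could be passed to a modeling tool, the snippet below implements the formulation above in CVXPY's GP mode; the simulated data, variable names, and problem sizes are assumptions made for illustration.

```python
import numpy as np
import cvxpy as cp

# Toy data for illustration; X (n x p design) and y (0/1 labels) are assumptions.
rng = np.random.default_rng(0)
n, p = 30, 3
X = rng.normal(size=(n, p))
y = (rng.uniform(size=n) < 0.5).astype(int)

z = cp.Variable(p, pos=True)              # z_j = exp(beta_j)
t = cp.Variable(n, pos=True)              # t_i >= 1 + prod_j z_j^{x_ij}
s = cp.Variable(int(y.sum()), pos=True)   # s_i >= prod_j z_j^{-x_ij}, for i with y_i = 1

constraints, obj, k = [], 1.0, 0
for i in range(n):
    mono = 1.0
    for j in range(p):
        mono = mono * z[j] ** X[i, j]     # monomial prod_j z_j^{x_ij}
    constraints.append(1 + mono <= t[i])  # posynomial <= monomial
    obj = obj * t[i]
    if y[i] == 1:
        inv = 1.0
        for j in range(p):
            inv = inv * z[j] ** (-X[i, j])
        constraints.append(inv <= s[k])
        obj = obj * s[k]
        k += 1

prob = cp.Problem(cp.Minimize(obj), constraints)
prob.solve(gp=True)
beta_hat = np.log(z.value)                # recover beta_j = log z_j
```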
How to incorporate the lasso penalty? Write $\beta_j = \beta_j^+ - \beta_j^-$ with $\beta_j^+, \beta_j^- \ge 0$, and let $z_j^+ = e^{\beta_j^+}$, $z_j^- = e^{\beta_j^-}$; then $z_j^+, z_j^- \ge 1$ and, at the optimum, $|\beta_j| = \beta_j^+ + \beta_j^-$. The lasso penalty thus takes the multiplicative form $e^{\lambda |\beta_j|} = e^{\lambda (\beta_j^+ + \beta_j^-)} = (z_j^+ z_j^-)^\lambda$, a monomial in $(z_j^+, z_j^-)$.
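Putting the pieces together, one way to write the lasso-penalized problem as a GP (a sketch under the substitution $e^{\beta_j} = z_j^+ / z_j^-$, with the last constraints encoding $\beta_j^\pm \ge 0$) is \begin{eqnarray*} &\text{minimize}& \prod_{i:y_i=1} s_i \prod_{i=1}^n t_i \prod_{j=1}^p (z_j^+ z_j^-)^\lambda \\ &\text{subject to}& \prod_{j=1}^p (z_j^+)^{-x_{ij}} (z_j^-)^{x_{ij}} \le s_i, \quad i = 1,\ldots,m \\ & & 1 + \prod_{j=1}^p (z_j^+)^{x_{ij}} (z_j^-)^{-x_{ij}} \le t_i, \quad i = 1,\ldots,n \\ & & (z_j^+)^{-1} \le 1, \quad (z_j^-)^{-1} \le 1, \quad j = 1,\ldots,p, \end{eqnarray*} in variables $\mathbf{s} \in \mathbb{R}^m$, $\mathbf{t} \in \mathbb{R}^n$, and $\mathbf{z}^+, \mathbf{z}^- \in \mathbb{R}^p$.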
Example. Bradley-Terry model for ranking. Suppose we observe pairwise comparison data $y_{ij}$, the number of times item $i$ beats item $j$, and model the probability that $i$ beats $j$ as $\gamma_i / (\gamma_i + \gamma_j)$ with skill parameters $\gamma_i > 0$. The likelihood is \begin{eqnarray*} \prod_{i,j} \left( \frac{\gamma_i}{\gamma_i + \gamma_j} \right)^{y_{ij}}, \end{eqnarray*} and maximizing it is equivalent to minimizing $\prod_{i,j} (1 + \gamma_i^{-1} \gamma_j)^{y_{ij}}$. The MLE is thus solved by the GP \begin{eqnarray*} &\text{minimize}& \prod_{i,j} t_{ij}^{y_{ij}} \\ &\text{subject to}& 1 + \gamma_i^{-1} \gamma_j \le t_{ij} \end{eqnarray*} in variables $\gamma \in \mathbb{R}^n$ and $\mathbf{t} \in \mathbb{R}^{n^2}$.
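A minimal CVXPY sketch of this GP, assuming a hypothetical win-count matrix `Y` and fixing $\gamma_1 = 1$ to remove the scale invariance of the likelihood:

```python
import numpy as np
import cvxpy as cp

# Hypothetical win counts: Y[i, j] = number of times item i beats item j.
Y = np.array([[0., 3., 1.],
              [1., 0., 2.],
              [2., 1., 0.]])
n = Y.shape[0]

gamma = cp.Variable(n, pos=True)
t = cp.Variable((n, n), pos=True)

constraints = [gamma[0] == 1.0]   # fix the scale; the likelihood depends only on ratios
obj = 1.0
for i in range(n):
    for j in range(n):
        if i != j and Y[i, j] > 0:
            constraints.append(1 + gamma[j] / gamma[i] <= t[i, j])  # posynomial <= monomial
            obj = obj * t[i, j] ** Y[i, j]                          # monomial objective term

prob = cp.Problem(cp.Minimize(obj), constraints)
prob.solve(gp=True)
print(gamma.value)   # estimated skills, normalized so that gamma_1 = 1
```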