DECOPT – DEcomposition Convex OPTimization

Overview

DECOPT is a MATLAB software package for solving the following generic constrained convex optimization problem:

$$\displaystyle\min_{\mathbf{x}\in\mathbb{R}^p, \mathbf{r}\in\mathbb{R}^n}\Big\{ f(\mathbf{x}) + g(\mathbf{r}) : \mathbf{A}\mathbf{x} - \mathbf{r} = \mathbf{b}, ~\mathbf{l} \leq \mathbf{x} \leq \mathbf{u} \Big\},~~\textrm{(CP)}$$

where $f$ and $g$ are two proper, closed and convex functions, $\mathbf{A}\in\mathbb{R}^{n\times p}$ and $\mathbf{b}\in\mathbb{R}^n$ are given problem data, and $\mathbf{l}, \mathbf{u}\in\mathbb{R}^p$ are lower and upper bounds on $\mathbf{x}$.
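For instance, the basis pursuit problem listed below is recovered from (CP) by the choices

$$f(\mathbf{x}) = \Vert\mathbf{x}\Vert_1, \qquad g(\mathbf{r}) = \delta_{\{\mathbf{0}\}}(\mathbf{r}) := \begin{cases} 0, & \mathbf{r} = \mathbf{0}, \\ +\infty, & \text{otherwise}, \end{cases}$$

so that the indicator function $\delta_{\{\mathbf{0}\}}$ forces $\mathbf{r} = \mathbf{0}$ and the constraint $\mathbf{A}\mathbf{x} - \mathbf{r} = \mathbf{b}$ reduces to $\mathbf{A}\mathbf{x} = \mathbf{b}$.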

Here, we assume that $f$ and $g$ are proximally tractable. By proximal tractability, we mean that the proximal operator $\mathrm{prox}_{\varphi}$ of a given proper, closed and convex function $\varphi$, defined as

$$\mathrm{prox}_{\varphi}(\mathbf{x}) := \mathrm{arg}\min_{\mathbf{y}\in\mathbb{R}^p}\big\{ \varphi(\mathbf{y}) + (1/2)\Vert \mathbf{y} - \mathbf{x}\Vert_2^2\big\},$$

can be computed efficiently (e.g., via a closed-form solution or a polynomial-time algorithm).
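As a concrete illustration, the prox of $\varphi(\mathbf{x}) = \lambda\Vert\mathbf{x}\Vert_1$ admits the well-known closed-form soft-thresholding solution. The following MATLAB one-liner (an illustrative helper, not part of DECOPT's interface) evaluates it componentwise:

    % Soft-thresholding: the closed-form prox of phi(x) = lambda*||x||_1,
    % applied componentwise (an illustrative helper, not part of DECOPT).
    prox_l1 = @(x, lambda) sign(x) .* max(abs(x) - lambda, 0);

    % Example: shrink each entry toward zero by lambda = 0.5.
    y = prox_l1([1.2; -0.3; 0.7], 0.5);   % y = [0.7; 0; 0.2]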

DECOPT was implemented by Quoc Tran-Dinh at the Laboratory for Information and Inference Systems (LIONS), EPFL, Lausanne, Switzerland, in joint work with Volkan Cevher (LIONS, EPFL).

DECOPT aims to solve (CP) for any convex functions $f$ and $g$ whose proximal operators are provided. The following special cases have been customized in DECOPT:

  • Basis pursuit: 

$$\min_{\mathbf{x}}\big\{ \Vert\mathbf{x}\Vert_1 : \mathbf{A}\mathbf{x} = \mathbf{b}, \mathbf{l} \leq \mathbf{x} \leq\mathbf{u} \big\}.$$

  • $\ell_1/\ell_2$-unconstrained LASSO problem:

$$\min_{\mathbf{x}} \frac{1}{2}\Vert\mathbf{A}\mathbf{x} - \mathbf{b}\Vert_2^2 + \lambda \Vert\mathbf{x}\Vert_1.$$

  • $\ell_1/\ell_1$-convex problem:

$$\min_{\mathbf{x}}\Vert \mathbf{A}\mathbf{x} - \mathbf{b}\Vert_1 + \lambda\Vert\mathbf{x}\Vert_1.$$

  • Square-root $\ell_1/\ell_2$ LASSO problem:

$$\min_{\mathbf{x}}\Vert\mathbf{A}\mathbf{x} - \mathbf{b}\Vert_2 + \lambda\Vert\mathbf{x}\Vert_1.$$

  • $\ell_1/\ell_2$ – the basis pursuit denoising (BPDN) problem:

$$\min_{\mathbf{x}}\big\{ \Vert\mathbf{x}\Vert_1 : \Vert\mathbf{A}\mathbf{x} - \mathbf{b}\Vert_2 \leq \delta \big\}.$$

  • $\ell_2/\ell_1$ – the $\ell_1$-constrained LASSO problem:

$$\min_{\mathbf{x}}\big\{ (1/2)\Vert\mathbf{A}\mathbf{x} - \mathbf{b}\Vert_2^2 : \Vert\mathbf{x}\Vert_1 \leq \delta\big\}.$$

Here, $\lambda > 0$ is a penalty (regularization) parameter and $\delta > 0$ is a given level parameter. To make the decomposition concrete, the sketch below casts the $\ell_1/\ell_2$ LASSO above into the (CP) template.
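In this MATLAB sketch, the two proximal handles are illustrative only, and the commented-out solver call is hypothetical: the entry point decopt_solver and its signature are assumptions for illustration, not DECOPT's documented interface (consult the package documentation for the actual calling convention).

    % Casting the l1/l2 LASSO  min (1/2)*||A*x - b||_2^2 + lambda*||x||_1
    % into (CP): f(x) = lambda*||x||_1, g(r) = (1/2)*||r||_2^2, with the
    % linear constraint A*x - r = b (so r plays the role of A*x - b) and
    % no bounds. The handles below are illustrative, not DECOPT's API.
    lambda = 0.1;
    prox_f = @(x, gamma) sign(x) .* max(abs(x) - gamma*lambda, 0); % prox of gamma*f
    prox_g = @(r, gamma) r ./ (1 + gamma);                         % prox of gamma*g

    % A small random instance for illustration.
    n = 64; p = 256;
    A = randn(n, p);
    b = randn(n, 1);

    % Hypothetical call sketch -- decopt_solver and its signature are
    % assumptions, not DECOPT's actual API.
    % [xsol, output] = decopt_solver(prox_f, prox_g, A, b);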


Download

DECOPT can be downloaded at:

References

The full algorithms and theory behind DECOPT can be found in the following references:

  • Q. Tran-Dinh and V. Cevher, “Constrained convex minimization via model-based excessive gap,” Proceedings of the Annual Conference on Neural Information Processing Systems (NIPS), Montreal, Canada, 2014.

Available at: https://infoscience.epfl.ch/record/201842

Abstract: We introduce a model-based excessive gap technique to analyze first-order primal-dual methods for constrained convex minimization. As a result, we construct first-order primal-dual methods with optimal convergence rates on the primal objective residual and the primal feasibility gap of their iterates separately. Through a dual smoothing and prox-center selection strategy, our framework subsumes the augmented Lagrangian, alternating direction, and dual fast-gradient methods as special cases, where our rates apply.

  • Q. Tran-Dinh and V. Cevher, “A primal-dual algorithmic framework for constrained convex minimization,” LIONS Tech. Report EPFL-REPORT-199844, 2014.

Available at: https://infoscience.epfl.ch/record/199844

Abstract: We present a primal-dual algorithmic framework to obtain approximate solutions to a prototypical constrained convex optimization problem, and rigorously characterize how common structural assumptions affect the numerical efficiency. Our main analysis technique provides a fresh perspective on Nesterov’s excessive gap technique in a structured fashion and unifies it with smoothing and primal-dual methods. For instance, through the choices of a dual smoothing strategy and a center point, our framework subsumes decomposition algorithms, the augmented Lagrangian method, and the alternating direction method of multipliers as special cases, and provides optimal convergence rates on the primal objective residual as well as the primal feasibility gap of the iterates for all of these cases.