Mathematics of Data: From Theory to Computation (Fall 2016)



Prof. Volkan Cevher



Convex optimization offers a unified framework for obtaining numerical solutions to data analytics problems, with provable statistical guarantees of correctness at well-understood computational costs.

To this end, this course reviews recent advances in convex optimization and statistical analysis in the wake of Big Data. We provide an overview of the emerging convex data models and their statistical guarantees, describe scalable numerical solution techniques such as first-order methods and their randomized versions, and illustrate the essential role of parallel and distributed computation.

Throughout the course, we put the mathematical concepts into action with large-scale applications from machine learning, signal processing, and statistics.


Learning outcomes

By the end of the course, the students are expected to understand the so-called time-data tradeoffs in data analytics. In particular, the students must be able to:

  1. Choose an appropriate convex formulation for a data analytics problem at hand
  2. Estimate the underlying data size requirements for the correctness of its solution
  3. Implement an appropriate convex optimization algorithm based on the available computational platform
  4. Decide on a meaningful level of optimization accuracy for stopping the algorithm
  5. Characterize the time required for their algorithm to obtain a numerical solution with the chosen accuracy



Previous coursework in calculus, linear algebra, and probability is required. Familiarity with optimization is useful. 



The course consists of the following topics:

Lecture 1 “Objects in Space”: Definitions of norms, inner products, and metrics for vector, matrix and tensor objects.
Lecture 2 Review of basic probability theory.
  Maximum likelihood, M-estimators, and empirical risk minimization as a motivation for convex optimization.
Lecture 3 Fundamental concepts in convex analysis.
  Basics of complexity theory.
Lecture 4 Unconstrained smooth minimization I:
  Concept of an iterative optimization algorithm.
  Convergence rates.
  Characterization of functions.
Lecture 5 Unconstrained smooth minimization II:
  Gradient and accelerated gradient methods.
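To make the flavor of these lectures concrete, here is a minimal sketch of the basic gradient method applied to a least-squares objective, using the standard step size 1/L (where L is the Lipschitz constant of the gradient). The function and variable names are illustrative, not taken from the course material:

```python
import numpy as np

def gradient_descent(grad, x0, step, iters=100):
    """Plain gradient descent: x_{k+1} = x_k - step * grad(x_k)."""
    x = x0
    for _ in range(iters):
        x = x - step * grad(x)
    return x

# Example: minimize f(x) = 0.5 * ||A x - b||^2 (smooth and convex).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
grad = lambda x: A.T @ (A @ x - b)

L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad f
x = gradient_descent(grad, np.zeros(5), 1.0 / L, iters=500)
x_star = np.linalg.lstsq(A, b, rcond=None)[0]  # closed-form minimizer
```

Since this objective is strongly convex (A has full column rank almost surely), the iterates converge linearly to the least-squares solution, which illustrates the convergence rates discussed in Lectures 4 and 5.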
Lecture 6 Unconstrained smooth minimization III:
  The quadratic case.
  The conjugate gradient method.
  Variable metric algorithms.
Lecture 7 Structured data models (e.g., sparse and low-rank) and convex gauge functions.
  The subgradient method.
Lecture 8 Composite convex minimization I: 
  Proximal and accelerated proximal gradient methods.
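As a sketch of the proximal gradient method covered here, the following implements ISTA for the lasso problem min_x 0.5||Ax - b||^2 + lam||x||_1, whose proximal operator is elementwise soft-thresholding. This is a standard textbook instance, not code from the course:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, iters=500):
    """Proximal gradient (ISTA) for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # gradient step on the smooth term, then prox step on the l1 term
        x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
    return x
```

Each iteration alternates a gradient step on the smooth term with the proximal map of the nonsmooth term; the accelerated (FISTA-type) variant adds a momentum step on top of this scheme.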
Lecture 9 Composite convex minimization II:
  Proximal Newton-type methods.
  Composite self-concordant minimization.
Lecture 10 Convex demixing.
  Basis pursuit denoising.
  Convex geometry of linear inverse problems.
Lecture 11 Constrained convex minimization I:
  The primal-dual approach.
  Smoothing approaches for non-smooth convex minimization.
Lecture 12 Constrained convex minimization II:
  The Frank-Wolfe method.
  The universal primal-dual gradient method.
  The alternating direction method of multipliers (ADMM).
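A minimal sketch of ADMM, again on the lasso problem min 0.5||Ax - b||^2 + lam||z||_1 subject to x = z; the splitting, penalty parameter rho, and helper names are illustrative assumptions, not the course's implementation:

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=500):
    """ADMM sketch for min 0.5*||Ax - b||^2 + lam*||z||_1 s.t. x = z."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    # Pre-computed inverse for the x-update (a sketch; in practice one
    # would cache a Cholesky factorization instead of an explicit inverse).
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        x = M @ (Atb + rho * (z - u))                          # x-update
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # z-update
        u = u + x - z                                          # dual update
    return z
```

The three updates mirror the general ADMM template: a ridge-type solve in x, a proximal step in z, and a running dual (scaled multiplier) update in u.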
Lecture 13 Classical black-box convex optimization techniques.
  Linear programming, quadratic programming, second-order cone programming, and semidefinite programming.
  The simplex method and interior point method (IPM).
  Hierarchies of classical formulations.