Instructor
Prof. Volkan Cevher
Description
Convex optimization offers a unified framework for obtaining numerical solutions to data analytics problems with provable statistical guarantees of correctness at well-understood computational costs.
To this end, this course reviews recent advances in convex optimization and statistical analysis in the wake of Big Data. We provide an overview of the emerging convex data models and their statistical guarantees, and describe scalable numerical solution techniques such as stochastic, first-order, and primal-dual methods. Throughout the course, we put the mathematical concepts into action with large-scale applications from machine learning, signal processing, and statistics.
Learning outcomes
By the end of the course, the students are expected to understand the so-called time-data trade-offs in data analytics. In particular, the students must be able to:
- Choose an appropriate convex formulation for a data analytics problem at hand
- Estimate the underlying data size requirements for the correctness of its solution
- Implement an appropriate convex optimization algorithm based on the available computational platform
- Decide on a meaningful level of optimization accuracy for stopping the algorithm
- Characterize the time required for their algorithm to obtain a numerical solution with the chosen accuracy
Prerequisites
Previous coursework in calculus, linear algebra, and probability is required. Familiarity with optimization is useful.
Outline
The course consists of the following topics:
Lecture 1: Introduction to convex optimization and iterative methods.
Lecture 2: Review of basic probability theory. Maximum likelihood, M-estimators, and empirical risk minimization as a motivation for convex optimization.
Lecture 3: Fundamental concepts in convex analysis. Basics of complexity theory.
Lecture 4: Unconstrained smooth minimization I. Concept of an iterative optimization algorithm; convergence rates; characterization of functions.
Lecture 5: Unconstrained smooth minimization II. Gradient and accelerated gradient methods.
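As a flavor of the methods covered in Lectures 4 and 5, the following is a minimal sketch of gradient descent and Nesterov's accelerated gradient method on a synthetic least-squares objective. The problem data, step size 1/L, and iteration count are illustrative choices, not material taken from the lectures:

```python
import numpy as np

# Illustrative smooth objective: f(x) = 0.5 * ||A x - b||^2
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)

def grad(x):
    return A.T @ (A @ x - b)

L = np.linalg.norm(A.T @ A, 2)  # Lipschitz constant of the gradient

# Plain gradient descent with constant step size 1/L
x = np.zeros(10)
for _ in range(200):
    x = x - grad(x) / L

# Nesterov's accelerated gradient method (FISTA-style momentum)
y = x_acc = np.zeros(10)
t = 1.0
for _ in range(200):
    x_next = y - grad(y) / L
    t_next = (1 + np.sqrt(1 + 4 * t**2)) / 2
    y = x_next + ((t - 1) / t_next) * (x_next - x_acc)
    x_acc, t = x_next, t_next

# Reference solution for comparison
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Both sequences of iterates approach the least-squares solution; the course analyzes their convergence rates and explains when acceleration provably helps.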
Lecture 6: Unconstrained smooth minimization III. The quadratic case; the conjugate gradient method; variable metric algorithms.
Lecture 7: Stochastic gradient methods.
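A minimal sketch of the stochastic gradient idea from Lecture 7: instead of the full gradient, each iteration uses an unbiased single-sample estimate. The synthetic problem, constant step size, and iteration count below are toy choices for illustration (the noiseless data makes a constant step work; in the general noisy setting a decaying step size is needed):

```python
import numpy as np

# Illustrative finite-sum least squares: min_x (1/n) sum_i 0.5*(a_i' x - b_i)^2
rng = np.random.default_rng(1)
n, d = 1000, 5
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true                      # noiseless ground truth

x = np.zeros(d)
step = 0.05
for _ in range(5000):
    i = rng.integers(n)             # sample one data point uniformly
    g = (A[i] @ x - b[i]) * A[i]    # unbiased estimate of the full gradient
    x -= step * g
```

Each iteration touches a single row of A, which is what makes these methods attractive at the large scales emphasized in the course.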
Lecture 8: Composite convex minimization I. Subgradient method; proximal and accelerated proximal gradient methods.
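To illustrate the composite setting of Lecture 8, here is a minimal sketch of the proximal gradient method (ISTA) on a lasso-type problem, smooth loss plus an l1 penalty. The problem data, regularization weight, and iteration count are illustrative assumptions:

```python
import numpy as np

# Illustrative composite problem: min_x 0.5*||A x - b||^2 + lam*||x||_1
rng = np.random.default_rng(2)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]              # sparse ground truth
b = A @ x_true
lam = 0.1

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

L = np.linalg.norm(A.T @ A, 2)             # gradient Lipschitz constant
x = np.zeros(20)
for _ in range(2000):
    # Gradient step on the smooth part, then the prox of the nonsmooth part
    x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
```

The nonsmooth l1 term is handled entirely through its proximal operator, so each iteration stays as cheap as a gradient step while the iterate recovers the sparse support.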
Lecture 9: Composite convex minimization II. Proximal Newton-type methods; stochastic proximal gradient methods.
Lecture 10: Constrained convex minimization I. The primal-dual approach; smoothing approaches for non-smooth convex minimization.
Lecture 11: Constrained convex minimization II. The Frank-Wolfe method; the universal primal-dual gradient method; the alternating direction method of multipliers (ADMM).
Lecture 12: Disciplined convex programming.