Student Projects

Master and semester projects

At LIONS, we are concerned with optimized information extraction from signals or data volumes. We therefore develop mathematical theory and computational methods for information recovery from highly incomplete data.

We divide our research into two synergistic theory thrusts: information scalable optimization and data acquisition, and learning theory and methods for low-dimensional signal models.

Our team of postdocs and PhD students works on different aspects of these interesting and challenging problems, from the theoretical analysis of methods and algorithms to efficient implementations that solve real-world problems.

Choosing our laboratory for your project will enable you to work in an exciting and stimulating environment, quickly learn state-of-the-art mathematical and algorithmic tools, and deliver an outstanding project.

Below, you can find some projects that our team is currently working on and for which we would appreciate your collaboration. You are also welcome to propose your own project topic if it matches our expertise and interests.

Please contact Prof. Volkan Cevher for further information on the projects.

The projects below are reserved for master's students. PhD students should consult the LIONS openings page: lions.epfl.ch/opportunities


Open Projects

We are interested in reducing scan times in Magnetic Resonance Imaging (MRI) using compressive sensing techniques. For this purpose, we will design sampling mechanisms via a new learning-based approach that unifies learning theory and sampling theory. We also consider extensions to multi-coil MRI and provide demonstrations with real scans.

Keywords: Compressive Sensing, Fourier Transform, Image Reconstruction.

Requirements: MATLAB, Signal/Image Processing knowledge, convex and combinatorial optimization
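For intuition on the reconstruction side, here is a minimal NumPy sketch of compressive Fourier sampling with a variable-density mask and an ISTA-style sparse reconstruction. The mask heuristic, the sparsity model (plain image-domain sparsity), and all parameters are illustrative assumptions, not the learning-based design studied in the project.

```python
import numpy as np

def variable_density_mask(shape, rate=0.3, decay=2.0, seed=0):
    """Sample k-space more densely near the center (low frequencies)."""
    rng = np.random.default_rng(seed)
    ky, kx = np.meshgrid(*(np.linspace(-1, 1, s) for s in shape), indexing="ij")
    prob = (1.0 - np.sqrt(kx**2 + ky**2) / np.sqrt(2.0)) ** decay
    prob *= rate * prob.size / prob.sum()            # rescale to the target rate
    return rng.random(shape) < np.clip(prob, 0.0, 1.0)

def ista_recon(y, mask, lam=0.01, n_iter=200):
    """ISTA: gradient step on 0.5*||M F x - y||^2, then complex soft-threshold."""
    x = np.zeros(mask.shape, dtype=complex)
    for _ in range(n_iter):
        grad = np.fft.ifft2(mask * np.fft.fft2(x, norm="ortho") - y, norm="ortho")
        z = x - grad                                 # unit step: F is unitary
        mag = np.maximum(np.abs(z), 1e-12)
        x = z * np.maximum(1.0 - lam / mag, 0.0)     # soft-thresholding
    return x

# Toy usage: a piecewise-constant (hence pixel-sparse) image, ~30% of k-space.
img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0
mask = np.fft.ifftshift(variable_density_mask(img.shape))  # dense region at DC
y = mask * np.fft.fft2(img, norm="ortho")
rec = ista_recon(y, mask)
print(np.linalg.norm(rec.real - img) / np.linalg.norm(img))
```

In the project itself, the heuristic mask above would be replaced by a sampling pattern learned from training scans.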

We consider the problem of finding a global maximizer (or minimizer) of an unknown objective function from point evaluations. Bayesian optimization prescribes a prior belief over the possible objective functions and then sequentially refines this model as data are observed.

The student will develop new methods for Bayesian optimization that are both theoretically well-founded and practically applicable. The methods will be tested on various real-world problems, such as finding a good set of hyperparameters for deep neural networks or discovering molecules with good chemical properties.

Keywords: Bayesian optimization, Gaussian process, Multi-armed bandits

Requirements: Python, machine learning, familiarity with multi-armed bandits and submodular optimization is a plus.

References: [4], [1], [3], [2]

References

[1] Ilija Bogunovic, Jonathan Scarlett, and Volkan Cevher. Time-Varying Gaussian Process Bandit Optimization. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2016.

[2] Ilija Bogunovic, Jonathan Scarlett, Andreas Krause, and Volkan Cevher. Truncated Variance Reduction: A Unified Approach to Bayesian Optimization and Level-Set Estimation. In Conference on Neural Information Processing Systems (NIPS), 2016.

[3] Josip Djolonga, Andreas Krause, and Volkan Cevher. High-Dimensional Gaussian Process Bandits. In Advances in Neural Information Processing Systems (NIPS), pages 1025–1033, 2013.

[4] Bobak Shahriari, Kevin Swersky, Ziyu Wang, Ryan P. Adams, and Nando de Freitas. Taking the Human Out of the Loop: A Review of Bayesian Optimization. Proceedings of the IEEE, 104(1):148–175, 2016.
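To make the sequential loop described above concrete, here is a toy pure-NumPy sketch of Bayesian optimization with a Gaussian process posterior and a UCB acquisition rule on a hypothetical 1-D objective; the kernel, noise level, and exploration schedule are illustrative choices, not the methods to be developed in the project.

```python
import numpy as np

def rbf(a, b, ell=0.2):
    """Squared-exponential kernel on scalar inputs."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

def gp_posterior(X, y, Xs, noise=1e-4):
    """Posterior mean and std of a zero-mean GP at test points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

f = lambda x: -np.sin(3 * x) - x**2 + 0.7 * x        # toy black-box objective
grid = np.linspace(-1.0, 2.0, 400)
X, y = np.array([0.0]), np.array([f(0.0)])
for t in range(1, 20):
    mu, sd = gp_posterior(X, y, grid)
    beta = 2.0 * np.log(grid.size * t**2)            # UCB exploration weight
    x_next = grid[np.argmax(mu + np.sqrt(beta) * sd)]
    X, y = np.append(X, x_next), np.append(y, f(x_next))
print("best observed point:", X[np.argmax(y)])
```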

With the massive increase in data available for machine learning research, designing optimization algorithms that can handle large volumes of data is becoming increasingly important. For this purpose, we are interested in developing convex optimization algorithms that can be used in distributed environments.

In this project, the algorithms will be implemented and tested using Google’s machine learning library TensorFlow.

Keywords: convex optimization, distributed optimization, TensorFlow

Requirements: Python, C++, familiarity with optimization is a plus
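As a starting point, a minimal TensorFlow 2 sketch of solving a synthetic convex least-squares problem with automatic differentiation might look as follows; a distributed variant would wrap the same update in one of TensorFlow's tf.distribute strategies. The problem sizes and step size are toy choices.

```python
import tensorflow as tf

tf.random.set_seed(0)
A = tf.random.normal([1000, 50])                     # synthetic design matrix
x_true = tf.random.normal([50, 1])
b = A @ x_true + 0.01 * tf.random.normal([1000, 1])  # noisy observations

x = tf.Variable(tf.zeros([50, 1]))
for step in range(500):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean((A @ x - b) ** 2)      # smooth convex objective
    g = tape.gradient(loss, x)
    x.assign_sub(0.1 * g)                            # plain gradient step
print("final loss:", float(loss))
```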

The high-throughput screening of large databases of novel candidate materials constitutes a central goal of the recently awarded MARVEL NCCR grant. Given a training set of compounds with pre-calculated quantum mechanical properties, we seek to construct supervised machine learning models that accurately infer the corresponding properties for similar materials, with correctness guarantees [1–4].

References

[1] A. P. Bartók, R. Kondor, and G. Csányi. On representing chemical environments. Phys. Rev. B, vol. 87, pp. 184115-1–184115-16, 2013.

[2] S. De, A. P. Bartók, G. Csányi, and M. Ceriotti. Comparing molecules and solids across structural and alchemical space. Phys. Chem. Chem. Phys., vol. 18, pp. 13754–13769, 2016.

[3] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic Publishers, Boston, MA, 2004.

[4] M. Rupp. Machine Learning for Quantum Mechanics in a Nutshell. International Journal of Quantum Chemistry, vol. 115(16), pp. 1058–1073, 2015.
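As one plausible baseline for this kind of property prediction (in the spirit of [4]), here is a small kernel ridge regression sketch in NumPy; the random feature vectors and surrogate property below are hypothetical stand-ins for actual compound descriptors such as those of [1, 2].

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 30))     # hypothetical compound descriptors
y_train = X_train[:, 0] ** 2 + 0.1 * rng.normal(size=200)  # surrogate property

def gaussian_kernel(A, B, sigma=3.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)    # pairwise sq. dists
    return np.exp(-d2 / (2.0 * sigma**2))

lam = 1e-3                                               # ridge regularizer
K = gaussian_kernel(X_train, X_train)
alpha = np.linalg.solve(K + lam * np.eye(len(K)), y_train)

X_test = rng.normal(size=(5, 30))                        # new "compounds"
y_pred = gaussian_kernel(X_test, X_train) @ alpha
print(y_pred)
```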

Submodular functions are set functions, i.e., functions defined on the power set 2^V of a given ground set V, that satisfy a diminishing-returns property. They occur naturally in several applications in computer vision, machine learning, economics, and theoretical computer science. Submodular functions have very interesting links to convex analysis and can be minimized exactly, and maximized approximately, by efficient algorithms with provable guarantees.

The objective of this project is to develop methods to improve on the theoretical and/or practical efficiency of submodular minimization/maximization algorithms in offline or streaming settings.

Requirements: MATLAB, familiarity with submodular and convex optimization, and machine learning.

References:

[1] Fujishige, Satoru. “Submodular Functions and Optimization.” Vol. 58. Elsevier, 2005.

[2] Bach, Francis. “Learning with Submodular Functions: A Convex Optimization Perspective.” arXiv preprint arXiv:1111.6453 (2011).

[3] Iyer, Rishabh, and Jeff Bilmes. “Polyhedral Aspects of Submodularity, Convexity and Concavity.” arXiv preprint arXiv:1506.07329 (2015).
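As a concrete example of the guarantees mentioned above, the classical greedy algorithm for maximizing a monotone submodular function under a cardinality constraint achieves a (1 - 1/e) approximation. A minimal sketch, using a toy coverage function:

```python
def greedy_max(ground, f, k):
    """Greedy maximization of a monotone submodular f with |S| <= k."""
    S = set()
    for _ in range(k):
        gains = {v: f(S | {v}) - f(S) for v in ground - S}  # marginal gains
        v_best = max(gains, key=gains.get)
        if gains[v_best] <= 0:
            break
        S.add(v_best)
    return S

# Toy coverage function: f(S) = size of the union of the sets indexed by S.
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6}, 3: {1, 6}}
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(greedy_max(set(sets), f, k=2))     # picks {0, 2}, covering all elements
```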

Bundle methods for non-smooth optimization are widely used in the literature [1,2,3]. The main idea of these methods is to construct a piecewise-linear approximation to the objective function, built from the subgradients obtained at previous iterations.

The objective of this project is to develop stochastic bundle methods, and to investigate their convergence, for non-smooth problems of minimizing the sum of two functions f + g, where f is proximable and g decomposes into a finite sum of components g_i.

References:

[1] Yu Du and Andrzej Ruszczynski. Rate of Convergence of the Bundle Method.

[2] Alexander J. Smola, S. V. N. Vishwanathan, and Quoc V. Le. Bundle Methods for Machine Learning.

[3] C. H. Teo, S. V. N. Vishwanathan, A. Smola, and Quoc V. Le. Bundle Methods for Regularized Risk Minimization. Journal of Machine Learning Research, 11:311–365, 2010.
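A minimal 1-D illustration of the bundle idea: accumulate cutting planes f(x_i) + g_i (x - x_i) from subgradients g_i, and minimize the piecewise-linear model plus a proximal term. The objective, the grid-based model minimization, and the proximal weight below are toy choices for readability.

```python
import numpy as np

f = lambda x: abs(x - 1) + 0.5 * abs(x + 2)            # non-smooth objective
df = lambda x: np.sign(x - 1) + 0.5 * np.sign(x + 2)   # a valid subgradient

grid = np.linspace(-5, 5, 10001)
x, mu = 4.0, 1.0                       # starting point, proximal weight
cuts = []                              # bundle of (f(x_i), g_i, x_i)
for _ in range(20):
    cuts.append((f(x), df(x), x))
    # Piecewise-linear lower model: max over all accumulated cutting planes.
    model = np.max([fi + gi * (grid - xi) for fi, gi, xi in cuts], axis=0)
    # Proximal bundle step: minimize model + (mu/2) * distance to current point.
    x = grid[np.argmin(model + 0.5 * mu * (grid - x) ** 2)]
print(x, f(x))                         # approaches the minimizer x* = 1
```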

Past projects

We are interested in the application of Quantum Tomography to the calibration of quantum gates. Quantum Tomography is the process of reconstructing the quantum state of a system from measurements made on that system. However, a full description of the quantum state grows exponentially with the number of qubits. We want to leverage recent advances in Compressive Sensing (CS) to drastically decrease the number of measurements needed for full recovery.

This project requires developing and theoretically characterizing efficient recovery algorithms. Since we are striving for real-time applications, the algorithms will then be coded in CUDA to exploit the parallel computing power of GPUs.

Prerequisites: background in linear algebra, probability and statistics, and convex optimization.

Contact: Yen-Huan Li
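For a flavor of the recovery problem, here is a hedged NumPy sketch of low-rank recovery by singular value projection (projected gradient descent onto rank-r matrices) from random Hermitian measurements; the measurement ensemble, sizes, iteration count, and step size are toy assumptions, not an actual Pauli-measurement setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 16, 1, 200                       # 16x16 "density matrix", rank 1
psi = rng.normal(size=(n, 1)) + 1j * rng.normal(size=(n, 1))
rho = psi @ psi.conj().T
rho /= np.trace(rho).real                  # unit trace, like a pure state

def rand_hermitian():
    B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (B + B.conj().T) / 2            # toy stand-in for an observable

ops = [rand_hermitian() for _ in range(m)]
A = np.stack([op.conj().ravel() for op in ops])  # row i gives <A_i, .>
y = (A @ rho.ravel()).real                 # tr(A_i rho) is real (Hermitian pair)

eta = 1.0 / np.linalg.norm(A, 2) ** 2      # conservative step size
X = np.zeros((n, n), dtype=complex)
for _ in range(500):
    resid = (A @ X.ravel()).real - y
    G = (A.conj().T @ resid).reshape(n, n)          # gradient = sum_i resid_i A_i
    w, V = np.linalg.eigh(X - eta * G)              # iterate stays Hermitian
    top = np.argsort(np.abs(w))[-r:]
    X = (V[:, top] * w[top]) @ V[:, top].conj().T   # project onto rank r
print("recovery error:", np.linalg.norm(X - rho))
```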

“Although this may seem a paradox, all exact science is dominated by the idea of approximation.”

–Bertrand Russell (1872–1970)

We are interested in developing approximation algorithms with provable estimation guarantees for structured sparse representations of spatial images and temporal data streams.

Students with experience in Hadoop MapReduce and/or CUDA programming are given priority.

Keywords: approximation guarantee, dynamic programming, greedy algorithm, submodularity, union-of-subspace models, randomized algorithms

Requirements: Matlab, C/C++

We are interested in developing algorithms that enable robust decompositions of large-scale data matrices for learning applications.

Keywords: randomized algorithms, fixed-rank approximation problem, approximation guarantees

Requirements: Matlab or C/C++

We are interested in studying non-iid, parametric and non-parametric prior distributions with almost sure compressibility properties. This project pays particular attention to exchangeable distributions defined over binary and quad-trees.

Keywords: compressible priors, order statistics, almost sure convergence, De Finetti’s theorem

Requirements: Matlab, C/C++

We are interested in the application of extremal combinatorial objects in compressive sensing problems, such as randomness extractors and expanders. This project pays particular attention to a recent development in information theory: polar codes.

Keywords: randomness extractors, expanders, polar codes, compressive sensing

Requirements: Matlab or C/C++

We are interested in characterizing the (set) restricted singular values of pseudo-random matrices.

Keywords: matrix martingales, tail bounds, concentration of measure

Requirements: Matlab, C/C++

We are interested in making inferences from biological data in order to understand the processes that cause a disease, potentially leading to the identification of therapies.

Within the machine learning framework of supervised linear regression, we want to exploit structured sparse models to improve both the prediction performance and the interpretability of the results. Structured sparse models encompass hierarchical trees, group structures, and union-of-subspaces models, which offer the possibility of estimating a linear model that can be more easily interpreted by clinicians.

Our aim is to develop and characterize new algorithms and structures to be applied to cancer data and neuroimaging data. This project has both theoretical and computational aspects.
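As a small illustration of the algorithmic building blocks involved, here is the proximal operator of the (non-overlapping) group lasso, i.e., block soft-thresholding, in NumPy; the groups and threshold below are hypothetical.

```python
import numpy as np

def prox_group_lasso(v, groups, lam):
    """Shrink each group of coefficients toward zero; zero out weak groups."""
    out = np.zeros_like(v)
    for g in groups:
        norm_g = np.linalg.norm(v[g])
        if norm_g > lam:
            out[g] = (1.0 - lam / norm_g) * v[g]
    return out

v = np.array([0.9, 0.8, -0.1, 0.05, 1.5, -1.2])
groups = [[0, 1], [2, 3], [4, 5]]       # non-overlapping coefficient groups
print(prox_group_lasso(v, groups, lam=0.5))   # middle group is zeroed out
```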

In classical Markowitz portfolio optimization, one is interested in allocating a fixed amount of capital among a set of assets so as to minimize the risk/variance of the portfolio return, subject to a target empirical return. Assuming that the true mean and covariance of the asset returns are known, the Markowitz portfolio problem has a closed-form solution. In practice, though, this assumption is unrealistic, and we are usually restricted to empirical estimates of the mean and covariance from recorded data. Unfortunately, this compromise introduces several problems: financial data are well known to be non-stationary (which limits the amount of meaningful data available for estimation), while classical maximum-likelihood techniques are robust only in the asymptotic sense, where a vast amount of data is required for a good estimate. We aim to improve on these approaches by exploiting other properties of the data, such as sparsity in the covariance matrix or other low-dimensional assumptions.

During this project, the student will learn modern techniques in sparse covariance matrix estimation as well as new scalable optimization algorithms. Our aim is to develop and characterize covariance estimators that work from a limited amount of data, leveraging sparsity-promoting and other low-dimensional models in the estimation. As part of the project, the student will apply this knowledge to real-world data and track the performance of the designed portfolios. This project has both theoretical and computational aspects.
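For reference, here is a NumPy sketch of the closed-form solution mentioned above, obtained from the stationarity conditions of minimizing w'Σw subject to w'μ = r and w'1 = 1 (short positions allowed); the moments are toy numbers that the project would replace with regularized estimates from data.

```python
import numpy as np

mu = np.array([0.08, 0.12, 0.10])                    # expected asset returns
Sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.12, 0.03],
                  [0.01, 0.03, 0.09]])               # covariance of returns
r = 0.10                                             # target portfolio return

ones = np.ones_like(mu)
C = np.column_stack([mu, ones])                      # equality-constraint matrix
# Lagrangian stationarity: w = Sigma^{-1} C nu, with C' Sigma^{-1} C nu = (r, 1)'
SinvC = np.linalg.solve(Sigma, C)
nu = np.linalg.solve(C.T @ SinvC, np.array([r, 1.0]))
w = SinvC @ nu
print(w, w @ mu, w @ Sigma @ w)                      # weights, return, variance
```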

Recovering a large matrix from a small subset of its entries is a challenging problem arising in many real applications, such as recommender systems. Many existing approaches formulate this problem as a general low-rank matrix approximation problem, which may be used to impute unobserved entries. This approach leads to good performance even when very few entries are known.

We are interested in bandit optimization in this setting, which introduces an exploration/exploitation tradeoff: we actively decide on the sampling positions so as to optimize a given convex criterion.

We propose a dimensionality-reducing matrix design based on training data, with constraints on the matrix's Frobenius norm and number of rows. Our design criterion aims at preserving the distances between the data points in the dimensionality-reduced space as much as possible, relative to their distances in the original data space. This approach can be considered a deterministic bi-Lipschitz embedding of the data points.

We introduce a scalable learning algorithm, dubbed AMUSE, and provide a rigorous estimation guarantee by leveraging game-theoretic tools. We also provide a generalization characterization of our matrix based on the sample data. We use compressive sensing problems as an example application, where the Frobenius-norm design constraint translates into the sensing energy.

Link to publication.

This project concerns, in particular, the complexity analysis and implementation of dynamic programs for projecting a vector onto structures based on groups or trees of variables.
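As a toy instance of such a dynamic program, the sketch below projects onto rooted-tree sparsity: keep at most k coefficients whose support forms a rooted, connected subtree (a node may be kept only if its parent is kept), maximizing the retained energy. The tree and values are hypothetical, and the implementation favors readability over the complexity one would analyze in the project.

```python
import numpy as np

def tree_projection(w, children, root, k):
    """Best support of size <= k forming a rooted subtree, maximizing energy."""
    gain = w ** 2

    def solve(v):
        # dp[j] = (best energy using j nodes of v's subtree with v kept, support)
        dp = {1: (gain[v], {v})}
        for c in children.get(v, []):
            cdp = solve(c)
            merged = dict(dp)                        # option: skip child subtree
            for j, (val, sup) in dp.items():         # knapsack-combine the child
                for jc, (cval, csup) in cdp.items():
                    if j + jc <= k and (j + jc not in merged
                                        or val + cval > merged[j + jc][0]):
                        merged[j + jc] = (val + cval, sup | csup)
            dp = merged
        return dp

    return max(solve(root).values(), key=lambda t: t[0])[1]

w = np.array([0.1, 2.0, 0.2, 3.0, 0.1])              # coefficients, nodes 0..4
children = {0: [1, 2], 1: [3], 2: [4]}               # tree rooted at node 0
support = tree_projection(w, children, 0, k=3)
x = np.zeros_like(w); x[list(support)] = w[list(support)]
print(sorted(support), x)                            # keeps {0, 1, 3}
```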

We consider the class of convex minimization problems composed of a self-concordant, convex data-fidelity term h(·), such as the log-determinant, plus a regularizing, possibly non-smooth, function g(·). This type of problem has recently attracted a great deal of interest, mainly due to its omnipresence in top-notch applications. Under this locally Lipschitz continuous gradient setting, we analyze the convergence behavior of proximal Newton schemes with the added twist of possibly inexact evaluations. We prove attractive convergence rate guarantees and enhance state-of-the-art optimization schemes to accommodate such developments. Experimental results on sparse covariance estimation show the merits of our algorithm, both in terms of recovery efficiency and complexity.

Link to publication.

We consider the problem of solving large-scale optimization problems with matrix decision variables. For such problems, classical convex optimization algorithms scale poorly even at moderate data sizes. On the other hand, through cleverly employing the structures of real-world problems, non-convex alternating minimization methods succeed at providing efficient and accurate solutions, albeit sometimes without theoretical guarantees.

In this project, the student will develop novel alternating minimization algorithms, and examine their theoretical convergence. The student will test the applicability of the new methods on various real-world problems including phase retrieval and matrix completion.

Keywords: Alternating minimization, Phase retrieval, Matrix completion.

Requirements: Matlab or Python, basic knowledge in optimization such as gradient descent and its convergence rate, familiarity with phase retrieval or matrix completion is a plus.

References: [1], [2], [3], [4].

References

[1] Emmanuel J. Candes, Xiaodong Li, and Mahdi Soltanolkotabi. Phase Retrieval via Wirtinger Flow: Theory and Algorithms. IEEE Transactions on Information Theory, 61(4):1985–2007, 2015.

[2] David E. Carlson, Volkan Cevher, and Lawrence Carin. Stochastic Spectral Descent for Restricted Boltzmann Machines. In AISTATS, 2015.

[3] Moritz Hardt. Understanding Alternating Minimization for Matrix Completion. In IEEE 55th Annual Symposium on Foundations of Computer Science (FOCS), pages 651–660, 2014.

[4] Prateek Jain, Praneeth Netrapalli, and Sujay Sanghavi. Low-Rank Matrix Completion Using Alternating Minimization. In Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing (STOC), pages 665–674, 2013.
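In the spirit of [3, 4], here is a minimal alternating least-squares sketch for matrix completion: fix V and solve a small ridge regression for each row of U, then swap roles. The sizes, sampling rate, rank, and regularization are toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, r = 60, 50, 3
M = rng.normal(size=(n1, r)) @ rng.normal(size=(r, n2))   # ground-truth matrix
Omega = rng.random((n1, n2)) < 0.3                        # observed entries

U, V = rng.normal(size=(n1, r)), rng.normal(size=(n2, r))
lam = 1e-3
for _ in range(30):
    for i in range(n1):                                   # update rows of U
        idx = Omega[i]
        A = V[idx]
        U[i] = np.linalg.solve(A.T @ A + lam * np.eye(r), A.T @ M[i, idx])
    for j in range(n2):                                   # update rows of V
        idx = Omega[:, j]
        A = U[idx]
        V[j] = np.linalg.solve(A.T @ A + lam * np.eye(r), A.T @ M[idx, j])
err = np.linalg.norm(U @ V.T - M) / np.linalg.norm(M)
print(f"relative error: {err:.2e}")                       # should be small
```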
