The OptiML group at Princeton University conducts research on new machine learning models and on faster, more efficient optimization techniques for training them. Our research falls under the following topics:
- Convex optimization, with a focus on more efficient stochastic methods
- Non-convex optimization: efficient algorithms with provable guarantees, and extending the assumptions under which efficient optimization is possible
- Iterative methods vs. random-walk methods
- Online learning and regret minimization in games
- Unsupervised learning: theory and efficient training of models
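As a small illustration of the stochastic methods mentioned above, here is a minimal sketch of stochastic gradient descent on a toy one-dimensional problem. The function, step size, and noise model are illustrative choices, not taken from our publications.

```python
import random

def sgd(grad, x0, lr=0.05, steps=500):
    """Plain stochastic gradient descent: x_{t+1} = x_t - lr * g_t,
    where g_t is a (possibly noisy) estimate of the gradient at x_t."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Toy problem: minimize f(x) = (x - 3)^2 using a noisy gradient estimate.
random.seed(0)
noisy_grad = lambda x: 2 * (x - 3) + random.gauss(0, 0.1)
x_star = sgd(noisy_grad, x0=0.0)
# x_star is close to the true minimizer x = 3
```

With a fixed step size, the iterate contracts toward the minimizer geometrically and then fluctuates around it at a scale set by the gradient noise; decaying step sizes would remove that residual error.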
See our publications page for examples of our research.