CATS-Oct-30-2015
Title
Two Faces of Algorithm Design: Optimization and Learning
Speaker
Manish Purohit
Abstract
Traditionally, algorithm designers have been interested in optimization problems where one assumes that the provided data is sacrosanct and correct. Given a specified problem, the goal is to design an algorithm that yields a good solution to any instance of the problem. On the other hand, the rise of machine learning presents an algorithm designer with new challenges. Abstractly, a prediction problem can be described as follows: one is given a problem instance and a large number of experts (i.e., features) who present their opinions as corresponding solutions. The learning goal is to determine the "faithful" experts who consistently give good solutions. In other words, at the training step, one is required to update one's belief (weight) in the experts by looking at a problem instance as well as the desired solution.
In this talk, I'll present some of my work on both of these aspects of algorithm design. The first part of the talk focuses on a natural optimization problem that arises in the context of job scheduling with soft precedence constraints. We formalize this scheduling problem as the Max-k-Ordering problem, which naturally generalizes two of the most well-studied problems on directed graphs: Max Directed Cut and Max Acyclic Subgraph. We develop a tight 2-approximation algorithm for the problem, for all k, by rounding the natural LP relaxation.
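For concreteness, the sketch below gives one standard way to write a natural LP relaxation for this kind of vertex-labeling problem; the notation (x_{u,i}, y^e_{i,j}, w_e) is illustrative and may differ from the exact program analyzed in the talk.

% Sketch of a natural LP relaxation for Max-k-Ordering (illustrative notation).
% Each vertex u receives a label in {1,...,k}; an edge e = (u,v) of weight
% w_e is satisfied when label(u) < label(v). Here x_{u,i} relaxes the
% indicator "u gets label i" and y^e_{i,j} relaxes "u gets i and v gets j".
\begin{align*}
\max\ \ & \sum_{e=(u,v)\in E} w_e \sum_{1\le i<j\le k} y^{e}_{i,j} \\
\text{s.t.}\ \ & \textstyle\sum_{i=1}^{k} x_{u,i} = 1 & \forall u \in V \\
& \textstyle\sum_{j=1}^{k} y^{e}_{i,j} = x_{u,i} & \forall e=(u,v)\in E,\ i\in[k] \\
& \textstyle\sum_{i=1}^{k} y^{e}_{i,j} = x_{v,j} & \forall e=(u,v)\in E,\ j\in[k] \\
& x \ge 0,\ y \ge 0
\end{align*}

Setting k = 2 recovers a relaxation of Max Directed Cut, while k = |V| permits arbitrary orderings and hence captures Max Acyclic Subgraph; the result above rounds such a relaxation to within a factor of 2.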
The second part of the talk focuses on the structured prediction problem referred to above. We extend the online large-margin training algorithm (MIRA) developed by Crammer et al. (2006) in the context of multiclass prediction to learning structured prediction models. We show that for a large class of structured prediction problems, one can formulate the online learning problem as an inverse optimization problem with a margin. Further, these inverse optimization problems can be solved efficiently using polynomial-sized convex programs.
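To illustrate the inverse-optimization view, here is a sketch in illustrative notation (A, b, F, \ell, and y* are assumptions of this sketch, not necessarily the talk's exact formulation): when inference is a linear program whose objective depends linearly on the model w, the exponentially many MIRA margin constraints can be replaced by a single dual certificate \lambda.

% Sketch: inference as an LP, and a MIRA-style update whose margin
% constraints are enforced by one dual feasibility certificate.
\begin{align*}
\text{Inference:}\quad & \max_{y}\ \{\, c(w)^{\top} y \ :\ A y \le b,\ y \ge 0 \,\},
  \qquad c(w) = F^{\top} w \ \text{(linear in $w$)} \\
\text{Update:}\quad & \min_{w,\ \lambda \ge 0}\ \tfrac{1}{2}\lVert w - w_t\rVert^{2} \\
& \text{s.t.}\ \ A^{\top}\lambda \ \ge\ c(w) + \ell, \qquad
  c(w)^{\top} y^{*} \ \ge\ b^{\top}\lambda
\end{align*}

Here y* is the desired training solution and \ell encodes a loss L(y) = \ell^{\top} y that vanishes at y* (a Hamming-type loss is affine in y, and its constant folds into the constraints). By weak LP duality, every feasible y then satisfies c(w)^{\top} y + L(y) \le b^{\top}\lambda \le c(w)^{\top} y^{*}, so y* beats every alternative by a margin that scales with the loss; all constraints are linear in (w, \lambda), so the update is a polynomial-sized convex (quadratic) program rather than one with exponentially many margin constraints.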