Toyota AI Seminar: All Learning Is Robust
Controlling overfitting (i.e., when the decision rules obtained fit the training samples extremely well, but fail to "generalize" and perform poorly on the true distribution) is a long-standing topic of study in machine learning. Regularization is a widely used technique for controlling overfitting, in which a penalty is added to the cost function (typically the classification or regression error). The success of regularization across a host of different algorithms is usually interpreted as penalizing the complexity of the resulting decision rules, thereby favoring "simple" rules. In this talk we propose a different perspective on learning, based on robust optimization: assuming that each sample is corrupted by a certain disturbance, we seek the best decision under the most adversarial disturbance. We show that a particular choice of disturbance exactly recovers the solution obtained by penalizing complexity via regularization. Both Support Vector Machines and Lasso can be re-derived from a robust optimization perspective.
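As an illustration of this equivalence in the regression setting, the Lasso case can be sketched as follows (the notation below is ours, a standard form of the result, and is not taken verbatim from the talk):

```latex
% Robust linear regression: each column \delta_i of the disturbance
% matrix \Delta (one column per feature) has Euclidean norm at most
% \lambda, and we minimize the loss under the worst such disturbance.
\min_{w}\; \max_{\Delta \in \mathcal{U}}
    \bigl\| y - (X + \Delta) w \bigr\|_2,
\qquad
\mathcal{U} = \bigl\{ \Delta = [\delta_1, \dots, \delta_p] :
    \|\delta_i\|_2 \le \lambda,\; i = 1, \dots, p \bigr\}.

% The inner maximization admits a closed-form solution, and the robust
% problem collapses to an \ell_1-regularized (Lasso-type) problem:
\min_{w}\; \| y - X w \|_2 + \lambda \| w \|_1 .
```

Note that the adversarial disturbance acts feature-wise, which is what produces the ℓ1 penalty on the coefficients rather than, say, an ℓ2 penalty.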
The equivalence relationship between regularization and robustness gives a physical interpretation of the regularization process. Moreover, it helps us explain from a robustness point of view why support vector machines are consistent, and why Lasso produces sparse solutions.
Generalizing these results, we use the robustness perspective to derive new algorithms in new domains that enjoy both favorable statistical and computational properties. Finally, we argue that robustness is a necessary and sufficient condition for the consistency of learning algorithms, and that in fact every useful learning algorithm must possess some robustness properties.
Shie Mannor graduated from the Technion with a BSc in Electrical Engineering and a BA in Mathematics (both summa cum laude) in 1996. After that he spent almost four years as an intelligence officer with the Israeli Defense Forces and was subsequently involved in a few ventures in the high-tech industry. Shie earned a PhD in Electrical Engineering from the Technion in 2002. From 2002 to 2004 Shie was a Fulbright postdoctoral associate with LIDS at MIT. Shie was an assistant professor and then an associate professor in the Department of Electrical and Computer Engineering at McGill University from July 2004 until August 2010, where he held a Canada Research Chair in Machine Learning from 2005 to 2009. Shie has been with the Department of Electrical Engineering at the Technion since 2008, where he is currently an associate professor. His research interests include machine learning and pattern recognition, planning and control, multi-agent systems, and communications.