In this talk we consider two popular and well-studied algorithms in machine learning: regularized Support Vector Machines (SVM) and Lasso, i.e., l_1-regularized regression. We show that the solutions to both of these problems have robustness properties: each is, in fact, the solution to a robust optimization problem. Robustness connects the regularizer to a physical property, namely protection from noise, and can therefore be used to explore properties of the solution. We illustrate this by showing, from a unified robustness-based perspective, why SVMs are consistent and why Lasso is sparse.
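For concreteness, the robust-optimization equivalence for Lasso can be sketched as follows (a paraphrase of the known equivalence between feature-wise robust regression and l_1 regularization; the exact uncertainty set and norms used in the talk may differ):

```latex
% Robust regression over a feature-wise uncertainty set
% \Delta A = [\delta_1, \dots, \delta_m], each column perturbation
% bounded by c, recovers the Lasso objective:
\[
\min_{\beta} \;
\max_{\substack{\Delta A = [\delta_1, \dots, \delta_m] \\ \|\delta_i\|_2 \le c}}
\bigl\| y - (A + \Delta A)\beta \bigr\|_2
\;=\;
\min_{\beta} \; \|y - A\beta\|_2 + c\,\|\beta\|_1 .
\]
```

In words: guarding against the worst-case bounded perturbation of each feature column is exactly what the l_1 penalty buys.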
Consistency of regularized SVMs is typically shown using the fact that the regularizer limits the complexity of the solution space. We provide a direct proof of consistency that relies only on the robustness properties of the solution. Lasso is known to have remarkable sparsity properties, and typically these have been understood by studying the geometry of the l_2 loss (the regression error) and the l_1 regularizer. We show that the robustness of the solution itself explains why the solution is sparse. Both the analysis and the specific results we obtain differ from standard sparsity results, providing different geometric intuition. Time permitting, we also show that the robust optimization formulation of Lasso is related to kernel density estimation and, following this approach, use robustness directly to re-prove that Lasso is consistent.
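As a side illustration of the sparsity phenomenon (not the talk's robustness-based argument): in a separable one-dimensional view, the proximal update for the l_1 penalty is soft thresholding, which maps small coefficients exactly to zero, whereas the corresponding l_2 (ridge) update only shrinks them. The functions below are a minimal sketch of this standard contrast.

```python
def soft_threshold(w, lam):
    """Prox of the l_1 penalty: argmin_x 0.5*(x - w)**2 + lam*abs(x).

    Coefficients with |w| <= lam are set exactly to zero -- the
    mechanism behind Lasso's sparse solutions.
    """
    if w > lam:
        return w - lam
    if w < -lam:
        return w + lam
    return 0.0

def ridge_shrink(w, lam):
    """Prox of the l_2 penalty: argmin_x 0.5*(x - w)**2 + 0.5*lam*x**2.

    Closed form w / (1 + lam): every coefficient shrinks toward zero
    but never reaches it, so no exact sparsity.
    """
    return w / (1.0 + lam)

coeffs = [3.0, 0.4, -0.2, -2.5]
lam = 0.5
lasso = [soft_threshold(w, lam) for w in coeffs]   # [2.5, 0.0, 0.0, -2.0]
ridge = [ridge_shrink(w, lam) for w in coeffs]     # all shrunk, none zero
print(lasso)
print(ridge)
```

Running this shows the small coefficients 0.4 and -0.2 zeroed out by the l_1 update while the ridge update leaves every coefficient nonzero.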
This is joint work with Huan Xu and Shie Mannor.
Bio: Constantine Caramanis is an assistant professor in the Department of Electrical and Computer Engineering at UT Austin and a member of the Wireless Networking and Communications Group (WNCG). He received his PhD in EECS from MIT.