# Oracle inequalities in sparse learning problems

### Abstract

A number of problems in Statistics and Learning Theory, such as regression
and classification, can be formulated as risk minimization over the
linear span of a large dictionary of given functions. Several methods
have been developed to find a "sparse" solution of such a problem
(provided that one exists). One of these methods is penalized empirical
risk minimization with the $\ell_1$-norm of the vector of coefficients used
as a penalty. Another is the so-called "Dantzig Selector", recently
introduced by Candès and Tao. We will discuss several oracle inequalities
that express the "adaptivity" of these methods to the unknown sparsity of
the problem.
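
As a sketch of the two estimators mentioned above (the notation here is assumed for illustration, not taken from the talk): given a dictionary $h_1,\dots,h_N$ and samples $(X_i,Y_i)$, $i=1,\dots,n$, write $f_\lambda = \sum_{j=1}^N \lambda_j h_j$. The $\ell_1$-penalized empirical risk minimizer with loss $\ell$ and tuning parameter $\varepsilon > 0$ is

$$
\hat\lambda \in \operatorname*{arg\,min}_{\lambda \in \mathbb{R}^N}
\left\{ \frac{1}{n}\sum_{i=1}^n \ell\big(Y_i, f_\lambda(X_i)\big)
+ \varepsilon \,\|\lambda\|_{\ell_1} \right\},
$$

while, in the linear regression setting with design matrix $H = (h_j(X_i))_{i,j}$, the Dantzig Selector instead solves

$$
\min_{\lambda \in \mathbb{R}^N} \|\lambda\|_{\ell_1}
\quad \text{subject to} \quad
\big\| H^\top (Y - H\lambda) \big\|_{\infty} \le \varepsilon .
$$

An oracle inequality then bounds the risk of $\hat\lambda$ by the best risk achievable over sparse $\lambda$, plus a remainder that scales with the sparsity rather than with $N$.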

Maintained by: Fei Sha