# 2010 Research Summary

## Computational complexity of statistical estimation

Alekh Agarwal, Peter Bartlett, Pradeep Ravikumar^{1} and Martin Wainwright

Defense Advanced Research Projects Agency HR0011-08-2-0002 and National Science Foundation DMS-0830410

As we move to learning with larger datasets, the constraint is often not the number of samples available, but the number of samples that can be processed with the available computational resources in a reasonable amount of time. This project seeks to understand the fundamental computational constraints on solving statistical estimation problems. One approach that proves fruitful is to reduce learning to stochastic convex optimization. We consider an oracle model of learning and optimization, in which the algorithm can only obtain unbiased estimates of the quality of its current hypothesis. In such a model, complexity is naturally measured by the number of queries posed to the oracle. Information-theoretic techniques allow us to obtain minimax lower bounds on the number of queries that any computational method must make. As a consequence, we obtain results on the complexity of statistical estimation in certain computational models that are widely used in practice.
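To make the oracle model concrete, the sketch below runs stochastic gradient descent on a simple quadratic objective, where each oracle query returns an unbiased but noisy gradient estimate and complexity is counted as the number of oracle queries. This is an illustrative example, not the project's method; the objective, noise model, and step-size schedule are assumptions chosen for simplicity.

```python
import random

def noisy_gradient_oracle(x, sigma=0.1):
    """Stochastic first-order oracle for f(x) = (x - 2)^2.

    Returns an unbiased estimate of the true gradient 2*(x - 2);
    the additive Gaussian noise models sampling error. (Illustrative
    objective and noise model, not from the project itself.)
    """
    true_grad = 2.0 * (x - 2.0)
    return true_grad + random.gauss(0.0, sigma)

def sgd(num_queries, x0=0.0, sigma=0.1):
    """Run SGD for a fixed budget of oracle queries.

    In the oracle model, complexity is measured by num_queries,
    the total number of calls the method makes to the oracle.
    """
    x = x0
    for t in range(1, num_queries + 1):
        step = 1.0 / (2.0 * t)  # diminishing step size for a strongly convex f
        x -= step * noisy_gradient_oracle(x, sigma)
    return x

random.seed(0)
x_hat = sgd(num_queries=2000)
print(abs(x_hat - 2.0))  # estimation error shrinks as the query budget grows
```

Minimax lower bounds of the kind described above state that, for some problem in the class, no algorithm restricted to such oracle queries can achieve error decaying faster than a given rate in the number of queries.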

^{1}University of Texas at Austin