Achieving information-theoretic limits in high-dimensional regression.

  • Antony Joseph | Yale University

Problems in high-dimensional regression have attracted immense interest lately. Examples include graphical model selection, multi-label prediction, computer vision, and genomics.
For accurate variable selection, there are information-theoretic limits relating four quantities: sample size, dimension, sparsity, and signal-to-noise ratio (a representative form of such a limit is sketched below).
We analyze an iterative algorithm, similar in spirit to forward stepwise regression (see the sketch below), for a linear model with a specific coefficient structure, and demonstrate that its performance is optimal when compared against these information-theoretic limits. Apart from providing a practical solution to a long-standing problem in communication, these results also contribute to the understanding of thresholds for variable selection in high-dimensional regression.
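As background on how the four quantities interact (a standard Fano-type counting bound from the literature, stated here for context rather than as the speaker's specific result): identifying a support of size k among the \binom{p}{k} candidates in dimension p, from n noisy observations each carrying at most the Gaussian channel capacity of (1/2) log(1 + snr) nats, requires roughly

    n \gtrsim \frac{\log \binom{p}{k}}{\frac{1}{2}\log(1 + \mathrm{snr})} \approx \frac{k \log(p/k)}{\frac{1}{2}\log(1 + \mathrm{snr})}.

Algorithms whose sample-size requirements match this scaling are, in this sense, optimal.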
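The algorithm analyzed in the talk is tailored to the coefficient structure it studies; as a rough point of reference for the forward-stepwise flavor of greedy selection it resembles, here is a minimal Python sketch of classical forward stepwise regression (the function and variable names are illustrative assumptions, not the speaker's method):

    import numpy as np

    def forward_stepwise(X, y, k):
        """Greedy forward stepwise selection for a sparse linear model.

        Repeatedly adds the column of X most correlated with the current
        residual, refitting by least squares after each addition.
        Illustrative sketch only, not the algorithm from the talk.
        """
        n, p = X.shape
        support = []
        residual = y.copy()
        for _ in range(k):
            # Score each column by |correlation| with the residual,
            # excluding columns that were already selected.
            scores = np.abs(X.T @ residual)
            scores[support] = -np.inf
            j = int(np.argmax(scores))
            support.append(j)
            # Refit by least squares on the selected columns.
            beta_s, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
            residual = y - X[:, support] @ beta_s
        beta = np.zeros(p)
        beta[support] = beta_s
        return support, beta

Given an n-by-p design matrix X and response y, the sketch returns the indices of the k selected columns along with the refitted coefficients; the least-squares refit after each selection is what distinguishes forward stepwise regression from simpler one-pass greedy matching.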

Speaker Details

Antony Joseph received his Bachelor's and Master's degrees in Statistics from
the Indian Statistical Institute in 2004 and 2006, respectively. After a one-year stint in the financial sector, he was admitted as a doctoral student in Statistics at Yale University. He is currently working toward his Ph.D. in Statistics (expected June 2012) under the supervision of Prof. Andrew Barron. His interests are in high-dimensional statistical inference, machine learning, and information theory. He is the recipient of the Rajiv Gandhi Science Talent Award, for a summer project with JNCASR, India, and of the Francis J. Anscombe Award for academic excellence from Yale.

Series: Microsoft Research Talks