Component Based Models: Graphical Models, Sparsity, Low-rank, and all of that Sort of Thing
- Pradeep Ravikumar | University of Texas
Over the past two decades, two statistical machine learning frameworks, graphical models and structurally constrained (sparse, low-rank, etc.) statistical models, have proved very popular and successful for modeling the high-dimensional systems that arise in modern settings. Interestingly, recent developments have shown that these two frameworks share a commonality based on a simple idea: in each, a complex model parameter is expressed as a superposition of simple components, and this superposition is then leveraged for tractable inference and learning. In graphical model inference, the graph-structured parameter is split into simpler graph-structured components such as edges and trees, while in high-dimensional statistics the simple components reflect the particular structure imposed: in sparse signal recovery, for instance, the sparse parameter can be split into a small number of coordinate vectors.
In this talk, we will provide an overview of these two frameworks of high-dimensional statistical models and graphical models, and then show how we could analyze both of these within a unified framework of component based models.
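As a concrete illustration of the superposition idea in sparse signal recovery (a sketch for context, not material from the talk itself): a sparse parameter is a superposition of a few coordinate vectors, and it can be recovered from noisy linear measurements by solving the Lasso. The example below uses ISTA (iterative soft-thresholding), a standard proximal gradient method; all variable names and problem sizes here are illustrative choices.

```python
import numpy as np

def soft_threshold(z, t):
    """Componentwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(X, y, lam, n_iter=500):
    """Minimize 0.5*||y - X b||^2 + lam*||b||_1 by proximal gradient steps."""
    step = 1.0 / np.linalg.norm(X, ord=2) ** 2  # 1/L, L = Lipschitz const. of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        b = soft_threshold(b + step * X.T @ (y - X @ b), step * lam)
    return b

# Synthetic example: the true parameter is a superposition of 3 coordinate vectors.
rng = np.random.default_rng(0)
n, p, k = 100, 50, 3                       # samples, dimension, sparsity
beta = np.zeros(p)
beta[:k] = [3.0, -2.0, 1.5]
X = rng.standard_normal((n, p))
y = X @ beta + 0.05 * rng.standard_normal(n)

beta_hat = ista(X, y, lam=1.0)
support = np.flatnonzero(np.abs(beta_hat) > 0.1)
print(support)  # indices of the recovered sparse components
```

With enough samples relative to the sparsity level, the estimate concentrates on the same few coordinate directions as the true parameter, which is exactly the "small number of simple components" being leveraged.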
Speaker Details
I lead the Statistical Machine Learning Group at the Department of Computer Science at the University of Texas at Austin. I am also affiliated with the Division of Statistics and Scientific Computation, and the Institute for Computational Engineering and Sciences.
I obtained my PhD from the School of Computer Science at Carnegie Mellon University in 2007, and was a postdoc at the Department of Statistics, University of California, Berkeley through 2009.
Jeff Running