Finding Sparse Representations in Multiple Response Models via Bayesian Learning
- David Wipf ,
- Bhaskar D. Rao
Workshop on Signal Processing with Adaptive Sparse Structured Representations, Rennes, France, November 2005.
Given a large overcomplete dictionary of basis vectors, our goal is to efficiently represent L > 1 signal vectors using coefficient expansions marked by a common sparsity profile. This generalizes the standard sparse representation problem to the case where we have access to multiple responses that were putatively generated by the same small subset of features. Ideally, we would like to uncover the associated sparse generating weights, which can have physical significance in many applications. The exact solution to this problem is combinatorial, so we seek approximate procedures. Sparse approximation algorithms tailored to the multiple response domain have typically fallen into two categories: greedy algorithms such as Matching Pursuit, and regularized least-squares methods such as Basis Pursuit and FOCUSS. While these approaches have been extensively analyzed by others, there has been comparably less progress on the development of new sparse approximation cost functions and algorithms. Herein, we derive an alternative cost function and associated learning rule based upon a sparse Bayesian learning formulation that improves upon existing methods in many cases.
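To make the setting concrete, the following is a minimal NumPy sketch of a sparse Bayesian learning approach to the multiple-response problem, in the spirit of the formulation described above. It is an illustrative EM-style implementation under assumed notation (dictionary `Phi`, responses `Y`, row-variance hyperparameters `gamma`), not the authors' reference code: each dictionary column is assigned a variance hyperparameter shared across all L responses, and hyperparameters driven to zero prune the corresponding rows, yielding a common sparsity profile.

```python
import numpy as np

def mmv_sbl(Phi, Y, noise_var=1e-6, n_iter=200, prune_tol=1e-8):
    """Sparse Bayesian learning for jointly sparse recovery: Y ~ Phi @ W,
    where the rows of W share a common sparsity profile across the L columns.
    Illustrative EM-style sketch; variable names are assumptions, not the
    paper's notation.  Phi: (N, M) dictionary, Y: (N, L) responses."""
    N, M = Phi.shape
    L = Y.shape[1]
    gamma = np.ones(M)  # one variance hyperparameter per dictionary column

    for _ in range(n_iter):
        Gamma = np.diag(gamma)
        # Marginal covariance shared by every response column.
        Sigma_y = noise_var * np.eye(N) + Phi @ Gamma @ Phi.T
        Sy_inv = np.linalg.inv(Sigma_y)
        # Posterior mean of the weights (M x L) and posterior row variances.
        Mu = Gamma @ Phi.T @ Sy_inv @ Y
        Sigma_diag = gamma - np.einsum(
            'ij,jk,ki->i', Gamma @ Phi.T, Sy_inv, Phi @ Gamma)
        # EM update: mean squared posterior weight plus posterior variance.
        gamma = (Mu ** 2).sum(axis=1) / L + Sigma_diag
        gamma[gamma < prune_tol] = 0.0  # prune rows with vanishing variance
    return Mu, gamma
```

Rows whose hyperparameter survives the pruning step constitute the estimated common support; `Mu` holds the corresponding weight estimates for all L responses.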