Priors for Deep Networks: Limit theorems, pitfalls, open questions
- Alexander Matthews | University of Cambridge
Much research in Bayesian Deep Learning concerns approximating the posterior. With some notable exceptions, the choice of prior receives less attention. Focusing on this second direction, we discuss recent work on central limit theorems for neural networks with more than one hidden layer, some thoughts on over-confident extrapolation, and examples from the literature of the dangers of improper priors.
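As a minimal numerical sketch of the central-limit behaviour the abstract refers to (an illustration, not material from the talk itself): under i.i.d. Gaussian priors on the weights with variance scaled by 1/fan-in, the prior distribution over a deep network's output at a fixed input approaches a Gaussian as the hidden layers grow wide. The architecture, seed, and sample sizes below are arbitrary choices for the demonstration.

```python
import numpy as np

def random_mlp_output(width, depth, x, rng):
    """Output of a random tanh MLP with i.i.d. N(0, 1/fan_in) weight priors."""
    h = x
    for _ in range(depth):
        # Weight variance 1/fan_in keeps pre-activations O(1) at any width.
        W = rng.normal(0.0, 1.0 / np.sqrt(h.shape[0]), size=(width, h.shape[0]))
        h = np.tanh(W @ h)
    w_out = rng.normal(0.0, 1.0 / np.sqrt(width), size=width)
    return w_out @ h

rng = np.random.default_rng(0)
x = np.ones(3)  # an arbitrary fixed input
kurt = {}
for width in [2, 10, 100]:
    samples = np.array([random_mlp_output(width, 2, x, rng) for _ in range(5000)])
    # Excess kurtosis should shrink toward 0 as the output prior becomes Gaussian.
    kurt[width] = np.mean((samples - samples.mean()) ** 4) / samples.var() ** 2 - 3.0
    print(f"width={width:4d}  excess kurtosis={kurt[width]:+.2f}")
```

Narrow networks typically show heavy-tailed output priors (positive excess kurtosis), while at width 100 the statistic is close to zero; the subtlety the talk addresses is making this limit rigorous when there is more than one hidden layer, where the inputs to later layers are themselves random.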