An Upper Bound on the Bayesian Error Bars for Generalized Linear Regression

in Ellacott, S. W., Mason, J. C. and Anderson, I. J. (Eds.), Mathematics of Neural Networks: Models, Algorithms and Applications

Published by Kluwer | 1997

ISBN: 978-1-4615-6099-9

In the Bayesian framework, predictions for a regression problem are expressed in terms of a distribution of output values. The mode of this distribution corresponds to the most probable output, while the uncertainty associated with the predictions can conveniently be expressed in terms of error bars given by the standard deviation of the output distribution. In this paper we consider the evaluation of error bars in the context of the class of generalized linear regression models. We provide insights into the dependence of the error bars on the location of the data points, and we derive an upper bound on the true error bars in terms of contributions from individual data points, each of which is easily evaluated.
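As an illustration of the quantities the abstract refers to, the sketch below computes the exact Bayesian error bars for a generalized linear model y(x) = w . phi(x), assuming a Gaussian prior on the weights with precision alpha and Gaussian output noise with precision beta. It also shows one possible per-point upper bound, built from operator convexity of the matrix inverse and the Sherman-Morrison identity; this construction is an illustrative assumption, not the bound derived in the paper, and all names (Phi, phi_x, alpha, beta) are likewise illustrative.

```python
import numpy as np

def error_bars(Phi, phi_x, alpha, beta):
    """Exact predictive standard deviation at a test point.

    Phi   : (N, M) design matrix whose rows are the basis vectors phi(x_n)
    phi_x : (M,) basis vector at the test input
    """
    M = Phi.shape[1]
    # Posterior weight precision A = alpha I + beta Phi^T Phi
    A = alpha * np.eye(M) + beta * Phi.T @ Phi
    # Predictive variance: noise term plus phi_x^T A^{-1} phi_x
    var = 1.0 / beta + phi_x @ np.linalg.solve(A, phi_x)
    return np.sqrt(var)

def error_bar_upper_bound(Phi, phi_x, alpha, beta):
    """An upper bound assembled from per-data-point contributions.

    Writes A = sum_n B_n with B_n = (alpha/N) I + beta phi_n phi_n^T.
    Operator convexity of the inverse gives A^{-1} <= (1/N^2) sum_n B_n^{-1},
    and each B_n^{-1} has a closed form by the Sherman-Morrison identity.
    """
    N = Phi.shape[0]
    a = alpha / N
    total = 0.0
    for phi_n in Phi:  # one cheap term per data point
        # phi_x^T B_n^{-1} phi_x, with B_n^{-1} = (1/a)(I - beta u u^T / (a + beta u^T u))
        denom = a + beta * phi_n @ phi_n
        total += (phi_x @ phi_x - beta * (phi_x @ phi_n) ** 2 / denom) / a
    var_bound = 1.0 / beta + total / N**2
    return np.sqrt(var_bound)

# Example: the bound always sits at or above the exact error bar.
rng = np.random.default_rng(0)
Phi = rng.normal(size=(50, 5))   # 50 data points, 5 basis functions
phi_x = rng.normal(size=5)
print(error_bars(Phi, phi_x, alpha=1.0, beta=10.0))
print(error_bar_upper_bound(Phi, phi_x, alpha=1.0, beta=10.0))
```

In this sketch each per-point term costs only a few inner products, so the bound never forms or inverts the full M-by-M posterior precision matrix; this reflects the abstract's point that the individual contributions are easily evaluated, even though the paper's own derivation may proceed differently.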