On the relationship between Bayesian error bars and the input data density


We investigate the dependence of Bayesian error bars on the distribution of data in input space. For generalized linear regression models we derive an upper bound on the error bars which shows that, in the neighbourhood of the data points, the error bars are substantially reduced from their prior values. For regions of high data density we also show that the contribution to the output variance due to the uncertainty in the weights can exhibit an approximate inverse proportionality to the probability density. Empirical results support these conclusions.
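The abstract's central quantity can be illustrated with a minimal sketch. For a generalized linear model y(x) = wᵀφ(x) with Gaussian prior w ~ N(0, α⁻¹I) and noise precision β, the predictive variance at a test point is σ²(x) = β⁻¹ + φ(x)ᵀA⁻¹φ(x), where A = αI + βΦᵀΦ; the second term is the weight-uncertainty contribution the abstract refers to, and it shrinks in regions of high data density. The basis functions, hyperparameter values, and data layout below are illustrative assumptions, not taken from the paper:

```python
# Sketch: Bayesian error bars for a generalized linear regression model.
# Assumed setup (not from the paper): Gaussian basis functions, alpha=1, beta=25.
import numpy as np

def gaussian_basis(x, centres, width=0.3):
    # Design matrix of Gaussian basis functions plus a bias column.
    phi = np.exp(-0.5 * ((x[:, None] - centres[None, :]) / width) ** 2)
    return np.hstack([np.ones((x.shape[0], 1)), phi])

def predictive_variance(x_train, x_test, alpha=1.0, beta=25.0):
    centres = np.linspace(0.0, 1.0, 9)
    Phi = gaussian_basis(x_train, centres)
    # Posterior weight precision: A = alpha*I + beta * Phi^T Phi.
    A = alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi
    Phi_test = gaussian_basis(x_test, centres)
    # Output variance = noise floor 1/beta + weight term phi^T A^{-1} phi.
    weight_term = np.einsum('ij,ij->i', Phi_test @ np.linalg.inv(A), Phi_test)
    return 1.0 / beta + weight_term

rng = np.random.default_rng(0)
x_train = rng.uniform(0.2, 0.4, size=50)   # data clustered in [0.2, 0.4]
x_test = np.array([0.3, 0.9])              # one point inside, one far from the data
var = predictive_variance(x_train, x_test)
```

Consistent with the paper's claim, the variance at x = 0.3 (inside the data cluster) stays close to the noise floor 1/β, while at x = 0.9 the weight-uncertainty term keeps the error bar near its prior value.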

Divisions: Aston University (General)
Additional Information: ©1995 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
Event Title: Proc. Fourth International Conference on Artificial Neural Networks
Event Type: Other
Event Dates: 1995-06-26 - 1995-06-26
Uncontrolled Keywords: Bayes methods, neural nets, prediction theory, Bayesian error bars, error bars, high data density, input data density, linear regression models, probability density
ISBN: 0852966415
Last Modified: 03 Jul 2024 07:21
Date Deposited: 15 Jul 2009 09:05
Related URLs: http://www.scop ... tnerID=8YFLogxK (Scopus URL)
PURE Output Type: Chapter
Published Date: 1995-06-26
Authors: Williams, C. K. I.
Qazaz, C.
Bishop, Christopher M.
Zhu, H.



Version: Published Version
