Neural network regression with input uncertainty


It is generally assumed, when using Bayesian inference methods for neural networks, that the input data are free of noise or corruption. For real-world (errors-in-variables) problems this is clearly an unsafe assumption. This paper presents a Bayesian neural network framework that allows for input noise, provided that some model of the noise process exists. In the limit where this noise is small and symmetric it is shown, using the Laplace approximation, that an additional term appears in the usual Bayesian error bar which depends on the variance of the input noise process. Further, by treating the true (noiseless) input as a hidden variable and sampling it jointly with the network's weights using Markov Chain Monte Carlo methods, it is demonstrated that the unbiased regression over the noiseless input can be inferred.
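The joint-sampling idea in the abstract can be illustrated with a minimal sketch: treat the noiseless inputs as latent variables alongside the network weights, and run a simple Metropolis sampler over both. This is an assumption-laden toy (a small tanh network, a known Gaussian input-noise model, hand-picked step sizes), not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic errors-in-variables data: y depends on x_true, but we only
# observe x_obs = x_true + input noise (noise model assumed known).
N = 30
x_true = rng.uniform(-2, 2, N)
sigma_in, sigma_out = 0.3, 0.1          # assumed known noise scales
x_obs = x_true + rng.normal(0, sigma_in, N)
y = np.sin(x_true) + rng.normal(0, sigma_out, N)

H = 5  # hidden units of a small one-hidden-layer network f(x; w)

def f(x, w):
    a, b, c, d = w[:H], w[H:2*H], w[2*H:3*H], w[3*H]
    return np.tanh(np.outer(x, a) + b) @ c + d

def log_post(w, x):
    lp = -0.5 * np.sum(w**2)                              # weight prior
    lp += -0.5 * np.sum((x_obs - x)**2) / sigma_in**2     # input-noise model
    lp += -0.5 * np.sum((y - f(x, w))**2) / sigma_out**2  # output likelihood
    return lp

# Joint Metropolis sampling of (weights, latent noiseless inputs).
w = rng.normal(0, 0.1, 3 * H + 1)
x = x_obs.copy()
cur = log_post(w, x)
samples = []
for it in range(5000):
    w_new = w + rng.normal(0, 0.02, w.shape)
    x_new = x + rng.normal(0, 0.02, x.shape)
    new = log_post(w_new, x_new)
    if np.log(rng.uniform()) < new - cur:   # Metropolis accept/reject
        w, x, cur = w_new, x_new, new
    if it >= 2500 and it % 10 == 0:         # keep post-burn-in samples
        samples.append((w.copy(), x.copy()))

# Posterior-mean prediction over a grid, averaged over sampled weights:
grid = np.linspace(-2, 2, 50)
pred = np.mean([f(grid, ws) for ws, _ in samples], axis=0)
```

Because the sampler integrates over the latent true inputs rather than conditioning on the noisy observations, the averaged prediction is an estimate of the regression over the noiseless input, which is the effect the abstract describes.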

Divisions: Aston University (General)
Additional Information: ©1998 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
Uncontrolled Keywords: Bayesian inference methods, neural networks, Bayesian neural network, Laplace approximation, Bayesian error, Markov Chain Monte Carlo method, unbiased regression, noiseless input
ISBN: 078035060
Last Modified: 01 Mar 2024 08:06
Date Deposited: 15 Sep 2009 14:09
Related URLs: http://ieeexplo ... ber=15338?tag=1 (Publisher URL)
PURE Output Type: Chapter
Published Date: 1998-09-02
Authors: Wright, W. A.



Version: Published Version


