Bayesian invariant measurements of generalisation for discrete distributions

Abstract

Neural network learning rules can be viewed as statistical estimators. They should be studied in a Bayesian framework even when they are not Bayesian estimators. Generalisation should be measured by the divergence between the true distribution and the estimated distribution. Information divergences are invariant measures of the divergence between two distributions. Here the posterior average information divergence is used to measure the generalisation ability of a network. The optimal estimators for multinomial distributions with Dirichlet priors are studied in detail, confirming that this definition of generalisation is compatible with intuition. The results also show that many commonly used methods can be placed under this unified framework by assuming particular priors and particular divergences.
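To make the framework concrete: for a multinomial model with a Dirichlet prior, the estimator minimising the posterior average Kullback-Leibler divergence D(p || q), with the true distribution p in the first argument, is the posterior mean, and familiar rules such as the maximum-likelihood estimate and Laplace's rule of succession emerge from special choices of prior. The following is a minimal sketch of this, assuming standard Dirichlet-multinomial conjugacy; the function name is illustrative and not taken from the report.

    import numpy as np

    def posterior_mean_estimator(counts, alpha):
        # Under a Dirichlet(alpha) prior, the posterior over the unknown
        # multinomial p is Dirichlet(counts + alpha); its mean minimises
        # the posterior average KL divergence D(p || q) over estimates q.
        counts = np.asarray(counts, dtype=float)
        alpha = np.asarray(alpha, dtype=float)
        return (counts + alpha) / (counts.sum() + alpha.sum())

    counts = [3, 0, 1]   # observed symbol counts, N = 4

    # alpha -> 0 recovers the maximum-likelihood estimate counts / N.
    print(posterior_mean_estimator(counts, [1e-9] * 3))

    # alpha = 1 (uniform prior) recovers Laplace's rule of succession,
    # (n_i + 1) / (N + k).
    print(posterior_mean_estimator(counts, [1.0] * 3))

Varying the prior hyperparameters alpha thus traces out a family of smoothing rules, which is one sense in which commonly used methods fall under the unified framework described above.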

Divisions: Aston University (General)
Uncontrolled Keywords: neural network, learning rules, Bayesian framework, distribution
Report Number: NCRG/4351
Last Modified: 05 Feb 2024 08:08
Date Deposited: 21 Jul 2009 12:12
PURE Output Type: Technical report
Published Date: 1995-08-31
Authors: Zhu, Huaiyu and Rohwer, Richard
