Hjorth, Lars U. (1999). Regularisation of mixture density networks. Technical Report. Aston University, Birmingham.
Abstract
Mixture Density Networks are a principled method for modelling conditional probability density functions that are non-Gaussian. This is achieved by modelling the conditional distribution for each pattern with a Gaussian mixture model whose parameters are generated by a neural network. This thesis presents a novel method for introducing regularisation in this context for the special case where the means and variances of the spherical Gaussian kernels in the mixtures are fixed to predetermined values. Guidelines for how these parameters can be initialised are given, and it is shown how to apply the evidence framework to mixture density networks to achieve regularisation. This also provides an objective stopping criterion that can replace the 'early stopping' methods used previously. If the neural network used is an RBF network with fixed centres, this opens up new opportunities for improved initialisation of the network weights, which are exploited to start training relatively close to the optimum. The new method is demonstrated on two data sets. The first is a simple synthetic data set, while the second is a real-life data set, namely satellite scatterometer data used to infer the wind speed and wind direction near the ocean surface. For both data sets the regularisation method performs well in comparison with earlier published results. Ideas on how the constraint on the kernels may be relaxed to allow fully adaptable kernels are also presented.
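For orientation, the sketch below illustrates the special case the abstract describes: the kernel centres and the common kernel width are fixed in advance, so the network only has to produce the mixing coefficients of the conditional Gaussian mixture, and the error to be minimised is the negative log-likelihood of the targets. This is a minimal toy, not the report's implementation: the linear-softmax "network", the chosen centres and width, and all array shapes are illustrative assumptions (the report itself uses an RBF network with fixed centres and adds Bayesian regularisation via the evidence framework).

```python
# Minimal sketch (illustrative, not the report's code): a mixture density
# model with fixed spherical Gaussian kernels, where the network outputs
# only the mixing coefficients.
import numpy as np

rng = np.random.default_rng(0)

# Predetermined kernel parameters (assumed values, e.g. chosen to cover the
# target range).
M = 5                                   # number of mixture components
centres = np.linspace(-2.0, 2.0, M)     # fixed kernel means (1-D target)
sigma = 0.5                             # fixed common kernel width

def mixing_coefficients(x, W, b):
    """Toy 'network': a linear layer followed by a softmax, giving one set
    of mixing coefficients per input pattern."""
    a = x @ W + b                       # (N, M) activations
    a -= a.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(a)
    return e / e.sum(axis=1, keepdims=True)

def neg_log_likelihood(x, t, W, b):
    """Negative log-likelihood of targets t under the conditional mixture
    p(t|x) = sum_j pi_j(x) N(t; mu_j, sigma^2)."""
    pi = mixing_coefficients(x, W, b)                       # (N, M)
    norm = 1.0 / (np.sqrt(2.0 * np.pi) * sigma)
    phi = norm * np.exp(-0.5 * ((t[:, None] - centres) / sigma) ** 2)
    return -np.sum(np.log(np.sum(pi * phi, axis=1) + 1e-12))

# Example: 100 patterns with 3 input features and scalar targets.
x = rng.normal(size=(100, 3))
t = rng.normal(size=100)
W = rng.normal(scale=0.1, size=(3, M))
b = np.zeros(M)
print(neg_log_likelihood(x, t, W, b))
```

Because the kernels are fixed, the only adaptable parameters are those producing the mixing coefficients, which is what makes the evidence-framework treatment of regularisation tractable in this special case.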
Divisions: Aston University (General)
Uncontrolled Keywords: NCRG, neural nets, Bayesian regularisation, maximum likelihood estimation, mixture density networks, multivalued functions, neural networks, probability
Report Number: NCRG/99/004
Output Type: Technical report
Published Date: 1999-02-12
Authors: Hjorth, Lars U.