Theoretical foundations of neural networks

Bishop, Christopher M. (1996). Theoretical foundations of neural networks. In: Borcherds, P.; Bubak, M. and Maksymowicz, A. (eds.), Proceedings of Physics Computing '96. Krakow: Academic Computer Centre.


Neural networks have often been motivated by superficial analogy with biological nervous systems. Recently, however, it has become widely recognised that the effective application of neural networks requires instead a deeper understanding of the theoretical foundations of these models. Insight into neural networks comes from a number of fields including statistical pattern recognition, computational learning theory, statistics, information geometry and statistical mechanics. As an illustration of the importance of understanding the theoretical basis for neural network models, we consider their application to the solution of multi-valued inverse problems. We show how a naive application of the standard least-squares approach can lead to very poor results, and how an appreciation of the underlying statistical goals of the modelling process allows the development of a more general and more powerful formalism which can tackle the problem of multi-modality.
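The failure mode described above can be seen without any neural network at all: a least-squares fit converges to the conditional mean of the target, and when the inverse mapping has two branches, that mean lies between them, on neither branch. The sketch below is a hypothetical illustration (the forward map t → x = t², its data, and the linear model are all assumptions for demonstration, not the paper's experiment):

```python
import numpy as np

# Toy multi-valued inverse problem (illustrative assumption, not from the paper):
# forward map t -> x = t**2, so the inverse x -> t has two branches, +sqrt(x)
# and -sqrt(x).
rng = np.random.default_rng(0)
t = rng.uniform(-1.0, 1.0, size=1000)  # true "causes"
x = t**2                               # observed "effects" via the forward map

# A least-squares fit of t as a function of x approximates the conditional
# mean E[t | x].  The two branches are symmetric here, so that mean is near
# zero for every x -- a prediction lying on NEITHER valid branch.
# Ordinary least squares for a linear model t ~ a*x + b:
A = np.stack([x, np.ones_like(x)], axis=1)
coef, *_ = np.linalg.lstsq(A, t, rcond=None)
pred = A @ coef

print("fitted prediction at x=0.25:", coef[0] * 0.25 + coef[1])
print("valid inverse answers at x=0.25: +0.5 and -0.5")
print("largest |prediction| over the data:", np.abs(pred).max())
```

The fitted predictions stay close to zero everywhere, while every correct inverse answer has magnitude √x; averaging over the posterior branches is exactly the pathology a multi-modal (e.g. mixture-based) conditional density model avoids.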

Event Title: Physics Computing '96
Event Type: Other
Event Dates: 1996-01-01 - 1996-01-01
Uncontrolled Keywords: neural networks, nervous systems, statistical pattern recognition, computational learning theory, statistics, information geometry, statistical mechanics
Published Date: 1996


Full text not available from this repository.


