Adaptive back-propagation in on-line learning of multilayer networks

Abstract

An adaptive back-propagation algorithm is studied and compared with gradient descent (standard back-propagation) for on-line learning in two-layer neural networks with an arbitrary number of hidden units. Within a statistical mechanics framework, both numerical studies and a rigorous analysis show that adaptive back-propagation trains faster than gradient descent: it breaks the symmetry between hidden units more efficiently and converges more quickly to optimal generalization.
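
To make the setting concrete, below is a minimal sketch of on-line (one-example-at-a-time) gradient descent in a two-layer soft committee machine, the standard student-teacher setup in this line of work, together with a hypothetical adaptive modification of the back-propagated error. The tanh activation, the number of hidden units K, the learning rate eta, and in particular the beta-scaled derivative in the update rule are illustrative assumptions, not the paper's exact adaptive rule.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 100, 3          # input dimension, number of hidden units (illustrative)
eta, beta = 1.0, 2.0   # learning rate; beta is a hypothetical symmetry-breaking parameter


def g(x):
    """Hidden-unit activation (tanh here; erf-like units are also common)."""
    return np.tanh(x)


def g_prime(x):
    """Derivative of the activation."""
    return 1.0 - np.tanh(x) ** 2


def output(W, xi):
    """Soft committee machine: unit hidden-to-output weights, sum of g over hidden fields."""
    return g(W @ xi / np.sqrt(N)).sum()


B = rng.standard_normal((K, N))          # teacher weights (fixed target network)
W = rng.standard_normal((K, N)) * 0.01   # student weights, small random start

for step in range(10_000):
    xi = rng.standard_normal(N)          # fresh random example each step (on-line learning)
    zeta = output(B, xi)                 # teacher label
    x = W @ xi / np.sqrt(N)              # student hidden fields
    err = zeta - g(x).sum()              # output error

    # Standard back-propagation: delta_i = err * g'(x_i).
    # Hypothetical adaptive variant: modulate the back-propagated derivative
    # with beta so the hidden units differentiate from one another faster.
    delta = err * g_prime(beta * x)

    W += (eta / N) * np.outer(delta, xi)
```

The point of the extra parameter is that plain gradient descent can spend a long time in a symmetric plateau where all hidden units imitate the same teacher unit; reshaping the back-propagated error is one way to escape that plateau sooner, which is the effect the paper quantifies.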

Divisions: College of Engineering & Physical Sciences > Systems analytics research institute (SARI)
Additional Information: Copyright of The MIT Press
Event Title: Neural Information Processing Systems 95
Event Type: Other
Event Dates: 1996-01-01 - 1996-01-01
Uncontrolled Keywords: adaptive back-propagation, algorithm, gradient descent, neural networks, statistical mechanics
ISBN: 0262201070
Last Modified: 26 Dec 2023 09:39
Date Deposited: 16 Jul 2009 09:54
Related URLs: http://mitpress ... type=2&tid=8421 (Publisher URL)
PURE Output Type: Chapter
Published Date: 1996
Authors: West, Ansgar H L
Saad, David (ORCID Profile 0000-0001-9821-2623)
