On-line learning with adaptive back-propagation in two-layer networks

Abstract

An adaptive back-propagation algorithm parameterized by an inverse temperature 1/T is studied and compared with gradient descent (standard back-propagation) for on-line learning in two-layer neural networks with an arbitrary number of hidden units. Within a statistical mechanics framework, we analyse these learning algorithms in both the symmetric and the convergence phase, for finite learning rates, in the case of uncorrelated teachers of similar but arbitrary length T. These analyses show that adaptive back-propagation generally results in faster training than gradient descent, both by breaking the symmetry between hidden units more efficiently and by converging more quickly to optimal generalization.
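
The record above only names the algorithm, so the sketch below is a rough illustration rather than the paper's published equations: on-line learning in a soft committee machine (a two-layer network with fixed hidden-to-output weights), with one plausible reading of the adaptive rule in which the back-propagated derivative of each hidden unit's erf activation is replaced by a Gaussian window whose width is set by a temperature temp (temp = 1 recovers standard gradient descent, consistent with the 1/T parameterization in the abstract). The network sizes, the learning rate, and the exact form of the adaptive factor are all illustrative assumptions.

import numpy as np
from scipy.special import erf

rng = np.random.default_rng(0)

N, K = 100, 3    # input dimension, number of hidden units (student matches teacher)
eta = 0.5        # finite learning rate
temp = 2.0       # temperature; 1/temp plays the role of the inverse temperature

def g(x):
    # hidden-unit activation of the soft committee machine
    return erf(x / np.sqrt(2.0))

def outputs(weights, xi):
    # hidden activations and network output for one example
    x = weights @ xi / np.sqrt(N)
    return x, g(x).sum()

def gen_error(W, B, n_test=2000):
    # Monte Carlo estimate of the generalization error on fresh examples
    err = 0.0
    for _ in range(n_test):
        xi = rng.standard_normal(N)
        err += 0.5 * (outputs(W, xi)[1] - outputs(B, xi)[1]) ** 2
    return err / n_test

B = rng.standard_normal((K, N))
B /= np.linalg.norm(B, axis=1, keepdims=True)   # uncorrelated teachers of similar length
W = rng.standard_normal((K, N)) / np.sqrt(N)    # small random student initialization

for step in range(200 * N):                     # on-line: each example is used once
    xi = rng.standard_normal(N)
    x, sigma = outputs(W, xi)
    zeta = outputs(B, xi)[1]                    # teacher provides the target
    # assumed adaptive factor: Gaussian window of width temp;
    # temp = 1 gives g'(x) and hence standard back-propagation
    factor = np.sqrt(2.0 / np.pi) * np.exp(-x ** 2 / (2.0 * temp))
    W += (eta / N) * (zeta - sigma) * factor[:, None] * xi[None, :]

print(f"generalization error after training: {gen_error(W, B):.4f}")

Widening the window (temp > 1) lets hidden units whose activations are far from any teacher's still receive substantial updates, which is one way such a rule could break the symmetry of the plateau phase faster than plain gradient descent.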

Publication DOI: https://doi.org/10.1103/PhysRevE.56.3426
Divisions: College of Engineering & Physical Sciences > Systems analytics research institute (SARI); Aston University (General)
Additional Information: Copyright of the American Physical Society
Uncontrolled Keywords: adaptive back-propagation, algorithm, inverse temperature, gradient descent, on-line learning, neural networks, learning algorithms, Mathematical Physics, General Physics and Astronomy, Condensed Matter Physics, Statistical and Nonlinear Physics
Publication ISSN: 1550-2376
Last Modified: 01 Nov 2024 08:04
Date Deposited: 11 Mar 2019 17:28
Related URLs: http://www.scop ... tnerID=8YFLogxK (Scopus URL); http://prola.ap ... /v56/i3/p3426_1 (Publisher URL)
PURE Output Type: Article
Published Date: 1997-09
Authors: West, Ansgar H.L.; Saad, David (ORCID 0000-0001-9821-2623)

Version: Accepted Version
