Noise, regularizers, and unrealizable scenarios in online learning from restricted training sets


We study the dynamics of on-line learning in multilayer neural networks where training examples are sampled with repetition and where the number of examples scales with the number of network weights. The analysis is carried out using the dynamical replica method, which yields a closed set of coupled equations for a set of macroscopic variables from which both training and generalization errors can be calculated. We focus on scenarios in which training examples are corrupted by additive Gaussian output noise and regularizers are introduced to improve network performance. We examine the dependence of the dynamics on the noise level, with and without regularizers, as well as the asymptotic values of both training and generalization errors. We also demonstrate the ability of the method to approximate the learning dynamics in structurally unrealizable scenarios. The theoretical results show good agreement with those obtained by computer simulations.
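The setting described above can be illustrated with a minimal simulation sketch (not the paper's dynamical replica analysis): a soft-committee student network is trained by on-line gradient descent on a fixed set of p = alpha*N examples sampled with repetition, the teacher outputs are corrupted by additive Gaussian noise, and a weight-decay regularizer is applied. All parameter values, the tanh activation (a stand-in for the erf activation customary in such models), and the matching student-teacher architecture are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 50, 2        # input dimension, hidden units (illustrative values)
alpha = 4.0         # restricted training set: p = alpha * N examples
p = int(alpha * N)
sigma = 0.2         # std of additive Gaussian noise on teacher outputs
eta = 0.5           # learning rate (scaled by 1/N in the update)
gamma = 0.01        # weight-decay (regularizer) strength

g = np.tanh                              # stand-in activation
dg = lambda u: 1.0 - np.tanh(u) ** 2     # its derivative

B = rng.standard_normal((K, N))          # teacher weights (fixed)
J = 0.1 * rng.standard_normal((K, N))    # student weights (small init)

# Fixed, restricted training set with noisy teacher outputs
X = rng.standard_normal((p, N))
y = g(X @ B.T / np.sqrt(N)).sum(axis=1) + sigma * rng.standard_normal(p)

def errors(J):
    # Training error: squared deviation on the fixed noisy set
    E_t = 0.5 * np.mean((g(X @ J.T / np.sqrt(N)).sum(axis=1) - y) ** 2)
    # Generalization error: fresh, noise-free examples
    Xf = rng.standard_normal((2000, N))
    s = g(Xf @ J.T / np.sqrt(N)).sum(axis=1)
    t = g(Xf @ B.T / np.sqrt(N)).sum(axis=1)
    E_g = 0.5 * np.mean((s - t) ** 2)
    return E_t, E_g

E_t0, E_g0 = errors(J)
for _ in range(20000):
    i = rng.integers(p)                  # sample with repetition
    x = X[i]
    h = J @ x / np.sqrt(N)               # student hidden-unit fields
    delta = y[i] - g(h).sum()            # output error on this example
    # Gradient step on the squared error, plus weight decay
    J += (eta / N) * (delta * dg(h))[:, None] * x - (eta * gamma / N) * J
E_t1, E_g1 = errors(J)
```

In such a run both errors decrease from their initial values, but because the training set is restricted and noisy, the training error can fall below the generalization error, the gap the analysis quantifies; the weight decay gamma trades a higher training error for better generalization at large noise levels.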

Publication DOI:
Divisions: College of Engineering & Physical Sciences > Systems analytics research institute (SARI)
Additional Information: Copyright of the American Physical Society
Uncontrolled Keywords: on-line learning, multilayer neural networks, dynamical replica method, network performance, noise level
Publication ISSN: 1550-2376
Last Modified: 20 May 2024 07:07
Date Deposited: 10 Aug 2009 13:23
Full Text Link:
Related URLs: http://www.scop ... tnerID=8YFLogxK (Scopus URL)
http://journals ... sRevE.64.011919 (Publisher URL)
PURE Output Type: Article
Published Date: 2001-06-27
Authors: Xiong, Yuan-Sheng
Saad, David (ORCID Profile 0000-0001-9821-2623)



Version: Accepted Version
