Learning with regularizers in multilayer neural networks


We study the effect of regularization in an on-line gradient-descent learning scenario for a general two-layer student network with an arbitrary number of hidden units. Training examples are randomly drawn input vectors labelled by a two-layer teacher network, also with an arbitrary number of hidden units, whose outputs may be corrupted by Gaussian noise. We examine the effect of weight decay regularization on the dynamical evolution of the order parameters and generalization error in various phases of the learning process, in both noiseless and noisy scenarios.
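The setting described above can be sketched in code: a "soft committee machine" student trained on-line by gradient descent on the squared error, with a weight-decay term shrinking the student weights at each step. This is a minimal illustrative sketch, not the paper's analysis; all hyperparameter values (learning rate, decay strength, network sizes) are assumptions chosen for demonstration, and the activation `erf(u/√2)` follows the standard soft-committee convention.

```python
import math
import numpy as np

erf = np.vectorize(math.erf)

def committee_output(W, x):
    """Soft committee machine: sum over hidden units of erf(w_k.x / sqrt(2))."""
    return erf(W @ x / math.sqrt(2)).sum()

def gen_error(J, B, rng, n_test=2000):
    """Monte Carlo estimate of the generalization error 0.5 <(sigma - tau)^2>."""
    errs = []
    for _ in range(n_test):
        x = rng.standard_normal(J.shape[1])
        errs.append(0.5 * (committee_output(J, x) - committee_output(B, x)) ** 2)
    return float(np.mean(errs))

def train(N=100, K=2, M=2, eta=0.5, lam=1e-4, sigma_noise=0.0,
          steps=20000, seed=0):
    """On-line gradient descent with weight decay for a two-layer student
    (K hidden units) learning from a two-layer teacher (M hidden units).
    Hyperparameter values here are illustrative, not taken from the paper."""
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((M, N)) / math.sqrt(N)  # teacher weights, roughly unit-norm rows
    J = rng.standard_normal((K, N)) * 1e-3          # small random student initialization
    for _ in range(steps):
        x = rng.standard_normal(N)                  # fresh random input example
        tau = committee_output(B, x) + sigma_noise * rng.standard_normal()
        u = J @ x / math.sqrt(2)                    # student local fields
        delta = tau - erf(u).sum()                  # output error on this example
        grad = delta * (2 / math.sqrt(math.pi)) * np.exp(-u ** 2)  # per-unit error signal
        # gradient step plus weight-decay shrinkage, both scaled by eta/N
        J += (eta / N) * (np.outer(grad, x) / math.sqrt(2) - lam * J)
    return J, B

# Example: training should reduce the generalization error well below
# that of the untrained (near-zero) student.
J, B = train(N=50, steps=8000)
```

With noise switched on (`sigma_noise > 0`), increasing `lam` trades a bias toward smaller weights against suppression of noise-driven weight growth, which is the effect studied in the paper.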

Divisions: College of Engineering & Physical Sciences > Systems analytics research institute (SARI)
Additional Information: Copyright of the American Physical Society
Uncontrolled Keywords: on-line gradient-descent learning scenario, Gaussian, noise, weight decay, error
Publication ISSN: 1550-2376
Last Modified: 02 Jan 2024 08:04
Date Deposited: 11 Mar 2019 17:29
Related URLs: http://prola.ap ... /v57/i2/p2170_1 (Publisher URL)
PURE Output Type: Article
Published Date: 1998-02
Authors: Saad, David (ORCID Profile 0000-0001-9821-2623)
Rattray, Magnus



Version: Accepted Version



