On-line learning in radial basis function networks

Abstract

An analytic investigation of the average-case learning and generalization properties of radial basis function networks (RBFs) is presented, utilizing on-line gradient descent as the learning rule. The analytic method employed allows both the calculation of the generalization error and the examination of the internal dynamics of the network. The generalization error and internal dynamics are then used to examine the role of the learning rate and the specialization of the hidden units, which gives insight into decreasing the time required for training. The realizable and over-realizable cases are studied in detail: the phase of learning in which the hidden units are unspecialized (the symmetric phase) and the phase of asymptotic convergence are analyzed, and their typical properties found. Finally, simulations are performed that strongly confirm the analytic results.
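
The following is a minimal sketch, not the authors' code, of the setting the abstract describes: a Gaussian RBF student trained by on-line gradient descent to imitate a teacher of the same architecture (the realizable case), with the generalization error estimated on fresh examples. All names and parameter values (N, K, sigma, eta, number of steps) are illustrative assumptions and are not taken from the paper.

```python
# Sketch of on-line gradient descent for a Gaussian RBF network in a
# student-teacher setup with fixed basis-function widths and squared error.
import numpy as np

rng = np.random.default_rng(0)

N = 2        # input dimension (kept small so basis activations are O(1))
K = 2        # hidden units in both teacher and student (realizable case)
sigma = 1.0  # fixed, common basis-function width
eta = 0.1    # learning rate

def rbf_output(x, centres, weights):
    """Output of an RBF network: weighted sum of Gaussian basis functions."""
    act = np.exp(-np.sum((centres - x) ** 2, axis=1) / (2.0 * sigma ** 2))
    return weights @ act, act

# Teacher: fixed random parameters define the rule to be learned.
t_centres = rng.standard_normal((K, N))
t_weights = rng.standard_normal(K)

# Student: parameters adapted from one random example at a time.
s_centres = rng.standard_normal((K, N)) * 0.1
s_weights = rng.standard_normal(K) * 0.1

for step in range(100_000):
    x = rng.standard_normal(N)                     # fresh example each step
    y_teacher, _ = rbf_output(x, t_centres, t_weights)
    y_student, act = rbf_output(x, s_centres, s_weights)
    delta = y_student - y_teacher                  # instantaneous error signal

    # Gradients of the squared error 0.5 * delta**2 w.r.t. student parameters.
    grad_w = delta * act
    grad_c = -delta * (s_weights * act)[:, None] * (s_centres - x) / sigma ** 2

    s_weights -= eta * grad_w                      # on-line gradient descent step
    s_centres -= eta * grad_c

# Monte Carlo estimate of the generalization error on fresh inputs.
test = rng.standard_normal((5_000, N))
errs = [(rbf_output(x, s_centres, s_weights)[0]
         - rbf_output(x, t_centres, t_weights)[0]) ** 2 for x in test]
print(f"estimated generalization error: {0.5 * np.mean(errs):.5f}")
```

Giving the student more hidden units than the teacher in this sketch corresponds to the over-realizable case mentioned in the abstract; the specialization of the student's hidden units can be followed by monitoring the overlaps between student and teacher centres during training.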

Divisions: College of Engineering & Physical Sciences > Systems analytics research institute (SARI)
Additional Information: Copyright of the Massachusetts Institute of Technology Press (MIT Press)
Uncontrolled Keywords: radial basis function networks, error, network, internal dynamics, learning rate, hidden units
Publication ISSN: 1530-888X
Last Modified: 26 Dec 2023 08:04
Date Deposited: 11 Mar 2019 17:28
Full Text Link:
Related URLs: http://www.mitp ... o.1997.9.7.1601 (Publisher URL)
PURE Output Type: Article
Published Date: 1997-10-01
Authors: Freeman, Jason
Saad, David (ORCID Profile 0000-0001-9821-2623)
