On-line learning in radial basis function networks

Freeman, Jason and Saad, David (1997). On-line learning in radial basis function networks. Neural Computation, 9 (7), pp. 1601-1622.

Abstract

An analytic investigation of the average-case learning and generalization properties of radial basis function (RBF) networks is presented, using on-line gradient descent as the learning rule. The analytic method employed allows both the calculation of the generalization error and the examination of the internal dynamics of the network. These are then used to examine the role of the learning rate and of the specialization of the hidden units, giving insight into how training time can be reduced. The realizable and over-realizable cases are studied in detail: the phase of learning in which the hidden units are unspecialized (the symmetric phase) and the phase of asymptotic convergence are analyzed, and their typical properties are found. Finally, simulations are performed which strongly confirm the analytic results.
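The learning rule studied in the paper — on-line gradient descent, where each training step uses a single fresh example to update both the output weights and the basis-function centers of a student network trained on a teacher of the same architecture (the realizable case) — can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact setup: the Gaussian basis-function width, input dimension, number of hidden units, learning rate, and step count below are all arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 2        # input dimension (illustrative choice)
K = 2        # hidden units in teacher and student (realizable case)
eta = 0.1    # learning rate (illustrative choice)
width = 1.0  # fixed Gaussian basis-function width (assumption)

def rbf_output(x, centers, weights):
    """Gaussian RBF network: f(x) = sum_k w_k exp(-||x - c_k||^2 / (2 width^2))."""
    phi = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2 * width ** 2))
    return phi @ weights, phi

# Teacher defines the target rule; student starts from random parameters.
teacher_c = rng.standard_normal((K, N))
teacher_w = rng.standard_normal(K)
student_c = 0.5 * rng.standard_normal((K, N))
student_w = 0.5 * rng.standard_normal(K)

def generalization_error(n_test=2000):
    """Monte Carlo estimate of the average squared student-teacher mismatch."""
    errs = []
    for _ in range(n_test):
        x = rng.standard_normal(N)
        y, _ = rbf_output(x, teacher_c, teacher_w)
        s, _ = rbf_output(x, student_c, student_w)
        errs.append(0.5 * (s - y) ** 2)
    return float(np.mean(errs))

err_before = generalization_error()

# On-line learning: one fresh example per step, gradient descent on the
# instantaneous squared error with respect to weights and centers.
for _ in range(20000):
    x = rng.standard_normal(N)
    y, _ = rbf_output(x, teacher_c, teacher_w)
    s, phi = rbf_output(x, student_c, student_w)
    delta = s - y
    # Gradient step for the output weights: dE/dw_k = delta * phi_k
    student_w -= eta * delta * phi
    # Gradient step for the centers:
    # dE/dc_k = delta * w_k * phi_k * (x - c_k) / width^2
    for k in range(K):
        student_c[k] -= eta * delta * student_w[k] * phi[k] * (x - student_c[k]) / width ** 2

err_after = generalization_error()
```

In this realizable setting the student can in principle match the teacher exactly, so the estimated generalization error falls as the hidden units specialize; the plateau associated with the symmetric phase, analyzed in the paper, can be observed by logging the error over the course of training.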

Divisions: Engineering & Applied Sciences > Mathematics
Engineering & Applied Sciences > Systems analytics research institute (SARI)
Additional Information: Copyright of The MIT Press
Uncontrolled Keywords: radial basis function networks, generalization error, internal dynamics, learning rate, hidden units
Related URLs: http://www.mitp ... o.1997.9.7.1601 (Publisher URL)
Published Date: 1997-10-01
Authors: Freeman, Jason
Saad, David (ORCID: 0000-0001-9821-2623)
