Globally optimal learning rates in multilayer neural networks

Abstract

A method for calculating the globally optimal learning rate in on-line gradient-descent training of multilayer neural networks is presented. The method is based on a variational approach that maximizes the decrease in generalization error over a given time frame. We demonstrate the method by computing optimal learning rates in typical learning scenarios. The method can also be employed when different learning rates are allowed for different parameter vectors, and it can be used to assess related training algorithms based on modifications to the basic gradient-descent rule.
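The on-line setting the abstract refers to can be sketched in a few lines: a two-layer "student" network is trained by single-pass gradient descent on examples labelled by a fixed "teacher" of the same architecture, and the learning rate is the free parameter whose schedule the paper's variational method would optimize. The sketch below is only an illustration of that setting, not the paper's calculation: the soft-committee-machine form, the tanh activation, the eta/N scaling, and the constant learning rate eta = 1.0 are all assumptions standing in for the optimized schedule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not taken from the paper).
N, K = 50, 2                       # input dimension, number of hidden units
teacher = rng.standard_normal((K, N)) / np.sqrt(N)   # fixed target network
student = rng.standard_normal((K, N)) / np.sqrt(N)   # trainable network

def output(w, x):
    """Soft-committee-machine output: sum of hidden-unit activations."""
    return np.tanh(w @ x).sum()

def eg_estimate(w, n_test=2000):
    """Monte Carlo estimate of the generalization error
    eg = <(student(x) - teacher(x))^2> / 2 over Gaussian inputs."""
    xs = rng.standard_normal((n_test, N))
    errs = [(output(w, x) - output(teacher, x)) ** 2 for x in xs]
    return 0.5 * float(np.mean(errs))

def train(w, eta, steps):
    """On-line gradient descent: each example is drawn fresh and used once."""
    w = w.copy()
    for _ in range(steps):
        x = rng.standard_normal(N)
        delta = output(w, x) - output(teacher, x)
        # Gradient of the per-example error 0.5 * delta^2 w.r.t. each row w_i.
        grad = delta * (1.0 - np.tanh(w @ x) ** 2)[:, None] * x[None, :]
        w -= (eta / N) * grad      # eta/N scaling, as is usual in on-line analyses
    return w

before = eg_estimate(student)
after = eg_estimate(train(student, eta=1.0, steps=2000))
print(before, after)
```

In the paper's framework the constant `eta` above is replaced by a time-dependent schedule, chosen variationally to maximize the total decrease in the generalization error over a fixed training window.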

Divisions: College of Engineering & Physical Sciences > Systems Analytics Research Institute (SARI)
Additional Information: Proceedings of the MINERVA workshop on Mesoscopics, Fractals and Neural Networks, 25-27 March 1997, Eilat (IL). This is an electronic version of an article published in Saad, David and Rattray, Magnus (1998). Globally optimal learning rates in multilayer neural networks. Philosophical Magazine Part B, 77 (5), pp. 1523-1530. Philosophical Magazine Part B is available online at: http://www.informaworld.com/openurl?genre=article&issn=1364-2812&volume=77&issue=5&spage=1523
Publication ISSN: 1364-2812
Last Modified: 29 Nov 2023 10:00
Date Deposited: 21 Sep 2009 16:36
Full Text Link (DOI): 10.1080/13642819808205044
Related URLs: http://www.scop ... tnerID=8YFLogxK (Scopus URL)
http://www.info ... ue=5&spage=1523 (Publisher URL)
PURE Output Type: Article
Published Date: 1998-05
Authors: Saad, David
Rattray, Magnus

Version: Accepted Version

