Magnification control in winner relaxing neural gas


An important goal in neural map learning, which can conveniently be accomplished by magnification control, is to achieve information-optimal coding in the sense of information theory. In the present contribution we consider the winner relaxing approach for the neural gas network. Originally, winner relaxing learning is a slight modification of the self-organizing map learning rule that allows the magnification behavior to be adjusted by an a priori chosen control parameter. We transfer this approach to the neural gas algorithm. The magnification exponent can be calculated analytically for arbitrary dimension from a continuum theory, and the entropy of the resulting map is studied numerically, confirming the theoretical prediction. The influence of a diagonal term, which can be added without impacting the magnification, is also studied numerically. This approach to maps of maximal mutual information is attractive for applications, as the winner relaxing term adds only computational cost of the same order and is easy to implement. In particular, it is not necessary to estimate the generally unknown data probability density, as in other magnification control approaches.
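The abstract describes adding a winner-relaxing term to the standard rank-based neural gas update. A minimal sketch of one such update step is given below; the exact form of the relaxing term, and the names `wrng_step`, `eps`, `lam`, and `mu`, are illustrative assumptions based on the abstract's description (following the winner relaxing construction for the SOM), not the paper's verbatim formulation.

```python
import numpy as np

def wrng_step(weights, v, eps=0.05, lam=1.0, mu=0.5):
    """One winner-relaxing neural gas update for a single input vector v.

    Illustrative sketch only: the relaxing term below is an assumed
    transfer of the winner relaxing SOM idea to neural gas.
    """
    # Rank all prototypes by distance to the input (rank 0 = winner).
    dists = np.linalg.norm(weights - v, axis=1)
    ranks = np.argsort(np.argsort(dists))

    # Standard neural gas: rank-based neighborhood function h(k) = exp(-k/lam).
    h = np.exp(-ranks / lam)
    delta = eps * h[:, None] * (v - weights)

    # Winner-relaxing term: the winner receives an extra contribution
    # built from the neighborhood-weighted updates of all other
    # prototypes, scaled by the control parameter mu (assumed form).
    winner = int(np.argmin(dists))
    others = np.arange(len(weights)) != winner
    delta[winner] -= eps * mu * (h[others, None] * (v - weights[others])).sum(axis=0)

    return weights + delta
```

Iterating this step over samples drawn from the data distribution, with the learning rate and neighborhood range annealed over time, would yield a map whose magnification behavior depends on the control parameter `mu`; for `mu = 0` the rule reduces to plain neural gas. Note that no estimate of the data probability density is needed at any point.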

Divisions: College of Engineering & Physical Sciences > School of Informatics and Digital Engineering > Mathematics
College of Engineering & Physical Sciences > Systems analytics research institute (SARI)
College of Engineering & Physical Sciences
Additional Information: Copyright © 2004 Elsevier B.V. All rights reserved.
Uncontrolled Keywords: Magnification control, Neural gas, Self-organizing maps, Vector quantization, Computer Science Applications, Cognitive Neuroscience, Artificial Intelligence
Publication ISSN: 1872-8286
Related URLs: http://www.scop ... tnerID=8YFLogxK (Scopus URL)
PURE Output Type: Article
Published Date: 2005-01-01
Authors: Claussen, Jens Christian (ORCID Profile 0000-0002-9870-4924)
Villmann, Thomas



Version: Accepted Version

