Villmann Thomas, Claussen Jens Christian
Clinic for Psychotherapy, University of Leipzig, 04107 Leipzig, Germany.
Neural Comput. 2006 Feb;18(2):446-69. doi: 10.1162/089976606775093918.
We consider different ways to control the magnification in self-organizing maps (SOMs) and neural gas (NG). Starting from early approaches to magnification control in vector quantization, we then concentrate on approaches for SOM and NG. We show that three structurally similar approaches can be applied to both algorithms: localized learning, concave-convex learning, and winner-relaxing learning. The approach of concave-convex learning in SOM is extended to a more general description, whereas concave-convex learning for NG is new. In general, the control mechanisms produce only slightly different behavior in the two neural algorithms. However, we emphasize that the NG results are valid for any data dimension, whereas the SOM results hold only in the one-dimensional case.
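For orientation, the baseline that the magnification-control schemes modify is the standard neural gas update of Martinetz and Schulten, in which every unit is adapted with a strength that decays with its distance rank to the input. The sketch below is a minimal plain-NG quantizer on one-dimensional data; it is not taken from the paper, and all parameter names and annealing schedules are illustrative assumptions. The control schemes discussed in the abstract alter this rule, e.g. by making the learning rate depend on the local data density (localized learning) or by applying a power to the update (concave-convex learning).

```python
import numpy as np

def neural_gas(data, n_units=10, n_iter=5000, eps0=0.5, eps_f=0.01,
               lam0=5.0, lam_f=0.1, seed=0):
    """Plain neural gas vector quantizer on 1-D data (baseline rule).

    Magnification-control variants modify this update, e.g. via a
    locally adapted learning rate; this sketch shows only the
    unmodified rank-based rule, with illustrative hyperparameters.
    """
    rng = np.random.default_rng(seed)
    # initialize codebook vectors from random data samples
    w = rng.choice(data, size=n_units, replace=False).astype(float)
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        # exponentially annealed learning rate and neighborhood range
        frac = t / n_iter
        eps = eps0 * (eps_f / eps0) ** frac
        lam = lam0 * (lam_f / lam0) ** frac
        # rank each unit by its distance to the current input
        ranks = np.argsort(np.argsort(np.abs(w - x)))
        # rank-based neighborhood update: closer units move more
        w += eps * np.exp(-ranks / lam) * (x - w)
    return np.sort(w)

# Usage: quantize 2000 uniform samples with 8 units
codebook = neural_gas(np.random.default_rng(1).random(2000), n_units=8)
```

After training, the codebook density over the data distribution follows a power law whose exponent (the magnification factor) is exactly what the schemes in the paper are designed to control.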