Constrained learning vector quantization.

Author Information

Yan H

Affiliation

Department of Electrical Engineering, University of Sydney, NSW, Australia.

Publication Information

Int J Neural Syst. 1994 Jun;5(2):143-52. doi: 10.1142/s0129065794000165.

Abstract

Kohonen's learning vector quantization (LVQ) is an efficient neural network based technique for pattern recognition. The performance of the method depends on proper selection of the learning parameters, and over-training may degrade the recognition rate of the final classifier. In this paper we introduce constrained learning vector quantization (CLVQ). In this method, the coefficients updated in each iteration are accepted only if the recognition performance of the classifier on the training samples does not decrease after the update, a constraint widely used in many prototype editing procedures to simplify and optimize a nearest neighbor classifier (NNC). An efficient computer algorithm is developed to implement this constraint. The method is verified with experimental results, which show that CLVQ outperforms LVQ and may even require much less training time.
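
The acceptance constraint described above can be illustrated with a short sketch. The following Python fragment applies an LVQ1-style update and keeps a candidate prototype move only if the recognition rate on the training set does not decrease. It is only an illustration of the idea under assumed details (Euclidean nearest-prototype classification, a fixed learning rate lr, and names such as clvq_epoch and nearest_prototype), not the efficient algorithm developed in the paper.

import numpy as np

def nearest_prototype(x, prototypes):
    # Index of the prototype closest to sample x under Euclidean distance.
    return int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))

def recognition_rate(X, y, prototypes, proto_labels):
    # Fraction of training samples whose nearest prototype carries the correct label.
    hits = sum(proto_labels[nearest_prototype(x, prototypes)] == t for x, t in zip(X, y))
    return hits / len(X)

def clvq_epoch(X, y, prototypes, proto_labels, lr=0.05):
    # One pass over the training data with the acceptance constraint applied.
    for x, t in zip(X, y):
        rate_before = recognition_rate(X, y, prototypes, proto_labels)
        j = nearest_prototype(x, prototypes)
        candidate = prototypes.copy()
        # LVQ1 rule: move the winning prototype toward the sample if the labels
        # agree, away from it otherwise.
        sign = 1.0 if proto_labels[j] == t else -1.0
        candidate[j] += sign * lr * (x - candidate[j])
        # Constraint: keep the update only if the training-set recognition rate
        # does not drop below its value before the update.
        if recognition_rate(X, y, candidate, proto_labels) >= rate_before:
            prototypes = candidate
    return prototypes

Re-evaluating the whole training set after every candidate update, as done here, is the naive way to enforce the constraint; the efficient algorithm mentioned in the abstract is presumably designed to avoid this cost, but its details are not reproduced here.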
