

Label smoothing and task-adaptive loss function based on prototype network for few-shot learning.

Affiliations

School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China.

Publication information

Neural Netw. 2022 Dec;156:39-48. doi: 10.1016/j.neunet.2022.09.018. Epub 2022 Sep 23.

Abstract

Prototypical networks suffer from two problems: the label information is not sufficiently reliable, and the hyperparameters of the loss function cannot track changes in the image feature information. To address both, we propose a method that combines label smoothing with task-adaptive hyperparameters. First, the label information of each image is processed with label smoothing regularization. Then, for each classification task, the distance matrix of the image features and a logarithmic operation are used to fuse the distance matrix with the hyperparameters of the loss function. Finally, the hyperparameters are combined with the smoothed labels and the distance matrix to predict the classification. The method is validated on the miniImageNet, FC100 and tieredImageNet datasets. Compared with unsmoothed labels and fixed hyperparameters, the flexible hyperparameters in the loss function improve classification accuracy by 2%-3% under few-shot conditions. These results show that the proposed method suppresses the interference of false labels and that the flexibility of the hyperparameters improves classification accuracy.
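The abstract does not give the exact fusion formula, but the pipeline it describes (class prototypes, a distance matrix over query features, label smoothing, and a loss scale driven by a logarithm of the distances) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the choice of `tau = log1p(mean distance)` as the task-adaptive hyperparameter is an assumption standing in for the paper's actual log-based fusion.

```python
import numpy as np

def log_softmax(z):
    # Numerically stable log-softmax along the class axis.
    z = z - z.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

def class_prototypes(support, labels, n_classes):
    # Standard prototypical networks: one mean embedding per class.
    return np.stack([support[labels == c].mean(axis=0)
                     for c in range(n_classes)])

def smoothed_targets(y, n_classes, eps=0.1):
    # Label smoothing regularization: (1 - eps) on the true class,
    # eps / (K - 1) spread uniformly over the remaining classes.
    t = np.full((len(y), n_classes), eps / (n_classes - 1))
    t[np.arange(len(y)), y] = 1.0 - eps
    return t

def task_adaptive_loss(query, protos, y, eps=0.1):
    # Squared Euclidean distance matrix: queries x prototypes.
    d = ((query[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    # Hypothetical task-adaptive scale: a log of the mean distance,
    # so the loss "temperature" follows the feature statistics of the
    # current task instead of staying fixed across tasks.
    tau = np.log1p(d.mean())
    logp = log_softmax(-d / tau)
    targets = smoothed_targets(y, protos.shape[0], eps)
    loss = -(targets * logp).sum(axis=1).mean()
    preds = logp.argmax(axis=1)
    return loss, preds
```

A toy two-class episode shows the moving parts: build prototypes from a labeled support set, then score held-out queries with the smoothed, distance-scaled loss.

```python
rng = np.random.default_rng(0)
support = np.concatenate([rng.normal(0.0, 0.1, (5, 4)),
                          rng.normal(5.0, 0.1, (5, 4))])
labels = np.array([0] * 5 + [1] * 5)
protos = class_prototypes(support, labels, 2)
query = np.concatenate([rng.normal(0.0, 0.1, (3, 4)),
                        rng.normal(5.0, 0.1, (3, 4))])
y_query = np.array([0, 0, 0, 1, 1, 1])
loss, preds = task_adaptive_loss(query, protos, y_query)
```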

