AFINet: Attentive Feature Integration Networks for image classification.

Affiliations

University of Electronic Science and Technology of China, Chengdu, China; Department of Network Intelligence, Peng Cheng Lab, Shenzhen, China.

School of Science and Technology, Harbin Institute of Technology Shenzhen, Shenzhen, China; Department of Network Intelligence, Peng Cheng Lab, Shenzhen, China.

Publication information

Neural Netw. 2022 Nov;155:360-368. doi: 10.1016/j.neunet.2022.08.026. Epub 2022 Sep 5.

Abstract

Convolutional Neural Networks (CNNs) have achieved tremendous success in a number of learning tasks, including image classification. Residual-like networks, such as ResNets, rely mainly on skip connections to avoid vanishing gradients. However, the skip-connection mechanism limits the utilization of intermediate features because of its simple iterative updates. To mitigate the redundancy of residual-like networks, we design Attentive Feature Integration (AFI) modules, which are widely applicable to most residual-like network architectures and lead to new architectures named AFI-Nets. AFI-Nets explicitly model the correlations among different levels of features and selectively transfer features with little overhead. Compared to ResNet-152, AFI-ResNet-152 obtains a 1.24% relative improvement on the ImageNet dataset while decreasing FLOPs by about 10% and the number of parameters by about 9.2%.
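
The abstract does not specify the AFI module's internals, but the general idea it describes, attentively weighting features from several levels before fusing them, can be illustrated with a short PyTorch sketch. Everything below is an assumption for illustration only: the AFIModule class name, the squeeze-and-excitation-style bottleneck scorer, and the softmax mixing across levels are not taken from the paper, whose exact design may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AFIModule(nn.Module):
    """Hypothetical sketch of attentive feature integration over multi-level
    features. Each input feature map is globally pooled into a channel
    descriptor, a small bottleneck MLP scores it, and a softmax across
    levels yields per-channel mixing weights for the fused output."""

    def __init__(self, channels: int, num_levels: int, reduction: int = 16):
        super().__init__()
        # Squeeze-and-excitation-style scorer (an assumption, not the paper's design).
        self.score = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.num_levels = num_levels

    def forward(self, feats):
        # feats: list of num_levels tensors, each (B, C, H, W), same resolution.
        desc = torch.stack([f.mean(dim=(2, 3)) for f in feats], dim=1)  # (B, L, C)
        logits = self.score(desc)                                       # (B, L, C)
        weights = F.softmax(logits, dim=1)   # normalize across levels, per channel
        stacked = torch.stack(feats, dim=1)                             # (B, L, C, H, W)
        # Weighted sum over levels -> (B, C, H, W).
        return (weights.unsqueeze(-1).unsqueeze(-1) * stacked).sum(dim=1)


if __name__ == "__main__":
    # Fuse three same-shaped stage outputs from a residual backbone.
    feats = [torch.randn(2, 64, 32, 32) for _ in range(3)]
    fused = AFIModule(channels=64, num_levels=3)(feats)
    print(fused.shape)  # torch.Size([2, 64, 32, 32])
```

In an AFI-ResNet, such a module would sit where stage outputs of matching shape are available; features at mismatched resolutions would first need pooling or a 1x1 projection, a detail the paper presumably handles but this sketch omits.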
