

A Feature Fusion Method with Guided Training for Classification Tasks.

Authors

Zhang Taohong, Fan Suli, Hu Junnan, Guo Xuxu, Li Qianqian, Zhang Ying, Wulamu Aziguli

Affiliations

Department of Computer, School of Computer and Communication Engineering, University of Science and Technology Beijing (USTB), Beijing 100083, China.

Beijing Key Laboratory of Knowledge Engineering for Materials Science, Beijing 100083, China.

Publication

Comput Intell Neurosci. 2021 Apr 14;2021:6647220. doi: 10.1155/2021/6647220. eCollection 2021.

DOI: 10.1155/2021/6647220
PMID: 33936189
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8062192/
Abstract

In this paper, a feature fusion method with guided training (FGT-Net) is constructed to fuse image data and numerical data for recognition tasks that cannot be classified accurately from images alone. The proposed structure is divided into a shared-weight network, a feature fusion layer, and a classification layer. First, a guided training method is proposed to optimize the training process: representative images and training images are fed into the shared-weight network so that it learns to extract image features better. The image features and numerical features are then fused in the feature fusion layer and passed to the classification layer for the classification task. The loss is computed from the outputs of both the shared-weight network and the classification layer. Experiments verify the effectiveness of the proposed model: FGT-Net achieves an accuracy of 87.8%, which is 15% higher than the ShuffleNetv2 CNN model (which processes image data only) and 9.8% higher than the DNN method (which processes structured data only).
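The forward pass described in the abstract can be sketched in a few lines of numpy. This is a minimal illustrative sketch, not the authors' exact design: the layer sizes, the ReLU/softmax choices, and the MSE guidance term between the two shared-weight feature maps are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_weight_net(x, W):
    # Shared-weight feature extractor: the SAME weights W are applied to
    # both the training image and its representative image.
    return np.maximum(x @ W, 0.0)  # ReLU

def fuse_and_classify(img_feat, num_feat, W_cls):
    # Feature fusion layer: concatenate image features with numerical features,
    # then map the fused vector to class probabilities.
    fused = np.concatenate([img_feat, num_feat], axis=-1)
    logits = fused @ W_cls
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)  # softmax

# Hypothetical shapes: 64-dim image input -> 16-dim features,
# 4 numerical features, 3 classes.
W_shared = rng.normal(size=(64, 16))
W_cls = rng.normal(size=(16 + 4, 3))

train_img = rng.normal(size=(1, 64))
repr_img = rng.normal(size=(1, 64))   # representative image for guidance
num_data = rng.normal(size=(1, 4))

f_train = shared_weight_net(train_img, W_shared)
f_repr = shared_weight_net(repr_img, W_shared)
probs = fuse_and_classify(f_train, num_data, W_cls)

# Loss combines the classification output with the shared-weight network's
# output, as in the paper; the MSE form of the guidance term is assumed here.
guidance_loss = np.mean((f_train - f_repr) ** 2)
cls_loss = -np.log(probs[0, 0])  # assume class 0 is the true label
total_loss = cls_loss + guidance_loss
```

During training, minimizing the guidance term pulls the features of each training image toward those of its representative image, which is the intuition behind "guided training" in the abstract.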


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0d5b/8062192/52f9c4e81c73/CIN2021-6647220.001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0d5b/8062192/1c3d688df0be/CIN2021-6647220.002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0d5b/8062192/ae74ff595abe/CIN2021-6647220.003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0d5b/8062192/1ff53e09a255/CIN2021-6647220.004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0d5b/8062192/9e5ad91b5a29/CIN2021-6647220.005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0d5b/8062192/a5f6fa8ac5ae/CIN2021-6647220.006.jpg

Similar Articles

1. A Feature Fusion Method with Guided Training for Classification Tasks. Comput Intell Neurosci. 2021 Apr 14;2021:6647220. doi: 10.1155/2021/6647220. eCollection 2021.
2. Effect of dual-convolutional neural network model fusion for Aluminum profile surface defects classification and recognition. Math Biosci Eng. 2022 Jan;19(1):997-1025. doi: 10.3934/mbe.2022046. Epub 2021 Nov 25.
3. Multimodal Medical Image Fusion using Rolling Guidance Filter with CNN and Nuclear Norm Minimization. Curr Med Imaging. 2020;16(10):1243-1258. doi: 10.2174/1573405616999200817103920.
4. Fully Automated Convolutional Neural Network Method for Quantification of Breast MRI Fibroglandular Tissue and Background Parenchymal Enhancement. J Digit Imaging. 2019 Feb;32(1):141-147. doi: 10.1007/s10278-018-0114-7.
5. Automated Identification of Hookahs (Waterpipes) on Instagram: An Application in Feature Extraction Using Convolutional Neural Network and Support Vector Machine Classification. J Med Internet Res. 2018 Nov 21;20(11):e10513. doi: 10.2196/10513.
6. AF-SENet: Classification of Cancer in Cervical Tissue Pathological Images Based on Fusing Deep Convolution Features. Sensors (Basel). 2020 Dec 27;21(1):122. doi: 10.3390/s21010122.
7. A Novel Bilinear Feature and Multi-Layer Fused Convolutional Neural Network for Tactile Shape Recognition. Sensors (Basel). 2020 Oct 15;20(20):5822. doi: 10.3390/s20205822.
8. Spatio-Temporal Representation of an Electroencephalogram for Emotion Recognition Using a Three-Dimensional Convolutional Neural Network. Sensors (Basel). 2020 Jun 20;20(12):3491. doi: 10.3390/s20123491.
9. A novel fused convolutional neural network for biomedical image classification. Med Biol Eng Comput. 2019 Jan;57(1):107-121. doi: 10.1007/s11517-018-1819-y. Epub 2018 Jul 12.
10. Computer assisted recognition of breast cancer in biopsy images via fusion of nucleus-guided deep convolutional features. Comput Methods Programs Biomed. 2020 Oct;194:105531. doi: 10.1016/j.cmpb.2020.105531. Epub 2020 May 11.

Cited By

1. Identification of Soybean Mutant Lines Based on Dual-Branch CNN Model Fusion Framework Utilizing Images from Different Organs. Plants (Basel). 2023 Jun 14;12(12):2315. doi: 10.3390/plants12122315.

References

1. Subject-Independent Emotion Recognition of EEG Signals Based on Dynamic Empirical Convolutional Neural Network. IEEE/ACM Trans Comput Biol Bioinform. 2021 Sep-Oct;18(5):1710-1721. doi: 10.1109/TCBB.2020.3018137. Epub 2021 Oct 7.