

Deep learning based classification of multi-label chest X-ray images via dual-weighted metric loss.

Authors

Jin Yufei, Lu Huijuan, Zhu Wenjie, Huo Wanli

Affiliations

College of Information Engineering, China Jiliang University, Hangzhou, China.

Publication

Comput Biol Med. 2023 May;157:106683. doi: 10.1016/j.compbiomed.2023.106683. Epub 2023 Feb 15.

Abstract

Thoracic disease, like many other diseases, can lead to complications. Existing multi-label medical image learning problems typically include rich pathological information, such as images, attributes, and labels, which is crucial for supplementary clinical diagnosis. However, most contemporary work focuses exclusively on regression from input to binary labels, ignoring the relationship between visual features and the semantic vectors of labels. In addition, the amount of data is imbalanced across diseases, which frequently causes intelligent diagnostic systems to make erroneous predictions. We therefore aim to improve the accuracy of multi-label classification of chest X-ray images. The ChestX-ray14 images were used as the multi-label dataset in this study. By fine-tuning a ConvNeXt network, we obtained visual vectors, which we combined with semantic vectors encoded by BioBERT, mapping the two different forms of features into a common metric space in which the semantic vectors serve as the prototype of each class. The metric relationship between images and labels is then considered at the image level and the disease-category level, respectively, and a new dual-weighted metric loss function is proposed. The average AUC score achieved in the experiments reached 0.826, and our model outperformed the comparison models.
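The abstract does not give the loss formula, but the two-level weighting it describes can be illustrated with a minimal sketch: visual vectors are compared against per-class semantic prototypes in a shared space, with one weight at the image level (normalizing over each image's positive labels) and one at the disease-category level (up-weighting rare classes to counter data imbalance). The function name, the squared-Euclidean distance, and the inverse-frequency class weighting are all assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def dual_weighted_metric_loss(img_emb, label_emb, y, class_weights=None):
    """Illustrative sketch of a dual-weighted metric loss (hypothetical form).

    img_emb   : (N, D) visual vectors already projected into the shared space
    label_emb : (C, D) semantic prototypes, one per disease class
    y         : (N, C) binary multi-label matrix
    """
    # Squared Euclidean distance from every image to every class prototype, (N, C).
    d = ((img_emb[:, None, :] - label_emb[None, :, :]) ** 2).sum(-1)

    # Image-level weight: average the distance over each image's positive labels,
    # so images with many diseases do not dominate the loss.
    pos_per_img = y.sum(axis=1, keepdims=True).clip(min=1)
    img_term = y * d / pos_per_img  # (N, C)

    # Class-level weight: inverse label frequency (assumed), so rare diseases
    # contribute more, counteracting the data imbalance noted in the abstract.
    if class_weights is None:
        freq = y.sum(axis=0).clip(min=1)
        class_weights = freq.sum() / (len(freq) * freq)  # (C,)

    return float((img_term * class_weights).mean())
```

With identical image and prototype embeddings the distances, and hence the loss, are zero; pulling images toward their positive-class prototypes minimizes this objective, which is the prototype-matching behavior the abstract describes.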

