Suppr 超能文献




Enhancing deep learning based classifiers with inpainting anatomical side markers (L/R markers) for multi-center trials.

Authors

Kim Ki Duk, Cho Kyungjin, Kim Mingyu, Lee Kyung Hwa, Lee Seungjun, Lee Sang Min, Lee Kyung Hee, Kim Namkug

Affiliations

Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul 05505, Republic of Korea.

Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea.

Publication

Comput Methods Programs Biomed. 2022 Jun;220:106705. doi: 10.1016/j.cmpb.2022.106705. Epub 2022 Feb 22.

DOI: 10.1016/j.cmpb.2022.106705
PMID: 35462346
Abstract

BACKGROUND AND OBJECTIVE

The protocol for placing anatomical side markers (L/R markers) in chest radiographs varies from one hospital or department to another. However, the markers carry strong signals that a deep learning-based classifier can exploit to predict diseases. We aimed to enhance the performance of deep learning-based classifiers on multi-center datasets by inpainting the L/R markers.

METHODS

The L/R markers were detected using the EfficientDet detection network; only the detected regions were inpainted using a generative adversarial network (GAN). To analyze the effect of the inpainting in detail, deep learning-based classifiers were trained using original images, marker-inpainted images, and original images clipped using the min-max value of the marker-inpainted images. Binary classification, multi-class classification, and multi-task learning with segmentation and classification were developed and evaluated. Furthermore, the performances of the networks on internal and external validation datasets were compared using DeLong's test for two correlated receiver operating characteristic (ROC) curves in binary classification and the Stuart-Maxwell test for marginal homogeneity in multi-class classification and multi-task learning. In addition, the qualitative results of activation maps were evaluated using the gradient-weighted class activation map (Grad-CAM).
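The min-max clipping variant described above can be sketched in a few lines of NumPy: the original image's intensities are clipped to the range of its marker-inpainted counterpart, removing the extreme values contributed by the bright L/R marker. This is an illustrative reconstruction from the abstract, not the authors' code; the helper name and toy arrays are assumptions.

```python
import numpy as np

def clip_to_inpainted_range(original, inpainted):
    # Clip the original image to the intensity range of its
    # marker-inpainted counterpart, so the marker's extreme
    # brightness no longer dominates intensity normalization.
    lo, hi = inpainted.min(), inpainted.max()
    return np.clip(original, lo, hi)

# toy example: a bright simulated "marker" patch raises the
# original image's maximum far above the background level
original = np.full((8, 8), 100.0)
original[0:2, 0:2] = 255.0           # simulated L/R marker
inpainted = np.full((8, 8), 100.0)   # marker region filled with background

clipped = clip_to_inpainted_range(original, inpainted)
```

After clipping, the marker pixels collapse to the background range, so classifiers trained on these images cannot latch onto hospital-specific marker styles.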

RESULTS

Marker-inpainting preprocessing improved the classification performances. In the binary classification based on the internal validation, the areas under the curve (AUCs) and accuracies were 0.950 and 0.900 for the model trained on the min-max clipped images and 0.911 and 0.850 for the model trained on the original images, respectively (P-value=0.006). In the external validation, the AUCs and accuracies were 0.858 and 0.677 for the model trained using the inpainted images and 0.723 and 0.568 for the model trained using the original images, respectively (P-value<0.001). In addition, the models trained using the marker-inpainted images showed the best performance in multi-class classification and multi-task learning. Furthermore, the activation maps obtained using Grad-CAM improved with the proposed method. The 5-fold cross-validation results also showed an improving trend across the preprocessing strategies.
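For reference, each AUC compared above is equivalent to the normalized Mann-Whitney U statistic, which is also the quantity DeLong's test compares between two correlated classifiers. A minimal rank-based sketch follows; the function name and toy scores are assumptions for illustration, not from the paper.

```python
import numpy as np

def auc_rank(pos_scores, neg_scores):
    # AUC = P(a positive case scores above a negative case),
    # counting ties as half; this is U / (n_pos * n_neg),
    # the statistic underlying DeLong's test.
    pos = np.asarray(pos_scores, dtype=float)
    neg = np.asarray(neg_scores, dtype=float)
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)

# toy classifier scores for diseased (positive) vs normal (negative) cases
auc = auc_rank([0.9, 0.8, 0.4], [0.3, 0.5])
```

With these toy scores, 5 of the 6 positive-negative pairs are ranked correctly, giving an AUC of 5/6.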

CONCLUSIONS

Inpainting the L/R markers significantly enhanced the classifier's performance and robustness in both internal and external validation, which could be useful for developing more robust and accurate deep learning-based classifiers for multi-center trials. The detection code is available at https://github.com/mi2rl/MI2RLNet, and the inpainting code is available at https://github.com/mi2rl/L-R-marker-inpainting.


Similar articles

1. Enhancing deep learning based classifiers with inpainting anatomical side markers (L/R markers) for multi-center trials.
   Comput Methods Programs Biomed. 2022 Jun;220:106705. doi: 10.1016/j.cmpb.2022.106705. Epub 2022 Feb 22.
2. A generative adversarial inpainting network to enhance prediction of periodontal clinical attachment level.
   J Dent. 2022 Aug;123:104211. doi: 10.1016/j.jdent.2022.104211. Epub 2022 Jun 26.
3. Deep learning-based X-ray inpainting for improving spinal 2D-3D registration.
   Int J Med Robot. 2021 Apr;17(2):e2228. doi: 10.1002/rcs.2228. Epub 2021 Feb 15.
4. [Research on multi-class orthodontic image recognition system based on deep learning network model].
   Zhonghua Kou Qiang Yi Xue Za Zhi. 2023 Jun 9;58(6):561-568. doi: 10.3760/cma.j.cn112144-20230305-00070.
5. Predicting muscle invasion in bladder cancer based on MRI: A comparison of radiomics, and single-task and multi-task deep learning.
   Comput Methods Programs Biomed. 2023 May;233:107466. doi: 10.1016/j.cmpb.2023.107466. Epub 2023 Mar 5.
6. A deep learning classifier for digital breast tomosynthesis.
   Phys Med. 2021 Mar;83:184-193. doi: 10.1016/j.ejmp.2021.03.021. Epub 2021 Mar 31.
7. A deep learning approach for projection and body-side classification in musculoskeletal radiographs.
   Eur Radiol Exp. 2024 Feb 14;8(1):23. doi: 10.1186/s41747-023-00417-x.
8. Prediction of osteoporosis from simple hip radiography using deep learning algorithm.
   Sci Rep. 2021 Oct 7;11(1):19997. doi: 10.1038/s41598-021-99549-6.
9. Improved Semantic Image Inpainting Method with Deep Convolution Generative Adversarial Networks.
   Big Data. 2022 Dec;10(6):506-514. doi: 10.1089/big.2021.0203. Epub 2021 Dec 21.
10. Performance and Usability of Code-Free Deep Learning for Chest Radiograph Classification, Object Detection, and Segmentation.
   Radiol Artif Intell. 2023 Feb 15;5(2):e220062. doi: 10.1148/ryai.220062. eCollection 2023 Mar.

Cited by

1. Convolutional neural network-based classification of craniosynostosis and suture lines from multi-view cranial X-rays.
   Sci Rep. 2024 Nov 5;14(1):26729. doi: 10.1038/s41598-024-77550-z.
2. Screening Patient Misidentification Errors Using a Deep Learning Model of Chest Radiography: A Seven Reader Study.
   J Imaging Inform Med. 2025 Apr;38(2):694-702. doi: 10.1007/s10278-024-01245-0. Epub 2024 Sep 11.
3. Anonymizing Radiographs Using an Object Detection Deep Learning Algorithm.
   Radiol Artif Intell. 2023 Sep 13;5(6):e230085. doi: 10.1148/ryai.230085. eCollection 2023 Nov.
4. Overcoming the Challenges in the Development and Implementation of Artificial Intelligence in Radiology: A Comprehensive Review of Solutions Beyond Supervised Learning.
   Korean J Radiol. 2023 Nov;24(11):1061-1080. doi: 10.3348/kjr.2023.0393. Epub 2023 Aug 28.
5. CheSS: Chest X-Ray Pre-trained Model via Self-supervised Contrastive Learning.
   J Digit Imaging. 2023 Jun;36(3):902-910. doi: 10.1007/s10278-023-00782-4. Epub 2023 Jan 26.