

MuSiC-ViT: A multi-task Siamese convolutional vision transformer for differentiating change from no-change in follow-up chest radiographs.

Affiliations

Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, College of Medicine, University of Ulsan, Seoul, Republic of Korea.

Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea.

Publication Information

Med Image Anal. 2023 Oct;89:102894. doi: 10.1016/j.media.2023.102894. Epub 2023 Jul 12.

DOI: 10.1016/j.media.2023.102894
PMID: 37562256
Abstract

A major responsibility of radiologists in routine clinical practice is to read follow-up chest radiographs (CXRs) to identify changes in a patient's condition. Diagnosing meaningful changes in follow-up CXRs is challenging because radiologists must differentiate disease changes from natural or benign variations. Here, we suggest using a multi-task Siamese convolutional vision transformer (MuSiC-ViT) with an anatomy-matching module (AMM) to mimic the radiologist's cognitive process for differentiating baseline change from no-change. MuSiC-ViT uses the convolutional neural networks (CNNs) meet vision transformers model that combines CNN and transformer architecture. It has three major components: a Siamese network architecture, an AMM, and multi-task learning. Because the input is a pair of CXRs, a Siamese network was adopted for the encoder. The AMM is an attention module that focuses on related regions in the CXR pairs. To mimic a radiologist's cognitive process, MuSiC-ViT was trained using multi-task learning, normal/abnormal and change/no-change classification, and anatomy-matching. Among 406 K CXRs studied, 88 K change and 115 K no-change pairs were acquired for the training dataset. The internal validation dataset consisted of 1,620 pairs. To demonstrate the robustness of MuSiC-ViT, we verified the results with two other validation datasets. MuSiC-ViT respectively achieved accuracies and area under the receiver operating characteristic curves of 0.728 and 0.797 on the internal validation dataset, 0.614 and 0.784 on the first external validation dataset, and 0.745 and 0.858 on a second temporally separated validation dataset. All code is available at https://github.com/chokyungjin/MuSiC-ViT.
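The architecture outlined in the abstract (a weight-sharing Siamese encoder over the CXR pair, a change/no-change head on the paired features, per-image normal/abnormal heads, and an anatomy-matching signal) can be sketched in a few lines of NumPy. This is a minimal illustrative sketch only: the layer sizes, the linear encoder, and the cosine-similarity stand-in for the anatomy-matching module are all assumptions, not the authors' implementation (see the linked repository for the real code).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only.
D_IN, D_FEAT = 64, 16

W_enc = rng.normal(size=(D_IN, D_FEAT))      # shared (Siamese) encoder weights
W_change = rng.normal(size=(2 * D_FEAT, 2))  # change / no-change head
W_cls = rng.normal(size=(D_FEAT, 2))         # normal / abnormal head

def encode(x):
    """Shared encoder: both CXRs of a pair pass through the SAME weights."""
    return np.tanh(x @ W_enc)

def forward(x_prev, x_curr):
    f_prev, f_curr = encode(x_prev), encode(x_curr)
    # The change head sees the concatenated features of the pair.
    change_logits = np.concatenate([f_prev, f_curr], axis=-1) @ W_change
    # Normal/abnormal is an auxiliary per-image task in the multi-task setup.
    cls_prev, cls_curr = f_prev @ W_cls, f_curr @ W_cls
    # Cosine similarity as a crude proxy for the anatomy-matching objective.
    sim = (f_prev * f_curr).sum(-1) / (
        np.linalg.norm(f_prev, axis=-1) * np.linalg.norm(f_curr, axis=-1)
    )
    return change_logits, cls_prev, cls_curr, sim

x1 = rng.normal(size=(4, D_IN))  # batch of 4 "baseline" images
x2 = rng.normal(size=(4, D_IN))  # batch of 4 "follow-up" images
change_logits, cls_prev, cls_curr, sim = forward(x1, x2)
print(change_logits.shape, cls_prev.shape, sim.shape)  # (4, 2) (4, 2) (4,)
```

Training would then combine the three losses (change classification, per-image classification, and anatomy matching), which is the multi-task learning the abstract describes.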


Similar Articles

1
MuSiC-ViT: A multi-task Siamese convolutional vision transformer for differentiating change from no-change in follow-up chest radiographs.
Med Image Anal. 2023 Oct;89:102894. doi: 10.1016/j.media.2023.102894. Epub 2023 Jul 12.
2
RT-ViT: Real-Time Monocular Depth Estimation Using Lightweight Vision Transformers.
Sensors (Basel). 2022 May 19;22(10):3849. doi: 10.3390/s22103849.
3
Distilling Knowledge From an Ensemble of Vision Transformers for Improved Classification of Breast Ultrasound.
Acad Radiol. 2024 Jan;31(1):104-120. doi: 10.1016/j.acra.2023.08.006. Epub 2023 Sep 2.
4
Enhancing surgical instrument segmentation: integrating vision transformer insights with adapter.
Int J Comput Assist Radiol Surg. 2024 Jul;19(7):1313-1320. doi: 10.1007/s11548-024-03140-z. Epub 2024 May 8.
5
Visual Transformers and Convolutional Neural Networks for Disease Classification on Radiographs: A Comparison of Performance, Sample Efficiency, and Hidden Stratification.
Radiol Artif Intell. 2022 Sep 21;4(6):e220012. doi: 10.1148/ryai.220012. eCollection 2022 Nov.
6
Detecting Tuberculosis-Consistent Findings in Lateral Chest X-Rays Using an Ensemble of CNNs and Vision Transformers.
Front Genet. 2022 Feb 24;13:864724. doi: 10.3389/fgene.2022.864724. eCollection 2022.
7
Seeking an optimal approach for Computer-aided Diagnosis of Pulmonary Embolism.
Med Image Anal. 2024 Jan;91:102988. doi: 10.1016/j.media.2023.102988. Epub 2023 Oct 13.
8
Do it the transformer way: A comprehensive review of brain and vision transformers for autism spectrum disorder diagnosis and classification.
Comput Biol Med. 2023 Dec;167:107667. doi: 10.1016/j.compbiomed.2023.107667. Epub 2023 Nov 3.
9
Swin Unet3D: a three-dimensional medical image segmentation network combining vision transformer and convolution.
BMC Med Inform Decis Mak. 2023 Feb 14;23(1):33. doi: 10.1186/s12911-023-02129-z.
10
Multitask Learning with Convolutional Neural Networks and Vision Transformers Can Improve Outcome Prediction for Head and Neck Cancer Patients.
Cancers (Basel). 2023 Oct 9;15(19):4897. doi: 10.3390/cancers15194897.

Cited By

1
Screening Patient Misidentification Errors Using a Deep Learning Model of Chest Radiography: A Seven Reader Study.
J Imaging Inform Med. 2025 Apr;38(2):694-702. doi: 10.1007/s10278-024-01245-0. Epub 2024 Sep 11.