


DistilIQA: Distilling Vision Transformers for no-reference perceptual CT image quality assessment.

Affiliations

Departamento de Ingeniería Industrial and Instituto de Innovación en Productividad y Logística CATENA-USFQ, Universidad San Francisco de Quito USFQ, Quito, 170157, Ecuador; Colegio de Ciencias e Ingenierías "El Politécnico", Universidad San Francisco de Quito USFQ, Quito, 170157, Ecuador.

Departamento de Investigación y Postgrados, Universidad Internacional del Ecuador UIDE, Quito, Ecuador.

Publication Information

Comput Biol Med. 2024 Jul;177:108670. doi: 10.1016/j.compbiomed.2024.108670. Epub 2024 May 28.

DOI: 10.1016/j.compbiomed.2024.108670
PMID: 38838558
Abstract

No-reference image quality assessment (IQA) is a critical step in medical image analysis, with the objective of predicting perceptual image quality without the need for a pristine reference image. The application of no-reference IQA to CT scans is valuable in providing an automated and objective approach to assessing scan quality, optimizing radiation dose, and improving overall healthcare efficiency. In this paper, we introduce DistilIQA, a novel distilled Vision Transformer network designed for no-reference CT image quality assessment. DistilIQA integrates convolutional operations and multi-head self-attention mechanisms by incorporating a powerful convolutional stem at the beginning of the traditional ViT network. Additionally, we present a two-step distillation methodology aimed at improving network performance and efficiency. In the initial step, a "teacher ensemble network" is constructed by training five vision Transformer networks using a five-fold division schema. In the second step, a "student network", comprising of a single Vision Transformer, is trained using the original labeled dataset and the predictions generated by the teacher network as new labels. DistilIQA is evaluated in the task of quality score prediction from low-dose chest CT scans obtained from the LDCT and Projection data of the Cancer Imaging Archive, along with low-dose abdominal CT images from the LDCTIQAC2023 Grand Challenge. Our results demonstrate DistilIQA's remarkable performance in both benchmarks, surpassing the capabilities of various CNNs and Transformer architectures. Moreover, our comprehensive experimental analysis demonstrates the effectiveness of incorporating convolutional operations within the ViT architecture and highlights the advantages of our distillation methodology.
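The two-step distillation described above — a five-fold "teacher ensemble" whose averaged predictions serve as soft targets for a single "student" network — can be sketched in simplified form. In the sketch below, ridge regressors stand in for the Vision Transformer networks, and the synthetic data, the `fit_ridge` helper, and the 50/50 blend weight `alpha` are all illustrative assumptions, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))                  # stand-in for CT image features
w_true = rng.normal(size=8)
y = X @ w_true + 0.1 * rng.normal(size=100)    # perceptual quality scores

def fit_ridge(X, y, lam=1e-2):
    """Closed-form ridge regression: a toy stand-in for training one ViT."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Step 1: build the "teacher ensemble" -- five models, each trained on
# four of five folds (a five-fold division schema).
folds = np.array_split(np.arange(100), 5)
teachers = []
for k in range(5):
    train_idx = np.concatenate([folds[j] for j in range(5) if j != k])
    teachers.append(fit_ridge(X[train_idx], y[train_idx]))

# The ensemble prediction is the average over the five teachers.
y_teacher = np.mean([X @ w for w in teachers], axis=0)

# Step 2: train the "student" on the original labels combined with the
# teacher predictions as new (soft) targets.
alpha = 0.5                                    # illustrative blend weight
y_target = alpha * y + (1 - alpha) * y_teacher
w_student = fit_ridge(X, y_target)

mse = np.mean((X @ w_student - y) ** 2)
print(f"student MSE vs. original labels: {mse:.4f}")
```

The point of the second step is that the averaged teacher targets are smoother than the raw labels, so a single compact student can approach ensemble-level accuracy at one-fifth of the inference cost.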


Similar Articles

1. DistilIQA: Distilling Vision Transformers for no-reference perceptual CT image quality assessment.
   Comput Biol Med. 2024 Jul;177:108670. doi: 10.1016/j.compbiomed.2024.108670. Epub 2024 May 28.
2. HCformer: Hybrid CNN-Transformer for LDCT Image Denoising.
   J Digit Imaging. 2023 Oct;36(5):2290-2305. doi: 10.1007/s10278-023-00842-9. Epub 2023 Jun 29.
3. Pure Vision Transformer (CT-ViT) with Noise2Neighbors Interpolation for Low-Dose CT Image Denoising.
   J Imaging Inform Med. 2024 Oct;37(5):2669-2687. doi: 10.1007/s10278-024-01108-8. Epub 2024 Apr 15.
4. Distilling Knowledge From an Ensemble of Vision Transformers for Improved Classification of Breast Ultrasound.
   Acad Radiol. 2024 Jan;31(1):104-120. doi: 10.1016/j.acra.2023.08.006. Epub 2023 Sep 2.
5. RT-ViT: Real-Time Monocular Depth Estimation Using Lightweight Vision Transformers.
   Sensors (Basel). 2022 May 19;22(10):3849. doi: 10.3390/s22103849.
6. Semi-supervised abdominal multi-organ segmentation by object-redrawing.
   Med Phys. 2024 Nov;51(11):8334-8347. doi: 10.1002/mp.17364. Epub 2024 Aug 21.
7. Brain tumor segmentation and detection in MRI using convolutional neural networks and VGG16.
   Cancer Biomark. 2025 Mar;42(3):18758592241311184. doi: 10.1177/18758592241311184. Epub 2025 Apr 4.
8. Male pelvic multi-organ segmentation using token-based transformer Vnet.
   Phys Med Biol. 2022 Oct 14;67(20). doi: 10.1088/1361-6560/ac95f7.
9. Learning low-dose CT degradation from unpaired data with flow-based model.
   Med Phys. 2022 Dec;49(12):7516-7530. doi: 10.1002/mp.15886. Epub 2022 Aug 8.
10. SACNN: Self-Attention Convolutional Neural Network for Low-Dose CT Denoising With Self-Supervised Perceptual Loss Network.
    IEEE Trans Med Imaging. 2020 Jul;39(7):2289-2301. doi: 10.1109/TMI.2020.2968472. Epub 2020 Jan 21.

Cited By

1. A Systematic Review of Medical Image Quality Assessment.
   J Imaging. 2025 Mar 27;11(4):100. doi: 10.3390/jimaging11040100.