

Object recognition in medical images via anatomy-guided deep learning.

Author Information

Jin Chao, Udupa Jayaram K, Zhao Liming, Tong Yubing, Odhner Dewey, Pednekar Gargi, Nag Sanghita, Lewis Sharon, Poole Nicholas, Mannikeri Sutirth, Govindasamy Sudarshana, Singh Aarushi, Camaratta Joe, Owens Steve, Torigian Drew A

Affiliations

Medical Image Processing Group, 602 Goddard building, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, United States.
Publication Information

Med Image Anal. 2022 Oct;81:102527. doi: 10.1016/j.media.2022.102527. Epub 2022 Jun 25.

DOI: 10.1016/j.media.2022.102527
PMID: 35830745
Abstract

PURPOSE

Despite advances in deep learning, robust medical image segmentation in the presence of artifacts, pathology, and other imaging shortcomings has remained a challenge. In this paper, we demonstrate that by synergistically marrying the unmatched strengths of high-level human knowledge (i.e., natural intelligence (NI)) with the capabilities of deep learning (DL) networks (i.e., artificial intelligence (AI)) in garnering intricate details, these challenges can be significantly overcome. Focusing on the object recognition task, we formulate an anatomy-guided deep learning object recognition approach named AAR-DL which combines an advanced anatomy-modeling strategy, model-based non-deep-learning object recognition, and deep learning object detection networks to achieve expert human-like performance.

METHODS

The AAR-DL approach consists of 4 key modules wherein prior knowledge (NI) is made use of judiciously at every stage. In the first module AAR-R, objects are recognized based on a previously created fuzzy anatomy model of the body region with all its organs following the automatic anatomy recognition (AAR) approach wherein high-level human anatomic knowledge is precisely codified. This module is purely model-based with no DL involvement. Although the AAR-R operation lacks accuracy, it is robust to artifacts and deviations (much like NI), and provides the much-needed anatomic guidance in the form of rough regions-of-interest (ROIs) for the following DL modules. The 2nd module DL-R makes use of the ROI information to limit the search region to just where each object is most likely to reside and performs DL-based detection of the 2D bounding boxes (BBs) in slices. The 2D BBs hug the shape of the 3D object much better than 3D BBs and their detection is feasible only due to anatomy guidance from AAR-R. In the 3rd module, the AAR model is deformed via the found 2D BBs providing refined model information which now embodies both NI and AI decisions. The refined AAR model more actively guides the 4th refined DL-R module to perform final object detection via DL. Anatomy knowledge is made use of in designing the DL networks wherein spatially sparse objects and non-sparse objects are handled differently to provide the required level of attention for each.
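The four-module coarse-to-fine flow described above can be illustrated with a runnable toy sketch. The "modules" below simply tighten a 2D box toward a hypothetical ground truth; the box values, fractions, and function names are illustrative stand-ins, not the paper's model-based or deep-learning components.

```python
# Toy sketch of the AAR-DL coarse-to-fine idea: each stage reduces
# localization error relative to a ground-truth box (all values hypothetical).
# Boxes are (x_min, y_min, x_max, y_max).

TRUTH = (40, 50, 60, 70)  # hypothetical ground-truth object box

def shrink_towards(box, target, frac):
    """Move each coordinate a fraction of the way toward the target,
    mimicking how each module reduces localization error."""
    return tuple(b + frac * (t - b) for b, t in zip(box, target))

def centroid(box):
    return ((box[0] + box[2]) / 2, (box[1] + box[3]) / 2)

def location_error(box, truth=TRUTH):
    (cx, cy), (tx, ty) = centroid(box), centroid(truth)
    return ((cx - tx) ** 2 + (cy - ty) ** 2) ** 0.5

roi = (0, 0, 100, 100)                      # Module 1 (AAR-R): robust but rough ROI
bb = shrink_towards(roi, TRUTH, 0.5)        # Module 2 (DL-R): 2D BB detection in ROI
deformed = shrink_towards(bb, TRUTH, 0.5)   # Module 3: AAR model deformed by found BBs
final = shrink_towards(deformed, TRUTH, 0.5)  # Module 4 (refined DL-R): final detection

errors = [location_error(b) for b in (roi, bb, deformed, final)]
print(errors)  # error shrinks module by module: [10.0, 5.0, 2.5, 1.25]
```

The monotone error decrease mirrors the abstract's claim that errors are "gradually and systematically reduced from the 1st module to the 4th module."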

RESULTS

Utilizing 150 thoracic and 225 head and neck (H&N) computed tomography (CT) data sets of cancer patients undergoing routine radiation therapy planning, the recognition performance of the AAR-DL approach is evaluated on 10 thoracic and 16 H&N organs in comparison to pure model-based approach (AAR-R) and pure DL approach without anatomy guidance. Recognition accuracy is assessed via location error/ centroid distance error, scale or size error, and wall distance error. The results demonstrate how the errors are gradually and systematically reduced from the 1st module to the 4th module as high-level knowledge is infused via NI at various stages into the processing pipeline. This improvement is especially dramatic for sparse and artifact-prone challenging objects, achieving a location error over all objects of 4.4 mm and 4.3 mm for the two body regions, respectively. The pure DL approach failed on several very challenging sparse objects while AAR-DL achieved accurate recognition, almost matching human performance, showing the importance of anatomy guidance for robust operation. Anatomy guidance also reduces the time required for training DL networks considerably.
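The three recognition metrics named above (location/centroid distance error, scale or size error, and wall distance error) can be illustrated on axis-aligned 3D bounding boxes. The definitions below are plausible reconstructions for illustration, not the paper's exact formulas; the box coordinates are hypothetical, in mm.

```python
# Illustrative recognition-error metrics for axis-aligned 3D boxes,
# given as ((x0, y0, z0), (x1, y1, z1)) corner pairs in mm.
import math

def centroid(box):
    (x0, y0, z0), (x1, y1, z1) = box
    return ((x0 + x1) / 2, (y0 + y1) / 2, (z0 + z1) / 2)

def location_error(pred, truth):
    """Euclidean distance between box centroids, in mm."""
    return math.dist(centroid(pred), centroid(truth))

def size_error(pred, truth):
    """Absolute difference of box diagonal lengths, in mm."""
    def diag(box):
        return math.dist(box[0], box[1])
    return abs(diag(pred) - diag(truth))

def wall_distance_error(pred, truth):
    """Mean absolute offset between corresponding box faces, in mm."""
    offsets = [abs(p - t)
               for pc, tc in zip(pred, truth)
               for p, t in zip(pc, tc)]
    return sum(offsets) / len(offsets)

truth = ((0, 0, 0), (40, 30, 20))
pred = ((2, 1, 0), (44, 31, 22))
print(round(location_error(pred, truth), 2))       # centroid offset in mm
print(round(wall_distance_error(pred, truth), 2))  # mean face offset in mm
```

A reported location error of ~4.4 mm, as in the abstract, would correspond to the centroid-distance quantity computed here.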

CONCLUSIONS

(i) High-level anatomy guidance improves recognition performance of DL methods. (ii) This improvement is especially noteworthy for spatially sparse, low-contrast, inconspicuous, and artifact-prone objects. (iii) Once anatomy guidance is provided, 3D objects can be detected much more accurately via 2D BBs than 3D BBs and the 2D BBs represent object containment with much more specificity. (iv) Anatomy guidance brings stability and robustness to DL approaches for object localization. (v) The training time can be greatly reduced by making use of anatomy guidance.

Similar Articles

1
Object recognition in medical images via anatomy-guided deep learning.
Med Image Anal. 2022 Oct;81:102527. doi: 10.1016/j.media.2022.102527. Epub 2022 Jun 25.
2
Combining natural and artificial intelligence for robust automatic anatomy segmentation: Application in neck and thorax auto-contouring.
Med Phys. 2022 Nov;49(11):7118-7149. doi: 10.1002/mp.15854. Epub 2022 Jul 27.
3
AAR-RT - A system for auto-contouring organs at risk on CT images for radiation therapy planning: Principles, design, and large-scale evaluation on head-and-neck and thoracic cancer cases.
Med Image Anal. 2019 May;54:45-62. doi: 10.1016/j.media.2019.01.008. Epub 2019 Jan 29.
4
Body-wide hierarchical fuzzy modeling, recognition, and delineation of anatomy in medical images.
Med Image Anal. 2014 Jul;18(5):752-71. doi: 10.1016/j.media.2014.04.003. Epub 2014 Apr 24.
5
Auto-segmentation of thoracic brachial plexuses for radiation therapy planning.
Proc SPIE Int Soc Opt Eng. 2023 Feb;12466. doi: 10.1117/12.2655159. Epub 2023 Apr 3.
6
Quantification of body-torso-wide tissue composition on low-dose CT images via automatic anatomy recognition.
Med Phys. 2019 Mar;46(3):1272-1285. doi: 10.1002/mp.13373. Epub 2019 Feb 5.
7
Automatic thoracic anatomy segmentation on CT images using hierarchical fuzzy models and registration.
Med Phys. 2016 Mar;43(3):1487-500. doi: 10.1118/1.4942486.
8
Automatic anatomy recognition in whole-body PET/CT images.
Med Phys. 2016 Jan;43(1):613. doi: 10.1118/1.4939127.
9
SOMA: Subject-, object-, and modality-adapted precision atlas approach for automatic anatomy recognition and delineation in medical images.
Med Phys. 2021 Dec;48(12):7806-7825. doi: 10.1002/mp.15308. Epub 2021 Nov 18.
10
Disease quantification on PET/CT images without explicit object delineation.
Med Image Anal. 2019 Jan;51:169-183. doi: 10.1016/j.media.2018.11.002. Epub 2018 Nov 10.

Cited By

1
Weakly Supervised Deep Learning for Monitoring Sleep Apnea Severity Using Coarse-grained Labels.
IEEE Trans Autom Sci Eng. 2025;22:15227-15240. doi: 10.1109/tase.2025.3566682. Epub 2025 May 12.
2
Anatomic attention regions via optimal anatomy modeling and recognition for DL-based image segmentation.
Proc SPIE Int Soc Opt Eng. 2024 Feb;12930. doi: 10.1117/12.3006771. Epub 2024 Apr 2.
3
Auto-segmentation of thoraco-abdominal organs in free breathing pediatric dynamic MRI.
Proc SPIE Int Soc Opt Eng. 2023 Feb;12466. doi: 10.1117/12.2654995. Epub 2023 Apr 3.
4
Optimal strategies for modeling anatomy in a hybrid intelligence framework for auto-segmentation of organs.
Proc SPIE Int Soc Opt Eng. 2024 Feb;12928. doi: 10.1117/12.3006617. Epub 2024 Mar 29.
5
Automatization of CT Annotation: Combining AI Efficiency with Expert Precision.
Diagnostics (Basel). 2024 Jan 15;14(2):185. doi: 10.3390/diagnostics14020185.
6
On the Analyses of Medical Images Using Traditional Machine Learning Techniques and Convolutional Neural Networks.
Arch Comput Methods Eng. 2023;30(5):3173-3233. doi: 10.1007/s11831-023-09899-9. Epub 2023 Apr 4.
7
Auto-segmentation of thoracic brachial plexuses for radiation therapy planning.
Proc SPIE Int Soc Opt Eng. 2023 Feb;12466. doi: 10.1117/12.2655159. Epub 2023 Apr 3.
8
Deep-learning-based ensemble method for fully automated detection of renal masses on magnetic resonance images.
J Med Imaging (Bellingham). 2023 Mar;10(2):024501. doi: 10.1117/1.JMI.10.2.024501. Epub 2023 Mar 20.
9
Diagnostic Accuracy of Machine-Learning Models on Predicting Chemo-Brain in Breast Cancer Survivors Previously Treated with Chemotherapy: A Meta-Analysis.
Int J Environ Res Public Health. 2022 Dec 15;19(24):16832. doi: 10.3390/ijerph192416832.