

Single-Image-Based Deep Learning for Segmentation of Early Esophageal Cancer Lesions.

Publication Information

IEEE Trans Image Process. 2024;33:2676-2688. doi: 10.1109/TIP.2024.3379902. Epub 2024 Apr 16.

DOI: 10.1109/TIP.2024.3379902
PMID: 38530733
Abstract

Accurate segmentation of lesions is crucial for the diagnosis and treatment of early esophageal cancer (EEC). However, neither traditional nor deep learning-based methods to date can meet clinical requirements: the mean Dice score, the most important metric in medical image analysis, hardly exceeds 0.75. In this paper, we present a novel deep learning approach for segmenting EEC lesions. Our method stands out for its uniqueness, as it relies solely on a single input image from a patient, forming the so-called "You-Only-Have-One" (YOHO) framework. On one hand, this "one-image-one-network" learning ensures complete patient privacy, as it does not use any images from other patients as training data. On the other hand, it avoids nearly all generalization-related problems, since each trained network is applied only to the input image it was trained on. In particular, we can push the training toward "over-fitting" as much as possible to increase segmentation accuracy. Our technical contributions include an interaction with clinical doctors to utilize their expertise, a geometry-based data augmentation over a single lesion image to generate the training dataset (the biggest novelty), and an edge-enhanced UNet. We evaluated YOHO on an EEC dataset collected by ourselves and achieved a mean Dice score of 0.888, much higher than existing deep-learning methods, representing a significant advance toward clinical applications. The code and dataset are available at: https://github.com/lhaippp/YOHO.
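The core idea described above, building a training set from geometric transforms of one annotated image and training a network only for that image, can be sketched as follows. This is a minimal illustration using NumPy rotations and flips; the actual YOHO augmentation and training pipeline is in the authors' repository, and the function name here is hypothetical:

```python
import numpy as np

def augment_single_image(image, mask, n_copies=8):
    """Generate (image, mask) training pairs from one annotated image
    using simple geometric transforms (rotations and flips), in the
    spirit of the paper's geometry-based single-image augmentation.
    This is only an illustrative sketch, not the authors' method."""
    samples = []
    for k in range(4):                      # 0/90/180/270-degree rotations
        rot_img = np.rot90(image, k)
        rot_msk = np.rot90(mask, k)
        samples.append((rot_img, rot_msk))
        # add the horizontally flipped version of each rotation
        samples.append((np.fliplr(rot_img), np.fliplr(rot_msk)))
    return samples[:n_copies]

# Usage: one 64x64 "endoscopic image" with a binary lesion mask
img = np.random.rand(64, 64)
msk = np.zeros((64, 64))
msk[20:40, 20:40] = 1                       # synthetic lesion region
pairs = augment_single_image(img, msk)
print(len(pairs))                           # 8 augmented training pairs
```

Each transform is applied identically to the image and its mask, so every augmented pair remains a valid (input, label) example for the per-image network.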

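For reference, the Dice score quoted in the abstract measures the overlap between a predicted segmentation mask and the ground-truth mask, ranging from 0 (no overlap) to 1 (perfect agreement). A minimal implementation:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|). The eps term avoids division by zero
    when both masks are empty."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Two 4x4 masks of 8 pixels each, overlapping on 4 pixels
a = np.zeros((4, 4)); a[:2, :] = 1
b = np.zeros((4, 4)); b[1:3, :] = 1
print(round(dice_score(a, b), 3))  # 2*4/(8+8) = 0.5
```

A mean Dice of 0.888, as reported for YOHO, therefore indicates substantially tighter agreement with expert annotations than the ~0.75 typical of prior methods.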

Similar Articles

1
Single-Image-Based Deep Learning for Segmentation of Early Esophageal Cancer Lesions.
IEEE Trans Image Process. 2024;33:2676-2688. doi: 10.1109/TIP.2024.3379902. Epub 2024 Apr 16.
2
A novel adaptive cubic quasi-Newton optimizer for deep learning based medical image analysis tasks, validated on detection of COVID-19 and segmentation for COVID-19 lung infection, liver tumor, and optic disc/cup.
Med Phys. 2023 Mar;50(3):1528-1538. doi: 10.1002/mp.15969. Epub 2022 Oct 6.
3
Robust segmentation of arterial walls in intravascular ultrasound images using Dual Path U-Net.
Ultrasonics. 2019 Jul;96:24-33. doi: 10.1016/j.ultras.2019.03.014. Epub 2019 Mar 23.
4
Znet: Deep Learning Approach for 2D MRI Brain Tumor Segmentation.
IEEE J Transl Eng Health Med. 2022 May 23;10:1800508. doi: 10.1109/JTEHM.2022.3176737. eCollection 2022.
5
Medical image diagnosis of prostate tumor based on PSP-Net+VGG16 deep learning network.
Comput Methods Programs Biomed. 2022 Jun;221:106770. doi: 10.1016/j.cmpb.2022.106770. Epub 2022 Mar 23.
6
Automatic intraprostatic lesion segmentation in multiparametric magnetic resonance images with proposed multiple branch UNet.
Med Phys. 2020 Dec;47(12):6421-6429. doi: 10.1002/mp.14517. Epub 2020 Oct 24.
7
Efficient fetal ultrasound image segmentation for automatic head circumference measurement using a lightweight deep convolutional neural network.
Med Phys. 2022 Aug;49(8):5081-5092. doi: 10.1002/mp.15700. Epub 2022 May 24.
8
Deep learning-based automatic segmentation of images in cardiac radiography: A promising challenge.
Comput Methods Programs Biomed. 2022 Jun;220:106821. doi: 10.1016/j.cmpb.2022.106821. Epub 2022 Apr 19.
9
CheXLocNet: Automatic localization of pneumothorax in chest radiographs using deep convolutional neural networks.
PLoS One. 2020 Nov 9;15(11):e0242013. doi: 10.1371/journal.pone.0242013. eCollection 2020.
10
A conventional-to-spectral CT image translation augmentation workflow for robust contrast injection-independent organ segmentation.
Med Phys. 2022 Feb;49(2):1108-1122. doi: 10.1002/mp.15310. Epub 2021 Dec 20.

Articles Citing This Work

1
Recent advances in machine learning for precision diagnosis and treatment of esophageal disorders.
World J Gastroenterol. 2025 Jun 21;31(23):105076. doi: 10.3748/wjg.v31.i23.105076.
2
Future prospects of deep learning in esophageal cancer diagnosis and clinical decision support (Review).
Oncol Lett. 2025 Apr 11;29(6):293. doi: 10.3892/ol.2025.15039. eCollection 2025 Jun.
3
Early detection of esophageal cancer: Evaluating AI algorithms with multi-institutional narrowband and white-light imaging data.
PLoS One. 2025 Apr 4;20(4):e0321092. doi: 10.1371/journal.pone.0321092. eCollection 2025.
4
Attention-Enhanced Multi-Task Deep Learning Model for Classification and Segmentation of Esophageal Lesions.
ACS Omega. 2025 Mar 4;10(10):10468-10479. doi: 10.1021/acsomega.4c10763. eCollection 2025 Mar 18.