
An optimized multi-task contrastive learning framework for HIFU lesion detection and segmentation.

Authors

Zavar Matineh, Ghaffari Hamid Reza, Tabatabaee Hamid

Affiliations

Department of Computer Engineering, Ferdows Branch, Islamic Azad University, Ferdows, Iran.

Department of Computer Engineering, Mashhad Branch, Islamic Azad University, Mashhad, Iran.

Publication

Sci Rep. 2025 Aug 13;15(1):29666. doi: 10.1038/s41598-025-99783-2.

DOI: 10.1038/s41598-025-99783-2
PMID: 40804119
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12350952/
Abstract

Accurate detection and segmentation of lesions induced by High-Intensity Focused Ultrasound (HIFU) in medical imaging remain significant challenges in automated disease diagnosis. Traditional methods heavily rely on labeled data, which is often scarce, expensive, and time-consuming to obtain. Moreover, existing approaches frequently struggle with variations in medical data and the limited availability of annotated datasets, leading to suboptimal performance. To address these challenges, this paper introduces an innovative framework called the Optimized Multi-Task Contrastive Learning Framework (OMCLF), which leverages self-supervised learning (SSL) and genetic algorithms (GA) to enhance HIFU lesion detection and segmentation. OMCLF integrates classification and segmentation into a unified model, utilizing a shared backbone to extract common features. The framework systematically optimizes feature representations, hyperparameters, and data augmentation strategies tailored for medical imaging, ensuring that critical information, such as lesion details, is preserved. By employing a genetic algorithm, OMCLF explores and optimizes augmentation techniques suitable for medical data, avoiding distortions that could compromise diagnostic accuracy. Experimental results demonstrate that OMCLF outperforms single-task methods in both classification and segmentation tasks while significantly reducing dependency on labeled data. Specifically, OMCLF achieves an accuracy of 93.3% in lesion detection and a Dice score of 92.5% in segmentation, surpassing state-of-the-art methods such as SimCLR and MoCo. The proposed approach achieves superior accuracy in identifying and delineating HIFU-induced lesions, marking a substantial advancement in medical image interpretation and automated diagnosis. OMCLF represents a significant step forward in the evolutionary optimization of self-supervised learning, with potential applications across various medical imaging domains.
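The Dice score reported above can be made concrete with a short sketch. This is not the authors' code, just the standard Dice coefficient on binary masks, which is the metric behind the 92.5% segmentation figure (the toy masks below are illustrative):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks: 2*|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return float((2.0 * inter + eps) / (pred.sum() + target.sum() + eps))

# Toy 4x4 masks: the prediction recovers 2 of the 3 target foreground pixels
# and adds no false positives, so Dice = 2*2 / (2 + 3) = 0.8.
pred = np.array([[1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[1, 1, 1, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(round(dice_score(pred, target), 3))  # → 0.8
```

Dice is preferred over plain pixel accuracy for lesion segmentation because foreground pixels are rare; a model predicting "no lesion" everywhere scores high accuracy but a Dice of ~0.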

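The genetic-algorithm component of OMCLF, evolving augmentation policies so that aggressive transforms do not destroy lesion detail, can be illustrated with a simplified mutation-only sketch. Everything here is hypothetical: the search space, the surrogate fitness function, and the `evolve` parameters stand in for the paper's real policy space and its validation-driven fitness; a full GA would also use crossover.

```python
import random

# Hypothetical augmentation search space: each individual is a policy
# (rotation degrees, crop fraction kept, brightness jitter strength).
SPACE = {
    "rotation": [0, 5, 10, 15, 30, 90],
    "crop": [1.0, 0.9, 0.8, 0.6],
    "jitter": [0.0, 0.1, 0.2, 0.5],
}

def fitness(ind):
    # Toy surrogate for downstream validation performance: mild augmentation
    # helps, extreme augmentation (which would distort lesions) hurts.
    # Peaks at rotation=10, crop=0.9, jitter=0.1.
    return -(abs(ind["rotation"] - 10) / 90
             + abs(ind["crop"] - 0.9)
             + abs(ind["jitter"] - 0.1))

def mutate(rng, ind):
    # Re-sample one randomly chosen policy dimension.
    key = rng.choice(list(SPACE))
    child = dict(ind)
    child[key] = rng.choice(SPACE[key])
    return child

def evolve(generations=30, pop_size=12, seed=0):
    rng = random.Random(seed)
    pop = [{k: rng.choice(v) for k, v in SPACE.items()} for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]  # selection: keep the fitter half
        pop = elite + [mutate(rng, rng.choice(elite)) for _ in elite]
    return max(pop, key=fitness)

best = evolve()
print(best)
```

Because the elite half is carried over unchanged each generation, the best fitness in the population never decreases, which is what makes even this crude loop converge toward gentle, lesion-preserving policies under the toy fitness.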

Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fd0/12350952/23f99c8ca375/41598_2025_99783_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fd0/12350952/c585e81613b1/41598_2025_99783_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fd0/12350952/234886f96a47/41598_2025_99783_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fd0/12350952/2ec56f71b314/41598_2025_99783_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fd0/12350952/e763c2aebde7/41598_2025_99783_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fd0/12350952/70b7ba84ae81/41598_2025_99783_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fd0/12350952/cff06960bdb0/41598_2025_99783_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fd0/12350952/dec166bde90e/41598_2025_99783_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fd0/12350952/e1b7d711b25c/41598_2025_99783_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fd0/12350952/f4d32d9df8b3/41598_2025_99783_Fig10_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fd0/12350952/aebd2ec63520/41598_2025_99783_Figa_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fd0/12350952/c9969a8c14b9/41598_2025_99783_Figb_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fd0/12350952/ef4b7dd4517e/41598_2025_99783_Figc_HTML.jpg

Similar Articles

1
An optimized multi-task contrastive learning framework for HIFU lesion detection and segmentation.
Sci Rep. 2025 Aug 13;15(1):29666. doi: 10.1038/s41598-025-99783-2.
2
CXR-MultiTaskNet: a unified deep learning framework for joint disease localization and classification in chest radiographs.
Sci Rep. 2025 Aug 31;15(1):32022. doi: 10.1038/s41598-025-16669-z.
3
Boundary-aware information maximization for self-supervised medical image segmentation.
Med Image Anal. 2024 May;94:103150. doi: 10.1016/j.media.2024.103150. Epub 2024 Mar 28.
4
Diagnosis of Sacroiliitis Through Semi-Supervised Segmentation and Radiomics Feature Analysis of MRI Images.
J Magn Reson Imaging. 2025 Feb 6. doi: 10.1002/jmri.29731.
5
..
Int Ophthalmol. 2025 Jun 27;45(1):266. doi: 10.1007/s10792-025-03602-6.
6
A segment anything model-guided and match-based semi-supervised segmentation framework for medical imaging.
Med Phys. 2025 Mar 29. doi: 10.1002/mp.17785.
7
Explainable self-supervised learning for medical image diagnosis based on DINO V2 model and semantic search.
Sci Rep. 2025 Sep 1;15(1):32174. doi: 10.1038/s41598-025-15604-6.
8
Large-scale convolutional neural network for clinical target and multi-organ segmentation in gynecologic brachytherapy via multi-stage learning.
Med Phys. 2025 Aug;52(8):e18067. doi: 10.1002/mp.18067.
9
A segmentation method for oral CBCT image based on Segment Anything Model and semi-supervised teacher-student model.
Med Phys. 2025 May 7. doi: 10.1002/mp.17854.
10
DFMF: Harnessing spectral-spatial synergy for MR image segmentation through Dual-Task Feature Mining Framework.
Comput Med Imaging Graph. 2025 Sep;124:102603. doi: 10.1016/j.compmedimag.2025.102603. Epub 2025 Jul 16.

References Cited in This Article

1
Self-Supervised Deep Learning-The Next Frontier.
JAMA Ophthalmol. 2024 Mar 1;142(3):234. doi: 10.1001/jamaophthalmol.2023.6650.
2
Multi-Instance Multi-Task Learning for Joint Clinical Outcome and Genomic Profile Predictions From the Histopathological Images.
IEEE Trans Med Imaging. 2024 Jun;43(6):2266-2278. doi: 10.1109/TMI.2024.3362852. Epub 2024 Jun 3.
3
Self-Supervised Learning With Limited Labeled Data for Prostate Cancer Detection in High-Frequency Ultrasound.
IEEE Trans Ultrason Ferroelectr Freq Control. 2023 Sep;70(9):1073-1083. doi: 10.1109/TUFFC.2023.3297840. Epub 2023 Aug 29.
4
Self-supervised learning for gastritis detection with gastric X-ray images.
Int J Comput Assist Radiol Surg. 2023 Oct;18(10):1841-1848. doi: 10.1007/s11548-023-02891-5. Epub 2023 Apr 11.
5
CheSS: Chest X-Ray Pre-trained Model via Self-supervised Contrastive Learning.
J Digit Imaging. 2023 Jun;36(3):902-910. doi: 10.1007/s10278-023-00782-4. Epub 2023 Jan 26.
6
Expert-level detection of pathologies from unannotated chest X-ray images via self-supervised learning.
Nat Biomed Eng. 2022 Dec;6(12):1399-1406. doi: 10.1038/s41551-022-00936-9. Epub 2022 Sep 15.
7
Self-Supervised Adversarial Learning with a Limited Dataset for Electronic Cleansing in Computed Tomographic Colonography: A Preliminary Feasibility Study.
Cancers (Basel). 2022 Aug 26;14(17):4125. doi: 10.3390/cancers14174125.
8
Self-supervised region-aware segmentation of COVID-19 CT images using 3D GAN and contrastive learning.
Comput Biol Med. 2022 Oct;149:106033. doi: 10.1016/j.compbiomed.2022.106033. Epub 2022 Aug 27.
9
Self-supervised learning in medicine and healthcare.
Nat Biomed Eng. 2022 Dec;6(12):1346-1352. doi: 10.1038/s41551-022-00914-1. Epub 2022 Aug 11.
10
Heuristic Attention Representation Learning for Self-Supervised Pretraining.
Sensors (Basel). 2022 Jul 10;22(14):5169. doi: 10.3390/s22145169.