An optimized multi-task contrastive learning framework for HIFU lesion detection and segmentation.

Authors

Matineh Zavar, Hamid Reza Ghaffari, Hamid Tabatabaee

Affiliations

Department of Computer Engineering, Ferdows Branch, Islamic Azad University, Ferdows, Iran.

Department of Computer Engineering, Mashhad Branch, Islamic Azad University, Mashhad, Iran.

Publication Information

Sci Rep. 2025 Aug 13;15(1):29666. doi: 10.1038/s41598-025-99783-2.


DOI: 10.1038/s41598-025-99783-2
PMID: 40804119
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12350952/
Abstract

Accurate detection and segmentation of lesions induced by High-Intensity Focused Ultrasound (HIFU) in medical imaging remain significant challenges in automated disease diagnosis. Traditional methods heavily rely on labeled data, which is often scarce, expensive, and time-consuming to obtain. Moreover, existing approaches frequently struggle with variations in medical data and the limited availability of annotated datasets, leading to suboptimal performance. To address these challenges, this paper introduces an innovative framework called the Optimized Multi-Task Contrastive Learning Framework (OMCLF), which leverages self-supervised learning (SSL) and genetic algorithms (GA) to enhance HIFU lesion detection and segmentation. OMCLF integrates classification and segmentation into a unified model, utilizing a shared backbone to extract common features. The framework systematically optimizes feature representations, hyperparameters, and data augmentation strategies tailored for medical imaging, ensuring that critical information, such as lesion details, is preserved. By employing a genetic algorithm, OMCLF explores and optimizes augmentation techniques suitable for medical data, avoiding distortions that could compromise diagnostic accuracy. Experimental results demonstrate that OMCLF outperforms single-task methods in both classification and segmentation tasks while significantly reducing dependency on labeled data. Specifically, OMCLF achieves an accuracy of 93.3% in lesion detection and a Dice score of 92.5% in segmentation, surpassing state-of-the-art methods such as SimCLR and MoCo. The proposed approach achieves superior accuracy in identifying and delineating HIFU-induced lesions, marking a substantial advancement in medical image interpretation and automated diagnosis. OMCLF represents a significant step forward in the evolutionary optimization of self-supervised learning, with potential applications across various medical imaging domains.
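To make the multi-task setup described above concrete, here is a minimal PyTorch sketch of the general pattern: a shared backbone feeding a contrastive projection head, a classification (lesion detection) head, and a segmentation head, trained with a combined loss. The layer sizes, NT-Xent temperature, and equal loss weighting are illustrative assumptions, not the paper's actual configuration.

```python
# A minimal sketch of a multi-task contrastive setup of the kind the abstract
# describes. All sizes and weights are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedBackbone(nn.Module):
    """Small convolutional encoder standing in for the paper's shared backbone."""
    def __init__(self, in_ch: int = 1, feat_ch: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.features(x)  # (B, feat_ch, H/4, W/4)


class MultiTaskContrastiveSketch(nn.Module):
    """Shared encoder with contrastive, classification, and segmentation heads."""
    def __init__(self, n_classes: int = 2, feat_ch: int = 64, proj_dim: int = 128):
        super().__init__()
        self.backbone = SharedBackbone(feat_ch=feat_ch)
        self.proj_head = nn.Sequential(  # used for the self-supervised contrastive loss
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat_ch, proj_dim),
        )
        self.cls_head = nn.Sequential(   # lesion detection (image-level classification)
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat_ch, n_classes),
        )
        self.seg_head = nn.Sequential(   # lesion segmentation (per-pixel logits)
            nn.Conv2d(feat_ch, 1, 1),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
        )

    def forward(self, x):
        f = self.backbone(x)
        return self.proj_head(f), self.cls_head(f), self.seg_head(f)


def nt_xent(z1, z2, temperature: float = 0.5):
    """SimCLR-style NT-Xent loss over two augmented views of the same batch."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    sim = z @ z.t() / temperature
    sim.fill_diagonal_(float("-inf"))  # exclude self-similarity
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


if __name__ == "__main__":
    model = MultiTaskContrastiveSketch()
    view1, view2 = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)  # two augmented views
    labels, masks = torch.randint(0, 2, (4,)), torch.rand(4, 1, 64, 64).round()

    z1, logits, seg = model(view1)
    z2, _, _ = model(view2)
    loss = (nt_xent(z1, z2)                                    # self-supervised term
            + F.cross_entropy(logits, labels)                  # detection term
            + F.binary_cross_entropy_with_logits(seg, masks))  # segmentation term
    loss.backward()
    print(f"combined loss: {loss.item():.4f}")
```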

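The abstract also credits a genetic algorithm with searching for augmentation strategies that suit medical images. The sketch below shows the general pattern under stated assumptions: the gene layout (rotation range, brightness jitter, crop scale), the truncation selection scheme, and the stubbed fitness function are placeholders; in the actual framework the fitness of each candidate policy would come from training and validating the self-supervised model with it.

```python
# A minimal sketch of a genetic search over augmentation policies, in the spirit
# of the GA component the abstract describes. Search space and fitness are
# illustrative assumptions, not the paper's actual configuration.
import random

random.seed(0)

# Each individual is a policy: (max_rotation_deg, brightness_jitter, min_crop_scale)
BOUNDS = [(0.0, 30.0), (0.0, 0.5), (0.5, 1.0)]


def random_policy():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]


def fitness(policy):
    # Placeholder: the real framework would train/evaluate the SSL model with
    # this policy and return a validation metric (e.g. Dice). Here we simply
    # reward moderate rotation, low jitter, and large crops so the loop runs.
    rot, jitter, crop = policy
    return -abs(rot - 15.0) / 15.0 - jitter + crop


def crossover(a, b):
    # Uniform crossover: each gene is taken from one of the two parents.
    return [random.choice(pair) for pair in zip(a, b)]


def mutate(policy, rate=0.2):
    # Gaussian perturbation of each gene with probability `rate`, clipped to bounds.
    return [
        min(hi, max(lo, g + random.gauss(0, 0.1 * (hi - lo)))) if random.random() < rate else g
        for g, (lo, hi) in zip(policy, BOUNDS)
    ]


def evolve(pop_size=12, generations=10, elite=4):
    population = [random_policy() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:elite]  # truncation selection of the fittest policies
        children = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - elite)
        ]
        population = parents + children
    return max(population, key=fitness)


if __name__ == "__main__":
    best = evolve()
    print("best policy (rotation, jitter, crop):", [round(g, 3) for g in best])
```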

Figures

Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fd0/12350952/23f99c8ca375/41598_2025_99783_Fig1_HTML.jpg
Fig. 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fd0/12350952/c585e81613b1/41598_2025_99783_Fig2_HTML.jpg
Fig. 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fd0/12350952/234886f96a47/41598_2025_99783_Fig3_HTML.jpg
Fig. 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fd0/12350952/2ec56f71b314/41598_2025_99783_Fig4_HTML.jpg
Fig. 5: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fd0/12350952/e763c2aebde7/41598_2025_99783_Fig5_HTML.jpg
Fig. 6: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fd0/12350952/70b7ba84ae81/41598_2025_99783_Fig6_HTML.jpg
Fig. 7: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fd0/12350952/cff06960bdb0/41598_2025_99783_Fig7_HTML.jpg
Fig. 8: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fd0/12350952/dec166bde90e/41598_2025_99783_Fig8_HTML.jpg
Fig. 9: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fd0/12350952/e1b7d711b25c/41598_2025_99783_Fig9_HTML.jpg
Fig. 10: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fd0/12350952/f4d32d9df8b3/41598_2025_99783_Fig10_HTML.jpg
Fig. a: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fd0/12350952/aebd2ec63520/41598_2025_99783_Figa_HTML.jpg
Fig. b: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fd0/12350952/c9969a8c14b9/41598_2025_99783_Figb_HTML.jpg
Fig. c: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1fd0/12350952/ef4b7dd4517e/41598_2025_99783_Figc_HTML.jpg

Similar Articles

[1]
An optimized multi-task contrastive learning framework for HIFU lesion detection and segmentation.

Sci Rep. 2025-8-13

[2]
CXR-MultiTaskNet: a unified deep learning framework for joint disease localization and classification in chest radiographs.

Sci Rep. 2025-8-31

[3]
Boundary-aware information maximization for self-supervised medical image segmentation.

Med Image Anal. 2024-5

[4]
Diagnosis of Sacroiliitis Through Semi-Supervised Segmentation and Radiomics Feature Analysis of MRI Images.

J Magn Reson Imaging. 2025-2-6

[5]
.

Int Ophthalmol. 2025-6-27

[6]
A segment anything model-guided and match-based semi-supervised segmentation framework for medical imaging.

Med Phys. 2025-3-29

[7]
Explainable self-supervised learning for medical image diagnosis based on DINO V2 model and semantic search.

Sci Rep. 2025-9-1

[8]
Large-scale convolutional neural network for clinical target and multi-organ segmentation in gynecologic brachytherapy via multi-stage learning.

Med Phys. 2025-8

[9]
A segmentation method for oral CBCT image based on Segment Anything Model and semi-supervised teacher-student model.

Med Phys. 2025-5-7

[10]
DFMF: Harnessing spectral-spatial synergy for MR image segmentation through Dual-Task Feature Mining Framework.

Comput Med Imaging Graph. 2025-9

References Cited in This Article

[1]
Self-Supervised Deep Learning-The Next Frontier.

JAMA Ophthalmol. 2024-3-1

[2]
Multi-Instance Multi-Task Learning for Joint Clinical Outcome and Genomic Profile Predictions From the Histopathological Images.

IEEE Trans Med Imaging. 2024-6

[3]
Self-Supervised Learning With Limited Labeled Data for Prostate Cancer Detection in High-Frequency Ultrasound.

IEEE Trans Ultrason Ferroelectr Freq Control. 2023-9

[4]
Self-supervised learning for gastritis detection with gastric X-ray images.

Int J Comput Assist Radiol Surg. 2023-10

[5]
CheSS: Chest X-Ray Pre-trained Model via Self-supervised Contrastive Learning.

J Digit Imaging. 2023-6

[6]
Expert-level detection of pathologies from unannotated chest X-ray images via self-supervised learning.

Nat Biomed Eng. 2022-12

[7]
Self-Supervised Adversarial Learning with a Limited Dataset for Electronic Cleansing in Computed Tomographic Colonography: A Preliminary Feasibility Study.

Cancers (Basel). 2022-8-26

[8]
Self-supervised region-aware segmentation of COVID-19 CT images using 3D GAN and contrastive learning.

Comput Biol Med. 2022-10

[9]
Self-supervised learning in medicine and healthcare.

Nat Biomed Eng. 2022-12

[10]
Heuristic Attention Representation Learning for Self-Supervised Pretraining.

Sensors (Basel). 2022-7-10
