
Hyper-Dense_Lung_Seg: Multimodal-Fusion-Based Modified U-Net for Lung Tumour Segmentation Using Multimodality of CT-PET Scans.

Authors

Alshmrani Goram Mufarah, Ni Qiang, Jiang Richard, Muhammed Nada

Affiliations

School of Computing and Communications, Lancaster University, Lancaster LA1 4YW, UK.

College of Computing and Information Technology, University of Bisha, Bisha 67714, Saudi Arabia.

Publication

Diagnostics (Basel). 2023 Nov 20;13(22):3481. doi: 10.3390/diagnostics13223481.

Abstract

Lung cancer is the leading cause of cancer-related deaths globally and the second most commonly diagnosed cancer. The advent of PET/CT scanning has made lung tumour segmentation, treatment evaluation, and tumour stage classification significantly more accessible, since both functional and anatomic data can be obtained during a single examination. However, integrating images from different modalities remains time-consuming and challenging for medical professionals. This challenge arises from several factors, including differences in image acquisition techniques and image resolutions, as well as inherent variations in the spectral and temporal data captured by different imaging modalities. Artificial Intelligence (AI) methodologies have shown potential for automating image integration and segmentation. To address these challenges, multimodal-fusion-based U-Net architectures (early fusion, late fusion, dense fusion, hyper-dense fusion, and hyper-dense VGG16 U-Net) are proposed for lung tumour segmentation. A Dice score of 73% shows that the hyper-dense VGG16 U-Net is superior to the other four proposed models. The proposed method can potentially aid medical professionals in detecting lung cancer at an early stage.
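To illustrate the difference between the fusion strategies named above, the sketch below contrasts early fusion (the CT and PET images are stacked as input channels for a single network) with late fusion (two separate networks each produce a probability map and the maps are combined afterwards). This is a minimal, dependency-free illustration on toy 2×2 images; the variable names and the averaging rule for late fusion are assumptions for demonstration, not the paper's implementation.

```python
# Toy sketch of early vs. late multimodal fusion (illustrative only;
# not the paper's architecture).

def early_fusion(ct, pet):
    """Stack the two modalities as input channels.

    One network then sees a (channels=2, H, W) input and can learn
    cross-modal features from the first layer onward.
    """
    return [ct, pet]

def late_fusion(pred_ct, pred_pet):
    """Combine per-modality probability maps by pixel-wise averaging.

    Here each modality would be segmented by its own network first;
    averaging is one common (assumed) way to merge the two decisions.
    """
    return [[(a + b) / 2 for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(pred_ct, pred_pet)]

# Toy single-channel "images" standing in for CT and PET slices.
ct = [[0.1, 0.2], [0.3, 0.4]]
pet = [[0.9, 0.8], [0.7, 0.6]]

fused_input = early_fusion(ct, pet)   # 2-channel input for one network
fused_pred = late_fusion(ct, pet)     # merged output of two networks
```

Dense and hyper-dense fusion sit between these extremes: feature maps are exchanged between the modality-specific paths at multiple depths of the encoder rather than only at the input or the output.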


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f658/10670323/ddd93a08b916/diagnostics-13-03481-g001.jpg
