
A multimodal fusion network based on variational autoencoder for distinguishing SCLC brain metastases from NSCLC brain metastases.

Author Information

Linyan Xue, Jie Cao, Kexuan Zhou, Houquan Chen, Chaoyi Qi, Xiaosong Yin, Jianing Wang, Kun Yang

Affiliations

College of Quality and Technical Supervision, Hebei University, Baoding, China.

Hebei Technology Innovation Center for Lightweight of New Energy Vehicle Power System, and National & Local Joint Engineering Research Center of Metrology Instrument and System, Hebei University, Baoding, China.

Publication Information

Med Phys. 2025 Jul;52(7):e17816. doi: 10.1002/mp.17816. Epub 2025 May 2.

Abstract

BACKGROUND

Distinguishing small cell lung cancer brain metastases from non-small cell lung cancer brain metastases in MRI sequence images is crucial for the accurate diagnosis and treatment of lung cancer brain metastases. Multiple MRI modalities provide complementary and comprehensive information, but efficiently merging these sequences to achieve modality complementarity is challenging due to redundant information within radiomic features and heterogeneity across different modalities.

PURPOSE

To address these challenges, we propose a novel multimodal fusion network, termed MFN-VAE, which utilizes a variational autoencoder (VAE) to compress and aggregate radiomic features derived from MRI images.

METHODS

Initially, we extract radiomic features from regions of interest in MRI images across T1WI, FLAIR, and DWI modalities. A VAE encoder is then constructed to project these multimodal features into a latent space, from which a decoder reconstructs the original features. The encoder-decoder network is trained to extract the underlying feature representation of each modality, capturing both the consistency and the specificity of each domain.
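The encode-fuse-decode idea above can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's architecture: the single linear encoder, the layer sizes, the 107-dimensional feature count, and fusion by simple concatenation are all assumptions made for the example, and training (reconstruction plus KL losses) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w_mu, w_logvar):
    # Linear "encoder": map a modality's radiomic features to a latent
    # mean and log-variance (illustrative; the real encoder is a trained net).
    return x @ w_mu, x @ w_logvar

def reparameterize(mu, logvar, rng):
    # VAE reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

n_features, n_latent = 107, 16          # sizes are assumptions for the sketch
modalities = ["T1WI", "FLAIR", "DWI"]

# One radiomic feature vector per MRI modality for a single lesion
# (random stand-ins for features extracted from the ROI).
feats = {m: rng.standard_normal((1, n_features)) for m in modalities}

# Per-modality encoder weights (randomly initialized; training omitted).
params = {m: (rng.standard_normal((n_features, n_latent)) * 0.01,
              rng.standard_normal((n_features, n_latent)) * 0.01)
          for m in modalities}

# Encode each modality into the latent space, then fuse the latent codes.
latents = []
for m in modalities:
    mu, logvar = encode(feats[m], *params[m])
    latents.append(reparameterize(mu, logvar, rng))
fused = np.concatenate(latents, axis=1)

# A decoder (omitted) would reconstruct each modality's features from its
# latent code; the fused vector would feed the SCLC-vs-NSCLC classifier.
print(fused.shape)  # (1, 48): 3 modalities x 16 latent dimensions
```

Concatenation is only one fusion choice; the point of the latent-space step is that each modality is compressed to a compact code before fusion, which reduces the redundancy noted in the Background.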

RESULTS

Experimental results on our collected dataset of lung cancer brain metastases demonstrate the encouraging performance of the proposed MFN-VAE. The method achieved an accuracy of 0.888 and an area under the curve (AUC) of 0.920, outperforming state-of-the-art methods across different modality combinations.

CONCLUSIONS

In this study, we introduce MFN-VAE, a new multimodal fusion network for differentiating small cell from non-small cell lung cancer brain metastases. Tested on a private dataset, MFN-VAE demonstrated high performance (ACC: 0.888; AUC: 0.920), effectively distinguishing small cell lung cancer (SCLC) brain metastases from non-small cell lung cancer (NSCLC) brain metastases. The SHapley Additive exPlanations (SHAP) method was used to enhance model interpretability, providing clinicians with a reliable diagnostic tool. Overall, MFN-VAE shows great potential for improving the diagnosis and treatment of lung cancer brain metastases.
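SHAP attributes each feature an additive contribution to a single prediction, with the contributions summing to the gap between that prediction and the model's average output. The sketch below shows this additivity property on a toy linear model, for which the Shapley value has the closed form w_j * (x_j - E[x_j]); the weights, feature count, and background data are invented for illustration and bear no relation to the paper's trained classifier.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a trained classifier: a linear score over 6 fused
# latent features (weights and data are illustrative, not from the paper).
n = 6
w = rng.standard_normal(n)
X_background = rng.standard_normal((100, n))  # reference dataset
x = rng.standard_normal(n)                    # one lesion to explain

def f(X):
    return X @ w

# For a linear model, the exact SHAP value of feature j is
# phi_j = w_j * (x_j - E[x_j]).
phi = w * (x - X_background.mean(axis=0))

# Additivity: contributions sum to f(x) minus the expected prediction.
assert np.isclose(phi.sum(), f(x) - f(X_background).mean())
print(np.round(phi, 3))
```

For a deep model like MFN-VAE the closed form does not apply; a model-agnostic estimator (e.g. kernel-based SHAP) would approximate the same quantities, and ranking features by |phi_j| is what yields the clinician-facing interpretation described above.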

