Dual-stream cross-modal fusion alignment network for survival analysis.

Authors

Song Jinmiao, Hao Yatong, Zhao Shuang, Zhang Peng, Feng Qilin, Dai Qiguo, Duan Xiaodong

Affiliations

School of Software, Xinjiang University, Urumqi 830046, China.

School of Computer Science and Engineering, Dalian Minzu University, Dalian 116650, China.

Publication

Brief Bioinform. 2025 Mar 4;26(2). doi: 10.1093/bib/bbaf103.

DOI: 10.1093/bib/bbaf103
PMID: 40116656
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11926988/
Abstract

Survival prediction serves as a pivotal component in precision oncology, enabling the optimization of treatment strategies through mortality risk assessment. While the integration of histopathological images and genomic profiles offers enhanced potential for patient stratification, existing methodologies are constrained by two fundamental limitations: (i) insufficient attention to fine-grained local features in favor of global representations, and (ii) suboptimal cross-modal fusion strategies that either neglect intrinsic correlations or discard modality-specific information. To address these challenges, we propose DSCASurv, a novel cross-modal fusion alignment framework designed to explore and integrate intrinsic correlations across multimodal data, thereby improving the accuracy of survival prediction. Specifically, DSCASurv leverages the local feature extraction capabilities of convolutional layers and the long-range dependency modeling of scanning state space models to extract intra-modal representations, while generating cross-modal representations through dual parallel mixer architectures. A cross-modal attention module functions as a bridge for inter-modal information exchange and complementary information transfer. The framework ultimately integrates all intra-modal representations to generate survival predictions by enhancing and recalibrating complementary information. Extensive experiments on five benchmark cancer datasets demonstrate the superior performance of our approach compared to existing methods.
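The cross-modal attention module described above acts as a bridge in which each modality's tokens attend to the other modality's tokens. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch; the module name, dimensions, and residual wiring are assumptions for exposition, not DSCASurv's actual implementation.

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Illustrative cross-modal attention bridge (not the paper's exact module).

    Queries from one modality attend to keys/values of the other, so each
    stream receives complementary information from its counterpart while
    a residual connection preserves modality-specific features.
    """

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # pathology tokens query genomic tokens, and vice versa
        self.path_to_gene = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gene_to_path = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, path_tokens: torch.Tensor, gene_tokens: torch.Tensor):
        # path_tokens: (B, N_p, dim) histopathology patch features
        # gene_tokens: (B, N_g, dim) genomic embeddings
        path_enh, _ = self.path_to_gene(path_tokens, gene_tokens, gene_tokens)
        gene_enh, _ = self.gene_to_path(gene_tokens, path_tokens, path_tokens)
        # residual connections keep the intra-modal representations intact
        return path_tokens + path_enh, gene_tokens + gene_enh
```

Each output stream has the same shape as its input, so the enhanced representations can be fed back into per-modality encoders or pooled for the survival head.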

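Survival models of this kind map the fused representation to a patient-level risk score. The abstract does not state the training objective, but a standard choice for deep survival networks is the negative Cox partial log-likelihood, sketched here under that assumption:

```python
import torch

def cox_partial_log_likelihood(risk: torch.Tensor,
                               time: torch.Tensor,
                               event: torch.Tensor) -> torch.Tensor:
    """Negative Cox partial log-likelihood for a batch (illustrative).

    risk:  (N,) predicted log-risk scores from the model
    time:  (N,) observed survival or censoring times
    event: (N,) 1.0 if the death was observed, 0.0 if censored
    """
    # Sort by descending time so each patient's risk set is a prefix sum.
    order = torch.argsort(time, descending=True)
    risk, event = risk[order], event[order]
    # log of the summed exp(risk) over each patient's risk set
    log_cumsum = torch.logcumsumexp(risk, dim=0)
    # only uncensored patients contribute to the partial likelihood
    ll = (risk - log_cumsum) * event
    return -ll.sum() / event.sum().clamp(min=1)
```

Concordance index (C-index) is the usual evaluation metric paired with this loss on benchmark cancer cohorts.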

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f00e/11926988/8e49fbf739e3/bbaf103f3.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f00e/11926988/1cd7dea74e3a/bbaf103f1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f00e/11926988/a07774137e29/bbaf103f2.jpg

Similar Articles

1
Cross-modal alignment and contrastive learning for enhanced cancer survival prediction.
Comput Methods Programs Biomed. 2025 May;263:108633. doi: 10.1016/j.cmpb.2025.108633. Epub 2025 Feb 7.
2
A 3D hierarchical cross-modality interaction network using transformers and convolutions for brain glioma segmentation in MR images.
Med Phys. 2024 Nov;51(11):8371-8389. doi: 10.1002/mp.17354. Epub 2024 Aug 13.
3
A Cross-Modal Attention-Driven Multi-Sensor Fusion Method for Semantic Segmentation of Point Clouds.
Sensors (Basel). 2025 Apr 14;25(8):2474. doi: 10.3390/s25082474.
4
SwinCross: Cross-modal Swin transformer for head-and-neck tumor segmentation in PET/CT images.
Med Phys. 2024 Mar;51(3):2096-2107. doi: 10.1002/mp.16703. Epub 2023 Sep 30.
5
MMGCN: Multi-modal multi-view graph convolutional networks for cancer prognosis prediction.
Comput Methods Programs Biomed. 2024 Dec;257:108400. doi: 10.1016/j.cmpb.2024.108400. Epub 2024 Sep 6.
6
MACTFusion: Lightweight Cross Transformer for Adaptive Multimodal Medical Image Fusion.
IEEE J Biomed Health Inform. 2025 May;29(5):3317-3328. doi: 10.1109/JBHI.2024.3391620. Epub 2025 May 6.
7
A dual-stream feature decomposition network with weight transformation for multi-modality image fusion.
Sci Rep. 2025 Mar 3;15(1):7467. doi: 10.1038/s41598-025-92054-0.
8
Multi-modal fusion network with intra- and inter-modality attention for prognosis prediction in breast cancer.
Comput Biol Med. 2024 Jan;168:107796. doi: 10.1016/j.compbiomed.2023.107796. Epub 2023 Dec 3.
9
SG-Fusion: A swin-transformer and graph convolution-based multi-modal deep neural network for glioma prognosis.
Artif Intell Med. 2024 Nov;157:102972. doi: 10.1016/j.artmed.2024.102972. Epub 2024 Aug 31.
