DANet: spatial gene expression prediction from H&E histology images through dynamic alignment.

Authors

Wu Yulong, Xie Jin, Nie Jing, Cao Jiale, Zeng Yuansong, Wang Zheng

Affiliations

School of Big Data and Software Engineering, Chongqing University, Chongqing 400044, China.

School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China.

Publication

Brief Bioinform. 2025 Jul 2;26(4). doi: 10.1093/bib/bbaf422.

Abstract

Predicting spatial gene expression from hematoxylin and eosin (H&E) histology images offers a promising way to substantially reduce the time and cost of gene expression sequencing, thereby facilitating a deeper understanding of tissue architecture and disease mechanisms. Accurate gene expression prediction requires extracting highly refined features from pathological images; however, existing methods often struggle to capture fine-grained local details and to model gene-gene correlations. Moreover, in bimodal contrastive learning, dynamically and efficiently aligning heterogeneous modalities remains a critical challenge. To address these issues, we propose a novel method for predicting gene expression. First, we introduce a densely connected structure that enables efficient feature reuse, thereby enhancing the capture and mining of fine-grained local features. Second, we leverage state space models to uncover underlying patterns and capture dependencies within 1D gene expression data, enabling more accurate modeling of gene-gene correlations. Furthermore, we design the Residual Kolmogorov-Arnold Network (RKAN), which uses a learnable activation function to dynamically adjust bimodal mappings based on input characteristics. Through continuous parameter updates during contrastive training, RKAN progressively refines the alignment between modalities. Extensive experiments on two publicly available datasets, GSE240429 and HER2+, demonstrate the effectiveness of our approach and its significant improvements over existing methods. Source code is available at https://github.com/202324131016T/DANet.
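The abstract does not give implementation details. As a rough illustration of two of the ideas it highlights, the PyTorch sketch below pairs a residual projection head with a learnable per-feature activation (a simplified stand-in for the paper's RKAN) with a standard symmetric InfoNCE contrastive loss for aligning histology-patch and gene-expression embeddings. The class names, feature dimensions, three-function activation basis, and loss form are illustrative assumptions, not taken from the paper or the released code at https://github.com/202324131016T/DANet.

```python
# Minimal sketch (not the authors' code): a residual block whose nonlinearity
# is learnable, used as projection heads for both modalities, plus a CLIP-style
# symmetric contrastive loss over matched (image patch, expression spot) pairs.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LearnableActivation(nn.Module):
    """Per-feature activation expressed as a learnable mix of fixed bases
    (identity, SiLU, tanh) -- a crude proxy for a KAN-style learned function."""

    def __init__(self, dim: int):
        super().__init__()
        self.coef = nn.Parameter(torch.ones(dim, 3) / 3)  # (feature, basis)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        bases = torch.stack((x, F.silu(x), torch.tanh(x)), dim=-1)  # (..., dim, 3)
        return (bases * self.coef).sum(dim=-1)


class ResidualKANBlock(nn.Module):
    """Residual projection head whose activation is learned, so the mapping of
    each modality into the shared space can adapt during contrastive training
    (a simplified stand-in for the RKAN described in the abstract)."""

    def __init__(self, dim: int):
        super().__init__()
        self.act = LearnableActivation(dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.proj(self.act(x))


def symmetric_contrastive_loss(img_emb, gene_emb, temperature=0.07):
    """InfoNCE in both directions; matched pairs sit on the diagonal."""
    img_emb = F.normalize(img_emb, dim=-1)
    gene_emb = F.normalize(gene_emb, dim=-1)
    logits = img_emb @ gene_emb.t() / temperature          # (B, B) similarities
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    # Toy usage with hypothetical sizes: 8 spots, 256-d features per encoder.
    img_feat, gene_feat = torch.randn(8, 256), torch.randn(8, 256)
    head_img, head_gene = ResidualKANBlock(256), ResidualKANBlock(256)
    loss = symmetric_contrastive_loss(head_img(img_feat), head_gene(gene_feat))
    print(loss.item())
```

In this sketch the residual connection keeps the head close to an identity mapping early in training, while the learnable activation coefficients give it room to reshape the bimodal mapping as the contrastive objective updates the parameters, which is the behavior the abstract attributes to RKAN.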


Graphical abstract: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4b9f/12365980/c7c8920f4d8f/bbaf422ga1.jpg
