Luo Bingying, Teng Fei, Tang Guo, Cen Weixuan, Liu Xing, Chen Jinmiao, Qu Chi, Liu Xuanzhu, Liu Xin, Jiang Wenyan, Huang Huaqiang, Feng Yu, Zhang Xue, Jian Min, Li Mei, Xi Feng, Li Guibo, Liao Sha, Chen Ao, Yu Weimiao, Xu Xun, Zhang Jiajun
BGI Research, Chongqing, No. 313, Jinyue Road, Jiulongpo District, Chongqing 401329, China.
BGI Research, Shenzhen, No. 9, Yunhua Road, Yantian District, Shenzhen 518083, China.
Brief Bioinform. 2025 May 1;26(3). doi: 10.1093/bib/bbaf210.
Spatial omics technologies generate high-throughput, multimodal data and therefore demand advanced integration methods to enable comprehensive biological and clinical discoveries. Building on the concept of cross-attention, we developed StereoMM, an AI-based toolchain and graph-based fusion model that integrates omics data such as gene expression, histological images, and spatial locations. StereoMM uses an attention module for interaction between omics modalities and a graph autoencoder to integrate spatial positions with omics data in a self-supervised manner. Applying StereoMM across various cancer types and platforms demonstrated its robustness. StereoMM outperforms competing methods in identifying spatial regions that reflect tumour progression and shows promise in classifying colorectal cancer patients into deficient mismatch repair (dMMR) and proficient mismatch repair (pMMR) groups. StereoMM's comprehensive inter-modal integration and computational efficiency enable researchers to construct spatial views of integrated multimodal features, advancing thorough characterization of tissues and patients.
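To make the described architecture concrete, below is a minimal PyTorch sketch of the two components the abstract names: a bidirectional cross-attention module that lets gene-expression and image embeddings attend to each other, and a one-layer graph autoencoder trained self-supervised by reconstructing the fused features over the spatial neighbourhood graph. All module names, dimensions, and the feature-reconstruction objective here are illustrative assumptions; the abstract does not specify StereoMM's actual implementation details.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossAttentionFusion(nn.Module):
    """Bidirectional cross-attention between two per-spot embedding matrices."""
    def __init__(self, dim, n_heads=4):
        super().__init__()
        self.rna_to_img = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.img_to_rna = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, rna, img):
        # rna, img: (1, n_spots, dim); each modality queries the other
        rna_ctx, _ = self.rna_to_img(rna, img, img)
        img_ctx, _ = self.img_to_rna(img, rna, rna)
        return torch.cat([rna_ctx, img_ctx], dim=-1).squeeze(0)  # (n_spots, 2*dim)

class GraphAutoencoder(nn.Module):
    """One-layer GCN encoder with a feature-reconstruction decoder."""
    def __init__(self, in_dim, latent_dim):
        super().__init__()
        self.enc = nn.Linear(in_dim, latent_dim)
        self.dec = nn.Linear(latent_dim, in_dim)

    def forward(self, x, adj_norm):
        # adj_norm: symmetrically normalised spatial adjacency (n_spots, n_spots)
        z = torch.relu(adj_norm @ self.enc(x))  # latent spot representation
        x_hat = self.dec(adj_norm @ z)          # reconstructed fused features
        return z, x_hat

# Toy run: 100 spots with 32-dim embeddings per modality; the spatial kNN
# graph is replaced by an identity matrix purely so the example executes.
n_spots, dim = 100, 32
rna_emb = torch.randn(1, n_spots, dim)  # e.g. PCA of gene expression
img_emb = torch.randn(1, n_spots, dim)  # e.g. tile features from an image encoder
adj_norm = torch.eye(n_spots)

fusion = CrossAttentionFusion(dim)
gae = GraphAutoencoder(2 * dim, latent_dim=16)

fused = fusion(rna_emb, img_emb)
z, recon = gae(fused, adj_norm)
loss = F.mse_loss(recon, fused)  # self-supervised reconstruction loss
loss.backward()
print(z.shape, loss.item())
```

In this sketch the latent matrix z would be the per-spot representation used downstream (e.g. for clustering spatial domains); the choice of a feature-reconstruction loss over an adjacency-reconstruction loss is an assumption made for brevity.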