Amodio Matthew, Youlten Scott E, Venkat Aarthi, San Juan Beatriz P, Chaffer Christine L, Krishnaswamy Smita
Yale University Department of Computer Science, New Haven, CT, USA.
Garvan Institute of Medical Research, Darlinghurst, NSW, Australia.
Patterns (N Y). 2022 Sep 1;3(9):100577. doi: 10.1016/j.patter.2022.100577. eCollection 2022 Sep 9.
Exciting advances in technologies to measure biological systems are currently at the forefront of research. The ability to gather data along an increasing number of omic dimensions has created a need for tools to analyze all of this information together, rather than siloing each technology into a separate analysis pipeline. To advance this goal, we introduce a framework called the single-cell multi-modal generative adversarial network (scMMGAN), which integrates data from multiple modalities into a unified representation in the ambient data space for downstream analysis, using a combination of adversarial learning and data geometry techniques. The framework's key improvement is an additional diffusion geometry loss with a new kernel that constrains the otherwise over-parameterized GAN. We demonstrate that scMMGAN produces more meaningful alignments than alternative methods on a wide variety of data modalities and that its output can be used to draw conclusions from real-world biological experimental data.
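To make the abstract's key technical point concrete, the sketch below illustrates one way a diffusion geometry loss can constrain a generator's mapping: build a row-stochastic diffusion operator over a source batch and over its generated image, then penalize differences between the two operators, which measures how much the mapping distorts the data's intrinsic geometry. This is a minimal sketch under stated assumptions, not the scMMGAN implementation: it substitutes a plain Gaussian kernel for the paper's kernel, and the names diffusion_operator, diffusion_geometry_loss, and the toy generator G are hypothetical.

```python
import torch


def diffusion_operator(x: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Row-stochastic (Markov) diffusion operator over a batch of points."""
    # Pairwise squared Euclidean distances within the batch.
    d2 = torch.cdist(x, x).pow(2)
    # Gaussian affinity kernel (a simple stand-in for the paper's kernel).
    k = torch.exp(-d2 / (2.0 * sigma ** 2))
    # Row-normalize so each row is a transition probability distribution.
    return k / k.sum(dim=1, keepdim=True)


def diffusion_geometry_loss(x_source: torch.Tensor,
                            x_mapped: torch.Tensor,
                            t: int = 1,
                            sigma: float = 1.0) -> torch.Tensor:
    """Penalize differences between t-step diffusion operators of a batch
    and its image under the generator, i.e. distortions of manifold geometry."""
    p_src = torch.linalg.matrix_power(diffusion_operator(x_source, sigma), t)
    p_map = torch.linalg.matrix_power(diffusion_operator(x_mapped, sigma), t)
    return (p_src - p_map).pow(2).mean()


if __name__ == "__main__":
    # Toy usage: G stands in for a generator mapping modality A into the
    # ambient space of modality B; here it is just a random linear layer.
    a_batch = torch.randn(128, 50)   # batch of cells from modality A
    G = torch.nn.Linear(50, 30)      # hypothetical generator
    loss = diffusion_geometry_loss(a_batch, G(a_batch), t=2)
    print(loss.item())
```

In a full training loop, a term of this form would presumably be weighted and added to the adversarial (and any cycle-consistency) objectives, so that the generator is rewarded both for producing realistic outputs and for preserving neighborhood structure across modalities.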