

Multimodal sensor fusion in the latent representation space.

Affiliations

School of Computer Science, Electrical and Electronic Engineering, and Engineering Maths, University of Bristol, Bristol, BS8 1UB, UK.

Publication information

Sci Rep. 2023 Feb 3;13(1):2005. doi: 10.1038/s41598-022-24754-w.

Abstract

A new method for multimodal sensor fusion is introduced. The technique relies on a two-stage process. In the first stage, a multimodal generative model is constructed from unlabelled training data. In the second stage, the generative model serves as a reconstruction prior and the search manifold for the sensor fusion tasks. The method also handles cases where observations are accessed only via subsampling, i.e. compressed sensing. We demonstrate the method's effectiveness and excellent performance on a range of multimodal fusion experiments, such as multisensory classification, denoising, and recovery from subsampled observations.
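To make the two-stage idea concrete, the following is a minimal, hypothetical sketch (not the authors' code): a pre-trained multimodal decoder stands in for the stage-1 generative model, and stage-2 fusion is a gradient search over the shared latent code against masked (subsampled) observations. The names `MultimodalDecoder` and `fuse_in_latent_space`, the toy linear architecture, and the loss weights are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch only: stage 1 is assumed to have trained a multimodal
# generative model p(x_1, ..., x_M | z); stage 2 searches the latent space so
# the decoded modalities match the (possibly subsampled) observations, using
# the model as a reconstruction prior.
import torch


class MultimodalDecoder(torch.nn.Module):
    """Toy generative model: one decoder per modality, all driven by a shared
    latent code z. In practice this would be the decoder half of a multimodal
    generative model trained on unlabelled data in stage 1."""

    def __init__(self, modality_dims=(64, 32), latent_dim=8):
        super().__init__()
        self.latent_dim = latent_dim
        self.decoders = torch.nn.ModuleList(
            torch.nn.Linear(latent_dim, d) for d in modality_dims
        )

    def forward(self, z):
        return [dec(z) for dec in self.decoders]


def fuse_in_latent_space(model, observations, masks, steps=200, lr=1e-2):
    """Stage 2: gradient search over z so the decoded modalities agree with the
    observed entries. `masks` select which measurements were actually taken,
    standing in for a subsampling / compressed-sensing operator."""
    z = torch.zeros(1, model.latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        recon = model(z)
        data_fit = sum(
            ((x_hat - y) * m).pow(2).sum()
            for x_hat, y, m in zip(recon, observations, masks)
        )
        prior = 1e-3 * z.pow(2).sum()  # keeps z near the learned latent manifold
        (data_fit + prior).backward()
        opt.step()
    with torch.no_grad():
        return z.detach(), model(z)
```

The fused latent code returned by the search can then feed downstream tasks such as classification, while the decoded modalities give denoised or recovered signals.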


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7b74/9898225/8b669c50a975/41598_2022_24754_Fig1_HTML.jpg
