School of Computer Science, Electrical and Electronic Engineering, and Engineering Maths, University of Bristol, Bristol, BS8 1UB, UK.
Sci Rep. 2023 Feb 3;13(1):2005. doi: 10.1038/s41598-022-24754-w.
A new method for multimodal sensor fusion is introduced. The technique relies on a two-stage process. In the first stage, a multimodal generative model is constructed from unlabelled training data. In the second stage, the generative model serves as a reconstruction prior and the search manifold for the sensor fusion tasks. The method also handles cases where observations are accessed only via subsampling, i.e., compressed sensing. We demonstrate the method's effectiveness and excellent performance on a range of multimodal fusion experiments, such as multisensory classification, denoising, and recovery from subsampled observations.
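The sketch below illustrates the two-stage idea described in the abstract; it is not the authors' implementation. It assumes a simple PyTorch multimodal autoencoder with hypothetical modality sizes D1 and D2: stage 1 fits the generative model on unlabelled paired data, and stage 2 freezes the decoders and searches the latent manifold for a code whose decoded outputs best match subsampled (compressed-sensing-style) observations.

```python
# Minimal sketch of the two-stage approach, assuming a PyTorch autoencoder
# as the multimodal generative model (architecture and sizes are illustrative).
import torch
import torch.nn as nn

D1, D2, LATENT = 32, 48, 8          # hypothetical modality and latent dimensions

class MultimodalAE(nn.Module):
    def __init__(self):
        super().__init__()
        # Joint encoder over both modalities, and one decoder per modality.
        self.enc = nn.Sequential(nn.Linear(D1 + D2, 64), nn.ReLU(),
                                 nn.Linear(64, LATENT))
        self.dec1 = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(),
                                  nn.Linear(64, D1))
        self.dec2 = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(),
                                  nn.Linear(64, D2))

    def forward(self, x1, x2):
        z = self.enc(torch.cat([x1, x2], dim=-1))
        return self.dec1(z), self.dec2(z)

# Stage 1: unsupervised training of the generative model on unlabelled
# multimodal data (random tensors stand in for real sensor recordings).
model = MultimodalAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x1, x2 = torch.randn(256, D1), torch.randn(256, D2)
for _ in range(200):
    r1, r2 = model(x1, x2)
    loss = ((r1 - x1) ** 2).mean() + ((r2 - x2) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: sensor fusion as a search over the latent manifold. Each modality
# is observed only through a random subsampling mask (compressed sensing).
mask1 = torch.rand(D1) < 0.5
mask2 = torch.rand(D2) < 0.5
y1, y2 = x1[:1, mask1], x2[:1, mask2]       # subsampled observations of one sample

z = torch.zeros(1, LATENT, requires_grad=True)
opt_z = torch.optim.Adam([z], lr=1e-2)
for _ in range(500):
    r1, r2 = model.dec1(z), model.dec2(z)
    # Data-fit term evaluated only at the observed (subsampled) entries.
    fit = ((r1[:, mask1] - y1) ** 2).mean() + ((r2[:, mask2] - y2) ** 2).mean()
    opt_z.zero_grad(); fit.backward(); opt_z.step()

x1_hat, x2_hat = model.dec1(z), model.dec2(z)   # fused full-signal estimates
```

Here the frozen generative model acts as the reconstruction prior: candidate solutions are constrained to its decoded manifold, and the latent code is the only free variable during fusion.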