
Hyperspectral Video Analysis by Motion and Intensity Preprocessing and Subspace Autoencoding

Author Information

Vitale Raffaele, Ruckebusch Cyril, Burud Ingunn, Martens Harald

Affiliations

Univ. Lille, CNRS, LASIRE (UMR 8516), Laboratoire Avancé de Spectroscopie pour les Interactions, la Réactivité et l'Environnement, Lille, France.

Faculty of Science and Technology, Norwegian University of Life Sciences, Oslo, Norway.

Publication Information

Front Chem. 2022 Mar 15;10:818974. doi: 10.3389/fchem.2022.818974. eCollection 2022.

Abstract

Hyperspectral imaging has recently gained increasing attention from both the academic and industrial worlds due to its capability of providing spatial as well as physico-chemical information about the investigated objects. While this analytical approach is enjoying substantial success and diffusion in very disparate scenarios, the possibility of collecting sequences of hyperspectral images over time for monitoring dynamic scenes is far less exploited. This trend is mainly justified by the fact that these so-called hyperspectral videos usually result in big data sets, requiring terabytes of computer memory to be both stored and processed. Clearly, standard chemometric techniques need to be adapted or expanded to cope with such massive amounts of information. In addition, hyperspectral video data are often affected by many distinct sources of variation in sample chemistry (for example, light absorption effects) and sample physics (light scattering effects), as well as by systematic errors (associated, e.g., with fluctuations in the behaviour of the light source and/or of the camera). Identifying, disentangling and interpreting all these distinct sources of information is therefore undoubtedly a challenging task. In view of all these aspects, the present work describes a multivariate hybrid modelling framework for the analysis of hyperspectral videos, which involves spatial, spectral and temporal parametrisations of both known and unknown chemical and physical phenomena underlying complex real-world systems.
Such a framework encompasses three different computational steps: 1) motions ongoing within the inspected scene are estimated by optical flow analysis and compensated through IDLE modelling; 2) chemical variations are quantified and separated from physical variations by means of Extended Multiplicative Signal Correction (EMSC); 3) the resulting light scattering and light absorption data are subjected to On-The-Fly Processing and summarised spectrally, spatially and over time. The developed methodology was tested here on a near-infrared hyperspectral video of a piece of wood undergoing drying. It led to a significant reduction of the size of the original recorded measurements and, at the same time, provided valuable information about the systematic variations generated by the phenomena behind the monitored process.
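To illustrate the motion-estimation idea in step 1, the sketch below recovers a global integer translation between two frames by phase correlation. This is only a minimal stand-in for the dense optical-flow analysis and IDLE motion compensation used by the authors, whose actual implementation is not given in the abstract; the function name and the restriction to pure translations are assumptions made for illustration.

```python
import numpy as np

def estimate_shift(frame_a, frame_b):
    """Estimate the integer (row, col) translation taking frame_a to frame_b
    via phase correlation: the normalised cross-power spectrum of two frames
    that differ by a circular shift is a pure phase ramp, whose inverse FFT
    is a delta function located at the shift."""
    cross_power = np.conj(np.fft.fft2(frame_a)) * np.fft.fft2(frame_b)
    # Normalise magnitudes (small epsilon guards against division by zero).
    correlation = np.fft.ifft2(cross_power / (np.abs(cross_power) + 1e-12)).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Map peak coordinates from [0, N) to signed shifts in (-N/2, N/2].
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, correlation.shape))

# Usage: shift a random "frame" by (3, -2) pixels and recover the shift.
rng = np.random.default_rng(0)
frame = rng.random((32, 32))
moved = np.roll(frame, (3, -2), axis=(0, 1))
print(estimate_shift(frame, moved))  # → (3, -2)
```

In a real hyperspectral video the displacement field is spatially varying, which is why the authors resort to dense optical flow rather than a single global shift; phase correlation merely conveys the principle of aligning frames before any spectral modelling.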
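Step 2 relies on Extended Multiplicative Signal Correction. The sketch below implements classical EMSC (a reference spectrum plus a quadratic wavelength baseline) with NumPy, showing how additive and multiplicative physical effects are separated from the chemical signal; it is a minimal illustration, not the authors' code, and the scaled wavelength axis and function signature are assumptions.

```python
import numpy as np

def emsc(spectra, reference, poly_order=2):
    """Extended Multiplicative Signal Correction (EMSC).

    Each measured spectrum z is modelled as
        z = a + b * reference + polynomial(wavelength) + residual,
    where a and the polynomial capture additive baseline (scattering-like)
    effects and b captures the multiplicative path-length effect.
    Returns the corrected spectra and the fitted model coefficients.
    """
    n_wl = reference.size
    wl = np.linspace(-1.0, 1.0, n_wl)  # scaled wavelength axis (assumption)
    # Design matrix: polynomial baseline terms, then the reference spectrum.
    design = np.column_stack([wl ** k for k in range(poly_order + 1)]
                             + [reference])
    coefs, *_ = np.linalg.lstsq(design, spectra.T, rcond=None)
    baseline = design[:, :-1] @ coefs[:-1]   # additive part of each spectrum
    b = coefs[-1]                            # multiplicative part
    corrected = (spectra.T - baseline) / b
    return corrected.T, coefs.T

# Usage: two synthetic spectra with known baseline and scaling effects.
ref = np.sin(np.linspace(0.0, 3.0, 100)) + 2.0
wl = np.linspace(-1.0, 1.0, 100)
spectra = np.vstack([1.3 * ref + 0.2 + 0.1 * wl,   # scaled + tilted baseline
                     0.8 * ref - 0.1])             # scaled + offset
corrected, coefs = emsc(spectra, ref)
# After correction both rows coincide with the reference spectrum.
```

In the pipeline described above, the fitted physical coefficients (baseline and scaling) and the chemical residuals can then be followed per pixel over time, which is what makes the subsequent spectral, spatial and temporal summaries possible.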


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1816/8964463/3e169241d06c/fchem-10-818974-g001.jpg
