Faculty of Information Technology, University of Jyväskylä, 40014 Jyväskylä, Finland.
Sensors (Basel). 2022 Nov 10;22(22):8668. doi: 10.3390/s22228668.
Hyperspectral imaging and distance data have previously been used in aerial, forestry, agricultural, and medical imaging applications. Extracting meaningful information from a combination of different imaging modalities is difficult, as image sensor fusion requires knowing the optical properties of the sensors, selecting the right optics, and finding the sensors' mutual reference frame through calibration. In this research we demonstrate a method for fusing data from a Fabry-Pérot interferometer hyperspectral camera and a Kinect V2 time-of-flight depth-sensing camera. We created an experimental application that uses the depth-augmented hyperspectral data to measure emission-angle-dependent reflectance from a point cloud inferred from multiple views. We determined the intrinsic and extrinsic camera parameters through calibration, used global and local registration algorithms to combine point clouds from different viewpoints, created a dense point cloud, and determined the angle-dependent reflectances from it. The method successfully combined the 3D point cloud data and hyperspectral data from different viewpoints of a reference colorchecker board. The point cloud registrations reached a fitness of 0.29-0.36 for inlier point correspondences, with an RMSE of approximately 2, which indicates a fairly reliable registration result. The RMSE of the measured reflectances between the front and side views of the targets varied between 0.01 and 0.05 on average, and the spectral angle between 1.5 and 3.2 degrees. The results suggest that a changing emission angle has only a small effect on the surface reflectance intensity and spectrum shape, as expected for the colorchecker used.
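The registration quality figures quoted in the abstract (fitness for inlier correspondences and an inlier RMSE) follow the usual point-cloud-registration definitions: fitness is the fraction of source points that have a nearest target point within a distance threshold, and RMSE is computed over those inlier pairs. The brute-force sketch below illustrates these definitions under that assumption; the paper's pipeline presumably used a dedicated registration library rather than this O(n·m) search.

```python
import numpy as np

def registration_metrics(source, target, threshold):
    """Fitness and inlier RMSE for an aligned source/target point pair.

    source, target: (N, 3) and (M, 3) arrays of 3D points, already in a
    common frame. threshold: max correspondence distance for an inlier.
    Hypothetical brute-force sketch of the standard metric definitions.
    """
    source, target = np.asarray(source, float), np.asarray(target, float)
    # distance from every source point to its nearest target point
    d = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=2)
    nearest = d.min(axis=1)
    inliers = nearest <= threshold
    if not inliers.any():
        return 0.0, 0.0
    fitness = float(inliers.mean())                      # inlier fraction
    rmse = float(np.sqrt(np.mean(nearest[inliers] ** 2)))  # over inliers only
    return fitness, rmse
```

On this reading, a fitness of 0.29-0.36 means roughly a third of the points found an inlier correspondence after alignment, which is plausible for partially overlapping multi-view scans of the same target.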