Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China.
Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110016, China.
Sensors (Basel). 2020 Apr 24;20(8):2426. doi: 10.3390/s20082426.
Hyperspectral image reconstruction focuses on recovering spectral information from a single RGB image. In this paper, we propose two advanced Generative Adversarial Networks (GANs) for this heavily underconstrained inverse problem. We first propose the scale attention pyramid U-Net (SAPUNet), which uses a U-Net with dilated convolutions to extract features. We build a feature pyramid inside the network and use an attention mechanism for feature selection. The superior performance of this model is due to its modern architecture and its capture of spatial semantics. To provide a more accurate solution, we propose a second, distinct architecture, named W-Net, which adds one more branch to the U-Net to conduct boundary supervision. On the Interdisciplinary Computational Vision Lab at Ben Gurion University (ICVL) dataset, SAPUNet and the scale attention pyramid W-Net (SAPWNet) improve root mean square error (RMSE) by 42% and 46.6%, and relative RMSE by 45% and 50%, respectively. The experimental results demonstrate that our proposed models are more accurate than state-of-the-art hyperspectral recovery methods.
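The abstract reports results in terms of RMSE and relative RMSE. As a point of reference, a minimal sketch of these two metrics is shown below; the exact normalization used for relative RMSE varies across the hyperspectral recovery literature, so the per-pixel ground-truth normalization here is an assumption, not necessarily the paper's definition.

```python
import numpy as np

def rmse(gt, rec):
    """Root mean square error between a ground-truth cube and a reconstruction."""
    return float(np.sqrt(np.mean((gt - rec) ** 2)))

def relative_rmse(gt, rec, eps=1e-8):
    """RMSE with each per-pixel error normalized by the ground-truth value
    (one common definition in the hyperspectral recovery literature)."""
    return float(np.sqrt(np.mean(((gt - rec) / (gt + eps)) ** 2)))

# Toy 2x2 single-band "cube" standing in for a hyperspectral image.
gt = np.array([[1.0, 2.0], [3.0, 4.0]])
rec = np.array([[1.1, 1.9], [3.2, 3.8]])
print(rmse(gt, rec), relative_rmse(gt, rec))
```

A lower value on either metric means a closer spectral reconstruction; the reported 42-50% improvements are reductions in these errors relative to prior methods.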