Endmember-Guided Unmixing Network (EGU-Net): A General Deep Learning Framework for Self-Supervised Hyperspectral Unmixing.

Author Information

Hong Danfeng, Gao Lianru, Yao Jing, Yokoya Naoto, Chanussot Jocelyn, Heiden Uta, Zhang Bing

Publication Information

IEEE Trans Neural Netw Learn Syst. 2022 Nov;33(11):6518-6531. doi: 10.1109/TNNLS.2021.3082289. Epub 2022 Oct 27.

Abstract

Over the past decades, enormous efforts have been made to improve the performance of linear and nonlinear mixing models for hyperspectral unmixing (HU), yet their ability to simultaneously generalize across various spectral variabilities (SVs) and extract physically meaningful endmembers remains limited, owing to weak data fitting and reconstruction capabilities and to sensitivity to SVs. Inspired by the powerful learning ability of deep learning (DL), we develop a general DL approach for HU that fully considers the properties of endmembers extracted from the hyperspectral imagery, called the endmember-guided unmixing network (EGU-Net). Going beyond a standalone autoencoder-like architecture, EGU-Net is a two-stream Siamese deep network: it learns an additional network from pure or nearly pure endmembers and uses it to correct the weights of the unmixing network by sharing network parameters and adding spectrally meaningful constraints (e.g., nonnegativity and sum-to-one), yielding a more accurate and interpretable unmixing solution. Furthermore, the resulting general framework is not limited to pixelwise spectral unmixing; with convolutional operators it also models spatial information for spatial-spectral unmixing. Experimental results on three different datasets with ground-truth abundance maps for each material demonstrate the effectiveness and superiority of EGU-Net over state-of-the-art unmixing algorithms. The codes will be available from the website: https://github.com/danfenghong/IEEE_TNNLS_EGU-Net.
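The two key ideas in the abstract — a shared-weight (Siamese) encoder applied to both pure endmember spectra and mixed pixels, and the physical abundance constraints (nonnegativity and sum-to-one) — can be illustrated with a minimal NumPy sketch. This is an illustrative assumption of the general structure, not the authors' implementation (their code is at the repository above); the class and function names here are hypothetical, and a softmax output is used as one common way to enforce both abundance constraints at once.

```python
import numpy as np

def softmax(z):
    # Softmax enforces the abundance constraints: every output is
    # nonnegative and each row sums to one over the endmembers.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class TwoStreamUnmixerSketch:
    """Hypothetical two-stream sketch: a single set of encoder weights
    serves both streams (pure/nearly pure endmember spectra and mixed
    pixels), mimicking the parameter sharing described in the abstract."""

    def __init__(self, n_bands, n_endmembers, seed=0):
        rng = np.random.default_rng(seed)
        # Shared encoder weights (used by both streams).
        self.W = 0.1 * rng.standard_normal((n_bands, n_endmembers))
        # Endmember signature matrix acting as a linear decoder.
        self.E = rng.random((n_endmembers, n_bands))

    def abundances(self, spectra):
        # Map spectra (n_pixels, n_bands) to constrained abundances.
        return softmax(spectra @ self.W)

    def reconstruct(self, spectra):
        # Linear mixing model: pixel ≈ abundances @ endmember matrix.
        return self.abundances(spectra) @ self.E
```

Feeding pure endmember spectra through `abundances` and penalizing their deviation from one-hot vectors is one way the endmember stream could guide the mixed-pixel stream, since both use the same `W`.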
