Department of Textiles, Graphic Arts and Design, Faculty of Natural Sciences and Engineering, University of Ljubljana, Snežniška Ulica 5, SI-1000 Ljubljana, Slovenia.
Sensors (Basel). 2023 Jan 15;23(2):1000. doi: 10.3390/s23021000.
Knowledge of an object's surface reflectance is essential in many technological fields, including graphics and cultural heritage. Compared to direct multi- or hyperspectral capturing approaches, commercial RGB cameras allow high resolution and fast acquisition, so the idea of mapping this information to a reflectance spectrum (RS) is promising. This study compared two modelling approaches based on a training set of RGB-reflectance pairs, one implementing artificial neural networks (ANN) and the other using multivariate polynomial approximation (PA). The effect of various parameters was investigated: the ANN learning algorithm (standard backpropagation (BP) or Levenberg-Marquardt (LM)), the number of hidden layers (HLs) and neurons, the degree of the multivariate polynomials in PA, and, for both models, the number of inputs and the training set size. In two-layer ANNs with significantly fewer inputs than outputs, better MSE performance was found when the number of neurons in the first HL was smaller than in the second. For ANNs with one and two HLs and the same number of neurons in the first layer, the RS reconstruction performance depends on the choice of the BP or LM learning algorithm. RS reconstruction methods based on ANN and PA are comparable, but the ANN models' better fine-tuning capabilities make it possible, under realistic constraints, to find ANNs that outperform PA models. A profiling approach was proposed to determine the initial number of neurons in the HLs, i.e., the search centre of the ANN models, for different training set sizes.
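As an illustration of the two model families compared in the abstract, the following is a minimal Python sketch: it fits (a) a two-hidden-layer feed-forward network and (b) a multivariate polynomial regression to synthetic RGB-reflectance pairs and reports the reconstruction MSE. All details here are assumptions for illustration only: the 31-band spectra, layer sizes, polynomial degree, and toy camera model are hypothetical, and scikit-learn's MLPRegressor stands in for the authors' ANN (it offers backpropagation-style solvers, not Levenberg-Marquardt).

```python
# Minimal sketch (hypothetical data shapes and parameters): reconstructing a
# 31-band reflectance spectrum from camera RGB with (a) a two-hidden-layer
# neural network and (b) multivariate polynomial approximation, the two model
# families compared in the paper. Not the authors' exact setup.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Synthetic stand-in for a training set of RGB-reflectance pairs:
# 3 camera responses per sample, 31 reflectance values (400-700 nm, 10 nm steps).
n_train, n_bands = 500, 31
R_train = rng.uniform(0.0, 1.0, size=(n_train, n_bands))        # reflectance spectra
rgb_train = R_train @ rng.uniform(0.0, 1.0, size=(n_bands, 3))  # crude camera model
rgb_train /= rgb_train.max()

# (a) ANN: two hidden layers; per the abstract, fewer neurons in the first HL
# than in the second tended to give lower MSE when inputs << outputs.
ann = MLPRegressor(hidden_layer_sizes=(8, 16), activation="tanh",
                   solver="lbfgs", max_iter=2000, random_state=0)
ann.fit(rgb_train, R_train)

# (b) PA: multivariate polynomial of the RGB triplet (degree is a model choice),
# fitted band-by-band with ordinary least squares.
pa = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
pa.fit(rgb_train, R_train)

# Compare reconstruction MSE (a held-out test set would be used in practice).
for name, model in [("ANN", ann), ("PA", pa)]:
    mse = mean_squared_error(R_train, model.predict(rgb_train))
    print(f"{name} training MSE: {mse:.5f}")
```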