J Opt Soc Am A Opt Image Sci Vis. 2021 Sep 1;38(9):1349-1356. doi: 10.1364/JOSAA.434860.
Computational color constancy algorithms are commonly evaluated only through angular error analysis on annotated datasets of static images. The widespread use of video in consumer devices motivated us to define a richer methodology for color constancy evaluation. To this end, temporal and spatial stability are defined here to measure how sensitive color constancy algorithms are to variations in the scene that do not depend on the illuminant source, such as moving subjects or a moving camera. Our evaluation methodology is applied to compare several color constancy algorithms on stable sequences belonging to the Gray Ball and Burst Color Constancy video datasets. The stable sequences, identified using a general-purpose procedure, are made available for public download to encourage future research. Our investigation demonstrates the importance of evaluating color constancy algorithms according to multiple metrics, rather than angular error alone. For example, the popular fully convolutional color constancy with confidence-weighted pooling algorithm is consistently the best-performing solution in terms of error, but it is often surpassed in terms of stability by the traditional gray edge algorithm and by the more recent sensor-independent illumination estimation algorithm.
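The abstract contrasts angular error with temporal stability but does not reproduce the paper's exact formulas. As a minimal sketch, assuming the standard recovery angular error between RGB illuminant vectors and defining temporal stability as the mean angular difference between illuminant estimates on consecutive frames of a stable sequence (where the true illuminant is constant), the two metrics could look like this; the function names and the averaging choice are illustrative, not taken from the paper:

```python
import numpy as np

def angular_error(est, gt):
    """Recovery angular error, in degrees, between an estimated and a
    ground-truth illuminant given as RGB vectors (scale-invariant)."""
    est = np.asarray(est, dtype=float)
    gt = np.asarray(gt, dtype=float)
    cos = np.dot(est, gt) / (np.linalg.norm(est) * np.linalg.norm(gt))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def temporal_stability(estimates):
    """Illustrative temporal-stability score: mean angular difference between
    illuminant estimates on consecutive frames of a stable sequence, where
    scene content varies but the illuminant does not. Lower is more stable."""
    diffs = [angular_error(a, b) for a, b in zip(estimates, estimates[1:])]
    return float(np.mean(diffs))
```

Under this definition, an algorithm that returns identical estimates on every frame of a stable sequence scores 0, regardless of how far those estimates are from the ground truth; this is why a method can win on angular error yet lose on stability, as the abstract reports.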