Artificial Intelligence Laboratory, Fujitsu Limited, 4-1-1 Kamikodanaka, Nakahara-Ku, Kawasaki, Kanagawa 211-8588, Japan.
Neural Netw. 2022 Nov;155:119-143. doi: 10.1016/j.neunet.2022.07.026. Epub 2022 Jul 30.
The training data distribution is often biased towards objects in certain orientations and illumination conditions. While humans have a remarkable capability of recognizing objects in out-of-distribution (OoD) orientations and illuminations, Deep Neural Networks (DNNs) suffer severely in this case, even when large numbers of training examples are available. Neurons that are invariant to orientations and illuminations have been proposed as a neural mechanism that could facilitate OoD generalization, but it is unclear how to encourage the emergence of such invariant neurons. In this paper, we investigate three different approaches that lead to the emergence of invariant neurons and substantially improve DNNs in recognizing objects in OoD orientations and illuminations. Namely, these approaches are (i) training much longer after convergence of the in-distribution (InD) validation accuracy, i.e., late-stopping, (ii) tuning the momentum parameter of the batch normalization layers, and (iii) enforcing invariance of the neural activity in an intermediate layer to orientation and illumination conditions. Each of these approaches substantially improves the DNN's OoD accuracy (by more than 20% in some cases). We report results on four datasets: two are modified from the MNIST and iLab datasets, and the other two are novel (one of 3D rendered cars and another of objects captured under various controlled orientations and illumination conditions). These datasets allow us to study the effects of different amounts of bias and are challenging, as DNNs perform poorly in OoD conditions. Finally, we demonstrate that even though the three approaches focus on different aspects of DNNs, they all tend to lead to the same underlying neural mechanism that enables the OoD accuracy gains: individual neurons in the intermediate layers become invariant to OoD orientations and illuminations. We anticipate this study to serve as a basis for further improving the OoD generalization performance of deep neural networks, which is in high demand for achieving safe and fair AI applications.
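To make approaches (ii) and (iii) concrete, the following is a minimal, hypothetical sketch; it assumes PyTorch (the abstract does not prescribe a framework), and the small architecture, the 0.01 momentum value, and the weighting factor lam are illustrative assumptions rather than the authors' exact setup. Batch-normalization momentum is set at construction time, and an invariance penalty pulls the intermediate activations of two views of the same object (differing in orientation or illumination) toward each other. Approach (i), late-stopping, requires no code change.

import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10, bn_momentum=0.01):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            # (ii) tuned batch-norm momentum (PyTorch default is 0.1); the
            # 0.01 value here is only an illustrative choice
            nn.BatchNorm2d(32, momentum=bn_momentum),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        # z: intermediate-layer activity that should become invariant
        z = self.features(x)
        return self.classifier(z), z

model = SmallCNN()
ce = nn.CrossEntropyLoss()

def training_step(x_a, x_b, y, lam=1.0):
    # x_a, x_b: two images of the same object under different orientation /
    # illumination conditions; y: class labels shared by both views.
    logits_a, z_a = model(x_a)
    logits_b, z_b = model(x_b)
    # (iii) penalize differences in intermediate neural activity across conditions
    inv = ((z_a - z_b) ** 2).mean()
    return ce(logits_a, y) + ce(logits_b, y) + lam * inv

# (i) Late-stopping is a training-schedule choice rather than a code change:
# keep optimizing well after the InD validation accuracy has converged.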