Kawai Yuji, Tachikawa Kazuki, Park Jihoon, Asada Minoru
Symbiotic Intelligent Systems Research Center, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Osaka 565-0871, Japan.
Graduate School of Engineering, Osaka University, Osaka 565-0871, Japan.
Brain Sci. 2022 Jun 28;12(7):849. doi: 10.3390/brainsci12070849.
The integrated gradients (IG) method is widely used to evaluate how much each input feature contributes to the classification made by a deep learning model, because it theoretically satisfies desirable properties for fairly attributing contributions to the classification. However, the approach requires an appropriate baseline to do so. In this study, we propose a compensated IG method that does not require a baseline: it compensates the contributions computed by the IG method at an arbitrary baseline using an example of the Shapley sampling value. We prove that the proposed approach computes the contributions to the classification results reliably if the processing of each input feature in the classifier is independent of the others and the parameterization of each process is identical, as with shared weights in convolutional neural networks. Using three electroencephalogram recording datasets, we experimentally demonstrate that the contributions obtained by the proposed compensated IG method are more reliable than those obtained by the original IG method, and that its computational complexity is much lower than that of the Shapley sampling method.
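For reference, the baseline-dependent IG attribution that the paper builds on can be sketched as follows. This is a minimal NumPy illustration of the standard IG formula on a toy analytic function (not the authors' compensated method, and `f`, `grad_f`, and the weights are invented for illustration); the attribution for each feature is (x − baseline) times the average gradient along the straight-line path from the baseline to the input.

```python
import numpy as np

W = np.array([1.0, 2.0, 3.0])  # toy weights (illustrative only)

def f(x):
    # toy differentiable "classifier" score
    return float(np.sum(W * x ** 2))

def grad_f(x):
    # analytic gradient of f
    return 2.0 * W * x

def integrated_gradients(x, baseline, steps=200):
    # midpoint Riemann-sum approximation of the IG path integral
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.array([grad_f(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

x = np.array([1.0, -1.0, 0.5])
baseline = np.zeros(3)
attr = integrated_gradients(x, baseline)
# completeness axiom: attributions sum to f(x) - f(baseline)
print(attr, attr.sum(), f(x) - f(baseline))
```

The completeness property shown in the final line is one of the axioms the abstract refers to; the paper's contribution is removing the need to choose `baseline` well, since a poor baseline changes the per-feature attributions even though their sum is fixed.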