
Publication

Right for Better Reasons: Training Differentiable Models by Constraining their Influence Functions

Xiaoting Shao; Arseny Skryagin; Wolfgang Stammer; Patrick Schramowski; Kristian Kersting
In: Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-2021), pages 9533-9540, AAAI Press, 2021.

Abstract

Explaining black-box models such as deep neural networks is becoming increasingly important, as it helps to boost trust and debugging. Popular forms of explanations map the features to a vector indicating their individual importance to a decision at the instance level. They can then be used to prevent the model from learning the wrong bias in the data, possibly due to ambiguity. For instance, Ross et al.'s "right for the right reasons" propagates user explanations backwards to the network by formulating differentiable constraints based on input gradients. Unfortunately, input gradients, as well as many other widely used explanation methods, form an approximation of the decision boundary and assume the underlying model to be fixed. Here, we demonstrate how to make use of influence functions, a well-known robust statistic, in the constraints to correct the model's behaviour more effectively. Our empirical evidence demonstrates that this "right for better reasons" (RBR) considerably reduces the time to correct the classifier at training time and boosts the quality of explanations at inference time compared to input gradients. Besides, we also showcase the effectiveness of RBR in correcting "Clever Hans"-like behaviour in a real, high-dimensional domain.
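To make the constraint idea referenced in the abstract concrete, below is a minimal, hypothetical sketch (not the authors' code) of the Ross et al. style "right for the right reasons" objective that RBR builds on: a standard cross-entropy term plus a penalty on input gradients inside user-annotated irrelevant regions. The function name, signature, and the lambda weight are illustrative assumptions; the paper's RBR replaces the input-gradient term with one based on influence functions.

    # Sketch of an explanation-constrained loss (RRR-style), assuming a
    # differentiable PyTorch classifier. All names here are illustrative.
    import torch
    import torch.nn.functional as F

    def rrr_style_loss(model, x, y, mask, lam=10.0):
        """x: inputs, y: labels, mask: 1 where the user marks features as
        irrelevant to the decision, 0 elsewhere; lam weights the penalty."""
        x = x.clone().requires_grad_(True)
        logits = model(x)
        ce = F.cross_entropy(logits, y)

        # Input gradients of the summed log-probabilities w.r.t. the input,
        # kept in the graph so the penalty is itself differentiable.
        log_probs = F.log_softmax(logits, dim=1)
        grads, = torch.autograd.grad(log_probs.sum(), x, create_graph=True)

        # Penalize explanation mass that falls on user-flagged features.
        penalty = (mask * grads).pow(2).sum()
        return ce + lam * penalty

The constraint is differentiable, so it can simply be added to the training loss and optimized jointly; RBR, as described in the abstract, keeps this overall structure but derives the penalized attributions from influence functions rather than raw input gradients.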

Further Links