Publication

Detecting Covariate Drift with Explanations

Steffen Castle; Robert Schwarzenberg; Mohsen Pourvali
In: NLPCC Workshop on Explainable Artificial Intelligence (XAI-2021), October 15, Qingdao, China, Springer, 2021.

Abstract

Detecting a domain drift between training and inference data is important for any model evaluated on data collected in real time. Many current drift detection methods utilize only the input features to detect domain drift. While effective, these methods disregard the model's evaluation of the data, which can be a significant source of information about the data domain. We propose to harness this information by using model explanations, specifically gradient times input attributions. Following the framework of Rabanser et al., we combine these explanations with two-sample tests to detect a shift in distribution between training and evaluation data. Promising initial experiments show that explanations provide useful information for detecting shift, potentially improving upon the current state of the art.
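
For illustration, here is a minimal sketch of the general idea, not the paper's exact pipeline: compute gradient-times-input attributions on training and evaluation batches, then apply per-dimension Kolmogorov-Smirnov two-sample tests with a Bonferroni correction, loosely following the multiple-testing setup of Rabanser et al. The `model`, `x_train`, and `x_eval` names are placeholders, and the paper's choice of dimensionality reduction and test statistic may differ.

```python
import torch
from scipy.stats import ks_2samp

def grad_times_input(model, x):
    """Gradient-times-input attribution w.r.t. the top predicted class."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    # Sum the top-class logits so a single backward pass yields
    # per-example gradients.
    logits.max(dim=1).values.sum().backward()
    return (x.grad * x).detach().numpy()

def detect_shift(model, x_train, x_eval, alpha=0.05):
    """Flag drift if any attribution dimension differs significantly."""
    a_train = grad_times_input(model, x_train).reshape(len(x_train), -1)
    a_eval = grad_times_input(model, x_eval).reshape(len(x_eval), -1)
    # One KS two-sample test per attribution dimension,
    # Bonferroni-corrected across dimensions.
    p_values = [ks_2samp(a_train[:, j], a_eval[:, j]).pvalue
                for j in range(a_train.shape[1])]
    return min(p_values) < alpha / a_train.shape[1]
```

The key point of the method is that the tests run on the model's explanations rather than on the raw inputs, so the detector reflects what the model attends to, not just how the data itself is distributed.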