
Publication

Making deep neural networks right for the right scientific reasons by interacting with their explanations

Patrick Schramowski; Wolfgang Stammer; Stefano Teso; Anna Brugger; Franziska Herbert; Xiaoting Shao; Hans-Georg Luigs; Anne-Katrin Mahlein; Kristian Kersting
In: Nature Machine Intelligence, Vol. 2, No. 8, Pages 476-486, Springer Nature, 2020.

Abstract

Deep neural networks have shown excellent performance in many real-world applications. Unfortunately, they may exhibit "Clever Hans"-like behavior, exploiting confounding factors within datasets to achieve high performance. In this work, we introduce the novel learning setting of "explanatory interactive learning" (XIL) and illustrate its benefits on a plant phenotyping research task. XIL adds the scientist into the training loop such that she interactively revises the original model by providing feedback on its explanations. Our experimental results demonstrate that XIL can help avoid Clever Hans moments in machine learning and encourages (or discourages, if appropriate) trust in the underlying model.
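To make the idea of "providing feedback on explanations" concrete, below is a minimal sketch of one way such feedback can enter training: a "right for the right reasons"-style penalty (Ross et al., 2017) on input gradients in regions the scientist marked as irrelevant (e.g. confounders). This is an illustration under assumptions, not the paper's exact implementation; the names `rrr_loss`, `mask`, and `lam` are hypothetical, and a PyTorch classifier is assumed.

```python
import torch
import torch.nn.functional as F

def rrr_loss(model, x, y, mask, lam=10.0):
    """Hypothetical sketch of an explanation-constrained loss.

    `mask` is 1 wherever the annotator flagged input regions as
    irrelevant (e.g. a confounder); input gradients there are
    penalized so the model cannot base its prediction on them.
    """
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)           # standard "right answer" term
    log_probs = F.log_softmax(logits, dim=1)
    # Gradient of the log-probabilities w.r.t. the input (the "explanation").
    grads, = torch.autograd.grad(log_probs.sum(), x, create_graph=True)
    # Penalize explanation mass on the regions the scientist flagged.
    penalty = (mask * grads).pow(2).sum()
    return ce + lam * penalty                 # "right reasons" regularizer
```

In an interactive loop, the scientist would inspect the model's explanations, extend `mask` where the model attends to confounders, and retrain; `lam` trades off predictive accuracy against conformance to that feedback.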
