Project

XAINES

Explaining AI with Narratives

In the XAINES project, the aim is not only to ensure explainability, but also to provide actual explanations in the form of narratives. The central question is whether an AI system can explain in a single sentence why it acted the way it did, or whether it has to explain itself interactively to the user. To clarify this, one focal point of the project is the exploration of narratives and interactive narration, which are particularly well suited for conveying knowledge to humans, in their application with AI systems.

To obtain explanatory narratives, (linguistically) labelled sensor data streams and predictive models are used. Sensor information is combined with speech information, from which the AI system develops so-called scene understanding, which in turn generates explanations. Narratives are divided into domain narratives and machine learning narratives: domain narratives describe what happened in the domain, as captured by speech-based activity recognition, while machine learning narratives explain the predictions of the underlying models. The two are linked, because domain narratives are themselves constructed by machine learning. The intended end users of these narratives are the developers of the AI modules, the subject matter experts who use the software, and also interested laypersons.

The XAINES project, in which seven DFKI research areas work closely together, is funded by the Federal Ministry of Education and Research (BMBF). It follows the new guideline "Explainability and Transparency of Machine Learning and Artificial Intelligence" (orig.: "Erklärbarkeit und Transparenz des Maschinellen Lernens und der Künstlichen Intelligenz"), which was launched as part of the German government's AI strategy. The use cases come from the fields of autonomous driving (ASR), automation in construction (EI), and interactive medical decision support (IML).
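To make the distinction between the two narrative types concrete, the following minimal Python sketch is purely illustrative: the data classes, function names, and example data are assumptions made for this sketch and are not taken from the XAINES software. It composes a domain narrative (what happened, from labelled sensor events) and an ML narrative (why the model decided as it did, from a prediction and its most relevant inputs).

# Hypothetical sketch of the two narrative types; all names and data are illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class SensorEvent:
    timestamp: float        # seconds since start of the observed scene
    activity_label: str     # label assigned by an activity-recognition model

@dataclass
class Prediction:
    label: str              # predicted action of the AI system
    confidence: float       # model confidence in [0, 1]
    top_features: List[str] # inputs the model relied on most

def domain_narrative(events: List[SensorEvent]) -> str:
    # Describe what happened in the domain, in temporal order.
    ordered = sorted(events, key=lambda e: e.timestamp)
    steps = [f"at t={e.timestamp:.1f}s the system observed '{e.activity_label}'" for e in ordered]
    return "First " + ", then ".join(steps) + "."

def ml_narrative(pred: Prediction) -> str:
    # Explain the model's prediction in terms of its most relevant inputs.
    return (f"The model predicted '{pred.label}' with {pred.confidence:.0%} confidence, "
            f"mainly due to {', '.join(pred.top_features)}.")

events = [SensorEvent(0.0, "vehicle approaching a crosswalk"),
          SensorEvent(1.2, "pedestrian stepping onto the road")]
prediction = Prediction("emergency braking", 0.93, ["pedestrian distance", "vehicle speed"])

print(domain_narrative(events))    # what happened in the domain
print(ml_narrative(prediction))    # why the model acted as it did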

Partners

Research areas: Agenten und Simulierte Realität (ASR), Interaktives Maschinelles Lernen (IML), Smarte Daten und Wissensdienste (SDS), Eingebettete Intelligenz (EI), Sprachtechnologie (SLT), Sprachtechnologie und Multilingualität (MLT), Algorithmic Business and Production (ABP)

Sponsors

BMBF - Federal Ministry of Education and Research


Publications about the project

Nils Feldhus, Ajay Madhavan Ravichandran, Sebastian Möller

In: Rosina Weber, Ofra Amir, Tim Miller (eds.). IJCAI 2022 - Workshop on Explainable Artificial Intelligence (XAI). IJCAI Workshop on Explainable Artificial Intelligence (XAI-2022), located at IJCAI-ECAI 2022, July 23, Vienna, Austria. International Joint Conferences on Artificial Intelligence Organization, 7/2022.

Aliki Anagnostopoulou, Mareike Hartmann, Daniel Sonntag

In: Bridging Human-Computer Interaction and Natural Language Processing (Workshop at NAACL 2022), 7/2022.


German Research Center for Artificial Intelligence