
Project | XAINES

Explaining AI with Narratives

In the XAINES project, the aim is not only to ensure explainability, but also to provide explanations in the form of narratives. The central question is whether an AI system can explain in a single sentence why it acted the way it did, or whether it has to explain itself interactively to the user. To this end, one focal point of the project is exploring how narratives and interactive narratives, formats that are particularly well suited for humans to absorb knowledge, can be applied with AI systems.

To obtain explanatory narratives, (linguistically) labelled sensor data streams and predictive models are used. Sensor information is combined with speech information, from which the AI system develops so-called scene understanding, which in turn generates the explanations. Narratives are divided into domain narratives and machine learning narratives: domain narratives describe what happened in the domain, as captured by speech-based activity recognition, while machine learning narratives explain the predictions of the underlying models. The two kinds are linked, since domain narratives are themselves constructed by machine learning. The intended end users of these narratives are, on the one hand, the developers of the AI modules and, on the other hand, the subject matter experts who use the software, as well as interested laypersons.

The XAINES project, in which seven DFKI research areas work closely together, is funded by the Federal Ministry of Education and Research (BMBF). The project follows the new guideline "Explainability and Transparency of Machine Learning and Artificial Intelligence" (orig.: "Erklärbarkeit und Transparenz des maschinellen Lernens und der Künstlichen Intelligenz"), which was launched as part of the German government's AI strategy. The use cases come from the fields of autonomous driving (ASR), automation in construction (EI), and interactive medical decision support (IML).
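To make the distinction concrete, the pipeline sketched above can be illustrated with a minimal, entirely hypothetical example: labelled sensor events are turned into a domain narrative (what happened), and a model prediction with its most influential feature is turned into a machine learning narrative (why the model decided as it did). All names, labels, and the percentage-based explanation format here are illustrative assumptions, not part of the XAINES software.

```python
from dataclasses import dataclass

# Hypothetical sketch: linguistically labelled sensor events and a model
# prediction are rendered as two kinds of narrative sentences.

@dataclass
class SensorEvent:
    timestamp: float   # seconds since the start of the recording (assumed unit)
    activity: str      # linguistic label attached to the sensor stream

def domain_narrative(events: list[SensorEvent]) -> str:
    """Describe what happened in the domain, in temporal order."""
    steps = [e.activity for e in sorted(events, key=lambda e: e.timestamp)]
    return "First the user " + ", then ".join(steps) + "."

def ml_narrative(prediction: str, top_feature: str, weight: float) -> str:
    """Explain a model prediction via its most influential input feature
    (a stand-in for a real attribution method)."""
    return (f"The model predicted '{prediction}' mainly because "
            f"'{top_feature}' contributed {weight:.0%} of the evidence.")

events = [SensorEvent(3.0, "opened the medicine cabinet"),
          SensorEvent(1.0, "entered the bathroom")]
print(domain_narrative(events))
# First the user entered the bathroom, then opened the medicine cabinet.
print(ml_narrative("medication intake", "cabinet door sensor", 0.62))
# The model predicted 'medication intake' mainly because
# 'cabinet door sensor' contributed 62% of the evidence.
```

The link between the two narrative kinds is visible even in this toy version: the activity labels that feed the domain narrative are themselves the output of learned recognition models, whose predictions the ML narrative would in turn explain.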


Research areas: Agenten und Simulierte Realität (ASR), Interaktives Maschinelles Lernen (IML), Smarte Daten und Wissensdienste (SDS), Eingebettete Intelligenz (EI), Sprachtechnologie (SLT), Sprachtechnologie und Multilingualität (MLT), Algorithmic Business and Production (ABP)

Publications about the project

  1. LLMCheckup: Conversational Examination of Large Language Models via Interpretability Tools and Self-Explanations

    Qianli Wang; Tatiana Anikina; Nils Feldhus; Josef van Genabith; Leonhard Hennig; Sebastian Möller

    In: Su Lin Blodgett; Amanda Cercas Curry; Sunipa Dev; Michael Madaio; Ani Nenkova; Diyi Yang; Ziang Xiao (eds.). Proceedings of the Third Workshop on Bridging Human-Computer Interaction and Natural Language Processing (HCI+NLP). NAACL Workshop on Bridging Human-Computer Interaction and Natural Language Processing (HCI+NLP-2024), located at North American Chapter of the Association for…


BMBF - Federal Ministry of Education, Science, Research and Technology