
Speech and Language Technology

XplaiNLP

Objective

We create Intelligent Decision Support Systems (IDSS) by researching the entire cycle, from the development and implementation of large language models (LLMs) to the design of user interfaces that present model results and metadata in a form meaningful to humans. This includes implementing explanations and transparency features for NLP-based predictions.

Topics:

  • Detection of fake news and hate speech (text and image)
  • Claim extraction, claim verification, and argument search
  • Downstream NLP tasks and retrieval-augmented generation (RAG)
  • Detection of biases in datasets and models
  • Human-computer interaction
  • Explainable AI
  • AI regulation (analysis of the impact of the AI Act, GDPR, DSA, and DA on data scraping and LLMs)

Application Areas

We primarily work with text from the fields of news and healthcare.


Approach & Use Cases

In the XplaiNLP group, we develop and apply LLMs across three main use cases:

1. Detection of False Information and Disinformation:

  • Development and application of LLMs for detecting fake news and hate speech
  • Development and use of knowledge databases with known fabrications and facts
  • Use of retrieval-augmented generation (RAG) to support human fact-checking (see the sketch after this list)
  • Factuality analysis of generated content for summarization or knowledge enrichment
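
To make the RAG-assisted fact-checking workflow concrete, here is a minimal Python sketch. The evidence store, the bag-of-words embedding, and the prompt format are illustrative stand-ins, not the group's actual system: a production pipeline would use a neural sentence encoder, a vector index, and a real LLM call to produce the verdict.

```python
# Minimal retrieval-augmented fact-checking sketch (illustrative only).
# A real system would use a neural embedding model and an LLM verdict;
# here both are stubbed so the pipeline structure stays runnable.
from collections import Counter
import math

EVIDENCE = [
    "The WHO declared COVID-19 a pandemic on 11 March 2020.",
    "5G networks transmit radio waves and cannot spread viruses.",
    "The Eiffel Tower is located in Paris, France.",
]

def embed(text: str) -> Counter:
    """Toy bag-of-words embedding; stand-in for a sentence encoder."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(claim: str, k: int = 2) -> list[str]:
    """Rank stored evidence by similarity to the claim."""
    q = embed(claim)
    ranked = sorted(EVIDENCE, key=lambda e: cosine(q, embed(e)), reverse=True)
    return ranked[:k]

def build_prompt(claim: str, evidence: list[str]) -> str:
    """Prompt an LLM would receive; the model call itself is out of scope."""
    joined = "\n".join(f"- {e}" for e in evidence)
    return (f"Claim: {claim}\nEvidence:\n{joined}\n"
            "Does the evidence support, refute, or not address the claim?")

if __name__ == "__main__":
    claim = "5G towers spread viruses."
    print(build_prompt(claim, retrieve(claim)))
```

The key design point, which carries over to the real setting, is that retrieved evidence is shown alongside the claim rather than hidden inside the model, so a human fact-checker can inspect what the verdict is based on.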

2. Medical Data and Privacy:

  • Development and application of LLMs for anonymizing text-based medical records for open-source publication
  • LLM-based text anonymization of data in other sensitive use cases, likewise enabling open-source publication (a simplified sketch of the redaction step follows this list)
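
The redaction step can be illustrated with a deliberately simple, rule-based stand-in. In the group's work an LLM or NER model detects identifiers; the regex patterns, entity labels, and example record below are purely hypothetical.

```python
# Minimal rule-based anonymization sketch (illustrative only).
# In practice an LLM or NER model would detect identifiers;
# here simple regexes stand in so the redaction flow stays runnable.
import re

PATTERNS = {
    "DATE": re.compile(r"\b\d{1,2}[./]\d{1,2}[./]\d{2,4}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s/-]{7,}\d\b"),
    "NAME": re.compile(r"\b(?:Dr|Mr|Ms|Mrs)\.?\s+[A-Z][a-z]+\b"),
}

def anonymize(text: str) -> str:
    """Replace detected identifiers with typed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

record = "Dr. Meier saw the patient on 03.05.2023; callback +49 30 1234567."
print(anonymize(record))
# -> "[NAME] saw the patient on [DATE]; callback [PHONE]."
```

Typed placeholders such as [DATE] are preferable to blanket deletion because they preserve the record's structure for downstream research use while removing the identifying content.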

3. Explainable AI:

  • Development of explanations (such as post-hoc explanations, causal reasoning, and Chain-of-Thought prompting) for transparent AI models
  • Prioritization of human-centered XAI, so that explanations can be personalized to user needs at different levels of abstraction and detail
  • Development of methods to verify model faithfulness, ensuring that explanations and predictions accurately reflect the model's actual internal decision-making process (a minimal post-hoc example follows this list)
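
As one concrete instance of a post-hoc explanation, the sketch below computes occlusion-based token importances: each token's importance is the change in the classifier's score when that token is removed. The classifier here is a toy stub standing in for an LLM-based detector, and occlusion is one standard technique, not necessarily the group's specific method.

```python
# Minimal post-hoc explanation sketch: occlusion-based token importance.
# The classifier is a stub; in practice it would be an LLM-based
# fake-news or hate-speech detector. All names here are illustrative.

def score(text: str) -> float:
    """Stub classifier: fraction of 'suspicious' trigger words."""
    triggers = {"shocking", "secret", "exposed"}
    tokens = text.lower().split()
    return sum(t in triggers for t in tokens) / max(len(tokens), 1)

def occlusion_importance(text: str) -> list[tuple[str, float]]:
    """Importance of each token = score drop when that token is removed."""
    tokens = text.split()
    base = score(text)
    importances = []
    for i in range(len(tokens)):
        reduced = " ".join(tokens[:i] + tokens[i + 1:])
        importances.append((tokens[i], base - score(reduced)))
    return importances

for token, delta in occlusion_importance("Shocking secret cure exposed by doctors"):
    print(f"{token:10s} {delta:+.3f}")
```

Because occlusion only queries the model's inputs and outputs, it applies to black-box models; checking whether such attributions match the model's internal computation is exactly the faithfulness question raised in the last bullet above.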

Beyond model development, the XplaiNLP group also focuses on the transparent and effective application of LLMs to the above-mentioned use cases:

  • Development and validation of IDSS for detecting fake news
  • Implementation and validation of AI-based human-centered explanations to enhance transparency and trust in system decisions
  • Analysis of legal requirements based on the AI Act, DSA, DA, and GDPR to ensure compliance in IDSS design and LLM implementation