
Project | IUML


Interpretability in Unsupervised Machine Learning

To foster trust, accountability, and informed decision-making, it is crucial to develop methods for understanding and explaining how machine learning models arrive at their conclusions, thus accelerating industrial adoption and ensuring the responsible use of AI algorithms.

We propose an explainable decision support tool for tax fraud detection to overcome the limitations of the current automated, non-explainable decision support systems used by tax consultancy and financial accounting organizations. The tool combines Explainable AI (XAI) and Large Language Models (LLMs) to explain ML- and DL-based automated anomaly (fraud) detection.
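As a minimal sketch of the pipeline described above (not the project's actual implementation), the following toy example flags anomalous transactions, attributes each flag to the features that drove it (a stand-in for XAI attribution methods such as SHAP), and renders a template-based plain-language explanation (a stand-in for the LLM-generated narrative). All names, the z-score detector, and the 1.5-sigma threshold are illustrative assumptions.

```python
# Hedged illustration only: a toy "detect -> attribute -> explain" pipeline.
from statistics import mean, stdev

def zscores(values):
    """Per-feature z-scores; a trivial proxy for an unsupervised detector."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s if s else 0.0 for v in values]

def explain_anomalies(records, feature_names, threshold=1.5):
    """Flag records where any feature's z-score exceeds the (arbitrary)
    threshold, and report which features drove the decision."""
    cols = list(zip(*records))
    scores = [zscores(col) for col in cols]
    reports = []
    for i in range(len(records)):
        drivers = [(feature_names[j], scores[j][i])
                   for j in range(len(cols))
                   if abs(scores[j][i]) > threshold]
        if drivers:
            # Template-based narrative; in the proposed tool, an LLM would
            # turn the XAI attributions into such an explanation.
            text = "; ".join(f"{name} deviates by {z:+.1f} sigma"
                             for name, z in drivers)
            reports.append((i, text))
    return reports

transactions = [
    (100.0, 2), (105.0, 3), (98.0, 2), (102.0, 3), (5000.0, 40),
]
for idx, why in explain_anomalies(transactions, ["amount", "items"]):
    print(f"transaction {idx}: flagged because {why}")
```

In the proposed tool, the z-score step would be replaced by an ML/DL anomaly detector, the attribution step by a proper XAI method, and the template by an LLM that verbalizes the attributions for tax auditors.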

Funding Authorities

BMBF - Federal Ministry of Education and Research

01IS23064