To foster trust, accountability, and informed decision-making, it is crucial to develop methods that explain how these models reach their conclusions, thereby accelerating industrial adoption and ensuring the responsible use of AI algorithms.
We propose an explainable decision support tool for tax fraud detection that overcomes the limitations of the opaque automated decision support systems currently used by tax consultancy and financial accounting organizations. The tool combines Explainable AI (XAI) and Large Language Models (LLMs) to explain the outputs of ML- and DL-based automated anomaly (fraud) detection.
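To illustrate the general XAI-plus-LLM pattern described above, the following minimal sketch flags anomalous transactions with a simple z-score detector and turns the detector's raw score into a natural-language prompt that could be sent to an LLM for auditor-facing narration. All names (`zscore_anomalies`, `build_llm_prompt`), the detection method, and the sample data are illustrative assumptions, not the paper's actual implementation.

```python
from statistics import mean, stdev

def zscore_anomalies(amounts, threshold=2.0):
    """Flag transactions whose amount deviates strongly from the mean.

    Stand-in for the ML/DL anomaly detector; returns (index, z-score) pairs.
    """
    mu, sigma = mean(amounts), stdev(amounts)
    return [(i, (a - mu) / sigma) for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > threshold]

def build_llm_prompt(txn_id, score, amount):
    """Turn the detector's raw score into a prompt an LLM could narrate.

    In a full system this prompt would be sent to an LLM; here we only
    construct it, keeping the example self-contained.
    """
    return (
        f"Transaction {txn_id} was flagged as a potential fraud case. "
        f"Its amount ({amount:.2f}) lies {score:.1f} standard deviations "
        f"from the portfolio mean. Explain this finding to a tax auditor "
        f"in plain language."
    )

# Hypothetical transaction amounts; index 5 is a clear outlier.
amounts = [100.0, 102.5, 98.0, 101.0, 99.5, 1000.0, 100.5, 97.0]
flagged = zscore_anomalies(amounts)
for idx, score in flagged:
    print(build_llm_prompt(idx, score, amounts[idx]))
```

The design choice worth noting is the separation of concerns: the detector produces scores, a lightweight explanation layer translates them into structured evidence, and the LLM (stubbed here) only rephrases that evidence for the end user, reducing the risk of the LLM inventing unsupported reasons.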