How can the decision-making of neural networks be made comprehensible when analysing complex time series? The German Research Center for Artificial Intelligence (DFKI) is addressing this central challenge of explainability with the AI toolbox iXplain, an innovative solution for visualising and transparently presenting algorithmic decision structures.
The iX.Viz tool offers deep insight into the inner workings of neural networks that operate on multivariate time series. An intuitive dashboard graphically highlights critical input variables and decision-relevant patterns. iX.Viz thus not only enables the identification of key influencing factors, but also contributes significantly to quality assurance and error analysis in data-driven processes.
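The internals of iX.Viz are not detailed here, but a minimal sketch can illustrate the kind of attribution such a dashboard might build on. The example below computes a gradient-based saliency map for a toy PyTorch time-series classifier; the model, function names, and data are illustrative assumptions, not DFKI code.

```python
import torch
import torch.nn as nn

# Illustrative stand-in for a time-series classifier; iX.Viz's actual
# models and API are not public.
class TinyTSClassifier(nn.Module):
    def __init__(self, n_channels: int, n_classes: int):
        super().__init__()
        self.conv = nn.Conv1d(n_channels, 16, kernel_size=5, padding=2)
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):                  # x: (batch, channels, time)
        h = torch.relu(self.conv(x))
        return self.head(h.mean(dim=-1))   # global average pooling

def saliency(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Gradient of the predicted class score w.r.t. each input value.

    Returns a (channels, time) map: large values mark the input
    variables and time steps that most influenced the decision.
    """
    x = x.clone().requires_grad_(True)
    score = model(x.unsqueeze(0)).squeeze(0)
    score[score.argmax()].backward()
    return x.grad.abs()

model = TinyTSClassifier(n_channels=3, n_classes=2)
series = torch.randn(3, 128)               # 3 sensors, 128 time steps
attribution = saliency(model, series)
top_channel = attribution.sum(dim=-1).argmax().item()
print(f"Most decision-relevant input variable: sensor {top_channel}")
```

Summing such a map over time gives a per-variable importance score, which is the sort of quantity a dashboard can plot to highlight key influencing factors.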
With iX.Tell, DFKI extends the functional scope of iX.Viz to include a linguistic interpretation layer. The technology calculates and analyses statistical metrics for anomaly detection and translates them into comprehensible, non-technical explanations. This functionality not only makes it easier for business users to interpret complex relationships, but also enables novices to understand the logic of neural decisions.
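How iX.Tell derives its explanations is not specified here; as a hedged illustration, the sketch below pairs a simple z-score anomaly metric with a templated plain-language summary. All function names, the sensor data, and the threshold are illustrative assumptions.

```python
import numpy as np

def zscore_anomalies(values: np.ndarray, threshold: float = 3.0):
    """Flag points whose z-score exceeds the threshold."""
    mean, std = values.mean(), values.std()
    z = (values - mean) / std
    return [(i, z[i]) for i in np.flatnonzero(np.abs(z) > threshold)]

def explain(sensor: str, values: np.ndarray, threshold: float = 3.0) -> str:
    """Translate the statistical finding into a non-technical sentence."""
    hits = zscore_anomalies(values, threshold)
    if not hits:
        return f"Sensor '{sensor}' behaved normally throughout the period."
    i, z = max(hits, key=lambda h: abs(h[1]))  # most extreme anomaly
    direction = "above" if z > 0 else "below"
    return (f"Sensor '{sensor}' showed {len(hits)} unusual reading(s); "
            f"the most striking, at time step {i}, lay far {direction} "
            f"its typical range ({abs(z):.1f} standard deviations).")

rng = np.random.default_rng(0)
temps = rng.normal(20.0, 0.5, size=200)
temps[120] = 26.0                           # injected fault
print(explain("temperature", temps))
```

The point of such a layer is the last step: the statistical evidence stays intact, but the user reads a sentence rather than a z-score.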
The iXplain demonstrator shows how explainable AI methods can improve the traceability of machine decision-making processes and build trust in them. As a DFKI research project, it provides valuable insights for the use of transparent AI models in quality analysis and can serve as a basis for further developments in this area.
Dr. Tobias Wirth
Research Department Smarte Daten und Wissensdienste
tobias.wirth@dfki.de
Dr. Dominique Mercier
Research Department Smarte Daten und Wissensdienste
dominique.mercier@dfki.de