
Publication

On Explainability in AI-Solutions: A Cross-Domain Survey

Simon D. Duque Anton; Daniel Schneider; Hans D. Schotten
In: Mario Trapp; Erwin Schoitsch; Jérémie Guiochet; Friedemann Bitsch (Eds.). Computer Safety, Reliability, and Security. SAFECOMP 2022 Workshops. International Workshop on Underpinnings for Safe Distributed AI (USDAI-2022), located at SAFECOMP, September 6-9, Garching b. München, Germany, Pages 235-246, ISBN 978-3-031-14862-0, Springer International Publishing, 2022.

Abstract

Artificial Intelligence (AI) increasingly shows its potential to outperform predicate logic algorithms and human control alike. By automatically deriving a system model, AI algorithms learn relations in data that are not detectable by humans. This great strength, however, also makes the use of AI methods dubious: the more complex a model, the more difficult it is for a human to understand the reasoning behind its decisions. Since fully automated AI systems are currently scarce, every algorithm has to provide its reasoning to human operators. For data engineers, metrics such as accuracy and sensitivity are sufficient. However, if models interact with non-experts, explanations have to be understandable.
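The metrics named in the abstract follow the standard definitions over the binary confusion matrix; a minimal illustrative sketch in Python (the function name and example labels are hypothetical, not from the paper):

```python
from typing import Sequence


def accuracy_and_sensitivity(y_true: Sequence[int], y_pred: Sequence[int]) -> tuple[float, float]:
    """Compute accuracy and sensitivity (recall) for binary labels, where 1 is the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # true negatives
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives
    accuracy = (tp + tn) / len(y_true)                    # share of all predictions that are correct
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0    # true positive rate (recall)
    return accuracy, sensitivity


# Example: a model that misses one of two positive cases
print(accuracy_and_sensitivity([1, 1, 0, 0], [1, 0, 0, 0]))  # (0.75, 0.5)
```

Such scores summarize how often a model is right, but they say nothing about why a particular decision was made, which is the gap explainability methods surveyed in the paper aim to close.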
