Publication

AI Explainability: Embedding Conceptual Models

Wolfgang Maaß; Arturo Castellanos; Monica Tremblay; Roman Lukyanenko; Veda C. Storey
In: Proceedings of the International Conference on Information Systems (ICIS 2022), December 2022.

Abstract

Artificial intelligence, especially machine learning, is rapidly transforming business operations and entire industries. However, because many complex machine learning models are considered black boxes, both the adoption of and continued reliance on artificial intelligence depend on the ability to understand how these automated models work – a challenge addressed by explainable AI. We propose an approach to explainability that leverages conceptual models. Conceptual models are commonly used to capture and integrate domain rules and information requirements for the development of databases and other information technology components. Specifically, we propose a Model Embedding Method (MEM), based on conceptual models, for embedding machine learning models into conceptual models and thereby increasing their explainability. We illustrate the method with an application to publicly available mortgage data, in which a machine learning model predicts whether a mortgage is approved. We show how explainability can be improved by embedding machine learning models into the domain knowledge of a conceptual model – a representation of a mental model of the real world rather than of algorithms. Our results suggest that such domain knowledge can help address some of the challenges of the explainability problem in AI.