
Publication

Causing Intended Effects in Collaborative Decision-Making

André Meyer-Vitali; Wico Mulder
In: Pradeep K. Murukannaiah; Teresa Hirzle (Eds.). Proceedings of the Workshops at the Second International Conference on Hybrid Human-Artificial Intelligence. International Workshop on Multidisciplinary Perspectives on Human-AI Team Trust (MULTITTRUST 2023), located at the Second International Conference on Hybrid Human-Artificial Intelligence (HHAI 2023), June 26, 2023, München, Germany, pages 137-144, CEUR Workshop Proceedings, Vol. 3456, CEUR-WS.org, August 2023.

Abstract

When humans and software agents collaborate on decisions in hybrid teams, they typically share knowledge and goals based on their individual intentions. Goals can be modelled as the effects caused by events or actions. To decide and plan which actions to take, it is necessary to understand which actions or events cause the intended effects. In other words, we consider causal inference in reverse: instead of asking whether certain actions or events indeed cause corresponding effects, we establish and use a causal model to determine the appropriate cause or causes, such that the causal chain results in the desired and intended outcomes. For example, your goal may be to arrive at a destination at a given time. By reasoning backwards, piece by piece, over the actions required to get you there, a causal path can be constructed that determines the departure time and the modes of transport along the route. Thanks to shared intentions and causal models, humans and agents can mutually trust each other regarding their actions and outcomes.
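As a minimal illustration of this reverse causal reasoning, the sketch below works through the abstract's travel example: given the intended effect (arrival by a deadline), it walks a simple causal chain backwards to determine the required causes (the start time and mode of each leg). All names here (Leg, plan_backwards, the example route) are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Leg:
    mode: str          # action: the mode of transport taken on this leg
    origin: str
    destination: str
    duration_min: int  # effect of the action: time consumed

def plan_backwards(route: list[Leg], arrival_min: int) -> list[tuple[int, Leg]]:
    """Walk the causal chain from the intended effect (arrival time)
    back to the required causes (start time of each leg)."""
    schedule = []
    t = arrival_min
    for leg in reversed(route):      # reverse causal inference: effect -> cause
        t -= leg.duration_min        # this leg must start early enough to cause arrival at t
        schedule.append((t, leg))
    return list(reversed(schedule))  # present the causes in chronological order

# Hypothetical route; the intended effect is arrival at the venue by 09:00.
route = [
    Leg("walk",  "home",    "station", 10),
    Leg("train", "station", "city",    45),
    Leg("tram",  "city",    "venue",   15),
]

for start, leg in plan_backwards(route, arrival_min=9 * 60):
    print(f"{start // 60:02d}:{start % 60:02d}  {leg.mode}: {leg.origin} -> {leg.destination}")
```

Running this prints a departure time of 07:50 for the first leg, i.e. the causes (actions and their start times) that jointly produce the intended effect.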
