Publication
"Do Not Disturb My Circles!" Identifying the Type of Counterfactual at Hand (Short Paper)
Moritz Willig; Matej Zecevic; Kristian Kersting
In: Philipp Cimiano; Anette Frank; Michael Kohlhase; Benno Stein (Eds.). Robust Argumentation Machines - First International Conference, RATIO 2024, Bielefeld, Germany, June 5-7, 2024, Proceedings. International Conference on Recent Advances in Robust Argumentation Machines (RATIO), Pages 266-275, Lecture Notes in Computer Science, Vol. 14638, Springer, 2024.
Abstract
When phenomena of interest are in need of explanation, we are often in search of the underlying root causes. Causal inference provides tools for identifying these root causes: by performing interventions on suitably chosen variables, we can observe downstream effects on the outcome variable of interest. Argumentation, on the other hand, as an approach to attributing observed outcomes to specific factors, naturally lends itself as a tool for determining the most plausible explanation. We can further improve the robustness of such explanations by measuring their likelihood within a mutually agreed-upon causal model. For this purpose, one of two in principle distinct types of counterfactual explanation is typically used: interventional counterfactuals, which treat changes as deliberate interventions to the causal system, and backtracking counterfactuals, which attribute changes exclusively to exogenous factors. Although both frameworks share the common goal of inferring true causal factors, they differ fundamentally in their conception of counterfactuals. Here, we present the first approach that decides when to expect interventional and when to opt for backtracking counterfactuals.
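To make the distinction concrete, the following is a minimal sketch in Python; it is not code from the paper, and the three-variable chain model, its mechanisms, and the observed values are assumptions made up for illustration. It evaluates the same counterfactual query, "had Y been 3", under both conceptions: the interventional reading severs Y from its mechanism and leaves Y's ancestors untouched, while the backtracking reading keeps all mechanisms intact and revises the exogenous conditions instead, so upstream variables may change as well.

```python
# Illustrative sketch (not code from the paper): a toy chain SCM
#   X := U_X          (upstream cause)
#   Y := X + U_Y      (variable the counterfactual query is about)
#   Z := 2 * Y + U_Z  (downstream effect)
# All mechanisms and observed values are assumptions for this example.

def forward(u_x, u_y, u_z):
    """Evaluate the structural equations in causal order."""
    x = u_x
    y = x + u_y
    z = 2 * y + u_z
    return x, y, z

# Factual world: exogenous terms U_X=1, U_Y=0, U_Z=0 yield X=1, Y=1, Z=2.
u_x, u_y, u_z = 1.0, 0.0, 0.0
x_f, y_f, z_f = forward(u_x, u_y, u_z)

# --- Interventional counterfactual: "had Y been 3" -----------------------
# Treat the change as a deliberate intervention do(Y=3): Y is cut off from
# its mechanism, exogenous terms stay fixed, and only descendants of Y
# are recomputed.
y_cf = 3.0
x_int = u_x              # ancestors of Y are untouched: X stays 1
z_int = 2 * y_cf + u_z   # downstream effect is recomputed: Z becomes 6

# --- Backtracking counterfactual: "had Y been 3" -------------------------
# Keep all mechanisms intact and attribute the change to different
# exogenous conditions; here we backtrack it into U_X so that Y=3 arises
# naturally from the unchanged structural equations.
u_x_back = y_cf - u_y    # solve X + U_Y = 3 for the exogenous term
x_back, y_back, z_back = forward(u_x_back, u_y, u_z)

print(f"factual:        X={x_f}, Y={y_f}, Z={z_f}")          # 1, 1, 2
print(f"interventional: X={x_int}, Y={y_cf}, Z={z_int}")     # 1, 3, 6
print(f"backtracking:   X={x_back}, Y={y_back}, Z={z_back}") # 3, 3, 6
```

In this toy model the downstream effect Z ends up the same in both counterfactual worlds, but the upstream cause X differs: the interventional reading keeps X at its factual value, whereas the backtracking reading explains the change in Y through a different value of its cause X.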
