Publication
Structural causal models reveal confounder bias in linear program modelling
Matej Zecevic; Devendra Singh Dhami; Kristian Kersting
In: Machine Learning, Vol. 113, No. 3, Pages 1329-1349, Springer, 2024.
Abstract
Recent years have seen extensive research on adversarial attacks, especially on deep neural networks. With this work we pose and investigate the question of whether the phenomenon might be more general in nature, that is, whether adversarial-style attacks exist outside classical classification tasks. Specifically, we investigate optimization problems, as they constitute a fundamental part of modern AI research. To this end, we consider the base class of optimizers, namely Linear Programs (LPs). In our initial attempt at a naïve mapping between the formalism of adversarial examples and LPs, we quickly identify the key ingredients missing for a reasonable notion of adversarial examples for LPs. Intriguingly, Pearl's formalism of causality allows for the right description of adversarial-like examples for LPs. Characteristically, we show the direct influence of the Structural Causal Model (SCM) on the subsequent LP optimization, which ultimately exposes a notion of confounding in LPs (inherited from said SCM) that allows for adversarial-style attacks. We provide a formal general proof alongside existence proofs of such intriguing SCM-based LP parameterizations for three combinatorial problems, namely Linear Assignment, Shortest Path, and a real-world energy systems problem.
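The core idea can be illustrated with a toy sketch (not taken from the paper): a structural causal model in which a binary confounder Z drives both an observed input X and the cost matrix of a 2x2 Linear Assignment problem. All structural equations and coefficients below are hypothetical choices for illustration. Conditioning on X (observation) and setting X by intervention, do(X=1), then yield different LP optima, which is the confounding effect the abstract describes.

```python
# Toy SCM -> LP pipeline (illustrative, not the paper's construction):
# a confounder Z influences both the input X and the LP cost matrix,
# so observing X=1 and intervening do(X=1) lead to different optima.
import numpy as np
from scipy.optimize import linear_sum_assignment

def cost_matrix(x, z):
    """Structural equation for the assignment costs (illustrative)."""
    base = np.array([[1.0, 2.0], [2.0, 1.0]])
    conf = np.array([[0.0, -4.0], [-4.0, 0.0]])   # effect of confounder Z
    treat = np.array([[-2.0, 0.0], [0.0, -2.0]])  # effect of input X
    return base + z * conf + x * treat

# Observation: the SCM sets X := Z, so seeing X = 1 implies Z = 1.
obs_cost = cost_matrix(x=1, z=1)

# Intervention do(X = 1): the X := Z edge is cut, Z keeps its marginal
# P(Z = 1) = 0.5, so we take the expected cost over Z (costs are linear in z).
int_cost = cost_matrix(x=1, z=0.5)

_, col_obs = linear_sum_assignment(obs_cost)  # solves the assignment LP
_, col_int = linear_sum_assignment(int_cost)
print("optimal assignment under observation:", col_obs)  # -> [1 0]
print("optimal assignment under do(X=1):   ", col_int)   # -> [0 1]
```

The two regimes disagree on the optimal assignment even though the same value X=1 is plugged into the structural equation, mirroring how confounding inherited from an SCM can mislead a downstream LP.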
