
Publication

Lifted Message Passing as Reparametrization of Graphical Models

Martin Mladenov; Amir Globerson; Kristian Kersting
In: Nevin L. Zhang; Jin Tian (Eds.). Proceedings of the Thirtieth Conference on Uncertainty in Artificial Intelligence (UAI-2014), July 23-27, Quebec City, Quebec, Canada, pages 603-612, AUAI Press, 2014.

Abstract

Lifted inference approaches can considerably speed up probabilistic inference in Markov random fields (MRFs) with symmetries. Given evidence, they essentially form a lifted, i.e., reduced, factor graph by grouping together indistinguishable variables and factors. Typically, however, lifted factor graphs are not amenable to off-the-shelf message passing (MP) approaches, and hence require one either to use generic optimization tools, which are slow for these problems, or to design modified MP algorithms. Here, we demonstrate that the reliance on modified MP can be eliminated for the class of MP algorithms arising from MAP-LP relaxations of pairwise MRFs. Specifically, we show that a given MRF induces a whole family of MRFs of different sizes sharing essentially the same MAP-LP solution. We then give an efficient algorithm that computes, from this family, the smallest member that can be solved using off-the-shelf MP. This incurs no major overhead: the selected MRF is at most twice as large as the fully lifted factor graph. This has several implications for lifted inference. For instance, running MPLP yields the first convergent lifted MP approach for MAP-LP relaxations. Doing so can be faster than solving the MAP-LP using lifted linear programming. Most importantly, it suggests a novel view on lifted inference: it can be viewed as standard inference in a reparametrized model.
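For readers less familiar with the object the abstract refers to, the following is a standard textbook sketch of the pairwise MAP-LP relaxation over the local polytope; the notation is ours, not taken from the paper. For a pairwise MRF on a graph (V, E) with unary potentials \theta_i and pairwise potentials \theta_{ij}, the relaxation is

\max_{\mu \geq 0} \; \sum_{i \in V} \sum_{x_i} \theta_i(x_i)\, \mu_i(x_i) \;+\; \sum_{(i,j) \in E} \sum_{x_i, x_j} \theta_{ij}(x_i, x_j)\, \mu_{ij}(x_i, x_j)

subject to the local consistency constraints

\sum_{x_j} \mu_{ij}(x_i, x_j) = \mu_i(x_i) \quad \forall (i,j) \in E,\; x_i, \qquad \sum_{x_i} \mu_i(x_i) = 1 \quad \forall i \in V.

MP algorithms such as MPLP perform block-coordinate descent on the dual of this LP; the paper's contribution is to reparametrize the model so that this standard machinery applies to the lifted problem unmodified.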
