
Publication

Lifted Online Training of Relational Models with Stochastic Gradient Methods

Babak Ahmadi; Kristian Kersting; Sriraam Natarajan
In: Peter A. Flach; Tijl De Bie; Nello Cristianini (Eds.). Machine Learning and Knowledge Discovery in Databases - European Conference. European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD-2012), September 24-28, Bristol, United Kingdom, pages 585-600, Lecture Notes in Computer Science, Vol. 7523, Springer, 2012.

Abstract

Lifted inference approaches have rendered large, previously intractable probabilistic inference problems quickly solvable by employing symmetries to handle whole sets of indistinguishable random variables. Still, in many if not most situations training relational models will not benefit from lifting: symmetries within models easily break since variables become correlated by virtue of depending asymmetrically on evidence. An appealing idea for such situations is to train and recombine local models. This breaks long-range dependencies and allows us to exploit lifting within and across the local training tasks. Moreover, it naturally paves the way for online training of relational models. Specifically, we develop the first lifted stochastic gradient optimization method with gain vector adaptation, which processes each lifted piece one after the other. On several datasets, the resulting optimizer converges to a solution of the same quality more than an order of magnitude faster, simply because, unlike batch training, it starts optimizing long before having seen the entire mega-example even once.
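To make the piecewise online training concrete, below is a minimal Python sketch of stochastic gradient descent over local pieces with a per-parameter gain vector. Everything here is an illustrative assumption: the function name `train_piecewise_sgd`, the representation of a lifted piece as a gradient callable, and the sign-agreement gain rule (a common gain-adaptation heuristic) are stand-ins; the paper's exact gain-vector update and lifted piece construction are not reproduced here.

```python
import numpy as np

def train_piecewise_sgd(pieces, num_params, eta0=0.1, mu=0.01, num_epochs=5):
    """Piecewise SGD with per-parameter gain adaptation (illustrative sketch).

    `pieces` is a list of callables; pieces[i](theta) returns the gradient
    of the i-th local training objective at parameter vector theta. In the
    lifted setting, each piece would stand for a whole class of
    indistinguishable local models, so one gradient serves many groundings.
    """
    theta = np.zeros(num_params)
    gains = np.full(num_params, eta0)   # one adaptive gain per parameter
    prev_grad = np.zeros(num_params)

    for _ in range(num_epochs):
        for piece_grad in pieces:       # process one lifted piece at a time
            g = piece_grad(theta)
            # Grow a gain when consecutive gradients agree in sign,
            # shrink it when they disagree (assumed heuristic update).
            gains *= np.where(prev_grad * g > 0, 1.0 + mu, 1.0 / (1.0 + mu))
            theta -= gains * g          # update long before a full data pass
            prev_grad = g
    return theta

# Toy usage: each "piece" is a quadratic pulling theta toward a local target.
targets = [np.array([1.0, -2.0]), np.array([3.0, 0.5])]
pieces = [lambda th, t=t: th - t for t in targets]
theta = train_piecewise_sgd(pieces, num_params=2)
```

Because the parameters are updated after every piece, the optimizer makes progress long before it has seen the entire mega-example, which is the source of the speedup the abstract reports.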
