
Publication

Yes we can: simplex volume maximization for descriptive web-scale matrix factorization

Christian Thurau; Kristian Kersting; Christian Bauckhage
In: Jimmy X. Huang; Nick Koudas; Gareth J. F. Jones; Xindong Wu; Kevyn Collins-Thompson; Aijun An (Eds.). Proceedings of the 19th ACM Conference on Information and Knowledge Management. ACM International Conference on Information and Knowledge Management (CIKM-2010), October 26-30, Toronto, Ontario, Canada, pages 1785-1788, ACM, 2010.

Abstract

Matrix factorization methods are among the most common techniques for detecting latent components in data. Popular examples include the Singular Value Decomposition or Non-negative Matrix Factorization. Unfortunately, most methods suffer from high computational complexity and therefore do not scale to massive data. In this paper, we present a linear time algorithm for the factorization of gigantic matrices that iteratively yields latent components. We consider a constrained matrix factorization such that the latent components form a simplex that encloses most of the remaining data. The algorithm maximizes the volume of that simplex and thereby reduces the displacement of data from the space spanned by the latent components. Hence, it also lowers the Frobenius norm, a common criterion for matrix factorization quality. Our algorithm is efficient, well-grounded in distance geometry, and easily applicable to matrices with billions of entries. In addition, the resulting factors allow for an intuitive interpretation of data: every data point can now be expressed as a convex combination of the most extreme and thereby often most descriptive instances in a collection of data. Extensive experimental validations on web-scale data, including 80 million images and 1.5 million Twitter tweets, demonstrate superior performance compared to related factorization or clustering techniques.
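To make the idea concrete, the following is a minimal Python sketch, not the authors' implementation: it greedily selects data columns whose simplex has large volume (measured with the Cayley-Menger determinant from distance geometry) and then expresses every data point as a convex combination of those extreme columns. The function names (simplex_volume_sq, sivm_select, convex_coefficients), the penalty parameter rho, and the exhaustive candidate scan are illustrative assumptions; the paper's actual algorithm achieves the selection in linear time.

```python
import math
import numpy as np
from scipy.optimize import nnls


def simplex_volume_sq(points):
    """Squared volume of the simplex whose vertices are the rows of
    `points`, computed via the Cayley-Menger determinant."""
    k = points.shape[0] - 1                        # simplex dimension
    d2 = np.square(points[:, None, :] - points[None, :, :]).sum(axis=-1)
    cm = np.ones((k + 2, k + 2))                   # Cayley-Menger matrix
    cm[0, 0] = 0.0
    cm[1:, 1:] = d2
    coeff = (-1.0) ** (k + 1) / (2.0 ** k * math.factorial(k) ** 2)
    return coeff * np.linalg.det(cm)


def sivm_select(X, num_vertices):
    """Greedily pick column indices of X (features x samples) so that the
    simplex spanned by the chosen columns has (approximately) maximal
    volume.  The exhaustive scan over candidates is only illustrative;
    the paper derives a linear-time selection from distance geometry."""
    first = int(np.argmax(np.linalg.norm(X - X[:, [0]], axis=0)))
    second = int(np.argmax(np.linalg.norm(X - X[:, [first]], axis=0)))
    selected = [first, second]
    while len(selected) < num_vertices:
        best_j, best_vol = -1, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            vol = simplex_volume_sq(X[:, selected + [j]].T)
            if vol > best_vol:
                best_j, best_vol = j, vol
        selected.append(best_j)
    return selected


def convex_coefficients(X, W, rho=1e3):
    """Express every column of X as a convex combination of the columns
    of W: non-negative least squares with an extra penalty row that
    pushes each coefficient vector to sum to one."""
    A = np.vstack([W, rho * np.ones((1, W.shape[1]))])
    H = np.zeros((W.shape[1], X.shape[1]))
    for i in range(X.shape[1]):
        H[:, i], _ = nnls(A, np.append(X[:, i], rho))
    return H


# Toy usage: 5-dimensional data, 4 extreme columns as latent components.
rng = np.random.default_rng(0)
X = rng.random((5, 200))
idx = sivm_select(X, num_vertices=4)
W = X[:, idx]                                      # descriptive extreme points
H = convex_coefficients(X, W)
print("Frobenius reconstruction error:", np.linalg.norm(X - W @ H))
```

Because the selected columns are actual data points, the factor W stays directly interpretable, which is the descriptive property the abstract emphasizes; the sum-to-one penalty in convex_coefficients is just one simple way to approximate the convexity constraint.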
