
Publication

Comparing Pretrained Multilingual Word Embeddings on an Ontology Alignment Task

Dagmar Gromann; Thierry Declerck
In: Proceedings of the 11th International Conference on Language Resources and Evaluation (LREC-18), May 7-12, Miyazaki, Japan. European Language Resources Association, Paris, 5/2018.

Abstract

Word embeddings capture a string's semantics and go beyond its surface form. In a multilingual setting, such embeddings need to be trained for each language, either separately or as a joint model, and the more languages are required, the more costly and time-consuming training becomes. As an alternative, pretrained word embeddings can be used to compute semantic similarities of strings in different languages. This paper compares three repositories of multilingual pretrained word embeddings against a string-matching baseline, using ontology alignment as the example scenario. The vast majority of ontology alignment methods rely on string similarity metrics; however, these frequently employ string-matching techniques that rest purely on syntactic aspects. Semantically oriented word embeddings have much to offer to ontology alignment algorithms, such as the simple Munkres algorithm used in this paper. The proposed approach produces a number of correct alignments on a non-standard data set based on embeddings from the three repositories; FastText embeddings performed best on all four languages and clearly outperformed the string-matching baseline.
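
As a rough illustration of the approach the abstract describes, the sketch below scores pairs of ontology labels by cosine similarity of averaged pretrained word vectors and derives a one-to-one alignment with the Munkres (Hungarian) algorithm via SciPy. The file names, the token-averaging step, and the choice of cosine similarity are assumptions for illustration, not details taken from the paper; cross-lingual comparison additionally assumes vectors mapped into a shared embedding space (e.g., aligned fastText vectors).

```python
# Illustrative sketch only: embedding-based ontology label alignment
# with the Munkres (Hungarian) assignment algorithm. Details such as
# file names, token averaging, and cosine similarity are assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment


def load_vectors(path, limit=200000):
    """Load word vectors from a fastText-style .vec text file."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        next(f)  # skip the "count dimension" header line
        for i, line in enumerate(f):
            if i >= limit:
                break
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors


def label_vector(label, vectors, dim=300):
    """Average the vectors of a label's tokens; zeros if none are known."""
    tokens = [t for t in label.lower().split() if t in vectors]
    if not tokens:
        return np.zeros(dim, dtype=np.float32)
    return np.mean([vectors[t] for t in tokens], axis=0)


def align(source_labels, target_labels, src_vecs, tgt_vecs):
    """One-to-one alignment maximizing total cosine similarity."""
    S = np.stack([label_vector(l, src_vecs) for l in source_labels])
    T = np.stack([label_vector(l, tgt_vecs) for l in target_labels])
    S /= np.linalg.norm(S, axis=1, keepdims=True) + 1e-9
    T /= np.linalg.norm(T, axis=1, keepdims=True) + 1e-9
    sim = S @ T.T  # pairwise cosine similarity matrix
    # linear_sum_assignment minimizes cost, so negate the similarities.
    rows, cols = linear_sum_assignment(-sim)
    return [(source_labels[r], target_labels[c], float(sim[r, c]))
            for r, c in zip(rows, cols)]


# Hypothetical usage with English and German label sets:
# en = load_vectors("cc.en.300.vec")
# de = load_vectors("cc.de.300.vec")
# print(align(["savings account"], ["Sparkonto"], en, de))
```

Because the assignment is globally optimal over the similarity matrix, each source label is matched to at most one target label, which is what distinguishes this setup from simply taking the nearest neighbor per label.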

Projects