
Publication

Cross-lingual Neural Vector Conceptualization

Lisa Raithel; Robert Schwarzenberg
In: NLPCC 2019 Workshop on Explainable Artificial Intelligence (XAI-2019), located at the 8th CCF International Conference on Natural Language Processing and Chinese Computing (NLPCC 2019), October 12, Dunhuang, China. Lecture Notes in Artificial Intelligence (LNAI), Springer, 2019.

Abstract

Recently, Neural Vector Conceptualization (NVC) was proposed as a means to interpret samples from a word vector space. For NVC, a neural model activates higher-order concepts it recognizes in a word vector instance. To this end, the model first needs to be trained with a sufficiently large instance-to-concept ground truth, which exists for only a few languages. In this work, we tackle this lack of resources with word vector space alignment techniques: We train the NVC model on a high-resource language and test it with vectors from an aligned word vector space of another language, without retraining or fine-tuning. A quantitative and qualitative analysis shows that the NVC model indeed activates meaningful concepts for unseen vectors from the aligned vector space. NVC thus becomes available for low-resource languages for which no appropriate concept ground truth exists.
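The alignment idea behind the transfer can be illustrated with a minimal sketch. A common word vector space alignment technique is orthogonal Procrustes: given seed translation pairs, find the orthogonal map from the source space into the target space, then feed mapped source vectors to a model trained only on target vectors. The toy data, dimensions, and the dot-product "concept" standing in for the NVC model below are illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 4                        # embedding dimension (toy size)
n = 10                       # number of seed translation pairs
X = rng.normal(size=(n, d))  # source-language vectors (hypothetical)
R_true = np.linalg.qr(rng.normal(size=(d, d)))[0]  # hidden rotation
Y = X @ R_true               # target-language vectors

# Orthogonal Procrustes: W = argmin ||XW - Y||_F over orthogonal W,
# solved in closed form via the SVD of X^T Y.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

# A source-space vector can now be mapped into the target space and
# scored by a model trained only there (a fixed "concept" direction
# stands in for the trained NVC model here).
concept = rng.normal(size=d)
src_vec = rng.normal(size=d)
activation = (src_vec @ W) @ concept

print(np.allclose(X @ W, Y))  # seed pairs align almost exactly
```

Because the map is orthogonal, it preserves distances and angles in the source space, which is what lets a model trained on the target space remain meaningful for aligned inputs.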
