Publication
How Robust Are Character-Based Word Embeddings in Tagging and MT Against Wrod Scramlbing or Randdm Nouse?
Georg Heigold; Stalin Varanasi; Günter Neumann; Josef van Genabith
In: Proceedings of the Conference of the Association for Machine Translation in the Americas. Conference of the Association for Machine Translation in the Americas (AMTA-2018), The 13th Biennial Conference Organized by the Association for Machine Translation, March 17-21, Boston, MA, USA, 2018.
Abstract
This paper investigates the robustness of NLP against perturbed word forms. While neural
approaches can achieve (almost) human-like accuracy for certain tasks and conditions, they
are often sensitive to small changes in the input such as non-canonical input (e.g., typos). Yet
both stability and robustness are desired properties in applications involving user-generated
content, the more so as humans easily cope with such noisy or adversarial conditions. In this
paper, we study the impact of noisy input. We consider different noise distributions (one type
of noise, combination of noise types) and mismatched noise distributions for training and
testing. Moreover, we empirically evaluate the robustness of different models (convolutional
neural networks, recurrent neural networks, non-neural models), different basic units
(characters, byte pair encoding units), and different NLP tasks (morphological tagging, machine translation).
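The two noise types named in the title can be illustrated with a minimal sketch. The function names and parameters below (`scramble_word`, `add_random_noise`, the noise probability `p`) are illustrative assumptions, not the paper's exact perturbation procedure: word scrambling is shown as shuffling a word's interior characters while keeping the first and last fixed, and random noise as replacing characters with random letters at some probability.

```python
import random

def scramble_word(word, rng):
    # Shuffle the interior characters, keeping the first and last
    # character in place (a common "word scrambling" perturbation).
    if len(word) <= 3:
        return word
    interior = list(word[1:-1])
    rng.shuffle(interior)
    return word[0] + "".join(interior) + word[-1]

def add_random_noise(word, rng, p=0.1):
    # Replace each character with a random lowercase letter with
    # probability p (a simple "random noise" perturbation).
    letters = "abcdefghijklmnopqrstuvwxyz"
    return "".join(rng.choice(letters) if rng.random() < p else c
                   for c in word)

if __name__ == "__main__":
    rng = random.Random(0)
    sentence = "character based word embeddings"
    print(" ".join(scramble_word(w, rng) for w in sentence.split()))
    print(" ".join(add_random_noise(w, rng) for w in sentence.split()))
```

Perturbations like these can be applied at training time, at test time, or both, which is what the mismatched train/test noise distributions in the study refer to.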