Publication
Semantics Derived Automatically from Language Corpora Contain Human-like Moral Choices
Sophie F. Jentzsch; Patrick Schramowski; Constantin A. Rothkopf; Kristian Kersting
In: Vincent Conitzer; Gillian K. Hadfield; Shannon Vallor (Eds.). Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. AAAI/ACM Conference on AI, Ethics, and Society (AIES-2019), January 27-28, Honolulu, HI, USA, pages 37-44, ACM, 2019.
Abstract
Artificial intelligence and machine learning are in a period of astounding growth. However, there are concerns that these technologies may be used, either with or without intention, to perpetuate the prejudice and unfairness that unfortunately characterizes many human institutions. Here we show for the first time that human-like semantic biases result from the application of standard machine learning to ordinary language, the same sort of language humans are exposed to every day. We replicate a spectrum of standard human biases as exposed by the Implicit Association Test and other well-known psychological studies. We replicate these using a widely used, purely statistical machine-learning model, namely the GloVe word embedding, trained on a corpus of text from the Web. Our results indicate that language itself contains recoverable and accurate imprints of our historic biases, whether these are morally neutral as towards insects or flowers, problematic as towards race or gender, or even simply veridical, reflecting the status quo for the distribution of gender with respect to careers or first names. These regularities are captured by machine learning along with the rest of semantics. In addition to our empirical findings concerning language, we also contribute new methods for evaluating bias in text, the Word Embedding Association Test (WEAT) and the Word Embedding Factual Association Test (WEFAT). Our results have implications not only for AI and machine learning, but also for the fields of psychology, sociology, and human ethics, since they raise the possibility that mere exposure to everyday language can account for the biases we replicate here.
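For illustration, the following is a minimal Python sketch of a WEAT-style effect size as described in the abstract: the differential association of two target word sets with two attribute word sets, computed from cosine similarities between pre-trained word vectors (e.g. GloVe). The word lists, the `emb` lookup table, and the function names are hypothetical placeholders; the full test statistic, permutation test, and WEFAT variant are specified in the paper itself.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, emb):
    """s(w, A, B): mean cosine similarity of word w to attribute set A
    minus its mean cosine similarity to attribute set B."""
    return (np.mean([cosine(emb[w], emb[a]) for a in A])
            - np.mean([cosine(emb[w], emb[b]) for b in B]))

def weat_effect_size(X, Y, A, B, emb):
    """WEAT-style effect size: difference of the mean associations of the
    two target sets X and Y, normalised by the standard deviation of the
    association scores over all target words."""
    assoc_X = [association(x, A, B, emb) for x in X]
    assoc_Y = [association(y, A, B, emb) for y in Y]
    spread = np.std([association(w, A, B, emb) for w in list(X) + list(Y)], ddof=1)
    return (np.mean(assoc_X) - np.mean(assoc_Y)) / spread

# Hypothetical usage, with `emb` a dict mapping words to GloVe vectors:
# X, Y = ["flower", "rose"], ["spider", "wasp"]        # target sets
# A, B = ["pleasant", "love"], ["unpleasant", "hate"]  # attribute sets
# print(weat_effect_size(X, Y, A, B, emb))
```

A positive score indicates that the first target set is, on average, closer in embedding space to the first attribute set than the second target set is, which is how flower/insect or career/gender associations are surfaced in this line of work.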