Publication
Measuring Spurious Correlation in Classification: ``Clever Hans'' in Translationese
Angana Borah; Daria Pylypenko; Cristina España-Bonet; Josef van Genabith
In: Ruslan Mitkov; Galia Angelova (Eds.). Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing (RANLP-2023), Varna, Bulgaria, pages 196-206, INCOMA Ltd., Shoumen, Bulgaria, 2023.
Abstract
Recent work has shown evidence of ``Clever Hans'' behavior in high-performance neural translationese classifiers, where BERT-based classifiers capitalize on spurious correlations, in particular topic information, between data and target classification labels, rather than genuine translationese signals. Translationese signals are subtle (especially for professional translation) and compete with many other signals in the data such as genre, style, author, and, in particular, topic. This raises the general question of how much of the performance of a classifier is really due to spurious correlations in the data versus the signals actually targeted by the classifier, especially for subtle target signals and in challenging (low-resource) data settings. We focus on topic-based spurious correlation and approach the question from two directions: (i) where we have no knowledge about spurious topic information and its distribution in the data, and (ii) where we have some indication about the nature of spurious topic correlations. For (i), we develop a measure from first principles capturing the alignment of unsupervised topics with target classification labels as an indication of spurious topic information in the data. We show that our measure is the same as purity in clustering and propose a ``topic floor'' (analogous to a ``noise floor'') for classification. For (ii), we investigate masking of known spurious topic carriers in classification. Both (i) and (ii) contribute to quantifying spurious correlations, and (ii) additionally to mitigating them.
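The abstract notes that the proposed measure coincides with purity from clustering evaluation: each unsupervised topic is credited with its majority class label, and purity is the fraction of items so covered. A minimal sketch of that computation (the function name and the toy inputs are illustrative, not taken from the paper):

```python
from collections import Counter


def purity(topic_assignments, labels):
    """Clustering purity: assign each topic (cluster) its majority
    label and return the fraction of items that majority covers.

    A purity near 1.0 means unsupervised topics align almost
    perfectly with the classification labels -- a warning sign of
    spurious topic information a classifier could exploit.
    """
    assert len(topic_assignments) == len(labels)
    # Group the gold labels by the topic each item was assigned to.
    clusters = {}
    for topic, label in zip(topic_assignments, labels):
        clusters.setdefault(topic, []).append(label)
    # Sum the majority-label counts over all topics.
    majority_total = sum(
        max(Counter(cluster_labels).values())
        for cluster_labels in clusters.values()
    )
    return majority_total / len(labels)


# Toy example: two topics over five documents labeled
# original ("orig") vs. translated ("trans").
topics = [0, 0, 1, 1, 1]
labels = ["orig", "orig", "trans", "trans", "orig"]
print(purity(topics, labels))  # 0.8
```

In this reading, a ``topic floor'' is the classification accuracy attainable from topic information alone: classifier performance at or below this purity value may reflect topic shortcuts rather than genuine translationese signal.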