Publication
Effect of Trapping Questions on the Reliability of Speech Quality Judgments in a Crowdsourcing Paradigm
Babak Naderi; Tim Polzehl; Ina Wechsung; Friedemann Köster; Sebastian Möller
In: Proceedings of the 16th Annual Conference of the International Speech Communication Association (INTERSPEECH 2015), Dresden, Germany, pp. 2799-2803. ISCA, 2015.
Abstract
This paper reports on a crowdsourcing study investigating the influence of trapping questions on the reliability of the collected data. The crowd workers were asked to provide quality ratings for speech samples from a standard database. In addition, they were presented with different types of trapping questions, designed based on previous research. The ratings obtained from the crowd workers were compared to ratings collected in a laboratory setting. The best results (i.e., the highest correlation with, and the lowest root-mean-square deviation from, the lab ratings) were observed for the type of trapping question in which a recorded voice was presented in the middle of a randomly selected stimulus. The voice explained to the workers that high-quality responses are important, and asked them to select a specific item to show that they were concentrating. We hypothesize that this kind of trapping question communicates the importance and the value of their work to the crowd workers. According to Herzberg's two-factor theory of job satisfaction, the presence of factors such as acknowledgment and the feeling of being valued facilitates satisfaction and motivation, and eventually leads to better performance.
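As a minimal sketch of the two reliability measures named in the abstract, the following Python snippet computes the Pearson correlation and the root-mean-square deviation between crowd and lab ratings. The per-condition score vectors are hypothetical placeholders, not data from the study.

import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-condition mean opinion scores (MOS); real values
# would come from the crowdsourcing and laboratory experiments.
crowd_mos = np.array([3.8, 2.1, 4.2, 1.7, 3.3])
lab_mos   = np.array([3.9, 2.3, 4.0, 1.5, 3.4])

# Pearson correlation with the lab ratings (higher indicates better agreement).
r, _ = pearsonr(crowd_mos, lab_mos)

# Root-mean-square deviation from the lab ratings (lower is better).
rmsd = np.sqrt(np.mean((crowd_mos - lab_mos) ** 2))

print(f"Pearson r = {r:.3f}, RMSD = {rmsd:.3f}")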