Publication
EmoNet-Voice: A Fine-Grained, Expert-Verified Benchmark for Speech Emotion Detection
Christoph Schuhmann; Robert Kaczmarczyk; Gollam Rabby; Felix Friedrich; Maurice Kraus; Kourosh Nadi; Huu Nguyen; Kristian Kersting; Sören Zepezauer
In: Computing Research Repository (CoRR), Vol. abs/2506.09827, Pages 1-20, 2025.
Abstract
The advancement of text-to-speech and audio generation models necessitates robust
benchmarks for evaluating the emotional understanding capabilities of AI systems.
Current speech emotion recognition (SER) datasets often suffer from limited
emotional granularity, privacy concerns, or a reliance on acted portrayals. This
paper introduces EMONET-VOICE, a new resource for speech emotion detection,
which includes EMONET-VOICE BIG, a large-scale pre-training dataset (featuring
over 4,500 hours of speech across 11 voices, 40 emotions, and 4 languages), and
EMONET-VOICE BENCH, a novel benchmark dataset with human expert annotations.
EMONET-VOICE is designed to evaluate SER models on a fine-grained spectrum of
40 emotion categories at different levels of intensity. Leveraging
state-of-the-art voice generation, we curated synthetic audio snippets simulating
actors portraying scenes designed to evoke specific emotions. Crucially, we
conducted rigorous validation by psychology experts who assigned perceived intensity
labels. This synthetic, privacy-preserving approach allows for the inclusion of
sensitive emotional states often absent in existing datasets. Lastly, we introduce
EMPATHICINSIGHT-VOICE models that set a new standard in speech emotion
recognition, achieving high agreement with human experts. Our evaluations across
the current model landscape yield valuable findings, for example that high-arousal
emotions such as anger are much easier to detect than low-arousal states such as
concentration.
