Publication
EmoNet-Face: An Expert-Annotated Benchmark for Synthetic Emotion Recognition
Christoph Schuhmann; Robert Kaczmarczyk; Gollam Rabby; Felix Friedrich; Maurice Kraus; Krishna Kalyan; Kourosh Nadi; Huu Nguyen; Kristian Kersting; Sören Zepezauer
In: Computing Research Repository (CoRR), Vol. abs/2505.20033, Pages 1-29, 2025.
Abstract
Effective human-AI interaction relies on AI’s ability to accurately perceive and
interpret human emotions. Current benchmarks for vision and vision-language
models are severely limited, offering a narrow emotional spectrum that overlooks nuanced states (e.g., bitterness, intoxication) and fails to distinguish subtle
differences between related feelings (e.g., shame vs. embarrassment). Existing
datasets also often use uncontrolled imagery with occluded faces and lack demographic diversity, risking significant bias. To address these critical gaps, we
introduce EMONET-FACE, a comprehensive benchmark suite. EMONET-FACE
features: (1) A novel 40-category emotion taxonomy, meticulously derived from
foundational research to capture finer details of human emotional experiences. (2)
Three large-scale, AI-generated datasets (EMONET-FACE HQ, EMONET-FACE
BINARY, and EMONET-FACE BIG) with explicit, full-face expressions and controlled demographic balance across ethnicity, age, and gender. (3) Rigorous, multi-expert annotations for training and high-fidelity evaluation. (4) EMPATHICINSIGHT-FACE, a model we build that achieves human-expert-level performance on
our benchmark. The publicly released EMONET-FACE suite—taxonomy, datasets,
and model—provides a robust foundation for developing and evaluating AI systems
with a deeper understanding of human emotions.
