Detecting stress enables targeted measures to reduce health risks. Conventional methods often rely on sensors that measure biomarkers and biosignals, but these can be uncomfortable for the people being monitored and can alter their behavior. Video-based analysis of facial features offers a non-invasive alternative.
Nele Brügge, a scientist in DFKI's AI in Medical Image and Signal Processing research group headed by Prof. Dr. Heinz Handels, together with researchers from the University of Lübeck and the Foundation for Research and Technology Hellas (FORTH), has developed a novel video-based approach to stress detection. The method uses bag-level multiple instance learning to analyze subtle facial expressions and behavioral variations in stressful situations. A specially developed temporal feature network exploits the temporal information in the videos to detect stress-related behavioral patterns, and combined with optimized data processing it identifies the most relevant sequences in the recordings. The approach achieves an accuracy of 95.46% and an F1 score of 95.49%, outperforming methods that use neither multiple instance learning nor the adapted feature network.
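The core idea of bag-level multiple instance learning can be illustrated with a minimal sketch: a video recording is treated as a "bag" of short clips ("instances"), each clip is embedded into a feature vector, the instance features are pooled into a single bag representation, and only the bag receives a stress label. This is not the authors' implementation; the feature extractor, max pooling aggregator, and all names and dimensions below are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def clip_features(clip):
    """Toy stand-in for the paper's temporal feature network:
    reduces a (frames, signals) clip to one instance feature vector
    (here simply per-signal mean and standard deviation)."""
    return np.concatenate([clip.mean(axis=0), clip.std(axis=0)])

def predict_bag(video, clip_len, w, b):
    """Bag-level MIL inference: split the video into fixed-length
    clips (instances), embed each, pool them into one bag vector,
    and classify the bag with a linear model + sigmoid."""
    clips = [video[i:i + clip_len]
             for i in range(0, len(video) - clip_len + 1, clip_len)]
    instances = np.stack([clip_features(c) for c in clips])  # (n_instances, d)
    bag = instances.max(axis=0)          # max pooling over instances (an assumption)
    return 1.0 / (1.0 + np.exp(-(bag @ w + b)))  # stress probability for the bag

# Usage: 120 frames of 10 hypothetical facial-feature signals, random weights.
video = rng.normal(size=(120, 10))
w, b = rng.normal(size=20), 0.0
p = predict_bag(video, clip_len=30, w=w, b=b)
```

Because the label is attached to the whole bag rather than to individual clips, the model can learn from recordings where stress-related behavior appears only in some segments, which is what allows the approach to focus on the most relevant sequences.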
The team published the results of this research in the paper "Bag-Level Multiple Instance Learning for Acute Stress Detection from Video Data". For this outstanding scientific work, they received the Best Student Paper Award at the International Conference on Health Informatics (HEALTHINF), which took place in Porto, Portugal, on February 20-22, 2025.
The award-winning paper:
Brügge, N. S.; Korda, A.; Borgwardt, S.; Andreou, C.; Giannakakis, G. and Handels, H. (2025). Bag-Level Multiple Instance Learning for Acute Stress Detection from Video Data. In Proceedings of the 18th International Joint Conference on Biomedical Engineering Systems and Technologies - Volume 2: HEALTHINF, ISBN 978-989-758-731-3, ISSN 2184-4305, pages 285-296. DOI: 10.5220/0013364900003911
Link: https://www.scitepress.org/publishedPapers/2025/133649/pdf/index.html