Multimodal Speech-based Dialogue for the Mini-Mental State Examination

Alexander Prange, Mira Niemann, Antje Latendorf, Anika Steinert, Daniel Sonntag

In: Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (CHI EA '19), May 4-9, Glasgow, United Kingdom. ACM International Conference on Human Factors in Computing Systems (CHI-2019), Case Studies, pages 13-1. ACM, New York, NY, USA, 2019. ISBN 978-1-4503-5971-9.


We present a system-initiative multimodal speech-based dialogue system for the Mini-Mental State Examination (MMSE). The MMSE is a questionnaire-based cognitive test that is traditionally administered by a trained expert using pen and paper and afterwards scored manually to measure cognitive impairment. Using a digital pen and speech dialogue, we implement a multimodal system for the automatic execution and evaluation of the MMSE. User input is evaluated and scored in real time. We present a user experience study with 15 participants and compare the usability of the proposed system with the traditional approach. Our experiment suggests that both modes perform equally well in terms of usability, but the proposed system receives higher novelty ratings. We also compare assessment scores produced by our system with manual scores assigned by domain experts.
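The real-time scoring described in the abstract can be illustrated with a minimal sketch. The section names, the `score_mmse` function, and the example answers below are hypothetical and not the authors' implementation; only the section maxima (summing to the standard MMSE total of 30 points) follow the published test.

```python
# Hypothetical sketch of per-section MMSE scoring; not the authors' code.
# Section maxima follow the standard MMSE (total: 30 points).
SECTION_MAX = {
    "orientation_time": 5,
    "orientation_place": 5,
    "registration": 3,
    "attention_calculation": 5,
    "recall": 3,
    "language": 8,
    "visuoconstruction": 1,
}

def score_mmse(correct_items: dict) -> int:
    """Sum the per-section counts of correct items, capping each
    section at its maximum, to obtain the 0-30 total score."""
    total = 0
    for section, maximum in SECTION_MAX.items():
        total += min(correct_items.get(section, 0), maximum)
    return total

# Example: a participant who misses two recall items and one
# calculation item scores 27 of 30.
answers = {
    "orientation_time": 5,
    "orientation_place": 5,
    "registration": 3,
    "attention_calculation": 4,
    "recall": 1,
    "language": 8,
    "visuoconstruction": 1,
}
total = score_mmse(answers)  # 27
```

In a dialogue-driven administration, such a scorer could be invoked after each answer so that the running total is always current, which is one way the system-initiative design can produce scores without a manual post-hoc evaluation step.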


Further Links

2019_Multimodal_speech-based_dialogue_for_the_Mini-Mental_State_Examination.pdf (PDF, 5 MB)

Deutsches Forschungszentrum für Künstliche Intelligenz
German Research Center for Artificial Intelligence