
Publication

Stay Quiet: Investigating the Effect of Overt Speech on EEG Classification Performance

Patrick Bings; Niklas Kueper; Elsa Andrea Kirchner
In: 2025 IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE). IEEE International Conference on Metrology for Extended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE-2025), October 22-24, Ancona, Italy, Pages 7-12, IEEE, 11/2025.

Abstract

The detection of movement intentions for controlling robotic systems is becoming increasingly important in human-robot interaction, especially in exoskeleton-supported rehabilitation. A movement intention can be expressed by a speech command, which can be used, for example, to generate training labels in electroencephalography (EEG)-based brain-computer interface (BCI) applications. However, the production of overt speech during BCI use is strictly avoided in many studies because of the artifacts it introduces into the EEG signal, and this restriction is a major limitation for out-of-the-lab use of BCI systems. In many such applications, the user cannot be required to remain silent while operating the BCI. In this work, the influence of overt speech commands on the detection of a person's arm movement intentions was investigated. Our objective was to reduce the influence of overt speech artifacts in EEG-based classification. EEG data were recorded from six healthy subjects under three experimental conditions: unilateral arm movements (Uni), isolated speech commands (Sp), and unilateral arm movements with speech commands (UniSp), in which subjects indicated their intention to move by saying the word "begin" before the onset of the arm movement. To investigate the effect of overt speech on classifier performance, we performed a classifier transfer between the UniSp and Uni conditions. Additionally, an independent component analysis (ICA)-based approach was applied to reduce the artifacts caused by overt speech before the classifier transfer. For the Uni condition (baseline, no transfer), an accuracy of 0.828 was achieved. In contrast, the accuracy for the UniSp condition (no transfer) increased to 0.953. However, a naive classifier transfer from the UniSp condition to the Uni condition yielded a strongly reduced accuracy of 0.544, and combining this transfer with the ICA approach resulted in a slightly lower accuracy of 0.534.
These results indicate that the performance increase in the UniSp condition compared to the Uni condition (baseline) resulted from specific patterns in the EEG data originating from overt speech; such patterns can also arise from muscle activity and speech-related movement artifacts. The poor classification accuracies of 0.544 and 0.534 (chance level 0.5) observed in the classifier transfer support the hypothesis that artifact contamination of the EEG produced by overt speech results in a major decrease in classification performance. The classifier may therefore have learned very different patterns in the UniSp condition than in the Uni condition. Finally, the results demonstrated that applying the ICA-based approach did not improve the transferred classifier's performance. These findings strongly motivate the need for improved preprocessing and transfer learning strategies to achieve more robust EEG classification for out-of-the-lab use of BCIs.
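The evaluation described in the abstract — train a classifier on one condition (UniSp), test it on another (Uni), and optionally remove artifact-related independent components before the transfer — can be illustrated with a minimal sketch. The sketch below uses fully synthetic data, scikit-learn's FastICA, and an LDA classifier; selecting artifact components by correlation with a reference waveform is an assumption for illustration only, not the authors' published pipeline.

```python
# Sketch of a classifier transfer (train on UniSp-like data, test on Uni-like
# data) with optional ICA-based artifact suppression. All signals are
# synthetic; the correlation-based component selection is a hypothetical
# stand-in for the paper's actual artifact-rejection criterion.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_epochs, n_channels, n_times = 60, 8, 128

def make_epochs(with_artifact):
    """Two-class epochs; class 1 carries a weak movement-like pattern."""
    X = rng.standard_normal((n_epochs, n_channels, n_times))
    y = np.repeat([0, 1], n_epochs // 2)
    X[y == 1, 0, :] += 0.8                       # class-specific offset, ch 0
    if with_artifact:                            # speech-like sinusoid
        X[y == 1, :, :] += 2.0 * np.sin(np.linspace(0, 20, n_times))
    return X, y

def remove_artifact_ica(X, threshold=0.8):
    """Zero ICA components strongly correlated with an artifact reference."""
    ref = np.tile(np.sin(np.linspace(0, 20, n_times)), n_epochs)
    flat = X.transpose(0, 2, 1).reshape(-1, n_channels)  # (samples, channels)
    ica = FastICA(n_components=n_channels, random_state=0, max_iter=1000)
    S = ica.fit_transform(flat)
    for k in range(S.shape[1]):
        if abs(np.corrcoef(S[:, k], ref)[0, 1]) > threshold:
            S[:, k] = 0.0                        # drop artifact component
    clean = ica.inverse_transform(S)
    return clean.reshape(n_epochs, n_times, n_channels).transpose(0, 2, 1)

def features(X):
    return X.mean(axis=2)  # toy feature: per-channel mean amplitude

X_unisp, y = make_epochs(with_artifact=True)     # movement + speech
X_uni, _ = make_epochs(with_artifact=False)      # movement only

# Naive transfer: fit on the speech-contaminated condition, test on Uni.
clf = LinearDiscriminantAnalysis().fit(features(X_unisp), y)
acc_transfer = clf.score(features(X_uni), y)

# Transfer after ICA-based artifact suppression on the training data.
clf_ica = LinearDiscriminantAnalysis().fit(
    features(remove_artifact_ica(X_unisp)), y)
acc_ica = clf_ica.score(features(X_uni), y)
print(acc_transfer, acc_ica)
```

In this toy setting the artifact is known exactly, so the reference-correlation criterion is trivially available; in real EEG the components would instead be identified from EMG reference channels, spectral profiles, or visual inspection, which is precisely where the paper reports the ICA approach falling short.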
