Publication

Disambiguating Signs: Deep Learning-based Gloss-level Classification for German Sign Language by Utilizing Mouth Actions

Dinh Nam Pham; Vera Czehmann; Eleftherios Avramidis
In: Proceedings of the 31st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN-2023), October 4-6, Bruges, Belgium, ISBN 978-2-87587-088-9. i6doc.com publ. 10/2023.

Abstract

Despite the importance of mouth actions in Sign Languages, previous work on Automatic Sign Language Recognition (ASLR) has made limited use of the mouth area. Disambiguation of homonyms is one of the functions of mouth actions, making them essential for tasks involving ambiguous hand signs. To measure their importance for ASLR, we trained a classifier to recognize ambiguous hand signs. We compared three models which use the upper body/hands area, the mouth, and both combined as input. We found that the addition of the mouth area in the model resulted in the best accuracy, giving an improvement of 7.2% and 4.7% on the validation and test set respectively, while allowing disambiguation of the hand signs in most cases. In cases where the disambiguation failed, it was observed that the signers in the video samples occasionally did not perform mouthings. In a few cases, the mouthing alone was enough to achieve full disambiguation of the signs. We conclude that further investigation into the modelling of the mouth region can be beneficial for future ASLR systems.
