
Project

PYSA

Care documentation with a hybrid speech assistant


Comprehensive and accurate documentation is essential for needs-based care. In many places, the systems and processes used today for care documentation are time-consuming and prone to errors. Caregivers are usually unable to document care continuously and have to enter their records manually into the computer. To improve care documentation, researchers in the PYSA project are developing an artificial intelligence (AI)-based voice assistant for smartphones. The AI assistant generates structured documentation entries from voice input, which is directly transferred to the existing documentation systems of the respective care facility. This enables continuous documentation directly during care.
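The core idea above — turning a caregiver's spoken note into a structured entry that can be handed to an existing documentation system — can be illustrated with a minimal sketch. The `CareEntry` fields, the keyword rule, and the resident ID are all hypothetical illustrations; the actual PYSA assistant uses AI models rather than keyword matching.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CareEntry:
    """A structured documentation entry (hypothetical schema)."""
    resident_id: str
    category: str      # e.g. "fluid intake", "general note"
    text: str
    timestamp: datetime

def entry_from_transcript(resident_id: str, transcript: str) -> CareEntry:
    # Toy keyword-based classification standing in for the AI model
    # that maps a voice transcript to a documentation category.
    category = "fluid intake" if "drank" in transcript else "general note"
    return CareEntry(resident_id, category, transcript, datetime.now())

entry = entry_from_transcript("R-042", "Mrs. M. drank 200 ml of water")
print(entry.category)  # "fluid intake"
```

In the real system this structured entry would then be transferred to the care facility's documentation software; here it simply remains an in-memory object.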

A particular challenge in working with AI and speech is accounting for different dialects and accents: nursing staff in Germany are increasingly recruited from abroad, and today's speech recognition systems often fail to deliver high-quality results for non-native speakers.

The new PYSA AI solution is being introduced into everyday practice in cooperation with several care facilities, where it will be tested and evaluated in an effectiveness study and further developed based on the findings. The AI assistant will also be able to answer questions interactively based on existing care data; for this purpose, user feedback is continuously integrated into the learning system. The system uses self-learning AI models specially optimized for nursing, which run offline on smartphones, so nursing homes without WLAN can also use PYSA. PYSA can be used in inpatient and outpatient care as well as in hospitals; it promotes needs-based care, relieves nursing staff, and can thus sustainably increase the attractiveness of the nursing profession. DFKI participates with its COS and SLT research departments, focusing on adapting speech recognition to nursing applications, researching individualizable interactive dialogue strategies, and on expert knowledge and explainability.
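The interactive question-answering over existing care data described above can be sketched in miniature. This is a toy keyword-retrieval stand-in under assumed data shapes (plain-text entries, an ad-hoc stopword cutoff); the actual assistant relies on on-device AI models, not keyword overlap.

```python
from typing import List

def answer_question(question: str, entries: List[str]) -> List[str]:
    # Hypothetical retrieval: return prior documentation entries that
    # share a content word (longer than 3 characters) with the question.
    keywords = {w.lower() for w in question.split() if len(w) > 3}
    return [e for e in entries
            if keywords & {w.lower() for w in e.split()}]

notes = [
    "08:10 fluid intake 200 ml water",
    "09:30 mobility: walked to dining room",
]
print(answer_question("How much fluid today?", notes))
# ["08:10 fluid intake 200 ml water"]
```

Because such a lookup runs entirely on the device, it also illustrates why no network connection is required at the point of care.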

Partners

  • voize GmbH
  • CFGG - Forschungsgruppe Geriatrie der Charité-Universitätsmedizin Berlin
  • Connext Communication GmbH
  • Kleeblatt Pflegeheime gGmbH
  • Evangelisches Johannesstift Altenhilfe gGmbH

Sponsors

BMBF - Federal Ministry of Education and Research

16SV8850


Publications about the project

Tim Polzehl; Vera Schmitt; Nils Feldhus; Joachim Meyer; Sebastian Möller

In: Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - HUCAPP. International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP-2022), Pages 267-278, ISBN 978-989-758-634-7, SciTePress, 2023.


Carlos Franzreb; Tim Polzehl

In: DAGA 2023 - 49. Jahrestagung für Akustik. Deutsche Jahrestagung für Akustik (DAGA-2023), March 6-9, Hamburg, Germany, Pages 1413-1416, ISBN 978-3-939296-21-8, DEGA e.V., 2023.


Daniel Fernau; Stefan Hillmann; Nils Feldhus; Tim Polzehl; Sebastian Möller

In: Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue. Annual SIGdial Meeting on Discourse and Dialogue (SIGdial), Pages 135-145, ACL, 2022.
