Complex head/face-based devices will become increasingly suitable for everyday use, driven by a combination of changing consumer attitudes and technical progress. Development has already advanced to the point where the first products are on the market and research continues, ranging from lightweight consumer EEG headsets and in-ear headphones with an IMU, pulse sensor, bone-conduction microphone, and gesture control, to sleep-monitoring headphones with EEG, lightweight eye trackers, and augmented reality glasses for cyclists. To date, however, most work has focused on specialized applications using individual sensor modalities. In this project we intend to develop the scientific and technological foundations of a highly multimodal, dynamically adaptive head/face-based sensor relevant to a wide range of applications.
The vision of the project can be summarized as follows:
We will consider a broad range of sensing modalities, from commercially available systems such as mobile EEG and eye trackers to modalities developed by our group, such as textile pressure sensor matrices and textile capacitive sensors, and work towards answering the following questions:
- What types of information can be extracted from different sensors at different locations on the head/face area, and how does this information go beyond what existing non-head-based approaches can provide?
- What are the trade-offs between the ergonomic and user-acceptance constraints imposed by different sensor locations and attachment methods on the one hand, and the information content and quality on the other?
- How can the noise and information loss caused by the need to adhere to these ergonomic and user-acceptance constraints be compensated for through signal processing and the fusion of information from different sensing modalities and locations on the head/face area (see the fusion sketch after this list)?
- How can the information provided by a variety of head/face-based sensors be used to detect semantically meaningful “face-related events” such as changes in facial expression, shifts in the focus of attention, chewing, swallowing, laughing, coughing, and sighing?
- How can we go beyond simple individual events towards the recognition of high-level context, in particular emotions, cognitive load, cognition-related activities, interactions, and nutrition?
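To make the fusion and event-detection questions above concrete, the sketch below shows feature-level fusion of three simulated channels standing in for head-mounted modalities (an EEG-like channel, head motion, and a textile capacitive channel): each modality's windowed features are normalized independently before concatenation, and a simple threshold rule flags candidate event windows. All signals, sampling rates, window sizes, and the threshold are illustrative assumptions, not the project's actual pipeline.

```python
import numpy as np

# Illustrative sketch only: simulated signals stand in for real head-mounted
# sensors; rates, window sizes, and the "event score" rule are assumptions.

RATE_HZ = 50          # assumed common sampling rate after resampling
WINDOW_S = 1.0        # 1-second analysis windows
WIN = int(RATE_HZ * WINDOW_S)

def window_features(signal: np.ndarray) -> np.ndarray:
    """Split a 1-D signal into windows and compute simple per-window features
    (mean and standard deviation) as a stand-in for modality-specific ones."""
    n = len(signal) // WIN
    windows = signal[: n * WIN].reshape(n, WIN)
    return np.stack([windows.mean(axis=1), windows.std(axis=1)], axis=1)

def zscore(features: np.ndarray) -> np.ndarray:
    """Normalize each feature column so modalities with different units and
    noise levels contribute comparably after fusion."""
    return (features - features.mean(axis=0)) / (features.std(axis=0) + 1e-8)

rng = np.random.default_rng(0)
t = np.arange(60 * RATE_HZ) / RATE_HZ               # one minute of data

# Simulated modalities (purely illustrative):
eeg = rng.normal(0, 1.0, t.size)                    # noisy EEG-like channel
imu = rng.normal(0, 0.2, t.size)                    # head-motion channel
cap = rng.normal(0, 0.1, t.size)                    # textile capacitive channel

# Inject a "chewing-like" burst visible in the IMU and capacitive channels
# only, so a single noisy modality (EEG) does not mask the event.
burst = (t > 20) & (t < 25)
imu[burst] += 0.8 * np.sin(2 * np.pi * 2.0 * t[burst])
cap[burst] += 0.5 * np.sin(2 * np.pi * 2.0 * t[burst])

# Feature-level fusion: normalize per modality, then concatenate.
fused = np.hstack([zscore(window_features(s)) for s in (eeg, imu, cap)])

# A trivial event detector: flag windows with unusually high fused energy.
score = np.linalg.norm(fused, axis=1)
threshold = score.mean() + 2 * score.std()          # assumed threshold rule
events = np.nonzero(score > threshold)[0]
print("candidate event windows (seconds):", events * WINDOW_S)
```

The point of the example is the design choice of normalizing each modality before fusing: a window in which the noisy EEG channel is uninformative can still be flagged as an event on the strength of the motion and capacitive channels.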
The work will result in a low-power, adaptive sensing and signal processing architecture for the recognition of head/face-related context, including the implementation of ergonomic, smart-glasses-like hardware and its evaluation in real-world applications inspired by projects currently being carried out by our group. These include wearable systems for science education, group collaboration support (a follow-up to IGroups), and health, in particular nutrition monitoring.
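The "low power, adaptive" aspect of such an architecture could, for example, take the form of duty cycling, where a cheap always-on channel gates the sampling rate of power-hungry modalities. The sketch below is one hypothetical interpretation; the rates, threshold, and sensor roles are assumptions, not a specification of the planned hardware.

```python
import random

# Hypothetical duty-cycling sketch: power-hungry modalities run at a low rate
# until a cheap always-on sensor suggests face activity. All names, rates,
# and thresholds are illustrative assumptions.

LOW_RATE_HZ, HIGH_RATE_HZ = 1, 50    # assumed duty-cycle levels
ACTIVITY_THRESHOLD = 0.6             # assumed trigger from the cheap sensor

def choose_rate(activity_score: float) -> int:
    """Raise the expensive sensor's sampling rate only when the low-power
    channel (e.g., a capacitive electrode) reports likely face activity."""
    return HIGH_RATE_HZ if activity_score > ACTIVITY_THRESHOLD else LOW_RATE_HZ

random.seed(0)
for step in range(5):
    score = random.random()          # stand-in for a real activity estimate
    print(f"t={step}s activity={score:.2f} -> sample EEG at {choose_rate(score)} Hz")
```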
Partners
--