
Project

COMIC

COnversational Multimodal Interaction with Computers

  • Duration:

It is widely believed that future automatic services will come with interfaces that support conversational interaction. Interacting with devices and services will then be as easy and natural as talking to a friend or an assistant. In face-to-face communication we use all our senses: we speak to each other, and we see facial expressions, hand gestures, sketches, and words scribbled with a pen. Face-to-face interaction is multimodal. To offer conversational interaction, future automatic services must therefore be multimodal as well: computers will be able to understand speech and typed text and to recognize the gestures, facial expressions, and body posture of the human interlocutor, and they will use these same communication channels, in addition to presenting graphics, to render their responses.

COMIC starts from the assumption that multimodal interaction with computers should be firmly based on generic cognitive models of multimodal interaction. Much fundamental research is still needed to uncover the generic cognitive principles that underlie this type of interaction. COMIC will build a number of demonstrators to evaluate the applicability of the cognitive models in the domains of eWork and eCommerce.

Partners

  • Max Planck Institute for Psycholinguistics (consortium lead)
  • DFKI GmbH
  • Max Planck Institute for Biological Cybernetics
  • University of Edinburgh
  • University of Nijmegen
  • University of Sheffield
  • ViSoft GmbH

Sponsors

EU - European Union

Publications about the project

Michael Feld

Master's thesis, Department 6.2 Computer Science, Saarland University, Germany, 2006.


Christian Müller; Michael Feld

In: Proceedings of the 11th International Conference "Speech and Computer" (SPECOM 2006), St. Petersburg, pp. 120-124, Anatolya Publishers, 2006.
