Interactive and user-friendly interfaces form the basis of multimodal human-technology interaction. Personalized dialogue systems combine speech, gestures, and facial expressions with physical interaction. To make dialogue behaviour as natural as possible and to keep dialogue understanding robust even under difficult conditions, such as noisy environments or ambiguous input, these systems draw on user, task, and domain models. Future user-interface concepts move away from the familiar mouse-and-keyboard paradigm and enable intuitive, gesture-based control.
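The interplay of these models can be made concrete with a small sketch. The following Python example is purely illustrative and not taken from any particular system: all names (UserModel, TaskModel, DomainModel, interpret) are hypothetical. It shows, under simplifying assumptions, how a dialogue manager might fuse a spoken utterance with a pointing gesture and fall back on the task model when one modality is ambiguous.

```python
from dataclasses import dataclass, field

# Hypothetical context models; names and fields are illustrative only.

@dataclass
class UserModel:
    """Per-user preferences that shape dialogue behaviour."""
    name: str
    preferred_verbosity: str = "short"   # "short" or "detailed"

@dataclass
class TaskModel:
    """Tracks which step of the current task the user is on."""
    steps: list = field(default_factory=list)
    current: int = 0

    def current_step(self) -> str:
        return self.steps[self.current] if self.steps else "idle"

@dataclass
class DomainModel:
    """Objects the system knows about and where they are located."""
    objects: dict = field(default_factory=dict)  # label -> position

def interpret(utterance: str, gesture_target: str | None,
              user: UserModel, task: TaskModel,
              domain: DomainModel) -> str:
    """Fuse speech and gesture; consult the task model when one
    modality is missing or ambiguous (e.g. noisy speech input)."""
    # A deictic reference ("that") is resolved via the pointing gesture.
    if "that" in utterance and gesture_target in domain.objects:
        referent = gesture_target
    else:
        # Fall back: assume the object relevant to the current task step.
        referent = task.current_step()
    # The user model tunes how verbose the system's response is.
    detail = ("" if user.preferred_verbosity == "short"
              else f" (located at {domain.objects.get(referent, 'unknown')})")
    return f"Okay {user.name}, selecting the {referent}{detail}."

if __name__ == "__main__":
    user = UserModel(name="Alex", preferred_verbosity="detailed")
    task = TaskModel(steps=["wrench", "bolt"], current=0)
    domain = DomainModel(objects={"wrench": "shelf 2", "bolt": "bin 4"})
    # Speech alone is ambiguous; the pointing gesture disambiguates it.
    print(interpret("hand me that", gesture_target="wrench",
                    user=user, task=task, domain=domain))
```

Running the sketch prints "Okay Alex, selecting the wrench (located at shelf 2)": the gesture resolves the ambiguous "that", the domain model supplies the location, and the user model decides how much detail to include in the reply.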