Recent progress in mobile broadband communication and semantic Web technology is enabling innovative internet services that provide advanced personalization and localization features. The goal of the SmartWeb project is to lay the foundations for multimodal user interfaces to distributed and composable semantic Web services on mobile devices. The SmartWeb consortium brings together experts from various research communities: mobile services, intelligent user interfaces, language and speech technology, information extraction, and semantic Web technologies.
SmartWeb is based on two parallel efforts that have the potential of forming the basis for the next generation of the Web. The first effort is the semantic Web, which provides the tools for the explicit markup of the content of Web pages; the second effort is the development of semantic Web services, which results in a Web where programs act as autonomous agents, becoming the producers and consumers of information and enabling the automation of transactions.
The appeal of being able to ask a question to a mobile internet terminal and receive an answer immediately has been renewed by the broad availability of information on the Web. Ideally, a spoken dialogue system that uses the Web as its knowledge base would be able to answer a broad range of questions. In practice, the size and dynamic nature of the Web, and the fact that the content of most Web pages is encoded in natural language, make this an extremely difficult task. SmartWeb therefore exploits the machine-understandable content of semantic Web pages for intelligent question answering as a next step beyond today's search engines. Since semantically annotated Web pages are still very rare due to the time-consuming and costly manual markup, SmartWeb is using advanced language technology and information extraction methods for the automatic annotation of traditional Web pages encoded in HTML or XML.
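As an illustration of what such automatic annotation produces, the following Python sketch (using the rdflib library) turns an entity recognized in a plain HTML page into RDF triples that a question-answering component could later query. The ontology namespace, class, and property names are invented for this example, and the keyword match stands in for the far more elaborate language technology and information extraction used in SmartWeb.

```python
# Minimal sketch of automatic semantic annotation (illustrative only):
# recognize an entity in raw HTML text and emit RDF triples describing it.
from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF, RDFS

SPORT = Namespace("http://example.org/sport#")   # hypothetical ontology namespace

def annotate(html_text: str) -> Graph:
    """Emit RDF triples for entities recognized in the page text."""
    g = Graph()
    g.bind("sport", SPORT)
    # A real system would use named-entity recognition and information
    # extraction here; a trivial keyword match is used purely for illustration.
    if "FIFA World Cup 2006" in html_text:
        event = URIRef("http://example.org/events/worldcup2006")
        g.add((event, RDF.type, SPORT.FootballTournament))
        g.add((event, RDFS.label, Literal("FIFA World Cup 2006", lang="en")))
        g.add((event, SPORT.hostCountry, Literal("Germany")))
    return g

g = annotate("<p>The FIFA World Cup 2006 takes place in Germany.</p>")
print(g.serialize(format="turtle"))
```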
SmartWeb deals not only with information-seeking dialogues but also with task-oriented dialogues, in which the user wants to perform a transaction via a Web service (e.g. buy a ticket for a sports event or program the navigation system to find a souvenir shop).
SmartWeb provides a context-aware user interface, so that it can support the user in different roles, e.g. as a car driver, a motorcyclist, a pedestrian, or a sports spectator. One of the planned demonstrators of SmartWeb is a personal guide for the 2006 FIFA World Cup in Germany, which provides mobile infotainment services to soccer fans, anywhere and anytime. Another SmartWeb demonstrator is based on P2P communication between a car and a motorcycle. When the car's sensors detect aquaplaning, a motorcyclist following behind is warned by SmartWeb: "Aquaplaning danger in 200 meters!" The motorcyclist can interact with SmartWeb through speech and haptic feedback; the car driver can input speech and gestures.
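The sketch below shows what such a car-to-motorcycle warning message might look like on the wire. The field names and the JSON encoding are assumptions made for illustration; they do not reflect the actual SmartWeb P2P protocol.

```python
# Illustrative hazard-warning message for the car-to-motorcycle scenario.
import json
from dataclasses import dataclass, asdict

@dataclass
class HazardWarning:
    hazard: str         # e.g. "aquaplaning"
    distance_m: int     # distance ahead of the receiver, in meters
    lat: float          # position of the detecting vehicle
    lon: float
    spoken_text: str    # text to be rendered by speech synthesis on the receiver

warning = HazardWarning(
    hazard="aquaplaning",
    distance_m=200,
    lat=48.137, lon=11.575,
    spoken_text="Aquaplaning danger in 200 meters!",
)
payload = json.dumps(asdict(warning))   # message body sent over the P2P link
print(payload)
```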
SmartWeb is based on two new W3C standards for the semantic Web, the Resource Description Framework (RDF/S) and the Web Ontology Language (OWL), for representing machine-interpretable content on the Web. OWL-S ontologies support semantic service descriptions, focusing primarily on the formal specification of the inputs, outputs, preconditions, and effects of Web services. In SmartWeb, multimodal user requests will not only lead to automatic Web service discovery and invocation, but also to the automatic composition, interoperation, and execution monitoring of Web services.
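To make the idea of composition concrete, the toy Python sketch below reduces each service description to a set of typed inputs and outputs (the essence of an OWL-S profile) and chains services forward until the user's goal concept is produced. The service names, the concepts, and the greedy strategy are illustrative assumptions, not the SmartWeb implementation; real OWL-S descriptions are RDF/OWL documents and also cover preconditions and effects.

```python
# Toy service discovery and composition over OWL-S-style input/output profiles.
from dataclasses import dataclass

@dataclass(frozen=True)
class Service:
    name: str
    inputs: frozenset    # concepts the service requires
    outputs: frozenset   # concepts the service produces

SERVICES = [
    Service("FindMatch", frozenset({"Team", "Date"}), frozenset({"MatchEvent"})),
    Service("BuyTicket", frozenset({"MatchEvent", "CreditCard"}), frozenset({"Ticket"})),
]

def compose(available: set, goal: str, services=SERVICES):
    """Greedy forward chaining: invoke any service whose inputs are satisfied
    until the goal concept is available or no further progress is possible."""
    plan = []
    while goal not in available:
        progress = False
        for s in services:
            if s not in plan and s.inputs <= available:
                plan.append(s)
                available |= s.outputs
                progress = True
        if not progress:
            return None   # goal not reachable with the known services
    return [s.name for s in plan]

# Concepts provided by a request such as "buy me a ticket for tonight's match":
print(compose({"Team", "Date", "CreditCard"}, "Ticket"))
# -> ['FindMatch', 'BuyTicket']
```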
Partners
- DFKI GmbH (consortium lead)
- BMW Forschung und Technik GmbH
- DaimlerChrysler AG
- Deutsche Telekom AG, T-Systems Nova GmbH
- European Media Laboratory GmbH
- FhG-FIRST
- Friedrich-Alexander-Universität Erlangen-Nürnberg
- International Computer Science Institute
- Ludwig-Maximilians-Universität München
- Ontoprise GmbH
- Siemens AG
- Sympalog Voice Solutions GmbH
- Universität des Saarlandes
- Universität Karlsruhe (TH)
- Universität Stuttgart