Search results for: multimodal input
Number of results: 250965
This work was done in collaboration with Mitsubishi Electric Research Laboratories in Cambridge, Massachusetts. People naturally perform multimodal interactions in everyday real-world settings, as they collaborate over large visual surfaces such as paper maps on walls and tables. While a new generation of research technologies now supports co-located collaboration, they do not yet dir...
Recent empirical research has shown conclusive advantages of multimodal interaction over speech-only interaction for map-based tasks. This paper describes a multimodal language processing architecture which supports interfaces allowing simultaneous input from speech and gesture recognition. Integration of spoken and gestural input is driven by unification of typed feature structures representing ...
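The integration mechanism mentioned in the abstract above, unification of typed feature structures, can be illustrated with a small sketch. This is not the paper's implementation; the dict-based representation, the unify helper, and the zoom-command example are illustrative assumptions.

```python
# Minimal sketch (assumed representation, not the paper's code): speech and gesture
# interpretations are modelled as nested dicts standing in for typed feature
# structures. Compatible structures merge; conflicting atomic values make
# unification fail, so incompatible speech/gesture readings are filtered out.

def unify(a, b):
    """Unify two feature structures; return the merged structure or None on conflict."""
    if isinstance(a, dict) and isinstance(b, dict):
        result = dict(a)
        for key, value in b.items():
            if key in result:
                merged = unify(result[key], value)
                if merged is None:
                    return None          # feature clash: unification fails
                result[key] = merged
            else:
                result[key] = value
        return result
    return a if a == b else None         # atomic values must match exactly

# Hypothetical partial interpretations of the utterance "zoom in here"
# combined with a pen gesture that supplies the missing coordinates.
speech = {"type": "zoom", "direction": "in", "location": {"type": "point"}}
gesture = {"location": {"type": "point", "coords": (120, 45)}}

command = unify(speech, gesture)
print(command)
# {'type': 'zoom', 'direction': 'in', 'location': {'type': 'point', 'coords': (120, 45)}}
```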
Interfaces to mobile information access devices need to allow users to interact using whichever mode or combination of modes is most appropriate, given their preferences, the task at hand, and their physical and social environment. This paper describes a multimodal application architecture which facilitates rapid prototyping of flexible next-generation multimodal interfaces. Our sample application MA...
In this paper we introduce BiosignalsStudio (BSS), a framework for multimodal sensor data acquisition. Due to its flexible architecture it can be used for large-scale multimodal data collections as well as a multimodal input layer for intelligent systems. The paper describes the software framework and its contributions to our research work and
Multimodal interaction provides the user with multiple modes of interacting with a system, such as gestures, speech, text, video, audio, etc. A multimodal system allows for several distinct means for input and output of data. In this paper, we present our work in the context of the I-SEARCH project, which aims at enabling context-aware querying of a multimodal search framework including real-wo...
Deutsche Telekom Laboratories and T-Systems recently developed various multimodal prototype applications and modules allowing kinesthetic input, i.e. devices that can be moved around in order to alter the applications' state. The first prototypes we developed used proprietary technologies and principles, whereas the latest demonstrators increasingly follow the W3C's Multimodal Architecture. This...
This paper presents work on multimodal communication with an anthropomorphic agent. It focuses on processing of multimodal input and output employing natural language and gestures in virtual environments. On the input side, we describe our approach to recognize and interpret co-verbal gestures used for pointing, object manipulation, and object description. On the output side, we pres...
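As an illustrative aside on the co-verbal pointing gestures mentioned above (assumed data model, not the system described in the abstract), resolving a spoken deictic word such as "that" against a pointing gesture can be sketched as a timestamp alignment between the speech and gesture streams:

```python
# Illustrative sketch only: pick the pointing gesture closest in time to the
# spoken deictic word, and accept its hit-tested target if the temporal skew
# is small enough. Names, fields, and the 1.5 s threshold are assumptions.

from dataclasses import dataclass

@dataclass
class PointingGesture:
    time: float                 # seconds since utterance start
    target: str                 # object id hit-tested in the virtual scene

def resolve_deictic(word_time: float, gestures: list[PointingGesture],
                    max_skew: float = 1.5) -> str | None:
    """Return the target of the gesture closest in time to the word,
    provided it falls within max_skew seconds; otherwise None."""
    if not gestures:
        return None
    best = min(gestures, key=lambda g: abs(g.time - word_time))
    return best.target if abs(best.time - word_time) <= max_skew else None

# "Put *that* on the table", with "that" spoken at t = 0.8 s.
gestures = [PointingGesture(time=0.9, target="red_cube"),
            PointingGesture(time=3.2, target="table")]
print(resolve_deictic(0.8, gestures))   # red_cube
```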
Interfaces for mobile information access need to allow users flexibility in their choice of modes and interaction style in accordance with their preferences, the task at hand, and their physical and social environment. This paper describes the approach to multimodal language processing in MATCH (Multimodal Access To City Help), a mobile multimodal speech-pen interface to restaurant and subway i...