Search results for: multimodal input

Number of results: 250965

Journal: :The British journal of developmental psychology 2017
Jeanne L Shinskey

Before 9 months, infants use sound to retrieve a stationary object hidden by darkness but not one hidden by occlusion, suggesting auditory input is more salient in the absence of visual input. This article addresses how audiovisual input affects 10-month-olds' search for displaced objects. In AB tasks, infants who previously retrieved an object at A subsequently fail to find it after it is disp...

Journal: :EAI Endorsed Trans. Creative Technologies 2016
Nikolas Vidakis Konstantinos Kalafatis George A. Triantafyllidis

Humans interact with each other by utilizing the five basic senses as input modalities, whereas sounds, gestures, facial expressions, etc. are utilized as output modalities. Multimodal interaction is also used between humans and their surrounding environment, although enhanced with further senses such as equilibrioception (the sense of balance). Computer interfaces that are considered as a dif...

2003
Wolfgang Wahlster

We introduce the notion of symmetric multimodality for dialogue systems in which all input modes (e.g., speech, gesture, facial expression) are also available for output, and vice versa. A dialogue system with symmetric multimodality must not only understand and represent the user's multimodal input, but also its own multimodal output. We present the SmartKom system, which provides full symmetric ...
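The core idea here, that every modality usable for input is also usable for output, can be sketched as a small interface. The sketch below is a generic illustration under assumed names (Modality, understand, generate), not SmartKom's actual architecture.

```python
from abc import ABC, abstractmethod

class Modality(ABC):
    """A symmetric modality: it can both interpret user input and
    render system output (e.g. speech, gesture, facial expression)."""

    @abstractmethod
    def understand(self, signal):
        """Map a raw user signal to a semantic representation."""

    @abstractmethod
    def generate(self, meaning):
        """Map a semantic representation back to an output signal."""

class Speech(Modality):
    def understand(self, signal):
        return {"intent": "greet"}             # placeholder recognizer

    def generate(self, meaning):
        return f"spoken: {meaning['intent']}"  # placeholder synthesizer

# A symmetric dialogue system keeps one object per modality and uses the
# same set both for analysing input and for rendering output.
modalities = [Speech()]
meaning = modalities[0].understand(b"...audio...")
print(modalities[0].generate(meaning))
```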

2008
Aleksi Melto Markku Turunen Jaakko Hakulinen Anssi Kainulainen Tomi Heimonen

This paper presents results from a comparison of text and speech input methods on TravelMan, a multimodal route guidance application for mobile phones. TravelMan provides public transport information in Finland. The application includes a range of input methods, such as speech and predictive text inputs. In this paper we present results from the user evaluation focusing on entry rates of multi-...

2006
Lisa Anthony Jie Yang Kenneth R. Koedinger

Current interfaces for entering mathematical equations on computers are arguably limited and cumbersome. Mathematical notations have evolved to aid symbolic thinking and yet text-based interfaces relying on keyboard-and-mouse input do not take advantage of the natural two-dimensional aspects of mathematical equations. Due to its similarities to paper-based mathematics, pen-based handwriting inp...

Journal: :Behaviour & Information Technology 2022

Handheld mobile devices store a plethora of sensitive data, such as private emails, personal messages, photos, and location data. Authentication is essential to protect access to this data. However, the majority are currently secured by single-modal authentication schemes, which are vulnerable to shoulder surfing, smudge attacks, and thermal attacks. While some schemes protect against one of these threats, only a few address all three of them. We propos...
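Since the proposed scheme itself is cut off above, the sketch below only illustrates the general idea of multimodal authentication: two independent input modalities must both succeed before the device unlocks. The function names and the PIN/touch-gesture pairing are assumptions for illustration, not the paper's scheme.

```python
import hmac
import hashlib

# Hypothetical enrolled secrets; a real scheme would store salted hashes
# and biometric templates, not plain values.
ENROLLED_PIN_HASH = hashlib.sha256(b"4921").hexdigest()
ENROLLED_GESTURE = "L-swipe"  # simplified stand-in for a touch-gesture template

def pin_ok(entered_pin: str) -> bool:
    digest = hashlib.sha256(entered_pin.encode()).hexdigest()
    # Constant-time comparison to avoid leaking information via timing.
    return hmac.compare_digest(digest, ENROLLED_PIN_HASH)

def gesture_ok(observed_gesture: str) -> bool:
    return observed_gesture == ENROLLED_GESTURE

def unlock(entered_pin: str, observed_gesture: str) -> bool:
    # Multimodal rule: every modality must pass, so an attacker who
    # shoulder-surfs the PIN still needs the second modality.
    return pin_ok(entered_pin) and gesture_ok(observed_gesture)

print(unlock("4921", "L-swipe"))   # True
print(unlock("4921", "R-swipe"))   # False
```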

2006
Wolfgang Wahlster

Multimodal dialogue systems exploit one of the major characteristics of human-human interaction: the coordinated use of different modalities. Allowing all of the modalities to refer to and depend upon each other is a key to the richness of multimodal communication. We introduce the notion of symmetric multimodality for dialogue systems in which all input modes (e.g., speech, gesture, facial expr...

2008
Louis ten Bosch Lou Boves

Young infants learn words by detecting patterns in the speech signal and by associating these patterns with stimuli provided by non-speech modalities (such as vision). In this paper, we discuss a computational model that is able to detect and build word-like representations on the basis of multimodal input data. Learning of words (and word-like entities) takes place within a communicative loop betw...
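As a rough illustration of cross-modal word learning (not the authors' actual model, whose details are cut off above), the sketch below accumulates co-occurrence counts between recurring speech tokens and concurrently presented visual tags, so that frequently paired patterns come to act as word-like labels. The function names and the toy data are assumptions.

```python
from collections import defaultdict

# Each learning episode pairs a tokenized speech pattern with a visual tag,
# standing in for multimodal input (audio + vision).
episodes = [
    (["ba", "ba"], "ball"),
    (["da", "gi"], "doggie"),
    (["ba", "ba"], "ball"),
    (["ba", "ba"], "doggie"),   # noisy pairing
]

# Co-occurrence counts between speech patterns and visual referents.
counts = defaultdict(lambda: defaultdict(int))
for pattern, referent in episodes:
    counts[tuple(pattern)][referent] += 1

def best_referent(pattern):
    """Return the visual referent most strongly associated with a pattern."""
    assoc = counts[tuple(pattern)]
    return max(assoc, key=assoc.get) if assoc else None

print(best_referent(["ba", "ba"]))  # 'ball' wins 2-to-1 over the noisy pairing
```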

2010
Etienne de Sevin Elisabetta Bevacqua Sathish Pammi Catherine Pelachaud Marc Schröder Björn Schuller

Our aim is to build a platform allowing a user to chat with a virtual agent. The agent displays audio-visual backchannels as a response to the user's verbal and nonverbal behaviours. Our system takes as input the audio-visual signals of the user and outputs synchronously the audio-visual behaviours of the agent. In this paper, we describe the SEMAINE architecture and the data flow that goes from...
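The flow described here, user audio-visual signals in and synchronized agent backchannels out, can be sketched as a minimal pipeline. This is a generic illustration under assumed names (detect_features, choose_backchannel), not the actual SEMAINE components or message flow.

```python
def detect_features(audio_frame, video_frame):
    """Stand-in analyser: a real system would run speech and vision
    modules here (pitch, pauses, head nods, smiles, ...)."""
    return {"pause": audio_frame.endswith("."), "smiling": video_frame == "smile"}

def choose_backchannel(features):
    """Map observed user behaviour to an agent backchannel (audio + visual)."""
    if features["pause"]:
        return ("mm-hmm", "nod")          # acknowledge at a pause
    if features["smiling"]:
        return (None, "smile")            # mirror the smile silently
    return (None, None)                   # otherwise stay quiet

def run_turn(audio_frame, video_frame):
    audio_out, visual_out = choose_backchannel(detect_features(audio_frame, video_frame))
    # Both channels are emitted together so the agent's audio and visual
    # behaviours stay synchronized.
    return {"say": audio_out, "show": visual_out}

print(run_turn("so I went there.", "neutral"))  # {'say': 'mm-hmm', 'show': 'nod'}
print(run_turn("and then", "smile"))            # {'say': None, 'show': 'smile'}
```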

[Chart: number of search results per year]