Search results for: facial gesture

Number of results: 69701

Journal: Indonesian Journal of Electrical Engineering and Computer Science, 2021

The Arabic sign language (ArSL) is the natural language of the deaf community in Arabic countries. ArSL suffers from a lack of resources such as unified dictionaries and corpora. In this work, a dictionary has been constructed as part of a translation system. The words are converted into the Hamburg Notation System (HamNoSys) using the eSign editor software. HamNoSys was used to create the manual parameters (handshape, ...

1998
Nadia Magnenat Thalmann Daniel Thalmann

In this paper, we show how 3-D input devices used in virtual reality may drastically change the way animation scenes are designed. We analyze the various stages of animation production and show when and how these new concepts and devices may be involved in the creation process. We illustrate this new approach by describing the 5th Dimension animation system developed in our laboratories. We empha...

2016
Brigitte Krenn Hannes Pirker Martine Grice Paul Piwek Kees van Deemter Marc Schröder Martin Klesen Erich Gstrein

In this paper an architecture and special purpose markup language for simulated affective face-to-face communication is presented. In systems based on this architecture, users will be able to watch embodied conversational agents interact with each other in virtual locations on the internet. The markup language, or Rich Representation Language (RRL), has been designed to provide an integrated re...

2002
Anton Batliner Viktor Zeißler Elmar Nöth Heinrich Niemann

SmartKom is a multi-modal dialogue system which combines speech with gesture and facial expression. In this paper, we deal with one of the phenomena that can be observed in such elaborate systems, which we call 'off-talk', i.e., speech that is not directed to the system (speaking to oneself, speaking aside). We report the classification results of first experiments which use a ...

1999
Wendy S. Ark D. Christopher Dryer Davia J. Lu

One goal of human computer interaction (HCI) is to make an adaptive, smart computer system. This type of project could include gesture recognition, facial recognition, eye tracking, speech recognition, etc. Another non-invasive way to obtain information about a person is through touch. People use their computers to obtain, store, and manipulate data. In order to sta...

Journal: Science, Engineering and Technology, 2021

People's emotions are rarely put into words; far more often they are expressed through other cues. The key to intuiting another's feelings lies in the ability to read nonverbal channels: tone of voice, gesture, facial expression, and the like. Facial expressions used by humans convey various types of meaning in a variety of contexts. The range of meanings extends from basic, probably innate, social-emotional concepts such a...

2009
Robert Wechsler

The Oklo Phenomenon is a solo dance piece controlling sound and light with movement. The performer(s) uses eye movement, facial gesture and body movement to control stage lighting and sound. The synergistic effect of all three of these media — movement, light and sound — generates a unique relationship between the performer and his surroundings as they become both an outside force with which he...

2008
Kristiina Jokinen Costanza Navarretta Patrizia Paggio

This paper deals with the results of a machine learning experiment conducted on annotated gesture data from two case studies (Danish and Estonian). The data mainly concern facial displays, which are annotated with attributes relating to shape and dynamics, as well as communicative function. The results of the experiments show that the granularity of the attributes used seems appropriate for the ...

Journal: :Computer Vision and Image Understanding 2005
Alejandro Jaimes Nicu Sebe

In this paper, we review the major approaches to multimodal human–computer interaction, giving an overview of the field from a computer vision perspective. In particular, we focus on body, gesture, gaze, and affective interaction (facial expression recognition and emotion in audio). We discuss user and task modeling, and multimodal fusion, highlighting challenges, open issues, and emerging appl...
