Search results for: driven facial animation

Number of results: 300,139

Journal: EURASIP Journal on Audio, Speech, and Music Processing, 2013

2005
Soo-Mi Choi, Yong-Guk Kim

This paper describes a pipeline in which the user's facial expression and eye gaze are tracked, and 3D facial animation is then synthesized at the remote site based on the timing of the facial and eye movements. The system first detects a facial area within the given image and then classifies its facial expression into 7 emotional weightings. Such weighting information, tr...
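As a rough illustration of how a set of 7 emotional weightings can drive a face model, here is a minimal sketch, assuming a neutral mesh plus one displacement field per emotion; the emotion list, array shapes, and function names are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: blend a neutral face mesh with per-emotion vertex offsets
# using 7 emotional weightings (assumed 6 basic emotions + neutral).
import numpy as np

EMOTIONS = ["neutral", "happiness", "sadness", "anger",
            "fear", "disgust", "surprise"]  # assumed 7-class set

def blend_face(neutral_vertices, emotion_offsets, weights):
    """Return blended vertex positions.

    neutral_vertices : (V, 3) array of the neutral face mesh
    emotion_offsets  : dict emotion -> (V, 3) displacement from neutral
    weights          : dict emotion -> float in [0, 1]
    """
    blended = neutral_vertices.copy()
    for emotion, w in weights.items():
        blended += w * emotion_offsets[emotion]
    return blended

# Toy usage: 4 vertices, random offsets, mostly "happiness".
rng = np.random.default_rng(0)
neutral = rng.standard_normal((4, 3))
offsets = {e: 0.1 * rng.standard_normal((4, 3)) for e in EMOTIONS}
weights = {e: 0.0 for e in EMOTIONS}
weights["happiness"] = 0.8
print(blend_face(neutral, offsets, weights))
```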

2003
Yasuhiro Mukaigawa, Yuichi Nakamura, Yuichi Ohta

We propose a novel method for synthesizing facial animation with 3-D pose and expression changes. In animation synthesis, one of the most important issues has been realistic face generation. Conventional methods based on a 3-D facial model, however, have not achieved natural face synthesis that captures the details and delicate changes of facial expressions. In our method, a facial image is synthesized d...

2000
Sumedha Kshirsagar, Stephane Garchery, Nadia Magnenat-Thalmann

Robustness and speed are primary considerations when developing deformation methodologies for animatable mesh objects. The goal of this paper is to present such a robust and fast geometric mesh deformation algorithm. The algorithm is feature-point based, i.e., it can be applied to animate various mesh objects defined by the placement of their feature points. As a specific applica...
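To make the feature-point idea concrete, here is a minimal sketch of one way such a deformation can work, not the paper's actual algorithm: every mesh vertex follows the displacements of the feature points, weighted by inverse distance in the rest pose. All names and the weighting scheme are assumptions for illustration.

```python
# Hedged sketch of a feature-point-driven mesh deformation (inverse-distance
# weighting), not the published algorithm.
import numpy as np

def deform(vertices, feature_rest, feature_moved, eps=1e-8):
    """vertices: (V, 3); feature_rest/feature_moved: (F, 3)."""
    displacements = feature_moved - feature_rest              # (F, 3)
    # Inverse-distance weights from every vertex to every feature point.
    d = np.linalg.norm(vertices[:, None, :] - feature_rest[None, :, :], axis=2)
    w = 1.0 / (d + eps)
    w /= w.sum(axis=1, keepdims=True)                         # normalize per vertex
    return vertices + w @ displacements

verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.5, 1.0, 0.0]])
rest = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
moved = rest + np.array([[0.0, 0.2, 0.0], [0.0, -0.1, 0.0]])
print(deform(verts, rest, moved))
```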

2004
Thomas Fuchs, Jörg Haber, Hans-Peter Seidel

This paper introduces a versatile language for specifying facial animations. The language Mimic can be used together with any facial animation system that employs animation parameters varying over time to control the animation. In addition to the automatic alignment of individual actions, the user can fine-tune the temporal alignment of actions relative to each other. A set of predefined func...
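The following sketch illustrates the general idea of actions as time-varying animation parameters with relative temporal alignment; it is not Mimic syntax, and the class names, envelope function, and parameters are assumptions.

```python
# Hedged sketch: actions as time-varying animation-parameter curves that can be
# aligned relative to one another (illustrative only, not the Mimic language).
from dataclasses import dataclass
import math

@dataclass
class Action:
    parameter: str      # e.g. a facial animation parameter or blendshape name
    start: float        # seconds, possibly derived from another action
    duration: float
    peak: float         # maximum parameter value

    def value(self, t: float) -> float:
        """Smooth rise-and-fall envelope (one possible predefined function)."""
        if not (self.start <= t <= self.start + self.duration):
            return 0.0
        phase = (t - self.start) / self.duration
        return self.peak * math.sin(math.pi * phase)

def evaluate(actions, parameter, t):
    """Sum all active actions driving the same parameter at time t."""
    return sum(a.value(t) for a in actions if a.parameter == parameter)

smile = Action("smile", start=0.0, duration=1.0, peak=0.9)
# Align the brow raise relative to the smile: start 0.2 s after it.
brows = Action("brow_raise", start=smile.start + 0.2, duration=0.6, peak=0.5)
print(evaluate([smile, brows], "smile", 0.5))
```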

Journal: EAI Endorsed Trans. Creative Technologies, 2015
Fabrizio Nunnari, Alexis Héloir

In this paper, we present an architecture following a novel animation authoring pipeline that seamlessly supports performance capture and manual editing of key-frame animation. This pipeline allows novice users to record and author sophisticated facial animations in a fraction of the time that would be required using traditional animation tools. This approach paves the way towards novel animation...

Journal: Kybernetes, 2014
Ricardo L. Parreira Duarte, Abdennour El Rhalibi, Madjid Merabti

In this paper, we present a novel coarticulation and speech synchronization framework compliant with MPEG-4 facial animation. The system we have developed uses the MPEG-4 facial animation standard and other developments to enable the creation, editing, and playback of high-resolution 3D models and MPEG-4 animation streams, and is compatible with well-known related systems such as Greta and Xface. It sup...
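As a generic illustration of coarticulation (not the authors' framework), the sketch below blends neighbouring viseme targets through overlapping dominance windows so mouth shapes flow into each other instead of switching abruptly; the Gaussian dominance function, spread value, and viseme names are assumptions.

```python
# Hedged sketch of a generic coarticulation blend over timed viseme segments.
import math

def dominance(t, center, spread):
    """Gaussian-style dominance of one viseme at time t."""
    return math.exp(-((t - center) / spread) ** 2)

def viseme_weights(segments, t, spread=0.08):
    """segments: list of (viseme_name, start_sec, end_sec). Returns normalized weights."""
    raw = {}
    for name, start, end in segments:
        center = 0.5 * (start + end)
        raw[name] = raw.get(name, 0.0) + dominance(t, center, spread)
    total = sum(raw.values()) or 1.0
    return {name: w / total for name, w in raw.items()}

# Toy phoneme timing for the word "map": /m/, /ae/, /p/.
segments = [("viseme_m", 0.00, 0.10), ("viseme_ae", 0.10, 0.25), ("viseme_p", 0.25, 0.35)]
print(viseme_weights(segments, t=0.12))
```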

1997
Frederic Parke

Facial animation is now attracting more attention than ever before in its 25 years as an identifiable area of computer graphics. Imaginative applications of animated graphical faces are found in sophisticated human-computer interfaces, interactive games, multimedia titles, VR telepresence experiences, and, as always, in a broad variety of production animations. Graphics technologies underlying ...

2007
Qing Li, Zhigang Deng

We present a novel data-driven 3D facial motion capture editing system based on the automated construction of an orthogonal blendshape face model and constrained weight propagation, aiming to bridge the popular facial motion capture technique and the blendshape approach. In this work, the 3D facial motion capture editing problem is transformed into a blendshape animation editing problem. Given a collected...
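A minimal sketch of the core mocap-to-blendshape transform the abstract refers to: project a captured frame of vertex displacements onto a blendshape basis by least squares, then edit the animation in weight space. This does not reproduce the paper's orthogonal construction or constrained weight propagation; the basis, shapes, and sizes are toy assumptions.

```python
# Hedged sketch: fit blendshape weights to a captured frame by least squares,
# then edit in weight space (not the paper's full method).
import numpy as np

def mocap_frame_to_weights(frame_delta, blendshape_deltas):
    """frame_delta: (V*3,) captured displacement from the neutral face.
    blendshape_deltas: (K, V*3) per-blendshape displacement vectors.
    Returns K blendshape weights fitting the frame in the least-squares sense."""
    w, *_ = np.linalg.lstsq(blendshape_deltas.T, frame_delta, rcond=None)
    return w

rng = np.random.default_rng(1)
basis = rng.standard_normal((3, 12))        # 3 blendshapes, 4 vertices * xyz
true_w = np.array([0.7, 0.1, 0.0])
frame = basis.T @ true_w                    # synthetic "captured" frame
w = mocap_frame_to_weights(frame, basis)
print(np.round(w, 3))                       # recovers ~[0.7, 0.1, 0.0]

# Editing then happens in weight space, e.g. scaling one recovered weight:
w[0] *= 1.2
edited_frame = basis.T @ w
```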

1999
I-Chen Lin, Cheng-Sheng Hung, Tzong-Jer Yang, Ming Ouhyoung

In this paper, a lifelike talking head system is proposed. The talking head, which is driven by speaker-independent speech recognition, requires only a single face image to synthesize lifelike facial expressions. The proposed system uses speech recognition engines to obtain utterances and their corresponding time stamps from the speech data. Associated facial expressions can be fetched from an expression...
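The sketch below shows the general driving loop such a system implies: recognizer output with time stamps is mapped to pre-built mouth expressions that are keyed at those times. The phoneme-to-expression table and all names are illustrative assumptions, not the paper's design.

```python
# Hedged sketch: turn timed speech-recognition output into expression keyframes.
from dataclasses import dataclass

@dataclass
class RecognizedUnit:
    phoneme: str
    start: float   # seconds
    end: float

# Assumed lookup table from phoneme to an entry in an expression bank.
PHONEME_TO_EXPRESSION = {"m": "lips_closed", "ae": "mouth_open_wide", "p": "lips_closed"}

def build_keyframes(units):
    """Turn recognizer output into (time, expression) keyframes for playback."""
    keys = []
    for u in units:
        expr = PHONEME_TO_EXPRESSION.get(u.phoneme, "neutral")
        keys.append((u.start, expr))
    keys.append((units[-1].end, "neutral"))  # relax to neutral at the end
    return keys

units = [RecognizedUnit("m", 0.00, 0.10),
         RecognizedUnit("ae", 0.10, 0.25),
         RecognizedUnit("p", 0.25, 0.35)]
print(build_keyframes(units))
```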

[Chart: number of search results per publication year]