An abstract muscle model for three-dimensional facial animations
Authors: Alan WATT
Abstract
Muscle-based model – The abstract muscle-based model was developed by Waters in 1987. It is based on a coarse anatomical model: the deformation of the polygon mesh (equivalent to the skin) is produced through two types of abstract muscle. The linear muscle, which pulls the mesh, is represented by a point of attachment and a vector. The sphincter muscle, which squeezes it, is represented by an ellipse. Neither of them is connected to the polygon mesh, and their action is defined by a zone of influence. The advantage of this technique is that the mesh-deformation system is independent of the topology of the face. The abstract muscle-based model has also been used in conjunction with B-Spline patches by Carol Wang to animate a face (Figure 7). This face is controlled by 46 muscles, 23 for each side.

Figure 7, taken from [CWww]: abstract muscle-based model controlling B-Spline patches. Expressions of a) sadness, b) smirk, c) fear and d) disgust.

A more detailed description of the abstract muscle-based model is given in the next section.

4 The abstract muscle-based model

This whole section is dedicated to the abstract muscle-based model, since the aim of the project is to study the strengths and shortcomings of this model for producing 3D facial animations. The abstract muscle-based model, first reported in 1987 by Waters [KW87], is one of the most popular models nowadays. It is based on facial anatomy in that it uses ...
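To make the mechanism concrete, the fragment below is a minimal NumPy sketch of the two abstract muscle types described above. It is an illustrative approximation rather than Waters' exact 1987 equations: the parameter names, the cosine falloff curves, and the rule of moving each vertex a fraction of the way toward the muscle head (or ellipse centre) are assumptions chosen for clarity.

# Illustrative sketch of Waters-style abstract muscles (not the exact 1987 equations).
# The mesh is assumed to be an (N, 3) NumPy array of vertex positions; neither muscle
# is attached to the mesh, each only defines a zone of influence over it.
import numpy as np

def linear_muscle(verts, head, tail, influence_angle, r_start, r_end, k):
    """Pull vertices toward the muscle head (the attachment point).

    head, tail      -- 3-vectors; the muscle vector runs from tail to head
    influence_angle -- half-angle (radians) of the cone of influence around the head
    r_start, r_end  -- full effect within r_start of the head, fading to zero at r_end
    k               -- contraction factor in [0, 1]
    """
    out = verts.copy()
    axis = (tail - head) / np.linalg.norm(tail - head)
    for i, p in enumerate(verts):
        d = p - head
        dist = np.linalg.norm(d)
        if dist < 1e-9 or dist > r_end:
            continue                                   # outside the radial zone
        angle = np.arccos(np.clip(np.dot(d / dist, axis), -1.0, 1.0))
        if angle > influence_angle:
            continue                                   # outside the angular cone
        ang_w = np.cos(angle / influence_angle * np.pi / 2.0)   # fades toward the cone edge
        rad_w = (1.0 if dist <= r_start
                 else np.cos((dist - r_start) / (r_end - r_start) * np.pi / 2.0))
        out[i] = p - k * ang_w * rad_w * d             # move a fraction of the way to the head
    return out

def sphincter_muscle(verts, centre, semi_x, semi_y, k):
    """Squeeze vertices toward the centre of an ellipse (e.g. around the mouth)."""
    out = verts.copy()
    for i, p in enumerate(verts):
        d = p - centre
        # normalised elliptical distance: 0 at the centre, 1 on the ellipse boundary
        e = np.sqrt((d[0] / semi_x) ** 2 + (d[1] / semi_y) ** 2)
        if e >= 1.0:
            continue                                   # outside the ellipse: no effect
        out[i] = p - k * (1.0 - e) * d                 # contract toward the centre
    return out

A call such as linear_muscle(mesh, head, tail, np.radians(40.0), 2.0, 7.0, 0.3) deforms only the vertices that fall inside the cone of influence, which is why this kind of muscle system is independent of the topology of the face.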
Similar resources
Lifelike Talking Faces for Interactive Services
Lifelike talking faces for interactive services are an exciting new modality for man–machine interactions. Recent developments in speech synthesis and computer animation enable the real-time synthesis of faces that look and behave like real people, opening opportunities to make interactions with computers more like face-to-face conversations. This paper focuses on the technologies for creating ...
Sight and sound: generating facial expressions and spoken intonation from context
This paper presents an implemented system for automatically producing prosodically appropriate speech and corresponding facial expressions for animated, three-dimensional agents that respond to simple database queries. Unlike previous text-to-facial animation approaches, the system described here produces synthesized speech and facial animations entirely from scratch, starting with semantic rep...
A tool for designing MPEG-4 compliant expressions and animations on VRML cartoon-faces
We present a design environment which allows the generation, modification, and visual speech animation of 3D cartoon-like faces – Tinky. Our underlying face model is not based on a set of independent parameters that control specific abstract muscle emulations but is directed by a set of objects representing the elements of the face. In order to provide an easy to use authoring system producing ...
Repurposing hand animation for interactive applications
In this paper we describe a method for automatically animating interactive characters based on an existing corpus of key-framed hand-animation. The method learns separate low-dimensional embeddings for subsets of the hand-animation corresponding to different semantic labels. These embeddings use the Gaussian Process Latent Variable Model to map high-dimensional rig control parameters to a three...
Publication date: 2001