Search results for: articulatory accuracy

Number of results: 338,959

2009
Lan Wang, Hui Chen, JianJun Ouyang

In this paper we present a data-driven 3D talking head system using facial video and an X-ray film database for speech research. In order to construct a database recording the three-dimensional positions of articulators at the phoneme level, the feature points of articulators were defined and labeled in facial and X-ray images for each English phoneme. Dynamic displacement-based deformations were us...

2015
Ganesh Sivaraman, Vikramjit Mitra, Mark K. Tiede, Elliot Saltzman, Louis Goldstein, Carol Y. Espy-Wilson

Speech acoustic patterns vary significantly as a result of coarticulation and lenition processes that are shaped by segmental context or by performance factors such as production rate and degree of casualness. The resultant acoustic variability continues to offer serious challenges for the development of automatic speech recognition (ASR) systems. Articulatory phonology provides a formalism to ...

2005
Alexander Gutkin, David E. Gay

A formal structural representation of speech consistent with the principles of combinatorial structure theory is presented in this paper. The representation is developed within the Evolving Transformation System (ETS) formalism and encapsulates speech processes at the articulatory level. We show how the class structure of several consonantal phonemes of English can be expressed with the help of...

Journal: Computer Speech & Language, 2016
Ming Li, Jangwon Kim, Adam C. Lammert, Prasanta Kumar Ghosh, Vikram Ramanarayanan, Shrikanth S. Narayanan

We propose a practical feature-level and score-level fusion approach, combining acoustic and estimated articulatory information, for both text-independent and text-dependent speaker verification. From a practical point of view, we study how to improve speaker verification performance by combining dynamic articulatory information with the conventional acoustic features. On text-independent spe...
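
The snippet above only names the two fusion strategies; as a rough illustration (not the authors' implementation), the sketch below shows what feature-level fusion (concatenating frame-aligned acoustic and estimated articulatory vectors) and score-level fusion (linearly weighting per-system verification scores) can look like. The function names, dimensions, and the weighting constant are assumptions; NumPy is assumed as the only dependency.

```python
import numpy as np

def feature_level_fusion(acoustic: np.ndarray, articulatory: np.ndarray) -> np.ndarray:
    """Concatenate frame-synchronous acoustic and articulatory features.

    acoustic:     (n_frames, n_acoustic_dims), e.g. MFCCs
    articulatory: (n_frames, n_articulatory_dims), e.g. estimated vocal-tract variables
    """
    assert acoustic.shape[0] == articulatory.shape[0], "streams must be frame-aligned"
    return np.hstack([acoustic, articulatory])

def score_level_fusion(acoustic_score: float, articulatory_score: float,
                       weight: float = 0.7) -> float:
    """Linearly combine the verification scores of two separately trained systems."""
    return weight * acoustic_score + (1.0 - weight) * articulatory_score

# Toy usage: 100 frames of 13-dim MFCCs and 6-dim articulatory trajectories.
mfcc = np.random.randn(100, 13)
tract_vars = np.random.randn(100, 6)
fused = feature_level_fusion(mfcc, tract_vars)   # shape (100, 19)
combined = score_level_fusion(1.2, 0.8)          # a single fused trial score
```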

2016
Basil Abraham, Srinivasan Umesh, Neethu Mariam Joy

Articulatory features provide robustness to speaker and environment variability by incorporating speech production knowledge. Pseudo articulatory features are a way of extracting articulatory features using articulatory classifiers trained from speech data. One of the major problems faced in building articulatory classifiers is the requirement of speech data aligned in terms of articulatory fea...
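
As a rough sketch of the general idea behind pseudo articulatory features (not this paper's specific method): classifiers are trained on acoustic frames labeled with articulatory feature classes, and their posterior probabilities are then used as "pseudo" articulatory features alongside the acoustic ones. The class inventory, dimensions, and the scikit-learn classifier below are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy stand-ins: acoustic frames (e.g. MFCCs) and frame-level labels for one
# articulatory feature group, such as place of articulation.
X_train = np.random.randn(2000, 13)
y_train = np.random.randint(0, 5, size=2000)   # 5 hypothetical place classes

# One classifier per articulatory feature group; its posteriors become pseudo-AFs.
place_clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=200)
place_clf.fit(X_train, y_train)

X_test = np.random.randn(300, 13)
pseudo_af_place = place_clf.predict_proba(X_test)   # (300, 5) posterior features

# In practice the posteriors from several such classifiers (place, manner,
# voicing, ...) are concatenated, often together with the acoustic features,
# and passed on to the recognizer.
pseudo_features = np.hstack([X_test, pseudo_af_place])
```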

2004
Olov Engwall

Magnetic resonance images of nine subjects have been collected to determine scaling factors that can adapt a 3D tongue model to new subjects. The aim is to define a few simple measures that will allow for an automatic, but accurate, scaling of the model. The scaling should be automatic in order to be useful in an application for articulation training, in which the model must replicate the use...

2013
Javier Mikel Olaso, M. Inés Torres

Phonological feature space has been proposed to represent acoustic models for automatic speech recognition (ASR) tasks. The most successful methods to detect articulatory gestures from the speech signal are based on Time Delay Neural Networks (TDNN). Stochastic Finite-State Automata have been effectively used in many speech-input natural language tasks. They are versatile models with well estab...

2011
Yurie Iribe, Silasak Manosavanh, Kouichi Katsurada, Ryoko Hayashi, Chunyue Zhu, Tsuneo Nitta

We automatically generate CG animations that express the pronunciation movements of speech through articulatory feature (AF) extraction, to support pronunciation learning. The proposed system uses magnetic resonance imaging (MRI) data to map AFs to the coordinate values needed to generate the animations. By using MRI data, we can observe the movements of the tongue, palate, and pharynx in detail w...

Journal: Proceedings of the Linguistic Society of America, 2023

In this study, I examined stress in speech production within the framework of Articulatory Phonology. Specifically, I tested the hypothesis that stress could be analyzed as a prosodic gesture. Using articulatory data from an English corpus, I found that the CV lag, the gestural lag between consonant and vowel, of stressed syllables is significantly larger, in terms of both duration and proportion, than that of unstressed syllables. I also vo...
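
CV lag here is the interval between the onsets of the consonant and vowel gestures in a syllable. Purely as an illustration of how such a measure could be computed and compared across stress conditions (not the study's actual analysis pipeline), the sketch below derives a raw lag and a proportional lag per syllable and contrasts stressed and unstressed means; the record fields and toy values are hypothetical.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Syllable:
    c_onset: float      # consonant gesture onset time (s)
    v_onset: float      # vowel gesture onset time (s)
    v_offset: float     # vowel gesture offset time (s), used for normalization
    stressed: bool

def cv_lag(s: Syllable) -> float:
    """Raw CV lag: time from consonant gesture onset to vowel gesture onset."""
    return s.v_onset - s.c_onset

def cv_lag_proportion(s: Syllable) -> float:
    """CV lag as a proportion of the consonant-onset-to-vowel-offset interval."""
    return cv_lag(s) / (s.v_offset - s.c_onset)

# Toy corpus of measured syllables (times in seconds).
corpus = [
    Syllable(0.00, 0.08, 0.30, stressed=True),
    Syllable(0.00, 0.05, 0.22, stressed=False),
    Syllable(0.00, 0.09, 0.33, stressed=True),
    Syllable(0.00, 0.04, 0.20, stressed=False),
]

for label, flag in (("stressed", True), ("unstressed", False)):
    group = [s for s in corpus if s.stressed is flag]
    print(label,
          "mean lag:", round(mean(cv_lag(s) for s in group), 3),
          "mean proportion:", round(mean(cv_lag_proportion(s) for s in group), 3))
```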

Journal: Journal of Phonetics, 2013
Hosung Nam, Louis M. Goldstein, Sara Giulivi, Andrea G. Levitt, Douglas H. Whalen

There is a tendency for spoken consonant-vowel (CV) syllables, in babbling in particular, to show preferred combinations: labial consonants with central vowels, alveolars with front, and velars with back. This pattern was first described by MacNeilage and Davis, who found the evidence compatible with their "frame-then-content" (F/C) model. F/C postulates that CV syllables in babbling are produc...

Chart: number of search results per year