Search results for: articulatory accuracy
Number of results: 338,959
This paper presents a method for automatically animating the articulatory tongue model of a reference speaker from ultrasound images of the tongue of another speaker. This work is developed in the context of speech therapy based on visual biofeedback, where a speaker is provided with visual information about his/her own articulation. In our approach, the feedback is delivered via an articulator...
A novel approach to articulatory-acoustic feature extraction has been developed for enhancing the accuracy of classification associated with place and manner of articulation information. This “elitist” approach is tested on a corpus of spontaneous Dutch using two different systems, one trained on a subset of the same corpus, the other trained on a corpus from a different language (American Engl...
The area function of the vocal tract in all of its spatial detail is not directly computable from the speech signal. But is partial, yet phonetically distinctive, information about articulation recoverable from the acoustic signal that arrives at the listener's ear? The answer to this question is important for phonetics, because various theories of speech perception predict different answers. S...
In English, the feature [tense] or [ATR] (advanced tongue root) has been used to encompass the vowels that are produced on the extreme edges of the acoustic and articulatory spaces. The lax counterparts to these vowels are generally produced closer to the center of those spaces. The present study is based on the hypothesis that the extreme positioning in both articulatory and acoustic space evo...
Despite large-scale research, development of robust machines for imitation and inversion of human speech into articulatory movements has remained an unsolved problem. We propose a set of principles that can partially explain real infants’ speech acquisition processes and the emergence of imitation skills and demonstrate a simulation where a learning virtual infant (LeVI) learns to invert and im...
Although still in the experimental stage, articulation-based silent speech interfaces may have significant potential for facilitating oral communication in persons with voice and speech problems. An articulation-based silent speech interface converts articulatory movement information into audible words. The complexity of the speech production mechanism (e.g., coarticulation) makes the conversion a formid...
Semantic and phonological contributions to short-term repetition and long-term cued sentence recall.
The function of verbal short-term memory is supported not only by the phonological loop, but also by semantic resources that may operate on both short and long time scales. Elucidation of the neural underpinnings of these mechanisms requires effective behavioral manipulations that can selectively engage them. We developed a novel cued sentence recall paradigm to assess the effects of two factor...
In German intonational phonology, two different approaches exist for modeling nuclear falling accents (cf. [6, 8] vs. GToBI [11]). While [6] and [8] assume only left-headed pitch accents, the GToBI system allows for left- and right-headedness. Consequently, the former approach represents one simple falling accent (H*L) in the tonal grammar, while GToBI distinguishes between two falling accents, ...
Aiming to detect pronunciation errors produced by second language learners and to provide corrective feedback related to articulation, we address effective articulatory models based on deep neural networks (DNNs). Articulatory attributes are defined for manner and place of articulation. In order to efficiently train these models on non-native speech without such data, which is difficult to c...
This paper presents a photorealistic visual speech synthesis method based on an audio-visual articulatory dynamic Bayesian network model (AF_AVDBN), in which the maximum asynchronies between the articulatory features, such as lips, tongue, and glottis/velum, can be controlled. Perceptual linear prediction (PLP) features from the audio speech and active appearance model (AAM) features from mouth ...