Search results for: emotional speech database
Number of results: 478,786
In today’s affective databases speech turns are often labelled on a continuous scale for emotional dimensions such as valence or arousal to better express the diversity of human affect. However, applications like virtual agents usually map the detected emotional user state to rough classes in order to reduce the multiplicity of emotion dependent system responses. Since these classes often do no...
The paper discusses the use of linear transformations of Hidden Markov Models, normally employed for speaker and environment adaptation, as a way of extracting the emotional components from speech. A constrained version of the Maximum Likelihood Linear Regression (CMLLR) transformation is used as a feature for classifying normal versus aroused emotional states. We present a procedure of incre...
Background and Aim: The applicability of any technology to a given field is determined by identifying the advantages and disadvantages of the system in that field. The aim of this study is to show the advantages and limitations of using speech recognition systems in health care and to provide practical solutions for improving the acceptability of such systems in that field. Materials and M...
We explore the construction of a system to classify the dominant emotion in spoken utterances, in an environment where resources such as labelled utterances are scarce. The research addresses two issues relevant to detecting emotion in speech: (a) compensating for the lack of resources and (b) finding features of speech which best characterise emotional expression in the cultural environment bei...
Processing emotional speech is an important issue for speech information science, and many studies have addressed it. However, we still lack clear knowledge of which acoustic features, apart from the fundamental frequency, are crucial for emotional speech, and of how humans manipulate their speech organs to produce it. In this study, we investigate the acoustic f...
The present work studies the effect of emotional speech on a smart-home application. Specifically, we evaluate the recognition performance of the automatic speech recognition component of a smart-home dialogue system on various categories of emotional speech. The experimental results reveal that the word recognition rate for emotional speech varies significantly across emotion categories.
Perceived vocal features of emotional speech have rarely been investigated. In this contribution, a procedure for collecting reliable judgments on the perception of voice characteristics in emotional speech is presented. Relations between acoustic parameters and perceived features of speech are described. Some benefits and potential drawbacks of studying perceived vocal features in emotion...
For many years, speech has been the most natural and efficient means of information exchange for human beings. With the advancement of technology and the prevalence of computer usage, the design and production of speech recognition systems have attracted considerable research attention. Among these, lip-reading techniques face many challenges for speech recognition, one of the challenges b...
We present the current state of our development of animated agents for affective dialogue systems. A new set of tools is under development to support the creation of animated characters compatible with the MPEG-4 facial animation standard. Furthermore, we have collected a multimodal expressive speech database including video, audio, and 3D point motion registration. One of the obje...