Search results for: emotional speech database
Number of results: 478786
Recent research in speech synthesis has mainly focused on naturalness, and emotional speech synthesis has become one of the highlighted research topics. Although quite a few studies on emotional speech in English or Japanese have been reported, studies in Korean can seldom be found. This paper presents an analysis of emotional speech in Korean. Emotional speech features related to huma...
Effects of emotion have been reported as early as 20 ms after auditory stimulus onset for negative valence, and bivalent effects between 30 and 130 ms. To understand how emotional state influences the listener's brainstem evoked responses to speech, subjects looked at emotion-evoking pictures while listening to an unchanging auditory stimulus ("danny"). The pictures (positive, negative, or neu...
We introduce a novel emotion recognition approach that integrates ranking models. The approach is speaker independent, yet it is designed to exploit information from utterances of the same speaker in the test set before making predictions. It achieves much higher precision in identifying emotional utterances than a conventional SVM classifier. Furthermore, we test several possibilities for co...
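As a hedged illustration of the ranking idea in this abstract, the sketch below contrasts a conventional SVM classifier with a pairwise (RankSVM-style) ranking reduction trained on feature differences. The data, feature dimensions, and top-k decision rule are synthetic assumptions for the demo, not the paper's actual model.

```python
# Minimal sketch: a pairwise (RankSVM-style) ranking model for scoring
# how "emotional" utterances are, next to a plain SVM baseline.
# All data here is synthetic; the abstract's exact model is not reproduced.
import numpy as np
from sklearn.svm import LinearSVC, SVC

rng = np.random.default_rng(0)

# Synthetic acoustic feature vectors: 100 utterances, 10 features each.
# Label 1 = emotional, 0 = neutral (assumed labeling scheme).
X = rng.normal(size=(100, 10))
y = (X[:, 0] + 0.5 * rng.normal(size=100) > 0).astype(int)

# Conventional baseline: classify each utterance independently.
clf = SVC(kernel="linear").fit(X, y)
print("SVM baseline train acc:", clf.score(X, y))

# Ranking reduction: train a linear scorer on pairwise differences
# (an emotional utterance minus a neutral one should score > 0).
pos, neg = X[y == 1], X[y == 0]
pairs = np.array([p - n for p in pos for n in neg])
diffs = np.vstack([pairs, -pairs])
signs = np.hstack([np.ones(len(pairs)), -np.ones(len(pairs))]).astype(int)
ranker = LinearSVC(C=1.0).fit(diffs, signs)

# Score utterances; within one test speaker, rank by score and flag the
# top-scoring ones as emotional (a speaker-relative decision, the kind of
# within-speaker information the abstract says it exploits).
scores = X @ ranker.coef_.ravel()
top_k = np.argsort(scores)[::-1][:10]
print("utterances ranked most emotional:", top_k)
```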
Since emotions are expressed through a combination of verbal and non-verbal channels, a joint analysis of speech and gestures is required to understand expressive human communication. To facilitate such investigations, this paper describes a new corpus named the “interactive emotional dyadic motion capture database” (IEMOCAP), collected by the Speech Analysis and Interpretation Laboratory (SAIL...
This paper analyzes features of timing, amplitude, pitch, and formant structure for four emotions: happiness, anger, surprise, and sorrow. Through comparison with non-emotional speech signals, we summarize the distribution patterns of these features across different emotional speech. Nine emotional features were extracted from emotional speech for recognizing emotion. We introduce th...
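The sketch below shows one way to extract the kinds of features this abstract names (pitch, amplitude, formants) using librosa. The synthetic "vowel" signal, resonance settings, and LPC order are assumptions standing in for real emotional speech; these are not the paper's nine specific features.

```python
# Sketch of pitch, amplitude, and formant extraction with librosa --
# illustrative of the feature types compared across emotions above.
import numpy as np
import librosa
import scipy.signal

sr = 16000
# Crude synthetic "vowel": 120 Hz pulse train shaped by two resonances
# (a stand-in for a real recording; replace with librosa.load("file.wav")).
t = np.arange(sr) / sr
source = scipy.signal.square(2 * np.pi * 120 * t)
for freq, bw in [(700, 100), (1200, 120)]:   # assumed formant targets (Hz)
    r = np.exp(-np.pi * bw / sr)
    theta = 2 * np.pi * freq / sr
    b, a = [1.0], [1.0, -2 * r * np.cos(theta), r * r]
    source = scipy.signal.lfilter(b, a, source)
y = source / np.abs(source).max()

# Pitch contour (fundamental frequency) via probabilistic YIN.
f0, voiced, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)
print("median F0 (Hz):", np.nanmedian(f0))

# Amplitude: frame-level RMS energy.
rms = librosa.feature.rms(y=y)[0]
print("mean RMS:", rms.mean())

# Formants: roots of an LPC polynomial fitted to the signal.
lpc = librosa.lpc(y, order=8)
roots = [z for z in np.roots(lpc) if np.imag(z) > 0]
formants = sorted(np.angle(roots) * sr / (2 * np.pi))
print("formant estimates (Hz):", [round(f) for f in formants[:3]])
```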
Accurate gender classification is useful in speech and speaker recognition as well as speech emotion classification, because better performance has been reported when separate acoustic models are employed for males and females. Gender classification also appears in face recognition, video summarization, human-robot interaction, etc. Although gender classification is rather mature in appli...
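A minimal sketch of the "separate acoustic models per gender" idea mentioned above: fit one Gaussian mixture model per gender and classify by whichever model assigns higher likelihood. The two-dimensional features and their distributions are synthetic assumptions, not drawn from any real corpus.

```python
# Per-gender acoustic models: one GMM per gender, pick the higher
# log-likelihood. Features are synthetic stand-ins (think MFCC means
# plus a pitch-like dimension), assumed for this demo only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Synthetic training features: female voices drawn around a higher
# "pitch-like" first dimension than male voices (demo assumption).
female = rng.normal(loc=[200.0, 0.0], scale=[25.0, 1.0], size=(300, 2))
male = rng.normal(loc=[120.0, 0.0], scale=[20.0, 1.0], size=(300, 2))

gmm_f = GaussianMixture(n_components=2, random_state=0).fit(female)
gmm_m = GaussianMixture(n_components=2, random_state=0).fit(male)

def classify_gender(x):
    """Return the gender whose model gives the higher log-likelihood."""
    x = np.atleast_2d(x)
    lf = gmm_f.score_samples(x)[0]
    lm = gmm_m.score_samples(x)[0]
    return "female" if lf > lm else "male"

print(classify_gender([190.0, 0.2]))   # expected: female
print(classify_gender([115.0, -0.1]))  # expected: male
```

Once the gender is decided, a downstream system would route the utterance to gender-specific emotion or speaker models, which is the performance benefit the abstract refers to.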
Speech parameters originating from the voice source and vocal tract were analyzed to find acoustic correlates of dimensional descriptions of emotional states. To best achieve this goal, we adopted the Utsunomiya University Spoken Dialogue Database, which was designed for studies on paralinguistic information in expressive conversational speech. Analyses for four female and two male speakers showed:...
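To make the type of analysis concrete, here is a hedged sketch of correlating per-utterance acoustic parameters with dimensional (valence/arousal) ratings via Pearson correlation. All values are synthetic; the Utsunomiya University database itself is not used, and the built-in F0/energy-arousal link is a demo assumption.

```python
# Sketch of finding acoustic correlates of dimensional emotion ratings:
# correlate per-utterance acoustic parameters with listener valence and
# arousal scores. Everything below is synthetic.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n = 80  # utterances

# Assumed acoustic parameters per utterance: mean F0 (Hz) and RMS energy.
mean_f0 = rng.normal(180, 30, n)
rms_energy = rng.normal(0.1, 0.02, n)

# Synthetic ratings built so arousal tracks F0 and energy (an assumption
# for this demo); valence has no built-in acoustic dependence.
arousal = 0.02 * mean_f0 + 20 * rms_energy + rng.normal(0, 0.5, n)
valence = rng.normal(0, 1, n)

for name, param in [("mean F0", mean_f0), ("RMS energy", rms_energy)]:
    for dim, rating in [("arousal", arousal), ("valence", valence)]:
        r, p = pearsonr(param, rating)
        print(f"{name} vs {dim}: r={r:+.2f} (p={p:.3f})")
```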