Search results for: emotional speech database

Number of results: 478786

2004
Sang-Jin Kim Kwang-Ki Kim Minsoo Hahn

Recent research in speech synthesis is mainly focused on naturalness, and emotional speech synthesis has become one of the highlighted research topics. Although quite a few studies on emotional speech in English or Japanese have been reported, studies in Korean can seldom be found. This paper presents an analysis of emotional speech in Korean. Emotional speech features related to huma...

Journal: Neuroscience Letters, 2010
Jade Q Wang Trent Nicol Erika Skoe Mikko Sams Nina Kraus

Effects of emotion have been reported as early as 20 ms after auditory stimulus onset for negative valence, and bivalent effects between 30 and 130 ms. To understand how emotional state influences the listener's brainstem evoked responses to speech, subjects looked at emotion-evoking pictures while listening to an unchanging auditory stimulus (danny). The pictures (positive, negative, or neu...

2012
Houwei Cao Ragini Verma Ani Nenkova

We introduce a novel emotion recognition approach that integrates ranking models. The approach is speaker independent, yet it is designed to exploit information from utterances of the same speaker in the test set before making predictions. It achieves much higher precision in identifying emotional utterances than a conventional SVM classifier. Furthermore, we test several possibilities for co...
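
A minimal sketch of the ranking-versus-SVM contrast this abstract describes, not the authors' actual method: a conventional SVM classifies each utterance independently, while a RankSVM-style pairwise reduction learns to score emotional utterances above neutral ones. All features and labels below are random stand-ins for real acoustic data.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))        # acoustic feature vectors (placeholder)
y = rng.integers(0, 2, size=200)      # 1 = emotional utterance, 0 = neutral

# Conventional SVM baseline: classify each utterance independently.
svm = LinearSVC(dual=False).fit(X, y)

# Ranking reduction: learn from pairwise differences so emotional
# utterances receive higher scores than neutral ones.
pos, neg = X[y == 1], X[y == 0]
pairs = np.array([p - n for p in pos for n in neg])
labels = np.ones(len(pairs), dtype=int)
# Mirror the pairs so the ranking problem contains both classes.
pairs = np.vstack([pairs, -pairs])
labels = np.concatenate([labels, np.zeros_like(labels)])
ranker = LinearSVC(dual=False).fit(pairs, labels)

# Rank all of a speaker's test utterances along the learned scoring direction.
scores = X @ ranker.coef_.ravel()
top_k = np.argsort(scores)[::-1][:10]  # utterances most likely to be emotional
```

Ranking over a speaker's utterances is what lets the method exploit within-speaker information at test time: only the relative ordering matters, not an absolute decision threshold.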

Journal: The Journal of the Acoustical Society of America, 1996

Journal: Language Resources and Evaluation, 2008
Carlos Busso Murtaza Bulut Chi-Chun Lee Abe Kazemzadeh Emily Mower Provost Samuel Kim Jeannette N. Chang Sungbok Lee Shrikanth S. Narayanan

Since emotions are expressed through a combination of verbal and non-verbal channels, a joint analysis of speech and gestures is required to understand expressive human communication. To facilitate such investigations, this paper describes a new corpus named the “interactive emotional dyadic motion capture database” (IEMOCAP), collected by the Speech Analysis and Interpretation Laboratory (SAIL...

2000
Li Zhao Wei Lu Ye Jiang Zhenyang Wu

This paper analyzes the time, amplitude, pitch, and formant-structure features involved in four emotions: happiness, anger, surprise, and sorrow. Through comparison with non-emotional speech signals, we summarize the distribution patterns of these features across different emotional speech. Nine emotional features were extracted from emotional speech for recognizing emotion. We introduce th...
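
A minimal sketch, in the spirit of the features named above, of extracting amplitude, pitch, and rough formant estimates from a speech file. "speech.wav" is a hypothetical input, and this is not the paper's exact nine-feature set.

```python
import numpy as np
import librosa

y, sr = librosa.load("speech.wav", sr=16000)

# Amplitude: short-time RMS energy.
rms = librosa.feature.rms(y=y)[0]

# Pitch: fundamental frequency track via the pYIN algorithm.
f0, voiced_flag, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)

# Formants: roots of an LPC polynomial fitted to one (assumed voiced) frame.
frame = y[8000:8000 + 512] * np.hamming(512)
a = librosa.lpc(frame, order=12)
roots = [r for r in np.roots(a) if np.imag(r) > 0]
freqs = sorted(np.angle(r) * sr / (2 * np.pi) for r in roots)
formants = [f for f in freqs if f > 90][:2]  # crude F1, F2 estimates in Hz

features = {
    "mean_rms": float(np.mean(rms)),
    "mean_f0": float(np.nanmean(f0)),   # NaNs mark unvoiced frames
    "f1_f2": formants,
}
```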

2009
M. H. Sedaaghi

Accurate gender classification is useful in speech and speaker recognition as well as in speech emotion classification, because better performance has been reported when separate acoustic models are employed for males and females. Gender classification also appears in face recognition, video summarization, human-robot interaction, etc. Although gender classification is rather mature in appli...
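
A minimal sketch of the two-stage idea motivated in this abstract: classify gender first, then route the utterance to a gender-specific emotion model. All data here are random placeholders, and the SVMs stand in for whatever acoustic models one would actually use.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 13))           # e.g. per-utterance MFCC means (placeholder)
gender = rng.integers(0, 2, size=300)    # 0 = female, 1 = male
emotion = rng.integers(0, 4, size=300)   # four emotion classes

gender_clf = SVC().fit(X, gender)

# Separate emotion models per gender, as motivated in the abstract.
emotion_clf = {
    g: SVC().fit(X[gender == g], emotion[gender == g]) for g in (0, 1)
}

def predict_emotion(x):
    """Route an utterance to the emotion model of its predicted gender."""
    g = int(gender_clf.predict(x.reshape(1, -1))[0])
    return int(emotion_clf[g].predict(x.reshape(1, -1))[0])
```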

2007
Hiroki Mori Hideki Kasuya

Speech parameters originating from the voice source and the vocal tract were analyzed to find acoustic correlates of dimensional descriptions of emotional states. To best achieve this goal, we adopted the Utsunomiya University Spoken Dialogue Database, which was designed for studies on paralinguistic information in expressive conversational speech. Analyses for four female and two male speakers showed:...
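
A minimal sketch of how one might look for acoustic correlates of dimensional emotion ratings, assuming per-utterance acoustic parameters and listener ratings are already available as arrays (random placeholders below, not the study's data).

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n = 150
params = {                                # per-utterance acoustic parameters
    "mean_f0": rng.normal(200, 30, n),
    "f0_range": rng.normal(80, 20, n),
    "rms_energy": rng.normal(0.1, 0.02, n),
}
ratings = {                               # dimensional listener ratings
    "activation": rng.normal(0, 1, n),
    "valence": rng.normal(0, 1, n),
}

# Correlate every acoustic parameter with every emotion dimension.
for dim, rating in ratings.items():
    for name, values in params.items():
        r, p = pearsonr(values, rating)
        print(f"{dim:10s} ~ {name:10s}: r = {r:+.2f} (p = {p:.3f})")
```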

[Chart: number of search results per year; click the chart to filter results by publication year]