Search results for: emotional speech
Number of results: 220,212
The growing interest in emotional speech synthesis calls for effective emotion conversion techniques to be explored. This paper estimates the relevance of three speech components (spectral envelope, residual excitation and prosody) for synthesizing identifiable emotional speech, in order to be able to customize the voice conversion techniques to the specific characteristics of each emotion. The ana...
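For readers who want to experiment with this kind of three-way decomposition, here is a minimal sketch, assuming the WORLD vocoder via the pyworld package (not the paper's own method): the F0 contour stands in for prosody, the smooth spectral envelope for the spectral component, and the aperiodicity for the excitation-related residual. The function names decompose and recombine are illustrative.

```python
import numpy as np
import pyworld
import soundfile as sf

def decompose(wav_path):
    # Load mono speech; pyworld expects float64 samples.
    x, fs = sf.read(wav_path)
    x = np.ascontiguousarray(x, dtype=np.float64)
    f0, t = pyworld.harvest(x, fs)          # F0 contour (prosody-related)
    sp = pyworld.cheaptrick(x, f0, t, fs)   # smooth spectral envelope
    ap = pyworld.d4c(x, f0, t, fs)          # aperiodicity (excitation-related)
    return fs, f0, sp, ap

def recombine(fs, f0, sp, ap):
    # Components taken from different utterances/emotions can be swapped
    # here before resynthesis to test which one carries the emotion.
    return pyworld.synthesize(f0, sp, ap, fs)
```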
Speech and emotion perception are dynamic processes in which it may be optimal to integrate synchronous signals emitted from different sources. Studies of audio-visual (AV) perception of neutrally expressed speech demonstrate supra-additive (i.e., where AV>[unimodal auditory+unimodal visual]) responses in left STS to crossmodal speech stimuli. However, emotions are often conveyed simultaneously...
The paper introduces an expressive Mandarin speech corpus, supported by the National Hi-Tech Program (863) and the National Natural Science Foundation of China (NSFC), for research into expressive speech information processing. The corpus contains emotional speech, dialogue speech, etc. In order to capture the subtle acoustic information, the paper also presents the annotation methods with multiple perceptio...
We analyze two German databases: the OLLO database [1] designed for doing speech recognition experiments on speech variabilities, and the Berlin emotional database [2] designed for the analysis and synthesis of emotional speech. The paper tries to find a relation between intrinsic speech variabilities and the emotions. Moreover, we study this relation from the point of view of speech recognitio...
This paper presents high-level strategies for controlling emotional speech morphing algorithms. Emotion morphing is realized by representing the acoustic features in the time-frequency plane, which is warped and modified to generate natural morphed emotional speech. These acoustic features should ideally be decomposed into a multidimensional space and be orthogonal. After matching these acoust...
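As a rough illustration only, not the paper's morphing algorithm, the sketch below applies the two simplest warps of the time-frequency plane, a phase-vocoder time stretch and a pitch shift, assuming librosa and soundfile are available; crude_morph and its default factors are made up for demonstration.

```python
import librosa
import soundfile as sf

def crude_morph(wav_path, out_path, rate=1.1, semitones=2.0):
    # rate > 1 compresses the time axis (faster speech); rate < 1 stretches it.
    y, sr = librosa.load(wav_path, sr=None)
    D = librosa.stft(y)
    D_warped = librosa.phase_vocoder(D, rate=rate)   # warp the time axis
    y_warped = librosa.istft(D_warped)
    # Shift the frequency axis by a few semitones.
    y_morphed = librosa.effects.pitch_shift(y_warped, sr=sr, n_steps=semitones)
    sf.write(out_path, y_morphed, sr)
```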
In this paper we demonstrate how the emotional state of the speaker influences his or her speech. We show that recognition accuracy varies significantly depending on the emotional state of the speaker. Our system models the pronunciation variation of emotional speech both at the acoustic and prosodic level. We show that using emotion-specific acoustic and prosodic models allows the system to discr...
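A toy sketch of the emotion-specific acoustic model idea, not the authors' system: one Gaussian mixture per emotion is fitted on MFCC frames, and an utterance is assigned to the best-scoring emotion. It assumes librosa and scikit-learn; the helper names and the choice of 13 MFCCs and 8 mixture components are arbitrary.

```python
import librosa
import numpy as np
from sklearn.mixture import GaussianMixture

def mfcc_frames(wav_path):
    # Frame-level MFCC features, shape (frames, 13).
    y, sr = librosa.load(wav_path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T

def train_models(files_by_emotion, n_components=8):
    # files_by_emotion: dict mapping emotion label -> list of wav paths.
    models = {}
    for emotion, paths in files_by_emotion.items():
        feats = np.vstack([mfcc_frames(p) for p in paths])
        models[emotion] = GaussianMixture(n_components=n_components).fit(feats)
    return models

def classify(models, wav_path):
    # Pick the emotion whose model gives the highest average log-likelihood.
    feats = mfcc_frames(wav_path)
    return max(models, key=lambda e: models[e].score(feats))
```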
This paper explores the perceptual relevance of acoustical correlates of emotional speech by means of speech synthesis. In addition, the research aims at the development of »emotion rules« which enable an optimized speech synthesis system to generate emotional speech. Two investigations using this synthesizer are described: 1) the systematic variation of selected acoustical features to gain a prelim...
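To make the »emotion rules« idea concrete, here is a hypothetical sketch (the rule table and scaling factors are invented, not the paper's rules) that shifts and expands the F0 contour of a neutral utterance before resynthesis, assuming the WORLD vocoder via pyworld.

```python
import numpy as np
import pyworld
import soundfile as sf

# Illustrative rule table; the factors are made up for demonstration only.
EMOTION_RULES = {
    "happy": {"f0_mean": 1.15, "f0_range": 1.3},
    "sad":   {"f0_mean": 0.90, "f0_range": 0.7},
}

def apply_rule(wav_path, emotion, out_path):
    x, fs = sf.read(wav_path)
    x = np.ascontiguousarray(x, dtype=np.float64)
    f0, t = pyworld.harvest(x, fs)          # F0 contour
    sp = pyworld.cheaptrick(x, f0, t, fs)   # spectral envelope
    ap = pyworld.d4c(x, f0, t, fs)          # aperiodicity
    rule = EMOTION_RULES[emotion]
    voiced = f0 > 0
    mean_f0 = f0[voiced].mean()
    # Expand/compress the F0 range around the mean, then shift the mean;
    # unvoiced frames keep F0 = 0.
    f0_mod = np.where(
        voiced,
        (f0 - mean_f0) * rule["f0_range"] + mean_f0 * rule["f0_mean"],
        0.0,
    )
    y = pyworld.synthesize(f0_mod, sp, ap, fs)
    sf.write(out_path, y, fs)
```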