Search results for: speech emotion

Number of results: 162,640

2013
Rashmirekha Ram Hemanta Kumar Palo Mihir Narayan Mohanty

Emotion recognition from human speech is a challenge for researchers and is mostly studied under ideal acoustic conditions. The performance of such systems degrades when environmental mismatches exist between the training and testing phases. Robust speech recognition requires reducing the redundancy and variability, and improving the capturing ability, of speech signals in noisy ...

2009
Björn W. Schuller Stefan Steidl Anton Batliner

The last decade has seen a substantial body of literature on the recognition of emotion from speech. However, in comparison to related speech processing tasks such as Automatic Speech and Speaker Recognition, practically no standardised corpora and test conditions exist for comparing performances under exactly the same conditions. Instead, a multiplicity of evaluation strategies is employed – such as ...

2009
Shashidhar G. Koolagudi Sudhamay Maity Anil Kumar Vuppala Saswat Chakrabarti K. Sreenivasa Rao

In this paper, we introduce a speech database for analyzing the emotions present in speech signals. The proposed database is recorded in the Telugu language using professional artists from All India Radio (AIR), Vijayawada, India. The speech corpus is collected by simulating eight different emotions using neutral (emotion-free) statements. The database is named Indian Institute o...

2016
Yongming Huang Ao Wu Guobao Zhang Yue Li

A wavelet packet based adaptive filter-bank construction, combined with a Deep Belief Network (DBN) feature-learning method, is proposed for speech signal processing in this paper. On this basis, a set of acoustic features is extracted for speech emotion recognition, namely Coiflet Wavelet Packet Cepstral Coefficients (CWPCC). CWPCC extends the conventional Mel-Frequency Cepstral Coefficients (MFCC)...
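The MFCC pipeline that CWPCC extends can be sketched with NumPy alone. This is a minimal illustration of the standard steps (power spectrum, mel filter bank, log, DCT-II), not the authors' wavelet-packet construction; the frame size, filter count, and coefficient count below are arbitrary choices for the example.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale, mapped to FFT bins.
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        lo, c, hi = bins[i - 1], bins[i], bins[i + 1]
        for k in range(lo, c):
            fb[i - 1, k] = (k - lo) / max(c - lo, 1)
        for k in range(c, hi):
            fb[i - 1, k] = (hi - k) / max(hi - c, 1)
    return fb

def mfcc(frame, sr, n_filters=26, n_ceps=13):
    # Power spectrum -> mel filter-bank energies -> log -> DCT-II.
    spec = np.abs(np.fft.rfft(frame)) ** 2
    fb = mel_filterbank(n_filters, len(frame), sr)
    energies = np.log(fb @ spec + 1e-10)
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_filters)))
    return dct @ energies

# Usage: one 512-sample frame of a 400 Hz tone at 16 kHz.
sr = 16000
frame = np.sin(2 * np.pi * 400 * np.arange(512) / sr)
coeffs = mfcc(frame, sr)
```

A wavelet-packet variant such as CWPCC would replace the FFT/mel-filter stage with subband energies from a wavelet packet tree; the log and DCT steps stay the same.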

2016
Zhi Zhu Ryota Miyauchi Yukiko Araki Masashi Unoki

It has been reported that vocal emotion recognition is challenging for cochlear implant (CI) listeners due to the limited spectral cues provided by CI devices. Because of the CI mechanism, modulation information serves as a primary cue. Previous studies have revealed that the modulation components of speech are important for speech intelligibility. However, it is unclear whether modulation informati...

2006
Björn Schuller Jan Stadermann Gerhard Rigoll

Automatic Speech Recognition fails to a certain extent when confronted with highly affective speech. To cope with this problem, we suggest dynamic adaptation to the actual user emotion. The ASR framework is built from a hybrid ANN/HMM mono-phone recognizer with a 5k bi-gram LM. Based on this, we show adaptation to the affective speaking style. Speech emotion recognition takes place prior to the ac...

2014
Weilin Ye Xinghua Fan

This paper presents an approach to emotion recognition from speech signals and textual content. In the analysis of speech signals, thirty-seven acoustic features are extracted from the speech input. Two different classifiers, Support Vector Machines (SVMs) and a BP neural network, are adopted to classify the emotional states. In text analysis, we use a two-step classification method to recognize ...
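The SVM stage of a pipeline like this can be sketched with a Pegasos-style linear SVM trained by subgradient descent. This is a generic illustration on synthetic "acoustic feature" vectors, not the paper's thirty-seven features or its actual classifier configuration.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style subgradient descent for a linear SVM.
    X: (n, d) feature matrix; y: labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b, t = np.zeros(d), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)
            w *= (1.0 - eta * lam)          # regularization shrinkage
            if y[i] * (X[i] @ w + b) < 1:   # hinge-loss subgradient step
                w += eta * y[i] * X[i]
                b += eta * y[i]
    return w, b

def predict(w, b, X):
    return np.where(X @ w + b >= 0, 1, -1)

# Usage: two synthetic emotion classes in a 4-dimensional feature space.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (50, 4)), rng.normal(2, 0.5, (50, 4))])
y = np.array([-1] * 50 + [1] * 50)
w, b = train_linear_svm(X, y)
acc = (predict(w, b, X) == y).mean()
```

A practical system would use a kernel SVM and held-out evaluation; the linear, in-sample version keeps the sketch short.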

2012
Ranniery Maia

This paper presents a study on the importance of short-term spectral and excitation parameterizations for emotional hidden Markov model (HMM)-based speech synthesis. The analysis is performed through an emotion classification task using two methods: K-means emotion clustering and Gaussian Mixture Model (GMM)-based emotion identification. Two known forms of parameterization for the short-term...
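GMM-based emotion identification typically fits one density per emotion and picks the class with the highest likelihood of a test vector. The sketch below uses the single-component, diagonal-covariance special case on synthetic features; a real system would fit multi-component GMMs with EM, and the emotion labels here are illustrative.

```python
import numpy as np

def fit_gaussian(X):
    # Diagonal-covariance Gaussian: per-dimension mean and variance.
    return X.mean(axis=0), X.var(axis=0) + 1e-6

def log_likelihood(x, mean, var):
    # Sum of per-dimension log N(x | mean, var).
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def classify(x, models):
    # Pick the emotion whose model assigns x the highest likelihood.
    return max(models, key=lambda e: log_likelihood(x, *models[e]))

# Usage: two synthetic emotion clusters in a 6-dimensional feature space.
rng = np.random.default_rng(1)
models = {
    "angry": fit_gaussian(rng.normal(3.0, 0.5, (40, 6))),
    "neutral": fit_gaussian(rng.normal(0.0, 0.5, (40, 6))),
}
label = classify(np.full(6, 3.0), models)
```

Extending this to a true mixture means summing weighted component likelihoods inside `log_likelihood`; the argmax decision rule is unchanged.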

2010
Hynek Boril Seyed Omid Sadjadi Tristan Kleinschmidt John H. L. Hansen

Non-driving-related cognitive load and variations in emotional state may impact a driver’s capability to control a vehicle and introduce driving errors. The availability of reliable cognitive-load and emotion detection in drivers would benefit the design of active safety systems and other intelligent in-vehicle interfaces. In this study, speech produced by 68 subjects while driving in urban areas ...

2000
Li-chiung Yang

Emotion is an integral component of human speech, and prosody is the principal conveyor of the speaker's state. In this study we show how specific emotional states are expressed in the prosody of spontaneous speech. The significance of prosodic meaning for communicating judgements, attitudes, and the cognitive state of the speaker makes it essential to emotion-intention tracking and to natural-s...
