Search results for: speech emotion recognition

Number of results: 377,604

2011
N. Murali Krishna, P. V. Lakshmi, Y. Srinivas, J. Sirisha Devi

Emotion recognition helps identify the internal emotional states of individuals from a speech database. In this paper, the dynamic time warping (DTW) technique is used for speaker-independent emotion recognition based on 39 MFCC features. A large audio corpus of around 960 samples of isolated words covering five different emotions is collected and recorded at sampling frequencies of 20 to 300 kHz...
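
As a rough, hedged illustration of a DTW-plus-MFCC pipeline like the one described above, the Python sketch below extracts 39-dimensional MFCC features (13 coefficients plus deltas and delta-deltas) and labels a test utterance with the emotion of the nearest reference template under a DTW distance. The file paths, the use of librosa, and the nearest-template decision rule are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' system): DTW distance over
# 39-dimensional MFCC features for template-based emotion recognition.
import numpy as np
import librosa  # assumed feature-extraction library

def mfcc_39(path, sr=16000, n_mfcc=13):
    """13 MFCCs plus delta and delta-delta -> (frames, 39) feature matrix."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    feats = np.vstack([mfcc,
                       librosa.feature.delta(mfcc),
                       librosa.feature.delta(mfcc, order=2)])
    return feats.T  # time-major

def dtw_distance(a, b):
    """Classic DTW with Euclidean local cost between frame vectors."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Hypothetical usage: label a test word with the emotion of the
# nearest reference template (placeholder file paths).
templates = {"anger": "anger_ref.wav", "joy": "joy_ref.wav"}
test = mfcc_39("test_word.wav")
predicted = min(templates, key=lambda e: dtw_distance(test, mfcc_39(templates[e])))
print(predicted)
```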

2003
Vladimir Hozjan, Zdravko Kacic

This paper presents and discusses speaker-dependent emotion recognition with a large set of statistical features. Speaker-dependent emotion recognition currently achieves the best accuracy. Recognition was performed on the English, Slovenian, Spanish, and French InterFace emotional speech databases. All databases include 9 speakers. The InterFace databases include neutral speaking s...

2009
N. Khetrapal, Anjali K. Bhatara

Khetrapal reviews the literature on music and autism and stresses the need for a greater focus on the cognitive and neural mechanisms underlying both autism and music perception. I build upon this review and discuss the strong connections between speech prosody and emotion in music. These connections imply that emotion recognition training in one domain can influence emotion recognition in the ...

2004
Atsushi Iwai, Yoshikazu Yano, Shigeru Okuma

This paper proposes a complex-emotion recognition system for a specific user, where a complex emotion is a mixture of basic emotions. To capture differences between individuals, we use a Self-Organizing Feature Map (SOM) in the proposed system. Additionally, so that the emotion recognition system can express complex emotions, we propose a new labeling method. We verify the proposed system using emotional spe...
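
To make the SOM idea concrete, here is a minimal, self-contained sketch of a self-organizing map trained on speech feature vectors; grid units could afterwards be labeled with one or more emotions, which is one way a unit could represent a mixed emotion. The grid size, learning schedule, and random stand-in features are assumptions for illustration, not the authors' method.

```python
# Minimal self-organizing map (SOM) sketch, not the authors' system.
import numpy as np

class SimpleSOM:
    def __init__(self, rows=8, cols=8, dim=39, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(rows, cols, dim))   # codebook vectors
        self.rows, self.cols = rows, cols

    def bmu(self, x):
        """Grid index of the best-matching unit for input vector x."""
        d = np.linalg.norm(self.w - x, axis=2)
        return np.unravel_index(np.argmin(d), d.shape)

    def train(self, data, epochs=20, lr0=0.5, sigma0=3.0):
        grid = np.stack(np.meshgrid(np.arange(self.rows),
                                    np.arange(self.cols),
                                    indexing="ij"), axis=-1)
        for t in range(epochs):
            lr = lr0 * (1 - t / epochs)
            sigma = sigma0 * (1 - t / epochs) + 1e-3
            for x in data:
                bi = np.array(self.bmu(x))
                # Gaussian neighborhood around the winning unit
                h = np.exp(-np.sum((grid - bi) ** 2, axis=-1) / (2 * sigma ** 2))
                self.w += lr * h[..., None] * (x - self.w)

# Hypothetical usage with random stand-in features (39-dim, e.g. MFCC-based):
data = np.random.default_rng(1).normal(size=(200, 39))
som = SimpleSOM()
som.train(data)
print(som.bmu(data[0]))  # grid coordinates of the unit representing this sample
```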

2010
Gang Liu, Yun Lei, John H. L. Hansen

We investigate an effective feature extraction front-end for speech emotion recognition, which performs well in clean and noisy conditions. First, we explore the use of perceptual minimum variance distortionless response (PMVDR). These features, originally proposed for accent/dialect and language identification (LID), can better approximate the perceptual scales and are less sensitive to noise ...

2008
Stefan Steidl, Anton Batliner, Elmar Nöth, Joachim Hornegger

Prosodic features modelling pitch, energy, and duration play a major role in speech emotion recognition. Our word-level features, especially duration and pitch features, rely on correct word segmentation and F0 extraction. For the FAU Aibo Emotion Corpus, the automatic segmentation of a forced alignment of the spoken word sequence and the automatically extracted F0 values have been manually cor...
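
For readers unfamiliar with such features, the sketch below computes a few utterance-level prosodic statistics (duration, pitch mean and range, energy mean and deviation). It relies on librosa's pYIN pitch tracker and RMS energy as stand-ins; the corpus itself, as noted above, uses forced alignment and manually corrected F0, which this sketch does not reproduce.

```python
# Hedged sketch of utterance-level prosodic features (pitch, energy, duration).
import numpy as np
import librosa

def prosodic_features(path, sr=16000):
    y, sr = librosa.load(path, sr=sr)
    f0, _, _ = librosa.pyin(y, fmin=60.0, fmax=500.0, sr=sr)  # pYIN F0 track
    rms = librosa.feature.rms(y=y)[0]                          # frame energy
    f0_voiced = f0[~np.isnan(f0)]                              # voiced frames only
    return {
        "duration_s": len(y) / sr,
        "f0_mean": float(np.mean(f0_voiced)) if f0_voiced.size else 0.0,
        "f0_range": float(np.ptp(f0_voiced)) if f0_voiced.size else 0.0,
        "energy_mean": float(np.mean(rms)),
        "energy_std": float(np.std(rms)),
    }

# Hypothetical usage on one recording (placeholder path):
# print(prosodic_features("utterance.wav"))
```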

2016
Pascale Fung, Anik Dey, Farhad Bin Siddique, Ruixi Lin, Yang Yang, Yan Wan, Ricky Ho Yin Chan

Zara the Supergirl is an interactive system that, while having a conversation with a user, uses its built-in sentiment analysis, emotion recognition, and facial and speech recognition modules to exhibit the human-like response of sharing emotions. In addition, at the end of a 5-10 minute conversation with the user, it can give a comprehensive personality analysis based on the user’s interaction wi...

2011
Martin Wöllmer, Felix Weninger, Stefan Steidl, Anton Batliner, Björn W. Schuller

We present a study on the effect of reverberation on acoustic-linguistic recognition of non-prototypical emotions during child-robot interaction. Investigating the well-defined Interspeech 2009 Emotion Challenge task of recognizing negative emotions in children’s speech, we focus on the impact of artificial and real reverberation conditions on the quality of linguistic features and on emotion re...
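
As a hedged illustration of the "artificial reverberation" condition mentioned above, the sketch below convolves a clean utterance with a room impulse response; the file names and the scipy/librosa calls are assumptions for illustration, not the Challenge's actual data preparation.

```python
# Hedged sketch: artificial reverberation by convolving clean speech with a
# measured or simulated room impulse response (RIR).
import numpy as np
import librosa
from scipy.signal import fftconvolve

def reverberate(speech_path, rir_path, sr=16000):
    y, _ = librosa.load(speech_path, sr=sr)
    rir, _ = librosa.load(rir_path, sr=sr)
    wet = fftconvolve(y, rir, mode="full")[: len(y)]
    # Rescale so the reverberated signal keeps the clean signal's peak level.
    return wet / (np.max(np.abs(wet)) + 1e-9) * np.max(np.abs(y))

# Hypothetical usage: create a reverberant copy of one utterance (placeholder paths).
# wet = reverberate("child_utterance.wav", "classroom_rir.wav")
```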
