Search results for: emotional speech recognition

Number of results: 435,631

2016
Anjali Bhatara, Petri Laukka, Natalie Boll-Avetisyan, Lionel Granjon, Hillary Anger Elfenbein, Tanja Bänziger

The present study examines the effect of language experience on vocal emotion perception in a second language. Native speakers of French with varying levels of self-reported English ability were asked to identify emotions from vocal expressions produced by American actors in a forced-choice task, and to rate their pleasantness, power, alertness and intensity on continuous scales. Stimuli includ...

Journal: EURASIP J. Audio, Speech and Music Processing, 2017
Dorota Kaminska, Tomasz Sapinski, Gholamreza Anbarjafari

This research paper presents a parametrization of emotional speech using a pool of common features utilized in emotion recognition, such as fundamental frequency, formants, energy, MFCC, PLP, and LPC coefficients. The pool is additionally expanded with perceptual coefficients such as BFCC, HFCC, RPLP, and RASTA-PLP, which are used in speech recognition but not applied in emotion detection. The main ...
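Two of the common features named above, frame energy and fundamental frequency, can be illustrated with a minimal numpy-only sketch. This is not the paper's toolchain (which uses MFCC/PLP/LPC extractors); the frame length, hop size, and pitch search range are assumed values.

```python
import numpy as np

def frame_energy_and_f0(signal, sr, frame_len=400, hop=160, fmin=50, fmax=400):
    """Per-frame log energy and a crude autocorrelation-based F0 estimate.

    Illustrative only: real emotion-recognition front ends use dedicated
    MFCC/PLP pipelines rather than this toy pitch tracker.
    """
    energies, f0s = [], []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        # log energy of the frame (small epsilon avoids log(0) on silence)
        energies.append(np.log(np.sum(frame ** 2) + 1e-10))
        # autocorrelation; keep non-negative lags only
        ac = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        # search lags corresponding to the plausible pitch range [fmin, fmax]
        lo, hi = int(sr / fmax), int(sr / fmin)
        lag = lo + np.argmax(ac[lo:hi])
        f0s.append(sr / lag)
    return np.array(energies), np.array(f0s)
```

On a pure 200 Hz tone sampled at 16 kHz, the median F0 estimate lands on 200 Hz, which is a quick sanity check for the lag search.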

2008
Yusuke Ijima, Makoto Tachibana, Takashi Nose, Takao Kobayashi

This paper describes a model adaptation technique for emotional speech recognition based on multiple-regression HMM (MR-HMM). We use a low-dimensional vector, called the style vector, which corresponds to the degree of expressivity of the emotional speech, as the explanatory variable of the regression. In the proposed technique, first, the value of the style vector for input speech is estimated. Then, using ...
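The regression idea behind this adaptation can be sketched on toy data. Assuming, per the abstract, that model parameters are an affine function of a low-dimensional style vector, the example below recovers the style vector from a noisy observed mean by least squares. The matrix shapes, noise level, and data are illustrative assumptions, not the paper's actual MR-HMM formulation.

```python
import numpy as np

# Assumed affine form: each Gaussian mean is mu(s) = H @ s + b, where s is
# the low-dimensional style (expressivity) vector. Given an observed mean,
# s can be estimated by least squares.
rng = np.random.default_rng(0)
H = rng.normal(size=(13, 2))    # regression matrix (13-dim features, 2-dim style)
b = rng.normal(size=13)         # bias term (neutral-style mean)
s_true = np.array([0.8, -0.3])  # hidden degree of expressivity

# noisy observed mean for some input utterance
obs_mean = H @ s_true + b + rng.normal(scale=0.01, size=13)

# estimate the style vector from the observation
s_hat, *_ = np.linalg.lstsq(H, obs_mean - b, rcond=None)
```

With 13 equations and 2 unknowns, the least-squares estimate is close to the true style vector despite the added noise.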

Journal: IJCLCLP, 2007
Chung-Hsien Wu, Ze-Jing Chuang

This paper presents an approach to feature compensation for emotion recognition from speech signals. In this approach, the intonation groups (IGs) of the input speech signals are extracted first. The speech features in each selected intonation group are then extracted. With the assumption of linear mapping between feature spaces in different emotional states, a feature compensation approach is ...
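The stated assumption of a linear mapping between feature spaces in different emotional states can be sketched with synthetic data: fit a compensation matrix from paired emotional/neutral feature vectors by least squares and apply it. The 2-D features, the mapping, and the noise level are assumptions for illustration; the paper's intonation-group pipeline is not reproduced here.

```python
import numpy as np

# Synthetic paired data: emotional features are (assumed to be) a linear
# transform of neutral features plus a little noise.
rng = np.random.default_rng(1)
A_true = np.array([[1.2, 0.1], [-0.05, 0.9]])  # assumed true mapping
neutral = rng.normal(size=(200, 2))
emotional = neutral @ A_true.T + 0.01 * rng.normal(size=(200, 2))

# Fit the compensation matrix (emotional -> neutral) by least squares,
# then map emotional-state features back toward the neutral space.
W, *_ = np.linalg.lstsq(emotional, neutral, rcond=None)
compensated = emotional @ W
```

After compensation, the features sit close to their neutral-state counterparts, which is the property a downstream recognizer would rely on.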

2009
Fred Charles, David Pizzi, Marc Cavazza, Thurid Vogt, Elisabeth André

Whilst techniques for narrative generation and agent behaviour have made significant progress in recent years, natural language processing remains a bottleneck hampering the scalability of Interactive Storytelling systems. This demonstrator introduces a novel interaction technique based solely on emotional speech recognition. It allows the user to use speech to interact with virtual actors with...

2008
Thurid Vogt, Elisabeth André, Nikolaus Bee

We present EmoVoice, a framework for emotional speech corpus and classifier creation and for offline as well as real-time online speech emotion recognition. The framework is intended to be used by non-experts and therefore comes with an interface for creating one's own personal or application-specific emotion recogniser. Furthermore, we describe some applications and prototypes that already use our f...

Journal: Speech Communication, 2003
Louis ten Bosch

Automatic recognition and understanding of speech are crucial steps towards natural human–machine interaction. Apart from the recognition of the word sequence, the recognition of properties such as prosody, emotion tags, or stress tags may be of particular importance in this communication process. This paper discusses the possibilities of recognizing emotion from the speech signal, primarily from ...

The performance of speech recognition systems is greatly reduced when speech is corrupted by noise. One common approach to robust speech recognition is the missing-feature method. In this approach, the components in the time–frequency representation of the signal (spectrogram) that exhibit a low signal-to-noise ratio (SNR) are tagged as missing and removed, then replaced using the remaining components and statistical ...
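The tagging step described above can be sketched in a few lines: compare each spectrogram bin against a noise estimate and mark low-SNR bins as missing. This hedged example only builds the binary reliability mask; the noise estimate and threshold are assumed inputs, and reconstruction is left to the statistical methods the abstract goes on to mention.

```python
import numpy as np

def missing_feature_mask(spec, noise_est, snr_threshold_db=0.0):
    """Binary reliability mask over a power spectrogram.

    spec      : per-bin signal power (frames x frequency bins)
    noise_est : per-bin (or scalar) noise power estimate
    Returns True where the bin is reliable, False where it is "missing".
    Epsilons guard against division by zero and log(0).
    """
    snr_db = 10 * np.log10(spec / (noise_est + 1e-12) + 1e-12)
    return snr_db >= snr_threshold_db
```

With a unit noise floor, a bin of power 10 (+10 dB SNR) is kept while a bin of power 0.1 (-10 dB SNR) is flagged as missing.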

2011
Shiqing Zhang, Xiaoming Zhao, Bicheng Lei

Recognizing human emotion from speech signals, i.e., spoken emotion recognition, is a new and interesting subject in the field of artificial intelligence. In this paper we present a new method of spoken emotion recognition based on radial basis function neural networks (RBFNN). The acoustic features related to human emotion expression are extracted from speech signals and then fed into the RBFNN for emot...
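A minimal RBFNN classifier of the kind described can be sketched on synthetic two-class data: Gaussian hidden units centred on training prototypes, with linear output weights fit by least squares. The centre selection, kernel width, and toy 2-D "acoustic" features are assumptions for illustration, not the paper's actual setup.

```python
import numpy as np

def rbf_design(X, centres, width):
    """Gaussian RBF design matrix: one column per centre."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

rng = np.random.default_rng(2)
# two toy "emotion" clusters in a 2-D acoustic feature space
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

centres = X[::10]                     # every 10th sample as an RBF centre
Phi = rbf_design(X, centres, width=0.5)
# linear output layer fit to one-hot targets by least squares
W, *_ = np.linalg.lstsq(Phi, np.eye(2)[y], rcond=None)
pred = np.argmax(rbf_design(X, centres, width=0.5) @ W, axis=1)
```

On these well-separated clusters, the training predictions recover the class labels almost perfectly; real emotion classes overlap far more, which is why feature choice matters so much in the papers listed here.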
