Search results for: speech emotion

Number of results: 162,640

Behzad Ghonsooly, Homayoon Mazaheri

Language and emotion are two related systems in use, in that one system (emotions) impacts the performance of the other (language). Both serve a function in communication. Since foreign language classrooms are ideally interactional in nature, emotional intelligence (EI) gains importance. The aim of this study was to find out whether one's total emotional quotient and its com...

2015
A. Albahri

This paper investigates the effects of standard speech compression techniques on the accuracy of automatic emotion recognition. The effects of the Adaptive Multi-Rate (AMR), Adaptive Multi-Rate Wideband (AMR-WB) and Extended Adaptive Multi-Rate Wideband (AMR-WB+) speech codecs were compared against emotion recognition from uncompressed speech. The recognition methods included techniques based on three...

Journal: :CoRR 2013
Imen Trabelsi Dorra Ben Ayed Mezghanni Noureddine Ellouze

The purpose of a speech emotion recognition system is to classify a speaker's utterances into different emotional states such as disgust, boredom, sadness, neutral and happiness. Speech features that are commonly used in speech emotion recognition (SER) rely on global utterance-level prosodic features. In our work, we evaluate the impact of frame-level feature extraction. The speech samples are fro...
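To illustrate the frame-level approach this abstract contrasts with utterance-level prosody, here is a minimal NumPy sketch. The frame length, hop size, and the choice of short-time energy and zero-crossing rate as per-frame features are illustrative assumptions, not the authors' exact setup:

```python
import numpy as np

def frame_signal(x, frame_len, hop_len):
    """Split a 1-D signal into overlapping frames (no padding)."""
    n_frames = 1 + (len(x) - frame_len) // hop_len
    idx = np.arange(frame_len)[None, :] + hop_len * np.arange(n_frames)[:, None]
    return x[idx]

def frame_level_features(x, frame_len=400, hop_len=160):
    """Per-frame short-time energy and zero-crossing rate."""
    frames = frame_signal(x, frame_len, hop_len)
    energy = np.sum(frames ** 2, axis=1)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return energy, zcr

# Example: 1 s of a 440 Hz tone at 16 kHz (25 ms frames, 10 ms hop)
sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t)
energy, zcr = frame_level_features(x)
```

A classifier can then operate on the whole sequence of per-frame vectors rather than on a single utterance-level summary.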

2005
Jonas Beskow Mikael Nordenberg

This paper describes initial experiments with synthesis of visual speech articulation for different emotions, using a newly developed MPEG-4 compatible talking head. The basic problem with combining speech and emotion in a talking head is to handle the interaction between emotional expression and articulation in the orofacial region. Rather than trying to model speech and emotion as two separat...

2013
Jun Seok Park Soo Hong Kim

In early research, basic acoustic features were the primary choice for emotion recognition from speech. Most feature vectors were composed of simple extracted pitch-related, intensity-related, and duration-related attributes, such as maximum, minimum, median, range and variability values. However, researchers are still debating what features influence the recognition of emotion...
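The utterance-level statistics listed above can be computed directly from a per-frame contour; a small sketch (the pitch contour values here are made up purely for illustration):

```python
import numpy as np

def contour_stats(contour):
    """Utterance-level statistics of a per-frame contour
    (e.g. a pitch or intensity track)."""
    c = np.asarray(contour, dtype=float)
    return {
        "max": float(c.max()),
        "min": float(c.min()),
        "median": float(np.median(c)),
        "range": float(c.max() - c.min()),
        "variability": float(np.std(c)),
    }

# Hypothetical pitch contour (Hz) for a short utterance
f0 = [180, 190, 210, 230, 220, 200, 185]
stats = contour_stats(f0)
```

Concatenating such statistics for pitch, intensity, and duration yields exactly the kind of simple utterance-level feature vector the early work described here relied on.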

2013
Jainath Yadav

An emotion is made of several components such as physiological changes in the body, subjective feelings, and expressive behaviours. These changes in speech signal are mainly observed in prosody parameters such as pitch, duration and energy. In this work, prosody parameters are modified using instants of significant excitation (epochs) and these instants are detected using Zero Frequency Filteri...
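A rough sketch of the Zero Frequency Filtering idea described above: the signal is passed through cascaded zero-frequency resonators and the resulting polynomial trend is removed by repeated local-mean subtraction, leaving a near-sinusoidal signal whose zero crossings mark the epochs. The window length, number of trend-removal passes, and the synthetic impulse-train input are assumptions, not the paper's exact configuration:

```python
import numpy as np

def moving_average(y, win):
    return np.convolve(y, np.ones(win) / win, mode="same")

def zff_epochs(x, sr, mean_pitch_hz=100.0):
    """Detect epochs (instants of significant excitation) via Zero
    Frequency Filtering: two cascaded zero-frequency resonators (each
    equivalent to a double integrator), followed by repeated removal of
    the slowly varying trend with a local mean over ~1.5 pitch periods."""
    d = np.diff(np.asarray(x, dtype=float), prepend=0.0)  # remove DC offset
    y = d
    for _ in range(2):                  # two cascaded zero-frequency resonators
        y = np.cumsum(np.cumsum(y))     # each resonator integrates twice
    win = int(round(1.5 * sr / mean_pitch_hz)) | 1  # odd-length mean window
    for _ in range(3):                  # repeated trend removal
        y = y - moving_average(y, win)
    # epochs correspond to negative-to-positive zero crossings
    return np.where((y[:-1] < 0) & (y[1:] >= 0))[0]

# Synthetic excitation: a 100 Hz impulse train at 8 kHz
sr, period = 8000, 80
x = np.zeros(sr)
x[::period] = 1.0
epochs = zff_epochs(x, sr, mean_pitch_hz=100.0)
```

On this synthetic input the detected epochs are spaced roughly one pitch period (80 samples) apart, which is what prosody-modification methods exploit when stretching or compressing durations epoch by epoch.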

Journal: :EURASIP J. Audio, Speech and Music Processing 2017
Dorota Kaminska Tomasz Sapinski Gholamreza Anbarjafari

This research paper presents a parametrization of emotional speech using a pool of common features utilized in emotion recognition, such as fundamental frequency, formants, energy, MFCC, PLP, and LPC coefficients. The pool is additionally expanded by perceptual coefficients such as BFCC, HFCC, RPLP, and RASTA-PLP, which are used in speech recognition but not applied in emotion detection. The main ...
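Of the feature pool above, MFCCs are the most widely used; a compact, NumPy-only sketch of the standard pipeline (pre-emphasis → framing and windowing → power spectrum → mel filterbank → log → DCT). All parameter values are common defaults, not this paper's settings, and production systems would typically use a library such as librosa or openSMILE instead:

```python
import numpy as np

def mel_filterbank(n_filters, n_fft, sr, fmin=0.0, fmax=None):
    """Triangular filters spaced evenly on the mel scale."""
    fmax = fmax or sr / 2
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = mel_to_hz(np.linspace(hz_to_mel(fmin), hz_to_mel(fmax), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        if c > l:
            fb[i, l:c] = (np.arange(l, c) - l) / (c - l)   # rising slope
        if r > c:
            fb[i, c:r] = (r - np.arange(c, r)) / (r - c)   # falling slope
    return fb

def mfcc(x, sr, n_mfcc=13, frame_len=400, hop=160, n_fft=512, n_filters=26):
    x = np.append(x[0], x[1:] - 0.97 * x[:-1])             # pre-emphasis
    n_frames = 1 + (len(x) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = x[idx] * np.hamming(frame_len)
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    fbank = np.log(power @ mel_filterbank(n_filters, n_fft, sr).T + 1e-10)
    # DCT-II to decorrelate the log filterbank energies
    n = np.arange(n_filters)
    basis = np.cos(np.pi * np.outer(np.arange(n_mfcc), (2 * n + 1) / (2 * n_filters)))
    return fbank @ basis.T

# Example: 1 s of a 440 Hz tone at 16 kHz -> one 13-dim vector per frame
sr = 16000
x = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
feats = mfcc(x, sr)
```

Perceptual variants such as BFCC or RASTA-PLP follow the same framing/filterbank/compression structure, differing mainly in the frequency warping and spectral smoothing steps.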

2016
Sefik Emre Eskimez Melissa Sturge-Apple Zhiyao Duan Wendi B. Heinzelman

The ability to classify emotions from speech is beneficial in a number of domains, including the study of human relationships. However, manual classification of emotions from speech is time consuming. Current technology supports the automatic classification of emotions from speech, but these systems have some limitations. In particular, existing systems are trained with a given data set and can...

2012
Fabien Ringeval Mohamed Chetouani Björn W. Schuller

Whereas rhythmic speech analysis is known to bear great potential for the recognition of emotion, it is often omitted or reduced to the speaking rate or segmental durations. An obvious explanation is that the characterisation of speech rhythm is not an easy task itself and there exist many types of rhythmic information. In this paper, we study advanced methods to define novel metrics of speech ...
