Search results for: emotional speech recognition

Number of results: 435,631

Journal: Speech Communication 2006
Dimitrios Ververidis, Constantine Kotropoulos

In this paper we overview emotional speech recognition with three goals in mind. The first goal is to provide an up-to-date record of the available emotional speech data collections. The number of emotional states, the language, the number of speakers, and the kind of speech are briefly addressed. The second goal is to present the most frequent acoustic features used for emotional speech recog...

2004
Ze-Jing Chuang, Chung-Hsien Wu

This paper presents an approach to emotion recognition from speech signals and textual content. In the analysis of speech signals, thirty-three acoustic features are extracted from the speech input. After Principal Component Analysis (PCA) is performed, 14 principal components are selected for discriminative representation. In this representation, each principal component is the combination of ...
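The PCA dimensionality-reduction step described in this abstract (33 acoustic features projected onto 14 principal components) can be sketched as follows. This is a hypothetical illustration: the random data stands in for real acoustic feature vectors, and the component count is taken from the abstract.

```python
import numpy as np

# Stand-in data: 200 utterances, each with 33 acoustic features
# (the abstract's real features, e.g. pitch/energy statistics, are not shown here).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 33))

# Center the data, then eigen-decompose the covariance matrix.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)           # 33 x 33 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues

# Keep the 14 components with the largest variance, as in the abstract.
order = np.argsort(eigvals)[::-1]
W = eigvecs[:, order[:14]]               # 33 x 14 projection matrix
Z = Xc @ W                               # 200 x 14 reduced representation
print(Z.shape)
```

Each row of `Z` is the 14-dimensional representation in which every coordinate is a linear combination of the original 33 features, which is what the abstract means by "each principal component is the combination of" the input features.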

2015
Farah Chenchah, Zied Lachiri

Recognizing human emotions through the vocal channel has gained increased attention recently. In this paper, we study how the features used and the classifiers impact the recognition accuracy of emotions present in speech. Four emotional states are considered for the classification of emotions from speech in this work. To this end, features are extracted from the audio characteristics of emotional speech using Linea...

2014
Bo-Chang Chiou, Chia-Ping Chen

In this paper, we investigate cross-lingual automatic speech emotion recognition. The basic idea is that since the emotion recognition system is based on the acoustic features only, it is possible to combine data in different languages to improve the recognition accuracy. We begin with the construction of a Mandarin database of emotional speech, which is similar to the well-known Berlin Databas...

2006
Thurid Vogt, Elisabeth André

Feature extraction is still a disputed issue for the recognition of emotions from speech. Differences in features for male and female speakers are a well-known problem and it is established that gender-dependent emotion recognizers perform better than gender-independent ones. We propose a way to improve the discriminative quality of gender-dependent features: The emotion recognition system is p...
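The gender-dependent setup this abstract describes can be sketched as a two-stage pipeline: a gender classifier routes each utterance to a gender-specific emotion model. Everything below is a hypothetical stand-in (the F0 threshold, feature names, and toy rule-based "models" are illustrative, not the authors' actual system):

```python
def classify_gender(features):
    # Placeholder gender classifier: real systems use trained models;
    # here we assume a simple mean-F0 threshold (hypothetical value).
    return "female" if features["mean_f0_hz"] > 165.0 else "male"

def recognize_emotion(features, models):
    # Stage 1: decide gender; Stage 2: apply the gender-dependent model,
    # which may use a different feature set per gender.
    gender = classify_gender(features)
    return models[gender](features)

# Toy gender-specific "models" standing in for trained classifiers.
models = {
    "male":   lambda f: "angry" if f["energy"] > 0.7 else "neutral",
    "female": lambda f: "happy" if f["energy"] > 0.7 else "neutral",
}

print(recognize_emotion({"mean_f0_hz": 210.0, "energy": 0.9}, models))
```

The point of the design is that each gender-specific model can be trained on (and tuned for) features whose distributions differ between male and female speakers, which is why gender-dependent recognizers outperform gender-independent ones.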

Journal: Journal of Korean Institute of Intelligent Systems 2009

2017
Syeda Narjis Fatima, Engin Erzin

Dyadic interactions encapsulate rich emotional exchange between interlocutors suggesting a multimodal, cross-speaker and cross-dimensional continuous emotion dependency. This study explores the dynamic inter-attribute emotional dependency at the cross-subject level with implications to continuous emotion recognition based on speech and body motion cues. We propose a novel two-stage Gaussian Mix...

Journal: International Journal of Advanced Research in Artificial Intelligence 2015


Journal: Auditory and Vestibular Research
Ensiyeh Rahmani, Department of Audiology, School of Rehabilitation Sciences, Iran University of Medical Sciences, Tehran, Iran; Farnoush Jarollahi, Department of Audiology, School of Rehabilitation Sciences, Iran University of Medical Sciences, Tehran, Iran; Agha Fatemeh Hosseini, Department of Biostatistics, School of Health, Iran University of Medical Sciences, Tehran, Iran; Mahnaz Soleymani, Department of Audiology, School of Rehabilitation Sciences, Iran University of Medical Sciences, Tehran, Iran

Background and aim: Bilingualism is an important phenomenon with different effects on each aspect of language processing. Auditory temporal processing is a major component of auditory processing ability. Since the brain processes of bilingual and monolingual individuals differ, and no studies have yet been conducted on the effect of temporal processing on the speech recognition performance of az...
