Search results for: speech emotion

Number of results: 162,640

Journal: Intelligent Decision Technologies, 2014
David Sztahó, Klára Vicsi

In speech communication, emotions play a great role in expressing information. These emotions arise partly as reactions to our environment and to our partners during a conversation. Understanding these reactions and recognizing them automatically is highly important: through them, we can get a clearer picture of how our conversational partner is responding. In Cognitive InfoCommunication this ...

2014
P. Vijai Bhaskar, S. Ramamohana Rao

Speech processing is the study of speech signals and the methods used to process them. It is employed in applications such as speech coding, speech synthesis, speech recognition, and speaker recognition technology. In speech classification, the computation of prosodic effects from speech signals plays a major role. In emotional speech signals, pitch and frequency are the most importan...
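The pitch (fundamental frequency) mentioned above is commonly estimated per frame. A minimal sketch of one classical approach, autocorrelation-based pitch estimation, is shown below; the function name and the synthetic test tone are illustrative, not taken from the cited paper.

```python
import math

def autocorr_pitch(signal, sample_rate, fmin=80.0, fmax=400.0):
    """Estimate the fundamental frequency of a voiced frame by finding
    the autocorrelation peak within the plausible pitch-lag range."""
    lag_min = int(sample_rate / fmax)
    lag_max = int(sample_rate / fmin)
    best_lag, best_corr = 0, 0.0
    for lag in range(lag_min, min(lag_max, len(signal) - 1)):
        corr = sum(signal[i] * signal[i + lag] for i in range(len(signal) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag if best_lag else 0.0

# Synthetic 200 Hz tone sampled at 8 kHz (50 ms frame)
sr = 8000
frame = [math.sin(2 * math.pi * 200 * n / sr) for n in range(400)]
print(round(autocorr_pitch(frame, sr)))  # → 200
```

Real systems add windowing, voicing decisions, and octave-error correction on top of this basic peak search.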

2013
N. J. Nalini, S. Palanivel, M. Balasubramanian

Abstract -- The main objective of this research is to develop a speech emotion recognition system using residual phase and MFCC features with an autoassociative neural network (AANN). The speech emotion recognition system classifies the speech emotion into predefined categories such as anger, fear, happy, neutral, or sad. The proposed technique for speech emotion recognition (SER) has two phases: Fe...
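The AANN decision rule amounts to: train one model per emotion and assign a test utterance to the class whose model reconstructs it with the lowest error. The sketch below illustrates only that decision rule, substituting a per-class mean template and Euclidean distance for the AANN and its reconstruction error; the two-dimensional feature vectors are made up for illustration.

```python
import math

# Hypothetical toy feature vectors per emotion class (stand-in for
# MFCC + residual-phase features; values are invented for illustration).
train = {
    "anger":   [[0.9, 0.1], [0.8, 0.2]],
    "sad":     [[0.1, 0.9], [0.2, 0.8]],
    "neutral": [[0.5, 0.5], [0.6, 0.4]],
}

def class_template(vectors):
    """Mean vector per class -- a crude stand-in for the AANN, whose
    reconstruction error plays the role of the distance used below."""
    dim = len(vectors[0])
    return [sum(v[d] for v in vectors) / len(vectors) for d in range(dim)]

templates = {c: class_template(vs) for c, vs in train.items()}

def classify(x):
    """Assign x to the class whose model 'reconstructs' it best,
    i.e. with the smallest error (here: Euclidean distance)."""
    return min(templates, key=lambda c: math.dist(x, templates[c]))

print(classify([0.85, 0.15]))  # → anger
```

In the actual method, each class's AANN is trained to reproduce its own class's feature vectors, so out-of-class inputs yield larger reconstruction errors.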

Journal: I. J. Speech Technology, 2012
Björn W. Schuller, Zixing Zhang, Felix Weninger, Felix Burkhardt

Recognizing speakers in emotional conditions remains a challenging issue, since speaker states such as emotion affect the acoustic parameters used in typical speaker recognition systems. Thus, it is believed that knowledge of the current speaker emotion can improve speaker recognition in real-life conditions. Conversely, speech emotion recognition still has to overcome several barriers before i...

2005
Hongwu Yang, Shuang Li, Lianhong Cai

Research efforts in the field of TTS have placed emphasis on the naturalness in synthesized speech to facilitate various applications in Human-Computer Interaction (HCI). The ideal synthetic speech for HCI should not only have proper pronunciations, but also convey the appropriate semantics within the context of use. “Context” refers to the textual context of the document, the identity of the i...

2006
Bufan Zhang, Zhenhua Ling, Long Qin, Renhua Wang

This paper presents an approach to modeling the pitch contour in Chinese expressive speech synthesis using the SFC (Superposition of Functional Contours) model. Functional contours corresponding to the expressions are introduced when applying SFC to expressive speech. During implementation, both an emotion-dependent method and an emotion-independent method are realized and compared. Three emoti...

2015
Amarbir Singh

Automatic speech emotion recognition is a current research topic in the field of human-computer interaction. Emotion recognition in speech is a challenging problem because it is unclear which features are effective for speech emotion recognition. In this paper we propose an approach in which we extract features from the energy, spectral, and acoustic domains and then merge these features b...

Journal: IEEE Access, 2023

Cross-corpus speech emotion recognition (SER) is a hot topic in classification. SER involves four issues: feature selection, difference constraints, label regression, and preservation of discriminative features. Few previous studies can solve these issues jointly. In this work, we propose the transfer emotion-discriminative feature subspace learning (TEDFSL) method. Acoustic features are extract...

2010
Lan-Ying Yeh, Tai-Shih Chi

Speech emotion recognition is mostly studied on clean speech. In this paper, joint spectro-temporal features (RS features) are extracted from an auditory model and applied to detect the emotion status of noisy speech. The noisy speech is derived from the Berlin Emotional Speech database with added white and babble noise at various SNR levels. The clean-train/noisy-test scenario is in...
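Deriving noisy test speech at a target SNR, as described above, is a standard mixing step: the noise is scaled so that the clean-to-noise power ratio matches the requested level in dB. A minimal sketch (the function name and synthetic signals are illustrative, not from the cited paper):

```python
import math, random

def mix_at_snr(clean, noise, snr_db):
    """Scale `noise` so the mixture has the requested SNR (in dB)
    relative to `clean`, then add the two signals sample-wise."""
    p_clean = sum(s * s for s in clean) / len(clean)
    p_noise = sum(n * n for n in noise) / len(noise)
    # Desired noise power is P_clean / 10^(SNR/10)
    scale = math.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10.0)))
    return [s + scale * n for s, n in zip(clean, noise)]

random.seed(0)
clean = [math.sin(2 * math.pi * 220 * n / 8000) for n in range(8000)]
white = [random.gauss(0.0, 1.0) for _ in range(8000)]
noisy = mix_at_snr(clean, white, snr_db=5)

# Verify the achieved SNR of the mixture
p_c = sum(s * s for s in clean) / len(clean)
p_n = sum((y - s) ** 2 for y, s in zip(noisy, clean)) / len(clean)
print(round(10 * math.log10(p_c / p_n), 1))  # → 5.0
```

Babble noise would be mixed the same way, only with a recorded multi-speaker babble track in place of the Gaussian samples.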

2014
Jun-Seok Park, Soo-Hong Kim

In early research, basic acoustic features were the primary choice for emotion recognition from speech. Most feature vectors were composed of simple pitch-related, intensity-related, and duration-related attributes, such as maximum, minimum, median, range, and variability values. However, researchers are still debating which features influence the recognition of emotion...
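The summary attributes listed above (maximum, minimum, median, range, variability) are straightforward to compute from a frame-level contour. A minimal sketch over a hypothetical pitch (F0) contour; the function name and the example values are invented for illustration:

```python
from statistics import mean, median, pstdev

def contour_stats(f0):
    """Summary statistics of a pitch (F0) contour -- the kind of
    low-level attributes used in early emotion recognition work."""
    voiced = [f for f in f0 if f > 0]  # drop unvoiced frames (F0 = 0)
    return {
        "max": max(voiced),
        "min": min(voiced),
        "median": median(voiced),
        "range": max(voiced) - min(voiced),
        "variability": pstdev(voiced),  # std. dev. as a variability measure
        "mean": mean(voiced),
    }

# Hypothetical F0 contour in Hz (0 marks unvoiced frames)
f0 = [0, 180, 190, 210, 0, 240, 230, 200, 0]
stats = contour_stats(f0)
print(stats["range"], stats["median"])  # → 60 205.0
```

The same statistics are typically computed over intensity and duration contours as well, and the results concatenated into one utterance-level feature vector.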

Chart: number of search results per year
