Search results for: emotional speech

Number of results: 220212

2010
Shaikh Mostafa Al Masum, Antonio Rui Ferreira Rebordão, Keikichi Hirose

Some Text-to-Speech (TTS) systems have revealed weaknesses in their emotional expressivity, but this situation can be improved by better parameterization of the acoustic and prosodic parameters. This paper describes a system, Affective Story Teller (AST), as an example of an emotionally expressive speech synthesizer. Our technique uses several linguistic resources that recognize emotions in the input...
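
The abstract above describes driving an expressive synthesizer from emotions recognized in the input text. Below is a minimal, hypothetical Python sketch of that idea: a toy emotion lexicon and an assumed table of prosodic scaling factors stand in for AST's actual linguistic resources and parameterization.

```python
# Illustrative sketch (not AST's actual resources): map lexicon-detected
# emotion in input text to prosodic parameter offsets for a TTS front end.

# Hypothetical mini emotion lexicon; real systems use larger affect lexicons.
EMOTION_LEXICON = {
    "happy": "joy", "delighted": "joy", "wonderful": "joy",
    "afraid": "fear", "terrified": "fear",
    "angry": "anger", "furious": "anger",
    "sad": "sadness", "lonely": "sadness",
}

# Assumed prosodic scaling factors per emotion (pitch, rate, volume).
PROSODY_RULES = {
    "joy":     {"pitch": 1.15, "rate": 1.10, "volume": 1.05},
    "fear":    {"pitch": 1.20, "rate": 1.20, "volume": 0.95},
    "anger":   {"pitch": 1.10, "rate": 1.15, "volume": 1.20},
    "sadness": {"pitch": 0.90, "rate": 0.85, "volume": 0.90},
    "neutral": {"pitch": 1.00, "rate": 1.00, "volume": 1.00},
}

def detect_emotion(sentence: str) -> str:
    """Return the most frequent lexicon emotion found in the sentence."""
    counts: dict[str, int] = {}
    for token in sentence.lower().split():
        emotion = EMOTION_LEXICON.get(token.strip(".,!?"))
        if emotion:
            counts[emotion] = counts.get(emotion, 0) + 1
    return max(counts, key=counts.get) if counts else "neutral"

def prosody_for(sentence: str) -> dict:
    """Look up prosodic scaling factors for the detected emotion."""
    return PROSODY_RULES[detect_emotion(sentence)]

print(prosody_for("She was delighted by the wonderful news!"))  # joy settings
```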

2015
Lakshmi Saheer, Xingyu Na, Milos Cernak

Prosody plays an important role in neutral-to-emotional voice conversion. Prosodic features like pitch are usually estimated and altered at a segmental level based on short windowing of the speech signal (where the signal is expected to be quasi-stationary). This results in a frame-wise change of acoustic parameters for synthesizing emotionalized speech. In order to convert neutral speech to an...
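
As a rough illustration of the frame-wise prosody change described above, the following Python sketch shifts and expands a frame-level F0 contour toward a more "emotional" target; the synthetic contour and scaling factors are assumptions, not the authors' conversion method.

```python
# A minimal sketch: frame-wise modification of a pitch contour, the kind of
# segmental-level prosody change the abstract describes. In practice the F0
# contour would come from a pitch tracker on short (e.g., 25 ms) windows.
import numpy as np

def emotionalize_f0(f0_hz: np.ndarray, mean_scale: float = 1.2,
                    range_scale: float = 1.5) -> np.ndarray:
    """Shift and expand a frame-wise F0 contour in the log-frequency domain.

    mean_scale / range_scale are illustrative targets for a higher, more
    variable pitch; unvoiced frames (f0 == 0) are left untouched.
    """
    voiced = f0_hz > 0
    log_f0 = np.log(f0_hz[voiced])
    mu = log_f0.mean()
    converted = np.copy(f0_hz)
    # Expand the excursion around the mean, then shift the mean itself.
    converted[voiced] = np.exp((log_f0 - mu) * range_scale + mu + np.log(mean_scale))
    return converted

# Example: a flat-ish neutral contour of 200 frames around 120 Hz.
neutral_f0 = 120 + 5 * np.sin(np.linspace(0, 6 * np.pi, 200))
neutral_f0[::10] = 0.0  # pretend some frames are unvoiced
happy_f0 = emotionalize_f0(neutral_f0)
```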

2014
Máximo Sánchez-Gutiérrez, Enrique Marcelo Albornoz, Fabiola Martínez Licona, Hugo Leonardo Rufiner, John C. Goddard

Emotional speech recognition is a multidisciplinary research area that has received increasing attention over the last few years. The present paper considers the application of restricted Boltzmann machines (RBM) and deep belief networks (DBN) to the difficult task of automatic Spanish emotional speech recognition. The principal motivation lies in the success reported in a growing body of work ...
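
For readers unfamiliar with the RBM-as-feature-extractor idea mentioned above, here is a hedged sketch using scikit-learn's BernoulliRBM feeding a logistic-regression classifier on placeholder features; it is not the paper's DBN architecture or its Spanish corpus.

```python
# Rough sketch of the RBM idea: an unsupervised RBM learns a feature
# representation that a simple classifier then uses for emotion labels.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Placeholder acoustic features (e.g., per-utterance MFCC statistics) and
# emotion labels; real data would come from an emotional speech corpus.
X = rng.normal(size=(300, 40))
y = rng.integers(0, 4, size=300)  # 4 emotion classes

model = Pipeline([
    ("scale", MinMaxScaler()),            # BernoulliRBM expects inputs in [0, 1]
    ("rbm", BernoulliRBM(n_components=64, learning_rate=0.05,
                         n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```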

2005
Cecilia Ovesdotter Alm, Richard Sproat

Whereas experimental studies on emotional speech often control for neutral semantics, speech in naturalistic corpora is characterized by contextual cues and non-neutral semantic content. Moreover, the target emotion of an utterance is generally unknown and must be inferred by the listener. Within the context of having child-directed expressive text-to-speech synthesis as the goal, we describ...

M. H. Sedaaghi

Accurate gender classification is useful in speech and speaker recognition as well as in speech emotion classification, because better performance has been reported when separate acoustic models are employed for males and females. Gender classification is also relevant in face recognition, video summarization, human-robot interaction, etc. Although gender classification is rather mature in a...
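
The gender-dependent modeling mentioned above can be sketched as a two-stage pipeline: predict gender first, then apply a gender-specific emotion model. The features, labels, and SVM classifiers below are placeholders, not the study's setup.

```python
# Minimal sketch of gender-dependent modeling: classify gender first, then
# route each utterance to an emotion classifier trained only on that gender.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 20))            # placeholder acoustic features
gender = rng.integers(0, 2, size=400)     # 0 = female, 1 = male
emotion = rng.integers(0, 4, size=400)    # 4 emotion classes

gender_clf = SVC().fit(X, gender)
emotion_clf = {
    g: SVC().fit(X[gender == g], emotion[gender == g]) for g in (0, 1)
}

def predict_emotion(x: np.ndarray) -> int:
    """Predict gender, then use the matching gender-specific emotion model."""
    g = int(gender_clf.predict(x.reshape(1, -1))[0])
    return int(emotion_clf[g].predict(x.reshape(1, -1))[0])

print(predict_emotion(X[0]))
```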

Journal: Psychiatria Danubina, 2014
Ana Gregl, Marin Kirigin, Radojka Sućeska Ligutić, Snježana Bilać

BACKGROUND This study aims to establish whether mothers of children with specific language impairments (SLI) have reduced emotional competence and whether individual dimensions of maternal emotional competence are related to emotional and behavioral problems in children. SUBJECTS AND METHODS The clinical sample comprised 97 preschool children (23 girls) with SLI, while the peer sample co...

Journal: CoRR, 2016
Abdul Malik Badshah, Jamil Ahmad, Mi Young Lee, Sung Wook Baik

Besides spoken words, speech signals also carry information about speaker gender, age, and emotional state, which can be used in a variety of speech analysis applications. In this paper, a divide-and-conquer strategy for ensemble classification has been proposed to recognize emotions in speech. The intrinsic hierarchy in emotions has been utilized to construct an emotions tree, which assisted in bre...
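
A toy version of the divide-and-conquer "emotions tree" could look like the following: a root classifier separates broad emotion groups (here an assumed high/low-arousal split) and per-group classifiers make the final decision. The hierarchy, features, and models are illustrative, not the paper's exact design.

```python
# Illustrative two-level emotions tree: root splits high vs. low arousal,
# leaf classifiers resolve the final emotion within each group.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

EMOTIONS = ["anger", "happiness", "sadness", "boredom"]
HIGH_AROUSAL = {"anger", "happiness"}

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 30))                       # placeholder features
y = rng.integers(0, len(EMOTIONS), size=400)         # emotion indices
group = np.array([EMOTIONS[i] in HIGH_AROUSAL for i in y])

root = RandomForestClassifier(random_state=0).fit(X, group)
leaves = {
    g: RandomForestClassifier(random_state=0).fit(X[group == g], y[group == g])
    for g in (True, False)
}

def classify(x: np.ndarray) -> str:
    g = bool(root.predict(x.reshape(1, -1))[0])
    return EMOTIONS[int(leaves[g].predict(x.reshape(1, -1))[0])]

print(classify(X[0]))
```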

2014
Soroosh Mariooryad, Reza Lotfian, Carlos Busso

A key element in affective computing is to have large corpora of genuine emotional samples collected during natural conversations. Recording natural interactions over the telephone is an appealing approach to building emotional databases. However, collecting real conversational data with expressive reactions is a challenging task, especially if the recordings are to be shared with the community (e....

2001
Marc Schröder, Roddy Cowie, Ellen Douglas-Cowie, Machiel Westerdijk, Stan C. A. M. Gielen

In a database of emotional speech, dimensional descriptions of emotional states have been correlated with acoustic variables. Many stable correlations have been found. The predictions made by linear regression largely agree with the literature. The numerical form of the description and the choice of acoustic variables studied are particularly well suited for future implementation in a speech syn...
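
The regression setup described above (acoustic variables predicting dimensional emotion ratings) might be sketched as follows; the synthetic features, coefficients, and the "activation" target are assumptions for illustration only, not the cited corpus.

```python
# Small sketch: predict a dimensional emotion rating (activation) from a few
# acoustic variables with ordinary linear regression on synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 200
# Columns: mean F0 (Hz), F0 range (Hz), mean intensity (dB), speech rate (syll/s)
acoustic = np.column_stack([
    rng.normal(180, 30, n),
    rng.normal(80, 20, n),
    rng.normal(65, 5, n),
    rng.normal(4.5, 0.8, n),
])
# Assume activation rises with pitch, pitch range, loudness, and tempo (plus noise).
activation = (0.01 * acoustic[:, 0] + 0.01 * acoustic[:, 1]
              + 0.05 * acoustic[:, 2] + 0.3 * acoustic[:, 3]
              + rng.normal(0, 0.5, n))

reg = LinearRegression().fit(acoustic, activation)
print("coefficients:", reg.coef_)   # acoustic correlates of activation
print("R^2:", reg.score(acoustic, activation))
```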

2007
Jarosław Cichosz, Krzysztof Ślot

This paper is concerned with emotion recognition based on the speech signal. Two novel elements are introduced in the method: a new set of emotional speech descriptors and a binary-tree-based classifier, where consecutive emotions are extracted at each node based on an assessment of feature triplets. The method has been verified using two databases of e...
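
In the spirit of the node-by-node binary-tree classifier above, the following sketch peels off one emotion per node using a small feature subset (a stand-in for the feature triplets); the emotion order, feature indices, and base classifier are illustrative assumptions, not the authors' configuration.

```python
# Schematic cascade: each node uses a three-feature subset to split off one
# emotion; samples not claimed by a node fall through to the next one.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

EMOTION_ORDER = ["anger", "sadness", "happiness", "neutral"]  # last = fallback
TRIPLETS = {"anger": (0, 3, 7), "sadness": (1, 4, 8), "happiness": (2, 5, 9)}

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 12))                       # placeholder features
y = rng.integers(0, len(EMOTION_ORDER), size=300)

# One binary classifier per non-terminal node: "this emotion" vs. "the rest".
nodes = {}
for emo in EMOTION_ORDER[:-1]:
    idx = EMOTION_ORDER.index(emo)
    cols = list(TRIPLETS[emo])
    nodes[emo] = DecisionTreeClassifier(max_depth=3, random_state=0).fit(
        X[:, cols], (y == idx).astype(int))

def classify(x: np.ndarray) -> str:
    for emo in EMOTION_ORDER[:-1]:
        if nodes[emo].predict(x[list(TRIPLETS[emo])].reshape(1, -1))[0] == 1:
            return emo
    return EMOTION_ORDER[-1]

print(classify(X[0]))
```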

Chart: number of search results per year