Search results for: emotional speech database

Number of results: 478786

2013
P. Gangamohan, Sudarsana Reddy Kadiri, Bayya Yegnanarayana

Emotional speech is produced when a speaker is in a state different from the normal state. The objective of this study is to explore the deviations in the excitation source features of emotional speech compared to normal speech. The features used for analysis are extracted at the subsegmental level (1-3 ms) of speech. A comparative study of these features across different emotions indicates that the...
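The abstract above analyzes features at a 1-3 ms subsegmental scale. A minimal sketch of slicing a signal into frames of that length (with an assumed 16 kHz sampling rate and synthetic audio; the feature extraction itself is not shown):

```python
# Hypothetical sketch: slicing a speech signal into subsegmental
# frames of ~2 ms, the analysis scale the abstract mentions.
import numpy as np

fs = 16000                                          # sampling rate in Hz (assumed)
signal = np.random.default_rng(2).normal(size=fs)   # 1 s of stand-in audio

frame_len = int(0.002 * fs)     # 2 ms = 32 samples at 16 kHz
n_frames = len(signal) // frame_len
frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
print(frames.shape)             # (500, 32)
```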

2015
Pavol Partila, Miroslav Voznak, Jaromir Tovarek

The impact of the classification method and feature selection on speech emotion recognition accuracy is discussed in this paper. Selecting the correct parameters in combination with the classifier is an important part of reducing the computational complexity of the system. This step is necessary especially for systems that will be deployed in real-time applications. The reason for the development...
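One common way to combine feature selection with a classifier, as the abstract describes, is a scikit-learn pipeline. This is a hypothetical sketch on synthetic data, not the paper's actual method or feature set:

```python
# Hypothetical sketch: feature selection + classifier pipeline.
# Reducing the feature set lowers computational cost for real-time use.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for acoustic emotion features (e.g. MFCCs, pitch stats).
X, y = make_classification(n_samples=300, n_features=40,
                           n_informative=10, random_state=0)

pipe = make_pipeline(SelectKBest(f_classif, k=10), SVC(kernel="rbf"))
scores = cross_val_score(pipe, X, y, cv=5)
print(round(scores.mean(), 3))
```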

Journal: CoRR 2017
Karttikeya Mangalam, Tanaya Guha

We investigate the effect and usefulness of spontaneity in speech (i.e. whether a given speech data is spontaneous or not) in the context of emotion recognition. We hypothesize that emotional content in speech is interrelated with its spontaneity, and thus propose to use spontaneity classification as an auxiliary task to the problem of emotion recognition. We propose two supervised learning set...
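The abstract above couples spontaneity classification to emotion recognition as an auxiliary task. One simple way to realize such coupling (a hypothetical sketch on synthetic data; the paper's actual architecture may differ) is to feed the auxiliary classifier's output into the main classifier:

```python
# Hypothetical sketch: spontaneity prediction as an auxiliary signal
# whose probability is appended as a feature for emotion recognition.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))                    # stand-in acoustic features
y_spont = (X[:, 0] > 0).astype(int)               # spontaneous vs. read speech
y_emotion = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic emotion label

aux = LogisticRegression().fit(X, y_spont)
X_aug = np.hstack([X, aux.predict_proba(X)[:, [1]]])
main = LogisticRegression().fit(X_aug, y_emotion)
print(round(main.score(X_aug, y_emotion), 3))
```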

2005
Jianhua Tao, Yongguo Kang

The paper analyzes prosody features, which include intonation, speaking rate, and intensity, based on classified emotional speech. As an important aspect of voice quality, voice source parameters are also derived for analysis. With the analysis results above, the paper creates both a CART model and a weight-decay neural network model to find acoustic importance towards the emotional speech classific...
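A CART model can rank prosodic features by importance, loosely mirroring the abstract's goal of assessing acoustic importance. This is a hypothetical sketch on synthetic stand-in data, not the paper's model:

```python
# Hypothetical sketch: a decision tree (CART) ranking prosodic
# features by importance on synthetic data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
# Columns: mean F0, speaking rate, intensity (stand-in prosody features).
X = rng.normal(size=(300, 3))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # synthetic emotion labels

tree = DecisionTreeClassifier(max_depth=4, random_state=3).fit(X, y)
print(np.round(tree.feature_importances_, 2))
```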

2007
Donna Erickson, Takaaki Shochi, Caroline Menezes, Hideki Kawahara, Ken-Ichi Sakakibara

This paper investigates some non-F0 cues to emotional speech. Two speech samples were collected from spontaneous speech: the word "leave", one sample spoken with emotion (sad) and the other as non-emotional. Using the morphing algorithm of STRAIGHT [1], we morphed a series of 12 utterances, starting from the non-emotional "leave" to the emotional "leave", keeping F0 at 300 Hz. Perception test ...

Journal: International Journal of Power Electronics and Drive Systems 2023

Speech emotion recognition aims to identify the emotions expressed in speech by analyzing audio signals. In this work, data augmentation is first performed to increase the number of samples for better model learning. The samples are comprehensively encoded as frequency- and temporal-domain features. For classification, a light gradient boosting machine is leveraged, with hyperparameter tuning to determine optimal settings...

Journal: Speech Communication 2022

In this paper, we first provide a review of state-of-the-art emotional voice conversion research and existing emotional speech databases. We then motivate the development of a novel database (ESD) that addresses this increasing research need. With this paper, the ESD database is now made available to the community. The database consists of 350 parallel utterances spoken by 10 native English and Chinese speakers and covers 5 emotion categories (neutral...

2013
Bernd J. Kröger

Cognitive goals – i.e. the intention to utter a sentence and to produce co-speech facial and hand-arm gestures – as well as the sensorimotor realization of the intended speech, co-speech facial, and co-speech hand-arm actions are modulated by the emotional state of the speaker. In this review paper it will be illustrated how cognitive goals and sensorimotor speech, co-speech facial, and co-spee...

Journal: Auditory and Vestibular Research
Saeid Aarabi (Department of Audiology, School of Rehabilitation Sciences, Iran University of Medical Sciences, Tehran, Iran); Farnoush Jarollahi (Department of Audiology, School of Rehabilitation Sciences, Iran University of Medical Sciences, Tehran, Iran); Sajed Badfar (Department of Audiology, School of Rehabilitation, Arak University of Medical Sciences, Arak, Iran); Reza Hosseinabadi (Department of Audiology, School of Rehabilitation, Tehran University of Medical Sciences, Tehran, Iran); Mohsen Ahadi (Department of Audiology, School of Rehabilitation Sciences, Iran University of Medical Sciences, Tehran, Iran)

Background and aim: Five mechanisms related to speech-in-noise perception are discussed: neural encoding and decoding, centrifugal pathways, pitch perception, asymmetric sampling in time, and cognitive skills. These mechanisms are related to each other, and each is important for recognizing speech in noise. In this article, we have tried to rely on the latest studies to de...

2014
Olli Vuolteenaho, Sinikka Eskelinen, Eero Väyrynen, Tapio Seppänen, Klára Vicsi, Raimo Ahonen

Emotion recognition, a key step of affective computing, is the process of decoding an embedded emotional message from human communication signals, e.g. visual, audio, and/or other physiological cues. It is well-known that speech is the main channel for human communication and thus vital in the signalling of emotion and semantic cues for the correct interpretation of contexts. In the verbal chan...
