Search results for: emotional speech database
Number of results: 478,786
The purpose of a speech emotion recognition system is to classify a speaker's utterances into different emotional states such as disgust, boredom, sadness, neutral, and happiness. Speech features commonly used in speech emotion recognition (SER) rely on global utterance-level prosodic features. In our work, we evaluate the impact of frame-level feature extraction. The speech samples are fro...
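The contrast this abstract draws, frame-level features versus a single global utterance-level value, can be illustrated with a minimal NumPy sketch. This is not the paper's actual pipeline; the function names and the 25 ms / 10 ms framing at 16 kHz are illustrative assumptions.

```python
import numpy as np

def frame_signal(x, frame_len=400, hop=160):
    """Split a 1-D signal into overlapping frames (25 ms / 10 ms at 16 kHz)."""
    n_frames = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])

def frame_level_features(x):
    """Two simple frame-level features: per-frame log-energy and
    zero-crossing rate, yielding a (n_frames, 2) trajectory rather
    than one global utterance-level number."""
    frames = frame_signal(x)
    energy = np.log(np.sum(frames ** 2, axis=1) + 1e-10)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return np.stack([energy, zcr], axis=1)

# Toy example: 1 s of a 440 Hz tone sampled at 16 kHz
sr = 16000
t = np.arange(sr) / sr
feats = frame_level_features(np.sin(2 * np.pi * 440 * t))
print(feats.shape)  # (98, 2)
```

A global utterance-level feature would collapse each column of `feats` to a single statistic; the frame-level representation keeps the full trajectory for a downstream classifier.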
This paper presents a duration-embedded Bi-HMM framework for expressive voice conversion. First, Ward’s minimum variance clustering method is used to cluster all the conversion units (sub-syllables) in order to reduce the number of conversion models as well as the size of the required training database. The duration-embedded Bi-HMM trained with the EM algorithm is built for each sub-syllable cl...
In early research, basic acoustic features were the primary choices for emotion recognition from speech. Most feature vectors were composed of simply extracted pitch-related, intensity-related, and duration-related attributes, such as maximum, minimum, median, range, and variability values. However, researchers are still debating which features influence the recognition of emotion...
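The global prosodic statistics this abstract lists (maximum, minimum, median, range, variability) can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not code from the paper; the helper name and the toy pitch contour are assumptions.

```python
import numpy as np

def utterance_stats(contour):
    """Collapse a per-frame contour (e.g. an F0 or intensity track) into
    the global utterance-level attributes named in the abstract."""
    c = np.asarray(contour, dtype=float)
    return {
        "max": float(c.max()),
        "min": float(c.min()),
        "median": float(np.median(c)),
        "range": float(c.max() - c.min()),
        "variability": float(np.std(c)),  # standard deviation
    }

pitch = [180.0, 210.0, 195.0, 240.0, 200.0]  # toy F0 contour in Hz
print(utterance_stats(pitch))
# {'max': 240.0, 'min': 180.0, 'median': 200.0, 'range': 60.0, 'variability': 20.0}
```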
This paper presents three new speech databases for standard Basque. They are designed primarily for corpus-based synthesis, but each database has its specific purpose: 1) AhoSyn: high-quality speech synthesis (recorded also in Spanish), 2) AhoSpeakers: voice conversion, and 3) AhoEmo3: emotional speech synthesis. The whole corpus design and the recording process are described in detail. Once th...
Massive amounts of digital audio material are stored in databases to be accessed via digital networks. A major challenge is how to organise and index this material to best support retrieval applications. Not enough manpower will ever be available to index the terabytes of digital material by hand. Methods for interpreting the complex data automatically or at least semi-automatically must theref...
The paper analyzes prosody features, which include intonation, speaking rate, and intensity, based on classified emotional speech. As an important aspect of voice quality, the voice source is also derived for analysis. Using these analysis results, the paper builds both a CART model and a weight-decay neural network model to find the acoustic importance towards emotional speech classific...
This paper proposes a method to recognize the emotion present in a speech signal using an iterative clustering technique. We propose the Mel Frequency Perceptual Linear Predictive Cepstrum (MFPLPC) as a feature for recognizing emotions. This feature is extracted from the speech, and clustering models are generated for each emotion. For the speaker-independent classification technique, preproc...
Speech emotion recognition is an important and challenging task in the realm of human-computer interaction. Prior work proposed a variety of models and feature sets for training a system. In this work, we conduct extensive experiments using an attentive convolutional neural network with multi-view learning objective function. We compare system performance using different lengths of the input si...
This paper describes the modeling of various emotional expressions and speaking styles in synthetic speech using HMM-based speech synthesis. We show two methods for modeling speaking styles and emotional expressions. In the first method called style-dependent modeling, each speaking style and emotional expression is modeled individually. In the second one called style-mixed modeling, each speak...
Educational researchers have provided evidence that teachers' emotional intelligence has strong effects on various aspects of teaching and learning. Yet, in the field of teaching English to speakers of other languages (TESOL), inquiry into teachers' emotional intelligence remains limited. Given its documented powerful impact on teaching practices and student learning, it is critical to pursue...