Search results for: emotional speech database
Number of results: 478786
Although research in the field of Persian speech recognition claims a thirty-year history in Iran and has achieved considerable progress, the lack of a well-defined experimental framework means that the outcomes of many of these studies are not comparable to one another and cannot be assessed accurately. The experimental framework includes an ASR toolkit and speech database ...
This paper reports on a behavioral study that explores the role of culture and gender in the recognition of emotional speech in an under-investigated cultural context (a collectivist society, i.e., Iran). Participants were asked to recognize the emotional prosody of a set of validated emotional vocal portrayals (including the five basic emotions). Findings of the experiment were then comp...
The psychological classification of emotion has two main approaches. One is the emotion category approach, in which emotions are classified into discrete, fundamental groups; the other is the emotion dimension approach, in which emotions are characterized by multiple continuous scales. The cognitive classification of emotion perceived by humans from speech is not yet sufficiently established. Although there have been sev...
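As a minimal illustration of the two approaches described above, the sketch below contrasts a categorical label with a dimensional (valence/arousal) representation of the same utterance. The specific category set, dimension names, and scale ranges are assumptions chosen for illustration, not taken from the cited work.

```python
from dataclasses import dataclass
from enum import Enum

# Categorical view: each utterance receives one discrete, fundamental emotion label.
class EmotionCategory(Enum):
    ANGER = "anger"
    HAPPINESS = "happiness"
    SADNESS = "sadness"
    FEAR = "fear"
    NEUTRAL = "neutral"

# Dimensional view: each utterance is placed on continuous scales
# (valence and arousal are two commonly used dimensions; ranges are illustrative).
@dataclass
class EmotionDimensions:
    valence: float  # negative (-1.0) to positive (+1.0)
    arousal: float  # calm (0.0) to excited (1.0)

# The same vocal portrayal can be described either way:
label = EmotionCategory.ANGER
dims = EmotionDimensions(valence=-0.8, arousal=0.9)
```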
Automatic classification of emotional speech is a challenging task with applications in synthesis and recognition. In this paper, an adaptive sinusoidal model (aSM), the extended adaptive Quasi-Harmonic Model (eaQHM), is applied to emotional speech analysis for classification purposes. The parameters of the model (amplitude and frequency) are used as features for the classification. Using ...
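To make the feature idea concrete, the sketch below extracts per-frame sinusoidal amplitudes and frequencies and feeds an utterance-level summary to a classifier. It does not reproduce eaQHM; a plain FFT peak-picking stand-in is used, and the frame sizes, peak count, and SVM classifier are assumptions for illustration only.

```python
import numpy as np
from sklearn.svm import SVC

def frame_sinusoid_features(signal, sr, frame_len=1024, hop=512, n_peaks=5):
    """Crude stand-in for a sinusoidal analysis: for each frame, keep the
    amplitudes and frequencies of the strongest FFT peaks as features."""
    feats = []
    window = np.hanning(frame_len)
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len] * window
        spectrum = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
        top = np.argsort(spectrum)[-n_peaks:]           # strongest bins
        feats.append(np.concatenate([spectrum[top], freqs[top]]))
    return np.mean(feats, axis=0)                       # one vector per utterance

def train_classifier(utterances, labels, sr=16000):
    """Hypothetical training loop over (waveform, emotion label) pairs."""
    X = np.vstack([frame_sinusoid_features(u, sr) for u in utterances])
    clf = SVC(kernel="rbf")
    clf.fit(X, labels)
    return clf
```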
The aim of this paper is to report on an attempt to design and implement an intelligent system capable of generating the correct parts of speech for a given sentence, even when the sentence is entirely new to the system and not stored in any database available to it. It follows the same steps a human would take to assign the correct parts of speech, using a natural language processor. It...
Emotion feature extraction is key to speech emotion recognition. Ensemble empirical mode decomposition (EEMD) is a newly developed method aimed at eliminating the mode mixing present in the original empirical mode decomposition (EMD). To evaluate the performance of this new method, this paper investigates the effect of a parameter pertinent to EEMD: the speech emotional envelope. First...
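For readers unfamiliar with EEMD, the sketch below decomposes a speech signal into intrinsic mode functions and summarizes them as a compact feature vector. It assumes the third-party PyEMD package; the number of ensemble trials, the number of IMFs kept, and the per-IMF log-energy summary are illustrative choices and do not reproduce the envelope parameter studied in the cited paper.

```python
import numpy as np
from PyEMD import EEMD  # ensemble EMD implementation (assumed installed)

def eemd_energy_features(signal, n_imfs=5, trials=50):
    """Decompose a speech signal with EEMD and summarize the energy of the
    leading intrinsic mode functions (IMFs) as a simple feature vector."""
    eemd = EEMD(trials=trials)      # number of noise-added ensemble members
    imfs = eemd.eemd(signal)        # rows are IMFs, from high to low frequency
    imfs = imfs[:n_imfs]
    # Per-IMF log energy; small constant avoids log(0) for silent inputs.
    return np.log(np.sum(imfs ** 2, axis=1) + 1e-12)
```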
In this article, the I3Media corpus is presented: a trilingual (Catalan, English, Spanish) speech database of neutral and emotional material collected for analysis and synthesis purposes. The corpus is made up of six different subsets of material: a neutral subcorpus, containing emotionless utterances; a ‘dialog’ subcorpus, containing typical call-center utterances; an ‘emotional’ corpu...
Recent research in speech synthesis is mainly focused on naturalness, and emotional speech synthesis has become one of the most prominent research topics. Although many studies on emotional speech in English or Japanese have been reported, studies in Korean are seldom found. This paper presents an analysis of emotional speech in Korean. Emotional speech features related to huma...
This paper presents a personal view of some of the problems facing speech technologists in the study of emotional speech. It describes some databases that are currently being used, and points out that the majority of them use actors to reproduce the emotions, thereby possibly misrepresenting the true characteristics of emotion in speech. Databases of real emotional speech, on the other hand, ...