Search results for: emotional speech recognition

Number of results: 435,631

Journal: Auditory and Vestibular Research
Seyyedeh Zeynab Nureddini, Department of Audiology, School of Rehabilitation, Shahid Beheshti University of Medical Sciences, Tehran, Iran; Ali Mohammadzadeh, Department of Audiology, School of Rehabilitation, Shahid Beheshti University of Medical Sciences, Tehran, Iran; Majid Ashrafi, Department of Audiology, School of Rehabilitation, Shahid Beheshti University of Medical Sciences, Tehran, Iran; Seyyed Mehdi Tabatabai, Department of Statistics, School of Rehabilitation, Shahid Beheshti University of Medical Sciences, Tehran, Iran; Leyla Jalilvand Karimi, Department of Audiology, School of Rehabilitation, Shahid Beheshti University of Medical Sciences, Tehran, Iran

Background and aim: Speech understanding almost never occurs in silence. Verbal communication often takes place in environments where multiple speakers are talking; in such environments, babble noise masks speech comprehension. Consonants, in comparison to vowels, are more sensitive to noise masking, yet consonants provide most of the acoustic information needed to comprehend the meaning of a word. Sin...

2013
Jun Seok Park, Soo Hong Kim

In early research, basic acoustic features were the primary choice for emotion recognition from speech. Most feature vectors were composed of simple extracted pitch-related, intensity-related, and duration-related attributes, such as maximum, minimum, median, range, and variability values. However, researchers are still debating which features influence the recognition of emotion...
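A minimal sketch of that early feature-vector approach: given a frame-level pitch (F0) contour and an intensity track, however they were extracted, compute the simple statistics the abstract names (maximum, minimum, median, range, variability). The contours below are hypothetical placeholders, not data from any cited study.

```python
import numpy as np

def prosodic_stats(track: np.ndarray) -> dict:
    """Summary statistics for one frame-level track (pitch or intensity)."""
    track = track[~np.isnan(track)]          # drop unvoiced/undefined frames
    return {
        "max": track.max(),
        "min": track.min(),
        "median": np.median(track),
        "range": track.max() - track.min(),
        "variability": track.std(),          # std. dev. as a variability proxy
    }

# Hypothetical contours; in practice these come from a pitch tracker and an
# RMS energy measure computed over ~25 ms frames.
f0 = np.array([210.0, 220.0, np.nan, 245.0, 230.0, 215.0])   # Hz
intensity = np.array([62.0, 65.5, 60.1, 70.2, 68.4, 63.0])   # dB

# Concatenating per-track statistics yields one fixed-length vector per utterance.
feature_vector = list(prosodic_stats(f0).values()) + \
                 list(prosodic_stats(intensity).values())
print(feature_vector)   # 10-dimensional utterance-level feature vector
```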

2015
Hariharan Muthusamy, Kemal Polat, Sazali Yaacob

In recent years, many research works have been published using speech-related features for speech emotion recognition; however, recent studies show that there is a strong correlation between emotional states and glottal features. In this work, Mel-frequency cepstral coefficients (MFCCs), linear predictive cepstral coefficients (LPCCs), perceptual linear predictive (PLP) features, gammatone f...
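For one of the feature sets named above, MFCCs, extraction follows a standard frame-based recipe; a hedged sketch with librosa is below (the other sets, LPCC/PLP/gammatone, follow the same pattern but are not shown). The signal here is synthetic, standing in for a speech utterance.

```python
import numpy as np
import librosa

# Synthetic 1-second tone as a stand-in for recorded speech.
sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
y = 0.5 * np.sin(2 * np.pi * 220 * t)

# 13 MFCCs per short-time frame; utterances are commonly summarised by
# per-coefficient statistics before classification.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
features = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
print(mfcc.shape, features.shape)   # (13, n_frames), (26,)
```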

Abdollah Moossavi, Enayatollah Bakhshi, Seyed Basir Hashemi, Younes Lotfi, Zahra Jeddi

Background: Variability in speech performance is a major concern for children with cochlear implants (CIs). Spectral resolution is an important acoustic component in speech perception. Considerable variability and limitations of spectral resolution in children with CIs may lead to individual differences in speech performance. The aim of this study was to assess the correlation between auditory ...

2011
Martijn Goudbeek, Marie Postma

The development of our ability to recognize (vocal) emotional expression has been relatively understudied. Even less studied is the effect of linguistic (spoken) context on emotion perception. In this study we investigate the performance of young (18-25) and old (60-85) listeners on two tasks: an emotion recognition task where emotions expressed in a sustained vowel (/a/) had to be recognized an...

2015
Elena E. Lyakso, Olga V. Frolova, Evgeniya Dmitrieva, Aleksei Grigorev, Heysem Kaya, Albert Ali Salah, Alexey Karpov

We present the first child emotional speech corpus in Russian, called “EmoChildRu”, which contains audio materials of 3-7-year-old children. The database includes over 20K recordings (approx. 30 hours), collected from 100 children. Recordings were carried out in three controlled settings by creating different emotional states for the children: playing with a standard set of toys; repetition of words fr...

2001
Ping Li, Michael C. Yip

Chinese is a language that is extensively ambiguous at the lexical-morphemic level. In this study, we examined the effects of prior context, frequency, and density of a homophone on spoken word recognition of Chinese homophones in a cross-modal experiment. Results indicate that prior context affects access to the appropriate meaning from early on, and that context interacts with frequency of ...

Journal: Applied Sciences, 2023

Speech emotion recognition is a critical component for achieving natural human–robot interaction. The modulation-filtered cochleagram is a feature based on auditory modulation perception that contains a multi-dimensional spectral–temporal representation. In this study, we propose a framework that utilizes a multi-level attention network to extract high-level emotional representations from the cochlea...
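This is not the paper's architecture, just a minimal PyTorch sketch of attention pooling over a spectral–temporal feature map (such as a modulation-filtered cochleagram), the kind of building block a multi-level attention network stacks at several depths. The tensor shapes and the 7-class output are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Weighted average over time, with a weight learned per frame."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim)
        w = torch.softmax(self.score(x), dim=1)   # (batch, time, 1)
        return (w * x).sum(dim=1)                 # (batch, dim)

batch, time, channels = 4, 300, 64                # hypothetical feature-map shape
feats = torch.randn(batch, time, channels)
pooled = AttentionPool(channels)(feats)
logits = nn.Linear(channels, 7)(pooled)           # e.g. 7 emotion classes
print(pooled.shape, logits.shape)                 # [4, 64], [4, 7]
```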

2010
Lan-Ying Yeh, Tai-Shih Chi

Speech emotion recognition is mostly studied on clean speech. In this paper, joint spectro-temporal features (RS features) are extracted from an auditory model and applied to detect the emotional status of noisy speech. The noisy speech is derived from the Berlin Emotional Speech database with added white and babble noise under various SNR levels. The clean-train/noisy-test scenario is in...
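A minimal sketch of the noise-mixing step described above: scale a noise signal so the clean-to-noise power ratio matches a target SNR, then add it to the clean speech. White noise is generated here; babble noise would be loaded from a recording instead. The specific SNR values in the loop are illustrative, not taken from the paper.

```python
import numpy as np

def mix_at_snr(clean: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so the clean/noise power ratio equals `snr_db`, then add."""
    noise = noise[: len(clean)]
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10)))
    return clean + scale * noise

rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 220 * np.linspace(0, 1, 16000))  # stand-in for speech
white = rng.standard_normal(16000)
for snr in (20, 10, 0, -5):                                 # example SNR sweep
    noisy = mix_at_snr(clean, white, snr)
```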

Journal: The Modares Journal of Electrical Engineering, 2011
Ayuob Jafari, Farshad Almasganj, Maryam Nabi Bidhendi

This paper introduces a novel approach to improving the performance of speech recognition systems using a combination of features obtained from the speech reconstructed phase space (RPS) and frequency-domain analysis. By choosing an appropriate value for the embedding dimension, the reconstructed phase space is assured to be topologically equivalent to the dynamics of the speech production system, and could therefore ...
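A sketch of the RPS idea itself: embed a scalar speech signal in d dimensions via time-delay coordinates (a Takens-style embedding). The dimension and lag values below are illustrative assumptions, not the paper's choices.

```python
import numpy as np

def delay_embed(x: np.ndarray, dim: int, lag: int) -> np.ndarray:
    """Rows are points [x[n], x[n+lag], ..., x[n+(dim-1)*lag]] in the RPS."""
    n_points = len(x) - (dim - 1) * lag
    return np.stack([x[i * lag : i * lag + n_points] for i in range(dim)], axis=1)

x = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 1000))   # stand-in for a speech frame
rps = delay_embed(x, dim=3, lag=7)
print(rps.shape)                                      # (986, 3) points in phase space
```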

Chart: number of search results per year
