Search results for: (1) linguistic behavior, (2) paralinguistic information, (3) prosodic features, (4) acoustic correlates

Number of results: 6,474,078

2004
S. Raidt, G. Bailly, B. Holm, H. Mixdorff

We face many options when designing a system that automatically generates prosody from linguistic and paralinguistic information. The literature provides several candidate phonetic models, phonological models, and mapping tools with which to actually implement the system. We detail here some dimensions along which these models have to be compared. We also show that systems employing quite similar phonetic ...

2000
Peter Roach

It is inconceivable that there could be information present in the speech signal that could be detected by the human auditory system but which is not accessible to acoustic analysis and phonetic categorisation. We know that humans can reliably recognise a range of emotions produced by speakers of their own language on the basis of the acoustic signal alone, yet it appears that our ability to id...

2015
Tamara Rathcke, Rachel Smith

This paper contributes to the recent debate in linguistic-phonetic rhythm research dominated by the idea of a perceptual dichotomy involving “syllable-timed” and “stress-timed” rhythm classes. Some previous studies have shown that it is difficult both to find reliable acoustic correlates of these classes and to obtain reliable perceptual data in their support. In an experiment, we asked 12...
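Note: the acoustic correlates at issue in this rhythm-class debate are usually operationalised with duration-based metrics such as %V, ΔC and the normalised pairwise variability index (nPVI). The sketch below computes these standard metrics from pre-segmented vocalic and consonantal interval durations; the interval values are hypothetical and are not data from this study.

# Sketch: standard speech-rhythm metrics (%V, delta-C, nPVI) computed from
# pre-segmented vocalic and consonantal interval durations (in seconds).
import numpy as np

def percent_v(vocalic, consonantal):
    # proportion of utterance duration made up of vocalic intervals
    v, c = np.sum(vocalic), np.sum(consonantal)
    return 100.0 * v / (v + c)

def delta_c(consonantal):
    # standard deviation of consonantal interval durations
    return np.std(consonantal)

def npvi(intervals):
    # normalised pairwise variability index over successive intervals
    d = np.asarray(intervals, dtype=float)
    pairs = np.abs(d[1:] - d[:-1]) / ((d[1:] + d[:-1]) / 2.0)
    return 100.0 * np.mean(pairs)

# hypothetical interval durations for one utterance
vocalic = [0.12, 0.08, 0.15, 0.10]
consonantal = [0.07, 0.11, 0.06, 0.09]
print(percent_v(vocalic, consonantal), delta_c(consonantal), npvi(vocalic))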

2013
Nandini Bondale, Thippur Sreenivas

In spontaneous speech, emotion information is embedded at several levels: acoustic, linguistic, gestural (non-verbal), etc. For emotion recognition in speech, much attention has been paid to the acoustic level and some to the linguistic level. In this study, we identify paralinguistic markers for emotion in the language. We study two Indian languages belonging to two distinct language families...

Journal: Computer Speech & Language, 2013
William Yang Wang, Fadi Biadsy, Andrew Rosenberg, Julia Hirschberg

Traditional studies of speaker state focus primarily upon one-stage classification techniques using standard acoustic features. In this article, we investigate multiple novel features and approaches to two recent tasks in speaker state detection: level-of-interest (LOI) detection and intoxication detection. In the task of LOI prediction, we propose a novel Discriminative TFIDF feature to captur...
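Note: the sketch below shows a generic TF-IDF lexical baseline for a binary speaker-state task such as level-of-interest detection; it is not the Discriminative TFIDF feature proposed in the article, and the transcripts and labels are hypothetical.

# Sketch: plain TF-IDF lexical features plus a linear classifier for a
# binary speaker-state task (e.g., high vs. low level of interest).
# Generic baseline only, not the article's Discriminative TFIDF feature.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

transcripts = ["yeah that sounds really great tell me more",
               "okay I guess so whatever",
               "wow this is fascinating how does it work",
               "sure fine"]
labels = [1, 0, 1, 0]  # hypothetical high/low interest labels

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(transcripts, labels)
print(clf.predict(["tell me more about that"]))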

2008
Fadi Biadsy, Andrew Rosenberg, Rolf Carlson

Perception of charisma, the ability to influence others by virtue of one’s personal qualities, appears to be influenced to some extent by cultural factors. We compare results of five studies of charisma speech in which American, Palestinian, and Swedish subjects rated Standard American English political speech and Americans and Palestinians rated Palestinian Arabic speech. We identify acoustic-...
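Note: studies of this kind typically relate listener ratings to acoustic-prosodic measurements; the sketch below shows one such analysis as a Pearson correlation. All values are hypothetical and the study's actual statistics and features may differ.

# Sketch: correlating per-token charisma ratings with acoustic-prosodic
# features (mean f0, speaking rate). Values are hypothetical.
import numpy as np
from scipy.stats import pearsonr

charisma_rating = np.array([3.2, 4.1, 2.8, 4.5, 3.9, 2.5])
mean_f0_hz      = np.array([110, 145, 105, 160, 150, 100])
syllables_per_s = np.array([4.1, 5.0, 3.8, 5.4, 4.9, 3.6])

for name, feat in [("mean f0", mean_f0_hz), ("speaking rate", syllables_per_s)]:
    r, p = pearsonr(charisma_rating, feat)
    print(f"{name}: r={r:.2f}, p={p:.3f}")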

2013
Takatomo Kano, Shinnosuke Takamichi, Sakriani Sakti, Graham Neubig, Tomoki Toda, Satoshi Nakamura

In previous work, we proposed a model for speech-to-speech translation that is sensitive to paralinguistic information such as duration and power of spoken words [1]. This model uses linear regression to map source acoustic features to target acoustic features directly and in continuous space. However, while the model is effective, it faces scalability issues as a single model must be trained f...
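Note: the cited earlier model maps source-side acoustic features such as word duration and power directly to target-side features with linear regression. The sketch below shows that mapping in its simplest form; the feature layout and the aligned word-pair data are illustrative assumptions, not the authors' setup.

# Sketch: linear regression from source-word acoustic features
# (duration, power) to target-word features, as in a paralinguistics-
# aware speech-to-speech translation setup. Data are illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression

# rows: aligned source/target word pairs; columns: [duration_sec, power_db]
X_src = np.array([[0.30, 62.0], [0.55, 70.5], [0.42, 65.1], [0.25, 60.2]])
Y_tgt = np.array([[0.34, 60.5], [0.61, 69.0], [0.47, 63.8], [0.28, 58.9]])

model = LinearRegression().fit(X_src, Y_tgt)
new_source_word = np.array([[0.50, 68.0]])
print(model.predict(new_source_word))  # predicted target duration and power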

2011
Agustín Gravano, Rivka Levitan, Laura Willson, Stefan Benus, Julia Hirschberg, Ani Nenkova

We describe acoustic/prosodic and lexical correlates of social variables annotated on a large corpus of task-oriented spontaneous speech. We employ Amazon Mechanical Turk to label the corpus with a large number of social behaviors, examining results of three of these here. We find significant differences between male and female speakers for perceptions of attempts to be liked, likeability, spee...
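Note: a "significant difference between male and female speakers" in a prosodic feature is typically established with a two-group significance test; the sketch below uses an independent-samples (Welch's) t-test on speaking rate. The values are hypothetical, not drawn from the annotated corpus described here.

# Sketch: testing for a male/female difference in one prosodic feature
# (speaking rate, syllables per second). Values are hypothetical.
import numpy as np
from scipy.stats import ttest_ind

rate_female = np.array([4.8, 5.1, 4.9, 5.3, 5.0, 4.7])
rate_male   = np.array([4.4, 4.6, 4.2, 4.8, 4.5, 4.3])

t, p = ttest_ind(rate_female, rate_male, equal_var=False)  # Welch's t-test
print(f"t={t:.2f}, p={p:.3f}")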

2010
Marcel Kockmann, Lukás Burget, Jan Cernocký

This paper describes the Brno University of Technology (BUT) system for the Interspeech 2010 Paralinguistic Challenge. Our submitted systems for the Age- and Gender-Sub-Challenges employ fusions of several sub-systems. We make use of our own acoustic frame-based feature sets, as well as the provided utterance-based acoustic, prosodic and voice quality features. Modeling is based on Gaussian Mixture M...
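Note: the sketch below illustrates the general pattern of GMM-based classification over frame-level acoustic features (one GMM per class, decision by average frame log-likelihood) plus a simple weighted score fusion of two sub-systems. It is a simplified stand-in under assumed data, not the BUT challenge system.

# Sketch: one GMM per class over frame-level acoustic features, with
# classification by average frame log-likelihood and a linear fusion
# of two sub-system scores. Simplified stand-in, not the BUT system.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# hypothetical frame-level feature vectors (e.g., MFCC-like) per class
train = {"male": rng.normal(0.0, 1.0, (500, 13)),
         "female": rng.normal(0.5, 1.0, (500, 13))}

gmms = {c: GaussianMixture(n_components=8, covariance_type="diag",
                           random_state=0).fit(X)
        for c, X in train.items()}

test_frames = rng.normal(0.4, 1.0, (120, 13))   # frames of one test utterance
acoustic = {c: g.score_samples(test_frames).mean() for c, g in gmms.items()}

# hypothetical utterance-level scores from a second (prosodic) sub-system
prosodic = {"male": -19.2, "female": -18.4}
fused = {c: 0.7 * acoustic[c] + 0.3 * prosodic[c] for c in acoustic}
print(max(fused, key=fused.get))                # fused class decision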

2006
Mohammed E. Hoque, Mohammed Yeasin, Max M. Louwerse

This paper presents robust recognition of selected emotions from salient spoken words. Prosodic and acoustic features were used to extract intonation patterns and correlates of emotion from speech samples in order to develop and evaluate models of emotion. The computed features are projected using a combination of linear projection techniques for compact and clustered representation of ...
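Note: the abstract does not name the specific projections combined, so the sketch below assumes a common pairing, PCA followed by LDA, to obtain a compact, class-clustered representation of prosodic/acoustic feature vectors. Features and labels are randomly generated placeholders.

# Sketch: projecting prosodic/acoustic feature vectors with a combination
# of linear projections (PCA then LDA) into a compact, class-clustered
# space. PCA+LDA is an assumed combination, not necessarily the paper's.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 40))        # hypothetical per-word feature vectors
y = rng.integers(0, 3, size=120)      # hypothetical emotion labels (3 classes)

projector = make_pipeline(PCA(n_components=20),
                          LinearDiscriminantAnalysis(n_components=2))
Z = projector.fit_transform(X, y)
print(Z.shape)  # (120, 2): compact 2-D representation for clustering/plots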
