Search results for: acoustic correlation

Number of results: 473159

2017
Athanasios Tsirigotis, Despoina D. Deligianni

Citation: Tsirigotis A and Deligianni DD (2017) Combining Digital Image Correlation and Acoustic Emission for Monitoring of the Strain Distribution until Yielding during Compression of Bovine Cancellous Bone. Front. Mater. 4:44. doi: 10.3389/fmats.2017.00044
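As a rough illustration of the kind of analysis this entry describes, the sketch below correlates a DIC-derived strain history with cumulative acoustic emission activity from a compression test. All signal values, sampling choices, and variable names are hypothetical and are not taken from the cited study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, synchronously sampled records from a compression test:
# strain[i]    - mean compressive strain from digital image correlation at frame i
# ae_counts[i] - number of acoustic emission hits recorded during frame i
strain = np.linspace(0.0, 0.012, 200)                               # loading to ~1.2 % strain
ae_counts = rng.poisson(lam=np.clip(strain - 0.006, 0.0, None) * 4000)

# Cumulative AE activity is commonly compared against strain rather than raw hit counts
cum_ae = np.cumsum(ae_counts)

# Pearson correlation between the DIC strain history and cumulative AE activity
r = np.corrcoef(strain, cum_ae)[0, 1]
print(f"correlation between DIC strain and cumulative AE hits: r = {r:.3f}")
```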

2014
A. Abgottspon, T. Staubli

The low head hydro power plant in Wettingen, Switzerland, has been in operation since 1933, and no effective cam optimization has been carried out to date. The goal of the measuring campaign was to provide data enabling an economic study for a retrofit and upgrade project of the plant and to measure the efficiency improvement of the turbines after refurbishment. The contribution discusses cam te...
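For context on the cam tests mentioned here: on double-regulated (Kaplan-type) units, a cam relation maps guide-vane opening to runner-blade angle, and an optimal cam can be read off from off-cam efficiency measurements. The sketch below shows that selection step with an invented efficiency grid; it does not use the Wettingen data.

```python
import numpy as np

# Hypothetical off-cam index test: efficiency measured over a grid of
# guide-vane openings (deg) and runner-blade angles (deg).
gv_openings = np.array([10.0, 15.0, 20.0, 25.0])
blade_angles = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
# efficiency[i, j] = efficiency at gv_openings[i] and blade_angles[j]
efficiency = np.array([
    [0.86, 0.88, 0.87, 0.84, 0.80],
    [0.87, 0.90, 0.91, 0.89, 0.85],
    [0.85, 0.89, 0.92, 0.93, 0.90],
    [0.82, 0.87, 0.91, 0.94, 0.93],
])

# Optimal cam: for each guide-vane opening, pick the blade angle with peak efficiency
best = efficiency.argmax(axis=1)
for i, gv in enumerate(gv_openings):
    j = best[i]
    print(f"guide vane {gv:4.1f} deg -> blade angle {blade_angles[j]:4.1f} deg "
          f"(eta = {efficiency[i, j]:.2f})")
```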

Journal: The Journal of the Acoustical Society of America 2008
Jonathan Darch, Ben Milner, Saeed Vaseghi

The aim of this work is to develop methods that enable acoustic speech features to be predicted from mel-frequency cepstral coefficient (MFCC) vectors as may be encountered in distributed speech recognition architectures. The work begins with a detailed analysis of the multiple correlation between acoustic speech features and MFCC vectors. This confirms the existence of correlation, which is fo...
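The multiple correlation analysis this abstract refers to can be approximated with ordinary least squares: regress an acoustic feature on the MFCC vector and take the correlation between predicted and observed values as the multiple correlation coefficient. The sketch below does this on synthetic data; it is not the authors' implementation, and the feature names are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 500 frames, 13-dimensional MFCC vectors and one target
# acoustic feature per frame (e.g. log f0); all values are synthetic.
n_frames, n_ceps = 500, 13
mfcc = rng.normal(size=(n_frames, n_ceps))
true_w = rng.normal(size=n_ceps)
log_f0 = mfcc @ true_w + rng.normal(scale=0.5, size=n_frames)

# Least-squares prediction of the acoustic feature from the MFCC vector
X = np.column_stack([np.ones(n_frames), mfcc])        # add a bias term
w, *_ = np.linalg.lstsq(X, log_f0, rcond=None)
pred = X @ w

# Multiple correlation coefficient R = corr(predicted, observed)
R = np.corrcoef(pred, log_f0)[0, 1]
print(f"multiple correlation between MFCCs and log f0: R = {R:.3f}")
```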

Journal: Speech Communication 2010
Zhen-Hua Ling, Korin Richmond, Junichi Yamagishi

This paper presents an investigation into predicting the movement of a speaker’s mouth from text input using hidden Markov models (HMM). A corpus of human articulatory movements, recorded by electromagnetic articulography (EMA), is used to train HMMs. To predict articulatory movements for input text, a suitable model sequence is selected and a maximum-likelihood parameter generation (MLPG) algo...
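The maximum-likelihood parameter generation (MLPG) step named here solves c = (W'S^-1 W)^-1 W'S^-1 mu, where W stacks the static and delta windows and mu, S collect the per-frame means and variances selected from the HMMs. Below is a minimal one-dimensional sketch with made-up means and variances; it is not the authors' code and omits voiced/unvoiced handling and multi-stream details.

```python
import numpy as np

def mlpg_1d(mu_static, var_static, mu_delta, var_delta):
    """Maximum-likelihood parameter generation for one feature dimension.

    Solves (W' S^-1 W) c = W' S^-1 mu, where W stacks the identity (static)
    window and a central-difference delta window, and S is the diagonal
    covariance built from the per-frame static and delta variances.
    """
    T = len(mu_static)
    I = np.eye(T)
    # Central-difference delta window: delta_t = 0.5 * (c_{t+1} - c_{t-1})
    D = np.zeros((T, T))
    for t in range(T):
        if t > 0:
            D[t, t - 1] = -0.5
        if t < T - 1:
            D[t, t + 1] = 0.5
    W = np.vstack([I, D])                                # (2T, T)
    mu = np.concatenate([mu_static, mu_delta])           # (2T,)
    prec = np.concatenate([1.0 / var_static, 1.0 / var_delta])
    WtP = W.T * prec                                     # W' S^-1
    return np.linalg.solve(WtP @ W, WtP @ mu)

# Hypothetical HMM output: noisy static means and near-zero delta means
T = 50
t = np.linspace(0, np.pi, T)
mu_static = np.sin(t) + np.random.default_rng(2).normal(scale=0.1, size=T)
traj = mlpg_1d(mu_static, np.full(T, 0.1), np.zeros(T), np.full(T, 0.01))
print(traj[:5])   # smoothed articulatory trajectory, first five frames
```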

Journal: Journal of Teaching Language Skills 2012
Batool Alinezhad, Elkhas Vaysi

This paper aims to explore some acoustic properties (i.e. duration and pitch amplitude of speech) associated with three different emotions: anger, sadness and joy, against neutrality as a reference point, all being intentionally expressed by six Persian speakers. The primary purpose of this study is to find out if there is any correspondence between the given emotions and prosody patterning in P...
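A simple way to check the correspondence between emotion and prosody described in this abstract is to compare per-emotion means of utterance duration and pitch against the neutral baseline. The sketch below assumes a small table of per-utterance measurements; the numbers and field names are invented.

```python
import statistics
from collections import defaultdict

# Hypothetical per-utterance measurements: (emotion, duration in s, mean f0 in Hz)
utterances = [
    ("neutral", 1.52, 180.0), ("neutral", 1.48, 176.0),
    ("anger",   1.20, 231.0), ("anger",   1.15, 244.0),
    ("sadness", 1.95, 162.0), ("sadness", 2.04, 158.0),
    ("joy",     1.31, 225.0), ("joy",     1.27, 219.0),
]

by_emotion = defaultdict(list)
for emotion, duration, f0 in utterances:
    by_emotion[emotion].append((duration, f0))

baseline_dur = statistics.mean(d for d, _ in by_emotion["neutral"])
baseline_f0 = statistics.mean(f for _, f in by_emotion["neutral"])

for emotion, rows in by_emotion.items():
    dur = statistics.mean(d for d, _ in rows)
    f0 = statistics.mean(f for _, f in rows)
    print(f"{emotion:8s} mean duration {dur:.2f} s ({dur - baseline_dur:+.2f} vs neutral), "
          f"mean f0 {f0:.0f} Hz ({f0 - baseline_f0:+.0f} vs neutral)")
```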

Alireza Mohebbi, Marzieh Nojoumi, Artemis Erfan

Background & Aim: Objective assessment of the nasal airway is helpful in understanding nasal breathing function. Acoustic rhinometry is one of the most commonly used objective measurements of the nasal airway. This test can measure the cross-sectional areas of the nose at different distances as well as nasal volume, and it also determines the site of the minimal cross-sectional area. These variables are diff...
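For readers unfamiliar with the measurement: acoustic rhinometry reports cross-sectional area as a function of distance into the nasal cavity, so the minimal cross-sectional area (MCA) is the minimum of that curve and nasal volume is the area integrated over distance. A minimal sketch with an invented area-distance curve follows.

```python
import numpy as np

# Hypothetical area-distance curve as an acoustic rhinometer would report it:
# distance from the nostril (cm) and cross-sectional area (cm^2).
distance = np.linspace(0.0, 6.0, 25)
area = 1.2 - 0.6 * np.exp(-((distance - 2.0) ** 2) / 0.5) + 0.05 * distance

mca = area.min()                    # minimal cross-sectional area (cm^2)
mca_site = distance[area.argmin()]  # distance of the MCA from the nostril (cm)

# Nasal volume: integrate area over distance (trapezoidal rule)
volume = float(np.sum(0.5 * (area[1:] + area[:-1]) * np.diff(distance)))

print(f"MCA = {mca:.2f} cm^2 at {mca_site:.1f} cm; volume over 0-6 cm = {volume:.1f} cm^3")
```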

Introduction: Determination of the sound absorption coefficient is one of the most important factors in material selection for indoor noise control. The objectives of this study were: 1) to compare the sound absorption coefficients of different materials obtained by the standing wave ratio and transfer function methods, and 2) to develop a regression model for adjusting the obtained results. Methods: In this study, 46 a...
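In the standing wave ratio (impedance tube) method mentioned here, the normal-incidence absorption coefficient follows from the measured standing wave ratio as |R| = (SWR - 1)/(SWR + 1) and alpha = 1 - |R|^2. The sketch below applies that relation to invented SWR readings and fits a simple linear adjustment between the two methods, standing in for the regression model the study develops.

```python
import numpy as np

def absorption_from_swr(swr):
    """Normal-incidence absorption coefficient from the standing wave ratio:
    |R| = (SWR - 1) / (SWR + 1),  alpha = 1 - |R|^2."""
    swr = np.asarray(swr, dtype=float)
    reflection = (swr - 1.0) / (swr + 1.0)
    return 1.0 - reflection ** 2

# Hypothetical SWR readings for one material at several frequencies
swr_readings = [1.4, 2.0, 3.5, 6.0]
alpha_swr = absorption_from_swr(swr_readings)

# Hypothetical transfer-function results for the same samples
alpha_tf = np.array([0.91, 0.82, 0.62, 0.45])

# Simple linear regression adjusting one method's results toward the other
slope, intercept = np.polyfit(alpha_swr, alpha_tf, deg=1)
print("alpha (SWR method):", np.round(alpha_swr, 2))
print(f"adjustment model: alpha_tf ~= {slope:.2f} * alpha_swr + {intercept:.2f}")
```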

2001
Roland Göcke, J. Bruce Millar, Alexander Zelinsky, Jordi Robert-Ribes

This paper investigates the statistical relationship between acoustic and visual speech features for vowels. We extract such features from our stereo vision AV speech data corpus of Australian English. A principal component analysis is performed to determine which data points of the parameter curve for each feature are the most important ones to represent the shape of each curve. This is follow...
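One way to read the PCA step described here: each feature's parameter curve is sampled at a fixed set of points, the curves across tokens form a data matrix, and the loadings of the leading principal component indicate which sample points best represent the curve shape. The small synthetic sketch below illustrates this; it does not use the authors' corpus.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data matrix: 100 vowel tokens, each parameter curve sampled
# at 20 time points (e.g. a formant or mouth-width trajectory).
n_tokens, n_points = 100, 20
t = np.linspace(0, 1, n_points)
curves = (rng.normal(size=(n_tokens, 1)) * np.sin(np.pi * t)      # shape variation
          + rng.normal(scale=0.05, size=(n_tokens, n_points)))    # measurement noise

# PCA via the SVD of the mean-centred data matrix
centred = curves - curves.mean(axis=0)
_, s, vt = np.linalg.svd(centred, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)

# Sample points with the largest |loading| on PC1 represent the curve shape best
pc1 = vt[0]
key_points = np.argsort(np.abs(pc1))[::-1][:3]
print(f"PC1 explains {explained[0]:.1%} of the variance; "
      f"most informative sample points: {sorted(key_points.tolist())}")
```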

[Chart: number of search results per year]