Search results for: sgmm
Number of results: 70
Noise adaptive training (NAT) is an effective approach to normalise environmental distortions when training a speech recogniser on noise-corrupted speech. This paper investigates the model-based NAT scheme using joint uncertainty decoding (JUD) for subspace Gaussian mixture models (SGMMs). A typical SGMM acoustic model has a much larger number of surface Gaussian components, which makes it comput...
In this paper, we propose two techniques to improve the acoustic model of a low-resource language by: (i) Pooling data from closely related languages using a phoneme mapping algorithm to build acoustic models such as the subspace Gaussian mixture model (SGMM), phone cluster adaptive training (Phone-CAT), deep neural network (DNN) and convolutional neural network (CNN). Using the low-resource language ...
The bottleneck (BN) feature, particularly based on deep structures, has gained significant success in automatic speech recognition (ASR). However, applying the BN feature to small/medium-scale tasks is nontrivial. An obvious reason is that the limited training data prevent training a complicated deep network; another reason, which is more subtle, is that the BN feature tends to possess hig...
Common noise compensation techniques use vector Taylor series (VTS) to approximate the mismatch function. Recent work shows that the approximation accuracy may be improved by sampling. One such sampling technique is the unscented transform (UT), which draws samples deterministically from the clean speech and noise models to derive the noise-corrupted speech parameters. This paper applies UT to noise...
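To make the deterministic sampling step concrete, below is a minimal sketch of the unscented transform applied to noise compensation, assuming diagonal covariances, log-spectral features and the standard additive-noise mismatch y = x + log(1 + exp(n - x)); the function name and the kappa spreading parameter are illustrative and not taken from the paper.

```python
import numpy as np

def unscented_transform_corrupt(mu_x, var_x, mu_n, var_n, kappa=1.0):
    """Estimate noise-corrupted speech statistics with the unscented transform.

    Sketch under simplifying assumptions: diagonal covariances, log-spectral
    domain, and the additive-noise mismatch y = x + log(1 + exp(n - x));
    real systems typically work in the cepstral domain with DCT/IDCT terms.
    """
    d = len(mu_x)
    # Joint (augmented) distribution over clean speech and noise.
    mu_a = np.concatenate([mu_x, mu_n])          # (2d,)
    var_a = np.concatenate([var_x, var_n])       # diagonal of the joint covariance
    n_a = 2 * d

    # Deterministic sigma points: the joint mean plus/minus a scaled offset
    # along each dimension (diagonal-covariance case).
    scale = np.sqrt(n_a + kappa)
    offsets = scale * np.sqrt(var_a)
    sigma = [mu_a]
    for i in range(n_a):
        e = np.zeros(n_a)
        e[i] = offsets[i]
        sigma += [mu_a + e, mu_a - e]
    sigma = np.stack(sigma)                      # (2*n_a + 1, 2d)

    # Weights sum to one: kappa/(n_a+kappa) + 2*n_a * 1/(2*(n_a+kappa)).
    w0 = kappa / (n_a + kappa)
    wi = 1.0 / (2.0 * (n_a + kappa))
    weights = np.array([w0] + [wi] * (2 * n_a))

    # Pass each sigma point through the mismatch function.
    x_pts, n_pts = sigma[:, :d], sigma[:, d:]
    y_pts = x_pts + np.log1p(np.exp(n_pts - x_pts))

    # Weighted sample statistics of the corrupted speech.
    mu_y = weights @ y_pts
    var_y = weights @ (y_pts - mu_y) ** 2
    return mu_y, var_y
```

The key point is that the sigma points are placed deterministically at the joint mean and at symmetric offsets along each dimension, so, unlike Monte Carlo sampling, the estimate is repeatable and needs only 2d+1 function evaluations per Gaussian.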
Voice activity detection (VAD) is a basic component of noise reduction algorithms. In this paper, we propose a voice activity detector based on a sequential Gaussian Mixture Model (SGMM) in the log-spectral domain. This model comprises two Gaussian components, which respectively describe the speech and nonspeech log-power distributions. The initial distributions are first established by the EM algori...
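As a rough illustration of the two-component model, the batch-EM sketch below fits one "nonspeech" and one "speech" Gaussian to per-frame log-power and classifies frames by the speech posterior; the sequential, frame-by-frame update that gives the paper's model its name is not reproduced, and the initialization, threshold and function name are illustrative assumptions.

```python
import numpy as np

def vad_two_gaussian(log_power, n_em_iters=20, threshold=0.5):
    """Frame-level VAD from a two-component Gaussian mixture on log-power."""
    x = np.asarray(log_power, dtype=float)

    # Crude initialization: low-energy frames -> nonspeech, high-energy -> speech.
    lo, hi = np.percentile(x, [20, 80])
    mu = np.array([lo, hi])
    var = np.full(2, x.var() + 1e-6)
    pi = np.array([0.5, 0.5])

    for _ in range(n_em_iters):
        # E-step: responsibilities of each component for each frame.
        ll = (-0.5 * (x[:, None] - mu) ** 2 / var
              - 0.5 * np.log(2 * np.pi * var) + np.log(pi))
        ll -= ll.max(axis=1, keepdims=True)
        resp = np.exp(ll)
        resp /= resp.sum(axis=1, keepdims=True)

        # M-step: update weights, means and variances.
        nk = resp.sum(axis=0) + 1e-10
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6

    speech_idx = int(np.argmax(mu))          # component with the larger mean = speech
    return resp[:, speech_idx] > threshold   # boolean speech/nonspeech decision per frame
```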
In this paper, large vocabulary children’s speech recognition is investigated by using the Deep Neural Network Hidden Markov Model (DNN-HMM) hybrid and the Subspace Gaussian Mixture Model (SGMM) acoustic modeling approaches. In the investigated scenario, training data is limited to about 7 hours of speech from children in the 7-13 age range, and testing data consists of read clean speech from child...
In most state-of-the-art speech recognition systems, Gaussian mixture models (GMMs) are used to model the density of the emitting states in the hidden Markov models (HMMs). In a conventional system, the model parameters of each GMM are estimated directly and independently given the alignment. This results in a large number of model parameters to be estimated, and consequently, a large amount of...
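For contrast with the conventional per-state GMM described above, the basic (substate-free) SGMM parameterization derives each state's means and mixture weights from a low-dimensional state vector and globally shared projections, which is what reduces the number of state-specific parameters:

```latex
% Basic SGMM (no substates): state j stores only a low-dimensional vector v_j;
% the projections M_i, weight vectors w_i and covariances Sigma_i are shared
% across all states.
p(\mathbf{x}\mid j)
  = \sum_{i=1}^{I} w_{ji}\,\mathcal{N}\!\left(\mathbf{x};\,\boldsymbol{\mu}_{ji},\,\boldsymbol{\Sigma}_i\right),
\qquad
\boldsymbol{\mu}_{ji} = \mathbf{M}_i \mathbf{v}_j,
\qquad
w_{ji} = \frac{\exp\!\left(\mathbf{w}_i^{\top}\mathbf{v}_j\right)}
              {\sum_{i'=1}^{I}\exp\!\left(\mathbf{w}_{i'}^{\top}\mathbf{v}_j\right)}
```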
In this study, the effect of economic development and income distribution on poverty is estimated for 16 developing countries, including Turkey, over the years 2003-2016, using the System Generalized Method of Moments (SGMM), one of the dynamic panel data estimation methods. Two different models are used for this purpose. According to the results obtained from the SGMM estimator, the dependent variable in the model ... its own lagged value ...
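As background for the estimator, a generic dynamic panel specification of the kind System GMM is designed for is shown below; the concrete regressors of the study's two models are not reproduced here.

```latex
% Generic dynamic panel model estimated by System GMM (Blundell-Bond):
% the differenced equation is instrumented with lagged levels and the
% level equation with lagged first differences.
y_{it} = \alpha\, y_{i,t-1} + \mathbf{x}_{it}^{\prime}\boldsymbol{\beta}
         + \eta_i + \varepsilon_{it},
\qquad i = 1,\dots,N,\quad t = 2,\dots,T
```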
The goal of this study is to examine the impact of remittance inflows on inflation using the System Generalized Method of Moments (SGMM) and the Dumitrescu-Hurlin Granger causality approach in countries from Central and Eastern Europe over the period 1994-2019. As levels of economic and financial development vary considerably across these countries and some of them are member states of the European Union (EU), we split them into two more homogenous gro...
The recent success of the convolutional neural network (CNN) in speech recognition is due to its ability to capture translational variance in spectral features while performing discrimination. The CNN architecture requires correlated features as input, and thus the fMLLR transform, which is estimated in a de-correlated feature space, fails to give a significant improvement. In this paper, we propose two metho...