Search results for: rank normalization and z
Number of results: 16,871,593
Low-rank tensor approximations are plagued by a well-known problem: a tensor may fail to have a best rank-r approximation. Over R, it is known that such failures can occur with positive probability, sometimes with certainty: in R^{2×2×2}, every tensor of rank 3 fails to have a best rank-2 approximation. We will show that while such failures still occur over C, they happen with zero probability. I...
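A standard illustration of the phenomenon described in this abstract (not quoted from it) is the following rank-3 tensor in R^{2×2×2} that is a limit of rank-2 tensors, so no best rank-2 approximation can exist:

```latex
\[
  W \;=\; e_1\otimes e_1\otimes e_2 \;+\; e_1\otimes e_2\otimes e_1 \;+\; e_2\otimes e_1\otimes e_1,
  \qquad \operatorname{rank}(W)=3,
\]
\[
  W \;=\; \lim_{\varepsilon\to 0}\,\frac{1}{\varepsilon}\Bigl[(e_1+\varepsilon e_2)^{\otimes 3}-e_1^{\otimes 3}\Bigr],
\]
% Each bracketed term has rank 1, so every neighborhood of W contains rank-2 tensors:
% the infimum of the distance from W to the rank-<=2 set is 0 but is never attained.
```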
Background. A biofeedback program has received attention as an intervention program that helps athletes regulate their psychological state. Objectives. The purpose was to apply a heart rate variability (HRV) biofeedback training program and to validate its effectiveness for racket sports players. Methods. The participants were eight elite athletes (three men and five women): five squash players...
In this paper, we describe the ICSI 2007 language recognition system. The system constitutes a variant of the classic PPRLM (parallel phone recognizer followed by language modeling) approach. We used a combination of frame-by-frame multilayer perceptron (MLP) phone classifiers for English, Arabic, and Mandarin and one open loop hidden Markov Model (HMM) phone recognizer (trained on English data...
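The PPRLM scoring idea mentioned in this abstract can be sketched in a few lines. The following is a minimal sketch under stated assumptions, not the ICSI system: it assumes each parallel phone recognizer has already emitted a phone-token sequence, and scores that sequence with per-language phone bigram language models; the helper `train_phone_bigram` and the toy phone strings are hypothetical.

```python
import math
from collections import defaultdict

def train_phone_bigram(sequences):
    # Add-one-smoothed phone bigram LM; returns a log-probability scorer.
    counts, vocab = defaultdict(lambda: defaultdict(int)), set()
    for seq in sequences:
        padded = ["<s>"] + seq + ["</s>"]
        vocab.update(padded)
        for prev, cur in zip(padded, padded[1:]):
            counts[prev][cur] += 1
    V = len(vocab)
    def logprob(seq):
        padded = ["<s>"] + seq + ["</s>"]
        total = 0.0
        for prev, cur in zip(padded, padded[1:]):
            nexts = counts.get(prev, {})
            total += math.log((nexts.get(cur, 0) + 1) / (sum(nexts.values()) + V))
        return total
    return logprob

# Hypothetical usage: one recognizer's phone string scored against two language LMs;
# the language with the highest LM score wins (per-recognizer scores would then be fused).
lm = {"english": train_phone_bigram([["hh", "ax", "l", "ow"], ["g", "uh", "d"]]),
      "mandarin": train_phone_bigram([["n", "i", "h", "aw"], ["x", "ie"]])}
hypothesis = ["hh", "ax", "l", "ow"]
scores = {lang: score(hypothesis) for lang, score in lm.items()}
print(max(scores, key=scores.get))
```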
The minimum rank of a simple graph G is defined to be the smallest possible rank over all symmetric real matrices whose ijth entry (for i ≠ j) is nonzero whenever {i, j} is an edge in G and is zero otherwise. This paper introduces a new graph parameter, Z(G), that is the minimum size of a zero forcing set of vertices and uses it to bound the minimum rank for numerous families of graphs, often e...
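As an illustration of the zero forcing parameter Z(G) named in this abstract (the sketch and the example graph are mine, not the paper's): a filled vertex with exactly one unfilled neighbor forces that neighbor to become filled, and Z(G) is the size of a smallest starting set whose closure under this rule is the whole vertex set.

```python
from itertools import combinations

def forcing_closure(adj, start):
    """Apply the color-change rule until no more forces are possible."""
    filled = set(start)
    changed = True
    while changed:
        changed = False
        for v in list(filled):
            white = [u for u in adj[v] if u not in filled]
            if len(white) == 1:          # v forces its unique unfilled neighbor
                filled.add(white[0])
                changed = True
    return filled

def zero_forcing_number(adj):
    """Brute-force Z(G): smallest set whose closure is all of V(G)."""
    vertices = list(adj)
    for k in range(1, len(vertices) + 1):
        for subset in combinations(vertices, k):
            if forcing_closure(adj, subset) == set(vertices):
                return k, set(subset)

# Example: the 4-cycle C4 has Z(C4) = 2; two adjacent vertices force the rest.
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(zero_forcing_number(c4))  # (2, {0, 1})
```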
Let H_0(t), H_1(t) be real 2×2 matrix-functions with entries from L^1(0,1), with H_1(t) = H_1^*(t) and H_0(t) ≥ 0. We associate with these data the solution of the Cauchy problem for the differential system dA(t,z)/dt = A(t,z){zH_0(t) + H_1(t)}J, A(0,z) = I_2, where J = [[0, 1], [-1, 0]]. The matrix-function A(z) = A(1,z) is called the monodromy matrix of the corresponding system [5]. More generally, let ...
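A hedged numerical sketch of the Cauchy problem above (my illustration, not code from the cited work): it approximates the monodromy matrix A(1, z) by composing short-time matrix exponentials of the right-hand side; the particular H_0, H_1, and z below are arbitrary stand-ins.

```python
import numpy as np
from scipy.linalg import expm

J = np.array([[0.0, 1.0], [-1.0, 0.0]])

def monodromy(H0, H1, z, steps=2000):
    """Approximate A(1, z) for dA/dt = A(t,z){z*H0(t) + H1(t)}J, A(0,z) = I_2."""
    A = np.eye(2, dtype=complex)
    dt = 1.0 / steps
    for k in range(steps):
        t = (k + 0.5) * dt                  # midpoint of the k-th subinterval
        M = (z * H0(t) + H1(t)) @ J         # generator of the system at time t
        A = A @ expm(dt * M)                # right multiplication matches dA/dt = A*M
    return A

# Hypothetical data: constant H0 >= 0 and symmetric H1.
H0 = lambda t: np.array([[1.0, 0.0], [0.0, 0.5]])
H1 = lambda t: np.array([[0.0, 0.3], [0.3, 0.0]])
print(monodromy(H0, H1, z=1.0 + 0.5j))
```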
We review the performance of a new two-stage cascaded machine learning approach for rescoring keyword search output for low resource languages. In the first stage Confusion Networks (CNs) are rescored for improved Automatic Speech Recognition (ASR) by reranking the arcs of each confusion bin. In the second stage we generate keyword search hypotheses from the rescored ASR output and rescore them...
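A purely illustrative sketch of the first-stage idea in this abstract (not the paper's models): each confusion bin holds competing word arcs with posteriors, a hypothetical second-pass scorer reranks the arcs of every bin, and the 1-best hypothesis is read off the reranked bins.

```python
def rerank_bins(confusion_network, rescore):
    """Rerank the arcs of each confusion bin by a (hypothetical) rescoring function."""
    return [sorted(bin_, key=rescore, reverse=True) for bin_ in confusion_network]

def one_best(confusion_network):
    """Read the 1-best hypothesis off the top arc of every bin."""
    return [bin_[0][0] for bin_ in confusion_network if bin_[0][0] != "<eps>"]

# Toy CN: each bin is a list of (word, ASR posterior) arcs.
cn = [[("the", 0.6), ("a", 0.4)],
      [("cat", 0.5), ("cap", 0.45), ("<eps>", 0.05)]]
# Hypothetical second-pass scores, blended with the original posteriors.
second_pass = {"the": 0.2, "a": 0.8, "cat": 0.9, "cap": 0.1, "<eps>": 0.0}
reranked = rerank_bins(cn, lambda arc: 0.5 * arc[1] + 0.5 * second_pass[arc[0]])
print(one_best(reranked))  # ['a', 'cat']
```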
These are well known and important theorems of Nehari, Ahlfors and Weill, and Krauss. We refer to Lehto's book [8] for a discussion of these results and for the properties of the Schwarzian that we shall need. The constants 2 in (1.1) and 6 in (1.3) are sharp. An example for the latter is the Koebe function k(z) = z(1 - z)^{-2}, which has Schwarzian Sk(z) = -6/(1 - z^2)^2. We also remark that the class ...
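For reference (standard definitions, not quoted from the abstract), the Schwarzian derivative and its value for the Koebe function work out as follows:

```latex
% Schwarzian derivative of a locally injective holomorphic f:
\[
  S_f(z) \;=\; \left(\frac{f''(z)}{f'(z)}\right)' \;-\; \frac{1}{2}\left(\frac{f''(z)}{f'(z)}\right)^{2}.
\]
% For the Koebe function k(z) = z(1-z)^{-2}:
\[
  k'(z) = \frac{1+z}{(1-z)^{3}}, \qquad
  \frac{k''(z)}{k'(z)} = \frac{2(2+z)}{1-z^{2}}, \qquad
  S_k(z) = -\frac{6}{(1-z^{2})^{2}},
\]
% so |S_k(x)| = 6/(1-x^2)^2 on the real axis, showing that the constant 6 is sharp.
```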
Let X be a projective normal toric variety and T_0 a rank-one subtorus of the defining torus of X. We show that the normalization of the Chow quotient X//T_0, in the sense of Kapranov-Sturmfels-Zelevinsky, coarsely represents the moduli space of stable log maps to X with discrete data given by T_0 ⊂ X.
This paper presents some experiments with feature and score normalization for text-independent speaker verification of cellular data. The speaker verification system is based on cepstral features and Gaussian mixture models with 1024 components. The following methods, which have been proposed for feature and score normalization, are reviewed and evaluated on cellular data: cepstral mean subtrac...
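As a hedged sketch of two normalization techniques typically compared in such evaluations (cepstral mean subtraction for features, z-norm for scores); the arrays, the impostor-score source, and the numbers below are illustrative stand-ins, not the paper's data.

```python
import numpy as np

def cepstral_mean_subtraction(features):
    """Remove the per-utterance mean of each cepstral coefficient (frames x coeffs)."""
    return features - features.mean(axis=0, keepdims=True)

def znorm(raw_score, impostor_scores):
    """Z-norm: standardize a verification score using impostor-score statistics
    estimated offline for the claimed speaker's model."""
    mu, sigma = np.mean(impostor_scores), np.std(impostor_scores)
    return (raw_score - mu) / sigma

# Illustrative usage with random stand-in data.
rng = np.random.default_rng(0)
cepstra = rng.normal(size=(200, 13))                    # 200 frames of 13 cepstral coefficients
cepstra_cms = cepstral_mean_subtraction(cepstra)
impostors = rng.normal(loc=-1.0, scale=0.5, size=500)   # scores of impostor trials
print(znorm(raw_score=0.3, impostor_scores=impostors))
```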
We define a functor which gives the “global rank of a quiver representation” and prove that it has nice properties which make it a generalization of the rank of a linear map. We demonstrate how to construct other “rank functors” for a quiver Q, which induce ring homomorphisms (called “rank functions”) from the representation ring of Q to Z. These rank functions give discrete numerical invariant...