Search results for: kneser graph
Number of results: 198300
In this note, we investigate some properties of the local Kneser graphs defined in [8]. In this regard, as a generalization of the Erdős-Ko-Rado theorem, we characterize the maximum independent sets of local Kneser graphs. Next, we present an upper bound for their chromatic number.
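The snippet above does not reproduce the definition of local Kneser graphs from [8]. As a point of reference only, the following is a minimal sketch for the ordinary Kneser graph K(n, k), brute-forcing its independence number for a small case and comparing it with the Erdős-Ko-Rado bound C(n-1, k-1). The function names (`kneser_graph`, `independence_number`) are illustrative and not taken from the note.

```python
# Sketch: ordinary Kneser graph K(n, k) and a brute-force check of the
# Erdos-Ko-Rado bound on its independence number (tiny parameters only).
from itertools import combinations
from math import comb

def kneser_graph(n, k):
    """Vertices: k-subsets of {0,...,n-1}; edges join disjoint subsets."""
    vertices = [frozenset(c) for c in combinations(range(n), k)]
    edges = [(u, v) for u, v in combinations(vertices, 2) if not (u & v)]
    return vertices, edges

def independence_number(vertices, edges):
    """Brute-force maximum independent set size (exponential; tiny graphs only)."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for r in range(len(vertices), 0, -1):
        for subset in combinations(vertices, r):
            s = set(subset)
            if all(adj[v].isdisjoint(s) for v in subset):
                return r
    return 0

n, k = 5, 2                          # K(5, 2) is the Petersen graph
V, E = kneser_graph(n, k)
print(independence_number(V, E))     # 4
print(comb(n - 1, k - 1))            # 4: the EKR "star" families are maximum
```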
For a graph H, a graph G is H-induced-saturated if G does not contain an induced copy of H, but either removing an arbitrary edge from G or adding an arbitrary non-edge to G creates an induced copy of H. Depending on H, such graphs do not necessarily exist. In fact, Martin and Smith (2012) showed that P4-induced-saturated graphs do not exist, where Pk denotes the path on k vertices. Given that it is easy to construct Pk-induced-saturated graphs for k ∈ {2,3}, Axenovich and Csikós (2019) asked wh...
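To make the definition in this snippet concrete, here is a naive sketch of a checker for the Pk-induced-saturated property on tiny graphs given as adjacency-set dictionaries. It only illustrates the definition, is not a construction from the paper, and all helper names are mine.

```python
# Naive check that a small graph G is Pk-induced-saturated: G has no induced
# path on k vertices, yet deleting any edge or adding any non-edge creates one.
from itertools import combinations, permutations

def has_induced_path(adj, k):
    """True if some k vertices induce a path Pk (consecutive iff adjacent)."""
    for subset in combinations(adj, k):
        for order in permutations(subset):
            if all((order[j] in adj[order[i]]) == (j == i + 1)
                   for i in range(k) for j in range(i + 1, k)):
                return True
    return False

def is_pk_induced_saturated(adj, k):
    if has_induced_path(adj, k):
        return False
    for u, v in combinations(adj, 2):
        flipped = {x: set(adj[x]) for x in adj}
        if v in flipped[u]:                       # delete an existing edge
            flipped[u].discard(v); flipped[v].discard(u)
        else:                                     # add a non-edge
            flipped[u].add(v); flipped[v].add(u)
        if not has_induced_path(flipped, k):
            return False
    return True

# Two disjoint triangles have no induced P3, and every single edit creates one.
K3K3 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: {4, 5}, 4: {3, 5}, 5: {3, 4}}
print(is_pk_induced_saturated(K3K3, 3))   # True
```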
This paper consists of two loosely related notes on the domination number of graphs. In the first part, we provide a new upper bound for the domination number of d-regular graphs. Our bound is the best known for d ≥ 6. In the second part, we compute the exact domination number and total domination number of certain Kneser graphs, and we provide some bounds on the domination number of other Knes...
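For context only, and not the bounds from the paper: the domination number of a very small Kneser graph can be brute-forced directly. The sketch below, with illustrative names, recovers the well-known value 3 for K(5, 2), the Petersen graph.

```python
# Sketch: brute-force domination number of K(5, 2) (the Petersen graph).
from itertools import combinations

def domination_number(vertices, edges):
    """Smallest D such that every vertex is in D or adjacent to a member of D."""
    closed = {v: {v} for v in vertices}          # closed neighborhoods N[v]
    for u, v in edges:
        closed[u].add(v)
        closed[v].add(u)
    for r in range(1, len(vertices) + 1):
        for cand in combinations(vertices, r):
            covered = set().union(*(closed[v] for v in cand))
            if len(covered) == len(vertices):
                return r
    return len(vertices)

V = [frozenset(c) for c in combinations(range(5), 2)]        # K(5, 2)
E = [(u, v) for u, v in combinations(V, 2) if not (u & v)]
print(domination_number(V, E))                               # 3
```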
The paper presents an in-depth analysis of a lesser-known interaction between Kneser-Ney smoothing and entropy pruning that leads to severe degradation in language model performance under aggressive pruning regimes. Experiments in a data-rich setup such as google.com voice search show a significant impact in WER as well: pruning Kneser-Ney and Katz models to 0.1% of their original size impacts speech ...
We present an algorithm for re-estimating the parameters of backoff n-gram language models so as to preserve given marginal distributions, along the lines of the well-known Kneser-Ney (1995) smoothing. Unlike Kneser-Ney, our approach is designed to be applied to any given smoothed backoff model, including models that have already been heavily pruned. As a result, the algorithm avoids issues observed whe...
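As background for the Kneser-Ney reference, and emphatically not the paper's marginal-preserving re-estimation algorithm, a minimal interpolated Kneser-Ney bigram estimator might look as follows. The key ingredient is the continuation distribution built from distinct-context counts, which is what ties the lower-order estimate to marginal constraints. All names and the discount value are illustrative.

```python
# Sketch of standard interpolated Kneser-Ney for bigrams: the lower-order
# "continuation" probability counts distinct left contexts, not raw frequency.
from collections import Counter, defaultdict

def kneser_ney_bigram(tokens, discount=0.75):
    bigrams = Counter(zip(tokens, tokens[1:]))
    history_count = Counter(tokens[:-1])                  # c(w_{i-1})
    followers = defaultdict(set)                          # w -> distinct successors
    contexts = defaultdict(set)                           # w -> distinct predecessors
    for a, b in bigrams:
        followers[a].add(b)
        contexts[b].add(a)
    total_bigram_types = len(bigrams)

    def p_continuation(w):
        return len(contexts[w]) / total_bigram_types

    def p(w, history):
        c_hw = bigrams[(history, w)]
        c_h = history_count[history]
        if c_h == 0:
            return p_continuation(w)                      # unseen history: back off fully
        lam = discount * len(followers[history]) / c_h    # interpolation weight
        return max(c_hw - discount, 0.0) / c_h + lam * p_continuation(w)

    return p

p = kneser_ney_bigram("a rose is a rose is a rose".split())
print(p("rose", "a"), p("is", "a"))   # probabilities over the seen vocab sum to 1
```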
n-gram models are the most widely used language models in large-vocabulary continuous speech recognition. Since the size of the model grows rapidly with respect to the model order and the available training data, many methods have been proposed for pruning the least relevant n-grams from the model. However, correct smoothing of the n-gram probability distributions is important and performance may degr...