Search results for: irreducible aperiodic markov chain
Number of results: 352282
For an aperiodic, irreducible Markov chain with the non-negative integers as state space, it is shown that the existence of a solution to Σ_{j=0}^{∞} p_{ij} y_j ≤ y_i, i ≥ N > 0, in which y_i → ∞, is necessary and sufficient for recurrence, and the existence of a bounded solution to the same inequalities, with y_k < y_0, …, y_{N−1} for some k ≥ N, is necessary and sufficient for transience. RECURRENCE; TRANSIENCE Consider ...
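The criterion in the abstract above can be illustrated numerically. The following is a minimal sketch (the reflected random walk, the test function y_i = i, and the parameter p are illustrative assumptions, not taken from the paper): for a walk on {0, 1, 2, …} that steps up with probability p and down with probability 1 − p, the function y_i = i tends to infinity and satisfies the drift inequality Σ_j p_{ij} y_j ≤ y_i for all i ≥ 1 exactly when p ≤ 1/2, matching the fact that the walk is recurrent in that regime.

```python
# Sketch: checking the drift inequality sum_j p_ij * y_j <= y_i for y_i = i
# on a reflected random walk (an assumed example, not from the paper).

def drift(i, p):
    """One-step expected change of y(X) = X from state i >= 1.

    From i >= 1 the walk moves to i+1 with probability p and to i-1
    with probability 1-p, so the drift is p*(i+1) + (1-p)*(i-1) - i = 2p - 1.
    """
    return p * (i + 1) + (1 - p) * (i - 1) - i

# p = 0.4: downward drift, inequality holds everywhere -> recurrence
assert all(drift(i, 0.4) <= 0 for i in range(1, 100))

# p = 0.6: the inequality fails for the unbounded y, consistent with transience
assert all(drift(i, 0.6) > 0 for i in range(1, 100))
```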
In this paper we consider the transmission of classical information through a class of quantum channels with long-term memory, which are given by convex combinations of product channels. Hence, the memory of such channels is given by a Markov chain which is aperiodic but not irreducible. We prove the coding theorem and weak converse for this class of channels. The main techniques that we employ...
This paper is concerned with persistent system identification for plants equipped with binary sensors, where the unknown plant parameter is a random process represented by a Markov chain. We treat two classes of problems. In the first class, the parameter is a stochastic process modeled by an irreducible and aperiodic Markov chain with transition rates sufficiently faster than adaptation rates of...
The backoff protocol is widely used for sharing a common channel among several stations in communication networks. The Binary Exponential Backoff (BEB) improves the system throughput but increases the capture effect, permitting a station to seize the channel for a long time. In this paper, we introduce and analyze a new class of adaptive backoff protocols where a station changes its contention ...
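The Binary Exponential Backoff scheme mentioned above can be sketched in a few lines. This is a generic illustration under assumed conventions (slotted time, a doubling contention window capped at `cw_max`, uniform slot selection), not the adaptive protocol the paper itself introduces:

```python
import random

# Sketch of classical Binary Exponential Backoff: after each consecutive
# collision the contention window doubles (up to a cap), and the station
# waits a uniformly random number of slots within that window.

def beb_delay(collisions, cw_min=2, cw_max=1024):
    """Number of backoff slots drawn after `collisions` consecutive collisions."""
    window = min(cw_min * 2 ** collisions, cw_max)
    return random.randrange(window)  # uniform on {0, ..., window - 1}
```

Doubling the window spreads retransmissions over time and improves throughput, but, as the abstract notes, a station that recently succeeded keeps a small window and can repeatedly seize the channel (the capture effect).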
It is our pleasure to congratulate the authors (hereafter DKSC) on an interesting paper that was a delight to read. While DKSC provide a remarkable collection of connections between different representations of the Markov chains in their paper, we will focus on the “running time analysis” portion. This is a familiar problem to statisticians; given a target population, how can we obtain a repres...
A φ-irreducible and aperiodic Markov chain with stationary probability distribution will converge to its stationary distribution from almost all starting points. The property of Harris recurrence allows us to replace “almost all” by “all,” which is potentially important when running Markov chain Monte Carlo algorithms. Full-dimensional Metropolis–Hastings algorithms are known to be Harris recur...
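A full-dimensional random-walk Metropolis sampler of the kind referred to above can be sketched as follows. The standard normal target, proposal width, and step count are illustrative assumptions; the point is that such a chain is φ-irreducible and aperiodic, so it converges to its stationary distribution from almost all starting points, and Harris recurrence strengthens this to all starting points:

```python
import math
import random

# Sketch: one-dimensional random-walk Metropolis targeting a standard
# normal density (an illustrative target, chosen here for simplicity).

def rw_metropolis(n_steps, x0=0.0, step=1.0, seed=0):
    rng = random.Random(seed)
    log_pi = lambda x: -0.5 * x * x          # log target density, up to a constant
    x, chain = x0, []
    for _ in range(n_steps):
        y = x + rng.uniform(-step, step)     # symmetric proposal
        # accept with probability min(1, pi(y)/pi(x))
        if math.log(rng.random() + 1e-300) < log_pi(y) - log_pi(x):
            x = y
        chain.append(x)
    return chain

chain = rw_metropolis(20_000, seed=1)
```

For a well-behaved target like this, the empirical mean and variance of `chain` settle near the target's 0 and 1 as the run lengthens.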
We present new results about the temporal-difference learning algorithm, as applied to approximating the cost-to-go function of a Markov chain using linear function approximators. The algorithm we analyze performs on-line updating of a parameter vector during a single endless trajectory of an aperiodic irreducible finite state Markov chain. Results include convergence (with probability 1), a ch...
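The on-line setting described above, TD updates along a single trajectory of a finite aperiodic irreducible chain with a linear approximator, can be sketched as follows. The chain, costs, features, and step size here are illustrative assumptions, not the paper's; the sketch implements plain TD(0):

```python
import random

# Sketch of on-line TD(0) with linear function approximation on a small
# aperiodic, irreducible chain (all numbers below are assumed examples).

N = 3                                        # states 0, 1, 2
P = [[0.5, 0.5, 0.0],                        # row-stochastic transition matrix
     [0.2, 0.5, 0.3],
     [0.3, 0.0, 0.7]]
cost = [1.0, 0.0, 2.0]                       # per-state cost g(i)
gamma = 0.9                                  # discount factor
phi = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # feature vectors phi(i)

def td0(n_steps, alpha=0.01, seed=0):
    """Run TD(0) along one trajectory; returns the learned parameter vector."""
    rng = random.Random(seed)
    w = [0.0, 0.0]
    i = 0
    for _ in range(n_steps):
        j = rng.choices(range(N), weights=P[i])[0]     # sample next state
        v_i = sum(w[k] * phi[i][k] for k in range(2))  # current value estimates
        v_j = sum(w[k] * phi[j][k] for k in range(2))
        delta = cost[i] + gamma * v_j - v_i            # temporal-difference error
        for k in range(2):
            w[k] += alpha * delta * phi[i][k]          # stochastic update
        i = j
    return w
```

With a diminishing step size this update converges with probability 1 to the fixed point of the projected Bellman operator; the constant step size above keeps the sketch simple at the cost of residual fluctuation around that point.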
We modify the dynamic pivot mechanism of Bergemann and Välimäki (Econometrica, 2010) in such a way that lump-sum fees are collected from the players. We show that the modified mechanism satisfies ex-ante budget balance as well as ex-post efficiency, periodic ex-post incentive compatibility, and periodic ex-post individual rationality, as long as the Markov chain representing the evolution of pl...
Here we consider the Kohonen algorithm with a constant learning rate as a Markov process evolving in a topological space. It is shown that the process is an irreducible and aperiodic T-chain, regardless of the dimensions of both the data space and the network, and of the particular shape of the neighborhood function. Moreover, the validity of Doeblin's condition is proved. These imply the convergence in distribut...
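A single step of the constant-learning-rate Kohonen update analyzed above can be sketched as follows, for a one-dimensional network with a hard neighborhood cutoff (both simplifying assumptions of this sketch, not restrictions of the paper, which covers general dimensions and neighborhood shapes):

```python
# Sketch: one Kohonen (self-organizing map) step for a 1-D network of scalar
# weights, constant learning rate eps, and a hard neighborhood of given radius.

def kohonen_step(weights, x, eps=0.1, radius=1):
    """Move the winner and its neighbors toward the input x."""
    # winner: node whose weight is closest to the input
    win = min(range(len(weights)), key=lambda k: abs(weights[k] - x))
    return [w + eps * (x - w) if abs(k - win) <= radius else w
            for k, w in enumerate(weights)]
```

Iterating this map on random inputs yields exactly the kind of Markov process on the weight vectors that the abstract studies: with eps held constant, the chain does not converge pointwise, but (as a T-chain satisfying Doeblin's condition) it converges in distribution.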