Search results for: irreducible aperiodic markov chain

Number of results: 352282

2006
S. Sawyer

2. The Metropolis-Hastings Algorithm. Metropolis’ idea is to start with a Markov chain X_n on the state space X with a fairly arbitrary Markov transition density q(x, y) dy and then modify it to define a Markov chain X*_n that has π(x) as a stationary measure. By definition, q(x, y) is a Markov transition density if q(x, y) ≥ 0 and ∫_X q(x, y) dy = 1. If the transformed random walk X*_n is irre...
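
A minimal sketch of the construction: the block below uses a symmetric Gaussian proposal, so the Hastings ratio q(y, x)/q(x, y) cancels and the acceptance probability reduces to min(1, π(y)/π(x)). The target, function names, and parameters are illustrative, not taken from the paper.

```python
import math
import random

def metropolis(log_pi, x0, n_steps, step=1.0, seed=0):
    """Sketch of a Metropolis sampler with a symmetric Gaussian proposal
    q(x, y); by symmetry the acceptance ratio is just pi(y)/pi(x)."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, step)                 # propose y ~ q(x, .)
        if math.log(rng.random()) < log_pi(y) - log_pi(x):
            x = y                                    # accept the move
        samples.append(x)                            # on reject, stay at x
    return samples

# Target: standard normal, log pi(x) = -x^2 / 2 up to an additive constant.
samples = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=20000)
mean = sum(samples) / len(samples)
```

The empirical mean and variance of `samples` should approach 0 and 1, the moments of the stationary measure π.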

2003
Amy N. Langville Carl D. Meyer

This paper deals with the various changes that can be made to the basic PageRank model. We document the recent findings and add a few new contributions. These contributions concern (1) the sensitivity of the PageRank vector, (2) another method of forcing the Markov chain to be irreducible, and (3) a proof of the full spectrum of the PageRank matrix.
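
The irreducibility adjustment mentioned in (2) is commonly achieved by mixing the hyperlink chain with a uniform teleportation term. A hedged sketch (the example graph, α = 0.85, and the dangling-node rule are illustrative choices):

```python
def pagerank(adj, alpha=0.85, n_iter=200):
    """Power iteration on G = alpha * H + (1 - alpha) * (1/n) * ones,
    where H is the row-stochastic hyperlink matrix. The teleportation
    term gives every state a path to every other state, forcing the
    Markov chain to be irreducible (and aperiodic)."""
    n = len(adj)
    r = [1.0 / n] * n
    for _ in range(n_iter):
        new = [(1.0 - alpha) / n] * n
        for i, out in enumerate(adj):
            if out:                              # spread rank over outlinks
                share = alpha * r[i] / len(out)
                for j in out:
                    new[j] += share
            else:                                # dangling node: uniform
                share = alpha * r[i] / n
                for j in range(n):
                    new[j] += share
        r = new
    return r

# Tiny web graph: 0 -> 1, 1 -> 2, 2 -> 0 and 2 -> 1.
ranks = pagerank([[1], [2], [0, 1]])
```

Every component of the resulting PageRank vector is strictly positive, exactly because the modified chain is irreducible.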

2003
Jeffrey J. Hunter

A measure of the “mixing time” or “time to stationarity” in a finite irreducible discrete time Markov chain is considered. The statistic η_i = Σ_{j=1}^{m} π_j m_{ij}, where {π_j} is the stationary distribution and m_{ij} is the mean first passage time from state i to state j of the Markov chain, is shown to be independent of the state i that the chain starts in (so that η_i = η for all i), is minimal i...
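
The state-independence of η_i can be checked by hand on a two-state chain, where the mean first passage times have closed forms. A sketch under that assumption (the parameters a, b are arbitrary transition probabilities):

```python
def eta_two_state(a, b):
    """For P = [[1-a, a], [b, 1-b]]: pi = (b, a)/(a+b), m_12 = 1/a,
    m_21 = 1/b, and the mean return times m_ii = 1/pi_i.
    Returns (eta_1, eta_2) with eta_i = sum_j pi_j * m_ij."""
    pi1, pi2 = b / (a + b), a / (a + b)
    eta1 = pi1 * (1.0 / pi1) + pi2 * (1.0 / a)
    eta2 = pi1 * (1.0 / b) + pi2 * (1.0 / pi2)
    return eta1, eta2

e1, e2 = eta_two_state(0.3, 0.5)   # both equal 1 + 1/(a + b) = 2.25
```

Both starting states give the same value 1 + 1/(a + b), illustrating the paper's claim that η_i = η for all i.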

1988
David J. Aldous

Let (X_t) be an irreducible continuous-time pure jump Markov chain on finite state space I = {i, j, k, . . .} with stationary distribution π. Classical theory says P(X_t = j) → π_j as t → ∞ for all j, regardless of the initial distribution. The modern ‘coupling’ proof goes as follows. Let (Y_t) be an independent copy of the chain. Then (X_t, Y_t), considered as a chain on I × I, is irreducible and h...
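
A discrete-time analogue of the coupling argument can be simulated directly: run two independent copies from different starting states and record when they first occupy the same state. A sketch with illustrative names and an arbitrary irreducible aperiodic transition matrix (the paper itself works in continuous time):

```python
import random

def coupling_time(P, i, j, seed=0, max_steps=10_000):
    """Run two independent copies of the chain P from states i and j
    and return the first time they meet; after that instant the
    coupling argument merges them into a single copy."""
    rng = random.Random(seed)

    def step(s):
        u, acc = rng.random(), 0.0
        for t, p in enumerate(P[s]):
            acc += p
            if u < acc:
                return t
        return len(P[s]) - 1

    x, y = i, j
    for n in range(max_steps):
        if x == y:
            return n
        x, y = step(x), step(y)
    return None                     # no meeting within max_steps

P = [[0.5, 0.5, 0.0], [0.25, 0.5, 0.25], [0.0, 0.5, 0.5]]
t = coupling_time(P, 0, 2)
```

Because the product chain on I × I is irreducible, which is exactly the point the abstract is developing, the meeting time is finite almost surely.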

Journal: :Kybernetika 1989
Rolando Cavazos-Cadena

Average cost Markov decision chains with discrete time parameter are considered. The cost function is unbounded and satisfies an additional condition which frequently holds in applications. Also, we assume that there exists a single stationary policy for which the corresponding Markov chain is irreducible and ergodic with finite average cost. Within this framework, the existence of an average c...

Journal: :APJOR 2013
Jeffrey J. Hunter

The distribution of the “mixing time” or the “time to stationarity” in a discrete time irreducible Markov chain, starting in state i, can be defined as the number of trials to reach a state sampled from the stationary distribution of the Markov chain. Expressions for the probability generating function, and hence the probability distribution of the mixing time starting in state i are derived an...
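
One Monte Carlo draw of this mixing time can be sketched as follows, using a doubly stochastic example chain so that π is uniform; the function and chain are illustrative, not the paper's closed-form derivation.

```python
import random

def mixing_time_draw(P, pi, i, seed=1):
    """One realization of the mixing time from state i: draw a target
    state from the stationary distribution pi, then count the steps the
    chain started at i needs to first reach that target."""
    rng = random.Random(seed)

    def draw(dist):
        u, acc = rng.random(), 0.0
        for s, p in enumerate(dist):
            acc += p
            if u < acc:
                return s
        return len(dist) - 1

    target = draw(pi)
    x, steps = i, 0
    while x != target:
        x = draw(P[x])
        steps += 1
    return steps

# Doubly stochastic 3-state chain, so pi is uniform.
P = [[0.5, 0.3, 0.2], [0.2, 0.5, 0.3], [0.3, 0.2, 0.5]]
t = mixing_time_draw(P, [1 / 3, 1 / 3, 1 / 3], 0)
```

Repeating such draws estimates the distribution whose probability generating function the paper derives exactly.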

2003
Ciamac Moallemi Benjamin Van Roy

1 Markov Decision Processes. Consider a Markov chain (w(k), a(k)) defined for k = 0, 1, . . . and with w(k) ∈ W, a(k) ∈ A, where W and A are finite sets representing the system state space and the action space, respectively. The transition probabilities are defined by the function P_θ(w′, a′, w, a) = Pr(w(k + 1) = w, a(k + 1) = a | w(k) = w′, a(k) = a′). Here, θ ∈ R^N is a vector of policy paramete...
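
The pair transition P_θ factorizes as the state transition times the policy: P_θ(w′, a′, w, a) = P(w | w′, a′) · π_θ(a | w). A hedged sketch using a hypothetical softmax parameterization (the abstract does not specify this form):

```python
import math

def softmax_policy(theta, w, actions):
    """Hypothetical softmax policy: Pr(a | w) proportional to
    exp(theta[(w, a)]), with theta playing the role of the policy
    parameter vector from the abstract."""
    z = [math.exp(theta.get((w, a), 0.0)) for a in actions]
    z_sum = sum(z)
    return [p / z_sum for p in z]

def pair_transition(P_state, theta, actions, w_p, a_p, w, a):
    """P_theta(w', a', w, a) = P(w | w', a') * pi_theta(a | w)."""
    pi = softmax_policy(theta, w, actions)
    return P_state[(w_p, a_p)].get(w, 0.0) * pi[actions.index(a)]

# Toy MDP: two states, two actions; action 1 tends to flip the state.
P_state = {
    (0, 0): {0: 0.9, 1: 0.1}, (0, 1): {0: 0.2, 1: 0.8},
    (1, 0): {1: 0.9, 0: 0.1}, (1, 1): {1: 0.2, 0: 0.8},
}
theta = {(0, 0): 1.0, (1, 1): 1.0}
total = sum(pair_transition(P_state, theta, [0, 1], 0, 1, w, a)
            for w in (0, 1) for a in (0, 1))
```

Summing over all next pairs (w, a) gives 1, confirming that P_θ is a proper transition kernel on the joint state-action space.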

Journal: :Journal of Mathematical Analysis and Applications 2023

Most of the existing literature on supervised machine learning problems focuses on the case when the training data set is drawn from an i.i.d. sample. However, many practical problems are characterized by temporal dependence and strong correlation between the marginals of the data-generating process, suggesting that the i.i.d. assumption is not always justified. This problem has already been considered in the context of Markov chains satisfying ...

Journal: :CoRR 2010
Girish Varma

We show the following. Theorem. Let M be a finite-state ergodic time-reversible Markov chain with transition matrix P and conductance φ. Let λ ∈ (0, 1) be an eigenvalue of P. Then φ + λ ≤ 1. This strengthens the well-known [4, 3, 2, 1, 5] inequality λ ≤ 1 − φ²/2. We obtain our result by a slight variation in the proof method in [5, 4]; the same method was used earlier in [6] to obtain the same i...
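
The inequality φ + λ ≤ 1 can be verified on a two-state reversible chain, where both quantities have closed forms. A sketch under that assumption (the parameters a, b are arbitrary):

```python
def conductance_check(a, b):
    """Two-state chain P = [[1-a, a], [b, 1-b]]: nontrivial eigenvalue
    lam = 1 - a - b, stationary pi = (b, a)/(a+b), and conductance
    phi = Q(S, S^c) / pi(S) minimized over cuts with pi(S) <= 1/2."""
    pi0, pi1 = b / (a + b), a / (a + b)
    lam = 1.0 - a - b
    q = pi0 * a              # ergodic flow; equals pi1 * b by reversibility
    phi = q / min(pi0, pi1)  # only one nontrivial cut in a 2-state chain
    return phi, lam

phi, lam = conductance_check(0.3, 0.5)   # phi = 0.5, lam = 0.2
```

Here λ = 0.2 ∈ (0, 1) and φ + λ = 0.7 ≤ 1, which in this example is sharper than the classical bound λ ≤ 1 − φ²/2 = 0.875.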

[Chart: number of search results per year]