Search results for: discrete time markov chain

Number of results: 2,264,072

2011
Yuval Peres, Perla Sousi

We consider irreducible reversible discrete time Markov chains on a finite state space. Mixing times and hitting times are fundamental parameters of the chain. We relate them by showing that the mixing time of the lazy chain is equivalent to the maximum over initial states x and large sets A of the hitting time of A starting from x. We also prove that the first time when averaging over two cons...
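For a small finite chain, the hitting times discussed in this abstract can be computed exactly by solving a linear system: h(x) = 0 for x in the target set A, and h(x) = 1 + Σ_j P[x][j] h(j) otherwise. The sketch below is purely illustrative (the function name, the elimination routine, and the example chain are assumptions, not taken from the paper):

```python
def expected_hitting_times(P, A):
    """Expected hitting times of the set A for a finite Markov chain.

    P is a row-stochastic transition matrix; A is a set of target states.
    Solves h(x) = 0 for x in A and h(x) = 1 + sum_j P[x][j] * h(j) otherwise,
    by Gaussian elimination on the states outside A (stdlib only).
    """
    n = len(P)
    free = [x for x in range(n) if x not in A]
    idx = {x: k for k, x in enumerate(free)}
    m = len(free)
    # Augmented system (I - P restricted to free states) h = 1.
    M = [[(1.0 if x == y else 0.0) - P[x][y] for y in free] + [1.0]
         for x in free]
    # Gauss-Jordan elimination with partial pivoting.
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(m):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                for c in range(col, m + 1):
                    M[r][c] -= f * M[col][c]
    h = [0.0] * n
    for x in free:
        k = idx[x]
        h[x] = M[k][m] / M[k][k]
    return h

# Walk on {0, 1, 2}: reflecting at 0, absorbing at 2; hit the set {2}.
P = [[0.0, 1.0, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 0.0, 1.0]]
h = expected_hitting_times(P, {2})  # h == [4.0, 3.0, 0.0]
```

Maximizing such h(x) over starting states x and suitably large sets A gives the quantity the abstract relates to the mixing time of the lazy chain.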

2008
Víctor Ruiz

We introduce a class of stochastic processes in discrete time with finite state space by means of a simple matrix product. We show that this class coincides with that of the hidden Markov chains and provides a compact framework for it. We study a measure obtained by a projection on the real line of the uniform measure on the Sierpinski gasket, finding that the dimension of this measure fits wit...
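The matrix-product representation of hidden Markov chains can be illustrated as follows: the probability of an observation sequence is a product of per-symbol matrices applied to the initial distribution. The emission convention used here (emit in the current state, then move) is one common choice and an assumption on my part; the abstract does not fix one:

```python
def sequence_probability(pi, A, B, obs):
    """Probability of an observation sequence from a hidden Markov chain.

    pi: initial distribution over hidden states.
    A:  hidden-state transition matrix, A[i][j] = Pr(move i -> j).
    B:  emission matrix, B[i][y] = Pr(emit symbol y | state i).
    Computes pi * M_{y_1} * ... * M_{y_T} * 1, where the observation
    matrix is M_y[i][j] = B[i][y] * A[i][j].
    """
    n = len(pi)
    v = list(pi)  # row vector, updated as v <- v * M_y
    for y in obs:
        v = [sum(v[i] * B[i][y] * A[i][j] for i in range(n))
             for j in range(n)]
    return sum(v)

# Two hidden states, two symbols: single-step probabilities sum to 1.
pi = [0.5, 0.5]
A = [[0.5, 0.5], [0.5, 0.5]]
B = [[0.7, 0.3], [0.2, 0.8]]
p0 = sequence_probability(pi, A, B, [0])  # 0.45
p1 = sequence_probability(pi, A, B, [1])  # 0.55
```

Stacking the M_y matrices recovers the "simple matrix product" framework the abstract refers to: every finite-state process defined this way is a hidden Markov chain, and conversely.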

2007
Arturo Leccadito, Sergio Ortobelli Lozza, Emilio Russo

This paper proposes Markovian models in portfolio theory and risk management. First, we describe discrete-time optimal allocation models. Then we examine the investor's optimal choices either when the returns are uniquely determined by their mean and variance or when they are modeled by a Markov chain. We subject these models to back-testing on out-of-sample data in order to assess their f...

2005

We shall only consider Markov chains with a finite, but usually very large, state space S = {1, ..., n}. An S-valued (discrete-time) stochastic process is a sequence X_0, X_1, X_2, ... of S-valued random variables over some probability space Ω, i.e. a sequence of (measurable) maps X_t : Ω → S, t = 0, 1, 2, ... Such a process is a Markov chain if for all t ≥ 0 and any i_0, i_1, ..., i_{t-1}, i, j ∈ S...
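The definition above translates directly into a simulation: the next state depends only on the current one, through a row-stochastic transition matrix. A minimal stdlib-only sketch (the function name and the example two-state chain are illustrative):

```python
import random

def simulate_chain(P, x0, steps, seed=0):
    """Simulate a discrete-time Markov chain on states 0..n-1.

    P is a row-stochastic transition matrix: P[i][j] = Pr(X_{t+1} = j | X_t = i).
    Returns the trajectory X_0, X_1, ..., X_steps.
    """
    rng = random.Random(seed)
    path = [x0]
    x = x0
    for _ in range(steps):
        # Sample the next state from the row P[x] by inverse transform.
        u = rng.random()
        acc = 0.0
        for j, p in enumerate(P[x]):
            acc += p
            if u < acc:
                x = j
                break
        path.append(x)
    return path

# Two-state chain: from state 0 stay with prob 0.9; from state 1 stay with prob 0.5.
P = [[0.9, 0.1],
     [0.5, 0.5]]
traj = simulate_chain(P, x0=0, steps=10)
```

The Markov property is visible in the code: the sampling loop reads only `P[x]`, the row of the current state, never the earlier history `i_0, ..., i_{t-1}`.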

2013
Christian Bayer, Hilmar Mai, John Schoenmakers

We develop an EM algorithm for estimating parameters that determine the dynamics of a discrete time Markov chain evolving through a certain measurable state space. As a key tool for the construction of the EM method we develop forward-reverse representations for Markov chains conditioned on a certain terminal state. These representations may be considered as an extension of the earlier work [1]...

Journal: :CoRR 2008
Florian Horn, Hugo Gimbert

We prove that optimal strategies exist in perfect-information stochastic games with finitely many states and actions and tail winning conditions. This proof differs from the algorithmic proof sketched in [Hor08]. 1. Perfect-Informati...

2000
Adam Czornik, Andrzej Swierniak

In this paper we consider the problem of controllability of discrete-time linear systems with randomly jumping parameters described by a finite-state Markov chain. The equivalence of three concepts of controllability is shown. Moreover, necessary and sufficient conditions for all of them are presented. Key words: jump linear systems, controllability, stabilizability, Markov proces...

2013
James Ledoux Laurent Truffet

In this paper, we obtain Markovian bounds on a function of a homogeneous discrete time Markov chain. For deriving such bounds, we use well known results on stochastic majorization of Markov chains and the Rogers-Pitman’s lumpability criterion. The proposed method of comparison between functions of Markov chains is not equivalent to generalized coupling method of Markov chains although we obtain...

M. Khodabin

In this paper, the ambiguity of finite-state irreducible Markov chain trajectories is reviewed and is obtained for the two-state Markov chain. We give an applicable example of this concept in a presidential election.
