Search results for: continuous time markov chain

Number of results: 2,344,467

2009
Richard F. Bass, Takashi Kumagai, Toshihiro Uemura

For each n, let Y^{(n)}_t be a continuous-time symmetric Markov chain with state space n^{-1} Z^d. Conditions in terms of the conductances are given for the convergence of the Y^{(n)}_t to a symmetric Markov process Y_t on R^d. We have weak convergence of {Y^{(n)}_t : t ≤ t_0} for every t_0 and every starting point. The limit process Y has a continuous part and may also have jumps.
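
In symbols, the stated convergence reads (our paraphrase, not the authors' exact formulation):

$$
\{Y^{(n)}_t : t \le t_0\} \;\Rightarrow\; \{Y_t : t \le t_0\} \qquad \text{for every } t_0 > 0 \text{ and every starting point},
$$

where $\Rightarrow$ denotes weak convergence of processes, presumably on the Skorokhod space $D([0,t_0];\mathbb{R}^d)$ since the limit may have jumps.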

Journal: IEEE Trans. Automat. Contr., 2000
Arnaud Doucet, Andrew Logothetis, Vikram Krishnamurthy

Jump Markov linear systems are linear systems whose parameters evolve with time according to a finite-state Markov chain. Given a set of observations, our aim is to estimate the states of the finite-state Markov chain and the continuous (in space) states of the linear system. The computational cost of computing conditional mean or maximum a posteriori (MAP) state estimates of the Markov chain o...
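
As a concrete illustration of this model class, a minimal simulation sketch (the matrices, noise levels, and the name `simulate_jmls` are made up; this is not the authors' estimator):

```python
import numpy as np

# Minimal jump Markov linear system (JMLS) simulator -- illustrative only.
# Model: s_t is a finite-state Markov chain with transition matrix Pi;
#        x_t = A[s_t] x_{t-1} + w_t,   y_t = C[s_t] x_t + v_t.

rng = np.random.default_rng(0)

Pi = np.array([[0.95, 0.05],
               [0.10, 0.90]])                 # mode-chain transition matrix
A = [np.array([[0.9]]), np.array([[0.5]])]    # per-mode dynamics (1-D example)
C = [np.array([[1.0]]), np.array([[1.0]])]    # per-mode observation matrices

def simulate_jmls(T=100):
    s, x = 0, np.zeros(1)                     # initial mode and continuous state
    modes, obs = [], []
    for _ in range(T):
        s = rng.choice(2, p=Pi[s])            # jump: sample the next mode
        x = A[s] @ x + 0.1 * rng.standard_normal(1)   # linear state update + noise
        y = C[s] @ x + 0.1 * rng.standard_normal(1)   # noisy observation
        modes.append(s)
        obs.append(y.item())
    return modes, obs

modes, obs = simulate_jmls()
```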

Journal: Foundations and Trends in Signal Processing, 2013
Yariv Ephraim, Brian L. Mark

A bivariate Markov process comprises a pair of random processes which are jointly Markov. One of the two processes in that pair is observable while the other plays the role of an underlying process. We are interested in three classes of bivariate Markov processes. In the first and major class of interest, the underlying and observable processes are continuous-time with finite alphabet; in the s...
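
For reference, the defining property in the notation we adopt here (not necessarily the authors'): the pair

$$
Z_t = (X_t, Y_t), \qquad \Pr\bigl(Z_{t+s} \in B \mid Z_u,\, u \le t\bigr) = \Pr\bigl(Z_{t+s} \in B \mid Z_t\bigr), \quad s \ge 0,
$$

is Markov jointly, with $Y_t$ the observable component and $X_t$ the underlying one; neither marginal process need be Markov on its own.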

2004
Christel Baier, Boudewijn R. Haverkort, Holger Hermanns, Joost-Pieter Katoen

A continuous-time Markov decision process (CTMDP) is a generalization of a continuous-time Markov chain in which both probabilistic and nondeterministic choices co-exist. This paper presents an efficient algorithm to compute the maximum (or minimum) probability to reach a set of goal states within a given time bound in a uniform CTMDP, i.e., a CTMDP in which the delay time distribution per stat...
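
A minimal sketch of the underlying idea, truncated uniformization plus backward induction over jump counts, with made-up matrices; this illustrates the general recursion, not the paper's algorithm:

```python
import numpy as np
from math import exp, factorial

# Sketch: max probability of reaching the goal state within time bound T in a
# *uniform* CTMDP (common exit rate E for every state/action pair).
E, T = 2.0, 1.5
K = 40                                   # truncation depth; error <= tail Poisson mass
psi = [exp(-E * T) * (E * T) ** k / factorial(k) for k in range(K + 1)]

# Embedded DTMC of the uniformized CTMDP, one matrix per action,
# with the goal state (index 2) made absorbing.
P = [np.array([[0.2, 0.8, 0.0],
               [0.4, 0.1, 0.5],
               [0.0, 0.0, 1.0]]),
     np.array([[0.6, 0.3, 0.1],
               [0.1, 0.1, 0.8],
               [0.0, 0.0, 1.0]])]
goal = np.array([0.0, 0.0, 1.0])

# Backward induction with step-dependent reward psi_k * 1_goal:
#   V_K = psi_K * 1_goal,   V_k = psi_k * 1_goal + max_a P_a V_{k+1}.
V = psi[K] * goal
for k in range(K - 1, -1, -1):
    V = psi[k] * goal + np.max([Pa @ V for Pa in P], axis=0)
print(V)   # V[s]: approx. max probability of reaching the goal from s by time T
```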

Thesis: Ministry of Science, Research and Technology - Allameh Tabataba'i University - Faculty of Economics, 1389

In this thesis we show how the Sparre Andersen insurance risk model can be defined using Markov chains. Then, using matrix-analytic methods, we compute the probability of ruin, the surplus at ruin, and the deficit at the time of ruin. Our aim in this thesis is considerably more computational and applied than the methods previously proposed for computing this probability. First, we sho...
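
A minimal Monte Carlo sketch of the first quantity, the finite-horizon ruin probability of a Sparre Andersen model (made-up distributions and parameters; plain simulation rather than the matrix-analytic approach of the thesis):

```python
import numpy as np

# Sparre Andersen model: i.i.d. inter-claim times, i.i.d. claim sizes,
# premiums collected at constant rate c.  Ruin = surplus drops below zero
# at some claim epoch within the horizon.

rng = np.random.default_rng(1)

def ruin_probability(u=10.0, c=1.2, horizon=100.0, n_paths=5000):
    """u: initial surplus, c: premium rate, horizon: time window."""
    ruined = 0
    for _ in range(n_paths):
        t, claims = 0.0, 0.0
        while True:
            t += rng.gamma(2.0, 0.5)          # inter-claim time (example: Gamma)
            if t > horizon:
                break
            claims += rng.exponential(1.0)    # claim size (example: exponential)
            if u + c * t - claims < 0.0:      # surplus at this claim epoch < 0
                ruined += 1
                break
    return ruined / n_paths

print(ruin_probability())
```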

Journal: Finance and Stochastics, 2005
Ragnar Norberg

Conditional expected values in Markov chains are solutions to a set of associated backward differential equations, which may be ordinary or partial depending on the number of relevant state variables. This paper investigates the validity of these differential equations by locating the points of non-smoothness of the state-wise conditional expected values, and it presents a numerical method for ...
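
A prototypical instance of such a backward system (our illustration for the simplest functional, not the paper's general setting): for a Markov chain $Z$ with transition intensities $\mu_{ij}(t)$ and a terminal payoff $f$, the state-wise conditional expected values $V_i(t) = \mathrm{E}[f(Z_T) \mid Z_t = i]$ solve the ordinary differential equations

$$
\frac{d}{dt} V_i(t) + \sum_{j \ne i} \mu_{ij}(t)\,\bigl(V_j(t) - V_i(t)\bigr) = 0, \qquad V_i(T) = f(i),
$$

one equation per state; partial differential equations arise once additional continuous state variables are relevant.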

2007
Fabrizio Leisen, Antonietta Mira

Peskun ordering is a partial ordering defined on the space of transition matrices of discrete time Markov chains. If the Markov chains are reversible with respect to a common stationary distribution π, Peskun ordering implies an ordering on the asymptotic variances of the resulting Markov chain Monte Carlo estimators of integrals with respect to π. Peskun ordering is also relevant in the framew...
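
For reference, the standard definition and its consequence (as usually stated; the paper's generalization may differ): for transition matrices $P_1, P_2$ on the same state space, both reversible with respect to $\pi$,

$$
P_1 \succeq P_2 \;\Longleftrightarrow\; P_1(x, y) \ge P_2(x, y) \ \text{ for all } x \ne y,
$$

and $P_1 \succeq P_2$ implies $v(f, P_1, \pi) \le v(f, P_2, \pi)$ for every $f \in L^2(\pi)$, where $v$ denotes the asymptotic variance of the MCMC estimator of $\int f \, d\pi$.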

2009
Ievgen Karnaukh

The distribution of extrema and overjump functionals for semi-continuous processes (processes that cross a positive or negative level continuously) defined on a Markov chain has been considered by many authors (for instance, see [1], [3]). In the paper [4] the distribution of extrema for almost semi-continuous processes was treated (processes that cross a positive or negative level by exponenti...

2007
Werner Sandmann

A discrete-time conversion is applied to the continuous-time Markov process that describes the dynamics of biochemically reacting systems within the discrete-state stochastic modeling approach (chemical master equation approach). This yields a stochastically identical discrete-time Markov chain and a corresponding formulation of the chemical master equation. Simulating the resulting chain is equiv...
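
One standard construction of such a conversion is uniformization; a sketch with a made-up generator (the paper's conversion and its use in simulation may differ):

```python
import numpy as np
from math import exp, factorial

# Uniformization: convert a CTMC with generator Q into a DTMC with
#   P = I + Q / Lam,   Lam >= max_i |Q[i, i]|,
# so that the CTMC at time t is the DTMC subordinated to a Poisson(Lam*t) clock.

Q = np.array([[-2.0,  2.0,  0.0],
              [ 1.0, -3.0,  2.0],
              [ 0.0,  1.0, -1.0]])     # example generator (rows sum to 0)

Lam = np.max(-np.diag(Q))              # uniformization rate
P = np.eye(Q.shape[0]) + Q / Lam       # stochastic matrix of the DTMC

def transient(pi0, t, K=60):
    """Transient distribution pi_t = sum_k Poisson(k; Lam*t) * pi_0 P^k (truncated)."""
    w = [exp(-Lam * t) * (Lam * t) ** k / factorial(k) for k in range(K + 1)]
    acc, v = np.zeros_like(pi0, dtype=float), np.array(pi0, dtype=float)
    for k in range(K + 1):
        acc += w[k] * v
        v = v @ P
    return acc

print(transient(np.array([1.0, 0.0, 0.0]), t=2.0))
```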

2005
Ashay Kadam, Peter Lenk

We explore sources of heterogeneity in rating migration behavior using a continuous time Markov chain. Working in continuous time circumvents the embedding problem, allows for arbitrary prediction horizons, mitigates the censoring effect, and facilitates term structure modeling. By adopting a Bayesian estimation procedure we are able to estimate for each issuer profile its own continuous tim...
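
A minimal sketch of why working with a generator helps for arbitrary horizons (made-up generator, not estimates from the paper):

```python
import numpy as np
from scipy.linalg import expm

# With a continuous-time Markov chain, the migration matrix over ANY horizon t
# follows from a single generator Q via P(t) = expm(t*Q) -- no embedding
# problem and no restriction to one-year steps.  The generator below is a
# made-up 3-rating example (A, B, Default).

Q = np.array([[-0.10,  0.08,  0.02],
              [ 0.05, -0.15,  0.10],
              [ 0.00,  0.00,  0.00]])   # default is absorbing; rows sum to 0

for t in (0.5, 1.0, 5.0):               # arbitrary prediction horizons (years)
    P_t = expm(t * Q)                   # migration matrix over horizon t
    print(f"t = {t}: P(default | A) = {P_t[0, 2]:.4f}")
```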

[Chart: number of search results per year]
