Search results for: continuous markov chain

Number of results: 586647

2015
Zhaojian Li Xunyuan Yin Xiang Yin Yi Xie Changhong Wang

This paper is concerned with distributed H∞ filtering for a class of continuous-time linear plants over sensor networks with multiple communication channels (MCCs). A practical framework is presented to optimize communication over MCCs with uncertain delays and switching characteristics. The channel switching is assumed to follow a continuous-time Markov chain and a Markov jump linear system (M...
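
The channel-switching process mentioned above can be made concrete with a short simulation. The sketch below samples one path of a continuous-time Markov chain from its generator matrix; the function name, state encoding, and example rates are illustrative assumptions and are not taken from the paper, which concerns the filtering framework rather than the simulation itself.

```python
import numpy as np

def simulate_ctmc(Q, x0, t_end, seed=0):
    """Sample one path of a continuous-time Markov chain with generator matrix Q,
    starting in state x0, up to time t_end. Returns a list of (jump_time, state)."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    path = [(t, x)]
    while True:
        rate = -Q[x, x]                      # total exit rate of the current state
        if rate <= 0.0:                      # absorbing state: no further switches
            break
        t += rng.exponential(1.0 / rate)     # exponentially distributed holding time
        if t >= t_end:
            break
        jump = Q[x].copy()
        jump[x] = 0.0
        x = int(rng.choice(len(jump), p=jump / rate))   # next channel/state
        path.append((t, x))
    return path

# Example: two communication channels switching with rates 0.3 and 0.7 per time unit.
Q = np.array([[-0.3,  0.3],
              [ 0.7, -0.7]])
print(simulate_ctmc(Q, x0=0, t_end=10.0))
```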

2005
J. F. C. Kingman

It is the object of this paper to draw together certain lines of research which during the last decade have grown out of the problem of characterizing the functions which can arise as transition probabilities of continuous time Markov chains. This problem is now solved (see Sections 8 and 9), although as usual its solution has thrown up further problems which demand attention. The evolution of ...

Journal: Open Syst. Inform. Dynam. 2001
Volkmar Liebscher

We study special Quantum Markov chains on a Fock space related to iterated beam splittings as introduced in [23]. Besides a characterization of the position distributions of the chain, we show a form of weak convergence of such discrete-time Quantum Markov chains to a kind of continuous-time Quantum Markov process. Furthermore, we provide existence and uniqueness for the solution of a quant...

2004
Christel Baier Boudewijn R. Haverkort Holger Hermanns Joost-Pieter Katoen

A continuous-time Markov decision process (CTMDP) is a generalization of a continuous-time Markov chain in which both probabilistic and nondeterministic choices co-exist. This paper presents an efficient algorithm to compute the maximum (or minimum) probability to reach a set of goal states within a given time bound in a uniform CTMDP, i.e., a CTMDP in which the delay time distribution per stat...
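
For a plain CTMC (no nondeterministic choices), time-bounded reachability can be computed by uniformization: make the goal states absorbing, uniformize the generator, and weight powers of the resulting DTMC by Poisson probabilities. The sketch below illustrates only this underlying idea; it is not the CTMDP algorithm of the paper, and the function name, truncation bound `k_max`, and example generator are assumptions.

```python
import numpy as np
from math import exp

def time_bounded_reach(Q, goal, t, k_max=200):
    """For each start state, the probability of reaching the goal states within time t,
    for a CTMC with generator Q, computed by uniformization with k_max Poisson terms."""
    Q = np.array(Q, dtype=float)
    n = Q.shape[0]
    Q[goal, :] = 0.0                        # make goal states absorbing
    lam = max(-Q.diagonal().min(), 1e-12)   # uniformization rate
    P = np.eye(n) + Q / lam                 # uniformized DTMC transition matrix
    v = np.zeros(n)
    v[goal] = 1.0                           # indicator vector of the goal set
    result = np.zeros(n)
    w = exp(-lam * t)                       # Poisson(0; lam*t) weight
    for k in range(k_max):
        result += w * v
        v = P @ v                           # backward step: (P^(k+1) 1_goal)
        w *= lam * t / (k + 1)              # Poisson(k+1; lam*t) weight
    return result

# Example: three states, state 2 is the goal; reach it from state 0 within t = 1.5.
Q = [[-2.0,  2.0, 0.0],
     [ 0.5, -1.5, 1.0],
     [ 0.0,  0.0, 0.0]]
print(time_bounded_reach(Q, goal=[2], t=1.5)[0])
```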

Journal: Queueing Syst. 2004
Vyacheslav Abramov Robert Sh. Liptser

In this paper, sufficient conditions are given for the existence of the limiting distribution of a nonhomogeneous countable Markov chain with a time-dependent transition intensity matrix. The method of proof exploits the fact that if the distribution of the random process Q = (Q_t)_{t≥0} is absolutely continuous with respect to the distribution of the ergodic random process Q° = (Q°_t)_{t≥0}, then Q_t converges in law as t → ∞ wh...

Journal: CoRR 2017
Eduardo M. Vasconcelos

Continuous-Time Markov Chains (CTMCs) are widely used to describe and analyze systems in several knowledge areas. Steady-state availability is one important analysis that can be made through the Markov chain formalism, which allows researchers to generate equations for several purposes, such as channel capacity estimation in wireless networks as well as system performance estimation. The problem with this...
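
As a concrete illustration of the kind of steady-state analysis the abstract refers to, the sketch below solves the global balance equations πQ = 0 with Σπ = 1 for a small two-state up/down availability model. The model, rates, and function name are hypothetical and not taken from the paper.

```python
import numpy as np

def steady_state(Q):
    """Stationary distribution pi of an irreducible CTMC: solve pi Q = 0 with sum(pi) = 1
    by appending the normalization constraint to the balance equations."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Hypothetical two-state up/down availability model: failure rate lam, repair rate mu.
lam, mu = 0.01, 0.5
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])
pi = steady_state(Q)
print("steady-state availability:", pi[0])   # long-run fraction of time in the "up" state
```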

Journal: Desert 2006
S. Hajjam N. Yusefi

Meteorological stations usually contain some missing data for different reasons. There are several traditional methods for completing data; among them, bivariate and multivariate linear and non-linear correlation analysis, double mass curve, ratio and difference methods, moving averages and probability density functions are commonly used. In this paper a blended model comprising the bivariate expo...
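
One of the classical completion methods listed above, the ratio method, can be sketched in a few lines: scale a nearby reference station's record by the ratio of the two stations' means over the jointly observed period. This is only an illustration of a traditional baseline, not the blended model proposed in the paper; the function name, NaN convention, and example data are assumptions.

```python
import numpy as np

def ratio_method_fill(target, reference):
    """Fill missing values (NaN) in a target station's record from a nearby reference
    station, scaling it by the ratio of the two stations' means over the period where
    both are observed (classical ratio method)."""
    target = np.asarray(target, dtype=float)
    reference = np.asarray(reference, dtype=float)
    both = ~np.isnan(target) & ~np.isnan(reference)
    ratio = target[both].mean() / reference[both].mean()
    filled = target.copy()
    missing = np.isnan(target)
    filled[missing] = ratio * reference[missing]
    return filled

# Example: monthly rainfall with two missing months at the target station.
target    = np.array([30.0, np.nan, 45.0, 50.0, np.nan, 20.0])
reference = np.array([28.0, 35.0,  40.0, 47.0, 60.0,  22.0])
print(ratio_method_fill(target, reference))
```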

1992
David E. Stewart

A multigrid-type method has been developed for finding quasi-stationary distributions of structured continuous-time Markov chains. Finding quasi-stationary distributions is equivalent to finding the eigenvector of the smallest eigenvalue of the defining matrix of the continuous-time Markov chain. The multigrid-type method used here is not equivalent to inverse iteration, and does not use the multigr...
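
The equivalence stated above can be illustrated directly: restricting the generator to the transient states, the quasi-stationary distribution is the normalized left eigenvector associated with the eigenvalue of smallest magnitude. The sketch below uses a dense eigensolver purely for illustration; the paper's contribution is a multigrid-type method, which is not reproduced here, and the function name and example matrix are assumptions.

```python
import numpy as np

def quasi_stationary(Q_sub):
    """Quasi-stationary distribution of a CTMC restricted to its transient states:
    the left eigenvector of the sub-generator Q_sub for the eigenvalue of smallest
    magnitude, normalized to a probability vector (dense eigensolve for illustration)."""
    w, vl = np.linalg.eig(Q_sub.T)        # right eigenvectors of Q_sub.T = left of Q_sub
    k = np.argmin(np.abs(w))              # eigenvalue closest to zero
    v = np.abs(np.real(vl[:, k]))         # the QSD eigenvector can be taken nonnegative
    return v / v.sum()

# Example: a small birth-death chain on {0, 1, 2}, killed at rate 1.0 from state 2.
Q_sub = np.array([[-1.0,  1.0,  0.0],
                  [ 0.5, -1.5,  1.0],
                  [ 0.0,  0.5, -1.5]])
print(quasi_stationary(Q_sub))
```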

2005
Giacomo Aletti Ely Merzbach

Given a strongly stationary Markov chain (discrete or continuous) and a finite set of stopping rules, we show a noncombinatorial method to compute the law of stopping. Several examples are presented. The problem of embedding a graph into a larger but minimal graph under some constraints is studied. Given a connected graph, we show a noncombinatorial manner to compute the law of a first given pa...

Journal: J. Artificial Societies and Social Simulation 2006
Jan Lorenz

The agent-based bounded confidence model of opinion dynamics of Hegselmann and Krause (2002) is reformulated as an interactive Markov chain. This abstracts from individual agents to a population model which gives a good view on the underlying attractive states of continuous opinion dynamics. We mutually analyse the agent-based model and the interactive Markov chain with a focus on the number of...
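
For context, the agent-based bounded confidence update of Hegselmann and Krause is easy to state: each agent adopts the mean opinion of all agents within its confidence bound ε. The sketch below shows only this agent-based form; the paper's interactive Markov chain instead works at the level of a population distribution over opinion classes, and the function name, parameters, and example values here are assumptions.

```python
import numpy as np

def hk_step(x, eps):
    """One synchronous Hegselmann-Krause update: each agent moves to the mean opinion
    of all agents (including itself) within confidence bound eps."""
    x = np.asarray(x, dtype=float)
    updated = np.empty_like(x)
    for i, xi in enumerate(x):
        updated[i] = x[np.abs(x - xi) <= eps].mean()
    return updated

# Example: 50 agents with evenly spread opinions in [0, 1] and confidence bound 0.2.
x = np.linspace(0.0, 1.0, 50)
for _ in range(20):
    x = hk_step(x, eps=0.2)
print(np.unique(np.round(x, 3)))   # opinions collapse into a few clusters
```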
