Search results for: markov reward models

Number of results: 981365

Journal: Perform. Eval., 2009
Jasen Markovski, Ana Sokolova, Nikola Trcka, Erik P. de Vink

A parallel composition is defined for Markov reward chains with stochastic discontinuity, and with fast and silent transitions. In this setting, compositionality with respect to the relevant aggregation preorders is established. For Markov reward chains with fast transitions the preorders are τ-lumping and τ-reduction. Discontinuous Markov reward chains are ‘limits’ of Markov reward chains wi...
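
As background for readers unfamiliar with lumping (a standard textbook condition, not the τ-lumping construction of this paper): a partition of the state space of a Markov reward chain with generator Q and reward vector r, encoded by a collector matrix V with V_{iJ} = 1 iff state i belongs to class J, is an ordinary lumping if

    Q V = V \hat{Q}  \quad\text{and}\quad  r = V \hat{r}

for some lumped generator \hat{Q} and lumped reward vector \hat{r}; the aggregated chain (\hat{Q}, \hat{r}) then preserves the reward behaviour of the original chain. The τ-lumping and τ-reduction preorders used here refine this idea to chains with fast and silent transitions.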

2002
Christel Baier, Boudewijn R. Haverkort, Holger Hermanns, Joost-Pieter Katoen

Markov chains (and their extensions with rewards) have been widely used to determine performance, dependability and performability characteristics of computer communication systems, such as throughput, delay, mean time to failure, or the probability of accumulating at least a certain amount of reward in a given time. Due to the rapidly increasing size and complexity of systems, Markov chains and ...
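
To make the reward-accumulation measure mentioned above concrete (standard notation, not taken verbatim from the abstract): with a rate reward ρ(i) attached to each state i of the chain {X_u}, the accumulated reward up to time t is

    Y(t) = \int_0^t \rho(X_u)\, du,

and the performability measure referred to above is the distribution \Pr[Y(t) \ge y], i.e. the probability of earning at least reward y within time t. Measures such as throughput, delay and mean time to failure are typically obtained from the same model by particular choices of ρ.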

2007
Jasen Markovski, Ana Sokolova, Nikola Trcka, Erik P. de Vink

A parallel composition is defined for Markov reward chains with fast transitions and for discontinuous Markov reward chains. In this setting, compositionality with respect to the relevant aggregation preorders is established. For Markov reward chains with fast transitions the preorders are τ-lumping and τ-reduction. Discontinuous Markov reward chains are ‘limits’ of Markov reward chains with ...

2016

The problem considered in the paper cannot be solved by the traditional technique available for the analysis of Markov Regenerative Processes (MRGPs). The widely used description of MRGPs, i.e. by the local and the global kernels, does not contain sufficient information on the process to evaluate the distribution of reward measures. A new analytical approach is proposed and studied to utilize bette...

2015
Karel Sladký

Abstract. The article is devoted to second order optimality in Markov decision processes. Attention is primarily focused on the reward variance for discounted models and undiscounted transient models (i.e. where the spectral radius of the transition probability matrix is less than unity). Considering the second order optimality criteria means that in the class of policies maximizing (or minimiz...
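
For intuition about what the reward variance of a discounted model involves, the following sketch computes the mean and variance of the total discounted reward earned by a fixed policy on a finite Markov reward chain, using the standard first- and second-moment recursions. It is an illustration under assumed inputs (P, r, beta), not the optimisation procedure of the article.

import numpy as np

def discounted_reward_moments(P, r, beta):
    """Per-state mean and variance of U = sum_t beta^t r(X_t)
    for a fixed transition matrix P, state rewards r and
    discount factor beta in (0, 1)."""
    n = len(r)
    I = np.eye(n)
    # First moment: v = r + beta * P v  =>  v = (I - beta P)^{-1} r
    v = np.linalg.solve(I - beta * P, r)
    # Second moment: s = r^2 + 2 beta r*(P v) + beta^2 P s
    s = np.linalg.solve(I - beta**2 * P, r**2 + 2 * beta * r * (P @ v))
    return v, s - v**2

# Hypothetical two-state example
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
r = np.array([1.0, 0.0])
mean, var = discounted_reward_moments(P, r, 0.95)
print(mean, var)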

1995
Aad P. A. van Moorsel, Boudewijn R. Haverkort

Over the past decade, constantly increasing computer power has made the analytic solution of Markovian performance and dependability models more attractive. However, its application for practical systems still needs improvement, since detailed and realistic models typically result in Markov reward models that are too large to completely generate and store in memory. In this paper we discuss this problem...

1999
Herbert Jaeger

The paper gives a novel account of quick decision making for maximising delayed reward in a stochastic world. The approach rests on observable operator models of stochastic systems, which generalize hidden Markov models. A particular kind of decision situation is outlined, and an algorithm is presented that makes it possible to estimate the probability of future reward with a computational cost of only ...
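
As background on the observable operator view (a generic illustration, not the decision algorithm of the paper): an observable operator model assigns one matrix τ_a to each observable symbol a, and the probability of a sequence is obtained by applying these operators to an initial state vector and summing the result. The operators and the example below are hypothetical, derived from a small HMM.

import numpy as np

# Hypothetical 2-state OOM derived from an HMM:
# T[j, i] = P(next state j | current state i)   (column-stochastic)
# O[a]    = diag of emission probabilities P(symbol a | state i)
T = np.array([[0.7, 0.4],
              [0.3, 0.6]])
O = {0: np.diag([0.9, 0.2]),
     1: np.diag([0.1, 0.8])}
tau = {a: T @ O[a] for a in O}      # observable operators
w0 = np.array([0.5, 0.5])           # initial state distribution

def sequence_probability(symbols):
    """P(a_1 ... a_n) = 1^T tau_{a_n} ... tau_{a_1} w_0."""
    w = w0
    for a in symbols:
        w = tau[a] @ w
    return w.sum()

print(sequence_probability([0, 0, 1]))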

1996
Graham Horton, Kishor S. Trivedi

We describe the recently introduced Fluid Stochastic Petri Nets as a means of computing the distribution of the accumulated rate reward in a GSPN. In practice, it is the expected value of the reward that is computed, a quantity that depends solely on the solution of the underlying Markov chain. Until now, the instantaneous reward rates have been a function of the GSPN marking only, and the...
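
The remark that the expected reward depends only on the underlying Markov chain can be made precise with the standard expression (generic notation, not specific to this paper): for rate rewards r_i attached to the markings/states of the underlying chain with transient probabilities π_i(u),

    E[Y(t)] = \sum_i r_i \int_0^t \pi_i(u)\, du,

so only the transient state probabilities of the CTMC are needed, whereas the full distribution of the accumulated reward Y(t) requires the joint behaviour of the process and the reward it has earned.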

2014
Dennis Guck, Mark Timmer, Stefan Blom

This presentation introduces the Markov Reward Automaton (MRA), an extension of the Markov automaton that allows the modelling of systems incorporating rewards in addition to nondeterminism, discrete probabilistic choice and continuous stochastic timing. Our models support both rewards that are acquired instantaneously when taking certain transitions (action rewards) and rewards that are based ...
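
For the two reward types mentioned here, the reward accumulated along a path is typically defined as the sum of both contributions (a generic formulation, not necessarily the exact semantics of the MRA paper): if a path sojourns for time t_k in a state with rate reward ρ_k and then takes a transition with action reward c_k, the reward earned is

    Y = \sum_k \left( \rho_k\, t_k + c_k \right),

i.e. rate rewards accrue linearly with residence time and action rewards are collected instantaneously at the transitions taken.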

2001
Jacques Janssen, Raimondo Manca

The first application of the Semi-Markov Process (SMP) in the actuarial field was given by J. Janssen [6]. Many authors subsequently used these processes and their generalizations. Some books also show how these processes can be used in actuarial science (see Pitacco, Olivieri [10], CMIR12 [12]). These processes can be generalised by introducing a reward structure; see for example Howar...
