Search results for: markov reward models
Number of results: 981365
In this paper, semi-Markov models are used to forecast the three dimensions of the next earthquake occurrence. Each earthquake can be investigated in three dimensions: temporal, spatial, and magnitude. Semi-Markov models can be used for earthquake forecasting in any arbitrary area, and each area can be divided into several zones. In semi-Markov models each zone can be considered as a state...
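As a minimal sketch of the setup described above (the notation is ours, not necessarily the paper's), a semi-Markov model over earthquake zones is specified by its kernel

\[ Q_{ij}(t) \;=\; \Pr\big(X_{n+1}=j,\; T_{n+1}-T_n \le t \mid X_n=i\big), \]

where $X_n$ is the zone (state) of the $n$-th earthquake and $T_n$ its occurrence time, so the kernel jointly captures the spatial transition and the inter-event time.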
A new preorder relation is introduced that orders states of a Markov process with an additional reward structure according to the reward gained over any interval of finite or infinite length. The relation allows the comparison of different Markov processes and includes as special cases monotone and lumpable Markov processes.
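One plausible formal reading of such an ordering (our phrasing, not necessarily the authors' exact definition) compares the expected reward accumulated from two states over every horizon:

\[ x \preceq y \quad\Longleftrightarrow\quad \mathbb{E}_x\!\left[\int_0^t r(X_s)\,ds\right] \;\le\; \mathbb{E}_y\!\left[\int_0^t r(X_s)\,ds\right] \quad \text{for all } t \ge 0, \]

where $r(\cdot)$ denotes the reward structure attached to the Markov process.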
The choice titration procedure presents a subject with a repeated choice between a standard option that always provides the same reward and an adjusting option for which the reward schedule is adjusted based on the subject’s previous choices. The procedure is designed to determine the point of indifference between the two schedules which is then used to estimate a utility equivalence point betw...
Measure-adaptive state-space construction is the process of exploiting symmetry in high-level model and performance measure specifications to automatically construct reduced state-space Markov models that support the evaluation of the performance measure. This paper describes a new reward variable specification technique, which, combined with recently developed state-space construction techniqu...
Markov models play an effective role in customer relationship management (CRM), yet there is no comprehensive literature review covering all the related studies. In this paper the focus is on academic databases to find all the articles published in 2011 and earlier. One hundred articles were identified and reviewed to find direct relevance for applying Markov models...
We study Probabilistic Workflow Nets (PWNs), a model extending van der Aalst’s workflow nets with probabilities. We give a semantics for PWNs in terms of Markov Decision Processes and introduce a reward model. Using a result by Varacca and Nielsen, we show that the expected reward of a complete execution of the PWN is independent of the scheduler. Extending previous work on reduction of non-pro...
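In standard MDP notation (ours, not the paper's), the scheduler-independence result can be stated as

\[ \mathbb{E}^{\sigma_1}\!\Big[\sum_{k=0}^{N-1} r(s_k, a_k)\Big] \;=\; \mathbb{E}^{\sigma_2}\!\Big[\sum_{k=0}^{N-1} r(s_k, a_k)\Big] \quad \text{for all schedulers } \sigma_1, \sigma_2, \]

where the sum runs over the steps of a complete execution of the workflow net and $N$ is the (random) length of that execution.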
Investors use different approaches to select an optimal portfolio, so optimal investment choices with respect to return can be interpreted through different models. The traditional approach to portfolio selection is the mean-variance model. Another approach is the Markov chain. A Markov chain is a random process without memory. This means that the conditional probability distribution of the nex...
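The memoryless property referred to here is the usual Markov property: the conditional distribution of the next state depends only on the current state,

\[ \Pr(X_{n+1}=x \mid X_n=x_n, X_{n-1}=x_{n-1}, \ldots, X_0=x_0) \;=\; \Pr(X_{n+1}=x \mid X_n=x_n). \]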
Markov reward models are an important formalism by which to obtain dependability and performability measures of computer systems and networks. In this context, it is particularly important to determine the probability distribution function of the reward accumulated during a finite interval. The interval may correspond to the mission period in a mission-critical system, the time between scheduled...
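In the usual formulation of Markov reward models, the quantity of interest is the reward accumulated by a process $\{X(s)\}$ with reward rate $r(\cdot)$ over a finite interval $[0,t]$,

\[ Y(t) = \int_0^t r\big(X(s)\big)\,ds, \qquad F(y,t) = \Pr\big(Y(t) \le y\big), \]

where $F(y,t)$ is the accumulated-reward distribution discussed in the abstract (notation assumed here for illustration).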
This paper studies Value-at-Risk problems in finite-horizon Markov decision processes (MDPs) with finite state space and two forms of reward function. Firstly we study the effect of reward function on two criteria in a short-horizon MDP. Secondly, for long-horizon MDPs, we estimate the total reward distribution in a finite-horizon Markov chain (MC) with the help of spectral theory and the centr...
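As a hedged illustration of the criterion (the paper's exact definition may differ), the Value-at-Risk at level $\alpha$ of the total reward $R=\sum_{t=1}^{T} r(X_t, A_t)$ can be written as

\[ \operatorname{VaR}_{\alpha}(R) \;=\; \inf\{\, y \in \mathbb{R} : \Pr(R \le y) \ge \alpha \,\}, \]

i.e. the $\alpha$-quantile of the total-reward distribution over the finite horizon $T$.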