Optimal control of stationary Markov processes

Authors

Abstract


Related articles

Optimal Control of Markov Processes

The purpose of this article is to give an overview of some recent developments in optimal stochastic control theory. The field has expanded a great deal during the last 20 years. It is not possible in this overview to go deeply into any topic, and a number of interesting topics have been omitted entirely. The list of references includes several books, conference proceedings and survey articles....


Stationary Markov Processes

We consider some classes of stationary, counting-measure-valued Markov processes and their companions under time-reversal. Examples arise in the Lévy–Itô decomposition of stable Ornstein–Uhlenbeck processes, the large-time asymptotics of the standard additive coalescent, and extreme value theory. These processes share the common feature that points in the support of the evolving counting me...


Optimal Control of Markov Regenerative Processes

In this paper, the integration of available results on Semi-Markov Decision Processes and on Markov Regenerative Processes is attempted, in order to define the mathematical framework for solving decision problems where the underlying structure-state process is a Markov Regenerative Process, referred to as a Markov Regenerative Decision Process. The essential question investigated here is which descri...
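As a hedged illustration of the kind of decision model involved (not the paper's framework), the sketch below runs discounted value iteration for a semi-Markov decision process under the simplifying assumption that sojourn times are exponential with state-action-dependent rates, so each jump carries an effective discount factor; all sets, rates and rewards are invented placeholders.

```python
# A hedged illustration, not the paper's framework: discounted value iteration
# for a semi-Markov decision process whose sojourn times are assumed to be
# exponential with rate lam[s, a], so each jump is discounted on average by
# lam / (lam + beta).  Every number below is an invented placeholder.
import numpy as np

n_states, n_actions, beta = 3, 2, 0.1   # beta: continuous-time discount rate
rng = np.random.default_rng(2)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # embedded chain P[s, a, s']
r = rng.uniform(size=(n_states, n_actions))                       # reward per jump
lam = rng.uniform(0.5, 2.0, size=(n_states, n_actions))           # sojourn rates

disc = lam / (lam + beta)               # expected discount over one sojourn
v = np.zeros(n_states)
for _ in range(10_000):
    q = r + disc * (P @ v)              # q[s, a]
    v_new = q.max(axis=1)
    if np.max(np.abs(v_new - v)) < 1e-12:
        break
    v = v_new
print("optimal values of the semi-Markov decision process:", v)
```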


Pure Stationary Optimal Strategies in Markov Decision Processes

Markov decision processes (MDPs) are controllable discrete event systems with stochastic transitions. Performances of an MDP are evaluated by a payoff function. The controller of the MDP seeks to optimize those performances, using optimal strategies. There exist various ways of measuring performances, i.e. various classes of payoff functions. For example, average performances can be evaluated ...
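To make one such payoff class concrete, here is a minimal sketch (not taken from the paper, with an invented transition kernel, reward table and strategy) of how the discounted payoff of a pure stationary strategy can be computed in a small finite MDP.

```python
# A minimal sketch, not taken from the paper: the discounted payoff of a pure
# stationary strategy in a small finite MDP.  The transition kernel P, the
# reward table r and the strategy itself are invented placeholders.
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)

P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
r = rng.uniform(size=(n_states, n_actions))                       # r[s, a]

# A pure stationary strategy fixes one action per state, independent of history.
policy = np.array([0, 1, 0])

# Under that strategy the MDP reduces to a Markov chain, and the discounted
# payoff v solves the linear system (I - gamma * P_pi) v = r_pi.
P_pi = P[np.arange(n_states), policy]
r_pi = r[np.arange(n_states), policy]
v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
print("discounted payoff of the stationary strategy:", v)
```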


Geometry and Determinism of Optimal Stationary Control in Partially Observable Markov Decision Processes

It is well known that any finite state Markov decision process (MDP) has a deterministic memoryless policy that maximizes the discounted long-term expected reward. Hence for such MDPs the optimal control problem can be solved over the set of memoryless deterministic policies. In the case of partially observable Markov decision processes (POMDPs), where there is uncertainty about the world state,...
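The following short sketch illustrates the well-known fact stated above on an invented toy MDP (transition kernel, rewards and discount factor are placeholders): value iteration converges to the optimal discounted values, and the greedy policy read off from them is deterministic and memoryless.

```python
# A minimal sketch of the statement above on an invented toy MDP: value
# iteration converges to the optimal discounted values, and the greedy policy
# extracted from them is memoryless and deterministic.  P, r and gamma are
# placeholders, not data from the paper.
import numpy as np

n_states, n_actions, gamma = 4, 3, 0.95
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
r = rng.uniform(size=(n_states, n_actions))                       # r[s, a]

v = np.zeros(n_states)
for _ in range(10_000):
    q = r + gamma * (P @ v)        # one-step lookahead values q[s, a]
    v_new = q.max(axis=1)
    if np.max(np.abs(v_new - v)) < 1e-12:
        break
    v = v_new

# One optimal control: a deterministic, memoryless map from states to actions.
policy = q.argmax(axis=1)
print("optimal deterministic memoryless policy:", policy)
```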



Journal

Journal title: Stochastic Processes and their Applications

Year: 1973

ISSN: 0304-4149

DOI: 10.1016/0304-4149(73)90002-1