Metrics for Markov Decision Processes with Infinite State Spaces
Authors
Abstract
We present metrics for measuring state similarity in Markov decision processes (MDPs) with infinitely many states, including MDPs with continuous state spaces. Such metrics provide a stable quantitative analogue of the notion of bisimulation for MDPs, and are suitable for use in MDP approximation. We show that the optimal value function associated with a discounted infinite horizon planning task varies continuously with respect to our metric distances.
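For concreteness, the metric is obtained as the fixed point of a Kantorovich-style operator. A sketch of the standard form, following the finite-state precursor of this work (Ferns et al., 2004), with illustrative weights c_R, c_T > 0, c_R + c_T <= 1:

    F(d)(s, t) = \max_{a \in A} \Big( c_R \,\lvert r(s,a) - r(t,a) \rvert \;+\; c_T \, W_d\big( P(\cdot \mid s, a),\, P(\cdot \mid t, a) \big) \Big)

Here W_d denotes the Kantorovich (Wasserstein) distance between the two transition distributions, computed with the current metric d as ground distance. Under the assumption c_T >= \gamma (the discount factor), the continuity claim above takes the quantitative form |V^*(s) - V^*(t)| \le d(s,t) / c_R, so states that are close in the metric have close optimal values.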
Related papers
Bisimulation Metrics for Continuous Markov Decision Processes
In recent years, various metrics have been developed for measuring the behavioural similarity of states in probabilistic transition systems [Desharnais et al., Proceedings of CONCUR, (1999), pp. 258-273, van Breugel and Worrell, Proceedings of ICALP, (2001), pp. 421-432]. In the context of finite Markov decision processes, we have built on these metrics to provide a robust quantitative analogue...
Verification of General Markov Decision Processes by Approximate Similarity Relations and Policy Refinement
In this work we introduce new approximate similarity relations that are shown to be key for policy (or control) synthesis over general Markov decision processes. The models of interest are discrete-time Markov decision processes, endowed with uncountably-infinite state spaces and metric output (or observation) spaces. The new relations, underpinned by the use of metrics, allow in...
Semi-Markov Decision Processes
Considered are infinite horizon semi-Markov decision processes (SMDPs) with finite state and action spaces. Total expected discounted reward and long-run average expected reward optimality criteria are reviewed. Solution methodology for each criterion is given, constraints and variance sensitivity are also discussed.
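For reference, one common way to write these two criteria (a sketch assuming lump-sum rewards r(s_n, a_n) received at the random decision epochs 0 = T_0 < T_1 < T_2 < \cdots; not necessarily the review's exact notation):

    v_\alpha(s) = \mathbb{E}_s\!\left[ \sum_{n=0}^{\infty} e^{-\alpha T_n}\, r(s_n, a_n) \right],
    \qquad
    g(s) = \lim_{N \to \infty} \frac{\mathbb{E}_s\!\left[ \sum_{n=0}^{N-1} r(s_n, a_n) \right]}{\mathbb{E}_s[\, T_N \,]}

with \alpha > 0 the continuous-time discount rate; the random sojourn times between decision epochs are what distinguish the semi-Markov model from an ordinary MDP.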
Planning and Programming with First-Order Markov Decision Processes: Insights and Challenges
Markov decision processes (MDPs) have become the de facto standard model for decision-theoretic planning problems. However, classic dynamic programming algorithms for MDPs [22] require explicit state and action enumeration. For example, the classical representation of a value function is a table or vector associating a value with each system state; such value functions are produced by iterating...
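To make the enumeration issue concrete, here is a minimal value-iteration sketch over a hypothetical two-state, two-action MDP (the arrays P and R and all constants are invented for illustration); the array V is exactly the per-state table the passage describes:

    import numpy as np

    # Hypothetical MDP: 2 states, 2 actions; all numbers are illustrative.
    # P[a, s, s'] = probability of moving from state s to s' under action a.
    P = np.array([[[0.9, 0.1],
                   [0.2, 0.8]],
                  [[0.5, 0.5],
                   [0.0, 1.0]]])
    R = np.array([[1.0, 0.0],    # R[s, a] = immediate reward
                  [0.0, 2.0]])
    gamma = 0.9                  # discount factor

    V = np.zeros(2)              # tabular value function: one entry per state
    for _ in range(1000):
        # Bellman optimality backup: Q(s,a) = R(s,a) + gamma * sum_s' P(s'|s,a) V(s')
        Q = R + gamma * np.einsum('ast,t->sa', P, V)
        V_new = Q.max(axis=1)
        if np.abs(V_new - V).max() < 1e-8:
            break
        V = V_new
    print(V)  # converged optimal state values

Every sweep touches each state explicitly, which is precisely the cost that first-order (lifted) representations aim to avoid.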
Simplex Algorithm for Countable-State Discounted Markov Decision Processes
We consider discounted Markov decision processes (MDPs) with countably-infinite state spaces, finite action spaces, and unbounded rewards. Typical examples of such MDPs are inventory management and queueing control problems in which there is no specific limit on the size of inventory or queue. Existing solution methods obtain a sequence of policies that converges to optimality i...
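For context, a standard linear-programming formulation of a discounted MDP, whose simplex treatment such work extends (finite-state form shown for readability; \alpha is an assumed strictly positive weighting over states):

    \min_{V} \; \sum_{s} \alpha(s)\, V(s)
    \quad \text{s.t.} \quad
    V(s) \;\ge\; r(s,a) + \gamma \sum_{s'} p(s' \mid s, a)\, V(s') \qquad \forall\, s, a

Pivoting in the dual of this program corresponds to single-state policy-improvement steps, which is what makes simplex-style methods a natural candidate even when the constraint set is countably infinite.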
Publication date: 2005