Solving Markov Decision Problems Using Heuristic Search
Authors
Abstract
We describe a heuristic search algorithm for Markov decision problems, called LAO*, that is derived from the classic heuristic search algorithm AO*. LAO* shares the advantage heuristic search has over dynamic programming for simpler classes of problems: it can find optimal solutions without evaluating all problem states. The derivation of LAO* from AO* makes it easier to generalize refinements of heuristic search developed for simpler classes of problems for use in solving Markov decision problems more efficiently.
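To make the basic idea concrete, here is a minimal Python sketch of an LAO*-style loop; it is not the authors' implementation. The names (`lao_star`, `actions`, `transitions`, `cost`, `heuristic`) are hypothetical placeholders, and for simplicity the value-revision step sweeps the whole set of expanded states rather than only the ancestors of newly expanded states, as LAO* proper does. It assumes an admissible heuristic that evaluates to 0 at goal states.

```python
def lao_star(start, actions, transitions, cost, heuristic, gamma=1.0, eps=1e-6):
    """Simplified LAO*-style search (illustrative sketch, not the authors' code).

    start             -- the start state
    actions(s)        -- iterable of actions applicable in s; empty iff s is a goal
    transitions(s, a) -- list of (probability, next_state) pairs
    cost(s, a)        -- immediate cost of taking a in s
    heuristic(s)      -- admissible estimate of the optimal cost-to-go (0 at goals)
    """
    V = {start: heuristic(start)}   # value estimates, initialized by the heuristic
    expanded = set()                # states whose successors have been generated
    policy = {}                     # current greedy action for each expanded state

    def q(s, a):
        return cost(s, a) + gamma * sum(p * V.setdefault(t, heuristic(t))
                                        for p, t in transitions(s, a))

    def greedy(s):
        return min(actions(s), key=lambda a: q(s, a))

    def best_partial_graph():
        """States reachable from start by following the current greedy policy."""
        seen, stack = set(), [start]
        while stack:
            s = stack.pop()
            if s in seen or not actions(s):   # skip visited and goal states
                continue
            seen.add(s)
            if s in expanded:                 # only expanded states have a policy to follow
                stack.extend(t for _, t in transitions(s, policy[s]))
        return seen

    while True:
        # 1. Expand every unexpanded fringe state of the best partial solution graph.
        fringe = [s for s in best_partial_graph() if s not in expanded]
        if not fringe:
            return V, policy                  # best partial solution has no fringe: done
        for s in fringe:
            expanded.add(s)
            policy[s] = greedy(s)

        # 2. Revise values by value iteration over the expanded states.
        #    (LAO* proper restricts this to ancestors of the newly expanded states.)
        while True:
            delta = 0.0
            for s in expanded:
                a = greedy(s)
                v = q(s, a)
                delta = max(delta, abs(v - V[s]))
                V[s], policy[s] = v, a
            if delta < eps:
                break
```

With an informative admissible heuristic, the loop typically terminates after expanding only a small fraction of the state space, which is the advantage over exhaustive dynamic programming described in the abstract above.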
Similar Papers
A Heuristic Search Algorithm for Markov Decision Problems
LAO* is a heuristic search algorithm for Markov decision problems that is derived from the classic heuristic search algorithm AO* (Hansen & Zilberstein 1998). It shares the advantage heuristic search has over dynamic programming for simpler classes of problems: it can find optimal solutions without evaluating all problem states. In this paper, we show that the derivation of LAO* from AO* makes i...
MAA*: A Heuristic Search Algorithm for Solving Decentralized POMDPs
We present multi-agent A* (MAA*), the first complete and optimal heuristic search algorithm for solving decentralized partially observable Markov decision problems (DEC-POMDPs) with finite horizon. The algorithm is suitable for computing optimal plans for a cooperative group of agents that operate in a stochastic environment such as multi-robot coordination, network traffic control, or distributed...
Faster Dynamic Programming for Markov Decision Processes
Markov decision processes (MDPs) are a general framework used in artificial intelligence (AI) to model decision-theoretic planning problems. Solving real-world MDPs has been a major and challenging research topic in the AI literature, since classical dynamic programming algorithms converge slowly. We discuss two approaches to expediting dynamic programming. The first approach combines heuristic...
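For contrast with the heuristic-search sketch above, the following is a minimal sketch of the classical dynamic programming baseline (synchronous value iteration) that this abstract says converges slowly on large state spaces. The signature is illustrative and not taken from the paper.

```python
def value_iteration(states, actions, transitions, cost, gamma=0.95, eps=1e-6):
    """Classical synchronous value iteration: sweeps every state until convergence.

    This exhaustive evaluation of the state space is exactly what
    heuristic-search methods such as LAO* try to avoid.
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            if not actions(s):          # terminal / goal state: value stays 0
                continue
            best = min(cost(s, a) + gamma * sum(p * V[t] for p, t in transitions(s, a))
                       for a in actions(s))
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V
```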
Symbolic LAO* Search for Factored Markov Decision Processes
We describe a planning algorithm that integrates two approaches to solving Markov decision processes with large state spaces. It uses state abstraction to avoid evaluating states individually, and it uses forward search from a start state, guided by an admissible heuristic, to avoid evaluating all states. These approaches are combined in a novel way that exploits symbolic model-checking techniq...
LIFT-UP: Lifted First-Order Planning Under Uncertainty
We present a new approach for solving first-order Markov decision processes that combines first-order state abstraction and heuristic search. In contrast to existing systems, which first propositionalize the decision process and then perform state abstraction on the propositionalized version, we apply state abstraction directly to the decision process, avoiding propositionalization. Secondly, ...
Journal:
Volume / Issue:
Pages: -
Publication date: 2002