Search results for: partially s

Number of results: 828349

2015
Koosha Khalvati Rajesh P. Rao

The degree of confidence in one’s choice or decision is a critical aspect of perceptual decision making. Attempts to quantify a decision maker’s confidence by measuring accuracy in a task have yielded limited success because confidence and accuracy are typically not equal. In this paper, we introduce a Bayesian framework to model confidence in perceptual decision making. We show that this model...

1997
Milos Hauskrecht

The focus of this paper is the framework of partially observable Markov decision processes (POMDPs) and its role in modeling and solving complex dynamic decision problems in stochastic and partially observable medical domains. The paper summarizes some of the basic features of the POMDP framework and explores its potential in solving the problem of the management of the patient with chronic isc...

2011
Ekhlas Sonu Prashant Doshi

We present a method for identifying actions that lead to observations which are only weakly informative in the context of partially observable Markov decision processes (POMDP). We call such actions weak- (inclusive of zero-) information inducing. Policy subtrees rooted at these actions may be computed more efficiently. While zero-information inducing actions may be exploited without error, th...

1995
Michael L. Littman Anthony R. Cassandra Leslie Pack Kaelbling

Partially observable Markov decision processes (POMDPs) model decision problems in which an agent tries to maximize its reward in the face of limited and/or noisy sensor feedback. While the study of POMDPs is motivated by a need to address realistic problems, existing techniques for finding optimal behavior do not appear to scale well and have been unable to find satisfactory policies for proble...

2015
Akshat Kumar Shlomo Zilberstein

Partially observable MDPs provide an elegant framework for sequential decision making. Finite-state controllers (FSCs) are often used to represent policies for infinite-horizon problems as they offer a compact representation, simple-to-execute plans, and an adjustable tradeoff between computational complexity and policy size. We develop novel connections between optimizing FSCs for POMDPs and the d...

2007
Monica Dinculescu

We discuss the problem of comparing the behavioural equivalence of partially observable systems with observations. We examine different types of equivalence relations on states, and show that branching equivalence relations are stronger than linear ones. Finally, we discuss how this hierarchy can be used in duality theory.

2003
Hajime Fujita Yoichiro Matsuno Shin Ishii

We formulate an automatic strategy acquisition problem for the multi-agent card game “Hearts” as a reinforcement learning (RL) problem. Since there are often a lot of unobservable cards in this game, RL is approximately dealt with in the framework of a partially observable Markov decision process (POMDP). This article presents a POMDP-RL method based on estimation of unobservable state variable...

2004
Zhengzhu Feng Shlomo Zilberstein

We present a major improvement to the incremental pruning algorithm for solving partially observable Markov decision processes. Our technique targets the cross-sum step of the dynamic programming (DP) update, a key source of complexity in POMDP algorithms. Instead of reasoning about the whole belief space when pruning the cross-sums, our algorithm divides the belief space into smaller regions a...

2010
Emma Brunskill Stuart J. Russell

Despite the intractability of generic optimal partially observable Markov decision process planning, there exist important problems that have highly structured models. Previous researchers have used this insight to construct more efficient algorithms for factored domains, and for domains with topological structure in the flat state dynamics model. In our work, motivated by findings from the edu...

2015
Gavin Rens

A novel algorithm to speed up online planning in partially observable Markov decision processes (POMDPs) is introduced. I propose a method for compressing nodes in belief decision trees while planning occurs. Whereas belief decision trees branch on actions and observations, with my method, they branch only on actions. This is achieved by unifying the branches required due to the nondeterminism o...
