Search results for: partially s
Number of results: 828349
From 12.06.2005 to 17.06.2005 the Dagstuhl Seminar 05241 Synthesis and Planning was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas ...
Goal Driven Autonomy (GDA) is an agent model for reasoning about goals while acting in a dynamic environment. Since anomalous events may cause an agent’s current goal to become invalid, GDA agents monitor the environment for such anomalies. When domains are both partially observable and dynamic, agents must reason about sensing and planning actions. Previous GDA work evaluated agents in domains...
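The detect/explain/formulate cycle described in this abstract can be sketched as follows; this is a minimal illustration, not the authors' implementation, and the `explain` and `formulate` callbacks stand in for domain-specific reasoning:

```python
def gda_step(observed, expected, current_goal, explain, formulate):
    """One detect/explain/formulate cycle of a GDA agent (hypothetical sketch).

    `observed` and `expected` are dicts describing the environment state;
    `explain` and `formulate` are placeholders for domain-specific reasoning.
    """
    # Detect: compare the observed state against the planner's expectations.
    discrepancies = {k: (expected.get(k), v) for k, v in observed.items()
                     if expected.get(k) != v}
    if not discrepancies:
        return current_goal          # no anomaly; keep pursuing the goal
    # Explain the anomaly, then formulate a (possibly new) goal from it.
    cause = explain(discrepancies)
    return formulate(cause, current_goal)

# Usage: an unexpectedly closed door triggers a new goal.
expected = {"door": "open"}
explain = lambda d: "door-blocked"
formulate = lambda cause, goal: "clear-door"
gda_step({"door": "closed"}, expected, "reach-room", explain, formulate)
```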
We present a foveated gesture recognition system that guides an active camera to foveate salient features based on a reinforcement learning paradigm. Using vision routines previously implemented for an interactive environment, we determine the spatial location of salient body parts of a user and guide an active camera to obtain images of gestures or expressions. A hidden-state reinforcement lear...
There are many sensing challenges for which one must balance the effectiveness of a given measurement with the associated sensing cost. For example, when performing a diagnosis a doctor must balance the cost and benefit of a given test (measurement), and the decision to stop sensing (stop performing tests) must account for the risk to the patient and doctor (malpractice) for a given diagnosis b...
We propose a novel approach, called parallel rollout, to solving (partially observable) Markov decision processes. Our approach generalizes the rollout algorithm of Bertsekas and Castanon (1999) by rolling out a set of multiple heuristic policies rather than a single policy. In particular, the parallel rollout approach aims at the class of problems where we have multiple heuristic policies avai...
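The core idea of parallel rollout — evaluating each candidate action by rolling out every policy in a heuristic set and taking the best, rather than following a single rollout policy — can be sketched on a toy deterministic MDP. The chain environment below is a hypothetical example, not from the paper:

```python
def env_step(state, action):
    """Toy deterministic chain MDP on states 0..5 (hypothetical example):
    reward 1.0 whenever the next state is the goal state 5, else 0."""
    nxt = max(0, min(5, state + action))
    return nxt, (1.0 if nxt == 5 else 0.0)

def rollout_value(state, policy, horizon, gamma=0.95):
    """Estimate the discounted return from `state` by following `policy`."""
    total, discount = 0.0, 1.0
    for _ in range(horizon):
        state, reward = env_step(state, policy(state))
        total += discount * reward
        discount *= gamma
    return total

def parallel_rollout_action(state, actions, policies, horizon=10, gamma=0.95):
    """One-step lookahead followed by the BEST of several heuristic policies:
    the value of each action is maxed over the whole policy set, generalizing
    single-policy rollout."""
    def action_value(a):
        nxt, r = env_step(state, a)
        return r + gamma * max(rollout_value(nxt, pi, horizon, gamma)
                               for pi in policies)
    return max(actions, key=action_value)

always_left = lambda s: -1
always_right = lambda s: +1
```

At state 0, `parallel_rollout_action(0, [-1, +1], [always_left, always_right])` moves right, because the rollout under `always_right` dominates for both candidate actions.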
We consider Incentive Decision Processes (IDPs), where a principal seeks to reduce its costs due to another agent’s behavior, by offering incentives to the agent for alternate behavior. We focus on the case where a principal interacts with a greedy agent whose preferences are hidden and static. Though IDPs can be directly modeled as partially observable Markov decision processes (POMDPs), we show that ...
V advisors often increase sales for those customers who find such online advice to be convenient and helpful. However, other customers take a more active role in their purchase decisions and prefer more detailed data. In general, we expect that websites are preferred more and increase sales when their characteristics (e.g., more detailed data) match customers’ cognitive styles (e.g., more analyti...
This paper presents a real-time system that guides stroke patients during upper extremity rehabilitation. The system automatically modifies exercise parameters to account for the specific needs and abilities of different individuals. We describe a partially observable Markov decision process (POMDP) model of a rehabilitation exercise that can capture this form of customization. The system will ...
Current point-based planning algorithms for solving partially observable Markov decision processes (POMDPs) have demonstrated that a good approximation of the value function can be derived by interpolation from the values of a specially selected set of points. The performance of these algorithms can be improved by eliminating unnecessary backups or concentrating on more important points in the ...
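The point-based backup that such algorithms apply at selected belief points can be illustrated on the classic two-state tiger POMDP. The code below is a minimal sketch of one backup step, with standard tiger parameters; the helper names are our own, not from the paper:

```python
# Classic tiger POMDP, used only to illustrate a single point-based backup.
GAMMA = 0.95
# states: 0 = tiger-left, 1 = tiger-right
# actions: 0 = listen, 1 = open-left, 2 = open-right
# observations: 0 = hear-left, 1 = hear-right
R = [[-1.0, -1.0], [-100.0, 10.0], [10.0, -100.0]]  # R[a][s]
T = [
    [[1.0, 0.0], [0.0, 1.0]],      # listen: state unchanged
    [[0.5, 0.5], [0.5, 0.5]],      # opening a door resets the problem
    [[0.5, 0.5], [0.5, 0.5]],
]                                  # T[a][s][s']
Z = [
    [[0.85, 0.15], [0.15, 0.85]],  # listening is 85% accurate
    [[0.5, 0.5], [0.5, 0.5]],      # observations after opening are noise
    [[0.5, 0.5], [0.5, 0.5]],
]                                  # Z[a][s'][o]

def dot(b, v):
    return sum(x * y for x, y in zip(b, v))

def point_backup(belief, alphas):
    """Return the alpha-vector produced by one point-based backup at `belief`,
    maximizing over actions and, per observation, over the current alpha set."""
    best_vec, best_val = None, float("-inf")
    for a in range(3):
        vec = list(R[a])
        for o in range(2):
            # g_{a,o}(s) = sum_{s'} T[a][s][s'] * Z[a][s'][o] * alpha(s')
            gs = [[sum(T[a][s][s2] * Z[a][s2][o] * alpha[s2]
                       for s2 in range(2)) for s in range(2)]
                  for alpha in alphas]
            g = max(gs, key=lambda v: dot(belief, v))
            vec = [vec[s] + GAMMA * g[s] for s in range(2)]
        if dot(belief, vec) > best_val:
            best_vec, best_val = vec, dot(belief, vec)
    return best_vec
```

Starting from the zero alpha-vector, one backup at the uniform belief selects the listen action (value -1), while at a confident belief it selects opening the far door; point-based methods repeat this backup only at the selected belief points.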
The ever-increasing capabilities and complexity of sensor networks have led to an increased interest in sensor placement and observation planning problems. Many sensor placement and planning problems, however, lead to instances of intractable classical planning problems or of (similarly intractable) partially observable Markov decision processes. We consider the problem of planning sensor acti...