Search results for: mdp
Number of results: 3240. Filter results by year:
We describe a way to improve the performance of MDP planners by modifying them to use lower and upper bounds to eliminate non-optimal actions during their search. First, we discuss a particular state-abstraction formulation of MDP planning problems and how to use that formulation to compute bounds on the Q-functions of those planning problems. Then, we describe how to incorporate those bounds i...
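The pruning idea in this snippet can be illustrated with a minimal, self-contained sketch (the function name, bound values, and interface below are hypothetical; the paper's abstraction-based computation of the bounds themselves is not reproduced here): an action can never be optimal if its upper Q-bound falls below the best lower Q-bound.

```python
# Sketch: eliminating non-optimal actions with lower/upper Q-value bounds.
# q_lower[a] <= Q(s, a) <= q_upper[a] is assumed to hold for every action a.
def prune_actions(actions, q_lower, q_upper):
    """Keep only actions whose upper bound reaches the best lower bound."""
    best_lower = max(q_lower[a] for a in actions)
    return [a for a in actions if q_upper[a] >= best_lower]

# Hypothetical bounds for three actions in some state.
q_lo = {"a": 1.0, "b": 0.4, "c": 0.9}
q_hi = {"a": 1.5, "b": 0.8, "c": 1.2}
print(prune_actions(["a", "b", "c"], q_lo, q_hi))  # "b" is provably non-optimal
```

Any search that applies this test before expanding a state explores fewer actions while preserving optimality of the result.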
We introduce the MDP-Evaluation Stopping Problem, the optimization problem faced by participants of the International Probabilistic Planning Competition 2014 that focus on their own performance. It can be constructed as a meta-MDP where actions correspond to the application of a policy on a base-MDP, which is intractable in practice. Our theoretical analysis reveals that there are tractable spe...
Background and aim: Some studies have shown a protective role for the Mediterranean dietary pattern (MDP) in preventing mental disorders. The aim of this study was to determine the relationship between adherence to the MDP and depression, anxiety, and stress among adolescent girls in Tehran. Materials and methods: In this cross-sectional study, 280 female students aged 15-18 from Tehran were selected by multistage stratified cluster sampling. Intake ...
Abstract Markov decision processes (MDPs) and continuous-time MDPs (CTMDPs) are the fundamental models for non-deterministic systems with probabilistic uncertainty. Mean payoff (a.k.a. long-run average reward) is one of the most classic objectives considered in their context. We provide the first algorithm to compute the mean payoff probably approximately correctly in an unknown MDP; further, we extend it to CTMDPs. We do not re...
Amyloidosis is characterized by an abnormal extracellular deposition of amyloid in different organs, where it usually causes some type of dysfunction. Its cause is unknown. Five different types of amyloidosis have been described according to the underlying disease: immunoglobulin amyloidosis, familial amyloidosis, senile systemic amyloidosis, secondary amyloidosis, and hemodialysis-associated am...
The Markov Decision Problem (MDP) plays a central role in AI as an abstraction of sequential decision making. We contribute to the theoretical analysis of MDP planning, which is the problem of computing an optimal policy for a given MDP. Specifically, we furnish improved strong worst-case upper bounds on the running time of MDP planning. Strong bounds are those that depend only on the number of ...
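The planning problem this snippet analyzes, computing an optimal policy for a given MDP, can be sketched with textbook value iteration on a tiny hypothetical two-state MDP (the transition table and rewards below are made up for illustration; the paper itself concerns worst-case running-time bounds, not this particular algorithm):

```python
# Minimal value-iteration sketch: P[s][a] = [(prob, next_state, reward), ...]
P = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(0.9, 1, 1.0), (0.1, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 2.0)], "go": [(1.0, 0, 0.0)]},
}
gamma = 0.9  # discount factor

def value_iteration(P, gamma, tol=1e-8):
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in P[s].values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    # Extract a greedy policy from the converged value function.
    policy = {
        s: max(P[s], key=lambda a: sum(p * (r + gamma * V[s2])
                                       for p, s2, r in P[s][a]))
        for s in P
    }
    return V, policy

V, pi = value_iteration(P, gamma)
print(V, pi)
```

Here the optimal policy collects the reward in state 1 and stays there; each sweep over all states costs time proportional to the number of state-action-successor triples, which is the kind of quantity the snippet's "strong bounds" are expressed in.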
Modular toolkit for Data Processing (MDP) is a data processing framework written in Python. From the user's perspective, MDP is a collection of supervised and unsupervised learning algorithms and other data processing units that can be combined into data processing sequences and more complex feed-forward network architectures. Computations are performed efficiently in terms of speed and memory ...
To predict the profitability of a customer, today's firms have to compute Customer Lifetime Value (CLV). Various approaches have been proposed over the last ten years to analyze this complex customer phenomenon. One of them is the Markov Decision Process (MDP) model. The class of Markov models is an effective and flexible class of decision models. However, the use of the MDP model is limited by its ass...
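The Markov view of CLV that this snippet builds on can be sketched as expected discounted profit under a customer-state transition matrix (the state names, transition probabilities, margins, and discount rate below are hypothetical; the snippet's full MDP formulation would additionally include marketing actions):

```python
# Sketch: CLV from a customer-state Markov chain (no actions yet).
states = ["active", "lapsed", "churned"]
P = [               # row-stochastic transition matrix, one row per state
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.0, 0.0, 1.0],  # churned is absorbing
]
margin = [100.0, 10.0, 0.0]  # expected per-period profit in each state
d = 1 / 1.1                  # per-period discount factor (10% rate)

def clv(state, horizon):
    """Expected discounted profit over `horizon` periods from `state`."""
    probs = [0.0] * len(states)
    probs[states.index(state)] = 1.0
    total = 0.0
    for t in range(horizon):
        total += d ** t * sum(p * m for p, m in zip(probs, margin))
        # Advance the state distribution one period: probs <- probs @ P.
        probs = [sum(probs[i] * P[i][j] for i in range(len(states)))
                 for j in range(len(states))]
    return total

print(round(clv("active", 50), 2))
```

Turning this chain into an MDP means letting a retention action alter the transition row (at a cost), which is exactly where the modeling assumptions the snippet goes on to question come in.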
This paper presents a novel algorithm for learning in a class of stochastic Markov decision processes (MDPs) with continuous state and action spaces that trades speed for accuracy. The algorithm can be seen as a generalization of linear quadratic control to nonlinear, non-regulation problems. A transform is presented of the stochastic MDP into a deterministic one which captures the essence of t...
This paper presents a novel algorithm for learning in a class of stochastic Markov decision processes (MDPs) with continuous state and action spaces that trades speed for accuracy. A transform of the stochastic MDP into a deterministic one is presented which captures the essence of the original dynamics, in a sense made precise. In this transformed MDP, the calculation of values is greatly simp...