Search results for: markov decision process graph theory
Number of results: 2,385,831
Classical stochastic Markov Decision Processes (MDPs) and possibilistic MDPs (π-MDPs) aim at solving the same kind of problems, involving sequential decision making under uncertainty. The underlying uncertainty model (probabilistic / possibilistic) and preference model (reward / satisfaction degree) change, but the algorithms, based on dynamic programming, are similar. So, a question may be rais...
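Both families of algorithms are instances of dynamic programming over a finite state space. As a concrete anchor for the probabilistic case, here is a minimal value-iteration sketch in Python; the two-state MDP, its rewards, and the discount factor are invented toy data, not taken from the paper above.

```python
# Minimal value-iteration sketch for a finite MDP (toy data, illustrative only).
# transitions[s][a] = list of (next_state, probability) pairs.
transitions = {
    0: {"stay": [(0, 1.0)], "go": [(1, 0.8), (0, 0.2)]},
    1: {"stay": [(1, 1.0)], "go": [(0, 1.0)]},
}
rewards = {0: {"stay": 0.0, "go": 1.0}, 1: {"stay": 2.0, "go": 0.0}}
gamma = 0.9  # discount factor

V = {s: 0.0 for s in transitions}
for _ in range(200):  # a fixed number of sweeps is enough for a sketch
    V = {
        s: max(
            rewards[s][a] + gamma * sum(p * V[s2] for s2, p in outs)
            for a, outs in acts.items()
        )
        for s, acts in transitions.items()
    }

print(V)  # approximate optimal values for states 0 and 1
```

A possibilistic π-MDP keeps the same sweep structure but replaces the expectation with max/min compositions of possibility degrees and satisfaction degrees.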
We examine the Bayesian approach to the discovery of directed acyclic causal models and compare it to the constraint-based approach. Both approaches rely on the Causal Markov assumption, but the two differ significantly in theory and practice. An important difference between the approaches is that the constraint-based approach uses categorical information about conditional-independence constraints...
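To make the contrast concrete: the Bayesian approach ranks candidate DAGs by a smooth posterior score, whereas the constraint-based approach makes hard accept/reject decisions from conditional-independence tests. A standard form of the Bayesian score (my notation, not quoted from the abstract) is

```latex
P(G \mid D) \;\propto\; P(G)\, P(D \mid G)
  \;=\; P(G) \int P(D \mid \theta_G, G)\, P(\theta_G \mid G)\, d\theta_G ,
```

where $G$ is a candidate DAG, $D$ the data, and $\theta_G$ the parameters of the local conditional distributions.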
In this paper we describe a software package, called DT-Planner, able to represent and solve finite-state Markov Decision Processes by exploiting a novel graphical formalism, called Influence View. An Influence View is a directed acyclic graph that depicts the probabilistic relationships between the problem's state variables in a generic time transition; additional variables, called event va...
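The snippet is cut off, but the structure it names, a DAG over state variables spanning one time transition, is easy to sketch. The following Python fragment is illustrative only; the variables and edges are invented and it does not reflect DT-Planner's actual API.

```python
# Hedged sketch of a two-slice DAG over state variables (invented example;
# not DT-Planner's actual representation).
# Nodes are (variable, slice) pairs: slice 0 = current step, slice 1 = next.
edges = [
    (("battery", 0), ("battery", 1)),  # current charge influences next charge
    (("action", 0), ("battery", 1)),   # the chosen action drains the battery
    (("battery", 1), ("alive", 1)),    # next charge determines survival
]

# Parent map, the shape a conditional probability table would be indexed by.
parents = {}
for src, dst in edges:
    parents.setdefault(dst, []).append(src)

print(parents[("battery", 1)])  # [('battery', 0), ('action', 0)]
```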
Security of cyber-physical systems (CPS) continues to pose new challenges due to the tight integration and operational complexity of cyber and physical components. To address these challenges, this article presents a domain-aware, optimization-based approach to determine an effective defense strategy for CPS in an automated fashion, by emulating a strategic adversary in the loop that exploits system vulnerabilities, int...
The purpose of this study is to identify the effective factors that make customers shop online in Iran and to investigate the importance of the discovered factors in online customers' decisions. In the identification phase, to discover the factors affecting the online shopping behavior of customers in Iran, the derived reference model summarizing the antecedents of online shopping proposed by Chang et al. was us...
Planning an agent's actions in a dynamic and uncertain environment has been extensively studied. The framework of Markov decision processes provides tools to model and solve such problems. The field of game theory has allowed the study of strategic interactions between multiple agents for a given game. The framework of stochastic games is considered a generalization of the fields ...
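As a one-line formal anchor (my notation, not the abstract's): a stochastic game extends the single-agent MDP tuple with per-player action sets and reward functions,

```latex
(S, A, T, R) \;\longrightarrow\; \bigl(S,\ \{A_i\}_{i=1}^{n},\ T,\ \{R_i\}_{i=1}^{n}\bigr),
\qquad T : S \times A_1 \times \cdots \times A_n \to \Delta(S),
```

so an MDP is the $n = 1$ case and a repeated matrix game is the single-state case.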
The Markov Chain Tree Theorem is a classical result which expresses the stationary distribution of an irreducible Markov matrix in terms of directed spanning trees of its associated graph. In this article, we present what we believe to be an original elementary proof of the theorem (Theorem 5.1). Our proof uses only linear algebra and graph theory, and in particular, it does not rely on probability...
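For reference, the theorem's standard statement (in my notation): for an irreducible Markov matrix $P$ with associated weighted digraph $G$, the stationary distribution satisfies

```latex
\pi_i \;=\;
\frac{\sum_{T \in \mathcal{T}_i} \prod_{(u \to v) \in T} P_{uv}}
     {\sum_{j} \sum_{T \in \mathcal{T}_j} \prod_{(u \to v) \in T} P_{uv}} ,
```

where $\mathcal{T}_i$ is the set of spanning trees of $G$ whose edges are all directed toward the root $i$.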
This paper examines approaches to representing uncertainty in reputation systems for electronic markets, with the aim of constructing a decision-theoretic framework for collecting information about selling agents and making purchase decisions in the context of a social reputation system. A selection of approaches to representing reputation using Dempster-Shafer Theory and Bayesian probability a...
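One common Bayesian representation in this setting (illustrative; not necessarily the construction this paper adopts) models a seller's reputation as a Beta posterior over the probability of a good transaction. A minimal Python sketch:

```python
# Beta-distribution reputation sketch (illustrative toy, invented names).
class BetaReputation:
    def __init__(self):
        self.a = 1.0  # pseudo-count of positive outcomes (uniform prior)
        self.b = 1.0  # pseudo-count of negative outcomes

    def update(self, positive: bool) -> None:
        # Each observed transaction outcome updates the Beta(a, b) posterior.
        if positive:
            self.a += 1.0
        else:
            self.b += 1.0

    def expected_reliability(self) -> float:
        # Posterior mean probability that the next transaction is good.
        return self.a / (self.a + self.b)

seller = BetaReputation()
for outcome in (True, True, False, True):
    seller.update(outcome)
print(round(seller.expected_reliability(), 3))  # 0.667 after 3 good, 1 bad
```

A Dempster-Shafer representation would instead carry an explicit mass for "unknown", which is exactly the kind of difference a decision-theoretic comparison has to weigh.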
We report an experiment on a decision task by SAMUELSON and BAZERMAN (1985). Subjects submit a bid for an item with an unknown value. A winner’s curse phenomenon arises when subjects bid too high and make losses. Learning direction theory can account for this. However, other influences on behaviour can also be identified. We introduce impulse balance theory to make quantitative predictions on t...
The paper presents a quick and simplified aggregation method for a large class of Markov chain functionals, based on the concept of stochastic complementation. Aggregation reduces the number of Markov states by grouping them into a smaller number of aggregated states, thereby producing a considerable saving in the computational complexity associated with maximum likelihood parameter...
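The central construction here is the stochastic complement (Meyer's uncoupling result; standard formulation, not quoted from the abstract): partition the transition matrix into the states to keep and the states to eliminate, then fold the eliminated block back in,

```latex
P \;=\; \begin{pmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{pmatrix},
\qquad
S_{11} \;=\; P_{11} + P_{12}\,(I - P_{22})^{-1} P_{21} .
```

$S_{11}$ is again a stochastic matrix, and its stationary distribution is the original chain's stationary distribution restricted to the retained states, renormalized.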