Search results for: markovian decision process
Number of results: 1,587,387
Many Wireless Sensor Network (WSN) applications necessitate secure multicast services for broadcasting delay-sensitive data, such as video files and live telecasts, at fixed timeslots. This work provides a novel method to deal with the end-to-end delay and packet drop rate. Opportunistic Routing chooses a link based on the maximum probability of packet delivery ratio. Null Key Generatio...
The problem of optimizing Markovian models with infinite, or finite but infeasibly large, state spaces is considered. In several practically interesting cases the state space of the model is finite and extremely large, or infinite, while the transition and decision structures have some regular property which can be exploited for efficient analysis and optimization. Among the Markovian models with r...
Markovian Process Algebras approximate their model of synchronisation events in order to preserve their Markovian nature. This paper investigates synchronisation models in a stochastic context and focuses on how the Markovian approximation of synchronisation affects the accuracy of the performance model. TIPP and PEPA are used as specific cases throughout, and their different methods of synchronis...
Markovian arrival processes are a powerful class of stochastic processes for representing stochastic workloads that include autocorrelation in performance or dependability modeling. However, fitting the parameters of a Markovian arrival process to given measurement data is non-trivial, and most known methods focus on the single-class case, where all events are of the same type and only the sequence of...
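To make the construct in this abstract concrete: a Markovian arrival process (MAP) is specified by two rate matrices, D0 (hidden phase transitions) and D1 (transitions that generate an arrival), with D0 + D1 a generator matrix. The sketch below simulates inter-arrival times from a two-phase MAP; the specific matrix values are invented for illustration and are not taken from the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

D0 = np.array([[-3.0, 1.0], [0.5, -2.0]])   # phase transitions without an arrival
D1 = np.array([[1.5, 0.5], [0.5, 1.0]])     # phase transitions with an arrival
# Each row of D0 + D1 must sum to zero (generator of the phase process).
assert np.allclose((D0 + D1).sum(axis=1), 0.0)

def simulate_map(n_arrivals, state=0):
    """Return the inter-arrival times of the first n_arrivals events."""
    times, t = [], 0.0
    while len(times) < n_arrivals:
        rate = -D0[state, state]          # total exit rate of the current phase
        t += rng.exponential(1.0 / rate)  # exponential sojourn in the phase
        # Pick the next transition among off-diagonal D0 and all D1 entries.
        weights = np.concatenate([D0[state].clip(min=0), D1[state]])
        nxt = rng.choice(len(weights), p=weights / weights.sum())
        if nxt >= 2:                      # a D1 transition: an arrival occurs
            times.append(t)
            t = 0.0
            state = nxt - 2
        else:                             # a D0 transition: phase changes only
            state = nxt
    return times
```

Because consecutive inter-arrival times depend on the shared hidden phase, traces generated this way exhibit the autocorrelation that makes MAP fitting harder than fitting a renewal process.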
For reinforcement learning in environments in which an agent has access to a reliable state signal, methods based on the Markov decision process (MDP) have had many successes. In many problem domains, however, an agent suffers from limited sensing capabilities that preclude it from recovering a Markovian state signal from its perceptions. Extending the MDP framework, partially observable Markov...
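For a concrete picture of the fully observable MDP setting this abstract contrasts with the partially observable one, here is a minimal value-iteration sketch. The two states, two actions, transition matrices, and rewards are invented for illustration and are not from the cited work.

```python
import numpy as np

# P[a][s, s'] : probability of moving from state s to s' under action a
P = {
    0: np.array([[0.9, 0.1], [0.2, 0.8]]),
    1: np.array([[0.5, 0.5], [0.7, 0.3]]),
}
# R[a][s] : expected immediate reward for taking action a in state s
R = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 2.0])}
gamma = 0.95  # discount factor

V = np.zeros(2)
for _ in range(1000):
    # Bellman optimality backup: V(s) = max_a [R(a,s) + gamma * sum_s' P(s'|s,a) V(s')]
    Q = np.array([R[a] + gamma * P[a] @ V for a in (0, 1)])
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmax(axis=0)  # greedy policy with respect to the converged values
print(V, policy)
```

The key assumption this relies on, and the one the abstract says breaks down under limited sensing, is that the agent observes the state s exactly; a POMDP replaces s with a belief distribution over states.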