Search results for: bellman zadehs principle
Number of results: 157398
In this paper, we study one kind of stochastic recursive optimal control problem with Markovian switching. In this problem, the cost functional is described by the solution of backward stochastic differential equations with Markov chains. We prove the dynamic programming principle for this problem and show that the value function is the unique viscosity solution of the corresponding Hamilton-Jacobi-Bellman equation.
Abstract: Problem statement: We studied the inventory-production system with two-parameter Weibull distributed deterioration of items. Approach: The inventory model was developed as a linear optimal control problem and, by the Pontryagin maximum principle, the optimal control problem was solved analytically to obtain the optimal solution of the problem. Results: It was then illustrated with the help ...
We apply stochastic Perron’s method to a singular control problem in which an individual consumes at a given rate, invests in a risky financial market where trading is subject to proportional transaction costs, and seeks to minimize her probability of lifetime ruin. Without relying on the dynamic programming principle (DPP), we characterize the value function as the unique viscosity ...
This paper derives the optimal debt ratio and dividend payment strategies for an insurance company. Taking into account the impact of reinsurance policies and claims from the credit derivatives, the surplus process is stochastic and is jointly determined by the reinsurance strategies, debt levels, and unanticipated shocks. The objective is to maximize the total expected discounted utility of d...
In this paper, which is a continuation of the discrete time paper [4], we develop a theory for continuous time stochastic control problems which, in various ways, are time inconsistent in the sense that they do not admit a Bellman optimality principle. We study these problems within a game theoretic framework, and we look for Nash subgame perfect equilibrium points. For a general controlled con...
We give a short introduction to the stochastic calculus for Itô-Lévy processes and review briefly the two main methods of optimal control of systems described by such processes: (i) Dynamic programming and the Hamilton-Jacobi-Bellman (HJB) equation (ii) The stochastic maximum principle and its associated backward stochastic differential equation (BSDE). The two methods are illustrated by applica...
We give a short introduction to the stochastic calculus for Itô-Lévy processes, and review briefly the two main methods of optimal control of stochastic systems described by such processes, namely: (i) Dynamic programming and the Hamilton-Jacobi-Bellman (HJB) equation (ii) The stochastic maximum principle and its associated adjoint backward stochastic differential equation (BSDE). The two methods...
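The dynamic programming method reviewed in the abstracts above can be illustrated in its simplest setting: a scalar, deterministic linear-quadratic (LQ) control problem, where Bellman's principle reduces to a backward Riccati recursion for the quadratic value function V_t(x) = P_t x². The sketch below is purely illustrative; the model parameters (a, b, q, r, the horizon) and function names are assumptions, not taken from any of the papers listed.

```python
def lqr_riccati(a, b, q, r, horizon, q_terminal=0.0):
    """Backward dynamic programming for the scalar LQ problem
        x_{t+1} = a*x_t + b*u_t,   cost = sum_t (q*x_t^2 + r*u_t^2).

    Bellman's principle gives V_t(x) = P_t * x^2 with
        P_t = q + a^2 P_{t+1} - (a b P_{t+1})^2 / (r + b^2 P_{t+1}),
    and the optimal feedback u_t = -K_t x_t.
    Returns the value-function coefficients P_0..P_T and gains K_0..K_{T-1}.
    """
    P = [0.0] * (horizon + 1)
    K = [0.0] * horizon
    P[horizon] = q_terminal  # terminal condition of the backward recursion
    for t in range(horizon - 1, -1, -1):
        Pn = P[t + 1]
        K[t] = a * b * Pn / (r + b * b * Pn)          # optimal feedback gain
        P[t] = q + a * a * Pn - (a * b * Pn) ** 2 / (r + b * b * Pn)
    return P, K
```

For a = b = q = r = 1 and a long horizon, P_0 approaches the fixed point of the stationary Riccati equation P = 1 + P/(1 + P), i.e. the golden ratio (1 + √5)/2; this stationary limit is the discrete-time analogue of solving the HJB equation for the infinite-horizon problem.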
Abstract. In this paper, we study one kind of stochastic recursive optimal control problem with obstacle constraints for the cost function, where the cost function is described by the solution of a reflected backward stochastic differential equation. We will give the dynamic programming principle for this kind of optimal control problem and show that the value function is the unique visco...
We provide a dynamic programming principle for stochastic optimal control problems with expectation constraints. A weak formulation, using test functions and a probabilistic relaxation of the constraint, avoids restrictions related to a measurable selection but still implies the Hamilton-Jacobi-Bellman equation in the viscosity sense. We treat open state constraints as a special case of expecta...