Search results for: bellman zadehs principle

Number of results: 157398

2017
Fabio Camilli, Maurizio Falcone

We present a numerical approximation scheme for the infinite-horizon problem related to diffusion processes. The scheme is based on a discrete version of the dynamic programming principle and converges to the viscosity solution of the second-order Hamilton-Jacobi-Bellman equation. The diffusion can be degenerate. The problem in R^N is solved in a bounded domain Ω using a truncation technique and wi...
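
To make the approach concrete, here is a minimal sketch (in generic notation, not necessarily the paper's) of the kind of discrete dynamic programming principle such schemes are built on, for a discounted infinite-horizon problem with time step h > 0, discount rate λ > 0, running cost ℓ, and controlled diffusion X^{x,a}:

\[
  v_h(x) \;=\; \min_{a \in A} \Bigl\{ (1-\lambda h)\,\mathbb{E}\bigl[v_h\bigl(X^{x,a}_h\bigr)\bigr] \;+\; h\,\ell(x,a) \Bigr\},
\]

which, as h → 0, converges in the viscosity sense to the solution of the stationary Hamilton-Jacobi-Bellman equation

\[
  \lambda v(x) \;+\; \sup_{a \in A} \Bigl\{ -\tfrac{1}{2}\,\mathrm{tr}\bigl(\sigma\sigma^{\top}(x,a)\,D^{2} v(x)\bigr) \;-\; b(x,a)\cdot Dv(x) \;-\; \ell(x,a) \Bigr\} \;=\; 0 .
\]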

2007
Revaz Tevzadze

In this paper we show a general result on the existence and uniqueness of Backward Stochastic Differential Equations (BSDEs) with quadratic growth driven by a continuous martingale. Backward stochastic differential equations were introduced by Bismut [1] in the linear case as equations of the adjoint process in the stochastic maximum principle. A nonlinear BSDE (with Bellman generator) was first ...
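
For orientation, a minimal sketch of the type of equation involved, in generic notation (ξ, f, Z, N are illustrative symbols, not taken from the paper): a BSDE driven by a continuous martingale M, with terminal condition ξ and a generator f of quadratic growth in Z, reads

\[
  Y_t \;=\; \xi \;+\; \int_t^T f\bigl(s, Y_s, Z_s\bigr)\, d\langle M\rangle_s \;-\; \int_t^T Z_s\, dM_s \;-\; \bigl(N_T - N_t\bigr), \qquad 0 \le t \le T,
\]

where N is a martingale orthogonal to M; the solution is the triple (Y, Z, N).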

Journal: SIAM J. Control and Optimization 2007
Rainer Buckdahn, Jin Ma

In this paper we study a class of pathwise stochastic control problems in which the optimality is allowed to depend on the paths of exogenous noise (or information). Such a phenomenon can be illustrated by considering a particular investor who wants to take advantage of certain extra information but in a completely legal manner. We show that such a control problem may not even have a “minimizin...
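
As an illustration of what "pathwise" means here, under generic notation (B, ℓ, g are assumptions, not the paper's symbols), the value is allowed to depend on the realized path of the exogenous noise B, for instance

\[
  V(t,x;\omega) \;=\; \operatorname*{ess\,inf}_{u \in \mathcal{U}} \,\mathbb{E}\Bigl[\int_t^T \ell\bigl(s, X^{t,x,u}_s, u_s\bigr)\,ds \;+\; g\bigl(X^{t,x,u}_T\bigr) \,\Big|\, \mathcal{F}^{B}_T\Bigr](\omega),
\]

in contrast with the classical value function V(t,x), which averages over the future of B as well.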

2010
Tomas Björk, Agatha Murgoci

We develop a theory for stochastic control problems which, in various ways, are time inconsistent in the sense that they do not admit a Bellman optimality principle. We attack these problems by viewing them within a game-theoretic framework, and we look for Nash subgame perfect equilibrium points. For a general controlled Markov process and a fairly general objective functional we derive an ext...
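
For context, the Bellman optimality principle that fails here is, in a generic illustrative formulation, the recursion

\[
  V(t,x) \;=\; \sup_{u}\, \mathbb{E}\Bigl[\int_t^{t+h} \ell\bigl(s, X^{u}_s, u_s\bigr)\,ds \;+\; V\bigl(t+h, X^{u}_{t+h}\bigr) \,\Big|\, X^{u}_t = x\Bigr];
\]

when the objective is time inconsistent (for instance under non-exponential discounting, or when the cost depends on the initial point (t,x)), this recursion breaks down, which is what motivates replacing "optimal" controls by subgame perfect equilibrium controls.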

Journal: Magasin fra Det Kongelige Bibliotek 1995

Journal: Journal of Inequalities and Applications 2018

Journal: Journal of Mathematical Inequalities 2020

2018
Navid Razmjooy, Mehdi Ramezani

Optimal control is the policy of obtaining the control value that minimizes a predefined cost function. Recently, several optimization methods have been introduced for achieving this purpose [1-4]. Among these methods, Pontryagin's maximum principle [5] and the Hamilton-Jacobi-Bellman equation [6] are the most popular. In Pontryagin's maximum principle, the optimal control problem will be converte...
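
A minimal sketch of the two conditions mentioned, for a deterministic problem with dynamics ẋ = f(t, x, u), running cost L, and terminal cost φ (generic notation, not the paper's):

\[
  -\partial_t V(t,x) \;=\; \min_{u}\bigl\{ L(t,x,u) + \nabla_x V(t,x)\cdot f(t,x,u) \bigr\}, \qquad V(T,x) = \varphi(x),
\]

is the Hamilton-Jacobi-Bellman equation for the value function V, while Pontryagin's maximum principle (stated here in its minimum form) characterizes an optimal control u^* through the Hamiltonian H(t,x,u,p) = L(t,x,u) + p \cdot f(t,x,u):

\[
  \dot p(t) \;=\; -\,\partial_x H\bigl(t, x^*(t), u^*(t), p(t)\bigr), \qquad p(T) = \nabla\varphi\bigl(x^*(T)\bigr), \qquad u^*(t) \in \arg\min_{u} H\bigl(t, x^*(t), u, p(t)\bigr).
\]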

2008
Rainer Buckdahn, Jin Ma, Catherine Rainer

In this paper we study a class of stochastic control problems in which the control of the jump size is essential. Such a model generalizes various applied problems, ranging from optimal reinsurance selection for general insurance models to queueing theory. The main novelty of such a control problem is that by changing the jump size of the system, one essentially changes the...
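
To illustrate what "control of the jump size" can mean, a minimal sketch of controlled jump dynamics under generic notation (W, Ñ, γ are illustrative, not the paper's symbols):

\[
  dX_t \;=\; b\bigl(X_{t-}, u_t\bigr)\,dt \;+\; \sigma\bigl(X_{t-}, u_t\bigr)\,dW_t \;+\; \int_E \gamma\bigl(X_{t-}, u_t, e\bigr)\,\tilde N(dt, de),
\]

where \tilde N is a compensated Poisson random measure; since the control u_t enters the jump coefficient γ, choosing u changes the size (and hence the distribution) of the jumps of the controlled state.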

[Chart: number of search results per year]