Search results for: global convergence

Number of results: 551235

Journal: SIAM Journal on Optimization, 2009
Andrew R. Conn, Katya Scheinberg, Luís N. Vicente

In this paper we prove global convergence for first and second-order stationary points of a class of derivative-free trust-region methods for unconstrained optimization. These methods are based on the sequential minimization of quadratic (or linear) models built from evaluating the objective function at sample sets. The derivative-free models are required to satisfy Taylor-type bounds but, apar...
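The ingredients named in this abstract (a quadratic model built from sampled function values, a trust-region step, a ratio test, a radius update) can be sketched in a minimal, illustrative form. This is not the authors' algorithm: it uses a least-squares model and a simple Cauchy-type step instead of fully linear/quadratic models and a criticality step, and the 2-D test function and all parameter values are invented for the example.

```python
import numpy as np

def dfo_trust_region(f, x0, radius=1.0, iters=40, eta=0.1):
    """Minimal derivative-free trust-region sketch (2-D only).
    A quadratic model is fit by least squares to sampled values of f,
    a Cauchy-type step of trust-region length is taken along the
    negative model gradient, and the radius is updated from the
    actual-vs-predicted reduction ratio."""
    x = np.asarray(x0, dtype=float)
    rng = np.random.default_rng(0)
    for _ in range(iters):
        # sample set inside the trust region (18 points, 6 coefficients)
        Y = x + radius * rng.uniform(-1.0, 1.0, size=(18, 2))
        fvals = np.array([f(y) for y in Y])
        S = Y - x
        # design matrix for m(s) = c + g@s + 0.5*s@H@s
        Phi = np.column_stack([np.ones(len(S)), S[:, 0], S[:, 1],
                               0.5 * S[:, 0]**2, S[:, 0] * S[:, 1],
                               0.5 * S[:, 1]**2])
        coef, *_ = np.linalg.lstsq(Phi, fvals, rcond=None)
        g = coef[1:3]                                  # model gradient
        step = -radius * g / (np.linalg.norm(g) + 1e-12)
        pred = -(g @ step)                             # predicted decrease
        actual = f(x) - f(x + step)
        if pred > 0 and actual / pred > eta:
            x = x + step                               # accept, expand radius
            radius = min(2.0 * radius, 10.0)
        else:
            radius *= 0.5                              # reject, shrink radius
    return x

# invented convex test problem with minimizer (1, 2)
f = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.0)**2
x = dfo_trust_region(f, [5.0, 5.0])
print(x)
```

The ratio test is what drives the global convergence argument in this class of methods: steps are only accepted when the model's predicted decrease is realized, and the radius shrinks otherwise.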

2007
H. Jaap van den Herik, Daniel Hennes, Michael Kaisers, Karl Tuyls, Katja Verbeeck

In this paper we compare state-of-the-art multi-agent reinforcement learning algorithms in a wide variety of games. We consider two types of algorithms: value iteration and policy iteration. Four characteristics are studied: initial conditions, parameter settings, convergence speed, and local versus global convergence. Global convergence is still difficult to achieve in practice, despite existi...

2007
Subhash Chandra Pandey, Piyush Tripathi

This paper discusses the global output convergence for continuous time recurrent neural networks with continuous decreasing as well as increasing activation functions in probabilistic metric space. We establish three sufficient conditions to guarantee the global output convergence of this class of neural networks. The present result does not require symmetry in the connection weight matrix. The...

2004
Deacha Puangdownreong, Sarawut Sujitjorn, Thanatchai Kulworawanichpong

This paper presents a convergence proof for the adaptive tabu search (ATS) algorithms. The proof consists of two parts: the convergence of all solutions of interest in a finite search space, and the convergence of the ATS search processes to the global minimum. With the proposed definitions and theorems, the proofs show that the ATS algorithms based on a random process have finite co...

Journal: Applied Mathematics and Computation, 2009
Yunong Zhang, Yanyan Shi, Ke Chen, Chaoli Wang

Wang proposed a gradient-based neural network (GNN) for computing matrix inverses online. Global asymptotic convergence was shown for such a neural network when applied to inverting nonsingular matrices. Beyond the previously presented asymptotic convergence, this paper investigates more desirable properties of the gradient-based neural network, e.g., global exponential convergence for n...
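A minimal Euler-discretized sketch of such a gradient-based network, assuming the usual energy function E(X) = ||A X − I||_F² / 2, whose gradient flow is dX/dt = −γ Aᵀ(A X − I); the gain γ, step size, and test matrix here are arbitrary illustration values, not the paper's settings.

```python
import numpy as np

def gnn_inverse(A, gamma=50.0, dt=1e-3, steps=20000):
    """Euler-discretized sketch of a gradient-based neural network
    (GNN) for matrix inversion: gradient descent on
    E(X) = ||A @ X - I||_F**2 / 2, i.e. the dynamics
    dX/dt = -gamma * A.T @ (A @ X - I)."""
    n = A.shape[0]
    X = np.zeros((n, n))          # arbitrary initial state
    I = np.eye(n)
    for _ in range(steps):
        X -= dt * gamma * (A.T @ (A @ X - I))
    return X

A = np.array([[4.0, 1.0], [2.0, 3.0]])   # nonsingular example
X = gnn_inverse(A)
print(np.max(np.abs(X - np.linalg.inv(A))))   # small residual
```

For nonsingular A the energy is strongly convex in X, which is what makes the global (and, per the abstract, exponential) convergence of the continuous-time flow plausible; the discrete step size must be small enough for stability.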

1999
George D. Magoulas, Vassilis P. Plagianakos, George S. Androulakis, Michael N. Vrahatis

In this paper we propose a framework for developing globally convergent batch training algorithms with adaptive learning rate. The proposed framework provides conditions under which global convergence is guaranteed for adaptive learning rate training algorithms. To this end, the learning rate is appropriately tuned along the given descent direction. Providing conditions regarding the search dir...
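Tuning a learning rate along a given descent direction until a sufficient-decrease condition holds can be sketched with generic Armijo backtracking. This is a stand-in for the kind of learning-rate condition the framework imposes, not the paper's exact rule; the function, parameters, and names below are invented for the example.

```python
import numpy as np

def armijo_learning_rate(f, grad, x, d, lr0=1.0, beta=0.5, sigma=1e-4):
    """Generic backtracking sketch: shrink the learning rate along the
    descent direction d until the Armijo sufficient-decrease condition
    f(x + lr*d) <= f(x) + sigma * lr * grad(x) @ d holds."""
    fx, g = f(x), grad(x)
    lr = lr0
    while f(x + lr * d) > fx + sigma * lr * (g @ d):
        lr *= beta
    return lr

# example: f(x) = x@x with the steepest-descent direction
f = lambda x: x @ x
grad = lambda x: 2.0 * x
x = np.array([3.0, -2.0])
d = -grad(x)
lr = armijo_learning_rate(f, grad, x, d)
print(lr, f(x + lr * d) < f(x))
```

Conditions of this type (decrease tied to the directional derivative, plus restrictions on the search direction) are the standard route to global convergence guarantees for batch training with adaptive learning rates.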

2008
R. Silva

In this paper we prove global convergence for first- and second-order stationary points of a class of derivative-free trust-region methods for unconstrained optimization. These methods are based on the sequential minimization of linear or quadratic models built from evaluating the objective function at sample sets. The derivative-free models are required to satisfy Taylor-type bounds but, apar...

Journal: Math. Oper. Res., 1992
Takashi Tsuchiya

In this paper we investigate the global convergence property of the affine scaling method under the assumption of dual nondegeneracy. The behavior of the method near degenerate vertices is analyzed in detail on the basis of the equivalence between the affine scaling methods for homogeneous LP problems and Karmarkar's method. It is shown that the step-size 1/8, where the displacement vector is n...
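A primal affine-scaling iteration can be sketched as follows, using the fractional step size 1/8 mentioned in the abstract (interpreted as the fraction of the distance to the boundary in the rescaled space). The LP instance, starting point, and tolerances are illustrative, and this sketch does not reproduce the paper's degeneracy analysis.

```python
import numpy as np

def affine_scaling(c, A, x, step=0.125, iters=200):
    """Primal affine-scaling sketch for min c@x s.t. A@x = b, x > 0,
    started from a feasible interior point x.  Each iteration rescales
    by D = diag(x), estimates the dual variables, and moves a fraction
    `step` of the way toward the boundary in the scaled space."""
    for _ in range(iters):
        D2 = np.diag(x * x)
        w = np.linalg.solve(A @ D2 @ A.T, A @ D2 @ c)  # dual estimate
        r = c - A.T @ w                                # reduced costs
        denom = np.max(np.abs(x * r))                  # scaled distance to boundary
        if denom < 1e-12:                              # (near-)vertex reached
            break
        x = x - step * (x * x * r) / denom             # Dikin-type step
    return x

# illustrative LP: min x1 + 2*x2  s.t.  x1 + x2 = 1, x > 0; optimum (1, 0)
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
x = affine_scaling(c, A, np.array([0.5, 0.5]))
print(c @ x)
```

Because each step covers only a fixed fraction of the distance to the boundary, the iterates stay strictly interior and approach the optimal vertex without ever reaching it, which is why step-size bounds such as 1/8 matter for the global convergence analysis near degenerate vertices.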

This study concerns a trust-region-based method for solving unconstrained optimization problems. The approach takes advantage of the compact limited-memory BFGS updating formula together with an appropriate adaptive radius strategy. In our approach, the adaptive technique decreases the number of subproblems to be solved, while utilizing the structure of limited memory quasi-Newt...

Journal: IEEE Trans. Automat. Contr., 2002
Mazen Alamir, Luis Antonio Calvillo-Corona

In this paper, further results are proposed concerning the design and convergence of receding-horizon nonlinear observers. The key feature is the definition of an observability radius in relation to a pre-specified compact set of initial configurations. This enables a semi-global convergence result to be derived that turns out to be a global convergence result when appropriate regularity assumpti...

Chart of the number of search results per year

Click on the chart to filter the results by publication year