Search results for: steepest descent method

Number of results: 1645898

2011
Haiwen Xu

To reduce the difficulty and complexity in computing the projection from a real Hilbert space onto a nonempty closed convex subset, researchers have provided a hybrid steepest-descent method for solving VI(F,K) and a subsequent three-step relaxed version of this method. In a previous study, the latter was used to develop a modified and relaxed hybrid steepest-descent (MRHSD) method. However, ch...
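The variational inequality VI(F, K) mentioned above asks for a point x* in a closed convex set K with ⟨F(x*), y − x*⟩ ≥ 0 for all y in K. The abstract's point is that projecting onto a general K is expensive, which motivates hybrid steepest-descent methods. As a minimal sketch of the baseline those methods improve on (not the MRHSD method itself), here is the classical projection scheme x_{k+1} = P_K(x_k − λ F(x_k)) on a box, where projection is cheap; the operator F, the box K, and the step size `lam` are illustrative assumptions:

```python
import numpy as np

def projected_gradient_vi(F, project, x0, lam=0.1, steps=500):
    # Classical projection method for VI(F, K):
    #   x_{k+1} = P_K(x_k - lam * F(x_k))
    # Converges for Lipschitz, strongly monotone F and small enough lam.
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = project(x - lam * F(x))
    return x

# Illustrative example: F(x) = x - a is the gradient of 0.5*||x - a||^2,
# and K = [0, 1]^2, so the solution is the clipped point clip(a, 0, 1).
a = np.array([2.0, -1.0])
proj_box = lambda x: np.clip(x, 0.0, 1.0)
x_star = projected_gradient_vi(lambda x: x - a, proj_box, [0.5, 0.5])
```

Hybrid steepest-descent methods replace the explicit projection P_K with iterations of a nonexpansive mapping whose fixed-point set is K, which is exactly the computational burden the abstract refers to.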

Journal: Math. Comput., 2012
Christian Kreuzer

We design and study an adaptive algorithm for the numerical solution of the stationary nonlinear Stokes problem. The algorithm can be interpreted as a disturbed steepest descent method, which generalizes Uzawa’s method to the nonlinear case. The outer iteration for the pressure is a descent method with fixed step-size. The inner iteration for the velocity consists of an approximate solution of ...

2009
Takemi Shigeta

The purpose of this study is to show some mathematical aspects of the adjoint method, a numerical method for the Cauchy problem (an inverse boundary value problem). The adjoint method is an iterative method based on the variational formulation, in which the steepest descent method minimizes an objective functional derived from the original problem. The conventional adjoint method is time-consu...

2007
Satoko MORIGUCHI Nobuyuki TSUCHIMURA

We consider the problem of minimizing a nonlinear discrete function with L-/M-convexity proposed in the theory of discrete convex analysis. For this problem, steepest descent algorithms and steepest descent scaling algorithms are known. In this paper, we use a continuous relaxation approach, which first minimizes the continuous-variable version in order to find a good initial solution for a steepes...

2006
Oumar Diene Amit Bhaya

The standard conjugate gradient (CG) method uses orthogonality of the residues to simplify the formulas for the parameters necessary for convergence. In adaptive filtering, the sample-by-sample update of the correlation matrix and the cross-correlation vector causes a loss of the residue orthogonality in a modified online algorithm, which, in turn, results in loss of convergence and an increase...
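The residue orthogonality this abstract refers to is the property r_i ⊥ r_j (i ≠ j) of the standard conjugate gradient method, which allows the step length α and direction-update coefficient β to be computed from simple inner products. A minimal sketch of standard CG for a symmetric positive-definite system Ax = b (the textbook algorithm, not the adaptive-filtering variant the entry studies):

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=100):
    # Standard CG for symmetric positive-definite A.
    x = np.zeros_like(b, dtype=float) if x0 is None else np.asarray(x0, dtype=float)
    r = b - A @ x          # residue
    p = r.copy()           # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)   # exact line search along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        # beta formula is valid precisely because successive residues
        # are orthogonal; losing that property (as in the online
        # adaptive-filtering setting above) breaks this simplification.
        beta = rs_new / rs_old
        p = r + beta * p
        rs_old = rs_new
    return x
```

In the adaptive-filtering setting described above, A and b change every sample, so the orthogonality underlying the `beta` update no longer holds, which is the convergence loss the paper addresses.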

2018
Ryo Tamura Koji Hukushima

An efficient method for finding a better maximizer of computationally extensive probability distributions is proposed on the basis of a Bayesian optimization technique. A key idea of the proposed method is to use extreme values of acquisition functions by Gaussian processes for the next training phase, which should be located near a local maximum or a global maximum of the probability distribut...

Journal: J. Optimization Theory and Applications, 2014
Zhou Wei Qing Hai He

In this paper, we first study a nonsmooth steepest descent method for nonsmooth functions defined on a Hilbert space and establish the corresponding algorithm by proximal subgradients. Then, we use this algorithm to find stationary points for those functions satisfying prox-regularity and Lipschitz continuity. As an application, the established algorithm is used to search for the minimizer of a...

2007
Julien Munier

In this paper we are interested in the asymptotic behavior of the trajectories of the well-known steepest descent evolution equation on Riemannian manifolds, which reads ẋ(t) + grad φ(x(t)) = 0. It is shown how the convexity of the objective function φ helps establish the convergence, as time goes to infinity, of the trajectories towards points that minimize φ. Some numerical illustrations are ...
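The continuous-time steepest descent equation ẋ(t) = −grad φ(x(t)) from this entry is, after an explicit Euler discretization with step h, exactly the gradient descent iteration x_{k+1} = x_k − h ∇φ(x_k). A small sketch in Euclidean space (the manifold setting of the paper reduces to this in a single chart); the quadratic φ, the step size, and the iteration count are illustrative choices:

```python
import numpy as np

def gradient_flow_euler(grad_phi, x0, h=0.1, steps=200):
    # Explicit Euler discretization of x'(t) = -grad phi(x(t)):
    # each step is a plain steepest-descent update.
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - h * grad_phi(x)
    return x

# Convex phi(x) = 0.5 * ||x||^2 has grad phi(x) = x and unique
# minimizer 0; convexity guarantees the trajectory converges to it.
x_final = gradient_flow_euler(lambda x: x, [3.0, -2.0])
```

For this convex φ the discrete trajectory contracts toward the minimizer at every step, mirroring the continuous-time convergence result the abstract describes.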
