Search results for: double parameter scaled quasi newton formula
Number of results: 648,605
Artificial neural networks offer advantages such as learning, adaptation, fault tolerance, parallelism, and generalization. This paper examines how diverse learning methods affect the speed of convergence of neural networks. To this end, we first introduce a perceptron method based on artificial neural networks, which has been applied to solving a non-singula...
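As an illustration of the simplest such learning rule, here is a minimal perceptron trained with the classic error-correction update. The AND toy data, learning rate, and epoch count are illustrative choices, not taken from the paper.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=100):
    """Classic perceptron rule: w <- w + lr * (y - yhat) * x."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            yhat = 1.0 if xi @ w + b > 0 else 0.0
            err = yi - yhat
            w += lr * err * xi
            b += lr * err
    return w, b

# Linearly separable toy data: logical AND.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)
w, b = train_perceptron(X, y)
preds = [1.0 if xi @ w + b > 0 else 0.0 for xi in X]  # → [0, 0, 0, 1]
```

On linearly separable data the perceptron convergence theorem guarantees this loop finds a separating hyperplane in finitely many updates.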
A system of nonlinear asset flow differential equations (AFDE) gives rise to an inverse problem involving optimization of parameters that characterize an investor population. The optimization procedure is used in conjunction with daily market prices and net asset values to determine the parameters for which the AFDE yield the best fit for the previous n days. Using these optimal parameters the ...
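The fitting step described can be sketched as generic least-squares parameter estimation over the previous n days of data. The exponential-trend model below is a hypothetical stand-in, not the paper's asset-flow differential equations (AFDE).

```python
import numpy as np
from scipy.optimize import least_squares

def model(theta, t):
    """Stand-in price model; the real AFDE would be integrated here."""
    a, b = theta
    return a * np.exp(b * t)

def residuals(theta, t, prices):
    return model(theta, t) - prices

t = np.arange(10, dtype=float)         # previous n = 10 days
true_theta = (100.0, 0.02)
prices = model(true_theta, t)          # synthetic "market prices"
fit = least_squares(residuals, x0=[90.0, 0.0], args=(t, prices))
# fit.x recovers the generating parameters on this noiseless data.
```

In the paper's setting, evaluating the residuals would require numerically integrating the AFDE for each trial parameter vector; the optimization loop itself is unchanged.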
This paper introduces and analyses a new algorithm for minimizing a convex function subject to a finite number of convex inequality constraints. It is assumed that the Lagrangian of the problem is strongly convex. The algorithm combines interior point methods for dealing with the inequality constraints and quasi-Newton techniques for accelerating the convergence. Feasibility of the iterates is ...
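A minimal sketch of the combination described, assuming a logarithmic-barrier interior point scheme with SciPy's BFGS quasi-Newton solver for the inner minimizations. The toy problem and the barrier-weight schedule are illustrative, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize x1^2 + x2^2  subject to  x1 + x2 >= 1.
# The constrained optimum is (0.5, 0.5).
def f(x):
    return x[0]**2 + x[1]**2

def barrier(x, mu):
    slack = x[0] + x[1] - 1.0
    if slack <= 0:
        return np.inf              # keep iterates strictly feasible
    return f(x) - mu * np.log(slack)

x = np.array([2.0, 2.0])           # strictly feasible starting point
mu = 1.0
for _ in range(8):                 # shrink the barrier weight geometrically
    res = minimize(lambda z: barrier(z, mu), x, method="BFGS")
    x, mu = res.x, mu * 0.2
# x is now close to the constrained optimum (0.5, 0.5).
```

Warm-starting each inner BFGS solve from the previous barrier solution is what makes the quasi-Newton acceleration pay off as mu shrinks.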
Quasi-Newton methods are widely used in practice for convex loss minimization problems. These methods exhibit good empirical performance on a wide variety of tasks and enjoy super-linear convergence to the optimal solution. For large-scale learning problems, stochastic quasi-Newton methods have been recently proposed. However, these typically only achieve sub-linear convergence rates and have no...
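For reference, the deterministic BFGS iteration that such stochastic variants build on can be sketched as follows. The fixed unit step and the toy quadratic are simplifications; production implementations use a line search satisfying the Wolfe conditions.

```python
import numpy as np

def bfgs(grad, x, iters=50, lr=1.0):
    """Minimal BFGS with the standard inverse-Hessian update
    H+ = (I - rho s y^T) H (I - rho y s^T) + rho s s^T."""
    n = x.size
    H = np.eye(n)
    g = grad(x)
    for _ in range(iters):
        p = -H @ g                 # quasi-Newton search direction
        x_new = x + lr * p         # fixed step; real codes use a line search
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:             # skip update if curvature condition fails
            rho = 1.0 / sy
            I = np.eye(n)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Convex quadratic: f(x) = 0.5 x^T A x - b^T x, minimum at A^{-1} b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = bfgs(lambda x: A @ x - b, np.zeros(2))
```

The curvature guard `sy > 1e-12` is exactly what becomes delicate in the stochastic setting, where noisy gradient differences can violate the secant condition.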
The paper studies quasi-interpolation by scaled shifts of a smooth and rapidly decaying function. The centers are images of a smooth mapping of the hZ^n lattice in R^s, s ≥ n, and the scaling parameters are proportional to h. We show that for a large class of generating functions the quasi-interpolants provide high order approximations up to some prescribed accuracy. Although the approximants do no...
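A one-dimensional sketch of such a quasi-interpolant, using the Gaussian as the generating function on the lattice hZ; the parameters h and D and the truncation radius are illustrative choices.

```python
import numpy as np

def quasi_interpolant(f, x, h=0.05, D=2.0, support=10):
    """Gaussian quasi-interpolation on the lattice h*Z (1-D sketch):
    (M_h f)(x) = (pi*D)^(-1/2) * sum_m f(h*m) * exp(-(x - h*m)^2 / (D*h^2))."""
    m0 = int(np.floor(x / h))
    total = 0.0
    # Truncate the rapidly decaying sum to lattice points near x.
    for m in range(m0 - support, m0 + support + 1):
        total += f(h * m) * np.exp(-(x - h * m) ** 2 / (D * h ** 2))
    return total / np.sqrt(np.pi * D)

approx = quasi_interpolant(np.sin, 0.7)   # close to sin(0.7) up to O(h^2)
```

The scheme does not interpolate f at the lattice points; its error contains an O(h^2) term plus a "saturation" term that decays rapidly as D grows, which is the sense in which the accuracy is "prescribed" rather than asymptotic.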
In this paper, we extend the Ai-Zhang direction to the class of semidefinite optimization problems. We define a new wide neighborhood N(τ1, τ2, η) and, as usual but with a small change, make use of the scaled Newton equations for symmetric search directions. After defining the "positive part" and the "negative part" of a symmetric matrix, we propose solving the Newton equation with its r...
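The positive and negative parts of a symmetric matrix can be computed from its spectral decomposition, as in this short sketch (the split itself is standard; how the paper uses it in the Newton system is not shown here):

```python
import numpy as np

def sym_parts(A):
    """Split a symmetric matrix into positive and negative parts
    via the spectral decomposition A = Q diag(w) Q^T."""
    w, Q = np.linalg.eigh(A)
    pos = (Q * np.maximum(w, 0.0)) @ Q.T   # eigenvalues clamped below at 0
    neg = (Q * np.minimum(w, 0.0)) @ Q.T   # eigenvalues clamped above at 0
    return pos, neg

A = np.array([[1.0, 2.0], [2.0, -3.0]])
P, N = sym_parts(A)
# P + N reconstructs A; P is positive semidefinite, N negative semidefinite.
```

This decomposition is unique for symmetric matrices because `eigh` diagonalizes with an orthogonal Q, so clamping the eigenvalues acts directly on the spectrum.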
The minimization problem of an L2-sensitivity measure subject to L2-norm dynamic-range scaling constraints is formulated for a class of two-dimensional (2-D) state-space digital filters. First, the problem is converted into an unconstrained optimization problem by using linear-algebraic techniques. Next, the unconstrained optimization problem is solved by applying an efficient quasi-Newton algo...
A new quasi-Newton updating formula for sparse optimization calculations is presented. It makes combined use of a simple strategy for fixing symmetry and a Schubert correction to the upper triangle of a permuted Hessian approximation. Interesting properties of this new update are that it is closed form and that it does not satisfy the secant condition at every iteration of the calculations. Som...
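A naive sketch of the two ingredients mentioned, restricting a Hessian approximation to a known sparsity pattern and then fixing symmetry. This is only a simple projection for illustration, not the paper's closed-form Schubert-type update.

```python
import numpy as np

def project_sparse_symmetric(B, pattern):
    """Naive sketch: force a Hessian approximation onto a known sparsity
    pattern, then restore symmetry by averaging with the transpose."""
    B = np.where(pattern, B, 0.0)   # zero out entries off the pattern
    return 0.5 * (B + B.T)          # simple symmetrization

# Tridiagonal sparsity pattern for n = 4 (symmetric by construction).
n = 4
idx = np.arange(n)
pattern = np.abs(np.subtract.outer(idx, idx)) <= 1
B = np.random.default_rng(0).normal(size=(n, n))
Bs = project_sparse_symmetric(B, pattern)
# Bs is symmetric and zero outside the tridiagonal pattern.
```

The point of exploiting sparsity is storage and cost: for a banded pattern the update touches O(n) entries instead of the O(n^2) of a dense quasi-Newton update.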