Search results for: compact quasi newton representation
Number of results: 418,611
A quasi-Newton algorithm for semi-infinite programming using an $L_\infty$ exact penalty function is described, and numerical results are presented. Comparisons with three Newton algorithms and one other quasi-Newton algorithm show that the algorithm is very promising in practice. AMS classifications: 65K05, 90C30.
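As an illustrative, not paper-specific, example of an $L_\infty$ exact penalty for a semi-infinite program $\min_x f(x)$ subject to $g(x,t) \le 0$ for all $t \in T$:

$$
P_c(x) \;=\; f(x) \;+\; c\,\max\Bigl\{0,\; \sup_{t\in T} g(x,t)\Bigr\}, \qquad c > 0,
$$

where, under standard regularity assumptions and for sufficiently large $c$, unconstrained minimizers of $P_c$ solve the constrained problem.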
We describe stochastic Newton and stochastic quasi-Newton approaches to efficiently solve large linear least-squares problems where very large data sets present a significant computational burden (e.g., the size may exceed computer memory or data are collected in real time). In our proposed approach, stochasticity is introduced in two different frameworks as a means to overcome these compu...
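A minimal sketch of one way stochasticity can enter such a solver, assuming a simple row-subsampling scheme; the function name, batch size, and damping parameter are illustrative and not taken from the paper:

```python
import numpy as np

def subsampled_newton_lsq(A, b, num_iters=50, batch_size=256, step=0.5, seed=0):
    """Row-subsampled Newton-type iteration for min_x ||Ax - b||^2 (illustrative sketch).

    Each iteration draws a random subset of rows, so the full data set never
    has to be held in memory or factored at once; `step` damps the update.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(num_iters):
        idx = rng.choice(m, size=min(batch_size, m), replace=False)
        A_s, b_s = A[idx], b[idx]
        grad = A_s.T @ (A_s @ x - b_s)            # subsampled gradient
        hess = A_s.T @ A_s + 1e-8 * np.eye(n)     # subsampled, regularized Hessian
        x -= step * np.linalg.solve(hess, grad)   # damped Newton-type step
    return x
```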
Artificial neural networks have advantages such as learning, adaptation, fault tolerance, parallelism, and generalization. This paper examines how diverse learning methods affect the speed of convergence of neural networks. To this end, we first introduce a perceptron method based on artificial neural networks, which has been applied to solving a non-singula...
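For context on the kind of learning rule such comparisons typically start from, here is a minimal, generic perceptron update; this is not the specific method of the paper, and the data layout and names are illustrative:

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=1.0):
    """Classic perceptron rule for labels y in {-1, +1}; returns weights and bias."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # misclassified: push toward correct side
                w += lr * yi * xi
                b += lr * yi
    return w, b
```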
Quasi-Newton methods are widely used in practice for convex loss minimization problems. These methods exhibit good empirical performance on a wide variety of tasks and enjoy super-linear convergence to the optimal solution. For large-scale learning problems, stochastic quasi-Newton methods have recently been proposed. However, these typically achieve only sub-linear convergence rates and have no...
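For large-scale settings like the one described above, quasi-Newton methods usually apply the approximate inverse Hessian through the L-BFGS two-loop recursion rather than forming any matrix. The sketch below is the textbook recursion, not the algorithm of the paper; the function name, default scaling, and list layout are illustrative:

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list, gamma=1.0):
    """L-BFGS two-loop recursion: returns -H_k @ grad using only stored (s_i, y_i) pairs.

    s_list[i] = x_{i+1} - x_i, y_list[i] = grad_{i+1} - grad_i (oldest first);
    gamma scales the initial inverse Hessian H_0 = gamma * I.
    """
    q = grad.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    # first loop: newest pair to oldest
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        alpha = rho * (s @ q)
        q -= alpha * y
        alphas.append(alpha)
    r = gamma * q                      # apply initial inverse Hessian H_0
    # second loop: oldest pair to newest
    for (s, y, rho), alpha in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        beta = rho * (y @ r)
        r += (alpha - beta) * s
    return -r                          # quasi-Newton search direction
```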
In this paper, we investigate semismooth Newton and quasi-Newton methods for the minimization problem in the weighted $\ell^1$-regularization of nonlinear inverse problems. We give conditions under which both methods converge. The semismooth Newton method is proven to converge locally at a superlinear rate, and the semismooth quasi-Newton method is proven to locally converge at le...
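Assuming the regularizer is the weighted $\ell^1$ penalty (the exponent is garbled in the snippet), the minimization problem in question typically has the form

$$
\min_{x}\; \tfrac12\,\|F(x) - y^{\delta}\|^{2} \;+\; \sum_{i} w_i\,|x_i|, \qquad w_i > 0,
$$

where $F$ is the nonlinear forward operator and $y^{\delta}$ the noisy data; the semismooth (quasi-)Newton iterations are then applied to a nonsmooth reformulation of the first-order optimality conditions.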
Although quasi-Newton algorithms generally converge in fewer iterations than conjugate gradient algorithms, they have the disadvantage of requiring substantially more storage. An algorithm will be described which uses an intermediate (and variable) amount of storage and whose convergence is also intermediate, that is, generally better than that observed for conjugate gradient...
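Since the query concerns the compact quasi-Newton representation, it may help to recall the standard compact form of the BFGS matrix from the limited-memory literature (the formula of Byrd, Nocedal, and Schnabel, not necessarily the storage scheme of the abstract above):

$$
B_k \;=\; B_0 \;-\;
\begin{bmatrix} B_0 S_k & Y_k \end{bmatrix}
\begin{bmatrix} S_k^{T} B_0 S_k & L_k \\ L_k^{T} & -D_k \end{bmatrix}^{-1}
\begin{bmatrix} S_k^{T} B_0 \\ Y_k^{T} \end{bmatrix},
$$

where $S_k = [\,s_0,\dots,s_{k-1}\,]$, $Y_k = [\,y_0,\dots,y_{k-1}\,]$, $D_k = \mathrm{diag}(s_i^{T} y_i)$, and $L_k$ is the strictly lower triangular part of $S_k^{T} Y_k$. Storing only $S_k$, $Y_k$, and the small middle matrix gives exactly the intermediate, adjustable storage cost described in the abstract.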
Let $G$ be a locally compact group, $H$ a compact subgroup of $G$, and $\varpi$ a representation of the homogeneous space $G/H$ on a Hilbert space $\mathcal{H}$. For $\psi \in L^p(G/H)$, $1 \leq p \leq \infty$, and an admissible wavelet $\zeta$ for $\varpi$, we define the localization operator $L_{\psi,\zeta}$ on $\mathcal{H}$ and we show that it is a bounded operator. Moreover, we prove that the localizat...
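As a sketch of the standard construction (the paper's precise definition may differ in normalization), a localization operator associated with a symbol $\psi$ and wavelet $\zeta$ is usually given by

$$
L_{\psi,\zeta} f \;=\; \int_{G/H} \psi(gH)\, \langle f,\, \varpi(gH)\zeta \rangle\, \varpi(gH)\zeta \; d\mu(gH), \qquad f \in \mathcal{H},
$$

where $\mu$ is a suitable invariant measure on $G/H$; boundedness on $\mathcal{H}$ is the kind of result the abstract refers to, obtained from $\psi \in L^p(G/H)$ and the admissibility of $\zeta$.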