Search results for: stepsize

Number of results: 879

2009
Ilya O. Ryzhov, Peter Frazier, Warren Powell

Approximate value iteration is used in dynamic programming when we use random observations to estimate the value of being in a state. These observations are smoothed to approximate the expected value function, leading to the problem of choosing a stepsize (the weight given to the most recent observation). A stepsize of 1/n is a common (and provably convergent) choice. However, we prove that it ...
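A minimal sketch (not the authors' code) of the smoothing update described above: with the stepsize rule a_n = 1/n, the update V ← (1 − a)·V + a·obs reduces exactly to the running sample mean of the observations.

```python
def smooth(observations, stepsize):
    """Apply V_n = (1 - a_n) * V_{n-1} + a_n * obs_n for a stepsize rule a_n."""
    v = 0.0
    for n, obs in enumerate(observations, start=1):
        a = stepsize(n)
        v = (1.0 - a) * v + a * obs
    return v

obs = [4.0, 6.0, 5.0, 9.0]
# With a_n = 1/n the smoothed value equals the plain average of obs.
mean_via_stepsize = smooth(obs, lambda n: 1.0 / n)
```

Other stepsize rules (e.g. a constant a) weight recent observations more heavily, which is the trade-off the abstract is concerned with.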

Journal: SIAM J. Scientific Computing, 1997
Kjell Gustafsson, Gustaf Söderlind

In the numerical solution of ODEs by implicit time-stepping methods, a system of (nonlinear) equations has to be solved at each step. It is common practice to use fixed-point iterations or, in the stiff case, some modified Newton iteration. The convergence rate of such methods depends on the stepsize. Similarly, a stepsize change may force a refactorization of the iteration matrix in the Newton solver...

Journal: CoRR, 2017
Patrick R. Johnstone, Pierre Moulin

The purpose of this manuscript is to derive new convergence results for several subgradient methods for minimizing nonsmooth convex functions with Hölderian growth. The growth condition is satisfied in many applications and includes functions with quadratic growth and functions with weakly sharp minima as special cases. To this end there are four main contributions. First, for a constant and su...
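A hypothetical illustration of the constant-stepsize regime mentioned above (not the paper's analysis): the subgradient method on the nonsmooth convex function f(x) = |x|, which has a weakly sharp minimum at 0. With a constant stepsize t the iterates reach and then remain within distance t of the minimizer rather than converging exactly.

```python
def subgradient_step(x, t):
    """One subgradient step on f(x) = |x| with constant stepsize t."""
    g = 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)  # a subgradient of |x|
    return x - t * g

x, t = 5.0, 0.01
for _ in range(1000):
    x = subgradient_step(x, t)
# x is now trapped in a ball of radius t around the minimizer 0
```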

2017
Philipp Bulling, Klaus Linhard, Arthur Wolf, Gerhard Schmidt

A new approach for acoustic feedback cancellation is presented. The challenge in acoustic feedback cancellation is a strong correlation between the local speech and the loudspeaker signal. Due to this correlation, the convergence rate of adaptive algorithms is limited. Therefore, a novel stepsize control of the adaptive filter is presented. The stepsize control exploits reverberant signal perio...
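For context, a generic normalized-LMS (NLMS) adaptive filter sketch, where the stepsize mu is the quantity such a control would adjust; the paper's actual stepsize control, which exploits reverberant signal periods, is not reproduced here. The toy setup identifies a known two-tap FIR system from noiseless data.

```python
import random

def nlms_identify(x, d, taps, mu=0.5, eps=1e-8):
    """Adapt FIR weights w so the filter output tracks the desired signal d."""
    w = [0.0] * taps
    buf = [0.0] * taps
    for xn, dn in zip(x, d):
        buf = [xn] + buf[:-1]                       # newest sample first
        y = sum(wi * bi for wi, bi in zip(w, buf))  # filter output
        e = dn - y                                  # a-priori error
        norm = sum(b * b for b in buf) + eps        # input power normalization
        w = [wi + mu * e * bi / norm for wi, bi in zip(w, buf)]
    return w

random.seed(0)
true_w = [0.8, -0.3]
x = [random.uniform(-1.0, 1.0) for _ in range(2000)]
d = [true_w[0] * x[n] + (true_w[1] * x[n - 1] if n > 0 else 0.0)
     for n in range(len(x))]
w = nlms_identify(x, d, taps=2)
```

A larger mu speeds convergence but amplifies sensitivity to correlated input, which is exactly why feedback cancellation needs a stepsize control.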

Journal: CoRR, 2018
Weiran Wang, Jialei Wang, Mladen Kolar, Nathan Srebro

We propose methods for distributed graph-based multi-task learning that are based on weighted averaging of messages from other machines. Uniform averaging or diminishing stepsize in these methods would yield consensus (single task) learning. We show how simply skewing the averaging weights or controlling the stepsize allows learning different, but related, tasks on the different machines.
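A toy sketch of the consensus effect described above (an assumed setup, not the paper's algorithm): when every machine repeatedly mixes its parameter with the global mean, all parameters collapse to consensus; skewing the weights toward each machine's own parameter (self_weight near 1) would keep the tasks distinct.

```python
def average_round(params, self_weight):
    """One communication round: mix each machine's parameter with the mean."""
    mean = sum(params) / len(params)
    return [self_weight * p + (1.0 - self_weight) * mean for p in params]

params = [1.0, 2.0, 6.0]
for _ in range(100):
    params = average_round(params, self_weight=0.5)
# the averaging term contracts every deviation from the mean 3.0 -> consensus
```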

Journal: Applied Mathematics and Computation, 2008
Ülo Lepik

A modification of the Haar wavelet method, in which the stepsize of the argument is variable, is proposed. To establish the efficiency of the method, three test problems for which the exact solution is known are considered. Computer simulations show a clear preference for the suggested method over the Haar wavelet method with a constant stepsize.

Journal: Journal of Computational and Applied Mathematics, 1987

2017
Ming Tian, Hui-Fang Zhang

The split feasibility problem (SFP) is to find a point x* ∈ C such that Ax* ∈ Q, where C and Q are nonempty closed convex subsets of Hilbert spaces H₁ and H₂, and A : H₁ → H₂ is a bounded linear operator. Byrne's CQ algorithm is an effective algorithm to solve the SFP, but it needs to compute the operator norm ||A||, and sometimes ...

2017
Peiyuan Wang, Jianjun Zhou, Risheng Wang, Jie Chen

Variable stepsize methods are effective for various modified CQ algorithms for solving the split feasibility problem (SFP). The purpose of this paper is first to introduce two new, simpler variable stepsizes for the CQ algorithm. Two new generalized variable stepsizes, which cover the former ones, are then proposed in real Hilbert spaces. Finally, two more general KM (Krasnosel'skii-Mann) CQ...
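An illustrative sketch of the basic CQ iteration x ← P_C(x − γ·Aᵀ(Ax − P_Q(Ax))) with a fixed stepsize γ ∈ (0, 2/||A||²); the paper's new variable stepsizes are not reproduced here. C and Q are intervals and A is a scalar, so both projections are simple clamps.

```python
def clamp(x, lo, hi):
    """Projection of x onto the interval [lo, hi]."""
    return max(lo, min(hi, x))

def cq_solve(a, c, q, gamma, x0=0.0, iters=500):
    """Find x in C with a*x in Q via x <- P_C(x - gamma*a*(a*x - P_Q(a*x)))."""
    x = x0
    for _ in range(iters):
        ax = a * x
        x = clamp(x - gamma * a * (ax - clamp(ax, *q)), *c)
    return x

# C = [0, 1], Q = [2, 3], A = 2: the unique solution is x = 1.
x = cq_solve(a=2.0, c=(0.0, 1.0), q=(2.0, 3.0), gamma=0.4)  # gamma < 2/||A||^2
```

Note that choosing gamma requires ||A||, which is what the variable-stepsize variants discussed in these papers aim to avoid.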

2008
Chun-Nan Hsu, Han-Shen Huang, Yu-Ming Chang

Previously, Bottou and LeCun [1] established that the second-order stochastic gradient descent (SGD) method can potentially achieve generalization performance as good as the empirical optimum in a single pass through the training examples. However, second-order SGD requires computing the inverse of the Hessian matrix of the loss function, which is usually prohibitively expensive. Recently, we inven...
