Search results for: stepsize

Number of results: 879

2009
A. Lee Swindlehurst

It has recently been observed that with appropriate stepsize normalization, the convergence speed of the constant modulus (CM) algorithm can be dramatically improved. In this correspondence, it is shown that if a different normalization strategy is used, one that takes into account the finite alphabet structure of the signals, a standard normalized version of the decision directed equalizer (DD...
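
The normalized-stepsize idea can be illustrated with the plain CMA(2,2) update, where the stepsize is divided by the instantaneous input power. This is the generic NLMS-style normalization only, a sketch under assumed defaults; the paper's alphabet-aware, decision-directed variant is different:

```python
import numpy as np

def cma_step_normalized(w, x, mu=0.5, R2=1.0, eps=1e-12):
    """One CMA(2,2) equalizer update with an NLMS-style stepsize
    normalization by the input power ||x||^2.  Generic normalized form,
    not the alphabet-aware scheme discussed in the abstract above."""
    y = np.vdot(w, x)                         # equalizer output w^H x
    err = np.conj(y) * (np.abs(y) ** 2 - R2)  # constant-modulus error term
    w_new = w - (mu / (np.vdot(x, x).real + eps)) * err * x
    return w_new, y
```

On a toy scalar example the update drives the output modulus toward the target R2 in a single normalized step.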

1997
Stephen D. Bond Benedict J. Leimkuhler

The development of a Sundman-type time-transformation for reversible variable stepsize integration of few-body problems is discussed. While a time-transformation based on minimum particle separation is suitable if the collisions only occur pairwise and isolated in time, the control of stepsize is typically much more difficult for a three-body close approach. Nonetheless, we find that a suitable cho...

1997
Jan Sieber

This paper presents an error test function usable for local error control and automatic stepsize selection in the numerical integration of general index-1 and index-2 differential-algebraic equations (DAEs). This test function makes a compromise between a good approximation of the error arising per step by the discretization (local error) and the order and smoothness assumptions made by...
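
The role such a test function plays can be seen in the elementary stepsize controller of standard ODE/DAE practice, sketched here with conventional safety and clamping constants (the defaults are illustrative, not the paper's):

```python
def propose_stepsize(h, err_est, tol, order, safety=0.9, fac_min=0.2, fac_max=5.0):
    """Elementary local-error stepsize controller: accept the step when the
    estimated local error is within tolerance, and rescale h so the next
    step's predicted error lands near tol (local error ~ C * h**order).
    Safety factor and growth limits are conventional defaults."""
    factor = safety * (tol / max(err_est, 1e-16)) ** (1.0 / order)
    factor = min(fac_max, max(fac_min, factor))
    return err_est <= tol, h * factor
```

A step with a small error estimate is accepted and the stepsize grows (up to the clamp); a step over tolerance is rejected and the stepsize shrinks.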

1997
Desmond J. Higham

Time-stepping methods that guarantee to avoid spurious fixed points are said to be regular. For fixed stepsize Runge-Kutta formulas, this concept has been well studied. Here, the theory of regularity is extended to the case of embedded Runge-Kutta pairs used in variable stepsize mode with local error control. First, the limiting case of a zero error tolerance is considered. A recursive regulari...

2012
Duy V. N. Luong Panos Parpas Daniel Rueckert Berç Rustem

Markov Random Fields (MRF) minimization is a well-known problem in computer vision. We consider the augmented dual of the MRF minimization problem and develop a Mirror Descent algorithm based on weighted Entropy and Euclidean Projection. The augmented dual problem consists of maximizing a non-differentiable objective function subject to simplex and linear constraints. We analyze the convergence...
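
The entropy-projection step this abstract refers to has a well-known closed form on the probability simplex, sketched here minimally (names and stepsize are illustrative, not the paper's formulation with linear constraints):

```python
import numpy as np

def entropy_mirror_step(x, grad, eta):
    """One mirror-descent step on the probability simplex with the entropy
    mirror map: the exponentiated-gradient update, whose renormalization is
    exactly the entropy Bregman projection back onto the simplex."""
    y = x * np.exp(-eta * grad)
    return y / y.sum()
```

The multiplicative form keeps iterates strictly positive and on the simplex without any explicit projection subroutine.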

Journal: SIAM J. Scientific Computing 2006
Werner Römisch Renate Winkler

Abstract. A strategy for controlling the stepsize in the numerical integration of stochastic differential equations (SDEs) is presented. It is based on estimating the p-th mean of local errors. The strategy leads to stepsize sequences that are identical for all computed paths. For the family of Euler schemes for SDEs with small noise we derive computable estimates for the dominating term of the...

2003
M. A. Clark A. D. Kennedy

Computations with two flavours of dynamical staggered quarks are quite popular at present. There are a number of possible problems with such calculations such as flavour symmetry breaking and non-locality of the square-root of the four-flavour action. In this investigation we shall ignore these and consider only the possible errors introduced through algorithmic approximations. We propose the us...

2013
Tom Goldstein Ernie Esser Richard Baraniuk

The Primal-Dual hybrid gradient (PDHG) method is a powerful optimization scheme that breaks complex problems into simple sub-steps. Unfortunately, PDHG methods require the user to choose stepsize parameters, and the speed of convergence is highly sensitive to this choice. We introduce new adaptive PDHG schemes that automatically tune the stepsize parameters for fast convergence without user inp...
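
The flavour of such an adaptive rule can be sketched with a residual-balancing heuristic. This illustrates the general idea under assumed names and constants, not the schemes proposed in the paper:

```python
def balance_stepsizes(tau, sigma, primal_res, dual_res, alpha=0.5, delta=1.5):
    """Residual-balancing heuristic for PDHG stepsizes (a sketch of the
    adaptive idea, not the paper's exact update rule): when the primal
    residual dominates, enlarge the primal stepsize tau and shrink the dual
    stepsize sigma, and vice versa, keeping the product tau*sigma fixed so
    a convergence condition of the form tau*sigma*L**2 <= 1 is preserved."""
    if primal_res > delta * dual_res:
        return tau * (1.0 + alpha), sigma / (1.0 + alpha)
    if dual_res > delta * primal_res:
        return tau / (1.0 + alpha), sigma * (1.0 + alpha)
    return tau, sigma
```

Holding the product tau*sigma constant is the key design choice: the iteration stays inside the stepsize regime where PDHG is known to converge while the balance between primal and dual progress is tuned automatically.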

Journal: CoRR 2018
Michal Rolinek Georg Martius

We propose a stepsize adaptation scheme for stochastic gradient descent. It operates directly with the loss function and rescales the gradient in order to make fixed predicted progress on the loss. We demonstrate its capabilities by strongly improving the performance of Adam and Momentum optimizers. The enhanced optimizers with default hyperparameters consistently outperform their constant step...
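
The fixed-predicted-progress idea can be sketched for plain gradient descent as follows; the progress fraction and function names are illustrative assumptions, not the paper's exact rule, and the momentum/Adam variants it enhances are omitted:

```python
import numpy as np

def predicted_progress_step(params, grad, loss, frac=0.15, eps=1e-12):
    """Sketch of stepsize-from-predicted-progress: choose eta so that the
    linearized decrease eta * ||g||^2 equals a fixed fraction of the current
    loss value, then take a plain gradient step.  `frac` is a hypothetical
    default; eps guards against a vanishing gradient norm."""
    eta = frac * loss / (float(np.dot(grad, grad)) + eps)
    return params - eta * grad, eta
```

Because eta is recomputed from the current loss and gradient at every step, the effective stepsize shrinks automatically as the loss approaches its minimum.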

[Chart: number of search results per year; click the chart to filter results by publication year]