Search results for: adaptive learning rate

Number of results: 1,694,493

2018
Jian Zhang, Ioannis Mitliagkas

Hyperparameter tuning is one of the most time-consuming steps in machine learning. Adaptive optimizers, like AdaGrad and Adam, reduce this labor by tuning an individual learning rate for each variable. Lately, researchers have shown interest in simpler methods like momentum SGD as they often yield better results. We ask: can simple adaptive methods based on SGD perform well? We show empirically...
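
To make the contrast concrete, here is a minimal Python sketch (illustrative, not the paper's method) of one AdaGrad-style update, which keeps a separate effective learning rate per parameter, next to one momentum SGD update, which uses a single global rate:

    import numpy as np

    def adagrad_step(w, grad, accum, lr=0.01, eps=1e-8):
        # AdaGrad-style: accumulate squared gradients so each coordinate
        # gets its own, gradually shrinking effective learning rate.
        accum = accum + grad ** 2
        w = w - lr * grad / (np.sqrt(accum) + eps)
        return w, accum

    def momentum_sgd_step(w, grad, velocity, lr=0.01, mu=0.9):
        # Momentum SGD: one global learning rate plus a velocity term.
        velocity = mu * velocity - lr * grad
        return w + velocity, velocity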

Journal: CoRR, 2015
Kumpati S. Narendra, Yu Wang

This is the first of a series of papers that the authors propose to write on the subject of improving the speed of response of learning systems using multiple models. During the past two decades, the second author has worked on numerous methods for improving the stability, robustness, and performance of adaptive systems using multiple models and the other authors have collaborated with him on s...

2013
Rajesh Ranganath, Chong Wang, David M. Blei, Eric P. Xing

Stochastic variational inference finds good posterior approximations of probabilistic models with very large data sets. It optimizes the variational objective with stochastic optimization, following noisy estimates of the natural gradient. Operationally, stochastic inference iteratively subsamples from the data, analyzes the subsample, and updates parameters with a decreasing learning rate. How...
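
The loop described here can be sketched in a few lines of Python. This is a hedged sketch, not the paper's algorithm: the noisy natural-gradient computation is model-specific and passed in as a callable, and the schedule rho_t = (t + tau)^(-kappa) is a standard Robbins-Monro choice of decreasing learning rate:

    import numpy as np

    def svi(data, lam0, nat_grad_estimate, n_steps=1000, tau=1.0, kappa=0.7, seed=0):
        # nat_grad_estimate(lam, x, n) must return the noisy estimate of
        # the natural-gradient target computed from one subsampled point x.
        rng = np.random.default_rng(seed)
        lam = lam0
        for t in range(1, n_steps + 1):
            x = data[rng.integers(len(data))]       # subsample the data
            lam_hat = nat_grad_estimate(lam, x, len(data))
            rho = (t + tau) ** (-kappa)             # decreasing learning rate
            lam = (1 - rho) * lam + rho * lam_hat   # noisy natural-gradient step
        return lam

For kappa in (0.5, 1], this schedule satisfies the usual Robbins-Monro conditions (the step sizes sum to infinity while their squares stay summable).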

2018
A. Battou, O. Baz, D. Mammass

Adaptive and interactive learning concepts have captured the interest of educational actors and partners, especially in higher education. However, implementing these concepts has faced many challenges, particularly in Interactive Adaptive Learning Systems (IALS). The present paper aims to lay the foundation of a framework for an IALS that pays extensive attention to each stage of t...

Journal: Journal of Forecasting, 2016

2004
Remy Allard, Jocelyn Faubert

In a previous study, a new adaptive method (AM) was developed to adjust the learning rate in artificial neural networks: the generalized no-decrease adaptive method (GNDAM). The GNDAM is fundamentally different from other traditional AMs. Instead of using the derivative sign of a given weight to adjust its learning rate, this AM is based on a trial and error heuristic where global learning rate...
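
The GNDAM itself is only partially described above, so no sketch of it is attempted here. For context, here is a minimal Python sketch of the traditional sign-based schemes it departs from (delta-bar-delta in spirit; the factors 1.2 and 0.5 are illustrative): each weight's learning rate grows when successive gradient components agree in sign and shrinks when they flip:

    import numpy as np

    def sign_based_step(w, grad, prev_grad, lr, up=1.2, down=0.5):
        # Per-weight rate adaptation driven by the derivative sign.
        agree = np.sign(grad) == np.sign(prev_grad)
        lr = np.where(agree, lr * up, lr * down)
        return w - lr * grad, lr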

1999
Vassilis P. Plagianakos, Michael N. Vrahatis, George D. Magoulas

In this paper, we present nonmonotone methods for feedforward neural network training, i.e. training methods in which error function values are allowed to increase at some iterations. More specifically, at each epoch we require the current error function value to satisfy an Armijo-type criterion with respect to the maximum error function value of the M previous epochs. A strategy to dynamica...
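
The acceptance test can be written down directly. A minimal Python sketch (names are illustrative, not the authors' code): a candidate step is accepted when the new error satisfies an Armijo-type sufficient-decrease condition measured against the maximum error of the M previous epochs, which is what allows the error to increase at some iterations:

    import numpy as np

    def nonmonotone_armijo_ok(err_new, err_history, grad, step, M=10, sigma=1e-4):
        # err_history: error values of previous epochs.
        # step: the actual weight change; for steepest descent it is
        # -lr * grad, so the correction term below is negative.
        reference = max(err_history[-M:])
        return err_new <= reference + sigma * np.dot(grad, step)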

1998
V. P. Plagianakos

A method improving the convergence rate of the backpropagation algorithm is proposed. This method adapts the learning rate using the Barzilai and Borwein [IMA J. Numer. Anal., 8, 141–148, 1988] steplength update for gradient descent methods. The determined learning rate is different for each epoch and depends on the weights and gradient values of the previous one. Experimental results show that ...
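
In the spirit of that description, here is a minimal Python sketch of gradient descent with the BB1 steplength (illustrative, not the paper's exact algorithm): the learning rate for each epoch is computed from the weight and gradient differences of the previous one:

    import numpy as np

    def bb_gradient_descent(grad_fn, w0, n_epochs=100, lr0=0.01, eps=1e-12):
        g_prev = grad_fn(w0)
        w_prev, w = w0, w0 - lr0 * g_prev           # first epoch: fixed rate
        for _ in range(n_epochs):
            g = grad_fn(w)
            s = w - w_prev                          # change in weights
            y = g - g_prev                          # change in gradients
            lr = np.dot(s, s) / (np.dot(s, y) + eps)  # BB1 steplength
            # Note: s.y can be negative on nonconvex problems; practical
            # codes clamp lr to a positive range.
            w_prev, g_prev = w, g
            w = w - lr * g
        return w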

[Chart: number of search results per year]
