Search results for: adaptive learning rate
Number of results: 1,694,493
Hyperparameter tuning is one of the most time-consuming steps in machine learning. Adaptive optimizers, like AdaGrad and Adam, reduce this labor by tuning an individual learning rate for each variable. Lately, researchers have shown interest in simpler methods like momentum SGD as they often yield better results. We ask: can simple adaptive methods based on SGD perform well? We show empirically...
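For reference, the per-variable tuning this abstract alludes to is exemplified by the core AdaGrad update, sketched below: each parameter accumulates its own squared-gradient history, so its effective step size shrinks independently. This illustrates the background idea only, not the method the abstract proposes, and the function name is ours.

```python
import numpy as np

def adagrad_step(w, grad, accum, base_lr=0.01, eps=1e-8):
    """One AdaGrad update. `accum` is the running sum of squared gradients,
    giving each coordinate of `w` its own, gradually shrinking learning rate."""
    accum = accum + grad ** 2
    w = w - base_lr * grad / (np.sqrt(accum) + eps)
    return w, accum
```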
This is the first in a series of papers that the authors propose to write on improving the speed of response of learning systems using multiple models. During the past two decades, the second author has worked on numerous methods for improving the stability, robustness, and performance of adaptive systems using multiple models, and the other authors have collaborated with him on s...
Stochastic variational inference finds good posterior approximations of probabilistic models with very large data sets. It optimizes the variational objective with stochastic optimization, following noisy estimates of the natural gradient. Operationally, stochastic inference iteratively subsamples from the data, analyzes the subsample, and updates parameters with a decreasing learning rate. How...
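The subsample–analyze–update loop described here can be sketched as follows, assuming a conditionally conjugate model in which the noisy natural-gradient step reduces to a convex combination of the current and subsample-estimated variational parameters. `estimate_from_batch` is a hypothetical model-specific routine, and the Robbins–Monro schedule rho_t = (t + tau)^(-kappa) is the customary decreasing learning rate, not necessarily the one used in this paper.

```python
import numpy as np

def svi(data, lam, estimate_from_batch, n_steps=1000,
        batch_size=32, tau=1.0, kappa=0.7, seed=0):
    """Sketch of stochastic variational inference with a decreasing step size."""
    rng = np.random.default_rng(seed)
    for t in range(1, n_steps + 1):
        rho = (t + tau) ** (-kappa)                  # decreasing learning rate
        batch = data[rng.choice(len(data), size=batch_size)]
        lam_hat = estimate_from_batch(lam, batch)    # noisy per-subsample target
        lam = (1.0 - rho) * lam + rho * lam_hat      # natural-gradient update
    return lam
```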
Adaptive and interactive learning concepts have captured the interest of educational actors and partners, especially in higher education. However, the implementation of these concepts has faced many challenges, particularly in Interactive Adaptive Learning Systems (IALS). The present paper aims to lay the foundation of a framework for an IALS that gives extensive attention at each stage of t...
In a previous study, a new adaptive method (AM) was developed to adjust the learning rate in artificial neural networks: the generalized no-decrease adaptive method (GNDAM). The GNDAM is fundamentally different from traditional AMs. Instead of using the derivative sign of a given weight to adjust its learning rate, this AM is based on a trial-and-error heuristic in which the global learning rate...
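The abstract is cut off before it states the heuristic, so the sketch below is only a generic trial-and-error adjustment of a single global learning rate, not the GNDAM itself: attempt a step, keep it and grow the rate if the error does not increase, otherwise revert and shrink the rate.

```python
def trial_and_error_step(w, error_fn, grad_fn, lr, grow=1.1, shrink=0.5):
    """Generic trial-and-error tuning of one global learning rate
    (illustrative only; not the GNDAM heuristic of the cited study)."""
    trial = w - lr * grad_fn(w)         # attempt an update at the current rate
    if error_fn(trial) <= error_fn(w):  # success: accept the step, grow the rate
        return trial, lr * grow
    return w, lr * shrink               # failure: reject the step, shrink the rate
```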
In this paper, we present nonmonotone methods for feedforward neural network training, i.e., training methods in which error function values are allowed to increase at some iterations. More specifically, at each epoch we require that the current error function value satisfy an Armijo-type criterion with respect to the maximum error function value of the M previous epochs. A strategy to dynamica...
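The acceptance test described here can be written compactly as below; the constant c and the squared-gradient term follow the usual nonmonotone (Grippo–Lampariello–Lucidi-style) line-search form, which we assume since the abstract does not give the exact inequality.

```python
from collections import deque

import numpy as np

def nonmonotone_accept(e_new, recent_errors, lr, grad, c=1e-4):
    """Armijo-type test against the MAXIMUM error of the last M epochs,
    so the error is allowed to increase at some iterations.
    `recent_errors` is a deque(maxlen=M) of previous epoch error values."""
    return e_new <= max(recent_errors) - c * lr * float(np.dot(grad, grad))
```

In a training loop, each epoch's error is appended to the deque when the test passes; when it fails, the learning rate is reduced and the step retried.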
A method for improving the convergence rate of the backpropagation algorithm is proposed. This method adapts the learning rate using the Barzilai and Borwein [IMA J. Numer. Anal., 8, 141–148, 1988] steplength update for gradient descent methods. The resulting learning rate differs from epoch to epoch and depends on the weight and gradient values of the previous epoch. Experimental results show that ...
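Concretely, the Barzilai–Borwein steplength uses the weight and gradient differences between consecutive epochs; a minimal version (our naming, with a small eps guard added against division by zero) is:

```python
import numpy as np

def bb_learning_rate(w, w_prev, g, g_prev, eps=1e-12):
    """Barzilai-Borwein steplength: eta = (s . s) / (s . y), where
    s = w - w_prev is the weight change and y = g - g_prev the gradient change."""
    s = w - w_prev
    y = g - g_prev
    return float(np.dot(s, s) / (np.dot(s, y) + eps))
```

Each epoch then updates with w_new = w - eta * g, recomputing eta from the latest pair of epochs.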