Search results for: adaptive learning rate

Number of results: 1,694,493

2008
Ludmila I. Kuncheva, Catrin O. Plumpton

We propose a strategy for updating the learning rate parameter of online linear classifiers for streaming data with concept drift. The change in the learning rate is guided by the change in a running estimate of the classification error. In addition, we propose an online version of the standard linear discriminant classifier (O-LDC) in which the inverse of the common covariance matrix is update...
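
A minimal sketch of the idea described in this abstract, under assumed details (the window size, the scaling factors, and the perceptron-style correction are illustrative choices, not the authors' O-LDC update rule): an online linear classifier whose learning rate is raised when a running estimate of the classification error increases, suggesting drift, and decayed when the error is steady or falling.

```python
from collections import deque

import numpy as np


class DriftAwareLinearClassifier:
    """Online linear classifier with an error-driven adaptive learning rate."""

    def __init__(self, n_features, lr=0.01, window=50):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr
        self.errors = deque(maxlen=window)   # recent 0/1 losses: running error estimate
        self.prev_error = 0.0

    def predict(self, x):
        return 1 if self.w @ x + self.b >= 0 else 0

    def partial_fit(self, x, y):
        y_hat = self.predict(x)
        self.errors.append(int(y_hat != y))
        error = sum(self.errors) / len(self.errors)

        # Adapt the learning rate from the *change* in the running error estimate.
        if error > self.prev_error:
            self.lr = min(self.lr * 1.1, 1.0)    # error rising: react faster (possible drift)
        else:
            self.lr = max(self.lr * 0.9, 1e-4)   # error steady or falling: settle down
        self.prev_error = error

        # Perceptron-style correction on mistakes.
        if y_hat != y:
            direction = 1.0 if y == 1 else -1.0
            self.w += self.lr * direction * np.asarray(x, dtype=float)
            self.b += self.lr * direction
```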

1997
Damon A. Miller, Jacek M. Zurada

Structural learning with forgetting is a prominent method for complexity regularization of multilayer feedforward neural networks. The level of regularization is controlled by a parameter known as the forgetting rate. The goal of this paper is to establish a dynamical system framework for the study of structural learning, both to offer new insights into this methodology and to potentially provide ...
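
For context, a minimal sketch of learning with forgetting as it is usually formulated: each gradient step adds a decay term whose strength is the forgetting rate, pushing weights that do not contribute to the fit toward zero. The toy regression problem and the constants are illustrative assumptions, not the paper's dynamical-system analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.5, -2.0, 0.0, 0.0, 0.0]) + 0.1 * rng.normal(size=200)

w = np.zeros(5)
lr = 0.01               # learning rate for the gradient step
forgetting_rate = 1e-3  # strength of the decay ("forgetting") term

for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)             # gradient of the mean squared error
    w -= lr * grad + forgetting_rate * np.sign(w)     # forgetting term decays each weight toward zero

print(np.round(w, 3))   # the three irrelevant weights end up essentially zero
```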

2010
Wolfram Schenck, Ralph Welsch, Alexander Kaiser, Ralf Möller

We propose a novel algorithm for adaptive learning rate control for Gaussian mixture models of the NGPCA type. The core idea is to introduce a unit-specific learning rate which is adjusted automatically depending on the match between the local principal component analysis of each unit (interpreted as a Gaussian distribution) and the empirical distribution within the unit's data partition. In cont...
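
A hedged sketch of the unit-specific learning-rate idea (not the NGPCA algorithm itself): each mixture unit keeps its own rate, adjusted from how well the unit's Gaussian model matches the empirical spread of the data currently assigned to it. The mismatch measure, the adjustment factors, and the dict-based unit representation are assumptions for illustration.

```python
import numpy as np


def adjust_unit_learning_rates(units, assignments, data, grow=1.05, shrink=0.95):
    """units: list of dicts with 'mean', 'cov', 'lr'; assignments: unit index per data row."""
    for k, unit in enumerate(units):
        points = data[assignments == k]
        if len(points) < 2:
            continue
        empirical_cov = np.cov(points, rowvar=False)
        mismatch = np.linalg.norm(empirical_cov - unit["cov"], ord="fro")
        # A unit whose fit is deteriorating learns faster; a stable or improving unit settles down.
        worsening = mismatch > unit.get("prev_mismatch", np.inf)
        unit["lr"] *= grow if worsening else shrink
        unit["prev_mismatch"] = mismatch


# Tiny usage example with two hypothetical units and random 2-D data.
rng = np.random.default_rng(0)
data = rng.normal(size=(100, 2))
assignments = (data[:, 0] > 0).astype(int)
units = [{"mean": np.zeros(2), "cov": np.eye(2), "lr": 0.1} for _ in range(2)]
adjust_unit_learning_rates(units, assignments, data)
print([round(u["lr"], 3) for u in units])
```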

Journal: Journal of AI and Data Mining, 2015
F. Tatari, M. B. Naghibi-Sistani

In this paper, the optimal adaptive leader-follower consensus of linear continuous-time multi-agent systems is considered. The error dynamics of each player depends on its neighbors' information. Detailed analysis of online optimal leader-follower consensus under known and unknown dynamics is presented. The introduced reinforcement learning-based algorithms learn online the approximate solution...

2001
Ira Cohen, Alexandre Bronstein, Fabio G. Cozman

The paper introduces Voting EM, an adaptive algorithm for online learning of Bayesian network parameters. Voting EM is an extension of the EM(η) algorithm suggested by [1]. We show convergence properties of Voting EM when a constant learning rate is used, and we use these properties to formulate an error-driven scheme for adapting the learning rate. The resultant algorithm converges with the...
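
An illustrative sketch, not the paper's Voting EM: online parameter estimation for a single binary node, where each observation nudges the probability table toward the observed state with step size equal to the learning rate, and the rate is adapted in an error-driven spirit (annealed when observations agree with the current estimate, grown when they disagree). The adjustment factors and bounds are assumptions.

```python
import numpy as np


def online_cpt_update(prob, observed_state, lr):
    """Move a probability vector a step of size lr toward the observed state."""
    target = np.zeros_like(prob)
    target[observed_state] = 1.0
    return (1.0 - lr) * prob + lr * target


theta = np.array([0.5, 0.5])   # current estimate of P(X=0), P(X=1) for one binary node
lr = 0.2
rng = np.random.default_rng(1)

for _ in range(200):
    x = int(rng.random() < 0.7)            # stream drawn from P(X=1) = 0.7
    theta = online_cpt_update(theta, x, lr)
    if x == int(np.argmax(theta)):         # observation agrees with the current estimate
        lr = max(lr * 0.97, 0.01)          # low "error": anneal the learning rate
    else:
        lr = min(lr * 1.05, 0.5)           # disagreement: stay responsive

print(np.round(theta, 2))                  # the estimate tracks the 0.3 / 0.7 stream
```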

Journal: International Journal of Health Policy and Management, 2015
Gemma Carey, Brad Crammond, Eleanor Malbon, Nic Carey

Inequalities in the social determinants of health (SDH), which drive avoidable health disparities between different individuals or groups, are a major concern for a number of international organisations, including the World Health Organization (WHO). Despite this, the pathways to changing inequalities in the SDH remain elusive. The methodologies and concepts within system science are now viewed ...

2006
Rui M. Castro, Robert D. Nowak

This paper analyzes the potential advantages and theoretical challenges of “active learning” algorithms. Active learning involves sequential, adaptive sampling procedures that use information gleaned from previous samples in order to focus the sampling and accelerate the learning process relative to “passive learning” algorithms, which are based on non-adaptive (usually random) samples. There a...
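
A toy illustration of the active-versus-passive contrast described here, assuming the simple problem of locating a threshold on [0, 1] from noiseless labels; the problem choice is an assumption for illustration, not the paper's analysis. Active sampling bisects, so each adaptively chosen query halves the remaining uncertainty, while passive sampling is limited by the spacing of its random draws.

```python
import numpy as np

rng = np.random.default_rng(42)
t_true = 0.618


def label(x):
    """Noiseless oracle: 1 if x lies at or above the unknown threshold."""
    return int(x >= t_true)


# Active: bisection -- each query is chosen from what previous labels revealed.
lo, hi = 0.0, 1.0
for _ in range(20):
    mid = (lo + hi) / 2
    if label(mid):
        hi = mid
    else:
        lo = mid
active_estimate = (lo + hi) / 2

# Passive: 20 non-adaptive random samples.
xs = np.sort(rng.random(20))
ys = np.array([label(x) for x in xs])
passive_estimate = xs[ys.argmax()] if ys.any() else 1.0

print(f"active error:  {abs(active_estimate - t_true):.6f}")
print(f"passive error: {abs(passive_estimate - t_true):.6f}")
```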

2000
Jacob Hurst, Larry Bull

The use and potential benefits of self-adaptive mutation operators are well-known within evolutionary computing. In this paper we begin by examining the use of self-adaptive mutation in Learning Classifier Systems. We implement the operator in the simple ZCS classifier and examine its performance in two maze environments. It is shown that, although no significant increase in performance is seen...
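
A brief sketch of self-adaptive mutation in a rule-based setting: each rule carries its own mutation rate, the rate is first perturbed log-normally (as in evolution strategies), and the perturbed rate then drives mutation of the rule's condition. The ternary condition encoding and the constants are illustrative assumptions, not the ZCS configuration used in the paper.

```python
import math
import random


def self_adaptive_mutate(condition, mu):
    """condition: string over {'0', '1', '#'}; mu: the rule's own mutation rate."""
    mu_new = mu * math.exp(random.gauss(0.0, 1.0))   # mutate the mutation rate itself
    mu_new = min(max(mu_new, 1e-4), 1.0)
    alleles = "01#"
    mutated = "".join(
        random.choice(alleles.replace(c, "")) if random.random() < mu_new else c
        for c in condition
    )
    return mutated, mu_new


random.seed(3)
print(self_adaptive_mutate("1#0#10", 0.05))   # (possibly mutated condition, new rate)
```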

2007
K. Bousson

Purpose – This paper is concerned with an online parameter estimation algorithm for nonlinear uncertain time-varying systems for which no stochastic information is available. Design/methodology/approach – The estimation procedure, called nonlinear learning rate adaptation (NLRA), computes an individual adaptive learning rate for each parameter instead of using a single adaptive learning rate fo...
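
A hedged sketch of the per-parameter idea, not the NLRA procedure itself: each estimated parameter keeps its own learning rate, grown while the sign of its error gradient stays stable and shrunk when it flips, in the spirit of delta-bar-delta/Rprop. The growth and shrink factors and the toy estimation problem are assumptions.

```python
import numpy as np


def per_parameter_step(theta, grad, rates, prev_grad, grow=1.2, shrink=0.5):
    """Sign-based update with an individual learning rate for each parameter."""
    same_sign = np.sign(grad) == np.sign(prev_grad)
    rates = np.clip(np.where(same_sign, rates * grow, rates * shrink), 1e-6, 1.0)
    return theta - rates * np.sign(grad), rates, grad


# Example: estimate the two parameters of y = a*x + b from noisy samples.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 2.0 * x - 0.5 + 0.01 * rng.normal(size=100)

theta = np.zeros(2)                 # [a, b]
rates = np.full(2, 0.1)             # one adaptive learning rate per parameter
prev_grad = np.zeros(2)
for _ in range(200):
    residual = theta[0] * x + theta[1] - y
    grad = np.array([(residual * x).mean(), residual.mean()])
    theta, rates, prev_grad = per_parameter_step(theta, grad, rates, prev_grad)

print(np.round(theta, 2))           # approaches [2.0, -0.5]
```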

1999
G. D. Magoulas

Batch training algorithms with a different learning rate for each weight are investigated. The adaptive learning rate algorithms of this class that apply inexact one-dimensional subminimization are analyzed and their global convergence is studied. Simulations are conducted to evaluate the convergence behavior of two training algorithms of this class and to compare them with several popular trai...
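
A minimal sketch, under assumed details, of batch training with a separate learning rate per weight chosen by an inexact one-dimensional subminimization: the step each weight takes along its negative gradient component is halved until the batch error decreases. The quadratic objective and the halving rule are illustrative choices, not the algorithms analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -3.0, 0.5])          # noiseless target for a clean demonstration


def batch_error(w):
    return float(np.mean((X @ w - y) ** 2))


w = np.zeros(3)
rates = np.full(3, 1.0)                     # one learning rate per weight

for _ in range(50):
    for i in range(3):                      # inexact one-dimensional search per weight
        grad_i = 2 * (X[:, i] @ (X @ w - y)) / len(y)
        base = batch_error(w)
        while rates[i] > 1e-8:
            trial = w.copy()
            trial[i] -= rates[i] * grad_i
            if batch_error(trial) < base:   # accept the first step that lowers the batch error
                w = trial
                break
            rates[i] *= 0.5                 # otherwise shrink this weight's rate

print(np.round(w, 3))                       # converges to [1.0, -3.0, 0.5]
```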

Chart of the number of search results per year
