Search results for: gradient descent

Number of results: 137,892

2007
Rohitash Chandra, Christian W. Omlin

We present a training approach for recurrent neural networks that combines evolutionary and gradient descent learning. We first train the weights of the network using genetic algorithms, then apply gradient descent learning to further refine the knowledge acquired by genetic training. We also use genetic neural learning and gradient descent learning for training on the same network t...
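To make this two-stage scheme concrete, here is a minimal sketch, assuming a generic differentiable loss (a simple quadratic stands in for the RNN training error) and a plain mutation-plus-truncation genetic algorithm; none of these choices are taken from the paper itself.

```python
# Sketch of the two-stage scheme: genetic search first, gradient refinement second.
# The quadratic loss, mutation-only GA, and all constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def loss(w):                       # stand-in for the RNN training error
    return np.sum((w - 3.0) ** 2)

def grad(w):                       # analytic gradient of the stand-in loss
    return 2.0 * (w - 3.0)

# --- Stage 1: genetic search over weight vectors ---
pop = rng.normal(size=(20, 5))                       # candidate weight vectors
for generation in range(50):
    scores = np.array([loss(w) for w in pop])        # lower loss = fitter
    parents = pop[np.argsort(scores)[:5]]            # truncation selection
    children = parents[rng.integers(0, 5, size=15)]  # clone parents
    children += rng.normal(scale=0.1, size=children.shape)  # Gaussian mutation
    pop = np.vstack([parents, children])

best = pop[np.argmin([loss(w) for w in pop])]

# --- Stage 2: gradient descent refinement of the best GA individual ---
w = best.copy()
for step in range(200):
    w -= 0.05 * grad(w)

print(loss(best), loss(w))         # the refined solution should have lower loss
```

The genetic stage provides a coarse, global search of the weight space; the gradient steps then perform the local refinement the abstract describes.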

2004
Will Smart, Mengjie Zhang

This paper describes an approach to using gradient descent search in genetic programming for continuously evolving genetic programs for object classification problems. An inclusion factor is introduced at each node of a genetic program, and gradient descent search is applied to these inclusion factors. Three new on-zero operators and two new continuous genetic operators are developed for evol...
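As a rough illustration of gradient descent on per-node inclusion factors, the sketch below assumes the factors act as multiplicative weights on node outputs; the fixed toy expression w0*x + w1*x*x and the squared-error objective are stand-ins for illustration, not the genetic-program representation used in the paper.

```python
# Gradient descent on per-node inclusion factors of a fixed toy "program":
# pred(x) = w0 * x  +  w1 * x * x, fitted by squared error. Both the expression
# and the objective are assumptions for illustration only.
import numpy as np

x = np.linspace(-1.0, 1.0, 50)
target = 0.5 * x + 2.0 * x * x            # toy target the program should reproduce

w = np.ones(2)                            # inclusion factors, one per node
for step in range(500):
    pred = w[0] * x + w[1] * x * x
    err = pred - target
    grad = np.array([np.mean(2 * err * x),        # d(mean squared error) / d w0
                     np.mean(2 * err * x * x)])   # d(mean squared error) / d w1
    w -= 0.1 * grad

print(w)                                  # factors drift toward [0.5, 2.0]
```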

1995
Kim W. C. Ku, M. W. Mak, W. C. Siu

Recurrent neural networks (RNNs), with their capability of dealing with spatio-temporal relationships, are more complex than feed-forward neural networks. Training of RNNs by gradient descent methods becomes more difficult. Therefore, another training method, which uses cellular genetic algorithms, is proposed. In this paper, the performance of training by a gradient descent method is compared with...

Journal: IEEE Transactions on Neural Networks and Learning Systems, 2020

Journal: CoRR, 2017
Matteo Pirotta, Marcello Restelli

In this paper, we propose a novel approach to automatically determining the batch size in stochastic gradient descent methods. The choice of the batch size induces a trade-off between the accuracy of the gradient estimate and the per-update cost in terms of samples. We propose to determine the batch size by optimizing the ratio between a lower bound to a linear or quadratic Taylor approximatio...
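A hedged sketch of choosing the batch size from the gradient-noise versus sample-cost trade-off described above. The variance-ratio test below is a common heuristic used purely for illustration; it is not the Taylor-approximation bound ratio the paper actually optimizes, and the least-squares problem is a stand-in.

```python
# Adaptive batch size for SGD via a gradient-noise test (illustrative heuristic,
# not the paper's criterion). Linear least squares is the stand-in problem.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=10_000)

def per_sample_grads(w, idx):
    err = X[idx] @ w - y[idx]
    return 2.0 * err[:, None] * X[idx]        # one gradient row per sample

w, batch = np.zeros(5), 32
for step in range(200):
    idx = rng.choice(len(X), size=batch, replace=False)
    g = per_sample_grads(w, idx)
    g_mean = g.mean(axis=0)
    # Grow the batch whenever the sampling noise dominates the gradient signal.
    noise = g.var(axis=0).sum() / batch
    if noise > 0.5 * np.dot(g_mean, g_mean):
        batch = min(2 * batch, len(X))
    w -= 0.01 * g_mean

print(batch, w)
```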

Farhad Sarani, Hadi Nosratipour

In [1] (Hybrid Conjugate Gradient Algorithm for Unconstrained Optimization, J. Optim. Theory Appl. 141 (2009) 249–264), an efficient hybrid conjugate gradient algorithm, the CCOMB algorithm, is proposed for solving unconstrained optimization problems. However, the proof of Theorem 2.1 in [1] is incorrect due to an erroneous inequality that was used to establish the descent property for the s...
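For reference, the descent property in question, written in standard conjugate-gradient notation; the generic constant c is illustrative, not the specific bound from [1].

```latex
% Descent property of a hybrid conjugate gradient direction, standard notation:
% d_k is the search direction, g_k = \nabla f(x_k), \beta_k the hybrid CG parameter.
% The generic constant c > 0 is illustrative, not the specific bound from [1].
\[
  d_{k+1} = -g_{k+1} + \beta_k\, d_k,
  \qquad
  g_{k+1}^{\top} d_{k+1} \le -c\,\lVert g_{k+1}\rVert^{2}, \quad c > 0.
\]
```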

2016
Kelvin Kai Wing Ng

1) There exists a study on employing the mini-batch approach with SVRG, one of the variance-reduction (VR) methods. It shows that the approach does not scale well: there is no significant difference between using 16 threads and more [2]. This study examines the cause of the poor scalability of this existing mini-batch approach for VR methods. 2) The performance of the mini-batch approach in a distributed setting is improved by ...
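To make the variance-reduced update under discussion concrete, here is a minimal single-threaded sketch of mini-batch SVRG on least squares; the multi-threaded and distributed behaviour studied above is deliberately not modelled, and the problem and constants are assumptions.

```python
# Minimal single-threaded mini-batch SVRG on least squares (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(2_000, 10))
y = X @ rng.normal(size=10)

def grad(w, idx):                             # mini-batch gradient of 0.5/n * ||Xw - y||^2
    return X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)

w, eta, batch = np.zeros(10), 0.05, 16
for epoch in range(30):
    w_snap = w.copy()
    mu = grad(w_snap, np.arange(len(X)))      # full gradient at the snapshot
    for _ in range(len(X) // batch):
        idx = rng.choice(len(X), size=batch, replace=False)
        # Variance-reduced direction: mini-batch gradient corrected by the snapshot.
        v = grad(w, idx) - grad(w_snap, idx) + mu
        w -= eta * v

print(np.linalg.norm(X @ w - y))              # residual shrinks toward zero
```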

2013
Philipp Hennig

Stochastic gradient descent remains popular in large-scale machine learning, on account of its very low computational cost and robustness to noise. However, gradient descent is only linearly efficient and not transformation invariant. Scaling by a local measure can substantially improve its performance. One natural choice of such a scale is the Hessian of the objective function: Were it availab...
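A small sketch of the Hessian-scaled step the abstract alludes to: on a quadratic objective with a known Hessian, preconditioning the gradient removes the dependence on the problem's scaling. The probabilistic, noise-tolerant Hessian estimation developed in the paper itself is not reproduced here.

```python
# Plain gradient descent vs. a Hessian-scaled (Newton-like) step on a badly
# scaled quadratic 0.5 * w^T A w. The exact-Hessian solve is the idealised case.
import numpy as np

A = np.array([[10.0, 0.0],
              [0.0,  1.0]])                       # curvature differs by 10x across axes
w_gd = np.array([1.0, 1.0])
w_newton = np.array([1.0, 1.0])

for step in range(20):
    w_gd -= 0.05 * (A @ w_gd)                     # plain step: slow along the flat axis
    w_newton -= np.linalg.solve(A, A @ w_newton)  # Hessian-scaled step: exact in one step

print(w_gd, w_newton)                             # the scaled iterate is already at the optimum
```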

2016
Qiang Liu

Although optimization can be done very efficiently these days using gradient-based methods, Bayesian inference or probabilistic sampling has been considered much more difficult. Stein variational gradient descent (SVGD) is a new particle-based inference method, derived using functional gradient descent to minimize the KL divergence without explicit parametric assumptions. SVGD can be...
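A compact sketch of the SVGD particle update for a known target density; the 1-D standard-normal target and the fixed-bandwidth RBF kernel are assumptions made to keep the example self-contained, not choices from the paper.

```python
# Compact SVGD update for a 1-D standard-normal target with a fixed-bandwidth
# RBF kernel; both choices are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-6.0, -3.0, size=(50, 1))     # particles start far from the target

def grad_log_p(x):                            # score function of N(0, 1)
    return -x

h = 0.5                                       # fixed RBF bandwidth
for step in range(300):
    diff = x[:, None, :] - x[None, :, :]              # pairwise differences, shape (n, n, 1)
    k = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * h)) # RBF kernel matrix, shape (n, n)
    grad_k = -k[:, :, None] * diff / h                # d k(x_i, x_j) / d x_i
    # Stein variational direction: kernel-weighted scores plus a repulsive term.
    phi = (k @ grad_log_p(x) + grad_k.sum(axis=0)) / len(x)
    x += 0.1 * phi

print(x.mean(), x.std())                      # should approach roughly 0 and 1
```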

[Chart: number of search results per year]