Search results for: stochastic gradient descent

Number of results: 258,150

2017
Shuxia Lu Zhao Jin

In order to improve the efficiency and classification ability of support vector machines (SVM) based on the stochastic gradient descent algorithm, three improved variants of stochastic gradient descent (SGD) are used to solve the support vector machine: Momentum, Nesterov accelerated gradient (NAG), and RMSprop. The experimental results show that the algorithm based on RMSprop for solving the l...
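For reference, a minimal sketch of the three update rules named in this abstract, applied to a linear SVM with hinge loss. The hyperparameter values and the `hinge_grad` helper are illustrative assumptions, not the paper's implementation:

```python
# Sketch of the three SGD variants (Momentum, NAG, RMSprop) for a linear SVM.
# Hyperparameters are illustrative assumptions, not from the paper.
import numpy as np

def hinge_grad(w, x, y, lam=0.01):
    """Subgradient of the L2-regularized hinge loss for one sample (x, y), y in {-1, +1}."""
    g = lam * w
    if y * np.dot(w, x) < 1:
        g = g - y * x
    return g

def sgd_momentum(w, v, x, y, lr=0.01, beta=0.9):
    """Classical momentum: accumulate a velocity, then step along it."""
    v = beta * v - lr * hinge_grad(w, x, y)
    return w + v, v

def sgd_nag(w, v, x, y, lr=0.01, beta=0.9):
    """Nesterov accelerated gradient: evaluate the gradient at the look-ahead point."""
    v = beta * v - lr * hinge_grad(w + beta * v, x, y)
    return w + v, v

def sgd_rmsprop(w, s, x, y, lr=0.01, rho=0.9, eps=1e-8):
    """RMSprop: scale the step by a running average of squared gradients."""
    g = hinge_grad(w, x, y)
    s = rho * s + (1 - rho) * g * g
    return w - lr * g / (np.sqrt(s) + eps), s
```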

2016

Lemmas 1, 2, 3 and 4, and Corollary 1, were originally derived by Toulis and Airoldi (2014). These intermediate results (and Theorem 1) provide the necessary foundation to derive Lemma 5 (given only in this supplement) and Theorem 2 on the asymptotic optimality of θ̄n, which is the key result of the main paper. We fully state these intermediate results here for convenience, but we point the reader to t...
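For context, asymptotic optimality of an averaged iterate θ̄n is usually meant in the Polyak–Ruppert sense. A sketch of that claim under textbook assumptions (not necessarily the supplement's exact conditions) reads:

```latex
% Sketch under standard regularity assumptions; \theta_i is the i-th SGD
% iterate, \theta_\star the true parameter, \mathcal{I} the Fisher information.
\[
  \bar{\theta}_n = \frac{1}{n}\sum_{i=1}^{n} \theta_i,
  \qquad
  \sqrt{n}\,\bigl(\bar{\theta}_n - \theta_\star\bigr)
  \xrightarrow{\,d\,}
  \mathcal{N}\!\bigl(0,\ \mathcal{I}(\theta_\star)^{-1}\bigr),
\]
% i.e., the averaged iterate attains the inverse Fisher information as its
% asymptotic variance, the Cramér–Rao benchmark for optimality.
```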

Journal: :Statistics and Computing 2021

Abstract Stochastic gradient descent is an optimisation method that combines classical gradient descent with random subsampling within the target functional. In this work, we introduce the stochastic gradient process as a continuous-time representation of stochastic gradient descent. The stochastic gradient process is a dynamical system coupled with a Markov process living on a finite state space. The dynamical system, a gradient flow, represents the gradient descent part; the Markov process on the finite state space represents the random subsampling. Processes of this type are, for instance, u...
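A toy illustration of this construction, assuming a quadratic loss per sample and an arbitrarily chosen jump intensity (none of the specifics are from the paper): the parameter follows the gradient flow of one subsampled loss, and a Markov jump process resamples which loss is active.

```python
# Gradient flow coupled with a Markov jump process on {0, ..., N-1}.
# Losses, rate, and step sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=8)             # N = 8 scalar "samples"

def grad_i(theta, i):
    """Gradient of the i-th subsampled loss f_i(theta) = 0.5*(theta - data[i])**2."""
    return theta - data[i]

theta, i, t = 5.0, 0, 0.0
dt, T, rate = 1e-3, 10.0, 5.0         # Euler step, horizon, jump intensity
t_next = rng.exponential(1.0 / rate)  # first jump time of the Markov process

while t < T:
    theta -= dt * grad_i(theta, i)    # follow the gradient flow for f_i
    t += dt
    if t >= t_next:                   # Markov jump: resample the active index
        i = rng.integers(len(data))
        t_next = t + rng.exponential(1.0 / rate)

print(theta, np.mean(data))           # theta hovers near the full-data minimizer
```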

2013
Huanhuan Xu

We propose an Adaptive Stochastic Conjugate Gradient (ASCG) optimization algorithm for temporal medical image registration. This method combines the advantages of the Conjugate Gradient (CG) method and the Adaptive Stochastic Gradient Descent (ASGD) method. The main idea is that the search direction of ASGD is replaced by a stochastic approximation of the conjugate gradient of the cost function. In addi...
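A minimal sketch of that idea, swapping the plain stochastic gradient direction for a conjugate direction; the Polak–Ribière coefficient here is an illustrative assumption, and the paper's exact ASCG rule may differ:

```python
# Conjugate-gradient-style direction built from stochastic gradient estimates.
import numpy as np

def ascg_step(theta, g, g_prev, d_prev, lr):
    """One update: d = -g + beta * d_prev, with a Polak-Ribiere beta (clipped at 0)."""
    beta = max(0.0, np.dot(g, g - g_prev) / (np.dot(g_prev, g_prev) + 1e-12))
    d = -g + beta * d_prev
    return theta + lr * d, d
```

In ASGD the step size `lr` would itself be adapted online; it is kept fixed here for brevity.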

Journal: :International Journal of Networked and Distributed Computing 2021

Journal: :Proceedings of the AAAI Conference on Artificial Intelligence 2020

Journal: :Proceedings of the AAAI Conference on Artificial Intelligence 2019

Journal: :Applied Soft Computing 2021

The stochastic gradient descent (SGD) algorithm and its variations have been effectively used to optimize neural network models. However, with the rapid growth of big data and deep learning, SGD is no longer the most suitable choice due to its natural behavior of sequential optimization of the error function. This has led to the development of parallel SGD algorithms, such as asynchronous SGD (ASGD) and synchronous SGD (SSGD), to train networks. It int...
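As an illustration of the synchronous variant, a minimal sketch (not from the paper) in which each of four hypothetical workers holds a data shard and gradients are averaged at a barrier each round; ASGD would instead apply each worker's gradient as soon as it arrives, possibly computed on stale weights.

```python
# Synchronous parallel SGD (SSGD) over sharded data; shards and loss are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
shards = [rng.normal(loc=2.0, size=100) for _ in range(4)]  # one shard per worker

def worker_grad(theta, shard):
    """Gradient of the mean squared loss 0.5*(theta - x)**2 over one shard."""
    return np.mean(theta - shard)

theta, lr = 0.0, 0.1
for _ in range(200):                      # one synchronization round per step
    grads = [worker_grad(theta, s) for s in shards]
    theta -= lr * np.mean(grads)          # barrier: average, then update

print(theta)                              # ends near the global mean (about 2.0)
```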

Journal: :Proceedings of the ... AAAI Conference on Artificial Intelligence 2021

The stability and generalization of stochastic gradient-based methods provide valuable insights into understanding the algorithmic performance of machine learning models. As the main workhorse for deep learning, stochastic gradient descent has received a considerable amount of study. Nevertheless, the community has paid little attention to its decentralized variants. In this paper, we provide a novel formulation of decentralized stochastic gradient descent. Leveragin...
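A minimal sketch of decentralized SGD on a ring of five nodes, assuming a gossip-style doubly stochastic mixing matrix; the formulation in the paper may differ. Each node takes a local gradient step and then averages its parameter with its neighbours.

```python
# Decentralized SGD: local gradient step followed by gossip averaging with a
# mixing matrix W (ring topology). All specifics are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_nodes = 5
targets = rng.normal(size=n_nodes)        # each node's local quadratic optimum
theta = np.zeros(n_nodes)                 # one scalar parameter per node

# Ring topology: average self with left/right neighbours (doubly stochastic W).
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = W[i, (i - 1) % n_nodes] = W[i, (i + 1) % n_nodes] = 1.0 / 3.0

lr = 0.1
for _ in range(300):
    grads = theta - targets               # local gradients of 0.5*(theta_i - t_i)**2
    theta = W @ (theta - lr * grads)      # local step, then gossip averaging

print(theta, np.mean(targets))            # nodes cluster near the average optimum
```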
