Search results for: stochastic gradient descent

Number of results: 258150

Journal: IEEE Transactions on Automatic Control, 2022

Recent years have seen increased interest in performance guarantees of gradient descent algorithms for nonconvex optimization. A number of works have uncovered that noise plays a critical role in the ability of the recursions to efficiently escape saddle points and reach second-order stationary points. Most available works limit the noise component to be bounded with probability one or sub-Gaussian and leverage concentration inequalit...
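
As a rough illustration of the role such noise plays, the toy sketch below (illustrative objective, step sizes, and noise level, unrelated to the paper's analysis) runs gradient descent with and without Gaussian perturbations from a strict saddle point; only the noisy run escapes toward a second-order stationary point.

    # Toy sketch (illustrative objective and step sizes; not the paper's setting):
    # Gaussian gradient noise lets gradient descent escape a strict saddle point.
    import numpy as np

    def grad(w):
        # f(x, y) = x**4/4 - x**2/2 + y**2/2 has a saddle at (0, 0)
        # and second-order stationary points (local minima) at (+-1, 0).
        x, y = w
        return np.array([x**3 - x, y])

    def perturbed_gd(w0, lr=0.05, noise_std=1e-2, steps=2000, seed=0):
        rng = np.random.default_rng(seed)
        w = np.array(w0, dtype=float)
        for _ in range(steps):
            g = grad(w) + noise_std * rng.standard_normal(2)  # noisy gradient
            w -= lr * g
        return w

    print(perturbed_gd([0.0, 0.0], noise_std=0.0))   # noiseless run stays at the saddle
    print(perturbed_gd([0.0, 0.0], noise_std=1e-2))  # noisy run escapes toward (+-1, 0)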

Journal: Journal of Mathematical Physics, 2021

We study the convergence to equilibrium of an underdamped Langevin equation that is controlled by a linear feedback force. Specifically, we are interested in sampling the possibly multimodal invariant probability distribution of the system at small noise (or low temperature), for which the dynamics can easily get trapped inside metastable subsets of phase space. We follow Chen et al. [J. Math. Phys. 56, 113302 (201...
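
For orientation only, the sketch below is a minimal Euler-Maruyama simulation of uncontrolled underdamped Langevin dynamics in an assumed double-well potential; the paper's feedback control is not reproduced, and all parameters are assumptions. It illustrates the metastable trapping described above at small noise.

    # Minimal sketch (assumed potential and parameters; the paper's linear feedback
    # control is not reproduced): Euler-Maruyama discretization of underdamped
    # Langevin dynamics in the double-well potential U(q) = (q**2 - 1)**2.
    import numpy as np

    def sample_underdamped(n_steps=500_000, dt=2e-3, gamma=1.0, beta=5.0, seed=0):
        rng = np.random.default_rng(seed)
        q, p = 0.0, 0.0
        samples = np.empty(n_steps)
        noise_scale = np.sqrt(2.0 * gamma / beta * dt)
        for i in range(n_steps):
            grad_U = 4.0 * q * (q**2 - 1.0)   # dU/dq
            p += (-grad_U - gamma * p) * dt + noise_scale * rng.standard_normal()
            q += p * dt
            samples[i] = q
        return samples

    qs = sample_underdamped()
    # At small noise (large beta) the trajectory stays trapped near one well for long
    # stretches; the two fractions below show how unevenly the modes are visited.
    print(np.mean(qs > 0), np.mean(qs < 0))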

2017
Cong Fang, Zhouchen Lin

Nowadays, asynchronous parallel algorithms have received much attention in the optimization field due to the crucial demands of modern large-scale optimization problems. However, most asynchronous algorithms focus on convex problems; analysis of nonconvex problems is lacking. For the Asynchronous Stochastic Gradient Descent (ASGD) algorithm, the best result from (Lian et al., 2015) can only achieve an ...
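
A minimal sketch of the ASGD setting, simulated serially by applying gradients computed from stale parameter snapshots (the delay model, problem, and step size are assumptions, not those of Lian et al.):

    # Minimal sketch (assumed delay model, problem, and step size): asynchronous SGD
    # simulated serially by applying gradients computed from stale parameter copies.
    import numpy as np
    from collections import deque

    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 10))
    x_true = rng.standard_normal(10)
    b = A @ x_true

    x = np.zeros(10)
    lr, max_delay = 0.01, 4
    pending = deque()                            # (stale parameter copy, minibatch indices)

    for t in range(3000):
        idx = rng.integers(0, len(A), size=8)
        pending.append((x.copy(), idx))          # a worker grabs the current parameters
        if len(pending) > rng.integers(1, max_delay + 1):
            x_old, j = pending.popleft()         # its gradient arrives a few updates later
            g = A[j].T @ (A[j] @ x_old - b[j]) / len(j)
            x -= lr * g                          # applied to the *current* parameters

    print(np.linalg.norm(x - x_true))            # small despite the stale gradients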

2007
Yong-Hyun Cho, Seong-Jun Hong

This paper proposes global learning of neural networks by a hybrid optimization algorithm. The hybrid algorithm combines a stochastic approximation with a gradient descent. The stochastic approximation is first applied to estimate an approximation point inclined toward the global solution, escaping from a local minimum, and then the backpropagation (BP) algorithm is applied for high-speed convergence as ...
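
A minimal sketch of such a two-phase hybrid on a one-dimensional multimodal function (the test function, perturbation scale, and step counts are assumptions; the paper's stochastic-approximation scheme and BP network are not reproduced):

    # Minimal sketch (assumed test function, perturbation scale, and step counts):
    # a stochastic global phase followed by deterministic gradient descent.
    import numpy as np

    def f(x):
        return x**4 - 3.0 * x**2 + x     # two local minima; the global one is near x = -1.30

    def df(x):
        return 4.0 * x**3 - 6.0 * x + 1.0

    def hybrid(x0, seed=0):
        rng = np.random.default_rng(seed)
        # Phase 1: stochastic search to escape the basin of a poor local minimum.
        x_best = x0
        for _ in range(200):
            cand = x_best + rng.normal(scale=1.0)
            if f(cand) < f(x_best):
                x_best = cand
        # Phase 2: plain gradient descent for fast local convergence.
        x = x_best
        for _ in range(500):
            x -= 0.01 * df(x)
        return x

    print(hybrid(1.2))   # starts near the poor local minimum, ends near x = -1.30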

Journal: CoRR, 2016
Xi-Lin Li

Recurrent neural networks (RNN), especially ones requiring extremely long-term memories, are difficult to train. Hence, they provide an ideal testbed for benchmarking the performance of optimization algorithms. This paper reports test results of a recently proposed preconditioned stochastic gradient descent (PSGD) algorithm on RNN training. We find that PSGD may outperform Hessian-free o...
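
For context, the sketch below shows the general idea of preconditioning a stochastic gradient, here with a simple diagonal estimate built from running squared gradients on an ill-conditioned quadratic; this is an assumed stand-in, not the PSGD preconditioner proposed in the paper.

    # Minimal sketch (diagonal preconditioner from running squared gradients on an
    # ill-conditioned quadratic; an assumed stand-in, not the paper's PSGD update).
    import numpy as np

    rng = np.random.default_rng(0)
    scales = np.array([1.0, 100.0])          # curvatures differing by a factor of 100
    w = np.array([5.0, 5.0])
    v = np.zeros(2)                          # running average of squared gradients

    for t in range(500):
        g = scales * w + 0.01 * rng.standard_normal(2)   # noisy gradient of 0.5*sum(scales*w**2)
        v = 0.9 * v + 0.1 * g**2
        w -= 0.1 * g / (np.sqrt(v) + 1e-8)   # preconditioned step
    print(w)                                 # both coordinates shrink at a similar rate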

Journal: Applied Optics, 2001
T. Weyrauch, M. A. Vorontsov, T. G. Bifano, J. A. Hammer, M. Cohen, G. Cauwenberghs

The performance of adaptive systems that consist of microscale on-chip elements [microelectromechanical mirror (mu-mirror) arrays and a VLSI stochastic gradient descent microelectronic control system] is analyzed. The mu-mirror arrays with 5 x 5 and 6 x 6 actuators were driven with a control system composed of two mixed-mode VLSI chips implementing model-free beam-quality metric optimization by...
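
A software-only sketch of model-free metric optimization in the stochastic parallel perturbation style (the quadratic "beam quality" metric, gain, perturbation size, and actuator count are assumptions; no hardware behavior is modeled):

    # Software-only sketch (assumed metric, gains, and actuator count): model-free
    # optimization that perturbs all control channels at once and uses only metric values.
    import numpy as np

    rng = np.random.default_rng(0)
    n_act = 36                               # e.g. a 6 x 6 actuator array
    u_opt = rng.uniform(-1, 1, n_act)        # unknown control voltages that maximize the metric

    def metric(u):
        return -np.sum((u - u_opt)**2)       # higher is better, peaks at u_opt

    u = np.zeros(n_act)
    delta, gain = 0.02, 2.0
    for _ in range(5000):
        du = delta * rng.choice([-1.0, 1.0], n_act)   # perturb all channels at once
        dJ = metric(u + du) - metric(u - du)          # two-sided metric measurement
        u += gain * dJ * du                           # gradient-free ascent step
    print(np.max(np.abs(u - u_opt)))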

Journal: CoRR, 2016
Zhouyuan Huo, Bin Gu, Heng Huang

In the era of big data, optimizing large-scale machine learning problems has become a challenging task and draws significant attention. Asynchronous optimization algorithms have emerged as a promising solution. Recently, decoupled asynchronous proximal stochastic gradient descent (DAP-SGD) was proposed to minimize a composite function. It is claimed to be able to offload the computation bottleneck from...
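
For reference, a generic serial proximal stochastic gradient step for a composite objective of the form smooth loss plus an l1 penalty looks roughly like the sketch below (data, step size, and regularization weight are assumptions; the decoupled asynchronous machinery of DAP-SGD is not shown):

    # Generic sketch (assumed data, step size, and regularization weight): serial
    # proximal SGD for the composite objective 0.5*||A w - b||^2 / n + lam*||w||_1.
    import numpy as np

    def soft_threshold(w, t):
        return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

    rng = np.random.default_rng(0)
    A = rng.standard_normal((500, 50))
    w_true = np.zeros(50)
    w_true[:5] = 2.0                               # sparse ground truth
    b = A @ w_true + 0.01 * rng.standard_normal(500)

    w, lr, lam = np.zeros(50), 0.005, 0.1
    for t in range(5000):
        j = rng.integers(0, 500, size=16)
        g = A[j].T @ (A[j] @ w - b[j]) / len(j)    # stochastic gradient of the smooth part
        w = soft_threshold(w - lr * g, lr * lam)   # proximal step for the l1 term
    print(np.round(w[:8], 2))                      # first five entries near 2, the rest near 0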

2012

I compare two common techniques to compute matrix factorizations for recommender systems, specifically using the Netflix prize data set. Accuracy, run-time, and scalability are discussed for stochastic gradient descent and non-linear conjugate gradient.
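
A minimal sketch of the stochastic gradient descent side of this comparison on synthetic ratings (toy data and assumed hyperparameters, not the Netflix setup): each observed rating triggers one update of the corresponding user and item factors.

    # Minimal sketch (toy synthetic ratings and assumed hyperparameters; not the
    # Netflix pipeline): SGD on a low-rank factorization R ~ P @ Q.T, updating one
    # observed rating at a time.
    import numpy as np

    rng = np.random.default_rng(0)
    n_users, n_items, k = 100, 80, 5
    P_true = rng.standard_normal((n_users, k))
    Q_true = rng.standard_normal((n_items, k))
    obs = [(u, i, P_true[u] @ Q_true[i]) for u in range(n_users)
           for i in range(n_items) if rng.random() < 0.2]   # ~20% of entries observed

    P = 0.1 * rng.standard_normal((n_users, k))
    Q = 0.1 * rng.standard_normal((n_items, k))
    lr, reg = 0.01, 0.01
    for epoch in range(50):
        for idx in rng.permutation(len(obs)):
            u, i, r = obs[idx]
            e = r - P[u] @ Q[i]                  # error on one observed rating
            P[u], Q[i] = (P[u] + lr * (e * Q[i] - reg * P[u]),
                          Q[i] + lr * (e * P[u] - reg * Q[i]))
    rmse = np.sqrt(np.mean([(r - P[u] @ Q[i])**2 for u, i, r in obs]))
    print(rmse)                                  # training RMSE on observed entries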

Chart: number of search results per year (click the chart to filter results by publication year)