Search results for: stochastic gradient descent

Number of results: 258150

Journal: Advances in Applied Probability 2021

Abstract We consider stochastic differential equations of the form $dX_t = |f(X_t)|/t^{\gamma}\,dt + 1/t^{\gamma}\,dB_t$, where $f(x)$ behaves comparably to $|x|^{k}$ in a neighborhood of the origin, for $k\in [1,\infty)$. We show that there exists a threshold value $\tilde{\gamma}$ of $\gamma$, depending on $k$, such that if $\gamma \in (1/2, \tilde{\gamma})$ then $\mathbb{P}(X_t\rightarrow 0$...
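
For readers who want to experiment with dynamics of this kind, below is a minimal Euler-Maruyama sketch of the SDE. The choice $f(x) = |x|^{k}$ with $k = 1$, and the values of gamma, the time horizon, and the near-zero check are illustrative assumptions, not settings taken from the paper.

```python
import numpy as np

def simulate_sde(gamma=0.75, k=1.0, t0=1.0, T=100.0, dt=1e-3, x0=0.05, seed=0):
    """Euler-Maruyama simulation of dX_t = |f(X_t)|/t^gamma dt + 1/t^gamma dB_t,
    with the illustrative choice f(x) = |x|^k (an assumption, not the paper's exact f)."""
    rng = np.random.default_rng(seed)
    n_steps = int((T - t0) / dt)
    x, t = x0, t0
    for _ in range(n_steps):
        drift = abs(x) ** k / t ** gamma                     # |f(X_t)| / t^gamma
        noise = rng.normal(0.0, np.sqrt(dt)) / t ** gamma    # (1/t^gamma) dB_t
        x += drift * dt + noise
        t += dt
    return x

# Crude look at whether trajectories end up near the origin for this (gamma, k):
finals = [simulate_sde(seed=s) for s in range(10)]
print("fraction of paths with |X_T| < 1e-2:", np.mean([abs(x) < 1e-2 for x in finals]))
```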

Journal: Journal of the American Statistical Association 2021

The stochastic gradient descent (SGD) algorithm is widely used for parameter estimation, especially with huge datasets and in online learning. While this recursive algorithm is popular for its computational and memory efficiency, quantifying the variability and randomness of its solutions has rarely been studied. This article aims at conducting statistical inference for SGD-based estimates in an online setting. In particular, we propose a fully es...
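
For context, the estimate whose variability is studied in this literature is typically the (averaged) SGD iterate itself. The sketch below shows a generic Polyak-Ruppert averaged SGD estimator for linear regression with a decaying step size; it is a plausible setup for this kind of inference problem, not the article's proposed procedure, and all parameter values are illustrative.

```python
import numpy as np

def averaged_sgd_linear_regression(X, y, lr0=0.5, alpha=0.501, seed=0):
    """Polyak-Ruppert averaged SGD for least squares: a generic baseline often used
    when studying inference for SGD estimates (not the article's specific method)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    theta = np.zeros(d)
    theta_bar = np.zeros(d)
    for t, i in enumerate(rng.permutation(n), start=1):
        grad = (X[i] @ theta - y[i]) * X[i]       # per-sample gradient of 0.5*(x'theta - y)^2
        theta -= lr0 * t ** (-alpha) * grad       # decaying step size lr0 / t^alpha
        theta_bar += (theta - theta_bar) / t      # running average of the iterates
    return theta_bar

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=5000)
print(averaged_sgd_linear_regression(X, y))
```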

Journal: Journal of Computational Mathematics and Data Science 2022

Stochastic gradient descent (SGD) is widely used in deep learning due to its computational efficiency, but a complete understanding of why SGD performs so well remains a major challenge. It has been observed empirically that most eigenvalues of the Hessian of the loss functions on the landscape of over-parametrized deep neural networks are close to zero, while only a small number are large. Zero eigenvalues indicate zero diffusion along c...
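
To see the flat-spectrum phenomenon numerically, the sketch below evaluates the Hessian eigenvalue spectrum of the squared loss of a tiny over-parametrized one-hidden-layer network at a random parameter point. The architecture, data, and finite-difference Hessian are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def loss(params, X, y, hidden=16):
    """Mean-squared loss of a tiny one-hidden-layer tanh network (illustrative toy model)."""
    d = X.shape[1]
    W1 = params[: d * hidden].reshape(d, hidden)
    w2 = params[d * hidden :]
    pred = np.tanh(X @ W1) @ w2
    return 0.5 * np.mean((pred - y) ** 2)

def numerical_hessian(f, p, eps=1e-4):
    """Central-difference Hessian; adequate for a few dozen parameters."""
    n = p.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            pp = p.copy(); pp[i] += eps; pp[j] += eps
            pm = p.copy(); pm[i] += eps; pm[j] -= eps
            mp = p.copy(); mp[i] -= eps; mp[j] += eps
            mm = p.copy(); mm[i] -= eps; mm[j] -= eps
            H[i, j] = H[j, i] = (f(pp) - f(pm) - f(mp) + f(mm)) / (4 * eps ** 2)
    return H

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = np.sin(X[:, 0])
p = rng.normal(scale=0.3, size=2 * 16 + 16)   # 48 parameters vs. 20 data points
H = numerical_hessian(lambda q: loss(q, X, y), p)
eigs = np.linalg.eigvalsh(H)
print("fraction of |eigenvalue| < 1e-3:", np.mean(np.abs(eigs) < 1e-3))
print("largest eigenvalues:", np.sort(eigs)[-5:])
```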

Journal: IEEE Transactions on Neural Networks and Learning Systems 2020

Journal: Mathematics 2022

Linear regression is the use of linear functions to model the relationship between a dependent variable and one or more independent variables. Linear regression models have been widely used in various fields such as finance, industry, and medicine. To address the problem that traditional linear models have difficulty handling uncertain data, we propose a granule-based elastic network model. First, we construct granules and granular vectors by granulation ...
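
For reference, the classical elastic net penalty that such models build on combines L1 and L2 regularization of the coefficients. The sketch below solves the standard elastic net by proximal gradient descent; it does not reproduce the granule-based construction described in the abstract, and all parameter values are illustrative.

```python
import numpy as np

def elastic_net(X, y, lam=0.1, l1_ratio=0.5, n_iter=2000):
    """Proximal gradient (ISTA) for the standard elastic net:
    minimize 0.5/n * ||y - Xw||^2 + lam*(l1_ratio*||w||_1 + 0.5*(1 - l1_ratio)*||w||_2^2).
    Only the classical penalty is sketched; the paper's granule-based variant is not reproduced."""
    n, d = X.shape
    lr = 1.0 / (np.linalg.norm(X, 2) ** 2 / n + lam * (1 - l1_ratio))   # step <= 1/L
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n + lam * (1 - l1_ratio) * w         # gradient of smooth part
        z = w - lr * grad
        w = np.sign(z) * np.maximum(np.abs(z) - lr * lam * l1_ratio, 0.0)  # L1 prox (soft-threshold)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
w_true = np.array([2.0, -1.0] + [0.0] * 8)
y = X @ w_true + rng.normal(scale=0.1, size=200)
print(np.round(elastic_net(X, y), 2))
```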

2018
Filip Hanzely, Peter Richtárik

Relative smoothness, a notion introduced in [6] and recently rediscovered in [3, 18], generalizes the standard notion of smoothness typically used in the analysis of gradient-type methods. In this work we take ideas from the well-studied field of stochastic convex optimization and use them to obtain faster algorithms for minimizing relatively smooth functions. We propose and analyze ...
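
As a concrete reminder of the relatively smooth setting, the sketch below implements a Bregman (mirror-descent style) gradient step with the negative-entropy reference function on the probability simplex and applies it to a toy least-squares problem. It illustrates only the deterministic baseline for relatively smooth minimization, not the stochastic algorithms proposed in the work; the toy objective and constant are assumptions.

```python
import numpy as np

def bregman_gradient_step(x, grad, L):
    """One Bregman proximal step for a function that is L-smooth relative to the
    negative entropy h(x) = sum_i x_i log x_i on the simplex. The update
    argmin_u <grad, u> + L * D_h(u, x) has the closed form of an exponentiated-gradient
    step; this is a generic illustration, not the paper's specific method."""
    w = x * np.exp(-grad / L)
    return w / w.sum()

# Toy use: minimize f(x) = 0.5 * ||Ax - b||^2 over the probability simplex.
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 5))
b = rng.normal(size=30)
x = np.full(5, 1.0 / 5)
L = np.linalg.norm(A, 2) ** 2     # crude relative-smoothness constant for this toy f
for _ in range(500):
    x = bregman_gradient_step(x, A.T @ (A @ x - b), L)
print(x, 0.5 * np.sum((A @ x - b) ** 2))
```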

Journal: CoRR 2016
Jie Liu, Martin Takáč

We propose a projected semi-stochastic gradient descent method with mini-batches for improving both the theoretical complexity and practical performance of the general stochastic gradient descent method (SGD). We are able to prove linear convergence under a weak strong convexity assumption. This requires no strong convexity assumption for minimizing the sum of smooth convex functions subject to a c...
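
The sketch below gives a rough, generic picture of a projected, mini-batch, variance-reduced (SVRG-style) method for a least-squares objective with an L2-ball constraint. The constraint set, step size, and loop sizes are illustrative assumptions; this is not the paper's exact scheme or analysis.

```python
import numpy as np

def project_l2_ball(w, radius=1.0):
    """Euclidean projection onto {w : ||w|| <= radius} (an example convex constraint)."""
    nrm = np.linalg.norm(w)
    return w if nrm <= radius else w * (radius / nrm)

def projected_semi_stochastic_gd(X, y, lr=0.1, epochs=20, inner=200, batch=8, seed=0):
    """Projected semi-stochastic (SVRG-style) gradient sketch for least squares with
    mini-batch variance-reduced steps; an illustration of the idea, not the paper's method."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        w_ref = w.copy()
        full_grad = X.T @ (X @ w_ref - y) / n                         # full gradient at reference
        for _ in range(inner):
            idx = rng.choice(n, size=batch, replace=False)
            g_w = X[idx].T @ (X[idx] @ w - y[idx]) / batch            # mini-batch gradient at w
            g_ref = X[idx].T @ (X[idx] @ w_ref - y[idx]) / batch      # same batch at reference
            w = project_l2_ball(w - lr * (g_w - g_ref + full_grad))   # variance-reduced step + projection
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
y = X @ np.array([0.3, -0.2, 0.1, 0.0, 0.4]) + rng.normal(scale=0.05, size=500)
print(projected_semi_stochastic_gd(X, y))
```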

2013
Santitham Prom-on, Peter Birkholz, Yi Xu

This paper reports preliminary results of our effort to address the acoustic-to-articulatory inversion problem. We tested an approach that simulates speech production acquisition as a distal learning task, with acoustic signals of natural utterances in the form of MFCC as input, VocalTractLab — a 3D articulatory synthesizer controlled by target approximation models as the learner, and stochasti...
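
To make the distal-learning setup concrete, the sketch below pairs a hypothetical stand-in synthesizer (a placeholder function, not VocalTractLab's real API) with stochastic finite-difference gradient descent that nudges control parameters so that the synthesized features approach a target. Every name and value here is an illustrative assumption.

```python
import numpy as np

def synthesize_features(params):
    """Hypothetical stand-in for an articulatory synthesizer mapping control parameters
    to acoustic feature frames; VocalTractLab's actual interface is not shown here."""
    t = np.linspace(0, 1, 50)
    return np.sin(np.outer(t, params).cumsum(axis=1))   # arbitrary smooth nonlinear map

def distal_learning(target, dim=4, lr=0.05, steps=500, eps=1e-3, seed=0):
    """Distal learning by stochastic finite-difference gradient descent: adjust synthesizer
    parameters so the synthesized features approach the target (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    params = rng.normal(scale=0.1, size=dim)
    mse = lambda p: np.mean((synthesize_features(p) - target) ** 2)
    for _ in range(steps):
        direction = rng.normal(size=dim)                 # random perturbation direction
        direction /= np.linalg.norm(direction)
        g = (mse(params + eps * direction) - mse(params - eps * direction)) / (2 * eps)
        params -= lr * g * direction                     # stochastic step along sampled direction
    return params

target = synthesize_features(np.array([0.5, -0.3, 0.8, 0.1]))
print(distal_learning(target))
```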
