Search results for: stochastic averaging
Number of results: 146,740
We consider regularized stochastic learning and online optimization problems, where the objective function is the sum of two convex terms: one is the loss function of the learning task, and the other is a simple regularization term such as the ℓ1-norm for promoting sparsity. We develop extensions of Nesterov's dual averaging method that can exploit the regularization structure in an online setting...
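The dual-averaging extension described in this abstract is commonly known as regularized dual averaging (RDA). A minimal sketch of the ℓ1-regularized case follows; with the Euclidean prox function h(x) = ½‖x‖² and β_t = γ√t (illustrative choices, not necessarily the paper's), the update has a closed form that soft-thresholds the running average of subgradients, which is why the method produces exactly sparse iterates online.

```python
import numpy as np

def rda_l1(grad_fn, d, T, lam=0.1, gamma=1.0):
    """Sketch of regularized dual averaging (RDA) with an l1 regularizer.

    grad_fn(x, t) returns a (possibly stochastic) subgradient of the
    loss at x. The step x_{t+1} = argmin { <gbar_t, x> + lam*||x||_1
    + (beta_t/t) * 0.5*||x||^2 } reduces to soft-thresholding the
    averaged gradient and rescaling, as implemented below.
    """
    x = np.zeros(d)
    gbar = np.zeros(d)          # running average of subgradients
    for t in range(1, T + 1):
        g = grad_fn(x, t)
        gbar += (g - gbar) / t  # incremental average
        beta = gamma * np.sqrt(t)
        # soft-threshold the averaged gradient, then rescale
        shrunk = np.sign(gbar) * np.maximum(np.abs(gbar) - lam, 0.0)
        x = -(t / beta) * shrunk
    return x
```

Note that coordinates whose averaged gradient stays below the threshold λ are set to exactly zero, unlike plain stochastic gradient descent, where iterates are almost never exactly sparse.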
We consider d-dimensional linear stochastic approximation algorithms (LSAs) with a constant step-size and the so-called Polyak-Ruppert (PR) averaging of iterates. LSAs are widely applied in machine learning and reinforcement learning (RL), where the aim is to compute an appropriate θ* ∈ R^d (that is, an optimum or a fixed point) using noisy data and O(d) updates per iteration. In this paper, we ar...
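The setting in this abstract can be sketched as follows (the matrix A, step-size, and noise level are illustrative assumptions, not values from the paper): the last iterate of a constant step-size LSA oscillates in a ball around θ* = A⁻¹b, while the Polyak-Ruppert average of the iterates settles much closer to θ*.

```python
import numpy as np

def lsa_pr(A, b, T=5000, alpha=0.05, noise=0.1, seed=0):
    """Constant step-size linear stochastic approximation with
    Polyak-Ruppert iterate averaging (a sketch under illustrative
    assumptions on A, b, alpha, and the noise).

    Each update uses a noisy observation of (b - A @ theta), an O(d)
    operation; theta_bar is the running average of the iterates.
    """
    rng = np.random.default_rng(seed)
    d = len(b)
    theta = np.zeros(d)
    theta_bar = np.zeros(d)
    for t in range(1, T + 1):
        noisy = b - A @ theta + noise * rng.standard_normal(d)
        theta = theta + alpha * noisy          # O(d) update
        theta_bar += (theta - theta_bar) / t   # PR average
    return theta, theta_bar
```

The averaging costs nothing asymptotically (one extra O(d) update per step) but reduces the variance of the estimate from O(α) to O(1/T).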
This paper considers stochastic subgradient mirror-descent method for solving constrained convex minimization problems. In particular, a stochastic subgradient mirror-descent method with weighted iterate-averaging is investigated and its per-iterate convergence rate is analyzed. The novel part of the approach is in the choice of weights that are used to construct the averages. Through the use o...
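A minimal sketch of weighted iterate-averaging in stochastic mirror descent, using the Euclidean mirror map so the prox step reduces to a plain subgradient step. Weighting each iterate by its own step size γ_t = c/√t is one common choice (the paper's specific weights may differ); it avoids the extra logarithmic factor that uniform averaging incurs on nonsmooth problems.

```python
import numpy as np

def smd_weighted_avg(subgrad, x0, T=2000, c=1.0):
    """Stochastic subgradient mirror descent with weighted iterate
    averaging (Euclidean mirror map; an illustrative sketch).

    subgrad(x, t) returns a (possibly stochastic) subgradient at x.
    Returns the step-size-weighted average of the iterates,
    xbar = sum_t gamma_t x_t / sum_t gamma_t.
    """
    x = np.array(x0, dtype=float)
    wsum = 0.0
    xbar = np.zeros_like(x)
    for t in range(1, T + 1):
        gamma = c / np.sqrt(t)
        x = x - gamma * subgrad(x, t)        # mirror/subgradient step
        wsum += gamma
        xbar += (gamma / wsum) * (x - xbar)  # weighted running average
    return xbar
```

The weighted average is maintained incrementally, so the memory cost is the same as returning the last iterate.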
In this paper we present the greedy step averaging (GSA) method, a parameter-free stochastic optimization algorithm for a variety of machine learning problems. As a gradient-based optimization method, GSA makes use of the information from the minimizer of a single sample's loss function, and uses an averaging strategy to calculate a reasonable learning-rate sequence. While most existing gradient-based...
Averaging is an important method to extract effective macroscopic dynamics from complex systems with slow modes and fast modes. This article derives an averaged equation for a class of stochastic partial differential equations without any Lipschitz assumption on the slow modes. The rate of convergence in probability is obtained as a byproduct. Importantly, the deviation between the original equ...
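Schematically, the slow-fast setting behind this result can be written as follows (a generic averaging-principle sketch; the precise operators, noise structure, and assumptions are those of the paper):
$$du^{\epsilon} = \big[A u^{\epsilon} + f(u^{\epsilon}, v^{\epsilon})\big]\,dt + \sigma_1\,dW_t^{1}, \qquad dv^{\epsilon} = \frac{1}{\epsilon}\big[B v^{\epsilon} + g(u^{\epsilon}, v^{\epsilon})\big]\,dt + \frac{\sigma_2}{\sqrt{\epsilon}}\,dW_t^{2},$$
where $u^{\epsilon}$ is the slow mode and $v^{\epsilon}$ the fast mode. The averaged equation replaces $f$ by its average $\bar f(u) = \int f(u,v)\,\mu^{u}(dv)$ against the fast variable's invariant measure:
$$d\bar u = \big[A \bar u + \bar f(\bar u)\big]\,dt + \sigma_1\,dW_t^{1}, \qquad u^{\epsilon} \to \bar u \ \text{in probability as } \epsilon \to 0.$$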
A stochastic model for visual evoked response generation is proposed based on a compound neurological generator approach. Participation of individual generators is stochastically modelled in a physiologically realistic manner that captures the inherent variability in latencies and amplitudes associated with the component phases of the response. The model is invertible such that decomposition of...
This paper considers a wide spectrum of regularized stochastic optimization problems where both the loss function and regularizer can be non-smooth. We develop a novel algorithm based on the regularized dual averaging (RDA) method that can simultaneously achieve the optimal convergence rates for both convex and strongly convex loss. In particular, for strongly convex loss, it achieves the opti...