Search results for: stochastic averaging

Number of results: 146740

2007
Reuven Y. Rubinstein

We show that Polyak's (1990) stochastic approximation algorithm with averaging, originally developed for unconstrained minimization of a smooth strongly convex objective function observed with noise, can be naturally modified to solve convex-concave stochastic saddle point problems. We also show that the extended algorithm, considered on general families of stochastic convex-concave saddle point ...
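As a point of reference for the scheme named above, here is a minimal sketch of stochastic approximation with Polyak-Ruppert iterate averaging on a toy strongly convex quadratic observed with noise; the saddle-point extension described in the abstract is not reproduced, and every parameter below is an illustrative assumption.

```python
# Minimal sketch, not the paper's algorithm: stochastic approximation with
# Polyak-Ruppert iterate averaging on a toy strongly convex quadratic whose
# gradient is observed with additive noise. All parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d = 10
H = np.diag(np.linspace(0.5, 2.0, d))     # Hessian of the quadratic objective
x_star = rng.standard_normal(d)           # unknown minimizer

x = np.zeros(d)
x_avg = np.zeros(d)
for t in range(1, 50001):
    eta = 1.0 / t ** 0.75                              # slowly decaying step size
    grad = H @ (x - x_star) + rng.standard_normal(d)   # noisy gradient oracle
    x = x - eta * grad                                 # plain stochastic approximation step
    x_avg += (x - x_avg) / t                           # running iterate average

print("last-iterate error:", np.linalg.norm(x - x_star))
print("averaged error:    ", np.linalg.norm(x_avg - x_star))
```

The averaged iterate is typically less noisy than the last iterate, which is the effect the averaging scheme exploits.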

Journal: IEEE Trans. Signal Processing 1998
Oguz Tanrikulu Anthony G. Constantinides

Fig. 5. Comparison of the SRM-based algorithms for the "Bad" channel, SNR = 10 dB. SG: Standard stochastic gradient algorithm. Norm: Normalized stochastic gradient algorithm. SRM: Off-line SRM algorithm of [3], where S_k is obtained at time k by averaging all of the data through time k to estimate ensemble averages. Conj Grad: Conjugate gradient algorithm. The results are averaged over 200 sa...
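For orientation only, the "Norm" algorithm in this caption refers to a normalized stochastic gradient update; a rough NLMS-style sketch under assumed signal and filter names (not the paper's SRM-based setup) is shown below.

```python
# Rough sketch of a normalized stochastic gradient (NLMS-style) adaptive
# filter; the channel, signal lengths and step size are assumptions and do
# not reproduce the paper's SRM-based experiments.
import numpy as np

rng = np.random.default_rng(1)
n, taps = 5000, 8
h_true = rng.standard_normal(taps)                 # unknown channel
x = rng.standard_normal(n)                         # input signal
d = np.convolve(x, h_true)[:n] + 0.01 * rng.standard_normal(n)   # noisy channel output

w = np.zeros(taps)                                 # adaptive filter weights
mu, eps = 0.5, 1e-8                                # step size, regularizer
for k in range(taps - 1, n):
    u = x[k - taps + 1:k + 1][::-1]                # most recent input samples
    e = d[k] - w @ u                               # a-priori error
    w += mu * e * u / (u @ u + eps)                # normalized gradient step

print("weight error:", np.linalg.norm(w - h_true))
```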

Journal: Entropy 2018
Wantao Jia Yong Xu Dongxi Li

We investigate the stochastic dynamics of a prey-predator type ecosystem with time delay and discrete random environmental fluctuations. In this model, the delay effect is represented by a time delay parameter, and the environmental randomness is modeled as Poisson white noise. The stochastic averaging method and the perturbation method are applied to calculate the approximate ...
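To make the setting concrete, here is a minimal simulation sketch of a delayed prey-predator model perturbed by Poisson white noise; the equations, delay, rates and noise amplitude are illustrative assumptions, and the stochastic averaging analysis itself is not reproduced.

```python
# Minimal sketch (not the paper's model): Euler stepping of a delayed
# prey-predator system whose prey growth is perturbed by Poisson white
# noise (a compound Poisson jump term). All parameter values are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
dt, T, tau = 0.01, 50.0, 0.5              # time step, horizon, delay
steps, lag = int(T / dt), int(tau / dt)
lam, jump_scale = 2.0, 0.05               # Poisson rate, jump amplitude scale

x = np.full(steps + 1, 1.0)               # prey (first lag entries act as constant history)
y = np.full(steps + 1, 0.4)               # predator
for k in range(lag, steps):
    n_jumps = rng.poisson(lam * dt)                       # Poisson events in this step
    jump = jump_scale * rng.standard_normal(n_jumps).sum()
    x[k + 1] = x[k] + dt * (x[k] * (1.0 - x[k]) - x[k] * y[k]) + x[k] * jump
    y[k + 1] = y[k] + dt * y[k] * (x[k - lag] - 0.5)      # delayed prey density

print("final prey, predator:", x[-1], y[-1])
```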

2004
Vijay R. Konda John N. Tsitsiklis

We study the rate of convergence of linear two-time-scale stochastic approximation methods. We consider two-time-scale linear iterations driven by i.i.d. noise, prove some results on their asymptotic covariance and establish asymptotic normality. The well-known result [Polyak, B. T. (1990). Automat. Remote Contr. 51 937–946; Ruppert, D. (1988). Technical Report 781, Cornell Univ.] on the optima...
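A schematic two-time-scale linear iteration of the kind analyzed here, with assumed matrices, noise level and step-size exponents, looks like:

```python
# Illustrative linear two-time-scale stochastic approximation: theta is the
# slow iterate and w the fast one, since a_k / c_k -> 0. Coefficients, noise
# and step-size exponents are assumptions, not those of the paper.
import numpy as np

rng = np.random.default_rng(3)
A11, A12, A21, A22 = 1.0, 0.5, 0.3, 2.0
b1, b2 = 1.0, 0.5
theta, w = 0.0, 0.0

for k in range(1, 100001):
    a_k = 1.0 / k                   # slow step size
    c_k = 1.0 / k ** (2.0 / 3.0)    # fast step size (decays more slowly)
    n1, n2 = rng.standard_normal(2)
    theta += a_k * (b1 - A11 * theta - A12 * w + 0.1 * n1)
    w     += c_k * (b2 - A21 * theta - A22 * w + 0.1 * n2)

# the common fixed point solves the 2x2 linear system
target = np.linalg.solve([[A11, A12], [A21, A22]], [b1, b2])
print("iterates:", theta, w, " target:", target)
```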

Journal: The Journal of Chemical Physics 2010
Basil Bayati Houman Owhadi Petros Koumoutsakos

We present a simple algorithm for the simulation of stiff, discrete-space, continuous-time Markov processes. The algorithm is based on the concept of flow averaging for the integration of stiff ordinary and stochastic differential equations and ultimately leads to a straightforward variation of the well-known stochastic simulation algorithm (SSA). The speedup that can be achieved by the pre...
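For context, the baseline that the proposed variation builds on is the standard SSA (Gillespie direct method); a minimal sketch on an assumed birth-death toy system follows.

```python
# Baseline stochastic simulation algorithm (SSA / Gillespie direct method)
# for a toy birth-death process; the flow-averaging modification described
# above is not reproduced, this is only the reference scheme it builds on.
import numpy as np

rng = np.random.default_rng(4)
birth, death = 5.0, 0.1        # reaction rate constants (assumed values)
n, t, t_end = 0, 0.0, 100.0    # molecule count, time, horizon

while t < t_end:
    a = np.array([birth, death * n])   # propensities of the two reactions
    a0 = a.sum()
    t += rng.exponential(1.0 / a0)     # time to the next reaction
    if rng.random() * a0 < a[0]:       # choose which reaction fires
        n += 1                         # birth
    else:
        n -= 1                         # death

print("final count:", n, "(stationary mean is birth/death =", birth / death, ")")
```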

2014
Vilen Jumutc Johan A. K. Suykens

Recent advances in stochastic optimization and regularized dual averaging approaches revealed a substantial interest in a simple and scalable stochastic method which is tailored to some more specific needs. Among the latest one can find sparse signal recovery and l0-based sparsity-inducing approaches. These methods in particular can force many components of the solution to shrink to zero, thus cla...
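As a generic illustration of regularized dual averaging with a sparsity-inducing penalty, the sketch below uses the common l1 soft-threshold form rather than the l0-based variants discussed here; all data and parameters are synthetic assumptions.

```python
# Generic l1-regularized dual averaging (RDA) sketch on synthetic sparse
# least squares; this is the standard soft-threshold form, not the l0-based
# variants discussed above, and all parameters are assumptions.
import numpy as np

rng = np.random.default_rng(5)
n, d = 2000, 50
w_true = np.zeros(d)
w_true[:5] = rng.standard_normal(5)            # sparse ground truth
X = rng.standard_normal((n, d))
y = X @ w_true + 0.1 * rng.standard_normal(n)

lam, gamma = 0.05, 20.0                        # l1 weight, prox strength
w, g_bar = np.zeros(d), np.zeros(d)            # iterate, running gradient average

for t in range(1, 3 * n + 1):
    i = rng.integers(n)
    g = (X[i] @ w - y[i]) * X[i]               # stochastic gradient of 0.5*(x.w - y)^2
    g_bar += (g - g_bar) / t                   # dual average of all past gradients
    # closed-form RDA step: soft-threshold the dual average
    w = -(np.sqrt(t) / gamma) * np.sign(g_bar) * np.maximum(np.abs(g_bar) - lam, 0.0)

print("nonzeros:", np.count_nonzero(w), "error:", np.linalg.norm(w - w_true))
```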

Journal: Journal of Differential Equations 2021

By using the technique of Zvonkin's transformation and the classical Khasminskii time discretization method, we prove an averaging principle for slow-fast stochastic partial differential equations with bounded Hölder continuous drift coefficients. An example is also provided to illustrate our result.
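Schematically, and without reproducing the paper's precise coefficients, noises or assumptions, a slow-fast system and its averaged limit take the form:

```latex
% Schematic slow-fast system and its averaged limit (the paper's precise
% coefficients and assumptions are not reproduced here).
\[
\begin{aligned}
  dX^{\varepsilon}_t &= b\bigl(X^{\varepsilon}_t, Y^{\varepsilon}_t\bigr)\,dt + dW^{1}_t,\\
  dY^{\varepsilon}_t &= \tfrac{1}{\varepsilon}\, f\bigl(X^{\varepsilon}_t, Y^{\varepsilon}_t\bigr)\,dt
                        + \tfrac{1}{\sqrt{\varepsilon}}\, dW^{2}_t,
\end{aligned}
\qquad
d\bar{X}_t = \bar{b}\bigl(\bar{X}_t\bigr)\,dt + dW^{1}_t,
\qquad
\bar{b}(x) = \int b(x,y)\,\mu^{x}(dy).
\]
```

Here \mu^{x} denotes the invariant measure of the fast equation with the slow component frozen at x, and the averaging principle asserts that X^\varepsilon converges to \bar{X} as \varepsilon \to 0.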

Journal: Stochastic Processes and their Applications 2021

In this paper, we prove a convergence theorem for a class of singular perturbation problems for fully nonlinear parabolic partial differential equations (PDEs) with ergodic structures. The limit function is represented as the viscosity solution to degenerate PDEs. Our approach is mainly based on a G-stochastic analysis argument. As a byproduct, we also establish an averaging principle for stochastic differential equations driven by G-Brow...
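In schematic form only (the paper's G-expectation framework and precise structure conditions are not reproduced), a singular perturbation problem with an ergodic fast variable reads:

```latex
% Schematic singular perturbation with an ergodic fast variable
% (the paper's G-expectation setting is not reproduced here).
\[
\partial_t u^{\varepsilon}
  = F\bigl(x, y, D_x u^{\varepsilon}, D_x^{2} u^{\varepsilon}\bigr)
    + \tfrac{1}{\varepsilon}\,\mathcal{L}_y u^{\varepsilon},
\qquad
u^{\varepsilon}(t,x,y)\;\xrightarrow[\varepsilon\to 0]{}\;\bar{u}(t,x),
\]
```

where \mathcal{L}_y is ergodic in the fast variable y and \bar{u} solves an effective, possibly degenerate, parabolic PDE.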

Journal: SIAM Journal on Applied Mathematics 2012
Luis L. Bonilla Axel Klar S. Martin

We investigate linear Fokker-Planck equations for stochastic Hamiltonian systems with periodic forcing where the impact of the deterministic forcing is not captured by classical stochastic averaging. To overcome this problem, a formal energy projection method is introduced, which splits the corresponding Fokker-Planck equation and allows the computation of higher order stochastic averages....
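As an assumed, generic instance of this setting (not the paper's exact model), a periodically forced stochastic Hamiltonian system and its Fokker-Planck equation can be written as:

```latex
% Assumed generic instance (not the paper's exact model): a periodically
% forced stochastic Hamiltonian system and its Fokker-Planck equation.
\[
\begin{aligned}
  dq_t &= p_t\,dt,\\
  dp_t &= \bigl(-V'(q_t) - \gamma p_t + A\cos(\omega t)\bigr)\,dt + \sigma\,dW_t,
\end{aligned}
\qquad
\partial_t\rho = -p\,\partial_q\rho
  + \partial_p\!\bigl[\bigl(V'(q) + \gamma p - A\cos(\omega t)\bigr)\rho\bigr]
  + \tfrac{\sigma^2}{2}\,\partial_p^2\rho .
\]
```

Classical stochastic averaging projects the density onto functions of the energy H(q, p) = p^2/2 + V(q); the energy projection method mentioned above targets the higher order averages in which the effect of the periodic forcing reappears.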

Journal: CoRR 2016
Tomoya Murata Taiji Suzuki

We consider a composite convex minimization problem associated with regularized empirical risk minimization, which often arises in machine learning. We propose two new stochastic gradient methods that are based on the stochastic dual averaging method with variance reduction. Our methods generate a sparser solution than the existing methods because we do not need to take the average of the history o...
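Purely as an illustration of the two ingredients named here, and not of the authors' proposed methods, the sketch below plugs an SVRG-style variance-reduced gradient estimator into the l1 dual-averaging step used in the earlier sketch; all data and parameters are assumptions.

```python
# Generic sketch, not the paper's methods: an SVRG-style variance-reduced
# gradient estimator inside an l1 dual-averaging update, on synthetic
# sparse least squares. All parameters are assumptions.
import numpy as np

rng = np.random.default_rng(6)
n, d = 1000, 30
w_true = np.zeros(d)
w_true[:4] = 1.0
X = rng.standard_normal((n, d))
y = X @ w_true + 0.1 * rng.standard_normal(n)

lam, gamma = 0.05, 20.0
w, g_bar, t = np.zeros(d), np.zeros(d), 0

for epoch in range(20):
    w_snap = w.copy()                            # snapshot point
    mu = X.T @ (X @ w_snap - y) / n              # full gradient at the snapshot
    for _ in range(n):
        t += 1
        i = rng.integers(n)
        gi = (X[i] @ w - y[i]) * X[i]            # gradient at the current iterate
        gs = (X[i] @ w_snap - y[i]) * X[i]       # gradient at the snapshot
        g = gi - gs + mu                         # variance-reduced estimate
        g_bar += (g - g_bar) / t                 # dual (running) average
        w = -(np.sqrt(t) / gamma) * np.sign(g_bar) * np.maximum(np.abs(g_bar) - lam, 0.0)

print("nonzeros:", np.count_nonzero(w), "error:", np.linalg.norm(w - w_true))
```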
