Search results for: t convergence

Number of results: 811,002

2004
George V. Moustakides

We introduce a novel method for analyzing a well-known class of adaptive algorithms. By combining recent developments from the theory of Markov processes and long-existing results from the theory of perturbations of linear operators, we study first the behavior and convergence properties of a class of products of random matrices. This in turn allows for the analysis of the first and s...

2000
ERIK A. VAN DOORN

Taking up a recent proposal by Stadje and Parthasarathy in the setting of the many-server Poisson queue, we consider the integral ∫₀^∞ [lim_{u→∞} E(X(u)) − E(X(t))] dt as a measure of the speed of convergence towards stationarity of the process {X(t), t ≥ 0}, and evaluate the integral explicitly in terms of the parameters of the process in the case that {X(t), t ≥ 0} is an ergodic birth-death process...
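As a concrete (hypothetical) instance of this measure: for a two-state birth-death chain with birth rate λ from state 0 and death rate μ from state 1, started in state 0, E(X(t)) = π₁(1 − e^{−(λ+μ)t}) with π₁ = λ/(λ+μ), so the integral equals π₁/(λ+μ). A minimal numerical check of this toy case (all rate values illustrative, not from the paper):

```python
import numpy as np

lam, mu = 2.0, 3.0          # illustrative birth/death rates
rate = lam + mu
pi1 = lam / rate            # stationary mean = lim E(X(u)) for this chain

def mean_x(t):
    # E(X(t)) for the two-state chain started in state 0
    return pi1 * (1.0 - np.exp(-rate * t))

# trapezoidal approximation of  ∫ [lim E(X(u)) - E(X(t))] dt  on [0, 20];
# the truncated tail is of order exp(-rate * 20) and negligible here
t = np.linspace(0.0, 20.0, 200001)
deficit = pi1 - mean_x(t)
numeric = np.sum(0.5 * (deficit[:-1] + deficit[1:]) * np.diff(t))

analytic = pi1 / rate       # closed form for this toy chain
```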

2010
L. D. BERKOVITZ

It is shown that the integral functional I(y, z) = ∫_G f(t, y(t), z(t)) dμ is lower semicontinuous on its domain with respect to the joint strong convergence of y_k → y in L^p(G) and the weak convergence of z_k → z in L^q(G), where 1 < p < ∞ and 1 < q < ∞, under the following conditions. The function f: (t, x, w) ↦ f(t, x, w) is measurable in t for fixed (x, w), is continuous in (x, w) for a.e. t, and ...

2010
YANG WANG, ZHENGFANG ZHOU

Empirical Mode Decomposition (EMD), an adaptive technique for data and signal decomposition, is a valuable tool for many applications in data and signal processing. One approach to EMD is the iterative filtering EMD, which iterates certain banded Toeplitz operators in l∞(Z). The convergence of iterative filtering is a challenging mathematical problem. In this paper we study this problem, namely...
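A minimal sketch of one iterative-filtering step, with the banded Toeplitz operator realized as a centered moving average (window length, signal, and iteration count are illustrative choices, not taken from the paper):

```python
import numpy as np

def moving_average(x, w):
    # banded Toeplitz smoothing operator L: centered moving average,
    # window length 2*w + 1 (zero padding at the boundaries)
    kernel = np.ones(2 * w + 1) / (2 * w + 1)
    return np.convolve(x, kernel, mode="same")

def iterative_filter(x, w, n_iter):
    # inner loop of iterative-filtering EMD, with a hypothetical fixed
    # iteration count in place of a data-driven stopping rule: the
    # candidate IMF is the limit of (I - L)^n applied to the signal
    y = x.copy()
    for _ in range(n_iter):
        y = y - moving_average(y, w)
    return y

n = np.arange(630)
fast = np.sin(2 * np.pi * n / 21)   # oscillation with period = window length
trend = 0.002 * n                   # slow drift
signal = fast + trend
imf = iterative_filter(signal, w=10, n_iter=5)
```

Here the oscillation's period matches the filter window, so it lies in the operator's null space and survives the iteration, while the slow trend is annihilated away from the boundaries.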

2013
Mehrdad Mahdavi, Lijun Zhang, Rong Jin

It is well known that the optimal convergence rate for stochastic optimization of smooth functions is O(1/√T), which is the same as for stochastic optimization of Lipschitz-continuous convex functions. This is in contrast to optimizing smooth functions using full gradients, which yields a convergence rate of O(1/T). In this work, we consider a new setup for optimizing smooth functions, termed as Mix...
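A toy contrast between the two regimes (a qualitative sketch on a one-dimensional quadratic, not a rate proof; step sizes and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x_star = 1.0
T = 10000

def grad(x):
    # exact gradient of the smooth objective f(x) = 0.5 * (x - 1)^2
    return x - x_star

# full-gradient descent with step 1/L (L = 1): on this quadratic it even
# lands on the optimum immediately, illustrating how much a noise-free
# gradient buys over a stochastic one
x_gd = 0.0
for _ in range(T):
    x_gd -= grad(x_gd)

# SGD on the same objective with unit-variance gradient noise,
# step size c/sqrt(t), and iterate averaging
x_sgd, avg = 0.0, 0.0
for t in range(1, T + 1):
    g = grad(x_sgd) + rng.normal()
    x_sgd -= (0.5 / np.sqrt(t)) * g
    avg += (x_sgd - avg) / t

err_gd = abs(x_gd - x_star)
err_sgd = abs(avg - x_star)   # noise keeps this bounded away from zero
```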

Journal: Numerische Mathematik, 2013
Daniel B. Szyld, Fei Xue

We study the local convergence of several inexact numerical algorithms closely related to Newton’s method for the solution of a simple eigenpair of the general nonlinear eigenvalue problem T (λ)v = 0. We investigate inverse iteration, Rayleigh quotient iteration, residual inverse iteration, and the single-vector Jacobi-Davidson method, analyzing the impact of the tolerances chosen for the appro...
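For orientation, plain inverse iteration in the linear special case T(λ) = A − λI looks as follows (a hypothetical toy with exact inner solves; the paper's subject is precisely what happens when that solve is only approximate):

```python
import numpy as np

def inverse_iteration(A, sigma, n_iter=20):
    # inverse iteration with a fixed shift sigma near the wanted
    # eigenvalue, for the linear special case T(lambda) = A - lambda*I
    n = A.shape[0]
    M = A - sigma * np.eye(n)
    v = np.ones(n) / np.sqrt(n)
    for _ in range(n_iter):
        w = np.linalg.solve(M, v)   # in the inexact setting this linear
        v = w / np.linalg.norm(w)   # solve is performed only to a tolerance
    # Rayleigh quotient estimate of the eigenvalue nearest sigma
    return v @ A @ v, v

A = np.diag([2.0, 5.0, 9.0])        # illustrative spectrum
lam, v = inverse_iteration(A, sigma=4.5)
```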

Journal: SIAM Journal of Applied Mathematics, 1994
Jessica G. Gaines, Terry J. Lyons

We describe a method of random generation of the integrals A_{1,2}(t, t+h) = ∫_t^{t+h} ∫_t^s dw_1(r) dw_2(s) − ∫_t^{t+h} ∫_t^s dw_2(r) dw_1(s) together with the increments w_1(t+h) − w_1(t) and w_2(t+h) − w_2(t) of a two-dimensional Brownian path (w_1(t), w_2(t)). The method chosen is based on Marsaglia's 'rectangle-wedge-tail' method, generalised to higher dimensions. The motivation is the ne...
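A brute-force stand-in for such a sampler (not the rectangle-wedge-tail construction itself, which is far more efficient): approximate A_{1,2}(0, h) by fine subdivision of the Brownian path and check its known moments, mean 0 and variance h² for independent components. Sample counts and step sizes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def levy_area_and_increments(h, n_sub=2000):
    # fine-subdivision (Ito-sum) approximation of the Levy area
    #   A_{1,2}(0, h) = ∫∫ dw1 dw2 - ∫∫ dw2 dw1
    dt = h / n_sub
    dw = rng.normal(scale=np.sqrt(dt), size=(n_sub, 2))
    w = np.vstack([np.zeros(2), np.cumsum(dw, axis=0)])
    a12 = np.sum(w[:-1, 0] * dw[:, 1]) - np.sum(w[:-1, 1] * dw[:, 0])
    # return the area together with the two Brownian increments
    return a12, w[-1, 0], w[-1, 1]

samples = np.array([levy_area_and_increments(1.0)[0] for _ in range(1000)])
```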

Journal: CoRR, 2013
Tianbao Yang, Lijun Zhang

We motivate this study from a recent work on a stochastic gradient descent (SGD) method with only one projection (Mahdavi et al., 2012), which aims at alleviating the computational bottleneck of the standard SGD method in performing the projection at each iteration, and enjoys an O(log T/T) convergence rate for strongly convex optimization. In this paper, we make further contributions along th...
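The one-projection idea can be caricatured as follows: run SGD without per-step projection and project onto the feasible set once at the end. This toy differs from the actual method of Mahdavi et al. (which controls constraint violation with a penalty term during the run); the problem instance is chosen so that projecting the unconstrained optimum yields the constrained one:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
c = np.full(d, 1.0)                  # unconstrained optimum, outside the ball
x_opt = c / np.linalg.norm(c)        # constrained optimum on the unit ball

def noisy_grad(x):
    # stochastic gradient of the strongly convex f(x) = 0.5 * ||x - c||^2
    return (x - c) + 0.1 * rng.normal(size=d)

def project_ball(x):
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

# SGD with only ONE projection: iterate freely, project once at the end
x = np.zeros(d)
T = 5000
for t in range(1, T + 1):
    x -= (1.0 / t) * noisy_grad(x)   # O(1/t) steps for strong convexity
x_final = project_ball(x)
```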

Journal: Universität Trier, Mathematik/Informatik, Forschungsbericht, 1999
Richard Rödler

We consider stochastic processes {X(t, ·) : t ∈ T} with continuous parameter t ∈ T = [0, ∞[. These processes are so-called generalized supermartingales, where the generalization comes from a modification of the right side of the supermartingale inequality. The assumptions on the process and the probability space are the same as for the classical convergence theorem of DOOB (see KOPP [2]). The aim...

2014
Avram Sidi, William F. Ford

The recursion relations that were proposed in [2] for implementing vector extrapolation methods are used for devising generalizations of the power method for linear operators. These generalizations are shown to produce approximations to the largest eigenvalues of a linear operator under certain conditions. They are similar in form to the quotient-difference algorithm and share similar convergen...
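For reference, the classical power method that these extrapolation-based schemes generalize (matrix and iteration count are illustrative):

```python
import numpy as np

def power_method(A, n_iter=200, seed=0):
    # classical power iteration: repeatedly apply A and normalize;
    # converges to the dominant eigenpair when one eigenvalue is
    # strictly largest in modulus
    rng = np.random.default_rng(seed)
    v = rng.normal(size=A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        w = A @ v
        v = w / np.linalg.norm(w)
    # Rayleigh quotient estimate of the dominant eigenvalue
    return v @ A @ v, v

A = np.diag([3.0, 1.0, 0.5])   # dominant eigenvalue 3, eigenvector e1
lam, v = power_method(A)
```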
