Search results for: t convergence

Number of results: 811,002

2014
Denis Belomestny John G. M. Schoenmakers

Given a Lévy process L, we consider the so-called statistical Skorohod embedding problem of recovering the distribution of an independent random time T based on an i.i.d. sample from L_T. Our approach is based on the genuine use of the Mellin and Laplace transforms. We propose consistent estimators for the density of T, derive their convergence rates and prove their optimality. It turns out that t...

2007
Tao Tang Jinghua Wang

We consider the convergence and stability properties of MUSCL relaxing schemes applied to conservation laws with stiff source terms. The maximum principle for the numerical schemes will be established. It will also be shown that the MUSCL relaxing schemes are uniformly l¹- and TV-stable, in the sense that they are bounded by a constant independent of the relaxation parameter, the Lipschitz consta...
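The maximum principle and TV-stability claimed above can be checked numerically on a toy scheme. A hedged sketch: first-order upwind for the linear advection equation u_t + u_x = 0 stands in for the paper's MUSCL relaxing schemes, which are not specified in this snippet.

```python
import numpy as np

def upwind_step(u, c):
    """One step of first-order upwind for u_t + u_x = 0 with periodic
    boundaries; c is the CFL number, assumed to lie in (0, 1]."""
    return u - c * (u - np.roll(u, 1))

def total_variation(u):
    """Periodic total variation: sum of |u_{i+1} - u_i| including the wrap."""
    return np.sum(np.abs(np.roll(u, -1) - u))

u = np.where(np.arange(100) < 50, 1.0, 0.0)  # step profile in [0, 1]
tv0 = total_variation(u)
for _ in range(200):
    u = upwind_step(u, 0.5)
tv = total_variation(u)
```

For a monotone scheme with CFL number in (0, 1], the total variation never increases (tv ≤ tv0) and the cell values stay within the initial bounds, which is the discrete maximum principle.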

2011
Gilles Christol

We present an algorithm computing, for any first-order differential equation L over the affine line and any (Berkovich) point t of this affine line, the p-adic radius of convergence RL(t) of the solutions of L near t. We do explicit computations for the equation (0.1) L(f) := xf′ − π(pxᵖ + ax)f = 0, where πᵖ⁻¹ = −p and a lies in some valued extension of Qp. For a = −1 and t = 0, a solution of ...

2009
Yi-An Chen

Let C be a nonempty closed convex subset of a Hilbert space H, and let T be a self-mapping of C. Recall that T is said to be nonexpansive if ‖Tx − Ty‖ ≤ ‖x − y‖ for all x, y ∈ C. Construction of fixed points of nonexpansive mappings via Mann’s iteration [1] has been extensively investigated in the literature; see, e.g., [2–5] and references therein. But the convergence of Mann’s iteration and Ishikawa’s iterati...
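Mann's iteration, mentioned above, can be sketched in a few lines. This is a hedged illustration, not the paper's scheme: the step sizes αₙ = 1/(n+2) and the rotation map are illustrative choices of mine.

```python
import numpy as np

def mann_iteration(T, x0, steps):
    """Mann iteration: x_{n+1} = (1 - a_n) x_n + a_n T(x_n),
    with illustrative step sizes a_n = 1/(n+2) (so sum a_n diverges)."""
    x = np.asarray(x0, dtype=float)
    for n in range(steps):
        a = 1.0 / (n + 2)
        x = (1.0 - a) * x + a * T(x)
    return x

# A nonexpansive (in fact isometric) self-map of R^2 with unique fixed
# point 0: rotation by 90 degrees. Plain Picard iteration x <- T(x)
# would orbit forever; the Mann averages converge to the fixed point.
R = np.array([[0.0, -1.0], [1.0, 0.0]])
x_star = mann_iteration(lambda x: R @ x, [1.0, 1.0], steps=5000)
```

A rotation never settles under plain iteration, which is exactly why averaged (Mann-type) iterations are used for nonexpansive maps.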

2009
Cristina Pereyra Lesley A. Ward

Contents: Introduction. Chapter 1, Fourier series: some motivation — 1.1 Some examples and key definitions; 1.2 Main questions; 1.3 Fourier series and Fourier coefficients; 1.4 A little history, and motivation from the physical world. Chapter 2, Interlude — 2.1 Nested classes of functions on bounded intervals; 2.2 Modes of convergence; 2.3 Interchanging limit operations; 2.4...

2015
Joan T. Matamalas Julia Poncela-Casasnovas Sergio Gómez Alex Arenas

Scientific Reports 5:9519; doi: 10.1038/srep09519; published online 27 April 2015; updated on 02 September 2015. In the Supplementary Information file originally published with this Article, there are typographical errors. In the section under ‘Convergence’, “In order to evaluate such convergence, we fit the last tγ time steps of the evolution to a linear trend, c(t) = α + βt, using the QR ...
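The corrected formula describes an ordinary least-squares fit of a linear trend via QR factorization. A minimal sketch of that fit, assuming nothing beyond the formula itself (the function name and the synthetic data are mine; the paper's actual convergence criterion is truncated above):

```python
import numpy as np

def fit_linear_trend(c):
    """Fit c(t) = alpha + beta * t by least squares via QR factorization."""
    t = np.arange(len(c), dtype=float)
    A = np.column_stack([np.ones_like(t), t])   # design matrix [1, t]
    Q, R = np.linalg.qr(A)                      # thin QR: A = Q R
    alpha, beta = np.linalg.solve(R, Q.T @ c)   # solve R x = Q^T c
    return alpha, beta

# Synthetic series that has settled onto a flat trend ("converged"):
c = 3.0 + 0.001 * np.sin(np.arange(50, dtype=float))
alpha, beta = fit_linear_trend(c)
```

A fitted slope β near zero indicates the quantity has converged to a stationary value, which is how such a fit is typically used as a convergence check.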

2008
Jean Jacod

This paper is concerned with the asymptotic behavior of sums of the form U(f)_t = Σ_{i=1}^{[t/Δₙ]} f(X_{iΔₙ} − X_{(i−1)Δₙ}), where X is a 1-dimensional semimartingale and f a suitable test function, typically f(x) = |x|ʳ, as Δₙ → 0. We prove a variety of “laws of large numbers”, that is, convergence in probability of U(f)_t, sometimes after normalization. We also exhibit in many cases the rate of convergence...
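The sums U(f)_t above are easy to simulate. A hedged sketch for the simplest case, f(x) = |x|² and X a Brownian motion, where the law of large numbers gives convergence in probability of U(f)_t to the quadratic variation t (the simulation setup is mine, not the paper's):

```python
import numpy as np

def power_variation(X, r):
    """U(f)_t = sum_i f(X_{i Dn} - X_{(i-1) Dn}) with f(x) = |x|^r,
    for a path X sampled on an equispaced grid."""
    increments = np.diff(X)
    return np.sum(np.abs(increments) ** r)

# Brownian motion on [0, 1]: for r = 2 the sum converges (in probability,
# as the mesh shrinks) to the quadratic variation, which equals t = 1 here.
rng = np.random.default_rng(0)
n = 100_000
dt = 1.0 / n
path = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
qv = power_variation(path, r=2)
```

For r ≠ 2 a normalization by a power of Δₙ is needed before a limit exists, which is part of what the paper makes precise.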

Journal: Annales UMCS, Informatica 2010
Magdalena Lapinska-Chrzczonowicz

The Cauchy problem for a semilinear parabolic equation is considered. Under the conditions u(x, t) = X(x)T₁(t) + T₂(t) and ∂u/∂x = 0, it is shown that the problem is equivalent to a system of two ordinary differential equations, for which an exact difference scheme (EDS) with special Steklov averaging and difference schemes of arbitrary order of accuracy (ADS) are constructed on the moving mesh. T...

2014
Leon Wenliang Zhong James T. Kwok

In this paper, we propose a new stochastic alternating direction method of multipliers (ADMM) algorithm, which incrementally approximates the full gradient in the linearized ADMM formulation. Besides having a per-iteration complexity as low as existing stochastic ADMM algorithms, the proposed algorithm improves the convergence rate on convex problems from O(1/√T) to O(1/T), where T is the ...
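For context, the classical batch ADMM that stochastic variants like the one above improve on can be sketched for the lasso problem. This is a hedged illustration of the standard deterministic splitting, not the paper's incremental-gradient algorithm; the problem data and parameter choices are mine.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=500):
    """Batch ADMM for the lasso: min 0.5*||Ax - b||^2 + lam*||z||_1, s.t. x = z."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    Atb = A.T @ b
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))   # small-n convenience
    for _ in range(iters):
        x = M @ (Atb + rho * (z - u))              # ridge-like x-update
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft-threshold
        u = u + x - z                              # dual ascent step
    return z

# Noiseless sparse recovery as a sanity check:
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 5))
x_true = np.array([1.0, -2.0, 0.0, 0.0, 3.0])
x_hat = admm_lasso(A, A @ x_true, lam=0.1)
```

Each iteration alternates a ridge-like x-update, a soft-thresholding z-update, and a dual update on u; the stochastic variants replace the full gradient inside the x-update with cheap estimates, which is where the O(1/√T) vs. O(1/T) rates arise.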

2018
Mor Shpigel Nacson Jason Lee Suriya Gunasekar Nathan Srebro Daniel Soudry

The implicit bias of gradient descent is not fully understood even in simple linear classification tasks (e.g., logistic regression). Soudry et al. (2018) studied this bias on separable data, where there are multiple solutions that correctly classify the data. It was found that, when optimizing monotonically decreasing loss functions with exponential tails using gradient descent, the linear cla...
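The implicit-bias phenomenon described above is easy to reproduce on a toy dataset. A minimal sketch (dataset, learning rate, and iteration count are mine): gradient descent on unregularized logistic loss over linearly separable data drives ‖w‖ to infinity while the direction w/‖w‖ converges to the max-margin separator.

```python
import numpy as np

# Separable 2-D data with labels +/-1; by symmetry the max-margin
# direction is exactly (1, 0).
X = np.array([[2.0, 1.0], [2.0, -1.0], [-2.0, 1.0], [-2.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

w = np.zeros(2)
lr = 0.1
for _ in range(20_000):
    margins = y * (X @ w)
    # gradient of the logistic loss sum_i log(1 + exp(-y_i x_i . w))
    grad = -(X.T @ (y / (1.0 + np.exp(margins))))
    w -= lr * grad

direction = w / np.linalg.norm(w)
```

Here the weight norm keeps growing (logarithmically in the iteration count) while the direction stabilizes at the max-margin separator, which is the bias Soudry et al. (2018) characterize for exponential-tailed losses.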
