Search results for: ideal convergence

Number of results: 199142

Journal: SIAM J. Matrix Analysis Applications 2013
Josef A. Sifuentes Mark Embree Ronald B. Morgan

How does GMRES convergence change when the coefficient matrix is perturbed? Using spectral perturbation theory and resolvent estimates, we develop simple, general bounds that quantify the lag in convergence such a perturbation can induce. This analysis is particularly relevant to preconditioned systems, where an ideal preconditioner is only approximately applied in practical computations. To il...
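
Not the paper's analysis, but a quick way to see the phenomenon the abstract describes: the Python sketch below runs SciPy's GMRES on a random test matrix A and on a perturbed copy A + E and compares iteration counts. The matrix, the perturbation size, and the right-hand side are arbitrary choices.

```python
# Illustrative numerical experiment only (not the paper's bounds): run GMRES on a
# random, well-conditioned test matrix A and on a perturbed copy A + E, and compare
# how many iterations each needs to reach SciPy's default tolerance.
import numpy as np
from scipy.sparse.linalg import gmres

rng = np.random.default_rng(0)
n = 200
A = 2.0 * np.eye(n) + rng.standard_normal((n, n)) / np.sqrt(n)  # eigenvalues clustered near 2
E = 0.05 * rng.standard_normal((n, n)) / np.sqrt(n)             # small perturbation
b = rng.standard_normal(n)

def residual_history(M):
    """Full (unrestarted) GMRES; record the residual norm at every iteration."""
    history = []
    gmres(M, b, restart=n, maxiter=n, callback=history.append, callback_type="pr_norm")
    return history

print("iterations, exact matrix A:      ", len(residual_history(A)))
print("iterations, perturbed matrix A+E:", len(residual_history(A + E)))
```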

2010
José Mario Martínez

In this paper we introduce a local convergence theory for Least Change Secant Update methods. This theory includes most known methods of this class, as well as some new interesting quasi-Newton methods. Further, we prove that this class of LCSU updates may be used to generate iterative linear methods to solve the Newton linear equation in the Inexact-Newton context. Convergence at a q-superlin...
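
As a concrete, well-known member of the LCSU class (not the paper's own construction), the following Python sketch implements Broyden's "good" method; its update is the least change secant update in the Frobenius norm. The test system, starting point, and initial matrix are arbitrary.

```python
# A minimal sketch of Broyden's "good" method, a classical least change secant
# update: among all matrices B satisfying the secant equation B s = y, the update
# picks the one closest to the previous approximation in the Frobenius norm.
import numpy as np

def broyden(F, x0, B0, tol=1e-10, maxiter=100):
    x = np.asarray(x0, dtype=float)
    B = np.asarray(B0, dtype=float)
    Fx = F(x)
    for _ in range(maxiter):
        s = np.linalg.solve(B, -Fx)                  # quasi-Newton step: B s = -F(x)
        x_new = x + s
        Fx_new = F(x_new)
        y = Fx_new - Fx
        # least change secant update: B <- B + (y - B s) s^T / (s^T s)
        B = B + np.outer(y - B @ s, s) / (s @ s)
        x, Fx = x_new, Fx_new
        if np.linalg.norm(Fx) < tol:
            break
    return x

# Arbitrary example: x0^2 + x1^2 - 1 = 0, x0 - x1 = 0, root near (0.7071, 0.7071).
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
x0 = np.array([1.0, 0.5])
B0 = np.array([[2.0 * x0[0], 2.0 * x0[1]], [1.0, -1.0]])  # Jacobian at x0 as B0
print(broyden(F, x0, B0))
```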

Journal: Foundations of Computational Mathematics 2002
Grégoire Lecerf

Newton’s iterator is one of the most popular components of polynomial equation system solvers, either from the numeric or symbolic point of view. This iterator usually handles smooth situations only (when the Jacobian matrix associated to the system is invertible). This is often a restrictive factor. Generalizing Newton’s iterator is still an open problem: How to design an efficient iterator wi...
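
For reference, the smooth case the snippet mentions (invertible Jacobian) is handled by the plain Newton iteration sketched below in Python; the example system is an arbitrary illustration, not one treated in the paper.

```python
# A bare-bones Newton iteration for a square polynomial system in the smooth case,
# i.e. assuming the Jacobian stays invertible near the root.
import numpy as np

def newton(F, J, x0, tol=1e-12, maxiter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(maxiter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        x = x - np.linalg.solve(J(x), Fx)            # Newton step: solve J(x) d = -F(x)
    return x

# Arbitrary polynomial system: x^2 + y - 3 = 0, x + y^2 - 5 = 0, with a root at (1, 2).
F = lambda v: np.array([v[0]**2 + v[1] - 3.0, v[0] + v[1]**2 - 5.0])
J = lambda v: np.array([[2.0 * v[0], 1.0], [1.0, 2.0 * v[1]]])
print(newton(F, J, x0=[1.5, 1.5]))
```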

Journal: CoRR 2018
Naresh Manwani

In this paper, we propose an online learning algorithm PRIL for learning ranking classifiers using interval labeled data and show its correctness. We show its convergence in a finite number of steps if there exists an ideal classifier such that the rank it assigns to an example always lies in that example's label interval. We then generalize this mistake bound result to the general case. We also provide ...

2014
Michael Y. Hu

The goal of a Markov Chain Monte Carlo (MCMC) simulation is to generate samples from a target probability distribution π by simulating a Markov chain whose stationary distribution is π. However, often this ideal is not achieved, and the practitioner actually samples from an approximate distribution π̃ that is close to π in variation distance. These circumstances have spawned an array of literatu...
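
To make the setting concrete, here is generic MCMC background rather than anything specific to the text above: a minimal random-walk Metropolis chain in Python whose stationary distribution is a standard normal π, but whose finite-length output from a poor starting point only approximates π. Target, proposal scale, chain length, and starting point are illustrative choices.

```python
# Random-walk Metropolis sketch: the chain targets a standard normal, but run for
# finitely many steps from a distant start it samples only approximately from it.
import numpy as np

def metropolis(log_pi, x0, n_steps, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, log_p = x0, log_pi(x0)
    samples = []
    for _ in range(n_steps):
        proposal = x + step * rng.standard_normal()
        log_p_prop = log_pi(proposal)
        # accept with probability min(1, pi(proposal) / pi(x))
        if np.log(rng.random()) < log_p_prop - log_p:
            x, log_p = proposal, log_p_prop
        samples.append(x)
    return np.array(samples)

log_pi = lambda x: -0.5 * x**2          # standard normal target, up to a constant
chain = metropolis(log_pi, x0=10.0, n_steps=5000)
print("sample mean:", chain.mean(), "sample std:", chain.std())
```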

2004
Nicolas Besse Dietmar Kröner

We present the convergence analysis of locally divergence-free discontinuous Galerkin methods for the induction equations which appear in the ideal magnetohydrodynamic system. When we use a second-order Runge–Kutta time discretization, under the CFL condition ∆t ∼ h, we obtain error estimates in L^2 of order O(∆t + h^m), where m is the degree of the local polynomials.

2012
Kazumoto Iguchi

Equation of state and virial coefficients of an ideal gas with fractional exclusion (i.e. Haldane–Wu) statistics in arbitrary dimensions are derived herein, using the quantum statistical mechanics formulation for pressure and density of the system in terms of the D-dimensional momentum representation. The relationship between the convergence of the virial expansion and the existence of condens...
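
For orientation, and independent of the paper's specific D-dimensional Haldane–Wu coefficients, the virial expansion referred to above writes the equation of state as a power series in the particle density ρ = N/V:

$$\frac{P}{k_B T} \;=\; \rho\,\Bigl(1 + \sum_{n \ge 2} B_n(T)\,\rho^{\,n-1}\Bigr),$$

where the B_n(T) are the virial coefficients; the convergence question raised in the abstract concerns the radius of convergence of this series in ρ.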

2016
Ekrem Savaş

Our goal in this work is to introduce the notion of [V, λ](I)₂-summability and ideal λ-double statistical convergence of order α with respect to the intuitionistic fuzzy norm (μ, ν). We also make some observations about these spaces and prove some inclusion relations.
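
As background on the terminology (these are the classical single-sequence notions; the paper works with double sequences, order α, and an intuitionistic fuzzy norm), a real sequence (x_k) is statistically convergent to L if for every ε > 0

$$\lim_{n\to\infty}\frac{1}{n}\,\bigl|\{\,k \le n : |x_k - L| \ge \varepsilon\,\}\bigr| = 0,$$

and, for an ideal I of subsets of the natural numbers, (x_k) is I-convergent (ideal convergent) to L if {k : |x_k − L| ≥ ε} ∈ I for every ε > 0. Statistical convergence is the special case in which I consists of the sets of natural density zero.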

2008
Xiaotong Shen Lifeng Wang

In this article, we study rates of convergence of the generalization error of multi-class margin classifiers. In particular, we develop an upper bound theory quantifying the generalization error of various large margin classifiers. The theory permits a treatment of general margin losses, convex or nonconvex, in the presence or absence of a dominating class. Three main results are established. First...

[Chart: number of search results per publication year]