Search results for: ε quasi chebyshev subspace
Number of results: 120394
Let 1 < p ≠ 2 < ∞, ε > 0, and let T : lp(l2) → Lp[0, 1] be an isomorphism into. Then there is a subspace Y ⊂ lp(l2), (1 + ε)-isomorphic to lp(l2), such that T|Y is a (1 + ε)-isomorphism and T(Y) is Kp-complemented in Lp[0, 1], with Kp depending only on p. Moreover, Kp ≤ (1 + ε)γp if p > 2 and Kp ≤ (1 + ε)γ_{p/(p−1)} if 1 < p < 2, where γr is the Lr norm of a standard Gaussian variable.
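Rendered as a display (reading the flattened subscript γp/(p−1) as the conjugate exponent p′ = p/(p − 1), which is one natural interpretation since the projection constant dualizes between Lp and Lp′), the bound is:

```latex
K_p \;\le\;
\begin{cases}
(1+\varepsilon)\,\gamma_p, & p > 2,\\[2pt]
(1+\varepsilon)\,\gamma_{p/(p-1)}, & 1 < p < 2,
\end{cases}
\qquad
\gamma_r := \bigl(\mathbb{E}\,|g|^r\bigr)^{1/r},\quad g \sim \mathcal{N}(0,1).
```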
In this work, we develop fast algorithms for computations involving finite expansions in Gegenbauer polynomials. We describe a method to convert a linear combination of Gegenbauer polynomials up to degree n into a representation in a different family of Gegenbauer polynomials using, in general, O(n log(1/ε)) arithmetic operations, where ε is a prescribed accuracy. Special cases where source or targe...
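For reference, the sketch below shows what such a conversion computes, using a naive O(n²) baseline rather than the fast O(n log(1/ε)) algorithm the snippet describes: it re-expands a series in C_k^(λ) in the family C_k^(μ) via Gauss-Gegenbauer quadrature. The function name and interface are illustrative, not from the paper.

```python
# Naive O(n^2) baseline for converting a Gegenbauer expansion in C_k^(lam)
# into the family C_k^(mu). Requires mu > -1/2 and mu != 0 for the quadrature rule.
import numpy as np
from scipy.special import eval_gegenbauer, roots_gegenbauer

def convert_gegenbauer(coeffs, lam, mu):
    n = len(coeffs) - 1
    # Gauss-Gegenbauer quadrature with n+1 nodes is exact up to degree 2n+1.
    x, w = roots_gegenbauer(n + 1, mu)
    # Evaluate the source expansion at the quadrature nodes.
    f = sum(c * eval_gegenbauer(k, lam, x) for k, c in enumerate(coeffs))
    out = np.empty(n + 1)
    for k in range(n + 1):
        Ck = eval_gegenbauer(k, mu, x)
        # Project onto C_k^(mu) w.r.t. the weight (1 - x^2)^(mu - 1/2).
        out[k] = np.dot(w, f * Ck) / np.dot(w, Ck * Ck)
    return out

# usage: re-expand 1 + 0.5*C_1^(0.5) + 0.25*C_2^(0.5) in the mu = 1.5 family
print(convert_gegenbauer(np.array([1.0, 0.5, 0.25]), lam=0.5, mu=1.5))
```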
Proof. For the m = d case, first rotate the subspace W to become span(e1, . . . , ed) (via multiplication by an orthogonal matrix), and then project to the first d coordinates. This clearly preserves norms in W exactly. Now, assume there is an ε-subspace embedding Π ∈ R^{m×n} with m < d. Then the map Π : W → R^m has a nontrivial kernel; in particular, there is some w ∈ W, w ≠ 0, such that Πw = 0. On ...
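The snippet is cut off, but under the usual definition of an ε-subspace embedding, (1 − ε)‖w‖₂ ≤ ‖Πw‖₂ ≤ (1 + ε)‖w‖₂ for all w ∈ W, the argument concludes in one line:

```latex
(1-\varepsilon)\,\|w\|_2 \;\le\; \|\Pi w\|_2 \;=\; 0
\quad\Longrightarrow\quad w = 0,
```

contradicting w ≠ 0, so any ε-subspace embedding of a d-dimensional subspace must have m ≥ d.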
In this paper we propose a subspace limited memory quasi-Newton method for solving large-scale optimization with simple bounds on the variables. The limited memory quasi-Newton method is used to update the variables with indices outside of the active set, while the projected gradient method is used to update the active variables. The search direction consists of three parts: a subspace quasi-Ne...
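The snippet describes a direction built from a projected-gradient part on the active variables and a limited-memory quasi-Newton part on the free variables. A minimal sketch of that idea (not the paper's exact three-part direction; a memoryless quasi-Newton scaling stands in for a full limited-memory update, and all names are illustrative) could look like:

```python
import numpy as np

def box_project(x, lo, hi):
    return np.minimum(np.maximum(x, lo), hi)

def active_set(x, g, lo, hi, tol=1e-10):
    """Variables at a bound whose negative gradient points outside the box."""
    return ((x <= lo + tol) & (g > 0)) | ((x >= hi - tol) & (g < 0))

def subspace_qn_direction(x, g, s, y, lo, hi):
    """Search direction: -gamma*g on the free variables (gamma is the standard
    quasi-Newton initial scaling from the last (s, y) pair), -g on the active ones."""
    act = active_set(x, g, lo, hi)
    gamma = 1.0
    if s is not None and y @ s > 1e-12:
        gamma = (s @ y) / (y @ y)
    return np.where(act, -g, -gamma * g)

def minimize_bounds(f, grad, x0, lo, hi, iters=200):
    x = box_project(np.asarray(x0, float), lo, hi)
    s = y = None
    for _ in range(iters):
        g = grad(x)
        d = subspace_qn_direction(x, g, s, y, lo, hi)
        # Projected backtracking (Armijo) line search; projection keeps
        # active variables at their bounds.
        t, fx = 1.0, f(x)
        while True:
            xn = box_project(x + t * d, lo, hi)
            if f(xn) <= fx + 1e-4 * g @ (xn - x) or t < 1e-12:
                break
            t *= 0.5
        s, y = xn - x, grad(xn) - g
        x = xn
    return x

# usage: minimize 0.5*||x - c||^2 subject to 0 <= x <= 1
c = np.array([2.0, -1.0, 0.3])
x = minimize_bounds(lambda x: 0.5 * np.sum((x - c) ** 2), lambda x: x - c,
                    np.zeros(3), np.zeros(3), np.ones(3))
print(x)   # roughly [1.0, 0.0, 0.3], the projection of c onto the box
```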
We give new constructions of two classes of algebraic code families which are efficiently list decodable with small output list size from a fraction 1 − R − ε of adversarial errors, where R is the rate of the code, for any desired positive constant ε. The alphabet size depends only on ε and is nearly optimal. The first class of codes is obtained by folding algebraic-geometric codes using automorph...
It is shown that the four vector extrapolation methods (minimal polynomial extrapolation, reduced rank extrapolation, modified minimal polynomial extrapolation, and the topological epsilon algorithm), when applied to linearly generated vector sequences, are Krylov subspace methods and are equivalent to some well-known conjugate gradient-type methods. A unified recursive method that includes the con...
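To illustrate the kind of equivalence described here, the sketch below applies reduced rank extrapolation (RRE) to a linearly generated sequence x_{k+1} = A x_k + b; in exact arithmetic RRE then reproduces the fixed point (I − A)^{-1} b, just as a Krylov subspace method for (I − A)x = b would. The routine is a generic textbook formulation, not this paper's unified recursion.

```python
import numpy as np

def rre(X):
    """Reduced rank extrapolation from the columns x_0, ..., x_{k+1} of X:
    minimize || sum_i gamma_i (x_{i+1} - x_i) || over sum_i gamma_i = 1 and
    return sum_i gamma_i x_i.  Eliminating the constraint reduces this to an
    ordinary least-squares problem in the second differences."""
    U = np.diff(X, axis=1)                    # first differences  u_i
    W = np.diff(U, axis=1)                    # second differences w_i
    beta = np.linalg.lstsq(W, U[:, 0], rcond=None)[0]
    return X[:, 0] - U[:, :-1] @ beta

# Linearly generated sequence x_{k+1} = A x_k + b with limit (I - A)^{-1} b.
rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n))
A = 0.3 * M / np.linalg.norm(M, 2)            # operator norm 0.3, so the iteration converges
b = rng.standard_normal(n)
X = np.zeros((n, n + 2))
for k in range(n + 1):
    X[:, k + 1] = A @ X[:, k] + b
x_star = np.linalg.solve(np.eye(n) - A, b)
print(np.linalg.norm(rre(X) - x_star))        # close to machine precision
```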
It follows from [1] and [7] that any closed n-codimensional subspace (for an integer n ≥ 1) of a real Banach space X is the kernel of a projection X → X of norm less than f(n) + ε (ε > 0 arbitrary), where f(n) = (2 + (n − 1)√(n + 2)) / (n + 1). We have f(n) < √n for n > 1, and f(n) = √n − 1/√n + O(...
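Reading the flattened expression as a single fraction, the bound and the asymptotics quoted above become (the remainder term is truncated in the snippet; expanding the fraction gives O(1/n)):

```latex
f(n) \;=\; \frac{2 + (n-1)\sqrt{n+2}}{n+1},
\qquad
f(n) < \sqrt{n}\ \ (n>1),
\qquad
f(n) \;=\; \sqrt{n} - \frac{1}{\sqrt{n}} + O\!\left(\frac{1}{n}\right).
```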
Solving the electronic structure problem for nanoscale systems remains a computationally challenging problem. The numerous degrees of freedom, both electronic and nuclear, make the problem impossible to solve without some effective approximations. Here we illustrate some advances in algorithm development for solving the Kohn-Sham eigenvalue problem, i.e., we solve the electronic structure problem ...
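The snippet does not name the eigensolver, but a common choice for the Kohn-Sham eigenvalue problem (and one in the spirit of this page's query) is Chebyshev-filtered subspace iteration. A minimal dense-matrix sketch, with illustrative parameter choices and names, is:

```python
import numpy as np

def cheb_filter(H, V, degree, a, b):
    """Apply p(H) V with p(x) = T_degree((2x - (a+b)) / (b - a)), which damps
    eigenvalues inside [a, b] and amplifies those below a."""
    e = (b - a) / 2.0                            # half-width of the damped interval
    c = (b + a) / 2.0                            # its center
    Y_prev = V
    Y = (H @ V - c * V) / e
    for _ in range(2, degree + 1):               # three-term Chebyshev recurrence
        Y_prev, Y = Y, 2.0 * (H @ Y - c * Y) / e - Y_prev
    return Y

def cheb_subspace_iteration(H, k, extra=3, degree=8, iters=25):
    n = H.shape[0]
    m = k + extra                                # a few buffer vectors aid convergence
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((n, m)))
    b = np.max(np.sum(np.abs(H), axis=1))        # Gershgorin upper bound on the spectrum
    for _ in range(iters):
        Hs = Q.T @ H @ Q                         # Rayleigh-Ritz on the current block
        ritz, S = np.linalg.eigh(Hs)
        Q = Q @ S
        a = ritz[-1] + 1e-3                      # damp everything above the current block
        Q, _ = np.linalg.qr(cheb_filter(H, Q, degree, a, b))
    ritz, S = np.linalg.eigh(Q.T @ H @ Q)        # final Rayleigh-Ritz
    return ritz[:k], (Q @ S)[:, :k]

# Usage on a small random symmetric matrix.
M = np.random.default_rng(1).standard_normal((200, 200))
H = (M + M.T) / 2
vals, vecs = cheb_subspace_iteration(H, k=5)
print(vals)   # approximations to the 5 lowest eigenvalues of H
```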