Accurate Polynomial Evaluation in Floating Point Arithmetic
Authors
Abstract
One of the three main processes associated with polynomials is evaluation, the other two being interpolation and root finding. Higham [1, chap. 5] devotes an entire chapter to polynomials and, more specifically, to polynomial evaluation. The practical interest of the Horner scheme is justified by the small backward error it introduces when evaluated in floating point arithmetic: it is well known that the computed evaluation of p(x) is the exact value at x of a polynomial obtained by perturbing each coefficient of p by a relative amount of at most 2nu, where n denotes the polynomial degree and u the finite precision of the computation [1]. Nevertheless, the computed result can be arbitrarily less accurate than the working precision u when the evaluation of p(x) is ill-conditioned. This is the case, for example, in the neighborhood of multiple roots, where all the digits, or even the order of magnitude, of the computed value of p(x) can be wrong. Numerous multiprecision libraries are available when the computing precision u is not sufficient to guarantee a prescribed accuracy. Fixed-length expansions, such as those in “double-double” libraries [2], are practical and effective solutions to simulate twice the IEEE-754 double precision. These fixed-length expansions are currently embedded in major developments such as the new extended and mixed precision BLAS [2]. On the other hand, an approach similar to Kahan’s compensated summation (see [1, p. 83–88]) can be used, as in [3], where efficient algorithms are proposed for the computation of summations and dot products in a specified precision.

In this paper we present an accurate and fast compensated algorithm to evaluate univariate polynomials in floating point arithmetic. By accurate, we mean that the accuracy of our compensated Horner scheme is similar to that of the Horner scheme computed in twice the working precision, for example using fixed-length expansions; of course the accuracy of the result still depends on the condition number. By fast, we mean that the algorithm should run at least twice as fast as the fixed-length expansion counterparts that produce the same output accuracy. We constrain the computation to be performed in the same precision as the data, for example the IEEE-754 double precision. Thanks to an a priori error analysis, we provide a sufficient criterion on the condition number of the polynomial evaluation that ensures our compensated Horner scheme delivers a faithful rounding of the exact evaluation. By faithful rounding we mean that the computed result is one of the two floating point neighbours of the exact result. We also derive a validated running error analysis of our algorithm, which leads to a sharper enclosure of the computed result. In particular, our numerical experiments show that the computed evaluation can be proved to be faithfully rounded at run time, as long as the condition number is smaller than the inverse of the working precision.
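As a concrete illustration of the approach described above, the following is a minimal Python sketch of a compensated Horner scheme built from the classical error-free transformations TwoSum (Knuth) and TwoProduct (Dekker, with Veltkamp splitting). It is an illustrative sketch of the general technique, not the authors' reference implementation; the helper names (two_sum, two_product, comp_horner) and the worked example with the expanded form of (x-1)^5 are chosen here for illustration.

    # Illustrative sketch of a compensated Horner scheme in IEEE-754 double precision
    # (Python floats). Function names and the test polynomial are chosen for this example.

    def two_sum(a, b):
        # Knuth's TwoSum: s = fl(a + b) and a + b = s + e exactly.
        s = a + b
        t = s - a
        e = (a - (s - t)) + (b - t)
        return s, e

    def split(a):
        # Veltkamp splitting of a double into high and low parts (2**27 + 1 = 134217729).
        c = 134217729.0 * a
        hi = c - (c - a)
        return hi, a - hi

    def two_product(a, b):
        # Dekker's TwoProduct: p = fl(a * b) and a * b = p + e exactly
        # (barring overflow and underflow).
        p = a * b
        ah, al = split(a)
        bh, bl = split(b)
        e = ((ah * bh - p) + ah * bl + al * bh) + al * bl
        return p, e

    def horner(coeffs, x):
        # Classic Horner scheme; coeffs run from the leading coefficient to the constant term.
        r = coeffs[0]
        for a in coeffs[1:]:
            r = r * x + a
        return r

    def comp_horner(coeffs, x):
        # Compensated Horner scheme: the rounding error of each product and each sum
        # of the Horner recurrence is captured exactly and accumulated in a correction
        # term c, which is added to the classic result at the end.
        r = coeffs[0]
        c = 0.0
        for a in coeffs[1:]:
            p, pi = two_product(r, x)
            r, sigma = two_sum(p, a)
            c = c * x + (pi + sigma)
        return r + c

    if __name__ == "__main__":
        # p(t) = (t - 1)**5 in expanded form, evaluated close to its multiple root t = 1,
        # where the evaluation is severely ill-conditioned. The exact value at
        # x = 1 + 2**-10 is 2**-50; the classic Horner result is expected to lose all
        # significant digits here, while the compensated result should stay close to it.
        coeffs = [1.0, -5.0, 10.0, -10.0, 5.0, -1.0]
        x = 1.0 + 2.0 ** -10
        print("classic Horner     :", horner(coeffs, x))
        print("compensated Horner :", comp_horner(coeffs, x))
        print("exact value        :", 2.0 ** -50)

On hardware with a fused multiply-add, the splitting-based TwoProduct can be replaced by the two-operation variant p = fl(a*b), e = fma(a, b, -p), which is one reason compensated schemes can be very fast in practice.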
Similar resources
Error-free transformations in real and complex floating point arithmetic
Error-free transformation is a concept that makes it possible to compute accurate results within a floating point arithmetic. Up to now, it has only been studied for real floating point arithmetic. In this short note, we recall the known error-free transformations for real arithmetic and we propose some new error-free transformations for complex floating point arithmetic. This will make it possib...
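As an illustration of the error-free transformation idea for real and complex arithmetic, here is a small sketch: the real TwoSum is Knuth's classical algorithm, and the complex addition version simply applies it componentwise. The specific new complex transformations proposed in the note above (in particular for products) are not reproduced here, and the function names are chosen for this example.

    # Error-free transformations: a floating point operation plus its exact rounding error.

    def two_sum(a, b):
        # Knuth's TwoSum for real doubles: s = fl(a + b) and a + b = s + e exactly.
        s = a + b
        t = s - a
        e = (a - (s - t)) + (b - t)
        return s, e

    def fast_two_sum(a, b):
        # Dekker's FastTwoSum: same identity, cheaper, but requires |a| >= |b|.
        s = a + b
        return s, b - (s - a)

    def complex_two_sum(x, y):
        # Complex addition is performed componentwise in floating point, so an
        # error-free transformation for it follows from two real TwoSum calls:
        # x + y = s + e exactly, with s the computed complex sum.
        sr, er = two_sum(x.real, y.real)
        si, ei = two_sum(x.imag, y.imag)
        return complex(sr, si), complex(er, ei)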
Implementation of binary floating-point arithmetic on embedded integer processors - Polynomial evaluation-based algorithms and certified code generation
Today some embedded systems still do not integrate their own floating-point unit, for area, cost, or energy consumption constraints. However, this kind of architecture is widely used in application domains that are highly demanding on floating-point calculations (multimedia, audio and video, or telecommunications). To compensate for this lack of floating-point hardware, floating-point arithmetic ...
Accurate summation, dot product and polynomial evaluation in complex floating point arithmetic
Article history: Available online 30 March 2012
A Floating-Point Processor for Fast and Accurate Sine/Cosine Evaluation
A VLSI architecture for fast and accurate floating-point sine/cosine evaluation is presented, combining floating-point and simple fixed-point arithmetic. The algorithm implemented by the architecture is based on second-order polynomial interpolation within an approximation interval which is partitioned into regions of unequal length. The exploitation of certain properties of the trigonometric f...
Compensated Horner Scheme
We present a compensated Horner scheme, that is, an accurate and fast algorithm to evaluate univariate polynomials in floating point arithmetic. The accuracy of the computed result is similar to that given by the Horner scheme computed in twice the working precision. This compensated Horner scheme runs at least as fast as existing implementations producing the same output accuracy. We also pr...
More Instruction Level Parallelism Explains the Actual Efficiency of Compensated Algorithms
The compensated Horner algorithm and the Horner algorithm with double-double arithmetic improve the accuracy of polynomial evaluation in IEEE-754 floating point arithmetic. Both yield a polynomial evaluation as accurate as if it were computed with the classic Horner algorithm in twice the working precision. Both algorithms also share the same low-level computation of the floating point rounding ...
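For comparison with the compensated scheme sketched earlier, the following is a simplified sketch of a Horner evaluation whose running value is kept as a double-double pair, in the spirit of the double-double libraries mentioned above; production implementations such as [2] use more careful renormalization, and the helper names here are illustrative.

    # Simplified double-double Horner evaluation, for comparison with the compensated
    # scheme. The running value is kept as an unevaluated pair (hi, lo) of doubles.

    def two_sum(a, b):
        # Knuth's TwoSum: s = fl(a + b) and a + b = s + e exactly.
        s = a + b
        t = s - a
        return s, (a - (s - t)) + (b - t)

    def fast_two_sum(a, b):
        # Renormalization step; assumes |a| >= |b|.
        s = a + b
        return s, b - (s - a)

    def split(a):
        # Veltkamp splitting (2**27 + 1 = 134217729 for IEEE-754 doubles).
        c = 134217729.0 * a
        hi = c - (c - a)
        return hi, a - hi

    def two_product(a, b):
        # Dekker's TwoProduct: p = fl(a * b) and a * b = p + e exactly.
        p = a * b
        ah, al = split(a)
        bh, bl = split(b)
        return p, ((ah * bh - p) + ah * bl + al * bh) + al * bl

    def dd_horner(coeffs, x):
        # Horner recurrence on (hi, lo): multiply the pair by x, then add the next
        # coefficient, renormalizing after each step.
        hi, lo = coeffs[0], 0.0
        for a in coeffs[1:]:
            p_hi, p_lo = two_product(hi, x)     # (hi, lo) * x
            p_lo += lo * x
            hi, lo = fast_two_sum(p_hi, p_lo)
            s_hi, s_lo = two_sum(hi, a)         # + a
            s_lo += lo
            hi, lo = fast_two_sum(s_hi, s_lo)
        return hi + lo                          # round back to working precision

Both this sketch and the compensated scheme rely on the same error-free transformations; the efficiency difference discussed above comes from how much instruction-level parallelism each formulation exposes.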
Publication date: 2006