A root-finding algorithm is a numerical method, or algorithm, for finding a value x such that f(x) = 0, for a given function f. Such an x is called a root of the function f.

This article is concerned with finding scalar, real or complex roots, approximated as floating point numbers. Finding integer roots or exact algebraic roots are separate problems, whose algorithms have little in common with those discussed here. (See Diophantine equation for integer roots.)
1 Bracketing methods

Bracketing methods determine successively smaller intervals that contain a root. The simplest is the bisection method: it requires an interval [a, b] such that f(a) and f(b) have opposite signs, so that the interval is guaranteed to contain a root. Although it is reliable, it converges slowly, gaining one bit of accuracy with each iteration.
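As a sketch of the idea (not code from the article; the tolerance and stopping rule are my own choices), a minimal bisection loop in Python might look like:

```python
def bisect(f, a, b, tol=1e-12):
    """Find a root of f in [a, b], given that f(a) and f(b) have opposite signs."""
    fa, fm = f(a), None
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        if fm == 0:
            return m
        if fa * fm < 0:        # root lies in [a, m]
            b = m
        else:                  # root lies in [m, b]
            a, fa = m, fm
    return (a + b) / 2

# each iteration halves the bracket: one bit of accuracy per step
root = bisect(lambda x: x * x - 2, 0.0, 2.0)
```

Starting from the interval [0, 2], reaching a width of 1e-12 takes about 41 halvings, which illustrates the slow but reliable convergence described above.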
2 Open methods
2.2 Secant method
2.3 Interpolation
The secant method also arises if one approximates the unknown function f by linear interpolation. When quadratic
interpolation is used instead, one arrives at Muller's method. It converges faster than the secant method. A
particular feature of this method is that the iterates xn
may become complex.
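The linear-interpolation view of the secant method can be sketched as follows (an illustration, not code from the article; the starting values and step-size stopping rule are my own choices):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: replace f by the line through the last two iterates
    and take the root of that line as the next iterate."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:                      # secant line is horizontal; give up
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
        if abs(x1 - x0) < tol:
            break
    return x1

root = secant(lambda x: x * x - 2, 1.0, 2.0)
```

Replacing the interpolating line by a parabola through the last three iterates gives Muller's method, whose root formula can produce the complex iterates mentioned above.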
Sidi's method allows for interpolation with an arbitrarily high degree polynomial. The higher the degree of the interpolating polynomial, the faster the convergence; Sidi's method allows for convergence with an order arbitrarily close to 2.

The closed-form solutions for degrees three (cubic polynomial) and four (quartic polynomial) are complicated and require much more care; consequently, numerical methods such as Bairstow's method may be easier to use. Fifth-degree (quintic) and higher-degree polynomials do not have a general solution according to the Abel–Ruffini theorem (1824, 1799).

Once a root has been found, the polynomial can be divided by the corresponding linear factor and the process repeated on the quotient (deflation); this division must be done with care to ensure numerical stability. The iterative scheme is numerically unstable: the approximation errors accumulate during the successive factorizations, so that the last roots are determined with a polynomial that deviates widely from a factor of the original polynomial. To reduce this error, it is advisable to find the roots in increasing order of magnitude.

Wilkinson's polynomial illustrates that high precision may be necessary when computing the roots of a polynomial given its coefficients: the problem of finding the roots from the coefficients is in general ill-conditioned.
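To make the deflation step concrete, here is a small sketch (the helper name is my own; not code from the article) of synthetic division of a polynomial by a linear factor x - r:

```python
def deflate(coeffs, r):
    """Divide the polynomial (coefficients highest-degree first) by (x - r)
    using synthetic division; the remainder p(r) is discarded."""
    quotient = [coeffs[0]]
    for c in coeffs[1:-1]:
        quotient.append(c + r * quotient[-1])
    return quotient

# (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6: deflate by the smallest
# root first, as advised above, to limit the error accumulated in the
# later factorizations.
q = deflate([1, -6, 11, -6], 1.0)
```

When r is only an approximate root, the discarded remainder p(r) is nonzero, which is exactly the error that accumulates over successive deflations.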
2.4 Inverse interpolation
The appearance of complex values in interpolation methods can be avoided by interpolating the inverse of f,
resulting in the inverse quadratic interpolation method.
Again, convergence is asymptotically faster than the secant method, but inverse quadratic interpolation often behaves poorly when the iterates are not close to the root.
The simplest method to find a single root quickly is Newton's method. One can use Horner's method twice to efficiently evaluate the value of the polynomial function and its first derivative; this combination is called the Birge–Vieta method. It provides quadratic convergence for simple roots at the cost of two polynomial evaluations per step.
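A minimal sketch of this combination (function names and stopping rule are my own, not from the article):

```python
def horner2(coeffs, x):
    """One pass of Horner's scheme for p(x) and one for p'(x),
    coefficients given highest-degree first."""
    p, dp = coeffs[0], 0.0
    for c in coeffs[1:]:
        dp = dp * x + p       # Horner recurrence for the derivative
        p = p * x + c         # Horner recurrence for the value
    return p, dp

def birge_vieta(coeffs, x0, tol=1e-12, max_iter=100):
    """Newton's method on a polynomial: two Horner evaluations per step,
    quadratic convergence near a simple root."""
    x = x0
    for _ in range(max_iter):
        p, dp = horner2(coeffs, x)
        if dp == 0:
            break
        x_new = x - p / dp
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = birge_vieta([1, 0, -2], 1.5)   # p(x) = x^2 - 2
```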
Another class of methods is based on translating the problem of finding polynomial roots into the problem of finding eigenvalues of the companion matrix of the polynomial. In principle, one can use any eigenvalue algorithm to find the roots of the polynomial. However, for efficiency reasons one prefers methods that employ the structure of the matrix, that is, can be implemented in matrix-free form. Among these methods is the power method, whose application to the transpose of the companion matrix is the classical Bernoulli's method of finding the root of greatest modulus.
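As a sketch of that connection (assuming a polynomial of degree at least 2 whose root of greatest modulus is simple and strictly dominant), the power iteration on the transposed companion matrix amounts to running the linear recurrence whose characteristic polynomial is p:

```python
def bernoulli_dominant_root(coeffs, iters=200):
    """Bernoulli's method: iterate the linear recurrence attached to the
    polynomial (coefficients highest-degree first, degree >= 2); the
    ratios of successive values converge to the dominant root."""
    a0, rest = coeffs[0], coeffs[1:]
    n = len(rest)
    s = [0.0] * (n - 1) + [1.0]                 # arbitrary starting values
    for _ in range(iters):
        nxt = -sum(a * v for a, v in zip(rest, reversed(s))) / a0
        s = s[1:] + [nxt]
    return s[-1] / s[-2]

# x^2 - 3x + 2 has roots 1 and 2; the ratios converge to 2.
root = bernoulli_dominant_root([1, -3, 2])
```

A practical implementation would rescale the sequence periodically, since the raw values grow like powers of the dominant root and can overflow.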
4.3 Finding all roots at once
The simple Durand–Kerner and the slightly more complicated Aberth method simultaneously find all of the roots using only simple complex number arithmetic. Accelerated algorithms for multi-point evaluation and interpolation similar to the fast Fourier transform can help speed them up for large degrees of the polynomial. It is advisable to choose an asymmetric, but evenly distributed, set of initial points.
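A compact sketch of the Durand–Kerner iteration (the initial points and iteration count are my own choices, and no safeguards against coinciding iterates are included):

```python
def durand_kerner(coeffs, iters=100):
    """Approximate all roots of a polynomial simultaneously using only
    complex arithmetic (coefficients highest-degree first)."""
    n = len(coeffs) - 1
    monic = [c / coeffs[0] for c in coeffs]

    def p(x):
        v = 0j
        for c in monic:
            v = v * x + c
        return v

    # a customary asymmetric but roughly evenly spread set of start points
    roots = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(iters):
        new = []
        for i, r in enumerate(roots):
            denom = 1 + 0j
            for j, s in enumerate(roots):
                if i != j:
                    denom *= r - s
            new.append(r - p(r) / denom)
        roots = new
    return roots

roots = durand_kerner([1, -3, 2])     # x^2 - 3x + 2 -> roots near 1 and 2
```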
Another method of this type is the Dandelin–Graeffe method (sometimes also falsely ascribed to Lobachevsky), which uses polynomial transformations to repeatedly and implicitly square the roots. This greatly magnifies variances in the roots. Applying Viète's formulas, one obtains easy approximations for the modulus of the roots, and with some more effort, for the roots themselves.
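One root-squaring step can be sketched as follows (a pure-Python illustration; a real implementation must cope with the rapid growth of the coefficients):

```python
def graeffe_step(coeffs):
    """One Dandelin-Graeffe step: from p (coefficients highest-degree first),
    return the polynomial whose roots are the squares of the roots of p,
    using q(x^2) = (-1)^n * p(x) * p(-x)."""
    n = len(coeffs) - 1
    neg = [c * (-1) ** (n - k) for k, c in enumerate(coeffs)]    # p(-x)
    prod = [0.0] * (2 * n + 1)
    for i, a in enumerate(coeffs):                               # p(x) * p(-x)
        for j, b in enumerate(neg):
            prod[i + j] += a * b
    sign = (-1) ** n
    return [sign * prod[2 * k] for k in range(n + 1)]            # even degrees only

# x^2 - 3x + 2 has roots 1 and 2; one step yields x^2 - 5x + 4, roots 1 and 4.
q = graeffe_step([1, -3, 2])
```

After m steps the roots have been raised to the power 2^m, and Viète's formulas applied to the transformed coefficients yield the modulus estimates mentioned above.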
4.4 Exclusion and enclosure methods
Several fast tests exist that tell if a segment of the real line or a region of the complex plane contains no roots. By bounding the modulus of the roots and recursively subdividing the initial region indicated by these bounds, one can isolate small regions that may contain roots and then apply other methods to locate them exactly.

All these methods require finding the coefficients of shifted and scaled versions of the polynomial. For large degrees, FFT-based accelerated methods become viable.
4.5 Real-root isolation

For real roots, Sturm's theorem and Descartes' rule of signs, with its extension in the Budan–Fourier theorem, allow the real roots to be isolated in intervals with rational endpoints. The test of an interval for real roots using Budan's theorem is computationally simple but may yield false positive results. However, these will be reliably detected in the subsequent transformation and refinement of the interval. The test based on Sturm chains is computationally more involved but always gives the exact number of real roots in the interval.
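Counting sign changes, the core operation behind Descartes' rule of signs, is straightforward (a sketch, not code from the article):

```python
def sign_changes(coeffs):
    """Count sign changes in a coefficient sequence, skipping zeros.
    By Descartes' rule of signs this bounds the number of positive real
    roots and equals it up to an even difference."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3): signs + - + - give 3 changes,
# matching its three positive roots.
changes = sign_changes([1, -6, 11, -6])
```

Applying the count to shifted and scaled versions of the polynomial is what turns this bound into an isolation procedure.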
The algorithm for isolating the roots, using Descartes' rule of signs and Vincent's theorem, had been originally called the modified Uspensky's algorithm by its inventors Collins and Akritas.[1] After going through names like Collins–Akritas method and Descartes' method (too confusing if one considers Fourier's article[2]), it was finally François Boulier, of Lille University, who gave it the name Vincent–Collins–Akritas (VCA) method,[3] p. 24, based on Uspensky's method not existing[4] and neither does Descartes' method.[5] This algorithm has been improved by Rouillier and Zimmermann,[6] and the resulting implementation is, to date, the fastest bisection method. It has the same worst-case complexity as the Sturm algorithm, but is almost always much faster. It is the default algorithm of the Maple root-finding function fsolve. Another method based on Vincent's theorem is the Vincent–Akritas–Strzeboński (VAS) method;[7] it has been shown[8] that the VAS (continued fractions) method is faster than the fastest implementation of the VCA (bisection) method,[6] which was independently confirmed elsewhere;[9] more precisely, for the Mignotte polynomials of high degree, VAS is about 50,000 times faster than the fastest implementation of VCA. VAS is the default algorithm for root isolation in Mathematica, Sage, SymPy, and Xcas. See Budan's theorem for a description of the historical background of these methods. For a comparison between Sturm's method and VAS, use the functions realroot(poly) and time(realroot(poly)) of Xcas.
5 See also
nth root algorithm
Broyden's method
Multiplicity (mathematics)
Polynomial greatest common divisor
Polynomial
Graeffe's method
Cryptographically secure pseudorandom number generator – a class of functions designed specifically to be unsolvable by root-finding algorithms
System of polynomial equations – root-finding algorithms in the multivariate case
6 Sources

Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Chapter 9. Root Finding and Nonlinear Sets of Equations". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8.
7 External links
Animations for Fixed Point Iteration
MPSolve
References
Notes