Richardson extrapolation acts as a powerful convergence accelerator in numerical methods. Instead of relying solely on extremely fine discretizations—which can be computationally expensive—it combines results from coarser simulations to estimate the zero-step (continuum) limit.
This idea appears in several numerical techniques (e.g., Romberg integration, mesh convergence studies in engineering), where it can significantly improve accuracy at a modest additional cost.
When the error has a regular asymptotic expansion in the step size, Richardson extrapolation can dramatically reduce the error—sometimes by an order of magnitude or more—without requiring prohibitively fine meshes.
Consider the following slowly converging series (see this post: Zeta summation: Basel problem):

$$ S = \sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6} \approx 1.6449341 $$
and calculate its corresponding Richardson extrapolation using up to 4-term partial sums:

$$ A_n = \sum_{k=1}^{n} \frac{1}{k^2}, \qquad A_1 = 1,\quad A_2 = \frac{5}{4},\quad A_3 = \frac{49}{36},\quad A_4 = \frac{205}{144} \approx 1.4236 $$
The Richardson extrapolation is (see this post: Richardson extrapolation):

$$ S \approx \sum_{k=0}^{N} A_{n+k}\,\frac{(n+k)^{N}\,(-1)^{k+N}}{k!\,(N-k)!} $$
in this case, with $n = 1$ and $N = 3$:

$$ S \approx -\frac{A_1}{6} + 4A_2 - \frac{27A_3}{2} + \frac{32A_4}{3} = \frac{355}{216} \approx 1.64352 $$
We have:

$$ \left|\frac{355}{216} - \frac{\pi^2}{6}\right| \approx 1.4 \times 10^{-3}, \qquad \left|A_4 - \frac{\pi^2}{6}\right| \approx 2.2 \times 10^{-1} $$
Using a partial sum of only 4 terms, Richardson extrapolation reduced the error by more than two orders of magnitude (from roughly 0.22 to 0.0014), considerably accelerating the convergence of the series.
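The 4-term calculation can be reproduced with a short script. This is a minimal sketch, assuming the standard Richardson formula for sequence acceleration, $S \approx \sum_{k=0}^{N} A_{n+k}\,(n+k)^N(-1)^{k+N}/(k!\,(N-k)!)$; the function and variable names are illustrative, not from the original posts.

```python
from math import factorial, pi

def partial_sums(n_terms):
    """Partial sums A_1, ..., A_n of the Basel series sum 1/k^2."""
    sums, total = [], 0.0
    for k in range(1, n_terms + 1):
        total += 1.0 / k**2
        sums.append(total)
    return sums

def richardson(A, n=1):
    """Richardson extrapolation of the partial sums A_n, ..., A_{n+N}.

    Implements S ~ sum_{k=0}^{N} A_{n+k} (n+k)^N (-1)^(k+N) / (k! (N-k)!),
    where `A` is the list [A_n, A_{n+1}, ..., A_{n+N}].
    """
    N = len(A) - 1
    return sum(
        A[k] * (n + k)**N * (-1)**(k + N) / (factorial(k) * factorial(N - k))
        for k in range(N + 1)
    )

A = partial_sums(4)   # [1.0, 1.25, 1.3611..., 1.4236...]
est = richardson(A)   # 355/216, about 1.64352
exact = pi**2 / 6     # about 1.64493
print(f"A_4 = {A[-1]:.7f}, Richardson = {est:.7f}, pi^2/6 = {exact:.7f}")
```

Running it shows the 4-term extrapolation landing within about $1.4 \times 10^{-3}$ of $\pi^2/6$, while the raw partial sum $A_4$ is still off by about $0.22$.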