Month: March 2026

  • Richardson extrapolation: example

    Richardson extrapolation acts as a powerful convergence accelerator in numerical methods. Instead of relying solely on extremely fine discretizations—which can be computationally expensive—it combines results from coarser simulations to estimate the zero-step (continuum) limit.

    This idea appears in several numerical techniques (e.g., Romberg integration, mesh convergence studies in engineering), where it can significantly improve accuracy at a modest additional cost.

    When the error behaves regularly, Richardson extrapolation can dramatically reduce the error—sometimes by an order of magnitude or more—without requiring prohibitively fine meshes.

    Consider the following slowly converging series (see the post Zeta summation: Basel problem):

    \displaystyle 1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \dots = \frac{\pi^2}{6}

    and compute its Richardson extrapolation using partial sums with up to four terms:

    Taking N = 2, the required partial sums are:
    \displaystyle S_2 = 1 + \frac{1}{4} = \frac{5}{4}
    \displaystyle S_3 = 1 + \frac{1}{4} + \frac{1}{9} = \frac{49}{36}
    \displaystyle S_4 = 1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} = \frac{205}{144}

    The Richardson extrapolation is (see the post Richardson extrapolation for the derivation):

    \displaystyle \mathcal{R}_N = \frac{N^2 S_N - 2(N+1)^2 S_{N+1} + (N+2)^2 S_{N+2}}{2}

    in this case:

    \displaystyle \mathcal{R}_2 = \frac{4 \times (\frac{5}{4}) - 2 \times 9 \times (\frac{49}{36}) + 16 \times \frac{205}{144}}{2}
    \displaystyle = \frac{5 - \frac{49}{2} + \frac{205}{9}}{2}
    \displaystyle = \frac{59}{36} \approx 1.6389

    We have:

    \displaystyle \frac{\pi^2}{6} = 1.6449
    \displaystyle \mathcal{R}_{2} = 1.6389
    \displaystyle S_4 = 1.4236

    Using only the first four terms of the series, Richardson extrapolation moves the estimate from S_4 = 1.4236 to \mathcal{R}_2 = 1.6389, much closer to the true value \pi^2/6 = 1.6449.
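    The arithmetic above can be reproduced with a short script (a sketch; `partial_sum` and `richardson` are names chosen here, and exact fractions are used to avoid rounding):

```python
from fractions import Fraction

def partial_sum(N):
    """N-th partial sum of the Basel series 1 + 1/4 + 1/9 + ..."""
    return sum(Fraction(1, n * n) for n in range(1, N + 1))

def richardson(N):
    """R_N built from the consecutive partial sums S_N, S_{N+1}, S_{N+2}."""
    return (N**2 * partial_sum(N)
            - 2 * (N + 1)**2 * partial_sum(N + 1)
            + (N + 2)**2 * partial_sum(N + 2)) / 2

S4 = partial_sum(4)   # 205/144, about 1.4236
R2 = richardson(2)    # 59/36, about 1.6389 -- far closer to pi^2/6
```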

  • Richardson extrapolation

    Another method for accelerating convergence is Richardson extrapolation. Like the Shanks transformation, it combines a few consecutive partial sums into a better estimate, but it assumes the error decays in powers of 1/N rather than geometrically. Let’s model the partial sum as:

    \displaystyle S_N = \sum_{n=0}^{N} a_n = S + \frac{a}{N} + \frac{b}{N^2} + \frac{c}{N^3} + \dots

    where S = \lim_{N \to \infty} S_N.

    \displaystyle S_N = S + \frac{a}{N} + \frac{b}{N^2} + \frac{c}{N^3} + \dots
    \displaystyle S_{N+1} = S + \frac{a}{N+1} + \frac{b}{(N+1)^2} + \frac{c}{(N+1)^3} + \dots
    \displaystyle S_{N+2} = S + \frac{a}{N+2} + \frac{b}{(N+2)^2} + \frac{c}{(N+2)^3} + \dots

    Multiplying by N^2, (N+1)^2, and (N+2)^2, respectively:

    \displaystyle N^2 S_N = N^2 S + N a + b + \frac{c}{N} + \dots
    \displaystyle (N+1)^2 S_{N+1} = (N+1)^2 S + (N+1) a + b + \frac{c}{N+1} + \dots
    \displaystyle (N+2)^2 S_{N+2} = (N+2)^2 S + (N+2) a + b + \frac{c}{N+2} + \dots

    and, after multiplying the middle equation by -2:

    \displaystyle N^2 S_N = N^2 S + N a + b + \frac{c}{N} + \dots
    \displaystyle -2 (N+1)^2 S_{N+1} = -2 (N+1)^2 S - 2 (N+1) a - 2 b - 2 \frac{c}{N+1} + \dots
    \displaystyle (N+2)^2 S_{N+2} = (N+2)^2 S + (N+2) a + b + \frac{c}{N+2} + \dots

    Summing up the system above:

    \displaystyle N^2 S_N - 2 (N+1)^2 S_{N+1} + (N+2)^2 S_{N+2} = N^2 S + N a + b + \frac{c}{N} + \dots
    \displaystyle -2 (N+1)^2 S - 2 (N+1) a - 2 b - 2 \frac{c}{N+1} + \dots
    \displaystyle + (N+2)^2 S + (N+2) a + b + \frac{c}{N+2} + \dots

    Expanding the squares and dropping the terms that vanish as N \to \infty:

    \displaystyle N^2 S_N - 2 (N+1)^2 S_{N+1} + (N+2)^2 S_{N+2} = N^2 S + a N + b
    \displaystyle - 2 N^2 S - 4 N S - 2 S - 2 a N - 2 a - 2 b
    \displaystyle + N^2 S + 4 N S + 4 S + a N + 2 a + b
    \displaystyle = 2 S
    \displaystyle \implies S = \frac{N^2 S_N - 2 (N+1)^2 S_{N+1} + (N+2)^2 S_{N+2}}{2} \quad \text{(as } N \to \infty\text{)}

    On this basis, we define the Richardson extrapolation as:

    \boxed{\displaystyle \mathcal{R}_N := \frac{N^2 S_N - 2 (N+1)^2 S_{N+1} + (N+2)^2 S_{N+2}}{2}}

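    A quick sanity check of the cancellation above: feed \mathcal{R}_N a model sequence containing only the a/N and b/N^2 error terms; since there is no c/N^3 term, the extrapolation recovers S exactly for every N, not just in the limit (a minimal sketch with arbitrarily chosen constants):

```python
from fractions import Fraction as F

# Arbitrary constants for the model S_N = S + a/N + b/N^2 (no c/N^3 term):
S_true, a, b = F(5), F(3), F(-7)

def model(N):
    return S_true + a / N + b / N**2

def richardson(N):
    return (N**2 * model(N)
            - 2 * (N + 1)**2 * model(N + 1)
            + (N + 2)**2 * model(N + 2)) / 2

# With no higher-order error terms, R_N equals S exactly for every N:
exact_for_all_N = all(richardson(N) == S_true for N in range(1, 20))
```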
  • Shanks transformation: Example 2

    To illustrate the Shanks transformation we will use the following series:

    \displaystyle \ln(x) = (x-1)-\frac{1}{2}(x-1)^2+\frac{1}{3}(x-1)^3-\dots
    \displaystyle \ln(x) = \sum_{n=1}^{\infty} (-1)^{n+1} \frac{(x-1)^n}{n}

    This series converges very slowly near x = 2. To illustrate this, we compute \ln(2):

    \displaystyle \ln(2) = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \dots
    \displaystyle \ln(2) = \sum_{n=1}^{\infty} (-1)^{n+1} \frac{1}{n}

    The corresponding partial sum of ln(2) is:

    \displaystyle S_N = \sum_{n=1}^{N} (-1)^{n+1} \frac{1}{n}

    Let’s calculate some terms of this partial sum:

    \displaystyle S_1 = 1 \quad S_2 = 0.5 \quad S_3 = 0.8333
    \displaystyle S_{10} = 0.6456 \quad S_{400} = 0.6919 \quad \ln(2) = 0.6931

    Perform the Shanks transformation (using S_N = S_4):

    \displaystyle S_3 = 1 - \frac{1}{2} + \frac{1}{3} = \frac{6}{6} - \frac{3}{6} + \frac{2}{6} = \frac{5}{6}
    \displaystyle S_4 = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} = \frac{12}{12} - \frac{6}{12} + \frac{4}{12} - \frac{3}{12} = \frac{7}{12}
    \displaystyle S_5 = S_4 + \frac{1}{5} = \frac{35}{60} + \frac{12}{60} = \frac{47}{60}
    \displaystyle \mathcal{S} = \frac{S_N^2 - S_{N+1}S_{N-1}}{2S_N - S_{N+1} - S_{N-1}}
    \displaystyle \mathcal{S} = \frac{\left(\frac{7}{12}\right)^2 - \frac{47}{60}\cdot\frac{5}{6}}{2\cdot\frac{7}{12} - \frac{47}{60} - \frac{5}{6}}
    \displaystyle \mathcal{S} = \frac{\frac{245}{720} - \frac{470}{720}}{\frac{70}{60} - \frac{47}{60} - \frac{50}{60}}
    \displaystyle \mathcal{S} = \frac{-\frac{225}{720}}{-\frac{27}{60}} = \frac{25}{36} \approx 0.6944

    Since S_{358} = 0.6918 and \ln(2) = 0.6931, roughly 358 terms of the partial sum S_N are needed to match the relative error of the Shanks transformation \mathcal{S}, which used only the first five terms of the series.
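    The same computation in code (a sketch; the helper names are ours, and exact rational arithmetic reproduces \mathcal{S} \approx 0.6944 exactly as 25/36):

```python
from fractions import Fraction
import math

def S(N):
    """N-th partial sum of the alternating harmonic series."""
    return sum(Fraction((-1)**(n + 1), n) for n in range(1, N + 1))

def shanks(N):
    """Shanks transform built from S_{N-1}, S_N, S_{N+1}."""
    return (S(N)**2 - S(N + 1) * S(N - 1)) / (2 * S(N) - S(N + 1) - S(N - 1))

acc = shanks(4)                       # 25/36, about 0.6944
err = abs(float(acc) - math.log(2))   # roughly 1.3e-3
```

    For comparison, even S_100 is still off by about 5e-3, worse than the transform built from five terms.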

  • Shanks transformation: Example 1

    Let’s transform the geometric series:

    \displaystyle 1+x+x^2+x^3+x^4+\dots

    using the partial sums S_1, S_2, S_3 (with S_N = S_2):

    \displaystyle S_1 = 1 + x
    \displaystyle S_2 = 1 + x + x^2
    \displaystyle S_3 = 1 + x + x^2 + x^3
    \displaystyle S_2^2 = (1+x+x^2)^2 = 1 + 2x + 3x^2 + 2x^3 + x^4
    \displaystyle S_3 S_1 = (1+x+x^2+x^3)(1+x) = 1 + 2x + 2x^2 + 2x^3 + x^4
    \displaystyle 2S_2 = 2 + 2x + 2x^2
    \displaystyle \mathcal{S} = \frac{(1+2x+3x^2+2x^3+x^4) - (1+2x+2x^2+2x^3+x^4)}{(2+2x+2x^2) - (1+x+x^2+x^3) - (1+x)}
    \displaystyle \mathcal{S} = \frac{x^2}{x^2-x^3}
    \displaystyle \mathcal{S} = \frac{1}{1-x}

    Therefore the Shanks transformation recovers the exact sum of the geometric series, \frac{1}{1-x}. A similar result can be obtained for the series:

    \displaystyle 1-x+x^2-x^3+\dots

    The corresponding Shanks transformation is \frac{1}{1+x}.
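    Because the transform is exact for geometric partial sums, a numerical spot check at any sample point should match 1/(1-x) to machine precision (a sketch; the variable names are ours):

```python
def shanks(s_prev, s, s_next):
    """Shanks transform of three consecutive partial sums."""
    return (s * s - s_next * s_prev) / (2 * s - s_next - s_prev)

x = 0.3                       # any sample point with |x| < 1
S1 = 1 + x
S2 = 1 + x + x**2
S3 = 1 + x + x**2 + x**3
result = shanks(S1, S2, S3)   # matches 1/(1 - x) = 1/0.7
```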

    Remarkably, the Shanks transformation yields the exact sum of the geometric series using only a few terms, since the error model S_N = S + \alpha \beta^N describes such a series perfectly. The Gregory series for \pi is notoriously slow, requiring five hundred terms just to calculate two correct decimal places. By applying the Shanks transformation, however, we can accelerate this convergence dramatically, extracting high-precision results from only a few initial partial sums. This illustrates how a simple sequence transformation can turn an inefficient infinite series into a practical computational tool.
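    The Gregory-series claim can be illustrated numerically: one Shanks step on about ten partial sums of 4(1 - 1/3 + 1/5 - \dots) already beats hundreds of plain terms (a sketch; `gregory` and `shanks` are names of our choosing):

```python
import math

def gregory(N):
    """Sum of the first N terms of the Gregory/Leibniz series for pi."""
    return 4 * sum((-1)**n / (2 * n + 1) for n in range(N))

def shanks(s_prev, s, s_next):
    """Shanks transform of three consecutive partial sums."""
    return (s * s - s_next * s_prev) / (2 * s - s_next - s_prev)

accelerated = shanks(gregory(9), gregory(10), gregory(11))
# One transform on ~10 terms versus 500 raw terms of the series:
better = abs(accelerated - math.pi) < abs(gregory(500) - math.pi)
```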