Month: February 2026

  • Accelerate convergence (Shanks Transformation)

    Imagine you’ve solved a problem using a Taylor series expansion of the solution, and the resulting series converges very slowly. The Shanks transformation is one way of improving the speed of convergence of a convergent series.
    Given the partial sum:

    \displaystyle S_N = a_0 + a_1 + a_2 + \dots + a_N

    Suppose that this partial sum converges to:

    \displaystyle S = a_0 + a_1 + a_2 + \dots
    \displaystyle S = \lim_{N \to \infty} \sum_{k=0}^{N} a_k

We can model the difference between S and S_N as follows:

    \displaystyle S_N = S + \alpha \beta^{N}

where |\beta| < 1. Writing this model for three consecutive partial sums gives:

    \displaystyle S_{N+1} = S + \alpha \beta^{N+1}
    \displaystyle S_N = S + \alpha \beta^{N}
    \displaystyle S_{N-1} = S + \alpha \beta^{N-1}

    and

    \displaystyle S_{N+1} - S = \alpha \beta^{N+1}
    \displaystyle S_N - S = \alpha \beta^{N}
    \displaystyle S_{N-1} - S = \alpha \beta^{N-1}

    Therefore:

    \displaystyle \frac{S_N - S}{S_{N-1}-S} = \frac{\alpha \beta^{N}}{\alpha \beta^{N-1}} = \beta
    \displaystyle \frac{S_{N+1} - S}{S_{N}-S} = \frac{\alpha \beta^{N+1}}{\alpha \beta^{N}} = \beta
    \displaystyle \frac{S_N - S}{S_{N-1}-S} = \frac{S_{N+1} - S}{S_{N}-S}
    \displaystyle (S_N - S)(S_{N}-S) = (S_{N+1} - S)(S_{N-1}-S)
    \displaystyle S_N^2 - 2S_NS + S^2 = S_{N+1}S_{N-1}- SS_{N+1} - SS_{N-1} + S^2
    \displaystyle S_N^2 - 2S_NS + S^2= S_{N+1}S_{N-1} + S^{2}- S(S_{N+1}+ S_{N-1})
    \displaystyle S_N^2 - 2S_NS = S_{N+1}S_{N-1}- S(S_{N+1}+ S_{N-1})
    \displaystyle S_N^2 - S_{N+1}S_{N-1} = 2S_NS - S(S_{N+1}+ S_{N-1})
    \displaystyle S_N^2 - S_{N+1}S_{N-1} = S(2S_N - S_{N+1} - S_{N-1})
    \displaystyle S = \frac{S_N^2 - S_{N+1}S_{N-1}}{2S_N -S_{N+1}- S_{N-1}}

    According to this result, the Shanks transformation is defined as:

    \displaystyle \mathcal{S} := \frac{S_N^2 - S_{N+1}S_{N-1}}{2S_N -S_{N+1}- S_{N-1}}
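    To make this concrete, here is a minimal Python sketch (my own illustration; the function and variable names are not from any standard library) that applies the transformation above once to the slowly converging Leibniz series 1 - 1/3 + 1/5 - \dots = \pi/4:

```python
import math

def shanks(s):
    """Apply one Shanks transformation to a list of partial sums s.

    Uses S = (S_N^2 - S_{N+1} S_{N-1}) / (2 S_N - S_{N+1} - S_{N-1})
    for each interior index N.
    """
    out = []
    for n in range(1, len(s) - 1):
        denom = 2 * s[n] - s[n + 1] - s[n - 1]
        out.append((s[n] ** 2 - s[n + 1] * s[n - 1]) / denom)
    return out

# Partial sums of the Leibniz series 1 - 1/3 + 1/5 - ... -> pi/4
partial = []
acc = 0.0
for k in range(12):
    acc += (-1) ** k / (2 * k + 1)
    partial.append(acc)

target = math.pi / 4
accelerated = shanks(partial)
print(abs(partial[-1] - target))      # error of the plain partial sum
print(abs(accelerated[-1] - target))  # much smaller error after one transform
```

    Even with only a dozen terms, a single application of the transformation gains roughly two extra digits of accuracy.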
  • Summation summary

    Series                   | Standard | Euler | Borel    | Borel–Écalle | Zeta
    1 + 1/4 + 1/9 + …        | π²/6     | π²/6  | π²/6     | π²/6         | π²/6
    1 − 1 + 1 − 1 + …        |          | 1/2   | 1/2      | 1/2          | 1/2
    1! − 2! + 3! − 4! + …    |          |       | ≈ 0.5963 | ≈ 0.5963     |
    1 + 2 + 3 + 4 + …        |          |       |          |              | −1/12
    1! + 2! + 3! + 4! + …    |          |       |          |              |

    (An empty cell means the method does not assign a value to the series.)

The various summation methods presented in this table are not arbitrary assignments of values to divergent series; they are governed by the fundamental principles of regularity and consistency. A method is “regular” if it preserves the sum of any convergent series, and “consistent” if different methods yield the same value whenever they both apply to a given series.

    In theoretical physics and asymptotic analysis, these techniques are indispensable. Borel and Euler summations are frequently used to handle “weakly” divergent perturbative expansions in quantum mechanics. Zeta-function regularization is a cornerstone of modern physics, particularly in the calculation of the Casimir effect and in string theory, where it provides a rigorous way to subtract infinities from physical observables. Meanwhile, the Borel–Écalle theory of resurgent functions offers the most sophisticated framework for “decoding” the non-perturbative information hidden within divergent power series. By bridging the gap between divergent expansions and their underlying analytic structures, these methods allow mathematicians and physicists to extract precise physical predictions from seemingly ill-defined mathematical objects.
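    To make the Borel column concrete, here is a small Python sketch (my own illustration, not from the original table). The Borel transform of 1-1+1-1+\dots is B(t) = \sum_k (-1)^k t^k/k! = e^{-t}, so the Borel sum is \int_0^\infty e^{-t}B(t)\,dt = \int_0^\infty e^{-2t}\,dt, which a simple quadrature confirms equals 1/2:

```python
import math

def borel_sum_grandi(t_max=40.0, n=100_000):
    """Borel sum of Grandi's series 1 - 1 + 1 - 1 + ...

    Borel transform: B(t) = sum_k (-1)^k t^k / k! = exp(-t), so the
    Borel sum is the integral of exp(-t) * B(t) = exp(-2t) over [0, inf),
    approximated here with the trapezoidal rule on [0, t_max].
    """
    f = lambda t: math.exp(-t) * math.exp(-t)  # e^{-t} * B(t)
    h = t_max / n
    total = 0.5 * (f(0.0) + f(t_max)) + sum(f(i * h) for i in range(1, n))
    return total * h

print(borel_sum_grandi())  # ≈ 0.5, in agreement with the table
```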

  • Classic Summation Axioms

So far, we’ve seen several ways of summing series (the usual method, Euler, Borel, generic, Borel–Écalle and Zeta summation). All of these methods satisfy the three axioms below, except Zeta summation, which satisfies none of them.

    \displaystyle \mathcal{S}(a_0+a_1+a_2+ ...) = \lim_{n \to \infty} \sum_{k =0}^{n}a_k \quad\text{(Regularity)}
    \displaystyle \mathcal{S}\Big(\sum_n (\alpha a_n + \beta b_n)\Big) = \alpha\, \mathcal{S}\Big(\sum_n a_n\Big) + \beta\, \mathcal{S}\Big(\sum_n b_n\Big) \quad\text{(Linearity)}
    \displaystyle \mathcal{S}(a_0+a_1+a_2+ ...) = a_0 + \mathcal{S}(a_1+a_2+ ...) \quad\text{(Stability)}
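    To see these axioms at work, here is a short worked example (my own addition): any summation method \mathcal{S} that is both linear and stable can assign only one value to Grandi's series. Writing S := \mathcal{S}(1-1+1-1+\dots), stability and then linearity give:

    \displaystyle S = 1 + \mathcal{S}(-1+1-1+\dots) = 1 - \mathcal{S}(1-1+1-\dots) = 1 - S

    so 2S = 1 and S = \frac{1}{2}, in agreement with the Euler and Borel values in the previous section.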

Concerning Zeta summation, we should restrain our enthusiasm: there is a certain meromorphic function called \zeta, which takes the value \zeta(s)=\sum_{n=1}^{\infty} \frac{1}{n^s} for all Re(s)>1 and can be extended to the whole complex plane except at s=1, in such a way that \zeta(0) = -\frac{1}{2}. This ‘summation’ does not rely on any of the properties cited above, so it is important to be clear about which method we are using and what properties it fulfills.

  • Zeta summation VI

If we want to calculate \zeta(-2), for example, we need a proper value for \Gamma(s/2) for Re(s) \leq 0. The problem is that the integral representation of the Gamma function presented above:

    \displaystyle \Gamma(s) = \int_{0}^{\infty} e^{-t} t^{s-1}\,dt

is only valid for Re(s) > 0. We have to provide another representation of the Gamma function that remains valid for Re(s) \leq 0:

     \Gamma(s) = \int_{0}^{\infty} e^{-t}\, t^{s-1}\, dt
     = \int_{1}^{\infty} e^{-t}\, t^{s-1}\, dt + \int_{0}^{1} e^{-t}\, t^{s-1}\, dt
     = \int_{1}^{\infty} e^{-t}\, t^{s-1}\, dt + \int_{0}^{1} \sum_{n=0}^{\infty} \frac{(-1)^n}{n!} t^{n+s-1}\, dt
     = \int_{1}^{\infty} e^{-t}\, t^{s-1}\, dt + \sum_{n=0}^{\infty} \frac{(-1)^n}{n!} \int_{0}^{1} t^{n+s-1}\, dt
     = \int_{1}^{\infty} e^{-t}\, t^{s-1}\, dt + \sum_{n=0}^{\infty} \frac{(-1)^n}{n!} \left[ \frac{t^{n+s}}{n+s} \right]_{0}^{1}
     = \int_{1}^{\infty} e^{-t}\, t^{s-1}\, dt + \sum_{n=0}^{\infty} \frac{(-1)^n}{n!\,(n+s)}
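    As a sanity check (my own addition; gamma_continued is a hypothetical helper, not a library function), the last expression above — an integral over [1, \infty) plus a rapidly converging series — can be evaluated numerically and compared against known values of \Gamma:

```python
import math

def gamma_continued(s, t_max=40.0, n_quad=100_000, n_terms=60):
    """Evaluate Gamma(s) = ∫_1^∞ e^{-t} t^{s-1} dt + Σ_{n≥0} (-1)^n / (n! (n+s))
    for real s not equal to 0, -1, -2, ...

    Unlike Euler's integral, this representation also works for Re(s) <= 0.
    """
    # Trapezoidal rule for the integral over [1, t_max] (the tail beyond
    # t_max is of order e^{-t_max} and negligible here).
    f = lambda t: math.exp(-t) * t ** (s - 1)
    h = (t_max - 1.0) / n_quad
    integral = 0.5 * (f(1.0) + f(t_max)) + sum(f(1.0 + i * h) for i in range(1, n_quad))
    integral *= h
    # The series converges extremely fast thanks to the factorial denominator.
    series = sum((-1) ** n / (math.factorial(n) * (n + s)) for n in range(n_terms))
    return integral + series

print(gamma_continued(0.5))   # Gamma(1/2) = sqrt(pi) ≈ 1.7724539
print(gamma_continued(-0.5))  # Gamma(-1/2) = -2*sqrt(pi) ≈ -3.5449077
```

    The second call evaluates \Gamma at a point with Re(s) < 0, which Euler's integral alone cannot reach.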

The sum \sum_{n=0}^{\infty} \frac{(-1)^n}{n!(n+s)} exists for all s except s= 0,-1,-2,-3,..., the simple poles of \Gamma(s). Consequently \Gamma(s/2) has simple poles at s= 0,-2,-4,..., so \frac{1}{\Gamma(s/2)} is entire with simple zeros at s= 0,-2,-4,-6,.... The simple pole of \xi(s) at s=0 is cancelled by the corresponding zero of \frac{1}{\Gamma(s/2)}. As a consequence, the only singularity of \zeta(s) is a simple pole at s=1. Since

    \displaystyle \zeta(s) = \pi^{s/2}\frac{\xi(s)}{\Gamma(s/2)}

and \frac{1}{\Gamma(s/2)} is entire with simple zeros at s= 0,-2,-4,-6,..., this implies, for example, that:

    \displaystyle \zeta(-2) = 0

    We have in general

    \displaystyle \zeta(-2n)=0

These are the so-called “trivial zeros” of the Zeta function: they occur at the negative even integers.