Author: approximaths

  • Zeta summation I

    The Zeta function is defined on the complex plane for Re(z) > 1 as:

    \displaystyle \zeta(z)=\frac{1}{1^z}+\frac{1}{2^z}+\frac{1}{3^z}+\frac{1}{4^z}+...=\sum_{n=1}^{\infty}\frac{1}{n^z}

    For example:

    \displaystyle \zeta(2)=\frac{1}{1^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{4^2}+...=\frac{\pi^2}{6}
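    As a quick numerical sanity check (a sketch in Python, not part of the original derivation), the partial sums of \sum 1/n^2 indeed approach \pi^2/6:

```python
import math

# Partial sum of 1/n^2 up to N = 100000; the tail behaves like 1/N,
# so this approximates pi^2/6 = 1.6449... to about 1e-5
zeta2_partial = sum(1.0 / n ** 2 for n in range(1, 100001))
target = math.pi ** 2 / 6
```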

    If we want to use the Zeta function to perform a summation with Re(z)\leq 1, we must use the analytic continuation of the Zeta function.

    If f(z) is single-valued in some region of the complex plane, the derivative of f(z) is defined as:

    \displaystyle f'(z)=\lim_{\Delta z\to0}\frac{f(z+\Delta z)-f(z)}{\Delta z}

    If this limit exists we say that f(z) is differentiable at z. If the derivative exists at every point of a region of the complex plane we say that f(z) is analytic in this region. If the function is analytic inside some circle of convergence C_0 it can be represented by the Taylor series:

    \displaystyle a_0+a_1(z-z_0)+a_2(z-z_0)^2+...

    where z_0 is the center of the circle C_0. By choosing a new point z_1 in C_0 which is the center of a new circle C_1 the function can be represented by the Taylor series:

    \displaystyle b_0+b_1(z-z_1)+b_2(z-z_1)^2+...

    The figure above illustrates analytic continuation. I like to imagine analytic functions as exquisite porcelain: delicately fragile, yet rigid, and able to be extended through analytic continuation.

    In order to analytically continue the Zeta function we first consider the following function:

    \displaystyle f(\xi)=e^{-\pi \xi^2}

    where \xi\in\mathbb{R}. We would like to show that this function is equal to its own Fourier transform

    \displaystyle f(\xi)=e^{-\pi \xi^2}=\int_{-\infty}^{\infty}e^{-\pi x^2}e^{-2\pi i x\xi}\,dx

    if \xi=0:

    \displaystyle f(0)=1=\int_{-\infty}^{\infty}e^{-\pi x^2}\,dx

    if \xi<0, substitute x=-u:

    \displaystyle \int_{-\infty}^{\infty} e^{-\pi x^2} e^{-2\pi i x \xi}\,dx = \int_{-\infty}^{\infty} e^{-\pi u^2} e^{2\pi i u \xi}\,du = \int_{-\infty}^{\infty} e^{-\pi u^2} e^{-2\pi i u |\xi|}\,du = e^{-\pi \xi^2}

    which reduces this case to the case of a positive argument |\xi|.
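    The identity can also be checked numerically. The sketch below uses Simpson's rule on [-6, 6], where the Gaussian tail is negligible; the imaginary (sine) part of the transform vanishes by symmetry, so only the cosine part is integrated:

```python
import math

def fourier_gauss(xi, L=6.0, n=4000):
    """Simpson approximation of the integral of exp(-pi x^2) exp(-2 pi i x xi) dx.
    The imaginary (sine) part vanishes by symmetry, so only the cosine part is kept."""
    h = 2 * L / n
    total = 0.0
    for k in range(n + 1):
        xk = -L + k * h
        w = 1 if k in (0, n) else (4 if k % 2 else 2)
        total += w * math.exp(-math.pi * xk * xk) * math.cos(2 * math.pi * xk * xi)
    return total * h / 3

check0 = fourier_gauss(0.0)      # the Gaussian integral, expected close to 1
check_neg = fourier_gauss(-0.5)  # expected close to exp(-pi * (-0.5)**2)
```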

    The demonstration will be completed in the next post.

    I wish you a merry Christmas 2025!

  • Anharmonic oscillator II

    Remember that the perturbative series for the anharmonic oscillator (see Anharmonic oscillator I) is

    \displaystyle E_0(\epsilon) = \frac{1}{2} + \frac{3}{4}\epsilon - \frac{21}{8}\epsilon^2 + \frac{333}{16}\epsilon^3 + \mathcal{O}(\epsilon^4)

    We used Padé approximants to compute the ground state energy E_0(\epsilon). Now we aim to calculate E_0(\epsilon) using Borel summation. The (formal) Borel sum is given by

    \displaystyle \int_{0}^{\infty} e^{-t} \sum_{n=0}^\infty \frac{a_n}{n!} (\epsilon t)^n \, dt.

    Using the first three coefficients of the perturbative series, the truncated Borel transform is approximated by

    \displaystyle \mathcal{B}E_0(t) \approx \frac{1}{2} + \frac{3}{4} t - \frac{21}{16} t^2,

    so the truncated Borel sum reads

    \displaystyle E_0(\epsilon) \approx \int_0^\infty e^{-t} \left(\frac{1}{2} + \frac{3}{4} \epsilon t - \frac{21}{16} \epsilon^2 t^2 \right) dt = \frac{1}{2} + \frac{3}{4} \epsilon - \frac{21}{8} \epsilon^2.

    since

    \displaystyle \int_0^\infty \frac{1}{2} e^{-t} \, dt = \frac{1}{2}
    \displaystyle \int_0^\infty e^{-t} \cdot \frac{3}{4} \epsilon \, t \, dt = \frac{3}{4} \epsilon
    \displaystyle \int_0^\infty e^{-t} \cdot \left( -\frac{21}{16} \epsilon^2 t^2 \right) dt = -\frac{21}{8} \epsilon^2
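    Each of these three integrals is an instance of \int_0^\infty e^{-t} t^n \, dt = n!. A small numerical sketch (with \epsilon = 0.1 as an arbitrary illustrative value):

```python
import math

def borel_weight(n, T=60.0, steps=20000):
    """Simpson approximation of the integral of exp(-t) t^n on [0, T] (= n! for large T)."""
    h = T / steps
    total = 0.0
    for k in range(steps + 1):
        t = k * h
        w = 1 if k in (0, steps) else (4 if k % 2 else 2)
        total += w * math.exp(-t) * t ** n
    return total * h / 3

eps = 0.1  # illustrative value of the coupling
term0 = 0.5 * borel_weight(0)                      # -> 1/2
term1 = 0.75 * eps * borel_weight(1)               # -> (3/4) eps
term2 = -21.0 / 16.0 * eps ** 2 * borel_weight(2)  # -> -(21/8) eps^2
```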

    This doesn’t really seem like progress, since the first terms of the Borel sum are identical to those of the perturbative expansion. The perturbative expansion E_0(\epsilon) = \sum_{n=0}^\infty a_n \epsilon^n above has coefficients that grow factorially. Bender and Wu showed that for large n,

    \displaystyle a_n \sim -r\, (-1)^n \left(\frac{3}{2}\right)^n \Gamma\!\left(n+\tfrac12\right), \qquad r>0.

    Since

    \displaystyle \Gamma\!\left(n+\tfrac12\right) \sim n!\, n^{-1/2},

    the coefficients a_n grow like n!, and the series diverges for all \epsilon \neq 0.

    Borel summation improves convergence by dividing out this factorial growth.

    In summary, even with just a few terms, Borel summation correctly recovers the perturbative results for the anharmonic oscillator; more importantly, it turns the divergent series into a well-defined and useful result.

  • Borel–Écalle summation

    Consider the archetypal divergent series

    \displaystyle \sum_{n=0}^\infty n!\, x^n.

    In a previous post we have seen the Borel summation of a_0 + a_1 + a_2 + \dots + a_n + \dots

    \displaystyle \begin{aligned} B &:= \sum_{n=0}^{\infty} \frac{\int_{0}^{\infty} e^{-t}t^{n}dt}{n!} a_n \\ &= \int_{0}^{\infty}e^{-t} \sum_{n=0}^{\infty} \frac{t^n}{n!}a_ndt \end{aligned}

    Step 1 – Ordinary Borel summation

    \displaystyle \sum_{n=0}^{\infty} \frac{t^n}{n!} a_n = \sum_{n=0}^{\infty} \frac{t^n}{n!} n!x^n = \sum_{n=0}^\infty (x t)^n = \frac{1}{1 - x t}

    Ordinary Borel summation fails for x > 0: the analytic continuation of \mathcal{B}(t) has a simple pole at t = 1/x lying on the positive real axis. The ordinary Borel integral

    \displaystyle \int_0^\infty e^{-t} \frac{1}{1 - x t}\, dt

    therefore diverges for all x > 0 (the pole blocks the integration path).

    Step 2 – Borel–Écalle summation
    Define the two lateral Borel transforms by deforming the contour slightly above (+) or below (-) the real axis:

    \displaystyle B^\pm(x) = \int_0^{\infty e^{\pm i 0}} e^{-t} \frac{1}{1 - x t}\, dt.

    The notation \infty e^{\pm i 0} means that the upper limit of integration is taken to infinity along a ray approaching the positive real axis from above (angle +0) or from below (angle -0). This slight contour deformation is necessary because the integrand has a singularity (here at t = 1/x) on the positive real axis itself, which makes the ordinary Borel integral ill-defined.

    These integrals exist, and

    \displaystyle B^+(x) - B^-(x) = \frac{2\pi i \, e^{-1/x}}{x}.

    The Borel–Écalle summation is defined by

    \displaystyle y(x) := \frac{B^+(x) + B^-(x)}{2}

    And in this case:

    \displaystyle y(x) = \frac{e^{-1/x}}{x}\,\mathrm{Ei}\!\left(\frac{1}{x}\right)

    where \mathrm{Ei}(z) = -\,\mathrm{PV}\!\int_{-z}^{\infty} \frac{e^{-u}}{u}\, du is the exponential integral function. Thus, despite the divergence of the original series and the failure of ordinary Borel summation due to the pole on the integration path, the Borel–Écalle median summation assigns a finite, well-defined value on the positive real axis.
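    As a numerical cross-check (a sketch; x = 0.1 is an arbitrary choice), the median (principal-value) Borel integral can be computed by folding the integrand symmetrically about the pole at t = 1/x, and compared with the optimally truncated series \sum n!\,x^n; the two agree to about the size of the smallest term:

```python
import math

x = 0.1
pole = 1.0 / x  # simple pole of 1/(1 - x t) on the positive real axis

def simpson(f, a, b, n=4000):
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

def folded(s):
    """Integrand on [0, 2/x] folded symmetrically about the pole: the
    1/(t - pole) parts cancel, leaving (2/x) e^{-1/x} sinh(s)/s at offset s."""
    if s == 0.0:
        return 2.0 * math.exp(-pole) / x  # limiting value at the pole
    return (math.exp(-(pole - s)) - math.exp(-(pole + s))) / (x * s)

# Principal value of the Borel integral of e^{-t} / (1 - x t)
pv = simpson(folded, 0.0, pole) + simpson(lambda t: math.exp(-t) / (1 - x * t), 2 * pole, 60.0)

# Optimal truncation of sum n! x^n: stop near the smallest term (n ~ 1/x)
s_trunc = sum(math.factorial(n) * x ** n for n in range(10))
```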

  • ‘Generic’ summation II

    Consider the series:

    \displaystyle 1+0-1+1+0-1+1+0-1+\dots

    using the generic summation (summation “machine”):

    \displaystyle \begin{aligned} s &= \mathcal{S}(1+0-1+1+0-1+1+0-1+\dots) \\ s &= 1 + \mathcal{S}(0-1+1+0-1+1+0-1+\dots) \\ s &= 1 + \mathcal{S}(-1+1+0-1+1+0-1+\dots) \end{aligned}

    Sum these 3 equations term by term:

    \displaystyle \begin{aligned} 3s &= 2 + \mathcal{S}(0+0+0+0+0+0+0+\dots) \\ s &= \frac{2}{3} \end{aligned}

    Now apply Euler summation to the same series:

    \displaystyle 1+0-1+1+0-1+1+0-1+\dots

    We multiply the n-th term by x^n:

    \displaystyle \begin{aligned} f(x) &:= 1x^0+0x^1-1x^2+1x^3+0x^4-1x^5+1x^6+0x^7-1x^8+\dots \\ &= 1-x^2+x^3-x^5+x^6-x^8+\dots \\ &= (1+x^3+x^6+x^9+\dots) - (x^2+x^5+x^8+\dots) \end{aligned}

    In the last step we rearranged the terms of the Taylor series; inside its radius of convergence this rearrangement is valid. We have:

    \displaystyle \begin{aligned} f(x) &= (1+x^3+x^6+x^9+\dots) - (x^2+x^5+x^8+\dots) \\ &= (1+x^3+x^6+x^9+\dots) - x^2(1+x^3+x^6+\dots) \\ &= ((x^3)^0+(x^3)^1+(x^3)^2+(x^3)^3+\dots) - x^2((x^3)^0+(x^3)^1+(x^3)^2+(x^3)^3+\dots) \\ &= \frac{1}{1-x^3} - x^2\frac{1}{1-x^3} \\ &= \frac{1-x^2}{1-x^3} \end{aligned}

    Now calculate the limit (using l'Hôpital's rule):

    \displaystyle \lim_{x\to 1} \frac{1-x^2}{1-x^3} = \frac{2}{3}
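    A small numerical sketch of this limit:

```python
f = lambda x: (1 - x ** 2) / (1 - x ** 3)

# x = 0.9, 0.99, ..., 0.99999 approaching 1 from below; values approach 2/3
values = [f(1 - 10 ** (-k)) for k in range(1, 6)]
```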

    The generic summation “machine” and the Euler summation deliver the same result.

  • ‘Generic’ summation I

    Let us imagine that we have engineered a summation ‘machine’ called \mathcal{S}() and consider the geometric series:

    \displaystyle 1+x+x^2+x^3+x^4+\dots

    Apply our summation ‘machine’ on this series and call the result s:

    \displaystyle s = \mathcal{S}(1+x+x^2+x^3+x^4+\dots)

    We assign the following two properties for the ‘machine’ \mathcal{S}:

    \displaystyle \mathcal{S}(a_0+a_1+a_2+\dots) = a_0 + \mathcal{S}(a_1+a_2+\dots) \quad \text{(first property)}
    \displaystyle \mathcal{S}(\sum(\alpha a_n) + \sum(\beta b_n)) = \alpha \mathcal{S}(\sum a_n) + \beta \mathcal{S}(\sum b_n) \quad \text{(second property)}

    where \alpha and \beta are constants. Equipped with the machine \mathcal{S}() and its two properties consider again the geometric series:

    \displaystyle \begin{aligned} s &= \mathcal{S}(1+x+x^2+x^3+x^4+\dots) &\quad\text{(definition)}\\ s &= 1+ \mathcal{S}(x+x^2+x^3+x^4+\dots) &\quad\text{(first property)}\\ s &= 1+ x\mathcal{S}(1+x+x^2+x^3+\dots) &\quad\text{(second property)}\\ s &= 1+ xs &\quad\text{(definition)}\\ s &= \frac{1}{1-x} \end{aligned}

    Consider the series:

    \displaystyle 1-1+1-1+1-1+\dots

    If we use traditional, ‘rigorous’, summation techniques we will conclude that this alternating series does not converge. Now apply the summation “machine” as described above:

    \displaystyle \begin{aligned} s &= \mathcal{S}(1-1+1-1+1-1+\dots ) \\ s &= 1+ \mathcal{S}(-1+1-1+1-1+\dots ) \\ s &= 1 -\mathcal{S}(1-1+1-1+1-\dots ) \\ s &= 1-s \\ s &= \frac{1}{2} \end{aligned}
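    The value 1/2 is consistent with other summation methods; for instance, the running averages of the partial sums (Cesàro means) also settle at 1/2, as this quick sketch shows:

```python
# Grandi-type series 1 - 1 + 1 - 1 + ...: partial sums oscillate 1, 0, 1, 0, ...
partials = []
s = 0
for n in range(10000):
    s += (-1) ** n
    partials.append(s)

# The running average of the partial sums (Cesaro mean) settles at 1/2
cesaro = sum(partials) / len(partials)
```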

  • Borel summation

    Let us introduce Borel summation by first recalling the following identity:

    \displaystyle n!= \int_{0}^{\infty} e^{-t}t^{n}dt
    \displaystyle 1 = \frac{\int_{0}^{\infty} e^{-t}t^{n}dt}{n!}

    We would like to sum the following series:

    \displaystyle \sum_{n=0}^{\infty}a_n = a_0 + a_1 + a_2 + a_3 + a_4 +...
    \displaystyle \phantom{\sum_{n=0}^{\infty}a_n} = 1a_0 + 1a_1 + 1a_2 + 1a_3 + 1a_4 +...
    \displaystyle \phantom{\sum_{n=0}^{\infty}a_n} = \sum_{n=0}^{\infty} \frac{\int_{0}^{\infty} e^{-t}t^{n}dt}{n!} a_n

    The Borel sum B is defined as follows:

    \displaystyle B := \sum_{n=0}^{\infty} \frac{\int_{0}^{\infty} e^{-t}t^{n}dt}{n!} a_n
    \displaystyle \phantom{B :=} = \int_{0}^{\infty}e^{-t} \sum_{n=0}^{\infty} \frac{t^n}{n!}a_n\,dt

    Dividing by n! gives the series under the integral a better chance of converging. In the section concerning the Euler summation we have seen that:

    \displaystyle E(1 - 1 + 1 - 1 + 1 - 1 + ...) = \frac{1}{2}

    Let us calculate the corresponding Borel sum:

    \displaystyle B(1 - 1 + 1 - 1 + 1 - 1 + ...) = \int_{0}^{\infty}e^{-t} \sum_{n=0}^{\infty} \frac{t^n}{n!}(-1)^n \,dt
    \displaystyle \phantom{B(1 - 1 + 1 - 1 + 1 - 1 + ...)} = \int_{0}^{\infty}e^{-t} \sum_{n=0}^{\infty} \frac{(-t)^n}{n!} \,dt
    \displaystyle \phantom{B(1 - 1 + 1 - 1 + 1 - 1 + ...)} = \int_{0}^{\infty} e^{-t}e^{-t}\,dt
    \displaystyle \phantom{B(1 - 1 + 1 - 1 + 1 - 1 + ...)} = \int_{0}^{\infty} e^{-2t}\,dt
    \displaystyle \phantom{B(1 - 1 + 1 - 1 + 1 - 1 + ...)} = -\frac{1}{2} e^{-2t}\Big|_{0}^{\infty}
    \displaystyle \phantom{B(1 - 1 + 1 - 1 + 1 - 1 + ...)} = 0 - (-\frac{1}{2})
    \displaystyle \phantom{B(1 - 1 + 1 - 1 + 1 - 1 + ...)} = \frac{1}{2}

    and therefore:

    \displaystyle B(1-1+1-1+1-1+...) = E(1-1+1-1+1-1+...)

    Consider the series:

    \displaystyle -1+2!-3!+4!-5!+6!-...
    \displaystyle B(-1+2!-3!+4!-...) = \int_{0}^{\infty}e^{-t} \sum_{n=1}^{\infty} \frac{t^n}{n!}(-1)^n n! \,dt
    \displaystyle \phantom{B(-1+2!-3!+4!-...)} = \int_{0}^{\infty}e^{-t} \sum_{n=1}^{\infty} (-t)^n \,dt

    \sum_{n=1}^{\infty} (-t)^n converges for |t| < 1 to -\frac{t}{1+t}; analytically continuing this to the whole positive axis, we get:

    \displaystyle B = -\int_{0}^{\infty} \frac{te^{-t}}{(1+t)} \,dt \approx -0.40365
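    A short numerical sketch (Simpson's rule, truncating the integral at t = 60 where the integrand is of order e^{-60}) reproduces the quoted value:

```python
import math

def simpson(f, a, b, n=20000):
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

# -integral of t e^{-t} / (1 + t) over [0, infinity); truncating at
# t = 60 discards a tail of order e^{-60}
B = -simpson(lambda t: t * math.exp(-t) / (1 + t), 0.0, 60.0)
```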

    If a series is summable in the sense of Euler, then it is also summable in the sense of Borel, and both summation methods yield the same value. The converse is false: there exist series that are summable in the sense of Borel but not in the sense of Euler. In other words, Borel summation is more powerful, as it applies to more strongly divergent series.

  • Euler summation II

    In the previous post, we introduced Euler summation. The following are two examples where it fails to produce a finite result.

    Consider the divergent series:

    \displaystyle 0 + 1 + 2 + 3 + 4 + \dots

    Define:

    \displaystyle \begin{array}{rcl} f(x) &=& 0x^{0} + 1x^{1} + 2x^{2} + 3x^{3} + 4x^{4} + \dots \\ &=& \sum_{n=0}^{\infty} nx^{n} = \frac{x}{(1-x)^{2}} \\ \end{array}

    The Euler sum is:

    \displaystyle E = \lim_{x \to 1_{-}} \frac{x}{(1-x)^{2}} = \infty
    \displaystyle E(0+ 1 + 2 + 3 + 4 + \dots) = \infty

    Now consider the divergent series:

    \displaystyle 1 + 4 + 9 + 16 + 25 + 36 + \dots

    Define:

    \displaystyle \begin{array}{rcl} f(x) &=& 1^{2}x^{1}+2^{2}x^{2}+3^{2}x^{3}+4^{2}x^{4}+5^{2}x^{5}+6^{2}x^{6}+\dots \\ &=& \sum_{n=1}^{\infty} n^{2} x^{n} = \frac{x(1+x)}{(1-x)^{3}} \\  \end{array}

    The Euler sum is:

    \displaystyle E = \lim_{x \to 1_{-}}\frac{x(1+x)}{(1-x)^{3}} = \infty
    \displaystyle E(1^{2} + 2^{2} + 3^{2} + 4^{2} + \dots) = \infty
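    A short sketch showing both closed forms blowing up as x \to 1_{-}:

```python
g1 = lambda x: x / (1 - x) ** 2            # closed form for 0 + 1x + 2x^2 + 3x^3 + ...
g2 = lambda x: x * (1 + x) / (1 - x) ** 3  # closed form for 1x + 4x^2 + 9x^3 + ...

# Both grow without bound as x approaches 1 from below
samples = [(g1(x), g2(x)) for x in (0.9, 0.99, 0.999)]
```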

    Advantages

    Regularization of slowly divergent series:
    Euler summation can assign a finite value to some divergent series that oscillate or diverge slowly, such as

    \displaystyle 1 - 1 + 1 - 1 + \dots

    where E(series) = \tfrac{1}{2}.

    Improved convergence:
    For many convergent series, Euler transformation accelerates convergence, making it useful for numerical computations.

    Analytic continuation link:
    It provides a bridge between ordinary summation and more advanced summation methods (e.g. Borel or zeta regularization).

    Disadvantages

    Limited domain of applicability:
    Euler summation fails for series that diverge too rapidly, such as

    \displaystyle 1 + 2 + 3 + 4 + \dots

    where E(series) = \infty.

    Not uniquely defined for all divergent series:
    Some series cannot be assigned a finite Euler sum, or the method may yield inconsistent results depending on the transformation order.

    Weaker than analytic regularization:
    Compared to zeta or Borel summation, Euler’s method handles fewer classes of divergent series and lacks a rigorous analytic continuation framework.

  • Euler summation I

    So far we’ve seen Padé approximants. These have enabled us to approximate a function from its Taylor series and transform this (potentially non-convergent) series into a convergent rational fraction.

    We’d like to introduce other techniques that could be used to sum non-convergent series. First, we’ll take a look at Euler summation.

    If the series \sum_{n=0}^{\infty} a_n is algebraically divergent (the terms blow up like some power of n), then the series:

    \displaystyle f(x) = \sum_{n=0}^{\infty}a_n x^n

    converges for x \in (-1,1). If the limit

    \displaystyle E := \lim_{x \to 1_{-}}f(x)

    exists and is finite then it is called the Euler sum E of the original series.

    For example, consider the divergent series:

    \displaystyle 1 - 1 + 1 - 1 + 1 - 1 + ...

    and multiply the n-th term by x^n (starting at n = 0):

    \displaystyle f(x) = x^0 - x^1 + x^2 - x^3 + x^4 - x^5 + ...
    \displaystyle = \frac{1}{1-(-x)} = \frac{1}{1+x}
    \displaystyle E = \lim_{x \to 1_{-}}{\frac{1}{1+x}} = \frac{1}{2}

    Therefore (according to Euler summation):

    \displaystyle 1 - 1 + 1 - 1 + 1 - 1 + ... = \frac{1}{2}
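    Numerically (a sketch), summing the power series directly at x slightly below 1 reproduces the Euler sum:

```python
def abel_value(x, n_terms=100000):
    """Partial sum of (-1)^n x^n; for |x| < 1 the full series converges to 1/(1+x)."""
    return sum((-x) ** n for n in range(n_terms))

# As x -> 1 from below the values approach 1/2
vals = [abel_value(x) for x in (0.9, 0.99, 0.999)]
```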

  • Padé approximants: Convergence III

    For a function f(z) analytic at z=0, with Taylor series

    \displaystyle f(z) = \sum_{k=0}^{\infty} a_k z^k

    valid within its radius of convergence R, the Padé approximant of order [m/n], denoted \frac{P_m(z)}{Q_n(z)} with \deg P_m \leq m and \deg Q_n \leq n, satisfies

    \displaystyle f(z) - \frac{P_m(z)}{Q_n(z)} = O(z^{m+n+1})

    near z=0. The rational structure of \frac{P_m(z)}{Q_n(z)} allows it to approximate f(z) beyond the disk |z| < R by modeling singularities (e.g., poles or branch points) through the zeros of Q_n(z). This enables analytic continuation into regions where the Taylor series diverges.

    Formally, for a meromorphic function f(z) in a domain D, the diagonal Padé approximants [n/n] often converge to f(z) in D \setminus S, where S is the set of poles of f:

    Let f(z) be meromorphic in a domain D \subseteq \mathbb{C}, with a set of poles S of finite total multiplicity. The diagonal Padé approximants [n/n], defined as rational functions \frac{P_n(z)}{Q_n(z)} satisfying

    \displaystyle f(z) - \frac{P_n(z)}{Q_n(z)} = O(z^{2n+1})

    near z=0, converge uniformly to f(z) on compact subsets of D \setminus S as n \to \infty.

    The zeros of Q_n(z) approximate the poles in S, enabling analytic continuation of f(z) beyond the radius of convergence of its Taylor series.
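    The defining condition can be turned into a small linear system. The sketch below (pure Python, with my own helper names) builds a diagonal [n/n] approximant of f(z) = \ln(1+z)/z, whose Taylor series has radius of convergence 1, and evaluates it at z = 2, outside the disk of convergence:

```python
import math

def pade_diagonal(c, n):
    """Diagonal [n/n] Pade approximant from Taylor coefficients c[0..2n].
    Returns (p, q): numerator and denominator coefficients with q[0] = 1."""
    # q[1..n] solve: sum_{j=0..n} q[j] * c[k-j] = 0 for k = n+1, ..., 2n
    A = [[c[k - j] for j in range(1, n + 1)] for k in range(n + 1, 2 * n + 1)]
    b = [-c[k] for k in range(n + 1, 2 * n + 1)]
    for col in range(n):  # Gaussian elimination with partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for cc in range(col, n):
                A[r][cc] -= f * A[col][cc]
            b[r] -= f * b[col]
    q_tail = [0.0] * n
    for r in range(n - 1, -1, -1):
        q_tail[r] = (b[r] - sum(A[r][cc] * q_tail[cc] for cc in range(r + 1, n))) / A[r][r]
    q = [1.0] + q_tail
    # numerator: the first n+1 coefficients of f(z) * Q(z)
    p = [sum(q[j] * c[m - j] for j in range(min(m, n) + 1)) for m in range(n + 1)]
    return p, q

def horner(coeffs, z):
    v = 0.0
    for a in reversed(coeffs):
        v = v * z + a
    return v

# Taylor coefficients of ln(1+z)/z = 1 - z/2 + z^2/3 - ... (radius of convergence 1)
N = 4
c = [(-1) ** k / (k + 1) for k in range(2 * N + 1)]
p, q = pade_diagonal(c, N)

z = 2.0  # well outside the disk of convergence of the Taylor series
approx = horner(p, z) / horner(q, z)
exact = math.log(1 + z) / z
```

The [4/4] approximant agrees with \ln 3 / 2 \approx 0.5493 to a few parts in a million, while the Taylor series itself diverges at z = 2.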

  • Nuttall’s Padé approximant

    Let f(z) = \sum_{k=0}^\infty C_k z^k be a power series. The denominator Q_{n-1}(z) of the Padé approximant P(n, n-1) is given by Nuttall’s compact form:

    \displaystyle Q_{n-1}(z) = \frac{ \begin{vmatrix} C_2 & C_3 & \cdots & C_n & C_{n+1} \\ C_3 & C_4 & \cdots & C_{n+1} & C_{n+2} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ C_n & C_{n+1} & \cdots & C_{2n-2} & C_{2n-1} \\ z^{n-1} & z^{n-2} & \cdots & z & 1 \end{vmatrix} }{ \begin{vmatrix} C_0 & C_1 & C_2 & \cdots & C_{n-1} \\ C_1 & C_2 & C_3 & \cdots & C_n \\ C_2 & C_3 & C_4 & \cdots & C_{n+1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ C_{n-1} & C_n & C_{n+1} & \cdots & C_{2n-1} \end{vmatrix} }

    The numerator P_n(z) is obtained by satisfying the Padé approximation condition: f(z) Q_{n-1}(z) - P_n(z) = O(z^{2n}).

    The compact form of Nuttall’s Padé approximant P(n, n-1) is particularly valuable in numerical analysis and theoretical physics for its efficiency in computing Padé approximants without explicitly solving large linear systems.

    By expressing the denominator Q_{n-1}(z) as a ratio of determinants, it provides a direct and elegant method to capture the approximant’s poles, which is crucial for analyzing singularities of functions, especially in Stieltjes series or meromorphic functions.

    This formulation simplifies calculations, facilitates the study of convergence properties, and connects Padé approximants to orthogonal polynomials, enabling applications in areas like quantum field theory and asymptotic analysis where rapid computation and singularity detection are essential.
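    As a cross-check of the condition f(z) Q_{n-1}(z) - P_n(z) = O(z^{2n}), here is a direct computation for n = 2 with exact rational arithmetic (the test series \ln(1+z)/z is my own illustrative choice):

```python
from fractions import Fraction

# Taylor coefficients of the illustrative series ln(1+z)/z: C_k = (-1)^k / (k+1)
C = [Fraction((-1) ** k, k + 1) for k in range(4)]

# n = 2: write Q_1(z) = 1 + q1 z; the coefficient of z^3 in f(z) Q_1(z)
# must vanish, i.e. C_3 + q1 C_2 = 0
q1 = -C[3] / C[2]
Q = [Fraction(1), q1]

# Numerator P_2(z): the first three coefficients of f(z) Q_1(z)
P = [sum(Q[j] * C[m - j] for j in range(min(m, 1) + 1)) for m in range(3)]

# The order condition f Q_1 - P_2 = O(z^4): the z^3 coefficient vanishes
z3_coeff = C[3] + q1 * C[2]
```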