Approximaths

  • Zeta summation IV

    In the previous post we derived the functional equation of the theta function

    \displaystyle \vartheta(t) = t^{-1/2} \vartheta(1/t), \qquad t>0.

    Since \vartheta(1/t) \to 1 as t \to 0, the functional equation gives the bound:

    \displaystyle \vartheta(t) \leq C t^{-1/2} \text{ as } t \to 0

    while for large t the sum decays exponentially:

    \displaystyle \sum_{n \geq 1}e^{-\pi n^2 t} \leq \sum_{n \geq 1}e^{-\pi n t} \leq Ce^{-\pi t}, \qquad t \geq 1

    Recall the definition of the Gamma function: for \mathrm{Re}(s) > 0 it is defined by

    \displaystyle \Gamma(s) = \int_{0}^{\infty} e^{-t} t^{s-1}\,dt
    \displaystyle \Gamma(s/2) = \int_{0}^{\infty} e^{-t} t^{\frac{s}{2}-1}\,dt

    Substitute u := \frac{t}{\pi n^2}, so that t = \pi n^2 u and dt = \pi n^2 \,du:

    \displaystyle \Gamma(s/2) = \int_{0}^{\infty} e^{-\pi n^2 u} (\pi n^2 u)^{\frac{s}{2}-1} \pi n^2\,du
    \displaystyle \Gamma(s/2) = \pi^{s/2} n^{s}\int_{0}^{\infty} e^{-\pi n^2 u} u^{\frac{s}{2}-1}\,du
    \displaystyle \pi^{-s/2} \Gamma(s/2) n^{-s} = \int_{0}^{\infty} e^{-\pi n^2 u} u^{\frac{s}{2}-1}\,du \qquad (n \geq 1)
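    This identity is easy to test numerically; the following short sketch (my addition, Python standard library only, trapezoid rule, sample values s = 4 and n = 2) compares the two sides:

```python
import math

def rhs_integral(s, n, upper=6.0, steps=60000):
    """Trapezoid-rule approximation of int_0^infinity e^{-pi n^2 u} u^{s/2-1} du.

    Written for s >= 2, so the integrand vanishes at u = 0; the exponential
    decay makes the finite cutoff `upper` harmless for n >= 1.
    """
    h = upper / steps
    f = lambda u: math.exp(-math.pi * n * n * u) * u ** (s / 2 - 1)
    total = 0.5 * (f(0.0) + f(upper))
    for k in range(1, steps):
        total += f(k * h)
    return total * h

s, n = 4.0, 2
lhs = math.pi ** (-s / 2) * math.gamma(s / 2) * n ** (-s)  # pi^{-s/2} Gamma(s/2) n^{-s}
rhs = rhs_integral(s, n)
print(lhs, rhs)  # both ≈ 0.00633
```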

    Now recall the theta function \vartheta from the previous post and separate the n = 0 term:

    \displaystyle \vartheta(u) = \sum_{n \in \mathbb{Z}} e^{-\pi n^2 u}
    \displaystyle = 2 \sum_{n = 1}^{\infty} e^{-\pi n^2 u} + 1
    \displaystyle \frac{\vartheta(u) -1}{2} = \sum_{n = 1}^{\infty} e^{-\pi n^2 u}

    It follows (interchanging sum and integral, which is justified by Tonelli's theorem since all terms are positive, for \mathrm{Re}(s) > 1) that:

    \displaystyle \int_{0}^{\infty} u^{\frac{s}{2}-1}  \biggl(\frac{\vartheta(u) -1}{2}\biggr) \,du = \int_{0}^{\infty} u^{\frac{s}{2}-1} \sum_{n = 1}^{ \infty} e^{-\pi n^2 u}  \,du
    \displaystyle = \sum_{n = 1}^{\infty} \int_{0}^{\infty} e^{-\pi n^2 u} u^{\frac{s}{2}-1} \,du
    \displaystyle = \sum_{n = 1}^{\infty} \pi^{-s/2} \Gamma(s/2) n^{-s}
    \displaystyle =  \pi^{-s/2} \Gamma(s/2) \sum_{n = 1}^{\infty} n^{-s}
    \displaystyle =  \pi^{-s/2} \Gamma(s/2) \zeta(s)

    Therefore:

    \displaystyle \zeta(s) = \frac{\int_{0}^{\infty} u^{\frac{s}{2}-1}  \biggl(\frac{\vartheta(u) -1}{2}\biggr) \,du}{\pi^{-s/2} \Gamma(s/2) }

    We define the function \xi(s) (our notation here; it differs from Riemann's usual \xi by elementary factors) as:

    \displaystyle \xi(s) := \int_{0}^{\infty} u^{\frac{s}{2}-1}  \biggl(\frac{\vartheta(u) -1}{2}\biggr) \,du

    We finally have

    \boxed{\displaystyle \zeta(s) = \pi^{s/2}\frac{\xi(s)}{\Gamma(s/2)}}

    which, once the integral is continued using the functional equation of \vartheta (controlling the behaviour near u = 0), provides the meromorphic continuation of the Zeta function to the complex plane, with a simple pole at s = 1.
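    The boxed formula can be tested numerically at a real point where everything converges, say s = 4, where \zeta(4) = \pi^4/90. The following sketch (my addition, standard library only; the helper names are ad hoc) approximates \xi(4) by the trapezoid rule:

```python
import math

def theta_tail(u):
    """sum_{n>=1} e^{-pi n^2 u} = (theta(u) - 1)/2, truncated adaptively."""
    n_max = int(math.sqrt(40.0 / (math.pi * u))) + 2
    return sum(math.exp(-math.pi * k * k * u) for k in range(1, n_max + 1))

def xi_integral(s, upper=30.0, steps=30000):
    """Trapezoid rule for xi(s) = int_0^infinity u^{s/2-1} (theta(u)-1)/2 du.

    For s = 4 the integrand behaves like sqrt(u)/2 near u = 0, so it is
    safe to start the grid at 0 with integrand value 0.
    """
    h = upper / steps
    def f(u):
        return 0.0 if u == 0.0 else u ** (s / 2 - 1) * theta_tail(u)
    total = 0.5 * (f(0.0) + f(upper))
    for k in range(1, steps):
        total += f(k * h)
    return total * h

s = 4.0
zeta_s = math.pi ** (s / 2) * xi_integral(s) / math.gamma(s / 2)
print(zeta_s, math.pi ** 4 / 90)  # both ≈ 1.0823
```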

  • Zeta summation III

    Recall that in the previous section our goal was to analytically extend the Riemann zeta function.

    To this end, we showed that the Gaussian function e^{-\pi x^2} is invariant under the Fourier transform:

    \displaystyle \mathcal{F}(e^{-\pi x^2})(\xi) = \int_{-\infty}^{\infty} e^{-\pi x^2} e^{-2\pi i x \xi}\,dx = e^{-\pi \xi^2}.

    We recall that the Fourier transform is defined by

    \displaystyle \mathcal{F}(f)(\xi) = \int_{-\infty}^{\infty} f(x)\,e^{-2\pi i x \xi}\,dx.

    For a>0, a change of variables shows that

    \displaystyle \mathcal{F}(f(ax))(\xi) = \frac{1}{a}\,\mathcal{F}\!\left(f\right)\!\left(\frac{\xi}{a}\right).

    Applying this to the Gaussian f(x)=e^{-\pi x^2} with a=\sqrt{t}, we obtain

    \displaystyle \mathcal{F}\!\left(e^{-\pi t x^2}\right)(\xi) = t^{-1/2} e^{-\pi \xi^2/t}.

    Both functions belong to the Schwartz class \mathcal{S}, the class of infinitely differentiable functions on \mathbb{R}^n that, together with all their derivatives, decay faster than any polynomial at infinity (rapidly decreasing functions), so the Poisson summation formula applies:

    \displaystyle \sum_{n\in\mathbb{Z}} f(n) = \sum_{n\in\mathbb{Z}} \mathcal{F}(f)(n).

    In the present case, this yields

    \displaystyle \sum_{n\in\mathbb{Z}} e^{-\pi t n^2} = \sum_{n\in\mathbb{Z}} t^{-1/2} e^{-\pi n^2/t}.

    The left-hand side defines the theta function

    \displaystyle \vartheta(t) := \sum_{n\in\mathbb{Z}} e^{-\pi t n^2}.

    We immediately obtain the functional equation

    \displaystyle \boxed{\vartheta(t) = t^{-1/2} \vartheta(1/t),\qquad t>0.}
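    The functional equation is easy to verify numerically; here is a short sketch I am adding (the series is truncated at 60 terms, far more than needed):

```python
import math

def theta(t, terms=60):
    """theta(t) = sum_{n in Z} e^{-pi n^2 t}, truncated symmetrically."""
    return 1.0 + 2.0 * sum(math.exp(-math.pi * n * n * t) for n in range(1, terms + 1))

t = 0.5
lhs = theta(t)
rhs = t ** -0.5 * theta(1.0 / t)
print(lhs, rhs)  # both ≈ 1.41950
```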

  • Zeta summation II

    Now look at:

    \displaystyle f(z) = e^{-\pi z^2}

    where z \in \mathbb{C}. This is an entire function (holomorphic on the whole complex plane), so Cauchy's integral theorem applies:

    \displaystyle \int_{\gamma_1} f(z)\,dz = 0

    where \gamma_1 is any closed contour in the complex plane. Let us consider the rectangular contour with vertices at -R, R, R + i\xi and -R + i\xi, presented in the figure below:

    \displaystyle \int_{-R}^{R} f(z) \,dz + \int_{R}^{R+ i \xi} f(z) \,dz + \int_{R+ i \xi}^{-R+ i \xi} f(z) \,dz + \int_{-R+ i \xi}^{-R} f(z) \,dz = 0

    For the first term of the integral we set: z(t) = t.
    For the second term: z(t) = R + it.
    For the third term: z(t) = t + i \xi.
    For the fourth term: z(t) = -R + it.
    The integral becomes:

    \displaystyle \int_{-R}^{R} f(t) \,dt + \int_{R}^{R+ i \xi} f(R + it) i\,dt + \int_{R+ i \xi}^{-R+ i \xi} f(t + i \xi) \,dt + \int_{-R+ i \xi}^{-R} f(-R + it) i\,dt =
    \displaystyle \int_{-R}^{R} f(t) \,dt + \int_{0}^{\xi} f(R + it) i\,dt + \int_{R}^{-R} f(t + i \xi) \,dt + \int_{\xi}^{0} f(-R + it) i\,dt =
    \displaystyle \int_{-R}^{R} e^{-\pi t^2} \,dt + \int_{0}^{\xi} e^{-\pi (R + it)^2} i\,dt + \int_{R}^{-R} e^{-\pi (t + i \xi)^2} \,dt + \int_{\xi}^{0} e^{-\pi (-R + it)^2} i\,dt = 0

    Observe that the last term is equal to (using u = -t):

    \displaystyle  \int_{\xi}^{0} e^{-\pi (-R + it)^2} i\,dt = -\int_{-\xi}^{0} e^{-\pi (-R - iu)^2} i\,du

    The term (-R - iu)^2 can be written as (R + iu)^2:

    \displaystyle  \int_{\xi}^{0} e^{-\pi (-R + it)^2} i\,dt = -\int_{-\xi}^{0} e^{-\pi (R + iu)^2} i\,du

    now:

    \displaystyle -\int_{-\xi}^{0} e^{-\pi (R + iu)^2} i\,du = - \left( -\int_{0}^{-\xi} e^{-\pi (R + iu)^2} i\,du \right) = \int_{0}^{-\xi} e^{-\pi (R + iu)^2} i\,du

    The second and fourth terms do not cancel identically, but both vanish as R \to \infty, since |e^{-\pi (\pm R + it)^2}| = e^{-\pi (R^2 - t^2)} \leq e^{\pi \xi^2} e^{-\pi R^2} \to 0 for |t| \leq |\xi|. Letting R \to \infty, the equation above becomes:

    \displaystyle \int_{-\infty}^{\infty} e^{-\pi t^2} \,dt + \int_{\infty}^{-\infty} e^{-\pi (t + i \xi)^2}\,dt = 0
    \displaystyle \int_{-\infty}^{\infty} e^{-\pi t^2} \,dt = \int_{-\infty}^{\infty} e^{-\pi (t^2 + 2 i t \xi - \xi^2)}\,dt
    \displaystyle \int_{-\infty}^{\infty} e^{-\pi t^2} \,dt = e^{\pi \xi^2} \int_{-\infty}^{\infty} e^{-\pi t^2} e^{-2\pi i t \xi}\,dt

    Using

    \displaystyle \lim_{R \to \infty} \int_{-R}^{R} e^{-\pi t^2}\,dt = 1

    Therefore, if \xi \in \mathbb{R}:

    \displaystyle     e^{-\pi \xi^2} = \int_{-\infty}^{\infty} e^{-\pi x^2} e^{-2\pi i x \xi} \,dx

    We conclude from this post and the previous post that the Gaussian function e^{-\pi \xi^2} is its own Fourier transform.
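    This self-transform property can also be confirmed numerically; the following sketch (my addition, trapezoid rule on [-8, 8], standard library only) evaluates the Fourier integral at \xi = 1:

```python
import cmath
import math

def fourier_gaussian(xi, half_width=8.0, steps=32000):
    """Trapezoid rule for int e^{-pi x^2} e^{-2 pi i x xi} dx on [-half_width, half_width]."""
    h = 2.0 * half_width / steps
    def f(x):
        return cmath.exp(-math.pi * x * x - 2j * math.pi * x * xi)
    total = 0.5 * (f(-half_width) + f(half_width))
    for k in range(1, steps):
        total += f(-half_width + k * h)
    return total * h

xi = 1.0
val = fourier_gaussian(xi)
print(val.real, math.exp(-math.pi * xi * xi))  # both ≈ 0.0432
```

The imaginary part vanishes by symmetry, as expected for a real even function.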

  • Zeta summation I

    The Zeta function is defined in the complex plane for Re(z) > 1 as:

    \displaystyle \zeta(z)=\frac{1}{1^z}+\frac{1}{2^z}+\frac{1}{3^z}+\frac{1}{4^z}+...=\sum_{n=1}^{\infty}\frac{1}{n^z}

    For example:

    \displaystyle \zeta(2)=\frac{1}{1^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{4^2}+...=\frac{\pi^2}{6}

    If we would like to use the Zeta function to perform a summation with Re(z)\leq 1, we must use the analytic continuation of the Zeta function.

    If f(z) is single-valued in some region of the complex plane, the derivative of f(z) is defined as:

    \displaystyle f'(z)=\lim_{\Delta z\to0}\frac{f(z+\Delta z)-f(z)}{\Delta z}

    If this limit exists we say that f(z) is differentiable at z. If the derivative exists at every point of a region of the complex plane, we say that f(z) is analytic in that region. If the function is analytic inside some circle of convergence C_0, it can be represented by the Taylor series:

    \displaystyle a_0+a_1(z-z_0)+a_2(z-z_0)^2+...

    where z_0 is the center of the circle C_0. By choosing a new point z_1 in C_0 as the center of a new circle C_1, the function can be represented by the Taylor series:

    \displaystyle b_0+b_1(z-z_1)+b_2(z-z_1)^2+...

    The figure above illustrates analytic continuation. I like to imagine analytic functions as exquisite porcelain—delicately fragile, yet rigid, able to be extended through analytic continuation.

    In order to analytically continue the Zeta function we first consider the following function:

    \displaystyle f(\xi)=e^{-\pi \xi^2}

    where \xi\in\mathbb{R}. We would like to show that this function is equal to its own Fourier transform:

    \displaystyle f(\xi)=e^{-\pi \xi^2}=\int_{-\infty}^{\infty}e^{-\pi x^2}e^{-2\pi i x\xi}\,dx

    if \xi=0:

    \displaystyle f(0)=1=\int_{-\infty}^{\infty}e^{-\pi x^2}\,dx

    if \xi<0 (substituting x = -u, the case reduces to a transform evaluated at |\xi| > 0):

    \displaystyle \int_{-\infty}^{\infty} e^{-\pi x^2} e^{-2\pi i x \xi}\,dx = \int_{-\infty}^{\infty} e^{-\pi u^2} e^{-2\pi i u (-\xi)}\,du = \int_{-\infty}^{\infty} e^{-\pi u^2} e^{-2\pi i u |\xi|}\,du = e^{-\pi \xi^2}

    The demonstration will be completed in the next post.

    I wish you a merry Christmas 2025!

  • Anharmonic oscillator II

    Remember (from the post Anharmonic oscillator I) that the perturbative series for the anharmonic oscillator is

    \displaystyle E_0(\epsilon) = \frac{1}{2} + \frac{3}{4}\epsilon - \frac{21}{8}\epsilon^2 + \frac{333}{16}\epsilon^3 + \mathcal{O}(\epsilon^4)

    We used Padé approximants to compute the ground state energy E_0(\epsilon). Now we aim to calculate E_0(\epsilon) using Borel summation. The (formal) Borel sum is given by

    \displaystyle \int_{0}^{\infty} e^{-t} \sum_{n=0}^\infty \frac{a_n}{n!} (\epsilon t)^n \, dt.

    Using the first three coefficients of the perturbative series, the truncated Borel transform is approximated by

    \displaystyle \mathcal{B}E_0(t) \approx \frac{1}{2} + \frac{3}{4} t - \frac{21}{16} t^2,

    so the truncated Borel sum reads

    \displaystyle E_0(\epsilon) \approx \int_0^\infty e^{-t} \left(\frac{1}{2} + \frac{3}{4} \epsilon t - \frac{21}{16} \epsilon^2 t^2 \right) dt = \frac{1}{2} + \frac{3}{4} \epsilon - \frac{21}{8} \epsilon^2.

    since

    \displaystyle \int_0^\infty \frac{1}{2} e^{-t} \, dt = \frac{1}{2}
    \displaystyle \int_0^\infty e^{-t} \cdot \frac{3}{4} \epsilon \, t \, dt = \frac{3}{4} \epsilon
    \displaystyle \int_0^\infty e^{-t} \cdot \left( -\frac{21}{16} \epsilon^2 t^2 \right) dt = -\frac{21}{8} \epsilon^2
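    These integrals are just the identity \int_0^\infty e^{-t} t^n \,dt = n! in disguise. As a numerical cross-check (my own sketch, standard library only), the truncated Borel sum can be evaluated at a sample coupling, say \epsilon = 0.1, and compared with the truncated perturbative series:

```python
import math

def borel_truncated(eps, upper=60.0, steps=60000):
    """Trapezoid rule for int_0^infinity e^{-t} B(t) dt with the truncated
    Borel transform B(t) = 1/2 + (3/4) eps t - (21/16) eps^2 t^2."""
    h = upper / steps
    def f(t):
        return math.exp(-t) * (0.5 + 0.75 * eps * t - 21.0 / 16.0 * eps * eps * t * t)
    total = 0.5 * (f(0.0) + f(upper))
    for k in range(1, steps):
        total += f(k * h)
    return total * h

eps = 0.1
series = 0.5 + 0.75 * eps - 21.0 / 8.0 * eps ** 2
print(borel_truncated(eps), series)  # both ≈ 0.54875
```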

    This doesn’t really seem like progress, since the first terms of the Borel sum are identical to those of the perturbative expansion. The perturbative expansion E_0(\epsilon) = \sum_{n=0}^\infty a_n \epsilon^n above has coefficients that grow factorially. Bender and Wu showed that for large n,

    \displaystyle a_n \sim -r\, (-1)^n \left(\frac{3}{2}\right)^n \Gamma\!\left(n+\tfrac12\right), \qquad r>0.

    Since

    \displaystyle \Gamma\!\left(n+\tfrac12\right) \sim n!\, n^{-1/2},

    the coefficients grow factorially and the series diverges for all \epsilon \neq 0.

    Borel summation improves convergence by dividing out this factorial growth.

    In summary, even with just a few terms, Borel summation correctly recovers the perturbative results for the anharmonic oscillator, and it turns the divergent series into a well-defined and useful result.

  • Borel–Écalle summation

    Consider the archetypal divergent series

    \displaystyle \sum_{n=0}^\infty n!\, x^n.

    In a previous post we saw the Borel summation of a_0 + a_1 + a_2 + ... + a_n + ...:

    \displaystyle \begin{aligned} B &:= \sum_{n=0}^{\infty} \frac{\int_{0}^{\infty} e^{-t}t^{n}dt}{n!} a_n \\ &= \int_{0}^{\infty}e^{-t} \sum_{n=0}^{\infty} \frac{t^n}{n!}a_ndt \end{aligned}

    Step 1 – Ordinary Borel summation

    \displaystyle \sum_{n=0}^{\infty} \frac{t^n}{n!} a_n = \sum_{n=0}^{\infty} \frac{t^n}{n!} n!x^n = \sum_{n=0}^\infty (x t)^n = \frac{1}{1 - x t}

    The ordinary Borel summation fails for x > 0: the analytic continuation of the Borel transform \mathcal{B}(t) = \frac{1}{1-xt} has a simple pole at t = 1/x lying on the positive real axis. The ordinary Borel integral

    \displaystyle \int_0^\infty e^{-t} \frac{1}{1 - x t}\, dt

    therefore diverges for all x > 0 (the pole blocks the integration path).

    Step 2 – Borel–Écalle summation
    Define the two lateral Borel transforms by deforming the contour slightly above (+) or below (-) the real axis:

    \displaystyle B^\pm(x) = \int_0^{\infty e^{\pm i 0}} e^{-t} \frac{1}{1 - x t}\, dt.

    The notation ∞e±i0 means that the upper limit of integration is taken to infinity along a ray that approaches the positive real axis from above (angle +0) or from below (angle −0). This slight contour deformation is necessary when the integrand has a singularity (here at t = 1/x) on the positive real axis itself, which would cause the ordinary Borel integral to be ill-defined.

    These integrals exist, and

    \displaystyle B^+(x) - B^-(x) = \frac{2\pi i \, e^{-1/x}}{x}.

    The Borel–Écalle summation is defined by

    \displaystyle y(x) := \frac{B^+(x) + B^-(x)}{2}

    And in this case one finds

    \displaystyle y(x) = \frac{e^{-1/x}}{x}\,\mathrm{Ei}\!\left(\frac{1}{x}\right)

    where \mathrm{Ei}(z) = \mathrm{p.v.}\!\int_{-\infty}^{z} \frac{e^{u}}{u}\,du is the exponential integral function (taken as a principal value). Thus, despite the divergence of the original series and the failure of ordinary Borel summation due to the pole on the integration path, the Borel–Écalle median summation recovers the exact analytic continuation on the positive real axis.
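    As a numerical illustration (my addition; the lateral contours are approximated by straight rays rotated by \pm 0.1 radians, and the helper names are ad hoc), one can check the jump formula for B^+ - B^- and see that the median is real:

```python
import cmath
import math

def lateral_borel(x, angle, radius=60.0, steps=120000):
    """Borel integral of e^{-t}/(1 - x t) along the ray t = s e^{i angle}, s in [0, radius]."""
    h = radius / steps
    phase = cmath.exp(1j * angle)
    def f(s):
        t = s * phase
        return cmath.exp(-t) / (1.0 - x * t) * phase
    total = 0.5 * (f(0.0) + f(radius))
    for k in range(1, steps):
        total += f(k * h)
    return total * h

x = 0.5  # pole of the Borel transform at t = 1/x = 2
b_plus = lateral_borel(x, +0.1)
b_minus = lateral_borel(x, -0.1)
jump = b_plus - b_minus
expected = 2j * math.pi * math.exp(-1.0 / x) / x
median = (b_plus + b_minus) / 2.0
print(abs(jump - expected))  # ≈ 0
print(median.imag)           # ≈ 0: the median summation is real for x > 0
```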

  • ‘Generic’ summation II

    Consider the series:

    \displaystyle 1+0-1+1+0-1+1+0-1+\dots

    using the generic summation (summation “machine”):

    \displaystyle \begin{aligned} s &= \mathcal{S}(1+0-1+1+0-1+1+0-1+\dots) \\ s &= 1 + \mathcal{S}(0-1+1+0-1+1+0-1+\dots) \\ s &= 1 + \mathcal{S}(-1+1+0-1+1+0-1+\dots) \end{aligned}

    Sum these 3 equations term by term:

    \displaystyle \begin{aligned} 3s &= 2 + \mathcal{S}(0+0+0+0+0+0+0+\dots) \\ s &= \frac{2}{3} \end{aligned}

    Now apply Euler summation to the same series:

    \displaystyle 1+0-1+1+0-1+1+0-1+\dots

    We multiply each n-term with x^n:

    \displaystyle \begin{aligned} f(x) &:= 1x^0+0x^1-1x^2+1x^3+0x^4-1x^5+1x^6+0x^7-1x^8+\dots \\ &= 1-x^2+x^3-x^5+x^6-x^8+\dots \\ &= (1+x^3+x^6+x^9+\dots) - (x^2+x^5+x^8+\dots) \end{aligned}

    In this last step we rearranged the terms of the Taylor series; inside its radius of convergence (|x| < 1) this step is valid. We have:

    \displaystyle \begin{aligned} f(x) &= (1+x^3+x^6+x^9+\dots) - (x^2+x^5+x^8+\dots) \\ &= (1+x^3+x^6+x^9+\dots) - x^2(1+x^3+x^6+\dots) \\ &= ((x^3)^0+(x^3)^1+(x^3)^2+(x^3)^3+\dots) - x^2((x^3)^0+(x^3)^1+(x^3)^2+(x^3)^3+\dots) \\ &= \frac{1}{1-x^3} - x^2\frac{1}{1-x^3} \\ &= \frac{1-x^2}{1-x^3} \end{aligned}

    Now calculate the limit (using l'Hôpital's rule):

    \displaystyle \lim_{x\to 1} \frac{1-x^2}{1-x^3} = \frac{2}{3}

    The generic summation “machine” and the Euler summation deliver the same result.
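    Both results can be cross-checked numerically (my addition, standard library only): evaluate the weighted series at x slightly below 1 and compare with the closed form, which tends to 2/3:

```python
# Coefficient pattern 1, 0, -1 repeating; evaluate the weighted series just
# below x = 1 and compare with the closed form (1 - x^2)/(1 - x^3).
coeffs = [1, 0, -1]
x = 0.9999
partial = sum(coeffs[n % 3] * x ** n for n in range(200000))
closed = (1 - x ** 2) / (1 - x ** 3)
print(partial, closed)  # both ≈ 0.6667
```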

  • ‘Generic’ summation I

    Let us imagine that we have engineered a summation ‘machine’ called \mathcal{S}() and consider the geometric series:

    \displaystyle 1+x+x^2+x^3+x^4+\dots

    Apply our summation ‘machine’ on this series and call the result s:

    \displaystyle s = \mathcal{S}(1+x+x^2+x^3+x^4+\dots)

    We assign the following two properties for the ‘machine’ \mathcal{S}:

    \displaystyle \mathcal{S}(a_0+a_1+a_2+\dots) = a_0 + \mathcal{S}(a_1+a_2+\dots) \quad \text{(first property)}
    \displaystyle \mathcal{S}(\sum(\alpha a_n) + \sum(\beta b_n)) = \alpha \mathcal{S}(\sum a_n) + \beta \mathcal{S}(\sum b_n) \quad \text{(second property)}

    where \alpha and \beta are constants. Equipped with the machine \mathcal{S}() and its two properties consider again the geometric series:

    \displaystyle \begin{aligned} s &= \mathcal{S}(1+x+x^2+x^3+x^4+\dots) &\quad\text{(definition)}\\ s &= 1+ \mathcal{S}(x+x^2+x^3+x^4+\dots) &\quad\text{(first property)}\\ s &= 1+ x\mathcal{S}(1+x+x^2+x^3+\dots) &\quad\text{(second property)}\\ s &= 1+ xs &\quad\text{(definition)}\\ s &= \frac{1}{1-x} \end{aligned}

    Consider the series:

    \displaystyle 1-1+1-1+1-1+\dots

    If we use traditional, ‘rigorous’, summation techniques we will conclude that this alternating series does not converge. Now apply the summation “machine” as described above:

    \displaystyle \begin{aligned} s &= \mathcal{S}(1-1+1-1+1-1+\dots ) \\ s &= 1+ \mathcal{S}(-1+1-1+1-1+\dots ) \\ s &= 1 -\mathcal{S}(1-1+1-1+1-\dots ) \\ s &= 1-s \\ s &= \frac{1}{2} \end{aligned}
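    The machine's value agrees with an Abel-style evaluation (my addition): multiply the n-th term by x^n and let x approach 1 from below:

```python
# Abel-style check: sum (-1)^n x^n for x just below 1 equals 1/(1 + x),
# which tends to 1/2, the value assigned by the summation machine.
x = 0.9999
partial = sum((-1) ** n * x ** n for n in range(200000))
print(partial, 1.0 / (1.0 + x))  # both ≈ 0.5000
```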

  • Borel summation

    Let us introduce Borel summation by first recalling the following identity:

    \displaystyle n!= \int_{0}^{\infty} e^{-t}t^{n}dt
    \displaystyle 1 = \frac{\int_{0}^{\infty} e^{-t}t^{n}dt}{n!}

    We would like to sum the following series:

    \displaystyle \sum_{n=0}^{\infty}a_n = a_0 + a_1 + a_2 + a_3 + a_4 +...
    \displaystyle \phantom{\sum_{n=0}^{\infty}a_n} = 1a_0 + 1a_1 + 1a_2 + 1a_3 + 1a_4 +...
    \displaystyle \phantom{\sum_{n=0}^{\infty}a_n} = \sum_{n=0}^{\infty} \frac{\int_{0}^{\infty} e^{-t}t^{n}dt}{n!} a_n

    The Borel sum B is defined as follows:

    \displaystyle B := \sum_{n=0}^{\infty} \frac{\int_{0}^{\infty} e^{-t}t^{n}dt}{n!} a_n
    \displaystyle \phantom{B :=} = \int_{0}^{\infty}e^{-t} \sum_{n=0}^{\infty} \frac{t^n}{n!}a_n\,dt

    Dividing by n! gives the transformed series a much better chance of converging. In the section on Euler summation we saw that:

    \displaystyle E(1 - 1 + 1 - 1 + 1 - 1 + ...) = \frac{1}{2}

    Let us calculate the corresponding Borel sum:

    \displaystyle B(1 - 1 + 1 - 1 + 1 - 1 + ...) = \int_{0}^{\infty}e^{-t} \sum_{n=0}^{\infty} \frac{t^n}{n!}(-1)^n \,dt
    \displaystyle \phantom{B(1 - 1 + 1 - 1 + 1 - 1 + ...)} = \int_{0}^{\infty}e^{-t} \sum_{n=0}^{\infty} \frac{(-t)^n}{n!} \,dt
    \displaystyle \phantom{B(1 - 1 + 1 - 1 + 1 - 1 + ...)} = \int_{0}^{\infty} e^{-t}e^{-t}\,dt
    \displaystyle \phantom{B(1 - 1 + 1 - 1 + 1 - 1 + ...)} = \int_{0}^{\infty} e^{-2t}\,dt
    \displaystyle \phantom{B(1 - 1 + 1 - 1 + 1 - 1 + ...)} = -\frac{1}{2} e^{-2t}\Big|_{0}^{\infty}
    \displaystyle \phantom{B(1 - 1 + 1 - 1 + 1 - 1 + ...)} = 0 - (-\frac{1}{2})
    \displaystyle \phantom{B(1 - 1 + 1 - 1 + 1 - 1 + ...)} = \frac{1}{2}

    and therefore:

    \displaystyle B(1-1+1-1+1-1+...) = E(1-1+1-1+1-1+...)
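    This computation is easy to confirm numerically (a small sketch of mine, standard library only):

```python
import math

def borel_grandi(upper=40.0, steps=40000):
    """Trapezoid rule for int_0^infinity e^{-2t} dt, the Borel integral above."""
    h = upper / steps
    total = 0.5 * (1.0 + math.exp(-2.0 * upper))
    for k in range(1, steps):
        total += math.exp(-2.0 * k * h)
    return total * h

print(borel_grandi())  # ≈ 0.5
```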

    Consider the series:

    \displaystyle -1+2!-3!+4!-5!+6!-...
    \displaystyle B(-1+2!-3!+4!-...) = \int_{0}^{\infty}e^{-t} \sum_{n=1}^{\infty} \frac{t^n}{n!}(-1)^n n! \,dt
    \displaystyle \phantom{B(-1+2!-3!+4!-...)} = \int_{0}^{\infty}e^{-t} \sum_{n=1}^{\infty} (-t)^n \,dt

    \sum_{n=1}^{\infty} (-t)^n converges for |t| < 1 to -\frac{t}{1+t}, and its analytic continuation to t \geq 0 gives:

    \displaystyle B = -\int_{0}^{\infty} \frac{te^{-t}}{(1+t)} \,dt \approx -0.40365
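    The quoted numerical value can be verified directly (my addition, trapezoid rule, standard library only):

```python
import math

def borel_factorial_series(upper=50.0, steps=100000):
    """Trapezoid rule for -int_0^infinity t e^{-t} / (1 + t) dt."""
    h = upper / steps
    def f(t):
        return t * math.exp(-t) / (1.0 + t)
    total = 0.5 * (f(0.0) + f(upper))
    for k in range(1, steps):
        total += f(k * h)
    return -total * h

print(borel_factorial_series())  # ≈ -0.40365
```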

    If a series is summable in the sense of Euler, then it is also summable in the sense of Borel, and both summation methods yield the same value. The converse is false: there exist series that are summable in the sense of Borel but not in the sense of Euler. In other words, Borel summation is more powerful, as it applies to more strongly divergent series.

  • Euler summation II

    In the previous post, we introduced Euler summation. The following are two examples where it fails to produce a finite result.

    Consider the divergent series:

    \displaystyle 0 + 1 + 2 + 3 + 4 + \dots

    Define:

    \displaystyle \begin{array}{rcl} f(x) &=& 0x^{0} + 1x^{1} + 2x^{2} + 3x^{3} + 4x^{4} + \dots \\ &=& \sum_{n=0}^{\infty} nx^{n} = \frac{x}{(1-x)^{2}} \\ \end{array}

    The Euler sum is:

    \displaystyle E = \lim_{x \to 1_{-}} \frac{x}{(1-x)^{2}} = \infty
    \displaystyle E(0+ 1 + 2 + 3 + 4 + \dots) = \infty

    Now consider the divergent series:

    \displaystyle 1 + 4 + 9 + 16 + 25 + 36 + \dots

    Define:

    \displaystyle \begin{array}{rcl} f(x) &=& 1^{2}x^{1}+2^{2}x^{2}+3^{2}x^{3}+4^{2}x^{4}+5^{2}x^{5}+6^{2}x^{6}+\dots \\ &=& \sum_{n=1}^{\infty} n^{2} x^{n} = \frac{x(1+x)}{(1-x)^{3}} \\  \end{array}

    The Euler sum is:

    \displaystyle E = \lim_{x \to 1_{-}}\frac{x(1+x)}{(1-x)^{3}} = \infty
    \displaystyle E(1^{2} + 2^{2} + 3^{2} + 4^{2} + \dots) = \infty
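    The two closed forms used above can be checked numerically inside the radius of convergence, and one can watch the limit blow up as x approaches 1 from below (a short sketch I am adding, standard library only):

```python
# Check the two generating-function identities at x = 0.5 (inside |x| < 1),
# then watch the closed form blow up as x approaches 1 from below.
x = 0.5
s1 = sum(n * x ** n for n in range(1, 200))        # sum n x^n
s2 = sum(n * n * x ** n for n in range(1, 200))    # sum n^2 x^n
print(s1, x / (1 - x) ** 2)                  # both ≈ 2.0
print(s2, x * (1 + x) / (1 - x) ** 3)        # both ≈ 6.0
print(0.999 / (1 - 0.999) ** 2)              # ≈ 1e6: blows up as x -> 1-
```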

    Advantages

    Regularization of slowly divergent series:
    Euler summation can assign a finite value to some divergent series that oscillate or diverge slowly, such as

    \displaystyle 1 - 1 + 1 - 1 + \dots

    where E(series) = \tfrac{1}{2}.

    Improved convergence:
    For many convergent series, Euler transformation accelerates convergence, making it useful for numerical computations.

    Analytic continuation link:
    It provides a bridge between ordinary summation and more advanced summation methods (e.g. Borel or zeta regularization).

    Disadvantages

    Limited domain of applicability:
    Euler summation fails for series that diverge too rapidly, such as

    \displaystyle 1 + 2 + 3 + 4 + \dots

    where E(series) = \infty.

    Not uniquely defined for all divergent series:
    Some series cannot be assigned a finite Euler sum, or the method may yield inconsistent results depending on the transformation order.

    Weaker than analytic regularization:
    Compared to zeta or Borel summation, Euler’s method handles fewer classes of divergent series and lacks a rigorous analytic continuation framework.