Approximaths

  • Zeta summation V

    In a previous post we computed the meromorphic continuation of the Zeta function to the entire complex plane:

    \displaystyle \zeta(s) = \pi^{s/2}\frac{\xi(s)}{\Gamma(s/2)}

    This analytic continuation allows us to evaluate the Zeta function in the whole complex plane. For special (‘known’) values of the Xi function we can compute the corresponding values of the Zeta function. For example, for s = 2 we have \xi(2) = \frac{\pi}{6} and \Gamma(2/2) = \Gamma(1) = 1, and therefore:

    \displaystyle \zeta(2) = \frac{\pi^2}{6}
    \displaystyle \frac{1}{1^2} + \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2} + ... = \frac{\pi^2}{6}

    For s = 4 we have \xi(4) = \frac{\pi^2}{90} and \Gamma(4/2) = \Gamma(2) = 1:

    \displaystyle \zeta(4) = \frac{\pi^4}{90}
    \displaystyle \frac{1}{1^4} + \frac{1}{2^4} + \frac{1}{3^4} + \frac{1}{4^4} + ... = \frac{\pi^4}{90}

    For s = 6 we have \xi(6) = \frac{2\pi^3}{945} and \Gamma(6/2) = \Gamma(3) = 2:

    \displaystyle \zeta(6) = \frac{\pi^6}{945}
    \displaystyle \frac{1}{1^6} + \frac{1}{2^6} + \frac{1}{3^6} + \frac{1}{4^6} + ... = \frac{\pi^6}{945}

    So the value of Zeta at the positive even integers has the general form

    \displaystyle\zeta(2n) = (-1)^{n+1} \frac{B_{2n} (2\pi)^{2n}}{2(2n)!}

    where B_{2n} are the Bernoulli numbers.
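
    As a quick numerical check of this formula, here is a minimal sketch using the mpmath library (the helper name zeta_even and the test values of n are only illustrative):

    ```python
    import mpmath as mp

    def zeta_even(n):
        """zeta(2n) from the Bernoulli-number formula above."""
        return ((-1) ** (n + 1) * mp.bernoulli(2 * n) * (2 * mp.pi) ** (2 * n)
                / (2 * mp.factorial(2 * n)))

    for n in (1, 2, 3):
        # compare with mpmath's built-in zeta: pi^2/6, pi^4/90, pi^6/945
        print(n, zeta_even(n), mp.zeta(2 * n))
    ```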

  • 1 + 2 + 3 + 4 + …

    To calculate the value of the Riemann zeta function at s = -1, we use the representation derived in a previous post:

    \displaystyle \zeta(s) = \pi^{s/2} \frac{\xi(s)}{\Gamma(s/2)}

    The Dirichlet series for the Riemann zeta function is defined for \Re(s) > 1 as:

    \displaystyle \zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}

    For s = -1, the formal series (which is divergent in the classical sense) is:

    \displaystyle \zeta(-1) = \sum_{n=1}^{\infty} \frac{1}{n^{-1}}= \sum_{n=1}^{\infty} n = 1 + 2 + 3 + 4 + \dots

    Substituting s = -1 into the formula, we obtain:

    \displaystyle \zeta(-1) = \pi^{-1/2} \frac{\xi(-1)}{\Gamma(-1/2)}

    Using the recurrence property of the Gamma function, \Gamma(z) = \frac{\Gamma(z+1)}{z}, and the fact that \Gamma(1/2) = \sqrt{\pi}:

    \displaystyle \Gamma(-1/2) = \frac{\Gamma(-1/2+1)}{-1/2} = \frac{\Gamma(1/2)}{-1/2} = -2\sqrt{\pi}

    We exploit the functional equation \xi(s) = \xi(1-s), which implies:

    \displaystyle \xi(-1) = \xi(1 - (-1)) = \xi(2)

    From the definition of \xi(s), we have:

    \displaystyle \xi(2) = \pi^{-2/2} \Gamma(2/2) \zeta(2)

    Using \Gamma(1) = 1 and the solution of the Basel problem presented in the previous post \zeta(2) = \frac{\pi^2}{6}:

    \displaystyle \xi(-1) = \pi^{-1} \cdot 1 \cdot \frac{\pi^2}{6} = \frac{\pi}{6}

    Combining these results into our original expression:

    \displaystyle \zeta(-1) =\frac{1}{\sqrt{\pi}} \cdot \frac{\frac{\pi}{6}}{-2\sqrt{\pi}}
    \displaystyle = \frac{\pi}{-12 \cdot (\sqrt{\pi} \cdot \sqrt{\pi})}
    \displaystyle = \frac{\pi}{-12\pi}
    \displaystyle \boxed{\zeta(-1) = -\frac{1}{12}}

    Though surprising at first, this result turns out to be extremely useful in physics, most notably in string theory and in the calculation of the Casimir effect, where the regularized sum 1 + 2 + 3 + 4 + … = −1/12 naturally appears when computing the vacuum energy between two conducting plates.
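
    As a small numerical sanity check, here is a sketch using mpmath (mp.zeta implements the analytic continuation, so it should reproduce both the boxed value and the combination of \Gamma(-1/2) and \xi(-1) computed above):

    ```python
    import mpmath as mp

    # zeta(-1) from mpmath's analytic continuation
    print(mp.zeta(-1))                        # -0.0833333... = -1/12

    # the same value assembled from the pieces of the derivation above:
    # zeta(-1) = pi^(-1/2) * xi(-1) / Gamma(-1/2), with xi(-1) = xi(2) = pi/6
    xi_m1 = mp.pi / 6
    print(xi_m1 / (mp.sqrt(mp.pi) * mp.gamma(-0.5)))
    ```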

  • Zeta summation: Basel problem

    For s = 2, the zeta series is:

    \displaystyle \zeta(2) = \sum_{n=1}^{\infty} \frac{1}{n^{2}}=  1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \dots

    We would like to evaluate this series using the representation derived in the previous post:

    \displaystyle \zeta(s) = \pi^{s/2} \frac{\xi(s)}{\Gamma(s/2)}

    For s = 2,

    \displaystyle \zeta(2) = \pi^{2/2} \frac{\xi(2)}{\Gamma(2/2)} = \pi \, \xi(2)

    since \Gamma(1) = 1. We need to evaluate

    \displaystyle \xi(2) = \int_0^\infty \frac{\vartheta(u) - 1}{2} \, du.

    We split the integral as follows:

    \displaystyle \int_0^1 \frac{\vartheta(u) - 1}{2} \, du + \int_1^\infty \frac{\vartheta(u) - 1}{2} \, du

    For the second part, we make the change of variable u = 1/v, so du = -dv/v^2, and use the functional equation of the theta function presented in a previous post:

    \displaystyle \vartheta(u) = u^{-1/2} \vartheta(1/u), \qquad u>0.

    This can be rewritten as

    \displaystyle \vartheta(1/v) = \sqrt{v} \, \vartheta(v)

    One can show (after standard calculations) that the two parts combine to yield a convergent integral. The final evaluation, which relies on the Poisson summation formula applied to the Gaussian, gives

    \displaystyle \xi(2) = \frac{\pi}{6}

    We obtain

    \displaystyle \zeta(2) = \pi \cdot \xi(2) = \pi \cdot \frac{\pi}{6} = \frac{\pi^2}{6}

    Finally

    \displaystyle 1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \dots = \frac{\pi^2}{6}

    The Basel problem was solved by Euler in 1735, who showed that the sum equals \displaystyle \frac{\pi^2}{6}.
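
    The value \xi(2) = \frac{\pi}{6} can also be checked numerically. Below is a sketch using mpmath; \vartheta(u) is evaluated with the Jacobi theta function jtheta(3, 0, q), q = e^{-\pi u}, and the functional equation is used for small u to speed up convergence:

    ```python
    import mpmath as mp

    def theta(u):
        if u < 1:
            # functional equation from a previous post: theta(u) = u^(-1/2) theta(1/u)
            return theta(1 / u) / mp.sqrt(u)
        # sum over n in Z of exp(-pi n^2 u) = jtheta(3, 0, exp(-pi u))
        return mp.jtheta(3, 0, mp.exp(-mp.pi * u))

    xi2 = mp.quad(lambda u: (theta(u) - 1) / 2, [0, 1, mp.inf])
    print(xi2, mp.pi / 6)                # both ~ 0.5235987...
    print(mp.pi * xi2, mp.pi ** 2 / 6)   # zeta(2) = pi * xi(2)
    ```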

  • Zeta summation IV

    In the previous post we derived the functional equation of the theta function

    \displaystyle \vartheta(t) = t^{-1/2} \vartheta(1/t), \qquad t>0.

    We observe that:

    \displaystyle \vartheta(t) \leq C t^{-1/2} \text{ as } t \to 0

    and note that

    \displaystyle \sum_{n \geq 1}e^{-\pi n^2 t} \leq \sum_{n \geq 1}e^{-\pi n t} \leq Ce^{-\pi t}, t \geq 1

    Recall the definition of the Gamma function. For s > 0 it is defined by:

    \displaystyle \Gamma(s) = \int_{0}^{\infty} e^{-t} t^{s-1}\,dt
    \displaystyle \Gamma(s/2) = \int_{0}^{\infty} e^{-t} t^{\frac{s}{2}-1}\,dt

    Define u:= \frac{t}{\pi n^2} \implies du = \frac{1}{\pi n^2}\,dt and dt = \pi n^2 \,du

    \displaystyle \Gamma(s/2) = \int_{0}^{\infty} e^{-\pi n^2 u} (\pi n^2 u)^{\frac{s}{2}-1} \pi n^2\,du
    \displaystyle \Gamma(s/2) = \pi^{s/2} n^{s}\int_{0}^{\infty} e^{-\pi n^2 u} u^{\frac{s}{2}-1}\,du
    \displaystyle \pi^{-s/2} \Gamma(s/2) n^{-s} = \int_{0}^{\infty} e^{-\pi n^2 u} u^{\frac{s}{2}-1}\,du \qquad (n \geq 1)

    Observe that for the \vartheta(t) defined above:

    \displaystyle \vartheta(u) = \sum_{n \in \mathbb{Z}} e^{-\pi n^2 u}
    \displaystyle = 2 \sum_{n = 1}^{\infty} e^{-\pi n^2 u} + 1
    \displaystyle \frac{\vartheta(u) -1}{2} = \sum_{n = 1}^{\infty} e^{-\pi n^2 u}

    It follows that:

    \displaystyle \int_{0}^{\infty} u^{\frac{s}{2}-1}  \biggl(\frac{\vartheta(u) -1}{2}\biggr) \,du = \int_{0}^{\infty} u^{\frac{s}{2}-1} \sum_{n = 1}^{ \infty} e^{-\pi n^2 u}  \,du
    \displaystyle = \sum_{n = 1}^{\infty} \int_{0}^{\infty} u^{\frac{s}{2}-1} e^{-\pi n^2 u}\,du
    \displaystyle = \sum_{n = 1}^{\infty} \int_{0}^{\infty} e^{-\pi n^2 u} u^{\frac{s}{2}-1} \,du
    \displaystyle = \sum_{n = 1}^{\infty} \pi^{-s/2} \Gamma(s/2) n^{-s}
    \displaystyle =  \pi^{-s/2} \Gamma(s/2) \sum_{n = 1}^{\infty} n^{-s}
    \displaystyle =  \pi^{-s/2} \Gamma(s/2) \zeta(s)

    Therefore:

    \displaystyle \zeta(s) = \frac{\int_{0}^{\infty} u^{\frac{s}{2}-1}  \biggl(\frac{\vartheta(u) -1}{2}\biggr) \,du}{\pi^{-s/2} \Gamma(s/2) }

    We define the Xi function as:

    \displaystyle \xi(s) := \int_{0}^{\infty} u^{\frac{s}{2}-1}  \biggl(\frac{\vartheta(u) -1}{2}\biggr) \,du

    We finally have

    \boxed{\displaystyle \zeta(s) = \pi^{s/2}\frac{\xi(s)}{\Gamma(s/2)}}

    which gives the meromorphic continuation of the Zeta function to the whole complex plane, with a simple pole at s = 1.
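
    As a numerical illustration of this representation, here is a sketch with mpmath that evaluates the \xi(s) integral for an arbitrary test value s = 3 and compares \pi^{s/2}\xi(s)/\Gamma(s/2) with mpmath's built-in zeta (the theta series is evaluated through jtheta, with the functional equation used for small u):

    ```python
    import mpmath as mp

    def theta(u):
        if u < 1:
            return theta(1 / u) / mp.sqrt(u)            # theta(u) = u^(-1/2) theta(1/u)
        return mp.jtheta(3, 0, mp.exp(-mp.pi * u))      # sum over Z of exp(-pi n^2 u)

    def xi(s):
        return mp.quad(lambda u: u ** (s / 2 - 1) * (theta(u) - 1) / 2, [0, 1, mp.inf])

    s = 3                                               # arbitrary test value with Re(s) > 1
    print(mp.pi ** (s / 2) * xi(s) / mp.gamma(s / 2))   # should agree with...
    print(mp.zeta(s))                                   # ...mpmath's zeta: 1.2020569...
    ```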

  • Zeta summation III

    Recall that in the previous section our goal was to analytically extend the Riemann zeta function.

    To this end, we showed that the Gaussian function e^{-\pi x^2} is invariant under the Fourier transform:

    \displaystyle \mathcal{F}(e^{-\pi x^2})(\xi) = \int_{-\infty}^{\infty} e^{-\pi x^2} e^{-2\pi i x \xi}\,dx = e^{-\pi \xi^2}.

    We recall that the Fourier transform is defined by

    \displaystyle \mathcal{F}(f)(\xi) = \int_{-\infty}^{\infty} f(x)\,e^{-2\pi i x \xi}\,dx.

    For a>0, a change of variables shows that

    \displaystyle \mathcal{F}(f(ax))(\xi) = \frac{1}{a}\,\mathcal{F}\!\left(f\right)\!\left(\frac{\xi}{a}\right).

    Applying this to the Gaussian f(x)=e^{-\pi x^2} with a=\sqrt{t}, we obtain

    \displaystyle \mathcal{F}\!\left(e^{-\pi t x^2}\right)(\xi) = t^{-1/2} e^{-\pi \xi^2/t}.

    Both functions belong to the Schwartz class \mathcal{S} (infinitely differentiable functions on \mathbb{R}^n that, together with all their derivatives, decay faster than any polynomial at infinity), so the Poisson summation formula applies:

    \displaystyle \sum_{n\in\mathbb{Z}} f(n) = \sum_{n\in\mathbb{Z}} \mathcal{F}(f)(n).

    In the present case, this yields

    \displaystyle \sum_{n\in\mathbb{Z}} e^{-\pi t n^2} = \sum_{n\in\mathbb{Z}} t^{-1/2} e^{-\pi n^2/t}.

    The left-hand side defines the theta function

    \displaystyle \vartheta(t) := \sum_{n\in\mathbb{Z}} e^{-\pi t n^2}.

    We immediately obtain the functional equation

    \displaystyle \boxed{\vartheta(t) = t^{-1/2} \vartheta(1/t),\qquad t>0.}
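
    A direct numerical check of this functional equation, as a minimal sketch (the truncation at |n| \le 50 is arbitrary but more than sufficient for t of order one):

    ```python
    import math

    def theta(t, N=50):
        # truncated sum over n = -N..N of exp(-pi n^2 t)
        return sum(math.exp(-math.pi * n ** 2 * t) for n in range(-N, N + 1))

    for t in (0.3, 1.0, 2.5):
        print(t, theta(t), theta(1 / t) / math.sqrt(t))   # the two columns agree
    ```
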
  • Zeta summation II

    Now look at:

    \displaystyle f(z) = e^{-\pi z^2}

    where z \in \mathbb{C}. This function is entire (differentiable in the entire complex plane), so Cauchy's integral theorem applies:

    \displaystyle \int_{\gamma_1} f(z)\,dz = 0

    where \gamma_1 is any closed contour in the complex plane. Let us consider the rectangular contour with vertices -R, R, R + i\xi and -R + i\xi presented in the figure below:

    \displaystyle \int_{-R}^{R} f(z) \,dz + \int_{R}^{R+ i \xi} f(z) \,dz + \int_{R+ i \xi}^{-R+ i \xi} f(z) \,dz + \int_{-R+ i \xi}^{-R} f(z) \,dz = 0

    For the first term of the integral we set: z(t) = t.
    For the second term: z(t) = R + it.
    For the third term: z(t) = t + i \xi.
    For the fourth term: z(t) = -R + it.
    The integral becomes:

    \displaystyle \int_{-R}^{R} f(t) \,dt + \int_{R}^{R+ i \xi} f(R + it) i\,dt + \int_{R+ i \xi}^{-R+ i \xi} f(t + i \xi) \,dt + \int_{-R+ i \xi}^{-R} f(-R + it) i\,dt =
    \displaystyle \int_{-R}^{R} f(t) \,dt + \int_{0}^{\xi} f(R + it) i\,dt + \int_{R}^{-R} f(t + i \xi) \,dt + \int_{\xi}^{0} f(-R + it) i\,dt =
    \displaystyle \int_{-R}^{R} e^{-\pi t^2} \,dt + \int_{0}^{\xi} e^{-\pi (R + it)^2} i\,dt + \int_{R}^{-R} e^{-\pi (t + i \xi)^2} \,dt + \int_{\xi}^{0} e^{-\pi (-R + it)^2} i\,dt = 0

    Observe that the last term is equal to (using u = -t):

    \displaystyle  \int_{\xi}^{0} e^{-\pi (-R + it)^2} i\,dt = -\int_{-\xi}^{0} e^{-\pi (-R - iu)^2} i\,du

    The term (-R - iu)^2 can be written (R + iu)^2:

    \displaystyle  \int_{\xi}^{0} e^{-\pi (-R + it)^2} i\,dt = -\int_{-\xi}^{0} e^{-\pi (R + iu)^2} i\,du

    now:

    \displaystyle -\int_{-\xi}^{0} e^{-\pi (R + iu)^2} i\,du = - \left( -\int_{0}^{-\xi} e^{-\pi (R + iu)^2} i\,du \right) = \int_{0}^{-\xi} e^{-\pi (R + iu)^2} i\,du

    The second and fourth terms do not cancel, but both vanish as R \to \infty, since |e^{-\pi (\pm R + it)^2}| = e^{-\pi (R^2 - t^2)} \leq e^{\pi \xi^2} e^{-\pi R^2} \to 0. Letting R \to \infty, the equation above becomes:

    \displaystyle \int_{-\infty}^{\infty} e^{-\pi t^2}\,dt + \int_{\infty}^{-\infty} e^{-\pi (t + i \xi)^2}\,dt = 0
    \displaystyle \int_{-\infty}^{\infty} e^{-\pi t^2}\,dt = - \int_{\infty}^{-\infty} e^{-\pi (t + i \xi)^2}\,dt
    \displaystyle \int_{-\infty}^{\infty} e^{-\pi t^2}\,dt = \int_{-\infty}^{\infty} e^{-\pi (t^2 + 2 i t \xi - \xi^2)}\,dt
    \displaystyle \int_{-\infty}^{\infty} e^{-\pi t^2}\,dt = e^{\pi \xi^2} \int_{-\infty}^{\infty} e^{-\pi t^2} e^{-2\pi i t \xi}\,dt

    Using

    \displaystyle \lim_{R \to \infty} \int_{-R}^{R} e^{-\pi t^2}\,dt = 1

    Therefore, if \xi \in \mathbb{R}:

    \displaystyle     e^{-\pi \xi^2} = \int_{-\infty}^{\infty} e^{-\pi x^2} e^{-2\pi i x \xi} \,dx

    We conclude from this post and the previous post that the Gaussian function e^{-\pi \xi^2} is its own Fourier transform.
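
    This self-duality is easy to confirm numerically; here is a sketch with mpmath (the test values of \xi are arbitrary):

    ```python
    import mpmath as mp

    def fourier_gaussian(xi):
        # integral over the real line of exp(-pi x^2) exp(-2 pi i x xi)
        f = lambda x: mp.exp(-mp.pi * x ** 2) * mp.exp(-2j * mp.pi * x * xi)
        return mp.quad(f, [-mp.inf, mp.inf])

    for xi in (0, 0.5, 1.3):
        print(xi, fourier_gaussian(xi), mp.exp(-mp.pi * xi ** 2))
    ```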

  • Zeta summation I

    The Zeta function is defined in the complex plane for Re(z) > 1 as:

    \displaystyle \zeta(z)=\frac{1}{1^z}+\frac{1}{2^z}+\frac{1}{3^z}+\frac{1}{4^z}+...=\sum_{n=1}^{\infty}\frac{1}{n^z}

    For example:

    \displaystyle \zeta(2)=\frac{1}{1^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{4^2}+...=\frac{\pi^2}{6}
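
    A few partial sums show how slowly this series converges; a minimal sketch (the cutoffs N are arbitrary):

    ```python
    import math

    for N in (10, 100, 10000):
        partial = sum(1 / n ** 2 for n in range(1, N + 1))
        print(N, partial, math.pi ** 2 / 6 - partial)   # the error decays like 1/N
    ```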

    If we want to use the Zeta function to perform a summation with Re(z) \leq 1, we must use its analytic continuation.

    If f(z) is single-valued in some region of the complex plane, the derivative of f(z) is defined as:

    \displaystyle f'(z)=\lim_{\Delta z\to0}\frac{f(z+\Delta z)-f(z)}{\Delta z}

    If this limit exists we say that f(z) is differentiable at z. If the derivative exists at all points of a region of the complex plane, we say that f(z) is analytic in this region. If the function is analytic inside some circle of convergence C_0, it can be represented by the Taylor series:

    \displaystyle a_0+a_1(z-z_0)+a_2(z-z_0)^2+...

    where z_0 is the center of the circle C_0. By choosing a new point z_1 in C_0 which is the center of a new circle C_1 the function can be represented by the Taylor series:

    \displaystyle b_0+b_1(z-z_1)+b_2(z-z_1)^2+...

    The figure above illustrates analytic continuation. I like to imagine analytic functions as exquisite porcelain—delicately fragile, yet rigid, able to be extended through analytic continuation.

    In order to analytically continue the Zeta function we first consider the following function:

    \displaystyle f(\xi)=e^{-\pi \xi^2}

    where \xi\in\mathbb{R}. We would like to show that this function is equivalent to its Fourier transform

    \displaystyle f(\xi)=e^{-\pi \xi^2}=\int_{-\infty}^{\infty}e^{-\pi x^2}e^{-2\pi i x\xi}\,dx

    if \xi=0:

    \displaystyle f(0)=1=\int_{-\infty}^{\infty}e^{-\pi x^2}\,dx

    if \xi < 0, substituting u = -x:

    \displaystyle \int_{-\infty}^{\infty} e^{-\pi x^2} e^{-2\pi i x \xi}\,dx = \int_{-\infty}^{\infty} e^{-\pi u^2} e^{-2\pi i u (-\xi)}\,du = \int_{-\infty}^{\infty} e^{-\pi u^2} e^{-2\pi i u |\xi|}\,du = e^{-\pi \xi^2}

    The demonstration will be completed in the next post.

    I wish you a merry Christmas 2025!

  • Anharmonic oscillator II

    Remember that the perturbative series for the anharmonic oscillator (see the post Anharmonic oscillator I) is

    \displaystyle E_0(\epsilon) = \frac{1}{2} + \frac{3}{4}\epsilon - \frac{21}{8}\epsilon^2 + \frac{333}{16}\epsilon^3 + \mathcal{O}(\epsilon^4)

    We used Padé approximants to compute the ground state energy E_0(\epsilon). Now we aim to calculate E_0(\epsilon) using Borel summation. The (formal) Borel sum is given by

    \displaystyle \int_{0}^{\infty} e^{-t} \sum_{n=0}^\infty \frac{a_n}{n!} (\epsilon t)^n \, dt.

    Using the first three coefficients of the perturbative series, the truncated Borel transform is approximated by

    \displaystyle \mathcal{B}E_0(t) \approx \frac{1}{2} + \frac{3}{4} t - \frac{21}{16} t^2,

    so the truncated Borel sum reads

    \displaystyle E_0(\epsilon) \approx \int_0^\infty e^{-t} \left(\frac{1}{2} + \frac{3}{4} \epsilon t - \frac{21}{16} \epsilon^2 t^2 \right) dt = \frac{1}{2} + \frac{3}{4} \epsilon - \frac{21}{8} \epsilon^2.

    since

    \displaystyle \int_0^\infty \frac{1}{2} e^{-t} \, dt = \frac{1}{2}
    \displaystyle \int_0^\infty e^{-t} \cdot \frac{3}{4} \epsilon \, t \, dt = \frac{3}{4} \epsilon
    \displaystyle \int_0^\infty e^{-t} \cdot \left( -\frac{21}{16} \epsilon^2 t^2 \right) dt = -\frac{21}{8} \epsilon^2

    This doesn’t really seem like progress, since the first terms of the Borel sum are identical to those of the perturbative expansion. The perturbative expansion E_0(\epsilon) = \sum_{n=0}^\infty a_n \epsilon^n above has coefficients that grow factorially. Bender and Wu showed that for large n,

    \displaystyle a_n \sim -r\, (-1)^n \left(\frac{3}{2}\right)^n \Gamma\!\left(n+\tfrac12\right), \qquad r>0.

    Since

    \displaystyle \Gamma\!\left(n+\tfrac12\right) \sim n!\, n^{-1/2},

    the coefficients a_n grow like n!, so the series has zero radius of convergence and diverges for all \epsilon \neq 0.

    Borel summation improves convergence by dividing out this factorial growth.

    In summary, even with just a few terms, Borel summation correctly recovers the perturbative results for the anharmonic oscillator and turns the divergent series into a well-defined and useful result.
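
    Here is a sketch of the truncated Borel sum evaluated by numerical quadrature with mpmath (the coupling \epsilon = 0.1 is an arbitrary test value); with only three coefficients it simply reproduces the truncated perturbative series, as noted above:

    ```python
    import mpmath as mp

    eps = mp.mpf("0.1")                                   # arbitrary test coupling
    a = [mp.mpf(1) / 2, mp.mpf(3) / 4, -mp.mpf(21) / 8]   # perturbative coefficients

    def borel_transform(t):
        # truncated Borel transform: sum of a_n (eps t)^n / n!
        return sum(a[n] * (eps * t) ** n / mp.factorial(n) for n in range(len(a)))

    E0 = mp.quad(lambda t: mp.exp(-t) * borel_transform(t), [0, mp.inf])
    print(E0)                                             # 1/2 + 3/4 eps - 21/8 eps^2
    print(sum(a[n] * eps ** n for n in range(len(a))))    # same value
    ```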

  • Borel–Écalle summation

    Consider the archetypal divergent series

    \displaystyle \sum_{n=0}^\infty n!\, x^n.

    In a previous post we saw the Borel summation of a_0 + a_1 + a_2 + \dots + a_n + \dots

    \displaystyle \begin{aligned} B &:= \sum_{n=0}^{\infty} \frac{\int_{0}^{\infty} e^{-t}t^{n}dt}{n!} a_n \\ &= \int_{0}^{\infty}e^{-t} \sum_{n=0}^{\infty} \frac{t^n}{n!}a_ndt \end{aligned}

    Step 1 – Ordinary Borel summation

    \displaystyle \sum_{n=0}^{\infty} \frac{t^n}{n!} a_n = \sum_{n=0}^{\infty} \frac{t^n}{n!} n!x^n = \sum_{n=0}^\infty (x t)^n = \frac{1}{1 - x t}

    Ordinary Borel summation fails for x > 0: the analytic continuation of the Borel transform \mathcal{B}(t) = \frac{1}{1 - x t} has a simple pole at t = 1/x lying on the positive real axis. The ordinary Borel integral

    \displaystyle \int_0^\infty e^{-t} \frac{1}{1 - x t}\, dt

    therefore diverges for all x > 0 (the pole blocks the integration path).

    Step 2 – Borel–Écalle summation
    Define the two lateral Borel transforms by deforming the contour slightly above (+) or below (-) the real axis:

    \displaystyle B^\pm(x) = \int_0^{\infty e^{\pm i 0}} e^{-t} \frac{1}{1 - x t}\, dt.

    The notation \infty e^{\pm i 0} means that the upper limit of integration is taken to infinity along a ray that approaches the positive real axis from above (angle +0) or from below (angle -0). This slight contour deformation is necessary when the integrand has a singularity (here at t = 1/x) on the positive real axis itself, which would cause the ordinary Borel integral to be ill-defined.

    These integrals exist, and

    \displaystyle B^+(x) - B^-(x) = \frac{2\pi i \, e^{-1/x}}{x}.

    The Borel–Écalle summation is defined by

    \displaystyle y(x) := \frac{B^+(x) + B^-(x)}{2}

    And in this case:

    \displaystyle y(x) = \frac{e^{-1/x}}{x}\, \mathrm{Ei}\!\left(\frac{1}{x}\right)

    where \mathrm{Ei}(z) = -\,\mathrm{PV}\!\int_{-z}^{\infty} \frac{e^{-u}}{u}\, du is the exponential integral, understood as a Cauchy principal value. Thus, despite the divergence of the original series and the failure of ordinary Borel summation due to the pole on the integration path, the Borel–Écalle median summation assigns a well-defined, finite sum to the series on the positive real axis.
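
    As a numerical sketch with mpmath (mp.ei is the principal-value exponential integral; the truncation of the partial sum near n \approx 1/x is the usual optimal-truncation heuristic, and x = 0.05 is an arbitrary test value):

    ```python
    import mpmath as mp

    x = mp.mpf("0.05")
    # median (principal-value) Borel sum, written via the exponential integral Ei
    median = mp.exp(-1 / x) * mp.ei(1 / x) / x
    # optimally truncated partial sum of sum_n n! x^n (truncate near n ~ 1/x)
    partial = sum(mp.factorial(n) * x ** n for n in range(int(1 / x)))
    print(median, partial)   # agree up to terms of order exp(-1/x)
    ```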

  • ‘Generic’ summation II

    Consider the series:

    \displaystyle 1+0-1+1+0-1+1+0-1+\dots

    using the generic summation (summation “machine”):

    \displaystyle \begin{aligned} s &= \mathcal{S}(1+0-1+1+0-1+1+0-1+\dots) \\ s &= 1 + \mathcal{S}(0-1+1+0-1+1+0-1+\dots) \\ s &= 1 + \mathcal{S}(-1+1+0-1+1+0-1+\dots) \end{aligned}

    Sum these 3 equations term by term:

    \displaystyle \begin{aligned} 3s &= 2 + \mathcal{S}(0+0+0+0+0+0+0+\dots) \\ s &= \frac{2}{3} \end{aligned}

    Now apply Euler summation to the same series:

    \displaystyle 1+0-1+1+0-1+1+0-1+\dots

    We multiply the n-th term by x^n:

    \displaystyle \begin{aligned} f(x) &:= 1x^0+0x^1-1x^2+1x^3+0x^4-1x^5+1x^6+0x^7-1x^8+\dots \\ &= 1-x^2+x^3-x^5+x^6-x^8+\dots \\ &= (1+x^3+x^6+x^9+\dots) - (x^2+x^5+x^8+\dots) \end{aligned}

    In this last step we changed the order of summation of a Taylor series. Inside its radius of convergence this step is valid. We have:

    \displaystyle \begin{aligned} f(x) &= (1+x^3+x^6+x^9+\dots) - (x^2+x^5+x^8+\dots) \\ &= (1+x^3+x^6+x^9+\dots) - x^2(1+x^3+x^6+\dots) \\ &= ((x^3)^0+(x^3)^1+(x^3)^2+(x^3)^3+\dots) - x^2((x^3)^0+(x^3)^1+(x^3)^2+(x^3)^3+\dots) \\ &= \frac{1}{1-x^3} - x^2\frac{1}{1-x^3} \\ &= \frac{1-x^2}{1-x^3} \end{aligned}

    Now calculate the limit as x \to 1 (using l'Hôpital's rule):

    \displaystyle \lim_{x\to 1} \frac{1-x^2}{1-x^3} = \frac{2}{3}

    The generic summation “machine” and the Euler summation deliver the same result.
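
    The Euler (Abel-type) limit can be checked with a short sympy sketch:

    ```python
    import sympy as sp

    x = sp.symbols("x")
    f = (1 - x**2) / (1 - x**3)

    print(sp.limit(f, x, 1))          # 2/3, the value assigned to the series
    for val in (0.9, 0.99, 0.999):
        print(val, f.subs(x, val))    # approaches 2/3 as x -> 1
    ```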