Category: General

  • Summation Techniques for Perturbation Theory

    After several months of exploring various summation and acceleration methods, it is time to synthesize these tools before applying them to one of their most important domains: perturbation theory.

    When a power series \displaystyle \sum_{n=0}^\infty a_n z^{n} has a small or even zero radius of convergence, or converges too slowly to be numerically useful, we require more than standard arithmetic. Here is a recap of the methods we have covered and how they bridge the gap between formal series and numerical values.


    Summation Methods: Taming Divergence

    • Borel and Borel–Écalle: The Borel transform converts factorially growing coefficients into a convergent series in the Borel plane, which is then resummed by a Laplace-type integral. The Borel–Écalle framework further allows us to handle singularities via resurgence theory, providing a unique “resummed” value even for non-Borel-summable series.
    • Euler and Zeta summation: These methods assign finite values to divergent sums by analytic continuation. While Euler summation is ideal for alternating series, Zeta summation provides a powerful way to handle the infinite sums that frequently appear in quantum vacuum energy calculations.
    • Generic summation: Beyond these classical approaches, we have also explored the idea of generic summation, which provides a unifying framework for assigning values to divergent or slowly convergent series.
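
    To make the first of these methods tangible, here is a minimal Python sketch (our own illustration, not an example worked out in earlier posts) of Borel summation applied to the classic Euler series \displaystyle \sum_{n=0}^\infty (-1)^n\, n!\, x^n, whose radius of convergence is zero. Its Borel transform sums to 1/(1+t), and the cutoff and quadrature scheme below are pragmatic choices, not canonical ones.

```python
import math

def borel_sum_euler_series(x, t_max=60.0, n=200_000):
    """Borel sum of the divergent Euler series sum_{k>=0} (-1)^k k! x^k (x > 0).

    The Borel transform sum_k (-1)^k t^k equals 1/(1 + t) for |t| < 1 and
    continues analytically to the whole positive axis, so the Borel sum is
    the Laplace-type integral  int_0^inf e^(-t) / (1 + x t) dt.
    Evaluated here with composite Simpson's rule on [0, t_max]; the tail
    beyond t_max is of order e^(-t_max) and is neglected.
    """
    f = lambda t: math.exp(-t) / (1.0 + x * t)
    h = t_max / n
    acc = f(0.0) + f(t_max)
    for i in range(1, n):
        acc += (4 if i % 2 else 2) * f(i * h)
    return acc * h / 3.0

def partial_sums(x, n_terms):
    """Partial sums of the series itself: they first approach the Borel
    sum, then blow up once k! overtakes x**(-k)."""
    out, s = [], 0.0
    for k in range(n_terms):
        s += (-1) ** k * math.factorial(k) * x ** k
        out.append(s)
    return out

# At x = 0.1 the Borel sum is about 0.91563; truncating the divergent
# series near its smallest term (k ~ 1/x = 10) lands within ~2e-4 of it.
```

    The agreement between optimal truncation of the divergent series and the Borel integral is a small instance of the consistency between independent methods stressed below.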

    Acceleration Techniques: Enhancing Convergence

    • Shanks Transformation and Padé Approximants: These nonlinear transformations (often related to continued fractions) excel at capturing the behavior of functions beyond their radius of convergence, particularly when poles are present.
    • Richardson Extrapolation: A fundamental tool of numerical analysis that cancels the leading error terms of a sequence S_n, allowing us to estimate its limit with much higher precision from only a few terms.
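
    Both accelerators fit in a few lines of Python. The sketch below is a textbook-style illustration (the alternating harmonic series and the Basel series are chosen purely for demonstration): iterated Shanks transformations are applied to partial sums converging to ln 2, and the classical Richardson formula, which assumes an error expansion in powers of 1/n, to partial sums converging to π²/6.

```python
import math

def shanks(seq):
    """One Shanks transformation S(A_n): exact when the error is a single
    geometric transient A_n = A + c * q**n; iterating removes several."""
    out = []
    for i in range(1, len(seq) - 1):
        denom = seq[i + 1] - 2.0 * seq[i] + seq[i - 1]
        out.append(seq[i + 1] - (seq[i + 1] - seq[i]) ** 2 / denom)
    return out

def richardson(seq, N):
    """Richardson extrapolation assuming A_n ~ A + a1/n + ... + aN/n**N;
    seq[k] must hold A_{k+1}, and the first N + 1 entries are used."""
    return sum(
        seq[k] * (k + 1) ** N * (-1) ** (k + N)
        / (math.factorial(k) * math.factorial(N - k))
        for k in range(N + 1)
    )

# Alternating harmonic series: partial sums converge to ln 2, slowly.
a = [sum((-1) ** (j + 1) / j for j in range(1, n + 1)) for n in range(1, 12)]
s = a
for _ in range(4):            # iterate Shanks: 11 -> 9 -> 7 -> 5 -> 3 values
    s = shanks(s)
# s[-1] is far closer to ln 2 than the raw partial sum a[-1].

# Basel series: partial sums of 1/k**2 converge to pi**2/6 like 1/n.
b = [sum(1.0 / j ** 2 for j in range(1, n + 1)) for n in range(1, 12)]
r = richardson(b, 10)         # uses eleven partial sums
```

    With eleven raw partial sums the alternating series is only good to about two digits; four Shanks iterations squeeze several more digits out of the same data, and Richardson does the same for the monotone Basel series.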

    An important consistency check underlies all these techniques: whenever different summation or acceleration methods apply to the same series, they agree on a common value. This convergence of independent approaches is not accidental: it reflects the fact that these methods capture an underlying analytic object beyond the formal series itself. When they work, they do not merely assign a value; they reveal a coherent extension of the function that the original divergent expansion was hinting at.

  • Motivation

    The general idea of this blog is to present mathematical techniques for solving complicated or even impossible-to-solve-exactly problems using approximation methods. In fact, the vast majority of problems encountered in mathematics or physics cannot be solved exactly, and most problems that could be solved exactly have already been solved.

    We’ll primarily focus on a technique known as perturbation theory. Informally, perturbation theory is a method for tackling complex problems by reducing them to a sequence of simpler ones.

    The basic principle: break a complex problem into many (potentially infinitely many) simpler ones, then “glue” their solutions together to approximate the solution to the original.
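
    A toy example, chosen here only because its exact answer is available for comparison, makes the principle concrete: perturb the exactly solvable equation x² = 1 by a term εx and compute corrections to the root order by order in ε.

```python
import math

# Perturbed problem: x**2 + eps*x - 1 = 0. At eps = 0 the "simple" problem
# x**2 = 1 has the root x = 1. Substituting x = 1 + a1*eps + a2*eps**2 + ...
# and matching powers of eps gives a1 = -1/2 and a2 = 1/8.

def exact_root(eps):
    """The exact positive root, available only because this toy problem
    happens to be solvable in closed form."""
    return (-eps + math.sqrt(eps ** 2 + 4.0)) / 2.0

def perturbative_root(eps, order=2):
    """Perturbative approximation: glue the order-by-order corrections."""
    coeffs = [1.0, -0.5, 0.125]          # a0, a1, a2
    return sum(c * eps ** k for k, c in enumerate(coeffs[: order + 1]))

# For eps = 0.1 the two-correction expansion already matches the exact
# root to about 1e-6 (the first neglected term is -eps**4 / 128).
```

    Each coefficient comes from solving a trivial linear equation, yet the glued-together series tracks the exact root closely for small ε; this is the pattern perturbation theory repeats in settings where no exact solution exists.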

    Before presenting perturbation theory, we need to develop techniques for summing potentially divergent series, so we will begin with various approximation and summation methods. Our approach is closer to the methods of the natural sciences than to the rigorous ‘Theorem–Proof’ style of pure mathematics. This style, although less rigorous, will enable us to solve problems that resist the formal approach. In this spirit, our presentation resembles that of a geologist or chemist discovering new minerals or elements: carefully recording observations in a field journal or lab notebook, making sense of what is seen, and developing concepts to address real problems.

    In summary, we would like to present analytical approximation methods for solving problems that are difficult or impossible to solve exactly.

    This particular approach to mathematics is not new; it has already been described by the great French mathematician Henri Poincaré in Chapter 8 of his book Les méthodes nouvelles de la mécanique céleste (1892). Henri Poincaré begins this chapter with the following comment concerning the summation of series:

    “There is a sort of misunderstanding between geometers and astronomers about the meaning of the word convergence. Geometers, preoccupied with perfect rigour and often too indifferent to the length of inextricable calculations, the possibility of which they conceive without thinking of actually undertaking them, say that a series is convergent when the sum of the terms tends towards a given limit, even if the first terms decrease very slowly. Astronomers, on the other hand, are accustomed to saying that a series converges when the first twenty terms, for example, decrease very rapidly, even if the subsequent terms were to increase indefinitely.

    So, to take a simple example, let’s consider the two series with general terms \displaystyle \frac{1000^n}{n!} and \displaystyle \frac{n!}{1000^n}.

    Geometers will say that the first converges, and even that it converges rapidly, because the millionth term is much smaller than the 999,999th; but they will regard the second as divergent, because the general term can grow beyond any limit.

    Astronomers, on the other hand, will regard the first series as divergent, because the first 1000 terms are increasing; and the second as convergent because the first 1000 terms are decreasing and this decrease is initially very rapid.

    Both rules are legitimate: the first, in theoretical research; the second, in numerical applications. Both must prevail, but in two separate domains whose boundaries must be clearly defined.”

    Another historical example from theoretical physics vividly illustrates Poincaré’s point. In quantum electrodynamics (QED), the perturbative series expansions are divergent in the “geometer’s” sense. However, by keeping only the first few terms — since calculating further terms becomes prohibitively difficult — physicists obtain predictions that match experimental results with astonishing precision.

    In 1965, the Nobel Prize in Physics was awarded to Feynman, Schwinger, and Tomonaga for their groundbreaking work in QED using these techniques. Poincaré would likely have counted them among the “astronomers.”