The general idea of this blog is to present mathematical techniques for approximating the solutions of problems that are complicated, or even impossible, to solve exactly. In fact, the vast majority of problems encountered in mathematics or physics cannot be solved exactly, and most of those that can be have already been solved.
We’ll primarily focus on a technique known as perturbation theory. Informally, perturbation theory is a method for tackling complex problems by reducing them to a sequence of simpler ones.
The basic principle: break a complex problem into many (potentially infinitely many) simpler ones, then “glue” their solutions together to approximate the solution to the original.
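To make this principle concrete, here is a small sketch (a toy example of ours, not one taken from the text): the positive root of the quadratic $x^2 + \varepsilon x - 1 = 0$ can be written as a power series $x(\varepsilon) = x_0 + x_1 \varepsilon + x_2 \varepsilon^2 + \dots$; matching powers of $\varepsilon$ produces a sequence of simple equations, one per order, whose solutions are then glued together into the series.

```python
import math

# Toy perturbation problem (hypothetical example, not from the post):
# find the positive root of  x^2 + eps*x - 1 = 0.
# Write x(eps) = x0 + x1*eps + x2*eps^2 + ... and match powers of eps:
#   eps^0 :  x0^2 - 1 = 0             ->  x0 = 1
#   eps^1 :  2*x0*x1 + x0 = 0         ->  x1 = -1/2
#   eps^2 :  2*x0*x2 + x1^2 + x1 = 0  ->  x2 = 1/8
coeffs = [1.0, -0.5, 0.125]  # x0, x1, x2, solved order by order

def perturbative_root(eps, coeffs):
    """Glue the simple solutions together: sum the truncated series."""
    return sum(c * eps**n for n, c in enumerate(coeffs))

def exact_root(eps):
    """Exact positive root of x^2 + eps*x - 1 = 0, for comparison."""
    return (-eps + math.sqrt(eps**2 + 4)) / 2

for eps in (0.01, 0.1, 0.5):
    approx, exact = perturbative_root(eps, coeffs), exact_root(eps)
    print(f"eps={eps}: series={approx:.7f}  exact={exact:.7f}  error={abs(approx - exact):.1e}")
```

For $\varepsilon = 0.1$ the three-term series already agrees with the exact root to about one part in a million; each additional order is just another simple problem glued onto the previous ones.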
Before presenting perturbation theory itself, we need techniques for summing series that may diverge, so we will begin with various approximation and summation methods. Our approach is closer to that of the natural sciences than to the rigorous ‘Theorem-Proof’ style of pure mathematics. Although less rigorous, this style will let us solve problems that are difficult to reach by the rigorous method. In this spirit, our presentation resembles that of a geologist or chemist discovering new minerals or elements: carefully recording observations in a field journal or lab notebook, making sense of what is seen, and developing concepts to address real problems.
In summary, we would like to present analytical approximation methods for solving problems that are difficult or impossible to solve exactly.
This particular approach to mathematics is not new; it was already described by the great French mathematician Henri Poincaré in Chapter 8 of his book Les méthodes nouvelles de la mécanique céleste (1892). Poincaré opens that chapter with the following remark on the summation of series:
“There is a sort of misunderstanding between geometers and astronomers about the meaning of the word convergence. Geometers, preoccupied with perfect rigour and often too indifferent to the length of inextricable calculations, the possibility of which they conceive without thinking of actually undertaking them, say that a series is convergent when the sum of the terms tends towards a given limit, even if the first terms decrease very slowly. Astronomers, on the other hand, are accustomed to saying that a series converges when the first twenty terms, for example, decrease very rapidly, even if the subsequent terms were to increase indefinitely.
So, to take a simple example, let’s consider the two series with general terms

$$\frac{1000^n}{n!} \quad \text{and} \quad \frac{n!}{1000^n}.$$
Geometers will say that the first converges, and even that it converges rapidly, because the millionth term is much smaller than the 999,999th; but they will regard the second as divergent, because the general term can grow beyond any limit.
Astronomers, on the other hand, will regard the first series as divergent, because the first 1000 terms are increasing; and the second as convergent because the first 1000 terms are decreasing and this decrease is initially very rapid.
Both rules are legitimate: the first, in theoretical research; the second, in numerical applications. Both must prevail, but in two separate domains whose boundaries must be clearly defined.”
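Poincaré’s two series are easy to probe numerically. Here is a small sketch (ours, not part of the quotation) that prints the logarithm of the general term of each series, using `math.lgamma` to avoid overflow: the terms of the first series grow until roughly $n = 1000$ and then collapse (the series converges), while those of the second shrink rapidly at first and then grow without bound (the series diverges).

```python
import math

def log_term_first(n):
    """log of the n-th term 1000**n / n! of the first series."""
    return n * math.log(1000) - math.lgamma(n + 1)

def log_term_second(n):
    """log of the n-th term n! / 1000**n of the second series."""
    return math.lgamma(n + 1) - n * math.log(1000)

for n in (1, 10, 100, 999, 1000, 1001, 2000, 10_000):
    print(f"n={n:6d}  log(1000^n/n!) = {log_term_first(n):10.1f}"
          f"   log(n!/1000^n) = {log_term_second(n):10.1f}")
# First column: rises until n ~ 1000, then falls off -- the geometer's convergence.
# Second column: falls rapidly at first, then rises without bound -- divergence in disguise.
```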
Another historical example from theoretical physics vividly illustrates Poincaré’s point. In quantum electrodynamics (QED), the perturbative series expansions are divergent in the “geometer’s” sense. However, by keeping only the first few terms — since calculating further terms becomes prohibitively difficult — physicists obtain predictions that match experimental results with astonishing precision.
In 1965, the Nobel Prize in Physics was awarded to Feynman, Schwinger, and Tomonaga for their groundbreaking work in QED using these techniques. Poincaré would likely have counted them among the “astronomers.”
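The same phenomenon can be reproduced in miniature with a toy model (a sketch of ours, not the QED computation): the Stieltjes integral $S(x) = \int_0^\infty e^{-t}/(1+xt)\,dt$ has the divergent asymptotic expansion $\sum_n (-1)^n\, n!\, x^n$, yet truncating the series near its smallest term reproduces the integral to remarkable accuracy.

```python
import math

def stieltjes_exact(x, t_max=60.0, steps=600_000):
    """Brute-force value of S(x) = integral of e^(-t)/(1+xt) over [0, inf) (midpoint rule)."""
    h = t_max / steps
    return h * sum(math.exp(-(i + 0.5) * h) / (1 + x * (i + 0.5) * h)
                   for i in range(steps))

def stieltjes_series(x, n_terms):
    """Partial sum of the divergent asymptotic series sum_n (-1)^n n! x^n."""
    return sum((-1) ** n * math.factorial(n) * x ** n for n in range(n_terms))

x = 0.05
exact = stieltjes_exact(x)
for n_terms in (2, 5, 10, 20, 30, 45):
    error = abs(stieltjes_series(x, n_terms) - exact)
    print(f"{n_terms:3d} terms: error = {error:.1e}")
# The error shrinks to roughly 1e-8 near the smallest term (around n = 1/x = 20)
# and then grows again: a divergent series, yet astonishingly precise when truncated early.
```

This is exactly the pattern Poincaré describes: useless to the “geometer” who insists on a limit, invaluable to the “astronomer” who only needs the first few terms.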