# Generalised Fourier Series, Part 2: Making Series Expansions

In the first part of this series on series expansions, we defined several concepts from first-year linear algebra that let us work in a general, abstract setting rather than repeating the same derivations over and over again. We are now ready for the topic at the centre of this series: generalising the Fourier series method.

Amazingly, the few definitions from the previous article are enough to define a whole family of different functional series expansions. As before, we will consider only continuous, finite functions defined on the interval $[-1, 1]$. Suppose we have a basis of functions that are orthogonal under the inner product

$$\langle f, g \rangle = \int_{-1}^{1} f(x)\,g(x)\,dx.$$

One example of such a basis is the trigonometric functions $\sin(k\pi x)$ and $\cos(k\pi x)$ for all non-negative integers $k$. It's beyond the scope of undergraduate physics courses to prove that the trigonometric functions span this space, but they do, and you can verify that the inner-product integral is indeed zero for any unequal pair. We'll refer to elements of this basis as $\phi_n(x)$, where $n$ is just a unique label.
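As a quick sanity check, here is a minimal numerical sketch of that orthogonality claim. It uses plain Python with midpoint-rule integration; the function names are my own, not from any library:

```python
import math

def inner(f, g, n=20000):
    """Approximate <f, g> = integral of f(x) g(x) over [-1, 1], midpoint rule."""
    h = 2.0 / n
    return sum(f(-1 + (i + 0.5) * h) * g(-1 + (i + 0.5) * h) for i in range(n)) * h

sin1 = lambda x: math.sin(math.pi * x)
cos2 = lambda x: math.cos(2 * math.pi * x)

print(round(inner(sin1, cos2), 6))  # distinct basis functions: ≈ 0
print(round(inner(sin1, sin1), 6))  # <sin(πx), sin(πx)> = 1, not 0
```

Note that $\langle \phi_n, \phi_n \rangle$ is *not* zero; orthogonality only concerns unequal pairs.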

Now, since these functions span the space, we can write down any function $f$ in terms of our basis $\phi_n$ and some scalar coefficients $c_n$ as

$$f(x) = \sum_{n=0}^{\infty} c_n \phi_n(x),$$

and this series is unique. For convenience, we'll call the series representation $F_f$ to distinguish it from $f$ while we're still determining the coefficients.

This isn’t very helpful without knowing what the $c_n$ *are*, but since our $\phi_n$ form an orthogonal basis, we can use some properties of the inner product to find them.

First we need a measure of how good our choice of $c_n$ is. For this we use the idea of magnitude: define the “error” vector (function) $r$ as

$$r(x) = f(x) - F_f(x) = f(x) - \sum_{n=0}^{\infty} c_n \phi_n(x),$$

and now we want to minimise the magnitude squared $\langle r, r \rangle$. Using the properties of the inner product, we find

$$\begin{aligned}
\langle r, r \rangle &= \langle f - F_f,\, f - F_f \rangle \\
&= \langle f, f \rangle - 2\langle f, F_f \rangle + \langle F_f, F_f \rangle && \text{(by linearity and symmetry)} \\
&= \langle f, f \rangle - 2\sum_{n=0}^{\infty} c_n \langle f, \phi_n \rangle + \sum_{n=0}^{\infty} \sum_{m=0}^{\infty} c_n c_m \langle \phi_n, \phi_m \rangle && \text{(by linearity)} \\
&= \langle f, f \rangle - 2\sum_{n=0}^{\infty} c_n \langle f, \phi_n \rangle + \sum_{n=0}^{\infty} c_n^2 \langle \phi_n, \phi_n \rangle && \text{(by orthogonality)}.
\end{aligned}$$

The last line follows because orthogonality of the basis functions implies

$$\langle \phi_n, \phi_m \rangle = \begin{cases} \langle \phi_n, \phi_n \rangle & \text{if } n = m \\ 0 & \text{if } n \neq m. \end{cases}$$

Now we try to minimise $\langle r, r \rangle$ with respect to each of the $c_n$ by setting the derivatives to zero:

$$0 = \frac{\partial}{\partial c_n} \langle r, r \rangle = 2 c_n \langle \phi_n, \phi_n \rangle - 2 \langle f, \phi_n \rangle,$$

so we find that

$$c_n = \frac{\langle f, \phi_n \rangle}{\langle \phi_n, \phi_n \rangle}.$$

We should make sure that this is a *minimum*, not a maximum. This is quite simple: there is only one stationary point, and adding some really large number to all the $c_n$ would obviously make the approximation worse, so the single extremum must be a minimum. We could also do the more rigorous second-derivative test, which gives $2\langle \phi_n, \phi_n \rangle > 0$, guaranteed by the positivity property of the inner product.
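To see the coefficient formula in action, here is a small sketch (plain Python, midpoint-rule integration; `coefficient` is my own helper, not a standard function) that computes $c_n$ for $f(x) = x$ against the basis function $\sin(\pi x)$, where the classical Fourier result is $2/\pi$:

```python
import math

def inner(f, g, n=20000):
    """Midpoint-rule approximation of <f, g> = integral of f(x) g(x) over [-1, 1]."""
    h = 2.0 / n
    return sum(f(-1 + (i + 0.5) * h) * g(-1 + (i + 0.5) * h) for i in range(n)) * h

def coefficient(f, phi):
    """c_n = <f, phi_n> / <phi_n, phi_n>."""
    return inner(f, phi) / inner(phi, phi)

f = lambda x: x
phi1 = lambda x: math.sin(math.pi * x)

# The classical Fourier series of f(x) = x has sin(πx) coefficient 2/π.
print(coefficient(f, phi1), 2 / math.pi)
```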

This solution for $c_n$ is a really remarkable result. You can put in the trigonometric definitions of the $\phi_n$ and see that it retrieves the familiar definitions of the Fourier series coefficients.

What’s more impressive, however, is that everything we did *did not care what the $ϕ_{n}$ were*! In fact, they only had to be an orthogonal basis; the trigonometric functions were just one possibility.

Another valid basis we could have used is made up of polynomials; the monomials $x^n$ themselves aren’t orthogonal under this inner product, but there is a method called the Gram–Schmidt procedure that can be used to turn them into an orthogonal basis. If you do this, you come up with a sequence of polynomials $P_n(x)$ called the Legendre polynomials. The first few of these are

$$\begin{aligned}
P_0(x) &= 1, \\
P_1(x) &= x, \\
P_2(x) &= \tfrac{1}{2}\left(3x^2 - 1\right), \\
P_3(x) &= \tfrac{1}{2}\left(5x^3 - 3x\right).
\end{aligned}$$

These appear in the expansion of the electrostatic potential around a multipole in spherical coordinates, and consequently in the spherical harmonic functions, which turn up all over physics.
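If you want to see Gram–Schmidt produce these polynomials concretely, here is a sketch using exact rational arithmetic (the representation as coefficient lists and the rescaling convention $P_k(1) = 1$ are choices of mine, not part of the procedure itself):

```python
from fractions import Fraction

def poly_inner(p, q):
    """Exact <p, q> = integral of p(x) q(x) over [-1, 1], coefficients low degree first."""
    total = Fraction(0)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if (i + j) % 2 == 0:  # odd powers of x integrate to zero over [-1, 1]
                total += a * b * Fraction(2, i + j + 1)
    return total

def gram_schmidt(n):
    """Orthogonalise the monomials 1, x, x^2, ... and rescale so that P_k(1) = 1."""
    basis = []
    for k in range(n):
        p = [Fraction(0)] * n
        p[k] = Fraction(1)  # start from the monomial x^k
        for q in basis:
            c = poly_inner(p, q) / poly_inner(q, q)
            p = [a - c * b for a, b in zip(p, q)]
        scale = sum(p)  # the value P_k(1) is the sum of the coefficients
        basis.append([a / scale for a in p])
    return basis

P = gram_schmidt(4)
print(P[2])  # coefficients of (3x^2 - 1)/2, i.e. [-1/2, 0, 3/2, 0]
print(P[3])  # coefficients of (5x^3 - 3x)/2, i.e. [0, -3/2, 0, 5/2]
```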

Now, this basis is also orthogonal, so if we want to make a “Fourier–Legendre” series expansion of $f$, called $L_f$,

$$L_f(x) = \sum_{n=0}^{\infty} \ell_n P_n(x),$$

then we already know that the coefficients $\ell_n$ are defined by

$$\ell_n = \frac{\langle f, P_n \rangle}{\langle P_n, P_n \rangle}.$$

*This* is why abstract concepts in linear algebra are so useful: with no additional work we gained a whole new method of series expansion!
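Putting the pieces together, here is a sketch of a truncated Fourier–Legendre expansion (plain Python with midpoint-rule integration; `legendre_series` is a hypothetical helper of my own). Since $x^3 = \tfrac{3}{5} P_1(x) + \tfrac{2}{5} P_3(x)$ lies in the span of the first four polynomials, four terms should recover it exactly:

```python
import math

def inner(f, g, n=20000):
    """Midpoint-rule approximation of <f, g> = integral of f(x) g(x) over [-1, 1]."""
    h = 2.0 / n
    return sum(f(-1 + (i + 0.5) * h) * g(-1 + (i + 0.5) * h) for i in range(n)) * h

# The first four Legendre polynomials, written out explicitly.
P = [
    lambda x: 1.0,
    lambda x: x,
    lambda x: 0.5 * (3 * x**2 - 1),
    lambda x: 0.5 * (5 * x**3 - 3 * x),
]

def legendre_series(f, terms=4):
    """Return a truncated Fourier-Legendre expansion L_f of f."""
    ells = [inner(f, p) / inner(p, p) for p in P[:terms]]
    return lambda x: sum(l * p(x) for l, p in zip(ells, P[:terms]))

Lf = legendre_series(lambda x: x**3)
print(Lf(0.5))  # ≈ 0.125, matching f(0.5) = 0.5**3
```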

In the next part of this series, we’ll compare how this new Legendre series expansion behaves in comparison to the Fourier and Taylor series.

This article is the second part of a series. You can find all of the rest of the articles in this series here: