Generalised Fourier Series, Part 2: Making Series Expansions

In the first part of this series on series expansions (introduction), we defined several concepts from first-year linear algebra that let us work in a general, abstract setting rather than repeating the same derivations over and over again. We now turn to the topic at the centre of this series: generalising the Fourier series method.

Amazingly, the few definitions I gave in the previous article let us define a whole family of different functional series expansions. Like before, we will consider only continuous, finite functions defined on the interval $[-1, 1]$. Let’s say we have a basis of functions that are orthogonal under the inner product

$$ \langle f,g\rangle = \int_{-1}^1 f(x) g(x)\,\mathrm dx. $$

One example of such a basis is the trigonometric functions $\sin(k\pi x)$ and $\cos(k\pi x)$ for all non-negative integers $k$. It’s beyond the scope of undergraduate physics courses to prove that the trigonometric functions span this space, but they do, and you can also verify that the inner-product integral is indeed zero for any unequal pair. We’ll refer to elements of this basis as $\phi_n(x)$, where $n$ is just a unique label.
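We can spot-check the orthogonality claim numerically. The sketch below (my addition, not from the article; the helper name `inner` is made up) approximates the inner-product integral with the trapezoidal rule for a few pairs of trigonometric basis functions:

```python
import numpy as np

# Approximate <f, g> = integral of f(x) g(x) over [-1, 1]
# with the trapezoidal rule on a fine uniform grid.
def inner(f, g, n=200_000):
    x = np.linspace(-1.0, 1.0, n + 1)
    y = f(x) * g(x)
    dx = x[1] - x[0]
    return dx * (y.sum() - 0.5 * (y[0] + y[-1]))

sin2 = lambda x: np.sin(2 * np.pi * x)
sin3 = lambda x: np.sin(3 * np.pi * x)
cos3 = lambda x: np.cos(3 * np.pi * x)

print(inner(sin2, cos3))  # ~0: orthogonal pair
print(inner(sin2, sin3))  # ~0: orthogonal pair
print(inner(sin3, sin3))  # ~1: a basis function with itself is nonzero
```

Any distinct pair comes out at essentially zero, while a function paired with itself does not, which is exactly the pattern orthogonality demands.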

Now, since the functions span the space of functions, we can write down any function $f$ in terms of our basis $\phi_n$ and some scalar coefficients $c_n$ as

$$ f(x) = \sum_{n=0}^\infty c_n \phi_n(x), $$

and this series is unique. For convenience, we’ll call the series representation $F_f$ to distinguish it from $f$ while we’re still determining the coefficients.

This isn’t very helpful without knowing what the $c_n$ are, but since our $\phi_n$ form an orthogonal basis, we can use some properties of the inner product to find them.

First we need a measure of how good our choice of $c_n$ is. For this we use the idea of magnitude: define the “error” vector (function) $r$ as

$$\begin{aligned} r(x) &= f(x) - F_f(x) \\ &= f(x) - \sum_{n=0}^\infty c_n \phi_n(x), \end{aligned}$$

and now we want to minimise the magnitude squared $\langle r,r\rangle$. Using the properties of the inner product, we find

$$\begin{alignedat}{2} \langle r, r\rangle &= \langle f-F_f,\, f-F_f \rangle && \\ &= \langle f,f\rangle - 2\langle f,F_f\rangle + \langle F_f,F_f\rangle &&\quad\text{(by linearity and symmetry)}\\ &= \langle f,f\rangle - 2\sum_{n=0}^\infty c_n \langle f, \phi_n\rangle + \sum_{n=0}^\infty\sum_{m=0}^\infty c_n c_m \langle \phi_n,\phi_m\rangle &&\quad\text{(by linearity)}\\ &= \langle f,f\rangle - 2\sum_{n=0}^\infty c_n\langle f,\phi_n\rangle + \sum_{n=0}^\infty c_n^2 \langle \phi_n,\phi_n\rangle &&\quad\text{(by orthogonality).} \end{alignedat}$$

The last line follows because orthogonality of the functions implies

$$ \langle \phi_n, \phi_m \rangle = \begin{cases} \langle \phi_n, \phi_n \rangle &\text{if } n = m, \\ 0 &\text{if } n \ne m. \end{cases} $$

Now we try to minimise $\langle r,r\rangle$ with respect to each of the $c_n$ by setting the derivatives to zero:

$$\begin{aligned} 0 &= \frac{\partial}{\partial c_n}\langle r,r \rangle \\ &= 2c_n\langle\phi_n,\phi_n\rangle - 2\langle f,\phi_n\rangle, \end{aligned}$$

so we find that

$$ c_n = \frac{\langle f,\phi_n\rangle}{\langle\phi_n,\phi_n\rangle}. $$
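We can sanity-check this formula numerically. As a hypothetical test case (mine, not from the article), take $f(x) = x$ with the sine basis $\phi_n(x) = \sin(n\pi x)$; the standard Fourier series of $x$ on $[-1,1]$ has coefficients $c_n = 2(-1)^{n+1}/(n\pi)$, so the inner-product quotient should reproduce those values:

```python
import numpy as np

# Trapezoidal-rule approximation of <f, g> on [-1, 1].
def inner(f, g, n=200_000):
    x = np.linspace(-1.0, 1.0, n + 1)
    y = f(x) * g(x)
    dx = x[1] - x[0]
    return dx * (y.sum() - 0.5 * (y[0] + y[-1]))

f = lambda x: x
coeffs = {}
for n in range(1, 4):
    phi = lambda x, n=n: np.sin(n * np.pi * x)
    # c_n = <f, phi_n> / <phi_n, phi_n>
    coeffs[n] = inner(f, phi) / inner(phi, phi)
    exact = 2 * (-1) ** (n + 1) / (n * np.pi)
    print(n, coeffs[n], exact)
```

The computed quotients match the known Fourier coefficients to numerical-integration accuracy.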

We should make sure that this is a minimum, not a maximum. This is quite simple: there is only one extremum, and adding some very large number to all the $c_n$ would obviously make the approximation worse, so that single extremum must be a minimum. We could also do the more rigorous second-derivative test, which shows the same thing, since $\langle \phi_n,\phi_n\rangle > 0$ is a property of the inner product.
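For completeness, the second-derivative test mentioned above takes a single line: differentiating the expression for the derivative once more with respect to $c_n$ gives

$$ \frac{\partial^2}{\partial c_n^2}\langle r,r\rangle = 2\langle \phi_n,\phi_n\rangle > 0, $$

which is positive by the positive-definiteness of the inner product, confirming a minimum.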

This solution for the $c_n$ is really a remarkable result. You can put in the trigonometric definitions of the $\phi_n$ and see that it retrieves the definitions of the Fourier series coefficients way up at the top of this article.

What’s more impressive, however, is that nothing we did depended on what the $\phi_n$ actually were! They only had to form an orthogonal basis; the trigonometric functions were just one possibility.

Another valid basis we could have used is made up of polynomials; the monomials $x^n$ themselves aren’t orthogonal under this inner product, but there is a method called the Gram–Schmidt procedure that can be used to turn them into an orthogonal basis. If you do this, you come up with a series of polynomials $P_n(x)$ called the Legendre polynomials. The first few of these are

$$\begin{aligned} P_0(x) &= 1,\\ P_1(x) &= x,\\ P_2(x) &= \tfrac12(3x^2 - 1),\\ P_3(x) &= \tfrac12(5x^3 - 3x). \end{aligned}$$
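The Gram–Schmidt procedure itself is short enough to run directly. This sketch (my addition; the helper name `inner` is made up) orthogonalises the monomials $1, x, x^2, x^3$ under our inner product, evaluating each integral exactly via polynomial antiderivatives, and then rescales so each result equals 1 at $x = 1$, which is the usual Legendre normalisation:

```python
import numpy as np
from numpy.polynomial import Polynomial

# Exact <p, q> = integral of p(x) q(x) over [-1, 1] for polynomials,
# using the antiderivative of the product.
def inner(p, q):
    r = (p * q).integ()
    return r(1.0) - r(-1.0)

# Gram-Schmidt on the monomials 1, x, x^2, x^3.
basis = []
for k in range(4):
    p = Polynomial([0] * k + [1])  # the monomial x^k
    for q in basis:
        p = p - (inner(p, q) / inner(q, q)) * q  # subtract projection onto q
    basis.append(p)

# Rescale so P_n(1) = 1, the standard Legendre convention.
legendre = [p / p(1.0) for p in basis]
for p in legendre:
    print(p.coef)  # coefficients in increasing powers of x
```

The printed coefficient arrays match the four polynomials listed above, e.g. `[-0.5, 0, 1.5]` for $P_2(x) = \tfrac12(3x^2 - 1)$.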

These appear in the expansion of the electrostatic potential around a multipole in Cartesian coordinates, and consequently in the spherical harmonic functions, which turn up all over physics.

Now this basis is also orthogonal, so if we want to make a “Fourier–Legendre” series expansion of $f$, called $L_f$,

$$ L_f(x) = \sum_{n=0}^\infty \ell_n P_n(x), $$

then we already know that the coefficients $\ell_n$ are defined by

$$ \ell_n = \frac{\langle f,P_n\rangle}{\langle P_n,P_n\rangle}. $$
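As a quick check of the Fourier–Legendre coefficients, here is a hypothetical example of mine (not from the article): expanding $f(x) = x^3$, which by the table of polynomials above should come out exactly as $x^3 = \tfrac35 P_1(x) + \tfrac25 P_3(x)$:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

# Trapezoidal-rule approximation of <f, g> on [-1, 1].
def inner(f, g, n=200_000):
    x = np.linspace(-1.0, 1.0, n + 1)
    y = f(x) * g(x)
    dx = x[1] - x[0]
    return dx * (y.sum() - 0.5 * (y[0] + y[-1]))

f = lambda x: x ** 3
coeffs = []
for n in range(4):
    P = Legendre.basis(n)  # the n-th Legendre polynomial
    # l_n = <f, P_n> / <P_n, P_n>
    coeffs.append(inner(f, P) / inner(P, P))
print(coeffs)  # ~[0, 0.6, 0, 0.4]
```

The even coefficients vanish by symmetry, and $\ell_1 \approx 3/5$, $\ell_3 \approx 2/5$ as expected.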

This is why abstract concepts in linear algebra are so useful; with no additional work we gained a whole new method of series expansion!

In the next part of this series, we’ll compare how this new Legendre series expansion behaves in comparison to the Fourier and Taylor series.

This article is the second part of a series. You can find all of the rest of the articles in this series here: