Posts tagged 'python'

Generalised Fourier Series, Part 3: Comparing Series Expansions

In the second part of this series on series expansions (introduction), we found a way to make a series expansion using any orthogonal basis of functions. This is how the standard Fourier series is defined, and we also used it to make a series expansion out of the Legendre polynomials. Now we’re going to compare how these behave with different numbers of terms.

This is finally the fun part of the seminar! My second-year physics students should generally have already met the material of the previous two articles in their first-year courses, so they pretty much started here. To interact with these series expansions, you should load up my Jupyter notebook on either Colab or myBinder, and try plotting your own functions to see how the expansions behave.

In reality we don’t use the infinite form of series expansions; we only use the first few terms to get a good approximation of a function. The notebook is for investigating how different series expansions behave with comparably few terms, and which would be most appropriate in a given situation. The series there are the standard Fourier series, the Fourier–Legendre series, and the Taylor series about x=0 (which is formed by a different method to the other two). If you need a reminder of what these series look like, the Taylor and Fourier series forms were given in the introduction, and the Legendre series was derived in part two.
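As a flavour of what the notebook does, here is a minimal sketch of computing the Fourier–Legendre partial sums directly from the coefficient formula derived in part two, using numpy and scipy. This is illustrative only, not the notebook's actual code:

import numpy as np
from numpy.polynomial.legendre import Legendre
from scipy.integrate import quad

def fourier_legendre_sum(f, n_terms, xs):
    # c_n = (2n + 1)/2 * integral of f(x) P_n(x) over [-1, 1], because
    # the Legendre polynomial P_n has squared norm 2/(2n + 1) there.
    coefficients = []
    for n in range(n_terms):
        basis = Legendre.basis(n)
        integral, _ = quad(lambda x: f(x) * basis(x), -1, 1)
        coefficients.append(0.5 * (2*n + 1) * integral)
    return Legendre(coefficients)(xs)

xs = np.linspace(-1, 1, 201)
approximation = fourier_legendre_sum(np.sin, 4, xs)

Comparing approximation against np.sin(xs) for different values of n_terms shows exactly the sort of low-order behaviour the questions below are probing.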

I asked the students to consider a couple of the following questions, and discuss them in groups of around five:

  • Which series would you call the “most accurate”? Why? Does it depend on the function?
  • What sorts of functions are the different expansions best at approximating? Which are they bad at?
  • The Legendre series often seems to “give up” in the middle of some shapes at low orders (e.g. sinusoids). Why is this, and are there any things the series is still useful for?
  • The Taylor series almost invariably has the largest pointwise error. Why is this? What is the Taylor series useful for?

Generalised Fourier Series

In all my time as a PhD student I have been involved with teaching undergraduate and postgraduate physics students at Imperial College London (they even gave me a prize in 2018!) as a teaching assistant in classroom-style tutorials and seminars. For my final year, though, I’ve also moved up into helping write the teaching materials, particularly for the differential equations part of the second-year course.

The first differential equations seminar we had was showing off some of the properties and uses of the Legendre polynomials, which the students had just met by solving Legendre’s equation. My part of the seminar was illustrating how any orthogonal basis of functions can be used to make a series expansion, and then getting the students to investigate how different types of functional expansion behave at different orders. If you just want to play with this, load up the Jupyter notebook on Colab or myBinder. The source code for this notebook is available on GitHub. You can see an example plot of these below.

Series approximations of a twelfth-order polynomial using many terms in the expansion.

The end result we’ll achieve mathematically is an abstract way of making series expansions. In general, a series expansion approximates a function f(x) by using a (possibly infinite) sequence of terms, where each term is a constant multiplied by some basis function. Different series expansions use different bases and different methods of determining the coefficients.
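In symbols: given basis functions \phi_n(x) that are orthogonal under a chosen inner product \langle\cdot,\cdot\rangle, the generalised series and its coefficients take the standard form

f(x) \approx \sum_n c_n \phi_n(x), \qquad c_n = \frac{\langle \phi_n, f \rangle}{\langle \phi_n, \phi_n \rangle}.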

Perhaps the most familiar example is the Taylor series. The Taylor series T_f(x) that approximates a function f(x) around some point x_0 is

T_f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(x_0)}{n!} (x - x_0)^n.

QuTiP data layer and the end of Google Summer of Code 2020

This post is the final permalink for the work I’ve done for QuTiP in the Google Summer of Code 2020. My mentors have been Alex Pitchford, Eric Giguère and Nathan Shammah, and QuTiP is under the NumFOCUS umbrella.

The main aim of the project was to make Qobj, the primary data type in QuTiP, able to use both sparse and dense representations of matrices and have them interoperate seamlessly. This was a huge undertaking that had far-reaching implications all across the library, but we have now succeeded. There is still plenty of work to be done on additional development documentation and on smoothing off the rough edges to improve the user experience, but we are moving towards a public beta of a major version update next year.

Overview of the new QuTiP data layer

This post is partial documentation for the implementation of the data layer, which I wrote in the last week or so as part of Google Summer of Code with QuTiP. I may return to talk a bit more about how various algorithms are implemented internally, but for now, this is some indication of what I’ve been working on.

I had previously replaced the old fast_csr_matrix type with the new, custom CSR type as the data backing for Qobj and all internal QuTiP data representations. This produced speed-ups in some places due to improved algorithms and better cache usage, but its principal advantage was the massive reduction in overhead for function calls between Python and C space, which particularly affected small objects.

The full aim, however, is to have QuTiP 5 support many different data representations as the backing of Qobj, and use the most suitable representation for the given data. This will not require every single QuTiP function to have an exponential number of versions for every possible combination of inputs, but only to have specialisations for the most common data combinations. This concept is the “data layer”.

All code examples in this post are prefixed with

>>> from qutip.core import data
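As a taste of the interface, here is a hedged sketch of mixing representations. The exact constructor and function names below are indicative of the design rather than a stable API, since the interface was still settling at the time of writing:

>>> import numpy as np
>>> dense = data.Dense(np.array([[0, 1], [1, 0]], dtype=complex))
>>> sparse = data.to(data.CSR, dense)   # convert between representations
>>> data.add(dense, sparse)             # dispatches on the operand types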

Contiguous memory and finding segfaults in Cython

One of the main advantages of using Cython in the algebraic core of scientific code is fast, non-bounds-checked access to contiguous memory. Of course, when you elide safety-checking, it’s not surprising when you start to get segfaults due to out-of-bounds access and use-after-free bugs in your Cython code.

This post talks a little about why we chose to use raw pointers to access our memory in the upcoming major version of QuTiP rather than other possible methods, and how I tracked down a couple of memory-safety bugs in my code which would have been caught had we been using safer alternatives.
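As an aside, and independent of the gdb-level detective work these bugs usually demand, one cheap first step (not the method described in the post itself) is the standard-library faulthandler module, which at least reports which Python call crossed into the crashing Cython code:

import faulthandler

# Dump the Python-level traceback when the process receives a fatal
# signal such as SIGSEGV.  The same effect is available without code
# changes by running "python -X faulthandler script.py".
faulthandler.enable()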

Design draft: improving tensor-product dimensions in QuTiP

Qobj instantiation and mathematical operations have a large overhead, mostly because of handling the dims parameter in tensor-product spaces. I’m proposing one possible way to speed this up, while also gaining some additional safety and knowledge about mathematical operations on tensor-product spaces.

The steps:

  1. rigorously define the “grammar” of dims, and allow all of dimensions.py to assume that this grammar is followed, to speed up parsing
  2. maintain a private data structure type dimensions._Parsed inside Qobj which is constructed once, and keeps all details of the parsing so they need not be repeated. Determine Qobj.type from this data structure
  3. maintain knowledge of the individual type of every subspace in the full Hilbert space (e.g. with a list). There is still a “global” Qobj.type, but this can now be one of the set {'bra', 'ket', 'oper', 'scalar', 'super', 'other'}. 'other' is for when the individual elements do not all match each other; individual elements cannot be 'other'. 'scalar' is added so that operations can keep track of tensor elements which have been contracted, say by a bra-ket product; operations will then broadcast 'scalar' up to the correct dimensions where needed.
  4. dimension parsing is now sped up by using the operation-specific type knowledge. For example, bra + bra -> bra, and ket.dag() -> bra. Step 3 is necessary to allow matrix multiplication to work. These lookups could be done with enum values instead of string hashing (see the sketch below).
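As a sketch of that last point: the result-type lookups can become plain dictionary accesses keyed on enum pairs. The names here are hypothetical, and only addition is shown:

from enum import Enum

class QobjType(Enum):
    BRA = 'bra'
    KET = 'ket'
    OPER = 'oper'
    SCALAR = 'scalar'
    SUPER = 'super'
    OTHER = 'other'

# Result types for addition: like adds to like, and 'scalar' broadcasts
# up to the dimensions of the other operand.
_ADD_RESULT = {
    (QobjType.BRA, QobjType.BRA): QobjType.BRA,
    (QobjType.KET, QobjType.KET): QobjType.KET,
    (QobjType.OPER, QobjType.OPER): QobjType.OPER,
    (QobjType.SCALAR, QobjType.OPER): QobjType.OPER,
    (QobjType.OPER, QobjType.SCALAR): QobjType.OPER,
}

def add_result_type(left, right):
    try:
        return _ADD_RESULT[left, right]
    except KeyError:
        raise TypeError("incompatible types for +: "
                        + left.value + " and " + right.value) from None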

Note: this is part of a design discussion for the next major release of QuTiP. I originally wrote this on 2020-07-13, and any further discussion may be found at the corresponding GitHub issue.

Sphinx and the QuTiP Developers' Guide

Overhauling the internals of a mathematical library is no good if no other developers on the team know how to use the new systems you’ve put in place, or why you’ve made the choices you have. In the last week I’ve been writing a new QuTiP developers’ guide to the new data layer that I’m creating as part of my Google Summer of Code project, which has involved learning a lot more about the Sphinx documentation tool, and a little bit of GitHub esoterica.

Currently we don’t have a complete plan for how this guide will be merged into the QuTiP documentation, and where exactly it will go, so for now it is hosted on my own GitHub repository. I have also put up a properly rendered version on a GitHub pages site linked to the repository.

My Sphinx conf.py file for this repository is not (at the time of commit 0edf49e) very exciting. Fortunately, Sphinx largely just works out-of-the-box as one would expect from a mature Python project. Perhaps the boldest part of that file is the intersphinx_mapping dictionary, which uses the intersphinx built-in to link to other projects’ documentation also built with Sphinx.

Right now, the intersphinx documentation is perhaps a little lacking, and sometimes seems to just involve some hope (and some disappointment). In particular, I have several external references set up as

intersphinx_mapping = {
    'qutip': ('http://qutip.org/docs/latest/', None),
    'python': ('https://docs.python.org/3', None),
    'numpy': ('https://numpy.org/doc/stable/', None),
    'scipy': ('https://docs.scipy.org/doc/scipy/reference/', None),
    'cython': ('https://cython.readthedocs.io/en/latest/', None),
}
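Once an inventory does load, external references in the reST source behave exactly like local ones. An illustrative line (my own example, not one from the guide):

With the mapping above, :class:`numpy.ndarray` links to the numpy
documentation and :func:`scipy.integrate.quad` to the scipy reference.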

Design draft: data structure specifications

These are some of the discussion topics around how new linear algebra data structures will be defined in QuTiP as part of my Google Summer of Code project.

After some discussion, including some further designs of how the new dispatcher methods would function, we are trying to pursue a “light” data structure strategy. This should keep instances of the data very lightweight and simplify the dispatch methods; multiple dispatch is a difficult concept to fit into a true object-oriented style, and we believe that dispatcher methods will make adding new data types and dispatched functions significantly easier.
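In skeleton form, a dispatcher of this kind is little more than a table keyed on the operand types. This sketch uses hypothetical stand-in types and omits all of the real machinery (output-type control, conversion fallbacks and so on):

class Dense:
    pass  # stand-in for a dense data type

class CSR:
    pass  # stand-in for a compressed-sparse-row data type

_add_specialisations = {}

def register_add(type_left, type_right):
    # Decorator registering one specialisation of add for a type pair.
    def decorator(function):
        _add_specialisations[type_left, type_right] = function
        return function
    return decorator

@register_add(Dense, Dense)
def add_dense(left, right):
    raise NotImplementedError("dense + dense kernel goes here")

def add(left, right):
    # A real dispatcher would fall back to converting the operands when
    # no exact specialisation exists; here we just look one up.
    return _add_specialisations[type(left), type(right)](left, right)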

Originally written on the 8th of June, 2020.

Design draft: data layer separation

This is an early design document about the separation of the data layer in QuTiP as part of my Google Summer of Code project. It is a very early-stage document and significantly liable to change, but it is indicative of the direction we were planning to go.

Originally written on the 1st of June, 2020.

Detecting star imports in Python

Star (or wildcard) imports are one of the methods for importing libraries in Python, and although they are generally discouraged, they are rather prevalent in a lot of notebook-style scientific code. There is no official way for a package to know whether it is being imported with a star import, but since Python allows overriding just about everything, we can detect slight differences between the import methods and inject arbitrary code when the wildcard is used.

In QuTiP, we currently suffer from long import times and want to move to a scipy-like style where fewer symbols are in the global package namespace, and submodules are only imported if explicitly requested. We want to issue a warning to people currently using the from qutip import * syntax, because the number of symbols available to them will soon decrease.
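As a hedged sketch of one way to do this (not necessarily the exact mechanism the full post builds up): CPython retrieves the elements of __all__ one at a time during a star import, so a list subclass that overrides __getitem__ can fire a warning at that point.

import warnings

class _WarnOnStarImport(list):
    # CPython's "from package import *" fetches the contents of __all__
    # via sequence indexing, so the first __getitem__ call is a strong
    # hint that a star import is in progress.  Caveat: any other direct
    # indexing of __all__ also triggers the warning.
    def __getitem__(self, index):
        warnings.warn(
            "star imports from qutip are deprecated; "
            "import the symbols you need explicitly",
            DeprecationWarning, stacklevel=2,
        )
        return super().__getitem__(index)

# In the package's __init__.py:
__all__ = _WarnOnStarImport(['Qobj', 'sesolve', 'mesolve'])

The default warning filters deduplicate repeated warnings from the same location, so in practice the user sees the message once rather than once per imported symbol.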