Recent posts

Unusual effects of Python scoping

Python scoping can be deceptive, especially since it can appear fairly simple at first. Generally, name resolution starts from the innermost scope and proceeds outwards until some outer scope contains the name, or we run out of scopes to search. This procedure is familiar to users of many programming languages. Newcomers from other languages are often surprised that control-flow constructs like for and with don’t open new scopes, so variables defined inside them are still accessible after the block has completed, but this is relatively simple to accept; Python often seems very permissive. This post presents a few corners of Python’s scoping rules that are more unusual than this, some of which continue to trip me up.
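As a quick illustration of the control-flow point (a minimal sketch; the post’s own examples are stranger):

for i in range(3):
    square = i * i

# Both names survive the loop, because `for` did not open a new scope.
print(i, square)  # 2 4

# A comprehension, by contrast, does get its own scope in Python 3,
# so its loop variable is gone afterwards:
[j for j in range(3)]
# print(j)  # would raise NameError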

Mutating Python tuples by C FFI abuse

Tuples are as immutable as things get in Python. They have special treatment being a core builtin, so unlike user-defined classes, there’s no private attribute that you can mutate if you really want to mess around. However, Python is an interpreted language with no baked-in access control, and (almost) everything really is mutable if you try hard enough.

All Python objects at the C level can be addressed by a pointer of type PyObject *, which (for most regular CPython builds) has the fields

struct _object {
    Py_ssize_t ob_refcnt;       /* reference count */
    PyTypeObject *ob_type;      /* pointer to the Python type object */
};
These fields do what they say on the tin. The different C object types then have further fields after these, to specialise them.

Tuples are a type of “variable-length” object, in the sense that the same C struct definition is used for all sizes. With the head components inlined, this looks like

struct _tuple {
    Py_ssize_t ob_refcnt;       /* reference count */
    PyTypeObject *ob_type;      /* pointer to the Python type object */
    Py_ssize_t ob_size;         /* number of items in the tuple */
    PyObject *ob_item[1];       /* the items, as an inline array */
};
The first two fields are the reference count and the Python type object, as before; the next is the number of elements in the tuple, and the last is an array of (pointers to) Python objects. The array is declared with size one, but Python guarantees that enough space will actually have been allocated to hold ob_size elements. The items in the tuple are stored in this array.
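We can verify this layout from Python space. This is a quick sketch that assumes a regular 64-bit CPython build, where Py_ssize_t and pointers are both eight bytes:

from ctypes import POINTER, c_ssize_t, cast

# View the tuple's memory as an array of pointer-sized slots:
# slot 0 is ob_refcnt, slot 1 is ob_type, slot 2 is ob_size.
tup = (10, 20, 30)
header = cast(id(tup), POINTER(c_ssize_t))
print(header[2])  # 3 -- the length of the tuple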

From Python space, tuple.__setitem__ raises a TypeError, but in principle at the C level there is no such immutability. As of Python 3.11, the tupleobject.h header file even has a comment explaining how a tuple can be used as a temporary array, and there is a PyTuple_SetItem function, although this has slightly different reference-counting semantics to normal: it “steals” a reference to the item being inserted.

We can use the Python FFI library ctypes, and the CPython implementation detail that id returns the memory address of the Python object, to cheekily define our own Python-space tuple_setitem. The PyObject type in ctypes doesn’t expose the internal fields of the object, but instead we can change the items we want by casting the integer output of id into a pointer to size_t (assuming, as on most systems, that pointers have the same size as size_t), and manually assigning into the ob_item array. To be better citizens of Python—rampant pointer abuse aside—we should also increment the reference count of the object we are inserting into the tuple, and decrement the count of the object we are removing. This leaves us with the Python-space function

from ctypes import cast, c_size_t, c_ssize_t, POINTER
from typing import Any

def tuple_setitem(tup: tuple, index: int, new: Any):
    if -len(tup) <= index < 0:
        index = len(tup) + index
    if not 0 <= index < len(tup):
        raise IndexError("tuple index out of range")

    # Decrement the old item's refcount.
    old = tup[index]
    cast(id(old), POINTER(c_ssize_t))[0] -= 1

    # Increment the new item's refcount.
    cast(id(new), POINTER(c_ssize_t))[0] += 1

    # Replace the item.  Slots 0-2 of the object are ob_refcnt,
    # ob_type and ob_size, so item `index` lives at slot 3 + index.
    cast(id(tup), POINTER(c_size_t))[3 + index] = id(new)
Now let’s try it out:
>>> tup = (1, 2, 3)
>>> tuple_setitem(tup, 0, 7)
>>> tup
(7, 2, 3)

Some Small Useful Features of GitHub Actions

Due to recent problems free CI servers have had with abuse from cryptocurrency miners, Travis CI are scaling back what services are available to open source projects. We used to use Travis for QuTiP, but the new pricing model has meant that we won’t be able to afford it once the old free tier travis-ci.org completely shuts down. We trialled GitHub Actions to build the 28 different wheels we now distribute on PyPI, and then moved all our testing there as well.

Here are a few nice features that I’ve been using since we started. These are mostly not unique to GitHub Actions, but all my example configuration code will be for it.

Generalised Fourier Series, Part 3: Comparing Series Expansions

In the second part of this series on series expansions (introduction), we found a way to make a series expansion using any basis of orthogonal functions. This is how the standard Fourier series is defined, and then we also used it to make a series expansion out of the Legendre polynomials. Now we’re going to compare how these behave with different numbers of terms.

This is finally the fun part of the seminar! My second-year physics students should generally already know the material of the previous two articles from first-year courses, so they pretty much started here. To interact with these series expansions, you should load up my Jupyter notebook on either Colab or myBinder, and try plotting your own functions to see how the expansions behave.

In reality we don’t use the infinite form of series expansions; we only use the first few terms to get a good approximation of a function. The notebook is for investigating how different series expansions work with comparably few terms, and which would be most appropriate in a given situation. The series there are the standard Fourier series, the Fourier–Legendre series, and the Taylor series about $x = 0$ (which is formed by a different method to the other two). If you need a reminder of what these series look like, the Taylor and Fourier series forms were given in the introduction, and the Legendre series was derived in part two.

I asked the students to consider a couple of the following questions, and discuss them in groups of around five:

  • Which series would you call the “most accurate”? Why? Does it depend on the function?
  • What sorts of functions are the different expansions best at approximating? Which are they bad at?
  • The Legendre series often seems to “give up” in the middle of some shapes at low orders (e.g. sinusoids). Why is this, and are there any things the series is still useful for?
  • The Taylor series almost invariably has the largest pointwise error. Why is this? What is the Taylor series useful for?
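If you just want a taste without opening the notebook, here is a minimal sketch of my own (not the notebook code) that builds a truncated Fourier–Legendre expansion by projecting onto the first few Legendre polynomials:

import numpy as np
from numpy.polynomial import legendre

def legendre_series(f, order):
    # Project f onto P_0, ..., P_order on [-1, 1].  Because
    # <P_n, P_n> = 2/(2n + 1), each coefficient is
    # c_n = (2n + 1)/2 * <f, P_n>, evaluated here with
    # Gauss-Legendre quadrature.
    x, w = legendre.leggauss(64)
    coefficients = []
    for n in range(order + 1):
        basis = np.zeros(n + 1)
        basis[n] = 1.0
        p_n = legendre.legval(x, basis)
        coefficients.append((2*n + 1) / 2 * np.sum(w * f(x) * p_n))
    return np.array(coefficients)

# A step function is deliberately awkward for all the expansions.
coefficients = legendre_series(np.sign, order=7)
print(legendre.legval(0.5, coefficients))  # tends towards sign(0.5) = 1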

Generalised Fourier Series, Part 2: Making Series Expansions

In the first part of this series on series expansions (introduction), we defined several concepts from first-year linear algebra that we need in order to work in a general, abstract setting, rather than having to repeat the same derivations over and over again. We are now looking at the topic at the centre of this series: generalising the Fourier-series method.

Amazingly, the few definitions I gave in the previous article let us define a whole family of different functional series expansions. Like before, we will consider only continuous, finite functions defined on the interval $[-1, 1]$. Let’s say we have a basis of functions that are orthogonal under the inner product

\[ \langle f, g\rangle = \int_{-1}^1 f(x)\,g(x)\,\mathrm{d}x. \]

One example of this is the trigonometric functions $\sin(k\pi x)$ and $\cos(k\pi x)$ for all non-negative integers $k$. It’s beyond the scope of undergraduate physics courses to prove that the trigonometric functions span this space, but they do, and you can also verify that the inner-product integral is indeed zero for any unequal pair. We’ll refer to elements of this basis as $\phi_n(x)$, where $n$ is just a unique label.
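You can convince yourself of the orthogonality claim numerically. Here is a quick sketch using scipy (mine, not the post’s):

import numpy as np
from scipy.integrate import quad

# The inner product defined above, evaluated numerically.
def inner(f, g):
    return quad(lambda x: f(x) * g(x), -1, 1)[0]

print(inner(lambda x: np.sin(2*np.pi*x), lambda x: np.sin(3*np.pi*x)))  # ~0
print(inner(lambda x: np.sin(2*np.pi*x), lambda x: np.cos(2*np.pi*x)))  # ~0
print(inner(lambda x: np.sin(2*np.pi*x), lambda x: np.sin(2*np.pi*x)))  # 1.0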

Now, since the functions span the space of functions, we can write down any function $f$ in terms of our basis $\phi_n$ and some scalar coefficients $c_n$ as

\[ f(x) = \sum_{n=0}^\infty c_n \phi_n(x), \]

and this series is unique. For convenience, we’ll call the series representation $F_f$ to distinguish it from $f$ while we’re still determining the coefficients.
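In outline, the step this is building towards: orthogonality lets us pick off each coefficient, because taking the inner product of both sides with a single basis function $\phi_m$ kills every term of the sum except one:

\[ \langle \phi_m, f \rangle = \sum_{n=0}^\infty c_n \langle \phi_m, \phi_n \rangle = c_m \langle \phi_m, \phi_m \rangle \quad\implies\quad c_m = \frac{\langle \phi_m, f \rangle}{\langle \phi_m, \phi_m \rangle}. \]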

Generalised Fourier Series, Part 1: Linear Algebra Basics

In the introduction to this series of articles, we introduced the idea of a series expansion, and saw the explicit forms of the Taylor and Fourier series. We are building towards a generalised series expansion based on “orthogonal” polynomials, but first we have to define some concepts from linear algebra. This article is roughly at the level of early first-year physics undergraduates.

Students at this level have come across the word “orthogonal” before when talking about Euclidean (“normal”) vectors. Here it means the same thing as “perpendicular” or “at right angles”, at least while you have three or fewer dimensions. Once you have more than that, or you’re dealing with some other type of vector, the definition is a little more abstract.
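In symbols, the familiar Euclidean statement is that two vectors are orthogonal when their dot product vanishes,

\[ \mathbf{u} \cdot \mathbf{v} = \sum_i u_i v_i = 0, \]

and the abstract definition simply swaps the dot product for a general inner product.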

Generalised Fourier Series

In all my time as a PhD student I have been involved with teaching undergraduate and postgraduate physics students at Imperial College London (they even gave me a prize in 2018!) as a teaching assistant in classroom-style tutorials and seminars. For my final year, though, I’ve also moved up into helping write the teaching materials, particularly for the differential equations part of the second-year course.

The first differential equations seminar we had was showing off some of the properties and uses of the Legendre polynomials, which the students had just met by solving Legendre’s equation. My part of the seminar was illustrating how any orthogonal basis of functions can be used to make a series expansion, and then getting the students to investigate how different types of functional expansion behave at different orders. If you just want to play with this, load up the Jupyter notebook on Colab or myBinder. The source code for this notebook is available on GitHub. You can see an example plot of these below.

[Figure: series approximations of a twelfth-order polynomial using many terms in the expansion.]

The end result we’ll achieve mathematically is an abstract way of making series expansions. In general, a series expansion approximates a function $f(x)$ by using a (possibly infinite) sequence of terms, where each term is a constant multiplied by some basis function. Different series expansions use different bases and different methods of determining the coefficients.

Perhaps the most familiar example is the Taylor series. The Taylor series $T_f(x)$ that approximates a function $f(x)$ around some point $x_0$ is

\[ T_f(x) = \sum_{n=0}^\infty \frac{f^{(n)}(x_0)}{n!} (x - x_0)^n. \]

QuTiP data layer and the end of Google Summer of Code 2020

This post is the final permalink for the work I’ve done for QuTiP in Google Summer of Code 2020. My mentors have been Alex Pitchford, Eric Giguère and Nathan Shammah, and QuTiP is under the NumFOCUS umbrella.

The main aim of the project was to make Qobj, the primary data type in QuTiP, able to use both sparse and dense representations of matrices and to have them interoperate seamlessly. This was a huge undertaking with far-reaching implications all across the library, but we have now succeeded. There is still plenty of work to be done on additional development documentation and on sanding off the rough edges to improve the user experience, but we are moving towards a public beta of a major version update next year.

Overview of the new QuTiP data layer

This post is partial documentation for the implementation of the data layer that I wrote in the last week or so as part of Google Summer of Code with QuTiP. I may return to talk a bit more about how various algorithms are implemented internally, but for now, this is some indication of what I’ve been working on.

I had previously replaced the old fast_csr_matrix type with the new, custom CSR type as the data backing for Qobj and all internal QuTiP data representations. This produced speed-ups in some places due to improved algorithms and better cache usage, but its principal advantage was the massive reduction in overhead for function calls between Python and C space, which largely affected small objects.

The full aim, however, is to have QuTiP 5 support many different data representations as the backing of Qobj, and use the most suitable representation for the given data. This does not require every single QuTiP function to have a separate version for every possible combination of input types, which would grow exponentially; it only needs specialisations for the most common data combinations. This concept is the “data layer”.

All code examples in this post are prefixed with

>>> from qutip.core import data
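To give a flavour of the dispatch idea, here is a toy sketch of my own (explicitly not QuTiP’s actual API), with numpy and scipy standing in for the dense and sparse representations:

import numpy as np
import scipy.sparse as sp

# A table of specialisations keyed on the operand types, with a
# fallback that converts both operands to dense when no specialised
# kernel exists for the combination.
_add_table = {
    (np.ndarray, np.ndarray): lambda a, b: a + b,
    (sp.csr_matrix, sp.csr_matrix): lambda a, b: a + b,
    (np.ndarray, sp.csr_matrix): lambda a, b: a + b.toarray(),
}

def add(left, right):
    func = _add_table.get((type(left), type(right)))
    if func is None:
        # No specialisation registered: fall back to dense-dense.
        left = left.toarray() if sp.issparse(left) else left
        right = right.toarray() if sp.issparse(right) else right
        func = _add_table[(np.ndarray, np.ndarray)]
    return func(left, right)

dense = np.eye(2)
sparse = sp.csr_matrix(dense)
print(add(sparse, dense))  # (csr, ndarray) is unregistered: dense fallback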

Contiguous memory and finding segfaults in Cython

One of the main advantages of using Cython in the algebraic core of scientific code is fast, non-bounds-checked access to contiguous memory. Of course, when you elide safety-checking, it’s not surprising when you start to get segfaults due to out-of-bounds access and use-after-free bugs in your Cython code.

This post talks a little about why we chose to use raw pointers to access our memory in the upcoming major version of QuTiP rather than other possible methods, and how I tracked down a couple of memory-safety bugs in my code which would have been caught had we been using safer alternatives.

older posts…