Design draft: improving tensor-product dimensions in QuTiP
`Qobj` instantiation and mathematical operations have a large overhead, mostly because of handling the `dims` parameter in tensor-product spaces. I’m proposing one possible way to speed this up, while also gaining some additional safety and knowledge about mathematical operations on tensor-product spaces.
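For context, this is what the existing `dims` bookkeeping looks like for the common `Qobj` types; the example only uses standard QuTiP constructors and is not part of the proposal itself:

```python
from qutip import basis, qeye, sigmax, tensor, to_super

# A ket on a 2 (x) 3 tensor-product space.
ket = tensor(basis(2, 0), basis(3, 1))
print(ket.dims)          # [[2, 3], [1, 1]]
print(ket.dag().dims)    # [[1, 1], [2, 3]]   (a bra)

# An operator on the same space.
op = tensor(sigmax(), qeye(3))
print(op.dims)           # [[2, 3], [2, 3]]

# Superoperators nest the operator dims one level deeper.
print(to_super(sigmax()).dims)   # [[[2], [2]], [[2], [2]]]
```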
The steps:
- rigorously define the “grammar” of `dims`, and allow all of `dimensions.py` to assume that this grammar is followed to speed up parsing (a possible grammar is sketched after this list)
- maintain a private data structure type `dimensions._Parsed` inside `Qobj` which is constructed once and keeps all details of the parsing, so they need not be repeated. Determine `Qobj.type` from this data structure (see the `_Parsed` sketch below)
- maintain knowledge of the individual `type` of every subspace in the full Hilbert space (e.g. with a list). There is still a “global” `Qobj.type`, but this can now be one of the set `{'bra', 'ket', 'oper', 'scalar', 'super', 'other'}`. `'other'` is for when the individual elements do not all match each other; individual elements cannot themselves be `'other'`. `'scalar'` is added so that operations can keep track of tensor elements which have been contracted, say by a bra-ket product; operations will then broadcast the scalar up to the correct dimensions on certain operations
- dimension parsing is now sped up by using operation-specific type knowledge, for example `bra + bra -> bra` and `ket.dag() -> bra` (see the lookup-table sketch below). Step 3 is necessary to allow matrix multiplication to work. These lookups could be done with enum values instead of string hashing.
Note: this is part of a design discussion for the next major release of QuTiP. I originally wrote this on 2020-07-13, and any further discussion may be found at the corresponding GitHub issue.