Mackey V3.3
A C++ library for computing RO(G) graded homology
Beneath the surface

Dense vs sparse

The differential matrices can be stored in either dense or sparse formats.

  • A dense matrix is an array representation of a matrix where all values are stored. For a matrix with \(n\) many rows and columns this means space complexity:

    \[O(n^2)\]

  • A sparse matrix is an array representation of a matrix where only the nonzero values and their indices are stored. For a matrix with \(r\) many nonzero elements this means space complexity:

    \[O(r)\]

For this library, it is necessary to use sparse matrices for groups beyond \(C_4\); the reason is that for \(C_8\) and above, matrices can have millions of rows and columns but are 99% zeros. Using the dense format can easily mean memory usage north of 60GB.
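To get a sense of the difference in code, here is a minimal, self-contained sketch using Eigen's sparse types (purely for illustration; this is not the library's own interface), storing a mostly-zero matrix with a million rows and columns:

```cpp
#include <Eigen/Sparse>
#include <iostream>
#include <vector>

int main() {
    const int n = 1000000;  // matrices for C_8 and above can reach this size
    // Dense storage would allocate n*n entries (~8TB of 64-bit integers).
    // Sparse storage keeps only the nonzero entries and their coordinates.
    std::vector<Eigen::Triplet<long long>> entries;
    for (int i = 0; i + 1 < n; ++i)
        entries.emplace_back(i, i + 1, 1);  // a banded, overwhelmingly-zero pattern
    Eigen::SparseMatrix<long long> d(n, n);
    d.setFromTriplets(entries.begin(), entries.end());
    std::cout << d.nonZeros() << " stored entries instead of "
              << static_cast<long long>(n) * n << "\n";
}
```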

Using sparse matrices has an interesting side-effect that we explain in the next section.

Smith normal form

There are multiple ways to compute the Smith Normal Form of a matrix, but since we are mostly interested in the coefficient matrices, we employ the row/column elimination algorithm. To use the algorithm we need to be able to choose a pivot for our matrix each time we want to eliminate a row and column.
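To make one round of the algorithm concrete, here is a simplified sketch of eliminating a pivot's row and column over the integers. It is not the library's implementation: for brevity it assumes the pivot divides every entry in its row and column, while a real Smith Normal Form routine must also handle the non-divisible case (e.g. with gcd/Bézout row operations).

```cpp
#include <vector>

// One round of row/column elimination about the pivot at (pi, pj).
// Assumes a[pi][pj] divides every entry in row pi and column pj.
void eliminate(std::vector<std::vector<long>>& a, int pi, int pj) {
    const long p = a[pi][pj];
    const int n = static_cast<int>(a.size());
    const int m = static_cast<int>(a[0].size());
    for (int i = 0; i < n; ++i)  // clear the pivot column with row operations
        if (i != pi && a[i][pj] != 0) {
            const long q = a[i][pj] / p;
            for (int j = 0; j < m; ++j) a[i][j] -= q * a[pi][j];
        }
    for (int j = 0; j < m; ++j)  // clear the pivot row with column operations
        if (j != pj && a[pi][j] != 0) {
            const long q = a[pi][j] / p;
            for (int i = 0; i < n; ++i) a[i][j] -= q * a[i][pj];
        }
}
```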

The pivoting choice is quite important for two reasons:

  • Coefficient explosion (applies to infinite base rings): Even if the entries of the matrix are small (\(\pm 1\)), the entries of the Smith Normal Form (and especially those of the coefficient matrices) can easily overflow if we make the wrong choice of pivot. You can read more about this problem in Recognizing badly presented Z-modules.
  • Fill-in (applies to all base rings): A wrong choice of pivot may create many new nonzero entries, making our matrix less sparse.

The strategy we employ is to pick the pivot so as to minimize two criteria:

  • 1: absolute value. This is only relevant over infinite base rings like \(\mathbf Z\).
  • 2: the Markowitz metric

    \[N(i,j)=|\{s: a_{is}\neq 0\}| + |\{s: a_{sj}\neq 0\}|\]

    This estimates the amount of fill-in that a single round of row/column elimination can produce.

There are many other possible choices as explained here.
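As an illustration of the two criteria, here is a hypothetical pivot chooser over a dense integer matrix, minimizing the absolute value first and breaking ties with the Markowitz metric; the library's actual implementation operates on sparse matrices and may differ.

```cpp
#include <cstdlib>
#include <utility>
#include <vector>

// Return the position of a pivot minimizing (|value|, N(i,j)) lexicographically,
// where N(i,j) = #nonzeros in row i + #nonzeros in column j.
std::pair<int, int> choose_pivot(const std::vector<std::vector<long>>& a) {
    const int n = static_cast<int>(a.size());
    const int m = static_cast<int>(a[0].size());
    std::vector<int> row_nnz(n, 0), col_nnz(m, 0);
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < m; ++j)
            if (a[i][j] != 0) { ++row_nnz[i]; ++col_nnz[j]; }
    std::pair<int, int> best{-1, -1};
    long best_abs = 0;
    int best_metric = 0;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < m; ++j) {
            if (a[i][j] == 0) continue;
            const long v = std::labs(a[i][j]);
            const int metric = row_nnz[i] + col_nnz[j];
            if (best.first == -1 || v < best_abs ||
                (v == best_abs && metric < best_metric)) {
                best = {i, j};
                best_abs = v;
                best_metric = metric;
            }
        }
    return best;  // {-1, -1} if the matrix is zero
}
```

Over a field the absolute-value criterion is irrelevant and only the Markowitz metric matters.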

Note
Over fields we use partial pivoting instead of full pivoting, i.e., we perform only column eliminations and obtain a triangular matrix at the end.

Algebraic Morse theory

It is possible to reduce a chain complex to a homotopy equivalent chain complex that's smaller, using algebraic Morse theory (AMT). See this paper for more details and an implementation of the AMT algorithm.

An elaboration on the ideas of that paper allows us to perform the reduction while preserving equivariance. This effectively means that whenever we have a box product \(C_*\otimes D_*\) we can replace it by an equivalent equivariant chain complex \(T_*\) that is significantly smaller. There are two upsides:

  • Overall lower memory usage and better performance, especially in the Smith Normal Form algorithm
  • We can reduce triple box products to double box products:

    \[C_*\otimes D_*\otimes E_*\simeq T_*\otimes E_*\]

    The RHS is significantly smaller than the LHS and can be further compressed by using the equivariant AMT algorithm on it.

    Todo:
    Further explore/improve the equivariant AMT algorithm.

Equivariant chain complexes

Our chain complexes don't consist of \(R\)-modules but rather \(R[G]\)-modules of the form

\[\bigoplus_H R[G/H]\]

To record the equivariant information we use rank arrays: for example consider

\[\mathbf Z[C_4]\text{ vs } \mathbf Z[C_2]\oplus \mathbf Z[C_2]\]

We say that the former has rank \([4]\) while the latter has rank \([2,2]\).
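In code, a rank array can be as simple as a list of orbit sizes; the following sketch uses hypothetical names, not the library's actual types.

```cpp
#include <numeric>
#include <vector>

// A rank array records the sizes of the R[G/H] summands.
using rank_array = std::vector<int>;

const rank_array z_c4     = {4};     // Z[C_4] has rank [4]
const rank_array z_c2_sum = {2, 2};  // Z[C_2] (+) Z[C_2] has rank [2,2]

// Both have total rank 4 as R-modules, but different equivariant structure.
int total_rank(const rank_array& r) {
    return std::accumulate(r.begin(), r.end(), 0);
}
```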

Moreover, a basis over \(R[G]\) expands to an \(R\)-basis that we call equivariant; that's a basis where each generator is followed by its orbit:

\[x_1,gx_1,\dots,g^{s_1}x_1,x_2,gx_2,\dots,g^{s_2}x_2,\dots,x_n,gx_n,\dots,g^{s_n}x_n\]

with \(g^{s_i+1}x_i=x_i\).
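A hypothetical helper (again, not the library's API) that expands a rank array into this ordered equivariant basis could look as follows.

```cpp
#include <string>
#include <vector>

// List the equivariant basis x_1, g x_1, ..., g^{s_1} x_1, x_2, ...
// where s_i + 1 is the i-th entry of the rank array.
std::vector<std::string> equivariant_basis(const std::vector<int>& rank) {
    std::vector<std::string> basis;
    for (std::size_t i = 0; i < rank.size(); ++i)
        for (int p = 0; p < rank[i]; ++p)
            basis.push_back("g^" + std::to_string(p) + " x_" + std::to_string(i + 1));
    return basis;
}
// equivariant_basis({4, 2}) yields g^0 x_1, ..., g^3 x_1, g^0 x_2, g^1 x_2
```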

We need to use equivariant bases to transfer our differentials from the bottom level to higher levels.

Box products

In degree \(n\) the tensor product of chain complexes decomposes as \(\bigoplus_i C_i\otimes D_{n-i}\), and its differential is built out of the maps

\[L_i(x\otimes y)=d_ix\otimes y\text{ , }R_i(x\otimes y)=(-1)^{|x|}x\otimes d_i'y\]

The matrix for the differential is:

\[d^{\otimes}=\begin{pmatrix}R_0&L_1\\&R_1&L_2\\ &&\ddots&\ddots\\&&&R_{k-1}&L_k\end{pmatrix}\]

and it remains to compute the matrix expressions of every \(L,R\).
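To make the block layout concrete, here is a toy dense assembly of \(d^{\otimes}\) from given blocks \(R_0,\dots,R_{k-1}\) and \(L_1,\dots,L_k\) (the library itself works with sparse, equivariant data; this only illustrates where each block sits).

```cpp
#include <Eigen/Dense>
#include <vector>

using Mat = Eigen::MatrixXi;

// R = {R_0, ..., R_{k-1}}, L = {L_1, ..., L_k} (so L[i] stores L_{i+1}).
// Block row i holds R_i on the diagonal and L_{i+1} immediately to its right.
Mat assemble_box_differential(const std::vector<Mat>& R, const std::vector<Mat>& L) {
    const std::size_t k = R.size();
    std::vector<Eigen::Index> row_off(k + 1, 0), col_off(k + 2, 0);
    for (std::size_t i = 0; i < k; ++i) row_off[i + 1] = row_off[i] + R[i].rows();
    for (std::size_t j = 0; j < k; ++j) col_off[j + 1] = col_off[j] + R[j].cols();
    col_off[k + 1] = col_off[k] + L[k - 1].cols();  // last column block comes from L_k
    Mat d = Mat::Zero(row_off[k], col_off[k + 1]);
    for (std::size_t i = 0; i < k; ++i) {
        d.block(row_off[i], col_off[i], R[i].rows(), R[i].cols()) = R[i];
        d.block(row_off[i], col_off[i + 1], L[i].rows(), L[i].cols()) = L[i];
    }
    return d;
}
```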
Given bases for \(C_j, D_k\) we can form two lexicographical bases for \(C_j\otimes D_k\); one of them we call left convenient because \(L\) is block diagonal w.r.t. it (each block being \(d\)); the other we call right convenient because \(R\) is block diagonal w.r.t. it (each block being \(d'\)).

However, we aren't allowed to use these bases in \(d^{\otimes}\): we have equivariant chain complexes and we need equivariant bases for them. We refer to the equivariant basis for the tensor product as the canonical basis.
For example, if \(A,B\) have equivariant bases \(a,ga,g^2a,g^3a\) and \(b,gb\) respectively, then the left convenient, right convenient and canonical bases are, in order:

\[ a\otimes b\text{ , }ga\otimes b\text{ , }g^2a\otimes b\text{ , } g^3a\otimes b\text{ , } a\otimes gb\text{ , } ga\otimes gb \text{ , } g^2a\otimes gb \text{ , }g^3a \otimes gb\\ a\otimes b\text{ , }a\otimes gb\text{ , }ga\otimes b \text{ , } ga\otimes gb\text{ , } g^2a\otimes b\text{ , } g^2a\otimes gb \text{ , } g^3a\otimes b \text{ , } g^3a\otimes gb\\ a\otimes b\text{ , }g(a\otimes b)\text{ , }g^2(a\otimes b)\text{ , }g^3(a\otimes b)\text{ , } a\otimes gb\text{ , } g(a\otimes gb)\text{ , }g^2(a\otimes gb)\text{ , } g^3(a\otimes gb)\]

We can compute the change of basis matrices between these three bases, and using them allows us to write \(L,R\) w.r.t. the canonical basis. This is how \(d^{\otimes}\) is obtained.
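For the \(C_4\) example above, the change of basis between the canonical and the left convenient ordering is a permutation, and a short sketch (hypothetical helper, not library code) can compute it explicitly:

```cpp
#include <vector>

// Orbit sizes: 4 for the generator a, 2 for the generator b.
// Canonical index q*4 + p encodes g^p(a (x) g^q b); since
// g^p(a (x) g^q b) = g^p a (x) g^{(p+q) mod 2} b, that element sits at
// position ((p+q) % 2)*4 + p in the left convenient basis.
std::vector<int> canonical_to_left_convenient() {
    std::vector<int> perm(8);
    for (int q = 0; q < 2; ++q)      // canonical orbit generator (a(x)b or a(x)gb)
        for (int p = 0; p < 4; ++p)  // power of g within the orbit
            perm[q * 4 + p] = ((p + q) % 2) * 4 + p;
    return perm;  // {0, 5, 2, 7, 4, 1, 6, 3}
}
```

Changing bases on domain and codomain with such permutation matrices writes \(L\) (block diagonal in the left convenient basis) w.r.t. the canonical basis, and similarly for \(R\).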