TL;DR
Charles Dodgson (Lewis Carroll) devised a condensation procedure for computing determinants by repeatedly forming determinants of 2×2 blocks and shrinking the matrix. The method matches Gaussian elimination in asymptotic cost (O(n³)), preserves integer entries during intermediate steps, and is naturally parallelizable, but requires care to avoid division by zero.
What happened
In an 1867 paper Charles Dodgson, better known as Lewis Carroll, presented a technique now called Dodgson condensation or the method of contractants for computing determinants. The approach repeatedly “condenses” an n×n matrix to an (n−1)×(n−1) matrix: entry (i, j) of the new matrix is the determinant of the 2×2 block whose top-left corner is entry (i, j) of the old one, i.e. the block formed by that element and its east, south, and southeast neighbors; the bottom row and rightmost column have no such neighbors, which is why the matrix shrinks by one in each dimension. From the second condensation step onward, each computed 2×2 determinant is also divided by the corresponding interior element of the matrix from two steps earlier. Because of these divisions, Dodgson warned against zeros in the interior of the matrix; he suggested rearranging rows and columns, or adding multiples of one row to another, to eliminate interior zeros. The blog example applies the method to a 4×4 integer matrix and confirms the result (228) with Mathematica.
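The source contains no code, but the procedure is short enough to sketch. The Python below is my own illustration (the function name, the use of exact Fraction arithmetic, and the simplifying assumption that no interior entry ever becomes zero are not from the paper or the blog):

```python
from fractions import Fraction

def dodgson_determinant(matrix):
    """Determinant of a square matrix via Dodgson condensation.

    A minimal sketch: it assumes no interior entry ever becomes zero,
    the case Dodgson handled by rearranging rows/columns beforehand.
    """
    prev = None  # matrix from two condensation steps back
    # Exact rational arithmetic keeps the divisions below exact.
    curr = [[Fraction(x) for x in row] for row in matrix]
    while len(curr) > 1:
        n = len(curr)
        # Each new entry is the 2x2 determinant of an entry together with
        # its east, south, and southeast neighbors...
        nxt = [[curr[i][j] * curr[i + 1][j + 1] - curr[i][j + 1] * curr[i + 1][j]
                for j in range(n - 1)]
               for i in range(n - 1)]
        # ...divided, from the second step onward, by the interior entry
        # of the matrix computed two steps earlier.
        if prev is not None:
            nxt = [[nxt[i][j] / prev[i + 1][j + 1] for j in range(n - 1)]
                   for i in range(n - 1)]
        prev, curr = curr, nxt
    return curr[0][0]
```

With integer input the returned value is an integer-valued Fraction, reflecting the fact noted below that every intermediate matrix stays integer-valued.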
Why it matters
- Provides a polynomial-time alternative to cofactor (Laplace) expansion, whose naive cost is O(n!).
- Has the same asymptotic cost as Gaussian elimination (O(n³)) but preserves integrality when inputs are integers.
- Inherently parallel: each 2×2 determinant at a condensation step can be computed independently (see the vectorized sketch after this list).
- Requires precautions to avoid division by zero, influencing preconditioning or row/column operations before running.
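The parallelism claim above is easy to see in code: every 2×2 determinant in a condensation step reads only four neighboring entries, so a whole step can be expressed as one vectorized (or GPU-friendly) array operation. A NumPy sketch, my own and not from the source:

```python
import numpy as np

def condense_once(A):
    """One condensation pass: all (n-1)^2 of the 2x2 determinants at once.

    Each output entry depends only on four adjacent input entries, so the
    step is embarrassingly parallel; here it is simply vectorized.
    """
    return A[:-1, :-1] * A[1:, 1:] - A[:-1, 1:] * A[1:, :-1]
```

(The division by entries from two steps back, which starts at the second pass, is equally elementwise and parallel.)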
Key facts
- The technique is known as the method of contractants, Dodgson condensation, or condensation.
- Each condensation step reduces matrix size by one row and one column by replacing entries with 2×2 determinants built from local neighbors.
- From the second step onward, each new entry is divided by the corresponding interior entry of the matrix computed two steps earlier (see the worked example after this list).
- Dodgson recommended rearranging rows/columns or adding multiples of rows to avoid interior zeros and thus divisions by zero.
- When the original matrix has all integer entries, the divisions in the algorithm are exact and intermediate matrices remain integer-valued.
- The method and Gaussian elimination both have O(n³) operation counts; cofactor (Laplace) expansion is O(n!).
- The blog demonstrates the method on a specific 4×4 integer matrix; Mathematica verifies the determinant is 228.
- Condensation is well suited to parallel computation because 2×2 determinants at a given step are independent.
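As a concrete illustration (my own 3×3 example; the blog’s worked example is the 4×4 matrix mentioned above): condensing [[2, 1, 3], [1, 4, 2], [3, 1, 5]] once gives [[7, −10], [−11, 18]]; the second step computes (7·18 − (−10)·(−11)) / 4 = 16 / 4 = 4, dividing by the interior entry 4 of the original matrix, and 4 is indeed the determinant. The division comes out exact, as the integrality fact above promises.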
What to watch next
- Opportunities to exploit the algorithm’s inherent parallelism in multi-core or GPU implementations.
- Not confirmed in the source: detailed numerical-stability comparisons between Dodgson condensation and standard Gaussian elimination in floating-point arithmetic.
- Not confirmed in the source: adoption of Dodgson condensation in mainstream numerical linear algebra libraries.
Quick glossary
- Determinant: A scalar value computed from a square matrix that encodes properties such as invertibility and volume scaling.
- 2×2 determinant: For a 2×2 matrix [[a,b],[c,d]], the determinant equals ad − bc; used as building blocks in condensation.
- Gaussian elimination: A standard method to solve linear systems and compute determinants by reducing a matrix to triangular form, typically with partial pivoting.
- Cofactor (Laplace) expansion: A determinant calculation method that expands along a row or column using minors and cofactors; easy conceptually but factorial time in naive form.
- Partial pivoting: A strategy in Gaussian elimination where rows are swapped to avoid dividing by small or zero pivot elements, improving numerical robustness.
Reader FAQ
Did Lewis Carroll (Charles Dodgson) invent this method?
Yes — the method was described by Charles Dodgson in an 1867 paper.
Does the algorithm require division by previously computed elements?
Yes. From the second condensation step onward, each 2×2 determinant is divided by an element from two steps earlier, so divisions occur.
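The source does not say why these divisions always come out evenly, but the underlying reason is the Desnanot–Jacobi identity (my addition, not from the blog). Writing M_i^j for M with row i and column j deleted,

$$\det(M)\,\det\bigl(M_{1,n}^{1,n}\bigr) \;=\; \det\bigl(M_1^1\bigr)\det\bigl(M_n^n\bigr) \;-\; \det\bigl(M_1^n\bigr)\det\bigl(M_n^1\bigr),$$

where M_{1,n}^{1,n} is M with both its first and last rows and columns deleted (its interior). Applied recursively, this identity is the condensation step: every divided entry is itself the determinant of a contiguous submatrix of the original, hence an integer when the entries are integers.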
What if there are zeros inside the matrix?
Dodgson recommended rearranging rows or columns or adding multiples of rows to remove interior zeros to avoid division by zero.
Is this faster than Gaussian elimination?
No — both Dodgson condensation and Gaussian elimination have O(n³) operation counts; cofactor expansion is much slower (O(n!)).

Sources
- How Lewis Carroll computed determinants (2023)
- Dodgson condensation (Wikipedia)