In Defense of Cramer's Rule

(and determinants)

In this post, I am going to defend my dear determinants. I am a fan of them, and I think they are useful. I am not saying that they are always the best tool for the job.

The Problems with Determinants

Now, to discuss why determinants can be great, I must first discuss why people don’t like them. Sheldon Axler famously hates them. Why? Well, there are a few main reasons: determinants are tedious to compute for anything bigger than a small matrix; on their own they tell you little more than whether a matrix is singular; and determinant-based methods like Cramer’s rule are far slower than plain row reduction for solving large systems.

All of these are absolutely valid criticisms. However, in the vast majority of cases where you would be expected to do them by hand, these issues simply do not matter!

2x2 Cramer’s Rule

\(2\times2\) determinants are easy; the formula is simple enough to do in your head.

\[\begin{equation} \det\begin{pmatrix}a&b\\c&d\end{pmatrix}=ad-bc \end{equation}\]

And using Cramer’s rule, we can solve any \(2\times2\) system \(\mathbf{A}\mathbf{x}=\mathbf{b}\) with invertible \(\mathbf{A}\) by computing exactly three \(2\times2\) determinants!

First, we define the notation: For the system \(\mathbf{A}\mathbf{x}=\mathbf{b}\), where \(\mathbf{A}\) is a square matrix, we define \(\mathbf{A}_i\) to be the matrix obtained by replacing the \(i\)th column of \(\mathbf{A}\) with \(\mathbf{b}\). For example, if we have

\[\begin{equation} \begin{pmatrix}a&b\\c&d\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}=\begin{pmatrix}B_1\\B_2\end{pmatrix} \end{equation}\]

then

\[\mathbf{A}_1=\begin{pmatrix}B_1&b\\B_2&d\end{pmatrix},\quad \mathbf{A}_2=\begin{pmatrix}a&B_1\\c&B_2\end{pmatrix}\]

Then, Cramer’s rule tells us that if \(\det(\mathbf{A})\neq0\), then the solution to the above system is

\[\begin{equation} x_1=\frac{\begin{vmatrix}B_1&b\\B_2&d\end{vmatrix}}{ad-bc},\quad x_2=\frac{\begin{vmatrix}a&B_1\\c&B_2\end{vmatrix}}{ad-bc} \end{equation}\]

The more general result is

\[\begin{equation} x_i=\frac{\det(\mathbf{A}_i)}{\det(\mathbf{A})} \end{equation}\]
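If you want to see this formula as code, here is a minimal NumPy sketch (the function name `cramer_solve` is mine, and in practice you would just call a library solver):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b via Cramer's rule: x_i = det(A_i) / det(A)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("det(A) is zero; Cramer's rule does not apply")
    x = np.empty_like(b)
    for i in range(len(b)):
        A_i = A.copy()
        A_i[:, i] = b               # replace the i-th column of A with b
        x[i] = np.linalg.det(A_i) / det_A
    return x

# Example: 2x + y = 5, x + 3y = 10  ->  x = 1, y = 3
print(cramer_solve([[2, 1], [1, 3]], [5, 10]))
```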

Doing this by hand is actually very quick! First you compute \(ad-bc\) and see if it is zero. If it is zero, you can check by inspection whether the \(\mathbf{b}\) vector is a scalar multiple of the columns of \(\mathbf{A}\) (if so, there are infinitely many solutions; if not, there are none). If the determinant is nonzero, you replace each column in turn with the \(\mathbf{b}\) vector and take the determinant. Personally, what I do is cover up the columns of an augmented matrix one at a time, and negate the first determinant I take.

\[\left( \begin{array}{cc|c} a&b&B_1\\ c&d&B_2 \end{array} \right)\]
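To see why the negation is needed: covering the first column leaves the remaining two columns in the opposite order from \(\mathbf{A}_1\), and swapping two columns flips the sign of a determinant,

\[\begin{vmatrix}b&B_1\\d&B_2\end{vmatrix}=bB_2-dB_1=-\det(\mathbf{A}_1)\]

while covering the second column leaves exactly \(\mathbf{A}_2\), so no sign change is needed there.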

A 2x2 example

Consider the system of equations

\[\left( \begin{array}{cc|c} 6&-5&26\\ 14&3&2 \end{array} \right)\]

Now, this does not look fun to row reduce, especially if you are like me and avoid fractions like the plague. So, let us use Cramer’s rule instead. I will show the work the way I would do it (avoiding large multiplications):

\[\begin{vmatrix}6&-5\\14&3\end{vmatrix}=2\begin{vmatrix}3&-5\\7&3\end{vmatrix}=2(9+35)=88\]

This tells us that the solutions will look like

\[x_1=-\frac{\square}{88},\quad x_2=\frac{\square}{88}\]

with the numerators still to be filled in.

Now, we cover up the first column:

\[\begin{vmatrix}-5&26\\3&2\end{vmatrix}=2\begin{vmatrix}-5&13\\3&1\end{vmatrix}=2(-5-39)=-88\] \[\implies x_1=-\frac{-88}{88}=1\]

Next, we cover up the second column:

\[\begin{vmatrix}6&26\\14&2\end{vmatrix}=2^2\begin{vmatrix}3&13\\7&1\end{vmatrix}=4(3-91)=-4(88)\] \[\implies x_2=\frac{-4(88)}{88}=-4\]

Done.
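If you want a quick sanity check of the answer, NumPy’s built-in solver agrees (this is just verification, not part of the by-hand method):

```python
import numpy as np

# The system above: 6x1 - 5x2 = 26, 14x1 + 3x2 = 2
A = np.array([[6.0, -5.0], [14.0, 3.0]])
b = np.array([26.0, 2.0])
print(np.linalg.solve(A, b))  # -> [ 1. -4.]
```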

A side note about 3x3s

Certainly there are cases where Cramer’s rule can be optimal for \(3\times3\) systems and larger, if the matrix and \(\mathbf{b}\) vector are particularly simple. But this is so rare, I am not going to bother creating a magical example.

A Defense of Determinants

Finally, I am going to stay up on my soap box and tell y’all why determinants can be very useful.

Mostly, this is focused on the \(2\times2\) and \(3\times3\) cases, which are the vast majority of problems you are expected to do by hand.

I will briefly detail my reasoning for each of the following points: small determinants are easy to compute, the determinant of a small matrix tells you a lot for the effort, the adjugate is the nicest way to invert a \(3\times3\) by hand, and Cramer’s rule gives explicit formulas that are useful beyond just solving systems.

Computing small determinants

As we mentioned above, \(2\times2\) determinants are no problem at all. \(3\times3\)’s are similarly not too bad. If one cannot be done in one’s head, a few steps of row reduction can often bring it to that point. And plugging it into a calculator usually isn’t that bad, either.
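For reference, a \(3\times3\) determinant is just three \(2\times2\) determinants, via cofactor expansion along the first row:

\[\det\begin{pmatrix}a&b&c\\d&e&f\\g&h&i\end{pmatrix}=a\begin{vmatrix}e&f\\h&i\end{vmatrix}-b\begin{vmatrix}d&f\\g&i\end{vmatrix}+c\begin{vmatrix}d&e\\g&h\end{vmatrix}\]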

Information for small matrices

Especially for \(3\times3\) matrices, the effort of computing the determinant can have big gains, specifically because it is often very difficult to see that a matrix has rank 2 by inspection. Rank 1 is obvious, because every column is a scalar multiple of the same vector, but seeing that one of the rows is a linear combination of the other two isn’t so easy. Therefore, for the \(3\times3\) case, the fact that the determinant only tells you whether the matrix is singular is not a problem: usually, a zero determinant means that the rank is exactly two.
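A classic example is

\[\begin{pmatrix}1&2&3\\4&5&6\\7&8&9\end{pmatrix}\]

which has determinant zero and rank exactly two (the third row is twice the second minus the first), yet neither fact jumps out at a glance.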

3x3 inverses

I will make a blog post about tricks for matrix inverses someday, I promise. The adjugate matrix is generally the method of choice for \(3\times3\) matrices: you do not have to do that awful row reduction of the super-augmented matrix \((\mathbf{A}\mid\mathbf{I})\).
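For those who have not seen it, the adjugate formula is

\[\mathbf{A}^{-1}=\frac{1}{\det(\mathbf{A})}\operatorname{adj}(\mathbf{A}),\]

where \(\operatorname{adj}(\mathbf{A})\) is the transpose of the matrix of cofactors. In the \(2\times2\) case it reduces to the familiar

\[\begin{pmatrix}a&b\\c&d\end{pmatrix}^{-1}=\frac{1}{ad-bc}\begin{pmatrix}d&-b\\-c&a\end{pmatrix}.\]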

Giving an explicit formula

Finally, I want to shout out Cramer’s rule for its applications. Variation of parameters for ordinary differential equations relies on Cramer’s rule to make the formula compact.
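Concretely, for \(y''+p(t)y'+q(t)y=g(t)\) with independent homogeneous solutions \(y_1,y_2\), one looks for a particular solution \(y_p=u_1y_1+u_2y_2\) and ends up with the \(2\times2\) system

\[\begin{pmatrix}y_1&y_2\\y_1'&y_2'\end{pmatrix}\begin{pmatrix}u_1'\\u_2'\end{pmatrix}=\begin{pmatrix}0\\g\end{pmatrix}\]

which Cramer’s rule solves in one line:

\[u_1'=-\frac{y_2\,g}{W(y_1,y_2)},\qquad u_2'=\frac{y_1\,g}{W(y_1,y_2)},\]

where the Wronskian \(W(y_1,y_2)=y_1y_2'-y_1'y_2\) is itself a determinant.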

If you only need a single variable from a system of equations, Cramer’s rule can also save you from having to row reduce the entire thing.

Additionally, I have used it in my own personal research.