In this post, I am going to defend my dear determinants. I am a fan of them, and I think they are useful. I am not saying that they are always the best tool for the job.
The Problems with Determinants
Now, to discuss why determinants can be great, I must first discuss why people don’t like them. Sheldon Axler famously hates them. Why? Well, there are a few main reasons:
While very manageable when small, they get out of hand very quickly: cofactor expansion of an $n \times n$ determinant takes on the order of $n!$ operations, so they become computationally expensive fast
They are not very sensitive tools. A zero determinant tells you only that the matrix is not of full rank; an $n \times n$ matrix with determinant zero could have any rank from $0$ to $n-1$. And the computational effort required to calculate it will really not have been worth it, when you could have just row reduced to learn the same thing and more with much less work.
It is difficult to define them without sounding like a raving lunatic
They cannot be easily used for non-square systems
All of these are absolutely valid criticisms. However, in the vast majority of cases where you would be expected to do them by hand, these issues simply do not matter!
2x2 Cramer’s Rule
$2 \times 2$ determinants are easy. The formula $\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc$ is easy enough to do in your head.
And using Cramer’s rule, we can solve any $2 \times 2$ system by computing exactly three determinants!
First, we define the notation: For the system $A\mathbf{x} = \mathbf{b}$, where $A$ is a square matrix, we define $A_i(\mathbf{b})$ to be the matrix obtained by replacing the $i$th column of $A$ with $\mathbf{b}$. For example, if we have
$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \qquad \mathbf{b} = \begin{pmatrix} e \\ f \end{pmatrix},$$
then
$$A_1(\mathbf{b}) = \begin{pmatrix} e & b \\ f & d \end{pmatrix}, \qquad A_2(\mathbf{b}) = \begin{pmatrix} a & e \\ c & f \end{pmatrix}.$$
Then, Cramer’s rule tells us that if $\det(A) \neq 0$, then the solution to the above system is
$$x = \frac{\det(A_1(\mathbf{b}))}{\det(A)}, \qquad y = \frac{\det(A_2(\mathbf{b}))}{\det(A)}.$$
The more general result being
$$x_i = \frac{\det(A_i(\mathbf{b}))}{\det(A)}.$$
And this is actually very quick! First you compute $\det(A)$ and see if it is zero. If it is, then you can check by inspection whether $\mathbf{b}$ is a scalar multiple of the columns of $A$. If not, then you replace the columns with $\mathbf{b}$ one at a time and take the determinants. Personally, what I do is cover up the columns of the augmented matrix $(A \mid \mathbf{b})$ one at a time: covering the last column gives $\det(A)$, covering the second column gives $\det(A_2(\mathbf{b}))$, and covering the first column gives $\det(A_1(\mathbf{b}))$ with its columns swapped, which is why I negate the first determinant I take.
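The cover-up procedure can be sketched in a few lines of Python. The helper names and the sample system here are my own, chosen just to illustrate the bookkeeping (note the negation on the first covered column):

```python
from fractions import Fraction

def det2(col1, col2):
    """Determinant of the 2x2 matrix whose columns are col1 and col2."""
    (a, c), (b, d) = col1, col2
    return a * d - b * c

def cramer_2x2(a1, a2, b):
    """Solve [a1 a2] x = b by covering up columns of the augmented
    matrix (a1 a2 | b).  Covering the first column leaves (a2 b),
    which is A_1(b) with its columns swapped, hence the negation."""
    d = det2(a1, a2)                  # cover the b column: det(A)
    if d == 0:
        raise ValueError("det(A) = 0; Cramer's rule does not apply")
    x = Fraction(-det2(a2, b), d)     # cover column 1, then negate
    y = Fraction(det2(a1, b), d)      # cover column 2
    return x, y

# Sample system: x + 2y = 5, 3x + 4y = 6 (columns (1,3) and (2,4))
x, y = cramer_2x2((1, 3), (2, 4), (5, 6))  # x = -4, y = 9/2
```

Using `Fraction` keeps the arithmetic exact, which matters if you share my aversion to decimal approximations of fractions.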
A 2x2 example
Consider the system of equations
Now, this does not look fun to row reduce. Especially if you are like me and avoid fractions like the plague. So, let us use Cramer’s rule instead. I will show the work the way I would do it (avoiding large multiplications)
This tells us that
Now, we cover up the first column:
Next, we cover up the second column:
Done.
A side note about 3x3s
Certainly there are cases where Cramer’s rule can be optimal for $3 \times 3$ systems and larger, if the matrix and vector are particularly simple. But this is so rare, I am not going to bother creating a magical example.
A Defense of Determinants
Finally, I am going to stay up on my soapbox and tell y’all why determinants can be very useful.
Mostly, this is focused on the $2 \times 2$ and $3 \times 3$ cases, which cover the vast majority of problems you are expected to do by hand.
They are very easy to compute when small, or at least not difficult to enter into a calculator
There are methods to do them in your head
They can tell you a lot for small matrices
They provide the easiest method for inverses
They can give you an explicit formula for the solution to a system
I will briefly detail my reasoning for each of these points.
Computing small determinants
As we mentioned above, $2 \times 2$ determinants are no problem at all. $3 \times 3$s are similarly not too bad. If one cannot be done in your head, a few steps of row reduction can often bring it to that point, since adding a multiple of one row to another does not change the determinant. And plugging it into a calculator usually isn’t that bad either.
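As an illustration (the matrix is my own), here is a $3 \times 3$ cofactor expansion in Python, plus a row operation that leaves the determinant unchanged but makes the expansion easier:

```python
def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)

m = [[2, 4, 6],
     [1, 3, 5],
     [7, 8, 1]]

# Adding a multiple of one row to another does not change the
# determinant, so R1 <- R1 - 2*R2 creates a zero and a cheap expansion:
m2 = [[0, -2, -4],
      [1, 3, 5],
      [7, 8, 1]]

assert det3(m) == det3(m2) == -16
```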
Information for small matrices
Especially for $3 \times 3$ matrices, the effort of computing the determinant can have big gains, because it is often very difficult to see by inspection that a matrix has rank 2. Rank 1 is obvious, since every column is a scalar multiple of the others, but seeing that one of the rows is a linear combination of the other two isn’t so easy. Therefore, for the $3 \times 3$ case, the determinant’s lack of sensitivity is not a problem: in practice, a zero determinant usually means that the rank is exactly two.
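A hypothetical example of that rank-2 situation: in the matrix below (my own construction), the third row is the sum of the first two, which is easy to miss by eye but forces the determinant to zero, and any nonzero $2 \times 2$ minor then certifies that the rank is exactly 2:

```python
def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)

a = [[1, 2, 3],
     [2, 0, 5],
     [3, 2, 8]]   # row 3 = row 1 + row 2, so rank < 3

assert det3(a) == 0

# A nonzero 2x2 minor rules out rank 0 or 1, so the rank is exactly 2:
minor = a[0][0] * a[1][1] - a[0][1] * a[1][0]   # = -4
```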
3x3 inverses
I will make a blog post about tricks for matrix inverses someday, I promise. For now: the adjugate matrix is generally the method of choice for $3 \times 3$ matrices, since you do not have to do that awful row reduction of a super-augmented matrix.
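A sketch of the adjugate method in Python, with exact arithmetic via `Fraction` (the test matrix is my own):

```python
from fractions import Fraction

def inverse3(m):
    """Invert a 3x3 matrix as adj(A)/det(A), where adj(A) is the
    transposed cofactor matrix."""
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)
    if det == 0:
        raise ValueError("singular matrix")
    adj = [[e*i - f*h, c*h - b*i, b*f - c*e],
           [f*g - d*i, a*i - c*g, c*d - a*f],
           [d*h - e*g, b*g - a*h, a*e - b*d]]
    return [[Fraction(x, det) for x in row] for row in adj]

m = [[2, 1, 1],
     [1, 3, 2],
     [1, 0, 0]]
inv = inverse3(m)   # multiply back against m to verify the identity
```

Each entry of the adjugate is just a $2 \times 2$ determinant with a sign, so for a $3 \times 3$ this is nine small computations you can do in the margin.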
Giving an explicit formula
Finally, I want to shout out Cramer’s rule for its applications. Variation of parameters for ordinary differential equations relies on Cramer’s rule to make the formula compact.
If you only need a single variable from a system of equations, Cramer’s rule can also save you from having to row reduce the entire thing.
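For instance, with a hypothetical system of my own, extracting only $z$ from a $3 \times 3$ system takes just two determinants:

```python
from fractions import Fraction

def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)

# System: x + 2y + z = 4,  2x - y + 3z = 7,  x + y + z = 3
A  = [[1, 2, 1],
      [2, -1, 3],
      [1, 1, 1]]
A3 = [[1, 2, 4],
      [2, -1, 7],
      [1, 1, 3]]   # third column replaced by the right-hand side

z = Fraction(det3(A3), det3(A))   # z alone, no need to find x or y
```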