How to simplify matrix multiplication with the best perspectives (and also find certain inverse matrices fast!)
First, let’s start with general matrix-times-column-vector multiplication. We’ll focus on
If we define the matrices
Then,
But look what happens when we separate this result by the
Which are just the original columns of
Which we can interpret as saying: “we want
And, in general, if we denote the columns of an $m \times n$ matrix $A$ by $\vec{a}_1, \ldots, \vec{a}_n$ and the entries of the vector $\vec{x}$ by $x_1, \ldots, x_n$, then
$$A\vec{x} = x_1 \vec{a}_1 + x_2 \vec{a}_2 + \cdots + x_n \vec{a}_n$$
Thus, when multiplying a matrix on the right by a column vector, the column vector tells us how many of each column we are taking.
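To make this concrete, here is a small Python sketch of the column perspective (the matrix and vectors below are my own illustrative examples):

```python
def mat_vec(A, x):
    """Multiply matrix A (a list of rows) by vector x using the column
    perspective: the result is x[j] copies of column j of A, summed."""
    m, n = len(A), len(A[0])
    result = [0] * m
    for j in range(n):          # for each column of A...
        for i in range(m):      # ...add x[j] copies of that column
            result[i] += x[j] * A[i][j]
    return result

A = [[1, 2],
     [3, 4]]
# Each entry of x says how many of the corresponding column to take.
print(mat_vec(A, [1, 0]))  # just the first column: [1, 3]
print(mat_vec(A, [2, 1]))  # 2*(column 1) + 1*(column 2): [4, 10]
```

Note that the code never forms a single dot product; it only scales and adds whole columns, which is exactly the perspective described above.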
Suppose we have that
Consider the product
Now, look at each column of the product. Notice that the second column of
If you don’t quite see what I mean, look at what happens if we separate the multiplication by the columns of
Which is the first column of
Which is the second column of
Thus, we see that
And more generally, if the columns of $B$ are $\vec{b}_1, \ldots, \vec{b}_p$, then
$$AB = A \begin{bmatrix} \vec{b}_1 & \cdots & \vec{b}_p \end{bmatrix} = \begin{bmatrix} A\vec{b}_1 & \cdots & A\vec{b}_p \end{bmatrix}$$
That is to say, each column of the product is just the left matrix times the corresponding column of the right matrix. Hence, we can use the column perspective for each column individually.
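As a sanity check, we can build a whole matrix product one column at a time (the matrices here are my own small examples):

```python
def col(M, j):
    """Extract column j of a matrix stored as a list of rows."""
    return [row[j] for row in M]

def mat_vec(A, x):
    # Column perspective: the sum of x[j] copies of column j of A.
    return [sum(x[j] * A[i][j] for j in range(len(x))) for i in range(len(A))]

def mat_mul(A, B):
    # Column j of A @ B is just A times column j of B.
    cols = [mat_vec(A, col(B, j)) for j in range(len(B[0]))]
    # Reassemble those columns into a matrix of rows.
    return [[c[i] for c in cols] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(mat_mul(A, B))  # B swaps the columns of A: [[2, 1], [4, 3]]
```

Here the right matrix $B$ swaps the two columns of $A$, which you can read straight off $B$'s columns: its first column says "take column 2 of $A$," its second says "take column 1."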
This can GREATLY speed up certain computations.
For example,
will just give us the first column of
is just the first column and the second column added together.
This can also help us in the reverse direction! Take the example of solving
We can see that if we take the third column as it is (that is to say, taking exactly
since taking
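A quick sketch of this reverse direction (the matrix and right-hand side below are my own example, chosen so that $\vec{b}$ is visibly a column of $A$):

```python
# If the right-hand side b happens to be a column of A (or an obvious
# combination of columns), we can read off a solution of A x = b directly.
A = [[1, 0, 2],
     [0, 1, 5],
     [3, 0, 7]]
b = [2, 5, 7]   # exactly the third column of A
x = [0, 0, 1]   # so "take one of column three" solves A x = b

# Verify: A x reproduces b.
Ax = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
print(Ax == b)  # True
```

No elimination needed; recognizing $\vec{b}$ among the columns of $A$ does all the work.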
Let’s use an example of matrix multiplication
The first column of the right matrix tells us to add up the second and third columns:
Personally, I can say that I prefer adding up the columns as opposed to doing nine three-dimensional dot products.
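Here is the same idea in code, with a zero–one right matrix of my own choosing so that every column of the product is just a selection or sum of columns of the left matrix:

```python
A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
# A right matrix full of 0s and 1s only selects and adds columns of A:
# column 1 of B says "add columns 2 and 3 of A", column 2 says "take
# column 1", and column 3 says "take column 3".
B = [[0, 1, 0],
     [1, 0, 0],
     [1, 0, 1]]
product = [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
           for i in range(3)]
print(product)  # [[5, 1, 3], [11, 4, 6], [17, 7, 9]]
```

Reading the product column by column: $[5, 11, 17]$ is columns 2 and 3 of $A$ added together, and the other two columns are copied straight from $A$, with no dot products in sight.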
Row perspective, while not quite as useful as column perspective, still has its share of uses and applications. It is essentially the transpose of column perspective. First, let’s define our notation: we write the rows of an $m \times n$ matrix $B$ as $\vec{b}_1^{\,T}, \ldots, \vec{b}_m^{\,T}$.
Row perspective is as follows: row $i$ of the product $AB$ is
$$a_{i1} \vec{b}_1^{\,T} + a_{i2} \vec{b}_2^{\,T} + \cdots + a_{in} \vec{b}_n^{\,T}$$
where the entries of row $i$ of $A$ are $a_{i1}, \ldots, a_{in}$. That is, each row of the left matrix tells us how many of each row of the right matrix to take.
To see this for a
So an example of this would be
The first row of the left matrix tells us we want
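A small sketch of the row perspective (the row and matrix below are my own example):

```python
B = [[1, 2],
     [3, 4]]
# In A @ B, each row of A says how many of each row of B to take.
# The row [2, 1] means: 2 copies of row 1 of B plus 1 copy of row 2.
a_row = [2, 1]
combo = [sum(a_row[k] * B[k][j] for k in range(2)) for j in range(2)]
print(combo)  # 2*[1, 2] + 1*[3, 4] = [5, 8]
```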
I use row perspective often when row reducing matrices. It helps me do multiple steps at the same time. So, let’s say I am trying to row reduce
My thought process is as follows:
First, the second row would be better if its negative was the first row, because then we would have a pivot in the first column and a zero in the second entry. Thus, the first row of my row reduction matrix should be
Next, if we add up the first two rows, then we get a pivot in the second column and a zero in the first entry. So, the second row of our row reduction matrix will be
Finally, we can cancel out the first three entries of the third row by adding
Putting it all together, our row reduction matrix is
And if we multiply
The final steps of row reduction are then very simple. Divide row three by
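The general pattern here can be sketched in a few lines of Python; the matrices below are my own small example, not the ones from the text, but the principle is the same: each row of the row-reduction matrix $R$ encodes one row operation on $A$.

```python
A = [[2, 1],
     [4, 5]]
R = [[1, 0],       # row 1 of R: keep row 1 of A as-is
     [-2, 1]]      # row 2 of R: replace row 2 with (row 2) - 2*(row 1)
RA = [[sum(R[i][k] * A[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
print(RA)  # [[2, 1], [0, 3]] -- the entry below the first pivot is cleared
```

Because each row of $R$ is read off independently, several row operations get bundled into one multiplication, which is exactly why this speeds up row reduction.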
We can also use these perspectives to find inverses relatively easily (depending on the matrix). Of course, this is relatively pointless for a
Let’s say we want to invert our row reduction matrix from before
This is a good candidate for the perspectives, because there are lots of zeros! The more zeros, the easier it is to use this. If a matrix has no zero entries, unless there is some amazingly obvious pattern, I would just use either the adjugate matrix if it’s
So, to do this, we want to find combinations of the rows and columns to get the identity matrix.
The most obvious one to me is that if we take just
Also, if we take
I can also see that to get the second column of the identity matrix we can take
Finally, to get the first column, we need a combination of columns one and three to get
So, to cancel out that
We could have also looked at rows one and three individually! I chose to do the whole column at once because that was my personal preference. You can do it in whatever order you like. I just do whatever is most obvious to me first.
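To close the loop, here is a sketch of finding an inverse this way; the zero-heavy matrix below is my own example, not necessarily the one from the text. We look for combinations of the columns of $A$ that produce each column $\vec{e}_j$ of the identity; stacking those combination vectors as columns gives $A^{-1}$.

```python
A = [[0, -1, 0],
     [1,  1, 0],
     [0,  0, 1]]
# By inspection: column 1 of A is already e2, and column 3 is already e3.
# For e1 we need 1*(column 1) - 1*(column 2); for e2 we take column 1.
# Those combination vectors, as columns, form the inverse:
Ainv = [[1, 1, 0],
        [-1, 0, 0],
        [0, 0, 1]]
check = [[sum(A[i][k] * Ainv[k][j] for k in range(3)) for j in range(3)]
         for i in range(3)]
print(check)  # [[1, 0, 0], [0, 1, 0], [0, 0, 1]] -- the identity
```

Every column of the inverse was found by eye, just by asking "what combination of $A$'s columns gives this identity column?" and letting the zeros do most of the work.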