Matrices for Linear Maps
If you already knew some linear algebra before reading my posts, you might be wondering where the heck all the matrices are. The goal of this post is to connect the theory of linear maps and vector spaces to the theory of matrices and computation.
The key observation is that linear maps between finite-dimensional vector spaces can be completely described solely in terms of how they act on basis vectors. Let's explore this idea in a more precise setting.
Suppose $T : U \to V$ is a linear map between finite-dimensional vector spaces, where $\dim U = n$ and $\dim V = m$. Choose a basis $(u_1, \dots, u_n)$ for $U$ and a basis $(v_1, \dots, v_m)$ for $V$. Any vector $x \in U$ can be written as

$$x = x_1 u_1 + x_2 u_2 + \dots + x_n u_n$$

for some scalars $x_1, \dots, x_n$, and so by linearity

$$T(x) = x_1 T(u_1) + x_2 T(u_2) + \dots + x_n T(u_n).$$

We can go further and decompose each $T(u_j)$ in terms of the basis we chose for $V$:

$$T(u_j) = a_{1,j} v_1 + a_{2,j} v_2 + \dots + a_{m,j} v_m,$$

where $a_{1,j}, \dots, a_{m,j}$ are scalars. Since the scalars $a_{i,j}$ record exactly where each basis vector of $U$ gets sent, and since $T$ is determined by its action on basis vectors, these $mn$ scalars describe $T$ completely.

To put it another way, say we've chosen bases for $U$ and $V$ as above. If we were, for instance, to write the scalars $a_{i,j}$ in a rectangular grid, with $i$ indexing the rows and $j$ indexing the columns,

$$\begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m,1} & a_{m,2} & \cdots & a_{m,n} \end{pmatrix},$$

then we would have a compact and easy-to-read way of describing the linear map $T$. This is what we call an $m \times n$ matrix: a grid of scalars with $m$ rows and $n$ columns.
It can be a bit confusing at first trying to remember what the shape of a linear map's matrix should be. The number of columns corresponds to the dimension of the domain, and the number of rows corresponds to the dimension of the codomain. The $j$th column of the matrix holds the coefficients of $T(u_j)$, the image of the $j$th basis vector of the domain.

The scalar $a_{i,j}$, sitting in row $i$ and column $j$, is therefore the coefficient of $v_i$ in the decomposition of $T(u_j)$.
If $T : U \to V$ is a linear map and we have fixed bases for $U$ and $V$, we will write $[T]$ for the matrix of $T$ with respect to those bases.
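If you want to see this in code, here is a small sketch in Python (the function names and the example map are my own illustration, not anything we have formally defined): it builds the matrix of a linear map $T : \mathbb{R}^n \to \mathbb{R}^m$ by applying $T$ to each standard basis vector of the domain and recording the results as columns.

```python
# Sketch (illustrative only): build the matrix of a linear map T : R^n -> R^m
# by recording how T acts on the standard basis vectors of the domain.

def matrix_of(T, n, m):
    """Return the m x n matrix of T with respect to the standard bases.

    Column j holds the coefficients of T(e_j), the image of the
    j-th standard basis vector of the domain.
    """
    columns = []
    for j in range(n):
        e_j = [1.0 if i == j else 0.0 for i in range(n)]  # j-th basis vector
        columns.append(T(e_j))                            # T(e_j) has m entries
    # Transpose the list of columns into a list of rows.
    return [[columns[j][i] for j in range(n)] for i in range(m)]

# Example: a map R^3 -> R^2, (x, y, z) |-> (x + 2y, 3z).
T = lambda v: [v[0] + 2 * v[1], 3 * v[2]]
print(matrix_of(T, 3, 2))  # [[1.0, 2.0, 0.0], [0.0, 0.0, 3.0]]
```

Notice the shape: two rows for the two-dimensional codomain, three columns for the three-dimensional domain.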
Example. Recall the zero map we defined last time, $0 : U \to V$, where $0(x) = \mathbf{0}$ for every $x \in U$. Suppose $(u_1, \dots, u_n)$ is any basis for $U$ and $(v_1, \dots, v_m)$ is any basis for $V$. To compute the matrix for the zero map with respect to these bases, all we need to do is figure out how it acts on basis vectors. Choose any $j$ with $1 \le j \le n$, for which we know $0(u_j) = \mathbf{0}$. Then

$$\mathbf{0} = 0(u_j) = a_{1,j} v_1 + a_{2,j} v_2 + \dots + a_{m,j} v_m$$

for some scalars $a_{1,j}, \dots, a_{m,j}$. But since $(v_1, \dots, v_m)$ is a basis for $V$, it is linearly independent and so this is only possible if $a_{i,j} = 0$ for $1 \le i \le m$. That means the matrix for the zero map looks like this:

$$\begin{pmatrix} 0 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{pmatrix}.$$

And it looks like this for any choice of bases for $U$ and $V$, since there's only one way to write the zero vector in terms of any basis. We call this the zero matrix of dimension $m \times n$.
Example. Recall the identity map we defined last time, $\operatorname{id} : U \to U$, where $\operatorname{id}(x) = x$ for every $x \in U$. Suppose $(u_1, \dots, u_n)$ is any basis for $U$. How does $\operatorname{id}$ act on basis vectors? Choose any $j$ with $1 \le j \le n$. Then

$$u_j = \operatorname{id}(u_j) = a_{1,j} u_1 + a_{2,j} u_2 + \dots + a_{n,j} u_n.$$

Since $(u_1, \dots, u_n)$ is linearly independent, this is only possible if $a_{j,j} = 1$ and $a_{i,j} = 0$ for every $i \neq j$. That is,

$$a_{i,j} = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \neq j. \end{cases}$$

This means that the matrix for the identity map is a square matrix which looks like this:

$$\begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix},$$

with $1$s along the main diagonal and $0$s everywhere else. And it looks like this for any choice of basis for $U$, as long as we use the same basis for both copies of $U$ (the domain and the codomain). Otherwise, all bets are off. We call this the identity matrix of dimension $n$.

We also give a name to the formula above for the scalars $a_{i,j}$. The Kronecker delta is the function of two variables defined by

$$\delta_{i,j} = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \neq j. \end{cases}$$

We can rewrite the identity map in terms of the Kronecker delta, if we want:

$$\operatorname{id}(u_j) = \sum_{i=1}^n \delta_{i,j}\, u_i.$$
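As a quick sketch in Python (my own illustration, nothing official), the Kronecker delta translates directly into code, and the identity matrix is just the matrix whose $(i, j)$ entry is $\delta_{i,j}$:

```python
# Sketch: the Kronecker delta and the n x n identity matrix built from it.

def delta(i, j):
    """Kronecker delta: 1 if i == j, 0 otherwise."""
    return 1 if i == j else 0

def identity_matrix(n):
    """The n x n matrix whose (i, j) entry is delta(i, j)."""
    return [[delta(i, j) for j in range(n)] for i in range(n)]

print(identity_matrix(3))  # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```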
There are a number of important operations we can define on matrices.
Definition. If $A = (a_{i,j})$ and $B = (b_{i,j})$ are $m \times n$ matrices, their sum is defined component-wise:

$$(A + B)_{i,j} = a_{i,j} + b_{i,j}.$$

It is easy to show that with this definition and the normal operation of addition of functions, the matrix of a sum of linear maps is the sum of their matrices: $[S + T] = [S] + [T]$.
Addition of matrices is not defined for matrices of different sizes, since this would be like trying to add functions with different domains and codomains.
Definition. If $A = (a_{i,j})$ is an $m \times n$ matrix, its scalar product with a scalar $\lambda$ is defined component-wise:

$$(\lambda A)_{i,j} = \lambda\, a_{i,j}.$$
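In code, both operations are one-liners. Here is a sketch in Python with lists of lists standing in for matrices (the function names are my own illustration):

```python
# Sketch: component-wise matrix addition and scalar multiplication.

def mat_add(A, B):
    """Sum of two same-shape matrices: (A + B)[i][j] = A[i][j] + B[i][j]."""
    return [[a + b for a, b in zip(row_a, row_b)] for row_a, row_b in zip(A, B)]

def scalar_mul(c, A):
    """Scalar product: (c * A)[i][j] = c * A[i][j]."""
    return [[c * a for a in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[10, 20], [30, 40]]
print(mat_add(A, B))     # [[11, 22], [33, 44]]
print(scalar_mul(2, A))  # [[2, 4], [6, 8]]
```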
So far we have seen a lot of matrices, but I have not shown you what they are useful for. The rest of this post will address their uses.
Suppose we have a linear map $T : U \to V$, together with bases $(u_1, \dots, u_n)$ for $U$ and $(v_1, \dots, v_m)$ for $V$, so that

$$T(u_j) = \sum_{i=1}^m a_{i,j} v_i,$$

where the scalars $a_{i,j}$ are the entries of the matrix $[T]$. A vector $x \in U$ also has a matrix: writing $x = x_1 u_1 + \dots + x_n u_n$, we record its coefficients in the $n \times 1$ matrix

$$[x] = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}.$$

The natural question to ask is this: given the matrix $[T]$ and the matrix $[x]$, can we compute the matrix $[T(x)]$? By linearity,

$$T(x) = \sum_{j=1}^n x_j\, T(u_j) = \sum_{j=1}^n x_j \sum_{i=1}^m a_{i,j} v_i = \sum_{i=1}^m \left( \sum_{j=1}^n a_{i,j} x_j \right) v_i.$$

So the matrix for $T(x)$ is the $m \times 1$ matrix whose entry in row $i$ is $\sum_{j=1}^n a_{i,j} x_j$.

Observe that all of the information about $[T(x)]$ comes directly from the entries of $[T]$ and $[x]$, combined in a purely mechanical way. This computation is worth naming.

Definition. If $A = (a_{i,j})$ is an $m \times n$ matrix for a linear map and $x = (x_j)$ is an $n \times 1$ matrix for a vector, we define matrix-vector multiplication as follows:

$$(Ax)_i = \sum_{j=1}^n a_{i,j} x_j.$$

By definition then, we have that $[T(x)] = [T][x]$.
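Here is that formula as a Python sketch (again just an illustration), reusing the matrix of the example map from the earlier sketch:

```python
# Sketch: matrix-vector multiplication, (A x)_i = sum_j A[i][j] * x[j].

def mat_vec(A, x):
    """Multiply an m x n matrix A by a length-n vector x, giving a length-m vector."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

A = [[1, 2, 0],
     [0, 0, 3]]               # the matrix of (x, y, z) |-> (x + 2y, 3z)
print(mat_vec(A, [1, 1, 1]))  # [3, 3], matching T(1, 1, 1) = (1 + 2, 3)
```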
Now, suppose we have three vector spaces $U$, $V$, and $W$, with bases $(u_1, \dots, u_p)$, $(v_1, \dots, v_n)$, and $(w_1, \dots, w_m)$, respectively, and two linear maps $S : U \to V$ and $T : V \to W$. If we write

$$S(u_j) = \sum_{k=1}^n b_{k,j} v_k \qquad \text{and} \qquad T(v_k) = \sum_{i=1}^m a_{i,k} w_i,$$

then $B = (b_{k,j})$ is the $n \times p$ matrix for $S$ and $A = (a_{i,k})$ is the $m \times n$ matrix for $T$.

Let's take this one step further, and compute the matrix for the composition $T \circ S : U \to W$. For any basis vector $u_j$ of $U$,

$$(T \circ S)(u_j) = T\!\left( \sum_{k=1}^n b_{k,j} v_k \right) = \sum_{k=1}^n b_{k,j}\, T(v_k).$$

Then, expanding each $T(v_k)$,

$$(T \circ S)(u_j) = \sum_{k=1}^n b_{k,j} \sum_{i=1}^m a_{i,k} w_i = \sum_{i=1}^m \left( \sum_{k=1}^n a_{i,k} b_{k,j} \right) w_i,$$

so the entry in row $i$, column $j$ of the matrix for $T \circ S$ is $\sum_{k=1}^n a_{i,k} b_{k,j}$.

We thus make the following definition:

Definition. If $A = (a_{i,k})$ is an $m \times n$ matrix and $B = (b_{k,j})$ is an $n \times p$ matrix, we define matrix-matrix multiplication as follows:

$$(AB)_{i,j} = \sum_{k=1}^n a_{i,k} b_{k,j}.$$

Note that the product $AB$ is an $m \times p$ matrix.

By definition then, we have that $[T \circ S] = [T][S]$.
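One last Python sketch (illustrative only): the nested loops below are the definition verbatim, and the example shows an $m \times n$ times $n \times p$ product coming out $m \times p$.

```python
# Sketch: matrix-matrix multiplication, (A B)_{i,j} = sum_k A[i][k] * B[k][j].

def mat_mul(A, B):
    """Multiply an m x n matrix A by an n x p matrix B, giving an m x p matrix."""
    m, n, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2], [3, 4]]        # 2 x 2
B = [[0, 1, 0], [1, 0, 2]]  # 2 x 3
print(mat_mul(A, B))        # [[2, 1, 4], [4, 3, 8]], a 2 x 3 matrix
```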
I'm going to end here for now because all of these indices are making my head spin. But we'll see matrices again all over the place.