Bases and Dimension of Vector Spaces
Bases
Previously, I discussed span and linear independence. Essentially, the span of a list of vectors is the set of all possible linear combinations of those vectors, and a list of vectors is linearly independent if none of them can be expressed as a linear combination of the others.
Now I'll use these ideas to introduce one of the most important concepts in linear algebra:
Definition. A basis for a vector space $V$ is a linearly independent list of vectors which spans $V$.
This definition, combined with what we have already discussed, tells us three things:
- Every vector in $V$ can be expressed as a linear combination of basis vectors, since a basis must span $V$.
- This representation is unique, since the basis elements are linearly independent.
- $V$ is the direct sum of the spans of the basis vectors.

Stated more explicitly, if $e_1, e_2, \ldots, e_n$ is a basis for $V$, then every vector $v \in V$ can be written uniquely in the form

$$v = a_1 e_1 + a_2 e_2 + \cdots + a_n e_n$$

for some scalars $a_1, a_2, \ldots, a_n$, and

$$V = \operatorname{span}(e_1) \oplus \operatorname{span}(e_2) \oplus \cdots \oplus \operatorname{span}(e_n).$$
Bases for vector spaces are similar to bases for topological spaces. The idea is that a basis is a small, easy to understand subset of vectors from which it is possible to extrapolate pretty much everything about the vector space as a whole.
Here are some examples and non-examples.
Example. The list $(1, 0), (0, 1)$ is a basis for the vector space $\mathbb{F}^2$, since the list is linearly independent and for any vector $(x, y) \in \mathbb{F}^2$, we may write

$$(x, y) = x(1, 0) + y(0, 1).$$
Example. The list $(1, 1), (0, 1)$ is also a basis for $\mathbb{F}^2$. To see that it is linearly independent, we must show that

$$a(1, 1) + b(0, 1) = (0, 0) \tag{1}$$

only if $a = b = 0$. Since in this vector space addition and scalar multiplication are done component-wise, equation $(1)$ reduces to the system of equations:

$$a = 0, \tag{2}$$

$$a + b = 0. \tag{3}$$

Equation $(2)$ implies that $a = 0$, from which equation $(3)$ implies that $b = 0$, as desired. To see that our list spans $\mathbb{F}^2$, we note that any vector $(x, y)$ can be written

$$(x, y) = x(1, 1) + (y - x)(0, 1).$$
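For readers who like to check such computations numerically, here is a minimal numpy sketch of the same argument over $\mathbb{R}$ (the rank and solve calls simply stand in for the hand computations above):

```python
import numpy as np

# Columns of B are the two vectors from the example above.
B = np.array([[1.0, 0.0],
              [1.0, 1.0]])

# Linear independence: B @ (a, b) = 0 forces a = b = 0 exactly when B has full rank.
assert np.linalg.matrix_rank(B) == 2

# Spanning: any (x, y) has coordinates (a, b) solving B @ (a, b) = (x, y).
x, y = 3.0, 5.0
a, b = np.linalg.solve(B, np.array([x, y]))
print(a, b)  # 3.0 2.0, i.e. (x, y) = x(1, 1) + (y - x)(0, 1)
```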
Example. The list $(1, 0), (2, 0)$ is not a basis for $\mathbb{F}^2$ because it is linearly dependent and it does not span $\mathbb{F}^2$. To see that it is linearly dependent, note that

$$2(1, 0) + (-1)(2, 0) = (0, 0).$$

That is, we have written the zero vector as a linear combination of vectors in the list with nonzero coefficients.

To see that it does not span $\mathbb{F}^2$, note that no vector in $\mathbb{F}^2$ with a nonzero second component can ever be written as a linear combination of these vectors, since both have a second component of zero.
Example. The list $(1, 0), (0, 1), (1, 1)$ is not a basis for $\mathbb{F}^2$ because it is linearly dependent. We can see this immediately from the theorem at the end of my last post: we have already exhibited a spanning list of length two, and no linearly independent list can be longer than a spanning list, so any list of three vectors in $\mathbb{F}^2$ must be linearly dependent.
Example. The list $1, x, x^2, x^3$ is a basis for $\mathcal{P}_3(\mathbb{F})$, the vector space of polynomials over a field $\mathbb{F}$ with degree at most three. To justify this claim, note that this list is certainly linearly independent, and any polynomial of degree less than or equal to three can be written

$$p(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3$$

for some choice of $a_0, a_1, a_2, a_3 \in \mathbb{F}$.
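Over $\mathbb{R}$, one way to sanity-check the independence claim numerically is to evaluate the monomials at a few sample points; here is a small numpy sketch (the sample points are an arbitrary illustrative choice):

```python
import numpy as np

# Evaluate 1, x, x^2, x^3 at four distinct points. A nonzero polynomial of degree
# at most three cannot vanish at four distinct points, so the monomials are
# linearly independent exactly when these evaluation columns are.
points = np.array([0.0, 1.0, 2.0, 3.0])
V = np.vander(points, N=4, increasing=True)  # columns: 1, x, x^2, x^3

assert np.linalg.matrix_rank(V) == 4  # the four monomials are independent
```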
The next theorem is fairly obvious, but without it we would be pretty much lost. Recall that a vector space is finite-dimensional if it is spanned by a finite list of vectors. For this next proof we will use this definition, as well as the linear dependence lemma.
Theorem. Every finite-dimensional vector space has a basis.
Proof. Let $V$ denote a finite-dimensional vector space. Then $V$ is spanned by some finite list of vectors, not all zero, $v_1, v_2, \ldots, v_n$. If this list is linearly independent, then we are done. Otherwise, we can use the linear dependence lemma to remove a vector from this list and produce a new list of length $n - 1$ which still spans $V$. We continue this process until we are left with a linearly independent list, which will take at most $n - 1$ steps, since any list containing one nonzero vector is automatically linearly independent. This resulting list is, by definition, a basis for $V$.
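This proof is really an algorithm: keep discarding redundant vectors until what remains is linearly independent. Here is a rough numpy sketch of that process over $\mathbb{R}$ (the function name and the example list are my own, and a rank computation stands in for the linear dependence lemma):

```python
import numpy as np

def prune_to_basis(spanning_list):
    """Prune a finite spanning list down to a basis, in the spirit of the proof:
    discard any vector that is a linear combination of the ones already kept."""
    basis = []
    for v in spanning_list:
        candidate = basis + [v]
        # Keep v only if adding it increases the rank, i.e. v is not redundant.
        if np.linalg.matrix_rank(np.array(candidate)) > len(basis):
            basis.append(v)
    return basis

# A redundant spanning list for R^2 (illustrative choice):
spanning = [(1, 0), (2, 0), (0, 1), (1, 1)]
print(prune_to_basis(spanning))  # [(1, 0), (0, 1)]
```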
This argument only works for finite-dimensional vector spaces, since it relies on the fact that we only have to apply the linear dependence lemma a finite number of times. Infinite-dimensional vector spaces, as I've mentioned before, are a lot grosser. It is possible to show that infinite-dimensional vector spaces are guaranteed to have bases if we accept the axiom of choice. Since most mathematicians much prefer to have bases for their vector spaces, this is just one more point in favor of accepting the axiom of choice. Luckily for us, we aren't currently interested in infinite-dimensional vector spaces and so we can simply ignore this icky business.
Dimension
Next comes a super important result which will finally settle the question I posed last time about how to define the dimension of a vector space. Its proof will employ the theorem I proved at the end of my last post, that the length of a list of spanning vectors is never shorter than any linearly independent list.
Theorem. All bases of a finite-dimensional vector space have the same length.
Proof. Let $V$ denote a finite-dimensional vector space, and suppose $B_1$ and $B_2$ are both bases for $V$. Since $B_1$ is linearly independent and $B_2$ spans $V$, we have that $\lvert B_1 \rvert \le \lvert B_2 \rvert$. Similarly, since $B_2$ is linearly independent and $B_1$ spans $V$, we have that $\lvert B_2 \rvert \le \lvert B_1 \rvert$. It follows that $\lvert B_1 \rvert = \lvert B_2 \rvert$, as desired.
What we have really shown is that the length of a basis for a finite-dimensional vector space is an invariant of that space! And it's a particularly special invariant:
Definition. The dimension of a finite-dimensional vector space is the length of any basis for that space.
If the dimension of a vector space $V$ is $n$, we write

$$\dim V = n.$$
As a special case, recall that we defined the span of the empty list to be $\{0\}$, so the empty list is a basis for the trivial vector space and $\dim \{0\} = 0$.
We've already shown that dimension is well-defined, since all bases for a vector space have the same length. But does this definition coincide with what we expect? For instance, we would certainly expect that $\dim \mathbb{F}^n = n$.
Definition. Given a positive integer $n$, the standard basis for $\mathbb{F}^n$ is the list containing the vectors

$$e_1 = (1, 0, 0, \ldots, 0), \quad e_2 = (0, 1, 0, \ldots, 0), \quad \ldots, \quad e_n = (0, 0, 0, \ldots, 1).$$

That is, $e_i$ is the vector whose $i$th component is $1$ and every other component is zero.
It should be fairly obvious that for any $n$, the standard basis is linearly independent and spans $\mathbb{F}^n$, and so $\dim \mathbb{F}^n = n$, just as we expected.
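Over $\mathbb{R}$ this is also easy to verify numerically, since the rows of the identity matrix are precisely the standard basis vectors (a tiny illustrative check):

```python
import numpy as np

n = 5
standard_basis = np.eye(n)  # rows are e_1, ..., e_n of R^n

# Full rank means the list is linearly independent and spans R^n, so dim R^n = n.
assert np.linalg.matrix_rank(standard_basis) == n
```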
Some Useful Theorems
This first result tells us how to calculate the dimension of a sum of two subspaces. Recall that we previously showed that the intersection of two subspaces is itself a subspace.
Theorem. If $U$ and $W$ are subspaces of a finite-dimensional vector space, then

$$\dim(U + W) = \dim U + \dim W - \dim(U \cap W).$$
Proof. Since $U \cap W$ is a subspace of a finite-dimensional vector space, it is also finite-dimensional. Thus it has a basis, which we will denote $u_1, u_2, \ldots, u_k$. This basis for $U \cap W$ is linearly independent, and thus we may extend it to a basis for $U$ by adding $m$ new linearly independent vectors. That is, there is some basis $u_1, \ldots, u_k, v_1, \ldots, v_m$ for $U$, and certainly $\dim U = k + m$. We may similarly extend $u_1, \ldots, u_k$ to a basis for $W$ by adding $p$ new linearly independent vectors. That is, there is some basis $u_1, \ldots, u_k, w_1, \ldots, w_p$ for $W$, and certainly $\dim W = k + p$. We argue that

$$u_1, \ldots, u_k, v_1, \ldots, v_m, w_1, \ldots, w_p$$

is a basis for $U + W$. Certainly

$$U, W \subseteq \operatorname{span}(u_1, \ldots, u_k, v_1, \ldots, v_m, w_1, \ldots, w_p),$$

and thus

$$U + W = \operatorname{span}(u_1, \ldots, u_k, v_1, \ldots, v_m, w_1, \ldots, w_p).$$

To see that this list is linearly independent, suppose that

$$a_1 u_1 + \cdots + a_k u_k + b_1 v_1 + \cdots + b_m v_m + c_1 w_1 + \cdots + c_p w_p = 0 \tag{4}$$

for some scalars $a_1, \ldots, a_k$, $b_1, \ldots, b_m$ and $c_1, \ldots, c_p$. We can rewrite $(4)$ as

$$c_1 w_1 + \cdots + c_p w_p = -(a_1 u_1 + \cdots + a_k u_k) - (b_1 v_1 + \cdots + b_m v_m),$$

from which we see that $c_1 w_1 + \cdots + c_p w_p \in U$. Since this vector clearly also lies in $W$, it lies in $U \cap W$. Since $u_1, \ldots, u_k$ is a basis for $U \cap W$, it follows that we may write it uniquely as a linear combination of basis vectors,

$$c_1 w_1 + \cdots + c_p w_p = d_1 u_1 + \cdots + d_k u_k \tag{5}$$

for some scalars $d_1, \ldots, d_k$. But $u_1, \ldots, u_k, w_1, \ldots, w_p$ is linearly independent, so it follows from equation $(5)$ that $c_i = 0$ and $d_i = 0$ for all $i$. This means we may rewrite equation $(4)$ as

$$a_1 u_1 + \cdots + a_k u_k + b_1 v_1 + \cdots + b_m v_m = 0. \tag{6}$$

Since $u_1, \ldots, u_k, v_1, \ldots, v_m$ is linearly independent, it follows that $a_i = 0$ for all $i$ and $b_j = 0$ for all $j$. Since all coefficients in equation $(4)$ have been shown to be zero, it follows that $u_1, \ldots, u_k, v_1, \ldots, v_m, w_1, \ldots, w_p$ is linearly independent, and so it is a basis for $U + W$. It follows then that

$$\dim(U + W) = k + m + p = (k + m) + (k + p) - k = \dim U + \dim W - \dim(U \cap W),$$

completing the proof.
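Here is a small numerical illustration of the formula over $\mathbb{R}$, with two planes in $\mathbb{R}^3$ chosen purely for illustration; the intersection dimension is read off by hand rather than computed:

```python
import numpy as np

def subspace_dim(vectors):
    """Dimension of the span of a list of vectors, computed as a matrix rank."""
    return np.linalg.matrix_rank(np.array(vectors))

U = [(1, 0, 0), (0, 1, 0)]  # the xy-plane
W = [(0, 1, 0), (0, 0, 1)]  # the yz-plane

dim_U = subspace_dim(U)        # 2
dim_W = subspace_dim(W)        # 2
dim_sum = subspace_dim(U + W)  # 3, since U + W = R^3

dim_intersection = 1  # U ∩ W is the y-axis, read off by hand

assert dim_sum == dim_U + dim_W - dim_intersection  # 3 == 2 + 2 - 1
```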
If you think that proof was gross, try doing it for sums of three or more subspaces. Or don't, because as far as I know there is no general formula for such a thing.
The next result ties together our work from last post regarding direct sums, and follows immediately from the theorem we just proved. Recall that if a sum of two subspaces is direct, their intersection is trivial.
Theorem. If $V_1, V_2, \ldots, V_n$ are subspaces of a finite-dimensional vector space for which the sum $V_1 + V_2 + \cdots + V_n$ is direct, then

$$\dim(V_1 \oplus V_2 \oplus \cdots \oplus V_n) = \dim V_1 + \dim V_2 + \cdots + \dim V_n.$$
Proof. We first rewrite the sum as

$$V_1 \oplus V_2 \oplus \cdots \oplus V_n = (\cdots((V_1 \oplus V_2) \oplus V_3) \oplus \cdots) \oplus V_n.$$

We work outward, first considering the subspace $V_1 \oplus V_2$ and then the subspace $(V_1 \oplus V_2) \oplus V_3$, then the subspace $((V_1 \oplus V_2) \oplus V_3) \oplus V_4$, etc. This takes $n - 1$ iterations, but eventually we reach $V_1 \oplus V_2 \oplus \cdots \oplus V_n$. But

$$V_1 \cap V_2 = \{0\}$$

since the sum is direct, and so by the previous theorem we see that

$$\dim(V_1 \oplus V_2) = \dim V_1 + \dim V_2 - \dim(V_1 \cap V_2) = \dim V_1 + \dim V_2.$$

And in general, since

$$(V_1 \oplus V_2 \oplus \cdots \oplus V_k) \cap V_{k+1} = \{0\},$$

$$\dim(V_1 \oplus \cdots \oplus V_k \oplus V_{k+1}) = \dim(V_1 \oplus \cdots \oplus V_k) + \dim V_{k+1} = \dim V_1 + \cdots + \dim V_{k+1}.$$

Setting $k + 1 = n$, the result follows.
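And a quick numerical check of this result, using coordinate subspaces of $\mathbb{R}^4$ chosen purely for illustration:

```python
import numpy as np

# A direct sum of coordinate subspaces of R^4:
#   V1 = span{e1, e2},  V2 = span{e3},  V3 = span{e4}.
V1 = [(1, 0, 0, 0), (0, 1, 0, 0)]
V2 = [(0, 0, 1, 0)]
V3 = [(0, 0, 0, 1)]

dims = [np.linalg.matrix_rank(np.array(Vi)) for Vi in (V1, V2, V3)]
dim_direct_sum = np.linalg.matrix_rank(np.array(V1 + V2 + V3))

assert dim_direct_sum == sum(dims)  # 4 == 2 + 1 + 1
```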
Next time I will introduce linear maps, which are (besides vector spaces themselves) the primary objects of study in linear algebra.