Linearly Independent Vectors

Independent Vectors

Definition 1: Vectors X1, …, Xk of the same size and shape are independent if for any scalar values b1, …, bk, if b1 X1 + ⋯ + bk Xk = 0, then b1 = ⋯ = bk = 0.

Vectors X1, …, Xk are dependent if they are not independent, i.e. there are scalars b1, …, bk, at least one of which is non-zero, such that b1 X1 + ⋯ + bk Xk = 0.

Observation: If X1, …, Xk are independent, then Xj ≠ 0 for all j (otherwise, taking bj = 1 and all other coefficients 0 would give a non-trivial combination equal to 0).
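As an illustrative sketch (not part of the original text), Definition 1 can be tested computationally: vectors are independent exactly when the rank of the matrix whose rows are those vectors equals the number of vectors. The function names below are our own, and we use exact rational arithmetic to avoid floating-point issues.

```python
from fractions import Fraction

def rank(rows):
    """Rank of a list of equal-length vectors, via exact Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0  # number of pivots found so far
    for col in range(len(m[0]) if m else 0):
        # find a row at position r or below with a nonzero entry in this column
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        # eliminate this column from all rows below the pivot row
        for i in range(r + 1, len(m)):
            f = m[i][col] / m[r][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def independent(vectors):
    """Vectors are independent iff their rank equals their count (Definition 1)."""
    return rank(vectors) == len(vectors)

print(independent([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # True
print(independent([[1, 0, 0], [0, 1, 0], [1, 1, 0]]))  # False: third = first + second
```

The second example is dependent because the third vector is the sum of the first two, matching the characterization given in Property 1 below.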

Property 1: X1, …, Xk are dependent if and only if at least one of the vectors can be expressed as a linear combination of the others.

Proof: Suppose X1, …, Xk are dependent. Then there are scalars b1, …, bk, at least one of which is non-zero, such that b1 X1 + ⋯ + bk Xk = 0. Say bi ≠ 0. Then

X_i = -\sum_{j \neq i} \frac{b_j}{b_i} X_j

Now suppose that Xi = \sum_{j \neq i} b_j X_j. Then b1 X1 + ⋯ + bk Xk = 0, where bi = −1, and so X1, …, Xk are dependent.

Span and Basis

Definition 2: The span of independent vectors X1, …, Xk consists of all the vectors which are a linear combination of these vectors. If W is any set of vectors, then the vectors X1, …, Xk are said to be a basis of W if they are independent and their span equals W. We call a set of vectors W closed if W is the span of some set of vectors. Since we will only consider finite sets of vectors, henceforth we will assume that any closed set is finite.

Property 2: If X1, …, Xk is a basis of W, then every element in W has a unique representation as a linear combination of X1, …, Xk.

Proof: That one such representation exists follows from the definition of span. We now show uniqueness. Suppose there are two such representations, as follows:

\sum_{j = 1}^k b_j X_j  = \sum_{j = 1}^k c_j X_j

Then \sum_{j = 1}^k (b_j - c_j) X_j = 0. But since X1, …, Xk are independent, bj − cj = 0 for all j, i.e. bj = cj for all j, and so the two representations are equal.
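As a sketch of Property 2 in code (our own illustration, using hypothetical function names), the unique coefficients b1, …, bk of a vector v with respect to a basis can be found by solving the linear system \sum_j b_j X_j = v with exact rational arithmetic:

```python
from fractions import Fraction

def coordinates(basis, v):
    """Solve sum_j b_j X_j = v by Gaussian elimination on the augmented system.
    When `basis` is independent, Property 2 guarantees the solution is unique."""
    n, k = len(v), len(basis)
    # build the augmented matrix: columns are the basis vectors, last column is v
    A = [[Fraction(basis[j][i]) for j in range(k)] + [Fraction(v[i])]
         for i in range(n)]
    row, pivots = 0, []
    for col in range(k):
        piv = next((i for i in range(row, n) if A[i][col] != 0), None)
        if piv is None:
            continue
        A[row], A[piv] = A[piv], A[row]
        # eliminate this column from every other row (Gauss-Jordan)
        for i in range(n):
            if i != row and A[i][col] != 0:
                f = A[i][col] / A[row][col]
                A[i] = [a - f * b for a, b in zip(A[i], A[row])]
        pivots.append(col)
        row += 1
    # read off the coefficients from the reduced system
    coeffs = [Fraction(0)] * k
    for r, col in enumerate(pivots):
        coeffs[col] = A[r][k] / A[r][col]
    return coeffs

# [3, 2] = 1*[1, 0] + 2*[1, 1] in the basis {[1, 0], [1, 1]}
print(coordinates([[1, 0], [1, 1]], [3, 2]))
```

Running this prints the unique coordinate list [Fraction(1, 1), Fraction(2, 1)], i.e. b1 = 1, b2 = 2.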

Closed set of vectors

Property 3: If B is a set of independent vectors such that the span of B is a subset of a closed (finite) set of vectors W, then B can be expanded to be a (finite) basis for W.

Proof: Let W = {X1, …, Xn}. We now build a finite set of vectors recursively as follows. Start with B0 = B. For each k from 0 to n–1, define Bk+1 = Bk ∪ {Xk+1} if Xk+1 is not already in the span of Bk, and Bk+1 = Bk otherwise. Clearly, each Bk is an independent set of vectors and the span of Bn is W.
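The recursive construction in the proof of Property 3 is effectively a greedy algorithm, which can be sketched as follows (our own illustration; the helper `_rank` is an exact Gaussian-elimination rank, and a vector X lies in the span of the current set exactly when adding it does not increase the rank):

```python
from fractions import Fraction

def _rank(rows):
    """Rank via exact Gaussian elimination with rational arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][col] / m[r][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def extend_to_basis(B, W):
    """Extend the independent set B to a basis of span(W), as in Property 3:
    add each X in W that is not already in the span of the set built so far."""
    basis = list(B)
    for X in W:
        if _rank(basis + [X]) > _rank(basis):  # X is outside the current span
            basis.append(X)
    return basis

W = [[1, 0, 0], [0, 1, 0], [1, 1, 0], [0, 0, 1]]
print(extend_to_basis([], W))  # [1,1,0] is skipped: it is already spanned
```

With B = Ø this is exactly the construction used in Corollary 1 to produce a basis from scratch.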

Corollary 1: Every closed set of vectors W has a (finite) basis.

Proof: The corollary is Property 3 with B = Ø.

Size of a Basis

Property 4: If Y1,…,Yn is a basis for W and X1, …, Xm is a set of independent vectors in W, then m ≤ n.

Proof: We now show by induction that for each k, 1 ≤ k ≤ m, there are vectors Z1,…,Zk such that {Z1,…,Zk} ⊆ {Y1,…,Yn} and Z1,…,Zk, Xk+1,…,Xm are independent.

We now assume the assertion is true for all values less than k and show it is true for k, where k ≤ m, i.e. we need to prove that for at least one j, Z1,…,Zk-1, Yj, Xk+1,…,Xm are independent.

If there is no such j, then by Property 1, each Yj can be expressed as a linear combination of Z1,…,Zk-1, Xk+1,…,Xm. But since Xk is in W and Y1,…,Yn is a basis for W, we can express Xk as a linear combination of Y1,…,Yn, and therefore as a linear combination of Z1,…,Zk-1, Xk+1,…,Xm. Thus by Property 1, Z1,…,Zk-1, Xk, Xk+1,…,Xm are not independent, a violation of the induction hypothesis.

Thus, there is some j for which Z1,…,Zk-1, Yj, Xk+1,…,Xm are independent. We set Zk equal to any such Yj, which yields the desired result, namely that Z1,…,Zk, Xk+1,…,Xm are independent.

When k = m, we conclude that {Z1, …, Zm} ⊆ {Y1, …, Yn} and that Z1, …, Zm are independent. Since Z1, …, Zm are independent, they are all distinct, and since {Z1, …, Zm} ⊆ {Y1, …, Yn}, it follows that m ≤ n.

Corollaries

Corollary 2: Any two bases for a closed (finite) set of vectors have the same number of elements.

Proof: Suppose X1, …, Xm and Y1, …, Yn are bases for W. By Property 4, m ≤ n and also n ≤ m, and so m = n.

Corollary 3: Let Y1, …, Ym be a basis for W and let X1, …, Xm be a set of independent vectors in W. Then X1, …, Xm is also a basis for W.

Proof: Suppose X1, …, Xm is not a basis for W. By Property 3, it can be expanded to become a basis for W. This basis has n elements for some n > m, but since Y1, …, Ym is a basis for W, Property 4 shows that any independent set in W has at most m elements, a contradiction. Thus X1, …, Xm must also be a basis for W.

Corollary 4: Any set of n linearly independent n × 1 column vectors is a basis for the set of n × 1 column vectors. Similarly, any set of n linearly independent 1 × n row vectors is a basis for the set of 1 × n row vectors.

Proof: Let Cj be the jth column of the identity matrix In. It is easy to see that for any n, C1, …, Cn forms a basis for the set of all n × 1 column vectors. The result for column vectors now follows by Corollary 3. The proof for row vectors is similar.

Dimension

Definition 3: The dimension of a closed set of vectors W is the size of any basis of W.

This definition makes sense since, as we have seen from the above, any closed set of vectors has a basis, and any two bases have the same number of elements. Further, note that any element in W can be expressed uniquely as a linear combination of the elements in any basis.

The dimension of a closed set W of vectors is equal to the rank of the matrix whose rows (or columns) consist of a basis for W (see Rank of a Matrix).

