# Answers to Exercise Set II.1


Drills

1. (Outline only)

(a) Suppose a(1, 1, 0) + b(1, 2, 0) = (0, 0, 0). Then a + b = 0, a + 2b = 0, which gives
a = 0 and b = 0.
(b) Suppose a(1, 1, i)+b(1, i, 1)+c(i, 1, 1) = (0, 0, 0). Then a+b+ic = 0, a+ib+c = 0,
ia + b + c = 0. Adding all three equations, we have (2 + i)(a + b + c) = 0 and
hence a + b + c = 0. Now there is no difficulty in getting a = 0, b = 0 and c = 0.
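As a quick numerical sanity check (not part of the original outline), the independence claim in (b) can be confirmed by computing the determinant of the matrix whose columns are the three vectors; a sketch using Python's built-in complex type, where `j` plays the role of i:

```python
# Verify drill 1(b): (1, 1, i), (1, i, 1), (i, 1, 1) are linearly
# independent over C iff the matrix with these vectors as columns
# has nonzero determinant.

def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

M = [[1, 1, 1j],
     [1, 1j, 1],
     [1j, 1, 1]]

d = det3(M)
print(d)  # nonzero, so the three vectors are linearly independent
```

The determinant works out to −2 + 4i, which is nonzero, in agreement with the conclusion a = b = c = 0.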
(c) Suppose ap(x) + bp′ (x) + cp′′ (x) = 0, or a(1 + x + x2 ) + b(1 + 2x) + 2c = 0, or
a + b + 2c + (a + 2b)x + ax2 = 0. Thus a + b + 2c = 0, a + 2b = 0 and a = 0. So
a = 0, b = 0 and c = 0.
(d) Suppose ap(x) + bp(x + 1) + cp(x + 2) = 0. Then ax2 + b(x + 1)2 + c(x + 2)2 = 0,
or (a + b + c)x2 + (2b + 4c)x + (b + 4c) = 0. Thus a + b + c = 0, 2b + 4c = 0 and
b + 4c = 0, which give a = 0, b = 0 and c = 0.

(e) Suppose aA + bA^2 = O. Then

$$a\begin{pmatrix}1 & -1\\ 0 & -2\end{pmatrix} + b\begin{pmatrix}1 & 1\\ 0 & 4\end{pmatrix} = \begin{pmatrix}0 & 0\\ 0 & 0\end{pmatrix}.$$

Thus we have a + b = 0, −a + b = 0 and −2a + 4b = 0, which give a = 0 and b = 0.
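The square A^2 used in (e) is easy to confirm by direct multiplication; a short illustrative check (the `matmul` helper is mine, not from the text):

```python
# Verify drill 1(e): with A = [[1, -1], [0, -2]], the square A^2 is
# [[1, 1], [0, 4]], so aA + bA^2 = O yields the scalar equations
# a + b = 0, -a + b = 0, -2a + 4b = 0 listed in the solution.

def matmul(X, Y):
    """Multiply two 2x2 matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, -1], [0, -2]]
A2 = matmul(A, A)
print(A2)  # [[1, 1], [0, 4]], matching the matrix used above
```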

2. (Outline only)
(a) 87(1, 1, 0) + (1, 2, 0) − (88, 89, 0) = (0, 0, 0).
(b) (1, i, 1) − i(i, 1, i) − 2(1, 0, 1) = (0, 0, 0).
(c) p(x) − 3p(x + 1) + 3p(x + 2) − p(x + 3) = 0.

(d) −2A + A^2 + A^3 = O, or

$$-2\begin{pmatrix}1 & -1\\ 0 & -2\end{pmatrix} + \begin{pmatrix}1 & 1\\ 0 & 4\end{pmatrix} + \begin{pmatrix}1 & -3\\ 0 & -8\end{pmatrix} = \begin{pmatrix}0 & 0\\ 0 & 0\end{pmatrix}.$$
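The dependence relation in 2(d) can likewise be verified entry by entry; a small sketch (the `matmul` helper is mine, not from the text):

```python
# Verify drill 2(d): for A = [[1, -1], [0, -2]], the combination
# -2A + A^2 + A^3 is the zero matrix, exhibiting the linear
# dependence of A, A^2, A^3.

def matmul(X, Y):
    """Multiply two 2x2 matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, -1], [0, -2]]
A2 = matmul(A, A)      # [[1, 1], [0, 4]]
A3 = matmul(A2, A)     # [[1, -3], [0, -8]]
combo = [[-2 * A[i][j] + A2[i][j] + A3[i][j] for j in range(2)]
         for i in range(2)]
print(combo)  # the 2x2 zero matrix
```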

3. (Outline only)
(a) Suppose a(u + v) + b(u − v) = 0, or (a + b)u + (a − b)v = 0. Hence a + b = 0
and a − b = 0, which gives a = b = 0.
(b) Suppose a(u + v) + b(u + w) + c(v + w) = 0, or (a + b)u + (a + c)v + (b + c)w = 0.
Thus a + b = 0, a + c = 0, b + c = 0, which give a = 0, b = 0 and c = 0.

(c) Suppose a(u + iv) + b(iu + v) = 0, or (a + bi)u + (ai + b)v = 0. So a + bi = 0
and i(a − ib) = ai + b = 0. Hence a = b = 0.
4. True or False:
(a) False. Need the condition v ≠ 0 to turn it into a true statement.
(b) True.
(c) False. The following correction turns it into a true statement: if two nonzero
vectors are linearly dependent, then each of them is a scalar multiple of the
other.
(d) False.       (e) True.       (f) False.

5. (Outline only)
(a) Suppose a1 A1 + a2 A2 + a3 A3 = O. Then

a1 BA1 + a2 BA2 + a3 BA3 = B(a1 A1 + a2 A2 + a3 A3 ) = O

and hence a1 = a2 = a3 = 0.
(b) Suppose a1 p1 (x) + a2 p2 (x) + a3 p3 (x) = 0. Then
a1 p′1 (x) + a2 p′2 (x) + a3 p′3 (x) = (d/dx)(a1 p1 (x) + a2 p2 (x) + a3 p3 (x)) = 0
and hence a1 = a2 = a3 = 0.

6. Find the leading vectors and express the other vectors as their linear combinations
(a) Using the standard basis {1, x, x2 }, the coordinate vectors of the given polyno-
mials are arranged as the columns of
                                                
$$\begin{pmatrix}1 & 2 & -1 & 0 & 0 & 1\\ 1 & 2 & 1 & 2 & -2 & 5\\ 0 & 0 & 0 & 0 & 1 & 1\end{pmatrix} \sim \begin{pmatrix}1 & 2 & 0 & 1 & 0 & 4\\ 0 & 0 & 1 & 1 & 0 & 3\\ 0 & 0 & 0 & 0 & 1 & 1\end{pmatrix}.$$

Hence the leading vectors are x + 1, x − 1 and x2 − 2x. Furthermore, 2x + 2 =
2(x + 1), 2x = (x + 1) + (x − 1), x2 + 5x + 1 = 4(x + 1) + 3(x − 1) + (x2 − 2x).
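The combinations read off from the reduced matrix in (a) can be re-checked by adding coordinate vectors; an illustrative sketch (the `comb` helper and variable names are mine, not from the text), using coordinates (constant, x, x^2) relative to the basis {1, x, x^2}:

```python
# Check the linear combinations found in 6(a) via coordinate vectors.

def comb(coeffs, vecs):
    """Linear combination of 3-component coordinate vectors."""
    return [sum(c * v[i] for c, v in zip(coeffs, vecs)) for i in range(3)]

x_plus_1   = [1, 1, 0]   # x + 1
x_minus_1  = [-1, 1, 0]  # x - 1
x2_minus2x = [0, -2, 1]  # x^2 - 2x

print(comb([2], [x_plus_1]))                               # 2x + 2
print(comb([1, 1], [x_plus_1, x_minus_1]))                 # 2x
print(comb([4, 3, 1], [x_plus_1, x_minus_1, x2_minus2x]))  # x^2 + 5x + 1
```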
(b) Arrange the given vectors as columns of

$$\begin{pmatrix}1+i & i & 1 & 1\\ 1-i & 1 & i & 2\end{pmatrix} \sim \begin{pmatrix}1 & (1+i)/2 & 0 & (3+i)/4\\ 0 & 0 & 1 & (1-2i)/2\end{pmatrix}.$$

Leading vectors are (1 + i, 1 − i) and (1, i). Also,

(i, 1) = ((1 + i)/2)(1 + i, 1 − i),    (1, 2) = ((3 + i)/4)(1 + i, 1 − i) + ((1 − 2i)/2)(1, i).
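These complex combinations are easy to confirm with Python's built-in complex arithmetic (an illustrative check, not part of the original solution; the `comb` helper is mine):

```python
# Check the combinations in 6(b): (i, 1) and (1, 2) against the
# leading vectors (1 + i, 1 - i) and (1, i).  Python writes i as 1j.

def comb(coeffs, vecs):
    """Linear combination of 2-component complex vectors."""
    return [sum(c * v[k] for c, v in zip(coeffs, vecs)) for k in range(2)]

v1 = [1 + 1j, 1 - 1j]
v2 = [1, 1j]

print(comb([(1 + 1j) / 2], [v1]))                    # (i, 1)
print(comb([(3 + 1j) / 4, (1 - 2j) / 2], [v1, v2]))  # (1, 2)
```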
(c) Leading vectors are C1 , C3 , C4 . Also, C2 = 2C1 , C5 = 5C1 + 2C3 + 6C4 ,
C6 = 3C1 + 4C3 + 7C4 .
7. Invariant subspaces in Example 1.3.1 for operators D and Ta .
(a) Invariance of V1 for D: D(1) = 0, D(x) = 1, D(x2 ) = 2x are in V1 .
(b) Invariance of V1 for Ta : Ta (1) = 1, Ta (x) = a + x, Ta (x2 ) = a2 + 2ax + x2 are in V1 .
(c) Invariance of V2 for D:

D(ekx ) = kekx , D(xekx ) = ekx + kxekx , D(x2 ekx ) = 2xekx + kx2 ekx

are vectors in V2 .
(d) Invariance of V2 for Ta :

Ta (ekx ) = eka ekx ,
Ta (xekx ) = (aeka )ekx + (eka )xekx ,
Ta (x2 ekx ) = (a2 eka )ekx + (2aeka )(xekx ) + eka (x2 ekx ),
which are vectors in V2 .
(e) Invariance of V4 for D:
D(cos bx) = (−b) sin bx,     D(sin bx) = b cos bx,
D(x cos bx) = cos bx − bx sin bx
D(x sin bx) = sin bx + bx cos bx
D(x2 cos bx) = 2x cos bx − bx2 sin bx
D(x2 sin bx) = 2x sin bx + bx2 cos bx
which are vectors in V4 .
(f) Invariance of V4 for Ta :
Ta (cos bx) = (cos ba) cos bx + (− sin ba) sin bx
Ta (sin bx) = (sin ba) cos bx + (cos ba) sin bx
Ta (x cos bx) = (a cos ba) cos bx + (−a sin ba) sin bx
+ (cos ba)x cos bx + (− sin ba)x sin bx
Ta (x sin bx) = (a sin ba) cos bx + (a cos ba) sin bx + (sin ba)x cos bx + (cos ba)x sin bx
Ta (x2 cos bx) = (a2 cos ba) cos bx + (−a2 sin ba) sin bx + (2a cos ba)x cos bx
+ (−2a sin ba)x sin bx + (cos ba)x2 cos bx + (− sin ba)x2 sin bx
Ta (x2 sin bx) = (a2 sin ba) cos bx + (a2 cos ba) sin bx + (2a sin ba)x cos bx
+ (2a cos ba)x sin bx + (sin ba)x2 cos bx + (cos ba)x2 sin bx

which are vectors in V4 .
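All the expansions in (f) follow from the trigonometric addition formulas. The snippet below numerically spot-checks the expansion of Ta (x cos bx) at a few sample points (illustrative only; the parameter values for a and b are arbitrary choices, not from the text):

```python
# Spot-check 7(f): the addition formulas give
#   Ta(x cos bx) = (x + a) cos(b(x + a))
#                = (a cos ba) cos bx - (a sin ba) sin bx
#                  + (cos ba) x cos bx - (sin ba) x sin bx.
import math

a, b = 0.7, 1.3  # arbitrary sample parameters

for x in [-2.0, -0.5, 0.0, 1.1, 3.4]:
    lhs = (x + a) * math.cos(b * (x + a))
    rhs = (a * math.cos(b * a)) * math.cos(b * x) \
        - (a * math.sin(b * a)) * math.sin(b * x) \
        + math.cos(b * a) * x * math.cos(b * x) \
        - math.sin(b * a) * x * math.sin(b * x)
    assert abs(lhs - rhs) < 1e-12  # the two sides agree numerically
print("7(f) expansion of Ta(x cos bx) checked")
```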

Exercises

1. Suppose a1 v1 + a2 v2 + · · · + ar vr = 0. Then

a1 T v1 + a2 T v2 + · · · + ar T vr = T (a1 v1 + a2 v2 + · · · + ar vr ) = 0.

Since T v1 , T v2 , . . . , T vr are linearly independent, we have a1 = a2 = · · · = ar = 0.
This shows the independence of v1 , v2 , . . . , vr . Conversely, take any linearly
independent set of vectors v1 , v2 , . . . , vr and let T = O. Then T v1 , T v2 , . . . , T vr
are all zero vectors and hence are linearly dependent. This shows that the converse
of the statement is false.

2. Suppose that v1 , v2 , . . . , vr are linearly independent. Then v1 ≠ 0. For each k with
2 ≤ k ≤ r, vk is not in the linear span of v1 , v2 , . . . , vk−1 . Indeed, if this is not the
case, we can write vk = b1 v1 +b2 v2 +· · ·+bk−1 vk−1 for some scalars b1 , b2 , . . . , bk−1 .
Thus we have the nontrivial linear relation

a1 v1 + a2 v2 + · · · + ar vr = 0

where aj = bj for j < k, ak = −1 and aj = 0 for j > k. This contradicts the linear
independence of v1 , v2 , . . . , vr . Now we prove the “if” part. Assume, to the contrary,
that v1 , v2 , . . . , vr are linearly dependent, say, they satisfy a nontrivial linear relation
a1 v1 + a2 v2 + · · · + ar vr = 0. At least one of the aj is nonzero. Let k be the largest index
for which ak ≠ 0. It follows that aj = 0 for j > k and hence

a1 v1 + a2 v2 + · · · + ak vk = 0 with ak ≠ 0.

If k = 1, we have a1 v1 = 0 and, since a1 = ak ≠ 0, we have v1 = 0, contradicting
v1 ≠ 0. So we assume k > 1. In this case we have vk = b1 v1 + b2 v2 + · · · + bk−1 vk−1 where bj = −aj /ak .
Thus vk can be expressed as a linear combination of v1 , v2 , . . . , vk−1 , contradicting
our given condition.

3. (a) Suppose
a0 v + a1 T v + a2 T^2 v + · · · + an T^n v = 0.                   (∗)

Applying T^n to this identity, we get a0 T^n v + a1 T^{n+1} v + a2 T^{n+2} v + · · · + an T^{2n} v = 0,
which becomes a0 T^n v = 0 in view of T^{n+1} v = 0. Since T^n v ≠ 0, we have a0 = 0.
Suppose that we already have a0 = a1 = · · · = ak−1 = 0 for some k ≤ n. Then (∗)
becomes ak T^k v + ak+1 T^{k+1} v + · · · + an T^n v = 0. Applying T^{n−k} to the last identity,
we obtain ak T^n v + ak+1 T^{n+1} v + · · · + an T^{2n−k} v = 0, which can be rewritten as
ak T^n v = 0 in view of T^{n+1} v = 0. Since T^n v ≠ 0, we have ak = 0. Now it is clear
that a0 = a1 = · · · = an = 0.

(b) When p(x) is a polynomial of degree n, the degree of p′(x) is n − 1, the degree
of p′′(x) is n − 2, and so forth. Finally, the degree of p^{(n−1)}(x) is 1, p^{(n)}(x) is a
nonzero constant, and p^{(n+1)}(x) = 0. Thus part (a) is applicable.
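Part (b) can be illustrated concretely with T = d/dx acting on coefficient lists (an illustrative sketch; the `deriv` representation is my own, not from the text):

```python
# Illustrate Exercise 3(b): with T = d/dx and p(x) = x^n, we have
# T^{n+1} p = 0 while T^n p is a nonzero constant, so part (a) applies.
# A polynomial is a coefficient list [c0, c1, ...] meaning c0 + c1 x + ...

def deriv(p):
    """Derivative of a polynomial given by its coefficient list."""
    return [i * p[i] for i in range(1, len(p))]

n = 4
p = [0] * n + [1]  # the polynomial x^n
q = p
for _ in range(n):
    q = deriv(q)   # apply T = d/dx a total of n times
print(q)           # [24] -- the nonzero constant n!
print(deriv(q))    # [] -- i.e. T^{n+1} p = 0
```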

4. We have

$$P = \begin{pmatrix}0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\\ 1&0&0&0\end{pmatrix}, \quad P^2 = \begin{pmatrix}0&0&1&0\\ 0&0&0&1\\ 1&0&0&0\\ 0&1&0&0\end{pmatrix}, \quad P^3 = \begin{pmatrix}0&0&0&1\\ 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\end{pmatrix}.$$

Suppose aI + bP + cP^2 + dP^3 = O. Then we can rewrite this identity as

$$\begin{pmatrix}a&b&c&d\\ d&a&b&c\\ c&d&a&b\\ b&c&d&a\end{pmatrix} = \begin{pmatrix}0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\end{pmatrix}$$

and hence a = b = c = d = 0.
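The powers of the cyclic permutation matrix P can be checked mechanically; a small sketch (the `matmul` helper is mine, not from the text):

```python
# Verify the powers of the cyclic permutation matrix P in Exercise 4,
# and that P^4 = I.

def matmul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1],
     [1, 0, 0, 0]]

P2 = matmul(P, P)
P3 = matmul(P2, P)
print(P2[0], P3[0])  # first rows: [0, 0, 1, 0] and [0, 0, 0, 1]
```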

5. (a) An example of three linearly dependent vectors v1 , v2 , v3 in R3 such that every pair
of them forms a linearly independent set: v1 = (1, 0, 0), v2 = (0, 1, 0), v3 = (1, 1, 0).
(b) An example of two linear operators S and T on R2 such that, as vectors in L (V ),
S and T are linearly independent, but, for every v in V , the vectors Sv and T v are
linearly dependent: S = MA and T = MB with

$$A = \begin{pmatrix}1 & 0\\ 0 & 0\end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix}1 & 1\\ 0 & 0\end{pmatrix}.$$

Notice that aA + bB = O gives

$$\begin{pmatrix}a+b & b\\ 0 & 0\end{pmatrix} = \begin{pmatrix}0 & 0\\ 0 & 0\end{pmatrix}$$

and hence a + b = 0, b = 0, which give a = b = 0. For each vector v = (x, y), we have
Sv = (x, 0) and T v = (x + y, 0), which are linearly dependent: there exist a, b, not
both zero, such that a(x, 0) + b(x + y, 0) = (0, 0). Indeed, when x = 0, we may let
a = 1 and b = 0. When x ≠ 0, we may let a = (x + y)/x and b = −1.
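The case split in 5(b) can be exercised on a few sample vectors; an illustrative sketch (the `witness` function is my own packaging of the coefficients given above):

```python
# Check the dependence relation in 5(b): for v = (x, y), Sv = (x, 0)
# and Tv = (x + y, 0); witness(x, y) returns coefficients (a, b),
# not both zero, with a*Sv + b*Tv = (0, 0).

def witness(x, y):
    """Return (a, b) as chosen in the solution's case split."""
    if x == 0:
        return 1.0, 0.0          # Sv is the zero vector
    return (x + y) / x, -1.0     # a*x cancels b*(x + y)

for x, y in [(0.0, 3.0), (2.0, 5.0), (-1.0, 4.0)]:
    a, b = witness(x, y)
    assert (a, b) != (0.0, 0.0)
    assert a * x + b * (x + y) == 0.0  # first components cancel
print("5(b) dependence witnessed")
```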

6. Assume that H ∩ K = {0}. Suppose

a1 h1 + a2 h2 + · · · + ar hr + b1 k1 + b2 k2 + · · · + bs ks = 0.

Let v = a1 h1 + a2 h2 + · · · + ar hr = −b1 k1 − b2 k2 − · · · − bs ks . Then v is in both H and
K. Since H ∩ K = {0}, we must have v = 0. Thus a1 h1 + a2 h2 + · · · + ar hr = 0 and
b1 k1 +b2 k2 +· · ·+bs ks = 0. Since SH is linearly independent, we have a1 = a2 = · · · =
ar = 0. Since SK is linearly independent, we have b1 = b2 = · · · = bs = 0. This shows
that S is linearly independent. Conversely, suppose that S is linearly independent. We
have to show H ∩ K = {0}. Take any vector v in H ∩ K. That v is in H means that we
can write v as a linear combination of vectors in SH , say v = a1 h1 + a2 h2 + · · · + ar hr .
Similarly, since v is in K, we can write v = b1 k1 + b2 k2 + · · · + bs ks . Thus we have

a1 h1 + a2 h2 + · · · + ar hr − b1 k1 − b2 k2 − · · · − bs ks = 0.

Since h1 , h2 , . . . , hr , k1 , k2 , . . . , ks are assumed to be linearly independent, we have
a1 = a2 = · · · = ar = b1 = b2 = · · · = bs = 0. Hence v = a1 h1 + a2 h2 + · · · + ar hr = 0.
This shows H ∩ K = {0}.

7. Take a basis SH ≡ {h1 , h2 , . . . , hr } in H. Extend this basis to a basis of V , say
S ≡ {h1 , h2 , . . . , hr , k1 , k2 , . . . , ks }; (such a basis of V exists, according to Corollary
1.4.4). Let SK ≡ {k1 , k2 , . . . , ks } and let K be the subspace spanned by SK . By the
last exercise we see that H ∩ K = {0}. Next, let v be any vector in V . Since S is a
basis of V , we can express v as a linear combination of vectors in S, say

v = a1 h1 + a2 h2 + · · · + ar hr + b1 k1 + b2 k2 + · · · + bs ks .

Thus we have v = h + k, where h = a1 h1 + a2 h2 + · · · + ar hr is in H and k =
b1 k1 + b2 k2 + · · · + bs ks is in K. This shows V = H + K.
