Chapter 4

Hilbert Space

4.1     Inner Product Space
Let E be a complex vector space. A mapping $(\cdot,\cdot) : E \times E \to \mathbb{C}$ is called an
inner product on E if
   i) $(x, x) \ge 0$ for all $x \in E$, and $(x, x) = 0$ if and only if $x = 0$;
   ii) $(\cdot, x)$ is linear on E for each $x \in E$; and
   iii) $(x, y) = \overline{(y, x)}$ for all $x, y$ in E.
With such an inner product E is called an inner product space. If we write $\|x\|$
for $(x, x)^{1/2}$, then $\|\cdot\|$ is a norm on E and hence E is a normed vector space.
For this fact, we show first the

Schwarz Inequality. $|(x, y)| \le \|x\| \, \|y\|$ for $x, y \in E$.

   Proof. For $t \in \mathbb{R}$ we have
\[
0 \le (x + ty, x + ty) = (x, x + ty) + (ty, x + ty) = \|x\|^2 + 2t \operatorname{Re}(x, y) + t^2 \|y\|^2.
\]
Since this quadratic in $t$ is nonnegative for every real $t$, its discriminant is nonpositive,
so $|\operatorname{Re}(x, y)|^2 \le \|x\|^2 \, \|y\|^2$, and hence $|\operatorname{Re}(x, y)| \le \|x\| \, \|y\|$.
In the above, replace $y$ by $\theta y$ for some $\theta \in \mathbb{C}$ with $|\theta| = 1$ and
$\operatorname{Re}(x, \theta y) = |(x, y)|$; then
\[
|(x, y)| = \operatorname{Re}(x, \theta y) \le \|x\| \, \|\theta y\| = \|x\| \, \|y\|.
\]

   From the Schwarz inequality it follows that
\[
\|x + y\|^2 = \|x\|^2 + 2 \operatorname{Re}(x, y) + \|y\|^2 \le \|x\|^2 + 2 \|x\| \, \|y\| + \|y\|^2 = (\|x\| + \|y\|)^2,
\]

or
\[
\|x + y\| \le \|x\| + \|y\|,
\]
i.e. the triangle inequality holds for $\|\cdot\|$. Hence $\|\cdot\|$ is a norm on E. For an inner
product space E, the norm on E is the norm so defined unless stated otherwise.

Examples.
   i) Let $E = \mathbb{C}^n$. For $z = (z_1, \cdots, z_n)$ and $z' = (z'_1, \cdots, z'_n)$ in $\mathbb{C}^n$ let
\[
(z, z') = \sum_{j=1}^{n} z_j \overline{z'_j}.
\]
   ii) Let $E = l^2(\mathbb{N}) = \{(z_1, z_2, \cdots) : \sum_{j=1}^{\infty} |z_j|^2 < \infty\}$. For $z = (z_1, z_2, \cdots)$
and $z' = (z'_1, z'_2, \cdots)$ in $l^2(\mathbb{N})$ let
\[
(z, z') = \sum_{j=1}^{\infty} z_j \overline{z'_j}.
\]
   The space $E = l^2(\mathbb{N})$ will be hereafter simply denoted by $l^2$.
   iii) Let $E = L^2(\Omega, \Sigma, \mu)$. For $f$ and $g$ in $L^2(\Omega, \Sigma, \mu)$, define
\[
(f, g) := \int_\Omega f \bar{g} \, d\mu.
\]
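
As a quick numerical illustration of Example i), here is a minimal sketch (assuming
Python with NumPy) of the inner product on $\mathbb{C}^n$; it checks the Schwarz and
triangle inequalities established above.

    import numpy as np

    def inner(z, w):
        """Inner product (z, w) = sum_j z_j * conj(w_j) on C^n (Example i)."""
        return np.sum(z * np.conj(w))

    def norm(z):
        """Norm induced by the inner product: ||z|| = (z, z)^{1/2}."""
        return np.sqrt(inner(z, z).real)

    rng = np.random.default_rng(0)
    n = 5
    z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    w = rng.standard_normal(n) + 1j * rng.standard_normal(n)

    # Schwarz inequality: |(z, w)| <= ||z|| ||w||
    assert abs(inner(z, w)) <= norm(z) * norm(w) + 1e-12
    # Triangle inequality: ||z + w|| <= ||z|| + ||w||
    assert norm(z + w) <= norm(z) + norm(w) + 1e-12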

Exercise 4.1.1. i) For $z, z' \in l^2(\mathbb{N})$, show that $\{|z_j \overline{z'_j}|\}_{j \in \mathbb{N}}$ is summable and
hence the series defining $(z, z')$ above is absolutely convergent.
   ii) Show that $l^2(\mathbb{N})$ is complete.

    An inner product space is called a Hilbert space if it is complete. Both $\mathbb{C}^n$
and $l^2(\mathbb{N})$ are Hilbert spaces. Since $L^2(\Omega, \Sigma, \mu)$ is complete with respect to the
metric defined by the $L^2$-norm, and since the $L^2$-norm is given by the inner product
defined in Example iii), $L^2(\Omega, \Sigma, \mu)$ is a Hilbert space, of which both $\mathbb{C}^n$ and
$l^2(\mathbb{N})$ are special cases.

Exercise 4.1.2. Define real inner product space and real Hilbert space.


4.2       Geometry in Hilbert Space
Theorem 4.2.1. Let E be an inner product space, and M a complete convex
subset of E. For $x \in E$ the following are equivalent:
   1) $y \in M$ satisfies $\|x - y\| = \min_{z \in M} \|x - z\|$;
   2) $y \in M$ satisfies $\operatorname{Re}(y - x, y - z) \le 0$ for all $z \in M$.
Furthermore, there is a unique $y \in M$ satisfying 1) and 2).

   Proof. 1) ⇒ 2): For $z \in M$ and $0 < \theta \le 1$, let
\[
\begin{aligned}
f(\theta) = \|x - \{(1 - \theta)y + \theta z\}\|^2 &= \|x - y + \theta(y - z)\|^2 \\
 &= \|x - y\|^2 + \theta^2 \|y - z\|^2 + 2\theta \operatorname{Re}(x - y, y - z).
\end{aligned}
\]
Since $f(\theta) \ge f(0) = \|x - y\|^2$ for $0 < \theta \le 1$, we have
\[
\lim_{\theta \downarrow 0} \frac{f(\theta) - f(0)}{\theta} = 2 \operatorname{Re}(x - y, y - z) \ge 0,
\]
i.e. $\operatorname{Re}(y - x, y - z) \le 0$.

   2) ⇒ 1): For $z \in M$ we have
\[
\operatorname{Re}(y - x, y - z) = -\operatorname{Re}(x - y, y - x + x - z) = \|x - y\|^2 - \operatorname{Re}(x - y, x - z) \le 0,
\]
hence
\[
\|x - y\|^2 \le \operatorname{Re}(x - y, x - z) \le \|x - y\| \cdot \|x - z\|,
\]
and consequently
\[
\|x - y\| \le \|x - z\|
\]
for all $z \in M$.
    That there exists at most one such y follows from 2), for if both $y_1$ and $y_2$
satisfy 2) for all $z \in M$, then
\[
\begin{aligned}
0 \le \|y_1 - y_2\|^2 = (y_1 - y_2, y_1 - y_2) &= (y_1 - x, y_1 - y_2) + (y_2 - x, y_2 - y_1) \\
 &= \operatorname{Re}(y_1 - x, y_1 - y_2) + \operatorname{Re}(y_2 - x, y_2 - y_1) \le 0,
\end{aligned}
\]
so $y_1 = y_2$.
    To show that there exists y satisfying 1), let
\[
\alpha = \inf_{z \in M} \|x - z\|.
\]
Consider then a sequence $\{z_n\} \subset M$ satisfying
\[
\alpha^2 \le \|x - z_n\|^2 \le \alpha^2 + \frac{1}{n}.
\]
We claim that $\{z_n\}$ is a Cauchy sequence. We have
\[
\begin{aligned}
\|z_n - z_m\|^2 = \|(z_n - x) - (z_m - x)\|^2 &= \|z_n - x\|^2 + \|z_m - x\|^2 - 2 \operatorname{Re}(z_n - x, z_m - x); \\
4 \left\| \frac{z_n + z_m}{2} - x \right\|^2 &= \|z_n - x\|^2 + \|z_m - x\|^2 + 2 \operatorname{Re}(z_n - x, z_m - x),
\end{aligned}
\]
consequently, since $\frac{z_n + z_m}{2} \in M$ by convexity and hence $\left\| \frac{z_n + z_m}{2} - x \right\|^2 \ge \alpha^2$,
\[
\begin{aligned}
\|z_n - z_m\|^2 &= 2 \|z_n - x\|^2 + 2 \|z_m - x\|^2 - 4 \left\| \frac{z_n + z_m}{2} - x \right\|^2 \\
 &\le 2\left(\alpha^2 + \frac{1}{n}\right) + 2\left(\alpha^2 + \frac{1}{m}\right) - 4\alpha^2 = 2\left(\frac{1}{n} + \frac{1}{m}\right),
\end{aligned}
\]
which shows that $\{z_n\}$ is a Cauchy sequence. Now since M is complete, there
is $y \in M$ with $y = \lim_{n\to\infty} z_n$. Obviously $\|x - y\| = \lim_{n\to\infty} \|x - z_n\| = \alpha$.
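
   To make Theorem 4.2.1 concrete, here is a small sketch (assuming Python with
NumPy) in which M is taken to be the closed unit ball of $\mathbb{R}^4$, whose projection
has the explicit form $tx = x / \max(1, \|x\|)$; it checks both the minimizing property 1)
and the variational inequality 2) against randomly sampled points of M.

    import numpy as np

    def project_ball(x):
        """Projection of x onto the closed unit ball M = {z : ||z|| <= 1}."""
        r = np.linalg.norm(x)
        return x if r <= 1.0 else x / r

    rng = np.random.default_rng(1)
    x = 3.0 * rng.standard_normal(4)          # a point (almost surely) outside the ball
    y = project_ball(x)

    # random points of M, obtained by scaling random vectors into the ball
    zs = rng.standard_normal((1000, 4))
    zs = zs / np.maximum(1.0, np.linalg.norm(zs, axis=1, keepdims=True))

    # 1) y minimizes ||x - z|| over z in M
    assert all(np.linalg.norm(x - y) <= np.linalg.norm(x - z) + 1e-12 for z in zs)
    # 2) Re(y - x, y - z) <= 0 for all z in M (the inner product is real here)
    assert all(np.dot(y - x, y - z) <= 1e-12 for z in zs)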

   The map $t : E \to M$ defined by $tx = y$, where y is the unique element in
M which satisfies 1) and 2) of Theorem 4.2.1, is called the projection from E onto
M and is denoted more precisely by $t_M$ if necessary. Theorem 4.2.1 is usually
applied in the special case when M is a closed convex subset of a Hilbert space.
Corollary 4.2.1. Let M be a closed convex subset of a Hilbert space E, then
$t = t_M$ has the following properties:
   i) $t^2 = t$ (t is idempotent);
   ii) $\|tx - ty\| \le \|x - y\|$ (t is contractive); and
   iii) $\operatorname{Re}(tx - ty, x - y) \ge 0$ (t is monotone).

      Proof. i) is obvious.
      ii): From $\operatorname{Re}(tx - x, tx - ty) \le 0$ and $\operatorname{Re}(ty - y, ty - tx) \le 0$ we obtain
\[
\operatorname{Re}(x - y - (tx - ty), tx - ty) \ge 0,
\]
hence $\|tx - ty\|^2 \le \operatorname{Re}(x - y, tx - ty) \le \|x - y\| \cdot \|tx - ty\|$, from which
$\|tx - ty\| \le \|x - y\|$ follows.
   iii): Again from $\operatorname{Re}(x - y - (tx - ty), tx - ty) \ge 0$ we have
\[
0 \le \|tx - ty\|^2 \le \operatorname{Re}(x - y, tx - ty).
\]



Exercise 4.2.1. If M is a closed convex cone of a Hilbert space E and x ∈ E,
then y = tx if and only if Re(x − y, y) = 0 and Re(x − y, z) ≤ 0 for all z ∈ M .
Note : A convex set M in a vector space is called a convex cone if αx ∈ M for
x ∈ M and α > 0.
Exercise 4.2.2. Let M be a closed convex cone in a Hilbert space E and let
$N = \{y \in E : \operatorname{Re}(y, x) \le 0 \ \forall x \in M\}$. Put $t = t_M$ and $s = t_N$. Show that
   i) $s = 1 - t$, 1 being the identity map of E.
   ii) $t(\lambda x) = \lambda tx$ if $\lambda \ge 0$ (t is positively homogeneous),
   iii) $\|x\|^2 = \|tx\|^2 + \|sx\|^2$, $x \in E$,
   iv) $N = \{x \in E : tx = 0\}$, $M = \{x \in E : sx = 0\}$.
   v) $\operatorname{Re}(tx, sx) = 0$ and $x = tx + sx$; conversely if $x = y + z$, $y \in M$, $z \in N$
and $\operatorname{Re}(y, z) = 0$, then $y = tx$, $z = sx$.
   In the remaining part of the exercise, suppose that M is a closed vector
subspace of E. Show that
   vi) $N = M^\perp := \{y \in E : (y, x) = 0 \ \forall x \in M\}$.
   vii) both t and s are continuous and linear.
   viii) $M = tE = \ker s$; $N = \ker t = sE$.
   ix) $(tx, y) = (x, ty)$ for all $x, y \in E$.
   x) $tx$ and $sx$ are the unique elements $y \in M$ and $z \in M^\perp$ such that $x = y + z$
(a numerical sketch of vi)–x) follows this exercise).
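
   A finite-dimensional sketch of parts vi)–x) (assuming Python with NumPy, and
taking M to be the column space of an arbitrary matrix): it builds the orthogonal
projection t onto M and $s = 1 - t$, then checks idempotence, the Pythagorean
identity iii), the symmetry ix) and the decomposition x).

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((6, 3))           # M = column space of A, a subspace of R^6
    Q, _ = np.linalg.qr(A)                    # orthonormal basis of M
    t = Q @ Q.T                               # orthogonal projection onto M
    s = np.eye(6) - t                         # projection onto N = M-perp

    x = rng.standard_normal(6)
    y = rng.standard_normal(6)

    assert np.allclose(t @ t, t)                                       # t^2 = t
    assert np.isclose(x @ x, (t @ x) @ (t @ x) + (s @ x) @ (s @ x))    # ||x||^2 = ||tx||^2 + ||sx||^2
    assert np.isclose((t @ x) @ y, x @ (t @ y))                        # (tx, y) = (x, ty)
    assert np.allclose(x, t @ x + s @ x)                               # x = tx + sx, tx in M, sx in M-perp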


4.3     Linear Transformation
In this section we consider a linear transformation T from a normed vector space
X into a normed vector space Y over the same field $\mathbb{R}$ or $\mathbb{C}$.

Exercise 4.3.1. Show that T is continuous on X if and only if it is continuous
at one point.

Theorem 4.3.1. T is continuous if and only if there is $C \ge 0$ such that
\[
\|Tx\| \le C \, \|x\| \tag{4.1}
\]
for all $x \in X$.

    Proof. If there is $C \ge 0$ such that (4.1) holds for all $x \in X$, then T is
obviously continuous at $x = 0$ and hence by Exercise 4.3.1 it is continuous on X.
    Conversely, suppose that T is continuous on X, and hence continuous at
$x = 0$. There is then $\delta > 0$ such that $\|Tx\| \le 1$ whenever $\|x\| \le \delta$. Let now
$x \in X$ with $x \ne 0$; then $\left\| \frac{\delta}{\|x\|} x \right\| = \delta$, so $\left\| T\left( \frac{\delta}{\|x\|} x \right) \right\| \le 1$, and thus $\|Tx\| \le \frac{1}{\delta} \|x\|$.
If we choose $C = \frac{1}{\delta}$, then (4.1) holds for $x \ne 0$; when $x = 0$, (4.1) holds trivially.

   From this theorem it follows that if T is a continuous linear transformation
from X into Y , then
\[
\|T\| := \sup_{x \in X, \, x \ne 0} \frac{\|Tx\|}{\|x\|} < +\infty \tag{4.2}
\]
and $\|T\|$ is the smallest C for which (4.1) holds. $\|T\|$ is called the norm of T. Of
course, $\|T\|$ can be defined for any linear transformation T from X into Y ,
and T is continuous if and only if $\|T\| < +\infty$. Hence a continuous linear
transformation is also called a bounded linear transformation.
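
   In the finite-dimensional case $X = \mathbb{R}^6$, $Y = \mathbb{R}^4$ with T given by a matrix,
the norm defined by (4.2) is the largest singular value of that matrix; the sketch
below (assuming Python with NumPy) compares a supremum taken over random unit
vectors with that value.

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.standard_normal((4, 6))           # T : R^6 -> R^4, Tx = A x

    # ||T|| = sup_{||x|| = 1} ||Tx||, estimated over random unit vectors
    xs = rng.standard_normal((20000, 6))
    xs /= np.linalg.norm(xs, axis=1, keepdims=True)
    estimate = max(np.linalg.norm(A @ x) for x in xs)

    exact = np.linalg.norm(A, 2)              # spectral norm = largest singular value
    assert estimate <= exact + 1e-12          # no sample can exceed the supremum
    print(f"estimated ||T|| ~ {estimate:.4f}, exact ||T|| = {exact:.4f}")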

Exercise 4.3.2. Show that $\|T\| = \sup_{x \in X, \, \|x\| = 1} \|Tx\|$.
Exercise 4.3.3. Let L(X, Y ) be the space of all bounded linear transformations
from X into Y . Show that it is a normed vector space with norm given by (4.2).
Theorem 4.3.2. If Y is a Banach space, then L(X, Y ) is a Banach space.

   Proof. We will show that L(X, Y ) is complete. Let $\{T_n\}$ be a Cauchy se-
quence in L(X, Y ). Since
\[
\|T_n x - T_m x\| = \|(T_n - T_m)x\| \le \|T_n - T_m\| \cdot \|x\|,
\]
$\{T_n x\}$ is a Cauchy sequence in Y for each $x \in X$. Put $Tx = \lim_{n\to\infty} T_n x$. T is
obviously a linear transformation from X into Y .
   We claim now $T \in L(X, Y)$. Since $\{T_n\}$ is Cauchy, $\|T_n\| \le C$ for some
$C > 0$, and for all n. Now
\[
\|Tx\| = \lim_{n\to\infty} \|T_n x\| \le \liminf_{n\to\infty} \|T_n\| \cdot \|x\| \le \Big( \sup_n \|T_n\| \Big) \|x\| \le C \|x\|
\]
for each $x \in X$. Hence T is a bounded linear transformation.
    We show next $\lim_{n\to\infty} \|T_n - T\| = 0$. Given $\varepsilon > 0$, there is $n_0$ such that
$\|T_n - T_m\| < \varepsilon$ if $n, m \ge n_0$. Let $n \ge n_0$; we have
\[
\begin{aligned}
\|T_n - T\| &= \sup_{x \in X, \, \|x\| = 1} \|T_n x - Tx\| = \sup_{x \in X, \, \|x\| = 1} \lim_{m\to\infty} \|T_n x - T_m x\| \\
 &\le \sup_{x \in X, \, \|x\| = 1} \liminf_{m\to\infty} \|T_n - T_m\| \cdot \|x\| \le \sup_{x \in X, \, \|x\| = 1} \varepsilon \|x\| = \varepsilon;
\end{aligned}
\]
this shows that $\lim_{n\to\infty} \|T_n - T\| = 0$, or $\lim_{n\to\infty} T_n = T$. Thus the sequence
$\{T_n\}$ has a limit in L(X, Y ).

   $L(X, \mathbb{C})$, or $L(X, \mathbb{R})$, depending on whether X is a complex or a real vector
space, is called the topological dual of X and is denoted by X ′ . X ′ is a Banach
space.
Theorem 4.3.3. (Riesz Representation Theorem) Let X be a Hilbert space and
ℓ ∈ X ′ , then there is $y_0 \in X$ such that
\[
\ell(x) = (x, y_0)
\]
for $x \in X$. Furthermore, the mapping $\ell \mapsto y_0$ is conjugate linear and $\|\ell\| = \|y_0\|$.

   Proof. We may assume $\ell \ne 0$. Let $M = \ker \ell$; then $M^\perp$ is one dimensional.
Each $x \in X$ can be uniquely expressed as $x = v + \lambda x_0$, where $x_0$ is a fixed
non-zero element of $M^\perp$, $v \in M$ and λ a scalar. We have
\[
\ell(x) = \ell(v) + \lambda \ell(x_0) = \lambda \ell(x_0)
\]
and
\[
(x, x_0) = (v + \lambda x_0, x_0) = \lambda \|x_0\|^2.
\]
Hence if we let $y_0 = \frac{\overline{\ell(x_0)}}{\|x_0\|^2} x_0$, then $(x, y_0) = \lambda \ell(x_0) = \ell(x)$. All the other
assertions are obvious.
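
   In $\mathbb{C}^n$ with the inner product of Example i), every linear functional has the
form $\ell(x) = \sum_j a_j x_j$ and its representer is $y_0 = (\bar{a}_1, \cdots, \bar{a}_n)$. A minimal
sketch (assuming Python with NumPy) checking $\ell(x) = (x, y_0)$ and $\|\ell\| = \|y_0\|$:

    import numpy as np

    rng = np.random.default_rng(4)
    n = 5
    a = rng.standard_normal(n) + 1j * rng.standard_normal(n)

    ell = lambda x: np.sum(a * x)             # a bounded linear functional on C^n
    y0 = np.conj(a)                           # its Riesz representer

    inner = lambda u, v: np.sum(u * np.conj(v))
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    assert np.isclose(ell(x), inner(x, y0))   # ell(x) = (x, y0)

    # ||ell|| = sup_{||x||=1} |ell(x)| is attained at x = y0/||y0|| and equals ||y0||
    norm_y0 = np.sqrt(inner(y0, y0).real)
    assert np.isclose(abs(ell(y0 / norm_y0)), norm_y0)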


4.4     Lebesgue-Nikodym Theorem
Let (Ω, Σ, µ) be a measure space and f a Σ-measurable function on Ω. Suppose
that $\int_\Omega f \, d\mu$ has a meaning; then the set function ν defined by
\[
\nu(A) = \int_A f \, d\mu, \quad A \in \Sigma,
\]
is called the indefinite integral of f . Then $\nu(\emptyset) = 0$ and ν is σ-additive, i.e. if
$\{A_n\} \subset \Sigma$ is a disjoint sequence, then
\[
\nu\Big( \bigcup_n A_n \Big) = \sum_n \nu(A_n).
\]
It also enjoys the following property: $\nu(A) = 0$ whenever $A \in \Sigma$ and $\mu(A) = 0$.
This fact suggests the following definition of absolute continuity of a measure
with respect to another measure. Let (Ω, Σ, µ) and (Ω, Σ, ν) be measure spaces;
ν is said to be absolutely continuous w.r.t. µ if $\nu(A) = 0$ whenever $A \in \Sigma$ and
$\mu(A) = 0$.
Theorem 4.4.1. (Lebesgue-Nikodym Theorem) Let (Ω, Σ, µ) and (Ω, Σ, ν) be
measure spaces with $\mu(\Omega) < +\infty$ and $\nu(\Omega) < +\infty$. Suppose that ν is absolutely
continuous w.r.t. µ; then there is a unique $h \in L^1(\Omega, \Sigma, \mu)$ such that
\[
\nu(A) = \int_A h \, d\mu, \quad A \in \Sigma.
\]
Furthermore, $h \ge 0$ µ-a.e.

   Proof. Let $\rho = \mu + \nu$; then ρ is a finite measure on Σ. Consider the real
Hilbert space $L^2(\Omega, \Sigma, \rho)$ and the linear functional ℓ on $L^2(\Omega, \Sigma, \rho)$
defined by
\[
\ell(f) = \int f \, d\nu.
\]
Since
\[
|\ell(f)| \le \Big( \int |f|^2 \, d\nu \Big)^{1/2} \Big( \int 1 \, d\nu \Big)^{1/2} \le \nu(\Omega)^{1/2} \Big( \int |f|^2 \, d\rho \Big)^{1/2} = \nu(\Omega)^{1/2} \|f\|_{L^2(\rho)},
\]
ℓ is a bounded linear functional on $L^2(\Omega, \Sigma, \rho)$. By the Riesz Representation Theo-
rem there is a unique $g \in L^2(\Omega, \Sigma, \rho)$ such that
\[
\int f \, d\nu = \int f g \, d\rho = \int f g \, d\mu + \int f g \, d\nu
\]
for all $f \in L^2(\Omega, \Sigma, \rho)$, or
\[
\int f (1 - g) \, d\nu = \int f g \, d\mu \tag{4.3}
\]
for all $f \in L^2(\Omega, \Sigma, \rho)$.
Claim 1. $0 \le g(x) < 1$ for ρ-a.e. x on Ω.
   Let $A_1 = \{x \in \Omega : g(x) < 0\}$ and $A_2 = \{x \in \Omega : g(x) \ge 1\}$. If we let
$f = \chi_{A_1}$ in (4.3), then $0 \le \nu(A_1) \le \int_{A_1} (1 - g) \, d\nu = \int_{A_1} g \, d\mu$, and since $g < 0$
on $A_1$, it follows that $\mu(A_1) = 0$ and hence $\nu(A_1) = 0$. Thus $\rho(A_1) = 0$. Now in (4.3)
choose $f = \chi_{A_2}$; we have $0 \ge \int_{A_2} (1 - g) \, d\nu = \int_{A_2} g \, d\mu \ge \mu(A_2)$. This implies
$\mu(A_2) = 0$, hence $\nu(A_2) = 0$. Consequently, $\rho(A_2) = 0$. Thus Claim 1 is established.
Claim 2. (4.3) holds for all Σ-measurable and ρ-a.e. non-negative functions f .
   For each positive integer n, let $f_n = f \wedge n$; each $f_n$ is bounded and hence in
$L^2(\Omega, \Sigma, \rho)$, since ρ is finite. Since $1 - g > 0$ and $g \ge 0$ ρ-a.e.,
$0 \le f_n (1 - g) \nearrow f (1 - g)$ and $0 \le f_n g \nearrow f g$, so from the Monotone Convergence
Theorem and (4.3) it follows that
\[
\int f (1 - g) \, d\nu = \lim_{n\to\infty} \int f_n (1 - g) \, d\nu = \lim_{n\to\infty} \int f_n g \, d\mu = \int f g \, d\mu,
\]
which proves the claim.
   For a Σ-measurable and ρ-a.e. non-negative function z, choose $f = \frac{z}{1-g}$ in (4.3); then
\[
\int z \, d\nu = \int z \, \frac{g}{1-g} \, d\mu = \int z h \, d\mu, \tag{4.4}
\]
where $h = \frac{g}{1-g}$. If for $A \in \Sigma$ we take $z = \chi_A$ in (4.4), then
\[
\nu(A) = \int \chi_A h \, d\mu = \int_A h \, d\mu.
\]
Since $\nu(\Omega) < +\infty$, we know $\int h \, d\mu < +\infty$, hence $h \in L^1(\Omega, \Sigma, \mu)$. That such an
h is unique is obvious. That $h \ge 0$ µ-a.e. is also obvious.
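
   When Ω is a finite set the theorem reduces to arithmetic: wherever $\mu(\{\omega\}) > 0$
the density is $h(\omega) = \nu(\{\omega\})/\mu(\{\omega\})$, and absolute continuity guarantees
$\nu(\{\omega\}) = 0$ on the µ-null points. A small sketch (assuming Python) checking
$\nu(A) = \sum_{\omega \in A} h(\omega)\,\mu(\{\omega\})$:

    # Finite Omega = {0, 1, 2, 3}; mu and nu are point-mass measures with nu << mu.
    mu = {0: 0.5, 1: 0.25, 2: 0.25, 3: 0.0}
    nu = {0: 1.0, 1: 0.5,  2: 0.0,  3: 0.0}   # nu(A) = 0 whenever mu(A) = 0

    # Density h = d(nu)/d(mu); its value on the mu-null point 3 is irrelevant.
    h = {w: (nu[w] / mu[w] if mu[w] > 0 else 0.0) for w in mu}

    def measure(m, A):
        """m(A) for a point-mass measure m and a subset A of Omega."""
        return sum(m[w] for w in A)

    for A in [{0}, {1, 2}, {0, 2, 3}, set(mu)]:
        assert abs(measure(nu, A) - sum(h[w] * mu[w] for w in A)) < 1e-12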

Exercise 4.4.1. A measure space (Ω, Σ, µ) is said to be σ-finite if there are
$A_1, A_2, \cdots$ in Σ such that $\bigcup_n A_n = \Omega$ and $\mu(A_n) < +\infty$, $n = 1, 2, \cdots$. Show
that the Lebesgue-Nikodym Theorem holds if both (Ω, Σ, µ) and (Ω, Σ, ν) are σ-
finite. But in this case h may not be µ-integrable.


4.5     The Lax-Milgram Theorem
Let X be a Hilbert space. For definiteness, let X be a complex Hilbert space.
$B(\cdot, \cdot) : X \times X \to \mathbb{C}$ is called sesquilinear if for $x, x_1, x_2$ in X and $\lambda_1, \lambda_2 \in \mathbb{C}$
the following equalities hold:
\[
\begin{aligned}
B(\lambda_1 x_1 + \lambda_2 x_2, x) &= \lambda_1 B(x_1, x) + \lambda_2 B(x_2, x), \\
B(x, \lambda_1 x_1 + \lambda_2 x_2) &= \bar{\lambda}_1 B(x, x_1) + \bar{\lambda}_2 B(x, x_2).
\end{aligned}
\]
B is said to be bounded if there is $r > 0$ such that $|B(x, y)| \le r \|x\| \cdot \|y\|$ for
all x and y in X; and B is said to be positive definite if there exists $\rho > 0$ such
that $B(x, x) \ge \rho \|x\|^2$ for all x in X.

Exercise 4.5.1. Suppose that B is a bounded, positive definite and sesquilinear
function on X × X and assume that $B(x, y) = \overline{B(y, x)}$ for all x and y in X.
Let $((\cdot, \cdot)) = B(\cdot, \cdot)$; then $(X, ((\cdot, \cdot)))$ is a Hilbert space which is equivalent to
$(X, (\cdot, \cdot))$ as a Banach space.

Theorem 4.5.1. (The Lax-Milgram Theorem) Let X be a Hilbert space and B
a bounded, positive definite and sesquilinear functional on X × X. Then there
is a unique bounded linear operator $S : X \to X$ such that $(x, y) = B(x, Sy)$ for
all x and y in X and $\|S\| \le \rho^{-1}$. Furthermore $S^{-1}$ exists and is bounded with
$\|S^{-1}\| \le r$.

   Proof. Let $D = \{y \in X : \exists \, y^* \in X$ such that $(x, y) = B(x, y^*) \ \forall x \in X\}$.
D is not empty, for $0 \in D$. Also $y^*$ is uniquely determined by y. For, if
$B(x, y_1^*) = B(x, y_2^*) = (x, y)$ for all $x \in X$, then $B(x, y_1^* - y_2^*) = 0$ for all $x \in X$, and
hence $0 = B(y_1^* - y_2^*, y_1^* - y_2^*) \ge \rho \|y_1^* - y_2^*\|^2$, implying $y_1^* - y_2^* = 0$, or $y_1^* = y_2^*$.
   For $y \in D$, let $Sy = y^*$. Since B is sesquilinear, D is a vector subspace of
X and S is linear on D. Furthermore, from $\rho \|Sy\|^2 \le B(Sy, Sy) = (Sy, y) \le \|y\| \cdot \|Sy\|$,
it follows that $\|Sy\| \le \rho^{-1} \|y\|$ for $y \in D$. Thus S is bounded on D
with $\|S\| \le \rho^{-1}$. We proceed to show that $D = X$. For this we show first that
D is closed. Let $\{y_n\}_{n=1}^{\infty} \subset D$ with $\lim_{n\to\infty} y_n = y$ for some $y \in X$; then
\[
(x, y) = \lim_{n\to\infty} (x, y_n) = \lim_{n\to\infty} B(x, Sy_n)
\]
for all $x \in X$. Since S is bounded on D, $\{Sy_n\}$ is a Cauchy sequence in X, and hence has a
limit $z \in X$. From this and the fact that B is bounded, it follows that
\[
(x, y) = \lim_{n\to\infty} B(x, Sy_n) = B(x, z)
\]
for all $x \in X$. Hence $y \in D$ and $z = Sy$. So D is closed. Now if $D \ne X$, there
is $y_0 \in D^\perp$, $y_0 \ne 0$. Consider the linear functional ℓ defined on X by
\[
\ell(x) = B(x, y_0), \quad x \in X.
\]
As B is bounded, ℓ is a bounded linear functional on X, and hence by the Riesz
Representation Theorem there is $x_0 \in X$ such that
\[
B(x, y_0) = (x, x_0), \quad x \in X.
\]
Thus $x_0 \in D$ and $\rho \|y_0\|^2 \le B(y_0, y_0) = (y_0, x_0) = 0$. Hence $y_0 = 0$, which
contradicts the fact that $y_0 \ne 0$. Therefore $D = X$. Thus S is a bounded linear
operator on X and $\|S\| \le \rho^{-1}$.
    As $Sy = 0$ implies $(x, y) = B(x, Sy) = 0$ for all $x \in X$ and hence $y = 0$, S is a
one-to-one map. Applying the Riesz Representation Theorem again, as in the last
paragraph, for each $y^*$ in X there is $y \in X$ such that $(x, y) = B(x, y^*)$ for all $x \in X$,
i.e. $y^* = Sy$. Thus S is an onto map. Hence $S^{-1}$ exists. But from
\[
\|S^{-1} y\|^2 = |(S^{-1} y, S^{-1} y)| = |B(S^{-1} y, y)| \le r \|S^{-1} y\| \cdot \|y\|,
\]
it follows that $\|S^{-1}\| \le r$.
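
   In the finite-dimensional case the theorem can be checked by hand: a sketch
(assuming Python with NumPy) with $X = \mathbb{C}^n$, $(u, v) = \sum_j u_j \bar{v}_j$ and
$B(u, v) = v^H A u$ for a Hermitian positive definite matrix A, where the operator S
of the theorem is simply $A^{-1}$.

    import numpy as np

    rng = np.random.default_rng(5)
    n = 4
    G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    A = G.conj().T @ G + np.eye(n)            # Hermitian positive definite

    inner = lambda u, v: np.vdot(v, u)        # (u, v) = sum_j u_j conj(v_j)
    B = lambda u, v: np.vdot(v, A @ u)        # B(u, v) = v^H A u, bounded and positive definite

    rho = np.linalg.eigvalsh(A)[0]            # B(x, x) >= rho ||x||^2 (smallest eigenvalue)
    r = np.linalg.norm(A, 2)                  # |B(x, y)| <= r ||x|| ||y|| (largest eigenvalue)

    S = np.linalg.inv(A)                      # here (x, y) = B(x, Sy) forces S = A^{-1}

    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    y = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    assert np.isclose(inner(x, y), B(x, S @ y))
    assert np.linalg.norm(S, 2) <= 1 / rho + 1e-9               # ||S|| <= rho^{-1}
    assert np.linalg.norm(np.linalg.inv(S), 2) <= r + 1e-9      # ||S^{-1}|| <= r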


4.6      Gram-Schmidt Orthogonalization Procedure
A family $\{x_\alpha\}_{\alpha \in I}$ of non-zero elements of a Hilbert space X is said to be or-
thogonal if $(x_\alpha, x_\beta) = 0$ whenever $\alpha \ne \beta$. An orthogonal family is obviously
linearly independent. A finite or countable orthogonal family is usually referred
to as an orthogonal system in X. From a linearly independent system $\{x_n\}$
in X, one can construct an orthogonal system $\{y_n\}$ in the following way. Let
$E_k = \langle x_1, \cdots, x_k \rangle$, the vector subspace of X generated by $\{x_1, \cdots, x_k\}$,
and let $t_k = t_{E_k}$ be the orthogonal projection of X onto $E_k$. $\{y_n\}$ is defined
inductively as follows. Let $y_1 = x_1$; suppose $y_1, \cdots, y_k$ have been defined, let
$y_{k+1} = x_{k+1} - t_k x_{k+1}$.
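
   A direct transcription of this procedure in $\mathbb{R}^n$ (a sketch assuming Python with
NumPy; the projection $t_k x$ is computed by expanding x against the already
orthogonalized $y_1, \cdots, y_k$):

    import numpy as np

    def gram_schmidt(xs):
        """Orthogonalize a linearly independent list xs: y_{k+1} = x_{k+1} - t_k x_{k+1}."""
        ys = []
        for x in xs:
            # t_k x = sum_j (x, y_j)/(y_j, y_j) y_j, the projection onto <y_1,...,y_k> = <x_1,...,x_k>
            tk_x = sum((x @ y) / (y @ y) * y for y in ys) if ys else 0.0
            ys.append(x - tk_x)
        return ys

    rng = np.random.default_rng(6)
    xs = [rng.standard_normal(5) for _ in range(3)]
    ys = gram_schmidt(xs)

    # {y_n} is an orthogonal system
    for i in range(3):
        for j in range(i):
            assert abs(ys[i] @ ys[j]) < 1e-10

Normalizing each $y_n$ to $e_n = y_n / \|y_n\|$ gives the orthonormalization described
below.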

Exercise 4.6.1. Show that (i) < y1 , · · · , yn > = < x1 , · · · , xn >, n = 1, 2, 3, · · · ;
(ii) {yn } is an orthogonal system in X.

     This procedure of obtaining an orthogonal system from a linearly indepen-
dent system is called the Gram-Schmidt orthogonalization procedure.
     An orthogonal family $\{x_\alpha\}_{\alpha \in I}$ is called an orthonormal family if $(x_\alpha, x_\beta) = \delta_{\alpha\beta}$,
where $\delta_{\alpha\beta} = 1$ or $0$ according as $\alpha = \beta$ or $\alpha \ne \beta$. If $\{y_n\}$ is the orthogonal-
ized system of $\{x_n\}$ through the Gram-Schmidt procedure, the system $\{e_n\}$,
$e_n = \frac{y_n}{\|y_n\|}$, $n = 1, 2, 3, \cdots$, is called the Gram-Schmidt orthonormalization of $\{x_n\}$.



4.7      Bessel Inequality and Parseval Relation
Let {en } be an orthonormal system in a Hilbert space X, U be the closed
vector subspace of X generated by {en }, and tU be the orthogonal projection
of X onto U . For each positive integer k, let Ek = < e1 , · · · , ek > and tk = tEk ,
the orthogonal projection of X onto Ek . The following propositions hold:
1) $t_k x = \sum_{j=1}^{k} (x, e_j) e_j$, $x \in X$.

   Proof. Let $t_k x = \sum_{j=1}^{k} \lambda_j e_j$, $\lambda_j \in \mathbb{C}$. Then
\[
(t_k x, e_i) = \Big( \sum_{j=1}^{k} \lambda_j e_j, e_i \Big) = \sum_{j=1}^{k} \lambda_j (e_j, e_i) = \lambda_i, \quad 1 \le i \le k.
\]
But
\[
(x, e_i) = (t_k x + (1 - t_k)x, e_i) = (t_k x, e_i) = \lambda_i, \quad 1 \le i \le k,
\]
where the fact that $1 - t_k$ is the orthogonal projection of X onto $E_k^\perp$ (see
Exercise 4.2.2) has been used; hence
\[
t_k x = \sum_{j=1}^{k} (x, e_j) e_j.
\]




2) For x ∈ X, limk→∞ tk x = tU x.

   Proof. For $y \in U$ and $\varepsilon > 0$, there is a finite linear combination
$z = \sum_{j=1}^{N} \lambda_j e_j$ such that $\|z - y\| < \varepsilon$. For $k \ge N$, $z \in E_k$, and hence $t_k z = z$;
then
\[
\|t_k y - y\| = \|t_k y - t_k z + z - y\| \le \|t_k (y - z)\| + \|z - y\| \le 2 \|y - z\| < 2\varepsilon.
\]
Consequently, $\lim_{k\to\infty} t_k y = y$. Now for $x \in X$, $t_U x \in U$; thus from what is
proved above, $\lim_{k\to\infty} t_k (t_U x) = t_U x$. But $t_k \circ t_U = t_k$.

3) For each k and x, y in X, $(t_k x, t_k y) = \sum_{j=1}^{k} (x, e_j) \overline{(y, e_j)}$.


     Proof. This follows easily from 1).
4) For x, y in X, $(t_U x, t_U y) = \sum_{j=1}^{\infty} (x, e_j) \overline{(y, e_j)}$.


   Proof. Since $(t_U x, t_U y) - (t_k x, t_k y) = (t_U x - t_k x, t_U y) + (t_k x, t_U y - t_k y)$, we
have
\[
\begin{aligned}
|(t_U x, t_U y) - (t_k x, t_k y)| &\le \|t_U y\| \cdot \|t_U x - t_k x\| + \|t_k x\| \cdot \|t_U y - t_k y\| \\
 &\le \|y\| \cdot \|t_U x - t_k x\| + \|x\| \cdot \|t_U y - t_k y\|,
\end{aligned}
\]
and hence by 2) and 3)
\[
(t_U x, t_U y) = \lim_{k\to\infty} (t_k x, t_k y) = \lim_{k\to\infty} \sum_{j=1}^{k} (x, e_j) \overline{(y, e_j)} = \sum_{j=1}^{\infty} (x, e_j) \overline{(y, e_j)}.
\]




5) $\sum_{j=1}^{\infty} |(x, e_j)|^2 \le \|x\|^2$, $x \in X$. (Bessel inequality)

   Proof. From 4), $\sum_{j=1}^{\infty} |(x, e_j)|^2 = \|t_U x\|^2 \le \|x\|^2$.

6) $\sum_{j=1}^{\infty} |(x, e_j)|^2 = \|x\|^2$ for all $x \in X$ if and only if $U = X$.

   Proof. If $U = X$, then $t_U = 1$ and hence from 4)
\[
\sum_{j=1}^{\infty} |(x, e_j)|^2 = \|t_U x\|^2 = \|x\|^2
\]
for all $x \in X$. On the other hand, if $U \ne X$, there is $x \in X$ such that $x \ne t_U x$,
hence
\[
\|x\|^2 = \|t_U x\|^2 + \|(1 - t_U)x\|^2 > \|t_U x\|^2 = \sum_{j=1}^{\infty} |(x, e_j)|^2.
\]
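
   A finite-dimensional sketch of 1), 5) and 6) (assuming Python with NumPy; the
orthonormal system is taken to be the columns of a random orthogonal matrix, so
that using only the first k of them leaves $U \ne X$, while using all of them gives
the Parseval relation):

    import numpy as np

    rng = np.random.default_rng(7)
    n, k = 6, 3
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    e = [Q[:, j] for j in range(n)]           # an orthonormal basis of R^n

    x = rng.standard_normal(n)

    # 1) t_k x = sum_{j<=k} (x, e_j) e_j is the projection onto E_k
    tk_x = sum((x @ e[j]) * e[j] for j in range(k))
    assert np.isclose(sum((x @ e[j])**2 for j in range(k)), tk_x @ tk_x)

    # 5) Bessel inequality: ||t_k x||^2 <= ||x||^2
    assert tk_x @ tk_x <= x @ x + 1e-12

    # 6) Parseval relation when the system spans the whole space (U = X)
    assert np.isclose(sum((x @ e[j])**2 for j in range(n)), x @ x)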




     An orthonormal system {en } in X is called complete if U = X.

Exercise 4.7.1. Show that an orthonormal system {en } is complete if and only
if from the fact that (en , x) = 0 for all n it follows that x = 0.

     A Hilbert space is called separable if it contains a countable dense subset.

Theorem 4.7.1. A separable Hilbert space X is isometrically isomorphic either to
$\mathbb{C}^n$ for some n or to $l^2$.

    Proof. Let $\{z_k\}_{k=1}^{\infty}$ be a sequence of elements which is dense in X. By a
well-known elementary selection procedure in linear algebra, one can extract
from $\{z_k\}$ a linearly independent subsequence $\{x_k\}$ such that $\langle \{x_k\} \rangle = \langle \{z_k\} \rangle$.
If $\{x_k\}$ is finite, then the proof that X is isometrically isomorphic to $\mathbb{C}^n$, where n is the
cardinality of $\{x_k\}$, is an easy imitation of that of the case when $\{x_k\}$ is infinite.
Hence we assume that $\{x_k\}$ is infinite. From the Gram-Schmidt orthonormalization
procedure we construct from $\{x_k\}$ an orthonormal system $\{e_k\}_{k=1}^{\infty}$ such that
$\langle \{x_k\} \rangle = \langle \{e_k\} \rangle$. As before, let U be the closure of $\langle \{e_k\} \rangle$; then $U = X$, since
$\langle \{e_k\} \rangle = \langle \{z_k\} \rangle$ contains the dense set $\{z_k\}$. Thus for x, y in X we have from 4)
\[
(x, y) = \sum_{k=1}^{\infty} (x, e_k) \overline{(y, e_k)}, \tag{4.5}
\]
because $t_U = I$. Define a map $\tau : X \to l^2$ by letting, for $x \in X$, $\tau x = (\alpha_k)_{k=1}^{\infty}$,
where $\alpha_k = (x, e_k)$. Since $\|x\|^2 = \sum_{k=1}^{\infty} |(x, e_k)|^2$ by 6), $\tau x$ is in $l^2$ and
$\|x\| = \|\tau x\|_{l^2}$, so τ is an isometry. That τ is linear is obvious. We show now that τ is
onto $l^2$. Let $(\alpha_k)_{k=1}^{\infty} \in l^2$. For each positive integer n, let
\[
x_n = \sum_{k=1}^{n} \alpha_k e_k.
\]
We claim that $\{x_n\}$ is a Cauchy sequence in X. For $n > m$ we have
\[
\|x_n - x_m\|^2 = \sum_{k=m+1}^{n} |\alpha_k|^2,
\]
which tends to 0 as $m \to \infty$. Hence $\{x_n\}$ is a Cauchy sequence. Let
$x = \lim_{n\to\infty} x_n$. Then
\[
(x, e_k) = \lim_{n\to\infty} \Big( \sum_{j=1}^{n} \alpha_j e_j, e_k \Big) = \alpha_k
\]
and hence $\tau x = (\alpha_k)_{k=1}^{\infty}$. Therefore τ is onto. That τ is an isomorphism follows
from (4.5).
    The equality $\sum_{j=1}^{\infty} |(x, e_j)|^2 = \|x\|^2$ for $x \in X$ and (4.5) are called Parseval
relations.

				