# Compact operators

filename: compact.tex
March 22, 2010

Additional references

◦ B. Simon, Trace ideals and their applications

Preliminaries

We assume that all operators act on a separable (infinite dimensional) Hilbert space $\mathcal H$. An operator $A$ is called invertible if there is a bounded operator $A^{-1}$ such that $AA^{-1} = A^{-1}A = I$.

The polar decomposition of a bounded operator:

Lemma 1.1 [RS VI.10] Every operator $A$ can be written as a product $A = U|A|$ where $|A| = (A^*A)^{1/2}$ and $U$ is a partial isometry with $\operatorname{Ker} U = \operatorname{Ker} A$ and $\operatorname{Ran} U = \overline{\operatorname{Ran} A}$.

Operator valued analytic functions: A bounded operator valued function $F(z)$ is called analytic if the complex derivative exists, i.e., for every $z$ there is an operator $F'(z)$ with
$$\lim_{w\to 0} \left\| w^{-1}\big(F(z+w) - F(z)\big) - F'(z) \right\| = 0$$
Here $\|\cdot\|$ denotes the operator norm.

Problem 1.1: Suppose that $F(z)$ is a continuous family of bounded operators. Show that $F(z)$ is analytic if $\langle \phi, F(z)\psi\rangle$ is an analytic function for every choice of $\phi, \psi$.

Definitions and basic properties

A bounded operator $F$ has finite rank if its range is a finite dimensional subspace of $\mathcal H$. An operator of finite rank is essentially an $n \times n$ matrix.

Problem 1.2: Show that every finite rank operator can be written
$$F = \sum_{i=1}^{n} \psi_i \langle \phi_i, \cdot\,\rangle$$
Is the adjoint $F^*$ also finite rank?

A bounded operator K is compact if it is the norm limit of ﬁnite rank operators. (An alternative
deﬁnition is that K is compact if it maps the unit ball in H to a set with compact closure. For a Hilbert
space, these two deﬁnitions are equivalent, but not in a Banach space, where the theory of compact
operators is more difﬁcult.)
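The definition by norm limits can be illustrated numerically. Below is a minimal sketch (NumPy; the diagonal matrix with entries $1/n$ is a finite-dimensional stand-in for the compact operator $Ke_n = e_n/n$, and the truncations play the role of the finite rank approximants):

```python
import numpy as np

# Finite-dimensional stand-in for the compact operator K e_n = e_n / n:
# a diagonal matrix with decaying entries 1, 1/2, ..., 1/M.
M = 200
K = np.diag(1.0 / np.arange(1, M + 1))

def truncate(N):
    """Rank-N approximant F_N: keep only the first N diagonal entries."""
    F = np.zeros_like(K)
    F[:N, :N] = K[:N, :N]
    return F

# The operator (spectral) norm of the error K - F_N is the largest
# discarded entry, 1/(N+1), which tends to zero as N grows.
errors = [np.linalg.norm(K - truncate(N), ord=2) for N in (10, 50, 100)]
print(errors)  # 1/11, 1/51, 1/101
```

The decay rate of the truncation error is exactly the decay rate of the discarded singular values, which previews the role singular values play below.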
The compact operators form an ideal.

Theorem 1.2 If $K$ is compact and $A$ is bounded then $K^*$, $AK$ and $KA$ are compact.

Theorem 1.3 A compact operator maps weakly convergent sequences into norm convergent sequences.

Proof: Let $K$ be a compact operator and suppose $f_n \rightharpoonup f$ is a weakly convergent sequence. Then $g_n = f_n - f$ converges weakly to zero. Every weakly convergent sequence is bounded (by the uniform boundedness principle), so $\sup_n \|g_n\| < C$. Given $\epsilon > 0$ find a finite rank operator $F$ with $\|K - F\| < \epsilon/C$. Then
$$\|Kf_n - Kf\| = \|Kg_n\| = \|(K - F + F)g_n\| \le \|K - F\|\,\|g_n\| + \|Fg_n\| \le \epsilon + \|Fg_n\|$$
But $Fg_n = \sum_{i=1}^{N} \langle \phi_i, g_n\rangle \psi_i$. This tends to zero in norm since each $\langle \phi_i, g_n\rangle \to 0$ by weak convergence, and the sum is finite. Thus
$$\lim_{n\to\infty} \|Kf_n - Kf\| \le \epsilon$$
for every $\epsilon$.

Example: This theorem can be used together with a Mourre estimate and the Virial theorem to show that eigenvalues of a Schrödinger operator $H$ cannot accumulate in an interval $I$. A Mourre estimate is an inequality of the form
$$E_I [H, A] E_I \ge \alpha E_I^2 + K$$
where $\alpha > 0$ and $E_I$ is a spectral projection for $H$ corresponding to the interval $I$. If $\psi$ is an eigenfunction of $H$, i.e., $H\psi = \lambda\psi$ with eigenvalue $\lambda$ contained in the interval $I$, then $E_I \psi = \psi$.
The Virial theorem is the statement that $\langle \psi, [H, A]\psi\rangle = 0$. Formally, this is obviously true (by expanding the commutator). However, in applications, $H$ and $A$ are both unbounded operators, and $\psi$ need not lie in the domain of $A$. In this situation the commutator $[H, A]$ is defined using a limiting process, and the Virial theorem may be false (see Georgescu and Gérard []).
Suppose, though, that both the Mourre estimate and the Virial theorem hold. Then there cannot be an infinite sequence of eigenvalues $\lambda_j$ all contained in $I$. For suppose there was such a sequence. Then the corresponding orthonormal eigenvectors $\psi_j$ converge weakly to zero. Moreover $E_I \psi_j = \psi_j$, so by the Virial theorem and the Mourre estimate
$$0 = \langle \psi_j, [H, A]\psi_j\rangle = \langle \psi_j, E_I [H, A] E_I \psi_j\rangle \ge \alpha \|E_I \psi_j\|^2 + \langle \psi_j, K\psi_j\rangle = \alpha + \langle \psi_j, K\psi_j\rangle$$
But $\psi_j$ converge weakly to zero, so $K\psi_j$ tends to zero in norm. Thus $\langle \psi_j, K\psi_j\rangle \to 0$, which gives rise to the contradiction $0 \ge \alpha$.

The Analytic Fredholm Theorem

In many situations one wants to ﬁnd a solution φ to an equation of the form

(I − K)φ = f

If the operator $(I - K)$ is invertible then there is a unique solution given by $\phi = (I - K)^{-1}f$. Otherwise, for a general operator $K$, the analysis of this equation is delicate.

Problem 1.3: Find a bounded operator A such that I − A is not invertible, but A does not have 1
as an eigenvalue (i.e., the kernel of I − A is zero).

There are two situations where this equation is easy to analyze. The first is when $\|K\| < 1$. In this case the inverse $(I - K)^{-1}$ exists and is given by the convergent Neumann expansion
$$(I - K)^{-1} = \sum_{n=0}^{\infty} K^n$$
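The Neumann expansion is easy to check numerically. A minimal sketch (NumPy; the random matrix, rescaled so its operator norm is below 1, stands in for $K$):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
K = 0.4 * A / np.linalg.norm(A, ord=2)  # rescale so that ||K|| = 0.4 < 1

# Partial sum of the Neumann series: sum_{n=0}^{199} K^n.
S = np.zeros((5, 5))
term = np.eye(5)
for _ in range(200):
    S += term
    term = term @ K

# Compare against the directly computed inverse of (I - K).
err = np.linalg.norm(S - np.linalg.inv(np.eye(5) - K), ord=2)
print(err)
```

The truncation error of the partial sum is bounded by $\|K\|^{200}/(1 - \|K\|)$, which is why the agreement is essentially to machine precision.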
The other situation where the equation is easy to understand is when $K$ has finite rank. In this case $(I - K)$ is invertible if and only if $K$ does not have eigenvalue 1. (If $K$ does have 1 as an eigenvalue, then the equation has either no solutions or infinitely many solutions, depending on whether $f$ is in the range of $I - K$.) This situation can be generalized to compact operators $K$.
Notice that in the second situation, if f = 0, then either I − K is invertible, or the equation has a
non-trivial solution (any element in the kernel of (I − K)). This dichotomy is known as the Fredholm
alternative.
In fact it is very fruitful to consider not a single compact operator K but an analytic family of
compact operators K(z) deﬁned on some domain D in the complex plane.
Suppose for a moment that K(z) is a matrix. Let S denote the values of z for which I − K(z) is not
invertible. Then the elements of S are the values of z for which det(I − K(z)) = 0. Since det(I − K(z))
is analytic, S is the set of zeros of an analytic function: either all of D (in the case that det(I − K(z)) is
identically equal to 0) or a discrete set, i.e., a set with no accumulation points in D.

Theorem 1.4 [RS VI.16] Let K(z) be a compact operator valued analytic function of z , deﬁned for z in some
domain D in the complex plane. Then either
(i) I − K(z) is never invertible, or
(ii) I − K(z) is invertible for all z in D\S where S is a discrete set in D. In this case (I − K(z))−1 is
meromorphic in D with ﬁnite rank residues at each point in S . For each point in S , the equation (I −K(z))ψ = 0
has non-trivial solutions.

Proof: The main step in the proof is to show that the conclusions of the theorem hold near every point in $D$. Fix $z_0 \in D$. There is a disk about $z_0$ such that $\|K(z) - K(z_0)\| < 1/2$ for all $z$ in this disk. There is a finite rank operator $F = \sum_{i=1}^{n} \psi_i \langle \phi_i, \cdot\,\rangle$ with $\|K(z_0) - F\| < 1/2$. Let $A(z) = K(z) - F$. Then
$$\|A(z)\| = \|K(z) - K(z_0) + K(z_0) - F\| \le \|K(z) - K(z_0)\| + \|K(z_0) - F\| < 1$$
for $z$ in the disk. So for $z$ in the disk, $I - A(z)$ is invertible and
$$I - K(z) = I - A(z) - F = (I - F(I - A(z))^{-1})(I - A(z))$$
This shows that $I - K(z)$ is invertible if and only if the finite rank operator $(I - F(I - A(z))^{-1})$ is. But $(I - F(I - A(z))^{-1})$ is invertible if and only if $F(I - A(z))^{-1}$ does not have eigenvalue 1. The eigenvalue equation $F(I - A(z))^{-1}\psi = \psi$ can be rewritten as a matrix equation by expanding $\psi = \sum_j \beta_j \psi_j$. This expansion is possible since $\psi$ lies in the range of $F$. The resulting matrix equation is
$$\sum_{j=1}^{n} \langle \phi_i, (I - A(z))^{-1}\psi_j\rangle \beta_j = \beta_i$$

From this we conclude that $F(I - A(z))^{-1}$ has eigenvalue 1 if and only if
$$\det\big(I - [\langle \phi_i, (I - A(z))^{-1}\psi_j\rangle]_{i,j}\big) = 0.$$

It is not hard to verify that (I − A(z))−1 is analytic. Thus, the points of non-invertibility for
I − K(z) in the disk are the zeros of an analytic function. This function is either identically zero, in
which case (i) holds, or not, in which case (ii) holds.
At points of invertibility we have
$$(I - K(z))^{-1} = (I - A(z))^{-1}(I - F(I - A(z))^{-1})^{-1}$$
In the disk about $z_0$, $(I - A(z))^{-1}$ is analytic. The inverse of $(I - F(I - A(z))^{-1})$ can be written in terms of cofactors as an analytic matrix divided by a determinant. This leads to a proof of the second part of the theorem.

The Fredholm alternative and the Riesz-Schauder Theorem

Theorem 1.5 If K is compact, then either I − K is invertible or there is a non-trivial solution to Kψ = ψ .

This theorem, called the Fredholm alternative, asserts that λ = 1 is either in the resolvent set or
an eigenvalue. The same statement holds for any non-zero λ. This follows from the Riesz-Schauder
theorem:

Theorem 1.6 If $K$ is compact, then $\sigma(K)$ is a discrete set except for a possible accumulation point at 0. Every non-zero $\lambda \in \sigma(K)$ is an eigenvalue of finite multiplicity.

Proof: We have $K - \lambda I = -\lambda(I - \lambda^{-1}K)$, so we may use the analytic Fredholm theorem with $z = \lambda^{-1}$. Since $\lambda^{-1}K \to 0$ as $|\lambda| \to \infty$, $(I - \lambda^{-1}K)$ is invertible for large $|\lambda|$, so case (ii) of that theorem applies.

Hilbert-Schmidt Theorem

Theorem 1.7 If K is compact and self-adjoint then there is an orthonormal basis of eigenvectors {ψn } with
Kψn = λn ψn and λn → 0.

Proof: (Sketch) The main point here is that a self-adjoint operator is zero if its spectral radius is zero (see Reed-Simon). Let $\{\psi_n\}$ be the eigenvectors of $K$, chosen to be an orthonormal set. Let $\tilde K$ be the restriction of $K$ to the orthogonal complement of the span of $\{\psi_n\}$. Then $\tilde K$ is compact and self-adjoint. By the Riesz-Schauder theorem, any non-zero point in the spectrum of $\tilde K$ must be an eigenvalue. But this is not possible by the definition of $\tilde K$. Thus $\sigma(\tilde K) = \{0\}$, which implies $\tilde K = 0$.

Canonical form for compact operators

Theorem 1.8 If $K$ is compact, then there exist orthonormal sets $\{\psi_i\}$ and $\{\phi_i\}$ and positive numbers $\mu_i$ so that
$$K = \sum_i \mu_i \langle \psi_i, \cdot\,\rangle \phi_i$$
This is a norm convergent expansion. The positive numbers $\mu_i$ are eigenvalues of $|K|$ and are called the singular values of $K$.

Proof: If $K$ is compact then $K^*K$ is compact, self-adjoint and non-negative. This implies that $K^*K = \sum_i \mu_i^2 \langle \psi_i, \cdot\,\rangle \psi_i$ for some (positive) numbers $\mu_i$. Since $\operatorname{Ker}(K) = \operatorname{span}(\psi_i)^\perp$,
$$K\psi = K\sum_i \langle \psi_i, \psi\rangle \psi_i = \sum_i \mu_i \langle \psi_i, \psi\rangle\, \mu_i^{-1} K\psi_i$$
So it remains to show that $\phi_i = \mu_i^{-1} K\psi_i$ form an orthonormal set. This is easy to see.
This theorem can also be proven using the polar decomposition $K = U|K|$ and the Hilbert-Schmidt theorem for $|K|$. The vectors $\psi_i$ are the eigenvectors of $|K|$ and $\phi_i = U\psi_i$.
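In finite dimensions the canonical form is exactly the singular value decomposition, which can be checked numerically (NumPy sketch; the random matrix stands in for a compact operator):

```python
import numpy as np

rng = np.random.default_rng(1)
K = rng.standard_normal((4, 6))

# SVD: K = U diag(mu) Vh, i.e. K = sum_i mu_i phi_i <psi_i, . > with
# phi_i the columns of U and psi_i the (conjugated) rows of Vh.
U, mu, Vh = np.linalg.svd(K, full_matrices=False)

# The mu_i are the eigenvalues of |K|: the square roots of the
# eigenvalues of K K^* (eigvalsh returns them in ascending order).
eigs = np.sqrt(np.linalg.eigvalsh(K @ K.T))[::-1]

# Rebuild K from the rank-one pieces of the canonical form.
K_rebuilt = sum(mu[i] * np.outer(U[:, i], Vh[i]) for i in range(len(mu)))

sv_err = float(np.max(np.abs(eigs - mu)))
rebuild_err = float(np.max(np.abs(K_rebuilt - K)))
```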

Inequalities on eigenvalues and singular values

We will denote the singular values of a compact operator $K$ by $\mu_n(K)$, $n = 1, 2, \ldots$, arranged in order of decreasing size. The eigenvalues of $K$ will be denoted $\lambda_n(K)$, $n = 1, 2, \ldots$, arranged in order of decreasing absolute value, and counted with algebraic multiplicity. (The algebraic multiplicity of an eigenvalue $\lambda$ of a compact operator $A$ can be defined as $\dim(P_\lambda)$ where $P_\lambda$ is the projection given by the contour integral
$$P_\lambda = -\frac{1}{2\pi i}\oint_{|\lambda - z| = \epsilon} (A - z)^{-1}\,dz$$
for $\epsilon$ sufficiently small. See Simon's trace ideals book.)

Since $\|K\| = \sqrt{\|K^*K\|}$,
$$\mu_1(K) = \|K\|.$$
Also, since every eigenvalue is bounded by the norm,
$$|\lambda_1(K)| \le \mu_1(K)$$
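Both facts are easy to verify for matrices (NumPy sketch; the random matrix stands in for a compact operator):

```python
import numpy as np

rng = np.random.default_rng(2)
K = rng.standard_normal((6, 6))

mu = np.linalg.svd(K, compute_uv=False)      # mu_1 >= mu_2 >= ...
lam = np.linalg.eigvals(K)
lam = lam[np.argsort(-np.abs(lam))]          # |lam_1| >= |lam_2| >= ...

# mu_1(K) = ||K|| (the spectral norm) and |lam_1(K)| <= mu_1(K).
mu1_equals_norm = bool(np.isclose(mu[0], np.linalg.norm(K, ord=2)))
lam1_bounded = bool(abs(lam[0]) <= mu[0] + 1e-12)
```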

Lemma 1.9 $\mu_n(K^*) = \mu_n(K)$

Proof: The singular values of $K^*$ are the positive square roots of the eigenvalues of $KK^*$. We may write $K = \sum_n \mu_n \langle \phi_n, \cdot\,\rangle \psi_n$. Then $K^* = \sum_n \mu_n \langle \psi_n, \cdot\,\rangle \phi_n$, which implies $KK^* = \sum_n \mu_n^2 \langle \psi_n, \cdot\,\rangle \psi_n$. But this formula shows that the eigenvalues of $KK^*$ are $\mu_n^2$ (with eigenvectors $\psi_n$).

Our next proof requires the min–max formula for the eigenvalues of compact self-adjoint operators. Here is a statement of the min–max formula.

Theorem 1.10 If $K$ is a compact positive self-adjoint operator, then
$$\lambda_n(K) = \min_{\phi_1,\ldots,\phi_{n-1}}\;\max_{\substack{\psi\in[\phi_1,\ldots,\phi_{n-1}]^\perp\\ \|\psi\|=1}} \|K\psi\|$$

Corollary 1.11 If $K$ is compact, then
$$\mu_n(K) = \min_{\phi_1,\ldots,\phi_{n-1}}\;\max_{\substack{\psi\in[\phi_1,\ldots,\phi_{n-1}]^\perp\\ \|\psi\|=1}} \|K\psi\|$$

Proof: This follows from $\mu_n(K) = \lambda_n(|K|)$ and $\||K|\psi\| = \|K\psi\|$.

Theorem 1.12 If $K$ is compact and $B$ is bounded then
$$\mu_n(BK) \le \|B\|\,\mu_n(K) \quad\text{and}\quad \mu_n(KB) \le \|B\|\,\mu_n(K)$$
Proof: We have
$$\mu_n(BK) = \min_{\phi_1,\ldots,\phi_{n-1}}\;\max_{\substack{\psi\in[\phi_1,\ldots,\phi_{n-1}]^\perp\\ \|\psi\|=1}} \|BK\psi\| \le \|B\| \min_{\phi_1,\ldots,\phi_{n-1}}\;\max_{\substack{\psi\in[\phi_1,\ldots,\phi_{n-1}]^\perp\\ \|\psi\|=1}} \|K\psi\| = \|B\|\,\mu_n(K)$$
The other inequality follows from $\mu_n(KB) = \mu_n(B^*K^*) \le \|B^*\|\,\mu_n(K^*) = \|B\|\,\mu_n(K)$.
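A quick numerical check of Theorem 1.12 for matrices (NumPy sketch; random matrices stand in for $K$ compact and $B$ bounded):

```python
import numpy as np

rng = np.random.default_rng(3)
K = rng.standard_normal((6, 6))
B = rng.standard_normal((6, 6))

muK = np.linalg.svd(K, compute_uv=False)
muBK = np.linalg.svd(B @ K, compute_uv=False)
muKB = np.linalg.svd(K @ B, compute_uv=False)
normB = np.linalg.norm(B, ord=2)

# mu_n(BK) <= ||B|| mu_n(K) and mu_n(KB) <= ||B|| mu_n(K) for every n.
ok = bool(np.all(muBK <= normB * muK + 1e-10)
          and np.all(muKB <= normB * muK + 1e-10))
```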

Theorem 1.13 If $A$ and $B$ are compact then
$$\mu_{n+m+1}(A + B) \le \mu_{n+1}(A) + \mu_{m+1}(B)$$

Proof:
$$\max_{\substack{\psi\in[\phi_1,\ldots,\phi_{n+m}]^\perp\\ \|\psi\|=1}} \|(A + B)\psi\| \le \max_{\substack{\psi\in[\phi_1,\ldots,\phi_{n+m}]^\perp\\ \|\psi\|=1}} \|A\psi\| + \max_{\substack{\psi\in[\phi_1,\ldots,\phi_{n+m}]^\perp\\ \|\psi\|=1}} \|B\psi\| \le \max_{\substack{\psi\in[\phi_1,\ldots,\phi_n]^\perp\\ \|\psi\|=1}} \|A\psi\| + \max_{\substack{\psi\in[\phi_{n+1},\ldots,\phi_{n+m}]^\perp\\ \|\psi\|=1}} \|B\psi\|$$
Minimizing the left side over $\phi_1, \ldots, \phi_{n+m}$ gives $\mu_{n+m+1}(A + B)$. The first term on the right only involves $\phi_1, \ldots, \phi_n$ and the second term only $\phi_{n+1}, \ldots, \phi_{n+m}$. Thus, minimizing the right side over $\phi_1, \ldots, \phi_{n+m}$ gives
$$\min_{\phi_1,\ldots,\phi_{n+m}} \left( \max_{\substack{\psi\in[\phi_1,\ldots,\phi_n]^\perp\\ \|\psi\|=1}} \|A\psi\| + \max_{\substack{\psi\in[\phi_{n+1},\ldots,\phi_{n+m}]^\perp\\ \|\psi\|=1}} \|B\psi\| \right) = \min_{\phi_1,\ldots,\phi_n}\max_{\substack{\psi\in[\phi_1,\ldots,\phi_n]^\perp\\ \|\psi\|=1}} \|A\psi\| + \min_{\phi_{n+1},\ldots,\phi_{n+m}}\max_{\substack{\psi\in[\phi_{n+1},\ldots,\phi_{n+m}]^\perp\\ \|\psi\|=1}} \|B\psi\| = \mu_{n+1}(A) + \mu_{m+1}(B)$$

Fan (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1063464/) proves a similar inequality for the singular values of $AB$.
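Theorem 1.13 can be spot-checked numerically for matrices (NumPy sketch; indices shifted to 0-based so that $\mu_{n+m+1}$ becomes `muS[n + m]`):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((8, 8))
B = rng.standard_normal((8, 8))

muA = np.linalg.svd(A, compute_uv=False)
muB = np.linalg.svd(B, compute_uv=False)
muS = np.linalg.svd(A + B, compute_uv=False)

# With 0-based indexing mu_{n+m+1}(A+B) is muS[n + m], mu_{n+1}(A) is
# muA[n], and mu_{m+1}(B) is muB[m].
ok = all(muS[n + m] <= muA[n] + muB[m] + 1e-10
         for n in range(8) for m in range(8) if n + m < 8)
```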

Theorem 1.14 If $A$ and $B$ are compact then
$$\mu_{n+m+1}(AB) \le \mu_{n+1}(A)\,\mu_{m+1}(B)$$

Here are two inequalities involving products of singular values and eigenvalues.

Theorem 1.15 If $A$ and $B$ are compact then
$$\prod_{n=1}^{k} \mu_n(AB) \le \prod_{n=1}^{k} \mu_n(A)\,\mu_n(B)$$

Theorem 1.16 If $A$ is compact then
$$\prod_{n=1}^{k} |\lambda_n(A)| \le \prod_{n=1}^{k} \mu_n(A)$$

The proof of these two inequalities uses the exterior tensor powers $\Lambda^k(\mathcal H)$ of the Hilbert space $\mathcal H$. Briefly, every operator $A$ on $\mathcal H$ gives rise to an operator $\Lambda^k(A)$ on $\Lambda^k(\mathcal H)$ satisfying $\Lambda^k(AB) = \Lambda^k(A)\Lambda^k(B)$ and $\Lambda^k(A^*) = \Lambda^k(A)^*$. If $A$ is compact and self-adjoint then the eigenvalues of $\Lambda^k(A)$ are products of $k$ distinct eigenvalues of $A$, counted with algebraic multiplicity. In particular
$$\lambda_1\big(\Lambda^k(A)\big) = \prod_{n=1}^{k} \lambda_n(A)$$

Since
$$\mu_1(\Lambda^k(A))^2 = \lambda_1(\Lambda^k(A)^*\Lambda^k(A)) = \lambda_1(\Lambda^k(A^*)\Lambda^k(A)) = \lambda_1(\Lambda^k(A^*A)) = \prod_{n=1}^{k} \lambda_n(A^*A) = \prod_{n=1}^{k} \mu_n(A)^2$$
we also have
$$\mu_1\big(\Lambda^k(A)\big) = \prod_{n=1}^{k} \mu_n(A)$$

The first theorem just says $\|\Lambda^k(AB)\| = \|\Lambda^k(A)\Lambda^k(B)\| \le \|\Lambda^k(A)\|\,\|\Lambda^k(B)\|$. The second theorem is a rephrasing of $|\lambda_1(\Lambda^k(A))| \le \mu_1(\Lambda^k(A))$.
One might hope that |λn (A)| ≤ µn (A) for every n. While this need not be true, there is Weyl’s
inequality

Theorem 1.17 If $K$ is compact and $1 \le p < \infty$ then
$$\sum_{n=1}^{k} |\lambda_n(K)|^p \le \sum_{n=1}^{k} \mu_n(K)^p$$
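Weyl's inequality can be spot-checked for a matrix (NumPy sketch; eigenvalues sorted by decreasing absolute value, several exponents $p$):

```python
import numpy as np

rng = np.random.default_rng(5)
K = rng.standard_normal((7, 7))

mu = np.linalg.svd(K, compute_uv=False)
abs_lam = np.sort(np.abs(np.linalg.eigvals(K)))[::-1]

# Weyl: sum_{n=1}^k |lam_n|^p <= sum_{n=1}^k mu_n^p for every k and p >= 1.
ok = all(np.sum(abs_lam[:k] ** p) <= np.sum(mu[:k] ** p) + 1e-8
         for k in range(1, 8) for p in (1.0, 2.0, 3.5))
```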

A consequence of Theorem 1.15 is the following theorem, which leads to the non-commutative Hölder inequality below.

Theorem 1.18 If $A$ and $B$ are compact then
$$\sum_{n=1}^{k} \mu_n(AB) \le \sum_{n=1}^{k} \mu_n(A)\,\mu_n(B)$$

The trace ideals Ip

A compact operator $K$ is in $I_p$ if $\{\mu_n(K)\} \in \ell^p$. A common notation is
$$\|K\|_p = \big\|\{\mu_n(K)\}\big\|_{\ell^p}$$
Operators in $I_1$ are called trace class and operators in $I_2$ are called Hilbert-Schmidt. There are other trace ideals that are useful occasionally. For example, the spaces $I_{p,w}$ are based on the weak $\ell^p$ spaces.

Problem 1.4: Use the inequalities in the previous section to prove:
(i) Each $I_p$ is a subspace whose closure in operator norm is the space of compact operators.

(ii) Each $I_p$ is an ideal, i.e., if $K \in I_p$ and $B$ is bounded then $BK, KB \in I_p$.

Problem 1.5: If A ∈ Ip and B ∈ Iq , for which r is Ir guaranteed to contain AB ?

In fact, a non-commutative version of the Hölder inequality is true: for $1/p = 1/q + 1/r$, $1 \le p, q, r \le \infty$,
$$\|AB\|_p \le \|A\|_q\,\|B\|_r.$$

For a proof, see p. 31 of Simon’s trace ideals book.

Hilbert-Schmidt operators

Suppose a compact operator $K$ is given explicitly as an infinite matrix or an integral operator. When $p \ne 2$, it may be difficult to decide whether $K \in I_p$. However, $p = 2$ is special.

Theorem 1.19 Let $\{f_i\}$ be an orthonormal basis for $\mathcal H$ and let $k_{i,j} = \langle f_i, Kf_j\rangle$ be the matrix elements of $K$. Then $\sum_{i,j} |k_{i,j}|^2 < \infty$ iff $K \in I_2$, and
$$\sum_{i,j} |k_{i,j}|^2 = \|K\|_2^2$$

Proof: Suppose $\sum_{i,j} |k_{i,j}|^2 < \infty$. Let $P_n$ denote the projection onto the subspace spanned by the first $n$ basis elements. Then the finite rank operator $P_n K P_n$ converges to $K$ in norm, which shows that $K$ is compact. Since the sum of matrix elements is absolutely convergent we may evaluate it in any order. Thus
$$\sum_{i,j} |k_{i,j}|^2 = \sum_i \sum_j \langle f_i, K^* f_j\rangle \langle f_j, K f_i\rangle = \sum_i \langle f_i, K^* K f_i\rangle.$$
Write $K = \sum_n \mu_n \psi_n \langle \phi_n, \cdot\,\rangle$. Then $K^*K = \sum_n \mu_n^2 \phi_n \langle \phi_n, \cdot\,\rangle$. Therefore
$$\sum_i \langle f_i, K^*K f_i\rangle = \sum_i \sum_n \mu_n^2 |\langle \phi_n, f_i\rangle|^2 = \sum_n \mu_n^2 \sum_i |\langle \phi_n, f_i\rangle|^2 = \sum_n \mu_n^2 \|\phi_n\|^2 = \sum_n \mu_n^2$$
The exchange of sums is permitted, since the summands are positive.
If $K \in I_2$, we may reverse the argument.
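For matrices, Theorem 1.19 is the statement that the squared Frobenius norm equals the sum of the squared singular values, which is easy to confirm (NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(6)
K = rng.standard_normal((5, 5))

# Sum of squared matrix elements in the standard basis (k_ij = <f_i, K f_j>)
sum_sq = np.sum(np.abs(K) ** 2)

# equals the squared Hilbert-Schmidt norm: the sum of squared singular values.
mu = np.linalg.svd(K, compute_uv=False)
hs_sq = np.sum(mu ** 2)

match = bool(np.isclose(sum_sq, hs_sq))
```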

Now we consider the situation where our (separable) Hilbert space is of the form $L^2(X, d\mu)$. An operator $K$ is called an integral operator if there exists a function $K(x, y)$ such that for every $f, g \in L^2(X, d\mu)$
$$\langle f, Kg\rangle = \int_{X\times X} \overline{f(x)}\, K(x, y)\, g(y)\, d\mu(x)\, d\mu(y)$$
Example: An important class of integral operators are the convolution operators on $L^2(\mathbb R^n, dx)$. These are operators with integral kernels of the form $K(x, y) = f(x - y)$ and arise in the following way. Recall that the Fourier transform $\mathcal F$ converts differentiation to multiplication. In other words, for nice functions $\psi(x)$, $(\mathcal F(-i\nabla)\psi)(\xi) = \xi(\mathcal F\psi)(\xi)$, so that
$$(-i\nabla)\psi = \mathcal F^{-1}\xi\,\mathcal F\psi$$
Thus it is natural to define $f(-i\nabla)$ to be the operator sending $\psi$ to $\mathcal F^{-1} f(\xi)\mathcal F\psi$. A calculation shows that this is an integral operator with integral kernel $(2\pi)^{-n}\check f(x - y)$.

Theorem 1.20 Suppose $\mathcal H$ is a separable Hilbert space $L^2(X, d\mu)$. If $K(x, y) \in L^2(X\times X, d\mu\times d\mu)$ then $K(x, y)$ defines an integral operator $K \in I_2$ with
$$\|K\|_2 = \|K(x, y)\|_{L^2(X\times X, d\mu\times d\mu)}. \qquad (1.1)$$
Conversely, every operator $K \in I_2$ has an integral kernel $K(x, y) \in L^2(X\times X, d\mu\times d\mu)$ such that (1.1) holds.

Proof: Let $\{f_i\}$ be an orthonormal basis for $L^2(X, d\mu)$. Then $\{f_i(x)\overline{f_j(y)}\}$ is an orthonormal basis for $L^2(X\times X, d\mu\times d\mu)$. So, if $K(x, y) \in L^2(X\times X, d\mu\times d\mu)$ then
$$K(x, y) = \sum_{i,j} k_{i,j}\, f_i(x)\overline{f_j(y)}$$
with
$$\|K(x, y)\|_{L^2(X\times X, d\mu\times d\mu)}^2 = \sum_{i,j} |k_{i,j}|^2$$
But $k_{i,j} = \langle f_i, Kf_j\rangle$ are the matrix elements of the integral operator $K$ defined by $K(x, y)$. So by the previous theorem, $K \in I_2$ and (1.1) holds.
On the other hand, if $K = \sum_n \mu_n \psi_n \langle \phi_n, \cdot\,\rangle$ is in $I_2$ then $\sum_n \mu_n^2 < \infty$. Since $\{\psi_n(x)\overline{\phi_n(y)}\}$ is an orthonormal set in $L^2(X\times X, d\mu\times d\mu)$, the sum $\sum_n \mu_n \psi_n(x)\overline{\phi_n(y)}$ converges in $L^2(X\times X, d\mu\times d\mu)$ to a function $K(x, y)$. Clearly, $K(x, y)$ is an integral kernel for $K$, so (1.1) holds.

Example: An operator of the form $f(x)g(-i\nabla)$ on $L^2(\mathbb R^n, d^n x)$ has integral kernel given by
$$K(x, y) = (2\pi)^{-n} f(x)\,\check g(x - y).$$
If $f, g \in L^2(\mathbb R^n, d^n x)$, then
$$\int |K(x, y)|^2\, d^n y\, d^n x = (2\pi)^{-2n}\int |f(x)|^2\,|\check g(x - y)|^2\, d^n y\, d^n x = (2\pi)^{-2n}\int |f(x)|^2\, d^n x \int |\check g(z)|^2\, d^n z = (2\pi)^{-2n}\|f\|_{L^2}^2\,\|\check g\|_{L^2}^2$$
Thus $f(x)g(-i\nabla) \in I_2$.

If $f(x)$ and $g(p)$ are continuous functions that tend to zero as $|x| \to \infty$ and $|p| \to \infty$ then $f(x)g(-i\nabla)$ is compact. To see this, first approximate $f$ and $g$ uniformly by compactly supported functions $f_n$ and $g_m$. Then $f_n(x)g_m(-i\nabla) \to f(x)g(-i\nabla)$ in norm, and each $f_n(x)g_m(-i\nabla)$ is Hilbert-Schmidt. This shows that $f(x)g(-i\nabla)$ is compact.

Trace class operators

Theorem 1.21 Suppose that $K \in I_1$. For every orthonormal basis $\{\eta_i\}$, $\sum_i |\langle \eta_i, K\eta_i\rangle| < \infty$ and the trace of $K$, defined by
$$\operatorname{tr}(K) = \sum_i \langle \eta_i, K\eta_i\rangle$$
is basis independent. Moreover $|\operatorname{tr}(K)| \le \|K\|_1$ so that $A \mapsto \operatorname{tr}(A)$ is a bounded linear functional on $I_1$. If $B$ is a bounded operator then $\operatorname{tr}(KB) = \operatorname{tr}(BK)$.

Proof: Let $K = \sum_n \mu_n \psi_n \langle \phi_n, \cdot\,\rangle$. By Cauchy-Schwarz
$$\sum_i |\langle \eta_i, \psi_n\rangle\langle \phi_n, \eta_i\rangle| \le \Big(\sum_i |\langle \eta_i, \psi_n\rangle|^2\Big)^{1/2}\Big(\sum_i |\langle \phi_n, \eta_i\rangle|^2\Big)^{1/2} = \|\psi_n\|\,\|\phi_n\| = 1$$
Thus
$$\sum_i |\langle \eta_i, K\eta_i\rangle| = \sum_i \Big|\sum_n \mu_n \langle \eta_i, \psi_n\rangle\langle \phi_n, \eta_i\rangle\Big| \le \sum_i \sum_n \mu_n |\langle \eta_i, \psi_n\rangle\langle \phi_n, \eta_i\rangle| = \sum_n \mu_n \sum_i |\langle \eta_i, \psi_n\rangle\langle \phi_n, \eta_i\rangle| \le \sum_n \mu_n = \|K\|_1 \qquad (1.2)$$
This implies $|\operatorname{tr}(K)| \le \|K\|_1$. Also, the absolute convergence in the double sum allows changing the order of summation in the following calculation.
$$\operatorname{tr}(K) = \sum_i \langle \eta_i, K\eta_i\rangle = \sum_i \sum_n \mu_n \langle \eta_i, \psi_n\rangle\langle \phi_n, \eta_i\rangle = \sum_n \mu_n \sum_i \langle \eta_i, \psi_n\rangle\langle \phi_n, \eta_i\rangle = \sum_n \mu_n \langle \phi_n, \psi_n\rangle.$$
This shows the basis independence. Finally, we find
$$\operatorname{tr}(BK) = \sum_n \mu_n(K)\langle \phi_n, B\psi_n\rangle = \operatorname{tr}(KB).$$
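Basis independence and the cyclicity $\operatorname{tr}(BK) = \operatorname{tr}(KB)$ can be checked numerically in finite dimensions (NumPy sketch; the orthonormal basis $\{\eta_i\}$ is realized as the columns of a random orthogonal matrix):

```python
import numpy as np

rng = np.random.default_rng(7)
K = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))

# Trace in the standard basis.
tr_std = np.trace(K)

# Trace in a different orthonormal basis {eta_i}: the columns of a
# random orthogonal matrix Q (obtained from a QR decomposition).
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
tr_Q = sum(Q[:, i] @ K @ Q[:, i] for i in range(5))

basis_independent = bool(np.isclose(tr_std, tr_Q))
cyclic = bool(np.isclose(np.trace(B @ K), np.trace(K @ B)))
```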

Notice that the product of two Hilbert-Schmidt operators is trace class. In fact
$$\|K\|_2^2 = \operatorname{tr}(K^*K)$$
and $I_2$ is a Hilbert space with inner product $\langle A, B\rangle = \operatorname{tr}(A^*B)$.
If K on L2 (X, dµ) is given directly by an integral kernel K(x, y) there is no simple necessary and
sufﬁcient condition for K ∈ I1 (see Simon for some results).

Example: Suppose $X$ is a compact smooth Riemannian manifold and $d\mu$ is the Riemannian density. If $K(x, y)$ is smooth then it defines an operator $K \in I_1$. The idea behind the proof is to use an unbounded self-adjoint operator like the Laplace operator $\Delta$ whose singular values (i.e., eigenvalues) are either known explicitly or can be estimated. Then, even though $\Delta$ is unbounded, the product $\Delta^p K$ defines a bounded operator with integral kernel $\Delta_x^p K(x, y)$. Then
$$\mu_n(K) = \mu_n(\Delta^{-p}\Delta^p K) \le \|\Delta^p K\|\,\mu_n(\Delta^{-p})$$
so $K$ is in $I_1$ if $\Delta^{-p}$ is.

It need not be true in general that
$$\operatorname{tr}(K) = \int K(x, x)\, d\mu(x). \qquad (1.3)$$
After all, typically the diagonal has measure zero in $X \times X$, so the right side is meaningless. Nevertheless, (1.3) does hold in many situations.

Example: Suppose X is a compact smooth Riemannian manifold and dµ is the Riemannian density. If
K ∈ I1 and K(x, y) is continuous then (1.3) holds.

For a matrix, the trace is equal to the sum of the eigenvalues. This is true for operators in I1
too, but not easy to prove. The result is called Lidskii’s theorem. The proof uses the determinant
det(I + K), which is deﬁned for K ∈ I1 .
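For a matrix, the statement of Lidskii's theorem can be confirmed directly (NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(8)
K = rng.standard_normal((6, 6))

# For a matrix, tr(K) equals the sum of the eigenvalues counted with
# multiplicity; Lidskii's theorem extends this to trace class operators.
lam = np.linalg.eigvals(K)
ok = bool(np.isclose(np.trace(K), np.sum(lam).real)
          and abs(np.sum(lam).imag) < 1e-10)
```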
