
‖V^p‖^2 ≤ Σ_{1≤k_1<k_2<...<k_p≤n-1} h_{k_1} ... h_{k_p}.   (5.2)

Therefore, due to Lemma 2.4.1,

‖V^k‖^2 ≤ γ_{n,k}^2 (Σ_{j=1}^{n-1} h_j)^k.
20 2. Norms of Matrix Functions

But

Σ_{j=1}^{n-1} h_j ≤ N^2(V).
We thus have derived the required result. □
Theorem 2.5.1 and (1.5) imply
Corollary 2.5.2 For any nilpotent operator V in C^n, the inequalities

‖V^k‖ ≤ N^k(V)/√(k!)   (k = 1, ..., n-1)

are valid.
An independent proof of this corollary can be found in (Gil', 1995, p. 50, Lemma 2.3.1).
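The corollary is easy to test numerically. The following sketch (Python with numpy; the random strictly upper triangular matrix stands for an arbitrary nilpotent operator written in its Schur basis, and the seed and dimension are illustrative choices) checks ‖V^k‖ ≤ N^k(V)/√(k!) for k = 1, ..., n-1 in the spectral norm.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n = 6
V = np.triu(rng.standard_normal((n, n)), k=1)   # strictly upper triangular, hence nilpotent
NV = np.linalg.norm(V, 'fro')                   # N(V), the Hilbert-Schmidt (Frobenius) norm

for k in range(1, n):
    lhs = np.linalg.norm(np.linalg.matrix_power(V, k), 2)   # spectral norm of V^k
    rhs = NV**k / math.sqrt(math.factorial(k))
    assert lhs <= rhs + 1e-12
```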

2.6 Proof of Theorem 2.1.1
Let D and V be the diagonal and nilpotent parts of A, respectively. According to Lemma 1.7.1, R_λ(D)V is a nilpotent operator. So by virtue of Theorem 2.5.1,

‖(R_λ(D)V)^k‖ ≤ N^k(R_λ(D)V) γ_{n,k}   (k = 1, ..., n-1).   (6.1)
Since D is a normal operator, we can write ‖R_λ(D)‖ = ρ^{-1}(D, λ). It is clear that

N(R_λ(D)V) ≤ N(V) ‖R_λ(D)‖ = N(V) ρ^{-1}(D, λ).   (6.2)

According to (3.4),

A - λI = D + V - λI = (D - λI)(I + R_λ(D)V).
We thus have

R_λ(A) = (I + R_λ(D)V)^{-1} R_λ(D) = Σ_{k=0}^{n-1} (-1)^k (R_λ(D)V)^k R_λ(D).   (6.3)

Now (6.1) and (6.2) yield the inequality

‖R_λ(A)‖ ≤ Σ_{k=0}^{n-1} N^k(V) γ_{n,k} ρ^{-k-1}(D, λ).

This relation proves the stated result, since A and D have the same eigenvalues and N(V) = g(A), due to Lemma 2.3.2. □
An additional proof of Corollary 2.1.2: Corollary 2.5.2 implies

‖(R_λ(D)V)^k‖ ≤ N^k(R_λ(D)V)/√(k!)   (k = 1, ..., n-1).

Now the required result follows from (6.2) and (6.3), since N(V) = g(A) due to Lemma 2.3.2. □
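Theorem 2.1.1 can be checked numerically. In the sketch below (Python with numpy; the random matrix and the choice of the regular point λ are illustrative), g(A) and γ_{n,k} are used in their closed forms g(A) = [N^2(A) - Σ_k |λ_k(A)|^2]^{1/2} and γ_{n,k} = [C_{n-1}^k (n-1)^{-k}]^{1/2} (Section 2.1; cf. Corollary 3.2.3 below), and ρ(A, λ) is the distance from λ to the spectrum.

```python
import math
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
eigs = np.linalg.eigvals(A)
# g(A) = [N^2(A) - sum |lambda_k|^2]^{1/2}; max(..., 0) guards against roundoff
g = math.sqrt(max(np.linalg.norm(A, 'fro')**2 - np.sum(np.abs(eigs)**2), 0.0))
gamma = [math.sqrt(math.comb(n - 1, k) / (n - 1)**k) for k in range(n)]

lam = np.max(np.abs(eigs)) + 1.0 + 3.0j          # a regular point of A
rho = np.min(np.abs(lam - eigs))                 # rho(A, lambda)
lhs = np.linalg.norm(np.linalg.inv(A - lam * np.eye(n)), 2)
rhs = sum(g**k * gamma[k] / rho**(k + 1) for k in range(n))
assert lhs <= rhs + 1e-10
```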

2.7 Estimates for the Norm
of Analytic Matrix-Valued Functions
Recall that g(A) and γ_{n,k} are defined in Section 2.1, and B(C^n) is the set of all linear operators in C^n.

Theorem 2.7.1 Let A ∈ B(C^n) and let f be a function regular on a neighborhood of the closed convex hull co(A) of the eigenvalues of A. Then

‖f(A)‖ ≤ Σ_{k=0}^{n-1} sup_{λ∈co(A)} |f^{(k)}(λ)| g^k(A) γ_{n,k}/k!.   (7.1)

The proof of this theorem is divided into a series of lemmas, which are
presented in the next section.
Theorem 2.7.1 is exact: if A is a normal matrix and

sup_{λ∈co(A)} |f(λ)| = sup_{λ∈σ(A)} |f(λ)|,

then we have the equality ‖f(A)‖ = sup_{λ∈σ(A)} |f(λ)|. Theorem 2.7.1 and inequalities (1.5) yield
Corollary 2.7.2 Let A ∈ B(C^n) and let f be a function regular on a neighborhood of the closed convex hull co(A) of the eigenvalues of A. Then

‖f(A)‖ ≤ Σ_{k=0}^{n-1} sup_{λ∈co(A)} |f^{(k)}(λ)| g^k(A)/(k!)^{3/2}.   (7.2)

An additional proof of this corollary is presented in the next section.
Example 2.7.3 For a linear operator A in C^n, Theorem 2.7.1 and Corollary 2.7.2 give us the estimates

‖exp(At)‖ ≤ e^{α(A)t} Σ_{k=0}^{n-1} g^k(A) t^k γ_{n,k}/k! ≤ e^{α(A)t} Σ_{k=0}^{n-1} g^k(A) t^k/(k!)^{3/2}   (t ≥ 0),

where α(A) = max_{k=1,...,n} Re λ_k(A). In addition,

‖A^m‖ ≤ Σ_{k=0}^{n-1} m! g^k(A) r_s^{m-k}(A) γ_{n,k}/((m-k)! k!) ≤ Σ_{k=0}^{n-1} m! g^k(A) r_s^{m-k}(A)/((m-k)! (k!)^{3/2})   (m = 1, 2, ...),

where r_s(A) is the spectral radius. Recall that 1/(m-k)! = 0 if m < k.
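The exponential estimates of Example 2.7.3 can be checked numerically. The sketch below (Python with numpy; seed, dimension and t are illustrative, and the Taylor-series exponential is only a convenience, adequate for the modest norms involved) verifies both bounds, again using the closed forms of g(A) and γ_{n,k}.

```python
import math
import numpy as np

def expm_taylor(M, terms=80):
    # plain Taylor series for exp(M); adequate for the modest norms used here
    E = np.eye(M.shape[0])
    T = np.eye(M.shape[0])
    for j in range(1, terms):
        T = T @ M / j
        E = E + T
    return E

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n))
eigs = np.linalg.eigvals(A)
g = math.sqrt(max(np.linalg.norm(A, 'fro')**2 - np.sum(np.abs(eigs)**2), 0.0))
alpha = np.max(eigs.real)                         # alpha(A) = max Re lambda_k(A)
gamma = [math.sqrt(math.comb(n - 1, k) / (n - 1)**k) for k in range(n)]

t = 0.5
lhs = np.linalg.norm(expm_taylor(A * t), 2)
mid = math.exp(alpha * t) * sum(g**k * t**k * gamma[k] / math.factorial(k) for k in range(n))
rhs = math.exp(alpha * t) * sum(g**k * t**k / math.factorial(k)**1.5 for k in range(n))
assert lhs <= mid + 1e-9 and mid <= rhs + 1e-9
```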

2.8 Proof of Theorem 2.7.1
Lemma 2.8.1 Let {d_k} be an orthonormal basis in C^n, let A_1, ..., A_j be n × n-matrices and φ(k_1, ..., k_{j+1}) a scalar-valued function of the arguments

k_1, ..., k_{j+1} = 1, 2, ..., n;  j < n.

Define projectors Q(k) by Q(k)h = (h, d_k)d_k (h ∈ C^n, k = 1, ..., n), and set

T = Σ_{1≤k_1,...,k_{j+1}≤n} φ(k_1, ..., k_{j+1}) Q(k_1) A_1 Q(k_2) ... A_j Q(k_{j+1}).

Then ‖T‖ ≤ a(φ) ‖ |A_1| |A_2| ... |A_j| ‖, where

a(φ) = max_{1≤k_1,...,k_{j+1}≤n} |φ(k_1, ..., k_{j+1})|

and |A_k| (k = 1, ..., j) are the matrices whose entries in {d_k} are the absolute values of the entries of A_k in {d_k}.

Proof: For any entry T_{sm} = (T d_s, d_m) (s, m = 1, ..., n) of the operator T we have

T_{sm} = Σ_{1≤k_2,...,k_j≤n} φ(s, k_2, ..., k_j, m) a^{(1)}_{s k_2} ... a^{(j)}_{k_j m},

where a^{(i)}_{jk} = (A_i d_k, d_j) are the entries of A_i. Hence,

|T_{sm}| ≤ a(φ) Σ_{1≤k_2,...,k_j≤n} |a^{(1)}_{s k_2} ... a^{(j)}_{k_j m}|.

This relation and the equality

‖Tx‖^2 = Σ_{j=1}^{n} |(Tx)_j|^2   (x ∈ C^n),

where (·)_j means the j-th coordinate, imply the required result. □
Furthermore, let |V | be the operator whose matrix elements in the or-
thonormed basis of the triangular representation (the Schur basis) are the
absolute values of the matrix elements of the nilpotent part V of A with
respect to this basis. That is,
n kвҲ’1
|V | = |ajk |(., ek )ej ,
k=1 j=1

where {ek } is the Schur basis and ajk = (Aek , ej ).

Lemma 2.8.2 Under the hypothesis of Theorem 2.7.1, the estimate

‖f(A)‖ ≤ Σ_{k=0}^{n-1} sup_{λ∈co(A)} |f^{(k)}(λ)| ‖ |V|^k ‖/k!

is true, where V is the nilpotent part of A.

Proof: It is not hard to see that the representation (3.4) implies the equality

R_λ(A) ≡ (A - Iλ)^{-1} = (D + V - λI)^{-1} = (I + R_λ(D)V)^{-1} R_λ(D)

for all regular λ. According to Lemma 1.7.1, R_λ(D)V is a nilpotent operator, because V and R_λ(D) have common invariant subspaces. Hence,

(R_λ(D)V)^n = 0.

Therefore,

R_λ(A) = Σ_{k=0}^{n-1} (-1)^k (R_λ(D)V)^k R_λ(D).   (8.1)

Due to the representation for functions of matrices,

f(A) = -(1/2πi) ∫_Γ f(λ) R_λ(A) dλ = Σ_{k=0}^{n-1} C_k,   (8.2)

where

C_k = ((-1)^{k+1}/2πi) ∫_Γ f(λ) (R_λ(D)V)^k R_λ(D) dλ.

Here Γ is a closed contour surrounding σ(A). Since D is a diagonal matrix with respect to the Schur basis {e_k} and its diagonal entries are the eigenvalues of A, we have

R_λ(D) = Σ_{j=1}^{n} Q_j/(λ_j(A) - λ),

where Q_k = (·, e_k)e_k. We have

C_k = Σ_{j_1=1}^{n} Σ_{j_2=1}^{n} ... Σ_{j_{k+1}=1}^{n} Q_{j_1} V Q_{j_2} V ... V Q_{j_{k+1}} I_{j_1 j_2 ... j_{k+1}}.

Here

I_{j_1 ... j_{k+1}} = ((-1)^{k+1}/2πi) ∫_Γ f(λ) dλ/((λ_{j_1} - λ) ... (λ_{j_{k+1}} - λ)).

Lemma 2.8.1 gives us the estimate

‖C_k‖ ≤ max_{1≤j_1≤...≤j_{k+1}≤n} |I_{j_1 ... j_{k+1}}| ‖ |V|^k ‖.

Due to Lemma 1.5.1,

|I_{j_1 ... j_{k+1}}| ≤ sup_{λ∈co(A)} |f^{(k)}(λ)|/k!.

This inequality and (8.2) imply the result. □
Proof of Theorem 2.7.1: Theorem 2.5.1 implies

‖ |V|^k ‖ ≤ γ_{n,k} N^k(|V|)   (k = 1, ..., n-1).

But N(|V|) = N(V). Moreover, thanks to Lemma 2.3.2, N(V) = g(A). Thus

‖ |V|^k ‖ ≤ γ_{n,k} g^k(A)   (k = 1, ..., n-1).

Now the previous lemma yields the required result. □
An additional proof of Corollary 2.7.2: Corollary 2.5.2 implies

‖ |V|^k ‖ ≤ N^k(V)/√(k!)   (k = 1, ..., n-1).

Now the required result follows from Lemma 2.8.2, since N(V) = g(A) due to Lemma 2.3.2. □

2.9 The First Multiplicative Representation of
the Resolvent
Recall that B(C^n) is the set of linear operators in C^n and I is the unit operator. Let P_k (k = 1, ..., n) be the maximal chain of the invariant projectors of an A ∈ B(C^n). That is, P_k are orthogonal projectors,

A P_k = P_k A P_k   (k = 1, ..., n)

and

0 = P_0 C^n ⊂ P_1 C^n ⊂ ... ⊂ P_n C^n = C^n.

So dim ΔP_k = 1. Here

ΔP_k = P_k - P_{k-1}   (k = 1, ..., n).

We use the triangular representation

A = D + V   (9.1)

(see Section 1.7). Here V is the nilpotent part of A and

D = Σ_{k=1}^{n} λ_k(A) ΔP_k

is the diagonal part. For X_1, X_2, ..., X_n ∈ B(C^n) denote

∏^→_{1≤k≤n} X_k ≡ X_1 X_2 ... X_n.

That is, the arrow over the symbol of the product means that the indexes of the co-factors increase from left to right.
Theorem 2.9.1 For any A ∈ B(C^n),

λ R_λ(A) = - ∏^→_{1≤k≤n} (I + A ΔP_k/(λ - λ_k(A)))   (λ ∉ σ(A)),

where P_k, k = 1, ..., n, is the maximal chain of the invariant projectors of A.
Proof: Denote E_k = I - P_k. Since

A = (E_k + P_k) A (E_k + P_k) for any k = 1, ..., n

and E_1 A P_1 = 0, we get the relation

A = P_1 A E_1 + P_1 A P_1 + E_1 A E_1.

Take into account that ΔP_1 = P_1 and

P_1 A P_1 = λ_1(A) ΔP_1.

Then

A = λ_1(A) ΔP_1 + P_1 A E_1 + E_1 A E_1 = λ_1(A) ΔP_1 + A E_1.   (9.2)
Now we check the equality

R_λ(A) = Ψ(λ),   (9.3)

where

Ψ(λ) ≡ ΔP_1/(λ_1(A) - λ) - (ΔP_1/(λ_1(A) - λ)) A E_1 R_λ(A) E_1 + E_1 R_λ(A) E_1.

In fact, multiplying this equality from the left by A - Iλ and taking into account equality (9.2), we obtain the relation

(A - Iλ) Ψ(λ) = ΔP_1 - ΔP_1 A E_1 R_λ(A) E_1 + (A - Iλ) E_1 R_λ(A) E_1.

But E_1 A E_1 = E_1 A and thus E_1 R_λ(A) E_1 = E_1 R_λ(A). I.e., we can write

(A - Iλ) Ψ(λ) = ΔP_1 + (-ΔP_1 A + A - Iλ) E_1 R_λ(A) = ΔP_1 + E_1 (A - Iλ) R_λ(A) = ΔP_1 + E_1 = I.

Similarly, we multiply (9.3) by A - Iλ from the right and take into account (9.2). This gives I. Therefore, (9.3) is correct.
Due to (9.3),

I - A R_λ(A) = (I - (λ_1(A) - λ)^{-1} A ΔP_1)(I - A E_1 R_λ(A) E_1).   (9.4)
Now we apply the above arguments to the operator A E_1. We obtain the following expression, which is similar to (9.4):

I - A E_1 R_λ(A) E_1 = (I - (λ_2(A) - λ)^{-1} A ΔP_2)(I - A E_2 R_λ(A) E_2).

For any k < n, it similarly follows that

I - A E_k R_λ(A) E_k = (I - (λ_{k+1}(A) - λ)^{-1} A ΔP_{k+1})(I - A E_{k+1} R_λ(A) E_{k+1}).
Substituting this in (9.4) for k = 1, 2, ..., n-1, we have

I - A R_λ(A) = ∏^→_{1≤k≤n-1} (I + A ΔP_k/(λ - λ_k(A))) (I - A E_{n-1} R_λ(A) E_{n-1}).   (9.5)

It is clear that E_{n-1} = ΔP_n. I.e.,

I - A E_{n-1} R_λ(A) E_{n-1} = I + A ΔP_n/(λ - λ_n(A)).

Now the identity

I - A R_λ(A) = -λ R_λ(A)

and (9.5) imply the result. □
Let A be a normal matrix. Then

A = Σ_{k=1}^{n} λ_k(A) ΔP_k.

Hence, A ΔP_k = λ_k(A) ΔP_k. Since ΔP_k ΔP_j = 0 for j ≠ k, Theorem 2.9.1 gives us the equality

λ R_λ(A) = - ∏^→_{1≤k≤n} (I + (λ - λ_k(A))^{-1} λ_k(A) ΔP_k).

But

I = Σ_{k=1}^{n} ΔP_k.

The result is

λ R_λ(A) = - Σ_{k=1}^{n} [1 + (λ - λ_k(A))^{-1} λ_k(A)] ΔP_k = -λ Σ_{k=1}^{n} ΔP_k/(λ - λ_k(A)).

Or

R_λ(A) = Σ_{k=1}^{n} ΔP_k/(λ_k(A) - λ).

We have obtained the well-known spectral representation for the resolvent of a normal matrix. Thus, Theorem 2.9.1 generalizes the spectral representation for the resolvent of a normal matrix.
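Theorem 2.9.1 can also be confirmed numerically. In the sketch below (Python with numpy; the example is illustrative), the matrix is taken upper triangular, so that the standard basis is a Schur basis, ΔP_k acts as (·, e_k)e_k, and λ_k(A) is the k-th diagonal entry.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
T = np.triu(rng.standard_normal((n, n)))   # upper triangular: the standard basis is a Schur basis
lam = 2.0 + 1.5j                           # off the (real) diagonal, hence a regular point

prod = np.eye(n, dtype=complex)
for k in range(n):                         # indexes of the co-factors increase from left to right
    Dk = np.zeros((n, n)); Dk[k, k] = 1.0  # Delta P_k in the Schur basis
    prod = prod @ (np.eye(n) + T @ Dk / (lam - T[k, k]))

lhs = lam * np.linalg.inv(T - lam * np.eye(n))   # lambda R_lambda(A)
assert np.allclose(lhs, -prod)
```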

2.10 The Second Multiplicative Representation
of the Resolvent
Lemma 2.10.1 Let V ∈ B(C^n) be a nilpotent operator and P_k, k = 1, ..., n, be the maximal chain of its invariant projectors. Then

(I - V)^{-1} = ∏^→_{2≤k≤n} (I + V ΔP_k).   (10.1)

Proof: In fact, all the eigenvalues of V are equal to zero, and V ΔP_1 = 0. Now Theorem 2.9.1 gives us relation (10.1). □
Relation (10.1) allows us to prove the second multiplicative representation
of the resolvent of A.

Theorem 2.10.2 Let D and V be the diagonal and nilpotent parts of A ∈ B(C^n), respectively. Then

R_λ(A) = R_λ(D) ∏^→_{2≤k≤n} [I + V ΔP_k/(λ - λ_k(A))]   (λ ∉ σ(A)),   (10.2)

where P_k, k = 1, ..., n, is the maximal chain of invariant projectors of A.

Proof: Due to (9.1),

R_λ(A) = (A - λI)^{-1} = (D + V - λI)^{-1} = R_λ(D)(I + V R_λ(D))^{-1}.

But V R_λ(D) is a nilpotent operator. Take into account that

R_λ(D) ΔP_k = (λ_k(A) - λ)^{-1} ΔP_k.

Now (10.1) ensures the relation (10.2). □
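The multiplicative formula (10.1) underlying this proof is exact and easy to test. The following sketch (Python with numpy, illustrative example) checks it for a random nilpotent V written in the standard basis, where P_k is the projection onto the first k coordinates; the k = 1 factor is absent since V ΔP_1 = 0.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
V = np.triu(rng.standard_normal((n, n)), k=1)   # nilpotent; standard basis = Schur basis

prod = np.eye(n)
for k in range(1, n):                           # k = 2, ..., n in the 1-based indexing of the text
    Dk = np.zeros((n, n)); Dk[k, k] = 1.0       # Delta P_k
    prod = prod @ (np.eye(n) + V @ Dk)

assert np.allclose(prod, np.linalg.inv(np.eye(n) - V))
```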

2.11 The First Relation between
Determinants and Resolvents
In this section, a relation between the determinant and resolvent of a matrix
is derived. It improves the Carleman inequality. We recall the Carleman
inequality in Section 2.15.

Theorem 2.11.1 Let A ∈ B(C^n) (n > 1) and let I - A be invertible. Then

‖(I - A)^{-1}‖ |det(I - A)| ≤ [1 + (1/(n-1))(N^2(A) - 2Re Trace(A) + 1)]^{(n-1)/2}.   (11.1)
The proof of this theorem is presented in this section below.
Corollary 2.11.2 Let A ∈ B(C^n). Then

‖(Iλ - A)^{-1}‖ |det(λI - A)| ≤ [(N^2(A) - 2Re(λ̄ Trace(A)) + n|λ|^2)/(n-1)]^{(n-1)/2}   (λ ∉ σ(A)).   (11.2)

In particular, let V be a nilpotent matrix. Then

‖(Iλ - V)^{-1}‖ ≤ (1/|λ|)[1 + (1/(n-1))(1 + N^2(V)/|λ|^2)]^{(n-1)/2}   (λ ≠ 0).   (11.3)

Indeed, inequality (11.2) is due to Theorem 2.11.1 with λ^{-1}A instead of A, the equality |λ|^2 λ^{-1} = λ̄ taken into account. If V is nilpotent, then

|det(λI - V)|^2 = det(Iλ - V) det(Iλ̄ - V*) = |λ|^{2n}.

Moreover, Trace V = 0. So (11.2) implies (11.3).
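Inequality (11.1) can be checked directly. The sketch below (Python with numpy; the scaling 0.3 merely keeps I - A comfortably invertible, and the matrix is real, so Trace(A) is already Re Trace(A)) verifies the determinant-resolvent bound on a random matrix.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
A = 0.3 * rng.standard_normal((n, n))
IA = np.eye(n) - A
lhs = np.linalg.norm(np.linalg.inv(IA), 2) * abs(np.linalg.det(IA))
N2 = np.linalg.norm(A, 'fro')**2                 # N^2(A)
rhs = (1.0 + (N2 - 2.0 * np.trace(A) + 1.0) / (n - 1))**((n - 1) / 2)
assert lhs <= rhs + 1e-10
```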
To prove Theorem 2.11.1, we need the following

Lemma 2.11.3 Let A ∈ B(C^n) be a positive definite Hermitian matrix: A = A* > 0. Then

‖A^{-1}‖ det A ≤ [Trace A/(n-1)]^{n-1}.

Proof: Without loss of generality assume that

λ_n(A) = min_{k=1,...,n} λ_k(A).

Then ‖A^{-1}‖ = λ_n^{-1}(A) and

‖A^{-1}‖ det A = ∏_{k=1}^{n-1} λ_k(A).

Hence, due to the inequality between the arithmetic and geometric mean values we get

‖A^{-1}‖ det A ≤ [(n-1)^{-1} Σ_{k=1}^{n-1} λ_k(A)]^{n-1} ≤ [(n-1)^{-1} Trace A]^{n-1},

since A is positive definite. As claimed. □

Proof of Theorem 2.11.1: Since I - A is invertible, the operator

B ≡ (I - A)(I - A*)

is positive definite and

det B = det(I - A) det(I - A*) = |det(I - A)|^2.

Moreover,

Trace[(I - A)(I - A*)] = Trace I - Trace(A + A*) + Trace(A A*) = n - 2Re Trace A + N^2(A).

But

‖B^{-1}‖ = ‖(I - A*)^{-1}(I - A)^{-1}‖ = ‖(I - A)^{-1}‖^2.

Now Lemma 2.11.3 yields

‖B^{-1}‖ det B = (‖(I - A)^{-1}‖ |det(I - A)|)^2 ≤ [(1/(n-1))(n + N^2(A) - 2Re Trace(A))]^{n-1} = [1 + (1/(n-1))(N^2(A) - 2Re Trace(A) + 1)]^{n-1},

as claimed. □

2.12 The Second Relation between
Determinants and Resolvents
Without any loss of generality assume that, for a regular λ, the relation

|λ - λ_1(A)| = min_{k=1,...,n} |λ - λ_k(A)|   (12.1)

is valid. Recall that g(A) and γ_{n,k} are defined in Section 2.1.
Theorem 2.12.1 Let A ∈ B(C^n) and let (12.1) hold. Then

‖R_λ(A)‖ |det(A - Iλ)| ≤ ∏_{j=2}^{n} max{1, |λ_j(A) - λ|} G(A),   (12.2)

where

G(A) = Σ_{k=0}^{n-1} g^k(A) γ_{n,k}.

The proof of this theorem is given in the next section.
Theorem 2.12.1 is exact. In fact, let, for instance, A be a unitary operator and λ = 0. Since any unitary operator is normal and |λ_k(A)| = 1, we have G(A) = 1 and, due to (12.2),

|det A| ≤ ‖A‖ = 1.

But the determinant of a unitary operator also satisfies |det A| = 1. Thus, we arrive at the equality |det A| = ‖A‖ = 1.
Let r_s(A) be the spectral radius of a matrix A, and let A be nonsingular. Put A_0 = r_s^{-1}(A)A. Due to Theorem 2.12.1,

‖A_0^{-1}‖ |det(A_0)| ≤ G(A_0).

But g(A_0) = r_s^{-1}(A) g(A). Therefore, the following result holds:

Corollary 2.12.2 Let A ∈ B(C^n) be nonsingular. Then

‖A^{-1}‖ |det(A)| ≤ Σ_{k=0}^{n-1} r_s^{n-k-1}(A) g^k(A) γ_{n,k}.

2.13 Proof of Theorem 2.12.1
For brevity, put

λ_j(A) = λ_j   (j = 1, ..., n).

Without any loss of generality assume that

|λ_1| = min_{k=1,...,n} |λ_k|.   (13.1)

Lemma 2.13.1 Let A be a nonsingular operator in C^n and let condition (13.1) hold. Then, with the notations

χ_k = max{1, |λ_k|}

and

ω(A) = ∏_{j=2}^{n} χ_j,

the inequality

|det(A)| ‖A^{-1}‖ ≤ ω(A) G(A)

holds.

Proof: We have

‖A^{-1}‖ = ‖D^{-1}(I + V D^{-1})^{-1}‖ ≤ ‖(I + V D^{-1})^{-1}‖ ‖D^{-1}‖.   (13.2)

Clearly, ‖D^{-1}‖ = |λ_1|^{-1}. To estimate ‖(I + V D^{-1})^{-1}‖, observe that V D^{-1} is a nilpotent matrix and, due to Lemma 2.10.1,

(I + V D^{-1})^{-1} = ∏^→_{2≤k≤n} (I - λ_k^{-1} V ΔP_k),

since

D^{-1} ΔP_k = λ_k^{-1} ΔP_k.

This yields

(I + V D^{-1})^{-1} = ∏_{k=2}^{n} λ_k^{-1} K = λ_1 [det(A)]^{-1} K,   (13.3)

where

K = ∏^→_{2≤k≤n} (Iλ_k - V ΔP_k).

It is not hard to check that

|Iλ_k - V ΔP_k| ≤ (I + |V| ΔP_k) χ_k.

Here |A| means the matrix (|a_ij|) if A = (a_ij). The inequality B ≤ C for non-negative matrices B, C is understood in the natural sense. So we have

|K| ≤ ∏^→_{2≤k≤n} χ_k (I + |V| ΔP_k) = ω(A) ∏^→_{2≤k≤n} (I + |V| ΔP_k) = ω(A)(I - |V|)^{-1}.

Taking into account that N(V) = N(|V|) and N(V) = g(A), by virtue of Theorem 2.5.1 we get the inequality

‖(I - |V|)^{-1}‖ ≤ Σ_{k=0}^{n-1} γ_{n,k} N^k(V) = G(A).

That is,

‖K‖ ≤ ω(A) G(A).

Due to (13.2) and (13.3),

‖A^{-1}‖ ≤ |det(A)|^{-1} ω(A) G(A),

as claimed. □
The assertion of Theorem 2.12.1 follows from the latter lemma with A - λI instead of A. □

2.14 An Additional Estimate for Resolvents

Theorem 2.14.1 Let A ∈ B(C^n), n > 1. Then

‖(Iλ - A)^{-1}‖ ≤ (1/ρ(A, λ))[1 + (1/(n-1))(1 + g^2(A)/ρ^2(A, λ))]^{(n-1)/2}

for any regular λ of A.

Proof: Due to the triangular representation (see Section 1.7),

(Iλ - A)^{-1} = (Iλ - D - V)^{-1} = (Iλ - D)^{-1}(I + B_λ)^{-1},   (14.1)

where B_λ := -V(Iλ - D)^{-1}. But the operator B_λ is a nilpotent one. So Theorem 2.11.1 implies

‖(I + B_λ)^{-1}‖ ≤ [1 + (1 + N^2(B_λ))/(n-1)]^{(n-1)/2}.   (14.2)

Take into account that

N(B_λ) = N(V(Iλ - D)^{-1}) ≤ ‖(Iλ - D)^{-1}‖ N(V) = ρ^{-1}(D, λ) N(V).

Moreover, σ(D) and σ(A) coincide and, due to Lemma 2.3.2, N(V) = g(A). Thus,

N(B_λ) ≤ ρ^{-1}(D, λ) g(A) = ρ^{-1}(A, λ) g(A).

Now (14.1) and (14.2) imply the required result. □
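Theorem 2.14.1 admits the same kind of numerical check (Python with numpy; matrix, seed and the regular point λ are illustrative, and g(A) is again computed from its closed form).

```python
import math
import numpy as np

rng = np.random.default_rng(7)
n = 6
A = rng.standard_normal((n, n))
eigs = np.linalg.eigvals(A)
g = math.sqrt(max(np.linalg.norm(A, 'fro')**2 - np.sum(np.abs(eigs)**2), 0.0))

lam = 1.0 + 2.0j                                  # a regular point of A (a.s. off the spectrum)
rho = np.min(np.abs(lam - eigs))                  # rho(A, lambda)
lhs = np.linalg.norm(np.linalg.inv(lam * np.eye(n) - A), 2)
rhs = (1.0 / rho) * (1.0 + (1.0 + g**2 / rho**2) / (n - 1))**((n - 1) / 2)
assert lhs <= rhs + 1e-10
```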

2.15 Notes
The quantity g(A) was introduced both by P. Henrici (1962) and independently by M.I. Gil' (1979b).
Theorem 2.1.1 was derived in the paper (Gil', 1979a) in a more general situation.
Recall that Carleman has derived the inequality

‖R_λ(A)‖ ∏_{k=1}^{n} |(1 - λ^{-1} λ_k(A)) exp[λ^{-1} λ_k(A)]| ≤ |λ|^{-1} exp[1 + N^2(Aλ^{-1})/2],

cf. (Dunford, N. and Schwartz, 1963, p. 1023).
Theorem 2.3.1 was published in (Gil', 1993a). It improves Schur's inequality

Σ_{k=1}^{n} |λ_k(A)|^2 ≤ N^2(A)

and Brown's inequality

Σ_{k=1}^{n} |Im λ_k(A)|^2 ≤ N^2(A_I)

(see (Marcus and Minc, 1964)). A very interesting inequality for eigenvalues of matrices was derived in (Kress et al., 1974).
I.M. Gel'fand and G.E. Shilov (1958) have established the estimate

‖f(A)‖ ≤ Σ_{k=0}^{n-1} sup_{λ∈co(A)} |f^{(k)}(λ)| (2‖A‖)^k.

About other estimates for the matrix exponent, see (Coppel, 1978).
Theorem 2.7.1 was derived in the paper (Gil', 1979b) in the case of a Hilbert-Schmidt operator. Theorems 2.9.1 and 2.10.2 were published in a more general situation in (Gil', 1973).
Theorems 2.11.1 and 2.12.1 are probably new.

References

Coppel, W.A. (1978). Dichotomies in Stability Theory. Lecture Notes in Mathematics, No. 629, Springer-Verlag, New York.

Dunford, N. and Schwartz, J.T. (1963). Linear Operators, part II. Spectral Theory. Interscience Publishers, New York, London.

Gel'fand, I.M. and Shilov, G.E. (1958). Some Questions of Theory of Differential Equations. Nauka, Moscow (in Russian).

Gil', M.I. (1973). On the representation of the resolvent of a nonselfadjoint operator by the integral with respect to a spectral function, Soviet Math. Dokl., 14, 1214-1217.

Gil', M.I. (1979a). An estimate for norms of resolvent of completely continuous operators, Mathematical Notes, 26, 849-851.

Gil', M.I. (1979b). Estimating norms of functions of a Hilbert-Schmidt operator (in Russian), Izvestiya VUZ, Matematika, 23, 14-19. English translation in Soviet Math., 23, 13-19.

Gil', M.I. (1983). One estimate for resolvents of nonselfadjoint operators which are "near" to selfadjoint and to unitary ones, Mathematical Notes, 33, 81-84.

Gil', M.I. (1992). On an estimate for the norm of a function of a quasihermitian operator, Studia Mathematica, 103 (1), 17-24.

Gil', M.I. (1993a). On inequalities for eigenvalues of matrices, Linear Algebra and its Applications, 184, 201-206.

Gil', M.I. (1993b). Estimates for norm of matrix-valued functions, Linear and Multilinear Algebra, 35, 65-73.

Gil', M.I. (1995). Norm Estimations for Operator-valued Functions and Applications. Marcel Dekker, New York.

Henrici, P. (1962). Bounds for iterates, inverses, spectral variation and field of values of nonnormal matrices. Numerische Mathematik, 4, 24-39.

Kress, R., De Vries, H.L. and Wegmann, R. (1974). On non-normal matrices. Linear Algebra Appl., 8, 109-120.

Marcus, M. and Minc, H. (1964). A Survey of Matrix Theory and Matrix Inequalities. Allyn and Bacon, Boston.
3. Invertibility of Finite
Matrices

The present chapter deals with various types of invertibility conditions for finite matrices. In particular, we improve the classical Levy-Desplanques theorem and other well-known invertibility results for matrices that are close to triangular ones.

3.1 Preliminary Results
For a matrix

A = (a_jk)_{j,k=1}^{n}   (n > 1),

put

V_+ = the strictly upper triangular part of A (the matrix with entries a_jk for j < k and zeros elsewhere),

V_- = the strictly lower triangular part of A (the matrix with entries a_jk for j > k and zeros elsewhere),

and

D = diag(a_11, a_22, ..., a_nn).

So A = D + V_+ + V_-. In this chapter it is assumed that all the diagonal entries are nonzero, so that

d_0 := min_{k=1,...,n} |a_kk| > 0.

In the present section ‖·‖ is an arbitrary norm in C^n. Recall that I is the unit operator. Set

W_± = D^{-1} V_±.

M.I. Gil': LNM 1830, pp. 35-48, 2003.
© Springer-Verlag Berlin Heidelberg 2003

Theorem 3.1.1 With the notation

J(W_±) := ‖(I + W_±)^{-1}‖,

let

ν(A) ≡ max{1/J(W_+) - ‖W_-‖, 1/J(W_-) - ‖W_+‖} > 0.   (1.1)

Then A is invertible and the inverse matrix satisfies the inequality

‖A^{-1}‖ ≤ ‖D^{-1}‖/ν(A).   (1.2)
To prove this theorem we need the following two simple lemmas.
Lemma 3.1.2 Let

ψ_0 ≡ ‖(I + W_-)^{-1} W_+‖ < 1.

Then the operator B ≡ I + W_- + W_+ is boundedly invertible. Moreover,

‖B^{-1}‖ ≤ J(W_-)/(1 - ψ_0).

Proof: Clearly, the operators W_± are nilpotent, so the operators I + W_± are invertible. We have

B = (I + W_-)(I + (I + W_-)^{-1} W_+).

Moreover,

‖(I + (I + W_-)^{-1} W_+)^{-1}‖ ≤ Σ_{k=0}^{∞} ‖(I + W_-)^{-1} W_+‖^k ≤ Σ_{k=0}^{∞} ψ_0^k = (1 - ψ_0)^{-1}.

Thus,

‖B^{-1}‖ ≤ ‖(I + (I + W_-)^{-1} W_+)^{-1}‖ ‖(I + W_-)^{-1}‖ ≤ (1 - ψ_0)^{-1} ‖(I + W_-)^{-1}‖,

as claimed. □
Lemma 3.1.3 Let at least one of the following inequalities:

‖W_+‖ J(W_-) < 1   (1.3)

or

‖W_-‖ J(W_+) < 1   (1.4)

hold. Then relation (1.1) is valid. Moreover, the operator B ≡ I + W_- + W_+ is invertible and

‖B^{-1}‖ ≤ 1/ν(A).

Proof: If condition (1.3) holds, then Lemma 3.1.2 yields the inequality

‖B^{-1}‖ ≤ J(W_-)/(1 - ‖W_+‖ J(W_-)) = 1/(J^{-1}(W_-) - ‖W_+‖).   (1.5)

Interchanging W_- and W_+, under condition (1.4) we get

‖B^{-1}‖ ≤ 1/(J^{-1}(W_+) - ‖W_-‖).

This relation and (1.5) yield the required result. □

Proof of Theorem 3.1.1: Clearly, condition (1.1) implies that at least one of the inequalities (1.3) or (1.4) holds. But

A = D + V_+ + V_- = D(I + W_+ + W_-) = DB.

Now the required result follows from Lemma 3.1.3. □
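Theorem 3.1.1 can be illustrated numerically. In the sketch below (Python with numpy; the spectral norm is taken as the arbitrary norm, and the strong diagonal of the example guarantees that condition (1.1) holds — these are illustrative choices, not part of the theorem).

```python
import numpy as np

rng = np.random.default_rng(8)
n = 6
A = 5.0 * np.eye(n) + 0.3 * rng.standard_normal((n, n))   # strong diagonal, so nu(A) > 0
D = np.diag(np.diag(A))
Wp = np.linalg.inv(D) @ np.triu(A, 1)     # W_+ = D^{-1} V_+
Wm = np.linalg.inv(D) @ np.tril(A, -1)    # W_- = D^{-1} V_-

def norm(M): return np.linalg.norm(M, 2)
def J(W): return norm(np.linalg.inv(np.eye(n) + W))

nu = max(1.0 / J(Wp) - norm(Wm), 1.0 / J(Wm) - norm(Wp))
assert nu > 0                                             # condition (1.1)
assert norm(np.linalg.inv(A)) <= norm(np.linalg.inv(D)) / nu + 1e-12
```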

3.2 l_p-Norms of Powers of Nilpotent Matrices

Recall that

‖h‖_p = [Σ_{k=1}^{n} |h_k|^p]^{1/p}   (h = (h_k) ∈ C^n; 1 < p < ∞).

In the present and next sections ‖A‖_p is the operator norm of a matrix A with respect to the vector norm ‖·‖_p. Put

γ_{n,m,p} = [C_{n-1}^m (n-1)^{-m}]^{1/p}   (m = 1, ..., n-1) and γ_{n,0,p} = 1,

where C_n^m = n!/((n-m)! m!) are the binomial coefficients. Note that

γ_{n,m,p}^p = (n-1)!/((n-1)^m (n-1-m)! m!) = (n-1)...(n-m)/((n-1)^m m!).

Hence,

γ_{n,m,p}^p ≤ 1/m!   (m = 1, ..., n-1).
Lemma 3.2.1 For any upper triangular nilpotent matrix

V_+ = (a_jk)_{j,k=1}^{n} with a_jk = 0 (1 ≤ k ≤ j ≤ n),

the inequality

‖V_+^m‖_p ≤ γ_{n,m,p} M_p^m(V_+)   (m = 1, ..., n-1)   (2.1)

is valid with the notation

M_p(V_+) = (Σ_{j=1}^{n-1} [Σ_{k=j+1}^{n} |a_jk|^q]^{p/q})^{1/p}   (p^{-1} + q^{-1} = 1).

Proof: For a natural s = 1, ..., n-1, denote

‖x‖_{p,s} = (Σ_{k=s}^{n} |x_k|^p)^{1/p},

where x_k are the coordinates of a vector x ∈ C^n. We can write out

‖V_+ x‖_{p,s}^p = Σ_{j=s}^{n-1} |Σ_{k=j+1}^{n} a_jk x_k|^p.

By Hölder's inequality,

‖V_+ x‖_{p,s}^p ≤ Σ_{j=s}^{n-1} h_j ‖x‖_{p,j+1}^p,   (2.2)

where

h_j = [Σ_{k=j+1}^{n} |a_jk|^q]^{p/q}.

Similarly,

‖V_+^2 x‖_{p,s}^p = Σ_{j=s}^{n-1} |Σ_{k=j+1}^{n} a_jk (V_+ x)_k|^p ≤ Σ_{j=s}^{n-1} h_j ‖V_+ x‖_{p,j+1}^p.

Here (V_+ x)_k are the coordinates of V_+ x. Taking into account (2.2), we obtain

‖V_+^2 x‖_{p,s}^p ≤ Σ_{j=s}^{n-1} Σ_{k=j+1}^{n-1} h_j h_k ‖x‖_{p,k+1}^p = Σ_{s≤j<k≤n-1} h_j h_k ‖x‖_{p,k+1}^p.

Therefore,

‖V_+^2‖_p^p ≤ Σ_{1≤j<k≤n-1} h_j h_k.

Repeating these arguments, we arrive at the inequality

‖V_+^m‖_p^p ≤ Σ_{1≤k_1<k_2<...<k_m≤n-1} h_{k_1} ... h_{k_m}.   (2.3)

Since

Σ_{j=1}^{n-1} h_j = Σ_{j=1}^{n-1} [Σ_{k=j+1}^{n} |a_jk|^q]^{p/q} = M_p^p(V_+),

due to Lemma 2.4.1 and (2.3) we have

‖V_+^m‖_p^p ≤ M_p^{mp}(V_+) C_{n-1}^m (n-1)^{-m},

as claimed. □

Similarly we can prove

Lemma 3.2.2 For any lower triangular nilpotent matrix

V_- = (a_jk)_{j,k=1}^{n} with a_jk = 0 (1 ≤ j ≤ k ≤ n),

the inequality

‖V_-^m‖_p ≤ γ_{n,m,p} M_p^m(V_-)   (m = 1, ..., n-1)

is valid, where

M_p(V_-) = (Σ_{j=2}^{n} [Σ_{k=1}^{j-1} |a_jk|^q]^{p/q})^{1/p}.

Consider the case p = 2. The Euclidean norm ‖·‖_2 is invariant with respect to an orthogonal basis. Moreover,

M_2^2(V_+) = Trace V_+ V_+^* = Σ_{j=1}^{n-1} Σ_{k=j+1}^{n} |a_jk|^2 = N_2^2(V_+),

where N_2(·) = N(·) is the Hilbert-Schmidt norm. Due to Lemma 3.2.1 we have

Corollary 3.2.3 Any n × n-nilpotent matrix V satisfies the inequalities

‖V^m‖_2 ≤ γ_{n,m,2} N_2^m(V)   (m = 1, ..., n-1).

Thus, Lemma 3.2.1 gives us a new proof of Theorem 2.5.1, since γ_{n,m,2} = γ_{n,m}.
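Since the operator p-norm for p ∉ {1, 2, ∞} is not directly available in numpy, Lemma 3.2.1 is most easily checked through its immediate consequence ‖V_+^m x‖_p ≤ γ_{n,m,p} M_p^m(V_+) ‖x‖_p for a fixed vector x. The sketch below (Python; p = 3 and the random data are illustrative choices) does exactly that.

```python
import math
import numpy as np

rng = np.random.default_rng(9)
n, p = 6, 3.0
q = p / (p - 1)                                  # conjugate exponent: 1/p + 1/q = 1
U = np.triu(rng.standard_normal((n, n)), k=1)    # upper triangular nilpotent V_+

Mp = sum(np.sum(np.abs(U[j, j + 1:])**q)**(p / q) for j in range(n - 1))**(1 / p)
def gamma(m): return (math.comb(n - 1, m) / (n - 1)**m)**(1 / p) if m else 1.0

x = rng.standard_normal(n)
xp = np.sum(np.abs(x)**p)**(1 / p)               # ||x||_p
for m in range(1, n):
    ym = np.linalg.matrix_power(U, m) @ x
    assert np.sum(np.abs(ym)**p)**(1 / p) <= gamma(m) * Mp**m * xp + 1e-10
```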

3.3 Invertibility in the Norm ‖·‖_p (1 < p < ∞)

Recall that A, d_0 and V_± are defined in Section 3.1; γ_{n,m,p} and M_p(V_±) are defined in the previous section. In addition, W_± = D^{-1} V_±. So

M_p(W_-) = (Σ_{j=2}^{n} [Σ_{k=1}^{j-1} |a_jk|^q/|a_jj|^q]^{p/q})^{1/p}

and

M_p(W_+) = (Σ_{j=1}^{n-1} [Σ_{k=j+1}^{n} |a_jk|^q/|a_jj|^q]^{p/q})^{1/p}   (p^{-1} + q^{-1} = 1).

Theorem 3.3.1 With the notation

J_p(W_±) := Σ_{k=0}^{n-1} γ_{n,k,p} M_p^k(W_±),

let

ν_p(A) := max{1/J_p(W_+) - ‖W_-‖_p, 1/J_p(W_-) - ‖W_+‖_p} > 0.

Then A is invertible and the inverse matrix satisfies the inequality

‖A^{-1}‖_p ≤ 1/(d_0 ν_p(A)).

Proof: Clearly,

‖(I + W_±)^{-1}‖_p ≤ Σ_{k=0}^{n-1} ‖W_±^k‖_p.

Lemmas 3.2.1 and 3.2.2 imply

‖(I + W_±)^{-1}‖_p ≤ J_p(W_±).

Now Theorem 3.1.1 yields the required result, since ‖D^{-1}‖_p = 1/d_0. □

3.4 Invertibility in the Norm ‖·‖_∞

For a matrix A = (a_jk)_{j,k=1}^{n} take the norm

‖A‖_∞ ≡ max_{j=1,...,n} Σ_{k=1}^{n} |a_jk|.

Recall that d_0 is defined in Section 3.1. Under the condition d_0 > 0, introduce the notation:

ṽ_k := max_{j=1,...,k-1} |a_jk|   (k = 2, ..., n);

w̃_k := max_{j=k+1,...,n} |a_jk|   (k = 1, ..., n-1);

m_up(A) := ∏_{k=2}^{n} (1 + ṽ_k/|a_kk|) and m_low(A) := ∏_{k=1}^{n-1} (1 + w̃_k/|a_kk|).

Theorem 3.4.1 Let the condition

m_up(A) m_low(A) < m_up(A) + m_low(A)   (4.1)

be fulfilled. Then the matrix A is invertible and the inverse matrix satisfies the inequality

‖A^{-1}‖_∞ ≤ m_up(A) m_low(A)/((m_up(A) + m_low(A) - m_up(A) m_low(A)) d_0).   (4.2)

The proof of this theorem is divided into a series of lemmas, which are presented in the next section. Note that condition (4.1) is equivalent to the following one:

θ(A) := (m_up(A) - 1)(m_low(A) - 1) < 1.   (4.3)

Inequality (4.2) can be written as

‖A^{-1}‖_∞ ≤ m_up(A) m_low(A)/(d_0 (1 - θ(A))).   (4.4)

If the matrix A is triangular and has nonzero diagonal entries, then (4.1) obviously holds.
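Theorem 3.4.1, in the form (4.3)-(4.4), is straightforward to test. The sketch below (Python with numpy; the nearly diagonal example is an illustrative choice that makes condition (4.1) hold) computes ṽ_k, w̃_k, m_up, m_low and θ(A) exactly as defined above.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 6
A = 5.0 * np.eye(n) + 0.2 * rng.standard_normal((n, n))   # close to diagonal, so (4.1) holds
d = np.abs(np.diag(A)); d0 = d.min()

v = [np.max(np.abs(A[:k, k])) for k in range(1, n)]        # v~_k, k = 2, ..., n
w = [np.max(np.abs(A[k + 1:, k])) for k in range(n - 1)]   # w~_k, k = 1, ..., n-1
m_up = np.prod([1.0 + v[k - 1] / d[k] for k in range(1, n)])
m_low = np.prod([1.0 + w[k] / d[k] for k in range(n - 1)])

theta = (m_up - 1.0) * (m_low - 1.0)
assert theta < 1.0                                         # condition (4.3), equivalent to (4.1)
inv_norm = np.linalg.norm(np.linalg.inv(A), np.inf)        # maximal absolute row sum
assert inv_norm <= m_up * m_low / (d0 * (1.0 - theta)) + 1e-12
```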

3.5 Proof of Theorem 3.4.1
Recall that V_± and D are introduced in Section 3.1; in this section we set W_± = V_± D^{-1}. Here ‖A‖ is the operator norm of A with respect to an arbitrary vector norm.
Lemma 3.5.1 Let the condition

θ̃_0 ≡ ‖Σ_{j,k=1}^{n-1} (-1)^{k+j} W_-^k W_+^j‖ < 1   (5.1)

hold. Then A is invertible and the inverse matrix satisfies the inequality

‖A^{-1}‖ ≤ ‖D^{-1}‖ ‖(I + W_-)^{-1}‖ ‖(I + W_+)^{-1}‖ (1 - θ̃_0)^{-1}.   (5.2)

Proof: Clearly,

A = D + V_- + V_+ = (I + W_- + W_+)D = [(I + W_-)(I + W_+) - W_- W_+]D.

But W_+ and W_- are nilpotent:

W_-^n = W_+^n = 0.   (5.3)

So the operators I + W_- and I + W_+ are invertible:

(I + W_-)^{-1} = Σ_{k=0}^{n-1} (-1)^k W_-^k,   (I + W_+)^{-1} = Σ_{k=0}^{n-1} (-1)^k W_+^k.   (5.4)

Thus

A = (I + W_-)[I - (I + W_-)^{-1} W_- W_+ (I + W_+)^{-1}](I + W_+)D.

Thanks to (5.4) we have

(I + W_-)^{-1} W_- = Σ_{k=1}^{n-1} (-1)^{k-1} W_-^k,   W_+ (I + W_+)^{-1} = Σ_{k=1}^{n-1} (-1)^{k-1} W_+^k.

So

A = (I + W_-)[I - Σ_{j,k=1}^{n-1} (-1)^{k+j} W_-^k W_+^j](I + W_+)D.

Therefore, if (5.1) holds, then A is invertible. Moreover,

A^{-1} = D^{-1}(I + W_+)^{-1}[I - Σ_{j,k=1}^{n-1} (-1)^{k+j} W_-^k W_+^j]^{-1}(I + W_-)^{-1}.   (5.5)

Condition (5.1) yields

‖[I - Σ_{j,k=1}^{n-1} (-1)^{k+j} W_-^k W_+^j]^{-1}‖ ≤ (1 - θ̃_0)^{-1}.

Now inequality (5.2) is due to (5.5). □
Denote

m̃(V_+) = ∏_{k=2}^{n} (1 + ṽ_k) and m̃(V_-) = ∏_{k=1}^{n-1} (1 + w̃_k).

Lemma 3.5.2 The inequalities

‖(I - V_+)^{-1}‖_∞ ≤ m̃(V_+)   (5.6)

and

‖(I - V_-)^{-1}‖_∞ ≤ m̃(V_-)   (5.7)

are valid.
Proof: Let Q_k be the projectors onto the standard basis:

Q_k h = (h_1, h_2, ..., h_k, 0, 0, ..., 0)   (k = 1, ..., n),  Q_0 = 0

for an arbitrary vector h = (h_1, ..., h_n) ∈ C^n. Clearly, the Q_k project onto the invariant subspaces of V_+. So according to Lemma 2.10.1,

(I - V_+)^{-1} = ∏^→_{2≤k≤n} (I + V_+ ΔQ_k), where ΔQ_k = Q_k - Q_{k-1}.   (5.8)

It is not hard to check that

‖V_+ ΔQ_k‖_∞ = ṽ_k.

Now inequality (5.6) follows from (5.8). Further, define a projector Q̃_k by

Q̃_k h = (0, 0, ..., 0, h_{n-k+1}, h_{n-k+2}, ..., h_n)   (k = 1, ..., n),  Q̃_0 = 0.

A simple calculation shows that the Q̃_k project onto invariant subspaces of V_-. So according to Lemma 2.10.1,

(I - V_-)^{-1} = ∏^→_{2≤k≤n} (I + V_- ΔQ̃_k)   (ΔQ̃_k = Q̃_k - Q̃_{k-1}).   (5.9)

It is not hard to check that ‖V_- ΔQ̃_k‖_∞ = w̃_{n-k+1}. Now inequality (5.7) follows from (5.9). □

Lemma 3.5.3 The inequalities

‖Σ_{k=1}^{n-1} (-1)^k V_+^k‖_∞ ≤ m̃(V_+) - 1 and ‖Σ_{k=1}^{n-1} (-1)^k V_-^k‖_∞ ≤ m̃(V_-) - 1   (5.10)

are valid.

Proof: Let B = (b_jk)_{j,k=1}^{n} be a nonnegative matrix with the property

Bh ≥ h   (5.11)

for any nonnegative h ∈ C^n. Then b_jj ≥ 1 (j = 1, ..., n). Hence,

‖B - I‖_∞ = max_{j=1,...,n} Σ_{k=1}^{n} [b_jk - δ_jk] = ‖B‖_∞ - 1.   (5.12)

Here δ_jk is the Kronecker symbol. Furthermore, since V_+ is nilpotent,

‖Σ_{k=1}^{n-1} (-1)^k V_+^k‖_∞ ≤ ‖Σ_{k=1}^{n-1} |V_+|^k‖_∞ = ‖(I - |V_+|)^{-1} - I‖_∞,

where |V_+| is the matrix whose entries are the absolute values of the entries of V_+. Moreover, clearly,

Σ_{k=0}^{n-1} |V_+|^k h ≥ h

for any nonnegative h ∈ C^n. So according to (5.11) and (5.12),

‖Σ_{k=1}^{n-1} (-1)^k V_+^k‖_∞ ≤ ‖Σ_{k=1}^{n-1} |V_+|^k‖_∞ = ‖(I - |V_+|)^{-1}‖_∞ - 1.

Since