$$\|V^p\|^2 \le \sum_{1\le k_1<k_2<\dots<k_p\le n-1} h_{k_1}\cdots h_{k_p}. \tag{5.2}$$

Therefore, due to Lemma 2.4.1,

$$\|V^k\|^2 \le \gamma_{n,k}^2 \Big(\sum_{j=1}^{n-1} h_j\Big)^k.$$


But

$$\sum_{j=1}^{n-1} h_j \le N^2(V).$$

We thus have derived the required result. ∎
Theorem 2.5.1 and (1.5) imply

Corollary 2.5.2 For any nilpotent operator $V$ in $C^n$, the inequalities

$$\|V^k\| \le \frac{N^k(V)}{\sqrt{k!}} \quad (k = 1, \dots, n-1)$$

are valid.

An independent proof of this corollary can be found in (Gil', 1995, p. 50, Lemma 2.3.1).
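The corollary is easy to probe numerically. Below is a minimal sketch (an illustration, not from the original text) that checks the bound in the spectral norm for a random strictly upper triangular matrix, which is automatically nilpotent; $N(\cdot)$ is taken to be the Frobenius (Hilbert-Schmidt) norm.

```python
# Hypothetical check of Corollary 2.5.2: ||V^k|| <= N^k(V)/sqrt(k!)
# for a nilpotent V; ||.|| is the spectral norm, N(.) the Frobenius norm.
import numpy as np
from math import factorial, sqrt

rng = np.random.default_rng(0)
n = 6
V = np.triu(rng.standard_normal((n, n)), k=1)  # strictly upper triangular => nilpotent
NV = np.linalg.norm(V, 'fro')                  # N(V)

for k in range(1, n):
    lhs = np.linalg.norm(np.linalg.matrix_power(V, k), 2)
    rhs = NV**k / sqrt(factorial(k))
    print(f"k={k}: ||V^k|| = {lhs:.4e} <= N^k(V)/sqrt(k!) = {rhs:.4e}")
    assert lhs <= rhs + 1e-10
```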


2.6 Proof of Theorem 2.1.1

Let $D$ and $V$ be the diagonal and nilpotent parts of $A$, respectively. According to Lemma 1.7.1, $R_\lambda(D)V$ is a nilpotent operator. So by virtue of Theorem 2.5.1,

$$\|(R_\lambda(D)V)^k\| \le N^k(R_\lambda(D)V)\,\gamma_{n,k} \quad (k = 1, \dots, n-1). \tag{6.1}$$

Since $D$ is a normal operator, we can write down $\|R_\lambda(D)\| = \rho^{-1}(D,\lambda)$. It is clear that

$$N(R_\lambda(D)V) \le N(V)\,\|R_\lambda(D)\| = N(V)\,\rho^{-1}(D,\lambda). \tag{6.2}$$

According to (3.4),

$$A - \lambda I = D + V - \lambda I = (D - \lambda I)(I + R_\lambda(D)V).$$

We thus have

$$R_\lambda(A) = (I + R_\lambda(D)V)^{-1} R_\lambda(D) = \sum_{k=0}^{n-1} (-1)^k (R_\lambda(D)V)^k R_\lambda(D). \tag{6.3}$$

Now (6.1) and (6.2) yield the inequality

$$\|R_\lambda(A)\| \le \sum_{k=0}^{n-1} N^k(V)\,\gamma_{n,k}\,\rho^{-k-1}(D,\lambda).$$

This relation proves the stated result, since $A$ and $D$ have the same eigenvalues and $N(V) = g(A)$, due to Lemma 2.3.2. ∎

An additional proof of Corollary 2.1.2: Corollary 2.5.2 implies

$$\|(R_\lambda(D)V)^k\| \le \frac{N^k(R_\lambda(D)V)}{\sqrt{k!}} \quad (k = 1, \dots, n-1).$$

Now the required result follows from (6.2) and (6.3), since $N(V) = g(A)$ due to Lemma 2.3.2. ∎
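To make the resulting estimate concrete, here is a small numerical sketch of the bound of Theorem 2.1.1, $\|R_\lambda(A)\| \le \sum_{k=0}^{n-1} \gamma_{n,k}\, g^k(A)\, \rho^{-k-1}(A,\lambda)$. The helpers assume $g(A) = (N^2(A) - \sum_k |\lambda_k(A)|^2)^{1/2}$ and $\gamma_{n,k} = [C_{n-1}^k (n-1)^{-k}]^{1/2}$, consistent with the definitions recalled from Section 2.1 (compare $\gamma_{n,m,2}$ in Section 3.2); $\rho(A,\lambda)$ is the distance from $\lambda$ to the spectrum.

```python
# Sketch (with assumed Section 2.1 definitions) of the Theorem 2.1.1 bound.
import numpy as np
from math import comb, sqrt

def g(A):
    """g(A) = (N^2(A) - sum |lambda_k|^2)^{1/2}; N is the Frobenius norm."""
    return sqrt(max(np.linalg.norm(A, 'fro')**2
                    - np.sum(np.abs(np.linalg.eigvals(A))**2), 0.0))

def gamma(n, k):
    """gamma_{n,k} = (C(n-1,k)/(n-1)^k)^{1/2}, gamma_{n,0} = 1."""
    return 1.0 if k == 0 else sqrt(comb(n - 1, k) / (n - 1)**k)

def resolvent_bound(A, lam):
    n = A.shape[0]
    rho = np.min(np.abs(lam - np.linalg.eigvals(A)))  # rho(A, lambda)
    return sum(gamma(n, k) * g(A)**k / rho**(k + 1) for k in range(n))

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
lam = 3.0 + 4.0j
exact = np.linalg.norm(np.linalg.inv(A - lam * np.eye(5)), 2)
print(f"||R_lambda(A)|| = {exact:.4e} <= bound = {resolvent_bound(A, lam):.4e}")
```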


2.7 Estimates for the Norm of Analytic Matrix-Valued Functions

Recall that $g(A)$ and $\gamma_{n,k}$ are defined in Section 2.1, and $B(C^n)$ is the set of all linear operators in $C^n$.

Theorem 2.7.1 Let $A \in B(C^n)$ and let $f$ be a function regular on a neighborhood of the closed convex hull $co(A)$ of the eigenvalues of $A$. Then

$$\|f(A)\| \le \sum_{k=0}^{n-1} \sup_{\lambda\in co(A)} |f^{(k)}(\lambda)|\, g^k(A)\, \frac{\gamma_{n,k}}{k!}. \tag{7.1}$$

The proof of this theorem is divided into a series of lemmas, which are presented in the next section.

Theorem 2.7.1 is exact: if $A$ is a normal matrix and

$$\sup_{\lambda\in co(A)} |f(\lambda)| = \sup_{\lambda\in \sigma(A)} |f(\lambda)|,$$

then we have the equality $\|f(A)\| = \sup_{\lambda\in\sigma(A)} |f(\lambda)|$. Theorem 2.7.1 and inequalities (1.5) yield

Corollary 2.7.2 Let $A \in B(C^n)$ and let $f$ be a function regular on a neighborhood of the closed convex hull $co(A)$ of the eigenvalues of $A$. Then

$$\|f(A)\| \le \sum_{k=0}^{n-1} \sup_{\lambda\in co(A)} |f^{(k)}(\lambda)|\, \frac{g^k(A)}{(k!)^{3/2}}. \tag{7.2}$$

An additional proof of this corollary is presented in the next section.
Example 2.7.3 For a linear operator $A$ in $C^n$, Theorem 2.7.1 and Corollary 2.7.2 give us the estimates

$$\|\exp(At)\| \le e^{\alpha(A)t} \sum_{k=0}^{n-1} \frac{\gamma_{n,k}\, g^k(A)\, t^k}{k!} \le e^{\alpha(A)t} \sum_{k=0}^{n-1} \frac{g^k(A)\, t^k}{(k!)^{3/2}} \quad (t \ge 0),$$

where $\alpha(A) = \max_{k=1,\dots,n} \mathrm{Re}\,\lambda_k(A)$. In addition,

$$\|A^m\| \le \sum_{k=0}^{n-1} \frac{\gamma_{n,k}\, m!\, g^k(A)\, r_s^{m-k}(A)}{(m-k)!\,k!} \le \sum_{k=0}^{n-1} \frac{m!\, g^k(A)\, r_s^{m-k}(A)}{(m-k)!\,(k!)^{3/2}} \quad (m = 1, 2, \dots),$$

where $r_s(A)$ is the spectral radius. Recall that $1/(m-k)! = 0$ if $m < k$.
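As a quick illustration, the following sketch evaluates the first exponential estimate of Example 2.7.3 against the true norm $\|e^{At}\|$; it reuses the assumed definitions of $g(A)$ and $\gamma_{n,k}$ from the sketch in Section 2.6 and relies on SciPy for the matrix exponential.

```python
# Illustration of ||exp(At)|| <= e^{alpha(A)t} sum_k gamma_{n,k} g^k(A) t^k / k!
import numpy as np
from math import comb, sqrt, factorial
from scipy.linalg import expm

def g(A):
    return sqrt(max(np.linalg.norm(A, 'fro')**2
                    - np.sum(np.abs(np.linalg.eigvals(A))**2), 0.0))

def gamma(n, k):
    return 1.0 if k == 0 else sqrt(comb(n - 1, k) / (n - 1)**k)

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n))
alpha = np.max(np.linalg.eigvals(A).real)      # alpha(A) = max Re lambda_k(A)
for t in (0.5, 1.0, 2.0):
    lhs = np.linalg.norm(expm(A * t), 2)
    rhs = np.exp(alpha * t) * sum(gamma(n, k) * g(A)**k * t**k / factorial(k)
                                  for k in range(n))
    print(f"t={t}: ||exp(At)|| = {lhs:.4e} <= {rhs:.4e}")
```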


2.8 Proof of Theorem 2.7.1

Lemma 2.8.1 Let $\{d_k\}$ be an orthonormal basis in $C^n$, let $A_1, \dots, A_j$ be $n\times n$-matrices, and let $\phi(k_1,\dots,k_{j+1})$ be a scalar-valued function of the arguments

$$k_1, \dots, k_{j+1} = 1, 2, \dots, n; \quad j < n.$$

Define projectors $Q(k)$ by $Q(k)h = (h, d_k)d_k$ ($h \in C^n$, $k = 1,\dots,n$), and set

$$T = \sum_{1\le k_1,\dots,k_{j+1}\le n} \phi(k_1,\dots,k_{j+1})\, Q(k_1) A_1 Q(k_2) \cdots A_j Q(k_{j+1}).$$

Then $\|T\| \le a(\phi)\, \|\,|A_1|\,|A_2| \cdots |A_j|\,\|$, where

$$a(\phi) = \max_{1\le k_1,\dots,k_{j+1}\le n} |\phi(k_1,\dots,k_{j+1})|$$

and $|A_k|$ ($k = 1,\dots,j$) are the matrices whose entries in $\{d_k\}$ are the absolute values of the entries of $A_k$ in $\{d_k\}$.

Proof: For any entry $T_{sm} = (T d_s, d_m)$ ($s, m = 1,\dots,n$) of the operator $T$ we have

$$T_{sm} = \sum_{1\le k_2,\dots,k_j\le n} \phi(s, k_2, \dots, k_j, m)\, a^{(1)}_{s k_2} \cdots a^{(j)}_{k_j m},$$

where $a^{(i)}_{jk} = (A_i d_k, d_j)$ are the entries of $A_i$. Hence,

$$|T_{sm}| \le a(\phi) \sum_{1\le k_2,\dots,k_j\le n} |a^{(1)}_{s k_2} \cdots a^{(j)}_{k_j m}|.$$

This relation and the equality

$$\|Tx\|^2 = \sum_{j=1}^n |(Tx)_j|^2 \quad (x \in C^n),$$

where $(\cdot)_j$ means the $j$-th coordinate, imply the required result. ∎

Furthermore, let $|V|$ be the operator whose matrix elements in the orthonormal basis of the triangular representation (the Schur basis) are the absolute values of the matrix elements of the nilpotent part $V$ of $A$ with respect to this basis. That is,

$$|V| = \sum_{k=1}^{n} \sum_{j=1}^{k-1} |a_{jk}|\, (\cdot, e_k)\, e_j,$$

where $\{e_k\}$ is the Schur basis and $a_{jk} = (A e_k, e_j)$.


Lemma 2.8.2 Under the hypothesis of Theorem 2.7.1, the estimate

$$\|f(A)\| \le \sum_{k=0}^{n-1} \sup_{\lambda\in co(A)} |f^{(k)}(\lambda)|\, \frac{\|\,|V|^k\,\|}{k!}$$

is true, where $V$ is the nilpotent part of $A$.

Proof: It is not hard to see that the representation (3.4) implies the equality

$$R_\lambda(A) \equiv (A - I\lambda)^{-1} = (D + V - \lambda I)^{-1} = (I + R_\lambda(D)V)^{-1} R_\lambda(D)$$

for all regular $\lambda$. According to Lemma 1.7.1, $R_\lambda(D)V$ is a nilpotent operator because $V$ and $R_\lambda(D)$ have common invariant subspaces. Hence,

$$(R_\lambda(D)V)^n = 0.$$

Therefore,

$$R_\lambda(A) = \sum_{k=0}^{n-1} (-1)^k (R_\lambda(D)V)^k R_\lambda(D). \tag{8.1}$$

Due to the representation for functions of matrices,

$$f(A) = -\frac{1}{2\pi i} \int_\Gamma f(\lambda) R_\lambda(A)\, d\lambda = \sum_{k=0}^{n-1} C_k, \tag{8.2}$$

where

$$C_k = \frac{(-1)^{k+1}}{2\pi i} \int_\Gamma f(\lambda) (R_\lambda(D)V)^k R_\lambda(D)\, d\lambda.$$

Here $\Gamma$ is a closed contour surrounding $\sigma(A)$. Since $D$ is a diagonal matrix with respect to the Schur basis $\{e_k\}$ and its diagonal entries are the eigenvalues of $A$,

$$R_\lambda(D) = \sum_{j=1}^n \frac{Q_j}{\lambda_j(A) - \lambda},$$

where $Q_k = (\cdot, e_k)e_k$. We have

$$C_k = \sum_{j_1=1}^n \sum_{j_2=1}^n \cdots \sum_{j_{k+1}=1}^n Q_{j_1} V Q_{j_2} V \cdots V Q_{j_{k+1}}\, I_{j_1 j_2 \dots j_{k+1}}.$$

Here

$$I_{j_1 \dots j_{k+1}} = \frac{(-1)^{k+1}}{2\pi i} \int_\Gamma \frac{f(\lambda)\, d\lambda}{(\lambda_{j_1} - \lambda)\cdots(\lambda_{j_{k+1}} - \lambda)}.$$


Lemma 2.8.1 gives us the estimate

$$\|C_k\| \le \max_{1\le j_1\le \dots\le j_{k+1}\le n} |I_{j_1 \dots j_{k+1}}|\ \|\,|V|^k\,\|.$$

Due to Lemma 1.5.1,

$$|I_{j_1 \dots j_{k+1}}| \le \sup_{\lambda\in co(A)} \frac{|f^{(k)}(\lambda)|}{k!}.$$

This inequality and (8.2) imply the result. ∎

Proof of Theorem 2.7.1: Theorem 2.5.1 implies

$$\|\,|V|^k\,\| \le \gamma_{n,k}\, N^k(|V|) \quad (k = 1,\dots,n-1).$$

But $N(|V|) = N(V)$. Moreover, thanks to Lemma 2.3.2, $N(V) = g(A)$. Thus

$$\|\,|V|^k\,\| \le \gamma_{n,k}\, g^k(A) \quad (k = 1,\dots,n-1).$$

Now the previous lemma yields the required result. ∎

An additional proof of Corollary 2.7.2: Corollary 2.5.2 implies

$$\|\,|V|^k\,\| \le \frac{N^k(V)}{\sqrt{k!}} \quad (k = 1,\dots,n-1).$$

Now the required result follows from Lemma 2.8.2, since $N(V) = g(A)$ due to Lemma 2.3.2. ∎



2.9 The First Multiplicative Representation of the Resolvent

Recall that $B(C^n)$ is the set of linear operators in $C^n$ and $I$ is the unit operator. Let $P_k$ ($k = 1,\dots,n$) be the maximal chain of the invariant projectors of an $A \in B(C^n)$. That is, the $P_k$ are orthogonal projectors,

$$A P_k = P_k A P_k \quad (k = 1,\dots,n)$$

and

$$0 = P_0 C^n \subset P_1 C^n \subset \dots \subset P_n C^n = C^n.$$

So $\dim \Delta P_k C^n = 1$. Here

$$\Delta P_k = P_k - P_{k-1} \quad (k = 1,\dots,n).$$

We use the triangular representation

$$A = D + V \tag{9.1}$$

(see Section 1.7). Here $V$ is the nilpotent part of $A$ and

$$D = \sum_{k=1}^n \lambda_k(A)\, \Delta P_k$$

is the diagonal part. For $X_1, X_2, \dots, X_n \in B(C^n)$ denote

$$\overrightarrow{\prod_{1\le k\le n}} X_k \equiv X_1 X_2 \cdots X_n.$$

That is, the arrow over the product symbol means that the indices of the co-factors increase from left to right.

Theorem 2.9.1 For any $A \in B(C^n)$,

$$\lambda R_\lambda(A) = -\overrightarrow{\prod_{1\le k\le n}} \Big(I + \frac{A\, \Delta P_k}{\lambda - \lambda_k(A)}\Big) \quad (\lambda \notin \sigma(A)),$$

where $P_k$, $k = 1,\dots,n$, is the maximal chain of the invariant projectors of $A$.
Proof: Denote $E_k = I - P_k$. Since

$$A = (E_k + P_k)\, A\, (E_k + P_k) \quad \text{for any } k = 1,\dots,n$$

and $E_1 A P_1 = 0$, we get the relation

$$A = P_1 A E_1 + P_1 A P_1 + E_1 A E_1.$$

Take into account that $\Delta P_1 = P_1$ and

$$P_1 A P_1 = \lambda_1(A)\, \Delta P_1.$$

Then

$$A = \lambda_1(A)\, \Delta P_1 + P_1 A E_1 + E_1 A E_1 = \lambda_1(A)\, \Delta P_1 + A E_1. \tag{9.2}$$

Now we check the equality

$$R_\lambda(A) = \Psi(\lambda), \tag{9.3}$$

where

$$\Psi(\lambda) \equiv \frac{\Delta P_1}{\lambda_1(A) - \lambda} - \frac{\Delta P_1}{\lambda_1(A) - \lambda}\, A E_1 R_\lambda(A) E_1 + E_1 R_\lambda(A) E_1.$$

In fact, multiplying this equality from the left by $A - I\lambda$ and taking into account equality (9.2), we obtain the relation

$$(A - I\lambda)\Psi(\lambda) = \Delta P_1 - \Delta P_1 A E_1 R_\lambda(A) E_1 + (A - I\lambda) E_1 R_\lambda(A) E_1.$$


But $E_1 A E_1 = E_1 A$ and thus $E_1 R_\lambda(A) E_1 = E_1 R_\lambda(A)$. That is, we can write

$$(A - I\lambda)\Psi(\lambda) = \Delta P_1 + (-\Delta P_1 A + A - I\lambda)\, E_1 R_\lambda(A) = \Delta P_1 + E_1 (A - I\lambda) R_\lambda(A) = \Delta P_1 + E_1 = I.$$

Similarly, we multiply (9.3) by $A - I\lambda$ from the right and take into account (9.2). This gives $I$. Therefore, (9.3) is correct.

Due to (9.3),

$$I - A R_\lambda(A) = \big(I - (\lambda_1(A) - \lambda)^{-1} A\, \Delta P_1\big)\big(I - A E_1 R_\lambda(A) E_1\big). \tag{9.4}$$

Now we apply the above arguments to the operator $A E_1$. We obtain the following expression, which is similar to (9.4):

$$I - A E_1 R_\lambda(A) E_1 = \big(I - (\lambda_2(A) - \lambda)^{-1} A\, \Delta P_2\big)\big(I - A E_2 R_\lambda(A) E_2\big).$$

For any $k < n$, it similarly follows that

$$I - A E_k R_\lambda(A) E_k = \Big(I - \frac{A\, \Delta P_{k+1}}{\lambda_{k+1}(A) - \lambda}\Big)\big(I - A E_{k+1} R_\lambda(A) E_{k+1}\big).$$

Substituting this in (9.4) for $k = 1, 2, \dots, n-1$, we have

$$I - A R_\lambda(A) = \overrightarrow{\prod_{1\le k\le n-1}} \Big(I + \frac{A\, \Delta P_k}{\lambda - \lambda_k(A)}\Big)\big(I - A E_{n-1} R_\lambda(A) E_{n-1}\big). \tag{9.5}$$

It is clear that $E_{n-1} = \Delta P_n$, i.e.,

$$I - A E_{n-1} R_\lambda(A) E_{n-1} = I + \frac{A\, \Delta P_n}{\lambda - \lambda_n(A)}.$$

Now the identity

$$I - A R_\lambda(A) = -\lambda R_\lambda(A)$$

and (9.5) imply the result. ∎
Let $A$ be a normal matrix. Then

$$A = \sum_{k=1}^n \lambda_k(A)\, \Delta P_k.$$

Hence, $A\, \Delta P_k = \lambda_k(A)\, \Delta P_k$. Since $\Delta P_k \Delta P_j = 0$ for $j \ne k$, Theorem 2.9.1 gives us the equality

$$\lambda R_\lambda(A) = -\prod_{k=1}^n \big(I + (\lambda - \lambda_k(A))^{-1} \lambda_k(A)\, \Delta P_k\big).$$

But

$$I = \sum_{k=1}^n \Delta P_k.$$

The result is

$$\lambda R_\lambda(A) = -\sum_{k=1}^n \big[1 + (\lambda - \lambda_k(A))^{-1} \lambda_k(A)\big] \Delta P_k = -\lambda \sum_{k=1}^n \frac{\Delta P_k}{\lambda - \lambda_k(A)}.$$

Or

$$R_\lambda(A) = \sum_{k=1}^n \frac{\Delta P_k}{\lambda_k(A) - \lambda}.$$

We have obtained the well-known spectral representation for the resolvent of a normal matrix. Thus, Theorem 2.9.1 generalizes the spectral representation for the resolvent of a normal matrix.
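Theorem 2.9.1 is easy to test numerically: a complex Schur decomposition $A = UTU^*$ supplies a maximal chain of invariant projectors, with $\Delta P_k = u_k u_k^*$ built from the Schur vectors and $\lambda_k(A) = T_{kk}$. The sketch below (an illustration, not part of the original text) compares the ordered product with $\lambda R_\lambda(A)$.

```python
# Numerical check of the first multiplicative representation (Theorem 2.9.1).
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(3)
n = 5
A = rng.standard_normal((n, n))
T, U = schur(A, output='complex')         # A = U T U^*, T upper triangular
lam = 2.5 + 1.5j                          # a regular point of A
I = np.eye(n, dtype=complex)

prod = I.copy()
for k in range(n):                        # factors ordered left to right
    u = U[:, k:k + 1]
    dP = u @ u.conj().T                   # Delta P_k = u_k u_k^*
    prod = prod @ (I + A @ dP / (lam - T[k, k]))

lhs = lam * np.linalg.inv(A - lam * I)    # lambda R_lambda(A)
print("max deviation:", np.max(np.abs(lhs + prod)))  # lhs = -prod, ~1e-14
```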


2.10 The Second Multiplicative Representation of the Resolvent

Lemma 2.10.1 Let $V \in B(C^n)$ be a nilpotent operator and $P_k$, $k = 1,\dots,n$, be the maximal chain of its invariant projectors. Then

$$(I - V)^{-1} = \overrightarrow{\prod_{2\le k\le n}} (I + V \Delta P_k). \tag{10.1}$$

Proof: In fact, all the eigenvalues of $V$ are equal to zero, and $V \Delta P_1 = 0$. Now Theorem 2.9.1 gives us relation (10.1). ∎

Relation (10.1) allows us to prove the second multiplicative representation of the resolvent of $A$.

Theorem 2.10.2 Let $D$ and $V$ be the diagonal and nilpotent parts of $A \in B(C^n)$, respectively. Then

$$R_\lambda(A) = R_\lambda(D) \overrightarrow{\prod_{2\le k\le n}} \Big[I + \frac{V \Delta P_k}{\lambda - \lambda_k(A)}\Big] \quad (\lambda \notin \sigma(A)), \tag{10.2}$$

where $P_k$, $k = 1,\dots,n$, is the maximal chain of invariant projectors of $A$.


Proof: Due to (9.1),

$$R_\lambda(A) = (A - \lambda I)^{-1} = (D + V - \lambda I)^{-1} = R_\lambda(D)(I + V R_\lambda(D))^{-1}.$$

But $V R_\lambda(D)$ is a nilpotent operator. Take into account that

$$R_\lambda(D) \Delta P_k = (\lambda_k(A) - \lambda)^{-1} \Delta P_k.$$

Now (10.1) ensures the relation (10.2). ∎
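The second representation can be checked the same way; here $D$ and $V$ are recovered from the Schur form, and the product runs over $k = 2, \dots, n$. This is a numerical sketch under the same assumptions as the check after Section 2.9.

```python
# Numerical check of the second multiplicative representation (10.2).
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(4)
n = 5
A = rng.standard_normal((n, n))
T, U = schur(A, output='complex')
lam = 1.7 + 2.3j
I = np.eye(n, dtype=complex)

D = U @ np.diag(np.diag(T)) @ U.conj().T  # diagonal part of A
V = A - D                                 # nilpotent part of A
RD = np.linalg.inv(D - lam * I)           # R_lambda(D)

prod = I.copy()
for k in range(1, n):                     # k = 2, ..., n in the book's indexing
    u = U[:, k:k + 1]
    dP = u @ u.conj().T
    prod = prod @ (I + V @ dP / (lam - T[k, k]))

lhs = np.linalg.inv(A - lam * I)          # R_lambda(A)
print("max deviation:", np.max(np.abs(lhs - RD @ prod)))  # ~1e-14
```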


2.11 The First Relation between Determinants and Resolvents

In this section, a relation between the determinant and the resolvent of a matrix is derived. It improves the Carleman inequality, which we recall in Section 2.15.

Theorem 2.11.1 Let $A \in B(C^n)$ ($n > 1$) and let $I - A$ be invertible. Then

$$\|(I - A)^{-1}\|\ |\det\,(I - A)| \le \Big[1 + \frac{1}{n-1}\big(N^2(A) - 2\,\mathrm{Re\,Trace}\,(A) + 1\big)\Big]^{(n-1)/2}. \tag{11.1}$$

The proof of this theorem is presented below in this section.

Corollary 2.11.2 Let $A \in B(C^n)$. Then

$$\|(I\lambda - A)^{-1}\|\ |\det\,(\lambda I - A)| \le \Big[\frac{N^2(A) - 2\,\mathrm{Re}\,(\bar\lambda\,\mathrm{Trace}\,(A)) + n|\lambda|^2}{n-1}\Big]^{(n-1)/2} \quad (\lambda \notin \sigma(A)). \tag{11.2}$$

In particular, let $V$ be a nilpotent matrix. Then

$$\|(I\lambda - V)^{-1}\| \le \frac{1}{|\lambda|}\Big[1 + \frac{1}{n-1}\Big(1 + \frac{N^2(V)}{|\lambda|^2}\Big)\Big]^{(n-1)/2} \quad (\lambda \ne 0). \tag{11.3}$$

Indeed, inequality (11.2) is due to Theorem 2.11.1 with $\lambda^{-1}A$ instead of $A$, the equality $|\lambda|^2 \lambda^{-1} = \bar\lambda$ taken into account. If $V$ is nilpotent, then

$$|\det\,(\lambda I - V)|^2 = \det\,(I\lambda - V)\ \det\,(I\bar\lambda - V^*) = |\lambda|^{2n}.$$

Moreover, $\mathrm{Trace}\, V = 0$. So (11.2) implies (11.3).

To prove Theorem 2.11.1, we need the following lemma.


Lemma 2.11.3 Let $A \in B(C^n)$ be a positive definite Hermitian matrix: $A = A^* > 0$. Then

$$\|A^{-1}\|\ \det A \le \Big[\frac{\mathrm{Trace}\, A}{n-1}\Big]^{n-1}.$$

Proof: Without loss of generality assume that

$$\lambda_n(A) = \min_{k=1,\dots,n} \lambda_k(A).$$

Then $\|A^{-1}\| = \lambda_n^{-1}(A)$ and

$$\|A^{-1}\|\ \det A = \prod_{k=1}^{n-1} \lambda_k(A).$$

Hence, due to the inequality between the arithmetic and geometric mean values, we get

$$\|A^{-1}\|\ \det A \le \Big[(n-1)^{-1} \sum_{k=1}^{n-1} \lambda_k\Big]^{n-1} \le \big[(n-1)^{-1}\,\mathrm{Trace}\, A\big]^{n-1},$$

since $A$ is positive definite. As claimed. ∎

Proof of Theorem 2.11.1: For any $A \in B(C^n)$, the operator

$$B \equiv (I - A)(I - A^*)$$

is positive definite and

$$\det B = \det\,(I - A)(I - A^*) = \det\,(I - A)\ \det\,(I - A^*) = |\det\,(I - A)|^2.$$

Moreover,

$$\mathrm{Trace}\,[(I - A)(I - A^*)] = \mathrm{Trace}\, I - \mathrm{Trace}\,(A + A^*) + \mathrm{Trace}\,(A A^*) = n - 2\,\mathrm{Re\,Trace}\, A + N^2(A).$$

But

$$\|B^{-1}\| = \|(I - A^*)^{-1}(I - A)^{-1}\| = \|(I - A)^{-1}\|^2.$$

Now Lemma 2.11.3 yields

$$\|B^{-1}\|\ \det B = \big(\|(I - A)^{-1}\|\ |\det\,(I - A)|\big)^2 \le \Big[\frac{1}{n-1}\big(n + N^2(A) - 2\,\mathrm{Re\,Trace}\,(A)\big)\Big]^{n-1} = \Big[1 + \frac{1}{n-1}\big(N^2(A) - 2\,\mathrm{Re\,Trace}\,(A) + 1\big)\Big]^{n-1},$$

as claimed. ∎
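A direct numerical check of (11.1) is straightforward; the sketch below uses the spectral norm and the Frobenius norm for $N(\cdot)$.

```python
# Check of Theorem 2.11.1 for a random matrix.
import numpy as np

rng = np.random.default_rng(5)
n = 6
A = rng.standard_normal((n, n))
I = np.eye(n)
lhs = np.linalg.norm(np.linalg.inv(I - A), 2) * abs(np.linalg.det(I - A))
NA2 = np.linalg.norm(A, 'fro')**2
rhs = (1 + (NA2 - 2 * np.trace(A) + 1) / (n - 1))**((n - 1) / 2)
print(f"{lhs:.4e} <= {rhs:.4e}")
assert lhs <= rhs + 1e-9
```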


2.12 The Second Relation between Determinants and Resolvents

Without any loss of generality assume that for a regular $\lambda$ the relation

$$|\lambda - \lambda_1(A)| = \min_{k=1,\dots,n} |\lambda - \lambda_k(A)| \tag{12.1}$$

is valid. Recall that $g(A)$ and $\gamma_{n,k}$ are defined in Section 2.1.

Theorem 2.12.1 Let $A \in B(C^n)$ and let (12.1) hold. Then

$$\|R_\lambda(A)\|\ |\det(A - I\lambda)| \le \prod_{j=2}^n \max\{1, |\lambda_j(A) - \lambda|\}\ G(A), \tag{12.2}$$

where

$$G(A) = \sum_{k=0}^{n-1} g^k(A)\, \gamma_{n,k}.$$

The proof of this theorem is given in the next section.

Theorem 2.12.1 is exact. In fact, let, for instance, $A$ be a unitary operator and $\lambda = 0$. Since any unitary operator is normal and $|\lambda_k(A)| = 1$, then $G(A) = 1$ and, due to (12.2),

$$|\det A| \le \|A\|.$$

But the norm of a unitary operator equals 1. Thus, we arrive at the equality $|\det A| = \|A\| = 1$.

Let $r_s(A)$ be the spectral radius of a matrix $A$, and let $A$ be nonsingular. Put $A_0 = r_s^{-1}(A) A$. Due to Theorem 2.12.1 (with $\lambda = 0$, so that all $|\lambda_j(A_0)| \le 1$ and the product in (12.2) equals 1),

$$\|A_0^{-1}\|\ |\det(A_0)| \le G(A_0).$$

But $g(A_0) = r_s^{-1}(A)\, g(A)$. Therefore, the following result holds.

Corollary 2.12.2 Let $A \in B(C^n)$ be nonsingular. Then

$$\|A^{-1}\|\ |\det(A)| \le \sum_{k=0}^{n-1} r_s^{n-k-1}(A)\, g^k(A)\, \gamma_{n,k}.$$
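The following sketch evaluates Corollary 2.12.2 numerically, again assuming the Section 2.1 definitions of $g(A)$ and $\gamma_{n,k}$ used in the earlier sketches.

```python
# Check of Corollary 2.12.2: ||A^{-1}|| |det A| <= sum_k r_s^{n-k-1} g^k gamma_{n,k}.
import numpy as np
from math import comb, sqrt

def g(A):
    return sqrt(max(np.linalg.norm(A, 'fro')**2
                    - np.sum(np.abs(np.linalg.eigvals(A))**2), 0.0))

def gamma(n, k):
    return 1.0 if k == 0 else sqrt(comb(n - 1, k) / (n - 1)**k)

rng = np.random.default_rng(6)
n = 5
A = rng.standard_normal((n, n))
rs = np.max(np.abs(np.linalg.eigvals(A)))      # spectral radius r_s(A)
lhs = np.linalg.norm(np.linalg.inv(A), 2) * abs(np.linalg.det(A))
rhs = sum(rs**(n - k - 1) * g(A)**k * gamma(n, k) for k in range(n))
print(f"{lhs:.4e} <= {rhs:.4e}")
```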



2.13 Proof of Theorem 2.12.1

For brevity, put $\lambda_j(A) = \lambda_j$ ($j = 1,\dots,n$). Without any loss of generality assume that

$$|\lambda_1| = \min_{k=1,\dots,n} |\lambda_k|. \tag{13.1}$$

Lemma 2.13.1 Let $A$ be a nonsingular operator in $C^n$ and let condition (13.1) hold. Then, with the notations

$$\chi_k = \max\{1, |\lambda_k|\}$$

and

$$\omega(A) = \prod_{j=2}^n \chi_j,$$

the inequality

$$|\det\,(A)|\ \|A^{-1}\| \le \omega(A)\, G(A)$$

holds.

Proof: We have

$$\|A^{-1}\| = \|D^{-1}(I + V D^{-1})^{-1}\| \le \|(I + V D^{-1})^{-1}\|\ \|D^{-1}\|. \tag{13.2}$$

Clearly, $\|D^{-1}\| = |\lambda_1|^{-1}$. To estimate $\|(I + V D^{-1})^{-1}\|$, observe that $V D^{-1}$ is a nilpotent matrix and, due to Lemma 2.10.1,

$$(I + V D^{-1})^{-1} = \overrightarrow{\prod_{2\le k\le n}} (I - \lambda_k^{-1} V \Delta P_k),$$

since

$$D^{-1} \Delta P_k = \lambda_k^{-1} \Delta P_k.$$

This yields

$$(I + V D^{-1})^{-1} = \prod_{k=2}^n \lambda_k^{-1}\ K = \lambda_1 [\det(A)]^{-1} K, \tag{13.3}$$

where

$$K = \overrightarrow{\prod_{2\le k\le n}} (I\lambda_k - V \Delta P_k).$$

It is not hard to check that

$$|I\lambda_k - V \Delta P_k| \le (I + |V| \Delta P_k)\, \chi_k.$$

Here $|A|$ means the matrix $(|a_{ij}|)$ if $A = (a_{ij})$. The inequality $B \le C$ for non-negative matrices $B$, $C$ is understood in the natural (entrywise) sense. So we have

$$|K| \le \overrightarrow{\prod_{2\le k\le n}} \chi_k (I + |V| \Delta P_k) = \omega(A) \overrightarrow{\prod_{2\le k\le n}} (I + |V| \Delta P_k) = \omega(A)\,(I - |V|)^{-1}.$$

Taking into account that $N(V) = N(|V|)$ and $N(V) = g(A)$, by virtue of Theorem 2.1.1 we get the inequality

$$\|(I - |V|)^{-1}\| \le \sum_{k=0}^{n-1} \gamma_{n,k}\, N^k(V) = G(A).$$

That is,

$$\|K\| \le \omega(A)\, G(A).$$

Due to (13.2) and (13.3),

$$\|A^{-1}\| \le |\det(A)|^{-1}\, \omega(A)\, G(A).$$

As claimed. ∎

The assertion of Theorem 2.12.1 follows from the latter lemma with $A - \lambda I$ instead of $A$. ∎


2.14 An Additional Estimate for Resolvents

Theorem 2.14.1 Let $A \in B(C^n)$, $n > 1$. Then

$$\|(I\lambda - A)^{-1}\| \le \frac{1}{\rho(A,\lambda)}\Big[1 + \frac{1}{n-1}\Big(1 + \frac{g^2(A)}{\rho^2(A,\lambda)}\Big)\Big]^{(n-1)/2}$$

for any regular $\lambda$ of $A$.

Proof: Due to the triangular representation (see Section 1.7),

$$(I\lambda - A)^{-1} = (I\lambda - D - V)^{-1} = (I\lambda - D)^{-1}(I + B_\lambda)^{-1}, \tag{14.1}$$

where $B_\lambda := -V(I\lambda - D)^{-1}$. But the operator $B_\lambda$ is a nilpotent one. So Theorem 2.11.1 implies

$$\|(I + B_\lambda)^{-1}\| \le \Big[1 + \frac{1 + N^2(B_\lambda)}{n-1}\Big]^{(n-1)/2}. \tag{14.2}$$

Take into account that

$$N(B_\lambda) = N(V(I\lambda - D)^{-1}) \le \|(I\lambda - D)^{-1}\|\, N(V) = \rho^{-1}(D,\lambda)\, N(V).$$

Moreover, $\sigma(D)$ and $\sigma(A)$ coincide and, due to Lemma 2.3.2, $N(V) = g(A)$. Thus,

$$N(B_\lambda) \le \rho^{-1}(D,\lambda)\, g(A) = \rho^{-1}(A,\lambda)\, g(A).$$

Now (14.1) and (14.2) imply the required result. ∎
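Theorem 2.14.1 gives a bound that depends on $\lambda$ only through the distance $\rho(A,\lambda)$ to the spectrum; a minimal numerical sketch, with $g(A)$ as assumed in the earlier sketches:

```python
# Check of Theorem 2.14.1 at a regular point lambda.
import numpy as np
from math import sqrt

def g(A):
    return sqrt(max(np.linalg.norm(A, 'fro')**2
                    - np.sum(np.abs(np.linalg.eigvals(A))**2), 0.0))

rng = np.random.default_rng(7)
n = 6
A = rng.standard_normal((n, n))
lam = 2.0 + 2.0j
rho = np.min(np.abs(lam - np.linalg.eigvals(A)))     # rho(A, lambda)
lhs = np.linalg.norm(np.linalg.inv(lam * np.eye(n) - A), 2)
rhs = (1 / rho) * (1 + (1 + g(A)**2 / rho**2) / (n - 1))**((n - 1) / 2)
print(f"{lhs:.4e} <= {rhs:.4e}")
assert lhs <= rhs + 1e-9
```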


2.15 Notes

The quantity $g(A)$ was introduced by P. Henrici (1962) and, independently, by M.I. Gil' (1979b).

Theorem 2.1.1 was derived in the paper (Gil', 1979a) in a more general situation and was extended in (Gil', 1995) (see also (Gil', 1993b)).

Recall that Carleman derived the inequality

$$\|R_\lambda(A)\|\ \Big|\prod_{k=1}^n (1 - \lambda^{-1}\lambda_k(A)) \exp[\lambda^{-1}\lambda_k(A)]\Big| \le |\lambda| \exp[1 + N^2(A\lambda^{-1})/2],$$

cf. (Dunford and Schwartz, 1963, p. 1023).

Theorem 2.3.1 was published in (Gil', 1993a). It improves Schur's inequality

$$\sum_{k=1}^n |\lambda_k(A)|^2 \le N^2(A)$$

and Brown's inequality

$$\sum_{k=1}^n |\mathrm{Im}\,\lambda_k(A)|^2 \le N^2(A_I)$$

(see (Marcus and Minc, 1964)). A very interesting inequality for eigenvalues of matrices was derived in (Kress et al., 1974).

I.M. Gel'fand and G.E. Shilov (1958) established the estimate

$$\|f(A)\| \le \sum_{k=0}^{n-1} \sup_{\lambda\in co(A)} |f^{(k)}(\lambda)|\, (2\|A\|)^k.$$

About other estimates for the matrix exponent, see (Coppel, 1978).

Theorem 2.7.1 was derived in the paper (Gil', 1979b) in the case of Hilbert-Schmidt operators (see also (Gil', 1993b)).

Theorems 2.9.1 and 2.10.2 were published in a more general situation in (Gil', 1973).

Theorems 2.11.1 and 2.12.1 are probably new.


References

[1] Coppel, W.A. (1978). Dichotomies in Stability Theory. Lecture Notes in Mathematics, No. 629, Springer-Verlag, New York.

[2] Dunford, N. and Schwartz, J.T. (1963). Linear Operators, Part II: Spectral Theory. Interscience Publishers, New York, London.

[3] Gel'fand, I.M. and Shilov, G.E. (1958). Some Questions of Theory of Differential Equations. Nauka, Moscow (in Russian).

[4] Gil', M.I. (1973). On the representation of the resolvent of a nonselfadjoint operator by the integral with respect to a spectral function, Soviet Math. Dokl., 14, 1214-1217.

[5] Gil', M.I. (1979a). An estimate for norms of resolvent of completely continuous operators, Mathematical Notes, 26, 849-851.

[6] Gil', M.I. (1979b). Estimating norms of functions of a Hilbert-Schmidt operator (in Russian), Izvestiya VUZ, Matematika, 23, 14-19. English translation in Soviet Math., 23, 13-19.

[7] Gil', M.I. (1983). One estimate for resolvents of nonselfadjoint operators which are "near" to selfadjoint and to unitary ones, Mathematical Notes, 33, 81-84.

[8] Gil', M.I. (1992). On an estimate for the norm of a function of a quasihermitian operator, Studia Mathematica, 103 (1), 17-24.

[9] Gil', M.I. (1993a). On inequalities for eigenvalues of matrices, Linear Algebra and its Applications, 184, 201-206.

[10] Gil', M.I. (1993b). Estimates for norm of matrix-valued functions, Linear and Multilinear Algebra, 35, 65-73.

[11] Gil', M.I. (1995). Norm Estimations for Operator-valued Functions and Applications. Marcel Dekker, New York.

[12] Henrici, P. (1962). Bounds for iterates, inverses, spectral variation and field of values of nonnormal matrices. Numerische Mathematik, 4, 24-39.

[13] Kress, R., De Vries, H.L. and Wegmann, R. (1974). On non-normal matrices. Linear Algebra Appl., 8, 109-120.

[14] Marcus, M. and Minc, H. (1964). A Survey of Matrix Theory and Matrix Inequalities. Allyn and Bacon, Boston.
3. Invertibility of Finite Matrices

The present chapter deals with various types of invertibility conditions for finite matrices. In particular, we improve the classical Levy-Desplanques theorem and other well-known invertibility results for matrices that are close to triangular ones.


3.1 Preliminary Results

For a matrix

$$A = (a_{jk})_{j,k=1}^n \quad (n > 1),$$

put

$$V_+ = \begin{pmatrix} 0 & a_{12} & \dots & a_{1n} \\ 0 & 0 & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & 0 \end{pmatrix}, \qquad V_- = \begin{pmatrix} 0 & \dots & 0 & 0 \\ a_{21} & 0 & \dots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ a_{n1} & \dots & a_{n,n-1} & 0 \end{pmatrix}$$

and

$$D = \mathrm{diag}\,(a_{11}, a_{22}, \dots, a_{nn}).$$

So $A = D + V_+ + V_-$. In this chapter it is assumed that all the diagonal entries are nonzero, so

$$d_0 := \min_{k=1,\dots,n} |a_{kk}| > 0.$$

In the present section $\|\cdot\|$ is an arbitrary norm in $C^n$. Recall that $I$ is the unit operator. Set

$$W_\pm = D^{-1} V_\pm.$$




Theorem 3.1.1 With the notation

$$J(W_\pm) := \|(I - W_\pm)^{-1}\|,$$

let

$$\nu(A) \equiv \max\Big\{\frac{1}{J(W_+)} - \|W_-\|,\ \frac{1}{J(W_-)} - \|W_+\|\Big\} > 0. \tag{1.1}$$

Then $A$ is invertible and the inverse matrix satisfies the inequality

$$\|A^{-1}\| \le \frac{\|D^{-1}\|}{\nu(A)}. \tag{1.2}$$
To prove this theorem we need the following two simple lemmas.

Lemma 3.1.2 Let

$$\psi_0 \equiv \|(I + W_-)^{-1} W_+\| < 1.$$

Then the operator $B \equiv I + W_- + W_+$ is boundedly invertible. Moreover,

$$\|B^{-1}\| \le \frac{J(W_-)}{1 - \psi_0}.$$

Proof: Clearly, the operators $W_\pm$ are nilpotent, so the operators $I + W_\pm$ are invertible. We have

$$B = (I + W_-)(I + (I + W_-)^{-1} W_+).$$

Moreover,

$$\|(I + (I + W_-)^{-1} W_+)^{-1}\| \le \sum_{k=0}^{\infty} \|((I + W_-)^{-1} W_+)^k\| \le \sum_{k=0}^{\infty} \psi_0^k = (1 - \psi_0)^{-1}.$$

Thus,

$$\|B^{-1}\| \le \|(I + (I + W_-)^{-1} W_+)^{-1}\|\ \|(I + W_-)^{-1}\| \le (1 - \psi_0)^{-1} \|(I + W_-)^{-1}\|,$$

as claimed. ∎
Lemma 3.1.3 Let at least one of the following inequalities,

$$\|W_+\|\, J(W_-) < 1 \tag{1.3}$$

or

$$\|W_-\|\, J(W_+) < 1, \tag{1.4}$$

hold. Then relation (1.1) is valid. Moreover, the operator $B \equiv I + W_- + W_+$ is invertible and

$$\|B^{-1}\| \le \frac{1}{\nu(A)}.$$

Proof: If condition (1.3) holds, then Lemma 3.1.2 yields the inequality

$$\|B^{-1}\| \le \frac{J(W_-)}{1 - \|W_+\|\, J(W_-)} = \frac{1}{J^{-1}(W_-) - \|W_+\|}. \tag{1.5}$$

Interchanging $W_-$ and $W_+$, under condition (1.4) we get

$$\|B^{-1}\| \le \frac{1}{J^{-1}(W_+) - \|W_-\|}.$$

This relation and (1.5) yield the required result. ∎

Proof of Theorem 3.1.1: Clearly, condition (1.1) implies that at least one of the inequalities (1.3) or (1.4) holds. But

$$A = D + V_+ + V_- = D(I + W_+ + W_-) = DB.$$

Now the required result follows from Lemma 3.1.3. ∎
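A sketch of the resulting invertibility test, taken literally in the spectral norm: $J(W_\pm)$ is computed from the theorem's notation, and the routine returns the bound (1.2) when $\nu(A) > 0$ and None when the test is inconclusive. The example matrix is hypothetical.

```python
# Hypothetical implementation of the Theorem 3.1.1 test (spectral norm).
import numpy as np

def theorem_311_bound(A):
    n = A.shape[0]
    Dinv = np.linalg.inv(np.diag(np.diag(A)))
    Wp, Wm = Dinv @ np.triu(A, 1), Dinv @ np.tril(A, -1)   # W_+, W_-
    I = np.eye(n)
    J = lambda W: np.linalg.norm(np.linalg.inv(I - W), 2)  # J(W_+-)
    nu = max(1 / J(Wp) - np.linalg.norm(Wm, 2),
             1 / J(Wm) - np.linalg.norm(Wp, 2))
    if nu <= 0:
        return None                                        # test inconclusive
    return np.linalg.norm(Dinv, 2) / nu                    # bound (1.2)

A = np.array([[4.0, 1.0, 0.5],
              [0.2, 5.0, 1.0],
              [0.1, 0.3, 6.0]])
print("bound:", theorem_311_bound(A),
      " true ||A^{-1}||:", np.linalg.norm(np.linalg.inv(A), 2))
```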


3.2 $l^p$-Norms of Powers of Nilpotent Matrices

Recall that

$$\|h\|_p = \Big[\sum_{k=1}^n |h_k|^p\Big]^{1/p} \quad (h = (h_k) \in C^n;\ 1 < p < \infty).$$

In the present and next sections $\|A\|_p$ is the operator norm of a matrix $A$ with respect to the vector norm $\|\cdot\|_p$. Put

$$\gamma_{n,m,p} = \big[C_{n-1}^m (n-1)^{-m}\big]^{1/p} \quad (m = 1,\dots,n-1) \quad \text{and} \quad \gamma_{n,0,p} = 1,$$

where $C_n^m = n!/[(n-m)!\,m!]$ are the binomial coefficients. Note that

$$\gamma_{n,m,p}^p = \frac{(n-1)!}{(n-1)^m (n-1-m)!\, m!} = \frac{(n-1)\cdots(n-m)}{(n-1)^m\, m!}.$$

Hence,

$$\gamma_{n,m,p}^p \le \frac{1}{m!} \quad (m = 1,\dots,n-1).$$
m!
Lemma 3.2.1 For any upper triangular nilpotent matrix

$$V_+ = (a_{jk})_{j,k=1}^n \quad \text{with} \quad a_{jk} = 0 \ (1 \le k \le j \le n),$$

the inequality

$$\|V_+^m\|_p \le \gamma_{n,m,p}\, M_p^m(V_+) \quad (m = 1, \dots, n-1) \tag{2.1}$$

is valid with the notation

$$M_p(V_+) = \Big(\sum_{j=1}^{n-1}\Big[\sum_{k=j+1}^n |a_{jk}|^q\Big]^{p/q}\Big)^{1/p} \quad (p^{-1} + q^{-1} = 1).$$


Proof: For a natural $s = 1, \dots, n-1$, denote

$$\|x\|_{p,s} = \Big(\sum_{k=s}^n |x_k|^p\Big)^{1/p},$$

where $x_k$ are the coordinates of a vector $x \in C^n$. We can write out

$$\|V_+ x\|_{p,s}^p = \sum_{j=s}^{n-1} \Big|\sum_{k=j+1}^n a_{jk} x_k\Big|^p.$$

By Hölder's inequality,

$$\|V_+ x\|_{p,s}^p \le \sum_{j=s}^{n-1} h_j \|x\|_{p,j+1}^p, \tag{2.2}$$

where

$$h_j = \Big[\sum_{k=j+1}^n |a_{jk}|^q\Big]^{p/q}.$$

Similarly,

$$\|V_+^2 x\|_{p,s}^p = \sum_{j=s}^{n-1} \Big|\sum_{k=j+1}^n a_{jk} (V_+ x)_k\Big|^p \le \sum_{j=s}^{n-1} h_j \|V_+ x\|_{p,j+1}^p.$$

Here $(V_+ x)_k$ are the coordinates of $V_+ x$. Taking into account (2.2), we obtain

$$\|V_+^2 x\|_{p,s}^p \le \sum_{j=s}^{n-1} \sum_{k=j+1}^{n-1} h_j h_k \|x\|_{p,k+1}^p = \sum_{s\le j<k\le n-1} h_j h_k \|x\|_{p,k+1}^p.$$

Therefore,

$$\|V_+^2\|_p^p = \|V_+^2\|_{p,1}^p \le \sum_{1\le j<k\le n-1} h_j h_k.$$

Repeating these arguments, we arrive at the inequality

$$\|V_+^m\|_p^p \le \sum_{1\le k_1<k_2<\dots<k_m\le n-1} h_{k_1} \cdots h_{k_m}. \tag{2.3}$$

Since

$$\sum_{j=1}^{n-1} h_j = \sum_{j=1}^{n-1}\Big[\sum_{k=j+1}^n |a_{jk}|^q\Big]^{p/q} = M_p^p(V_+),$$

due to Lemma 2.4.1 and (2.3) we have

$$\|V_+^m\|_p^p \le M_p^{mp}(V_+)\, C_{n-1}^m (n-1)^{-m},$$

as claimed. ∎

Similarly we can prove

Lemma 3.2.2 For any lower triangular nilpotent matrix

$$V_- = (a_{jk})_{j,k=1}^n \quad \text{with} \quad a_{jk} = 0 \ (1 \le j \le k \le n),$$

the inequality

$$\|V_-^m\|_p \le \gamma_{n,m,p}\, M_p^m(V_-) \quad (m = 1, \dots, n-1)$$

is valid, where

$$M_p(V_-) = \Big(\sum_{j=2}^{n}\Big[\sum_{k=1}^{j-1} |a_{jk}|^q\Big]^{p/q}\Big)^{1/p}.$$

Consider the case $p = 2$. The Euclidean norm $\|\cdot\|_2$ is invariant with respect to an orthogonal basis. Moreover,

$$M_2^2(V_+) = \mathrm{Trace}\, V_+ V_+^* = \sum_{j=1}^{n-1}\sum_{k=j+1}^n |a_{jk}|^2 = N_2^2(V_+),$$

where $N_2(\cdot) = N(\cdot)$ is the Hilbert-Schmidt norm. Due to Lemma 3.2.1 we have

Corollary 3.2.3 Any $n\times n$ nilpotent matrix $V$ satisfies the inequalities

$$\|V^m\|_2 \le \gamma_{n,m,2}\, N_2^m(V) \quad (m = 1, \dots, n-1).$$

Thus, Lemma 3.2.1 gives us a new proof of Lemma 2.5.1, since $\gamma_{n,m,2} = \gamma_{n,m}$.
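For general $p$ the $l^p$ operator norm has no closed form, so the sketch below only produces a Monte-Carlo lower estimate of $\|V_+^m\|_p$ and compares it with the bound of Lemma 3.2.1; the inequality must of course survive the comparison.

```python
# Monte-Carlo illustration of Lemma 3.2.1 for p = 3.
import numpy as np
from math import comb

def gamma_p(n, m, p):
    return 1.0 if m == 0 else (comb(n - 1, m) / (n - 1)**m)**(1.0 / p)

def M_p(V, p):
    q = p / (p - 1.0)
    n = V.shape[0]
    return sum(np.sum(np.abs(V[j, j + 1:])**q)**(p / q)
               for j in range(n - 1))**(1.0 / p)

def opnorm_lower(B, p, trials=2000, seed=8):
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(trials):        # lower estimate of the l^p -> l^p norm
        x = rng.standard_normal(B.shape[1])
        best = max(best, np.linalg.norm(B @ x, p) / np.linalg.norm(x, p))
    return best

n, p = 6, 3.0
V = np.triu(np.random.default_rng(9).standard_normal((n, n)), k=1)
for m in range(1, n):
    lower = opnorm_lower(np.linalg.matrix_power(V, m), p)
    bound = gamma_p(n, m, p) * M_p(V, p)**m
    print(f"m={m}: ||V^m||_p >= {lower:.4e}, bound = {bound:.4e}")
```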


3.3 Invertibility in the Norm $\|\cdot\|_p$ ($1 < p < \infty$)

Recall that $A$, $d_0$ and $V_\pm$ are defined in Section 3.1; $\gamma_{n,m,p}$ and $M_p(V_\pm)$ are defined in the previous section. In addition, $W_\pm = D^{-1} V_\pm$. So

$$M_p(W_-) = \Big(\sum_{j=2}^{n}\Big[\sum_{k=1}^{j-1} \frac{|a_{jk}|^q}{|a_{jj}|^q}\Big]^{p/q}\Big)^{1/p}$$

and

$$M_p(W_+) = \Big(\sum_{j=1}^{n-1}\Big[\sum_{k=j+1}^{n} \frac{|a_{jk}|^q}{|a_{jj}|^q}\Big]^{p/q}\Big)^{1/p} \quad (p^{-1} + q^{-1} = 1).$$


Theorem 3.3.1 With the notation

$$J_p(W_\pm) := \sum_{k=0}^{n-1} \gamma_{n,k,p}\, M_p^k(W_\pm),$$

let

$$\nu_p(A) := \max\Big\{\frac{1}{J_p(W_+)} - \|W_-\|_p,\ \frac{1}{J_p(W_-)} - \|W_+\|_p\Big\} > 0.$$

Then $A$ is invertible and the inverse matrix satisfies the inequality

$$\|A^{-1}\|_p \le \frac{1}{d_0\, \nu_p(A)}.$$

Proof: Clearly,

$$\|(I - W_\pm)^{-1}\|_p \le \sum_{k=0}^{n-1} \|W_\pm^k\|_p.$$

Lemmas 3.2.1 and 3.2.2 imply

$$\|(I - W_\pm)^{-1}\|_p \le J_p(W_\pm).$$

Now Theorem 3.1.1 yields the required result. ∎
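For $p = 2$ the operator norm is computable exactly, so Theorem 3.3.1 turns into an effective test; a sketch (the example matrix is hypothetical):

```python
# Sketch of the Theorem 3.3.1 criterion for p = 2 (M_2 is a Frobenius norm).
import numpy as np
from math import comb

def gamma_p(n, m, p):
    return 1.0 if m == 0 else (comb(n - 1, m) / (n - 1)**m)**(1.0 / p)

def theorem_331_bound(A, p=2.0):
    n = A.shape[0]
    Dinv = np.diag(1.0 / np.diag(A))
    Wp, Wm = Dinv @ np.triu(A, 1), Dinv @ np.tril(A, -1)
    M = lambda W: np.linalg.norm(W, 'fro')          # M_2(W_+-) for p = q = 2
    Jp = lambda W: sum(gamma_p(n, k, p) * M(W)**k for k in range(n))
    nu = max(1 / Jp(Wp) - np.linalg.norm(Wm, 2),
             1 / Jp(Wm) - np.linalg.norm(Wp, 2))
    if nu <= 0:
        return None                                 # criterion inconclusive
    return 1.0 / (np.min(np.abs(np.diag(A))) * nu)  # bound on ||A^{-1}||_2

A = np.array([[5.0, 1.0, 0.3],
              [0.2, 6.0, 0.8],
              [0.1, 0.2, 7.0]])
print("bound:", theorem_331_bound(A),
      " true:", np.linalg.norm(np.linalg.inv(A), 2))
```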



3.4 Invertibility in the Norm $\|\cdot\|_\infty$

For a matrix $A = (a_{jk})_{j,k=1}^n$ take the norm

$$\|A\|_\infty \equiv \max_{j=1,\dots,n} \sum_{k=1}^n |a_{jk}|.$$

Recall that $d_0$ is defined in Section 3.1. Under the condition $d_0 > 0$, introduce the notation

$$\tilde v_k := \max_{j=1,\dots,k-1} |a_{jk}| \quad (k = 2,\dots,n);$$

$$\tilde w_k := \max_{j=k+1,\dots,n} |a_{jk}| \quad (k = 1,\dots,n-1);$$

$$m_{up}(A) := \prod_{k=2}^n \Big(1 + \frac{\tilde v_k}{|a_{kk}|}\Big) \quad \text{and} \quad m_{low}(A) := \prod_{k=1}^{n-1} \Big(1 + \frac{\tilde w_k}{|a_{kk}|}\Big).$$

Theorem 3.4.1 Let the condition

$$m_{up}(A)\, m_{low}(A) < m_{up}(A) + m_{low}(A) \tag{4.1}$$

be fulfilled. Then the matrix $A$ is invertible and the inverse matrix satisfies the inequality

$$\|A^{-1}\|_\infty \le \frac{m_{up}(A)\, m_{low}(A)}{(m_{up}(A) + m_{low}(A) - m_{up}(A)\, m_{low}(A))\, d_0}. \tag{4.2}$$


The proof of this theorem is divided into a series of lemmas, which are presented in the next section. Note that condition (4.1) is equivalent to the following one:

$$\theta(A) := (m_{up}(A) - 1)(m_{low}(A) - 1) < 1. \tag{4.3}$$

Inequality (4.2) can be written as

$$\|A^{-1}\|_\infty \le \frac{m_{up}(A)\, m_{low}(A)}{d_0\, (1 - \theta(A))}. \tag{4.4}$$

If the matrix $A$ is triangular and has nonzero diagonal entries, then (4.1) obviously holds.
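The test (4.3)/(4.4) is easily implemented; the sketch below computes $\theta(A)$ and, when it is less than 1, the bound on $\|A^{-1}\|_\infty$ (the example matrix is hypothetical).

```python
# Sketch of the Theorem 3.4.1 test via theta(A) and the bound (4.4).
import numpy as np

def theorem_341_bound(A):
    n = A.shape[0]
    d = np.abs(np.diag(A))
    v = [np.max(np.abs(A[:k, k])) for k in range(1, n)]       # v~_k, k = 2..n
    w = [np.max(np.abs(A[k + 1:, k])) for k in range(n - 1)]  # w~_k, k = 1..n-1
    m_up = np.prod([1 + v[k - 1] / d[k] for k in range(1, n)])
    m_low = np.prod([1 + w[k] / d[k] for k in range(n - 1)])
    theta = (m_up - 1) * (m_low - 1)
    if theta >= 1:
        return None                                 # condition (4.1) fails
    return m_up * m_low / (d.min() * (1 - theta))   # bound (4.4)

A = np.array([[4.0, 0.5, 0.2],
              [0.3, 5.0, 0.4],
              [0.1, 0.2, 6.0]])
print("bound:", theorem_341_bound(A),
      " true ||A^{-1}||_inf:", np.linalg.norm(np.linalg.inv(A), np.inf))
```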


3.5 Proof of Theorem 3.4.1

Recall that $V_\pm$ and $D$ are introduced in Section 3.1, and $W_\pm = V_\pm D^{-1}$. In this section $\|A\|$ is the operator norm of $A$ with respect to an arbitrary vector norm.

Lemma 3.5.1 Let the condition

$$\tilde\theta_0 \equiv \Big\|\sum_{j,k=1}^{n-1} (-1)^{k+j}\, W_-^k W_+^j\Big\| < 1 \tag{5.1}$$

hold. Then $A$ is invertible and the inverse matrix satisfies the inequality

$$\|A^{-1}\| \le \|D^{-1}\|\ \|(I + W_-)^{-1}\|\ \|(I + W_+)^{-1}\|\ (1 - \tilde\theta_0)^{-1}. \tag{5.2}$$

Proof: Clearly,

$$A = D + V_- + V_+ = (I + W_- + W_+)\, D = \big[(I + W_-)(I + W_+) - W_- W_+\big] D.$$

But $W_+$ and $W_-$ are nilpotent:

$$W_-^n = W_+^n = 0. \tag{5.3}$$

So the operators $I + W_-$ and $I + W_+$ are invertible:

$$(I + W_-)^{-1} = \sum_{k=0}^{n-1} (-1)^k W_-^k, \qquad (I + W_+)^{-1} = \sum_{k=0}^{n-1} (-1)^k W_+^k. \tag{5.4}$$

Thus

$$A = (I + W_-)\big[I - (I + W_-)^{-1} W_- W_+ (I + W_+)^{-1}\big](I + W_+)\, D.$$

Thanks to (5.4) we have

$$(I + W_-)^{-1} W_- = \sum_{k=1}^{n-1} (-1)^{k-1} W_-^k, \qquad W_+ (I + W_+)^{-1} = \sum_{k=1}^{n-1} (-1)^{k-1} W_+^k.$$

So

$$A = (I + W_-)\Big[I - \sum_{j,k=1}^{n-1} (-1)^{k+j} W_-^k W_+^j\Big](I + W_+)\, D.$$

Therefore, if (5.1) holds, then $A$ is invertible. Moreover,

$$A^{-1} = D^{-1}(I + W_+)^{-1}\Big[I - \sum_{j,k=1}^{n-1} (-1)^{k+j} W_-^k W_+^j\Big]^{-1}(I + W_-)^{-1}. \tag{5.5}$$

Condition (5.1) yields

$$\Big\|\Big[I - \sum_{j,k=1}^{n-1} (-1)^{k+j} W_-^k W_+^j\Big]^{-1}\Big\| \le (1 - \tilde\theta_0)^{-1}.$$

Now inequality (5.2) is due to (5.5). ∎
Denote

$$\tilde m(V_+) = \prod_{k=2}^n (1 + \tilde v_k) \quad \text{and} \quad \tilde m(V_-) = \prod_{k=1}^{n-1} (1 + \tilde w_k).$$

Lemma 3.5.2 The inequalities

$$\|(I - V_+)^{-1}\|_\infty \le \tilde m(V_+) \tag{5.6}$$

and

$$\|(I - V_-)^{-1}\|_\infty \le \tilde m(V_-) \tag{5.7}$$

are valid.
Proof: Let $Q_k$ be the projectors onto the standard basis:

$$Q_k h = (h_1, h_2, \dots, h_k, 0, 0, \dots, 0) \quad (k = 1,\dots,n), \quad Q_0 = 0,$$

for an arbitrary vector $h = (h_1, \dots, h_n) \in C^n$. Clearly, the $Q_k$ project onto the invariant subspaces of $V_+$. So according to Lemma 2.10.1,

$$(I - V_+)^{-1} = \overrightarrow{\prod_{2\le k\le n}} (I + V_+ \Delta Q_k), \quad \text{where } \Delta Q_k = Q_k - Q_{k-1}. \tag{5.8}$$

It is not hard to check that

$$\|V_+ \Delta Q_k\|_\infty = \tilde v_k.$$

Now inequality (5.6) follows from (5.8). Further, define a projector $\tilde Q_k$ by

$$\tilde Q_k h = (0, 0, \dots, 0, h_{n-k+1}, h_{n-k+2}, \dots, h_n) \quad (k = 1,\dots,n), \quad \tilde Q_0 = 0.$$

A simple calculation shows that the $\tilde Q_k$ project onto invariant subspaces of $V_-$. So according to Lemma 2.10.1,

$$(I - V_-)^{-1} = \overrightarrow{\prod_{2\le k\le n}} (I + V_- \Delta \tilde Q_k) \quad (\Delta \tilde Q_k = \tilde Q_k - \tilde Q_{k-1}). \tag{5.9}$$

It is not hard to check that $\|V_- \Delta \tilde Q_k\|_\infty = \tilde w_{n-k+1}$. Now inequality (5.7) follows from (5.9). ∎


Lemma 3.5.3 The inequalities

$$\Big\|\sum_{k=1}^{n-1} (-1)^k V_+^k\Big\|_\infty \le \tilde m(V_+) - 1 \quad \text{and} \quad \Big\|\sum_{k=1}^{n-1} (-1)^k V_-^k\Big\|_\infty \le \tilde m(V_-) - 1 \tag{5.10}$$

are valid.

Proof: Let $B = (b_{jk})_{j,k=1}^n$ be a nonnegative matrix with the property

$$Bh \ge h \tag{5.11}$$

for any nonnegative $h \in C^n$. Then $b_{jj} \ge 1$ ($j = 1,\dots,n$). Hence,

$$\|B - I\|_\infty = \max_{j=1,\dots,n} \sum_{k=1}^n [b_{jk} - \delta_{jk}] = \|B\|_\infty - 1. \tag{5.12}$$

Here $\delta_{jk}$ is the Kronecker symbol. Furthermore, since $V_+$ is nilpotent,

$$\Big\|\sum_{k=1}^{n-1} (-1)^k V_+^k\Big\|_\infty \le \Big\|\sum_{k=1}^{n-1} |V_+|^k\Big\|_\infty = \|(I - |V_+|)^{-1} - I\|_\infty,$$

where $|V_+|$ is the matrix whose entries are the absolute values of the entries of $V_+$. Moreover, clearly,

$$\sum_{k=0}^{n-1} |V_+|^k h \ge h$$

for any nonnegative $h \in C^n$. So according to (5.11) and (5.12),

$$\Big\|\sum_{k=1}^{n-1} (-1)^k V_+^k\Big\|_\infty \le \|(I - |V_+|)^{-1} - I\|_\infty = \|(I - |V_+|)^{-1}\|_\infty - 1.$$

Since
