$$m(V_+) = m(|V_+|),$$

equality (5.6) with $\tilde V_+ = |V_+|$ yields the first inequality (5.10). Similarly, the second inequality (5.10) can be proved. □
Proof of Theorem 3.4.1: Due to Lemma 3.5.2,

$$\|(I + W_-)^{-1}\|_\infty \le m_{up}(A), \qquad \|(I + W_+)^{-1}\|_\infty \le m_{low}(A).$$

Lemma 3.5.3 yields

$$\Big\|\sum_{k=1}^{n-1} (-1)^k W_-^k\Big\|_\infty \le m_{up}(A) - 1$$

and

$$\Big\|\sum_{k=1}^{n-1} (-1)^k W_+^k\Big\|_\infty \le m_{low}(A) - 1.$$

Hence,

$$\Big\|\sum_{j,k=1}^{n-1} (-1)^{k+j} W_-^k W_+^j\Big\|_\infty \le \theta(A).$$

Now the required result follows directly from Lemma 3.5.1. □

3.6 Positive Invertibility of Matrices

For $h = (h_k),\ w = (w_k) \in \mathbb{R}^n$, we write $h \ge (>)\ w$ if $h_k \ge (>)\ w_k$ $(k = 1, ..., n)$. A matrix $A$ is positive (non-negative), $A > (\ge)\ 0$, if all its entries are positive (non-negative). For matrices $A, B$ we write $A > (\ge)\ B$ if $A - B > (\ge)\ 0$.

Again, $V_+$, $V_-$ and $D$ are the upper triangular, lower triangular, and diagonal parts of the matrix $A = (a_{jk})_{j,k=1}^n$, respectively (see Section 3.1).

A matrix $A = (a_{jk})_{j,k=1}^n$ is called a Z-matrix if the conditions

$$a_{jk} \le 0 \ \ (j \ne k) \quad \text{and} \quad a_{kk} > 0 \quad (j, k = 1, ..., n) \tag{6.1}$$

hold. That is, $D > 0$ and $V_\pm \le 0$.

Lemma 3.6.1 Let $A$ be a Z-matrix, and let condition (4.1) hold. Then $A$ is positively invertible and the inverse operator satisfies the inequalities (4.2) and

$$A^{-1} \ge D^{-1}. \tag{6.2}$$

Proof: As was proved in the previous section, condition (4.1) implies the invertibility of $A$. Moreover, relation (5.5) holds. But $W_\pm \le 0$, so

$$(-1)^{k+j} W_-^k W_+^j \ge 0,$$

and thus

$$\Big[I - \sum_{j,k=1}^{n-1} (-1)^{k+j} W_-^k W_+^j\Big]^{-1} \ge 0,$$

since

$$\Big\|\sum_{j,k=1}^{n-1} (-1)^{k+j} W_-^k W_+^j\Big\| \le \theta_0 < 1.$$

In addition,

$$(I + W_\pm)^{-1} = \sum_{k=0}^{n-1} (-1)^k W_\pm^k \ge 0.$$

Now (5.5) implies (6.2). Inequality (4.2) is due to Theorem 3.4.1. □

3.7 Positive Matrix-Valued Functions

We will call $A$ an anti-Hurwitz matrix if its spectrum $\sigma(A)$ lies in the open right half-plane:

$$\beta(A) := \inf \mathrm{Re}\, \sigma(A) > 0.$$

For an anti-Hurwitz matrix $A$, define the matrix-valued function $F(A)$ by

$$F(A) = \int_0^\infty e^{-At} f(t)\, dt, \tag{7.1}$$

where

$$e^{-\beta(A)t} f(t) \in L^1[0, \infty). \tag{7.2}$$

Lemma 3.7.1 Let $A$ be an anti-Hurwitz Z-matrix. In addition, let relations (7.1), (7.2) hold, and let $f(t) \ge 0$ for almost all $t \ge 0$. Then

$$F(A) \ge F(D) \ge 0. \tag{7.3}$$

Proof: Since $D > 0$ and $V_\pm \le 0$, we have

$$e^{-At} = \lim_{n \to \infty} (I - n^{-1}A)^{nt} = \lim_{n \to \infty} (I - n^{-1}D - n^{-1}V)^{nt} \ge \lim_{n \to \infty} (I - n^{-1}D)^{nt} = e^{-Dt},$$

where $I$ is the unit matrix and $V = V_+ + V_-$. Thus

$$F(A) = \int_0^\infty e^{-At} f(t)\, dt \ge \int_0^\infty e^{-Dt} f(t)\, dt = F(D),$$

as claimed. □
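As a quick numerical illustration of the lemma (not from the text), one can take $f(t) = e^{-t}$, for which $F(A) = \int_0^\infty e^{-At} e^{-t}\, dt = (A + I)^{-1}$, and check (7.3) entrywise. The sample matrix below is a hypothetical anti-Hurwitz Z-matrix:

```python
import numpy as np

# Hypothetical example of an anti-Hurwitz Z-matrix: positive diagonal,
# non-positive off-diagonal entries, spectrum in the right half-plane.
A = np.array([[2.0, -0.5],
              [-0.3, 3.0]])
D = np.diag(np.diag(A))          # diagonal part of A
I = np.eye(2)

F_A = np.linalg.inv(A + I)       # F(A) for the choice f(t) = e^{-t}
F_D = np.linalg.inv(D + I)       # F(D) for the same f

assert np.all(np.linalg.eigvals(A).real > 0)   # A is anti-Hurwitz
assert np.all(F_A >= F_D - 1e-12)              # F(A) >= F(D) entrywise
assert np.all(F_D >= 0)                        # F(D) >= 0
```

The middle assertion is exactly inequality (7.3) for this choice of $f$.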

In particular, let $A$ be an anti-Hurwitz matrix, and

$$f(t) \equiv \sum_{k=1}^\infty \frac{a_k t^{s_k}}{\Gamma(s_k + 1)} \quad (s_k = \mathrm{const} > 0,\ a_k = \mathrm{const} \ge 0,\ k = 1, 2, ...),$$

where $\Gamma(\cdot)$ is the Euler Gamma function. Then

$$F(A) = \sum_{k=1}^\infty a_k A^{-s_k - 1}, \tag{7.4}$$

provided the series converges. The following functions are examples of the functions defined by (7.4):

$$A^{-\nu}, \qquad A^{-\nu} e^{b_0 A^{-1}} \quad (b_0 = \mathrm{const} > 0;\ 0 < \nu \le 1).$$
Lemma 3.7.2 Let $A$ be a Z-matrix. Then $A$ is anti-Hurwitz and satisfies inequality (6.2) if and only if it is positively invertible.

Proof: Let $A$ be anti-Hurwitz. Then due to Lemma 3.7.1,

$$A^{-1} = \int_0^\infty e^{-At}\, dt \ge D^{-1} > 0.$$

Conversely, let $A$ be positively invertible. Put $T = -V_+ - V_-$. Then for any $\lambda$ with $\mathrm{Re}\, \lambda \ge 0$,

$$|(A + \lambda)h| = |(D + \lambda - T)h| \ge |(D + \lambda)h| - T|h| \ge D|h| - T|h| = A|h|.$$

Here $|h|$ denotes the vector whose coordinates are the absolute values of the coordinates of $h$. This proves the result. □
Lemma 3.6.1 and the previous lemma imply

Corollary 3.7.3 Let $F$ be defined by (7.1) with a non-negative function $f$ satisfying (7.2). In addition, let $A$ be a Z-matrix and let condition (4.1) hold. Then relation (7.3) is valid.

3.8 Notes

As was mentioned, although excellent computer software is now available for eigenvalue computation, new results on invertibility and spectrum inclusion regions for finite matrices are still important, since computers are of limited use, in particular, for the analysis of matrices that depend on parameters. So the problem of finding invertibility conditions and spectrum inclusion regions for finite matrices continues to attract the attention of many specialists, cf. (Brualdi, 1982), (Farid, 1995 and 1998), (Gudkov, 1967), (Li and Tsatsomeros, 1997), (Tam et al., 1997) and the references given therein.

Let $A = (a_{jk})$ be a complex $n \times n$-matrix ($n \ge 2$) with nonzero diagonal: $a_{kk} \ne 0$ ($k = 1, ..., n$). Put

$$P_j = \sum_{k=1,\ k \ne j}^n |a_{jk}|.$$

The well-known Levy-Desplanques theorem states that if $|a_{jj}| > P_j$ ($j = 1, ..., n$), then $A$ is nonsingular. This theorem has been improved in many ways. For example, each of the following is known to be a sufficient condition for the non-singularity of $A$:

(i) $|a_{ii}||a_{jj}| > P_i P_j$ ($i, j = 1, ..., n$) (Marcus and Minc, 1964, p. 149).

(ii) $|a_{jj}| \ge P_j$ ($j = 1, ..., n$), provided that at least one inequality is strict and $A$ is irreducible (Marcus and Minc, 1964, p. 147).

(iii) $|a_{jj}| \ge r_j m_j$ ($j = 1, ..., n$), where the $r_j$ are positive numbers satisfying $\sum_{k=1}^n (1 + r_k)^{-1} \le 1$ and $m_j = \max_{k \ne j} |a_{jk}|$ (see (Bailey and Crabtree, 1969)).

(iv) $|a_{jj}| > P_j^{\epsilon} Q_j^{1-\epsilon}$ ($j = 1, ..., n$), where $0 \le \epsilon \le 1$ and $Q_j = \sum_{k=1,\ k \ne j}^n |a_{kj}|$ (Marcus and Minc, 1964, p. 150).
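The classical tests above are straightforward to implement directly. A minimal pure-Python sketch (the helper names and the sample matrix are illustrative, not from the text):

```python
# Diagonal-dominance nonsingularity tests; the matrix values are illustrative.
def off_diag_row_sums(a):
    """P_j = sum over k != j of |a_jk|, for each row j."""
    n = len(a)
    return [sum(abs(a[j][k]) for k in range(n) if k != j) for j in range(n)]

def levy_desplanques(a):
    """Strict diagonal dominance: |a_jj| > P_j for every row j."""
    P = off_diag_row_sums(a)
    return all(abs(a[j][j]) > P[j] for j in range(len(a)))

def condition_i(a):
    """Test (i): |a_ii||a_jj| > P_i P_j for all pairs i != j."""
    n = len(a)
    P = off_diag_row_sums(a)
    return all(abs(a[i][i]) * abs(a[j][j]) > P[i] * P[j]
               for i in range(n) for j in range(n) if i != j)

a = [[4.0, 1.0, 0.5],
     [0.5, 3.0, 1.0],
     [0.25, 0.5, 5.0]]
assert levy_desplanques(a)   # |4| > 1.5, |3| > 1.5, |5| > 0.75
assert condition_i(a)        # (i) is weaker, so it holds as well
```

Any matrix passing `levy_desplanques` also passes `condition_i`, reflecting the fact that (i) relaxes the row-wise condition to a pairwise one.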

Theorems 3.1.1, 3.3.1 and 3.4.1 yield new invertibility conditions which improve the results mentioned above when the matrices under consideration are close to triangular ones. Moreover, they give us estimates for different norms of the inverse matrices. Note that Theorems 3.1.1, 2.1.1 and 2.14.1 allow us to derive additional invertibility conditions in terms of the Euclidean norm.

The material in this chapter is based on the papers (Gil', 1997), (Gil', 1998) and (Gil', 2001).

References

Bailey, D. W. and Crabtree, D. E. (1969). Bounds for determinants, Linear Algebra and Its Applications, 2, 303-309.

Brualdi, R. A. (1982). Matrices, eigenvalues and directed graphs, Linear and Multilinear Algebra, 11, 143-165.

Collatz, L. (1966). Functional Analysis and Numerical Mathematics, Academic Press, New York and London.

Farid, F. O. (1995). Criteria for invertibility of diagonally dominant matrices, Linear Algebra and Its Applications, 215, 63-93.

Farid, F. O. (1998). Topics on a generalization of Gershgorin's theorem, Linear Algebra and Its Applications, 268, 91-116.

Gil', M. I. (1997). A nonsingularity criterion for matrices, Linear Algebra and Its Applications, 253, 79-87.

Gil', M. I. (1998). On positive invertibility of matrices, Positivity, 2, 165-170.

Gil', M. I. (2001). Invertibility and positive invertibility of matrices, Linear Algebra and Its Applications, 327, 95-104.

Gudkov, V. V. (1967). On a certain test for nonsingularity of matrices, Latvian Math. Yearbook, 1965, 385-390 (Math. Reviews, 33, review 1323).

Horn, R. A. and Johnson, Ch. R. (1991). Topics in Matrix Analysis, Cambridge University Press, Cambridge.

Krasnosel'skii, M. A., Lifshits, J. and Sobolev, A. (1989). Positive Linear Systems. The Method of Positive Operators, Heldermann Verlag, Berlin.

Li, B. and Tsatsomeros, M. J. (1997). Doubly diagonally dominant matrices, Linear Algebra and Its Applications, 261, 221-235.

Marcus, M. and Minc, H. (1964). A Survey of Matrix Theory and Matrix Inequalities, Allyn and Bacon, Boston.

Tam, B. S., Yang, S. and Zhang, X. (1997). Invertibility of irreducible matrices, Linear Algebra and Its Applications, 259, 39-70.
4. Localization of Eigenvalues of Finite Matrices

The present chapter is concerned with perturbations of finite matrices and bounds for their eigenvalues. In particular, we improve the classical Gershgorin result for matrices which are "close" to triangular ones. In addition, we derive upper and lower estimates for the spectral radius. Under some restrictions, these estimates improve the Frobenius inequalities. Moreover, we present new conditions for the stability of matrices, which supplement the Rohrbach theorem.

4.1 Definitions and Preliminaries

In this chapter, $\|\cdot\|$ is an arbitrary norm in $\mathbb{C}^n$, $A$ and $B$ are $n \times n$-matrices having eigenvalues $\lambda_1(A), ..., \lambda_n(A)$ and $\lambda_1(B), ..., \lambda_n(B)$, respectively, and

$$q = \|A - B\|.$$

We recall some well-known definitions from matrix perturbation theory (see (Stewart and Sun, 1990)). The spectral variation of $B$ with respect to $A$ is

$$sv_A(B) := \max_i \min_j |\lambda_i(B) - \lambda_j(A)|.$$

The Hausdorff distance between the spectra of $A$ and $B$ is

$$hd(A, B) := \max\{sv_A(B),\ sv_B(A)\}.$$

The matching distance between the eigenvalues of $A$ and $B$ is

$$md(A, B) := \min_\pi \max_i |\lambda_{\pi(i)}(B) - \lambda_i(A)|,$$

where $\pi$ is taken over all permutations of $\{1, 2, ..., n\}$. Recall also that $\sigma(A)$ denotes the spectrum of $A$.

M.I. Gil': LNM 1830, pp. 49-63, 2003.
© Springer-Verlag Berlin Heidelberg 2003
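The three distances above can be computed directly from their definitions. A small sketch (sample matrices illustrative; `md` enumerates all permutations, so it is meant only for small $n$):

```python
import numpy as np
from itertools import permutations

def sv(A, B):
    """Spectral variation of B with respect to A: max_i min_j |l_i(B) - l_j(A)|."""
    lA, lB = np.linalg.eigvals(A), np.linalg.eigvals(B)
    return max(min(abs(b - a) for a in lA) for b in lB)

def hd(A, B):
    """Hausdorff distance between the spectra of A and B."""
    return max(sv(A, B), sv(B, A))

def md(A, B):
    """Matching distance: min over permutations p of max_i |l_{p(i)}(B) - l_i(A)|."""
    lA, lB = np.linalg.eigvals(A), np.linalg.eigvals(B)
    return min(max(abs(lB[p[i]] - lA[i]) for i in range(len(lA)))
               for p in permutations(range(len(lA))))

A = np.diag([1.0, 2.0, 3.0])
B = np.diag([1.1, 2.0, 3.2])
assert abs(sv(A, B) - 0.2) < 1e-12
assert abs(hd(A, B) - 0.2) < 1e-12
assert abs(md(A, B) - 0.2) < 1e-12
assert sv(A, B) <= md(A, B) + 1e-12   # sv never exceeds md
```

The last assertion reflects the general relation $sv_A(B) \le md(A, B)$, since any matching of eigenvalues furnishes, for each $\lambda_i(B)$, a nearby $\lambda_j(A)$.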

Lemma 4.1.1 For any $\mu \in \sigma(B)$, we have either $\mu \in \sigma(A)$, or

$$q \|R_\mu(A)\| \ge 1. \tag{1.1}$$

Proof: Suppose that the inequality

$$q \|R_\mu(A)\| < 1 \tag{1.2}$$

holds. We can write $R_\mu(A) - R_\mu(B) = R_\mu(B)(B - A)R_\mu(A)$. This yields

$$\|R_\mu(A) - R_\mu(B)\| \le \|R_\mu(B)\|\, q\, \|R_\mu(A)\|. \tag{1.3}$$

Thus, (1.2) implies

$$\|R_\mu(B)\| \le \|R_\mu(A)\|\,(1 - q\|R_\mu(A)\|)^{-1}.$$

That is, $\mu$ is a regular point of $B$. This contradiction proves the result. □

Lemma 4.1.2 Assume that

$$\|R_\lambda(A)\| \le \varphi(\rho^{-1}(A, \lambda)) \quad \text{for all regular } \lambda \text{ of } A, \tag{1.4}$$

where $\varphi(x)$ is a monotonically increasing non-negative function of a non-negative variable $x$, such that $\varphi(0) = 0$ and $\varphi(\infty) = \infty$. Then the inequality

$$sv_A(B) \le z(\varphi, q) \tag{1.5}$$

is true, where $z(\varphi, q)$ is the extreme right-hand (positive) root of the equation

$$1 = q\varphi(1/z). \tag{1.6}$$

Proof: Due to Lemma 4.1.1 and condition (1.4),

$$1 \le q\varphi(\rho^{-1}(A, \lambda)) \quad \text{for all } \lambda \in \sigma(B).$$

Since $\varphi(x)$ monotonically increases, $z(\varphi, q)$ is the unique positive root of (1.6) and $\rho(A, \lambda) \le z(\varphi, q)$. Thus, inequality (1.5) is valid. □

4.2 Perturbations of Multiplicities and Matching Distance

Put

$$\Omega(c, r) \equiv \{z \in \mathbb{C} : |z - c| \le r\} \quad (c \in \mathbb{C},\ r > 0).$$

Lemma 4.2.1 Under condition (1.4), let $A$ have an eigenvalue $\lambda(A)$ of algebraic multiplicity $\nu$, and let the distance from $\lambda(A)$ to the rest of $\sigma(A)$ be equal to $2d$, i.e.

$$\mathrm{distance}\{\lambda(A),\ \sigma(A) \setminus \lambda(A)\} = 2d. \tag{2.1}$$

In addition, for a positive number $a \le d$, let

$$q\varphi(1/a) < 1. \tag{2.2}$$

Then in $\Omega(\lambda(A), a)$ the operator $B$ has eigenvalues whose total algebraic multiplicity is equal to $\nu$.

Proof: This result is a particular case of the well-known Theorem 3.18 of (Kato, 1966, p. 215). □

Since $\varphi$ is a nondecreasing function, comparing (2.2) with (1.6), we get

Corollary 4.2.2 Let $A$ have an eigenvalue $\lambda(A)$ of algebraic multiplicity $\nu$. In addition, under conditions (1.4) and (2.1), let the extreme right-hand root $z(\varphi, q)$ of equation (1.6) satisfy the inequality

$$z(\varphi, q) \le d. \tag{2.3}$$

Then in $\Omega(\lambda(A), z(\varphi, q))$ the operator $B$ has eigenvalues whose total algebraic multiplicity is equal to $\nu$.

Let $\theta_1, ..., \theta_{n_1}$ ($n_1 \le n$) be the distinct eigenvalues of $A$, i.e.,

$$\tilde d(A) := \min\{|\theta_j - \theta_k| : j \ne k;\ j, k = 1, ..., n_1\} > 0 \tag{2.4}$$

and

$$\sum_{k=1}^{n_1} \nu_k = n,$$

where $\nu_k$ is the multiplicity of the eigenvalue $\theta_k$. By virtue of Lemma 4.2.1, we arrive at the following result.

Lemma 4.2.3 Let condition (1.4) hold. In addition, for a positive number $a \le \tilde d(A)$, let condition (2.2) be valid. Then all the eigenvalues of the operator $B$ lie in the set

$$\cup_{k=1}^{n_1} \Omega(\theta_k, a),$$

and thus $md(A, B) \le a$.

Moreover, Corollary 4.2.2 implies

Corollary 4.2.4 Under conditions (1.4) and (2.4), let the extreme right-hand root $z(\varphi, q)$ of equation (1.6) satisfy the inequality $z(\varphi, q) \le \tilde d(A)$. Then all the eigenvalues of the operator $B$ lie in the set

$$\cup_{k=1}^{n_1} \Omega(\theta_k, z(\varphi, q)),$$

and thus $md(A, B) \le z(\varphi, q)$.

4.3 Perturbations of Eigenvectors and Eigenprojectors

Under condition (1.4), let $A$ have an eigenvalue $\lambda(A)$ of algebraic multiplicity $\nu$ and let (2.1) hold. Let $\partial\Omega$ be the boundary of $\Omega(\lambda(A), d)$:

$$\partial\Omega \equiv \{z \in \mathbb{C} : |z - \lambda(A)| = d\}.$$

Put

$$P(A) = -\frac{1}{2\pi i} \int_{\partial\Omega} R_\lambda(A)\, d\lambda \tag{3.1}$$

and

$$P(B) = -\frac{1}{2\pi i} \int_{\partial\Omega} R_\lambda(B)\, d\lambda. \tag{3.2}$$

That is, both $P(A)$ and $P(B)$ are the projectors onto the eigenspaces of $A$ and $B$, respectively, corresponding to the points of the spectra which lie in $\Omega(\lambda(A), d)$.

Lemma 4.3.1 Let $A$ satisfy condition (1.4) and have an eigenvalue $\lambda(A)$ of algebraic multiplicity $\nu$, such that conditions (2.1) and

$$q\varphi(1/d) < 1 \tag{3.3}$$

hold. Then $\dim P(A) = \dim P(B)$. Moreover,

$$\|P(A) - P(B)\| \le \frac{q\, d\, \varphi^2(1/d)}{1 - q\varphi(1/d)}. \tag{3.4}$$

Proof: Thanks to Lemma 4.2.1, $\dim P(A) = \dim P(B)$. From (1.3) and (1.4) it follows that

$$\|R_\lambda(B)\| \le \|R_\lambda(A)\|\,(1 - q\varphi(1/d))^{-1} \le \varphi(1/d)(1 - q\varphi(1/d))^{-1} \quad (\lambda \in \partial\Omega)$$

and

$$\|P(A) - P(B)\| \le \frac{1}{2\pi} \int_{\partial\Omega} \|R_\lambda(A) - R_\lambda(B)\|\, |d\lambda| \le \frac{1}{2\pi} \int_{\partial\Omega} \|R_\lambda(B)\|\, q\varphi(1/d)\, |d\lambda| \le \frac{q\varphi^2(1/d)\, d}{1 - q\varphi(1/d)},$$

as claimed. □

We will say that an eigenvalue of a linear operator is a simple eigenvalue if its algebraic multiplicity is equal to one. The eigenvector corresponding to a simple eigenvalue will be called a simple eigenvector.

Lemma 4.3.2 Suppose $A$ has a simple eigenvalue $\lambda(A)$, such that relations (1.4), (2.1) and

$$q[d\varphi^2(1/d) + \varphi(1/d)] < 1 \tag{3.5}$$

hold. Then for the eigenvector $e$ of $A$ corresponding to $\lambda(A)$ with $\|e\| = 1$, there exists a simple eigenvector $f$ of $B$ with $\|f\| = 1$, such that

$$\|e - f\| \le 2\delta(1 - \delta)^{-1},$$

where

$$\delta \equiv \frac{q\, d\, \varphi^2(1/d)}{1 - q\varphi(1/d)}.$$

Proof: Firstly, note that condition (3.5) implies relations (3.3) and $\delta < 1$, and $B$ has in $\Omega(\lambda(A), d)$ a simple eigenvalue $\lambda(B)$ due to Lemma 4.2.1. Let $P(A)$ and $P(B)$ be defined by (3.1) and (3.2). Due to Lemma 4.3.1,

$$\|P(A) - P(B)\| \le \delta < 1.$$

Consequently, $P(B)e \ne 0$, since $P(A)e = e$. Thanks to the relation

$$B P(B)e = \lambda(B) P(B)e,$$

$P(B)e$ is an eigenvector of $B$. Let $N = \|P(B)e\|$. Then $f \equiv N^{-1} P(B)e$ is a normalized eigenvector of $B$. So

$$e - f = P(A)e - N^{-1}P(B)e = e - N^{-1}e + N^{-1}(P(A) - P(B))e.$$

But

$$N \ge \|P(A)e\| - \|(P(A) - P(B))e\| \ge 1 - \delta.$$

Hence $N^{-1} \le (1 - \delta)^{-1}$ and

$$\|e - f\| \le (N^{-1} - 1)\|e\| + N^{-1}\|P(A) - P(B)\| \le (1 - \delta)^{-1} - 1 + (1 - \delta)^{-1}\delta = 2\delta(1 - \delta)^{-1},$$

as claimed. □

4.4 Perturbations of Matrices in the Euclidean Norm

4.4.1 Perturbations of eigenvalues

We recall that the quantities $\|\cdot\|_2$, $g(A)$ and $\gamma_{n,k}$ are defined in Sections 1.2 and 2.1. Put

$$q_2 := \|A - B\|_2.$$

The norm for matrices is understood in the sense of the operator norm.

Theorem 4.4.1 Let $A$ and $B$ be $n \times n$-matrices. Then

$$sv_A(B) \le z(q_2, A), \tag{4.1}$$

where $z(q_2, A)$ is the extreme right-hand (unique non-negative) root of the algebraic equation

$$z^n = q_2 \sum_{j=0}^{n-1} \gamma_{n,j}\, z^{n-j-1} g^j(A). \tag{4.2}$$

Proof: Theorem 2.1.1 gives us the inequality

$$\|R_\lambda(A)\|_2 \le \sum_{k=0}^{n-1} \frac{g^k(A)\,\gamma_{n,k}}{\rho^{k+1}(A, \lambda)} \quad (\lambda \notin \sigma(A)). \tag{4.3}$$

Rewrite (4.2) as

$$1 = q_2 \sum_{j=0}^{n-1} \frac{\gamma_{n,j}\, g^j(A)}{z^{j+1}}.$$

Now the required result is due to Lemma 4.1.2 and (4.3). □

Put

$$w_n = \sum_{j=0}^{n-1} \gamma_{n,j}.$$

Setting $z = g(A)y$ in (4.2) and applying Lemma 1.6.1, we have the estimate

$$z(q_2, A) \le q_2 w_n, \quad \text{provided} \quad q_2 w_n \ge g(A), \tag{4.4}$$

and

$$z(q_2, A) \le g^{1-1/n}(A)\,[q_2 w_n]^{1/n}, \quad \text{provided} \quad q_2 w_n \le g(A). \tag{4.5}$$

Now Theorem 4.4.1 ensures the following result.

Corollary 4.4.2 Let condition (4.4) hold. Then $sv_A(B) \le q_2 w_n$. If condition (4.5) holds, then

$$sv_A(B) \le g^{1-1/n}(A)\,[q_2 w_n]^{1/n}.$$
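To illustrate how the bound (4.1) can be evaluated, here is a sketch that forms the polynomial (4.2) and extracts its largest real root with `numpy.roots`. Both the formula used for $g(A)$, namely $g(A) = (\|A\|_F^2 - \sum_k |\lambda_k(A)|^2)^{1/2}$, and the coefficient values $\gamma_{2,0} = \gamma_{2,1} = 1$ are assumptions standing in for the definitions of Sections 1.2 and 2.1:

```python
import numpy as np

def g(A):
    # assumed definition: g(A) = (||A||_F^2 - sum |eigenvalue|^2)^{1/2}
    ev = np.linalg.eigvals(A)
    return np.sqrt(max(np.linalg.norm(A, 'fro')**2 - np.sum(np.abs(ev)**2), 0.0))

def sv_bound(A, B, gamma):
    """Largest nonnegative root z of z^n = q2 * sum_j gamma[j] z^(n-j-1) g^j(A)."""
    n = A.shape[0]
    q2 = np.linalg.norm(A - B, 2)
    coeffs = [1.0] + [-q2 * gamma[j] * g(A)**j for j in range(n)]
    roots = np.roots(coeffs)
    real = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real >= 0]
    return max(real) if real else 0.0

A = np.array([[1.0, 0.3],
              [0.0, 2.0]])
B = A + 0.05 * np.ones((2, 2))
z = sv_bound(A, B, gamma=[1.0, 1.0])   # placeholder gamma values

# the spectral variation itself, for comparison
sv = max(min(abs(b - a) for a in np.linalg.eigvals(A))
         for b in np.linalg.eigvals(B))
assert sv <= z
```

For this nearly triangular pair the computed root dominates the actual spectral variation by a comfortable margin, as Theorem 4.4.1 predicts.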

4.4.2 Perturbations of multiplicities and matching distance

For a positive scalar variable $x$, set

$$G_n(x) \equiv \sum_{j=0}^{n-1} \gamma_{n,j}\, x^{j+1} g^j(A).$$

Lemma 4.4.3 Let $A$ have an eigenvalue $\lambda(A)$ of algebraic multiplicity $\nu$ and let (2.1) hold. In addition, for a positive number $a < d$, let

$$q_2 G_n(1/a) < 1. \tag{4.6}$$

Then in $\Omega(\lambda(A), a)$ the operator $B$ has eigenvalues whose total algebraic multiplicity is equal to $\nu$.

Proof: This result follows from (4.3) and Lemma 4.2.1. □

Inequality (4.3) and Corollary 4.2.2 imply

Corollary 4.4.4 Let $A$ have an eigenvalue $\lambda(A)$ of algebraic multiplicity $\nu$. In addition, under condition (2.1), let the extreme right-hand root $z(q_2, A)$ of equation (4.2) satisfy the inequality $z(q_2, A) \le d$. Then in $\Omega(\lambda(A), z(q_2, A))$ the operator $B$ has eigenvalues whose total algebraic multiplicity is equal to $\nu$.

Let $\theta_1, ..., \theta_{n_1}$ ($n_1 \le n$) be the distinct eigenvalues of $A$, again. By virtue of (4.3) and Lemma 4.2.3 we get the following result.

Lemma 4.4.5 Under (2.4), for a positive number $a < \tilde d(A)$, let condition (4.6) be valid. Then all the eigenvalues of the operator $B$ lie in the set

$$\cup_{k=1}^{n_1} \Omega(\theta_k, a),$$

and thus $md(A, B) \le a$.

In addition, due to (4.3) and Corollary 4.2.4, we get

Corollary 4.4.6 Under (2.4), let the extreme right-hand root $z(q_2, A)$ of equation (4.2) satisfy the inequality $z(q_2, A) \le \tilde d(A)$. Then all the eigenvalues of the operator $B$ lie in the set

$$\cup_{k=1}^{n_1} \Omega(\theta_k, z(q_2, A)),$$

and thus $md(A, B) \le z(q_2, A)$.

To estimate $z(q_2, A)$ we can apply Lemma 1.6.1.

4.4.3 Perturbations of eigenvectors and eigenprojectors in the Euclidean norm

Let $A$ have an eigenvalue $\lambda(A)$ of algebraic multiplicity $\nu$ and let (2.1) hold. Define $P(A)$ and $P(B)$ by (3.1) and (3.2). Recall that $G_n$ and $q_2$ are defined in the previous two subsections.

Lemma 4.4.7 Let $A$ and $B$ be linear operators in $\mathbb{C}^n$. In addition, let $A$ have an eigenvalue $\lambda(A)$ of algebraic multiplicity $\nu$, such that conditions (2.1) and

$$q_2 G_n(1/d) < 1$$

hold. Then $\dim P(A) = \dim P(B)$. Moreover,

$$\|P(A) - P(B)\|_2 \le \frac{q_2\, d\, G_n^2(1/d)}{1 - q_2 G_n(1/d)}.$$

This result is due to (4.3) and Lemma 4.3.1.

Lemma 4.4.8 Let $A$ and $B$ be linear operators in $\mathbb{C}^n$. Suppose $A$ has a simple eigenvalue $\lambda(A)$, such that relations (2.1) and

$$q_2 [G_n^2(1/d)\, d + G_n(1/d)] < 1$$

hold. Then for the eigenvector $e$ of $A$ corresponding to $\lambda(A)$ with $\|e\|_2 = 1$, there exists a simple eigenvector $f$ of $B$ with $\|f\|_2 = 1$, such that

$$\|e - f\|_2 \le 2\delta_2(1 - \delta_2)^{-1},$$

where

$$\delta_2 \equiv \frac{q_2\, d\, G_n^2(1/d)}{1 - q_2 G_n(1/d)}.$$

This result is due to (4.3) and Lemma 4.3.2.

4.5 Upper Bounds for Eigenvalues in Terms of the Euclidean Norm

Let $A$ be an $n \times n$-matrix with entries $a_{jk}$ ($j, k = 1, ..., n$). Recall that $D$, $V_+$ and $V_-$ are the diagonal part, upper nilpotent part and lower nilpotent part of $A$, respectively (see Section 3.1), and

$$w_n = \sum_{j=0}^{n-1} \gamma_{n,j},$$

where the $\gamma_{n,j}$ are defined in Section 2.1. Assume that $V_+ \ne 0$ and $V_- \ne 0$, and denote

$$\mu_2(A) = w_n \min\{N^{-1}(V_+)\|V_-\|_2,\ N^{-1}(V_-)\|V_+\|_2\}$$

and

$$\delta_2(A) = w_n^{1/n} \min\{N^{1-1/n}(V_+)\|V_-\|_2^{1/n},\ N^{1-1/n}(V_-)\|V_+\|_2^{1/n}\} \quad \text{if } \mu_2(A) \le 1,$$

and $\delta_2(A) = w_n \min\{\|V_-\|_2,\ \|V_+\|_2\}$ if $\mu_2(A) > 1$.

Theorem 4.5.1 All the eigenvalues of the matrix $A = (a_{jk})_{j,k=1}^n$ lie in the union of the discs

$$\{\lambda \in \mathbb{C} : |\lambda - a_{kk}| \le \delta_2(A)\}, \quad k = 1, ..., n. \tag{5.1}$$

Proof: Take $A_+ = D + V_+$. Since $A_+$ is triangular,

$$\sigma(A_+) = \sigma(D) = \{a_{kk},\ k = 1, ..., n\}. \tag{5.2}$$

Since $A - A_+ = V_-$, Corollary 4.4.2 implies

$$sv_{A_+}(A) \le N^{1-1/n}(V_+)\,[\|V_-\|_2\, w_n]^{1/n}, \tag{5.3}$$

provided that $w_n\|V_-\|_2 \le N(V_+)$. Replace $A_+$ by $A_- = D + V_-$. Repeating the above procedure, we get

$$sv_{A_-}(A) \le N^{1-1/n}(V_-)\,[\|V_+\|_2\, w_n]^{1/n},$$

provided that $w_n\|V_+\|_2 \le N(V_-)$. In addition, $\sigma(A_-) = \sigma(D)$. These relations together with (5.2) and (5.3) complete the proof in the case $\mu_2(A) \le 1$. The case $\mu_2(A) > 1$ is considered similarly. □
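A numerical check of Theorem 4.5.1 for $n = 2$ (a sketch: $N(\cdot)$ is taken to be the Frobenius norm, and $w_2 = \gamma_{2,0} + \gamma_{2,1}$ is computed with the placeholder values $\gamma_{2,0} = \gamma_{2,1} = 1$; both are assumptions about Sections 1.2 and 2.1):

```python
import numpy as np

# A nearly upper triangular matrix, so delta_2(A) should be small.
A = np.array([[1.0, 0.5],
              [0.01, 2.0]])
n = A.shape[0]
w_n = 2.0                                  # placeholder gamma values summed

V_plus = np.triu(A, 1)                     # upper nilpotent part
V_minus = np.tril(A, -1)                   # lower nilpotent part
N_plus = np.linalg.norm(V_plus, 'fro')     # N(V_+), assumed Frobenius
N_minus = np.linalg.norm(V_minus, 'fro')
s_plus = np.linalg.norm(V_plus, 2)         # ||V_+||_2
s_minus = np.linalg.norm(V_minus, 2)

mu2 = w_n * min(s_minus / N_plus, s_plus / N_minus)
assert mu2 <= 1
delta2 = w_n**(1/n) * min(N_plus**(1 - 1/n) * s_minus**(1/n),
                          N_minus**(1 - 1/n) * s_plus**(1/n))

# every eigenvalue lies in some disc {|lam - a_kk| <= delta_2(A)}
for lam in np.linalg.eigvals(A):
    assert min(abs(lam - A[k, k]) for k in range(n)) <= delta2 + 1e-12
```

Here $\delta_2(A) = 0.1$, while the eigenvalues sit within $0.005$ of the diagonal entries, so the inclusion (5.1) holds with room to spare.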

4.6 Lower Bounds for the Spectral Radius

Let $A = (a_{jk})$ be an $n \times n$-matrix. Recall that $r_s(A) = \sup|\sigma(A)|$ is the spectral radius, $\alpha(A) = \sup \mathrm{Re}\,\sigma(A)$, and $r_l(A) = \inf|\sigma(A)|$ is the inner (lower) spectral radius. Let $V_+$, $V_-$ be the upper and lower triangular parts of $A$ (see Section 3.1). Denote by $z(\nu)$ the unique positive root of the equation

$$z^n = \nu(A) \sum_{k=0}^{n-1} g^k(A)\,\gamma_{n,k}\, z^{n-k-1}, \tag{6.1}$$

where

$$\nu(A) = \min\{\|V_-\|_2,\ \|V_+\|_2\}.$$

Theorem 4.6.1 Let $A = (a_{jk})_{j,k=1}^n$ be an $n \times n$-matrix. Then for any $k = 1, ..., n$, there is an eigenvalue $\mu_0$ of $A$ such that

$$|\mu_0 - a_{kk}| \le z(\nu). \tag{6.2}$$

Moreover, the following inequalities are true:

$$r_s(A) \ge \max\{0,\ \max_{k=1,...,n} |a_{kk}| - z(\nu)\}, \tag{6.3}$$

$$r_l(A) \le \min_{k=1,...,n} |a_{kk}| + z(\nu), \tag{6.4}$$

and

$$\alpha(A) \ge \max_{k=1,...,n} \mathrm{Re}\, a_{kk} - z(\nu). \tag{6.5}$$

Proof: Take $A_+ = D + V_+$. So relation (5.2) holds and $A - A_+ = V_-$. Theorem 4.4.1 gives us the inequality

$$sv_A(A_+) \le z_-, \tag{6.6}$$

where $z_-$ is the extreme right-hand root of the equation

$$z^n = \|V_-\|_2 \sum_{j=0}^{n-1} \gamma_{n,j}\, z^{n-j-1} g^j(A).$$

Replace $A_+$ by $A_- = D + V_-$. Repeating the same arguments, we get

$$sv_A(A_-) \le z_+, \tag{6.7}$$

where $z_+$ is the extreme right-hand root of the equation

$$z^n = \|V_+\|_2 \sum_{j=0}^{n-1} \gamma_{n,j}\, z^{n-j-1} g^j(A).$$

Relations (6.6) and (6.7) imply (6.2). Furthermore, take $\mu$ in such a way that $|\mu| = r_s(D)$. Then due to (6.2), there is $\mu_0 \in \sigma(A)$ such that $|\mu_0| \ge r_s(D) - z(\nu)$. Hence, (6.3) follows. Similarly, inequality (6.4) can be proved.

Now take $\mu$ in such a way that $\mathrm{Re}\,\mu = \alpha(D)$. Due to (6.2), for some $\mu_0 \in \sigma(A)$ we have $|\mathrm{Re}\,\mu_0 - \alpha(D)| \le z(\nu)$, and hence $\mathrm{Re}\,\mu_0 \ge \alpha(D) - z(\nu)$. Thus, inequality (6.5) is also proved. The proof is complete. □
Again put

$$w_n = \sum_{j=0}^{n-1} \gamma_{n,j}.$$

Setting $z = g(A)y$ in (6.1) and applying Lemma 1.6.1, we obtain the estimate $z(\nu) \le \delta_n(A)$, where

$$\delta_n(A) = \begin{cases} \nu(A)\, w_n & \text{if } \nu(A) w_n \ge g(A), \\ g^{1-1/n}(A)\,[\nu(A) w_n]^{1/n} & \text{if } \nu(A) w_n \le g(A). \end{cases}$$

Now Theorem 4.6.1 ensures the following result.

Corollary 4.6.2 For a matrix $A = (a_{jk})_{j,k=1}^n$, the inequalities

$$r_s(A) \ge \max_{k=1,...,n} |a_{kk}| - \delta_n(A), \tag{6.8}$$

$$r_l(A) \le \min_{k=1,...,n} |a_{kk}| + \delta_n(A),$$

$$\alpha(A) \ge \max_{k=1,...,n} \mathrm{Re}\, a_{kk} - \delta_n(A) \tag{6.9}$$

are valid.
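Continuing the sketch from Section 4.5 (same assumptions: Frobenius norm for $g(A)$, placeholder values $\gamma_{2,0} = \gamma_{2,1} = 1$), inequality (6.8) can be checked numerically:

```python
import numpy as np

A = np.array([[1.0, 0.5],
              [0.01, 2.0]])
n = A.shape[0]
w_n = 2.0                                  # placeholder gamma values summed

ev = np.linalg.eigvals(A)
# assumed definition of g(A), as in the earlier sketch
g = np.sqrt(max(np.linalg.norm(A, 'fro')**2 - np.sum(np.abs(ev)**2), 0.0))
nu = min(np.linalg.norm(np.tril(A, -1), 2),
         np.linalg.norm(np.triu(A, 1), 2))

if nu * w_n >= g:
    delta_n = nu * w_n
else:
    delta_n = g**(1 - 1/n) * (nu * w_n)**(1/n)

r_s = np.max(np.abs(ev))                   # spectral radius
assert r_s >= np.max(np.abs(np.diag(A))) - delta_n   # inequality (6.8)
```

For this nearly triangular matrix, $\delta_n(A) \approx 0.1$, so the lower bound $\max_k |a_{kk}| - \delta_n(A) \approx 1.9$ is close to the true spectral radius $\approx 2.005$.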

4.7 Additional Bounds for Eigenvalues

In the present section, instead of the Euclidean norm we use the norm $\|\cdot\|_\infty$. Besides, we derive additional bounds for eigenvalues.

Let $A$ be an $n \times n$-matrix with entries $a_{jk}$ ($j, k = 1, ..., n$), again. Denote

$$q_{up} = \max_{j=1,...,n-1} \sum_{k=j+1}^n |a_{jk}|, \qquad q_{low} = \max_{j=2,...,n} \sum_{k=1}^{j-1} |a_{jk}|.$$

Recall that $\tilde v_k$, $\tilde w_k$, $m_{up}(A)$ and $m_{low}(A)$ are defined in Section 3.4. Without loss of generality, assume that

$$\min\{q_{low}\, m_{up}(A),\ q_{up}\, m_{low}(A)\} \le 1. \tag{7.1}$$
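The quantities $q_{up}$ and $q_{low}$ are just the row-sum norms of the strictly upper and strictly lower triangular parts of $A$; this identification is used in the proof in Section 4.8. A pure-Python sketch with an illustrative matrix:

```python
# Illustrative 3x3 matrix; values chosen so the triangular parts are small.
a = [[2.0, 0.3, 0.1],
     [0.2, 3.0, 0.4],
     [0.05, 0.1, 4.0]]
n = len(a)

# q_up: largest strictly-upper row sum; q_low: largest strictly-lower row sum
q_up = max(sum(abs(a[j][k]) for k in range(j + 1, n)) for j in range(n - 1))
q_low = max(sum(abs(a[j][k]) for k in range(j)) for j in range(1, n))

# row-sum (infinity) norms of the strictly triangular parts of a
norm_upper = max(sum(abs(a[j][k]) for k in range(n) if k > j) for j in range(n))
norm_lower = max(sum(abs(a[j][k]) for k in range(n) if k < j) for j in range(n))
assert q_up == norm_upper      # both equal 0.4 here
assert q_low == norm_lower     # both equal 0.2 here
```

So $q_{low} = \|V_-\|_\infty$ and $q_{up} = \|V_+\|_\infty$, which is how these quantities enter Lemma 4.8.2 below.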

Theorem 4.7.1 Under condition (7.1), all the eigenvalues of $A$ lie in the union of the discs

$$\{\lambda \in \mathbb{C} : |\lambda - a_{kk}| \le \delta_\infty(A)\}, \quad k = 1, ..., n,$$

where

$$\delta_\infty(A) := \sqrt[n]{\min\{q_{low}\, m_{up}(A),\ q_{up}\, m_{low}(A)\}}.$$

The proof of this theorem is presented in the next section.

If $A$ is a triangular matrix, then Theorem 4.7.1 gives the exact relation

$$\sigma(A) = \{a_{kk},\ k = 1, ..., n\}.$$

Moreover, we have

Corollary 4.7.2 Under condition (7.1), the spectral radius $r_s(A)$ of $A$ satisfies the inequality

$$r_s(A) \le \max_{k=1,...,n} |a_{kk}| + \delta_\infty(A).$$

Furthermore, consider the quantity

$$s(A) \equiv \max_{i,j} |\lambda_i(A) - \lambda_j(A)|.$$

According to Theorem 4.7.1, for arbitrary $\lambda_i(A)$, $\lambda_j(A)$, there are $a_{kk}$, $a_{mm}$ such that

$$|\lambda_i(A) - a_{kk}| \le \delta_\infty(A), \qquad |\lambda_j(A) - a_{mm}| \le \delta_\infty(A).$$

Consequently,

$$|\lambda_i(A) - \lambda_j(A)| \le |\lambda_i(A) - a_{kk}| + |\lambda_j(A) - a_{mm}| + s(D) \le s(D) + 2\delta_\infty(A) \quad (i, j \le n),$$

where $s(D) = \max_{i,j} |a_{ii} - a_{jj}|$. So we have

Corollary 4.7.3 Under condition (7.1), $s(A) \le s(D) + 2\delta_\infty(A)$.

According to Theorem 4.7.1, $\mathrm{Re}\,\lambda_k(A) \le \max_j \mathrm{Re}\, a_{jj} + \delta_\infty(A)$. We thus get

Corollary 4.7.4 Under (7.1), let $\max_j \mathrm{Re}\, a_{jj} + \delta_\infty(A) < 0$. Then $A$ is a Hurwitz matrix.

These results supplement the well-known theorems of Hirsch and Rohrbach (Marcus and Minc, 1964, Sections III.1.3 and III.3.1).

4.8 Proof of Theorem 4.7.1

Recall that $V_\pm$ and $D$ are defined in Section 3.1.

Lemma 4.8.1 The following estimate is true:

$$\|R_\lambda(D + V_+)\|_\infty \le \rho^{-1}(D, \lambda) \prod_{k=2}^n \Big[1 + \frac{\tilde v_k}{|a_{kk} - \lambda|}\Big],$$

where $\rho(D, \lambda) = \min_{k=1,...,n} |a_{kk} - \lambda|$.

Proof: Clearly, $R_\lambda(D + V_+) = R_\lambda(D)(I + V_+ R_\lambda(D))^{-1}$. Due to Lemma 2.10.1,

$$\|(I + V_+ R_\lambda(D))^{-1}\|_\infty \le \prod_{k=2}^n \Big[1 + \frac{\tilde v_k}{|a_{kk} - \lambda|}\Big],$$

since $V_+ R_\lambda(D)$ is nilpotent and its entries are $a_{jk}(a_{kk} - \lambda)^{-1}$. Hence the required result follows. □

Lemma 4.8.2 Each eigenvalue $\mu$ of the matrix $A = (a_{jk})$ satisfies the inequality

$$1 \le q_{low}\, \rho^{-1}(D, \mu) \prod_{k=2}^n \Big(1 + \frac{\tilde v_k}{|a_{kk} - \mu|}\Big).$$

Proof: Take $B = V_+ + D$. We have

$$\|A - B\|_\infty = \|V_-\|_\infty = \max_{j=2,...,n} \sum_{k=1}^{j-1} |a_{jk}| = q_{low}.$$

Lemma 4.1.1 and Lemma 4.8.1 imply

$$1 \le q_{low}\, \|R_\mu(D + V_+)\|_\infty \le q_{low}\, \rho^{-1}(D, \mu) \prod_{k=2}^n \Big(1 + \frac{\tilde v_k}{|a_{kk} - \mu|}\Big)$$

for any $\mu \in \sigma(A)$, as claimed. □

Lemma 4.8.3 All the eigenvalues of the matrix $A = (a_{jk})_{j,k=1}^n$ lie in the set

$$\cup_{k=1}^n \Omega(a_{kk}, z(A)),$$

where

$$\Omega(a_{kk}, z(A)) = \{\lambda \in \mathbb{C} : |\lambda - a_{kk}| \le z(A)\} \quad (k = 1, ..., n)$$

and $z(A)$ is the extreme right-hand (unique positive and simple) root of the algebraic equation

$$z^n = q_{low} \prod_{k=2}^n (z + \tilde v_k). \tag{8.1}$$

Proof: Since $|a_{kk} - \lambda| \ge \rho(D, \lambda)$ for all $k = 1, ..., n$, Lemma 4.8.2 implies the inequality

$$1 \le q_{low}\, \rho^{-1}(D, \mu) \prod_{k=2}^n \big(1 + \tilde v_k\, \rho^{-1}(D, \mu)\big) \tag{8.2}$$

for any eigenvalue $\mu$ of $A$. Dividing the algebraic equation (8.1) by $z^n$, we can write

$$1 = q_{low}\, z^{-1} \prod_{k=2}^n (1 + \tilde v_k z^{-1}).$$

Comparing this with (8.2) and taking into account that $z(A)$ is the extreme right-hand root of (8.1), we obtain $\rho(D, \mu) \le z(A)$, as claimed. □

Proof of Theorem 4.7.1: By Lemma 1.6.1,

$$z(A) \le \sqrt[n]{q_{low}\, m_{up}(A)} \quad \text{if} \quad q_{low}\, m_{up}(A) \le 1.$$

Then due to Lemma 4.8.3 all the eigenvalues of $A$ lie in the union of the discs

$$\{\lambda \in \mathbb{C} : |\lambda - a_{kk}| \le \sqrt[n]{q_{low}\, m_{up}(A)}\}, \quad k = 1, ..., n.$$

Replace $A$ by the adjoint matrix $A^*$ and repeat our arguments. We can assert that all the eigenvalues of $A^*$ lie in the union of the discs

$$\{\lambda \in \mathbb{C} : |\lambda - \bar a_{kk}| \le \sqrt[n]{q_{up}\, m_{low}(A)}\}, \quad k = 1, ..., n,$$

provided that $q_{up}\, m_{low}(A) \le 1$. Since the eigenvalues of $A^*$ are the complex conjugates of those of $A$, we get the required result. □

4.9 Notes

Recall that, for non-negative matrices, Frobenius derived the following lower estimate:

$$r_s(A) \ge \tilde r(A) \equiv \min_{j=1,...,n} \sum_{k=1}^n a_{jk}, \tag{9.1}$$

cf. (Marcus and Minc, 1964, Chapter 3, Section 3.1). Relation (6.8) improves estimate (9.1) in the case $a_{jk} \ge 0$ ($j, k = 1, ..., n$), provided that $\max_k a_{kk} - \delta_n(A) > \tilde r(A)$. That is, (6.8) is sharper than (9.1) for matrices which are close to triangular ones, since $\delta_n(A) \to 0$ when $V_- \to 0$ or $V_+ \to 0$.

Due to inequality (6.9), the matrix $A$ is unstable when

$$\max_{k=1,...,n} \mathrm{Re}\, a_{kk} - z(\nu) > 0.$$

The latter result supplements the Rohrbach theorem (Marcus and Minc, 1964, Chapter 3, Section 3.3.3).

A lot of papers and books are devoted to bounds for $md(A, B)$, cf. (Stewart and Sun, 1990), (Bhatia, 1987), (Elsner, 1985), (Phillips, 1990 and 1991) and the references therein. One of the recent results belongs to R. Bhatia, L. Elsner and G. Krause (1990). They proved that

$$md(A, B) \le 2^{2-1/n}\, q_2^{1/n}\, (\|A\|_2 + \|B\|_2)^{1-1/n}. \tag{9.2}$$

In the paper (Farid, 1992) that inequality was improved in the case when both $A$ and $B$ are normal matrices with spectra on two intersecting lines. Corollary 4.4.6 improves (9.2) if $A$ is close to a triangular matrix.

The contents of Sections 4.2 and 4.3 are taken from (Gil', 1997), while the results of Sections 4.4 and 4.5 are adapted from (Gil', 2001). The material in Section 4.6 is taken from (Gil', 1995).

For other perturbation results see, for instance, the books (Kato, 1966) and (Baumgartel, 1985).

References

Baumgartel, H. (1985). Analytic Perturbation Theory for Matrices and Operators, Operator Theory: Advances and Applications, 52, Birkhauser Verlag, Basel-Boston-Stuttgart.

Bhatia, R. (1987). Perturbation Bounds for Matrix Eigenvalues, Pitman Res. Notes Math. 162, Longman Scientific and Technical, Essex, U.K.

Bhatia, R., Elsner, L. and Krause, G. (1990). Bounds for the variation of the roots of a polynomial and the eigenvalues of a matrix, Linear Algebra and Appl., 142, 195-209.

Elsner, L. (1985). An optimal bound for the spectral variation of two matrices, Linear Algebra and Appl., 71, 77-80.

Farid, F. O. (1992). The spectral variation for two matrices with spectra on two intersecting lines, Linear Algebra and Appl., 177, 251-273.

Gil', M. I. (1995). Norm Estimations for Operator-Valued Functions and Applications, Marcel Dekker, Inc., New York.

Gil', M. I. (1997). A nonsingularity criterion for matrices, Linear Algebra Appl., 253, 79-87.

Gil', M. I. (2001). Invertibility and positive invertibility of matrices, Linear Algebra and Appl., 327, 95-104.

Kato, T. (1966). Perturbation Theory for Linear Operators, Springer-Verlag, New York.

Marcus, M. and Minc, H. (1964). A Survey of Matrix Theory and Matrix Inequalities, Allyn and Bacon, Boston.

Phillips, D. (1990). Improving spectral-variation bounds with Chebyshev polynomials, Linear Algebra Appl., 133, 165-173.

Phillips, D. (1991). Resolvent bounds and spectral variation, Linear Algebra Appl., 149, 35-40.

Stewart, G. W. and Sun, Ji-guang (1990). Matrix Perturbation Theory, Academic Press, Boston.
5. Block Matrices and π-Triangular Matrices

The present chapter is devoted to block matrices. In particular, we derive invertibility conditions which supplement the generalized Hadamard criterion and some other well-known results for block matrices.

5.1 Invertibility of Block Matrices

Let an $n \times n$-matrix $A = (a_{jk})_{j,k=1}^n$ be partitioned in the following manner:

$$A = \begin{pmatrix} A_{11} & A_{12} & \dots & A_{1m} \\ A_{21} & A_{22} & \dots & A_{2m} \\ \vdots & & & \vdots \\ A_{m1} & A_{m2} & \dots & A_{mm} \end{pmatrix}, \tag{1.1}$$

where $m < n$ and the $A_{jk}$ are matrices. Again, $I = I_n$ is the unit operator in $\mathbb{C}^n$ and

$$\|A\|_\infty = \max_{j=1,...,n} \sum_{k=1}^n |a_{jk}|.$$

Let the diagonal blocks $A_{kk}$ be invertible. Denote

$$v_k^{up} = \max_{j=1,2,...,k-1} \|A_{jk} A_{kk}^{-1}\|_\infty \quad (k = 2, ..., m)$$

and

$$v_k^{low} = \max_{j=k+1,...,m} \|A_{jk} A_{kk}^{-1}\|_\infty \quad (k = 1, ..., m-1).$$
M.I. Gil': LNM 1830, pp. 65-74, 2003.
© Springer-Verlag Berlin Heidelberg 2003

Theorem 5.1.1 Let the diagonal blocks $A_{kk}$ ($k = 1, ..., m$) be invertible. In addition, with

$$M_{up} \equiv \prod_{2 \le k \le m} (1 + v_k^{up}), \qquad M_{low} \equiv \prod_{1 \le k \le m-1} (1 + v_k^{low}),$$

let the condition

$$M_{low} M_{up} < M_{low} + M_{up} \tag{1.2}$$

hold. Then the matrix $A$ defined by (1.1) is invertible. Moreover,

$$\|A^{-1}\|_\infty \le \frac{\max_k \|A_{kk}^{-1}\|_\infty\, M_{low} M_{up}}{M_{low} + M_{up} - M_{low} M_{up}}. \tag{1.3}$$

The proof of this theorem is divided into a series of lemmas, which are presented in Sections 5.2-5.5.
Theorem 5.1.1 is exact in the following sense. Let the matrix $A$ in (1.1) be upper block triangular and let the $A_{kk}$ be invertible. Then $M_{low} = 1$ and condition (1.2) takes the form $M_{up} < 1 + M_{up}$. Thus, due to Theorem 5.1.1, $A$ is invertible. We have the same result if the matrix in (1.1) is lower block triangular.

Consider a block matrix with $m = 2$:

$$A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}. \tag{1.4}$$

Then

$$v_2^{up} = \|A_{12} A_{22}^{-1}\|_\infty, \qquad v_1^{low} = \|A_{21} A_{11}^{-1}\|_\infty,$$

$$M_{up} = 1 + v_2^{up} \quad \text{and} \quad M_{low} = 1 + v_1^{low}.$$

Assume that

$$(1 + v_2^{up})(1 + v_1^{low}) < 2 + v_2^{up} + v_1^{low},$$

or, equivalently, that

$$\|A_{12} A_{22}^{-1}\|_\infty\, \|A_{21} A_{11}^{-1}\|_\infty < 1. \tag{1.5}$$

Then due to Theorem 5.1.1, the matrix in (1.4) is invertible. Moreover,

$$\|A^{-1}\|_\infty \le \frac{\max_{k=1,2} \|A_{kk}^{-1}\|_\infty\, M_{low} M_{up}}{M_{low} + M_{up} - M_{low} M_{up}}.$$

Now assume that $m$ is even, $m = 2m_0$ with a natural $m_0$, and that the $A_{jk}$ are $2 \times 2$ matrices:

$$A_{jk} = \begin{pmatrix} a_{2j-1,2k-1} & a_{2j-1,2k} \\ a_{2j,2k-1} & a_{2j,2k} \end{pmatrix}$$

with $j, k = 1, ..., m_0$. Take into account that

$$A_{kk}^{-1} = d_k^{-1} \begin{pmatrix} a_{2k,2k} & -a_{2k-1,2k} \\ -a_{2k,2k-1} & a_{2k-1,2k-1} \end{pmatrix}$$

with

$$d_k = a_{2k-1,2k-1}\, a_{2k,2k} - a_{2k,2k-1}\, a_{2k-1,2k}.$$

Thus, the quantities $v_k^{up}, v_k^{low}$ are simple to calculate. Now relation (1.2) yields the invertibility, and (1.3) gives the estimate for the inverse matrix.

In Section 5.4 below, we show that the matrix in (1.4) is invertible provided

$$\|A_{12} A_{22}^{-1}\|\, \|A_{21} A_{11}^{-1}\| < 1 \tag{1.6}$$

with an arbitrary matrix norm $\|\cdot\|$ in $\mathbb{C}^n$.
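For the $m = 2$ case, condition (1.5) and the bound (1.3) are easy to check numerically. A sketch (the block values are illustrative):

```python
import numpy as np

# Diagonally dominant blocks with small coupling, so (1.5) should hold.
A11 = np.array([[2.0, 0.1], [0.0, 2.0]])
A22 = np.array([[3.0, 0.0], [0.2, 3.0]])
A12 = 0.1 * np.ones((2, 2))
A21 = 0.1 * np.ones((2, 2))
A = np.block([[A11, A12], [A21, A22]])

inf = lambda M: np.linalg.norm(M, np.inf)   # row-sum norm

v2_up = inf(A12 @ np.linalg.inv(A22))
v1_low = inf(A21 @ np.linalg.inv(A11))
M_up, M_low = 1 + v2_up, 1 + v1_low

assert v2_up * v1_low < 1                   # condition (1.5)
assert M_low * M_up < M_low + M_up          # condition (1.2)

bound = (max(inf(np.linalg.inv(A11)), inf(np.linalg.inv(A22)))
         * M_low * M_up / (M_low + M_up - M_low * M_up))
assert inf(np.linalg.inv(A)) <= bound + 1e-12   # the estimate (1.3)
```

Here the true value of $\|A^{-1}\|_\infty$ is below $0.6$, and the estimate (1.3) evaluates to roughly $0.62$, so the bound is quite tight for weakly coupled blocks.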

ПЂ-Triangular Matrices
5.2
Let B(Cn ) be the set of all linear operators in Cn . In what follows

ПЂ = {Pk , k = 0, ..., m в‰¤ n}

is a chain of orthogonal projectors Pk in Cn , such that

0 = P0 вЉ‚ P1 вЉ‚ .... вЉ‚ Pm = In .

The relation Pkв€’1 вЉ‚ Pk means that

Pkв€’1 Cn вЉ‚ Pk Cn (k = 1, ..., m).

Let operators A, D, V ∈ B(C^n) satisfy the relations

A Pk = Pk A Pk (k = 1, ..., m), (2.1)

D Pk = Pk D (k = 1, ..., m), (2.2)

V Pk = P_{k-1} V Pk (k = 2, ..., m); V P1 = 0. (2.3)

Then A, D and V will be called a π-triangular operator, a π-diagonal one, and a π-nilpotent one, respectively.
Since

V^m = V^m Pm = V^{m-1} P_{m-1} V = V^{m-2} P_{m-2} V P_{m-1} V = ... = V P1 ... V P_{m-2} V P_{m-1} V,

we have

V^m = 0. (2.4)

That is, every π-nilpotent operator is a nilpotent operator. Denote

∆Pk = Pk − P_{k-1}; Vk = V ∆Pk (k = 1, ..., m).
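Relations (2.3) and the nilpotency (2.4) are easy to verify numerically. In the following Python sketch (a special case chosen for illustration, not from the original text), the chain consists of the coordinate projectors Pk onto span(e1, ..., ek) in C^3, for which the π-nilpotent operators are exactly the strictly upper triangular matrices:

```python
# For the coordinate chain P_k = projector onto span(e_1, ..., e_k) in C^3,
# a strictly upper triangular V satisfies (2.3), and hence V^m = 0 as in (2.4).
# The entries of V are illustrative.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][t] * Y[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def proj(k, n=3):  # P_k: orthogonal projector onto the first k coordinates
    return [[1.0 if i == j and i < k else 0.0 for j in range(n)] for i in range(n)]

V = [[0.0, 2.0, -1.0],
     [0.0, 0.0, 3.0],
     [0.0, 0.0, 0.0]]

# Relations (2.3): V P_1 = 0 and V P_k = P_{k-1} V P_k for k = 2, 3.
for k in range(1, 4):
    lhs = matmul(V, proj(k))
    rhs = matmul(proj(k - 1), matmul(V, proj(k)))
    assert lhs == rhs

# (2.4) with m = 3: V^3 = 0.
V3 = matmul(V, matmul(V, V))
assert all(x == 0.0 for row in V3 for x in row)
```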

Lemma 5.2.1 Let A be π-triangular. Then

A = D + V, (2.5)

where V is a π-nilpotent operator and D is a π-diagonal one.

Proof: Clearly,

A = Σ_{j=1}^m ∆Pj A Σ_{k=1}^m ∆Pk = Σ_{k=1}^m Σ_{j=1}^k ∆Pj A ∆Pk = Σ_{k=1}^m Pk A ∆Pk.

Hence (2.5) is valid with

D = Σ_{k=1}^m ∆Pk A ∆Pk, (2.6)

and

V = Σ_{k=2}^m P_{k-1} A ∆Pk = Σ_{k=2}^m Vk. (2.7)

2
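For the coordinate chain Pk = projector onto span(e1, ..., ek), formulas (2.6) and (2.7) recover exactly the diagonal and strictly upper triangular parts of an upper triangular matrix. A small Python check of this special case, with illustrative entries:

```python
# For the coordinate chain in C^3 (Delta P_k = e_k e_k^T), formulas
# (2.6)-(2.7) give the diagonal and strictly upper triangular parts of a
# pi-triangular (here: upper triangular) A.  The entries are illustrative.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][t] * Y[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def madd(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(len(X))] for i in range(len(X))]

def proj(k, n=3):  # P_k: projector onto the first k coordinates
    return [[1.0 if i == j and i < k else 0.0 for j in range(n)] for i in range(n)]

def dP(k, n=3):    # Delta P_k = P_k - P_{k-1} = e_k e_k^T
    return [[proj(k)[i][j] - proj(k - 1)[i][j] for j in range(n)] for i in range(n)]

A = [[2.0, 1.0, 4.0],
     [0.0, 3.0, 5.0],
     [0.0, 0.0, 6.0]]

D = [[0.0] * 3 for _ in range(3)]
for k in range(1, 4):               # (2.6): D = sum_k Delta P_k A Delta P_k
    D = madd(D, matmul(dP(k), matmul(A, dP(k))))

V = [[0.0] * 3 for _ in range(3)]
for k in range(2, 4):               # (2.7): V = sum_k P_{k-1} A Delta P_k
    V = madd(V, matmul(proj(k - 1), matmul(A, dP(k))))

assert D == [[2.0, 0.0, 0.0], [0.0, 3.0, 0.0], [0.0, 0.0, 6.0]]
assert V == [[0.0, 1.0, 4.0], [0.0, 0.0, 5.0], [0.0, 0.0, 0.0]]
assert madd(D, V) == A              # (2.5)
```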

Definition 5.2.2 Let A ∈ B(C^n) be a π-triangular operator and suppose (2.5) holds. Then the π-diagonal operator D and the π-nilpotent operator V will be called the π-diagonal part and the π-nilpotent part of A, respectively.

Lemma 5.2.3 Let π = {Pk}_{k=1}^m be a chain of orthogonal projectors in C^n. If Ṽ is a π-nilpotent operator, and A is a π-triangular one, then both operators AṼ and ṼA are π-nilpotent ones.

Proof: By (2.1) and (2.3) we get

Ṽ A Pk = Ṽ Pk A Pk = P_{k-1} Ṽ Pk A Pk = P_{k-1} Ṽ A Pk, k = 1, ..., m.

That is, ṼA is indeed a π-nilpotent operator. Similarly we can prove that AṼ is a π-nilpotent operator. 2

Lemma 5.2.4 Let A ∈ B(C^n) be a π-triangular operator and D be its π-diagonal part. Then the spectrum of A coincides with the spectrum of D.

Proof: Due to (2.5),

Rλ(A) = (A − λI)^{-1} = (D + V − λI)^{-1} = Rλ(D)(I + V Rλ(D))^{-1}. (2.8)

According to Lemma 5.2.3, V Rλ(D) is π-nilpotent if λ is not an eigenvalue of D. Therefore,

Rλ(A) = Rλ(D)(I + V Rλ(D))^{-1} = Σ_{k=0}^{m-1} Rλ(D)(−V Rλ(D))^k.

This relation implies that λ is a regular point of A if it is a regular point of D.
Conversely, let λ ∉ σ(A). Due to (2.5),

Rλ(D) = (D − λI)^{-1} = (A − V − λI)^{-1} = Rλ(A)(I − V Rλ(A))^{-1}.

According to Lemma 5.2.3, V Rλ(A) is π-nilpotent. Therefore,

Rλ(D) = Σ_{k=0}^{m-1} Rλ(A)(V Rλ(A))^k.

So λ ∉ σ(D), provided λ ∉ σ(A). The lemma is proved. 2
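Lemma 5.2.4 is easy to illustrate in the simplest case: for a triangular matrix the eigenvalues are read off the diagonal. A minimal Python check with an illustrative 2 × 2 upper triangular matrix, computing the spectrum from the characteristic polynomial:

```python
# Spectrum of a triangular A vs. spectrum of its diagonal part D (Lemma
# 5.2.4), for the illustrative matrix A = [[2, 7], [0, 5]].
import math

a11, a12, a22 = 2.0, 7.0, 5.0
tr = a11 + a22                      # trace of A (the entry a21 is 0)
det = a11 * a22                     # determinant of A
disc = math.sqrt(tr * tr - 4 * det)
eigs = sorted([(tr - disc) / 2, (tr + disc) / 2])  # roots of the char. poly.

# sigma(A) = sigma(D) = {2, 5}: the off-diagonal entry a12 plays no role.
assert eigs == [a11, a22]
```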

5.3 Multiplicative Representation of Resolvents of π-Triangular Operators

For X1, X2, ..., Xm ∈ B(C^n) and j < m, again denote

∏→_{j≤k≤m} Xk ≡ Xj X_{j+1} ... Xm.

Lemma 5.3.1 Let π = {Pk}_{k=1}^m be a chain of orthogonal projectors in C^n (m ≤ n), and let V be a π-nilpotent operator. Then

(I − V)^{-1} = ∏→_{2≤k≤m} (I + V ∆Pk). (3.1)

Proof: According to (2.4),

(I − V)^{-1} = Σ_{k=0}^{m-1} V^k. (3.2)

On the other hand,

∏→_{2≤k≤m} (I + V ∆Pk) = I + Σ_{k=2}^m Vk + Σ_{2≤k1<k2≤m} V_{k1} V_{k2} + ... + V2 V3 ... Vm.

Here, as above, Vk = V ∆Pk. However,

Σ_{2≤k1<k2≤m} V_{k1} V_{k2} = Σ_{2≤k1<k2≤m} V ∆P_{k1} V ∆P_{k2} = Σ_{3≤k2≤m} V P_{k2-1} V ∆P_{k2} = Σ_{3≤k2≤m} V^2 ∆P_{k2} = V^2.

Similarly,

Σ_{2≤k1<k2<...<kj≤m} V_{k1} V_{k2} ... V_{kj} = V^j

for j < m. Thus (3.1) follows from (3.2). This is the desired conclusion. 2
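Identity (3.1) can also be checked numerically. The Python sketch below (a special case for illustration) again uses the coordinate chain in C^3, so that ∆Pk = ek ek^T and V ∆Pk keeps only the k-th column of V; we verify that (I − V) times the ordered product is the identity:

```python
# Check of (3.1) for the coordinate chain in C^3: with Delta P_k = e_k e_k^T,
# (I - V)^{-1} should equal the ordered product (I + V dP_2)(I + V dP_3).
# The entries of V are illustrative.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][t] * Y[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

V = [[0.0, 2.0, -1.0],
     [0.0, 0.0, 3.0],
     [0.0, 0.0, 0.0]]

def I_plus_VdP(k, n=3):  # I + V * Delta P_k: identity plus column k of V
    return [[float(i == j) + (V[i][j] if j == k - 1 else 0.0)
             for j in range(n)] for i in range(n)]

# Ordered (left-to-right) product over k = 2, 3, as in (3.1).
prod = matmul(I_plus_VdP(2), I_plus_VdP(3))

# (I - V) * prod should be the identity.
I_minus_V = [[float(i == j) - V[i][j] for j in range(3)] for i in range(3)]
check = matmul(I_minus_V, prod)
assert all(abs(check[i][j] - (i == j)) < 1e-12 for i in range(3) for j in range(3))
```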

Theorem 5.3.2 For any π-triangular operator A and a regular λ ∈ C,

Rλ(A) = (D − λI)^{-1} ∏→_{2≤k≤m} (I − V ∆Pk (D − λI)^{-1} ∆Pk),

where D and V are the π-diagonal and π-nilpotent parts of A, respectively.

Proof: Due to Lemma 5.2.3, V Rλ(D) is π-nilpotent. Now Lemma 5.3.1 gives

(I + V Rλ(D))^{-1} = ∏→_{2≤k≤m} (I − V Rλ(D) ∆Pk).

But Rλ(D) ∆Pk = ∆Pk Rλ(D). This proves the result. 2

5.4 Invertibility with Respect to a Chain of Projectors

Again, let

π = {Pk, k = 0, ..., m}

be a chain of orthogonal projectors Pk. Denote the chain of the complementary projectors by π̃:

P̃k = In − P_{m-k} and π̃ = {In − P_{m-k}, k = 0, ..., m}.

In the present section, V is a π-nilpotent operator, D is a π-diagonal one, and W is a π̃-nilpotent operator.

Lemma 5.4.1 Any operator A ∈ B(C^n) admits the representation

A = D + V + W. (4.1)

Proof: Clearly,

A = Σ_{j=1}^m ∆Pj A Σ_{k=1}^m ∆Pk.

Hence, (4.1) holds, where D and V are defined by (2.6) and (2.7), and

W = Σ_{k=1}^m Σ_{j=k+1}^m ∆Pj A ∆Pk = Σ_{k=2}^m P̃_{k-1} A ∆P̃k

with ∆P̃k = P̃k − P̃_{k-1}. 2

Let ‖·‖ be an arbitrary norm in C^n.

Lemma 5.4.2 Let the π-diagonal matrix D be invertible. In addition, with the notations

VA ≡ V D^{-1}, WA ≡ W D^{-1},

let the condition

θ := ‖(I + VA)^{-1} VA WA (I + WA)^{-1}‖ < 1 (4.2)

hold. Then the operator A defined by (4.1) is invertible. Moreover,

‖A^{-1}‖ ≤ ‖(I + VA)^{-1}‖ ‖D^{-1}‖ ‖(I + WA)^{-1}‖ (1 − θ)^{-1}. (4.3)

Proof: Due to Lemma 5.4.1,

A = D + V + W = (I + VA + WA)D = [(I + VA)(I + WA) − VA WA]D.

Clearly, VA and WA are nilpotent matrices and, consequently, the matrices I + VA and I + WA are invertible. Thus,

A = (I + VA)(I − BA)(I + WA)D, (4.4)

where

BA = (I + VA)^{-1} VA WA (I + WA)^{-1}.

Condition (4.2) yields

‖(I − BA)^{-1}‖ ≤ (1 − θ)^{-1}.

Therefore, (4.2) provides the invertibility of A. Moreover, according to (4.4), inequality (4.3) is valid. 2
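The factorization (4.4) and condition (4.2) can be tested directly. In the Python sketch below (a 3 × 3 example with made-up entries), D is diagonal, V strictly upper triangular (π-nilpotent for the coordinate chain) and W strictly lower triangular (π̃-nilpotent); the inverses (I + VA)^{-1} and (I + WA)^{-1} are computed by the finite Neumann series for nilpotent operators:

```python
# Check of the factorization (4.4): A = (I + V_A)(I - B_A)(I + W_A) D,
# for an illustrative 3x3 matrix A = D + V + W with D diagonal, V strictly
# upper triangular and W strictly lower triangular.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][t] * Y[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def add(X, Y, s=1.0):  # X + s*Y
    return [[X[i][j] + s * Y[i][j] for j in range(len(X))] for i in range(len(X))]

def eye(n=3):
    return [[float(i == j) for j in range(n)] for i in range(n)]

def inv_I_plus(N):  # (I + N)^{-1} = I - N + N^2 for nilpotent N with N^3 = 0
    return add(add(eye(), N, -1.0), matmul(N, N))

D = [[2.0, 0.0, 0.0], [0.0, 4.0, 0.0], [0.0, 0.0, 5.0]]
V = [[0.0, 1.0, 0.5], [0.0, 0.0, 1.0], [0.0, 0.0, 0.0]]
W = [[0.0, 0.0, 0.0], [0.5, 0.0, 0.0], [0.25, 1.0, 0.0]]
A = add(add(D, V), W)

Dinv = [[1 / D[i][i] if i == j else 0.0 for j in range(3)] for i in range(3)]
VA, WA = matmul(V, Dinv), matmul(W, Dinv)

# B_A = (I + V_A)^{-1} V_A W_A (I + W_A)^{-1}; theta = its infinity norm.
BA = matmul(matmul(inv_I_plus(VA), matmul(VA, WA)), inv_I_plus(WA))
theta = max(sum(abs(x) for x in row) for row in BA)
assert theta < 1  # condition (4.2): A is invertible

# The factorization (4.4) reproduces A.
rhs = matmul(matmul(add(eye(), VA), add(eye(), BA, -1.0)),
             matmul(add(eye(), WA), D))
assert all(abs(A[i][j] - rhs[i][j]) < 1e-9 for i in range(3) for j in range(3))
```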

Corollary 5.4.3 Under condition (1.6), the matrix in (1.4) is invertible.