Lecture Notes in Mathematics
Editors:
J.M. Morel, Cachan
F. Takens, Groningen
B. Teissier, Paris
Berlin
Heidelberg
New York
Hong Kong
London
Milan
Paris
Tokyo
Michael I. Gil'
Operator Functions
and
Localization of Spectra
Author
Michael I. Gil'
Department of Mathematics
Ben-Gurion University of the Negev
P.O. Box 653
Beer-Sheva 84105
Israel
email: gilmi@cs.bgu.ac.il
Cataloging-in-Publication Data applied for
Bibliographic information published by Die Deutsche Bibliothek
Die Deutsche Bibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data is available on the Internet at http://dnb.ddb.de
Mathematics Subject Classification (2000): 47A10, 47A55, 47A56, 47A75, 47E05,
47G10, 47G20, 30C15, 45P05, 15A09, 15A18, 15A42
ISSN 0075-8434
ISBN 354022463 Springer-Verlag Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law.
Springer-Verlag is a part of Springer Science+Business Media
springeronline.com
© Springer-Verlag Berlin Heidelberg 2003
Printed in Germany
The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
Typesetting: Camera-ready TeX output by the authors
SPIN: 10964781 41/3142/du - 5 4 3 2 1 0 - Printed on acid-free paper
Preface
1. A lot of books and papers are concerned with the spectrum of linear operators, but they deal mainly with the asymptotic distributions of the eigenvalues. However, in many applications, for example in numerical mathematics and stability analysis, bounds for the eigenvalues are very important, yet they have been investigated considerably less than the asymptotic distributions. The present book is devoted to the localization of the spectra of linear operators in a Hilbert space. Our main tool is estimates for norms of operator-valued functions. One of the first estimates for the norm of a regular matrix-valued function was established by I. M. Gel'fand and G. E. Shilov in connection with their investigations of partial differential equations, but this estimate is not sharp; it is not attained for any matrix. The problem of obtaining a precise estimate for the norm of a matrix-valued function has been repeatedly discussed in the literature. In the late 1970s, I obtained a precise estimate for a regular matrix-valued function. It is attained in the case of normal matrices. Later, this estimate was extended to various classes of nonselfadjoint operators, such as Hilbert-Schmidt operators, quasi-Hermitian operators (i.e., linear operators with completely continuous imaginary components), quasiunitary operators (i.e., operators represented as a sum of a unitary operator and a compact one), etc. Note that singular integral operators and integro-differential operators are examples of quasi-Hermitian operators.
On the other hand, Carleman, in the 1930s, obtained an estimate for the norm of the resolvent of finite dimensional operators and of operators belonging to the Neumann-Schatten ideal. In the early 1980s, sharp estimates for the norms of the resolvents of nonselfadjoint operators of various types were established; they supplement and extend Carleman's estimates. In this book we present the estimates mentioned above and systematically apply them to spectral problems.
2. The book consists of 19 chapters. In Chapter 1, we present some well-known results for use in the subsequent chapters.
Chapters 2-5 of the book are devoted to finite dimensional operators and functions of such operators. In Chapter 2 we derive estimates for the norms of operator-valued functions in a Euclidean space. In addition, we prove relations for the eigenvalues of finite matrices which improve Schur's and Brown's inequalities.
Although excellent computer software is now available for eigenvalue computation, new results on invertibility and spectrum inclusion regions for finite matrices are still important, since computers are of limited use, in particular, for the analysis of matrices that depend on parameters. Such matrices play an essential role in various applications, for example, in the stability and boundedness of coupled systems of partial differential equations. In addition, bounds for the eigenvalues of finite matrices allow us to derive bounds for the spectra of infinite matrices. For these reasons, the problem of finding invertibility conditions and spectrum inclusion regions for finite matrices continues to attract the attention of many specialists. Chapter 3 deals with various invertibility conditions. In particular, we improve the classical Lévy-Desplanques theorem and other well-known invertibility results for matrices that are close to triangular ones. Chapter 4 is concerned with perturbations of finite matrices and bounds for their eigenvalues. In particular, we derive upper and lower estimates for the spectral radius. Under some restrictions, these estimates improve the Frobenius inequalities. Moreover, we present new conditions for the stability of matrices, which supplement the Rohrbach theorem.
Chapter 5 is devoted to block matrices. In this chapter, we derive invertibility conditions which supplement the generalized Hadamard criterion and some other well-known results for block matrices.
Chapters 6-9 form the crux of the book. Chapter 6 contains estimates for the norms of the resolvents and analytic functions of compact operators in a Hilbert space. In particular, we consider Hilbert-Schmidt operators and operators belonging to the von Neumann-Schatten ideals.
Chapter 7 is concerned with estimates for the norms of resolvents and analytic functions of noncompact operators in a Hilbert space. In particular, we consider so-called P-triangular operators. Roughly speaking, a P-triangular operator is a sum of a normal operator and a compact quasinilpotent one having a sufficiently rich set of invariant subspaces. Operators having compact Hermitian components are examples of P-triangular operators.
In Chapters 8 and 9 we derive bounds for the spectra of quasi-Hermitian operators.
In Chapter 10 we introduce the notion of the multiplicative operator integral. By virtue of the multiplicative operator integral, we derive spectral representations for the resolvents of various linear operators. This representation is a generalization of the classical spectral representation for resolvents of normal operators. In the corresponding cases the multiplicative integral is an operator product.
Chapters 11 and 12 are devoted to perturbations of operators of the form A = D + W, where D is a normal boundedly invertible operator and D^{-1}W is compact. In particular, estimates for the resolvents and bounds for the spectra are established.
Chapters 13 and 14 are concerned with applications of the main results of Chapters 7-12 to integral, integro-differential and differential operators, as well as to infinite matrices. In particular, we suggest new estimates for the spectral radius of integral operators and infinite matrices. Under some restrictions, they improve the classical results.
Chapter 15 deals with operator matrices. The spectra of operator matrices and related problems have been investigated in many works, where mainly Gershgorin-type bounds for the spectra of operator matrices with bounded operator entries are derived. But Gershgorin-type bounds give good results only when the diagonal operators are dominant. In Chapter 15, under some restrictions, we improve these bounds for operator matrices. Moreover, we consider matrices with unbounded operator entries. The results of Chapter 15 allow us to derive bounds for the spectra of matrix differential operators.
Chapters 16-18 are devoted to Hille-Tamarkin integral operators and matrices, as well as to integral operators with bounded kernels.
Chapter 19 is devoted to applications of our abstract results to the theory of entire functions of finite order. In that chapter we consider the following problem: if the Taylor coefficients of two entire functions are close, how close are their zeros? In addition, we establish bounds for the sums of the absolute values of the zeros in terms of the Taylor coefficients. These bounds supplement the Hadamard theorem.
3. This is the first book to present a systematic exposition of bounds for the spectra of various classes of linear operators in a Hilbert space. It is directed not only to specialists in functional analysis and linear algebra, but to anyone interested in various applications who has had at least a first-year graduate course in analysis. The functional analysis is developed as needed.
I was very fortunate to have had fruitful discussions with the late Professors I. S. Iohvidov and M. A. Krasnosel'skii, to whom I am very grateful for their interest in my investigations.
Michael I. Gil'
Table of Contents

1. Preliminaries . . . . . 1
1.1 Vector and Matrix Norms . . . . . 1
1.2 Classes of Matrices . . . . . 2
1.3 Eigenvalues of Matrices . . . . . 3
1.4 Matrix-Valued Functions . . . . . 4
1.5 Contour Integrals . . . . . 5
1.6 Algebraic Equations . . . . . 6
1.7 The Triangular Representation of Matrices . . . . . 7
1.8 Notes . . . . . 8
References . . . . . 8

2. Norms of Matrix-Valued Functions . . . . . 11
2.1 Estimates for the Euclidean Norm of the Resolvent . . . . . 11
2.2 Examples . . . . . 13
2.3 Relations for Eigenvalues . . . . . 14
2.4 An Auxiliary Inequality . . . . . 17
2.5 Euclidean Norms of Powers of Nilpotent Matrices . . . . . 18
2.6 Proof of Theorem 2.1.1 . . . . . 20
2.7 Estimates for the Norm of Analytic Matrix-Valued Functions . . . . . 21
2.8 Proof of Theorem 2.7.1 . . . . . 22
2.9 The First Multiplicative Representation of the Resolvent . . . . . 24
2.10 The Second Multiplicative Representation of the Resolvent . . . . . 27
2.11 The First Relation between Determinants and Resolvents . . . . . 28
2.12 The Second Relation between Determinants and Resolvents . . . . . 30
2.13 Proof of Theorem 2.12.1 . . . . . 30
2.14 An Additional Estimate for Resolvents . . . . . 32
2.15 Notes . . . . . 33
References . . . . . 33

3. Invertibility of Finite Matrices . . . . . 35
3.1 Preliminary Results . . . . . 35
3.2 l^p Norms of Powers of Nilpotent Matrices . . . . . 37
3.3 Invertibility in the Norm ‖·‖_p (1 < p < ∞) . . . . . 39
3.4 Invertibility in the Norm ‖·‖_∞ . . . . . 40
3.5 Proof of Theorem 3.4.1 . . . . . 41
3.6 Positive Invertibility of Matrices . . . . . 44
3.7 Positive Matrix-Valued Functions . . . . . 45
3.8 Notes . . . . . 47
References . . . . . 47

4. Localization of Eigenvalues of Finite Matrices . . . . . 49
4.1 Definitions and Preliminaries . . . . . 49
4.2 Perturbations of Multiplicities and Matching Distance . . . . . 50
4.3 Perturbations of Eigenvectors and Eigenprojectors . . . . . 52
4.4 Perturbations of Matrices in the Euclidean Norm . . . . . 53
4.5 Upper Bounds for Eigenvalues in Terms of the Euclidean Norm . . . . . 56
4.6 Lower Bounds for the Spectral Radius . . . . . 57
4.7 Additional Bounds for Eigenvalues . . . . . 59
4.8 Proof of Theorem 4.7.1 . . . . . 60
4.9 Notes . . . . . 62
References . . . . . 62

5. Block Matrices and π-Triangular Matrices . . . . . 65
5.1 Invertibility of Block Matrices . . . . . 65
5.2 π-Triangular Matrices . . . . . 67
5.3 Multiplicative Representation of Resolvents of π-Triangular Operators . . . . . 69
5.4 Invertibility with Respect to a Chain of Projectors . . . . . 70
5.5 Proof of Theorem 5.1.1 . . . . . 72
5.6 Notes . . . . . 74
References . . . . . 74

6. Norm Estimates for Functions of Compact Operators in a Hilbert Space . . . . . 75
6.1 Bounded Operators in a Hilbert Space . . . . . 75
6.2 Compact Operators in a Hilbert Space . . . . . 77
6.3 Triangular Representations of Compact Operators . . . . . 79
6.4 Resolvents of Hilbert-Schmidt Operators . . . . . 83
6.5 Equalities for Eigenvalues of a Hilbert-Schmidt Operator . . . . . 84
6.6 Operators Having Hilbert-Schmidt Powers . . . . . 86
6.7 Resolvents of Neumann-Schatten Operators . . . . . 88
6.8 Proofs of Theorems 6.7.1 and 6.7.3 . . . . . 88
6.9 Regular Functions of Hilbert-Schmidt Operators . . . . . 91
6.10 A Relation between Determinants and Resolvents . . . . . 93
6.11 Notes . . . . . 95
References . . . . . 95

7. Functions of Noncompact Operators . . . . . 97
7.1 Terminology . . . . . 97
7.2 P-Triangular Operators . . . . . 98
7.3 Some Properties of Volterra Operators . . . . . 99
7.4 Powers of Volterra Operators . . . . . 100
7.5 Resolvents of P-Triangular Operators . . . . . 101
7.6 Triangular Representations of Quasi-Hermitian Operators . . . . . 104
7.7 Resolvents of Operators with Hilbert-Schmidt Hermitian Components . . . . . 106
7.8 Operators with the Property A^p - (A*)^p ∈ C_2 . . . . . 107
7.9 Resolvents of Operators with Neumann-Schatten Hermitian Components . . . . . 108
7.10 Regular Functions of Bounded Quasi-Hermitian Operators . . . . . 109
7.11 Proof of Theorem 7.10.1 . . . . . 110
7.12 Regular Functions of Unbounded Operators . . . . . 113
7.13 Triangular Representations of Regular Functions . . . . . 115
7.14 Triangular Representations of Quasiunitary Operators . . . . . 116
7.15 Resolvents and Analytic Functions of Quasiunitary Operators . . . . . 117
7.16 Notes . . . . . 120
References . . . . . 120

8. Bounded Perturbations of Nonselfadjoint Operators . . . . . 123
8.1 Invertibility of Boundedly Perturbed P-Triangular Operators . . . . . 123
8.2 Resolvents of Boundedly Perturbed P-Triangular Operators . . . . . 126
8.3 Roots of Scalar Equations . . . . . 127
8.4 Spectral Variations . . . . . 129
8.5 Perturbations of Compact Operators . . . . . 130
8.6 Perturbations of Operators with Compact Hermitian Components . . . . . 132
8.7 Notes . . . . . 134
References . . . . . 134

9. Spectrum Localization of Nonselfadjoint Operators . . . . . 135
9.1 Invertibility Conditions . . . . . 135
9.2 Proofs of Theorems 9.1.1 and 9.1.3 . . . . . 137
9.3 Resolvents of Quasinormal Operators . . . . . 139
9.4 Upper Bounds for Spectra . . . . . 142
9.5 Inner Bounds for Spectra . . . . . 143
9.6 Bounds for Spectra of Hilbert-Schmidt Operators . . . . . 145
9.7 Von Neumann-Schatten Operators . . . . . 146
9.8 Operators with Hilbert-Schmidt Hermitian Components . . . . . 147
9.9 Operators with Neumann-Schatten Hermitian Components . . . . . 148
9.10 Notes . . . . . 149
References . . . . . 149

10. Multiplicative Representations of Resolvents . . . . . 151
10.1 Operators with Finite Chains of Invariant Projectors . . . . . 151
10.2 Complete Compact Operators . . . . . 154
10.3 The Second Representation for Resolvents of Complete Compact Operators . . . . . 156
10.4 Operators with Compact Inverse Ones . . . . . 157
10.5 Multiplicative Integrals . . . . . 158
10.6 Resolvents of Volterra Operators . . . . . 159
10.7 Resolvents of P-Triangular Operators . . . . . 159
10.8 Notes . . . . . 161
References . . . . . 161

11. Relatively P-Triangular Operators . . . . . 163
11.1 Definitions and Preliminaries . . . . . 163
11.2 Resolvents of Relatively P-Triangular Operators . . . . . 165
11.3 Invertibility of Perturbed RPTO . . . . . 166
11.4 Resolvents of Perturbed RPTO . . . . . 167
11.5 Relative Spectral Variations . . . . . 167
11.6 Operators with von Neumann-Schatten Relatively Nilpotent Parts . . . . . 168
11.7 Notes . . . . . 172
References . . . . . 172

12. Relatively Compact Perturbations of Normal Operators . . . . . 173
12.1 Invertibility Conditions . . . . . 173
12.2 Estimates for Resolvents . . . . . 175
12.3 Bounds for the Spectrum . . . . . 176
12.4 Operators with Relatively von Neumann-Schatten Off-diagonal Parts . . . . . 177
12.5 Notes . . . . . 180
References . . . . . 180

13. Infinite Matrices in Hilbert Spaces and Differential Operators . . . . . 181
13.1 Matrices with Compact Off-diagonals . . . . . 181
13.2 Matrices with Relatively Compact Off-diagonals . . . . . 184
13.3 A Nonselfadjoint Differential Operator . . . . . 185
13.4 Integro-differential Operators . . . . . 186
13.5 Notes . . . . . 187
References . . . . . 188

14. Integral Operators in Space L^2 . . . . . 189
14.1 Scalar Integral Operators . . . . . 189
14.2 Matrix Integral Operators with Relatively Small Kernels . . . . . 191
14.3 Perturbations of Matrix Convolutions . . . . . 193
14.4 Notes . . . . . 196
References . . . . . 197

15. Operator Matrices . . . . . 199
15.1 Invertibility Conditions . . . . . 199
15.2 Bounds for the Spectrum . . . . . 202
15.3 Operator Matrices with Normal Entries . . . . . 204
15.4 Operator Matrices with Bounded Off-diagonal Entries . . . . . 205
15.5 Operator Matrices with Hilbert-Schmidt Diagonal Operators . . . . . 207
15.6 Example . . . . . 209
15.7 Notes . . . . . 212
References . . . . . 212

16. Hille-Tamarkin Integral Operators . . . . . 215
16.1 Invertibility Conditions . . . . . 215
16.2 Preliminaries . . . . . 217
16.3 Powers of Volterra Operators . . . . . 219
16.4 Spectral Radius of a Hille-Tamarkin Operator . . . . . 221
16.5 Nonnegative Invertibility . . . . . 222
16.6 Applications . . . . . 223
16.7 Notes . . . . . 226
References . . . . . 226

17. Integral Operators in Space L^∞ . . . . . 227
17.1 Invertibility Conditions . . . . . 227
17.2 Proof of Theorem 17.1.1 . . . . . 228
17.3 The Spectral Radius . . . . . 230
17.4 Nonnegative Invertibility . . . . . 231
17.5 Applications . . . . . 232
17.6 Notes . . . . . 234
References . . . . . 234

18. Hille-Tamarkin Matrices . . . . . 235
18.1 Invertibility Conditions . . . . . 235
18.2 Proof of Theorem 18.1.1 . . . . . 237
18.3 Localization of the Spectrum . . . . . 238
18.4 Notes . . . . . 240
References . . . . . 241

19. Zeros of Entire Functions . . . . . 243
19.1 Perturbations of Zeros . . . . . 243
19.2 Proof of Theorem 19.1.2 . . . . . 246
19.3 Bounds for Sums of Zeros . . . . . 248
19.4 Applications of Theorem 19.3.1 . . . . . 250
19.5 Notes . . . . . 252
References . . . . . 252

List of Main Symbols . . . . . 253
Index . . . . . 255
1. Preliminaries

In this chapter we present some well-known results for use in the subsequent chapters.
1.1 Vector and Matrix Norms

Let C^n be an n-dimensional complex Euclidean space. A function

$$\nu : \mathbb{C}^n \to [0, \infty)$$

is said to be a norm on C^n (or a vector norm) if ν satisfies the following conditions:

$$\nu(x) = 0 \ \text{iff}\ x = 0, \quad \nu(\alpha x) = |\alpha|\,\nu(x), \quad \nu(x + y) \le \nu(x) + \nu(y) \eqno(1.1)$$

for all x, y ∈ C^n and α ∈ C. Usually, a norm is denoted by the symbol ‖·‖; that is, ν(x) = ‖x‖. The following important properties follow immediately from the definition:

$$\|x - y\| \ge \|x\| - \|y\| \quad \text{and} \quad \|x\| = \|-x\|.$$
There are infinitely many norms on C^n. However, the following norms are most commonly used in practice:

$$\|x\|_p = \Big[\sum_{k=1}^{n} |x_k|^p\Big]^{1/p} \ (1 \le p < \infty) \quad \text{and} \quad \|x\|_\infty = \max_{k=1,\dots,n} |x_k|$$

for an x = (x_k) ∈ C^n. The norm ‖x‖_2 is called the Euclidean norm.
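As a quick sanity check, the norms above can be computed directly. The following minimal sketch verifies the triangle inequality from (1.1) and the ordering of the p-norms for an arbitrary illustrative vector in C^2 (the particular entries are not from the text).

```python
import math

# Hypothetical vectors in C^2, used only for illustration.
x = [3 + 4j, 1 - 2j]
y = [-1 + 0j, 2 + 2j]

def p_norm(v, p):
    # ||v||_p = (sum |v_k|^p)^(1/p)
    return sum(abs(c) ** p for c in v) ** (1 / p)

def inf_norm(v):
    # ||v||_inf = max |v_k|
    return max(abs(c) for c in v)

s = [a + b for a, b in zip(x, y)]
for p in (1, 2, 3):
    # Triangle inequality from (1.1).
    assert p_norm(s, p) <= p_norm(x, p) + p_norm(y, p) + 1e-12
assert inf_norm(s) <= inf_norm(x) + inf_norm(y) + 1e-12
# The norms are ordered: ||x||_inf <= ||x||_2 <= ||x||_1.
assert inf_norm(x) <= p_norm(x, 2) <= p_norm(x, 1) + 1e-12
```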
Throughout this chapter ‖x‖ means an arbitrary norm of a vector x. We will use the following matrix norms: the operator norm and the Frobenius (Hilbert-Schmidt) norm. The operator norm of a matrix A (a linear operator in C^n) is

$$\|A\| = \sup_{x \in \mathbb{C}^n,\ x \ne 0} \frac{\|Ax\|}{\|x\|}.$$

M.I. Gil': LNM 1830, pp. 1-9, 2003.
© Springer-Verlag Berlin Heidelberg 2003
The relations

$$\|A\| > 0 \ (A \ne 0), \quad \|\lambda A\| = |\lambda|\,\|A\| \ (\lambda \in \mathbb{C}),$$
$$\|AB\| \le \|A\|\,\|B\|, \quad \text{and} \quad \|A + B\| \le \|A\| + \|B\|$$

are valid for all matrices A and B. The Frobenius norm of A is

$$N(A) = \Big[\sum_{j,k=1}^{n} |a_{jk}|^2\Big]^{1/2}.$$

Here a_{jk} are the entries of the matrix A in some orthonormal basis. The Frobenius norm does not depend on the choice of the orthonormal basis. The relations

$$N(A) > 0 \ (A \ne 0), \quad N(\lambda A) = |\lambda|\,N(A) \ (\lambda \in \mathbb{C}),$$
$$N(AB) \le N(A)\,N(B) \quad \text{and} \quad N(A + B) \le N(A) + N(B)$$

are true for all matrices A and B.
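The two matrix norms can be compared numerically. The sketch below, for a hypothetical 2x2 real matrix (the entries are arbitrary), computes N(A) directly and the operator norm as the square root of the largest eigenvalue of A^T A, then checks the well-known inequality ‖A‖ ≤ N(A) ≤ √n ‖A‖.

```python
import math

# Hypothetical 2x2 real matrix used for illustration.
A = [[1.0, 2.0],
     [0.0, 3.0]]

def frobenius(A):
    # N(A) = square root of the sum of squared entries.
    return math.sqrt(sum(a * a for row in A for a in row))

def operator_norm_2x2(A):
    # The spectral (operator) norm is the square root of the largest
    # eigenvalue of A^T A; for 2x2 this follows from the quadratic formula.
    [[a, b], [c, d]] = A
    p, q, r = a * a + c * c, a * b + c * d, b * b + d * d  # entries of A^T A
    lam_max = 0.5 * (p + r + math.sqrt((p - r) ** 2 + 4 * q * q))
    return math.sqrt(lam_max)

n_A = frobenius(A)
op_A = operator_norm_2x2(A)
# ||A|| <= N(A) <= sqrt(n) * ||A||  (n = 2 here).
assert op_A <= n_A <= math.sqrt(2) * op_A
```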
1.2 Classes of Matrices

For an n × n matrix A, A* denotes the conjugate matrix. That is, if a_{jk} are the entries of A, then \bar{a}_{kj} (j, k = 1, ..., n) are the entries of A*. In other words,

$$(Ax, y) = (x, A^* y) \quad (x, y \in \mathbb{C}^n).$$

The symbol (·,·) = (·,·)_{C^n} means the scalar product in C^n. We use I to denote the unit matrix in C^n.
Definition 1.2.1 A matrix A = (a_{jk})_{j,k=1}^n is
1. symmetric (Hermitian) if A* = A;
2. positive definite (negative definite) if it is Hermitian and (Ah, h) ≥ 0 (respectively, (Ah, h) ≤ 0) for all h ∈ C^n;
3. unitary if A*A = AA* = I;
4. normal if AA* = A*A;
5. nilpotent if A^n = 0.
Let A be an arbitrary matrix. Then the matrices

$$A_I = (A - A^*)/2i \quad \text{and} \quad A_R = (A + A^*)/2$$

are the imaginary Hermitian component and the real Hermitian component of A, respectively. A matrix A is dissipative if its real Hermitian component is negative definite. By A^{-1} the matrix inverse to A is denoted: AA^{-1} = A^{-1}A = I.
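The Hermitian components are easy to verify numerically. The minimal sketch below, for a hypothetical 2x2 complex matrix (arbitrary entries), forms A_R and A_I, checks that both are Hermitian, and checks that A = A_R + i A_I.

```python
# Hypothetical 2x2 complex matrix used only for illustration.
A = [[1 + 2j, 3 - 1j],
     [0 + 1j, 2 + 0j]]

def adjoint(A):
    # A* has entries conj(a_kj): the conjugate transpose.
    n = len(A)
    return [[A[k][j].conjugate() for k in range(n)] for j in range(n)]

def combine(A, B, ca, cb):
    # Entrywise linear combination ca*A + cb*B.
    return [[ca * a + cb * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

As = adjoint(A)
AR = combine(A, As, 0.5, 0.5)            # A_R = (A + A*)/2
AI = combine(A, As, 1 / (2j), -1 / (2j))  # A_I = (A - A*)/2i

# Both components are Hermitian, and A = A_R + i*A_I.
assert adjoint(AR) == AR and adjoint(AI) == AI
for j in range(2):
    for k in range(2):
        assert abs(AR[j][k] + 1j * AI[j][k] - A[j][k]) < 1e-12
```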
1.3 Eigenvalues of Matrices

Let A be an arbitrary matrix. If for some λ ∈ C the equation Ah = λh has a nontrivial solution, then λ is an eigenvalue of A and h is its eigenvector. An eigenvalue λ has (algebraic) multiplicity r if

$$\dim\Big(\bigcup_{k=1}^{n} \ker (A - \lambda I)^k\Big) = r.$$

Here ker B denotes the kernel of a mapping B. Let λ_k(A) (k = 1, ..., n) be the eigenvalues of A, counted with their multiplicities. Then the set σ(A) = {λ_k(A)}_{k=1}^n is the spectrum of A.
All the eigenvalues of a Hermitian matrix A are real. If, in addition, A is positive (negative) definite, then all its eigenvalues are nonnegative (nonpositive). Furthermore,

$$r_s(A) = \max_{k=1,\dots,n} |\lambda_k(A)|$$

is the spectral radius of A. Denote

$$\alpha(A) = \max_{k=1,\dots,n} \operatorname{Re} \lambda_k(A), \qquad \beta(A) = \min_{k=1,\dots,n} \operatorname{Re} \lambda_k(A).$$

A matrix A is said to be a Hurwitz matrix if all its eigenvalues lie in the open left half-plane, i.e., α(A) < 0. A complex number λ is a regular point of A if it does not belong to the spectrum of A, i.e., if λ ≠ λ_k(A) for all k = 1, ..., n.
The trace of A is sometimes denoted by Tr(A):

$$\operatorname{Trace}(A) = \operatorname{Tr}(A) = \sum_{k=1}^{n} \lambda_k(A).$$

So the Frobenius norm can be defined as

$$N^2(A) = \operatorname{Trace}(A^* A) = \operatorname{Trace}(A A^*).$$

Recall that Tr(AB) = Tr(BA) and Tr(A + B) = Tr(A) + Tr(B) for all matrices A and B. In addition, det(A) means the determinant of A:

$$\det(A) = \prod_{k=1}^{n} \lambda_k(A).$$
The polynomial

$$p(\lambda) = \det(\lambda I - A) = \prod_{k=1}^{n} (\lambda - \lambda_k(A))$$

is said to be the characteristic polynomial of A. All the eigenvalues of A are roots of its characteristic polynomial, and the algebraic multiplicity of an eigenvalue of A coincides with the multiplicity of the corresponding root of the characteristic polynomial. A polynomial is said to be a Hurwitz polynomial if all its roots lie in the open left half-plane. Thus, the characteristic polynomial of a Hurwitz matrix is a Hurwitz polynomial.
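The identities Tr(A) = Σ λ_k(A) and det(A) = Π λ_k(A) are easy to check numerically in the 2x2 case, where the eigenvalues are the roots of λ² - Tr(A)λ + det(A). The matrix below is an arbitrary illustrative choice.

```python
import cmath

# Hypothetical 2x2 real matrix with complex eigenvalues.
A = [[2.0, 5.0],
     [-1.0, 4.0]]

tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]

# Characteristic polynomial: p(lam) = lam^2 - tr*lam + det.
disc = cmath.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2

assert abs(lam1 + lam2 - tr) < 1e-12    # trace = sum of eigenvalues
assert abs(lam1 * lam2 - det) < 1e-12   # determinant = product of eigenvalues
r_s = max(abs(lam1), abs(lam2))         # spectral radius r_s(A)
alpha = max(lam1.real, lam2.real)       # alpha(A); A is Hurwitz iff alpha < 0
```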
1.4 Matrix-Valued Functions

Let A be a matrix and let f(λ) be a scalar-valued function which is analytic on a neighborhood M of σ(A). We define the function f(A) of A by the generalized integral formula of Cauchy

$$f(A) = -\frac{1}{2\pi i} \int_\Gamma f(\lambda) R_\lambda(A)\, d\lambda, \eqno(4.1)$$

where Γ ⊂ M is a closed smooth contour surrounding σ(A), and

$$R_\lambda(A) = (A - \lambda I)^{-1}$$

is the resolvent of A. If an analytic function f(λ) is represented in some domain by the Taylor series

$$f(\lambda) = \sum_{k=0}^{\infty} c_k \lambda^k,$$

and the series

$$\sum_{k=0}^{\infty} c_k A^k$$

converges in the norm of the space C^n, then

$$f(A) = \sum_{k=0}^{\infty} c_k A^k.$$

In particular, for any matrix A,

$$e^A = \sum_{k=0}^{\infty} \frac{A^k}{k!}.$$
Example 1.4.1 Let A be a diagonal matrix:

$$A = \begin{pmatrix} a_1 & 0 & \dots & 0 \\ 0 & a_2 & \dots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & \dots & 0 & a_n \end{pmatrix}.$$

Then

$$f(A) = \begin{pmatrix} f(a_1) & 0 & \dots & 0 \\ 0 & f(a_2) & \dots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & \dots & 0 & f(a_n) \end{pmatrix}.$$
Example 1.4.2 If a matrix J is an n × n Jordan block,

$$J = \begin{pmatrix}
\lambda_0 & 1 & 0 & \dots & 0 \\
0 & \lambda_0 & 1 & \dots & 0 \\
\vdots & & \ddots & \ddots & \vdots \\
0 & 0 & \dots & \lambda_0 & 1 \\
0 & 0 & \dots & 0 & \lambda_0
\end{pmatrix},$$

then

$$f(J) = \begin{pmatrix}
f(\lambda_0) & \dfrac{f'(\lambda_0)}{1!} & \dots & \dfrac{f^{(n-1)}(\lambda_0)}{(n-1)!} \\
0 & f(\lambda_0) & \ddots & \vdots \\
\vdots & & \ddots & \dfrac{f'(\lambda_0)}{1!} \\
0 & \dots & 0 & f(\lambda_0)
\end{pmatrix}.$$
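For f = exp the entries of f(J) can be checked against the Taylor series definition of e^J given above: the (j, j+k) entry of exp(J) should equal e^{λ_0}/k!. The sketch below does this for a hypothetical 3x3 Jordan block with the arbitrary value λ_0 = 0.5.

```python
import math

# Jordan block J of size n with illustrative eigenvalue lam0.
lam0, n = 0.5, 3
J = [[lam0 if j == k else 1.0 if k == j + 1 else 0.0 for k in range(n)]
     for j in range(n)]

def matmul(A, B):
    return [[sum(A[i][l] * B[l][j] for l in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, terms=60):
    # Truncated Taylor series exp(A) = sum_k A^k / k!.
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    power = [row[:] for row in result]  # A^0 = I
    for k in range(1, terms):
        power = matmul(power, A)
        result = [[result[i][j] + power[i][j] / math.factorial(k)
                   for j in range(n)] for i in range(n)]
    return result

E = expm(J)
# Entry (j, j+k) of exp(J) is f^(k)(lam0)/k! = e^{lam0}/k! for f = exp.
for j in range(n):
    for k in range(j, n):
        assert abs(E[j][k] - math.exp(lam0) / math.factorial(k - j)) < 1e-9
```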
1.5 Contour Integrals

Lemma 1.5.1 Let M_0 be the closed convex hull of points x_0, x_1, ..., x_n ∈ C and let a scalar-valued function f be regular on a neighborhood D_1 of M_0. In addition, let Γ ⊂ D_1 be a closed Jordan contour surrounding the points x_0, x_1, ..., x_n. Then

$$\Big| \frac{1}{2\pi i} \int_\Gamma \frac{f(\lambda)\, d\lambda}{(\lambda - x_0) \cdots (\lambda - x_n)} \Big| \le \frac{1}{n!} \sup_{\lambda \in M_0} |f^{(n)}(\lambda)|.$$
Proof: First, let all the points be distinct: x_j ≠ x_k for j ≠ k (j, k = 0, ..., n), and let Df(x_0, x_1, ..., x_n) be the divided difference of the function f at the points x_0, x_1, ..., x_n. The divided difference admits the representation

$$Df(x_0, x_1, ..., x_n) = \frac{1}{2\pi i} \int_\Gamma \frac{f(\lambda)\, d\lambda}{(\lambda - x_0) \cdots (\lambda - x_n)} \eqno(5.1)$$

(see (Gel'fond, 1967, formula (54))). But, on the other hand, the following estimate is well known:

$$|Df(x_0, x_1, ..., x_n)| \le \frac{1}{n!} \sup_{\lambda \in M_0} |f^{(n)}(\lambda)|$$

(Gel'fond, 1967, formula (49)). Combining this inequality with relation (5.1), we arrive at the required result. If x_j = x_k for some j ≠ k, then the claimed inequality can be obtained by small perturbations and the previous reasoning. ∎
Lemma 1.5.2 Let x_0 ≤ x_1 ≤ ... ≤ x_n be real points and let a function f be regular on a neighborhood D_1 of the segment [x_0, x_n]. In addition, let Γ ⊂ D_1 be a closed Jordan contour surrounding [x_0, x_n]. Then there is a point η ∈ [x_0, x_n] such that the equality

$$\frac{1}{2\pi i} \int_\Gamma \frac{f(\lambda)\, d\lambda}{(\lambda - x_0) \cdots (\lambda - x_n)} = \frac{1}{n!} f^{(n)}(\eta)$$

is true.

Proof: First suppose that all the points are distinct: x_0 < x_1 < ... < x_n. Then the divided difference Df(x_0, x_1, ..., x_n) of f at the points x_0, x_1, ..., x_n admits the representation

$$Df(x_0, x_1, ..., x_n) = \frac{1}{n!} f^{(n)}(\eta)$$

with some point η ∈ [x_0, x_n] (Gel'fond, 1967, formula (43)), (Ostrowski, 1973, page 5). Combining this equality with representation (5.1), we arrive at the required result. If x_j = x_k for some j ≠ k, then the claimed equality can be obtained by small perturbations and the previous reasoning. ∎
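Lemma 1.5.2 can be illustrated numerically. For f = exp and real points the divided difference equals e^η/n! for some η ∈ [x_0, x_n], and since exp is increasing it must lie between e^{x_0}/n! and e^{x_n}/n!. The sample points below are arbitrary illustrative values in [0, 1].

```python
import math

# Arbitrary illustrative real points x_0 <= ... <= x_n in [0, 1].
xs = [0.0, 0.3, 0.5, 0.9, 1.0]
n = len(xs) - 1

def divided_difference(f, pts):
    # Classical recursion:
    # Df(x_0..x_m) = (Df(x_1..x_m) - Df(x_0..x_{m-1})) / (x_m - x_0).
    if len(pts) == 1:
        return f(pts[0])
    return (divided_difference(f, pts[1:]) -
            divided_difference(f, pts[:-1])) / (pts[-1] - pts[0])

df = divided_difference(math.exp, xs)
# By Lemma 1.5.2, df = e^eta / n! for some eta in [x_0, x_n].
lower = math.exp(xs[0]) / math.factorial(n)
upper = math.exp(xs[-1]) / math.factorial(n)
assert lower <= df <= upper
```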
1.6 Algebraic Equations

Let us consider the algebraic equation

$$z^n = P(z) \ (n > 1), \quad \text{where} \quad P(z) = \sum_{j=0}^{n-1} c_j z^{n-j-1} \eqno(6.1)$$

with nonnegative coefficients c_j (j = 0, ..., n - 1).

Lemma 1.6.1 The extreme right-hand root z_0 of equation (6.1) is nonnegative and the following estimates are valid:

$$z_0 \le [P(1)]^{1/n} \quad \text{if} \quad P(1) \le 1, \eqno(6.2)$$

and

$$1 \le z_0 \le P(1) \quad \text{if} \quad P(1) \ge 1. \eqno(6.3)$$
Proof: Since all the coefficients of P(z) are nonnegative, P does not decrease as z > 0 increases. From this it follows that if P(1) ≤ 1, then z_0 ≤ 1, and hence z_0^n = P(z_0) ≤ P(1). So z_0 ≤ [P(1)]^{1/n}, as claimed. Now let P(1) ≥ 1. Then, due to (6.1), z_0 ≥ 1, because P(z) does not decrease. It is clear that

$$P(z_0) \le z_0^{n-1} P(1)$$

in this case. Substituting this inequality into (6.1), we get (6.3). ∎
Setting z = ax with a positive constant a in (6.1), we obtain

$$x^n = \sum_{j=0}^{n-1} c_j a^{-j-1} x^{n-j-1}. \eqno(6.4)$$

Let

$$a \equiv 2 \max_{j=0,\dots,n-1} \sqrt[j+1]{c_j}.$$

Then

$$\sum_{j=0}^{n-1} c_j a^{-j-1} \le \sum_{j=0}^{n-1} 2^{-j-1} = 1 - 2^{-n} < 1.$$

Let x_0 be the extreme right-hand root of equation (6.4); then by (6.2) we have x_0 ≤ 1. Since z_0 = a x_0, we have derived

Corollary 1.6.2 The extreme right-hand root z_0 of equation (6.1) is nonnegative. Moreover,

$$z_0 \le 2 \max_{j=0,\dots,n-1} \sqrt[j+1]{c_j}.$$
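Corollary 1.6.2 is easy to test in the case n = 2, where the extreme right-hand root of z² = c_0 z + c_1 is known in closed form from the quadratic formula. The coefficient pairs below are arbitrary nonnegative values chosen for illustration.

```python
import math

# For n = 2, equation (6.1) reads z^2 = c_0 z + c_1; its extreme
# right-hand root is z_0 = (c_0 + sqrt(c_0^2 + 4 c_1)) / 2.
for c0, c1 in [(0.5, 0.25), (3.0, 1.0), (0.0, 4.0), (2.0, 9.0)]:
    z0 = (c0 + math.sqrt(c0 * c0 + 4 * c1)) / 2
    # Corollary 1.6.2: z_0 <= 2 * max_j (c_j)^(1/(j+1)).
    bound = 2 * max(c0, math.sqrt(c1))
    assert z0 <= bound + 1e-12
    # z_0 really is a root of z^2 = c_0 z + c_1.
    assert abs(z0 * z0 - (c0 * z0 + c1)) < 1e-9
```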
1.7 The Triangular Representation of Matrices

Let B(C^n) be the set of all linear operators (matrices) in C^n. A subspace M ⊂ C^n is an invariant subspace of an A ∈ B(C^n) if the relation h ∈ M implies Ah ∈ M. If P is a projector onto an invariant subspace of A, then

$$P A P = A P. \eqno(7.1)$$

By Schur's theorem (Marcus and Minc, 1964, Section I.4.10.2), for a linear operator A ∈ B(C^n) there is an orthonormal basis {e_k} in which A is a triangular matrix. That is,

$$A e_k = \sum_{j=1}^{k} a_{jk} e_j \quad \text{with} \quad a_{jk} = (A e_k, e_j) \quad (k = 1, ..., n), \eqno(7.2)$$

where (·,·) is the scalar product. This basis is called Schur's basis of the operator A. In addition,

$$a_{jj} = \lambda_j(A),$$

where λ_j(A) are the eigenvalues of A. According to (7.2),

$$A = D + V \eqno(7.3)$$

with a normal (diagonal) operator D defined by

$$D e_j = \lambda_j(A) e_j \quad (j = 1, ..., n)$$
and a nilpotent (upper-triangular) operator V defined by

V e_k = Σ_{j=1}^{k-1} a_{jk} e_j  (k = 2, ..., n).

We will call equality (7.3) the triangular representation of the matrix A. In addition, D and V will be called the diagonal part and the nilpotent part of A, respectively.
Put

P_j = Σ_{k=1}^{j} (·, e_k) e_k  (j = 1, ..., n),  P_0 = 0.
Then

0 ⊂ P_1 C^n ⊂ ... ⊂ P_n C^n = C^n.

Moreover,

A P_k = P_k A P_k;  V P_k = P_{k-1} V P_k;  D P_k = P_k D P_k  (k = 1, ..., n).   (7.4)

So A, V and D have the same chain of invariant subspaces.
Lemma 1.7.1 Let Q, V ∈ B(Cn ) and let V be a nilpotent operator. Suppose
that all the invariant subspaces of V and of Q are the same. Then V Q and
QV are nilpotent operators.
Proof: Since all the invariant subspaces of V and Q are the same, these
operators have the same basis of the triangular representation. Taking into
account that the diagonal entries of V are equal to zero, we easily determine
that the diagonal entries of QV and V Q are equal to zero. This proves the
required result. ∎
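In a common Schur basis the lemma reduces to a statement about triangular matrices: if V is strictly upper-triangular and Q is upper-triangular, the products QV and VQ again have zero diagonal. The sketch below (with arbitrary example entries) checks this and verifies nilpotency directly:

```python
# Sketch of Lemma 1.7.1 in a common triangular basis; entries are arbitrary examples.
n = 3
V = [[0, 2, -1],
     [0, 0, 4],
     [0, 0, 0]]          # nilpotent: strictly upper-triangular
Q = [[1, 5, 0],
     [0, -2, 3],
     [0, 0, 7]]          # triangular in the same basis as V

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

results = []
for P in (matmul(Q, V), matmul(V, Q)):
    Pn = P
    for _ in range(n - 1):      # raise P to the n-th power
        Pn = matmul(Pn, P)
    diag_zero = all(P[i][i] == 0 for i in range(n))
    results.append(diag_zero and all(x == 0 for row in Pn for x in row))
print(results)   # → [True, True]
```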
1.8 Notes
This book presupposes a knowledge of basic matrix theory, for which there
are good introductory texts. The books (Bellman, 1970), (Gantmaher, 1967),
(Marcus and Minc, 1964) are classical. For more details about the notions presented in Sections 1.1–1.4 also see (Collatz, 1966) and (Stewart and Sun, 1990).
Estimates for roots of algebraic equations similar to Corollary 1.6.2 can
be found in (Ostrowski, 1973, page 277).
References
[1] Bellman, R.E. (1970). Introduction to Matrix Analysis. McGraw-Hill, New York.
[2] Collatz, L. (1966). Functional Analysis and Numerical Mathematics. Academic Press, New York–London.
[3] Gantmaher, F. R. (1967). Theory of Matrices. Nauka, Moscow (in Russian).
[4] Gelfond, A. O. (1967). Calculations of Finite Differences. Nauka, Moscow (in Russian).
[5] Marcus, M. and Minc, H. (1964). A Survey of Matrix Theory and Matrix Inequalities. Allyn and Bacon, Boston.
[6] Ostrowski, A. M. (1973). Solution of Equations in Euclidean and Banach Spaces. Academic Press, New York–London.
[7] Stewart, G. W. and Sun Ji-guang (1990). Matrix Perturbation Theory. Academic Press, New York.
2. Norms of Matrix-Valued Functions

In the present chapter we derive estimates for the norms of operator-valued functions in a Euclidean space. In addition, we prove relations for eigenvalues of finite matrices, which improve Schur's and Brown's inequalities.
2.1 Estimates for the Euclidean Norm of the
Resolvent
Throughout the present chapter ‖·‖ means the Euclidean norm, that is, ‖·‖ = ‖·‖_2 (see Section 1.1).
Let A = (a_{jk}) be an n × n matrix (n > 1). The following quantity plays a key role in the sequel:

g(A) = (N²(A) − Σ_{k=1}^{n} |λ_k(A)|²)^{1/2}.   (1.1)

Recall that I is the unit matrix, N(A) is the Frobenius (Hilbert-Schmidt) norm of A, and λ_k(A) (k = 1, ..., n) are the eigenvalues taken with their multiplicities. Since

Σ_{k=1}^{n} |λ_k(A)|² ≥ |Trace A²|,

we get

g²(A) ≤ N²(A) − |Trace A²|.   (1.2)
In Section 2.2 we will prove the relations

g²(A) ≤ (1/2) N²(A* − A)   (1.3)

M.I. Gil': LNM 1830, pp. 11–34, 2003.
© Springer-Verlag Berlin Heidelberg 2003
and

g(e^{iτ} A + zI) = g(A)   (1.4)

for all τ ∈ R and z ∈ C. To formulate the result, for a natural n > 1 introduce the numbers

γ_{n,k} = (C^k_{n-1} / (n−1)^k)^{1/2}  (k = 1, ..., n−1)  and  γ_{n,0} = 1.

Here

C^k_{n-1} = (n−1)! / ((n−k−1)! k!)

are binomial coefficients. Evidently, for all n > 2,

γ²_{n,k} = (n−1)(n−2)···(n−k) / ((n−1)^k k!) ≤ 1/k!  (k = 1, 2, ..., n−1).   (1.5)
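Inequality (1.5) is easy to confirm numerically; the following sketch (an illustration, not part of the text) checks γ²_{n,k} = C^k_{n−1}/(n−1)^k ≤ 1/k! over a small range of n:

```python
# Sketch checking inequality (1.5) for small n.
import math

ok = True
for n in range(3, 12):
    for k in range(1, n):
        gamma_sq = math.comb(n - 1, k) / (n - 1) ** k   # gamma_{n,k}^2
        ok = ok and gamma_sq <= 1 / math.factorial(k) + 1e-15
print(ok)   # → True
```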
Theorem 2.1.1 Let A be a linear operator in C^n. Then its resolvent R_λ(A) = (A − λI)^{-1} satisfies the inequality

‖R_λ(A)‖ ≤ Σ_{k=0}^{n-1} γ_{n,k} g^k(A) / ρ^{k+1}(A, λ)  for any regular point λ of A,

where ρ(A, λ) = min_{k=1,...,n} |λ − λ_k(A)|.
The proof of this theorem is divided into a series of lemmas which are presented in Sections 2.3–2.6.
Theorem 2.1.1 is exact: if A is a normal matrix, then g(A) = 0 and

‖R_λ(A)‖ = 1/ρ(A, λ)  for all regular points λ of A.
Let A be an invertible n × n matrix. Then by Theorem 2.1.1,

‖A^{-1}‖ ≤ Σ_{k=0}^{n-1} γ_{n,k} g^k(A) / ρ_0^{k+1}(A),

where ρ_0(A) = ρ(A, 0) is the smallest modulus of the eigenvalues of A:

ρ_0(A) = inf_{k=1,...,n} |λ_k(A)|.
Moreover, Theorem 2.1.1 and inequalities (1.5) imply

Corollary 2.1.2 Let A be a linear operator in C^n. Then

‖R_λ(A)‖ ≤ Σ_{k=0}^{n-1} g^k(A) / (√(k!) ρ^{k+1}(A, λ))  for any regular point λ of A.
An independent proof of this corollary is presented in Section 2.6.
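For a 2 × 2 matrix everything in Corollary 2.1.2 is computable in closed form, so the bound can be tested directly. The sketch below (entries and the point λ are arbitrary choices, not from the text) compares ‖R_λ(A)‖ with Σ_k g^k(A)/(√(k!) ρ^{k+1}(A, λ)):

```python
# Sketch testing Corollary 2.1.2 on a 2x2 matrix with an arbitrary regular point.
import cmath, math

a11, a12, a21, a22 = 1.0, 3.0, 0.5, 2.0
lam = 0.25 + 1.0j                      # a regular point (not an eigenvalue)

# eigenvalues via the quadratic formula
tr, det = a11 + a22, a11 * a22 - a12 * a21
d = cmath.sqrt(tr * tr - 4 * det)
eigs = [(tr + d) / 2, (tr - d) / 2]

# g(A) from definition (1.1) and rho(A, lam)
N2 = a11**2 + a12**2 + a21**2 + a22**2
g = math.sqrt(N2 - sum(abs(z) ** 2 for z in eigs))
rho = min(abs(lam - z) for z in eigs)

# ||R_lam(A)|| = 1 / (smallest singular value of B = A - lam*I)
b11, b12, b21, b22 = a11 - lam, a12, a21, a22 - lam
s = abs(b11)**2 + abs(b12)**2 + abs(b21)**2 + abs(b22)**2   # Trace(B* B)
p = abs(b11 * b22 - b12 * b21) ** 2                          # det(B* B)
smin = math.sqrt((s - math.sqrt(s * s - 4 * p)) / 2)
resolvent_norm = 1.0 / smin

# n = 2, so the sum has the two terms k = 0, 1
bound = sum(g**k / (math.sqrt(math.factorial(k)) * rho ** (k + 1)) for k in range(2))
print(resolvent_norm <= bound)   # → True
```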
2.2 Examples
In this section we present some examples of calculations of g(A).
Example 2.2.1 Consider the matrix

A = ( a_{11}  a_{12} )
    ( a_{21}  a_{22} ),

where a_{jk} (j, k = 1, 2) are real numbers.
First, consider the case of non-real eigenvalues: λ_2(A) = λ̄_1(A). It can be written

det(A) = λ_1(A) λ̄_1(A) = |λ_1(A)|²

and

|λ_1(A)|² + |λ_2(A)|² = 2|λ_1(A)|² = 2 det(A) = 2[a_{11}a_{22} − a_{21}a_{12}].

Thus,

g²(A) = N²(A) − |λ_1(A)|² − |λ_2(A)|² = a²_{11} + a²_{12} + a²_{21} + a²_{22} − 2[a_{11}a_{22} − a_{21}a_{12}].

Hence,

g(A) = √((a_{11} − a_{22})² + (a_{21} + a_{12})²).   (2.1)
Let n = 2 and a matrix A have real entries again, but now let the eigenvalues of A be real. Then

λ²_1(A) + λ²_2(A) = Trace A².

Obviously,

A² = ( a²_{11} + a_{12}a_{21}       a_{11}a_{12} + a_{12}a_{22} )
     ( a_{21}a_{11} + a_{21}a_{22}  a_{21}a_{12} + a²_{22}      ).

We thus get the relation

λ²_1(A) + λ²_2(A) = a²_{11} + 2a_{12}a_{21} + a²_{22}.

Consequently,

g²(A) = N²(A) − λ²_1(A) − λ²_2(A) = a²_{11} + a²_{12} + a²_{21} + a²_{22} − (a²_{11} + 2a_{12}a_{21} + a²_{22}).

Hence,

g(A) = |a_{12} − a_{21}|.   (2.2)
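Both two-by-two formulas can be checked against definition (1.1) numerically. In the sketch below (example entries are arbitrary), g is computed from the eigenvalues and compared with (2.1) and (2.2):

```python
# Sketch checking formulas (2.1) and (2.2) against definition (1.1); entries arbitrary.
import cmath, math

def g(a11, a12, a21, a22):
    # g(A) from definition (1.1); eigenvalues via the quadratic formula
    tr, det = a11 + a22, a11 * a22 - a12 * a21
    d = cmath.sqrt(tr * tr - 4 * det)
    l1, l2 = (tr + d) / 2, (tr - d) / 2
    N2 = a11**2 + a12**2 + a21**2 + a22**2
    return math.sqrt(N2 - abs(l1)**2 - abs(l2)**2)

# non-real eigenvalues (discriminant < 0): formula (2.1)
a = (1.0, 2.0, -3.0, 4.0)
lhs1, rhs1 = g(*a), math.hypot(a[0] - a[3], a[1] + a[2])

# real eigenvalues (discriminant > 0): formula (2.2)
b = (1.0, 3.0, 0.5, 2.0)
lhs2, rhs2 = g(*b), abs(b[1] - b[2])

print(abs(lhs1 - rhs1) < 1e-9, abs(lhs2 - rhs2) < 1e-9)   # → True True
```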
Example 2.2.2 Let A be an upper-triangular matrix:

A = ( a_{11}  a_{12}  . . .  a_{1n} )
    ( 0       a_{22}  . . .  a_{2n} )
    ( .               . . .  .      )
    ( 0       . . .   0      a_{nn} ).

Then

g(A) = (Σ_{k=1}^{n} Σ_{j=1}^{k-1} |a_{jk}|²)^{1/2},   (2.3)

since the eigenvalues of a triangular matrix are its diagonal elements.
Example 2.2.3 Consider the matrix

A = ( −a_1  −a_2  . . .  −a_{n-1}  −a_n )
    (  1     0    . . .   0         0   )
    (  .     .    . . .   .         .   )
    (  0     0    . . .   1         0   )

with complex numbers a_k. Such matrices play a key role in the theory of scalar ordinary differential equations. Take into account that

A² = ( a²_1 − a_2  a_1 a_2 − a_3  . . .  a_1 a_{n-1} − a_n  a_1 a_n )
     ( −a_1        −a_2           . . .  −a_{n-1}           −a_n    )
     (  1           0             . . .   0                  0      )
     (  .           .             . . .   .                  .      )
     (  0           0             . . .   1                  0      ).

Thus we obtain Trace A² = a²_1 − 2a_2. Therefore

g²(A) ≤ N²(A) − |Trace A²| = n − 1 − |a²_1 − 2a_2| + Σ_{k=1}^{n} |a_k|².   (2.4)

Now let the a_k be real. Then (2.4) gives us the inequality

g²(A) ≤ n − 1 + 2a_2 + Σ_{k=2}^{n} a²_k.   (2.5)
2.3 Relations for Eigenvalues

Theorem 2.3.1 For any linear operator A in C^n,

g²(A) := N²(A) − Σ_{k=1}^{n} |λ_k(A)|² = 2N²(A_I) − 2Σ_{k=1}^{n} (Im λ_k(A))²,   (3.1)

where λ_k(A) are the eigenvalues of A with their multiplicities and A_I = (A − A*)/2i.
To prove this theorem we need the following two lemmas.
Lemma 2.3.2 For any linear operator A in C^n,

N²(V) = g²(A) ≡ N²(A) − Σ_{k=1}^{n} |λ_k(A)|²,

where V is the nilpotent part of A (see Section 1.7).
Proof: Let D be the diagonal part of A. Then, due to Lemma 1.7.1, both matrices V*D and D*V are nilpotent. Therefore,

Trace(D*V) = 0 and Trace(V*D) = 0.   (3.2)

It is easy to see that

Trace(D*D) = Σ_{k=1}^{n} |λ_k(A)|².   (3.3)

Since

A = D + V,   (3.4)

due to (3.2) and (3.3),

N²(A) = Trace((D + V)*(V + D)) = Trace(V*V + D*D) = N²(V) + Σ_{k=1}^{n} |λ_k(A)|²,   (3.5)

and the required equality is proved. ∎
Lemma 2.3.3 For any linear operator A in C^n,

N²(V) = 2N²(A_I) − 2Σ_{k=1}^{n} (Im λ_k(A))²,

where V is the nilpotent part of A.
Proof: Clearly,

−4(A_I)² = (A − A*)² = AA − AA* − A*A + A*A*.

But due to (3.2) and (3.4),

Trace(A − A*)² = Trace(V + D − V* − D*)²
= Trace[(V − V*)² + (V − V*)(D − D*) + (D − D*)(V − V*) + (D − D*)²]
= Trace(V − V*)² + Trace(D − D*)².

Hence,

N²(A_I) = N²(V_I) + N²(D_I),   (3.6)

where

V_I = (V − V*)/2i and D_I = (D − D*)/2i.

It is not hard to see that

N²(V_I) = (1/2) Σ_{m=1}^{n} Σ_{k=1}^{m-1} |a_{km}|² = (1/2) N²(V),

where a_{jk} are the entries of V in the Schur basis. Thus,

N²(V) = 2N²(A_I) − 2N²(D_I).

But

N²(D_I) = Σ_{k=1}^{n} (Im λ_k(A))².

Thus, we arrive at the required equality. ∎
The assertion of Theorem 2.3.1 follows from Lemmas 2.3.2 and 2.3.3.
The inequality (1.3) follows from Theorem 2.3.1.
Furthermore, take into account that the nilpotent parts of the matrices Ae^{iτ} and Ae^{iτ} + zI, with a real number τ and a complex one z, coincide. Hence, due to Lemma 2.3.2 we obtain the following

Corollary 2.3.4 For any linear operator A in C^n, a real number τ, and a complex number z, relation (1.4) holds.
Corollary 2.3.5 For arbitrary commuting linear operators A, B in C^n,

g(A + B) ≤ g(A) + g(B).

In fact, A and B have the same Schur basis. This clearly forces

V_{A+B} = V_A + V_B,

where V_{A+B}, V_A and V_B are the nilpotent parts of A + B, A and B, respectively. Due to Lemma 2.3.2 the relations

g(A) = N(V_A),  g(B) = N(V_B),  g(A + B) = N(V_{A+B})

are true. Now the triangle inequality for the norm implies the result.
Corollary 2.3.6 For any n × n matrix A and real numbers t, τ the equality

N²(Ae^{it} − A*e^{−it}) − Σ_{k=1}^{n} |e^{it}λ_k(A) − e^{−it}λ̄_k(A)|²
= N²(Ae^{iτ} − A*e^{−iτ}) − Σ_{k=1}^{n} |e^{iτ}λ_k(A) − e^{−iτ}λ̄_k(A)|²

is true.
The proof consists in replacing A by Ae^{it} and Ae^{iτ} and using Theorem 2.3.1.
In particular, take t = 0 and τ = π/2. Due to Corollary 2.3.6,

N²(A_I) − Σ_{k=1}^{n} (Im λ_k(A))² = N²(A_R) − Σ_{k=1}^{n} (Re λ_k(A))²

with A_R = (A + A*)/2.
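Relation (3.1) can be verified directly on an upper-triangular matrix, where the eigenvalues are just the diagonal entries. The sketch below (with arbitrary example entries) compares both sides of (3.1):

```python
# Sketch verifying relation (3.1) on an upper-triangular example matrix.
A = [[1 + 2j, 3 - 1j, 0.5 + 0j],
     [0j,    -2 + 1j, 0 + 4j],
     [0j,     0j,     0.5 - 3j]]
n = len(A)
lam = [A[k][k] for k in range(n)]        # eigenvalues of a triangular matrix

N2_A = sum(abs(A[j][k]) ** 2 for j in range(n) for k in range(n))
g2 = N2_A - sum(abs(z) ** 2 for z in lam)

# A_I = (A - A*) / 2i, where (A*)[j][k] = conj(A[k][j])
AI = [[(A[j][k] - A[k][j].conjugate()) / 2j for k in range(n)] for j in range(n)]
N2_AI = sum(abs(AI[j][k]) ** 2 for j in range(n) for k in range(n))

rhs = 2 * N2_AI - 2 * sum(z.imag ** 2 for z in lam)
print(abs(g2 - rhs) < 1e-9)   # → True
```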
2.4 An Auxiliary Inequality

Lemma 2.4.1 For arbitrary positive numbers a_1, ..., a_n and m = 1, ..., n, we have

Σ_{1≤k_1<k_2<...<k_m≤n} a_{k_1} ··· a_{k_m} ≤ n^{−m} C^m_n [Σ_{k=1}^{n} a_k]^m.   (4.1)
Proof: Consider the following function of n positive variables y_1, ..., y_n:

R_m(y_1, ..., y_n) ≡ Σ_{1≤k_1<k_2<...<k_m≤n} y_{k_1} y_{k_2} ··· y_{k_m}.

Let us prove that under the condition

Σ_{k=1}^{n} y_k = nb,   (4.2)

where b is a given positive number, the function R_m has a unique conditional maximum. To this end denote

F_j(y_1, ..., y_n) ≡ ∂R_m(y_1, ..., y_n)/∂y_j.

Obviously, F_j(y_1, ..., y_n) does not depend on y_j, depends symmetrically on the other variables, and monotonically increases with respect to each of its variables. The conditional extremums of R_m under (4.2) are the roots of the equations

F_j(y_1, ..., y_n) − λ ∂/∂y_j Σ_{k=1}^{n} y_k = 0  (j = 1, ..., n),
where λ is the Lagrange factor. Therefore,

F_j(y_1, ..., y_n) = λ  (j = 1, ..., n).

Since F_j(y_1, ..., y_n) does not depend on y_j, and F_k(y_1, ..., y_n) does not depend on y_k, the equality

F_j(y_1, ..., y_n) = F_k(y_1, ..., y_n) = λ

for all k ≠ j is possible if and only if y_j = y_k. Thus under (4.2) R_m has a unique extremum when

y_1 = y_2 = ... = y_n = b.   (4.3)

But

R_m(b, ..., b) = b^m Σ_{1≤k_1<k_2<...<k_m≤n} 1 = b^m C^m_n.   (4.4)

Let us check that (4.3) gives us the maximum (for m = 1 relation (4.1) is an identity, so let m ≥ 2). Letting

y_1 → nb and y_k → 0 (k = 2, ..., n),

we get

R_m(y_1, ..., y_n) → 0.

Since the extremum (4.3) is unique, it is the maximum. Thus, under (4.2),

R_m(y_1, ..., y_n) ≤ b^m C^m_n  (y_k ≥ 0, k = 1, ..., n).

Take y_j = a_j and

b = (a_1 + ... + a_n)/n.

Then

R_m(a_1, ..., a_n) ≤ C^m_n n^{−m} [Σ_{k=1}^{n} a_k]^m.

We thus get the required result. ∎
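Inequality (4.1) lends itself to a brute-force check over all index combinations; the sketch below (the numbers a_k are arbitrary examples) tests every m for a small n:

```python
# Sketch checking inequality (4.1) by enumerating all combinations; a_k arbitrary.
import itertools, math

a = [0.5, 2.0, 1.0, 3.5, 0.25]
n, S = len(a), sum(a)

ok = True
for m in range(1, n + 1):
    lhs = sum(math.prod(t) for t in itertools.combinations(a, m))
    rhs = n ** (-m) * math.comb(n, m) * S ** m
    ok = ok and lhs <= rhs + 1e-12
print(ok)   # → True
```

Note the equality case m = 1, where both sides reduce to Σ a_k.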
2.5 Euclidean Norms of Powers
of Nilpotent Matrices
Theorem 2.5.1 For any nilpotent operator V in C^n, the inequalities

‖V^k‖ ≤ γ_{n,k} N^k(V)  (k = 1, ..., n−1)

are valid.
Proof: Since V is nilpotent, due to Schur's theorem we can represent it by an upper-triangular matrix with zero diagonal:

V = (a_{jk})_{j,k=1}^{n}  with  a_{jk} = 0  (j ≥ k).

Denote

‖x‖_m = (Σ_{k=m}^{n} |x_k|²)^{1/2}  for m < n,

where x_k are the coordinates of a vector x. We can write

‖Vx‖²_m = Σ_{j=m}^{n-1} |Σ_{k=j+1}^{n} a_{jk} x_k|²  for all m ≤ n − 1.

Now we have (by Schwarz's inequality) the relation

‖Vx‖²_m ≤ Σ_{j=m}^{n-1} h_j ‖x‖²_{j+1},   (5.1)

where

h_j = Σ_{k=j+1}^{n} |a_{jk}|²  (j < n).

Further, by Schwarz's inequality,

‖V²x‖²_m = Σ_{j=m}^{n-1} |Σ_{k=j+1}^{n} a_{jk} (Vx)_k|² ≤ Σ_{j=m}^{n-1} h_j ‖Vx‖²_{j+1}.

Here (Vx)_k are the coordinates of Vx. Taking into account (5.1), we obtain

‖V²x‖²_m ≤ Σ_{j=m}^{n-1} h_j Σ_{k=j+1}^{n-1} h_k ‖x‖²_{k+1} = Σ_{m≤j<k≤n-1} h_j h_k ‖x‖²_{k+1}.

Hence,

‖V²‖² ≤ Σ_{1≤j<k≤n-1} h_j h_k.

Repeating these arguments, we arrive at the inequality