The full development of this theory is outside the scope of this work.

where we have:

a = t0 < t1 < · · · < tn = b,    δ = max_{k=1,...,n} (tk − tk−1)

Let us now consider a stochastic process Zt (for which we wish to define the stochastic differential) and a standard Brownian motion wt. If there is a stochastic process zt such that

Zt = Z0 + ∫_{0}^{t} zs dws

then Zt is said to admit the stochastic differential dZt = zt dwt. This differential is interpreted as follows: the stochastic differential dZt represents the variation (over a very short period of time dt) of Zt, triggered by a random variation dwt weighted by zt, which represents the volatility of Zt at the moment t.
More generally, the definition of dXt = at(Xt) · dt + bt(Xt) · dwt is given by

X_{t2} − X_{t1} = ∫_{t1}^{t2} at(Xt) dt + ∫_{t1}^{t2} bt(Xt) dwt

The stochastic differential has some of the properties of ordinary differentials, such as linearity. Not all of them, however, remain true. For example,9 the stochastic differential of a product of two stochastic processes whose factors have known stochastic differentials,

dXt(i) = at(i) dt + bt(i) dwt    (i = 1, 2)

is given by:

d(Xt(1) Xt(2)) = Xt(1) dXt(2) + Xt(2) dXt(1) + bt(1) bt(2) dt

Another property, which corresponds to the derivation formula for a compound function, is the well-known Itô formula.10 This formula gives the differential of a function of two variables: a stochastic process whose stochastic differential is known, and time. If the process Xt has the stochastic differential dXt = at dt + bt dwt and if f(x, t) is a C²-class function, the process f(Xt, t) will admit the following stochastic differential:

df(Xt, t) = [ft(Xt, t) + fx(Xt, t) at + ½ fxx(Xt, t) bt²] · dt + fx(Xt, t) bt · dwt

From now on, we will leave out the argument Xt in the expressions of the functions a and b.
10
Also known as Itô's lemma.
Appendix 3
Statistical Concepts1

3.1 INFERENTIAL STATISTICS
3.1.1 Sampling
3.1.1.1 Principles
In inferential statistics, we are usually interested in a population and in the variables measured on the individual members of that population. Unfortunately, the population as a whole is often far too large, and sometimes not sufficiently well known, to be handled directly. To obtain observed information, therefore, we must confine ourselves to a subset of the population, known as a sample. Then, on the basis of observations made on that sample, we attempt to deduce (infer) conclusions about the population.
The operation that consists of extracting the sample from the population is known as sampling. It is here that probability theory becomes involved, constituting the link between the population and the sample. Sampling is said to be simply random when the individual members are extracted independently from the population and all have the same probability of being chosen. In practice, this is not necessarily the case, and the procedures set up for carrying out the sampling process must imitate chance as closely as possible.

3.1.1.2 Sampling distribution
Suppose that we are interested in a parameter θ of the population. If we extract a sample x1, x2, . . . , xn from the population, we can calculate the value of the corresponding parameter for this sample: θ̂(x1, x2, . . . , xn).
As the sampling is at the origin of the fortuitous aspect of this procedure, another sample x1′, x2′, . . . , xn′ would have given another parameter value θ̂(x1′, x2′, . . . , xn′).
We are therefore constructing a r.v. Θ̂, whose various possible values are the results of the calculation of θ̂ for all the possible samples. The law of probability of this r.v. is known as the sampling distribution.
In order to illustrate this concept, let us consider the sampling distribution for the mean of the population, and suppose that the variable considered has mean µ and variance σ². On the basis of the various samples, it is possible to calculate an average on each occasion:

x̄ = (1/n) Σ_{i=1}^{n} xi,    x̄′ = (1/n) Σ_{i=1}^{n} xi′,    · · ·

1
Readers interested in finding out more about the concepts developed below should read: Ansion G., Économétrie pour l'entreprise, Eyrolles, 1988. Dagnelie P., Théorie et méthodes statistiques (2 volumes), Presses Agronomiques de Gembloux, 1975. Johnston J., Econometric Methods, McGraw-Hill, 1972. Justens D., Statistique pour décideurs, De Boeck, 1988. Kendall M. and Stuart A., The Advanced Theory of Statistics (3 volumes), Griffin, 1977.

We thus define a r.v. X̄ for which it can be demonstrated that:

E(X̄) = µ
var(X̄) = σ²/n

The first of these two relations justifies the choice of the average of the sample as an estimator for the mean of the population. It is referred to as an unbiased estimator.
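These two relations are easy to check by simulation. The sketch below (Python with numpy; the normal population and the constants µ = 5, σ = 2, n = 25 are arbitrary choices for illustration) draws a large number of samples and compares the mean and variance of the resulting averages with µ and σ²/n.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, n_samples = 5.0, 2.0, 25, 200_000

# Each row is one sample of size n; each sample yields one average x bar.
means = rng.normal(mu, sigma, size=(n_samples, n)).mean(axis=1)

print(round(means.mean(), 2))   # should be close to mu = 5
print(round(means.var(), 3))    # should be close to sigma**2 / n = 0.16
```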

Note
If we examine in a similar way the sampling distribution for the variance, calculated on the basis of a sample using s² = (1/n) Σ_{i=1}^{n} (xi − x̄)², the associated r.v. S² will be such that

E(S²) = ((n − 1)/n) σ²

We are no longer looking at an unbiased estimator, but at an asymptotically unbiased estimator (for n tending towards infinity). For this reason, we frequently choose the following expression as an estimator for the variance:

(1/(n − 1)) Σ_{i=1}^{n} (xi − x̄)²
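The bias factor (n − 1)/n is easy to observe numerically. The sketch below (Python with numpy; the population and the sample size are arbitrary choices) computes both versions of the variance estimator on many samples and compares their expectations.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2, n, trials = 4.0, 5, 400_000

samples = rng.normal(0.0, 2.0, size=(trials, n))
xbar = samples.mean(axis=1, keepdims=True)
ssq = ((samples - xbar) ** 2).sum(axis=1)

s2_biased = ssq / n          # divides by n: biased
s2_unbiased = ssq / (n - 1)  # divides by n - 1: unbiased

print(round(s2_biased.mean(), 2))    # close to (n-1)/n * sigma^2 = 3.2
print(round(s2_unbiased.mean(), 2))  # close to sigma^2 = 4.0
```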

3.1.2 Two problems of inferential statistics
3.1.2.1 Estimation
If the problem is therefore one of estimating a parameter θ of the population, we must construct an estimator Θ̂ that is a function of the values observed through the sampling procedure. It is therefore important for this estimator to be of good quality for evaluating the parameter θ. We thus often require an unbiased estimator: E(Θ̂) = θ.
Nevertheless, of all the unbiased estimators, we want the estimator adopted to have other properties, most notably a dispersion around the central value θ that is as small as possible: its variance var(Θ̂) = E((Θ̂ − θ)²) must be minimal.2
Alongside this selective estimation (there is only one estimate for a given sample), a precision is generally calculated for the estimation by determining an interval [Θ̂1; Θ̂2], centred on the value Θ̂, that contains the true value of the parameter θ to be estimated with a given probability:

Pr[Θ̂1 ≤ θ ≤ Θ̂2] = 1 − α

with α = 0.05, for example. This interval is termed the confidence interval for θ and the number (1 − α) is the confidence coefficient.
This estimation by confidence interval is only possible if one knows the sampling distribution of the estimator, for example because the population obeys some known distribution, or because certain asymptotic results, such as the central limit theorem, can be applied to it.
Let us examine, by way of an example, the estimation of the mean of a normal population through a confidence interval. It is already known that the 'best' estimator is the sampling average X̄, which is distributed following a normal law with parameters µ and σ/√n, and
2
For example, the sample average is the minimum-variance unbiased estimator for the mean of the population.

the r.v. (X̄ − µ)/(σ/√n) is thus standard normal. If the quantile function of this last distribution is termed Q(u), we have:

Pr[ Q(α/2) ≤ (X̄ − µ)/(σ/√n) ≤ Q(1 − α/2) ] = 1 − α
Pr[ X̄ − (σ/√n) Q(1 − α/2) ≤ µ ≤ X̄ − (σ/√n) Q(α/2) ] = 1 − α
Pr[ X̄ − (σ/√n) Q(1 − α/2) ≤ µ ≤ X̄ + (σ/√n) Q(1 − α/2) ] = 1 − α

This last equality makes up the confidence interval formula for the mean; it can also be written more concisely as:

I.C.(µ) : X̄ ± (σ/√n) Q(1 − α/2)    (s.p. α)

We note that in this last formula, the standard deviation σ of the population is generally not known. If it is replaced by its estimator calculated on the basis of the sample, the quantile of the normal distribution must be replaced by the corresponding quantile of the Student distribution with (n − 1) degrees of freedom.
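As a minimal numerical sketch of this formula (Python, standard library only; the sample summary figures are hypothetical), the 95 % confidence interval is:

```python
import math
from statistics import NormalDist

# Hypothetical sample summary: known sigma, observed average of n values.
xbar, sigma, n, alpha = 10.3, 2.0, 36, 0.05

q = NormalDist().inv_cdf(1 - alpha / 2)   # Q(1 - alpha/2), about 1.96
half_width = sigma / math.sqrt(n) * q

lower, upper = xbar - half_width, xbar + half_width
print(round(lower, 2), round(upper, 2))   # 9.65 10.95
```

When σ is itself estimated from the sample, the remark above applies: the normal quantile would be replaced by a Student t quantile with (n − 1) degrees of freedom.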

3.1.2.2 Hypothesis test
The aim of a hypothesis test is to confirm or refute, on the basis of a sample, a hypothesis formulated about a population. In this way, we distinguish:

• The goodness-of-fit tests: verifying whether the population from which the sample is taken is distributed according to a given law of probability.
• The independence tests: testing independence between certain classification criteria defined on the population (these are also used for testing independence between r.v.s).
• The compliance tests: verifying whether a population parameter is equal to a given value.
• The homogeneity tests: verifying whether the values for a parameter measured on more than one population are the same (this requires one sample to be extracted per population).

The procedure for carrying out a hypothesis test can be described as follows. After defining the hypothesis to be tested H0, also known as the null hypothesis, and the alternative hypothesis H1, we determine under H0 the sampling distribution of the parameter to be studied. With the confidence coefficient (1 − α) fixed, the sample is allocated either to the region of acceptance (AH0) or to the region of rejection (RH0) of H0.
Four situations may therefore arise, depending on the reality on one hand and the decision taken on the other (see Table A3.1).
Zones (a) and (d) in Table A3.1 correspond to correct conclusions of the test. In zone (b) the hypothesis is rejected although it is true; this is a first-type error, whose probability is the complement α of the confidence coefficient fixed beforehand. In zone
Table A3.1 Hypothesis test conclusions

                 Decision
Reality      AH0         RH0

H0            a           b
H1            c           d

(c), the hypothesis is accepted although it is false; this is a second-type error, whose probability β is unknown. A good test will therefore have a small β; the complement (1 − β) of this probability is called the power of the test.
By way of an example, we present the compliance test for the mean of a normal population. The hypothesis under test is, for example, H0 : µ = 1, and the rival hypothesis is written H1 : µ ≠ 1.
Under H0, the r.v. (X̄ − 1)/(σ/√n) follows a standard normal law, and the hypothesis being tested will therefore be rejected when:

|X̄ − 1| / (σ/√n) > Q(1 − α/2)    (s.p. α)

Again, the normal distribution quantile is replaced by the quantile of the Student distribution with (n − 1) degrees of freedom if the standard deviation of the population is replaced by the standard deviation of the sample.
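A minimal sketch of this rejection rule (Python, standard library only; the sample figures are hypothetical):

```python
import math
from statistics import NormalDist

# Hypothetical sample: test H0: mu = 1 against H1: mu != 1.
xbar, sigma, n, alpha = 1.4, 1.5, 100, 0.05

z = (xbar - 1.0) / (sigma / math.sqrt(n))   # standardised statistic
q = NormalDist().inv_cdf(1 - alpha / 2)     # Q(1 - alpha/2)

reject = abs(z) > q
print(round(z, 2), reject)                  # 2.67 True
```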

3.2 REGRESSIONS
3.2.1 Simple regression
Let us assume that a variable Y depends on another variable X through a linear relation Y = aX + b, and that a series of observations of this pair of variables is available: (xt, yt), t = 1, . . . , n.

3.2.1.1 Estimation of model
If the observation pairs are represented on the (X, Y ) plane, it will be noticed that there
are differences between them and a straight line (see Figure A3.1). These differences

Figure A3.1 Simple regression

may arise, especially in the field of economics, through failure to take account of certain explanatory factors of variable Y.
It is therefore necessary to find the straight line that passes as closely as possible to the point cloud, that is, the straight line for which the differences εt = yt − (axt + b) are as small as possible overall. The criterion most frequently used is that of minimising the sum of the squares of these differences (referred to as the least squares method). The problem is therefore one of searching for the parameters a and b for which the expression

Σ_{t=1}^{n} εt² = Σ_{t=1}^{n} [yt − (axt + b)]²

is minimal. It can easily be shown that these parameters equal:

â = sxy / sx² = Σ_{t=1}^{n} (xt − x̄)(yt − ȳ) / Σ_{t=1}^{n} (xt − x̄)²

b̂ = ȳ − â x̄

These are unbiased estimators of the real unknown parameters a and b. In addition, of all the unbiased estimators expressed linearly as a function of the yt, they are the ones with the smallest variance.3 The straight line obtained using this procedure is known as the regression line.
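These formulas translate directly into code. The sketch below (Python with numpy; the data are simulated from a hypothetical relation Y = 2X + 1 with noise, so the estimates should land near the true values):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, size=50)   # Y = aX + b + noise

# Least-squares estimators: a_hat = s_xy / s_x^2, b_hat = y_bar - a_hat x_bar.
a_hat = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
b_hat = y.mean() - a_hat * x.mean()

print(a_hat, b_hat)   # close to the true values a = 2 and b = 1
```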

3.2.1.2 Validation of model
The significantly explanatory character of the variable X in this model can be proved by testing the hypothesis H0 : a = 0.
If we are led to reject this hypothesis, it is because X significantly explains Y through the model, which is therefore validated. Because, under certain probability hypotheses on the residuals εt, the estimator of a is distributed according to a Student law with (n − 2) degrees of freedom, the hypothesis will be rejected (and the model therefore accepted) if

|â| / sâ > t_{1−α/2}^{(n−2)}    (s.p. α)

where sâ is the standard deviation of the estimator of a, measured on the observations.

3.2.2 Multiple regression
The regression model that we have just presented can be generalised when several explanatory variables are involved at once:

Y = α0 + α1 X1 + · · · + αk Xk
3
They are referred to as BLUE (Best Linear Unbiased Estimators).

In this case, let us present the observations and the parameters in matrix form: X is the matrix of format (n, k + 1) whose row i is (1, xi1, . . . , xik), Y = (y1 y2 · · · yn)ᵗ and α = (α0 α1 · · · αk)ᵗ. It can then be shown that the vector of parameter estimates is given by:

α̂ = (Xᵗ X)⁻¹ (Xᵗ Y)
In addition, the Student validation test shown for the simple regression also applies here. It is used to test the significantly explanatory nature of a variable within the multiple model, the only alteration being the number of degrees of freedom, which passes from (n − 2) to (n − k − 1). We should mention that there are also other tests, for the overall validity of the multiple regression model.
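A minimal sketch of the matrix formula (Python with numpy; the data are simulated from a hypothetical model Y = 1 + 2X1 − 3X2 plus noise):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 3.0 * x2 + rng.normal(0.0, 0.1, size=n)

# Design matrix with a leading column of ones, as in the text.
X = np.column_stack([np.ones(n), x1, x2])

# alpha_hat = (X^t X)^{-1} (X^t Y), solved without forming the inverse.
alpha_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(alpha_hat)   # close to [1, 2, -3]
```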

3.2.3 Nonlinear regression
It may turn out that the relation allowing Y to be explained by X1, X2, . . . , Xk is not linear: Y = f(X1, X2, . . . , Xk).
In this case, the relation can sometimes be made linear by a simple analytical conversion. For example, Y = aX^b is converted by a logarithmic transformation:

ln Y = ln a + b ln X
Y* = a* + bX*

We are thus brought back to a linear regression model.
Other models cannot be transformed quite so simply. Thus, Y = a + X^b is not equivalent to a linear model. In such cases, more sophisticated techniques, generally of an iterative nature, must be used to estimate the parameters for this type of model.
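The logarithmic conversion above can be sketched as follows (Python with numpy; the power law Y = 3X^1.5 is a hypothetical example, generated without noise so that the transformation recovers the parameters exactly):

```python
import numpy as np

# Hypothetical exact data from the power law Y = 3 * X^1.5.
x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
y = 3.0 * x ** 1.5

# ln Y = ln a + b ln X is linear in ln X, so an ordinary regression applies.
b_hat, log_a_hat = np.polyfit(np.log(x), np.log(y), 1)
a_hat = np.exp(log_a_hat)

print(round(a_hat, 6), round(b_hat, 6))   # 3.0 1.5
```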
Appendix 4
Extreme Value Theory

4.1 EXACT RESULT
Let us consider a sequence of r.v.s X1, X2, . . . , Xn, independent and identically distributed with a common distribution function FX. Let us also consider the sequence of r.v.s Z1, Z2, . . . , Zn defined by:

Zk = max(X1, . . . , Xk)    k = 1, . . . , n

The d.f. of Zn is given by:

F⁽ⁿ⁾(z) = Pr[max(X1, . . . , Xn) ≤ z]
        = Pr([X1 ≤ z] ∩ · · · ∩ [Xn ≤ z])
        = Pr[X1 ≤ z] · · · Pr[Xn ≤ z]
        = [FX(z)]ⁿ

Note
When one wishes to study the distribution of an extreme Zn for a large number n of r.v.s, the precise formula established above is not greatly useful. In fact, we need a result that does not depend essentially on the d.f., as FX is not necessarily known with any great accuracy. In addition, when n tends towards infinity, the r.v. Zn tends towards a degenerate r.v., as:

lim_{n→∞} F⁽ⁿ⁾(z) = 0 if FX(z) < 1
                    1 if FX(z) = 1

It was for this reason that asymptotic extreme value theory was developed.
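The exact result is easy to confirm by simulation (Python with numpy; the uniform law on [0, 1], with FX(z) = z, is an arbitrary choice for which F⁽ⁿ⁾(z) = zⁿ):

```python
import numpy as np

rng = np.random.default_rng(4)
n, trials = 5, 200_000

# Maxima of n i.i.d. uniform(0, 1) r.v.s: F_X(z) = z, so F^(n)(z) = z^n.
z_max = rng.random((trials, n)).max(axis=1)

for z in (0.5, 0.8, 0.95):
    print(z, round((z_max <= z).mean(), 3), round(z ** n, 3))
```

The empirical d.f. of the simulated maxima matches zⁿ to within sampling error.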

4.2 ASYMPTOTIC RESULTS
Asymptotic extreme value theory originates in the work of R. A. Fisher,1 and the problem
was fully solved by B. Gnedenko.2

4.2.1 Extreme value theorem
The extreme value theorem states that under the hypothesis of independence and equal
distribution of r.v.s X1 , X2 , . . . , Xn , if there are also two sequences of coefп¬Ѓcients О±n > 0

1
Fisher R. A. and Tippett L. H. C., Limiting forms of the frequency distribution of the largest or smallest member of a sample, Proceedings of the Cambridge Philosophical Society, Vol. 24, 1928, pp. 180–90.
2
Gnedenko B. V., On the distribution limit of the maximum term of a random series, Annals of Mathematics, Vol. 44, 1943, pp. 423–53.

and βn (n = 1, 2, . . .) such that the limit (for n → ∞) of the random variable

Yn = [max(X1, . . . , Xn) − βn] / αn

is not degenerate, then this limit admits a law of probability defined by a distribution function that must take one of the following three forms:

Λ(z) = exp[−e^{−z}]

Φ(z) = 0 for z ≤ 0;    exp[−z^{−k}] for z > 0

Ψ(z) = exp[−(−z)^k] for z < 0;    1 for z ≥ 0

Here, k is a positive constant. These three laws are known respectively as Gumbel's law for Λ, Fréchet's law for Φ and Weibull's law for Ψ.
The parameter αn is a parameter measuring the dispersion of the law of probability. The parameter βn, meanwhile, is a location parameter that, when n tends towards infinity, tends towards the mode of the limit distribution.3
We should point out that although the hypotheses of independence and identical distribution of the initial r.v.s are demanding, the extreme value theorem allows for extensions when these hypotheses partly fail to hold.

4.2.2 Attraction domains
The question arises of knowing for which type of initial d.f. FX the distribution of extremes tends towards Gumbel's, Fréchet's or Weibull's law. Gnedenko has also provided an answer to this question. These sets of d.f.s FX are the attraction domains of each of the three laws.
The attraction domain of Gumbel's law is characterised by the presence of a number x0 for which FX(x0) = 1 and FX(x) < 1 when x < x0, such that there exists a continuous function g verifying:

lim_{x→x0−} g(x) = 0

lim_{x→x0−} [1 − FX(x(1 + y g(x)))] / [1 − FX(x)] = e^{−y}    ∀y

It will be seen that the initial laws FX that verify this condition are laws whose density has a tail with at least exponential decrease, such as the normal law, the exponential law or the chi-square law.
The attraction domain of Fréchet's law is characterised by the presence of a positive parameter k such that

lim_{x→∞} [1 − FX(x)] / [1 − FX(ux)] = u^k    ∀u > 0

3
That is, the value that corresponds to the maximum of the probability density.

The laws covered by this description are the laws whose tails decrease less rapidly than the exponential, such as Student's law, Cauchy's law and the stable Pareto laws.
Finally, the attraction domain of Weibull's law is characterised by the presence of a number x0 for which FX(x0) = 1 and FX(x) < 1 when x < x0, and the presence of a positive parameter k, such that

lim_{x→0−} [1 − FX(x0 + ux)] / [1 − FX(x0 + x)] = u^k    ∀u > 0

This category contains the bounded-support distributions, such as the uniform law.

4.2.3 Generalisation
A. F. Jenkinson has been able to provide GnedenkoвЂ™s result with a uniп¬Ѓed form.
In fact, if for Fréchet's law Φ we set z = 1 − τy and k = −1/τ, we find, when τ < 0,

exp[−z^{−k}] = exp[−(1 − τy)^{1/τ}]

a relation valid for z > 0, that is, y > 1/τ (for the other values of y, the d.f. takes the value 0).
In the same way, for Weibull's law Ψ, we set z = τy − 1 and k = 1/τ. We then find, when τ > 0,

exp[−(−z)^k] = exp[−(1 − τy)^{1/τ}]

a relation valid for z < 0, that is, y < 1/τ (for the other values of y, the d.f. takes the value 1).
We therefore have the same analytical expression in both cases. We will now see that the same applies to Gumbel's law Λ. By passage to the limit, we can easily find:

lim_{τ→0±} exp[−(1 − τy)^{1/τ}] = exp[− lim_{n→±∞} (1 − y/n)ⁿ] = exp[−e^{−y}]

which is the expression set out in Gumbel's law.
To sum up: by setting a(y) = exp[−(1 − τy)^{1/τ}], the d.f. FY of the extreme limit distribution is written as follows:

If τ < 0, FY(y) = 0 if y ≤ 1/τ; a(y) if y > 1/τ    (Fréchet's law).
If τ = 0, FY(y) = a(y) ∀y    (Gumbel's law).
If τ > 0, FY(y) = a(y) if y < 1/τ; 1 if y ≥ 1/τ    (Weibull's law).

This, of course, is the result shown in Section 7.4.2.
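This unified form can be written down directly in code (a minimal Python sketch; the function name gev_cdf is ours, and the τ = 0 branch uses the Gumbel limit established above):

```python
import math

def gev_cdf(y, tau):
    """Unified d.f. of the extreme limit law: exp(-(1 - tau*y)^(1/tau)),
    with exp(-exp(-y)) (Gumbel's law) as the tau -> 0 limit."""
    if tau == 0.0:
        return math.exp(-math.exp(-y))          # Gumbel's law
    if tau < 0.0 and y <= 1.0 / tau:
        return 0.0                              # Frechet branch, left of 1/tau
    if tau > 0.0 and y >= 1.0 / tau:
        return 1.0                              # Weibull branch, right of 1/tau
    return math.exp(-(1.0 - tau * y) ** (1.0 / tau))

# A small non-zero tau should nearly reproduce the Gumbel value.
print(round(gev_cdf(1.0, 0.0), 4), round(gev_cdf(1.0, 1e-6), 4))   # 0.6922 0.6922
```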
Appendix 5
Canonical Correlations

5.1 GEOMETRIC PRESENTATION OF THE METHOD
The aim of canonical analysis1 is to study the linear relations that exist between the static spreads and the dynamic spreads observed on the same sample. We are looking for a linear combination of the static spreads and a linear combination of the dynamic spreads that are as well correlated as possible.
We therefore have two sets of characters: x1, x2, . . . , xp on one hand and y1, y2, . . . , yq on the other. In addition, the characters are assumed to be centred, standardised and observed on the same number n of individuals.
The two sets of characters generate respective associated vectorial subspaces V1 and V2 of Rⁿ. We also introduce the matrices X and Y, with respective formats (n, p) and (n, q), whose various columns are the observations relative to the different characters.
As the characters are centred, the same will apply to the vectorial subspaces. Geometrically, therefore, the problem of canonical analysis can be presented as follows: we need to find ξ ∈ V1 and η ∈ V2 such that cos²(ξ, η) = r²(ξ, η) is maximised.

5.2 SEARCH FOR CANONICAL CHARACTERS
Let us assume that the characters ξ1 and η1 are solutions to the problem (see Figure A5.1). The angle between ξ1 and η1 does not depend on their norm (length): in fact, V1 and V2 are invariant when the base vectors are multiplied by a scalar, and therefore cos²(ξ1, η1) does not depend on the base vector norms. It is then assumed that ‖ξ1‖ = ‖η1‖ = 1.
The character η1 must be collinear with the orthogonal projection of ξ1 onto V2, which is the vector of V2 that makes a minimum angle with ξ1. This condition is written

A2 ξ1 = r1 η1

where r1² = cos²(ξ1, η1) and A2 is the operator of the orthogonal projection onto V2. In the same way, we have A1 η1 = r1 ξ1. These two relations produce the system

A1 A2 ξ1 = λ1 ξ1
A2 A1 η1 = λ1 η1

where λ1 = r1² = cos²(ξ1, η1).
It is therefore deduced that ξ1 and η1 are respectively the eigenvectors of the operators A1 A2 and A2 A1 associated with the same highest eigenvalue λ1, this value being equal

1
A detailed description of this method, and of other multivariate statistical methods, can be found in Chatfield C. and Collins A. J., Introduction to Multivariate Analysis, Chapman & Hall, 1980; and Saporta G., Probabilities, Data Analysis and Statistics, Technip, 1990.

Figure A5.1 Canonical correlations

to their squared cosine or their squared correlation. The characters ξ1 and η1 are deduced from each other by a simple linear application:

η1 = (1/√λ1) A2 ξ1    and    ξ1 = (1/√λ1) A1 η1

The following canonical characters are the eigenvectors of A1 A2 associated with the eigenvalues λi sorted in decreasing order. If the canonical characters of order i are written

ξi = a1 x1 + · · · + ap xp    and    ηi = b1 y1 + · · · + bq yq

(in other words, in matrix terms, ξi = Xa and ηi = Y b) and if the diagonal matrix of the weights is expressed as D, it can be shown that:

b = (1/√λi) (Yᵗ D Y)⁻¹ (Xᵗ D Y)ᵗ a
a = (1/√λi) (Xᵗ D X)⁻¹ (Xᵗ D Y) b
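With equal weights (D proportional to the identity, so that D cancels in the products), the eigenproblem above reduces to finding the eigenvalues λi = ri² of (XᵗX)⁻¹XᵗY (YᵗY)⁻¹YᵗX. A minimal sketch (Python with numpy; the two hypothetical groups of characters share one common factor, so the true first canonical correlation is about 0.5):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500

# Two hypothetical groups of characters sharing one common factor f.
f = rng.normal(size=n)
X = np.column_stack([f + rng.normal(size=n), rng.normal(size=n)])
Y = np.column_stack([f + rng.normal(size=n), rng.normal(size=n)])
X -= X.mean(axis=0)   # centred characters, as the text assumes
Y -= Y.mean(axis=0)

# Eigenvalues lambda_i = r_i^2 of (X'X)^-1 X'Y (Y'Y)^-1 Y'X.
M = np.linalg.solve(X.T @ X, X.T @ Y) @ np.linalg.solve(Y.T @ Y, Y.T @ X)
lambda1 = max(np.linalg.eigvals(M).real)
r1 = np.sqrt(lambda1)
print(round(r1, 2))   # near 0.5 for this construction
```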
Appendix 6
Algebraic Presentation of Logistic
Regression

Let Y be the binary qualitative variable (0 for periods of equilibrium, 1 for breaks in equilibrium) that we wish to explain by the quantitative explanatory variables X1, . . . , Xp. The model looks to evaluate the following probabilities:

pi = Pr[Y = 1 | X1 = xi1; . . . ; Xp = xip]

The logistic regression model1 is a nonlinear regression model. Here, the specification of the model is based on the use of a logistic function:

G(p) = ln[p / (1 − p)]

In this type of model, it is considered that there is a linear dependency between G(pi) and the explanatory variables:

G(pi) = β0 + β1 xi1 + · · · + βp xip

where β0, β1, . . . , βp are the unknown parameters to be estimated. By introducing the vector β of these coefficients and, for each individual i, the vector

zi = (1  xi1  · · ·  xip)ᵗ

the binomial probability can be expressed in the form

pi = e^{βᵗzi} / (1 + e^{βᵗzi})
The method for estimating the parameters is that of maximising the likelihood function through successive iterations. This likelihood function is the product of the statistical densities relative to each individual member:

L(β) = Π_{i: yi=1} [e^{βᵗzi} / (1 + e^{βᵗzi})] · Π_{i: yi=0} [1 / (1 + e^{βᵗzi})]

1
A detailed description of this method, and of other multivariate statistical methods, can be found in Chatfield C. and Collins A. J., Introduction to Multivariate Analysis, Chapman & Hall, 1980; and Saporta G., Probabilities, Data Analysis and Statistics, Technip, 1990.
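The iterative maximisation can be sketched with Newton-Raphson steps on the log-likelihood (Python with numpy; the data are simulated from a hypothetical model with β = (−1, 2), so the estimates should land near those values):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 2000

# Hypothetical data generated from the logistic model with beta = (-1, 2).
x = rng.normal(size=n)
Z = np.column_stack([np.ones(n), x])                 # rows are z_i = (1, x_i1)
p_true = 1.0 / (1.0 + np.exp(-(Z @ np.array([-1.0, 2.0]))))
y = (rng.random(n) < p_true).astype(float)

# Newton-Raphson iterations maximising the log-likelihood.
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-(Z @ beta)))
    score = Z.T @ (y - p)                            # gradient of ln L
    info = Z.T @ (Z * (p * (1.0 - p))[:, None])      # Fisher information
    beta = beta + np.linalg.solve(info, score)

print(beta)   # close to [-1, 2]
```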
Appendix 7
Time Series Models:
ARCH-GARCH and EGARCH

7.1 ARCH-GARCH MODELS
The ARCH-GARCH (auto-regressive conditional heteroscedasticity and generalised auto-regressive conditional heteroscedasticity) models were developed by Engle1 in 1982 in the context of studies of macroeconomic data. The ARCH model allows specific modelling of the variance of the error term. Heteroscedasticity can be integrated by introducing an exogenous variable x that drives the variance of the error term. This modelling can take one of the following forms:

yt = et · xt−1    or    yt = et · yt−1

Here, et is a white noise (a sequence of uncorrelated r.v.s with zero mean and the same variance).
In order to prevent the variance of this geometric series from being infinite or zero, it is preferable to take the following formulation:

yt = a0 + Σ_{i=1}^{p} ai yt−i + εt

with:

E(εt) = 0
var(εt) = γ + Σ_{i=1}^{q} αi εt−i²

This type of model is generally expressed as AR(p) − ARCH(q) or ARCH(p, q).
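A simulation sketch of an AR(1)−ARCH(1) process (Python with numpy; all coefficient values are arbitrary choices). For ARCH(1) the unconditional variance of εt is γ/(1 − α1) when α1 < 1, which the simulation can be checked against:

```python
import numpy as np

rng = np.random.default_rng(7)
T = 50_000
a0, a1, gamma, alpha1 = 0.1, 0.5, 0.2, 0.3   # arbitrary coefficients

y = np.zeros(T)
eps = np.zeros(T)
for t in range(1, T):
    h_t = gamma + alpha1 * eps[t - 1] ** 2   # conditional variance of eps_t
    eps[t] = np.sqrt(h_t) * rng.normal()     # e_t is standard white noise
    y[t] = a0 + a1 * y[t - 1] + eps[t]

print(round(eps.var(), 2))   # near gamma / (1 - alpha1) = 0.2 / 0.7
```

Note that the simulated εt are serially uncorrelated while their squares are not, which is the heteroscedasticity the model is designed to capture.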

7.2 EGARCH MODELS
These models, unlike the ARCH-GARCH models, allow the conditional variance to respond in different ways to a fall or a rise in the series. This configuration is of particular interest for generally increasing financial series. An example of this type of model is Nelson's:2

xt = µ + √ht · χt
ln ht = α + β ln ht−1 + δ(|χt−1| − √(2/π)) + γ χt−1

Here, χt | It−1 follows a standard normal law (It−1 representing the information available at the moment t − 1).
1
Engle R. F., Auto-regressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation, Econometrica, No. 50, 1982, pp. 987–1007. A detailed presentation of time series models will also be found in Droesbeke J. J., Fichet B. and Tassi P., Modélisation ARCH, théorie statistique et applications dans le domaine de la finance, Éditions ULB, 1994; and in Gouriéroux C., Modèles ARCH et applications financières, Economica, 1992.
2
Nelson D. B., Conditional heteroskedasticity in asset returns: a new approach, Econometrica, No. 59, 1991, pp. 347–70.
Appendix 8
Numerical Methods for Solving
Nonlinear Equations1

An equation is said to be nonlinear when it involves terms of degree higher than 1 in the unknown quantity. These terms may be polynomial, or capable of being broken down into Taylor series of degrees higher than 1.
Nonlinear equations cannot in general be solved analytically. In this case, therefore, the solutions of the equations must be approached using iterative methods. The principle of these methods consists in starting from an arbitrary point (the closest possible point to the solution sought) and arriving at the solution gradually through successive tests.
The two criteria to take into account when choosing a method for solving nonlinear equations are:

• The convergence of the method (conditions of convergence, speed of convergence etc.).
• The computation cost of the method.

8.1 GENERAL PRINCIPLES FOR ITERATIVE METHODS
8.1.1 Convergence
Any nonlinear equation f(x) = 0 can be expressed in the form x = g(x).
If x0 constitutes the arbitrary starting point for the method, it will be seen that the solution x* of this equation, x* = g(x*), can be reached by the numerical sequence:

xn+1 = g(xn)    n = 0, 1, 2, . . .

This iteration is termed a Picard process, and x*, the limit of the sequence, is termed the fixed point of the iteration.
In order for the sequence set out above to tend towards the solution of the equation, it has to be guaranteed that this sequence converges. A sufficient condition for convergence is supplied by the following theorem: if x = g(x) has a solution a within the interval I = [a − b; a + b] = {x : |x − a| ≤ b} and if g(x) satisfies Lipschitz's condition:

∃L ∈ [0; 1[ : ∀x ∈ I, |g(x) − g(a)| ≤ L|x − a|

then, for every x0 ∈ I:

• all the iterated values xn will belong to I;
• the iterated values xn will converge towards a;
• the solution a will be unique within the interval I.
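A minimal Picard iteration in code (Python, standard library only; the equation x = cos x is a hypothetical example for which |g′(x)| = |sin x| < 1 near the root, so the theorem applies):

```python
import math

def picard(g, x0, tol=1e-12, max_iter=200):
    """Fixed-point (Picard) iteration x_{n+1} = g(x_n)."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:   # successive iterates agree: stop
            return x_new
        x = x_new
    raise RuntimeError("no convergence within max_iter iterations")

root = picard(math.cos, 1.0)       # solves x = cos(x)
print(round(root, 6))              # 0.739085
```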

1
This appendix is mostly based on Litt F. X., Analyse numérique, première partie, ULG, 1999. Interested readers should also read: Burden R. L. and Faires D. J., Numerical Analysis, Prindle, Weber & Schmidt, 1981; and Nougier J. P., Méthodes de calcul numérique, Masson, 1993.
e

We should also mention a case in which Lipschitz's condition is satisfied: it is sufficient that for every x ∈ I, g′(x) exists and is such that |g′(x)| ≤ m with m < 1.

8.1.2 Order of convergence
It is important to choose the most suitable of the methods that converge. At this level, one of the most important criteria to take into account is the speed, or order, of convergence. Consider the sequence xn defined above, and the error en = xn − a. If there is a number p and a constant C > 0 such that

lim_{n→∞} |en+1| / |en|^p = C

then p is termed the order of convergence of the sequence and C is the asymptotic error constant.
When the speed of convergence is unsatisfactory, it can be improved by the Aitken extrapolation,2 which is a convergence acceleration process. The speed of convergence of this extrapolation is governed by the following result:

• If Picard's iterative method is of order p, the Aitken extrapolation will be of order 2p − 1.
• If Picard's iterative method is of the first order, Aitken's extrapolation will be of the second order in the case of a simple solution, and of the first order in the case of a multiple solution. In this last case, the asymptotic error constant is equal to 1 − 1/m, where m is the multiplicity of the solution.
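A sketch of the Aitken (delta-squared) acceleration applied to the Picard sequence (Python, standard library only; x = cos x is again the hypothetical example, and the update below is the standard Aitken extrapolation built from x, g(x) and g(g(x))):

```python
import math

def aitken(g, x0, steps=5):
    """Aitken delta-squared acceleration of the Picard iteration for x = g(x)."""
    x = x0
    for _ in range(steps):
        x1, x2 = g(x), g(g(x))
        denom = x2 - 2.0 * x1 + x
        if denom == 0.0:               # already converged to machine precision
            return x2
        x = x - (x1 - x) ** 2 / denom  # extrapolated value
    return x

root = aitken(math.cos, 1.0)
print(round(root, 6))                  # 0.739085
```

A handful of accelerated steps reaches an accuracy that the plain Picard iteration needs dozens of steps to match.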

8.1.3 Stop criteria
As stated above, the iterative methods for solving nonlinear equations supply an approximate solution to the equation. It is therefore essential to be able to estimate the error in that solution.
Working from the mean value theorem:

f(xn) = (xn − a) f′(ξ)    with ξ ∈ [xn; a]

we can deduce the following estimation of the error:

|xn − a| ≤ |f(xn)| / M    where |f′(x)| ≥ M for x ∈ [xn; a]

In addition, the rounding error inherent in every numerical method limits the accuracy of the iterative methods to:

εa = δ / |f′(a)|

2
We refer to Litt F. X., Analyse numérique, première partie, ULG, 1999, for further details.

in which Оґ represents an upper boundary for the rounding error in iteration n:

Оґ в‰Ґ |Оґn | = f (xn ) в€’ f (xn )

f (xn ) represents the calculated value for the function.
Let us now assume that we wish to determine a solution a with a degree of precision
ε. We could stop the iterative process on the basis of the error estimation formula.
These formulae, however, require a certain level of information on the derivative f′(x),
information that is not easy to obtain. On the other hand, the limit precision εa will not
generally be known beforehand.3 Consequently, we run the risk of ε, the accuracy
level sought, never being reached because it is better than the limit precision εa (ε < εa).
In this case, the iterative process would carry on indefinitely.
This leads us to accept the following stop criterion:

|xn − xn−1| < ε  or  |xn+1 − xn| ≥ |xn − xn−1|

This means that the iteration process will be stopped either when iteration n produces a
variation in value smaller than ε, or when that variation is no larger than the variation
produced by iteration n + 1, which signals that the limit precision has been reached. The
value of ε will be chosen in a way that prevents the iteration from stopping too soon.
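This double stop criterion can be sketched inside a Picard iteration as follows; the function names and the example map g(x) = cos x are ours, chosen only for illustration.

```python
import math

def picard_with_stop(g, x0, eps=1e-10, max_iter=1000):
    """Picard iteration x_{n+1} = g(x_n) with the double stop criterion:
    stop when |x_n - x_{n-1}| < eps (required precision reached), or when
    |x_{n+1} - x_n| >= |x_n - x_{n-1}| (the variations have stopped
    decreasing, i.e. the limit precision is reached)."""
    x_prev = x0
    x = g(x0)
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x - x_prev) < eps or abs(x_next - x) >= abs(x - x_prev):
            return x_next
        x_prev, x = x, x_next
    return x

# Fixed point of g(x) = cos(x), approximately 0.7390851
approx = picard_with_stop(math.cos, 1.0)
```

The second condition is what protects the loop from running indefinitely when ε is smaller than the limit precision εa.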

8.2 PRINCIPAL METHODS
Defining an iterative method ultimately comes down to defining the function h(x) in the
equation x = g(x) ≡ x − h(x)f(x).
The choice of this function will determine the order of the method.

8.2.1 First-order methods
The simplest choice consists of taking h(x) = m = constant ≠ 0.

8.2.1.1 Chord method
This defines the chord method (Figure A8.1), for which the iteration is
xn+1 = xn − m·f(xn).

Figure A8.1 Chord method (graph of y = f(x) and the line y = x/m, with iterates x0, x1, x2)

3 This will in effect require knowledge of f′(a), when a is exactly what is being sought.

Figure A8.2 Classic chord method (graph of y = f(x) with iterates x0, x1, x2)

The sufficient convergence condition (see Section A8.1.1) for this method is
0 < m·f′(x) < 2 in the neighbourhood of the solution. In addition, it can be shown that

lim n→∞ |en+1|/|en| = |g′(a)| ≠ 0

The chord method is therefore clearly a first-order method (see Section A8.1.2).
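A sketch of the constant-m chord iteration; the function name and the example f(x) = x² − 2 are our illustration, not the book's.

```python
def chord(f, x0, m, eps=1e-10, max_iter=10000):
    """Chord method: x_{n+1} = x_n - m*f(x_n). First order; converges when
    0 < m*f'(x) < 2 in the neighbourhood of the root."""
    x = x0
    for _ in range(max_iter):
        x_new = x - m * f(x)
        if abs(x_new - x) < eps:
            return x_new
        x = x_new
    return x

# Root sqrt(2) of f(x) = x^2 - 2; f'(sqrt(2)) is about 2.83, so m = 0.3
# satisfies the condition 0 < m*f'(x) < 2 near the root.
root = chord(lambda x: x * x - 2.0, x0=1.0, m=0.3)
```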

8.2.1.2 Classic chord method
It is possible to improve the order of convergence by making m change at each iteration:

xn+1 = xn − mn·f(xn)

The classic chord method (Figure A8.2) takes as the value for mn the inverse of the slope
of the straight line defined by the points (xn−1; f(xn−1)) and (xn; f(xn)):

xn+1 = xn − f(xn)·(xn − xn−1)/(f(xn) − f(xn−1))

This method will converge if f′(a) ≠ 0 and f″(x) is continuous in the neighbourhood
of a. In addition, it can be shown that

lim n→∞ |en+1|/|en|^p = |f″(a)/(2f′(a))|^(1/p) ≠ 0

for p = (1 + √5)/2 = 1.618... > 1, which greatly improves the order of convergence of
the method.
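The classic chord (secant) update can be sketched as below; the names and the test function f(x) = x² − 2 are our own illustration.

```python
def secant(f, x0, x1, eps=1e-12, max_iter=100):
    """Classic chord (secant) method: m_n is the inverse slope of the line
    through the last two iterates (x_{n-1}, f(x_{n-1})) and (x_n, f(x_n))."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:                     # degenerate (horizontal) secant
            return x1
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < eps:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1

# sqrt(2) as the root of f(x) = x^2 - 2, from the two initial points 1 and 2
root = secant(lambda x: x * x - 2.0, 1.0, 2.0)
```

Each step reuses the previous function value, so only one new evaluation of f is needed per iteration, which is the basis of the cost comparison with Newton–Raphson later in the section.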

8.2.1.3 Regula falsi method
The regula falsi method (Figure A8.3) takes as the value for mn the inverse of the slope
of the straight line defined by the points (xn; f(xn)) and (xn′; f(xn′)), where n′ is the
highest index for which f(xn)·f(xn′) < 0:

xn+1 = xn − f(xn)·(xn − xn′)/(f(xn) − f(xn′))

Figure A8.3 Regula falsi method (graph of y = f(x) with iterates x0, x1, x2)

This method always converges when f(x) is continuous. On the other hand, its
convergence is linear and therefore less effective than the convergence of the classic
chord method.
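A sketch of the regula falsi iteration, written in the usual bracket-keeping form (the secant point always replaces the endpoint with the same sign of f); names and the example function are ours.

```python
def regula_falsi(f, a, b, eps=1e-10, max_iter=10000):
    """Regula falsi: keep a bracket [a, b] with f(a)*f(b) < 0 and replace the
    endpoint whose sign matches f at the secant point. Always converges for
    continuous f, but only linearly."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("root must be bracketed: f(a)*f(b) < 0")
    x = b
    for _ in range(max_iter):
        x = b - fb * (b - a) / (fb - fa)   # secant point of the current bracket
        fx = f(x)
        if abs(fx) < eps:
            return x
        if fa * fx < 0:
            b, fb = x, fx                  # root lies in [a, x]
        else:
            a, fa = x, fx                  # root lies in [x, b]
    return x

root = regula_falsi(lambda x: x * x - 2.0, 0.0, 2.0)
```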

8.2.2 Newton–Raphson method
If, in the classic chord method, we choose mn so that g′(xn) = 0, that is,
f′(xn) = 1/mn, we will obtain a second-order iteration.
The method thus defined,

xn+1 = xn − f(xn)/f′(xn)

is known as the Newton–Raphson method (Figure A8.4).
It is clearly a second-order method, as

lim n→∞ |en+1|/|en|² = (1/2)·|f″(a)/f′(a)| ≠ 0
The Newton–Raphson method is therefore rapid insofar as the initial iterated value is not
too far from the solution sought, as global convergence is not assured at all.
A convergence criterion is therefore given by the following theorem. Assume that
f′(x) ≠ 0, that f″(x) does not change its sign within the interval [a; b], and that
f(a)·f(b) < 0. If, furthermore,

|f(a)/f′(a)| < b − a  and  |f(b)/f′(b)| < b − a

the Newton–Raphson method will converge for every initial arbitrary point x0 that belongs
to [a; b].
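The Newton–Raphson iteration can be sketched as follows; the names and the example f(x) = x² − 2 on [1; 2] are our illustration (for this f, the theorem's hypotheses hold on that interval).

```python
def newton(f, df, x0, eps=1e-12, max_iter=50):
    """Newton-Raphson: x_{n+1} = x_n - f(x_n)/f'(x_n). Second order near a
    simple root, provided x0 is close enough to it."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < eps:
            return x_new
        x = x_new
    return x

# sqrt(2) as the root of f(x) = x^2 - 2, starting from x0 = 1 in [1; 2]
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Unlike the classic chord method, each step needs a fresh evaluation of both f and f′, which is what the cost comparison below quantifies.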

Figure A8.4 Newton–Raphson method (graph of y = f(x) with iterates x0, x1, x2 and the steps f(x0)/f′(x0), f(x1)/f′(x1))

The classic chord method, unlike the Newton–Raphson method, requires two initial
approximations, but only involves one new function evaluation at each subsequent stage.
The choice between the classic chord method and the Newton–Raphson method will
therefore depend on the effort of calculation required to evaluate f′(x).
Let us assume that the effort of calculation required to evaluate f′(x) is θ times
the effort of calculation for f(x).
Given what has been said above, we can establish that the effort of calculation will be
the same for the two methods if:

(1 + θ)/log 2 = 1/log p, in which p = (1 + √5)/2

is the order of convergence of the classic chord method.
In consequence:

• If θ > (log 2/log p) − 1 ≈ 0.44, the classic chord method will be used.
• If θ ≤ (log 2/log p) − 1 ≈ 0.44, the Newton–Raphson method will be used.
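The break-even value of θ can be checked directly (a small sketch; the variable names are ours):

```python
import math

p = (1.0 + math.sqrt(5.0)) / 2.0             # order of the classic chord method
theta_star = math.log(2.0) / math.log(p) - 1.0
# theta_star is about 0.44: if evaluating f'(x) costs more than 0.44 times an
# evaluation of f(x), the classic chord method is the cheaper choice.
```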

8.2.3 Bisection method
The bisection method is a linear convergence method and is therefore slow. Use of the
method is, however, justified by the fact that it converges globally, unlike the usual methods
(especially the Newton–Raphson and classic chord methods). This method will therefore
be used to bring the initial iterated value of the Newton–Raphson or classic chord method
to a point sufficiently close to the solution to ensure that the methods in question converge.
Let us assume therefore that f(x) is continuous in the interval [a0; b0] and such that4
f(a0)·f(b0) < 0. The principle of the method consists of putting together a convergent
sequence of bracketed intervals, [a1; b1] ⊃ [a2; b2] ⊃ [a3; b3] ⊃ ..., all of which contain
a solution of the equation f(x) = 0.
If it is assumed that5 f(a0) < 0 and f(b0) > 0, the intervals Ik = [ak; bk] will be put
together by recurrence on the basis of Ik−1:

[ak; bk] = [mk; bk−1] if f(mk) < 0
[ak; bk] = [ak−1; mk] if f(mk) > 0

Here, mk = (ak−1 + bk−1)/2. One is thus assured that f(ak) < 0 and f(bk) > 0, which
guarantees convergence.
The bisection method is not a Picard iteration, but the order of convergence can be
determined, as lim n→∞ |en+1|/|en| = 1/2. The bisection method is therefore a
first-order method.
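The recurrence above can be sketched directly, under the same sign convention f(a0) < 0 < f(b0); names and the example function are our illustration.

```python
def bisect(f, a, b, eps=1e-10):
    """Bisection under the convention f(a) < 0 < f(b): halve the bracket,
    keeping the half on which f still changes sign."""
    fa, fb = f(a), f(b)
    if not (fa < 0.0 < fb):
        raise ValueError("requires f(a) < 0 < f(b)")
    while b - a > eps:
        m = 0.5 * (a + b)
        if f(m) < 0.0:
            a = m                        # root stays in [m, b]
        else:
            b = m                        # root stays in [a, m]
    return 0.5 * (a + b)

# Bracket sqrt(2) with f(x) = x^2 - 2 on [0, 2]; the result could then be
# used as the starting point of a Newton-Raphson or classic chord iteration.
root = bisect(lambda x: x * x - 2.0, 0.0, 2.0)
```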

8.3 NONLINEAR EQUATION SYSTEMS
We have a system of n nonlinear equations in n unknowns: fi(x1, x2, ..., xn) = 0,
i = 1, 2, ..., n, or, in vectorial notation, f(x) = 0. The solution to the system is an
n-dimensional vector a.

4 This implies that f(x) has a root within this interval.
5 This is not restrictive in any way, as it corresponds to solving f(x) = 0 or −f(x) = 0, x ∈ [a0; b0], depending on the case.

8.3.1 General theory of n-dimensional iteration
The general theory of n-dimensional iteration is similar to the one-dimensional theory.
The above equation can thus be expressed in the form:

x = g(x) ≡ x − A(x)f(x)

where A is a square matrix of nth order.
Picard's iteration is always defined as

xk+1 = g(xk), k = 0, 1, 2, ...

and the convergence theorem for Picard's iteration remains valid in n dimensions.
In addition, if the Jacobian matrix J(x), defined by [J(x)]ij = ∂gj(x)/∂xi, is such that
for every x ∈ I, ‖J(x)‖ ≤ m for a compatible norm, with m < 1, Lipschitz's condition
is satisfied.
The order of convergence is defined by

lim k→∞ ‖ek+1‖/‖ek‖^p = C

where C is the asymptotic error constant.

8.3.2 Principal methods
If one chooses a constant matrix A as the value for A(x), the iterative process is the
generalisation in n dimensions of the chord method.
If the inverse of the Jacobian matrix of f is chosen as the value of A(x), we obtain
the generalisation in n dimensions of the Newton–Raphson method.
Another approach to solving the equation f(x) = 0 involves using the ith equation to
determine the new value of the ith component. Therefore, for i = 1, 2, ..., n, the following
equations will be solved in succession with respect to xi:

fi(x1(k+1), ..., xi−1(k+1), xi, xi+1(k), ..., xn(k)) = 0

This is known as the nonlinear Gauss–Seidel method.
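A self-contained 2 × 2 sketch of the n-dimensional Newton–Raphson iteration: instead of forming the inverse Jacobian A(x) = J(x)⁻¹ explicitly, each step solves the linear system J·d = −f (here by Cramer's rule, to avoid any library dependence). All names and the example system are ours.

```python
def newton_2d(f, jac, x0, eps=1e-12, max_iter=50):
    """Two-dimensional Newton-Raphson: at each step solve J(x)*d = -f(x)
    for the correction d, using Cramer's rule on the 2x2 Jacobian."""
    x, y = x0
    for _ in range(max_iter):
        f1, f2 = f(x, y)
        (a, b), (c, d) = jac(x, y)       # J = [[a, b], [c, d]]
        det = a * d - b * c
        if det == 0.0:
            raise ZeroDivisionError("singular Jacobian")
        dx = (-f1 * d + f2 * b) / det    # Cramer's rule for J*(dx, dy) = (-f1, -f2)
        dy = (-f2 * a + f1 * c) / det
        x, y = x + dx, y + dy
        if abs(dx) + abs(dy) < eps:
            break
    return x, y

# Intersection of the circle x^2 + y^2 = 4 with the line y = x (positive branch)
f = lambda x, y: (x * x + y * y - 4.0, y - x)
jac = lambda x, y: ((2.0 * x, 2.0 * y), (-1.0, 1.0))
sol = newton_2d(f, jac, (1.0, 2.0))
```

For larger n one would replace Cramer's rule by a general linear solver; the structure of the iteration is unchanged.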
Bibliography

CHAPTER 1
The Bank for International Settlements, Basle Committee for Banking Controls, Sound Practices
for the Management and Supervision of Operational Risk, Basle, February 2003.
The Bank for International Settlements, Basle Committee for Banking Controls, The New Basle
Capital Accord, Basle, January 2001.
The Bank for International Settlements, Basle Committee for Banking Controls, The New Basle
Capital Accord: An Explanatory Note, Basle, January 2001.
The Bank for International Settlements, Basle Committee for Banking Controls, Vue d'ensemble du
Nouvel accord de Bâle sur les fonds propres, Basle, January 2001.
Cruz M. G., Modelling, Measuring and Hedging Operational Risk, John Wiley & Sons, Ltd, 2002.
Hoffman D. G., Managing Operational Risk: 20 Firm-Wide Best Practice Strategies, John Wiley &
Sons, Inc, 2002.
Jorion P., Financial Risk Manager Handbook (Second Edition), John Wiley & Sons, Inc, 2003.
Marshall C., Measuring and Managing Operational Risks in Financial Institutions, John Wiley &
Sons, Inc, 2001.

CHAPTER 2
The Bank for International Settlements, BIS Quarterly Review, Collateral in Wholesale Financial
Markets, Basle, September 2001.
The Bank for International Settlements, Basle Committee for Banking Controls, Internal Audit in
Banks and the Supervisor's Relationship with Auditors, Basle, August 2001.
The Bank for International Settlements, Basle Committee for Banking Controls, Sound Practices
for Managing Liquidity in Banking Organisations, Basle, February 2000.
The Bank for International Settlements, Committee on the Global Financial System, Collateral in
Wholesale Financial Markets: Recent Trends, Risk Management and Market Dynamics, Basle,
March 2001.
Moody's, Moody's Analytical Framework for Operational Risk Management of Banks, Moody's,
January 2003.

CHAPTER 3
Bachelier L., Théorie de la spéculation, Gauthier-Villars, 1900.
Bechu T. and Bertrand E., L'Analyse Technique, Economica, 1998.
Binmore K., Jeux et théorie des jeux, De Boeck & Larcier, 1999.
Brealey R. A. and Myers S. C., Principles of Corporate Finance, McGraw-Hill, 1991.
Broquet C., Cobbaut R., Gillet R., and Vandenberg A., Gestion de Portefeuille, De Boeck, 1997.
Chen N. F., Roll R., and Ross S. A., Economic forces of the stock market, Journal of Business,
No. 59, 1986, pp. 383–403.
Copeland T. E. and Weston J. F., Financial Theory and Corporate Policy, Addison-Wesley, 1988.
Devolder P., Finance stochastique, Éditions ULB, 1993.
Dhrymes P. J., Friend I., and Gultekin N. B., A critical re-examination of the empirical evidence
on the arbitrage pricing theory, Journal of Finance, No. 39, 1984, pp. 323–46.
Eeckhoudt L. and Gollier C., Risk, Harvester Wheatsheaf, 1995.
Elton E. and Gruber M., Modern Portfolio Theory and Investment Analysis, John Wiley & Sons,
Inc, 1991.
Elton E., Gruber M., and Padberg M., Optimal portfolios from single ranking devices, Journal of
Portfolio Management, Vol. 4, No. 3, 1978, pp. 15–19.
Elton E., Gruber M., and Padberg M., Simple criteria for optimal portfolio selection, Journal of
Finance, Vol. XI, No. 5, 1976, pp. 1341–57.
Elton E., Gruber M., and Padberg M., Simple criteria for optimal portfolio selection: tracing out
the efficient frontier, Journal of Finance, Vol. XIII, No. 1, 1978, pp. 296–302.
Elton E., Gruber M., and Padberg M., Simple criteria for optimal portfolio selection with upper
bounds, Operation Research, 1978.
Fama E. and Macbeth J., Risk, return and equilibrium: empirical tests, Journal of Political Economy,
Vol. 71, No. 1, 1974, pp. 607–36.
Fama E. F., Behaviour of stock market prices, Journal of Business, Vol. 38, 1965, pp. 34–105.
Fama E. F., Efficient capital markets: a review of theory and empirical work, Journal of Finance,
Vol. 25, 1970.
Fama E. F., Random walks in stock market prices, Financial Analysis Journal, 1965.
Gillet P., L'efficience des marchés financiers, Economica, 1999.
Gordon M. and Shapiro E., Capital equipment analysis: the required rate profit, Management
Science, Vol. 3, October 1956.
Grinold C. and Kahn N., Active Portfolio Management, McGraw-Hill, 1998.
Lintner J., The valuation of risky assets and the selection of risky investments, Review of Economics
and Statistics, Vol. 47, 1965, pp. 13–37.
Markowitz H., Mean-Variance Analysis in Portfolio Choice and Capital Markets, Blackwell
Publishers, 1987.
Markowitz H., Portfolio selection, Journal of Finance, Vol. 7, No. 1, 1952, pp. 419–33.
Mehta M. L., Random Matrices, Academic Press, 1996.
Miller M. H. and Modigliani F., Dividend policy, growth and the valuation of shares, Journal of
Morrison D., Multivariate Statistical Methods, McGraw-Hill, 1976.
Roger P., L'Évaluation des Actifs Financiers, De Boeck, 1996.
Ross S. A., The arbitrage theory of capital asset pricing, Journal of Economic Theory, 1976,
pp. 343–62.
Samuelson P., Mathematics on Speculative Price, SIAM Review, Vol. 15, No. 1, 1973.
Saporta G., Probabilités, Analyse des Données et Statistique, Technip, 1990.
Sharpe W., A simplified model for portfolio analysis, Management Science, Vol. 9, No. 1, 1963,
pp. 277–93.
Sharpe W., Capital asset prices, Journal of Finance, Vol. 19, 1964, pp. 425–42.
Von Neumann J. and Morgenstern O., Theory of Games and Economic Behaviour, Princeton
University Press, 1947.

CHAPTER 4
Bierwag G., Kaufmann G., and Toevs A. (Eds.), Innovations in Bond Portfolio Management:
Duration Analysis and Immunisation, JAI Press, 1983.
Bisière C., La Structure par Terme des Taux d'intérêt, Presses Universitaires de France, 1997.
Brennan M. and Schwartz E., A continuous time approach to the pricing of bonds, Journal of
Banking and Finance, Vol. 3, No. 2, 1979, pp. 133–55.
Colmant B., Delfosse V., and Esch L., Obligations, les notions financières essentielles, Larcier,
2002.
Cox J., Ingersoll J., and Ross J., A theory of the term structure of interest rates, Econometrica,
Vol. 53, No. 2, 1985, pp. 385–406.
Fabozzi J. F., Bond Markets, Analysis and Strategies, Prentice-Hall, 2000.
Heath D., Jarrow R., and Morton A., Bond Pricing and the Term Structure of Interest Rates: a New
Methodology, Cornell University, 1987.
Heath D., Jarrow R., and Morton A., Bond pricing and the term structure of interest rates: discrete
time approximation, Journal of Financial and Quantitative Analysis, Vol. 25, 1990, pp. 419–40.
Ho T. and Lee S., Term structure movement and pricing interest rate contingent claims, Journal of
Finance, Vol. 41, No. 5, 1986, pp. 1011–29.
Macauley F., Some Theoretical Problems Suggested by the Movements of Interest Rates, Bond Yields
and Stock Prices in the United States since 1856, New York, National Bureau of Economic
Research, 1938, pp. 44–53.
Merton R., Theory of rational option pricing, Bell Journal of Economics and Management Science,
Vol. 4, No. 1, 1973, pp. 141–83.
Ramaswamy K. and Sundaresan M., The valuation of floating-rates instruments: theory and
evidence, Journal of Financial Economics, Vol. 17, No. 2, 1986, pp. 251–72.
Richard S., An arbitrage model of the term structure of interest rates, Journal of Financial
Economics, Vol. 6, No. 1, 1978, pp. 33–57.
Schaefer S. and Schwartz E., A two-factor model of the term structure: an approximate analytical
solution, Journal of Financial and Quantitative Analysis, Vol. 19, No. 4, 1984, pp. 413–24.
Vasicek O., An equilibrium characterisation of the term structure, Journal of Financial Economics,
Vol. 5, No. 2, 1977, pp. 177–88.

CHAPTER 5
Black F. and Scholes M., The pricing of options and corporate liabilities, Journal of Political
Economy, Vol. 81, 1973, pp. 637–59.
Colmant B. and Kleynen G., Gestion du risque de taux d'intérêt et instruments financiers dérivés,
Kluwer, 1995.
Copeland T. E. and Weston J. F., Financial Theory and Corporate Policy, Addison-Wesley, 1988.
Courtadon G., The pricing of options on default-free bonds, Journal of Financial and Quantitative
Analysis, Vol. 17, 1982, pp. 75–100.
Cox J., Ross S., and Rubinstein M., Option pricing: a simplified approach, Journal of Financial
Economics, No. 7, 1979, pp. 229–63.
Devolder P., Finance stochastique, Éditions ULB, 1993.
Garman M. and Kohlhagen S., Foreign currency option values, Journal of International Money and
Finance, No. 2, 1983, pp. 231–7.
Hicks A., Foreign Exchange Options, Woodhead, 1993.
Hull J. C., Options, Futures and Others Derivatives, Prentice Hall, 1997.
Krasnov M., Kisselev A., Makarenko G., and Chikin E., Mathématiques supérieures pour ingénieurs
et polytechniciens, De Boeck, 1993.
Reilly F. K. and Brown K. C., Investment Analysis and Portfolio Management, South-Western,
2000.
Rubinstein M., Options for the undecided, in From Black–Scholes to Black Holes, Risk Magazine,
1992.
Sokolnikoff I. S. and Redheffer R. M., Mathematics of Physics and Modern Engineering, McGraw-
Hill, 1966.

CHAPTER 6
Blattberg R. and Gonedes N., A comparison of stable and Student descriptions as statistical models
for stock prices, Journal of Business, Vol. 47, 1974, pp. 244–80.
Fama E., Behaviour of stock market prices, Journal of Business, Vol. 38, 1965, pp. 34–105.
Johnson N. L. and Kotz S., Continuous Univariate Distribution, John Wiley & Sons, Inc, 1970.
Jorion P., Value at Risk, McGraw-Hill, 2001.
Pearson E. S. and Hartley H. O., Biometrika Tables for Students, Biometrika Trust, 1976.
CHAPTER 7
Abramowitz M. and Stegun A., Handbook of Mathematical Functions, Dover, 1972.
Chase Manhattan Bank NA, The Management of Financial Price Risk, Chase Manhattan Bank NA,
1995.
Chase Manhattan Bank NA, Value at Risk, its Measurement and Uses, Chase Manhattan Bank NA,
undated.
Chase Manhattan Bank NA, Value at Risk, Chase Manhattan Bank NA, 1996.
Danielsson J. and De Vries C., Beyond the Sample: Extreme Quantile and Probability Estimation,
Mimeo, Iceland University and Tinbergen Institute Rotterdam, 1997.
Danielsson J. and De Vries C., Tail index and quantile estimation with very high frequency data,
Journal of Empirical Finance, No. 4, 1997, pp. 241–57.
Danielsson J. and De Vries C., Value at Risk and Extreme Returns, LSE Financial Markets Group
Discussion Paper 273, London School of Economics, 1997.
Embrechts P., Klüppelberg C., and Mikosch T., Modelling Extremal Events for Insurance and
Finance, Springer Verlag, 1999.
Galambos J., Advanced Probability Theory, M. Dekker, 1988, Section 6.5.
Gilchrist W. G., Statistical Modelling with Quantile Functions, Chapman & Hall/CRC, 2000.
Gnedenko B. V., On the limit distribution of the maximum term in a random series, Annals of
Mathematics, Vol. 44, 1943, pp. 423–53.
Gourieroux C., Modèles ARCH et applications financières, Economica, 1992.
Gumbel E. J., Statistics of Extremes, Columbia University Press, 1958.
Hendricks D., Evaluation of Value at Risk Models using Historical Data, FRBNY Policy Review,
1996, pp. 39–69.
Hill B. M., A simple general approach to inference about the tail of a distribution, Annals of
Statistics, Vol. 46, 1975, pp. 1163–73.
Hill I. D., Hill R., and Holder R. L., Fitting Johnson curves by moments (Algorithm AS 99), Applied
Statistics, Vol. 25, No. 2, 1976, pp. 180–9.
Jenkinson A. F., The frequency distribution of the annual maximum (or minimum) values of
meteorological elements, Quarterly Journal of the Royal Meteorological Society, Vol. 87, 1955,
pp. 145–58.
Johnson N. L., Systems of frequency curves generated by methods of translation, Biometrika,
Vol. 36, 1949, pp. 1498–575.
Longin F. M., From value at risk to stress testing: the extreme value approach, Journal of Banking
and Finance, No. 24, 2000, pp. 1097–130.
Longin F. M., Extreme Value Theory: Introduction and First Applications in Finance, Journal de
la Société Statistique de Paris, Vol. 136, 1995, pp. 77–97.
Longin F. M., The asymptotic distribution of extreme stock market returns, Journal of Business,
No. 69, 1996, pp. 383–408.
McNeil A. J., Estimating the Tails of Loss Severity Distributions using Extreme Value Theory,
Mimeo, ETH Zentrum Zurich, 1996.
McNeil A. J., Extreme value theory for risk managers, in Internal Modelling and CAD II, Risk
Publications, 1999, pp. 93–113.
Mina J. and Yi Xiao J., Return to RiskMetrics: The Evolution of a Standard, RiskMetrics, 2001.
Morgan J. P., RiskMetrics: Technical Document, 4th Ed., Morgan Guaranty Trust Company, 1996.
Pickands J., Statistical inference using extreme order statistics, Annals of Statistics, Vol. 45, 1975,
pp. 119–31.
Reiss R. D. and Thomas M., Statistical Analysis of Extreme Values, Birkhauser Verlag, 2001.
Rouvinez C., Going Greek with VAR, Risk Magazine, February 1997, pp. 57–65.
Schaller P., On Cash Flow Mapping in VAR Estimation, Creditanstalt-Bankverein, CA RISC-
199602237, 1996.
Stambaugh V., Value at Risk, not published, 1996.
Vose D., Quantitative Risk Analysis, John Wiley & Sons, Ltd, 1996.

CHAPTER 9
Lopez T., Délimiter le risque de portefeuille, Banque Magazine, No. 605, July–August 1999,
pp. 44–6.

CHAPTER 10
Broquet C., Cobbaut R., Gillet R., and Vandenberg A., Gestion de Portefeuille, De Boeck, 1997.
Burden R. L. and Faires D. J., Numerical Analysis, Prindle, Weber & Schmidt, 1981.
Esch L., Kieffer R., and Lopez T., Value at Risk – Vers un risk management moderne, De Boeck,
1997.
Litt F. X., Analyse numérique, première partie, ULG, 1999.
Markowitz H., Mean-Variance Analysis in Portfolio Choice and Capital Markets, Basil Blackwell,
1987.
Markowitz H., Portfolio Selection: Efficient Diversification of Investments, Blackwell Publishers,
1991.
Markowitz H., Portfolio selection, Journal of Finance, Vol. 7, No. 1, 1952, pp. 77–91.
Nougier J-P., Méthodes de calcul numérique, Masson, 1993.
Vauthey P., Une approche empirique de l'optimisation de portefeuille, Eds. Universitaires Fribourg
Suisse, 1990.
CHAPTER 11
Chen N. F., Roll R., and Ross S. A., Economic forces of the stock market, Journal of Business,
No. 59, 1986, pp. 383–403.
Dhrymes P. J., Friend I., and Gultekin N. B., A critical re-examination of the empirical evidence
on the arbitrage pricing theory, Journal of Finance, No. 39, 1984, pp. 323–46.
Ross S. A., The arbitrage theory of capital asset pricing, Journal of Economic Theory, 1976,
pp. 343–62.
CHAPTER 12
Ausubel L., The failure of competition in the credit card market, American Economic Review,
Vol. 81, 1991, pp. 50–81.
Cooley W. W. and Lohnes P. R., Multivariate Data Analysis, John Wiley & Sons, Inc, 1971.
Damel P., La modélisation des contrats bancaires à taux révisable: une approche utilisant les
corrélations canoniques, Banque et Marchés, mars–avril, 1999.
Damel P., L'apport de replicating portfolio ou portefeuille répliqué en ALM: méthode contrat par
contrat ou par la valeur optimale, Banque et Marchés, mars–avril, 2001.
Heath D., Jarrow R., and Morton A., Bond pricing and the term structure of interest rates: a new
methodology for contingent claims valuation, Econometrica, Vol. 60, 1992, pp. 77–105.
Hotelling H., Relation between two sets of variables, Biometrika, Vol. 28, 1936, pp. 321–77.
Hull J. and White A., Pricing interest rate derivative securities, Review of Financial Studies, Vols
3 & 4, 1990, pp. 573–92.
Hutchinson D. and Pennachi G., Measuring rents and interest rate risk in imperfect financial
markets: the case of retail bank deposit, Journal of Financial and Quantitative Analysis, Vol. 31,
1996, pp. 399–417.
Mardia K. V., Kent J. T., and Bibby J. M., Multivariate Analysis, Academic Press, 1979.
Sanyal A., A Continuous Time Monte Carlo Implementation of the Hull and White One Factor Model
and the Pricing of Core Deposit, unpublished manuscript, December 1997.
Selvaggio R., Using the OAS methodology to value and hedge commercial bank retail demand
deposit premiums, The Handbook of Asset/Liability Management, Edited by F. J. Fabozzi &
A. Konishi, McGraw-Hill, 1996.
Smithson C., A Lego approach to financial engineering, in The Handbook of Currency and Interest
Rate Risk Management, Edited by R. Schwartz & C. W. Smith Jr., New York Institute of Finance,
1990.
Tatsuoka M. M., Multivariate Analysis, John Wiley & Sons, Ltd, 1971.
Wilson T., Optimal value: portfolio theory, Balance Sheet, Vol. 3, No. 3, Autumn 1994.
APPENDIX 1
Bair J., Mathématiques générales, De Boeck, 1990.
Esch L., Mathématique pour économistes et gestionnaires (2nd Edition), De Boeck, 1999.
Guerrien B., Algèbre linéaire pour économistes, Economica, 1982.
Ortega J. M., Matrix Theory, Plenum, 1987.
Weber J. E., Mathematical Analysis (Business and Economic Applications), Harper and Row, 1982.
APPENDIX 2
Baxter M. and Rennie A., Financial Calculus, Cambridge University Press, 1996.
Feller W., An Introduction to Probability Theory and its Applications (2 volumes), John Wiley &
Sons, Inc, 1968.
Grimmett G. and Stirzaker D., Probability and Random Processes, Oxford University Press, 1992.
Kendall M. and Stuart A., The Advanced Theory of Statistics (3 volumes), Griffin, 1977.
Loeve M., Probability Theory (2 volumes), Springer-Verlag, 1977.
Roger P., Les outils de la modélisation financière, Presses Universitaires de France, 1991.
Ross S. M., Initiation aux probabilités, Presses Polytechniques et Universitaires Romandes, 1994.
APPENDIX 3
Ansion G., Économétrie pour l'entreprise, Eyrolles, 1988.
Dagnelie P., Théorie et méthodes statistiques (2 volumes), Presses Agronomiques de Gembloux,
1975.
Johnston J., Econometric Methods, McGraw-Hill, 1972.
Justens D., Statistique pour décideurs, De Boeck, 1988.
Kendall M. and Stuart A., The Advanced Theory of Statistics (3 volumes), Griffin, 1977.
APPENDIX 4
Fisher R. A. and Tippett L. H. C., Limiting forms of the frequency distribution of the largest or
smallest member of a sample, Proceedings of the Cambridge Philosophical Society, Vol. 24,
1928, pp. 180–90.
Gnedenko B. V., On the limit distribution for the maximum term of a random series, Annals of
Mathematics, Vol. 44, 1943, pp. 423–53.
Jenkinson A. F., The frequency distribution of the annual maximum (or minimum) values of