The full development of this theory is outside the scope of this work.
Probabilistic Concepts 357

where we have:

$$a = t_0 < t_1 < \cdots < t_n = b \qquad \delta = \max_{k=1,\dots,n} (t_k - t_{k-1})$$

Let us now consider a stochastic process Zt (for which we wish to define the stochastic differential) and a standard Brownian motion wt. If there is a stochastic process zt such that $Z_t = Z_0 + \int_0^t z_s\,dw_s$, then it is said that Zt admits the stochastic differential $dZ_t = z_t\,dw_t$.
This differential is interpreted as follows: the stochastic differential dZt represents the
variation (for a very short period of time dt) of Zt , triggered by a random variation dwt
weighted by zt , which represents the volatility of Zt at the moment t.
More generally, the definition of $dX_t = a_t(X_t)\,dt + b_t(X_t)\,dw_t$ is given by

$$X_{t_2} - X_{t_1} = \int_{t_1}^{t_2} a_t(X_t)\,dt + \int_{t_1}^{t_2} b_t(X_t)\,dw_t$$

The stochastic differential has some of the properties of ordinary differentials, such as
linearity. Not all of them, however, remain true. For example,9 the stochastic differential
of a product of two stochastic processes for which the stochastic differential of the factors
is known,
$$dX_t^{(i)} = a_t^{(i)}\,dt + b_t^{(i)}\,dw_t \qquad i = 1, 2$$

is given by:
$$d\big(X_t^{(1)} X_t^{(2)}\big) = X_t^{(1)}\,dX_t^{(2)} + X_t^{(2)}\,dX_t^{(1)} + b_t^{(1)} b_t^{(2)}\,dt$$

Another property, which corresponds to the derivation formula for a compound function, is the well-known Itô formula.¹⁰ This formula gives the differential for a two-variable function: a stochastic process for which the stochastic differential is known, and time.
If the process Xt has the stochastic differential $dX_t = a_t\,dt + b_t\,dw_t$ and if f(x, t) is a C²-class function, the process f(Xt, t) will admit the following stochastic differential:

$$df(X_t, t) = \left[ f_t(X_t, t) + f_x(X_t, t)\,a_t + \tfrac{1}{2} f_{xx}(X_t, t)\,b_t^2 \right] dt + f_x(X_t, t)\,b_t\,dw_t$$

We will from now on leave out the argument X in the expressions of the functions a and b.
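A minimal numerical check (not from the text) of the Itô formula above: taking f(x) = x² and X_t = w_t (so a_t = 0, b_t = 1) gives d(w_t²) = dt + 2w_t dw_t, hence E[w_T²] = T. The deterministic dt term is exactly the ½ f_xx b² correction that ordinary calculus would miss; a sketch in Python with simulated Brownian paths:

```python
import numpy as np

# Simulate Brownian increments and sum them to get w_T for many paths.
rng = np.random.default_rng(0)
n_paths, n_steps, T = 50_000, 200, 1.0
dt = T / n_steps

dw = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
w_T = dw.sum(axis=1)

# Itô's formula predicts E[w_T^2] = T: the "1/2 f_xx b^2 dt" term at work.
print(abs(w_T.var() - T))          # sample variance of w_T is close to T
print(abs((w_T ** 2).mean() - T))  # E[w_T^2] = T, up to Monte Carlo error
```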
¹⁰ Also known as Itô's lemma.
Appendix 3
Statistical Concepts¹

3.1 INFERENTIAL STATISTICS
3.1.1 Sampling
3.1.1.1 Principles
In inferential statistics, we are usually interested in a population and the variables measured on the individual members of that population. Unfortunately, the population as a whole is often far too large, and sometimes not sufficiently well known, to be handled directly. In order to obtain observed information, therefore, we must confine ourselves to a subset of the population, known as a sample. Then, on the basis of observations made in relation to that sample, we attempt to deduce (infer) conclusions in relation to the population.
The operation that consists of extracting the sample from the population is known as sampling. It is here that probability theory becomes involved, constituting the link between the population and the sample. Sampling is said to be simple random sampling when the individual members are extracted independently from the population and all have the same probability of being chosen. In practice, this is not necessarily the case, and the procedures set up for carrying out the sampling process must imitate chance as closely as possible.

3.1.1.2 Sampling distribution
Suppose that we are interested in a parameter θ of the population. If we extract a sample x1, x2, ..., xn from the population, we can calculate the corresponding value $\hat\theta(x_1, x_2, \dots, x_n)$ of that parameter for this sample.
As the sampling is at the origin of the fortuitous aspect of this procedure, for another sample $x'_1, x'_2, \dots, x'_n$ we would have obtained another value $\hat\theta(x'_1, x'_2, \dots, x'_n)$.
We are therefore constructing a r.v. $\hat\Theta$, in which the various possible values are the results of the calculation of $\hat\theta$ for all the possible samples. The law of probability for this r.v. is known as the sampling distribution.
In order to illustrate this concept, let us consider the sampling distribution for the mean of the population and suppose that the variable considered has mean µ and variance σ². On the basis of the various samples, it is possible to calculate an average on each occasion:

$$\bar x = \frac{1}{n}\sum_{i=1}^{n} x_i \qquad \bar x' = \frac{1}{n}\sum_{i=1}^{n} x'_i \qquad \cdots$$

¹ Readers interested in finding out more about the concepts developed below should read: Ansion G., Économétrie pour l'entreprise, Eyrolles, 1988; Dagnelie P., Théorie et méthodes statistiques (2 volumes), Presses Agronomiques de Gembloux, 1975; Johnston J., Econometric Methods, McGraw-Hill, 1972; Justens D., Statistique pour décideurs, De Boeck, 1988; Kendall M. and Stuart A., The Advanced Theory of Statistics (3 volumes), Griffin, 1977.
360 Asset and Risk Management

We thus define a r.v. $\bar X$ for which it can be demonstrated that:

$$E(\bar X) = \mu \qquad \mathrm{var}(\bar X) = \frac{\sigma^2}{n}$$

The ļ¬rst of these two relations justiļ¬es the choice of the average of the sample as an
estimator for the mean of the population. It is referred to as an unbiased estimator.

Note
If we examine in a similar way the sampling distribution for the variance, calculated on the basis of a sample using $s^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar x)^2$, the associated r.v. $S^2$ will be such that

$$E(S^2) = \frac{n-1}{n}\,\sigma^2$$

We are no longer looking at an unbiased estimator, but an asymptotically unbiased estimator (for n tending towards infinity). For this reason, we frequently choose the following expression as an estimator for the variance: $\frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar x)^2$.

3.1.2 Two problems of inferential statistics
3.1.2.1 Estimation
If the problem is one of estimating a parameter θ of the population, we must construct an estimator $\hat\Theta$ that is a function of the values observed through the sampling procedure. It is therefore important for this estimator to be of good quality for evaluating the parameter θ. We thus often require an unbiased estimator: $E(\hat\Theta) = \theta$.
Nevertheless, of all the unbiased estimators, we want the estimator adopted to have other properties, and most notably its dispersion around the central value θ to be as small as possible. Its variance $\mathrm{var}(\hat\Theta) = E((\hat\Theta - \theta)^2)$ must be minimal.²
Alongside this point estimation (there is only one estimate per sample), a precision is generally calculated for the estimation by determining an interval $[\hat\theta_1; \hat\theta_2]$, centred on the value $\hat\theta$, that contains the true value of the parameter θ with a given probability:

$$\Pr[\hat\theta_1 \le \theta \le \hat\theta_2] = 1 - \alpha$$

with α = 0.05, for example. This interval is termed the confidence interval for θ and the number (1 − α) is the confidence coefficient.
This estimation by confidence interval is only possible if one knows the sampling distribution for $\hat\Theta$, for example because the population obeys some known distribution, or because certain asymptotic results, such as the central limit theorem, can be applied to it.
Let us examine, by way of an example, the estimation of the mean of a normal population through a confidence interval. It is already known that the 'best' estimator is the sample average $\bar X$, which is distributed following a normal law with parameters µ and $\sigma/\sqrt{n}$, and
² For example, the sample average is the minimum-variance unbiased estimator of the mean of the population.

Xā’Āµ
ā is thus standard normal. If the quantile for this last distribution is termed
the r.v.
Ļ/ n
Q(u), we have:

$$\Pr\left[Q\left(\frac{\alpha}{2}\right) \le \frac{\bar X - \mu}{\sigma/\sqrt{n}} \le Q\left(1 - \frac{\alpha}{2}\right)\right] = 1 - \alpha$$
$$\Pr\left[\bar X - \frac{\sigma}{\sqrt{n}}\, Q\left(1 - \frac{\alpha}{2}\right) \le \mu \le \bar X - \frac{\sigma}{\sqrt{n}}\, Q\left(\frac{\alpha}{2}\right)\right] = 1 - \alpha$$
$$\Pr\left[\bar X - \frac{\sigma}{\sqrt{n}}\, Q\left(1 - \frac{\alpha}{2}\right) \le \mu \le \bar X + \frac{\sigma}{\sqrt{n}}\, Q\left(1 - \frac{\alpha}{2}\right)\right] = 1 - \alpha$$

This last equality makes up the confidence interval formula for the mean; it can also be written more concisely as:

$$\text{I.C.}(\mu) : \bar X \pm \frac{\sigma}{\sqrt{n}}\, Q\left(1 - \frac{\alpha}{2}\right) \quad (\text{s.p. } \alpha)$$

We should point out that in this last formula the standard deviation σ of the population is generally not known. If it is replaced by its estimate calculated on the basis of the sample, the normal quantile must be replaced by the corresponding quantile of the Student distribution with (n − 1) degrees of freedom.
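The known-σ confidence interval above can be sketched with the standard library alone (the data values are invented for illustration):

```python
from statistics import NormalDist, mean
import math

def confidence_interval(sample, sigma, alpha=0.05):
    """I.C.(mu): x_bar +/- sigma/sqrt(n) * Q(1 - alpha/2), sigma known."""
    q = NormalDist().inv_cdf(1 - alpha / 2)   # standard normal quantile
    x_bar = mean(sample)
    half_width = sigma / math.sqrt(len(sample)) * q
    return x_bar - half_width, x_bar + half_width

sample = [9.8, 10.4, 10.1, 9.7, 10.3, 10.0, 9.9, 10.2]
low, high = confidence_interval(sample, sigma=0.3)
print(low, high)  # an interval centred on the sample mean 10.05
```

With σ unknown one would instead use the Student quantile with n − 1 degrees of freedom, as the text notes.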

3.1.2.2 Hypothesis test
The aim of a hypothesis test is to confirm or refute a hypothesis formulated about a population, on the basis of a sample. In this way, we distinguish:

ā¢ The goodness-of-ļ¬t tests: verifying whether the population from which the sample is
taken is distributed according to a given law of probability.
ā¢ The independence tests between certain classiļ¬cation criteria deļ¬ned on the population
(these are also used for testing independence between r.v.s).
ā¢ The compliance tests: verifying whether a population parameter is equal to a given
value.
ā¢ The homogeneity tests: verifying whether the values for a parameter measured on
more than one population are the same (this requires one sample to be extracted
per population).

The procedure for carrying out a hypothesis test can be described as follows. After defining the hypothesis to be tested H0, also known as the null hypothesis, and the alternative hypothesis H1, we determine under H0 the sampling distribution for the parameter to be studied. With the confidence coefficient (1 − α) fixed, the possible samples are divided into a region of acceptance of H0 (AH0) and a region of rejection of H0 (RH0).
Four situations may therefore arise depending on the reality on one hand and the
decision taken on the other hand (see Table A3.1).
Zones (a) and (d) in Table A3.1 correspond to correct conclusions of the test. In zone (b) the hypothesis is rejected although it is true; this is a first-type error, the probability of which is the complement α of the confidence coefficient fixed beforehand. In zone
Table A3.1 Hypothesis test conclusions

                    Decision
Reality         AH0         RH0
H0              (a)         (b)
H1              (c)         (d)

(c), the hypothesis is accepted although it is false; this is a second-type error, the probability β of which is unknown. A good test will therefore have a small β; the complement (1 − β) of this probability is called the power of the test.
By way of an example, we present the compliance test for the mean of a normal population. The hypothesis under test is, for example, H0: µ = 1; the rival hypothesis is written H1: µ ≠ 1.
Under H0, the r.v. $\dfrac{\bar X - 1}{\sigma/\sqrt{n}}$ follows a standard normal law, and the hypothesis being tested will therefore be rejected when:

$$\left|\frac{\bar X - 1}{\sigma/\sqrt{n}}\right| > Q\left(1 - \frac{\alpha}{2}\right) \quad (\text{s.p. } \alpha)$$

Again, the normal distribution quantile is replaced by the quantile for the Student distribution with (n − 1) degrees of freedom if the standard deviation for the population is replaced by the standard deviation for the sample.
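A sketch of this two-sided compliance test with known σ (the samples are invented; the second clearly violates H0: µ = 1):

```python
from statistics import NormalDist, mean
import math

def reject_h0(sample, mu0, sigma, alpha=0.05):
    """Reject H0: mu = mu0 when |x_bar - mu0| / (sigma/sqrt(n)) > Q(1 - alpha/2)."""
    z = (mean(sample) - mu0) / (sigma / math.sqrt(len(sample)))
    return abs(z) > NormalDist().inv_cdf(1 - alpha / 2)

print(reject_h0([1.02, 0.98, 1.01, 1.00, 0.99, 1.03], mu0=1.0, sigma=0.02))  # False
print(reject_h0([1.10, 1.12, 1.08, 1.11, 1.09, 1.13], mu0=1.0, sigma=0.02))  # True
```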

3.2 REGRESSIONS
3.2.1 Simple regression
Let us assume that a variable Y depends on another variable X through a linear relation
Y = aX + b and that a series of observations is available for this pair of variables (X, Y ):
(xt , yt ) t = 1, . . . , n.

3.2.1.1 Estimation of model
If the observation pairs are represented on the (X, Y ) plane, it will be noticed that there
are differences between them and a straight line (see Figure A3.1). These differences

Figure A3.1 Simple regression (the observed value yt differs from the fitted value axt + b on the line Y = aX + b by the residual εt)

may arise, especially in the field of economics, through failure to take account of certain explanatory factors of variable Y.
It is therefore necessary to find the straight line that passes as closely as possible to the point cloud, that is, the straight line for which the differences $\varepsilon_t = y_t - (ax_t + b)$ are as small as possible overall. The criterion most frequently used is that of minimising the sum of the squares of these differences (referred to as the least squares method). The problem is therefore one of searching for the parameters a and b for which the expression

$$\sum_{t=1}^{n} \varepsilon_t^2 = \sum_{t=1}^{n} \big( y_t - (ax_t + b) \big)^2$$

is minimal. It can easily be shown that these parameters are given by:

$$\hat a = \frac{s_{xy}}{s_x^2} = \frac{\displaystyle\sum_{t=1}^{n}(x_t - \bar x)(y_t - \bar y)}{\displaystyle\sum_{t=1}^{n}(x_t - \bar x)^2} \qquad \hat b = \bar y - \hat a\,\bar x$$

These are unbiased estimators of the real unknown parameters a and b. In addition, of
all the unbiased estimators expressed linearly as a function of yt , they are the ones with
the smallest variance.3 The straight line obtained using the procedure is known as the
regression line.

3.2.1.2 Validation of model
The significantly explanatory character of the variable X in this model can be proved by testing the hypothesis H0: a = 0.
If we are led to reject the hypothesis, it is because X significantly explains Y through the model, which is therefore validated. Because, under certain probability hypotheses on the residuals εt, the estimator for a is distributed according to a Student law with (n − 2) degrees of freedom, the hypothesis will be rejected (and the model therefore accepted) if

$$\left|\frac{\hat a}{s_{\hat a}}\right| > t^{(n-2)}_{1-\alpha/2} \quad (\text{s.p. } \alpha)$$

where $s_{\hat a}$ is the standard deviation of the estimator for a, measured on the observations.
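The least-squares estimators â = s_xy / s_x² and b̂ = ȳ − â x̄ can be computed directly; a sketch with invented observations lying roughly on y = 2x:

```python
def simple_regression(xs, ys):
    """Least-squares estimates for the line y = a*x + b."""
    n = len(xs)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    s_xy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    s_xx = sum((x - x_bar) ** 2 for x in xs)
    a_hat = s_xy / s_xx
    b_hat = y_bar - a_hat * x_bar
    return a_hat, b_hat

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]       # roughly y = 2x
a_hat, b_hat = simple_regression(xs, ys)
print(a_hat, b_hat)  # 1.99 and 0.05
```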

3.2.2 Multiple regression
The regression model that we have just presented can be generalised when several explanatory variables are involved at once:

$$Y = \alpha_0 + \alpha_1 X_1 + \cdots + \alpha_k X_k$$
3
They are referred to as BLUE (Best Linear Unbiased Estimators).

In this case, if the observations x and y and the parameters α are presented as matrices

$$X = \begin{pmatrix} 1 & x_{11} & \cdots & x_{1k} \\ 1 & x_{21} & \cdots & x_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n1} & \cdots & x_{nk} \end{pmatrix} \qquad Y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix} \qquad \alpha = \begin{pmatrix} \alpha_0 \\ \alpha_1 \\ \vdots \\ \alpha_k \end{pmatrix}$$

it can be shown that the vector of parameter estimations is given by $\hat\alpha = (X^t X)^{-1}(X^t Y)$.
In addition, the Student validation test shown for the simple regression also applies here. It is used to test the significantly explanatory nature of a variable within the multiple model, the only alteration being the number of degrees of freedom, which passes from (n − 2) to (n − k − 1). We should mention that there are other tests for the overall validity of the multiple regression model.
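The matrix formula α̂ = (XᵗX)⁻¹(XᵗY) is a one-liner with NumPy; a sketch on simulated data (coefficients invented, noise added so the estimates are only approximate):

```python
import numpy as np

# Simulate y = 1 + 2*x1 - 3*x2 + noise, then recover the coefficients.
rng = np.random.default_rng(2)
n = 200
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 3.0 * x2 + rng.normal(scale=0.1, size=n)

X = np.column_stack([np.ones(n), x1, x2])        # first column of 1s for alpha_0
alpha_hat = np.linalg.inv(X.T @ X) @ (X.T @ y)   # (X^t X)^{-1} X^t Y
print(alpha_hat)  # close to [1, 2, -3]
```

In practice `np.linalg.lstsq` is numerically preferable to forming the inverse explicitly.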

3.2.3 Nonlinear regression
It sometimes turns out that the relation allowing Y to be explained by X1, X2, ..., Xk is not linear: Y = f(X1, X2, ..., Xk).
In this case, the relation can sometimes be made linear by a simple analytical conversion. For example, $Y = aX^b$ is converted by a logarithmic transformation:

$$\ln Y = \ln a + b \ln X \qquad\text{that is,}\qquad Y^* = a^* + bX^*$$

We are thus brought round to a linear regression model.
Other models cannot be transformed quite so simply. Thus, $Y = a + X^b$ is not equivalent to a linear model. In such cases, more elaborate techniques, generally of an iterative nature, must be used to estimate the parameters of this type of model.
Appendix 4
Extreme Value Theory

4.1 EXACT RESULT
Let us consider a sequence of r.v.s X1 , X2 , . . . , Xn , independent and identically distributed
with a common distribution function FX . Let us also consider the sequence of r.v.s Z1 ,
Z2, ..., Zn, defined by:

$$Z_k = \max(X_1, \dots, X_k) \qquad k = 1, \dots, n$$

The d.f. for Zn is given by:

$$F^{(n)}(z) = \Pr[\max(X_1, \dots, X_n) \le z] = \Pr([X_1 \le z] \cap \cdots \cap [X_n \le z]) = \Pr[X_1 \le z] \cdots \Pr[X_n \le z] = [F_X(z)]^n$$

Note
When one wishes to study the distribution of an extreme Zn for a large number n of r.v.s, the precise formula established above is not greatly useful. In fact, we need a result that does not depend essentially on the d.f. FX, as FX is not necessarily known with any great accuracy. In addition, when n tends towards infinity, the r.v. Zn tends towards a degenerate r.v., as:

$$\lim_{n\to\infty} F^{(n)}(z) = \begin{cases} 0 & \text{if } F_X(z) < 1 \\ 1 & \text{if } F_X(z) = 1 \end{cases}$$

It was for this reason that asymptotic extreme value theory was developed.
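The exact result F⁽ⁿ⁾(z) = [F_X(z)]ⁿ is easy to verify by simulation when F_X is known; a sketch with uniform r.v.s on [0, 1], where F_X(z) = z:

```python
import random

# For X_i uniform on [0, 1], Pr[max(X_1..X_n) <= z] = z**n exactly.
random.seed(3)
n, z, trials = 10, 0.9, 200_000
hits = sum(max(random.random() for _ in range(n)) <= z for _ in range(trials))
print(hits / trials)  # close to 0.9 ** 10 ≈ 0.349
```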

4.2 ASYMPTOTIC RESULTS
Asymptotic extreme value theory originates in the work of R. A. Fisher,1 and the problem
was fully solved by B. Gnedenko.2

4.2.1 Extreme value theorem
The extreme value theorem states that, under the hypothesis of independence and equal distribution of the r.v.s X1, X2, ..., Xn, if there are also two sequences of coefficients αn > 0

¹ Fisher R. A. and Tippett L. H. C., Limiting forms of the frequency distribution of the largest or smallest member of a sample, Proceedings of the Cambridge Philosophical Society, Vol. 24, 1928, pp. 180–90.
² Gnedenko B. V., On the distribution limit of the maximum term of a random series, Annals of Mathematics, Vol. 44, 1943, pp. 423–53.

and βn (n = 1, 2, ...) such that the limit (for n → ∞) of the random variable

$$Y_n = \frac{\max(X_1, \dots, X_n) - \beta_n}{\alpha_n}$$

is not degenerate, it will admit a law of probability defined by a distribution function that must be one of the following three forms:

$$\Lambda(z) = \exp[-e^{-z}] \qquad \forall z$$
$$\Phi(z) = \begin{cases} 0 & z \le 0 \\ \exp[-z^{-k}] & z > 0 \end{cases}$$
$$\Psi(z) = \begin{cases} \exp[-(-z)^{k}] & z < 0 \\ 1 & z \ge 0 \end{cases}$$

Here, k is a positive constant. These three laws are known respectively as Gumbel's law (Λ), Fréchet's law (Φ) and Weibull's law (Ψ).
The parameter αn is a parameter measuring the dispersion of the law of probability. The parameter βn, meanwhile, is a location parameter that, when n tends towards infinity, tends towards the mode of the limit distribution.³
We should point out that although the hypotheses of independence and identical distribution of the initial r.v.s are demanding, the extreme value theorem allows for extensions when these hypotheses are only partly satisfied.
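The theorem can be illustrated numerically (a sketch, with parameter choices of our own): for i.i.d. exponential(1) variables one may take αn = 1 and βn = ln n, and the normalized maximum then tends to Gumbel's law, Pr[max − ln n ≤ y] → exp(−e⁻ʸ):

```python
import numpy as np

# Empirical d.f. of max(X_1..X_n) - ln(n) for exponential(1) variables,
# compared with the Gumbel limit exp(-e^{-y}) at y = 1.
rng = np.random.default_rng(4)
n, trials, y = 500, 20_000, 1.0
maxima = rng.exponential(1.0, size=(trials, n)).max(axis=1)
empirical = np.mean(maxima - np.log(n) <= y)
print(empirical)                # empirical d.f. at y = 1
print(np.exp(-np.exp(-y)))      # Gumbel limit, about 0.692
```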

4.2.2 Attraction domains
There is the question of knowing for what type of initial d.f. FX the distribution of extremes tends towards Gumbel's, Fréchet's or Weibull's law. Gnedenko has also provided an answer to this question. These sets of d.f.s FX are the attraction domains for each of the three laws.
The attraction domain for Gumbel's law is characterised by the presence of a number x0 for which FX(x0) = 1 and FX(x) < 1 when x < x0, such that there exists a continuous function g verifying:

$$\lim_{x \to x_0^-} g(x) = 0 \qquad\quad \lim_{x \to x_0^-} \frac{1 - F_X\big[x(1 + y\,g(x))\big]}{1 - F_X(x)} = e^{-y} \quad \forall y$$

It will be seen that the initial FX laws that verify this condition are laws for which
the density has a tail with at least exponential decrease, such as the normal law, the
exponential law or the chi-square law.
The attraction domain for Fréchet's law is characterised by the presence of a positive parameter k such that

$$\lim_{x\to\infty} \frac{1 - F_X(x)}{1 - F_X(ux)} = u^{k} \qquad \forall u > 0$$

³ That is, the value that corresponds to the maximum of the probability density.

The laws covered by this description are those whose tails decrease less rapidly than the exponential, such as Student's law, Cauchy's law and the stable Pareto laws.
Finally, the attraction domain of Weibull's law is characterised by the presence of a number x0 for which FX(x0) = 1 and FX(x) < 1 when x < x0, and the presence of a positive parameter k, such that

$$\lim_{x\to 0^-} \frac{1 - F_X(x_0 + ux)}{1 - F_X(x_0 + x)} = u^{k} \qquad \forall u > 0$$

This category contains the bounded support distributions, such as the uniform law.

4.2.3 Generalisation
A. F. Jenkinson has been able to provide Gnedenko's result with a unified form.
In fact, if for Fréchet's law Φ we set z = 1 − τy and k = −1/τ, with τ < 0, we obtain

$$\exp[-z^{-k}] = \exp[-(1 - \tau y)^{1/\tau}]$$

a relation valid for z > 0, that is, y > 1/τ (for the other values of y, the d.f. takes the value 0).
In the same way, for Weibull's law Ψ we set z = τy − 1 and k = 1/τ, with τ > 0. We then find

$$\exp[-(-z)^{k}] = \exp[-(1 - \tau y)^{1/\tau}]$$

a relation valid for z < 0, that is, y < 1/τ (for the other values of y, the d.f. takes the value 1).
We therefore have the same analytical expression in both cases. The same applies to Gumbel's law Λ: by passage to the limit, we can easily find

$$\lim_{\tau\to 0^{\pm}} \exp[-(1 - \tau y)^{1/\tau}] = \exp\left[-\lim_{n\to\pm\infty}\left(1 - \frac{y}{n}\right)^{n}\right] = \exp[-e^{-y}]$$

which is the expression set out in Gumbel's law.
To sum up: by putting $a(y) = \exp[-(1 - \tau y)^{1/\tau}]$, the d.f. FY of the extreme limit distribution is written as follows:

If τ < 0, $F_Y(y) = \begin{cases} 0 & y \le 1/\tau \\ a(y) & y > 1/\tau \end{cases}$ (Fréchet's law).

If τ = 0, $F_Y(y) = a(y)\ \ \forall y$ (Gumbel's law).

If τ > 0, $F_Y(y) = \begin{cases} a(y) & y < 1/\tau \\ 1 & y \ge 1/\tau \end{cases}$ (Weibull's law).

This, of course, is the result shown in Section 7.4.2.
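The unified form lends itself to a single function; a sketch (our own code, not from the text) implementing F_Y for the three cases:

```python
import math

def gev_cdf(y, tau):
    """Unified extreme-value d.f.: Fréchet (tau < 0), Gumbel (tau = 0),
    Weibull (tau > 0), all via a(y) = exp[-(1 - tau*y)**(1/tau)]."""
    if tau == 0.0:
        return math.exp(-math.exp(-y))        # Gumbel limit of a(y)
    if tau < 0.0 and y <= 1.0 / tau:
        return 0.0                            # Fréchet: zero below 1/tau
    if tau > 0.0 and y >= 1.0 / tau:
        return 1.0                            # Weibull: one above 1/tau
    return math.exp(-(1.0 - tau * y) ** (1.0 / tau))

print(gev_cdf(1.0, 0.0))    # exp(-e^{-1}) ≈ 0.692
print(gev_cdf(0.0, -0.5))   # Fréchet branch: exp(-1) ≈ 0.368
```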
Appendix 5
Canonical Correlations

5.1 GEOMETRIC PRESENTATION OF THE METHOD
The aim of canonical analysis¹ is to study the linear relations that exist between the static spreads and the dynamic spreads observed on the same sample. We are looking for a linear combination of the static spreads and a linear combination of the dynamic spreads that are as well correlated as possible.
We therefore have two sets of characters: x1, x2, ..., xp on one hand and y1, y2, ..., yq on the other hand. In addition, it is assumed that the characters are centred, standardised and observed on the same number n of individuals.
The two sets of characters generate respective associated vectorial subspaces V1 and V2 of $\mathbb{R}^n$. We also introduce the matrices X and Y, with respective formats (n, p) and (n, q), in which the various columns are the observations relative to the different characters.
As the characters are centred, the same will apply to the vectorial subspaces. Geometrically, therefore, the problem of canonical analysis can be presented as follows: we need to find ξ ∈ V1 and η ∈ V2 such that $\cos^2(\xi, \eta) = r^2(\xi, \eta)$ is maximised.

5.2 SEARCH FOR CANONICAL CHARACTERS
Let us assume that the characters ξ¹ and η¹ are solutions to the problem (see Figure A5.1).
The angle between ξ¹ and η¹ does not depend on their norm (length). In fact, V1 and V2 are invariant when the base vectors are multiplied by a scalar, and therefore cos²(ξ¹, η¹) does not depend on the base vector norms. It is then assumed that ‖ξ¹‖ = ‖η¹‖ = 1.
The character η¹ must be collinear with the orthogonal projection of ξ¹ onto V2, which is the vector of V2 that makes a minimum angle with ξ¹. This condition is written as

$$A_2 \xi^1 = r_1 \eta^1$$

where $r_1^2 = \cos^2(\xi^1, \eta^1)$ and A2 is the operator of orthogonal projection onto V2. In the same way, we have $A_1 \eta^1 = r_1 \xi^1$. These two relations produce the system

$$A_1 A_2\, \xi^1 = \lambda_1 \xi^1 \qquad A_2 A_1\, \eta^1 = \lambda_1 \eta^1$$

where $\lambda_1 = r_1^2 = \cos^2(\xi^1, \eta^1)$.
It is therefore deduced that ξ¹ and η¹ are respectively the eigenvectors of the operators A1A2 and A2A1 associated with the same highest eigenvalue λ1, this value being equal

¹ A detailed description of this method and other multivariate statistical methods is found in: Chatfield C. and Collins A. J., Introduction to Multivariate Analysis, Chapman & Hall, 1980; Saporta G., Probabilities, Data Analysis and Statistics, Technip, 1990.

Figure A5.1 Canonical correlations (ξ¹ in V1, η¹ in V2, with the orthogonal projections A2ξ¹ and A1η¹)

to their squared cosine or their squared correlation. The characters ξ¹ and η¹ are deduced from each other by a simple linear application:

$$\eta^1 = \frac{1}{\sqrt{\lambda_1}} A_2 \xi^1 \qquad\text{and}\qquad \xi^1 = \frac{1}{\sqrt{\lambda_1}} A_1 \eta^1$$

The following canonical characters are the eigenvectors of A1A2 associated with the eigenvalues λi sorted into decreasing order. If the canonical characters of order i are written

$$\xi^i = a_1 x_1 + \cdots + a_p x_p \qquad\text{and}\qquad \eta^i = b_1 y_1 + \cdots + b_q y_q$$

(in other words, in matrix terms, $\xi^i = Xa$ and $\eta^i = Yb$) and if the diagonal matrix of the weights is expressed as D, it can be shown that:

$$b = \frac{1}{\sqrt{\lambda_i}}\,(Y^t D Y)^{-1} (X^t D Y)^t\, a \qquad\quad a = \frac{1}{\sqrt{\lambda_i}}\,(X^t D X)^{-1} (X^t D Y)\, b$$
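Substituting one relation into the other shows that the squared canonical correlations λi are the eigenvalues of $(X^tDX)^{-1}(X^tDY)(Y^tDY)^{-1}(X^tDY)^t$. A sketch with uniform weights (D proportional to the identity, data invented with one shared factor so the true first canonical correlation is known to be about 0.8):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
common = rng.normal(size=n)                        # factor shared by X and Y
X = np.column_stack([common + 0.5 * rng.normal(size=n), rng.normal(size=n)])
Y = np.column_stack([common + 0.5 * rng.normal(size=n), rng.normal(size=n)])
X -= X.mean(axis=0)                                # centre the characters
Y -= Y.mean(axis=0)

Sxx, Syy, Sxy = X.T @ X, Y.T @ Y, X.T @ Y
M = np.linalg.inv(Sxx) @ Sxy @ np.linalg.inv(Syy) @ Sxy.T
r1 = np.sqrt(np.max(np.linalg.eigvals(M).real))    # first canonical correlation
print(r1)  # close to the built-in correlation of about 0.8
```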
Appendix 6
Algebraic Presentation of Logistic
Regression

Let Y be the binary qualitative variable (0 for periods of equilibrium, 1 for breaks in equilibrium) that we wish to explain by the quantitative explanatory variables X1, ..., Xp. The model looks to evaluate the following probabilities:

$$p_i = \Pr\big[Y = 1 \mid X_1 = x_{i1}; \dots; X_p = x_{ip}\big]$$

The logistic regression model¹ is a nonlinear regression model. Here, the specification for the model is based on the use of a logistic function:

$$G(p) = \ln \frac{p}{1 - p}$$

In this type of model, it is considered that there is linear dependency between G(pi ) and
the explanatory variables:

$$G(p_i) = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_p x_{ip}$$

where β0, β1, ..., βp are the unknown parameters to be estimated. By introducing the vector β of these coefficients and, for each individual i, the vector

$$z_i = \begin{pmatrix} 1 \\ x_{i1} \\ \vdots \\ x_{ip} \end{pmatrix}$$

the binomial probability can be expressed in the form

$$p_i = \frac{e^{\beta^t z_i}}{1 + e^{\beta^t z_i}}$$
The method for estimating the parameters is that of maximising the likelihood function through successive iterations. This likelihood function is the product of the statistical densities relative to each individual member:

$$L(\beta) = \prod_{\{i\,:\,y_i = 1\}} \frac{e^{\beta^t z_i}}{1 + e^{\beta^t z_i}} \cdot \prod_{\{i\,:\,y_i = 0\}} \frac{1}{1 + e^{\beta^t z_i}}$$
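In practice ln L(β) is maximised; the text's "successive iterations" are usually Newton-Raphson, but a plain gradient-ascent sketch (simulated data, coefficients of our own choosing) already illustrates the estimation:

```python
import numpy as np

# Simulate a logistic model with beta = (-1, 2), then recover it by
# gradient ascent on the mean log-likelihood.
rng = np.random.default_rng(6)
n = 1000
x = rng.normal(size=n)
true_beta = np.array([-1.0, 2.0])                 # (beta_0, beta_1), invented
Z = np.column_stack([np.ones(n), x])              # rows are the vectors z_i
p = 1.0 / (1.0 + np.exp(-Z @ true_beta))
y = (rng.random(n) < p).astype(float)             # simulated 0/1 outcomes

beta = np.zeros(2)
for _ in range(5000):
    p_hat = 1.0 / (1.0 + np.exp(-Z @ beta))
    beta += 1.0 * Z.T @ (y - p_hat) / n           # gradient of mean log-likelihood
print(beta)  # close to (-1, 2), up to sampling error
```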

¹ A detailed description of this method and other multivariate statistical methods is found in: Chatfield C. and Collins A. J., Introduction to Multivariate Analysis, Chapman & Hall, 1980; Saporta G., Probabilities, Data Analysis and Statistics, Technip, 1990.
Appendix 7
Time Series Models:
ARCH-GARCH and EGARCH

7.1 ARCH-GARCH MODELS
The ARCH-GARCH (auto-regressive conditional heteroscedasticity or generalised auto-
regressive conditional heteroscedasticity) models were developed by Engel1 in 1982 in the
context of studies of macroeconomic data. The ARCH model allows speciļ¬c modelling
of variance in terms of error. Heteroscedasticity can be integrated by introducing an
exogenous variable x, which provides for variance in the term of error. This modelling
can take one of the following forms:

yt = et Ā· xtā’1 yt = et Ā· ytā’1
or

Here, et is a white noise (sequence of r.v.s not correlated, with zero mean and the
same variance).
In order to prevent the variance of this geometric series from being infinite or zero, it is preferable to take the following formulation:

$$y_t = a_0 + \sum_{i=1}^{p} a_i y_{t-i} + \varepsilon_t$$

with:

$$E(\varepsilon_t) = 0 \qquad\quad \mathrm{var}(\varepsilon_t) = \gamma + \sum_{i=1}^{q} \alpha_i\, \varepsilon_{t-i}^2$$

This type of model is generally expressed as AR(p)-ARCH(q) or ARCH(p, q).
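An AR(1)-ARCH(1) process is straightforward to simulate; a sketch (parameter values invented) showing the recursion for the conditional variance:

```python
import numpy as np

# Simulate y_t = a0 + a1*y_{t-1} + eps_t where eps_t has conditional
# variance gamma + alpha * eps_{t-1}^2 (an AR(1)-ARCH(1) process).
rng = np.random.default_rng(7)
T, a0, a1, gamma, alpha = 10_000, 0.1, 0.5, 0.2, 0.3
y = np.zeros(T)
eps_prev = 0.0
for t in range(1, T):
    var_t = gamma + alpha * eps_prev ** 2   # conditional variance of eps_t
    eps = np.sqrt(var_t) * rng.normal()
    y[t] = a0 + a1 * y[t - 1] + eps
    eps_prev = eps
print(y.var())  # unconditional variance: gamma/(1-alpha)/(1-a1**2), about 0.38
```

The simulated path exhibits the volatility clustering the model is designed to capture.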

7.2 EGARCH MODELS
These models, unlike the ARCH-GARCH models, allow the conditional variance to respond in different ways to a fall or a rise in the series. This configuration is of particular interest for generally increasing financial series. An example of this type of model is Nelson's:²

$$x_t = \mu + \sqrt{h_t}\,\sigma_t$$
$$\ln h_t = \alpha + \beta \ln h_{t-1} + \delta\left(|\sigma_{t-1}| - \sqrt{2/\pi}\right) + \gamma\,\sigma_{t-1}$$

Here, $\sigma_t \mid I_{t-1}$ follows a standard normal law ($I_{t-1}$ representing the information available at the moment t − 1).
¹ Engle R. F., Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation, Econometrica, No. 50, 1982, pp. 987–1007. A detailed presentation of time series models will also be found in: Droebske J. J., Fichet B. and Tassi P., Modélisation ARCH, théorie statistique et applications dans le domaine de la finance, Éditions ULB, 1994; and in Gouriéroux C., Modèles ARCH et applications financières, Economica, 1992.
² Nelson D. B., Conditional heteroscedasticity in asset returns: a new approach, Econometrica, No. 59, 1991, pp. 347–70.
Appendix 8
Numerical Methods for Solving
Nonlinear Equations1

An equation is said to be nonlinear when it involves terms of degree higher than 1 in
the unknown quantity. These terms may be polynomial or capable of being broken down
into Taylor series of degrees higher than 1.
Nonlinear equations cannot in general be solved analytically. In such cases, the solutions of the equations must be approximated using iterative methods. The principle of these methods consists in starting from an arbitrary point (as close as possible to the solution sought) and arriving at the solution gradually through successive tests.
The two criteria to take into account when choosing a method for solving nonlinear
equations are:

ā¢ Method convergence (conditions of convergence, speed of convergence etc.).
ā¢ The cost of calculating of the method.

8.1 GENERAL PRINCIPLES FOR ITERATIVE METHODS
8.1.1 Convergence
Any nonlinear equation f (x) = 0 can be expressed as x = g(x).
If x0 constitutes the arbitrary starting point for the method, the solution x* of this equation, x* = g(x*), can be reached as the limit of the numerical sequence:

$$x_{n+1} = g(x_n) \qquad n = 0, 1, 2, \dots$$

This iteration is termed a Picard process and x*, the limit of the sequence, is termed the fixed point of the iteration.
In order for the sequence set out above to tend towards the solution of the equation, it has to be guaranteed that this sequence converges. A sufficient condition for convergence is supplied by the following theorem: if x = g(x) has a solution a within the interval I = [a − b; a + b] = {x : |x − a| ≤ b} and if g(x) satisfies Lipschitz's condition:

$$\exists L \in [0; 1[\ :\ \forall x \in I, \quad |g(x) - g(a)| \le L\,|x - a|$$

Then, for every x0 ∈ I:

ā¢ all the iterated values xn will belong to I;
ā¢ the iterated values xn will converge towards a;
ā¢ the solution a will be unique within interval I .

¹ This appendix is mostly based on Litt F. X., Analyse numérique, première partie, ULG, 1999. Interested readers should also read: Burden R. L. and Faires D. J., Numerical Analysis, Prindle, Weber & Schmidt, 1981; and Nougier J. P., Méthodes de calcul numérique, Masson, 1993.

We should also note a case in which Lipschitz's condition is satisfied: it is sufficient that for every x ∈ I, g′(x) exists and is such that |g′(x)| ≤ m with m < 1.

8.1.2 Order of convergence
It is important to choose the most suitable of the methods that converge. At this level, one
of the most important criteria to take into account is the speed or order of convergence.
Consider the sequence xn defined above and the error en = xn − a. If there is a number p and a constant C > 0 so that

$$\lim_{n\to\infty} \frac{|e_{n+1}|}{|e_n|^{p}} = C$$

p will then be termed the order of convergence for the sequence and C is the asymptotic
error constant.
When the speed of convergence is unsatisfactory, it can be improved by the Aitken
extrapolation,2 which is a convergence acceleration process. The speed of convergence
of this extrapolation is governed by the following result:

ā¢ If Picardā™s iterative method is of the order p, the Aitken extrapolation will be of the
order 2p ā’ 1.
ā¢ If Picardā™s iterative method is of the ļ¬rst order, Aitkenā™s extrapolation will be of the
second order in the case of a simple solution and of the ļ¬rst order in the case of a
multiple solution. In this last case, the asymptotic error constant is equal to 1 ā’ 1/m
where m is the multiplicity of the solution.

8.1.3 Stop criteria
As stated above, the iterative methods for solving nonlinear equations supply an approximate solution of the equation. It is therefore essential to be able to estimate the error in the solution.
Working from the mean value theorem:

$$f(x_n) = (x_n - a)\, f'(\xi), \qquad \xi \in [x_n; a]$$

we can deduce the following estimation of the error:

$$|x_n - a| \le \frac{|f(x_n)|}{M}, \qquad |f'(x)| \ge M,\ \ x \in [x_n; a]$$

In addition, the rounding error inherent in every numerical method limits the accuracy of the iterative methods to:

$$\varepsilon_a = \frac{\delta}{|f'(a)|}$$

² We refer to Litt F. X., Analyse numérique, première partie, ULG, 1999, for further details.

in which δ represents an upper bound for the rounding error in iteration n:

$$\delta \ge |\delta_n| = |\tilde f(x_n) - f(x_n)|$$

where $\tilde f(x_n)$ represents the calculated value of the function.
Let us now assume that we wish to determine a solution a with a degree of precision ε. We could stop the iterative process on the basis of the error estimation formula.
These formulae, however, require a certain level of information on the derivative f′(x), information that is not easy to obtain. On the other hand, the limit precision εa will not generally be known beforehand.³ Consequently, we run the risk that ε, the accuracy level sought, is never reached, as it is better than the limit precision εa (ε < εa). In this case, the iterative process would carry on indefinitely.
This leads us to accept the following stop criterion:

|xn ā’ xnā’1 | < Īµ
|xn+1 ā’ xn | ā„ |xn ā’ xnā’1 |

This means that the iteration process will be stopped when the iteration n produces a
variation in value less than that of the iteration n + 1. The value of Īµ will be chosen in
a way that prevents the iteration from stopping too soon.
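The double stop criterion above can be sketched in code as follows. This is an illustrative Python fragment, not from the original text; the test function, the tolerance and the iteration cap are arbitrary choices for the example:

```python
import math

def picard_solve(g, x0, eps=1e-10, max_iter=100):
    """Picard iteration x_{n+1} = g(x_n) with the double stop criterion:
    stop when the step falls below eps, or when the new step is no smaller
    than the previous one (the limit precision has been reached)."""
    x_prev, x = x0, g(x0)
    step_prev = abs(x - x_prev)
    for _ in range(max_iter):
        x_next = g(x)
        step = abs(x_next - x)
        if step_prev < eps or step >= step_prev:
            return x_next
        x_prev, x, step_prev = x, x_next, step
    return x

# Fixed point of x = cos(x):
root = picard_solve(math.cos, 1.0)
assert abs(root - math.cos(root)) < 1e-8
```

The second condition is what guards against an ε chosen below the limit precision: once rounding noise dominates, the steps stop shrinking and the loop exits instead of running indefinitely.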

8.2 PRINCIPAL METHODS
Defining an iterative method ultimately comes down to defining the function h(x) in the equation x = g(x) ≡ x − h(x)f(x).
The choice of this function will determine the order of the method.

8.2.1 First-order methods
The simplest choice consists of taking h(x) = m = constant ≠ 0.

8.2.1.1 Chord method
This defines the chord method (Figure A8.1), for which the iteration is x_{n+1} = x_n − m·f(x_n).

Figure A8.1 Chord method (graphs of y = f(x) and y = x/m; iterates x_0, x_1, x_2)

3
This will in effect require knowledge of f′(a), when a is exactly what is being sought.
378 Asset and Risk Management

Figure A8.2 Classic chord method (graph of y = f(x); iterates x_0, x_1, x_2)

The sufļ¬cient convergence condition (see Section A8.1.1) for this method is 0 <
mf (x) < 2, in the neighbourhood of the solution. In addition, it can be shown that
|en+1 |
= |g (a)| = 0.
limnā’ā
|en |
The chord method is therefore clearly a ļ¬rst-order method (see Section A8.1.2).
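A minimal Python sketch of the chord iteration follows (not from the original text; the example function, the value of m and the tolerances are assumptions made for illustration, with m chosen so that 0 < m·f′(x) < 2 near the root):

```python
def chord(f, x0, m, eps=1e-10, max_iter=200):
    """Chord method: x_{n+1} = x_n - m*f(x_n), with constant m.
    Converges linearly when 0 < m*f'(x) < 2 near the root."""
    x = x0
    for _ in range(max_iter):
        x_next = x - m * f(x)
        if abs(x_next - x) < eps:
            return x_next
        x = x_next
    return x

# f(x) = x^2 - 2 near sqrt(2): f'(x) is about 2.83, so m = 0.3 gives
# m*f'(x) of roughly 0.85, inside the convergence interval (0, 2).
root = chord(lambda x: x * x - 2.0, 1.0, 0.3)
assert abs(root - 2.0 ** 0.5) < 1e-8
```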

8.2.1.2 Classic chord method
It is possible to improve the order of convergence by making m change at each iteration:

x_{n+1} = x_n − m_n f(x_n)

The classic chord method (Figure A8.2) takes as the value for m_n the inverse of the slope of the straight line defined by the points (x_{n−1}; f(x_{n−1})) and (x_n; f(x_n)):

x_{n+1} = x_n − f(x_n) (x_n − x_{n−1}) / (f(x_n) − f(x_{n−1}))

This method will converge if f′(a) ≠ 0 and f″(x) is continuous in the neighbourhood of a. In addition, it can be shown that

lim_{n→∞} |e_{n+1}| / |e_n|^p = |f″(a) / (2f′(a))|^{1/p} ≠ 0

for p = (1 + √5)/2 = 1.618… > 1, which greatly improves the order of convergence of the method.
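The iteration above (the secant update) can be sketched as follows; a minimal Python illustration, not from the original text, with the test equation x³ − x − 2 = 0 and the starting points chosen arbitrarily:

```python
def classic_chord(f, x0, x1, eps=1e-12, max_iter=100):
    """Classic chord (secant) method: the slope is re-estimated at each
    step from the last two iterates; order of convergence ~ 1.618."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            break  # flat secant: cannot divide by the slope
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < eps:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1

root = classic_chord(lambda x: x ** 3 - x - 2.0, 1.0, 2.0)
assert abs(root ** 3 - root - 2.0) < 1e-9
```

Note that each step costs only one new evaluation of f, a point that matters in the comparison with Newton–Raphson below.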

8.2.1.3 Regula falsi method
The regula falsi method (Figure A8.3) takes as the value for m_n the inverse of the slope of the straight line defined by the points (x_n; f(x_n)) and (x_{n′}; f(x_{n′})), where n′ is the highest index for which f(x_n)·f(x_{n′}) < 0:

x_{n+1} = x_n − f(x_n) (x_n − x_{n′}) / (f(x_n) − f(x_{n′}))

Figure A8.3 Regula falsi method (graph of y = f(x); iterates x_0, x_1, x_2)

This method always converges when f(x) is continuous. On the other hand, the convergence of this method is linear and therefore less effective than that of the classic chord method.
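A minimal Python sketch of regula falsi follows (not from the original text; the bracketing interval and tolerances are assumptions for the example). Keeping one point of each sign is what preserves the bracket and guarantees convergence:

```python
def regula_falsi(f, a, b, eps=1e-10, max_iter=200):
    """Regula falsi: secant-type step, but the second point is always the
    most recent iterate of opposite sign, so the root stays bracketed.
    Requires f(a)*f(b) < 0."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "root must be bracketed"
    x = a
    for _ in range(max_iter):
        x_new = b - fb * (b - a) / (fb - fa)
        fx = f(x_new)
        if abs(x_new - x) < eps or fx == 0.0:
            return x_new
        if fa * fx < 0:
            b, fb = x_new, fx      # root now in [a, x_new]
        else:
            a, fa = x_new, fx      # root now in [x_new, b]
        x = x_new
    return x

root = regula_falsi(lambda x: x ** 3 - x - 2.0, 1.0, 2.0)
assert abs(root ** 3 - root - 2.0) < 1e-6
```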

8.2.2 Newtonā“Raphson method
If, in the classic chord method, we choose mn so that g (xn ) = 0, that is, f (xn ) = 1/mn ,
we will obtain a second-order iteration.
The method thus deļ¬ned,
f (xn )
xn+1 = xn ā’
f (xn )
is known as the Newton-Raphson method (Figure A8.4).
It is clearly a second-order method, as
|en+1 | 1 f (a)
= =0
lim
nā’ā |en |2 2 f (a)
The Newtonā“Raphson method is therefore rapid insofar as the initial iterated value is not
too far from the solution sought, as global convergence is not assured at all.
A convergence criterion is therefore given for the following theorem. Assume that f (x) =
0 and that f (x) does not change its sign within the interval [a; b] and f (a).f (b) < 0.
If, furthermore,
f (a) f (b)
< b ā’ a and <bā’a
f (a) f (b)
the Newtonā“Raphson method will converge at every initial arbitrary point x0 that belongs
to [a; b].

Figure A8.4 Newton–Raphson method (graph of y = f(x); iterates x_0, x_1, x_2, with steps f(x_0)/f′(x_0) and f(x_1)/f′(x_1))

The classic chord method, unlike the Newton–Raphson method, requires two initial approximations, but it only involves one new function evaluation at each subsequent stage. The choice between the classic chord method and the Newton–Raphson method will therefore depend on the effort of calculation required to evaluate f′(x).
Let us assume that the effort of calculation required to evaluate f′(x) is θ times the effort of calculation for f(x).
Given what has been said above, we can establish that the effort of calculation will be the same for the two methods if:

1 + θ = log 2 / log p, in which p = (1 + √5)/2

is the order of convergence of the classic chord method.
In consequence:

• If θ > (log 2 / log p) − 1 ≈ 0.44, the classic chord method will be used.
• If θ ≤ (log 2 / log p) − 1 ≈ 0.44, the Newton–Raphson method will be used.

8.2.3 Bisection method
The bisection method is a linear convergence method and is therefore slow. Use of the method is, however, justified by the fact that it converges globally, unlike the usual methods (especially the Newton–Raphson and classic chord methods). This method will therefore be used to bring the initial iterate of the Newton–Raphson or classic chord method to a point sufficiently close to the solution, to ensure that the methods in question converge.
Let us assume therefore that f(x) is continuous in the interval [a_0; b_0] and such that4 f(a_0)·f(b_0) < 0. The principle of the method consists of constructing a convergent sequence of nested intervals, [a_1; b_1] ⊃ [a_2; b_2] ⊃ [a_3; b_3] ⊃ …, all of which contain a solution of the equation f(x) = 0.
If it is assumed that5 f(a_0) < 0 and f(b_0) > 0, the intervals I_k = [a_k; b_k] will be constructed by recurrence on the basis of I_{k−1}:

[a_k; b_k] = [m_k; b_{k−1}] if f(m_k) < 0
[a_k; b_k] = [a_{k−1}; m_k] if f(m_k) > 0

Here, m_k = (a_{k−1} + b_{k−1})/2. One is thus assured that f(a_k) < 0 and f(b_k) > 0, which guarantees convergence.
The bisection method is not a Picard iteration, but the order of convergence can be determined, as lim_{n→∞} |e_{n+1}| / |e_n| = 1/2. The bisection method is therefore a first-order method.
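The recurrence above can be sketched as follows; a minimal Python illustration, not from the original text, adopting the same sign convention f(a) < 0 < f(b) as the recurrence (the test function and tolerance are arbitrary choices):

```python
def bisection(f, a, b, eps=1e-10, max_iter=200):
    """Bisection: halve the bracketing interval at each step (first order,
    with asymptotic error constant 1/2). Assumes f(a) < 0 < f(b)."""
    assert f(a) < 0 < f(b), "need f(a) < 0 and f(b) > 0"
    for _ in range(max_iter):
        m = (a + b) / 2.0
        if f(m) < 0:
            a = m        # [a_k; b_k] = [m_k; b_{k-1}]
        else:
            b = m        # [a_k; b_k] = [a_{k-1}; m_k]
        if b - a < eps:
            break
    return (a + b) / 2.0

root = bisection(lambda x: x ** 3 - x - 2.0, 1.0, 2.0)
assert abs(root ** 3 - root - 2.0) < 1e-8
```

In practice a few such halvings are often enough to produce a starting point from which Newton–Raphson or the classic chord method then converges quickly.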

8.3 NONLINEAR EQUATION SYSTEMS
We have a system of n nonlinear equations in n unknowns: f_i(x_1, x_2, …, x_n) = 0, i = 1, 2, …, n, or, in vector notation, f(x) = 0. The solution to the system is an n-dimensional vector a.

4
This implies that f(x) has a root within this interval.
5
This is not restrictive in any way, as it corresponds to considering f(x) = 0 or −f(x) = 0, x ∈ [a_0; b_0], depending on the case.

8.3.1 General theory of n-dimensional iteration
The general theory of n-dimensional iteration is similar to the one-dimensional theory. The above equation can thus be expressed in the form:

x = g(x) ≡ x − A(x)f(x)

where A is a square matrix of order n.
Picard's iteration is always defined as

x_{k+1} = g(x_k), k = 0, 1, 2, …

and the convergence theorem for Picard's iteration remains valid in n dimensions.
In addition, if the Jacobian matrix J(x), defined by [J(x)]_{ij} = ∂g_i(x)/∂x_j, is such that for every x ∈ I, ||J(x)|| ≤ m with m < 1 for a compatible matrix norm, Lipschitz's condition is satisfied.
The order of convergence is defined by

lim_{k→∞} ||e_{k+1}|| / ||e_k||^p = C

where C is the asymptotic error constant.

8.3.2 Principal methods
If one chooses a constant matrix A as the value for A(x), the iterative process is the generalisation to n dimensions of the chord method.
If the inverse of the Jacobian matrix of f is chosen as the value of A(x), we obtain the generalisation to n dimensions of the Newton–Raphson method.
Another approach to solving the equation f(x) = 0 involves using the ith equation to determine the ith component. Therefore, for i = 1, 2, …, n, the following equations will be solved in succession with respect to x_i:

f_i(x_1^{(k+1)}, …, x_{i−1}^{(k+1)}, x_i, x_{i+1}^{(k)}, …, x_n^{(k)}) = 0

This is known as the nonlinear Gauss–Seidel method.
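The n-dimensional Newton–Raphson method described above can be sketched for a 2 × 2 system; a minimal Python illustration, not from the original text, in which the Jacobian is approximated by forward differences and the linear step is solved by Cramer's rule (both implementation choices made here for compactness, with the test system chosen arbitrarily):

```python
def newton_system(f, x0, eps=1e-10, max_iter=50, h=1e-7):
    """2-dimensional Newton-Raphson with a forward-difference Jacobian,
    solving the 2x2 linear step J d = f by Cramer's rule."""
    x, y = x0
    for _ in range(max_iter):
        f1, f2 = f(x, y)
        # Forward-difference approximation of the Jacobian of f:
        a11 = (f(x + h, y)[0] - f1) / h
        a12 = (f(x, y + h)[0] - f1) / h
        a21 = (f(x + h, y)[1] - f2) / h
        a22 = (f(x, y + h)[1] - f2) / h
        det = a11 * a22 - a12 * a21
        dx = (f1 * a22 - f2 * a12) / det
        dy = (a11 * f2 - a21 * f1) / det
        x, y = x - dx, y - dy
        if abs(dx) + abs(dy) < eps:
            return x, y
    return x, y

# Intersection of the circle x^2 + y^2 = 4 with the line y = x:
x, y = newton_system(lambda x, y: (x * x + y * y - 4.0, y - x), (1.0, 2.0))
assert abs(x - y) < 1e-8 and abs(x * x + y * y - 4.0) < 1e-8
```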
Bibliography

CHAPTER 1
The Bank for International Settlements, Basle Committee for Banking Controls, Sound Practices
for the Management and Supervision of Operational Risk, Basle, February 2003.
The Bank for International Settlements, Basle Committee for Banking Controls, The New Basle
Capital Accord, Basle, January 2001.
The Bank for International Settlements, Basle Committee for Banking Controls, The New Basle
Capital Accord: An Explanatory Note, Basle, January 2001.
The Bank for International Settlements, Basle Committee for Banking Controls, Vue d'ensemble du Nouvel accord de Bâle sur les fonds propres, Basle, January 2001.
Cruz M. G., Modelling, Measuring and Hedging Operational Risk, John Wiley & Sons, Ltd, 2002.
Hoffman D. G., Managing Operational Risk: 20 Firm-Wide Best Practice Strategies, John Wiley &
Sons, Inc, 2002.
Jorion P., Financial Risk Manager Handbook (Second Edition), John Wiley & Sons, Inc, 2003.
Marshall C., Measuring and Managing Operational Risks in Financial Institutions, John Wiley &
Sons, Inc, 2001.

CHAPTER 2
The Bank for International Settlements, BIS Quarterly Review, Collateral in Wholesale Financial
Markets, Basle, September 2001.
The Bank for International Settlements, Basle Committee for Banking Controls, Internal Audit in
Banks and the Supervisorā™s Relationship with Auditors, Basle, August 2001.
The Bank for International Settlements, Basle Committee for Banking Controls, Sound Practices
for Managing Liquidity in Banking Organisations, Basle, February 2000.
The Bank for International Settlements, Committee on the Global Financial System, Collateral in
Wholesale Financial Markets: Recent Trends, Risk Management and Market Dynamics, Basle,
March 2001.
Moodyā™s, Moodyā™s Analytical Framework for Operational Risk Management of Banks, Moodyā™s,
January 2003.

CHAPTER 3
Bachelier L., Théorie de la spéculation, Gauthier-Villars, 1900.
Bechu T. and Bertrand E., L'Analyse Technique, Economica, 1998.
Binmore K., Jeux et théorie des jeux, De Boeck & Larcier, 1999.
Brealey R. A. and Myers S. C., Principles of Corporate Finance, McGraw-Hill, 1991.
Broquet C., Cobbaut R., Gillet R., and Vandenberg A., Gestion de Portefeuille, De Boeck, 1997.
Chen N. F., Roll R., and Ross S. A., Economic forces of the stock market, Journal of Business,
No. 59, 1986, pp. 383ā“403.
Copeland T. E. and Weston J. F., Financial Theory and Corporate Policy, Addison-Wesley, 1988.
Devolder P., Finance stochastique, Éditions ULB, 1993.
Dhrymes P. J., Friend I., and Gultekin N. B., A critical re-examination of the empirical evidence
on the arbitrage pricing theory, Journal of Finance, No. 39, 1984, pp. 323ā“46.
Eeckhoudt L. and Gollier C., Risk, Harvester Wheatsheaf, 1995.
Elton E. and Gruber M., Modern Portfolio Theory and Investment Analysis, John Wiley & Sons,
Inc, 1991.
Elton E., Gruber M., and Padberg M., Optimal portfolios from single ranking devices, Journal of
Portfolio Management, Vol. 4, No. 3, 1978, pp. 15ā“19.
Elton E., Gruber M., and Padberg M., Simple criteria for optimal portfolio selection, Journal of
Finance, Vol. XI, No. 5, 1976, pp. 1341ā“57.
Elton E., Gruber M., and Padberg M., Simple criteria for optimal portfolio selection: tracing out
the efļ¬cient frontier, Journal of Finance, Vol. XIII, No. 1, 1978, pp. 296ā“302.
Elton E., Gruber M., and Padberg M., Simple criteria for optimal portfolio selection with upper
bounds, Operation Research, 1978.
Fama E. and Macbeth J., Risk, return and equilibrium: empirical tests, Journal of Political Economy,
Vol. 71, No. 1., 1974, pp. 607ā“36.
Fama E. F., Behaviour of stock market prices, Journal of Business, Vol. 38, 1965, pp. 34ā“105.
Fama E. F., Efļ¬cient capital markets: a review of theory and empirical work, Journal of Finance,
Vol. 25, 1970.
Fama E. F., Random walks in stock market prices, Financial Analysis Journal, 1965.
Gillet P., L'efficience des marchés financiers, Economica, 1999.
Gordon M. and Shapiro E., Capital equipment analysis: the required rate of profit, Management Science, Vol. 3, October 1956.
Grinold C. and Kahn N., Active Portfolio Management, McGraw-Hill, 1998.
Lintner J., The valuation of risky assets and the selection of risky investments, Review of Economics
and Statistics, Vol. 47, 1965, pp. 13ā“37.
Markowitz H., Mean-Variance Analysis in Portfolio Choice and Capital Markets, Blackwell Pub-
lishers, 1987.
Markowitz H., Portfolio selection, Journal of Finance, Vol. 7, No. 1, 1952, pp. 419ā“33.
Mehta M. L., Random Matrices, Academic Press, 1996.
Miller M. H. and Modigliani F., Dividend policy, growth and the valuation of shares, Journal of Business, Vol. 34, 1961, pp. 411–33.
Morrison D., Multivariate Statistical Methods, McGraw-Hill, 1976.
Roger P., L'évaluation des Actifs Financiers, De Boeck, 1996.
Ross S. A., The arbitrage theory of capital asset pricing, Journal of Economic Theory, 1976,
pp. 343ā“62.
Samuelson P., Mathematics on Speculative Price, SIAM Review, Vol. 15, No. 1, 1973.
Saporta G., Probabilités, Analyse des Données et Statistique, Technip, 1990.
Sharpe W., A simpliļ¬ed model for portfolio analysis, Management Science, Vol. 9, No. 1, 1963,
pp. 277ā“93.
Sharpe W., Capital asset prices, Journal of Finance, Vol. 19, 1964, pp. 425ā“42.
Von Neumann J. and Morgenstern O., Theory of Games and Economic Behaviour, Princeton Uni-
versity Press, 1947.

CHAPTER 4
Bierwag G., Kaufmann G., and Toevs A (Eds.), Innovations in Bond Portfolio Management: Dura-
tion Analysis and Immunisation, JAI Press, 1983.
Bisière C., La Structure par Terme des Taux d'intérêt, Presses Universitaires de France, 1997.
Brennan M. and Schwartz E., A continuous time approach to the pricing of bonds, Journal of
Banking and Finance, Vol. 3, No. 2, 1979, pp. 133ā“55.
Colmant B., Delfosse V., and Esch L., Obligations, les notions financières essentielles, Larcier, 2002.
Cox J., Ingersoll J., and Ross J., A theory of the term structure of interest rates, Econometrica,
Vol. 53, No. 2, 1985, pp. 385ā“406.
Fabozzi J. F., Bond Markets, Analysis and Strategies, Prentice-Hall, 2000.
Heath D., Jarrow R., and Morton A., Bond Pricing and the Term Structure of Interest Rates: a New
Methodology, Cornell University, 1987.
Heath D., Jarrow R., and Morton A., Bond pricing and the term structure of interest rates: discrete
time approximation, Journal of Financial and Quantitative Analysis, Vol. 25, 1990, pp. 419ā“40.
Ho T. and Lee S., Term structure movement and pricing interest rate contingent claims, Journal of
Finance, Vol. 41, No. 5, 1986, pp. 1011ā“29.
Macauley F., Some Theoretical Problems Suggested by the Movements of Interest Rates, Bond Yields
and Stock Prices in the United States since 1856, New York, National Bureau of Economic
Research, 1938, pp. 44ā“53.
Merton R., Theory of rational option pricing, Bell Journal of Economics and Management Science,
Vol. 4, No. 1, 1973, pp. 141ā“83.
Ramaswamy K. and Sundaresan M., The valuation of ļ¬‚oating-rates instruments: theory and evi-
dence, Journal of Financial Economics, Vol. 17, No. 2, 1986, pp. 251ā“72.
Richard S., An arbitrage model of the term structure of interest rates, Journal of Financial Eco-
nomics, Vol. 6, No. 1, 1978, pp. 33ā“57.
Schaefer S. and Schwartz E., A two-factor model of the term structure: an approximate analytical
solution, Journal of Financial and Quantitative Analysis, Vol. 19, No. 4, 1984, pp. 413ā“24.
Vasicek O., An equilibrium characterisation of the term structure, Journal of Financial Economics,
Vol. 5, No. 2, 1977, pp. 177ā“88.

CHAPTER 5
Black F. and Scholes M., The pricing of options and corporate liabilities, Journal of Political
Economy, Vol. 81, 1973, pp. 637ā“59.
Colmant B. and Kleynen G., Gestion du risque de taux d'intérêt et instruments financiers dérivés, Kluwer, 1995.
Copeland T. E. and Wreston J. F., Financial Theory and Corporate Policy, Addison-Wesley, 1988.
Courtadon G., The pricing of options on default-free bonds, Journal of Financial and Quantitative
Analysis, Vol. 17, 1982, pp. 75ā“100.
Cox J., Ross S., and Rubinstein M., Option pricing: a simpliļ¬ed approach, Journal of Financial
Economics, No. 7, 1979, pp. 229ā“63.
Devolder P., Finance stochastique, Éditions ULB, 1993.
Garman M. and Kohlhagen S., Foreign currency option values, Journal of International Money and
Finance, No. 2, 1983, pp. 231ā“7.
Hicks A., Foreign Exchange Options, Woodhead, 1993.
Hull J. C., Options, Futures and Others Derivatives, Prentice Hall, 1997.
Krasnov M., Kisselev A., Makarenko G., and Chikin E., Mathématiques supérieures pour ingénieurs et polytechniciens, De Boeck, 1993.
Reilly F. K. and Brown K. C., Investment Analysis and Portfolio Management, South-Western,
2000.
Rubinstein M., Options for the undecided, in From Blackā“Scholes to Black Holes, Risk Magazine,
1992.
Sokolnikoff I. S. and Redheffer R. M., Mathematics of Physics and Modern Engineering, McGraw-
Hill, 1966.

CHAPTER 6
Blattberg R. and Gonedes N., A comparison of stable and Student descriptions as statistical models
for stock prices, Journal of Business, Vol. 47, 1974, pp. 244ā“80.
Fama E., Behaviour of stock market prices, Journal of Business, Vol. 38, 1965, pp. 34ā“105.
Johnson N. L. and Kotz S., Continuous Univariate Distribution, John Wiley & Sons, Inc, 1970.
Jorion P., Value at Risk, McGraw-Hill, 2001.
Pearson E. S. and Hartley H. O., Biometrika Tables for Students, Biometrika Trust, 1976.
CHAPTER 7
Abramowitz M. and Stegun A., Handbook of Mathematical Functions, Dover, 1972.
Chase Manhattan Bank NA, The Management of Financial Price Risk, Chase Manhattan Bank NA,
1995.
Chase Manhattan Bank NA, Value at Risk, its Measurement and Uses, Chase Manhattan Bank NA,
undated.
Chase Manhattan Bank NA, Value at Risk, Chase Manhattan Bank NA, 1996.
Danielsson J. and De Vries C., Beyond the Sample: Extreme Quantile and Probability Estimation,
Mimeo, Iceland University and Tinbergen Institute Rotterdam, 1997.
Danielsson J. and De Vries C., Tail index and quantile estimation with very high frequency data,
Journal of Empirical Finance, No. 4, 1997, pp. 241ā“57.
Danielsson J. and De Vries C., Value at Risk and Extreme Returns, LSE Financial Markets Group
Discussion Paper 273, London School of Economics, 1997.
Embrechts P., Klüppelberg C., and Mikosch T., Modelling Extremal Events for Insurance and Finance, Springer Verlag, 1999.
Galambos J., Advanced Probability Theory, M. Dekker, 1988, Section 6.5.
Gilchrist W. G., Statistical Modelling with Quantile Functions, Chapman & Hall/CRC, 2000.
Gnedenko B. V., On the limit distribution of the maximum term in a random series, Annals of
Mathematics, Vol. 44, 1943, pp. 423ā“53.
Gourieroux C., Modèles ARCH et applications financières, Economica, 1992.
Gumbel E. J., Statistics of Extremes, Columbia University Press, 1958.
Hendricks D., Evaluation of Value at Risk Models using Historical Data, FRBNY Policy Review,
1996, pp. 39ā“69.
Hill B. M., A simple general approach to inference about the tail of a distribution, Annals of
Statistics, Vol. 46, 1975, pp. 1163ā“73.
Hill I. D., Hill R., and Holder R. L, Fitting Johnson curves by moments (Algorithm AS 99), Applied
Statistics, Vol. 25, No. 2, 1976, pp. 180ā“9.
Jenkinson A. F., The frequency distribution of the annual maximum (or minimum) values of
meteorological elements, Quarterly Journal of the Royal Meteorological Society, Vol. 87, 1955,
pp. 145ā“58.
Johnson N. L., Systems of frequency curves generated by methods of translation, Biometrika,
Vol. 36, 1949, pp. 1498ā“575.
Longin F. M., From value at risk to stress testing: the extreme value approach, Journal of Banking
and Finance, No. 24, 2000, pp. 1097ā“130.
Longin F. M., Extreme Value Theory: Introduction and First Applications in Finance, Journal de la Société Statistique de Paris, Vol. 136, 1995, pp. 77–97.
Longin F. M., The asymptotic distribution of extreme stock market returns, Journal of Business,
No. 69, 1996, pp. 383ā“408.
McNeil A. J., Estimating the Tails of Loss Severity Distributions using Extreme Value Theory,
Mimeo, ETH Zentrum Zurich, 1996.
McNeil A. J., Extreme value theory for risk managers, in Internal Modelling and CAD II, Risk
Publications, 1999, pp. 93ā“113.
Mina J. and Yi Xiao J., Return to RiskMetrics: The Evolution of a Standard, RiskMetrics, 2001.
Morgan J. P., RiskMetrics™: Technical Document, 4th Ed., Morgan Guaranty Trust Company, 1996.
Pickands J., Statistical inference using extreme order statistics, Annals of Statistics, Vol. 45, 1975,
pp. 119ā“31.
Reiss R. D. and Thomas M., Statistical Analysis of Extreme Values, Birkhauser Verlag, 2001.
Rouvinez C., Going Greek with VAR, Risk Magazine, February 1997, pp. 57ā“65.
Schaller P., On Cash Flow Mapping in VAR Estimation, Creditanstalt-Bankverein, CA RISC-
199602237, 1996.
Stambaugh V., Value at Risk, not published, 1996.
Vose D., Quantitative Risk Analysis, John Wiley & Sons, Ltd, 1996.

CHAPTER 9
Lopez T., Délimiter le risque de portefeuille, Banque Magazine, No. 605, July–August 1999, pp. 44–6.

CHAPTER 10
Broquet C., Cobbaut R., Gillet R., and Vandenberg A., Gestion de Portefeuille, De Boeck, 1997.
Burden R. L. and Faires D. J., Numerical Analysis, Prindle, Weber & Schmidt, 1981.
Esch L., Kieffer R., and Lopez T., Value at Risk – Vers un risk management moderne, De Boeck, 1997.
Litt F. X., Analyse numérique, première partie, ULG, 1999.
Markowitz H., Mean-Variance Analysis in Portfolio Choice and Capital Markets, Basil Blackwell,
1987.
Markowitz H., Portfolio Selection: Efļ¬cient Diversiļ¬cation of Investments, Blackwell Publishers,
1991.
Markowitz H., Portfolio selection, Journal of Finance, Vol. 7, No. 1, 1952, pp. 77ā“91.
Nougier J-P., Méthodes de calcul numérique, Masson, 1993.
Vauthey P., Une approche empirique de l'optimisation de portefeuille, Eds. Universitaires Fribourg Suisse, 1990.
CHAPTER 11
Chen N. F., Roll R., and Ross S. A., Economic forces of the stock market, Journal of Business,
No. 59, 1986, pp. 383ā“403.
Dhrymes P. J., Friends I., and Gultekin N. B., A critical re-examination of the empirical evidence
on the arbitrage pricing theory, Journal of Finance, No. 39, 1984, pp. 323ā“46.
Ross S. A., The arbitrage theory of capital asset pricing, Journal of Economic Theory, 1976,
pp. 343ā“62.
CHAPTER 12
Ausubel L., The failure of competition in the credit card market, American Economic Review,
vol. 81, 1991, pp. 50ā“81.
Cooley W. W. and Lohnes P. R., Multivariate Data Analysis, John Wiley & Sons, Inc, 1971.
Damel P., La modélisation des contrats bancaires à taux révisable: une approche utilisant les corrélations canoniques, Banque et Marchés, mars-avril 1999.
Damel P., L'apport de replicating portfolio ou portefeuille répliqué en ALM: méthode contrat par contrat ou par la valeur optimale, Banque et Marchés, mars-avril 2001.
Heath D., Jarrow R., and Morton A., Bond pricing and the term structure of interest rates: a new
methodology for contingent claims valuation, Econometrica, vol. 60, 1992, pp. 77ā“105.
Hotelling H., Relations between two sets of variates, Biometrika, vol. 28, 1936, pp. 321–77.
Hull J. and White A., Pricing interest rate derivative securities, Review of Financial Studies, vols
3 & 4, 1990, pp. 573ā“92.
Hutchinson D. and Pennachi G., Measuring rents and interest rate risk in imperfect ļ¬nancial mar-
kets: the case of retail bank deposit, Journal of Financial and Quantitative Analysis, vol. 31,
1996, pp. 399ā“417.
Mardia K. V., Kent J. T., and Bibby J. M., Multivariate Analysis, Academic Press, 1979.
Sanyal A., A Continuous Time Monte Carlo Implementation of the Hull and White One Factor Model
and the Pricing of Core Deposit, unpublished manuscript, December 1997.
Selvaggio R., Using the OAS methodology to value and hedge commercial bank retail demand
deposit premiums, The Handbook of Asset/Liability Management, Edited by F. J. Fabozzi &
A. Konishi, McGraw-Hill, 1996.
Smithson C., A Lego approach to ļ¬nancial engineering in the Handbook of Currency and Interest
Rate Risk Management, Edited by R. Schwartz & C. W. Smith Jr., New York Institute of Finance,
1990.
Tatsuoka M. M., Multivariate Analysis, John Wiley & Sons, Ltd, 1971.
Wilson T., Optimal value: portfolio theory, Balance Sheet, Vol. 3, No. 3, Autumn 1994.
APPENDIX 1
Bair J., Mathématiques générales, De Boeck, 1990.
Esch L., Mathématique pour économistes et gestionnaires (2nd Edition), De Boeck, 1999.
Guerrien B., Algèbre linéaire pour économistes, Economica, 1982.
Ortega J. M., Matrix Theory, Plenum, 1987.
Weber J. E., Mathematical Analysis (Business and Economic Applications), Harper and Row, 1982.
APPENDIX 2
Baxter M. and Rennie A., Financial Calculus, Cambridge University Press, 1996.
Feller W., An Introduction to Probability Theory and its Applications (2 volumes), John Wiley &
Sons, Inc, 1968.
Grimmett G. and Stirzaker D., Probability and Random Processes, Oxford University Press, 1992.
Kendall M. and Stuart A., The Advanced Theory of Statistics (3 volumes), Grifļ¬n, 1977.
Loeve M., Probability Theory (2 volumes), Springer-Verlag, 1977.
Roger P., Les outils de la modélisation financière, Presses Universitaires de France, 1991.
Ross S. M., Initiation aux probabilités, Presses Polytechniques et Universitaires Romandes, 1994.
APPENDIX 3
Ansion G., Économétrie pour l'entreprise, Eyrolles, 1988.
Dagnelie P., Théorie et méthodes statistiques (2 volumes), Presses Agronomiques de Gembloux, 1975.
Johnston J., Econometric Methods, McGraw-Hill, 1972.
Justens D., Statistique pour décideurs, De Boeck, 1988.
Kendall M. and Stuart A., The Advanced Theory of Statistics (3 volumes), Griffin, 1977.
APPENDIX 4
Fisher R. A. and Tippett L. H. C., Limiting forms of the frequency distribution of the largest or smallest member of a sample, Proceedings of the Cambridge Philosophical Society, Vol. 24, 1928, pp. 180–90.
Gnedenko B. V., On the limit distribution for the maximum term of a random series, Annals of Mathematics, Vol. 44, 1943, pp. 423–53.
Jenkinson A. F., The frequency distribution of the annual maximum (or minimum) values of meteorological elements, Quarterly Journal of the Royal Meteorological Society, Vol. 87, 1955, pp. 145–58.