ε1 = σ1 δ1 + µ1
ε2 = σ2 ρ δ1 + σ2 √(1 − ρ²) δ2 + µ2

We use (t) in the upper index when it no longer represents time but instead constitutes a numbering for various
estimations of a random variable.
220 Asset and Risk Management

where we have introduced the correlation coefficient

ρ = corr(ε1, ε2) = σ12 / (σ1 σ2)

to obtain the variables ε1 and ε2 with the desired properties.26
The definition of the εk variables can also be written as:

( ε1 )   [ σ1           0          ] ( δ1 )   ( µ1 )
( ε2 ) = [ σ2 ρ   σ2 √(1 − ρ²) ] ( δ2 ) + ( µ2 )

The matrix of the δk coefficients, which we will call L, is such that:

L Lᵗ = [ σ1           0          ] [ σ1        σ2 ρ       ]
       [ σ2 ρ   σ2 √(1 − ρ²) ] [ 0     σ2 √(1 − ρ²) ]

     = [ σ1²           σ1 σ2 ρ              ]
       [ σ1 σ2 ρ   σ2² ρ² + σ2² (1 − ρ²) ]

     = [ σ1²    σ12 ]
       [ σ12    σ2² ]

This is the variance–covariance matrix for the εk variables, and the matrix L
of the coefficients is therefore deduced by the Choleski factorisation.27
In the general case of n risk factors, the process is generalised: the εk variables
(k = 1, . . ., n) may be written as a vector ε that is defined on the basis of the independent
δk variables (k = 1, . . ., n), with zero expectations and variances equal to 1, through the relation:

ε = Lδ + µ

Here, L is the Choleski factorisation matrix of the variance–covariance matrix of the
εk variables.
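As a quick numerical check (the two-factor parameter values below are illustrative assumptions, not taken from the text), the Cholesky factor of a 2 × 2 variance–covariance matrix reproduces exactly the closed form of L derived above:

```python
import numpy as np

# Illustrative parameters (assumed values, not from the text).
sigma1, sigma2, rho = 0.02, 0.03, 0.5
V = np.array([[sigma1**2,          sigma1*sigma2*rho],
              [sigma1*sigma2*rho,  sigma2**2        ]])

# Cholesky factor: lower-triangular L with L @ L.T equal to V.
L = np.linalg.cholesky(V)

# The closed form derived above for the two-factor case.
L_explicit = np.array([[sigma1,        0.0                         ],
                       [sigma2 * rho,  sigma2 * np.sqrt(1 - rho**2)]])
```

Any valid variance–covariance matrix can be factorised this way, which is what makes the construction usable for an arbitrary number of correlated risk factors.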
Once the εk variables (k = 1, . . ., n) are generated:

εk^(t)   (k = 1, . . . , n; t = 1, . . . , M)

the future values of the risk factors are determined on the basis of the values currently
observed X1(0), . . ., Xn(0):

Xk^(t)(1) = Xk(0) · (1 + εk^(t))   (k = 1, . . . , n; t = 1, . . . , M)

From this, the future price of the asset can be easily deduced:

p^(t)(1) = f(X1^(t)(1), X2^(t)(1), . . . , Xn^(t)(1))   (t = 1, . . . , M)
26 It is not generally true that such first-degree expressions give random variables arising from the same distribution as
that of δ. It is, however, true for multinormal distributions.
27 This factorisation can only be made under certain conditions. Among other things, it is essential for the number T of
observations to exceed the number n of risk factors. Refer to Appendix 1 for further details on this method.
VaR Estimation Techniques 221

so that, by difference with

p(0) = f(X1(0), X2(0), . . . , Xn(0)),

the variation in value can be estimated: Δp^(t) = p^(t)(1) − p(0) (t = 1, . . . , M).
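For a single asset, the loop just described can be sketched in a few lines (the factor parameters, current values and the product-form valuation model f are illustrative assumptions, not from the text):

```python
import numpy as np

rng = np.random.default_rng(42)
M = 10_000                           # number of simulated scenarios

# Illustrative two-factor set-up (assumed values).
mu = np.array([0.001, 0.0005])       # expectations of the relative variations
L = np.array([[0.02,  0.0],          # Cholesky factor of their covariance matrix
              [0.015, 0.026]])
X0 = np.array([100.0, 1.05])         # currently observed risk-factor values

def f(x1, x2):
    """Hypothetical valuation model p = f(X1, X2)."""
    return x1 * x2

delta = rng.standard_normal((M, 2))  # independent delta_k draws, N(0, 1)
eps = delta @ L.T + mu               # eps = L delta + mu, one row per scenario
X1 = X0 * (1 + eps)                  # X_k^(t)(1) = X_k(0) (1 + eps_k^(t))
dp = f(X1[:, 0], X1[:, 1]) - f(*X0)  # Delta p^(t) = p^(t)(1) - p(0)
```

The M values in `dp` then form the simulated distribution of the variation in value, from which a quantile can be read off.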

It is possible to simplify the approach somewhat for some specific risk factors for which
a stochastic process of evolution is difficult to obtain. Thus, for the volatility of an option
(vega risk), one can apply the methodology presented above for different values of this
parameter σR and then group together all the results of the simulations carried out for
these values.

Portfolio case
Let us deal finally with the general case in which one wishes to determine the VaR for a
portfolio consisting of N assets in respective numbers28 n1, . . ., nN, the value of each of
these assets being expressed on the basis of several different risk factors X1, X2, . . ., Xn. The
value pP of this portfolio is expressed according to the pj values (j = 1, . . ., N) for the
various assets through the relation pP = Σ_{j=1}^{N} nj pj.
Even where VaR is calculated on the basis of a normal distribution hypothesis using
the expectation and standard deviation (and the Monte Carlo method, if it uses theoretical
models, does not rely on this distributional hypothesis), we then have to use a variance–
covariance matrix again (that for all the couples of securities in the portfolio29), as it has
been seen in Section 3.1.1 that:

var(ΔpP) = Σ_{i=1}^{N} Σ_{j=1}^{N} ni nj cov(Δpi, Δpj)

It is preferable here to determine the distribution of the portfolio loss directly on the basis
of the effect of variations in the various risk factors on the value of the portfolio itself.

Synthesis
This approach consists of the following stages:

1. A group of valuation models is chosen for the various risk factors or the assets in
the portfolio (choice to be validated by suitable statistical tests). This may refer to
normal or log-normal laws or more complex models for certain derived products; or
more generally, it may relate to any distribution that has been adjusted on the basis
of historical observations. One can also consider the relations that express the way in
which the prices of various assets behave according to common risk factors, like a
general market index (for Sharpe's simple index model for equities), sectorial indices
(for multiple-index models), interest rates or exchange rates.

28 Remember (see Section 3.1.1) that in cases where prices are replaced by returns, the numbers nj of assets in the portfolio
must be replaced by the proportions that correspond to the respective stock-exchange capitalisations for the various securities.
29 This variance–covariance matrix is much larger (it may concern hundreds of different assets) than the matrix mentioned
in the previous paragraph relating to just a few explicative risk factors for an isolated asset.

2. On the basis of historical periods for the equities and risk factors present, the following
are estimated:
• the distribution of the various risk factors, as well as the parameters associated with
them: the expectations and the variance–covariance matrix;
• the parameters for the relations30 that link the prices of the assets with the risk
factors. Generally, these are estimations made using regressive techniques.
3. For the full range of risk factors, the combined use of the probability models obtained
in (1) and the distribution parameters determined in (2) allows construction, using the
Monte Carlo method, of a large number M (5000–10 000) of pseudo-random samples
extracted from each of the risk-factor variation distributions in question:

εk^(t)   (k = 1, . . . , n; t = 1, . . . , M)

The future values of the risk factors are then obtained from the values currently
observed X1(0), . . ., Xn(0):

Xk^(t)(1) = Xk(0) · (1 + εk^(t))   (k = 1, . . . , n; t = 1, . . . , M)

Here we have as many evaluation scenarios as there are prices. It is obvious that the
simulations must be carried out on the basis of the parameters that describe each risk
factor individually, but also on the basis of the correlations that link them to each other
(Choleski factorisation).
4. The results of these simulations are then introduced into the relations for the assets,
according to the common risk factors:

pj = fj(X1, X2, . . . , Xn)   (j = 1, . . . , N)

These relations may be very simple (such as risk factor equivalent to asset) or much
more complex (such as in the case of optional assets). They allow the future price
distributions to be simulated for the various assets:
pj^(t)(1) = fj(X1^(t)(1), X2^(t)(1), . . . , Xn^(t)(1))   (j = 1, . . . , N; t = 1, . . . , M)

The calculation of the portfolio value according to the value of its components for
each of the M simulations
pP^(t)(1) = Σ_{j=1}^{N} nj pj^(t)(1)   (t = 1, . . . , M)

assumes that the distributions of the various prices can simply be added together to give
the portfolio price distribution. It is known, however, that this is only possible if the
joint price distribution is multi-normal. Because the Monte Carlo method aims to avoid
this demanding hypothesis, the components of the portfolio can thus be aggregated by
a new simulation, taking account this time of the correlation structure between the
various assets in the portfolio. This is much more laborious.

30 These are often stochastic processes.
5. The confrontation with the current value of the portfolio,

pP(0) = Σ_{j=1}^{N} nj pj(0),   pj(0) = fj(X1(0), X2(0), . . . , Xn(0))   (j = 1, . . . , N)

allows the distribution of the variation in value of the portfolio to be estimated:

Δp^(t) = pP^(t)(1) − pP(0)   (t = 1, . . . , M)

After the M results have been ordered and grouped where necessary, the value of the
VaR parameter can be easily deduced.
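Stages 3 to 5 can be sketched end to end for a small portfolio. In this sketch the factor parameters and the identity pricing relations pj = Xj are assumptions made for brevity; the holdings and current values echo the numerical example given later in the chapter:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 5_000                                  # number of scenarios (5000-10 000 in the text)
n_assets = np.array([3.0, 2.0, 5.0])       # holdings n_1, ..., n_N
X0 = np.array([14800.0, 25825.0, 1530.0])  # current risk-factor values X_k(0)
mu = np.zeros(3)                           # expected relative variations (assumed)
L = np.diag([0.03, 0.025, 0.05])           # Cholesky factor (assumed, uncorrelated case)

eps = rng.standard_normal((M, 3)) @ L.T + mu   # eps = L delta + mu
X1 = X0 * (1 + eps)                            # simulated X_k^(t)(1)

# Identity pricing relations p_j = f_j(X) = X_j keep the sketch short.
pP0 = n_assets @ X0                        # current portfolio value p_P(0)
dp = X1 @ n_assets - pP0                   # Delta p^(t) = p_P^(t)(1) - p_P(0)

var_95 = np.quantile(dp, 0.05)             # VaR at 95%: 5% quantile of the variations
```

Reading the VaR as an empirical quantile of `dp` is exactly the "order and group the M results" step: no distributional hypothesis is imposed on the simulated variations.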

Note 1
Another method consists of using the results of the simulations to construct not the dis-
tribution of the portfolio variation in value as above, but the expectation and variance
of that variation. The VaR is then calculated more simply, using the formula VaRq =
E(Δp) − zq · σ(Δp). This, alongside the calculation facilities it offers, presents the major
disadvantage of relying on the normality hypothesis that the Monte Carlo method attempts to avoid.
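This shortcut amounts to two lines of arithmetic (the moments below are illustrative assumed values, not results from the text); `NormalDist` supplies the quantile zq:

```python
from statistics import NormalDist

# Illustrative moments of the simulated variation in value (assumed values).
mean_dp, sigma_dp = 120.0, 4500.0

q = 0.95
z_q = NormalDist().inv_cdf(q)      # normal quantile, about 1.645 for q = 0.95
var_q = mean_dp - z_q * sigma_dp   # VaR_q = E(dp) - z_q * sigma(dp)
```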

Note 2
It may appear surprising to have to carry out so many simulations (5000 to 10 000),
when the statistical limit theorems (law of large numbers, central limit theorem, etc.) are
generally applicable to a very much smaller number of observations. It must be borne
in mind that what we are interested in here is not a 'globalising' notion of a law of
probability, such as expectation or variance for example, but an 'extreme' notion, which
corresponds to a phenomenon that occurs only rarely. Much more significant numbers
are therefore necessary, in the same way that estimating a proportion p by confidence
interval with a fixed relative error will require many more observations when
p = 0.01 than when p = 0.4.31
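The order of magnitude can be checked with the usual normal-approximation sample-size formula, n ≈ z²(1 − p)/(p ε²) for a relative half-width ε (a standard statistical result, not a formula from the text):

```python
from math import ceil
from statistics import NormalDist

def n_required(p, rel_err, conf=0.95):
    """Observations needed so that the confidence-interval half-width for a
    proportion p is rel_err * p (normal approximation, standard formula)."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return ceil(z**2 * (1 - p) / (p * rel_err**2))

n_rare = n_required(0.01, 0.10)      # rare event, p = 0.01
n_common = n_required(0.40, 0.10)    # common event, p = 0.4
```

With a 10 % relative error at 95 % confidence, the rare event demands tens of thousands of observations where the common one needs only a few hundred.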

Note 3
Just like the Monte Carlo simulation, stress testing, which is covered in Section 2.1.2, is
based on the scenario concept. In this method, however, the distribution parameters are
not estimated on the basis of historical periods and no future prices are generated through
simulation. We simply envisage specific potential developments32 of the prices, and
the portfolio value is reassessed in this context.
This method of working will of course help in understanding phenomena not found in the
historical periods (through a wide experience of financial markets, for example). However,
it does not allow the distribution of the loss to be determined, as the scenarios are chosen
on the one hand, and the correlation structure is practically ignored by definition on the
other. We are therefore looking at a useful (maybe even a necessary) complement rather
than a competing or substitute method.

31 We will deal with the estimation of these extreme values in greater detail in Section 7.4.2.
32 The catastrophe scenarios are part of these developments.

7.4 Historical simulation

The third method of estimating VaR is based on the hypothesis of stationarity: the joint
(theoretical and unknown) distribution of the price variations for the different risk factors
over the horizon for which the VaR is being estimated is properly estimated by observing
the variations in these prices during the available history. In this case, the quality of the
parameter estimation (expectations, variances, covariances) for this distribution is also
conditional on this stationarity hypothesis.33
The method of estimating VaR by historical simulation is the method used by Chase
Manhattan34 with the Charisma and Risk$ systems.

7.4.1 Basic methodology
The principle applied is that of estimating the distribution of the variations in the risk factors
through the distribution observed on the basis of historical periods. Assimilating, on the one
hand, the frequency of variations in value less than the VaR parameter value and, on the
other hand, the corresponding probability is a direct consequence of the law of
large numbers.

Risk factor case
Let us deal first of all with the very simple case in which the asset for which the VaR
has to be determined is the risk factor X itself (the rate of an equity, for example). It is
therefore assumed that the distribution of the random variable

Δ = (X(1) − X(0)) / X(0)

is properly represented by the observations:

Δ(t) = (X(t) − X(t − 1)) / X(t − 1)   (t = −T + 1, . . . , −1, 0)

The relation X(1) − X(0) = Δ · X(0) allows the future value of the risk factor to be
estimated through:

X^(t)(1) = X(0) + Δ(t) · X(0)   (t = −T + 1, . . . , −1, 0)

The distribution of the value variation is therefore assessed as:

Δp^(t) = X^(t)(1) − X(0) = Δ(t) · X(0)   (t = −T + 1, . . . , −1, 0)

Isolated asset case
Let us now consider a case in which the asset for which the VaR is required to be
determined depends on a number of risk factors X1 , X2 , . . . , Xn . The value of this asset
is therefore expressed by a relation of the type p = f (X1 , X2 , . . . , Xn ).
33 We deal very briefly in Section 7.4.2 with a methodology that can be applied in cases of non-stationarity (due for
example to the presence of aberrant observations). It is based on historical observations but gives models of the distribution tails.
34 Chase Manhattan Bank NA, Value at Risk: its Measurement and Uses, Chase Manhattan Bank, n/d. Chase Manhattan
Bank NA, The Management of Financial Price Risk, Chase Manhattan Bank NA, 1996. Stambaugh F., Value at Risk,
S.Ed., 1996.

We have observations available for these risk factors, for which the relative variations
can be determined:
Δk(t) = (Xk(t) − Xk(t − 1)) / Xk(t − 1)   (k = 1, . . . , n; t = −T + 1, . . . , −1, 0)

On the basis of the values currently observed X1 (0), . . ., Xn (0) for the various risk factors,
the distribution of the future values is estimated by:
Xk^(t)(1) = Xk(0) · (1 + Δk(t))   (k = 1, . . . , n; t = −T + 1, . . . , −1, 0)

From that, the estimation of the future price distribution of the asset in question can be
deduced easily:

p^(t)(1) = f(X1^(t)(1), X2^(t)(1), . . . , Xn^(t)(1))   (t = −T + 1, . . . , −1, 0)

Thus, by difference from

p(0) = f(X1(0), X2(0), . . . , Xn(0)),

the distribution of the variation in value is estimated as:

Δp^(t) = p^(t)(1) − p(0)   (t = −T + 1, . . . , −1, 0)

Portfolio case
Here, the reasoning is very similar to that followed in the Monte Carlo simulation. To
determine the VaR of a portfolio consisting of N assets in respective numbers35 n1, . . .,
nN, the value of each asset is expressed on the basis of a number of risk factors X1, X2,
. . ., Xn. The value pP of the portfolio is expressed according to the prices pj (j = 1, . . . , N)
of the various assets by:

pP = Σ_{j=1}^{N} nj pj

Even in cases where VaR is calculated on the basis of a normal distribution by the
relation VaRq = E(Δp) − zq · σ(Δp) (and the historical simulation method is independent of
this distributional hypothesis), a problem is encountered in the case of a portfolio, as the
variance of value within that portfolio depends on the covariances between the prices of
the various assets:

var(ΔpP) = Σ_{i=1}^{N} Σ_{j=1}^{N} ni nj cov(Δpi, Δpj)

Therefore, we also directly determine the distribution of the variation in value for
the portfolio on the basis of the effect exerted by the variations in the various risk factors

35 Remember (see Section 3.4) that in cases where prices are replaced by returns, the numbers nj of assets in the portfolio
must be replaced by the proportions that correspond to the respective stock-exchange capitalisations for the various securities.

on the value of the portfolio itself. The impact will of course be determined in the same
way as just shown for an isolated asset.

Synthesis
The different stages can therefore be isolated as follows.

1. The various risk factors X1, X2, . . ., Xn that determine the value of the various
assets in the portfolio are identified: indices, security rates, interest rates, exchange rates etc.
2. The methodology shown above is applied to each risk factor. Therefore, on the basis
of observations of the various risk factors at the times −T, −T + 1, . . ., −1, 0, the
corresponding relative variations Δk(t) are deduced:

Δk(t) = (Xk(t) − Xk(t − 1)) / Xk(t − 1)   (k = 1, . . . , n; t = −T + 1, . . . , −1, 0)

The current observations X1 (0), . . . , Xn (0) for these risk factors allow the distribution
of the future values to be obtained:

Xk^(t)(1) = Xk(0) · (1 + Δk(t))   (k = 1, . . . , n; t = −T + 1, . . . , −1, 0)

3. The prices of the various assets are expressed on the basis of the risk factors through the relations:

pj = fj(X1, X2, . . . , Xn)   (j = 1, . . . , N)

These relations may be very simple (such as risk factor equivalent to asset) or much
more complex (for example, the Black and Scholes formula for valuing an option on the
basis of risk factors such as 'underlying price' and 'risk-free security return'). They allow
the distributions of the future prices of the various assets to be determined:

pj^(t)(1) = fj(X1^(t)(1), X2^(t)(1), . . . , Xn^(t)(1))   (j = 1, . . . , N; t = −T + 1, . . . , −1, 0)

The future value distribution of the portfolio can therefore be determined:

pP^(t)(1) = Σ_{j=1}^{N} nj pj^(t)(1)   (t = −T + 1, . . . , −1, 0)

4. In addition, we have the current value of the portfolio:

pP(0) = Σ_{j=1}^{N} nj pj(0),   pj(0) = fj(X1(0), X2(0), . . . , Xn(0))   (j = 1, . . . , N)

From there, the variation in value of the portfolio can be easily deduced:

Δp^(t) = pP^(t)(1) − pP(0)   (t = −T + 1, . . . , −1, 0)

Note 1
Instead of determining the VaR directly from the estimated distribution of the loss, this
distribution can be used to calculate the expectation and variance of the variable and thus to
evaluate the VaR through the relation

VaRq = E(Δp) − zq · σ(Δp)

We would not of course recommend this method, which is based on the normality hypoth-
esis, as the historical simulation method (in the same way as the Monte Carlo simulation
method) is independent of this hypothesis.

Note 2
It should be noted that in the tth estimation of the future value of the portfolio, we use
the estimations of the future values of the various assets relative to period t, which are
themselves evaluated on the basis of the variations Δk(t) measured at that time. The VaR
for the portfolio is therefore estimated according to the variations in the portfolio's value
during the period of observation, and not determined for each asset on the basis of the least
favourable combination of risk factors followed by aggregation of the least favourable
VaR parameters. That way of working would amount to considering that
the unfavourable variations in the assets making up the portfolio are all produced at the
same time; this is a 'catastrophe scenario' that takes no account of the correlation
between the various assets and therefore of the effects of diversification. The method
shown above takes this structure into consideration by using concomitant observations of
the risk factors and aggregating concomitant values for the constituent assets.

Note 3
The methodology as presented above uses the valuation models pj = fj(X1, X2, . . . , Xn)
(j = 1, . . . , N) to estimate the future value of the various prices (and thus the future value
of the portfolio) on the basis of estimations of the future values of the elementary risk
factors X1, X2, . . ., Xn. Another method is to use the observations of these elementary
risk factors to put together a database of the prices pj of the various assets using
these valuation models (in practice, by using suitable pricing software). These assets then
become risk factors, their future value being estimated using the method presented, and
the portfolio can be re-evaluated on the basis of these estimations.
Because, for an efficient market, the prices reflect all the information available on that
market, this method of working is economically coherent; the individual items of informa-
tion supply the valuation models (that is, they make up the pricer input) and the database
used by the VaR estimation technique is that of the prices deduced therefrom. This tech-
nique is all the more homogeneous as the concept of price is its common denominator.
This approach is therefore preferable and easier to implement; it will be used in the
practical applications in Chapter 8.

Finally, we would point out that this kind of adaptation can only be partly envisaged
for the Monte Carlo simulation. In this method, in fact, the stochastic processes used
for the simulations are essentially suited to representing developments in the elementary risks.
Thus, for an option, it is the underlying price values that will be simulated; the
price of the option is then reconstituted by a valuation model such as Black and Scholes.
Of course, this means that the Monte Carlo method will have to have access to very
large databases from the beginning (all the elementary risk factors, multiplied by the
number of possible maturity dates) and to carry out many different simulations for all
the components.

Note 4
In its Risk$ system, Chase Manhattan considers a VaR calculation horizon of one day
and daily observations relating to a historical period of T = 100 days.

Let us consider a portfolio36 consisting of three equities held in respective numbers n1 = 3,
n2 = 2 and n3 = 5, for which the rates have been observed at 11 separate times (from
−10 to 0; we therefore have T = 10, see Table 7.1).
The relative variations Δk(t) are calculated for t = −9, . . ., 0, for example:

Δ1(−9) = (13 150 − 12 800) / 12 800 = 0.0273

The values for the next period can be deduced therefrom, on the basis of the observations
for t = 0 (see Table 7.2), for example:

X1^(−9)(1) = 14 800 × (1 + 0.0273) = 15 204.7

The value of the portfolio is then determined at t = 0:

pP(0) = 3 × 14 800 + 2 × 25 825 + 5 × 1530 = 103 700

Table 7.1 Historical rates

t      C1(t)    C2(t)    C3(t)
−10    12 800   23 475   1238
−9     13 150   23 150   1236
−8     12 150   21 875   1168
−7     11 100   21 400   1234
−6     11 725   22 100   1310
−5     11 950   21 650   1262
−4     12 025   22 650   1242
−3     12 325   21 000   1170
−2     13 675   23 625   1260
−1     14 300   24 150   1342
0      14 800   25 825   1530

36 A detailed, real-life application of this historical simulation technique for different types of financial products is shown
in Chapter 8.
Table 7.2 Historical returns and portfolio estimations

       Variations                         Estimations
t      Δ1(t)     Δ2(t)     Δ3(t)      X1^(t)(1)   X2^(t)(1)   X3^(t)(1)
−9      0.0273   −0.0138   −0.0016    15 204.7    25 467.5    1527.5
−8     −0.0760   −0.0551   −0.0550    13 674.5    24 402.7    1445.8
−7     −0.0864   −0.0217    0.0565    13 521.0    25 264.2    1616.5
−6      0.0563    0.0327    0.0616    15 633.3    26 669.7    1624.2
−5      0.0192   −0.0204   −0.0366    15 084.0    25 299.2    1473.9
−4      0.0063    0.0462   −0.0158    14 892.9    27 017.8    1505.8
−3      0.0249   −0.0728   −0.0580    15 169.2    23 943.7    1441.3
−2      0.1095    0.1250    0.0769    16 421.1    29 053.1    1647.7
−1      0.0457    0.0222    0.0651    15 467.4    26 398.9    1629.6
0       0.0350    0.0694    0.1401    15 317.5    27 616.2    1744.3

The future value estimations for the portfolio are, for example:

pP^(−9)(1) = 3 × 15 204.7 + 2 × 25 467.5 + 5 × 1527.5 = 104 186.6

The variations in value are then (see Table 7.3), for example:

Δp^(−9) = 104 186.6 − 103 700 = 486.6

Classifying the value variations in increasing order (from the lowest, −6642.0, to the
highest, 11 908.0) in Table 7.4 (classification) shows the estimated distribution of the
future value variation, which can be represented by the distribution function shown in
Figure 7.5.
It is readily apparent from this that VaR0.95 = −6642.0. Note that if the VaR is calcu-
lated separately for the three equities from the same historical periods, we find the values
shown in Table 7.5.
These values should not be aggregated with the weights nj, as this would give VaR =
−3 × 1279.0 − 2 × 1881.3 − 5 × 88.7 = −8611.9 for the portfolio, a value that does not
take account of the correlation structure between the securities and thus overestimates the
loss suffered.
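The whole example can be replayed from the data of Table 7.1, with the holdings n = (3, 2, 5) actually used in the portfolio value above; a short sketch:

```python
import numpy as np

# Rates C1, C2, C3 of Table 7.1, for t = -10, ..., 0.
C = np.array([[12800, 23475, 1238], [13150, 23150, 1236],
              [12150, 21875, 1168], [11100, 21400, 1234],
              [11725, 22100, 1310], [11950, 21650, 1262],
              [12025, 22650, 1242], [12325, 21000, 1170],
              [13675, 23625, 1260], [14300, 24150, 1342],
              [14800, 25825, 1530]], dtype=float)
n = np.array([3.0, 2.0, 5.0])          # holdings n_1, n_2, n_3

delta = C[1:] / C[:-1] - 1             # relative variations, t = -9, ..., 0 (Table 7.2)
X1 = C[-1] * (1 + delta)               # estimated future values X_k^(t)(1)
pP0 = n @ C[-1]                        # current portfolio value: 103 700
dp = X1 @ n - pP0                      # variations in value of Table 7.3

var_95 = np.sort(dp)[0]                # worst of the T = 10 scenarios, about -6642
```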

Table 7.3 Future values and losses incurred

t      pP^(t)(1)    Δp^(t)
−9     104 186.6      486.6
−8      97 058.0    −6642.0
−7      99 173.7    −4526.3
−6     108 360.6     4660.6
−5     103 220.0     −480.0
−4     106 243.1     2543.1
−3     100 601.6    −3098.4
−2     115 608.0   11 908.0
−1     107 374.9     3674.9
0      109 906.5     6206.5



Figure 7.5 Estimated distribution of value variation

Table 7.4 Classification


Table 7.5 VaR per asset

(1)          (2)          (3)
−1279.0      −1881.3      −88.7

7.4.2 The contribution of extreme value theory
One criticism that can be made of the historical simulation method (see Section 7.5) is
that a small number of outliers, whether these are true exceptional observations or whether
they are caused by errors in measurement or processing etc., will heavily influence the
value of the VaR for a long period (equal to the duration of the historical periods). Extreme
value theory37 can assist in resolving this problem.

Extreme value theorem
The extreme value theorem states38 that for a series of independent and identically dis-
tributed random variables X1, X2, . . ., Xn, for which two series of coefficients αn > 0
and βn (n = 1, 2, . . .) exist such that the limit (for n → ∞) of the random variable

Yn = (max(X1, . . . , Xn) − βn) / αn

is not degenerate, this variable follows a law of probability that depends on a real
parameter τ and is defined by the distribution function:

          0                          if y ≤ 1/τ, when τ < 0
F_Y(y) =  exp(−(1 − τy)^(1/τ))       if y > 1/τ when τ < 0; y real when τ = 0; y < 1/τ when τ > 0
          1                          if y ≥ 1/τ, when τ > 0

37 Appendix 4 sets out the theoretical bases for this method in brief.
38 Gnedenko B. V., On the limit distribution for the maximum term in a random series, Annals of Mathematics, Vol. 44,
1943, pp. 423–53. Galambos J., Advanced Probability Theory, M. Dekker, 1988, Section 6.5. Jenkinson A. F., The frequency
distribution of the annual maximum (or minimum) value of meteorological elements, Quarterly Journal of the Royal
Meteorology Society, Vol. 87, 1955, pp. 145–58.
This holds independently of the common distribution of the Xi variables.39 The probability law
involved is the generalised Pareto distribution.
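The distribution function stated above can be transcribed directly; this sketch follows the three-branch form just given, with the exp branch applying on the indicated domains:

```python
from math import exp

def F_Y(y, tau):
    """Distribution function of the limit law stated above.
    tau < 0: fat tail; tau = 0: thin (exponential) tail; tau > 0: bounded support."""
    if tau == 0:
        return exp(-exp(-y))          # limit case: (1 - tau*y)^(1/tau) -> e^(-y)
    if tau < 0 and y <= 1 / tau:
        return 0.0
    if tau > 0 and y >= 1 / tau:
        return 1.0
    return exp(-(1 - tau * y) ** (1 / tau))
```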
The numbers αn, βn and τ are interpreted respectively as a dispersion parameter, a loca-
tion parameter and a tail parameter (see Figure 7.6). Thus, τ < 0 corresponds to Xi values
with a fat-tailed distribution (decrease slower than exponential), τ = 0 to a thin-tailed distri-
bution (exponential decrease) and τ > 0 to a zero-tail distribution (bounded support).

Estimation of parameters by regression
The methods40 that allow the αn, βn and τ parameters to be estimated by regression use
the fact that the random variable Yn (or more precisely, its distribution) can in practice
be estimated by sampling on a historical basis: N periods, each of a duration of n, will
supply N values of the loss variable in question.





Figure 7.6 Distribution of extremes

39 When τ = 0, (1 − τy)^(1/τ) is interpreted as being equal to its limit e^(−y).
40 Gumbel E. J., Statistics of Extremes, Columbia University Press, 1958. Longin F. M., Extreme value theory:
presentation and first applications in finance, Journal de la Société Statistique de Paris, Vol. 136, 1995, pp. 77–97.
Longin F. M., The asymptotic distribution of extreme stock market returns, Journal of Business, No. 69, 1996, pp. 383–408.
Longin F. M., From value at risk to stress testing: the extreme value approach, Journal of Banking and Finance, No. 24,
2000, pp. 1097–130.

We express the successive observations of the variation-in-value variable as x1, x2,
. . ., xNn, and the extreme value observed in the ith 'section' of observations as yi,n (i =
1, . . . , N):

y1,n = max(x1, . . . , xn)
y2,n = max(xn+1, . . . , x2n)
. . .
yN,n = max(x(N−1)n+1, . . . , xNn)

Let us arrange these observations in order of increasing magnitude, expressing the values
thus arranged as yi (i = 1, . . . , N):

y1 ≤ y2 ≤ . . . ≤ yN

It is therefore possible to demonstrate that, if the extremes observed are in fact a rep-
resentative sample of the law of probability given by the extreme value theorem, we have:

F_Y((yi − βn)/αn) = i/(N + 1) + ui   (i = 1, . . . , N)

where the ui values correspond to a normal zero-expectation law. When this relation is
transformed by taking the iterated logarithm (logarithm of logarithm) of the two ex-
pressions, we obtain:

−ln[−ln(i/(N + 1))] = −ln[−ln F_Y((yi − βn)/αn)] + εi
                    = −ln[−ln exp(−(1 − τ(yi − βn)/αn)^(1/τ))] + εi
                    = −(1/τ) ln[1 − τ(yi − βn)/αn] + εi
                    = (1/τ) {ln αn − ln[αn − τ(yi − βn)]} + εi

This relation constitutes a nonlinear regression equation for the three parameters αn, βn
and τ.
Note that when we are dealing with a distribution of extremes with a thin
tail (τ parameter not significantly different from 0), we have F_Y(y) = exp[−exp(−y)],
and another regression relationship has to be used:

−ln[−ln(i/(N + 1))] = −ln[−ln F_Y((yi − βn)/αn)] + εi
                    = −ln[−ln exp(−exp(−(yi − βn)/αn))] + εi
                    = (yi − βn)/αn + εi
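In this thin-tail case the relation is linear in yi, so αn and βn can be estimated by ordinary least squares on the transformed plotting positions. A sketch on simulated Gumbel extremes (the true values αn = 2, βn = 5, the seed and the sample size are assumptions of the experiment, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha_n, beta_n, N = 2.0, 5.0, 2000    # true parameters and number of extremes (assumed)

# Simulated extremes: beta_n + alpha_n * standard Gumbel variates, then ordered.
u = rng.uniform(size=N)
y = np.sort(beta_n - alpha_n * np.log(-np.log(u)))

# Iterated-logarithm transform of the plotting positions i/(N + 1).
i = np.arange(1, N + 1)
z = -np.log(-np.log(i / (N + 1)))

# z = (y_i - beta_n)/alpha_n + error: a straight line in y_i.
slope, intercept = np.polyfit(y, z, 1)
alpha_hat, beta_hat = 1 / slope, -intercept / slope
```

The estimates recover the dispersion and location parameters to within a few percent at this sample size.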
Estimating parameters using the semi-parametric method
As well as the regression technique for estimating the αn, βn and τ parameters, there are
semi-parametric methods,41 specifically indicated for estimating the tail parameter τ. They are,
however, time consuming in terms of calculation, as an intermediate parameter has to be
estimated using a Monte Carlo-type method. We show the main aspects here.
The ith observation is termed x(i) after the observations are arranged in increasing order:
x(1) ≤ . . . ≤ x(n). The first stage consists of setting a limit M so that only the M highest
observations from the sample (of size n) will be of interest in shaping the tail distribution.
It can be shown42 that an estimator (termed Hill's estimator) for the tail parameter is given by:

τ̂ = (1/M) Σ_{k=1}^{M} [ln x(n−k+1) − ln x(n−M)]

The choice of M is not easy to make, as the quality of Hill's estimator is quite sensitive
to the choice of this threshold. If the threshold is fixed too low, the distribution tail will
be too rich and the estimator will be biased downwards; if it is fixed too high, there will
only be a small number of observations to use for making the estimation. The optimal
choice of M can be made using a graphic method43 or the bootstrap method44, which we
will not develop here.
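Hill's estimator itself is a one-liner. Here it is checked on a simulated Pareto sample with tail index 4, so the estimate should sit near 1/4 (the sample, seed and threshold M are assumptions of the sketch):

```python
import numpy as np

rng = np.random.default_rng(7)
n, alpha = 10_000, 4.0                             # sample size and true tail index (assumed)
x = np.sort(rng.uniform(size=n) ** (-1 / alpha))   # Pareto sample: P(X > x) = x^(-alpha)

M = 500                                            # keep the M largest observations
# (1/M) * sum over k = 1..M of [ln x_(n-k+1) - ln x_(n-M)].
tau_hat = float(np.mean(np.log(x[n - M:]) - np.log(x[n - M - 1])))
```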
An estimator proposed by Danielsson and De Vries for the limit distribution function
is given by:

F_Y(y) = 1 − (M/n) (x(n−M) / y)^(1/τ̂)

This relation is valid for y ≥ x(n−M) only.

Calculation of VaR
Once the parameters have been estimated, the VaR parameter can then be determined. We
explain the procedure to be followed when the tail model is estimated using the semi-
parametric method,45 presenting the case of one risk factor only. Of course, we will invert
the process to some extent, given that it is the left extremity of the distribution that has
to be used.
The future value of the risk factor is estimated in exactly the same way as for the historical
simulation:

X^(t)(1) = X(0) + Δ(t) · X(0)   (t = −T + 1, . . . , −1, 0)
Beirlant J., Teugels J. L. and Vynckier P., Practical Analysis of Extreme Values, Leuven University Press, 1996.
Hill B. M., A simple general approach to inference about the tail of a distribution, Annals of Statistics, Vol. 46, 1975,
pp. 1163–73. Pickands J., Statistical inference using extreme order statistics, Vol. 45, 1975, pp. 119–31.
McNeil A. J., Estimating the tails of loss severity distributions using extreme value theory, Mimeo, ETH Zentrum
Zurich, 1996.
Danielsson J. and De Vries C., Tail index and quantile estimation with very high frequency data, Mimeo, Iceland
University and Tinbergen Institute Rotterdam, 1997.
Danielsson J. and De Vries C., Tail index and quantile estimation with very high frequency data, Journal of Empirical
Finance, No. 4, 1997, pp. 241–57.
Danielsson J. and De Vries C., Value at Risk and Extreme Returns, LSE Financial Markets Group Discussion Paper 273,
London School of Economics, 1997.
Embrechts P., Klüppelberg C. and Mikosch T., Modelling Extreme Events for Insurance and Finance, Springer Verlag, 1999.
Reiss R. D. and Thomas M., Statistical Analysis of Extreme Values, Birkhäuser Verlag, 2001.

The choice of M is made using one of the methods mentioned above, and the tail parameter
is estimated by:

τ̂ = (1/M) Σ_{k=1}^{M} [ln x(k) − ln x(M+1)]

The adjustment for the left tail of the distribution is made by:

F_Y(y) = (M/n) (x(M+1) / y)^(1/τ̂)

This relation is valid for y ≤ x(M+1) only.
The distribution tail is simulated46 by taking a number of values at random from the
re-evaluated distribution of the X^(t)(1) values and by replacing each value x lower than
x(M+1) by the corresponding value obtained from the distribution of the extremes, that is,
for the level of probability p relative to x, by the solution xp of the equation:

p = (M/n) (x(M+1) / xp)^(1/τ̂)

In other words:

xp = x(M+1) (M/(np))^τ̂
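A sketch of this left-tail procedure on simulated losses (a Student t distribution with 3 degrees of freedom is used so that the true tail parameter is about 1/3; the sample, seed, threshold M and probability level p are all assumptions, and the Hill-type estimate is applied to the magnitudes of the lowest observations):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5_000
x = np.sort(rng.standard_t(df=3, size=n))  # simulated variations with a fat left tail

M = 200                                    # the M lowest observations model the left tail
# Hill-type estimate on the magnitudes of the M largest losses.
tau_hat = float(np.mean(np.log(-x[:M]) - np.log(-x[M])))

p = 0.01                                   # target left-tail probability level
x_p = x[M] * (M / (n * p)) ** tau_hat      # x_p = x_(M+1) * (M/(np))^tau_hat, sign kept
```

Because p is smaller than the empirical frequency M/n of the threshold, the estimated quantile x_p lies further out in the tail than the threshold observation itself.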

Extreme value theory, which allows the adverse effects of one or more outliers to be
avoided, has a serious shortcoming despite its impressive appearance: the historical period
used must have a very long duration, regardless of the estimation method chosen. In fact:

• In the method based on nonlinear regression, N·n observations must be available, in which
the duration n over which an extreme value is measured must be relatively long. The
extreme value theorem is an asymptotic theorem, and the number N of durations
must be large if one wishes to work with a sample distribution that is representative of
the actual distribution of the extremes.
• In the semi-parametric method, a large number of observations are abandoned as soon
as the estimation process starts.

We now move on to review the various advantages and drawbacks of each VaR estimation
technique. To make things simpler, we will use the abbreviations shown in Figure 7.2: VC
for the estimated variance“covariance matrix method, MC for the Monte Carlo simulation
and HS for historical simulation.
46 This operation can also be carried out by generating a uniform random variable in the interval [0; 1], taking its inverse
image through the observed distribution of X^(t)(1), and replacing the observed values lower than x_(M+1) by the value given
by the distribution of the extremes.
VaR Estimation Techniques 235

7.5.1 The theoretical viewpoint

Hypotheses and limitations
(1) Let us envisage first of all the presence or absence of a distributional hypothesis and
its likely impact on the method.
MC and HS do not formulate any distributional hypothesis. Only VC assumes that
variations in price are distributed according to a normal law.47 Here, this hypothesis
is essential:

• because of the technique used to split the assets into cashflows: only the multinormal
distribution is such that the sum of the variables, even when correlated, is still distributed
according to such a law;
• because the information supplied by RiskMetrics includes the −z_q·σ_k values (k =
1, . . ., n) of the VaR* parameter for each risk factor, with z_q = 1.645, that is, the normal
distribution quantile for q = 0.95.

This hypothesis has serious consequences for certain assets such as options, as the
returns are highly skewed and the method can no longer be applied. It is for this reason
that RiskMetrics introduced a method based on the quantile concept for this type of
asset, similar to MC and HS.
For simpler assets such as equities, it has been demonstrated that the variations in price
are distributed according to a leptokurtic law (more pointed than the normal close to
the expectation, with thicker tails and less probable intermediate values). Under the normality
hypothesis, the VaR value is underestimated for such leptokurtic distributions,
because of the greater probability associated with the extreme values.
This phenomenon has already been observed for the Student distributions (see
Section 6.2.2). It can also be verified for specific cases.

Consider the two distributions in Figure 6.2 (Section 6.1.2), in which the triangular
distribution, defined by

f₁(x) = (√3 − |x|)/3   for x ∈ [−√3; √3]

has thicker tails than the rectangular distribution, for which

f₂(x) = √6/6   for x ∈ [−√6/2; √6/2]

Table 7.6 shows a comparison of two distributions.
The phenomenon of underestimation of risk for leptokurtic distributions is shown by
the fact that, for the high confidence levels q used in practice:

VaR_q(triangular) = √(6(1 − q)) − √3

VaR_q(rectangular) = √6 · (1 − q) − √6/2 > VaR_q(triangular)
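These two closed forms can be checked directly against the cumulative distributions of the two densities; the sketch below is ours, and it verifies both quantile formulas and the ordering at the usual confidence levels.

```python
import math

def var_triangular(q):
    # Closed form from the text: VaR_q = sqrt(6*(1-q)) - sqrt(3)
    return math.sqrt(6 * (1 - q)) - math.sqrt(3)

def var_rectangular(q):
    # Closed form from the text: VaR_q = sqrt(6)*(1-q) - sqrt(6)/2
    return math.sqrt(6) * (1 - q) - math.sqrt(6) / 2

# Left-tail cumulative distributions:
# triangular on [-sqrt(3), sqrt(3)]: F(x) = (sqrt(3)+x)**2 / 6 for x <= 0
# rectangular on [-sqrt(6)/2, sqrt(6)/2]: F(x) = (x + sqrt(6)/2) / sqrt(6)
for q in (0.95, 0.99):                     # ordering holds at high confidence levels
    x_tri = var_triangular(q)
    assert abs((math.sqrt(3) + x_tri) ** 2 / 6 - (1 - q)) < 1e-12
    x_rec = var_rectangular(q)
    assert abs((x_rec + math.sqrt(6) / 2) / math.sqrt(6) - (1 - q)) < 1e-12
    assert x_rec > x_tri   # the thicker-tailed triangular gives the more severe VaR
```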
47 In any case, it formulates the conditional normality hypothesis (normality with variance changing over time).
Table 7.6 Comparison of two distributions

        Triangular   Rectangular
μ       0            0
σ²      0.5          0.5
γ1      0            0
γ2      −0.6         −1.2
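The parameters in Table 7.6 can be recomputed by numerical integration of the two densities; the helper below is our own sketch, not the book's, using a simple midpoint rule.

```python
import math

def moments(f, a, b, n=20000):
    """Numerical mean, variance, skewness and excess kurtosis of a density f on [a, b]."""
    h = (b - a) / n
    xs = [a + (i + 0.5) * h for i in range(n)]    # midpoint rule
    w = [f(x) * h for x in xs]
    mu = sum(wi * x for wi, x in zip(w, xs))
    m2 = sum(wi * (x - mu) ** 2 for wi, x in zip(w, xs))
    m3 = sum(wi * (x - mu) ** 3 for wi, x in zip(w, xs))
    m4 = sum(wi * (x - mu) ** 4 for wi, x in zip(w, xs))
    return mu, m2, m3 / m2 ** 1.5, m4 / m2 ** 2 - 3

tri = lambda x: (math.sqrt(3) - abs(x)) / 3       # triangular density
rect = lambda x: math.sqrt(6) / 6                 # rectangular density
```

Both densities come out with mean 0, variance 0.5 and zero skewness; the excess kurtoses approach −0.6 (triangular) and −1.2 (rectangular), as in the table.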

In addition, numerical analyses carried out by F. M. Longin have clearly shown the
underestimation of the VaR for leptokurtic distributions under the normality hypothesis. They
have also shown that the underestimation increases as q moves closer to one; in other
words, when the interest lies in extreme risks. Thus, for a market portfolio represented by
an index, he calculated that:
VaR 0.5 (HS) = 1.6 VaR 0.5 (VC)
VaR 0.75 (HS) = 2.1 VaR 0.75 (VC)
VaR 0.95 (HS) = 3.5 VaR 0.95 (VC)
VaR 0.99 (HS) = 5.9 VaR 0.99 (VC)

This problem can be solved in VC by considering, in the evolution model for the
return R_jt = μ_j + σ_t ε_jt, a residual ε distributed not normally but in accordance
with a generalised law of errors smoother than the normal law.
In the MC and HS methods, the normality hypothesis is not formulated. In MC, how-
ever, if the portfolio is not re-evaluated by a new simulation, the hypothesis will be
required but only for this section of the method.

(2) The VC method, unlike MC and HS, relies explicitly on the hypothesis of asset price
linearity with respect to the risk factors. This hypothesis forms the basis for the principle of
splitting assets into cashflows. It is, however, flawed for certain groups of assets, such as
options: the linear link between the option price and the underlying equity price assumes
that delta is the only non-zero sensitivity parameter.
For this reason, RiskMetrics has abandoned the VC methodology and deals with
this type of product by calling on Taylor's development. Another estimation technique,
namely MC, is sometimes indicated for dealing with this group of assets.
(3) The hypothesis of stationarity can take two forms. In its more exacting form, it
suggests that the joint (theoretical and unknown) distribution of price variations in the different
risk factors, for the VaR calculation horizon, is well estimated by the observations of
variations in these prices during the historical period available. The hypothesis of stationarity
is formulated thus for the HS method. However, if it is not verified because of the presence
of a trend in the observed data, it is easy to take account of the trend when estimating
the future value of the portfolio.
A 'softer' form is recommended for applying the VC method, as this method no
longer relates to the complete distribution; it requires only that the statistical parameters measured on the
observed distribution of the price (or return) variations are good estimations of these same
(unknown) parameters for the horizon for which the VaR is being estimated. The VC
method does, however, have the drawback of being unable to depart from this hypothesis
if a trend is present in the data.
(4) In the presentation of the three estimation methods, it is assumed that the VaR
calculation horizon was equal to the periodicity of the historical observations.48 The
usual use of VaR involves making this period equal to one day for the management of
dealing room portfolios and 10 days according to prudential regulations,49 although a
longer period can be chosen when measuring the risk associated with stable products
such as investment funds.
If, on the other hand, one wishes to consider a horizon (say one month) longer than
the observation period (say one day), three methods may be applied:
• Estimating the VaR on the basis of monthly returns, even if the data are daily in nature.
This leads to serious erosion of the accuracy of the initial observations.
• Using the formulae set out in the note in Section 7.1.2, which consist of multiplying the
expected loss and the loss standard deviation respectively by the horizon (here, the number of
working days in the month) and the square root of the horizon. This is of course only
valid under a hypothesis of independence of daily variations and for methodologies that
calculate the VaR on the basis of these two parameters only (case of normal distribution),
such as VC. As HS cannot rely on the normality hypothesis, this method of working
is incorrect50 and the previous technique should be applied.
• For MC and for this method only, it is possible to generate not only a future price
value but a path of prices for the calculation horizon.
We now explain this last case a little further, where, for example, the price evolution
of an equity is represented by geometric Brownian motion (see Section 3.4.2):

S_{t+dt} − S_t = S_t · (E_R · dt + σ_R · dw_t)

where the Wiener process (dw_t) obeys a law with a zero expectation and a variance
equal to dt. If one considers a normal random variable ε with zero expectation and
variance of 1, we can write:

S_{t+dt} − S_t = S_t · (E_R · dt + σ_R · ε · √dt)
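The discretised update above can be sketched as follows; the function name and parameter values are ours, chosen only for illustration.

```python
import math
import random

def simulate_path(S0, ER, sigmaR, dt, n_steps, rng=random):
    """One simulated price path under the discretisation
    S_{t+dt} - S_t = S_t * (ER*dt + sigmaR*eps*sqrt(dt)),  eps ~ N(0, 1)."""
    path = [S0]
    for _ in range(n_steps):
        eps = rng.gauss(0.0, 1.0)              # independent standard normal draw
        S = path[-1]
        path.append(S * (1.0 + ER * dt + sigmaR * eps * math.sqrt(dt)))
    return path

# e.g. a one-month path of 20 daily steps from the last observed price
path = simulate_path(S0=100.0, ER=0.05, sigmaR=0.2, dt=1 / 252, n_steps=20)
```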
Simulation of a sequence of independent values for ε using the Monte Carlo method
allows the variations S_{t+dt} − S_t to be obtained, and therefore, on the basis of the last
price observed S_0, allows the path of the equity's future price to be generated for a
number of dates equal to the number of ε values simulated.51

Models used

(1) The valuation models play an important part in the VC and MC methods. In the case
of VC, they are even associated with a conditional normality hypothesis. For MC, the
of VC, they are even associated with a conditional normality hypothesis. For MC, the
48 The usual use of VaR involves making this period equal to one day. However, a longer period can be chosen when
measuring the risk associated with stable products such as investment funds.
49 This 10-day horizon may, however, appear somewhat unrealistic when the speed and volume of the deals conducted in
a dealing room are seen.
50 As is pointed out quite justifiably by Hendricks D., Evaluation of value at risk models using historical data, FRBNY
Policy Review, 1996, pp. 39–69.
51 The process that we have described for MC is also applicable, provided sufficient care is taken, for a one-day horizon,
with this period broken down into a small number of subperiods.
search for a model is an essential (and difficult) part of the method; however, as there is
a wide variety of models on offer, there is some guarantee as to the quality of the results.
Conversely, the HS method is almost completely independent of these models; at the
most, it uses them as a pricing tool for putting together databases for asset prices.
Here is one of the many advantages of this method, which have their source in the
conceptual simplicity of the technique in question.
To sum up, the risk associated with the quality of the models used is:

• significant and untreatable for VC;
• significant but manageable for MC;
• virtually zero for HS.

(2) As VC and HS are based on a hypothesis of stationarity, the MC method is the
only one to make intensive use of asset price development models over time (dynamic
models). These models can improve the results of this method, provided the models are
properly adapted to the data and correctly estimated.

Data
The data needed for supplying the VC method in its RiskMetrics version are:

• the partial VaRs for each of the elementary risks;
• the correlation matrix for the various risk-factor couples.

Thus, for n risk factors, n(n + 1)/2 different data are necessary. If, for example, one is
considering 450 elementary risk factors, 101 475 different data must be determined daily.
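The count follows from n partial VaRs plus one correlation per distinct pair of risk factors; a one-line check (the function name is ours):

```python
def riskmetrics_data_count(n):
    # n partial VaRs plus n*(n-1)/2 distinct correlations = n*(n+1)/2 items
    return n * (n + 1) // 2

print(riskmetrics_data_count(450))   # 101475, as in the text
```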
Note that the RiskMetrics system makes all these data available to the user.
The MC method consumes considerably less data; in addition to the history of the
various risk factors, a number of correlations (between risk factors that explain the same
asset) are essential. However, if the portfolio is re-evaluated by a new simulation in order
to avoid the normality hypothesis, the variance“covariance matrix for the assets in the
portfolio will be essential.
Finally, the HS method is the least data consuming; as the historical periods already
contain the structure of the correlation between risk factors and between assets, this last
information does not need to be obtained from an outside body or calculated on the basis
of historical periods.

7.5.2 The practical viewpoint

Data
Most of the data used in the VC method cannot be directly determined by an institution
applying a VaR methodology. Although the institution knows the composition of its
portfolio and pays close attention to changes in the prices of the assets making up the
portfolio, it cannot know the levels of volatility and the correlations of basic risk factors,
some of which can only be obtained by consulting numerous outside markets. The VC
method can therefore only be effective if all these data are available in addition, which is
the case if the RiskMetrics system is used. This will, however, place the business at a
disadvantage as it is not the provider of the data that it uses. It will not therefore be possible
to analyse these data critically, or indeed to make any corrections to them in case of error.
Conversely, the MC method and especially the HS method will use data from inside
the business or data that can easily be calculated on the basis of historical data, with all
the flexibility that this implies with respect to their processing, conditioning, updating
and control.

Calculations

Of course, the three methods proposed require a few basic financial calculations, such as
application of the principle of discounting. We now look at the way in which the three
techniques differ from the point of view of the calculations to be made.
The calculations required for the HS method are very limited and easily programmable
on ordinary computer systems, as they are limited to arithmetical operations, sorting
processes and the use of one or another valuation model when the price to be integrated
into the historical process is determined.
The VC method makes greater use of the valuation models, since the principle of
splitting assets and mapping cashflows is based on this group of models and since options
are dealt with directly using the Black and Scholes model. Most notably, these valuation
models will include regressions, as equity values are expressed on the basis of national
stock-exchange indices. In addition, a matrix calculation is made when the portfolio is
re-evaluated on the basis of the variance–covariance matrix.
In contrast to these techniques, which consume relatively little in terms of calculations
(especially HS), the MC method requires considerable calculation power and time:

• valuation models (including regressions), taking account of changes over time and
therefore estimations of stochastic process parameters;
• forecasting, on the basis of historical periods, of a number of correlations (between the
risk factors that explain the same asset on one hand, and between the assets in the same
portfolio for the purpose of its revaluation on the other hand);
• matrix algebra, including the Choleski decomposition method;
• finally and most significantly, a considerable number of simulations. Thus, if M is the
number of simulations required in the Monte Carlo method to obtain a representative
distribution and the asset for which a price must be generated depends on n risk factors,
a total of nM simulations will be necessary for the asset in question. If the portfolio is
also revalued by simulation (with a bulky variance–covariance matrix), the number of
calculations increases still further.

Installation and use

The basic principles of the VC method, with splitting of assets and mapping of cashflows,
cannot be easily understood at all levels within the business that uses the methodology;
and the function of risk management cannot be truly effective without positive assistance
from all departments within the business. In addition, this method has a great advantage:
RiskMetrics actually exists, and the great number of data that supply the system are
Table 7.7 Advantages and drawbacks

                            VC                       MC                       HS
Distributional hypothesis   Conditional normality    No                       No
Linearity hypothesis        Taylor if options        No                       No
Stationarity hypothesis     Yes                      No                       Method to be adapted if trend
Horizon                     1 observation period     Paths (any duration)     1 observation period
Valuation models            Yes (unmanageable risk)  Yes (manageable risk)    External
Dynamic models              No                       Yes                      External
Required data               Partial VaRs,            Histories (+ var.–cov.   Histories
                            correlation matrix       matrix of assets)
Source of data              External                 In house                 In house
Sensitivity                 Average                  Average                  Outliers
Calculation                 Valuation models,        Valuation models,        External valuation models
                            matrix calculation       statistical estimates,
                                                     matrix calculation
Set-up                      Easy                     Difficult                Easy
Understanding               Difficult                Average                  Easy
Flexibility                 Low                      Low                      Good
Robustness                  Too many hypotheses      Good                     Good

also available. The drawback, of course, is the lack of transparency caused by the external
origin of the data.
Although the basic ideas of the MC method are simple and natural, putting them
into practice is much more problematic, mainly because of the sheer volume of the
calculations involved.
The HS method relies on theoretical bases as simple and natural as those of the MC
method. In addition, the system is easy to implement and its principles can be easily understood
at all levels within a business, which will be able to adopt it without problems.
In addition, it is a very flexible methodology: unlike the other methods, which appear
clumsy because of their vast number of calculations, aggregation can be made at many
different levels and used in many different contexts (an investment fund, a portfolio, a
dealing room, an entire institution). Finally, the small number of basic hypotheses and the
almost complete absence of complex valuation models make the HS method particularly
reliable in comparison with MC and especially VC.
Let us end by recalling one drawback of the HS method, inherent in the simplicity
of its design: its great sensitivity to the quality of the data. In fact, one or a few
outliers (whether exceptional in nature or caused by an error) will greatly influence
the VaR value over a long period (equal to the duration of the historical periods).
It has been said that extreme value theory can overcome this problem, but unfortunately,
the huge number of calculations that have to be made when applying it is
prohibitive. Instead, we would recommend that institutions using the HS method set
up a very rigorous data control system and systematically analyse any exceptional
observations (that is, outliers); this is possible in view of the internal nature of the data
used here.

7.5.3 Synthesis
We end by setting out a synoptic table52, shown in Table 7.7, of all the arguments put forward.

52 With regard to the horizon for the VC method, note that the VaR can be obtained for a longer horizon H than the
periodicity of the observations by multiplying the one-period VaR by √H, except in the case of optional products.
Setting Up a VaR Methodology

The aim of this chapter is to demonstrate how the VaR can be calculated using the historical
simulation method. So that the reader can work through the examples specifically, we felt
it was helpful to include a CD-ROM of Excel spreadsheets in this book. This file, called
'CH8.XLS', contains all the information relating to the examples dealt with below. No
part of the sheets making up the file has been hidden, so that the calculation procedures
are totally transparent.
The examples presented have been deliberately simplified; the actual portfolios of banks,
institutions and companies will be much more complex than what the reader can see here.
The great variety of financial products, and the number of currencies available the world
over, have compelled us to make certain choices.
In the final analysis, however, the aim is to explain the basic methodology so that the
user can transpose historical simulation into the reality of his business. Being aware of
the size of some companies' portfolios, we point out a number of errors to be avoided in
terms of simplification.

8.1.1 Which data should be chosen?
Relevant data are fundamental. As VaR deals with extreme values in a series of returns,
a database error, which is implicitly extreme, will exert its influence for many days.
The person responsible for putting together the data should make a point of testing the
consistency of the new values added to the database every day, so that it is not corrupted.
The reliability of data depends upon:

• the source (internal or external);
• where applicable, the sturdiness of the model and the hypotheses that allow it to
be determined;
• awareness of the market;
• human intervention in the data integration process.

Where the source is external, market operators will be good reference points for
specialist data sources (exchange, long term, short term, derivatives etc.). Sources may be
printed (financial newspapers and magazines) or electronic (Reuters, Bloomberg, Telerate,
Datastream etc.).
Prices may be chosen 'live' (what is the FRA 3–6 USD worth on the market?) or
calculated indirectly (calculation of forward-forward on the basis of three- and six-month
Libor USD, for example). The ultimate aim is to provide proof of consistency as time
goes on.
On a public holiday, the last known price will be used as the price for the day.

8.1.2 The data in the example
We have limited ourselves to four currencies (EUR, PLN, USD and GBP), in weekly
data. For each of these currencies, 101 dates (from 19 January 2001 to 20 December
2002) have been selected. For these dates, we have put together a database containing the
following prices:

• 1, 2, 3, 6, 12, 18, 24, 36, 48 and 60 months deposit and swap rates for EUR and
PLN, and the same periods but only up to 24 months for USD and GBP.
• Spot rates for three currency pairs (EUR/GBP, EUR/PLN and EUR/USD).

The database contains 3737 items of data.
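One accounting consistent with this total, reconstructed by us from the list above (ten tenors each for EUR and PLN, seven each for USD and GBP, plus three spot rates, over 101 dates):

```python
# Hypothetical breakdown of the 3 737 data items (our reconstruction)
tenors_eur_pln = 10      # 1, 2, 3, 6, 12, 18, 24, 36, 48, 60 months
tenors_usd_gbp = 7       # same list truncated at 24 months
fx_pairs = 3             # EUR/GBP, EUR/PLN, EUR/USD
dates = 101

series = 2 * tenors_eur_pln + 2 * tenors_usd_gbp + fx_pairs   # 37 price series
assert series * dates == 3737
```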

8.2.1 Treasury portfolio case
The methodology assumes that historical returns will be applied to a current portfolio in
order to estimate the maximum loss that will occur, with a certain degree of con¬dence,
through successive valuations of that portfolio.
The first stage, which is independent of the composition of the portfolio, consists of
determining the past returns (in this case, weekly returns).

Determining historical returns

As a reminder (see Section 3.1.1), the formula that allows the return to be calculated1 is:

R_t = (C_t − C_{t−1}) / C_{t−1}

For example, the weekly return for the three-month USD deposit between 19 January 2001
and 26 January 2001 is:

(0.05500 − 0.05530) / 0.05530 = −0.5425 %
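This calculation can be reproduced directly (the function name is ours):

```python
def simple_return(c_now, c_prev):
    # R_t = (C_t - C_{t-1}) / C_{t-1}
    return (c_now - c_prev) / c_prev

# three-month USD deposit, 19 January 2001 -> 26 January 2001
r = simple_return(0.05500, 0.05530)
print(f"{r:.4%}")   # about -0.5425 %
```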
The results of applying the rates of return to the databases are found on the 'Returns'
sheet within CH8.XLS. For 101 rates, 100 weekly returns can be determined.

Composition of portfolio
The treasury portfolio is located on the 'Portfolios' sheet within CH8.XLS. This sheet is
entirely fictitious, and has no basis in economic reality, either for the dates covered by the
sample or, even less, at the time you read these lines. The only reality is the prices and
rates that prevail at the dates chosen (and de facto the historical returns).
The investor's currency is the euro. The term 'long' (or 'short') indicates:

• in terms of deposits, that the investor has borrowed (lent).
1 We could also have used the other expression for the return, that is, the logarithmic return ln(C_t /C_{t−1}) (see also Section 3.1.1).
• in terms of foreign exchange, that the investor has purchased (sold) the first currency
(EUR in the case of EUR/USD) in exchange for the second currency in the pair.

We have assumed that the treasury portfolio for which the VaR is to be calculated
contains only new positions for the date on which the maximum loss is being estimated:
20 December 2002.
In a real portfolio, an existing contract must of course be revalued in relation to the
period remaining to maturity. Thus, a nine-month deposit that has been running for six
months (remaining period therefore three months) will require a database that contains the
prices and historical returns for three-month deposits in order to estimate the maximum
loss for the currency in question.
In addition, some interpolations may need to be made on the curve, as the price for
some broken periods (such as seven-month deposits running for four months and 17 days)
does not exist in the market.
Therefore, for each product in the treasury portfolio, we have assumed that the contract
prices obtained by the investor correspond exactly to those in the database on the date of
valuation. The values in Column 'J' ('Initial Price') in the 'Portfolios' sheet in CH8.XLS
for the treasury portfolio will thus correspond to the prices in the 'Rates' sheet in CH8.XLS
for 20 December 2002 for the products and currencies in question.

Revaluation by asset type
We have said that historical simulation consists of revaluing the current portfolio by
applying past returns to that portfolio; the VaR is not classified and determined until later.
Account should, however, be taken of the nature of the product when applying the
historical returns. Here, we are envisaging two types of product:

• interest-rate products;
• FX products.

A. Interest rate product: deposit
Introduction: Calculating the VBP
We saw in Section 2.1.2 that the value of a basis point (VBP) allows the sensitivity of
an interest rate position to a one-basis-point movement to be calculated, whether interest
rates rise or fall.
Position 1 of the treasury portfolio (CH8.XLS, 'Portfolios' sheet, line 14) is a three-month
GBP deposit (the investor is 'long') for a total of GBP 50 000 000 at a rate
of 3.9400 %.
The investor's interest here is in the three-month GBP rate increasing; he will thus be
able to reinvest his position at a more favourable rate. Otherwise, the position will make
a loss. More generally, however, it is better to pay attention to the sensitivity of one's
particular position.
The first stage consists of calculating the interest on the maturity date:

I = C · R · ND/DIV

where:
I represents the interest;
C represents the nominal;
R represents the interest rate;
ND represents the number of days in the period;
DIV represents the number of days in a year for the currency in question.

I = 50 000 000 × 0.0394 × 90/365 = 485 753.42

Let us now assume that the rates increase by one basis point. The interest cashflow at
the maturity date is thus calculated on an interest rate base of 0.0394 + 0.0001 = 0.0395.
We therefore obtain:

I = 50 000 000 × 0.0395 × 90/365 = 486 986.30

As the investor in the example is 'long', that is, it is better for him to lend in order
to cover his position, he will gain ΔI = |486 986.30 − 485 753.42| = 50 000 000 × 0.0001 ×
90/365 = 1 232.88 GBP every time the three-month GBP rate increases by one basis point.
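The interest formula and the resulting VBP can be sketched as follows (the function names are ours):

```python
def interest(C, R, nd, div):
    # I = C * R * ND / DIV
    return C * R * nd / div

def vbp(C, nd, div):
    # value of a basis point: interest change for a 0.0001 move in the rate
    return interest(C, 0.0001, nd, div)

base = interest(50_000_000, 0.0394, 90, 365)     # about 485 753.42 GBP
bumped = interest(50_000_000, 0.0395, 90, 365)   # about 486 986.30 GBP
print(round(bumped - base, 2))                   # about 1 232.88 GBP per basis point
```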

Historical return case
The VBP assumes a predetermined variation of one basis point every time, either upwards
or downwards. In the example, this variation equals a profit (rise) or loss (fall) of
1 232.88 GBP.
In the same way, we can apply to the current rate for the position any other variation
that the investor considers to be of interest: we stated in Section 2.1.2 that this was the
case for simulations (realistic or catastrophic).
However, if the investor believes that the best forecast2 of future variations in rates is
a variation that he has already seen in the past, all he then needs to do is apply a series
of past variations to the current rate (on the basis of past returns) and calculate a law of
probability from that.
On 19 January 2001, the three-month GBP rate was worth 5.72 %, while on 26 January
2001 it stood at 5.55 %. The historical return is −2.9720 % ('Returns' sheet, cell AG4).
This means that: 0.0572 × (1 + (−0.029720)) = 0.0555.
If we apply this past return to the current rate for the position ('Portfolios' sheet, cell
J14), we will have: 0.0394 × (1 + (−0.029720)) = 0.038229.
This rate would produce interest of I = 50 000 000 × 0.038229 × 90/365 = 471 316.70.
As the investor is 'long', this drop in the three-month rate would produce a loss in
relation to that rate, totalling 471 316.70 − 485 753.42 = −14 436.73 GBP. The result is
shown on the 'Treasury Reval' sheet, cell D3.
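The revaluation step can be reproduced end to end (function name ours):

```python
def revalued_rate(current_rate, past_return):
    # apply a historical (relative) return to the current rate
    return current_rate * (1 + past_return)

past_r = (0.0555 - 0.0572) / 0.0572           # the -2.9720 % weekly return
new_rate = revalued_rate(0.0394, past_r)      # about 0.038229

# interest at the revalued rate versus interest at the contract rate
loss = 50_000_000 * (new_rate - 0.0394) * 90 / 365
print(round(loss, 2))                         # about -14 436.73 GBP
```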

Error to avoid
Some people may be tempted to proceed on the basis of the difference from the past rate:
0.055500 − 0.057200 = −0.0017, and then to add that difference to the current rate:
2 The argument in favour of this assumption is that the variation has already existed. The argument against, however, is
that it cannot be assumed that it will recur in the future.
0.0394 − 0.0017 = 0.0377. This would lead to a loss of I = 50 000 000 × (−0.0017) ×
90/365 = −20 958.90. This is obviously different from the true result of −14 436.73.
This method is blatantly false. To stress the concepts once again, if rates moved from
10 % to 5 % within one week (a return of −50 %) a year ago, and the differential were applied
to a current position valued at a rate of 2 %, we would have a result of:

0.02 × (1 − 0.50) = 0.01 with the right method.
0.02 − (0.10 − 0.05) = −0.03 with the wrong method.

In other words, it is best to stick to the relative variations in interest rates and FX rates
and not to the absolute variations.
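The contrast between the two approaches can be made explicit (function names ours; the "wrong" method is shown only to illustrate the error to avoid):

```python
def right_method(current_rate, past_rate, prev_rate):
    # apply the relative variation (the return) observed in the past
    return current_rate * (past_rate / prev_rate)

def wrong_method(current_rate, past_rate, prev_rate):
    # apply the absolute difference -- the error to avoid
    return current_rate + (past_rate - prev_rate)

# rates moved from 10 % to 5 % a year ago; current position valued at 2 %
print(right_method(0.02, 0.05, 0.10))   # 0.01
print(wrong_method(0.02, 0.05, 0.10))   # -0.03, a nonsensical negative rate
```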
B. FX product: spot
Position 3 in the treasury portfolio (CH8.XLS, 'Portfolios' sheet, line 16) is a purchase
(the investor is 'long') of EUR/USD for a total of EUR 75 000 000 at a price of USD 1.0267
per EUR.
Introduction: calculating the value of a ˜pip™
A 'pip' equals one-hundredth of a USD cent in a EUR/USD quotation, that is, the fourth
figure after the decimal point. The investor is 'long' as he has purchased euros and paid
for the purchase in USD. A rise (fall) in the EUR/USD will therefore be favourable
(unfavourable) to him.

