(for |t| → ∞). This motivates the following approximation for φ:

    \tilde{\phi}(t) \stackrel{\mathrm{def}}{=} w_* \, |t|^{-m/2} \exp\Big( i\,\frac{\pi}{4}\, m_* \operatorname{sign}(t) + i x_* t \Big)    (1.34)

with

    m_* \stackrel{\mathrm{def}}{=} \sum_{i=1}^{m} \operatorname{sign}(\lambda_i),    (1.35)

    x_* \stackrel{\mathrm{def}}{=} \theta - \frac{1}{2} \sum_{i=1}^{m} \frac{\delta_i^2}{\lambda_i}.    (1.36)

x∗ is the location and w∗ the "weight" of the singularity. The multivariate delta-gamma distribution is C^∞ except at x∗, where the highest continuous derivative of the cdf is of order ⌊(m − 1)/2⌋.
Note that

    \alpha(t) \stackrel{\mathrm{def}}{=} \phi(t)/\tilde{\phi}(t) = \prod_j \big(1 - (i\lambda_j t)^{-1}\big)^{-1/2} \exp\Big\{ \frac{1}{2}\,\frac{\delta_j^2}{\lambda_j^2}\,(1 - i\lambda_j t)^{-1} \Big\}    (1.37)

and α meets the assumptions of Theorem 1.1.
1.4 Fourier Inversion

1.4.3 Inversion of the cdf minus the Gaussian Approximation

Assume that F is a cdf with mean µ and standard deviation σ. Then

    F(x) - \Phi(x; \mu, \sigma) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \frac{i}{t}\, e^{-ixt} \big( \phi(t) - e^{i\mu t - \sigma^2 t^2/2} \big)\, dt    (1.38)

holds, where Φ(·; µ, σ) is the normal cdf with mean µ and standard deviation σ and e^{iµt − σ²t²/2} its characteristic function. (Integrating the inversion formula (1.16) w.r.t. x and applying Fubini's theorem leads to (1.38).) Applying the Fourier inversion to F(x) − Φ(x; µ, σ) instead of F(x) solves the (numerical) problem that (i/t)φ(t) has a pole at 0. Alternative distributions with known Fourier transform may be chosen if they better approximate the distribution F under consideration.
The moments of the delta-gamma distribution can be derived from (1.3) and (1.5):

    \mu = \sum_i \Big( \theta_i + \frac{1}{2}\lambda_i \Big) = \theta + \frac{1}{2}\operatorname{tr}(\Gamma\Sigma)

and

    \sigma^2 = \sum_i \Big( \delta_i^2 + \frac{1}{2}\lambda_i^2 \Big) = \Delta^{\top}\Sigma\Delta + \frac{1}{2}\operatorname{tr}\big((\Gamma\Sigma)^2\big).

Let ψ(t) := (i/t)(φ(t) − e^{iµt − σ²t²/2}). Since ψ(−t) = \overline{ψ(t)}, the truncated sum (1.21) can for t̃ = Δt/2 and T = (K − ½)Δt be written as

    \tilde{F}(x_j; T, \Delta_t, \tilde{t}) - \Phi(x_j) = \frac{\Delta_t}{\pi} \sum_{k=0}^{K-1} \operatorname{Re}\Big[ \psi\big((k+\tfrac{1}{2})\Delta_t\big)\, e^{-i(k+\frac{1}{2})\Delta_t x_j} \Big],

which can comfortably be computed by an FFT with modulus N ≥ K:

    \frac{\Delta_t}{\pi} \sum_{k=0}^{K-1} \operatorname{Re}\Big[ \psi\big((k+\tfrac{1}{2})\Delta_t\big)\, e^{-i(k+\frac{1}{2})\Delta_t x_j} \Big] = \operatorname{Re}\Big[ e^{-i \Delta_t x_j/2}\, \frac{\Delta_t}{\pi} \sum_{k=0}^{K-1} \psi\big((k+\tfrac{1}{2})\Delta_t\big)\, e^{-ik\Delta_t x_0}\, e^{-2\pi i k j/N} \Big]    (1.39)

with Δx Δt = 2π/N and the last N − K components of the input vector to the FFT set to zero.
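The mechanics of (1.38) and (1.39) can be sketched in Python. This is not the XploRe code; the function names are ours, and we restrict the example to a single risk factor with the standard one-factor delta-gamma characteristic function φ(t) = e^{iθt}(1 − iλt)^{−1/2} exp(−δ²t²/(2(1 − iλt))):

```python
import numpy as np
from math import erf, sqrt, pi

def psi(t, theta, delta, lam):
    """psi(t) = (i/t) * (phi(t) - exp(i*mu*t - sigma^2*t^2/2)) for the
    one-factor delta-gamma distribution L = theta + delta*z + lam*z**2/2."""
    mu = theta + 0.5 * lam                        # mean of L
    sig2 = delta**2 + 0.5 * lam**2                # variance of L
    phi = np.exp(1j*theta*t - 0.5*delta**2*t**2/(1 - 1j*lam*t)) / np.sqrt(1 - 1j*lam*t)
    gauss = np.exp(1j*mu*t - 0.5*sig2*t**2)       # Gaussian characteristic function
    return 1j / t * (phi - gauss)

def cdf_minus_gauss(theta, delta, lam, K=4096, dt=0.05, x0=-20.0):
    """FFT inversion as in (1.39): grid x_j and F(x_j) - Phi(x_j; mu, sigma)."""
    N = 4 * K                                     # N ~ 4K keeps interpolation error small
    dx = 2 * pi / (N * dt)                        # Delta_x * Delta_t = 2*pi/N
    k = np.arange(K)
    a = np.zeros(N, dtype=complex)                # last N - K components stay zero
    a[:K] = psi((k + 0.5) * dt, theta, delta, lam) * np.exp(-1j * k * dt * x0)
    x = x0 + dx * np.arange(N)
    corr = dt / pi * np.real(np.exp(-0.5j * dt * x) * np.fft.fft(a))
    return x, corr
```

Adding the returned correction to Φ(x_j; µ, σ) gives the cdf approximation on the grid x_j = x_0 + jΔx.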
The aliasing error of the approximation (1.20) applied to F − N is

    e_a(x, \Delta_t, \Delta_t/2) = \sum_{j \neq 0} \Big( F\big(x + \tfrac{2\pi}{\Delta_t} j\big) - \Phi\big(x + \tfrac{2\pi}{\Delta_t} j\big) \Big) (-1)^j.    (1.40)
1 Approximating Value at Risk in Conditional Gaussian Models

The cases (λ, δ, θ) = (±√2, 0, √2/2) are the ones with the fattest tails and are thus candidates for the worst case for (1.40), asymptotically for Δt → 0. In these cases, (1.40) is eventually an alternating sequence of decreasing absolute value and thus

    F(-\pi/\Delta_t) + 1 - F(\pi/\Delta_t) \le \sqrt{\frac{2}{\pi e}}\; e^{-\frac{1}{2}\sqrt{2}\,\pi/\Delta_t}    (1.41)

is an asymptotic bound for the aliasing error.
The truncation error (1.22) applied to F − N is

    e_t(x; T) = -\frac{1}{\pi} \int_{T}^{\infty} \operatorname{Re}\Big[ \frac{i}{t}\, e^{-ixt} \big( \phi(t) - e^{i\mu t - \sigma^2 t^2/2} \big) \Big]\, dt.    (1.42)

The Gaussian part plays no role asymptotically for T → ∞ and Theorem 1.1 applies with ν = m/2 + 1.
The quantile error for a given parameter ϑ is

    \tilde{q}(\vartheta) - q(\vartheta) \sim - \frac{e_a(q(\vartheta); \Delta_t) + e_t(q(\vartheta); T)}{f_{\vartheta}(q(\vartheta))},    (1.43)

asymptotically for T → ∞ and Δt → 0. (q(ϑ) denotes the true 1%-quantile for the triplet ϑ = (θ, Δ, Γ).) The problem is now to find the right trade-off between "aliasing error" and "truncation error", i.e., to choose Δt optimally for a given K.
Empirical observation of the one- and two-factor cases shows that (λ, δ, θ) = (−√2, 0, √2/2) has the smallest density (≈ 0.008) at the 1%-quantile. Since (λ, δ, θ) = (−√2, 0, √2/2) is the case with the maximal "aliasing error" as well, it is the only candidate for the worst case of the ratio of the "aliasing error" over the density (at the 1%-quantile).
The question which ϑ is the worst case for the ratio of the "truncation error" over the density (at the 1%-quantile) is not as clear-cut. Empirical observation shows that the case (λ, δ, θ) = (−√2, 0, √2/2) is also the worst case for this ratio over a range of parameters in one- and two-factor problems. This leads to the following heuristic to choose Δt for a given K (T = (K − 0.5)Δt). Choose Δt such as to minimize the sum of the aliasing and truncation errors for the case (λ, δ, θ) = (−√2, 0, √2/2), as approximated by the bound (1.41) and

    \limsup_{T \to \infty} |e_t(x, T)|\, T^{3/2} = \frac{w}{\pi\, |x_* - x|}    (1.44)

with w = 2^{−1/4}, x∗ = √2/2, and the 1%-quantile x ≈ −3.98. (Note that this is suitable only for intermediate K, leading to accuracies of 1 to 4 digits in the quantile. For higher K, other cases become the worst case for the ratio of the truncation error over the density at the quantile.)
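This heuristic can be sketched in Python by a simple grid search. The helper name is ours, and the exact constant in the aliasing term follows our reading of the bound (1.41):

```python
import numpy as np

def choose_dt(K, w=2**-0.25, x_star=2**0.5 / 2, x=-3.98):
    """Grid-search the Delta_t minimizing the sum of the aliasing bound (1.41)
    and the truncation bound (1.44), with T = (K - 0.5) * Delta_t."""
    dt = np.linspace(0.02, 2.0, 5000)
    T = (K - 0.5) * dt
    aliasing = np.sqrt(2.0 / (np.pi * np.e)) * np.exp(-0.5 * np.sqrt(2.0) * np.pi / dt)
    truncation = w / (np.pi * abs(x_star - x)) * T**-1.5
    return dt[np.argmin(aliasing + truncation)]
```

As expected, a larger K (more function evaluations, hence larger T per unit Δt) pushes the optimal Δt down.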
Since F − N has a kink in the case m = 1, λ = 0, higher-order interpolations are futile in non-adaptive methods and Δx = 2π/(N Δt) is a suitable upper bound for the interpolation error. By experimentation, N ≈ 4K suffices to keep the interpolation error comparatively small.
K = 2^6 evaluations of φ (N = 2^8) suffice to ensure an accuracy of 1 digit in the approximation of the 1%-quantile over a sample of one- and two-factor cases. K = 2^9 function evaluations are needed for two digits accuracy. The XploRe implementation of the Fourier inversion is split up as follows:

z = VaRcharfDGF2(t, par)
    implements the function ψ(t) := (i/t)(φ(t) − e^{iµt − σ²t²/2}) for the complex argument t and the parameter list par.
z = VaRcorrfDGF2(x, par)
    implements the correction term Φ(x; µ, σ²) for the argument x and the parameter list par.
vec = gFourierInversion(N, K, dt, t0, x0, charf, par)
    implements a generic Fourier inversion like in (1.39). charf is a string naming the function to be substituted for ψ in (1.39). par is the parameter list passed to charf.

gFourierInversion can be applied to VaRcharfDG, giving the density, or to
VaRcharfDGF2, giving the cdf minus the Gaussian approximation. The three
auxiliary functions are used by
24 1 Approximating Value at Risk in Conditional Gaussian Models

l= VaRcdfDG(par,N,K,dt)
to approximate the cumulative distribution function (cdf) of the
distribution from the class of quadratic forms of Gaussian vectors
with parameter list par. The output is a list of two vectors x and
y, containing the cdf-approximation on a grid given by x.
q= cdf2quant(a,l)
approximates the a-quantile from the list l, as returned from
VaRcdfDG.
q = VaRqDG(a, par, N, K, dt)
    calls VaRcdfDG and cdf2quant to approximate an a-quantile for the distribution of the class of quadratic forms of Gaussian vectors that is defined by the parameter list par.

The following example plots the 1%-quantile for a one-parametric family of the
class of quadratic forms of one- and two-dimensional Gaussian vectors:
XFGqDGtest.xpl

1.5 Variance Reduction Techniques in
Monte-Carlo Simulation

1.5.1 Monte-Carlo Sampling Method

The partial Monte-Carlo method is a Monte-Carlo simulation that is performed by generating underlying prices given the statistical model and then valuing them using the simple delta-gamma approximation. We denote by X the vector of risk factors, by ΔV the change in portfolio value resulting from X, by L = −ΔV the loss, by α the confidence level and by l the loss threshold.
We also let
We also let

• Δ = first-order derivative with regard to the risk factors
• Γ = second-order derivative with regard to the risk factors

• Σ_X = covariance matrix of the risk factors

Equation 1.1 defines the class of delta-gamma normal methods. The detailed procedure to implement the partial Monte-Carlo method is as follows:

1. Generate N scenarios by simulating risk factors X_1, …, X_N according to Σ_X;
2. Revalue the portfolio and determine the losses in the portfolio values L_1, …, L_N using the simple delta-gamma approximation;
3. Calculate the fraction of scenarios in which losses exceed l:

    N^{-1} \sum_{i=1}^{N} \mathbf{1}(L_i > l),    (1.45)

where 1(L_i > l) = 1 if L_i > l and 0 otherwise.
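The three steps above can be condensed into a few lines of Python. This is a sketch under the diagonalized delta-gamma approximation used in (1.46) below, not the XploRe Quantlet, and the function name is of our choosing:

```python
import numpy as np

def partial_mc_exceedance(delta, lam, l, n_scenarios, rng=None):
    """Plain partial Monte-Carlo estimate (1.45) of P(L > l), with the loss
    L = -(sum_i delta_i z_i + 0.5 * lam_i z_i**2) as in (1.46)."""
    rng = np.random.default_rng(rng)
    z = rng.standard_normal((n_scenarios, len(delta)))   # step 1: simulate risk factors
    L = -(z @ delta + 0.5 * (z**2) @ lam)                # step 2: delta-gamma revaluation
    return np.mean(L > l)                                # step 3: exceedance fraction
```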

The partial Monte-Carlo method is flexible and easy to implement. It provides an accurate estimate of the VaR when the loss function is approximately quadratic. One drawback, however, is that for a large number of risk factors it requires a large number of replications and takes a long computational time. According to Boyle, Broadie and Glasserman (1998), the convergence rate of the Monte-Carlo estimate is 1/√N. Different variance reduction techniques have been developed to increase the precision and speed up the process. In the next section, we give a brief overview of the different types of variance reduction techniques, Boyle et al. (1998).

1. Antithetic Method
We assume W_i = f(z_i), where z_i ∈ R^m are independent samples from the standard normal distribution. In our case, the function f is defined as

    f(z_i) = I(L_i > l) = I\Big[ -\sum_{i=1}^{m} \big( \delta_i z_i + \tfrac{1}{2}\lambda_i z_i^2 \big) > l \Big].    (1.46)

Based on N replications, an unbiased estimator of µ = E(W) is given by

    \hat{\mu} = \frac{1}{N} \sum_{i=1}^{N} W_i = \frac{1}{N} \sum_{i=1}^{N} f(z_i).    (1.47)

In this context, the method of antithetic variates is based on the observation that if z_i has a standard normal distribution, then so does −z_i. Similarly,

    \tilde{\mu} = \frac{1}{N} \sum_{i=1}^{N} f(-z_i)    (1.48)

is also an unbiased estimator of µ. Therefore,

    \hat{\mu}_{AV} = \frac{\hat{\mu} + \tilde{\mu}}{2}    (1.49)

is an unbiased estimator of µ as well.
The intuition behind the antithetic method is that the random inputs obtained from the collection of antithetic pairs (z_i, −z_i) are more regularly distributed than a collection of 2N independent samples. In particular, the sample mean over the antithetic pairs always equals the population mean of 0, whereas the mean over finitely many independent samples is almost surely different from 0.
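In Python, the antithetic estimator (1.49) is a short helper over the pairs (z_i, −z_i); the sketch and its name are illustrative, not part of the XploRe implementation:

```python
import numpy as np

def antithetic_estimate(f, n_pairs, dim, rng=None):
    """Antithetic-variates estimator (1.49): the average of mu_hat and mu_tilde."""
    rng = np.random.default_rng(rng)
    z = rng.standard_normal((n_pairs, dim))
    return 0.5 * (np.mean(f(z)) + np.mean(f(-z)))        # (mu_hat + mu_tilde) / 2
```

By construction the Gaussian inputs entering the estimator have sample mean exactly 0.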
2. Control Variates
The basic idea of control variates is to replace the evaluation of an unknown expectation with the evaluation of the difference between the unknown quantity and another expectation whose value is known. The standard Monte-Carlo estimate of µ = E[W_i] = E[f(z_i)] is \frac{1}{N}\sum_{i=1}^{N} W_i. Suppose we know µ̃ = E[g(z_i)] and set W̃_i = g(z_i). The method of control variates uses the known error

    \frac{1}{N} \sum_{i=1}^{N} \tilde{W}_i - \tilde{\mu}    (1.50)

to reduce the unknown error

    \frac{1}{N} \sum_{i=1}^{N} W_i - \mu.    (1.51)

The controlled estimator has the form

    \frac{1}{N} \sum_{i=1}^{N} W_i - \beta \Big( \frac{1}{N} \sum_{i=1}^{N} \tilde{W}_i - \tilde{\mu} \Big).    (1.52)

Since the term in parentheses has expectation zero, equation (1.52) provides an unbiased estimator of µ as long as β is independent of the W̃_i. In practice, if the function g(z_i) provides a close approximation of f(z_i), we usually set β = 1 to simplify the calculation.
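A minimal sketch of the controlled estimator (1.52) in Python; in the example below the control g(z) = z with known mean 0 is our own illustrative choice:

```python
import numpy as np

def control_variate_estimate(w, w_ctrl, mu_ctrl, beta=1.0):
    """Controlled estimator (1.52): subtract beta times the known error of
    the control variable from the plain Monte-Carlo mean."""
    return np.mean(w) - beta * (np.mean(w_ctrl) - mu_ctrl)
```

For instance, E[max(z, 0)] = 1/√(2π) can be estimated with W = max(z, 0) and the correlated control W̃ = z, whose mean 0 is known exactly.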
3. Moment Matching Method
Let z_i, i = 1, …, n, denote independent standard normal random vectors used to drive a simulation. The sample moments will not exactly match those of the standard normal. The idea of moment matching is to transform the z_i to match a finite number of the moments of the underlying population. For example, the first and second moments of the normal random numbers can be matched by defining

    \tilde{z}_i = (z_i - \bar{z}) \, \frac{\sigma_z}{s_z} + \mu_z, \qquad i = 1, \ldots, n,    (1.53)

where z̄ is the sample mean of the z_i, σ_z is the population standard deviation, s_z is the sample standard deviation of the z_i, and µ_z is the population mean.
The moment matching method can be extended to match covariance and higher moments as well.
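Transformation (1.53) is a one-liner in Python (a sketch, applied here to a one-dimensional sample):

```python
import numpy as np

def match_first_two_moments(z, mu_z=0.0, sigma_z=1.0):
    """Moment matching (1.53): rescale the sample so that its mean and
    standard deviation equal the population values exactly."""
    return (z - z.mean()) * sigma_z / z.std() + mu_z
```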
4. Stratified Sampling
Like many variance reduction techniques, stratified sampling seeks to make the inputs to the simulation more regular than random inputs. In stratified sampling, rather than drawing the z_i randomly and independently from a given distribution, the method ensures that fixed fractions of the samples fall within specified ranges. For example, suppose we want to generate N m-dimensional normal random vectors for simulation input. The empirical distribution of an independent sample (z_1, …, z_N) will look only roughly like the true normal density; the rare events, which are important for calculating the VaR, will inevitably be underrepresented. Stratified sampling can be used to ensure that exactly one observation z_i^k lies between the (i − 1)/N and i/N quantiles (i = 1, …, N) of the k-th marginal distribution for each of the m components. One way to implement this is to generate N·m independent uniform random numbers u_i^k on [0, 1] (k = 1, …, m; i = 1, …, N) and set

    \tilde{z}_i^k = \Phi^{-1}\big[ (i + u_i^k - 1)/N \big], \qquad i = 1, \ldots, N,    (1.54)

where Φ^{−1} is the inverse of the standard normal cdf. (In order to achieve satisfactory sampling results, we need a good numerical procedure to calculate Φ^{−1}.) An alternative is to apply the stratification only to the most important components (directions), usually associated with the eigenvalues of largest absolute value.
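Equation (1.54) can be sketched in Python; the standard library's `statistics.NormalDist().inv_cdf` serves as the "good numerical procedure" for Φ^{−1}, and shuffling each coordinate's strata independently (our own addition) avoids introducing artificial dependence between the components:

```python
import numpy as np
from statistics import NormalDist

def stratified_normal(N, m, rng=None):
    """Stratified sample (1.54): per coordinate, exactly one draw lies
    between the (i-1)/N and i/N quantiles of the standard normal."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=(N, m))
    i = np.arange(1, N + 1)[:, None]
    v = (i - 1 + u) / N                       # one uniform per stratum and coordinate
    for k in range(m):                        # shuffle strata independently per coordinate
        rng.shuffle(v[:, k])
    return np.vectorize(NormalDist().inv_cdf)(v)
```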

5. Latin Hypercube Sampling
The Latin Hypercube Sampling method was first introduced by McKay, Beckman and Conover (1979). In Latin Hypercube Sampling, the range of probable values for each component u^k is divided into N segments of equal probability, so the m-dimensional parameter space is partitioned into N^m cells, each having equal probability. For example, for the case of dimension m = 2 and N = 10 segments, the parameter space is divided into 10 × 10 cells. The next step is to choose 10 cells from the 10 × 10 cells. First, uniform random numbers are generated to calculate the cell number. The cell number indicates the segment number the sample belongs to with respect to each of the parameters. For example, the cell number (1, 8) indicates that the sample lies in segment 1 with respect to the first parameter and segment 8 with respect to the second parameter. At each successive step, a random sample is generated and is accepted only if it does not agree with any previous sample on any of the segment numbers.
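The cell selection described above is equivalent to drawing one random permutation of the N segments per coordinate; a Python sketch on the unit cube (illustrative, not the XploRe code):

```python
import numpy as np

def latin_hypercube(N, m, rng=None):
    """Latin Hypercube sample on [0,1]^m: every sample occupies a distinct
    one of the N equal-probability segments in each coordinate."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=(N, m))                          # position inside each cell
    seg = np.column_stack([rng.permutation(N) for _ in range(m)])
    return (seg + u) / N
```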
6. Importance Sampling
This technique builds on the observation that an expectation under one probability measure can be expressed as an expectation under another through the use of a likelihood ratio. The intuition behind the method is to generate more samples from the region that is more important for the practical problem at hand. In the next section, we give a detailed description of calculating the VaR by the partial Monte-Carlo method with importance sampling.

1.5.2 Partial Monte-Carlo with Importance Sampling

In the basic partial Monte-Carlo method, the problem of sampling changes in market risk factors X_i is transformed into the problem of sampling the vector z of underlying standard normal random variables. In importance sampling, we change the distribution of z from N(0, I) to N(µ, Σ). The key step proposed by Glasserman, Heidelberger and Shahabuddin (2000) is to calculate

    P(L > l) = E_{\mu,\Sigma}\big[ \theta(z)\, I(L > l) \big].    (1.55)

The expectation is taken with z sampled from N(µ, Σ) rather than its original distribution N(0, I). To correct for this change of distribution, we weight the loss indicator I(L > l) by the likelihood ratio

    \theta(z) = |\Sigma|^{1/2}\, e^{\frac{1}{2}\mu^{\top}\Sigma^{-1}\mu}\, e^{-\frac{1}{2}\left[ z^{\top}(I - \Sigma^{-1})z + 2\mu^{\top}\Sigma^{-1}z \right]},    (1.56)

which is simply the ratio of the N(0, I) and N(µ, Σ) densities evaluated at z.
The next task is to choose µ and Σ so that the Monte-Carlo estimator has minimum variance. The key to reducing the variance is making the likelihood ratio small when L > l. Equivalently, µ and Σ should be chosen such that L > l is more likely under N(µ, Σ) than under N(0, I). The steps of the algorithm are as follows:

1. Decomposition Process
We follow the decomposition steps described in Section 1.2 and find the cumulant generating function of L given by

    \kappa(\omega) = \sum_{i=1}^{m} \frac{1}{2} \Big[ \frac{(\omega\delta_i)^2}{1 - \omega\lambda_i} - \log(1 - \omega\lambda_i) \Big].    (1.57)

2. Transform N(0, I) to N(µ, Σ)
Taking the first derivative of κ(ω) with respect to ω, we get

    \frac{d}{d\omega}\kappa(\omega) = E_{\mu(\omega),\Sigma(\omega)}[L] = l,    (1.58)

where Σ(ω) = (I − ωΛ)^{−1} and µ(ω) = ωΣ(ω)δ. Since our objective is to estimate P(L > l), we choose ω to be the solution of equation (1.58). The loss-exceeding scenarios (L > l), which were previously rare under N(0, I), become typical under N(µ, Σ), since the expected value of the approximate loss L is now l. According to Glasserman et al. (2000), the effectiveness of this importance sampling procedure is not very sensitive to the choice of ω.
After we get N(µ(ω), Σ(ω)), we can follow the same steps as in the basic partial Monte-Carlo simulation to calculate the VaR. The only difference is that the fraction of scenarios in which losses exceed l is calculated by

    \frac{1}{N} \sum_{i=1}^{N} \big[ \exp(-\omega L_i + \kappa(\omega))\, I(L_i > l) \big].    (1.59)
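The two steps and the estimator (1.59) can be sketched in Python for the diagonalized case. The bisection solver and function names are our own, and we assume l exceeds the mean of L so that the root ω of (1.58) is positive:

```python
import numpy as np

def kappa(w, delta, lam):
    """Cumulant generating function (1.57) of L = sum_i (delta_i z_i + lam_i z_i**2 / 2)."""
    return 0.5 * np.sum((w * delta)**2 / (1 - w * lam) - np.log(1 - w * lam))

def solve_omega(l, delta, lam):
    """Bisection for kappa'(omega) = l, equation (1.58)."""
    def kp(w):                                 # analytic derivative of (1.57)
        return np.sum(delta**2 * w * (1 - 0.5 * w * lam) / (1 - w * lam)**2
                      + 0.5 * lam / (1 - w * lam))
    lo, hi = 0.0, 0.999 / max(lam.max(), 1e-12)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if kp(mid) < l else (lo, mid)
    return 0.5 * (lo + hi)

def is_tail_probability(l, delta, lam, n, rng=None):
    """Importance-sampling estimator (1.59) of P(L > l)."""
    rng = np.random.default_rng(rng)
    w = solve_omega(l, delta, lam)
    var = 1.0 / (1.0 - w * lam)                # diagonal of Sigma(omega)
    mu = w * var * delta                       # mu(omega) = omega * Sigma(omega) * delta
    z = mu + np.sqrt(var) * rng.standard_normal((n, len(delta)))
    L = z @ delta + 0.5 * (z**2) @ lam
    return np.mean(np.exp(-w * L + kappa(w, delta, lam)) * (L > l))
```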

An important feature of this method is that it can be easily added to an existing implementation of partial Monte-Carlo simulation. The importance sampling algorithm differs only in how it generates scenarios and in how it weights scenarios, as in equation (1.59).

1.5.3 XploRe Examples

VaRMC = VaRestMC(VaRdelta, VaRgamma, VaRcovmatrix, smethod, opt)
    Partial Monte-Carlo method to calculate the VaR based on the delta-gamma approximation.

The function VaRestMC uses the different types of variance reduction to calculate the VaR by partial Monte-Carlo simulation. We employ the variance reduction techniques of moment matching, Latin Hypercube Sampling and importance sampling. The output is the estimated VaR. In order to test the efficiency of the different Monte-Carlo sampling methods, we collect data from MD*BASE and construct a portfolio consisting of three German stocks (Bayer, Deutsche Bank, Deutsche Telekom) and 156 options on these underlying stocks with maturities ranging from 18 to 211 days on May 29, 1999. The total portfolio value is 62,476 EUR. The covariance matrix for the stocks is provided as well. Using the Black-Scholes model, we also construct the aggregate delta and aggregate gamma as the input to the Quantlet. Choosing the importance sampling method, a 0.01 confidence level, a 1-day forecast horizon and 1,000 simulations, the result of the estimation is as follows.
XFGVaRMC.xpl

Contents of VaRMC

[1,] 771.73

This tells us that with less than 1% probability the loss will exceed 771.73 EUR, or 1.24% of the portfolio value, in 1 day. The key question of the empirical example, however, is how much variance reduction is achieved by the different sampling methods. We ran each of the four sampling methods 1,000 times and estimated the standard error of the estimated VaR for each sampling method. Table 1.1 summarizes the results.

Method                Estimated VaR   Standard Error   Variance Reduction
Plain-Vanilla         735.75          36.96             0%
Moment Matching       734.92          36.23             1.96%
Latin Hypercube       757.83          21.32            42.31%
Importance Sampling   761.75           5.66            84.68%

Table 1.1. Variance Reduction of Estimated VaR for German Stock Option Portfolio

As we see from Table 1.1, the standard error of importance sampling is 84.68% less than that of plain-vanilla sampling, which demonstrates that approximately 42 times more scenarios would have to be generated using the plain-vanilla method to achieve the precision obtained by importance sampling based on the delta-gamma approximation. These results clearly indicate the great potential speed-up in estimating the VaR by using the importance sampling method. This is why importance sampling is the default sampling method in the function VaRestMC. The Latin Hypercube sampling method, however, also achieved 42.31% variance reduction. One advantage of Latin Hypercube sampling is that the decomposition process is not necessary. Especially when the number of risk factors m is large, the decomposition (O(m³)) dominates the sampling (O(m)) and the summation (O(1)) in terms of computational time. In this case, Latin Hypercube sampling may offer better precision for a given computational time.

Bibliography

Abate, J. and Whitt, W. (1992). The Fourier-series method for inverting transforms of probability distributions, Queuing Systems Theory and Applications 10: 5–88.

Albanese, C., Jackson, K. and Wiberg, P. (2000). Fast convolution method for

Anderson, E., Bai, Z., Bischof, C., Blackford, S., Demmel, J., Dongarra, J., Croz, J. D., Greenbaum, A., Hammarling, S., McKenney, A. and Sorensen, D. (1999). LAPACK Users' Guide, third edn, SIAM. http://www.netlib.org/lapack/lug/.

Basel Committee on Banking Supervision (1995). An internal model-based approach to market risk capital requirements, http://www.bis.org/publ/bcbsc224.pdf.

Boyle, P., Broadie, M. and Glasserman, P. (1998). Monte Carlo methods for security pricing, Journal of Economic Dynamics and Control 3: 1267–1321.

Breckling, J., Eberlein, E. and Kokic, P. (2000). A tailored suit for risk management: Hyperbolic model, in J. Franke, W. Härdle and G. Stahl (eds), Measuring Risk in Complex Stochastic Systems, Vol. 147 of Lecture Notes in Statistics, Springer, New York, chapter 12, pp. 198–202.

Britton-Jones, M. and Schaefer, S. (1999). Non-linear Value-at-Risk, European Finance Review 2: 161–187.

Embrechts, P., Klüppelberg, C. and Mikosch, T. (1997). Modelling extremal events, Springer-Verlag, Berlin.

Embrechts, P., McNeil, A. and Straumann, D. (1999). Correlation and dependence in risk management: Properties and pitfalls, http://www.math.ethz.ch/~strauman/preprints/pitfalls.ps.

Engle, R. (2000). Dynamic conditional correlation - a simple class of multivariate GARCH models, http://weber.ucsd.edu/~mbacci/engle/.

Fallon, W. (1996). Calculating Value at Risk, http://wrdsenet.wharton.upenn.edu/fic/wfic/papers/96/9649.pdf. Wharton Financial Institutions Center Working Paper 96-49.

Glasserman, P., Heidelberger, P. and Shahabuddin, P. (2000). Efficient Monte Carlo methods for Value at Risk, http://www.research.ibm.com/people/b/berger/papers/RC21723.pdf. IBM Research Paper RC21723.

Hill, G. W. and Davis, A. W. (1968). Generalized asymptotic expansions of Cornish-Fisher type, Ann. Math. Statist. 39: 1264–1273.

Jaschke, S. (2001). The Cornish-Fisher-expansion in the context of delta-gamma-normal approximations, http://www.jaschke-net.de/papers/CoFi.pdf. Discussion Paper 54, Sonderforschungsbereich 373, Humboldt-Universität zu Berlin.

Jorion, P. (2000). Value at Risk: The New Benchmark for Managing Financial Risk, McGraw-Hill, New York.

Lee, Y. S. and Lin, T. K. (1992). Higher-order Cornish Fisher expansion, Applied Statistics 41: 233–240.

Lee, Y. S. and Lin, T. K. (1993). Correction to algorithm AS269: Higher-order Cornish Fisher expansion, Applied Statistics 42: 268–269.

Li, D. (1999). Value at Risk based on the volatility, skewness and kurtosis, http://www.riskmetrics.com/research/working/var4mm.pdf. RiskMetrics Group.

Longerstaey, J. (1996). RiskMetrics technical document, Technical Report, fourth edition, J.P. Morgan. Originally from http://www.jpmorgan.com/RiskManagement/RiskMetrics/, now http://www.riskmetrics.com.

McKay, M. D., Beckman, R. J. and Conover, W. J. (1979). A comparison of three methods for selecting values of input variables in the analysis of output from a computer code, Technometrics 21(2): 239–245.

Mina, J. and Ulmer, A. (1999). Delta-gamma four ways, http://www.riskmetrics.com.

Pichler, S. and Selitsch, K. (1999). A comparison of analytical VaR methodologies for portfolios that include options, http://www.tuwien.ac.at/E330/Research/paper-var.pdf. Working Paper TU Wien.

Pritsker, M. (1996). Evaluating Value at Risk methodologies: Accuracy versus computational time, http://wrdsenet.wharton.upenn.edu/fic/wfic/papers/96/9648.pdf. Wharton Financial Institutions Center Working Paper 96-48.

Rogers, L. and Zane, O. (1999). Saddle-point approximations to option prices, Annals of Applied Probability 9(2): 493–503. http://www.bath.ac.uk/~maslcgr/papers/.

Rouvinez, C. (1997). Going greek with VaR, Risk 10(2): 57–65.

Zangari, P. (1996a). How accurate is the delta-gamma methodology?, RiskMetrics Monitor 1996 (third quarter): 12–29.

Zangari, P. (1996b). A VaR methodology for portfolios that include options, RiskMetrics Monitor 1996 (first quarter): 4–12.
2 Applications of Copulas for the Calculation of Value-at-Risk

Jörn Rank and Thomas Siegl

We will focus on the computation of the Value-at-Risk (VaR) from the perspective of the dependency structure between the risk factors. Apart from historical simulation, most VaR methods assume a multivariate normal distribution of the risk factors. Therefore, the dependence structure between different risk factors is defined by the correlation between those factors. It is shown in Embrechts, McNeil and Straumann (1999) that the concept of correlation entails several pitfalls. The authors therefore propose the use of copulas to quantify dependence.
For a good overview of copula techniques we refer to Nelsen (1999). Copulas can be used to describe the dependence between two or more random variables with arbitrary marginal distributions. In rough terms, a copula is a function C : [0, 1]^n → [0, 1] with certain special properties. The joint multidimensional cumulative distribution can be written as

    P(X_1 \le x_1, \ldots, X_n \le x_n) = C\big( P(X_1 \le x_1), \ldots, P(X_n \le x_n) \big)
                                        = C\big( F_1(x_1), \ldots, F_n(x_n) \big),

where F_1, …, F_n denote the cumulative distribution functions of the n random variables X_1, …, X_n. In general, a copula C depends on one or more copula parameters p_1, …, p_k that determine the dependence between the random variables X_1, …, X_n. In this sense, the correlation ρ(X_i, X_j) can be seen as a parameter of the so-called Gaussian copula.
Here we demonstrate the process of deriving the VaR of a portfolio using the copula method with XploRe, beginning with the selection of the copula itself, followed by the estimation of the copula parameters and the computation of the VaR. Backtesting of the results is performed to show the validity and relative quality of the results. We will focus on the case of a portfolio containing

two market risk factors only, the FX rates USD/EUR and GBP/EUR. Copulas in more dimensions exist, but the selection of suitable n-dimensional copulas is still quite limited. While the case of two risk factors is still important for applications, e.g. spread trading, it is also the case that can be described best. As we want to concentrate our attention on the modelling of the dependency structure, rather than on the modelling of the marginal distributions, we restrict our analysis to normal marginal densities. On the basis of our backtesting results, we find that the copula method produces more accurate results than "correlation dependence".

2.1 Copulas

In this section we summarize, without proof, the basic results that are necessary to understand the concept of copulas. Then, we present the most important properties of copulas that are needed for applications in finance. In doing so, we will follow the notation used in Nelsen (1999).

2.1.1 Definition

DEFINITION 2.1 A 2-dimensional copula is a function C : [0, 1]² → [0, 1] with the following properties:

1. For every u ∈ [0, 1]:
       C(0, u) = C(u, 0) = 0.    (2.1)
2. For every u ∈ [0, 1]:
       C(u, 1) = u and C(1, u) = u.    (2.2)
3. For every (u_1, u_2), (v_1, v_2) ∈ [0, 1] × [0, 1] with u_1 ≤ v_1 and u_2 ≤ v_2:
       C(v_1, v_2) - C(v_1, u_2) - C(u_1, v_2) + C(u_1, u_2) \ge 0.    (2.3)

A function that fulfills property 1 is also said to be grounded. Property 3 is the two-dimensional analogue of a nondecreasing one-dimensional function. A function with this feature is therefore called 2-increasing.
The usage of the name "copula" for the function C is explained by the following theorem.

2.1.2 Sklar's Theorem

The distribution function of a random variable R is a function F that assigns to each r ∈ R a probability F(r) = P(R ≤ r). Similarly, the joint distribution function of two random variables R_1, R_2 is a function H that assigns to each pair r_1, r_2 ∈ R a probability H(r_1, r_2) = P(R_1 ≤ r_1, R_2 ≤ r_2).

THEOREM 2.1 (Sklar's theorem) Let H be a joint distribution function with margins F_1 and F_2. Then there exists a copula C with

    H(x_1, x_2) = C\big( F_1(x_1), F_2(x_2) \big)    (2.4)

for every x_1, x_2 ∈ R. If F_1 and F_2 are continuous, then C is unique. Otherwise, C is uniquely determined on Range F_1 × Range F_2. On the other hand, if C is a copula and F_1 and F_2 are distribution functions, then the function H defined by (2.4) is a joint distribution function with margins F_1 and F_2.

It is shown in Nelsen (1999) that H has margins F_1 and F_2 given by F_1(x_1) := H(x_1, +∞) and F_2(x_2) := H(+∞, x_2), respectively, and that F_1 and F_2 themselves are distribution functions. With Sklar's Theorem, the use of the name "copula" becomes obvious. It was chosen by Sklar (1996) to describe "a function that links a multidimensional distribution to its one-dimensional margins" and appeared in the mathematical literature for the first time in Sklar (1959).

2.1.3 Examples of Copulas

Product Copula. The structure of independence is especially important for applications.

DEFINITION 2.2 Two random variables R_1 and R_2 are independent if and only if the product of their distribution functions F_1 and F_2 equals their joint distribution function H,

    H(r_1, r_2) = F_1(r_1) \cdot F_2(r_2) \quad \text{for all } r_1, r_2 \in R.    (2.5)

Thus, we obtain the independence copula C = Π by

    \Pi(u_1, \ldots, u_n) = \prod_{i=1}^{n} u_i,

which becomes obvious from the following theorem:

THEOREM 2.2 Let R_1 and R_2 be random variables with continuous distribution functions F_1 and F_2 and joint distribution function H. Then R_1 and R_2 are independent if and only if C_{R_1 R_2} = Π.

From Sklar's Theorem we know that there exists a unique copula C with

    P(R_1 \le r_1, R_2 \le r_2) = H(r_1, r_2) = C\big( F_1(r_1), F_2(r_2) \big).    (2.6)

Independence can be seen using Equation (2.4) for the joint distribution function H and the definition of Π,

    H(r_1, r_2) = C\big( F_1(r_1), F_2(r_2) \big) = F_1(r_1) \cdot F_2(r_2).    (2.7)

Gaussian Copula. The second important copula that we want to investigate is the Gaussian or normal copula,

    C_{\rho}^{\mathrm{Gauss}}(u, v) \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\Phi_1^{-1}(u)} \int_{-\infty}^{\Phi_2^{-1}(v)} f_{\rho}(r_1, r_2)\, dr_2\, dr_1,    (2.8)

see Embrechts, McNeil and Straumann (1999). In (2.8), f_ρ denotes the bivariate normal density function with correlation ρ for n = 2. The functions Φ_1, Φ_2 in (2.8) refer to the corresponding one-dimensional, cumulated normal density functions of the margins.
In the case of vanishing correlation, ρ = 0, the Gaussian copula becomes

    C_0^{\mathrm{Gauss}}(u, v) = \int_{-\infty}^{\Phi_1^{-1}(u)} f_1(r_1)\, dr_1 \int_{-\infty}^{\Phi_2^{-1}(v)} f_2(r_2)\, dr_2 = uv = \Pi(u, v).    (2.9)

Result (2.9) is a direct consequence of Theorem 2.2.
As Φ_1(r_1), Φ_2(r_2) ∈ [0, 1], one can replace u, v in (2.8) by Φ_1(r_1), Φ_2(r_2). If one considers r_1, r_2 in a probabilistic sense, i.e. r_1 and r_2 being values of two random variables R_1 and R_2, one obtains from (2.8)

    C_{\rho}^{\mathrm{Gauss}}\big( \Phi_1(r_1), \Phi_2(r_2) \big) = P(R_1 \le r_1, R_2 \le r_2).    (2.10)

In other words: C_ρ^{Gauss}(Φ_1(r_1), Φ_2(r_2)) is the binormal cumulative probability function.

Gumbel-Hougaard Copula Next, we consider the Gumbel-Hougaard family of
copulas, see Hutchinson and Lai (1990). A discussion in Nelsen (1999) shows
that Cθ is suited to describe bivariate extreme value distributions. It is given
by the function

    Cθ(u, v) := exp{-[(-ln u)^θ + (-ln v)^θ]^(1/θ)} .   (2.11)

The parameter θ may take all values in the interval [1, ∞).
For θ = 1, expression (2.11) reduces to the product copula, i.e. C1(u, v) =
Π(u, v) = uv. For θ → ∞ one finds for the Gumbel-Hougaard copula

    Cθ(u, v) → min(u, v) := M(u, v)   (θ → ∞).

It can be shown that M is also a copula. Furthermore, for any given copula C
one has C(u, v) ≤ M(u, v), and M is called the Fréchet-Hoeffding upper bound.
The two-dimensional function W(u, v) := max(u + v - 1, 0) defines a copula
with W(u, v) ≤ C(u, v) for any other copula C. W is called the
Fréchet-Hoeffding lower bound.
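These limits and bounds can be checked numerically; a minimal sketch of (2.11), assuming NumPy:

```python
import numpy as np

def gumbel_copula(u, v, theta):
    # Gumbel-Hougaard copula (2.11), theta in [1, inf)
    return float(np.exp(-((-np.log(u))**theta + (-np.log(v))**theta)**(1.0/theta)))

u, v = 0.4, 0.7
W = max(u + v - 1.0, 0.0)   # Frechet-Hoeffding lower bound
M = min(u, v)               # Frechet-Hoeffding upper bound
assert abs(gumbel_copula(u, v, 1.0) - u*v) < 1e-12   # theta = 1: product copula
assert abs(gumbel_copula(u, v, 200.0) - M) < 1e-3    # large theta: close to M(u, v)
assert all(W <= gumbel_copula(u, v, th) <= M for th in (1.0, 2.0, 5.0))
```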

2.1.4 Further Important Properties of Copulas

In this section we focus on the properties of copulas. The theorem we will
present next establishes the continuity of copulas via a Lipschitz condition on
[0, 1] × [0, 1]:

THEOREM 2.3 Let C be a copula. Then for every u1, u2, v1, v2 ∈ [0, 1]:

    |C(u2, v2) - C(u1, v1)| ≤ |u2 - u1| + |v2 - v1| .   (2.12)

From (2.12) it follows that every copula C is uniformly continuous on its
domain. A further important property of copulas concerns the partial
derivatives of a copula with respect to its variables:

THEOREM 2.4 Let C be a copula. For every u ∈ [0, 1], the partial derivative
∂C/∂v exists for almost every v ∈ [0, 1]. For such u and v one has

    0 ≤ (∂/∂v) C(u, v) ≤ 1 .   (2.13)

The analogous statement is true for the partial derivative ∂C/∂u.
In addition, the functions u → Cv(u) := ∂C(u, v)/∂v and v → Cu(v) :=
∂C(u, v)/∂u are defined and nondecreasing almost everywhere on [0, 1].

To give an example of this theorem, we consider the partial derivative of the
Gumbel-Hougaard copula (2.11) with respect to u,

    Cθ,u(v) = ∂Cθ(u, v)/∂u
            = exp{-[(-ln u)^θ + (-ln v)^θ]^(1/θ)}
              × [(-ln u)^θ + (-ln v)^θ]^(-(θ-1)/θ) (-ln u)^(θ-1) / u .   (2.14)

Note that for u ∈ (0, 1) and for all θ ∈ R where θ > 1, Cθ,u is a strictly
increasing function of v. Therefore the inverse function Cθ,u^(-1) is well
defined. However, as one might guess from (2.14), Cθ,u^(-1) cannot be
calculated analytically, so some kind of numerical algorithm has to be used for
this task. As Cθ is symmetric in u and v, the partial derivative of Cθ with
respect to v shows an identical behaviour for the same set of parameters.
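A sketch of (2.14) and its numerical inversion (assuming SciPy; `brentq` stands in for whichever root finder one prefers):

```python
import numpy as np
from scipy.optimize import brentq

def gumbel_cond(v, u, theta):
    # C_{theta,u}(v) = dC_theta(u, v)/du, Eq. (2.14)
    lu, lv = -np.log(u), -np.log(v)
    s = lu**theta + lv**theta
    return float(np.exp(-s**(1.0/theta)) * s**(1.0/theta - 1.0) * lu**(theta - 1.0) / u)

def gumbel_cond_inv(w, u, theta, eps=1e-12):
    # numerical inverse v = C_{theta,u}^{-1}(w) on (0, 1); C_{theta,u} runs
    # from 0 (v -> 0) to 1 (v -> 1), so a sign change is guaranteed
    return brentq(lambda v: gumbel_cond(v, u, theta) - w, eps, 1.0 - eps)

v = gumbel_cond_inv(0.6, 0.3, 3.0)
assert abs(gumbel_cond(v, 0.3, 3.0) - 0.6) < 1e-9
```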
We will end this section with a statement on the behaviour of copulas under
strictly monotone transformations of random variables.

THEOREM 2.5 Let R1 and R2 be random variables with continuous distribution
functions and with copula CR1R2. If α1 and α2 are strictly increasing functions
on Range R1 and Range R2, then Cα1(R1)α2(R2) = CR1R2. In other words:
CR1R2 is invariant under strictly increasing transformations of R1 and R2.
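Theorem 2.5 implies that any dependence measure which depends on the joint distribution only through the copula, such as Kendall's tau, is unchanged by strictly increasing transformations. A quick empirical check (assuming NumPy and SciPy; the transformations exp and x³ are illustrative choices):

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.7], [0.7, 1.0]], size=2000)
r1, r2 = z[:, 0], z[:, 1]
tau, _ = kendalltau(r1, r2)
tau_t, _ = kendalltau(np.exp(r1), r2**3)  # alpha1 = exp, alpha2 = x^3, both strictly increasing
assert abs(tau - tau_t) < 1e-12           # identical: only the ranks enter
```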

2.2 Computing Value-at-Risk with Copulas
Now that we have given the most important properties of copulas, we turn to
the practical question of how to compute the Value-at-Risk of a portfolio using
copulas. The following steps need to be performed:

2.2.1 Selecting the Marginal Distributions

The copula method works with any given marginal distribution, i.e. it does
not restrict the choice of margins. However, we will use normal margins for
simplicity and in order to allow a comparison with standard VaR methods.

2.2.2 Selecting a Copula

A wide variety of copulas exists, mainly for the two-dimensional case (Nelsen
(1999)). In our numerical experiments we will use, for comparison, some of
the copulas presented in Table 4.1 of Nelsen (1999), which are implemented in
the function

C = VaRcopula(uv,theta,0,copula)
    returns Cθ(u, v) for copula copula with parameter θ = theta. uv
    is an n × 2 vector of coordinates at which the copula is evaluated.

For easy reference the implemented copulas are given in Table 2.1.
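For illustration, a few of the families in Table 2.1 can be written out as plain functions (a sketch assuming NumPy, not the VaRcopula quantlet; copula 1 is shown for θ > 0 only):

```python
import numpy as np

def C1(u, v, theta):   # copula 1 (Clayton), here for theta > 0
    return max(u**-theta + v**-theta - 1.0, 0.0)**(-1.0/theta)

def C3(u, v, theta):   # copula 3 (Ali-Mikhail-Haq)
    return u*v / (1.0 - theta*(1.0 - u)*(1.0 - v))

def C4(u, v, theta):   # copula 4 (Gumbel-Hougaard)
    return float(np.exp(-((-np.log(u))**theta + (-np.log(v))**theta)**(1.0/theta)))

def C5(u, v, theta):   # copula 5 (Frank)
    return float(-np.log1p(np.expm1(-theta*u)*np.expm1(-theta*v)/np.expm1(-theta)) / theta)

# every copula satisfies the boundary condition C(u, 1) = u
for C, th in ((C1, 2.0), (C3, 0.5), (C4, 3.0), (C5, 2.0)):
    assert abs(C(0.3, 1.0, th) - 0.3) < 1e-12
```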

2.2.3 Estimating the Copula Parameters

After selecting a copula we fit the copula to a time series

    s = (s^(1), . . . , s^(T))   with   s^(t) = (s1^(t), . . . , sn^(t))

for t ∈ {1, . . . , T}. For simplicity we assume that the s^(t) are realizations
of i.i.d. random variables S^(t). The first step will be to determine the
parameters of the marginal distributions. In the numerical example we will use
the normal distribution N(0, σi²) and, for simplicity, estimate the volatility σi
with the equally weighted volatility estimator σ̂i² = (1/(T-1)) Σ_{t=2}^T (ri^(t))²
of the returns ri^(t) = log(si^(t)/si^(t-1)). The marginal distributions of the
risk factors are then log-normal. The remaining task is to estimate the copula
parameters. In the XploRe VaR quantlib this is done by the function

res = VaRfitcopula(history,copula,method)
    fits the copula to the history using fitting function method.
    The result res is a list containing the estimates of the copula
    parameters together with their standard deviations.
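The volatility estimator above can be sketched as follows (NumPy; `prices` is a hypothetical series of risk-factor levels s^(1), . . . , s^(T)):

```python
import numpy as np

def vola_hat(prices):
    r = np.diff(np.log(prices))  # r^(t) = log(s^(t) / s^(t-1)), t = 2, ..., T
    return float(np.sqrt(np.sum(r**2) / len(r)))  # sigma_hat^2 = sum(r^2)/(T-1)

prices = np.array([100.0, 101.0, 99.5, 100.2, 100.9])
sigma_hat = vola_hat(prices)
```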

Least Square Fit The main idea of the least square fit is that the cumulative
distribution function Fθ^(C)(x) defined by the copula C should fit the sample

 #   Cθ(u, v)                                                          θ ∈
 1   max([u^(-θ) + v^(-θ) - 1]^(-1/θ), 0)                              [-1, ∞)\{0}
 2   max(1 - [(1 - u)^θ + (1 - v)^θ]^(1/θ), 0)                         [1, ∞)
 3   uv / [1 - θ(1 - u)(1 - v)]                                        [-1, 1)
 4   exp(-[(-ln u)^θ + (-ln v)^θ]^(1/θ))                               [1, ∞)
 5   -(1/θ) ln(1 + (e^(-θu) - 1)(e^(-θv) - 1)/(e^(-θ) - 1))            (-∞, ∞)\{0}
 6   1 - [(1 - u)^θ + (1 - v)^θ - (1 - u)^θ(1 - v)^θ]^(1/θ)            [1, ∞)
 7   max(θuv + (1 - θ)(u + v - 1), 0)                                  (0, 1]
 8   max([θ²uv - (1 - u)(1 - v)] / [θ² - (θ - 1)²(1 - u)(1 - v)], 0)   (0, 1]
 9   uv exp(-θ ln u ln v)                                              (0, 1]
10   uv / [1 + (1 - u^θ)(1 - v^θ)]^(1/θ)                               (0, 1]
11   max([u^θ v^θ - 2(1 - u^θ)(1 - v^θ)]^(1/θ), 0)                     (0, 1/2]
12   (1 + [(u^(-1) - 1)^θ + (v^(-1) - 1)^θ]^(1/θ))^(-1)                [1, ∞)
13   exp(1 - [(1 - ln u)^θ + (1 - ln v)^θ - 1]^(1/θ))                  (0, ∞)
14   (1 + [(u^(-1/θ) - 1)^θ + (v^(-1/θ) - 1)^θ]^(1/θ))^(-θ)            [1, ∞)
15   max({1 - [(1 - u^(1/θ))^θ + (1 - v^(1/θ))^θ]^(1/θ)}^θ, 0)         [1, ∞)
16   (1/2)(S + √(S² + 4θ)),                                            [0, ∞)
     where S = u + v - 1 - θ(1/u + 1/v - 1)
21   1 - (1 - [max(S(u) + S(v) - 1, 0)]^θ)^(1/θ),                      [1, ∞)
     where S(u) = [1 - (1 - u)^θ]^(1/θ)

Table 2.1. Copulas implemented in the VaR quantlib.

distribution function S(x) = (1/T) Σ_{t=1}^T 1(s1^(t) ≤ x1, . . . , sn^(t) ≤ xn)
as closely as possible in the mean square sense. The function 1(A) is the
indicator function of the event A. In order to solve the least square problem
on a computer, a discretization of the support of Fθ^(C) is needed, for which
the sample set s^(t) seems to be well suited. The copula parameter estimators
are therefore the solution of the following minimization problem:

    min_θ Σ_{t=1}^T [Fθ^(C)(s^(t)) - S(s^(t)) + 1/(2T)]²   subject to θ ∈ DC ,

using the Newton method on the first derivative (method = 1). The addition
of 1/(2T) avoids problems that result from the 1/T jumps of S at the sample
points. While this method is inherently numerically stable, it will produce
unsatisfactory results when applied to risk management problems, because the
minimization will fit the copula best where there are the most datapoints, and
not necessarily at the extreme ends of the distribution. While this can be
somewhat rectified by weighting schemes, the maximum likelihood method does
this directly.
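The least squares fit can be sketched as follows for the Gumbel-Hougaard parameter (copula 4), assuming NumPy/SciPy and zero-mean normal margins as in the text; `ls_fit`, its bounds, and the bounded scalar minimizer are illustrative choices, not the quantlib implementation:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def gumbel(u, v, theta):
    return np.exp(-((-np.log(u))**theta + (-np.log(v))**theta)**(1.0/theta))

def ls_fit(sample):
    # sample: (T, 2) array of observations s^(1), ..., s^(T)
    T = len(sample)
    u = norm.cdf(sample / sample.std(axis=0, ddof=1))  # fitted normal margins
    S = np.array([np.mean(np.all(sample <= s, axis=1)) for s in sample])  # empirical cdf
    def sse(theta):  # sum of squared deviations, with the 1/(2T) shift
        return np.sum((gumbel(u[:, 0], u[:, 1], theta) - S + 1.0/(2*T))**2)
    return minimize_scalar(sse, bounds=(1.0, 10.0), method="bounded").x

rng = np.random.default_rng(3)
theta_hat = ls_fit(rng.standard_normal((500, 2)))  # independent data: theta near 1
```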

Maximum Likelihood The likelihood function of a probability density function
fθ^(C)(x) evaluated for a time series s is given by l(θ) = Π_{t=1}^T fθ^(C)(s^(t)).
The maximum likelihood method states that the copula parameters at which l
reaches its maximum are good estimators of the "real" copula parameters.
Instead of the likelihood function, it is customary to maximize the
log-likelihood function

    max_θ Σ_{t=1}^T log fθ^(C)(s^(t))   subject to θ ∈ DC .

Maximization can be performed on the copula function itself by the Newton
method on the first derivative (method = 2) or by an interval search
(method = 3). The true maximum likelihood method is implemented in
method = 4 using an interval search. Depending on the given copula it may not
be possible to maximize the likelihood function (e.g. if fθ^(C)(s^(t)) = 0 for
some t and all θ). In this case the least square fit may be used as a fallback.
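The Gumbel-Hougaard density is cumbersome, so as an illustration we sketch the maximum likelihood fit for copula 1 of Table 2.1 (Clayton, θ > 0), whose density has the simple closed form c_θ(u, v) = (1 + θ)(uv)^(-θ-1)(u^(-θ) + v^(-θ) - 1)^(-2-1/θ). Assuming NumPy/SciPy; the interval search is replaced by a bounded scalar minimizer:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def clayton_logpdf(u, v, theta):
    # log c_theta(u, v) for the Clayton copula, theta > 0
    return (np.log1p(theta) - (theta + 1.0)*(np.log(u) + np.log(v))
            - (2.0 + 1.0/theta)*np.log(u**-theta + v**-theta - 1.0))

def ml_fit(u, v):
    nll = lambda th: -np.sum(clayton_logpdf(u, v, th))  # negative log-likelihood
    return minimize_scalar(nll, bounds=(0.01, 10.0), method="bounded").x

# simulate from a Clayton copula with theta = 2 via its analytic conditional inverse
rng = np.random.default_rng(4)
theta = 2.0
u = rng.uniform(0.001, 0.999, 2000)
w = rng.uniform(0.001, 0.999, 2000)
v = ((w**(-theta/(1.0 + theta)) - 1.0)*u**-theta + 1.0)**(-1.0/theta)
theta_hat = ml_fit(u, v)
```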

2.2.4 Generating Scenarios - Monte Carlo Value-at-Risk

Assume now that the copula C has been selected. For risk management
purposes, we are interested in the Value-at-Risk of a position. While analytical
methods for the computation of the Value-at-Risk exist for the multivariate
normal distribution (i.e. for the Gaussian copula), we will in general have
to use numerical simulations for the computation of the VaR. To that end,
we need to generate pairs of random variables (X1, X2) ∼ F^(C), which form
scenarios of possible changes of the risk factors. The Monte Carlo method
generates a number N of such scenarios and evaluates the present value change
of a portfolio under each scenario. The sample α-quantile is then the one
period Value-at-Risk with confidence α.
Our first task is to generate pairs (u, v) of observations of U(0, 1) distributed
random variables U and V whose joint distribution function is C(u, v). To
reach this goal we use the method of conditional distributions. Let cu denote
the conditional distribution function for the random variable V at a given value
u of U,

    cu(v) := P(V ≤ v | U = u) .   (2.15)

From (2.6) we have

    cu(v) = lim_{Δu→0} [C(u + Δu, v) - C(u, v)] / Δu = ∂C(u, v)/∂u = Cu(v) ,   (2.16)

where Cu is the partial derivative of the copula. From Theorem 2.4 we know
that cu(v) is nondecreasing and exists for almost all v ∈ [0, 1].
For the sake of simplicity, we assume from now on that cu is strictly increasing
and exists for all v ∈ [0, 1]. If these conditions are not fulfilled, one has to
replace the term "inverse" in the remaining part of this section by
"quasi-inverse", see Nelsen (1999).
With result (2.16) at hand we can now use the method of variable
transformation to generate the desired pair (u, v) of pseudo random numbers
(PRN). The algorithm consists of the following two steps:

• Generate two independent uniform PRNs u, w ∈ [0, 1]. u is already the
  first number we are looking for.

• Compute the inverse function of cu. In general, it will depend on the
  parameters of the copula and on u, which can be seen, in this context,
  as an additional parameter of cu. Set v = cu^(-1)(w) to obtain the second
  PRN.

It may happen that the inverse function cannot be calculated analytically. In
this case one has to use a numerical algorithm to determine v. This situation
occurs for example when Gumbel-Hougaard copulas are used.
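The two-step algorithm for the Gumbel-Hougaard case can be sketched as follows, with cu^(-1) bracketed numerically (assuming SciPy; `brentq` is an illustrative choice of root finder):

```python
import numpy as np
from scipy.optimize import brentq

def c_u(v, u, theta):
    # c_u(v) = dC_theta(u, v)/du for the Gumbel-Hougaard copula, cf. (2.14), (2.16)
    lu, lv = -np.log(u), -np.log(v)
    s = lu**theta + lv**theta
    return float(np.exp(-s**(1.0/theta)) * s**(1.0/theta - 1.0) * lu**(theta - 1.0) / u)

def sample_pair(rng, theta, eps=1e-12):
    u, w = rng.uniform(eps, 1.0 - eps, 2)   # step 1: two independent uniform PRNs
    v = brentq(lambda t: c_u(t, u, theta) - w, eps, 1.0 - eps)  # step 2: v = c_u^{-1}(w)
    return u, v

rng = np.random.default_rng(5)
uv = np.array([sample_pair(rng, 3.0) for _ in range(200)])
```

For θ = 3 the generated pairs show strong positive dependence, as Figure 2.2 illustrates for the quantlib sampler.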

v = VaRcopula(uv,theta,-1,copula)
    returns the inverse v = cu^(-1)(res) such that res = cu(v) for
    copula copula with parameter θ = theta. uv is an n × 2 vector of
    coordinates at which the copula is evaluated.

Finally we determine x1 = Φ1^(-1)(u) and x2 = Φ2^(-1)(v) to obtain one pair
(x1, x2) of random variables with the desired copula dependence structure. For
a Monte Carlo simulation, this procedure is performed N times to yield a
sample X = (x^(1), . . . , x^(N)).

X = VaRsimcopula(N, sigma 1, sigma 2, theta, copula)
    returns a sample of size N for the copula copula with parameter
    θ = theta and normal distributions with standard deviations
    σ1 = sigma 1, σ2 = sigma 2.

If we assume a linear position a with holdings a1, . . . , an in each of the risk
factors, the change in portfolio value is approximately Σ_{i=1}^n ai · xi. Using
a first order approximation, this yields a sample Value-at-Risk with confidence
level α.

VaR = VaRestMCcopula(history,a,copula,opt)
    fits the copula copula to the history history and returns the
    N-sample Monte Carlo Value-at-Risk with confidence level α =
    alpha for position a. N and alpha are contained in the list opt.
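The final transformation and VaR estimation can be sketched as follows (NumPy/SciPy; `mc_var` and the holdings are illustrative, not the VaRestMCcopula quantlet):

```python
import numpy as np
from scipy.stats import norm

def mc_var(uv, sigma1, sigma2, a, alpha):
    # uv: (N, 2) sample from the copula; margins N(0, sigma_i^2)
    x1 = sigma1 * norm.ppf(uv[:, 0])   # x1 = Phi_1^{-1}(u)
    x2 = sigma2 * norm.ppf(uv[:, 1])   # x2 = Phi_2^{-1}(v)
    pnl = a[0]*x1 + a[1]*x2            # first order P&L of the linear position
    return float(-np.quantile(pnl, alpha))  # one period VaR with confidence alpha

# sanity check: with independent uniforms and a = (1, 0) the 1% VaR of a
# single N(0,1) factor should be close to 2.326
rng = np.random.default_rng(6)
uv = rng.uniform(size=(20000, 2))
var99 = mc_var(uv, 1.0, 1.0, (1.0, 0.0), 0.01)
```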

2.3 Examples
In this section we show possible applications for the Gumbel-Hougaard copula,
i.e. for copula = 4. First we visualize C4(u, v) in Figure 2.1.
    XFGaccvar1.xpl

Figure 2.1. Plot of C4(u, v) for θ = 3

In the next Figure 2.2 we show an example of copula sampling for fixed
parameters σ1 = 1, σ2 = 1, θ = 3 for copulas numbered 4, 5, 6, and 12, see
Table 2.1.
    XFGaccvar2.xpl

In order to investigate the connection between the Gaussian and the copula
based dependency structure we plot θ against the correlation ρ in Figure 2.3.
We assume that tmin and tmax hold the minimum respectively maximum
possible θ values. Those can also be obtained by
tmin=VaRcopula(0,0,0,8,copula) and tmax=VaRcopula(0,0,0,9,copula).
Care has to be taken that the values are finite, so we have set the maximum
absolute θ bound to 10.
    XFGaccvar3.xpl

Figure 2.2. 10000-sample output for σ1 = 1, σ2 = 1, θ = 3
(panels: copulas 4, 5, 6, and 12)

2.4 Results
To judge the effectiveness of a Value-at-Risk model, it is common to use
backtesting. A simple approach is to compare the predicted and the empirical
number of outliers, i.e. days on which the actual loss exceeds the VaR. We
implement this test in a two risk factor model using real life time series, the
FX rates USD/EUR and GBP/EUR, respectively their DEM counterparts
before the introduction of the Euro. Our backtesting investigation is based
on a time series ranging from 2 Jan. 1991 until 9 Mar. 2000 and simple
linear portfolios i = 1, . . . , 4:

    Value(ai, t)[EUR] = ai,1 × USDt - ai,2 × GBPt .   (2.17)

Figure 2.3. Plot of θ against correlation ρ for C4.

The Value-at-Risk is computed with confidence level 1 - αi (α1 = 0.1,
α2 = 0.05, and α3 = 0.01) based on a time series for the statistical estimators
of length T = 250 business days. The actual next-day value change of the
portfolio is compared to the VaR estimate. If the portfolio loss exceeds the
VaR estimate, an outlier has occurred. This procedure is repeated for each day
in the time series.
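The outlier count itself is straightforward (a sketch with synthetic inputs; `pnl` and the constant VaR prediction are stand-ins for the mark-to-model backtest data):

```python
import numpy as np

def outlier_rate(pnl, var_pred):
    # a day is an outlier when the loss exceeds the (positive) VaR prediction
    return float(np.mean(pnl < -var_pred))

rng = np.random.default_rng(7)
pnl = rng.standard_normal(10000)                       # daily P&L under a N(0,1) model
alpha_hat = outlier_rate(pnl, np.full(10000, 1.6449))  # 95% one-sided normal VaR
```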
The prediction error, i.e. the absolute difference of the relative number of
outliers α̂ from the predicted number α, is averaged over different portfolios
and confidence levels. The average over the portfolios (a1 = (-3, -2),
a2 = (+3, -2), a3 = (-3, +2), a4 = (+3, +2)) uses equal weights, while the
average over the confidence levels i emphasizes the tails by a weighting scheme
wi (w1 = 1, w2 = 5, w3 = 10). Based on the result, an overall error and a
relative ranking of the different methods is obtained (see Table 2.2).
As benchmark methods for the Value-at-Risk we use the variance-covariance
(vcv) method and historical simulation (his), for details see Deutsch and Eller
(1999). The variance-covariance method is an analytical method which uses a
multivariate normal distribution. The historical simulation method not only
includes the empirical copula, but also empirical marginal distributions. For
the copula VaR methods, the margins are assumed to be normal; the only
difference between the copula VaRs is due to different dependence structures
(see Table 2.1). Mainly as a consequence of non-normal margins, the
historical simulation has the best backtest results. However, even assuming
normal margins, certain copulas (5, 12-14) give better backtest results than
the traditional variance-covariance method.
Copula as in Table 2.1
α= a= his vcv 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 21
.10 a1 .103 .084 .111 .074 .100 .086 .080 .086 .129 .101 .128 .129 .249 .090 .087 .084 .073 .104 .080
.05 a1 .053 .045 .066 .037 .059 .041 .044 .040 .079 .062 .076 .079 .171 .052 .051 .046 .038 .061 .041
.01 a1 .015 .019 .027 .013 .027 .017 .020 .016 .032 .027 .033 .034 .075 .020 .022 .018 .015 .027 .018
.10 a2 .092 .078 .066 .064 .057 .076 .086 .062 .031 .049 .031 .031 .011 .086 .080 .092 .085 .065 .070
.05 a2 .052 .044 .045 .023 .033 .041 .049 .031 .012 .024 .012 .013 .003 .051 .046 .054 .049 .039 .032
.01 a2 .010 .011 .016 .002 .007 .008 .009 .006 .002 .002 .002 .002 .001 .015 .010 .018 .025 .011 .005
.10 a3 .099 .086 .126 .086 .064 .088 .096 .073 .032 .054 .033 .031 .016 .094 .086 .105 .133 .070 .086
.05 a3 .045 .048 .093 .047 .032 .052 .050 .040 .017 .026 .017 .016 .009 .049 .047 .058 .101 .034 .050
.01 a3 .009 .018 .069 .018 .012 .018 .016 .012 .007 .009 .006 .006 .002 .018 .015 .018 .073 .013 .020
.10 a4 .103 .090 .174 .147 .094 .095 .086 .103 .127 .094 .129 .127 .257 .085 .085 .085 .136 .088 .111
.05 a4 .052 .058 .139 .131 .056 .060 .058 .071 .084 .068 .084 .085 .228 .053 .054 .051 .114 .053 .098
.01 a4 .011 .020 .098 .108 .017 .025 .025 .035 .042 .056 .041 .042 .176 .016 .017 .016 .087 .015 .071
.10 Avg .014 .062 .145 .123 .085 .055 .052 .082 .193 .104 .194 .194 .478 .045 .061 .045 .110 .082 .075
.05 Avg .011 .021 .154 .124 .051 .030 .016 .060 .134 .080 .132 .136 .387 .006 .012 .017 .127 .041 .075
.01 Avg .007 .029 .169 .117 .028 .031 .032 .036 .065 .071 .065 .067 .249 .029 .025 .029 .160 .026 .083
Avg Avg .009 .028 .163 .120 .039 .032 .028 .047 .095 .076 .094 .096 .306 .022 .023 .026 .147 .034 .080
Rank 1 6 18 16 9 7 5 10 14 11 13 15 19 2 3 4 17 8 12

Table 2.2. Relative number of backtest outliers α̂ for the VaR with
confidence 1 - α, weighted average error |α̂ - α| and error ranking.
XFGaccvar4.xpl

Bibliography

H.-P. Deutsch, R. Eller (1999). Derivatives and Internal Models. Macmillan
Press.

T. P. Hutchinson, C. D. Lai (1990). Continuous Bivariate Distributions,
Emphasising Applications. Rumsby Scientific Publishing, Adelaide.

P. Embrechts, A. McNeil, D. Straumann (1999). Correlation: Pitfalls and
Alternatives. RISK, May, pages 69-71.

P. Embrechts, A. McNeil, D. Straumann (1999). Correlation and Dependence
in Risk Management: Properties and Pitfalls. Preprint ETH Zürich.

R. B. Nelsen (1999). An Introduction to Copulas. Springer, New York.

A. Sklar (1959). Fonctions de répartition à n dimensions et leurs marges.
Publ. Inst. Statist. Univ. Paris 8, pages 229-231.

A. Sklar (1996). Random Variables, Distribution Functions, and Copulas - a
Personal Look Backward and Forward. In: Distributions with Fixed Marginals
and Related Topics, edited by L. Rüschendorf, B. Schweizer, and M.D. Taylor,
Institute of Mathematical Statistics, Hayward, CA, pages 1-14.
3 Quantification of Spread Risk by
Means of Historical Simulation

Christoph Frisch and Germar Knöchlein

3.1 Introduction

Modeling spread risk for interest rate products, i.e., changes of the yield
difference between a yield curve characterizing a class of equally risky assets
and a riskless benchmark curve, is a challenge for any financial institution
seeking to estimate the amount of economic capital utilized by trading and
treasury activities. With the help of standard tools this contribution
investigates some of the characteristic features of yield spread time series
available from commercial data providers. From the properties of these time
series it becomes obvious that the application of the parametric
variance-covariance approach for estimating idiosyncratic interest rate risk
should be called into question. Instead we apply the non-parametric technique
of historical simulation to synthetic zero-bonds of different riskiness, in
order to quantify general market risk and spread risk of the bond. The quality
of value-at-risk predictions is checked by a backtesting procedure based on a
mark-to-model profit/loss calculation for the zero-bond market values. From
the backtesting results we derive conclusions for the implementation of
internal risk models within financial institutions.

3.2 Risk Categories – a Definition of Terms

For the analysis of obligor-specific and market-sector-specific influence on
bond price risk we make use of the following subdivision of "price risk", Gaumert