and sum up to one,

\sum_{j=1}^{d} w_j(t) = 1.

In the case of independent rating migrations, the expected portfolio weights at
t + 1 given the weights at t result from (4.22) and (4.23) as

E[\tilde{w}(t+1) \mid \tilde{w}(t) = w(t)] = w(t)\, P


and the conditional covariance matrix V[\tilde{w}(t+1) \mid \tilde{w}(t) = w(t)] has elements

v_{kl} \stackrel{\text{def}}{=}
\begin{cases}
\dfrac{1}{n(t)} \sum_{j=1}^{d} w_j(t)\, p_{jk}(1 - p_{jk}) & \text{for } k = l, \\[1ex]
-\dfrac{1}{n(t)} \sum_{j=1}^{d} w_j(t)\, p_{jk}\, p_{jl} & \text{for } k \neq l.
\end{cases}
\tag{4.24}

For m periods the multi-period transition matrix P^{(m)} = P^m has to be used,
see Section 4.3.1. Hence, (4.22) and (4.23) are modified to

\tilde{w}_k(t+m) = \frac{1}{n(t)} \sum_{j=1}^{d} \tilde{c}_{jk}^{(m)}(t)

and

\tilde{c}_{jk}^{(m)}(t) \mid \tilde{w}_j(t) = w_j(t) \;\sim\; B\!\left(n(t)\, w_j(t),\; p_{jk}^{(m)}\right).

Here, c_{jk}^{(m)}(t) denotes the number of credits migrating from j to k over m
periods starting in t. The conditional mean of the portfolio weights is now
given by

E[\tilde{w}(t+m) \mid \tilde{w}(t) = w(t)] = w(t)\, P^{(m)}

and the elements of the conditional covariance matrix V[\tilde{w}(t+m) \mid \tilde{w}(t) = w(t)]
result by replacing p_{jk} and p_{jl} in (4.24) by p_{jk}^{(m)} and p_{jl}^{(m)}.
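The conditional moments above are easy to evaluate numerically. The following sketch computes the mean w(t)P^{(m)} and the covariance elements (4.24) with p_{jk} replaced by the m-period probabilities; the transition matrix, weights and portfolio size are hypothetical illustrations, not values from the text.

```python
import numpy as np

# hypothetical 3-state transition matrix P, current weights w(t), size n(t)
P = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.80, 0.10],
              [0.00, 0.05, 0.95]])
w = np.array([0.5, 0.3, 0.2])   # sums to one
n = 1000

def conditional_moments(w, P, n, m=1):
    """Mean and covariance of w(t+m) given w(t) = w, cf. (4.24)."""
    Pm = np.linalg.matrix_power(P, m)   # multi-period matrix P^(m) = P^m
    mean = w @ Pm                       # E[w(t+m) | w(t) = w] = w P^(m)
    d = len(w)
    V = np.empty((d, d))
    for k in range(d):
        for l in range(d):
            if k == l:   # variance term for k = l
                V[k, l] = (w * Pm[:, k] * (1 - Pm[:, k])).sum() / n
            else:        # covariance term for k != l
                V[k, l] = -(w * Pm[:, k] * Pm[:, l]).sum() / n
    return mean, V

mean, V = conditional_moments(w, P, n, m=2)
```

Since the weights sum to one, each row of the covariance matrix sums to zero, which serves as a quick consistency check of (4.24).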

Bibliography

Athreya, K. B. and Fuh, C. D. (1992). Bootstrapping Markov chains, in
R. LePage and L. Billard (eds), Exploring the Limits of Bootstrap, Wiley,
New York, pp. 49–64.

Basawa, I. V., Green, T. A., McCormick, W. P., and Taylor, R. L. (1990).
Asymptotic bootstrap validity for finite Markov chains, Communications
in Statistics A 19: 1493–1510.

Basel Committee on Banking Supervision (2001). The Internal Ratings-Based

Approach. Consultative Document.

Bishop, Y. M. M., Fienberg, S. E., and Holland, P. W. (1975). Discrete Multi-

variate Analysis: Theory and Practice, MIT Press, Cambridge.


Brémaud, P. (1999). Markov Chains: Gibbs Fields, Monte Carlo Simulation,
and Queues, Springer, New York.

Crouhy, M., Galai, D., and Mark, R. (2001). Prototype risk rating system,
Journal of Banking & Finance 25: 47–95.

Davidson, J. (1994). Stochastic Limit Theory, Oxford University Press, Oxford.

Efron, B. and Tibshirani, R. J. (1993). An Introduction to the Bootstrap,

Chapman & Hall, New York.

Finger, C. C. (1998). Extended "constant correlations" in CreditManager 2.0,
CreditMetrics Monitor pp. 5–8. 3rd Quarter.

Gupton, G. M., Finger, C. C., and Bhatia, M. (1997). CreditMetrics - Technical

Document, J.P. Morgan.

Härdle, W., Horowitz, J., and Kreiss, J. P. (2001). Bootstrap Methods for
Time Series, SFB Discussion Paper, 59.

Huschens, S. and Locarek-Junge, H. (2000). Konzeptionelle und statistische
Grundlagen der portfolioorientierten Kreditrisikomessung, in A. Oehler
(ed.), Kreditrisikomanagement - Portfoliomodelle und Derivate, Schäffer-
Poeschel Verlag, Stuttgart, pp. 25–50.

Jarrow, R. A., Lando, D., and Turnbull, S. M. (1997). A Markov model for
the term structure of credit risk spreads, The Review of Financial Studies
10(2): 481–523.

Kim, J. (1999). Conditioning the transition matrix, Risk: Credit Risk Special
Report, October: 37–40.

Lancaster, T. (1990). The Econometric Analysis of Transition Data, Cambridge

University Press.

Machauer, A. and Weber, M. (1998). Bank behavior based on internal credit
ratings of borrowers, Journal of Banking & Finance 22: 1355–1383.

Mardia, K. V., Kent, J. T., and Bibby, J. M. (1979). Multivariate Analysis,

Academic Press, London.

Nickell, P., Perraudin, W., and Varotto, S. (2000). Stability of rating
transitions, Journal of Banking & Finance 24: 203–227.


Saunders, A. (1999). Credit Risk Measurement: New Approaches to Value at

Risk and Other Paradigms, Wiley, New York.

Shao, J. and Tu, D. (1995). The Jackknife and Bootstrap, Springer, New York.

5 Sensitivity analysis of credit portfolio models

Rüdiger Kiesel and Torsten Kleinow

Assessing the riskiness of credit-risky portfolios is one of the most challenging
tasks in contemporary finance. The decision by the Basel Committee for Banking
Supervision to allow sophisticated banks to use their own internal credit
portfolio risk models has further highlighted the importance of a critical
evaluation of such models. A crucial input for a model of credit-risky portfolios
is the dependence structure of the underlying obligors. We study two widely
used approaches, namely a factor structure and the direct specification of a
copula, within the framework of a default-based credit risk model. Using the
powerful simulation tools of XploRe we generate portfolio default distributions
and study the sensitivity of commonly used risk measures with respect to the
approach in modelling the dependence structure of the portfolio.

5.1 Introduction

Understanding the principal components of portfolio credit risk and their
interaction is of considerable importance. Investment banks use risk-adjusted
capital ratios such as risk-adjusted return on capital (RAROC) to allocate
economic capital and measure performance of business units and trading desks.
The current attempt by the Basel Committee for Banking Supervision in its
Basel II proposals to develop an appropriate framework for a global financial
regulation system emphasizes the need for an accurate understanding of credit
risk; see BIS (2001). Thus bankers, regulators and academics have put
considerable effort into attempts to study and model the contribution of various
ingredients of credit risk to overall credit portfolio risk. A key development
has been the introduction of credit portfolio models to obtain portfolio loss
distributions either analytically or by simulation. These models can roughly


be classified as based on credit rating systems, on Merton's contingent claim
approach or on actuarial techniques; see Crouhy, Galai and Mark (2001) for an
exact description and discussion of the various models.

However, each model contains parameters that affect the risk measures produced,
but which, because of a lack of suitable data, must be set on a judgemental
basis. There are several empirical studies investigating these effects:
Gordy (2000) and Koyluoglu and Hickmann (1998) show that the parametrisation
of various models can be harmonized, but use only default-driven versions (a
related study with more emphasis on the mathematical side of the models is
Frey and McNeil (2001)). Crouhy, Galai and Mark (2000) compare models
on a benchmark portfolio and find that the highest VaR estimate is 50 per cent
larger than the lowest. Finally, Nickell, Perraudin and Varotto (1998) find that
models yield too many exceptions by analyzing VaRs for portfolios over rolling
twelve-month periods.

Despite these shortcomings credit risk portfolio models are regarded as valuable
tools to measure the relative riskiness of credit-risky portfolios – not least
since measures such as the spread over the default-free interest rate or default
probabilities calculated from long runs of historical data suffer from other
intrinsic drawbacks – and are established as benchmark tools in measuring credit
risk.

The calculation of risk capital based on the internal rating approach, currently
favored by the Basel Supervisors Committee, can be subsumed within the class
of ratings-based models. To implement such an approach an accurate
understanding of various relevant portfolio characteristics within such a model is
required and, in particular, the sensitivity of the risk measures to changes in
input parameters needs to be evaluated. However, few studies have attempted
to investigate aspects of portfolio risk based on rating-based credit risk models
thoroughly. In Carey (1998) the default experience and loss distribution for
privately placed US bonds is discussed. VaRs for portfolios of public bonds,
using a bootstrap-like approach, are calculated in Carey (2000). While these
two papers utilize a "default-mode" approach (abstracting from changes in
portfolio value due to changes in credit standing), Kiesel, Perraudin and
Taylor (1999) employ a "mark-to-market" model and stress the importance of
stochastic changes in credit spreads associated with market values – an aspect
also highlighted in Hirtle, Levonian, Saidenberg, Walter and Wright (2001).

The aim of this chapter is to contribute to the understanding of the performance
of rating-based credit portfolio models. Our emphasis is on comparing the
effect of the different approaches to modelling the dependence structure of
the individual obligors within a credit-risky portfolio. We use a default-mode
model (which can easily be extended) to investigate the effect of changing the
dependence structure within the portfolio. We start in Section 5.2 by reviewing
the construction of a rating-based credit portfolio risk model. In Section 5.3 we
discuss approaches to modelling dependence within the portfolio. In Section
5.4 we comment on the implementation in XploRe and present results from our
simulations.

5.2 Construction of portfolio credit risk models

To construct a credit risk model we have to consider individual risk elements
such as

(1i) Default Probability: the probability that the obligor or counterparty will
default on its contractual obligations to repay its debt,

(2i) Recovery Rates: the extent to which the face value of an obligation can
be recovered once the obligor has defaulted,

(3i) Credit Migration: the extent to which the credit quality of the obligor or
counterparty improves or deteriorates;

and portfolio risk elements

(1p) Default and Credit Quality Correlation: the degree to which the default
or credit quality of one obligor is related to the default or credit quality
of another,

(2p) Risk Contribution and Credit Concentration: the extent to which an
individual instrument or the presence of an obligor in the portfolio contributes
to the totality of risk in the overall portfolio.

From the above building blocks a rating-based credit risk model is generated
by

(1m) the definition of the possible states for each obligor's credit quality, and
a description of how likely obligors are to be in any of these states at the
horizon date, i.e. specification of rating classes and of the corresponding
matrix of transition probabilities (relating to (1i) and (3i)),

(2m) the quantification of the interaction and correlation between credit
migrations of different obligors (relating to (1p)),

(3m) the re-evaluation of exposures in all possible credit states, which in case of
default corresponds to (2i) above; however, for non-default states a mark-
to-market or mark-to-model (for individual assets) procedure is required.

During this study we will focus on the effects of default dependence modelling.
Furthermore, we assume that on default we are faced with a zero recovery rate.
Thus, only aspects (1i) and (1p) are of importance in our context and only
two rating classes – default and non-default – are needed. A general discussion
of further aspects can be found in any of the books Caouette, Altman and
Narayanan (1998), Ong (1999), Jorion (2000) and Crouhy et al. (2001). For
practical purposes we emphasize the importance of a proper mark-to-market
methodology (as pointed out in Kiesel et al. (1999)). However, to study the
effects of dependence modelling more precisely, we feel a simple portfolio risk
model is sufficient.

As the basis for comparison we use Value at Risk (VaR) – the loss which will
be exceeded on some given fraction of occasions (the confidence level) if a
portfolio is held for a particular time (the holding period).

5.3 Dependence modelling

To formalize the ratings-based approach, we characterize each exposure j \in
\{1, \ldots, n\} by a four-dimensional stochastic vector

(S_j, k_j, l_j, \pi(j, k_j, l_j)),

where for obligor j

(1) S_j is the driving stochastic process for defaults and rating migrations,

(2) k_j, l_j represent the initial and end-of-period rating category,

(3) \pi(\cdot) represents the credit loss (end-of-period exposure value).

In this context S_j (which is, with reference to the Merton model, often
interpreted as a proxy of the obligor's underlying equity) is used to obtain the
end-of-period state of the obligor. If we assume N rating classes, we obtain
cut-off points -\infty = z_{k,0}, z_{k,1}, z_{k,2}, \ldots, z_{k,N-1}, z_{k,N} = \infty using the matrix of
transition probabilities together with a distributional assumption on S_j. Then,
obligor j changes from rating k to rating l if the variable S_j falls in the range
[z_{k,l-1}, z_{k,l}]. Our default-mode framework implies two rating classes, default
resp. no-default, labeled as 1 resp. 0 (and thus only a single cut-off point
obtained from the probability of default). Furthermore, interpreting \pi(\cdot) as
the individual loss function, \pi(j, 0, 0) = 0 (no default) and, according to our
zero recovery assumption, \pi(j, 0, 1) = 1. To illustrate the methodology we plot
in Figure 5.1 two simulated drivers S_1 and S_2 together with the corresponding
cut-off points z_{1,1} and z_{2,1}.

Figure 5.1. Two simulated drivers S_j and the corresponding cut-off
points for default. XFGSCP01.xpl

5.3.1 Factor modelling

In a typical credit portfolio model dependencies of individual obligors are
modelled via dependencies of the underlying latent variables S = (S_1, \ldots, S_n)^\top. In
the typical portfolio analysis the vector S is embedded in a factor model, which
allows for easy analysis of correlation, the typical measure of dependence. One
assumes that the underlying variables S_j are driven by a vector of common
factors. Typically, this vector is assumed to be normally distributed (see e.g.
JP Morgan (1997)). Thus, with Z \sim N(0, \Sigma) a p-dimensional normal
vector and \varepsilon = (\varepsilon_1, \ldots, \varepsilon_n)^\top independent normally distributed random variables,
independent also from Z, we define

S_j = \sum_{i=1}^{p} a_{ji} Z_i + \sigma_j \varepsilon_j, \qquad j = 1, \ldots, n. \tag{5.1}

Here a_{ji} describes the exposure of obligor j to factor i, i.e. the so-called factor
loading, and \sigma_j is the volatility of the idiosyncratic risk contribution. In such
a framework one can easily infer default correlation from the correlation of
the underlying drivers S_j. To do so, we define default indicators

Y_j = \mathbf{1}(S_j \le D_j),

where D_j is the cut-off point for default of obligor j. The individual default
probabilities are

\pi_j = P(Y_j = 1) = P(S_j \le D_j),

and the joint default probability is

\pi_{ij} = P(Y_i = 1, Y_j = 1) = P(S_i \le D_i, S_j \le D_j).

If we denote by \rho_{ij} = \mathrm{Corr}(S_i, S_j) the correlation of the underlying latent
variables and by \rho^D_{ij} = \mathrm{Corr}(Y_i, Y_j) the default correlation of obligors i and j,
then we obtain for the default correlation the simple formula

\rho^D_{ij} = \frac{\pi_{ij} - \pi_i \pi_j}{\sqrt{\pi_i \pi_j (1 - \pi_i)(1 - \pi_j)}}. \tag{5.2}

Under the assumption that (S_i, S_j) are bivariate normal, we obtain for the joint
default probability

\pi_{ij} = \int_{-\infty}^{D_i} \int_{-\infty}^{D_j} \varphi(u, v; \rho_{ij}) \, du \, dv,

where \varphi(u, v; \rho) is the bivariate normal density with correlation coefficient \rho. Thus,
asset (factor) correlation influences default correlation by entering the joint
default probability. Within the Gaussian framework we can easily evaluate the
above quantities, see (5.1). We see that, under our modelling assumption,
default correlation is an order of magnitude smaller than asset correlation
(which is also supported by empirical evidence).


Asset correlation   Default correlation
0.1                 0.0094
0.2                 0.0241
0.3                 0.0461

Table 5.1. Effect of asset correlation on default correlation
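Default correlations of this kind can be computed directly from (5.2) using the bivariate normal CDF for the joint default probability. The sketch below assumes an individual default probability of 5% (the probability underlying Table 5.1 is not stated here), so the resulting numbers are illustrative only.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def default_correlation(pi_i, pi_j, rho):
    """Default correlation (5.2) for bivariate-normal drivers with
    asset correlation rho; cut-off points are D = Phi^{-1}(pi)."""
    Di, Dj = norm.ppf(pi_i), norm.ppf(pi_j)
    # joint default probability: bivariate normal CDF at (Di, Dj)
    pi_ij = multivariate_normal.cdf([Di, Dj], mean=[0.0, 0.0],
                                    cov=[[1.0, rho], [rho, 1.0]])
    return (pi_ij - pi_i * pi_j) / np.sqrt(
        pi_i * pi_j * (1 - pi_i) * (1 - pi_j))

# hypothetical default probability of 5% per obligor
for rho in (0.1, 0.2, 0.3):
    print(rho, round(default_correlation(0.05, 0.05, rho), 4))
```

The computation confirms that default correlation stays well below the asset correlation that generates it.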

5.3.2 Copula modelling

As an alternative approach to the factor assumption, we can model each of the
underlying variables independently and subsequently use a copula to generate
the dependence structure. (For basic facts on copulae we refer the reader to
Chapter 2 and the references given there.)

So, suppose we have specified the individual distributions F_j of the variables
S_j and a copula C for the dependence structure. Then, for any subgroup of
obligors \{j_1, \ldots, j_m\}, we have for the joint default probability

P(Y_{j_1} = 1, \ldots, Y_{j_m} = 1)
= P(S_{j_1} \le D_{j_1}, \ldots, S_{j_m} \le D_{j_m})
= C_{j_1,\ldots,j_m}\{F_{j_1}(D_{j_1}), \ldots, F_{j_m}(D_{j_m})\},

where we denote by C_{j_1,\ldots,j_m} the m-dimensional margin of C. In particular,
the joint default probability of two obligors is now

\pi_{ij} = C_{i,j}\{F_i(D_i), F_j(D_j)\}.

To study the effect of different copulae on default correlation, we use the
following examples of copulae (further details on these copulae can be found in
Embrechts, Lindskog and McNeil (2001)).

1. Gaussian copula:

C^{Gauss}_R(u) = \Phi^n_R(\Phi^{-1}(u_1), \ldots, \Phi^{-1}(u_n)).

Here \Phi^n_R denotes the joint distribution function of the n-variate normal
with linear correlation matrix R, and \Phi^{-1} the inverse of the distribution
function of the univariate standard normal.


2. t-copula:

C^t_{\nu,R}(u) = t^n_{\nu,R}(t_\nu^{-1}(u_1), \ldots, t_\nu^{-1}(u_n)),

where t^n_{\nu,R} denotes the distribution function of an n-variate t-distributed
random vector with parameter \nu > 2 and linear correlation matrix R.
Furthermore, t_\nu is the univariate t-distribution function with parameter
\nu.

3. Gumbel copula:

C^{Gumbel}_\theta(u) = \exp\left(-[(-\log u_1)^\theta + \ldots + (-\log u_n)^\theta]^{1/\theta}\right),

where \theta \in [1, \infty). This class of copulae is a sub-class of the class of
Archimedean copulae. Furthermore, Gumbel copulae have applications
in multivariate extreme-value theory.

In Table 5.2 joint default probabilities of two obligors are reported using three
types of obligors with individual default probabilities roughly corresponding
to rating classes A, B, C. We assume that the underlying variables S are univariate
normally distributed and model the joint dependence structure using the above
copulae.

             Joint default probability
Copula       class A (×10^{-6})   class B (×10^{-4})   class C (×10^{-4})
Gaussian            6.89                 3.38                52.45
C^t_{10}           46.55                 7.88                71.03
C^t_4             134.80                15.35                97.96
Gumbel, C_2        57.20                14.84               144.56
Gumbel, C_4       270.60                41.84               283.67

Table 5.2. Copulae and default probabilities

The computation shows that the t and Gumbel copulae have higher joint default
probabilities than the Gaussian copula (with obvious implications for default
correlation, see equation (5.2)). To explain the reason for this we need the
concept of tail dependence:

DEFINITION 5.1 Let X and Y be continuous random variables with
distribution functions F and G. The coefficient of upper tail dependence of X and
Y is

\lim_{u \to 1} P[Y > G^{-1}(u) \mid X > F^{-1}(u)] = \lambda_U \tag{5.3}

provided that the limit \lambda_U \in [0, 1] exists. If \lambda_U \in (0, 1], X and Y are said to
be asymptotically dependent in the upper tail; if \lambda_U = 0, X and Y are said to
be asymptotically independent in the upper tail.

For continuous distributions F and G one can replace (5.3) by a version
involving the bivariate copula directly:

\lim_{u \to 1} \frac{1 - 2u + C(u, u)}{1 - u} = \lambda_U. \tag{5.4}

Lower tail dependence, which is more relevant to our current purpose, is defined
in a similar way. Indeed, if

\lim_{u \to 0} \frac{C(u, u)}{u} = \lambda_L \tag{5.5}

exists, then C exhibits lower tail dependence if \lambda_L \in (0, 1], and lower tail
independence if \lambda_L = 0.

It can be shown that random variables linked by Gaussian copulae have no
tail dependence, while the use of the t_\nu and the Gumbel copulae results in tail
dependence. In fact, in case of the t_\nu copula, tail dependence increases
with decreasing parameter \nu, while for the Gumbel family tail dependence
increases with increasing parameter \theta.
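For the t-copula the coefficient of tail dependence is available in closed form, \lambda = 2\, t_{\nu+1}(-\sqrt{(\nu+1)(1-\rho)/(1+\rho)}) (see Embrechts, Lindskog and McNeil (2001)), which makes the monotonicity in \nu easy to verify numerically:

```python
import numpy as np
from scipy.stats import t

def t_copula_tail_dependence(nu, rho):
    """Coefficient of (upper = lower) tail dependence of the t-copula:
    lambda = 2 * t_{nu+1}( -sqrt((nu+1)(1-rho)/(1+rho)) )."""
    return 2 * t.cdf(-np.sqrt((nu + 1) * (1 - rho) / (1 + rho)), df=nu + 1)

# tail dependence grows as nu shrinks; rho = 0.2 as in the later simulations
for nu in (10, 4):
    print(nu, round(t_copula_tail_dependence(nu, 0.2), 4))
```

For any fixed \rho the coefficient is strictly positive, in contrast to the Gaussian copula, whose tail dependence is zero for \rho < 1.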

5.4 Simulations

The purpose here is to generate portfolios with given (normal) marginals and
the above copulae. We focus on the Gaussian and the t-copula case.

5.4.1 Random sample generation

For the generation of an n-variate normal with linear correlation matrix R,
(x_1, \ldots, x_n) \sim N(0, R), we apply the quantlet gennorm. To obtain realizations
from a Gaussian copula we simply have to transform the marginals:

• Set u_i = \Phi(x_i), i = 1, \ldots, n.

• Then (u_1, \ldots, u_n) \sim C^{Gauss}_R.
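The two steps can be sketched outside XploRe as well; here the gennorm draw is replaced by a Cholesky-based normal sample, and the correlation matrix R is an illustrative assumption:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def rgauss_copula(n_obs, R):
    """Draw n_obs realizations from the Gaussian copula C_R^Gauss."""
    L = np.linalg.cholesky(R)                      # N(0, R) via Cholesky
    x = rng.standard_normal((n_obs, len(R))) @ L.T
    return norm.cdf(x)                             # u_i = Phi(x_i)

R = np.array([[1.0, 0.2], [0.2, 1.0]])
u = rgauss_copula(1000, R)   # uniform marginals, Gaussian dependence
```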


To generate random variates from the t-copula C^t_{\nu,R} we recall that if the random
vector X admits the stochastic representation

X = \mu + \sqrt{\frac{\nu}{Z}}\, Y \quad \text{(in distribution)}, \tag{5.6}

with \mu \in R^n, Z \sim \chi^2_\nu and Y \sim N(0, \Sigma), where Z and Y are independent, then
X is t_\nu distributed with mean \mu and covariance matrix \frac{\nu}{\nu-2} \Sigma. Here we assume,
as above, that \nu > 2. While the stochastic representation (5.6) is still valid, the
interpretation of the parameters has to change for \nu \le 2. Thus, the following
algorithm can be used (this is Algorithm 5.2 in Embrechts et al. (2001)):

• Simulate y = (y_1, \ldots, y_n) \sim N(0, R) using gennorm.

• Simulate a random variate z from \chi^2_\nu independent of y_1, \ldots, y_n.

• Set x = \sqrt{\nu/z}\, y.

• Set u_i = t_\nu(x_i), i = 1, \ldots, n.

• Then (u_1, \ldots, u_n) \sim C^t_{\nu,R}.

Having obtained the t-copula C^t_{\nu,R}, we only need to replace the u_i with \Phi^{-1}(u_i)
in order to have a multivariate distribution with t-copula and normal marginals.
The implementation of these algorithms in XploRe is very straightforward.
Indeed, using the quantlet normal we can generate normally distributed random
variables. Naturally all the distribution functions needed are also implemented,
cdfn, cdft etc.
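The same algorithm can be sketched with NumPy/SciPy standing in for the quantlets normal, cdfn and cdft; the correlation matrix below is again an illustrative assumption:

```python
import numpy as np
from scipy.stats import norm, t

rng = np.random.default_rng(0)

def rt_copula_normal_marginals(n_obs, nu, R):
    """t-copula C^t_{nu,R} combined with standard normal marginals,
    following the algorithm above."""
    L = np.linalg.cholesky(R)
    y = rng.standard_normal((n_obs, len(R))) @ L.T   # y ~ N(0, R)
    z = rng.chisquare(df=nu, size=n_obs)             # z ~ chi^2_nu
    x = np.sqrt(nu / z)[:, None] * y                 # x = sqrt(nu/z) y
    u = t.cdf(x, df=nu)                              # u_i = t_nu(x_i)
    return norm.ppf(u)                               # normal marginals

R = np.array([[1.0, 0.2], [0.2, 1.0]])
s = rt_copula_normal_marginals(2000, 4, R)
```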

5.4.2 Portfolio results

We simulate standard portfolios of size 500 with all obligors belonging to one
rating class. We use three rating classes, named A, B, C, with default
probabilities 0.005, 0.05, 0.15, roughly corresponding to default probabilities of
standard rating classes, Ong (1999), p. 77.

For our first simulation exercise we assume that the underlying variables S_j
are normally distributed within a single-factor framework, i.e. p = 1 in (5.1).
The factor loadings a_{j1} in (5.1) are constant and chosen so that the correlation
for the underlying latent variables S_j is \rho = 0.2, which is a standard baseline
value for credit portfolio simulations, Kiesel et al. (1999). To generate different
degrees of tail correlation, we link the individual assets together using a
Gaussian, a t_{10} and a t_4-copula as implemented in VaRcredN and VaRcredTcop.

out = VaRcredN (d, p, rho, opt)
    simulates the default distribution for a portfolio of d homogeneous
    obligors assuming a Gaussian copula.

out = VaRcredTcop (d, p, rho, df, opt)
    simulates the default distribution for a portfolio of d homogeneous
    obligors assuming a t-copula with df degrees of freedom.

The default drivers S_j are normal for all obligors j in both quantlets. p denotes
the default probability \pi_j of an individual obligor and rho is the asset
correlation \rho. opt is an optional list parameter consisting of opt.alpha, the
significance level for VaR estimation, and opt.nsimu, the number of simulations.
Both quantlets return a list containing the mean, the variance and the
opt.alpha-quantile of the portfolio default distribution.
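The Gaussian-copula variant can be sketched as a one-factor Monte Carlo simulation. This is an illustrative re-implementation in Python, not the VaRcredN quantlet itself; the parameter names mirror the quantlet interface described above.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def credit_var_gauss(d, p, rho, alpha=0.99, nsimu=10000):
    """Default distribution of d homogeneous obligors under a one-factor
    Gaussian copula with asset correlation rho; sketch of VaRcredN."""
    D = norm.ppf(p)                        # default cut-off point
    a = np.sqrt(rho)                       # factor loading, Corr(Si, Sj) = rho
    Z = rng.standard_normal((nsimu, 1))    # common factor
    eps = rng.standard_normal((nsimu, d))  # idiosyncratic noise
    S = a * Z + np.sqrt(1 - rho) * eps     # latent drivers, Var(Sj) = 1
    defaults = (S <= D).sum(axis=1)        # defaults per simulated portfolio
    return {"mean": defaults.mean(),
            "var": defaults.var(),
            "quantile": np.quantile(defaults, alpha)}

out = credit_var_gauss(500, 0.05, 0.2)   # a class-B-type portfolio
```

The mean default count is close to d·p, while the high quantile reflects the fat portfolio-loss tail induced by the common factor.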

                              VaR
Portfolio   Copula     α = 0.95   α = 0.99
A           Normal         10         22
            t_10           14         49
            t_4            10         71
B           Normal         77        119
            t_10           95        178
            t_4           121        219
C           Normal        182        240
            t_10          198        268
            t_4           223        306

Table 5.3. Effect of different copulae    XFGSCP02.xpl

The most striking observation from Table 5.3 is the effect tail dependence has
on the high quantiles of highly-rated portfolios: the 99%-quantile for the t_4-
copula is more than 3 times larger than the corresponding quantile for the
Gaussian copula. The same effect can be observed for lower-rated portfolios,
although not quite with a similar magnitude.

To assess the effects of increased correlation within parts of the portfolio, we
change the factor loading within parts of our portfolio. We assume a second
factor, i.e. p = 2 in (5.1), for a sub-portfolio of 100 obligors, increasing the
correlation of the latent variables S_j within the sub-portfolio to 0.5. In the
simulation below, the quantlets VaRcredN2 and VaRcredTcop2 are used.

out = VaRcredN2 (d1, d2, p, rho1, rho2, opt)
    simulates the default distribution for a portfolio consisting of two
    homogeneous subportfolios using a Gaussian copula.

out = VaRcredTcop2 (d1, d2, p, rho1, rho2, df, opt)
    simulates the default distribution for a portfolio consisting of two
    homogeneous subportfolios using a t-copula with df degrees of freedom.

The number of obligors in the first (second) subportfolio is d1 (d2). rho1
(rho2) is the asset correlation generated by the first (second) factor. The other
parameters correspond to the parameters in VaRcredN and VaRcredTcop.

Such a correlation cluster might be generated by a sector or regional exposure
for a real portfolio. Again, degrees of tail correlation are generated by using
a Gaussian, a t_{10} and a t_4-copula. As expected, the results in Table 5.4 show
a slight increase in the quantiles due to the increased correlation within the
portfolio. However, comparing the two tables we see that the sensitivity of the
portfolio loss quantiles is far higher with regard to the underlying copula – and
its corresponding tail dependence – than to the correlation within the portfolio.

Our simulation results indicate that the degree of tail dependence of the
underlying copula plays a major role as a credit risk characteristic. Thus,
while analysis of the driving factors for the underlying variables (obligor
equity, macroeconomic variables, ...) remains an important aspect in modelling
credit-risky portfolios, the copula linking the underlying variables together is of
crucial importance, especially for portfolios of highly rated obligors.

5.4 Simulations 123

                              VaR
Portfolio   Copula     α = 0.95   α = 0.99
A           Normal         10         61
            t_10            9         61
            t_4             5         60
B           Normal        161        318
            t_10          157        344
            t_4           176        360
C           Normal        338        421
            t_10          342        426
            t_4           350        432

Table 5.4. Effect of correlation cluster    XFGSCP03.xpl

Bibliography

BIS (2001). Overview of the new Basel capital accord, Technical report, Basel

Committee on Banking Supervision.

Caouette, J., Altman, E. and Narayanan, P. (1998). Managing Credit Risk: The
Next Great Financial Challenge, Wiley Frontiers in Finance, Wiley & Sons,
Inc., New York.

Carey, M. (1998). Credit risk in private debt portfolios, Journal of Finance
53(4): 1363–1387.

Carey, M. (2000). Dimensions of credit risk and their relationship to economic

capital requirements. Preprint, Federal Reserve Board.

Crouhy, M., Galai, D. and Mark, R. (2000). A comparative analysis of current
credit risk models, Journal of Banking and Finance 24(1-2): 59–117.

Crouhy, M., Galai, D. and Mark, R. (2001). Risk management, McGraw Hill.

Embrechts, P., Lindskog, F. and McNeil, A. (2001). Modelling dependence
with copulas and applications to risk management. Working paper, ETH
Zürich.

Frey, R. and McNeil, A. (2001). Modelling dependent defaults. Working paper,
ETH Zürich.


Gordy, M. (2000). A comparative anatomy of credit risk models, Journal of
Banking and Finance 24: 119–149.

Hirtle, B., Levonian, M., Saidenberg, M., Walter, S. and Wright, D. (2001).
Using credit risk models for regulatory capital: Issues and options, FRBNY
Economic Policy Review 6(2): 1–18.

Jorion, P. (2000). Value at Risk, 2nd. edn, McGraw-Hill, New York.

JP Morgan (1997). Creditmetrics-Technical Document, JP Morgan, New York.

Kiesel, R., Perraudin, W. and Taylor, A. (1999). The structure of credit risk.

Preprint, Birkbeck College.

Koyluoglu, H. and Hickmann, A. (1998). A generalized framework for credit

portfolio models. Working Paper, Oliver, Wyman & Company.

Nickell, P., Perraudin, W. and Varotto, S. (1998). Ratings- versus equity-based
credit risk models: An empirical investigation. Unpublished Bank of England
mimeo.

Ong, M. (1999). Internal Credit Risk Models. Capital Allocation and Perfor-

mance Measurement, Risk Books, London.

Part III

Implied Volatility

6 The Analysis of Implied Volatilities

Matthias R. Fengler, Wolfgang Härdle and Peter Schmidt

The analysis of volatility in financial markets has become a first-rank issue in
modern financial theory and practice: Whether in risk management, portfolio
hedging, or option pricing, we need to have a precise notion of the market's
expectation of volatility. Much research has been done on the analysis of
realized historic volatilities, Roll (1977) and references therein. However, since it
seems unsettling to draw conclusions from past to expected market behavior,
the focus shifted to implied volatilities, Dumas, Fleming and Whaley (1998).
To derive implied volatilities the Black and Scholes (BS) formula is solved for
the constant volatility parameter σ using observed option prices. This is a more
natural approach as the option value is decisively determined by the market's
assessment of current and future volatility. Hence implied volatility may be
used as an indicator for market expectations over the remaining lifetime of the
option.

It is well known that the volatilities implied by observed market prices exhibit
a pattern that is far different from the flat constant one used in the BS formula.
Instead of finding a constant volatility across strikes, implied volatility appears
to be non-flat, a stylized fact which has been called the "smile" effect. In this
chapter we illustrate how implied volatilities can be analyzed. We focus first
on a static and visual investigation of implied volatilities, then we concentrate
on a dynamic analysis with two variants of principal components and interpret
the results in the context of risk management.


6.1 Introduction

Implied volatilities are the focus of interest both in volatility trading and in
risk management. As common practice traders directly trade the so-called
"vega", i.e. the sensitivity of their portfolios with respect to volatility changes.
In order to establish vega trades market professionals use delta-gamma neutral
hedging strategies which are insensitive to changes in the underlying and to time
decay, Taleb (1997). To accomplish this, traders depend on reliable estimates
of implied volatilities and - most importantly - their dynamics.

One of the key issues in option risk management is the measurement of the
inherent volatility risk, the so-called "vega" exposure. Analytically, the "vega"
is the first derivative of the BS formula with respect to the volatility parameter
σ, and can be interpreted as a sensitivity of the option value with respect to
changes in (implied) volatility. When considering portfolios composed of
a large number of different options, a reduction of the risk factor space can
be very useful for assessing the riskiness of the current position. Härdle and
Schmidt (2002) outline a procedure for using principal components analysis
(PCA) to determine the maximum loss of option portfolios bearing vega
exposure. They decompose the term structure of DAX implied volatilities "at the
money" (ATM) into orthogonal factors. The maximum loss, which is defined
directly in the risk factor space, is then modeled by the first two factors.

Our study on DAX options is organized as follows: First, we show how to
derive and to estimate implied volatilities and the implied volatility surface. A
data description follows. In Section 6.3.2, we perform a standard PCA on the
covariance matrix of VDAX returns to identify the dominant factor components
driving term structure movements of ATM DAX options. Section 6.3.3
introduces a common principal components approach that enables us to model not
only ATM term structure movements of implied volatilities but the dynamics
of the "smile" as well.


6.2 The Implied Volatility Surface

6.2.1 Calculating the Implied Volatility

The BS formula for the price Ct of a European call at time t is given by

Ct = St Φ(d1) − K e^{−rτ} Φ(d2),    (6.1)

d1 = [ln(St/K) + (r + σ²/2) τ] / (σ √τ),    (6.2)

d2 = d1 − σ √τ,    (6.3)

where Φ denotes the cumulative distribution function of a standard normal
random variable, r denotes the risk-free interest rate, S the price of the under-
lying, τ = T − t the time to maturity, and K the strike price. For ATM options
the equality K = St holds.
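The pricing formula (6.1)-(6.3) is easy to verify numerically. The following Python sketch (the function names bs_call and bs_put are ours, purely for illustration and not part of XploRe) implements it with the standard normal cdf expressed through the error function:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Phi: cumulative distribution function of a standard normal variable
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, tau):
    """European call price (6.1)-(6.3) in the Black-Scholes model."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return S * norm_cdf(d1) - K * exp(-r * tau) * norm_cdf(d2)

def bs_put(S, K, r, sigma, tau):
    # put price via the put-call parity C - P = S - K exp(-r tau)
    return bs_call(S, K, r, sigma, tau) - S + K * exp(-r * tau)
```

A quick sanity check: for S = 100, K = 120, r = 5%, τ = 0.5 and σ = 24.94%, bs_call returns a price close to 1.94, in line with the worked example below.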

The only parameter in the Black and Scholes formula that cannot be observed
directly is the actual volatility of the underlying price process. However, we may
study the volatility which is implied by option prices observed in the markets,
the so-called implied volatility: implied volatility is defined as the parameter
σ̂ that yields the actually observed market price of a particular option when
substituted into the BS formula. The implied volatility of a European put with
the same strike and maturity can be deduced from the put-call parity

Ct − Pt = St − K e^{−rτ}.

XploRe offers a fast and convenient numerical way to invert the BS formula in
order to recover σ̂ from the market prices of Ct or Pt.

y = ImplVola(x{, IVmethod})

calculates implied volatilities.

As numerical procedures, both a bisection method and a Newton-Raphson
algorithm are available. They are selected by the option IVmethod, which can
either be the bisection method, IVmethod="bisect", or the default Newton-
Raphson. Within arbitrage bounds on the other input parameters there exists

130 6 The Analysis of Implied Volatilities

a unique solution, since the BS price is strictly increasing in σ. The input
vector x contains the data in an n × 6 matrix, where the first column
contains the underlying asset prices S, the second the strikes K, the third the
interest rates r (on a yearly basis), the fourth the maturities τ (in years),
the fifth the observed option prices Ct and Pt. The sixth column contains the
type of the option, where 0 abbreviates a put and 1 a call. For example, the
command ImplVola(100~120~0.05~0.5~1.94~1) yields the implied volatility
of a European call at strike K = 120 with a maturity τ of half a year, where
the interest rate is assumed to be r = 5%, the price of the underlying asset
S = 100 and the option price Ct = 1.94: the result is σ̂ = 24.94%. One may
verify this result by using XploRe:

opc = BlackScholes(S, K, r, sigma, tau, task)

which calculates European option prices according to the Black and Scholes
model when no dividend is assumed. The first five input parameters follow the
notation in this paper, and task specifies whether one desires to know a call
price, task=1, or a put price, task=0. Indeed, for σ̂ = 24.94% we reproduce
the assumed option call price of Ct = 1.94. XFGiv00.xpl
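The inversion behind ImplVola can be sketched in Python as well; since the BS call price is strictly increasing in σ, a simple bisection always finds the unique root (bs_call and implied_vol are illustrative names, not XploRe functions):

```python
from math import log, sqrt, exp, erf

def bs_call(S, K, r, sigma, tau):
    # Black-Scholes call price (6.1)-(6.3)
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    return S * Phi(d1) - K * exp(-r * tau) * Phi(d2)

def implied_vol(price, S, K, r, tau, lo=1e-4, hi=5.0, tol=1e-8):
    """Bisection search: the BS call price increases strictly in sigma,
    so the bracket [lo, hi] contains exactly one solution."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, r, mid, tau) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

For the inputs of the example above, implied_vol(1.94, 100, 120, 0.05, 0.5) returns approximately 0.2494, matching σ̂ = 24.94%.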

Now we present a more complex example using option data from the German
and Swiss Futures Exchange (EUREX). The data set volsurfdata2 contains
the full set of option prices (settlement prices) as observed on January 4th,
1999. The first column contains the settlement price S of the DAX, the second
the strike price K of the option, the third the interest rate r, the fourth the time
to maturity τ, the fifth the option prices Ct or Pt, and the last column finally
the type of the option, either 0, i.e. a put, or 1, i.e. a call. Hence the data set is
already in the form required by the quantlet ImplVola. We may therefore
use the following code to calculate the implied volatilities:

library ("finance")

x=read("volsurfdata2.dat") ; read the data

x=paf(x,x[,4]>0.14&&x[,4]<0.22) ; select 2 months maturity

y=ImplVola(x,"bisect") ; calculate ImplVola

sort(x[,2]~y) ; sort data according to strikes


[Figure: implied volatility (0.3 to 0.5) plotted against strike prices (3000 to 7000)]

Figure 6.1. Implied volatility "smile" as observed on January 4th, 1999.
XFGiv01.xpl

In Figure 6.1 we display the output for the strike dimension. The deviation from
the BS model is clearly visible: implied volatilities form a convex "smile" in
strikes. One finds a curved shape also across different maturities. In combina-
tion with the strike dimension this yields a surface with pronounced curvature
(Figure 6.2). The discontinuity at the ATM position is related to tax effects
exerting different influences on puts and calls, Hafner and Wallmeier (2001).
In our case this effect is not so important, since we smooth the observations
and calculate the returns of the implied volatility time series before applying
the PCA.

6.2.2 Surface smoothing

Calculation of implied volatilities at different strikes and maturities yields a
surface. The quantlet volsurf estimates the implied volatility surface on a
specified grid using a two-dimensional kernel smoothing procedure. A Nadaraya-
Watson estimator with a quartic kernel is employed, Aït-Sahalia and Lo (1998),
Aït-Sahalia and Lo (2000), Härdle (1990), Härdle, Müller, Sperlich, and Wer-
watz (2002).


More technically, given a partition of explanatory variables (x1, x2) = (K, τ),
i.e. of strikes and maturities, the two-dimensional Nadaraya-Watson kernel
estimator is

σ̂(x1, x2) = Σ_{i=1}^{n} K1((x1 − x1i)/h1) K2((x2 − x2i)/h2) σ̂i / Σ_{i=1}^{n} K1((x1 − x1i)/h1) K2((x2 − x2i)/h2),    (6.4)

where σ̂i is the volatility implied by the observed option prices Cti or Pti. K1
and K2 are univariate kernel functions, and h1 and h2 are bandwidths. The
order 2 quartic kernel is given by

Ki(u) = (15/16) (1 − u²)² 1(|u| ≤ 1).

The basic structure of volsurf is given by

{IVsurf, IVpoints} = volsurf(x, stepwidth, firstXF,

lastXF, firstMat, lastMat, metric, bandwidth, p,

{IVmethod})

As input parameters we first have the n × 6 matrix x which has been explained
in Section 6.2.1. The remaining parameters concern the surface: stepwidth
is a 2 × 1 vector determining the stepwidth in the grid of the surface; the first
entry relates to the strike dimension, the second to the dimension across time
to maturity. firstXF, lastXF, firstMat, lastMat are scalar constants
giving the lowest and the highest limit in the strike dimension, and the
lowest and the highest limit of time to maturity in the volatility surface. The
option metric gives the choice whether to compute the surface in a moneyness
or in a strike metric. Setting metric = 0 will generate a surface computed
in a moneyness metric K/F, i.e. strike divided by the (implied) forward price
of the underlying, where the forward price is computed by Ft = St e^{rτ}. If
metric = 1, the surface is computed in the original strike dimension in terms
of K. bandwidth is a 2 × 1 vector determining the width of the bins for the
kernel estimator. p determines whether for the computation a simple Nadaraya-
Watson estimator, p = 0, or a local polynomial regression, p = 1, is used.
The last and optional parameter IVmethod has the same meaning as in the


ImplVola quantlet. It tells XploRe which method to use for calculating the

implied volatilities, default again is Newton-Raphson.

The output consists of two variables. IVsurf is an N × 3 matrix containing the
coordinates of the points computed for the implied volatility surface, where
the first column contains the values of the strike dimension, the second those
of time to maturity, and the third the estimated implied volatilities. N is the
number of grid points. IVpoints is an M × 3 matrix containing the coordinates
of the M options used to estimate the surface. As before, the first column
contains the values for the strike dimension, the second the maturity, the third
the implied volatilities.

Before presenting an example we briefly introduce a graphical tool for display-
ing the volatility surface. The following quantlet plots the implied surface:

volsurfplot(IVsurf, IVpoints, {AdjustToSurface})

As input parameters we have the output of volsurf, i.e. the volatility sur-
face IVsurf, and the original observations IVpoints. An optional parame-
ter AdjustToSurface determines whether the surface plot is shown based on
the surface data given in IVsurf, or on the basis of the original observations
IVpoints. This option might be useful in a situation where one has estimated a
smaller part of the surface than would be possible given the data. By default,
or AdjustToSurface = 1, the graph is adjusted according to the estimated
surface.

XFGiv02.xpl

XFGiv02.xpl computes an implied volatility surface with the Nadaraya-
Watson estimator and displays it (Figure 6.2). The parameters are chosen
to suit the example best; then volsurfplot is used to create the graphic.
The output matrix IVsurf now contains all surface values on a grid at the
given stepwidth. Doing this for a sequential number of dates produces a time
series {σ̂t} of implied volatility surfaces. Empirical evidence shows that this
surface changes its shape and characteristics as time goes on. This is what
we analyze in the subsequent sections.


[Figure: three-dimensional plot of the implied volatility surface over strikes (roughly 3500 to 6930) and maturities]

Figure 6.2. Implied volatility surface as observed on January 4th, 1999.
XFGiv02.xpl

6.3 Dynamic Analysis

6.3.1 Data description

Options on the DAX are the most actively traded contracts at the derivatives
exchange EUREX. Contracts of various strikes and maturities constitute a
liquid market at any specific time. This liquidity yields a rich basket of implied
volatilities for many pairs (K, τ). One subject of our research concerning the
dynamics of term structure movements is implied volatility as measured by the
German VDAX subindices available from Deutsche Börse AG
(http://deutsche-boerse.com/).

These indices, representing different option maturities, measure the volatility
implied in ATM European calls and puts. The VDAX calculations are based on
the BS formula. For a detailed discussion of VDAX calculations we refer to
Redelberger (1994). Term structures for ATM DAX options can be derived
from VDAX subindices for any given trading day since 18 March 1996. On
that day, EUREX started trading in long term options. Shapes of the term
structure on subsequent trading days are shown in Figure 6.3.

If we compare the volatility structure of 27 October 1997 (blue line) with that

of 28 October 1997 (green line), we easily recognize an overnight upward shift

in the levels of implied volatilities. Moreover, it displays an inversion as short

term volatilities are higher than long term ones. Only a couple of weeks later,

on 17 November (cyan line) and 20 November (red line), the term structure

had normalized at lower levels and showed its typical shape again. Evidently,

during the market tumble in fall 1997, the ATM term structure shifted and

changed its shape considerably over time.

[Figure: VDAX term structures, percentage (0.25 to 0.45) plotted against subindex 1 to 8, for the four trading days discussed above]

Figure 6.3. Term Structure of VDAX Subindices.
XFGiv03.xpl


As an option approaches its expiry date T, the time to maturity τ = T − t
declines with each trading day. Hence, in order to analyze the dynamic struc-
ture of implied volatility surfaces, we need to calibrate τ as time t passes. To
accomplish this calibration we linearly interpolate between neighboring VDAX
subindices. For example, to recover the implied volatility σ̂ at a fixed τ, we use
the subindices at τ− and τ+ where τ− ≤ τ ≤ τ+, i.e. we compute σ̂t(τ) with
fixed maturities of τ ∈ {30, 60, 90, 180, 270, 360, 540, 720} calendar days by

σ̂t(τ) = σ̂t(τ−) [1 − (τ − τ−)/(τ+ − τ−)] + σ̂t(τ+) (τ − τ−)/(τ+ − τ−).    (6.5)

Proceeding this way we obtain 8 time series of fixed maturity. Each time series
is a weighted average of two neighboring maturities and contains n = 440 data
points of implied volatilities.
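The interpolation (6.5) is straightforward to code. Below is a Python sketch; the subindex maturities used in the test call are made up for illustration and are not the actual VDAX grid:

```python
import bisect

def interp_iv(tau, maturities, sigmas):
    """Linearly interpolate implied volatility at a fixed maturity tau
    between the two bracketing subindices, as in (6.5)."""
    if tau <= maturities[0]:
        return sigmas[0]
    if tau >= maturities[-1]:
        return sigmas[-1]
    j = bisect.bisect_right(maturities, tau)       # first maturity > tau
    t_minus, t_plus = maturities[j - 1], maturities[j]
    w = (tau - t_minus) / (t_plus - t_minus)       # interpolation weight
    return sigmas[j - 1] * (1.0 - w) + sigmas[j] * w
```

For example, halfway between maturities with implied volatilities 0.30 and 0.40, the interpolated value is 0.35.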

6.3.2 PCA of ATM Implied Volatilities

The data set for the analysis of variations of implied volatilities is a collection

of term structures as given in Figure 6.3. In order to identify common factors

we use Principal Components Analysis (PCA). Changes in the term structure

can be decomposed by PCA into a set of orthogonal factors.

Define Xc = (xtj) as the T × J matrix of centered first differences of ATM
implied volatilities for subindex j = 1, ..., J at times t = 1, ..., T, where in our
case J = 8 and T = 440. The sample covariance matrix S = T⁻¹ Xc'Xc can be
decomposed by the spectral decomposition into

S = Γ Λ Γ',    (6.6)

where Γ is the 8 × 8 matrix of eigenvectors and Λ the 8 × 8 diagonal matrix
of eigenvalues λj of S. Time series of principal components are obtained by
Y = Xc Γ.

A measure of how well the PCs explain variation of the underlying data is given
by the relative proportion ζl of the sum of the first l eigenvalues to the overall
sum of eigenvalues:

ζl = Σ_{j=1}^{l} λj / Σ_{j=1}^{8} λj = Σ_{j=1}^{l} Var(yj) / Σ_{j=1}^{8} Var(yj)   for l < 8.    (6.7)


The quantlet XFGiv04.xpl uses the VDAX data to estimate the proportion
of variance ζl explained by the first l PCs.

XFGiv04.xpl

As the result shows, the first PC captures around 70% of the total data vari-
ability. The second PC captures an additional 13%. The third PC explains a
considerably smaller amount of total variation. Thus, the two dominant PCs
together explain around 83% of the total variance in implied ATM volatilities
for DAX options. Taking only the first two factors, i.e. those capturing around
83% of the variability in the data, the time series of implied ATM volatilities
can therefore be represented by a factor model of reduced dimension:

xtj = γj1 yt1 + γj2 yt2 + εtj,    (6.8)

where γjk denotes the jk-th element of Γ = (γjk), ytk is taken from the matrix
of principal components Y, and εtj denotes white noise. The γjk are in fact
the sensitivities of the implied volatility time series to shocks on the principal
components. As is evident from Figure 6.4, a shock on the first factor tends
to affect all maturities in a similar manner, causing a non-parallel shift of the
term structure. A shock in the second factor has a strong negative impact on
the front maturity but a positive impact on the longer ones, thus causing a
change of curvature in the term structure of implied volatilities.
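The mechanics of the two-factor representation (6.8) can be illustrated on synthetic data: if the data are exactly of rank two, the first two principal components reproduce them without error. This is a toy check on made-up numbers, not the VDAX data:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(8, 2))       # loadings of 8 maturities on 2 factors
F = rng.normal(size=(440, 2))     # 440 observations of the 2 factors
X = F @ B.T                       # synthetic rank-2 data matrix (T x J)
X -= X.mean(axis=0)               # center, as in the text

S = X.T @ X / X.shape[0]          # sample covariance matrix
lam, G = np.linalg.eigh(S)
G = G[:, ::-1]                    # eigenvectors, largest eigenvalue first
Y = X @ G                         # principal component series
X2 = Y[:, :2] @ G[:, :2].T        # two-factor reconstruction as in (6.8)
err = np.abs(X - X2).max()        # zero (up to rounding) for rank-2 data
```

On real VDAX differences the residual εtj is of course not zero, but small when the first two eigenvalues dominate.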

6.3.3 Common PCA of the Implied Volatility Surface

Implied volatilities calculated for different strikes and maturities constitute a
surface. The principal component analysis as outlined above does not take this
structure into account, since only one slice of the surface, the term structure of
ATM options, is used. In this section we present a technique that allows us to
analyze several slices of the surface simultaneously. Since options naturally fall
into maturity groups, one could analyze several slices of the surface taken at
different maturities. What we propose to do is a principal component analysis
of these different groups. Enlarging the basis of the analysis will lead to a better
understanding of the dynamics of the surface. Moreover, from a statistical
point of view, estimating PCs simultaneously in different groups will result in
a joint dimension reducing transformation. This multi-group PCA, the so-called
common principal components analysis (CPCA), yields the joint eigenstructure
across groups.


[Figure: factor loadings (between −0.5 and 1) plotted against subindex 1 to 8]

Figure 6.4. Factor Loadings of the First and Second PC.
XFGiv05.xpl

In addition to the assumptions of traditional PCA, the basic assumption of
CPCA is that the space spanned by the eigenvectors is identical across several
groups, whereas the variances associated with the components are allowed to
vary. This approach permits us to analyze a p-variate random vector in k
groups, say k maturities of implied volatilities, jointly, Fengler, Härdle and
Villa (2001).

More formally, the hypothesis of common principal components can be stated
in the following way, Flury (1988):

H_CPC : Ψi = Γ Λi Γ',   i = 1, ..., k,

where the Ψi are positive definite p × p population covariance matrices,
Γ = (γ1, ..., γp) is an orthogonal p × p transformation matrix and Λi =
diag(λi1, ..., λip) is the matrix of eigenvalues. Moreover, assume that all λij
are distinct.

Let S be the (unbiased) sample covariance matrix of an underlying p-variate
normal distribution Np(µ, Ψ) with sample size n. Then the distribution of nS
is Wishart, Muirhead (1982), p. 86, with n − 1 degrees of freedom:

nS ∼ Wp(Ψ, n − 1).

The density of the Wishart distribution is given by

f(S) = [((n−1)/2)^{p(n−1)/2} / (Γp((n−1)/2) |Ψ|^{(n−1)/2})] exp{tr(−((n−1)/2) Ψ⁻¹ S)} |S|^{(n−p−2)/2},    (6.9)

where

Γp(x) = π^{p(p−1)/4} Π_{i=1}^{p} Γ(x − (i−1)/2)

is the multivariate gamma function, Muirhead (1982). Hence for given Wishart
matrices Si with sample sizes ni the likelihood function can be written as

L(Ψ1, ..., Ψk) = C Π_{i=1}^{k} exp{tr(−(1/2)(ni − 1) Ψi⁻¹ Si)} |Ψi|^{−(ni−1)/2},    (6.10)

where C is a constant not depending on the parameters Ψi. Maximizing the
likelihood is equivalent to minimizing the function

g(Ψ1, ..., Ψk) = Σ_{i=1}^{k} (ni − 1) [ln |Ψi| + tr(Ψi⁻¹ Si)].

Assuming that H_CPC holds, i.e. replacing Ψi by Γ Λi Γ', one gets after some
manipulations

g(Γ, Λ1, ..., Λk) = Σ_{i=1}^{k} (ni − 1) Σ_{j=1}^{p} [ln λij + γj' Si γj / λij].

As we know from Section 6.3.2, the vectors γj in Γ need to be orthogonal. We
achieve orthogonality of the vectors γj via the Lagrange method, i.e. we
impose the p constraints γj'γj = 1 using the Lagrange multipliers µj, and the
remaining p(p − 1)/2 constraints γh'γj = 0 for h ≠ j using the multipliers
µhj. This yields

g*(Γ, Λ1, ..., Λk) = g(·) − Σ_{j=1}^{p} µj (γj'γj − 1) − 2 Σ_{h<j} µhj γh'γj.

Taking partial derivatives with respect to all λim and γm, it can be shown
(Flury, 1988) that the solution of the CPC model is given by the generalized
system of characteristic equations

γm' [Σ_{i=1}^{k} (ni − 1) (λim − λij)/(λim λij) Si] γj = 0,   m, j = 1, ..., p, m ≠ j.    (6.11)

This has to be solved using

λim = γm' Si γm,   i = 1, ..., k, m = 1, ..., p,

under the constraints

γm'γj = 0 for m ≠ j, and γm'γj = 1 for m = j.

Flury (1988) proves existence and uniqueness of the maximum of the likelihood

function, and Flury and Gautschi (1988) provide a numerical algorithm, which

has been implemented in the quantlet CPC.

CPC-Analysis

A number of quantlets are designed for an analysis of covariance matrices,

amongst them the CPC quantlet:

{B, betaerror, lambda, lambdaerror, psi} = CPC(A,N)

estimates a common principal components model.


As input variables we need a p × p × k array A, produced from the k p × p co-
variance matrices, and a k × 1 vector of weights N. The weights are the numbers
of observations in each of the k groups.

The quantlet produces the p × p common transformation matrix B, and the
p × p matrix of asymptotic standard errors betaerror. Next, eigenvalues
lambda and corresponding standard errors lambdaerror are given in a vector
array of dimension 1 × p × k. Estimated population covariances psi are also
provided. As an example we provide the data sets volsurf01, volsurf02 and
volsurf03 that have been used in Fengler, Härdle and Villa (2001) to estimate
common principal components for the implied volatility surfaces of the DAX in
1999. The data has been generated by smoothing a surface day by day as
spelled out in Section 6.2.2 on a specified grid. Next, the estimated grid points
have been grouped into maturities of τ = 1, τ = 2 and τ = 3 months and
transformed into a vector of time series of the "smile", i.e. each element of the
vector belongs to a distinct moneyness ranging from 0.85 to 1.10.

XFGiv06.xpl

We plot the first three eigenvectors in a parallel coordinate plot in Figure 6.5.
The basic structure of the first three eigenvectors is not altered: we find a
shift, a slope and a twist structure. This structure is common to all maturity
groups, i.e. when exploiting PCA as a dimension reducing tool, the same
transformation applies to each group! However, comparing the size of the
eigenvalues among groups, i.e. ZZ.lambda, we find that the variability is
dropping across groups as we move from the front contracts to the long term
contracts.

Before drawing conclusions we should convince ourselves that the CPC model
is truly a good description of the data. This can be done by using a likelihood
ratio test. The likelihood ratio statistic for comparing a restricted (the CPC)
model against the unrestricted model (the model where all covariances are
treated separately) is given by

T(n1, n2, ..., nk) = −2 ln [L(Ψ1, ..., Ψk) / L(S1, ..., Sk)].

Inserting from the likelihood function we find that this is equivalent to

T(n1, n2, ..., nk) = Σ_{i=1}^{k} (ni − 1) ln(det Ψi / det Si),
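The statistic is simple to evaluate once a common Γ is available. The NumPy sketch below approximates Γ by the eigenvectors of the pooled covariance matrix instead of running the full Flury-Gautschi maximum likelihood algorithm (which the quantlet CPC implements), so it illustrates only the computation of T, not the estimation step:

```python
import numpy as np

def cpc_lr_statistic(S_list, n_list):
    """T = sum_i (n_i - 1) ln(det Psi_i / det S_i), where Psi_i keeps the
    (approximate) common eigenvectors and the group-wise eigenvalues
    lambda_ij = g_j' S_i g_j."""
    S_pool = sum((n - 1) * S for S, n in zip(S_list, n_list))
    S_pool /= sum(n - 1 for n in n_list)
    _, G = np.linalg.eigh(S_pool)            # candidate common eigenvectors
    T = 0.0
    for S, n in zip(S_list, n_list):
        lam = np.diag(G.T @ S @ G)           # lambda_ij for this group
        Psi = G @ np.diag(lam) @ G.T         # restricted covariance estimate
        _, ld_psi = np.linalg.slogdet(Psi)
        _, ld_s = np.linalg.slogdet(S)
        T += (n - 1) * (ld_psi - ld_s)
    return T
```

Since det Ψi ≥ det Si for any orthogonal Γ (Hadamard's inequality in the rotated basis), T is nonnegative, and it is zero when the groups already share their eigenvectors.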


[Figure: parallel coordinate plot of the first three eigenvectors, loadings between −0.5 and 0.5 against index 1 to 6]

Figure 6.5. Factor loadings of the first (blue), the second (green), and
the third PC (red).
XFGiv06.xpl

which is χ² distributed, as min(ni) tends to infinity, with

k {(1/2) p(p−1) + p} − {(1/2) p(p−1) + kp} = (1/2)(k − 1) p(p−1)

degrees of freedom. In the quantlet XFGiv06.xpl this test is included.

XFGiv06.xpl

The calculations yield T(n1, n2, ..., nk) = 31.836, which corresponds to the
p-value p = 0.37512 for the χ²(30) distribution. Hence we cannot reject the
CPC model against the unrestricted model, where PCA is applied to each
maturity separately.

explained by the ¬rst l principle components: again a few number of factors, up

to three at the most, is capable of capturing a large amount of total variability

present in the data. Since the model now captures variability both in strike

and maturity dimension, this can be a suitable starting point for a simpli¬ed

6.3 Dynamic Analysis 143

VaR calculation for delta-gamma neutral option portfolios using Monte Carlo

methods, and is hence a valuable insight for risk management.