16.3 Pricing options with simulation techniques – a guideline

16.3.3 Restrictions for the payoff functions

Monte Carlo based option pricing methods are not applicable to all types of payoff functions. There is one theoretical and there are some practical limitations of the method. Let us look at the theoretical limitation first.
In the derivation of the probabilistic error bounds we have to assume the existence of the payoff variance with respect to the risk-neutral distribution. It follows that we are no longer able to derive the presented error bounds if this variance does not exist. However, for most payoff functions occurring in practice and for the Black–Scholes model, the difference between the payoff samples and the price can be bounded from above by a polynomial function of the difference between the underlying estimate and the start price, for which the integral with respect to the risk-neutral density exists. Consequently, the variance of these payoff functions must be finite.
Much more important than the theoretical limitation are the practical limitations. In the first place, Monte Carlo simulation relies on the quality of the pseudo-random number generator used to generate the uniformly distributed samples. All generators in use are widely tested, but it cannot be guaranteed that the samples generated for a specific price estimation exhibit all assumed statistical properties. It is also important to know that every generator reproduces the same samples after a cycle of fixed length. For example, if we use the random number generator of Park and Miller with Bays–Durham shuffle, we will get the same samples after approximately 10^8 method invocations.
Another possible error source is the transformation function which converts the uniformly distributed random numbers into normally distributed ones. The approximation to the inverse of the normal distribution function used in our case has a maximum absolute error of 10^{-15}, which is sufficiently accurate.
The most problematic cases for Monte Carlo based option pricing are options for which the probability of a strictly positive payoff is very small. We then get either price and variance estimates based on a few positive samples, if we happen to hit the payoff region, or a zero payoff and zero variance, if this improbable event does not occur. In both cases, however, the relative error is very high. More accurate results can be obtained by applying importance sampling to such options.
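To illustrate this remedy, here is a self-contained Python sketch (our own illustration, not part of the chapter's code; the strike, the shift of 3.0 and all other parameter values are arbitrary). A deep out-of-the-money call under Black–Scholes is priced once with plain Monte Carlo and once by sampling from a shifted normal and reweighting with the likelihood ratio:

```python
import numpy as np

def bs_call_mc(s0, k, r, sigma, t, n, shift=0.0, seed=0):
    """Monte Carlo price of a European call under Black-Scholes.
    `shift` moves the mean of the sampled normals (importance sampling);
    the likelihood ratio compensates so the estimate stays unbiased."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n) + shift
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.maximum(st - k, 0.0)
    weight = np.exp(-shift * z + 0.5 * shift**2)  # dN(0,1)/dN(shift,1) at z
    disc = np.exp(-r * t) * payoff * weight
    return disc.mean(), disc.std(ddof=1) / np.sqrt(n)

# deep out-of-the-money call: plain MC versus importance sampling
plain = bs_call_mc(100.0, 180.0, 0.05, 0.2, 1.0, 100_000)
shifted = bs_call_mc(100.0, 180.0, 0.05, 0.2, 1.0, 100_000, shift=3.0)
```

With the sampling mean shifted into the payoff region, far more samples produce a strictly positive payoff, and the standard error drops sharply compared with the plain estimate.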
366 16 Simulation based Option Pricing

17 Nonparametric Estimators of GARCH Processes

Jürgen Franke, Harriet Holzberger and Marlene Müller

The generalized ARCH or GARCH model (Bollerslev, 1986) is quite popular as a basis for analyzing the risk of financial investments. Examples are the estimation of Value-at-Risk (VaR) or the expected shortfall from a time series of log returns. In practice, a GARCH process of order (1,1) often provides a reasonable description of the data. In the following, we restrict ourselves to that case.
We call {ε_t} a (strong) GARCH(1,1) process if

  ε_t = σ_t Z_t ,
  σ_t^2 = ω + α ε_{t-1}^2 + β σ_{t-1}^2          (17.1)

with independent identically distributed innovations Z_t having mean 0 and variance 1. A special case is the integrated GARCH model of order (1,1), or IGARCH(1,1) model, where α + β = 1 and, frequently, ω = 0 is assumed, i.e.

  σ_t^2 = α ε_{t-1}^2 + (1 − α) σ_{t-1}^2 .

This model forms the basis of the J.P. Morgan RiskMetrics VaR analysis using exponential moving averages (Franke, Härdle and Hafner, 2001, Chapter 15). The general GARCH(1,1) process has finite variance σ^2 = ω/(1 − α − β) if α + β < 1, and it is strictly stationary if E{log(α Z_t^2 + β)} < 0. See Franke, Härdle and Hafner (2001, Chapter 12) for these and further properties of GARCH processes.
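These properties are easy to check by simulation. The following Python sketch (our illustration, not part of the chapter's XploRe code; the parameter values are arbitrary) generates a stationary GARCH(1,1) path and compares the sample variance with ω/(1 − α − β):

```python
import numpy as np

def simulate_garch11(omega, alpha, beta, n, seed=0):
    """Simulate a strong GARCH(1,1) process with standard normal innovations,
    started in its stationary variance omega / (1 - alpha - beta)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    eps = np.empty(n)
    sigma2 = omega / (1.0 - alpha - beta)
    for t in range(n):
        eps[t] = np.sqrt(sigma2) * z[t]
        sigma2 = omega + alpha * eps[t] ** 2 + beta * sigma2
    return eps

eps = simulate_garch11(omega=0.1, alpha=0.15, beta=0.8, n=200_000)
theoretical = 0.1 / (1.0 - 0.15 - 0.8)  # stationary variance, here 2.0
```

Since α + β = 0.95 < 1, the sample variance of the simulated path settles close to the theoretical value.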
In spite of its popularity, the GARCH model has one drawback: its symmetric dependence on past returns does not allow for including the leverage effect into the model, i.e. the frequently made observation that large negative returns of stock prices have a greater impact on volatility than large positive returns. Therefore, various parametric modifications like the exponential GARCH (EGARCH) or the threshold GARCH (TGARCH) model have been proposed to account for a possible asymmetric dependence of volatility on returns. The TGARCH model, for example, introduces an additional term into the volatility equation allowing for an increased effect of negative ε_{t-1} on σ_t^2:

  ε_t = σ_t Z_t ,    σ_t^2 = ω + α ε_{t-1}^2 + α⁻ ε_{t-1}^2 · 1(ε_{t-1} < 0) + β σ_{t-1}^2 .

To develop an exploratory tool which allows us to study the nonlinear dependence of the squared volatility σ_t^2 on past returns and volatilities, we introduce the nonparametric GARCH(1,1) model

  ε_t = σ_t Z_t ,
  σ_t^2 = g(ε_{t-1}, σ_{t-1}^2)          (17.2)

where the innovations Z_t are chosen as above. We consider a nonparametric estimator for the function g based on a particular form of local smoothing. Such an estimate may be used to decide whether a particular parametric nonlinear GARCH model like the TGARCH is appropriate.
We remark that the volatility function g cannot be estimated by common kernel or local polynomial smoothers, as the volatilities σ_t are not observed directly. Bühlmann and McNeil (1999) have considered an iterative algorithm. First, they fit a common parametric GARCH(1,1) model to the data, from which they obtain sample volatilities to replace the unobservable true volatilities. Then they use a common bivariate kernel estimate to estimate g from ε_t and the sample volatilities σ_t^2. Using this preliminary estimate for g they obtain new sample volatilities, which are used for a further kernel estimate of g. This procedure is iterated several times until the estimate stabilizes.
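In compressed form, the iteration can be sketched in Python (our own illustration: where Bühlmann and McNeil start from a fitted parametric GARCH model and proper kernel routines, the sketch below starts from a constant variance proxy and uses an ad-hoc Gaussian product kernel). It exploits that E(ε_t^2 | ε_{t-1}, σ_{t-1}^2) = g(ε_{t-1}, σ_{t-1}^2), so g can be estimated by smoothing the squared returns on the lagged pairs:

```python
import numpy as np

def nw2d(x1, x2, y, h):
    """Bivariate Nadaraya-Watson smoother of y on (x1, x2), evaluated
    at the data points themselves (Gaussian product kernel)."""
    d1 = (x1[:, None] - x1[None, :]) / h
    d2 = (x2[:, None] - x2[None, :]) / h
    w = np.exp(-0.5 * (d1**2 + d2**2))
    return (w @ y) / w.sum(axis=1)

def iterate_volatility(eps, n_iter=3, h=0.5):
    """Buehlmann/McNeil-type iteration: alternate between smoothing
    eps_t^2 on (eps_{t-1}, sigma2_{t-1}) and updating the volatilities."""
    s2 = np.full(eps.size, eps.var())          # crude initial volatility proxy
    for _ in range(n_iter):
        g_hat = nw2d(eps[:-1], s2[:-1], eps[1:] ** 2, h)
        s2 = np.concatenate(([s2[0]], g_hat))  # updated sample volatilities
    return s2

rng = np.random.default_rng(1)
eps = rng.standard_normal(500)                 # placeholder for observed returns
s2_hat = iterate_volatility(eps)
```

Each pass replaces the volatility proxies by the smoothed conditional second moments; in practice the loop is run until successive volatility estimates change little.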
Alternatively, one could try to fit a nonparametric ARCH model of high order to the data to get some first approximations σ̂_t^2 of σ_t^2 and then use a local linear estimate based on the approximate relation

  σ̂_t^2 ≈ g(ε_{t-1}, σ̂_{t-1}^2).

However, a completely nonparametric approach is not feasible, as high-order nonparametric ARCH models based on σ_t^2 = g(ε_{t-1}, . . . , ε_{t-p}) cannot be reliably estimated by local smoothers due to the sparseness of the data in high dimensions. Therefore, one would have to impose restrictions like additivity on the ARCH model, i.e. σ_t^2 = g_1(ε_{t-1}) + . . . + g_p(ε_{t-p}), or even use a parametric ARCH model σ_t^2 = ω + α_1 ε_{t-1}^2 + . . . + α_p ε_{t-p}^2. The alternative we consider here is a direct approach to estimating g, based on deconvolution kernel estimates, which does not require prior estimates σ̂_t^2.

17.1 Deconvolution density and regression estimates
Deconvolution kernel estimates have been described and extensively discussed in the context of estimating a probability density from independent and identically distributed data (Carroll and Hall, 1988; Stefansky and Carroll, 1990). To explain the basic idea behind this type of estimate, we first consider the deconvolution problem. Let ξ_1, . . . , ξ_N be independent and identically distributed real random variables with density p_ξ(x) which we want to estimate. We do not, however, observe the ξ_k directly but only with additive errors η_1, . . . , η_N. Let us assume that the η_k as well are independent and identically distributed with density p_η(x) and independent of the ξ_k. Hence, the available data are

  X_k = ξ_k + η_k ,   k = 1, . . . , N.

To be able to identify the distribution of the ξ_k from the errors η_k at all, we have to assume that p_η(x) is known. The density of the observations X_k is just the convolution of p_ξ with p_η:

  p_X(x) = (p_ξ ∗ p_η)(x) .

We can therefore try to estimate p_X(x) by a common kernel estimate and extract an estimate for p_ξ(x) out of it. This kind of deconvolution operation is preferably performed in the frequency domain, i.e. after applying a Fourier transform. As the subsequent inverse Fourier transform already includes a smoothing part, we can start with the empirical distribution of X_1, . . . , X_N instead of a smoothed version of it. In detail, we calculate the Fourier transform or characteristic function of the empirical law of X_1, . . . , X_N, i.e. the sample characteristic function

  φ̂_X(ω) = (1/N) Σ_{k=1}^{N} e^{iωX_k}.

Let

  φ_η(ω) = E(e^{iωη_k}) = ∫_{-∞}^{∞} e^{iωu} p_η(u) du

denote the (known) characteristic function of the η_k. Furthermore, let K be a common kernel function, i.e. a nonnegative continuous function which is symmetric around 0 and integrates to 1: ∫ K(u) du = 1, and let

  φ_K(ω) = ∫ e^{iωu} K(u) du

be its Fourier transform. Then, the deconvolution kernel density estimate of p_ξ(x) is defined as

  p̂_h(x) = (1/2π) ∫_{-∞}^{∞} e^{-iωx} φ_K(ωh) φ̂_X(ω)/φ_η(ω) dω .

The name of this estimate is explained by the fact that it may be written equivalently as a kernel density estimate

  p̂_h(x) = (1/(Nh)) Σ_{k=1}^{N} K^h((x − X_k)/h)

with deconvolution kernel

  K^h(u) = (1/2π) ∫_{-∞}^{∞} e^{-iωu} φ_K(ω)/φ_η(ω/h) dω

depending explicitly on the smoothing parameter h. Based on this kernel estimate for probability densities, Fan and Truong (1993) considered the analogous deconvolution kernel regression estimate defined as

  m̂_h(x) = (1/(Nh)) Σ_{k=1}^{N} Y_k K^h((x − X_k)/h) / p̂_h(x).

This Nadaraya–Watson-type estimate is consistent for the regression function m(x) in an errors-in-variables regression model

  Y_k = m(ξ_k) + W_k ,   X_k = ξ_k + η_k ,   k = 1, . . . , N,

where W_1, . . . , W_N are independent identically distributed zero-mean random variables independent of the X_k, ξ_k, η_k, which are chosen as above. The X_k, Y_k are observed, and the probability density of the η_k has to be known.
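The whole construction fits into a few lines of Python (our own illustration, not the chapter's implementation: the Laplace error distribution, the kernel transform φ_K(u) = (1 − u²)³ on [−1, 1], the sample size and the bandwidth are all ad-hoc choices). The signal ξ is standard normal, observed with known Laplace noise:

```python
import numpy as np

def deconv_density(x_obs, grid, h, phi_eta):
    """Deconvolution kernel density estimate: invert the Fourier transform of
    phi_K(omega*h) * phi_X_hat(omega) / phi_eta(omega) by a Riemann sum."""
    w = np.linspace(-1.0 / h, 1.0 / h, 1601)               # support of phi_K(omega*h)
    dw = w[1] - w[0]
    phi_k = np.clip(1.0 - (w * h) ** 2, 0.0, None) ** 3    # kernel FT, support [-1, 1]
    phi_x = np.exp(1j * np.outer(w, x_obs)).mean(axis=1)   # empirical char. function
    integrand = phi_k * phi_x / phi_eta(w)
    est = (np.exp(-1j * np.outer(grid, w)) * integrand).sum(axis=1) * dw
    return est.real / (2.0 * np.pi)

rng = np.random.default_rng(2)
xi = rng.standard_normal(2000)              # unobserved signal, density N(0,1)
eta = rng.laplace(scale=0.3, size=2000)     # known additive noise
x = xi + eta                                # observed data X_k = xi_k + eta_k
grid = np.linspace(-4.0, 4.0, 81)
p_hat = deconv_density(x, grid, h=0.25,
                       phi_eta=lambda w: 1.0 / (1.0 + (0.3 * w) ** 2))
```

The estimate recovers the N(0,1) shape of p_ξ rather than the more spread-out density of the observations; the characteristic function 1/(1 + b²ω²) of the Laplace errors never vanishes, which is what keeps the division well defined.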

17.2 Nonparametric ARMA Estimates
GARCH processes are closely related to ARMA processes. If we square a GARCH(1,1) process {ε_t} given by (17.1), then we get an ARMA(1,1) process

  ε_t^2 = ω + (α + β) ε_{t-1}^2 − β ζ_{t-1} + ζ_t ,

where ζ_t = σ_t^2 (Z_t^2 − 1) is white noise, i.e. a sequence of pairwise uncorrelated random variables with mean 0. Therefore, as an intermediate step towards GARCH processes, we study the nonparametric estimation of ARMA models, which is more closely related to the errors-in-variables regression of Fan and Truong (1993). A linear ARMA(1,1) model with non-vanishing mean ω is given by

  X_{t+1} = ω + a X_t + b e_t + e_{t+1}

with zero-mean white noise e_t. We consider the nonparametric generalization of this model,

  X_{t+1} = f(X_t, e_t) + e_{t+1}          (17.3)

for some unknown function f(x, u) which is monotone in the second argument u. Assume we have a sample X_1, . . . , X_{N+1} observed from (17.3). If f does not depend on the second argument, (17.3) reduces to a nonparametric autoregression of order 1,

  X_{t+1} = f(X_t) + e_{t+1} ,

and the autoregression function f(x) may be estimated by common kernel estimates or local polynomials. There is an extensive literature on that type of estimation problem, and we refer to the review paper of Härdle, Lütkepohl and Chen (1997). In the general case of (17.3) we again have the problem of estimating a function of (partially) non-observable variables. As f also depends on the observable time series X_t, the basic idea of constructing a nonparametric estimate of f(x, u) is to combine a common kernel smoothing in the first variable x with a deconvolution kernel smoothing in the second variable u. To define the estimate we have to introduce some notation and assumptions.
We assume that the innovations e_t have a known probability density p_e with distribution function P_e(v) = ∫_{-∞}^{v} p_e(u) du, and with Fourier transform φ_e(ω) ≠ 0 for all ω satisfying

  |φ_e(ω)| ≥ c · |ω|^{β_0} exp(−|ω|^β / γ)   for |ω| → ∞

for some constants c, β, γ > 0 and β_0. The nonlinear ARMA process (17.3) has to be stationary and strongly mixing with exponentially decaying mixing coefficients. Let p(x) denote the stationary marginal density of X_t.
The smoothing kernel K^x in x-direction is a common kernel function with compact support [−1, +1] satisfying 0 ≤ K^x(u) ≤ K^x(0) for all u. The kernel K which is used in the deconvolution part has a Fourier transform φ_K(ω) which is symmetric around 0, has compact support [−1, +1] and satisfies some smoothness conditions (Holzberger, 2001). We have chosen a kernel with the following Fourier transform:

  φ_K(u) = 1 − u^2                                   for |u| ≤ 0.5,
  φ_K(u) = 0.75 − (|u| − 0.5) − (|u| − 0.5)^2
           − 220 (|u| − 0.5)^4 + 1136 (|u| − 0.5)^5
           − 1968 (|u| − 0.5)^6 + 1152 (|u| − 0.5)^7   for 0.5 ≤ |u| ≤ 1.

For convenience, we choose the smoothing kernel K^x proportional to that function: K^x(u) ∝ φ_K(u). The kernel K^x is hence an Epanechnikov kernel with modified boundaries.
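The two polynomial branches fit together smoothly; a quick Python check (our own sketch, not part of the chapter) confirms that they agree at |u| = 0.5 and that the transform vanishes at |u| = 1, so φ_K is continuous with compact support:

```python
def phi_k(u):
    """Fourier transform of the deconvolution kernel K used in the text."""
    a = abs(u)
    if a <= 0.5:
        return 1.0 - a * a
    if a <= 1.0:
        s = a - 0.5
        return (0.75 - s - s**2 - 220.0 * s**4 + 1136.0 * s**5
                - 1968.0 * s**6 + 1152.0 * s**7)
    return 0.0          # compact support [-1, 1]

values = [phi_k(u) for u in (0.0, 0.5, 1.0)]
```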
Let b = C/N^{1/5} be the bandwidth for smoothing in x-direction, and let h = A/log(N) be the smoothing parameter for deconvolution in u-direction, where A > π/2 and C > 0 are some constants. Then,

  p̂_b(x) = (1/((N+1) b)) Σ_{t=1}^{N+1} K^x((x − X_t)/b)

is a common Rosenblatt–Parzen density estimate for the stationary density p(x).
Let q(u) denote the stationary density of the random variable f(X_t, e_t), and let q(u|x) be its conditional density given X_t = x. An estimate of the latter is given by

  q̂_{b,h}(u|x) = (1/(N h b)) Σ_{t=1}^{N} K^h((u − X_{t+1})/h) K^x((x − X_t)/b) / p̂_b(x)          (17.4)

where the deconvolution kernel K^h is

  K^h(u) = (1/2π) ∫_{-∞}^{∞} e^{-iωu} φ_K(ω)/φ_e(ω/h) dω .

In (17.4) we use deconvolution smoothing in the direction of the second argument of f(x, u), using only pairs of observations (X_t, X_{t+1}) for which |x − X_t| ≤ b, i.e. X_t ≈ x. By integration, we get the conditional distribution function of f(X_t, e_t) given X_t = x,

  Q(v|x) = P(f(x, e_t) ≤ v | X_t = x) = ∫_{-∞}^{v} q(u|x) du ,

and its estimate

  Q̂_{b,h}(v|x) = ∫_{-a_N}^{v} q̂_{b,h}(u|x) du / ∫_{-a_N}^{a_N} q̂_{b,h}(u|x) du

for some a_N ∼ N^{1/6} as N → ∞. For technical reasons we have to cut off the density estimate in regions where it is still unreliable for the given N. The particular choice of the denominator guarantees that Q̂_{b,h}(a_N|x) = 1, as it must be since Q(v|x) is a cumulative distribution function.
To estimate the unconditional density q(u) of f(X_t, e_t) = X_{t+1} − e_{t+1}, we use a standard deconvolution density estimate with smoothing parameter h* = A*/log(N):

  q̂_{h*}(u) = (1/(N h*)) Σ_{t=1}^{N} K^{h*}((u − X_t)/h*) .

Let p_e(u|x) be the conditional density of e_t given X_t = x, and let P_e(v|x) = ∫_{-∞}^{v} p_e(u|x) du be the corresponding conditional distribution function. An estimate of it is given by

  P̂_{e,h*}(v|x) = ∫_{-a_N}^{v} q̂_{h*}(x − u) p_e(u) du / ∫_{-a_N}^{a_N} q̂_{h*}(x − u) p_e(u) du

where again we truncate at a_N ∼ N^{1/6}.
To obtain the ARMA function f, we can now compare Q(v|x) and P_e(v|x). In practice this means relating Q̂_{b,h}(v|x) and P̂_{e,h*}(v|x). The nonparametric estimate of the ARMA function f(x, v), depending on the smoothing parameters b, h and h*, is hence given by

  f̂_{b,h,h*}(x, v) = Q̂_{b,h}^{-1}( P̂_{e,h*}(v|x) | x)

if f(x, v) is increasing in the second argument, and

  f̂_{b,h,h*}(x, v) = Q̂_{b,h}^{-1}( 1 − P̂_{e,h*}(v|x) | x)

if f(x, v) is a decreasing function of v for any x. Here Q̂_{b,h}^{-1}(·|x) denotes the inverse of the function Q̂_{b,h}(·|x) for fixed x. Holzberger (2001) has shown that f̂_{b,h,h*}(x, v) is a consistent estimate of f(x, v) under suitable assumptions and has given upper bounds on the rates of bias and variance of the estimate. We remark that the assumption of monotonicity on f is not a strong restriction. In the application to GARCH processes which we have in mind, it seems intuitively reasonable that the volatility of today is an increasing function of the volatility of yesterday, which translates into an ARMA function f which is decreasing in the second argument.
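The inversion step itself is elementary once the two estimated distribution functions are tabulated on a grid; a Python sketch (our illustration with toy inputs, not the actual quantlet) using linear interpolation:

```python
import numpy as np

def arma_function_estimate(v_grid, q_cdf, pe_cdf, increasing=False):
    """f_hat(x, .) on v_grid from tabulated Q(.|x) and P_e(.|x):
    f = Q^{-1}(P_e(v|x)|x), or Q^{-1}(1 - P_e(v|x)|x) for decreasing f.
    Inversion of the monotone Q is done by swapping abscissa and ordinate."""
    p = pe_cdf if increasing else 1.0 - pe_cdf
    return np.interp(p, q_cdf, v_grid)

# toy check: with Q = P_e and a symmetric distribution, 1 - P_e(v) = Q(-v),
# so the decreasing branch must return f(x, v) = -v
v = np.linspace(-3.0, 3.0, 601)
cdf = 0.5 * (1.0 + np.tanh(v))        # smooth, strictly increasing toy CDF
f_hat = arma_function_estimate(v, cdf, cdf, increasing=False)
```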
Let us illustrate the steps for estimating a nonparametric ARMA process. First we generate time series data and plot X_{t+1} versus X_t.

library("times")
n=1000
x=genarma(0.7,0.7,normal(n))   // ARMA(1,1) with AR and MA coefficients 0.7
                                             XFGnpg01.xpl
The result is shown in Figure 17.1. The scatterplot in the right panel of Figure 17.1 defines the region where we can estimate the function f(x, v).

Figure 17.1. ARMA(1,1) process: time series (left panel) and scatterplot of X(t+1) versus X(t) (right panel).

To compare the deconvolution density estimate with the density of f(X_t, e_t) we now use our own routine (myarma) for generating ARMA(1,1) data from a known function (f):

proc(f)=f(x,e,c)
  f=c[1]+c[2]*x+c[3]*e   // linear ARMA function f(x,e) = omega + a*x + b*e
endp

proc(x,f)=myarma(n,c)
  x=matrix(n+1)-1        // vector of n+1 zeros
  f=x
  e=normal(n+1)
  t=1
  while (t<n+1)
    t=t+1
    f[t]=f(x[t-1],e[t-1],c)
    x[t]=f[t]+e[t]
  endo
  x=x[2:(n+1)]           // discard the starting value
  f=f[2:(n+1)]
endp

n=1000
{x,f}=myarma(n,0|0.7|0.7)

h=0.4
library("smoother")
dh=dcdenest(x,h)   // deconvolution estimate
fh=denest(f,3*h)   // kernel estimate
                                             XFGnpg02.xpl

Figure 17.2 shows both density estimates. Note that the smoothing parameter (bandwidth h) differs between the two estimates since different kernel functions are used.

Figure 17.2. Deconvolution density estimate (solid) and kernel density estimate (dashed) of the known mean function of an ARMA(1,1) process.

f = nparmaest (x {,h {,g {,N {,R } } } } )
    estimates a nonparametric ARMA process

The function nparmaest computes the function f(x, v) for an ARMA process according to the algorithm described above. Let us first consider an ARMA(1,1) with f(x, v) = 0.3 + 0.6x + 1.6v, i.e.

  X_t = 0.3 + 0.6 X_{t-1} + 1.6 e_{t-1} + e_t .

Hence, we use myarma with c=0.3|0.6|1.6 and call the estimation routine by

f=nparmaest(x)
                                             XFGnpg03.xpl

The optional parameters N and R are set to 50 and 250, respectively. N contains the grid size used for x and v; R is an additional grid size for internal computations. The resulting function is therefore computed on a grid of size N × N. For comparison, we also calculate the true function on the same grid. Figure 17.3 shows the resulting graphs. The bandwidths h (corresponding to h*) for the one-dimensional deconvolution kernel estimator q̂ and g (corresponding to h and b) for the two-dimensional one are chosen according to the rates derived in Holzberger (2001).

Figure 17.3. Nonparametric estimation of a (linear) ARMA process. True vs. estimated function and data.

As a second example, consider an ARMA(1,1) with a truly nonlinear function f(x, v) = −2.8 + 8 F(6v), i.e.

  X_t = −2.8 + 8 F(6 e_{t-1}) + e_t ,

where F denotes the sigmoid function F(u) = (1 + e^{−u})^{−1}. In contrast to the previous example, this function obviously does not depend on the first argument. The code above has to be modified by using

proc(f)=f(x,e,c)
  f=c[1]+c[2]/(1+exp(-c[3]*e))   // f(x,e) = -2.8 + 8 F(6e)
endp
c=-2.8|8|6
                                             XFGnpg04.xpl
The resulting graphs for this nonlinear function are shown in Figure 17.4. The estimated surface obviously varies only in the second dimension and follows the s-shaped underlying true function. However, the sample size used and the internal grid sizes of the estimation procedure allow only for a rather imprecise reconstruction of the tails of the surface.

Figure 17.4. Nonparametric estimation of a (nonlinear) ARMA process. True vs. estimated function and data.
17.3 Nonparametric GARCH Estimates

In the following, we consider nonparametric GARCH(1,1) models which depend symmetrically on the last observation:

  ε_t = σ_t Z_t ,          (17.5)
  σ_t^2 = g(ε_{t-1}^2, σ_{t-1}^2) .

Here, g denotes a smooth unknown function and the innovations Z_t are chosen as in Section 17.2. This model covers the usual parametric GARCH(1,1) process (17.1) but does not allow for representing a leverage effect like the TGARCH(1,1) process. We show now how to transform (17.5) into an ARMA model. First, we define

  X_t = log(ε_t^2),   e_t = log(Z_t^2).

By (17.5), we now have

  X_{t+1} = log(ε_{t+1}^2) = log σ_{t+1}^2 + e_{t+1}
          = log g(ε_t^2, σ_t^2) + e_{t+1}
          = log g_1(log(ε_t^2), log(σ_t^2)) + e_{t+1}
          = log g_1(X_t, X_t − e_t) + e_{t+1}
          = f(X_t, e_t) + e_{t+1}

with

  g_1(x, u) = g(e^x, e^u),   f(x, v) = log g_1(x, x − v).
Now we can estimate the ARMA function f(x, v) from the logarithmic squared data X_t = log(ε_t^2) as in Section 17.2, using the nonparametric ARMA estimate f̂_{b,h,h*}(x, v) of (17.3). Reverting the transformations, we get

  ĝ_1(x, u) = exp{f̂_{b,h,h*}(x, x − u)},   ĝ_{b,h,h*}(y, z) = ĝ_1(log y, log z)

or, combining both equations,

  ĝ_{b,h,h*}(y, z) = exp{ f̂_{b,h,h*}(log y, log(y/z)) } ,   y, z > 0,

as an estimate of the symmetric GARCH function g(y, z).
We have to be aware, of course, that the density p_e used in the deconvolution part of estimating f(x, v) is the probability density of the e_t = log Z_t^2, i.e. if p_z(z) denotes the density of Z_t,

  p_e(u) = (1/2) e^{u/2} { p_z(e^{u/2}) + p_z(−e^{u/2}) } .
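For standard normal innovations, p_e can be written down in closed form; the following Python lines (our own check, not part of the chapter) evaluate it and verify that it integrates to one and peaks at u = 0:

```python
import numpy as np

def p_e(u):
    """Density of e = log(Z^2) for Z ~ N(0,1):
    p_e(u) = 0.5 * exp(u/2) * (p_z(exp(u/2)) + p_z(-exp(u/2)))."""
    z = np.exp(u / 2.0)
    p_z = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)   # standard normal density
    return 0.5 * z * 2.0 * p_z                         # p_z is symmetric

u = np.linspace(-30.0, 6.0, 20001)
du = u[1] - u[0]
mass = float(np.sum(p_e(u)) * du)       # should be close to 1
mean = float(np.sum(u * p_e(u)) * du)   # E log(Z^2) = -gamma - log 2 = -1.2704
```

The long left tail of this density (the log chi-square with one degree of freedom) is what makes the deconvolution step for GARCH processes numerically delicate.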
If ε_t is a common parametric GARCH(1,1) process of form (17.1), then g(y, z) = ω + α y + β z, and the corresponding ARMA function is f(x, v) = log(ω + α e^x + β e^{x−v}). This is a decreasing function in v, which seems to be a reasonable assumption in the general case too, corresponding to the assumption that the present volatility is an increasing function of past volatilities.
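This correspondence between g and f is easy to verify numerically; the following Python lines (our illustration, with arbitrary parameter values) check that exp{f(log y, log(y/z))} returns g(y, z) and that f is indeed decreasing in v:

```python
import numpy as np

omega, alpha, beta = 0.01, 0.6, 0.2        # arbitrary GARCH(1,1) parameters

def g(y, z):
    """GARCH volatility function g(y, z) = omega + alpha*y + beta*z."""
    return omega + alpha * y + beta * z

def f(x, v):
    """Corresponding ARMA function f(x, v) = log(omega + alpha*e^x + beta*e^(x-v))."""
    return np.log(omega + alpha * np.exp(x) + beta * np.exp(x - v))

y, z = 0.5, 1.5                            # y = eps_{t-1}^2, z = sigma_{t-1}^2
g_roundtrip = float(np.exp(f(np.log(y), np.log(y / z))))
```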
As an example, we simulate a GARCH process from

proc(f)=gf(x,e,c)
  f=c[1]+c[2]*x+c[3]*e   // g(y,z) = omega + alpha*y + beta*z
endp

proc(e,s2)=mygarch(n,c)
  e=zeros(n+1)
  s2=e
  z=normal(n+1)
  t=1
  while (t<n+1)
    t=t+1
    s2[t]=gf(e[t-1]^2,s2[t-1],c)   // sigma_t^2 = g(eps_{t-1}^2, sigma_{t-1}^2)
    e[t]=sqrt(s2[t]).*z[t]
  endo
  e=e[2:(n+1)]
  s2=s2[2:(n+1)]
endp

f = npgarchest (x {,h {,g {,N {,R } } } } )
    estimates a nonparametric GARCH process

The function npgarchest computes the functions f (x, v) and g(y, z) for
a GARCH process using the techniques described above. Consider a
GARCH(1,1) with
g(y, z) = 0.01 + 0.6 y + 0.2 z.
Hence, we use

n=1000
c=0.01|0.6|0.2
{e,s2}=mygarch(n,c)

and call the estimation routine by

g=npgarchest(e)
XFGnpg05.xpl
Figure 17.5 shows the resulting graph for the estimator of f(x, v) together with the true function (decreasing in v) and the data (X_{t+1} versus X_t). As in the ARMA case, the estimated function shows the underlying structure only for a part of the range of the true function.
Finally, we remark how the general case of nonparametric GARCH models could be estimated. Consider

  ε_t = σ_t Z_t ,          (17.6)
  σ_t^2 = g(ε_{t-1}, σ_{t-1}^2)

where σ_t^2 may depend asymmetrically on ε_{t-1}. We write

  g(x, z) = g⁺(x^2, z) 1(x ≥ 0) + g⁻(x^2, z) 1(x < 0).

As g⁺, g⁻ depend only on the squared arguments, we can estimate them as before. Again, consider X_t = log(ε_t^2), e_t = log(Z_t^2). Let N₊ be the number of all t ≤ N with ε_t ≥ 0, and N₋ = N − N₊. Then, we set

  p̂_b⁺(x) = (1/(N₊ b)) Σ_{t=1}^{N} K^x((x − X_t)/b) 1(ε_t ≥ 0)

  q̂⁺_{b,h}(u|x) = (1/(N₊ h b)) Σ_{t=1}^{N} K^h((u − X_{t+1})/h) K^x((x − X_t)/b) 1(ε_t ≥ 0) / p̂_b⁺(x)

  q̂⁺_{h*}(u) = (1/(N₊ h*)) Σ_{t=1}^{N} K^{h*}((u − X_t)/h*) 1(ε_t ≥ 0).

Figure 17.5. Nonparametric estimation of f(x, v) for a (linear) GARCH process. True vs. estimated function, data X_t = log(ε_t^2).

Q̂⁺(v|x) and P̂_{e,h*}(v|x) are defined as in Section 17.2, with q̂⁺_{b,h} and p̂_b⁺ replacing q̂_{b,h} and p̂_b, and, using both estimates of conditional distribution functions, we get an ARMA function estimate f̂⁺_{b,h,h*}(x, v). Reversing the transformation from GARCH to ARMA, we get as the estimate of g⁺(x^2, z)

  ĝ⁺_{b,h,h*}(x^2, z) = exp{ f̂⁺_{b,h,h*}(log x^2, log(x^2/z)) } .

The estimate for g⁻(x^2, z) is defined analogously,

  ĝ⁻_{b,h,h*}(x^2, z) = exp{ f̂⁻_{b,h,h*}(log x^2, log(x^2/z)) } ,

where, in the derivation of f̂⁻_{b,h,h*}, N₊ and 1(ε_t ≥ 0) are replaced by N₋ and 1(ε_t < 0).

Bibliography

Bollerslev, T. P. (1986). Generalized autoregressive conditional heteroscedasticity, Journal of Econometrics 31: 307–327.
Bühlmann, P. and McNeil, A. J. (1999). Nonparametric GARCH-models, Manuscript, ETH Zürich, http://www.math.ethz.ch/~mcneil.
Carroll, R. J. and Hall, P. (1988). Optimal rates of convergence for deconvoluting a density, J. Amer. Statist. Assoc. 83: 1184–1186.
Fan, J. and Truong, Y. K. (1993). Nonparametric regression with errors-in-variables, Ann. Statist. 19: 1257–1272.
Franke, J., Härdle, W. and Hafner, Ch. (2001). Statistik der Finanzmärkte, Springer, ebook: http://www.quantlet.de.
Härdle, W., Lütkepohl, H. and Chen, R. (1997). A review of nonparametric time series analysis, International Statistical Review 65: 49–72.
Holzberger, H. (2001). Nonparametric Estimation of Nonlinear ARMA and GARCH-processes, PhD Thesis, University of Kaiserslautern.
J.P. Morgan. RiskMetrics, http://www.jpmorgan.com.
Stefansky, L. A. and Carroll, R. J. (1990). Deconvoluting kernel density estimators, Statistics 21: 169–184.
18 Net Based Spreadsheets in Quantitative Finance

Gökhan Aydınlı

18.1 Introduction
Modern risk management requires accurate, fast and flexible computing environments. To meet this demand, a vast number of software packages has evolved over the last decade, accompanied by a huge variety of programming languages, interfaces, configuration and output possibilities. One solution especially designed for large scale explorative data analysis is XploRe, a procedural programming environment equipped with a modern client/server architecture (Härdle et al. (1999) and Härdle et al. (2000)).
As far as flexibility in the sense of openness and accuracy is concerned, XploRe has a lot to offer that a risk analyst may wish for. On the other hand, its matrix oriented programming language (Fickel, 2001) might be seen as a drawback in comparison to other computational approaches. In terms of learning curve effects and total cost of ownership an alternative solution seems desirable.
This chapter will present and demonstrate the net based spreadsheet solution MD*ReX, designed for modern statistical and econometric analysis. We concentrate on examples of Value-at-Risk (VaR) with copulas and means of quantifying implied volatilities presented in Chapters 2 and 6. All results will be shown in Microsoft Excel.
Recent research suggests that the rationale for spreadsheet based statistical computing is manifold. Ours is to bring state-of-the-art quantitative methods to the fingertips of spreadsheet users. Throughout this chapter we will give a short introduction to the underlying technology, briefly explain the usage of the aforementioned spreadsheet solution and provide applications of this tool.

18.2 Client/Server based Statistical Computing
While the power of computing equipment increased exponentially, statistical algorithms yielded higher efficiency in terms of computing time and accuracy. Meanwhile the development of high capacity network architectures had another positive impact on this trend, especially the establishment of the world wide web. Consequently a vast number of researchers and programmers in computational statistics, as well as institutions like commercial banks, insurers and corporations, spent much effort to utilize this evolution for their field of research. One outcome has been the technological philosophy of client/server based statistical computing: a decentralized combination of methods, users and providers of statistical knowledge.
Our understanding of client/server based statistical computing is that there exists a formal relationship between user, provider and vendor of statistical methodology. An easy to grasp example is a telephone call. The caller (in our case the user demanding statistical methods and/or advice) calls (connects via TCP/IP enabled networks) someone (a high-performance server/vendor of statistical information and methods) who serves his call (the requested calculation is done/information is displayed in an HTML browser, etc.). This client/server understanding is an approach to gain scalability of computational tasks, resource shifting of processing power and decentralization of methods.
There are numerous ways of implementing client/server based statistics; among others, Common Gateway Interfaces (CGI), JavaScript, Java applets and plug-ins are the most commonly used techniques. The technology behind the XploRe client/server architecture is thoroughly explained in Kleinow and Lehmann (2002). While that solution is applet based, the spreadsheet client presented here has an add-in character: the MD*ReX client is a software tool nested within a spreadsheet application. In both cases the communication technique relies on the protocol stack MD*CRYPT. For the spreadsheet solution the Java based MD*CRYPT had to be modified. As Microsoft does not natively support Java for its Office suite, MD*CRYPT has been implemented as a dynamic link library to utilize its interface for Office applications like Excel. The technical aspects and the design philosophy of the MD*ReX client are discussed in detail in Aydınlı et al. (2002).

Since their п¬Ѓrst appearance in the late 1970s spreadsheets gained a remarkable
popularity in business as well as research and education. They are the most
common software managers, researchers and traders utilize for data analysis,
quantitative modelling and decision support.
Przasnyski and Seal (1996) п¬Ѓnd that most of the time series modeling done in
suggests that those users have become so proп¬Ѓcient with spreadsheets that they
are reluctant to adopt other software solutions. Not even a higher suitability
for speciп¬Ѓc applications is appealing then (Chan and Storey, 1996). An analysis
our perception of the statistical software market. A stunning majority of users
are using Excel for statistical analysis (Sofyan and Werwatz, 2001).
A major difference between a spreadsheet application and statistical programming languages is the interaction model. The "direct manipulation interaction model" of spreadsheets enables statistical computations e.g. by drag and drop (Neuwirth and Baier, 2001). In the cell based framework of spreadsheets, the direct aspect of interaction means that manipulations in one cell immediately affect the content of another cell. Of course this is only the case if the cells concerned are interconnected by appropriate cell functions and references. Especially in business applications, the immediate visibility of numerical changes when cell values are modified is an appreciated feature for decision making based on different scenarios.
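This propagation of changes through interconnected cells can be sketched in a few lines of Python. This is a toy model of the recalculation idea, not MD*ReX or Excel code: each cell holds either a constant or a formula over other cells, and reading a cell re-evaluates its formula on demand.

```python
# Toy spreadsheet: cells hold either constants or formulas (callables).
# Reading a cell re-evaluates its formula, so a change in one cell is
# immediately visible in every cell that references it.

class Sheet:
    def __init__(self):
        self.cells = {}

    def set(self, name, value):
        """Store a constant or a formula (a callable taking the sheet)."""
        self.cells[name] = value

    def get(self, name):
        v = self.cells[name]
        return v(self) if callable(v) else v

s = Sheet()
s.set("A1", 100.0)                                # position value
s.set("A2", 0.05)                                 # loss fraction
s.set("B1", lambda sh: sh.get("A1") * sh.get("A2"))

print(s.get("B1"))  # 5.0
s.set("A1", 200.0)  # modify one cell ...
print(s.get("B1"))  # ... the dependent cell reflects it: 10.0
```

Real spreadsheets track the dependency graph and recompute eagerly rather than on read, which is what makes the numerical effect of a change visible immediately.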
Our approach is based on the philosophy of bringing two ideals together: on the one hand an accurate and reliable statistical engine (represented by the XploRe Quantlet Server, XQS) and on the other a user friendly and intuitive graphical user interface like Excel. The numerical impreciseness of Excel has been discussed exhaustively in the literature; as a starting point the reader is referred to e.g. McCullough and Wilson (1999). The development of MD*ReX is guided by the principles of usability and flexibility. Hence we try to offer different modes of usage for varying user groups: a dialogue based and more "Windows" like appearance for the novice user of statistical software, and a "raw" mode where the spreadsheet application merely functions as a data
388 18 Net Based Spreadsheets in Quantitative Finance

18.4 Using MD*ReX

We will demonstrate the usage of MD*ReX in the context of quantitative financial modelling. In order to use MD*ReX a working installation of Excel 9.x is required. Furthermore the installation routine will set up a Java runtime environment; for details refer to http://md-rex.com/.
After successfully installing the client it can be used in two ways: for on-demand usage MD*ReX can be accessed via the Start → Programs shortcut in Windows, or, if permanent usage in Excel is desired, the Add-in can be installed from the Extras → Add-in Manager dialogue in Excel. The rex.xla file is located under

%Systemroot%\%Program Files%\MDTech\ReX\ReX.

In the latter case the client is available every time Excel is started. Either way, the client can be accessed via the Excel menu bar (Figure 18.1) and exposes its full functionality after clicking on the ReX menu item.

Figure 18.1. Excel and MD*ReX menus

In order to work with MD*ReX the user first has to connect to a running XploRe Quantlet Server. This can either be a local server, which by default is triggered if MD*ReX is started via the Programs shortcut, or any other XQS somewhere on the Internet; evidently the latter option requires an Internet connection. The Connect dialogue offers some pre-configured XQSs. After the connection has been successfully established, the user can start right away to work with MD*ReX.
In contrast to XploRe, the user has the option to perform statistical analyses by using the implemented dialogues, e.g. the Time Series dialogue in Figure 18.3. Via this dialogue a researcher is able to conduct standard time series analysis techniques as well as more refined nonlinear approaches such as ARCH tests based on neural networks. These interfaces encapsulate XploRe code while using the standard Excel GUI elements, hence undue learning overhead is minimized. Alternatively one can write XploRe commands directly into the spreadsheet cells and then run them either via the menu button or via the context menu, by right clicking the highlighted cell range (Figure 18.2). Furthermore it is now much easier to get data to the XploRe Quantlet Server: simply marking an appropriate data range within Excel and clicking the Put button is enough to transfer any kind of numerical data to the server. We will show this in the next section. A further virtue of using a spreadsheet application is the commonly built-in database connectivity. Excel, for example, allows for various data retrieval mechanisms via the Open Database Connectivity (ODBC) standard, which is supported by most of the database systems available today.


18.5 Applications

In the following we want to show how MD*ReX may be used to analyze the VaR using copulas, as described in Chapter 2 of this book. Subsequently we will demonstrate the analysis of implied volatility shown in Chapter 6. All examples are taken from this book and have been modified accordingly. The aim is to make the reader aware of the need for this modification and to give an idea of how this client may be used in other fields of statistical research as well.

Figure 18.3. MD*ReX Time Series Dialogue

We have deliberately omitted the demonstration of dialogues and menu bars, as it is fairly straightforward to develop this kind of interface on your own. Some knowledge of the macro language Visual Basic for Applications (VBA) integrated into Excel and an understanding of the XploRe Quantlets are sufficient to create custom dialogues and menus for this client; no further knowledge of the XploRe Quantlet syntax is required. An example is the aforementioned Time Series dialogue, Figure 18.3.

18.5.1 Value at Risk Calculations with Copulas

The quantification of the VaR of a portfolio of financial instruments has become a constituent part of risk management. Put simply, the VaR is a quantile of the probability distribution of the value-loss of a portfolio (Chapter 2). Aggregating individual risk positions is a major concern for risk analysts. The µ-σ approach of portfolio management measures risk in terms of the variance, implying a "Gaussian world" (Bouyé et al., 2001). Traditional VaR methods are hence based on the normality assumption for the distribution of financial returns. Empirical evidence, however, suggests a high probability of extreme returns ("fat tails") and more mass around the center of the distribution (leptokurtosis), violating the principles of the Gaussian world (Rachev, 2001).
In conjunction with the methodology of VaR, these problems seem to be tractable with copulas. In a multivariate model setup a copula function is used to couple a joint distribution to its marginal distributions. The copula approach addresses two major issues: substituting the dependency structure, i.e. the correlations, and substituting the marginal distribution assumption, i.e. relaxing the Gaussian distribution assumption. With MD*ReX the user is now able to conduct copula based VaR calculations with Excel, making use of Excel's powerful graphical capabilities and its intuitive interface.
The steps necessary are as follows:

1. Get the according Quantlets into Excel,
2. run them from there,
3. obtain the result,
4. create a plot of the result.
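The surfaces plotted in step 4 are grids of copula values C(u, v) on the unit square. As an illustration of the kind of matrix the Quantlets return, here is a Python sketch; the Clayton copula and the grid below are stand-ins for illustration, the actual copula families C4 to C7 and their parameters are those of Chapter 2.

```python
def clayton_copula(u, v, theta):
    """Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta)."""
    return (u ** (-theta) + v ** (-theta) - 1.0) ** (-1.0 / theta)

# N x N grid on the unit square, analogous to the numerical output
# (rexcuv) that is pasted into the worksheet and charted as a surface.
N, theta = 30, 2.0
grid = [i / (N + 1) for i in range(1, N + 1)]
Z = [[clayton_copula(u, v, theta) for u in grid] for v in grid]
print(len(Z), len(Z[0]))  # 30 30
```

Pasting such a matrix into a cell range and inserting a 3D surface chart reproduces the kind of picture shown in Figure 18.4.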

The first step is rather trivial: copy and paste the example Quantlet XFGrexcopula1.xpl from any text editor or browser into an Excel worksheet. Next mark the range containing the Quantlet and apply the Run command. Then switch to any empty cell of the worksheet and click Get to receive the numerical output rexcuv. Generating a three-dimensional Excel graph from this output, one obtains an illustration as displayed in Figure 18.4. The corresponding Quantlets are XFGrexcopula1.xpl, XFGrexcopula2.xpl, XFGrexcopula3.xpl and XFGrexcopula4.xpl. They work literally the same way as the XFGaccvar1.xpl Quantlet.

Figure 18.4. Copulas: C4(u, v) for θ = 2 and N = 30 (upper left), C5(u, v) for θ = 3 and N = 21 (upper right), C6(u, v) for θ = 4 and N = 30 (lower left), C7(u, v) for θ = 5 and N = 30 (lower right)

Of course steps 1-4 could easily be wrapped into a VBA macro with suitable dialogues. This is exactly what we refer to as the change from the raw mode of MD*ReX into the "Windows" like embedded mode. Embedded here means that XploRe commands (Quantlets) are integrated into the macro language of Excel.
The Monte Carlo simulations are obtained correspondingly and are depicted in Figure 18.5. The corresponding Quantlet is XFGrexmccopula.xpl; it again functions analogously to XFGaccvar2.xpl. The graphical output is then constructed along the same lines: paste the corresponding results z11 through z22 into cell areas and let Excel draw a scatter plot.
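How such a Monte Carlo scatter can be produced is sketched below in Python. The Clayton copula sampler (via the standard gamma-frailty construction) and the normal marginals with standard deviations σ1, σ2 are stand-ins chosen for illustration; they only mirror the structure of the simulation, not the exact model inside XFGrexmccopula.xpl.

```python
import math
import random

def sample_clayton(theta, n, rng):
    """Draw n pairs (u, v) from a Clayton copula via the gamma frailty
    construction: V ~ Gamma(1/theta), U_i = (1 + E_i / V)^(-1/theta)."""
    out = []
    for _ in range(n):
        v = rng.gammavariate(1.0 / theta, 1.0)
        e1 = rng.expovariate(1.0)
        e2 = rng.expovariate(1.0)
        out.append(((1.0 + e1 / v) ** (-1.0 / theta),
                    (1.0 + e2 / v) ** (-1.0 / theta)))
    return out

def norm_ppf(p):
    # normal quantile by bisection on the normal CDF (via math.erf)
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = random.Random(42)
theta, sigma1, sigma2, n = 3.0, 1.0, 1.0, 10000
pairs = [(sigma1 * norm_ppf(u), sigma2 * norm_ppf(v))
         for u, v in sample_clayton(theta, n, rng)]
# 'pairs' plays the role of the z11 ... z22 scatter data of Figure 18.5
```

Pasted into two columns, such a sample yields a scatter plot of the kind shown in Figure 18.5.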

Figure 18.5. Monte Carlo Simulations for N = 10000 and σ1 = 1, σ2 = 1, θ = 3

18.5.2 Implied Volatility Measures

A basic risk measure in finance is volatility, which can be applied to a single asset or to a bundle of financial assets (i.e. a portfolio). Whereas the historic volatility simply measures past price movements, the implied volatility represents the market's perception of uncertainty. Implied volatility is a contemporaneous risk measure which is obtained by inverting an option pricing model, such as the Black-Scholes model, with respect to the volatility. The implied volatility can only be quantified if options are traded which have the asset or assets as an underlying (for example a stock index). The examples here are again taken from Chapter 6. The underlying data are VolaSurf02011997.xls, VolaSurf03011997.xls and volsurfdata2. The data have been kindly provided by MD*BASE; volsurfdata2 ships with any distribution of XploRe. In our case the reader has the choice of either importing the data into Excel via the data import utility or simply running the command data=read("volsurfdata2.dat"). For the other two data sets, utilizing the Put button is the easiest way to transfer the data to an XQS. All of these alternatives have the same effect, whereas the former is a good example of how the MD*ReX client exploits the various data retrieval methods of Excel.
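The inversion of the Black-Scholes formula for the volatility amounts to a one-dimensional root search, since the call price is monotone in σ. A Python sketch for a single quote follows; the numbers are made up for illustration, and a European call on a non-dividend-paying underlying is assumed (the Quantlets of Chapter 6 do this for a whole surface of quotes).

```python
import math

def bs_call(s, k, r, t, sigma):
    """Black-Scholes price of a European call on a non-dividend asset."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return s * phi(d1) - k * math.exp(-r * t) * phi(d2)

def implied_vol(price, s, k, r, t, lo=1e-4, hi=5.0, iters=100):
    """Invert bs_call for sigma by bisection (the price is monotone in sigma)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if bs_call(s, k, r, t, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# round-trip check with made-up market quantities
p = bs_call(100.0, 105.0, 0.03, 0.5, 0.25)
print(round(implied_vol(p, 100.0, 105.0, 0.03, 0.5), 4))  # 0.25
```

Applied to a grid of strikes and maturities, such an inversion produces exactly the surface matrices visualized in the figures below.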
The Quantlet XFGReXiv.xpl returns the data matrix for the implied volatility surfaces shown in Figures 18.6 through 18.8. Evidently the Quantlet has to be modified for the appropriate data set. In contrast to the above examples, where the Quantlets could be adopted without further modification, in this case we need some redesign of the XploRe code. This is achieved with suitable reshape operations on the output matrices. The graphical output is then obtained by arranging the two output vectors x2 and y2 and the output matrix z1.

Figure 18.6. DAX30 Implied Volatility, 02.01.1997
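The kind of rearrangement needed, turning two coordinate vectors and a flat list of surface values into a matrix that Excel can chart, can be sketched in Python. The names x2, y2, z1 follow the Quantlet output; the grids and values below are made up.

```python
# Arrange a flat list of surface values into a len(y2) x len(x2) matrix,
# the layout an Excel 3D chart expects (rows = maturities, cols = moneyness).
x2 = [0.8, 0.9, 1.0, 1.1]   # moneyness grid (made-up values)
y2 = [0.25, 0.5, 1.0]       # maturities in years (made-up values)
flat = list(range(12))      # stand-in for the flattened z1 output

z1 = [flat[i * len(x2):(i + 1) * len(x2)] for i in range(len(y2))]
print(z1)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
```

With x2 as the column labels, y2 as the row labels and z1 as the cell block, inserting a surface chart yields plots like Figures 18.6 to 18.8.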
The advantage of measuring implied volatilities is obviously an expressive visualization: the well known volatility smile and the corresponding term structure can be illustrated excellently in a movable cubic space. Furthermore this approach will enable real-time calculation of implied volatilities in future applications. Excel can be used as a data retrieval front end for real-time market data providers such as Datastream or Bloomberg. It is then conceivable to analyze tick data fed online into such a spreadsheet system in order to evaluate contemporaneous volatility surfaces.

Figure 18.7. DAX30 Implied Volatility, 03.01.1997

Figure 18.8. DAX30 Implied Volatility

Bibliography

Aydınlı, G., Härdle, W., Kleinow, T. and Sofyan, H. (2002). ReX: Linking XploRe to Excel, forthcoming in Computational Statistics Special Issue, Springer Verlag, Heidelberg.

Bouyé, E., Durrleman, V., Nikeghbali, A., Riboulet, G. and Roncalli, T. (2000). Copulas for Finance, A Reading Guide and some Applications, unpublished manuscript, Financial Econometrics Research Centre, City University Business School, London.

Chan, Y.E. and Storey, V.C. (1996). The use of spreadsheets in organizations: determinants and consequences, Information & Management, Vol. 31, pp. 119-134.

Fickel, N. (2001). Book Review: XploRe - Learning Guide, Allgemeines Statistisches Archiv, Vol. 85/1, p. 93.

Härdle, W., Klinke, S. and Müller, M. (1999). XploRe Learning Guide, Springer Verlag, Heidelberg.

Härdle, W., Hlavka, Z. and Klinke, S. (2000). XploRe Application Guide, Springer Verlag, Heidelberg.

Kleinow, T. and Lehmann, H. (2002). Client/Server based Statistical Computing, forthcoming in Computational Statistics Special Issue, Springer Verlag, Heidelberg.

McCullough, B.D. and Wilson, B. (1999). On the Accuracy of Statistical Procedures in Microsoft Excel, Computational Statistics & Data Analysis, Vol. 31, pp. 27-37.

Neuwirth, E. and Baier, T. (2001). Embedding R in Standard Software, and the other way round, DSC 2001 Proceedings of the 2nd International Workshop on Distributed Statistical Computing.

Przasnyski, L.L. and Seal, K.C. (1996). Spreadsheets and OR/MS models: an end-user perspective, Interfaces, Vol. 26, pp. 92-104.

Rachev, S. (2001). Company Overview, Bravo Consulting, http://www.bravo-group.com/inside/bravo-consulting company-overview.pdf.

Ragsdale, C.T. and Plane, D.R. (2000). On modeling time series data using spreadsheets, The International Journal of Management Science, Vol. 28, pp. 215-221.

Sofyan, H. and Werwatz, A. (2001). Analyzing XploRe Download Profiles with Intelligent Miner, Computational Statistics, Vol. 16, pp. 465-479.