

16.3.3 Restrictions for the payoff functions

Monte Carlo based option pricing methods are not applicable to all types of payoff functions. There is one theoretical limitation and there are several practical limitations of the method. Let us look at the theoretical limitation first.
In the derivation of the probabilistic error bounds we have to assume the existence of the payoff variance with respect to the risk neutral distribution. It follows that we are no longer able to derive the presented error bounds if this variance does not exist. However, for most payoff functions occurring in practice and for the Black-Scholes model, the difference between the payoff samples and the price can be bounded from above by a polynomial function of the difference between the underlying estimate and the start price, and the integral of this polynomial with respect to the risk neutral density exists. Consequently the variance of these payoff functions must be finite.
Much more important than the theoretical limitation are the practical limitations. In the first place, Monte Carlo simulation relies on the quality of the pseudo random number generator used to generate the uniformly distributed samples. All generators used are widely tested, but it cannot be guaranteed that the samples generated for a specific price estimation exhibit all assumed statistical properties. It is also important to know that all generators reproduce the same samples in a cycle of fixed length. For example, if we use the random number generator from Park and Miller with Bays-Durham shuffle, we will get the same samples after approximately $10^8$ method invocations.
Another possible error source is the transformation function which converts the uniformly distributed random numbers into normally distributed ones. The approximation to the inverse of the normal distribution used in our case has a maximum absolute error of $10^{-15}$, which is sufficiently accurate.
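As a small illustration of this step, the sketch below maps uniform pseudo random numbers to normal samples via the inverse normal CDF; SciPy's norm.ppf is used here as a stand-in for the particular inverse-normal approximation mentioned above, so the routine itself is an assumption on our part.

import numpy as np
from scipy.stats import norm

# inverse CDF (quantile) transform: uniform samples -> standard normal samples
rng = np.random.default_rng(0)       # any uniform generator could be plugged in here
u = rng.uniform(size=100_000)        # U(0,1) samples
z = norm.ppf(u)                      # inverse normal CDF maps them to N(0,1)
print(z.mean(), z.std())             # should be close to 0 and 1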
The most problematic cases for Monte Carlo based option pricing are options for which the probability of a strictly positive payoff is very small. We then either obtain price and variance estimates based on only a few positive samples, if we happen to hit the payoff region, or a zero price and variance estimate if this improbable event does not occur at all. In both cases the relative error is very high. More accurate results may be obtained by applying importance sampling to these options.
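To make the importance sampling remark concrete, the following sketch prices a deep out-of-the-money European call in a Black-Scholes world by shifting the drift of the normal driver and reweighting with the likelihood ratio. All parameter values are illustrative and not taken from the text.

import numpy as np

# deep out-of-the-money call: strike far above the spot, so positive payoffs are rare
S0, K, r, sigma, T, n = 100.0, 180.0, 0.05, 0.2, 1.0, 100_000
rng = np.random.default_rng(42)

def terminal_price(z):
    # risk neutral terminal stock price under Black-Scholes
    return S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)

# plain Monte Carlo: almost all samples give a zero payoff
z = rng.standard_normal(n)
plain = np.exp(-r * T) * np.maximum(terminal_price(z) - K, 0.0)

# importance sampling: shift the driver so that the terminal price is centred near the strike
mu = (np.log(K / S0) - (r - 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
z_is = rng.standard_normal(n) + mu
weights = np.exp(-mu * z_is + 0.5 * mu**2)   # likelihood ratio N(0,1) / N(mu,1)
is_est = np.exp(-r * T) * np.maximum(terminal_price(z_is) - K, 0.0) * weights

for name, est in [("plain MC", plain), ("importance sampling", is_est)]:
    print(name, est.mean(), est.std(ddof=1) / np.sqrt(n))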


17 Nonparametric Estimators of GARCH Processes

Jürgen Franke, Harriet Holzberger and Marlene Müller


The generalized ARCH or GARCH model (Bollerslev, 1986) is quite popular as a basis for analyzing the risk of financial investments. Examples are the estimation of value-at-risk (VaR) or the expected shortfall from a time series of log returns. In practice, a GARCH process of order (1,1) often provides a reasonable description of the data. In the following, we restrict ourselves to that case.
We call $\{\varepsilon_t\}$ a (strong) GARCH(1,1) process if
$$\varepsilon_t = \sigma_t Z_t, \qquad \sigma_t^2 = \omega + \alpha\, \varepsilon_{t-1}^2 + \beta\, \sigma_{t-1}^2 \qquad (17.1)$$
with independent identically distributed innovations $Z_t$ having mean 0 and variance 1. A special case is the integrated GARCH model of order (1,1) or IGARCH(1,1) model where $\alpha + \beta = 1$ and, frequently, $\omega = 0$ is assumed, i.e.
$$\sigma_t^2 = \alpha\, \varepsilon_{t-1}^2 + (1 - \alpha)\, \sigma_{t-1}^2 .$$

This model forms the basis for the J.P. Morgan RiskMetrics VaR analysis using exponential moving averages (Franke, Härdle and Hafner, 2001, Chapter 15). The general GARCH(1,1) process has finite variance $\sigma^2 = \omega/(1 - \alpha - \beta)$ if $\alpha + \beta < 1$, and it is strictly stationary if $\mathrm{E}\{\log(\alpha Z_t^2 + \beta)\} < 0$. See Franke, Härdle and Hafner (2001, Chapter 12) for these and further properties of GARCH processes.
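A few lines suffice to simulate a path of (17.1) and to check the two conditions just stated numerically; the parameter values are illustrative only.

import numpy as np

# simulate a strong GARCH(1,1) path, equation (17.1), with alpha + beta < 1
omega, alpha, beta, n = 0.01, 0.1, 0.85, 10_000
rng = np.random.default_rng(1)

z = rng.standard_normal(n)                 # i.i.d. innovations with mean 0, variance 1
sigma2 = np.empty(n)
eps = np.empty(n)
sigma2[0] = omega / (1.0 - alpha - beta)   # start at the stationary variance
eps[0] = np.sqrt(sigma2[0]) * z[0]
for t in range(1, n):
    sigma2[t] = omega + alpha * eps[t - 1]**2 + beta * sigma2[t - 1]
    eps[t] = np.sqrt(sigma2[t]) * z[t]

print("sample variance:", eps.var(), " theoretical:", omega / (1 - alpha - beta))
print("E[log(alpha Z^2 + beta)]:", np.mean(np.log(alpha * z**2 + beta)))   # < 0: strictly stationary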
In spite of its popularity, the GARCH model has one drawback: its symmetric dependence on past returns does not allow for including the leverage effect into the model, i.e. the frequently made observation that large negative returns of stock prices have a greater impact on volatility than large positive returns. Therefore, various parametric modifications like the exponential
GARCH (EGARCH) or the threshold GARCH (TGARCH) model have been proposed to account for possible asymmetric dependence of volatility on returns. The TGARCH model, for example, introduces an additional term into the volatility equation allowing for an increased effect of negative $\varepsilon_{t-1}$ on $\sigma_t^2$:
$$\varepsilon_t = \sigma_t Z_t, \qquad \sigma_t^2 = \omega + \alpha\, \varepsilon_{t-1}^2 + \alpha^-\, \varepsilon_{t-1}^2\, \mathbf{1}(\varepsilon_{t-1} < 0) + \beta\, \sigma_{t-1}^2 .$$

To develop an exploratory tool which allows us to study the nonlinear dependence of the squared volatility $\sigma_t^2$ on past returns and volatilities we introduce a nonparametric GARCH(1,1) model
$$\varepsilon_t = \sigma_t Z_t, \qquad \sigma_t^2 = g(\varepsilon_{t-1}, \sigma_{t-1}^2) \qquad (17.2)$$
where the innovations $Z_t$ are chosen as above. We consider a nonparametric estimator for the function g based on a particular form of local smoothing. Such an estimate may be used to decide if a particular parametric nonlinear GARCH model like the TGARCH is appropriate.
We remark that the volatility function g cannot be estimated by common kernel or local polynomial smoothers as the volatilities $\sigma_t$ are not observed directly. Bühlmann and McNeil (1999) have considered an iterative algorithm. First, they fit a common parametric GARCH(1,1) model to the data from which they get sample volatilities to replace the unobservable true volatilities. Then, they use a common bivariate kernel estimate to estimate g from $\varepsilon_t$ and the sample volatilities $\sigma_t^2$. Using this preliminary estimate for g they obtain new sample volatilities which are used for a further kernel estimate of g. This procedure is iterated several times until the estimate stabilizes.
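The iteration can be sketched as follows. The plain bivariate Nadaraya-Watson smoother and the way the initial parametric fit enters are our own simplifications, not the estimator actually used by Bühlmann and McNeil (1999); the starting parameters omega0, alpha0, beta0 are assumed to come from some parametric GARCH(1,1) fit.

import numpy as np

def nw2d(x1, x2, y, q1, q2, h1, h2):
    # bivariate Nadaraya-Watson estimate of E[y | x1 = q1, x2 = q2] with Gaussian kernels
    w = np.exp(-0.5 * ((q1 - x1) / h1) ** 2 - 0.5 * ((q2 - x2) / h2) ** 2)
    return np.sum(w * y) / np.sum(w)

def iterative_np_garch(eps, omega0, alpha0, beta0, n_iter=5, h1=0.5, h2=0.5):
    n = len(eps)
    # step 0: sample volatilities from the parametric GARCH(1,1) fit
    s2 = np.full(n, eps.var())
    for t in range(1, n):
        s2[t] = omega0 + alpha0 * eps[t - 1] ** 2 + beta0 * s2[t - 1]
    for _ in range(n_iter):
        # estimate g by smoothing sigma_t^2 on (eps_{t-1}, sigma_{t-1}^2) ...
        x1, x2, y = eps[:-1], s2[:-1], s2[1:]
        # ... and recompute the sample volatilities from the fitted surface
        s2_new = s2.copy()
        for t in range(1, n):
            s2_new[t] = nw2d(x1, x2, y, eps[t - 1], s2_new[t - 1], h1, h2)
        s2 = s2_new
    return s2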
Alternatively, one could try to fit a nonparametric ARCH model of high order to the data to get some first approximations $\hat\sigma_t^2$ to $\sigma_t^2$ and then use a local linear estimate based on the approximate relation
$$\hat\sigma_t^2 \approx g(\varepsilon_{t-1}, \hat\sigma_{t-1}^2).$$
However, a complete nonparametric approach is not feasible as high-order nonparametric ARCH models based on $\sigma_t^2 = g(\varepsilon_{t-1}, \ldots, \varepsilon_{t-p})$ cannot be reliably estimated by local smoothers due to the sparseness of the data in high dimensions. Therefore, one would have to employ restrictions like additivity to the ARCH model, i.e. $\sigma_t^2 = g_1(\varepsilon_{t-1}) + \ldots + g_p(\varepsilon_{t-p})$, or even use a parametric ARCH model $\sigma_t^2 = \omega + \alpha_1 \varepsilon_{t-1}^2 + \ldots + \alpha_p \varepsilon_{t-p}^2$. The alternative we consider here is a direct approach to estimating g based on deconvolution kernel estimates which does not require prior estimates $\hat\sigma_t^2$.
17.1 Deconvolution density and regression estimates
Deconvolution kernel estimates have been described and extensively discussed in the context of estimating a probability density from independent and identically distributed data (Carroll and Hall, 1988; Stefansky and Carroll, 1990). To explain the basic idea behind this type of estimates we consider the deconvolution problem first. Let $\xi_1, \ldots, \xi_N$ be independent and identically distributed real random variables with density $p_\xi(x)$ which we want to estimate. We do not, however, observe the $\xi_k$ directly but only with additive errors $\eta_1, \ldots, \eta_N$. Let us assume that the $\eta_k$ as well are independent and identically distributed with density $p_\eta(x)$ and independent of the $\xi_k$. Hence, the available data are
$$X_k = \xi_k + \eta_k, \qquad k = 1, \ldots, N.$$
To be able to identify the distribution of the $\xi_k$ from the errors $\eta_k$ at all, we have to assume that $p_\eta(x)$ is known. The density of the observations $X_k$ is just the convolution of $p_\xi$ with $p_\eta$:
$$p_X(x) = (p_\xi \ast p_\eta)(x) .$$

We can therefore try to estimate $p_X(x)$ by a common kernel estimate and extract an estimate for $p_\xi(x)$ out of it. This kind of deconvolution operation is preferably performed in the frequency domain, i.e. after applying a Fourier transform. As the subsequent inverse Fourier transform already includes a smoothing part we can start with the empirical distribution of $X_1, \ldots, X_N$ instead of a smoothed version of it. In detail, we calculate the Fourier transform or characteristic function of the empirical law of $X_1, \ldots, X_N$, i.e. the sample characteristic function
$$\phi_X(\omega) = \frac{1}{N} \sum_{k=1}^{N} e^{i\omega X_k} .$$
Let
$$\phi_\eta(\omega) = \mathrm{E}(e^{i\omega \eta_k}) = \int_{-\infty}^{\infty} e^{i\omega u}\, p_\eta(u)\, du$$
denote the (known) characteristic function of the $\eta_k$. Furthermore, let K be a common kernel function, i.e. a nonnegative continuous function which is symmetric around 0 and integrates to 1: $\int K(u)\, du = 1$, and let
$$\phi_K(\omega) = \int e^{i\omega u}\, K(u)\, du$$
be its Fourier transform. Then, the deconvolution kernel density estimate of $p_\xi(x)$ is defined as
$$\hat p_h(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-i\omega x}\, \phi_K(\omega h)\, \frac{\phi_X(\omega)}{\phi_\eta(\omega)}\, d\omega .$$
The name of this estimate is explained by the fact that it may be written equivalently as a kernel density estimate
$$\hat p_h(x) = \frac{1}{Nh} \sum_{k=1}^{N} K^h\!\left(\frac{x - X_k}{h}\right)$$
with deconvolution kernel
$$K^h(u) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-i\omega u}\, \frac{\phi_K(\omega)}{\phi_\eta(\omega/h)}\, d\omega$$

depending explicitly on the smoothing parameter h. Based on this kernel estimate for probability densities, Fan and Truong (1993) considered the analogous deconvolution kernel regression estimate defined as
$$\hat m_h(x) = \frac{1}{Nh} \sum_{k=1}^{N} Y_k\, K^h\!\left(\frac{x - X_k}{h}\right) \Big/ \hat p_h(x) .$$
This Nadaraya-Watson-type estimate is consistent for the regression function m(x) in an errors-in-variables regression model
$$Y_k = m(\xi_k) + W_k, \qquad X_k = \xi_k + \eta_k, \qquad k = 1, \ldots, N,$$
where $W_1, \ldots, W_N$ are independent identically distributed zero-mean random variables independent of the $X_k, \xi_k, \eta_k$ which are chosen as above. The $X_k, Y_k$ are observed, and the probability density of the $\eta_k$ has to be known.
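The Fourier inversion formulas above translate almost literally into code. The sketch below evaluates the deconvolution kernel K^h and the density estimate on a grid for data contaminated with Gaussian errors of known standard deviation; the choice phi_K(w) = (1 - w^2)^3 on [-1, 1] is our own illustrative pick and not the kernel used later in this chapter.

import numpy as np

def deconv_kernel(u, h, sigma_eta, n_grid=2001):
    # K^h(u) = (1 / 2 pi) int exp(-i w u) phi_K(w) / phi_eta(w / h) dw, real by symmetry
    w = np.linspace(-1.0, 1.0, n_grid)                 # phi_K has support [-1, 1]
    dw = w[1] - w[0]
    phi_K = (1.0 - w**2) ** 3
    phi_eta = np.exp(-0.5 * (sigma_eta * w / h) ** 2)  # ch.f. of N(0, sigma_eta^2)
    integrand = np.cos(np.outer(u, w)) * (phi_K / phi_eta)
    return integrand.sum(axis=1) * dw / (2.0 * np.pi)

def deconv_density(x_grid, X, h, sigma_eta):
    # p_h(x) = (1 / (N h)) sum_k K^h((x - X_k) / h)
    return np.array([deconv_kernel((x - X) / h, h, sigma_eta).mean() / h for x in x_grid])

# toy example: xi ~ N(0,1) observed with N(0, 0.5^2) measurement error
rng = np.random.default_rng(0)
xi = rng.standard_normal(500)
X = xi + 0.5 * rng.standard_normal(500)
p_hat = deconv_density(np.linspace(-3, 3, 61), X, h=0.4, sigma_eta=0.5)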


17.2 Nonparametric ARMA Estimates
GARCH processes are closely related to ARMA processes. If we square a GARCH(1,1) process $\{\varepsilon_t\}$ given by (17.1) then we get an ARMA(1,1) process
$$\varepsilon_t^2 = \omega + (\alpha + \beta)\, \varepsilon_{t-1}^2 - \beta\, \zeta_{t-1} + \zeta_t ,$$
where $\zeta_t = \sigma_t^2 (Z_t^2 - 1)$ is white noise, i.e. a sequence of pairwise uncorrelated random variables with mean 0. Therefore, we study as an intermediate step towards GARCH processes the nonparametric estimation for ARMA models, which is more closely related to the errors-in-variables regression of Fan and Truong (1993). A linear ARMA(1,1) model with non-vanishing mean $\omega$ is given by
$$X_{t+1} = \omega + a X_t + b e_t + e_{t+1}$$
with zero-mean white noise $e_t$. We consider the nonparametric generalization of this model
$$X_{t+1} = f(X_t, e_t) + e_{t+1} \qquad (17.3)$$
for some unknown function f(x, u) which is monotone in the second argument u. Assume we have a sample $X_1, \ldots, X_{N+1}$ observed from (17.3). If f does not depend on the second argument, (17.3) reduces to a nonparametric autoregression of order 1
$$X_{t+1} = f(X_t) + e_{t+1}$$
and the autoregression function f(x) may be estimated by common kernel estimates or local polynomials. There exists extensive literature about that type of estimation problem, and we refer to the review paper of Härdle, Lütkepohl and Chen (1997). In the general case of (17.3) we again have the problem of estimating a function of (partially) non-observable variables. As f also depends on the observable time series $X_t$, the basic idea of constructing a nonparametric estimate of f(x, u) is to combine a common kernel smoothing in the first variable x with a deconvolution kernel smoothing in the second variable u. To define the estimate we have to introduce some notation and assumptions.
We assume that the innovations $e_t$ have a known probability density $p_e$ with distribution function $P_e(v) = \int_{-\infty}^{v} p_e(u)\, du$ and with Fourier transform $\phi_e(\omega) \neq 0$ for all $\omega$ and
$$|\phi_e(\omega)| \ge c\, |\omega|^{\beta_0} \exp(-|\omega|^{\beta} / \gamma) \qquad \text{for } |\omega| \to \infty$$
for some constants $c, \beta, \gamma > 0$ and $\beta_0$. The nonlinear ARMA process (17.3) has to be stationary and strongly mixing with exponentially decaying mixing coefficients. Let p(x) denote the stationary marginal density of $X_t$.
The smoothing kernel $K^x$ in the x-direction is a common kernel function with compact support $[-1, +1]$ satisfying $0 \le K^x(u) \le K^x(0)$ for all u. The kernel K which is used in the deconvolution part has a Fourier transform $\phi_K(\omega)$ which is symmetric around 0, has compact support $[-1, +1]$ and satisfies some smoothness conditions (Holzberger, 2001). We have chosen a kernel with the following Fourier transform:
$$\phi_K(u) = 1 - u^2 \quad \text{for } |u| \le 0.5,$$
$$\phi_K(u) = 0.75 - (|u| - 0.5) - (|u| - 0.5)^2 - 220\,(|u| - 0.5)^4 + 1136\,(|u| - 0.5)^5 - 1968\,(|u| - 0.5)^6 + 1152\,(|u| - 0.5)^7 \quad \text{for } 0.5 \le |u| \le 1.$$
For convenience, we take the smoothing kernel $K^x$ to be proportional to that function: $K^x(u) \propto \phi_K(u)$. The kernel $K^x$ is hence an Epanechnikov kernel with modified boundaries.
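For reference, the piecewise Fourier transform above can be written down directly; the function below is merely a transcription of the displayed formula (outside [-1, 1] it is set to zero).

import numpy as np

def phi_K(u):
    # Fourier transform of the deconvolution kernel: two polynomial pieces on [0, 0.5] and [0.5, 1]
    u = np.abs(np.asarray(u, dtype=float))
    s = u - 0.5
    inner = 1.0 - u**2
    outer = (0.75 - s - s**2 - 220.0 * s**4 + 1136.0 * s**5
             - 1968.0 * s**6 + 1152.0 * s**7)
    return np.where(u <= 0.5, inner, np.where(u <= 1.0, outer, 0.0))

print(phi_K([0.0, 0.5, 1.0]))   # 1 at the origin, 0.75 where the pieces meet, 0 at the boundary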
Let $b = C / N^{1/5}$ be the bandwidth for smoothing in the x-direction, and let $h = A / \log(N)$ be the smoothing parameter for deconvolution in the u-direction, where $A > \pi/2$ and $C > 0$ are some constants. Then,
$$\hat p_b(x) = \frac{1}{(N+1)\,b} \sum_{t=1}^{N+1} K^x\!\left(\frac{x - X_t}{b}\right)$$
is a common Rosenblatt-Parzen density estimate for the stationary density p(x).
Let q(u) denote the stationary density of the random variable $f(X_t, e_t)$, and let q(u|x) be its conditional density given $X_t = x$. An estimate of the latter is given by
$$\hat q_{b,h}(u|x) = \frac{1}{Nhb} \sum_{t=1}^{N} K^h\!\left(\frac{u - X_{t+1}}{h}\right) K^x\!\left(\frac{x - X_t}{b}\right) \Big/ \hat p_b(x) \qquad (17.4)$$
where the deconvolution kernel $K^h$ is
$$K^h(u) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-i\omega u}\, \frac{\phi_K(\omega)}{\phi_e(\omega/h)}\, d\omega .$$


In (17.4) we use a deconvolution smoothing in the direction of the second argument of f(x, u), using only pairs of observations $(X_t, X_{t+1})$ for which $|x - X_t| \le b$, i.e. $X_t \approx x$. By integration, we get the conditional distribution function of $f(X_t, e_t)$ given $X_t = x$,
$$Q(v|x) = \mathrm{P}(f(x, e_t) \le v \mid X_t = x) = \int_{-\infty}^{v} q(u|x)\, du$$
and its estimate
$$\hat Q_{b,h}(v|x) = \int_{-a_N}^{v} \hat q_{b,h}(u|x)\, du \Big/ \int_{-a_N}^{a_N} \hat q_{b,h}(u|x)\, du$$
for some $a_N \sim N^{1/6}$ as $N \to \infty$. For technical reasons we have to cut off the density estimate in regions where it is still unreliable for the given N. The particular choice of denominator guarantees that $\hat Q_{b,h}(a_N|x) = 1$, as it should be for an estimate of the cumulative distribution function $Q(v|x)$.
To estimate the unconditional density q(u) of $f(X_t, e_t) = X_{t+1} - e_{t+1}$, we use a standard deconvolution density estimate with smoothing parameter $h^* = A^* / \log(N)$,
$$\hat q_{h^*}(u) = \frac{1}{N h^*} \sum_{t=1}^{N} K^{h^*}\!\left(\frac{u - X_t}{h^*}\right) .$$

Let $p_e(u|x)$ be the conditional density of $e_t$ given $X_t = x$, and let $P_e(v|x) = \int_{-\infty}^{v} p_e(u|x)\, du$ be the corresponding conditional distribution function. An estimate of it is given as
$$\hat P_{e,h^*}(v|x) = \int_{-a_N}^{v} \hat q_{h^*}(x - u)\, p_e(u)\, du \Big/ \int_{-a_N}^{a_N} \hat q_{h^*}(x - u)\, p_e(u)\, du$$
where again we truncate at $a_N \sim N^{1/6}$.
To obtain the ARMA function f, we can now compare Q(v|x) and $P_e(v|x)$. In practice this means relating $\hat Q_{b,h}(v|x)$ and $\hat P_{e,h^*}(v|x)$. The nonparametric estimate for the ARMA function f(x, v), depending on the smoothing parameters b, h and $h^*$, is hence given by
$$\hat f_{b,h,h^*}(x, v) = \hat Q_{b,h}^{-1}\big(\hat P_{e,h^*}(v|x) \mid x\big)$$
if f(x, v) is increasing in the second argument, and
$$\hat f_{b,h,h^*}(x, v) = \hat Q_{b,h}^{-1}\big(1 - \hat P_{e,h^*}(v|x) \mid x\big)$$
if f(x, v) is a decreasing function of v for any x. $\hat Q_{b,h}^{-1}(\cdot|x)$ denotes the inverse of the function $\hat Q_{b,h}(\cdot|x)$ for fixed x. Holzberger (2001) has shown that $\hat f_{b,h,h^*}(x, v)$ is a consistent estimate for f(x, v) under suitable assumptions and has given upper bounds on the rates of bias and variance of the estimate. We remark that the assumption of monotonicity on f is not a strong restriction. In the application to GARCH processes which we have in mind it seems to be
intuitively reasonable that the volatility of today is an increasing function of
the volatility of yesterday which translates into an ARMA function f which is
decreasing in the second argument.
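The last step, relating the two estimated conditional distribution functions, reduces to a numerical inversion once both have been tabulated on a grid. The sketch below assumes that arrays Qhat and Pehat, holding $\hat Q_{b,h}(\cdot|x)$ and $\hat P_{e,h^*}(\cdot|x)$ on a common grid v_grid, have already been computed by whatever smoothing routine is used; the inversion itself is plain monotone interpolation.

import numpy as np

def arma_function_estimate(v_grid, Qhat, Pehat, increasing=True):
    # invert Qhat(.|x) at the probability levels Pehat(.|x) (or 1 - Pehat for decreasing f)
    target = Pehat if increasing else 1.0 - Pehat
    return np.interp(target, Qhat, v_grid)   # Qhat must be increasing over v_grid

# toy illustration with two made-up conditional c.d.f.s on a grid
v_grid = np.linspace(-3, 3, 201)
Qhat = 1.0 / (1.0 + np.exp(-1.5 * v_grid))
Pehat = 1.0 / (1.0 + np.exp(-v_grid))
fhat_at_x = arma_function_estimate(v_grid, Qhat, Pehat, increasing=False)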
Let us illustrate the steps for estimating a nonparametric ARMA process. First
we generate time series data and plot Xt+1 versus Xt .

library("times")
n=1000
x=genarma(0.7,0.7,normal(n))
XFGnpg01.xpl
The result is shown in Figure 17.1. The scatterplot in the right panel of Figure 17.1 defines the region where we can estimate the function f(x, v).

Figure 17.1. ARMA(1,1) process: time series (left) and scatterplot of X(t) against X(t+1) (right).


To compare the deconvolution density estimate with the density of $f(X_t, e_t)$ we now use our own routine (myarma) for generating ARMA(1,1) data from a known function (f):

proc(f)=f(x,e,c)
f=c[1]+c[2]*x+c[3]*e
endp
proc(x,f)=myarma(n,c)
x=matrix(n+1)-1
f=x
e=normal(n+1)
t=1
while (t<n+1)
t=t+1
f[t]=f(x[t-1],e[t-1],c)
x[t]=f[t]+e[t]
endo
x=x[2:(n+1)]
f=f[2:(n+1)]
endp

n=1000
{x,f}=myarma(n,0|0.7|0.7)

h=0.4
library("smoother")
dh=dcdenest(x,h) // deconvolution estimate
fh=denest(f,3*h) // kernel estimate
XFGnpg02.xpl


Figure 17.2 shows both density estimates. Note that the smoothing parameter (bandwidth h) is different for the two estimates since different kernel functions are used.




f = nparmaest (x {,h {,g {,N {,R } } } } )
estimates a nonparametric ARMA process




The function nparmaest computes the function f (x, v) for an ARMA process
according to the algorithm described above. Let us ¬rst consider an ARMA(1,1)
Figure 17.2. Deconvolution density estimate (solid) and kernel density estimate (dashed) of the known mean function of an ARMA(1,1) process.



with f(x, v) = 0.3 + 0.6x + 1.6v, i.e.
$$X_t = 0.3 + 0.6\, X_{t-1} + 1.6\, e_{t-1} + e_t .$$

Hence, we use myarma with c=0.3|0.6|1.6 and call the estimation routine by

f=nparmaest(x)
XFGnpg03.xpl
The optional parameters N and R are set to 50 and 250, respectively. N contains the grid sizes used for x and v. R is an additional grid size for internal computations. The resulting function is therefore computed on a grid of size N × N. For comparison, we also calculate the true function on the same grid. Figure 17.3 shows the resulting graphs. The bandwidths h for the one-dimensional deconvolution kernel estimator $\hat q$ (corresponding to $h^*$) and g for the two-dimensional one (corresponding to h and b) are chosen according to the rates derived in Holzberger (2001).


Figure 17.3. Nonparametric estimation of a (linear) ARMA process. True vs. estimated function and data.


As a second example consider an ARMA(1,1) with a truly nonlinear function f(x, v) = -2.8 + 8F(6v), i.e.
$$X_t = -2.8 + 8\, F(6\, e_{t-1}) + e_t ,$$
where F denotes the sigmoid function $F(u) = (1 + e^{-u})^{-1}$. In contrast to the previous example, this function obviously does not depend on the first argument. The code above has to be modified by using

proc(f)=f(x,e,c)
f=c[2]/(1+exp(-c[3]*e))+c[1]
endp
c=-2.8|8|6
XFGnpg04.xpl
The resulting graphs for this nonlinear function are shown in Figure 17.4. The estimated surface obviously varies only in the second dimension and follows the s-shaped underlying true function. However, the sample size used and the internal grid sizes of the estimation procedure allow only a rather imprecise reconstruction of the tails of the surface.


Figure 17.4. Nonparametric estimation of a (nonlinear) ARMA process. True vs. estimated function and data.
17.3 Nonparametric GARCH Estimates
In the following, we consider nonparametric GARCH(1,1) models which depend symmetrically on the last observation:
$$\varepsilon_t = \sigma_t Z_t, \qquad \sigma_t^2 = g(\varepsilon_{t-1}^2, \sigma_{t-1}^2) . \qquad (17.5)$$

Here, g denotes a smooth unknown function and the innovations $Z_t$ are chosen as in Section 17.2. This model covers the usual parametric GARCH(1,1) process (17.1) but does not allow for representing a leverage effect like the TGARCH(1,1) process. We show now how to transform (17.5) into an ARMA model. First, we define
$$X_t = \log(\varepsilon_t^2), \qquad e_t = \log(Z_t^2).$$

By (17.5), we now have
$$X_{t+1} = \log(\varepsilon_{t+1}^2) = \log \sigma_{t+1}^2 + e_{t+1} = \log g(\varepsilon_t^2, \sigma_t^2) + e_{t+1} = \log g_1(\log(\varepsilon_t^2), \log(\sigma_t^2)) + e_{t+1} = \log g_1(X_t, X_t - e_t) + e_{t+1} = f(X_t, e_t) + e_{t+1}$$
with
$$g_1(x, u) = g(e^x, e^u), \qquad f(x, v) = \log g_1(x, x - v).$$
Now, we can estimate the ARMA function f(x, v) from the logarithmic squared data $X_t = \log(\varepsilon_t^2)$ as in Section 17.2, using the nonparametric ARMA estimate $\hat f_{b,h,h^*}(x, v)$ of the ARMA representation of (17.5). Reverting the transformations, we get
$$\hat g_1(x, u) = \exp\{\hat f_{b,h,h^*}(x, x - u)\}, \qquad \hat g_{b,h,h^*}(y, z) = \hat g_1(\log y, \log z)$$
or, combining both equations,
$$\hat g_{b,h,h^*}(y, z) = \exp\big\{\hat f_{b,h,h^*}(\log y, \log(y/z))\big\}, \qquad y, z > 0,$$
as an estimate of the symmetric GARCH function g(y, z).
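The transformation and its inverse are mechanical, as the following sketch shows; fhat is a placeholder for any ARMA function estimate obtained as in Section 17.2, and the linear example reproduces the relation between g and f for a linear GARCH(1,1) stated further below.

import numpy as np

def to_arma_scale(eps):
    # X_t = log(eps_t^2); a small floor avoids log(0) for zero returns
    return np.log(np.maximum(eps**2, 1e-12))

def garch_function_estimate(fhat):
    # g(y, z) = exp{ fhat(log y, log(y / z)) } for y, z > 0
    return lambda y, z: np.exp(fhat(np.log(y), np.log(y / z)))

# check with the ARMA function implied by a linear GARCH(1,1)
omega, alpha, beta = 0.01, 0.6, 0.2
f_linear = lambda x, v: np.log(omega + alpha * np.exp(x) + beta * np.exp(x - v))
g_hat = garch_function_estimate(f_linear)
print(g_hat(0.5, 0.3), omega + alpha * 0.5 + beta * 0.3)   # both give 0.37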
We have to be aware, of course, that the density $p_e$ used in the deconvolution part of estimating f(x, v) is the probability density of the $e_t = \log Z_t^2$, i.e. if $p_Z(z)$ denotes the density of $Z_t$,
$$p_e(u) = \frac{1}{2}\, e^{u/2}\, \big\{ p_Z(e^{u/2}) + p_Z(-e^{u/2}) \big\} .$$
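For standard normal innovations this density can be checked numerically; the sketch below compares the closed form with a simulated sample of $\log Z_t^2$.

import numpy as np
from scipy.stats import norm

def p_e(u, p_z=norm.pdf):
    # density of e = log(Z^2): 0.5 * exp(u/2) * (p_Z(exp(u/2)) + p_Z(-exp(u/2)))
    s = np.exp(np.asarray(u, dtype=float) / 2.0)
    return 0.5 * s * (p_z(s) + p_z(-s))

rng = np.random.default_rng(0)
e = np.log(rng.standard_normal(200_000) ** 2)
hist, edges = np.histogram(e, bins=200, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
u = np.linspace(-6, 2, 5)
print(np.c_[u, p_e(u), np.interp(u, centers, hist)])   # analytic vs. simulated density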
If $\varepsilon_t$ is a common parametric GARCH(1,1) process of form (17.1), then $g(y, z) = \omega + \alpha y + \beta z$, and the corresponding ARMA function is $f(x, v) = \log(\omega + \alpha e^{x} + \beta e^{x - v})$. This is a decreasing function in v, which seems to be a reasonable assumption in the general case too, corresponding to the assumption that the present volatility is an increasing function of past volatilities.
As an example, we simulate a GARCH process from

proc(f)=gf(x,e,c)
  // volatility function g(y,z) = c[1] + c[2]*y + c[3]*z
  f=c[1]+c[2]*x+c[3]*e
endp

proc(e,s2)=mygarch(n,c)
  e=zeros(n+1)
  s2=e
  z=normal(n+1)
  t=1
  while (t<n+1)
    t=t+1
    // sigma_t^2 = g(eps_{t-1}^2, sigma_{t-1}^2); s2 already holds the squared volatility
    s2[t]=gf(e[t-1]^2,s2[t-1],c)
    e[t]=sqrt(s2[t]).*z[t]
  endo
  e=e[2:(n+1)]
  s2=s2[2:(n+1)]
endp




f = npgarchest (x {,h {,g {,N {,R } } } } )
estimates a nonparametric GARCH process
The function npgarchest computes the functions f (x, v) and g(y, z) for
a GARCH process using the techniques described above. Consider a
GARCH(1,1) with
g(y, z) = 0.01 + 0.6 y + 0.2 z.
Hence, we use

n=1000
c=0.01|0.6|0.2
{e,s2}=mygarch(n,c)

and call the estimation routine by

g=npgarchest(e)
XFGnpg05.xpl
Figure 17.5 shows the resulting graph for the estimator of f (x, v) together with
the true function (decreasing in v) and the data (Xt+1 versus Xt ). As in the
ARMA case, the estimated function shows the underlying structure only for a
part of the range of the true function.
Finally, we remark how the general case of nonparametric GARCH models could be estimated. Consider
$$\varepsilon_t = \sigma_t Z_t, \qquad \sigma_t^2 = g(\varepsilon_{t-1}, \sigma_{t-1}^2) \qquad (17.6)$$
where $\sigma_t^2$ may depend asymmetrically on $\varepsilon_{t-1}$. We write
$$g(x, z) = g^+(x^2, z)\, \mathbf{1}(x \ge 0) + g^-(x^2, z)\, \mathbf{1}(x < 0).$$
As $g^+, g^-$ depend only on the squared arguments we can estimate them as before. Again, consider $X_t = \log(\varepsilon_t^2)$, $e_t = \log(Z_t^2)$. Let $N_+$ be the number of all $t \le N$ with $\varepsilon_t \ge 0$, and $N_- = N - N_+$. Then, we set
$$\hat p_b^+(x) = \frac{1}{N_+ b} \sum_{t=1}^{N} K^x\!\left(\frac{x - X_t}{b}\right) \mathbf{1}(\varepsilon_t \ge 0)$$
$$\hat q_{b,h}^+(u|x) = \frac{1}{N_+ hb} \sum_{t=1}^{N} K^h\!\left(\frac{u - X_{t+1}}{h}\right) K^x\!\left(\frac{x - X_t}{b}\right) \mathbf{1}(\varepsilon_t \ge 0) \Big/ \hat p_b^+(x)$$
$$\hat q_{h^*}^+(u) = \frac{1}{N_+ h^*} \sum_{t=1}^{N} K^{h^*}\!\left(\frac{u - X_t}{h^*}\right) \mathbf{1}(\varepsilon_t \ge 0).$$
Figure 17.5. Nonparametric estimation of f(x, v) for a (linear) GARCH process. True vs. estimated function, data $X_t = \log(\varepsilon_t^2)$.




$\hat Q_{b,h}^+(v|x)$ and $\hat P_{e,h^*}(v|x)$ are defined as in Section 17.2 with $\hat q_{b,h}^+$ and $\hat p_b^+$ replacing $\hat q_{b,h}$ and $\hat p_b$, and, using both estimates of conditional distribution functions, we get an ARMA function estimate $\hat f_{b,h,h^*}^+(x, v)$. Reversing the transformation from GARCH to ARMA, we get as the estimate of $g^+(x^2, z)$
$$\hat g_{b,h,h^*}^+(x^2, z) = \exp\big\{\hat f_{b,h,h^*}^+\big(\log x^2, \log(x^2/z)\big)\big\}.$$
The estimate for $g^-(x^2, z)$ is defined analogously,
$$\hat g_{b,h,h^*}^-(x^2, z) = \exp\big\{\hat f_{b,h,h^*}^-\big(\log x^2, \log(x^2/z)\big)\big\},$$
where, in the derivation of $\hat f_{b,h,h^*}^-$, $N_+$ and $\mathbf{1}(\varepsilon_t \ge 0)$ are replaced by $N_-$ and $\mathbf{1}(\varepsilon_t < 0)$.




Bibliography

Bollerslev, T.P. (1986). Generalized autoregressive conditional heteroscedasticity, Journal of Econometrics 31: 307-327.

Bühlmann, P. and McNeil, A.J. (1999). Nonparametric GARCH-models, Manuscript, ETH Zürich, http://www.math.ethz.ch/~mcneil.

Carroll, R.J. and Hall, P. (1988). Optimal rates of convergence for deconvoluting a density, J. Amer. Statist. Assoc. 83: 1184-1186.

Fan, J. and Truong, Y.K. (1993). Nonparametric regression with errors-in-variables, Ann. Statist. 19: 1257-1272.

Franke, J., Härdle, W. and Hafner, Ch. (2001). Statistik der Finanzmärkte, Springer, ebook: http://www.quantlet.de.

Härdle, W., Lütkepohl, H. and Chen, R. (1997). A review of nonparametric time series analysis, International Statistical Review 65: 49-72.

Holzberger, H. (2001). Nonparametric Estimation of Nonlinear ARMA and GARCH-processes, PhD Thesis, University of Kaiserslautern.

J.P. Morgan. RiskMetrics, http://www.jpmorgan.com.

Stefansky, L.A. and Carroll, R.J. (1990). Deconvoluting kernel density estimators, Statistics 21: 169-184.
18 Net Based Spreadsheets in Quantitative Finance

Gökhan Aydınlı




18.1 Introduction
Modern risk management requires accurate, fast and flexible computing environments. To meet this demand a vast number of software packages has evolved over the last decade, accompanied by a huge variety of programming languages, interfaces, configuration and output possibilities. One solution especially designed for large scale explorative data analysis is XploRe, a procedural programming environment equipped with a modern client/server architecture (Härdle et al. (1999) and Härdle et al. (2000)).
As far as flexibility in the sense of openness and accuracy is concerned, XploRe offers much of what a risk analyst may wish for. On the other hand, its matrix oriented programming language (Fickel, 2001) might be seen as a drawback with respect to other computational approaches. In terms of learning curve effects and total cost of ownership an alternative solution seems desirable.
This chapter will present and demonstrate the net based spreadsheet solution MD*ReX designed for modern statistical and econometric analysis. We concentrate on examples of Value-at-Risk (VaR) with copulas and on means of quantifying implied volatilities as presented in Chapters 2 and 6. All results will be shown in Microsoft Excel.
Recent research suggests that the rationale for spreadsheet based statistical computing is manifold. Ours is to bring state-of-the-art quantitative methods to the fingertips of spreadsheet users. Throughout this chapter we will give a short introduction into our underlying technology, briefly explain
the usage of the aforementioned spreadsheet solution and provide applications
of this tool.


18.2 Client/Server based Statistical Computing
While the power of computing equipment increased exponentially, statistical algorithms achieved higher efficiency in terms of computing time and accuracy. Meanwhile the development of high capacity network architectures had another positive impact on this trend, especially the establishment of the world wide web. Consequently a vast number of researchers and programmers in computational statistics, as well as institutions like commercial banks, insurers and corporations, spent much effort to utilize this evolution for their field of research. One outcome has been the technological philosophy of client/server based statistical computing: a decentralized combination of methods, users and providers of statistical knowledge.
Our understanding of client/server based statistical computing is that there exists a formal relationship between user, provider and vendor of statistical methodology. An easy to grasp example is a telephone call. The caller (in our case the user demanding statistical methods and/or advice) calls (connects via TCP/IP enabled networks) someone (a high-performance server/vendor of statistical information and methods) who serves his call (the requested calculation is done/information is displayed in an HTML browser, etc.). This client/server understanding is an approach to gain scalability of computational tasks, resource shifting of processing power and decentralization of methods.
There are numerous ways of implementing client/server based statistics; among others, Common Gateway Interfaces (CGI), JavaScript, Java Applets and Plug-Ins are the most commonly used techniques. The technology behind the XploRe client/server architecture is thoroughly explained in Kleinow and Lehmann (2002). While that solution is applet based, the spreadsheet client presented here has an Add-in character. The MD*ReX-Client is a software tool nested within a spreadsheet application. In both cases the communication technique relies on the protocol stack MD*CRYPT. For the spreadsheet solution the Java based MD*CRYPT has to be modified. As Microsoft does not natively support Java for its Office suite, MD*CRYPT has been implemented as a dynamic link library to utilize its interface for Office applications like Excel. The technical aspects and the design philosophy of the MD*ReX-Client are discussed in detail in Aydınlı et al. (2002).
18.3 Why Spreadsheets?
Since their first appearance in the late 1970s spreadsheets have gained a remarkable popularity in business as well as in research and education. They are the most common software managers, researchers and traders utilize for data analysis, quantitative modelling and decision support.
Przasnyski and Seal (1996) find that most of the time series modeling done in the business world is accomplished using spreadsheets. Further research suggests that those users have become so proficient with spreadsheets that they are reluctant to adopt other software solutions. Not even a higher suitability for specific applications is appealing then (Chan and Storey, 1996). An analysis of XploRe download profiles conducted in a data mining framework confirmed our perception of the statistical software market: a stunning majority of users are using Excel for statistical analysis (Sofyan and Werwatz, 2001).
A major difference between a spreadsheet application and statistical programming languages is the interaction model. This "direct manipulation interaction model" enables statistical computations e.g. by drag and drop (Neuwirth and Baier, 2001). In the cell based framework of spreadsheets the direct aspect of interaction means that manipulations in one cell immediately affect the content of another cell. Of course this is only the case if the respective cells are interconnected with appropriate cell functions and references. Especially in business applications the immediate visibility of numerical changes when cell values are modified is an appreciated feature for decision making based on different scenarios.
Our approach is based on the philosophy of bringing two ideals together: on the one hand an accurate and reliable statistical engine (which is represented by the XploRe Quantlet Server, XQS) and on the other a user friendly and intuitive Graphical User Interface like Excel. The numerical impreciseness of Excel has been discussed exhaustively in the literature. As a starting point the reader is referred to e.g. McCullough and Wilson (1999). The development of MD*ReX is guided by the principles of usability and flexibility. Hence we try to offer different modes of usage for varying user groups: a dialogue based and more "Windows" like appearance for the novice user of statistical software, and a "raw" mode where the spreadsheet application merely functions as a data import wizard and scratch-pad for the advanced user.
18.4 Using MD*ReX
We will demonstrate the usage of MD*ReX in the context of quantitative financial modeling. In order to use MD*ReX a working installation of Excel 9.x is required. Furthermore the installation routine will set up a Java runtime environment and, if needed, a Virtual Java Machine. For more information please refer to http://md-rex.com/.
After successfully installing the client it can be used in two ways: for on-demand usage MD*ReX can be accessed via the Start → Programs shortcut in Windows, or, if permanent usage in Excel is requested, the Add-in can be installed from the Extras → Add-in Manager dialogue in Excel. The rex.xla file is located under

%Systemroot%\%Program Files%\MDTech\ReX\ReX.

In the latter case the client is available every time Excel is started. In either case the client can be accessed via the Excel menu bar (Figure 18.1) and exposes its full functionality after clicking on the ReX menu item.




Figure 18.1. Excel and MD*ReX menus


In order to work with MD*ReX the user first has to connect to a running XploRe Quantlet Server. This can either be a local server, which by default is triggered if MD*ReX is started via the Programs shortcut, or any other XQS somewhere on the Internet. Evidently, for the latter option a connection to the Internet is required. The Connect dialogue offers some pre-configured XQSs. After the connection has been successfully established the user can start right away to work with MD*ReX.
In contrast to XploRe, the user has the option to perform statistical analysis by using implemented dialogues, e.g. the Time Series dialogue in Figure 18.3. Via
this dialogue a researcher is able to conduct standard time series analysis techniques as well as, e.g., more refined nonlinear approaches like ARCH tests based on neural networks. These interfaces encapsulate XploRe code while using the standard Excel GUI elements, hence undue learning overhead is minimized. Alternatively one can directly write XploRe commands into the spreadsheet cells and then run these either via the menu button or via the context menu, by right clicking the highlighted cell range (Figure 18.2). Furthermore it is now much easier to get data to the XploRe Quantlet Server. Simply marking an appropriate data range within Excel and clicking the Put button is enough to transfer any kind of numerical data to the server. We will show this in the next section. A further virtue of using a spreadsheet application is the commonly built-in database connectivity. Excel, for example, allows for various data retrieval mechanisms via the Open Database Connectivity (ODBC) standard, which is supported by most of the database systems available nowadays.




Figure 18.2. ReX Context Menu
18.5 Applications
In the following paragraphs we want to show how MD*ReX might be used to analyze the VaR using copulas as described in Chapter 2 of this book. Subsequently we will demonstrate the analysis of implied volatility shown in Chapter 6. All examples are taken from this book and have been modified accordingly. The aim is to make the reader aware of the need for this modification and to give an idea how this client may be used for other fields of statistical research as well.




Figure 18.3. MD*ReX Time Series Dialogue


We have deliberately omitted the demonstration of dialogues and menu bars as it is fairly straightforward to develop these kinds of interfaces on your own. Some knowledge of the macro language Visual Basic for Applications (VBA) integrated into Excel and an understanding of the XploRe Quantlets is sufficient
to create custom dialogues and menus for this client. Thus no further knowledge
of the XploRe Quantlet syntax is required. An example is the aforementioned
Time Series dialogue, Figure 18.3.


18.5.1 Value at Risk Calculations with Copulas

The quantification of the VaR of a portfolio of financial instruments has become a constituent part of risk management. Simplified, the VaR is a quantile of the probability distribution of the value-loss of a portfolio (Chapter 2). Aggregating individual risk positions is one major concern for risk analysts. The μ-σ approach of portfolio management measures risk in terms of the variance, implying a "Gaussian world" (Bouyé et al., 2001). Traditional VaR methods are hence based on the normality assumption for the distribution of financial returns. However, empirical evidence suggests a high probability of extreme returns ("fat tails") and more mass around the center of the distribution (leptokurtosis), violating the principles of the Gaussian world (Rachev, 2001).
In conjunction with the methodology of VaR these problems seem to be tractable with copulas. In a multivariate model setup a copula function is used to couple joint distributions to their marginal distributions. The copula approach addresses two major issues: substituting the dependency structure, i.e. the correlations, and substituting the marginal distribution assumption, i.e. relaxing the Gaussian distribution assumption. With MD*ReX the user is now enabled to conduct copula based VaR calculations within Excel, making use of Excel's powerful graphical capabilities and its intuitive interface.
The steps necessary are as follows:

1. Get the corresponding Quantlets into Excel,
2. run them from there,
3. obtain the result,
4. create a plot of the result.

The first step is rather trivial: copy and paste the example Quantlet XFGrexcopula1.xpl from any text editor or browser into an Excel worksheet. Next mark the range containing the Quantlet and apply the Run command. Then switch to any empty cell of the worksheet and click Get to receive the
numerical output rexcuv. Generating a three-dimensional Excel graph from this output one obtains an illustration as displayed in Figure 18.4. The corresponding Quantlets are XFGrexcopula1.xpl, XFGrexcopula2.xpl, XFGrexcopula3.xpl and XFGrexcopula4.xpl. They work in the same way as the XFGaccvar1.xpl Quantlet.




Figure 18.4. Copulas: C4 (u, v) for θ = 2 and N = 30 (upper left),
C5 (u, v) for θ = 3 and N = 21 (upper right), C6 (u, v) for θ = 4 and
N = 30 (lower left), C7 (u, v) for θ = 5 and N = 30 (lower right)


Of course steps 1-4 could easily be wrapped into a VBA macro with suitable dialogues. This is exactly what we refer to as the change from the raw mode of MD*ReX into the "Windows" like embedded mode. Embedded here means that XploRe commands (quantlets) are integrated into the macro language of Excel.
The Monte Carlo simulations are obtained correspondingly and are depicted in Figure 18.5. The corresponding Quantlet is XFGrexmccopula.xpl. This Quantlet again functions analogously to XFGaccvar2.xpl. The graphical output is then constructed along the same lines: paste the corresponding results z11 through z22 into cell areas and let Excel draw a scatter-plot.


Figure 18.5. Monte Carlo Simulations for N = 10000 and σ1 = 1,
σ2 = 1, θ = 3
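The copula Monte Carlo step itself can also be sketched outside of XploRe. The lines below draw from a Clayton copula by conditional inversion and attach Gaussian margins; the Clayton family and the toy portfolio are illustrative choices of ours and not necessarily the copulas C4-C7 or the quantlet logic used above.

import numpy as np
from scipy.stats import norm

def clayton_sample(n, theta, rng):
    # conditional inversion: u uniform, then v from the conditional Clayton distribution
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    v = (u ** (-theta) * (w ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
    return u, v

rng = np.random.default_rng(0)
u, v = clayton_sample(10_000, theta=3.0, rng=rng)
x1 = norm.ppf(u)                # margin N(0, 1), i.e. sigma1 = 1
x2 = norm.ppf(v)                # margin N(0, 1), i.e. sigma2 = 1
loss = -(x1 + x2)               # loss of a toy two-asset portfolio
print("95% VaR of the toy portfolio:", np.quantile(loss, 0.95))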



18.5.2 Implied Volatility Measures

A basic risk measure in finance is volatility, which can be applied to a single asset or to a bunch of financial assets (i.e. a portfolio). Whereas the historic volatility simply measures past price movements, the implied volatility represents a market perception of uncertainty. Implied volatility is a contemporaneous risk measure which is obtained by inverting an option pricing model, such as the Black-Scholes model, for the volatility. The implied volatility can only be quantified if there are options traded which have the asset or assets as an underlying (for example a stock index). The examples here are again taken from Chapter 6. The underlying data are VolaSurf02011997.xls, VolaSurf03011997.xls and volsurfdata2. The data has been kindly provided by MD*BASE; volsurfdata2 ships with any distribution of XploRe. In our case the reader has the choice of either importing the data into Excel via the data import utility or simply running the command data=read("volsurfdata2.dat"). For the other two data sets utilizing the Put button is the easiest way to transfer the data to an XQS. Either alternative has the same effect, whereas the former is a good example of how the MD*ReX client exploits the various data retrieval methods of Excel.
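Outside of the quantlet, the inversion of the Black-Scholes formula for the volatility can be sketched in a few lines; the input numbers below are invented for illustration.

import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def bs_call(S, K, r, T, sigma):
    # Black-Scholes price of a European call
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, K, r, T):
    # the call price is increasing in sigma, so a bracketing root finder suffices
    return brentq(lambda s: bs_call(S, K, r, T, s) - price, 1e-6, 5.0)

print(implied_vol(price=11.10, S=100.0, K=100.0, r=0.03, T=0.5))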
The Quantlet XFGReXiv.xpl returns the data matrix for the implied volatility surfaces shown in Figures 18.6 through 18.8. Evidently the Quantlet has to


Figure 18.6. DAX30 Implied Volatility, 02.01.1997



be modified for the appropriate data set. In contrast to the above examples, where Quantlets could be adopted without any further modification, in this case we need some redesign of the XploRe code. This is achieved with suitable reshape operations on the output matrices. The graphical output is then obtained by arranging the two output vectors x2 and y2 and the output matrix z1.
The advantage of measuring implied volatilities is obviously an expressive visualization. Especially the well known volatility smile and the corresponding term structure can be illustrated excellently in a movable cubic space. Furthermore this approach will enable real-time calculation of implied volatilities in future applications. Excel can be used as a data retrieval front end for real-time market data providers such as Datastream or Bloomberg. It is imaginable then to analyze tick-data which are fed online into such a spreadsheet system to evaluate contemporaneous volatility surfaces.


Figure 18.7. DAX30 Implied Volatility, 03.01.1997

Figure 18.8. DAX30 Implied Volatility



Bibliography
Aydınlı, G., Härdle, W., Kleinow, T. and Sofyan, H. (2002). ReX: Linking XploRe to Excel, forthcoming Computational Statistics Special Issue, Springer Verlag, Heidelberg.
Bouyé, E., Durrleman, V., Nikeghbali, A., Riboulet, G. and Roncalli, T. (2000). Copulas for Finance, A Reading Guide and some Applications, unpublished manuscript, Financial Econometrics Research Centre, City University Business School, London.
Chan, Y.E. and Storey, V.C. (1996). The use of spreadsheets in organizations: determinants and consequences, Information & Management, Vol. 31, pp. 119-134.

Fickel, N. (2001). Book Review: XploRe - Learning Guide, Allgemeines Statistisches Archiv, Vol. 85/1, p. 93.

Härdle, W., Klinke, S. and Müller, M. (1999). XploRe Learning Guide, Springer Verlag, Heidelberg.

Härdle, W., Hlavka, Z. and Klinke, S. (2000). XploRe Application Guide, Springer Verlag, Heidelberg.

Kleinow, T. and Lehmann, H. (2002). Client/Server based Statistical Computing, forthcoming in Computational Statistics Special Issue, Springer Verlag, Heidelberg.

McCullough, B.D. and Wilson, B. (1999). On the Accuracy of Statistical Procedures in Microsoft Excel, Computational Statistics & Data Analysis, Vol. 31, pp. 27-37.

Neuwirth, E. and Baier, T. (2001). Embedding R in Standard Software, and the other way round, DSC 2001 Proceedings of the 2nd International Workshop on Distributed Statistical Computing.

Przasnyski, L.L. and Seal, K.C. (1996). Spreadsheets and OR/MS models: an end-user perspective, Interfaces, Vol. 26, pp. 92-104.

Rachev, S. (2001). Company Overview, Bravo Consulting, http://www.bravo-group.com/inside/bravo-consulting company-overview.pdf.

Ragsdale, C.T. and Plane, D.R. (2000). On modeling time series data using spreadsheets, The International Journal of Management Science, Vol. 28, pp. 215-221.

Sofyan, H. and Werwatz, A. (2001). Analysing XploRe Download Profiles with Intelligent Miner, Computational Statistics, Vol. 16, pp. 465-479.