$$\widehat{P_j}(\theta) = \hat\alpha_{jk} + \hat\beta'_{jk}(\theta - \theta_k). \qquad (7.102)$$

This same linear regression gives, as an estimator of the gradient at $\theta_k$,

$$\widehat{\frac{\partial}{\partial\theta}P_j}(\theta_k) = \hat\beta_{jk}. \qquad (7.103)$$
Unfortunately, simulations at every value of $\theta_k$ provide estimators of the same quantity $P_j(\theta)$, and some of these estimators are clearly more precise than others, so we may wish to combine them into a weighted combination of the values $\hat\alpha_{jk} + \hat\beta'_{jk}(\theta - \theta_k)$, $k = 1, 2, \ldots, K$. The ideal weights would be inversely proportional to the variances of the estimators themselves if the linear model for $P_j(\theta)$ were a perfect fit. However, since it usually is not, some measure of the distance between the parameter value $\theta_k$ at which the simulations were conducted and the parameter at which we wish to extrapolate should also be included in the weights. We suggest weights which approximate

$$c_{jk}(\theta) \propto \frac{H(\theta - \theta_k)}{\mathrm{var}(\widehat{P_j}(\theta_k))} \qquad (7.104)$$
7.11. CALIBRATING A MODEL USING SIMULATIONS 421

with the "kernel" $H(t)$ having the property that it is maximized when $t = 0$ and decreases to zero as $|t|$ increases. A simple example of a kernel function $H(t)$ is the Gaussian kernel,

$$H(t) = \exp\Big\{-c \sum_{i=1}^{p} t_i^2\Big\},$$

or the Cauchy kernel,

$$H(t) = \frac{1}{1 + c \sum_{i=1}^{p} t_i^2},$$

for some positive parameter $c$ governing the window width or the amount of smoothing applied to the observations. The symbol "$\propto$" indicates that the weights are proportional to the values on the right-hand side, but renormalized so that

$$\sum_{k=1}^{K} c_{jk}(\theta) = 1.$$
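As a small sketch of how these weights might be computed, the following assumes a Gaussian kernel, a handful of one-dimensional design points, and made-up variance estimates for the per-point estimators (the smoothing constant `c` is an arbitrary illustrative choice):

```python
import numpy as np

def gaussian_kernel(t, c=1.0):
    """H(t) = exp(-c * sum_i t_i^2), maximized at t = 0."""
    t = np.atleast_1d(t)
    return np.exp(-c * np.sum(t**2))

def weights(theta, theta_k, var_k, c=1.0):
    """Weights c_k(theta) proportional to H(theta - theta_k) / var_k,
    renormalized so that they sum to one over the K design points."""
    raw = np.array([gaussian_kernel(theta - tk, c) / vk
                    for tk, vk in zip(theta_k, var_k)])
    return raw / raw.sum()

# K = 3 design points on the real line with (made-up) estimator variances
theta_k = [np.array([0.0]), np.array([1.0]), np.array([2.0])]
var_k = [0.04, 0.01, 0.09]
w = weights(np.array([0.9]), theta_k, var_k)
```

Here the largest weight goes to the design point at 1.0, which is both closest to the evaluation point 0.9 and has the smallest estimator variance.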

Then our estimator of $P_j(\theta)$ is

$$\widehat{P_j}(\theta) = \sum_{k=1}^{K} c_{jk}(\theta)\,[\hat\alpha_{jk} + \hat\beta'_{jk}(\theta - \theta_k)]. \qquad (7.105)$$

Similarly, the gradients $\frac{\partial}{\partial\theta} P_j(\theta)$ can be estimated by a weighted average of the individual estimators $\hat\beta_{jk}$, leading to the estimator

$$\widehat{\frac{\partial}{\partial\theta}P_j}(\theta) = \sum_{k=1}^{K} d_k(\theta)\,\hat\beta_{jk}, \qquad (7.106)$$

with
$$d_k(\theta) \propto H(\theta - \theta_k).$$
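A minimal sketch of the combined estimators (7.105) and (7.106) for a single price $j$, again assuming a Gaussian kernel and treating the local fits $(\hat\alpha_k, \hat\beta_k)$ and variance estimates as given inputs:

```python
import numpy as np

def combined_estimates(theta, theta_k, alpha, beta, var, c=1.0):
    """Weighted combination of K local linear fits:
    P_hat(theta)    = sum_k c_k(theta) [alpha_k + beta_k'(theta - theta_k)]
    grad_hat(theta) = sum_k d_k(theta) beta_k
    with c_k proportional to H(theta - theta_k)/var_k and
    d_k proportional to H(theta - theta_k)."""
    H = np.array([np.exp(-c * np.sum((theta - tk) ** 2)) for tk in theta_k])
    ck = H / var
    ck = ck / ck.sum()
    dk = H / H.sum()
    P = sum(wk * (a + b @ (theta - tk))
            for wk, a, b, tk in zip(ck, alpha, beta, theta_k))
    grad = sum(wk * b for wk, b in zip(dk, beta))
    return P, grad

# With a single design point the combination reduces to that point's fit:
P, g = combined_estimates(np.array([0.5]), [np.array([0.0])],
                          alpha=[2.0], beta=[np.array([3.0])],
                          var=np.array([1.0]))
```

With $K = 1$ the weights are 1, so the output is just $\hat\alpha_1 + \hat\beta'_1(\theta - \theta_1) = 2 + 3(0.5) = 3.5$ with gradient estimate 3.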
If we now substitute (7.105) and (7.106) in (7.101), we obtain an estimator of the gradient of the objective function. We may then move the current estimate (on the $m$'th step) of the optimal parameter, $\theta^{(m)}$, in a direction opposite to the gradient vector, say

$$\theta^{(m+1)} = \theta^{(m)} - \delta\,\frac{\widehat{\nabla}(\theta^{(m)})}{\|\widehat{\nabla}(\theta^{(m)})\|},$$

where $\widehat{\nabla}(\theta^{(m)})$ is the estimated gradient, $\delta$ is the step size and $\|\cdot\|$ denotes the length of the vector. We will adjust the step size from time to time so that it eventually converges at the asymptotically optimal rate of $1/n$. This is the case if, at each step, we retain the original step size $\delta^{(m)}$ provided that a new simulation at $\theta^{(m+1)}$ shows a decrease in the objective function (7.100); if it does not (this will happen at random some of the time when we are close enough to the minimum to overstep it),

422 SENSITIVITY ANALYSIS, ESTIMATING DERIVATIVES AND THE GREEKS

we reduce the step size according to

$$\delta^{(m+1)} = \delta^{(m)}(1 - \delta^{(m)}). \qquad (7.107)$$
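As a quick check of the rate implied by (7.107), a few lines iterating the recursion (the starting value 0.5 is an arbitrary choice; in the algorithm only unsuccessful steps shrink, whereas here every iteration does):

```python
# The step-size recursion (7.107): delta_{m+1} = delta_m (1 - delta_m).
# Since 1/delta_{m+1} = 1/delta_m + 1 + O(delta_m), the reciprocal grows
# by roughly 1 per shrink, so delta_m behaves like 1/m for large m --
# consistent with the asymptotically optimal 1/n rate mentioned above.
delta = 0.5
for m in range(10000):
    delta = delta * (1 - delta)
# after 10^4 shrinks, delta is close to 1/10^4
```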

Then the suggested algorithm is as follows. We begin with a small number $K$ of simulations at arbitrary points $\theta_1, \ldots, \theta_K$.

1. Use simulations at $\theta_1, \ldots, \theta_K$ to estimate the option values $\hat\alpha_{jk} = \widehat{P_j}(\theta_k)$ and the gradients $\hat\beta_{jk} = \widehat{\frac{\partial}{\partial\theta}P_j}(\theta_k)$, as well as crude estimators of $\mathrm{var}(\widehat{P_j}(\theta_k))$. Begin with a step size $\delta^{(0)}$.

2. Use these estimators to obtain the weights $c_{jk}(\theta)$ from (7.104) and $d_k(\theta)$, as well as the estimators (7.105), (7.106).

3. Estimate the direction of the gradient vector and move a distance $\delta$ from the current parameter value in the direction opposite to it.

4. Set $\theta_{K+1} =$ this solution.

5. Conduct simulations at this new parameter $\theta_{K+1}$.

6. With $K$ replaced by $K + 1$, repeat steps 1-4 until the objective function

$$\sum_{j=1}^{J} w_j (\widehat{P_j}(\theta) - P_j)^2$$

no longer changes significantly or we have done a maximum number of iterations.

7. On termination, choose the value of $\theta^{(m)}$, $m = 1, 2, \ldots, K$, which minimizes $\sum_{j=1}^{J} w_j (\widehat{P_j}(\theta^{(m)}) - P_j)^2$.
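The loop above can be sketched on a toy problem. This is a simplified stand-in, not the full algorithm: the hypothetical simulator returns noisy price and gradient estimates directly (standing in for the local regressions of steps 1-2), the model $P_1(\theta) = \theta$, $P_2(\theta) = 2\theta$ with targets $(1, 2)$ is invented so that the minimizer is $\theta = 1$, and the starting point and step size are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

targets = np.array([1.0, 2.0])   # observed prices P_j
w = np.array([1.0, 1.0])         # weights w_j

def simulate(theta, n=200):
    """Hypothetical simulator: noisy estimates of prices and gradients
    for the toy model P_1(theta) = theta, P_2(theta) = 2*theta."""
    prices = np.array([theta, 2.0 * theta]) + rng.normal(0, 0.05, (2, n)).mean(axis=1)
    grads = np.array([1.0, 2.0]) + rng.normal(0, 0.05, 2)
    return prices, grads

def objective(prices):
    return float(np.sum(w * (prices - targets) ** 2))

theta, delta = 3.0, 0.3          # arbitrary start and initial step size
prices, grads = simulate(theta)
best_obj, best_theta = objective(prices), theta
for m in range(200):
    # steps 3-4: unit step opposite the estimated gradient of (7.100)
    g = np.sum(2.0 * w * (prices - targets) * grads)
    theta_new = theta - delta * g / abs(g)
    # step 5: simulate at the new parameter; shrink the step via (7.107)
    # whenever the objective failed to decrease
    prices_new, grads_new = simulate(theta_new)
    if objective(prices_new) >= objective(prices):
        delta = delta * (1 - delta)
    if objective(prices_new) < best_obj:
        best_obj, best_theta = objective(prices_new), theta_new
    theta, prices, grads = theta_new, prices_new, grads_new
```

The retained best parameter (step 7) ends close to the true minimizer $\theta = 1$, and the step size has been reduced by the failures that occur once the iterates start overstepping the minimum.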

It is well known that for non-random functions, steepest descent is not a particularly efficient minimization routine, because it can bounce back and forth across a valley, and that methods like Newton's method, which are based on a quadratic approximation to the function, tend to be more efficient. For example, setting the gradient (7.101) equal to zero, with the estimators (7.105), (7.106) replacing their true values, results in the equation

$$\sum_{j=1}^{J} w_j \Big( \sum_{k=1}^{K} c_{jk}(\theta^{(m+1)})\,[\hat\alpha_{jk} - P_j + \hat\beta'_{jk}(\theta^{(m+1)} - \theta_k)] \Big) \Big( \sum_{k=1}^{K} d_k(\theta^{(m+1)})\,\hat\beta_{jk} \Big) = 0,$$

which we might wish to solve for the next parameter value $\theta^{(m+1)}$. It appears that the success of such an algorithm depends on how precise our gradient estimators $\hat\beta_{jk}$ are, and, since they are in general quite noisy, this algorithm relies too heavily on them for determining both the direction and the distance of travel.
The solution $\theta^{(m+1)} = \theta$ can be expressed in familiar least squares terms as

$$\theta = \Big[ \sum_{j=1}^{J} w_j\, \widehat{\frac{\partial}{\partial\theta}P_j}(\theta) \sum_{k=1}^{K} c_{jk}(\theta)\,\hat\beta'_{jk} \Big]^{-1} \sum_{j=1}^{J} w_j\, \widehat{\frac{\partial}{\partial\theta}P_j}(\theta) \sum_{k=1}^{K} c_{jk}(\theta)\,[P_j - \hat\alpha_{jk} + \hat\beta'_{jk}\theta_k], \qquad (7.108)$$

except that the presence of $\theta$ on both sides of (7.108) means that we should solve this equation iteratively, substituting an old value of $\theta$ on the right-hand side. This solution (7.108) can also be regarded as the estimator in a weighted linear regression if we define the vector of responses $Y$ as the $JK$ values of $P_j - \hat\alpha_{jk} + \hat\beta'_{jk}\theta_k$ arranged as a column vector, the $JK$ by $JK$ weight matrix $\Omega_{jl,jk} = w_j d_l(\theta) c_{jk}(\theta)$, and the matrix $X$ to be the $JK$ vectors $\hat\beta'_{jk}$ stacked to form a $JK$ by $p$ matrix. In this case (7.108) can be re-expressed in the more familiar form

$$\theta = (X' \Omega X)^{-1} (X' \Omega Y).$$
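The weighted least squares form is a single linear solve. A sketch with small illustrative dimensions and randomly generated stand-ins for $X$, $\Omega$, and $Y$ (a diagonal $\Omega$ is used here only for simplicity; the $\Omega$ defined above need not be diagonal):

```python
import numpy as np

# One update in the weighted least squares form
# theta = (X' Omega X)^{-1} (X' Omega Y), with illustrative J=2, K=3, p=1.
J, K, p = 2, 3, 1
rng = np.random.default_rng(1)
X = rng.normal(size=(J * K, p))                 # stacked gradient estimates
Omega = np.diag(rng.uniform(0.5, 1.5, J * K))   # weight matrix (here diagonal)
Y = rng.normal(size=J * K)                      # stacked responses

theta = np.linalg.solve(X.T @ Omega @ X, X.T @ Omega @ Y)
```

The defining property of the weighted least squares solution is that the weighted residuals are orthogonal to the columns of $X$, i.e. $X'\Omega(Y - X\theta) = 0$.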

7.11.1 Example: Calibrating the Normal Inverse Gaussian Model
We return to the problem of calibrating the Normal Inverse Gaussian model to the option prices listed in Table 6.1 (the initial value of the S&P 500 was 116.84):

Exercise K           950    1005   1050   1100   1125   1150   1175   1200   1225   1250
Option Price P_O(K)  173.1  124.8  88.5   53.7   39.3   27.3   17.85  10.95  6.2    3.25

Table 6.1. Price of SPX Call Options

We fit the Normal Inverse Gaussian distribution to historical prices in Section 3.4, where we obtained the parameter values

$$\hat\alpha = 95.23, \quad \hat\beta = -4.72, \quad \hat\delta = 0.016, \quad \hat\mu = 0.0009,$$

but of course these are just estimates of the parameters for the historical distribution and are not necessarily the appropriate parameters for a risk-neutral distribution. We ran the algorithm above for 40 iterations beginning with

$$\hat\alpha = 3.8010, \quad \hat\beta = 2.4105, \quad \hat\delta = 0.4591, \quad \hat\mu = 0.0009.$$
In Figure 7.19 we compare the errors in the option pricing formulas for the NIG and the Black-Scholes models using the algorithm of the last section. Notice that there is again no evidence from this graph that the NIG model with constant parameters fits the risk-neutral distribution better than does the Black-Scholes model with constant volatility parameter. We saw a similar comparison in Chapter three when we compared the two volatility smiles, Figure 7.1 and Figure 7.2.

[Figure 7.19 here: empirical error in the formula for the option price (vertical axis, -8 to 4) plotted against option price (horizontal axis, 0 to 180) for the NIG process and the Black-Scholes model.]

Figure 7.19: The error in pricing options on the S&P 500 using the NIG model
and the Black Scholes model.

7.12 Problems
1. Assume that $X$ has a normal$(\theta, 1)$ distribution and $T(X) = X + bX^2 + cX^3$. Show that we can estimate $\frac{\partial}{\partial\theta} E_\theta T(X) = 1 + 2b\theta + 3c(1 + \theta^2)$ by randomly sampling independent values $X_i$, $i = 1, 2, \ldots, n$, and using the estimator $\frac{1}{n}\sum_{i=1}^{n} T(X_i)(X_i - \theta)$. How would the variance of this compare with the variance of the alternative estimator $\frac{1}{n}\sum_{i=1}^{n} T'(X_i)$? How do they compare if $T$ is close to being a linear function, i.e. if $b$ and $c$ are small?
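The comparison in this problem is easy to explore numerically. The following sketch uses illustrative values $\theta = 0.5$, $b = 0.1$, $c = 0.05$ (not taken from the text):

```python
import numpy as np

# Compare the score-function estimator (1/n) sum T(X_i)(X_i - theta)
# with the pathwise estimator (1/n) sum T'(X_i), where
# T(x) = x + b x^2 + c x^3 and X ~ N(theta, 1).
rng = np.random.default_rng(0)
theta, b, c, n = 0.5, 0.1, 0.05, 200_000

X = rng.normal(theta, 1.0, n)
T = X + b * X**2 + c * X**3

score_samples = T * (X - theta)                 # score-function method
path_samples = 1 + 2 * b * X + 3 * c * X**2    # derivative T'(X)

score_est, path_est = score_samples.mean(), path_samples.mean()
true_val = 1 + 2 * b * theta + 3 * c * (1 + theta**2)
```

Both sample means estimate the same derivative, but the per-sample variance of the score-function estimator is much larger here, while the pathwise variance shrinks toward zero as $b, c \to 0$.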

2. Using the Black-Scholes formula for the price of a call option, verify the following formulae for the Greeks. In each case use simulation to verify the formula in the case $T = 0.25$, $\sigma = 0.3$, $r = 0.05$, $K = S_0 = 10$.

(a) (delta) $\dfrac{\partial V}{\partial S} = \Phi(d_1)$

(b) (gamma) $\dfrac{\partial^2 V}{\partial S^2} = \dfrac{\phi(d_1)}{S\sigma\sqrt{T - t}}$

(c) (rho) $\dfrac{\partial V}{\partial r} = K(T - t)e^{-r(T - t)}\Phi(d_2)$

(d) (theta) $\dfrac{\partial V}{\partial t} = -\dfrac{S\sigma\phi(d_1)}{2\sqrt{T - t}} - rKe^{-r(T - t)}\Phi(d_2)$

(e) (vega) $\dfrac{\partial V}{\partial \sigma} = S\phi(d_1)\sqrt{T - t}$
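One way to carry out the simulation check, shown here for formula (a) in the stated case, is the pathwise derivative of the discounted payoff, $\frac{\partial}{\partial S_0}\,e^{-rT}(S_T - K)^+ = e^{-rT}\,1(S_T > K)\,S_T/S_0$:

```python
import numpy as np
from math import exp, log, sqrt
from statistics import NormalDist

# Simulated delta versus the formula Phi(d1), case T=0.25, sigma=0.3,
# r=0.05, K=S0=10 (at t=0).
rng = np.random.default_rng(0)
S0 = K = 10.0
r, sigma, T, n = 0.05, 0.3, 0.25, 400_000

Z = rng.standard_normal(n)
ST = S0 * np.exp((r - sigma**2 / 2) * T + sigma * sqrt(T) * Z)
delta_sim = float(np.mean(exp(-r * T) * (ST > K) * ST / S0))

d1 = (log(S0 / K) + (r + sigma**2 / 2) * T) / (sigma * sqrt(T))
delta_formula = NormalDist().cdf(d1)
```

The other Greeks can be checked the same way, by differentiating the discounted payoff with respect to the relevant parameter (or by a finite-difference approximation with common random numbers).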

3. The rho of an option is the derivative of the option price with respect to the interest rate parameter $r$. What is the value of $\rho$ for a call option with $S_0 = 10$, strike $K = 10$, $r = 0.05$, $T = 0.25$ and $\sigma = 0.2$? Use a simulation to estimate this slope and determine the variance of your estimator. Try using (i) independent simulations at two points and (ii) common random numbers.

4. Consider the estimator given by (7.65) when $\theta = 1$. For what values of the importance sampling parameter $\psi$ is the variance of the estimator (7.65) finite?
5. Use the score function method (7.71) and a simulation to estimate the sensitivity of the value of a digital option to the parameters $S_0$, $r$, $\sigma$ in the Black-Scholes model. Specifically, suppose the discounted payoff from the option is

$$e^{-rT} 1(S_T > K)$$

with $\ln(S_T) \sim N(\ln(S_0) + rT - \sigma^2 T/2,\ \sigma^2 T)$ and $S_0 = K = 10$, $r = 0.05$, $T = 0.25$. Compare your estimated values with the true values given in Problem 2.
6. Use numerical quadrature corresponding to the fourth-degree Hermite polynomial to approximate the integral

$$\int_{-\infty}^{\infty} e^x \varphi(x)\, dx,$$

where $\varphi(x)$ is the standard normal probability density function. Compare the numerical approximation with the true value of the integral.
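A sketch of the quadrature: the substitution $x = \sqrt{2}\,t$ converts the integral to $\frac{1}{\sqrt{\pi}}\int e^{\sqrt{2}t} e^{-t^2}\,dt$, which matches the Gauss-Hermite weight function, so the nodes and weights of the degree-4 Hermite polynomial apply directly:

```python
import numpy as np

# Gauss-Hermite quadrature (4 nodes) for the integral of e^x phi(x),
# phi the N(0,1) density.  The true value is E[e^X] = e^{1/2} for
# X ~ N(0,1).
nodes, weights = np.polynomial.hermite.hermgauss(4)
approx = np.sum(weights * np.exp(np.sqrt(2) * nodes)) / np.sqrt(np.pi)
exact = np.exp(0.5)
```

Even with only four nodes the quadrature is accurate to about three decimal places here, since the integrand is smooth and close to a low-degree polynomial times the weight function.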
7. Use simulation to calibrate the volatility parameter of the Black-Scholes model to the S&P 500 option prices in Table 6.1 so that the sum of squares of the pricing errors is minimized (note that we wish to choose a volatility parameter that does not depend on the strike price). Check your answer against the results of a numerical minimization where the options are priced using the Black-Scholes formula.
