

generally J. P. Morgan's, as the variance–covariance matrix data for the interest rates and exchange rates required are available free of charge on the Internet.

We now introduce two concepts that are essential for optimising ALM tools. Their aim
is to define the maturity dates for products without specified dates and to arrive at an
understanding of the non-contractual methods of revising rates for products with floating
rates. The maturity date definition models, or replicating portfolios, and the repricing
models, are essential for calculating a J. P. Morgan type of VaR for the whole of the
balance sheet, and for calculating a gap in liquidity or interest rates, a duration for non-
maturity date products and a net present value.

12.4.1 The conventions method
In the absence of research reports, a large number of banks use an arbitrary convention for
integrating floating-rate products into interest rate risk analytics tools. Some banks with
floating-rate contracts take, as the date of the next rate revision, the average historical
interval over which the floating rates have remained stable. This calculation must obviously be made
302 Asset and Risk Management

for each type of floating-rate product, as the behaviour of the revision process is not the
same from one product to another.
Like every convention, this approach is far from perfect, as we will see clearly that the
rate revision periods are irregular.
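By way of illustration (the revision dates below are invented, not taken from the study), this convention can be sketched as follows:

```python
from datetime import date

# Hypothetical history for one floating-rate product: the dates on which the
# posted rate actually changed (illustrative values only).
revision_dates = [date(1991, 3, 1), date(1992, 1, 15), date(1993, 6, 1),
                  date(1995, 2, 1), date(1996, 9, 15)]

# The convention: the assumed horizon of the next repricing is the average
# historical interval (in days) during which the rate remained unchanged.
intervals = [(b - a).days for a, b in zip(revision_dates, revision_dates[1:])]
average_interval_days = sum(intervals) / len(intervals)
```

As the text notes, this average must be recomputed for each product, since revision behaviour differs from one product to another.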

12.4.2 The theoretical approach to the interest rate risk on floating rate
products, through the net current value
The problem of floating rates is a complex one, as we are often looking at demand
products or products without maturity dates, on which banking clients carry out arbitrage
between different investment products. In addition, the floating rates may be regulated
and conditioned by business strategy objectives.
There are very few academic works on the interest rate risk for floating-rate products.
Where these works exist, they remain targeted towards demand savings products. The
academic approach has limited itself to integrating the interest rate risk for floating-rate
contracts by calculating the net current value on the basis of volume and market rate
simulations. This NPV approach is difficult to reconcile with an interest-rate risk analysis
built on the basis of a 'gap'. The NPV analysis gives interesting theoretical information
on the alteration to the value of the balance sheet items, as well as accounting and market
values. However, the NPV does not give any information on the variation in the bank's
margin following a change in rates. The interest-rate risk on 'retail' products is often a
margin variation risk. The NPV analysis is therefore of limited interest for the 'retail'
section of the balance sheet (deposits and credits), in which most of the floating-rate
products are found. Ausubel1 was the first to calculate the net current value of floating
demand deposits by using a deterministic process for volumes and rates.
More recently, Selvaggio2 and Hutchinson and Pennachi3 have used a specific model
of the reference monetary rate, taking a square-root stochastic process with mean
reversion. Other stochastic models have also been used, such as those of Heath, Jarrow
and Morton4 and of Hull and White5 in the work of Sanyal.6
The hypotheses necessary for this methodology are in fact postulates:

• This approach considers that demand products consist of short-term flows. This has
serious consequences, as the discounting coefficients used for the NPV are essentially
short-term market rates. This has the effect of denying the existence of 'replicating
portfolios'7 of long maturity in demand products. In a replicating portfolio, stable
long-term demand contracts cannot be discounted at a short-term rate.
• The construction methods for the models suggest three stages. First, an econometric
link must be established between the historical volume of the product and the monetary
1 Ausubel L., The failure of competition in the credit card market, American Economic Review, 1991, pp. 50–81.
2 Selvaggio R., Using the OAS methodology to value and hedge commercial bank retail demand deposit premiums, The
Handbook of Asset/Liability Management, edited by F. J. Fabozzi and A. Konishi, 1996.
3 Hutchinson D. and Pennachi G., Measuring rents and interest rate risk in imperfect financial markets: the case of retail
bank deposits, Journal of Financial and Quantitative Analysis, 1996, pp. 399–417.
4 Heath D., Jarrow R. and Morton A., Bond pricing and the term structure of interest rates: a new methodology for
contingent claims valuation, Econometrica, 1992, pp. 77–105.
5 Hull J. and White A., Pricing interest rate derivative securities, Review of Financial Studies, 1990, pp. 573–92.
6 Sanyal A., A continuous time Monte Carlo implementation of the Hull and White one-factor model and the pricing of core
deposits, unpublished manuscript, December 1997.
7 The 'replicating portfolio' suggests breaking down a stock (for example, total demand deposits at moment t) into flows,
each with a specific maturity date and nominal value. This concept is the subject of the next development.
Techniques for Measuring Structural Risks in Balance Sheets 303

reference rate. Next, a Monte Carlo simulation will specify the monetary rates. This
stage uses the Hull and White discrete model. The total of these possible rates will allow
the future volumes to be defined and discounted using the same rates. The mathematical
expectation of the current values obtained represents the NPV, for which the expression
is as follows:

$$\mathrm{NPV}_0 = E_0\left[\sum_t \frac{D_t\,(r_t - s_t)}{(1 + r_0)\cdots(1 + r_t)}\right]$$


E0 is the mathematical expectation operator at time 0.
Dt is the nominal total of deposits at time t.
r0, . . . , rt are the short-term rates for times 0, . . . , t.
st is the rate at which the product pays at time t.
rt − st is the definition of the spread.
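As a sketch of this NPV calculation, the following Monte Carlo simulation implements the expectation above under toy assumptions that are not taken from the text: a random-walk short rate, a deposit rate set a fixed 150 basis points below the market rate, and volumes that react slightly to rate moves.

```python
import random

random.seed(0)

def simulate_npv(n_paths=2000, horizon=10, r0=0.04, d0=100.0):
    """Monte Carlo sketch of NPV_0 = E_0[ sum_t D_t (r_t - s_t) / ((1+r_0)...(1+r_t)) ].

    All dynamics below are illustrative assumptions, not the models cited in
    the text (Hull-White, square-root processes, etc.).
    """
    npvs = []
    for _ in range(n_paths):
        r, d, value, discount = r0, d0, 0.0, 1.0
        for t in range(horizon):
            discount *= 1.0 + r                 # cumulative (1+r_0)...(1+r_t)
            s = max(r - 0.015, 0.0)             # deposit rate: market rate less a spread
            value += d * (r - s) / discount     # discounted margin flow at time t
            r = max(r + random.gauss(0.0, 0.005), 0.0)   # next short rate (random walk)
            d *= 1.0 - 2.0 * (r - r0)           # volumes shrink when rates rise
        npvs.append(value)
    return sum(npvs) / len(npvs)

npv0 = simulate_npv()
```

Note how the short rate serves both to define the spread and to discount the flows: this is precisely the postulate criticised below, which denies long-maturity replicating portfolios inside demand products.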

This spread is the difference between a market reference rate and the contract rate. The
concept of spread is an interesting one; it is an average margin, generally positive, for
creditor rates in relation to market rates. The postulate of the approach, however, is to
consider that the spread is unique and one-dimensional, as it is equal by definition to the
difference between the monetary market rate and the contract rate. Here again we find
the postulate that considers that the demand product flows are short-term flows. A spread
calculated only on the short-term flows would have the effect of denying the existence of
the long-term 'replicating portfolio' on the products without maturity dates.
In the absence of a precise definition of reference rates, and with the aim of
generalising for products that are not necessarily demand products, the spread should be
calculated for the whole of the zero-coupon curve range. Our approach uses the static and
dynamic spread.

12.4.3 The behavioural study of rate revisions

Static and dynamic spread
It must be remarked that there is no true theory of floating rates. The revision of rates
is actually a complex decision that clearly depends on the evolution of one or more
market interest rates. These market and contractual rates allow a spread or margin, which
should be profitable to the bank, to be calculated. The difference will depend on the
segmentation of clients into private individuals, professionals and businesses and on the
commercial objectives particular to each product. When the margin falls, the duration of
the fall in the spread prior to the adjustment or revision of rates may be longer or shorter.
Following this kind of drop in margin, certain product rates will be adjusted more quickly
than others, because of the increased sensitivity of the volumes to variations in rates and
depending on the competition environment. This dynamic, specific to each product and to
each bank, is difficult to model. It can, however, be affirmed that there are three types of
explanatory variable:

• the time variable between two periods of revision, for respecting the diachronic

• the static margins or static spreads, calculated at regular intervals. We take a
fortnightly period to calculate this static spread:

$$SS = r_t - s_t$$

• a selective reduction of a static margin will not by itself bring about a revision
in rates. The revision of rates will be effective only if the fall in the static spread is
confirmed and intensifies. This dynamic spread or DS8 can be obtained by calculating
the area between the floating rate and the zero-coupon rate between two revision
periods. This area allows the sensitivity of the bank's products to rates, as well
as the bank's business policy, to be taken into account. In our analysis, the volume is
an endogenous variable.

$$DS = \int_{T=1} (r_t - s_t)\,dt$$
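The two spreads can be illustrated with a short calculation on invented rate series (in basis points), approximating the integral by a cumulative sum of fortnightly static spreads:

```python
# Illustrative series only: zero-coupon market rate r_t and contract rate s_t,
# in basis points, observed fortnightly. The contract rate is revised at t = 4.
market_rate = [950, 960, 975, 990, 1000, 980]   # r_t
retail_rate = [800, 800, 800, 800, 850, 850]    # s_t

# Static spread SS_t = r_t - s_t at each observation date
static_spread = [r - s for r, s in zip(market_rate, retail_rate)]

# Dynamic spread DS_t: accumulation of the static spread since the last
# revision (a discrete stand-in for the area between the two rate curves)
dynamic_spread, acc = [], 0
for t, ss in enumerate(static_spread):
    if t > 0 and retail_rate[t] != retail_rate[t - 1]:
        acc = 0                      # a revision restarts the accumulation
    acc += ss
    dynamic_spread.append(acc)
```

The dynamic spread grows while the margin deterioration persists and is reset at each revision, which is exactly the behaviour the logistic model below tries to explain.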

After analysing the correlations between the zero-coupon rates, we have calculated the
static and dynamic spreads fortnightly, over a historical period running from 1991 to 1999
(see the example on investment credits on the CD-ROM).

Data analysis method
The scheme shown in Figure 12.1 allows the problem to be displayed. On the x-axis we
have the fortnightly time scale, which here begins on 1 January 1991. The y-axis shows
the rates of interest expressed as basis points and the various changes in the zero-coupon
rates. More specifically, the rates used are 3 months, 2 years, 5 years and 10 years, split
into periods of 15 days.
The stepped curve corresponds to the historical rates for the floating-rate product
presented, the investment credits granted to the liberal professions by a bank. The aim







[Figure 12.1 History of floating rates on investment credits and a few market rates. Legend: retail rate; market rates at 3 months, 2 years, 5 years and 10 years.]

8 Or cumulative static margins between two revision periods.

of the method is to dissociate the fortnightly periods of equilibrium from the periods
of non-equilibrium, which are in fact the periods situated immediately prior to the rate
revisions. The method consists of three stages:

• Analysis of the correlations on the rate curve.
• Analysis of the canonical correlations on the explanatory variables.
• The logistic regression on the variables selected.

A. Analysis of correlations on zero-coupon rates
The first stage is a simple analysis of the correlations between the various maturity dates
of the zero-coupon rates.9 Thanks to the matrix of correlations, rates that are closely
correlated are excluded from the analysis. The other rates, when retained, are used to
calculate the static and dynamic spreads. These spreads are defined in two different ways.
The static margins are first of all calculated by taking the difference between the annual
floating contract rate and the annual zero-coupon rate for a maturity date (example: static
margin 'A' 3 months, 6 months etc.), but also by transforming the margin obtained into
a proportional two-weekly rate (static margin 'B' 3 months, 6 months etc.). The space
between analyses, or the period, is 15 days in the study.
Dynamic margins, meanwhile, are obtained by adding the simple two-weekly interest
period by period since the last change of rate (dynamic margin 'A') and by calculating
the compound interest on the two-weekly margin at moment t since the last moment of
revision (dynamic margin 'B').
The static and dynamic margins are calculated on the basis of the zero-coupon rates.
The information on the coupon curves can easily be found on Bloomberg (pages: State
bond rate curves). It is not a good idea to use the Strips curves (for State bonds separated
to create a structured zero-coupon product). These curves are difficult to use as they
suffer, among other things, from liquidity problems.
To calculate the zero-coupon curves, we have divided the State bond curves into
coupons using the classic 'step-by-step' method. Up to one year, we have taken the
bank offered rate or BOR.

Description of the step-by-step method
The following bullet rates apply after interpolation (linear and otherwise):

• 1 year = 10 %
• 2 years = 10.10 %
• 3 years = 10.20 %
• 4 years = 10.30 %

The zero-coupon rate at one year is 10 %, that is, ZCB 1 year.
The zero-coupon rate at two years is 10.105 %. When the actuarial rates are used, we
have in fact the following equality for a bond at par maturing in two years:

$$100 = \frac{10.10}{1 + \mathrm{ZCB}_{1\,\mathrm{year}}} + \frac{100 + 10.10}{(1 + \mathrm{ZCB}_{2\,\mathrm{years}})^2}$$

$$\mathrm{ZCB}_{2\,\mathrm{years}} = \left(\frac{100 + 10.10}{100 - \dfrac{10.10}{1 + \mathrm{ZCB}_{1\,\mathrm{year}}}}\right)^{1/2} - 1 = 10.105\,\%$$

9 A matrix of correlations is obtained: correlation between 3-month and 6-month rates, 3-month and 1-year rates, 3-month
and 2-year rates, etc.
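The step-by-step calculation above generalises to any number of maturities; the following sketch reproduces the worked example (the four par yields are those given in the text):

```python
def bootstrap_zero_coupon(par_yields):
    """Step-by-step ('bootstrap') extraction of zero-coupon rates from the par
    yields of annual-coupon state bonds, as in the worked example."""
    zcb = []
    for n, y in enumerate(par_yields, start=1):
        coupon = 100.0 * y
        # Present value of the coupons of years 1..n-1, discounted at the
        # zero-coupon rates already extracted
        pv_coupons = sum(coupon / (1.0 + z) ** (k + 1) for k, z in enumerate(zcb))
        # Solve 100 = pv_coupons + (100 + coupon) / (1 + z_n)^n for z_n
        z_n = ((100.0 + coupon) / (100.0 - pv_coupons)) ** (1.0 / n) - 1.0
        zcb.append(z_n)
    return zcb

rates = bootstrap_zero_coupon([0.10, 0.1010, 0.1020, 0.1030])
# rates[0] is 10 % and rates[1] is close to the 10.105 % found above
```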
Method of calculating static and dynamic margins
In the Excel sheets on the CD-ROM, the retail and zero-coupon rates are expressed in
basis points. As we have no precise theoretical indications on the definition of the margins,
we have calculated the two types of margin using two different methods of calculation.
The following example will allow a better understanding of how the Excel sheets are
calculated, that is:

• At t0, ZCB = 950 and the rate = 800 basis points.
• At t1, ZCB = 975 and the rate = 800 basis points.
• t is a fortnightly period.

For the static margins:

• First method: the difference in basis points between the zero-coupon rate and the
repricing rate is calculated, hence 950 − 800 = 150 at t0 and 975 − 800 = 175 at t1.
• Second method: 0.0150 (respectively 0.0175) is the differential of the annual rate,
converted into a proportional fortnightly rate (26 periods):

$$\left[\left(1 + \frac{0.0150}{26}\right) \times 100 - 100\right] \times 100 = 5.769 \text{ at } t_0$$

$$\left[\left(1 + \frac{0.0175}{26}\right) \times 100 - 100\right] \times 100 = 6.73 \text{ at } t_1$$

This second method allows the dynamic margin to be calculated on another scale of
values, as the data are not centred and reduced (mean 0; standard deviation 1).

We are converting the spread or rate differential into a proportional fortnightly
rate (52 weeks/2 = 26). This is the 'gain' in basis points for the bank over a
fortnightly period.
There are two methods of calculation for dynamic margins.

• First method:

$$\left[\left(1 + \frac{0.0150}{26}\right) \times 100 - 100\right] \times 100 = 5.769 \text{ at } t_0$$

The 'gain' for the bank over a fortnightly period.

$$\left[\left(1 + \frac{0.0175}{26}\right)^2 \times 100 - 100\right] \times 100 = 13.46 \text{ at } t_1$$

The 'gain' capitalised by the bank over two periods from the rate differential noted in
the second period.
• Second method:

$$\left[\left(1 + \frac{0.0150}{26}\right) \times 100 - 100\right] \times 100 = 5.769 \text{ at } t_0$$

The 'gain' for the bank over a fortnightly period.

$$\left[\left(1 + \frac{0.0150}{26}\right) \times 100 - 100\right] \times 100 + \left[\left(1 + \frac{0.0175}{26}\right) \times 100 - 100\right] \times 100 = 12.499 \text{ at } t_1$$

The 'gain' for the bank over two periods from the rate differentials noted for the first and
second periods.
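The four results above can be reproduced as follows (the small differences from the printed values 13.46 and 12.499 come from rounding in the book):

```python
def fortnightly_margin(annual_spread):
    """Second-method static margin: convert an annual rate differential into a
    proportional fortnightly (26-period) margin, expressed in basis points."""
    return ((1 + annual_spread / 26) * 100 - 100) * 100

ss_t0 = fortnightly_margin(0.0150)   # about 5.769 at t0
ss_t1 = fortnightly_margin(0.0175)   # about 6.731 at t1

# Dynamic margin, first method: compound the latest differential over two periods
dm1_t1 = ((1 + 0.0175 / 26) ** 2 * 100 - 100) * 100   # about 13.47

# Dynamic margin, second method: add the period-by-period simple margins
dm2_t1 = ss_t0 + ss_t1                                # about 12.5
```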

Analysis of correlations between changes in zero-coupon rates
The static and dynamic margins are calculated only on the least correlated points of
the zero-coupon curve. In fact, the calculation of the margins on closely correlated points
contributes nothing in terms of information, as the margins obtained will be similar.
The process is a simple one. The currency curves are held in Excel, and SAS (the
statistical software) accepts the data through simple copying and pasting. For the analysis
of the correlations between curves, the SAS algorithm must be programmed, with a3m
being the market rate at 3 months, b6m the rate at 6 months, c1a the rate at 1 year etc.,
as can be found on the CD-ROM.
The static and dynamic margins for products in BEF/LUF have been calculated on the
basis of the least correlated points, that is: 3 months, 2 years, 4 years, 6 years and 10 years.
The classical statistical procedure uses correlation tests to exclude the correlated
variables from the analysis (example: the non-parametric Spearman test). Here, the procedure is
different, as the rates are all globally correlated.
Taking the rates least correlated with each other allows the size of the margin
calculations and procedures to be reduced.
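A minimal sketch of this selection stage (the threshold and the rate histories are invented; the study itself programs the analysis in SAS):

```python
import math

def pearson(x, y):
    """Plain Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def select_least_correlated(curves, threshold=0.95):
    """Greedy sketch: keep a tenor only if its correlation with every tenor
    already kept stays below the threshold."""
    kept = []
    for name, series in curves.items():
        if all(abs(pearson(series, curves[k])) < threshold for k in kept):
            kept.append(name)
    return kept

# Illustrative rate histories in basis points; values are invented.
curves = {
    "a3m": [900, 910, 905, 880, 870, 860],
    "b6m": [905, 915, 910, 884, 874, 865],   # nearly identical to a3m: excluded
    "c2a": [800, 780, 820, 790, 850, 830],
}
selected = select_least_correlated(curves)
```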

B. Canonical correlation analysis
The second stage uses the little-known technique of canonical correlation analysis to make
an optimal selection of static and dynamic margins.
Canonical correlation analysis is carried out on the basis of canonical analysis, which
is now an old concept, having first been introduced in 1936 by Hotelling.10
The method is very productive in terms of theory, as it includes most forms of data analysis
as special cases. The method is currently available as an algorithm (SAS software) but
is not frequently used because of problems with interpreting and using the results.
Canonical analysis can be used when one set of variables is linked linearly to another. In
our study, the static margins are linked linearly to the margins or dynamic spreads. When
there are two sets of variables linked linearly by canonical analysis, some variables may
be excluded if there is an opposite sign between the standardised canonical coefficient and
the sign of the correlation between the variable and the canonical factor. The excluded
variables are known as 'suppresser variables'.

10 Hotelling H., Relations between two sets of variates, Biometrika, 1936, pp. 321–77.

When there is a linear link between the two series of variables to be selected, tests11 have
shown that the canonical correlation method is more suitable than a simple selection built
on the basis of statistical correlations. The canonical correlations are shown in Appendix 5.
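The suppresser-variable rule itself is easy to express in code. The sketch below applies it to the three static margins of Table 12.6 for which both numbers survive; in practice the coefficients and correlations would come from a canonical correlation routine such as SAS PROC CANCORR.

```python
# (standardised canonical coefficient, correlation with the canonical variable)
# for the selected static margins, taken from Table 12.6.
static_margins = {
    "Margin A, 3 months": (-0.3474, -0.7814),
    "Margin B, 3 months": (-1.0656, -0.7861),
    "Margin A, 10 years": (0.1520, 0.7160),
}

def is_suppresser(coefficient, correlation):
    """A variable is excluded as a 'suppresser' when the standardised canonical
    coefficient and the variable-canonical correlation have opposite signs."""
    return coefficient * correlation < 0

selected = [name for name, (c, r) in static_margins.items()
            if not is_suppresser(c, r)]
```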

Examples of investment credits (CD-ROM)
For investment credits, the static margins have been calculated on the 3-month, 2-year,
5-year and 10-year rates. Analysis of the canonical correlations on the static and dynamic
differences gives us the following linear combination for the highest eigenvalue, in a
fortnightly study from 1991 to 1999.
Table 12.6 shows the results for the static margins.
For the next stage in the method, we will select the static spreads: Margin A 3 months,
Margin B 3 months and Margin A 10 years.
Table 12.7 shows the results for the dynamic margins.
For the next stage in the method, therefore, we will not select the dynamic spreads:
margin A 2 years, margin B 2 years and margin A 5 years.
The squared canonical correlation for ξ1, η1 is good, as it totals 0.85353. This
value is significant at the 1 % threshold according to the F-statistic. On the other hand, the

Table 12.6 Canonical correlations (static margins)

Static spreads        ξ1 first canonical factor   Correlation between margin     Suppresser
                                                  and canonical variable         variables
Margin A, 3 months    −0.3474                     −0.7814
Margin B, 3 months    −1.0656                     −0.7861
Margin A, 2 years     0.9069                                                     Yes
Margin B, 2 years     0.0794                                                     Yes
Margin A, 5 years     0.5411                                                     Yes
Margin B, 5 years     0.5238                                                     Yes
Margin A, 10 years    0.1520                      0.7160
Margin B, 10 years    0.7208                                                     Yes

Table 12.7 Canonical correlations (dynamic margins)

Dynamic spreads               η1 first canonical factor   Correlation between margin     Suppresser
                                                          and canonical variable         variables
Dynamic margin A, 3 months    −2.1657                     −0.5231
Dynamic margin B, 3 months    −0.4786                     −0.4433
Dynamic margin A, 2 years     2.0689                                                     Yes
Dynamic margin B, 2 years     0.3585                                                     Yes
Dynamic margin A, 5 years     0.2556                                                     Yes
Dynamic margin B, 5 years     0.1665                      0.1360
Dynamic margin A, 10 years    0.1505                      0.5547
Dynamic margin B, 10 years    0.0472                      0.5149

11 In this regard, we will mention the applied works by: Cooley W. W. and Lohnes P. R., Multivariate Data Analysis,
John Wiley & Sons, 1971; Tatsuoka M. M., Multivariate Analysis, John Wiley & Sons, 1971; Mardia K. V., Kent
J. T. and Bibby J. M., Multivariate Analysis, Academic Press, 1979; or Damel P., 'La modélisation des contrats bancaires à
taux révisable: une approche utilisant les corrélations canoniques', Banque et Marchés, mars–avril 1999.

hypothesis H0 of the absence of correlation between λ1 (the first eigenvalue) and λ2 (the
second eigenvalue) is verified with a probability of 0.9999 on the basis of the Wilks
lambda test. The canonical factors ξ1 and η1 are therefore of good quality.
Tables 12.6 and 12.7 identify the 'suppresser variables', that is, the variables excluded
from the analysis, as we have a contradictory sign between the coefficient on the
canonical axis and the sign of the correlation.
This method allows the variables, belonging to two analytical groups between which a
linear relation can be established, to be chosen in the optimal way.
The following stage is the use of logistic regression as an explanatory model for
the differentiation between the periods of equilibrium (absence of revision) and periods
of interruption (periods with a change of rate). The logistic model is constructed on the basis
of the static and dynamic spreads and of time. The model is also particular to each product
and to each bank, for the reasons stated above.

C. Logistic regression
Logistic regression is a binomial model of conditional probability, well known and in frequent
use (see Appendix 6). The variable to be explained takes the value 0 in a period of
equilibrium and 1 in a period of non-equilibrium (the fortnightly period before the change
in rate). The model is optimised using the Newton–Raphson nonlinear iterative method.
Our example contains nine changes of rate (value 1) and 188 periods of equilibrium.
The model is adjusted in the classical way by excluding the variables that do not differ
significantly from 0 (χ2 test). We use the concept of pairs to illustrate the convergence
between the observed reality (periods 0 and 1) and the periods 0 and 1 given by the logistic
model equation. A pair (observation 0 of equilibrium and observation 1 of interruption)
will be concordant if the probability of a change in rate at the first observation is less than
the probability at the second by more than 0.02. Otherwise, the pair will be discordant.
A pair will be uncertain (tied) when the difference between the two probabilities is small,
less than 0.02 apart. The rates of concordance, uncertainty and discordance for
the pairs are calculated on the basis of the total number of possible pairs combining an
observation of equilibrium (value 0) with an interruption observation (value 1).
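The pair-counting rule described above can be sketched as follows (the fitted probabilities are invented; the study itself has 188 equilibrium and 9 interruption observations):

```python
def pair_association(p_equilibrium, p_interruption, tol=0.02):
    """Concordance count over every (equilibrium, interruption) pair of fitted
    probabilities of a rate change, as described in the text."""
    concordant = discordant = tied = 0
    for p0 in p_equilibrium:          # observed 0: no revision
        for p1 in p_interruption:     # observed 1: revision
            if p1 - p0 > tol:
                concordant += 1
            elif p0 - p1 > tol:
                discordant += 1
            else:
                tied += 1             # probabilities less than tol apart
    total = concordant + discordant + tied
    return concordant / total, discordant / total, tied / total

# Illustrative fitted probabilities for three 0-periods and two 1-periods
c, d, t = pair_association([0.01, 0.05, 0.10], [0.90, 0.95])
```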

Repricing model on professional investment credits

Optimisation of logistic regression
Regression is optimised in the classical way by excluding, step by step, the variables
that do not differ significantly from zero (Pr > χ2). The exclusion of variables is conditioned
in all cases by the degree of adjustment of the model. The rate of concordance between the
model and the observed reality must be maximised. The SAS output will be the association
of predicted probabilities and observed responses; concordant: 97.9 %.
In the following example (Table 12.8), the variable Mc10y has a probability of 76.59 %
of being statistically zero. Excluding it, however, would lead to a deterioration in the rate of
concordance between the observations (repricing/non-repricing) and the forecasts of the model
(repricing/non-repricing). This variable must therefore remain in the model.
There are other criteria for measuring the performance of a logistic regression, such
as the logarithm of the likelihood. The closer the log-likelihood is to zero, the better the
adjustment of the model to the observed reality (−2 log L in the SAS output). The
log-likelihood can also be expressed through McFadden's R2: R2 = 1 − (−2 log L intercept
and covariates)/(−2 log L intercept only).
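As a small worked example of this criterion (the two deviances are hypothetical):

```python
def mcfadden_r2(neg2logl_intercept_only, neg2logl_full):
    """McFadden pseudo-R2 from the two -2 log L values reported by SAS:
    R2 = 1 - (-2 log L intercept and covariates) / (-2 log L intercept only)."""
    return 1.0 - neg2logl_full / neg2logl_intercept_only

# Hypothetical deviances: a full model fitting far better than the intercept alone
r2 = mcfadden_r2(80.0, 20.0)
```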
Table 12.8 Logistic regression

Variables   DF   Parameter   Standard   Wald         Prob over    Odds
                 estimate    error      chi-square   chi-square   ratio
Constant    1    35.468      12.1283    8.5522       0.0035
Time        1    −0.2670     0.2500     1.1404       0.2856       0.766
M3m         1    0.3231      0.1549     4.3512       0.0370       1.381
Ma3m        1    −5.9100     3.4407     2.9504       0.0859       0.003
Ma10y       1    0.9997      0.7190     1.9333       0.1644       2.718
Mc3m        1    0.0335      0.0709     0.2236       0.6363       1.034
Mac3m       1    −0.0731     0.0447     2.6772       0.1018       0.929
Mac5y       1    0.1029      0.1041     0.9766       0.323        1.108
Mc10y       1    0.0227      0.0762     0.0887       0.7659       1.023
Mac10y      1    −0.1146     0.102      1.2618       0.2613       0.892

Association of predicted probabilities and observed responses
Concordant = 97.9 %
Discordant = 2.1 %
Tied = 0 % (1692 pairs)

In the model, the probability of a change in rate increases with:

• time;
• the fall in the static spread A at 3 months;
• the rise in the static spread B at 3 months;
• the fall in the static spread A at 10 years;
• the slowing of the rise in the dynamic spreads A 3 months, B 5 years and A
10 years;
• the rise in the dynamic margins B 3 months and B 10 years.
Displaying the model
For each model, the linear combination on the historical data must be programmed. This
allows the critical value of the model, needed for dissociating the repricing periods
from the periods of equilibrium, to be determined. As the dissociation is not 100 %, there
is no objective value. The critical value chosen conditions the statistical errors of the first
and second kind. In the example, the value 1.11 allows almost all the repricings to be
captured without the model anticipating the actual repricing periods by much (see
model and critical value on the CD-ROM).
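The dissociation step then reduces to a simple threshold on the model score (the scores below are invented; 1.11 is the critical value quoted in the text):

```python
# Apply the fitted linear combination to each fortnightly observation and flag
# a period as 'repricing' when the score exceeds the chosen critical value.
CRITICAL_VALUE = 1.11

scores = [0.2, 0.5, 1.4, 0.9, 1.2, 0.3]   # illustrative model scores
flags = [1 if s > CRITICAL_VALUE else 0 for s in scores]
```

Raising the critical value trades errors of the first kind for errors of the second kind, which is exactly the choice left to the analyst in the text.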
The method presented was applied to all the floating-rate products of a bank, fortnightly,
over a maximum of nine years in the period 1991 to 1999, depending on the
historical data available and the creation date of the products. The results are encouraging,
as the rates of convergence between the models and the observed reality are, with just a few
exceptions, all over 90 %.
The classic method, based on the choice of dynamic and static spreads through simple
statistical correlation, has also been tested. This method shows results very far removed from
those obtained using the method proposed, as the rate of concordance of pairs was less
than 80 %.
12 The odds ratio is equal to the exponential of the parameter estimate: eb. A variation of one unit in the variable (here
time and the spreads) alters the odds of 'repricing' by a factor of eb.
Use of the models in rate risk management
This behavioural study allows the arbitrary rate-change conventions to be replaced to
good advantage. Remember that the conventions in the interest-rate gaps often take the
form of a simple calculation of an average for the periods during which rates are not
changed. Working on the hypothesis that the bank's behaviour is stable, we can use each
model prospectively by calculating the static and dynamic spreads on the basis of the
sliding forward rates, for example over one year. This floating-rate integration method
gives us two cases:

• The rate change occurs between today's date and one year from now. In this case, the
contract revision date will be precisely that date.
• The rate change is not probable over a one-year horizon. In this case, the date of revision
may be put back to the most distant prospective date (in our example, in one year).

Naturally, using an interest-rate gap supposes in the first instance that the rate-change
dates are known for each contract, but also that the magnitude of the change can be
anticipated in order to assess the change in the interest margin. Our method satisfies the
first condition but does not directly give us the magnitude of the change. In fact, between
two repricing periods we see a large number of situations of equilibrium. In practice,
the ALM manager can put this free space to good use to optimise the magnitude of the
change and profit from a long or short balance-sheet position. This optimisation process
is made easier by the model. In fact, a change with too low a magnitude will necessitate
a further change, while a change with too high a magnitude may be incompatible with
the historical values of the model (see the statistics for magnitude of changes).
Modelling the repricing improves knowledge of the rate risk, and optimises the simulations
on the interest margin forecasts and the knowledge of the market risk through VaR.

Remarks and criticisms
Our behavioural approach does, however, have a few weak points. The model specifies the
revision dates without indicating the total change in terms of basis points. It is not a margin
optimisation model. Another criticism that can be levelled relates to the homogeneity of
the period studied. A major change in one or more of the parameters set out previously
could disrupt or invalidate the model estimated. Finally, this empirical method cannot be
applied to new floating-rate products.
Despite these limitations, the behavioural approach to static and dynamic spreads, based
on the analysis of canonical correlations, gives good results and is sufficiently flexible to
explain changes in rates on very different products. In fact, in our bank's balance sheet,
we have both liability and asset products, each with their own specific client segmentation.
The behavioural method allows complex parameters to be integrated, such as the business
policy of banks, the sensitivity of adjustment of volumes to market interest rates,
and the competition environment.

In asset and liability management, a measurement of the monthly VaR for all the assets
as a whole is information of first importance on the market risk (interest rate and exchange
rate). It is a measurement that allows the economic forecasts associated with the risk to be assessed.

ALM software packages most frequently use J. P. Morgan's interest and exchange
rate variance–covariance matrix, as the information on duration necessary for making the
calculation is already available. It is well known that products without a maturity date
are a real stumbling block for this type of VaR and for ALM.
There is relatively little academic work involving the attribution of maturity
dates to demand credit or debit products. The aim of 'replicating portfolios' is to attribute
a maturity date to balance-sheet products that do not have one. These portfolios combine
all the statistical or conventional techniques that allow the position of a product without a
maturity date to be converted into an interwoven whole of contracts that are homogeneous
in terms of liquidity and duration.
'Replicating portfolios' can be constructed in different ways. If the technical environment
allows, it is possible to construct them contract by contract, defining development
profiles and therefore implicit maturity dates for 'stable' contracts. Where necessary, on
the basis of volumes per type of product, the optimal value method may be used. Other
banks make do with overly arbitrary definitions of replicating portfolios.

12.5.1 Presentation of replicating portfolios
Many products do not have a certain maturity date, including, among others, the following
cases:

• American options, which can be exercised at any time, outside the scope of the
balance sheet.
• Demand advances and overdrafts on assets.
• Current liability accounts.

The banks construct replicating portfolios in order to deal with this problem. This kind of
portfolio uses statistical techniques or conventions. The assigned aim of all the methods
is to transform an accounting balance of demand products into a number of contracts
with differing characteristics (maturity, origin, depreciation profile, internal transfer rate
etc.). At the time of the analysis, the accounting balance of the whole contract portfolio is
equal to the accounting balance of the demand product. Figures 12.2 to 12.4 offer a better
understanding of replicating portfolio construction.
The replicating portfolio presented consists of three different contracts that explain the
accounting balances at t−1, t0 and t1. The aim of the replicating portfolio is to represent
the structure of the flows that make up the accounting balance.
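The defining constraint, namely that the contracts must sum to the accounting balance at every date, can be checked with a short sketch (the per-contract amounts are invented so as to echo the totals 60, 90 and 80 of Figures 12.2 and 12.4):

```python
# Outstanding amount of each constituent contract at t-1, t0 and t1
# (illustrative values; units: thousands of millions).
contracts = {
    "contract 1": [30, 20, 0],
    "contract 2": [10, 30, 40],
    "contract 3": [20, 40, 40],
}
accounting_balance = [60, 90, 80]

# At each observation date, the replicating portfolio must reproduce the
# accounting balance of the demand product exactly.
replicated = [sum(c[t] for c in contracts.values()) for t in range(3)]
assert replicated == accounting_balance
```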

[Figure 12.2 Accounting balances on current accounts (bar chart, thousands of millions, at t−1, t0 and t1)]
[Figure 12.3 Contracts making up the replicating portfolio (three contracts, outstanding amounts in thousands of millions, from t−1 to t3)]

Figure 12.4 Replicating portfolio constructed on the basis of the three contracts (total balances: 60 at t−1, 90 at t0, 80 at t1)

12.5.2 Replicating portfolios constructed according to convention
To present the various methods, we take the example of current accounts. There are two types of convention for constructing a replicating portfolio. The first type can be described as simplistic; it is used especially for demand deposits with an apparently stable monthly balance. On the basis of this observation, some banks construct the replicating portfolio by applying linear depreciation to the accounting balance at moment t over several months. As the depreciation is linear over several months or even several years, the banking institutions consider that the structure of the flows making up the accounting balance is stable overall in the short term. In fact, only 1/12 of the balance is depreciated at the end of one month (2/12 = 1/6 cumulatively after the second month, etc.) in a replicating portfolio constructed over 12 months.
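The 12-month linear convention can be sketched in a few lines of Python. This is only an illustration of the convention just described, not a bank's actual implementation; the function name and the balance of 120 are ours:

```python
def linear_runoff_schedule(balance, months=12):
    """Replicating portfolio under the simplistic convention: the balance
    amortises linearly, so 1/months falls due each month and the
    cumulative depreciation after m months is m/months of the balance."""
    tranche = balance / months
    return [(m, tranche, m * tranche) for m in range(1, months + 1)]

# A balance of 120 amortised over 12 months: 10 falls due each month,
# 1/12 is depreciated after one month, 2/12 = 1/6 cumulatively after two.
schedule = linear_runoff_schedule(120)
```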
This arbitrary technique, which has no statistical basis, is unsatisfactory, as many current accounts are partially or totally depreciated over one month because of the monthly nature of income.
The second class of conventions covers those considered more sophisticated, which do call in part on statistical studies. Because of the very restrictive hypotheses retained, however, construction of the replicating portfolio remains a matter of convention. For example, two well-known statistical indicators are calculated to assess a volatile item: the arithmetical mean and the monthly standard deviation of the daily balances of all the deposits. The operation is repeated every two months, every quarter etc. in order to obtain the statistical volatility indicators (average, standard deviation) over a temporal horizon that increases from month to month. The interest, of course, lies in making the calculation over several years in order to refine the support of stable resources for long-term uses such as credit facilities.
Thanks to these indicators it is possible, using probability theory, to calculate the portion of the deposits that will be depreciated month by month. For example, to define the unstable portion of deposits for one month, we first calculate the probability that the current account will show a debit balance, given the monthly average and the standard deviation of the totals observed over the month. The probability obtained is equal to the unstable proportion for one month. In the general case, the percentage of deposits depreciated equals Pr[x < 0], with σ the standard deviation over a period (one or two months etc.) of the daily totals of deposits and µ the arithmetical mean of the deposits over the same period.
With this method, part of the deposits is depreciated or deducted each month until depreciation is complete. In other words, the balance sheet is deflated. For example, a demand deposit entry in the balance sheet of EUR 10 000 million will be broken down into monthly due dates that generally cover several years. Naturally, this convention for constructing a replicating portfolio is more satisfying than a simply arbitrary one. Some serious weaknesses have, however, been noted.
In fact, for a product with a credit balance, the proportion depreciated during the first month is the probability of the balance becoming a debit balance, in view of the monthly arithmetical mean and standard deviation calculated and observed. Under this approach, instability amounts to the probability of having a debit balance (for a product in liabilities) or a credit balance (for a product in assets). Credit positions liable to be drawn down to a considerable extent are thus treated as probably stable! This shows the limits of an approach built on the global balance, that is, on the practice of aggregating the total accounting position day by day.
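The depreciation convention above reduces to evaluating Pr[x < 0] under a normal hypothesis. A minimal sketch, assuming the daily balances are distributed N(µ, σ²); the figures below are invented for illustration:

```python
from math import erf, sqrt

def unstable_share(mu, sigma):
    """Pr[x < 0] for a balance assumed normal with mean mu and standard
    deviation sigma: the proportion of deposits treated as unstable."""
    return 0.5 * (1 + erf((0 - mu) / (sigma * sqrt(2))))

# Hypothetical current account: average daily balance 10 000, sigma 4 000.
# The probability of slipping into debit is small, so most of the balance
# is treated as stable for the period.
share = unstable_share(10_000, 4_000)
```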

12.5.3 The contract-by-contract replicating portfolio
The other methods consist of producing more accurate projections for demand products on the basis of statistical analyses. The first prerequisite for a statistical analysis to be consistent is to identify correctly each component that explains the overall development. In other words, the statistical analysis builds up the replicating portfolio account by account and day by day. The portfolio is not built on the daily accounting balance, which aggregates the behaviour of all the accounts. The banks allocate one account per type of product and per client. The account-by-account analysis is more refined as it allows the behaviour of the flows to be identified per type of client.
The account-by-account daily analysis raises technical problems of database constitution, even in a large-system or 'mainframe' environment, because of the volume created by the large number of current or cheque accounts and the need for historical entries.
After the completion of this first stage, considerable thought was applied to defining the concept of stability in theoretical terms. To carry out this work, we used two concepts:

• The first was the method of the account-by-account replicating portfolio. We considered that the balance observed at moment t is the product of a whole set of interwoven accounts with different profiles, cashflow behaviours and nonsimultaneous creation dates.
• The second concept is the stability test, adopted for defining a stable account statistically. The test used is the standardised range or SR. This is a practical test used to judge the normality of a statistical distribution, as it is easy to interpret and calculate. SR measures the extent of the extreme values in the observations for a sample, per unit of dispersion (the standard deviation13). It is expressed as follows:

SR = [max(Xi) − min(Xi)]/σ
This test allows three types of statistical distribution to be identified: a normal or Gaussian distribution, a flat distribution with higher statistical dispersion than that of a normal law, and a distribution with a statistical dispersion lower than that of a normal law.
It can be considered that a demand current account is stable within the third typology: the difference between the extreme values, max(Xi) − min(Xi), is low relative to the standard deviation. The SR statistical test can be carried out with several intervals of confidence, and the test can be programmed with differentiated intervals of confidence. It is preferable to use a wide interval of confidence to judge the daily stability of the account, in order to avoid the problem posed by monthly income payments. In addition, the second condition for daily account stability is the absence of debit balances in the monthly historical period values. In a monthly historical period, it is preferable to take a wider interval of confidence when the history of the deposits shows at least one debit balance, and a narrower interval otherwise.
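The SR statistic itself is straightforward to compute. In the sketch below the classification thresholds are illustrative placeholders, since the text leaves the critical values to the chosen confidence interval:

```python
from statistics import pstdev

def standardised_range(xs):
    """SR = [max(Xi) - min(Xi)] / sigma over the observed balances."""
    return (max(xs) - min(xs)) / pstdev(xs)

def classify(xs, lo=2.0, hi=4.0):
    """Illustrative cut-offs: an SR below `lo` suggests dispersion lower
    than a normal law (the 'stable' third typology); above `hi`, a flat
    distribution; in between, a roughly Gaussian one."""
    sr = standardised_range(xs)
    if sr < lo:
        return "stable"
    return "flat" if sr > hi else "normal-like"

balances = [100, 101, 99, 100, 100]   # a tightly clustered daily series
sr = standardised_range(balances)
```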
After the stable accounts have been identified, we can reasonably create repayment schedules by extending the trends or historical tendencies. On the statistically stable accounts, two major trend types exist. In the upward trend, the deposits are stable over the long term and the total observed at moment t will therefore be depreciated in one amount over a long period, which may be the date of the historical basis. In the downward trend, it is possible, by prolonging the trend, to find out the future date of complete depreciation of the account. The balance of the account at moment t is then depreciated linearly until the maturity date obtained by prolonging the trend.
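For the downward-trend case, prolonging the trend to the date of complete depreciation can be sketched as follows. We assume a hypothetical linear monthly trend; a real implementation would fit the trend statistically:

```python
from math import ceil

def depletion_month(balance_t, monthly_trend):
    """Prolong a downward linear trend to find the first month m at which
    balance_t + m * monthly_trend <= 0, i.e. the implicit maturity date."""
    if monthly_trend >= 0:
        raise ValueError("trend must be negative (downward) to deplete")
    return ceil(balance_t / -monthly_trend)

# A balance of 100 falling by 10 a month is fully depreciated in month 10,
# so the balance at moment t is amortised linearly to that maturity.
maturity = depletion_month(100, -10)
```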
In order to provide an explanation, we have synthesised the conditions of stability in Table 12.9. We have identified four cases. 'SR max' corresponds to a wide interval of confidence, while 'SR min' corresponds to a narrower interval of confidence.

Table 12.9 Stability typologies on current account deposits

Case | Daily stability | Monthly stability | Monthly history | Type of trend | Maturity date
1 | Yes (SR max) | Yes (SR min) | Always in credit | Upward & horizontal | Duration of history of data
2 | Yes (SR max) | Yes (SR min) | Always in credit | Downward | Duration of trend prolongation
3 | Yes (SR max) | Yes (SR max) | At least one debit balance | Generally upward | Duration of history of data
4 | Yes (SR max) | No (SR min) | Always in credit | No trend | Duration of history of data (for historical min. total)

13 There are of course other statistical tests for measuring the normality of a statistical distribution, such as the χ2 test, the Kolmogorov–Smirnov test for samples with over 2000 contracts, and the Wilk–Shapiro test where needed.

The fourth case requires further explanation. These accounts are always in credit balance on the daily and monthly histories, but are not stable on a monthly basis. On the other hand, there is a historical minimum credit balance that can be considered stable. Economists call this liquidity 'liquidity preference'. In this case, the minimum historical total will be placed in the long-term repayment schedule (at the database date). The unstable contracts, or the unstable part of a contract, will have a short-term maturity date (1 day to 1 month).
This method allows better integration of products without maturity dates into the liquidity and rate risk management tools. Based on the SR test and the account-by-account replicating portfolio, it is simple in design and easy to carry out technically.
Specifically, an accounting position of 120 will be broken down as follows: the unstable part will have a maturity date of one day or one month, and the stable part will be broken down two months from the date of the historical period. If the history, and therefore the synthetic maturity dates, are judged insufficient, especially on savings products without maturity dates, it is possible under certain hypotheses to extrapolate the stability level and define a longer maturity period over smaller totals. Suppose the historical period is 12 months and a volume of 100 out of the 130 observed is defined as stable; the maturity period is therefore one year. It is also known that the volatility of a financial variable calculated over a year can be used as a basis for extrapolating the volatility calculated over two years by multiplying the standard deviation by the square root of time: σ2 years = σ1 year · √2.
It can be considered that the stable part diminishes in inverse proportion to the square root of time. The stable part at five years can thus be defined: 100 · 1/√5 = 100 · 0.447 = 44.7 %. We therefore have 30 at one day, 55.27 at one year and 44.73 at five years.
The stability obtained on the basis of a monthly and daily history therefore takes overall
account of the explanatory variables of instability (arbitrage behaviour, monthly payment
of income, liquidity preference, anticipation of rates, seasonality etc.).
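The square-root-of-time extrapolation of the stable part can be checked numerically, reproducing the 44.7 % figure above:

```python
from math import sqrt

def stable_part_at(stable_one_period, t):
    """Extrapolate the stable part over t periods, assuming it diminishes
    in inverse proportion to the square root of time."""
    return stable_one_period / sqrt(t)

# 100 is stable over the one-year history; at five years only
# 100 / sqrt(5) = 44.7 is still treated as stable.
five_year_stable = stable_part_at(100, 5)
```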
In this method, the interest rate is an exogenous variable. The link between changes in stability and interest rates therefore depends on the frequency of the stability analysis. The method allows specific implicit maturity dates to be found while remaining a powerful tool for allocating resources to a product without a maturity date located among the assets. For a liability bank, a good knowledge of the flows will allow resources to be placed over the long term instead of on the interbank market, and therefore provide an additional margin if the rate curve is positive. For an asset bank, this procedure will allow better management of the liquidity risk and the rate risk.
Conversely, this historical and behavioural approach to replicating portfolios poses problems when rate simulations are carried out in ALM. In the absence of an endogenous rate variable, knowledge of the link between rates and the replicating portfolio is limited to history. This last point justifies research into replicating portfolios that include interest rates in the modelling process.

12.5.4 Replicating portfolios with the optimal value method

Presentation of the method
This method was developed by Smithson14 in 1990 according to the 'building block approach' or 'Lego approach'. The method proposes a definition of optimal replicating portfolios
14 Smithson C., A Lego approach to financial engineering. In The Handbook of Currency and Interest Rate Risk Management, edited by R. Schwarz and C. W. Smith Jr., New York Institute of Finance, 1990; or Damel P., 'L'apport de replicating portfolio ou portefeuille répliqué en ALM: méthode contrat par contrat ou par la valeur optimale', Banque et Marchés, mars–avril 2001.

by integrating market interest rates and the anticipated early repayment risk, and considers the interest rate(s) to be endogenous variables. This perspective is much less limited than the previous one when the bank carries out stochastic or other rate simulations on the ALM indicators (VaR, NPV for equity funds, interest margins etc.).
In this method, it is considered that the stable part of a product without a maturity date is a function of simple rate contracts with known maturity dates. Here, the definition of stability is not provided contract by contract but on the basis of daily or monthly accounting volumes. An equation allows optimal representation of the chronological series of the accounting positions. This first point defines a stable part and a volatile part, the latter being the statistical residue of the stability equation.
The volatile part is represented by a short-term bond with a short-term monetary reference rate (such as one month).
The stable part consists of a number of interwoven zero-coupon bonds with reference rates and maturity dates from 3 months to 15 years. The weave defines a refinancing strategy based on the money market and the primary bond market.
The stable part consists of rate products. The advantage of this approach is therefore that the early repayment rate is taken into account together with any 'repricing' of the product, and the volume is therefore linked to the reference interest rates. The model contains two principal equations.

• Volum t represents the accounting position at moment t.
• Stab t represents the stable part of the volume at moment t.
• rrt is the rate for the product at moment t, and taux1m, taux2m etc. represent the market reference rates for maturities of 1 month, 2 months etc.
• µt represents the statistical residual or volatile part of the accounting positions.
• brit represents the interest on a zero-coupon bond position with maturity date i and market reference rate i at time t.
• αi represents the stable part replicated by the brit position.
• The αi sum to 1 (i = 3 months to 15 years).
• mrt represents the portion of the demand product rate that is not a function of the market rate. mrt is also equal to the difference between the average weighted rate obtained from the interwoven bonds and the floating or fixed retail rate. This last point also includes the repricing strategy and the spread, which will be negative on liability products and positive on asset products.

Wilson15 was the first to use this approach specifically for optimal value. His equations can be presented as follows:

Volum t = Stab t + µt    (a)

Volum t · rrt = µt · r1 + Σi αi · brit + mrt + δt    (b)   (i = 3 months, . . . , 15 years)

with the constraint: Σi αi = 1 (i = 3 months to 15 years).

15 Wilson T., Optimal value: portfolio theory, Balance Sheet, Vol. 3, No. 3, Autumn 1994.

Example of replicated zero-coupon position
br6m is a bond with a six-month maturity date and a six-month market reference rate. It will be considered that the stable part in t1 is invested in a six-month bond at the six-month market rate. At t2, t3, t4, t5 and t6 the new deposits (the difference between Stab t−1 and Stab t) are also placed in a six-month bond at the six-month reference market rate of t2, t3, t4, t5 and t6. At t7 the stable part invested at t1 has matured. This stable part and the new deposits will be reinvested at six months at the six-month market rate prevailing at t7. brit functions in the same way for all the reference rates from three months to 15 years.
After econometric adjustment of this two-equation model, αi readily gives us the duration of this demand product. The additivity properties of duration are used: if α1y = 0.5 and α2y = 0.5, the duration of this product without a maturity date will be 18 months.

Econometric adjustment of the equations
A. The stability or definition equation
There are many different forecasting models for chronological series. For increasing accounting volumes, the equation will be different from that obtained for decreasing or sine-wave accounting values. The equation to be adopted will be the one that minimises the error term µ.
Here follows a non-comprehensive list of the various techniques for forecasting a chronological series:

• regression;
• trend extrapolation;
• exponential smoothing;
• autoregressive moving average (ARMA).

Wilson uses exponential smoothing. The stability of the volumes is an exponential function of time:

Stab t = b0 · e^(b1·t) + µt

or, in logarithms,

log Stab t = log b0 + b1 · t + δt

Instead of this arbitrary formula, we propose to define the volumes on the basis of classical methods or recent research into stochastic models specialised in the study of temporal series. These models are much better adapted to estimating temporal series. The ARMA model is a classical model; it considers that the volumes observed are produced by a stationary random process, that is, one whose statistical properties do not change over the course of time.
The moments of the process (that is, the mathematical expectation and the variance) are independent of time, and the process follows a Gaussian distribution. The variance must also be finite. Volumes are observed at equidistant moments (a process in discrete time). We will take as an example the floating-rate demand savings accounts in LUF/BEF from 1996 to 1999, observed monthly (data on CD-ROM). The model takes the form of the recurrence system:
Volum t = a0 + Σi ai · Volum t−i + µt    (i = 1, . . . , p)

where a0 + a1·Volum t−1 + · · · + ap·Volum t−p represents the autoregressive model that is ideally or perfectly adjusted to the chronological series, thus being devoid of uncertainty, and µt is a moving average process:

µt = Σi bi · ut−i    (i = 1, . . . , q)

The ut−i values constitute 'white noise' (non-autocorrelated, centred normal random variables with mean 0 and standard deviation equal to 1). µt is therefore a centred random variable with constant variance. This type of model is an ARMA(p, q) model.
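To make the recurrence concrete, here is a small simulation of the AR part of such a process. The AR coefficients are borrowed from the ARMA(2, 2) fit reported in Table 12.10; the intercept and noise scale are invented. Since the AR coefficients sum to less than 1, the path is stationary and hovers around its long-run mean a0/(1 − a1 − a2):

```python
import random

def simulate_ar2(a0, a1, a2, n, sigma=1.0, seed=0):
    """Simulate Volum_t = a0 + a1*Volum_{t-1} + a2*Volum_{t-2} + eps_t
    with eps_t Gaussian white noise (the AR part of the ARMA recurrence)."""
    rng = random.Random(seed)
    mean = a0 / (1 - a1 - a2)          # long-run level of the process
    xs = [mean, mean]                  # start at the stationary mean
    for _ in range(n):
        xs.append(a0 + a1 * xs[-1] + a2 * xs[-2] + rng.gauss(0, sigma))
    return xs

# AR coefficients close to the fitted model: 0.354 + 0.410 < 1, stationary.
path = simulate_ar2(a0=10.0, a1=0.354, a2=0.410, n=500)
```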

Optimisation of the ARMA(p, q) model
The first stage consists of constructing the model on the observed data without transformation (Volum t).
The first solution is to test several ARMA(p, q) models and to select the model that maximises the usual adjustment criteria:

• The log-likelihood function. Box and Jenkins propose least squares estimators (the R-square, ordinary or adjusted, in the example), identical to the maximum likelihood estimators if the random variables are considered to be normally distributed. This last point is consistent with the ARMA approach.
• AIC (Akaike's information criterion).
• The Schwartz criterion.
• There are other criteria, not referenced in the example (FPE: final prediction error; BIC: Bayesian information criterion; Parzen CAT: criterion of autoregressive transfer function).

The other process consists of constructing the model on the basis of the graphic autocorrelation test. This identification stage takes account of the autocorrelation test with all the possible intervals (t − n). This autocorrelation function must be decreasing or oscillating with damping. In the example, the graph shows, on the basis of the two-sided Student test (t = 1.96), that the one- and two-period intervals have an autocorrelation significantly different from 0 at the confidence threshold of 5 %. The ARMA model will therefore have an AR component equal to two (AR(2)).
This stage may be completed in a similar way by partial autocorrelation, which takes account of the effects of the intermediate values between Volum t and Volum t+r in the autocorrelation. The model to be tested is ARMA(2, 0). The random disturbances in the model must not be autocorrelated; where applicable, the autocorrelations have not been included in the AR part. There are different tests, including the Durbin–Watson
Table 12.10 ARMA (2, 2) model

R-square = 0.7251    Adjusted R-square = 0.6773
Akaike Information Criterion AIC(K) = 43.539
Schwartz Criterion SC(K) = 43.777

Parameter | Estimate | STD error | T-STAT
AR(1) | 0.35356 | 0.1951 | 1.812
AR(2) | 0.40966 | 0.2127 | 1.926
MA(1) | 0.2135E-3 | 0.1078 | 0.0019
MA(2) | −0.91454 | | −15.59
Constant | 0.90774E+10 | 0.7884E+10 | 1.151

Skewness 1.44
Kurtosis 7.51
Studentised range 5.33

non-autocorrelation error test. In the example of the savings accounts, the optimal ARMA model, with normally distributed and non-correlated residuals, is the ARMA(2, 2) model with an acceptable R² of 0.67. This model is stationary, as the sum of the AR coefficients is less than 1.
The ARMA(2, 2) model obtained is shown in Table 12.10. The monthly accounting data and the zero-coupon rates for 1 month, 6 months, 1 year, 2 years, 4 years, 7 years and 10 years can be found on the CD-ROM. The model presented has been calculated on the basis of data from end November 1996 to end February 1999.
If the model is nonstationary (nonstationary variance and/or mean), it can be converted into a stationary model by using integration of order r after a logarithmic transformation: if y is the transformed variable, apply the technique to Δ(Δ(. . . Δ(yt))) (r times) instead of yt, where Δ(yt) = yt − yt−1. We therefore use an ARIMA(p, r, q) procedure.16 If this procedure fails because of nonconstant volatility in the error term, it will be necessary to use the ARCH-GARCH or EGARCH models (Appendix 7).
B. The equation on the replicated positions
This equation may be estimated by a statistical model (such as the SAS/OR procedure PROC NLP), using multiple regression with the constraints

Σi αi = 1 and αi ≥ 0    (i = 3 months to 15 years)

It is also possible to estimate the replicated positions (b) with the single constraint (by using the SAS/STAT procedure)

Σi αi = 1    (i = 3 months to 15 years)

In both cases, the duration of the demand product is a weighted average of the durations. In the second case, it is possible to obtain negative αi values. We then have a synthetic investment-and-loan position on which the duration is calculated.
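The equality constraint can be handled by substitution. As a toy sketch (two interwoven positions instead of the full 3-month to 15-year range, synthetic data, not the SAS procedures named above): with α2 = 1 − α1, the regression reduces to one free parameter.

```python
def fit_two_alphas(y, x1, x2):
    """Least squares for y ~ a1*x1 + a2*x2 under a1 + a2 = 1.
    Substituting a2 = 1 - a1 gives a one-parameter regression of
    (y - x2) on (x1 - x2), solved in closed form."""
    num = sum((yi - bi) * (ai - bi) for yi, ai, bi in zip(y, x1, x2))
    den = sum((ai - bi) ** 2 for ai, bi in zip(x1, x2))
    a1 = num / den
    return a1, 1.0 - a1

# Synthetic check: build y from known weights 0.3 / 0.7 and recover them.
x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [2.0, 1.0, 4.0, 3.0]
y = [0.3 * a + 0.7 * b for a, b in zip(x1, x2)]
alphas = fit_two_alphas(y, x1, x2)
```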
16 Autoregressive integrated moving average.
Table 12.11 Multiple regression model obtained on BEF/LUF savings accounts on the basis of a SAS/STAT procedure (adjusted R-square 0.9431)

Variable | Parameter estimate | Standard error | Prob > |T|
Intercept (global margin) | −92 843 024 | 224 898 959 | 0.6839
F1M (stable part) | 0.086084 | 0.00583247 | 0.0001
F6M (stable rollover) | | 0.05014466 | 0.7573
F1Y (stable rollover) | 0.036787 | 0.07878570 | 0.6454
F2Y (stable rollover) | 0.127688 | 0.14488236 | 0.3881
F4Y (stable rollover) | 3.490592 | 1.46300205 | 0.0265
F7Y (stable rollover) | | 2.94918687 | 0.1399
F10Y (stable rollover) | 1.884966 | 1.63778119 | 0.2627

If α1y = 2.6 and α6m = −1.6 for a liability product, duration = 1 − (1.6/2.6) · 0.5 = 0.69 of a year.
The bond weaves on the stable part have been calculated on the basis of the zero-coupon rates (1 month, 6 months, 1 year, 2 years, 4 years, 7 years, 10 years); see Table 12.11.
Equation (b) is very well adjusted, as R² is 94.31 %. The interest margin is of course negative, as the cost of the resources on liabilities is lower than market conditions. Like Wilson, we consider that the margin between the average rate for the interwoven bonds and the product rate is constant over the period. It could also be considered that the margin is not constant, as the floating rate is not instantaneously re-updated in line with changes in market rates; moreover, the quality of the clients, and therefore the credit spread, is not necessarily constant over the period. The sum of the coefficients associated with the interwoven bond positions is 1.
This multiple linear regression allows us to calculate the duration of this product without a maturity date on the basis of the synthetic bond positions obtained. In the example, the duration obtained from the unstable and stable positions equals 1.42 years.
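The duration arithmetic used throughout this section can be sketched directly: the duration of the demand product is the α-weighted sum of the maturities of the replicated positions, the αi summing to one:

```python
def replicating_duration(positions):
    """Duration of the demand product as the weighted sum of the
    zero-coupon maturities; `positions` is a list of (alpha, years)."""
    assert abs(sum(a for a, _ in positions) - 1.0) < 1e-9
    return sum(a * d for a, d in positions)

# The earlier example: alpha_1y = 0.5 and alpha_2y = 0.5 give 1.5 years,
# i.e. 18 months, for the product without a maturity date.
duration = replicating_duration([(0.5, 1.0), (0.5, 2.0)])
```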

1 Mathematical concepts
2 Probabilistic concepts
3 Statistical concepts
4 Extreme value theory
5 Canonical correlations
6 Algebraic presentation of logistic regression
7 Time series models: ARCH-GARCH and EGARCH
8 Numerical methods for solving nonlinear equations
Appendix 1
Mathematical Concepts1

1.1.1 Derivatives

Definition
The derivative2 of the function f at the point x0 is defined as

f′(x0) = lim(h→0) [f(x0 + h) − f(x0)]/h

if this limit exists and is finite.
If the function f is derivable at every point within an open interval ]a; b[, the derivatives constitute a new function defined within that interval: the derivative function, termed f′.

Geometric interpretations
For a small value of h, the numerator in the definition represents the increase (or decrease) in the value of the function when the variable x passes from the value x0 to the neighbouring value (x0 + h), that is, the length of AB (see Figure A1.1).
The denominator in the same expression, h, is in turn equal to the length of AC. The ratio is therefore equal to the slope of the straight line BC. When h tends towards 0, this straight line BC moves towards the tangent to the graph of the function at point C.
The geometric interpretation of the derivative is therefore as follows: f′(x0) represents the slope of the tangent to the graph of f at the point x0. In particular, the sign of the derivative characterises the type of variation of the function: a positive (resp. negative) derivative corresponds to an increasing (resp. decreasing) function. The derivative therefore measures the speed at which the function increases (resp. decreases) in the neighbourhood of a point.
The derivative of the derivative, termed the second derivative and written f′′, will therefore be positive when the derivative f′ is increasing, that is, when the slope of the tangent to the graph of f increases as the variable x increases: the function is then said to be convex. Conversely, a function with a negative second derivative is said to be concave (see Figure A1.2).

Calculations
Finally, remember the elementary rules for calculating derivatives. First, those relative to operations between functions:

(f + g)′ = f′ + g′
(λf)′ = λ·f′
1 Readers wishing to find out more about these concepts should read: Bair J., Mathématiques générales, De Boeck, 1990; Esch L., Mathématique pour économistes et gestionnaires, De Boeck, 1992; Guerrien B., Algèbre linéaire pour économistes, Economica, 1992; Ortega M., Matrix Theory, Plenum, 1987; Weber J. E., Mathematical Analysis (Business and Economic Applications), Harper and Row, 1982.
2 Also referred to as the first derivative.

Figure A1.1 Geometric interpretation of the derivative (points A, B, C and angle θ on the graph of f between x0 and x0 + h)

Figure A1.2 Convex and concave functions

(fg)′ = f′·g + f·g′
(f/g)′ = (f′·g − f·g′)/g^2

Next, the rule relating to compound functions:

[g(f)]′ = g′(f) · f′

Finally, the formulae that give the derivatives of a few elementary functions:

(x^m)′ = m·x^(m−1)
(e^x)′ = e^x
(a^x)′ = a^x · ln a
(ln x)′ = 1/x
(log_a x)′ = 1/(x · ln a)

Extrema
The point x0 is a local maximum (resp. minimum) of the function f if

f(x0) ≥ f(x)    (resp. f(x0) ≤ f(x))

for any x close to x0.

The extrema within an open interval for a derivable function can be determined thanks to two conditions.

• The first-order (necessary) condition states that if x0 is an extremum of f, then f′(x0) = 0. At this point, called a stationary point, the tangent to the graph of f is therefore horizontal.
• The second-order (sufficient) condition allows the stationary points to be 'sorted' according to their nature. If x0 is a stationary point of f and f′′(x0) > 0, we have a minimum; in the opposite situation, if f′′(x0) < 0, we have a maximum.

1.1.2 Taylor's formula
Consider a function f that one wishes to study in the neighbourhood of x0 (let us say, at x0 + h). One method is to replace this function by a polynomial (a function that is easily handled) of the variable h:

f(x0 + h) = a0 + a1·h + a2·h^2 + · · ·

For the function f to be represented through the polynomial, both must:

• take the same value at h = 0;
• have the same slope (that is, the same first derivative) at h = 0;
• have the same convexity or concavity (that is, the same second derivative) at h = 0;
• and so on.

Also, the number of conditions imposed must correspond to the number of coefficients to be determined within the polynomial. It is easy to see that these conditions lead to:

a0 = f(x0) = f(x0)/0!
a1 = f′(x0) = f′(x0)/1!
a2 = f′′(x0)/2 = f′′(x0)/2!
. . .
ak = f^(k)(x0)/k!

Generally, therefore, we can write:

f(x0 + h) = f(x0)/0! + [f′(x0)/1!]·h + [f′′(x0)/2!]·h^2 + · · · + [f^(n)(x0)/n!]·h^n + Rn

Here Rn, known as the expansion remainder, is the difference between the function f to be studied and the approximation polynomial. This remainder will be negligible under certain conditions of regularity: as h tends towards 0, it tends towards 0 more quickly than h^n.

The use of Taylor's formula in this book does not need a high-degree polynomial, and we will therefore write more simply:

f(x0 + h) ≈ f(x0) + [f′(x0)/1!]·h + [f′′(x0)/2!]·h^2 + [f′′′(x0)/3!]·h^3 + · · ·
For some elementary functions, the Taylor expansion takes a specific form that is worth remembering:

e^x ≈ 1 + x/1! + x^2/2! + x^3/3! + · · ·

(1 + x)^m ≈ 1 + (m/1!)·x + [m(m − 1)/2!]·x^2 + [m(m − 1)(m − 2)/3!]·x^3 + · · ·

ln(1 + x) ≈ x − x^2/2 + x^3/3 − · · ·

A specific case of the power function expansion is the Newton binomial formula:

(a + b)^n = Σ(k = 0 to n) C(n, k) · a^k · b^(n−k)
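These expansions are easy to verify numerically. A partial sum of the e^x series (the helper function is ours, for illustration):

```python
from math import exp, factorial

def taylor_exp(x, n):
    """Partial sum 1 + x/1! + x^2/2! + ... + x^n/n! of the series for e^x."""
    return sum(x ** k / factorial(k) for k in range(n + 1))

# For small x a few terms suffice: the remainder after degree n shrinks
# like x^(n+1)/(n+1)!, faster than x^n.
error = abs(taylor_exp(0.1, 3) - exp(0.1))
```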

1.1.3 Geometric series
If, within the Taylor formula for (1 + x)^m, x is replaced by (−x) and m by (−1), we obtain:

1/(1 − x) ≈ 1 + x + x^2 + x^3 + · · ·

It is easy to demonstrate that when |x| < 1, the sequence

1 + x
1 + x + x^2
. . .
1 + x + x^2 + · · · + x^n

converges towards the number 1/(1 − x).
The limit of this sequence is therefore a sum comprising an infinite number of terms, termed a series. What we are concerned with here is the geometric series:

1 + x + x^2 + · · · + x^n + · · · = Σ(n = 0 to ∞) x^n = 1/(1 − x)

A relation linked to this geometric series is the one that gives the sum of the terms in a geometric progression: the sequence t1, t2, t3, . . . is characterised by the relation

tk = tk−1 · q    (k = 2, 3, . . .)

The sum t1 + t2 + t3 + · · · + tn is given by the relation:

Σ(k = 1 to n) tk = t1 · (1 − q^n)/(1 − q) = (t1 − tn+1)/(1 − q)
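Both forms of the geometric sum agree with direct summation, as a quick check shows:

```python
def geometric_sum(t1, q, n):
    """Sum of the first n terms of the progression t1, t1*q, t1*q**2, ...
    using the closed form t1 * (1 - q**n) / (1 - q)."""
    return t1 * (1 - q ** n) / (1 - q)

# Compare the closed form with term-by-term summation.
t1, q, n = 5.0, 0.5, 10
direct = sum(t1 * q ** k for k in range(n))
closed = geometric_sum(t1, q, n)
```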

1.2.1 Partial derivatives

Definition and graphical interpretation
For a function f of n variables x1, x2, . . . , xn, the concept of derivative is defined in a similar way, although the increase h can relate to any one of the variables. We therefore have n concepts of derivative, relative to each of the n variables, termed partial derivatives. The partial derivative of f(x1, x2, . . . , xn) with respect to xk at the point (x1(0), x2(0), . . . , xn(0)) is defined as:

f′xk(x1(0), x2(0), . . . , xn(0)) = lim(h→0) [f(x1(0), . . . , xk(0) + h, . . . , xn(0)) − f(x1(0), . . . , xk(0), . . . , xn(0))]/h

The geometric interpretation of the partial derivatives can only be envisaged for functions of two variables, as the graph of such a function already fills three-dimensional space (one dimension for each of the two variables and the third, the ordinate, for the values of the function). We will thus be examining the partial derivatives:

f′x(x0, y0) = lim(h→0) [f(x0 + h, y0) − f(x0, y0)]/h

f′y(x0, y0) = lim(h→0) [f(x0, y0 + h) − f(x0, y0)]/h

Let us now look at the graph of this function f(x, y) in three-dimensional space (see Figure A1.3).
Let us also consider the vertical plane that passes through the point (x0, y0) parallel to the Ox axis. Its intersection with the graph of f is the curve Cx. The same reasoning as that adopted for functions of one variable shows that the partial derivative f′x(x0, y0) is equal to the slope of the tangent to that curve Cx at the point of abscissa (x0, y0) (that is, the slope of the graph of f in the direction of x). In the same way, f′y(x0, y0) represents the slope of the tangent to Cy at the point (x0, y0).

Extrema without constraint
The point (x1(0), . . . , xn(0)) is a local maximum (resp. minimum) of the function f if

f(x1(0), . . . , xn(0)) ≥ f(x1, . . . , xn)    [resp. f(x1(0), . . . , xn(0)) ≤ f(x1, . . . , xn)]

for any (x1, . . . , xn) close to (x1(0), . . . , xn(0)).





