
exchanges required are available free of charge on the Internet.

12.4 REPRICING SCHEDULES (MODELLING OF CONTRACTS WITH FLOATING RATES)

We now introduce two concepts that are essential for optimising ALM tools. Their aim is to define the maturity dates for products without specified dates and to arrive at an understanding of the non-contractual methods of revising rates for products with floating rates. The maturity date definition models, or replicating portfolios, and the repricing models, are essential for calculating a J. P. Morgan type of VaR for the whole of the balance sheet, and for calculating a gap in liquidity or interest rates, a duration for non-maturity date products and a net present value.

12.4.1 The conventions method

In the absence of research reports, a large number of banks use an arbitrary convention for integrating floating-rate products into interest rate risk analytics tools. Some banks with floating-rate contracts take, as the interval to the next rate revision, the average historical interval for which the floating rates have remained stable. This calculation must obviously be made

302 Asset and Risk Management

for each type of floating-rate product, as the behaviour of the revision process is not the same from one product to another.

Like every convention, this approach is far from perfect, as we will clearly see that the rate revision periods are irregular.

12.4.2 The theoretical approach to the interest rate risk on floating-rate products, through the net current value

The problem of floating rates is a complex one, as we are often looking at demand products or products without maturity dates, on which banking clients carry out arbitrage between different investment products. In addition, the floating rates may be regulated and conditioned by business strategy objectives.

There are very few academic works on the interest rate risk for floating-rate products. Where these works exist, they remain targeted towards demand savings products. The academic approach has limited itself to integrating the interest rate risks for floating-rate contracts by calculating the net current value on the basis of volume and market rate simulations. This NPV approach is difficult to reconcile with an interest-rate risk analysis built on the basis of a 'gap'. The NPV analysis gives interesting theoretical information on the alteration to the value of the balance sheet items, as well as accounting and market values. However, the NPV does not give any information on the variation in the bank's margin following a change in rates. The interest-rate risk on 'retail' products is often a margin variation risk. The NPV analysis is therefore of limited interest for the 'retail' section of the balance sheet (deposits and credits), in which most of the floating-rate products are found. Ausubel¹ was the first to calculate the net current value of floating-rate demand deposits by using a deterministic process for volumes and rates.

More recently, Selvaggio² and Hutchinson and Pennachi³ have used a specific model of the reference monetary rate by taking a stochastic square-root process with reversion towards the average (mean reversion). Other stochastic models have also been used, such as those of Heath, Jarrow and Morton⁴ and of Hull and White⁵ in the works by Sanyal.⁶

The hypotheses necessary for this methodology are in fact postulates:

• This approach considers that demand products consist of short-term flows. This has serious consequences, as the discounting coefficients used for the NPV are essentially short-term market rates. This has the effect of denying the existence of 'replicating portfolios'⁷ of long maturity in demand products. In a replicating portfolio, stable long-term demand contracts cannot be discounted with a short-term rate.

• The construction methods for the models suggest three stages. First, an econometric link must be established between the historical volume of the product and the monetary

1 Ausubel L., The failure of competition in the credit card market, American Economic Review, 1991, pp. 50–81.
2 Selvaggio R., Using the OAS methodology to value and hedge commercial bank retail demand deposit premiums, The Handbook of Asset/Liability Management, edited by F. J. Fabozzi and A. Konishi, 1996.
3 Hutchinson D. and Pennachi G., Measuring rents and interest rate risk in imperfect financial markets: the case of retail bank deposits, Journal of Financial and Quantitative Analysis, 1996, pp. 399–417.
4 Heath D., Jarrow R. and Morton A., Bond pricing and the term structure of interest rates: a new methodology for contingent claims valuation, Econometrica, 1992, pp. 77–105.
5 Hull J. and White A., Pricing interest rate derivative securities, Review of Financial Studies, 1990, pp. 573–92.
6 Sanyal A., A continuous time Monte Carlo implementation of the Hull and White one-factor model and the pricing of core deposits, unpublished manuscript, December 1997.
7 The 'replicating portfolio' suggests breaking down a stock (for example, total demand deposits at moment t) into flows, each with a specific maturity date and nominal value. This concept is the subject of the next development.

Techniques for Measuring Structural Risks in Balance Sheets 303

reference rate. Next, a Monte Carlo simulation will specify the monetary rates. This stage uses the Hull and White discrete model. The set of these possible rates will allow the future volumes to be defined and discounted using the same rates. The mathematical expectation of the current values obtained represents the NPV, for which the expression is as follows:

NPV_0 = E_0 [ Σ(t=1..T) D_t (r_t − s_t) / ((1 + r_0) · … · (1 + r_t)) ]

Here:

E_0 is the mathematical expectation operator at time 0.
D_t is the nominal total of deposits at time t.
r_0, …, r_t are the short-term rates for times 0, …, t.
s_t is the rate at which the product pays at time t.
r_t − s_t is the definition of the spread.
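As a rough illustration of this expectation, the following Monte Carlo sketch uses purely hypothetical dynamics: the Gaussian random walk for the short rate, the volume-sensitivity coefficient and every parameter value below are our assumptions for illustration, not the calibration discussed in the text.

```python
import random

def npv_demand_deposits(n_paths, T, r0, vol, spread, d0, seed=0):
    """Monte Carlo estimate of NPV_0 = E_0[ sum_t D_t (r_t - s_t) / ((1+r_0)...(1+r_t)) ].

    Hypothetical dynamics: a Gaussian random walk for the short rate r_t,
    deposits D_t that shrink when rates rise, and a client rate s_t that
    lags the market rate by a constant spread.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        r, disc, pv = r0, 1.0 + r0, 0.0             # discount product starts at (1 + r_0)
        for t in range(1, T + 1):
            r = max(0.0, r + rng.gauss(0.0, vol))   # simulated short rate path
            d = d0 * (1.0 - 2.0 * (r - r0))         # volumes fall as rates rise (assumption)
            disc *= (1.0 + r)                       # (1 + r_0) * ... * (1 + r_t)
            s = max(0.0, r - spread)                # client rate lags the market rate
            pv += d * (r - s) / disc
        total += pv
    return total / n_paths
```

With a constant spread the path-by-path present values cluster tightly, which shows why the academic debate centres on the discount rates rather than on the simulation itself.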

This spread is the difference between a market reference rate and the contract rate. The concept of spread is an interesting one; it is an average margin, generally positive, for creditor rates in relation to market rates. The postulate of the approach, however, is to consider that the spread is unique and one-dimensional, as it is equal by definition to the difference between the monetary market rate and the contract rate. Here again we find the postulate that considers that the demand product flows are short-term flows. A spread calculated only on the short-term flows would have the effect of denying the existence of the long-term 'replicating portfolio' on the products without maturity dates.

In the absence of a precise definition of reference rates, and with the aim of generalising for products that are not necessarily demand products, the spread should be calculated for the whole of the zero-coupon curve range. Our approach uses the static and dynamic spread.

12.4.3 The behavioural study of rate revisions

12.4.3.1 Static and dynamic spread

It must be remarked that there is no true theory of floating rates. The revision of rates is actually a complex decision that clearly depends on the evolution of one or more market interest rates. These market and contractual rates allow a spread or margin, which should be profitable to the bank, to be calculated. The difference will depend on the segmentation of clients into private individuals, professionals and businesses, and on the commercial objectives particular to each product. When the margin falls, the duration of the fall in the spread prior to the adjustment or revision of rates may be longer or shorter. Following this kind of drop in margin, certain product rates will be adjusted more quickly than others, because of the increased sensitivity of the volumes to variations in rates and depending on the competitive environment. This dynamic, specific to each product or to each bank, is difficult to model. It can, however, be affirmed that there are three types of explanatory variable:

• the time variable between two periods of revision, for respecting the diachronic dynamic.


• the static margins or static spreads, calculated at regular intervals. We take a twice-monthly (15-day) period to calculate this static spread:

SS = r_t − s_t

• an isolated fall in a static margin will not, on its own, bring about a revision in rates. The revision of rates will be effective only if the fall in the static spread is confirmed and intensifies. This dynamic spread or DS⁸ can be obtained by calculating the surface area between the floating rate and the zero-coupon rate between two revision periods. This surface area allows the sensitivity of the bank's products to rates, as well as the bank's business policy, to be taken into account. In our analysis, the volume is an endogenous variable.

DS = Σ(t=1..n) (r_t − s_t) dt

where dt is the observation period and n the number of periods since the last revision.

After analysing the correlations between the zero-coupon rates, we have calculated the static and dynamic spreads twice monthly, over a historical period running from 1991 to 1999 (see the example on investment credits on the CD-ROM).
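In code, the static spread at one observation date and the cumulated dynamic spread can be sketched as follows; this is a minimal illustration, and the rate series used in the test are made up, not the CD-ROM data.

```python
def static_spread(zc_bp, contract_bp):
    """Static spread SS = r_t - s_t at one observation date (basis points)."""
    return zc_bp - contract_bp

def dynamic_spread(zc_bp_series, contract_bp_series):
    """Dynamic spread DS: the cumulated area between the zero-coupon rate
    and the contract rate over the observation periods since the last
    repricing (a discrete sum of the static spreads)."""
    return sum(r - s for r, s in zip(zc_bp_series, contract_bp_series))
```

For example, zero-coupon rates of 950, 975 and 990 basis points against a contract rate held at 800 give static spreads of 150, 175 and 190, and a dynamic spread of 515 basis points cumulated over the three periods.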

12.4.3.2 Data analysis method

The scheme shown in Figure 12.1 allows the problem to be displayed. On the x-axis we have the twice-monthly time scale, which here begins on 1 January 1991. The y-axis shows the rates of interest expressed in basis points and the various changes in the zero-coupon rates. More specifically, the rates used are 3 months, 2 years, 5 years and 10 years, split into periods of 15 days.

The stepped curve corresponds to the historical rates for the floating-rate product presented – the investment credits granted to the liberal professions by a bank. The aim

Figure 12.1 History of floating rates on investment credits and a few market rates
(Series shown, in basis points: Retail rate; Market rate 3 months; Market rate 2 years; Market rate 5 years; Market rate 10 years)

8 Or cumulative static margins between two revision periods.


of the method is to disassociate the twice-monthly periods of equilibrium from the periods of non-equilibrium, which are in fact the periods situated immediately prior to the rate revisions. The method consists of three stages:

• Analysis of the correlations on the rate curve.
• Analysis of the canonical correlations on the explanatory variables.
• The logistic regression on the variables selected.

A. Analysis of correlations on zero-coupon rates

The first stage is a simple analysis of the correlations between the various maturity dates of the zero-coupon rates.⁹ Thanks to the matrix of correlations, rates that are closely correlated are excluded from the analysis. The other rates, when retained, are used to calculate the static and dynamic spreads. These spreads are defined in two different ways. The static margins are first of all calculated by taking the difference between the annual floating contract rate and the annual zero-coupon rate for a maturity date (example: static margin 'A' 3 months, 6 months etc.), but also by transforming the margin obtained into a proportional twice-monthly rate (static margin 'B' 3 months, 6 months etc.). The space between analyses, or the period, is 15 days in the study.

Dynamic margins, meanwhile, are obtained by adding the simple twice-monthly interest period by period since the last change of rate (dynamic margin 'A'), and by calculating the compound interest on the twice-monthly margin at moment t since the last moment of revision (dynamic margin 'B').

The static and dynamic margins are calculated on the basis of the zero-coupon rates. The information on the coupon curves can easily be found on Bloomberg (pages: State bond rate curves). It is not a good idea to use the Strips curves (for State bonds separated to create a structured zero-coupon product). These curves are difficult to use, as they suffer, among other things, from liquidity problems.

To calculate the zero-coupon curves, we have divided the State bond curves into coupons using the classic 'step-by-step' method. Up to one year, we have taken the bank offered rate or BOR.

Description of the step-by-step method

The following bullet rates apply after interpolation (linear and otherwise):

• 1 year = 10 %
• 2 years = 10.10 %
• 3 years = 10.20 %
• 4 years = 10.30 %

The zero-coupon rate at one year is 10 %, that is ZCB 1 year.

The zero-coupon rate at two years is 10.105 %. When the actuarial rates are used, we have in fact the following equality for a par bond maturing in two years:

100 = 10.10 / (1 + ZCB 1 year) + (100 + 10.10) / (1 + ZCB 2 years)²

9 A matrix of correlations is obtained: correlation between 3-month and 6-month rates, 3-month and 1-year rates, 3-month and 2-year rates, etc.


Hence:

ZCB 2 years = [110.1 / (100 − 10.1/1.1)]^(1/2) − 1 = 10.105 %
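The step-by-step (bootstrap) calculation generalises to any number of maturities. A sketch, under the chapter's assumption of annual-coupon bonds quoted at par (100):

```python
def bootstrap_zero_curve(par_yields):
    """Step-by-step bootstrap of annual zero-coupon rates from par bond yields.

    par_yields[n-1] is the yield (as a decimal) of a bond with annual coupons
    maturing in n years and priced at 100.
    """
    zeros = []
    for n, y in enumerate(par_yields, start=1):
        coupon = 100.0 * y
        # strip the earlier coupons using the zero rates already solved
        residual = 100.0 - sum(coupon / (1.0 + z) ** t
                               for t, z in enumerate(zeros, start=1))
        # solve 100 = ... + (100 + coupon) / (1 + z_n)^n for z_n
        zeros.append(((100.0 + coupon) / residual) ** (1.0 / n) - 1.0)
    return zeros
```

With the bullet rates above, bootstrap_zero_curve([0.10, 0.1010]) returns 10 % for one year and 10.105 % for two years, matching the hand calculation.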

Method of calculating static and dynamic margins

In the Excel sheets on the CD-ROM, the retail and zero-coupon rates are expressed in basis points. As we have no precise theoretical indications on the definition of the margins, we have calculated the two types of margin using two different methods of calculation. The following example will allow a better understanding of how the Excel sheets are calculated, that is:

• At t0, ZCB = 950 and the retail rate = 800 basis points.
• At t1, ZCB = 975 and the retail rate = 800 basis points.
• t is a twice-monthly (15-day) period.

For the static margins:

• First method: the difference in basis points between the zero-coupon rate and the repricing rate is calculated, hence 950 − 800 = 150 at t0 and 975 − 800 = 175 at t1.
• Second method: 0.0150 is the differential of the annual rate converted into a proportional twice-monthly rate (26 periods):

((1 + 0.0150/26) · 100 − 100) · 100 = 5.769 at t0
((1 + 0.0175/26) · 100 − 100) · 100 = 6.73 at t1

This second method puts the dynamic margin on another scale of values, as the data are not centred and reduced (mean 0; standard deviation 1). We are converting the spread or rate differential into a twice-monthly proportional rate (52 weeks/2 = 26). This is the 'gain' in basis points for the bank over one twice-monthly period.
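The two static-margin calculations just described can be written directly; a small sketch reproducing the numbers of the worked example (the function names are ours):

```python
def static_margin_bp(zcb_bp, retail_bp):
    """First method: raw difference in basis points."""
    return zcb_bp - retail_bp

def static_margin_proportional(zcb_bp, retail_bp, periods=26):
    """Second method: the annual spread converted into a proportional
    twice-monthly rate (26 periods per year), in basis points per period."""
    annual = (zcb_bp - retail_bp) / 10000.0      # basis points -> decimal
    return ((1.0 + annual / periods) * 100.0 - 100.0) * 100.0
```

static_margin_bp(950, 800) gives 150, while static_margin_proportional(950, 800) gives 5.769 and static_margin_proportional(975, 800) gives 6.73, as above.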

There are two methods of calculation for dynamic margins.

• First method:

((1 + 0.0150/26) · 100 − 100) · 100 = 5.769 at t0

This is the 'gain' for the bank over one twice-monthly period.

((1 + 0.0175/26)² · 100 − 100) · 100 = 13.46 at t1


This is the 'gain' capitalised by the bank over two periods from the rate differential noted in the second period.

• Second method:

((1 + 0.0150/26) · 100 − 100) · 100 = 5.769 at t0

This is the 'gain' for the bank over one twice-monthly period.

((1 + 0.0150/26) · 100 − 100) · 100 + ((1 + 0.0175/26) · 100 − 100) · 100 = 12.499 at t1

This is the 'gain' for the bank over two periods from the rate differentials noted for the first and second periods.
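Both dynamic-margin conventions can likewise be sketched; spreads are passed in basis points per observation period, and the function names are ours:

```python
PERIODS = 26  # twice-monthly observation periods per year

def per_period_gain(spread_bp):
    """Proportional twice-monthly 'gain' in basis points for one period."""
    return ((1.0 + spread_bp / 10000.0 / PERIODS) * 100.0 - 100.0) * 100.0

def dynamic_margin_compound(spreads_bp):
    """First method: the latest spread compounded over the k periods
    elapsed since the last rate change."""
    k = len(spreads_bp)
    x = spreads_bp[-1] / 10000.0 / PERIODS
    return ((1.0 + x) ** k * 100.0 - 100.0) * 100.0

def dynamic_margin_simple(spreads_bp):
    """Second method: simple addition of each period's gain since the
    last rate change."""
    return sum(per_period_gain(s) for s in spreads_bp)
```

With the spreads of 150 bp at t0 and 175 bp at t1, dynamic_margin_compound([150, 175]) gives 13.46 and dynamic_margin_simple([150, 175]) gives 12.50 (the text's 12.499 reflects intermediate rounding).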

Analysis of correlations between changes in zero-coupon rates

The static and dynamic margins are only calculated on the least correlated points of the zero-coupon curve. In fact, the calculation of the margins on closely correlated points contributes nothing in terms of information, as the margins obtained will be similar.

The process is a simple one. The currency curves are held in Excel, and SAS (the statistical software) accepts the data through simple copying and pasting. For the analysis of the correlations between curves, the SAS algorithm must be programmed, with a3m being the market rate at 3 months, b6m the rate at 6 months, c1a the rate at 1 year etc., as can be found on the CD-ROM.

The static and dynamic margins for products in BEF/LUF have been calculated on the basis of the least correlated points, that is: 3 months, 2 years, 4 years, 6 years and 10 years.

The classical statistical procedure uses correlation tests to exclude the correlated variables from the analysis (example: the non-parametric Spearman test). Here, the procedure is different, as the rates are all globally correlated. Taking the rates least correlated with each other reduces the number of margins to be calculated and the size of the procedures.
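A sketch of this pruning with NumPy; the greedy keep-or-drop rule and the 0.95 threshold are our assumptions for illustration, whereas the book works from the SAS correlation matrix directly.

```python
import numpy as np

def least_correlated_tenors(rates, labels, threshold=0.95):
    """Greedy pruning of the zero-coupon curve: walk through the tenors in
    order and keep a tenor only if its correlation with every tenor already
    kept stays below the threshold. `rates` is (observations x tenors)."""
    corr = np.corrcoef(rates, rowvar=False)
    kept = []
    for j, _ in enumerate(labels):
        if all(abs(corr[j, k]) < threshold for k in kept):
            kept.append(j)
    return [labels[j] for j in kept]
```

On synthetic data where the 6-month series is almost a copy of the 3-month one, only the genuinely distinct tenors survive.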

B. Canonical correlation analysis

The second stage uses the little-known concept of canonical correlation analysis to make an optimal selection of static and dynamic margins.

Canonical correlation analysis is based on canonical analysis, which is now an old concept, having first been introduced in 1936 by Hotelling.¹⁰

The method is very productive in terms of theory, as it takes most forms of data analysis as a specific case. The method is currently available as an algorithm (SAS software) but is not frequently used, because of problems with interpreting and using the results.

Canonical analysis can be used when one set of variables is linked linearly to another set of variables. In our study, the static margins are linked linearly to the margins or dynamic spreads. When two sets of variables are linked linearly by canonical analysis, some variables may be excluded if there is an opposite sign between the standardised canonical coefficient and the sign of the correlation between the variable and the canonical factor. The excluded variables are known as 'suppresser variables'.

10 Hotelling H., Relations between two sets of variates, Biometrika, 321–77, 1936.


When there is a linear link between the two series of variables to be selected, tests¹¹ have shown that the canonical correlation method is more suitable than a simple selection built on the basis of statistical correlations. The canonical correlations are shown in Appendix 5.

Examples of investment credits (CD-ROM)

For investment credits, the static margins have been calculated on the 3-month, 2-year, 5-year and 10-year rates. Analysis of the canonical correlations on the static and dynamic differences gives us the following linear combination for the highest eigenvalue, in a twice-monthly study from 1991 to 1999.

Table 12.6 shows the results for the static margins. For the next stage in the method, we will select the static spreads: Margin A 3 months, Margin B 3 months and Margin A 10 years.

Table 12.7 shows the results for the dynamic margins. For the next stage in the method, therefore, we will not select the dynamic spreads: Margin A 2 years, Margin B 2 years and Margin A 5 years.

The squared canonical correlation for (ξ1, η1) is good, as it totals 0.85353. This value is significant at the 1 % threshold according to the F statistic. On the other hand, the

Table 12.6 Canonical correlations (static margins)

Static spreads        ξ1 first canonical factor    Correlation between margin   Suppresser
                      (standardised coefficient)   and canonical variable       variables
Margin A, 3 months    −0.3474                      −0.7814
Margin B, 3 months    −1.0656                      −0.7861
Margin A, 2 years     −0.1222                       0.9069                      Yes
Margin B, 2 years     −0.1178                       0.0794                      Yes
Margin A, 5 years     −0.0960                       0.5411                      Yes
Margin B, 5 years     −0.0875                       0.5238                      Yes
Margin A, 10 years     0.1520                       0.7160
Margin B, 10 years    −0.0001                       0.7208                      Yes

Table 12.7 Canonical correlations (dynamic margins)

Dynamic spreads              η1 first canonical factor    Correlation between margin   Suppresser
                             (standardised coefficient)   and canonical variable       variables
Dynamic margin A, 3 months   −2.1657                      −0.5231
Dynamic margin B, 3 months   −0.4786                      −0.4433
Dynamic margin A, 2 years     2.0689                      −0.1327                      Yes
Dynamic margin B, 2 years    −0.2473                       0.3585                      Yes
Dynamic margin A, 5 years    −0.4387                       0.2556                      Yes
Dynamic margin B, 5 years     0.1665                       0.1360
Dynamic margin A, 10 years    0.1505                       0.5547
Dynamic margin B, 10 years    0.0472                       0.5149

11 In this regard, we will mention the applicative works by: Cooley W. W. and Lohnes P. R., Multivariate Data Analysis, John Wiley & Sons, Ltd, 1971; Tatsuoka M. M., Multivariate Analysis, John Wiley & Sons, Ltd, 1971; Mardia K. V., Kent J. T. and Bibby J. M., Multivariate Analysis, Academic Press, 1979; or Damel P., 'La modélisation des contrats bancaires à taux révisable: une approche utilisant les corrélations canoniques', Banque et Marchés, mars–avril 1999.


hypothesis H0 of the absence of correlation between λ1 (the first eigenvalue) and λ2 (the second eigenvalue) is verified with a probability of 0.9999 on the basis of the Wilks lambda test. The canonical factors ξ1 and η1 are therefore of good quality.

Tables 12.6 and 12.7 identify the 'suppresser variables', that is, the variables excluded from the analysis because the sign of the canonical-axis coefficient and the sign of the correlation contradict each other.

This method allows the variables, belonging to two analytical groups between which a linear relation can be established, to be chosen in an optimal way.
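The suppresser-variable rule itself is a one-line sign test. A sketch, applied here to three of the Table 12.6 values (the dict-based interface is our own illustration):

```python
def suppresser_variables(coefficients, correlations):
    """Flag variables whose standardised canonical coefficient and whose
    correlation with the canonical factor carry opposite signs."""
    return [name for name in coefficients
            if coefficients[name] * correlations[name] < 0]
```

Feeding in Margin A 3 months (−0.3474, −0.7814), Margin A 2 years (−0.1222, 0.9069) and Margin A 10 years (0.1520, 0.7160) flags only Margin A 2 years, as in the table.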

The following stage is the use of logistic regression as an explanatory model for the differentiation between the periods of equilibrium (absence of revision) and the periods of interruption (periods with a change of rate). The logistic model is constructed on the basis of the static and dynamic spreads and of time. The model is also particular to each product and to each bank, for the reasons stated above.

C. Logistic regression

Logistic regression is a binomial model of conditional probability, well known and in frequent use (see Appendix 6). The variable to be explained takes the value 0 in a period of equilibrium and 1 in a period of non-equilibrium (the twice-monthly period before the change in rate). The model is optimised using the Newton–Raphson nonlinear iterative method. Our example contains nine changes of rate (value 1) and 188 periods of equilibrium.

The model is adjusted in the classical way by excluding the variables that do not differ significantly from 0 (χ² test). We use the concept of pairs to illustrate the convergence between the observed reality (periods 0 and 1) and the periods 0 and 1 given by the logistic model equation. A pair (observation 0 of equilibrium and observation 1 of interruption) will be concordant if the probability of a change in rate at the first observation is less than the probability at the second by more than 0.02. The pair will be discordant if, on the contrary, the first probability exceeds the second by more than 0.02. A pair will be tied when the two probabilities are less than 0.02 apart. The rates of concordance, ties and discordance for the pairs are calculated on the basis of the total number of possible pairs combining an observation of equilibrium (value 0) with an interruption observation (value 1).
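The pair-association counts can be sketched as follows; the 0.02 tie band is the one used in the text, while the probabilities in the example are invented for illustration.

```python
def pair_association(p_equilibrium, p_repricing, tie_band=0.02):
    """Concordant / discordant / tied rates over all pairs combining one
    equilibrium observation (y = 0) with one repricing observation (y = 1).
    A pair is concordant when the model probability of a rate change for the
    repricing observation exceeds that of the equilibrium observation by
    more than the tie band; tied when the gap stays within the band."""
    conc = disc = tied = 0
    for p0 in p_equilibrium:
        for p1 in p_repricing:
            if p1 - p0 > tie_band:
                conc += 1
            elif p0 - p1 > tie_band:
                disc += 1
            else:
                tied += 1
    n = len(p_equilibrium) * len(p_repricing)
    return conc / n, disc / n, tied / n
```

With, say, equilibrium probabilities [0.1, 0.2] and repricing probabilities [0.9, 0.21], three of the four pairs are concordant and one is tied.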

Repricing model on professional investment credits

Optimisation of logistic regression

Regression is optimised in the classical way by excluding, step by step, the variables that do not differ significantly from zero (Pr > χ²). The exclusion of variables is conditioned in all cases by the degree of adjustment of the model: the rate of concordance between the model and the observed reality must be maximised. The SAS output will be 'Association of predicted probabilities and observed responses – Concordant: 97.9 %'.

In the following example (Table 12.8), the variable Mc10y has a probability of 76.59 % of being statistically zero. Excluding it, however, would lead to a deterioration in the rate of concordance between the observations (repricing–non-repricing) and the forecasts of the model (repricing–non-repricing). This variable must therefore remain in the model.

There are other criteria for measuring the performance of a logistic regression, such as the logarithm of likelihood. The closer the log-likelihood is to zero, the better the adjustment of the model to the observed reality (−2 log L in the SAS output). The log-likelihood can also be summarised by the McFadden R²: R² = 1 − (−2 log L intercept and covariates) / (−2 log L intercept only).
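As a sketch, with the two '−2 Log L' values read off the SAS output (the numbers in the example below are invented):

```python
def mcfadden_r2(neg2logl_intercept_only, neg2logl_full_model):
    """McFadden pseudo-R2: 1 - (-2 log L full model) / (-2 log L intercept only).
    The smaller the full model's -2 log L, the closer R2 gets to 1."""
    return 1.0 - neg2logl_full_model / neg2logl_intercept_only
```

For instance, an intercept-only deviance of 100 and a full-model deviance of 20 give a pseudo-R² of 0.8.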


Table 12.8 Logistic regression

Variables   DF   Parameter estimate   Standard error   Wald chi-square   Proba over chi-square   Odds ratio¹²
Constant     1        35.468              12.1283           8.5522              0.0035
Time         1        −0.2669              0.2500           1.1404              0.2856              0.766
M3m          1         0.3231              0.1549           4.3512              0.0370              1.381
Ma3m         1        −5.9101              3.4407           2.9504              0.0859              0.003
Ma10y        1         0.9997              0.7190           1.9333              0.1644              2.718
Mc3m         1         0.0335              0.0709           0.2236              0.6363              1.034
Mac3m        1        −0.0731              0.0447           2.6772              0.1018              0.929
Mac5y        1         0.1029              0.1041           0.9766              0.323               1.108
Mc10y        1         0.0227              0.0762           0.0887              0.7659              1.023
Mac10y       1        −0.1146              0.102            1.2618              0.2613              0.892

Association of predicted probabilities and observed responses:
Concordant = 97.9 %
Discordant = 2.1 %
Tied = 0 % (1692 pairs)

In the model, the probability of a change in rate increases with:

• time;
• the fall in the static spread A at 3 months;
• the rise in the static spread B at 3 months;
• the fall in the static spread A at 10 years;
• the slowing of the rise in the dynamic spreads A 3 months, B 5 years and A 10 years;
• the rise in the dynamic margins B 3 months and B 10 years.

Displaying the model

For each model, the linear combination on the historical data must be programmed. This allows the critical value of the model, needed for dissociating the repricing periods from the periods of equilibrium, to be determined. As the dissociation is not 100 %, there is no objective value. The critical value chosen conditions the statistical errors of the first and second kind. In the example, the value 1.11 allows almost all the repricings to be captured without much anticipation by the model of the actual repricing periods (see model and critical value on the CD-ROM).

The method presented was applied to all the floating-rate products of a bank, twice monthly, over a maximum of nine years in the period 1991 to 1999, depending on the historical data available and the creation date of the products. The results are encouraging, as the rates of convergence between the models and the observed reality, with just a few exceptions, are all over 90 %.

The classic method, based on the choice of dynamic and static spreads through simple statistical correlation, has also been tested. This method shows results very far removed from those obtained using the method proposed, as the rate of concordance of pairs was less than 80 %.

12 The odds ratio is equal to the exponential of the estimated parameter: e^b. A variation of one unit in the variable (here, time and the spreads) makes the probability of 'repricing' alter by 1 − e^b.


12.4.3.3 Use of the models in rate risk management

This behavioural study allows the arbitrary rate-change conventions to be replaced to good advantage. Remember that the conventions in the interest-rate gaps often take the form of a simple calculation of an average for the periods during which rates are not changed. Working on the hypothesis that the bank's behaviour is stable, we can use each model prospectively by calculating the static and dynamic spreads on the basis of the sliding forward rates, for example over one year. This floating-rate integration method gives us two cases:

• The rate change occurs between today's date and one year from now. In this case, the contract revision date will be precisely that date.
• The rate change is not probable over a one-year horizon. In this case, the date of revision may be put back to the most distant prospective date (in our example, in one year).

Naturally, using an interest-rate gap supposes in the first instance that the rate-change dates are known for each contract, but also that the magnitude of the change can be anticipated in order to assess the change in the interest margin. Our method satisfies the first condition but does not directly give us the magnitude of the change. In fact, between two repricing periods we see a large number of situations of equilibrium. In practice, the ALM manager can put this free space to good use to optimise the magnitude of the change and profit from a long or short balance-sheet position. This optimisation process is made easier by the model. In fact, a change of too low a magnitude will necessitate a further change, while a change of too high a magnitude may be incompatible with the historical values of the model (see the statistics for magnitude of changes).

Modelling the repricing improves knowledge of the rate risk, and optimises both the simulations on the interest margin forecasts and the knowledge of the market risk through VaR.

12.4.3.4 Remarks and criticisms

Our behavioural approach does, however, have a few weak points. The model specifies the revision dates without indicating the size of the change in terms of basis points; it is not a margin optimisation model. Another criticism that can be levelled relates to the homogeneity of the period studied: a major change in one or more of the parameters set out previously could disrupt or invalidate the estimated model. Finally, this empirical method cannot be applied to new floating-rate products.

Despite these limitations, the behavioural approach to static and dynamic spreads, based on the analysis of canonical correlations, gives good results and is sufficiently flexible to explain changes in rates on very different products. In fact, in our bank's balance sheet, we have both liability and asset products, each with their own specific client segmentation. The behavioural method allows complex parameters to be integrated, such as the business policy of banks, the sensitivity of adjustment of volumes to market interest rates, and the competitive environment.

12.5 REPLICATING PORTFOLIOS

In asset and liability management, a measurement of the monthly VaR for all the assets as a whole is information of the first importance on the market risk (rate and exchange). It is a measurement that allows the economic forecasts associated with the risk to be assessed.


ALM software packages most frequently use J. P. Morgan's interest and exchange rate variance–covariance matrix, as the information on duration necessary for making the calculation is already available. It is well known that products without a maturity date are a real stumbling block for this type of VaR and for ALM.

There is relatively little academic work involving the attribution of maturity dates to demand credit or debit products. The aim of 'replicating portfolios' is to attribute a maturity date to balance-sheet products that do not have one. These portfolios combine all the statistical or conventional techniques that allow the position of a product without a maturity date to be converted into an interwoven whole of contracts that are homogeneous in terms of liquidity and duration.

'Replicating portfolios' can be constructed in different ways. If the technical environment allows, it is possible to construct them contract by contract, defining development profiles and therefore implicit maturity dates for 'stable' contracts. Where necessary, on the basis of volumes per type of product, the optimal value method may be used. Other banks make do with overly arbitrary definitions of replicating portfolios.

12.5.1 Presentation of replicating portfolios

Many products do not have a certain maturity date, including, among others, the following cases:

• American options, which can be exercised at any time, off the balance sheet.
• Demand advances and overdrafts on assets.
• Current liability accounts.

The banks construct replicating portfolios in order to deal with this problem. This kind of portfolio uses statistical techniques or conventions. The assigned aim of all the methods is to transform an accounting balance of demand products into a number of contracts with differing characteristics (maturity, origin, depreciation profile, internal transfer rate etc.). At the time of the analysis, the accounting balance of the whole contract portfolio is equal to the accounting balance of the demand product. Figures 12.2–12.4 offer a better understanding of replicating portfolio construction.

The replicating portfolio presented consists of three different contracts that explain the accounting balances at t−1, t0 and t1. The aim of the replicating portfolio is to represent the structure of the flows that make up the accounting balance.

Figure 12.2 Accounting balances on current accounts (in thousands of millions: 60 at t−1, 90 at t0, 80 at t1)


Figure 12.3 Contracts making up the replicating portfolio (contracts 1, 2 and 3, in thousands of millions, over t−1 to t3)

Figure 12.4 Replicating portfolio constructed on the basis of the three contracts (together they explain 60 at t−1, 90 at t0 and 80 at t1)

12.5.2 Replicating portfolios constructed according to convention

To present the various methods, we are taking the example of current accounts. There

are two types of convention for constructing a replicating portfolio. The first type can be described as simplistic; it is used especially for demand deposits with an apparently stable monthly balance. On the basis of this observation, some banks construct the replicating portfolio by applying linear depreciation to the accounting balance at moment t

over several months. As the depreciation is linear over several months or even several years, the banking institutions consider that the structure of the flows making up the accounting balance is stable overall in the short term. In fact, in a replicating portfolio constructed over 12 months, only 1/12 of the balance is depreciated at the end of the first month (2/12, i.e. 1/6, cumulatively at the end of the second month, and so on).
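As an illustrative sketch (the book gives no code; Python, the function name and the 12-month default are our assumptions), the linear convention can be written as:

```python
# A sketch of the "1/12 per month" convention above; the function name
# and the 12-month horizon are assumptions, not the book's notation.
def linear_replicating_portfolio(balance: float, months: int = 12):
    """Return (month, amount maturing) pairs for a linear run-off."""
    tranche = balance / months
    return [(m, tranche) for m in range(1, months + 1)]

schedule = linear_replicating_portfolio(12_000.0)
# 1/12 of the balance matures each month; cumulatively 1/6 after two months
assert sum(amount for _, amount in schedule) == 12_000.0
```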

This arbitrary technique, which has no statistical basis, is unsatisfactory as many current

accounts are partially or totally depreciated over one month because of the monthly nature

of the income.

The second class of conventions is considered more sophisticated, as these conventions do call in part on statistical studies. Because of the very restrictive hypotheses retained, however, construction of the replicating portfolio remains a matter of convention. For example: two well-known statistical indicators of volatility are calculated, the arithmetic mean and the monthly standard deviation of the daily balances of all the deposits. The operation is repeated every two months, every quarter etc. in order to obtain the statistical volatility indicators (mean, standard deviation) on a temporal horizon that increases from month to month. The interest, of course, lies in making the calculation over several years in order to refine the allocation of stable resources to long-term uses such as credit facilities.

Thanks to these indicators it is possible, using probability theory, to calculate the

monthly portion of the deposits that will be depreciated month by month. For example:


to define the unstable portion of deposits for one month, we first calculate the probability that the current account will show a debit balance, given the monthly average and the standard deviation of the totals observed over the month. The probability obtained is equal to the unstable proportion for one month. In the general case, the percentage of deposits depreciated equals Pr[X < 0], where X is the balance, σ is the standard deviation over a period (one or two months etc.) of the daily totals of deposits and μ is the arithmetic mean of the deposits over the same period.
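As a sketch of this calculation (assuming, as the text implies, that the balance is normally distributed; the function name and the example numbers are ours):

```python
from statistics import NormalDist

# The unstable proportion over a period is Pr[X < 0] for a balance X
# assumed normal with the observed mean mu and standard deviation sigma.
def unstable_proportion(mu: float, sigma: float) -> float:
    return NormalDist(mu, sigma).cdf(0.0)

# Mean daily balance 100, standard deviation 60 (illustrative numbers)
print(unstable_proportion(100.0, 60.0))  # about 0.048, i.e. ~4.8 % unstable
```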

With this method, part of the deposits is depreciated or deducted each month until depreciation is complete. In other words, the balance sheet is deflated. For example: the demand deposit entry in the balance sheet represents EUR 10 000 million, and this sum will be broken down into monthly due dates that generally cover several years. Naturally, this convention for constructing a replicating portfolio is more satisfactory than a simply arbitrary one. Some serious weaknesses have, however, been noted.

In fact, if we have a product with a credit balance, the proportion depreciated during the first month will be the probability of the balance becoming a debit balance in view of the monthly arithmetic mean and standard deviation calculated and observed. Under this approach, the instability amounts to the probability of having a debit balance (for a product in liabilities) or a credit balance (for a product in assets). Credit positions capable of being heavily debited are thus considered probably stable! This shows the limits of an approach built on the global balance, that is, the practice of aggregating the total accounting position day by day.

12.5.3 The contract-by-contract replicating portfolio

The other methods consist of producing more accurate projections for demand products on the basis of statistical analyses. The first prerequisite for a statistical analysis to be consistent is to identify correctly each component that explains the overall development. In other words, the statistical analysis builds up the replicating portfolio account by account and day by day. The portfolio is not built on the daily accounting balance that brings together the behaviour of all the accounts. The banks allocate one account per type of product and per client. The account-by-account analysis is more refined as it allows the behaviour of the flows to be identified per type of client.

The account-by-account daily analysis raises technical problems of database construction, notably in the large-system or 'mainframe' environment, because of the volume created by the large number of current accounts or cheques and the need for historical records.

After the completion of this first stage, considerable thought was applied to defining the concept of stability in theoretical terms. To carry out this work, we used two concepts:

• The first was the method of the account-by-account replicating portfolio. We considered that the balance observed at moment t is the product of a whole set of interwoven accounts with different profiles, cash-flow behaviour and non-simultaneous creation dates.
• The second concept is the stability test, adopted for defining a stable account statistically. The test used is the standardised range or SR. This is a practical test used to judge the normality of a statistical distribution, as it is easy to interpret and calculate. SR measures the spread of the extreme values in the observations per unit of sample dispersion (the standard deviation13). It is expressed as follows:

SR = (max(X_i) − min(X_i)) / σ_X

This test allows three types of statistical distribution to be identified: a normal or Gaussian distribution, a flat distribution with higher statistical dispersion than that of a normal law, and a distribution with a statistical dispersion lower than that of a normal law.

It can be considered that a demand current account is stable within the third typology: the difference between the extreme values, max(X_i) − min(X_i), is low relative to the standard deviation. The SR test can be carried out with several confidence intervals, and it can be programmed with differentiated confidence intervals. It is preferable to use a wide confidence interval to judge the daily stability of the account, in order to avoid the distortion caused by monthly income payments. In addition, the second condition for daily account stability is the absence of debit balances in the monthly historical values. On a monthly history, it is preferable to take a wider confidence interval when the history of the deposits shows at least one debit balance, and a narrower interval otherwise.
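A minimal sketch of the SR statistic (Python; the sample balances are invented for illustration, and the classification thresholds are left to be calibrated per institution, as the text suggests):

```python
# Standardised range: spread of the extremes per unit of sample dispersion
def standardised_range(balances):
    n = len(balances)
    mean = sum(balances) / n
    sd = (sum((x - mean) ** 2 for x in balances) / (n - 1)) ** 0.5
    return (max(balances) - min(balances)) / sd

daily_balances = [100.0, 101.0, 99.5, 100.5, 100.2, 99.8]
print(standardised_range(daily_balances))  # about 2.82 for this sample
```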

After the stable accounts have been identified, we can reasonably create repayment schedules by extending the trends or historical tendencies. On the statistically stable accounts, two major trend types exist. In the upward trend, the deposits are stable over the long term and the total observed at moment t will therefore be depreciated in one instalment at a long horizon, which may be set to the length of the historical database. In the downward trend, prolonging the trend makes it possible to find the future date of complete depreciation of the account. The balance of the account at moment t is therefore depreciated linearly until the maturity date obtained by prolonging the trend.

In order to provide an explanation, we have synthesised the conditions of stability in Table 12.9. We have identified four cases. 'SR max' corresponds to a wide confidence interval, while 'SR min' corresponds to a narrower confidence interval.

Table 12.9 Stability typologies on current account deposits

Type of case | Daily stability | Monthly stability | Historical monthly balances | Type of trend | Maturity date
1 | Yes (SR max) | Yes (SR min) | Always in credit | Upward & horizontal | Duration of history of data
2 | Yes (SR max) | Yes (SR min) | Always in credit | Downward | Duration of trend prolongation
3 | Yes (SR max) | Yes (SR max) | At least one debit balance | Generally upward | Duration of history of data
4 | Yes (SR max) | No (SR min) | Always in credit | No trend | Duration of history of data (for historical min. total)

13 There are of course other statistical tests for measuring the normality of a statistical distribution, such as the χ² test, the Kolmogorov–Smirnov test for samples with over 2000 contracts, and the Wilk–Shapiro test where needed.


The fourth case requires further explanation. These accounts are always in credit balance on the daily and monthly histories, but are not stable on a monthly basis. On the other hand, there is a historical minimum credit balance that can be considered stable. Economists call this liquidity 'liquidity preference'. In this case, the minimum historical total will be placed at the long end of the repayment schedule (the length of the database). The unstable contracts, or the unstable part of a contract, will have a short-term maturity date (1 day to 1 month).

This method allows better integration of products without maturity dates into liquidity and interest rate risk management tools. Based on the SR test and the account-by-account replicating portfolio, it is simple in design and easy to implement technically.

Specifically, an accounting position of 130 will be broken down as follows. The unstable part will have a maturity date of one day or one month, and the stable part will be assigned a maturity equal to the length of the historical period. If the history, and therefore the synthetic maturity dates, are judged insufficient, especially for savings products without maturity dates, it is possible under certain hypotheses to extrapolate the stability level and define a longer maturity for smaller totals. Here the historical period is 12 months, and a volume of 100 out of the 130 observed is defined as stable; the maturity is therefore one year. It is also known that the volatility of a financial variable calculated over one year can be used as a basis for extrapolating the volatility calculated over two years, by multiplying the standard deviation by the root of time: σ_{2 years} = σ_{1 year} · √2.

It can be considered, symmetrically, that the stable part diminishes in proportion to the root of time. The stable part at five years can thus be defined: 100 · 1/√5 = 100 · 0.447 = 44.7 %. We therefore have 30 at one day, 55.27 at one year and 44.73 at five years.
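The worked breakdown above can be reproduced directly (Python sketch; the variable names are ours). Note that the exact root-of-time figures are 44.72 and 55.28, which the text rounds slightly differently:

```python
# 130 observed, 100 stable over the 12-month history, 30 unstable at one
# day; the square-root-of-time rule scales the stable part kept at 5 years.
unstable, stable = 30.0, 100.0
stable_5y = stable / 5 ** 0.5        # 100 / sqrt(5) = 44.72
stable_1y = stable - stable_5y       # remainder kept at one year = 55.28
print(unstable, round(stable_1y, 2), round(stable_5y, 2))
```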

The stability obtained on the basis of a monthly and daily history therefore takes overall

account of the explanatory variables of instability (arbitrage behaviour, monthly payment

of income, liquidity preference, anticipation of rates, seasonality etc.).

In this method, the interest rate is an exogenous variable. The link between changes in stability and interest rates therefore depends on the frequency of the stability analysis. The method allows specific implicit maturity dates to be found while remaining a powerful tool for allocating resources on a product without a maturity date located among the assets. For a liability-driven bank, a good knowledge of the flows will allow resources to be reinvested over the long term instead of on the interbank market, and therefore provide an additional margin if the rate curve is positive. For an asset-driven bank, this procedure will allow better management of the liquidity risk and the rate risk.

Conversely, this historical and behavioural approach to replicating portfolios poses problems when rate simulations are carried out in ALM. In the absence of an endogenous rate variable, knowledge of the link between rates and the replicating portfolio will be limited to history. This last point justifies the research into replicating portfolios that includes interest rates in the modelling process.

12.5.4 Replicating portfolios with the optimal value method

12.5.4.1 Presentation of the method

This method was developed by Smithson14 in 1990 according to the 'building block approach' or 'Lego approach'. The method proposes a definition of optimal replicating portfolios

14 Smithson C., A Lego approach to financial engineering. In The Handbook of Currency and Interest Rate Risk Management, edited by R. Schwarz and C. W. Smith Jr., New York Institute of Finance, 1990; or Damel P., 'L'apport de replicating portfolio ou portefeuille répliqué en ALM: méthode contrat par contrat ou par la valeur optimale', Banque et Marchés, mars–avril 2001.


by integrating market interest rates and the anticipated repayment risk, and considers the interest rate(s) to be endogenous variables. This perspective is much less limited than the previous one when the bank carries out stochastic or other rate simulations on the ALM indicators (VaR, NPV for equity funds, interest margins etc.).

In this method, it is considered that the stable part of a product without a maturity date is a function of simple rate contracts with known maturity dates. Here, the definition of stability is not provided contract by contract but on the basis of daily or monthly accounting volumes. An equation provides an optimal representation of the chronological series of the accounting positions. This first step defines a stable part and a volatile part, the latter being the statistical residue of the stability equation.

The volatile part is represented by a short-term bond with a short-term monetary reference rate (such as one month).

The stable part consists of a number of interwoven zero-coupon bonds with reference rates and maturity dates from 3 months to 15 years. The weave defines a refinancing strategy based on the monetary market and the primary bond market.

The stable part consists of rate products. The advantage of this approach is therefore that the early repayment rate is taken into account together with any 'repricing' of the product, and the volume is therefore linked to the reference interest rates. The model contains two principal equations.

• Volum_t represents the accounting position at moment t.
• Stab_t represents the stable part of the volume at moment t.
• rr_t is the rate for the product at moment t, and taux1m, taux2m etc. represent the market reference rates for maturities of 1 month, 2 months etc.
• ε_t represents the statistical residual or volatile part of the accounting positions.
• br_it represents the interest on a zero-coupon bond position with maturity date i and market reference rate i at time t.
• α_i represents the stable part replicated by the br_it position.
• The α_i sum to 1 (i = 3 months to 15 years).
• mr_t represents the portion of the demand product rate that is not a function of the market rate. mr_t is also equal to the difference between the weighted average rate obtained from the interwoven bonds and the floating or fixed retail rate. This last point also includes the repricing strategy and the spread, which will be negative on liability products and positive on asset products.

Wilson15 was the first to use this approach specifically for optimal value. His equations can be presented as follows:

Volum_t = Stab_t + ε_t    (a)

Volum_t · rr_t = ε_t · r_{1 month,t} + Σ_{i=3 months}^{15 years} α_i · br_it + mr_t + δ_t    (b)

with the constraint Σ_{i=3 months}^{15 years} α_i = 1.

15 Wilson T., Optimal value: portfolio theory, Balance Sheet, Vol. 3, No. 3, Autumn 1994.


Example of a replicated zero-coupon position

br6m is a bond with a six-month maturity date and a six-month market reference rate. It will be considered that the stable part at t1 is invested in a six-month bond at the six-month market rate. At t2, t3, t4, t5 and t6 the new deposits (the difference between Stab_{t−1} and Stab_t) are also placed in a six-month bond at the six-month reference market rate prevailing at t2, t3, t4, t5 and t6. At t7 the stable part invested at t1 has matured; this stable part and the new deposits will be reinvested at six months at the six-month market rate prevailing at t7. br_it works in the same way for all the reference rates from three months to 15 years.

After econometric adjustment of this two-equation model, α_i readily gives us the duration of this demand product, using the additivity property of duration. If α_{1y} = 0.5 and α_{2y} = 0.5, the duration of this product without a maturity date will be 18 months.
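The duration-additivity step can be sketched as follows (Python; the function name and the dictionary representation are illustrative choices):

```python
# With weights alpha_i summing to 1, the demand product's duration is the
# alpha-weighted average of the replicated bond durations (in years).
def portfolio_duration(weights):
    """weights maps bond duration (years) -> alpha_i."""
    return sum(d * a for d, a in weights.items())

print(portfolio_duration({1.0: 0.5, 2.0: 0.5}))  # 1.5 years = 18 months
```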

12.5.4.2 Econometric adjustment of equations

A. The stability or definition equation

There are many different forecasting models for chronological series. For rising accounting volumes, the equation will be different from that obtained for decreasing or oscillating accounting values. The equation to be adopted will be the one that minimises the error term ε.

Here follows a list (not comprehensive) of the various techniques for forecasting a chronological series:

• regression;
• trend extrapolations;
• exponential smoothing;
• autoregressive moving average (ARMA).

Wilson uses exponential smoothing. The stability of the volumes is an exponential function of time,

Stab_t = b_0 · e^{b_1 t} + ε_t

or

log Stab_t = log b_0 + b_1 · t + δ_t

Instead of this arbitrary formula, we propose to define the volumes on the basis of classical methods or recent research into stochastic models specialised in time-series analysis. These models are much better adapted for estimating temporal series. The ARMA model is a classical model; it considers that the volumes observed are produced by a stationary random process, that is, one whose statistical properties do not change over the course of time. The moments of the process (that is, the mathematical expectation and the variances–covariances) are independent of time and the disturbances follow a Gaussian distribution. The variance must also be finite. Volumes are observed at equidistant moments (the case of a process in discrete time). We will take as an example the floating-rate demand savings accounts in LUF/BEF


from 1996 to 1999, observed monthly (data on CD-ROM). The model takes the form of the recurrence relation

Volum_t = a_0 + Σ_{i=1}^{p} a_i · Volum_{t−i} + ε_t

where a_0 + a_1 Volum_{t−1} + . . . + a_p Volum_{t−p} represents the autoregressive model that is ideal or perfectly adjusted to the chronological series, thus being devoid of uncertainty, and ε_t is a moving average process:

ε_t = Σ_{i=0}^{q} b_i · u_{t−i}

The u_{t−i} values constitute 'white noise' (non-autocorrelated, centred normal random variables with mean 0 and standard deviation equal to 1). ε_t is therefore a centred random variable with constant variance. This type of model is an ARMA(p, q) model.
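The book estimates the model with an econometrics package; as a hedged, minimal illustration (simulated data, and only the AR part estimated, by ordinary least squares — the MA part is omitted for brevity), the recurrence can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate Volum_t = a0 + a1*Volum_{t-1} + a2*Volum_{t-2} + u_t
a0, a1, a2 = 5.0, 0.35, 0.40          # true values; a1 + a2 < 1 (stationary)
v = np.zeros(500)
for t in range(2, 500):
    v[t] = a0 + a1 * v[t - 1] + a2 * v[t - 2] + rng.normal()

# Regress Volum_t on a constant and its first two lags
X = np.column_stack([np.ones(498), v[1:-1], v[:-2]])
y = v[2:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # estimates close to (5.0, 0.35, 0.40)
```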

Optimisation of ARMA model (p, q)

The first stage consists of constructing the model on the observed data without transformation (Volum_t).

The first solution is to test several ARMA(p, 1) models and to select the model that maximises the usual adjustment criteria:

• The log-likelihood function. Box and Jenkins propose least-squares estimators (R-square, raw or adjusted, in the example), identical to the maximum likelihood estimators if it is considered that the random variables are normally distributed. This last point is consistent with the ARMA approach.
• AIC (Akaike's information criterion).
• The Schwarz criterion.
• There are other criteria, not referenced in the example (FPE: final prediction error; BIC: Bayesian information criterion; Parzen CAT: criterion of autoregressive transfer function).

The other process consists of constructing the model on the basis of the graphic autocorrelation test. This identification stage examines the autocorrelation at all the possible lags (t − n). The autocorrelation function must be decreasing, or oscillating with decreasing amplitude. In the example, the graph shows, on the basis of the two-tailed Student test (t = 1.96), that the one- and two-period lags have an autocorrelation significantly different from 0 at the 5 % confidence threshold. The ARMA model will have an AR component equal to two (AR(2)).

This stage may be completed in a similar way by partial autocorrelation, which takes account of the effects of the intermediate values between Volum_t and Volum_{t−r} in the autocorrelation. The model to be tested is ARMA(2, 0). The random disturbances in the model must not be autocorrelated; where they are, the corresponding lags have not been captured by the AR part. There are different tests, including the Durbin–Watson


Table 12.10 ARMA (2, 2) model

R-square = 0.7251    Adjusted R-square = 0.6773
Akaike Information Criterion AIC(K) = 43.539
Schwarz Criterion SC(K) = 43.777

Parameter   Estimate       Std error     T-STAT
AR(1)       0.35356        0.1951        1.812
AR(2)       0.40966        0.2127        1.926
MA(1)       0.2135E-3      0.1078        0.0019
MA(2)       −0.91454       0.05865       −15.59
Constant    0.90774E+10    0.7884E+10    1.151

Residuals: Skewness 1.44; Kurtosis 7.51; Studentised range 5.33

non-autocorrelation error test. In the example of the savings accounts, the optimal ARMA model with normally distributed and non-correlated residuals is the ARMA (2, 2) model, with an acceptable adjusted R² of 0.67. This model is stationary, as the sum of the AR coefficients is less than 1.

The ARMA (2, 2) model (Table 12.10) obtained is as follows. The monthly accounting

data, the zero-coupon rates for 1 month, 6 months, 1 year, 2 years, 4 years, 7 years and

10 years can be found on the CD-ROM. The model presented has been calculated on the

basis of data from end November 1996 to end February 1999.

If the model is nonstationary (nonstationary variance and/or mean), it can be converted into a stationary model by using differencing of order r after the logarithmic transformation: if y is the transformed variable, apply the difference operator Δ (where Δ(y_t) = y_t − y_{t−1}) r times, that is, work with Δ(Δ(. . . Δ(y_t))) instead of y_t. We therefore use an ARIMA(p, r, q) procedure.16 If this procedure fails because of nonconstant volatility in the error term, it will be necessary to use the ARCH-GARCH or EGARCH models (Appendix 7).

B. The equation on the replicated positions

This equation may be estimated by a statistical model (such as the SAS/OR procedure PROC NLP), using multiple regression with the constraints

Σ_{i=3 months}^{15 years} α_i = 1 and α_i ≥ 0

It is also possible to estimate the replicated positions (b) with the single constraint (by using the SAS/STAT procedure)

Σ_{i=3 months}^{15 years} α_i = 1

In both cases, the duration of the demand product is a weighted average of the durations. In the second case, it is possible to obtain negative α_i values. We therefore have a synthetic investment-loan position on which the duration is calculated.
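A sketch of a constrained fit of equation (b) (Python with synthetic data; the substitution trick, absorbing the constraint Σα_i = 1 by eliminating one coefficient, is our illustrative choice rather than the SAS procedures named above):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "interwoven bond" regressors br_i and a target built with
# weights 0.2, 0.3, 0.5 that sum to 1 -- illustrative numbers only.
B = rng.normal(size=(200, 3))
y = B @ np.array([0.2, 0.3, 0.5]) + 0.01 * rng.normal(size=200)

# Absorb the constraint alpha_3 = 1 - alpha_1 - alpha_2:
#   y - B3 = alpha_1*(B1 - B3) + alpha_2*(B2 - B3)
Z = B[:, :2] - B[:, [2]]
free, *_ = np.linalg.lstsq(Z, y - B[:, 2], rcond=None)
alpha = np.append(free, 1.0 - free.sum())
print(alpha)  # close to [0.2, 0.3, 0.5]; sums to 1 exactly
```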

16 Autoregressive integrated moving average.


Table 12.11 Multiple regression model obtained on BEF/LUF savings accounts on the basis of a SAS/STAT procedure (adjusted R-square 0.9431)

Variable                      Parameter estimate   Standard error   Prob > |T|
Intercept (global margin)     −92 843 024          224 898 959      0.6839
F1M (stable part)             0.086084             0.00583247       0.0001
F6M (stable rollover)         −0.015703            0.05014466       0.7573
F1Y (stable rollover)         0.036787             0.07878570       0.6454
F2Y (stable rollover)         0.127688             0.14488236       0.3881
F4Y (stable rollover)         3.490592             1.46300205       0.0265
F7Y (stable rollover)         −4.524331            2.94918687       0.1399
F10Y (stable rollover)        1.884966             1.63778119       0.2627

If α_{1y} = 2.6 and α_{6m} = −1.6 for a liability product, duration = 1 − (1.6/2.6) · 0.5 = 0.69 of a year.

The bond weaves on the stable part have been calculated on the basis of the zero-coupon

rates (1 month, 6 months, 1 year, 2 years, 4 years, 7 years, 10 years). See Table 12.11.

The equation (b) is very well adjusted, as R² is 94.31 %. The interest margin is of course negative, as the cost of the resources on liabilities is lower than the market conditions. Like Wilson, we consider that the margin between the average rate for the interwoven bonds and the product rate is constant over the period. It could also be argued that the margin is not constant, as the floating rate is not instantaneously re-adjusted to changes in market rates. On the other hand, the quality of the clients, and therefore the credit spread, is not necessarily constant over the period. The sum of the coefficients associated with the interwoven bond positions is 1.

This multiple linear regression allows us to calculate the duration of this product without

a maturity date on the basis of the synthetic bond positions obtained. In the example, the

duration obtained from the unstable and stable positions equals 1.42 years.

Appendices

1 Mathematical concepts

2 Probabilistic concepts

3 Statistical concepts

4 Extreme value theory

5 Canonical correlations

6 Algebraic presentation of logistic regression

7 Time series models: ARCH-GARCH and EGARCH

8 Numerical methods for solving nonlinear equations

Appendix 1

Mathematical Concepts1

1.1 FUNCTIONS OF ONE VARIABLE

1.1.1 Derivatives

1.1.1.1 Definition

The derivative2 of the function f at the point x0 is defined as

f′(x0) = lim_{h→0} [f(x0 + h) − f(x0)] / h

if this limit exists and is finite.

If the function f is differentiable at every point within an open interval ]a; b[, this defines a new function within that interval: the derivative function, termed f′.

1.1.1.2 Geometric interpretations

For a small value of h, the numerator in the definition represents the increase (or decrease) in the value of the function when the variable x passes from the value x0 to the neighbouring value (x0 + h), that is, the length of AB (see Figure A1.1).

The denominator in the same expression, h, is in turn equal to the length of AC. The ratio is therefore equal to the slope of the straight line BC. When h tends towards 0, this straight line BC moves towards the tangent to the function graph at the point C.

The geometric interpretation of the derivative is therefore as follows: f′(x0) represents the slope of the tangent to the graph of f at the point x0. In particular, the sign of the derivative characterises the type of variation of the function: a positive (resp. negative) derivative corresponds to an increasing (resp. decreasing) function. The derivative therefore measures the speed at which the function increases (resp. decreases) in the neighbourhood of a point.

The derivative of the derivative, termed the second derivative and written f″, will therefore be positive when the function f′ is increasing, that is, when the slope of the tangent to the graph of f increases as the variable x increases: the function is said to be convex. Conversely, a function with a negative second derivative is said to be concave (see Figure A1.2).

1.1.1.3 Calculations

Finally, remember the elementary rules for calculating derivatives. Those relating to operations between functions first of all:

(f + g)′ = f′ + g′
(λf)′ = λ·f′

1 Readers wishing to find out more about these concepts should read: Bair J., Mathématiques générales, De Boeck, 1990. Esch L., Mathématique pour économistes et gestionnaires, De Boeck, 1992. Guerrien B., Algèbre linéaire pour économistes, Economica, 1992. Ortega M., Matrix Theory, Plenum, 1987. Weber J. E., Mathematical Analysis (Business and Economic Applications), Harper and Row, 1982.
2 Also referred to as the first derivative.


Figure A1.1 Geometric interpretation of derivative

Figure A1.2 Convex and concave functions

(fg)′ = f′g + fg′
(f/g)′ = (f′g − fg′) / g²

Next, the rule for compound functions:

[g(f)]′ = g′(f) · f′

Finally, the formulae that give the derivatives of a few elementary functions:

(x^m)′ = m·x^{m−1}
(e^x)′ = e^x
(a^x)′ = a^x · ln a
(ln x)′ = 1/x
(log_a x)′ = 1 / (x · ln a)
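The limit definition above can be checked numerically (Python sketch; h and the test functions are arbitrary illustrative choices):

```python
import math

# Difference quotient: approaches f'(x0) as h tends towards 0
def diff_quotient(f, x0, h=1e-6):
    return (f(x0 + h) - f(x0)) / h

print(diff_quotient(lambda x: x ** 3, 2.0))  # close to (x^3)' = 3*2^2 = 12
print(diff_quotient(math.exp, 0.0))          # close to e^0 = 1
```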

1.1.1.4 Extrema

The point x0 is a local maximum (resp. minimum) of the function f if

f(x0) ≥ f(x)  (resp. f(x0) ≤ f(x))

for every x close to x0.


The extrema within an open interval for a differentiable function can be determined thanks to two conditions.

• The first-order (necessary) condition states that if x0 is an extremum of f, then f′(x0) = 0. At this point, called a stationary point, the tangent to the graph of f is therefore horizontal.
• The second-order (sufficient) condition allows the stationary points to be 'sorted' according to their nature. If x0 is a stationary point of f and f″(x0) > 0, we have a minimum; in the opposite situation, if f″(x0) < 0, we have a maximum.

1.1.2 Taylor's formula

Consider a function f that one wishes to study in the neighbourhood of x0 (let us say, at x0 + h). One method is to replace this function by a polynomial (a function that is easily handled) of the variable h:

f(x0 + h) = a0 + a1·h + a2·h² + ···

For the function f to be represented through the polynomial, both must:

• take the same value at h = 0;
• have the same slope (that is, the same first derivative) at h = 0;
• have the same convexity or concavity (that is, the same second derivative) at h = 0;
• and so on.

Also, the number of conditions to be imposed must correspond to the number of coefficients to be determined within the polynomial. It will be evident that these conditions lead to:

a0 = f(x0) = f(x0)/0!
a1 = f′(x0) = f′(x0)/1!
a2 = f″(x0)/2 = f″(x0)/2!
···
ak = f^(k)(x0)/k!
···

Generally, therefore, we can write:

f(x0 + h) = f(x0)/0! + [f′(x0)/1!]·h + [f″(x0)/2!]·h² + ··· + [f^(n)(x0)/n!]·h^n + R_n

Here R_n, known as the expansion remainder, is the difference between the function f to be studied and the approximation polynomial. Under certain regularity conditions this remainder is negligible: as h tends towards 0, it tends towards 0 more quickly than h^n.


The use of Taylor's formula in this book does not need a high-degree polynomial, and we will therefore write more simply:

f(x0 + h) ≈ f(x0) + [f′(x0)/1!]·h + [f″(x0)/2!]·h² + [f‴(x0)/3!]·h³ + ···

For some elementary functions, Taylor's expansion takes a specific form that is worth remembering:

$$e^x \approx 1 + \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots$$

$$(1 + x)^m \approx 1 + \frac{m}{1!}\,x + \frac{m(m-1)}{2!}\,x^2 + \frac{m(m-1)(m-2)}{3!}\,x^3 + \cdots$$

$$\ln(1 + x) \approx x - \frac{x^2}{2} + \frac{x^3}{3} - \cdots$$
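These expansions are easy to verify numerically. The sketch below is illustrative: the evaluation point x = 0.3 and the truncation order n = 10 are arbitrary choices, not from the text.

```python
import math

# Compare truncated Taylor expansions with the exact functions.

def exp_series(x, n=10):
    """Partial sum of e^x = sum x^k / k!."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

def log1p_series(x, n=10):
    """Partial sum of ln(1 + x) = x - x^2/2 + x^3/3 - ..."""
    return sum((-1) ** (k + 1) * x**k / k for k in range(1, n + 1))

x = 0.3
print(abs(exp_series(x) - math.exp(x)))       # tiny remainder
print(abs(log1p_series(x) - math.log(1 + x))) # tiny remainder
```

The remainders shrink as n grows, as the text's discussion of $R_n$ suggests.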

A specific case of power function expansion is the Newton binomial formula:

$$(a + b)^n = \sum_{k=0}^{n} \binom{n}{k} a^k b^{n-k}$$
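The binomial formula can be spot-checked in a couple of lines. The values of a, b and n below are arbitrary illustrative choices.

```python
import math

# Expand (a + b)^n term by term with binomial coefficients and compare
# with the direct power.
a, b, n = 2.0, 3.0, 5
expansion = sum(math.comb(n, k) * a**k * b**(n - k) for k in range(n + 1))
print(expansion, (a + b) ** n)  # 3125.0 3125.0
```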

1.1.3 Geometric series

If, within the Taylor formula for $(1 + x)^m$, x is replaced by (−x) and m by (−1), we will obtain:

$$\frac{1}{1 - x} \approx 1 + x + x^2 + x^3 + \cdots$$

It is easy to demonstrate that when |x| < 1, the sequence

$$1, \quad 1 + x, \quad 1 + x + x^2, \quad \ldots, \quad 1 + x + x^2 + \cdots + x^n, \quad \ldots$$

will converge towards the number 1/(1 ā’ x).

The limit of this sequence is therefore a sum comprising an infinite number of terms, termed a series. What we are concerned with here is the geometric series:

$$\sum_{n=0}^{\infty} x^n = 1 + x + x^2 + \cdots + x^n + \cdots = \frac{1}{1 - x}$$
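The convergence of the partial sums towards 1/(1 − x) can be observed directly; the ratio x = 0.4 and the number of terms are illustrative choices.

```python
# Partial sums of the geometric series for |x| < 1 converge to 1/(1 - x).
x = 0.4
partial = sum(x**n for n in range(60))
print(partial, 1 / (1 - x))  # both very close to 1.6666...
```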

A relation linked to this geometric series is the one that gives the sum of the terms in a geometric progression: the sequence t1, t2, t3, etc. is characterised by the relation

$$t_k = t_{k-1} \cdot q \quad (k = 2, 3, \ldots)$$

and the sum t1 + t2 + t3 + · · · + tn is given by the relation:

$$\sum_{k=1}^{n} t_k = \frac{t_1 - t_{n+1}}{1 - q} = t_1\,\frac{1 - q^n}{1 - q}$$
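The closed-form sum of a geometric progression can be checked against the direct term-by-term sum; t1, q and n below are arbitrary illustrative values.

```python
# Compare the direct sum t1 + t2 + ... + tn, with t_k = t1 * q**(k-1),
# against the closed form t1 * (1 - q**n) / (1 - q).
t1, q, n = 5.0, 1.1, 12
terms = [t1 * q ** (k - 1) for k in range(1, n + 1)]
closed_form = t1 * (1 - q**n) / (1 - q)
print(sum(terms), closed_form)  # equal up to rounding
```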

1.2 FUNCTIONS OF SEVERAL VARIABLES

1.2.1 Partial derivatives

1.2.1.1 Definition and graphical interpretation

For a function f of n variables x1, x2, ..., xn, the concept of derivative is defined in a similar way, although the increase h can relate to any of the variables. We will therefore have n concepts of derivatives, relative to each of the n variables, and they will be termed partial derivatives. The partial derivative of f(x1, x2, ..., xn) with respect to xk at the point $(x_1^{(0)}, x_2^{(0)}, \ldots, x_n^{(0)})$ will be defined as:

$$f'_{x_k}(x_1^{(0)}, x_2^{(0)}, \ldots, x_n^{(0)}) = \lim_{h \to 0} \frac{f(x_1^{(0)}, x_2^{(0)}, \ldots, x_k^{(0)} + h, \ldots, x_n^{(0)}) - f(x_1^{(0)}, x_2^{(0)}, \ldots, x_k^{(0)}, \ldots, x_n^{(0)})}{h}$$

The geometric interpretation of the partial derivatives can only be envisaged for functions of two variables, as the graph of such a function enters the field of three dimensions (one dimension for each of the two variables and the third, the ordinate, for the values of the function). We will thus be examining the partial derivatives:

$$f'_x(x_0, y_0) = \lim_{h \to 0} \frac{f(x_0 + h, y_0) - f(x_0, y_0)}{h}$$

$$f'_y(x_0, y_0) = \lim_{h \to 0} \frac{f(x_0, y_0 + h) - f(x_0, y_0)}{h}$$

Let us now look at the graph for this function f (x, y). It is a three-dimensional space

(see Figure A1.3).

Let us also consider the vertical plane that passes through the point (x0, y0) and is parallel to the Ox axis. Its intersection with the graph of f is the curve Cx. The same reasoning as that adopted for functions of one variable shows that the partial derivative f′x(x0, y0) is equal to the slope of the tangent to that curve Cx at the axis point (x0, y0) (that is, the slope of the graph of f in the direction of x). In the same way, f′y(x0, y0) represents the slope of the tangent to Cy at the axis point (x0, y0).
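The two limits defining f′x and f′y can be approximated by central differences. The function f and the evaluation point below are illustrative choices, not from the text.

```python
# Finite-difference sketch of the two partial derivatives of f(x, y):
# f'_x freezes y and varies x; f'_y freezes x and varies y.

def numerical_partials(f, x0, y0, h=1e-6):
    fx = (f(x0 + h, y0) - f(x0 - h, y0)) / (2 * h)  # slope along Ox
    fy = (f(x0, y0 + h) - f(x0, y0 - h)) / (2 * h)  # slope along Oy
    return fx, fy

f = lambda x, y: x**2 * y + 3 * y
fx, fy = numerical_partials(f, 2.0, 1.0)
print(fx, fy)  # close to the exact values 2*x0*y0 = 4 and x0**2 + 3 = 7
```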

1.2.1.2 Extrema without constraint

The point $(x_1^{(0)}, \ldots, x_n^{(0)})$ is a local maximum (resp. minimum) of the function f if

$$f(x_1^{(0)}, \ldots, x_n^{(0)}) \geq f(x_1, \ldots, x_n) \quad [\text{resp. } f(x_1^{(0)}, \ldots, x_n^{(0)}) \leq f(x_1, \ldots, x_n)]$$

for any $(x_1, \ldots, x_n)$ close to $(x_1^{(0)}, \ldots, x_n^{(0)})$.
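The local-minimum definition can be illustrated by sampling a function near a candidate point; the function f(x, y) = x² + y² and the sampling grid below are arbitrary illustrative choices.

```python
# Check numerically that (0, 0) is a local minimum of f(x, y) = x**2 + y**2:
# every sampled point close to (0, 0) gives a value >= f(0, 0).
f = lambda x, y: x**2 + y**2
center = f(0.0, 0.0)
nearby = [f(0.01 * i, 0.01 * j)
          for i in range(-3, 4) for j in range(-3, 4) if (i, j) != (0, 0)]
ok = all(v >= center for v in nearby)
print(ok)  # True
```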

[Figure A1.3: three-dimensional graph of f(x, y), showing the curves Cx and Cy above the axis point (x0, y0).]