
Chaos and Fractals in Financial Markets, Part 5, by J. Orlin Grabbe

Now that we've worked our way to the heart of the

matter, let's take a break from probability theory and

turn our attention again to dynamical systems. In

particular, let's look at our old friend the logistic

equation:

x(n+1) = k x(n) [1 - x(n)],

where x(n) is the input variable, x(n+1) is the output

variable, and k is a constant.

In Part 1, we looked at a particular version of this

equation where k = 4. In general, k takes values 0 < k <=

4.

The dynamic behavior of this equation depends on the

value k, and also on the particular starting value or

starting point, x(0). Later in this series we will examine

how the behavior of this equation changes as we change

k. But not now.

Instead, we are going to look at this equation when we

substitute for x, which is a real variable, a complex

variable z:

z(n+1) = k z(n) [1 - z(n)].

Complex numbers z have the form

z = x + i y,

where i is the square root of minus one. Complex

numbers are normally graphed in a plane, with x on the

horizontal ("real") axis, while y is on the vertical

("imaginary") axis.

That means when we iterate z, we actually iterate two

values: x in the horizontal direction, and y in the vertical

direction. The complex logistic equation is:

x + i y = k (x + i y) [1 - (x + i y)].

(Note that I have dropped the notation x(n) and y(n) and

just used x and y, to make the equations easier to read.

But keep in mind that x and y on the left-hand side of the

equation represent output, while the x and y on the

right-hand side of the equation represent input.)

The output x, the real part of z, is composed of all the

terms that do not multiply i, while the output y, the

http://www.aci.net/kalliste/Chaos5.htm (10 of 14) [12/04/2001 1:30:00]


imaginary part of z, is made up of all the terms that

multiply i.

To complete the transformation of the logistic equation,

we let k be complex also, and write

k = A + B i,

giving as our final form:

x + i y = (A + B i) (x + i y) [1 - (x + i y)].

Now we multiply this all out and collect terms. The

result is two equations in x and y:

x = A (x - x² + y²) + B (2xy - y)

y = B (x - x² + y²) - A (2xy - y).

As in the real version of the logistic equation, the

behavior of the equation depends on the multiplier k = A

+ B i (that is, on A and B), as well as the initial starting

value of z = x + i y (that is, on x(0) and y(0) ).
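As a quick cross-check of this expansion (a Python sketch of mine, not the article's Java applet code), one step of the map computed from the two real equations should agree with direct complex multiplication:

```python
# One step of z(n+1) = k z(n) [1 - z(n)] in real coordinates, matching
# the two equations in the text:
#   x' = A(x - x^2 + y^2) + B(2xy - y)
#   y' = B(x - x^2 + y^2) - A(2xy - y)

def complex_logistic_step(x, y, A, B):
    """Return the next (x, y) for k = A + Bi."""
    u = x - x * x + y * y      # real part of z - z^2
    v = y - 2 * x * y          # imaginary part of z - z^2
    return (A * u - B * v, B * u + A * v)

# Cross-check against Python's built-in complex arithmetic:
A, B = 1.678, 0.95             # the k used in the example below
x0, y0 = 0.3, 0.1              # an arbitrary starting point
z1 = complex(A, B) * complex(x0, y0) * (1 - complex(x0, y0))
x1, y1 = complex_logistic_step(x0, y0, A, B)
print(abs(z1 - complex(x1, y1)) < 1e-12)  # True
```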

Julia Sets

Depending on k, some beginning values z(0) = x(0) + i

y(0) blow off to infinity after a certain number of

iterations. That is, the output values of z keep getting

larger and larger, diverging to infinity. As z is composed

of both an x term and a y term, we use as the criterion for

"getting large" the value of

x² + y².

The square root of this number is called the modulus of

z, and represents the length of a vector from the origin

(0,0) to the point z = (x,y). In the iterations we are about

to see, the criterion to determine if the equation is

diverging to infinity is

x² + y² > 4,

which implies the modulus of z is greater than 2.

When the equation is iterated, some starting values

diverge to infinity and some don't. The Julia set is the

set of starting values for z that remain finite under

iteration. That is, the Julia set is the set of all starting

values (x(0), y(0)) such that the equation output does not

blow off to infinity as the equation is iterated.


Each value for k will produce a different Julia set (i.e., a

different set of (x(0) ,y(0) ) values that do not diverge

under iteration).

Let's do an example. Let k = 1.678 + .95 i. That is, A =

1.678 and B = .95. We let starting values for x(0) range

from -.5 to 1.5, while letting starting values for y(0)

range from -.7 to +.7.

We keep k constant always, so we are graphing the Julia

set associated with k = 1.678 + .95 i.

We iterate the equation 256 times. If, at the end of 256

iterations, the modulus of z is not greater than 2, we

paint the starting point (x(0), y(0)) black. So the entire

Julia set in this example is colored black. If the

modulus of z exceeds 2 during the iterations, the starting

point (x(0), y(0)) is assigned a color depending on the

rate the equation is blowing off to infinity.
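The escape-time rule just described can be sketched as follows (Python rather than the article's Java; the two test points are arbitrary choices of mine, and the plotting grid and color mapping are omitted):

```python
# Escape-time test for one starting point of the complex logistic map
# z(n+1) = k z(n) [1 - z(n)], with k = A + Bi = 1.678 + 0.95i.

def escape_time(x0, y0, A=1.678, B=0.95, max_iter=256):
    """Return None if the orbit never escapes within max_iter steps
    (the starting point would be painted black), otherwise the step
    at which x^2 + y^2 first exceeded 4 (modulus of z > 2)."""
    x, y = x0, y0
    for n in range(max_iter):
        u = x - x * x + y * y
        v = y - 2 * x * y
        x, y = A * u - B * v, B * u + A * v
        if x * x + y * y > 4:
            return n
    return None

print(escape_time(1.5, 0.7))   # an outer-corner start: escapes on the first step (0)
print(escape_time(0.5, 0.0))   # None if bounded, else the escape step
```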

To see the demonstration, be sure Java is enabled on

your web browser and click here.

We can create a plot that looks entirely different by

making a different color assignment. For the next

demonstration, we again iterate the dynamical system

256 times for different starting values of z(n). If, during

the iterations, the modulus of z exceeds 2, then we know

the iterations are diverging, so we plot the starting value

z(0) = (x(0), y(0)) black. Hence the black region of the

plot is made up of all the points not in the Julia set.

For the Julia set itself, we assign bright colors. The color

assigned depends on the value of z after 256 iterations.

For example, if the square of the modulus of z is greater

than .6, but less than .7, the point z(0) is assigned a light

red color. Hence the colors in the Julia set indicate the

value of the modulus of z at the end of 256 iterations.

To see the second demonstration of the same equation,

but with this alternative color mapping, be sure Java is

enabled on your web browser and click here.

So, from the complex logistic equation, a dynamical

system, we have created a fractal. The border of the Julia

set is determined by k in the equation, and this border

was created in a working Euclidean space of 2

dimensions, has a topological dimension of 1, but has a

Hausdorff dimension that lies between these two


numbers.

Meanwhile, we have passed from mathematics to art. Or

maybe the art was there all along. We just had to learn

how to appreciate it.

Notes

[1] This is the stock market equivalent of the Interest

Parity Theorem that relates the forward price F(t+T) of a

currency, T-days in the future, to the current spot price

S(t). In the foreign exchange market, the relationship can

be written as:

F(t+T) = S(t) [1 + r (T/360)]/[1+r*(T/360)]

where r is the domestic interest rate (say the dollar

interest rate), and r* is the foreign interest rate (say the

interest rate on the euro). S is then the spot dollar price

of the euro, and F is the forward dollar price of the euro.

We can also use this equation to give us the forward

value F of a stock index in relation to its current value S,

in which case r* must be the dividend yield on the stock

index.

(A more precise calculation would disaggregate the

"dividend yield" into the actual days and amounts of

dividend payments.)

This relationship is explored at length in Chapter 5,

"Forwards, Swaps, and Interest Parity," in J. Orlin

Grabbe, International Financial Markets, 3rd edition,

Prentice-Hall, 1996.
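As a numeric illustration of the interest-parity formula (a sketch with made-up spot and rate values, not figures from the text):

```python
# Forward price from interest parity:
# F(t+T) = S(t) [1 + r (T/360)] / [1 + r* (T/360)]

def forward_price(S, r_dom, r_for, T):
    """S: spot price; r_dom, r_for: annualized simple rates; T: days forward."""
    return S * (1 + r_dom * T / 360) / (1 + r_for * T / 360)

# Spot $1.05 per euro, 5% dollar rate, 3% euro rate, 90 days forward:
F = forward_price(1.05, 0.05, 0.03, 90)
print(round(F, 6))  # 1.055211 -- the euro trades at a forward premium (F > S)
```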

[2] The definitions here follow those in Gennady

Samorodnitsky and Murad S. Taqqu, Stable

Non-Gaussian Random Processes: Stochastic Models

with Infinite Variance, Chapman & Hall, New York,

1994.

[3] This is Theorem VI.1.1 in William Feller, An

Introduction to Probability Theory and Its Applications,

Vol 2, 2nd ed., Wiley, New York, 1971.

[4] If Y = X/n^(1/α), then for n independent copies of Y,

Y(1) + Y(2) + ... + Y(n-1) + Y(n) ~ n^(1/α) Y = n^(1/α) (X/n^(1/α)) =

X.
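For the Gaussian case (α = 2), this scaling can be confirmed with a simple variance computation, since variances of independent variables add (a sanity check of mine, not part of the original note):

```python
# If Y = X / n^(1/alpha) with alpha = 2 and X Gaussian, the sum of n
# independent copies of Y has the same variance as X itself, so the
# sum is distributed like X.

n, alpha = 16, 2.0
var_X = 1.0                              # take X standard normal
var_Y = var_X / n ** (2 / alpha)         # Var(X / n^(1/alpha)) = Var(X) / n^(2/alpha)
var_sum = n * var_Y                      # variance of Y(1) + ... + Y(n)
print(var_sum == var_X)  # True
```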


J. Orlin Grabbe is the author of International Financial

Markets, and is an internationally recognized derivatives

expert. He has recently branched out into cryptology,

banking security, and digital cash. His home page is

located at http://www.aci.net/kalliste/homepage.html .

-30-

from The Laissez Faire City Times, Vol 3, No 29, July

19, 1999


Julia Plot of the Complex Logistic Equation

This applet plots the Julia set for the complex logistic equation: z(n+1) = k z(n)[1-z(n)], for k = 1.678 +

.95 i. The equation is iterated 256 times for different starting values of z(n). If the equation is not

diverging to infinity at the end of the 256 iterations, then the starting point z(0) is painted black. Hence

the black area of the plot represents the Julia set (except for some points on the outer corners that are

also painted black if they diverge on the first iteration). If, however, the equation begins diverging

(defined as occurring when the modulus of z(n+1) > 2), the iterations are terminated, and the point is

plotted with one of 16 colors. The iterations cycle through this set of colors, but terminate when the

equation diverges. Hence, the color plotted indicates the rate the equation output is blowing off to

infinity.

The Java source code.

http://www.aci.net/kalliste/logistic_julia.html [12/04/2001 1:30:03]

Alternative Julia Plot of the Complex Logistic Equation

This applet plots the Julia set for the complex logistic equation: z(n+1) = k z(n)[1-z(n)], for k = 1.678 +

.95 i. The equation is iterated 256 times for different starting values of z(n). If the equation is not

diverging to infinity at the end of the 256 iterations, then the terminal value z(n+1) = z(256) is assigned a

color based on the value of the square of its modulus (i.e., on the value of x² + y²). Hence, the brightly

colored area of the plot consists of the Julia set. If, however, the equation begins diverging (defined as

occurring when the modulus of z(n+1) > 2), the iterations are terminated, and the point z(0) is painted

black. Hence the black area of the plot represents points that diverge and which are therefore not in

the Julia set.

The Java source code.

http://www.aci.net/kalliste/log2_julia.html [12/04/2001 1:30:04]


Chaos and Fractals in Financial Markets

Part 6

by J. Orlin Grabbe

Prechter's Drum Roll

Robert Prechter is a drummer. He faces the following problem. He

wants to strike his drum three times, creating two time intervals

which have a special ratio:

1<------------g-------------->2<--------------------h-------------------->3

Here is the time ratio he is looking for: he wants the ratio of the

first time interval to the second time interval to be the same as the

ratio of the second time interval to the entire time required for the

three strikes.

Let the first time interval (between strikes 1 and 2) be labeled g,

and the second time interval (between strikes 2 and 3) be labeled

h. So what Prechter is looking for is the ratio of g to h to be the

same as h to the whole. However, the whole is simply g + h, so

Prechter seeks g and h such that:

g / h = h / (g+h).

Now, Prechter is only looking for a particular ratio. He doesn't

care whether he plays his drum slow or fast. So h can be anything:

1 nano-second, 1 second, 1 minute, or whatever. So let's set h = 1.

(Note that by setting h = 1, we are choosing our unit of

measurement.) We then have

g / 1 = 1 / (1+g).

Multiplying the equation out we get

g² + g - 1 = 0.

This gives two solutions:

g = [-1 + 5^0.5] / 2 = 0.618033…, and

g = [-1 - 5^0.5] / 2 = -1.618033…

The first, positive solution (g = 0.618033…) is called the golden

mean. Using h = 1 as our scale of measurement, then g, the

golden mean, is the solution to the ratio

http://www.aci.net/kalliste/chaos6.htm (1 of 6) [12/04/2001 1:30:06]


g / h = h / (g+h).

By contrast, if we use g = 1 as our scale of measurement, and

solve for h, we have

1 / h = h / (1+h), which gives the equation

h² - h - 1 = 0.

Which gives the two solutions:

h = [1 + 5^0.5] / 2 = 1.618033…, and

h = [1 - 5^0.5] / 2 = -0.618033…

Note that since the units of measurement are somewhat arbitrary,

h has as much claim as g to being the solution to Prechter's drum

roll. Naturally, g and h are closely related:

h (using g as the unit scale) = 1/ g (using h as the unit scale).

for either the positive or negative solutions:

1.618033… = 1 / 0.618033…

-0.618033… = 1 / -1.618033…
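These relations are easy to verify numerically (a sketch using the exact square-root expressions rather than the truncated decimals):

```python
# The golden mean g and its reciprocal h, from the quadratic solutions above.

g = (-1 + 5 ** 0.5) / 2    # 0.618033...
h = (1 + 5 ** 0.5) / 2     # 1.618033...

print(abs(g - 1 / (1 + g)) < 1e-12)   # g / 1 = 1 / (1 + g): Prechter's ratio
print(abs(g ** 2 + g - 1) < 1e-12)    # g solves g^2 + g - 1 = 0
print(abs(h - 1 / g) < 1e-12)         # h = 1/g
print(abs(h ** 2 - h - 1) < 1e-12)    # h solves h^2 - h - 1 = 0
```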

What is the meaning of the negative solutions? These also have a

physical meaning, depending on where we place our time origin.

For example, let's let the second strike of the drum be time t=0:

<------------g-------------->0<--------------------h-------------------->

Then we find that for g = -1.618033, h = 1, we have

-1.618033 /1 = 1/ [1 - 1.618033].

So the negative solutions tell us the same thing as the positive

solutions; but they correspond to a time origin of t = 0 for the

second strike of the drum.

The same applies for g = 1, h = -0.618033, since

1 / -0.618033 = -0.618033 / (1 - 0.618033),

but in this case time is running backwards, not forwards.

The golden mean g, or its reciprocal equivalent h, are found

throughout the natural world. Numerous books have been devoted

to the subject. These same ratios are found in financial markets.

Symmetric Stable Distributions and the Golden Mean Law


In Part 5, we saw that symmetric stable distributions are a type of

probability distribution that are fractal in nature: a sum of n

independent copies of a symmetric stable distribution is related to

each copy by a scale factor n^(1/α), where α is the Hausdorff

dimension of the given symmetric stable distribution.

In the case of the normal or Gaussian distribution, the Hausdorff

dimension α = 2, which is equivalent to the dimension of a plane.

A Bachelier process, or Brownian motion (as first covered in Part

2), is governed by a T^(1/α) = T^(1/2) law.

In the case of the Cauchy distribution (Part 4), the Hausdorff

dimension α = 1, which is equivalent to the dimension of a line. A

Cauchy process would be governed by a T^(1/α) = T^(1/1) = T law.

In general, 0 < α <= 2. This means that between the Cauchy and

the Normal are all sorts of interesting distributions, including ones

having the same Hausdorff dimension as a Sierpinski carpet (α =

log 8 / log 3 = 1.8927…) or Koch curve (α = log 4 / log 3 =

1.2618…).

Interestingly, however, many financial variables are symmetric

stable distributions with an α parameter that hovers around the

value of h = 1.618033, where h is the reciprocal of the golden

mean g derived and discussed in the previous section. This implies

that these market variables follow a time scale law of T^(1/α) = T^(1/h) =

T^g = T^0.618033... That is, these variables follow a

T-to-the-golden-mean power law, by contrast to Brownian

motion, which follows a T-to-the-one-half power law.

For example, I estimated α for daily changes in the

dollar/deutschemark exchange rate for the first six years following

the breakdown of the Bretton Woods Agreement of fixed

exchange rates in 1973. [1] (The time period was July 1973 to

June 1979.) The value of α was calculated using maximum

likelihood techniques [2]. The value I found was

α = 1.62

with a margin of error of plus or minus .04. You can't get much

closer than that to α = h = 1.618033…

In this and other financial asset markets, it would seem that time

scales not according to the commonly assumed square-root-of-T

law, but rather to a T^g law.
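To see how the two time-scale laws compare, here is a short illustrative computation (the time values are arbitrary choices):

```python
# Spread of a Brownian T^(1/2) law versus a golden-mean T^g law,
# with g = 0.618033...  For T > 1 the T^g law always grows faster.

g = 0.618033
for T in (1, 4, 16, 64):
    print(T, round(T ** 0.5, 3), round(T ** g, 3))
```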

The Fibonacci Dynamical System


The value of h = 1.618033… is closely related to the Fibonacci

sequence of numbers. The Fibonacci sequence of numbers is a

sequence in which each number is the sum of the previous two:

1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, …

Notice the third number in the sequence, 2=1+1. The next number

3=2+1. The next number 5=3+2. And so on, each number being

the sum of the two previous numbers.

This mathematical sequence appeared in 1202 A.D. in the book

Liber Abaci, written by the Italian mathematician Leonardo da

Pisa, who was popularly known as Fibonacci (son of Bonacci).

Fibonacci told a story about rabbits. These were mathematical

rabbits that live forever, take one generation to mature, and

always thereafter have one offspring per generation. So if we start

with 1 rabbit (the first 1 in the Fibonacci sequence), the rabbit

takes one generation to mature (so there is still 1 rabbit the next

generation, the second 1 in the sequence), then it has a baby

rabbit in the following generation (giving 2 rabbits, the 2 in the

sequence), has another offspring the next generation (giving 3

rabbits); then, in the next generation, the first baby rabbit has

matured and also has a baby rabbit, so there are two offspring

(giving 5 rabbits in the sequence), and so on.

Now, the Fibonacci sequence represents the path of a dynamical

system. We introduced dynamical systems in Part 1 of this series.

(In Part 5, we discussed the concept of Julia Sets, and used a

particular dynamical system, the complex logistic equation, to

create computer art in real time using Java applets. The Java

source code was also included.)

The Fibonacci dynamical system looks like this:

F(n+2) = F(n+1) + F(n).

The number of rabbits in each generation (F(n+2)) is equal to the

sum of the rabbits in the previous two generations (represented by

F(n+1) and F(n)). This is an example of a more general dynamical

system that may be written as:

F(n+2) = p F(n+1) + q F(n),

where p and q are some numbers (parameters). The solution to the

system depends on the values of p and q, as well as the starting

values F(0) and F(1). For the Fibonacci system, we have the

simplification p = q = F(0) = F(1) = 1.
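Iterating the general two-term system directly (a sketch; the function name is my own) reproduces the Fibonacci sequence when p = q = F(0) = F(1) = 1:

```python
# F(n+2) = p F(n+1) + q F(n), iterated from starting values f0, f1.

def recurrence(p, q, f0, f1, count):
    """Return the first `count` terms of the two-term linear system."""
    terms = [f0, f1]
    while len(terms) < count:
        terms.append(p * terms[-1] + q * terms[-2])
    return terms[:count]

print(recurrence(1, 1, 1, 1, 10))  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```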

I will not go through the details here, but the Fibonacci system can


be solved to yield the solution:

F(n) = [1/5^0.5] { [(1 + 5^0.5)/2]^n - [(1 - 5^0.5)/2]^n }, n = 1, 2, . . .

The following table gives the value of F(n) for the first few values

of n:

n       1   2   3   4   5

F(n)    1   1   2   3   5

And so on for the rest of the numbers in the Fibonacci sequence.

Notice that the general solution involves the two solution values

we previously calculated for h. To simplify, however, we will now

write everything in terms of the first of these values (namely, h =

1.618033…). Thus we have

h = [1 + 5^0.5] / 2 = 1.618033…, and

-1/h = [1 - 5^0.5] / 2 = -0.618033…

Inserting these into the solution for the Fibonacci system F(n), we

get

F(n) = [1/5^0.5] { h^n - [-1/h]^n }, n = 1, 2, . . .

Alternatively, writing the solution using the golden mean g, we

have

F(n) = [1/5^0.5] { g^(-n) - [-g]^n }, n = 1, 2, . . .
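A quick check that the closed-form solution reproduces the Fibonacci numbers (a sketch; rounding absorbs floating-point error):

```python
# Binet-style closed form: F(n) = (1/5^0.5) [ h^n - (-1/h)^n ],
# with h = (1 + 5^0.5)/2 as defined in the text.

s = 5 ** 0.5
h = (1 + s) / 2

def fib_closed(n):
    return (h ** n - (-1 / h) ** n) / s

print([round(fib_closed(n)) for n in range(1, 9)])  # [1, 1, 2, 3, 5, 8, 13, 21]
```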

The use of Fibonacci relationships in financial markets has been

popularized by Robert Prechter [3] and his colleagues, following

the work of R. N. Elliott [4]. The empirical evidence that the

Hausdorff dimension of some symmetric stable distributions

encountered in financial markets is approximately α = h =

1.618033… indicates that this approach is based on a solid

empirical foundation.

Notes

[1] See "Research Strategy in Empirical Work with Exchange

Rate Distributions," in J. Orlin Grabbe, Three Essays in

International Finance, Ph.D. Thesis, Department of Economics,

Harvard University, 1981.

[2] There are two key papers by DuMouchel which yield the


background needed for doing maximum likelihood estimates of α,

where α < 2:

DuMouchel, William H. (1973), "On the Asymptotic

Normality of the Maximum Likelihood Estimate

when Sampling from a Stable Distribution," Annals

of Statistics, 1, 948-57.

DuMouchel, William H. (1975), "Stable Distributions

in Statistical Inference: 2. Information from Stably

Distributed Samples," Journal of the American

Statistical Association, 70, 386-393.

[3] See, for example:

Robert R. Prechter, Jr., At the Crest of the Tidal

Wave, John Wiley & Sons, New York, 1995.

Robert R. Prechter, Jr., The Wave Principle of Human

Social Behavior and the New Science of Socionomics,

New Classics Library, Gainesville, Georgia, 1999.

[4] See R.N. Elliott's Masterworks: The Definitive Collection,

edited by Robert R. Prechter, Jr., Gainesville, Georgia, 1994.

J. Orlin Grabbe is the author of International Financial Markets,

and is an internationally recognized derivatives expert. He has

recently branched out into cryptology, banking security, and

digital cash. His home page is located at

http://www.aci.net/kalliste/homepage.html .

-30-

from The Laissez Faire City Times, Vol 3, No 35, September 6,

1999


Chaos and Fractals in Financial Markets

Part 7

by J. Orlin Grabbe

Grow Brain

Many dynamical systems create solution paths, or

trajectories, that look strange and complex. These

solution plots are called "strange attractors".

Some strange attractors have a fractal structure. For

example, we saw in Part 3 that it was easy to create a

fractal called a Sierpinski Carpet by using a stochastic

dynamical system (one in which the outcome at each

step is partially determined by a random component that

either selects among equations or forms part of at least

one of the equations, or both).

Here is a dynamical system that I ran across while doing

computer art. I labeled it "Grow Brain" because of its

structure. To see Grow Brain in action, make sure Java is

enabled on your browser (you can turn it off afterward)

and click here. (The truly paranoid can, of course,

compile their own applet, since I provide the source

code, as usual.)

The trajectory of Grow Brain is amazingly complex. But

is it a fractal? That is, at some larger or smaller scale,

will similar structures repeat themselves? Unlike the case

of the Sierpinski Carpet, the answer to this question is

not obvious for Grow Brain.

Some dynamical systems create fractal structures in time

(as Brownian motion does, in Part 2, or the

Fibonacci-type systems of Part 6 do), while others create

fractal structures in space (as in the aforementioned

Sierpinski carpet).

And some systems are all wet. Or maybe not, as the case

may be.

Hurst, Hydrology, and the Annual Flooding of the

Nile

http://www.aci.net/kalliste/chaos7.htm (1 of 7) [12/04/2001 1:30:09]


For centuries, perhaps millennia, the yearly flooding of

the Nile was the basis of agriculture which supported

much of known civilization. The annual overflowing of

the river deposited rich top soil from the Ethiopian

Highland along the river banks. The water and silt were

distributed by irrigation, and the staple crops of wheat,

barley, and flax were planted. The grain was harvested

and stored in silos and granaries, where it was protected

from rodents by guard cats, whom the Egyptians

worshipped and turned into a cult (of the goddess Bast)

because of their importance for survival of the grain, and

hence for human survival.

The amount of Nile flooding was critical. A good flood

meant a good harvest, while a low-water flood meant a

poor harvest and possible food shortage. The flooding

came (and still comes) from tropical rains in the Upper

Nile Basin in Ethiopia (the Blue Nile) and in the East

African Plateau (the White Nile). The river flooding

would begin in the Sudan in April, and reach Aswan in

Egypt by July. (This would occur about the time of the

heliacal rising of the Dog-Star Sirius, or Sothis, around

July 19 in the Julian calendar.) The waters would then

continue to rise, peaking in mid-September in Aswan.

Further down the river at Cairo, the peak wouldn't occur

until October. The waters would then fall rapidly in

November and December, and continue to fall afterward,

reaching their low point in the March to May period.

Ancient Egypt had three seasons, all determined in

reference to the river: akhet, the "inundation"; peret, the

season when land emerged from the flood; and shomu,

the time when water was low.

A British government bureaucrat named Hurst made a

study of records of the Nile's flooding and noticed

something interesting. Harold Edwin Hurst was a poor

Leicester boy who made good, eventually working his

way into Oxford, and later became a British "civil

servant" in Cairo in 1906. He got interested in the Nile.

He looked at 800 years of records and noticed that there

was a tendency for a good flood year to be followed by

another good flood year, and for a bad (low) flood year

to be followed by another bad flood year.

That is, there appeared to be non-random runs of good or

bad years. Later Mandelbrot and Wallis [1] used the term

Joseph effect to refer to any persistent phenomenon like


this (alluding to the seven years of Egyptian plenty

followed by the seven years of Egyptian famine in the

biblical story of Joseph).

Of course, even if the yearly flows were independent,

there still could be runs of good or bad years. So to pin

this down, Hurst calculated a variable which is now

called a Hurst exponent H. The expectation was that H =

½ if the yearly flood levels were independent of each

other.

Calculating the Hurst Exponent

Let me give a specific example of Hurst exponent

calculation which will illustrate the general rule.

Suppose there are 99 yearly observations of the height h

of the mid-September Nile water level at Aswan: h(1),

h(2), . . ., h(99).

Calculate a location m and a scale c for h. If we assume

in general that h has a finite variance, then m is simply

the sample mean of the 99 observations, while c is the

standard deviation.

The first thing is to remove any trend, any tendency over

the century for h to rise or fall as a long-run phenomenon.

So we subtract m from each of the observations h,

getting a new series x that has mean zero:

x(1) = h(1) - m,

x(2) = h(2) - m,

...

x(99) = h(99) - m .

The x's are a set of variables with mean zero.

Positive x's represent those years when the river level is

above average, while negative x's represent those years

when the river level is below average.

Next we form partial sums of these random variables,

each partial sum Y(n) being the sum of all the years prior

to year n:

Y(1) = x(1),

Y(2) = x(1) + x(2),

...

Y(n) = x(1) + x(2) + . . . + x(n),

...

Y(99) = x(1) + x(2) + x(3) + . . . + x(99).


Since the Y's are a sum of mean-zero random variables

x, they will be positive if they have a preponderance of

positive x's and negative if they have a preponderance of

negative x's. In general, the set of Y's

{Y(1), Y(2), ... , Y(99)}

will have a maximum and a minimum: max Y and min

Y, respectively. The difference between these two is

called the range R:

R = max Y - min Y

If we adjust R by the scale parameter c, we get the

rescaled range:

rescaled range = R/c .

Now, the probability theorist William Feller [2] had

proven that if a series of random variables like the x's 1)

had finite variance, and 2) were independent, then the

rescaled range formed over n observations would be

equal to:

R/c = k n^(1/2)

where k is a constant (in particular, k = (π/2)^(1/2)). That

is, the rescaled range would increase much like the

partial sums of independent variables (with finite

variance) we looked at in Part 5: namely, the partial

sums would increase by a factor of n^(1/2).

In particular, for n = 99 in our hypothetical data, the

result would be:

R/c = k 99^(1/2).

Now, the latter equation implies log(R/c) = log k + ½ log

99. So if you ran a regression of log(R/c) against log(n)

[for a number of rescaled ranges (R/c) and their

associated number of years n] so as to estimate an

intercept a and a slope b,

log(R/c) = a + b log(n),

you should find that b is statistically indistinguishable

from ½.
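The steps above (de-mean, form partial sums, take the range, rescale by the standard deviation) can be sketched for a single block as follows; the eight-point series is made-up illustration data, not Nile records:

```python
# Rescaled range R/c for one block of observations.

def rescaled_range(series):
    n = len(series)
    m = sum(series) / n                           # location: sample mean
    x = [v - m for v in series]                   # mean-zero deviations
    c = (sum(d * d for d in x) / n) ** 0.5        # scale: (population) std deviation
    partial, running = [], 0.0
    for d in x:
        running += d
        partial.append(running)                   # Y(n) = x(1) + ... + x(n)
    R = max(partial) - min(partial)               # the range
    return R / c

series = [3.0, 5.0, 4.0, 6.0, 2.0, 7.0, 5.0, 4.0]
print(rescaled_range(series))  # 2.0 for this particular series
```

Estimating the Hurst exponent H would repeat this computation over blocks of many different lengths n and regress log(R/c) on log(n).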

But that wasn't what Hurst found. Instead, he found that

in general the rescaled range was governed by a power


law

R/c = k n^H

where the Hurst exponent H was greater than ½ (Hurst

found H ≅ .7).

This implied that succeeding x's were not independent of

each other: x(t) had some sticky, persistent effect on

x(t+1). This was what Hurst had observed in the data,

and his calculation showed H to be a good bit above ½

[3].

That this would be true in general for H > ½, of course,

needs to be proven. Nevertheless, to summarize, for

reference, for the Hurst exponent H:

H = ½ : the flood level deviations from the

mean are independent, random; the x's are

independent and correspond to a random

walk;

½ < H <= 1: the flood level deviations are

persistent: high flood levels tend to be

followed by high flood levels, and low

flood levels by low flood levels; x(t+1)

tends to deviate from the mean the same

way x(t) did; the probability that x(t+1)

deviates from the mean in the same

direction as x(t) increases as H approaches

1;

0 <= H < ½: the flood level deviations are

anti-persistent: the x's are

mean-reverting; high flood levels have a

tendency to be followed by low flood

levels, and vice-versa; the probability that

x(t+1) deviates from the mean in the

opposite direction from x(t) increases as H

approaches 0.

A Misunderstanding to Avoid

Recall that Bachelier had noted that the probability range

of the log of a stock price would increase with the square

root of time T. The probability range, starting at log S,

would grow with T according to:

(log S - k T^(1/2), log S + k T^(1/2)),


where k is the scale (in his case, the standard deviation)

c, k = c. But, more generally, the symmetric stable

distributions of Part 5, increase with T raised to the

reciprocal power of the Hausdorff dimension α (α <= 2):

(log S - k T^(1/α), log S + k T^(1/α)).

Hurst similarly said the rescaled range of the flood level

varied according to (setting n = T):

R/c = k T^H.

So it is tempting to equate the Hurst exponent H with the

reciprocal of the Hausdorff dimension D, to equate H

with 1/D = 1/α. But we must be careful.

Recall that symmetric stable distributions, with α < 2,

have infinite variance (for them, variance is a blob

measure that is not meaningful). However, here in

discussing the Hurst exponent we are assuming that the

variance, and standard deviation (the scale c), are finite,

and hence α = 2. The role of the Hurst exponent is to

inform us whether the yearly flood deviations are

independent or persistent. H is not related to the need for

a different scale measure. The variance and the standard

deviation are well defined for these latter processes.

Nevertheless, the formal equation H = 1/D or D = 1/H

yields the correct exponent for T in the case ½ <= H <= 1.

Even though α = 2, the calculation of the Hausdorff

dimension D yields D < 2 if the increments are not

independent. Hence D can take a minimum value of 1, D

= 1/H = 1/1 = 1 when H = 1, so that the process

accumulates variation (rescaled range) much like a

Cauchy sequence (T^H = T); or a maximum of 2, D = 1/H

= 1/½ = 2 when H = ½, so that the process accumulates

variation (rescaled range) like a Gaussian sequence (T^H

= T^(1/2)), or ordinary Brownian motion. [4]

Mandelbrot called these types of processes where α = 2,

but where H ≠ ½, fractional Brownian motion. (I will not

elaborate the case H < ½ here.)
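The rescaled-range recipe behind H can be sketched in a few lines. This is a minimal illustration, not Hurst's full procedure: the window sizes, the random seed, and the crude two-point slope estimate are all choices of mine, and a careful estimate would regress log(R/S) on log(n) over many window sizes and average over subwindows.

```python
import math
import random

def rescaled_range(xs):
    """R/S for one window: range of the cumulative deviations from
    the mean, divided by the standard deviation (the scale c)."""
    n = len(xs)
    mean = sum(xs) / n
    devs = [x - mean for x in xs]
    cum, running = [], 0.0
    for d in devs:
        running += d
        cum.append(running)
    r = max(cum) - min(cum)
    s = math.sqrt(sum(d * d for d in devs) / n)
    return r / s

random.seed(1)
series = [random.gauss(0.0, 1.0) for _ in range(4096)]

# H is the slope of log(R/S) against log(n); here a crude
# two-point fit between the smallest and largest window.
sizes = [64, 256, 1024, 4096]
logs = [(math.log(n), math.log(rescaled_range(series[:n]))) for n in sizes]
(x0, y0), (x1, y1) = logs[0], logs[-1]
H = (y1 - y0) / (x1 - x0)
print(round(H, 2))  # near 0.5 for independent Gaussian increments
```

For persistent data (a bull run, the Nile's flood levels) the same slope drifts above 0.5; for anti-persistent, mean-reverting data it falls below.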

Bull and Bear Markets

We are, of course, used to the idea of persistent

phenomena in the stock market and foreign exchange


markets. The NASD rises relentlessly for a period of

time. Then it falls just as persistently. There are bull and

bear markets, implying the price rise or decline is a

persistent phenomenon, and not just an accidental

accumulation of random variables in one direction.

The US dollar rises relentlessly for a period of years, then

(as it is doing now) begins a relentless decline for

another period of years. In the case of the Nile, the

patterns of rising and falling are partly governed by the

weather patterns in the green rain forest of the Ethiopian

highlands. In the case of the US dollar, the patterns of

rising and falling are partly governed by the span of

Green in the Washington D.C. lowlands.

Notes

[1] B.B. Mandelbrot & J. R. Wallis, "Noah, Joseph, and

Operational Hydrology." Water Resources Research 4,

909-918, (1968).

[2] W. Feller, "The asymptotic distribution of the range

of sums of independent random variables." Annals of

Mathematical Statistics 22, 427 (1951).

[3] H. E. Hurst, "Long-term storage capacity of

reservoirs." Tr. of the American Society of Civil

Engineers 116, 770-808 (1951).

[4] See also the discussion on pages 251-2 in Benoit B.

Mandelbrot, The Fractal Geometry of Nature. W.H.

Freeman and Company, New York, 1983.

J. Orlin Grabbe is the author of International Financial

Markets, and is an internationally recognized derivatives

expert. He has recently branched out into cryptology,

banking security, and digital cash. His home page is

located at http://orlingrabbe.org/ .

-30-

from The Laissez Faire City Times, Vol 5, No 3,

January 15, 2001


Iterative Plot of Grow Brain



This amazingly complex structure is created by the simple iteration

of the two equations:

X = Y - (X/|X|)*sqrt(|aX|)

Y = b - X

(Here "sqrt" means to take the square root.)

The first series of iterations here starts with the initial values X = .9 and Y = 0. The two equations are

then iterated 10,000 times, plotting each (X,Y) point of the iteration. Then the starting value for Y is

increased slightly, and another 10,000 points are plotted (see the Java source code for details).

By varying the parameter a in the first equation above (aX is set at 1.53 X for the applet supplied here),

and also the rate at which the starting value for Y is increased (set Y = .025*col, say, instead of Y =

.047*col), different structures can be obtained.
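The pair of equations above is easy to reproduce outside the applet. The sketch below iterates them in Python under stated assumptions: a = 1.53 as in the applet, but the constant b is never given in the text, so the value 0.3 here is purely illustrative; and since the listing is ambiguous about whether Y uses the old or the updated X, both equations are applied simultaneously.

```python
import math

def grow_brain(x, y, a=1.53, b=0.3, steps=10_000):
    """Iterate X = Y - sign(X)*sqrt(|a*X|), Y = b - X, collecting points.

    b is NOT specified in the article; 0.3 is an illustrative guess."""
    points = []
    for _ in range(steps):
        sign = 1.0 if x >= 0.0 else -1.0  # X/|X|, taking sign(0) as +1
        x, y = y - sign * math.sqrt(abs(a * x)), b - x
        points.append((x, y))
    return points

pts = grow_brain(0.9, 0.0)  # the applet's first starting values
print(len(pts))  # 10000
```

Plotting the (X, Y) pairs with any 2-D scatter plot reproduces one slice of the structure; sweeping the starting value of Y, as the applet does, fills it in.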

The Java source code.

http://www.aci.net/kalliste/GrowBrain.html [12/04/2001 1:30:15]

Chaos and Fractals in Financial Markets, Part 2, by J. Orlin Grabbe

Chaos and Fractals in Financial Markets

Part 2

by J. Orlin Grabbe

The French Gambler and the Pollen

Grains

In 1827 a Scottish botanist, Robert Brown, got his hands

on some new technology: a microscope "made for me by

Mr. Dolland, . . . of which the three lenses that I have

generally used, are of a 40th, 60th, and 70th of an inch

focus."

Right away, Brown noticed how pollen grains suspended

in water jiggled around in a furious, but random, fashion.

To see what Brown saw under his microscope, make

sure that Java is enabled on your web browser, and then

click here.

What was going on was a puzzle. Many people

wondered: Were these tiny bits of organic matter

somehow alive? Luckily, Hollywood wasn't around at

the time, or John Carpenter might have made his

wonderful horror film They Live! about pollen grains

rather than about the infiltration of society by liberal

control-freaks.

Robert Brown himself said he didn't think the movement

had anything to do with tiny currents in the water, nor

was it produced by evaporation. He explained his

observations in the following terms:

"That extremely minute particles of solid

matter, whether obtained from organic or

inorganic substances, when suspended in

pure water, or in some other aqueous fluids,

exhibit motions for which I am unable to

account, and from which their irregularity

and seeming independence resemble in a

remarkable degree the less rapid motions of

some of the simplest animalcules of

infusions. That the smallest moving

particles observed, and which I have termed

Active Molecules, appear to be spherical, or

nearly so, and to be between 1-20,000dth

and 1-30,000dth of an inch in diameter; and

http://orlingrabbe.com/chaos2.htm (1 of 16) [12/04/2001 1:30:22]


that other particles of considerably greater

and various size, and either of similar or of

very different figure, also present analogous

motions in like circumstances.

"I have formerly stated my belief that these

motions of the particles neither arose from

currents in the fluid containing them, nor

depended on that intestine motion which

may be supposed to accompany its

evaporation."[1]

Brown noted that others before him had made similar

observations in special cases. For example, a Dr. James

Drummond had observed this fishy, erratic motion in

fish eyes:

"In 1814 Dr. James Drummond, of Belfast,

published in the 7th Volume of the

Transactions of the Royal Society of

Edinburgh, a valuable Paper, entitled 'On

certain Appearances observed in the

Dissection of the Eyes of Fishes.'

"In this Essay, which I regret I was entirely

unacquainted with when I printed the

account of my Observations, the author

gives an account of the very remarkable

motions of the spicula which form the

silvery part of the choroid coat of the eyes

of fishes."

Today, we know that this motion, called Brownian

motion in honor of Robert Brown, was due to random

fluctuations in the number of water molecules

bombarding the pollen grains from different directions.

Experiments showed that particles moved further in a

given time interval if you raised the temperature, or

reduced the size of a particle, or reduced the "viscosity"

[2] of the fluid. In 1905, in a celebrated treatise entitled

The Theory of the Brownian Movement [3], Albert

Einstein developed a mathematical description which

explained Brownian motion in terms of particle size,

fluid viscosity, and temperature. Later, in 1923, Norbert

Wiener gave a mathematically rigorous description of

what is now referred to as a "stochastic process." Since

that time, Brownian motion has been called a Wiener

process, as well as a "diffusion process", a "random

walk", and so on.

But Einstein wasn't the first to give a mathematical


description of Brownian motion. That honor belonged to

a French graduate student who loved to gamble. His

name was Louis Bachelier. Like many people, he sought

to combine duty with pleasure, and in 1900 in Paris

presented his doctoral thesis, entitled Théorie de la

spéculation.

What interested Bachelier were not pollen grains and

fish eyes. Instead, he wanted to know why the prices of

stocks and bonds jiggled around on the Paris bourse. He

was particularly intrigued by bonds known as rentes sur

l'état — perpetual bonds issued by the French

government. What were the laws of this jiggle?

Bachelier wondered. He thought the answer lay in the

prices being bombarded by small bits of news. ("The

British are coming, hammer the prices down!")

The Square Root of Time

Among other things, Bachelier observed that the

probability intervals into which prices fall seemed to

increase or decrease with the square root of time

(T^0.5). This was a key insight.

By "probability interval" we mean a given probability

for a range of prices. For example, prices might fall

within a certain price range with 65 percent probability

over a time period of one year. But over two years, the

price range that occurs with 65 percent

probability will be larger than the one-year range. How much

larger? Bachelier said the change in the price range was

proportional to the square root of time.

Let P be the current price. After a time T, the prices will

(with a given probability) fall in the range

(P – a T^0.5, P + a T^0.5), for some constant a.

For example, if T represents one year (T=1), then the last

equation simplifies to

(P – a, P + a), for some constant a.

The price variation over two years (T=2) would be

a T^0.5 = a(2)^0.5 = 1.4142 a

or 1.4142 times the variation over one year. By contrast,

the variation over a half-year (T=0.5) would be

a T^0.5 = a(0.5)^0.5 = .7071 a

or about 71 percent of the variation over a full year. That

is, after 0.5 years, the price (with a given probability)


would be in the range

(P – .7071a, P + .7071a).

Here the constant a has to be determined, but one

supposes it will be different for different types of prices:

a may be bigger for silver prices than for gold prices, for

example. It may be bigger for a share of Yahoo stock

than for a share of IBM.

The range of prices for a given probability, then,

depends on the constant a, and on the square root of

time (T^0.5). This was Bachelier's insight.
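The square-root-of-time rule is easy to check numerically. A small sketch (the price 100 and scale a = 5 are arbitrary illustration values, not taken from Bachelier):

```python
def interval(P, a, T):
    """Bachelier's probability range after time T:
    (P - a*sqrt(T), P + a*sqrt(T))."""
    half = a * T ** 0.5
    return (P - half, P + half)

# One year vs. two years, with illustrative P = 100 and a = 5
lo1, hi1 = interval(100.0, 5.0, 1.0)
lo2, hi2 = interval(100.0, 5.0, 2.0)
print(hi1 - lo1)                            # 10.0
print(round((hi2 - lo2) / (hi1 - lo1), 4))  # 1.4142, the sqrt-of-2 widening
```

Quadrupling T to four years doubles the width again, which is the fractal-time observation revisited in Part 2's "Fractal Time" section.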

Normal Versus Lognormal

Now, to be sure, Bachelier made a financial mistake.

Remember (from Part 1 of this series) that in finance we

always take logarithms of prices. This is for many

reasons. Most changes in most economic variables are

proportional to their current level. For example, it is

plausible to think that the variation in gold prices is

proportional to the level of gold prices: $800 dollar gold

varies in greater increments than does gold at $260.

The change in price, ΔP, as a proportion of the current

price P, can be written as:

ΔP/P .

But this is approximately the same as the change in the

log of the price:

ΔP/P ≈ Δ(log P) .

What this means is that Bachelier should have written his

equation:

(log P – a T^0.5, log P + a T^0.5), for some constant a.

However, keep in mind that Bachelier was making

innovations in both finance and in the mathematical

theory of Brownian motion, so he had a hard enough

time getting across the basic idea, without worrying

about fleshing out all the correct details for a

non-existent reading audience. And, to be sure, almost

no one read Bachelier's PhD thesis, except the celebrated

mathematician Henri Poincaré, one of his instructors.

The range of prices for a given probability, then, depends

on the constant a, and on the square root of time (T^0.5),

as well as the current price level P.

To see why this is true, note that the probability range


for the log of the price

(log P – a T^0.5, log P + a T^0.5)

translates into a probability range for the price itself as

( P exp(-a T^0.5), P exp(a T^0.5) ) .

(Here "exp" means exponential, remember? For

example, exp(-.7) = e^(-.7) = 2.718281^(-.7) = .4966. )

Rather than adding a plus or minus something to the

current price P, we multiply something by the current

price P. So the answer depends on the level of P. For a

half-year (T=0.5), instead of

(P – .7071a, P + .7071a)

we get

( P exp(- .7071 a ), P exp( .7071 a ) ) .

The first interval has a constant width of 1.4142 a, no

matter what the level of P (because P + .7071 a - (P

-.7071 a) = 1.4142 a). But the width of the second

interval varies as P varies. If we double the price P, the

width of the interval doubles also.
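The doubling claim can be checked directly. A quick sketch (the values a = 0.1, T = 0.5, and the prices 100 and 200 are arbitrary illustration values):

```python
import math

def log_interval(P, a, T):
    """Price range implied by a log-price range of +/- a*sqrt(T):
    (P*exp(-a*sqrt(T)), P*exp(+a*sqrt(T)))."""
    h = a * math.sqrt(T)
    return (P * math.exp(-h), P * math.exp(h))

a, T = 0.1, 0.5
lo1, hi1 = log_interval(100.0, a, T)
lo2, hi2 = log_interval(200.0, a, T)
print(round((hi2 - lo2) / (hi1 - lo1), 6))  # 2.0: doubling P doubles the width
```

Note also that the lower endpoint P*exp(-a*sqrt(T)) is always positive, which is the lognormal property discussed next.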

Bachelier allowed the price range to depend on the

constant a and on the square root of time (T^0.5), but

omitted the requirement that the range should also

depend on the current price level P.

The difference in the two approaches is that if price

increments (ΔP) are independent, and have a finite

variance, then the price P has a normal (Gaussian)

distribution. But if increments in the log of the price (Δ

log P) are independent, and have a finite variance, then

the price P has a lognormal distribution.

Here is a picture of a normal or Gaussian distribution:


The left-hand tail never becomes zero. No matter where

we center the distribution (place the mean), there is

always positive probability of negative numbers.

Here is a picture of a lognormal distribution:


The left-hand tail of a lognormal distribution becomes

zero at zero. No matter where we center the distribution

(place the mean), there is zero probability of negative

numbers.

A lognormal distribution assigns zero probability to

negative prices. This makes us happy because most

businesses don't charge negative prices. (However, US

Treasury bills paid negative interest rates on certain

occasions in the 1930s.) But a normal distribution

assigns positive probability to negative prices. We don't

want that.

So, at this point, we have seen Bachelier's key insight

that probability intervals for prices change proportional

to the square root of time (that is, the probability interval

around the current price P changes by a T^0.5), and have

modified it slightly to say that probability intervals for

the log of prices change proportional to the square root

of time (that is, the probability interval around log P

changes by a T^0.5).

How Big Is It?

Now we are going to take a break from price

distributions, and pursue the question of how we

measure things. How we measure length, area, volume,

or time. (This will lead us from Bachelier to

Mandelbrot.)

Usually, when we measure things, we use everyday

dimensions (or at least the ones we are familiar with

from elementary plane geometry). A point has zero

dimension. A line has one dimension. A plane or a

square has two dimensions. A cube has three

dimensions. These basic, common-sense type

dimensions are sometimes referred to as topological

dimensions.

We say a room is so-many "square feet" in size. In this

case, we are using the two-dimensional concept of area.

We say land is so-many "acres" in size. Here, again, we

are using a two-dimensional concept of area, but with

different units (an "acre" being 43,560 "square feet").

We say a tank holds so-many "gallons". Here we are

using a measure of volume (a "gallon" being 231 "cubic

inches" in the U.S., or .1337 "cubic feet").

Suppose you have a room that is 10 feet by 10 feet, or

100 square feet. How much carpet does it take to cover

the room? Well, you say, a 100 square feet of carpet, of

course. And that is true, for ordinary carpet.


Let's take a square and divide it into smaller pieces.

Let's divide each side by 10:

We get 100 pieces. That is, if we divide by a scale factor of 10, we get 100 smaller

squares, all of which look like the big square. If we multiply any one of the smaller

squares by 10, we get the original big square.

Letâ€™s calculate a dimension for this square. We use the same formula as we used for

the Sierpinski carpet:

N = r^D .

Taking logs, we have log N = D log r, or D = log N/ log r.

We have N = 100 pieces, and r = 10, so we get the dimension D as

D = log(100)/log(10) = 2.

(We are using "log" to mean the natural log, but notice for this calculation, which

involves the ratio of two logs, that it doesn't matter what base we use. You can use

logs to the base 10, if you wish, and do the calculation in your head.)

We called the dimension D calculated in this way (namely, by comparing the

number of similar objects N we got at different scales to the scale factor r) a

Hausdorff dimension. In this case, the Hausdorff dimension 2 is the same as the

ordinary or topological dimension 2.
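The Hausdorff calculation above is one line of arithmetic, shown here for both the ordinary square and the Sierpinski carpet from Part 1:

```python
import math

def hausdorff(N, r):
    """D from N = r**D, i.e. D = log N / log r.
    A ratio of logs, so the base doesn't matter."""
    return math.log(N) / math.log(r)

print(round(hausdorff(100, 10), 4))  # 2.0: an ordinary square
print(round(hausdorff(8, 3), 4))     # 1.8928 (i.e. 1.8927...): Sierpinski carpet
print(round(hausdorff(64, 9), 4))    # same carpet, measured one scale deeper
```

The last two lines agree, matching the text's point that dividing by 3 twice (r = 9, N = 64) gives the same dimension as dividing once.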


So, in any case, the dimension is 2, just as you suspected all along. But suppose you

covered the floor with Sierpinski carpet. How much carpet do you need then?

We saw (in Part 1) that the Sierpinski carpet had a Hausdorff dimension D = 1.8927…

A Sierpinski carpet which is 10 feet on each side would only have N = 10^1.8927 =

78.12 square feet of material in it.

Why doesn't a Sierpinski carpet with 10 feet on each side take 100 square feet of

material? Because the Sierpinski carpet has holes in it, of course.

Remember that when we divided the side of a Sierpinski carpet by 3, we got only 8

copies of the original because we threw out the center square. So it had a Hausdorff

dimension of D = log 8/ log 3 = 1.8927. Then we divided each of the 8 copies by 3

again, threw out the center squares once more, leaving 64 copies of the original.

Dividing by 3 twice is the same as dividing by 9, so, recalculating our dimension, we

get D = log 64/ log 9 = 1.8927.

An ordinary carpet has a Hausdorff dimension of 2 and a topological (ordinary)

dimension of 2. A Sierpinski carpet has a Hausdorff dimension of 1.8927 and a

topological dimension of 2. [4]

Benoit Mandelbrot defined a fractal as an object whose Hausdorff dimension is

different from its topological dimension. So a Sierpinski carpet is a fractal. An

ordinary carpet isn't.

Fractals are cheap and sexy. A Sierpinski carpet needs only 78.12 square feet of

material to cover 100 square feet of floor space. Needing less material, a Sierpinski

carpet costs less. Sure it has holes in it. But the holes form a really neat pattern. So a

Sierpinski carpet is sexy. Cheap and sexy. You can't beat that.

History's First Fractal

Let's see if we have this fractal stuff straight. Let's look at the first known fractal,

created in 1870 by the mathematical troublemaker Georg Cantor.

Remember that we create a fractal by forming similar patterns at different scales, as

we did with the Sierpinski carpet. It's a holey endeavor. In order to get a carpet whose

Hausdorff dimension was less than 2, we created a pattern of holes in the carpet. So

we ended up with an object whose Hausdorff dimension D (which compares the

number N of different, but similar, objects at different scales r, N = r^D ) was more than

1 but less than 2. That made the Sierpinski carpet a fractal, because its Hausdorff

dimension was different from its topological dimension.

What George Cantor created was an object whose dimension was more than 0 but less

than 1. That is, a holey object that was more than a point (with 0 dimensions) but less

than a line (with 1 dimension). It's called Cantor dust. When the Cantor wind blows,

the dust gets in your lungs and you can't breathe.

To create Cantor dust, draw a line and cut out the middle third:

0________________________________________________________1


0__________________1/3 2/3_________________1

Now cut out the middle thirds of each of the two remaining pieces:

0____1/9 2/9____ 1/3 2/3____7/9 8/9 ____1

Now cut out the middle thirds of each of the remaining four pieces, and proceed in this

manner for an infinite number of steps, as indicated in the following graphic.


What's left over after all the cutting is Cantor dust.

At each step we changed scale by r = 3, because we

divided each remaining part into 3 pieces. (Each of these

pieces had 1/3 the length of the original part.) Then we

threw away the middle piece. (That's how we created the

holes.) That left 2 pieces. At the next step there were 4

pieces, then 8, and so on. At each step the number of

pieces increased by a factor of N = 2. Thus the Hausdorff

dimension for Cantor dust is:

D = log 2 / log 3 = .6309.

Is Cantor dust a fractal? Yes, as long as the topological

dimension is different from .6309, which it surely is.

But—what is the topological dimension of Cantor dust?

We can answer this by seeing how much of the original

line (with length 1) we cut out in the process of making

holes.

At the first step we cut out the middle third, or a length

of 1/3. The next step we cut out the middle thirds of the

two remaining pieces, or a length of 2(1/3)(1/3). And so

on. The total length cut out is then:

1/3 + 2(1/3^2) + 4(1/3^3) + 8(1/3^4) + . . . = 1.

We cut out all of the length of the line (even though we

left an infinite number of points), so the Cantor dust

that's left over has length zero. Its topological dimension

is zero. Cantor dust is a fractal with a Hausdorff

dimension of .6309 and a topological dimension of 0.
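Both facts about Cantor dust can be checked by brute force. A small sketch (16 construction steps is an arbitrary cutoff of mine, not part of the construction, which runs forever):

```python
import math

def cantor_step(intervals):
    """Remove the open middle third of every interval in the list."""
    out = []
    for lo, hi in intervals:
        third = (hi - lo) / 3.0
        out.append((lo, lo + third))   # left third survives
        out.append((hi - third, hi))   # right third survives
    return out

intervals = [(0.0, 1.0)]
for _ in range(16):
    intervals = cantor_step(intervals)

remaining = sum(hi - lo for lo, hi in intervals)
print(len(intervals))                       # 65536 pieces, i.e. 2**16
print(round(remaining, 4))                  # 0.0015: total length -> 0
print(round(math.log(2) / math.log(3), 4))  # 0.6309, the Hausdorff dimension
```

After 16 steps the surviving length is (2/3)^16 and keeps shrinking toward zero, matching the claim that the dust has length zero (topological dimension 0), while N = 2 pieces at scale r = 3 keeps the Hausdorff dimension at log 2 / log 3.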

Now, the subhead refers to Cantor dust as "history's first

fractal". That's a little anthropocentric. Because nature has

been creating fractals for millions of years. In fact, most

things in nature are not circles, squares, and lines.

Instead they are fractals, and the creation of these

fractals is usually determined by chaos equations.

Chaos and fractal beauty are built into the nature of

reality. Get used to it.

Today, there are roughly of order 10^3

recognized fractal systems in nature, though

a decade ago when Mandelbrot's classic

Fractal Geometry of Nature was written,

many of these systems were not known to

be fractal. [5]

Fractal Time

So far we've seen that measuring things is a complicated


business. Not every length can be measured with a tape

measure, nor the square footage of material in every

carpet measured by squaring the side of the carpet.

Many things in life are fractal, and follow power laws

just like the D of the Hausdorff dimension. For example,

the "loudness" L of noise as heard by most humans is

proportional to the sound intensity I raised to the

fractional power 0.3:

L = a I^0.3 .

Doubling the loudness at a rock concert requires

increasing the power output by a factor of ten, because

a (10 I)^0.3 ≈ 2 a I^0.3 = 2 L .
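This power law is a one-liner to verify (the intensity and constant below are arbitrary illustration values):

```python
# Perceived loudness L = a * I**0.3: multiplying the intensity I by
# ten roughly doubles L, since 10**0.3 is about 1.995.
a, I = 2.0, 1.0
L = a * I ** 0.3
L_louder = a * (10.0 * I) ** 0.3
print(round(L_louder / L, 3))  # 1.995, i.e. about 2
```

The ratio is the same whatever a and I are, because (10 I)^0.3 / I^0.3 = 10^0.3.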

In financial markets, another subjective domain, "time"

is fractal. Time does not always move with the rhythms

of a pendulum. Sometimes time is less than that. In fact,

we've already encountered fractal time with the Bachelier

process, where the probability range for the log of the price moved according to

a T^0.5 .

Bachelier observed that if the time interval was

multiplied by 4, the probability interval only increased

by 2. In other words, at a scale of r = 4, the number N of

similar probability units was N = 2. So the Hausdorff

dimension for time was:

D = log N/ log r = log 2/ log 4 = 0.5 .

In going from Bachelier to Mandelbrot, then, the

innovation is not in the observation that time is fractal:

that was Bachelier's contribution. Instead the question is:

What is the correct fractal dimension for time in

speculative markets? Is the Hausdorff dimension really

D = 0.5, or does it take other values? And if the

Hausdorff dimension of time takes other values, what's

the big deal, anyway?

The way in which Mandelbrot formulated the problem

provides a starting point:

Despite the fundamental importance of

Bachelier's process, which has come to be

called "Brownian motion," it is now

obvious that it does not account for the

abundant data accumulated since 1900 by

empirical economists, simply because the

empirical distributions of price changes are

usually too "peaked" to be relative to


samples from Gaussian populations. [6]

What does Mandelbrot mean by "peaked"? It's now time

for a discussion of probability.

Probability is a One-Pound Jar of Jelly

Probability is a one-pound jar of jelly. You take the jelly

and smear it all over the real line. The places where you

smear more jelly have more probability, while the places

where you smear less jelly have less probability. Some

spots may get no jelly. They have no probability at

all—their probability is zero.

The key is that you only have one pound of jelly. So if

you smear more jelly (probability) at one location, you

have to smear less jelly at another location.

Here is a picture of jelly smeared in the form of a

bell-shaped curve:

The jelly is smeared between the horizontal (real) line all

the way up to the curve, with a uniform thickness. The

result is called a "standard normal distribution".

("Standard" because its mean is 0, and the standard

deviation is 1.) In this picture, the point where the

vertical line is and surrounding points have the jelly

piled highâ€”hence they are more probable.

As we observed previously, for the normal distribution

jelly gets smeared on the real (horizontal) line all the

way to plus or minus infinity. There may not be much

jelly on the distant tails, but there is always some.

Now, let's think about this bell-shaped picture. What

does Mandelbrot mean by the distribution of price

changes being "too peaked" to come from a normal


distribution?

Does Mandelbrot's statement make any sense? If we

smear more jelly at the center of the bell curve, to make

it taller, we can only do so by taking jelly from some

other place. Suppose we take jelly out of the tails and

intermediate parts of the distribution and pile it on the

center. The distribution is now "more peaked". It is more

centered in one place. It has a smaller standard

deviation—or smaller dispersion around the mean.

But—it could well still be normal.

So what's with Mandelbrot, anyway? What does he

mean? We'll discover this in Part 3 of this series.

Click here to see the Answer to Problem 1 from Part 1.

The material therein should be helpful in solving

Problem 2.

Meanwhile, here are two new problems for eager

students:

Problem 3: Suppose you create a Cantor dust using a

different procedure. Draw a line. Then divide the line

into 5 pieces, and throw out the second and fourth

pieces. Repeat this procedure for each of the remaining

pieces, and so on, for an infinite number of times. What

is the fractal dimension of the Cantor dust created this

way? What is its topological dimension? Did you create

a new fractal?

Problem 4: Suppose we write all the numbers between 0

and 1 in ternary. (Ternary uses powers of 3, and the

numbers 0, 1, 2. The ternary number .1202, for example,

stands for 1 x 1/3 + 2 x 1/9 + 0 x 1/27 + 2 x 1/81.) Show

the Cantor dust we created here in Part 2 (with a

Hausdorff dimension of .6309) can be created by taking

all numbers between 0 and 1, and eliminating those

numbers whose ternary expansion contains a 1. (In other

words, what is left over are all those numbers whose

ternary expansions only have 0s and 2s.)

And enjoy the fractal:


Notes

[1] Robert Brown, "Additional Remarks on Active

Molecules," 1829.

[2] Viscosity is a fluid's stickiness: honey is more

viscous than water, for example. "Honey don't jiggle so

much."

[3] I am using the English title of the well-known Dover

reprint: Investigations on the Theory of the Brownian

Movement, Edited by R. Furth, translated by A.D.

Cowper, London, 1926. The original article was in

German and titled somewhat differently.

[4] I am admittedly laying a subtle trap here, because of

the undefined nature of "topological dimension". This is

partially clarified in the discussion of Cantor dust, and

further discussed in Part 3.

[5] H. Eugene Stanley, Fractals and Multifractals, 1991

[6] Benoit Mandelbrot, "The Variation of Certain

Speculative Prices," Journal of Business, 36(4), 394-419,

1963.

J. Orlin Grabbe is the author of International Financial

Markets, and is an internationally recognized derivatives

expert. He has recently branched out into cryptology,

banking security, and digital cash. His home page is

located at http://www.aci.net/kalliste/homepage.html .


-30-

from The Laissez Faire City Times, Vol 3, No 24, June

14, 1999
