12. MATRIX REPRESENTATION AND HYPERBOLIC NUMBERS

All associative algebras with neutral elements for addition and product

can be represented by matrices. The representations can have different dimensions,

but the most interesting is the minimal representation, which is an isomorphism.

Rotations and the representation of complex numbers

We have seen that a rotation of a vector v through an angle α is written in geometric

algebra as:

$v' = v\,(\cos\alpha + e_{12}\sin\alpha)$

Separating the components:

$v_1' = v_1\cos\alpha - v_2\sin\alpha$

$v_2' = v_1\sin\alpha + v_2\cos\alpha$

and writing them in matrix form, we have:

$(v_1' \;\; v_2') = (v_1 \;\; v_2)\begin{pmatrix} \cos\alpha & \sin\alpha \\ -\sin\alpha & \cos\alpha \end{pmatrix}$

Identifying this equation with the first one:

$\cos\alpha + e_{12}\sin\alpha = \begin{pmatrix} \cos\alpha & \sin\alpha \\ -\sin\alpha & \cos\alpha \end{pmatrix}$

we obtain the matrix representation for the complex numbers:

$1 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \qquad e_{12} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \qquad z = a + b\,e_{12} = \begin{pmatrix} a & b \\ -b & a \end{pmatrix}$
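As a quick numerical sketch (my addition, not part of the treatise), one can check in Python that this matrix assignment turns the product of complex numbers into the ordinary matrix product:

```python
# Sketch: complex numbers as 2x2 real matrices, z = a + b*e12 -> [[a, b], [-b, a]].

def mat(a, b):
    """Matrix representing the complex number a + b*e12."""
    return [[a, b], [-b, a]]

def matmul(m, n):
    """Ordinary 2x2 matrix product."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# (1 + 2 e12)(3 + 4 e12) = (3 - 8) + (4 + 6) e12 = -5 + 10 e12
assert matmul(mat(1, 2), mat(3, 4)) == mat(-5, 10)
```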

Now we also wish to obtain the matrix representation of vectors. Note that the

matrix form of the rotation of a vector gives us the first row of the vector representation,

which may be completed with a suitable second row:

140 RAMON GONZALEZ CALVET

$\begin{pmatrix} v_1' & v_2' \\ v_2' & -v_1' \end{pmatrix} = \begin{pmatrix} v_1 & v_2 \\ v_2 & -v_1 \end{pmatrix}\begin{pmatrix} \cos\alpha & \sin\alpha \\ -\sin\alpha & \cos\alpha \end{pmatrix}$

from which we arrive at the matrix representation of vectors:

$e_1 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \qquad e_2 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \qquad v = v_1 e_1 + v_2 e_2 = \begin{pmatrix} v_1 & v_2 \\ v_2 & -v_1 \end{pmatrix}$

These four matrices are a basis of the real matrix space $M_{2\times 2}(\mathbb{R})$ and therefore the

geometric algebra of the vectorial plane $V_2$ is isomorphic to this matrix space:

$Cl(V_2) \equiv Cl_{2,0} = M_{2\times 2}(\mathbb{R})$
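The defining relations of this algebra can be verified directly on the matrices; the following check (an illustration of mine) confirms $e_1^2 = e_2^2 = 1$, the anticommutation of $e_1$ and $e_2$, and $e_{12}^2 = -1$:

```python
# Verify the Clifford relations of Cl2,0 on the 2x2 matrix representation.

def matmul(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

ONE = [[1, 0], [0, 1]]
E1  = [[1, 0], [0, -1]]     # e1
E2  = [[0, 1], [1, 0]]      # e2
E12 = matmul(E1, E2)        # the bivector e12 = e1 e2

assert matmul(E1, E1) == ONE                      # e1^2 = 1
assert matmul(E2, E2) == ONE                      # e2^2 = 1
assert E12 == [[0, 1], [-1, 0]]                   # same matrix as e12 above
assert matmul(E1, E2) == [[-x for x in row] for row in matmul(E2, E1)]  # e1 e2 = -e2 e1
assert matmul(E12, E12) == [[-1, 0], [0, -1]]     # e12^2 = -1
```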

The moduli of a complex number and of a vector are related to the determinant of the

corresponding matrix:

$|z|^2 = a^2 + b^2 = \det\begin{pmatrix} a & b \\ -b & a \end{pmatrix} \qquad |v|^2 = v_1^2 + v_2^2 = -\det\begin{pmatrix} v_1 & v_2 \\ v_2 & -v_1 \end{pmatrix}$

Let us now consider a heterogeneous element, the sum of a complex number and a

vector. Its geometric meaning is unknown to us, but we can write it as a matrix:

$z + v = a + b\,e_{12} + v_1 e_1 + v_2 e_2 = \begin{pmatrix} a + v_1 & b + v_2 \\ -b + v_2 & a - v_1 \end{pmatrix}$

The determinant of this matrix,

$\det(z + v) = a^2 + b^2 - v_1^2 - v_2^2$

reminds us of the Pythagorean theorem in a pseudo-Euclidean plane.

The subalgebra of the hyperbolic numbers

In the matrix representation of the geometric algebra, we can see that the

diagonal matrices are a subalgebra:

$a + b\,e_1 = \begin{pmatrix} a+b & 0 \\ 0 & a-b \end{pmatrix}$

The product of elements of this kind is commutative (and also associative). Because of

this, we may speak of "numbers":

$(a + b\,e_1)(c + d\,e_1) = a\,c + b\,d + (a\,d + b\,c)\,e_1$

TREATISE OF PLANE GEOMETRY THROUGH GEOMETRIC ALGEBRA 141

Using Hamilton's notation of pairs of real numbers, we write:

$(a, b)\,(c, d) = (a\,c + b\,d,\; a\,d + b\,c)$

From the determinant we may define their pseudo-Euclidean modulus |z|:

$\det(a + b\,e_1) = a^2 - b^2 = |a + b\,e_1|^2$

The numbers having constant modulus lie on a hyperbola. Because of this, they are

called hyperbolic numbers. Since $e_1$ cannot be a privileged direction in the

plane, any other set of elements having the form:

$a + k\,u$

with a and k real, and u being a fixed unitary vector, is also a set of hyperbolic numbers.

By defining the conjugate of a hyperbolic number as that number with the

opposite vector component:

$(a + b\,e_1)^* = a - b\,e_1$

we see that the square of the modulus of a hyperbolic number is equal to the product of

this number by its conjugate:

$|z|^2 = z\,z^*$

Hence the inverse of a hyperbolic number follows:

$z^{-1} = \frac{z^*}{|z|^2}$
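These operations are easy to model numerically. The following sketch (mine; it simply encodes the definitions above) represents a hyperbolic number $a + b\,e_1$ as the pair (a, b):

```python
def hmul(z, t):
    """Product (a + b e1)(c + d e1) = ac + bd + (ad + bc) e1."""
    a, b = z; c, d = t
    return (a * c + b * d, a * d + b * c)

def hconj(z):
    """Conjugate: (a + b e1)* = a - b e1."""
    a, b = z
    return (a, -b)

def hmod2(z):
    """Squared modulus |z|^2 = a^2 - b^2 (may be negative)."""
    a, b = z
    return a * a - b * b

def hinv(z):
    """Inverse z^{-1} = z* / |z|^2 (fails for null modulus)."""
    a, b = hconj(z)
    m2 = hmod2(z)
    return (a / m2, b / m2)

z = (3.0, 2.0)
assert hmul(z, hconj(z)) == (hmod2(z), 0.0)            # z z* = |z|^2
w = hmul(z, hinv(z))
assert abs(w[0] - 1.0) < 1e-12 and abs(w[1]) < 1e-12   # z z^{-1} = 1
```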

Since the modulus is the square root of the determinant, and the determinant of a matrix

product is equal to the product of determinants, it follows that the modulus of a product

of hyperbolic numbers is equal to the product of moduli:

$|z\,t| = |z|\,|t|$

The elements $(1 + e_1)/2$ and $(1 - e_1)/2$, which are idempotents, and their multiples have

a null modulus and no inverse. They form two ideals; that is, the product of any

hyperbolic number by a multiple of an idempotent also yields a

multiple of that idempotent.
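The idempotents and their ideals can be observed in the same pair model (a sketch of mine, not a worked example from the text):

```python
def hmul(z, t):
    a, b = z; c, d = t
    return (a * c + b * d, a * d + b * c)

up = (0.5, 0.5)     # (1 + e1)/2
um = (0.5, -0.5)    # (1 - e1)/2

assert hmul(up, up) == up                      # idempotent
assert hmul(um, um) == um
assert up[0] ** 2 - up[1] ** 2 == 0.0          # null modulus, hence no inverse
assert hmul(up, um) == (0.0, 0.0)              # the two ideals annihilate each other
assert hmul((3.0, 2.0), up) == (2.5, 2.5)      # = 5*(1 + e1)/2: still in the ideal
```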

Hyperbolic trigonometry

Let us consider the locus of the points (hyperbolic numbers) located at a constant

distance r from the origin (hyperbolic numbers with constant determinant equal to $r^2$),

which lie on the hyperbola $x^2 - y^2 = r^2$ (figure 12.1). I shall call r the hyperbolic radius


following the analogy with the circle. The endpoint of the radius is a point on the

hyperbola with coordinates (x, y). The arc between the positive X half-axis and this point

(x, y) has an oriented length s. On the other hand, the radius, the hyperbola and the X-axis

delimit a sector with an oriented area A. The hyperbolic angle (or argument) ψ is defined

as the quotient of the arc length divided by the radius¹:

$\psi = \frac{s}{r}$

(Figure 12.1)

It follows from this definition that the oriented area A is²:

$A = \frac{\psi\,r^2}{2} \qquad\Leftrightarrow\qquad \psi = \frac{2A}{r^2}$

The hyperbolic sine, cosine and tangent are defined as the following quotients:

$\sinh\psi = \frac{y}{r} \qquad \cosh\psi = \frac{x}{r} \qquad \operatorname{tgh}\psi = \frac{y}{x}$

These definitions yield the three fundamental identities of hyperbolic trigonometry:

$\operatorname{tgh}\psi \equiv \frac{\sinh\psi}{\cosh\psi} \qquad \cosh^2\psi - \sinh^2\psi \equiv 1 \qquad 1 - \operatorname{tgh}^2\psi \equiv \frac{1}{\cosh^2\psi}$

Now we seek an explicit expression of the hyperbolic functions in terms of elementary

functions such as polynomials, exponentials, logarithms, etc. The differential of the arc

length (being real) is related to the differentials of the coordinates by the

pseudo-Pythagorean theorem:

$ds^2 = dy^2 - dx^2 \qquad\Rightarrow\qquad 1 = \left(\frac{d(\sinh\psi)}{d\psi}\right)^2 - \left(\frac{d(\cosh\psi)}{d\psi}\right)^2$

That is, we have the following system with one differential equation:

¹ Note that $x^2 - y^2 = r^2 > 0$ while $s^2 < 0$. However, I overcome this trouble by taking in these

definitions r and s, and also the area A, as real but oriented numbers, that is, with sign.

² The formula of the sector area is obtained by an argument analogous to that of Archimedes: the

total area is the sum of the areas of the infinitesimal triangles with altitude equal to r and base

equal to ds. Since r is constant, the area of the hyperbola sector is $A = r\,s/2$. Obviously the

radius is orthogonal to each infinitesimal piece of arc of the hyperbola. This question and the

concept and calculus of areas are studied in more detail in the following chapter.


$\begin{cases} 1 = \cosh^2\psi - \sinh^2\psi \\ 1 = (\sinh'\psi)^2 - (\cosh'\psi)^2 \end{cases}$

whose solution, according to the initial conditions given by the geometric definition

(sinh 0 = 0 and cosh 0 = 1), is:

$\sinh\psi = \frac{\exp\psi - \exp(-\psi)}{2} \qquad \cosh\psi = \frac{\exp\psi + \exp(-\psi)}{2}$

Hyperbolic exponential and logarithm

Within the hyperbolic numbers one can define and study functions and develop a

hyperbolic analysis, with the aid of matrix functions. In the matrix algebra, the

exponential function of a matrix A is defined as:

$\exp(A) = \sum_{n=0}^{\infty} \frac{A^n}{n!}$

The matrices which represent the hyperbolic numbers ($e_1$ direction) are diagonal, so

their exponential is the diagonal matrix of the exponentials of the diagonal elements:

$\exp(x + y\,e_1) = \exp\begin{pmatrix} x+y & 0 \\ 0 & x-y \end{pmatrix} = \begin{pmatrix} \exp(x+y) & 0 \\ 0 & \exp(x-y) \end{pmatrix}$

By extracting the common factor and introducing the hyperbolic functions we arrive at:

$\exp(x + y\,e_1) = \exp(x)\begin{pmatrix} \cosh y + \sinh y & 0 \\ 0 & \cosh y - \sinh y \end{pmatrix}$

which is the analogue of Euler's identity:

$\exp(x + y\,e_1) = \exp(x)\,(\cosh y + e_1\sinh y)$
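A numerical check of this identity (my addition): the diagonal entries of $\exp(x + y\,e_1)$, computed as plain exponentials, must match $e^x(\cosh y \pm \sinh y)$:

```python
import math

# Diagonal of exp(x + y e1) is (e^{x+y}, e^{x-y}); compare with the
# hyperbolic Euler identity e^x (cosh y + e1 sinh y).
x, y = 0.7, -1.3
lhs = (math.exp(x + y), math.exp(x - y))
rhs = (math.exp(x) * (math.cosh(y) + math.sinh(y)),
       math.exp(x) * (math.cosh(y) - math.sinh(y)))
assert all(abs(l - r) < 1e-12 for l, r in zip(lhs, rhs))
```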

From this exponential identity, we can find the logarithm function:

$\log(x + y\,e_1) = \frac{1}{2}\log(x^2 - y^2) + e_1\operatorname{arg\,tgh}\frac{y}{x} = \frac{1}{2}\log(x^2 - y^2) + \frac{e_1}{2}\log\frac{x+y}{x-y}$

for $-x < y < x$. This condition defines the set of hyperbolic numbers with positive

determinant, which is called a sector³. The two ideals generated by the idempotents

$(1 + e_1)/2$ and $(1 - e_1)/2$ separate two sectors of hyperbolic numbers, one with

³ According to relativity theory, the region of space-time accessible to our knowledge

must fulfil the inequality $c^2 t^2 - x^2 \ge 0$, x being the space coordinate, t the time and c the

speed of light.


positive determinant and real modulus and another with negative determinant and

imaginary modulus.

The characteristic property of the exponential function is:

$\exp(z + t) = \exp(z)\exp(t)$

z, t being hyperbolic numbers. Taking exponentials with unit determinant:

$\exp((\psi + \chi)\,e_1) \equiv \exp(\psi\,e_1)\exp(\chi\,e_1)$

and splitting the real and vectorial parts, we obtain the addition identities:

$\cosh(\psi + \chi) \equiv \cosh\psi\cosh\chi + \sinh\psi\sinh\chi$

$\sinh(\psi + \chi) \equiv \sinh\psi\cosh\chi + \cosh\psi\sinh\chi$

Also through the equality:

$\exp(n\psi\,e_1) \equiv \left(\exp(\psi\,e_1)\right)^n$

the analogue of Moivre's identity is found:

$\cosh(n\psi) + e_1\sinh(n\psi) \equiv (\cosh\psi + e_1\sinh\psi)^n$

For example, for n = 3 it becomes:

$\cosh 3\psi = \cosh^3\psi + 3\cosh\psi\sinh^2\psi$

$\sinh 3\psi = 3\cosh^2\psi\sinh\psi + \sinh^3\psi$
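These n = 3 formulas can be confirmed numerically (a sketch of mine):

```python
import math

# Hyperbolic Moivre identity for n = 3, expanded with e1^2 = 1.
psi = 0.83
c, s = math.cosh(psi), math.sinh(psi)
lhs_c = math.cosh(3 * psi)
lhs_s = math.sinh(3 * psi)
rhs_c = c**3 + 3 * c * s**2          # scalar part of (c + e1 s)^3
rhs_s = 3 * c**2 * s + s**3          # e1 part of (c + e1 s)^3
assert abs(lhs_c - rhs_c) < 1e-12 and abs(lhs_s - rhs_s) < 1e-12
```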

Polar form, powers and roots of hyperbolic numbers

The exponential allows us to write any hyperbolic number in polar form. For

example:

$z = 13 + 5\,e_1 \qquad |z| = \sqrt{13^2 - 5^2} = 12 \qquad \arg z = \operatorname{arg\,tgh}\frac{5}{13} = \log\frac{3}{2}$

$z = 12\exp\!\left(e_1\log\frac{3}{2}\right) = 12\left(\cosh\!\left(\log\frac{3}{2}\right) + e_1\sinh\!\left(\log\frac{3}{2}\right)\right) = 12_{\log\frac{3}{2}}$
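Redoing the example numerically (my sketch; `math.atanh` plays the role of arg tgh):

```python
import math

# Polar form of z = 13 + 5 e1: modulus 12, argument argtanh(5/13) = log(3/2).
x, y = 13.0, 5.0
modulus = math.sqrt(x * x - y * y)
argument = math.atanh(y / x)
assert abs(modulus - 12.0) < 1e-12
assert abs(argument - math.log(1.5)) < 1e-12
# Reconstruct z from its polar form 12 * exp(e1 * log(3/2)):
assert abs(modulus * math.cosh(argument) - x) < 1e-12
assert abs(modulus * math.sinh(argument) - y) < 1e-12
```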

The product of hyperbolic numbers, like that of complex numbers, is found by the

multiplication of the moduli and the addition of the arguments, while the division is

obtained as the quotient of moduli and difference of arguments. The division is not

possible when the denominator is a multiple of an idempotent (modulus zero).


The power of a hyperbolic number has modulus equal to the power of the

modulus, and argument the addition of arguments. Here the periodicity is absent, and

distinct arguments always correspond to distinct numbers. The roots must be viewed

with more detail. For example, there are four square roots of 13 + 12 e1:

$13 + 12\,e_1 = (3 + 2\,e_1)^2 = (2 + 3\,e_1)^2 = (-3 - 2\,e_1)^2 = (-2 - 3\,e_1)^2$

that is, the second-degree equation $z^2 - 13 - 12\,e_1 = 0$ has four solutions. Recently G.

Casanova⁴ has proved that $n^2$ is the maximum number of hyperbolic solutions

(including real values) of an algebraic equation of nth degree. In the case of the equation

of second degree we can use the classical formula whenever we know how to find the

square roots. For example, let us consider the equation:

$z^2 - 5z + 5 + e_1 = 0$

whose solutions, according to the second degree formula, are:

$z = \frac{5 \pm \sqrt{25 - 4\cdot 1\cdot(5 + e_1)}}{2\cdot 1} = \frac{5 \pm \sqrt{5 - 4\,e_1}}{2}$

Now we must calculate all the square roots of the number $5 - 4\,e_1$. In order to find

them we obtain its modulus and argument:

$|5 - 4\,e_1| = \sqrt{5^2 - (-4)^2} = 3 \qquad \arg(5 - 4\,e_1) = \operatorname{arg\,tgh}\frac{-4}{5} = -\log 3$

The square root has half argument and the square root of the modulus:

$\sqrt{5 - 4\,e_1} = \sqrt{3}_{\,-\frac{\log 3}{2}} = \sqrt{3}\left[\cosh\!\left(-\log\sqrt{3}\right) + e_1\sinh\!\left(-\log\sqrt{3}\right)\right] = 2 - e_1$

There are four square roots of this number:

$2 - e_1\,,\quad -2 + e_1\,,\quad 1 - 2\,e_1\,,\quad -1 + 2\,e_1$

and so four solutions of the initial equation:

$z_1 = \frac{5 + 2 - e_1}{2} = \frac{7 - e_1}{2} \qquad z_2 = \frac{5 - 2 + e_1}{2} = \frac{3 + e_1}{2}$

$z_3 = \frac{5 + 1 - 2\,e_1}{2} = 3 - e_1 \qquad z_4 = \frac{5 - 1 + 2\,e_1}{2} = 2 + e_1$
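The four solutions can be verified with exact rational arithmetic (my check, using the pair model of hyperbolic numbers):

```python
from fractions import Fraction as F

def hmul(z, t):
    a, b = z; c, d = t
    return (a * c + b * d, a * d + b * c)

def poly(z):
    """Evaluate z^2 - 5z + (5 + e1) componentwise."""
    z2 = hmul(z, z)
    return (z2[0] - 5 * z[0] + 5, z2[1] - 5 * z[1] + 1)

roots = [(F(7, 2), F(-1, 2)), (F(3, 2), F(1, 2)), (F(3), F(-1)), (F(2), F(1))]
for z in roots:
    assert poly(z) == (0, 0)   # every listed root satisfies the equation
```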

Let us study the second degree equation with real coefficients using the matrix

representation:

⁴ Gaston Casanova, Advances in Applied Clifford Algebras 9, 2 (1999) 215–219.


$a\,z^2 + b\,z + c = 0$ with a, b and c real and $z = x + y\,e_1$

Dividing the second degree equation by a we obtain the equivalent equation:

$z^2 + \frac{b}{a}\,z + \frac{c}{a} = 0$

which is fulfilled by the number z, but also by its matrix representation. So, according to

the Hamilton-Cayley theorem, it is the characteristic polynomial of the matrix of z,

whose eigenvalues are the elements on the diagonal $x + y$, $x - y$ (whenever they are

distinct, that is, z is not real). Since $b/a$ is the opposite of the sum of the eigenvalues, and $c/a$ is

their product, we have:

$z^2 - 2x\,z + x^2 - y^2 = 0$

which gives the equalities:

$x = -\frac{b}{2a} \qquad \frac{c}{a} = x^2 - y^2 \qquad\Rightarrow\qquad y^2 = \frac{b^2 - 4\,a\,c}{4\,a^2}$

and the solutions:

$z = \frac{-b \pm e_1\sqrt{b^2 - 4\,a\,c}}{2a}$

The two real solutions must be added to these values, obtaining the four expected

solutions. In any case, the equation has solutions only if the discriminant is positive. Let

us see an example:

$z^2 + 3z + 2 = 0$

$z_1 = \frac{-3+1}{2} = -1 \qquad z_2 = \frac{-3-1}{2} = -2 \qquad z_3 = \frac{-3 + e_1}{2} \qquad z_4 = \frac{-3 - e_1}{2}$

On the other hand, we may calculate for instance the cubic root of $14 - 13\,e_1$. The

modulus and argument of this number are:

$|14 - 13\,e_1| = \sqrt{14^2 - 13^2} = \sqrt{27} \qquad \arg(14 - 13\,e_1) = \operatorname{arg\,tgh}\!\left(-\frac{13}{14}\right) = -\log\sqrt{27}$

from which it follows that the unique cubic root is:

$\sqrt[3]{14 - 13\,e_1} = \sqrt{3}_{\,-\frac{1}{3}\log\sqrt{27}} = \sqrt{3}_{\,-\log\sqrt{3}} = 2 - e_1$
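A one-line check (mine) that $2 - e_1$ really is a cube root of $14 - 13\,e_1$ in the pair model:

```python
def hmul(z, t):
    a, b = z; c, d = t
    return (a * c + b * d, a * d + b * c)

r = (2, -1)                                   # 2 - e1
assert hmul(hmul(r, r), r) == (14, -13)       # (2 - e1)^3 = 14 - 13 e1
```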

In order to study with more generality the number of roots, let us consider again

the matrix representation of a hyperbolic number:


$a + b\,e_1 = \begin{pmatrix} a+b & 0 \\ 0 & a-b \end{pmatrix}$

Now it is obvious that there is a unique root with odd index, which always exists

for every hyperbolic number:

$\sqrt[n]{a + b\,e_1} = \begin{pmatrix} \sqrt[n]{a+b} & 0 \\ 0 & \sqrt[n]{a-b} \end{pmatrix} \qquad n \text{ odd}$

On the other hand, if $a + b > 0$ and $a - b > 0$ (first half-sector) there are four roots with

even index, one in each half-sector (I follow the anticlockwise order as usual). For n even:

$\begin{pmatrix} \sqrt[n]{a+b} & 0 \\ 0 & \sqrt[n]{a-b} \end{pmatrix} \in \text{1st half-sector} \qquad \begin{pmatrix} \sqrt[n]{a+b} & 0 \\ 0 & -\sqrt[n]{a-b} \end{pmatrix} \in \text{2nd half-sector}$

$\begin{pmatrix} -\sqrt[n]{a+b} & 0 \\ 0 & -\sqrt[n]{a-b} \end{pmatrix} \in \text{3rd half-sector} \qquad \begin{pmatrix} -\sqrt[n]{a+b} & 0 \\ 0 & \sqrt[n]{a-b} \end{pmatrix} \in \text{4th half-sector}$

If the number belongs to a half-sector other than the first, one of the elements on

the diagonal is negative and there is no even root at all. This shows a panorama of

hyperbolic algebra far from that of the complex numbers.
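The sign pattern above is easy to reproduce (my illustration): for $13 + 12\,e_1$ the diagonal is $(a + b,\, a - b) = (25, 1)$, and the four sign choices of $(\pm 5, \pm 1)$ give back the four square roots found earlier:

```python
def from_diag(p, q):
    """Convert the diagonal (p, q) back to a + b*e1 components."""
    return ((p + q) / 2, (p - q) / 2)

# The four even roots of 13 + 12 e1 = diag(25, 1):
roots = [from_diag(sp * 5.0, sq * 1.0) for sp in (1, -1) for sq in (1, -1)]
assert from_diag(5.0, 1.0) == (3.0, 2.0)        # 3 + 2 e1
assert from_diag(5.0, -1.0) == (2.0, 3.0)       # 2 + 3 e1
assert from_diag(-5.0, -1.0) == (-3.0, -2.0)    # -3 - 2 e1
assert from_diag(-5.0, 1.0) == (-2.0, -3.0)     # -2 - 3 e1
```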

Hyperbolic analytic functions

Which conditions must a hyperbolic function f(z) of a hyperbolic variable z

fulfil in order to be analytic? We wish the derivative to be well defined:

$f'(z) = \lim_{\Delta z \to 0} \frac{f(z + \Delta z) - f(z)}{\Delta z}$

that is, this limit must be independent of the direction of ∆z. If f(z) = a + b e1 and the

variable z = x + y e1 , then the derivative calculated in the direction ∆z = ∆x is:

$f'(z) = \frac{\partial a}{\partial x} + e_1\frac{\partial b}{\partial x}$

while the derivative calculated in the direction ∆z = e1 ∆y becomes:

$f'(z) = e_1\frac{\partial a}{\partial y} + \frac{\partial b}{\partial y}$


Both expressions must be equal, which results in the conditions of hyperbolic

analyticity:

$\frac{\partial a}{\partial x} = \frac{\partial b}{\partial y} \qquad\text{and}\qquad \frac{\partial a}{\partial y} = \frac{\partial b}{\partial x}$

Note that the exponential and logarithm fulfil these conditions and therefore they are

hyperbolic analytic functions. More precisely, the exponential is analytic in the whole

plane while the logarithm is analytic in the sector of hyperbolic numbers with positive

determinant.
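A finite-difference check (my sketch) of these conditions for the exponential, whose parts are $a = e^x\cosh y$ and $b = e^x\sinh y$:

```python
import math

def a(x, y): return math.exp(x) * math.cosh(y)   # real part of exp(x + y e1)
def b(x, y): return math.exp(x) * math.sinh(y)   # e1 part of exp(x + y e1)

x, y, h = 0.4, 0.9, 1e-6
# Central-difference estimates of the four partial derivatives:
da_dx = (a(x + h, y) - a(x - h, y)) / (2 * h)
db_dy = (b(x, y + h) - b(x, y - h)) / (2 * h)
da_dy = (a(x, y + h) - a(x, y - h)) / (2 * h)
db_dx = (b(x + h, y) - b(x - h, y)) / (2 * h)
# The hyperbolic analyticity conditions hold:
assert abs(da_dx - db_dy) < 1e-6 and abs(da_dy - db_dx) < 1e-6
```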

By derivation of both identities one finds that the analytic functions satisfy the

hyperbolic partial differential equation (also called wave equation):

$\frac{\partial^2 a}{\partial x^2} - \frac{\partial^2 a}{\partial y^2} = \frac{\partial^2 b}{\partial x^2} - \frac{\partial^2 b}{\partial y^2} = 0$

Now we must state the main integral theorem for hyperbolic analytic functions:

if a hyperbolic function is analytic in a certain domain of the hyperbolic plane, then its

integral along a closed path C within this domain is zero. If the hyperbolic function

is $f(z) = a + b\,e_1$ then the integral is:

$\oint_C f(z)\,dz = \oint_C (a + b\,e_1)(dx + e_1\,dy) = \oint_C (a\,dx + b\,dy) + e_1\oint_C (a\,dy + b\,dx)$

Since C is a closed path, we may apply the Green theorem to write:

$= \iint_D \left(\frac{\partial b}{\partial x} - \frac{\partial a}{\partial y}\right) dx \wedge dy + e_1\iint_D \left(\frac{\partial a}{\partial x} - \frac{\partial b}{\partial y}\right) dx \wedge dy = 0$

where D is the region bounded by the closed path C. Since f(z) fulfils the analyticity

conditions everywhere within D, the integral vanishes.

From here other theorems follow as in complex analysis, e.g.: if f(z) is a

hyperbolic analytic function in a simply connected domain D and $z_1$ and $z_2$ are two

points of D, then the definite integral:

$\int_{z_1}^{z_2} f(z)\,dz$

between these points has a unique value independently of the integration path.

Let us see an example. Consider the function $f(z) = 1/(z - 1)$. The function is

only defined if the inverse of $z - 1$ exists, which implies $|z - 1| \ne 0$. Of course, this

function is not analytic at z = 1, but neither at the points where:

$|z - 1|^2 = 0 \;\Leftrightarrow\; (x - 1)^2 - y^2 = 0 \;\Leftrightarrow\; (x + y - 1)(x - y - 1) = 0$

The lines $x + y = 1$ and $x - y = 1$ break the analyticity domain into two sectors. Let us

calculate the integral:


$\int_{(5,-3)}^{(5,3)} \frac{dz}{z - 1}$

through two different trajectories within a sector. The first one is a straight path given

by the parametric equation z = 5 + t e1 (figure 12.2):

$\int_{(5,-3)}^{(5,3)} \frac{dz}{z - 1} = \int_{-3}^{3} \frac{e_1\,dt}{4 + t\,e_1} = \int_{-3}^{3} \frac{(4 - t\,e_1)\,e_1\,dt}{16 - t^2}$

Owing to symmetry, the integral of the odd part is zero:

$= \int_{-3}^{3} \frac{4\,e_1\,dt}{16 - t^2} = \left[\frac{e_1}{2}\log\frac{4+t}{4-t}\right]_{-3}^{3} = e_1\log 7$
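The real integral appearing in this last step can be confirmed numerically (my addition): $\int_{-3}^{3} 4/(16 - t^2)\,dt$ should equal $\log 7$.

```python
import math

def simpson(f, lo, hi, n=2000):
    """Composite Simpson rule on [lo, hi] with n (even) subintervals."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(lo + k * h)
    return s * h / 3

val = simpson(lambda t: 4 / (16 - t * t), -3.0, 3.0)
assert abs(val - math.log(7)) < 1e-9
```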

The second path (figure 12.2) is the hyperbola going from the point (5, −3) to (5, 3):

$\int_{(5,-3)}^{(5,3)} \frac{dz}{z - 1} = \int_{(5,-3)}^{(5,3)} \frac{(z^* - 1)\,dz}{(z - 1)(z^* - 1)} = \int_{(5,-3)}^{(5,3)} \frac{(z^* - 1)\,dz}{\det(z) - 2\operatorname{Re}(z) + 1}$

Introducing the parametric equation of this path, $z = 4(\cosh t + e_1\sinh t)$, we have:

$= \int_{-\log 2}^{\log 2} \frac{(4\cosh t - 4\,e_1\sinh t - 1)\,4\,(\sinh t + e_1\cosh t)}{16 - 8\cosh t + 1}\,dt = 4\int_{-\log 2}^{\log 2} \frac{e_1(4 - \cosh t) - \sinh t}{17 - 8\cosh t}\,dt$

Due to symmetry, the integral of the hyperbolic sine (an odd function) divided by the

denominator (an even function) is zero. Then we split the integral into two integrals and

find its value:

$= e_1\int_{-\log 2}^{\log 2} \frac{dt}{2} + e_1\int_{-\log 2}^{\log 2} \frac{15\,dt}{2\,(17 - 8\cosh t)} = e_1\log 2 + \frac{e_1}{2}\left[\log\frac{-8\exp(t) + 2}{-8\exp(t) + 32}\right]_{-\log 2}^{\log 2} = e_1\log 7$

Now we see that the integral along both paths gives the same result, as stated by

the theorem. In fact, analytic functions can be integrated directly by using the

indefinite integral:

$\int_{(5,-3)}^{(5,3)} \frac{dz}{z - 1} = \Big[\log(z - 1)\Big]_{(5,-3)}^{(5,3)} = \left[\frac{1}{2}\log\!\left((x - 1)^2 - y^2\right) + e_1\operatorname{arg\,tgh}\frac{y}{x - 1}\right]_{(5,-3)}^{(5,3)}$

$= \left[\frac{e_1}{2}\log\frac{x - 1 + y}{x - 1 - y}\right]_{(5,-3)}^{(5,3)} = e_1\log 7$


Analyticity and square of convergence of the power series

A matrix function f(A) can be developed as a Taylor series of powers of the

matrix A:

$f(A) = \sum_{n=0}^{\infty} a_n A^n \qquad A = x + y\,e_1 = \begin{pmatrix} x+y & 0 \\ 0 & x-y \end{pmatrix}$

The series is convergent when all the eigenvalues of the matrix A are located within the

radius r of convergence of the complex series:

$\sum_{n=0}^{\infty} a_n z^n < \infty \qquad \text{for } |z| < r$

which leads us to the following conditions:

$|x + y| < r \qquad\text{and}\qquad |x - y| < r$

Therefore, the region of convergence of a Taylor series of hyperbolic variable is a

square centred at the origin of coordinates with vertices (r, 0)-(0, r)-(−r, 0)-(0, −r). Note

that from both conditions we obtain:

$\det(x + y\,e_1) = x^2 - y^2 < r^2$

which is a condition similar to that for complex numbers, that is, the modulus must be

lower than the radius of convergence. However this condition is not enough to ensure

the convergence of the series. Let us see, for example, the function $f(z) = 1/(1 - z)$:

$f(z) = \frac{1}{1 - z} = \sum_{n=0}^{\infty} z^n$

The radius of convergence of the complex series is r = 1, so that the square of

convergence of the hyperbolic series is (1, 0)-(0, 1)-(−1, 0)-(0, −1). On the other hand, we have

previously seen that $1/(z - 1)$ (and hence f(z)) is not analytic on the lines $x + y - 1 = 0$ and

$x - y - 1 = 0$, which belong to the boundary of the square of convergence. Here we find a

phenomenon which also happens within the complex numbers: at some point on the

boundary of the region of convergence of the Taylor series, the function is not analytic.
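The square of convergence can be observed directly on the diagonal eigenvalues $x + y$ and $x - y$ (an illustration of mine): the geometric series converges exactly when both have absolute value below 1, even when $\det z < 1$.

```python
def partial_sum_eigs(x, y, terms):
    """Partial sums of sum z^n on the two diagonal eigenvalues of z = x + y e1."""
    p, q = x + y, x - y
    sp = sum(p**n for n in range(terms))
    sq = sum(q**n for n in range(terms))
    return sp, sq

# Inside the square: eigenvalues 0.9 and 0.5 -> both components converge.
sp, sq = partial_sum_eigs(0.7, 0.2, 200)
assert abs(sp - 1 / (1 - 0.9)) < 1e-6 and abs(sq - 1 / (1 - 0.5)) < 1e-6

# det z = 2.25 - 1.69 = 0.56 < 1, but x + y = 2.8 > 1: one component blows up.
sp, sq = partial_sum_eigs(1.5, 1.3, 200)
assert sp > 1e80 and abs(sq - 1 / (1 - 0.2)) < 1e-6
```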

Another example is Riemann's zeta function:

$\zeta(z) = \sum_{n=1}^{\infty} \frac{1}{n^z} = \sum_{n=1}^{\infty} e^{-z\log n}$

Now we take z to be a hyperbolic number instead of a real one:

$\zeta(x + y\,e_1) = \sum_{n=1}^{\infty} e^{-x\log n}\left(\cosh(y\log n) - e_1\sinh(y\log n)\right)$


$= \sum_{n=1}^{\infty} n^{-x}\left(\frac{n^y + n^{-y}}{2} - e_1\,\frac{n^y - n^{-y}}{2}\right) = \sum_{n=1}^{\infty}\left(n^{-x+y}\,\frac{1 - e_1}{2} + n^{-x-y}\,\frac{1 + e_1}{2}\right)$

The series is convergent only if $-x + y < -1$ and $-x - y < -1$, that is:

$1 - x < y < x - 1$

which is the positive half-sector beginning at (x, y) = (1, 0). You can see that it is an

analytic function within this domain. We can rewrite Riemann's function in the

form:

$\zeta(x + y\,e_1) = \frac{1 + e_1}{2}\,\zeta(x + y) + \frac{1 - e_1}{2}\,\zeta(x - y)$

and then define Riemann's zeta function extended to the other sector, taking into

account the complex analytic continuation when needed for $\zeta(x + y)$ or $\zeta(x - y)$:

$\zeta(1 - z) = \frac{2}{(2\pi)^z}\,\zeta(z)\,\Gamma(z)\cos\frac{\pi z}{2} \qquad\text{and}\qquad \zeta(z) = \frac{1}{\Gamma(z)}\int_0^{\infty} \frac{t^{z-1}}{e^t - 1}\,dt \qquad 0 < \operatorname{Re}(z) < 1$

Now a very interesting theorem arises: every analytic function f(z) can always be

written in the following form:

$f(x + y\,e_1) = \frac{1 + e_1}{2}\,f(x + y) + \frac{1 - e_1}{2}\,f(x - y)$

Let us prove this statement by calculating the partial derivatives of the parts a and b of the

analytic function:

$\frac{\partial a}{\partial x} = \frac{1}{2}\left[\frac{df(x+y)}{d(x+y)} + \frac{df(x-y)}{d(x-y)}\right] \qquad \frac{\partial b}{\partial x} = \frac{1}{2}\left[\frac{df(x+y)}{d(x+y)} - \frac{df(x-y)}{d(x-y)}\right]$

$\frac{\partial a}{\partial y} = \frac{1}{2}\left[\frac{df(x+y)}{d(x+y)} - \frac{df(x-y)}{d(x-y)}\right] \qquad \frac{\partial b}{\partial y} = \frac{1}{2}\left[\frac{df(x+y)}{d(x+y)} + \frac{df(x-y)}{d(x-y)}\right]$

These derivatives fulfil the analyticity conditions provided that the derivatives of the real

function at $x + y$ and $x - y$ exist. On the other hand, this expression gives a method to obtain

the analytic continuation of any real function. For example, let us construct the

analytic continuation of f(x) = cos x:

$\cos(x + y\,e_1) = \frac{1 + e_1}{2}\cos(x + y) + \frac{1 - e_1}{2}\cos(x - y)$

$= \frac{1}{2}\left[\cos(x + y) + \cos(x - y)\right] + \frac{e_1}{2}\left[\cos(x + y) - \cos(x - y)\right]$

$= \cos x\cos y - e_1\sin x\sin y$
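This continuation can be checked numerically (my sketch): the idempotent splitting of cos must reproduce $\cos x\cos y - e_1\sin x\sin y$:

```python
import math

x, y = 1.1, 0.6
# Idempotent splitting: cos(x + y e1) = a + b*e1 with
a = (math.cos(x + y) + math.cos(x - y)) / 2
b = (math.cos(x + y) - math.cos(x - y)) / 2
# Compare with the closed form cos x cos y - e1 sin x sin y:
assert abs(a - math.cos(x) * math.cos(y)) < 1e-12
assert abs(b + math.sin(x) * math.sin(y)) < 1e-12
```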


Another consequence of this method of analytic continuation is that if f(z)

loses analyticity at the real value $x_0$ then it is not analytic either on the lines with slope 1

and −1 passing through $(x_0, 0)$. Since we have supposed that the function is analytic

except at a certain real value, the former equality holds although analyticity is lost

at this point. But then the function cannot be analytic at $(x_0 + t, -t)$ nor at $(x_0 - t, t)$

because:

$f(x_0 + t - t\,e_1) = \frac{1 + e_1}{2}\,f(x_0) + \frac{1 - e_1}{2}\,f(x_0 + 2t)$

$f(x_0 - t + t\,e_1) = \frac{1 + e_1}{2}\,f(x_0) + \frac{1 - e_1}{2}\,f(x_0 - 2t)$

For example, look at the function f(z) = 1/z, which is not analytic at the real value x =

0. Then it will not be analytic on the lines y = x and y = −x. We can see it through the

decomposition of the function:

$\frac{1}{z} = \frac{1}{x + y\,e_1} = \frac{x - y\,e_1}{x^2 - y^2} = \frac{1 + e_1}{2}\,\frac{1}{x + y} + \frac{1 - e_1}{2}\,\frac{1}{x - y}$

Now we may return to the question of the convergence of the Taylor series. A

function can only be developed in a power series in the neighbourhood of a point where

it is analytic. The series is convergent up to where the function breaks analyticity, so

that the lines $y = x - x_0$ and $y = -x + x_0$ ($x_0$ being the real value for which f(x) is not

analytic) are boundaries of the convergence domain. On the other hand, due to the

symmetry of the powers of z, the convergence domain forms a square around the centre

of development. This implies that there are no multiply connected domains. The

Gruyère-cheese picture of a complex domain is not possible within the hyperbolic

numbers, because if the function is not analytic at a certain point, then it is not analytic

at any of the points lying on a cross which passes through this point. The hyperbolic

domains are multiply separated; in other words, they are formed by disjoint rectangular

regions without holes.

About the isomorphism of Clifford algebras

Until now, I have only used the geometric algebra generated by the Euclidean

plane vectors (usually denoted by Cl2,0(R)). This algebra already contains the hyperbolic

numbers and hyperbolic vectors. However, an isomorphic algebra Cl1,1(R) generated by

hyperbolic vectors (time-space) is more commonly used in relativity:

$e_0^2 = 1 \qquad e_1^2 = -1 \qquad e_0 e_1 = -e_1 e_0 \qquad (e_0 e_1)^2 = 1$

The isomorphism is:


Cl2,0(R) ↔ Cl1,1(R)
1 ↔ 1
e1 ↔ e01
e2 ↔ e0
−e12 ↔ e1

A hyperbolic number has the expression $z = x + y\,e_{01}$ and a hyperbolic vector is $v = v_0\,e_0 + v_1\,e_1$.

This description perhaps could be satisfactory for a mathematician or a physicist,

but not for me. I think that in fact both algebras only differ in notation but not in

their nature, so that they are the same algebra. Moreover, they are equal to the matrix

algebra, which is the expected algebra for a space of multiple quantities (usually called

vectors). The plane geometric algebra is the algebra of the 2×2 real matrices⁵.

Cl2,0(R) = Cl1,1(R) = M2×2(R)

This distinct notation only expresses the fact that the plane geometric algebra is equally

generated (in Grassmann's sense) by Euclidean vectors or by hyperbolic vectors. The

vector plane (Euclidean or hyperbolic) is the quotient space of the geometric algebra

divided by an even subalgebra (complex or hyperbolic numbers). Just this is the matter

of the next chapter.

Exercises

12.1 Calculate the square roots of 5 + 4 e1.

12.2 Solve the following equation: $2z^2 + 3z - 17 + 3\,e_1 = 0$

12.3 Solve directly the equation: $z^2 - 6z + 5 = 0$

12.4 Find the analytical continuation of the function f(x) = sin x .

12.5 Find cosh 4ψ and sinh 4ψ as functions of coshψ and sinhψ.

12.6 Construct the analytical continuation of the real logarithm and see that it is

identical to the logarithm found from the hyperbolic exponential function.

12.7 Calculate the integral $\int_{-e_1}^{e_1} z^2\,dz$ following a straight path $z = t\,e_1$ and a circular path

$z = \cos t + e_1\sin t$, and see that the result is identical to the integration via the primitive.

12.8 Prove that if f(z) is a hyperbolic analytic function and does not vanish then it is a

hyperbolic conformal mapping.

⁵ In comparison, the 2×2 complex matrices (Pauli matrices) are a representation of the algebra

of three-dimensional space.


13. THE HYPERBOLIC OR PSEUDO-EUCLIDEAN PLANE

Hyperbolic vectors

The elements not belonging to the subalgebra of the hyperbolic numbers form a

quotient space1, but not an algebra:

$v = v_2\,e_2 + v_{21}\,e_{21} = \begin{pmatrix} 0 & v_2 - v_{21} \\ v_2 + v_{21} & 0 \end{pmatrix}$

In other words, their linear combinations are in the space, but the products are

hyperbolic numbers. These elements play the same role as the Euclidean vectors with

respect to the complex numbers. Because of this, they are called hyperbolic vectors.

Like the hyperbolic numbers, the hyperbolic vectors also have a pseudo-Euclidean

determinant:

$\det(v_2\,e_2 + v_{21}\,e_{21}) = v_{21}^2 - v_2^2$

Following the analogy with Euclidean vectors:

$\det(w_1\,e_1 + w_2\,e_2) = -(w_1\,e_1 + w_2\,e_2)^2 = -|w|^2$

the determinant of a hyperbolic vector is also equal to its square with the opposite sign:

$\det(v_2\,e_2 + v_{21}\,e_{21}) = -(v_2\,e_2 + v_{21}\,e_{21})^2 = -|v|^2$

whence the modulus of a hyperbolic vector can be defined:

$|v| = \sqrt{v_2^2 - v_{21}^2}$

Like for Euclidean vectors, the inverse of a hyperbolic vector is equal to this vector

divided by its square (or square of the modulus):

$(v_2\,e_2 + v_{21}\,e_{21})^{-1} = \frac{v_2\,e_2 + v_{21}\,e_{21}}{v_2^2 - v_{21}^2}$

As before, there is no privileged direction in the plane. Then every subspace

whose elements have the form:

¹ Every Euclidean vector is obtained from $e_1$ through a rotation and dilation given by the

complex number with the same components: $v_1\,e_1 + v_2\,e_2 = e_1(v_1 + v_2\,e_{12})$, so that the complex

and vector planes are equivalent. This statement also holds in hyperbolic geometry: every

hyperbolic vector is obtained from $e_2$ through a rotation and dilation given by the hyperbolic

number with the same components: $v_2\,e_2 + v_{21}\,e_{21} = e_2(v_2 + v_{21}\,e_1)$, or in relativistic

notation: $v_0\,e_0 + v_1\,e_1 = e_0(v_0 + v_1\,e_{01})$, so that both hyperbolic planes are equivalent. The

reader accustomed to the relativistic notation should remember the isomorphism: $e_0 \leftrightarrow e_2$, $e_1 \leftrightarrow e_{21}$, $e_{01} \leftrightarrow e_1$.


$v_w\,w + v_{21}\,e_{21}$

where w is a unitary Euclidean vector perpendicular to the unitary Euclidean vector u, is

also a subspace of hyperbolic vectors, complementary to the hyperbolic numbers with

the direction of u.

The hyperbolic vectors have the following properties, which you may prove:

1) The product of two hyperbolic vectors is always a hyperbolic number. This

fact is shown by the following table, where the products of all the elements

of the geometric algebra are summarised:

×                  hyp. numbers     hyp. vectors
hyp. numbers       hyp. numbers     hyp. vectors
hyp. vectors       hyp. vectors     hyp. numbers

2) The conjugate of a product of two hyperbolic vectors x and y is equal to the

product with these vectors exchanged:

( x y )* = y x

3) The product of three hyperbolic vectors x, y and z fulfils the permutative

property:

$x\,y\,z = z\,y\,x$

When a hyperbolic number n and a hyperbolic vector x are exchanged, the

hyperbolic number becomes conjugated according to the permutative

property:

n x = x n*

4) The modulus of a product of hyperbolic vectors is equal to the product of

moduli:

$|v\,w| = |v|\,|w|$

This property follows immediately from the fact that the determinant of a

product of matrices is equal to the product of determinants of each matrix.

Inner and outer products of hyperbolic vectors

For any two hyperbolic vectors r and s such that:

$r = r_2\,e_2 + r_{21}\,e_{21} \qquad s = s_2\,e_2 + s_{21}\,e_{21}$

their inner and outer products are defined by means of the geometric (or matrix) product

in the following way:

$r \cdot s = \frac{r\,s + s\,r}{2} = r_2\,s_2 - r_{21}\,s_{21}$


$r \wedge s = \frac{r\,s - s\,r}{2} = (r_2\,s_{21} - r_{21}\,s_2)\,e_1$

Then the product of two vectors can be written as the addition of both products:

$r\,s = r \cdot s + r \wedge s$

If the outer product of two hyperbolic vectors vanishes, they are proportional:

$r \wedge s = 0 \;\Leftrightarrow\; \frac{r_2}{s_2} = \frac{r_{21}}{s_{21}} \;\Leftrightarrow\; r \parallel s$

Two hyperbolic vectors are said to be orthogonal if their inner product vanishes:

$r \perp s \;\Leftrightarrow\; r \cdot s = 0 \;\Leftrightarrow\; r_2\,s_2 - r_{21}\,s_{21} = 0 \;\Leftrightarrow\; \frac{r_2}{r_{21}} = \frac{s_{21}}{s_2}$

When the last condition is fulfilled, we see that both vectors are symmetric with respect to

any quadrant bisector. When the ratio of components is equal to ±1 (directions of the

quadrant bisectors), the vectors have null modulus and are self-orthogonal. For ratios

differing from ±1, one vector has negative determinant and the other positive

determinant, so that they belong to different sectors. That is, there is no pair of

orthogonal vectors within the same sector.
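A component check (mine) of these definitions, multiplying hyperbolic vectors through the relations $e_2^2 = 1$, $e_{21}^2 = -1$, $e_2 e_{21} = e_1 = -e_{21} e_2$:

```python
def hv_mul(r, s):
    """Geometric product of hyperbolic vectors r2 e2 + r21 e21, returned
    as the hyperbolic number (scalar, e1-coefficient)."""
    r2, r21 = r; s2, s21 = s
    return (r2 * s2 - r21 * s21, r2 * s21 - r21 * s2)

def inner(r, s):
    rs, sr = hv_mul(r, s), hv_mul(s, r)
    return ((rs[0] + sr[0]) / 2, (rs[1] + sr[1]) / 2)

def outer(r, s):
    rs, sr = hv_mul(r, s), hv_mul(s, r)
    return ((rs[0] - sr[0]) / 2, (rs[1] - sr[1]) / 2)

r, s = (3.0, 1.0), (2.0, 5.0)
assert inner(r, s) == (3*2 - 1*5, 0.0)   # r2 s2 - r21 s21, a pure scalar
assert outer(r, s) == (0.0, 3*5 - 1*2)   # (r2 s21 - r21 s2) e1, a pure bivector part
```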

Angles between hyperbolic vectors

The oriented angle α between two Euclidean vectors u and v (of the form $x\,e_1 + y\,e_2$) is obtained from the complex exponential function:

$\exp(\alpha\,e_{12}) = \frac{u\,v}{|u|\,|v|} \quad\Leftrightarrow\quad \cos\alpha = \frac{u \cdot v}{|u|\,|v|} \qquad \sin\alpha = \frac{u \wedge v\,e_{21}}{|u|\,|v|} \quad\Leftrightarrow\quad \alpha = \operatorname{arctg}\frac{u \wedge v\,e_{21}}{u \cdot v} + (\pi)$

Then, the oriented hyperbolic angle ψ between two hyperbolic vectors u and v (of the form $x\,e_2 + y\,e_{21}$) is defined as:

$\exp(\psi\,e_1) = \frac{u\,v}{|u|\,|v|} \quad\Leftrightarrow\quad \cosh\psi = \frac{u \cdot v}{|u|\,|v|} \qquad \sinh\psi = \frac{u \wedge v\,e_1}{|u|\,|v|} \quad\Leftrightarrow$

$\psi = \operatorname{arg\,tgh}\!\left(\frac{u \wedge v\,e_1}{u \cdot v}\right) + (\pi\,e_{12}) = \frac{1}{2}\log\!\left(\frac{u \cdot v + u \wedge v\,e_1}{u \cdot v - u \wedge v\,e_1}\right) + (\pi\,e_{12})$


The parenthesis around $\pi\,e_{12}$ indicates that this angle is added to the arctangent or not,

depending on the quadrant (Euclidean plane) or half-sector (hyperbolic plane).

Let us analyse whether these expressions are suitable. The hyperbolic cosine is

always greater than or equal to 1. It cannot describe the angle between two vectors belonging

to opposite half-sectors, which have a negative inner product. So we must keep the

complex analytic continuation of the hyperbolic sine and cosine:

$\sinh(x + e_{12}\,y) \equiv \sinh x\cos y + e_{12}\cosh x\sin y$

$\cosh(x + e_{12}\,y) \equiv \cosh x\cos y + e_{12}\sinh x\sin y$

to get $\sinh(\psi + e_{12}\,\pi) \equiv -\sinh\psi$ and $\cosh(\psi + e_{12}\,\pi) \equiv -\cosh\psi$ with ψ real.

Therefore opposite hyperbolic vectors form a circular angle of π radians².

Let us consider the angle between vectors lying in different sectors. The modulus of one vector is real while that of the other is imaginary, which implies that the hyperbolic sine and cosine of the angle are imaginary. Using again the complex analytic continuation of the hyperbolic sine and cosine we find these imaginary values:

sinh( ψ ± e12 π/2 ) ≡ ± e12 cosh ψ      (ψ real)

cosh( ψ ± e12 π/2 ) ≡ ± e12 sinh ψ

Then, which is the angle between two orthogonal vectors? They have a null inner product, while their outer product equals in real value the product of both moduli; hence the hyperbolic cosine vanishes and the hyperbolic sine is ± e12, so they form an angle of π e12 /2 or 3π e12 /2. That is, orthogonal hyperbolic vectors form a circular right angle:

ψ⊥ = (1/2) log(−1) + (π e12) = (π/2) e12 + (π e12)
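This value can be checked with Python's cmath by letting the imaginary unit stand for e12, a representation that is legitimate only inside the analytic continuation used here (the identification is an assumption of this sketch):

```python
import cmath

# psi_perp = (1/2) log(-1): with 1j standing for e12 this is the circular
# right angle (pi/2) e12, whose hyperbolic cosine (the inner product) vanishes
psi_perp = 0.5 * cmath.log(-1)
print(psi_perp)                   # 1.5707963267948966j, i.e. (pi/2) e12
print(abs(cmath.cosh(psi_perp)))  # 0 up to rounding
```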

The analytic continuations of the hyperbolic trigonometric functions must be consistent with their definitions given on page 142. For example, in the lowest half sector the modulus of the hyperbolic radius is imaginary. The abscissa x and the ordinate y divided by r, the modulus of the radius vector, are then also imaginary, so the angle is a real quantity plus a right angle and the hyperbolic sine and cosine are exchanged. The values of the angles consistent with the analytic continuation are displayed in figure 13.1.

Figure 13.1

² The so-called “antimatter” is not really any special kind of particles, but matter with the energy-momentum vector (E, p) lying on the negative half sector instead of the positive one (usual matter). We may also wonder whether matter having (E, p) on the imaginary sector exists since, from a geometric point of view, there is no obstacle to it. It is known that the particle dynamics for c < V < ∞ is completely symmetric to the dynamics for 0 ≤ V < c, but with an imaginary mass. In this case, the head of the energy-momentum vector is always located on the hyperbola having the y-axis (p axis) as principal axis of symmetry. According to Einstein's formula, for V = ∞ we have E = 0 and p = m c, that is, the particle has null energy! The technical question is how a particle can trespass the light barrier.

Congruence of segments and angles

Two segments (vectors) are congruent (have equal length) if their determinants are equal; in other words, when one segment can be obtained from the other through an isometry. Segments having equal length always lie in the same sector.

Two hyperbolic angles are said to be congruent (or equal) if they have the same modulus, that is, if they intercept arcs of hyperbola with the same hyperbolic length.

A triangle is said to be isosceles if it has two congruent sides. In colloquial language we talk about “sides with equal length”, or even “equal sides”. Let us see the isosceles triangle theorem: the angles adjacent to the base of an isosceles triangle are equal. The proof uses the outer product: the triangle area is half the outer product of any two sides, as proved below in the section on the area. Equating twice the area computed from each pair of sides containing the base c gives |a| |c| sinh β = |b| |c| sinh α. Suppose that the sides a and b have the same modulus. Then:

sinh β = |b| sinh α / |a|   ⇒   β = α
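The argument can be tried on a concrete isosceles triangle: take the two sides of equal modulus as mirror images with respect to the e2 axis, so the base closes the triangle. The outer products of each side with the base then coincide, forcing equal sines of the adjacent angles. Components are along e2 and e21 (my convention):

```python
import math

def outer(u, v):
    """u ^ v e1: twice the signed area spanned by u and v."""
    return u[0] * v[1] - u[1] * v[0]

t = 0.7
a = (math.cosh(t),  math.sinh(t))   # |a| = 1
b = (math.cosh(t), -math.sinh(t))   # |b| = 1, mirror image of a
c = (a[0] - b[0], a[1] - b[1])      # base, so that a = b + c

# equal outer products with the base  ->  sinh(beta) = sinh(alpha)
print(outer(a, c), outer(b, c))     # both sinh(2t)
```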

Isometries

In the Euclidean plane, an isometry is a geometric transformation that preserves the modulus of vectors and complex numbers. Now I give a more general definition: a geometric transformation is an isometry if it preserves the determinant of every element of the geometric algebra. In fact, the isometries are the inner automorphisms of the matrices:

A' = B⁻¹ A B   ⇒   det A' = det A
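The invariance of the determinant under an inner automorphism is easy to verify numerically in the 2×2 matrix representation; the helper functions below are a self-contained sketch, not the book's notation:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def inv(A):
    d = det(A)
    return [[A[1][1] / d, -A[0][1] / d],
            [-A[1][0] / d, A[0][0] / d]]

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[2.0, 1.0], [1.0, 1.0]]          # any invertible matrix
A_prime = matmul(matmul(inv(B), A), B)
print(det(A_prime), det(A))           # both -2.0
```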

When B represents a Euclidean vector, the isometry is a reflection in the direction of B. If a complex number of argument α is represented by B, the transformation is a rotation of angle 2α.

We wish now to obtain the isometries for hyperbolic vectors. The hyperbolic rotation of a hyperbolic vector (figure 13.2) is obtained through the product by an exponential having unit determinant:

v'2 e2 + v'21 e21 = ( v2 e2 + v21 e21 ) ( cosh ψ + sinh ψ e1 )

Figure 13.2

Writing the components, we have:

v'2 = v2 cosh ψ + v21 sinh ψ
v'21 = v2 sinh ψ + v21 cosh ψ
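These components can be checked directly: the transformation preserves the determinant v2² − v21² (the squared modulus, and in relativity the invariant interval). A sketch with a function name of my own:

```python
import math

def hyperbolic_rotation(v2, v21, psi):
    """Turn the hyperbolic vector v2 e2 + v21 e21 through the angle psi."""
    return (v2 * math.cosh(psi) + v21 * math.sinh(psi),
            v2 * math.sinh(psi) + v21 * math.cosh(psi))

v2, v21 = 3.0, 1.0
w2, w21 = hyperbolic_rotation(v2, v21, 0.5)
print(v2**2 - v21**2, w2**2 - w21**2)   # both 8, up to rounding
```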


which is the Lorentz transformation³ of relativity. When a vector is turned through a positive angle ψ, its extreme follows the hyperbola in the direction shown by the arrowheads in figure 13.2, as is deduced from the components. So they indicate the geometric positive sense of hyperbolic angles (not a trivial question). Since the points of intersection with the axes correspond to ψ = 0 plus multiples of π e12 /2, the signs of the hyperbolic angles are determined: positive in the first and third quadrants, and negative in the second and fourth quadrants.

We can also write the hyperbolic rotation as an inner automorphism of matrices by using the half-argument identity for the hyperbolic trigonometric functions:

cosh ψ + e1 sinh ψ ≡ ( cosh(ψ/2) + e1 sinh(ψ/2) )²

and the permutative property:

v'2 e2 + v'21 e21 = ( cosh(ψ/2) − sinh(ψ/2) e1 ) ( v2 e2 + v21 e21 ) ( cosh(ψ/2) + sinh(ψ/2) e1 )

If z = exp(e1 ψ/2), then the rotation is written as:

v' = z⁻¹ v z

Now z need not be unitary and can have any modulus. We also see that the hyperbolic rotation leaves the hyperbolic numbers invariant.
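That the two-sided (half-angle) form agrees with the one-sided rotation, even for a non-unitary z, can be verified numerically. The multiplication rules below follow from e1 e2 = −e21 and e1 e21 = −e2; the tuple conventions are mine:

```python
import math

def left_mul(z, v):
    """(x + y e1)(v2 e2 + v21 e21), components along e2 and e21."""
    x, y = z
    return (x * v[0] - y * v[1], x * v[1] - y * v[0])

def right_mul(v, z):
    """(v2 e2 + v21 e21)(x + y e1)."""
    x, y = z
    return (x * v[0] + y * v[1], x * v[1] + y * v[0])

def sandwich(z, v):
    """z**-1 v z for a hyperbolic number z = x + y e1 of non-null modulus."""
    x, y = z
    d = x * x - y * y
    return left_mul((x / d, -y / d), right_mul(v, z))

v = (3.0, 1.0)
z = (2.0, 0.5)                      # not unitary: det z = 3.75
psi = 2 * math.atanh(0.5 / 2.0)     # angle of the rotation z produces
one_sided = (v[0] * math.cosh(psi) + v[1] * math.sinh(psi),
             v[0] * math.sinh(psi) + v[1] * math.cosh(psi))
print(sandwich(z, v))
print(one_sided)                    # the same vector
```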

Analogously to reflections in the Euclidean plane, the hyperbolic reflection of a hyperbolic vector v with respect to a direction given by the vector u is:

v' = u⁻¹ v u = u⁻¹ ( v|| + v⊥ ) u = v|| − v⊥

because proportional vectors commute while orthogonal vectors anticommute (also in the hyperbolic case, because the inner product is zero). Every pair of orthogonal directions (for example u|| and u⊥ in figure 13.3) is always seen by us as having symmetry with respect to the quadrant bisectors. Since the determinant is preserved, a vector and its reflection always belong to the same sector, the hyperbolic angle between each vector and the direction of reflection being equal with opposite sign.

Figure 13.3
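The reflection can be computed purely in components: write u v = s + p e1 with s = u · v and p = u ∧ v e1, multiply by u again and divide by u · u. A sketch with my own conventions, checking that the determinant (and hence the sector) is preserved:

```python
def reflect(u, v):
    """Hyperbolic reflection v' = u**-1 v u of v in the direction of u.
    Components along e2 and e21; u.u = u2**2 - u21**2 must be non-null."""
    u2, u21 = u
    v2, v21 = v
    d = u2 * u2 - u21 * u21
    s = u2 * v2 - u21 * v21     # u . v
    p = u2 * v21 - u21 * v2     # u ^ v e1
    return ((s * u2 - p * u21) / d, (s * u21 - p * u2) / d)

u = (2.0, 1.0)
v = (3.0, 1.0)
w = reflect(u, v)
print(w)                                      # (11/3, 7/3)
print(v[0]**2 - v[1]**2, w[0]**2 - w[1]**2)   # both 8, up to rounding
```

Reflecting u in its own direction returns u itself, and applying the reflection twice returns v, as expected of an involutive isometry.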

³ The hyperbolic vector in space-time is ct e0 + x e1, where x is the space coordinate, t the time and c the speed of light. The argument ψ of the hyperbolic rotation is related to the relative velocity V of the two frames of reference through ψ = arg tgh V/c.


Finally, note that the extremes of a vector and its transformed vector under a

hyperbolic reflection (or any isometry) lie on an equilateral hyperbola.

Theorems about angles