


Appendix C: Fundamental Facts
Throughout the text, we will be using the following notational devices for multiplication and addition.
◆ The Sum Notation
In general, if we have numbers a_m, a_{m+1}, . . . , a_n (m ≤ n), we may write their sum as

$$\sum_{i=m}^{n} a_i = a_m + a_{m+1} + \cdots + a_n.$$

The letter i is the index of summation (and any letter may be used here), n is the upper limit of summation, m is the lower limit of summation, and a_i is a summand.
The close cousin of the summation notation is the following symbol.
◆ The Product Symbol
The multiplicative analogue for the summation notation is the product symbol denoted Π, upper case Greek pi. Given a_m, a_{m+1}, . . . , a_n ∈ R (m ≤ n), their product is denoted:

$$\prod_{i=m}^{n} a_i = a_m a_{m+1} \cdots a_n.$$

The letter i is the product index, m is the lower product limit, n is the upper product limit, and a_i is a multiplicand.
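For example, with m = 1, n = 4, and a_i = i, these notations give

$$\sum_{i=1}^{4} i = 1 + 2 + 3 + 4 = 10, \qquad \prod_{i=1}^{4} i = 1 \cdot 2 \cdot 3 \cdot 4 = 24.$$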
The notation m | n for integers m, n means that m divides n, namely, there exists an integer k such that n = km; if m does not divide n, we write m ∤ n. Behind the basic notion of divisibility is a special kind of divisor that we will use throughout.

Definition C.1 (The Greatest Common Divisor)
If a, b ∈ Z are not both zero, then the greatest common divisor or gcd of a and b is the natural number g such that g | a, g | b, and g is divisible by any common divisor of a and b, denoted by g = gcd(a, b).

We have a special term for the case where the gcd is 1.

Definition C.2 (Relative Primality)
If a, b ∈ Z, and gcd(a, b) = 1, then a and b are said to be relatively prime or coprime. Sometimes the phrase a is prime to b is also used. If {a_1, a_2, . . . , a_n} has the property that gcd(a_i, a_j) = 1 for each i ≠ j, we say that the elements are pairwise relatively prime.

Of particular importance for divisibility is the following algorithm.




Theorem C.3 (The Division Algorithm)
If a ∈ N and b ∈ Z, then there exist unique integers q, r ∈ Z with 0 ≤ r < a and b = aq + r.

Proof. See [163, Theorem 1.3.1, p. 33].
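For example, with a = 5 and b = 23 we get 23 = 5 · 4 + 3, so q = 4 and r = 3; with a = 5 and b = −23 we get −23 = 5 · (−5) + 2, so q = −5 and r = 2, since the remainder must satisfy 0 ≤ r < a.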
Euclid's celebrated algorithm tells us how to find the gcd based upon the division algorithm.

Theorem C.4 (The Euclidean Algorithm)
Let a, b ∈ Z (a ≥ b > 0), and set a = r_{−1}, b = r_0. By repeatedly applying the Division Algorithm, we get r_{j−1} = r_j q_{j+1} + r_{j+1} with 0 < r_{j+1} < r_j for all 0 ≤ j < n, where n is the least nonnegative number such that r_{n+1} = 0, in which case gcd(a, b) = r_n.

Proof. See [163, Theorem 1.3.3, p. 37].
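As a computational illustration (not part of the original text), the following Python sketch carries out the procedure of Theorem C.4 directly, returning the last nonzero remainder r_n; the function name euclid_gcd is our own choice.

    def euclid_gcd(a, b):
        # Requires a >= b > 0 as in Theorem C.4: set r_{-1} = a, r_0 = b and
        # apply the Division Algorithm until a zero remainder appears.
        r_prev, r_curr = a, b
        while r_curr != 0:
            r_prev, r_curr = r_curr, r_prev % r_curr
        return r_prev  # the last nonzero remainder r_n = gcd(a, b)

For instance, euclid_gcd(252, 198) returns 18.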
We will have occasion to use the following generalization of Theorem C.4
throughout the text.

Theorem C.5 (The Extended Euclidean Algorithm)
Let a, b ∈ N, and let q_i for i = 1, 2, . . . , n + 1 be the quotients obtained from the application of the Euclidean Algorithm to find g = gcd(a, b), where n is the least nonnegative integer such that r_{n+1} = 0. If s_{−1} = 1, s_0 = 0, and

$$s_i = s_{i-2} - q_{n-i+2}\, s_{i-1},$$

for i = 1, 2, . . . , n + 1, then

$$g = s_{n+1}\, a + s_n\, b.$$

Proof. See [163, Theorem 1.3.4, p. 38].
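In practice one usually carries the coefficients along during the forward pass rather than recovering them from the quotients afterward as in Theorem C.5. The following Python sketch (our own code, using the standard iterative bookkeeping rather than the s_i recursion above) returns g together with integers x, y satisfying g = xa + yb.

    def extended_gcd(a, b):
        # Invariants maintained throughout: old_r = old_s*a + old_t*b
        # and r = s*a + t*b.
        old_r, r = a, b
        old_s, s = 1, 0
        old_t, t = 0, 1
        while r != 0:
            q = old_r // r
            old_r, r = r, old_r - q * r
            old_s, s = s, old_s - q * s
            old_t, t = t, old_t - q * t
        return old_r, old_s, old_t  # gcd(a, b) = old_s*a + old_t*b

For instance, extended_gcd(240, 46) returns (2, -9, 47), and indeed 2 = (−9)(240) + (47)(46).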
The elementary fundamental fact behind factorization algorithms studied in
Chapter 5 is the following.

Theorem C.6 (The Fundamental Theorem of Arithmetic)
Let n ∈ N, n > 1. If

$$n = \prod_{i=1}^{r} p_i = \prod_{i=1}^{s} q_i,$$

where the p_i and q_i are primes, then r = s, and the factors are the same if their order is ignored.

Proof. See [163, Theorem 1.4.2, pp. 44–45].
Also in Chapter 5, we will need the following basic concept related to the
gcd.

Definition C.7 (The Least Common Multiple)
If a, b ∈ Z, then the smallest natural number which is a multiple of both a
and b is the least common multiple of a and b, denoted by lcm(a, b).
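A useful companion fact (standard, though not stated above) is that for a, b ∈ N,

$$\gcd(a, b) \cdot \mathrm{lcm}(a, b) = ab;$$

for example, gcd(12, 18) = 6 and lcm(12, 18) = 36, and 6 · 36 = 216 = 12 · 18.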



Throughout the text, we will be working within the following structure.

Definition C.8 (The Ring Z/nZ)
For n ∈ N, the set

$$Z/nZ = \{\overline{0}, \overline{1}, \overline{2}, \ldots, \overline{n-1}\}$$

is called the Ring of Integers Modulo n, where $\overline{m}$ denotes the congruence class of m modulo n.^{C.1}

C.1 When the context is clear and no confusion can arise when talking about elements of Z/nZ, we will eliminate the overline bars.

A special kind of division is given its own notation as follows. If n ∈ N and a, b ∈ Z, we say that a is congruent to b modulo n if n | (a − b), and we denote this by a ≡ b (mod n). If n ∤ (a − b), then we write a ≢ b (mod n). With the basic notion of congruential arithmetic comes the following notion that is essential in many cryptographic algorithms such as RSA.

Definition C.9 (Modular Multiplicative Inverses)
Suppose that a ∈ Z and n ∈ N. A multiplicative inverse of the integer a modulo n is an integer x such that ax ≡ 1 (mod n). If x is the least positive such inverse, then we call it the least multiplicative inverse of the integer a modulo n, denoted x = a^{−1}.
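Such an inverse exists precisely when gcd(a, n) = 1, and the Extended Euclidean Algorithm (Theorem C.5) produces it. A minimal Python sketch, assuming Python 3.8 or later (whose built-in pow computes modular inverses by essentially this method):

    def mod_inverse(a, n):
        # Least multiplicative inverse of a modulo n (Definition C.9);
        # raises ValueError if gcd(a, n) != 1, in which case no inverse exists.
        return pow(a, -1, n)

For instance, mod_inverse(3, 26) returns 9, since 3 · 9 = 27 ≡ 1 (mod 26).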

Since cryptosystems related to RSA are essentially predicated upon the fol-
lowing celebrated result, we include it here for convenience.

Theorem C.10 (Fermat's Little Theorem)
If a ∈ Z, and p is a prime such that gcd(a, p) = 1, then a^{p−1} ≡ 1 (mod p).

Proof. See [163, Theorem 2.1.4, p. 80].
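For example, with p = 7 and a = 3 (so gcd(3, 7) = 1), we have 3^6 = 729 = 104 · 7 + 1, so 3^6 ≡ 1 (mod 7), as the theorem predicts.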
Knowing when the integers modulo n form a field is essential in what follows, as is the structure of the units modulo n.

Theorem C.11 (The Field Z/pZ)
If n ∈ N, then Z/nZ is a field if and only if n is prime.

Proof. See [165, Theorem 2.13, p. 62].
We employ the notation F* to denote the multiplicative group of nonzero elements of a given field F. In particular, when we have a finite field Z/pZ = F_p of p elements for a given prime p, then (Z/pZ)* denotes the multiplicative group of nonzero elements of F_p. This is tantamount to saying that (Z/pZ)* is the group of units in F_p, and (Z/pZ)* is cyclic. Thus, this notation and notion may be generalized as follows. Let n ∈ N and let the group of units of Z/nZ be denoted by (Z/nZ)*. Then

$$(Z/nZ)^* = \{a \in Z/nZ : 0 \le a < n \text{ and } \gcd(a, n) = 1\}. \qquad (C.12)$$

Numerous times we will need to solve systems of congruences for which the
following result from antiquity is most useful.

Theorem C.13 (Chinese Remainder Theorem)
Let n_i ∈ N for natural numbers i ≤ k ∈ N be pairwise relatively prime, set $n = \prod_{j=1}^{k} n_j$, and let r_i ∈ Z for i ≤ k. Then the system of k simultaneous linear congruences given by

x ≡ r_1 (mod n_1),
x ≡ r_2 (mod n_2),
⋮
x ≡ r_k (mod n_k),

has a unique solution modulo n.

Proof. See [165, Theorem 2.29, p. 69].
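A constructive proof of Theorem C.13 also gives an algorithm: for each i, multiply r_i by m_i = n/n_i and by the inverse of m_i modulo n_i, then sum. A Python sketch of that computation (our own code and naming, assuming Python 3.8+):

    from math import prod

    def crt(residues, moduli):
        # Solve x ≡ residues[i] (mod moduli[i]) for pairwise coprime moduli,
        # returning the unique solution modulo n = product of the moduli.
        n = prod(moduli)
        x = 0
        for r_i, n_i in zip(residues, moduli):
            m_i = n // n_i                       # product of the other moduli
            x += r_i * m_i * pow(m_i, -1, n_i)   # m_i is invertible mod n_i
        return x % n

For instance, crt([2, 3, 2], [3, 5, 7]) returns 23, the classical answer to Sun Tzu's puzzle x ≡ 2 (mod 3), x ≡ 3 (mod 5), x ≡ 2 (mod 7).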
The natural generalization of Fermat's idea is the following, which provides the modulus for the RSA enciphering and deciphering exponents, for instance.

Definition C.14 (Euler's φ-Function)
For any n ∈ N the Euler φ-function, also known as Euler's Totient, φ(n) is defined to be the number of m ∈ N such that m < n and gcd(m, n) = 1.


Theorem C.15 (The Arithmetic of the Totient)
If $n = \prod_{j=1}^{k} p_j^{a_j}$, where the p_j are distinct primes, then

$$\phi(n) = \prod_{j=1}^{k} \phi\!\left(p_j^{a_j}\right) = \prod_{j=1}^{k} \left(p_j^{a_j} - p_j^{a_j - 1}\right) = \prod_{j=1}^{k} (p_j - 1)\, p_j^{a_j - 1}.$$

Proof. See [165, Theorem 2.22, p. 65].
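Theorem C.15 turns directly into a computation once the factorization of n is known. A small Python sketch under that assumption (the caller supplies the distinct prime factors and their exponents; the function name is our own):

    def euler_phi(prime_powers):
        # prime_powers: list of pairs (p_j, a_j) with distinct primes p_j.
        # By Theorem C.15, phi(n) = product of (p_j - 1) * p_j**(a_j - 1).
        phi = 1
        for p, a in prime_powers:
            phi *= (p - 1) * p ** (a - 1)
        return phi

For instance, 360 = 2^3 · 3^2 · 5, and euler_phi([(2, 3), (3, 2), (5, 1)]) returns 96.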

Theorem C.16 (Euler's Generalization of Fermat's Little Theorem)
If n ∈ N and m ∈ Z such that gcd(m, n) = 1, then m^{φ(n)} ≡ 1 (mod n).

Proof. See [163, Theorem 2.3.1, p. 90].
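For example, with n = 10 and m = 3 we have φ(10) = 4 and 3^4 = 81 ≡ 1 (mod 10), in accordance with the theorem.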

Example C.17 Let n ∈ N. Then the cardinality of (Z/nZ)* is φ(n). Hence, if G is a subgroup of (Z/nZ)*, then |G| divides φ(n).




The calculus of integer orders and related primitive roots is an underlying fundamental feature of cryptographic problems such as the discrete log problem.

Definition C.18 (Modular Order of an Integer)
Let m ∈ Z, n ∈ N, and gcd(m, n) = 1. The order of m modulo n is the smallest e ∈ N such that m^e ≡ 1 (mod n), denoted by e = ord_n(m), and we say that m belongs to the exponent e modulo n.

Note that the modular order of an integer given in Definition C.18 is the same as the element order in the group (Z/nZ)*.

Proposition C.19 (Divisibility by the Order of an Integer)
If m ∈ Z and d, n ∈ N such that gcd(m, n) = 1, then m^d ≡ 1 (mod n) if and only if ord_n(m) | d. In particular, ord_n(m) | φ(n).

Proof. See [165, Proposition 4.3, p. 161].


Definition C.20 (Primitive Roots)
If m ∈ Z, n ∈ N, and
ord_n(m) = φ(n),
then m is called a primitive root modulo n. In other words, m is a primitive root if it belongs to the exponent φ(n) modulo n.


Theorem C.21 (Primitive Root Theorem)
An integer n > 1 has a primitive root if and only if n = 4 or n is of the form 2^a p^b, where p is an odd prime, 0 ≤ a ≤ 1, and b ≥ 0. Also, if n has a primitive root, then it has φ(φ(n)) of them.

Proof. See [165, Theorem 4.10, p. 165].
The following will be most useful in Chapter 4. The result can be easily proved using Definition C.20 and Theorem C.10.

Proposition C.22 (Primitive Roots and Primality)

(1) If m is a primitive root modulo an odd prime p, then for any prime q dividing (p − 1), m^{(p−1)/q} ≢ 1 (mod p).
(2) If m ∈ N, p is an odd prime with p ∤ m, and m^{(p−1)/q} ≢ 1 (mod p) for all primes q | (p − 1), then m is a primitive root modulo p.
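Proposition C.22 gives a practical primitive-root test once the prime factorization of p − 1 is known. A Python sketch under that assumption (our own code; the caller supplies the distinct prime factors of p − 1):

    def is_primitive_root(m, p, prime_factors_of_p_minus_1):
        # p an odd prime; prime_factors_of_p_minus_1 lists the distinct primes q
        # dividing p - 1.  By Proposition C.22, m is a primitive root mod p
        # exactly when no power m^((p-1)/q) is congruent to 1 (mod p).
        if m % p == 0:
            return False  # p | m, so m cannot be a primitive root
        return all(pow(m, (p - 1) // q, p) != 1 for q in prime_factors_of_p_minus_1)

For instance, with p = 7 and p − 1 = 6 = 2 · 3, is_primitive_root(3, 7, [2, 3]) returns True, while is_primitive_root(2, 7, [2, 3]) returns False (2 has order 3 modulo 7).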

The following generalization of the Legendre symbol will be required for
understanding, for example, the primality test on page 84.




Definition C.23 (The Jacobi Symbol)
Let n > 1 be an odd natural number with $n = \prod_{j=1}^{k} p_j^{e_j}$, where e_j ∈ N and the p_j are distinct primes. Then the Jacobi Symbol of a with respect to n is given by

$$\left(\frac{a}{n}\right) = \prod_{j=1}^{k} \left(\frac{a}{p_j}\right)^{e_j},$$

for any a ∈ Z, where the symbols on the right are Legendre Symbols.


Theorem C.24 (Properties of the Jacobi Symbol)
Let m, n ∈ N, both odd, and a, b ∈ Z. Then

(1) $\left(\frac{ab}{n}\right) = \left(\frac{a}{n}\right)\left(\frac{b}{n}\right)$.

(2) $\left(\frac{a}{n}\right) = \left(\frac{b}{n}\right)$ if a ≡ b (mod n).

(3) $\left(\frac{a}{mn}\right) = \left(\frac{a}{m}\right)\left(\frac{a}{n}\right)$.

(4) $\left(\frac{-1}{n}\right) = (-1)^{(n-1)/2}$.

(5) $\left(\frac{2}{n}\right) = (-1)^{(n^2-1)/8}$.

(6) If gcd(a, n) = 1 where a ∈ N is odd, then
$$\left(\frac{a}{n}\right)\left(\frac{n}{a}\right) = (-1)^{\frac{a-1}{2}\cdot\frac{n-1}{2}},$$
which is the Quadratic Reciprocity Law for the Jacobi Symbol.

Proof. See [165, Theorem 4.40, p. 175].
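Properties (1)–(6) allow the Jacobi symbol to be computed without factoring n, in a manner much like the Euclidean algorithm. A Python sketch of this standard computation (our own code, not taken from the text):

    def jacobi(a, n):
        # Jacobi symbol (a/n) for odd n > 0, computed via properties (1)-(6)
        # of Theorem C.24; returns 1, -1, or 0 (0 exactly when gcd(a, n) > 1).
        assert n > 0 and n % 2 == 1
        a %= n
        result = 1
        while a != 0:
            while a % 2 == 0:            # remove factors of 2 using property (5)
                a //= 2
                if n % 8 in (3, 5):
                    result = -result
            a, n = n, a                  # reciprocity, property (6)
            if a % 4 == 3 and n % 4 == 3:
                result = -result
            a %= n
        return result if n == 1 else 0

For instance, jacobi(2, 15) returns 1 even though 2 is not a square modulo 15; this is one reason the Jacobi symbol by itself does not certify quadratic residuosity for composite moduli.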

Theorem C.25 (Euler's Criterion for Power Residue Congruences)
Let e, c ∈ N with e ≥ 2, let b ∈ Z, let p > 2 be prime with p ∤ b, and set g = gcd(e, φ(p^c)). Then the congruence

$$x^e \equiv b \pmod{p^c} \qquad (C.26)$$

is solvable if and only if

$$b^{\phi(p^c)/g} \equiv 1 \pmod{p^c}.$$

Proof. See [165, Theorem 4.17, p. 168].
When c = 1 and e = 2, so that we are concerned with quadratic residues, this becomes the following.




Corollary C.27 (Euler's Criterion for Quadratic Residues)
Let p be an odd prime. Then

$$c^{(p-1)/2} \equiv \left(\frac{c}{p}\right) \pmod{p}.$$
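For example, with p = 7 the quadratic residues modulo 7 are 1, 2, and 4; accordingly 2^{(7−1)/2} = 8 ≡ 1 (mod 7), so (2/7) = 1, while 3^{(7−1)/2} = 27 ≡ −1 (mod 7), so (3/7) = −1.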


Theorem C.28 (Solutions of Power Residue Congruences)
Let n be an odd natural number with $n = \prod_{j=1}^{r} p_j^{m_j}$ for distinct primes p_j (1 ≤ j ≤ r). Then

$$x^t \equiv 1 \pmod{n}$$

has exactly $g = \prod_{j=1}^{r} \gcd\!\left(t, \phi(p_j^{m_j})\right)$ solutions. Also, if

$$x^t \equiv -1 \pmod{n}$$

has a solution, then it has g solutions.

Proof. See [163, Theorem 3.3.2, p. 155].
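As a small check, take n = 15 = 3 · 5 and t = 2: then g = gcd(2, φ(3)) · gcd(2, φ(5)) = 2 · 2 = 4, and indeed x^2 ≡ 1 (mod 15) has exactly the four solutions x ≡ 1, 4, 11, 14 (mod 15).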
Of course, the gem of number theory will come into play in our cryptographic journey, so we state it here for finger-tip reference.

Theorem C.29 (Prime Number Theorem)
Let the number of primes less than x be denoted by π(x). Then

$$\lim_{x \to \infty} \frac{\pi(x)}{x/\log(x)} = 1,$$

denoted by π(x) ∼ x/log(x).^{C.2}

Proof. See [229].

C.2 In general, if f and g are functions of a real variable x, then f(x) ∼ g(x) means lim_{x→∞} f(x)/g(x) = 1. Such functions are said to be asymptotic.
We now provide a brief reminder of vector space basics and elementary
matrix theory that we will need to describe other concepts to be used in the
text.

◆ Vector Spaces
A vector space consists of an additive abelian group V and a field F together with an operation called scalar multiplication of each element of V by each element of F on the left, such that for each r, s ∈ F and each α, β ∈ V the following conditions are satisfied:

C.30. rα ∈ V.

C.31. r(sα) = (rs)α.

C.32. (r + s)α = (rα) + (sα).

C.33. r(α + β) = (rα) + (rβ).

C.34. 1_F α = α.

The elements of V are called vectors and the elements of F are called scalars. The generally accepted abuse of language is to say that V is a vector space over F. If V_1 is a subset of a vector space V that is a vector space in its own right, then V_1 is called a subspace of V.

Definition C.35 (Bases, Dependence, and Finite Generation)
If S is a subset of a vector space V, then the intersection of all subspaces of V containing S is called the subspace generated by S, or spanned by S. If there is a finite set S, and S generates V, then V is said to be finitely generated. If S = ∅, then S generates the zero vector space. If S = {m}, a singleton set, then the subspace generated by S is said to be the cyclic subspace generated by m.
A subset S of a vector space V is said to be linearly independent provided that for distinct s_1, s_2, . . . , s_n ∈ S and r_j ∈ F for j = 1, 2, . . . , n,

$$\sum_{j=1}^{n} r_j s_j = 0 \quad \text{implies that} \quad r_j = 0 \text{ for } j = 1, 2, \ldots, n.$$

If S is not linearly independent, then it is called linearly dependent. A linearly independent subset of a vector space that spans V is called a basis for V. The number of elements in a basis is called the dimension of V. A hyperplane H is an (n − 1)-dimensional subspace of an n-dimensional vector space V.

Example C.36 For a given prime p and m, n ∈ N, the finite field F_{p^{mn}} is an n-dimensional vector space over F_{p^m}, with p^{mn} elements.



◆ Basic Matrix Theory
If m, n ∈ N, then an m × n matrix (read "m by n matrix") is a rectangular array of entries with m rows and n columns. For simplicity, we will assume that the entries come from a field F. (For instance, in Section 8.1, we will be using F_p to describe a secret sharing scheme based on vectors and matrices.) If A is such a matrix, and a_{i,j} denotes the entry in the ith row and jth column, then

$$A = (a_{i,j}) = \begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & & \vdots \\ a_{m,1} & a_{m,2} & \cdots & a_{m,n} \end{pmatrix}.$$





Two m × n matrices A = (a_{i,j}) and B = (b_{i,j}) are equal if and only if a_{i,j} = b_{i,j} for all i and j. The matrix (a_{j,i}) is called the transpose of A, denoted by

$$A^t = (a_{j,i}).$$

Addition of two m × n matrices A and B is done in the natural way:

$$A + B = (a_{i,j}) + (b_{i,j}) = (a_{i,j} + b_{i,j}),$$

and if r ∈ F, then rA = r(a_{i,j}) = (r a_{i,j}), called scalar multiplication.
Matrix products are defined by the following. If A = (a_{i,j}) is an m × n matrix and B = (b_{j,k}) is an n × r matrix, then the product of A and B is defined as the m × r matrix

$$AB = (a_{i,j})(b_{j,k}) = (c_{i,k}), \quad \text{where} \quad c_{i,k} = \sum_{\ell=1}^{n} a_{i,\ell}\, b_{\ell,k}.$$
Multiplication, if defined, is associative and distributive over addition. If m = n, then the n × n matrix

$$I_n = \begin{pmatrix} 1_F & 0 & \cdots & 0 \\ 0 & 1_F & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1_F \end{pmatrix}$$

is called the n × n identity matrix, where 1_F is the multiplicative identity of F.
Another important aspect of matrices that we will need throughout the text is motivated by the following. Consider the 2 × 2 matrix with entries from F:

$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix};$$

then ad − bc is called the determinant of A, denoted by det(A). More generally, we may define the determinant of any n × n matrix with entries from F for any n ∈ N. The determinant of any r ∈ F is just det(r) = r. Thus, we have the definitions for n = 1, 2, and we may now give the general definition inductively. The determinant of a 3 × 3 matrix

$$A = \begin{pmatrix} a_{1,1} & a_{1,2} & a_{1,3} \\ a_{2,1} & a_{2,2} & a_{2,3} \\ a_{3,1} & a_{3,2} & a_{3,3} \end{pmatrix}$$

is defined in terms of determinants of 2 × 2 matrices, namely, det(A) is given by

$$a_{1,1}\det\begin{pmatrix} a_{2,2} & a_{2,3} \\ a_{3,2} & a_{3,3} \end{pmatrix} - a_{1,2}\det\begin{pmatrix} a_{2,1} & a_{2,3} \\ a_{3,1} & a_{3,3} \end{pmatrix} + a_{1,3}\det\begin{pmatrix} a_{2,1} & a_{2,2} \\ a_{3,1} & a_{3,2} \end{pmatrix}.$$



Therefore, we may inductively define the determinant of any n × n matrix in this fashion. Assume that we have defined the determinant of an n × n matrix. Then we define the determinant of an (n + 1) × (n + 1) matrix A = (a_{i,j}) as follows. First, we let A_{i,j} denote the n × n matrix obtained from A by deleting the ith row and jth column. Then we define the minor of A at position (i, j) to be det(A_{i,j}). The cofactor of A_{i,j} is defined to be

$$\mathrm{cof}(A_{i,j}) = (-1)^{i+j} \det(A_{i,j}).$$

We may now define the determinant of A by

$$\det(A) = a_{i,1}\,\mathrm{cof}(A_{i,1}) + a_{i,2}\,\mathrm{cof}(A_{i,2}) + \cdots + a_{i,n+1}\,\mathrm{cof}(A_{i,n+1}). \qquad (C.37)$$

This is called the expansion of a determinant by cofactors along the ith row of A. Similarly, we may expand along a column of A:

$$\det(A) = a_{1,j}\,\mathrm{cof}(A_{1,j}) + a_{2,j}\,\mathrm{cof}(A_{2,j}) + \cdots + a_{n+1,j}\,\mathrm{cof}(A_{n+1,j}),$$

called the cofactor expansion along the jth column of A. Both expansions can be shown to give the same value. Hence, a determinant may be viewed as a function that assigns an element of F to an n × n matrix, and the above gives a method for finding that element.
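The inductive definition translates directly into a recursive procedure (faithful to the definition, though far from the most efficient method). A Python sketch (our own code) for square matrices given as lists of rows, expanding along the first row:

    def det(A):
        # Determinant of a square matrix A by cofactor expansion along the
        # first row, exactly as in the inductive definition above.
        n = len(A)
        if n == 1:
            return A[0][0]
        total = 0
        for j in range(n):
            # Minor: delete row 0 and column j; the cofactor sign is (-1)^j.
            minor = [row[:j] + row[j + 1:] for row in A[1:]]
            total += (-1) ** j * A[0][j] * det(minor)
        return total

For instance, det([[1, 2], [3, 4]]) returns -2.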
If A is an n × n matrix with entries from F, then A is said to be invertible, or nonsingular, if there is a unique matrix, denoted by A^{−1}, such that

$$AA^{-1} = I_n = A^{-1}A.$$


Another important fact, which we will need for instance in Chapter 8, is a result that follows from cofactor expansions.

Theorem C.38 (Cramer's Rule)
Let A = (a_{i,j}) be the coefficient matrix of the following system of n linear equations in n unknowns:

a_{1,1}x_1 + a_{1,2}x_2 + · · · + a_{1,n}x_n = b_1
a_{2,1}x_1 + a_{2,2}x_2 + · · · + a_{2,n}x_n = b_2
⋮
a_{n,1}x_1 + a_{n,2}x_2 + · · · + a_{n,n}x_n = b_n,

over a field F. If det(A) ≠ 0, then the system has a solution given by

$$x_j = \frac{1}{\det(A)} \sum_{i=1}^{n} (-1)^{i+j}\, b_i \det(A_{i,j}) \qquad (1 \le j \le n).$$
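For example, over Q the system x_1 + 2x_2 = 5, 3x_1 + 4x_2 = 6 has det(A) = 1 · 4 − 2 · 3 = −2, and Cramer's Rule gives x_1 = (5 · 4 − 2 · 6)/(−2) = −4 and x_2 = (1 · 6 − 3 · 5)/(−2) = 9/2, which indeed satisfy both equations.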





Of particular importance is a special matrix called the Vandermonde matrix of order t > 1, which is given as follows:

$$A = \begin{pmatrix} 1 & x_1 & \cdots & x_1^{t-1} \\ 1 & x_2 & \cdots & x_2^{t-1} \\ \vdots & \vdots & & \vdots \\ 1 & x_t & \cdots & x_t^{t-1} \end{pmatrix},$$

where

$$\det(A) = \prod_{1 \le i < k \le t} (x_k - x_i)$$

(see Exercise 8.17 on page 159).
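For instance, with t = 3 and (x_1, x_2, x_3) = (1, 2, 3), cofactor expansion along the first row gives det(A) = 1·(2·9 − 4·3) − 1·(1·9 − 4·1) + 1·(1·3 − 2·1) = 6 − 5 + 1 = 2, which agrees with the product (x_2 − x_1)(x_3 − x_1)(x_3 − x_2) = 1 · 2 · 1 = 2.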
A notion that uses vector spaces and matrix theory is the following, which
we will use in Chapters 5 and 8, for instance.

◆ Gaussian Elimination
The term Gaussian elimination refers to an efficient algorithm for finding linear dependency relations among vectors in a vector space over a suitable field. Suppose that we have vectors v_j = (v_{1,j}, v_{2,j}, . . . , v_{n,j}) for j = 1, 2, . . . , m over a field F. What we seek are field elements c_1, c_2, . . . , c_m ∈ F, not all zero, such that

$$\sum_{j=1}^{m} c_j v_j = \mathbf{0}, \qquad (C.39)$$

where c_j v_j = (c_j v_{1,j}, c_j v_{2,j}, . . . , c_j v_{n,j}) and $\mathbf{0}$ = (0, 0, . . . , 0) is the zero vector of length n. The relation given in (C.39), in which not all coefficients are 0, is a linear dependency relation (see Definition C.35 on page 219). Gaussian elimination uses the basic notions of linear algebra to define matrices with the vectors v_j as columns, then performs elementary row operations to put them into a form from which the dependency relations, if there are any, can be read off. The basic point from elementary linear algebra is that if the number of vectors is greater than the dimension of the vector space over the field, then there must be a dependency relation: if m > n in (C.39), then coefficients c_j, not all zero, must exist. This idea will be used in the text, especially for factoring techniques in Chapter 5. A small sketch of the computation over a prime field follows below.
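In the factoring applications of Chapter 5 the field is typically F_2 or another F_p, and the task is exactly the one just described: given more vectors than coordinates, find a nontrivial dependency. A Python sketch of Gaussian elimination modulo a prime p (our own code, assuming Python 3.8+ for the modular inverse); it returns one dependency c_1, . . . , c_m, or None if the given vectors are independent:

    def find_dependency(vectors, p):
        # vectors: m vectors, each a list of n entries, over the field F_p (p prime).
        # Returns [c_1, ..., c_m], not all zero, with sum c_j*vectors[j] = 0 (mod p),
        # or None if the vectors are linearly independent.
        m = len(vectors)
        rows = [[x % p for x in v] for v in vectors]
        coeffs = [[int(i == j) for j in range(m)] for i in range(m)]  # track row operations
        pivot_row_for_col = {}
        for i in range(m):
            # Eliminate the entries of row i in all previously found pivot columns.
            for col, r in pivot_row_for_col.items():
                f = rows[i][col]
                if f:
                    rows[i] = [(a - f * b) % p for a, b in zip(rows[i], rows[r])]
                    coeffs[i] = [(a - f * b) % p for a, b in zip(coeffs[i], coeffs[r])]
            if any(rows[i]):
                # Row i is independent of the earlier rows: normalize its leading
                # entry to 1 and record it as the pivot row for that column.
                col = next(k for k, a in enumerate(rows[i]) if a)
                inv = pow(rows[i][col], -1, p)
                rows[i] = [a * inv % p for a in rows[i]]
                coeffs[i] = [a * inv % p for a in coeffs[i]]
                pivot_row_for_col[col] = i
            else:
                return coeffs[i]  # row i reduced to zero: a dependency among the inputs
        return None

With more vectors than coordinates a dependency always exists; for example, find_dependency([[1, 0], [0, 1], [1, 1]], 2) returns [1, 1, 1], since the three vectors sum to the zero vector over F_2.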
The following will be needed in our discussions on secret sharing in Section
8.1, for instance.

Theorem C.40 (The Lagrange Interpolation Formula)
Let F be a field, and let a_j for j = 0, 1, 2, . . . , n be distinct elements of F. If c_j for j = 0, 1, 2, . . . , n are any elements of F, then

$$f(x) = \sum_{j=0}^{n} c_j\, \frac{(x - a_0)\cdots(x - a_{j-1})(x - a_{j+1})\cdots(x - a_n)}{(a_j - a_0)\cdots(a_j - a_{j-1})(a_j - a_{j+1})\cdots(a_j - a_n)}$$

is the unique polynomial in F[x] of degree at most n such that f(a_j) = c_j for all j = 0, 1, . . . , n.
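In the secret-sharing application of Section 8.1 the field is F_p and one typically needs only the single value f(0). A Python sketch of the interpolation formula evaluated at one point over F_p (our own code, assuming Python 3.8+ for the modular inverse):

    def lagrange_interpolate(x, points, p):
        # points: list of pairs (a_j, c_j) with distinct a_j; arithmetic mod the prime p.
        # Returns f(x) mod p, where f is the unique polynomial of degree less than
        # len(points) passing through the given points (Theorem C.40).
        total = 0
        for j, (a_j, c_j) in enumerate(points):
            num, den = 1, 1
            for i, (a_i, _) in enumerate(points):
                if i != j:
                    num = num * (x - a_i) % p
                    den = den * (a_j - a_i) % p
            total = (total + c_j * num * pow(den, -1, p)) % p
        return total

For instance, the line f(x) = 5 + 3x over F_17 passes through (1, 8) and (2, 11), and lagrange_interpolate(0, [(1, 8), (2, 11)], 17) recovers the constant term 5.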

We will need to address the notion of convergents of continued fractions, for
example in Section 6.2, so we present some basics here.

◆ Continued Fractions

Definition C.41 (Continued Fractions)
If q_j ∈ R where j ∈ Z is nonnegative, and q_j ∈ R^+ for j > 0, then an expression of the form

$$\alpha = q_0 + \cfrac{1}{q_1 + \cfrac{1}{q_2 + \cfrac{1}{\ddots\; + \cfrac{1}{q_k + \cfrac{1}{q_{k+1} + \ddots}}}}}$$

is called a continued fraction. If q_k ∈ Z for all k ≥ 0, then it is called a simple continued fraction,^{C.3} denoted by ⟨q_0; q_1, . . . , q_k, q_{k+1}, . . .⟩. If there exists a nonnegative integer n such that q_k = 0 for all k ≥ n, then the continued fraction is called finite. If no such n exists, then it is called infinite.


Definition C.42 (Convergents) Let n ∈ N and let α have continued fraction expansion ⟨q_0; q_1, . . . , q_n, . . .⟩ for q_j ∈ R^+ when j > 0. Then

$$C_k = \langle q_0; q_1, \ldots, q_k \rangle$$

is the kth convergent of α for any nonnegative integer k.


Theorem C.43 (Finite Simple Continued Fractions Are Rational)
Let α ∈ R. Then α ∈ Q if and only if α can be written as a finite simple continued fraction.
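For the applications in Section 6.2 one needs the convergents C_k = A_k/B_k explicitly. They obey the standard recursion A_k = q_k A_{k−1} + A_{k−2}, B_k = q_k B_{k−1} + B_{k−2} (with A_{−1} = 1, A_{−2} = 0, B_{−1} = 0, B_{−2} = 1), a fact not proved in this appendix but used freely. A Python sketch (our own code) for a finite list of integer partial quotients:

    from fractions import Fraction

    def convergents(partial_quotients):
        # Given q_0, q_1, ..., q_n, return the convergents C_0, C_1, ..., C_n
        # as exact fractions, using A_k = q_k*A_{k-1} + A_{k-2} and
        # B_k = q_k*B_{k-1} + B_{k-2}.
        A_prev2, A_prev1 = 0, 1   # A_{-2}, A_{-1}
        B_prev2, B_prev1 = 1, 0   # B_{-2}, B_{-1}
        result = []
        for q in partial_quotients:
            A = q * A_prev1 + A_prev2
            B = q * B_prev1 + B_prev2
            result.append(Fraction(A, B))
            A_prev2, A_prev1 = A_prev1, A
            B_prev2, B_prev1 = B_prev1, B
        return result

For instance, convergents([3, 7, 15, 1]) gives [3, 22/7, 333/106, 355/113], the familiar rational approximations to π.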




C.3 Note that the classical definition of a simple continued fraction is a continued fraction that arises from the reciprocals as in the Euclidean Algorithm, so the "numerators" are all 1 and the denominators are all integers. This is to distinguish it from more general continued fractions in which the numerators and denominators can be functions of a complex variable, for instance. Simple continued fractions are also called regular continued fractions in the literature.


