

a. Slightly modify the protocol to make it resistant to an offline PIN-
cracking attack. Note that Alice and Bob only share a PIN and no
public/private key pairs are available. Hint: Use Diffie-Hellman,
while preventing a man-in-the-middle attack.
b. Why are the identifiers "Bob" and "Alice" necessary in the second
and third messages, respectively?

6. For S3, the symmetric group on three elements, the underlying set con-
sists of the permutations on three elements. Show that S3 has the
following finite presentation:

S3 = ⟨x, y | x³, y², (xy)²⟩.
7. Show that S3 (see Problem 6) can be interpreted as the set of rigid
motions of an equilateral triangle in 3-space.
6.10 PROBLEMS 311

8. Is the Arithmetica key exchange susceptible to a man-in-the-middle
attack? Why or why not?

9. Use the Euclidean Algorithm to find the greatest common divisor of
12,345 and 67,890.
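As a check on Problem 9, the Euclidean Algorithm is short enough to sketch in a few lines of Python (the function name is mine):

```python
def gcd(a, b):
    # Euclidean Algorithm: repeatedly replace (a, b) by (b, a mod b);
    # the last nonzero remainder is the greatest common divisor.
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(12345, 67890))
```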

10. Suppose 1500 soldiers arrive in training camp. A few soldiers desert the
camp. The drill sergeants divide the remaining soldiers into groups of
five and discover that there is one left over. When they divide them
into groups of seven, there are three left over, and when they divide
them into groups of eleven, there are again three left over. Determine
the number of deserters.
11. In the RSA cryptosystem, it is possible that M = C, that is, the plain-
text and the ciphertext are identical. For modulus N = 3127 and
encryption exponent e = 17, find a message M that encrypts to itself.
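For a modulus as small as N = 3127, such messages can be found by exhaustive search; a possible sketch (not the intended pencil-and-paper solution):

```python
N, e = 3127, 17

# Find all nontrivial messages with M^e = M (mod N); M = 0 and M = 1
# are trivial fixed points, so the search starts at 2.
fixed = [M for M in range(2, N) if pow(M, e, N) == M]
print(len(fixed), fixed[:3])
```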

12. Suppose that Bob uses the following variant of RSA: He chooses N
and two encryption exponents e1 and e2. He asks Alice to encrypt
her message M to him by first computing C1 = M^e1 (mod N), then
encrypting C1 to get C2 = C1^e2 (mod N). Alice then sends C2 to Bob.
Does this double encryption increase the security over single encryption?
Why or why not?
13. Alice uses RSA to receive a single ciphertext C (encrypted using her
public key), corresponding to plaintext M from Bob. To tease her
nemesis, Alice challenges Trudy to recover M. Alice sends C to Trudy
and agrees to decrypt one ciphertext from Trudy, as long as it is not C,
and return the result. Is it possible for Trudy to recover M?

14. Suppose N = pq with p and q prime. Let e be the corresponding RSA
public encryption exponent and d the private decryption exponent. Use
the fact that

(M^e)^d = M (mod p) and (M^e)^d = M (mod q)

to show that

(M^e)^d = M (mod N).

15. Let (e1, N) and (e2, N) be the RSA public keys of Alice and Bob, re-
spectively. Using the attack discussed in Section 6.5.1 beginning on
page 287, write a computer program to show that Alice can read en-
crypted messages which are sent to Bob (and vice-versa).
16. Construct a specific example that illustrates the cube root attack dis-
cussed in Section 6.5.1. Does this mean that a small encryption expo-
nent e should never be used in RSA? Why or why not?

17. Let p be prime. Suppose that a and b are integers with ab = 0 (mod p).
Show that either a = 0 (mod p) or b = 0 (mod p).

18. Let p = 3 (mod 4) be prime. Show that x² = -1 (mod p) has no
solutions. Hint: Suppose that x exists. Raise both sides to the power
(p - 1)/2 and use Euler's Theorem.

19. Let p = 3 (mod 4) be prime. Using Problem 18, show that y and -y
cannot both be squares modulo p. Hint: Assume that y = a² (mod p)
and that -y = b² (mod p) and reach a contradiction.

20. For the Rabin cryptosystem, write a computer program to calculate the
square roots of C, modulo a prime p. Assume that p = 3 (mod 4).
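Since the problem assumes p = 3 (mod 4), a square root of C can be computed directly as C^((p+1)/4) (mod p); a possible sketch:

```python
def rabin_sqrt(C, p):
    # For p = 3 (mod 4) and C a quadratic residue mod p,
    # r = C^((p+1)/4) satisfies r^2 = C^((p+1)/2) = C * C^((p-1)/2) = C (mod p),
    # since C^((p-1)/2) = 1 for a quadratic residue (Euler's criterion).
    # The two square roots are r and p - r.
    assert p % 4 == 3
    r = pow(C, (p + 1) // 4, p)
    return r, p - r

print(rabin_sqrt(3, 23))  # → (16, 7), and 16² = 7² = 3 (mod 23)
```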

21. Let N = pq, where p and q are distinct primes. Give an example to
show that if p divides C and q does not divide C, then there are two
(and not four) distinct square roots of C in ZN.

22. Set up a Rabin encryption scheme for Alice, by generating a public-
private key pair. Playing the role of Bob, choose a suitable plaintext
and encrypt it. Now, playing the role of Alice, decrypt the ciphertext.

23. Construct an example that illustrates a chosen-ciphertext attack on the
Rabin cryptosystem, as discussed in Section 6.6.1.

24. In the NTRU cipher, the encryption and decryption processes use the
associative, commutative, and distributive properties of "+" and "*".
Verify that these properties are valid in R.

25. For NTRU, write a computer program to perform the operations of
addition and multiplication, mod (X^N - 1), in R. Use your program to
verify the first example given in Section 6.7.

26. In NTRU, suppose that (f, g) is Alice's secret key which she uses for
decryption. Show that Alice can also use (f/x², g/x²) to decrypt mes-
sages.

27. In the NTRU cipher, Alice must keep both f and g secret. Suppose
that Trudy discovers g. How can she use this knowledge to decrypt
messages sent to Alice?

28. Consider the multiple transmission attack against NTRU discussed in
Section 6.7.2. Suppose that each ri is a polynomial of degree four or
less and

r1(x) - r0(x) = -1 - 2x + 2x² + x⁴

r2(x) - r0(x) = -2 + x + 2x² + x³ + x⁴.

a. Determine as many coefficients of r0(x) as you can.
b. Give a general procedure for determining the coefficients of r0(x)
from a set of polynomials of the form ri(x) - r0(x).

29. Suppose that the NTRU parameters are N = 11, q = 32, p = 3,
and the sets of polynomials are given by Lf = L(4,3), Lg = L(3,3),
and Lr = L(3,3). Suppose Alice chooses

f(x) = -1 + x + x² - x⁴ + x⁶ + x⁹ - x¹⁰ ∈ Lf

and

g(x) = -1 + x² + x³ + x⁵ - x⁸ - x¹⁰ ∈ Lg.

Then

fq(x) = 5 + 9x + 6x² + 16x³ + 4x⁴ + 15x⁵ + 16x⁶ + 22x⁷ + 20x⁸ + 18x⁹ + 30x¹⁰.

Recall that NTRU decryption does not always succeed.

a. Use the algorithm in Table 6.3 to find f p ( x ) . Give Alice's public
and private keys.
b. Give an example of a message M0(x) ∈ Lm and the corresponding
ciphertext C0(x) obtained by encrypting M0(x) with Alice's public
key, such that C0(x) decrypts to M0(x) using Alice's private key.
c. If possible, give a message M1(x) ∈ Lm and the corresponding
ciphertext C1(x) obtained by encrypting M1(x) with Alice's pub-
lic key, such that C1(x) does not decrypt to M1(x) using Alice's
private key.
30. Write a computer program to implement the meet-in-the-middle attack
on the NTRU cipher. To verify that your program is working, it should
output the recovered key as well as the approximate number of opera-
tions performed and computation time.

31. Set up an ElGamal signature scheme for Alice.
a. Assume that Alice pre-processes her messages with a hash function
before signing them. Generate an ElGamal signed message from
Alice. Playing the role of Bob, verify the signed message.

b. Now, assume that Alice does not pre-process her messages before
signing them. Playing the role of the attacker Trudy, forge a signed
message from Alice.

32. If Trudy can guess the session key k in the ElGamal signature scheme,
she can recover Alice's secret key a. Construct a specific example that
illustrates this implementation attack, as discussed in Section 6.8.1.

33. In the ElGamal signature scheme, a different session key must be used
for each signed message. If k is used to sign multiple messages, then an
implementation attack can be initiated, as described in Section 6.8.1.
Construct an example to illustrate this attack.
Chapter 7

Public Key Attacks

There is always more spirit in attack than in defence.
- Titus Livius

7.1 Introduction
In this chapter, we cover some attacks on public key systems in detail. The
most widely used public key cryptosystems rely on the difficulty of factoring
(RSA) and the discrete log problem (Diffie-Hellman and ElGamal). So we
first discuss factoring algorithms and algorithms for solving the discrete log
problem. These represent fundamental attacks on the underpinnings of the
most widely used public key systems. These attacks are roughly the public
key equivalents of an exhaustive key search attack on a symmetric cipher.
Then we present a fascinating set of attacks on RSA that are, in a sense,
the polar opposite of factoring, since these do not directly attack the RSA
algorithm. Instead, these attacks take advantage of implementation issues
that, under some circumstances, allow an attacker to recover the private key
without breaking the RSA algorithm per se. First, we discuss three different
timing attacks on RSA. These attacks are examples of side-channel attacks,
where an attacker gains information about an underlying computation which,
in turn, leaks information about the key. Such attacks have dramatically
changed the nature of cryptanalysis and the development of cryptography in
recent years.
We also discuss a devastating glitching attack on RSA, where a single
induced error can enable an attacker to recover the private key. This at-
tack, which is an example of a fault induction attack [107], is amazing and
amazingly simple.
Implementation attacks have more than proven their value in the crypt-
analysis of smartcards. In fact, in any scenario where the attacker has physical


access to the crypto device holding the key, such attacks are a serious threat.
Consequently, these attacks are certain to play a role in the emerging field of
trusted computing [4, 48]. A practical timing attack has recently been devel-
oped which can be used to recover an RSA private key from a Web server [22].
This particular attack is covered in Section 7.4.

7.2 Factoring Algorithms

The obvious mathematical breakthrough
would be development of an easy way to factor large prime numbers.
- Bill Gates [56]

The RSA public key cryptosystem is the “gold standard” by which all other
public key systems are measured. The security of RSA is thought to rest
squarely on the difficulty of factoring large integers. More precisely, given N ,
where N = pq, with p and q prime, if we can determine p or q, then we can
break RSA. Consequently, a tremendous amount of effort has been devoted
to developing efficient factoring methods.
In this section we consider several integer factorization methods. First,
we briefly discuss the obvious approach, that is, trial division by numbers up
to √N. Then we present Dixon's Algorithm, followed by the quadratic sieve,
which is a refinement of Dixon's Algorithm. Like trial division, these algo-
rithms are guaranteed to find the factors of N, provided enough computing
power is available. The quadratic sieve is the best available factoring algo-
rithm for numbers having about 110 decimal digits or less, and it has been
used to successfully factor numbers with about 130 decimal digits. For inte-
gers with more than about 110 decimal digits, the number field sieve reigns
supreme, and we briefly mention this more complex factoring method at the
end of this section.

7.2.1 Trial Division
Given a composite integer N, one obvious way to factor it is to simply try to
divide it by each of the numbers

2, 3, 5, 7, 9, 11, ..., ⌊√N⌋.

Any of these that divides N is a factor. The work required is on the order
of √N/2.

¹In contrast to RSA, the security of the Rabin cryptosystem is easily proven to be
equivalent to factoring.

We can improve on this simple method by only testing the prime numbers
up to √N instead of testing all odd integers. In this case, the work factor is
on the order of π(√N), where π(x) is the function that counts the number
of primes less than or equal to x, assuming we can efficiently generate the
required primes. For large N, the approximation π(N) ≈ N/ln(N) holds.
Consequently, the work to factor N by trial division is, at best, on the order
of √N/ln(√N).
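A direct rendering of trial division (testing 2, then the odd numbers up to ⌊√N⌋) as a Python sketch:

```python
import math

def trial_division(N):
    # Return the smallest nontrivial factor of N, or None if N is prime.
    if N % 2 == 0:
        return 2
    for d in range(3, math.isqrt(N) + 1, 2):
        if N % d == 0:
            return d
    return None

print(trial_division(3127))  # → 53, since 3127 = 53 * 59
```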

7.2.2 Dixon's Algorithm
Suppose we want to factor the integer N. If we find integers x and y such
that N = x² - y², then N = (x - y)(x + y) and we have found factors of N.
More generally, suppose we can find x and y such that x² - y² is a multiple
of N, that is,
x² = y² (mod N). (7.1)
Then x² - y² = kN for some k ≠ 0, so that (x - y)(x + y) = kN, which
implies that N divides the product (x - y)(x + y). If we are unlucky, we could
have x - y = k and x + y = N (or vice versa), but with a probability of at
least 1/2 we can obtain a factor of N [40]. If this is the case, then gcd(N, x - y)
and gcd(N, x + y) reveal factors of N.
For example, since 100 = 9 (mod 91), we have 10² = 3² (mod 91) and

(10 - 3)(10 + 3) = (7)(13) = 0 (mod 91).

Since 91 = 7 · 13, we have obtained the factors of 91. However, in general
we must compute gcd(x - y, N) or gcd(x + y, N) to obtain a factor of N.
To see that the gcd is necessary, consider 34² = 8² (mod 91). In this ex-
ample, we have 26 · 42 = 0 (mod 91) and the factors of 91 are found by
computing gcd(26,91) = 13 and gcd(42,91) = 7.
Since the gcd is easy to compute (using the Euclidean Algorithm), we can
factor N provided we can find x and y satisfying (7.1). But finding such x
and y is difficult. We can relax these conditions somewhat, thereby making
it easier to find the required pair of values. For example,

41² = 32 (mod 1649) and 43² = 200 (mod 1649) (7.2)

and neither 32 nor 200 is a square. However, if we multiply these two equa-
tions, we find
41² · 43² = 32 · 200 = 6400 (mod 1649), (7.3)
which yields
(41 · 43)² = 80² (mod 1649),
and we have obtained the much-coveted congruence of squares. We have
that 41 · 43 = 114 (mod 1649) and 114 - 80 = 34. In this example, we find

a factor of 1649 from the gcd computation gcd(34,1649) = 17. It is easily
verified that 1649 = 17 · 97.
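The arithmetic of this example is easily verified; a possible check in Python:

```python
import math

N = 1649
# 41^2 = 32 and 43^2 = 200 (mod N), and 32 * 200 = 6400 = 80^2
assert pow(41, 2, N) == 32 and pow(43, 2, N) == 200
x = 41 * 43 % N  # x = 114
y = 80
assert (x * x - y * y) % N == 0  # the congruence of squares
print(math.gcd(x - y, N))        # → 17, and 1649 = 17 * 97
```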
In (7.3), we combined two nonsquares to obtain a square. To see why this
works in this particular case, note that

32 = 2⁵ and 200 = 2³ · 5²,

so that the product is given by

32 · 200 = 2⁸ · 5² = (2⁴ · 5¹)² = 80².

This example illustrates that we can obtain a perfect square from non-square
congruence relations such as those in (7.2). To find these perfect squares,
we only need to concern ourselves with the powers in the prime factorization
of the relations under consideration. Also, by properties of exponentiation,
the corresponding powers add when we multiply terms. Furthermore, any
time we multiply terms and all of the resulting powers are even, we obtain a
perfect square. Consequently, we only need to be concerned with whether the
powers are even or odd, that is, we only need to know the powers modulo 2.
We can associate each number with a vector of the powers in its prime
factorization, and we obtain a perfect square by multiplying corresponding
terms whenever the sum of these vectors contains only even numbers. In the
example above we have

32 → [1, 0] (mod 2),

where the numbers in the vector represent the powers of the prime factors 2
and 5, respectively, in the prime decomposition of 32. Similarly,

200 → [1, 0] (mod 2).

The product therefore satisfies

32 · 200 → [0, 0] (mod 2).
Since the powers are all even, we know that 32 · 200 is a perfect square
modulo 1649. Furthermore, from (7.2), we know that this square is equal
to (41 · 43)² (mod 1649), giving us the desired congruence of squares. While
the actual powers are required to determine the value that is squared (80 in
this example), to determine whether or not we have a congruence of squares,
we only require the mod 2 vectors of powers.
We can multiply any number of relations to obtain a perfect square. Also,
the number of distinct primes in the factorizations of the numbers under

consideration determines the size of the vectors. Since we ultimately want
to factor large numbers, it is imperative that we keep the vectors small.
Consequently, we will choose a bound B and a set of primes, where each
prime is less than B. This set of primes is our factor base. While all primes
in the factor base are less than B, generally not every such prime will be
included in the factor base (see Problem 7).
A number that factors completely over the given factor base is said to
be B-smooth. Smooth relations (or, more precisely, B-smooth relations) are
those relations that factor completely over the factor base. By restricting our
attention to B-smooth relations, we restrict the size of the vectors of powers.
The fewer elements in the factor base, the smaller the vectors that we need
to deal with (which is good), but the harder it is to find B-smooth values
(which is bad).
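Testing whether a value is B-smooth over a given factor base is just trial division by the base primes; a possible sketch (the function name is mine):

```python
def smooth_vector(y, factor_base):
    # Return the mod 2 exponent vector of y over [-1] + factor_base,
    # or None if y does not factor completely over the base.
    vec = [0] * (len(factor_base) + 1)
    if y < 0:
        vec[0] = 1  # the sign "bit"
        y = -y
    for i, p in enumerate(factor_base):
        while y % p == 0:
            y //= p
            vec[i + 1] ^= 1  # track the exponent of p modulo 2
    return vec if y == 1 else None

print(smooth_vector(-65, [2, 3, 5, 7, 11, 13]))  # → [1, 0, 0, 1, 0, 0, 1]
```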
An example should illustrate the idea. Suppose we want to factor the
integer N = 1829 and we choose B = 13, as in the example given in [92].
Since we want numbers with small factors, it is advantageous to deal with
modular numbers between - N/2 and N/2, instead of in the range 0 to N - 1.
This creates a slight complication, since we must include -1 in our factor base,
but this is easily managed. In this example, we take

-1, 2, 3, 5, 7, 11, 13

as our factor base.
Now we could simply select a random r and check whether r² (mod N)
is B-smooth, repeating this until we have obtained a sufficient number of
B-smooth values. Instead, we use a slightly more systematic approach. We
select the values ⌊√(kN)⌋ and ⌊√(kN)⌋ + 1 for k = 1, 2, 3, 4, and test whether the
square of each, modulo 1829, is B-smooth. For this example we obtain

42² = 1764 = -65 = -1 · 5 · 13 (mod 1829)
43² = 20 = 2² · 5 (mod 1829)
60² = 1771 = -58 = -1 · 2 · 29 (mod 1829)
61² = 63 = 3² · 7 (mod 1829)
74² = 1818 = -11 = -1 · 11 (mod 1829)
75² = 138 = 2 · 3 · 23 (mod 1829)
85² = 1738 = -91 = -1 · 7 · 13 (mod 1829)
86² = 80 = 2⁴ · 5 (mod 1829).

All of these values are B-smooth except for 60² and 75², giving us six useful
relations.
For each of the six B-smooth values we obtain a mod 2 vector of length
seven, where the first entry represents the sign bit (1 represents "-", while 0
represents "+") and the remaining positions correspond to the factor base
elements 2, 3, 5, 7, 11, and 13, respectively. In this case, we have

42² = -65 → [1 0 0 1 0 0 1], 43² = 20 → [0 0 0 1 0 0 0], 61² = 63 → [0 0 0 0 1 0 0] (7.4)

74² = -11 → [1 0 0 0 0 1 0], 85² = -91 → [1 0 0 0 1 0 1], 86² = 80 → [0 0 0 1 0 0 0]. (7.5)

Any combination of these vectors that sums to the zero vector, modulo 2, will
yield the desired modular squares and, with high probability, a factor of N.
Note that the sign bits must also sum to 0 modulo 2, since an even number
of sign bits implies that the product is positive. In this example, we can sum
the vectors corresponding to 42², 43², 61², and 85², modulo 2, to obtain the
zero vector:

[1 0 0 1 0 0 1] ⊕ [0 0 0 1 0 0 0] ⊕ [0 0 0 0 1 0 0] ⊕ [1 0 0 0 1 0 1] = [0 0 0 0 0 0 0].

This yields the congruence

42² · 43² · 61² · 85² = (-65) · 20 · 63 · (-91)
= (-1 · 5 · 13) · (2² · 5) · (3² · 7) · (-1 · 7 · 13)
= 2² · 3² · 5² · 7² · 13² (mod 1829),

which can be rewritten as

(42 · 43 · 61 · 85)² = (2 · 3 · 5 · 7 · 13)² (mod 1829).

This simplifies to
1459² = 901² (mod 1829)

and, since 1459 - 901 = 558, we determine the factor 31 of 1829 via the
calculation of gcd(558,1829) = 31, which can be computed efficiently using
the Euclidean Algorithm. It is easily verified that 1829 = 59 · 31.
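The entire chain of computations above can be checked in a few lines:

```python
import math

N = 1829
x = 42 * 43 * 61 * 85 % N   # → 1459
y = 2 * 3 * 5 * 7 * 13 % N  # → 901
assert (x * x - y * y) % N == 0   # the congruence of squares mod N
factor = math.gcd(x - y, N)
print(factor, N // factor)        # → 31 59
```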
This example raises a few questions. For example, can we be sure of
obtaining a solution? And, if so, how many relations are required before we
can be certain of obtaining a solution? These questions can be answered
positively with some basic linear algebra.
While in the example above we found a solution by simply “eyeballing”
the vectors in (7.4) and (7.5), a more systematic approach is possible. Let M
be the matrix with columns given by the vectors in (7.4) and (7.5). Then
a solution is given by any vector x satisfying Mx = 0 (mod 2), that is,
the 1s in x determine which of the vectors in (7.4) and (7.5) to sum to obtain
a mod 2 sum equal to the 0 vector. In linear algebra terms, we seek a
linear combination of the columns of M that sum to 0, that is, we seek a
linearly dependent set of columns. In the example above, we want to find a
solution (x0, x1, ..., x5) to the matrix equation

[ 1 0 0 1 1 0 ] [x0]   [0]
[ 0 0 0 0 0 0 ] [x1]   [0]
[ 0 0 0 0 0 0 ] [x2]   [0]
[ 1 1 0 0 0 1 ] [x3] = [0]  (mod 2).
[ 0 0 1 0 1 0 ] [x4]   [0]
[ 0 0 0 1 0 0 ] [x5]   [0]
[ 1 0 0 0 1 0 ]        [0]

In general, if n is the number of elements in the factor base (including -1),
then n is also the number of elements in each column vector, and, therefore,
the matrix M has n rows. It is a theorem from linear algebra that if there are
at least n + 1 columns in M, then we can find a linearly dependent collection
of the columns. Furthermore, this can be done efficiently using standard
techniques from linear algebra. This implies that with n + 1 or more
B-smooth relations, we will obtain a congruence of squares and thereby, with
a high probability, a factor of N. Note that in the example above, n = 7
and we only had six relations. In this case, we were lucky since we did find a
solution, although we lacked a sufficient number of relations to be guaranteed
of doing so.
In effect, we have reduced the problem of factoring to the problem of
finding a sufficient number of B-smooth relations and then solving a system
of linear equations, modulo 2. Dixon's Algorithm, as given in Table 7.1, uses
a very simple approach to find the necessary B-smooth relations, while the

Table 7.1: Outline of Dixon's Algorithm

// Given integer N, find a nontrivial factor
Select B and factor base of primes less than B
n = number of elements in factor base (including -1)
// Find relations
m = 0
while m ≤ n
    y = r² (mod N) // r can be selected at random
    if y factors completely over the factor base then
        Save mod 2 exponent vector of y
        Save r² and y
        m = m + 1
    end if
end while
// Solve the linear system
M = matrix of mod 2 exponent vectors
Solve Mx = 0 (mod 2) for vector x = (x0, x1, ..., xn)
I = { i | xi = 1 }
We have a congruence of squares: ∏(i∈I) ri² = ∏(i∈I) yi (mod N)
Compute required gcd and check for nontrivial factor of N

quadratic sieve (discussed in Section 7.2.3) uses a more efficient but more
complex approach.
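Table 7.1 translates almost line-for-line into a toy implementation. The following Python sketch (my own naive version, with a brute-force subset search standing in for Gaussian elimination in the linear algebra phase) factors the running example:

```python
import math
import random

def dixon(N, factor_base):
    n = len(factor_base) + 1   # vector length, including the sign position
    relations = []             # list of (r, y, mod 2 exponent vector)
    while len(relations) <= n: # collect n + 1 smooth relations
        r = random.randrange(2, N)
        y = r * r % N
        if y > N // 2:
            y -= N             # use residues in (-N/2, N/2]
        vec = [1 if y < 0 else 0]
        z = abs(y)
        for p in factor_base:
            e = 0
            while z % p == 0:
                z //= p
                e += 1
            vec.append(e % 2)
        if z == 1:             # y is B-smooth
            relations.append((r, y, vec))
    # Linear algebra: find a subset whose vectors sum to zero mod 2
    for mask in range(1, 1 << len(relations)):
        total = [0] * n
        for i, (_, _, v) in enumerate(relations):
            if mask >> i & 1:
                total = [(a + b) % 2 for a, b in zip(total, v)]
        if any(total):
            continue
        x = prod_y = 1
        for i, (r, y, _) in enumerate(relations):
            if mask >> i & 1:
                x = x * r % N
                prod_y *= y
        # prod_y is a perfect square (all exponents even, sign positive)
        d = math.gcd(x - math.isqrt(prod_y), N)
        if 1 < d < N:
            return d
    return None  # unlucky; the caller can retry with fresh randomness

print(dixon(1829, [2, 3, 5, 7, 11, 13]))
```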
It is worth emphasizing that by increasing B, we can find B-smooth rela-
tions more easily, but the size of the vectors will increase, making the resulting
linear algebra problem more difficult to solve. Determining an optimal B is
challenging since it depends, among other things, on the efficiency of the
implementation of the various steps of the algorithm. Another interesting
feature of this factoring algorithm is that the problem of finding B-smooth
relations is readily parallelized, since given k different computers, each can
test a disjoint subset of random values r for B-smoothness. However, the
linear equation solving does not parallelize so easily.
In the next section we describe the quadratic sieve factoring algorithm,
which is a refinement of Dixon's Algorithm. The quadratic sieve is the fastest
known algorithm for factoring large integers up to about 110 to 115 decimal
digits. Beyond that point, the number field sieve is superior. We briefly
mention the number field sieve before leaving the topic of factoring. Then
in Section 7.3 we consider algorithms for solving the discrete log problem.
Some of these discrete log algorithms use techniques similar to the factoring
algorithms considered in this section.

7.2.3 Quadratic Sieve
The quadratic sieve (QS) factoring algorithm is essentially Dixon's Algorithm
on steroids. In particular, finding B-smooth relations is beefed up in QS as
compared to the more-or-less random approach followed in Dixon's Algo-
rithm. In both algorithms, the linear algebra is identical, so here we ignore
the linear algebra phase. In this section, we assume that the reader is familiar
with Dixon's Algorithm, as presented in Section 7.2.2.
As in Dixon's Algorithm, we first choose a bound B and a factor base
containing primes less than B. Given this factor base, we must find a sufficient
number of B-smooth relations.
Define the polynomial

Q(x) = (⌊√N⌋ + x)² - N. (7.6)

We use this quadratic polynomial to generate the values to be tested for B-
smoothness. The word "quadratic" in quadratic sieve comes from the fact
that Q(x) is a quadratic polynomial.
To obtain smooth relations, we choose an interval containing 0, say,
[-M, M], and for every integer x ∈ [-M, M] we compute y = Q(x). Then,
modulo N, we have y = z², where z = ⌊√N⌋ + x. Consequently, we are
in the same situation as with Dixon's Algorithm, that is, we test whether y
is B-smooth and, if so, we save the mod 2 exponent vector for the linear
equation solving phase. The advantage of QS over Dixon's Algorithm arises
from the fact that we can sieve, as described below. Sieving greatly reduces
the work factor.
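For the N = 1829 example, the polynomial and a few of its values can be sketched as:

```python
import math

N = 1829
s = math.isqrt(N)  # floor(sqrt(N)) = 42

def Q(x):
    # Q(x) = (floor(sqrt(N)) + x)^2 - N, which is congruent to
    # (s + x)^2 modulo N, so a smooth Q(x) is a usable relation.
    return (s + x) ** 2 - N

# Q(0) and Q(1) reproduce the relations 42^2 = -65 and 43^2 = 20 seen earlier
print([Q(x) for x in range(-2, 4)])
```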
A variety of tricks are used to speed up the relation-finding part of the QS
algorithm; the most significant of these is a sieving method. That is, the QS
algorithm utilizes a method whereby the smooth numbers eventually "fall
through" while the non-smooth numbers are filtered out. The process is
somewhat analogous to the sieve of Eratosthenes [25], which is used to find
all primes less than a given bound. Before discussing the sieve used in the QS
algorithm, we review the sieve of Eratosthenes.
Suppose that we want to find all primes less than 31. First, we list all of
the numbers 2 through 30:

2 3 4 5 6 7 8 9 10
11 12 13 14 15 16 17 18 19 20 (7.7)
21 22 23 24 25 26 27 28 29 30.
Then we cross out every other number beginning with 4, since all of these have
a factor of 2. Next, we cross out every third number beginning with 6, since
these all have a factor of 3, and so on. At each step, the smallest number not
yet considered that has not been crossed out must be prime, and we remove
all factors of that number from the list.
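The crossing-out procedure just described takes only a few lines of Python:

```python
def eratosthenes(bound):
    # Sieve of Eratosthenes: return all primes less than bound.
    is_prime = [True] * bound
    is_prime[0] = is_prime[1] = False
    for p in range(2, bound):
        if is_prime[p]:
            # cross out every p-th number, beginning with 2p
            for multiple in range(2 * p, bound, p):
                is_prime[multiple] = False
    return [p for p in range(bound) if is_prime[p]]

print(eratosthenes(31))  # → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```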

For the numbers in (7.7), we begin by marking every number in the list
that has a factor of 2 (other than 2 itself). This is easily accomplished by
simply marking every other number with "-", beginning from 4. The next
unmarked number after 2, which is 3, must be prime. To remove all
numbers with a factor of 3 (other than 3 itself), we mark every third number,
beginning from 6, with "/". After these two passes, with the marks written
beside each crossed-out number, the list is

2 3 4- 5 6-/ 7 8- 9/ 10-
11 12-/ 13 14- 15/ 16- 17 18-/ 19 20- (7.8)
21/ 22- 23 24-/ 25 26- 27/ 28- 29 30-/.

The next unmarked number, 5, must be prime, so we mark every fifth number,
beginning from 10, with "\". Continuing, we mark the numbers having a
factor of 7, then those having a factor of 11 and, finally, those having a
factor of 13. Since the next unmarked number is 17, which is greater
than 30/2, we are finished. The primes less than 31, namely,

2, 3, 5, 7, 11, 13, 17, 19, 23, 29,

have passed through the "sieve."

While the sieve of Eratosthenes gives us the primes, it also provides con-
siderable information on the non-primes in the list, since the marks through
any given number tell us the prime factors of the number. However, the
marks do not tell us the power to which a given prime factor occurs. For
example, in (7.8), the number 24 is marked with "-" and "/", so we know
that it is divisible by 2 (due to the "-") and 3 (due to the "/"), but from
this information we do not know that 24 has a factor of 2³.
Now suppose that instead of crossing out the numbers, we divide out the
factors (here, we also divide the prime by itself). Then, beginning again from

2 3 4 5 6 7 8 9 10
11 12 13 14 15 16 17 18 19 20
21 22 23 24 25 26 27 28 29 30

after dividing every other number by 2, beginning from 2, we have

1 3 2 5 3 7 4 9 5
11 6 13 7 15 8 17 9 19 10
21 11 23 12 25 13 27 14 29 15.

Next, we divide every third number by 3, beginning with 3, to obtain

1 1 2 5 1 7 4 3 5
11 2 13 7 5 8 17 3 19 10
7 11 23 4 25 13 9 14 29 5.

Then we divide every fifth number by 5, beginning with 5,

1 1 2 1 1 7 4 3 1
11 2 13 7 1 8 17 3 19 2
7 11 23 4 5 13 9 14 29 1.

Dividing every seventh number by 7 yields

1 1 2 1 1 1 4 3 1
11 2 13 1 1 8 17 3 19 2
1 11 23 4 5 13 9 2 29 1

and suppose that we stop at this point. Then the numbers that correspond
to the positions now occupied by the 1s in this array are 7-smooth, that is,
they have no prime factors greater than 7. However, some of the numbers
that correspond to non-1s are also 7-smooth, such as 28, which is represented
by the number 2 in the last row. The problem here is that we need to divide
by the highest prime power at each step, which slightly complicates the siev-
ing. This is essentially the principle, with some significant computational
refinements, that is followed in the QS algorithm to obtain B-smooth rela-
tions.
It is costly to test each candidate Q(x) for B-smoothness by trial divi-
sion. Suppose that we find that the prime p, where p is in the factor base,
divides Q(x). Then it is easy to verify that p also divides Q(x + kp), for
any k ≠ 0 (see Problem 6). That is, once we determine that Q(x) is divisible
by p, we know that Q of each of the following

..., x - 2p, x - p, x, x + p, x + 2p, ...

is also divisible by p, so we can save the work of testing whether each of
these Q values is divisible by p. By repeating this for other choices of x and
other primes in the factor base, we can eventually "sieve" out the B-smooth
integers in the interval [-M, M]. This process is somewhat analogous to the
sieve of Eratosthenes, as discussed above. This sieving process is where the
"sieve" in quadratic sieve comes from.
Sieving is the most costly part of the QS algorithm, and several tricks
are used to speed up the process. For example, suppose y = Q(x) is divisible
by p. Then y = 0 (mod p) and from the definition of Q, we have

(⌊√N⌋ + x)² = N (mod p). (7.9)

Therefore, given p in our factor base, we can compute the square roots
of N (mod p), say, sp and p - sp, and use these to immediately determine
the sequence of values of x ∈ [-M, M] such that the corresponding Q(x) are
divisible by p. Since there exists an efficient algorithm (the Shanks-Tonelli
Algorithm [131]) for finding the square roots implied by (7.9), this approach
is efficient.
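For small primes, the roots sp can be found by brute force (the Shanks-Tonelli Algorithm would replace the search for large p); a possible sketch of computing the sievable positions (the helper name and parameter choices are mine):

```python
import math

N, M = 1829, 10
s = math.isqrt(N)  # floor(sqrt(N))

def sieve_positions(p):
    # Solve r^2 = N (mod p) by brute force, then translate each root
    # into the arithmetic progression of x in [-M, M] with p | Q(x),
    # where Q(x) = (s + x)^2 - N.
    roots = [r for r in range(p) if r * r % p == N % p]
    xs = set()
    for r in roots:
        x0 = (r - s) % p                  # smallest nonnegative solution
        start = x0 - ((x0 + M) // p) * p  # shift down into [-M, ...]
        xs.update(range(start, M + 1, p))
    return sorted(xs)

for x in sieve_positions(5):
    assert ((s + x) ** 2 - N) % 5 == 0  # every listed Q(x) is divisible by 5
print(sieve_positions(5))
```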
The actual sieving could proceed as follows. Create an array containing
the values Q(x), for x = -M, -M + 1, ..., -1, 0, 1, ..., M - 1, M. For the
first prime p in the factor base, generate the sequence of x ∈ [-M, M] such
that the corresponding Q(x) are divisible by p, as described in the previous
paragraph. For each of these, we know that the corresponding array element
is divisible by p, so determine the highest power of p that divides the array
element, and store this power, reduced modulo 2, in a vector of powers
corresponding to the given array element. Also divide the array element
by this highest power of p. Repeat this process for each of the remaining
primes p in the factor base.
When we have completed the sieving, those array elements that are 1 will
have factored completely over the factor base, and these are precisely the
B-smooth elements. Retain the vectors of powers mod 2 for the B-smooth
elements and discard all remaining elements and vectors. These retained vec-
tors of powers will be used in the linear algebra phase, precisely as described
for Dixon's Algorithm in Section 7.2.2 above. Provided we have found enough
B-smooth elements, we will obtain a solution to the linear equations, and with
high probability this will yield the factors of N.
There are some fairly obvious improvements to the sieving process as we
have described it. However, there are also several not-so-obvious, but criti-
cally important improvements that are used in practice. Most significantly,
it is possible to use an inexpensive approximate “logarithm” calculation to
avoid the costly division operations [99]. The results are only approximate, so
that the survivors of the sieving that have been reduced to “small” values will
require secondary testing to determine whether they are actually B-smooth.
It is also possible to use a similar technique to avoid the costly process of
determining the highest power of p that divides a given value.
We want to restrict the sieving interval [-M, M] to be small, so that the
resulting Q values are more likely to have small factors (and therefore to be
B-smooth). In practice, it is difficult to obtain enough smooth relations using
only the polynomial Q(x) as defined in (7.6). Therefore, multiple quadratic
polynomials of the form Q(x) = (ax + b)² - N are used. This variant is given
the name multiple polynomial quadratic sieve (MPQS) to distinguish it from
the regular quadratic sieve. Certain choices of the parameters a and b will
give better results; see [115].
Note that in the sieving process as we have described it, we only retain
the powers modulo 2. This is sufficient for the linear algebra phase of the
attack, but, as with Dixon's Algorithm, once a congruence of squares has
been found, it is necessary to know the actual factorization of the elements.
Since all of these are B-smooth, we know the factors are small and, therefore,
Pollard's Rho Algorithm [32] can be used to factor these numbers, since this
algorithm is particularly efficient at finding small factors.
In summary, the QS algorithm can be viewed as an improved version of
Dixon's Algorithm. The speedup due to sieving is significant and with multiple
polynomials, the sieving interval [-M, M] can be made much smaller.
This leads to a better parallel implementation where each processor sieves
over the entire interval, but with a different polynomial. Recent factoring at-
tacks have made use of the parallelizability of the sieving phase by distributing
the work over a large number of computers (which is relatively easy, thanks
to the Internet) and then gathering the exponent vectors before doing the
equation solving phase on a supercomputer.

7.2.4 Factoring Conclusions
Currently, the number field sieve is the best available algorithm for factoring
integers having more than 110 decimal digits or so. While the quadratic sieve
is relatively straightforward, the number field sieve relies on some advanced
mathematics. For a brief and readable introduction to both the quadratic
sieve and the number field sieve, see [116].
Finally, we mention the work factor of factoring. Recall that for trial division
as described above, the work to factor N is on the order of sqrt(N)/ln(sqrt(N)). For
the quadratic sieve, the work factor is on the order of e^((ln N)^(1/2) (ln ln N)^(1/2)). For
the number field sieve, the work factor is on the order of e^(c (ln N)^(1/3) (ln ln N)^(2/3)),
where c ≈ 1.9223. Note that the dominant term of (ln N)^(1/2) in the work
factor for the quadratic sieve has been reduced to (ln N)^(1/3) in the number
field sieve. However, the relatively large constant c in the number field sieve
is the reason that the quadratic sieve is faster for N below a threshold of
about 110 decimal digits.
We consider these functions for the work factors a little more closely.
First, note that the number of bits in N is x = log_2 N. Since we measure
the work required to break a symmetric cipher in terms of bits, it appears
that x is the "right" parameter for measuring the work required to break
a public key system. Up to constant factors, we can rewrite the work factor
functions as in Table 7.2, where f(x) is the work factor for an integer N,
with x = log_2 N. The final column in Table 7.2 gives the logarithm, base 2,
of the work factor f(x).

In Figure 7.1 we have plotted the function

x/2,

which is the logarithm (base 2) of the work factor for trial division, alongside
the functions

x^(1/2) (log_2 x)^(1/2)  and  1.9223 x^(1/3) (log_2 x)^(2/3),

which represent the logarithm (base 2) of the work factors for the quadratic
sieve and the number field sieve, respectively. These same functions appear
in the final column of Table 7.2. The work factor for trial division is exponential
in x, while the work factors for the two sieving methods are said to
be subexponential. A subexponential algorithm has a work factor that is
asymptotically better than any fully exponential algorithm, but slower than
any polynomial-time algorithm [99]. The dominant term in the quadratic
sieve work factor function is x^(1/2), while the dominant term for the number
field sieve is x^(1/3). However, due to the larger constant, the number field
sieve function is larger than the quadratic sieve function for small x. But,
regardless of constants, both of these work factors are smaller than any exponential
in x for sufficiently large values of x. This is precisely what it means for an
algorithm to have a subexponential work factor.
It is instructive to compare the results in Table 7.2 with the work required
for an exhaustive key search attack on a symmetric cipher. Consider
a symmetric cipher with an x-bit key. Then an exhaustive key search has an
expected work factor of 2^(x-1) and taking the logarithm, base 2, gives x - 1.
From Table 7.2, we see that factoring is asymptotically easier than an exhaustive
key search (provided one of the sieving methods is used), regardless
of any constants that are involved. This implies that to obtain a level of
security for, say, RSA, comparable to that provided by a secure symmetric cipher, the
RSA modulus N must have far more bits than the corresponding symmetric
key; see Problem 8.
Note that the quadratic sieve is superior to the number field sieve for
integers N up to about 390 bits, and from that point on, the number field
sieve has the lower work factor. Since a 390-bit integer has about 117 decimal
digits, this is consistent with the often stated claim that the quadratic sieve
is more efficient for factoring large integers up to about 115 decimal digits.
Also, from Figure 7.1 we can roughly compare the number of bits in the key
of a secure symmetric cipher to the number of bits in an RSA modulus. For
example, a 390-bit RSA modulus requires about the same amount of work
to factor (and thereby recover the private key) as an exhaustive search to
recover a 60-bit symmetric key.
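The crossover just described can be checked numerically. The functions below are the base-2 logarithms of the three work factors, up to constant factors as in Table 7.2; the 390-bit and 60-bit figures quoted above fall out directly. This is a sketch for illustration, not a precise security estimate.

```python
import math

# Base-2 logarithms of the factoring work factors, up to constant factors,
# as functions of the modulus size x in bits (cf. Table 7.2).
def log2_work_trial(x):
    return x / 2                                             # trial division

def log2_work_qs(x):
    return math.sqrt(x) * math.sqrt(math.log2(x))            # quadratic sieve

def log2_work_nfs(x):
    return 1.9223 * x ** (1 / 3) * math.log2(x) ** (2 / 3)   # number field sieve

# Near x = 390 bits the two sieves cost about the same, roughly the work
# of an exhaustive search over a 60-bit symmetric key; beyond that point
# the number field sieve has the lower work factor.
for bits in (390, 512, 1024, 2048):
    print(bits, round(log2_work_qs(bits)), round(log2_work_nfs(bits)))
```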

Figure 7.1: Comparison of factoring algorithms.

7.3 Discrete Log Algorithms

In pioneer days they used oxen for heavy pulling,
and when one ox couldn't budge a log,
they didn't try to grow a larger ox.
- G. Hopper

The security of the Diffie-Hellman key exchange and the ElGamal signature
scheme is believed to rest on the difficulty of the discrete logarithm problem.
That is, given p, g, and g^a (mod p), where a is unknown, if we can determine a,
then we can break Diffie-Hellman and ElGamal.
In this section, we give a brief overview of three methods of solving the
discrete log problem. First, we present the naive method of trial multiplication,
which is the analog of trial division for the factoring problem (factoring
is discussed in Section 7.2). Then we discuss the baby-step giant-step method,
which is a fairly straightforward time-memory trade-off (TMTO) extension of
trial multiplication. Finally, we discuss the index calculus, which is roughly
the discrete log analog of Dixon's Algorithm for integer factorization. Variations
of the index calculus algorithm are the most efficient known algorithms
for solving the discrete log problem that arises in Diffie-Hellman and other
public key systems.

7.3.1 Trial Multiplication
Suppose we are given a generator g, a prime p and g^a (mod p), and we want
to determine a. We can compute the sequence

g^2 (mod p), g^3 (mod p), g^4 (mod p), ...
until we find g^a (mod p). As mentioned above, this method of solving the
discrete log problem is essentially the analog of trial division for factoring.
For example, suppose we are given g = 3, p = 101, and we want to find
the exponent a such that g^a (mod p) = 94. Then we would compute

3^2 (mod 101) = 9
3^3 (mod 101) = 27
3^4 (mod 101) = 81
3^5 (mod 101) = 41
...
3^11 (mod 101) = 94.

Alternatively, we could begin at g^k (mod p), for any given k. In any case,
the expected number of multiplications is about p/2.
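Trial multiplication translates directly into code. The following is a minimal sketch, run on the example above; the early-exit bound is a safeguard we added for the case where x is not a power of g.

```python
def trial_mult_dlog(g, p, x):
    """Solve g^a = x (mod p) by computing g, g^2, g^3, ... until x appears."""
    t, a = g % p, 1
    while t != x:
        t = (t * g) % p
        a += 1
        if a > p:        # x is not in the subgroup generated by g
            return None
    return a

# The example above: g = 3, p = 101, find a with 3^a = 94 (mod 101)
print(trial_mult_dlog(3, 101, 94))  # → 11
```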

7.3.2 Baby-Step Giant-Step
The baby-step giant-step algorithm provides an improvement over trial multiplication
based on a time-memory trade-off [99]. Again, we are given a generator g,
a prime p and x = g^a (mod p), and we want to determine a. First, we
let m = ceil(sqrt(p - 1)). Then we have a = im + j for some i in {0, 1, ..., m-1}
and j in {0, 1, ..., m-1}. For this choice of i and j, it follows that

x = g^a = g^(im+j) (mod p)

and therefore

g^j = x g^(-im) (mod p). (7.10)

If we can determine i and j so that (7.10) holds, then, since a = im + j, we
have found a. To determine a we proceed as follows. Given x = g^a (mod p),
we compute x g^(-im) (mod p) for i = 0, 1, ..., m-1 (recall that g and p are public
and m is known). Then for each j = 0, 1, ..., m-1, we compute g^j (mod p)
and compare the result to all of the computed x g^(-im) (mod p) values. When
we find a match, we have found i and j such that (7.10) is satisfied and,
therefore, we have recovered a. Note that in this algorithm, the g^j (mod p)
represent the "baby steps," while the x g^(-im) (mod p) are the "giant steps."
An example should clarify the algorithm. Suppose that g = 3, p = 101
and we want to solve for the discrete log of x = g^a (mod p) = 37. Then we
select m = ceil(sqrt(p - 1)) = 10 and note that g^(-m) = 3^(-10) = 14 (mod p). Next, we
compute

x g^(-im) = 37 · 3^(-10i) = 37 · 14^i (mod 101),

for i = 0, 1, 2, ..., m-1, and save the results in a table. This phase of the
computation is summarized in Table 7.3.

Table 7.3: Example Giant-Step Computation

giant step i              0   1   2   3   4    5   6   7   8   9
3^(-10i) (mod 101)        1  14  95  17  36  100  87   6  84  65
37 · 3^(-10i) (mod 101)  37  13  81  23  19   64  88  20  78  82

Next, we compute 3^j (mod 101), for j = 0, 1, ..., 9, until we obtain a
value that appears in the third row of Table 7.3. For j = 0, 1, 2, 3 we do not
find such a match, but for j = 4 we have 3^4 = 81 (mod 101), which appears
in the third row of the i = 2 column in Table 7.3. Consequently, (7.10) holds
with m = 10, i = 2, and j = 4, that is,

3^4 = 37 · 3^(-2·10) (mod 101).

Therefore, 3^24 = 37 (mod 101) and we have solved this particular discrete
log problem.
In general, for the giant-step phase, we compute about sqrt(p) values and
store these in our table. Then in the baby-step phase, after trying about sqrt(p)/2
choices for j, we expect to have found a solution. Assuming the table lookups
do not carry any cost (the table could be sorted or a hash table used), this
algorithm requires about 1.5 sqrt(p) multiplications and it also requires storage
(or space) of about sqrt(p). Assuming the storage requirement is feasible, this is
a significant improvement over the naive method of trial multiplication, but
still an exponential work factor.
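A compact implementation of the algorithm as described above is sketched below. The dictionary plays the role of the sorted table of giant steps, and pow(g, -m, p) (Python 3.8+) computes the modular inverse needed for g^(-m).

```python
import math

def bsgs(g, p, x):
    """Baby-step giant-step: solve g^a = x (mod p) with a = i*m + j, cf. (7.10)."""
    m = math.isqrt(p - 1)
    if m * m < p - 1:
        m += 1                       # m = ceil(sqrt(p - 1))
    g_inv_m = pow(g, -m, p)          # g^(-m) (mod p), Python 3.8+
    # Giant steps: table of x * g^(-im) (mod p) for i = 0, ..., m-1
    giant = {}
    val = x % p
    for i in range(m):
        giant.setdefault(val, i)
        val = (val * g_inv_m) % p
    # Baby steps: look for g^j (mod p) in the table
    bj = 1
    for j in range(m):
        if bj in giant:
            return giant[bj] * m + j
        bj = (bj * g) % p
    return None

# The example above: g = 3, p = 101, x = 37
print(bsgs(3, 101, 37))  # → 24
```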

7.3.3 Index Calculus
As above, we are given the generator g, a prime p, and x = g^a (mod p), and
we want to find a. Analogous to Dixon's factoring algorithm, we first select a
bound B and a factor base of primes less than B. Then we pre-compute the
discrete logarithms to the base g of the elements in the factor base and we
save the logarithms of these elements. These logarithms of the factor base can
be found efficiently by solving a system of linear equations; see Problem 11.
Once the logarithms of the elements in the factor base are known, the
attack is straightforward. We have x = g^a (mod p), and we want to find a.
Let {p_0, p_1, ..., p_(n-1)} be the set of primes in the factor base. We randomly
select k in {0, 1, 2, ..., p-2} and compute y = x · g^k (mod p) until we find
a y that factors completely over the factor base. Given such a y we have

y = p_0^(d_0) p_1^(d_1) · · · p_(n-1)^(d_(n-1)),

where each d_i ≥ 0. Taking log_g on both sides and simplifying, we find

a = log_g x = (d_0 log_g p_0 + d_1 log_g p_1
    + · · · + d_(n-1) log_g p_(n-1) - k) (mod p - 1). (7.11)

Assuming that the logarithms of the elements in the factor base are known, we
have determined a. Note that the mod p - 1 in (7.11) follows from Fermat's
Little Theorem, which is given in Appendix A-2.
An example should clarify the algorithm. Suppose g = 3, p = 101 and we
are given x = 3^a = 94 (mod 101) and we want to determine a. As our factor
base we choose the set of primes {2, 3, 5, 7} and by solving a system of linear
equations (see Problem 11), we determine

log_3 2 = 29, log_3 3 = 1, log_3 5 = 96, log_3 7 = 61.

Next, we randomly select k until we obtain a value 94 · 3^k (mod 101) which
factors completely over the factor base. For example, if we choose k = 10 then

94 · 3^10 = 50 (mod 101), (7.12)

and 50 = 2 · 5^2 factors over our factor base. Taking logarithms of both sides
of (7.12) yields

log_3 94 = log_3 2 + 2 log_3 5 - 10 (mod 100).

Substituting the values for the logarithms of the primes in the factor base,
we have

log_3 94 = 29 + 2 · 96 - 10 = 11 (mod 100),

which implies that 3^11 = 94 (mod 101). Then a = 11 and we have solved
this particular discrete log problem.
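Given the precomputed logarithms of the factor-base primes, the rest of the attack is only a few lines. This sketch replays the example above, except that it scans k = 1, 2, ... in order rather than selecting k at random, so it may find a smooth y for a different k than the k = 10 used in the text; the recovered a is the same either way.

```python
g, p, x = 3, 101, 94
# Logs of the factor-base primes, from the linear-algebra precomputation above
logs = {2: 29, 3: 1, 5: 96, 7: 61}

def factor_over_base(y, base):
    """Return {prime: exponent} if y factors completely over base, else None."""
    exps = {}
    for q in base:
        while y % q == 0:
            y //= q
            exps[q] = exps.get(q, 0) + 1
    return exps if y == 1 else None

a = None
for k in range(1, p - 1):            # the text picks k at random; we scan
    y = (x * pow(g, k, p)) % p
    exps = factor_over_base(y, logs)
    if exps is not None:
        # Equation (7.11): a = sum_i d_i log_g p_i - k (mod p - 1)
        a = (sum(d * logs[q] for q, d in exps.items()) - k) % (p - 1)
        break

print(a, pow(g, a, p) == x)  # → 11 True
```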
The index calculus provides the most efficient method for solving a general
discrete log problem. The work factor for the index calculus is subexponential;
more precisely, the work is on the order of e^((ln p)^(1/2) (ln ln p)^(1/2)). The form of this
work factor is the same as that given for the quadratic sieve factoring method
in Section 7.2.4, which is, perhaps, not surprising since both rely on finding
smooth integers. For more information on the index calculus, a good source
is [99].

7.3.4 Discrete Log Conclusions
There are many parallels between the discrete log algorithms discussed in this
section and the factoring algorithms in Section 7.2. In particular, Dixon's
factoring algorithm and the index calculus discrete log algorithm have many
similarities. Both algorithms have an equation finding phase and a linear
algebra phase.
For properly chosen parameters, the costliest part of the index calculus
algorithm is solving for the logarithms of the elements in the factor base. As
with Dixon's Algorithm, the equation finding part of the index calculus algo-
rithm can be distributed among multiple processors, but the linear algebra
part cannot.
There exists another class of algorithms for factoring and discrete logarithms,
known as collision search techniques. The best general
collision search method is Pollard's Rho Algorithm [32]. However, collision
search algorithms have an exponential work factor, while the quadratic sieve
and index calculus are subexponential (see Section 7.2.4 for a discussion of
the work factor for the quadratic sieve).
Finally, it is worth noting that there is no analog of the quadratic sieve
or index calculus for elliptic curve cryptosystems since, for elliptic curves,
there is no analog of a factor base. Consequently, collision search is the best
available technique for breaking systems based on elliptic curves, and this is
why smaller parameter sizes can be used with elliptic curve cryptosystems
without sacrificing security.

7.4 RSA Implementation Attacks

Timing is everything.
- Anonymous

If factoring and discrete log algorithms represent classical attacks on pub-
lic key systems, then the attacks in this section are strictly “punk rock.”
The attacks discussed here do not follow the usual cryptanalytic approach of
analyzing the underlying crypto algorithm for weaknesses. Instead, the crypt-
analyst looks for any weak link in the overall implementation that might allow
information about the private key to leak.
First, we discuss three different timing attacks. By exploiting small timing
differences that occur in specific methods of modular exponentiation, the
attacker can gain information about the private key. Then we consider a
“glitching” attack, where the attacker induces an error in a computation
(for example, by abusing a smartcard). For a particular method of modular
exponentiation, a single induced error can enable an attacker to easily recover
the private key. Both timing and glitching attacks are practical and it is
therefore critical that cryptosystems are built to resist such attacks.
It is important to emphasize that the attacks discussed in this section do
not exploit any inherent weakness in the RSA algorithm itself. Instead, these
attacks exploit specific implementation issues that allow information to leak
out, with potentially devastating consequences.

7.4.1 Timing Attacks
At one time, it was widely believed that if RSA was implemented correctly,
the only realistic way to attack it was by factoring the modulus. In 1996, Paul
Kocher [85] surprised the crypto world when he demonstrated a practical side-channel
attack on RSA.
A side channel is an unintended source of information. In the crypto
context, side channels sometimes leak information about a computation, which
in turn reveals information about the key. For example, careful measurements
of the amount of current used by smartcards have been used to recover keys.
Kocher's side-channel attack on RSA is based on a careful timing of various
cryptographic operations. Using selected inputs, he was able to recover
the private keys from smartcards which used a relatively simple method of
modular exponentiation. Kocher conjectured that his technique could also be
used in settings where the modular exponentiation was computed using more
efficient means, but other researchers soon discovered this was not the case.
Schindler [129] was able to develop a timing attack that succeeds when
modular exponentiation is computed in a more optimized fashion than the

case where Kocher's attack applies. Brumley and Boneh [22] have pushed
Schindler's results much further, developing a successful timing attack against
the highly optimized RSA implementation in OpenSSL. This attack is suf-
ficiently robust that it can be conducted over a network, illustrating that
timing attacks are a serious threat to real-world RSA implementations.
In this section we discuss Kocher's attack, Schindler's attack, and the
Brumley-Boneh attack. We also consider defenses against timing attacks.
But first, we introduce the techniques used to compute modular exponentiation
which are employed in efficient implementations of RSA. Specifically,
we discuss repeated squaring, the Chinese Remainder Theorem, Montgomery
multiplication, and Karatsuba multiplication.

Modular Exponentiation

Suppose we want to compute 6^20 (mod 29). The obvious approach is to
raise 6 to the 20th power, then compute the remainder when this number is
divided by 29. For this particular example, we have

6^20 = 3,656,158,440,062,976 = 24 (mod 29).

However, this approach is not feasible when the base and exponent are large,
as is the case in RSA, since the intermediate result is too large to compute
and store. And even if we could somehow deal with such enormous numbers,
computing the remainder by long division would be costly.
An improvement would be to do a modular reduction after each multi-
plication, which would eliminate the problem of large intermediate results.
However, there is a better way. A method known as repeated squaring allows
us to compute a modular exponentiation without having to deal with
any extremely large intermediate values, and it also dramatically reduces the
number of multiplications as compared to the naive approach. In repeated
squaring, we "build up" the exponent one bit at a time, from high-order bit
to low-order bit. For example, the exponent 20 is, in binary, 10100, and we
build it up via the sequence 1, 10, 101, 1010, 10100, that is, 1, 2, 5, 10, 20.

Then to find 6^20 (mod 29) by repeated squaring, we compute

6^1 = 6 (mod 29)
6^2 = (6^1)^2 = 36 = 7 (mod 29)
6^5 = (6^2)^2 · 6 = 7^2 · 6 = 294 = 4 (mod 29)
6^10 = (6^5)^2 = 4^2 = 16 (mod 29)
6^20 = (6^10)^2 = 16^2 = 256 = 24 (mod 29).

Note that this computation requires five multiplications (as opposed to 20 for
the naive approach) and five modular reductions, and all intermediate values
are less than N^2. The repeated squaring algorithm is given in Table 7.4.

Table 7.4: Repeated Squaring

// Compute y = x^d (mod N),
// where d = (d_0, d_1, d_2, ..., d_n) in binary, with d_0 = 1
s = x
for i = 1 to n
    s = s^2 (mod N)
    if d_i == 1 then
        s = s · x (mod N)
    end if
next i
return(s)
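The algorithm of Table 7.4 translates directly into a few lines of Python; a minimal sketch, run on the 6^20 (mod 29) example:

```python
def repeated_squaring(x, d, N):
    """Compute x^d (mod N) by scanning the bits of d from high to low,
    as in Table 7.4 (d must be positive, so the leading bit d_0 is 1)."""
    bits = bin(d)[2:]          # d_0 d_1 ... d_n
    s = x % N                  # the leading bit d_0 = 1 contributes x itself
    for bit in bits[1:]:
        s = (s * s) % N        # square once per remaining bit
        if bit == '1':
            s = (s * x) % N    # multiply by x when the bit is 1
    return s

print(repeated_squaring(6, 20, 29))  # → 24
```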

While repeated squaring is clearly preferable to the naive approach of exponentiation
followed by long division, there are many additional refinements
that can further improve the efficiency of modular exponentiation. These improvements
are necessary for efficient RSA implementations due to the large
numbers that arise in RSA. Repeated squaring without further refinements
is only used in RSA implementations in extremely resource-constrained environments,
such as smartcards.
Another trick that is commonly used to speed up modular exponentiation
employs the Chinese Remainder Theorem (CRT). The precise statement
of the CRT is given in the Appendix. To see how the CRT applies specifically
to RSA, first recall that for an RSA decryption (or signature), we must
compute a modular exponentiation of the form C^d (mod N), where N = pq
and p and q are large primes. Using the CRT, we can compute the modular
exponentiation modulo p and modulo q, then "glue" the two results together
to obtain the desired result modulo N. Since p and q are each much smaller
than N (each is on the order of sqrt(N)), it is much more efficient to do two
modular exponentiations with these relatively small moduli than to do one
modular exponentiation with modulus N. The use of CRT provides a speedup
of about a factor of four when computing C^d (mod N).
This CRT trick works as follows. Suppose we know C, d, N, p, and q, and
we want to compute C^d (mod N). First, we pre-compute

d_p = d (mod (p - 1)) and d_q = d (mod (q - 1))

and we determine a satisfying

a = 1 (mod p) and a = 0 (mod q) (7.13)

and b satisfying

b = 0 (mod p) and b = 1 (mod q). (7.14)

Then for the given ciphertext C, we compute

C_p = C (mod p) and C_q = C (mod q)
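The computation just set up can be carried through in code. In the sketch below, the recombination line (a · C_p^(d_p) + b · C_q^(d_q)) (mod N) is the standard CRT "gluing" step implied by (7.13) and (7.14); the toy values of p, q, d, and C are our illustrative choices, not from the text.

```python
def crt_exp(C, d, p, q):
    """Compute C^d mod (p*q) via two half-size exponentiations (CRT trick)."""
    dp, dq = d % (p - 1), d % (q - 1)
    Cp, Cq = C % p, C % q
    xp = pow(Cp, dp, p)          # C^d (mod p), using Fermat's Little Theorem
    xq = pow(Cq, dq, q)          # C^d (mod q)
    a = q * pow(q, -1, p)        # a = 1 (mod p), a = 0 (mod q), cf. (7.13)
    b = p * pow(p, -1, q)        # b = 0 (mod p), b = 1 (mod q), cf. (7.14)
    return (a * xp + b * xq) % (p * q)

# Toy RSA decryption with illustrative small parameters
p, q, d = 61, 53, 2753
C = 2790
print(crt_exp(C, d, p, q), crt_exp(C, d, p, q) == pow(C, d, p * q))
```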


. 13
( 16)