1. If a = 1, then L(a,p) = 1
2. If a is even, then L(a,p) = L(a/2,p) * (-1)^((p^2 - 1)/8)
3. If a is odd (and ≠ 1), then L(a,p) = L(p mod a, a) * (-1)^((a-1)*(p-1)/4)

Note that this is also an efficient way to determine whether a is a quadratic residue mod p (when p is
prime).


Jacobi Symbol

The Jacobi symbol, written J(a,n), is a generalization of the Legendre symbol to composite moduli; it
is defined for any integer a and any odd integer n. The function shows up in primality testing. The
Jacobi symbol is a function on the set of reduced residues of the divisors of n and can be calculated by
several formulas [1412]. This is one method:

Definition 1: J(a,n) is only defined if n is odd.
Definition 2: J(0,n) = 0.






Definition 3: If n is prime, then the Jacobi symbol J(a,n) = 0 if n divides a.
Definition 4: If n is prime, then the Jacobi symbol J(a,n) = 1 if a is a quadratic residue
modulo n.
Definition 5: If n is prime, then the Jacobi symbol J(a,n) = -1 if a is a quadratic
nonresidue modulo n.
Definition 6: If n is composite, then the Jacobi symbol J(a,n) = J(a,p1) *...* J(a,pm), where
p1...pm is the prime factorization of n.

The following algorithm computes the Jacobi symbol recursively:

Rule 1: J(1,n) = 1
Rule 2: J(a*b,n) = J(a,n)*J(b,n)
Rule 3: J(2,n) = 1 if (n^2 - 1)/8 is even, and -1 otherwise
Rule 4: J(a,n) = J((a mod n),n)
Rule 5: J(a,b1*b2) = J(a,b1)*J(a,b2)
Rule 6: If the greatest common divisor of a and b is 1, and a and b are odd:
Rule 6a: J(a,b) = J(b,a) if (a - 1)(b - 1)/4 is even
Rule 6b: J(a,b) = -J(b,a) if (a - 1)(b - 1)/4 is odd

Here is the algorithm in C:

/* This algorithm computes the Jacobi symbol recursively */
#include <assert.h>

/* Helpers assumed by the algorithm: */
static int odd(int x) { return x & 1; }

static int gcd(int x, int y)            /* Euclid's algorithm */
{
    while (y != 0) { int t = x % y; x = y; y = t; }
    return x;
}

int jacobi(int a, int b)
{
    int g;
    assert(odd(b));
    if (a >= b) a %= b;                 /* by Rule 4 */
    if (a == 0) return 0;               /* by Definition 2 */
    if (a == 1) return 1;               /* by Rule 1 */
    if (a < 0) {
        if ((b-1)/2 % 2 == 0)
            return jacobi(-a, b);
        else
            return -jacobi(-a, b);
    }
    if (a % 2 == 0) {                   /* a is even */
        if (((b*b - 1)/8) % 2 == 0)
            return +jacobi(a/2, b);
        else
            return -jacobi(a/2, b);     /* by Rule 3 and Rule 2 */
    }
    g = gcd(a, b);
    assert(odd(a));     /* guaranteed by the (a % 2 == 0) test above */
    if (g == a)                         /* a exactly divides b */
        return 0;                       /* by Rules 5 and 4, and Definition 2 */
    else if (g != 1)
        return jacobi(g, b) * jacobi(a/g, b);   /* by Rule 2 */
    else if (((a-1)*(b-1)/4) % 2 == 0)
        return +jacobi(b, a);           /* by Rule 6a */
    else
        return -jacobi(b, a);           /* by Rule 6b */
}

If n is known to be prime beforehand, simply compute a^((n-1)/2) mod n instead of running the previous
algorithm; in this case J(a,n) is equivalent to the Legendre symbol.
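
As a concrete illustration, here is a minimal C sketch of that computation. The helper names modexp and legendre are ours, not from any standard library, and the sketch assumes p is small enough that intermediate products fit in a uint64_t:

#include <stdint.h>

/* Modular exponentiation by repeated squaring: computes
   base^exp mod m. Assumes (m-1)^2 fits in a uint64_t. */
uint64_t modexp(uint64_t base, uint64_t exp, uint64_t m)
{
    uint64_t result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1)
            result = (result * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return result;
}

/* Legendre symbol for an odd prime p, via a^((p-1)/2) mod p.
   Returns 1, -1, or 0. */
int legendre(uint64_t a, uint64_t p)
{
    uint64_t r = modexp(a, (p - 1) / 2, p);
    if (r == 0) return 0;          /* p divides a */
    return (r == 1) ? 1 : -1;      /* p - 1 represents -1 mod p */
}

For a quadratic residue the exponentiation yields 1; for a nonresidue it yields p - 1, which represents -1 modulo p.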

The Jacobi symbol cannot be used to determine whether a is a quadratic residue mod n (unless n is
prime, of course). Note that, if J(a,n) = 1 and n is composite, it is not necessarily true that a is a
quadratic residue modulo n. For example:







J(7, 143) = J(7, 11)*J(7, 13) = (-1)(-1) = 1

However, there is no integer x such that x^2 ≡ 7 (mod 143).

Blum Integers

If p and q are two primes, and both are congruent to 3 modulo 4, then n = pq is sometimes called a
Blum integer. If n is a Blum integer, each quadratic residue has exactly four square roots, one of
which is also a square; this is the principal square root. For example, the principal square root of 139
mod 437 is 24. The other three square roots are 185, 252, and 413.
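
A brute-force C sketch (educational only; this does not scale beyond toy moduli) makes the structure visible. Note that 437 = 19*23, and both 19 and 23 are congruent to 3 modulo 4:

#include <stdio.h>

/* List the four square roots of a quadratic residue modulo the
   small Blum integer n = 437, and flag the principal one. */
int main(void)
{
    const int n = 437, a = 139;
    int roots[4], count = 0;

    for (int x = 1; x < n && count < 4; x++)
        if ((x * x) % n == a)
            roots[count++] = x;

    for (int i = 0; i < count; i++) {
        /* The principal square root is the one root that is
           itself a quadratic residue mod n. */
        int is_square = 0;
        for (int y = 1; y < n; y++)
            if ((y * y) % n == roots[i]) { is_square = 1; break; }
        printf("%d%s\n", roots[i], is_square ? " (principal)" : "");
    }
    return 0;   /* prints 24 (principal), 185, 252, 413 */
}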

Generators

If p is a prime, and g is less than p, then g is a generator mod p if, for each b from 1 to p - 1,
there exists some a where g^a ≡ b (mod p).

Another way of saying this is that g is primitive with respect to p.

For example, if p = 11, 2 is a generator mod 11:

2^1 = 2 ≡ 2 (mod 11)
2^2 = 4 ≡ 4 (mod 11)
2^3 = 8 ≡ 8 (mod 11)
2^4 = 16 ≡ 5 (mod 11)
2^5 = 32 ≡ 10 (mod 11)
2^6 = 64 ≡ 9 (mod 11)
2^7 = 128 ≡ 7 (mod 11)
2^8 = 256 ≡ 3 (mod 11)
2^9 = 512 ≡ 6 (mod 11)
2^10 = 1024 ≡ 1 (mod 11)

Every number from 1 to 10 can be expressed as 2^a (mod 11).

For p = 11, the generators are 2, 6, 7, and 8. The other numbers are not generators. For example, 3 is
not a generator because there is no solution to

3^a ≡ 2 (mod 11)

In general, testing whether a given number is a generator is not an easy problem. It is easy, however,
if you know the factorization of p - 1. Let q1, q2,..., qn be the distinct prime factors of p - 1. To test
whether a number g is a generator mod p, calculate

g^((p-1)/q) mod p

for all values of q = q1, q2,..., qn.

If the result equals 1 for any value of q, then g is not a generator. If it does not equal 1
for any value of q, then g is a generator.








For example, let p = 11. The prime factors of p - 1 = 10 are 2 and 5. To test whether 2 is a generator:

2^((11-1)/5) (mod 11) = 4
2^((11-1)/2) (mod 11) = 10


Neither result is 1, so 2 is a generator.

To test whether 3 is a generator:

3^((11-1)/5) (mod 11) = 9
3^((11-1)/2) (mod 11) = 1

Therefore, 3 is not a generator.

If you need to find a generator mod p, simply choose a random number from 1 to p - 1 and test
whether it is a generator. Enough of them will be, so you'll probably find one fast.
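
The test above translates directly into code. Here is a hedged C sketch; is_generator is our illustrative name, the caller must supply the distinct prime factors of p - 1, and the modexp helper (repeated from the earlier sketch so this example stands alone) assumes p fits comfortably in 64-bit arithmetic:

#include <stdint.h>
#include <stdio.h>

/* Square-and-multiply modular exponentiation. */
static uint64_t modexp(uint64_t base, uint64_t exp, uint64_t m)
{
    uint64_t result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1)
            result = (result * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return result;
}

/* Returns 1 if g is a generator mod the prime p, given the distinct
   prime factors q[0..nq-1] of p - 1. */
static int is_generator(uint64_t g, uint64_t p, const uint64_t *q, int nq)
{
    for (int i = 0; i < nq; i++)
        if (modexp(g, (p - 1) / q[i], p) == 1)
            return 0;               /* g generates a proper subgroup */
    return 1;
}

int main(void)
{
    const uint64_t q[] = { 2, 5 };               /* factors of 11 - 1 */
    printf("%d\n", is_generator(2, 11, q, 2));   /* prints 1 */
    printf("%d\n", is_generator(3, 11, q, 2));   /* prints 0 */
    return 0;
}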

Computing in a Galois Field

Don't be alarmed; that's what we were just doing. If n is prime or the power of a large prime, then we
have what mathematicians call a finite field. In honor of that fact, we use p instead of n. In fact, this
type of finite field is so exciting that mathematicians gave it its own name: a Galois field, denoted as
GF(p). (Évariste Galois was a French mathematician who lived in the early nineteenth century and did
a lot of work in number theory before he was killed at age 20 in a duel.)

In a Galois field, addition, subtraction, multiplication, and division by nonzero elements are all well-
defined. There is an additive identity, 0, and a multiplicative identity, 1. Every nonzero number has a
unique inverse (this would not be true if p were not prime). The commutative, associative, and
distributive laws are true.

Arithmetic in a Galois field is used a great deal in cryptography. All of the number theory works; it
keeps numbers a finite size, and division doesn't have any rounding errors. Many cryptosystems are
based on GF(p), where p is a large prime.

To make matters even more complicated, cryptographers also use arithmetic modulo irreducible
polynomials of degree n whose coefficients are integers modulo q, where q is prime. These fields are
called GF(q^n). All arithmetic is done modulo p(x), where p(x) is an irreducible polynomial of degree
n.

The mathematical theory behind this is far beyond the scope of the book, although I will describe
some cryptosystems that use it. If you want to try to work more with this, GF(2^3) has the following
elements: 0, 1, x, x + 1, x^2, x^2 + 1, x^2 + x, x^2 + x + 1. There is an algorithm for computing inverses
in GF(2^n) that is suitable for parallel implementation [421].

When talking about polynomials, the term “prime” is replaced by “irreducible.” A polynomial is
irreducible if it cannot be expressed as the product of two other polynomials (except for 1 and itself, of
course). The polynomial x^2 + 1 is irreducible over the integers. The polynomial x^3 + 2x^2 + x is not; it
can be expressed as x(x + 1)(x + 1).
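
To make the GF(2^3) arithmetic concrete, here is a small C sketch of multiplication modulo the irreducible trinomial x^3 + x + 1, with each polynomial stored as a bit mask (bit i is the coefficient of x^i); gf8_mul is our illustrative name:

#include <stdio.h>
#include <stdint.h>

/* Multiplication in GF(2^3): shift-and-add (XOR), reducing by
   x^3 + x + 1 (mask 0xB) whenever the degree reaches 3. */
static uint8_t gf8_mul(uint8_t a, uint8_t b)
{
    uint8_t product = 0;
    while (b) {
        if (b & 1)
            product ^= a;      /* add (XOR) the current multiple of a */
        b >>= 1;
        a <<= 1;
        if (a & 0x8)           /* degree 3 appeared: reduce */
            a ^= 0xB;          /* subtract (XOR) x^3 + x + 1 */
    }
    return product;
}

int main(void)
{
    printf("%u\n", gf8_mul(0x3, 0x5));   /* (x+1)(x^2+1): prints 4 */
    return 0;
}

Here (x + 1)(x^2 + 1) = x^3 + x^2 + x + 1, and reducing x^3 to x + 1 leaves x^2, so the program prints 4.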

A polynomial that is a generator in a given field is called primitive; all its coefficients are relatively
prime. We'll see primitive polynomials again when we talk about linear-feedback shift registers (see
Section 16.2).

Computation in GF(2^n) can be quickly implemented in hardware with linear-feedback shift registers.
For that reason, computation over GF(2^n) is often quicker than computation over GF(p). Just as
exponentiation is much more efficient in GF(2^n), so is calculating discrete logarithms [180, 181, 368,
379]. If you want to learn more about this, read [140].

For a Galois field GF(2^n), cryptographers like to use the trinomial p(x) = x^n + x + 1 as the modulus,
because the long string of zeros between the x^n and x coefficients makes it easy to implement a fast
modular multiplication [183]. The trinomial must be primitive, otherwise the math does not work.
Values of n less than 1000 [1649, 1648] for which x^n + x + 1 is primitive are:

1, 3, 4, 6, 9, 15, 22, 28, 30, 46, 60, 63, 127, 153, 172, 303, 471, 532,
865, 900

There exists a hardware implementation of GF(2^127) where p(x) = x^127 + x + 1 [1631, 1632, 1129].
Efficient hardware architectures for implementing exponentiation in GF(2^n) are discussed in [147].

11.4 Factoring

Factoring a number means finding its prime factors.

10 = 2*5
60 = 2*2*3*5
252601 = 41*61*101
2^113 - 1 = 3391*23279*65993*1868569*1066818132868207

The factoring problem is one of the oldest in number theory. It's simple to factor a number, but it's
time-consuming. This is still true, but there have been some major advances in the state of the art.

Currently, the best factoring algorithm is:

- Number field sieve (NFS) [953] (see also [952,16,279]). The general number field sieve is
the fastest-known factoring algorithm for numbers larger than 110 digits or so [472,635]. It was
impractical when originally proposed, but that has changed due to a series of improvements
over the last few years [953]. The NFS is still too new to have broken any factoring records, but
this will change soon. An early version was used to factor the ninth Fermat number, 2^512 + 1
[955,954].

Other factoring algorithms have been supplanted by the NFS:

- Quadratic sieve (QS) [1257,1617,1259]. This is the fastest-known algorithm for numbers
less than 110 decimal digits long and has been used extensively [440]. A faster version of this
algorithm is called the multiple polynomial quadratic sieve [1453,302]. The fastest version of
this algorithm is called the double large prime variation of the multiple polynomial quadratic
sieve.
- Elliptic curve method (ECM) [957,1112,1113]. This method has been used to find 43-digit
factors, but nothing larger.
- Pollard's Monte Carlo algorithm [1254,248]. (This algorithm also appears in volume 2,
page 370 of Knuth [863].)
- Continued fraction algorithm. See [1123,1252,863]. This algorithm isn't even in the
running.






- Trial division. This is the oldest factoring algorithm and consists of testing every prime
number less than or equal to the square root of the candidate number.
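
As an illustration, here is a minimal C sketch of trial division; it tests every integer divisor rather than only primes, which finds the same factors at a small cost in wasted divisions:

#include <stdio.h>
#include <stdint.h>

/* Print the prime factorization of n by trying divisors up to
   sqrt(n). Simple, but exponential in the bit length of n. */
static void factor(uint64_t n)
{
    for (uint64_t d = 2; d * d <= n; d++) {
        while (n % d == 0) {
            printf("%llu ", (unsigned long long)d);
            n /= d;
        }
    }
    if (n > 1)
        printf("%llu", (unsigned long long)n);
    printf("\n");
}

int main(void)
{
    factor(252601);   /* prints 41 61 101, as in the examples above */
    return 0;
}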


See [251] for a good introduction to these different factoring algorithms, except for the NFS. The best
discussion of the NFS is [953]. Other, older references are [505, 1602, 1258]. Information on parallel
factoring can be found in [250].

If n is the number being factored, the fastest QS variants have a heuristic asymptotic run time of:

e^((1 + o(1)) (ln n)^(1/2) (ln ln n)^(1/2))

The NFS is much faster, with a heuristic asymptotic time estimate of:

e^((1.923 + o(1)) (ln n)^(1/3) (ln ln n)^(2/3))

In 1970, the big news was the factoring of a 41-digit hard number [1123]. (A “hard” number is one
that does not have any small factors and is not of a special form that allows it to be factored more
easily.) Ten years later, factoring hard numbers twice that size took a Cray computer just a few hours
[440].

In 1988, Carl Pomerance designed a modular factoring machine, using custom VLSI chips [1259]. The
size of the number you would be able to factor depends on how large a machine you can afford to
build. He never built it.

In 1993, a 120-digit hard number was factored using the quadratic sieve; the calculation took 825
mips-years and was completed in three months real time [463]. Other results are [504].

Today's factoring attempts use computer networks [302, 955]. In factoring a 116-digit number, Arjen
Lenstra and Mark Manasse used 400 mips-years: the spare time on an array of computers around
the world for a few months.

In March 1994, a 129-digit (428-bit) number was factored using the double large prime variation of
the multiple polynomial QS [66] by a team of mathematicians led by Lenstra. Volunteers on the
Internet carried out the computation: 600 people and 1600 machines over the course of eight months,
probably the largest ad hoc multiprocessor ever assembled. The calculation was the equivalent of
4000 to 6000 mips-years. The machines communicated via electronic mail, sending their individual
results to a central repository where the final steps of analysis took place. This computation used the
QS and five-year-old theory; it would have taken one-tenth the time using the NFS [949]. According
to [66]: “We conclude that commonly used 512-bit RSA moduli are vulnerable to any organization
prepared to spend a few million dollars and to wait a few months.” They estimate that factoring a
512-bit number would be 100 times harder using the same technology, and only 10 times harder using
the NFS and current technology [949].

To keep up on the state of the art of factoring, RSA Data Security, Inc. set up the RSA Factoring
Challenge in March 1991 [532]. The challenge consists of a list of hard numbers, each the product of
two primes of roughly equal size. Each prime was chosen to be congruent to 2 modulo 3. There are 42
numbers in the challenge, one each of length 100 digits through 500 digits in steps of 10 digits (plus
one additional number, 129 digits long). At the time of writing, RSA-100, RSA-110, RSA-120, and
RSA-129 have been factored, all using the QS. RSA-130 might be next (using the NFS), or the
factoring champions might skip directly to RSA-140.

This is a fast-moving field. It is difficult to extrapolate factoring technology because no one can
predict advances in mathematical theory. Before the NFS was discovered, many people conjectured
that the QS was asymptotically as fast as any factoring method could be. They were wrong.

Near-term advances in the NFS are likely to come in the form of bringing down the constant: 1.923.
Some numbers of a special form, like Fermat numbers, have a constant more along the lines of 1.5
[955, 954]. If the hard numbers used in public-key cryptography had that kind of constant, 1024-bit
numbers could be factored today. One way to lower the constant is to find better ways of representing
numbers as polynomials with small coefficients. The problem hasn't been studied very extensively yet,
but it is probable that advances are coming [949].

For the most current results from the RSA Factoring Challenge, send e-mail to challenge-
info@rsa.com.

Square Roots Modulo n

If n is the product of two primes, then the ability to calculate square roots mod n is computationally
equivalent to the ability to factor n [1283, 35, 36, 193]. In other words, someone who knows the prime
factors of n can easily compute the square roots of a number mod n, but for everyone else the
computation has been proven to be as hard as computing the prime factors of n.

11.5 Prime Number Generation

Public-key algorithms need prime numbers. Any reasonably sized network needs lots of them. Before
discussing the mathematics of prime number generation, I will answer a few obvious questions.

1. If everyone needs a different prime number, won't we run out? No. In fact, there are
approximately 10^151 primes 512 bits in length or less. For numbers near n, the probability that a
random number is prime is approximately one in ln n. So the total number of primes less than n
is n/(ln n). There are only 10^77 atoms in the universe. If every atom in the universe needed a
billion new primes every microsecond from the beginning of time until now, you would only
need 10^109 primes; there would still be approximately 10^151 512-bit primes left.
2. What if two people accidentally pick the same prime number? It won't happen. With
over 10^151 prime numbers to choose from, the odds of that happening are significantly less than
the odds of your computer spontaneously combusting at the exact moment you win the lottery.
3. If someone creates a database of all primes, won't he be able to use that database to
break public-key algorithms? Yes, but he can't do it. If you could store one gigabyte of
information on a drive weighing one gram, then a list of just the 512-bit primes would weigh so
much that it would exceed the Chandrasekhar limit and collapse into a black hole...so you
couldn't retrieve the data anyway.

But if factoring numbers is so hard, how can generating prime numbers be easy? The trick is that the
yes/no question, “Is n prime?” is a much easier question to answer than the more complicated
question, “What are the factors of n?”


The wrong way to find primes is to generate random numbers and then try to factor them. The right
way is to generate random numbers and test if they are prime. There are several probabilistic
primality tests; tests that determine whether a number is prime with a given degree of confidence.
Assuming this “degree of confidence” is large enough, these sorts of tests are good enough. I've heard
primes generated in this manner called “industrial-grade primes”: These are numbers that are
probably prime with a controllably small chance of error.








Assume a test is set to fail once in 2^50 tries. This means that there is a 1 in 10^15 chance that the test
falsely indicates that a composite number is prime. (The test will never falsely indicate that a prime
number is composite.) If for some reason you need more confidence that the number is prime, you can
set the failure level even lower. On the other hand, if you consider that the odds of the number being
composite are 300 million times less than the odds of winning top prize in a state lottery, you might
not worry about it so much.

Overviews of recent developments in the field can be found in [1256, 206]. Other important papers are
[1490, 384, 11, 19, 626, 651, 911].

Solovay-Strassen

Robert Solovay and Volker Strassen developed a probabilistic primality testing algorithm [1490].
Their algorithm uses the Jacobi symbol to test if p is prime:

(1) Choose a random number, a, less than p.
(2) If gcd(a,p) ≠ 1, then p fails the test and is composite.
(3) Calculate j = a^((p-1)/2) mod p.
(4) Calculate the Jacobi symbol J(a,p).
(5) If j ≠ J(a,p), then p is definitely not prime.
(6) If j = J(a,p), then the likelihood that p is not prime is no more than 50 percent.

A number a that shows p to be definitely not prime is called a witness to p's compositeness. If p is
composite, the odds of a random a being a witness are no less than 50 percent. Repeat this test t times,
with t different random values for a. The odds of a composite number passing all t tests are no more
than one in 2^t.
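
These steps map directly onto code. Below is a hedged C sketch of t rounds; the function names are ours, jacobi_sym is an iterative version of the recursive rules given earlier, rand() stands in for a proper cryptographic RNG, and the 64-bit arithmetic limits p to below 2^32:

#include <stdint.h>
#include <stdlib.h>

/* Square-and-multiply modular exponentiation; assumes p < 2^32 so
   that the products below cannot overflow a uint64_t. */
uint64_t modexp(uint64_t base, uint64_t exp, uint64_t m)
{
    uint64_t result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1)
            result = (result * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return result;
}

/* Iterative Jacobi symbol J(a,n) for odd n, using the same rules as
   the recursive algorithm given earlier. Returns 1, -1, or 0. */
int jacobi_sym(uint64_t a, uint64_t n)
{
    int result = 1;
    a %= n;
    while (a != 0) {
        while (a % 2 == 0) {            /* Rule 3: pull out factors of 2 */
            a /= 2;
            if (n % 8 == 3 || n % 8 == 5)
                result = -result;
        }
        uint64_t t = a; a = n; n = t;   /* Rule 6: reciprocity */
        if (a % 4 == 3 && n % 4 == 3)
            result = -result;
        a %= n;                         /* Rule 4 */
    }
    return (n == 1) ? result : 0;       /* n > 1 here means gcd != 1 */
}

/* Returns 1 if odd p > 3 passes t rounds, 0 if proven composite. */
int solovay_strassen(uint64_t p, int t)
{
    for (int i = 0; i < t; i++) {
        uint64_t a = 2 + (uint64_t)rand() % (p - 2);   /* 2..p-1 */
        int js = jacobi_sym(a, p);
        if (js == 0)
            return 0;                   /* gcd(a,p) > 1: composite */
        uint64_t j = modexp(a, (p - 1) / 2, p);
        if (j != (js == 1 ? (uint64_t)1 : p - 1))
            return 0;                   /* j != J(a,p): composite */
    }
    return 1;                           /* probably prime */
}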

Lehmann

Another, simpler, test was developed independently by Lehmann [945]. Here is how it tests whether p is prime:

(1) Choose a random number a less than p.
(2) Calculate a^((p-1)/2) mod p.
(3) If a^((p-1)/2) ≢ 1 or -1 (mod p), then p is definitely not prime.
(4) If a^((p-1)/2) ≡ 1 or -1 (mod p), then the likelihood that p is not prime is no more than 50
percent.

Again, the odds of a random a being a witness to p's compositeness are no less than 50 percent. Repeat
this test t times. If the calculation equals 1 or -1, but does not always equal 1, then p is probably prime
with an error rate of 1 in 2^t.
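
In code, each pass is a single modular exponentiation. A sketch, reusing the modexp helper from the Solovay-Strassen sketch above (so this fragment is meant to be compiled together with it):

#include <stdint.h>
#include <stdlib.h>

uint64_t modexp(uint64_t base, uint64_t exp, uint64_t m);  /* from above */

/* Returns 1 if odd p > 3 passes t rounds of Lehmann's test. Per the
   text, p is accepted only if every result is 1 or -1 (stored as
   p - 1) and at least one result is -1. */
int lehmann(uint64_t p, int t)
{
    int saw_minus_one = 0;
    for (int i = 0; i < t; i++) {
        uint64_t a = 2 + (uint64_t)rand() % (p - 2);
        uint64_t r = modexp(a, (p - 1) / 2, p);
        if (r != 1 && r != p - 1)
            return 0;            /* definitely composite */
        if (r == p - 1)
            saw_minus_one = 1;
    }
    return saw_minus_one;        /* all 1s is not convincing: reject */
}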

Rabin-Miller

The algorithm everyone uses (it's easy) was developed by Michael Rabin, based in part on Gary
Miller's ideas [1093, 1284]. Actually, this is a simplified version of the algorithm recommended in the
DSS proposal [1149, 1154].

Choose a random number, p, to test. Calculate b, where b is the number of times 2 divides p - 1 (i.e., 2^b
is the largest power of 2 that divides p - 1). Then calculate m, such that p = 1 + 2^b * m.

(1) Choose a random number, a, such that a is less than p.
(2) Set j = 0 and set z = a^m mod p.
(3) If z = 1, or if z = p - 1, then p passes the test and may be prime.
(4) If j > 0 and z = 1, then p is not prime.
(5) Set j = j + 1. If j < b and z ≠ p - 1, set z = z^2 mod p and go back to step (4). If z = p - 1,
then p passes the test and may be prime.
(6) If j = b and z ≠ p - 1, then p is not prime.
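
Here is the same procedure as a hedged C sketch (our names; b and m are recomputed from p as described above, and the arithmetic assumes p < 2^32 so that squaring cannot overflow a uint64_t):

#include <stdint.h>

/* Square-and-multiply modular exponentiation. */
static uint64_t modexp(uint64_t base, uint64_t exp, uint64_t m)
{
    uint64_t result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1)
            result = (result * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return result;
}

/* One Rabin-Miller round for odd p > 3, following the steps above.
   Returns 1 if p passes for this a, 0 if a proves p composite. */
static int rabin_miller_round(uint64_t a, uint64_t p)
{
    uint64_t m = p - 1;
    int b = 0;
    while (m % 2 == 0) { m /= 2; b++; }     /* p = 1 + 2^b * m, m odd */

    uint64_t z = modexp(a, m, p);           /* step (2) */
    if (z == 1 || z == p - 1)               /* step (3) */
        return 1;
    for (int j = 1; j < b; j++) {           /* steps (4) and (5) */
        z = (z * z) % p;
        if (z == p - 1)
            return 1;                       /* passes */
        if (z == 1)
            return 0;                       /* nontrivial square root of 1 */
    }
    return 0;                               /* step (6): composite */
}

Run rabin_miller_round with t independent random values of a; as discussed next, a composite survives all t rounds with probability at most (1/4)^t.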

The odds of a composite passing decrease faster with this test than with the previous ones. Three-
quarters of the possible values of a are guaranteed to be witnesses. This means that a composite
number will slip through t tests no more than (1/4)^t of the time, where t is the number of iterations.
Actually, these numbers are very pessimistic. For most random numbers, something like 99.9 percent
of the possible a values are witnesses [96].

There are even better estimations [417]. For n-bit candidate primes (where n is more than 100), the
odds of error in one test are less than n^2 * 4^(2 - n^(1/2)). And for a 256-bit n, the odds of error in six
tests are less than 1 in 2^51. More theory is in [418].

Practical Considerations

In real-world implementations, prime generation goes quickly.

(1) Generate a random n- bit number, p.
(2) Set the high-order and low-order bit to 1. (The high-order bit ensures that the prime is
of the required length and the low-order bit ensures that it is odd.)
(3) Check to make sure p is not divisible by any small primes: 3, 5, 7, 11, and so on. Many
implementations test p for divisibility by all primes less than 256. The most efficient is to test for
divisibility by all primes less than 2000 [949]. You can do this efficiently using a wheel [863].
(4) Perform the Rabin-Miller test for some random a. If p passes, generate another
random a and go through the test again. Choose a small value of a to make the calculations go
quicker. Do five tests [651]. (One might seem like enough, but do five.) If p fails one of the tests,
generate another p and try again.

Another option is not to generate a random p each time, but to incrementally search through numbers
starting at a random point until you find a prime.

Step (3) is optional, but it is a good idea. Testing a random odd p to make sure it is not divisible by 3,
5, and 7 eliminates 54 percent of the odd numbers before you get to step (4). Testing against all primes
less than 100 eliminates 76 percent of the odd numbers; testing against all primes less than 256
eliminates 80 percent. In general, the fraction of odd candidates that is not a multiple of any prime
less than n is 1.12/ln n. The larger the n you test up to, the more precomputation is required before
you get to the Rabin-Miller test.
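
Step (3) is easy to express in code. A sketch (the prime list here is abbreviated for space; extend it to all primes below 256, or 2000, as the text suggests):

#include <stddef.h>
#include <stdint.h>

/* Reject candidates with small prime factors before the expensive
   Rabin-Miller rounds. Abbreviated list; extend to taste. */
static const uint32_t small_primes[] = {
    3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53
};

static int has_small_factor(uint64_t p)
{
    for (size_t i = 0; i < sizeof small_primes / sizeof small_primes[0]; i++)
        if (p % small_primes[i] == 0 && p != small_primes[i])
            return 1;
    return 0;
}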


One implementation of this method on a Sparc II was able to find 256-bit primes in an average of 2.8
seconds, 512-bit primes in an average of 24.0 seconds, 768-bit primes in an average of 2.0 minutes, and
1024-bit primes in an average of 5.1 minutes [918].

Strong Primes

If n is the product of two primes, p and q, it may be desirable to use strong primes for p and q. These
are prime numbers with certain properties that make the product n difficult to factor by specific
factoring methods. Among the properties suggested have been [1328,651]:








- The greatest common divisor of p - 1 and q - 1 should be small.
- Both p - 1 and q - 1 should have large prime factors, respectively p' and q'.
- Both p' - 1 and q' - 1 should have large prime factors.
- Both p + 1 and q + 1 should have large prime factors.
- Both (p - 1)/2 and (q - 1)/2 should be prime [182]. (Note that if this condition is true, then
so are the first two.)

Whether strong primes are necessary is a subject of debate. These properties were designed to thwart
some older factoring algorithms. However, the fastest factoring algorithms have as good a chance of
factoring numbers that meet these criteria as they do of factoring numbers that do not [831].

I recommend against specifically generating strong primes. The length of the primes is much more
important than the structure. Moreover, structure may be damaging because it is less random.

This may change. New factoring techniques may be developed that work better on numbers with
certain properties than on numbers without them. If so, strong primes may be required once again.
Check current theoretical mathematics journals for updates.

11.6 Discrete Logarithms in a Finite Field

Modular exponentiation is another one-way function used frequently in cryptography. Evaluating
this expression is easy:

a^x mod n

The inverse problem of modular exponentiation is that of finding the discrete logarithm of a number.
This is a hard problem:

Find x where a^x ≡ b (mod n).

For example:

If 3^x ≡ 15 (mod 17), then x = 6.

Not all discrete logarithms have solutions (remember, the only valid solutions are integers). It's easy
to see that there is no solution, x, to the equation

3^x ≡ 7 (mod 13)

It's far more difficult to solve these problems using 1024-bit numbers.
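
For small numbers, exhaustive search makes these examples easy to check; the same loop is hopeless for 1024-bit moduli, which is the entire point. A sketch:

#include <stdio.h>

/* Exhaustive-search discrete log (educational only): find x with
   a^x = b (mod n), or return -1 if no solution exists. */
static int dlog(int a, int b, int n)
{
    int power = 1;
    for (int x = 0; x < n; x++) {
        if (power == b % n)
            return x;
        power = (power * a) % n;
    }
    return -1;
}

int main(void)
{
    printf("%d\n", dlog(3, 15, 17));   /* prints 6 */
    printf("%d\n", dlog(3, 7, 13));    /* prints -1: no solution */
    return 0;
}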

Calculating Discrete Logarithms in a Finite Group

There are three main groups whose discrete logarithms are of interest to cryptographers:

- The multiplicative group of prime fields: GF(p)
- The multiplicative group of finite fields of characteristic 2: GF(2^n)
- Elliptic curve groups over finite fields F: EC(F)

The security of many public-key algorithms is based on the problem of finding discrete logarithms, so
the problem has been extensively studied. A good comprehensive overview of the problem, and the



Page 220 of 666
Applied Cryptography: Second Edition - Bruce Schneier



best solutions at the time, can be found in [1189, 1039]. The best current article on the topic is [934].

If p is the modulus and is prime, then the complexity of finding discrete logarithms in GF(p) is
essentially the same as factoring an integer n of about the same size, when n is the product of two
approximately equal-length primes [1378, 934]. This is:

e^((1 + o(1)) (ln p)^(1/2) (ln ln p)^(1/2))

The number field sieve is faster, with a heuristic asymptotic time estimate of

e^((1.923 + o(1)) (ln p)^(1/3) (ln ln p)^(2/3))

Stephen Pohlig and Martin Hellman found a fast way of computing discrete logarithms in GF(p) if p -
1 has only small prime factors [1253]. For this reason, only fields where p - 1 has at least one large
factor are used in cryptography. Another algorithm [14] computes discrete logarithms at a speed
comparable to factoring; it has been expanded to fields of the form GF(p^n) [716]. This algorithm was
criticized [727] for having some theoretical problems. Other articles [1588] show how difficult the
problem really is.

Computing discrete logarithms is closely related to factoring. If you can solve the discrete logarithm
problem, then you can factor. (The converse has never been proven to be true.) Currently, there are
three methods for calculating discrete logarithms in a prime field [370, 934, 648]: the linear sieve, the
Gaussian integer scheme, and the number field sieve.

The preliminary, extensive computing has to be done only once per field. Afterward, individual
logarithms can be quickly calculated. This can be a security disadvantage for systems based on these
fields. It is important that different applications use different prime fields. Multiple users in the same
application can use a common field, though.

In the world of extension fields, GF(2^n) hasn't been ignored by researchers. An algorithm was
proposed in [727]. Coppersmith's algorithm makes finding discrete logarithms in fields such as
GF(2^127) reasonable and finding them in fields around GF(2^400) possible [368]. This was based on
work in [180]. The precomputation stage of this algorithm is enormous, but otherwise it is nice and
efficient. A practical implementation of a less efficient version of the same algorithm, after a seven-hour
precomputation period, found discrete logs in GF(2^127) in several seconds each [1130, 180]. (This
particular field, once used in some cryptosystems [142, 1631, 1632], is insecure.) For surveys of some
of these results, consult [1189, 1039].

More recently, the precomputations for GF(2^227), GF(2^313), and GF(2^401) are done, and significant
progress has been made towards GF(2^503). These calculations are being executed on an nCube-2
massively parallel computer with 1024 processors [649, 650]. Computing discrete logarithms in
GF(2^593) is still barely out of reach.

Like discrete logarithms in a prime field, the precomputation required to calculate discrete logarithms
in a polynomial field has to be done only once. Taher ElGamal [520] gives an algorithm for
calculating discrete logs in the field GF(p^2).


Chapter 12
Data Encryption Standard (DES)
12.1 Background






The Data Encryption Standard (DES), known as the Data Encryption Algorithm (DEA) by ANSI and
the DEA-1 by the ISO, has been a worldwide standard for 20 years. Although it is showing signs of
old age, it has held up remarkably well against years of cryptanalysis and is still secure against all but
possibly the most powerful of adversaries.

Development of the Standard

In the early 1970s, nonmilitary cryptographic research was haphazard. Almost no research papers
were published in the field. Most people knew that the military used special coding equipment to
communicate, but few understood the science of cryptography. The National Security Agency (NSA)
had considerable knowledge, but they did not even publicly admit their own existence.

Buyers didn't know what they were buying. Several small companies made and sold cryptographic
equipment, primarily to overseas governments. The equipment was all different and couldn't
interoperate. No one really knew if any of it was secure; there was no independent body to certify the
security. As one government report said [441]:

The intricacies of relating key variations and working principles to the real strength of the
encryption/decryption equipment were, and are, virtually unknown to almost all buyers,
and informed decisions as to the right type of on-line, off-line, key generation, etc., which
will meet buyers' security needs, have been most difficult to make.

In 1972, the National Bureau of Standards (NBS), now the National Institute of Standards and
Technology (NIST), initiated a program to protect computer and communications data. As part of
that program, they wanted to develop a single, standard cryptographic algorithm. A single algorithm
could be tested and certified, and different cryptographic equipment using it could interoperate. It
would also be cheaper to implement and readily available.

In the May 15, 1973 Federal Register, the NBS issued a public request for proposals for a standard
cryptographic algorithm. They specified a series of design criteria:

- The algorithm must provide a high level of security.
- The algorithm must be completely specified and easy to understand.
- The security of the algorithm must reside in the key; the security should not depend on
the secrecy of the algorithm.
- The algorithm must be available to all users.
- The algorithm must be adaptable for use in diverse applications.
- The algorithm must be economically implementable in electronic devices.
- The algorithm must be efficient to use.
- The algorithm must be able to be validated.
- The algorithm must be exportable.

Public response indicated that there was considerable interest in a cryptographic standard, but little
public expertise in the field. None of the submissions came close to meeting the requirements.

The NBS issued a second request in the August 27, 1974 Federal Register. Eventually they received a
promising candidate: an algorithm based on one developed by IBM during the early 1970s, called
Lucifer (see Section 13.1). IBM had a team working on cryptography at both Kingston and Yorktown
Heights, including Roy Adler, Don Coppersmith, Horst Feistel, Edna Grossman, Alan Konheim, Carl
Meyer, Bill Notz, Lynn Smith, Walt Tuchman, and Bryant Tuckerman.

The algorithm, although complicated, was straightforward. It used only simple logical operations on
small groups of bits and could be implemented fairly efficiently in hardware.

The NBS requested the NSA™s help in evaluating the algorithm™s security and determining its
suitability as a federal standard. IBM had already filed for a patent [514], but was willing to make its
intellectual property available to others for manufacture, implementation, and use. Eventually, the
NBS worked out the terms of agreement with IBM and received a nonexclusive, royalty-free license to
make, use, and sell equipment that implemented the algorithm.

Finally, in the March 17, 1975 Federal Register, the NBS published both the details of the algorithm
and IBM's statement granting a nonexclusive, royalty-free license for the algorithm, and requested
comment [536]. Another notice, in the August 1, 1975 Federal Register, again requested comments
from agencies and the general public.

And there were comments [721,497,1120]. Many were wary of the NSA's “invisible hand” in the
development of the algorithm. They were afraid that the NSA had modified the algorithm to install a
trapdoor. They complained that the NSA reduced the key size from the original 128 bits to 56 bits (see
Section 13.1). They complained about the inner workings of the algorithm. Much of NSA's reasoning
became clear in the early 1990s, but in the 1970s this seemed mysterious and worrisome.

In 1976, the NBS held two workshops to evaluate the proposed standard. The first workshop
discussed the mathematics of the algorithm and the possibility of a trapdoor [1139]. The second
workshop discussed the possibility of increasing the algorithm's key length [229]. The algorithm's
designers, evaluators, implementors, vendors, users, and critics were invited. From all reports, the
workshops were lively [1118].

Despite criticism, the Data Encryption Standard was adopted as a federal standard on November 23,
1976 [229] and authorized for use on all unclassified government communications. The official
description of the standard, FIPS PUB 46, “Data Encryption Standard,” was published on January
15, 1977 and became effective six months later [1140]. FIPS PUB 81, “DES Modes of Operation,” was
published in 1980 [1143]. FIPS PUB 74, “Guidelines for Implementing and Using the NBS Data
Encryption Standard,” was published in 1981 [1142]. NBS also published FIPS PUB 112, specifying
DES for password encryption [1144], and FIPS PUB 113, specifying DES for computer data
authentication [1145]. (FIPS stands for Federal Information Processing Standard.)

These standards were unprecedented. Never before had an NSA-evaluated algorithm been made
public. This was probably the result of a misunderstanding between NSA and NBS. The NSA thought
DES was hardware-only. The standard mandated a hardware implementation, but NBS published
enough details so that people could write DES software. Off the record, NSA has characterized DES
as one of their biggest mistakes. If they knew the details would be released so that people could write
software, they would never have agreed to it. DES did more to galvanize the field of cryptanalysis
than anything else. Now there was an algorithm to study: one that the NSA said was secure. It is no
accident that the next government standard algorithm, Skipjack (see Section 13.12), was classified.


Adoption of the Standard

The American National Standards Institute (ANSI) approved DES as a private-sector standard in
1981 (ANSI X3.92) [50]. They called it the Data Encryption Algorithm (DEA). ANSI published a
standard for DEA modes of operation (ANSI X3.106) [52], similar to the NBS document, and a
standard for network encryption that uses DES (ANSI X3.105) [51].

Two other groups within ANSI, representing retail and wholesale banking, developed DES-based
standards. Retail banking involves transactions between financial institutions and private individuals,
and wholesale banking involves transactions between financial institutions.

ANSI's Financial Institution Retail Security Working Group developed a standard for the
management and security of PINs (ANSI X9.8) [53] and another DES-based standard for the
authentication of retail financial messages (ANSI X9.19) [56]. The group has a draft standard for
secure key distribution (ANSI X9.24) [58].

ANSI's Financial Institution Wholesale Security Working Group developed its own set of standards
for message authentication (ANSI X9.9) [54], key management (ANSI X9.17) [55,1151], encryption
(ANSI X9.23) [57], and secure personal and node authentication (ANSI X9.26) [59].

The American Bankers Association develops voluntary standards for the financial industry. They
published a standard recommending DES for encryption [1], and another standard for managing
cryptographic keys [2].

Before the Computer Security Act of 1987, the General Services Administration (GSA) was
responsible for developing federal telecommunications standards. Since then, that responsibility
transferred to NIST. The GSA published three standards that used DES: two for general security and
interoperability requirements (Federal Standard 1026 [662] and Federal Standard 1027 [663]), and
one for Group 3 facsimile equipment (Federal Standard 1028) [664].

The Department of the Treasury wrote policy directives requiring that all electronic-funds transfer
messages be authenticated with DES [468,470]. They also wrote DES-based criteria that all
authentication devices must meet [469].

The ISO first voted to approve DES (they called it the DEA-1) as an international standard, then
decided not to play a role in the standardization of cryptography. However, in 1987 the International
Wholesale Financial Standards group of ISO used DES in an international authentication standard
[758] and for key management [761]. DES is also specified in an Australian banking standard [1497].

Validation and Certification of DES Equipment

As part of the DES standard, NIST validates implementations of DES. This validation confirms that
the implementation follows the standard. Until 1994, NIST only validated hardware and firmware
implementations; until then the standard prohibited software implementations. As of March 1995, 73
different implementations had been validated.

NIST also developed a program to certify that authentication equipment conformed to ANSI X9.9 and
FIPS 113. As of March, 1995, 33 products had been validated. The Department of the Treasury has an
additional certification procedure. NIST also has a program to confirm that equipment conforms to
ANSI X9.17 for wholesale key management [1151]; four products have been validated as of March,
1995.

1987

The terms of the DES standard stipulate that it be reviewed every five years. In 1983 DES was
recertified without a hitch. In the March 6, 1987 Federal Register, NBS published a request for
comments on the second five-year review. NBS offered three alternatives for consideration
[1480,1481]: reaffirm the standard for another five years, withdraw the standard, or revise the
applicability of the standard.

NBS and NSA reviewed the standard. NSA was more involved this time. Because of an executive
directive called NSDD-145, signed by Reagan, NSA had veto power over the NBS in matters of
cryptography. Initially, the NSA announced that it would not recertify the standard. The problem
was not that DES had been broken, or even that it was suspected of having been broken. It was simply
increasingly likely that it would soon be broken.

In its place, the NSA proposed the Commercial COMSEC Endorsement Program (CCEP), which
would eventually provide a series of algorithms to replace DES [85]. These NSA-designed algorithms
would not be made public, and would only be available in tamper-proof VLSI chips (see Section 25.1).

This announcement wasn't well received. People pointed out that business (especially the financial
industry) uses DES extensively, and that no adequate alternative is available. Withdrawal of the
standard would leave many organizations with no data protection. After much debate, DES was
reaffirmed as a U.S. government standard until 1992 [1141]. According to the NBS, DES would not be
certified again [1480].

1993

Never say “not.” In 1992, there was still no alternative for DES. The NBS, now called NIST, again
solicited comments on DES in the Federal Register [540]:

The purpose of this notice is to announce the review to assess the continued adequacy of
the standard to protect computer data. Comments from industry and the public are
invited on the following alternatives for FIPS 46-1. The costs (impacts) and benefits of
these alternatives should be included in the comments:

- Reaffirm the standard for another five (5) years. The National Institute of
Standards and Technology would continue to validate equipment that implements
the standard. FIPS 46-1 would continue to be the only approved method for
protecting unclassified computer data.
- Withdraw the standard. The National Institute of Standards and
Technology would no longer continue to support the standard. Organizations could
continue to utilize existing equipment that implements the standard. Other
standards could be issued by NIST as a replacement for the DES.
- Revise the applicability and/or implementation statements for the standard.
Such revisions could include changing the standard to allow the use of
implementations of the DES in software as well as hardware; to allow the iterative
use of the DES in specific applications; to allow the use of alternative algorithms
that are approved and registered by NIST.

The comment period closed on December 10, 1992. According to Raymond Kammer, then the acting
director of NIST [813]:

Last year, NIST formally solicited comments on the recertification of DES. After
reviewing those comments, and the other technical inputs that I have received, I plan to
recommend to the Secretary of Commerce that he recertify DES for another five years. I
also plan to suggest to the Secretary that when we announce the recertification we state
our intention to consider alternatives to it over the next five years. By putting that
announcement on the table, we hope to give people an opportunity to comment on orderly
technological transitions. In the meantime, we need to consider the large installed base of
systems that rely upon this proven standard.

Even though the Office of Technology Assessment quoted NIST's Dennis Branstead as saying that the
useful lifetime of DES would end in the late 1990s [1191], the algorithm was recertified for another
five years [1150]. Software implementations of DES were finally allowed to be certified.

Anyone want to guess what will happen in 1998?


12.2 Description of DES

DES is a block cipher; it encrypts data in 64-bit blocks. A 64-bit block of plaintext goes in one end of
the algorithm and a 64-bit block of ciphertext comes out the other end. DES is a symmetric algorithm:
The same algorithm and key are used for both encryption and decryption (except for minor
differences in the key schedule).

The key length is 56 bits. (The key is usually expressed as a 64-bit number, but every eighth bit is used
for parity checking and is ignored. These parity bits are the least-significant bits of the key bytes.)
The key can be any 56-bit number and can be changed at any time. A handful of numbers are
considered weak keys, but they can easily be avoided. All security rests within the key.

At its simplest level, the algorithm is nothing more than a combination of the two basic techniques of
encryption: confusion and diffusion. The fundamental building block of DES is a single combination
of these techniques (a substitution followed by a permutation) on the text, based on the key. This is
known as a round. DES has 16 rounds; it applies the same combination of techniques on the plaintext
block 16 times (see Figure 12.1).

The algorithm uses only standard arithmetic and logical operations on numbers of 64 bits at most, so
it was easily implemented in late 1970s hardware technology. The repetitive nature of the algorithm
makes it ideal for use on a special-purpose chip. Initial software implementations were clumsy, but
current implementations are better.

Outline of the Algorithm

DES operates on a 64-bit block of plaintext. After an initial permutation, the block is broken into a
right half and a left half, each 32 bits long. Then there are 16 rounds of identical operations, called
Function f, in which the data are combined with the key. After the sixteenth round, the right and left
halves are joined, and a final permutation (the inverse of the initial permutation) finishes off the
algorithm.

In each round (see Figure 12.2), the key bits are shifted, and then 48 bits are selected from the 56 bits
of the key. The right half of the data is expanded to 48 bits via an expansion permutation, combined
with 48 bits of a shifted and permuted key via an XOR, sent through 8 S-boxes producing 32 new bits,
and permuted again. These four operations make up Function f. The output of Function f is then
combined with the left half via another XOR. The result of these operations becomes the new right
half; the old right half becomes the new left half. These operations are repeated 16 times, making 16
rounds of DES.

If Bi is the result of the ith iteration, Li and Ri are the left and right halves of Bi, Ki is the 48-bit key
for round i, and f is the function that does all the substituting and permuting and XORing with the
key, then a round looks like:








Figure 12.1 DES.

Li = Ri-1
Ri = Li-1 ⊕ f(Ri-1, Ki)
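
Those two equations are the whole Feistel skeleton. Here is a schematic C sketch of the 16 rounds; f here is a dummy placeholder, not the real DES round function, which is built from the E-box, S-boxes, and P-box described below:

#include <stdint.h>

/* Placeholder round function: NOT the real DES f. */
static uint32_t f(uint32_t half, uint64_t subkey)
{
    return half ^ (uint32_t)subkey;
}

/* The Feistel skeleton of DES: 16 rounds of Li = Ri-1,
   Ri = Li-1 XOR f(Ri-1, Ki). */
static void feistel_rounds(uint32_t *left, uint32_t *right,
                           const uint64_t subkeys[16])
{
    for (int i = 0; i < 16; i++) {
        uint32_t new_right = *left ^ f(*right, subkeys[i]);
        *left = *right;       /* Li = Ri-1 */
        *right = new_right;   /* Ri = Li-1 XOR f(Ri-1, Ki) */
    }
}

Because only the XOR output of f ever touches the data, decryption runs the identical loop with the subkeys in reverse order.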

The Initial Permutation

The initial permutation occurs before round 1; it transposes the input block as described in Table
12.1. This table, like all the other tables in this chapter, should be read left to right, top to bottom. For
example, the initial permutation moves bit 58 of the plaintext to bit position 1, bit 50 to bit position 2,
bit 42 to bit position 3, and so forth.

The initial permutation and the corresponding final permutation do not affect DES's security. (As
near as anyone can tell, its primary purpose is to make it easier to load plaintext and ciphertext data
into a DES chip in byte-sized pieces. Remember that DES predates 16-bit or 32-bit microprocessor
busses.) Since this bit-wise permutation is difficult in software (although it is trivial in hardware),
many software implementations of DES leave out both the initial and final permutations. While this
new algorithm is no less secure than DES, it does not follow the DES standard and should not be
called DES.
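
All of the permutation tables in this chapter can be applied with one generic helper. A sketch (permute is our name), with bits numbered 1..n from the most significant end, as in the standard's tables:

#include <stdint.h>

/* Apply a DES-style permutation table. table[i] gives the input bit
   number that moves to output position i + 1; bits are numbered
   1..n_in from the most significant bit. The same helper works for
   the compression, expansion, and P-box tables (with the right sizes). */
static uint64_t permute(uint64_t in, const int *table, int n_out, int n_in)
{
    uint64_t out = 0;
    for (int i = 0; i < n_out; i++) {
        int bit = (in >> (n_in - table[i])) & 1;   /* fetch input bit */
        out = (out << 1) | (uint64_t)bit;          /* append to output */
    }
    return out;
}

/* Example: the initial permutation, Table 12.1. */
static const int IP[64] = {
    58, 50, 42, 34, 26, 18, 10,  2, 60, 52, 44, 36, 28, 20, 12,  4,
    62, 54, 46, 38, 30, 22, 14,  6, 64, 56, 48, 40, 32, 24, 16,  8,
    57, 49, 41, 33, 25, 17,  9,  1, 59, 51, 43, 35, 27, 19, 11,  3,
    61, 53, 45, 37, 29, 21, 13,  5, 63, 55, 47, 39, 31, 23, 15,  7
};
/* usage: uint64_t permuted = permute(block, IP, 64, 64); */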




Figure 12.2 One round of DES.








The Key Transformation

Initially, the 64-bit DES key is reduced to a 56-bit key by ignoring every eighth bit. This is described
in Table 12.2. These bits can be used as a parity check to ensure the key is error-free. After the 56-bit
key is extracted, a different 48-bit subkey is generated for each of the 16 rounds of DES. These
subkeys, Ki, are determined in the following manner.

First, the 56-bit key is divided into two 28-bit halves. Then, the halves are circularly shifted left by
either one or two bits, depending on the round. This shift is given in Table 12.3.

Table 12.1
Initial Permutation
58, 50, 42, 34, 26, 18, 10, 2, 60, 52, 44, 36, 28, 20, 12, 4,
62, 54, 46, 38, 30, 22, 14, 6, 64, 56, 48, 40, 32, 24, 16, 8,
57, 49, 41, 33, 25, 17, 9, 1, 59, 51, 43, 35, 27, 19, 11, 3,
61, 53, 45, 37, 29, 21, 13, 5, 63, 55, 47, 39, 31, 23, 15, 7



Table 12.2
Key Permutation
57, 49, 41, 33, 25, 17, 9, 1, 58, 50, 42, 34, 26, 18,
10, 2, 59, 51, 43, 35, 27, 19, 11, 3, 60, 52, 44, 36,
63, 55, 47, 39, 31, 23, 15, 7, 62, 54, 46, 38, 30, 22,
14, 6, 61, 53, 45, 37, 29, 21, 13, 5, 28, 20, 12, 4


After being shifted, 48 out of the 56 bits are selected. Because this operation permutes the order of the
bits as well as selects a subset of bits, it is called a compression permutation. This operation provides a
subset of 48 bits. Table 12.4 defines the compression permutation (also called the permuted choice).
For example, the bit in position 33 of the shifted key moves to position 35 of the output, and the bit in
position 18 of the shifted key is ignored.

Because of the shifting, a different subset of key bits is used in each subkey. Each bit is used in
approximately 14 of the 16 subkeys, although not all bits are used exactly the same number of times.
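
The shifting step is compact in code: each 28-bit half is rotated within a 28-bit mask. A sketch (rol28 is our name; the per-round shift counts come from Table 12.3, and the compression permutation of Table 12.4 can then be applied with a generic table-permutation helper like the one sketched earlier):

#include <stdint.h>

/* Per-round left-shift counts, from Table 12.3. */
static const int key_shifts[16] = { 1,1,2,2,2,2,2,2,1,2,2,2,2,2,2,1 };

/* Circular left shift of a 28-bit key half by n positions (n = 1 or 2),
   keeping the result within 28 bits. */
static uint32_t rol28(uint32_t half, int n)
{
    return ((half << n) | (half >> (28 - n))) & 0x0FFFFFFF;
}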

The Expansion Permutation

This operation expands the right half of the data, Ri, from 32 bits to 48 bits. Because this operation
changes the order of the bits as well as repeating certain bits, it is known as an expansion
permutation. This operation has two purposes: It makes the right half the same size as the key for the
XOR operation and it provides a longer result that can be compressed during the substitution
operation. However, neither of those is its main cryptographic purpose. By allowing one bit to affect
two substitutions, the dependency of the output bits on the input bits spreads faster. This is called an
avalanche effect. DES is designed to reach the condition of having every bit of the ciphertext depend
on every bit of the plaintext and every bit of the key as quickly as possible.

Figure 12.3 defines the expansion permutation. This is sometimes called the E-box. For each 4-bit
input block, the first and fourth bits each represent two bits of the output block, while the second and
third bits each represent one bit of the output block. Table 12.5 shows which output positions
correspond to which input positions. For example, the bit in position 3 of the input block moves to
position 4 of the output block, and the bit in position 21 of the input block moves to positions 30 and
32 of the output block.

Although the output block is larger than the input block, each input block generates a unique output
block.

Table 12.3
Number of Key Bits Shifted per Round
Round 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Number 1 1 2 2 2 2 2 2 1 2 2 2 2 2 2 1

Table 12.4
Compression Permutation
14, 17, 11, 24, 1, 5, 3, 28, 15, 6, 21, 10,
23, 19, 12, 4, 26, 8, 16, 7, 27, 20, 13, 2,
41, 52, 31, 37, 47, 55, 30, 40, 51, 45, 33, 48,
44, 49, 39, 56, 34, 53, 46, 42, 50, 36, 29, 32



The S-Box Substitution

After the compressed key is XORed with the expanded block, the 48-bit result moves to a substitution
operation. The substitutions are performed by eight substitution boxes, or S-boxes. Each S-box has a
6-bit input and a 4-bit output, and there are eight different S-boxes. (The total memory requirement
for the eight DES S-boxes is 256 bytes.) The 48 bits are divided into eight 6-bit sub-blocks. Each
separate block is operated on by a separate S-box: The first block is operated on by S-box 1, the
second block is operated on by S-box 2, and so on. See Figure 12.4.

Each S-box is a table of 4 rows and 16 columns. Each entry in the box is a 4-bit number. The 6 input
bits of the S-box specify under which row and column number to look for the output. Table 12.6
shows all eight S-boxes.

The input bits specify an entry in the S-box in a very particular manner. Consider an S-box input of 6
bits, labeled b1, b2, b3, b4, b5, and b6. Bits b1 and b6 are combined to form a 2-bit number, from 0 to 3,
which corresponds to a row in the table. The middle 4 bits, b2 through b5, are combined to form a 4-bit
number, from 0 to 15, which corresponds to a column in the table.

For example, assume that the input to the sixth S-box (i.e., bits 31 through 36 of the XOR function) is
110011. The first and last bits combine to form 11, which corresponds to row 3 of the sixth S-box. The
middle 4 bits combine to form 1001, which corresponds to the column 9 of the same S-box. The entry
under row 3, column 9 of S-box 6 is 14. (Remember to count rows and columns from 0 and not from
1.) The value 1110 is substituted for 110011.
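
That indexing rule is one line of bit twiddling in code. A sketch, assuming the eight S-boxes are stored as 4x16 arrays exactly as printed in Table 12.6:

/* S-box lookup for one 6-bit group b1..b6: bits b1 and b6 form the
   row number, the middle four bits b2..b5 form the column number. */
static int sbox_lookup(const int sbox[4][16], int six_bits)
{
    int row = ((six_bits >> 4) & 2) | (six_bits & 1);   /* b1, b6 */
    int col = (six_bits >> 1) & 0xF;                    /* b2..b5 */
    return sbox[row][col];
}

/* Example from the text: input 110011 (0x33) to S-box 6 selects
   row 3, column 9, whose entry is 14 (binary 1110). */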








Figure 12.3 Expansion permutation.

Table 12.5
Expansion Permutation
32, 1, 2, 3, 4, 5, 4, 5, 6, 7, 8, 9,
8, 9, 10, 11, 12, 13, 12, 13, 14, 15, 16, 17,
16, 17, 18, 19, 20, 21, 20, 21, 22, 23, 24, 25,
24, 25, 26, 27, 28, 29, 28, 29, 30, 31, 32, 1


It is, of course, far easier to implement the S-boxes in software as 64-entry arrays. It takes some
rearranging of the entries to do this, but that's not hard. (Don't just change the indexing without
rearranging the entries. The S-boxes are designed very carefully.) However, this way of describing the
S-boxes helps visualize how they work. Each S-box can be viewed as a substitution function on a 4-bit
entry: b2 through b5 go in, and a 4-bit result comes out. Bits b1 and b6 come from neighboring blocks;
they select one out of four substitution functions available in the particular S-box.

The S-box substitution is the critical step in DES. The algorithm's other operations are linear and
easy to analyze. The S-boxes are nonlinear and, more than anything else, give DES its security.

The result of this substitution phase is eight 4-bit blocks which are recombined into a single 32-bit
block. This block moves to the next step: the P-box permutation.

The P-Box Permutation

The 32-bit output of the S-box substitution is permuted according to a P-box. This permutation maps
each input bit to an output position; no bits are used twice and no bits are ignored. This is called a
straight permutation or just a permutation. Table 12.7 shows the position to which each bit moves.
For example, bit 21 moves to bit 4, while bit 4 moves to bit 31.




Figure 12.4 S-box substitution.








Table 12.6
S-Boxes
S-box 1:
14, 4, 13, 1, 2, 15, 11, 8, 3, 10, 6, 12, 5, 9, 0, 7,
0, 15, 7, 4, 14, 2, 13, 1, 10, 6, 12, 11, 9, 5, 3, 8,
4, 1, 14, 8, 13, 6, 2, 11, 15, 12, 9, 7, 3, 10, 5, 0,
15, 12, 8, 2, 4, 9, 1, 7, 5, 11, 3, 14, 10, 0, 6, 13,
S-box 2:
15, 1, 8, 14, 6, 11, 3, 4, 9, 7, 2, 13, 12, 0, 5, 10,
3, 13, 4, 7, 15, 2, 8, 14, 12, 0, 1, 10, 6, 9, 11, 5,
0, 14, 7, 11, 10, 4, 13, 1, 5, 8, 12, 6, 9, 3, 2, 15,
13, 8, 10, 1, 3, 15, 4, 2, 11, 6, 7, 12, 0, 5, 14, 9,
S-box 3:
10, 0, 9, 14, 6, 3, 15, 5, 1, 13, 12, 7, 11, 4, 2, 8,
13, 7, 0, 9, 3, 4, 6, 10, 2, 8, 5, 14, 12, 11, 15, 1,
13, 6, 4, 9, 8, 15, 3, 0, 11, 1, 2, 12, 5, 10, 14, 7,
1, 10, 13, 0, 6, 9, 8, 7, 4, 15, 14, 3, 11, 5, 2, 12,
S-box 4:
7, 13, 14, 3, 0, 6, 9, 10, 1, 2, 8, 5, 11, 12, 4, 15,
13, 8, 11, 5, 6, 15, 0, 3, 4, 7, 2, 12, 1, 10, 14, 9,
10, 6, 9, 0, 12, 11, 7, 13, 15, 1, 3, 14, 5, 2, 8, 4,
3, 15, 0, 6, 10, 1, 13, 8, 9, 4, 5, 11, 12, 7, 2, 14,
S-box 5:
2, 12, 4, 1, 7, 10, 11, 6, 8, 5, 3, 15, 13, 0, 14, 9,
14, 11, 2, 12, 4, 7, 13, 1, 5, 0, 15, 10, 3, 9, 8, 6,
4, 2, 1, 11, 10, 13, 7, 8, 15, 9, 12, 5, 6, 3, 0, 14,
11, 8, 12, 7, 1, 14, 2, 13, 6, 15, 0, 9, 10, 4, 5, 3,
S-box 6:
12, 1, 10, 15, 9, 2, 6, 8, 0, 13, 3, 4, 14, 7, 5, 11,
10, 15, 4, 2, 7, 12, 9, 5, 6, 1, 13, 14, 0, 11, 3, 8,
9, 14, 15, 5, 2, 8, 12, 3, 7, 0, 4, 10, 1, 13, 11, 6,
4, 3, 2, 12, 9, 5, 15, 10, 11, 14, 1, 7, 6, 0, 8, 13,
S-box 7:
4, 11, 2, 14, 15, 0, 8, 13, 3, 12, 9, 7, 5, 10, 6, 1,
13, 0, 11, 7, 4, 9, 1, 10, 14, 3, 5, 12, 2, 15, 8, 6,
1, 4, 11, 13, 12, 3, 7, 14, 10, 15, 6, 8, 0, 5, 9, 2,
6, 11, 13, 8, 1, 4, 10, 7, 9, 5, 0, 15, 14, 2, 3, 12,
S-box 8:
13, 2, 8, 4, 6, 15, 11, 1, 10, 9, 3, 14, 5, 0, 12, 7,
1, 15, 13, 8, 10, 3, 7, 4, 12, 5, 6, 11, 0, 14, 9, 2,
7, 11, 4, 1, 9, 12, 14, 2, 0, 6, 10, 13, 15, 3, 5, 8,
2, 1, 14, 7, 4, 10, 8, 13, 15, 12, 9, 0, 3, 5, 6, 11








Table 12.7
P-Box Permutation
16, 7, 20, 21, 29, 12, 28, 17, 1, 15, 23, 26, 5, 18, 31, 10,
2, 8, 24, 14, 32, 27, 3, 9, 19, 13, 30, 6, 22, 11, 4, 25


Finally, the result of the P-box permutation is XORed with the left half of the initial 64-bit block.
Then the left and right halves are switched and another round begins.

The Final Permutation

The final permutation is the inverse of the initial permutation and is described in Table 12.8. Note
that the left and right halves are not exchanged after the last round of DES; instead the concatenated
block R16L16 is used as the input to the final permutation. There™s nothing going on here; exchanging
the halves and shifting around the permutation would yield exactly the same result. This is so that the
algorithm can be used to both encrypt and decrypt.
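
The final permutation can be applied with the same table-driven technique sketched for the P-box above; the values below are those of Table 12.8, and the function name is mine:

#include <stdint.h>

/* The final permutation (Table 12.8), the inverse of the initial
   permutation.  Bits are numbered 1..64 from the most significant
   bit of the block. */
static const int FP[64] = {
    40, 8, 48, 16, 56, 24, 64, 32, 39, 7, 47, 15, 55, 23, 63, 31,
    38, 6, 46, 14, 54, 22, 62, 30, 37, 5, 45, 13, 53, 21, 61, 29,
    36, 4, 44, 12, 52, 20, 60, 28, 35, 3, 43, 11, 51, 19, 59, 27,
    34, 2, 42, 10, 50, 18, 58, 26, 33, 1, 41, 9, 49, 17, 57, 25
};

uint64_t final_permutation(uint64_t in)
{
    uint64_t out = 0;
    for (int i = 0; i < 64; i++)
        if (in & ((uint64_t)1 << (64 - FP[i])))
            out |= (uint64_t)1 << (63 - i);
    return out;
}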

Decrypting DES

After all the substitutions, permutations, XORs, and shifting around, you might think that the
decryption algorithm is completely different and just as confusing as the encryption algorithm. On the
contrary, the various operations were chosen to produce a very useful property: The same algorithm
works for both encryption and decryption.

With DES it is possible to use the same function to encrypt or decrypt a block. The only difference is
that the keys must be used in the reverse order. That is, if the encryption keys for each round are
K1, K2, K3, ..., K16, then the decryption keys are K16, K15, K14, ..., K1. The algorithm that generates
the key used for each round is circular as well. The key shift is a right shift and the number of
positions shifted is 0, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1.
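
A quick way to see why this works (a check of mine, not from the book): the sixteen encryption left shifts sum to 28, a full rotation of each 28-bit key register, so the registers end where they began, and right-shifting by the schedule above replays the subkeys in reverse:

#include <assert.h>
#include <stdio.h>

/* Left-rotation schedule used for encryption, and the right-rotation
   schedule for decryption quoted in the text. */
static const int enc_shift[16] = {1,1,2,2,2,2,2,2,1,2,2,2,2,2,2,1};
static const int dec_shift[16] = {0,1,2,2,2,2,2,2,1,2,2,2,2,2,2,1};

int main(void)
{
    for (int i = 0; i < 16; i++) {
        int d = 0, e = 0;
        for (int j = 0; j <= i; j++)      d += dec_shift[j];
        for (int j = 0; j <= 15 - i; j++) e += enc_shift[j];
        /* Decryption round i must land on encryption subkey K(16-i):
           the right rotations so far plus the left rotations that
           produced that subkey make a whole number of 28-bit turns. */
        assert((d + e) % 28 == 0);
    }
    printf("decryption schedule replays the subkeys in reverse\n");
    return 0;
}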

Modes of DES

FIPS PUB 81 specifies four modes of operation: ECB, CBC, OFB, and CFB (see Chapter 9) [1143].
The ANSI banking standards specify ECB and CBC for encryption, and CBC and n-bit CFB for
authentication [52].

In the software world, certification is usually not an issue. Because of its simplicity, ECB is most often
used in off-the-shelf commercial software products, although it is the most vulnerable to attack. CBC
is used occasionally, even though it is just slightly more complicated than ECB and provides much
more security.
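
To see how little CBC adds over ECB, here is a minimal sketch of the chaining step; des_encrypt() is a hypothetical stand-in for a single-block DES encryption, not a real implementation:

#include <stdint.h>

/* Hypothetical one-block cipher; a real program would call an
   actual DES implementation here. */
static uint64_t des_encrypt(uint64_t block, uint64_t key)
{
    return block ^ key; /* placeholder only */
}

/* CBC encryption in place: each plaintext block is XORed with the
   previous ciphertext block (the IV for the first block) before
   being encrypted, so identical plaintext blocks no longer produce
   identical ciphertext blocks as they do in ECB. */
void cbc_encrypt(uint64_t *blocks, int n, uint64_t key, uint64_t iv)
{
    uint64_t chain = iv;
    for (int i = 0; i < n; i++) {
        chain = des_encrypt(blocks[i] ^ chain, key);
        blocks[i] = chain;
    }
}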

Table 12.8
Final Permutation
40, 8, 48, 16, 56, 24, 64, 32, 39, 7, 47, 15, 55, 23, 63, 31,
38, 6, 46, 14, 54, 22, 62, 30, 37, 5, 45, 13, 53, 21, 61, 29,
36, 4, 44, 12, 52, 20, 60, 28, 35, 3, 43, 11, 51, 19, 59, 27,
34, 2, 42, 10, 50, 18, 58, 26, 33, 1, 41, 9, 49, 17, 57, 25








Hardware and Software Implementations of DES

Much has been written on efficient hardware and software implementations of the algorithm
[997,81,533,534,437,738,1573,176,271,1572]. At this writing, the recordholder for the fastest DES chip
is a prototype developed at Digital Equipment Corporation [512]. It supports ECB and CBC modes
and is based on a GaAs gate array of 50,000 transistors. Data can be encrypted and decrypted at a
rate of 1 gigabit per second, which translates to 16.8 million blocks per second. This is impressive.
Table 12.9 gives the specifications for some commercial DES chips. Seeming discrepancies between
clock speed and data rate are due to pipelining within the chip; a chip might have multiple DES
engines working in parallel.

The most impressive DES chip is VLSI's 6868 (formerly called “Gatekeeper”). Not only can it
perform DES encryption in only 8 clock cycles (prototypes in the lab can do it in 4 clock cycles), but it
can also do ECB triple-DES in 25 clock cycles, and OFB or CBC triple-DES in 35 clock cycles. This
sounds impossible to me, too, but I assure you it works.

A software implementation of DES on an IBM 3090 mainframe can perform 32,000 DES encryptions
per second. Most microcomputers are slower, but their speeds are impressive nonetheless. Table 12.10
[603,793] gives actual results and estimates for various Intel and Motorola microprocessors.


12.3 Security of DES

People have long questioned the security of DES [458]. There has been much speculation on the key
length, number of iterations, and design of the S-boxes. The S-boxes were particularly mysterious:
all those constants, without any apparent reason as to why or what they're for. Although IBM
claimed that the inner workings were the result of 17 man-years of intensive cryptanalysis, some
people feared that the NSA had embedded a trapdoor into the algorithm so they would have an easy
means of decrypting messages.

The U.S. Senate Select Committee on Intelligence, with full top-secret clearances, investigated the
matter in 1978. The findings of the committee are classified, but an unclassified summary of those
findings exonerated the NSA from any improper involvement in the algorithm's design [1552]. “It was
said to have convinced IBM that a shorter key was adequate, to have indirectly assisted in the
development of the S-box structures and to have certified that the final DES algorithm was, to the best
of their knowledge, free of any statistical or mathematical weaknesses” [435]. However, since the
government never made the details of the investigation public, many people remained unconvinced.

Tuchman and Meyer, two of the IBM cryptographers who designed DES, said the NSA did not alter
the design [841]:

Their basic approach was to look for strong substitution, permutation, and key scheduling
functions.... IBM has classified the notes containing the selection criteria at the request of
the NSA.... “The NSA told us we had inadvertently reinvented some of the deep secrets it
uses to make its own algorithms,” explains Tuchman.








Table 12.9
Commercial DES Chips
Manufacturer               Chip                   Year  Clock   Data Rate       Availability
AMD                        Am9518                 1981  3 MHz   1.3 MByte/s     N
AMD                        Am9568                 ?     4 MHz   1.5 MByte/s     N
AMD                        AmZ8068                1982  4 MHz   1.7 MByte/s     N
AT&T                       T7000A                 1985  ?       1.9 MByte/s     N
CE-Infosys                 SuperCrypt CE99C003    1992  20 MHz  12.5 MByte/s    Y
CE-Infosys                 SuperCrypt CE99C003A   1994  30 MHz  20.0 MByte/s    Y
Cryptech                   Cry12C102              1989  20 MHz  2.8 MByte/s     Y
Newbridge                  CA20C03A               1991  25 MHz  3.85 MByte/s    Y
Newbridge                  CA20C03W               1992  8 MHz   0.64 MByte/s    Y
Newbridge                  CA95C68/18/09          1993  33 MHz  14.67 MByte/s   Y
Pijnenburg                 PCC100                 ?     ?       2.5 MByte/s     Y
Semaphore Communications   Roadrunner284          ?     40 MHz  35.5 MByte/s    Y
VLSI Technology            VM007                  1993  32 MHz  200.0 MByte/s   Y
VLSI Technology            VM009                  1993  33 MHz  14.0 MByte/s    Y
VLSI Technology            6868                   1995  32 MHz  64.0 MByte/s    Y
Western Digital            WD2001/2002            1984  3 MHz   0.23 MByte/s    N

Table 12.10
DES Speeds on Different Microprocessors and Computers
Processor            Speed (MHz)   DES Blocks (per second)
8088                 4.7           370
68000                7.6           900
80286                6             1,100
68020                16            3,500
68030                16            3,900
80386                25            5,000
68030                50            10,000
68040                25            16,000
68040                40            23,000
80486                66            43,000
Sun ELC                            26,000
HyperSparc                         32,000
RS6000-350                         53,000
Sparc 10/52                        84,000
DEC Alpha 4000/610                 154,000
HP 9000/887          125           196,000







Later in the article, Tuchman is quoted: “We developed the DES algorithm entirely within IBM using
IBMers. The NSA did not dictate a single wire!” Tuchman reaffirmed this when he spoke on the
history of DES at the 1992 National Computer Security Conference.

On the other hand, Coppersmith wrote [373,374]: “The National Security Agency (NSA) also
provided technical advice to IBM.” And Konheim has been quoted as saying: “We sent the S-boxes off
to Washington. They came back and were all different. We ran our tests and they passed.” People
have pointed to this as evidence that the NSA put a trapdoor in DES.

NSA, when questioned regarding any imposed weakness in DES, said [363]:

Regarding the Data Encryption Standard (DES), we believe that the public record from
the Senate Committee for Intelligence's investigation in 1978 into NSA's role in the
development of the DES is responsive to your question. That committee report indicated
that NSA did not tamper with the design of the algorithm in any way and that the security
afforded by the DES was more than adequate for at least a 5-10 year time span for the
unclassified data for which it was intended. In short, NSA did not impose or attempt to
impose any weakness on the DES.

Then why did they modify the S-boxes? Perhaps it was to ensure that IBM did not put a trapdoor in
DES. The NSA had no reason to trust IBM's researchers, and would be lax in their duty if they did
not make absolutely sure that DES was free of trapdoors. Dictating the S-boxes is one way they could
make sure.

Recent cryptanalytic results have shed some light on this issue, but for many years it was the
subject of much speculation.

Weak Keys
