
And the general number field sieve is still getting faster. Mathematicians keep coming up with new
tricks, new optimizations, new techniques. There's no reason to think this trend won't continue. A
related algorithm, the special number field sieve, can already factor numbers of a certain specialized
form (numbers not generally used for cryptography) much faster than the general number field
sieve can factor general numbers of the same size. It is not unreasonable to assume that the general
number field sieve can be optimized to run this fast [1190]; it is possible that the NSA already knows
how to do this. Table 7.5 gives the number of mips-years required for the special number field sieve to
factor numbers of different lengths [1190].

At a European Institute for System Security workshop in 1991, the participants agreed that a 1024-bit
modulus should be sufficient for long-term secrets through 2002 [150]. However, they warned:
"Although the participants of this workshop feel best qualified in their respective areas, this statement
[with respect to lasting security] should be taken with caution." This is good advice.

The wise cryptographer is ultra-conservative when choosing public-key key lengths. To determine
how long a key you need requires you to look at both the intended security and lifetime of the key,
and the current state-of-the-art of factoring. Today you need a 1024-bit number to get the level of
security you got from a 512-bit number in the early 1980s. If you want your keys to remain secure for
20 years, 1024 bits is likely too short.

Even if your particular secrets aren't worth the effort required to factor your modulus, you may be at
risk. Imagine an automatic banking system that uses RSA for security. Mallory can stand up in court
and say: "Did you read in the newspaper in 1994 that RSA-129 was broken, and that 512-bit numbers
can be factored by any organization willing to spend a few million dollars and wait a few months? My
bank uses 512-bit numbers for security and, by the way, I didn't make these seven withdrawals."
Even if Mallory is lying, the judge will probably put the onus on the bank to prove it.

Table 7.4
Factoring Using the General Number Field Sieve
# of bits   Mips-years required to factor
512         30,000
768         2*10^8
1024        3*10^11
1280        1*10^14
1536        3*10^16
2048        3*10^20

Table 7.5
Factoring Using the Special Number Field Sieve
# of bits   Mips-years required to factor
512         <200
768         100,000
1024        3*10^7
1280        3*10^9
1536        2*10^11
2048        4*10^14

Page 141 of 666
Applied Cryptography: Second Edition - Bruce Schneier

Why not use 10,000-bit keys? You can, but remember that you pay a price in computation time as
your keys get longer. You want a key long enough to be secure, but short enough to be
computationally usable.

Earlier in this section I called making predictions foolish. Now I am about to make some. Table 7.6
gives my recommendations for public-key lengths, depending on how long you require the key to be
secure. There are three key lengths for each year, one secure against an individual, one secure against
a major corporation, and the third secure against a major government.

Here are some assumptions from [66]:

We believe that we could acquire 100 thousand machines without superhuman or
unethical efforts. That is, we would not set free an Internet worm or virus to find
resources for us. Many organizations have several thousand machines each on the net.
Making use of their facilities would require skillful diplomacy, but should not be
impossible. Assuming the 5 mips average power, and one year elapsed time, it is not too
unreasonable to embark on a project which would require half a million mips years.

The project to factor the 129-digit number harnessed an estimated 0.03 percent of the total computing
power of the Internet [1190], and they didn't even try very hard. It isn't unreasonable to assume that
a well-publicized project can harness 2 percent of the world's computing power for a year.

Assume a dedicated cryptanalyst can get his hands on 10,000 mips-years, a large corporation can get
10^7 mips-years, and that a large government can get 10^9 mips-years. Also assume that computing
power will increase by a factor of 10 every five years. And finally, assume that advances in factoring
mathematics allow us to factor general numbers at the speeds of the special number field sieve. (This
isn't possible yet, but the breakthrough could occur at any time.) Table 7.6 recommends different key
lengths for security during different years.

Table 7.6
Recommended Public-key Key Lengths (in bits)
Year vs. Individual vs. Corporation vs. Government
1995 768 1280 1536
2000 1024 1280 1536
2005 1280 1536 2048
2010 1280 1536 2048
2015 1536 2048 2048

Remember to take the value of the key into account. Public keys are often used to secure things of
great value for a long time: the bank's master key for a digital cash system, the key the government
uses to certify its passports, or a notary public's digital signature key. It probably isn't worth the


effort to spend months of computing time to break an individual's private key, but if you can print
your own money with a broken key the idea becomes more attractive. A 1024-bit key is long enough to
sign something that will be verified within the week, or month, or even a few years. But you don't
want to stand up in court 20 years from now with a digitally signed document and have the opposition
demonstrate how to forge documents with the same signature.

Making predictions beyond the near future is even more foolish. Who knows what kind of advances in
computing, networking, and mathematics are going to happen by 2020? However, if you look at the
broad picture, in every decade we can factor numbers twice as long as in the previous decade. This
leads to Table 7.7.
On the other hand, factoring technology may reach its Omega point long before 2045. Twenty years
from now, we may be able to factor anything. I think that is unlikely, though.

Not everyone will agree with my recommendations. The NSA has mandated 512-bit to 1024-bit keys
for their Digital Signature Standard (see Section 20.1), far less than I recommend for long-term
security. Pretty Good Privacy (see Section 24.12) has a maximum RSA key length of 2047 bits. Arjen
Lenstra, the world's most successful factorer, refuses to make predictions past 10 years [949]. Table
7.8 gives Ron Rivest's key-length recommendations, originally made in 1990, which I consider much
too optimistic [1323]. While his analysis looks fine on paper, recent history illustrates that surprises
regularly happen. It makes sense to choose your keys to be resilient against future surprises.

Table 7.7
Long-range Factoring Predictions
Year Key Length (in bits)
1995 1024
2005 2048
2015 4096
2025 8192
2035 16,384
2045 32,768

The low estimates in Table 7.8 assume a budget of $25,000, the quadratic sieve algorithm, and a
technology advance of 20 percent per year. Average estimates assume a budget of $25 million, the
general number field sieve algorithm, and a technology advance of 33 percent per year. High estimates
assume a budget of $25 billion, a general quadratic sieve algorithm running at the speed of the special
number field sieve, and a technology advance of 45 percent per year.

There is always the possibility that an advance in factoring will surprise me as well, but I factored
that into my calculations. But why trust me? I just proved my own foolishness by making predictions.

DNA Computing

Now it gets weird. In 1994 Leonard M. Adleman actually demonstrated a method for solving an NP-
complete problem (see Section 11.2) in a biochemistry laboratory, using DNA molecules to represent
guesses at solutions to the problem [17]. (That's "solutions" meaning "answers," not meaning "liquids
containing solutes." Terminology in this field is going to be awkward.) The problem that Adleman
solved was an instance of the Directed Hamiltonian Path problem: Given a map of cities connected by
one-way roads, find a path from City A to City Z that passes exactly once through all other cities on


the map. Each city was represented by a different random 20-base string of DNA; with conventional
molecular biology techniques, Adleman synthesized 50 picomols (30 million million molecules) of the
DNA string representing each city. Each road was also represented by a 20-base DNA string, but
these strings were not chosen randomly: They were cleverly chosen so that the "beginning" end of the
DNA string representing the road from City P to City K ("Road PK") would tend to stick to the DNA
string representing City P, and the end of Road PK would tend to stick to City K.

Table 7.8
Rivest's Optimistic Key-length Recommendations
(in bits)
Year Low Average High
1990 398 515 1289
1995 405 542 1399
2000 422 572 1512
2005 439 602 1628
2010 455 631 1754
2015 472 661 1884
2020 489 677 2017

Adleman synthesized 50 picomols of the DNA representing each road, mixed them all together with
the DNA representing all the cities, and added a ligase enzyme, which links together the ends of DNA
molecules. The clever relationship between the road DNA strings and the city DNA strings causes the
ligase to link the road DNA strings together in a legal fashion. That is, the "exit" end of the road from
P to K will always be linked to the "entrance" end of some road that originates at City K, never to the
"exit" end of any road and never to the "entrance" end of a road that originates at some city other
than K. After a carefully limited reaction time, the ligase has built a large number of DNA strings
representing legal but otherwise random multiroad paths within the map.

From this soup of random paths, Adleman can find the tiniest trace (perhaps even a single
molecule) of the DNA that represents the answer to the problem. Using common techniques of
molecular biology, he discards all the DNA strings representing paths that are too long or too short.
(The number of roads in the desired path must equal the number of cities minus one.) Next he
discards all the DNA strings that do not pass through City A, then those that miss City B, and so
forth. If any DNA survives this screening, it is examined to find the sequence of roads that it
represents: This is the solution to the directed Hamiltonian path problem.

By definition, an instance of any NP-complete problem can be transformed, in polynomial time, into
an instance of any other NP-complete problem, and therefore into an instance of the directed
Hamiltonian path problem. Since the 1970s, cryptologists have been trying to use NP-complete
problems for public-key cryptography.

While the instance that Adleman solved was very modest (seven cities on his map, a problem that can
be solved by inspection in a few minutes), the technique is in its infancy and has no forbidding
obstacles keeping it from being extended to larger problems. Thus, arguments about the security of
cryptographic protocols based on NP-complete problems, arguments that heretofore have begun,
"Suppose an adversary has a million processors, each of which can perform a million tests each
second," may soon have to be replaced with, "Suppose an adversary has a thousand fermentation
vats, each 20,000 liters in capacity."


Quantum Computing

Now, it gets even weirder. The underlying principle behind quantum computing involves Einstein's
wave-particle duality. A photon can simultaneously exist in a large number of states. A classic
example is that a photon behaves like a wave when it encounters a partially silvered mirror; it is both
reflected and transmitted, just as an ocean wave striking a seawall with a small opening in it will both
reflect off the wall and pass through it. However, when a photon is measured, it behaves like a
particle and only a single state can be detected.

In [1443], Peter Shor outlines a design for a factoring machine based on quantum mechanical
principles. Unlike a classical computer, which can be thought of as having a single, fixed state at a
given time, a quantum computer has an internal wave function, which is a superposition of a
combination of the possible basis states. Computations transform the wave function, altering the
entire set of states in a single operation. In this way, a quantum computer is an improvement over
classical finite-state automata: It uses quantum properties to allow it to factor in polynomial time,
theoretically allowing one to break cryptosystems based on factoring or the discrete logarithm
problem.

The consensus is that quantum computers are compatible with the fundamental laws of quantum
mechanics. However, it is unlikely that a quantum factoring machine will be built in the foreseeable
future...if ever. One major obstacle is the problem of decoherence, which causes superimposed
waveforms to lose their distinctness and makes the computer fail. Decoherence will make a quantum
computer running at 1 Kelvin fail after just one nanosecond. Additionally, an enormous number of
gates would be required to build a quantum factoring device; this may render the machine impossible
to build. Shor's design requires a complete modular exponentiator. No internal clock can be used, so
millions or possibly billions of individual gates would be required to factor cryptographically
significant numbers. If n quantum gates have some minimum probability p of failure, the average
number of trials required per successful run is (1/(1 - p))^n. The number of gates required presumably
grows polynomially with the length (in bits) of the number, so the number of trials required would be
superexponential with the length of the numbers used: worse than factoring by trial division!
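
The trial-count arithmetic above is easy to check with a short sketch (the gate counts and the failure probability below are illustrative round numbers, not figures from Shor's design):

```python
import math

# If each of n gates fails independently with probability p, a whole run
# succeeds with probability (1 - p)**n, so the expected number of trials
# per successful run is (1/(1 - p))**n.
def expected_trials(n, p):
    return (1.0 / (1.0 - p)) ** n

def log10_expected_trials(n, p):
    # Same quantity in log10, usable when the value itself overflows a float.
    return n * math.log10(1.0 / (1.0 - p))

# A million gates with a one-in-a-million failure rate is tolerable:
print(expected_trials(10**6, 1e-6))        # roughly e, about 2.7 trials

# A billion gates at the same failure rate is hopeless:
print(log10_expected_trials(10**9, 1e-6))  # about 434, i.e. 10^434 trials
```

The jump from 2.7 trials to 10^434 trials is the superexponential blowup the text describes.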

So, while quantum factorization is an area of great academic excitement, it is extremely unlikely that
it will be practical in the foreseeable future. But don't say I didn't warn you.

7.3 Comparing Symmetric and Public-Key Key Length

A system is going to be attacked at its weakest point. If you are designing a system that uses both
symmetric and public-key cryptography, the key lengths for each type of cryptography should be
chosen so that it is equally difficult to attack the system via each mechanism. It makes no sense to use
a symmetric algorithm with a 128-bit key together with a public-key algorithm with a 386-bit key, just
as it makes no sense to use a symmetric algorithm with a 56-bit key together with a public-key
algorithm with a 1024-bit key.

Table 7.9 lists public-key modulus lengths whose factoring difficulty roughly equals the difficulty of a
brute-force attack for popular symmetric key lengths.

This table says that if you are concerned enough about security to choose a symmetric algorithm with
a 112-bit key, you should choose a modulus length for your public-key algorithm of about 1792 bits.
In general, though, you should choose a public-key length that is more secure than your symmetric-
key length. Public keys generally stay around longer, and are used to protect more information.


7.4 Birthday Attacks against One-Way Hash Functions

There are two brute-force attacks against a one-way hash function. The first is the most obvious:
Given the hash of a message, H(M), an adversary would like to be able to create another document, M',
such that H(M) = H(M'). The second attack is more subtle: An adversary would like to find two
random messages, M and M', such that H(M) = H(M'). This is called a collision, and it is a far easier
attack than the first one.

Table 7.9
Symmetric and Public-key Key Lengths with
Similar Resistances to Brute-Force Attacks
Symmetric Public-key
Key Length Key Length
56 bits 384 bits
64 bits 512 bits
80 bits 768 bits
112 bits 1792 bits
128 bits 2304 bits
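
Table 7.9 can be expressed as a small lookup (the pairs come straight from the table; the helper function and its name are mine):

```python
# Public-key modulus lengths whose factoring difficulty roughly matches a
# brute-force search of the given symmetric key length (Table 7.9).
EQUIVALENT_BITS = {56: 384, 64: 512, 80: 768, 112: 1792, 128: 2304}

def matching_modulus_bits(symmetric_bits):
    # Round up to the next tabulated symmetric length, then look it up.
    for sym in sorted(EQUIVALENT_BITS):
        if symmetric_bits <= sym:
            return EQUIVALENT_BITS[sym]
    raise ValueError("beyond the table")

print(matching_modulus_bits(112))   # 1792
```

As the text advises, treat these as lower bounds for the public key, since public keys tend to live longer and protect more.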

The birthday paradox is a standard statistics problem. How many people must be in a room for the
chance to be greater than even that one of them shares your birthday? The answer is 253. Now, how
many people must there be for the chance to be greater than even that at least two of them will share
the same birthday? The answer is surprisingly low: 23. With only 23 people in the room, there are still
253 different pairs of people in the room.
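
Both answers are easy to verify numerically:

```python
import math

def p_shares_yours(n):
    # Probability that at least one of n people has *your* birthday.
    return 1 - (364 / 365) ** n

def p_some_pair_shares(n):
    # Probability that some pair among n people shares a birthday.
    all_distinct = 1.0
    for i in range(n):
        all_distinct *= (365 - i) / 365
    return 1 - all_distinct

print(p_shares_yours(253))      # just over 1/2; 252 people fall just short
print(p_some_pair_shares(23))   # about 0.507; 22 people fall just short
print(math.comb(23, 2))         # 253: the number of pairs among 23 people
```

The coincidence is not accidental: 23 people form 253 pairs, and 253 is exactly the count needed for the single-birthday question.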

Finding someone with a specific birthday is analogous to the first attack; finding two people with the
same random birthday is analogous to the second attack. The second attack is commonly known as a
birthday attack.

Assume that a one-way hash function is secure and the best way to attack it is by using brute force. It
produces an m-bit output. Finding a message that hashes to a given hash value would require hashing
2^m random messages. Finding two messages that hash to the same value would only require hashing
2^(m/2) random messages. A machine that hashes a million messages per second would take 600,000
years to find a second message that matched a given 64-bit hash. The same machine could find a pair
of messages that hashed to the same value in about an hour.
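
The arithmetic behind those two figures, assuming one million hashes per second:

```python
# Second-preimage search: up to 2^64 messages at 10^6 hashes/second.
SECONDS_PER_YEAR = 365 * 24 * 3600
preimage_years = 2**64 / 10**6 / SECONDS_PER_YEAR
print(round(preimage_years))       # about 585,000 years ("600,000" above)

# Collision search: only 2^(64/2) = 2^32 messages needed.
collision_hours = 2**32 / 10**6 / 3600
print(round(collision_hours, 1))   # about 1.2 hours ("about an hour")
```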

This means that if you are worried about a birthday attack, you should choose a hash value twice as
long as you otherwise might think you need. For example, if you want to drop the odds of someone
breaking your system to less than 1 in 2^80, use a 160-bit one-way hash function.

7.5 How Long Should a Key Be?

There's no single answer to this question; it depends on the situation. To determine how much
security you need, you must ask yourself some questions. How much is your data worth? How long
does it need to be secure? And who are your adversaries?

A customer list might be worth $1000. Financial data for an acrimonious divorce case might be worth


$10,000. Advertising and marketing data for a large corporation might be worth $1 million. The
master keys for a digital cash system might be worth billions.

In the world of commodities trading, secrets only need to be kept for minutes. In the newspaper
business, today's secrets are tomorrow's headlines. Product development information might need to
remain secret for a year or two. U.S. Census data are required by law to remain secret for 100 years.

The guest list for your sister's surprise birthday party is only interesting to your nosy relatives.
Corporate trade secrets are interesting to rival companies. Military secrets are interesting to rival
militaries.

You can even specify security requirements in these terms. For example:

The key length must be such that there is a probability of no more than 1 in 2^32 that an
attacker with $100 million to spend could break the system within one year, even
assuming technology advances at a rate of 30 percent per annum over the period.

Table 7.10, taken partially from [150], estimates the secrecy requirements for several kinds of
information:

Future computing power is harder to estimate, but here is a reasonable rule of thumb: The efficiency
of computing equipment divided by price doubles every 18 months and increases by a factor of 10
every five years. Thus, in 50 years the fastest computers will be 10 billion times faster than today's!
Remember, too, that these numbers only relate to general-purpose computers; who knows what kind
of specialized cryptosystem-breaking equipment will be developed in the next 50 years?
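
The two growth figures quoted are mutually consistent, which a quick calculation confirms:

```python
# Doubling every 18 months compounds to a factor of 2^(60/18) over five
# years (60 months), which is almost exactly 10.
per_five_years = 2 ** (60 / 18)
print(per_five_years)      # about 10.08

# Ten such five-year periods fit in 50 years: a factor of 10^10.
per_fifty_years = 10 ** (50 / 5)
print(per_fifty_years)     # 10,000,000,000: "10 billion times faster"
```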

Assuming that a cryptographic algorithm will be in use for 30 years, you can get some idea how
secure it must be. An algorithm designed today probably will not see general use until 2000, and will
still be used in 2025 to encrypt messages that must remain secret until 2075 or later.

Table 7.10
Security Requirements for Different Information
Type of Traffic                                 Lifetime       Minimum Key Length
Tactical military information                   minutes/hours  56-64 bits
Product announcements, mergers, interest rates  days/weeks     64 bits
Long-term business plans                        years          64 bits
H-bomb secrets                                  >40 years      128 bits
Identities of spies                             >50 years      128 bits
Personal affairs                                >50 years      128 bits
Diplomatic embarrassments                       >65 years      at least 128 bits
U.S. census data                                100 years      at least 128 bits

7.6 Caveat Emptor

This entire chapter is a whole lot of nonsense. The very notion of predicting computing power 10
years in the future, let alone 50 years, is absolutely ridiculous. These calculations are meant to be a
guide, nothing more. If the past is any guide, the future will be vastly different from anything we can


predict.

Be conservative. If your keys are longer than you imagine necessary, then fewer technological
surprises can harm you.

Chapter 8
Key Management
Alice and Bob have a secure communications system. They play mental poker, simultaneously sign
contracts, even exchange digital cash. Their protocols are secure. Their algorithms are top-notch.
Unfortunately, they buy their keys from Eve's "Keys-R-Us," whose slogan is "You can trust us:
Security is the middle name of someone our ex-mother-in-law's travel agent met at the Kwik-E-
Mart."

Eve doesn't have to break the algorithms. She doesn't have to rely on subtle flaws in the protocols.
She can use their keys to read all of Alice's and Bob's message traffic without lifting a cryptanalytic
finger.

In the real world, key management is the hardest part of cryptography. Designing secure
cryptographic algorithms and protocols isn't easy, but you can rely on a large body of academic
research. Keeping the keys secret is much harder.

Cryptanalysts often attack both symmetric and public-key cryptosystems through their key
management. Why should Eve bother going through all the trouble of trying to break the
cryptographic algorithm if she can recover the key because of sloppy key storage procedures? Why
should she spend $10 million building a cryptanalysis machine if she can spend $1000 bribing a clerk?
Spending a million dollars to buy a well-placed communications clerk in a diplomatic embassy can be
a bargain. The Walkers sold U.S. Navy encryption keys to the Soviets for years. The CIA's director of
counterintelligence went for less than $2 million, wife included. That's far cheaper than building
massive cracking machines and hiring brilliant cryptanalysts. Eve can steal the keys. She can arrest
or abduct someone who knows the keys. She can seduce someone and get the keys that way. (The
Marines who guarded the U.S. Embassy in Moscow were not immune to that attack.) It's a whole lot
easier to find flaws in people than it is to find them in cryptosystems.

Alice and Bob must protect their key to the same degree as all the data it encrypts. If a key isn't
changed regularly, this can be an enormous amount of data. Unfortunately, many commercial
products simply proclaim "We use DES" and forget about everything else. The results are not very
impressive.

For example, the DiskLock program for Macintosh (version 2.1), sold at most software stores, claims
the security of DES encryption. It encrypts files using DES. Its implementation of the DES algorithm
is correct. However, DiskLock stores the DES key with the encrypted file. If you know where to look
for the key, and want to read a file encrypted with DiskLock's DES, recover the key from the
encrypted file and then decrypt the file. It doesn't matter that this program uses DES encryption; the
implementation is completely insecure.

Further information on key management can be found in [457,98,1273,1225,775,357]. The following
sections discuss some of the issues and solutions.

8.1 Generating Keys


The security of an algorithm rests in the key. If you're using a cryptographically weak process to
generate keys, then your whole system is weak. Eve need not cryptanalyze your encryption algorithm;
she can cryptanalyze your key generation algorithm.

Reduced Keyspaces

DES has a 56-bit key. Implemented properly, any 56-bit string can be the key; there are 2^56 (about
10^16) possible keys. Norton Discreet for MS-DOS (versions 8.0 and earlier) only allows ASCII keys,
forcing the high-order bit of each byte to be zero. The program also converts lowercase letters to
uppercase (so the fifth bit of each byte is always the opposite of the sixth bit) and ignores the low-order
bit of each byte, resulting in only 2^40 possible keys. These poor key generation procedures have made
its DES ten thousand times easier to break than a proper implementation.

Table 8.1 gives the number of possible keys with various constraints on the input strings. Table 8.2
gives the time required for an exhaustive search through all of those keys, given a million attempts per
second. Remember, there is very little time differential between an exhaustive search for 8-byte keys
and an exhaustive search of 4-, 5-, 6-, 7-, and 8-byte keys.
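
The entries in Tables 8.1 and 8.2 follow directly from the keyspace sizes; a short sketch reproduces them at the one-million-trials-per-second rate the text assumes:

```python
def keyspace(chars, nbytes):
    # Number of distinct keys of nbytes bytes over a charset of that size.
    return chars ** nbytes

def search_days(chars, nbytes, trials_per_second=10**6):
    # Worst-case exhaustive-search time, in days.
    return keyspace(chars, nbytes) / trials_per_second / 86400

print(keyspace(26, 8))               # 208,827,064,576: the 2.1*10^11 entry
print(round(search_days(26, 8), 1))  # about 2.4 days, matching Table 8.2

# Searching every length from 4 through 8 bytes costs barely more than
# searching 8-byte keys alone, as the text notes:
total = sum(keyspace(26, n) for n in range(4, 9))
print(total / keyspace(26, 8))       # about 1.04
```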

All specialized brute-force hardware and parallel implementations will work here. Testing a million
keys per second (either with one machine or with multiple machines in parallel), it is feasible to crack
lowercase-letter and lowercase-letter-and-number keys up to 8 bytes long, alphanumeric-character
keys up to 7 bytes long, printable character and ASCII-character keys up to 6 bytes long, and 8-bit-
ASCII-character keys up to 5 bytes long.

Table 8.1
Number of Possible Keys of Various Keyspaces
                                    4-Byte     5-Byte     6-Byte     7-Byte     8-Byte
Lowercase letters (26):             460,000    1.2*10^7   3.1*10^8   8.0*10^9   2.1*10^11
Lowercase letters and digits (36):  1,700,000  6.0*10^7   2.2*10^9   7.8*10^10  2.8*10^12
Alphanumeric characters (62):       1.5*10^7   9.2*10^8   5.7*10^10  3.5*10^12  2.2*10^14
Printable characters (95):          8.1*10^7   7.7*10^9   7.4*10^11  7.0*10^13  6.6*10^15
ASCII characters (128):             2.7*10^8   3.4*10^10  4.4*10^12  5.6*10^14  7.2*10^16
8-bit ASCII characters (256):       4.3*10^9   1.1*10^12  2.8*10^14  7.2*10^16  1.8*10^19

Table 8.2
Exhaustive Search of Various Keyspaces (assume one million attempts per second)
4-Byte 5-Byte 6-Byte 7-Byte 8-Byte
Lowercase letters (26): .5 seconds 12 seconds 5 minutes 2.2 hours 2.4 days
Lowercase letters and digits (36): 1.7 seconds 1 minute 36 minutes 22 hours 33 days
Alphanumeric characters (62): 15 seconds 15 minutes 16 hours 41 days 6.9 years
Printable characters (95): 1.4 minutes 2.1 hours 8.5 days 2.2 years 210 years
ASCII characters (128): 4.5 minutes 9.5 hours 51 days 18 years 2300 years
8-bit ASCII characters (256): 1.2 hours 13 days 8.9 years 2300 years 580,000 years


And remember, computing power doubles every 18 months. If you expect your keys to stand up
against brute-force attacks for 10 years, you'd better plan accordingly.

Poor Key Choices

When people choose their own keys, they generally choose poor ones. They're far more likely to
choose "Barney" than "*9 (hH/A." This is not always due to poor security practices; "Barney" is
easier to remember than "*9 (hH/A." The world's most secure algorithm won't help much if the users
habitually choose their spouse's names for keys or write their keys on little pieces of paper in their
wallets. A smart brute-force attack doesn't try all possible keys in numerical order; it tries the
obvious keys first.

This is called a dictionary attack, because the attacker uses a dictionary of common keys. Daniel Klein
was able to crack 40 percent of the passwords on the average computer using this system [847,848].
No, he didn't try one password after another, trying to log in. He copied the encrypted password file
and mounted the attack offline. Here's what he tried:

1. The user's name, initials, account name, and other relevant personal information as a
possible password. All in all, up to 130 different passwords were tried based on this
information. For an account name klone with a user named "Daniel V. Klein," some of the
passwords that would be tried were: klone, klone0, klone1, klone123, dvk, dvkdvk, dklein,
DKlein, leinad, nielk, dvklein, danielk, DvkkvD, DANIEL-KLEIN, (klone), KleinD, and so on.
2. Words from various databases. These included lists of men's and women's names
(some 16,000 in all); places (including variations so that "spain," "spanish," and "spaniard"
would all be considered); names of famous people; cartoons and cartoon characters; titles,
characters, and locations from films and science fiction stories; mythical creatures (garnered
from Bulfinch's Mythology and dictionaries of mythical beasts); sports (including team names,
nicknames, and specialized terms); numbers (both as numerals, "2001," and written out,
"twelve"); strings of letters and numbers ("a," "aa," "aaa," "aaaa," etc.); Chinese syllables
(from the Pinyin Romanization of Chinese, an international standard system of writing Chinese
on an English keyboard); the King James Bible; biological terms; colloquial and vulgar phrases
(such as "fuckyou," "ibmsux," and "deadhead"); keyboard patterns (such as "qwerty," "asdf,"
and "zxcvbn"); abbreviations (such as "roygbiv," the colors in the rainbow, and
"ooottafagvah," a mnemonic for remembering the 12 cranial nerves); machine names
(acquired from /etc/hosts); characters, plays, and locations from Shakespeare; common Yiddish
words; the names of asteroids; and a collection of words from various technical papers Klein
previously published. All told, more than 60,000 separate words were considered per user (with
any inter- and intra-dictionary duplicates being discarded).
3. Variations on the words from step 2. This included making the first letter uppercase or
a control character, making the entire word uppercase, reversing the word (with and without
the aforementioned capitalization), changing the letter 'o' to the digit '0' (so that the word
"scholar" would also be checked as "sch0lar"), changing the letter 'l' to the digit '1' (so that the
word "scholar" would also be checked as "scho1ar"), and performing similar manipulation to
change the letter 'z' into the digit '2', and the letter 's' into the digit '5'. Another test was to
make the word into a plural (irrespective of whether the word was actually a noun), with
enough intelligence built in so that "dress" became "dresses," "house" became "houses," and
"daisy" became "daisies." Klein did not consider pluralization rules exclusively, though, so that
"datum" forgivably became "datums" (not "data"), while "sphynx" became "sphynxs" (and not
"sphynges"). Similarly, the suffixes "-ed," "-er," and "-ing" were added to transform words.
These variations added roughly 1,000,000 words to the list of possible passwords that were
tested for each user.


4. Various capitalization variations on the words from step 2 that were not considered in
step 3. This included all single-letter capitalization variations (so that "michael" would also be
checked as "mIchael," "miChael," "micHael," "michAel," etc.), double-letter capitalization
variations ("MIchael," "MiChael," "MicHael,"..., "mIChael," "mIcHael," etc.), triple-letter
variations, etc. The single-letter variations added roughly another 400,000 words to be checked
per user, while the double-letter variations added another 1,500,000 words. Three-letter
variations would have added at least another 3,000,000 words per user had there been enough
time to complete the tests. Tests of four-, five-, and six-letter variations were deemed to be
impracticable without much more computational horsepower to carry them out.
5. Foreign language words on foreign users. The specific test that was performed was to
try Chinese language passwords on users with Chinese names. The Pinyin Romanization of
Chinese syllables was used, combining syllables together into one-, two-, and three-syllable
words. Because no tests were done to determine whether the words actually made sense, an
exhaustive search was initiated. Since there are 398 Chinese syllables in the Pinyin system, there
are 158,404 two-syllable words, and slightly more than 16,000,000 three-syllable words. A
similar mode of attack could as easily be used with English, using rules for building
pronounceable nonsense words.
6. Word pairs. The magnitude of an exhaustive test of this nature is staggering. To
simplify the test, only words of three or four characters in length from /usr/dict/words were used.
Even so, the number of word pairs is about ten million.
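The per-user counts in step 4 above follow directly from the combinatorics: a word of length L has C(L, k) variants with exactly k letters capitalized. A small sketch (the helper name `cap_variants` is ours, not Klein's):

```python
from itertools import combinations

def cap_variants(word, k):
    """All variants of `word` (lowercased) with exactly k positions uppercased."""
    variants = []
    for positions in combinations(range(len(word)), k):
        chars = list(word.lower())
        for i in positions:
            chars[i] = chars[i].upper()
        variants.append("".join(chars))
    return variants

# A 7-letter word has C(7,1) = 7 single-letter variants and C(7,2) = 21
# double-letter ones, which is why the per-user test counts grow so quickly.
print(len(cap_variants("michael", 1)))  # 7
print(len(cap_variants("michael", 2)))  # 21
```

Summed over a dictionary, these per-word counts give the hundreds of thousands of extra candidates per user cited above.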

A dictionary attack is much more powerful when it is used against a file of keys and not a single key.
A single user may be smart enough to choose good keys. If a thousand people each choose their own
key as a password to a computer system, the odds are excellent that at least one person will choose a
key in the attacker's dictionary.

Random Keys

Good keys are random-bit strings generated by some automatic process. If the key is 64 bits long,
every possible 64-bit key must be equally likely. Generate the key bits from either a reliably random
source (see Section 17.14) or a cryptographically secure pseudo-random-bit generator (see Chapters
16 and 17). If these automatic processes are unavailable, flip a coin or roll a die.

This is important, but don't get too caught up in arguing about whether random noise from audio
sources is more random than random noise from radioactive decay. None of these random-noise
sources will be perfect, but they will probably be good enough. It is important to use a good random-
number generator for key generation, but it is far more important to use good encryption algorithms
and key management procedures. If you are worried about the randomness of your keys, use the key-
crunching technique described below.

Some encryption algorithms have weak keys: specific keys that are less secure than the other keys. I
advise testing for these weak keys and generating a new one if you discover one. DES has only 16
weak keys out of 2^56, so the odds of generating any of these keys are incredibly small.
argued that a cryptanalyst would have no idea that a weak key is being used and therefore gains no
advantage from their accidental use. It has also been argued that not using weak keys gives a
cryptanalyst information. However, testing for the few weak keys is so easy that it seems imprudent
not to do so.
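A rejection test along these lines is a few lines of code. This sketch lists only the four classic DES weak keys; a complete check would also cover the twelve semiweak keys, and a real implementation would additionally set the parity bits:

```python
import os

# The four DES weak keys (shown with their odd-parity bits); the twelve
# semiweak keys are omitted here for brevity.
DES_WEAK_KEYS = {
    bytes.fromhex("0101010101010101"),
    bytes.fromhex("fefefefefefefefe"),
    bytes.fromhex("1f1f1f1f0e0e0e0e"),
    bytes.fromhex("e0e0e0e0f1f1f1f1"),
}

def generate_des_key() -> bytes:
    """Draw random 8-byte keys, rejecting any on the weak-key list."""
    key = os.urandom(8)
    while key in DES_WEAK_KEYS:   # astronomically unlikely, but cheap to check
        key = os.urandom(8)
    return key
```

The loop almost never iterates: the odds of hitting a listed key are 16 in 2^56.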

Generating keys for public-key cryptography systems is harder, because often the keys must have
certain mathematical properties (they may have to be prime, be a quadratic residue, etc.). Techniques
for generating large random prime numbers are discussed in Section 11.5. The important thing to
remember from a key management point of view is that the random seeds for those generators must
be just that: random.

Generating a random key isn't always possible. Sometimes you need to remember your key. (See how
long it takes you to remember 25e8 56f2 e8ba c820.) If you have to generate an easy-to-remember key,
make it obscure. The ideal would be something easy to remember, but difficult to guess. Here are
some suggestions:

- Word pairs separated by a punctuation character, for example "turtle*moose" or
"zorch!splat"
- Strings of letters that are an acronym of a longer phrase; for example, "Mein
Luftkissenfahrzeug ist voller Aale!" ("My hovercraft is full of eels!") generates the key "MLivA!"

Pass Phrases

A better solution is to use an entire phrase instead of a word, and to convert that phrase into a key.
These phrases are called pass phrases. A technique called key crunching converts the easy-to-
remember phrases into random keys. Use a one-way hash function to transform an arbitrary-length
text string into a pseudo-random-bit string.

For example, the easy-to-remember text string:

My name is Ozymandias, king of kings. Look on my works, ye mighty, and despair.

might crunch into this 64-bit key:

e6c1 4398 5ae9 0a9b
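Key crunching is just a one-way hash truncated to the key length. A minimal sketch, assuming SHA-256 as the hash (the text does not mandate a particular function, and this choice will not reproduce the example key above):

```python
import hashlib

def crunch(passphrase: str, key_bytes: int = 8) -> bytes:
    """Hash an arbitrary-length pass phrase down to a fixed-size key."""
    return hashlib.sha256(passphrase.encode("utf-8")).digest()[:key_bytes]

key = crunch("My name is Ozymandias, king of kings. "
             "Look on my works, ye mighty, and despair.")
print(key.hex())   # 16 hex digits: a 64-bit key derived from the phrase
```

Because the hash is deterministic, both ends of a link can derive the same key from the same remembered phrase.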

Of course, it can be difficult to type an entire phrase into a computer with the echo turned off. Clever
suggestions to solve this problem would be appreciated.

If the phrase is long enough, the resulting key will be random. Exactly what "long enough" means is
open to interpretation. Information theory tells us that standard English has about 1.3 bits of
information per character (see Section 11.1). For a 64-bit key, a pass phrase of about 49 characters, or
10 normal English words, should be sufficient. As a rule of thumb, figure that you need five words for
each 4 bytes of key. That's a conservative assumption, since it doesn't take into account case, spacing,
and punctuation.
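The rule-of-thumb arithmetic is easy to check: divide the key length in bits by the entropy rate of English.

```python
# Standard English carries roughly 1.3 bits of entropy per character, so a
# 64-bit key needs about 64 / 1.3, or roughly 49, characters of pass phrase.
BITS_PER_CHAR = 1.3

def chars_needed(key_bits: int) -> int:
    return round(key_bits / BITS_PER_CHAR)

print(chars_needed(64))    # 49 characters
print(chars_needed(128))   # 98 characters for a 128-bit key
```

At roughly five characters per English word (plus a space), 49 characters is the "10 normal English words" quoted above.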

This technique can even be used to generate private keys for public-key cryptography systems: The
text string could be crunched into a random seed, and that seed could be fed into a deterministic
system that generates public-key/private-key key pairs.

If you are choosing a pass phrase, choose something unique and easy to remember. Don't choose
phrases from literature; the "Ozymandias" example is a bad one. Both the complete works of
Shakespeare and the dialogue from Star Wars are available on-line and can be used in a dictionary
attack. Choose something obscure, but personal. Include punctuation and capitalization; if you can,
include numbers and non-alphanumeric symbols. Poor or improper English, or even a foreign
language, makes the pass phrase less susceptible to a dictionary attack. One suggestion is to use a
phrase that is "shocking nonsense": something offensive enough that you are likely to remember it and
unlikely to write it down.

Despite everything written here, obscurity is no substitute for true randomness. The best keys are
random keys, difficult as they are to remember.

X9.17 Key Generation

The ANSI X9.17 standard specifies a method of key generation (see Figure 8.1) [55]. This does not
generate easy-to-remember keys; it is more suitable for generating session keys or pseudo-random
numbers within a system. The cryptographic algorithm used to generate keys is triple-DES, but it
could just as easily be any algorithm.

Let EK(X) be triple-DES encryption of X with key K; K is a special key reserved for secret-key
generation. V0 is a secret 64-bit seed. Ti is a timestamp. To generate the random key Ri, calculate:

Ri = EK(EK(Ti) ⊕ Vi)

To generate Vi+1, calculate:

Vi+1 = EK(EK(Ti) ⊕ Ri)

To turn Ri into a DES key, simply adjust every eighth bit for parity. If you need a 64-bit key, use it as
is. If you need a 128-bit key, generate a pair of keys and concatenate them.
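The recurrence above can be sketched in a few lines. Here `E` is a stand-in 64-bit "block cipher" built from truncated SHA-256, purely so the sketch is self-contained; X9.17 proper specifies triple-DES:

```python
import hashlib
import time

def E(K: bytes, X: bytes) -> bytes:
    """Stand-in 64-bit block cipher (truncated SHA-256); X9.17 proper
    specifies triple-DES here."""
    return hashlib.sha256(K + X).digest()[:8]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def x917_step(K: bytes, V: bytes, T: bytes):
    """One X9.17 iteration: returns (Ri, Vi+1)."""
    ET = E(K, T)                  # EK(Ti)
    R = E(K, xor(ET, V))          # Ri   = EK(EK(Ti) XOR Vi)
    V_next = E(K, xor(ET, R))     # Vi+1 = EK(EK(Ti) XOR Ri)
    return R, V_next

K = b"gen-key!"                        # key reserved for key generation
V = bytes(8)                           # secret seed V0 (all-zero for the demo)
T = time.time_ns().to_bytes(8, "big")  # 64-bit timestamp
R, V = x917_step(K, V, T)              # R is the fresh key; V carries forward
```

Each call consumes a fresh timestamp and updates the seed, so successive keys are unlinkable without K and V.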

DoD Key Generation

The U.S. Department of Defense recommends using DES in OFB mode (see Section 9.8) to generate
random keys [1144]. Generate a DES key from system interrupt vectors, system status registers, and
system counters. Generate an initialization vector from the system clock, system ID, and date and
time. For the plaintext, use an externally generated 64-bit quantity, such as eight typed-in characters.

8.2 Nonlinear Keyspaces

Imagine that you are a military cryptography organization, building a piece of cryptography
equipment for your troops. You want to use a secure algorithm, but you are worried about the
equipment falling into enemy hands. The last thing you want is for your enemy to be able to use the
equipment to protect their secrets.

Figure 8.1 ANSI X9.17 key generation.

If you can put your algorithm in a tamperproof module, here's what you can do. You can require
keys of a special and secret form; all other keys will cause the module to encrypt and decrypt using a
severely weakened algorithm. You can make it so that the odds of someone not knowing this special
form accidentally stumbling on a correct key are vanishingly small.

This is called a nonlinear keyspace, because all the keys are not equally strong. (The opposite is a
linear, or flat, keyspace.) An easy way to do this is to create the key as two parts: the key itself and
some fixed string encrypted with that key. The module decrypts the string with the key; if it gets the
fixed string, it uses the key normally; if not, it uses a different, weak algorithm. If the algorithm has a
128-bit key and a 64-bit block size, the overall key is 192 bits; this gives the algorithm an effective
keyspace of 2^128, but makes the odds of randomly choosing a good key one in 2^64.
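The key-plus-encrypted-string construction can be sketched directly. The `E` here is a stand-in 64-bit block cipher (truncated SHA-256) so the example is self-contained, and the fixed string is an arbitrary choice:

```python
import hashlib

MAGIC = b"fixedstr"    # the fixed string; 8 bytes, matching the 64-bit block

def E(key: bytes, block: bytes) -> bytes:
    # stand-in 64-bit block cipher (truncated SHA-256)
    return hashlib.sha256(key + block).digest()[:8]

def issue_key(secret_key: bytes) -> bytes:
    """The overall key: the real key plus the fixed string encrypted under it."""
    return secret_key + E(secret_key, MAGIC)

def load_key(overall_key: bytes):
    """Inside the tamperproof module: enable the strong algorithm only if the
    trailing block verifies; a random guess passes with odds of one in 2**64."""
    key, tag = overall_key[:-8], overall_key[-8:]
    strong = E(key, MAGIC) == tag
    return key, strong
```

Anyone who captures the module but not a properly formed key gets only the weakened algorithm.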

You can be even subtler. You can design an algorithm such that certain keys are stronger than others.
An algorithm can have no weak keys (keys that are obviously very poor) and can still have a
nonlinear keyspace.

This only works if the algorithm is secret and the enemy can't reverse-engineer it, or if the difference
in key strength is subtle enough that the enemy can't figure it out. The NSA did this with the secret
algorithms in their Overtake modules (see Section 25.1). Did they do the same thing with Skipjack (see
Section 13.12)? No one knows.

8.3 Transferring Keys

Alice and Bob are going to use a symmetric cryptographic algorithm to communicate securely; they
need the same key. Alice generates a key using a random-key generator. Now she has to give it to
Bob, securely. If Alice can meet Bob somewhere (a back alley, a windowless room, or one of Jupiter's
moons), she can give him a copy of the key. Otherwise, they have a problem. Public-key cryptography
solves the problem nicely and with a minimum of prearrangement, but these techniques are not
always available (see Section 3.1). Some systems use alternate channels known to be secure. Alice
could send Bob the key with a trusted messenger. She could send it by certified mail or via an
overnight delivery service. She could set up another communications channel with Bob and hope no
one is eavesdropping on that one.

Alice could send Bob the symmetric key over their communications channel, the same channel whose
traffic the key is going to encrypt. This is foolish; if the channel warrants encryption, sending the
encryption key in the clear over that channel guarantees that anyone eavesdropping on it can decrypt
all communications.

The X9.17 standard [55] specifies two types of keys: key-encryption keys and data keys. Key-
encryption keys encrypt other keys for distribution. Data keys encrypt message traffic. Key-
encryption keys have to be distributed manually (although they can be secured in a tamperproof
device, like a smart card), but only seldom. Data keys are distributed more often. More details are in
[75]. This two-tiered key concept is used a lot in key distribution.

Another solution to the distribution problem splits the key into several different parts (see Section 3.6)
and sends each of those parts over a different channel. One part could be sent over the telephone, one
by mail, one by overnight delivery service, one by carrier pigeon, and so on (see Figure 8.2). Since an
adversary could collect all but one of the parts and still have no idea what the key is, this method will
work in all but extreme cases. Section 3.6 discusses schemes for splitting a key into several parts. Alice
could even use a secret-sharing scheme (see Section 3.7), allowing Bob to reconstruct the key if some of
the shares are lost in transmission.
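The Section 3.6-style splitting is just XOR with random pads: every share but the last is random, and the key is the XOR of all of them. (This is the all-shares-required scheme, not the threshold secret sharing of Section 3.7.) A minimal sketch:

```python
import os

def split_key(key: bytes, n: int):
    """Split `key` into n shares; all n are needed to reconstruct it."""
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)       # the key is the XOR of all n shares
    return shares

def join_key(shares):
    key = bytes(len(shares[0]))
    for s in shares:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key

k = os.urandom(8)
parts = split_key(k, 4)      # one part per channel: phone, mail, courier, pigeon
assert join_key(parts) == k  # all four together recover the key
```

Any n - 1 shares are statistically independent of the key, which is why the eavesdropper who misses one channel learns nothing.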

Alice sends Bob the key-encryption key securely, either by a face-to-face meeting or the splitting
technique just discussed. Once Alice and Bob both have the key-encryption key, Alice can send Bob
daily data keys over the same communications channel. Alice encrypts each data key with the key-
encryption key. Since the amount of traffic being encrypted with the key-encryption key is low, it does
not have to be changed as often. However, since compromise of the key-encryption key could
compromise every message encrypted with every key that was encrypted with the key-encryption key,
it must be stored securely.

Key Distribution in Large Networks

Key-encryption keys shared by pairs of users work well in small networks, but can quickly get
cumbersome if the networks become large. Since every pair of users must exchange keys, the total
number of key exchanges required in an n-person network is n(n - 1)/2.

In a six-person network, 15 key exchanges are required. In a 1000-person network, nearly 500,000 key
exchanges are required. In these cases, creating a central key server (or servers) makes the operation
much more efficient.
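The quadratic growth is easy to verify:

```python
def pairwise_exchanges(n: int) -> int:
    # every pair of users shares one key: n choose 2 = n(n - 1)/2
    return n * (n - 1) // 2

print(pairwise_exchanges(6))     # 15
print(pairwise_exchanges(1000))  # 499500, i.e., nearly 500,000
```

Doubling the user population roughly quadruples the exchanges, which is what makes a central key server attractive.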

Alternatively, any of the symmetric-cryptography or public-key-cryptography protocols in Section 3.1
provides for secure key distribution.

Figure 8.2 Key distribution via parallel channels.

8.4 Verifying Keys

When Bob receives a key, how does he know it came from Alice and not from someone pretending to
be Alice? If Alice gives it to him when they are face-to-face, it's easy. If Alice sends her key via a
trusted courier, then Bob has to trust the courier. If the key is encrypted with a key-encryption key,
then Bob has to trust the fact that only Alice has that key. If Alice uses a digital signature protocol to
sign the key, Bob has to trust the public-key database when he verifies that signature. (He also has to
trust that Alice has kept her key secure.) If a Key Distribution Center (KDC) signs Alice's public key,
Bob has to trust that his copy of the KDC's public key has not been tampered with.

In the end, someone who controls the entire network around Bob can make him think whatever he
likes. Mallory could send an encrypted and signed message purporting to be from Alice. When Bob
tried to access the public-key database to verify Alice's signature, Mallory could substitute his own
public key. Mallory could invent his own false KDC and exchange the real KDC's public key for his
own creation. Bob wouldn't be the wiser.

Some people have used this argument to claim that public-key cryptography is useless. Since the only
way for Alice and Bob to ensure that their keys have not been tampered with is to meet face-to-face,
public-key cryptography doesn't enhance security at all.

This view is naïve. It is theoretically true, but reality is far more complicated. Public-key
cryptography, used with digital signatures and trusted KDCs, makes it much more difficult to
substitute one key for another. Bob can never be absolutely certain that Mallory isn't controlling his
entire reality, but Bob can be confident that doing so requires more resources than most real-world
adversaries can muster.

Bob could also verify Alice's key over the telephone, where he can hear her voice. Voice recognition is
a really good authentication scheme. If it's a public key, he can safely recite it in public. If it's a secret
key, he can use a one-way hash function to verify the key. Both PGP (see Section 24.12) and the
AT&T TSD (see Section 24.18) use this kind of key verification.

Sometimes, it may not even be important to verify exactly whom a public key belongs to. It may be
necessary to verify that it belongs to the same person to whom it belonged last year. If someone sends
a signed withdrawal message to a bank, the bank does not have to be concerned with who withdraws
the money, only whether it is the same person who deposited the money in the first place.

Error Detection during Key Transmission

Sometimes keys get garbled in transmission. Since a garbled key can mean megabytes of
undecryptable ciphertext, this is a problem. All keys should be transmitted with some kind of error
detection and correction bits. This way errors in transmission can be easily detected and, if required,
the key can be resent.

One of the most widely used methods is to encrypt a constant value with the key, and to send the first
2 to 4 bytes of that ciphertext along with the key. At the receiving end, do the same thing. If the
encrypted constants match, then the key has been transmitted without error. The chance of an
undetected error ranges from one in 2^16 to one in 2^32.
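A sketch of that check, with a truncated-hash stand-in for the block cipher (any cipher both ends share will do):

```python
import hashlib

CONSTANT = bytes(8)   # agreed-upon constant block

def E(key: bytes, block: bytes) -> bytes:
    # stand-in block cipher (truncated SHA-256); any shared cipher works here
    return hashlib.sha256(key + block).digest()[:8]

def key_check(key: bytes, nbytes: int = 4) -> bytes:
    """The first 2 to 4 bytes of the constant encrypted under the key."""
    return E(key, CONSTANT)[:nbytes]

# The sender transmits the key material plus key_check(key); the receiver
# recomputes the check on what arrived and compares.
sent_check = key_check(b"original-key-123")
assert key_check(b"original-key-123") == sent_check   # intact key matches
# A garbled key slips through with probability about 2**-32 for a 4-byte check.
```

Two check bytes give one-in-2^16 odds of a missed error; four bytes give one-in-2^32, matching the range quoted above.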

Key-error Detection during Decryption

Sometimes the receiver wants to check if a particular key he has is the correct symmetric decryption
key. If the plaintext message is something like ASCII, he can try to decrypt and read the message. If
the plaintext is random, there are other tricks.

The naïve approach is to attach a verification block, a known header, to the plaintext message before
encryption. At the receiving end, Bob decrypts the header and verifies that it is correct. This works,
but it gives Eve a known plaintext to help cryptanalyze the system. It also makes attacks against
short-key ciphers like DES and all exportable ciphers easy: precalculate the checksum once for each
key, then use that checksum to determine the key in any message you intercept after that. This is a
weakness of any key checksum that doesn't include random, or at least different, data in each checksum.
It's very similar in concept to using salt when generating keys from passphrases.

Here's a better way to do this [821]:

(1) Generate an IV (not the one used for the message).
(2) Use that IV to generate a large block of bits: say, 512.
(3) Hash the result.
(4) Use the same fixed bits of the hash, say 32, for the key checksum.

This gives Eve some information, but very little. If she tries to use the low 32 bits of the final hash
value to mount a brute-force attack, she has to do multiple encryptions plus a hash per candidate key;
brute-force on the key itself would be quicker.

She also gets no known-plaintext values to try out, and even if she manages to choose our random
value for us, she never gets a chosen-plaintext out of us, since it goes through the hash function before
she sees it.
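The four steps above can be sketched as follows; `E` is again a stand-in 64-bit block cipher (truncated SHA-256), used only to make the sketch self-contained:

```python
import hashlib
import os

def E(key: bytes, block: bytes) -> bytes:
    # stand-in 64-bit block cipher (truncated SHA-256)
    return hashlib.sha256(key + block).digest()[:8]

def _checksum(key: bytes, iv: bytes) -> bytes:
    stream, block = b"", iv
    while len(stream) < 64:          # step 2: 512 bits generated under the key
        block = E(key, block)
        stream += block
    return hashlib.sha256(stream).digest()[:4]   # steps 3-4: hash, keep 32 bits

def make_key_checksum(key: bytes):
    iv = os.urandom(8)               # step 1: an IV distinct from the message IV
    return iv, _checksum(key, iv)

def check_key(key: bytes, iv: bytes, checksum: bytes) -> bool:
    """Recompute the checksum from the transmitted IV and compare."""
    return _checksum(key, iv) == checksum
```

Because the 512-bit block passes through the hash, the transmitted 32 bits never expose a direct known-plaintext pair for the cipher itself.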

8.5 Using Keys

Software encryption is scary. Gone are the days of simple microcomputers under the control of single
programs. Now there's Macintosh System 7, Windows NT, and UNIX. You can't tell when the
operating system will suspend the encryption application in progress, write everything to disk, and
take care of some pressing task. When the operating system finally gets back to encrypting whatever
is being encrypted, everything will look just fine. No one will ever realize that the operating system
wrote the encryption application to disk, and that it wrote the key along with it. The key will sit on the
disk, unencrypted, until the computer writes over that area of memory again. It could be minutes or it
could be months. It could even be never; the key could still be sitting there when an adversary goes
over the hard drive with a fine-tooth comb. In a preemptive, multitasking environment, you can set
your encryption operation to a high enough priority so it will not be interrupted. This would mitigate
the risk. Even so, the whole thing is dicey at best.

Hardware implementations are safer. Many encryption devices are designed to erase the key if
tampered with. For example, the IBM PS/2 encryption card has an epoxy unit containing the DES
chip, battery, and memory. Of course, you have to trust the hardware manufacturer to implement the
feature properly.

Some communications applications, such as telephone encryptors, can use session keys. A session key
is a key that is used for just one communications session (a single telephone conversation) and then
discarded. There is no reason to store the key after it has been used. And if you use some key-
exchange protocol to transfer the key from one conversant to the other, the key doesn't have to be
stored before it is used either. This makes it far less likely that the key might be compromised.

Controlling Key Usage

In some applications it may be desirable to control how a session key is used. Some users may need
session keys only for encryption or only for decryption. Session keys might only be authorized for use
on a certain machine or at a certain time. One scheme to handle these sorts of restrictions attaches a
Control Vector (CV) to the key; the control vector specifies the uses and restrictions for that key (see
Section 24.1) [1025,1026]. This CV is hashed and XORed with a master key; the result is used as an
encryption key to encrypt the session key. The resultant encrypted session key is then stored with the
CV. To recover the session key, hash the CV and XOR it with the master key, and use the result to
decrypt the encrypted session key.

The advantages of this scheme are that the CV can be of arbitrary length and that it is always stored
in the clear with the encrypted key. This scheme assumes quite a bit about tamperproof hardware and
the inability of users to get at the keys directly. This system is discussed further in Sections 24.1 and
24.8.
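The hash-then-XOR wrapping can be sketched as follows. This uses a keystream-style stand-in for the block-encryption step so the unwrap is trivially the inverse; the master-key value is a placeholder, since the real one would live inside the tamperproof hardware:

```python
import hashlib

MASTER_KEY = bytes(16)   # placeholder; the real master key stays in hardware

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def _keystream(kek: bytes) -> bytes:
    # stand-in for the block-encryption step: a 16-byte keystream from the KEK
    return hashlib.sha256(b"wrap:" + kek).digest()[:16]

def wrap(session_key: bytes, cv: bytes) -> bytes:
    """Encrypt the session key under H(CV) XOR master key; store it with CV."""
    kek = _xor(hashlib.sha256(cv).digest()[:16], MASTER_KEY)
    return _xor(session_key, _keystream(kek))

def unwrap(wrapped: bytes, cv: bytes) -> bytes:
    # the same operation: XOR with the keystream is its own inverse
    kek = _xor(hashlib.sha256(cv).digest()[:16], MASTER_KEY)
    return _xor(wrapped, _keystream(kek))
```

Note the property the scheme depends on: tampering with the stored CV changes the derived key-encryption key, so the session key simply fails to decrypt correctly.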

8.6 Updating Keys

Imagine an encrypted data link where you want to change keys daily. Sometimes it's a pain to
distribute a new key every day. An easier solution is to generate a new key from the old key; this is
sometimes called key updating.

All it takes is a one-way function. If Alice and Bob share the same key and they both operate on it
using the same one-way function, they will get the same result. Then they can take the bits they need
from the results to create the new key.
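A minimal sketch of that update step, assuming SHA-256 as the one-way function (the text leaves the choice open; the "key-update:" prefix is just a domain-separation convention of this sketch):

```python
import hashlib

def update_key(old_key: bytes, nbytes: int = 8) -> bytes:
    """Both parties apply the same one-way function to the shared key and
    take the bits they need as the new key."""
    return hashlib.sha256(b"key-update:" + old_key).digest()[:nbytes]

k_day1 = bytes.fromhex("0011223344556677")
k_day2 = update_key(k_day1)          # Alice and Bob each compute this locally
k_day3 = update_key(k_day2)          # and can keep rolling forward daily
assert update_key(k_day1) == k_day2  # deterministic, so both sides agree
```

Because the function is one-way, learning today's key does not reveal yesterday's; but, as noted below, it does reveal every future key.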

Key updating works, but remember that the new key is only as secure as the old key was. If Eve
managed to get her hands on the old key, she can perform the key-updating function herself.
However, if Eve doesn't have the old key and is trying a ciphertext-only attack on the encrypted
traffic, this is a good way for Alice and Bob to protect themselves.

8.7 Storing Keys

The least complex key storage problem is that of a single user, Alice, encrypting files for later use.
Since she is the only person involved, she is the only person responsible for the key. Some systems take
the easy approach: The key is stored in Alice's brain and never on the system. Alice is responsible for
remembering the key and entering it every time she needs a file encrypted or decrypted.

An example of this system is IPS [881]. Users can either directly enter the 64-bit key or enter the key
as a longer character string. The system then generates a 64-bit key from the character string using a
key-crunching technique.

Another solution is to store the key in a magnetic stripe card, plastic key with an embedded ROM
chip (called a ROM key), or smart card [556,557,455]. A user could then enter his key into the system
by inserting the physical token into a special reader in his encryption box or attached to his computer
terminal. While the user can use the key, he does not know it and cannot compromise it. He can use it
only in the way and for the purposes indicated by the control vector.

A ROM key is a very clever idea. People understand physical keys, what they signify and how to
protect them. Putting a cryptographic key in the same physical form makes storing and protecting
that key more intuitive.

This technique is made more secure by splitting the key into two halves, storing one half in the
terminal and the other half in the ROM key. The U.S. government's STU-III secure telephone works
this way. Losing the ROM key does not compromise the cryptographic key; change that key and
everything is back to normal. The same is true with the loss of the terminal. This way, compromising
either the ROM key or the system does not compromise the cryptographic key; an adversary must
have both parts.

Hard-to-remember keys can be stored in encrypted form, using something similar to a key-encryption
key. For example, an RSA private key could be encrypted with a DES key and stored on disk. To
recover the RSA key, the user has to type in the DES key to a decryption program.

If the keys are generated deterministically (with a cryptographically secure pseudo-random-sequence
generator), it might be easier to regenerate the keys from an easy-to-remember password every time
they are required.

Ideally, a key should never appear unencrypted outside the encryption device. This isn't always
possible, but it is a worthy goal.

8.8 Backup Keys

Alice is the chief financial officer at Secrets, Ltd. ("We don't tell you our motto.") Like any good
corporate officer, she follows the company's security guidelines and encrypts all her data.
Unfortunately, she ignores the company's street-crossing guidelines and gets hit by a truck. What
does the company's president, Bob, do?

Unless Alice left a copy of her key, he's in deep trouble. The whole point of encryption is to make files
unrecoverable without the key. Unless Alice was a moron and used lousy encryption software, her
files are gone forever.

Bob can avoid this in several ways. The simplest is sometimes called key escrow (see Section 4.14): He
requires all employees to write their keys on paper and give them to the company's security officer,
who will lock them in a safe somewhere (or encrypt them all with a master key). Now, when Alice is
bowled over on the Interstate, Bob can ask his security officer for her key. Bob should make sure to
have the combination to the safe himself as well; otherwise, if the security officer is run over by
another truck, Bob will be out of luck again.

The problem with this key management system is that Bob has to trust his security officer not to
misuse everyone's keys. Even more significantly, all the employees have to trust the security officer
not to misuse their keys. A far better solution is to use a secret-sharing protocol (see Section 3.7).

When Alice generates a key, she also divides up that key into some number of pieces. She then sends
each piece (encrypted, of course) to a different company officer. None of those pieces alone is the
key, but someone can gather all the pieces together and reconstruct the key. Now Alice is protected
against any one malicious person, and Bob is protected against losing all of Alice's data after her run-
in with the truck. Or, she could just store the different pieces, each encrypted with a different officer's
public key, on her own hard disk. That way, no one gets involved with key management
until it becomes necessary.

Another backup scheme [188] uses smart cards (see Section 24.13) for the temporary escrow of keys.
Alice can put the key that secures her hard drive onto the smart card and give it to Bob while she is
away. Bob can use the card to get into Alice's hard drive, but because the key is stored in the card,
Bob cannot learn it. And the system is bilaterally auditable: Bob can verify that the key will open
Alice's drive, and when Alice returns she can verify whether Bob has used the key and how many times.

Such a scheme makes no sense for data transmission. On a secure telephone, the key should exist for
the length of the call and no longer. For data storage, as just described, key escrow can be a good idea.
I've lost about one key every five years, and my memory is better than most. If 200 million people
were using cryptography, that same rate would equal 40 million lost keys per year. I keep copies of
my house keys with a neighbor because I may lose mine. If house keys were like cryptographic keys,
and I lost them, I could never get inside and recover my possessions, ever again. Just as I keep off-site
backups of my data, it makes sense to keep backups of my data-encryption keys.

8.9 Compromised Keys

All of the protocols, techniques, and algorithms in this book are secure only if the key (the private key
in a public-key system) remains secret. If Alice's key is lost, stolen, printed in the newspaper, or
otherwise compromised, then all her security is gone.

If the compromised key was for a symmetric cryptosystem, Alice has to change her key and hope the
actual damage was minimal. If it was a private key, she has bigger problems; her public key is
probably on servers all over the network. And if Eve gets access to Alice's private key, she can
impersonate her on the network: reading encrypted mail, signing correspondence, entering into
contracts, and so forth. Eve can, effectively, become Alice.

It is vital that news of a private key's compromise propagate quickly throughout the network. Any
databases of public keys must immediately be notified that a particular private key has been
compromised, lest some unsuspecting person encrypt a message in that compromised key.

One hopes Alice knows when her key was compromised. If a KDC is managing the keys, Alice should
notify it that her key has been compromised. If there is no KDC, then she should notify all
correspondents who might receive messages from her. Someone should publicize the fact that any
message received after her key was lost is suspect, and that no one should send messages to Alice with
the associated public key. The application should be using some sort of timestamp, and then users can
determine which messages are legitimate and which are suspect.

If Alice doesn't know exactly when her key was compromised, things are more difficult. Alice may
want to back out of a contract because the person who stole the key signed it instead of her. If the
system allows this, then anyone can back out of a contract by claiming that his key was compromised
before it was signed. It has to be a matter for an adjudicator to decide.

This is a serious problem and brings to light the dangers of Alice tying all of her identity to a single
key. It would be better for Alice to have different keys for different applications, just as she has
different physical keys in her pocket for different locks. Other solutions to this problem involve
biometrics, limits on what can be done with a key, time delays, and countersigning.

These procedures and tips are hardly optimal, but are the best we can do. The moral of the story is to
protect keys, and protect private keys above all else.

No encryption key should be used for an indefinite period. It should expire automatically like
passports and licenses. There are several reasons for this:

- The longer a key is used, the greater the chance that it will be compromised. People
write keys down; people lose them. Accidents happen. If you use the same key for a year, there's
a far greater chance of compromise than if you use it for a day.
- The longer a key is used, the greater the loss if the key is compromised. If a key is used
only to encrypt a single budgetary document on a file server, then the loss of the key means only
the compromise of that document. If the same key is used to encrypt all the budgetary
information on the file server, then its loss is much more devastating.
- The longer a key is used, the greater the temptation for someone to spend the effort
necessary to break it, even if that effort is a brute-force attack. Breaking a key shared between
two military units for a day would enable someone to read and fabricate messages between
those units for that day. Breaking a key shared by an entire military command structure for a
year would enable that same person to read and fabricate messages throughout the world for a
year. In our budget-conscious, post-Cold War world, which key would you choose to attack?
- It is generally easier to do cryptanalysis with more ciphertext encrypted with the same
key.

For any cryptographic application, there must be a policy that determines the permitted lifetime of a
key. Different keys may have different lifetimes. For a connection-based system, like a telephone, it
makes sense to use a key for the length of the telephone call and to use a new one with each call.

Systems on dedicated communications channels are not as obvious. Keys should have relatively short
lifetimes, depending on the value of the data and the amount of data encrypted during a given period.
The key for a gigabit-per-second communications link might have to be changed more often than the
key for a 9600-baud modem link. Assuming there is an efficient method of transferring new keys,
session keys should be changed at least daily.

Page 160 of 666
Applied Cryptography: Second Edition - Bruce Schneier

Key-encryption keys don't have to be replaced as frequently. They are used only occasionally
(roughly once per day) for key exchange. This generates little ciphertext for a cryptanalyst to work
with, and the corresponding plaintext has no particular form. However, if a key-encryption key is
compromised, the potential loss is extreme: all communications encrypted with every key encrypted
with the key-encryption key. In some applications, key-encryption keys are replaced only once a
month or once a year. You have to balance the inherent danger in keeping a key around for a while
with the inherent danger in distributing a new one.

Encryption keys used to encrypt data files for storage cannot be changed often. The files may sit
encrypted on disk for months or years before someone needs them again. Decrypting them and
re-encrypting them with a new key every day doesn't enhance security in any way; it just gives a
cryptanalyst more to work with. One solution might be to encrypt each file with a unique file key, and
then encrypt all the file keys with a key-encryption key. The key-encryption key should then be either
memorized or stored in a secure location, perhaps in a safe somewhere. Of course, losing this key
would mean losing all the individual file keys.
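The file-key scheme above can be sketched in a few lines. Everything here is illustrative: the XOR-with-a-SHA-256-keystream routine stands in for a real cipher such as AES, and the names (`toy_encrypt`, `kek`) are invented for the example.

```python
import hashlib
import secrets

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # Illustrative only: XOR with a SHA-256-derived keystream stands in
    # for a real cipher. Do not use for actual security.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

toy_decrypt = toy_encrypt  # XOR keystream: decryption is the same operation

kek = secrets.token_bytes(32)            # key-encryption key, kept in a safe
files = {"budget.txt": b"Q3 numbers", "memo.txt": b"reorg plan"}

stored = {}
for name, plaintext in files.items():
    file_key = secrets.token_bytes(32)   # unique per-file key
    stored[name] = {
        "ciphertext": toy_encrypt(file_key, plaintext),
        "wrapped_key": toy_encrypt(kek, file_key),  # wrapped under the KEK
    }

# To read a file later, unwrap its key with the KEK, then decrypt.
entry = stored["budget.txt"]
file_key = toy_decrypt(kek, entry["wrapped_key"])
print(toy_decrypt(file_key, entry["ciphertext"]))  # b'Q3 numbers'
```

Note that only the one key-encryption key needs long-term protection; the per-file keys live on disk in wrapped form.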

Private keys for public-key cryptography applications have varying lifetimes, depending on the
application. Private keys used for digital signatures and proofs of identity may have to last years (even
a lifetime). Private keys used for coin-flipping protocols can be discarded immediately after the
protocol is completed. Even if a key's security is expected to last a lifetime, it may be prudent to
change the key every couple of years. The private keys in many networks are good only for two years;
after that the user must get a new private key. The old key would still have to remain secret, in case
the user needed to verify a signature from that period. But the new key would be used to sign new
documents, reducing the number of signed documents a cryptanalyst would have for an attack.
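The rotation policy described above (a fresh signing key every couple of years, with the old key retained only for verifying signatures from its own period) might be modeled like this; the `SigningKey` class and its fields are hypothetical, not from any real system.

```python
from datetime import date, timedelta

class SigningKey:
    """Hypothetical record tracking a signing key's permitted lifetime."""

    def __init__(self, key_id: str, issued: date, lifetime_days: int):
        self.key_id = key_id
        self.issued = issued
        self.expires = issued + timedelta(days=lifetime_days)

    def may_sign(self, today: date) -> bool:
        # Only the current, unexpired key may create new signatures.
        return self.issued <= today < self.expires

    def may_verify(self, signed_on: date) -> bool:
        # An expired key is kept secret and may still verify signatures
        # made while it was valid.
        return self.issued <= signed_on < self.expires

old = SigningKey("alice-1994", date(1994, 1, 1), lifetime_days=730)
new = SigningKey("alice-1996", date(1996, 1, 1), lifetime_days=730)
today = date(1996, 6, 1)

print(old.may_sign(today))                    # False: expired
print(new.may_sign(today))                    # True
print(old.may_verify(date(1995, 3, 1)))       # True: old signature still checks
```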

8.11 Destroying Keys

Given that keys must be replaced regularly, old keys must be destroyed. Old keys are valuable, even if
they are never used again. With them, an adversary can read old messages encrypted with those keys
[65].

Keys must be destroyed securely (see Section 10.9). If the key is written on paper, the paper should be
shredded or burned. Be careful to use a high-quality shredder; many lousy shredders are on the
market. Algorithms in this book are secure against brute-force attacks costing millions of dollars and
taking millions of years. If an adversary can recover your key by taking a bag of shredded documents
from your trash and paying 100 unemployed workers in some backwater country ten cents per hour
for a year to piece the shredded pages together, that would be $26,000 well spent.

If the key is in a hardware EEPROM, the key should be overwritten multiple times. If the key is in a
hardware EPROM or PROM, the chip should be smashed into tiny bits and scattered to the four
winds. If the key is stored on a computer disk, the actual bits of the storage should be overwritten
multiple times (see Section 10.9) or the disk should be shredded.

A potential problem is that, in a computer, keys can be easily copied and stored in multiple locations.
Any computer that does its own memory management, constantly swapping programs in and out of
memory, exacerbates the problem. There is no way to ensure that successful key erasure has taken
place in the computer, especially if the computer's operating system controls the erasure process. The
more paranoid among you should consider writing a special erasure program that scans all disks
looking for copies of the key's bit pattern on unused blocks and then erases those blocks. Also
remember to erase the contents of any temporary, or "swap," files.

8.12 Public-Key Key Management

Public-key cryptography makes key management easier, but it has its own unique problems. Each
person has only one public key, regardless of the number of people on the network. If Alice wants to
send a message to Bob, she has to get Bob's public key. She can go about this several ways:

— She can get it from Bob.
— She can get it from a centralized database.
— She can get it from her own private database.

Section 2.5 discussed a number of possible attacks against public-key cryptography, based on Mallory
substituting his key for Bob's. The scenario is that Alice wants to send a message to Bob. She goes to
the public-key database and gets Bob's public key. But Mallory, who is sneaky, has substituted his
own key for Bob's. (If Alice asks Bob directly, Mallory has to intercept Bob's transmission and
substitute his key for Bob's.) Alice encrypts her message in Mallory's key and sends it to Bob. Mallory
intercepts the message, decrypts it, and reads it. He re-encrypts it with Bob's real key and sends it on
to Bob. Neither Alice nor Bob is the wiser.

Public-key Certificates

A public-key certificate is someone's public key, signed by a trustworthy person. Certificates are used
to thwart attempts to substitute one key for another [879]. Bob's certificate, in the public-key
database, contains a lot more than his public key. It contains information about Bob—his name,
address, and so on—and it is signed by someone Alice trusts: Trent (usually known as a certification
authority, or CA). By signing both the key and the information about Bob, Trent certifies that the
information about Bob is correct and that the public key belongs to Bob. Alice checks Trent's
signature and then uses the public key, secure in the knowledge that it is Bob's and no one else's.
Certificates play an important role in a number of public-key protocols such as PEM [825] (see
Section 24.10) and X.509 [304] (see Section 24.9).
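The structure of such a certificate can be sketched as follows. An HMAC under a key held only by Trent stands in here for a real digital signature, and all field names are invented for illustration.

```python
import hashlib
import hmac
import json

# Hypothetical CA signing key, known only to Trent.
trent_key = b"trent-private-signing-key"

def issue_certificate(name: str, address: str, public_key: str) -> dict:
    # Trent signs both the identity information and the public key together.
    body = {"name": name, "address": address, "public_key": public_key}
    blob = json.dumps(body, sort_keys=True).encode()
    sig = hmac.new(trent_key, blob, hashlib.sha256).hexdigest()
    return {"body": body, "signature": sig}

def check_certificate(cert: dict) -> bool:
    # Alice recomputes the signature over the body she received.
    blob = json.dumps(cert["body"], sort_keys=True).encode()
    expected = hmac.new(trent_key, blob, hashlib.sha256).hexdigest()
    return hmac.compare_digest(cert["signature"], expected)

cert = issue_certificate("Bob", "12 Oak St", "bob-public-key-bytes")
print(check_certificate(cert))          # True

cert["body"]["public_key"] = "mallory"  # Mallory substitutes his key...
print(check_certificate(cert))          # False: the signature no longer matches
```

Because name, address, and key are signed as a unit, Mallory cannot swap in his own key without invalidating Trent's signature.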

A complicated noncryptographic issue surrounds this type of system. What is the meaning of
certification? Or, to put it another way, who is trusted to issue certificates to whom? Anyone may sign
anyone else's certificate, but there needs to be some way to filter out questionable certificates: for
example, certificates for employees of one company signed by the CA of another company. Normally,
a certification chain transfers trust: A single trusted entity certifies trusted agents, trusted agents
certify company CAs, and company CAs certify their employees.

Here are some more things to think about:

— What level of trust in someone's identity is implied by his certificate?
— What are the relationships between a person and the CA that certified his public key,
and how can those relationships be implied by the certificate?
— Who can be trusted to be the "single trusted entity" at the top of the certification
chain?
â€” How long can a certification chain be?

Ideally, Bob would follow some kind of authentication procedure before the CA signs his certificate.
Additionally, some kind of timestamp or an indication of the certificate's validity period is important
to guard against compromised keys [461].

Timestamping is not enough. Keys may be invalidated before they have expired, either through
compromise or for administrative reasons. Hence, it is important that the CA keep a list of invalid
certificates, and that users regularly check that list. This key revocation problem is still a difficult
one to solve.
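A minimal sketch of that check, combining a validity period with a CA-published list of revoked certificate serial numbers; the serials and the function name are illustrative.

```python
from datetime import date

# Hypothetical revocation list, fetched regularly from the CA.
revoked_serials = {"1042", "2718"}

def certificate_acceptable(serial: str, not_before: date,
                           not_after: date, today: date) -> bool:
    if not (not_before <= today <= not_after):
        return False   # expired, or not yet valid
    if serial in revoked_serials:
        return False   # invalidated by the CA before its expiration date
    return True

window = (date(1995, 1, 1), date(1996, 1, 1))
print(certificate_acceptable("9001", *window, date(1995, 6, 1)))  # True
print(certificate_acceptable("1042", *window, date(1995, 6, 1)))  # False: revoked
print(certificate_acceptable("9001", *window, date(1997, 1, 1)))  # False: expired
```

The hard part in practice is not this check but distributing a fresh, authentic revocation list to every user in time.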

And one public-key/private-key pair is not enough. Certainly any good implementation of public-key
cryptography needs separate keys for encryption and digital signatures. This separation allows for
different security levels, expiration times, backup procedures, and so on. Someone might sign
messages with a 2048-bit key stored on a smart card and good for twenty years, while they might use a
768-bit key stored in the computer and good for six months for encryption.

And a single pair of encryption and signature keys isn't enough, either. A private key authenticates a
relationship as well as an identity, and people have more than one relationship. Alice might want to
sign one document as Alice the individual, another as Alice, vice-president of Monolith, Inc., and a
third as Alice, president of her community organization. Some of these keys are more valuable than
others, so they can be better protected. Alice might have to store a backup of her work key with the
company's security officer; she doesn't want the company to have a copy of the key she signed her
mortgage with. Just as Alice has multiple physical keys in her pocket, she is going to have multiple
cryptographic keys.

Distributed Key Management

In some situations, this sort of centralized key management will not work. Perhaps there is no CA
whom Alice and Bob both trust. Perhaps Alice and Bob trust only their friends. Perhaps Alice and
Bob trust no one.

Distributed key management, used in PGP (see Section 24.12), solves this problem with introducers.
Introducers are other users of the system who sign their friends' public keys. For example, when Bob
generates his public key, he gives copies to his friends: Carol and Dave. They know Bob, so they each
sign Bob's key and give Bob a copy of the signature. Now, when Bob presents his key to a stranger,
Alice, he presents it with the signatures of these two introducers. If Alice also knows and trusts Carol,
she has reason to believe that Bob's key is valid. If she knows and trusts Carol and Dave a little, she
has reason to believe that Bob's key is valid. If she doesn't know either Carol or Dave, she has no
reason to trust Bob's key.
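The introducer logic can be sketched with a simplified version of the PGP-style rule: a key counts as valid if it carries a signature from one fully trusted introducer, or from two marginally trusted ones. The trust values and names here are invented for the example.

```python
def key_is_valid(introducers, trust):
    """Simplified PGP-style validity rule (illustrative thresholds)."""
    full = sum(1 for i in introducers if trust.get(i) == "full")
    marginal = sum(1 for i in introducers if trust.get(i) == "marginal")
    return full >= 1 or marginal >= 2

# Alice's personal trust assignments for possible introducers.
alice_trust = {"Carol": "full", "Dave": "marginal"}

print(key_is_valid(["Carol"], alice_trust))         # True: Carol fully trusted
print(key_is_valid(["Dave"], alice_trust))          # False: one marginal signer
print(key_is_valid(["Dave", "Erin"], alice_trust))  # False: Erin is unknown
```

The thresholds are a policy choice, which is exactly the point: each user decides for herself whose introductions she accepts.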

Over time, Bob will collect many more introducers. If Alice and Bob travel in similar circles, the odds
are good that Alice will know one of Bob's introducers. To prevent Mallory from substituting one
key for another, an introducer must be sure that Bob's key belongs to Bob before he signs it. Perhaps
the introducer should require that the key be given face-to-face or verified over the telephone.

The benefit of this mechanism is that there is no CA that everyone has to trust. The down side is that
when Alice receives Bob's public key, she has no guarantee that she will know any of the introducers
and therefore no guarantee that she will trust the validity of the key.

Chapter 9
Algorithm Types and Modes
There are two basic types of symmetric algorithms: block ciphers and stream ciphers. Block ciphers
operate on blocks of plaintext and ciphertext—usually of 64 bits but sometimes longer. Stream
ciphers operate on streams of plaintext and ciphertext one bit or byte (sometimes even one 32-bit
word) at a time. With a block cipher, the same plaintext block will always encrypt to the same
ciphertext block, using the same key. With a stream cipher, the same plaintext bit or byte will encrypt
to a different bit or byte every time it is encrypted.
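The distinction can be illustrated with toy stand-ins (keyed hashes, not real ciphers; all names here are invented):

```python
import hashlib

key = b"shared-key"

def toy_block_encrypt(block: bytes) -> bytes:
    # Stand-in for a block cipher: a keyed function of the block alone,
    # so identical plaintext blocks always yield identical ciphertext.
    return hashlib.sha256(key + block).digest()[:8]

def toy_stream_encrypt(data: bytes) -> bytes:
    # Stand-in for a stream cipher: each byte is XORed with a keystream
    # byte that depends on its position in the stream.
    stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(b ^ s for b, s in zip(data, stream))

# The same 8-byte block twice: the block cipher repeats itself.
print(toy_block_encrypt(b"ATTACK!!") == toy_block_encrypt(b"ATTACK!!"))  # True

# In a stream, two copies of the same byte land at different positions,
# so they encrypt differently (unless the keystream bytes happen to collide).
ct = toy_stream_encrypt(b"AA")
print(ct[0] == ct[1])
```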

A cryptographic mode usually combines the basic cipher, some sort of feedback, and some simple
operations. The operations are simple because the security is a function of the underlying cipher and
not the mode. Even more strongly, the cipher mode should not compromise the security of the
underlying algorithm.

There are other security considerations: Patterns in the plaintext should be concealed, input to the
cipher should be randomized, manipulation of the plaintext by introducing errors in the ciphertext
should be difficult, and encryption of more than one message with the same key should be possible.
These will be discussed in detail in the next sections.

Efficiency is another consideration. The mode should not be significantly less efficient than the
underlying cipher. In some circumstances it is important that the ciphertext be the same size as the
plaintext.

A third consideration is fault-tolerance. Some applications need to parallelize encryption or
decryption, while others need to be able to preprocess as much as possible. In still others it is
important that the decrypting process be able to recover from bit errors in the ciphertext stream, or
dropped or added bits. As we will see, different modes have different subsets of these characteristics.

9.1 Electronic Codebook Mode

Electronic codebook (ECB) mode is the most obvious way to use a block cipher: A block of plaintext
encrypts into a block of ciphertext. Since the same block of plaintext always encrypts to the same
block of ciphertext, it is theoretically possible to create a code book of plaintexts and corresponding
ciphertexts. However, if the block size is 64 bits, the code book will have 2^64 entries—much too large
to precompute and store. And remember, every key has a different code book.
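A quick back-of-the-envelope calculation shows why the code book is too large to store:

```python
# One 8-byte ciphertext entry for each of the 2**64 possible 64-bit
# plaintext blocks -- and a separate code book for every key.
entries = 2 ** 64
bytes_needed = entries * 8        # 2**67 bytes

print(entries)                    # 18446744073709551616
print(bytes_needed / 2 ** 60)     # 128.0 exbibytes, per key
```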

This is the easiest mode to work with. Each plaintext block is encrypted independently. You don't
have to encrypt a file linearly; you can encrypt the 10 blocks in the middle first, then the blocks at the
end, and finally the blocks in the beginning. This is important for encrypted files that are accessed
randomly, like a database. If a database is encrypted with ECB mode, then any record can be added,
deleted, encrypted, or decrypted independently of any other recordâ€”assuming that a record consists
of a discrete number of encryption blocks. And processing is parallelizable; if you have multiple