Africa in 1970, or building a chemical plant for Iraq in 1980. Even worse, effortless and untraceable tapping of communications may tempt a number of governments into tracking many of their citizens' communications, even those which haven't generally tried to do so before. And there's no guarantee that liberal democracies will be immune to this temptation.

Chapter 5

Advanced Protocols

5.1 Zero-Knowledge Proofs

Here's another story:

Alice: "I know the password to the Federal Reserve System computer, the ingredients in McDonald's secret sauce, and the contents of Volume 4 of Knuth."

Bob: "No, you don't."

Alice: "Yes, I do."

Bob: "Do not!"

Alice: "Do too!"

Bob: "Prove it!"

Alice: "All right. I'll tell you." She whispers in Bob's ear.

Bob: "That's interesting. Now I know it, too. I'm going to tell The Washington Post."

Page 94 of 666

Applied Cryptography: Second Edition - Bruce Schneier

Alice: "Oops."

Unfortunately, the usual way for Alice to prove something to Bob is for Alice to tell him. But then he

knows it, too. Bob can then tell anyone else he wants to and Alice can do nothing about it. (In the

literature, different characters are often used in these protocols. Peggy is usually cast as the prover

and Victor is the verifier. These names appear in the upcoming examples, instead of Alice and Bob.)

Using one-way functions, Peggy could perform a zero-knowledge proof [626]. This protocol proves to

Victor that Peggy does have a piece of information, but it does not give Victor any way to know what

the information is.

These proofs take the form of interactive protocols. Victor asks Peggy a series of questions. If Peggy knows the secret, she can answer all the questions correctly. If she does not, she has some chance (50 percent in the following examples) of answering correctly. After 10 or so questions, Victor will be convinced that Peggy knows the secret. Yet none of the questions or answers gives Victor any information about Peggy's information, only about her knowledge of it.

Basic Zero-Knowledge Protocol

Jean-Jacques Quisquater and Louis Guillou explain zero-knowledge with a story about a cave [1281].

The cave, illustrated in Figure 5.1, has a secret. Someone who knows the magic words can open the

secret door between C and D. To everyone else, both passages lead to dead ends.

Peggy knows the secret of the cave. She wants to prove her knowledge to Victor, but she doesn't want to reveal the magic words. Here's how she convinces him:

(1) Victor stands at point A.

(2) Peggy walks all the way into the cave, either to point C or point D.

(3) After Peggy has disappeared into the cave, Victor walks to point B.

(4) Victor shouts to Peggy, asking her either to:

(a) come out of the left passage or

(b) come out of the right passage.

(5) Peggy complies, using the magic words to open the secret door if she has to.

(6) Peggy and Victor repeat steps (1) through (5) n times.
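The steps above can be sketched as a small simulation (not from the book; the magic words are modeled as a simple boolean flag, and Python's `secrets` module stands in for the random choices):

```python
import secrets

def cave_round(peggy_knows_secret: bool) -> bool:
    """One trial of steps (1)-(5). True if Peggy emerges where asked."""
    peggy_side = secrets.choice(["left", "right"])      # (2) Peggy walks in
    victor_request = secrets.choice(["left", "right"])  # (4) Victor shouts
    if peggy_knows_secret:
        return True  # (5) the magic words open the door whenever needed
    # Without the secret, she can only come out the way she came in.
    return peggy_side == victor_request

def run_protocol(peggy_knows_secret: bool, n: int = 16) -> bool:
    """(6) Repeat n times; Victor accepts only if every round succeeds."""
    return all(cave_round(peggy_knows_secret) for _ in range(n))
```

An honest Peggy always passes; a cheating Peggy must guess Victor's request in every round, so `run_protocol(False)` almost never returns True.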

Assume that Victor has a camcorder and records everything he sees. He records Peggy disappearing

into the cave, he records when he shouts out where he wants Peggy to come out from, and he records

Peggy coming out. He records all n trials. If he showed this recording to Carol, would she believe that

Peggy knew the magic words to open the door? No. What if Peggy and Victor had agreed beforehand on what Victor would call out, so that Peggy could make sure she went into the right passage? Then she could come out where Victor asked her every time, without knowing the magic words. Or maybe they couldn't do that. Peggy would go into one of the passages and Victor would call out a random request. If Victor guessed right, great; if he didn't, they would edit that trial out of the camcorder recording.

Either way, Victor can get a recording showing exactly the same sequence of events as in a real proof

where Peggy knew the magic words.


Figure 5.1 The zero-knowledge cave.

This shows two things. One, it is impossible for Victor to convince a third party of the proof's validity. And two, it proves that the protocol is zero-knowledge. In the case where Peggy did not know the magic words, Victor will obviously not learn anything from watching the recording. But since there is no way to distinguish a real recording from a faked recording, Victor cannot learn anything from the real proof; it must be zero knowledge.

The technique used in this protocol is called cut and choose, because of its similarity to the classic

protocol for dividing anything fairly:

(1) Alice cuts the thing in half.

(2) Bob chooses one of the halves for himself.

(3) Alice takes the remaining half.

It is in Alice's best interest to divide fairly in step (1), because Bob will choose whichever half he wants in step (2). Michael Rabin was the first person to use the cut-and-choose technique in cryptography [1282]. The concepts of interactive protocol and zero-knowledge were formalized later [626,627].

The cut-and-choose protocol works because there is no way Peggy can repeatedly guess which side Victor will ask her to come out of. If Peggy doesn't know the secret, she can only come out the way she came in. She has a 50 percent chance of guessing which side Victor will ask in each round (sometimes called an accreditation) of the protocol, so she has a 50 percent chance of fooling him. The chance of her fooling him in two rounds is 25 percent, and the chance of her fooling him all n times is 1 in 2^n. After 16 rounds, Peggy has a 1 in 65,536 chance of fooling Victor. Victor can safely assume that if all 16 of Peggy's proofs are valid, then she must know the secret words to open the door between points C and D. (The cave analogy isn't perfect. Peggy can simply walk in one side and out the other; there's no need for any cut-and-choose protocol. However, mathematical zero knowledge requires it.)
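The arithmetic above is easy to check: a cheating Peggy survives each round with probability one-half, so her overall odds shrink exponentially.

```python
# Chance that a cheating prover fools the verifier in all n rounds:
# she must guess the verifier's choice correctly every single time.
def cheat_probability(n: int) -> float:
    return 0.5 ** n

# Two rounds: 25 percent. Sixteen rounds: 1 chance in 65,536.
```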

Assume that Peggy knows some information, and furthermore that the information is the solution to a

hard problem. The basic zero-knowledge protocol consists of several rounds.

(1) Peggy uses her information and a random number to transform the hard problem into

another hard problem, one that is isomorphic to the original problem. She then uses her

information and the random number to solve this new instance of the hard problem.

(2) Peggy commits to the solution of the new instance, using a bit-commitment scheme.

(3) Peggy reveals to Victor the new instance. Victor cannot use this new problem to get

any information about the original instance or its solution.

(4) Victor asks Peggy either to:

(a) prove to him that the old and new instances are isomorphic (i.e., two different

solutions to two related problems), or

(b) open the solution she committed to in step (2) and prove that it is a solution to

the new instance.


(5) Peggy complies.

(6) Peggy and Victor repeat steps (1) through (5) n times.

Remember the camcorder in the cave protocol? You can do the same thing here. Victor can make a transcript of the exchange between him and Peggy. He cannot use this transcript to convince Carol, because he can always collude with Peggy to build a simulator that fakes Peggy's knowledge. This argument can be used to prove that the proof is zero-knowledge.

The mathematics behind this type of proof is complicated. The problems and the random

transformation must be chosen carefully, so that Victor does not get any information about the

solution to the original problem, even after many iterations of the protocol. Not all hard problems can

be used for zero-knowledge proofs, but a lot of them can.

Graph Isomorphism

An example might go a long way to explain this concept; this one comes from graph theory [619,622]. A graph is a network of lines connecting different points. If two graphs are identical except for the names of the points, they are called isomorphic. For an extremely large graph, finding whether two graphs are isomorphic can take centuries of computer time; it's one of those NP-complete problems discussed in Section 11.1.

Assume that Peggy knows the isomorphism between the two graphs, G1 and G2. The following protocol will convince Victor of Peggy's knowledge:

(1) Peggy randomly permutes G1 to produce another graph, H, that is isomorphic to G1.

Because Peggy knows the isomorphism between H and G1, she also knows the isomorphism

between H and G2. For anyone else, finding an isomorphism between G1 and H or between G2

and H is just as hard as finding an isomorphism between G1 and G2.

(2) Peggy sends H to Victor.

(3) Victor asks Peggy either to:

(a) prove that H and G1 are isomorphic, or

(b) prove that H and G2 are isomorphic.

(4) Peggy complies. She either:

(a) proves that H and G1 are isomorphic, without proving that H and G2 are

isomorphic, or

(b) proves that H and G2 are isomorphic, without proving that H and G1 are

isomorphic.

(5) Peggy and Victor repeat steps (1) through (4) n times.
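One round of this protocol fits in a few lines of code (a toy instance, not from the book: graphs are sets of two-element frozensets, a permutation is a relabeling dict, and the graph names G1, G2, H match the steps above):

```python
import secrets

rng = secrets.SystemRandom()

def permute(graph, perm):
    """Relabel every edge {u, v} as {perm[u], perm[v]}."""
    return {frozenset({perm[u], perm[v]}) for u, v in graph}

def random_permutation(n):
    labels = list(range(n))
    rng.shuffle(labels)
    return dict(enumerate(labels))

# Toy instance: Peggy's secret is sigma, an isomorphism from G1 to G2.
G1 = {frozenset({0, 1}), frozenset({1, 2}), frozenset({2, 3})}
sigma = {0: 2, 1: 0, 2: 1, 3: 3}
G2 = permute(G1, sigma)

# (1)-(2) Peggy picks a random pi and sends H = pi(G1) to Victor.
pi = random_permutation(4)
H = permute(G1, pi)

# (3) Victor flips a coin.
challenge = rng.choice([1, 2])

# (4) Peggy reveals an isomorphism from the challenged graph to H.
if challenge == 1:
    answer = pi  # maps G1 onto H directly
else:
    # H = pi(sigma^-1(G2)), so composing pi with sigma^-1 maps G2 onto H.
    sigma_inv = {v: k for k, v in sigma.items()}
    answer = {k: pi[sigma_inv[k]] for k in sigma_inv}

# Victor's check for this round:
source = G1 if challenge == 1 else G2
assert permute(source, answer) == H
```

Note that the answer for challenge (2) never exposes sigma itself, only its composition with the fresh random pi; that is exactly why repeated rounds leak nothing.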

If Peggy does not know an isomorphism between G1 and G2, she cannot create graph H which is

isomorphic to both. She can create a graph that is either isomorphic to G1 or one that is isomorphic to

G2. Like the previous example, she has only a 50 percent chance of guessing which proof Victor will

ask her to perform in step (3).

This protocol doesn't give Victor any useful information to aid him in figuring out an isomorphism between G1 and G2. Because Peggy generates a new graph H for each round of the protocol, he can get no information no matter how many rounds they go through the protocol. He won't be able to figure out an isomorphism between G1 and G2 from Peggy's answers.

In each round, Victor receives a new random permutation of H, along with an isomorphism between

H and either G1 or G2. Victor could just as well have generated this by himself. Because Victor can

create a simulation of the protocol, it can be proven to be zero-knowledge.

Hamiltonian Cycles

A variant of this example was first presented by Manuel Blum [196]. Peggy knows a circular, continuous path along the lines of a graph that passes through each point exactly once. This is called a Hamiltonian cycle. Finding a Hamiltonian cycle is another hard problem. Peggy has this piece of information (she probably got it by creating the graph with a certain Hamiltonian cycle), and this is what she wants to convince Victor that she knows.

Peggy knows the Hamiltonian cycle of a graph, G. Victor knows G, but not the Hamiltonian cycle.

Peggy wants to prove to Victor that she knows this Hamiltonian cycle without revealing it. This is how

she does it:

(1) Peggy randomly permutes G. She moves the points around and changes their labels to make a new graph, H. Since G and H are topologically isomorphic (i.e., the same graph), if she knows the Hamiltonian cycle of G then she can easily find the Hamiltonian cycle of H. If she didn't create H herself, determining the isomorphism between two graphs would be another hard problem; it could also take centuries of computer time. She then encrypts H to get H′. (This has to be a probabilistic encryption of each line in H, that is, an encrypted 0 or an encrypted 1 for each line in H.)

(2) Peggy gives Victor a copy of H′.

(3) Victor asks Peggy either to:

(a) prove to him that H′ is an encryption of an isomorphic copy of G, or

(b) show him a Hamiltonian cycle for H.

(4) Peggy complies. She either:

(a) proves that H′ is an encryption of an isomorphic copy of G by revealing the

permutations and decrypting everything, without showing a Hamiltonian cycle for either

G or H, or

(b) shows a Hamiltonian cycle for H by decrypting only those lines that constitute a

Hamiltonian cycle, without proving that G and H are topologically isomorphic.

(5) Peggy and Victor repeat steps (1) through (4) n times.
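The "probabilistic encryption" of each line in step (1) is, in modern terms, a bit commitment. A minimal hash-based sketch (an assumed construction, not the one the book specifies): Peggy commits to every edge bit of H, then in step (4b) opens only the bits along the cycle.

```python
import hashlib
import secrets

def commit(bit: int) -> tuple[bytes, bytes]:
    """Commit to one line (edge bit) of H. Returns (commitment, opening)."""
    r = secrets.token_bytes(16)  # fresh randomness per bit: "probabilistic"
    return hashlib.sha256(r + bytes([bit])).digest(), r

def open_commitment(c: bytes, r: bytes, bit: int) -> bool:
    """Victor checks that (r, bit) really matches the commitment c."""
    return hashlib.sha256(r + bytes([bit])).digest() == c
```

The fresh randomness per bit is what matters: two commitments to the same bit look unrelated, so the unopened entries of H′ reveal nothing about the rest of the graph.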

If Peggy is honest, she can provide either proof in step (4) to Victor. However, if she does not know a Hamiltonian cycle for G, she cannot create an encrypted graph H′ which can meet both challenges. The best she can do is to create a graph that is either isomorphic to G or one that has the same number of points and lines and a valid Hamiltonian cycle. While she has a 50 percent chance of guessing which proof Victor will ask her to perform in step (3), Victor can repeat the protocol enough times to convince himself that Peggy knows a Hamiltonian cycle for G.

Parallel Zero-Knowledge Proofs

The basic zero-knowledge protocol involves n exchanges between Peggy and Victor. Why not do them

all in parallel:

(1) Peggy uses her information and n random numbers to transform the hard problem

into n different isomorphic problems. She then uses her information and the random numbers

to solve the n new hard problems.


(2) Peggy commits to the solution of the n new hard problems.

(3) Peggy reveals to Victor the n new hard problems. Victor cannot use these new problems to get any information about the original problems or their solutions.

(4) For each of the n new hard problems, Victor asks Peggy either to:

(a) prove to him that the old and new problems are isomorphic, or

(b) open the solution she committed to in step (2) and prove that it is a solution to

the new problem.

(5) Peggy complies for each of the n new hard problems.

Unfortunately, it's not that simple. This protocol does not have the same zero-knowledge properties as the previous protocol. In step (4), Victor can choose the challenges as a one-way hash of all the values committed to in the second step, thus making the transcript nonsimulatable. It is still zero-knowledge, but of a different sort. It seems to be secure in practice, but no one knows how to prove it. We do know that in certain circumstances, certain protocols for certain problems can be run in parallel while retaining their zero-knowledge property [247,106,546,616].

Noninteractive Zero-Knowledge Proofs

Carol can't be convinced because the protocol is interactive, and she is not involved in the interaction. To convince Carol, and anyone else who may be interested, we need a noninteractive protocol. Protocols have been invented for noninteractive zero-knowledge proofs [477,198,478,197]. These protocols do not require any interaction; Peggy could publish them and thereby prove to anyone who takes the time to check that the proof is valid.

The basic protocol is similar to the parallel zero-knowledge proof, but a one-way hash function takes

the place of Victor:

(1) Peggy uses her information and n random numbers to transform the hard problem

into n different isomorphic problems. She then uses her information and the random numbers

to solve the n new hard problems.

(2) Peggy commits to the solution of the n new hard problems.

(3) Peggy uses all of these commitments together as a single input to a one-way hash

function. (After all, the commitments are nothing more than bit strings.) She then saves the first

n bits of the output of this one-way hash function.

(4) Peggy takes the n bits generated in step (3). For each ith new hard problem in turn,

she takes the ith bit of those n bits and:

(a) if it is a 0, she proves that the old and new problems are isomorphic, or

(b) if it is a 1, she opens the solution she committed to in step (2) and proves that it

is a solution to the new problem.

(5) Peggy publishes all the commitments from step (2) as well as the solutions in step (4).

(6) Victor or Carol or whoever else is interested, verifies that steps (1) through (5) were

executed properly.
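Steps (3) and (4) can be sketched directly (SHA-256 is an assumed stand-in for the one-way hash function; it replaces Victor's coin flips):

```python
import hashlib

def challenge_bits(commitments: list[bytes], n: int) -> list[int]:
    """Hash all the commitments together as a single input and keep the
    first n bits of the output. In step (4), bit i = 0 means 'prove the
    problems are isomorphic'; bit i = 1 means 'open the i-th committed
    solution'. (n can be at most 256 for SHA-256.)"""
    digest = hashlib.sha256(b"".join(commitments)).digest()
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(n)]
```

Because the bits are fixed by the commitments themselves, anyone can recompute them; that is what lets Carol, or anyone else, verify the published proof without interacting with Peggy.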

This is amazing: Peggy can publish some data that contains no information about her secret, but can be used to convince anyone of the secret's existence. The protocol can also be used for digital signature schemes, if the challenge is set as a one-way hash of both the initial messages and the message to be signed.

This works because the one-way hash function acts as an unbiased random-bit generator. For Peggy to cheat, she has to be able to predict the output of the one-way hash function. (Remember, if she doesn't know the solution to the hard problem, she can do either (a) or (b) of step (4), but not both.) If she somehow knew what the one-way hash function would ask her to do, then she could cheat. However, there is no way for Peggy to force the one-way function to produce certain bits or to guess which bits it will produce. The one-way function is, in effect, Victor's surrogate in the protocol, randomly choosing one of two proofs in step (4).

In a noninteractive protocol, there must be many more iterations of the challenge/reply sequence. Peggy, not Victor, picks the hard problems using random numbers. She can pick different problems, hence different commitment vectors, till the hash function produces something she likes. In an interactive protocol, 10 iterations (a probability of 1 in 2^10, or 1 in 1024, that Peggy can cheat) may be fine. However, that's not enough for noninteractive zero-knowledge proofs. Remember that Mallory can always do either (a) or (b) of step (4). He can try to guess which he will be asked to do, go through steps (1) through (3), and see if he guessed right. If he didn't, he can try again, repeatedly. Making 1024 guesses is easy on a computer. To prevent this brute-force attack, noninteractive protocols need 64 iterations, or even 128 iterations, to be valid.

This is the whole point of using a one-way hash function: Peggy cannot predict the output of the hash

function because she cannot predict its input. The commitments which are used as the input are only

known after she solves the new problems.

Generalities

Blum proved that any mathematical theorem can be converted into a graph such that the proof of that

theorem is equivalent to proving a Hamiltonian cycle in the graph. The general case that any NP

statement has a zero-knowledge proof, assuming one-way functions and therefore good encryption

algorithms, was proved in [620]. Any mathematical proof can be converted into a zero-knowledge

proof. Using this technique, a researcher can prove to the world that he knows the proof of a

particular theorem without revealing what that solution is. Blum could have published these results

without revealing them.

There are also minimum-disclosure proofs [590]. In a minimum-disclosure proof, the following

properties hold:

1. Peggy cannot cheat Victor. If Peggy does not know the proof, her chances of convincing

Victor that she knows the proof are negligible.

2. Victor cannot cheat Peggy. He doesn't get the slightest hint of the proof, apart from the fact that Peggy knows the proof. In particular, Victor cannot demonstrate the proof to anyone else without proving it himself from scratch.

Zero-knowledge proofs have an additional condition:

3. Victor learns nothing from Peggy that he could not learn by himself without Peggy,

apart from the fact that Peggy knows the proof.

There is considerable mathematical difference between proofs that are only minimum-disclosure and

those that are zero-knowledge. That distinction is beyond the scope of this book, but more

sophisticated readers are welcome to peruse the references. The concepts were introduced in [626,619,622]. Further elaboration on their ideas, based on different mathematical assumptions, was developed in [240,319,239].

There are also different kinds of zero-knowledge proofs:

- Perfect. There is a simulator that gives transcripts identically distributed to real transcripts (the Hamiltonian cycle and graph isomorphism examples).

- Statistical. There is a simulator that gives transcripts identically distributed to real transcripts, except for some constant number of exceptions.

- Computational. There is a simulator that gives transcripts indistinguishable from real transcripts.

- No-use. A simulator may not exist, but we can prove that Victor will not learn any polynomial amount of information from the proof (the parallel example).

Over the years, extensive work, both theoretical and applied, has been done on minimum-disclosure

and zero-knowledge proofs. Mike Burmester and Yvo Desmedt invented broadcast interactive proofs,

where one prover can broadcast a zero-knowledge interactive proof to a large group of verifiers [280].

Cryptographers proved that everything that can be proven with an interactive proof can also be

proven with a zero-knowledge interactive proof [753,137].

A good survey article on the topic is [548]. For additional mathematical details, variations, protocols,

and applications, consult [590,619,240,319,620,113,241,1528,660,238,591,617,510,592,214,104,216,832,

97,939,622,482,615,618,215,476,71]. A lot has been written on this subject.

5.2 Zero-Knowledge Proofs of Identity

In the real world, we often use physical tokens as proofs of identity: passports, driver's licenses, credit cards, and so on. The token contains something that links it to a person: a picture, usually, or a signature, but it could almost as easily be a thumbprint, a retinal scan, or a dental x-ray. Wouldn't it be nice to do the same thing digitally?

Using zero-knowledge proofs as proofs of identity was first proposed by Uriel Feige, Amos Fiat, and Adi Shamir [566,567]. Alice's private key becomes a function of her "identity." Using a zero-knowledge proof, she proves that she knows her private key and therefore proves her identity. Algorithms for this can be found in Section 23.11.

This idea is quite powerful. It allows a person to prove his identity without any physical token. However, it's not perfect. Here are some abuses.

The Chess Grandmaster Problem

Here's how Alice, who doesn't even know the rules to chess, can defeat a grandmaster. (This is sometimes called the Chess Grandmaster Problem.) She challenges both Garry Kasparov and Anatoly Karpov to a game, at the same time and place, but in separate rooms. She plays white against Kasparov and black against Karpov. Neither grandmaster knows about the other.

Karpov, as white, makes his first move. Alice records the move and walks into the room with

Kasparov. Playing white, she makes the same move against Kasparov. Kasparov makes his first move

as black. Alice records the move, walks into the room with Karpov, and makes the same move. This

continues, until she wins one game and loses the other, or both games end in a draw.

In reality, Kasparov is playing Karpov and Alice is simply acting as the middleman, mimicking the moves of each grandmaster on the other's board. However, if neither Karpov nor Kasparov knows about the other's presence, each will be impressed with Alice's play.

This kind of fraud can be used against zero-knowledge proofs of identity [485,120]. While Alice is

proving her identity to Mallory, Mallory can simultaneously prove to Bob that he is Alice.


The Mafia Fraud

When discussing his zero-knowledge identification protocol, Adi Shamir [1424] said: "I could go to a Mafia-owned store a million successive times and they will still not be able to misrepresent themselves as me."

Here's how the Mafia can. Alice is eating at Bob's Diner, a Mafia-owned restaurant. Carol is shopping at Dave's Emporium, an expensive jewelry store. Bob and Carol are both members of the Mafia and are communicating by a secret radio link. Alice and Dave are unaware of the fraud.

At the end of Alice's meal, when she is ready to pay and prove her identity to Bob, Bob signals Carol that the fraud is ready to begin. Carol chooses some expensive diamonds and gets ready to prove her identity to Dave. Now, as Alice proves her identity to Bob, Bob radios Carol and Carol performs the same protocol with Dave. When Dave asks a question in the protocol, Carol radios the question back to Bob, and Bob asks it of Alice. When Alice answers, Bob radios the correct answer to Carol.

Actually, Alice is just proving her identity to Dave, and Bob and Carol are simply sitting in the middle

of the protocol passing messages back and forth. When the protocol finishes, Alice has proved herself

to Dave and has purchased some expensive diamonds (which Carol disappears with).

The Terrorist Fraud

If Alice is willing to collaborate with Carol, they can also defraud Dave. In this protocol, Carol is a

well-known terrorist. Alice is helping her enter the country. Dave is the immigration officer. Alice and

Carol are connected by a secret radio link.

When Dave asks Carol questions as part of the zero-knowledge protocol, Carol radios them back to

Alice, who answers them herself. Carol recites these answers to Dave. In reality, Alice is proving her

identity to Dave, with Carol acting as a communications path. When the protocol finishes, Dave

thinks that Carol is Alice and lets her into the country. Three days later, Carol shows up at some

government building with a minivan full of explosives.

Suggested Solutions

Both the Mafia and Terrorist frauds are possible because the conspirators can communicate via a

secret radio. One way to prevent this requires all identifications to take place inside Faraday cages,

which block all electromagnetic radiation. In the terrorist example, this assures immigration officer

Dave that Carol was not receiving her answers from Alice. In the Mafia example, Bob could simply

build a faulty Faraday cage in his restaurant, but jeweler Dave would have a working one; Bob and

Carol would not be able to communicate. To solve the Chess Grandmaster Problem, Alice should be

forced to sit in her seat until the end of a game.

Thomas Beth and Yvo Desmedt proposed another solution, one using accurate clocks [148]. If each

step in the protocol must take place at a given time, no time would be available for the conspirators to

communicate. In the Chess Grandmaster Problem, if every move in each game must be made as a

clock strikes one minute, then Alice will have no time to run from room to room. In the Mafia story,

Bob and Carol will have no time to pass questions and answers to one another.

The Multiple Identity Fraud

There are other possible abuses to zero-knowledge proofs of identity, also discussed in [485,120]. In

some implementations, there is no check when an individual registers a public key. Hence, Alice can have several private keys and, therefore, several identities. This can be a great help if she wants to

commit tax fraud. Alice can also commit a crime and disappear. First, she creates and publishes

several identities. One of them she doesn't use. Then, she uses that identity once and commits a crime

so that the person who identifies her is the witness. Then, she immediately stops using that identity.

The witness knows the identity of the person who committed the crime, but if Alice never uses that identity again, she's untraceable.

To prevent this, there has to be some mechanism by which each person has only one identity. In [120]

the authors suggest the bizarre idea of tamperproof babies who are impossible to clone and contain a

unique number as part of their genetic code. They also suggested having each baby apply for an

identity at birth. (Actually, the parents would have to do this as the baby would be otherwise

occupied.) This could easily be abused; parents could apply for multiple identities at the child's birth.

In the end, the uniqueness of an individual is based on trust.

Renting Passports

Alice wants to travel to Zaire, but that government won't give her a visa. Carol offers to rent her identity to Alice. (Bob offered first, but there were some obvious problems.) Carol sells Alice her private key and Alice goes off to Zaire pretending to be Carol.

Carol has not only been paid for her identity, but now she has a perfect alibi. She commits a crime while Alice is in Zaire. "Carol" has proved her identity in Zaire; how could she commit a crime back home?

Of course, Alice is free to commit crimes as well. She does so either before she leaves or after she returns, near Carol's home. First she identifies herself as Carol (she has Carol's private key, so she can easily do that), then she commits a crime and runs away. The police will come looking for Carol. Carol will claim she rented her identity to Alice, but who would believe such a nonsensical story?

The problem is that Alice isn't really proving her identity; she is proving that she knows a piece of secret information. It is the link between that information and the person it belongs to that is being abused. The tamperproof baby solution would protect against this type of fraud, as would a police state where all citizens would have to prove their identity very frequently (at the end of each day, at each street corner, etc.). Biometric methods (fingerprints, retinal scanning, voiceprints, and so on) may help solve this problem.

Proofs of Membership

Alice wants to prove to Bob that she is a member of some super-secret organization, but she does not want to reveal her identity. This problem is similar to, but different from, proving identity, and has also been studied [887,906,907,1201,1445]. Some solutions are related to the problem of group signatures (see Section 4.6).

5.3 Blind Signatures

An essential feature of digital signature protocols is that the signer knows what he is signing. This is a

good idea, except when we want the reverse.

We might want people to sign documents without ever seeing their contents. There are ways that a

signer can almost, but not exactly, know what he is signing. But first things first.


Completely Blind Signatures

Bob is a notary public. Alice wants him to sign a document, but does not want him to have any idea what he is signing. Bob doesn't care what the document says; he is just certifying that he notarized it at a certain time. He is willing to go along with this.

(1) Alice takes the document and multiplies it by a random value. This random value is

called a blinding factor.

(2) Alice sends the blinded document to Bob.

(3) Bob signs the blinded document.

(4) Alice divides out the blinding factor, leaving the original document signed by Bob.

This protocol only works if the signature function and multiplication are commutative. If they are not, there are other ways to modify the document than by multiplying. Some relevant algorithms appear in Section 23.12. For now, assume that the operation is multiplication and all the math works.
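With RSA, one algorithm for which the math does work out, the whole protocol is a few lines: the blinding factor is r^e mod n, and unblinding multiplies by r's inverse. (The tiny key below is purely illustrative; real keys are 2048 bits or more.)

```python
import secrets
from math import gcd

# Tiny textbook RSA key, for illustration only.
p, q = 61, 53
n = p * q                          # modulus: 3233
e = 17                             # Bob's public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # Bob's private exponent

def blind(m: int) -> tuple[int, int]:
    """(1) Alice multiplies the document by the blinding factor r^e mod n."""
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:          # r must be invertible mod n
            return (m * pow(r, e, n)) % n, r

def sign(blinded: int) -> int:
    """(3) Bob signs the blinded document with his private key."""
    return pow(blinded, d, n)

def unblind(blinded_sig: int, r: int) -> int:
    """(4) Alice divides out the blinding factor."""
    return (blinded_sig * pow(r, -1, n)) % n

m = 1234                            # the document, encoded as a number < n
blinded, r = blind(m)
sig = unblind(sign(blinded), r)
assert pow(sig, e, n) == m          # Bob's signature verifies on the original
```

The algebra is direct: Bob computes (m·r^e)^d = m^d·r mod n, and multiplying by r^-1 leaves m^d, a valid signature on m that Bob never saw.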

Can Bob cheat? Can he collect any information about the document that he is signing? If the blinding factor is truly random and makes the blinded document truly random, he cannot. The blinded document Bob signs in step (3) looks nothing like the document Alice began with. The blinded document with Bob's signature on it looks nothing like the signed document at the end of step (4). Even if Bob got his hands on the document, with his signature, after completing the protocol, he cannot prove (to himself or to anyone else) that he signed it in that particular protocol. He knows that his signature is valid. He can, like anyone else, verify his signature. However, there is no way for him to correlate any information he received during the signing protocol with the signed document. If he signed a million documents using this protocol, he would have no way of knowing in which instance he signed which document.

The properties of completely blind signatures are:

1. Bob's signature on the document is valid. The signature is a proof that Bob signed the

document. It will convince Bob that he signed the document if it is ever shown to him. It also has

all of the other properties of digital signatures discussed in Section 2.6.

2. Bob cannot correlate the signed document with the act of signing the document. Even if

he keeps records of every blind signature he makes, he cannot determine when he signed any

given document.

Eve, who is in the middle, watching this protocol, has even less information than Bob.

Blind Signatures

With the completely blind signature protocol, Alice can have Bob sign anything: "Bob owes Alice a

million dollars," "Bob owes Alice his first-born child," "Bob owes Alice a bag of chocolates." The

possibilities are endless. This protocol isn't useful in many applications.

However, there is a way that Bob can know what he is signing, while still maintaining the useful

properties of a blind signature. The heart of this protocol is the cut-and-choose technique. Consider

this example. Many people enter this country every day, and the Department of Immigration wants to

make sure they are not smuggling cocaine. The officials could search everyone, but instead they use a

probabilistic solution. They will search one-tenth of the people coming in. One person in ten has his

belongings inspected; the other nine get through untouched. Chronic smugglers will get away with

their misdeeds most of the time, but they have a 10 percent chance of getting caught. And if the court


system is effective, the penalty for getting caught once will more than wipe out the gains from the

other nine times.

If the Department of Immigration wants to increase the odds of catching smugglers, they have to

search more people. If they want to decrease the odds, they have to search fewer people. By

manipulating the probabilities, they control how successful the protocol is in catching smugglers.

The blind signature protocol works in a similar manner. Bob will be given a large pile of different

blinded documents. He will open, that is examine, all but one and then sign the last.

Think of the blinded document as being in an envelope. The process of blinding the document is

putting the document in an envelope and the process of removing the blinding factor is opening the

envelope. When the document is in an envelope, nobody can read it. The document is signed by

having a piece of carbon paper in the envelope: When the signer signs the envelope, his signature goes

through the carbon paper and signs the document as well.

This scenario involves a group of counterintelligence agents. Their identities are secret; not even the

counterintelligence agency knows who they are. The agency's director wants to give each agent a

signed document stating: "The bearer of this signed document, (insert agent's cover name here), has

full diplomatic immunity." Each of the agents has his own list of cover names, so the agency can't just

hand out signed documents. The agents do not want to send their cover names to the agency; the

enemy might have corrupted the agency's computer. On the other hand, the agency doesn't want to

blindly sign any document an agent gives it. A clever agent might substitute a message like: "Agent

(name) has retired and collects a million-dollar-a-year pension. Signed, Mr. President." In this case,

blind signatures could be useful.

Assume that all the agents have 10 possible cover names, which they have chosen themselves and

which no one else knows. Also assume that the agents don't care under which cover name they are

going to get diplomatic immunity. Also assume that the agency's computer is the Agency's Large

Intelligent Computing Engine, or ALICE, and that our particular agent is the Bogota Operations

Branch: BOB.

(1) BOB prepares n documents, each using a different cover name, giving himself

diplomatic immunity.

(2) BOB blinds each of these documents with a different blinding factor.

(3) BOB sends the n blinded documents to ALICE.

(4) ALICE chooses n – 1 documents at random and asks BOB for the blinding factors for

each of those documents.

(5) BOB sends ALICE the appropriate blinding factors.

(6) ALICE opens (i.e., she removes the blinding factor) n – 1 documents and makes sure

they are correct, and not pension authorizations.

(7) ALICE signs the remaining document and sends it to BOB.

(8) BOB removes the blinding factor and reads his new cover name: "The Crimson

Streak." The signed document gives him diplomatic immunity under that name.

This protocol is secure against BOB cheating. For him to cheat, he would have to predict accurately

which document ALICE would not examine. The odds of him doing this are 1 in n, which is not very good.

ALICE knows this and feels confident signing a document that she is not able to examine. With this

one document, the protocol is the same as the previous completely blind signature protocol and

maintains all of its properties of anonymity.
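The 1-in-n odds are easy to check with a few lines of simulation (a sketch for intuition, not part of the protocol):

```python
# Simulate the cut-and-choose step: BOB slips one bogus document into a
# pile of n; ALICE opens n - 1 at random and signs the one left over.
# BOB cheats successfully only when the unopened document is the bogus
# one, which should happen about 1 time in n.
import random

def bob_cheats_once(n):
    bogus = random.randrange(n)      # index of BOB's bogus document
    unopened = random.randrange(n)   # the one ALICE signs unseen
    return bogus == unopened

n, trials = 10, 100_000
wins = sum(bob_cheats_once(n) for _ in range(trials))
print(f"cheat rate ~ {wins / trials:.3f}, expected {1 / n}")
```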

There is a trick that makes BOB's chance of cheating even smaller. In step (4), ALICE randomly


chooses n/2 of the documents to challenge, and BOB sends her the appropriate blinding factors in step

(5). In step (7), ALICE multiplies together all of the unchallenged documents and signs the mega-

document. In step (8), BOB strips off all the blinding factors. ALICE's signature is acceptable only if

it is a valid signature of the product of n/2 identical documents. To cheat, BOB has to be able to guess

exactly which subset ALICE will challenge; the odds are much smaller than the odds of guessing

which one document ALICE won't challenge.

BOB has another way to cheat. He can generate two different documents, one that ALICE is willing to

sign and one that ALICE is not. Then he can find two different blinding factors that transform each

document into the same blinded document. That way, if ALICE asks to examine the document, BOB

gives her the blinding factor that transforms it into the benign document. If ALICE doesn't ask to see

the document and signs it, he uses the blinding factor that transforms it into the malevolent document.

While this is theoretically possible, the mathematics of the particular algorithms involved make the

odds of BOB's being able to find such a pair negligibly small. In fact, it can be made as small as the

odds of Bob being able to produce the signature on an arbitrary message himself. This issue is

discussed further in Section 23.12.

Patents

Chaum has patents for several flavors of blind signatures (see Table 5.1).

5.4 Identity-Based Public-Key Cryptography

Alice wants to send a secure message to Bob. She doesn't want to get his public key from a key server;

she doesn't want to verify some trusted third party's signature on his public-key certificate; and she

doesn't even want to store Bob's public key on her own computer. She just wants to send him a secure

message.

Identity-based cryptosystems, sometimes called Non-Interactive Key Sharing (NIKS) systems, solve

this problem [1422]. Bob's public key is based on his name and network address (or telephone

number, or physical street address, or whatever). With normal public-key cryptography, Alice needs

a signed certificate that associates Bob's public key with his identity. With identity-based

cryptography, Bob's public key is his identity. This is a really cool idea, and about as ideal as you can

get for a mail system: If Alice knows Bob's address, she can send him secure mail. It makes the

cryptography about as transparent as possible.

The system is based on Trent issuing private keys to users based on their identity. If Alice's private

key is compromised, she has to change some aspect of her identity to get another one. A serious

problem is designing a system in such a way that a collusion of dishonest users cannot forge a key.

A lot of work has been done on the mathematics of these sorts of schemes (most of it in Japan),

which turn out to be infuriatingly complicated to make secure. Many of the proposed solutions

involve Trent choosing a random number for each user; in my opinion this defeats the real point of

the system. Some of the algorithms discussed in Chapters 19 and 20 can be identity-based. For details,

algorithms, and cryptanalysis, see

[191,1422,891,1022,1515,1202,1196,908,692,674,1131,1023,1516,1536,1544,63,

1210,314,313,1545,1539,1543,933,1517,748,1228]. An algorithm that does not rely on any random

numbers is [1035]. The system discussed in [1546,1547,1507] is insecure against a chosen-public-key

attack; so is the system proposed as NIKS-TAS [1542,1540,1541,993,375,1538]. Honestly, nothing

proposed so far is both practical and secure.


TABLE 5.1

Chaum's Blind Signature Patents

U.S. PATENT # DATE TITLE

4,759,063 7/19/88 Blind Signature Systems [323]

4,759,064 7/19/88 Blind Unanticipated Signature Systems [324]

4,914,698 3/3/90 One-Show Blind Signature Systems [326]

4,949,380 8/14/90 Returned-Value Blind Signature Systems [328]

4,991,210 2/5/91 Unpredictable Blind Signature Systems [331]

5.5 Oblivious Transfer

Cryptographer Bob is desperately trying to factor a 500-bit number, n. He knows it's the product of

five 100-bit numbers, but nothing more. (This is a problem. If he can't recover the key he'll have to

work overtime and he'll miss his weekly mental poker game with Alice.)

What do you know? Here comes Alice now:

"I happen to know one factor of the number," she says, "and I'll sell it to you for $100.

That's a dollar a bit." To show she's serious, she uses a bit-commitment scheme and

commits to each bit individually.

Bob is interested, but has only $50. Alice is unwilling to lower her price and offers to sell

Bob half the bits for half the price. "It'll save you a considerable amount of work," she

says.

"But how do I know that your number is actually a factor of n? If you show me the

number and let me verify that it is a factor, then I will agree to your terms," says Bob.

They are at an impasse. Alice cannot convince Bob that her number is a factor of n

without revealing it, and Bob is unwilling to buy 50 bits of a number that could very well

be worthless.

This story, stolen from Joe Kilian [831], introduces the concept of oblivious transfer. Alice transmits a

group of messages to Bob. Bob receives some subset of those messages, but Alice has no idea which

ones he receives. This doesn't completely solve the problem, however. After Bob has received a

random half of the bits, Alice has to convince him that the bits she sent are part of a factor of n, using

a zero-knowledge proof.

In the following protocol, Alice will send Bob one of two messages. Bob will receive one, and Alice will

not know which.

(1) Alice generates two public-key/private-key key pairs, or four keys in all. She sends

both public keys to Bob.

(2) Bob chooses a key in a symmetric algorithm (DES, for example). He chooses one of

Alice's public keys and encrypts his DES key with it. He sends the encrypted key to Alice

without telling her which of her public keys he used to encrypt it.

(3) Alice decrypts Bobâ€™s key twice, once with each of her private keys. In one of the cases,

she uses the correct key and successfully decrypts Bobâ€™s DES key. In the other case, she uses the


wrong key and only manages to generate a meaningless pile of bits that nonetheless looks like a

random DES key. Since she does not know the correct plaintext, she has no idea which is which.

(4) Alice encrypts both of her messages, each with a different one of the DES keys she

generated in the previous step (one real and one meaningless) and sends both of them to Bob.

(5) Bob gets one of Aliceâ€™s messages encrypted with the proper DES key and the other one

encrypted with the gibberish DES key. When Bob decrypts each of them with his DES key, he

can read one of them; the other just looks like gibberish to him.

Bob now has one of the two messages from Alice and Alice does not know which one he was able to

read successfully. Unfortunately, if the protocol stopped here it would be possible for Alice to cheat.

Another step is necessary.

(6) After the protocol is complete and both possible results of the transfer are known,

Alice must give Bob her private keys so that he can verify that she did not cheat. After all, she

could have encrypted the same message with both keys in step (4).

At this point, of course, Bob can figure out the second message.
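A toy rendering of steps (1) through (5) can be sketched in code. Everything here is an illustrative stand-in: the RSA keys are tiny, and a one-byte XOR replaces DES, so this shows only the shape of the protocol, not a secure implementation.

```python
# Toy 1-of-2 oblivious transfer: tiny RSA key pairs, one-byte XOR
# "cipher" in place of DES. Illustration only; nothing here is secure.

def keypair(p, q, e=17):
    n, phi = p * q, (p - 1) * (q - 1)
    return (n, e), (n, pow(e, -1, phi))   # (public, private)

# (1) Alice makes two key pairs and sends Bob both public halves
pubs, privs = zip(keypair(61, 53), keypair(67, 71))

# (2) Bob picks a symmetric key k and encrypts it under one public key,
# without saying which one he used
choice = 1
k = 200
pub_n, pub_e = pubs[choice]
c = pow(k, pub_e, pub_n)

# (3) Alice decrypts c under both private keys; one result is k, the
# other is junk, and she cannot tell which is which
ks = [pow(c, d, n) % 256 for (n, d) in privs]

# (4) Alice encrypts each message under one of the recovered keys
msgs = [b"attack at dawn", b"retreat at dusk"]
enc = [bytes(byte ^ ki for byte in m) for m, ki in zip(msgs, ks)]

# (5) Bob decrypts both with his k; only his chosen message is readable
dec = [bytes(byte ^ (k % 256) for byte in m) for m in enc]
print(dec[choice])   # the one message Bob can read
```

Step (6), where Alice later reveals her private keys so Bob can check she encrypted two distinct messages, is omitted here.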

The protocol is secure against an attack by Alice because she has no way of knowing which of the two

DES keys is the real one. She encrypts them both, but Bob only successfully recovers one of them, at

least until step (6). It is secure against an attack by Bob because, before step (6), he cannot get Alice's

private keys to determine the DES key that the other message was encrypted in. This may still seem

like nothing more than a more complicated way to flip coins over a modem, but it has extensive

implications when used in more complicated protocols.

Of course, nothing stops Alice from sending Bob two completely useless messages: "Nyah Nyah" and

"You sucker." This protocol guarantees that Alice sends Bob one of two messages; it does nothing to

ensure that Bob wants to receive either of them.

Other oblivious transfer protocols are found in the literature. Some of them are noninteractive,

meaning that Alice can publish her two messages and Bob can learn only one of them. He can do this

on his own; he doesn't have to communicate with Alice [105].

No one really cares about being able to do oblivious transfer in practice, but the notion is an

important building block for other protocols. Although there are many types of oblivious transfer (I

have two secrets and you get one; I have n secrets and you get one; I have one secret which you get

with probability 1/2; and so on), they are all equivalent [245,391,395].

5.6 Oblivious Signatures

Honestly, I can't think of a good use for these, but there are two kinds [346]:

1. Alice has n different messages. Bob can choose one of the n messages for Alice to sign,

and Alice will have no way of knowing which one she signed.

2. Alice has one message. Bob can choose one of n keys for Alice to use in signing the

message, and Alice will have no way of knowing which key she used.

It's a neat idea; I'm sure it has a use somewhere.

5.7 Simultaneous Contract Signing


Contract Signing with an Arbitrator

Alice and Bob want to enter into a contract. They've agreed on the wording, but neither wishes to sign

unless the other signs as well. Face to face, this is easy: Both sign together. Over a distance, they could

use an arbitrator.

(1) Alice signs a copy of the contract and sends it to Trent.

(2) Bob signs a copy of the contract and sends it to Trent.

(3) Trent sends a message to both Alice and Bob indicating that the other has signed the

contract.

(4) Alice signs two copies of the contract and sends them to Bob.

(5) Bob signs both copies of the contract, keeps one for himself, and sends the other to

Alice.

(6) Alice and Bob both inform Trent that they each have a copy of the contract signed by

both of them.

(7) Trent tears up his two copies of the contract with only one signature each.

This protocol works because Trent prevents either of the parties from cheating. If Bob were to refuse

to sign the contract in step (5), Alice could appeal to Trent for a copy of the contract already signed by

Bob. If Alice were to refuse to sign in step (4), Bob could do the same. When Trent indicates that he

received both contracts in step (3), both Alice and Bob know that the other is bound by the contract.

If Trent does not receive both contracts in steps (1) and (2), he tears up the one he received and

neither party is bound.

Simultaneous Contract Signing without an Arbitrator (Face-to-Face)

If Alice and Bob were sitting face-to-face, they could sign the contract this way [1244]:

(1) Alice signs the first letter of her name and passes the contract to Bob.

(2) Bob signs the first letter of his name and passes the contract to Alice.

(3) Alice signs the second letter of her name and passes the contract to Bob.

(4) Bob signs the second letter of his name and passes the contract to Alice.

(5) This continues until both Alice and Bob have signed their entire names.

If you ignore the obvious problem with this protocol (Alice has a longer name than Bob), it works just

fine. After signing only one letter, Alice knows that no judge will bind her to the terms of the contract.

But the letter is an act of good faith, and Bob responds with a similar act of good faith.

After each party has signed several letters, a judge could probably be convinced that both parties had

signed the contract. The details are murky, though. Surely they are not bound after only the first

letter; just as surely they are bound after they sign their entire names. At what point in the protocol

do they become bound? After signing one-half of their names? Two-thirds of their names? Three-

quarters?

Since neither Alice nor Bob is certain of the exact point at which she or he is bound, each has at least

some fear that she or he is bound throughout the protocol. At no point can Bob say: "You signed four

letters and I only signed three. You are bound but I am not." Bob has no reason not to continue with

the protocol. Furthermore, the longer they continue, the greater the probability that a judge will rule

that they are bound. Again, there is no reason not to continue with the protocol. After all, they both

wanted to sign the contract; they just didnâ€™t want to sign before the other one.


Simultaneous Contract Signing without an Arbitrator (Not Face-to-Face)

This protocol uses the same sort of uncertainty [138]. Alice and Bob alternate taking baby steps

toward signing until both have signed.

In the protocol, Alice and Bob exchange a series of signed messages of the form: "I agree that with

probability p, I am bound by this contract."

The recipient of this message can take it to a judge and, with probability p, the judge will consider the

contract to be signed.

(1) Alice and Bob agree on a date by which the signing protocol should be completed.

(2) Alice and Bob decide on a probability difference that they are willing to live with. For

example, Alice might decide that she is not willing to be bound with a greater probability than 2

percent over Bob's probability. Call Alice's difference a; call Bob's difference b.

(3) Alice sends Bob a signed message with p = a.

(4) Bob sends Alice a signed message with p = a + b.

(5) Let p be the probability of the message Alice received in the previous step from Bob.

Alice sends Bob a signed message with p′ = p + a or 1, whichever is smaller.

(6) Let p be the probability of the message Bob received in the previous step from Alice.

Bob sends Alice a signed message with p′ = p + b or 1, whichever is smaller.

(7) Alice and Bob continue alternating steps (5) and (6) until both have received messages

with p = 1 or until the date agreed to in step (1) has passed.

As the protocol proceeds, both Alice and Bob agree to be bound to the contract with a greater and

greater probability. For example, Alice might define her a as 2 percent and Bob might define his b as

1 percent. (It would be nice if they had chosen larger increments; we will be here for a while.) Alice's

first message might state that she is bound with 2 percent probability. Bob might respond that he is

bound with 3 percent probability. Alice's next message might state that she is bound with 5 percent

probability and so on, until both are bound with 100 percent probability.
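The escalation above is mechanical enough to trace in code. This sketch replays the 2 percent / 1 percent example from the text:

```python
# Trace the exchange of steps (3) through (7) with Alice's increment
# a = 2 percent and Bob's b = 1 percent.
a, b = 0.02, 0.01
alice_p = bob_p = 0.0
transcript = []
while alice_p < 1.0 or bob_p < 1.0:
    alice_p = min(bob_p + a, 1.0)     # Alice's next signed message
    transcript.append(("Alice", alice_p))
    if alice_p == 1.0 and bob_p == 1.0:
        break
    bob_p = min(alice_p + b, 1.0)     # Bob's reply
    transcript.append(("Bob", bob_p))

# Starts around 2%, 3%, 5%, 6%, ... and ends with both at 100%.
print(len(transcript), "messages;", transcript[0], "...", transcript[-1])
```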

If both Alice and Bob complete the protocol by the completion date, all is well. Otherwise, either party

can take the contract to the judge, along with the other party's last signed message. The judge then

randomly chooses a value between 0 and 1 before seeing the contract. If the value is less than the

probability the other party signed, then both parties are bound. If the value is greater than the

probability, then both parties are not bound. (The judge then saves the value, in case he has to rule on

another matter regarding the same contract.) This is what is meant by being bound to the contract

with probability p.

That's the basic protocol, but it can have more complications. The judge can rule in the absence of

one of the parties. The judge's ruling either binds both or neither party; in no situation is one party

bound and the other one not. Furthermore, as long as one party is willing to have a slightly higher

probability of being bound than the other (no matter how small), the protocol will terminate.

Simultaneous Contract Signing without an Arbitrator (Using Cryptography)

This cryptographic protocol uses the same baby-step approach [529]. DES is used in the description,

although any symmetric algorithm will do.

(1) Both Alice and Bob randomly select 2n DES keys, grouped in pairs. The pairs are


nothing special; they are just grouped that way for the protocol.

(2) Both Alice and Bob generate n pairs of messages, Li and Ri: "This is the left half of my

ith signature" and "This is the right half of my ith signature," for example. The identifier, i,

runs from 1 to n. Each message will probably also include a digital signature of the contract and

a timestamp. The contract is considered signed if the other party can produce both halves, Li

and Ri, of a single signature pair.

(3) Both Alice and Bob encrypt their message pairs in each of the DES key pairs, the left

message with the left key in the pair and the right message with the right key in the pair.

(4) Alice and Bob send each other their pile of 2n encrypted messages, making clear

which messages are which halves of which pairs.

(5) Alice and Bob send each other every key pair using the oblivious transfer protocol for

each pair. That is, Alice sends Bob either the key used to encrypt the left message or the key

used to encrypt the right message, independently, for each of the n pairs. Bob does the same.

They can either alternate sending halves or one can send n and then the other; it doesn't

matter. Now both Alice and Bob have one key in each key pair, but neither knows which halves

the other one has.

(6) Both Alice and Bob decrypt the message halves that they can, using the keys they

received. They make sure that the decrypted messages are valid.

(7) Alice and Bob send each other the first bits of all 2n DES keys.

(8) Alice and Bob repeat step (7) for the second bits of all 2n DES keys, the third bits, and

so on, until all the bits of all the DES keys have been transferred.

(9) Alice and Bob decrypt the remaining halves of the message pairs and the contract is

signed.

(10) Alice and Bob exchange the private keys used during the oblivious transfer protocol

in step (5) and each verifies that the other did not cheat.
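Steps (7) and (8) amount to releasing every key one bit per round, so neither side is ever more than one bit ahead. A minimal sketch, using a toy 24-bit key in place of a 56-bit DES key:

```python
# Release a key one bit per round (steps 7-8), toy 24-bit key.
def bits_low_to_high(key, nbits):
    return [(key >> i) & 1 for i in range(nbits)]

def reassemble(bits):
    return sum(b << i for i, b in enumerate(bits))

key, nbits = 0xA5C3F1, 24
sent = bits_low_to_high(key, nbits)

# With every round received, the other side rebuilds the key exactly:
assert reassemble(sent) == key

# Stopping r rounds early leaves 2**r candidate keys to brute-force,
# for BOTH parties, which is why quitting early gains nothing.
received = sent[:nbits - 8]
candidates = 2 ** (nbits - len(received))
print(candidates)   # 256 keys left to try
```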

Why do Alice and Bob have to go through all this work? Let's assume Alice wants to cheat and see

what happens. In steps (4) and (5), Alice could disrupt the protocol by sending Bob nonsense bit

strings. Bob would catch this in step (6), when he tried to decrypt whatever half he received. Bob

could then stop safely, before Alice could decrypt any of Bobâ€™s message pairs.

If Alice were very clever, she could only disrupt half the protocol. She could send one half of each pair

correctly, but send a gibberish string for the other half. Bob has only a 50 percent chance of receiving

the correct half, so half the time Alice could cheat. However, this only works if there is one key pair. If

there were only two pairs, this sort of deception would succeed 25 percent of the time. That is why n

should be large. Alice has to guess correctly the outcome of n oblivious transfer protocols; she has a 1

in 2^n chance of doing this. If n = 10, Alice has a 1 in 1024 chance of deceiving Bob.

Alice could also send Bob random bits in step (8). Perhaps Bob won't know that she is sending him

random bits until he receives the whole key and tries to decrypt the message halves. But again, Bob

has probability on his side. He has already received half of the keys, and Alice does not know which

half. If n is large enough, Alice is sure to send him a nonsense bit to a key he has already received and

he will know immediately that she is trying to deceive him.

Maybe Alice will just go along with step (8) until she has enough bits of the keys to mount a brute-

force attack and then stop transmitting bits. DES has a 56-bit-long key. If she receives 40 of the 56

bits, she only has to try 2^16, or 65,536, keys in order to read the message, a task certainly within the

realm of a computer's capabilities. But Bob will have exactly the same number of bits of her keys (or,

at worst, one bit less), so he can do the same thing. Alice has no real choice but to continue the

protocol.
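The brute-force arithmetic above is just powers of two: every key bit still unseen doubles the search.

```python
# Work left after learning 40 of DES's 56 key bits:
key_bits, seen = 56, 40
remaining = 2 ** (key_bits - seen)
print(remaining)   # 65536 candidate keys, i.e. 2^16
```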

The basic point is that Alice has to play fairly, because the odds of fooling Bob are just too small. At


the end of the protocol, both parties have n signed message pairs, any one of which is sufficient for a

valid signature.

There is one way Alice can cheat; she can send Bob identical messages in step (5). Bob can't detect

this until after the protocol is finished, but he can use a transcript of the protocol to convince a judge

of Alice's duplicity.

There are two weaknesses with protocols of this type [138]. First, it's a problem if one of the parties

has significantly more computing power than the other. If, for example, Alice can mount a brute-force

attack faster than Bob can, then she can stop sending bits early in step (8), and figure out Bob's keys

herself. Bob, who cannot do the same in a reasonable amount of time, will not be happy.

Second, it's a problem if one of the parties stops the protocol early. If Alice abruptly stops the

protocol, both face similar computational efforts, but Bob does not have any real legal recourse. If, for

example, the contract specifies that she do something in a week, and Alice terminates the protocol at a

point when Bob would have to spend a year's worth of computing power before she is really

committed, that's a problem. The real difficulty here is the lack of a near-term deadline by which the

process cleanly terminates with either both or neither party bound.

These problems also apply to the protocols in Sections 5.8 and 5.9.

5.8 Digital Certified Mail

The same simultaneous oblivious transfer protocol used for contract signing works, with some

modifications, for computer certified mail [529]. Suppose Alice wants to send a message to Bob, but

she does not want him to read it without signing a receipt. Surly postal workers handle this process in

real life, but the same thing can be done with cryptography. Whitfield Diffie first discussed this

problem in [490].

At first glance, the simultaneous contract-signing protocol can do this. Alice simply encrypts her

message with a DES key. Her half of the protocol can be something like: "This is the left half of the

DES key: 32f5," and Bob's half can be something like: "This is the left half of my receipt." Everything

else stays the same.

To see why this wonâ€™t work, remember that the protocol hinges on the fact that the oblivious transfer

in step (5) keeps both parties honest. Both of them know that they sent the other party a valid half,

but neither knows which. They don't cheat in step (8) because the odds of getting away with it are

minuscule. If Alice is sending Bob not a message but half of a DES key, Bob can't check the validity of

the DES key in step (6). Alice can still check the validity of Bob's receipt, so Bob is still forced to be

honest. Alice can freely send Bob some garbage DES key, and he won't know the difference until she

has a valid receipt. Tough luck, Bob.

Getting around this problem requires some adjustment of the protocol:

(1) Alice encrypts her message using a random DES key, and sends the message to Bob.

(2) Alice generates n pairs of DES keys. The first key of each pair is generated at random;

the second key of each pair is the XOR of the first key and the message encryption key.

(3) Alice encrypts a dummy message with each of her 2n keys.

(4) Alice sends the whole pile of encrypted messages to Bob, making sure he knows which

messages are which halves of which pairs.

(5) Bob generates n pairs of random DES keys.

(6) Bob generates a pair of messages that indicates a valid receipt. "This is the left half of


my receipt" and "this is the right half of my receipt" are good candidates, with the addition of

some kind of random-bit string. He makes n receipt pairs, each numbered. As with the previous

protocol, the receipt is considered valid if Alice can produce both halves of a receipt (with the

same number) and all of her encryption keys.

(7) Bob encrypts each of his message pairs with DES key pairs, the ith message pair with

the ith key pair, the left message with the left key in the pair, and the right message with the

right key in the pair.

(8) Bob sends his pile of message pairs to Alice, making sure that Alice knows which

messages are which halves of which pairs.

(9) Alice and Bob send each other every key pair using the oblivious transfer protocol.

That is, Alice sends Bob either the key used to encrypt the left message or the key used to

encrypt the right message, for each of the n pairs. Bob does the same. They can either alternate

sending halves or one can send n and then the other; it doesn't matter. Now both Alice and Bob

have one key in each key pair, but neither knows which halves the other has.

(10) Both Alice and Bob decrypt the halves they can and make sure that the decrypted

messages are valid.

(11) Alice and Bob send each other the first bits of all 2n DES keys. (If they are worried

about Eve being able to read these mail messages, then they should encrypt their transmissions

to each other.)

(12) Alice and Bob repeat step (11) for the second bits of all 2n DES keys, the third bits,

and so on, until all the bits of all the DES keys have been transferred.

(13) Alice and Bob decrypt the remaining halves of the message pairs. Alice has a valid

receipt from Bob, and Bob can XOR any key pair to get the original message encryption key.

(14) Alice and Bob exchange the private keys used during the oblivious transfer protocol

and each verifies that the other did not cheat.

Steps (5) through (8) for Bob, and steps (9) through (12) for both Alice and Bob, are the same as the

contract-signing protocol. The twist is all of Alice's dummy messages. They give Bob some way of

checking the validity of her oblivious transfer in step (10), which forces her to stay honest during steps

(11) through (13). And, as with the simultaneous contract-signing protocol, both a left and a right half

of one of Aliceâ€™s message pairs are required to complete the protocol.
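The bit-by-bit key release of steps (11) and (12), and the XOR reconstruction of step (13), can be sketched as follows. This is a toy simulation, not the full protocol: random byte strings stand in for DES keys, the oblivious transfer and message encryption are omitted, and all function names are illustrative.

```python
import secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_key_pair(message_key):
    # Each left/right key pair XORs to the message encryption key,
    # which is what makes step (13) work.
    left = secrets.token_bytes(len(message_key))
    right = xor_bytes(left, message_key)
    return left, right

def get_bit(key, i):
    # Bit i of a key, released one round at a time in steps (11)-(12).
    return (key[i // 8] >> (7 - i % 8)) & 1

def from_bits(bits):
    # Reassemble a key from the bits received round by round.
    out = bytearray(len(bits) // 8)
    for i, bit in enumerate(bits):
        out[i // 8] |= bit << (7 - i % 8)
    return bytes(out)

message_key = secrets.token_bytes(7)          # 56 bits, like a DES key
left, right = make_key_pair(message_key)

# Suppose Bob already holds the left key from the oblivious transfer
# in step (9); he receives the right key one bit per round.
received = from_bits([get_bit(right, i) for i in range(56)])
assert received == right
assert xor_bytes(left, received) == message_key
```

The point of the gradual release is fairness: at every round each party is at most one bit ahead of the other, so neither can abort with a large advantage.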

5.9 Simultaneous Exchange of Secrets

Alice knows secret A; Bob knows secret B. Alice is willing to tell Bob A, if Bob tells her B. Bob is willing to tell Alice B, if Alice tells him A. This protocol, observed in a schoolyard, does not work:

(1) Alice: "I'll tell if you tell me first."

(2) Bob: "I'll tell if you tell me first."

(3) Alice: "No, you first."

(4) Bob: "Oh, all right." Bob whispers.

(5) Alice: "Ha! I won't tell you."

(6) Bob: "That's not fair."

Cryptography can make it fair. The previous two protocols are implementations of this more general protocol, one that lets Alice and Bob exchange secrets simultaneously [529]. Rather than repeat the whole protocol, I'll sketch the modifications to the certified mail protocol.

Alice performs steps (1) through (4) using A as the message. Bob goes through similar steps using B as his message. Alice and Bob perform the oblivious transfer in step (9), decrypt the halves they can in step (10), and go through the iterations in steps (11) and (12). If they are concerned about Eve, they should encrypt their messages. Finally, both Alice and Bob decrypt the remaining halves of the message pairs and XOR any key pair to get the original message encryption key.

Page 113 of 666

Applied Cryptography: Second Edition - Bruce Schneier

This protocol allows Alice and Bob to exchange secrets simultaneously, but says nothing about the quality of the secrets exchanged. Alice could promise Bob the solution to the Minotaur's labyrinth, but actually send him a map of Boston's subway system. Bob will get whatever secret Alice sends him. Other protocols are [1286,195,991,1524,705,753,259,358,415].

Chapter 6

Esoteric Protocols

6.1 Secure Elections

Computerized voting will never be used for general elections unless there is a protocol that both maintains individual privacy and prevents cheating. The ideal protocol has, at the very least, these six requirements:

1. Only authorized voters can vote.

2. No one can vote more than once.

3. No one can determine for whom anyone else voted.

4. No one can duplicate anyone else's vote. (This turns out to be the hardest requirement.)

5. No one can change anyone else's vote without being discovered.

6. Every voter can make sure that his vote has been taken into account in the final tabulation.

Additionally, some voting schemes may have the following requirement:

7. Everyone knows who voted and who didn't.

Before describing the complicated voting protocols with these characteristics, let's look at some simpler protocols.

Simplistic Voting Protocol #1

(1) Each voter encrypts his vote with the public key of a Central Tabulating Facility (CTF).

(2) Each voter sends his vote in to the CTF.

(3) The CTF decrypts the votes, tabulates them, and makes the results public.

This protocol is rife with problems. The CTF has no idea where the votes are from, so it doesn't even know if the votes are coming from eligible voters. It has no idea if eligible voters are voting more than once. On the plus side, no one can change anyone else's vote; but no one would bother trying to modify someone else's vote when it is far easier to vote repeatedly for the result of your choice.
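A minimal sketch of this first protocol, using a tiny textbook-RSA key for the CTF. The primes, exponent, and the crude randomizing pad are illustrative only; a real system would need a vetted cryptographic library and proper padding.

```python
import secrets

# Toy RSA key pair for the CTF (illustration only).
p, q, e = 61, 53, 17
n = p * q                                  # public modulus
d = pow(e, -1, (p - 1) * (q - 1))          # CTF's private exponent

def encrypt_vote(vote):
    # Step (1): encrypt a 0/1 vote with the CTF's public key.
    # The random pad keeps identical votes from producing identical
    # ciphertexts; it is NOT a secure padding scheme.
    m = (secrets.randbelow(1000) << 1) | vote
    return pow(m, e, n)

def tabulate(ciphertexts):
    # Step (3): the CTF decrypts and tallies. Note that nothing here
    # tells the CTF who sent each ballot, or how many times.
    votes = [pow(c, d, n) & 1 for c in ciphertexts]
    return {"yes": sum(votes), "no": len(votes) - sum(votes)}

ballots = [encrypt_vote(v) for v in (1, 0, 1, 1)]
assert tabulate(ballots) == {"yes": 3, "no": 1}
```

The sketch makes the flaw concrete: `tabulate` accepts any well-formed ciphertext, so anyone can submit as many ballots as they like.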

Simplistic Voting Protocol #2

(1) Each voter signs his vote with his private key.

(2) Each voter encrypts his signed vote with the CTF's public key.

(3) Each voter sends his vote to a CTF.

(4) The CTF decrypts the votes, checks the signatures, tabulates the votes, and makes the results public.


This protocol satisfies properties one and two: Only authorized voters can vote and no one can vote more than once; the CTF would record votes received in step (3). Each vote is signed with the voter's private key, so the CTF knows who voted, who didn't, and how often each voter voted. If a vote comes in that isn't signed by an eligible voter, or if a second vote comes in signed by a voter who has already voted, the facility ignores it. No one can change anyone else's vote either, even if they intercept it in step (3), because of the digital signature.

The problem with this protocol is that the signature is attached to the vote; the CTF knows who voted for whom. Encrypting the votes with the CTF's public key prevents anyone from eavesdropping on the protocol and figuring out who voted for whom, but you have to trust the CTF completely. It's analogous to having an election judge staring over your shoulder in the voting booth.
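The sign-then-encrypt flow and the CTF's duplicate check can be sketched with two toy textbook-RSA key pairs. All parameters are illustrative; the CTF's modulus is chosen larger than the voter's so a signature fits inside the outer encryption, and the "signature" here is RSA with message recovery (the vote is recovered from the signature itself).

```python
def rsa_keys(p, q, e=17):
    n = p * q
    return (n, e), (n, pow(e, -1, (p - 1) * (q - 1)))

voter_pub, voter_priv = rsa_keys(61, 53)   # n = 3233
ctf_pub, ctf_priv = rsa_keys(89, 97)       # n = 8633 > 3233

def cast(vote, signer_priv):
    # Steps (1)-(2): sign the vote, then encrypt the signed vote
    # with the CTF's public key.
    sig = pow(vote, signer_priv[1], signer_priv[0])
    return pow(sig, ctf_pub[1], ctf_pub[0])

seen = set()

def receive(ciphertext, signer_pub):
    # Step (4): decrypt, recover the vote from the signature, and
    # ignore a second ballot under the same key. The CTF necessarily
    # learns which key cast which vote -- the protocol's central flaw.
    sig = pow(ciphertext, ctf_priv[1], ctf_priv[0])
    vote = pow(sig, signer_pub[1], signer_pub[0])
    if signer_pub in seen:
        return None
    seen.add(signer_pub)
    return vote

ballot = cast(2, voter_priv)
assert receive(ballot, voter_pub) == 2
assert receive(ballot, voter_pub) is None  # duplicate ignored
```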

These two examples show how difficult it is to achieve the first three requirements of a secure voting protocol, let alone the others.

Voting with Blind Signatures

We need to somehow dissociate the vote from the voter, while still maintaining authentication. The blind signature protocol does just that.

(1) Each voter generates 10 sets of messages, each set containing a valid vote for each possible outcome (e.g., if the vote is a yes or no question, each set contains two votes, one for "yes" and the other for "no"). Each message also contains a randomly generated identification number, large enough to avoid duplicates with other voters.

(2) Each voter individually blinds all of the messages (see Section 5.3) and sends them, with their blinding factors, to the CTF.

(3) The CTF checks its database to make sure the voter has not submitted his blinded votes for signature previously. It opens nine of the sets to check that they are properly formed. Then it individually signs each message in the remaining set. It sends them back to the voter, storing the name of the voter in its database.

(4) The voter unblinds the messages and is left with a set of votes signed by the CTF. (These votes are signed but unencrypted, so the voter can easily see which vote is "yes" and which is "no.")

(5) The voter chooses one of the votes (ah, democracy) and encrypts it with the CTF's public key.

(6) The voter sends his vote in.

(7) The CTF decrypts the votes, checks the signatures, checks its database for a duplicate identification number, saves the serial number, and tabulates the votes. It publishes the results of the election, along with every serial number and its associated vote.

A malicious voter, call him Mallory, cannot cheat this system. The blind signature protocol ensures that his votes are unique. If he tries to send in the same vote twice, the CTF will notice the duplicate serial number in step (7) and throw out the second vote. If he tries to get multiple votes signed in step (2), the CTF will discover this in step (3). Mallory cannot generate his own votes because he doesn't know the facility's private key. He can't intercept and change other people's votes for the same reason.

The cut-and-choose protocol in step (3) is to ensure that the votes are unique. Without that step, Mallory could create a set of votes that are the same except for the identification number, and have them all validated.
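The blinding arithmetic underlying steps (2) through (4) is worth seeing concretely. Here is a sketch with a tiny textbook-RSA signing key for the CTF; the parameters are illustrative only (see Section 5.3 for the protocol itself).

```python
import math
import secrets

# Toy RSA signing key for the CTF (illustration only).
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

def blind(m):
    # Step (2): the voter multiplies the message by r^e for a random
    # r coprime to n; the CTF cannot recover m from the result.
    while True:
        r = secrets.randbelow(n - 2) + 2
        if math.gcd(r, n) == 1:
            return (m * pow(r, e, n)) % n, r

def sign_blinded(blinded):
    # Step (3): the CTF signs without ever seeing the message.
    return pow(blinded, d, n)

def unblind(blinded_sig, r):
    # Step (4): (m * r^e)^d = m^d * r (mod n), so dividing out r
    # leaves m^d mod n -- an ordinary signature on m.
    return (blinded_sig * pow(r, -1, n)) % n

m = 1234                       # a vote message with its serial number
blinded, r = blind(m)
sig = unblind(sign_blinded(blinded), r)
assert pow(sig, e, n) == m     # verifies as a normal CTF signature
```

Because the CTF only ever sees `m * r^e mod n` for a random r, the signed blinded value is statistically unlinkable to the signature the voter later casts.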

A malicious CTF cannot figure out how individuals voted. Because the blind signature protocol prevents the facility from seeing the serial numbers on the votes before they are cast, the CTF cannot link the blinded vote it signed with the vote eventually cast. Publishing a list of serial numbers and their associated votes allows voters to confirm that their vote was tabulated correctly.

There are still problems. If step (6) is not anonymous and the CTF can record who sent in which vote, then it can figure out who voted for whom. However, if it receives votes in a locked ballot box and then tabulates them later, it cannot. Also, while the CTF may not be able to link votes to individuals, it can generate a large number of signed, valid votes and cheat by submitting those itself. And if Alice discovers that the CTF changed her vote, she has no way to prove it. A similar protocol, which tries to correct these problems, is [1195,1370].

Voting with Two Central Facilities

One solution is to divide the CTF in two. Neither party would have the power to cheat on its own. The following protocol uses a Central Legitimization Agency (CLA) to certify voters and a separate CTF to count votes [1373].

(1) Each voter sends a message to the CLA asking for a validation number.

(2) The CLA sends the voter back a random validation number. The CLA maintains a list of validation numbers. The CLA also keeps a list of the validation numbers' recipients, in case someone tries to vote twice.

(3) The CLA sends the list of validation numbers to the CTF.

(4) Each voter chooses a random identification number. He creates a message with that number, the validation number he received from the CLA, and his vote. He sends this message to the CTF.

(5) The CTF checks the validation number against the list it received from the CLA in step (3). If the validation number is there, the CTF crosses it off (to prevent someone from voting twice). The CTF adds the identification number to the list of people who voted for a particular candidate and adds one to the tally.

(6) After all votes have been received, the CTF publishes the outcome, as well as the lists of identification numbers and for whom their owners voted.

Like the previous protocol, each voter can look at the lists of identification numbers and find his own. This gives him proof that his vote was counted. Of course, all messages passing among the parties in the protocol should be encrypted and signed to prevent someone from impersonating someone else or intercepting transmissions.
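The division of duties between the two facilities can be sketched as follows. Class and method names are illustrative, and the encryption and signing of messages is omitted; only the bookkeeping from steps (2), (3), and (5) is shown.

```python
import secrets
from collections import Counter

class CLA:
    """Certifies voters and issues validation numbers (steps 1-3)."""
    def __init__(self):
        self.issued = {}                  # voter -> validation number
    def get_validation_number(self, voter):
        # Step (2): one random validation number per voter; the
        # recipient list catches anyone asking twice.
        if voter not in self.issued:
            self.issued[voter] = secrets.randbelow(10**12)
        return self.issued[voter]
    def validation_list(self):
        # Step (3): numbers only, without the names.
        return set(self.issued.values())

class CTF:
    """Counts votes; never learns who holds which validation number."""
    def __init__(self, valid_numbers):
        self.valid = set(valid_numbers)
        self.tally = Counter()
        self.ballots = {}                 # identification number -> vote
    def receive(self, id_number, validation, vote):
        # Step (5): cross the validation number off, record the vote.
        if validation not in self.valid:
            return False
        self.valid.remove(validation)
        self.ballots[id_number] = vote
        self.tally[vote] += 1
        return True

cla = CLA()
v = cla.get_validation_number("alice")
ctf = CTF(cla.validation_list())
my_id = secrets.randbelow(10**12)
assert ctf.receive(my_id, v, "yes")
assert not ctf.receive(my_id, v, "yes")   # second attempt rejected
```

Note that neither object alone links a name to a vote: the CLA has names and validation numbers, the CTF has validation numbers and votes. Only by combining their databases, as the text warns below, can the link be made.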

The CTF cannot modify votes because each voter will look for his identification string. If a voter doesn't find his identification string, or finds his identification string in a tally other than the one he voted for, he will immediately know there was foul play. The CTF cannot stuff the ballot box because it is being watched by the CLA. The CLA knows how many voters have been certified and their validation numbers, and will detect any modifications.

Mallory, who is not an eligible voter, can try to cheat by guessing a valid validation number. This threat can be minimized by making the number of possible validation numbers much larger than the number of actual validation numbers: 100-digit numbers for a million voters, for example. Of course, the validation numbers must be generated randomly.
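A back-of-the-envelope check of that example: with a million issued numbers drawn uniformly from the space of 100-digit values, a single guess succeeds with probability about 10^-94.

```python
import secrets

VOTERS = 10**6
SPACE = 10**100                     # all 100-digit validation numbers

# Probability that one random guess matches any issued number:
p_hit = VOTERS / SPACE              # about 10**-94
assert p_hit < 1e-90

# Drawing a validation number uniformly from the full space requires
# a cryptographic generator, not a predictable one:
validation = secrets.randbelow(SPACE)
assert 0 <= validation < SPACE
```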

Despite this, the CLA is still a trusted authority in some respects. It can certify ineligible voters. It can certify eligible voters multiple times. This risk could be minimized by having the CLA publish a list of certified voters (but not their validation numbers). If the number of voters on this list is less than the number of votes tabulated, then something is awry. However, if more voters were certified than votes tabulated, it probably means that some certified people didn't bother voting. Many people who are registered to vote don't bother to cast ballots.

This protocol is vulnerable to collusion between the CLA and the CTF. If the two of them got together, they could correlate databases and figure out who voted for whom.

Voting with a Single Central Facility

A more complex protocol can be used to overcome the danger of collusion between the CLA and the CTF [1373]. This protocol is identical to the previous one, with two modifications:

- The CLA and the CTF are one organization, and
- ANDOS (see Section 4.13) is used to anonymously distribute validation numbers in step (2).

Since the anonymous key distribution protocol prevents the CTF from knowing which voter got which validation number, there is no way for the CTF to correlate validation numbers with votes received. The CTF still has to be trusted not to give validation numbers to ineligible voters, though. You can also solve this problem with blind signatures.

Improved Voting with a Single Central Facility

This protocol also uses ANDOS [1175]. It satisfies all six requirements of a good voting protocol. It doesn't satisfy the seventh requirement, but has two properties additional to the six listed at the beginning of the section:

7. A voter can change his mind (i.e., retract his vote and vote again) within a given period of time.

8. If a voter finds out that his vote is miscounted, he can identify and correct the problem without jeopardizing the secrecy of his ballot.

Here's the protocol:

(1) The CTF publishes a list of all legitimate voters.

(2) Within a specified deadline, each voter tells the CTF whether he intends to vote.

(3) The CTF publishes a list of voters participating in the election.

(4) Each voter receives an identification number, I, using an ANDOS protocol.

(5) Each voter generates a public-key/private-key key pair: k, d. If v is the vote, he generates the following message and sends it to the CTF:

I,Ek(I,v)

This message must be sent anonymously.

(6) The CTF acknowledges receipt of the vote by publishing:

Ek(I,v)

(7) Each voter sends the CTF:

I,d

(8) The CTF decrypts the votes. At the end of the election, it publishes the results of the election and, for each different vote, the list of all Ek(I,v) values that contained that vote.

(9) If a voter observes that his vote is not properly counted, he protests by sending the
