P VERSUS NP AND CRYPTOGRAPHY: The assumption P ≠ NP (and even NP ⊄ BPP6) is necessary for most of modern cryptography. For example, take any efficient encryption scheme and consider the following computational problem: given a ciphertext C, find the corresponding message M along with the key K and any randomization R used in the encryption process. This is an NP problem: the solution (M, K, R) can be verified by re-encrypting the message M using the key K and the randomization R and checking whether the result equals C. Thus, if P = NP, this problem can be solved in polynomial time, i.e. there is an efficient algorithm for breaking the encryption scheme.7

However, the assumption P ≠ NP (or even NP ⊄ BPP) does not appear sufficient for cryptography. The main reason for this is that P ≠ NP refers to worst-case complexity. That is, the fact that a computational problem is not in P only means that for every polynomial-time algorithm A, there exist inputs on which A fails to solve the problem. However, these "hard inputs" could conceivably be very rare and very hard to find. Intuitively, to make use of intractability (for the security of cryptosystems), we need to be able to efficiently generate hard instances of an intractable computational problem.

ONE-WAY FUNCTIONS: The notion of a one-way function captures the kind of computational intractability needed in cryptography. Informally, a one-way function is a function f that is "easy to evaluate" but "hard to invert." That is, we require that the function f can be computed in polynomial time, but given y = f(x), it is intractable to recover x. It is required that the difficulty of inversion holds even when the input x is chosen at random. Thus, we can efficiently generate hard instances of the inversion problem by choosing x at random and outputting f(x).

DEFINITION 1 (one-way function). A one-to-one function f is one-way if it satisfies the following conditions.
1. (Easy to evaluate). f can be evaluated in polynomial time.
2. (Hard to invert). For every probabilistic polynomial-time algorithm A, there is a negligible function ε such that

    Pr[A(f(X)) = X] ≤ ε(n),

where the probability is taken over selecting an input X of length n uniformly at random and the random choices of the algorithm A.

For simplicity, we have only given the definition for one-to-one one-way functions. Without the one-to-one constraint, the definition should refer to the problem of finding some preimage of f(X), i.e., require that the probability that A(f(X)) ∈ f⁻¹(f(X)) is negligible.8

The input length n can be thought of as corresponding to the security parameter (or key length) in a cryptographic protocol using f. If f is one-way, we are guaranteed that by making n sufficiently large, inverting f takes much more time than evaluating f. However, to know how large to set n in an implementation requires a concrete security analogue of the above definition, where the maximum success probability is specified for A with a particular running time on a particular input length n, and a particular model of computation.

The "inversion problem" is an NP problem (to verify that X is a preimage of Y, simply evaluate f(X) and compare with Y.) Thus, if NP ⊆ BPP then one-way functions do not exist. However, the converse is an open problem, and proving it would be a major breakthrough in complexity theory. Fortunately, even though the existence of one-way functions does not appear to follow from NP ⊄ BPP, there are a number of natural candidates for one-way functions.

6 BPP stands for "bounded-error probabilistic polynomial time."
7 Technically, to conclude that the cryptosystem is broken requires that the message M is uniquely determined by ciphertext C. This will essentially be the case for most messages if the message length is greater than the key length. (If the message length is less than or equal to the key length, then there exist encryption schemes that achieve information-theoretic security (for a single encryption, e.g., the one-time pad), regardless of whether or not P = NP.)
8 For technical reasons, we also need to require that f does not shrink its input too much, e.g. that the lengths |f(x)| and |x| are polynomially related (in both directions).

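The easy/hard asymmetry of Definition 1 can be made concrete with a heuristic stand-in for f (a truncated hash, used here purely for illustration and not a proven one-way function): evaluating f is one cheap call, while generic inversion is exhaustive search over all inputs of length n.

```python
# Illustration of Definition 1 with a heuristic stand-in for f.
# Truncated SHA-256 is an assumption for demonstration, not a proven
# one-way function; n is kept tiny so the search below terminates.
import hashlib
import itertools
import secrets

def f(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()[:8]  # easy to evaluate

def invert_by_search(y: bytes, n_bytes: int):
    # Generic inversion: try all 256**n_bytes inputs (exponential in n).
    for cand in itertools.product(range(256), repeat=n_bytes):
        x = bytes(cand)
        if f(x) == y:
            return x
    return None

x = secrets.token_bytes(2)      # X chosen uniformly at random, as in Definition 1
y = f(x)                        # evaluating f: one call
x2 = invert_by_search(y, 2)     # inverting f: up to 65,536 calls at n = 16 bits
assert x2 is not None and f(x2) == y
```

At realistic input lengths (say n = 128 bits) the same search would need on the order of 2^128 evaluations; Definition 1 asks that no probabilistic polynomial-time algorithm does significantly better than this.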

SOME CANDIDATE ONE-WAY FUNCTIONS: These examples are described informally, and may not all match up perfectly with the simplified definition above. In particular, some are actually collections of one-way functions F = {f_i : D_i → R_i}, in which the functions f_i are parameterized by an index i that is generated by some randomized algorithm.9

1. (Multiplication) f(p, q) = p · q, where p and q are primes of equal length. Inverting f is the FACTORING problem (see integer factoring), which indeed seems intractable even on random inputs of the form p · q.
2. (Subset Sum) f(x_1, ..., x_n, S) = (x_1, ..., x_n, Σ_{i∈S} x_i). Here each x_i is an n-bit integer and S ⊆ [n]. Inverting f is the SUBSET SUM problem (see knapsack cryptographic schemes). This problem is known to be NP-complete, but for the reasons discussed above, this does not provide convincing evidence that f is one-way (nevertheless it seems to be so).
3. (The Discrete Log Collection) f_{G,g}(x) = g^x, where G is a cyclic group (e.g., G = Z*_p for prime p), g is a generator of G, and x ∈ {1, ..., |G| − 1}. Inverting f_{G,g} is the DISCRETE LOG problem (see discrete logarithm problem), which seems intractable. This (like the next two examples) is actually a collection of one-way functions, parameterized by the group G and generator g.
4. (The RSA Collection) f_{n,e}(x) = x^e mod n, where n is the product of two equal-length primes, e satisfies gcd(e, φ(n)) = 1, and x ∈ Z*_n. Inverting f_{n,e} is the RSA problem.
5. (Rabin's Collection) (see Rabin cryptosystem, Rabin digital signature scheme). f_n(x) = x^2 mod n, where n is a composite and x ∈ Z*_n. Inverting f_n is known to be as hard as factoring n.
6. (Hash Functions and Block Ciphers). Most cryptographic hash functions seem to be finite analogues of one-way functions with respect to concrete security. Similarly, one can obtain candidate one-way functions from block ciphers, say by defining f(K) to be the block cipher applied to some fixed message using key K.

In a long sequence of works by many researchers, it has been shown that one-way functions are indeed the "right assumption" for complexity-based cryptography. On one hand, almost all tasks in cryptography imply the existence of one-way functions. Conversely (and more remarkably), many useful cryptographic tasks can be accomplished given any one-way function.

THEOREM 1. The existence of one-way functions is necessary and sufficient for each of the following:
• The existence of commitment schemes
• The existence of pseudorandom number generators
• The existence of pseudorandom functions
• The existence of symmetric cryptosystems
• The existence of digital signature schemes.

These results are proven via the notion of reducibility mentioned above, albeit in much more sophisticated forms. For example, to show that the existence of one-way functions implies the existence of pseudorandom generators, one describes a general construction of a pseudorandom generator G from any one-way function f. To prove the correctness of this construction, one shows how to "reduce" the task of inverting the one-way function f to that of "distinguishing" the output of the pseudorandom generator G from a truly random sequence. That is, any polynomial-time algorithm that distinguishes the pseudorandom generator can be converted into a polynomial-time algorithm that inverts the one-way function. But if f is one-way, it cannot be inverted, so we conclude that the pseudorandom generator is secure. These reductions are much more delicate than those arising in, say, the theory of NP-completeness, because they involve nontraditional computational tasks (e.g., inversion, distinguishing) that must be analyzed in the average case (i.e., with respect to non-negligible success probability).

The general constructions asserted in Theorem 1 are very involved and not efficient enough to be used in practice (though still polynomial time), so they should be interpreted only as "plausibility results." However, from special cases of one-way functions, such as one-way permutations (see one-way function) or some of the specific candidate one-way functions mentioned earlier, much more efficient constructions are known.

TRAPDOOR FUNCTIONS: For some tasks in cryptography, most notably public-key encryption (see public-key cryptography), one-way functions do not seem to suffice, and additional properties are used. One such property is the trapdoor property, which requires that the function can be easily inverted given certain "trapdoor information." We do not give the full definition here, but just list the main properties. (See also trapdoor one-way function.)
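Candidates 1–5 above can be written down directly; the parameters below are toy values picked for readability and are far too small to be secure. The last lines also illustrate the trapdoor property just discussed for the RSA collection: given the factorization of n (hence φ(n)), inversion becomes a single modular exponentiation.

```python
# Toy versions of the candidate one-way functions; all parameters are
# illustrative only and far too small to be secure.
from math import gcd

def f_mult(p: int, q: int) -> int:
    # Candidate 1 (Multiplication): inverting is FACTORING.
    return p * q

def f_subset_sum(xs, S):
    # Candidate 2 (Subset Sum): inverting is SUBSET SUM.
    return tuple(xs), sum(xs[i] for i in S)

def f_dlog(x: int, g: int = 3, p: int = 65537) -> int:
    # Candidate 3 (Discrete Log Collection): f_{G,g}(x) = g^x in Z*_p.
    return pow(g, x, p)

def f_rsa(x: int, n: int = 61 * 53, e: int = 17) -> int:
    # Candidate 4 (RSA Collection): f_{n,e}(x) = x^e mod n.
    return pow(x, e, n)

def f_rabin(x: int, n: int = 61 * 53) -> int:
    # Candidate 5 (Rabin's Collection): f_n(x) = x^2 mod n.
    return pow(x, 2, n)

# Trapdoor property of the RSA collection: whoever knows the
# factorization n = 61 * 53 can compute phi(n) and a decryption
# exponent d, and then invert f_{n,e} efficiently.
phi = (61 - 1) * (53 - 1)
assert gcd(17, phi) == 1
d = pow(17, -1, phi)             # modular inverse (Python 3.8+)
assert pow(f_rsa(1234), d, 61 * 53) == 1234
```

Without the factorization, no efficient way to find d is known, which is exactly why f_{n,e} is a candidate one-way (indeed trapdoor) function.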

9 Actually, one can convert a collection of one-way functions into a single one-way function, and conversely. See [3].

DEFINITION 2 (trapdoor functions, informal). A collection of one-to-one functions F = {f_i : D_i → R_i} is a collection of trapdoor functions if


1. (Efficient generation). There is a probabilistic polynomial-time algorithm that, on input a security parameter n, generates a pair (i, t_i), where i is the index to a (random) function in the family and t_i is the associated "trapdoor information."
2. (Easy to evaluate). Given i and x ∈ D_i, one can compute f_i(x) in polynomial time.
3. (Hard to invert). There is no probabilistic polynomial-time algorithm that on input (i, f_i(x)) outputs x with non-negligible probability. (Here, the probability is taken over i, x ∈ D_i, and the coin tosses of the inverter.)
4. (Easy to invert with trapdoor). Given t_i and f_i(x), one can compute x in polynomial time.

Thus, trapdoor functions are collections of one-way functions with an additional trapdoor property (Item 4). The RSA and Rabin collections described earlier have the trapdoor property. Specifically, they can be inverted in polynomial time given the factorization of the modulus n. One of the main applications of trapdoor functions is for the construction of public-key encryption schemes.

THEOREM 2. If trapdoor functions exist, then public-key encryption schemes exist.

There are a number of other useful strengthenings of the notion of a one-way function, discussed elsewhere in this volume: claw-free permutations, collision-resistant hash functions (see collision resistance), and universal one-way hash functions.

OTHER INTERACTIONS WITH CRYPTOGRAPHY: The interaction between computational complexity and cryptography has been very fertile. Above, we have described the role that computational complexity plays in cryptography. Conversely, several important concepts that originated in cryptography research have had a tremendous impact on computational complexity. Two notable examples are the notions of pseudorandom number generators and interactive proof systems. For more on these topics and the resulting developments in computational complexity, see [2].

FURTHER READING: Above, we have touched upon only a small portion of computational complexity, and even in the topics covered, many important issues were ignored (not to mention historical references). Thus, we refer the reader to the text [3] for more on computational complexity as it relates to cryptography, and the texts [4, 5] for other aspects of complexity theory.

Salil Vadhan

References

[1] Garey, Michael R. and David S. Johnson (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. W.H. Freeman and Company, San Francisco.
[2] Goldreich, Oded (1999). Modern Cryptography, Probabilistic Proofs, and Pseudorandomness, Number 17 in Algorithms and Combinatorics. Springer-Verlag, Berlin.
[3] Goldreich, Oded (2001). Foundations of Cryptography: Basic Tools. Cambridge University Press, Cambridge, MA.
[4] Papadimitriou, Christos H. (1994). Computational Complexity. Addison-Wesley, Reading, MA.
[5] Sipser, Michael (1997). Introduction to the Theory of Computation. PWS Publishing, Boston, MA.

CONTRACT SIGNING

A contract is a nonrepudiable agreement on a given contract text, i.e., a contract can be used to prove agreement between the signatories to any verifier. A contract signing scheme [4] is used to fairly compute a contract such that, even if one of the signatories misbehaves, either both or none of the signatories obtain a contract. Contract signing generalizes fair exchange of signatures: a contract signing protocol does not need to output signatures but can define its own format instead. Contract signing can be categorized by the properties of fair exchange (like abuse-freeness) as well as the properties of the nonrepudiation tokens it produces (like third-party time stamping of the contract). Unlike agreement protocols, contract signing needs to provide a nonrepudiable proof that an agreement has been reached.

Early contract signing protocols were either based on an in-line Trusted Third Party [8], gradual exchange of secrets [5], or gradual increase of privilege [3]. Like fair exchange protocols, two-party contract signing protocols either do not guarantee termination or else may produce a partially signed contract. As a consequence, a trusted third party is needed for most practical applications. Optimistic contract signing [7] protocols optimize by involving this third party only in case of exceptions. The first optimistic contract signing scheme has been described in [6]. An optimistic contract signing scheme for asynchronous networks has


Fig. 1. Sketch of an optimistic synchronous contract signing protocol [7]:

Optimistic phase (without TTP):
  A → B: m1 := sign_A("m1", t_start, TTP, A, B, C_A)
  B: checks C_A = C_B?
  B → A: m2 := sign_B("m2", m1)
  A: if m2: output m2; elseif no m5: abort
  A → B: m3 := sign_A("m3", m2)
  B: if m3: output m3; else: continue

Error recovery (with TTP):
  B → TTP: m4 := sign_B("m4", m2)
  TTP: check
  TTP → A, B: m5 := sign_T("m5", m4)
  A, B: output m5
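As a toy illustration of the optimistic phase in Figure 1, the exchange can be traced in a few lines of Python. "Signatures" are modeled here as labeled tuples; the names (m1, t_start, C_A, ...) follow the figure, and the actual message formats are specified in [7].

```python
# Toy trace of the optimistic phase of Fig. 1 (no TTP involved).
# sign() is a stand-in that merely tags data with the signer's name;
# a real protocol would use actual digital signatures.
def sign(signer: str, *payload):
    return (signer, payload)

TTP, A, B = "TTP", "A", "B"
t_start = 0
C_A = C_B = "the agreed contract text"   # both parties' view of the text

# Step 1 (proposal), A -> B
m1 = sign(A, "m1", t_start, TTP, A, B, C_A)
# Step 2 (agreement), B -> A, after checking the texts match (C_A = C_B?)
assert C_A == C_B
m2 = sign(B, "m2", m1)
# Step 3 (confirmation), A -> B; both now hold a signed contract
m3 = sign(A, "m3", m2)
contract_A, contract_B = m2, m3
assert contract_A[0] == B and contract_B[0] == A
```

If m3 never arrives, B would instead run the error-recovery phase: send m4 := sign_B("m4", m2) to the TTP and output the TTP's reply m5.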

been described in [1]. An example for a multiparty abuse-free optimistic contract signing protocol has been published in [2]. A simple optimistic contract signing protocol for synchronous networks is sketched in Figure 1: party A sends a proposal, party B agrees, and party A confirms. If party A does not confirm, B obtains its contract from the TTP. (Note that the generic fair exchange protocol (see fair exchange) can be adapted for contract signing, too, by using a signature under C as itemX, using (C, X) as description descX, and using the signature verification function as the verification function.)

Matthias Schunter

References

[1] Asokan, N., Victor Shoup, and Michael Waidner (1998). "Optimistic fair exchange of digital signatures." Advances in Cryptology—EUROCRYPT'98, Lecture Notes in Computer Science, vol. 1403, ed. K. Nyberg. Springer-Verlag, Berlin, 591–606.
[2] Baum-Waidner, Birgit and Michael Waidner (2000). "Round-optimal and abuse-free optimistic multi-party contract signing." 27th International Colloquium on Automata, Languages and Programming (ICALP), Lecture Notes in Computer Science, vol. 1853, eds. U. Montanari, J.D.P. Rolim, and E. Welzl. Springer-Verlag, Berlin, 524 ff.
[3] Ben-Or, M., O. Goldreich, S. Micali, and R.L. Rivest (1990). "A fair protocol for signing contracts." IEEE Transactions on Information Theory, 36 (1), 40–46.
[4] Blum, Manuel (1981). "Three applications of the oblivious transfer." Version 2. Department of Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, CA.
[5] Blum, Manuel (1983). "Coin flipping by telephone, a protocol for solving impossible problems." ACM SIGACT News, 15 (1), 23–27.
[6] Even, Shimon (1983). "A protocol for signing contracts." ACM SIGACT News, 15 (1), 34–39.
[7] Pfitzmann, Birgit, Matthias Schunter, and Michael Waidner (1998). "Optimal efficiency of optimistic contract signing." 17th Symposium on Principles of Distributed Computing (PODC). ACM Press, New York, 113–122.
[8] Rabin, Michael O. (1983). "Transaction protection by beacons." Journal of Computer and System Sciences, 27, 256–267.

CONTROL VECTORS

A method introduced—and patented in a number of application scenarios—by IBM in the 1980s for the control of cryptographic key usage. The basic idea is that each cryptographic key has an associated control vector, which defines the permitted uses of the key within the system, and this is enforced by the use of tamper resistant hardware. At key generation, the control vector is cryptographically coupled to the key, e.g., by XOR-ing the key with the control vector before encryption and distribution. Each encrypted key and control vector is stored and distributed within the cryptographic system as a single token.
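The XOR coupling can be sketched as follows. The "block cipher" below is a XOR stand-in (not secure) and the 16-byte control-vector encodings are hypothetical; the point is only that presenting the wrong control vector to the hardware yields a useless value rather than the real key.

```python
# A minimal sketch of control-vector key coupling, under assumed
# (hypothetical) control-vector encodings and a toy XOR "cipher".
from secrets import token_bytes

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(k: bytes, m: bytes) -> bytes:
    return xor_bytes(k, m)  # stand-in for encryption under key k (NOT secure)

KEK = token_bytes(16)                               # key-encrypting key in hardware
key = token_bytes(16)                               # operational key
cv_mac_generate = b"MAC-GENERATE".ljust(16, b"\0")  # hypothetical control vectors
cv_mac_verify = b"MAC-VERIFY".ljust(16, b"\0")

# Coupling at key generation: token = E_{KEK xor CV}(key)
token = encrypt(xor_bytes(KEK, cv_mac_generate), key)

def recover(token: bytes, cv: bytes) -> bytes:
    # What the hardware computes when a control vector is presented.
    return encrypt(xor_bytes(KEK, cv), token)

assert recover(token, cv_mac_generate) == key       # correct CV: key recovered
assert recover(token, cv_mac_verify) != key         # wrong CV: garbage
```

Because the control vector enters the key-encrypting key, a device whose hardware only ever presents the MAC-verification vector can never obtain the MAC-generation form of the same key.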


As an example, nonrepudiation may be achieved between two communicating hardware boxes by the use of conventional MAC algorithms using symmetric methods. The communicating boxes would share the same key, but whereas one box would only be allowed to generate a MAC with that key, the other box would only be able to verify a MAC with the same key. The transform of the same key from, e.g., MAC-generation to MAC-verification is known as key translation and needs to be carried out in tamper resistant hardware as well. Similarly the same symmetric key may be used for encryption only, enforced by one control vector in one device, and for decryption only, enforced by a different control vector in a different device.

Peter Landrock

COPY PROTECTION

Copy protection attempts to find ways which limit the access to copyrighted material and/or inhibit the copy process itself. Examples of copy protection include encrypted digital TV broadcast, access controls to copyrighted software through the use of license servers or through the use of special hardware add-ons (dongles), and technical copy protection mechanisms on the media.

Copy protection mechanisms can work proactively by aiming to prevent users from accessing copy protected content. For content that is distributed on physical media such as floppy disks, digital audio tape (DAT), CD-ROM or digital versatile disk (DVD), copy protection can be achieved by master copy control and copy generation control:

Master copy control: If consumers are not allowed to even make backup copies of their master media, then one can mark the master media themselves in addition to or instead of marking the copyrighted content. This was an inexpensive and common way to protect software distributed for example on floppy disks. One of the sectors containing critical parts of the software was marked as bad such that data could be read from that sector, but could not be copied to another floppy disk.

Copy generation control: If consumers are allowed to make copies of their master copy, but not of copies of the master copy, then one needs to establish control over the vast majority of content recorders, which must be able to effectively prevent the making of unauthorized copies. This approach is somewhat unrealistic because even a small number of remaining unregistered recorders can be used by organized hackers to produce large quantities of pirated copies.

Instead of protecting the distribution media of digital content, one can protect copyrighted digital content itself by marking copyrighted content and enforcing playback control by allowing only players that interpret these copyright marks according to certain access policies (access control). This approach works for digital content that is being distributed on physical media as well as being broadcast or distributed online. It is an example of a digital rights management system (DRMS).

Mark copyrighted content: If consumers are allowed to make a few backup copies for their personal use, then the copyrighted digital content itself can be marked as copy protected in order to be distinguishable from unprotected digital content. The more robust the marking, i.e., the harder it is to remove it without significantly degrading the quality of the digital content, the stronger the copy protection mechanism that can be achieved.

Playback control: Players for copyrighted content need to have a tamper resistant access circuitry that is aware of the copy protection marking, or players need to use online license servers to check the actual marks. Before converting digital content into audible or visible signals, the player compares the actual marking against the licenses or tickets, which are either built into their access circuitry or retrieved from a license server online, and stops if the former does not match the latter. The exact behavior of players is determined by access policies. There can be different kinds of tickets or licenses. Tickets of one kind may represent the right of ownership of a particular piece of content, i.e., the piece of content can be played or used as many times as the owner wishes. Tickets of another kind may represent the right of one-time play or use (pay-per-view). Other kinds of tickets can be defined. The more tamper resistant the access circuitry is or the more protected the communication with the license server and the license server itself, the stronger the copy protection mechanism that can be achieved.

Marking of copyrighted content can use anything from simple one-bit marks to XrML tags to sophisticated watermarking techniques. An example of the former is to define a new audio file format, in which the mark is a part of the header block but is not removable without destroying the original signal, because part of the definition of the file format requires the mark to be therein. In this case


the signal would not really be literally "destroyed" but any application using this file format would not touch it without a valid mark. Some electronic copyright management systems (ECMS) propose mechanisms like this. Such schemes are weak as anyone with a computer or a digital editing workstation would be able to convert the information to another format and remove the mark at the same time. Finally this new audio format would be incompatible with the existing one. Thus the mark should really be embedded in the audio signal. This is very similar to S.C.M.S. (Serial Code Management System). When Philips and Sony introduced the "S/PDIF" (Sony/Philips Digital Interchange Format), they included the S.C.M.S., which provides a way copies of digital music are regulated in the consumer market. This information is added to the stream of data that contains the music when one makes a digital copy (a "clone"). This is in fact just a bit saying: digital copy prohibited or permitted. Some professional equipment is exempt from having S.C.M.S. With watermarking, however, the copy control information is part of the audiovisual signal and aims at surviving file format conversion and other transformations.

An alternative to marking is containing copyrighted content. With this approach, the recording industry encrypts copyrighted digital content under certain encryption keys such that only players with appropriate decryption keys can access and playback the content.

Encrypt copyrighted content: The copyrighted digital content itself is encrypted in order to be accessible by authorized consumers only. The more robust the encryption, the stronger the copy protection mechanism that can be achieved.

Playback control: Players for copyrighted content need to have a tamper resistant access circuitry that is aware of certain decryption keys that are necessary to unlock the contents the consumer wants to be played. Before converting digital content into audible or visible signals, the player needs to look up the respective decryption keys, which are either built into the access circuitry of the player or are retrieved from a license server online. The exact behavior of players is determined by access policies. There can be different kinds of decrypting keys. Decrypting keys of one kind may represent the right of ownership of a particular piece of content, i.e., the piece of content can be played or used as many times as the owner wishes. Tickets of another kind may represent the right of one-time play or use (pay-per-view). Other kinds of decryption keys can be defined. The more tamper resistant the access circuitry or the more protected the communication with the license server and the license server itself, the stronger the copy protection mechanism that can be achieved.

In order to effectively prevent consumers from copying digital content protected in this way, the players must not allow consumers to easily access the decrypted digital content. Otherwise, the containing approach would not prevent consumers from reselling, trading, or exchanging digital content at their discretion. As a first line of protection, players should not provide a high quality output interface for the digital content. A stronger level of protection is achieved if the decryption mechanism is integrated into the display, such that pirates would only obtain signals of degraded quality. The content scrambling system (CSS) used for digital versatile disks (DVDs) [2] is an example of the containing approach: in CSS, each of n manufacturers (n being several hundreds by 2002) has one or more manufacturer keys, and each player has one or more keys of its manufacturer built in. Each DVD has its own disk key dk, which is stored n times in encrypted form, once encrypted under each manufacturer key. The DVD content is encrypted under respective sector keys, which are all derived from the disk key dk.

Copy protection mechanisms can also work retroactively by deterring authorized users from leaking copies to unauthorized users. This approach requires solving the following two problems.

Mark copy protected content individually: Copy protected digital content carries information about its origin, i.e. the original source, author, distributor, etc. in order to allow to trace its distribution and spreading. It is like embedding a unique serial number in each authorized copy of protected content. The more robust the embedded marking, i.e., the harder it is to remove it without significantly degrading the quality of the digital content, the stronger the copy protection mechanism that can be achieved.

Deter from unauthorized access: Players need to have no tamper resistant access circuitry nor online access to license servers. Instead, each customer who receives an authorized copy is registered with the serial number of the copy provided. The marking serves as forensic evidence in investigations to figure out where and when unauthorized copies of original content have surfaced. This retroactive approach can be combined with the above mentioned


proactive approach by using the embedded serial numbers as individual watermarks, which are recognized by players for the respective content.

This approach can use anything from hidden serial numbers to sophisticated fingerprinting techniques. Fingerprints are characteristics of an object that tend to distinguish it from other similar objects. They enable the owner to trace authorized users distributing them illegally. In the case of encrypted satellite television broadcasting, for instance, users could be issued a set of keys to decrypt the video streams and the television station could insert fingerprint bits into each packet of the traffic to detect unauthorized uses. If a group of users give their subset of keys to unauthorized people (so that they can also decrypt the traffic), at least one of the key donors can be traced when the unauthorized decoder is captured. In this respect, fingerprinting is usually discussed in the context of the traitor tracing problem.

Copy protection is inherently difficult to achieve in open systems for at least two reasons:

The requirements on watermarking are contradictory. In order to build an effective large-scale copy protection system, the vast majority of available players would have to be equipped with some kind of tamper resistant circuitry or have online access to some license servers. Such circuitry would have to be cheap and be integrated right into the players, and such online service would have to be cheap and conveniently fast. Otherwise, the watermarking would have no chance to gain any significant market share. However, tamper resistant hardware is expensive, so the cost per player limits the strength of the tamper resistance of its access circuitry. Online services incur communication costs on consumers and do not give them the independence and speed of offline access circuitry. The way the CSS used for DVDs was "hacked" is just one more incident demonstrating the contradicting requirements: since the encryption mechanism was chosen to be a weak feedback shift register cipher, it was only a matter of time until a program called DeCSS surfaced, which can decipher any DVD. The access circuitry of players into which the deciphering algorithm is built was not well protected against reverse engineering the algorithm, and hence, the secret algorithm leaked, and with it the DVD keys one by one. The watermarking scheme of the Secure Digital Music Initiative (SDMI) [7] (a successor of MP3) was broken by Fabien Petitcolas [6]. Later, a public challenge of this watermarking scheme was broken by Felten et al. [3]. The SDMI consortium felt this piece of research might jeopardize the consortium's reputation and revenue so much that the SDMI consortium threatened to sue the authors if they would present their work at a public conference. Attacks on various other copy protection mechanisms have been described by Anderson in Section 20.3.30 of [1].

The requirements on fingerprinting are contradictory as well. On one hand the broadcaster or copyright holder may want to easily recognize the fingerprint, preferably visually. This allows easy tracing of a decoder that is used for illegal purposes. This approach is very similar to the commonly used watermarking by means of the logo of a TV station that is continuously visible in one of the corners of the screen. On the other hand, the fingerprint should be hidden, in order not to disturb paying viewers with program-unrelated messages on their screen, or to avoid any pirate detecting and erasing the fingerprint electronically. In the latter case, one may require specific equipment to detect and decode a fingerprint.

Despite the inherent technical difficulties to build effective large-scale copy protection systems, the content industries (TV producers, movie makers, audio makers, software companies, publishers) have and will continue to have a strong interest in protecting their revenues against pirates. They are trying to overcome the contradictory requirements mentioned above by two complementing approaches: they try to control the entire market of media players and recorders by contracting with the large suppliers. While they opt for relatively weak but inexpensive access circuitry for these players, they compensate for the weakness by promoting suitable laws that deter consumers from breaking this access circuitry or resorting to unauthorized players, or using unauthorized recorders. An example of trying to make secure access circuitry pervasive in the PC market is the trusted computing platform alliance (TCPA) [8]. An example of such a legislative initiative is the digital millennium copyright act (DMCA) [4] in the United States. It prohibits the modification of any electronic copyright management information (CMI) bundled with digital content, such as details of ownership and licensing, and outlaws the manufacture, importation, sale, or offering for sale of anything primarily designed to circumvent copyright protection technology.

It is clear that the issue of copy protection is a special case of the more general issue of digital rights management (DRM). Both issues bear the risk of a few companies defining the access policies, which are enforced by the players and thus determine what and how a vast majority of people would be able to read, watch, listen to, or work

with. A moderate overview of the hot debate about content monopolies, pricing, free speech, democratic, privacy, and legislative issues, etc. is found at [5].

Gerrit Bleumer

References

[1] Anderson, Ross (2001). Security Engineering. Wiley & Sons, New York.
[2] Bloom, Jeffrey A., Ingemar J. Cox, Ton Kalker, Jean-Paul M.G. Linnartz, Matthew L. Miller, and C. Brendan S. Traw (1999). "Copy protection for DVD video." Proceedings of the IEEE, 87 (7), 1267-1276.
[3] Craver, Scott A., Min Wu, Bede Liu, Adam Stubblefield, Ben Swartzlander, Dan S. Wallach, Drew Dean, and Edward W. Felten (2001). "Reading between the lines: Lessons from the SDMI Challenge." 10th Usenix Security Symposium, The USENIX Association 2001, 353-363.
[4] Digital Millennium Copyright Act: http://www.loc.gov/copyright/legislation/dmca.pdf
[5] Digital Rights Management: http://www.epic.org/privacy/drm/
[6] Petitcolas, Fabien A., Ross Anderson, Markus G. Kuhn (1998). "Attacks on copyright marking systems." Information Hiding, Lecture Notes in Computer Science, vol. 1525, ed. D. Aucsmith. Springer-Verlag, Berlin, 218-238.
[7] Secure Digital Music Initiative: http://www.sdmi.org
[8] Trusted Computing Platform Initiative: http://www.trustedcomputing.org


CORRECTING-BLOCK ATTACK

This attack can find collisions or (second) preimages for certain classes of hash functions. It consists of substituting all blocks of the input except for one or more blocks. This attack often applies to the last block and is then called a correcting-last-block attack, but it can also apply to the first block or to some blocks in the middle. For a preimage attack, one chooses an arbitrary message X and finds one or more correcting blocks Y such that h(X ∥ Y) takes a certain value (here ∥ denotes concatenation). For a second preimage attack on the target message X ∥ Y, one chooses X′ and searches for one or more correcting blocks Y′ such that h(X′ ∥ Y′) = h(X ∥ Y) (note that one may select X′ = X). For a collision attack, one chooses two arbitrary messages X and X′ with X′ ≠ X; subsequently one searches for one or more correcting blocks denoted by Y and Y′, such that h(X′ ∥ Y′) = h(X ∥ Y).

Hash functions based on algebraic structures are particularly vulnerable to this attack, since it is often possible to invert the compression function using algebraic manipulations [9]. A typical countermeasure to a correcting-block attack consists of adding redundancy to the message blocks in such a way that it becomes computationally infeasible to find a correcting block with the necessary redundancy. The price paid for this solution is a degradation of the performance.

A first example is a multiplicative hash proposed by Bosset in 1977 [1], based on GL_2(GF(p)), the group of 2 × 2 invertible matrices over the finite field GF(p), with p = 10007. Camion showed how to find a second preimage using a correcting-block attack that requires 48 correcting blocks of 6 bits each [2].

In 1984 Davies and Price [4] proposed a hash function with the following compression function f:

  f = (H_{i-1} ⊕ X_i)^2 mod N,

where X_i is the message block, H_{i-1} is the chaining variable, and N is an RSA modulus (see RSA digital signature scheme). In order to preclude a correcting-block attack, the text input is encoded such that the most (or least) significant 64 bits of each block are equal to 0. However, Girault [5] has demonstrated that a second preimage can be found using the extended Euclidean algorithm (see Euclidean algorithm); improved results can be found in [6].

The 1988 scheme of CCITT X.509 Annex D [8] tried to remedy this approach by distributing the redundancy (one nibble in every byte). However, Coppersmith [3] applied a correcting-block attack to find two distinct messages X and X′ such that

  h(X′) = 256 · h(X).

This is a highly undesirable property, which among other things implies that this hash function cannot be used in combination with a multiplicative digital signature scheme such as RSA. In 1998, ISO adopted two improved schemes based on modular arithmetic (ISO/IEC 10118-4 Modular Arithmetic Secure Hash, MASH-1 and MASH-2 [7]), which so far have resisted correcting-block attacks.

B. Preneel

References

[1] Bosset, J. (1977). "Contre les risques d'altération, un système de certification des informations." 01 Informatique, 107.
[2] Camion, P. (1986). "Can a fast signature scheme without secret be secure?" Proc. 2nd International

Conference on Applied Algebra, Algebraic Algorithms, and Error-Correcting Codes, Lecture Notes in Computer Science, vol. 228, ed. A. Poli. Springer-Verlag, Berlin, 215-241.
[3] Coppersmith, D. (1989). "Analysis of ISO/CCITT document X.509 annex D." IBM T.J. Watson Center, Yorktown Heights, NY. Internal Memo.
[4] Davies, D. and W.L. Price (1984). "Digital signatures, an update." Proc. 5th International Conference on Computer Communication, October 1984, 845-849.
[5] Girault, M. (1988). "Hash-functions using modulo-n operations." Advances in Cryptology—EUROCRYPT'87, Lecture Notes in Computer Science, vol. 304, eds. D. Chaum and W.L. Price. Springer-Verlag, Berlin, 217-226.
[6] Girault, M. and J.-F. Misarsky (1997). "Selective forgery of RSA signatures using redundancy." Advances in Cryptology—EUROCRYPT'97, Lecture Notes in Computer Science, vol. 1233, ed. W. Fumy. Springer-Verlag, Berlin, 495-507.
[7] ISO/IEC 10118 (1998). "Information technology—security techniques—hash-functions." Part 4: Hash-Functions Using Modular Arithmetic, 10118-4:1998.
[8] ITU-T X.500 (1988). "The directory—overview of concepts." ITU-T Recommendation X.500 (same as IS 9594-1, 1989).
[9] Preneel, B. (1993). "Analysis and design of cryptographic hash functions." Doctoral Dissertation, Katholieke Universiteit Leuven.


CORRELATION ATTACK FOR STREAM CIPHERS

The correlation attack for stream ciphers was proposed by Siegenthaler in 1985. It applies to any running-key generator composed of several linear feedback shift registers (LFSRs). The correlation attack is a divide-and-conquer technique: it aims at recovering the initial state of each constituent LFSR separately from the knowledge of some keystream bits (in a known plaintext attack). A similar ciphertext-only attack can also be mounted when there exists redundancy in the plaintext (see [3]).

The original correlation attack presented in [3] applies to some combination generators composed of n LFSRs of lengths L_1, ..., L_n. It makes it possible to recover the complete initialization of the generator with only Σ_{i=1}^{n} (2^(L_i) − 1) trials instead of the Π_{i=1}^{n} (2^(L_i) − 1) tests required by an exhaustive search. Some efficient variants of the original correlation attack can also be applied to other keystream generators based on LFSRs, like filter generators (see fast correlation attack for details).

ORIGINAL CORRELATION ATTACK ON COMBINATION GENERATORS: The correlation attack exploits the existence of a statistical dependence between the keystream and the output of a single constituent LFSR. In a binary combination generator, such a dependence exists if and only if the output of the combining function f is correlated to one of its inputs, i.e., if

  p_i = Pr[f(x_1, ..., x_n) ≠ x_i] ≠ 1/2

for some i, 1 ≤ i ≤ n. It equivalently means that the keystream sequence s = (s_t)_{t≥0} is correlated to the sequence u = (u_t)_{t≥0} generated by the ith constituent LFSR. Namely, the correlation between both sequences calculated on N bits,

  Σ_{t=0}^{N−1} (−1)^(s_t + u_t mod 2)

(where the sum is defined over the real numbers), is a random variable which is binomially distributed with mean value N(1 − 2p_i) and with variance 4Np_i(1 − p_i) (when N is large enough). It can be compared to the correlation between the keystream s and a sequence r = (r_t)_{t≥0} independent of s (i.e., such that Pr[s_t = r_t] = 1/2). For such a sequence r, the correlation between s and r is binomially distributed with mean value 0 and with variance N. Thus, an exhaustive search for the initialization of the ith LFSR can be performed. The value of the correlation makes it possible to distinguish the correct initial state from a wrong one, since the sequence generated by a wrong initial state is assumed to be statistically independent of the keystream. Table 1 gives a complete description of the attack.

Table 1. Correlation attack

  Input. s_0 s_1 ... s_{N−1}, N keystream bits,
    p_i = Pr[f(x_1, ..., x_n) ≠ x_i] ≠ 1/2.
  Output. u_0 ... u_{L_i−1}, the initial state of the ith constituent LFSR.
  For each possible initial state u_0 ... u_{L_i−1}:
    Generate the first N bits of the sequence u produced by the ith LFSR from the chosen initial state.
    Compute the correlation between s_0 s_1 ... s_{N−1} and u_0 u_1 ... u_{N−1}:
      α ← Σ_{t=0}^{N−1} (−1)^(s_t + u_t mod 2)
    If α is close to N(1 − 2p_i), return u_0 u_1 ... u_{L_i−1}.
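As a concrete illustration of the procedure in Table 1, the following Python sketch runs the attack against a toy Geffe generator (three LFSRs combined by the multiplexer f(x1, x2, x3) = x1·x2 ⊕ x2·x3 ⊕ x3, for which p_1 = Pr[f ≠ x1] = 1/4). The LFSR lengths, feedback taps, initial states, and keystream length are all illustrative choices, not parameters taken from this entry.

```python
from itertools import product

def lfsr_bits(taps, init, n):
    # Fibonacci LFSR: output s[0]; the feedback bit is the XOR of s[tau]
    # for tau in taps, i.e. the recurrence s_{t+L} = XOR_{tau in taps} s_{t+tau}.
    s = list(init)
    out = []
    for _ in range(n):
        out.append(s[0])
        fb = 0
        for t in taps:
            fb ^= s[t]
        s = s[1:] + [fb]
    return out

def geffe(x1, x2, x3):
    # Combining function f = x1 x2 ^ x2 x3 ^ x3 (a 2-to-1 multiplexer).
    # Pr[f != x1] = 1/4, so p_1 differs from 1/2 and LFSR 1 can be
    # attacked on its own.
    return (x1 & x2) ^ (x2 & x3) ^ x3

# Illustrative parameters: primitive feedback polynomials
# x^5+x^2+1, x^7+x+1, x^6+x+1, and secret initial states.
TAPS1, L1 = (0, 2), 5
init1 = [1, 0, 1, 1, 0]
init2 = [1, 1, 0, 1, 0, 0, 1]
init3 = [0, 1, 1, 0, 1, 0]

N = 300  # number of observed keystream bits
keystream = [geffe(a, b, c) for a, b, c in zip(
    lfsr_bits(TAPS1, init1, N),
    lfsr_bits((0, 1), init2, N),
    lfsr_bits((0, 1), init3, N))]

# The attack of Table 1 on LFSR 1: exhaustive search over its 2^L1 - 1
# nonzero initial states, keeping the state whose sequence u maximizes
# alpha = sum_t (-1)^(s_t + u_t). For the correct state, alpha should be
# close to N(1 - 2 p_1) = N/2; for wrong states it stays near 0.
best_state, best_alpha = None, -N - 1
for cand in product([0, 1], repeat=L1):
    if not any(cand):
        continue  # the all-zero state only generates the all-zero sequence
    u = lfsr_bits(TAPS1, list(cand), N)
    alpha = sum(1 if s == b else -1 for s, b in zip(keystream, u))
    if alpha > best_alpha:
        best_state, best_alpha = list(cand), alpha

print(best_state == init1)  # the recovered state should equal init1
```

The other two LFSRs can then be recovered the same way (LFSR 3 by correlation, LFSR 2 by a final exhaustive search), which is exactly the divide-and-conquer gain described above.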

In practice, an initial state is accepted if the magnitude of the correlation exceeds a certain decision threshold which is deduced from the expected false alarm probability P_f and the non-detection probability P_n (see [3]). The required keystream length N depends on the probability p_i and on the length L_i of the involved LFSR: for P_n = 1.3 × 10^(−3) and P_f = 2^(−L_i), the attack requires

  N ≈ [ (√(ln(2^(L_i −1))) + 3 √(2 p_i (1 − p_i))) / (2 (p_i − 0.5)) ]^2

running-key bits. Clearly, the attack performs 2^(L_i −1) trials on average, where L_i is the length of the target LFSR. The correlation attack only applies if the probability p_i differs from 1/2.

CORRELATION ATTACK ON OTHER KEYSTREAM GENERATORS: More generally, the correlation attack applies to any keystream generator as soon as the keystream is correlated to the output sequence u of a finite state machine whose initial state depends on some key bits. These key bits can be determined by recovering the initialization of u as follows: an exhaustive search for the initialization of u is performed, and the correct one is detected by computing the correlation between the corresponding sequence u and the keystream.

CORRELATION ATTACK ON COMBINATION GENERATORS INVOLVING SEVERAL LFSRS: For combination generators, the correlation attack can be prevented by using a combining function f whose output is not correlated to any of its inputs. Such functions are called first-order correlation-immune (or 1-resilient in the case of balanced functions). In this case, the running key is statistically independent of the output of each constituent LFSR; any correlation attack should then consider several LFSRs simultaneously. More generally, a correlation attack on a set of k constituent LFSRs, namely LFSR i_1, ..., LFSR i_k, exploits the existence of a correlation between the running-key s and the output u of a smaller combination generator, which consists of the k involved LFSRs combined by a Boolean function g of k variables (see Figure 1). Since Pr[s_t ≠ u_t] = Pr[f(x_1, ..., x_n) ≠ g(x_{i_1}, ..., x_{i_k})] = p_g, this attack only succeeds when p_g ≠ 1/2.

Fig. 1. Correlation attack involving several constituent LFSRs of a combination generator (LFSRs 1, ..., n feed the combining function f, which outputs the keystream s; the correlation is computed between s and the output u of LFSRs i_1, ..., i_k combined by g).

The smallest number of LFSRs that can be attacked simultaneously is equal to m + 1, where m is the highest correlation-immunity order of the combining function. Moreover, the Boolean function g of (m + 1) variables which provides the best approximation of f is the affine function Σ_{j=1}^{m+1} x_{i_j} + ε [1, 4]. Thus, the most efficient correlation attacks that can be mounted rely on the correlation between the keystream s and the sequence u obtained by adding the outputs of LFSRs i_1, i_2, ..., i_{m+1}. This correlation corresponds to

  Pr[s_t = u_t] = 1/2 − (1/2^(n+1)) |f̂(t)|,

where n is the number of variables of the combining function, t is the n-bit vector whose ith component equals 1 if and only if i ∈ {i_1, i_2, ..., i_{m+1}}, and f̂ denotes the Walsh transform of f (see Boolean functions). In order to increase the complexity of the correlation attack, the combining function used in a combination generator should have a high correlation-immunity order and a high nonlinearity (more precisely, its Walsh coefficients f̂(t) should have a low magnitude for all vectors t with a small Hamming weight). For an m-resilient combining function, the complexity of the correlation attack is 2^(L_{i_1} + L_{i_2} + ... + L_{i_{m+1}}). It can be significantly reduced by using some improved algorithms, called fast correlation attacks.

Anne Canteaut

References

[1] Canteaut, A. and M. Trabbia (2000). "Improved fast correlation attacks using parity-check equations of weight 4 and 5." Advances in Cryptology—EUROCRYPT 2000, Lecture Notes in Computer Science, vol. 1807, ed. B. Preneel. Springer-Verlag, Berlin, 573-588.
[2] Siegenthaler, T. (1984). "Correlation-immunity of nonlinear combining functions for cryptographic applications." IEEE Trans. Inform. Theory, IT-30 (5), 776-780.
[3] Siegenthaler, T. (1985). "Decrypting a class of stream ciphers using ciphertext only." IEEE Trans. Computers, C-34 (1), 81-84.

[4] Zhang, M. (2000). "Maximum correlation analysis of nonlinear combining functions in stream ciphers." Journal of Cryptology, 13 (3), 301-313.


CORRELATION IMMUNE AND RESILIENT BOOLEAN FUNCTIONS

Cryptographic Boolean functions must be balanced (i.e., their output must be uniformly distributed) in order to avoid statistical dependence between their input and their output (such statistical dependence can be used in attacks). Moreover, any combining function f(x) (see combination generator), used for generating the pseudorandom sequence in a stream cipher, must stay balanced if we keep constant some coordinates x_i of x (at most m of them, where m is as large as possible). We then say that f is m-resilient. More generally, a (not necessarily balanced) Boolean function whose output distribution probability is unaltered when any m of its input bits are kept constant is called mth order correlation-immune. The notion of correlation-immune function is related to the notion of orthogonal array (see [1]). Only resilient functions are of practical interest as cryptographic functions.

The notion of correlation immunity was introduced by Siegenthaler in [5]; it is related to an attack on pseudorandom generators using combining functions: if such a combining function f is not mth order correlation-immune, then there exists a correlation between the output of the function and (at most) m coordinates of its input; if m is small enough, a divide-and-conquer attack due to Siegenthaler (see correlation attack for stream ciphers), and later improved by several authors (see fast correlation attack), uses this weakness for attacking the system. The maximum value of m such that f is m-resilient is called the resiliency order of f.

Correlation immunity and resiliency can be characterized through the Walsh transform f̂(u) = Σ_{x ∈ F_2^n} (−1)^(f(x) ⊕ x·u), see [3]: f is mth order correlation-immune if and only if f̂(u) = 0 for all u ∈ F_2^n such that 1 ≤ w_H(u) ≤ m, where w_H denotes the Hamming weight (that is, the number of nonzero coordinates); and it is m-resilient if and only if f̂(u) = 0 for all u ∈ F_2^n such that w_H(u) ≤ m.

It is not sufficient for a combining function f, used in a stream cipher, to be m-resilient with large m. As any cryptographic function, it must also have high algebraic degree d°f and high nonlinearity NL(f) (see Boolean functions). There are necessary trade-offs between the number of variables, the algebraic degree, the nonlinearity, and the resiliency order of a function:
– Siegenthaler's bound [5] states that any m-resilient function (0 ≤ m < n − 1) has algebraic degree smaller than or equal to n − m − 1 and that any (n − 1)-resilient function is affine (Siegenthaler also proved that any n-variable mth order correlation-immune function has degree at most n − m).
– The values of the Walsh transform of an n-variable, m-resilient function are divisible by 2^(m+2) if m ≤ n − 2, cf. [4] (and they are divisible by 2^(m+2+⌊(n−m−2)/d⌋) if f has degree d, see [2]). These divisibility bounds have provided nontrivial upper bounds on the nonlinearities of resilient functions [2, 4], also partially obtained in [6, 7]. The nonlinearity of any m-resilient function is upper bounded by 2^(n−1) − 2^(m+1). This bound is tight, at least when m ≥ 0.6n, and any m-resilient function achieving it also achieves Siegenthaler's bound (see [6]).

High order resilient functions with high degrees and high nonlinearities are needed for applications in stream ciphers.

EXAMPLE [1]. Let r be a positive integer smaller than n; denote n − r by s; let g be any Boolean function on F_2^s and let φ be a mapping from F_2^s to F_2^r. Define the function

  f_{φ,g}(x, y) = x · φ(y) ⊕ g(y) = Σ_{i=1}^{r} x_i φ_i(y) ⊕ g(y),  x ∈ F_2^r, y ∈ F_2^s,  (2)

where φ_i(y) is the ith coordinate of φ(y). Then, if every element in φ(F_2^s) has Hamming weight strictly greater than k, then f_{φ,g} is m-resilient with m ≥ k. In particular, if φ(F_2^s) does not contain the null vector, then f_{φ,g} is balanced.

Examples of m-resilient functions achieving the best possible nonlinearity 2^(n−1) − 2^(m+1) (and thus the best degree) have been obtained for n ≤ 10 and for every m ≥ 0.6n (n being then not limited, see [6]). They have been constructed by using the following methods, which permit the construction of resilient functions from known ones:
– [1, 5] Let g be a Boolean function on F_2^n. Consider the Boolean function on F_2^(n+1): f(x_1, ..., x_n, x_{n+1}) = g(x_1, ..., x_n) ⊕ x_{n+1}. Then NL(f) = 2NL(g) and d°f = d°g if d°g ≥ 1. If g is m-resilient, then f is (m + 1)-resilient.

– [5] Let g and h be two Boolean functions on F_2^n. Consider the function f(x_1, ..., x_n, x_{n+1}) = x_{n+1} g(x_1, ..., x_n) ⊕ (x_{n+1} ⊕ 1) h(x_1, ..., x_n) on F_2^(n+1). Then NL(f) ≥ NL(g) + NL(h) (moreover, if g and h are such that, for every word a, at least one of the numbers ĝ(a), ĥ(a) is null, then NL(f) equals 2^(n−1) + min(NL(g), NL(h))). If the algebraic normal forms of g and h do not have the same monomials of highest degree, then d°f = 1 + max(d°g, d°h). If g and h are m-resilient, then f is m-resilient (moreover, if for every a ∈ F_2^n of weight m + 1 we have ĝ(a) + ĥ(a) = 0, then f is (m + 1)-resilient; this happens with h(x) = g(x_1 ⊕ 1, ..., x_n ⊕ 1) ⊕ ε, where ε = m mod 2, see [1]).
– [6] Let g be any Boolean function on F_2^n. Define the Boolean function f on F_2^(n+1) by f(x_1, ..., x_n, x_{n+1}) = x_{n+1} ⊕ g(x_1, ..., x_{n−1}, x_n ⊕ x_{n+1}). Then NL(f) = 2NL(g) and d°f = d°g if d°g ≥ 1. If g is m-resilient, then f is m-resilient (and it is (m + 1)-resilient if ĝ(a_1, ..., a_{n−1}, 1) is null for every vector (a_1, ..., a_{n−1}) of weight at most m).

Claude Carlet

References

[1] Camion, P., C. Carlet, P. Charpin, and N. Sendrier (1992). "On correlation-immune functions." Advances in Cryptology—CRYPTO'91, Lecture Notes in Computer Science, vol. 576, ed. J. Feigenbaum. Springer, 86-100.
[2] Carlet, C. (2001). "On the coset weight divisibility and nonlinearity of resilient and correlation-immune functions." Proceedings of SETA'01 (Sequences and their Applications 2001), Discrete Mathematics and Theoretical Computer Science. Springer, Berlin, 131-144.
[3] Xiao, Guo-Zhen and J.L. Massey (1988). "A spectral characterization of correlation-immune combining functions." IEEE Trans. Inf. Theory, IT-34 (3), 569-571.
[4] Sarkar, P. and S. Maitra (2000). "Nonlinearity bounds and constructions of resilient Boolean functions." Advances in Cryptology—CRYPTO 2000, Lecture Notes in Computer Science, vol. 1880, ed. Mihir Bellare. Springer, 515-532.
[5] Siegenthaler, T. (1984). "Correlation-immunity of nonlinear combining functions for cryptographic applications." IEEE Transactions on Information Theory, IT-30 (5), 776-780.
[6] Tarannikov, Y.V. (2000). "On resilient Boolean functions with maximum possible nonlinearity." Proceedings of INDOCRYPT 2000, Lecture Notes in Computer Science, vol. 1977, eds. B.K. Roy and E. Okamoto. Springer, 19-30.
[7] Zheng, Y. and X.-M. Zhang (2001). "Improving upper bound on the nonlinearity of high order correlation immune functions." Proceedings of Selected Areas in Cryptography 2000, Lecture Notes in Computer Science, vol. 2012, eds. D.R. Stinson and S.E. Tavares. Springer, 262-274.


COVERT CHANNELS

Lampson [8, p. 614] informally specified a special type of communication as:

  Covert channels, i.e. those not intended for information transfer at all, such as the service program's effect on the system load.

A more general definition can be found in [14, p. 110].

Covert channels often involve what are called timing channels and storage channels. An example of a timing channel is the start-time of a process. The modulation of disc space is an example of a storage channel. Methods that describe how covert channels can be fought can, e.g., be found in [9]. For more information about covert channels, see [1].

Simmons [10] introduced the research on covert channels to the cryptographic community by introducing a special type of channel, which he called a subliminal channel. (Simmons did not regard these channels as fitting the definition of a covert channel.) He observed that covert data could be hidden within the authenticator of an authentication code [10]. The capacity of this channel was not large (a few bits per authenticator), but as Simmons disclosed 10 years later [12, pp. 459-462] (see also [13]), the potential impact on treaty verification and on the national security of the USA could have been catastrophic.

In 1987 the concept of a subliminal channel was extended to one hidden inside a zero-knowledge interactive proof [6]. The concept was generalized to a hidden channel inside any cryptographic system [3, 4]. Mechanisms to protect against subliminal channels were also presented [3, 4] and reinvented 5 years later [11]. Problems with some of these solutions were discussed in [5]; see [2] for a more complete discussion.

A subliminal channel in a key-distribution scheme (see key management) could undermine key escrow, as discussed in [7]. Kleptography is the study of how a designer, making a black box cipher, can leak the user's secret key subliminally to the designer (see, e.g., [15]).

Yvo Desmedt

References

[1] Bishop, M. (2003). Computer Security. Addison-Wesley, Reading, MA.
[2] Burmester, M., Y.G. Desmedt, T. Itoh, K. Sakurai, H. Shizuya, and M. Yung (1996). "A progress report on subliminal-free channels." Information Hiding, First International Workshop, Proceedings, May 30-June 1 (Lecture Notes in Computer Science, vol. 1174, ed. R. Anderson). Springer-Verlag, Cambridge, UK, 159-168.
[3] Desmedt, Y. (1990). "Abuses in cryptography and how to fight them." Advances in Cryptology—CRYPTO'88, Lecture Notes in Computer Science, vol. 403, ed. S. Goldwasser. Springer-Verlag, Santa Barbara, CA, 375-389.
[4] Desmedt, Y. (1990). "Making conditionally secure cryptosystems unconditionally abuse-free in a general context." Advances in Cryptology—CRYPTO'89, Lecture Notes in Computer Science, vol. 435, ed. G. Brassard. Springer-Verlag, Santa Barbara, CA, 6-16.
[5] Desmedt, Y. (1996). "Simmons' protocol is not free of subliminal channels." Proceedings: 9th IEEE Computer Security Foundations Workshop, June 10-12. Kenmare, Ireland, 170-175.
[6] Desmedt, Y., C. Goutier, and S. Bengio (1988). "Special uses and abuses of the Fiat-Shamir passport protocol." Advances in Cryptology—CRYPTO'87, Lecture Notes in Computer Science, vol. 293, ed. C. Pomerance. Springer-Verlag, Santa Barbara, CA, 21-39.
[7] Kilian, J. and T. Leighton (1995). "Failsafe key escrow, revisited." Advances in Cryptology—CRYPTO'95, Lecture Notes in Computer Science, vol. 963, ed. D. Coppersmith. Springer-Verlag, Santa Barbara, CA, 208-221.
[8] Lampson, B.W. (1973). "A note on the confinement problem." Comm. ACM, 16 (10), 613-615.
[9] Simmons, G.J. (1983). "Verification of treaty compliance—revisited." Proc. of the 1983 IEEE Symposium on Security and Privacy, April 25-27, 1983. IEEE Computer Society Press, Oakland, CA, 61-66.
[10] Simmons, G.J. (1983). "The prisoners' problem and the subliminal channel." Advances in Cryptology—CRYPTO'83, ed. D. Chaum. Plenum Press, New York, 51-67.
[11] Simmons, G.J. (1993). "An introduction to the mathematics of trust in security protocols." Proceedings: Computer Security Foundations Workshop VI, June 15-17. IEEE Computer Society Press, Franconia, NH, 121-127.
[12] Simmons, G.J. (1994). "Subliminal channels; past and present." European Trans. on Telecommunications, 5 (4), 459-473.
[13] Simmons, G.J. (1996). "The history of subliminal channels." Information Hiding, First International Workshop, Proceedings, May 30-June 1 (Lecture Notes in Computer Science, vol. 1174, ed. R. Anderson). Springer-Verlag, Cambridge, UK, 237-256.
[14] U.S. Department of Defense (1983). Department of Defense Trusted Computer System Evaluation Criteria. Also known as the Orange Book.
[15] Young, A. and M. Yung (1997). "Kleptography: Using cryptography against cryptography." Advances in Cryptology—EUROCRYPT'97, Lecture Notes in Computer Science, vol. 1233, ed. W. Fumy. Springer-Verlag, Konstanz, Germany, 62-74.


CPS, CERTIFICATE PRACTICE STATEMENT

A Certification Authority (CA) describes in a Certificate Practice Statement (CPS) the procedures and practices that it employs when managing certificates (issuing, revoking, renewing, and rekeying). The CPS describes manual processes for securely operating the CA and contains information on cryptographic aspects, including management of the keys used by the CA (see also key management). The certification authority documents in its CPS that it manages certificates according to some certificate policy (see trust model). The certificate policy lists the requirements for the management of certificates by the CA and defines the applicability of a certificate issued under this policy. The policy might, for example, indicate that the certificate may be used for authenticating the subject (holder) in certain types of business transactions.

The certificate policy under which a certificate is issued may be indicated in the certificate. For X.509 certificates a specific extension is defined for this purpose. This information allows relying parties to decide whether the certificate may be used in a given situation without knowing the CPS of the CA issuing the certificate.

Whereas the policy lists the requirements for the CA, the CPS describes how the CA meets these requirements. Thus the two documents will often have similar structures. The certificate policy may in particular define the rules for approving that a given CPS satisfies the policy and for validating that the CA continuously follows the CPS. It may, for example, be required that an independent auditor initially approves that the CPS complies with the certificate policy and yearly reviews the procedures actually followed by the CA against the CPS.

As a consequence, different certification authorities following different certificate practice statements may issue certificates under the same policy as long as their CPS satisfies the policy.

Torben Pedersen

Reference

[1] RFC 2527: Internet X.509 Public Key Infrastructure Certificate Policy and Certification Practices Framework. See http://www.rfc-editor.org/rfc.html


CRAMER-SHOUP PUBLIC KEY SYSTEM

The Cramer-Shoup cryptosystem [6, 8] is the first public-key cryptography system that is efficient and is proven to be chosen ciphertext secure without the random oracle model using a standard complexity assumption. Before describing the system we give a bit of history.

The standard notion of security for a public-key encryption system is known as semantic security under an adaptive chosen ciphertext attack, denoted IND-CCA2. The concept is due to Rackoff and Simon [12], and the first proof that such systems exist is due to Dolev et al. [9]. Several efficient constructions for IND-CCA2 systems exist in the random oracle model. For example, OAEP is known to be IND-CCA2 when using the RSA trapdoor permutation [2, 10]. Until the discovery of the Cramer-Shoup system, there were no efficient IND-CCA2 systems that are provably secure under standard assumptions without random oracles.

The Cramer-Shoup system makes use of a group G of prime order q. It also uses a hash function H : G^3 → Z_q (see also modular arithmetic). We assume that messages to be encrypted are elements of G. The most basic variant of the system works as follows:

Key Generation. Pick an arbitrary generator g of G. Pick a random w in Z_q^* and random x_1, x_2, y_1, y_2, z in Z_q. Set ĝ = g^w, e = g^(x_1) ĝ^(x_2), f = g^(y_1) ĝ^(y_2), h = g^z. The public key is (g, ĝ, e, f, h, G, q, H) and the private key is (x_1, x_2, y_1, y_2, z, G, q, H).

Encryption. Given the public key (g, ĝ, e, f, h, G, q, H) and a message m ∈ G:
1. Pick a random u in Z_q.
2. Set a = g^u, â = ĝ^u, c = h^u · m, v = H(a, â, c), d = e^u f^(uv).
3. The ciphertext is C = (a, â, c, d) ∈ G^4.

Decryption. To decrypt a ciphertext C = (a, â, c, d) using the private key (x_1, x_2, y_1, y_2, z, G, q, H):
1. Test that a, â, c, d belong to G; output 'reject' and halt if not.
2. Compute v = H(a, â, c) ∈ Z_q. Test that d = a^(x_1 + v y_1) â^(x_2 + v y_2); output 'reject' and halt if not.
3. Compute m = c/a^z ∈ G and output m as the decryption of C.

Cramer and Shoup prove that the system is IND-CCA2 if the DDH assumption [3] (see Decisional Diffie-Hellman problem) holds in G and the hash function H is collision resistant. They show that if a successful IND-CCA2 attacker exists, then (assuming H is collision resistant) one can construct an algorithm B, with approximately the same running time as the attacker, that decides if a given 4-tuple (g, ĝ, a, â) ∈ G^4 is a random DDH tuple or a random tuple in G^4. Very briefly, Algorithm B works as follows: it gives the attacker a public key for which B knows the private key. This enables B to respond to the attacker's decryption queries. The given 4-tuple is then used to construct the challenge ciphertext given to the attacker. Cramer and Shoup show that if the 4-tuple is a random DDH tuple, then the attacker will win the semantic security game with non-negligible advantage. However, if the input 4-tuple is a random tuple in G^4, then the attacker has zero advantage in winning the semantic security game. This behavioral difference enables B to decide whether the given input tuple is a random DDH tuple or not.

We briefly mention a number of variants of the system. Ideally, one would like an IND-CCA2 system that can be proven secure in two different ways: (i) without random oracles it can be proven secure using the decisional Diffie-Hellman assumption, and (ii) with random oracles it can be proven secure using the much weaker computational Diffie-Hellman assumption. For such a system, the random oracle model provides a hedge in case the DDH assumption is found to be false. A small variant of the Cramer-Shoup system above can be shown to have this property [8].

Occasionally, one only requires security against a weak chosen ciphertext attack in which the attacker can only issue decryption queries before being given the challenge ciphertext [1, 11]. A simpler version of the Cramer-Shoup system, called CS-Lite, can be shown to be secure against this weaker chosen ciphertext attack assuming DDH holds in G. This variant is obtained by computing d as d = e^u. There is no need for y_1, y_2, f, or the hash function H. When decrypting we verify that d = a^(x_1) â^(x_2) in step 2.

Finally, one may wonder how to construct efficient IND-CCA2 systems using an assumption other than DDH. Cramer and Shoup [7] showed that their system is a special case of a more general paradigm. Using this generalization they construct a CCA2 system based on the Quadratic Residuosity assumption modulo a composite. They obtain a more efficient system using a stronger assumption known as the Paillier assumption. Other constructions for efficient

decryption of C.
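For concreteness, the basic variant described above can be sketched in Python over a toy Schnorr subgroup. The tiny parameters and the use of SHA-256 in place of the hash H are illustrative assumptions only; this is not a secure instantiation.

```python
import hashlib
import secrets

# Toy Schnorr group: p = 2q + 1 with q prime; G is the order-q subgroup of Z_p*.
P = 1019  # safe prime: 1019 = 2*509 + 1 (far too small for real use)
Q = 509   # prime order of the subgroup G
G0 = 4    # generator of G (a quadratic residue != 1)

def H(a, a_hat, c):
    """Hash G^3 -> Z_q (SHA-256 stands in for the scheme's hash function)."""
    data = f"{a},{a_hat},{c}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def keygen():
    w = secrets.randbelow(Q - 1) + 1                      # w in Zq*
    x1, x2, y1, y2, z = (secrets.randbelow(Q) for _ in range(5))
    g_hat = pow(G0, w, P)
    e = (pow(G0, x1, P) * pow(g_hat, x2, P)) % P          # e = g^x1 * ĝ^x2
    f = (pow(G0, y1, P) * pow(g_hat, y2, P)) % P          # f = g^y1 * ĝ^y2
    h = pow(G0, z, P)                                     # h = g^z
    return (g_hat, e, f, h), (x1, x2, y1, y2, z)

def encrypt(pk, m):
    g_hat, e, f, h = pk
    u = secrets.randbelow(Q - 1) + 1
    a, a_hat = pow(G0, u, P), pow(g_hat, u, P)
    c = (pow(h, u, P) * m) % P
    v = H(a, a_hat, c)
    d = (pow(e, u, P) * pow(f, u * v, P)) % P             # d = e^u * f^(uv)
    return a, a_hat, c, d

def decrypt(sk, ct):
    x1, x2, y1, y2, z = sk
    a, a_hat, c, d = ct
    # Step 1: membership test -- elements of G satisfy t^q = 1 (mod p).
    for t in (a, a_hat, c, d):
        if pow(t, Q, P) != 1:
            return None                                   # 'reject'
    # Step 2: check d = a^(x1+v*y1) * â^(x2+v*y2).
    v = H(a, a_hat, c)
    if d != (pow(a, x1 + v * y1, P) * pow(a_hat, x2 + v * y2, P)) % P:
        return None                                       # 'reject'
    # Step 3: m = c / a^z.
    return (c * pow(a, P - 1 - z, P)) % P
```

As a quick sanity check, any message in G round-trips, while a ciphertext whose d component has been tampered with is rejected by the step-2 test.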


Other constructions for efficient IND-CCA2 systems are given in [4, 5]. Finally, we note that Sahai and Elkind [13] show that the Cramer–Shoup system can be linked to the Naor–Yung double encryption paradigm [11].

Dan Boneh

References

[1] Bellare, M., A. Desai, D. Pointcheval, and P. Rogaway (1998). "Relations among notions of security for public-key encryption schemes." Advances in Cryptology–CRYPTO'98, Lecture Notes in Computer Science, vol. 1462, ed. H. Krawczyk. Springer, Berlin, 26–45.
[2] Bellare, M. and P. Rogaway (1994). "Optimal asymmetric encryption." Advances in Cryptology–EUROCRYPT'94, Lecture Notes in Computer Science, vol. 950, ed. A. De Santis. Springer, Berlin, 92–111.
[3] Boneh, Dan (1998). "The decision Diffie–Hellman problem." Proceedings of the 3rd Algorithmic Number Theory Symposium, Lecture Notes in Computer Science, vol. 1423, ed. J.P. Buhler. Springer-Verlag, Berlin, 48–63.
[4] Boneh, Dan and Xavier Boyen (2004). "Efficient selective-ID secure identity based encryption without random oracles." Advances in Cryptology–EUROCRYPT 2004, Lecture Notes in Computer Science, vol. 3027, eds. C. Cachin and J. Camenisch. 223–238.
[5] Canetti, Ran, Shai Halevi, and Jonathan Katz (2004). "Chosen-ciphertext security from identity-based encryption." Advances in Cryptology–EUROCRYPT 2004, Lecture Notes in Computer Science, vol. 3027, eds. C. Cachin and J. Camenisch. 207–222.
[6] Cramer, Ronald and Victor Shoup (1998). "A practical public key cryptosystem provably secure against adaptive chosen ciphertext attack." Advances in Cryptology–CRYPTO'98, Lecture Notes in Computer Science, vol. 1462, ed. Hugo Krawczyk. Springer-Verlag, Berlin, 13–25.
[7] Cramer, Ronald and Victor Shoup (2002). "Universal hash proofs and a paradigm for chosen ciphertext secure public key encryption." Advances in Cryptology–EUROCRYPT 2002, Lecture Notes in Computer Science, vol. 2332, ed. Lars Knudsen. Springer-Verlag, Berlin, 45–64.
[8] Cramer, Ronald and Victor Shoup (2004). "Design and analysis of practical public-key encryption schemes secure against adaptive chosen ciphertext attack." SIAM Journal of Computing, 33 (1), 167–226.
[9] Dolev, D., C. Dwork, and M. Naor (2000). "Nonmalleable cryptography." SIAM Journal of Computing, 30 (2), 391–437.
[10] Fujisaki, Eiichiro, Tatsuaki Okamoto, David Pointcheval, and Jacques Stern (2004). "RSA-OAEP is secure under the RSA assumption." J. Cryptology, 17 (2), 81–104.
[11] Naor, Moni and Moti Yung (1990). "Public-key cryptosystems provably secure against chosen ciphertext attacks." Proceedings of 22nd ACM Symposium on Theory of Computing, May 1990, 427–437.
[12] Rackoff, C. and D. Simon (1991). "Noninteractive zero-knowledge proof of knowledge and chosen ciphertext attack." Advances in Cryptology–CRYPTO'91, Lecture Notes in Computer Science, vol. 576, ed. J. Feigenbaum. Springer-Verlag, Berlin, 433–444.
[13] Sahai, Amit and Edith Elkind (2002). "A unified methodology for constructing public-key encryption schemes secure against adaptive chosen-ciphertext attack." http://eprint.iacr.org/2002/042/

CREDENTIALS

In a general sense, credentials are something that gives a title to credit or confidence. In computer systems, credentials are descriptions of privileges that are issued by an authority to a subject. The privilege may be an access right, an eligibility, or membership (see also privilege management and access control). Examples from real life are driver's licenses, club membership cards, or passports. A credential can be shown to a verifier in order to prove one's eligibility, or can be used toward a recipient in order to exercise the described privilege or receive the described service. The integrity of a credential scheme relies on the verifiers being able to effectively check the following three conditions before granting access or providing service:
1. The credential originates from a legitimate authority. For example, the alleged authority is known or listed as an approved provider of credentials for the requested service.
2. The credential is legitimately shown or used by the respective subject.
3. The privilege described is sufficient for the service requested.

In centralized systems, credentials are called capabilities, i.e., descriptions of the access rights to certain security critical objects (see access control). The centralized system manages the issuing of capabilities to subjects through a trusted issuing process, and all attempts of subjects to access objects through a trusted verifier, i.e., the access enforcement mechanism. If the subject has sufficient capabilities assigned, it is allowed to access the requested object; otherwise the access is denied. The capabilities and their assignment to subjects are stored in a central trusted repository, where they can be looked up by the access enforcement mechanism. Thus, in centralized systems, the integrity requirements 1, 2, 3 are enforced by trusted central processes.
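In code, the centralized picture described above reduces to a lookup against a trusted repository; the class and method names in this sketch are hypothetical, chosen only for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class CapabilityRepository:
    """Central trusted store mapping each subject to its granted capabilities."""
    # subject -> set of (object, right) pairs granted by the trusted issuer
    grants: dict = field(default_factory=dict)

    def issue(self, subject, obj, right):
        """Trusted issuing process: record a capability for a subject."""
        self.grants.setdefault(subject, set()).add((obj, right))

    def check(self, subject, obj, right):
        """Access enforcement mechanism: allow only if a sufficient
        capability is assigned to the subject in the repository."""
        return (obj, right) in self.grants.get(subject, set())

repo = CapabilityRepository()
repo.issue("alice", "/payroll.db", "read")
assert repo.check("alice", "/payroll.db", "read")       # sufficient capability
assert not repo.check("alice", "/payroll.db", "write")  # access denied
assert not repo.check("bob", "/payroll.db", "read")     # no capability at all
```

The point of the sketch is that all three integrity requirements are enforced by trusted central components: the issuer writes to the repository, and the verifier reads from it.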


In distributed systems there are autonomous entities acting as issuing authorities, as users who get credentials issued or show/use credentials, or as verifiers. Distributed credentials need to satisfy the above integrity requirements even in the presence of one or more cheating users, possibly collaborating. In addition, one can be interested in privacy requirements of users against cheating issuers and verifiers, possibly collaborating. David Chaum introduced credentials in this context of distributed systems in [8]. Distributed credentials have been proposed to represent such different privileges as electronic cash, passports, driver's licenses, diplomas, and many others. Depending on what privilege a credential represents, its legitimate use must be restricted appropriately (see integrity requirement 3 above). The following atomic use restrictions have been considered in the literature.

Nontransferable credentials cannot be (successfully) shown by subjects to whom they have not been issued in the first place. Such credentials could represent nontransferable privileges such as diplomas or passports.

Revocable credentials cannot be (successfully) shown after they have expired or have been revoked. Such credentials could represent revocable privileges such as driver's licenses or public key certificates commonly used in public key infrastructures (PKI).

Consumable credentials cannot be (successfully) shown after they have been used a specified number of times. Such credentials could represent privileges that get consumed when you use them, e.g., electronic cash.

More complex use restrictions can be defined by combining these atomic use restrictions in Boolean formulae. For example, revocable nontransferable credentials could be used as driver's licenses that expire after a specified period of, e.g., 2 years.

A credential scheme is called online if its credentials can be shown and used only by involving a central trusted authority that needs to clear the respective transactions. If the holder and verifier can do so without involving a third party, the credential scheme is called offline. Online credential schemes are regarded as more secure for the issuers and verifiers, while offline credential schemes are regarded as more flexible and convenient for customers.

Credentials and their use could carry a lot of personal information about their holders. For example, consider an automated toll system that checks the driver's license of each car driver frequently but conveniently via wireless road check points. Such a system would effectively ban drivers without a valid license, but it could also effectively monitor the movements of all honest drivers. Considerations like this led Chaum [8] to look for privacy in credentials:

Unlinkable credentials can be issued and shown/used in such a way that even a coalition of cheating issuers and verifiers has no chance to determine which issuing and showing/using, or which two showings/usings, originate from the same credential (see unlinkability).

Unlinkable credentials also leave the holders anonymous, because if transactions on the same credential cannot be linked, neither can such transactions be linked to the credential holder's identity. (Otherwise, they would no longer be unlinkable.)

In the cryptographic literature, the term credential is most often used for nontransferable and unlinkable credentials, i.e., those that are irreversibly tied to human individuals and protect the privacy of users. Numerous cryptographic solutions have been proposed both for consumable credentials and for personal credentials alike. Chaum et al. [14] kicked off the development of consumable credentials. Improvements followed by Chaum et al. [10–12], Chaum and Pedersen [15], Cramer and Pedersen [17], Brands [2], Franklin and Yung [21], and others. Brands' solution achieves overspending prevention by using a wallet-with-observer architecture (see [19] and electronic wallet), overspender detection without assuming tamper resistant hardware, and unconditional unlinkability of payments, also without assuming tamper resistant hardware. Brands' solution satisfied almost all requirements that had been stated by 1992 for efficient offline consumable credentials (e-cash) in a surprisingly efficient way.

Naccache and von Solms [23] pointed out later that unconditional unlinkability (which implies payer anonymity) might be undesirable in practice because it would allow blackmailing and money laundering. They suggested striving for a better balance between the individuals' privacy and law enforcement. This work triggered a number of proposals for consumable credentials with anonymity revocation by Stadler et al. [24], Brickell et al. [4], Camenisch et al. [7], and Frankel et al. [20].

About the same amount of work has been done on developing personal credential schemes. Quite surprisingly, the problem of nontransferability between cheating collaborating individuals was neglected in many of the early papers by Chaum and Evertse [8, 9, 13] and Chen [16].
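The atomic use restrictions discussed above (nontransferable, revocable, consumable) and their Boolean combinations can be illustrated with a small sketch; the class and field names here are hypothetical, chosen only for illustration.

```python
import time

class Credential:
    """A credential carrying state for three atomic use restrictions."""
    def __init__(self, holder, expires_at, max_uses):
        self.holder = holder           # nontransferable: bound to one subject
        self.expires_at = expires_at   # revocable: expiry timestamp
        self.uses_left = max_uses      # consumable: remaining shows
        self.revoked = False

def may_show(cred, presenter, now):
    """Combine the atomic restrictions in a Boolean formula."""
    nontransferable = (presenter == cred.holder)
    not_revoked = (not cred.revoked) and now < cred.expires_at
    not_consumed = cred.uses_left > 0
    return nontransferable and not_revoked and not_consumed

def show(cred, presenter, now):
    """Verifier-side check; a successful show consumes one use."""
    if not may_show(cred, presenter, now):
        return False
    cred.uses_left -= 1
    return True

# A revocable nontransferable credential, e.g., a driver's license
# that expires after 2 years and may be shown twice:
lic = Credential("alice", expires_at=time.time() + 2 * 365 * 86400, max_uses=2)
assert show(lic, "alice", time.time())
assert not show(lic, "bob", time.time())    # nontransferable: wrong holder
assert show(lic, "alice", time.time())
assert not show(lic, "alice", time.time())  # consumable: uses exhausted
```

Note that this sketch only models the bookkeeping; the cryptographic literature cited here is about enforcing these restrictions against cheating holders, issuers, and verifiers, which a plain state machine cannot do.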


Chen's credentials are more limited than Chaum's and Evertse's because they can be shown only once. Damgård [18] stated nontransferability as a security requirement, but the proposed solution did not address it. Chaum and Pedersen [15] introduced the wallet-with-observer architecture and proposed that personal credentials be kept inside wallet databases, i.e., decentralized databases keeping all the privileges of their respective owners. Their proposal only partially addressed nontransferability by suggesting "distance bounding protocols" (Brands and Chaum [3]) in order to ensure the physical proximity of a wallet-with-observer during certain transactions. Distance bounding protocols can prevent Mafia frauds, where the individual present at an organization connects her device online to the wallet of another remote individual who holds the required credential, and then diverts the whole communication with the organization to that remote wallet. Distance bounding cannot, however, discourage individuals from simply lending or trading their wallets. Lysyanskaya et al. [22] proposed a general scheme based on one-way functions and general zero-knowledge proofs, which is impractical, and a practical scheme that has the same limitation as Chen's: credentials can be shown only once.

The fundamental problem of enforcing non-

tials. Individuals must either transfer all their credentials or none (all-or-nothing nontransferability). They argue that even collaborating attackers would refrain from granting each other access to their credit card accounts when they are collaborating only to share, e.g., a driver's license. Obviously, this deterrence against transferring credentials is quickly neutralized if two or more participants mutually transfer credentials to each other. If any of them misuses a credit card account of the other, he may experience the same kind of misuse of his own credit card account as a matter of retaliation. It appears as if this concept promotes and protects closed groups of criminal credential sharers. In addition, it would be hard in practice to guarantee that for each individual the risk of sharing any particular credential is significantly higher than the respective benefits. Thus, for most real-world applications such as driver's licenses, membership cards, or passports, this deterrence approach to nontransferability would face severe acceptance problems from the issuers' and verifiers' perspective. Their scheme also supports anonymity revocation as an option, but at the cost of about a 10-fold increase in computational complexity. Subsequent work by Camenisch and Lysyanskaya [6] also shows how