
rectly on the card, and hence an additional layer of security is applied for the terminal to strongly authenticate itself to the card.

Peter Landrock

CLAW-FREE

A pair of functions f and g is said to be claw-free or claw-resistant if it is difficult to find inputs x, y to the functions such that

    f(x) = g(y).

Such a pair of inputs is called a claw, describing the two-pronged inverse.


CLOCK-CONTROLLED GENERATOR

Let us consider a scheme that involves several registers and produces one output sequence. Based on some clocking mechanism, the registers go from one state to another, thereby producing an output bit.
We can choose whether or not to synchronize these registers. In the second case, the output of the scheme will be more nonlinear than in the first case.
We will consider here registers whose clock is controlled by some events. The best studied case is the one of Linear Feedback Shift Registers (LFSRs), but this concept could also be applied to Nonlinear Feedback Shift Registers (NLFSRs).
So, the main idea here is that we have, for example, two LFSRs, say R1 and R2, and that the output of R1 will determine the clocking of R2. For example, if R1 outputs a 1, then clock R2 twice, and if R1 outputs a 0, then clock R2 three times. The output of the scheme could be the one of R2.

EXAMPLE.

[Figure: R1 is clocked regularly; its output drives the clock input of R2, and the output of R2 is the output of the scheme.]
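A small sketch may make the clocking rule concrete. The following Python fragment only illustrates the scheme described above; the register lengths, tap positions, and initial states are arbitrary choices, not parameters from the entry.

```python
# Toy clock-controlled generator: the output bit of R1 decides how often R2 is clocked.
def lfsr(state, taps):
    """One clock of a binary LFSR; state[0] is the bit shifted out (the output)."""
    out = state[0]
    fb = 0
    for t in taps:
        fb ^= state[t]
    return out, state[1:] + [fb]

def clock_controlled(r1, r2, taps1, taps2, nbits):
    keystream = []
    for _ in range(nbits):
        c, r1 = lfsr(r1, taps1)               # R1 is clocked regularly
        for _ in range(2 if c == 1 else 3):   # 1 -> clock R2 twice, 0 -> clock R2 three times
            out, r2 = lfsr(r2, taps2)
        keystream.append(out)                 # output of the scheme = output of R2
    return keystream

print(clock_controlled([1, 0, 0, 1], [1, 1, 0, 1, 0], [0, 3], [0, 2], 16))
```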
Some particular examples of such generators have been studied. We will mention here the alternating step generator and the shrinking generator. We can also remark that an LFSR can manage its clocking by itself, since some of its internal bits can be chosen to determine its clocking; an example is the self-shrinking generator.
The alternating step generator consists of three LFSRs, say R, R0, and R1. The role of R is to determine the clocking of both R0 and R1. If R outputs a 0, then only R0 is clocked, and if R outputs a 1, then only R1 is clocked. At each step, an LFSR that is not clocked outputs the same bit as previously (a 0 if there is no previous step). So, at each step both R0 and R1 output one bit each, but only one of them has been clocked. The output sequence of the scheme is obtained by XORing those two bits.

[Figure: R is clocked regularly; when it outputs a 0 only R0 is clocked, when it outputs a 1 only R1 is clocked, and the output bits of R0 and R1 are XORed to form the output of the scheme.]

EXAMPLE. Let us suppose that R and R1 are of length 2 and have period 3; the feedback relation for R is s_{t+1} = s_t + s_{t-1}. For R1, let us consider s_{t+1} = s_t + s_{t-1}; R0 has length 3, and its feedback relation is s_{t+1} = s_t + s_{t-2}. Then we have for example (the first row corresponds to the initialization; the internal states are of the form s_t s_{t-1} or s_t s_{t-1} s_{t-2}):

            R                 R0                R1
      State   Output    State   Output    State   Output    Output
       11                010                01
       01       1        010       0        10       1         1
       10       1        010       0        11       0         0
       11       0        001       0        11       0         0
       01       1        001       0        01       1         1
       10       1        001       0        10       1         1
       11       0        100       1        10       1         0
       01       1        100       1        11       0         1
       10       1        100       1        01       1         0
       ...     ...        ...     ...       ...     ...       ...
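The table can be checked mechanically. The sketch below implements the alternating step generator under one reading of the example's conventions (states are written with the newest bit first, and the bit shifted out is a register's output); these conventions, like the variable names, are assumptions made for the illustration.

```python
# Sketch of the alternating step generator from the example above.
# State lists hold [newest, ..., oldest]; the bit shifted out (the oldest) is the output.

def step(state):
    """One clock of an LFSR whose feedback is the XOR of the newest and the oldest bit,
    which matches the feedback relations used in the example."""
    out = state[-1]                      # oldest bit is shifted out
    new = state[0] ^ state[-1]           # feedback bit
    return out, [new] + state[:-1]

def alternating_step(r, r0, r1, nbits):
    keystream, out0, out1 = [], 0, 0     # a 0 if there is no previous step
    for _ in range(nbits):
        c, r = step(r)                   # R is clocked at every step
        if c == 0:
            out0, r0 = step(r0)          # only R0 is clocked
        else:
            out1, r1 = step(r1)          # only R1 is clocked
        keystream.append(out0 ^ out1)    # XOR of the two output bits
    return keystream

# Initial states from the example: R = 11, R0 = 010, R1 = 01.
print(alternating_step([1, 1], [0, 1, 0], [0, 1], 8))   # expected: 1 0 0 1 1 0 1 0
```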
Some studies have been performed on the properties of the output sequence (period, linear complexity, etc.), according to the nature of the sequences associated with R, R0, and R1. A survey of techniques for attacking clock-controlled generators is given in [3], and more recent results are discussed in [1, 2, 4, 5].

Caroline Fontaine

References

[1] Golic, J. (1995). "Towards fast correlation attacks on irregularly clocked shift registers." Advances in Cryptology–EUROCRYPT'95, Lecture Notes in Computer Science, vol. 921, eds. L.C. Guillou and J.-J. Quisquater. Springer-Verlag, Berlin, 248–262.
[2] Golic, J. and L. O'Connor (1995). "Embedding and probabilistic correlation attacks on clock-controlled shift registers." Advances in Cryptology–EUROCRYPT'94, Lecture Notes in Computer Science, vol. 950, ed. A. De Santis. Springer-Verlag, Berlin, 230–243.
[3] Gollmann, D. (1994). "Cryptanalysis of clock-controlled shift registers." Fast Software Encryption, Lecture Notes in Computer Science, vol. 809, ed. R.J. Anderson. Springer-Verlag, Berlin, 121–126.
[4] Johansson, T. (1998). "Reduced complexity correlation attacks on two clock-controlled generators." Advances in Cryptology–ASIACRYPT'98, Lecture Notes in Computer Science, vol. 1514, eds. K. Ohta and D. Pei. Springer-Verlag, Berlin, 342–356.
[5] Kholosha, A. (2001). "Clock-controlled shift registers and generalized Geffe key-stream generator." Progress in Cryptology–INDOCRYPT'01, Lecture Notes in Computer Science, vol. 2247, eds. C. Pandu Rangan and C. Ding. Springer-Verlag, Berlin, 287 ff.


CLOSEST VECTOR PROBLEM

The Closest Vector Problem (CVP) is a computational problem on lattices closely related to SVP (see Shortest Vector Problem). Given a lattice L and a target point x, CVP asks to find the lattice point closest to the target. As for SVP, CVP can be defined with respect to any norm, but the Euclidean norm is the most common (see the entry lattice for a definition). A more relaxed version of the problem (used mostly in computational complexity) only asks to compute the distance of the target from the lattice, without actually finding the closest lattice vector.
CVP has been studied in mathematics (in the equivalent language of quadratic forms) since the nineteenth century. One of the first references to CVP (under the name "Nearest Vector Problem") in the computer science literature is [11], where the problem is shown to be NP-hard to solve exactly.
Many applications of the CVP only require finding a lattice vector that is not too far from the target, even if not necessarily the closest. A g-approximation algorithm for CVP finds a lattice vector within distance at most g times the distance of the optimal solution. The best known polynomial-time algorithms to solve CVP, due to Babai [2] and Kannan [7], are based on lattice reduction, and achieve approximation factors that (in the worst case) are essentially exponential in the dimension of the lattice. In practice, heuristic approaches (e.g., the "embedding technique," see lattice reduction) seem to find relatively good approximations to CVP in a reasonable amount of time when the dimension of the lattice is sufficiently small.
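For concreteness, the simplest of these lattice-reduction-based methods, Babai's rounding procedure, can be sketched in a few lines: write the target in the given basis, round every coordinate to the nearest integer, and map the rounded coefficients back to a lattice point. The basis and target below are arbitrary illustrative values, and the quality of the answer depends on how well reduced the basis is.

```python
# Minimal sketch of Babai's rounding method for approximating CVP.
# B is a full-rank lattice basis with the basis vectors as rows; x is the target point.
import numpy as np

def babai_round(B, x):
    """Return a lattice vector close to x: round the coordinates of x in basis B."""
    coords = np.linalg.solve(B.T, x)        # solve B^T c = x, i.e. express x in basis B
    return np.rint(coords) @ B              # nearest integer combination of the rows

B = np.array([[2.0, 1.0], [1.0, 3.0]])      # illustrative basis
x = np.array([4.3, 2.1])                    # illustrative target
v = babai_round(B, x)
print(v, np.linalg.norm(x - v))             # a nearby lattice point and its distance
```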
CVP is widely regarded, both in theory and in practice, as a considerably harder problem than SVP. CVP is known to be NP-hard to solve approximately within any constant factor or even some slowly increasing subpolynomial function (see polynomial time) of the dimension n [1, 3]. However, CVP is unlikely to be NP-hard to approximate within small polynomial factors g = O(√(n / log n)) [5]. Goldreich et al. [6] showed that any algorithm to efficiently approximate CVP can be used to efficiently approximate SVP within the same approximation factor and with essentially the same computational effort, formalizing the intuition that CVP is not an easier (and is a possibly harder) problem than SVP.
CVP is the basis of various cryptosystems (see lattice based cryptography) where the decryption process corresponds roughly to a CVP computation. These cryptosystems are based on the fact that any lattice admits many different representations (e.g., it can be represented by different bases), and some of them may have better geometric properties than others, so that they can be used as a decryption trapdoor. However, there are lattices [4, 8, 10] that admit no good representation, i.e., solving CVP (even approximately) is NP-hard no matter which basis (or other auxiliary information) is given. Therefore, the CVP instances used by lattice based cryptosystems (for which CVP can be efficiently solved using the decryption key) are conceivably easier than general CVP instances. CVP has many applications in computer science, besides cryptography, and finding a good CVP approximation algorithm with approximation factors that grow as a polynomial in the dimension of the lattice is one of the major open problems in the area. For further information about CVP and other computational problems on lattices, the reader is referred to the book [9].

Daniele Micciancio

References

[1] Arora, S., L. Babai, J. Stern, and E.Z. Sweedyk (1997). "The hardness of approximate optima in lattices, codes, and systems of linear equations." Journal of Computer and System Sciences, 54 (2), 317–331. Preliminary version in FOCS'93.
[2] Babai, L. (1986). "On Lovász' lattice reduction and the nearest lattice point problem." Combinatorica, 6 (1), 1–13.
[3] Dinur, I., G. Kindler, and S. Safra (1998). "Approximating CVP to within almost-polynomial factors is NP-hard." Proceedings of the 39th Annual Symposium on Foundations of Computer Science–FOCS'98, Palo Alto, CA, USA, November 1998. IEEE, Piscataway, NJ, 99–111.
[4] Feige, U. and D. Micciancio (2004). "The inapproximability of lattice and coding problems with preprocessing." Journal of Computer and System Sciences, 69 (1), 45–46. To appear. Preliminary version in CCC 2002.
[5] Goldreich, O. and S. Goldwasser (2000). "On the limits of nonapproximability of lattice problems." Journal of Computer and System Sciences, 60 (3), 540–563. Preliminary version in STOC'98.
[6] Goldreich, O., D. Micciancio, S. Safra, and J.-P. Seifert (1999). "Approximating shortest lattice vectors is not harder than approximating closest lattice vectors." Information Processing Letters, 71 (2), 55–61.
[7] Kannan, R. (1987). Annual Reviews of Computer Science, Volume 2, Chapter "Algorithmic Geometry of Numbers." Annual Review Inc., Palo Alto, CA, 231–267.
[8] Micciancio, D. (2001). "The hardness of the closest vector problem with preprocessing." IEEE Transactions on Information Theory, 47 (3), 1212–1215.
[9] Micciancio, D. and S. Goldwasser (2002). Complexity of Lattice Problems: A Cryptographic Perspective, The Kluwer International Series in Engineering and Computer Science, vol. 671. Kluwer Academic Publishers, Boston, MA.
[10] Regev, O. (2003). "Improved inapproximability of lattice and coding problems with preprocessing." Proceedings of the 18th IEEE Annual Conference on Computational Complexity–CCC'03, Aarhus, Denmark, July 2003. IEEE, Piscataway, NJ, 315–322.
[11] van Emde Boas, P. (1981). "Another NP-complete problem and the complexity of computing short vectors in a lattice." Technical Report 81-04, Mathematische Instituut, University of Amsterdam. Available online at http://turing.wins.uva.nl/peter/

CODEBOOK ATTACK

A codebook attack is an example of a known plaintext attack scenario in which the attacker is given access to a set of plaintexts and their corresponding encryptions (for a fixed key): (P_i, C_i), i = 1, ..., N. These pairs constitute a codebook which someone could use to listen to further communication and which could help him to partially decrypt the future messages even without the knowledge of the secret key. He could also use this knowledge in a replay attack by replacing blocks in the communication or by constructing meaningful messages from the blocks of the codebook. A codebook attack may even be applied in a passive traffic analysis scenario, i.e., as a ciphertext-only attack, which would start with frequency analysis of the received blocks and attempts to guess their meaning. Ciphers with small block size are vulnerable to the codebook attack, especially if used in the simplest Electronic Codebook mode of operation. Already with N = 2^{n/2} known pairs, where n is the block size of the cipher, the attacker has good chances to observe familiar blocks in the future communications of size O(2^{n/2}), due to the birthday paradox. If communication is redundant, the size of the codebook required may be even smaller. Modern block ciphers use 128-bit block size to make such attacks harder to mount. A better way to combat such attacks is to use chaining modes of operation like Cipher-Block Chaining mode (which makes further blocks of ciphertext dependent on all the previous blocks) together with the authentication of the ciphertext.
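The attack is essentially dictionary building. The toy sketch below illustrates the idea for a cipher used in Electronic Codebook mode; the 16-bit "blocks" and the random permutation standing in for the keyed cipher are assumptions made purely to keep the illustration small and fast.

```python
# Toy sketch of a codebook attack against a cipher used in ECB mode.
# A fixed random permutation of 16-bit blocks stands in for the secret keyed cipher;
# the attacker never sees the key, only (plaintext, ciphertext) pairs.
import random

random.seed(1)
perm = list(range(2**16))
random.shuffle(perm)
encrypt = lambda p: perm[p]                      # secret "block cipher"

# Phase 1: build a codebook from known plaintext/ciphertext pairs.
known_plaintexts = random.sample(range(2**16), 5000)
codebook = {encrypt(p): p for p in known_plaintexts}

# Phase 2: read future traffic block by block wherever the codebook matches.
future_message = [random.randrange(2**16) for _ in range(20)]
ciphertext = [encrypt(p) for p in future_message]
recovered = [codebook.get(c) for c in ciphertext]    # None where the block is unknown
print(sum(r is not None for r in recovered), "of", len(recovered), "blocks recovered")
```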
Alex Biryukov


COLLISION ATTACK

A collision attack exploits repeating values that occur when a random variable is chosen with replacement from a finite set S. By the birthday paradox, repetitions will occur after approximately √|S| attempts, where |S| denotes the size of the set S. Many cryptographic attacks are based on collisions.
The most obvious application of a collision attack is to find collisions for a cryptographic hash function. For a hash function with an n-bit result, an efficient collision search based on the birthday paradox requires approximately 2^{n/2} hash function evaluations [10]. For this application, one can substantially reduce the memory requirements (and also the memory accesses) by translating the problem to the detection of a cycle in an iterated mapping [7]. Van Oorschot and Wiener propose an efficient parallel variant of this algorithm [9]. In order to make a collision search infeasible for the next 15–20 years, the hash result needs to be 180 bits or more. A collision attack can also play a role to find (second) preimages for a hash function: if one has 2^{n/2} values to invert, one expects to find at least one (second) preimage after 2^{n/2} hash function evaluations.
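The birthday-paradox search itself is short to write down. In the sketch below, SHA-256 truncated to 32 bits stands in for an n-bit hash function, so a collision is expected after roughly 2^16 evaluations; the truncation is only there to make the toy example finish quickly. This version stores every computed value, which is exactly the memory cost that the cycle-detection and parallel variants cited above avoid.

```python
# Birthday-paradox collision search on a 32-bit hash (SHA-256 truncated to 4 bytes).
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()[:4]     # toy hash with n = 32 bits

seen = {}
i = 0
while True:
    m = i.to_bytes(8, "big")
    d = h(m)
    if d in seen and seen[d] != m:            # two distinct messages, same digest
        print("collision after", i + 1, "evaluations:", seen[d].hex(), m.hex())
        break
    seen[d] = m
    i += 1
```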


An internal collision attack on a MAC algorithm exploits collisions of the chaining variable of a MAC algorithm. It allows for a MAC forgery. As an example, a forgery attack for CBC-MAC and variants based on an n-bit block cipher requires at most 2^{n/2} known texts and a single chosen text [6]. For some MAC algorithms, such as MAA, internal collisions can lead to a key recovery attack [5].
A block cipher should be a one-way function from key to ciphertext (for a fixed plaintext). If the same plaintext is encrypted using 2^{k/2} keys (where k is the key length in bits), one expects to recover one of the keys after 2^{k/2} trial encryptions [1]. This attack can be precluded by the mode of operation; however, collision attacks also apply to these modes. In the Cipher Block Chaining (CBC) and Cipher FeedBack (CFB) mode of an n-bit block cipher, a repeated value of an n-bit ciphertext string leaks information on the plaintext [3, 4] (see block ciphers for more details).
For synchronous stream ciphers that have a next state function that is a random function (rather than a permutation), one expects that the key stream will repeat after 2^{m/2} output symbols, with m being the size of the internal memory in bits. Such a repetition leaks the sum of the corresponding plaintexts, which is typically sufficient to recover them. This attack applies to a variant of the Output FeedBack (OFB) mode of a block cipher where less than n output bits are fed back to the input. If exactly n bits are fed back as specified by the OFB mode, one expects a repetition after the selection of 2^{n/2} initial values.
The best generic algorithm to solve the discrete logarithm problem in any group G requires time O(√p) where p is the largest prime dividing the order of G [8]; this attack is based on collisions.
In many cryptographic protocols, e.g., entity authentication protocols, the verifier submits a random challenge to the prover. If an n-bit challenge is chosen uniformly at random, one expects to find a repeating challenge after 2^{n/2} runs of the protocol. A repeating challenge leads to a break of the protocol.
A meet-in-the-middle attack is a specific variant of a collision attack which allows to cryptanalyze some hash functions and multiple encryption modes (see block ciphers).
A more sophisticated way to exploit collisions to recover a block cipher key or to find (second) preimages is a time-memory trade-off [2].

B. Preneel

References

[1] Biham, E. (2002). "How to decrypt or even substitute DES-encrypted messages in 2^28 steps." Inf. Process. Lett., 84 (3), 117–124.
[2] Hellman, M. (1980). "A cryptanalytic time-memory trade-off." IEEE Trans. on Information Theory, IT-26 (4), 401–406.
[3] Knudsen, L.R. (1994). "Block ciphers–analysis, design and applications." PhD Thesis, Aarhus University, Denmark.
[4] Maurer, U.M. (1991). "New approaches to the design of self-synchronizing stream ciphers." Advances in Cryptology–EUROCRYPT'91, Lecture Notes in Computer Science, vol. 547, ed. D.W. Davies. Springer-Verlag, Berlin, 458–471.
[5] Preneel, B., V. Rijmen, and P.C. van Oorschot (1997). "A security analysis of the message authenticator algorithm (MAA)." European Transactions on Telecommunications, 8 (5), 455–470.
[6] Preneel, B. and P.C. van Oorschot (1999). "On the security of iterated message authentication codes." IEEE Trans. on Information Theory, IT-45 (1), 188–199.
[7] Quisquater, J.-J. and J.-P. Delescaille (1990). "How easy is collision search? Application to DES." Advances in Cryptology–EUROCRYPT'89, Lecture Notes in Computer Science, vol. 434, eds. J.-J. Quisquater and J. Vandewalle. Springer-Verlag, Berlin, 429–434.
[8] Shoup, V. (1997). "Lower bounds for discrete logarithms and related problems." Advances in Cryptology–EUROCRYPT'97, Lecture Notes in Computer Science, vol. 1233, ed. W. Fumy. Springer-Verlag, Berlin, 256–266.
[9] Van Oorschot, P.C. and M. Wiener (1999). "Parallel collision search with cryptanalytic applications." Journal of Cryptology, 12 (1), 1–28.
[10] Yuval, G. (1979). "How to swindle Rabin." Cryptologia, 3, 187–189.


COLLISION RESISTANCE

Collision resistance is the property of a hash function that it is computationally infeasible to find two colliding inputs. This property is related to second preimage resistance, which is also known as weak collision resistance. A minimal requirement for a hash function to be collision resistant is that the length of its result should be 160 bits (in 2004). A hash function is said to be a collision resistant hash function (CRHF) if it is a collision resistant one-way hash function (OWHF) (see hash function). The exact relation between collision resistance, second preimage resistance, and preimage resistance is rather subtle, and depends on the formalization of the definition: it is shown in [8] that under certain conditions, collision resistance implies second preimage resistance and preimage resistance.
In order to formalize the definition of a collision resistant hash function (see [1]), one needs to introduce a class of functions indexed by a public parameter, which is called a key. Indeed, one cannot require that there does not exist an adversary who can produce a collision for a fixed hash function, since any simple adversary who stores two short colliding inputs for a function would be able to output a collision efficiently. Introducing a class of functions solves this problem, since an adversary cannot store a collision for each value of the key (provided that the key space is not too small).
For a hash function with an n-bit result, an efficient collision search based on the birthday paradox requires approximately 2^{n/2} hash function evaluations. One can substantially reduce the memory requirements (and also the memory accesses) by translating the problem to the detection of a cycle in an iterated mapping. This was first proposed by Quisquater and Delescaille [6]. Van Oorschot and Wiener propose an efficient parallel variant of this algorithm [10]; with a US$ 10 million machine, collisions for MD5 (with n = 128) could be found in 21 days in 1994, which corresponds to 5 hours in 2004. In order to make a collision search infeasible for the next 15–20 years, the hash result needs to be 180 bits or more.
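The memoryless alternative works by iterating the hash function on its own output and detecting the resulting cycle, in the spirit of the cycle-detection approach mentioned above. The sketch below uses Floyd's cycle-finding on a 32-bit truncation of SHA-256; the truncation and the starting point are illustrative assumptions, and a different starting point may be needed in the rare case the start already lies on the cycle.

```python
# Collision search by cycle detection (Floyd) in the iterated map x -> h(x).
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()[:4]     # toy 32-bit hash, iterated on 4-byte values

x0 = b"\x00\x00\x00\x01"
tortoise, hare = h(x0), h(h(x0))
while tortoise != hare:                       # phase 1: detect a cycle
    tortoise, hare = h(tortoise), h(h(hare))

a, b = x0, tortoise                           # phase 2: walk toward the cycle entry point
while h(a) != h(b):                           # stop one step before the sequences merge
    a, b = h(a), h(b)

# a and b are distinct inputs with the same hash value
# (restart from another x0 in the rare case x0 itself lies on the cycle).
assert a != b and h(a) == h(b)
print("collision:", a.hex(), b.hex(), "->", h(a).hex())
```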


Second preimage resistance and collision resistance of hash functions have been introduced by Rabin in [7]; the attack based on the birthday paradox was first pointed out by Yuval [11]. Further work on collision resistance can be found in [1–5, 9, 12]. For an extensive discussion of the relation between collision resistance and (second) preimage resistance, the reader is referred to Rogaway and Shrimpton [8].

B. Preneel

References

[1] Damgård, I.B. (1990). "A design principle for hash functions." Advances in Cryptology–CRYPTO'89, Lecture Notes in Computer Science, vol. 435, ed. G. Brassard. Springer-Verlag, Berlin, 416–427.
[2] Gibson, J.K. (1990). "Some comments on Damgård's hashing principle." Electronics Letters, 26 (15), 1178–1179.
[3] Merkle, R. (1979). Secrecy, Authentication, and Public Key Systems. UMI Research Press, Ann Arbor, MI.
[4] Preneel, B. (1993). "Analysis and design of cryptographic hash functions." Doctoral Dissertation, Katholieke Universiteit Leuven.
[5] Preneel, B. (1999). "The state of cryptographic hash functions." Lectures on Data Security, Lecture Notes in Computer Science, vol. 1561, ed. I. Damgård. Springer-Verlag, Berlin, 158–182.
[6] Quisquater, J.-J. and J.-P. Delescaille (1990). "How easy is collision search? Application to DES." Advances in Cryptology–EUROCRYPT'89, Lecture Notes in Computer Science, vol. 434, eds. J.-J. Quisquater and J. Vandewalle. Springer-Verlag, Berlin, 429–434.
[7] Rabin, M.O. (1978). "Digitalized signatures." Foundations of Secure Computation, eds. R. Lipton and R. DeMillo. Academic Press, New York, 155–166.
[8] Rogaway, P. and T. Shrimpton (2004). "Cryptographic hash function basics: Definitions, implications, and separations for preimage resistance, second-preimage resistance, and collision resistance." Fast Software Encryption, Lecture Notes in Computer Science, Springer-Verlag, Berlin, to appear.
[9] Stinson, D. (2001). "Some observations on the theory of cryptographic hash functions." Technical Report 2001/020, University of Waterloo.
[10] van Oorschot, P.C. and M. Wiener (1999). "Parallel collision search with cryptanalytic applications." Journal of Cryptology, 12 (1), 1–28.
[11] Yuval, G. (1979). "How to swindle Rabin." Cryptologia, 3, 187–189.
[12] Zheng, Y., T. Matsumoto, and H. Imai (1990). "Connections between several versions of one-way hash functions." Proc. SCIS90, The 1990 Symposium on Cryptography and Information Security, Nihondaira, Japan, January 31–February 2, 1990.


COMBINATION GENERATOR

A combination generator is a running-key generator for stream cipher applications. It is composed of several linear feedback shift registers (LFSRs) whose outputs are combined by a Boolean function to produce the keystream. Then, the output sequence (s_t)_{t≥0} of a combination generator composed of n LFSRs is given by

    s_t = f(u^1_t, u^2_t, ..., u^n_t),   for all t ≥ 0,

where (u^i_t)_{t≥0} denotes the sequence generated by the ith constituent LFSR and f is a function of n variables. In the case of a combination generator composed of n LFSRs over F_q, the combining function is a function from F_q^n into F_q.

[Figure: the output bits u^1_t, ..., u^n_t of the n constituent LFSRs are combined by the Boolean function f to produce the keystream bit s_t.]
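A minimal sketch of the construction just defined: n small LFSRs run in parallel and a Boolean function f combines their output bits into the keystream. The register lengths, taps, initial states, and the particular combining function below are illustrative choices, not values prescribed by the entry.

```python
# Minimal sketch of a combination generator:
# keystream bit s_t = f(u^1_t, ..., u^n_t), the u^i being LFSR output sequences.

def lfsr(state, taps):
    """Yield the output sequence of a binary LFSR (state[0] is the bit shifted out)."""
    state = list(state)
    while True:
        out = state[0]
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = state[1:] + [fb]
        yield out

def f(x1, x2, x3):                    # illustrative combining function
    return (x1 & x2) ^ (x2 & x3) ^ x3

u1 = lfsr([1, 0, 0], [0, 1])          # the secret parameters are the initial states
u2 = lfsr([1, 1, 0, 1], [0, 3])
u3 = lfsr([1, 0, 1, 1, 0], [0, 2])

keystream = [f(next(u1), next(u2), next(u3)) for _ in range(20)]
print(keystream)
```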
The combining function f should obviously be balanced, i.e., its output should be uniformly distributed. The constituent LFSRs should be chosen to have primitive feedback polynomials (see primitive element) for ensuring good statistical properties of their output sequences (see Linear Feedback Shift Register for more details).
The characteristics of the constituent LFSRs and the combining function are usually publicly known. The secret parameters are the initial states of the LFSRs, which are derived from the secret key of the cipher by a key-loading algorithm. Therefore, most attacks on combination generators consist in recovering the initial states of all LFSRs from the knowledge of some digits of the sequence produced by the generator (in a known plaintext attack), or of some digits of the ciphertext sequence (in a ciphertext only attack). When the feedback polynomials of the LFSRs and the combining function are not known, the reconstruction attack presented in [2] enables one to recover the complete description of the generator from the knowledge of a large segment of the ciphertext sequence.

STATISTICAL PROPERTIES OF THE OUTPUT SEQUENCE: The sequence produced by a combination generator is a linear recurring sequence.
Its period and its linear complexity can be derived from those of the sequences generated by the constituent LFSRs and from the algebraic normal form of the combining function (see Boolean function). Indeed, if we consider two linear recurring sequences u and v over F_q with linear complexities Λ(u) and Λ(v), we have the following properties:
• The linear complexity of the sequence u + v = (u_t + v_t)_{t≥0} satisfies Λ(u + v) ≤ Λ(u) + Λ(v), with equality if and only if the minimal polynomials of u and v are relatively prime. Moreover, in the case of equality, the period of u + v is the least common multiple of the periods of u and v.
• The linear complexity of the sequence uv = (u_t v_t)_{t≥0} satisfies Λ(uv) ≤ Λ(u)Λ(v), where equality holds if the minimal polynomials of u and v are primitive and if Λ(u) and Λ(v) are distinct and greater than 2. Other general sufficient conditions for Λ(uv) = Λ(u)Λ(v) can be found in [3–5].
Thus, the keystream sequence produced by a combination generator composed of n binary LFSRs with primitive feedback polynomials which are combined by a Boolean function f satisfies the following property, proven in [5]: if all LFSR lengths L1, ..., Ln are distinct and greater than 2 (and if all LFSR initializations differ from the all-zero state), the linear complexity of the output sequence s is equal to

    f(L1, L2, ..., Ln),

where the algebraic normal form of f is evaluated over the integers. For instance, if four LFSRs of lengths L1, ..., L4 satisfying the previous conditions are combined by the Boolean function x1x2 + x2x3 + x4, the linear complexity of the resulting sequence is L1L2 + L2L3 + L4. Similar results concerning the combination of LFSRs over F_q can be found in [5] and [1]. A high linear complexity is a desirable property for a keystream sequence since it ensures that an attack based on the Berlekamp–Massey algorithm is computationally infeasible. Thus, the combining function f should have a high algebraic degree (the algebraic degree of a Boolean function is the highest number of variables occurring in a monomial of its algebraic normal form).
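Evaluating f(L1, ..., Ln) "over the integers" simply means substituting the register lengths into the algebraic normal form and computing with ordinary arithmetic. A tiny check for the four-LFSR example above, with illustrative lengths that are pairwise distinct and greater than 2:

```python
# Linear complexity of the keystream for f = x1*x2 + x2*x3 + x4,
# with the ANF evaluated over the integers (lengths are illustrative).
L1, L2, L3, L4 = 5, 7, 9, 11
linear_complexity = L1 * L2 + L2 * L3 + L4
print(linear_complexity)        # 5*7 + 7*9 + 11 = 109
```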
KNOWN ATTACKS AND RELATED DESIGN CRITERIA: Combination generators are vulnerable to the correlation attack and its variants called fast correlation attacks. In order to make these attacks infeasible, the LFSR feedback polynomials should not be sparse. The combining function should have a high correlation-immunity order, also called resiliency order when the involved function is balanced (see correlation-immune and resilient Boolean function). But there exists a trade-off between the correlation-immunity order and the algebraic degree of a Boolean function. Most notably, the correlation-immunity of a balanced Boolean function of n variables cannot exceed n − 1 − deg(f), when the algebraic degree of f, deg(f), is greater than 1. Moreover, the complexity of correlation attacks and of fast correlation attacks also increases with the nonlinearity of the combining function (see correlation attack). The trade-offs between high algebraic degree, high correlation-immunity order, and high nonlinearity can be circumvented by replacing the combining function by a finite state automaton with memory. Examples of such combination generators with memory are the summation generator and the stream cipher E0 used in Bluetooth.

Anne Canteaut

References

[1] Brynielsson, L. (1986). "On the linear complexity of combined shift register sequences." Advances in Cryptology–EUROCRYPT'85, Lecture Notes in Computer Science, vol. 219, ed. F. Pichler. Springer-Verlag, Berlin, 156–160.
[2] Canteaut, A. and E. Filiol (2001). "Ciphertext only reconstruction of stream ciphers based on combination generators." Fast Software Encryption 2000, Lecture Notes in Computer Science, vol. 1978, ed. B. Schneier. Springer-Verlag, Berlin, 165–180.
[3] Herlestam, T. (1986). "On functions of linear shift register sequences." Advances in Cryptology–EUROCRYPT'85, Lecture Notes in Computer Science, vol. 219, ed. F. Pichler. Springer-Verlag, Berlin, 119–129.
[4] Göttfert, R. and H. Niederreiter (1995). "On the minimal polynomial of the product of linear recurring sequences." Finite Fields and their Applications, 1 (2), 204–218.
[5] Rueppel, R.A. and O.J. Staffelbach (1987). "Products of linear recurring sequences with maximum complexity." IEEE Transactions on Information Theory, 33 (1), 124–131.


COMMITMENT

COMMITMENT: A commitment scheme is a two-phase cryptographic protocol between two parties, a sender and a receiver, satisfying the following constraints.
At the end of the Commit phase the sender is committed to a specific value (often a single bit) that he cannot change later on (Commitments are binding) and the receiver should have no information about the committed value, other than what he already knew before the protocol (Commitments are concealing). In the Unveil phase, the sender sends extra information to the receiver that allows him to determine the value that was concealed by the commitment.
Bit commitments are important components of zero-knowledge protocols [4, 16], and other more general two-party cryptographic protocols [19].
A natural intuitive implementation of a commitment is performed using an envelope (see Figure 1). Some information written on a piece of paper may be committed to by sealing it inside an envelope. The value inside the sealed envelope cannot be guessed (envelopes are concealing) without modifying the envelope (opening it), nor can the content be modified (envelopes are binding).

[Fig. 1. Committing with an envelope]

Unveiling the content of the envelope is achieved by opening it and extracting the piece of paper inside (see Figure 2).

[Fig. 2. Unveiling from an envelope]

The terminology of commitments, influenced by the legal vocabulary, first appeared in the contract signing protocols of Even [14], although it seems fair to attribute the concept to Blum [3] who implicitly uses it for coin flipping around the same time. In his Crypto 81 paper, Even refers to Blum's contribution saying: "In the summer of 1980, in a conversation, M. Blum suggested the use of randomization for such protocols." So apparently Blum introduced the idea of using random hard problems to commit to something (coin, contract, etc.). However, one can also argue that the earlier work of Shamir et al. [26] on mental poker implicitly used commitments as well, since in order to generate a fair deal of cards, Alice encrypts the card names under her own encryption key, which is the basic idea for implementing commitments.


Under such computational assumptions, commitments come in two dual flavors: binding but computationally concealing commitments and concealing but computationally binding commitments.
Commitments of the first type may be achieved from any one-way function [18, 24] while those of the second type may be achieved from any one-way permutation (or at least regular one-way function) [25] or any collision-free hash function [17] (see also collision resistance and hash function). It is still an open problem to achieve commitments of the second type from one-way functions only.
A simple example of a bit commitment of the first type is obtained using the Goldwasser–Micali probabilistic encryption scheme with one's own pair of public keys (n, q) such that n is an RSA modulus (see RSA public key encryption) and q a random quadratic nonresidue modulo n with Jacobi symbol +1 (see also quadratic residue). Unveiling is achieved by providing a square root of each quadratic residue and of each quadratic nonresidue multiplied by q. A similar example of a bit commitment of the second type is constructed from someone else's pair of public keys (n, r) such that n is an RSA modulus and r a random quadratic residue modulo n. A zero bit is committed using a random quadratic residue mod n while a one bit is committed using a random quadratic residue multiplied by r modulo n. Unveiling is achieved by providing a square root of quadratic residues committing to a zero and of quadratic residues multiplied by r used to commit to a one.
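A small sketch of the second construction (concealing, but only computationally binding) may help. The receiver's public key is (n, r) with r a quadratic residue modulo n; a 0 is committed as x^2 mod n and a 1 as r·x^2 mod n, and unveiling reveals the bit together with x. Generating toy-sized primes with sympy is only to make the fragment self-contained; in the construction described above, n is someone else's published RSA modulus.

```python
# Sketch of the concealing-but-computationally-binding quadratic-residue commitment.
# commit(0) = x^2 mod n, commit(1) = r * x^2 mod n; unveiling reveals (bit, x).
import secrets
from sympy import randprime

p = randprime(2**255, 2**256)                 # toy-sized primes, illustrative only
q = randprime(2**255, 2**256)
n = p * q
r = pow(secrets.randbelow(n - 2) + 2, 2, n)   # a random quadratic residue mod n

def commit(bit):
    x = secrets.randbelow(n - 2) + 2
    c = pow(x, 2, n) if bit == 0 else (r * pow(x, 2, n)) % n
    return c, x                               # c is sent; (bit, x) is kept for the Unveil phase

def unveil(c, bit, x):
    expected = pow(x, 2, n) if bit == 0 else (r * pow(x, 2, n)) % n
    return c == expected

c, x = commit(1)
print(unveil(c, 1, x), unveil(c, 0, x))       # expected: True False
```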
Unconditionally binding and concealing commitments can also be obtained under the assumption of the existence of a binary symmetric channel [10] and under the assumption that the receiver owns a bounded amount of memory [6]. In multiparty scenarios [2, 8, 16], commitments are usually achieved through Verifiable Secret Sharing Schemes [9]. However, the two-prover case [1] does not require the verifiable property because the provers are physically isolated from each other during the life span of the commitments.
In a quantum computation model (see quantum cryptography) it was first believed that commitment schemes could be implemented with unconditional security for both parties [5], but it was later demonstrated that if the sender is equipped with a quantum computer, then any unconditionally concealing commitment cannot be binding [22, 23].
Commitments exist with various extra properties: chameleon/trapdoor commitments [1, 15], commitments with equality (attributed to Bennett and Rudich in [11, 20]), nonmalleable commitments [13] (with respect to unveiling [12]), mutually independent commitments [21], and universally composable commitments [7].

Claude Crépeau

References

[1] Ben-Or, M., S. Goldwasser, J. Kilian, and A. Wigderson (1988). "Multi-prover interactive proofs: How to remove intractability assumptions." Proceedings of 20th Annual ACM Symposium on Theory of Computing, 113–122.
[2] Ben-Or, M., S. Goldwasser, and A. Wigderson (1988). "Completeness theorems for fault-tolerant distributed computing." Proc. 20th ACM Symposium on Theory of Computing, Chicago, 1988. ACM Press, New York, 1–10.
[3] Blum, M. (1982). "Coin flipping by telephone." Advances in Cryptography, Santa Barbara, California, USA, ed. Allen Gersho. University of California, Santa Barbara, 11–15.
[4] Brassard, G., D. Chaum, and C. Crépeau (1998). "Minimum disclosure proofs of knowledge." JCSS, 37, 156–189.
[5] Brassard, G., C. Crépeau, R. Jozsa, and D. Langlois (1993). "A quantum bit commitment scheme provably unbreakable by both parties." 29th Symp. on Found. of Computer Sci. IEEE, Piscataway, NJ, 42–52.
[6] Cachin, C., C. Crépeau, and J. Marcil (1998). "Oblivious transfer with a memory-bounded receiver." 39th Annual Symposium on Foundations of Computer Science: Proceedings, November 8–11, 1998, Palo Alto, California. IEEE Computer Society Press, Los Alamitos, CA, 493–502.
[7] Canetti, Ran and Marc Fischlin (2001). "Universally composable commitments." Advances in Cryptology–CRYPTO 2001, International Association for Cryptologic Research, Lecture Notes in Computer Science, vol. 2139, ed. Joe Kilian. Springer-Verlag, Berlin, 19–40.
[8] Chaum, D., C. Crépeau, and I. Damgård (1988). "Multi-party unconditionally secure protocols." Proc. 20th ACM Symposium on Theory of Computing, Chicago, 1988. ACM Press, New York.
[9] Chor, Benny, Shafi Goldwasser, Silvio Micali, and Baruch Awerbuch (1985). "Verifiable secret sharing and achieving simultaneity in the presence of faults (extended abstract)." Proc. of 26th FOCS, Portland, OR, October 21–23, 1985. IEEE, Piscataway, NJ, 383–395.
[10] Crépeau, C. (1997). "Efficient cryptographic protocols based on noisy channels." Advances in Cryptology–EUROCRYPT'97, Lecture Notes in Computer Science, vol. 1233, ed. Walter Fumy. Springer-Verlag, Berlin, 306–317.
[11] Crépeau, C., J. van de Graaf, and A. Tapp (1995). "Committed oblivious transfer and private multi-party computation." Advances in Cryptology–CRYPTO'95, Lecture Notes in Computer Science, vol. 963, ed. Don Coppersmith. Springer-Verlag, Berlin, 110–123.
[12] Di Crescenzo, Giovanni, Yuval Ishai, and Rafail Ostrovsky (1998). "Non-interactive and non-malleable commitment." 30th Symposium on the Theory of Computing, 141–150.
[13] Dolev, D., C. Dwork, and M. Naor (1991). "Non-malleable cryptography." Proceedings of the Twenty-Third Annual ACM Symposium on Theory of Computing, New Orleans, LA, May 6–8, 1991. IEEE Computer Society Press, Los Alamitos, CA.
[14] Even, S. (1982). "Protocol for signing contracts." Advances in Cryptography, Santa Barbara, CA, USA, 1982, ed. Allen Gersho. University of California, Santa Barbara.
[15] Feige, U. and A. Shamir (1989). "Zero knowledge proofs of knowledge in two rounds." Advances in Cryptology–CRYPTO'89, Lecture Notes in Computer Science, vol. 435, ed. Gilles Brassard. Springer-Verlag, Berlin, 526–544.
[16] Goldreich, Oded, Silvio Micali, and Avi Wigderson (1991). "Proofs that yield nothing but their validity or all languages in NP have zero-knowledge proof systems." Journal of the Association for Computing Machinery, 38 (3), 691–729.
[17] Halevi, S. and S. Micali (1996). "Practical and provably-secure commitment schemes from collision-free hashing." Advances in Cryptology–CRYPTO'96, Lecture Notes in Computer Science, vol. 1109, ed. Neal Koblitz. Springer-Verlag, Berlin, 201–215.
[18] Håstad, Johan, Russell Impagliazzo, Leonid A. Levin, and Michael Luby (1999). "A pseudorandom generator from any one-way function." SIAM Journal on Computing, 28 (4), 1364–1396.
[19] Kilian, J. (1988). "Founding cryptography on oblivious transfer." Proc. 20th ACM Symposium on Theory of Computing, Chicago, 1988. ACM Press, New York, 20–31.
[20] Kilian, Joe (1992). "A note on efficient zero-knowledge proofs and arguments (extended abstract)." Proceedings of the Twenty-Fourth Annual ACM Symposium on the Theory of Computing, Victoria, British Columbia, Canada, May 4–6, 1992, 723–732.
[21] Liskov, Moses, Anna Lysyanskaya, Silvio Micali, Leonid Reyzin, and Adam Smith (2001). "Mutually independent commitments." Advances in Cryptology–ASIACRYPT 2001, Lecture Notes in Computer Science, vol. 2248, ed. C. Boyd. Springer, Berlin, 385–401.
[22] Lo, H.-K. and F. Chau (1997). "Is quantum bit commitment really possible." Physical Review Letters, 78 (17), 3410–3413.
[23] Mayers, Dominic (1997). "Unconditionally secure quantum bit commitment is impossible." Physical Review Letters, 78 (17), 3414–3417.
[24] Naor, Moni (1991). "Bit commitment using pseudo-randomness." Journal of Cryptology, 4 (2), 151–158.
[25] Naor, M., R. Ostrovsky, R. Venkatesan, and M. Yung (1993). "Perfect zero-knowledge arguments for NP can be based on general complexity assumptions." Advances in Cryptology–CRYPTO'92, Lecture Notes in Computer Science, vol. 740, ed. E.F. Brickell. Springer-Verlag. This work was first presented at the DIMACS Workshop on Cryptography, October 1990.
[26] Shamir, A., R.L. Rivest, and L.M. Adleman (1981). "Mental poker." The Mathematical Gardner, ed. D. Klarner. Wadsworth, Belmont, CA.

COMMON CRITERIA

The Common Criteria (CC) is meant to be used as the basis for evaluation of security properties of IT products and systems. The objective desired is that by establishing a common base for criteria, the evaluation results of an IT product will be of more value to a wider audience.
The goal is for Common Criteria to permit comparability of products based on the results of independent security evaluations for various products evaluated by separate organizations in different countries. The vision is that by providing a common set of requirements for the security functions of IT products, and a common set of assurance measurements applied to them, the evaluation process will establish a level of confidence in the knowledge and trust of the evaluated products. The evaluation results may help consumers to determine whether an IT product or system is appropriate for their intended application and whether the security risks implicit in its use are acceptable.
Common Criteria is not a security specification that prescribes specific or necessary security functionality or assurance. Therefore, it is not intended to verify the quality or security of cryptographic implementations. In general, products that require cryptography are often required to attain a FIPS 140-2 validation for their cryptographic functionality before the Common Criteria evaluation can be completed. There are security products that are very important to security but may not incorporate cryptography as a part of their functionality. Examples include operating systems, firewalls, and IDS systems. Common Criteria is a methodology to gain assurance that a product is actually designed and subsequently performs according to the claims in the product's "Security Target" document. The level of assurance (EAL) can be specified to one of seven levels described later.
The Common Criteria specification has been published as International Standard ISO/IEC 15408:1999. It is sometimes also published in
formats specific to a given country that facilitate use in their individual test scheme. The content and requirements are intended to be identical.
Seven governmental organizations, which are collectively called "the Common Criteria Project Sponsoring Organizations," were formed to develop the standard and program. The countries and organizations are:
• Canada: Communications Security Establishment
• France: Service Central de la Sécurité des Systèmes d'Information
• Germany: Bundesamt für Sicherheit in der Informationstechnik
• Netherlands: Netherlands National Communications Security Agency
• United Kingdom: Communications-Electronics Security Group
• United States: National Institute of Standards and Technology
• United States: National Security Agency
The Common Criteria Project Sponsoring Organizations approved the licensing and use of CC v2.1 to be the basis of ISO 15408. Because of its international basis, certifications under Common Criteria are under a "Mutual Recognition Agreement." This is an agreement that certificates issued by organizations under a specific scheme will be accepted in other countries as if they were evaluated under their own schemes. The list of countries that have signed up to the mutual recognition has grown beyond just the original sponsoring organizations.
Common Criteria incorporates a feature called a Protection Profile (PP). This is a document that specifies an implementation-independent set of security requirements for a category of products (i.e., Traffic Filters or smart cards) that meet the needs of specific consumer groups, communities of interest, or applications. Protection Profiles are considered a product in themselves, and are evaluated and tested for compliance to Common Criteria, just as a functional product would be. Before a product can be validated using Common Criteria to a given Protection Profile (or a combination of them), the Protection Profiles have to be evaluated and issued certificates of compliance. Instead of the Security Target (a required document) referring to a Protection Profile for a set of security functionality and assurance criteria, it is acceptable for the product Security Target to independently state the security functionality and assurance level to which the product will be evaluated. The limitation is that this restricts the ability of product consumers or users to readily compare products of similar functionality.

EAL1. The objective for evaluation assurance level 1 (EAL1), described as "functionally tested," is to confirm that the product functions in a manner consistent with its documentation, and that it provides useful protection against identified threats.
EAL1 is applicable where some confidence in correct operation is required, but the threats to security are not viewed as serious. The evaluation will be of value where independent assurance is required to support the contention that due care has been exercised with respect to the protection of personal or similar information.
EAL1 provides an evaluation of the product as made available to the customer, including independent testing against a specification, and an examination of the guidance documentation provided. It is intended that an EAL1 evaluation could be successfully conducted without assistance from the developer of the product, and for minimal cost and schedule impact.

EAL2. The objective for evaluation assurance level 2 (EAL2) is described as "structurally tested."
EAL2 requires the cooperation of the developer in terms of the delivery of design information and test results, but should not demand more effort on the part of the developer than is consistent with good commercial practice, and therefore, should not require a substantially increased investment of cost or time.
EAL2 is applicable in those circumstances where developers or users require a low to moderate level of independently assured security but does not require the submission of a complete development record by the vendor. Such a situation may arise when securing legacy systems, or where access to the developer may be limited.

EAL3. The objectives for evaluation assurance level 3 (EAL3) are described as "methodically tested and checked."
EAL3 permits a conscientious developer to gain maximum assurance from positive security engineering at the design stage without substantial alteration of existing sound development practices.
EAL3 is applicable in those circumstances where developers or users require a moderate level of independently assured security, and require a thorough investigation of the product and its development without substantial re-engineering.

EAL4. The objectives for evaluation assurance level 4 (EAL4) are described as "methodically designed, tested, and reviewed."


EAL4 permits a developer to gain maximum assurance from positive security engineering based on good commercial development practices, which, though rigorous, do not require substantial specialist knowledge, skills, and other resources.
EAL4 is therefore applicable in those circumstances where developers or users require a moderate to high level of independently assured security in conventional commodity products and are prepared to incur additional security-specific engineering costs.

EAL5. The objectives for evaluation assurance level 5 (EAL5) are described as "semiformally designed and tested."
EAL5 permits a developer to gain maximum assurance from security engineering based upon rigorous commercial development practices supported by moderate application of specialist security engineering techniques. Such a product will probably be designed and developed with the intent of achieving EAL5 assurance. It is likely that the additional costs attributable to the EAL5 requirements, relative to rigorous development without the application of specialized techniques, will not be large.
EAL5 is therefore applicable in those circumstances where developers or users require a high level of independently assured security in a planned development and require a rigorous development approach without incurring unreasonable costs attributable to specialist security engineering techniques.

EAL6. The objectives for evaluation assurance level 6 (EAL6) are described as "semiformally verified design and tested."
EAL6 permits developers to gain high assurance from application of security engineering techniques to a rigorous development environment in order to produce a premium product for protecting high value assets against significant risks.
EAL6 is therefore applicable to the development of security products for application in high-risk situations where the value of the protected assets justifies the additional costs.

EAL7. The objectives of evaluation assurance level 7 (EAL7) are described as "formally verified design and tested."
EAL7 is applicable to the development of security products for application in extremely high-risk situations and/or where the high value of the assets justifies the higher costs. Practical application of EAL7 is currently limited to products with tightly focused security functionality that is amenable to extensive formal analysis.

Common Criteria is documented in a family of three interrelated documents:
1. CC Part 1: Introduction and general model
2. CC Part 2: Security functional requirements
3. CC Part 3: Security assurance requirements.
The managed international homepage of the Common Criteria is available at www.commoncriteria.org. The homepage for US based vendors and customers is managed by NIST at http://csrc.nist.gov/cc.
Part 1, introduction and general model, is the introduction to the CC. It defines general concepts and principles of IT security evaluation and presents a general model of evaluation. Part 1 also presents constructs for expressing IT security objectives, for selecting and defining IT security requirements, and for writing high-level specifications for products and systems. In addition, the usefulness of each part of the CC is described in terms of each of the target audiences.
Part 2, security functional requirements, establishes a set of functional components as a standard way of expressing the functional requirements for Targets of Evaluation. Part 2 catalogs the set of functional components, families, and classes.
Part 3, security assurance requirements, establishes a set of assurance components as a standard way of expressing the assurance requirements for Targets of Evaluation. Part 3 catalogs the set of assurance components, families, and classes. Part 3 also defines evaluation criteria for Protection Profiles and Security Targets and presents evaluation assurance levels that define the predefined CC scale for rating assurance for Targets of Evaluation, which is called the Evaluation Assurance Levels.
Each country implements its own scheme of how it will implement the Common Evaluation Methodology for Information Technology Security.

Tom Caddy


COMMUNICATION CHANNEL ANONYMITY

Communication channel anonymity or relationship anonymity [4] is achieved in a messaging system if an eavesdropper who picks up messages from the communication line of a sender and the communication line of a recipient cannot tell with better probability than pure guess whether the sent message is the same as the received message. During the attack, the eavesdropper may also listen on all communication lines of the network, and he may also send and receive his own messages.


It is clear that all messages in such a network must be encrypted to the same length in order to keep the attacker from distinguishing different messages by their content or length.
Communication channel anonymity implies either sender anonymity or recipient anonymity [4].
Communication channel anonymity can be achieved against computationally restricted eavesdroppers by MIX networks [1] and against computationally unrestricted eavesdroppers by DC networks [2, 3].
Note that communication channel anonymity is weaker than communication link unobservability, where the attacker cannot even determine whether or not any message is exchanged between any particular pair of participants at any point of time. Communication link unobservability can be achieved with MIX networks and DC networks by adding dummy traffic.

Gerrit Bleumer

References

[1] Chaum, David (1981). "Untraceable electronic mail, return addresses, and digital pseudonyms." Communications of the ACM, 24 (2), 84–88.
[2] Chaum, David (1985). "Security without identification: Transaction systems to make Big Brother obsolete." Communications of the ACM, 28 (10), 1030–1044.
[3] Chaum, David (1988). "The dining cryptographers problem: Unconditional sender and recipient untraceability." Journal of Cryptology, 1 (1), 65–75.
[4] Pfitzmann, Andreas and Marit Köhntopp (2001). "Anonymity, unobservability, and pseudonymity – a proposal for terminology." Designing Privacy Enhancing Technologies, Lecture Notes in Computer Science, vol. 2009, ed. H. Federrath. Springer-Verlag, Berlin, 1–9.


COMPROMISING EMANATIONS

Computer and communications devices emit numerous forms of energy. Many of these emissions are produced as unintended side effects of normal operation. For example, where these emissions take the form of radio waves, they can often be observed interfering with nearby radio receivers. Some of the unintentionally emitted energy carries information about processed data. Under good conditions, a sophisticated and well-equipped eavesdropper can intercept and analyze such compromising emanations to steal information. Even where emissions are intended, as is the case with transmitters and displays, only a small fraction of the overall energy and information content emitted will ever reach the intended recipient. Eavesdroppers can use specialized, more sensitive receiving equipment to tap into the rest and access confidential information, often in unexpected ways, as some of the following examples illustrate.
Much knowledge in this area is classified military research. Some types of compromising emanations that have been demonstrated in the open literature include:
• Radio-frequency waves radiated into free space
• Radio-frequency waves conducted along cables
• Power-supply current fluctuations
• Vibrations, acoustic and ultrasonic emissions
• High-frequency optical signals.
They can be picked up passively using directional antennas, microphones, high-frequency power-line taps, telescopes, radio receivers, oscilloscopes, and similar sensing and signal-processing equipment. In some situations, eavesdroppers can obtain additional information by actively directing radio waves or light beams toward a device and analyzing the reflected energy.
Some examples of compromising emanations are:
• Electromagnetic impact printers can produce low-frequency acoustic, magnetic, and power-supply signals that are characteristic for each printed character. In particular, this has been demonstrated with some historic dot-matrix and "golfball" printers. As a result, printed text could be reconstructed with the help of power-line taps, microphones, or radio antennas. The signal sources are the magnetic actuators in the printer and the electronic circuits that drive them.
• Cathode-ray tube (CRT) displays are fed with an analog video signal voltage, which they amplify by a factor of about 100 and apply to a control grid that modulates the electron beam. This arrangement acts, together with the video cable, as a parasitic transmission antenna. As a result, CRT displays emit the video signal as electromagnetic waves, particularly in the VHF and UHF bands (30 MHz to 3 GHz). An AM radio receiver with a bandwidth comparable to the pixel-clock frequency of the video signal can be tuned to one of the harmonics of the emitted signal. The result is a high-pass filtered and rectified approximation of the original video signal. It lacks color information and each vertical edge appears merely as a line. Figure 1 demonstrates
that text characters remain quite readable after this distortion. Where the display font and character spacing are predictable, automatic text recognition is particularly practical. For older, 1980s, video displays, even modified TV sets, with deflection frequencies adjusted to match those of the eavesdropped device, could be used to demonstrate the reconstruction of readable text at a distance [7]. In modern computers, pixel-clock frequencies exceed the bandwidth of TV receivers by an order of magnitude. Eavesdropping attacks on these require special receivers with large bandwidth (50 MHz or more) connected to a computer monitor or high-speed signal-processing system [3].

[Fig. 1. The top image shows a short test text displayed on a CRT monitor. In the bottom image, the compromising emanation from this text was picked up with the help of an AM radio receiver (tuned at 450 MHz, 50 MHz bandwidth) and a broadband UHF antenna. The output was then digitized, averaged over 256 frames to reduce noise, and finally presented as a reconstructed pixel raster.]

• CRT displays also leak the video signal as a high-frequency fluctuation of the emitted light. On this channel, the video signal is distorted by the afterglow of the screen phosphors and by the shot noise that background light contributes. It is possible to reconstruct readable text from screen light even after diffuse reflection, for example from a user's face or a wall. This can be done from nearby buildings using a telescope connected to a very fast photosensor (photomultiplier tube). The resulting signal needs to be digitally processed using periodic-averaging and deconvolution techniques to become readable. This attack is particularly feasible in dark environments, where light from the target CRT contributes a significant fraction of the overall illumination onto the observed surface. Flat-panel displays that update all pixels in a row simultaneously are immune from this attack [2].
• Some flat-panel displays can be eavesdropped via UHF radio, especially where a high-speed digital serial connection is used between the video controller and display. This is the case, for example, in many laptops and with modern graphics cards with a Digital Visual Interface (DVI) connector. To a first approximation, the signal picked up by an eavesdropping receiver from a Gbit/s serial video interface cable indicates the number of bit transitions in the data words that represent each pixel color. For example, text that is shown in foreground and background colors encoded by the serial data words 10101010 and 00000000, respectively, will be particularly readable via radio emanations [3].
• Data has been eavesdropped successfully from shielded RS-232 cables several meters away with simple AM shortwave radios [5]. Such serial-interface links use unbalanced transmission. Where one end lacks an independent earth connection, the cable forms the inductive part of a resonant circuit that works in conjunction with the capacitance between the device and earth. Each edge in the data signal feeds into this oscillator energy that is then emitted as a decaying high-frequency radio wave.
• Line drivers for data cables have data-dependent power consumption, which can affect the supply voltage slightly. This in turn can cause small variations in the frequency of nearby oscillator circuits. As a result, the electromagnetic waves generated by these oscillators broadcast frequency-modulated data, which can be picked up with FM radios [5].
• Where several cables share the same conduit, capacitive and inductive coupling occurs. This can result in crosstalk from one cable to the other, even where the cables run parallel for just a few meters. With a suitable amplifier, an external eavesdropper might discover that the high-pass filtered version of a signal from an internal data cable is readable, for example, on a telephone line that leaves the building.
• Devices with low-speed serial ports, such as analog telephone modems with RS-232
Compromising emanations 91


interface, commonly feature light-emitting diodes (LEDs) that are connected as status indicators directly to data lines. These emit the processed data optically, which can be picked up remotely with a telescope and photo sensor [4]. Such optical compromising emanations are invisible to the human eye, which cannot perceive flicker above about 50 Hz. Therefore, all optical data rates above 1 kbit/s appear as constant light.
• The sound of a keystroke can identify which key on a keyboard was used. Just as guitar strings and drums sound very different depending on where they are hit, the mix of harmonic frequencies produced by a resonating circuit board on which keys are mounted varies with the location of the keystroke. Standard machine-learning algorithms can be trained to distinguish, for a specific keyboard model, keys based on spectrograms of acoustic keystroke recordings [1].
• Smart cards are used to protect secret keys and intermediate results of cryptographic computations from unauthorized access, especially from the cardholder. Particular care is necessary in their design with regard to compromising emanations. Due to the small package, eavesdropping sensors can be placed very close to the microprocessor, to record, for example, supply-current fluctuations or magnetic fields that leak information about executed instructions and processed data. The restricted space available in an only 0.8 mm thick plastic card makes careful shielding and filtering difficult. See also smartcard tamper resistance.
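As noted in the flat-panel display item above, the signal picked up from a serial video cable roughly tracks the number of bit transitions in each transmitted data word. The following minimal sketch (ours, for illustration only; the two 8-bit example words are the ones quoted in the list, while real serial video links use longer encoded words) shows why the two example pixel values are easy to tell apart:

    def bit_transitions(word: int, width: int = 8) -> int:
        """Count 0->1 and 1->0 transitions inside one serial data word."""
        bits = [(word >> i) & 1 for i in reversed(range(width))]  # MSB first
        return sum(b1 != b2 for b1, b2 in zip(bits, bits[1:]))

    foreground = 0b10101010   # many transitions: strong, data-dependent emission
    background = 0b00000000   # no transitions:   weak emission

    print(bit_transitions(foreground))  # 7
    print(bit_transitions(background))  # 0

The larger the gap in transition counts between foreground and background words, the stronger the data-dependent contrast in the emitted signal.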
Video signals are a particularly dangerous type of compromising emanation due to their periodic nature. The refresh circuit in the video adapter transmits the display content continuously, repeated 60–90 times per second. Even though the leaked signal power typically measures only a few nanowatts, eavesdroppers can use digital signal-processing techniques to determine the exact repetition frequency, record a large number of frames, and average them to reduce noise from other radio sources. As frame and pixel frequencies differ by typically six orders of magnitude, the averaging process succeeds only if the frame rate has been determined correctly to at least seven digits of precision. This is far more accurate than the manufacturing tolerances of the crystal oscillators used in graphics adapters. An eavesdropper can therefore use periodic averaging to separate the signals from several nearby video displays, even if they use the same nominal refresh frequency. Directional antennas are another tool for separating images from several computers in a building.
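The periodic-averaging step described above can be sketched in a few lines (ours, for illustration; it assumes the receiver output has already been digitized and that the exact frame length in samples is known, which is precisely the hard part discussed in the text):

    import numpy as np

    def periodic_average(samples: np.ndarray, frame_len: int, frames: int) -> np.ndarray:
        """Average `frames` consecutive frames of `frame_len` samples each.

        Uncorrelated noise shrinks roughly as 1/sqrt(frames), while the
        repeating display content is preserved.
        """
        usable = samples[: frame_len * frames].reshape(frames, frame_len)
        return usable.mean(axis=0)

    # Toy demonstration: a repeating "video line" buried in noise.
    rng = np.random.default_rng(0)
    frame = np.tile(np.array([0.0, 1.0, 0.0, 0.0]), 25)        # 100-sample frame
    raw = np.tile(frame, 256) + rng.normal(0, 2.0, 100 * 256)  # 256 noisy repetitions
    clean = periodic_average(raw, frame_len=100, frames=256)

If the assumed frame length is off by even a few parts in ten million, successive frames drift against each other and the average blurs, which is why the repetition frequency must be determined to roughly seven significant digits.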
RF video-signal eavesdropping can be easily demonstrated with suitable equipment. Even in a noisy office environment and without directional antennas, reception across several rooms (5–20 meters) requires only moderate effort. Larger eavesdropping distances can be achieved in the quieter radio spectrum of a rural area or with the help of directional antennas. Eavesdropping of nonperiodic compromising signals from modern office equipment is usually only feasible where a sensor or accessible conductor (crosstalk) can be placed very close to the targeted device. Where an eavesdropper can arrange for special software to be installed on the targeted computer, this can be used to deliberately modulate many other emission sources with selected and periodically repeated data for easier reception, including system buses, transmission lines, and status indicators.
Compromising radio emanations are often broadband impulse signals that can be received at many different frequencies. Eavesdroppers tune their receivers to a quiet part of the spectrum, where the observed impulses can be detected with the best signal-to-noise ratio. The selected receiver bandwidth has to be small enough to suppress the powerful signals from broadcast stations on neighboring frequencies and large enough to keep the width of detected impulses short enough for the observed data rate.
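The bandwidth trade-off just described can be summarized with a rough, standard signal-processing estimate (ours, not taken from the entry): a receiver of bandwidth B stretches each received impulse to a duration of roughly 1/B, so impulses belonging to a data signal of bit rate r stay distinguishable only if

    \frac{1}{B} \;\lesssim\; \frac{1}{r}
    \quad\Longleftrightarrow\quad
    B \;\gtrsim\; r,

while B must still remain small enough to exclude strong broadcast transmitters on neighboring frequencies.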
Electromagnetic and acoustic compromising emanations have been a concern to military organizations since the 1960s. Secret protection standards (TEMPEST) have been developed. They define how equipment used to process critical secret information must be shielded, tested, and maintained. Civilian radio-emission limits for computers, such as the CISPR 22 and FCC Class B regulations, are only designed to help avoid interference with radio broadcast services at distances of more than 10 meters. They do not forbid the emission of compromising signals that could be picked up at a quiet site by a determined receiver with directional antennas and careful signal processing several hundred meters away. Protection standards against compromising radio emanations therefore have to set limits for the allowed emission power about a million times (60 dB) lower than civilian radio-interference regulations. Jamming is an alternative form of eavesdropping protection, but it is not preferred in military applications, where keeping the location of equipment secret is an additional requirement.

Markus Kuhn


References

[1] Asonov, Dmitri and Rakesh Agrawal (2004). "Keyboard acoustic emanations." Proceedings 2004 IEEE Symposium on Security and Privacy, Oakland, California, May 9–12, 2004. IEEE Computer Society Press, Los Alamitos, CA.
[2] Kuhn, Markus G. (2002). "Optical time-domain eavesdropping risks of CRT displays." Proceedings 2002 IEEE Symposium on Security and Privacy, Oakland, California, May 12–15, 2002. IEEE Computer Society Press, Los Alamitos, 3–18, ISBN 0-7695-1543-6.
[3] Kuhn, Markus G. (2003). "Compromising emanations: Eavesdropping risks of computer displays." Technical Report UCAM-CL-TR-577, University of Cambridge, Computer Laboratory.
[4] Loughry, Joe and David A. Umphress (2002). "Information leakage from optical emanations." ACM Transactions on Information and System Security, 5 (3), 262–289.
[5] Smulders, Peter (1990). "The threat of information theft by reception of electromagnetic radiation from RS-232 cables." Computers & Security, 9, 53–58.
[6] 1991 Symposium on Electromagnetic Security for Information Protection (SEPI'91), Proceedings, Rome, Italy, November 21–22, 1991. Fondazione Ugo Bordoni.
[7] van Eck, Wim (1985). "Electromagnetic radiation from video display units: An eavesdropping risk?" Computers & Security, 4, 269–286.

COMPUTATIONAL COMPLEXITY

Computational complexity theory is the study of the minimal resources needed to solve computational problems. In particular, it aims to distinguish between those problems that possess efficient algorithms (the "easy" problems) and those that are inherently intractable (the "hard" problems). Thus computational complexity provides a foundation for most of modern cryptography, where the aim is to design cryptosystems that are "easy to use" but "hard to break." (See security.)

RUNNING TIME: The most basic resource studied in computational complexity is running time: the number of basic "steps" taken by an algorithm. (Other resources, such as space (i.e., memory usage), are also studied, but we will not discuss them here.) To make this precise, one needs to fix a model of computation (such as the Turing machine), but here we will informally think of it as the number of "bit operations" when the input is given as a string of 0's and 1's. Typically, the running time is measured as a function of the input length. For numerical problems, we assume the input is represented in binary, so the length of an integer N is roughly log_2 N. For example, the elementary-school method for adding two n-bit numbers has running time proportional to n. (For each bit of the output, we add the corresponding input bits plus the carry.) More succinctly, we say that addition can be solved in time "order n," denoted O(n) (see O-notation). The elementary-school multiplication algorithm, on the other hand, can be seen to have running time O(n^2). In these examples (and in much of complexity theory), the running time is measured in the worst case. That is, we measure the maximum running time over all inputs of length n.
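The O(n) behavior of elementary-school addition can be illustrated with a short sketch (ours, not part of the entry; the function name and bit-list representation are chosen purely for illustration). The loop performs a constant number of bit operations per position, so the total work is proportional to the number of bits n:

    def add_bits(a, b):
        """Add two n-bit numbers given as bit lists, least significant bit first.

        One loop iteration per bit position, i.e. O(n) bit operations,
        matching the elementary-school method described in the text.
        """
        result, carry = [], 0
        for x, y in zip(a, b):
            s = x + y + carry
            result.append(s & 1)
            carry = s >> 1
        result.append(carry)
        return result

    # 5 + 3 = 8 with n = 4 bits (LSB first):
    print(add_bits([1, 0, 1, 0], [1, 1, 0, 0]))  # [0, 0, 0, 1, 0], i.e. 8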
POLYNOMIAL TIME: Both the addition and multiplication algorithms are considered to be efficient, because their running time grows only mildly with the input length. More generally, polynomial time (running time O(n^c) for a constant c) is typically adopted as the criterion of efficiency in computational complexity. The class of all computational problems possessing polynomial-time algorithms is denoted P.1 Thus ADDITION and MULTIPLICATION are in P, and more generally we think of P as identifying the "easy" computational problems. Even though not all polynomial-time algorithms are fast in practice, this criterion has the advantage of robustness: the class P seems to be independent of changes in computing technology. P is an example of a complexity class, that is, a class of computational problems defined via some algorithmic constraint, which in this case is "polynomial time."

1 Typically, P is defined as a class of decision problems (i.e., problems with a yes/no answer), but here we make no such restriction.

In contrast, algorithms that do not run in polynomial time are considered infeasible. For example, consider the trial division algorithms for integer factoring or primality testing (see prime number). For an n-bit number, trial division can take time up to 2^(n/2), which is exponential time rather than polynomial time in n. Thus, even for moderate values of n (e.g., n = 200) trial division of n-bit numbers is completely infeasible for present-day computers, whereas addition and multiplication can be done in a fraction of a second. Computational complexity, however, is not concerned with the efficiency of a particular algorithm (such as trial division), but rather whether a problem has any efficient algorithm at all. Indeed,
for primality testing, there are polynomial-time algorithms known (see prime number), so PRIMALITY is in P. For integer factoring, on the other hand, the fastest known algorithm has running time greater than 2^(n^(1/3)), which is far from polynomial. Indeed, it is believed that FACTORING is not in P; the RSA and Rabin cryptosystems (see RSA public-key encryption, RSA digital signature scheme, Rabin cryptosystem, Rabin signature scheme) rely on this conjecture. One of the ultimate goals of computational complexity is to rigorously prove such lower bounds, i.e., establish theorems stating that there is no polynomial-time algorithm for a given problem. (Unfortunately, to date, such theorems have been elusive, so cryptography continues to rest on conjectures, albeit widely believed ones. More on this is given below.)
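The trial-division example above can be made concrete with a minimal sketch (ours, for illustration only). The loop may run about sqrt(N), i.e. roughly 2^(n/2), times for an n-bit input N, which is why the text calls it exponential in n:

    def trial_division(N):
        """Return the smallest nontrivial factor of N, or None if N is prime.

        The loop runs up to sqrt(N) ~ 2^(n/2) times for an n-bit N: fine for
        tiny numbers, hopeless for cryptographic sizes (e.g. n = 1024).
        """
        d = 2
        while d * d <= N:
            if N % d == 0:
                return d
            d += 1
        return None

    print(trial_division(91))   # 7, since 91 = 7 * 13
    print(trial_division(97))   # None, 97 is prime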
tempts to relate the wide variety of complicated
POLYNOMIAL SECURITY: Given the above associ- and subtle computational problems arising in
ation of “polynomial time” with feasible computa- cryptography (forging a signature, computing
tion, the general goal of cryptography becomes to partial information about an encrypted message,
construct cryptographic protocols that have poly- etc.) to a few, simply stated assumptions about the
nomial ef¬ciency (i.e., can be executed in poly- complexity of various computational problems.
nomial time) but super-polynomial security (i.e., For example, under the assumption that there is
cannot be broken in polynomial time). This guar- no polynomial-time algorithm for FACTORING (that
antees that, for a suf¬ciently large setting of the succeeds on a signi¬cant fraction of composites
of the form n = pq), it has been demonstrated
security parameter (which roughly corresponds to
the input length in complexity theory), “breaking” (through a large body of research) that it is
the protocol takes much more time than using the possible to construct algorithms for almost all
protocol. This is referred to as asymptotic security. cryptographic tasks of interest (e.g., asymmetric
While polynomial time and asymptotic security cryptosystems, digital signature schemes, secure
are very useful for the theoretical development of multiparty computation, etc.). However, since the
the subject, more re¬ned measures are needed to assumption that FACTORING is not in P is only a
evaluate real-life implementations. Speci¬cally, conjecture and could very well turn out to be false,
one needs to consider the complexity of using and it is not desirable to have all of modern cryptog-
breaking the system for ¬xed values of the input raphy to rest on this single assumption. Thus
length, e.g., n = 1000, in terms of the actual time another major goal of complexity-based cryptogra-
(e.g., in seconds) taken on current technology (as phy is to abstract the properties of computational
opposed to the “basic steps” taken on an abstract problems that enable us to build cryptographic
model of computation). Efforts in this direction protocols from them. This way, even if one problem
are referred to as concrete security. Almost all turns out to be in P, any other problem satisfying
results in computational complexity and cryp- those properties can be used without changing
tography, while usually stated asymptotically, any of the theory. In other words, the aim is to
can be interpreted in concrete terms. However, base cryptography on assumptions that are as
they are often not optimized for concrete security weak and general as possible.
(where even constant factors hidden in O-notation Modern cryptography has had tremendous suc-
are important). cess with this reductionist approach. Indeed, it
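The security notions mentioned above can be summarized in standard asymptotic notation (a compact restatement of ours, not a verbatim definition from this entry), writing T(n) for the running time of the best attack at security parameter n:

    \text{polynomial time:}\quad T(n) = O(n^{c})\ \text{for some constant } c;
    \text{super-polynomial security:}\quad T(n)\ \text{exceeds } n^{c}\ \text{for every constant } c;
    \text{exponential security:}\quad T(n) \ge 2^{\varepsilon n}\ \text{for some constant } \varepsilon > 0.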
COMPLEXITY-BASED CRYPTOGRAPHY: As described above, a major aim of complexity theory is to identify problems that cannot be solved in polynomial time, and a major aim of cryptography is to construct protocols that cannot be broken in polynomial time. These two goals are clearly well-matched. However, since proving lower bounds (at least for the kinds of problems arising in cryptography) seems beyond the reach of current techniques in complexity theory, an alternative approach is needed.
Present-day complexity-based cryptography therefore takes a reductionist approach: it attempts to relate the wide variety of complicated and subtle computational problems arising in cryptography (forging a signature, computing partial information about an encrypted message, etc.) to a few, simply stated assumptions about the complexity of various computational problems. For example, under the assumption that there is no polynomial-time algorithm for FACTORING (that succeeds on a significant fraction of composites of the form n = pq), it has been demonstrated (through a large body of research) that it is possible to construct algorithms for almost all cryptographic tasks of interest (e.g., asymmetric cryptosystems, digital signature schemes, secure multiparty computation, etc.). However, since the assumption that FACTORING is not in P is only a conjecture and could very well turn out to be false, it is not desirable to have all of modern cryptography rest on this single assumption. Thus another major goal of complexity-based cryptography is to abstract the properties of computational problems that enable us to build cryptographic protocols from them. This way, even if one problem turns out to be in P, any other problem satisfying those properties can be used without changing any of the theory. In other words, the aim is to base cryptography on assumptions that are as weak and general as possible.
Modern cryptography has had tremendous success with this reductionist approach. Indeed, it is now known how to base almost all basic cryptographic tasks on a few simple and general complexity assumptions (that do not rely on the
intractability of a single computational problem, but may be realized by any of several candidate problems). Among other things, the text below discusses the notion of a reduction from complexity theory that is central to this reductionist approach, and the types of general assumptions, such as the existence of one-way functions, on which cryptography can be based.

REDUCTIONS: One of the most important notions in computational complexity, which has been inherited by cryptography, is that of a reduction between computational problems. We say that problem Π reduces to problem Γ if Π can be solved in polynomial time given access to an "oracle" that solves Γ (i.e., a hypothetical black box that will solve Γ on instances of our choosing in a single time step). Intuitively, this captures the idea that problem Π is no harder than problem Γ. For a simple example, let us show that PRIMALITY reduces to FACTORING.3 Suppose we have an oracle that, when fed any integer, returns its prime factorization in one time step. Then we could solve PRIMALITY in polynomial time as follows: on input N, feed the oracle with N, output "prime" if the only factor returned by the oracle is N itself, and output "composite" otherwise.

3 Of course, this reduction is redundant given that PRIMALITY is in P, but suppose for a moment that we did not know this.
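The reduction just described can be written out in a few lines (a sketch of ours, not from the entry; `factoring_oracle` stands for the hypothetical black box, and the toy oracle below exists only so that the example runs):

    def primality_via_factoring_oracle(N, factoring_oracle):
        """Decide PRIMALITY given an oracle for FACTORING, as in the reduction above.

        `factoring_oracle(N)` plays the role of the black box: it returns the
        list of prime factors of N (with multiplicity) in a single time step.
        """
        factors = factoring_oracle(N)
        return "prime" if factors == [N] else "composite"

    # Toy stand-in oracle, for illustration only (no efficient real one is known):
    def toy_oracle(N):
        factors, d = [], 2
        while d * d <= N:
            while N % d == 0:
                factors.append(d)
                N //= d
            d += 1
        if N > 1:
            factors.append(N)
        return factors

    print(primality_via_factoring_oracle(13, toy_oracle))   # prime
    print(primality_via_factoring_oracle(15, toy_oracle))   # composite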
It is easy to see that if problem Π reduces to problem Γ, and Γ ∈ P, then Π ∈ P: if we substitute the oracle queries with the actual polynomial-time algorithm for Γ, we obtain a polynomial-time algorithm for Π. Turning this around, Π ∉ P implies that Γ ∉ P. Thus, reductions provide a way to use an assumption that one problem is intractable to deduce that other problems are intractable. Much work in cryptography is based on this paradigm: for example, one may take a complexity assumption such as "there is no polynomial-time algorithm for FACTORING" and use reductions to deduce statements such as "there is no polynomial-time algorithm for breaking encryption scheme X." (As discussed later, for cryptography, the formalizations of such statements and the notions of reduction are more involved than suggested here.)

NP: Another important complexity class is NP. Roughly speaking, this is the class of all computational problems for which solutions can be verified in polynomial time.4 For example, given that PRIMALITY is in P, we can easily see that FACTORING is in NP: to verify that a supposed prime factorization of a number N is correct, we can simply test each of the factors for primality and check that their product equals N. NP can be thought of as the class of "well-posed" search problems: it is not reasonable to search for something unless you can recognize when you have found it. Given this natural definition, it is not surprising that the class NP has taken on a fundamental position in computer science.

4 NP stands for nondeterministic polynomial time. Like P, NP is typically defined as a class of decision problems, but again that constraint is not essential for our informal discussion.
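The verification procedure for FACTORING described above can be made concrete as follows (our illustration, not from the entry; the trial-division `is_prime` helper merely stands in for a genuine polynomial-time primality test, see the entry on prime number):

    def is_prime(p):
        # Stand-in only: a real verifier would use a polynomial-time primality test.
        if p < 2:
            return False
        d = 2
        while d * d <= p:
            if p % d == 0:
                return False
            d += 1
        return True

    def verify_factorization(N, claimed_factors):
        """NP-style verification: accept iff every claimed factor is prime
        and the product of the claimed factors equals N."""
        product = 1
        for p in claimed_factors:
            if not is_prime(p):
                return False
            product *= p
        return product == N

    print(verify_factorization(15, [3, 5]))   # True
    print(verify_factorization(15, [3, 4]))   # False: 4 is not prime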
It is evident that P ⊆ NP, but whether or not P = NP is considered to be one of the most important open problems in mathematics and computer science.5 It is widely believed that P ≠ NP; indeed, we have seen that FACTORING is one candidate for a problem in NP \ P. In addition to FACTORING, NP contains many other computational problems of great importance, from many disciplines, for which no polynomial-time algorithms are known.

5 The significance of P versus NP in mathematics comes from the fact that it is equivalent to asking whether we can find short mathematical proofs efficiently.

The significance of NP as a complexity class is due in part to the NP-complete problems. A computational problem Γ is said to be NP-complete if Γ ∈ NP and every problem in NP reduces to Γ. Thus the NP-complete problems are the "hardest" problems in NP, and are the ones most likely to be intractable. (Indeed, if even a single problem in NP is not in P, then all the NP-complete problems are not in P.) Remarkably, thousands of natural computational problems have been shown to be NP-complete. (See [1].) Thus, it is an appealing possibility to build cryptosystems out of NP-complete problems, but unfortunately, NP-completeness does not seem sufficient for cryptographic purposes (as discussed later).

RANDOMIZED ALGORITHMS: Throughout cryptography, it is assumed that parties have the ability to make random choices; indeed, this is how one models the notion of a secret key. Thus, it is natural to allow not just algorithms whose computation proceeds deterministically (as in the definition of P), but also to consider randomized algorithms, that is, ones that may make random choices in their computation. (Such algorithms are designed to be implemented with a physical source of randomness; see random bit generation (hardware).)
Such a randomized (or probabilistic) algorithm A is said to solve a given computational problem if on every input x, the algorithm outputs the correct answer with high probability (over its random
choices). The error probability of such a randomized algorithm can be made arbitrarily small by running the algorithm many times. For examples of randomized algorithms, see the probabilistic primality tests in the entry on prime number. The class of computational problems having polynomial-time randomized algorithms is denoted BPP.6 A widely believed strengthening of the P ≠ NP conjecture is that NP ⊄ BPP.
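The standard amplification argument behind "running the algorithm many times" can be sketched as follows (our illustration; the `noisy_is_even` decider and its 0.75 success probability are invented purely for the demonstration):

    import random

    def amplify(randomized_decider, x, repetitions=101):
        """Run a yes/no randomized algorithm many times and take a majority vote.

        If each run is correct with probability noticeably above 1/2, the
        probability that the majority vote is wrong drops exponentially in
        the number of repetitions (standard Chernoff-bound reasoning).
        """
        votes = sum(bool(randomized_decider(x)) for _ in range(repetitions))
        return votes > repetitions // 2

    def noisy_is_even(x):
        correct = (x % 2 == 0)
        return correct if random.random() < 0.75 else not correct

    print(amplify(noisy_is_even, 42))   # True with overwhelming probability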
… of the problem "find a preimage of y," by selecting x at random and setting y = f(x). (Note that we actually generate a hard instance together with a solution; this is another aspect in which one-way functions are stronger than what follows from P ≠ NP.) To formalize the definition, we need the concept of a negligible function. A function ε: N → [0, 1] is negligible if for every constant c, there is an n0 such that ε(n) ≤ 1/n^c for all n ≥ n0. That is, ε vanishes faster than any polynomial. Then we
