Chapter 9

Applications and the Future

The distinction between past, present and future is only an illusion, however
persistent.
Albert Einstein (1879–1955)
German-born theoretical physicist and discoverer of the theory of relativity



9.1 Secrecy and Authentication

Suppose that two adversarial countries enter into an agreement to terminate
all underground nuclear weapons testing. Obviously, each wants to verify that
the other is complying and not surreptitiously engaging in such testing. In
this case, there is no need for secrecy, only guaranteed authentication, called
authentication without secrecy. What is being sought is authentication without
covert channels (see Footnote 9.1), with the goal of verifying treaty compliance. This was indeed
proposed in the late 1970s and early 1980s by Simmons (see Footnote 9.2) [209]–[211] as a means
9.1 A covert channel is not uniformly defined throughout the literature. We will use Lampson's
definition [130]: any communication pathway that was neither designed nor intended to
transfer information. Hence, only untrustworthy adversaries would find and use them.
9.2 Gustavus J. Simmons was born on October 27, 1930 in Ansted, West Virginia. He received
his B.Sc. in mathematics from New Mexico Highlands University, Las Vegas, in 1955,
his M.Sc. from the University of Oklahoma at Norman in 1958, and his Ph.D. in 1968 from
the University of New Mexico, Albuquerque. He worked for Sandia National Laboratories,
from which he retired in 1993 as the Director for National Security Studies. His work at Sandia
was principally concerned with integrity and authentication issues surrounding national
security, especially those involving command and control of nuclear weapons. His wide range
of research interests includes a core focus on combinatorics and graph theory, with applications
to cryptography. In 1986, he was honoured with the U.S. Government's E.O. Lawrence
Award, as well as the Department of Energy Weapons Recognition of Excellence Award for
"Contributions to the Command and Control of Nuclear Weapons" in the same year. In 1996
he was made an honorary Lifetime Fellow of the Institute of Combinatorics and Its Applications.
Dr. Simmons has not only made such contributions, but also has a warm, collegial,
and humanitarian disposition with a humorous foil, which allows him (as he puts it in [221])
"to pillory the National Security Agency, one of my favorite pastimes."



for the U.S.A. and the former U.S.S.R., say, to ban such testing and have a treaty
in place to verify compliance using public-key cryptography.
◆ Nuclear Test Ban Treaty Compliance
To detect an underground nuclear test, seismic sensors can be placed in each
country in probable sites where such testing would occur, essentially creating
a family of seismic nets. However, each country has to have assurance that
the other country will not be able to cheat, so there must be confidence in the
scheme through authentication, and the assurance of tamper-resistant seismic
sensors. The latter can be accomplished by the host (meaning the host country
which is allowing the other side to put sensors in their territory) creating their
own tamper-proof sensors (see Footnote 9.3). Suppose the treaty specifies that each host must
allow monitors from the other country who can engage in on-site inspections
within their borders, and suppose that the United Nations (UN) is involved
in the treaty as an arbitrator.
The authentication equipment, which we will call HAL, in the downhole
seismic device, generates primes p and q, and forms n = pq, which we assume is
in compliance with the security issues in Chapter 6. HAL selects an RSA enciphering
exponent e, kept secret from all parties, including the monitor, whom we'll call
Monty, and the host, which we'll call Hostvania. HAL also computes d from p, q, and
e. Only n and d are made public. Also, HAL collects data m, then computes
c ≡ m^e (mod n). Hostvania first receives (m, c) from HAL's sensors in response
to Monty's request (challenge). They calculate m ≡ c^d (mod n) and verify that
this copy of m matches their copy (bit for bit). Then Hostvania is convinced
that there is no hidden information in the cipher, so they send (m, c) to Monty
and the UN, who compute m ≡ c^d (mod n) independently, and verify that the
two copies of m match. (Compare this scheme with the RSA signature scheme
on page 135.) Here is what this scheme ensures:
(1) Since no entity knows e, then by the RSA conjecture, they cannot forge
messages that would be accepted as authentic (see Exercise 9.1).
(2) Since n and d are public, Monty, the UN, and Hostvania can independently
verify the authenticity of messages.
(3) Since e is kept secret from all entities, no unilateral actions are possible
by any party that would be capable of lessening the confidence in the
authentication of the message.
(4) No part of the message is concealed from Hostvania, the UN, or Monty.
Hence, the above is an application of authenticity without secrecy using
public-key cryptography, and each of the entities can try to cheat as much
as they wish without compromising the system.
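To make the mechanics concrete, here is a minimal Python sketch of the idea under toy assumptions (16-bit primes and a contrived data value m; the names and key sizes are illustrative, not Simmons' fielded design). HAL keeps e secret and publishes (n, d), so every party can check the authenticator c but nobody can create a new valid one:

    import random
    from math import gcd

    def rand_prime(bits: int) -> int:
        # naive trial-division prime generator: fine for toy sizes only
        while True:
            p = random.getrandbits(bits) | (1 << (bits - 1)) | 1
            if all(p % d for d in range(3, int(p ** 0.5) + 1, 2)):
                return p

    # HAL generates p and q and keeps the enciphering exponent e secret;
    # only n and d are made public.
    p, q = rand_prime(16), rand_prime(16)
    while q == p:
        q = rand_prime(16)
    n, phi = p * q, (p - 1) * (q - 1)
    while True:
        e = random.randrange(3, phi, 2)
        if gcd(e, phi) == 1:
            break
    d = pow(e, -1, phi)

    m = 123456789 % n                 # data m collected by HAL's sensors
    c = pow(m, e, n)                  # HAL computes c = m^e (mod n)

    # Hostvania, Monty, and the UN each check independently with the public (n, d):
    print(pow(c, d, n) == m)          # m = c^d (mod n), bit for bit

Only the device holding e can produce a pair (m, c) that passes this check, which is exactly property (1) above.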
As noted by Simmons at the end of [210], the above scheme has a direct
analogue for communication between international banks each having branches
in the foreign host country (see Section 7.3).
9.3 This is essentially what the NSA asked Sandia to do in the late 1970s (see [221] and
Footnote 9.2).




In 1984, Simmons [213] discovered a problem with the above scheme. Although
a built-in feature of the scheme is that it does not allow for a covert
channel to be built into the message (since a process is in place for Hostvania
to verify this), HAL could still be used to hide (what Simmons calls) a subliminal
channel. What this means is that a channel can be implanted so that
Hostvania could not detect the use of the covert channel and could not read the
hidden part. In particular, as noted by Simmons [219] in 1994 (with reference
to the Second Strategic Arms Limitation Treaty (SALT II) between the former
U.S.S.R. and the U.S.A.), the subliminal channel could be used to reveal to the
other country which of those silos in the host country were loaded with missiles
and which were empty. The country in possession of this knowledge would be
able to successfully launch a first strike! In the early 1990s, Simmons [216]–[217]
came up with a proposed solution to the problem (see also [215], [218], [220]).
However, in 1996, Desmedt [70] provided a counterexample to this claim, and
demonstrated how several other protocols in the literature are susceptible to
this problem. This was addressed by Simmons [222] in 1998. The actual details,
including the very definition of a subliminal-channel-free protocol, are beyond
the scope of this text. For details consult [70]–[71], as well as the aforementioned
papers by Simmons.
Now we look at another public-key cryptographic application to authenticity
without secrecy. In Chapter 8, we discussed key management issues, including
PKI itself in Section 8.3. Among the PKI issues that we considered was the
use of certificates, with examples such as PGP (see page 174) for the sending of
secure e-mail messages. There is another e-mail specification that is sometimes
used in conjunction with PGP (see [81]), called Multipurpose Internet Mail Extensions
(MIME), developed by the IETF (see Footnote 8.8) to attach nontextual data,
including graphics, in e-mail messages. However, there is no security attached
to MIME. In 1995, a group of vendors, led by RSA Data Security
Inc., enhanced the specifications and, in concert with the IETF, created S/MIME,
or Secure MIME. The current version is S/MIME 3 (see [109] and [193]). The
S/MIME Version 3 specifications include PKI concepts such as certificate processing
and CRLs (see Footnote 8.11). This includes the use of X.509 certificates
(see page 174) with extensions specific to S/MIME (see [110]).
The IETF S/MIME working group has developed many such enhancements.
The following is a scheme that allows S/MIME to be used for authentication
without secrecy via digital-signature-only algorithms, such as DSA. We describe it
below for public-key enciphering algorithms.
◆ S/MIME Authentication Protocol -- Without Secrecy
Alice wants to prove to Bob that she is the sender of a message, m, by
exchanging e-mail messages. Their e-mail programs both use a hashing algorithm
h and a public-key enciphering algorithm P.
Protocol Steps:

(1) Alice's e-mail program creates a message digest from m using h, to create
h(m), then uses her private key dA from P to encipher and get dA(h(m)).




(2) Her e-mail program sends an S/MIME e-mail message,

M = (m, dA(h(m)), C(A))

to Bob, where C(A) is Alice's X.509 certificate.
(3) Bob's e-mail program verifies C(A) and obtains Alice's public key eA from
it, to decipher and obtain h(m).
(4) Bob's e-mail program independently computes h(m) from m and compares
the two copies. If they are both the same, Bob is convinced of Alice's
identity, since the message cannot have been subject to tampering, given
that Alice's private key dA is secure.
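A minimal Python sketch of this signed-only flow follows, assuming a toy RSA pair standing in for (eA, dA) and SHA-1 standing in for h; real S/MIME wraps these values in CMS structures and sends Alice's certificate C(A) alongside, which is omitted here:

    import hashlib
    from math import gcd

    # toy RSA pair standing in for Alice's (eA, dA); real keys are far larger
    p, q = 65521, 32749
    n, phi = p * q, (p - 1) * (q - 1)
    eA = 65537
    assert gcd(eA, phi) == 1
    dA = pow(eA, -1, phi)

    def h(m: bytes) -> int:
        # message digest, reduced mod n so it fits the toy modulus
        return int.from_bytes(hashlib.sha1(m).digest(), "big") % n

    m = b"meeting moved to 3pm"
    signature = pow(h(m), dA, n)          # step (1): dA(h(m))
    message = (m, signature)              # step (2): sent along with C(A)

    # steps (3)-(4): Bob recovers the digest with eA and compares
    recovered = pow(signature, eA, n)
    print(recovered == h(m))              # True: Alice's identity is confirmed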

The recommended hash functions for the above protocol are either MD5
or SHA-1. The MD5 algorithm developed from an initial version called MD,
created by Rivest (see Footnote 3.2 on page 61), with the fifth version, developed
in 1991, being the strongest. Simply put, it takes an input of arbitrary length
and outputs a 128-bit message digest (see [156] for details of the algorithm).
Rivest also played a hand in developing the Secure Hash Algorithm SHA-1,
which produces a digest of 160 bits, but there is a variant that produces 512-bit
digests, called SHA-512. This is part of the Draft FIPS 180-2, announced by
NIST on May 30, 2001 (see page 139). There are also the proposed variants
SHA-256, which produces 256-bit digests, and SHA-384, which produces 384-bit
digests. Hence, the strength of the SHA-1 algorithm gives it a higher profile
in the cryptographic community. Also, see [156] for a detailed description of
SHA-1.
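The digest lengths just cited can be checked directly with Python's hashlib module, which implements all of the functions mentioned:

    import hashlib

    msg = b"an input of arbitrary length"
    for name in ("md5", "sha1", "sha256", "sha384", "sha512"):
        digest = hashlib.new(name, msg).digest()
        print(name, 8 * len(digest), "bits")
    # md5 128, sha1 160, sha256 256, sha384 384, sha512 512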
Such signed-only data, as in the above protocol, may contain any number
of certificates and CRLs to help Bob's e-mail program with the job of path
construction and certificate verification.
S/MIME is also a good candidate for illustrating authenticity with secrecy.
In order to accomplish this, we first need the following protocol.
◆ S/MIME Secrecy Protocol -- Without Authentication
Alice wants to send a message M to Bob that no other entity can read.
The e-mail programs used by Alice and Bob share a common symmetric-key
enciphering algorithm E (typically DES or triple DES -- see page 22), and a
public-key cryptosystem from which Bob has the public/private key pair (eB, dB).
Protocol Steps:
(1) Alice's e-mail program generates a random key k, called the session key
(valid only for this e-mail transaction).
(2) Her e-mail program computes Ek(M), then calculates eB(k), and sends the
S/MIME e-mail message

C = (Ek(M), eB(k), C(A))

to Bob, where C(A) is as in the preceding protocol.



(3) Bob's e-mail program uses dB to obtain k, which is used to get M.

The above employs a technique known as enveloping data (see page 75). It
provides secrecy but not authentication. To get both we need the following.
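The following Python sketch illustrates the enveloping technique under stated assumptions: a SHA-256 counter-mode keystream stands in for DES/triple DES, and a toy RSA pair stands in for Bob's (eB, dB); neither is how a production S/MIME agent would implement E or P.

    import hashlib, os

    def stream_xor(key: bytes, data: bytes) -> bytes:
        # counter-mode keystream from SHA-256: a stand-in for the symmetric cipher Ek
        out, ctr = bytearray(), 0
        while len(out) < len(data):
            out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
            ctr += 1
        return bytes(b ^ s for b, s in zip(data, out))

    # Bob's toy RSA pair (eB, dB)
    p, q = 65521, 32749
    n, phi = p * q, (p - 1) * (q - 1)
    eB = 65537
    dB = pow(eB, -1, phi)

    M = b"the plaintext e-mail body"
    k = os.urandom(16)                              # step (1): session key
    Ek_M = stream_xor(k, M)                         # step (2): Ek(M)
    wrapped = [pow(byte, eB, n) for byte in k]      # eB(k), byte by byte (toy only)

    # step (3): Bob unwraps k with dB, then recovers M
    k_recovered = bytes(pow(c, dB, n) for c in wrapped)
    print(stream_xor(k_recovered, Ek_M) == M)       # True

An eavesdropper sees only Ek(M) and eB(k); without dB the session key, and hence M, stays hidden.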
◆ S/MIME Authentication and Secrecy Protocol
If Alice and Bob wish to send a message that is both secret and authenticated,
they perform what is called a nesting of protocols. This means that they
first put the message m through steps (1)-(2) of the authentication protocol to
output M. Then M is made secret by putting it through steps (1)-(2) of the
secrecy protocol, which produces C. When Bob receives C, he first uses dB to
get M, then performs steps (3)-(4) of the authentication protocol (on M) to
verify both m and Alice's identity.
The above protocol provides both the desired secrecy and authenticity. The
nesting described therein can also be accomplished in the reverse order. In
other words, the message can first be made secret, then the secret output can
be authenticated. The security risks of either approach are described in [128]
and [192].
We briefly mentioned the Secure Sockets Layer (SSL) in Footnote 3.10 on
page 67. SSL is an Internet protocol that provides authenticity and secrecy for
session-based communication. It provides a secure channel on the client/server
model using a secret sharing scheme. The security model of SSL is that it
secures the channel by enciphering the bits that go through that channel. As
mentioned in Footnote 3.10, SSL originated with Netscape in 1994.
In 1996, Netscape handed over the specifications of SSL to the IETF, who worked
to standardize the SSL version 3 model, which had been released in 1995. The
IETF section responsible for working on the SSL project is called the Transport
Layer Security (TLS) working group. In 1999, they released TLS version 1,
which has now become the IETF standards-track variant of the SSL version 3
protocol (see [72]). For details of the mechanisms behind SSL, see [49].
Although well studied and widely used, the SSL/TLS mechanism is sometimes
considered unsuitable for use on the WWW due to the lack of a central
PKI, which can contribute to the proliferation of weak authentication schemes.
The reason is that each browser must trust a large number of
CA root keys that are embedded within it. Most of these keys belong to commercial,
third-party CAs, such as Verisign, which may have different constraints on
issuing certificates than are required by any particular application or security domain.
On the other hand, the cryptographic power of SSL/TLS is that it operates at
the transport level, so HTTP runs on top of SSL; the combination is called HTTPS.
We now look at another Internet authentication and secrecy scheme. Suppose
that Alice wants to access a service provider, whom we will call QZZ.com.
How does QZZ.com authenticate Alice? One mechanism is to use a cookie,
which is an HTTP header embodying a textual string of data that is put into a
browser's memory. The textual string must have within it not only the cookie's
name and value, but also the domain name (see Footnote 9.4), path (see Footnote 9.5),
lifetime (see Footnote 9.6), and a field for a "secure" label (see Footnote 9.7).
Cookies (see Footnote 9.8) are required to keep track of visitors to the
given website, such as Alice to QZZ.com. Alice first must be authenticated by
some strong authentication scheme. Once she successfully logs in, QZZ.com
gives her two cookies, L and T. The T cookie is a timestamp together with a
MAC (see Footnote 7.3 on page 138). The L and H cookies will be encrypted
using a symmetric-key enciphering algorithm in order to keep any sensitive
information confidential. The L cookie will be accepted for a low-level security
request such as her homepage on QZZ.com. However, if she asks for her e-mail,
for instance, then the L cookie will embed her user identification and
some other pertinent data unique to her. If she wants to buy something from
QZZ.com using her credit card, then another cookie, H, is needed for high-level
security. The H cookie will only operate in SSL mode, and requires a secure
password from Alice. Internally, QZZ.com uses separate systems of computers
to authenticate the transactions. Cookies also embed demographics and other
information pertinent to the marketing that QZZ.com wants to accomplish.
Although the above QZZ.com is contrived, it represents some aspects of
existing authentication and security schemes for e-commerce.

Exercises

9.1. Suppose that instead of the nuclear test ban treaty verification scheme we
have discussed in this section, we have the following. Monty picks p, q, e,
and securely downloads n = pq and e into HAL. Then Hostvania is given n
and d, which Monty also has. Show how Monty can generate undetectable
forgeries, and how this could compromise the scheme if the UN, say, is an
arbitrator.
9.4 This is the textual equivalent of the numerical IP address (see Footnote 8.3 on page 171)
of a given resource on the Internet.
9.5 This is a specification of the subset of URLs in a domain for which a cookie is valid. A
URL is a Uniform Resource Locator, which is the global address associated with given data.
The first part of the URL indicates which protocol to employ, and the second part indicates
the domain name. For instance, http://www.math.ucalgary.ca/~ramollin/ indicates that this
is a WWW page and the HTTP protocol should be used. The second part is the domain
name where my homepage is located.
9.6 This is a date string specifying the valid lifetime of the cookie. Once expired, the cookie
is discarded, and is rejected as invalid if there is an attempt to use it again.
9.7 If the cookie is marked secure, it will be transmitted only if the communication channel
with the host is secure. Usually this means over an HTTPS server.
9.8 The term "cookie" ostensibly comes from the computer science term for "an opaque
portion of data held by an intermediary," according to Paul Bonner [42]. Cookies are needed
since, without them, HTTP cannot differentiate among the visits to a website by a given
entity. The cookie, in a sense, "stamps" the user by embedding unique data in the user's
browser. To be able to state the reason for this, we need to define another concept. The state
of a protocol may be viewed as a description of the protocol at some point in time. Thus, a
stateless protocol (sometimes called non-persistent) is one that is incapable of distinguishing
its description from one time to another. HTTP is a stateless protocol (see Footnote 8.5 on
page 171). It needs a cookie to mark the visits in order to distinguish them one from the other.
By contrast, SSL has two states, one for the client side of the protocol and the other for the
server side. In the case of SSL, the interaction between the two states is called a handshake
(see [49]).




9.2 Other Threats to System Security
There are no such things as applied sciences, only applications of science.
Louis Pasteur (1822–1895), French chemist and bacteriologist

In this section, we discuss security of passwords and the general features of
logging into a system and ensuring secure access and use. We have mentioned
passwords throughout the text, and defined the term in our discussion of EKE at the
bottom of page 169, along with variants of it. Anyone who has used a computer
over a network understands the notion of logging on (sometimes called signing
on), which involves a process of identification (a username or userid) together
with some authentication, a password, for instance. If the process is made
secure, then the legitimate user is uniquely identified to the system at hand.
Suppose that the user is attempting to log in from home to his or her computer
at work (remote login). Then passwords may have to travel over unsecured channels,
which are susceptible to eavesdropping by Eve and interception by Mallory.
Also, bad choices of passwords are common, and this compromises security. At
work, the user is probably protected from intrusion by a firewall (a mechanism installed
at the juncture where network connections enter a network traffic control
centre for a given system) that prevents intruders from accessing the system.
However, there needs to be some strong authentication mechanism for remote
logins. This is where a secure PKI can assist. If, for instance, Alice needs to
remotely access several different servers at the corporation where she works, she
would normally need several passwords, one for each server. This impedes her
ease of use, and if she tries to short-cut the process in order to circumvent the
multi-password requirement, this will reduce security. A secure PKI could assist
by supporting a strong authentication protocol so that passwords do not travel
over the network. Then Alice can log in remotely to a central server (securely)
from which she may access other servers, necessitating only one login.
Another (potential) feature of such a secure PKI (although not all PKIs
provide this) is the black box, meaning that there is end-user transparency in
the sense that Alice, for example, does not need to know much of anything
about hardware, or about software such as how HTTPS works, or IP addresses. This
means that virtually all of the security measures in the PKI are invisible to
Alice. What she does need to understand can be kept on a "need-to-know"
basis, such as first-time logins.
PKI yields secure communication through various schemes such as secure
e-mail via, say, S/MIME (see Section 9.1), or secure server access through TLS
(see page 183 and [72]).
Now we look at some of the means for ensuring the above security measures.
If Alice chooses a password that is easy to guess, the system will have a program
called a password checker, which tells Alice when she has chosen a "bad" password
(in her initial login). Once Alice has chosen a password that satisfies the
checker, then at any time in the future she can have access, but there will be a
limit (usually three attempts) on re-entering her password if she forgets it --
another security device.




So what is a good password? Here is a potential list of parameters for a good
password (a toy checker implementing some of these rules follows the list):
(1) It must have at least 7 symbols.
(2) It must use at least three of: uppercase letters, lowercase letters, numbers,
and symbols such as $, #, &, and so on.
(3) No symbol should be repeated.
(4) No personal data such as birthdays or telephone numbers should be used.

(5) Memorize it -- don't write it down, and certainly do not type it into your
computer to be stored as a file.
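A toy checker for rules (1)-(3) might look as follows (rule (4) would require per-user personal data, and rule (5) is policy rather than code):

    import string

    def password_ok(pw: str) -> bool:
        if len(pw) < 7:                                   # rule (1)
            return False
        classes = [any(c.isupper() for c in pw),
                   any(c.islower() for c in pw),
                   any(c.isdigit() for c in pw),
                   any(c in string.punctuation for c in pw)]
        if sum(classes) < 3:                              # rule (2)
            return False
        if len(set(pw)) < len(pw):                        # rule (3): no repeated symbol
            return False
        return True

    print(password_ok("abc"), password_ok("Tr4vel#9!"))   # False True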

So now we have a good password that has been accepted by the system
through the checker. How does the system protect your password? We have already
seen a method, described on page 115, namely salting: adding some
random bitstring as a pad to the password. Then the system can store a hash
of the password together with the salt value and Alice's identification. When
Alice enters her password, x, the system finds the salt for Alice, applies it to
x, hashes it, and if this matches the stored value, she is granted access. This
salting of passwords helps to thwart dictionary attacks (see page 171).
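A minimal sketch of this salted storage and verification, using SHA-256 and a 16-byte salt (both illustrative choices), is:

    import hashlib, os

    db = {}  # userid -> (salt, hash(salt || password))

    def register(user: str, password: str) -> None:
        salt = os.urandom(16)
        db[user] = (salt, hashlib.sha256(salt + password.encode()).digest())

    def login(user: str, attempt: str) -> bool:
        salt, stored = db[user]
        return hashlib.sha256(salt + attempt.encode()).digest() == stored

    register("alice", "Tr4vel#9!")
    print(login("alice", "Tr4vel#9!"), login("alice", "guess"))   # True False

Because each user's salt differs, a precomputed dictionary of password hashes is useless against the whole database at once.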
◆ Birthday Attack
We are given a hash function h : S → T, with |T| = n and |S| > n, so there
is at least one collision. If |S| ≥ 2n, there are at least n collisions (see Footnote 9.9). How do
we find one? For a given number m ≤ n of trials, we could randomly choose elements
sj ∈ S with 1 ≤ j ≤ m and compute the h(sj) to see if we get a collision.
The analogue of the above is given in Exercise 9.2. In fact, upon solving this
exercise, the reader will determine that the probability that there do not exist
any collisions is ∏_{j=1}^{m−1} (1 − j/n). However, by Exercise 9.4, 1 − x ≈ e^{−x} for
small x values (such as ours). Hence, the probability of no collisions is

∏_{j=1}^{m−1} (1 − j/n) ≈ ∏_{j=1}^{m−1} e^{−j/n} = e^{−m(m−1)/(2n)}.

Therefore, the probability of at least one collision occurring is

pc ≈ 1 − e^{−m(m−1)/(2n)}.    (9.1)

Moreover, by Exercise 9.5, if pc = 1/2, then m ≈ 1.17√n. This tells us quite
plainly that by hashing a little more than √n random elements of S, we have
a greater than 50% chance of finding a collision. This is the birthday attack.
It places a lower bound on the number of bits a hash function should possess
in order to be secure, since the birthday attack can find a collision in O(2^{k/2})
hashings on a k-bit function. Thus, if k = 64, then it is not secure against the
birthday attack, since only 2^{32} hashings are required.
9.9 It can be shown that when ∞ > |S| ≥ 2|T|, there exists a probabilistic algorithm that
finds a collision for h with probability bigger than 1/2 (see [156, Fact 9.33] and [225, Theorem
7.1, p. 235]).
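Equation (9.1) is easy to evaluate numerically; the short Python sketch below checks both the classical birthday paradox (n = 365, m = 23) and the 2^{32} figure for a 64-bit digest:

    from math import exp, sqrt

    def p_collision(m: float, n: float) -> float:
        # equation (9.1): probability of at least one collision in m trials
        return 1 - exp(-m * (m - 1) / (2 * n))

    print(p_collision(23, 365.0))       # about 0.50 for 23 people in a room

    n = 2.0 ** 64                       # a 64-bit hash function
    m = 1.17 * sqrt(n)                  # roughly 2^32 hashings
    print(m, p_collision(m, n))         # collision probability about 0.5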
◆ Alice Cheats Bob Using the Birthday Attack
The hash function has 64 bits. Alice wants Bob to sign a contract that he
thinks will benefit him, and later she wants to "prove" that he signed a contract
that actually robs him of his life savings.

(1) Alice prepares two contracts: one that is "good" for Bob, CG, and one, CB,
which will sign away his savings.

(2) Alice makes very minor changes in each of CG and CB. Then she hashes
2^{32} modified versions of CG and 2^{32} modified versions of CB.

(3) She compares the two sets of hash values until she finds a collision h(CG) =
h(CB) and recovers the corresponding preimages.

(4) Alice has Bob sign CG via the hash of its value.

(5) Later, Alice substitutes CB for CG, whose hash value is the same as that
signed by Bob, who has now lost all his money.

The above scenario between Alice and Bob was first described by Yuval
[246] in 1979. The hash function MD5, described on page 182, is essentially
the foundation for the Secure Hash Algorithm SHA-1, also described therein.
Since these are typically employed in S/MIME, we can be relatively certain that
they are secure. Moreover, the proposed variants of SHA-1, discussed earlier,
provide very secure alternatives for the future.
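The attack is easy to demonstrate on a deliberately weakened hash. The Python sketch below truncates SHA-256 to 24 bits (an assumption made purely so the search finishes in seconds; a real 64-bit digest would need roughly 2^{32} hashings, as noted above) and searches for a collision between variants of a "good" and a "bad" contract:

    import hashlib, itertools

    def h24(msg: bytes) -> bytes:
        return hashlib.sha256(msg).digest()[:3]       # truncated 24-bit digest

    good = b"Contract: Bob receives 10%% of profits. Variant %d"
    bad = b"Contract: Bob transfers his savings to Alice. Variant %d"

    seen = {}                                         # digest -> good variant
    for i in itertools.count():
        seen[h24(good % i)] = good % i
        d = h24(bad % i)
        if d in seen:                                 # collision between a CG and a CB
            print("collision after", i + 1, "variants of each")
            print(seen[d], b"  <-->  ", bad % i)
            break

With only 2^{24} possible digests, a few thousand variants of each contract typically suffice, in line with the √n estimate above.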
We have already seen, in Remark 1.30 on page 27, another type of password
attack called keypress snooping (also called keypress sniffing). If your computer
is connected to a network, then Eve can remotely install a keylogger (via, say,
a virus) and capture any password without detection (and without using any
cryptanalytic techniques). Ostensibly, DOS-based PGP is particularly vulnerable
to this type of attack.
There is yet another, more insidious attack, first described in 1985 by van
Eck [78], called Van Eck Snooping (see Footnote 9.10), also called Van Eck Sniffing. In this
case, there does not need to be any physical contact with the computer (locally
or remotely). All an adversary needs to do is read the electromagnetic
radiation from the video display screen (as well as SCSI cables (see Footnote 9.11), printer, and
network cables, for instance). The reason is that electrical devices generate
electromagnetic fields when they operate. In particular, digital computers generate
an enormous amount of electromagnetic signals. A trained adversary can
intercept, process, and reconstruct information. Essentially, any device with a
diode, microchip, or transistor radiates these fields. In this type of surveillance,
an adversary can not only get passwords, but also can reconstruct all activity
on a given computer.
9.10 Wim van Eck was a member of the Bio-engineering Group of the Electronics Department
at the Twente University of Technology in the Netherlands, from which he had graduated in
1981. In 1982, he became a member of the Propagation and Electromagnetism Compatibility
Department of the Dr. Neher Laboratories of the Netherlands. His work revolves around
supervising research projects on emission and susceptibility aspects of telecommunications
equipment and related areas. In [78], he uses the term phreaking to describe the mechanism
of reading radiation for reconstruction of information. This is a term that previously applied
to methods for using a telephone system without paying.
Perhaps the most notable use of Van Eck Snooping was accomplished by
the F.B.I., which used the attack to monitor the activities of C.I.A. intelligence
officer Aldrich Hazen Ames, known as "Rick" Ames, who turned out to be a
Soviet-Russian mole. On October 9, 1993, the F.B.I. began monitoring Ames'
computer via Van Eck snooping by using an antenna outside Ames' home, and
gathered enough evidence to arrest him. He was arrested in February 1994,
perhaps the most deleterious mole to have tunnelled into the C.I.A. He was
charged with conspiracy to commit espionage on behalf of the former Soviet
Union and Russia. On April 28, 1994, Ames and his wife, Rosario, pleaded guilty
to charges arising from the espionage activities. In an agreed-upon statement of
facts, Ames admitted to having received more than $1.8 million U.S. from the
KGB and that $900,000 was in an account for his later activities.
The U.S. military thwarts the Van Eck snooping attack via a project called
Transient ElectroMagnetic Pulse Emanation STandard (TEMPEST), a joint project
between the U.S. National Security Agency (NSA) and the U.S. Department of
Defense (DoD). Although the term "TEMPEST" was coined in the 1960s, the
notions behind building security against radiation-type leaks date to 1918, when
it was discovered that normal unmodified equipment was allowing classified
information to be leaked to the enemy through "compromising emanations".
This was largely due to Herbert Yardley (see Footnote 9.12) and his group at the
Black Chamber. TEMPEST equipment essentially provides shielding, grounding,
and bonding to avoid compromising emanations. What is known as the
van Eck receiver was founded on older video monitors using signals with little
or no standard shielding. The term TEMPEST has been updated, and one often
hears the modern term EMission SECurity (EMSEC). Nevertheless, one needs
to ensure that, certainly in such circles as the military, all equipment, including
cables, is shielded. Otherwise, if a trained adversary can read the radiation,
they can read both the encrypted and the plaintext data (both of which radiate)
and separate the two, resulting in a total break of any cryptosystem.
9.11 This is pronounced "scuzzy", the acronym for Small Computer System Interface. SCSI is
an ANSI standard, designed to provide faster data transmission than serial and parallel ports.
9.12 Herbert Osborne Yardley was born on April 13, 1889 in Worthington, Indiana. Early in
life, he recognized his flair for cryptanalysis when he was employed as a "code clerk" in the
State Department at the age of twenty-three. After the declaration of war in April 1917 by
the U.S.A., Yardley was made head of the newly established cryptology section of the Military
Intelligence Division, MI-8. In May of 1919, he submitted a plan for a permanent cryptology
organization. This ultimately came to be known as the American Black Chamber. The
Black Chamber operated very successfully by cryptanalyzing more than 45,000 enciphered
telegrams from various countries. By 1929, however, the Black Chamber was shut down by
the Secretary of State, Henry L. Stimson, who disapproved of the Chamber, saying "Gentlemen
do not read each other's mail". In 1931, Yardley published a book entitled The American
Black Chamber, which was an exposé of the U.S.A.'s weak, if not defenceless, status in the
arena of cryptology. It caused a furor in many circles. In fact, when he tried to publish a
second book, Japanese Diplomatic Secrets, it was suppressed by the U.S. government. He
involved himself in real estate speculations in the late 1930s, and served as an enforcement agent
in the Office of Price Administration during WWII. He died of a stroke on August 7, 1958 in
Silver Spring, Maryland.
A hacker (an entity who attempts to break into a computer system) may use
a password-cracking software package, which might be an automated exhaustive
password search mechanism. Their activities are called hacking. A cracker, on
the other hand, might be a hacker, but is often an entity who is hired by an
organization to find weaknesses in their system by attempting to hack or break
their system's security (see Footnote 9.13). A system administrator, or sys admin, is an entity
responsible for maintaining the systems and network, including the management
of accounts. Since one aspect of the power of a sys admin is the ability to override
password protection in order to deal with problems such as users forgetting
their passwords, impersonating a sys admin is an optimal goal for a hacker.
Hence, a secure, in-depth, comprehensive PKI that
ensures data integrity, including password protection, is vital. On page 171, we
discussed the SRP protocol, which eliminates the need to send passwords over a
network. These kinds of protocols, in conjunction with a well-developed PKI,
help to create a secure system. In the next section, we look at another type of
modern-day, real-world security concern: wireless authentication.

Exercises

° 9.2. Suppose there are n > 1 balls in a container numbered from 1 to n inclusive
and 1 < m ≤ n of them are drawn one at a time, listed, and replaced each
time. Find the probability that one of the balls is drawn at least twice. Use
the formulation to prove that in any room of 23 people, the probability
that at least two of them have the same birthday is greater than 50%.
This is called the birthday paradox.
9.3. Let k ∈ N, let x be a variable, and let

Dk(x) = Σ_{j=0}^{⌊k/2⌋} (k/(k − j)) C(k − j, j) x^{k−2j},

a special case of a Dickson polynomial of the first kind, where C(k − j, j)
denotes the binomial coefficient. Alice and Bob have RSA key pairs (eA, dA),
(eB, dB), respectively, with an RSA modulus n = pq. Alice encrypts her
message m as DeB(m) ≡ c (mod n), where gcd(eB, (p^2 − 1)(q^2 − 1)) = 1,
and DdA(c) ≡ s (mod n) is her signed message, which she sends to Bob.
How can Bob decipher to get m and verify Alice's identity? (This is a
result by Lidl and Mullen -- see [141]–[142].)
9.4. If e is the natural exponential base, and x ∈ R is a small real number,
show that 1 − x ≈ e^{−x}.

9.5. Prove that if pc = 1/2 in (9.1), then m ≈ 1.17√n.

9.13 However, the debate is open. Some would define a hacker as an entity with benign intent,
and a cracker as a malicious entity. One can avoid this by using the term "attacker".




9.3 Wireless Security
Along the electric wire the message came: He is not better -- he is just the
same.
Anonymous

Data communication is being revolutionized by advances in wireless (radio)
mobile communications that augment Internet, terrestrial fiber networks, and
satellite communications advances. However, wireless communication is notoriously
(and inherently) insecure. Wireless networking was deployed long before
the means for making it secure were put in place -- and they are still not there.
Nearly five million Wireless Local Area Network (WLAN) nodes were shipped
globally in 2000, with more than ten times that predicted for 2006. Thus, wireless
privacy and security standards are fundamental goals to be achieved for a
successful multifarious global communications network of the future.
WLANs provide numerous advantages over wired networks, one of the most
important being the elimination of cabling costs. The obvious advantage to the
user is mobility, but attached to that benefit comes the disadvantage of serious
security issues. Broadcasting in the clear leads to the problem of determining
whether or not the messages are actually coming from the mobile device in question.
Moreover, a serious problem occurs if Mallory can interfere with WLAN
transmissions so that normal communication is prevented; this is called a
denial-of-service (DoS) attack. Many computationally intensive
operations, such as those required for some security operations, can be problematic
on handheld devices (see Footnote 9.14) such as mobile (cell) phones. Moreover, existing
security attempts at encryption schemes have proved to be woefully inadequate.
For example, the security mechanism of the current WLAN standard, IEEE 802.11
(see Footnote 9.15), called Wired Equivalent Privacy (WEP), is very easy to break,
and has no key management scheme -- a major flaw for any system, as we saw in
Chapter 8 (see Footnote 9.16). Furthermore, since so many handheld devices, mobile
phones, and PDAs (see Footnote 9.17) are lost or stolen each year (conservative
estimates put the total at more than a quarter million in 2001 at airports alone!),
securing these devices physically is a serious priority. However, we will be concerned
herein with cryptographic security -- a necessity once physical security is established. The unfortunate
9.14 A handheld device, usually a phone or computing mechanism, is one that will conveniently
fit into a suitable pocket and can be used while you are holding it in hand.
9.15 This is the Institute of Electrical and Electronics Engineers Inc., with predecessors the
AIEE (American Institute of Electrical Engineers) and the IRE (Institute of Radio Engineers),
dating to 1884.
9.16 However, RSA Security Incorporated has a "fix" for WEP, called Fast Packet Keying (see
http://www.rsasecurity.com/rsalabs/technotes/wep-fix.html).
9.17 A PDA is a Personal Digital Assistant, a handheld device that combines computing,
telephone, and networking functions. Typically a PDA will have a built-in keyboard or an
electronically sensitive pad where a stylus can be used for handwriting to be electronically
sent. The first of these was the now defunct Apple Newton. PDA is an acronym now used
generically to reference any of a range of such products, including Hewlett-Packard's (HP's)
Palmtop and 3Com's Palm Pilot.





tendency of vendors is to focus on features before security and quality issues
have been addressed.
PDAs have low power capacity, slow processors, and limited storage capacity,
so they are not well suited to carry out cryptographic calculations. Since
PDAs were meant to support personal applications, this was not seen, initially,
as a problem. Yet most of these devices depend upon password-based access
control, which typically is not secure. Once an adversary gets hold of the
password, consider what happens when the PDA user has the same password
for many other access points. Corporate employees could thereby subject their
corporate resources to compromise. One solution is a smart card to be inserted
in order to access a PDA (see Section 9.4). Global System for Mobile
Communication (GSM) phones already have this capacity and have had it for some
time. A feature called the Subscriber Identity Module (SIM) is a removable
hardware security module, separate from the mobile phone. The SIM can have
an associated Wireless Application Protocol Identity Module (WIM), which can
be used for application-level security.
There exist methods for organizations to "fix" the problem with WEP. For
instance, Cisco Systems employs a CA (see Section 8.3) that generates and
distributes RSA key pairs at the client level for authentication. The CA also
sends out RC4 keys (see Footnote 9.18) for encryption of data. There exist user-based
authentication servers sold by vendors for wireless communication, which will not allow
a client to access a system before validation. This must be seen as a basic security
requirement. Yet most WLANs do not have this minimal security. At
a minimum, a proprietary WLAN should only be accessed through a Virtual
Private Network (VPN) to a central server with secure authentication mechanisms.
Defenders of WEP say that it was only meant to be a deterrent, not a
VPN replacement. WEP essentially uses RC4 in a fashion similar to a one-time-pad,
and of course the "one-time-pad" is used significantly more than once. In
any case, the fix is needed, whatever the initial intention, since most modern
corporations, medical facilities, and military installations, for example, require
secure WLANs. The IEEE 802.11i group is working on enhanced security for
WLANs and is fixing the security in 802.11. They will have two modes: one
updates WEP and addresses its insecurities; the other uses AES and provides
for modern authentication and key management using PKI or Kerberos, for
instance.
The first step in wireless security, to be carried out before encryption can be
used, is authentication. When a mobile phone is used in a commercial network,
the initial connection is authenticated by a base station in order to ensure proper
billing. In military networks, mobile devices have to be authenticated by base
stations to ensure that the transmitted data is legitimate. Conversely, the mobile
device has to ensure the authenticity of the base station to guarantee validity of
the receiver. A hybrid system is best suited to the task in these communications,
where some computations can be carried out at the base station, which will have
somewhat higher computing power than the mobile units, thereby solving one of the
problems (albeit not being considered in IEEE 802.11i).
9.18 RC4 is a stream cipher developed at RSA Data Security Inc. by Ron Rivest in 1987
(accounting for the "R" in it, as is the "R" in RSA). RSA kept RC4 secret, but in 1994 it was
somehow leaked to the cypherpunks mailing list on the Internet. RC4 is actually part of SSL
(see page 183), among other applications. RC4 is very fast, about 700 times faster than RSA
using a 1024-bit modulus.
We now look at two wireless protocols and examine their suitability. The
first [21] was introduced in 1993.
◆ The Beller-Yacobi Wireless Authentication/Key Agreement
Protocol
Alice has a mobile phone and she wants to authenticate with Bob at the
base station, who also needs verification of Alice's identity. Their second goal is to
establish a symmetric session key.
Background Assumptions: A symmetric-key algorithm E is used and a
public-key algorithm P is employed (keys to be determined below). Also, there
is a digital signature scheme and an associated hash function h. Trent acts as
the CA.
Setup Stage:
(1) A prime nS and a primitive root α modulo nS are chosen as ElGamal
parameters (see page 67).
(2) Trent chooses primes p and q and forms the RSA modulus nT = pq (which
will be used for RSA signatures -- see page 135).
(3) Trent's public key is eT = 3, and his private key dT is computed so that

eT dT ≡ 1 (mod (p − 1)(q − 1)).

(4) Alice and Bob are each given a copy of the public exponent eT = 3 by
Trent, as well as the public-key modulus nT and the public pair (nS, α).

(5) Trent generates identity strings IA and IB for Alice and Bob, respectively.

Mobile Phone Initialization:

(1) Alice chooses a random integer a (as the ElGamal private signature key
-- see page 136) in the range 1 < a ≤ nS − 2, and calculates the ElGamal
signature public key uA ≡ α^a (mod nS). She keeps a private, but sends
uA to Trent.
(2) Trent creates the certificate C(A) = (IA, uA, sigT(IA, uA)), where sigT(IA, uA) ≡
(h(IA, uA))^{dT} (mod nT) is Trent's RSA signature. He sends C(A) to Alice.

Base Station Initialization:

(1) Bob chooses an RSA modulus nB, creates a public key eB = 3 and the
corresponding private key dB. Then he sends nB to Trent.

(2) Trent creates a certificate C(B) = (IB, nB, sigT(IB, nB)), where

sigT(IB, nB) ≡ (h(IB, nB))^{dT} (mod nT)

is Trent's signature. He sends C(B) to Bob.

Protocol Steps:

(1) Precomputation Step: Alice chooses a random integer x such that
1 < x ≤ nS − 2 and gcd(x, nS − 1) = 1, and computes

v ≡ α^x (mod nS), x^{−1} (mod nS − 1), and av (mod nS − 1).

(2) Bob sends C(B) to Alice.
(3) Alice verifies the authenticity of nB by computing

h(IB, nB) ≡ sigT(IB, nB)^3 (mod nT).

If valid, she selects a random key k > 1 such that k < nB − 1 and sends
PB(k) ≡ k^3 (mod nB) to Bob.

(4) Bob computes

k ≡ sigB(PB(k)) ≡ PB(k)^{dB} (mod nB).

Then he chooses a random challenge number m, pads it with t ≥ 50 least
significant zeros to form m', and enciphers it as Ek(m'), which he sends
to Alice.

(5) Alice uses Ek^{−1} to decipher m', verifies the format to get m, and if valid,
she accepts that k came from Bob. Alice then forms M = (m, IB) and
computes

w ≡ (M − av)x^{−1} (mod nS − 1).

She sends Ek((v, w), C(A)) to Bob, where (v, w) is her ElGamal signature
on M and C(A) is her certificate.

(6) Bob uses Ek^{−1} to get (v, w) and C(A). He verifies the authenticity of uA
by computing

h(IA, uA) ≡ sigT(IA, uA)^3 (mod nT).

If valid, he creates M = (m, IB). He verifies Alice's signature on m by
calculating

α^M ≡ uA^v v^w (mod nS)

(see Exercise 9.6). If all is valid, Bob accepts that k came from Alice.




In step (3), Alice uses a cubic RSA exponentiation to encrypt the message,
and in step (5), she uses an ElGamal signature scheme to sign the message. In
step (4), Bob uses his corresponding private RSA exponent to decrypt it, and
in step (6) uses the cubic RSA public exponent to authenticate. The cubic RSA
exponent is an inexpensive public-key algorithm, and the ElGamal signature is
also inexpensive in that there is one modular exponentiation, the one in step
(5), excluding the precomputation in step (1). Moreover, mutual authentication
requires both Alice and Bob to demonstrate knowledge of their respective private
keys (a for Alice from the ElGamal signature scheme, and dB for Bob from
the RSA signature scheme). Hence, this provides a unique mingling of the
two public-key schemes and the exploitation of economy involved in their base
operations -- a nice feature.
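The ElGamal signing and verification that Alice performs in steps (1) and (5) and Bob performs in step (6) can be exercised numerically. The Python sketch below uses a small prime nS and an integer encoding M of (m, IB), both illustrative assumptions, and omits the RSA layer and the certificates:

    import random
    from math import gcd

    nS = 2027                                   # small prime; nS - 1 = 2 * 1013
    # find a primitive root alpha modulo nS (order must be neither 2 nor 1013)
    alpha = next(g for g in range(2, nS)
                 if pow(g, 2, nS) != 1 and pow(g, 1013, nS) != 1)

    a = random.randrange(2, nS - 2)             # Alice's private ElGamal key
    uA = pow(alpha, a, nS)                      # her public key, certified by Trent

    M = 1234                                    # toy encoding of (m, IB)

    # precomputation step (1) and signing step (5)
    while True:
        x = random.randrange(2, nS - 1)
        if gcd(x, nS - 1) == 1:
            break
    v = pow(alpha, x, nS)
    w = (M - a * v) * pow(x, -1, nS - 1) % (nS - 1)

    # verification step (6): alpha^M =? uA^v * v^w (mod nS)
    print(pow(alpha, M, nS) == pow(uA, v, nS) * pow(v, w, nS) % nS)   # True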
The second hybrid algorithm, first developed [12] in 1994, is given as follows.
The protocol provides link security only, meaning that the mobile device
and the base station are mutually authenticated to one another, which allows
the formation of a communication link over which the key agreement can be
exercised. This provides a seamless embedding into existing WLANs (not considered
in IEEE 802.11i). Another feature is the flexibility of the protocol, which
allows the symmetric-key algorithm to be chosen and therefore has an eye to
the future, since future developments in symmetric-key technology can be
implemented.
◆ Aziz-Diffie Wireless Authentication/Key Agreement Protocol
Alice has a mobile phone and she wants to authenticate with Bob at the
base station, who also needs verification of Alice's identity.
Background Assumptions: We assume that LS is a list of symmetric-key
cryptosystems containing information on key sizes. Also, Alice and Bob
have public/private key pairs (eA, dA) and (eB, dB), respectively, from a public-key
cryptosystem. Moreover, certificates are issued by the CA, Trent. Alice's
certificate, for instance, is C(A) = sigT(SA, VA, IA, eA, IT), where SA is the
serial number of the certificate, VA is the lifetime (or validity period) of it, IA
is Alice's unique identifier string, eA is Alice's public key, and IT is Trent's
identifier. Also, verT is Trent's public verification algorithm, and sigT is his
signature. As well, Alice and Bob have digital signature schemes sigdA and
sigdB, respectively.
Authentication Protocol Steps:

(1) Alice selects a nonce nA, which is a random 128-bit string, and sends
(C(A), nA, LS) to Bob at the base station.

(2) Bob verifies C(A) using verT. If valid, he generates a nonce nB (128
bits) and selects a symmetric-key cryptosystem E from the intersection of
LS with the set of those supported by his base station. The key sizes
are determined by the minimum of what Alice's list suggests and what
the base station can support. He extracts eA from Alice's certificate and
sends her (C(B), eA(nB), E, sigdB(eA(nB), E, nA, LS)).

(3) Alice verifies C(B) using verT, and if valid, the signature is verified using
eB. She compares nA and LS with what she sent in her message. If they
match, she is convinced of Bob's authenticity at the base station. She
generates another nonce, n'A, and sends (eB(n'A), sigdA(eB(n'A), eA(nB)))
to Bob.
(4) Bob uses eA to verify the signature. He verifies Alice's identity by comparing
eA(nB) with what he sent Alice in step (2). If they match, then Alice
has been identified to Bob.

Now that they have established a communication link, they can proceed to
establish a shared symmetric key through an exchange as follows.
Key Agreement Protocol Steps:

(1) Bob computes dB(eB(n'A)) = n'A and forms a session key, nB ⊕ n'A = nA,B,
by addition modulo 2. He sends (eA(nB), sigdB(eA(nA,B))) to Alice.
(2) Alice computes dA(eA(nB)) = nB, and forms nB ⊕ n'A = nA,B, the session
key. She sends (eB(n'A), sigdA(eB(nA,B))) to Bob, and they now have
agreement on a session key.
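The session-key formation itself is just a bitwise XOR of the two 128-bit nonces, as the short sketch below shows (the nonces are generated locally here for illustration; in the protocol they arrive under the public-key enciphering):

    import os

    nB = os.urandom(16)                       # Bob's 128-bit nonce
    nA_prime = os.urandom(16)                 # Alice's second nonce n'A

    # nA,B = nB xor n'A, i.e., addition modulo 2 bit by bit
    session_key = bytes(b ^ a for b, a in zip(nB, nA_prime))
    print(session_key.hex())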

If Mallory were to attempt to modify LS by intercepting Alice's message
from step (1), Bob will detect this in step (2). When Alice sends her message
in step (3), she completes her authentication and sends the second half of the
key, n'A, to Bob as well. This initializes the key agreement protocol should the
authentication protocol be successful. The reason that there is an XORing step by
both of them, rather than simply choosing nB, say, for the key, is that the
addition of the two key parts reduces the damage that would otherwise occur
if one of Alice's or Bob's private keys were compromised. Hence, with the setup as
above, Mallory would have to obtain both Alice's and Bob's private keys.
The key exchange is provided in case there is a breach of security and Alice
and Bob need to change the session key. This might occur even if the initial
connection remains unbroken. Moreover, the key agreement protocol may be
initiated by Alice, in which case steps (1) and (2) are interchanged. Since nB
and n'A are from previous key exchanges, the signed values in the key agreement
protocol thwart any attempt by Mallory to launch a replay attack (see page 133).
The Aziz-Diffie protocol has smaller storage requirements than the Beller-Yacobi
protocol, and it is a three-pass protocol, whereas Beller-Yacobi is a four-pass
protocol. Also, Aziz-Diffie requires no precomputation step, whereas Beller-Yacobi
does (each time it sets up a call). Lastly, Beller-Yacobi offers forward
secrecy (see page 171) to Alice but not to Bob, whereas in Aziz-Diffie there is
forward secrecy for both.
There exist a number of other wireless protocols that have been developed
over the years. For instance, the Future Public Land Mobile Telecommunications
Systems (FPLMTS) wireless protocol [244] was developed in 1995, and updated
to ICG IMT-2000 (see http://www.itu.int/itudoc/itu-t/icg/imt2000/). This is
a symmetric-key protocol with three passes, so there are key management problems.
There is the Groupe Spécial Mobile (GSM) protocol, developed in Europe in the
early 1990s as a standard for mobile phones (see [166] and [168]). It was the
first wireless architecture to provide user authentication, confidentiality, and key
agreement. It is currently used in Europe, Australia, and other countries. These
are but a few of many. There are more recent developments proposing the use
of elliptic curve cryptography. For instance, see [11], which involves the implementation
of ECDSA (see Footnote 7.5 on page 139). The authors demonstrate that
their method requires less bandwidth than either the Aziz-Diffie protocol or the
Beller-Chang-Yacobi protocol [20], which was a precursor to the Beller-Yacobi
protocol given above. Moreover, it is shown to have low storage requirements,
making it suitable for PDAs and smart cards (see Section 9.4). Lastly, they
show that there are modest computational requirements for the user. In fact,
there is a current brand of opinion that advances elliptic curve cryptography as
the new wave for wireless and smart cards. For example, Certicom maintains
that its ECC software is better suited than RSA's BSAFE (see Footnote 9.19) in terms of lower
computational overhead, smaller key sizes, and lower bandwidth. Moreover,
RSA Security has a multi-prime proposal, which is specially suited for wireless
environments. (See ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-1/pkcs-1v2-1.doc
for the latest version of PKCS#1, which includes multi-prime RSA.) Not to
be outdone, NTRU claims its cryptographic software makes "dinosaurs" of the
older standards, since theirs is faster than both RSA and ECC. Time will, as
always, tell who will be proven correct; or, as is often the case, perhaps
some hybrid will win the day.
From what we have seen, certain observational guidelines emerge for wireless
security protocols:
(1) There must be mutual authentication.
(2) Session keys should be securely established as needed.
(3) Public-key computations within protocols should be minimized when used.
(4) Message digests should be used when possible (without reducing efficiency).
(5) Nonrepudiation, as well as authentication, should be guaranteed by the use
of Trent, but preferably outside the protocol actions between Alice and Bob.
(6) Challenge-response protocols provide authentication and access control
services.
(7) Public-key certificates mutually authenticate Alice and Bob.
(8) There should be authentication between the base station and the CA.
9.19 RSA Security Inc. has BSAFE encryption software for e-commerce, available in a wireless
version (see http://www.rsasecurity.com/products/bsafe/), which is used, for instance, by
Palm Inc. in its Palm OS. There are other vendors offering competing software, including
Certicom (see http://www.certicom.com/) and NTRU (see http://www.ntru.com/).




What does the future hold for wireless communications? The ITU has coordinated
an effort to support a framework for what is called third-generation (3G)
wireless communications services, called the Universal Mobile Telecommunications
System (UMTS), which is part of a larger framework called IMT-2000. The goal
is to create an edifice for digital content distribution in mobile communications
that can be seamlessly incorporated into the existing second-generation (2G)
wireless environment. The first UMTS services were launched in 2001, with
over one hundred 3G licenses awarded to date. The goals of UMTS include the
retention of the robust features of 2G technology; improvement of 2G security
systems; and the provision of new systems that do not exist in 2G systems. For
instance, the 3G analogue of a SIM is called a USIM, or Universal Subscriber
Identity Module. The advance in 3G technology over SIM cards would be the use
of dual-slot phones for WIMs not located on the SIM card. UMTS is capable of data rates
as high as 2 Mbps (see Footnote 7.1 on page 131) in conjunction with global
roaming. Some of the problems that we discussed with 2G technology above will
be addressed by providing advanced support for security and data encryption;
increased key sizes; enhancement of user identity confidentiality by employing
group keys; public specification of the basic UMTS algorithms; and advanced
enhancements for integrity and confidentiality. There are a wealth of other features
that we cannot begin to describe here. The interested reader may find more at
http://www.umtsworld.com/.
The bottom line is that the future of wireless technology depends upon
improvement of security services across wireless networks on a global scale,
from end to end. There should be a robust mix of public-key and symmetric-key
management techniques to ensure optimal security.

Exercises

9.6. Verify that in Protocol Step (6) of the Beller-Yacobi Wireless Scheme,
Bob's computation at the end actually verifies Alice's signature.
9.7. The Beller-Yacobi is a 4-pass protocol and Aziz-Diffie is a 3-pass protocol.
The following is a 2-pass protocol. In what follows, (eA, dA) and (eB, dB)
are Alice and Bob's respective public/private key pairs. TA is a timestamp
from Alice's mobile phone, and C(A), C(B) are their respective certificates
issued by Trent. Also, α is a primitive root modulo a prime nS, as ElGamal
parameters, while a and b are Alice and Bob's respective private ElGamal
signature keys.
(1) Alice sends (C(A), dA(TA, α^a)) to Bob.
(2) Bob checks C(A), and verifies using eA to get TA and α^a. If valid, he
sends (C(B), eA(dB(TA, α^b))) to Alice. Alice decrypts first using dA,
then eB. If valid, they have mutually authenticated each other and
have agreement on the key α^{ab}, which is a Diffie-Hellman exchange.
What is a potential weakness of this scheme, and what protects against
this weakness?




9.4 Smart Cards and Biometrics
Thy firmness makes my circle just, and makes me end where I begun.
John Donne (1572–1631), English poet

We have encountered smart cards in our cryptographic travels in Section 7.3,
when we discussed digital cash, in the presentation of the timing attack on
page 113, as well as in the mention in Footnote 7.10 on page 151, when we described
them as stored value cards. Now we are going to look at smart cards in greater
detail and depth. First of all, we should settle upon a formal definition of the
term, namely, a plastic card containing an embedded integrated microprocessor
chip. Smart cards have both memory and computational capacity, and there are
numerous kinds of smart cards. The international standard for smart cards is
ISO 7816 (see Footnote 8.6 on page 174), Integrated Circuit Cards with Electrical
Contact (which has six parts relating to everything from the card size to its
electrical attributes).
In Section 7.1, we looked into the issue of identification. Smart cards can be
used to identify. An identification scheme that we did not discuss yet, but one
that is ideally suited for smart card applications, is the following [103], introduced
in 1988.
x Guillou-Quisquater Identification Scheme
Background Assumptions: Alice is a smart card that wants to access a
bank account at Bob the Bank. She has her identification string I_A stored as a
hash h(I_A). Trent has a public RSA modulus n = pq, where p and q are private,
and the public RSA exponent is e_T. Alice's private key d_A is computed via

h(I_A) d_A^{e_T} ≡ 1 (mod n).

She sends h(I_A) to Bob and must now convince him that she knows d_A without
revealing it.
Protocol Steps:
(1) Alice picks a random nonnegative integer k ≤ n − 1 and computes

α ≡ k^{e_T} (mod n),

which she sends to Bob.

(2) Bob chooses a random nonnegative integer c ≤ e_T − 1, which he sends to
Alice.

(3) Alice computes β ≡ k d_A^c (mod n), which she sends to Bob.

(4) Bob calculates γ ≡ h(I_A)^c β^{e_T} (mod n), and if γ ≡ α (mod n), then by
Exercise 9.8, Alice has been identified to Bob.
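To make the challenge-response flow concrete, the following is a minimal Python
sketch of one round of the scheme with toy parameters; the modulus, the exponent
e_T = 13, the hash used for h, and the challenge name c are illustrative
assumptions only, not secure or canonical choices.

```python
import hashlib
import math
import secrets

# Toy RSA parameters chosen by Trent (far too small for real use).
p, q = 2357, 2551
n = p * q                              # public modulus
e_T = 13                               # public RSA exponent
d_T = pow(e_T, -1, (p - 1) * (q - 1))  # Trent's secret exponent

def h(identity: str) -> int:
    """Hash an identity string to an integer modulo n (illustrative only)."""
    return int.from_bytes(hashlib.sha256(identity.encode()).digest(), "big") % n

# Trent issues Alice's private key d_A so that h(I_A) * d_A^{e_T} = 1 (mod n).
h_IA = h("Alice")
assert math.gcd(h_IA, n) == 1          # overwhelmingly likely for a random hash
d_A = pow(pow(h_IA, -1, n), d_T, n)    # e_T-th root of h(I_A)^{-1} modulo n

# (1) Alice commits: she picks k and sends alpha = k^{e_T} mod n.
k = secrets.randbelow(n)
alpha = pow(k, e_T, n)

# (2) Bob challenges with a random c, 0 <= c <= e_T - 1.
c = secrets.randbelow(e_T)

# (3) Alice responds with beta = k * d_A^c mod n.
beta = (k * pow(d_A, c, n)) % n

# (4) Bob checks gamma = h(I_A)^c * beta^{e_T} mod n against alpha.
gamma = (pow(h_IA, c, n) * pow(beta, e_T, n)) % n
print("identified:", gamma == alpha)   # True
```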
Since Alice is able to identify herself to Bob without revealing d_A, this is an
example of what is called a zero-knowledge proof of identity. We encountered



another example of this notion in Section 2.1 (see Remark 2.1 on page 35).
The above scheme can be modified to provide a digital signature mechanism
that is also well suited for smart card applications (see [104]–[105]). On page
138, we talked about how identification schemes can be converted into signature
schemes, and from where the idea originated. The following is another
application of that idea.
x Guillou-Quisquater Signature Scheme
The setup remains in effect from the identification scheme above. The goal
is for Alice to sign a message m and have Bob verify the signature. We assume
that H is a cryptographic hash function.
Protocol Steps:

(1) Alice selects a random integer r with 1 < r < n − 1, and computes both:
s ≡ r^{e_T} (mod n), and t = H(m, s).

(2) Alice then computes c ≡ r d_A^t (mod n). She sends (m, t, c, h(I_A)) to Bob.

(3) Bob computes s′ ≡ c^{e_T} h(I_A)^t (mod n), then t′ ≡ H(m, s′) (mod φ(n)). By
Exercise 9.9, if t′ ≡ t (mod φ(n)), then he accepts Alice's signature.
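A hedged sketch of the signing and verification steps, reusing the same toy
parameters as in the identification sketch, might look as follows; the helper
names sign and verify, and the way H folds m and s together, are illustrative
assumptions, and the verifier here simply compares the two hash values directly.

```python
import hashlib
import math
import secrets

# Toy parameters as in the identification sketch (far too small for real use).
p, q = 2357, 2551
n, e_T = p * q, 13
d_T = pow(e_T, -1, (p - 1) * (q - 1))

def h(s: str) -> int:
    return int.from_bytes(hashlib.sha256(s.encode()).digest(), "big") % n

def H(m: str, s: int) -> int:
    """Hash the message m together with the commitment s (illustrative only)."""
    return int.from_bytes(hashlib.sha256(f"{m}|{s}".encode()).digest(), "big") % n

h_IA = h("Alice")
assert math.gcd(h_IA, n) == 1
d_A = pow(pow(h_IA, -1, n), d_T, n)      # h(I_A) * d_A^{e_T} = 1 (mod n)

def sign(m: str):
    r = 2 + secrets.randbelow(n - 3)     # random r with 1 < r < n - 1
    s = pow(r, e_T, n)
    t = H(m, s)
    c = (r * pow(d_A, t, n)) % n
    return (m, t, c, h_IA)

def verify(m: str, t: int, c: int, hid: int) -> bool:
    s_prime = (pow(c, e_T, n) * pow(hid, t, n)) % n
    return H(m, s_prime) == t            # compare the hash values directly

msg = "transfer 100 to Bob"
print("signature accepted:", verify(*sign(msg)))   # True
```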
Identification and signatures are but two of many applications for smart
card environments. We have already discussed their use in e-commerce. There is
also their use in health care, since smart cards can carry vital medical information,
which might include data such as drug allergies, current medications, emergency
contact numbers, and other facts that could save a person's life when, say, that
person is brought unconscious to a hospital after an accident. Perhaps a kidney
patient would have a smart card with dialysis and treatment data, already a
reality in France and Japan. Naturally, such data would have to be made secure
on the card. Another important application of smart cards in health care is
as an identification method for high-security systems that contain confidential
patient records or information.
Smart cards that interface with information technology are being developed.
Imagine the day when every PC comes with a smart card reader.9.20 It may
not be far away. For instance, the PC/SC workgroup is setting standards for
smart cards and their readers to be integrated into computing environments
(see http://www.pcscworkgroup.com/).
Another aspect of smart card applications that combines identification with
e-commerce is the so-called loyalty card, used by retailers globally to reward
their loyal customers with points that can be redeemed for gifts. Another
developing area is the use of smart cards in mass transit. A smart card can be
used to store transportation value and, ideally, does not need contact to open
fare gates. These contactless cards may use electrical inductive coupling, with
the reader and card roughly a millimeter apart. This allows a large volume of
9.20 Smart cards can be plugged into a variety of hardware devices called readers. These are
distinguished by the means with which they interface with a PC, such as, say, the Universal
Serial Bus (USB). However, we will not be concerned with the hardware issues herein.




traffic to be processed quickly. For instance, Hong Kong initiated such a smart
card system in September of 1997. Since that time, 2.5 million transactions per
day have taken place. When the rollout is complete, 7.5 million cards are expected
to be in use; currently, more than four million have been distributed. As evidence
of the international recognition for the state of the art of their system, they
were awarded the 1998 Sesames Award for the best smart card application at
Cartes '98. An astonishing 1.6 billion transactions occur in the Hong Kong
public transit system yearly. Thus, this smart card application is an amazing
achievement, especially when one learns that the system can handle as many as
10 million transactions per day.
In Section 9.3, we discussed wireless aspects of telephony. Smart cards can
also be used in pay phones instead of coins, and hundreds of millions of such
cards are in use worldwide.
Of course, we are all aware of debit cards. However, most in use today have
magnetic stripes. Smart cards have three major advantages over magnetic-stripe
cards. One is that they can carry as much as 100 times more information and
do so more robustly. The second is that they can carry out relatively complex
tasks when interfaced with a terminal; one of these tasks might be the above
identification and digital signature schemes used to convince a bank that Alice
is legitimate in her request to withdraw cash. Third, magnetic-stripe cards
are far less secure than smart cards. In particular, this was the driving force
behind France's switch to smart cards in the mid-1980s, since fraud rates were
incredibly high with no end in sight to the escalation. The chips on smart
cards are more secure against tampering than magnetic stripes. Recall that we
mentioned the term "phreaking" in Footnote 9.10 on page 187, which originally
meant using telephony without paying. In Europe, where telephone rates
are significantly higher than in North America, there was incentive to switch to
smart cards, as the French did. In fact, the French are responsible for the term
"smart card"; the technology has been in development since the 1970s, when the
French invested a large amount of money in this R&D. They originally called
these cards Carte à mémoire, or memory cards, in the 1970s. The French
government's marketing arm, Intelimatique, coined the term smart card in 1980.
Although we defined smart cards at the outset of this section, there
are four types of cards, with differing capabilities, that are typically referenced
with this term, so we'll now sort this out. There are those with the capacity to
store data, but without data processing capabilities: standard memory cards.
There are those which contain some built-in logic circuit (wired logic) to
access the memory of the card, called intelligent memory cards. Sometimes
these cards can be configured to restrict access via a password. Third, there
are stored-value cards, which have security features hard-wired into the chip at
the point of manufacture. Examples of such cards are prepaid phone cards,
wherein a terminal inside the pay phone writes a declining balance into the
card's memory. The card can be discarded or recharged when the balance reaches
zero. Fourth, there are those which contain memory and a processor, and have
data processing capabilities: processor cards. Typically, the processor is employed
for cryptographic calculations and, for instance, public/private key pairs can



be stored and used. Moreover, the chip is more sophisticated and can manage
data much as your PC does. It is the latter type that is most deserving
of the term smart card; this is an integrated circuit card with an ISO 7816
interface. In fact, the first three are often taken under the heading of memory
cards and the fourth under the heading of microprocessor multifunction cards.
Clearly, smart cards offer greater security and convenience for transactions of
any kind. They are vulnerable to numerous attacks, and we discussed some of
those in Section 6.1. Although price was initially a problem, this is correcting
itself with time and more widespread use. Lastly, the PKCS#11 and PKCS#15
standards are important for storing and accessing information on smart cards.
See http://www.rsasecurity.com/rsalabs/pkcs/index.html.
Now we turn to biometrics: the science and its attendant technology for
measuring and analyzing biological data, such as fingerprints, eye retinas and
irises, voice patterns, and facial geometry, especially for authentication. The
digital storage of these characteristics is called a template. Later, when an
individual's characteristic is measured, it can be compared to the template, and
if the comparisons match, the entity is authenticated. The device that measures
a biometric is called a sensor, such as a retinal scanner or a microphone.
This requires an analog-to-digital converter, necessitating the taking of multiple
samples to account for measurement variance.
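As a toy illustration of templates and tolerance-based matching, the following
Python sketch averages several digitized readings into a template and accepts a
later reading only if it lies within a chosen Euclidean distance of that template;
the feature vectors, the distance measure, and the threshold are illustrative
assumptions rather than any deployed system's method.

```python
import math

def make_template(samples: list[list[float]]) -> list[float]:
    """Average several digitized sensor readings into a stored template."""
    count = len(samples)
    return [sum(feature) / count for feature in zip(*samples)]

def matches(template: list[float], reading: list[float], tol: float) -> bool:
    """Accept the reading if it is within Euclidean distance tol of the template."""
    return math.dist(template, reading) <= tol

# Enrollment: several samples of the same (illustrative) feature vector.
enrollment = [[0.61, 0.34, 0.90], [0.59, 0.36, 0.88], [0.62, 0.33, 0.91]]
template = make_template(enrollment)

print(matches(template, [0.60, 0.35, 0.89], tol=0.05))   # genuine user -> True
print(matches(template, [0.20, 0.70, 0.10], tol=0.05))   # impostor     -> False
```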
Very few financial institutions use biometrics, but they are often used for
physical access to various sites. For instance, the authorities responsible for the
security of the Statue of Liberty in the U.S.A. are testing, at the time of this
writing, a process whereby a picture is taken of each individual (before they are
allowed access to the statue) and compared to a database of known criminals
and terrorists. If the facial geometry matches any of them, the individual is held.
Of course, these matches cannot be exact, so there are algorithms for deciding
an acceptable range of comparison, which is necessarily somewhat subjective.
Sensitivity levels for this range can be set in a similar fashion to the scanners
at airport facilities.
One drawback in the use of biometrics is that it requires, say, for a bank,
that the customer submit several biometric indicators over a period of time for
security, whereas a PIN can be established once for account initiation. Other
problems can arise. Suppose that the biometric is a voice identifier and the
customer has a severe cold or laryngitis. Then the customer cannot be authenticated.
Thus, more than one biometric is needed, and some mechanism has to
be in place to select a significant subset of them for authentication purposes.
For example, an individual's fingerprints, facial geometry, and voice pattern can
be embedded into a smart card. Then the entity can present the smart card,
and the fingerprints, together with a check of the voice pattern, say, can be verified.
Most of us know about fingerprint biometrics, which is the most widely accepted.
Moreover, it is relatively inexpensive, and perhaps the most trusted. However,
this trust appears to be misplaced. At Eurocrypt 2001, there was a rump
session talk in which it was shown that, using a small amount of money, a replica
of a finger could be created from "gummy", the substance used for gummy
bears. These replicas were accepted by all fingerprint readers tested, often with



a higher acceptance rate than real fingers! Consequently, this throws serious
doubt on the amount of trust that can be placed in fingerprint scanners. See
http://www.counterpane.com/crypto-gram-0205.html#5.
We mentioned two types of optical biometrics, retinal and iris. These are the
most common. Retinal and iris scanners are more expensive than fingerprint
scanners, but also more reliable, in that the error rate is typically 1 in 2 × 10^6
for the former and 1 in 10^5 for the latter. Facial geometry is perhaps the most
subjective of the biometrics, but also less expensive than fingerprint or optical
biometric scanners. Voice biometrics are attractive since the technology already
exists in the computer industry, and the costs are minimal. Yet error rates are
high and, as we mentioned earlier, if the entity has a cold or some other voice-
altering condition, the biometric is unreliable. There are several other types of
biometrics, such as signatures and hand geometry, but the technology is young in
general and needs time to work out the bugs. Accuracy is an important factor,
so if costs descend for optical scanners and they become more widespread, this
may become the medium of choice. Yet there are factors that can render this
biometric unreliable, such as cataracts. Again, the optimal solution may require
several biometrics to ensure proper identification and accuracy beyond merely
subjective limits. It would appear at this juncture that smart cards with several
embedded biometrics, from which an arbitrary subset can be extracted, may be
the best compromise. Only the future will give us the answer.
Exercises
9.8. Prove that in the Guillou-Quisquater identification scheme, Alice is
identified to Bob if γ ≡ α (mod n).

9.9. In the Guillou-Quisquater signature scheme, prove that Bob may accept
that Alice's signature is valid, provided that t′ ≡ t (mod φ(n)).
9.10. The following is a signature scheme.9.21 Let l, l′ ∈ N be such that l + 1 < l′,
and assume that H is a cryptographic hash function (see Footnote 3.1),
generated using, say, SHA-1 (see page 182). Choose two l′-bit safe primes
p and q (see page 120) and let n = pq. Also, choose quadratic residues
t_1, t_2 modulo n and a random (l + 1)-bit prime p′. The public key is
(n, t_1, t_2, p′) and the private key is (p, q). Suppose that m is the message
to be signed. Select a random (l + 1)-bit prime p′′ ≠ p′ and a random
quadratic residue t_3 modulo n. Then solve u^{p′′} ≡ t_2 t_1^{H(u′)} (mod n) for
u, where u′ satisfies t_3^{p′} ≡ u′ t_1^{H(m)} (mod n). Thus, the signature on m is
(p′′, u, t_3). Show how this signature can be verified once it is checked that
p′′ is an (l + 1)-bit integer different from p′.
9.21 In 2000, Cramer and Shoup proved that, under reasonable assumptions, this signature
scheme is provably secure against adaptive chosen-message attacks. They call the values
l, l′ security parameters. Also, in 2000, the authors of [96] presented an undeniable signature
scheme that can be used in conjunction with the RSA scheme given on page 135, typically
a hash-and-sign mechanism. See also [118].


