


The N (FLANK) is encoded by looking at the intersection of column N and the row starting
with R (SECRETKEY), resulting in the cipher letter E.
The K (FLANK) is encoded by looking at the intersection of column K and the row starting
with E (SECRETKEY), resulting in the cipher letter O.

The process continues until the entire text message FLANK EAST ATTACK AT DAWN is
encrypted. The process can also be reversed. For instance, the plaintext F still corresponds to the
cipher letter X, found at the intersection of row F (FLANK) and the column starting with S (SECRETKEY).
When using the Vigenere cipher and the message is longer than the key, the key is repeated. For
example, the key stream SECRETKEYSECRETKEYSEC is required to encode FLANK EAST
DAWN:
Secret key: SECRE TKEY SECRET KE YSEC

Plaintext: FLANK EAST ATTACK AT DAWN

Cipher text: XPCEO XKWR SXVRGD KX BSAP

Although the Vigenere cipher uses a longer key, it can still be cracked. For this reason, a better ci-
pher method was required.
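The tableau lookup described above is equivalent to adding letter positions modulo 26, which can be sketched in a few lines of Python. The function name is illustrative, not part of the course material:

```python
def vigenere_encrypt(plaintext, key):
    """Shift each letter by the matching key letter (A=0 ... Z=25), skipping spaces."""
    out, i = [], 0
    for ch in plaintext.upper():
        if ch.isalpha():
            shift = ord(key[i % len(key)].upper()) - ord("A")
            out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
            i += 1  # the key advances only on letters, so spaces are passed through
        else:
            out.append(ch)
    return "".join(out)

print(vigenere_encrypt("FLANK EAST ATTACK AT DAWN", "SECRETKEY"))
# → XPCEO XKWR SXVRGD KX BSAP
```

Running the sketch reproduces the ciphertext shown above, with the key wrapping around once SECRETKEY is exhausted.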
Gilbert Vernam was an AT&T Bell Labs engineer who, in 1917, invented and patented the stream
cipher and later co-invented the one-time pad cipher. Vernam proposed a teletype cipher in which a
prepared key consisting of an arbitrarily long, non-repeating sequence of numbers was kept on
paper tape. It was then combined character by character with the plaintext message to produce the
ciphertext. To decipher the ciphertext, the same paper tape key was again combined character by
character, producing the plaintext. Each tape was used only once, hence the name one-time pad.
As long as the key tape does not repeat or is not reused, this type of cipher is immune to cryptana-
lytic attack because the available ciphertext does not display the pattern of the key.
198 CCNA Security Course Booklet, Version 1.0




Several difficulties are inherent in using one-time pads in the real world. One difficulty is the
challenge of creating random data. Computers, because they have a mathematical foundation, are
incapable of creating truly random data. Additionally, if the key is used more than once, it is easy to
break. Stream ciphers such as RC4, which is widely used on the Internet, approximate the one-time
pad by generating the key with a computer; because that key is computer-generated, it is not truly
random. In addition to these issues, key distribution is also challenging with this type of cipher.
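Vernam's character-by-character combination of key tape and plaintext can be sketched as an XOR of the message with an equally long random key. This is a minimal illustration of the principle, not the original teletype mechanism; the names are invented for the example:

```python
import secrets

def xor_bytes(data, key):
    """Combine message and key byte by byte; XOR is its own inverse."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"FLANK EAST"
pad = secrets.token_bytes(len(message))   # one random key byte per message byte
ciphertext = xor_bytes(message, pad)

# Applying the same pad a second time recovers the plaintext.
assert xor_bytes(ciphertext, pad) == message
```

The security argument rests entirely on the pad being truly random, as long as the message, and never reused — exactly the conditions the text says are hard to meet in practice.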


7.1.3 Cryptanalysis
As long as there has been cryptography, there has been cryptanalysis. Cryptanalysis is the practice
and study of determining the meaning of encrypted information (cracking the code), without ac-
cess to the shared secret key.
A variety of methods are used in cryptanalysis.
Brute-Force Attack
In a brute-force attack, an attacker tries every possible key with the decryption algorithm knowing
that eventually one of them will work. All encryption algorithms are vulnerable to this attack. On
average, a brute-force attack succeeds about 50 percent of the way through the keyspace, which is
the set of all possible keys. The objective of modern cryptographers is to have a keyspace large
enough that it takes too much money and too much time to accomplish a brute-force attack.
Recently, a DES cracking machine was used to recover a 56-bit DES key in 22 hours using brute
force. It is estimated that on the same equipment it would take 149 trillion years to crack Ad-
vanced Encryption Standard (AES) using the same method.
Ciphertext-Only Attack
In a ciphertext-only attack, the attacker has the ciphertext of several messages, all of which have
been encrypted using the same encryption algorithm, but the attacker has no knowledge of the un-
derlying plaintext. The job of the attacker is to recover the plaintext of as many messages as pos-
sible. Even better for the attacker is to deduce the key or keys used to encrypt the messages to
decrypt other messages encrypted with the same keys. The attacker could use statistical analysis to
deduce the key. These kinds of attacks are no longer practical, because modern algorithms produce
pseudorandom output that is resistant to statistical analysis.
Known-Plaintext Attack
In a known-plaintext attack, the attacker has access to the ciphertext of several messages, but also
knows something about the plaintext underlying that ciphertext. With knowledge of the underlying
protocol, file type, or some characteristic strings that appear in the plaintext, the attacker uses a
brute-force attack to try keys until decryption with the correct key produces a meaningful result.
This attack might be the most practical attack, because attackers can usually assume some features
of the underlying plaintext if they can only capture the ciphertext. Modern algorithms with enor-
mous keyspaces make it unlikely for this attack to succeed because, on average, an attacker must
search through at least half of the keyspace to be successful.
Chosen-Plaintext Attack
In a chosen-plaintext attack, the attacker chooses which data the encryption device encrypts and
observes the ciphertext output. A chosen-plaintext attack is more powerful than a known-plaintext
attack because the chosen plaintext might yield more information about the key. This attack is not
very practical because, unless the trusted network has been breached and the attacker already has
access to confidential information, it is often difficult or impossible to capture both the ciphertext
and plaintext.
Chapter 7: Cryptographic Systems 199




Chosen-Ciphertext Attack
In a chosen-ciphertext attack, the attacker can choose different ciphertext to be decrypted and has
access to the decrypted plaintext. With the pair, the attacker can search through the keyspace and
determine which key decrypts the chosen ciphertext into the captured plaintext. For example, the
attacker has access to a tamperproof encryption device with an embedded key. The attacker must
deduce the embedded key by sending data through the device. This attack is analogous to the
chosen-plaintext attack. Like the chosen-plaintext attack, this attack is not very practical. Unless
the trusted network has been breached, and the attacker already has access to confidential informa-
tion, it is difficult or impossible for the attacker to capture both the ciphertext and plaintext.
Meet-in-the-Middle
The meet-in-the-middle attack is a known-plaintext attack. The attacker knows a portion of the
plaintext and the corresponding ciphertext. The plaintext is encrypted with every possible key, and
the results are stored. The ciphertext is then decrypted using every key, until one of the results
matches one of the stored values.
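To illustrate the idea, consider a toy meet-in-the-middle attack against a doubled Caesar cipher. This is only a sketch of the technique (its real targets are double-encryption schemes such as 2DES), and all names and values are illustrative:

```python
# Toy meet-in-the-middle attack on a double Caesar cipher (two unknown shifts).
def shift(text, k):
    return "".join(chr((ord(c) - 65 + k) % 26 + 65) for c in text)

plain = "ATTACK"
cipher = shift(shift(plain, 5), 9)  # secret keys 5 and 9

# Step 1: encrypt the known plaintext under every possible first key; store the results.
table = {shift(plain, k1): k1 for k1 in range(26)}

# Step 2: decrypt the ciphertext under every possible second key until a result
# matches a stored value - the two searches "meet in the middle".
found = None
for k2 in range(26):
    middle = shift(cipher, -k2)
    if middle in table:
        found = (table[middle], k2)
        break

# For this toy cipher, any pair with k1 + k2 = 14 (mod 26) is equivalent to (5, 9).
assert (found[0] + found[1]) % 26 == 14
```

The point of the technique is cost: instead of searching all key pairs (26 × 26 here), the attacker does two single-key searches plus a table lookup.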
As an example of how to choose the cryptanalysis method, consider the Caesar cipher encrypted
code. The best way to crack the code is to use brute force. Because there are only 25 possible rota-
tions, it is not a big effort to try all possible rotations and see which one returns something that
makes sense.
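The brute-force approach can be sketched directly; the function name is illustrative:

```python
def caesar_decrypt(text, k):
    """Rotate each letter back by k positions, leaving spaces untouched."""
    return "".join(
        chr((ord(c) - 65 - k) % 26 + 65) if c.isalpha() else c for c in text
    )

# Try all 25 possible rotations and look for the one that reads as English.
for k in range(1, 26):
    print(k, caesar_decrypt("IODQN HDVW DWWDFN DW GDZQ", k))
# k = 3 recovers FLANK EAST ATTACK AT DAWN
```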
A more scientific approach is to use the fact that some characters in the English alphabet are used
more often than others. This method is called frequency analysis. For example, the letters E, T, and
A are the most popular letters used in the English language. The letters J, Q, X, and Z are the least
popular. Understanding this pattern can help discover which letters are probably included in the ci-
pher message.
For example, in the Caesar ciphered message IODQN HDVW DWWDFN DW GDZQ, the cipher
letter D appears six times, while the cipher letter W appears four times. There is a good possibility
that the cipher letters D and W represent either the plaintext E, T, or A. In this case, the D repre-
sents the letter A, and the W represents the letter T.
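The letter counts cited above can be reproduced with a few lines of Python (illustrative only):

```python
from collections import Counter

ciphertext = "IODQN HDVW DWWDFN DW GDZQ"
counts = Counter(c for c in ciphertext if c.isalpha())
print(counts.most_common(2))  # → [('D', 6), ('W', 4)]

# Guessing that the most frequent cipher letter D stands for plaintext A
# (or E, or T) suggests a rotation of 3.
guess_shift = (ord("D") - ord("A")) % 26
print(guess_shift)  # → 3
```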



7.1.4 Cryptology
Cryptology is the science of making and breaking secret codes. Cryptology combines the two sep-
arate disciplines of cryptography, which is the development and use of codes, and cryptanalysis,
which is the breaking of those codes. There is a symbiotic relationship between the two disci-
plines, because each makes the other one better. National security organizations employ members
of both disciplines and put them to work against each other.
There have been times when one of the disciplines has been ahead of the other. For example, dur-
ing the Hundred Years War between France and England, the cryptanalysts were ahead of the cryp-
tographers. France believed that the Vigenere cipher was unbreakable; however, the British were
able to crack it. Some historians believe that World War II largely turned on the fact that the win-
ning side on both fronts was much more successful than the losing side at cracking the encryption
of its adversary. Currently, it is believed that cryptographers have the edge.
Cryptanalysis is often used by governments in military and diplomatic surveillance, by enterprises
in testing the strength of security procedures, and by malicious hackers in exploiting weaknesses
in websites.
While cryptanalysis is often linked to mischievous purposes, it is actually a necessity. It is an
ironic fact of cryptography that it is impossible to prove an algorithm secure. It can only be proven




that it is not vulnerable to known cryptanalytic attacks. Therefore, there is a need for mathemati-
cians, scholars, and security forensic experts to keep trying to break the encryption methods.
In the world of communications and networking, authentication, integrity, and data confidentiality
are implemented in many ways using various protocols and algorithms. The choice of protocol and
algorithm varies based on the level of security required to meet the goals in the network security
policy.
For example, for message integrity, Message Digest 5 (MD5) is faster but less secure than SHA-2.
Confidentiality can be implemented using DES, 3DES, or the very secure AES. Again, the choice
varies depending on the security requirements specified in the network security policy document.
Old encryption algorithms, such as the Caesar cipher or the Enigma machine, were based on the
secrecy of the algorithm to achieve confidentiality. With modern technology, where reverse engi-
neering is often simple, public-domain algorithms are often used. With most modern algorithms,
successful decryption requires knowledge of the appropriate cryptographic keys. This means that
the security of encryption lies in the secrecy of the keys, not the algorithm. How can the keys be
kept secret?




7.2 Basic Integrity and Authenticity
7.2.1 Cryptographic Hashes
A hash function takes binary data, called the message, and produces a condensed representation,
called the message digest. Hashing is based on a one-way mathematical function that is relatively
easy to compute, but significantly harder to reverse. Grinding coffee is a good example of a one-
way function. It is easy to grind coffee beans, but it is almost impossible to put all of the tiny
pieces back together to rebuild the original beans.
The cryptographic hashing function is designed to verify and ensure data integrity. It can also be
used to verify authentication. The procedure takes a variable-length block of data and returns a
fixed-length bit string called the hash value or message digest.
Hashing is similar to calculating cyclic redundancy check (CRC) checksums, but it is much
stronger cryptographically. For instance, given a CRC value, it is easy to generate data with the
same CRC. With hash functions, it is computationally infeasible for two different sets of data to
come up with the same hash output. Every time the data is changed or altered, the hash value also
changes. Because of this, cryptographic hash values are often called digital fingerprints. They can
be used to detect duplicate data files, file version changes, and similar applications. These values
are used to guard against an accidental or intentional change to the data and accidental data cor-
ruption.
The cryptographic hash function is applied in many different situations:

- To provide proof of authenticity when it is used with a symmetric secret authentication key, such as IP Security (IPsec) or routing protocol authentication.
- To provide authentication by generating one-time and one-way responses to challenges in authentication protocols such as the PPP Challenge Handshake Authentication Protocol (CHAP).
- To provide a message integrity check proof, such as those used in digitally signed contracts, and public key infrastructure (PKI) certificates, such as those accepted when accessing a secure site using a browser.




Mathematically, a hash function (H) is a process that takes an input (x) and returns a fixed-size
string, which is called the hash value (h). The formula for the calculation is h = H(x).
A cryptographic hash function should have the following properties:

- The input can be any length.
- The output has a fixed length.
- H(x) is relatively easy to compute for any given x.
- H(x) is one way and not reversible.
- H(x) is collision free, meaning that two different input values will result in different hash results.
If a hash function is hard to invert, it is considered a one-way hash. Hard to invert means that,
given a hash value h, it is computationally infeasible to find some input x such that H(x) = h.
Hash functions are helpful when ensuring data is not changed accidentally, but they cannot ensure
that data is not changed deliberately. For instance, the sender wants to ensure that the message is
not altered on its way to the receiver. The sending device inputs the message into a hashing algo-
rithm and computes its fixed-length digest or fingerprint. Both the message and the hash are in
plaintext. This fingerprint is then attached to the message and sent to the receiver. The receiving
device removes the fingerprint from the message and inputs the message into the same hashing al-
gorithm. If the hash that is computed by the receiving device is equal to the one that is attached to
the message, the message has not been altered during transit.
When the message traverses the network, a potential attacker could intercept the message, change
it, recalculate the hash, and append it to the message. Hashing only prevents the message from
being changed accidentally, such as by a communication error. There is nothing unique to the
sender in the hashing procedure, so anyone can compute a hash for any data, as long as they have
the correct hash function.
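The integrity check described above can be sketched with Python's hashlib module (SHA-256 is used here only as a convenient example algorithm):

```python
import hashlib

# Sender: hash the message and attach the digest (both travel in plaintext).
message = b"FLANK EAST ATTACK AT DAWN"
fingerprint = hashlib.sha256(message).hexdigest()

# Receiver: recompute the digest over what arrived and compare.
received = b"FLANK EAST ATTACK AT DAWN"
assert hashlib.sha256(received).hexdigest() == fingerprint  # not altered in transit

# A deliberate attacker can simply recompute a matching digest after tampering,
# which is why a plain hash detects accidents, not attacks.
tampered = b"FLANK WEST ATTACK AT DAWN"
assert hashlib.sha256(tampered).hexdigest() != fingerprint
```

Nothing in the comparison involves a secret, which is exactly the gap the HMAC construction in the next topic closes.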
These are two well-known hash functions:

- Message Digest 5 (MD5) with 128-bit digests
- Secure Hash Algorithm 1 (SHA-1) with 160-bit digests

7.2.2 Integrity with MD5 and SHA-1
The MD5 algorithm is a hashing algorithm that was developed by Ron Rivest and is used in a vari-
ety of Internet applications today.
MD5 is a one-way function that makes it easy to compute a hash from the given input data, but
makes it computationally infeasible to compute the input data given only a hash value. MD5 is also collision resistant,
which means that two messages with the same hash are very unlikely to occur. MD5 is essentially
a complex sequence of simple binary operations, such as exclusive OR (XORs) and rotations, that
are performed on input data and produce a 128-bit digest.
The main algorithm is based on a compression function, which operates on blocks. The input is a
data block plus a feedback of previous blocks. Each 512-bit block is divided into sixteen 32-bit
sub-blocks. These sub-blocks are then rearranged with simple operations in a main loop, which
consists of four rounds. The output of the algorithm is a set of four 32-bit blocks, which are
concatenated to form a single 128-bit hash value. The message length is also encoded into the digest.
MD5 is based on MD4, an earlier algorithm. MD4 has been broken, and MD5 is now considered
less secure than SHA-1 by many authorities on cryptography. These authorities consider MD5 less
secure because some noncritical weaknesses have been found in one of the MD5 building blocks.




The U.S. National Institute of Standards and Technology (NIST) developed the Secure Hash Algo-
rithm (SHA), the algorithm that is specified in the Secure Hash Standard (SHS). SHA-1, published
in 1994, corrected an unpublished flaw in SHA. Its design is very similar to the MD4 and MD5
hash functions that Ron Rivest developed.
The SHA-1 algorithm takes a message of less than 2^64 bits in length and produces a 160-bit
message digest. The algorithm is slightly slower than MD5, but the larger message digest makes it
more secure against brute-force collision and inversion attacks.
NIST published four additional hash functions in the SHA family, each with longer digests:

- SHA-224 (224-bit)
- SHA-256 (256-bit)
- SHA-384 (384-bit)
- SHA-512 (512-bit)
These four versions are collectively known as SHA-2, although the term SHA-2 is not standard-
ized. SHA-1, SHA-224, SHA-256, SHA-384, and SHA-512 are the secure hash algorithms re-
quired by law for use in certain U.S. government applications, including use within other
cryptographic algorithms and protocols, for the protection of sensitive unclassified information.
Both MD5 and SHA-1 are based on MD4. This makes MD5 and SHA-1 similar in many ways.
SHA-1 and SHA-2 are more resistant to brute-force attacks because their digest is at least 32 bits
longer than the MD5 digest.
SHA-1 involves 80 steps, and MD5 involves 64 steps. The SHA-1 algorithm must also process a
160-bit buffer instead of the 128-bit buffer of MD5. Because there are fewer steps, MD5 usually
executes more quickly, given the same device.
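The digest lengths discussed in this topic can be confirmed with Python's hashlib:

```python
import hashlib

# Digest length in bits for each algorithm discussed above.
for name in ("md5", "sha1", "sha224", "sha256", "sha384", "sha512"):
    print(name, hashlib.new(name, b"").digest_size * 8)
# md5 128, sha1 160, sha224 224, sha256 256, sha384 384, sha512 512
```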
When choosing a hashing algorithm, SHA-1 or SHA-2 is preferred over MD5. MD5 has not been
proven to contain any critical flaws, but its security is questionable today. If performance is an
issue, the MD5 algorithm is slightly faster than the algorithm for SHA-1. Keep in mind that MD5
may prove to be substantially less secure than SHA-1.
Are hashes only used to provide data integrity?


7.2.3 Authenticity with HMAC
In cryptography, a keyed-hash message authentication code (HMAC or KHMAC) is a type of mes-
sage authentication code (MAC). An HMAC is calculated using a specific algorithm that combines
a cryptographic hash function with a secret key. Hash functions are the basis of the protection
mechanism of HMACs.
Only the sender and the receiver know the secret key, and the output of the hash function now de-
pends on the input data and the secret key. Only parties who have access to that secret key can
compute the digest of an HMAC function. This characteristic defeats man-in-the-middle attacks
and provides authentication of the data origin.
If two parties share a secret key and use HMAC functions for authentication, a properly con-
structed HMAC digest of a message that a party has received indicates that the other party was the
originator of the message, because it is the only other entity possessing the secret key.
The cryptographic strength of the HMAC depends on the cryptographic strength of the underlying
hash function, on the size and quality of the key, and the size of the hash output length in bits.




Cisco technologies use two well-known HMAC functions:

- Keyed MD5 (HMAC-MD5), based on the MD5 hashing algorithm
- Keyed SHA-1 (HMAC-SHA-1), based on the SHA-1 hashing algorithm
When an HMAC digest is created, data of an arbitrary length is input into the hash function, to-
gether with a secret key. The result is a fixed-length hash that depends on the data and the secret
key.
Care must be taken to distribute secret keys only to the parties who are involved because, if the se-
cret key is compromised, the other party can forge and change packets, violating data integrity.
Consider an example where a sender wants to ensure that the message is not altered in transit, and
wants to provide a way for the receiver to authenticate the origin of the message.
The sending device inputs data and the secret key into the hashing algorithm and calculates the
fixed-length HMAC digest or fingerprint. This authenticated fingerprint is then attached to the
message and sent to the receiver.
The receiving device removes the fingerprint from the message and uses the plaintext message
with its secret key as input to the same hashing function. If the fingerprint that is calculated by the
receiving device is equal to the fingerprint that was sent, the message has not been altered. Addi-
tionally, the origin of the message is authenticated, because only the sender possesses a copy of the
shared secret key. The HMAC function has ensured the authenticity of the message.
IPsec virtual private networks (VPNs) rely on HMAC functions to authenticate the origin of every
packet and provide data integrity checking.
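The HMAC exchange described above can be sketched with Python's hmac module; the key and message are invented for the example:

```python
import hashlib
import hmac

secret = b"shared-secret-key"            # known only to sender and receiver
message = b"FLANK EAST ATTACK AT DAWN"

# Sender: compute the keyed digest and attach it to the message.
tag = hmac.new(secret, message, hashlib.sha1).hexdigest()

# Receiver: recompute with its copy of the secret and compare in constant time.
check = hmac.new(secret, message, hashlib.sha1).hexdigest()
assert hmac.compare_digest(tag, check)

# Without the secret, an attacker cannot forge a valid tag for altered data.
forged = hmac.new(b"wrong-key", message, hashlib.sha1).hexdigest()
assert not hmac.compare_digest(tag, forged)
```

Unlike the plain hash in the previous topic, a matching tag here proves both that the message is intact and that it came from a holder of the shared secret.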
Cisco products use hashing for entity authentication, data integrity, and data authenticity purposes:

- Cisco IOS routers use hashing with secret keys in an HMAC-like manner to add authentication information to routing protocol updates.
- IPsec gateways and clients use hashing algorithms, such as MD5 and SHA-1 in HMAC mode, to provide packet integrity and authenticity.
- Cisco software images that are downloaded from Cisco.com have an MD5-based checksum available so that customers can check the integrity of downloaded images.
- Hashing can also be used in a feedback-like mode to provide a shared secret key to encrypt data. For example, TACACS+ uses an MD5 hash as the key to encrypt the session.
Digital signatures are an alternative to HMAC.


7.2.4 Key Management
Key management is often considered the most difficult part of designing a cryptosystem. Many
cryptosystems have failed because of mistakes in their key management, and all modern crypto-
graphic algorithms require key management procedures. In practice, most attacks on cryptographic
systems are aimed at the key management level, rather than at the cryptographic algorithm itself.
There are several essential characteristics of key management to consider:

- Generation - It was up to Caesar to choose the key of his cipher. The Vigenere cipher key is also chosen by the sender and receiver. In a modern cryptographic system, key generation is usually automated and not left to the end user. Good random number generators are needed to ensure that all keys are equally likely to be generated, so that an attacker cannot predict which keys are more likely to be used.
- Verification - Some keys are better than others. Almost all cryptographic algorithms have some weak keys that should not be used. With the help of key verification procedures, these keys can be identified and regenerated if they occur. With the Caesar cipher, using a key of 0 or 25 does not encrypt the message, so it should not be used.
- Storage - On a modern multi-user operating system that uses cryptography, a key can be stored in memory. This presents a possible problem when that memory is swapped to disk, because a Trojan horse program installed on the PC of a user could then gain access to the private keys of that user.
- Exchange - Key management procedures should provide a secure key exchange mechanism that allows secure agreement on the keying material with the other party, probably over an untrusted medium.
- Revocation and Destruction - Revocation notifies all interested parties that a certain key has been compromised and should no longer be used. Destruction erases old keys in a manner that prevents malicious attackers from recovering them.

Two terms that are used to describe keys are key length and keyspace. The key length is the length
of the key measured in bits, and the keyspace is the number of possibilities that can be generated by
a specific key length. As key lengths increase, the keyspace increases exponentially:

- A 2-bit key length = a keyspace of 2^2 = 4, because there are four possible keys (00, 01, 10, and 11).
- A 3-bit key length = a keyspace of 2^3 = 8, because there are eight possible keys (000, 001, 010, 011, 100, 101, 110, 111).
- A 4-bit key length = a keyspace of 2^4 = 16 possible keys.
- A 40-bit key length = a keyspace of 2^40 = 1,099,511,627,776 possible keys.

The keyspace of an algorithm is the set of all possible key values. A key that has n bits produces a
keyspace that has 2^n possible key values. By adding one bit to the key, the keyspace is effectively
doubled. For example, DES with its 56-bit keys has a keyspace of more than
72,000,000,000,000,000 (2^56) possible keys. By adding one bit to the key length, the keyspace
doubles, and an attacker needs twice the amount of time to search the keyspace.
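The arithmetic can be checked directly:

```python
# Keyspace sizes from the list above: an n-bit key gives 2**n possible keys.
for bits in (2, 3, 4, 40, 56):
    print(bits, 2 ** bits)

assert 2 ** 40 == 1_099_511_627_776   # the 40-bit keyspace quoted above
assert 2 ** 57 == 2 * 2 ** 56         # each extra bit doubles the keyspace
print(f"{2 ** 56:,}")                 # → 72,057,594,037,927,936 possible DES keys
```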
Almost every algorithm has some weak keys in its keyspace that enable an attacker to break the
encryption via a shortcut. Weak keys show regularities in encryption or poor encryption. For in-
stance, DES has four keys for which encryption is the same as decryption. This means that if one
of these weak keys is used to encrypt plaintext, an attacker can use the weak key to encrypt the ci-
phertext and reveal the plaintext.
The DES weak keys are those that produce 16 identical subkeys. This occurs when the key bits
are:

- Alternating ones plus zeros (0101010101010101)
- Alternating F plus E (FEFEFEFEFEFEFEFE)
- E0E0E0E0F1F1F1F1
- 1F1F1F1F0E0E0E0E
It is very unlikely that such keys would be chosen, but implementations should still verify all keys
and prevent weak keys from being used. With manual key generation, take special care to avoid
defining weak keys.




Several types of cryptographic keys can be generated:

- Symmetric keys, which can be exchanged between two routers supporting a VPN
- Asymmetric keys, which are used in secure HTTPS applications
- Digital signatures, which are used when connecting to a secure website
- Hash keys, which are used in symmetric and asymmetric key generation, digital signatures, and other types of applications
Regardless of the type of key, all keys share similar issues. Choosing a suitable key length is one
issue. If the cryptographic system is trustworthy, the only way to break it is with a brute-force at-
tack. A brute-force attack is a search through the entire keyspace, trying all the possible keys to
find a key that decrypts the data. If the keyspace is large enough, the search requires an enormous
amount of time, making such an exhaustive effort impractical.
On average, an attacker has to search through half of the keyspace before the correct key is found.
The time that is needed to accomplish this search depends on the computer power that is available
to the attacker. Current key lengths can easily make any such attempt futile, because it takes mil-
lions or billions of years to complete the search when a sufficiently long key is used. With modern
algorithms that are trusted, the strength of protection depends solely on the length of the key.
Choose the key length so that it protects data confidentiality or integrity for an adequate period of
time. Data that is more sensitive and needs to be kept secret longer must use longer keys.
Performance is another issue that can influence the choice of a key length. An administrator must
find a good balance between the speed and protective strength of an algorithm, because some algo-
rithms, such as the Rivest, Shamir, and Adleman (RSA) algorithm, run slowly because of large key
sizes. Strive for adequate protection, while enabling unhindered communication over untrusted
networks.
The estimated funding of the attacker should also affect the choice of key length. When assessing
the risk of someone breaking the encryption algorithm, estimate the resources of the attacker and
how long the data must be protected. For example, classic DES can be broken by a $1 million ma-
chine in a couple of minutes. If the data being protected is worth significantly more than the $1
million needed to acquire a cracking device, then classic DES is a bad choice. It would take an
attacker a million years or more to crack 168-bit 3DES or 128-bit RC4, which makes either of
these key length choices more than adequate.
Because of the rapid advances in technology and cryptanalytic methods, the key size that is needed
for a particular application is constantly increasing. For example, part of the strength of the RSA
algorithm is the difficulty of factoring large numbers. If a 1024-bit number is hard to factor, a
2048-bit number is going to be even harder. Even with the fastest computers available today, it
would take many lifetimes to factor a 1024-bit number that is the product of two 512-bit prime
numbers. Of course, this advantage is lost if an easy way to factor large numbers is found, but
cryptographers consider this possibility unlikely. The rule “the longer the key, the better” is valid,
except for possible performance reasons.



7.3 Confidentiality
7.3.1 Encryption
Cryptographic encryption can provide confidentiality at several layers of the OSI model by incor-
porating various tools and protocols:

- Proprietary link-encrypting devices provide Data Link Layer confidentiality.
- Network Layer protocols, such as the IPsec protocol suite, provide Network Layer confidentiality.
- Protocols such as Secure Sockets Layer (SSL) or Transport Layer Security (TLS) provide Session Layer confidentiality.
- Secure email, secure database sessions (Oracle SQL*Net), and secure messaging (Lotus Notes sessions) provide Application Layer confidentiality.
There are two approaches to ensuring the security of data when using various encryption methods.
The first is to protect the algorithm. If the security of an encryption system is based on the secrecy
of the algorithm itself, the algorithm code must be heavily guarded. If the algorithm is revealed,
every party that is involved must change the algorithm. The second approach is to protect the keys.
With modern cryptography, all algorithms are public. The cryptographic keys ensure the secrecy of
the data. Cryptographic keys are sequences of bits that are input into an encryption algorithm to-
gether with the data to be encrypted.
Two basic classes of encryption algorithms protect the keys: symmetric and asymmetric. Each dif-
fers in its use of keys. Symmetric encryption algorithms use the same key, sometimes called a se-
cret key, to encrypt and decrypt data. The key must be pre-shared. A pre-shared key is known by
the sender and receiver before any encrypted communications commence. Because both parties are
guarding a shared secret, the encryption algorithms used can have shorter key lengths. Shorter key
lengths mean faster execution. Symmetric algorithms are generally much less computationally in-
tensive than asymmetric algorithms.
Asymmetric encryption algorithms use different keys to encrypt and decrypt data. Secure mes-
sages can be exchanged without having to have a pre-shared key. Because both parties do not have
a shared secret, very long key lengths must be used to thwart attackers. These algorithms are re-
source intensive and slower to execute. In practice, asymmetric algorithms are typically hundreds
to thousands of times slower than symmetric algorithms.
To help understand the differences between both types of algorithms, consider an example where
Alice and Bob live in different locations and want to exchange secret messages with one another
through the mail system. In this example, Alice wants to send a secret message to Bob.
Symmetric Algorithm
In the symmetric algorithm example, Alice and Bob have identical keys to a single padlock. These
keys were exchanged prior to sending any secret messages. Alice writes a secret message and puts
it in a small box that she locks using the padlock with her key. She mails the box to Bob. The mes-
sage is safely locked inside the box as the box makes its way through the post office system. When
Bob receives the box, he uses his key to unlock the padlock and retrieve the message. Bob can use
the same box and padlock to send a secret reply back to Alice.
Asymmetric Algorithm
In the asymmetric algorithm example, Bob and Alice do not exchange keys prior to sending secret
messages. Instead, Bob and Alice each have a separate padlock with separate corresponding keys.
For Alice to send a secret message to Bob, she must first contact him and ask him to send his open
padlock to her. Bob sends the padlock but keeps his key. When Alice receives the padlock, she
writes her secret message and puts it in a small box. She also puts her open padlock in the box, but keeps her key. She then locks the box with Bob's padlock. When Alice locks the box, she is no longer able to get inside because she does not have a key to that padlock. She mails the box to Bob. As the box is sent through the mail system, no one is able to open the box. When Bob receives the box, he can use his key to unlock the box and retrieve the message from Alice. To send a secure reply, Bob puts his secret message in the box along with his open padlock and locks the box using Alice's padlock. Bob mails the secured box back to Alice.
Chapter 7: Cryptographic Systems 207




Symmetric, or secret key, encryption is the most commonly used form of cryptography, because
the shorter key length increases the speed of execution. Additionally, symmetric key algorithms are
based on simple mathematical operations that can easily be accelerated by hardware. Symmetric
encryption is often used for wire-speed encryption in data networks and to provide bulk encryption
when data privacy is required, such as to protect a VPN.
With symmetric encryption, key management can be a challenge. The encryption and decryption
keys are the same. The sender and the receiver must exchange the symmetric, secret key using a
secure channel before any encryption can occur. The security of a symmetric algorithm rests in the secrecy of the symmetric key; anyone who obtains the key can encrypt and decrypt messages.
DES, 3DES, AES, the Software-optimized Encryption Algorithm (SEAL), and the Rivest cipher (RC) series, which includes RC2, RC4, RC5, and RC6, are all well-known encryption algorithms that use symmetric keys. There are many other encryption algorithms, such as Blowfish, Twofish, Threefish, and Serpent, but these algorithms are either not supported on Cisco platforms or have yet to gain wide acceptance.
The most commonly used techniques in symmetric encryption cryptography are block ciphers and
stream ciphers.
Block Ciphers
Block ciphers transform a fixed-length block of plaintext into a block of ciphertext of the same length, commonly 64 or 128 bits. Block size refers to how much data is encrypted at any one time. Currently, the block size, also known as the fixed length, for many block ciphers is either 64 bits or 128 bits. The key length refers to the size of the encryption key that is used. The ciphertext is decrypted by applying the reverse transformation to the ciphertext block, using the same secret key.
Block ciphers usually result in output data that is larger than the input data, because the ciphertext must be a multiple of the block size. For example, DES encrypts blocks in 64-bit chunks using a 56-bit key. To accomplish this, the block algorithm takes data one chunk at a time, for example, 8 bytes per chunk, until the entire block is full. If there is less input data than one full block, the algorithm pads the remainder with artificial data until the full 64 bits are used.
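The padding step can be sketched as follows; the zero-byte fill here is an illustrative choice (real implementations commonly use schemes such as PKCS #7 padding):

```python
BLOCK_SIZE = 8  # 64 bits, the DES block size

def pad(data: bytes) -> bytes:
    """Pad data with zero bytes so its length is a multiple of the block size."""
    remainder = len(data) % BLOCK_SIZE
    if remainder == 0:
        return data
    return data + b"\x00" * (BLOCK_SIZE - remainder)

padded = pad(b"HELLO")       # 5 bytes of input
assert len(padded) == 8      # filled out to one full 64-bit block
```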
Common block ciphers include DES with a 64-bit block size, AES with a 128-bit block size, and
RSA with a variable block size.
Stream Ciphers
Unlike block ciphers, stream ciphers encrypt plaintext one byte or one bit at a time. Stream ciphers
can be thought of as a block cipher with a block size of one bit. With a stream cipher, the transfor-
mation of these smaller plaintext units varies, depending on when they are encountered during the
encryption process. Stream ciphers can be much faster than block ciphers, and generally do not in-
crease the message size, because they can encrypt an arbitrary number of bits.
The Vigenere cipher is an example of a stream cipher. This cipher is periodic, because the key is of
finite length, and the key is repeated if it is shorter than the message.
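A minimal sketch of the repeating-key Vigenere cipher shows this periodicity (letters A-Z only, matching the FLANK example earlier in the chapter):

```python
def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    """Repeating-key Vigenere cipher over the letters A-Z."""
    sign = -1 if decrypt else 1
    out = []
    for i, ch in enumerate(text):
        # The key repeats whenever it is shorter than the message (the cipher's period).
        shift = ord(key[i % len(key)]) - ord("A")
        out.append(chr((ord(ch) - ord("A") + sign * shift) % 26 + ord("A")))
    return "".join(out)

cipher = vigenere("FLANK", "SECRETKEY")   # cipher == "XPCEO"
assert vigenere(cipher, "SECRETKEY", decrypt=True) == "FLANK"
```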
Common stream ciphers include A5, which is used to encrypt GSM cell phone communications,
and the RC4 cipher. DES can also be used in stream cipher mode.
Choosing an encryption algorithm is one of the most important decisions a security professional
makes when building a cryptosystem. Two main criteria should be considered when selecting an
encryption algorithm for an organization:
■ The algorithm is trusted by the cryptographic community. Most new algorithms are broken very quickly, so algorithms that have resisted attacks for a number of years are preferred. Inventors and promoters often oversell the benefits of new algorithms.
■ The algorithm adequately protects against brute-force attacks. A good cryptographic algorithm is designed in such a way that it resists common cryptographic attacks. The best way to break data that is protected by the algorithm is to try to decrypt the data using all the possible keys. The amount of time that such an attack needs depends on the number of possible keys, but is generally a very long time. With appropriately long keys, such attacks are usually considered infeasible. If the algorithm is considered trusted, there is no shortcut to break it, and the attacker must search through the keyspace to guess the correct key. The algorithm must also allow key lengths that satisfy the confidentiality requirements of an organization. For example, DES does not provide enough protection for most modern needs because of its short key.
Other criteria to consider:
■ The algorithm supports variable and long key lengths and scalability. Variable key lengths and scalability are also desirable attributes of a good encryption algorithm. The longer the encryption key, the longer it takes an attacker to break it. For example, a 16-bit key has 65,536 possible keys, but a 56-bit key has 7.2 x 10^16 possible keys. Scalability provides flexible key length and enables the administrator to select the strength and speed of the encryption required.
■ The algorithm does not have export or import restrictions. Carefully consider export and import restrictions when using encryption internationally. Some countries do not allow the export of encryption algorithms, or allow only the export of these algorithms with shorter keys. Some countries impose import restrictions on cryptographic algorithms.
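The key-length arithmetic behind the first criterion above is easy to check: each added key bit doubles the keyspace.

```python
# Number of possible keys doubles with every added key bit.
keys_16 = 2 ** 16
keys_56 = 2 ** 56

assert keys_16 == 65_536
assert keys_56 == 72_057_594_037_927_936   # roughly 7.2 x 10^16
```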


7.3.2 Data Encryption Standard
Data Encryption Standard (DES) is a symmetric encryption algorithm that usually operates in
block mode. It encrypts data in 64-bit blocks. The DES algorithm is essentially a sequence of per-
mutations and substitutions of data bits combined with an encryption key. The same algorithm and
key are used for both encryption and decryption.
DES has a fixed key length. The key is 64 bits long, but only 56 bits are used for encryption. The remaining 8 bits are used for parity. The least significant bit of each key byte is used to indicate odd parity.
A DES key is always 56 bits long. When DES is used with a weaker encryption of a 40-bit key, the encryption key is 40 secret bits and 16 known bits, which makes the key length 56 bits. In this case, DES has a key strength of 40 bits.
Although DES typically uses block cipher mode, it can also encrypt using stream cipher mode. To
encrypt or decrypt more than 64 bits of data, DES uses two standardized block cipher modes,
Electronic Code Book (ECB) or Cipher Block Chaining (CBC).
Both cipher modes use the logical operation XOR with the following definition:
1 XOR 1 = 0
1 XOR 0 = 1
0 XOR 1 = 1
0 XOR 0 = 0
Block Cipher Mode
ECB mode serially encrypts each 64-bit plaintext block using the same 56-bit key. If two identical
plaintext blocks are encrypted using the same key, their ciphertext blocks are the same. Therefore,
an attacker could identify similar or identical traffic flowing through a communications channel. The attacker could then, without even knowing the meaning of the traffic, build a catalog of messages and replay them later to possibly gain unauthorized entry. For example, an attacker might capture the login sequence of someone with administrative privilege whose traffic is protected by DES-ECB and then replay it. That risk is undesirable, so CBC mode was invented to mitigate it.
In CBC mode, each 64-bit plaintext block is exclusive ORed (XORed) bitwise with the previous
ciphertext block and then is encrypted using the DES key. The encryption of each block depends
on previous blocks. Encryption of the same 64-bit plaintext block can result in different ciphertext
blocks.
CBC mode can help guard against certain attacks, but it cannot help against sophisticated crypt-
analysis or an extended brute-force attack.
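The difference between the two modes can be illustrated with a toy "block cipher" (a simple XOR with the key, standing in for DES; this is not secure and is for illustration only):

```python
BLOCK = 8  # 64-bit blocks, as in DES

def toy_encrypt(block: bytes, key: bytes) -> bytes:
    """Stand-in 'block cipher': XOR with the key (NOT secure, illustration only)."""
    return bytes(b ^ k for b, k in zip(block, key))

def ecb(plain: bytes, key: bytes) -> list:
    # ECB: every block is encrypted independently with the same key.
    return [toy_encrypt(plain[i:i + BLOCK], key) for i in range(0, len(plain), BLOCK)]

def cbc(plain: bytes, key: bytes, iv: bytes) -> list:
    prev, out = iv, []
    for i in range(0, len(plain), BLOCK):
        # CBC: XOR the plaintext block with the previous ciphertext block, then encrypt.
        mixed = bytes(p ^ c for p, c in zip(plain[i:i + BLOCK], prev))
        prev = toy_encrypt(mixed, key)
        out.append(prev)
    return out

plaintext = b"SAMEDATA" * 2          # two identical 64-bit plaintext blocks
key, iv = b"8BYTEKEY", b"INITVECT"
assert ecb(plaintext, key)[0] == ecb(plaintext, key)[1]          # ECB leaks the repetition
assert cbc(plaintext, key, iv)[0] != cbc(plaintext, key, iv)[1]  # CBC hides it
```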
Stream Cipher Mode
To encrypt or decrypt more than 64 bits of data, DES uses two common stream cipher modes:
■ Cipher feedback (CFB), which is similar to CBC and can encrypt any number of bits, including single bits or single characters.
■ Output feedback (OFB), which generates keystream blocks that are then XORed with the plaintext blocks to produce the ciphertext.
In stream cipher mode, the cipher uses previous ciphertext and the secret key to generate a pseudo-
random stream of bits, which only the secret key can generate. To encrypt data, the data is XORed
with the pseudo-random stream bit by bit, or sometimes byte by byte, to obtain the ciphertext. The
decryption procedure is the same. The receiver generates the same random stream using the secret
key, and XORs the ciphertext with the pseudo-random stream to obtain the plaintext.
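The XOR symmetry described above is why the same routine both encrypts and decrypts. In this sketch, a seeded random.Random stands in for a real keystream generator (an illustrative assumption, not a secure cipher):

```python
import random

def keystream_xor(data: bytes, seed: int) -> bytes:
    """XOR data with a pseudo-random keystream derived from a shared secret (the seed).

    random.Random is a stand-in for a cryptographic keystream generator;
    it is NOT secure, but it shows the XOR symmetry of stream ciphers.
    """
    rng = random.Random(seed)
    return bytes(b ^ rng.randrange(256) for b in data)

secret = 0xC0FFEE
cipher = keystream_xor(b"FLANK EAST", secret)
# Decryption is the same operation: XOR with the same keystream recovers the plaintext.
assert keystream_xor(cipher, secret) == b"FLANK EAST"
```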
There are several things to consider when securing DES-encrypted data:
■ Change keys frequently to help prevent brute-force attacks.
■ Use a secure channel to communicate the DES key from the sender to the receiver.
■ Consider using DES in CBC mode. With CBC, the encryption of each 64-bit block depends on previous blocks. CBC is the most widely used mode of DES.
■ Test a key to see if it is a weak key before using it. DES has 4 weak keys and 12 semi-weak keys. Because there are 2^56 possible DES keys, the chance of picking one of these keys is very small. However, because testing the key has no significant impact on the encryption time, testing is recommended.
Because of its short key length, DES is suitable only for protecting data for a very short time. 3DES is a better choice for protecting data, because its algorithm is very trusted and has higher security strength.


7.3.3 3DES
With advances in computer-processing power, the original 56-bit DES key became too short to
withstand attack from those with a medium-sized budget for hacking technology. One way to in-
crease the DES effective key length, without changing the well-analyzed algorithm itself, is to use
the same algorithm with different keys several times in a row.
The technique of applying DES three times in a row to a plaintext block is called 3DES. Today, brute-force attacks on 3DES are considered infeasible, and because the basic DES algorithm has been well tested in the field for more than 35 years, it is considered very trustworthy.
The Cisco IPsec implementation uses DES and 3DES in CBC mode.
3DES uses a method called 3DES-Encrypt-Decrypt-Encrypt (3DES-EDE) to encrypt plaintext.
First, the message is encrypted using the first 56-bit key, known as K1. Next, the data is decrypted
using the second 56-bit key, known as K2. Finally, the data is encrypted again, using the third 56-
bit key, known as K3.
The 3DES-EDE procedure is much more effective at increasing security than simply encrypting
the data three times with three different keys. Encrypting data three times in a row using different
56-bit keys equals a 58-bit key strength. The 3DES-EDE procedure, on the other hand, provides
encryption with an effective key length of 168 bits. If keys K1 and K3 are equal, as in some imple-
mentations, a less secure encryption of 112 bits is achieved.
To decrypt the message, the opposite of the 3DES-EDE method is used. First, the ciphertext is decrypted using key K3. Next, the data is encrypted using key K2. Finally, the data is decrypted using key K1.
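A toy per-byte cipher (standing in for DES; illustration only) shows the EDE pattern and one reason it was chosen: when all three keys are equal, EDE collapses to a single encryption, keeping 3DES hardware compatible with single-key DES.

```python
def enc(data: bytes, key: int) -> bytes:
    """Toy per-byte cipher standing in for DES (illustration only)."""
    return bytes((b + key) % 256 for b in data)

def dec(data: bytes, key: int) -> bytes:
    return bytes((b - key) % 256 for b in data)

def ede_encrypt(data: bytes, k1: int, k2: int, k3: int) -> bytes:
    # 3DES-EDE: encrypt with K1, decrypt with K2, encrypt with K3.
    return enc(dec(enc(data, k1), k2), k3)

def ede_decrypt(data: bytes, k1: int, k2: int, k3: int) -> bytes:
    # Reverse order: decrypt with K3, encrypt with K2, decrypt with K1.
    return dec(enc(dec(data, k3), k2), k1)

msg = b"ATTACK AT DAWN"
assert ede_decrypt(ede_encrypt(msg, 5, 9, 23), 5, 9, 23) == msg
# With K1 == K2 == K3, EDE collapses to a single encryption -- the
# compatibility property that EEE (encrypt three times) would not have.
assert ede_encrypt(msg, 7, 7, 7) == enc(msg, 7)
```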
Although 3DES is very secure, it is also very resource intensive. For this reason, the AES encryp-
tion algorithm was developed. It has proven to be as secure as 3DES, but with much faster results.


7.3.4 Advanced Encryption Standard
For a number of years, it was recognized that DES would eventually reach the end of its useful-
ness. In 1997, the AES initiative was announced, and the public was invited to propose encryption
schemes to replace DES. After a five-year standardization process in which 15 competing designs
were presented and evaluated, the U.S. National Institute of Standards and Technology (NIST) se-
lected the Rijndael block cipher as the AES algorithm.
The Rijndael cipher, developed by Joan Daemen and Vincent Rijmen, has a variable block length
and key length. Rijndael is an iterated block cipher, which means that the initial input block and
cipher key undergo multiple transformation cycles before producing output. The algorithm can op-
erate over a variable-length block using variable-length keys. A 128-, 192-, or 256-bit key can be
used to encrypt data blocks that are 128, 192, or 256 bits long, and all nine combinations of key
and block length are possible.
The accepted AES implementation of Rijndael contains only some of the capabilities of the Rijn-
dael algorithm. The algorithm is written so that the block length or the key length or both can eas-
ily be extended in multiples of 32 bits, and the system is specifically designed for efficient
implementation in hardware or software on a range of processors.
The AES algorithm has been analyzed extensively and is now used worldwide. Although it has not
been proven in day-to-day use to the degree that 3DES has, AES with the Rijndael cipher is the
more efficient algorithm. It can be used in high-throughput, low-latency environments, especially
when 3DES cannot handle the throughput or latency requirements. AES is expected to gain trust as
time passes and more attacks have been attempted against it.
AES was chosen to replace DES for a number of reasons. The key length of AES makes the key
much stronger than DES. AES runs faster than 3DES on comparable hardware. AES is more effi-
cient than DES and 3DES on comparable hardware, usually by a factor of five when it is compared
with DES. AES is more suitable for high-throughput, low-latency environments, especially if pure
software encryption is used.
Despite these advantages, AES is a relatively young algorithm. The golden rule of cryptography
states that a mature algorithm is always more trusted. 3DES is therefore a more trusted choice in
terms of strength, because it has been tested and analyzed for 35 years.
AES is available in the following Cisco VPN devices as an encryption transform:
■ IPsec-protected traffic using Cisco IOS Release 12.2(13)T and later
■ Cisco PIX Firewall software version 6.3 and later
■ Cisco ASA software version 7.0 and later
■ Cisco VPN 3000 software version 3.6 and later

7.3.5 Alternate Encryption Algorithms
The Software-optimized Encryption Algorithm (SEAL) is an alternative algorithm to software-
based DES, 3DES, and AES. Phillip Rogaway and Don Coppersmith designed SEAL in 1993. It is
a stream cipher that uses a 160-bit encryption key. Because it is a stream cipher, data is encrypted continuously rather than in blocks, making it much faster than block ciphers. However, it has a longer initialization phase, during which a large set of tables is created using SHA.
SEAL has a lower impact on the CPU compared to other software-based algorithms. SEAL sup-
port was added to Cisco IOS Software Release 12.3(7)T.
SEAL has several restrictions:
■ The Cisco router and the peer must support IPsec.
■ The Cisco router and the other peer must run an IOS image with k9 long keys (the k9 subsystem).
■ The router and the peer must not have hardware IPsec encryption.
The RC algorithms were designed all or in part by Ronald Rivest, who also invented MD5. The
RC algorithms are widely deployed in many networking applications because of their favorable
speed and variable key-length capabilities.
There are a number of widely used RC algorithms:
■ RC2 - A variable key-size block cipher that was designed as a "drop-in" replacement for DES.
■ RC4 - The world's most widely used stream cipher. This algorithm is a variable key-size Vernam stream cipher that is often used in file encryption products and for secure communications, such as within SSL. It is not considered a one-time pad, because its key is not random. The cipher can be expected to run very quickly in software and is considered secure, although it can be implemented insecurely, as in Wired Equivalent Privacy (WEP).
■ RC5 - A fast block cipher that has a variable block size and key size. RC5 can be used as a drop-in replacement for DES if the block size is set to 64 bits.
■ RC6 - Developed in 1997, RC6 was an AES finalist (Rijndael won). It is a block cipher designed by Rivest, Sidney, and Yin, based on RC5, with a 128-bit block size and key sizes of 128, 192, or 256 bits. Its main design goal was to meet the requirements of AES.

7.3.6 Diffie-Hellman Key Exchange
Whitfield Diffie and Martin Hellman invented the Diffie-Hellman (DH) algorithm in 1976. The
DH algorithm is the basis of most modern automatic key exchange methods and is one of the most
common protocols used in networking today. Diffie-Hellman is not an encryption mechanism and
is not typically used to encrypt data. Instead, it is a method to securely exchange the keys that en-
crypt data.
In a symmetric key system, both sides of the communication must have identical keys. Securely
exchanging those keys has always been a challenge. Asymmetric key systems address this chal-
lenge because they use two keys. One key is called the private key, and the other is the public key.
The private key is secret and known only to the user. The public key is openly shared and easily
distributed.
DH is a mathematical algorithm that allows two computers to generate an identical shared secret
on both systems, without having communicated before. The new shared key is never actually ex-
changed between the sender and receiver. But because both parties know it, it can be used by an
encryption algorithm to encrypt traffic between the two systems. Its security is based on the diffi-
culty of calculating the discrete logarithms of very large numbers.
DH is commonly used when data is exchanged using an IPsec VPN, data is encrypted on the Inter-
net using either SSL or TLS, or when SSH data is exchanged.
Unfortunately, asymmetric key systems are extremely slow for any sort of bulk encryption. This is
why it is common to encrypt the bulk of the traffic using a symmetric algorithm such as DES,
3DES, or AES and use the DH algorithm to create keys that will be used by the encryption algo-
rithm.
To help understand how DH is used, consider this example of communication between Alice and
Bob.
Step 1. To start a DH exchange, Alice and Bob must agree on two non-secret numbers. The first number, g, is a base number (also called the generator). The second number, p, is a prime number that is used as the modulus. These numbers are usually public and are chosen from a table of known values. Typically, g is a very small number, such as 2, 3, 4, or 5, and p is a larger prime number.
Step 2. Alice generates a secret number, Xa, and Bob generates his secret number, Xb.
Step 3. Based on g, p, and her secret number, Alice calculates a public value (Ya) using the DH algorithm. She sends her public value (Ya) to Bob.
Step 4. Bob also calculates a public value (Yb) using g, p, and his secret number. Bob sends his public value (Yb) to Alice. These values are not the same.
Step 5. Alice now performs a second DH calculation using Bob's public value (Yb) as the new base number.
Step 6. Bob also performs a second DH calculation using Alice's public value (Ya) as the new base number.
The result is that Alice and Bob both come up with the same result (Z). This new value is now a
shared secret between Alice and Bob and can be used by an encryption algorithm as a shared se-
cret key between Alice and Bob.
Anyone listening on the channel cannot compute the secret value, because only g, p, Ya, and Yb are known, and at least one secret value is needed to calculate the shared secret. Unless the attackers can compute the discrete logarithm of the above values to recover Xa or Xb, they cannot obtain the shared secret.
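The exchange can be sketched with deliberately small numbers; real deployments use primes of 2048 bits or more, and the values of g, p, and the secret exponents here are illustrative only.

```python
# Toy Diffie-Hellman with small numbers (illustration only).
g, p = 5, 23            # public base (generator) and prime modulus

xa, xb = 6, 15          # Alice's and Bob's secret numbers (Xa and Xb)

ya = pow(g, xa, p)      # Alice's public value: g^Xa mod p
yb = pow(g, xb, p)      # Bob's public value:   g^Xb mod p

# Each side raises the OTHER side's public value to its OWN secret exponent.
z_alice = pow(yb, xa, p)
z_bob = pow(ya, xb, p)

# Both arrive at the same shared secret Z without ever sending it.
assert z_alice == z_bob
```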
Although DH is used with symmetric algorithms to create shared keys, it is important to remember
that it is actually an asymmetric algorithm.
What other asymmetric algorithms are there and what are they used for?
7.4 Public Key Cryptography
7.4.1 Symmetric Versus Asymmetric Encryption
Asymmetric algorithms, also sometimes called public-key algorithms, are designed so that the key
that is used for encryption is different from the key that is used for decryption. The decryption key
cannot, in any reasonable amount of time, be calculated from the encryption key and vice versa.
In the example of Alice and Bob, they did not exchange pre-shared keys prior to communication.
Instead, they each had separate padlocks and corresponding keys. In this same manner, asymmet-
ric algorithms are used to exchange secret messages without ever having had a shared secret before
the exchange.
There are four protocols that use asymmetric key algorithms:
■ Internet Key Exchange (IKE), a fundamental component of IPsec VPNs
■ Secure Socket Layer (SSL), now implemented as the IETF standard TLS
■ SSH
■ Pretty Good Privacy (PGP), a computer program that provides cryptographic privacy and authentication and is often used to increase the security of email communications
Asymmetric algorithms use two keys: a public key and a private key. Both keys are capable of the
encryption process, but the complementary matched key is required for decryption. For example, if
a public key encrypts the data, the matching private key decrypts the data. The opposite is also
true. If a private key encrypts the data, the corresponding public key decrypts the data.
This process enables asymmetric algorithms to achieve authentication, integrity, and confidential-
ity.
The confidentiality objective of asymmetric algorithms is achieved when the encryption process is
started with the public key. The process can be summarized using the formula:
Public Key (Encrypt) + Private Key (Decrypt) = Confidentiality
When the public key is used to encrypt the data, the private key must be used to decrypt the data.
Only one host has the private key, therefore, confidentiality is achieved.
If the private key is compromised, another key pair must be generated to replace the compromised
key.
The authentication objective of asymmetric algorithms is achieved when the encryption process is
started with the private key. The process can be summarized using the formula:
Private Key (Encrypt) + Public Key (Decrypt) = Authentication
When the private key is used to encrypt the data, the corresponding public key must be used to de-
crypt the data. Because only one host has the private key, only that host could have encrypted the
message, providing authentication of the sender. Typically, no attempt is made to preserve the se-
crecy of the public key, so any number of hosts can decrypt the message. When a host successfully
decrypts a message using a public key, it is trusted that the private key encrypted the message,
which verifies who the sender is. This is a form of authentication.
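Both formulas can be demonstrated with a textbook RSA key pair. The primes (61 and 53) and the message value are toy assumptions for illustration; real RSA uses keys of 1024 bits or more plus padding.

```python
# Textbook RSA with tiny primes (p=61, q=53) -- illustration only.
n, e, d = 3233, 17, 2753    # modulus, public exponent, private exponent

m = 65                      # a message encoded as a number smaller than n

# Confidentiality: Public Key (Encrypt) + Private Key (Decrypt)
c = pow(m, e, n)            # anyone can encrypt with the public key (e, n)
assert pow(c, d, n) == m    # only the private-key holder can decrypt

# Authentication: Private Key (Encrypt) + Public Key (Decrypt)
s = pow(m, d, n)            # only the private-key holder can produce this
assert pow(s, e, n) == m    # anyone can verify with the public key
```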
To send a message that ensures confidentiality, authentication, and integrity, two encryption phases must be combined.
Phase 1 - Confidentiality
Alice wants to send a message to Bob ensuring message confidentiality (only Bob can read the
document in plaintext). Alice uses the public key of Bob to cipher the message. Only Bob can de-
cipher it, using his private key.
Phase 2 - Authentication and Integrity
Alice also wants to ensure message authentication and integrity (Bob is sure that the document was not modified and was sent by Alice). Alice uses her private key to cipher a hash of the message. Bob then uses the public key of Alice to decipher the hash and verify that the message was not modified (the deciphered hash equals a hash that Bob computes locally over the message). Additionally, this verifies that Alice is definitely the sender, because nobody else has Alice's private key.
By sending a message that was ciphered using Bob's public key together with a hash that was ciphered using Alice's private key, confidentiality, authenticity, and integrity are ensured.
A variety of well-known asymmetric key algorithms are available:
■ Diffie-Hellman
■ Digital Signature Standard (DSS), which incorporates the Digital Signature Algorithm (DSA)
■ RSA encryption algorithms
■ ElGamal
■ Elliptic curve techniques
Although the mathematics differ with each algorithm, they all share one trait in that the calcula-
tions required are complicated. Their design is based on computational problems, such as factoring
extremely large numbers or computing discrete logarithms of extremely large numbers. As a re-
sult, computation takes more time for asymmetric algorithms. In fact, asymmetric algorithms can
be up to 1,000 times slower than symmetric algorithms. Because they lack speed, asymmetric al-
gorithms are typically used in low-volume cryptographic mechanisms, such as key exchanges that
have no inherent key exchange technology, and digital signatures.
The key management of asymmetric algorithms tends to be simpler than that of symmetric algo-
rithms, because usually one of the two encryption or decryption keys can be made public.
Typical key lengths for asymmetric algorithms range from 512 to 4096 bits. Key lengths greater than or equal to 1024 bits are considered trustworthy, while key lengths shorter than 1024 bits are considered unreliable for most algorithms.
It is not relevant to compare the key length of asymmetric and symmetric algorithms because the
underlying design of the two algorithm families differs greatly. To illustrate this point, it is gener-
ally thought that a 2048-bit encryption key of RSA is roughly equivalent to a 128-bit key of RC4
in terms of resistance against brute-force attacks.


7.4.2 Digital Signatures
Handwritten signatures have long been used as a proof of authorship of the contents of a docu-
ment. Digital signatures can provide the same functionality as handwritten signatures, and much
more. For example, assume a customer sends transaction instructions via an email to a stockbro-
ker, and the transaction turns out badly for the customer. It is conceivable that the customer could
claim never to have sent the transaction order or that someone forged the email.
The brokerage could protect itself by requiring the use of digital signatures before accepting in-
structions via email. In fact, digital signatures are often used in the following situations:
■ To provide a unique proof of data source, which can be generated only by a single party, such as contract signing in e-commerce environments.
■ To authenticate a user by using the private key of that user and the signature that it generates.
■ To prove the authenticity and integrity of PKI certificates.
■ To provide a secure timestamp using a trusted time source.
Specifically, digital signatures provide three basic security services:
■ Authenticity of digitally signed data - Digital signatures authenticate a source, proving that a certain party has seen and signed the data in question.
■ Integrity of digitally signed data - Digital signatures guarantee that the data has not changed from the time it was signed.
■ Nonrepudiation of the transaction - The recipient can take the data to a third party, and the third party accepts the digital signature as proof that this data exchange did take place. The signing party cannot repudiate that it has signed the data.
To better understand nonrepudiation, consider using HMAC functions, which also provide authen-
ticity and integrity guarantees. With HMAC functions, two or more parties share the same authen-
tication key and can compute the HMAC fingerprint. Therefore, taking received data and its
HMAC fingerprint to a third party does not prove that the other party sent this data. Other users
could have generated the same HMAC fingerprint, because they have a copy of the HMAC authen-
tication key. With digital signatures, each party has a unique, secret signature key, which is not
shared with any other party, making nonrepudiation possible.
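The shared-key limitation can be seen with Python's standard hmac module: both parties derive the identical fingerprint, so a matching tag proves integrity and shared-key authenticity, but not which party produced it.

```python
import hashlib
import hmac

shared_key = b"shared-secret"           # known to BOTH Alice and Bob
message = b"transfer 100 credits"

# Either party can compute the same fingerprint over the same message.
tag_alice = hmac.new(shared_key, message, hashlib.sha256).digest()
tag_bob = hmac.new(shared_key, message, hashlib.sha256).digest()

# The tags are identical, so a third party cannot tell WHO generated one --
# which is exactly why HMAC cannot provide nonrepudiation.
assert hmac.compare_digest(tag_alice, tag_bob)
```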
Digital signatures have specific properties that enable entity authentication and data integrity:
■ The signature is authentic and not forgeable. The signature is proof that the signer, and no one else, signed the document.
■ The signature is not reusable. The signature is part of the document and cannot be moved to a different document.
■ The signature is unalterable. After a document is signed, it cannot be altered.
■ The signature cannot be repudiated. For legal purposes, the signature and the document are considered physical things. Signers cannot claim later that they did not sign it.
In some countries, including the United States, digital signatures are considered equivalent to
handwritten signatures if they meet certain provisions. Some of these provisions include the proper
protection of the certificate authority, the trusted signer of all other public keys, and the proper
protection of the private keys of the users. In such a scenario, users are responsible for keeping
their private keys private, because a stolen private key can be used to steal their identity.
Many Cisco products use digital signatures:
■ IPsec gateways and clients use digital signatures to authenticate their Internet Key Exchange (IKE) sessions if the administrator chooses digital certificates and the IKE RSA signature authentication method.
■ Cisco SSL endpoints, such as Cisco IOS HTTP servers and the Cisco Adaptive Security Device Manager (ASDM), use digital signatures to prove the identity of the SSL server.
■ Some of the service provider-oriented voice management protocols for billing and settlement use digital signatures to authenticate the involved parties.
The current signing procedures of digital signatures are not simply implemented by public-key op-
erations. In fact, a modern digital signature is based on a hash function and a public-key algorithm.
There are six steps to the digital signature process:
Step 1. The sending device (signer) creates a hash of the document.
Step 2. The sending device encrypts the hash with the private key of the signer.
Step 3. The encrypted hash, known as the signature, is appended to the document.
Step 4. The receiving device (verifier) accepts the document with the digital signature and obtains the public key of the sending device.
Step 5. The receiving device decrypts the signature using the public key of the sending device. This step unveils the assumed hash value of the sending device.
Step 6. The receiving device makes a hash of the received document, without its signature, and compares this hash to the decrypted signature hash. If the hashes match, the document is authentic; it was signed by the assumed signer and has not changed since it was signed.
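The six steps can be sketched with Python's hashlib and toy RSA numbers (the key values here are textbook-sized illustrative assumptions; real signatures use large RSA or DSA keys with standardized padding, and the hash is not reduced modulo a tiny modulus as it is below):

```python
import hashlib

# Toy RSA key pair (p=61, q=53) -- illustration only.
n, e, d = 3233, 17, 2753

def sign(document: bytes) -> int:
    # Steps 1-2: hash the document, then encrypt the hash with the private key.
    digest = int(hashlib.sha256(document).hexdigest(), 16) % n
    return pow(digest, d, n)

def verify(document: bytes, signature: int) -> bool:
    # Steps 5-6: decrypt the signature with the public key and compare
    # it to a hash computed locally over the received document.
    digest = int(hashlib.sha256(document).hexdigest(), 16) % n
    return pow(signature, e, n) == digest

doc = b"buy 100 shares"
sig = sign(doc)                       # Step 3: the signature travels with the document
assert verify(doc, sig)               # hashes match: authentic and unmodified
assert not verify(doc, (sig + 1) % n) # a forged signature fails verification
```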
Both encryption and digital signatures are required to ensure that the message is private and has
not changed.
In addition to ensuring authenticity and integrity of messages, digital signatures are commonly
used to provide assurance of the authenticity and integrity of mobile and classic software code.
The executable files, or possibly the entire installation package of a program, are wrapped with a
digitally signed envelope, which allows the end user to verify the signature before installing the
software.
Digitally signing code provides several assurances about the code:

- The code has not been modified since it left the software publisher.
- The code is authentic and is actually sourced by the publisher.
- The publisher undeniably publishes the code. This provides nonrepudiation of the act of
  publishing.
The digital signature could be forged only if someone obtained the private key of the publisher.
The assurance level of digital signatures is extremely high if the private key is protected properly.
The user of the software must also obtain the public key, which is used to verify the signature. The
user can obtain the key in a secure fashion. For example, the key could be included with the instal-
lation of the operating system or transferred securely over the network.
Protecting the private key is of the highest importance when using digital signatures. If the signa-
ture key of an entity is compromised, the attacker can sign data in the name of that entity, and re-
pudiation is not possible. To exchange verification keys in a scalable fashion, a secure but
accessible method must be deployed.
Well-known asymmetric algorithms, such as RSA or Digital Signature Algorithm (DSA), are typi-
cally used to perform digital signing.
DSA
In 1994, the U.S. NIST selected the DSA as the Digital Signature Standard (DSS). DSA is based
on the discrete logarithm problem and can only provide digital signatures.
Chapter 7: Cryptographic Systems 217




DSA, however, has had several criticisms. Critics claim that DSA lacks the flexibility of RSA. The
verification of signatures is too slow, and the process by which NIST chose DSA was too secretive
and arbitrary. In response to these criticisms, the DSS now incorporates two additional algorithm
choices: Digital Signature Using Reversible Public Key Cryptography (which uses RSA) and the
Elliptic Curve Digital Signature Algorithm (ECDSA).
A network administrator must decide whether RSA or DSA is more appropriate for a given situa-
tion. DSA signature generation is faster than DSA signature verification. On the other hand, RSA
signature verification is much faster than signature generation.


7.4.3 Rivest, Shamir, and Adleman
RSA is one of the most common asymmetric algorithms. Ron Rivest, Adi Shamir, and Len
Adleman invented the RSA algorithm in 1977. It was a patented public-key algorithm. The patent
expired in September 2000, and the algorithm is now in the public domain. Of all the public-key
algorithms that were proposed over the years, RSA is by far the easiest to understand and imple-
ment.
The RSA algorithm is very flexible because it has a variable key length, so the key can be short-
ened for faster processing. There is a tradeoff; the shorter the key, the less secure it is.
The RSA keys are usually 512 to 2048 bits long. RSA has withstood years of extensive
cryptanalysis. Although the security of RSA has been neither proved nor disproved, this track
record suggests a high level of confidence in the algorithm. The security of RSA is based on the
difficulty of factoring very
large numbers. If an easy method of factoring these large numbers were discovered, the effective-
ness of RSA would be destroyed.
The RSA algorithm is based on a public key and a private key. The public key can be published
and given away, but the private key must be kept secret. It is not possible to determine the private
key from the public key using any computationally feasible algorithm and vice versa.
RSA keys are long term and are usually changed or renewed after some months or even years. It is
currently the most common method for signature generation and is used widely in e-commerce
systems and Internet protocols.
RSA is about a hundred times slower than DES in hardware, and about a thousand times slower
than DES in software. This performance problem is the main reason that RSA is typically used
only to protect small amounts of data.
RSA is mainly used to ensure confidentiality of data by performing encryption, and to perform au-
thentication of data or nonrepudiation of data, or both, by generating digital signatures.
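The math behind RSA encryption can be sketched with the classic textbook toy example. The primes below are illustrative only; as noted above, real keys are 512 to 2048 bits, and real systems also apply padding such as OAEP.

```python
# Textbook RSA with toy numbers (illustration only).
p, q = 61, 53
n = p * q                 # public modulus: 3233
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent: e*d % phi == 1

m = 65                    # plaintext encoded as an integer < n
c = pow(m, e, n)          # encrypt with the public key (e, n)
print(c)                  # 2790
print(pow(c, d, n))       # 65: decrypting with (d, n) recovers m
```

The security claim in the text maps directly onto this sketch: an attacker who could factor n back into p and q could recompute phi and therefore d, which is why RSA depends on the difficulty of factoring large numbers.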


7.4.4 Public Key Infrastructure
In large organizations, it is impractical for all parties to continually exchange identification docu-
ments. With trusted third-party protocols, all individuals agree to accept the word of a neutral third
party. Presumably, the third party does an in-depth investigation prior to the issuance of creden-
tials. After this in-depth investigation, the third party issues credentials that are difficult to forge.
From that point forward, all individuals who trust the third party simply accept the credentials that
the third party issues. Certificate servers are an example of a trusted third party.
As an example, a large organization such as Cisco goes to reasonable lengths to identify employ-
ees and contractors, and then issues an ID badge. This badge is relatively difficult to forge. Mea-
sures are in place to protect the integrity of the badge and the badge issuance. Because of these
measures, all Cisco personnel accept this badge as authoritative proof of the identity of any individual.
If this method did not exist and 10 individuals needed to validate each other, 90 validations would
need to be performed before everyone would have validated everyone else. Adding a single indi-
vidual to the group would require an additional 20 validations because each one of the original 10
individuals would need to authenticate the new individual, and the new individual would need to
authenticate the original 10. This method does not scale well.
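The arithmetic above can be checked directly: with n parties, each must validate the other n - 1, and every validation is directional, so the total is n × (n − 1). A quick sketch:

```python
def validations(n: int) -> int:
    # Each of the n individuals validates the other n - 1, and every
    # validation is one-directional, so the total is n * (n - 1).
    return n * (n - 1)

print(validations(10))                     # 90 validations for 10 people
print(validations(11) - validations(10))   # adding one person costs 20 more
```

The quadratic growth of this count is precisely why direct pairwise validation does not scale and a trusted third party is needed.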
For another example, assume that Alice applies for a driver's license. In this process, she submits
evidence of her identity and her qualifications to drive. Her application is approved, and a license
is issued. Later, Alice needs to cash a check at the bank. Upon presenting the check to the bank
teller, the bank teller asks her for ID. The bank, because it trusts the government agency that
issued the driver's license, verifies her identity and cashes her check.
Certificate servers function like the driver's license bureau. The driver's license is analogous to a
certificate in a Public Key Infrastructure (PKI) or another technology that supports certificates.
How does PKI actually work?
PKI is the service framework that is needed to support large-scale public key-based technologies.
A PKI allows for very scalable solutions and is becoming an extremely important authentication
solution for VPNs.
PKI is a set of technical, organizational, and legal components that are needed to establish a sys-
tem that enables large-scale use of public key cryptography to provide authenticity, confidentiality,
integrity, and nonrepudiation services. The PKI framework consists of the hardware, software,
people, policies, and procedures needed to create, manage, store, distribute, and revoke digital cer-
tificates.
Two very important terms must be defined when talking about a PKI: certificates and certificate
authority (CA).
Certificates are used for various purposes in a network. Certificates are public information. They
contain the binding between the names and public keys of entities and are usually published in a
centralized directory so that other PKI users can easily access them.
The CA is a trusted third-party entity that issues certificates. The certificate of a user is always
signed by a CA. Every CA also has a certificate containing its public key, signed by itself. This is
called a CA certificate or, more properly, a self-signed CA certificate.
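As an illustration, a self-signed CA certificate (one whose issuer and subject are identical) can be created with OpenSSL. The file names and the subject below are arbitrary examples, not values from the course:

```shell
# Create an RSA private key and a self-signed CA certificate in one step.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout ca-key.pem -out ca-cert.pem \
  -days 365 -subj "/CN=Example Root CA"

# Inspect the certificate: for a self-signed CA, subject == issuer.
openssl x509 -in ca-cert.pem -noout -subject -issuer
```

Because the certificate is signed with its own private key, anyone who trusts this CA certificate can then verify any end-entity certificate the CA issues.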
A single CA server can facilitate many applications that require digital certificates for authentica-
tion purposes. Using CA servers is a solution that simplifies the management of authentication and
provides strong security due to the strength of the cryptographic mechanisms that are used in com-
bination with digital certificates.
PKI is more than just a CA and its users. In addition to implementing the enabling technology,
building a large PKI involves a huge amount of organizational and legal work. There are five main
components of a PKI:

- PKI users, such as people, devices, and servers
- CAs for key management
- Storage and protocols
- Supporting organizational framework, known as practices, and user authentication using
  Local Registration Authorities (LRAs)
- Supporting legal framework


Many vendors offer CA servers as a managed service or as an end-user product, including
VeriSign, Entrust Technologies, RSA, CyberTrust, Microsoft, and Novell. CAs, especially out-
sourced ones, can issue certificates of a number of classes, which determine how trusted a certifi-
cate is. A single outsourcing vendor such as VeriSign might run a single CA, issuing certificates of
different classes, and its customers use the CA they need depending on the desired level of trust.
A certificate class is usually identified by a number. The higher the number, the more trusted the
certificate. The trust in the certificate is usually determined by how rigorous the procedure was
that verified the identity of the holder when the certificate was issued:

- Class 0 is for testing purposes in which no checks have been performed.
- Class 1 is for individuals, with a focus on verification of email.
- Class 2 is for organizations for which proof of identity is required.
- Class 3 is for servers and software signing, for which independent verification and checking
  of identity and authority is done by the issuing certificate authority.
- Class 4 is for online business transactions between companies.
- Class 5 is for private organizations or governmental security.

For example, a class 1 certificate might require an email reply from the holder to confirm the wish
to enroll. This kind of confirmation is a weak authentication of the holder. For a class 3 or 4 cer-
tificate, the future holder must prove identity and authenticate the public key by showing up in per-
son with at least two official ID documents.
Some PKIs offer the possibility, or even require the use, of two key pairs per entity. The first pub-
lic and private key pair is intended only for encryption operations. The public key encrypts, and
the private key decrypts. The second public and private key pair is intended for digital signing op-
erations. The private key signs, and the public key verifies the signature.
These keys are sometimes called usage or special keys. They may differ in key length and even in
the choice of the public key algorithm. If the PKI requires two key pairs per entity, a user has two
certificates. An encryption certificate contains the public key of the user, which encrypts the data,
and a signature certificate contains the public key of the user, which verifies the digital signature
of the user.
The following scenarios typically employ usage keys:

- When an encryption certificate is used much more frequently than a signing certificate, the
  public and private key pair is more exposed because of its frequent usage. In this case, it
  might be a good idea to shorten the lifetime of the key pair and change it more often, while
  having a separate signing private and public key pair with a longer lifetime.
- When different levels of encryption and digital signing are required because of legal, export,
  or performance issues, usage keys allow an administrator to assign different key lengths to the
  two pairs.
- When key recovery is desired, such as when a copy of a user's private key is kept in a central
