
5.4: Authenticating Messages


    Returning to Figure \(5.3.1\), when receiving a message, the guard needs a dependable way of determining what the sender said in the message and who sent the message. Answering these two questions is the province of message authentication. Message authentication techniques prevent an adversary from forging messages that pretend to be from someone else, and allow the guard to determine whether an adversary has modified a legitimate message while it was en route.

    In practice, the ability to establish who sent a message is limited; all that the guard can establish is that the message came from the same origin as some previous message. For this reason, what the guard really does is to establish that a message is a member of a chain of messages identified with some principal. The chain may begin in a message that was communicated by a physical rendezvous. That physical rendezvous securely binds the identity of a real-world person with the name of a principal, and both the real-world person and that principal can now be identified as the origin of the current message. For some applications it is unimportant to establish the real-world person that is associated with the origin of the message. It may be sufficient to know that the message originated from the same source as earlier messages and that the message is unaltered. Once the guard has identified the principal (and perhaps the real-world identity associated with the principal), then we may be able to use psychological means to establish trust in the principal, as explained in Section 5.3.

    To establish that a message belongs to a chain of messages, a guard must be able to verify the authenticity of the message. Message authenticity requires both:

    • data integrity: the message has not been changed since it was sent;
    • origin authenticity: the claimed origin of the message, as learned by the receiver from the message content or from other information, is the actual origin.

    The issues of data integrity and origin authenticity are closely related. Messages that have been altered effectively have a new origin. If an origin cannot be determined, the very concept of message integrity becomes questionable (the message is unchanged with respect to what?). Thus, integrity of message data has to include message origin, and vice versa. The reason for distinguishing them is that designers use different techniques to tackle the two.

    In the context of authentication, we mostly talk about authenticating messages. However, the concept also applies to communication streams, files, and other objects containing data. A stream is authenticated by authenticating successive segments of the stream. We can think of each segment as a message from the point of view of authentication.

    Message Authentication is Different from Confidentiality

    The goal of message confidentiality (keeping the content of messages private) and the goal of message authentication are related but different, and separate techniques are usually used for each objective, much as in the physical world: with paper mail, signatures authenticate the author, and sealed envelopes protect the letter from being read by others.

    Authentication and confidentiality can be combined in four ways, three of which have practical value:

    • Authentication and confidentiality. An application (e.g., electronic banking) might require both authentication and confidentiality of messages. This case is like a signed letter in a sealed envelope, which is appropriate if the content of the message (e.g., it contains personal financial information) must be protected and the origin of the message must be established (e.g., the user who owns the bank account).
    • Authentication only. An application, like DNS, might require just authentication for its announcements. This case is like a signed letter in an unsealed envelope. It is appropriate, for example, for a public announcement from the president of a company to its employees.
    • Confidentiality only. The value of a confidential message with an unverified origin is not great. This case is like a letter in a sealed envelope, but without a signature. If the guard has no idea who sent the letter, what level of confidence can the guard have in the content of the letter? Moreover, if the receiver doesn’t know who the sender is, the receiver has no basis to trust the sender to keep the content of the message confidential; for all the receiver knows, the sender may have released the content of the letter to someone else too. For these reasons, confidentiality without authentication is uncommon in practice.
    • Neither authentication nor confidentiality. This combination is appropriate if there are no intentionally malicious users or there is a separate code of ethics.

    To illustrate the difference between authentication and confidentiality, consider a user who browses a Web service that publishes data about company stocks (e.g., the company name, the current trading price, recent news announcements about the company, and background information about the company). This information travels from the Web service over the Internet, an untrusted network, to the user’s Web browser. We can think of this action as a message that is being sent from the Web service to the user’s browser:

    From: stock.com
    To: John’s browser
    Body: At 10 a.m. Generic Moneymaking, Inc. was trading at $1

    The user is not interested in confidentiality of the data; the stock data is public anyway. The user, however, is interested in the authenticity of the stock data, since the user might decide to trade a particular stock based on that data. The user wants to be assured that the data is coming from "stock.com" (and not from a site that is pretending to be stock.com) and that the data was not altered when it crossed the Internet. For example, the user wants to be assured that an adversary hasn’t changed "Generic Moneymaking, Inc.", the price, or the time. We need a scheme that allows the user to verify the authenticity of the publicly readable content of the message. The next section introduces cryptography for this purpose. When cryptography is used, content that is publicly readable is known as plaintext or cleartext.

     

    Closed versus Open Designs and Cryptography

    In the authentication model there are two secure areas (a physical space or a virtual address space in which information can be safely confined) separated by an insecure communication path (as shown in Figure \(\PageIndex{1}\)) and two boxes: SIGN and VERIFY. Our goal is to set up a secure channel between the two secure areas that provides authenticity for messages sent between the two secure areas. (Section 5.5 shows how one can implement a secure channel that also provides confidentiality.)

    In one secure area, Alice sends out a message M to Bob and creates an authentication tag for the message using SIGN. The message and tag pass through an insecure area to get to a second secure area, where Bob receives the message and can either ACCEPT or REJECT it based on the authentication status that is returned when he applies VERIFY to the tag.

    Figure \(\PageIndex{1}\): A closed design for authentication relies on the secrecy of an algorithm.

    Before diving into the details of how to implement SIGN and VERIFY, let's consider how we might use them. In a secure area, the sender Alice creates an authentication tag for a message by invoking SIGN with the message as an argument. The tag and message are communicated through the insecure area to the receiver Bob. The insecure communication path might be a physical wire running down the street or a connection across the Internet. In both cases, we must assume that a wire-tapper can easily and surreptitiously gain access to the message and authentication tag. Bob verifies the authenticity of the message by a computation based on the tag and the message. If the received message is authentic, VERIFY returns ACCEPT; otherwise it returns REJECT.

    Cryptographic transformations can be used to protect against a wide range of attacks on messages, including attacks on the authenticity of messages. Our interest in cryptographic transformations is not the underlying mathematics (which is fascinating by itself, as can be seen in Section 5.9), but that these transformations can be used to implement security primitives such as SIGN and VERIFY.

    One approach to implementing a cryptographic system, called a closed design, is to keep the construction of cryptographic primitives, such as VERIFY and SIGN, secret, on the theory that if the adversary doesn’t understand how SIGN and VERIFY work, it will be difficult to break the tag. Auguste Kerckhoffs observed more than a century ago\(^*\) that this closed approach is typically bad, since it violates the basic design principles for secure systems in a number of ways. It doesn’t minimize what needs to be secret. If the design is compromised, the whole system needs to be replaced. A review to certify the design must be limited, since it requires revealing the secret design to the reviewers. Finally, it is unrealistic to attempt to maintain secrecy of any system that receives wide distribution.

    These problems with closed designs led Kerckhoffs to propose a design rule, now known as Kerckhoffs' criterion, which is a particular application of the principles of open design and least privilege: minimize secrets. For a cryptographic system, open design means that we concentrate the secret in a corner of a cryptographic transformation, and make the secret removable and easily changeable. An effective way of doing this is to reduce the secret to a string of bits; this secret bit string is known as a cryptographic key, or key for short. By choosing a longer key, one can generally increase the time for the adversary to compromise the transformation.

    Figure \(\PageIndex{2}\) shows an open design for SIGN and VERIFY. In this design the algorithms for SIGN and VERIFY are public and the only secrets are two keys, K1 and K2. What distinguishes this open design from a closed design is (1) that public analysis of SIGN and VERIFY can provide verification of their strength without compromising their security; and (2) that it is easy to change the secret parts (i.e., the two keys) without having to reanalyze the system’s strength.

    In one secure area, Alice sends out a message M and also uses SIGN, in combination with the key K1, to generate an authentication tag for the message. The message and tag pass through an insecure area before getting to a second secure area, where Bob can use the key K2 to VERIFY the tag and either ACCEPT or REJECT the message based on the status of the tag.

    Figure \(\PageIndex{2}\): An open design for authentication relies on the secrecy of keys.

    Depending on the relation between K1 and K2, there are two basic approaches to key-based transformations of a message: shared-secret cryptography and public-key cryptography. In shared-secret cryptography K1 is easily computed from K2 and vice versa. Usually in shared-secret cryptography K1 = K2, and we make that assumption in the text that follows.

    In public-key cryptography K1 cannot be derived easily from K2 (and vice versa). In public-key cryptography, only one of the two keys must be kept secret; the other one can be made public. (A better label for public-key cryptography might be "cryptography without shared secrets", or even "non-secret encryption", which is the label adopted by the intelligence community. Either of those labels would better contrast it with shared-secret cryptography, but the label "public-key cryptography" has become too widely used to try to change it.)

    Public-key cryptography allows Alice and Bob to perform cryptographic operations without having to share a secret. Before public-key systems were invented, cryptographers worked under the assumption that Alice and Bob needed to have a shared secret to create, for example, SIGN and VERIFY primitives. Because sharing a secret can be awkward and maintaining its secrecy can be problematic, this assumption made certain applications of cryptography complicated. Because public-key cryptography removes this assumption, it resulted in a change in the way cryptographers thought, and has led to interesting applications, as we will see in this chapter.

    To distinguish the keys in shared-secret cryptography from the ones in public-key cryptography, we refer to the key in shared-secret cryptography as the shared-secret key. We refer to the key that can be made public in public-key cryptography as the public key and to the key that is kept secret in public-key cryptography as the private key. Since shared-secret keys must also be kept secret, the unqualified term "secret key," which is sometimes used in the literature, can be ambiguous, so we avoid using it.

    We can now see more specifically the two ways in which SIGN and VERIFY can benefit if they are an open design. First, if K1 or K2 is compromised, we can select a new key for future communication, without having to replace SIGN and VERIFY. Second, we can now publish the overall design of the system and how SIGN and VERIFY work. Anyone can review the design and offer opinions about its correctness.

    Because most cryptographic techniques use open design and reduce any secrets to keys, a system may have several keys that are used for different purposes. To keep the keys apart, we refer to the keys for authentication as authentication keys.


    \(^*\) "Il faut un systeme remplissant certaines conditions exceptionelles ... il faut qu’il n’exige pas le secret, et qu’il puisse sans inconvenient tomber entre les mains de l’ennemi." (Compromise of the system should not disadvantage the participants.) Auguste Kerchkoffs, La cryptographie Militaire, Chapter II (1883).

     

    Key-Based Authentication Model

    Returning to Figure \(\PageIndex{2}\), to authenticate a message, the sender signs the message using a key K1. Signing produces as output an authentication tag: a key-based cryptographic transformation of the message (usually shorter than the message itself). We can write the operation of signing as follows:

    T ← SIGN (M, K1)

    where T is the authentication tag.

    The tag may be sent to the receiver separately from the message or it may be appended to the message. The message and tag may be stored in separate files or attachments. The details don't matter.

    Let’s assume that the sender sends a message {M, T}. The receiver receives a message {M', T'}, which may be the same as {M, T} or may not. The purpose of message authentication is to decide which. The receiver unmarshals {M', T'} into its components M' and T', and verifies the authenticity of the received message by performing the computation:

    result ← VERIFY (M', T', K2)

    This computation returns ACCEPT if M' and T' match; otherwise, it returns REJECT.
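
    To make the model concrete, here is a minimal sketch of SIGN and VERIFY for the shared-secret case (K1 = K2 = K), written in Python with an HMAC over SHA-256 serving as the tag. The algorithm choice and the function shapes are our illustration, not a prescription of the text:

        import hmac
        import hashlib

        ACCEPT, REJECT = "ACCEPT", "REJECT"

        def SIGN(M: bytes, K: bytes) -> bytes:
            # Compute the authentication tag T for message M under key K.
            return hmac.new(K, M, hashlib.sha256).digest()

        def VERIFY(M: bytes, T: bytes, K: bytes) -> str:
            # Recompute the tag and compare in constant time.
            expected = hmac.new(K, M, hashlib.sha256).digest()
            return ACCEPT if hmac.compare_digest(expected, T) else REJECT

        # Alice signs; Bob verifies with the same shared-secret key.
        K = b"key established by an earlier physical rendezvous"
        M = b"At 10 a.m. Generic Moneymaking, Inc. was trading at $1"
        T = SIGN(M, K)
        assert VERIFY(M, T, K) == ACCEPT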

    The design of SIGN and VERIFY should be such that an adversary cannot succeed by forging a tag, re-using a tag from a previous message on a message fabricated by the adversary, and so on. Of course, if the adversary replays a message {M, T} without modifying it, then VERIFY will again return ACCEPT; we need a more elaborate security protocol, the topic of Section 5.6, to protect against replayed messages.

    If M is a long message, a user might sign and verify the cryptographic hash of M, which is typically less expensive than signing M because the cryptographic hash is shorter than M. This approach complicates the protocol between sender and receiver a bit because the receiver must accurately match up M, its cryptographic hash, and its tag. Some implementations of SIGN and VERIFY implement this performance optimization themselves.
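
    This optimization matters most when SIGN is an expensive public-key operation; with the HMAC sketch above, the hash-then-sign structure (our illustration) looks like this:

        def SIGN_HASHED(M: bytes, K: bytes) -> bytes:
            # Sign the short, fixed-length hash rather than the long message.
            return SIGN(hashlib.sha256(M).digest(), K)

        def VERIFY_HASHED(M: bytes, T: bytes, K: bytes) -> str:
            # The receiver must recompute the same hash before verifying.
            return VERIFY(hashlib.sha256(M).digest(), T, K)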

     

    Properties of SIGN and VERIFY

    To get a sense of the challenges of implementing SIGN and VERIFY, we outline some of the basic requirements for SIGN and VERIFY, and some attacks that a designer must consider. The sender sends {M, T} and the receiver receives {M', T'}. The requirements for an authentication system with shared-secret key K are as follows:

    1. VERIFY (M', T', K) returns ACCEPT if M' = M and T' = SIGN (M, K)
    2. Without knowing K, it is difficult for an adversary to compute an M' and T' such that VERIFY (M', T', K) returns ACCEPT
    3. Knowing M, T, and the algorithms for SIGN and VERIFY doesn’t allow an adversary to compute K

    In short, T should depend on the message content M and the key K. For an adversary who doesn’t know key K, it should be impossible to construct a message M' and a tag T', different from M and T, that verify correctly using key K.

    A corresponding set of properties must hold for public-key authentication systems:

    1. VERIFY (M', T', K2) returns ACCEPT if M' = M and T' = SIGN (M, K1)
    2. Without knowing K1, it is difficult for an adversary to compute an M' and T' such that VERIFY (M', T', K2) returns ACCEPT
    3. Knowing M, T, K2, and the algorithms for VERIFY and SIGN doesn’t allow an adversary to compute K1

    The requirements for SIGN and VERIFY are formulated in absolute terms. Many good implementations of VERIFY and SIGN, however, don’t meet these requirements perfectly. Instead, they might guarantee property 2 with very high probability. If the probability is high enough, then as a practical matter we can treat such an implementation as being acceptable. What we require is that the probability of not meeting property 2 be much lower than the likelihood of a human error that leads to a security breach.

    The work factor involved in compromising SIGN and VERIFY is dependent on the key length; a common way to increase the work factor for the adversary is to use a longer key. A typical key length used in the field for the popular RSA public-key cipher (see Section 5.9) is 1,024 or 2,048 bits. SIGN and VERIFY implemented with shared-secret ciphers often use shorter keys (in the range of 128 to 256 bits) because existing shared-secret ciphers have higher work factors than existing public-key ciphers. It is also advisable to change keys periodically to limit the damage in case a key is compromised, and cryptographic protocols often do so (see Section 5.6).

    Broadly speaking, the attacks on authentication systems fall into five categories (the sketch following this list exercises several of them against the earlier HMAC example):

    1. Modifications to M and T. An adversary may attempt to change M and the corresponding T. The VERIFY function should return REJECT even if the adversary deletes or flips only a single bit in M and tries to make the corresponding change to T. Returning to our trading example, VERIFY should return REJECT if the adversary changes M from "At 10 a.m. Generic Moneymaking, Inc. was trading at $1" to "At 10 a.m. Generic Moneymaking, Inc. was trading at $200" and tries to make the corresponding changes to T.
    2. Reordering M. An adversary may not change any bits, but just reorder the existing content of M. For example, VERIFY should return REJECT if the adversary changes M to "At 1 a.m. Generic Moneymaking, Inc. was trading at $10" (the adversary has moved the "0" from "10 a.m." to "$10").
    3. Extending M by prepending or appending information to M. An adversary may not change the existing content of M, but just prepend or append some information to it. For example, an adversary may change M to "At 10 a.m. Generic Moneymaking, Inc. was trading at $10" (the adversary has appended "0" to the end of the message).
    4. Splicing several messages and tags. An adversary may record two messages and their tags, and try to combine them into a new message and tag. For example, an adversary might take "At 10 a.m. Generic Moneymaking, Inc." from one transmitted message and combine it with "was trading at $9" from another transmitted message, and splice the two tags that go along with those messages by taking the first several bytes from the first tag and the remainder from the second tag.
    5. Since SIGN and VERIFY are based on cryptographic transformations, it may also be possible to directly attack those transformations. Some mathematicians, known as cryptanalysts, are specialists in devising such attacks.
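
    Continuing the shared-secret sketch (the names M, T, K, SIGN, and VERIFY come from the example above), the first four attacks all produce message-tag pairs that VERIFY rejects:

        # 1. Modification: change the price but keep the original tag.
        assert VERIFY(b"At 10 a.m. Generic Moneymaking, Inc. was trading at $200", T, K) == REJECT
        # 2. Reordering: move the "0" from "10 a.m." to "$10".
        assert VERIFY(b"At 1 a.m. Generic Moneymaking, Inc. was trading at $10", T, K) == REJECT
        # 3. Extension: append "0" to the message.
        assert VERIFY(M + b"0", T, K) == REJECT
        # 4. Splicing: half of one legitimate tag glued to half of another.
        T2 = SIGN(b"was trading at $9", K)
        assert VERIFY(b"At 10 a.m. Generic Moneymaking, Inc. was trading at $9", T[:16] + T2[16:], K) == REJECT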

    These requirements and the possible attacks make clear that the construction of SIGN and VERIFY primitives is a difficult task. To protect messages against the attacks listed above requires a cryptographer who can design the appropriate cryptographic transformations on the messages. These transformations are based on sophisticated mathematics. Thus, we have the worst of both worlds: we must achieve a negative goal using complex tools. As a result, even experts have come up with transformations that failed spectacularly. Thus, a non-expert certainly should not attempt to implement SIGN and VERIFY, and their implementation falls outside the scope of this book. (The interested reader can consult Section 5.9 to get a flavor of the complexities.)

    The window of validity for SIGN and VERIFY is the minimum of the time to compromise the signing algorithm, the time to compromise the hash algorithm used in the signature (if one is used), the time to try out all keys, and the time to compromise the signing key.
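
    Stated compactly (with \(t_{x}\) denoting the time to compromise or exhaust component \(x\), a notation introduced here only for brevity):

    \( t_{\text{window}} = \min \left( t_{\text{signing algorithm}},\ t_{\text{hash algorithm}},\ t_{\text{key search}},\ t_{\text{signing key}} \right) \)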

    As an example of the importance of keeping track of the window of validity, a team of researchers in 2008 was able to create forged signatures that many Web browsers accepted as valid.\(^*\) The team used a large array of processors found in game consoles to perform a collision attack on a hash function designed in 1994 called MD5. MD5 had been identified as potentially weak as early as 1996 and a collision attack was demonstrated in 2004. Continued research revealed ways of rapidly creating collisions, thus allowing a search for helpful collisions. The 2008 team was able to find a helpful collision with which they could forge a trusted signature on an authentication message. Because some authentication systems that Web browsers trust had not yet abandoned their use of MD5, many browsers accepted the signature as valid and the team was able to trick these browsers into making what appeared to be authenticated connections to well-known Web sites. The connections actually led to impersonation Web sites that were under the control of the research team. (The forged signatures were on certificates for the transport layer security (TLS) protocol. Certificates are discussed in Sections 5.6.1 and 5.8.4, and Section 5.11 is a case study of TLS.)


    \(^*\) A. Sotirov et al. MD5 considered harmful: creating a rogue CA certificate. 25th Annual Chaos Communication Congress, Berlin, December 2008.

     

    Public-key versus Shared-Secret Authentication

    If Alice signs the message using a shared-secret key, then Bob verifies the tag using the same shared-secret key. That is, VERIFY checks the received authentication tag using the message and the shared-secret key. An authentication tag computed with a shared-secret key is called a message authentication code (MAC). (The verb "to MAC" is the common jargon for "to compute an authentication tag using shared-secret cryptography".)

    In the literature, the word "sign" is usually reserved for generating authentication tags with public-key cryptography. If Alice signs the message using public-key cryptography, then Bob verifies the message using a different key from the one that Alice used to compute the tag. Alice uses her private key to compute the authentication tag. Bob uses Alice’s corresponding public key to verify the authentication tag. An authentication tag computed with a public-key system is called a digital signature. The digital signature is analogous to a conventional signature because only one person, the holder of the private key, could have applied it.
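
    As an illustrative sketch of signing with a public-key system, here is the Ed25519 signature scheme as exposed by the third-party Python cryptography package (our choice of algorithm and library; the text prescribes neither):

        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

        # Alice generates a key pair; the private key never leaves her secure area.
        alice_private = Ed25519PrivateKey.generate()
        alice_public = alice_private.public_key()   # may be given to anyone

        check = b"Pay to the order of Bob: $100"
        signature = alice_private.sign(check)       # the digital signature (the tag)

        # Bob, and later the bank, verify with Alice's public key alone.
        try:
            alice_public.verify(signature, check)   # raises InvalidSignature on mismatch
            result = "ACCEPT"
        except InvalidSignature:
            result = "REJECT"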

    Alice's digital signatures can be checked by anyone who knows Alice's public key, while checking her MACs requires knowledge of the shared-secret key that she used to create the MAC. Thus, Alice might be able to successfully repudiate (disown) a message authenticated with a MAC by arguing that Bob (who also knows the shared-secret key) forged the message and the corresponding MAC.

    In contrast, the only way to repudiate a digital signature is for Alice to claim that someone else has discovered her private key. Digital signatures are thus more appropriate for electronic checks and contracts. Bob can verify Alice’s signature on an electronic check she gives him, and later when Bob deposits the check at the bank, the bank can also verify her signature. When Alice uses digital signatures, neither Bob nor the bank can forge a message purporting to be from Alice, in contrast to the situation in which Alice uses only MACs.

    Of course, non-repudiation depends on not losing one's private key. If one loses one's private key, a reliable mechanism is needed for broadcasting the fact that the private key is no longer secret, so that one can repudiate signatures forged later with the lost private key. Methods for revoking compromised private keys are the subject of considerable debate.

    SIGN and VERIFY are two powerful primitives, but they must be used with care. Consider the following attack. Alice and Bob want to sign a contract saying that Alice will pay Bob $100. Alice types it up as a document using a word-processing application and both digitally sign it. In a few days Bob comes to Alice to collect his money. To his surprise, Alice presents him with a document that states he owes her $100. Alice also has a valid signature from Bob for the new document. In fact, it is the exact same signature as for the contract Bob remembers signing and, to Bob's great amazement, the two documents are actually bit-for-bit identical. What Alice did was create a document that included an if statement that changed the displayed content of the document by referring to an external input, such as the current date or filename. Thus, even though the signed contents remained the same, the displayed contents changed because they were partially dependent on unsigned inputs. The problem here is that Bob's mental model doesn't correspond to what he has signed. As always with security, all aspects must be thought through! Bob is much better off signing only documents that he himself created.

     

    Key Distribution

    We assumed that if Bob successfully verified the authentication tag of a message, then Alice is the message's originator. This assumption, in fact, has a serious flaw. What Bob really knows is that the message originated from a principal that knows key K1. The assumption that the key K1 belongs to Alice may not be true. An adversary may have stolen Alice’s key or may have tricked Bob into believing that K1 is Alice’s key. Thus, the way in which keys are bound to principals is an important problem to address.

    The problem of securely distributing keys is also sometimes called the name-to-key binding problem; in the real world, principals are named by descriptive names rather than keys. So, when we know the name of a principal, we need a method for securely finding the key that goes along with the named principal. The trust that we put in a key is directly related to how secure the key distribution system is.

    Secure key distribution is based on a name discovery protocol, which starts, perhaps unsurprisingly, with trusted physical delivery. When Alice and Bob meet, Alice can give Bob a cryptographic key. This key is authenticated because Bob knows he received it exactly as Alice gave it to him. If necessary, Alice can give Bob this key secretly (in an envelope or on a portable storage card), so others don't see or overhear it. Alice could also use a mutually trusted courier to deliver a key to Bob in a secret and authenticated manner.

    Cryptographic keys can also be delivered over a network. However, an adversary might add, delete, or modify messages on the network. A good cryptographic system is needed to ensure that the network communication is authenticated (and confidential, if necessary). In fact, in the early days of cryptography, the doctrine was never to send keys over a network; a compromised key will result in more damage than one compromised message. However, nowadays cryptographic systems are believed to be strong enough to take that risk. Furthermore, with a key-distribution protocol in place it is possible to periodically generate new keys, which is important to limit the damage in case a key is compromised.

    The catch is that one needs cryptographic keys already in place in order to distribute new cryptographic keys over the network! This approach works if the recursion "bottoms out" with physical key delivery. Suppose two principals Alice and Bob wish to communicate, but they have no shared (shared-secret or public) key. How can they establish keys to use?

    One common approach is to use a mutually-trusted third party (Charles) with whom Alice and Bob already each share key information. For example, Charles might be a mutual friend of Alice and Bob. Charles and Alice might have met physically at some point in time and exchanged keys, and similarly, Charles and Bob might have met and also exchanged keys. If Alice and Bob both trust Charles, then Alice and Bob can exchange keys through Charles.

    How Charles can assist Alice and Bob depends on whether they are using shared-secret or public-key cryptography. Shared-secret keys need to be distributed in a way that is both confidential and authenticated. Public keys do not need to be kept secret, but need to be distributed in an authenticated manner. What we see developing here is a need for another security protocol, which we will study in Section 5.6.

    In some applications it is difficult to arrange for a common third party. Consider a person who buys a personal electronic device that communicates over a wireless network. The owner installs the new gadget (e.g., a digital surveillance camera) in the owner's house and would like to make sure that burglars cannot control the device over the wireless network. But how does the device authenticate the owner, so that it can distinguish the owner from other principals (e.g., burglars)? One option is that the manufacturer or distributor of the device plays the role of Charles. When a buyer purchases a device, the manufacturer records the buyer's public key. The device has burned into it the public key of the manufacturer; when the buyer turns on the device, the device establishes a secure communication link using the manufacturer’s public key and asks the manufacturer for the public key of its owner. This solution is impractical, unfortunately: what if the device is not connected to a global network and thus cannot reach the manufacturer? This solution might also raise privacy objections: should manufacturers be able to track when consumers use devices? Sidebar \(\PageIndex{1}\), about the resurrecting duckling, provides a solution that allows key distribution to be performed locally, without a central principal involved.

    Sidebar \(\PageIndex{1}\)

    Authenticating personal devices: the resurrecting duckling policy

    Inexpensive consumer devices have (or will soon have) embedded microprocessors in them that are able to communicate with other devices over inexpensive wireless networks. If household devices such as the home theatre, the heating system, the lights, and the surveillance cameras are controlled by, say, a universal remote control, an owner must ensure that these devices (and new ones) obey the owner's commands and not the neighbor's or, worse, a burglar's. This situation requires that a device and the remote control be able to establish a secure relationship. The relationship may be transient, however; the owner may want to resell one of the devices, or replace the remote control.

    In The resurrecting duckling: security issues for ad-hoc wireless networks [Suggestions for Further Reading 5.4.2], Stajano and Anderson provide a solution based on the vivid analogy of how ducklings authenticate their mother. When a duckling emerges from its egg, it will recognize as its mother the first moving object that makes a sound. In the Stajano and Anderson proposal, a device will recognize as its owner the first principal that sends it an authentication key. As soon as the device receives a key, its status changes from newborn to imprinted, and it stays faithful to that key until its death. Only an owner can force a device to die and thereby reverse its status to newborn. In this way, an owner can transfer ownership.

    A widely used example of the resurrecting duckling is purchasing wireless routers. These routers often come with the default user name "Admin" and password "password". When the buyer plugs the router in for the first time, it is waiting to be imprinted with a better password; the first principal to change the password gets control of the router. The router has a resurrection button that restores the defaults, thus again making it imprintable (and allowing the buyer to recover if an adversary did grab control).

    Not all applications deploy a sophisticated key-distribution protocol. For example, the secure shell (SSH), a popular Internet protocol used to log onto a remote computer, has a simple key distribution protocol. The first time that a user logs onto a server named "athena.Scholarly.edu", SSH sends a message in the clear to the machine with DNS name athena.Scholarly.edu asking it for its public key. SSH uses that public key to set up an authenticated and confidential communication link with the remote computer. SSH also caches this key and remembers that the key is associated with the DNS name "athena.Scholarly.edu". The next time the user logs onto athena.Scholarly.edu, SSH uses the cached key to set up the communication link.

    Because the DNS protocol does not include message authentication, the security risk in SSH’s approach is a masquerading attack: an adversary might be able to intercept the DNS lookup for "athena.Scholarly.edu" and return an IP address for a computer controlled by the adversary. When the user connects to that IP address, the adversary replies with a key that the adversary has generated. When the user makes an SSH connection using that public key, the adversary’s computer masquerades as athena.Scholarly.edu. To counter this attack, the SSH client asks the user a question on the first connection to a remote computer: "I don't recognize the key of this remote computer, should I trust it?", and a wary user should compare the displayed key (its fingerprint) with one received from the remote computer's system administrator over an out-of-band secure communication link (e.g., a piece of paper). Many users aren't wary and just answer "yes" to the question.
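
    A minimal sketch of this trust-on-first-use logic (a hypothetical in-memory cache modeled loosely on SSH's known_hosts file; real SSH clients persist keys in a standardized format):

        known_hosts: dict[str, bytes] = {}   # DNS name -> cached public key

        def check_host_key(host: str, presented_key: bytes) -> bool:
            cached = known_hosts.get(host)
            if cached is None:
                # First connection: the wary user compares the key's fingerprint
                # against one obtained out of band (e.g., on paper).
                answer = input(f"I don't recognize the key of {host}. Trust it? ")
                if answer.strip().lower() == "yes":
                    known_hosts[host] = presented_key   # cache ("imprint") the key
                    return True
                return False
            # Later connections: accept only if the key is unchanged.
            return cached == presented_key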

    The advantage of the SSH approach is that no key distribution protocol is necessary (beyond obtaining the fingerprint). This has simplified the deployment of SSH and has made it a success. As we will see in Section 5.6, securely distributing keys such that a masquerading attack is impossible is a challenging problem.

     

    Long-Term Data Integrity with Witnesses

    Careful use of SIGN and VERIFY can provide both data integrity and authenticity guarantees. Some applications have requirements for which it is better to use different techniques for integrity and authenticity. Sidebar 1.2.1 mentions a digital archive, which requires protection against an adversary who tries to change the content of a file stored in the archive. To protect a file, a designer wants to make many separate replicas of the file, following the durability mantra, preferably in independently administered and thus separately protected domains. If the replicas are separately protected, it is more difficult for an adversary to change all of them.

    Since maintaining widely-separated copies of large files consumes time, space, and communication bandwidth, one can reduce the resource expenditure by replacing some (but not all) copies of the file with a smaller witness, with which users can periodically check the validity of replicas (as explained in Section 4.4.4). If the replica disagrees with the witness, then one repairs the replica by finding a replica that matches the witness. Because the witness is small, it is easy to protect it against tampering. For example, one can publish the witness in a widely-read newspaper, which is likely to be preserved either on microfilm or digitally in many public libraries.

    This scheme requires that a witness be cryptographically secure. One way of constructing a secure witness is to use SIGN and VERIFY. The digital archiver uses a cryptographic hash function to create a secure fingerprint of the file, signs the fingerprint with its private key, and then distributes copies of the file widely. Anyone can verify the integrity of a replica by computing the fingerprint of the replica, verifying the witness using the public key of the archiver, and then comparing the fingerprint in the witness against the fingerprint of the replica.
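
    Continuing the Ed25519 sketch from earlier (the key names and the SHA-256 fingerprint are our assumptions for illustration), the archiver's signed witness and a replica check might look like:

        import hashlib

        from cryptography.exceptions import InvalidSignature

        def make_signed_witness(data: bytes, archiver_private) -> bytes:
            # Fingerprint the file, then sign the fingerprint.
            return archiver_private.sign(hashlib.sha256(data).digest())

        def check_replica(replica: bytes, witness: bytes, archiver_public) -> bool:
            try:
                # Verify the witness, then compare fingerprints.
                archiver_public.verify(witness, hashlib.sha256(replica).digest())
                return True
            except InvalidSignature:
                return False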

    This scheme works well in general, but is less suitable for long-term data integrity. The window of validity of this scheme is determined by the minimum of the time to compromise the private key used for signing, the signing algorithm, the hashing algorithm, and the validity of the name-to-public-key binding. If the goal of the archiver is to protect the data for many decades (or forever), it is likely that the digital signature will become invalid before the data does.

    In such cases, it is better to protect the witness by widely publishing just the cryptographic hash instead of using SIGN and VERIFY. In this approach, the window of validity of the witness is the time to compromise the cryptographic hash algorithm. This window can be made large. One can protect against a compromised cryptographic hash algorithm by occasionally computing and publishing a new witness with the latest, best hash algorithm. The new witness is a hash of the original data, the original witness, and a timestamp, thereby demonstrating the integrity of the original data at the time of the new witness calculation.
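
    A sketch of the hash-only alternative and its periodic renewal (the hash choices, and the detail that the timestamp is published along with the new witness, are our illustration):

        import hashlib
        import time

        def make_witness(data: bytes) -> bytes:
            # A hash-only witness; publish it widely (e.g., in a newspaper).
            return hashlib.sha256(data).digest()

        def renew_witness(data: bytes, old_witness: bytes) -> tuple[bytes, bytes]:
            # Re-witness with a newer hash algorithm, binding in the old witness
            # and a timestamp; both the new witness and the timestamp are published.
            timestamp = str(int(time.time())).encode()
            new = hashlib.sha3_256(data + old_witness + timestamp).digest()
            return new, timestamp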

    The confidence a user has in the authenticity of a witness is determined by how easily the user can verify that the witness was indeed produced by the archiver. If the newspaper or the library physically received the witnesses directly from the archiver, then this confidence may be high.


    This page titled 5.4: Authenticating Messages is shared under a CC BY-NC-SA license and was authored, remixed, and/or curated by Jerome H. Saltzer & M. Frans Kaashoek (MIT OpenCourseWare).
