
5.8: Reasoning About Authentication (Advanced Topic)


    The security model has three key steps that are executed by the guard on each request: authenticating the user, verifying the integrity of the request, and determining if the user is authorized. Authenticating the user is typically the most difficult of the three steps because the guard can establish only that the message came from the same origin as some previous message. To determine the principal that is associated with a message, the guard must establish that it is part of a chain of messages that often originated in a message that was communicated by physical rendezvous. That physical rendezvous securely binds the identity of a real-world person with a principal.

    The authentication step is further complicated because the messages in the chain might even come from different principals, as we have seen in some of the security protocols in Section 5.6. If a message in the chain comes from a different principal and makes a statement about another principal, we can view the message as one principal speaking for another principal. To establish that the chain of messages originated from a particular real-world user, the guard must follow a chain of principals.

    Consider a simple security protocol, in which a certificate authority signs certificates, associating authentication keys with names (e.g., "key Kpub belongs to the user named X"). If a service receives this certificate together with a message M for which VERIFY (M, Kpub) returns ACCEPT, then the question is whether the guard should believe this message originated with "X". The answer is no until the guard can establish the following facts:

    1. The guard knows that a message originated from a principal who knows a private authentication key Kpriv because the message verified with Kpub.
    2. The certificate is a message from the certification authority telling the guard that the authentication key Kpub is associated with user "X". (The guard can tell that the certificate came from the certificate authority because the certificate was signed with the private authentication key of the authority and the guard has obtained the public authentication key of the authority through some other chain of messages that originated in physical rendezvous.)
    3. The certification authority speaks for user "X". The guard may believe this assumption if the guard can establish two facts:
      • User "X" says the certificate authority speaks for "X". That is, user "X" delegated authority to the certificate authority to speak on behalf of "X". If the guard believes the certificate authority carefully minted a key for "X" that speaks for only "X" and verified the identity of "X", then the guard may consider this belief a fact.
      • The certificate authority says Kpub speaks for user "X". If the guard believes that the certificate authority carefully minted a key for "X" that speaks for only "X" and verified the identity of "X", then the guard may consider this belief a fact.

    With these facts, the guard can deduce that the origin of the first message is user "X" as follows:

    1. If user "X" says that the certificate authority speaks on behalf of "X", then the guard can conclude that the certificate authority speaks for "X" because "X" said it.
    2. If we combine the first conclusion with the statement that the certificate authority says that "X" says that Kpub speaks for "X", then the guard can conclude that "X" says that Kpub speaks for "X".
    3. If "X" says that Kpub speaks for "X", then the guard can conclude that Kpub speaks for "X" because "X" said it.
    4. Because the first message verified with Kpub, the guard can conclude that the message must have originated with user “X”.

    In this section, we will formalize this type of reasoning using a simple form of what is called authentication logic, which defines more precisely what "speaks for" means. Using that logic we can establish the assumptions under which a guard is willing to believe that a message came from a particular person. Once the assumptions are identified, we can decide if the assumptions are acceptable, and, if the assumptions are acceptable, the guard can accept the authentication as valid and go on to determine if the principal is authorized.

    Authentication Logic

    Burrows-Abadi-Needham (BAN) authentication logic is a particular logic to reason about authentication systems. We give an informal and simplified description of the logic and its usage. If you want to use it to reason about a complete protocol, read Authentication in Distributed Systems: Theory and Practice [Suggestions for Further Reading 5.3.1].

    Consider the following example. Alice types at her workstation "Send me the quiz" (see Figure \(\PageIndex{1}\)). Her workstation A sends a message over the wire from network interface 14 to network interface 5, which is attached to machine F, which runs the file service. The file service stores the object "quiz".

    A wire connects interface 14, which is attached to a workstation, to interface 5, which is attached to a file service. At the workstation, Alice sends a message saying "Send me the quiz" over the wire to the service. The file service contains the file "quiz."

    Figure \(\PageIndex{1}\): Authentication example.

    What the file service needs to know is that "Alice says send quiz". This phrase is a statement in the BAN authentication logic. This statement "A says B" means that agent A originated the request B. Informally, "A says B" means we have determined somehow that A actually said B. If we were within earshot, "A says B" is an axiom (we directly heard A say it!), but if we only know that "A says B" indirectly (“through hearsay”), we need to use additional reasoning, and perhaps make some other assumptions before we believe it. Unfortunately, the file system knows only that network interface F.5 (that is, network interface 5 on machine F) said Alice wants the quiz sent to her. That is, the file system knows "network interface F.5 says (Alice says send the quiz)". So "Alice says send the quiz" is only hearsay at the moment. The question is, can we trust network interface F.5 to tell the truth about what Alice did or did not say? If we do trust F.5 to speak for Alice, we write "network interface F.5 speaks for Alice" in BAN authentication logic. In this example, then, if we believe that "network interface F.5 speaks for Alice", we can deduce that "Alice says send the quiz."

    To make reasoning with this logic work, we need three rules:

    • Rule 1: Delegating authority.
              If       A says (B speaks for A)
              then     B speaks for A
      This rule allows Alice to delegate authority to Bob, which allows Bob to speak for Alice.

    • Rule 2: Use of delegated authority.
              If       A speaks for B
              and      A says (B says X)
              then     B says X
      This rule says that if Bob delegated authority to Alice, and Alice says that Bob said something, then we can believe that Bob actually said it.

    • Rule 3: Chaining of delegation.
              If       A speaks for B
              and      B speaks for C
              then     A speaks for C
      This rule says that delegation of authority is transitive: if Bob has delegated authority to Alice and Charles has delegated authority to Bob, then Charles has also delegated authority to Alice.

    To capture real-world situations better, the full-bore BAN logic uses more refined rules than these. However, as we will see in the rest of this chapter, even these three simple rules are useful enough to help flush out fuzzy thinking.
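
    To see how mechanical this reasoning can be, here is a minimal sketch in Python. The tuple encoding of statements and the function name close are illustrative choices of ours, not part of BAN logic:

        # A statement is a tuple:
        #   ("speaks_for", a, b)   means  a speaks for b
        #   ("says", a, s)         means  a says s, where s is itself a statement
        def close(beliefs):
            """Apply the three rules to a set of beliefs until no new
            statement can be derived; return the enlarged set."""
            beliefs = set(beliefs)
            while True:
                new = set()
                for s in beliefs:
                    if s[0] == "says" and isinstance(s[2], tuple):
                        inner = s[2]
                        # Rule 1: a says (b speaks for a)  =>  b speaks for a
                        if inner[0] == "speaks_for" and inner[2] == s[1]:
                            new.add(inner)
                        # Rule 2: a speaks for b and a says (b says x)  =>  b says x
                        if inner[0] == "says" and ("speaks_for", s[1], inner[1]) in beliefs:
                            new.add(inner)
                    elif s[0] == "speaks_for":
                        # Rule 3: a speaks for b and b speaks for c  =>  a speaks for c
                        for t in beliefs:
                            if t[0] == "speaks_for" and t[1] == s[2]:
                                new.add(("speaks_for", s[1], t[2]))
                if new <= beliefs:
                    return beliefs
                beliefs |= new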

    Hard-wired Approach

    How can the file service decide that "network interface F.5 speaks for Alice"? The first approach would be to hard-wire our installation. If we hard-wire Alice to her workstation, her workstation to network interface A.14, and network interface A.14 through the wire to network interface F.5, then we have:

    • network interface F.5 speaks for the wire: we must assume no one rewired it.
    • the wire speaks for network interface A.14: we must assume no one tampered with the channel.
    • network interface A.14 speaks for workstation A: we must assume the workstation was wired correctly.
    • workstation A speaks for Alice: we assume the operating system on Alice's workstation can be trusted.

    In short, we assume that the network interface, the wiring, and Alice's workstation are part of the trusted computing base. With this assumption we can apply the chaining of delegation rule repeatedly to obtain "network interface F.5 speaks for Alice". Then, we can apply the use of delegated authority rule and obtain "Alice says send the quiz". Authentication of message origin is now complete, and the file system can look for Alice's token on its access control list.
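
    Using the sketch above (with its tuple encoding), the hard-wired argument becomes a short mechanical derivation; the string names are, of course, only stand-ins for the real principals:

        # The four hard-wired assumptions, plus what interface F.5 reported:
        assumptions = {
            ("speaks_for", "F.5", "wire"),             # no one rewired it
            ("speaks_for", "wire", "A.14"),            # no one tampered with the channel
            ("speaks_for", "A.14", "workstation A"),   # wired correctly
            ("speaks_for", "workstation A", "Alice"),  # trusted operating system
            ("says", "F.5", ("says", "Alice", "send the quiz")),
        }
        beliefs = close(assumptions)
        # Chaining of delegation, applied repeatedly:
        assert ("speaks_for", "F.5", "Alice") in beliefs
        # Use of delegated authority:
        assert ("says", "Alice", "send the quiz") in beliefs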

    The logic forced us to state our assumptions explicitly. Having made the list of assumptions, we can inspect them and see if we believe each is reasonable. We might even hire an outside auditor to offer an independent opinion.

    Internet Approach

    Now, suppose we instead connect the workstation's interface 14 to the file service's interface 5 using the Internet. Then, following the previous pattern, we get:

    • network interface F.5 speaks for the Internet: we must assume no one rewired it.
    • the Internet speaks for network interface A.14: we must assume the Internet is trusted!

    The latter assumption is clearly problematic; we are dead in the water.

    What can we do? Suppose the message is sent with some authentication tag—Alice actually sends the message with a MAC (reminder: {M}k denotes a plaintext message signed with a key k):

    Alice ⇒ file service: {From: Alice; To: file service; "send the quiz"}T

    Then, we have:

    • key T says (Alice says send the quiz).

    If we know that Alice was the only person in the world who knows the key T, then we would be able to say:

    • key T speaks for Alice.

    With the use of delegated authority rule we could conclude "Alice says send the quiz". But is Alice really the only person in the world who knows key T? We are using a shared-secret key system, so the file service must also know the key, and somehow the key must have been securely exchanged between Alice and the file service. So we must add to our list of assumptions:

    • the file service is not trying to trick itself;
    • the exchange of the shared-secret key was secure;
    • neither Alice nor the file service has revealed the key.

    With these assumptions we really can believe that "key T speaks for Alice", and we are home free. This reasoning is not a proof, but it is a method that helps us to discover and state our assumptions clearly.
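
    For concreteness, here is a minimal sketch of the MAC exchange using Python's standard hmac module. The key value and the message layout are made up for illustration, and a real protocol would also need the freshness measures discussed next:

        import hmac, hashlib

        T = b"shared-secret-from-a-secure-exchange"    # the shared key T

        def mac_sign(message, key):
            return hmac.new(key, message, hashlib.sha256).digest()

        def mac_verify(message, tag, key):
            return hmac.compare_digest(tag, mac_sign(message, key))

        # Alice sends {From: Alice; To: file service; "send the quiz"}T:
        m = b"From: Alice; To: file service; send the quiz"
        tag = mac_sign(m, T)

        # The file service checks the tag. If only Alice (and the service
        # itself) knows T, the service may conclude "key T speaks for Alice".
        assert mac_verify(m, tag, T)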

    The logic as presented doesn't deal with freshness. In fact, in the example, we can conclude only that "Alice said send the quiz", but not that Alice said it recently. Someone else might be replaying the message. Extensions to the basic logic can deal with freshness by introducing additional rules for freshness that relate says and said.

     

    Authentication in Distributed Systems

    All of the authentication examples we have discussed so far have involved one service. Using the techniques from Section 5.7, it is easy to see how we can build a single-service authentication and authorization system. A user sets up a confidential and authenticated communication channel to a particular service. The user authenticates itself over the secure channel and receives from the service a token to be used for access control. The user sends requests over the secure channel. The service then makes its access control decisions based on the token that accompanies the request.

    Authentication in the World-Wide Web is an example of this approach. The browser sets up a secure channel using the SSL/TLS protocol described in Section 5.11. Then, the browser asks the user for a password and sends this password over the secure channel to the service. If the service identifies the user successfully with the received password, the service returns a token (a cookie in Web terminology), which the browser stores. The browser sends subsequent Web requests over the secure channel and includes the cookie with each request so that the user doesn't have to retype the password for each request. The service authenticates the principal and authorizes the request based on the cookie. (In practice, many Web applications don't set up a secure channel, but just communicate the password and cookie without any protection. These applications are vulnerable to most of the attacks discussed in previous sections.)
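
    A minimal sketch of the token scheme, assuming the channel is already secure and the password check has succeeded; the cookie format and lifetime below are invented for illustration, using only Python's standard library:

        import hmac, hashlib, secrets, time

        SERVICE_KEY = secrets.token_bytes(32)    # known only to the service

        def issue_cookie(user, lifetime=3600):
            expires = str(int(time.time()) + lifetime)
            payload = user + "|" + expires       # assumes "|" never appears in a name
            tag = hmac.new(SERVICE_KEY, payload.encode(), hashlib.sha256).hexdigest()
            return payload + "|" + tag

        def check_cookie(cookie):
            user, expires, tag = cookie.rsplit("|", 2)
            payload = user + "|" + expires
            good = hmac.new(SERVICE_KEY, payload.encode(), hashlib.sha256).hexdigest()
            if hmac.compare_digest(tag, good) and time.time() < int(expires):
                return user                      # token is authentic and unexpired
            return None

        cookie = issue_cookie("alice")           # after the password check succeeds
        assert check_cookie(cookie) == "alice"   # later requests present the cookie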

    The disadvantage of this approach to authentication is that services cannot share information about clients. The user has to log in to each service separately and each service has to implement its own authentication scheme. If the user uses only a few services, these shortcomings are not a serious inconvenience. However, in a realm (say a large company or a university) where there are many services and where information needs to be shared between services, a better plan is needed.

    In such an environment we would like to have the following properties:

    1. the user logs in once;
    2. the tokens the user obtains after logging in should be usable by all services for authentication and to make authorization decisions;
    3. users are named in a uniform way so that their names can be put on and removed from access control lists;
    4. users and services don’t have to trust the network.

    These goals are sometimes summarized as single login or single sign-on. Few system designs or implementations meet these requirements. One system that comes close is Kerberos (see Sidebar 5.6.1). Another system that is gaining momentum for single sign-on to Web sites is OpenID; its goal is to allow users to have one ID for different Internet stores. The OpenID protocols are driven by a public benefit organization called the OpenID Foundation. Many major companies have joined the OpenID Foundation and are providing support in their services for OpenID.

     

    Authentication across Administrative Realms

    Extending authentication across realms that are administered by independent authorities is a challenge. Consider a student who is running a service on a personal computer in his dorm room. The personal computer is not under the administrative authority of the university; yet, the student might want to obtain access to his service from a computer in a laboratory, which is administered by the central campus authority. Furthermore, the student might want to provide access to his service to family and friends, who are in yet other administrative realms. It is unlikely that the campus administration will delegate authority to the personal computer and set up secure channels from the campus authentication service to each student's authentication service.

    Sharing information with many users across many different administrative realms raises a number of questions:

    1. How can we authenticate services securely? The Domain Name System (DNS) doesn't provide authenticated bindings of name to IP addresses, and so we cannot use DNS names to authenticate services.
    2. How can we name users securely? We could use email addresses, such as bob@Scholarly.edu, to identify principals but email addresses can be spoofed.
    3. How do we manage many users? If Pedantic University is willing to share course software with all students at The Institute of Scholarly Studies, Pedantic University shouldn't have to list individually every student of The Institute of Scholarly Studies on the access control list for the files. Clearly, protection groups are needed. But, how does a student at The Institute of Scholarly Studies prove to Pedantic University's service that the student is part of the group students@Scholarly.edu?

    These three problems are naming problems: how do we name a service, a user, a group, and a member of a protection group securely? A promising approach is to split the problem into two parts: (1) name all principals (e.g., services, users, and groups) by public keys and (2) securely distribute symbolic names for the public keys separately. We discuss this approach in more detail.

    By naming principals by a public key we eliminate the distinction of realms. For example, a user Alice at Pedantic University might be named by a public key KApub and a user Bob at The Institute of Scholarly Studies is named by a public key KBpub; from the public key we cannot tell whether Alice is at Pedantic University or The Institute of Scholarly Studies. From the public key alone we cannot tell if the public key is Alice's, but we will solve the binding from public key to symbolic name separately in Sections 5.8.4 through 5.8.6.

    If Alice wants to authorize Bob to have access to her files, Alice adds KBpub to her access control list. If Bob wants to use Alice's files, Bob sends a request to Alice's service including his public key KBpub. Alice checks if KBpub appears on her access control list. If not, she denies the request. Otherwise, Alice's service challenges Bob to prove that he has the private key corresponding to KBpub. If Bob can prove that he has KBpriv (for example, by signing a challenge that Alice's service verifies with Bob's public key KBpub), then Alice's service allows access.

    When Alice approves the request, she doesn't know for sure if the request came from the principal named "Bob"; she just knows the request came from a principal holding the private key KBpriv. The symbolic name "Bob" doesn't play a role in the mediation decision. Instead, the crucial step was the authorization decision when Alice added KBpub to her access control list; as part of that authorization decision Alice must assure herself that KBpub speaks for Bob before adding KBpub to her access control list. That assurance relies on securely distributing bindings from name to public key, which we separated out as an independent problem and will discuss in Sections 5.8.4 through 5.8.6.
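
    The challenge-response step might look like the following sketch. It uses Ed25519 signatures from the third-party Python cryptography package purely as a stand-in for whatever signature scheme the service actually uses:

        import os
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
        from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat
        from cryptography.exceptions import InvalidSignature

        # Bob's key pair; Alice's access control list holds only KBpub.
        kb_priv = Ed25519PrivateKey.generate()
        kb_pub = kb_priv.public_key()
        acl = {kb_pub.public_bytes(Encoding.Raw, PublicFormat.Raw)}

        # 1. Bob's request arrives carrying his public key.
        request_key = kb_pub.public_bytes(Encoding.Raw, PublicFormat.Raw)
        assert request_key in acl                # otherwise Alice denies the request

        # 2. Alice challenges Bob with a fresh nonce; Bob signs it with KBpriv.
        nonce = os.urandom(32)
        signature = kb_priv.sign(nonce)

        # 3. Alice verifies the response with the key on her list.
        try:
            kb_pub.verify(signature, nonce)      # raises InvalidSignature on failure
            print("access granted")
        except InvalidSignature:
            print("access denied")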

    We can name protection groups also by a public key. Suppose that Alice knew for sure that KISSstudentspub is a public key representing students of The Institute of Scholarly Studies. If Alice wanted to grant all students at The Institute of Scholarly Studies access to her files, she could add KISSstudentspub to her access control list. Then, if Charles, a student at The Institute of Scholarly Studies, wanted to have access to one of Alice's files, he would have to present a proof that he is a member of that group, for example, by providing to Alice a statement signed with KISSstudentspriv saying:

    {KCharlespub is a member of the group KISSstudentspub} KISSstudentspriv

    which in the BAN logic translates to:

    KCharlespub speaks for KISSstudentspub

    That is, the group, as the holder of KISSstudentspriv, has delegated authority to the member Charles to speak on behalf of the group of students at The Institute of Scholarly Studies.

    Alice's service can verify this statement using KISSstudentspub, which is on Alice's access control list. After Alice's service successfully verifies the statement, then the service can challenge Charles to prove that he is the holder of the private key KCharlespriv. Once Charles can prove he is the holder of that private key, then Alice's service can grant access to Charles.
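
    Continuing the previous sketch, granting Charles access takes two signature checks: one on the membership statement using the group key on Alice's list, and one on a fresh challenge using Charles's own key (again an illustration with the cryptography package, not a prescribed design):

        import os
        from cryptography.hazmat.primitives.asymmetric.ed25519 import (
            Ed25519PrivateKey, Ed25519PublicKey)
        from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

        group_priv = Ed25519PrivateKey.generate()    # held by the group's key holder
        group_pub = group_priv.public_key()          # KISSstudentspub, on Alice's list
        charles_priv = Ed25519PrivateKey.generate()  # KCharlespriv, held by Charles
        charles_pub = charles_priv.public_key().public_bytes(
            Encoding.Raw, PublicFormat.Raw)

        # {KCharlespub is a member of the group KISSstudentspub} KISSstudentspriv
        membership = b"member:" + charles_pub
        membership_sig = group_priv.sign(membership)

        # Alice's service verifies the membership statement with the group key ...
        group_pub.verify(membership_sig, membership)   # raises on failure
        # ... and then challenges Charles to prove he holds KCharlespriv.
        nonce = os.urandom(32)
        response = charles_priv.sign(nonce)
        Ed25519PublicKey.from_public_bytes(charles_pub).verify(response, nonce)
        print("Charles authenticated as a member of the group")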

    In this setup, Alice must trust the holder of KISSstudentspriv to be a responsible person who carefully verifies that Charles is a student at The Institute of Scholarly Studies. If she trusts the holder of that key to do so, then Alice doesn't have to maintain her own list of who is a student at The Institute of Scholarly Studies; in fact, she doesn't need to know at all which particular principals are students at The Institute of Scholarly Studies.

    If services are also named by public keys, then Bob and Charles can easily authenticate Alice's service. When Bob wants to connect to Alice's service, he specifies the public key of the service. If the service can prove that it possesses the corresponding private key, then Bob can have confidence that he is talking to the right service.

    By naming all principals with public keys we can construct distributed authentication systems. Unfortunately, public keys are long, unintelligible bit strings, which are awkward and unfriendly for users to remember or type. When Alice adds KBobpub and KISSstudentspub to her access control list, she shouldn't be required to type in a 1,024-bit number. Similarly when Bob and Charles refer to Alice's service, they shouldn't be required to know the bit representation of the public key of Alice's service. What is necessary is a way of naming public keys with symbolic names and authenticating the binding between name and key, which we will discuss next.

     

    Authenticating Public Keys

    How do we authenticate that KBpub is Bob's public key? As we have seen before, that authentication can be based on a key-distribution protocol, which starts with a rendezvous step. For example, Bob and Alice meet face-to-face and Alice hands Bob a signed piece of paper with her public key and name. This piece of paper constitutes a self-signed certificate. Bob can have reasonable confidence in this certificate because Bob can verify that the certificate is valid and is Alice's. (Bob can ask Alice to sign again and compare it with the signature on the certificate, and ask Alice for her driver's license to prove her identity.)

    If Bob receives a self-signed certificate over an untrusted network, however, we are out of luck. The certificate says "Hi, I am Alice and here is my public key" and it is signed with Alice's digital signature, but Bob does not know Alice's public key yet. In this case, anybody could impersonate Alice to Bob because Bob cannot verify whether or not Alice produced this certificate. An adversary can generate a public/private key pair, create a certificate for Alice listing the public key as Alice's public key, sign it with the private key, and send this self-signed certificate to Bob.

    Bob needs a way to find out securely what Alice's public key is. Most systems rely on a separate infrastructure for naming and distributing public keys securely. Such an infrastructure is called a public key infrastructure, PKI for short. There is a wide range of designs for such infrastructures, but their basic functions can be described well with the authentication logic. We start with a simple example using physical rendezvous and then later use certificate authorities to introduce principals to each other who haven't met through physical rendezvous.

    Consider the following example where Alice receives a message from Bob, asking Alice to send a private file, and Alice wants to decide whether or not to send it. The first step in this decision is for Alice to establish if the message really came from Bob.

    Suppose that Bob previously handed Alice a piece of paper on which Bob has written his public key, KpubBob. We can describe Alice's take on this event in authentication logic as

    Bob says (KpubBob speaks for Bob)             (belief #1)

    and by applying the delegation of authority rule, Alice can immediately conclude that she is safe in believing

    KpubBob speaks for Bob                                (belief #2),

    assuming that the information on the piece of paper is accurate. Alice realizes that she should start making a list of assumptions for review later. (She ignores freshness for now because our stripped-down authentication logic has no said operation for capturing that.)

    Next, Bob prepares a message, M1:

    Bob says M1

    signs it with his private key:

    {M1}KprivBob

    which, in authentication logic, can be described as

    KprivBob says (Bob says M1)

    and sends it to Alice. Since the message arrived via the Internet, Alice now wonders if she should believe

    Bob says M1                         (?)

    Fortunately, M1 is signed, so Alice doesn't need to invoke any beliefs about the Internet. But the only beliefs she has established so far are (#1) and (#2), and those are not sufficient to draw any conclusions. So the first thing Alice does is check the signature:

    result ← VERIFY ({M1}KprivBob, KpubBob)

    If result is ACCEPT then one might think that Alice is entitled to believe:

    KprivBob says (Bob says M1)             (belief #3?)

    but that belief actually requires a leap of faith that the cryptographic system is secure. Alice decides that it probably is, adds that assumption to her list, and removes the question mark on belief #3. But she still hasn’t collected enough beliefs to answer the question. In order to apply the chaining and use of authority rules, Alice needs to believe that

    (KprivBob speaks for KpubBob)                     (belief #4?)

    which sounds plausible, but for her to accept that belief requires another leap of faith: that Bob is the only person who knows KprivBob. Alice decides that Bob is probably careful enough to be trusted to keep his private key private, so she adds that assumption to her list and removes the question mark from belief #4.

    Now, Alice can apply the chaining of delegation rule to beliefs #4 and #2 to conclude

    KprivBob speaks for Bob                             (belief #5)

    and she can now apply the use of delegated authority rule to beliefs #5 and #3 to conclude that

    Bob says M1                                                (belief #6)

    Alice decides to accept the message as a genuine utterance of Bob. The assumptions that emerged during this reasoning were:

    • KpubBob is a true copy of Bob's public key.
    • The cryptographic system used for signing is computationally secure.
    • Bob has kept KprivBob secret.
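
    Alice's bookkeeping can be replayed mechanically with the close sketch from the beginning of this section (the string names are stand-ins, as before):

        beliefs = close({
            ("says", "Bob", ("speaks_for", "KpubBob", "Bob")),   # belief #1
            ("says", "KprivBob", ("says", "Bob", "M1")),         # belief #3
            ("speaks_for", "KprivBob", "KpubBob"),               # belief #4
        })
        assert ("speaks_for", "KpubBob", "Bob") in beliefs       # belief #2, by rule 1
        assert ("speaks_for", "KprivBob", "Bob") in beliefs      # belief #5, by rule 3
        assert ("says", "Bob", "M1") in beliefs                  # belief #6, by rule 2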

     

    Authenticating Certificates

    One of the prime usages of a public key infrastructure is to introduce principals that haven't met through a physical rendezvous. To do so, a public key infrastructure provides certificates and one or more certificate authorities.

    Continuing our example, suppose that Charles, whom Alice does not know, sends Alice the message

    {M2}KprivCharles

    This situation resembles the previous one, except that several things are missing: Alice does not know KpubCharles, so she can't verify the signature, and in addition, Alice does not know who Charles is. Even if Alice finds a scrap of paper that has written on it Charles's name and what purports to be Charles's public key, KpubCharles, and

    result ← VERIFY ({M2}KprivCharles, KpubCharles)

    is ACCEPT, all she believes (again assuming that the cryptographic system is secure) is that

    KprivCharles says (Charles says M2)

    Without something corresponding to the previous beliefs #2 and #4, Alice still does not know what to make of this message. Specifically, Alice doesn’t yet know whether or not to believe

    KprivCharles speaks for Charles                 (?)

    Knowing that this might be a problem, Charles went to a well-known certificate authority, TrustUs.com, purchased the digital certificate:

    {"Charles's public key is KpubCharles"}KprivTrustUs

    and posted this certificate on his Web site. Alice discovers the certificate and wonders if it is any more useful than the scrap of paper she previously found. She knows that where she found the certificate has little bearing on its trustworthiness; a copy of the same certificate found on Lucifer's Web site would be equally trustworthy (or worthless, as the case may be).

    Expressing this certificate in authentication logic requires two steps. The first thing we note is that the certificate is just another signed message, M3, so Alice can interpret it in the same way that she interpreted the message from Bob:

    KprivTrustUs says M3

    Following the same reasoning that she used for the message from Bob, if Alice believes that she has a true copy of KpubTrustUs she can conclude that

    TrustUs says M3

    subject to the assumptions (exactly parallel to the assumptions she used for the message from Bob)

    • KpubTrustUs is a true copy of the TrustUs.com public key.
    • The cryptographic system used for signing is computationally secure.
    • TrustUs.com has kept KprivTrustUs secret.

    Alice decides that she is willing to accept those assumptions, so she turns her attention to M3, which was the statement "Charles's public key is KpubCharles". Since TrustUs.com is taking Charles's word on this, that statement can be expressed in authentication logic as

    Charles says (KpubCharles speaks for Charles)

    Combining, we have:

    TrustUs says (Charles says (KpubCharles speaks for Charles))

    To make progress, Alice needs to make a further leap of faith. If Alice knew that

    TrustUs speaks for Charles             (?)

    then she could apply the use of delegated authority rule to conclude that

    Charles says (KpubCharles speaks for Charles)

    and she could then follow an analysis just like the one she used for the earlier message from Bob. Since Alice doesn't know Charles, she has no way of knowing the truth of the questioned belief (TrustUs speaks for Charles), so she ponders what it really means:

    1. TrustUs.com has been authorized by Charles to create certificates for him. Alice might think that finding the certificate on Charles's Web site gives her some assurance on this point, but Alice has no way to verify that Charles's Web site is secure, so she has to depend on TrustUs.com being a reputable outfit.
    2. TrustUs.com was careful in checking the credentials—perhaps a driver's license—that Charles presented for identification. If TrustUs.com was not careful, it might, without realizing it, be speaking for Lucifer rather than Charles. (Unfortunately, certificate authorities have been known to make exactly that mistake.) Of course, TrustUs.com is assuming that the credentials Charles presented were legitimate; it is possible that Charles has stolen someone else’s identity. As usual, authentication of origin is never absolute; at best it can provide no more than a secure tie to some previous authentication of origin.

    Alice decides to review the complete list of the assumptions she needs to make in order to accept Charles’s original message M2 as genuine:

    • KpubTrustUs is a true copy of the TrustUs.com public key.
    • The cryptographic system used for signing is computationally secure.
    • TrustUs.com has kept KprivTrustUs secret.
    • TrustUs.com has been authorized by Charles.
    • TrustUs.com carefully checked Charles’s credentials.
    • TrustUs.com has signed the right public key (that is, KpubCharles).
    • Charles has kept KprivCharles secret.

    She notices that in addition to relying heavily on the trustworthiness of TrustUs.com, she doesn't know Charles, so the last assumption may be a weakness. For this reason, she would be well-advised to accept message M2 with a certain amount of caution. In addition, Alice should keep in mind that since Charles's public key was not obtained by a physical rendezvous, she knows only that the message came from someone named "Charles"; she as yet has no way to connect that name with a real person.

    As in the previous examples, the stripped-down authentication logic we have been using for illustration has no provision for checking freshness, so it hasn't alerted Alice that she is also assuming that the two public keys are fresh and that the message itself is recent.

    The above example is a distributed authorization system that is ticket-oriented. TrustUs.com has generated a ticket (the certificate) that Alice uses to authenticate Charles's request. This observation immediately raises the question of how Charles revokes the certificate that he bought from TrustUs.com. If Charles, for example, accidentally discloses his private key, the certificate from TrustUs.com becomes worthless and he should revoke it so that Alice cannot be tricked into believing that M2 came from Charles. One way to address this problem is to make a certificate valid for only a limited length of time. Another approach is for TrustUs.com to maintain a list of revoked certificates and for Alice to first check with TrustUs.com before accepting a certificate as valid.

    Neither solution is quite satisfactory. The first solution has the disadvantage that if Charles loses his private key, the certificate will remain valid until it expires. The second solution has the disadvantage that TrustUs.com has to be available at the instant that Alice tries to check the validity of the certificate.
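
    A small sketch shows how a service might combine the two mitigations anyway; the certificate fields here are invented for illustration:

        import time

        revoked = set()      # serial numbers that TrustUs.com has revoked

        def certificate_valid(cert, now=None):
            now = time.time() if now is None else now
            if not (cert["not_before"] <= now <= cert["not_after"]):
                return False     # mitigation 1: limited validity period
            if cert["serial"] in revoked:
                return False     # mitigation 2: consult the revocation list
            return True

        cert = {"subject": "Charles", "serial": 42,
                "not_before": time.time(),
                "not_after": time.time() + 30 * 86400}    # valid for 30 days
        assert certificate_valid(cert)
        revoked.add(42)          # Charles reports that his private key leaked
        assert not certificate_valid(cert)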

     

    Certificate Chains

    The public key infrastructure developed so far has one certificate authority, TrustUs.com. How do we certify the public key of TrustUs.com? There might be many certificate authorities, some of which Alice doesn't know about. However, Alice might possess a certificate for another certificate authority that certifies TrustUs.com, creating a chain of certification. Public key infrastructures organize such chains in two primary ways; we discuss them in turn.

    Hierarchy of Central Certificate Authorities

    In the central-authority approach, certificate authorities record public keys and are managed by central authorities. For example, in the World Wide Web, certificates authenticating Web sites are usually signed by one of several well-known root certificate authorities. Commercial Web sites, such as amazon.com, present a certificate signed by Verisign to a client when it connects. All Web browsers embed the public keys of the root certificate authorities in their programs. When the browser receives a certificate from amazon.com, it uses the embedded public key for Verisign to verify the certificate.

    Some Web sites, such as a company's internal Web site, generate a self-signed certificate and send that to a client when it connects. To be able to verify a self-signed certificate, the client must have obtained the key of the Web site securely in advance.

    The Web approach to certifying keys has a shallow hierarchy. In DNSSEC\(^*\), a secure version of DNS, certificate authorities can be arranged in a deeper hierarchy. If Alice types in the name "athena.Scholarly.edu", her resolver will contact one of the root servers and obtain an address and certificate for "edu". In authentication logic, the meaning of this certificate is "Kprivroot says that Kpubedu speaks for edu". To be able to verify this certificate she must have obtained the public key of the root servers in some earlier rendezvous step. If the certificate for "edu" verifies, she contacts the server for the "edu" domain, and asks for the server's address and certificate for "Scholarly", and so on.
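
    The chain walk can be sketched as follows, with Ed25519 signatures from the third-party cryptography package standing in for DNSSEC's actual record types and algorithms, and a two-level toy hierarchy in place of the real one:

        from cryptography.hazmat.primitives.asymmetric.ed25519 import (
            Ed25519PrivateKey, Ed25519PublicKey)
        from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

        def pub_bytes(priv):
            return priv.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

        # Toy hierarchy: the root signs the key for "edu", which signs "Scholarly".
        root, edu, scholarly = (Ed25519PrivateKey.generate() for _ in range(3))
        chain = [
            ("edu", pub_bytes(edu), root.sign(b"edu" + pub_bytes(edu))),
            ("Scholarly", pub_bytes(scholarly),
             edu.sign(b"Scholarly" + pub_bytes(scholarly))),
        ]

        # The resolver starts from the root's public key, obtained in an
        # earlier rendezvous step, and verifies each delegation in turn.
        trusted = root.public_key()
        for name, pub, sig in chain:
            trusted.verify(sig, name.encode() + pub)   # raises on a bad delegation
            trusted = Ed25519PublicKey.from_public_bytes(pub)
        print("key for the Scholarly zone verified")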

    One problem with the hierarchical approach is that one must trust a central authority, such as the DNS root service. The central authority may ask an unreasonable price for the service, enforce policies that you don't like, or be considered untrustworthy by some. For example, in DNS and DNSSEC, there is a lot of politics around which institution should run the root servers and the policies of that institution. Since the Internet and DNS originated in the U.S.A., the root service is currently run by a U.S. organization. Unhappiness with this organization has led the Chinese to start their own root service.

    Another problem with the hierarchical approach is that certificate authorities determine to whom they delegate authority for a particular domain name. You might be happy with the Institute of Scholarly Studies managing the "Scholarly" domain, but have less trust in a rogue government managing the top-level domain for all DNS names in that country.

    Because of problems like these, it is difficult in practice to agree on and manage a single PKI that allows for strong authentication worldwide. Currently, no global PKI exists.


    \(^*\) D. Eastlake, Domain Name System Security Extensions, Internet Engineering Task Force Request For Comments (RFC 2535), March 1999.

    Web of Trust

    The web-of-trust approach avoids using a chain of central authorities. Instead, Bob can decide himself whom he trusts. In this approach, Alice obtains certificates from her friends Charles, Dawn, and Ella and posts these on her Web page: {Alice, KApub}KCpriv, {Alice, KApub}KDpriv, {Alice, KApub}KEpriv. If Bob knows the public key of any one of Charles, Dawn, or Ella, he can verify the certificate that person signed. To the extent that he trusts that person to be careful in what he or she signs, he has confidence that he now has Alice's true public key.

    On the other hand, if Bob doesn't know Charles, Dawn, or Ella, he might know someone (say Felipe) who knows one of them. Bob may learn that Felipe knows Ella because he checks Ella's Web site and finds a certificate signed by Felipe. If he trusts Felipe, he can get a certificate from Felipe certifying one of the public keys KCpub, KDpub, or KEpub, which he can then use to certify Alice's public key. Another possibility is that Alice offers a few certificate chains in the hope that Bob trusts one of the signers in one of the chains, and has the signer's public key in his set of keys. Independent of how Bob learned Alice's public key, he can inspect the chain of trust by which he learned and verified that key and see whether he likes it or not. The important point here is that Bob must trust every link in the chain. If any link is untrustworthy, he will have no guarantees.
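
    Finding such a chain is a graph search. A minimal sketch (the names and the certificate table are invented for this example):

        from collections import deque

        # certs[subject] = the set of signers who certified subject's key.
        certs = {
            "Alice": {"Charles", "Dawn", "Ella"},
            "Ella":  {"Felipe"},
        }

        def trust_chain(known, target, certs):
            """Breadth-first search for a chain of signers leading from a key
            Bob already trusts (in known) to the target's key."""
            queue = deque([(k, [k]) for k in known])
            while queue:
                person, path = queue.popleft()
                if person == target:
                    return path
                for subject, signers in certs.items():
                    if person in signers and subject not in path:
                        queue.append((subject, path + [subject]))
            return None      # no chain: Bob gets no guarantee

        # Bob knows only Felipe's public key:
        print(trust_chain({"Felipe"}, "Alice", certs))   # ['Felipe', 'Ella', 'Alice']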

    The web of trust scheme relies on the observation that it usually takes only a few acquaintance steps to connect anyone in the world to anyone else. For example, it has been claimed that everyone is separated by no more than 6 steps from the President of the United States. (There may be some hermits in Tibet that require more steps.) With luck, there will be many chains connecting Bob with Alice, and one of them may consist entirely of links that Bob trusts.

    The central idea in the web-of-trust approach is that Bob can decide whom he trusts instead of having to trust a central authority. PGP (Pretty Good Privacy) \(^*\) and a number of other systems use the web of trust approach.

    \(^*\) Simson Garfinkel. PGP: Pretty Good Privacy. O’Reilly & Associates, Sebastopol, California, 1995. ISBN: 978–1–56592–098–9 (paperback). 430 pages.

    Nominally a user's guide to the PGP encryption package developed by Phil Zimmermann, this book starts out with six readable overview chapters on the subject of encryption, its history, and the political and licensing environment that surrounds encryption systems. Even the later chapters, which give details on how to use PGP, are filled with interesting tidbits and advice applicable to all encryption uses.


    This page titled 5.8: Reasoning About Authentication (Advanced Topic) is shared under a CC BY-NC-SA license and was authored, remixed, and/or curated by Jerome H. Saltzer & M. Frans Kaashoek (MIT OpenCourseWare).