1.11: Exercises


    Exercise \(\PageIndex{1}\)

    1996–1–1e

    Chapter 1 discussed four general methods for coping with complexity: modularity, abstraction, hierarchy, and layering. Which of those four methods does a protocol stack use as its primary organizing scheme?

    Exercise \(\PageIndex{2}\)

    1999–2–03

    The end-to-end argument

    A. is a guideline for placing functions in a computer system;

    B. is a rule for placing functions in a computer system;

    C. is a debate about where to place functions in a computer system;

    D. is a debate about anonymity in computer networks.

    Exercise \(\PageIndex{3}\)

    1998–2–01

    Of the following, the best example of an end-to-end argument is:

    A. If you laid all the Web hackers in the world end to end, they would reach from Cambridge to CERN.

    B. Every byte going into the write end of a UNIX pipe eventually emerges from the pipe’s read end.

    C. Even if a chain manufacturer tests each link before assembly, he’d better test the completed chain.

    D. Per-packet checksums must be augmented by a parity bit for each byte.

    E. All important network communication functions should be moved to the application layer.

    Exercise \(\PageIndex{4}\)

    1995–1–5a

    Give two scenarios in the form of timing diagrams showing how a duplicate request might end up at a service.

    Exercise \(\PageIndex{5}\)

    1987–1–2a,b

    After sending a frame, a certain piece of network software waits one second for an acknowledgment before retransmitting the frame. After each retransmission, it cuts the delay in half, so after the first retransmission the wait is \(1/2\) second, after the second retransmission the wait is \(1/4\) second, etc. If it has reduced the delay to \(1/1024\) second without receiving an acknowledgment, the software gives up and reports to its caller that it was not able to deliver the frame.

    a) Is this a good way to manage retransmission delays for Ethernet? Why or why not?

    b) Is this a good way to manage retransmission delays for a receive-and-forward network? Why or why not?
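
    For concreteness, here is a minimal Python sketch of the retransmission schedule described in this exercise; the function name and structure are assumptions made for illustration, not part of the original problem.

```python
# Sketch of the schedule in this exercise: wait 1 second, then halve the
# wait after every retransmission, and give up once the wait has shrunk
# to 1/1024 second without an acknowledgment arriving.

def retransmission_waits(first_wait=1.0, last_wait=1.0 / 1024):
    """Return the successive waits, in seconds, before the sender gives up."""
    waits = []
    wait = first_wait
    while wait >= last_wait:
        waits.append(wait)
        wait /= 2
    return waits

waits = retransmission_waits()
print(len(waits), "waits, totaling", sum(waits), "seconds")
# 11 waits (1, 1/2, ..., 1/1024), totaling just under 2 seconds.
```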

    Exercise \(\PageIndex{6}\)

    1995–1–1f

    Variable delay is an intrinsic problem of isochronous networks. True or false?

    Exercise \(\PageIndex{7}\)

    1987–1–3a,b

    Host \(A\) is sending frames to host \(B\) over a noisy communication link. The median transit time over the communication link is 100 milliseconds. The probability of a frame being damaged en route in either direction across the communication link is \(\alpha\), and \(B\) can reliably detect the damage. When \(B\) gets a damaged frame it simply discards it. To ensure that frames arrive safely, \(B\) sends an acknowledgment back to \(A\) for every frame received intact.

    a) How long should \(A\) wait for a frame to be acknowledged before retransmitting it?

    b) What is the average number of times that \(A\) will have to send each frame?
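
    As a sanity check on intuition, here is a small Monte Carlo sketch of the link just described; the names and the example value of \(\alpha\) are assumptions for illustration only.

```python
import random

def transmissions_until_acked(alpha):
    """Count how many times A sends one frame before it sees an acknowledgment.

    Each traversal of the link (the frame A -> B, and the acknowledgment
    B -> A) is independently damaged with probability alpha; a damaged
    frame or acknowledgment is simply discarded, so A times out and resends.
    """
    sends = 0
    while True:
        sends += 1
        frame_arrives = random.random() >= alpha
        ack_arrives = random.random() >= alpha
        if frame_arrives and ack_arrives:
            return sends

alpha = 0.1          # example value; the exercise leaves alpha symbolic
trials = 100_000
avg = sum(transmissions_until_acked(alpha) for _ in range(trials)) / trials
print(f"average transmissions per frame at alpha = {alpha}: {avg:.3f}")
```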

    Exercise \(\PageIndex{8}\)

    2000–2–02

    Consider the protocol reference model of this chapter with the link, network, and end-to-end layers. Which of the following is a behavior of the reference model?

    A. An end-to-end layer at an end host tells its network layer which network layer protocol to use to reach a destination.

    B. The network layer at a router maintains a separate queue of packets for each end-to-end protocol.

    C. The network layer at an end host looks at the end-to-end type field in the network header to decide which end-to-end layer protocol handler to invoke.

    D. The link layer retransmits packets based on the end-to-end type of the packets: if the end-to-end protocol is reliable, then a link-layer retransmission occurs when a loss is detected at the link layer, otherwise not.

    Exercise \(\PageIndex{9}\)

    1997–1–1e

    Congestion is said to occur in a receive-and-forward network when

    A. Communication stalls because of cycles in the flow-control dependencies.

    B. The throughput demanded of a network link exceeds its capacity.

    C. The volume of e-mail received by each user exceeds the rate at which users can read e-mail.

    D. The load presented to a network link persistently exceeds its capacity.

    E. The amount of space required to store routing tables at each node becomes burdensome.

    Exercise \(\PageIndex{10}\)

    1994–2–6

    Alice has arranged to send a stream of data to Bob using the following protocol:

    • Each message segment has a block number attached to it; block numbers are consecutive starting with 1.
    • Whenever Bob receives a segment of data with the number \(N\), he sends back an acknowledgment saying “OK to send block \(N+1\)”.
    • Whenever Alice receives an “OK to send block \(K\),” she sends block \(K\).

    Alice initiates the protocol by sending a block numbered 1; she terminates the protocol by ignoring any “OK to send block \(K\)” for which \(K\) is larger than the number on the last block she wants to send. The network has been observed to never lose message segments, so Bob and Alice have made no provision for timer expirations and retries. They have also made no provision for deduplication. Unfortunately, the network systematically delivers every segment twice. Alice starts the protocol, planning to send a three-block stream. How many “OK to send block 4” responses does she ignore at the end?
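
    The exchange can be traced mechanically. Below is a short Python sketch of the protocol as described, with the systematic duplication modeled explicitly; the queue-based structure is an assumption made for illustration.

```python
from collections import deque

LAST_BLOCK = 3    # Alice plans to send a three-block stream
COPIES = 2        # the network delivers every segment twice

to_bob = deque()      # data blocks in flight toward Bob
to_alice = deque()    # "OK to send block K" messages in flight toward Alice
ignored = 0           # "OK to send block 4" messages Alice ignores

def alice_sends(block):
    for _ in range(COPIES):           # every segment is delivered twice
        to_bob.append(block)

alice_sends(1)                        # Alice initiates with block 1

while to_bob or to_alice:
    while to_bob:
        n = to_bob.popleft()          # Bob receives a copy of block n...
        for _ in range(COPIES):       # ...and his acknowledgment is duplicated too
            to_alice.append(n + 1)
    while to_alice:
        k = to_alice.popleft()
        if k <= LAST_BLOCK:
            alice_sends(k)            # "OK to send block K" -> Alice sends block K
        else:
            ignored += 1              # K is past the last block, so Alice ignores it

print("ignored 'OK to send block 4' messages:", ignored)
```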

    Exercise \(\PageIndex{11}\)

    1980–3–3

    \(A\) and \(B\) agree to use a simple window protocol for flow control for data going from \(A\) to \(B\): When the connection is first established, \(B\) tells \(A\) how many message segments \(B\) can accept, and as \(B\) consumes the segments it occasionally sends a message to \(A\) saying “you can send \(M\) more”. In operation, \(B\) notices that occasionally \(A\) sends more segments than it was supposed to. Explain.

    Exercise \(\PageIndex{12}\)

    1995–2–2a...c

    Assume a client and a service are directly connected by a private, 800,000 bytes per second link. Also assume that the client and the service produce and consume message segments at the same rate. Using acknowledgments, the client measures the round-trip time between itself and the service to be 10 milliseconds.

    a) If the client is sending message segments that require 1000-byte frames, what is the smallest window size that allows the client to achieve 800,000 bytes per second throughput?

    b) One scheme for establishing the window size is similar to the slow start congestion control mechanism. The idea is that the client starts with a window size of one. For every segment received, the service responds with an acknowledgment telling the client to double the window size. The client does so until it realizes that there is no point in increasing it further. For the same parameters as in part (a), how long would it take for the client to realize it has reached the maximum throughput?

    c) Another scheme for establishing the window size is called fast start. In (an oversimplified version of) fast start, the client simply starts sending segments as fast as it can, and watches to see when the first acknowledgment returns. At that point, it counts the number of outstanding segments in the pipeline, and sets the window size to that number. Again using the same parameters as in part (a), how long will it take for the client to know it has achieved the maximum throughput?
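
    The window-size questions above all turn on the bandwidth-delay product. A minimal Python sketch of that calculation, using the parameters of part (a), is shown below; the function name is an assumption made for illustration.

```python
import math

def min_window_segments(link_rate_bytes_per_s, rtt_s, segment_bytes):
    """Smallest whole number of outstanding segments that can keep the link busy.

    To sustain the link's full rate, the sender must keep one round-trip
    time's worth of data in flight: the bandwidth-delay product.
    """
    in_flight_bytes = link_rate_bytes_per_s * rtt_s
    return math.ceil(in_flight_bytes / segment_bytes)

# Parameters from part (a): 800,000 bytes/s link, 10 ms round trip, 1000-byte frames.
print(min_window_segments(800_000, 0.010, 1000), "segments")
```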

    Exercise \(\PageIndex{13}\)

    1988–2–2a...c

    A satellite in stationary orbit has a two-way data channel that can send frames containing up to 1000 data bytes in a millisecond. Frames are received without error after 249 milliseconds of propagation delay. A transmitter \(T\) frequently has a data file that takes 1000 of these maximal-length frames to send to a receiver \(R\). \(T\) and \(R\) start using lock-step flow control. \(R\) allocates a buffer which can hold one message segment. As soon as the buffered segment is used and the buffer is available to hold new data, \(R\) sends an acknowledgment of the same length. \(T\) sends the next segment as soon as it sees the acknowledgment for the last one.

    a) What is the minimum time required to send the file?

    b) \(T\) and \(R\) decide that lock-step is too slow, so they change to a bang-bang protocol. A bang-bang protocol means that \(R\) sends explicit messages to \(T\) saying "go ahead" or "pause". The idea is that \(R\) will allocate a receive buffer of some size \(B\) and send a go-ahead message when it is ready to receive data. \(T\) then sends data segments as fast as the channel can absorb them. \(R\) sends a pause message at just the right time so that its buffer will not overflow even if \(R\) stops consuming message segments. Suppose that \(R\) sends a go-ahead, and as soon as it sees the first data arrive it sends a pause. What is the minimum buffer size \(B_{min}\) that it needs?

    c) What now is the minimum time required to send the file?
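
    A small Python sketch of the lock-step timing described above may help; it counts the final acknowledgment in every cycle, which the minimum time asked for in part (a) may or may not include, and the names are assumptions made for illustration.

```python
def lock_step_time(frames, frame_time_s, propagation_s):
    """Total time for `frames` frames under lock-step flow control.

    Each cycle: transmit one data frame, let it propagate, then transmit
    and propagate an acknowledgment of the same length before the next
    data frame may be sent.
    """
    cycle = 2 * (frame_time_s + propagation_s)
    return frames * cycle

# Satellite numbers from this exercise: 1 ms frames, 249 ms propagation delay.
print(lock_step_time(1000, 0.001, 0.249), "seconds")
```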

    Exercise \(\PageIndex{14}\)

    2000–2–09

    Some end-to-end protocols include a destination field in the end-to-end header. Why?

    A. So the protocol can check that the network layer routed the packet containing the message segment correctly.

    B. Because an end-to-end argument tells us that routing should be performed at the end-to-end layer.

    C. Because the network layer uses the end-to-end header to route the packet.

    D. Because the end-to-end layer at the sender needs it to decide which network protocol to use.

    Exercise \(\PageIndex{15}\)

    1994–2–5

    One value of hierarchical naming of network attachment points is that it allows a reduction in the size of routing tables used by packet forwarders. Do the packet forwarders themselves have to be organized hierarchically to take advantage of this space reduction?

    Exercise \(\PageIndex{16}\)

    1982–3–4

    The System Network Architecture (SNA) protocol family developed by IBM uses a flow control mechanism called pacing. With pacing, a sender may transmit a fixed number of message segments, and then must pause. When the receiver has accepted all of these segments, it can return a pacing response to the sender, which can then send another burst of message segments.

    Suppose that this scheme is being used over a satellite link, with a delay from earth station to earth station of 250 milliseconds. The frame size on the link is 1000 bits, four segments are sent before pausing for a pacing response, and the satellite channel has a data rate of one megabit per second.

    a) The timing diagram below illustrates the frame carrying the first segment. Fill in the diagram to show the next six frames exchanged in the pacing system. Assume no frames are lost, delays are uniform, and sender and receiver have no internal delays (for example, the first bit of the second frame may immediately follow the last bit of the first).

    A timing diagram shows that the first bit of the first frame leaves the sender at time = 0 milliseconds and arrives at the receiver at time = 250 ms. The last bit of the first frame leaves the sender at time = 1 ms and arrives at the receiver at time = 251 ms.

    Figure \(\PageIndex{1}\): Timing diagram for the pacing system described in Exercise \(\PageIndex{16a}\).

    b) What is the maximum fraction of the available satellite capacity that can be used by this pacing scheme?

    c) We would like to increase the utilization of the channel to 50% but we can't increase the frame size. How many message segments would have to be sent between pacing responses to achieve this capacity?
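
    For parts (b) and (c), the utilization of a pacing scheme can be sketched as follows in Python; the sketch assumes the pacing response itself occupies negligible channel time, and the names are assumptions made for illustration.

```python
def pacing_utilization(burst_frames, frame_time_s, one_way_delay_s):
    """Fraction of the channel kept busy by pacing.

    The sender transmits `burst_frames` back to back, then idles while the
    last frame propagates to the receiver and the pacing response (assumed
    to take negligible time on the channel) propagates back.
    """
    busy = burst_frames * frame_time_s
    cycle = busy + 2 * one_way_delay_s
    return busy / cycle

# Numbers from this exercise: 1 ms frames, 250 ms one-way delay, bursts of 4.
print(pacing_utilization(4, 0.001, 0.250))
```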

    Exercise \(\PageIndex{17}\)

    2001–2–01, 2002–2–02, and 2004–2–2

    Which are true statements about network address translators as described in Section 1.5.5?

    A. NATs break the universal addressing scheme of the Internet.

    B. NATs break the layering abstraction of the network model of this chapter.

    C. NATs increase the consumption of Internet addresses.

    D. NATs address the problem that the Internet has a shortage of Internet addresses.

    E. NATs constrain the design of new end-to-end protocols.

    F. When a NAT translates the Internet address of a packet, it must also modify the Ethernet checksum, to ensure that the packet is not discarded by the next router that handles it.

    G. When a packet from the public Internet arrives at a NAT box for delivery to a host behind the NAT, the NAT must examine the payload and translate any Internet addresses found therein. (The client application might be sending its Internet address in the TCP payload to the server.)

    H. Clients behind a NAT cannot communicate with servers that are behind the same NAT because the NAT does not know how to forward those packets.

    Exercise \(\PageIndex{18}\)

    1994–1–2

    Some network protocols deal with both big-endian and little-endian clients by providing two different network ports. Big-endian clients send requests and data to one port, while little-endian clients send requests and data to the other. The service may, of course, be implemented on either a big-endian or a little-endian machine. This approach is unusual—most Internet protocols call for just one network port, and require that all data be presented at that port in “network standard form”, which is big-endian. Explain the advantage of the two-port structure as compared with the usual structure.

    Exercise \(\PageIndex{19}\)

    1994–1–3b

    True or False? Ethernet cannot scale to large sizes because a centralized mechanism is used to control network contention.

    Exercise \(\PageIndex{20}\)

    1999–2–01, 2000–1–04

    Ethernet

    A. uses luminiferous ether to carry packets.

    B. uses Manchester encoding to frame bits.

    C. uses exponential back-off to resolve repeated conflicts between multiple senders.

    D. uses retransmissions to avoid congestion.

    E. delegates arbitration of conflicting transmissions to each station.

    F. always guarantees the delivery of packets.

    G. can support an unbounded number of computers.

    H. has limited physical range.

    Exercise \(\PageIndex{21}\)

    1998–2–02

    Ethernet cards have unique addresses built into them. What role do these unique addresses play in the Internet?

    A. None. They are there for Macintosh compatibility only.

    B. A portion of the Ethernet address is used as the Internet address of the computer using the card.

    C. They provide routing information for packets destined to non-local subnets.

    D. They are used as private keys in the Security Layer of the ISO protocol.

    E. They provide addressing within each subnet for an Internet address resolution protocol.

    F. They provide secure identification for warranty service.

    Exercise \(\PageIndex{22}\)

    2004–1–3

    If eight stations on an Ethernet all want to transmit one packet, which of the following statements is true?

    A. It is guaranteed that all transmissions will succeed.

    B. With high probability all stations will eventually end up being able to transmit their data successfully.

    C. Some of the transmissions may eventually succeed, but it is likely some may not.

    D. It is likely that none of the transmissions will eventually succeed.

    Exercise \(\PageIndex{23}\)

    1996–2–1a

    Ben Bitdiddle has been thinking about remote procedure call. He remembers that one of the problems with RPC is the difficulty of passing pointers: since pointers are really just addresses, if the service dereferences a client pointer, it’ll get some value from its address space, rather than the intended value in the client’s address space. Ben decides to redesign his RPC system to always pass, in the place of a bare pointer, a structure consisting of the original pointer plus a context reference. Louis Reasoner, excited by Ben’s insight, decides to change all end-to-end protocols along the same lines. Argue for or against Louis’s decision.

    Exercise \(\PageIndex{24}\)

    1996–2–2

    Alyssa's mobiles.\(^*\) Alyssa P. Protocol-Hacker is designing an end-to-end protocol for locating mobile hosts. A mobile host is a computer that plugs into the network at different places at different times, and gets assigned a new network address at each place. The system she starts with assigns each host a home location, which can be found simply by looking the user up in a name service. Her end-to-end protocol will use a network that can reorder packets, but doesn’t ever lose or duplicate them. Her first protocol is simple: every time a user moves, store a forwarding pointer at the previous location, pointing to the new location. This creates a chain of forwarding pointers with the permanent home location at the beginning and the mobile host at the end. Packets meant for the mobile host are sent to the home location, which forwards them along the chain until they reach the mobile host itself. (The chain is truncated when a mobile host returns to a previously visited location.)

    Alyssa notices that because of the long chains of forwarding pointers, performance generally gets worse each time she moves her mobile host. Alyssa's first try at fixing the problem works like this: Each time a mobile host moves, it sends a message to its home location indicating its new location. The home location maintains a pointer to the new location. With this protocol, there are no chains at all. Places other than the home location do not maintain forwarding information.

    a) When this protocol is implemented, Alyssa notices that packets regularly get lost when she moves from one location to another. Explain why or give an example.

    Alyssa is disappointed with her first attempt, and decides to start over. In her new scheme, no forwarding pointers are maintained anywhere, not even at the home node. Say a packet destined for a mobile host \(A\) arrives at a node \(N\). If \(N\) can directly communicate with \(A\), then \(N\) sends the packet to \(A\), and we're done. Otherwise, \(N\) broadcasts a search request for \(A\) to all the other fixed nodes in the network. If \(A\) is near a different fixed node \(N'\), then \(N'\) responds to the search request. On receiving this response, \(N\) forwards the packet for \(A\) to \(N'\).

    b) Will packets get lost with this protocol, even if \(A\) moves before the packet gets to \(N'\)? Explain.

    Unfortunately, the network doesn't support broadcast efficiently, so Alyssa goes back to the keyboard and tries again. Her third protocol works like this. Each time a mobile host moves, say from \(N\) to \(N'\), a forwarding pointer is stored at \(N\) pointing to \(N'\). Every so often, the mobile host sends a message to its permanent home node with its current location. Then, the home node propagates a message down the forwarding chain, asking the intermediate nodes to delete their forwarding state.

    c) Can Alyssa ever lose packets with this protocol? Explain. (Hint: think about the properties of the underlying network.)

    d) What additional steps can the home node take to ensure that the scheme in part (c) never loses packets?


    \(^*\) Credit for developing Exercise \(\PageIndex{24}\) goes to Anant Agarwal.

    Exercise \(\PageIndex{25}\)

    1997–2–2

    ByteStream, Inc. sells three data-transfer products: Send-and-wait, Blast, and Flow-control. Mike R. Kernel is deciding which product to use. The protocols work as follows:

    • Send-and-wait sends one segment of a message and then waits for an acknowledgment before sending the next segment.
    • Flow-control uses a sliding window of 8 segments. The sender sends until the window closes (i.e., until there are 8 unacknowledged segments). The receiver sends an acknowledgment as soon as it receives a segment. Each acknowledgment opens the sender’s window by one segment.
    • Blast uses only one acknowledgment. The sender blasts all the segments of a message to the receiver as fast as the network layer can accept them. The last segment of the blast contains a bit indicating that it is the last segment of the message. After sending all segments in a single blast, the sender waits for one acknowledgment from the receiver. The receiver sends an acknowledgment as soon as it receives the last segment.

    Mike asks you to help him compute for each protocol its maximum throughput. He is planning to use a 1,000,000 bytes per second network that has a packet size of 1,000 bytes. The propagation time from the sender to the receiver is 500 microseconds. To simplify the calculation, Mike suggests making the following approximations: (1) there is no processing time at the sender and the receiver; (2) the time to send an acknowledgment is just the propagation time (number of data bytes in an ACK is zero); (3) the data segments are always 1,000 bytes; and (4) all headers are zero-length. He also assumes that the underlying communication medium is perfect (frames are not lost, frames are not duplicated, etc.) and that the receiver has unlimited buffering.
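
    Under Mike's approximations, the three products differ only in how long the sender must wait between segments. The Python sketch below encodes one plausible reading of that timing model; the function names are assumptions made for illustration, and Blast, once it starts, is limited only by the link rate.

```python
LINK_RATE = 1_000_000   # bytes per second, from the exercise
SEGMENT = 1_000         # bytes per segment
PROP = 500e-6           # one-way propagation time, seconds

def send_and_wait_throughput():
    """One segment per cycle: transmit it, let it propagate, wait for the ACK."""
    cycle = SEGMENT / LINK_RATE + 2 * PROP
    return SEGMENT / cycle

def windowed_throughput(window_segments):
    """Sliding window: limited by the window or by the link, whichever is tighter."""
    round_trip = SEGMENT / LINK_RATE + 2 * PROP
    per_round_trip = min(window_segments * SEGMENT, LINK_RATE * round_trip)
    return per_round_trip / round_trip

print("send-and-wait:", send_and_wait_throughput(), "bytes/s")
print("sliding window of 8:", windowed_throughput(8), "bytes/s")
print("blast, long message:", LINK_RATE, "bytes/s")
```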

    a) What is the maximum throughput for the Send-and-wait?

    b) What is the maximum throughput for Flow-control?

    c) What is the maximum throughput for Blast?

    Mike needs to choose one of the three protocols for an application which periodically sends arbitrary-sized messages. He has a reliable network, but his application involves unpredictable computation times at both the sender and the receiver, and this time the receiver has a 20,000-byte receive buffer.

    d) Which product should he choose for maximum reliable operation?

    A. Send-and-wait, the others might hang.

    B. Blast, which outperforms the others.

    C. Flow-control, since Blast will be unreliable and Send-and-wait is slower.

    D. There is no way to tell from the information given.

    Exercise \(\PageIndex{26}\)

    1994–1–5

    Suppose the longest packet you can transmit across the Internet can contain 480 bytes of useful data, you are using a lock-step end-to-end protocol, and you are sending data from Boston to California. You have measured the round-trip time and found that it is about 100 milliseconds. 

    a) If there are no lost packets, estimate the maximum data rate you can achieve.

    b) Unfortunately, 1% of the packets are getting lost, so you install a resend timer set to 1000 milliseconds. Estimate the data rate you now expect to achieve.

    c) On Tuesdays the phone company routes some westward-bound packets via a satellite link, and we notice that 50% of the round trips now take exactly 100 extra milliseconds. What effect does this delay have on the overall data rate when the resend timer is not in use? (Assume the network does not lose any packets.)

    d) Ben turns on the resend timer, but since he hadn’t heard about the satellite delays he sets it to 150 milliseconds. What now is the data rate on Tuesdays? (Again, assume the network does not lose any packets.)

    e) Usually, when discussing end-to-end data rate across a network, the first parameter one hears is the data rate of the slowest link in the network. Why wasn't that parameter needed to answer any of the previous parts of this question?
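
    For parts (a) through (d), the arithmetic is a lock-step expected-time calculation. Here is a small Python sketch of that model; the names and structure are assumptions made for illustration, and it charges each lost packet one full timer interval before the retransmission's round trip completes.

```python
def expected_seconds_per_packet(rtt_s, loss_rate, timer_s):
    """Expected time to get one packet acknowledged with a lock-step protocol.

    A lost packet costs one full timer interval before the retransmission;
    on average there are loss_rate / (1 - loss_rate) such losses per packet.
    """
    return rtt_s + timer_s * loss_rate / (1.0 - loss_rate)

PAYLOAD = 480   # useful bytes per packet, from the exercise

# Part (a): no losses, 100 ms round trip.
print(PAYLOAD / expected_seconds_per_packet(0.100, 0.0, 1.000), "bytes/s")

# Part (b): 1% of packets lost, 1000 ms resend timer.
print(PAYLOAD / expected_seconds_per_packet(0.100, 0.01, 1.000), "bytes/s")
```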

    Exercise \(\PageIndex{27}\)

    1998–1–2a…d

    Ben Bitdiddle is called in to consult for Microhard. Bill Doors, the CEO, has set up an application to control the Justice department in Washington, D.C. The client running on the TNT operating system makes RPC calls from Seattle to the server running in Washington, D.C. The server also runs on TNT (surprise!). Each RPC call instructs the Justice department on how to behave; the response acknowledges the request but contains no data (the Justice department always complies with requests from Microhard). Bill Doors, however, is unhappy with the number of requests that he can send to the Justice department. He therefore wants to improve TNT’s communication facilities.

    Ben observes that the Microhard application runs in a single thread and uses RPC. He also notices that the link between Seattle and Washington, D.C. is reliable. He then proposes that Microhard enhance TNT with a new communication primitive: pipe calls.

    Like RPCs, pipe calls initiate remote computation on the server. Unlike RPCs, however, pipe calls return immediately to the caller and execute asynchronously on the server. TNT packs multiple pipe calls into request messages that are 1000 bytes long. TNT sends the request message to the server as soon as one of the following two conditions becomes true: 1) the message is full, or 2) the message contains at least 1 pipe call and it has been 1 second since the client last performed a pipe call. Pipe calls have no acknowledgments. Pipe calls are not synchronized with respect to RPC calls.

    Ben quickly settles down to work and measures the network traffic between Seattle and Washington. Here is what he observes:

    Seattle to D.C. transit time: \(12.5 \times 10^{-3}\) seconds
    D.C. to Seattle transit time: \(12.5 \times 10^{-3}\) seconds
    Channel bandwidth in each direction: \(1.5 \times 10^{6}\) bits per second
    RPC or pipe data per call: 10 bytes
    Network overhead per message: 40 bytes
    Size of RPC request message (per call): 50 bytes = 10 bytes data + 40 bytes overhead
    Size of pipe request message: 1000 bytes (96 pipe calls per message)
    Size of RPC reply message (no data): 50 bytes
    Client computation time per request: \(100 \times 10^{-6}\) seconds
    Server computation time per request: \(50 \times 10^{-6}\) seconds

    The Microhard application is the only one sending messages on the link.
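
    One way to organize these measurements for the questions that follow is sketched below in Python; the variable names are assumptions made for illustration, and the values are copied from the table above.

```python
TRANSIT = 12.5e-3        # one-way transit time, seconds
BANDWIDTH = 1.5e6        # bits per second, each direction
RPC_REQUEST = 50         # bytes: 10 data + 40 overhead
RPC_REPLY = 50           # bytes, no data
CLIENT_COMPUTE = 100e-6  # seconds per request
SERVER_COMPUTE = 50e-6   # seconds per request

def transmission_delay(message_bytes):
    """Time to push one message onto the link."""
    return message_bytes * 8 / BANDWIDTH

# One synchronous RPC, as seen by the single-threaded client: compute, push
# the request, let it cross the country, let the server compute, then wait
# for the (empty) reply to be pushed and to cross back.
rpc_cycle = (CLIENT_COMPUTE
             + transmission_delay(RPC_REQUEST) + TRANSIT
             + SERVER_COMPUTE
             + transmission_delay(RPC_REPLY) + TRANSIT)
print("seconds to push one RPC request onto the link:", transmission_delay(RPC_REQUEST))
print("seconds per synchronous RPC round trip:", rpc_cycle)
```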

    a) What is the transmission delay the client thread observes in sending an RPC request message?

    b) Assuming that only RPCs are used for remote requests, what is the maximum number of RPCs per second that will be executed by this application?

    c) Assuming that all RPC calls are changed to pipe calls, what is the maximum number of pipe calls per second that will be executed by this application?

    d) Assuming that every pipe call includes a serial number argument, and serial numbers increase by one with every pipe call, how could you know the last pipe call was executed?

    A. Ensure that serial numbers are synchronized to the time of day clock, and wait at the client until the time of the last serial number.

    B. Call an RPC both before and after the pipe call, and wait for both calls to return.

    C. Call an RPC passing as an argument the serial number that was sent on the last pipe call, and design the remote procedure called to not return until a pipe call with a given serial number had been processed.

    D. Stop making pipe calls for twice the maximum network delay, and reset the serial number counter to zero.

    Exercise \(\PageIndex{28}\)

    1996–1–4a…d

    Alyssa P. Hacker is implementing a client/service spell checker in which a network will stand between the client and the service. The client scans an ASCII file, sending each word to the service in a separate message. The service checks each word against its database of correctly spelled words and returns a one-bit answer. The client displays the list of incorrectly spelled words.

    a) The client’s cost for preparing a message to be sent is 1 millisecond, regardless of length. The network transit time is 10 milliseconds, and network data rate is infinite. The service can look up a word and determine whether or not it is misspelled in 100 microseconds. Since the service runs on a supercomputer, its cost for preparing a message to be sent is zero milliseconds. Both the client and service can receive messages with no overhead. How long will Alyssa’s design take to spell check a 1,000 word file if she uses RPC for communication (ignore acknowledgments to requests and replies, and assume that messages are not lost or reordered)?

    b) Alyssa does the same computations that you did and decides that the design is too slow. She decides to group several words into each request. If she packs 10 words in each request, how long will it take to spell check the same file?

    c) Alyssa decides that grouping words still isn’t fast enough, so she wants to know how long it would take if she used an asynchronous message protocol (with grouping words) instead of RPC. How long will it take to spell check the same file? (For this calculation, assume that messages are not lost or reordered.)

    d) Alyssa is so pleased with the performance of this last design that she decides to use it (without grouping) for a banking system. The service maintains a set of accounts and processes requests to debit and credit accounts (i.e., modify account balances). One day Alyssa deposits $10,000 and transfers it to Ben’s account immediately afterwards. The transfer fails with a reply saying she is overdrawn. But when she checks her balance afterwards, the $10,000 is there! Draw a time diagram explaining these events.
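
    For the timing questions in parts (a) through (c), the per-message costs can be laid out as a small Python sketch; the names are assumptions made for illustration, and only the costs stated in part (a) are modeled.

```python
WORDS = 1000             # words in the file
PREPARE = 1e-3           # client cost to prepare any message, seconds
TRANSIT = 10e-3          # one-way network transit time, seconds
LOOKUP = 100e-6          # service time to check one word, seconds

def rpc_time(words_per_request):
    """RPC: the client waits for each reply before preparing the next request."""
    requests = WORDS // words_per_request
    per_request = PREPARE + TRANSIT + words_per_request * LOOKUP + TRANSIT
    return requests * per_request

def asynchronous_time(words_per_request):
    """Asynchronous: the client streams requests without waiting for replies.

    The last request still has to reach the service, be checked, and have
    its answer return before the client can display the result.
    """
    requests = WORDS // words_per_request
    sending = requests * PREPARE
    return sending + TRANSIT + words_per_request * LOOKUP + TRANSIT

print(rpc_time(1), rpc_time(10), asynchronous_time(10), "seconds")
```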


    This page titled 1.11: Exercises is shared under a CC BY-NC-SA license and was authored, remixed, and/or curated by Jerome H. Saltzer & M. Frans Kaashoek (MIT OpenCourseWare).