5 THE NETWORK LAYER
The network layer is concerned with getting packets from the source all the way to the destination. Getting to the destination may require making many hops at intermediate routers along the way. This function clearly contrasts with that of the data link layer, which has the more modest goal of just moving frames from one end of a (virtual) ‘‘wire’’ to the other. Thus, the network layer is the lowest layer that deals with end-to-end transmission.
To achieve its goals, the network layer must learn about the topology of the network (i.e., the set of all routers and links) and compute appropriate paths through it, even for large networks. It must also take care when choosing routes to avoid overloading some of the communication lines and routers while leaving others idle. Finally, when the source and destination are in different independently operated networks, sometimes called autonomous systems, new challenges arise, such as coordinating traffic flows across multiple networks and managing network utilization. These problems are typically handled at the network layer; conventionally, network operators had to deal with them by hand, through low-level configuration. More recently, however, the advent of software-defined networking and programmable hardware has made it possible to configure the network layer from higher-level software programs, and even to redefine its functions entirely. In this chapter, we will study all these issues and illustrate them, focusing in particular on the Internet and its network layer protocol, IP (Internet Protocol).
5.1 NETWORK LAYER DESIGN ISSUES
In the following sections, we will give an introduction to some of the issues that the designers of the network layer must grapple with. These issues include the service provided to the transport layer and the internal design of the network.
5.1.1 Store-and-Forward Packet Switching
Before starting to explain the details of the network layer, it is worth restating the context in which the network layer protocols operate. This context can be seen in Fig. 5-1. The major components of the network are the ISP's equipment (routers, switches, and middleboxes connected by transmission lines), shown inside the shaded oval, and the customers' equipment, shown outside the oval. Host H1 is directly connected to one of the ISP's routers, A, perhaps as a home computer that is plugged into a DSL modem. In contrast, H2 is on a LAN, which might be an office Ethernet, with a router, F, owned and operated by the customer. This router has a leased line to the ISP's equipment. We have shown F as being outside the oval because it does not belong to the ISP. For the purposes of this chapter, however, routers on customer premises are considered part of the ISP network because they run the same algorithms as the ISP's routers (and our main concern here is algorithms).
Figure 5-1. The environment of the network layer protocols. (The figure shows the ISP's equipment, routers A through F, inside the shaded oval, with host H1, running process P1, connected to router A, and host H2, running process P2, on a LAN behind router F.)
This equipment is used as follows. A host with a packet to send transmits it to the nearest router, either on its own LAN or over a point-to-point link to the ISP (e.g., over an ADSL line or a cable television wire). The packet is stored there until it has fully arrived and the link has finished its processing by verifying the checksum. Then it is forwarded to the next router along the path until it reaches the destination host, where it is delivered. This mechanism is store-and-forward packet switching, as we have seen in previous chapters.
5.1.2 Services Provided to the Transport Layer
The network layer provides services to the transport layer at the network layer/transport layer interface. An important question is precisely what kind of services the network layer provides to the transport layer. The services need to be carefully designed with the following goals in mind:
1. The services should be independent of the router technology.
2. The transport layer should be shielded from the number, type, and topology of the routers present.
3. The network addresses made available to the transport layer should use a uniform numbering plan, even across LANs and WANs.
Given these goals, the designers of the network layer have a lot of freedom in writing detailed specifications of the services to be offered to the transport layer. This freedom often degenerates into a raging battle between two warring factions. The discussion centers on whether the network layer should provide connection-oriented service or connectionless service.
One camp (represented by the Internet community) argues that the routers’ job is moving packets around and nothing else. In this view (based on 40 years of experience with a real computer network), the network is inherently unreliable, no matter how it is designed. Therefore, the hosts should accept this fact and do error control (i.e., error detection and correction) and flow control themselves.
This viewpoint leads to the conclusion that the network service should be connectionless, with primitives SEND PACKET and RECEIVE PACKET and little else. In particular, no packet ordering and flow control should be done, because the hosts are going to do that anyway and there is usually little to be gained by doing it twice. This reasoning is an example of the end-to-end argument, a design principle that has been very influential in shaping the Internet (Saltzer et al., 1984). Furthermore, each packet must carry the full destination address, because each packet sent is carried independently of its predecessors, if any.
The other camp (represented by the telephone companies) argues that the network should provide a reliable, connection-oriented service. They claim that 100 years of successful experience with the worldwide telephone system is an excellent guide. In this view, quality of service is the dominant factor, and without connections in the network, quality of service is very difficult to achieve, especially for real-time traffic such as voice and video.
Even after several decades, this controversy is still very much alive. Early, widely used data networks, such as X.25 in the 1970s and its successor Frame Relay in the 1980s, were connection-oriented. However, since the days of the ARPANET and the early Internet, connectionless network layers have grown tremendously in popularity. The IP protocol is now an ever-present symbol of success. It was undeterred by a connection-oriented technology called ATM that was
developed to overthrow it in the 1980s; instead, it is ATM that is now found in niche uses and IP that is taking over telephone networks. Under the covers, however, the Internet is evolving connection-oriented features as quality of service becomes more important. Two examples of connection-oriented technologies are multiprotocol label switching, which we will describe in this chapter, and VLANs, which we saw in Chap. 4. Both technologies are widely used.
5.1.3 Implementation of Connectionless Service
Having looked at the two classes of service the network layer can provide to its users, it is time to see how this layer works inside. Two different organizations are possible, depending on the type of service offered. If connectionless service is offered, packets are injected into the network individually and routed independently of each other. No advance setup is needed. In this context, the packets are frequently called datagrams (in analogy with telegrams) and the network is called a datagram network. If connection-oriented service is used, a path from the source router all the way to the destination router must be established before any data packets can be sent. This connection is called a VC (Virtual Circuit), in analogy with the physical circuits set up by the (old) telephone system, and the network is called a virtual-circuit network. In this section, we will examine datagram networks; in the next one, we will examine virtual-circuit networks.
Let us now see how a datagram network works. Suppose that the process P1 in Fig. 5-2 has a long message for P2. It hands the message to the transport layer, with instructions to deliver it to process P2 on host H2. The transport layer code runs on H1, typically within the operating system. It prepends a transport header to the front of the message and hands the result to the network layer, probably just another procedure within the operating system.
Let us assume for this example that the message is four times longer than the maximum packet size, so the network layer has to break it into four packets, 1, 2, 3, and 4, and send each of them in turn to router A using some point-to-point protocol, for example, PPP. At this point the ISP takes over. Every router has an internal table telling it where to send packets for each of the possible destinations. Each table entry is a pair consisting of a destination and the outgoing line to use for that destination. Only directly connected lines can be used. For example, in Fig. 5-2, A has only two outgoing lines—to B and to C—so every incoming packet must be sent to one of these routers, even if the ultimate destination is to some other router. A's initial routing table is shown in the figure under the label ‘‘initially.’’
At A, packets 1, 2, and 3 are stored briefly, having arrived on the incoming link and had their checksums verified. Then each packet is forwarded according to A’s table, onto the outgoing link to C within a new frame. Packet 1 is then forwarded to E and then to F. When it gets to F, it is sent within a frame over the LAN to H2. Packets 2 and 3 follow the same route.
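Concretely, the forwarding step at each router is just a table lookup keyed by the destination. The C sketch below shows one way A's table from Fig. 5-2 might be represented; the names, the character-coded addresses, and the linear scan are illustrative simplifications, not how real routers store their tables.

#include <stdio.h>

#define NUM_DEST 6                      /* routers A through F */

struct fwd_entry {
    char dest;                          /* destination router */
    char line;                          /* directly connected line to use */
};

/* A's table ‘‘initially’’ from Fig. 5-2; '-' marks the local router. */
struct fwd_entry table_a[NUM_DEST] = {
    {'A', '-'}, {'B', 'B'}, {'C', 'C'},
    {'D', 'B'}, {'E', 'C'}, {'F', 'C'}
};

/* Return the outgoing line for dest; real routers use faster structures. */
char forward(struct fwd_entry t[], char dest)
{
    for (int i = 0; i < NUM_DEST; i++)
        if (t[i].dest == dest)
            return t[i].line;
    return '?';                         /* unknown destination */
}

int main(void)
{
    printf("A forwards a packet for F on line %c\n", forward(table_a, 'F'));
    return 0;
}

With the ‘‘later’’ table, the same lookup would return B for destination F instead.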
Figure 5-2. Routing within a datagram network. (The figure shows the network of Fig. 5-1: packets 1, 2, and 3 travel from H1 via A, C, and E to F and on to H2, while packet 4 is sent from A to B. The routing tables are reproduced below; a dash marks the router itself.)

  Dest.   A's table     A's table   C's table   E's table
          (initially)   (later)
   A          -             -           A           C
   B          B             B           A           D
   C          C             C           -           C
   D          B             B           E           D
   E          C             B           E           -
   F          C             B           E           F
However, something different happens to packet 4. When it gets to A it is sent to router B, even though it is also destined for F. For some reason, A decided to send packet 4 via a different route than that of the first three packets. Perhaps it has learned of a traffic jam somewhere along the ACE path and updated its routing table, as shown under the label ‘‘later.’’ The algorithm that manages the tables and makes the routing decisions is called the routing algorithm. Routing algorithms are one of the main topics we will study in this chapter. There are several different kinds of them, as we will see.
IP, which is the basis for the entire Internet, is the dominant example of a connectionless network service. Each packet carries a destination IP address that routers use to individually forward each packet. The addresses are 32 bits in IPv4 packets and 128 bits in IPv6 packets. We will describe IP and these two versions in much detail later in this chapter.
5.1.4 Implementation of Connection-Oriented Service
For connection-oriented service, we need to have a virtual-circuit network. Let us see how that works. The idea behind virtual circuits is to avoid having to choose a new route for every packet sent, as in Fig. 5-2. Instead, when a connection is established, a route from the source machine to the destination machine is chosen as part of the connection setup and stored in tables inside the routers. That route is used for all traffic flowing over the connection, exactly the same way
that the telephone system works. When the connection is released, the virtual circuit is also terminated. With connection-oriented service, each packet carries an identifier telling which virtual circuit it belongs to.
As an example, consider the situation illustrated in Fig. 5-3. Here, host H1 has established connection 1 with host H2. This connection is remembered as the first entry in each of the routing tables. The first line of A’s table says that if a packet bearing connection identifier 1 comes in from H1, it is to be sent to router C and given connection identifier 1. Similarly, the first entry at C routes the packet to E, also with connection identifier 1.
Figure 5-3. Routing within a virtual-circuit network. (The figure shows hosts H1 and H3 each holding a connection to H2 through routers A, C, E, and F. The tables are reproduced below; each entry maps an incoming line and VC number to an outgoing line and VC number.)

      A's table          C's table          E's table
   In       Out        In       Out       In       Out
   H1, 1    C, 1       A, 1     E, 1      C, 1     F, 1
   H3, 1    C, 2       A, 2     E, 2      C, 2     F, 2
Now let us consider what happens if H3 also wants to establish a connection to H2. It chooses connection identifier 1 (because it is initiating the connection and this is its only connection) and tells the network to establish the virtual circuit. This leads to the second row in the tables. Please note that we have a conflict here because although A can easily distinguish connection 1 packets from H1 from connection 1 packets from H3, C cannot do this. For this reason, A assigns a different connection identifier to the outgoing traffic for the second connection. Avoiding conflicts of this kind is why routers need the ability to replace connection identifiers in outgoing packets.
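In code, this label replacement is one table lookup plus a rewrite of the identifier in the packet. The sketch below is illustrative only: line names are simplified to single characters, and a real implementation (e.g., an MPLS label table) would be indexed directly by label rather than scanned.

#define MAX_VCS 16                      /* illustrative table size */

struct vc_entry {
    char in_line;  int in_vc;           /* incoming line and VC number */
    char out_line; int out_vc;          /* outgoing line and new VC number */
};

/* A's table from Fig. 5-3: both H1 and H3 chose VC 1, so A rewrites the
 * second connection to VC 2 so that C can tell them apart. */
struct vc_entry table_a[MAX_VCS] = {
    {'1', 1, 'C', 1},                   /* from H1, VC 1 -> to C, VC 1 */
    {'3', 1, 'C', 2},                   /* from H3, VC 1 -> to C, VC 2 */
};

/* Find the circuit and rewrite the packet's VC number in place.
 * Returns 1 and sets *out_line on success, or 0 if no such circuit. */
int switch_vc(struct vc_entry t[], int n, char in_line, int *vc, char *out_line)
{
    for (int i = 0; i < n; i++)
        if (t[i].in_line == in_line && t[i].in_vc == *vc) {
            *vc = t[i].out_vc;          /* the label replacement step */
            *out_line = t[i].out_line;
            return 1;
        }
    return 0;
}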
An example of a connection-oriented network service is MPLS (MultiProtocol Label Switching). It is used within ISP networks in the Internet, with IP packets wrapped in an MPLS header having a 20-bit connection identifier or label. MPLS
is often hidden from customers, with the ISP establishing long-term connections for large amounts of traffic. It is increasingly used when quality of service is important, and also for other ISP traffic-management tasks. We will have more to say about MPLS later in this chapter.
5.1.5 Comparison of Virtual-Circuit and Datagram Networks
Both virtual circuits and datagrams have their supporters and their detractors. We will now attempt to summarize both sets of arguments. The major issues are listed in Fig. 5-4, although purists could probably find a counterexample for everything in the figure.
Issue                       Datagram network                    Virtual-circuit network
Circuit setup               Not needed                          Required
Addressing                  Each packet contains the full       Each packet contains a
                            source and destination address      short VC number
State information           Routers do not hold state           Each VC requires router
                            information about connections       table space per connection
Routing                     Each packet is routed               Route chosen when VC is set
                            independently                       up; all packets follow it
Effect of router failures   None, except for packets            All VCs that passed through
                            lost during the crash               the failed router are terminated
Quality of service          Difficult                           Easy if enough resources can be
                                                                allocated in advance for each VC
Congestion control          Difficult                           Easy if enough resources can be
                                                                allocated in advance for each VC

Figure 5-4. Comparison of datagram and virtual-circuit networks.
Inside the network, several trade-offs exist between virtual circuits and datagrams. One trade-off is setup time versus address parsing time. Using virtual circuits requires a setup phase, which takes time and consumes resources. However, once this price is paid, figuring out what to do with a data packet in a virtual-circuit network is easy: the router just uses the circuit number to index into a table to find out where the packet goes. In a datagram network, no setup is needed but a more complicated lookup procedure is required to locate the entry for the destination.
A related issue is that the destination addresses used in datagram networks are longer than circuit numbers used in virtual-circuit networks because they have a global meaning. If the packets tend to be fairly short, including a full destination
address in every packet may represent a significant amount of overhead, and hence a waste of bandwidth.
Yet another issue is the amount of table space required in router memory. A datagram network needs to have an entry for every possible destination, whereas a virtual-circuit network just needs an entry for each virtual circuit. However, this advantage is somewhat illusory since connection setup packets have to be routed too, and they use destination addresses, the same as datagrams do.
Virtual circuits have some advantages in guaranteeing quality of service and avoiding congestion within the network because resources (e.g., buffers, bandwidth, and CPU cycles) can be reserved in advance, when the connection is established. Once the packets start arriving, the necessary bandwidth and router capacity will be there. With a datagram network, congestion avoidance is more difficult.
For transaction processing systems (e.g., stores calling up to verify credit card purchases), the overhead required to set up and clear a virtual circuit may easily dwarf the use of the circuit. If the majority of the traffic is expected to be of this kind, the use of virtual circuits inside the network makes little sense. On the other hand, for long-running uses such as VPN traffic between two corporate offices, permanent virtual circuits (that are set up manually and last for months or years) may be useful.
Virtual circuits also have a vulnerability problem. If a router crashes and loses its memory, even if it comes back up a second later, all the virtual circuits passing through it will have to be aborted. In contrast, if a datagram router goes down, only those users whose packets were queued in the router at the time need suffer (and probably not even then since the sender is likely to retransmit them shortly). The loss of a communication line is fatal to virtual circuits using it, but can easily be compensated for if datagrams are used. Datagrams also allow the routers to balance the traffic throughout the network, since routes can be changed partway through a long sequence of packet transmissions.
5.2 ROUTING ALGORITHMS IN A SINGLE NETWORK
The main function of the network layer is routing packets from the source machine to the destination machine. In this section, we discuss how the network layer achieves this function within a single administrative domain or autonomous system. In most networks, packets will require multiple hops to make the journey. The only notable exception is for broadcast networks, but even here routing is an issue if the source and destination are not on the same network segment. The algorithms that choose the routes and the data structures that they use are a major area of network layer design.
The routing algorithm is that part of the network layer software responsible for deciding which output line an incoming packet should be transmitted on. If the network uses datagrams internally, the routing decision must be made anew for
every arriving data packet since the best route may have changed since last time. If the network uses virtual circuits internally, routing decisions are made only when a new virtual circuit is being set up. Thereafter, data packets just follow the already established route. The latter case is sometimes called session routing because a route remains in force for an entire session (e.g., while logged in over a VPN).
It is sometimes useful to make a distinction between routing, which is making the decision which routes to use, and forwarding, which is what happens when a packet arrives. One can think of a router as having two processes inside it. One of them handles each packet as it arrives, looking up the outgoing line to use for it in the routing tables. This process is forwarding. The other process is responsible for filling in and updating the routing tables. That is where the routing algorithm comes into play.
Regardless of whether routes are chosen independently for each packet sent or only when new connections are established, certain properties are desirable in a routing algorithm: correctness, simplicity, robustness, stability, fairness, and efficiency. Correctness and simplicity hardly require comment, but the need for robustness may be less obvious at first. Once a major network comes on the air, it may be expected to run continuously for years without system-wide failures. During that period there will be hardware and software failures of all kinds. Hosts, routers, and lines will fail repeatedly, and the topology will change many times. The routing algorithm should be able to cope with changes in the topology and traffic without requiring all jobs in all hosts to be aborted. Imagine the havoc if the network needed to be rebooted every time some router crashed.
Stability is also an important goal for the routing algorithm. There exist routing algorithms that never converge to a fixed set of paths, no matter how long they run. A stable algorithm reaches equilibrium and stays there. It should converge quickly too, since communication may be disrupted until the routing algorithm has reached equilibrium.
Fairness and efficiency may sound obvious—surely no reasonable person would oppose them—but as it turns out, they are often contradictory goals. As a simple example of this conflict, look at Fig. 5-5. Suppose that there is enough traffic between A and A', between B and B', and between C and C' to saturate the horizontal links. To maximize the total flow, the X to X' traffic should be shut off altogether. Unfortunately, X and X' may not see it that way. Evidently, some compromise between global efficiency and fairness to individual connections is needed.
Before we can even attempt to find trade-offs between fairness and efficiency, we must decide what it is we seek to optimize. Minimizing the mean packet delay is an obvious candidate to send traffic through the network effectively, but so is maximizing total network throughput. Furthermore, these two goals are also in conflict, since operating any queueing system near capacity implies a long queueing delay. As a compromise, many networks attempt to minimize the distance a packet must travel, or alternatively, simply reduce the number of hops a packet must make. Either choice tends to improve the delay and also reduce the amount of bandwidth consumed per packet, which generally tends to improve the overall network throughput as well.

Figure 5-5. Network with a conflict between fairness and efficiency. (Horizontal links connect A to A', B to B', and C to C'; the X to X' traffic must cross all of them.)
Routing algorithms can be grouped into two major classes: nonadaptive and adaptive. Nonadaptive algorithms do not base their routing decisions on any measurements or estimates of the current topology and traffic. Instead, the choice of the route to use to get from I to J (for all I and J) is computed in advance, offline, and downloaded to the routers when the network is booted. This procedure is sometimes called static routing. Because it does not respond to failures, static routing is mostly useful for situations in which the routing choice is clear. For example, router F in Fig. 5-3 should send packets headed into the network to router E regardless of the ultimate destination.
Adaptive algorithms, in contrast, change their routing decisions to reflect changes in the topology, and sometimes changes in the traffic as well. These dynamic routing algorithms differ in where they get their information (e.g., locally, from adjacent routers, or from all routers), when they change the routes (e.g., when the topology changes, or every ΔT seconds as the load changes), and what metric is used for optimization (e.g., distance, number of hops, or estimated transit time).
In the following sections, we will discuss a variety of routing algorithms. The algorithms cover delivery models besides sending a packet from a source to a destination. Sometimes the goal is to send the packet to multiple, all, or one of a set of destinations. All the routing algorithms we describe here make decisions based on the topology; we defer the possibility of decisions based on the traffic to Sec. 5.3.
5.2.1 The Optimality Principle
Before we get into specific algorithms, it may be helpful to note that one can make a general statement about optimal routes without regard to network topology or traffic. This statement is known as the optimality principle (Bellman, 1957).
It states that if router J is on the optimal path from router I to router K, then the optimal path from J to K also falls along the same route. To see this, call the part of the route from I to J r1 and the rest of the route r2. If a route better than r2 existed from J to K, it could be concatenated with r1 to improve the route from I to K, contradicting our statement that r1r2 is optimal.

As a direct consequence of the optimality principle, we can see that the set of optimal routes from all sources to a given destination form a tree rooted at the destination. Such a tree is called a sink tree and is illustrated in Fig. 5-6(b) for the network of Fig. 5-6(a). Here, the distance metric is the number of hops. The goal of all routing algorithms is to discover and use the sink trees for all routers.
Figure 5-6. (a) A network. (b) A sink tree for router B. (Both panels show routers A through O; panel (b) keeps only the links on the hop-count-optimal routes from every router to B.)
Note that a sink tree is not necessarily unique; other trees with the same path lengths may exist. If we allow all of the possible paths to be chosen, the tree becomes a more general structure called a DAG (Directed Acyclic Graph). DAGs have no loops. We will use sink trees as a convenient shorthand for both cases. Both cases also depend on the technical assumption that the paths do not interfere with each other so, for example, a traffic jam on one path will not cause another path to divert.
Since a sink tree is indeed a tree, it does not contain any loops, so each packet will be delivered within a finite and bounded number of hops. In practice, life is not quite this easy. Links and routers can go down and come back up during operation, so different routers may have different ideas about the current topology. Also, we have quietly finessed the issue of whether each router has to individually acquire the information on which to base its sink tree computation or whether this information is collected by some other means. We will come back to these issues shortly. Nevertheless, the optimality principle and the sink tree provide a benchmark against which other routing algorithms can be measured.
5.2.2 Shortest Path Algorithm
Let us begin our study of routing algorithms with a simple technique for computing optimal paths given a complete picture of the network. These paths are the ones that we want a distributed routing algorithm to find, even though not all routers may know all of the details of the network.
The idea is to build a graph of the network, with each node of the graph representing a router and each edge of the graph representing a communication line, or link. To choose a route between a given pair of routers, the algorithm just finds the shortest path between them on the graph.
The concept of a shortest path deserves some explanation. One way of measuring path length is the number of hops. Using this metric, the paths ABC and ABE in Fig. 5-7 are equally long. Another metric is the geographic distance in kilometers, in which case ABC is clearly much longer than ABE (assuming the figure is drawn to scale).
Figure 5-7. The first six steps used in computing the shortest path from A to D. The arrows indicate the working node. (Panel (a) shows the weighted, undirected graph with nodes A through H; panels (b) through (f) show successive steps, with each node labeled by its tentative distance from A and the node from which it was probed, and unreached nodes labeled infinity.)
However, many other metrics besides hops and physical distance are also possible. For example, each edge could be labeled with the mean delay of a standard
test packet, as measured by hourly runs. With this graph labeling, the shortest path is the fastest path rather than the path with the fewest edges or kilometers. In the general case, the labels on the edges could be computed as a function of the distance, bandwidth, average traffic, communication cost, measured delay, and other factors. By changing the weighting function, the algorithm would then compute the ‘‘shortest’’ path measured according to any one of a number of criteria or to a combination of criteria.
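For instance, a composite weight might blend several measured link properties into a single number. The function below is only a sketch; the coefficients are arbitrary tuning knobs chosen for illustration, not values from any standard.

/* Illustrative composite link weight: smaller is better. */
double link_weight(double delay_ms, double bandwidth_mbps, double cost_per_mb)
{
    return 1.0 * delay_ms               /* measured delay */
         + 100.0 / bandwidth_mbps       /* penalize low-bandwidth links */
         + 0.5 * cost_per_mb;           /* monetary cost of the line */
}

Running the shortest path algorithm with these edge labels then optimizes the blended criterion rather than hop count alone.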
Several algorithms for computing the shortest path between two nodes of a graph are known. This one is due to Dijkstra (1959) and finds the shortest paths between a source and all destinations in the network. Each node is labeled (in parentheses) with its distance from the source node along the best known path. The distances must be non-negative, as they will be if they are based on real quantities like bandwidth and delay. Initially, no paths are known, so all nodes are labeled with infinity. As the algorithm proceeds and paths are found, the labels may change, reflecting better paths. A label may be either tentative or permanent. Initially, all labels are tentative. When it is discovered that a label represents the shortest possible path from the source to that node, it is made permanent and never changed thereafter.
To illustrate how the labeling algorithm works, look at the weighted, undirected graph of Fig. 5-7(a), where the weights represent, for example, distance. We want to find the shortest path from A to D. We start out by marking node A as permanent, indicated by a filled-in circle. Then we examine, in turn, each of the nodes adjacent to A (the working node), relabeling each one with the distance to A. Whenever a node is relabeled, we also label it with the node from which the probe was made so that we can reconstruct the final path later. If the network had more than one shortest path from A to D and we wanted to find all of them, we would need to remember all of the probe nodes that could reach a node with the same distance.
Having examined each of the nodes adjacent to A, we examine all the tentatively labeled nodes in the whole graph and make the one with the smallest label permanent, as shown in Fig. 5-7(b). This one becomes the new working node.
We now start at B and examine all nodes adjacent to it. If the sum of the label on B and the distance from B to the node being considered is less than the label on that node, we have a shorter path, so the node is relabeled.
After all the nodes adjacent to the working node have been inspected and the tentative labels changed if possible, the entire graph is searched for the tentatively labeled node with the smallest value. This node is made permanent and becomes the working node for the next round. Figure 5-7 shows the first six steps of the algorithm.
To see why the algorithm works, look at Fig. 5-7(c). At this point we have just made E permanent. Suppose that there were a shorter path than ABE, say AXYZE (for some X and Y). There are two possibilities: either node Z has already been made permanent, or it has not been. If it has, then E has already been probed (on
the round following the one when Z was made permanent), so the AXYZE path has not escaped our attention and thus cannot be a shorter path.
Now consider the case where Z is still tentatively labeled. If the label at Z is greater than or equal to that at E, then AXYZE cannot be a shorter path than ABE. If the label is less than that of E, then Z and not E will become permanent first, allowing E to be probed from Z.
This algorithm is given in C in Fig. 5-8. The global variables n and dist describe the graph and are initialized before shortest_path is called. The only difference between the program and the algorithm described above is that in Fig. 5-8, we compute the shortest path starting at the terminal node, t, rather than at the source node, s.
Since the shortest paths from t to s in an undirected graph are the same as the shortest paths from s to t, it does not matter at which end we begin. The reason for searching backward is that each node is labeled with its predecessor rather than its successor. When the final path is copied into the output variable, path, the path is thus reversed. The two reversal effects cancel, and the answer is produced in the correct order.
5.2.3 Flooding
When a routing algorithm is implemented, each router must make decisions based on local knowledge, not the complete picture of the network. A simple local technique is flooding, in which every incoming packet is sent out on every outgoing line except the one it arrived on.
Flooding obviously generates vast numbers of duplicate packets, in fact, an infinite number unless some measures are taken to damp the process. One such measure is to have a hop counter contained in the header of each packet that is decremented at each hop, with the packet being discarded when the counter reaches zero. Ideally, the hop counter should be initialized to the length of the path from source to destination. If the sender does not know how long the path is, it can initialize the counter to the worst case, namely, the full diameter of the network.
Flooding with a hop count can produce an exponential number of duplicate packets as the hop count grows and routers duplicate packets they have seen before. A better technique for damming the flood is to have routers keep track of which packets have been flooded, to avoid sending them out a second time. One way to achieve this goal is to have the source router put a sequence number in each packet it receives from its hosts. Each router then needs a list per source router telling which sequence numbers originating at that source have already been seen. If an incoming packet is on the list, it is not flooded.
To prevent the list from growing without bound, each list should be augmented by a counter, k, meaning that all sequence numbers through k have been seen. When a packet comes in, it is easy to check if the packet has already been flooded (by comparing its sequence number to k); if so, it is discarded. Furthermore, the full list below k is not needed, since k effectively summarizes it.
#define MAX_NODES 1024                      /* maximum number of nodes */
#define INFINITY  1000000000                /* a number larger than every maximum path */

int n, dist[MAX_NODES][MAX_NODES];          /* dist[i][j] is the distance from i to j */

void shortest_path(int s, int t, int path[])
{
    struct state {                          /* the path being worked on */
        int predecessor;                    /* previous node */
        int length;                         /* length from source to this node */
        enum {permanent, tentative} label;  /* label state */
    } state[MAX_NODES];

    int i, k, min;
    struct state *p;

    for (p = &state[0]; p < &state[n]; p++) { /* initialize state */
        p->predecessor = -1;
        p->length = INFINITY;
        p->label = tentative;
    }
    state[t].length = 0; state[t].label = permanent;
    k = t;                                  /* k is the initial working node */

    do {                                    /* is there a better path from k? */
        for (i = 0; i < n; i++)             /* this graph has n nodes */
            if (dist[k][i] != 0 && state[i].label == tentative) {
                if (state[k].length + dist[k][i] < state[i].length) {
                    state[i].predecessor = k;
                    state[i].length = state[k].length + dist[k][i];
                }
            }

        /* Find the tentatively labeled node with the smallest label. */
        k = 0; min = INFINITY;
        for (i = 0; i < n; i++)
            if (state[i].label == tentative && state[i].length < min) {
                min = state[i].length;
                k = i;
            }
        state[k].label = permanent;
    } while (k != s);

    /* Copy the path into the output array. */
    i = 0; k = s;
    do { path[i++] = k; k = state[k].predecessor; } while (k >= 0);
}
Figure 5-8. Dijkstra's algorithm to compute the shortest path through a graph.
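Returning to flooding, the per-source bookkeeping just described (a counter k plus a short list of sequence numbers seen above it) can be sketched as follows. The array sizes and names are illustrative only, and sequence numbers are assumed to start at 1.

#include <stdbool.h>

#define MAX_SOURCES 64                      /* illustrative limits */
#define LIST_LEN    32

struct seen_state {
    int floor;                              /* the counter k: all numbers through k seen */
    int extra[LIST_LEN];                    /* numbers seen above the floor */
    int n_extra;
};

static struct seen_state seen[MAX_SOURCES];

/* Returns true if the packet is new and should be flooded on all lines
 * except the one it arrived on; false if it is a duplicate. */
bool should_flood(int source, int seq)
{
    struct seen_state *s = &seen[source];

    if (seq <= s->floor)                    /* summarized by k: duplicate */
        return false;
    for (int i = 0; i < s->n_extra; i++)
        if (s->extra[i] == seq)             /* on the list: duplicate */
            return false;

    if (s->n_extra < LIST_LEN)
        s->extra[s->n_extra++] = seq;       /* remember this packet */

    /* Slide the floor past any contiguous run just above it, pruning
     * the list so that k keeps summarizing it. */
    for (bool moved = true; moved; ) {
        moved = false;
        for (int i = 0; i < s->n_extra; i++)
            if (s->extra[i] == s->floor + 1) {
                s->floor++;
                s->extra[i] = s->extra[--s->n_extra];
                moved = true;
                break;
            }
    }
    return true;
}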
Flooding is not practical for sending most packets, but it does have some important uses. First, it ensures that a packet is delivered to every node in the network. This may be wasteful if there is a single destination that needs the packet,
but it is effective for broadcasting information. In wireless networks, all messages transmitted by a station can be received by all other stations within its radio range, which is, in fact, flooding, and some algorithms utilize this property.
Second, flooding is tremendously robust. Even if large numbers of routers are blown to smithereens (e.g., in a military network located in a war zone), flooding will find a path if one exists, to get a packet to its destination. Flooding also requires little in the way of setup. The routers only need to know their neighbors. This means that flooding can be used as a building block for other routing algorithms that are more efficient but need more in the way of setup. Flooding can also be used as a metric against which other routing algorithms can be compared. Flooding always chooses the shortest path because it chooses every possible path in parallel. Consequently, no other algorithm can produce a shorter delay (if we ignore the overhead generated by the flooding process itself).
5.2.4 Distance Vector Routing
Computer networks generally use dynamic routing algorithms that are more complex than flooding, but more efficient because they find shortest paths for the current topology. Two dynamic algorithms in particular, distance vector routing and link state routing, are the most popular. In this section, we will look at the former algorithm. In the following section, we will study the latter algorithm.
A distance vector routing algorithm operates by having each router maintain a table (i.e., a vector) giving the best known distance to each destination and which link to use to get there. These tables are updated by exchanging information with the neighbors. Eventually, every router knows the best link to reach each destination.
The distance vector routing algorithm is sometimes called by other names, most commonly the distributed Bellman-Ford routing algorithm, after the researchers who developed it (Bellman, 1957; and Ford and Fulkerson, 1962). It was the original ARPANET routing algorithm and was also used in the Internet under the name RIP.
In distance vector routing, each router maintains a routing table indexed by, and containing one entry for, each router in the network. This entry has two parts: the preferred outgoing line to use for that destination, and an estimate of the distance to that destination. The distance might be measured as the number of hops or using another metric, as we discussed for computing shortest paths.
The router is assumed to know the ‘‘distance’’ to each of its neighbors. If the metric is hops, the distance is just one hop. If the metric is propagation delay, the router can measure it directly with special ECHO packets that the receiver just timestamps and sends back as fast as it can.
As an example, assume that delay is used as a metric and that the router knows the delay to each of its neighbors. Once every T msec, each router sends to each neighbor a list of its estimated delays to each destination. It also receives a similar
list from each neighbor. Imagine that one of these tables has just come in from neighbor X, with Xi being X's estimate of how long it takes to get to router i. If the router knows that the delay to X is m msec, it also knows that it can reach router i via X in Xi + m msec. By performing this calculation for each neighbor, a router can find out which estimate seems the best and use that estimate and the corresponding link in its new routing table. Note that the old routing table is not used in the calculation.
This updating process is illustrated in Fig. 5-9. Part (a) shows a network. The first four columns of part (b) show the delay vectors received from the neighbors of router J. A claims to have a 12-msec delay to B, a 25-msec delay to C, a 40-msec delay to D, etc. Suppose that J has measured or estimated its delay to its neighbors, A, I, H, and K, as 8, 10, 12, and 6 msec, respectively.
Figure 5-9. (a) A network. (b) Input from A, I, H, K, and the new routing table for J. (The vectors and J's new table are reproduced below; the JA, JI, JH, and JK link delays are 8, 10, 12, and 6 msec, respectively.)

  To    From A   From I   From H   From K   New estimated   Line
                                            delay from J
  A        0       24       20       21           8           A
  B       12       36       31       28          20           A
  C       25       18       19       36          28           I
  D       40       27        8       24          20           H
  E       14        7       30       22          17           I
  F       23       20       19       40          30           I
  G       18       31        6       31          18           H
  H       17       20        0       19          12           H
  I       21        0       14       22          10           I
  J        9       11        7       10           0           -
  K       24       22       22        0           6           K
  L       29       33        9        9          15           K
Consider how J computes its new route to router G. It knows that it can get to A in 8 msec, and furthermore A claims to be able to get to G in 18 msec, so J knows it can count on a delay of 26 msec to G if it forwards packets bound for G to A. Similarly, it computes the delay to G via I, H, and K as 41 (31 + 10), 18 (6 + 12), and 37 (31 + 6) msec, respectively. The best of these values is 18, so it makes an entry in its routing table that the delay to G is 18 msec and that the route to use is via H. The same calculation is performed for all the other destinations, with the new routing table shown in the last column of the figure.
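In code, J's computation is one pass over its neighbors for each destination, taking the minimum of (delay to the neighbor + the neighbor's advertised delay). The sketch below uses illustrative fixed sizes matching Fig. 5-9; a real implementation would size these dynamically.

#define N_DEST     12                   /* destinations A through L */
#define N_NEIGHBOR 4                    /* J's neighbors: A, I, H, K */
#define INFTY      1000000000

/* vec[m][i] is neighbor m's advertised delay to destination i;
 * link[m] is J's own measured delay to neighbor m. The results go
 * into best[] and via[]; the old table is deliberately not consulted. */
void update_table(int vec[N_NEIGHBOR][N_DEST], int link[N_NEIGHBOR],
                  int best[N_DEST], int via[N_DEST])
{
    for (int i = 0; i < N_DEST; i++) {
        best[i] = INFTY;
        via[i] = -1;
        for (int m = 0; m < N_NEIGHBOR; m++) {
            int d = link[m] + vec[m][i];    /* delay to i via neighbor m */
            if (d < best[i]) {
                best[i] = d;
                via[i] = m;                 /* line on which to forward */
            }
        }
    }
}

For destination G this computes min(8 + 18, 10 + 31, 12 + 6, 6 + 31) = 18 via H, exactly the entry in the last column of Fig. 5-9.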
The Count-to-Infinity Problem
The settling of routes to best paths across the network is called convergence. Distance vector routing is useful as a simple technique by which routers can collectively compute shortest paths, but it has a serious drawback in practice: although it converges to the correct answer, it may do so slowly. In particular, it reacts rapidly to good news, but leisurely to bad news. Consider a router whose best route to destination X is long. If, on the next exchange, neighbor A suddenly reports a short delay to X, the router just switches over to using the line to A to send traffic to X. In one vector exchange, the good news is processed.
To see how fast good news propagates, consider the five-node (linear) network of Fig. 5-10, where the delay metric is the number of hops. Suppose A is down initially and all the other routers know this. In other words, they have all recorded the delay to A as infinity.
(a) A comes up:

                      B  C  D  E
   Initially          ∞  ∞  ∞  ∞
   After 1 exchange   1  ∞  ∞  ∞
   After 2 exchanges  1  2  ∞  ∞
   After 3 exchanges  1  2  3  ∞
   After 4 exchanges  1  2  3  4

(b) A goes down:

                      B  C  D  E
   Initially          1  2  3  4
   After 1 exchange   3  2  3  4
   After 2 exchanges  3  4  3  4
   After 3 exchanges  5  4  5  4
   After 4 exchanges  5  6  5  6
   After 5 exchanges  7  6  7  6
   After 6 exchanges  7  8  7  8
   ...
                      ∞  ∞  ∞  ∞

Figure 5-10. The count-to-infinity problem. (Each row gives the distance to A recorded at routers B, C, D, and E of the linear network A-B-C-D-E after each vector exchange.)
When A comes up, the other routers learn about it via the vector exchanges. For simplicity, we will assume that there is a gigantic gong somewhere that is struck periodically to initiate a vector exchange at all routers simultaneously. At the time of the first exchange, B learns that its left-hand neighbor has zero delay to A. B now makes an entry in its routing table indicating that A is one hop away to the left. All the other routers still think that A is down. At this point, the routing table entries for A are as shown in the second row of Fig. 5-10(a). On the next exchange, C learns that B has a path of length 1 to A, so it updates its routing table to indicate a path of length 2, but D and E do not hear the good news until later. Clearly, the good news is spreading at the rate of one hop per exchange. In a network whose longest path is of length N hops, within N exchanges everyone will know about newly revived links and routers.
Now let us consider the situation of Fig. 5-10(b), in which all the links and routers are initially up. Routers B, C, D, and E have distances to A of 1, 2, 3, and 4
hops, respectively. Suddenly, either A goes down or the link between A and B is cut (which is effectively the same thing from B’s point of view).
At the first packet exchange, B does not hear anything from A. Fortunately, C says ‘‘Do not worry; I have a path to A of length 2.’’ Little does B suspect that C’s path runs through B itself. For all B knows, C might have 10 links all with separate paths to A of length 2. As a result, B thinks it can reach A via C, with a path length of 3. D and E do not update their entries for A on the first exchange.
On the second exchange, C notices that each of its neighbors claims to have a path to A of length 3. It picks one of them at random and makes its new distance to A 4, as shown in the third row of Fig. 5-10(b). Subsequent exchanges produce the history shown in the rest of Fig. 5-10(b).
From this figure, it should be clear why bad news travels slowly: no router ever has a value more than one higher than the minimum of all its neighbors. Gradually, all routers work their way up to infinity, but the number of exchanges required depends on the numerical value used for infinity. For this reason, it is wise to set infinity to the longest path plus 1.
Not entirely surprisingly, this problem is known as the count-to-infinity problem. There have been many attempts to solve it, for example, preventing routers from advertising their best paths back to the neighbors from which they heard them; the split horizon and poisoned reverse heuristics are discussed in RFC 1058. However, none of these heuristics work well in practice despite the colorful names. The core of the problem is that when X tells Y that it has a path somewhere, Y has no way of knowing whether it itself is on the path.
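To make the poisoned reverse idea concrete, here is a sketch of how a router might build the vector it advertises to one particular neighbor; this is an illustration of the heuristic, not the actual RFC 1058 code.

#define N_DEST 16
#define INFTY  1000000000

/* Build the vector to advertise to 'neighbor'. Any destination we
 * currently reach through that neighbor is reported as unreachable
 * (poisoned), so the neighbor will not try to route to it through us. */
void build_advertisement(int best[N_DEST], int via[N_DEST],
                         int neighbor, int out[N_DEST])
{
    for (int i = 0; i < N_DEST; i++)
        out[i] = (via[i] == neighbor) ? INFTY : best[i];
}

In the scenario of Fig. 5-10(b), C would then advertise an infinite distance to A when talking to B, so B could not be fooled into routing to A via C. The heuristic still fails for loops involving three or more routers, which is one reason it does not fully solve the problem.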
5.2.5 Link State Routing
Distance vector routing was used in the ARPANET until 1979, when it was replaced by link state routing. The primary problem that caused its demise was that the algorithm often took too long to converge after the network topology changed (due to the count-to-infinity problem). Consequently, it was replaced by an entirely new algorithm, now called link state routing. Variants of link state routing called IS-IS and OSPF are the routing algorithms that are most widely used inside large networks and the Internet today.
The idea behind link state routing is fairly simple and can be stated as five parts. Each router must do the following things to make it work:
1. Discover its neighbors and learn their network addresses.
2. Set the distance or cost metric to each of its neighbors.
3. Construct a packet telling all it has just learned.
4. Send this packet to and receive packets from all other routers.
5. Compute the shortest path to every other router.
In effect, the complete topology is distributed to every router. Then Dijkstra's algorithm can be run at each router to find the shortest path to every other router. Below we will consider each of these five steps in more detail.
Learning about the Neighbors
When a router is booted, its first task is to learn who its neighbors are. It accomplishes this goal by sending a special HELLO packet on each point-to-point line. The router on the other end is expected to send back a reply giving its name. These names must be globally unique because when a distant router later hears that three routers are all connected to F, it is essential that it can determine whether all three mean the same F.
When two or more routers are connected by a broadcast link (e.g., a switch, ring, or classic Ethernet), the situation is slightly more complicated. Figure 5-11(a) illustrates a broadcast LAN to which three routers, A, C, and F, are directly connected. Each of these routers is connected to one or more additional routers, as shown.
Figure 5-11. (a) Nine routers and a broadcast LAN. (b) A graph model of (a). (In (b), the LAN is replaced by an artificial node N to which routers A, C, and F are connected.)
The broadcast LAN provides connectivity between each pair of attached routers. However, modeling the LAN as many point-to-point links increases the size of the topology and leads to wasteful messages. A better way to model the LAN is to consider it as a node itself, as shown in Fig. 5-11(b). Here, we have introduced a new, artificial node, N, to which A, C, and F are connected. One designated router on the LAN is selected to play the role of N in the routing protocol. The fact that it is possible to go from A to C on the LAN is represented by the path ANC here.
Setting Link Costs
The link state routing algorithm requires each link to have a distance or cost metric for finding shortest paths. The cost to reach neighbors can be set automatically, or configured by the network operator. A common choice is to make the cost inversely proportional to the bandwidth of the link. For example, 1-Gbps Ethernet may have a cost of 1 and 100-Mbps Ethernet may have a cost of 10. This makes higher-capacity paths better choices.
If the network is geographically spread out, the delay of the links may be factored into the cost so that paths over shorter links are better choices. The most direct way to determine this delay is to send over the line a special ECHO packet that the other side is required to send back immediately. By measuring the round-trip time and dividing it by two, the sending router can get an estimate of the delay.
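Both policies fit in a few lines of code. In the sketch below, the 1000-Mbps reference bandwidth reproduces the costs quoted above (cost 1 for 1-Gbps Ethernet, cost 10 for 100-Mbps Ethernet); the reference value itself is an assumed configuration knob, not a fixed constant of the algorithm.

#define REF_BANDWIDTH_MBPS 1000.0       /* assumed reference: 1 Gbps */

/* Inverse-bandwidth cost, clamped so no link is cheaper than 1. */
double bandwidth_cost(double link_mbps)
{
    double c = REF_BANDWIDTH_MBPS / link_mbps;
    return c < 1.0 ? 1.0 : c;
}

/* One-way delay estimated from an ECHO exchange: half the round trip. */
double one_way_delay_ms(double sent_ms, double received_ms)
{
    return (received_ms - sent_ms) / 2.0;
}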
Building Link State Packets
Once the information needed for the exchange has been collected, the next step is for each router to build a packet containing all the data. The packet starts with the identity of the sender, followed by a sequence number and age (to be described later) and a list of neighbors. The cost to each neighbor is also given. An example network is presented in Fig. 5-12(a) with costs shown as labels on the lines. The corresponding link state packets for all six routers are shown in Fig. 5-12(b).
Figure 5-12. (a) A network. (b) The link state packets for this network. (Panel (a) shows six routers with link costs A-B 4, A-E 5, B-C 2, B-F 6, C-D 3, C-E 1, D-F 7, and E-F 8. The packets of panel (b) are reproduced below.)

    A       B       C       D       E       F
   Seq.    Seq.    Seq.    Seq.    Seq.    Seq.
   Age     Age     Age     Age     Age     Age
   B 4     A 4     B 2     C 3     A 5     B 6
   E 5     C 2     D 3     F 7     C 1     D 7
           F 6     E 1             F 8     E 8
Building the link state packets is easy. The hard part is determining when to build them. One possibility is to build them periodically, at regular intervals. Another possibility is to build them when some specific event occurs, such as a line or neighbor going down or coming back up again or changing its properties.
Distributing the Link State Packets
The trickiest part of the algorithm is distributing the link state packets. All of the routers must get all of the link state packets quickly and reliably. If different routers are using different versions of the topology, the routes they compute can have inconsistencies, such as loops, unreachable machines, and other problems.
First, we will describe the basic distribution algorithm. After that, we will give some refinements. The fundamental idea is to use flooding to distribute the link state packets to all routers. To keep the flood in check, each packet contains a sequence number that is incremented for each new packet sent. Routers keep track of all the (source router, sequence) pairs they see. When a new link state packet comes in, it is checked against the list of packets already seen. If it is new, it is forwarded on all lines except the one it arrived on. If it is a duplicate, it is discarded. If a packet with a sequence number lower than the highest one seen so far ever arrives, it is rejected as being obsolete, since the router has more recent data.
This algorithm has a few problems, but they are manageable. First, if the sequence numbers wrap around, confusion will reign. The solution here is to use a 32-bit sequence number. With one link state packet per second, it would take 137 years to wrap around, so this possibility can be ignored.
Second, if a router ever crashes, it will lose track of its sequence number. If it starts again at 0, the next packet it sends will be rejected as a duplicate. Third, if a sequence number is ever corrupted and 65,540 is received instead of 4 (a 1-bit error), packets 5 through 65,540 will be rejected as obsolete, since the current sequence number will be thought to be 65,540.
The solution to these problems is to include the age of each packet after the sequence number and decrement it once a second. When the age hits zero, the information from that router is discarded. Normally, a new packet comes in, say, every 10 sec, so router information only times out when a router is down (or six consecutive packets have been lost, an unlikely event). The Age field is also decremented by each router during the initial flooding process, to make sure no packet can get lost and live for an indefinite period of time (a packet with age zero is discarded).
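The resulting accept/reject decision can be sketched as follows. The record layout and names are illustrative; real protocols such as OSPF encode this logic somewhat differently.

#include <stdbool.h>

struct lsp_record {                     /* per source router */
    int seq;                            /* highest sequence number seen */
    int age;                            /* remaining lifetime, in seconds */
};

/* Decide the fate of an arriving link state packet (seq, age) from the
 * router described by *rec. Returns true if it is new information that
 * must be stored and flooded; false if it should be discarded. */
bool accept_lsp(struct lsp_record *rec, int seq, int age)
{
    if (age == 0)
        return false;                   /* expired in transit: discard */
    if (rec->age == 0 || seq > rec->seq) {
        rec->seq = seq;                 /* newer data (or none stored) */
        rec->age = age;
        return true;
    }
    return false;                       /* duplicate or obsolete */
}

/* Called once a second for each stored record; when the age reaches zero
 * the stored information is treated as discarded, so a rebooted router
 * that restarts its sequence numbers at 0 is eventually accepted again. */
void age_tick(struct lsp_record *rec)
{
    if (rec->age > 0)
        rec->age--;
}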
Some refinements to this algorithm make it more robust. When a link state packet comes in to a router for flooding, it is not queued for transmission immediately. Instead, it is put in a holding area to wait a short while in case more links are coming up or going down. If another link state packet from the same source comes in before the first packet is transmitted, their sequence numbers are compared. If they are equal, the duplicate is discarded. If they are different, the older one is thrown out. To guard against errors on the links, all link state packets are acknowledged.
The data structure used by router B for the network shown in Fig. 5-12(a) is depicted in Fig. 5-13. Each row here corresponds to a recently arrived, but as yet not fully processed, link state packet. The table records where the packet originated, its sequence number and age, and the data. In addition, there are send and acknowledgement flags for each of B's three links (to A, C, and F, respectively). The send flags mean that the packet must be sent on the indicated link. The acknowledgement flags mean that it must be acknowledged there.
In Fig. 5-13, the link state packet from A arrives directly, so it must be sent to C and F and acknowledged to A, as indicated by the flag bits. Similarly, the packet from F has to be forwarded to A and C and acknowledged to F.
                        Send flags    ACK flags
   Source   Seq.  Age   A   C   F     A   C   F    Data
   A        21    60    0   1   1     1   0   0
   F        21    60    1   1   0     0   0   1
   E        21    59    0   1   0     1   0   1
   C        20    60    1   0   1     0   1   0
   D        21    59    1   0   0     0   1   1

Figure 5-13. The packet buffer for router B in Fig. 5-12(a).
However, the situation with the third packet, from E, is different. It arrives twice, once via EAB and once via EFB. Consequently, it has to be sent only to C but must be acknowledged to both A and F, as indicated by the bits.
If a duplicate arrives while the original is still in the buffer, bits have to be changed. For example, if a copy of C’s state arrives from F before the fourth entry in the table has been forwarded, the six bits will be changed to 100011 to indicate that the packet must be acknowledged to F but not sent there.
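These per-link send and acknowledgement flags map naturally onto a pair of bitmasks, one bit per link. The sketch below shows only the bookkeeping step for an arriving duplicate; the structure names are illustrative.

#include <stdint.h>

#define LINK_A (1u << 0)                /* B's three links, as in Fig. 5-13 */
#define LINK_C (1u << 1)
#define LINK_F (1u << 2)

struct buffered_lsp {
    char     source;                    /* originating router */
    int      seq, age;
    uint32_t send_flags;                /* links the packet must still go out on */
    uint32_t ack_flags;                 /* links it must be acknowledged on */
};

/* A duplicate of *p just arrived on 'link': we no longer need to send the
 * packet there, but we now owe that neighbor an acknowledgement. This is
 * exactly the 101 010 -> 100 011 flag change described in the text. */
void duplicate_arrived(struct buffered_lsp *p, uint32_t link)
{
    p->send_flags &= ~link;
    p->ack_flags  |=  link;
}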
Computing the New Routes
Once a router has accumulated a full set of link state packets, it can construct the entire network graph because every link is represented. Every link is, in fact, represented twice, once for each direction. The different directions may even have different costs. The shortest-path computations may then find different paths from router A to B than from router B to A.
Now Dijkstra’s algorithm can be run locally to construct the shortest paths to all possible destinations. The results of this algorithm tell the router which link to use to reach each destination. This information is installed in the routing tables, and normal operation is resumed.
Compared to distance vector routing, link state routing requires more memory and computation. For a network with n routers, each of which has k neighbors, the memory required to store the input data is proportional to kn, which is at least as large as a routing table listing all the destinations. Also, the computation time grows faster than kn, even with the most efficient data structures, an issue in large networks. Nevertheless, in many practical situations, link state routing works well because it does not suffer from slow convergence problems.
Link state routing is widely used in actual networks, so a few words about some example protocols are in order. Many ISPs use the IS-IS (Intermediate System-to-Intermediate System) link state protocol (Oran, 1990). It was designed
for an early network called DECnet, later adopted by ISO for use with the OSI protocols and then modified to handle other protocols as well, most notably, IP. OSPF (Open Shortest Path First), which will be discussed in Sec. 5.7.6, is the other main link state protocol. It was designed by IETF several years after IS-IS and adopted many of the innovations designed for IS-IS. These innovations include a self-stabilizing method of flooding link state updates, the concept of a designated router on a LAN, and the method of computing and supporting path splitting and multiple metrics. As a consequence, there is very little difference between IS-IS and OSPF. The most important difference is that IS-IS can carry information about multiple network layer protocols at the same time (e.g., IP, IPX, and AppleTalk). OSPF does not have this feature, which gives IS-IS an advantage in large multiprotocol environments.
A general comment on routing algorithms is also in order. Link state, distance vector, and other algorithms rely on processing at all the routers to compute routes. Problems with the hardware or software at even a small number of routers can wreak havoc across the network. For example, if a router claims to have a link it does not have or forgets a link it does have, the network graph will be incorrect. If a router fails to forward packets or corrupts them while forwarding them, the route will not work as expected. Finally, if it runs out of memory or does the routing calculation wrong, bad things will happen. As the network grows into the range of tens or hundreds of thousands of nodes, the probability of some router failing occasionally becomes nonnegligible. The trick is to arrange to limit the damage when the inevitable happens. Perlman (1988) discusses these problems and their possible solutions in detail.
5.2.6 Hierarchical Routing within a Network
As networks grow in size, the router routing tables grow proportionally. Not only is router memory consumed by ever-increasing tables, but more CPU time is needed to scan them and more bandwidth is needed to send status reports about them. Additionally, even if every router could store the entire topology, recomputing shortest paths every time the network experienced changes in the topology would be prohibitive; imagine, for example, if a very large network had to recompute shortest paths every time a link in the network failed or recovered. At a certain point, the network may grow to a size where it is no longer feasible for every router to have an entry for every other router, so the routing will have to be done hierarchically, through the use of routing areas.
When hierarchical routing is used, the routers are divided into what we will call regions or areas. Each router knows all the details about how to route packets to destinations within its own region but knows nothing about the internal structure of other regions. When different networks are interconnected, it is natural to regard each one as a separate region to free the routers in one network from having to know the topological structure of the other ones.
For huge networks, a two-level hierarchy may be insufficient; it may be necessary to group the regions into clusters, the clusters into zones, the zones into groups, and so on, until we run out of names for units of aggregation. As an example of a simple multilevel hierarchy, consider how a packet might be routed from Berkeley, California, to Malindi, Kenya. The Berkeley router would know the detailed topology within California but would send all out-of-state traffic to the Los Angeles router. The Los Angeles router would be able to route traffic directly to other domestic routers but would send all foreign traffic to New York. The New York router would be programmed to direct all traffic to the router in the destination country responsible for handling foreign traffic, say, in Nairobi. Finally, the packet would work its way down the tree in Kenya until it got to Malindi.
Figure 5-14 gives a quantitative example of routing in a two-level hierarchy with five regions. The full routing table for router 1A has 17 entries, as shown in Fig. 5-14(b). When routing is done hierarchically, as in Fig. 5-14(c), there are entries for all the local routers, as before, but all other regions are condensed into a single router, so all traffic for region 2 goes via the 1B-2A line, but the rest of the remote traffic goes via the 1C-3B line. Hierarchical routing has reduced the table from 17 to 7 entries. As the ratio of the number of regions to the number of routers per region grows, the savings in table space increase.
[Figure 5-14(a) shows a network of five regions: region 1 with routers 1A-1C, region 2 with 2A-2D, region 3 with 3A-3B, region 4 with 4A-4C, and region 5 with 5A-5E.]

Full table for 1A            Hierarchical table for 1A
Dest.  Line  Hops            Dest.  Line  Hops
1A     --    --              1A     --    --
1B     1B    1               1B     1B    1
1C     1C    1               1C     1C    1
2A     1B    2               2      1B    2
2B     1B    3               3      1C    2
2C     1B    3               4      1C    3
2D     1B    4               5      1C    4
3A     1C    3
3B     1C    2
4A     1C    3
4B     1C    4
4C     1C    4
5A     1C    4
5B     1C    5
5C     1B    5
5D     1C    6
5E     1C    5

Figure 5-14. Hierarchical routing.
Unfortunately, these gains in space are not free. There is a penalty to be paid: increased path length. For example, the best route from 1A to 5C is via region 2,
but with hierarchical routing all traffic to region 5 goes via region 3, because that is better for most destinations in region 5.
When a single network becomes very large, an interesting question is ‘‘How many levels should the hierarchy have?’’ For example, consider a network with 720 routers. If there is no hierarchy, each router needs 720 routing table entries. If the network is partitioned into 24 regions of 30 routers each, each router needs 30 local entries plus 23 remote entries for a total of 53 entries. If a three-level hierarchy is chosen, with 8 clusters each containing 9 regions of 10 routers, each router needs 10 entries for local routers, 8 entries for routing to other regions within its own cluster, and 7 entries for distant clusters, for a total of 25 entries. Kamoun and Kleinrock (1979) discovered that the optimal number of levels for an N router network is ln N, requiring a total of e ln N entries per router. They have also shown that the increase in effective mean path length caused by hierarchical routing is sufficiently small that it is usually acceptable.
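The arithmetic is easy to check. A short sketch (the helper function is hypothetical) reproduces the numbers above and the Kamoun-Kleinrock optimum for N = 720:

```python
import math

def table_entries(levels):
    """Routing table size for a hierarchy with the given fan-out at each level.
    levels = [routers per region, regions per cluster, clusters, ...]"""
    return sum(n - 1 for n in levels) + 1   # peers at each level, plus self

print(table_entries([720]))       # 720 (flat: one entry per router)
print(table_entries([30, 24]))    # 53  (29 local peers + 23 remote regions + self)
print(table_entries([10, 9, 8]))  # 25  (9 + 8 + 7 + self)

N = 720
print(math.log(N))                # ~6.6 optimal number of levels
print(math.e * math.log(N))       # ~17.9 entries per router at the optimum
```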
5.2.7 Broadcast Routing
In some applications, hosts need to send messages to many or all other hosts. For example, a service distributing weather reports, stock market updates, or live radio programs might work best by sending to all machines and letting those that are interested read the data. Sending a packet to all destinations simultaneously is called broadcasting. Various methods have been proposed for doing it.
One broadcasting method that requires no special features from the network is for the source to simply send a distinct packet to each destination. Not only is the method wasteful of bandwidth and slow, but it also requires the source to have a complete list of all destinations. This method is not desirable in practice, even though it is widely applicable.
An improvement is multidestination routing, in which each packet contains either a list of destinations or a bit map indicating the desired destinations. When a packet arrives at a router, the router checks all the destinations to determine the set of output lines that will be needed. (An output line is needed if it is the best route to at least one of the destinations.) The router generates a new copy of the packet for each output line to be used and includes in each packet only those destinations that are to use the line. In effect, the destination set is partitioned among the output lines. After a sufficient number of hops, each packet will carry only one destination like a normal packet. Multidestination routing is like using separately addressed packets, except that when several packets must follow the same route, one of them pays full fare and the rest ride free. The network bandwidth is therefore used more efficiently. However, this scheme still requires the source to know all the destinations, plus it is as much work for a router to determine where to send one multidestination packet as it is for multiple distinct packets.
We have already seen a better broadcast routing technique: flooding. When implemented with a sequence number per source, flooding uses links efficiently
with a decision rule at routers that is relatively simple. Although flooding is ill-suited for ordinary point-to-point communication, it rates serious consideration for broadcasting. However, it turns out that we can do better still once the shortest path routes for regular packets have been computed.
The idea for reverse path forwarding is elegant and remarkably simple once it has been pointed out (Dalal and Metcalfe, 1978). When a broadcast packet arrives at a router, the router checks to see if the packet arrived on the link that is normally used for sending packets toward the source of the broadcast. If so, there is an excellent chance that the broadcast packet itself followed the best route from the router and is therefore the first copy to arrive at the router. This being the case, the router forwards copies of it onto all links except the one it arrived on. If, however, the broadcast packet arrived on a link other than the preferred one for reaching the source, the packet is discarded as a likely duplicate.
Figure 5-15. Reverse path forwarding. (a) A network. (b) Sink tree for router I. (c) The tree built by reverse path forwarding from I.
An example of reverse path forwarding is shown in Fig. 5-15. Part (a) shows a network, part (b) shows a sink tree for router I of that network, and part (c) shows how the reverse path algorithm works. On the first hop, I sends packets to F, H, J, and N, as indicated by the second row of the tree. Each of these packets arrives on the preferred path to I (assuming that the preferred path falls along the sink tree) and is so indicated by a circle around the letter. On the second hop, eight packets are generated, two by each of the routers that received a packet on the first hop. As it turns out, all eight of these arrive at previously unvisited routers, and five of these arrive along the preferred line. Of the six packets generated on the third hop, only three arrive on the preferred path (at C, E, and K); the others are duplicates. After five hops and 24 packets, the broadcasting terminates, compared with four hops and 14 packets had the sink tree been followed exactly.
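The per-packet decision is just a comparison against the unicast routing table. A minimal sketch, with assumed names for the table and links:

```python
def rpf_forward(source, arriving_link, unicast_table, links):
    """Reverse path forwarding: flood a broadcast packet only if it arrived
    on the link this router would itself use to send unicast to the source."""
    preferred = unicast_table[source]      # link for regular packets toward source
    if arriving_link != preferred:
        return []                          # likely a duplicate: discard
    return [link for link in links if link != arriving_link]  # copy onto all others

# A router with links A, B, and C that reaches source S via link A:
links, table = ["A", "B", "C"], {"S": "A"}
print(rpf_forward("S", "A", table, links))  # ['B', 'C'] -- first copy, forward it
print(rpf_forward("S", "B", table, links))  # []         -- off the best path, drop
```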
The principal advantage of reverse path forwarding is that it is efficient while being easy to implement. It sends the broadcast packet over each link only once in each direction, just as in flooding, yet it requires only that routers know how to
reach all destinations, without needing to remember sequence numbers (or use other mechanisms to stop the flood) or list all destinations in the packet.

Our last broadcast algorithm improves on the behavior of reverse path forwarding. It makes explicit use of the sink tree—or any other convenient spanning tree for that matter—for the router initiating the broadcast. A spanning tree is a subset of the network that includes all the routers but contains no loops. Sink trees are spanning trees. If each router knows which of its lines belong to the spanning tree, it can copy an incoming broadcast packet onto all the spanning tree lines except the one it arrived on. This method makes excellent use of bandwidth, generating the absolute minimum number of packets necessary to do the job. In Fig. 5-15, for example, when the sink tree of part (b) is used as the spanning tree, the broadcast packet is sent with the minimum 14 packets. The only problem is that each router must have knowledge of some spanning tree for the method to be applicable. Sometimes this information is available (e.g., with link state routing, all routers know the complete topology, so they can compute a spanning tree) but sometimes it is not (e.g., with distance vector routing).
5.2.8 Multicast Routing
Some applications, such as a multiplayer game or live video of a sports event streamed to many viewing locations, send packets to multiple receivers. Unless the group is very small, sending a distinct packet to each receiver is expensive. On the other hand, broadcasting a packet is wasteful if the group consists of, say, 1000 machines on a million-node network, so that most receivers are not interested in the message (or worse yet, they are definitely interested but are not supposed to see it, for example, because it is part of a pay-per-view sports event). Thus, we need a way to send messages to well-defined groups that are numerically large in size but small compared to the network as a whole.
Sending a message to such a group is called multicasting, and the routing algorithm used is called multicast routing. All multicasting schemes require some way to create and destroy groups and to identify which routers are members of a group. How these tasks are accomplished is not of concern to the routing algorithm. For now, we will assume that each group is identified by a multicast address and that routers know the groups to which they belong. We will revisit group membership when we describe Internet multicasting in Sec. 5.7.8.
Multicast routing schemes build on the broadcast routing schemes we have already studied, sending packets along spanning trees to deliver the packets to the members of the group while making efficient use of bandwidth. However, the best spanning tree to use depends on whether the group is dense, with receivers scattered over most of the network, or sparse, with much of the network not belonging to the group. In this section we will consider both cases.
If the group is dense, broadcast is a good start because it efficiently gets the packet to all parts of the network. But broadcast will reach some routers that are
not members of the group, which is wasteful. The solution explored by Deering and Cheriton (1990) is to prune the broadcast spanning tree by removing links that do not lead to members. The result is an efficient multicast spanning tree.
As an example, consider the two groups, 1 and 2, in the network shown in Fig. 5-16(a). Some routers are attached to hosts that belong to none, one, or both of these groups, as indicated in the figure. A spanning tree for the leftmost router is shown in Fig. 5-16(b). This tree can be used for broadcast but is overkill for multicast, as can be seen from the two pruned versions that are shown next. In Fig. 5-16(c), all the links that do not lead to hosts that are members of group 1 have been removed. The result is the multicast spanning tree for the leftmost router to send to group 1. Packets are forwarded only along this spanning tree, which is more efficient than the broadcast tree because there are 7 links instead of 10. Fig. 5-16(d) shows the multicast spanning tree after pruning for group 2. It is efficient too, with only five links this time. It also shows that different multicast groups have different spanning trees.
Figure 5-16. (a) A network. (b) A spanning tree for the leftmost router. (c) A multicast tree for group 1. (d) A multicast tree for group 2.
Various ways of pruning the spanning tree are possible. The simplest one can be used if link state routing is used and each router is aware of the complete topology, including which hosts belong to which groups. Each router can then construct its own pruned spanning tree for each sender to the group in question by constructing a sink tree for the sender as usual and then removing all links that do not connect group members to the sink node. MOSPF (Multicast OSPF) is an example of a link state protocol that works in this way (Moy, 1994).
With distance vector routing, a different pruning strategy can be followed. The basic algorithm is reverse path forwarding. However, whenever a router with no hosts interested in a particular group and no connections to other routers receives a multicast message for that group, it responds with a PRUNE message, telling the neighbor that sent the message not to send it any more multicasts from the sender for that group. When a router with no group members among its own hosts has received such messages on all the lines to which it sends the multicast, it, too, can respond with a PRUNE message. In this way, the spanning tree is recursively pruned. DVMRP (Distance Vector Multicast Routing Protocol) is an example of a multicast routing protocol that works this way (Waitzman et al., 1988).
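The per-router rule can be sketched in a few lines; the message handling around it (sending and recording PRUNEs) is assumed, and the names are illustrative:

```python
def forward_or_prune(group, downstream_links, has_local_members, pruned):
    """DVMRP-style pruning sketch. pruned is the set of (link, group) pairs
    for which a PRUNE message has already been received."""
    live = [l for l in downstream_links if (l, group) not in pruned]
    if live or has_local_members:
        return ("FORWARD", live)   # someone below us is still interested
    return ("PRUNE", [])           # nobody left: ask upstream to stop sending
```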
Pruning results in efficient spanning trees that use only the links that are actually needed to reach members of the group and no others. One potential disadvantage is that it is lots of work for routers, especially for very big networks. Suppose that a network has n groups, each with an average of m nodes. For each group, m pruned spanning trees must be stored (one per sending member), for a total of mn trees. For example, Fig. 5-16(c) gives the spanning tree for the leftmost router to send to group 1. The spanning tree for the rightmost router to send to group 1 (not shown in the figure) will look quite different, as packets will head directly for group members rather than via the left side of the graph. This in turn means that routers must forward packets destined to group 1 in different directions depending on which node is sending to the group. When many large groups with many senders exist, considerable storage is needed to store all the trees.
An alternative design uses core-based trees to compute a single spanning tree for the group (Ballardie et al., 1993). All of the routers agree on a root (called the core or rendezvous point) and build the tree by sending a packet from each member to the root. The tree is the union of the paths traced by these packets. Fig. 5-17(a) shows a core-based tree for group 1. To send to this group, a sender sends a packet to the core. When the packet reaches the core, it is forwarded down the tree. This is shown in Fig. 5-17(b) for the sender on the righthand side of the network. As a performance optimization, packets destined for the group do not need to reach the core before they are multicast. As soon as a packet reaches the tree, it can be forwarded up toward the root, as well as down all the other branches. This is the case for the sender at the top of Fig. 5-17(b).
Having a shared tree is not optimal for all sources. For example, in Fig. 5-17(b), the packet from the sender on the righthand side reaches the top-right group member via the core in three hops, instead of directly. The inefficiency depends on where the core and senders are located, but often it is reasonable when the core is in the middle of the senders. When there is only a single sender, as in a video that is streamed to a group, using the sender as the core is optimal.
Figure 5-17. (a) Core-based tree for group 1. (b) Sending to group 1.
Also of note is that shared trees can be a major savings in storage costs, messages sent, and computation. Each router has to keep only one tree per group, instead of m trees. Further, routers that are not part of the tree do no work at all to support the group. For this reason, shared tree approaches like core-based trees are used for multicasting to sparse groups in the Internet as part of popular protocols such as PIM (Protocol Independent Multicast) (Fenner et al., 2006).
5.2.9 Anycast Routing
So far, we have covered delivery models in which a source sends to a single destination (called unicast), to all destinations (called broadcast), and to a group of destinations (called multicast). Another delivery model, called anycast, is sometimes also useful. In anycast, a packet is delivered to the nearest member of a group (Partridge et al., 1993). Schemes that find these paths are called anycast routing.
Why would we want anycast? Sometimes nodes provide a service, such as time of day or content distribution, for which getting the right information is what matters, not which particular node is contacted; any node will do. For example, anycast is used in the Internet as part of DNS, as we will see in Chap. 7.
Fortunately, regular distance vector and link state routing can produce anycast routes, so we do not need to devise a new routing scheme for anycast. Suppose we want to anycast to the members of group 1. They will all be given the address ‘‘1,’’ instead of different addresses. Distance vector routing will distribute vectors as usual, and nodes will choose the shortest path to destination 1. This will result in nodes sending to the nearest instance of destination 1. The routes are shown in Fig. 5-18(a). This procedure works because the routing protocol does not realize that there are multiple instances of destination 1. That is, it believes that all the instances of node 1 are the same node, as in the topology shown in Fig. 5-18(b).
Figure 5-18. (a) Anycast routes to group 1. (b) Topology seen by the routing protocol.
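A toy distance vector computation shows why this works: if the group members all advertise the same address at distance 0, ordinary relaxation leaves every router pointing at its nearest member. The encoding below is an assumption made for the example:

```python
def anycast_routes(graph, members, rounds=10):
    """Synchronous distance vector sketch: members advertise address '1' at cost 0."""
    dist = {n: (0 if n in members else float("inf")) for n in graph}
    nxt = {n: (n if n in members else None) for n in graph}
    for _ in range(rounds):                   # repeat until stable
        for u in graph:
            for v, cost in graph[u]:
                if dist[v] + cost < dist[u]:  # shorter path to *some* instance
                    dist[u], nxt[u] = dist[v] + cost, v
    return dist, nxt

# A chain A-B-C-D with group members at both ends:
graph = {"A": [("B", 1)], "B": [("A", 1), ("C", 1)],
         "C": [("B", 1), ("D", 1)], "D": [("C", 1)]}
print(anycast_routes(graph, members={"A", "D"})[0])  # B and C are each 1 hop away
```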
This procedure works for link state routing as well, although there is the added consideration that the routing protocol must not find seemingly short paths that pass through node 1. This would result in jumps through hyperspace, since the instances of node 1 are really nodes located in different parts of the network. However, link state protocols already make this distinction between routers and hosts. We glossed over this fact earlier because it was not needed for our discussion.
5.3 TRAFFIC MANAGEMENT AT THE NETWORK LAYER
Too many packets in any part of the network can ultimately introduce packet delay and loss that degrades performance. This situation is called congestion.
5.3.1 The Need for Traffic Management: Congestion
The network and transport layers share the responsibility for managing congestion. Because congestion occurs within the network, it is the network layer that directly experiences it and must ultimately determine what to do with the excess packets. The most effective way to control congestion is to reduce the load that the transport layer is placing on the network. This requires the network and transport layers to work together. The network layer does not automatically mitigate congestion, but network operators can configure routers, switches, and other devices at the network layer to mitigate the effects of congestion, typically by taking actions that encourage a sender to reduce its sending rate, or by sending traffic along different, less-congested paths through the network. In this chapter we will look at the aspects of congestion that concern the network layer, and mechanisms that the network layer uses to control and manage congestion. To avoid confusion with the more common use of the phrase ‘‘congestion control,’’ which is frequently used by some authors to describe functions of the transport layer, in this chapter we
will discuss practices to manage congestion at the network layer as congestion management or traffic management. In Chap. 6, we will finish the topic by covering the mechanisms that the transport layer uses to control congestion.
Figure 5-19 shows the onset of congestion. When the number of packets that hosts send into the network is well within the network’s capacity, the amount of traffic that is delivered is proportional to the amount of traffic that is sent: If twice as much traffic is sent, twice as much is delivered. However, as the offered load approaches the carrying capacity, bursts of traffic occasionally fill up the buffers inside routers and some packets are lost. These lost packets consume some of the capacity, so the number of delivered packets falls below the ideal curve. At this point, the network is experiencing congestion.
[Figure 5-19 plots goodput (packets/sec) against offered load (packets/sec). The ideal curve climbs linearly up to the capacity of the network; a desirable response tracks it closely but bends away at the onset of congestion; beyond that point, goodput plummets as the load keeps growing (congestion collapse).]
Figure 5-19. Performance drops significantly in the presence of congestion: packet loss rates increase, and latency also increases as router queues fill with packets.
At some point, the network may experience a congestion collapse, where performance plummets as the offered load increases beyond the capacity. In short, congestion collapse occurs when increasing load on the network actually results in less traffic being successfully delivered. This situation can occur if packets are sufficiently delayed inside the network that they are no longer useful when they leave the network. For example, in the early Internet, the time a packet spent waiting for a backlog of packets ahead of it to be sent over a slow 56-kbps link could reach the maximum time it was allowed to remain in the network. It then had to be thrown away. A different failure mode occurs when senders retransmit packets that are greatly delayed, thinking that they have been lost. In this case, copies of the same packet will be delivered by the network, again wasting its capacity. To capture these factors, the y-axis of Fig. 5-19 is given as goodput, which is the rate at which useful packets are delivered by the network.
We would like to design networks that avoid congestion where possible and do not suffer from congestion collapse if they somehow do become congested. Unfortunately, in a packet-switched network, congestion cannot wholly be avoided. If
all of a sudden, streams of packets begin arriving on three or four input lines and all need the same output line, a queue will build up. If there is insufficient memory to hold all of them, packets will be lost. Adding more memory may help up to a
point, but Nagle (1987) realized that if routers have an infinite amount of memory, congestion frequently gets worse, not better. More recently, researchers discovered that many network devices tend to have more memory than they need, a concept that became known as bufferbloat. Network devices that have too much memory can degrade network performance for a variety of reasons. First, by the time packets get to the front of the queue, they have already timed out (repeatedly) and duplicates have been sent. Second, as we will discuss in Chap. 6, senders need timely information about network congestion, and if packets are stored in router buffers, rather than dropped, then senders will continue to send traffic that congests the network. All of this makes matters worse, not better—it leads to congestion collapse.
Low-bandwidth links or routers that process packets more slowly than the ca- pacity of a network link can also become congested. In cases where the network has additional capacity in other parts of the network, congestion can be mitigated by directing some of the traffic away from the bottleneck to other (less congested) parts of the network. Ultimately, however, increasing traffic demands may result in congestion being pervasive throughout the network. When this occurs, there are two approaches that operators can take: shedding load (i.e., dropping traffic), or provisioning additional capacity.
It is worth pointing out the difference between congestion control, traffic management, and flow control, as the relationship is a subtle one. Traffic management (sometimes also called traffic engineering) has to do with making sure the network is able to carry the offered traffic; it can be performed by devices in the network, or by the senders of traffic (often through mechanisms in the transport protocol, which are often referred to as congestion control). Congestion management and control concern the behavior of all the hosts and routers. Flow control, in contrast, relates to the traffic between a particular sender and a particular receiver and is generally concerned with making sure that the sender is not transmitting data faster than the receiver can process it. Its job is to make sure no data is lost because the sender is more powerful than the receiver and can send data faster than the receiver can absorb it.
To see the difference between these two concepts, consider a network made up of 100-Gbps fiber optic links on which a supercomputer is trying to force-feed a large file to a personal computer that is capable of handling only 1 Gbps. Although there is no congestion (the network itself is not in trouble), flow control is needed to force the supercomputer to stop frequently to give the personal computer a chance to breathe.
At the other extreme, consider a network with 1-Mbps lines and 1000 large computers, half of which are trying to transfer files at 100 kbps to the other half. Here, the problem is not that of fast senders overpowering slow receivers, but that the total offered traffic exceeds what the network can handle.
The reason congestion control and flow control are often confused is that the best way to handle both problems is to get the host to slow down. Thus, a host can get a ‘‘slow-down’’ message either because the receiver cannot handle the load or because the network cannot handle it. We will come back to this point in Chap. 6.
We will start our study of congestion management by looking at the ap- proaches that network operators can apply at different time scales. Then we will look at approaches that can prevent congestion from occurring in the first place, followed by approaches for coping with it once it has set in.
5.3.2 Approaches to Traffic Management
The presence of congestion means that the load is (temporarily) greater than the resources (in a part of the network) can handle. There are two approaches to dealing with it: increase the resources or decrease the load. As shown in Fig. 5-20, these solutions are usually applied on different time scales to either prevent con- gestion or react to it once it has occurred.
[Figure 5-20 arranges five approaches along a timescale, from slower (preventative) to faster (reactive): network provisioning, traffic-aware routing, admission control, traffic throttling, and load shedding.]
Figure 5-20. Timescales of approaches to traffic and congestion management.
The most straightforward way to avoid congestion is to build a network that is provisioned for the traffic load that it must carry. If there is a low-bandwidth link on the path along which most traffic is directed, congestion is likely. Sometimes resources can be added dynamically when there is serious congestion, for example, turning on spare routers or enabling lines that are normally used only as backups (to make the system fault tolerant) or purchasing bandwidth on the open market. More often, links and routers that are regularly heavily utilized are upgraded at the earliest opportunity. This is called provisioning and happens on a time scale of months, driven by long-term traffic trends.
To make the most of the existing network capacity, routes can be tailored to traffic patterns that change during the day as network users wake and sleep in different time zones. For example, routes may be changed to shift traffic away from heavily used paths by changing the shortest path weights. Some local radio stations have helicopters flying around their cities to report on road congestion to make it possible for their mobile listeners to route their packets (cars) around hotspots. This is called traffic-aware routing. Splitting traffic across multiple paths can also be helpful.
However, sometimes it is not possible to increase capacity, especially on short time scales. The only way then to beat back the congestion is to decrease the load.
In a virtual-circuit network, new connections can be refused if they would cause the network to become congested. This is one example of admission control, a concept that simply denies senders the ability to send traffic if the network capacity cannot support it.
When congestion is imminent, the network can deliver feedback to the sources whose traffic flows are responsible for the problem. The network can request these sources to slow down the sending rates, or it can simply slow down the traffic itself, a process sometimes referred to as throttling. Two difficulties with this approach are how to identify the onset of congestion, and how to inform the source that needs to slow down. To tackle the first issue, routers can monitor the average load, queueing delay, or packet loss and send feedback to senders, either explicitly or implicitly (e.g., by dropping packets) to tell them to slow down.
In the case where feedback is explicit, routers must participate in a feedback loop with the sources. For a scheme to work correctly, the time scale must be adjusted carefully. If every time two packets arrive in a row, a router yells STOP and every time a router is idle for 20 µsec, it yells GO, the system will oscillate wildly and never converge. On the other hand, if it waits 30 minutes to make sure before saying anything, the congestion-control mechanism will react too sluggishly to be of any use. Delivering timely feedback is a nontrivial matter. An added concern is having routers send more messages when the network is already congested.
Another approach is for the network to discard packets that it cannot deliver. The general name for this approach is load shedding, and there are various ways to achieve it, including traffic shaping (restricting the transmission rate for a particular sender) and traffic policing (dropping traffic from a particular sender if it exceeds some rate). A good policy for choosing which packets to discard can help to prevent congestion collapse. We will discuss all of these topics below.
Traffic-Aware Routing
The first approach we will examine is traffic-aware routing. The routing approaches we looked at in Sec. 5.2 used fixed link weights that adapted to changes in topology, but not to changes in traffic load. The goal in taking load into account when computing routes is to shift traffic away from hotspots that will be the first places in the network to experience congestion.
The most direct way to do this is to set the link weight to be a function of the (fixed) link bandwidth and propagation delay plus the (variable) measured load or average queueing delay. Least-weight paths will then favor paths that are more lightly loaded, all else being equal.
Traffic-aware routing was used in the early Internet according to this model (Khanna and Zinky, 1989). However, there is a peril. Consider the network of Fig. 5-21, which is divided into two parts, East and West, connected by two links, CF and EI. Suppose that most of the East-West traffic is using link CF, resulting
in this link being heavily loaded with long delays. Including queueing delay in the weight used for the shortest path calculation will make EI more attractive. After the new routing tables have been installed, most of the East-West traffic will now go over EI, loading this link. Consequently, in the next update, CF will appear to be the shortest path. As a result, the routing tables may oscillate wildly, leading to erratic routing and many potential problems.
Figure 5-21. A network in which the East and West parts are connected by two links.
If load is ignored and only bandwidth and propagation delay are considered, this problem does not occur. Attempts to include load but change weights within a narrow range only slow down routing oscillations. Two techniques can contribute to a successful solution. The first is multipath routing, in which there can be multiple paths from a source to a destination. In our example this means that the traffic can be spread across both of the East to West links. The second one is for the routing scheme to shift traffic across routes slowly enough that it is able to converge, as in the scheme of Gallager (1977).
Given these difficulties, routing protocols in the Internet do not generally adjust their routes depending on the load. Instead, network operators make adjustments to routing protocols on slower time scales by slowly changing the routing configuration and parameters, a process sometimes called traffic engineering. Traffic engineering has long been a painstaking, manual process, akin to a black art. Some work has attempted to formalize this process, but Internet traffic loads are unpredictable enough, and the protocol configuration parameters are coarse and clunky enough, that the process has remained fairly primitive. More recently, however, the advent of software-defined networking has made it possible to automate some of these tasks, and the increasing use of certain technologies such as MPLS tunnels across the network has provided operators with more flexibility for a wide range of traffic engineering tasks.
Admission Control
One technique that is widely used in virtual-circuit networks to keep congestion at bay is admission control. The idea is simple: do not set up a new virtual circuit unless the network can carry the added traffic without becoming congested.
Thus, attempts to set up a virtual circuit may fail. This approach is better than the alternative, as letting more people in when the network is busy just makes matters worse. By analogy, in the telephone system, when a switch gets overloaded, it practices admission control by not giving dial tones.
The trick with this approach is working out when a new virtual circuit will lead to congestion. The task is straightforward in the telephone network because of the fixed bandwidth of calls (64 kbps for uncompressed audio). However, virtual circuits in computer networks come in all shapes and sizes. Thus, the circuit must come with some characterization of its traffic if we are to apply admission control.
Traffic is often described in terms of its rate and shape. The problem of how to describe it in a simple yet meaningful way is difficult because traffic is typically bursty—the average rate is only half the story. For example, traffic that varies while browsing the Web is more difficult to handle than a streaming movie with the same long-term throughput because the bursts of Web traffic are more likely to congest routers in the network. A commonly used descriptor that captures this effect is the leaky bucket or token bucket. A leaky bucket has two parameters that bound the average rate and the instantaneous burst size of traffic. Because these are two common mechanisms for performing traffic shaping, we will cover them in more detail when we discuss traffic shaping below.
Given traffic descriptions, the network can decide whether to admit the new virtual circuit. One possibility is for the network to reserve enough capacity along the paths of each of its virtual circuits that congestion will not occur. In this case, the traffic description is a service agreement for what the network will guarantee its users. We have prevented congestion but veered into the related topic of quality of service a little too early; we will return to it shortly.
Even without making guarantees, the network can use traffic descriptions for admission control. The task is then to estimate how many circuits will fit within the carrying capacity of the network without congestion. Suppose that virtual circuits that may blast traffic at rates up to 10 Mbps all pass through the same 100-Mbps physical link. How many circuits should be admitted? Clearly, 10 circuits can be admitted without risking congestion, but this is wasteful in the normal case since it may rarely happen that all 10 are transmitting full blast at the same time. In real networks, measurements of past behavior that capture the statistics of transmissions can be used to estimate the number of circuits to admit, to trade better performance for acceptable risk.
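As a back-of-the-envelope illustration (this model is our assumption, not from the text): if each admitted circuit bursts at its 10-Mbps peak independently with some probability p, the binomial tail gives the chance that the 100-Mbps link is oversubscribed:

```python
from math import comb

def overload_probability(n, p, peak=10, capacity=100):
    """P(more than capacity/peak of the n circuits burst at the same time)."""
    limit = capacity // peak                  # how many may burst at once safely
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(limit + 1, n + 1))

for n in (10, 20, 30, 40):
    print(n, round(overload_probability(n, p=0.2), 4))
# With p = 0.2, about 30 circuits -- three times the worst-case limit of 10 --
# keep the overload probability in the low percent range.
```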
Admission control can be combined with traffic-aware routing by considering routes around traffic hotspots as part of the setup procedure. For example, consider the network of Fig. 5-22(a), in which two routers are congested, as indicated.
Figure 5-22. (a) A congested network. (b) The portion of the network that is not congested. A virtual circuit from A to B is also shown.
Suppose that a host attached to router A wants to set up a connection to a host attached to router B. Normally, this connection would pass through one of the congested routers. To avoid this situation, we can redraw the network as shown in Fig. 5-22(b), omitting the congested routers and all of their lines. The dashed line shows a possible route for the virtual circuit that avoids the congested routers. Shaikh et al. (1999) give a design for this kind of load-sensitive routing.
Load Shedding
When none of the above methods make the congestion disappear, routers can bring out the heavy artillery: load shedding. This is a fancy way of saying that when routers are being inundated by packets that they cannot handle, they just throw them away. The term comes from the world of electrical power generation, where it refers to the practice of utilities intentionally blacking out certain areas to save the entire grid from collapsing on hot summer days when the demand for electricity (to power air conditioners) greatly exceeds the supply.
The key question for a router drowning in packets is which packets to drop. The preferred choice may depend on the type of applications that use the network. For a file transfer, an old packet is worth more than a new one. This is because dropping packet 6 and keeping packets 7 through 10, for example, will only force the receiver to do more work to buffer data that it cannot yet use. In contrast, for real-time media, a new packet is worth more than an old one. This is because packets become useless if they are delayed and miss the time at which they must be played out to the user.
The former policy (old is better than new) is often called wine and the latter (new is better than old) is often called milk because most people prefer new milk over old milk and old wine over new wine.
More intelligent load shedding requires cooperation from the senders. An example is packets that carry routing information. These packets are more important than regular data packets because they establish routes; if they are lost, the network may lose connectivity. Another example is that algorithms for compressing video, like MPEG, periodically transmit an entire frame and then send subsequent frames as differences from the last full frame. In this case, dropping a packet that is part of a difference is preferable to dropping one that is part of a full frame because future packets depend on the full frame.
To implement an intelligent discard policy, applications must mark their pack- ets to indicate to the network how important they are. Then, when packets have to be discarded, routers can first drop packets from the least important class, then the next most important class, and so on.
Of course, unless there is some significant incentive to avoid marking every packet as VERY IMPORTANT—NEVER, EVER DISCARD, nobody will do it. Often accounting and money are used to discourage frivolous marking. For example, the network might let senders transmit faster than the service they purchased allows if they mark excess packets as low priority. Such a strategy is actually not a bad idea because it makes more efficient use of idle resources, allowing hosts to use them as long as nobody else is interested, but without establishing a right to them when times get tough.
Traffic Shaping
Before the network can make performance guarantees, it must know what traffic is being guaranteed. In the telephone network, this characterization is simple. For example, a voice call (in uncompressed format) needs 64 kbps and consists of one 8-bit sample every 125 µsec. However, traffic in data networks is bursty. It typically arrives at nonuniform rates as the traffic rate varies (e.g., videoconferencing with compression), users interact with applications (e.g., browsing a new Web page), and computers switch between tasks. Bursts of traffic are more difficult to handle than constant-rate traffic because they can fill buffers and cause packets to be lost.
Traffic shaping is a technique for regulating the average rate and burstiness of a flow of data that enters the network. The goal is to allow applications to transmit a wide variety of traffic that suits their needs, including some bursts, yet have a simple and useful way to describe the possible traffic patterns to the network. When a flow is set up, the user and the network (i.e., the customer and the provider) agree on a certain traffic pattern (i.e., shape) for that flow. In effect, the customer says to the provider ‘‘My transmission pattern will look like this; can you handle it?’’
Sometimes this agreement is called an SLA (Service Level Agreement), especially when it is made over aggregate flows and long periods of time, such as all of
SEC. 5.3 TRAFFIC MANAGEMENT AT THE NETWORK LAYER 399
the traffic for a given customer. As long as the customer fulfills her part of the bargain and only sends packets according to the agreed-on contract, the provider promises to deliver them all in a timely fashion.
Traffic shaping reduces congestion and thus helps the network live up to its promise. However, to make it work, there is also the issue of how the provider can tell if the customer is following the agreement and what to do if the customer is not. Packets in excess of the agreed pattern might be dropped by the network, or they might be marked as having lower priority. Monitoring a traffic flow is called traffic policing.
Shaping and policing are not so important for peer-to-peer and other transfers that will consume any and all available bandwidth, but they are of great importance for real-time data, such as audio and video connections, which have stringent quality-of-service requirements. We have already seen one way to limit the amount of data an application sends: the sliding window, which uses one parameter to limit how much data is in transit at any given time, which indirectly limits the rate. Now we will look at a more general way to characterize traffic, with the leaky bucket and token bucket algorithms. The formulations are slightly different but give an equivalent result.
Try to imagine a bucket with a small hole in the bottom, as illustrated in Fig. 5-23(b). No matter the rate at which water enters the bucket, the outflow is at a constant rate, R, when there is any water in the bucket and zero when the bucket is empty. Also, once the bucket is full to capacity B, any additional water entering it spills over the sides and is lost.
Figure 5-23. (a) Shaping packets. (b) A leaky bucket. (c) A token bucket.
This bucket can be used to shape or police packets entering the network, as shown in Fig. 5-23(a). Conceptually, each host is connected to the network by an interface containing a leaky bucket. To send a packet into the network, it must be possible to put more water into the bucket. If a packet arrives when the bucket is full, the packet must either be queued until enough water leaks out to hold it or be discarded. The former might happen at a host shaping its traffic for the network as part of the operating system. The latter might happen in hardware at a provider
network interface that is policing traffic entering the network. This technique was proposed by Turner (1986) and is called the leaky bucket algorithm.

A different but equivalent formulation is to imagine the network interface as a bucket that is being filled, as shown in Fig. 5-23(c). The tap is running at rate R and the bucket has a capacity of B, as before. Now to send a packet we must be able to take water, or tokens, as the contents are commonly called, out of the bucket (rather than putting water into the bucket). No more than a fixed number of tokens, B, can accumulate in the bucket, and if the bucket is empty, we must wait until more tokens arrive before we can send another packet. This algorithm is called the token bucket algorithm.
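A token bucket is simple enough to show in full. The sketch below shapes or polices in bytes, with the bucket starting full; the class and method names are our own, not from any standard library:

```python
import time

class TokenBucket:
    """Token bucket: rate R bytes/sec, capacity B bytes, as in Fig. 5-23(c)."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity          # the bucket starts full
        self.last = time.monotonic()

    def try_send(self, nbytes):
        """Return True if a packet of nbytes may be sent now, consuming tokens."""
        now = time.monotonic()
        # Tokens accumulate at rate R but never beyond the capacity B
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False                    # shaper: queue the packet; policer: drop or mark
```

A host shaping its own traffic would queue a packet when try_send returns False and retry later; a policer at the provider's edge would instead drop the packet or lower its priority.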
Leaky and token buckets limit the long-term rate of a flow but allow short-term bursts up to a maximum regulated length to pass through unaltered and without suffering any artificial delays. Large bursts will be smoothed by a leaky bucket traffic shaper to reduce congestion in the network. As an example, imagine that a computer can produce data at up to 1000 Mbps (125 million bytes/sec) and that the first link of the network also runs at this speed. The pattern of traffic the host generates is shown in Fig. 5-24(a). This pattern is bursty. The average rate over one second is 200 Mbps, even though the host sends a burst of 16,000 KB at the top speed of 1000 Mbps (for 1/8 of the second).
[Figure 5-24 contains six plots over a 1000-msec interval. The left column plots rate in Mbps: (a) the host sends at 125 MB/sec for 125 msec, then 25 MB/sec for 250 msec; (b) the output shaped with R = 25 MB/sec, B = 9600 KB; (c) the output shaped with R = 25 MB/sec, B = 0. The right column plots the bucket level in KB: (d) capacity 16,000 KB; (e) capacity 9600 KB, annotated ‘‘bucket empties, traffic delayed’’; (f) capacity 0, annotated ‘‘bucket always empty.’’]
Figure 5-24. (a) Traffic from a host. Output shaped by a token bucket of rate 200 Mbps and capacity (b) 9600 KB and (c) 0 KB. Token bucket level for shaping with rate 200 Mbps and capacity (d) 16,000 KB, (e) 9600 KB, and (f) 0 KB.
Now suppose that the routers can accept data at the top speed only for short intervals, until their buffers fill up. The buffer size is 9600 KB, smaller than the
traffic burst. For long intervals, the routers work best at rates not exceeding 200 Mbps (say, because this is all the bandwidth given to the customer). The implication is that if traffic is sent in this pattern, some of it will be dropped in the network because it does not fit into the buffers at routers.
To avoid this packet loss, we can shape the traffic at the host with a token bucket. If we use a rate, R, of 200 Mbps and a capacity, B, of 9600 KB, the traffic will fall within what the network can handle. The output of this token bucket is shown in Fig. 5-24(b). The host can send full throttle at 1000 Mbps for a short while until it has fully drained the bucket. Then it has to cut back to 200 Mbps until the burst has been sent. The effect is to spread out the burst over time because it was too large to handle all at once. The level of the token bucket is shown in Fig. 5-24(e). It starts off full and is depleted by the initial burst. When it reaches zero, new packets can be sent only at the rate at which the bucket is filling; there can be no more bursts until the bucket has recovered. The bucket fills when no traffic is being sent and stays flat when traffic is being sent at the fill rate.
We can also shape the traffic to be less bursty. Fig. 5-24(c) shows the output of a token bucket with R = 200 Mbps and a capacity of 0. This is the extreme case in which the traffic has been completely smoothed. No bursts are allowed, and the traffic enters the network at a steady rate. The corresponding bucket level, shown in Fig. 5-24(f), is always empty. Traffic is being queued on the host for release into the network and there is always a packet waiting to be sent when it is allowed.
Finally, Fig. 5-24(d) illustrates the bucket level for a token bucket with R = 200 Mbps and a capacity of B = 16,000 KB. This is the smallest token bucket through which the traffic passes unaltered. It might be used at a router in the network to police the traffic that the host sends. However, if the host is sending traffic that conforms to the token bucket on which it has agreed with the network, the traffic will fit through that same token bucket run at the router at the edge of the network. If the host sends at a faster or burstier rate, the token bucket will run out of water. If this happens, a traffic policer will know that the traffic is not as was described. It will then either drop the excess packets or lower their priority, depending on the design of the network. In our example, the bucket empties only momentarily, at the end of the initial burst, then recovers enough for the next burst.
Leaky and token buckets are easy to implement. We will now describe the operation of a token bucket. Even though we have described water flowing continuously into and out of the bucket, real implementations must work with discrete quantities. A token bucket is implemented with a counter for the level of the bucket. The counter is advanced by R · ΔT units at every clock tick of ΔT seconds. This would be 200 Kbit every 1 msec in our example above. Every time a unit of traffic is sent into the network, the counter is decremented, and traffic may be sent until the counter reaches zero.
When the packets are all the same size, the bucket level can just be counted in packets (e.g., 200 Kbit is 20 packets of 1250 bytes). However, often variable-sized packets are used. In this case, the bucket level can be counted in bytes. If the
residual byte count is too low to send a large packet, the packet must wait until the next tick (or even longer, if the fill rate is small).
Calculating the length of the maximum burst (until the bucket empties) is slightly tricky. It is longer than just 9600 KB divided by 125 MB/sec because while the burst is being output, more tokens arrive. If we call the burst length S sec, the maximum output rate M bytes/sec, the token bucket capacity B bytes, and the token arrival rate R bytes/sec, we can see that an output burst contains a maximum of B + RS bytes. We also know that the number of bytes in a maximum-speed burst of length S seconds is MS. Hence, we have
B + RS = MS
We can solve this equation to get S = B/(M − R). For our parameters of B = 9600 KB, M = 125 MB/sec, and R = 25 MB/sec, we get a burst time of about 94 msec.

A potential problem with the token bucket algorithm is that it reduces large bursts down to the long-term rate R. It is frequently desirable to reduce the peak rate, but without going down to the long-term rate (and also without raising the long-term rate to allow more traffic into the network). One way to get smoother traffic is to insert a second token bucket after the first one. The rate of the second bucket should be much higher than the first one. Basically, the first bucket characterizes the traffic, fixing its average rate but allowing some bursts. The second bucket reduces the peak rate at which the bursts are sent into the network. For example, if the rate of the second token bucket is set to be 500 Mbps and the capacity is set to 0, the initial burst will enter the network at a peak rate of 500 Mbps, which is lower than the 1000 Mbps rate we had previously.
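The burst-time calculation above is easy to reproduce; the ≈94-msec figure comes out if KB and MB are read as binary units (an assumption on our part):

```python
B = 9600 * 1024          # token bucket capacity, bytes
M = 125 * 1024 * 1024    # maximum output rate, bytes/sec
R = 25 * 1024 * 1024     # token arrival rate, bytes/sec

S = B / (M - R)          # burst length, from B + R*S = M*S
print(round(S * 1000, 2))  # 93.75 msec, i.e., about 94 msec
```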
Using all of these buckets can be a bit tricky. When token buckets are used for traffic shaping at hosts, packets are queued and delayed until the buckets permit them to be sent. When token buckets are used for traffic policing at routers in the network, the algorithm is simulated to make sure that no more packets are sent than permitted. Nevertheless, these tools provide ways to shape the network traffic into more manageable forms to assist in meeting quality-of-service requirements.
Active Queue Management
In the Internet and many other computer networks, senders adjust their transmissions to send as much traffic as the network can readily deliver. In this setting, the network aims to operate just before the onset of congestion. When congestion is imminent, it must tell the senders to throttle back their transmissions and slow down. This feedback is business as usual rather than an exceptional situation. The term congestion avoidance is sometimes used to contrast this operating point with the one in which the network has become (overly) congested.
Let us now look at some approaches to throttling traffic that can be used in both datagram networks and virtual-circuit networks alike. Each approach must solve two problems. First, routers must determine when congestion is approaching,
ideally before it has arrived. To do so, each router can continuously monitor the resources it is using. Three possibilities are the utilization of the output links, the buffering of queued packets inside the router, and the number of packets that are lost due to insufficient buffering. Of these possibilities, the second one is the most
useful. Averages of utilization do not directly account for the burstiness of most traffic—a utilization of 50% may be low for smooth traffic and too high for highly variable traffic. Counts of packet losses come too late. Congestion has already set in by the time that packets are lost.
The queueing delay inside routers directly captures any congestion experienced by packets. It should be low most of the time, but will jump when there is a burst of traffic that generates a backlog. To maintain a good estimate of the queueing delay, d, a sample of the instantaneous queue length, s, can be made periodically and d updated according to
d_new = α d_old + (1 − α) s
where the constant α determines how fast the router forgets recent history. This is called an EWMA (Exponentially Weighted Moving Average). It smooths out fluctuations and is equivalent to a low-pass filter. Whenever d moves above some predefined threshold, the router notes the onset of congestion.
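A minimal sketch of this estimator is shown below; the weight and threshold values are illustrative assumptions, not values from any standard.

```python
# EWMA estimate of queueing delay: d_new = alpha * d_old + (1 - alpha) * s.

ALPHA = 0.9        # close to 1: the router forgets history slowly
THRESHOLD = 0.005  # assumed trigger: 5 msec of smoothed queueing delay

d = 0.0            # smoothed queueing-delay estimate, seconds

def sample(s):
    """Fold one periodic delay sample s into d; True flags congestion."""
    global d
    d = ALPHA * d + (1 - ALPHA) * s
    return d > THRESHOLD
```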
The second problem is that routers must deliver timely feedback to the senders that are causing the congestion. Congestion is experienced in the network, but relieving congestion requires action on behalf of the senders that are using the network. To deliver feedback, the router must identify the appropriate senders. It must then warn them carefully, without sending many more packets into the already congested network. Different schemes use different feedback mechanisms, as we will now describe.
Random Early Detection
Dealing with congestion when it first starts is more effective than letting it gum up the works and then trying to deal with it. This observation leads to an interesting twist on load shedding, which is to discard packets before all the buffer space is really exhausted.
The motivation for this idea is that most Internet hosts do not yet get congestion signals from routers in the form of an explicit notification. Instead, the only reliable indication of congestion that hosts get from the network is packet loss. After all, it is difficult to build a router that does not drop packets when it is completely overloaded. Transport protocols such as TCP are thus hardwired to react to loss as congestion, slowing down the source in response. The reasoning behind this logic is that TCP was designed for wired networks and wired networks are very reliable, so lost packets are mostly due to buffer overruns rather than transmission errors. Wireless links must recover transmission errors at the link layer (so they are not seen at the network layer) to work well with TCP.
This situation can be exploited to help reduce congestion. By having routers drop packets early, before the situation has become hopeless, there is time for the source to take action before it is too late. A popular algorithm for doing this is called RED (Random Early Detection) (Floyd and Jacobson, 1993). To determine when to start discarding, routers maintain a running average of their queue lengths. When the average queue length on some link exceeds a threshold, the link is said to be congested and a small fraction of the packets are dropped at random. Picking packets at random makes it more likely that the fastest senders will see a packet drop; this is the best option since the router cannot tell which source is causing the most trouble in a datagram network. The affected sender will notice the loss when there is no acknowledgement, and then the transport protocol will slow down. The lost packet is thus delivering the same message as a notification packet, but implicitly, without the router sending any explicit signal.
RED routers improve performance compared to routers that drop packets only when their buffers are full, though they require tuning to work well. For example, the ideal number of packets to drop depends on how many senders need to be notified of congestion. However, explicit notification is the better option if it is available. It works in exactly the same manner, but delivers a congestion signal explicitly rather than as a loss; RED is used when hosts cannot receive explicit signals.
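The drop decision at the heart of RED can be sketched in a few lines. The thresholds and maximum drop probability below are illustrative assumptions; real deployments tune these values.

```python
import random

MIN_TH, MAX_TH = 5, 15   # average queue-length thresholds, packets
MAX_P = 0.02             # drop probability reached at MAX_TH

def red_drop(avg_qlen):
    """Decide whether to drop an arriving packet, given the EWMA queue length."""
    if avg_qlen < MIN_TH:
        return False                     # no sign of congestion
    if avg_qlen >= MAX_TH:
        return True                      # persistent congestion: always drop
    # between the thresholds, drop with linearly increasing probability
    return random.random() < MAX_P * (avg_qlen - MIN_TH) / (MAX_TH - MIN_TH)
```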
Choke Packets
The most direct way to notify a sender of congestion is to tell it directly. In this approach, the router selects a congested packet and sends a choke packet back to the source host, giving it the destination found in the packet. The original packet may be tagged (a header bit is turned on) so that it will not generate any more choke packets farther along the path and then forwarded in the usual way. To avoid increasing load on the network during a time of congestion, the router may only send choke packets at a low rate.
When the source host gets the choke packet, it is required to reduce the traffic sent to the specified destination, for example, by 50%. In a datagram network, simply picking packets at random when there is congestion is likely to cause choke packets to be sent to fast senders, because they will have the most packets in the queue. The feedback created by this protocol can help prevent congestion yet not throttle any sender unless it causes trouble. For the same reason, it is likely that multiple choke packets will be sent to a given host and destination. The host should ignore these additional chokes for a fixed time interval until its reduction in traffic takes effect. After that period, further choke packets indicate that the network is still congested.
A choke packet used in the early Internet is the SOURCE QUENCH message (Postel, 1981). It never caught on, though, partly because the circumstances in which it was generated and the effect it had were not well specified. The modern Internet uses a different notification design that we will describe next.
Explicit Congestion Notification
Instead of generating additional packets to warn of congestion, a router can tag any packet it forwards (by setting a bit in the packet’s header) to signal that it is experiencing congestion. When the network delivers the packet, the destination can note that there is congestion and inform the sender when it sends a reply packet. The sender can then throttle its transmissions as before.
This design is called ECN (Explicit Congestion Notification) and is used in the Internet (Ramakrishnan et al., 2001). It is a refinement of early congestion signaling protocols, notably the binary feedback scheme of Ramakrishnan and Jain (1988) that was used in the DECnet architecture. Two bits in the IP packet header are used to record whether the packet has experienced congestion. Packets are unmarked when they are sent, as illustrated in Fig. 5-25. If any of the routers they pass through is congested, that router will then mark the packet as having experienced congestion as it is forwarded. The destination will then echo any marks it has received back to the sender as an explicit congestion signal in its next reply packet. This is shown with a dashed line in the figure to indicate that it happens above the IP level (e.g., in TCP). The sender must then throttle its transmissions, as in the case of choke packets.
Figure 5-25. Explicit congestion notification.
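The marking logic at a congested router can be sketched as follows. The codepoint values are the ones defined in RFC 3168; the congestion test itself would come from whatever queue-monitoring algorithm (such as RED) the router runs.

```python
# ECN codepoints carried in the two ECN bits of the IP header (RFC 3168).
NOT_ECT = 0b00   # sender does not understand ECN
ECT_1 = 0b01     # ECN-capable transport
ECT_0 = 0b10     # ECN-capable transport
CE = 0b11        # congestion experienced

def forward_bits(ecn, congested):
    """Return the ECN bits to forward, or None if the packet must be dropped."""
    if not congested:
        return ecn
    if ecn in (ECT_0, ECT_1, CE):
        return CE    # mark instead of dropping; the destination echoes this
    return None      # non-ECN traffic can only be signaled by loss
```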
Hop-by-Hop Backpressure
At high speeds or over long distances, many new packets may be transmitted after congestion has been signaled because of the delay before the signal takes effect. Consider, for example, a host in San Francisco (router A in Fig. 5-26) that is sending traffic to a host in New York (router D in Fig. 5-26) at the OC-3 speed of 155 Mbps. If the New York host begins to run out of buffers, it will take about 40 msec for a choke packet to get back to San Francisco to tell it to slow down. An ECN indication will take even longer because it is delivered via the destination. Choke packet propagation is illustrated as the second, third, and fourth steps in Fig. 5-26(a). In those 40 msec, another 6.2 megabits will have been sent. Even if the host in San Francisco completely shuts down immediately, the 6.2 megabits in the pipe will continue to pour in and have to be dealt with. Only in the seventh diagram in Fig. 5-26(a) will the New York router notice a slower flow.
An alternative approach is to have the choke packet take effect at every hop it passes through, as shown in the sequence of Fig. 5-26(b). Here, as soon as the choke packet reaches F, F is required to reduce the flow to D. Doing so will require F to devote more buffers to the connection, since the source is still sending away at full blast, but it gives D immediate relief, like a headache remedy in a television commercial. In the next step, the choke packet reaches E, which tells E to reduce the flow to F. This action puts a greater demand on E’s buffers but gives F immediate relief. Finally, the choke packet reaches A and the flow genuinely slows down.
The net effect of this hop-by-hop scheme is to provide quick relief at the point of congestion, at the price of using up more buffers upstream. In this way, congestion can be nipped in the bud without losing any packets. The idea is discussed in detail by Mishra et al. (1996).
5.4 QUALITY OF SERVICE AND APPLICATION QOE
The techniques we looked at in the previous sections are designed to reduce congestion and improve network performance. However, there are applications (and customers) that demand stronger performance guarantees from the network than ‘‘the best that could be done under the circumstances,’’ sometimes referred to as best effort. Many applications require some minimum level of throughput to function and also do not perform well when latency exceeds some threshold. In this section, we will continue our study of network performance, with a sharper focus on ways to provide quality of service that can meet application needs. This is an area in which the Internet is undergoing a long-term upgrade. More recently, there has also been increased focus on QoE (Quality of Experience), which recognizes that ultimately the user experience matters, and different applications have very different requirements and thresholds as far as network performance goes. An increasing area of focus pertains to estimating user QoE given the ability to observe only encrypted network traffic.
5.4.1 Application QoS Requirements
A stream of packets from a source to a destination is called a flow (Clark, 1988). A flow might be all the packets of a connection in a connection-oriented network, or all the packets sent from one process to another process in a connectionless network. The needs of each flow can be characterized by four primary parameters: bandwidth, delay, jitter, and loss. Together, these determine the QoS (Quality of Service) the flow requires.
Several common applications and the stringency of their network requirements are listed in Fig. 5-27. Note that network requirements are less demanding than application requirements in cases where the application can improve on the
Figure 5-26. (a) A choke packet that affects only the source. (b) A choke packet that affects each hop it passes through.
service provided by the network. In particular, networks do not need to be lossless for reliable file transfer, and they do not need to deliver packets with identical delays for audio and video playout. Some amount of loss can be repaired with retransmissions, and some amount of jitter can be smoothed by buffering packets at the receiver. However, there is nothing applications can do to remedy the situation if the network provides too little bandwidth or too much delay.
Application        Bandwidth  Delay   Jitter  Loss
Email              Low        Low     Low     Medium
File sharing       High       Low     Low     Medium
Web access         Medium     Medium  Low     Medium
Remote login       Low        Medium  Medium  Medium
Audio on demand    Low        Low     High    Low
Video on demand    High       Low     High    Low
Telephony          Low        High    High    Low
Videoconferencing  High       High    High    Low
Figure 5-27. Stringency of applications’ quality-of-service requirements.
The applications differ in their bandwidth needs, with email, audio in all forms, and remote login not needing much, but file sharing and video in all forms needing a great deal.
More interesting are the delay requirements. File transfer applications, including email and video, are not delay sensitive. If all packets are delayed uniformly by a few seconds, no harm is done. Interactive applications, such as Web surfing and remote login, are more delay sensitive. Real-time applications, such as telephony and videoconferencing, have strict delay requirements. If all the words in a telephone call are each delayed by too long, the users will find the connection unacceptable. On the other hand, playing audio or video files from a server does not require low delay.
The variation (i.e., standard deviation) in the delay or packet arrival times is called jitter. The first three applications in Fig. 5-27 are not sensitive to the packets arriving with irregular time intervals between them. Remote login is somewhat sensitive to that, since updates on the screen will appear in little bursts if the connection suffers much jitter. Video and especially audio are extremely sensitive to jitter. If a user is watching a video over the network and the frames are all delayed by exactly 2.000 seconds, no harm is done. But if the transmission time varies randomly between 1 and 2 seconds, the result will be terrible unless the application hides the jitter. For audio, a jitter of even a few milliseconds is clearly audible.
The first four applications have more stringent requirements on loss than audio and video because all bits must be delivered correctly. This goal is usually achieved with retransmissions of packets that are lost in the network by the transport layer. This is wasted work; it would be better if the network refused packets
it was likely to lose in the first place. Audio and video applications can tolerate some lost packets without retransmission because people do not notice short pauses or occasional skipped frames.
To accommodate a variety of applications, networks may support different categories of QoS. An influential example comes from ATM networks, which were once part of a grand vision for networking but have since become a niche technology. They support:
1. Constant bit rate (e.g., telephony).
2. Real-time variable bit rate (e.g., compressed videoconferencing).
3. Non-real-time variable bit rate (e.g., watching a movie on demand).
4. Available bit rate (e.g., file transfer).
These categories are also useful for other purposes and other networks. Constant bit rate is an attempt to simulate a wire by providing a uniform bandwidth and a uniform delay. Variable bit rate occurs when video is compressed, with some frames compressing more than others. Sending a frame with a lot of detail in it may require sending many bits, whereas a shot of a white wall may compress extremely well. Movies on demand are not actually real time because a few seconds of video can easily be buffered at the receiver before playback starts, so jitter on the network merely causes the amount of stored-but-not-played video to vary. Available bit rate is for applications such as email that are not sensitive to delay or jitter and will take what bandwidth they can get.
5.4.2 Overprovisioning
An easy way to provide good quality of service is to build a network with enough capacity for whatever traffic will be thrown at it. The name for this solution is overprovisioning. The resulting network will carry application traffic without significant loss and, assuming a decent routing scheme, will deliver packets with low latency. Performance doesn’t get any better than this. To some extent, the telephone system is overprovisioned because it is rare to pick up a telephone and not get a dial tone instantly. There is simply so much capacity available that demand can almost always be met.
The trouble with this solution is that it is expensive. It is basically solving a problem by throwing money at it. Quality of service mechanisms let a network with less capacity meet application requirements just as well at a lower cost. Moreover, overprovisioning is based on expected traffic. All bets are off if the traffic pattern changes too much. With quality of service mechanisms, the network can honor the performance guarantees that it makes even when traffic spikes, at the cost of turning down some requests.
Four issues must be addressed to ensure quality of service:
1. What applications need from the network.
2. How to regulate the traffic that enters the network.
3. How to reserve resources at routers to guarantee performance.
4. Whether the network can safely accept more traffic.
No single technique deals efficiently with all these issues. Instead, a variety of techniques have been developed for use at the network (and transport) layer. Practical quality-of-service solutions combine multiple techniques. To this end, we will describe two versions of quality of service for the Internet called Integrated Services and Differentiated Services.
5.4.3 Packet Scheduling
Being able to regulate the shape of the offered traffic is a good start. However, to provide a performance guarantee, we must reserve sufficient resources along the route that the packets take through the network. To do this, we are assuming that the packets of a flow follow the same route. Spraying them over routers at random makes it hard to guarantee anything. As a consequence, something similar to a virtual circuit has to be set up from the source to the destination, and all the packets that belong to the flow must follow this route.
Algorithms that allocate router resources among the packets of a flow and between competing flows are called packet scheduling algorithms. Three different kinds of resources can potentially be reserved for different flows:
1. Bandwidth.
2. Buffer space.
3. CPU cycles.
The first one, bandwidth, is the most obvious. If a flow requires 1 Mbps and the outgoing line has a capacity of 2 Mbps, trying to direct three flows through that line is not going to work. Thus, reserving bandwidth means not oversubscribing any output line.
A second resource that is often in short supply is buffer space. When a packet arrives, it is buffered inside the router until it can be transmitted on the chosen outgoing line. The purpose of the buffer is to absorb small bursts of traffic as the flows contend with each other. If no buffer is available, the packet has to be discarded since there is no place to put it. For good quality of service, some buffers might be reserved for a specific flow so that flow does not have to compete for buffers with other flows. Up to some maximum value, there will always be a buffer available when the flow needs one.
Finally, CPU cycles may also be a scarce resource. It takes router CPU time to process a packet, so a router can process only a certain number of packets per second. While modern routers are able to process most packets quickly, some kinds of packets require greater CPU processing, such as the ICMP packets we will describe in Sec. 5.7.4. Making sure that the CPU is not overloaded is needed to ensure timely processing of these packets.
First-In First-Out (FIFO) Scheduling
Packet scheduling algorithms allocate bandwidth and other router resources by determining which of the buffered packets to send on the output line next. We already described the most straightforward scheduler when explaining how routers work. Each router buffers packets in a queue for each output line until they can be sent, and they are sent in the same order that they arrived. This algorithm is known as FIFO (First-In First-Out), or equivalently FCFS (First-Come First-Served).
FIFO routers usually drop newly arriving packets when the queue is full. Since the newly arrived packet would have been placed at the end of the queue, this behavior is called tail drop. It is intuitive, and you may be wondering what alternatives exist. In fact, the RED algorithm we described in Sec. 5.3.2 chose a newly arriving packet to drop at random when the average queue length grew large. The other scheduling algorithms that we will describe also create other opportunities for deciding which packet to drop when the buffers are full.
Fair Queueing
FIFO scheduling is simple to implement, but it is not suited to providing good quality of service because when there are multiple flows, one flow can easily affect the performance of the other flows. If the first flow is aggressive and sends large bursts of packets, they will lodge in the queue. Processing packets in the order of their arrival means that the aggressive sender can hog most of the capacity of the routers its packets traverse, starving the other flows and reducing their quality of service. To add insult to injury, the packets of the other flows that do get through are likely to be delayed because they had to sit in the queue behind many packets from the aggressive sender.
Many packet scheduling algorithms have been devised that provide stronger isolation between flows and thwart attempts at interference (Bhatti and Crowcroft, 2000). One of the first ones was the fair queueing algorithm devised by Nagle (1987). The essence of this algorithm is that routers have separate queues, one for each flow for a given output line. When the line becomes idle, the router scans the queues round robin, as shown in Fig. 5-28. It then takes the first packet on the next queue. In this way, with n hosts competing for the output line, each host gets to send one out of every n packets. It is fair in the sense that all flows get to send packets at the same rate. Sending more packets will not improve this rate.
Figure 5-28. Round-robin fair queueing.
Although a start, the algorithm has a flaw: it gives more bandwidth to hosts that use large packets than to hosts that use small packets. Demers et al. (1990) suggested an improvement in which the round robin is done in such a way as to simulate a byte-by-byte round robin, instead of a packet-by-packet round robin. The trick is to compute a virtual time that is the number of the round at which each packet would finish being sent. Each round drains a byte from all of the queues that have data to send. The packets are then sorted in order of their finishing times and sent in that order.
This algorithm and an example of finish times for packets arriving in three flows are illustrated in Fig. 5-29. If a packet has length L, the round at which it will finish is simply L rounds after the start time. The start time is either the finish time of the previous packet, or the arrival time of the packet, if the queue is empty when it arrives.
In Fig. 5-29(a), three input queues feed a weighted fair queue; the bottom queue has a weight of 2. Packet B arrives late, and packet F arrives after D but goes first. Part (b) lists, for each packet, its arrival time, length, finish time, and output order:

Packet  Arrival time  Length  Finish time  Output order
A       0             8       8            1
B       5             6       11           3
C       5             10      10           2
D       8             9       20           7
E       8             8       14           4
F       10            6       16           5
G       11            10      19           6
H       20            8       28           8
Figure 5-29. (a) Weighted Fair Queueing. (b) Finishing times for the packets.
From the table in Fig. 5-29(b), and looking only at the first two packets in the top two queues, packets arrive in the order A, B, D, and F. Packet A arrives at round 0 and is 8 bytes long, so its finish time is round 8. Similarly, the finish time for packet B is 11. Packet D arrives while B is being sent. Its finish time is 9 byte-rounds after it starts when B finishes, or 20. Similarly, the finish time for F is 16. In the absence of new arrivals, the relative sending order is A, B, F, D, even though F arrived after D. It is possible that another small packet will arrive on the top flow and obtain a finish time before D. It will only jump ahead of D if the
transmission of that packet has not started. Fair queueing does not preempt packets that are currently being transmitted. Because packets are sent in their entirety, fair queueing is only an approximation of the ideal byte-by-byte scheme. But it is a very good approximation, staying within one packet transmission of the ideal scheme at all times.
Weighted Fair Queueing
One shortcoming of this algorithm in practice is that it gives all hosts the same priority. In many situations, it is desirable to give, for example, video servers more bandwidth than, say, file servers. This is easily possible by giving the video server two or more bytes per round. This modified algorithm is called WFQ (Weighted Fair Queueing). Letting the number of bytes per round be the weight of a flow, W, we can now give the formula for computing the finish time:
Fi = max(Ai, Fi−1) + Li / W
where Ai is the arrival time, Fi is the finish time, and Li is the length of packet i. The bottom queue of Fig. 5-29(a) has a weight of 2, so its packets are sent more quickly as you can see in the finish times given in Fig. 5-29(b).
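The sketch below applies this formula to the packet trace of Fig. 5-29 and reproduces the finish times in part (b); the flow assignments and the helper name are ours.

```python
def finish_times(packets, weights):
    """packets: (name, flow, arrival, length) tuples in arrival order."""
    last_finish = {}   # finish time of the previous packet of each flow
    times = {}
    for name, flow, arrival, length in packets:
        start = max(arrival, last_finish.get(flow, 0))   # Fi = max(Ai, Fi-1) ...
        times[name] = start + length / weights[flow]     # ... + Li / W
        last_finish[flow] = times[name]
    return times

trace = [("A", 1, 0, 8), ("B", 2, 5, 6), ("C", 3, 5, 10), ("D", 2, 8, 9),
         ("E", 3, 8, 8), ("F", 1, 10, 6), ("G", 3, 11, 10), ("H", 1, 20, 8)]
print(finish_times(trace, {1: 1, 2: 1, 3: 2}))  # bottom queue has weight 2
```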
Another practical consideration is implementation complexity. WFQ requires that packets be inserted by their finish time into a sorted queue. With N flows, this is at best an O(log N) operation per packet, which is difficult to achieve for many flows in high-speed routers. Shreedhar and Varghese (1995) describe an approximation called deficit round robin that can be implemented very efficiently, with only O(1) operations per packet. WFQ is widely used given this approximation.
Other kinds of scheduling algorithms exist, too. A simple example is priority scheduling, in which each packet is marked with a priority. High-priority packets are always sent before any low-priority packets that are buffered. Within a priority, packets are sent in FIFO order. However, priority scheduling has the disadvantage that a burst of high-priority packets can starve low-priority packets, which may have to wait indefinitely. WFQ often provides a better alternative. By giving the high-priority queue a large weight, say 3, high-priority packets will often go through a short line (as relatively few packets should be high priority) yet some fraction of low-priority packets will continue to be sent even when there is high-priority traffic. A high- and low-priority system is essentially a two-queue WFQ system in which the high priority has infinite weight.
As a final example of a scheduler, packets might carry timestamps and be sent in timestamp order. Clark et al. (1992) describe a design in which the timestamp records how far the packet is behind or ahead of schedule as it is sent through a sequence of routers on the path. Packets that have been queued behind other packets at a router will tend to be behind schedule, and the packets that have been serviced first will tend to be ahead of schedule. Sending packets in order of their timestamps has the beneficial effect of speeding up slow packets while at the same time
slowing down fast packets. The result is that all packets are delivered by the network with a more consistent delay, which is obviously a good thing.
Putting it Together
We have now seen all the necessary elements for QoS, so it is time to put them together to actually provide it. QoS guarantees are established through the process of admission control. We first saw admission control used to control congestion, which is a performance guarantee, albeit a weak one. The guarantees we are considering now are stronger, but the model is the same. The user offers a flow with an accompanying QoS requirement to the network. The network then decides whether to accept or reject the flow based on its capacity and the commitments it has made to other flows. If it accepts, the network reserves capacity in advance at routers to guarantee QoS when traffic is sent on the new flow.
The reservations must be made at all of the routers along the route that the packets take through the network. Any routers on the path without reservations might become congested, and a single congested router can break the QoS guarantee. Many routing algorithms find the single best path between each source and each destination and send all traffic over that path. This may cause some flows to be rejected if there is not enough spare capacity along the best path. QoS guarantees for new flows may still be accommodated by choosing a different route for the flow that has excess capacity. This is called QoS routing. Chen and Nahrstedt (1998) give an overview of these techniques. It is also possible to split the traffic for each destination over multiple paths to more easily find excess capacity. A simple method is for routers to choose equal-cost paths and to divide the traffic equally or in proportion to the capacity of the outgoing links. However, more sophisticated algorithms are also available (Nelakuditi and Zhang, 2002).
Given a path, the decision to accept or reject a flow is not a simple matter of comparing the resources (bandwidth, buffers, and cycles) requested by the flow with the router’s excess capacity in those three dimensions. It is a little more complicated than that. To start with, although some applications may know about their bandwidth requirements, few know about buffers or CPU cycles, so at the minimum, a different way is needed to describe flows and translate this description to router resources. We will get to this shortly.
Next, some applications are far more tolerant of an occasional missed deadline than others. The applications must choose from the type of guarantees that the network can make, whether hard guarantees or behavior that will hold most of the time. All else being equal, everyone would like hard guarantees, but the difficulty is that they are expensive because they constrain worst-case behavior. Guarantees for most of the packets are often sufficient for applications, and more flows with this guarantee can be supported for a fixed capacity.
Finally, some applications may be willing to haggle about the flow parameters and others may not be willing to do so. For example, a movie viewer that normally
runs at 30 frames/sec may be willing to drop back to 25 frames/sec if there is not enough free bandwidth to support 30 frames/sec. Similarly, the number of pixels per frame, audio bandwidth, and other properties may be adjustable.
Because many parties may be involved in the flow negotiation (the sender, the receiver, and all the routers along the path between them), flows must be described accurately in terms of specific parameters that can be negotiated. A set of such parameters is called a flow specification. Typically, the sender (e.g., the video server) produces a flow specification proposing the parameters it would like to use. As the specification propagates along the route, each router examines it and modifies the parameters as need be. The modifications can only reduce the flow, not increase it (e.g., a lower data rate, not a higher one). When it gets to the other end, the parameters can be established.
As an example of what can be in a flow specification, consider the example of Fig. 5-30. This is based on RFC 2210 and RFC 2211 for Integrated Services, a QoS design we will cover in the next section. It has five parameters. The first two parameters, the token bucket rate and token bucket size, use a token bucket to give the maximum sustained rate the sender may transmit, averaged over a long time interval, and the largest burst it can send over a short time interval.
Parameter            Unit
Token bucket rate    Bytes/sec
Token bucket size    Bytes
Peak data rate       Bytes/sec
Minimum packet size  Bytes
Maximum packet size  Bytes
Figure 5-30. An example flow specification.
The third parameter, the peak data rate, is the maximum transmission rate tolerated, even for brief time intervals. The sender must never exceed this rate even for short bursts.
The last two parameters specify the minimum and maximum packet sizes, including the transport and network layer headers (e.g., TCP and IP). The minimum size is useful because processing each packet takes some fixed time, no matter how short. A router may be prepared to handle 10,000 packets/sec of 1 KB each, but not be prepared to handle 100,000 packets/sec of 50 bytes each, even though this represents a lower data rate. The maximum packet size is important due to internal network limitations that may not be exceeded. For example, if part of the path goes over an Ethernet, the maximum packet size will be restricted to no more than 1500 bytes no matter what the rest of the network can handle.
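The five parameters map naturally onto a small data structure. The sketch below is an illustration, not the RFC 2210 wire format; the field names are ours, and the restriction rule simply encodes the point above that routers may reduce the flow but never enlarge it (the minimum packet size is left alone here for simplicity).

```python
from dataclasses import dataclass

@dataclass
class FlowSpec:
    token_bucket_rate: int   # bytes/sec, long-term sustained rate
    token_bucket_size: int   # bytes, largest permitted burst
    peak_data_rate: int      # bytes/sec, never to be exceeded
    min_packet_size: int     # bytes, including network/transport headers
    max_packet_size: int     # bytes, limited by the path's smallest MTU

def restrict(spec: FlowSpec, local: FlowSpec) -> FlowSpec:
    """A router along the route may only reduce the flow, not increase it."""
    return FlowSpec(
        min(spec.token_bucket_rate, local.token_bucket_rate),
        min(spec.token_bucket_size, local.token_bucket_size),
        min(spec.peak_data_rate, local.peak_data_rate),
        spec.min_packet_size,                    # unchanged in this sketch
        min(spec.max_packet_size, local.max_packet_size),
    )
```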
An interesting question is how a router turns a flow specification into a set of specific resource reservations. At first glance, it might appear that if a router has a link that runs at, say, 1 Gbps and the average packet is 1000 bits, it can process 1
million packets/sec. That is not the case, though, because there will always be idle periods on the link due to statistical fluctuations in the load. If the link needs every bit of capacity to get its work done, idling for even a few bits creates a backlog it can never get rid of.
Even with a load slightly below the theoretical capacity, queues can build up and delays can occur. Consider a situation in which packets arrive at random with a mean arrival rate of λ packets/sec. The packets have random lengths and can be sent on the link with a mean service rate of µ packets/sec. Under the assumption that both the arrival and service distributions are Poisson distributions (what is called an M/M/1 queueing system, where ‘‘M’’ stands for Markov, i.e., Poisson), it can be proven using queueing theory that the mean delay experienced by a packet, T, is

T = (1/µ) × 1/(1 − λ/µ) = (1/µ) × 1/(1 − ρ)

where ρ = λ/µ is the utilization. The first factor, 1/µ, is what the service time would be in the absence of competition. The second factor is the slowdown due to competition with other flows. For example, if λ = 950,000 packets/sec and µ = 1,000,000 packets/sec, then ρ = 0.95 and the mean delay experienced by each packet will be 20 µsec instead of 1 µsec. This time accounts for both the queueing time and the service time, as can be seen when the load is very low (λ/µ ≈ 0). If there are, say, 30 routers along the flow’s route, queueing delay alone will account for 600 µsec of delay.
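The arithmetic is easy to verify; a short sketch under the M/M/1 assumptions above:

```python
lam = 950_000    # mean arrival rate, packets/sec (lambda)
mu = 1_000_000   # mean service rate, packets/sec

rho = lam / mu                 # utilization: 0.95
T = (1 / mu) / (1 - rho)       # mean delay per packet, seconds
print(f"mean delay: {T * 1e6:.0f} usec")   # 20 usec rather than 1 usec
```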
One method of relating flow specifications to router resources that correspond to bandwidth and delay performance guarantees is given by Parekh and Gallager (1993, 1994). It is based on traffic sources shaped by (R, B) token buckets and WFQ at routers. Each flow is given a WFQ weight W large enough to drain its token bucket rate R, as shown in Fig. 5-31. For example, if the flow has a rate of 1 Mbps and the router and output link have a capacity of 1 Gbps, the weight for the flow must be greater than 1/1000th of the total of the weights for all of the flows at that router for the output link. This guarantees the flow a minimum bandwidth. If it cannot be given a large enough rate, the flow cannot be admitted.
Figure 5-31. Bandwidth and delay guarantees with token buckets and WFQ.
The largest queueing delay the flow will see is a function of the burst size of the token bucket. Consider the two extreme cases. If the traffic is smooth, without
any bursts, packets will be drained from the router just as quickly as they arrive. There will be no queueing delay (ignoring packetization effects). On the other hand, if the traffic is saved up in bursts, then a maximum-size burst, B, may arrive at the router all at once. In this case, the maximum queueing delay, D, will be the time taken to drain this burst at the guaranteed bandwidth, or B/R (again, ignoring packetization effects). If this delay is too large, the flow must request more bandwidth from the network.
These guarantees are hard. The token buckets bound the burstiness of the source, and fair queueing isolates the bandwidth given to different flows. This means that the flow will meet its bandwidth and delay guarantees regardless of how the other competing flows behave at the router. Those other flows cannot break the guarantee even by saving up traffic and all sending at once.
Moreover, the result holds for a path through multiple routers in any network topology. Each flow gets a minimum bandwidth because that bandwidth is guaranteed at each router. The reason each flow gets a maximum delay is more subtle. In the worst case that a burst of traffic hits the first router and competes with the traffic of other flows, it will be delayed up to the maximum delay of D. However, this delay will also smooth the burst. In turn, this means that the burst will incur no further queueing delays at later routers. The overall queueing delay will be at most D.
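An admission check built on these two bounds might look like the sketch below; the function and parameter names are ours, and a real implementation would also track the weights already committed on the link.

```python
def admit(R, B, capacity, deadline, spare_fraction):
    """Can a flow shaped by an (R, B) token bucket be admitted on this link?"""
    if R / capacity > spare_fraction:   # cannot guarantee bandwidth R
        return False
    return B / R <= deadline            # worst-case queueing delay D = B/R

# A 1-Mbps flow (125,000 bytes/sec) with a 50-KB bucket on a 1-Gbps link:
print(admit(R=125_000, B=50_000, capacity=125_000_000,
            deadline=0.5, spare_fraction=0.01))   # True: D = 0.4 sec
```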
5.4.4 Integrated Services
Between 1995 and 1997, IETF put a lot of effort into devising an architecture for streaming multimedia. This work resulted in over two dozen RFCs, starting with RFC 2205 through RFC 2212. The generic name for this work is integrated services. It was aimed at both unicast and multicast applications. An example of the former is a single user streaming a video clip from a news site. An example of the latter is a collection of digital television stations broadcasting their programs as streams of IP packets to many receivers at various locations. Below we will concentrate on multicast, since unicast is a special case of multicast.
In many multicast applications, groups can change membership dynamically, for example, as people enter a video conference and then get bored and switch to a soap opera or the croquet channel. Under these conditions, the approach of having the senders reserve bandwidth in advance does not work well, since it would require each sender to track all entries and exits of its audience. For a system designed to transmit television with millions of subscribers, it would not work at all.
RSVP—The Resource reSerVation Protocol
The main part of the integrated services architecture that is visible to the users of the network is RSVP (Resource reSerVation Protocol). It is described in RFC 2205 through RFC 2210. This protocol is used for making the reservations; other
protocols are used for sending the data. RSVP allows multiple senders to transmit to multiple groups of receivers, permits individual receivers to switch channels freely, and also optimizes bandwidth use while at the same time eliminating congestion.
In its simplest form, the protocol uses multicast routing using spanning trees, as discussed earlier. Each group is assigned a group address. To send to a group, a sender puts the group’s address in its packets. The standard multicast routing algorithm then builds a spanning tree covering all group members. The routing algorithm is not part of RSVP. The only difference from normal multicasting is a little extra information that is multicast to the group periodically to tell the routers along the tree to maintain certain data structures in their memories.
As an example, consider the network of Fig. 5-32(a). Hosts 1 and 2 are multicast senders, and hosts 3, 4, and 5 are multicast receivers. In this example, the senders and receivers are disjoint, but in general, the two sets may overlap. The multicast trees for hosts 1 and 2 are shown in Fig. 5-32(b) and Fig. 5-32(c), respectively.
Figure 5-32. (a) A network. (b) The multicast spanning tree for host 1. (c) The multicast spanning tree for host 2.
To get better reception and eliminate congestion, any of the receivers in a group can send a reservation message up the tree to the sender. The message is
propagated using the reverse path forwarding algorithm discussed earlier. At each hop, the router notes the reservation and reserves the necessary bandwidth. We saw in the previous section how a weighted fair queueing scheduler can be used to
make this reservation. If insufficient bandwidth is available, it reports back failure. By the time the message gets back to the source, bandwidth has been reserved all the way from the sender to the receiver making the reservation request along the spanning tree.
An example of such a reservation is shown in Fig. 5-33(a). Here host 3 has requested a channel to host 1. Once it has been established, packets can flow from 1 to 3 without congestion. Now consider what happens if host 3 next reserves a channel to the other sender, host 2, so the user can watch two television programs at once. A second path is reserved, as illustrated in Fig. 5-33(b). Note that two separate channels are needed from host 3 to router E because two independent streams are being transmitted.
Figure 5-33. (a) Host 3 requests a channel to host 1. (b) Host 3 then requests a second channel, to host 2. (c) Host 5 requests a channel to host 1.
Finally, in Fig. 5-33(c), host 5 decides to watch the program being transmitted by host 1 and also makes a reservation. First, dedicated bandwidth is reserved as far as router H. However, this router sees that it already has a feed from host 1, so if the necessary bandwidth has already been reserved, it does not have to reserve any more. Note that hosts 3 and 5 might have asked for different amounts of bandwidth (e.g., if host 3 is playing on a small screen and only wants the low-resolution information), so the capacity reserved must be large enough to satisfy the greediest receiver.
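Reservation merging of this kind can be sketched as follows; the data structures and names are assumptions for illustration, not RSVP message formats.

```python
# Per (source, outgoing link), a router carries only the largest bandwidth
# any downstream receiver has asked for.
reservations = {}   # (source, link) -> {receiver: bytes/sec}

def reserve(source, link, receiver, bandwidth):
    """Record a receiver's request; return what the link must now carry."""
    flows = reservations.setdefault((source, link), {})
    flows[receiver] = bandwidth
    return max(flows.values())   # the greediest receiver wins

reserve("host1", "H-upstream", "host3", 2_000_000)          # low-res viewer
print(reserve("host1", "H-upstream", "host5", 5_000_000))   # 5000000, not the sum
```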
When making a reservation, a receiver can (optionally) specify one or more sources that it wants to receive from. It can also specify whether these choices are
fixed for the duration of the reservation or whether the receiver wants to keep open the option of changing sources later. The routers use this information to optimize bandwidth planning. In particular, two receivers are only set up to share a path if they both agree not to change sources later on.
The reason for this strategy in the fully dynamic case is that reserved bandwidth is decoupled from the choice of source. Once a receiver has reserved bandwidth, it can switch to another source and keep that portion of the existing path that is valid for the new source. If host 2 is transmitting several video streams in real time, for example a TV broadcaster with multiple channels, host 3 may switch between them at will without changing its reservation: the routers do not care what program the receiver is watching.
5.4.5 Differentiated Services
Flow-based algorithms have the potential to offer good quality of service to one or more flows because they reserve whatever resources are needed along the route. However, they also have a downside. They require an advance setup to establish each flow, something that does not scale well when there are thousands or millions of flows. Also, they maintain internal per-flow state in the routers, making them vulnerable to router crashes. Finally, the changes required to the router code are substantial and involve complex router-to-router exchanges for setting up the flows. As a consequence, while work continues to advance integrated services, few deployments of it or anything like it exist yet.
For these reasons, IETF has also devised a simpler approach to quality of service, one that can be largely implemented locally in each router without advance setup and without having the whole path involved. This approach is known as class-based (as opposed to flow-based) quality of service. IETF has standardized an architecture for it, called differentiated services, which is described in RFC 2474, RFC 2475, and numerous others. We will now describe it.
Differentiated services can be offered by a set of routers forming an administrative domain (e.g., an ISP or a telco). The administration defines a set of service classes with corresponding forwarding rules. If a customer subscribes to differentiated services, customer packets entering the domain are marked with the class to which they belong. This information is carried in the Differentiated services field of IPv4 and IPv6 packets (described in Sec. 5.7.1). The classes are defined as per-hop behaviors because they correspond to the treatment the packet will receive at each router, not a guarantee across the network. Better service is provided to packets with some per-hop behaviors (e.g., premium service) than to others (e.g., regular service). Traffic within a class may be required to conform to some specific shape, such as a leaky bucket with some specified drain rate. An operator with a good nose for business might charge extra for each premium packet transported or might allow up to N premium packets per month for a fixed additional monthly fee. Note that this scheme requires no advance setup, no resource
reservation, and no time-consuming end-to-end negotiation for each flow, as with integrated services. This makes differentiated services relatively easy to implement.
Class-based service also occurs in other industries. For example, package delivery companies often offer overnight, two-day, and three-day service. Airlines offer first class, business class, and cattle-class service. Long-distance trains have multiple service classes. The Paris subway even had two service classes for the same quality of seating. For packets, the classes may differ in terms of delay, jitter, and probability of being discarded in the event of congestion, among other possibilities (but probably not roomier Ethernet frames).
To make the difference between flow-based quality of service and class-based quality of service clearer, consider an example: Internet telephony. With a flow-based scheme, each telephone call gets its own resources and guarantees. With a class-based scheme, all the telephone calls together get the resources reserved for the class telephony. These resources cannot be taken away by packets from the Web browsing class or other classes, but no telephone call gets any private resources reserved for it alone.
Expedited Forwarding
The choice of service classes is up to each operator, but since packets are often forwarded between networks run by different operators, IETF has defined some network-independent service classes. The simplest class is expedited forwarding, so let us start with that one. It is described in RFC 3246.
The idea behind expedited forwarding is very simple. Two classes of service are available: regular and expedited. The vast majority of the traffic is expected to be regular, but a limited fraction of the packets are expedited. The expedited packets should be able to transit the network as though no other packets were present. In this way, they will get low loss, low delay, and low jitter service—just what is needed for VoIP. A symbolic representation of this ‘‘two-tube’’ system is given in Fig. 5-34. Note that there is still just one physical line. The two logical pipes shown in the figure represent a way to reserve bandwidth for different classes of service, not a second physical line.
One way to implement this strategy is as follows. Packets are classified as expedited or regular and marked accordingly. This step might be done on the sending host or in the ingress (first) router. The advantage of doing classification on the sending host is that more information is available about which packets belong to which flows. This task may be performed by networking software or even the operating system, to avoid having to change existing applications. For example, it is becoming common for VoIP packets to be marked for expedited service by hosts. If the packets pass through a corporate network or ISP that supports expedited service, they will receive preferential treatment. If the network does not support expedited service, no harm is done. In that case, it makes sense to at least try.
Figure 5-34. Expedited packets experience a traffic-free network.
Of course, if the marking is done by the host, the ingress router is likely to police the traffic to make sure that customers are not sending more expedited traffic than they have paid for. Within the network, the routers may have two output queues for each outgoing line, one for expedited packets and one for regular packets. When a packet arrives, it is queued accordingly. The expedited queue is given priority over the regular one, for example, by using a priority scheduler. In this way, expedited packets see an unloaded network, even when there is, in fact, a heavy load of regular traffic.
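The two-queue arrangement can be sketched with a strict-priority scheduler; this is a minimal illustration of one output line, not production router code.

```python
from collections import deque

expedited, regular = deque(), deque()

def enqueue(packet, is_expedited):
    """Queue a packet on the output line according to its marking."""
    (expedited if is_expedited else regular).append(packet)

def next_packet():
    """Pick the next packet for the output line; expedited always wins."""
    if expedited:
        return expedited.popleft()
    return regular.popleft() if regular else None
```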
Assured Forwarding
A somewhat more elaborate scheme for managing the service classes is called assured forwarding. It is described in RFC 2597. Assured forwarding specifies that there shall be four priority classes, each class having its own resources. The top three classes might be called gold, silver, and bronze. In addition, it defines three discard classes for packets that are experiencing congestion: low, medium, and high. Taken together, these factors define 12 service classes.
Figure 5-35 shows one way packets might be processed under assured forwarding. The first step is to classify the packets into one of the four priority classes. As before, this step might be done on the sending host (as shown in the figure) or in the ingress router, and the rate of higher-priority packets may be limited by the operator as part of the service offering.
The next step is to determine the discard class for each packet. This is done by passing the packets of each priority class through a traffic policer such as a token bucket. The policer lets all of the traffic through, but it identifies packets that fit within small bursts as low discard, packets that exceed small bursts as medium discard, and packets that exceed large bursts as high discard. The combination of priority and discard class is then encoded in each packet.
Finally, the packets are processed by routers in the network with a packet scheduler that carefully distinguishes the different classes. A common choice is to
Figure 5-35. A possible implementation of assured forwarding.
use weighted fair queueing for the four priority classes, with higher classes given higher weights. In this way, the higher classes will get most of the bandwidth, but the lower classes will not be starved of bandwidth entirely. For example, if the weights double from one class to the next higher class, the higher class will get twice the bandwidth. Within a priority class, packets with a higher discard class can be preferentially dropped by running an algorithm such as RED. RED will start to drop packets as congestion builds but before the router has run out of buffer space. At this stage, there is still buffer space with which to accept low discard packets while dropping high discard packets.
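The two marking steps can be sketched as follows. The burst thresholds are illustrative assumptions; a real policer would meter each flow's token buckets rather than take an excess-bytes figure as given.

```python
SMALL_BURST, LARGE_BURST = 10_000, 50_000   # assumed burst thresholds, bytes

def discard_class(excess_bytes):
    """Map how far a packet's burst exceeds its class rate to a discard class."""
    if excess_bytes <= SMALL_BURST:
        return "low"
    if excess_bytes <= LARGE_BURST:
        return "medium"
    return "high"

def mark(priority, excess_bytes):
    """Combine a priority class (e.g., gold) with a discard class: 12 in all."""
    return (priority, discard_class(excess_bytes))

print(mark("gold", 25_000))   # ('gold', 'medium')
```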
5.5 INTERNETWORKING
Until now, we have implicitly assumed that there is a single homogeneous network, with each machine using the same protocol in each layer. Unfortunately, this assumption is wildly optimistic. Many different networks exist, including PANs, LANs, MANs, and WANs. We have described Ethernet, Internet over cable, the fixed and mobile telephone networks, 802.11, and more. Numerous protocols are in widespread use across these networks in every layer.
5.5.1 Internetworks: An Overview
In the following sections, we will take a careful look at the issues that arise when two or more networks are connected to form an internetwork, or more simply an internet.
It would be much simpler to join networks together if everyone used a single networking technology, and it is often the case that there is a dominant kind of network, such as Ethernet. Some pundits speculate that the multiplicity of technologies will go away as soon as everyone realizes how wonderful [fill in your favorite network] is. Do not count on it. History shows this to be wishful thinking. Different kinds of networks grapple with different problems, so, for example, Ethernet
and satellite networks are always likely to differ. Reusing existing systems, such as running data networks on top of cable, the telephone network, and power lines, adds constraints that cause the features of the networks to diverge. Heterogeneity is here to stay.
If there will always be different networks, it would be simpler if we did not need to interconnect them. This also is very unlikely. Bob Metcalfe postulated that the value of a network with N nodes is the number of connections that may be made between the nodes, or N² (Gilder, 1993). This means that large networks are far more valuable than small networks because they allow many more connections, so there always will be an incentive to combine smaller networks.
The Internet is the prime example of this interconnection. (We will write Internet with a capital ‘‘I’’ to distinguish it from other internets, or connected networks.) The purpose of joining all these networks is to allow users on any of them to communicate with users on all the other ones. When you pay an ISP for Internet service, you may be charged depending on the bandwidth of your line, but what you are really paying for is the ability to exchange packets with any other host that is also connected to the Internet. After all, the Internet would not be very popular if you could only send packets to other hosts in the same city.
Since networks often differ in important ways, getting packets from one network to another is not always so easy. We must address problems of heterogeneity, and also problems of scale as the resulting internet grows very large. We will begin by looking at how networks can differ to see what we are up against. Then we shall see the approach used so successfully by IP, the network layer protocol of the Internet, including techniques for tunneling through networks, routing in internetworks, and packet fragmentation.
5.5.2 How Networks Differ
Networks can differ in many ways. Some of the differences, such as different modulation techniques or frame formats, are internal to the physical and data link layers. These differences will not concern us here. Instead, in Fig. 5-36 we list some of the differences that can be exposed to the network layer. It is papering over these differences that makes internetworking more difficult than operating within a single network.
When packets sent by a source on one network must transit one or more foreign networks before reaching the destination network, many problems can occur at the interfaces between networks. To start with, the source needs to be able to address the destination. What do we do if the source is on an Ethernet network and the destination is on the cellular telephone network? Assuming we can even specify a cellular destination from an Ethernet network, packets would cross from a connectionless network to a connection-oriented one. This may require that a new connection be set up on short notice, which injects a delay, and much overhead if the connection is not used for many more packets.
Item                Some possibilities
Service offered     Connectionless versus connection oriented
Addressing          Different sizes, flat or hierarchical
Broadcasting        Present or absent (also multicast)
Packet size         Every network has its own maximum
Ordering            Ordered and unordered delivery
Quality of service  Present or absent; many different kinds
Reliability         Different levels of loss
Security            Privacy rules, encryption, etc.
Parameters          Different timeouts, flow specifications, etc.
Accounting          By connect time, packet, byte, or not at all
Figure 5-36. Some of the many ways networks can differ.
Many specific differences may have to be accommodated as well. How do we multicast a packet to a group with some members on a network that does not support multicast? The differing maximum packet sizes used by different networks can be a major nuisance, too. How do you pass an 8000-byte packet through a network whose maximum size is 1500 bytes? If packets on a connection-oriented network transit a connectionless network, they may arrive in a different order than they were sent. That is something the sender likely did not expect, and it might come as an (unpleasant) surprise to the receiver as well.
With effort, these kinds of differences can be papered over. For example, a gateway joining two networks might generate separate packets for each destination to simulate multicast. A large packet might be broken up, sent in pieces, and then joined back together. Receivers might buffer packets and deliver them in order.
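The first of these workarounds is easy to picture in code. Below is a minimal sketch (the names Packet, send_unicast, and forward_multicast are illustrative, not from any real stack) of a gateway simulating multicast by cloning a packet for each group member:

```python
# A minimal, illustrative sketch: a gateway simulating multicast by
# sending one unicast copy of the packet to each group member.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Packet:
    dst: str          # destination address
    payload: bytes

def send_unicast(pkt):
    print(f"unicast to {pkt.dst}: {pkt.payload!r}")

def forward_multicast(pkt, group_members):
    """Clone the packet once per member on a multicast-less network."""
    for member in group_members:
        send_unicast(replace(pkt, dst=member))

forward_multicast(Packet(dst="group-7", payload=b"hello"),
                  group_members=["h1", "h2", "h3"])
```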
Networks also can differ in large respects that are more difficult to reconcile. The clearest example is quality of service. If one network has strong QoS and the other offers best-effort service, it will be impossible to make bandwidth and delay guarantees for real-time traffic end to end. In fact, such guarantees can likely only be made while the best-effort network is operated at a low utilization, or hardly used, which is unlikely to be the goal of most ISPs. Security mechanisms are problematic, but at least encryption for confidentiality and data integrity can be layered on top of networks that do not already include it. Finally, differences in accounting can lead to unwelcome bills when normal usage suddenly becomes expensive, as roaming mobile phone users with data plans have discovered.
5.5.3 Connecting Heterogeneous Networks
There are two basic choices for connecting different networks: we can build devices that translate or convert packets from each kind of network into packets for each other network, or, as computer scientists often do, we can try to solve the
problem by adding a layer of indirection and building a common layer on top of the different networks. In either case, the devices are placed at the boundaries between networks; initially, these devices were called gateways.
Early on, Cerf and Kahn (1974) argued for a common layer to hide the differences of existing networks. This approach has been tremendously successful, and the layer they proposed was eventually separated into the TCP and IP protocols. Almost four decades later, IP is the foundation of the modern Internet. For this accomplishment, Cerf and Kahn were awarded the 2004 Turing Award, informally known as the Nobel Prize of computer science. IP provides a universal packet format that all routers recognize and that can be passed through almost every network. IP has extended its reach from computer networks to take over the telephone network. It also runs on sensor networks and other tiny devices that were once presumed too resource-constrained to support it.
We have discussed several different devices that connect networks, including repeaters, hubs, switches, bridges, routers, and gateways. Repeaters and hubs just move bits from one wire to another. They are mostly analog devices and do not understand anything about higher layer protocols. Bridges and switches operate at the link layer. They can be used to build networks, but only with minor protocol translation in the process, for example, among 10-, 100-, and 1000-Mbps Ethernet switches. Our focus in this section is interconnection devices that operate at the network layer, namely the routers. We will leave gateways, which are higher-layer interconnection devices, until later.
Let us first explore at a high level how interconnection with a common net- work layer can be used to interconnect dissimilar networks. An internet comprised of 802.11, MPLS, and Ethernet networks is shown in Fig. 5-37(a). Suppose that the source machine on the 802.11 network wants to send a packet to the destination machine on the Ethernet network. Since these technologies are different, and they are further separated by another kind of network (MPLS), some added processing is needed at the boundaries between the networks.
Because different networks may, in general, have different forms of addressing, the packet carries a network layer address that can identify any host across the three networks. The first boundary the packet reaches is when it transitions from an 802.11 network to an MPLS network. Remember, 802.11 provides a connectionless service, but MPLS provides a connection-oriented service. This means that a virtual circuit must be set up to cross that network. Once the packet has traveled along the virtual circuit, it will reach the Ethernet network. At this boundary, the packet may be too large to be carried, since 802.11 can work with larger frames than Ethernet. To handle this problem, the packet is divided into fragments, and each fragment is sent separately. When the fragments reach the destination, they are reassembled. Then the packet has completed its journey.
The protocol processing for this journey is shown in Fig. 5-37(b). The source accepts data from the transport layer and generates a packet with the common network layer header, which is IP in this example. The network header contains the
Figure 5-37. (a) A packet crossing different networks. (b) Network and link layer protocol processing.
ultimate destination address, which is used to determine that the packet should be sent via the first router. So the packet is encapsulated in an 802.11 frame whose destination is the first router and transmitted. At the router, the packet is removed
from the frame’s data field and the 802.11 frame header is discarded. The router now examines the IP address in the packet and looks up this address in its routing table. Based on this address, it decides to send the packet to the second router next. For this part of the path, an MPLS virtual circuit must be established to the second router and the packet must be encapsulated with MPLS headers that travel this circuit. At the far end, the MPLS header is discarded and the network address is again consulted to find the next network layer hop. It is the destination itself. When a packet is too long to be sent over Ethernet, it is split into two portions. Each of these portions is put into the data field of an Ethernet frame and sent to the Ethernet address of the destination. At the destination, the Ethernet header is stripped from each of the frames, and the contents are reassembled. The packet has finally reached its destination.
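The journey just described can be summarized with a small sketch. The dict-based ‘‘frames’’ and names below are purely illustrative; the point is that the IP packet is unwrapped and re-wrapped at each boundary while remaining unchanged itself:

```python
# Illustrative sketch of the encapsulation steps of Fig. 5-37(b).
def ip_packet(src, dst, data):
    return {"hdr": {"src": src, "dst": dst}, "data": data}

def in_frame(link_type, frame_dst, packet):
    # The network-layer packet rides in the frame's data field.
    return {"link": link_type, "frame_dst": frame_dst, "data": packet}

pkt = ip_packet("host-wifi", "host-eth", b"payload")

hop1 = in_frame("802.11", "router-1", pkt)        # source to first router
# Router 1 strips the 802.11 header, consults its table, and re-wraps:
hop2 = in_frame("MPLS vc-5", "router-2", hop1["data"])
# Router 2 strips the MPLS header; the destination is on the Ethernet:
hop3 = in_frame("Ethernet", "host-eth", hop2["data"])

assert hop3["data"] is pkt    # the IP packet is untouched end to end
```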
Observe that there is an essential difference between the routed case and the switched (or bridged) case. With a router, the packet is extracted from the frame and the network address in the packet is used for deciding where to send it. With a switch (or bridge), the entire frame is transported on the basis of its MAC address.
Switches do not have to understand the network layer protocol being used to switch packets. Routers do.
Unfortunately, internetworking is not nearly as easy as we have made it sound. In fact, when bridges were introduced, it was intended that they would join different types of networks, or at least different types of LANs. They were to do this by translating frames from one LAN into frames from another LAN. However, this did not work well, for exactly the same reason that internetworking is difficult:
the differences in the features of LANs, such as different maximum packet sizes and LANs with and without priority classes, are hard to mask. Today, bridges are predominantly used to connect the same kind of network at the link layer, and routers connect different networks at the network layer.
Internetworking has been very successful at building large networks, but it only works when there is a common network layer. There have, in fact, been many network protocols over time. Getting everybody to agree on a single format is difficult when companies perceive it to their commercial advantage to have a proprietary format that they control. Examples besides IP, which is now the near-universal network protocol, were IPX, SNA, and AppleTalk. None of these protocols are still in widespread use, but there will always be other protocols. The most relevant example now is probably IPv4 and IPv6. While these are both versions of IP, they are not compatible (or it would not have been necessary to create IPv6).
A router that can handle multiple network protocols is called a multiprotocol router. It must either translate the protocols, or leave the connection to a higher protocol layer. Neither approach is entirely satisfactory. Connection at a higher layer, say, by using TCP, requires that all the networks implement TCP (which may not be the case). It also limits usage across the networks to applications that use TCP (which does not include many real-time applications).
The alternative is to translate packets between the networks. However, unless the packet formats are close relatives with the same information fields, such conversions will always be incomplete and often doomed to failure. For example, IPv6 addresses are 128 bits long. They will not fit in a 32-bit IPv4 address field, no matter how hard the router tries. Getting IPv4 and IPv6 to run in the same network has proven to be a major obstacle to the deployment of IPv6. (To be fair, so has getting customers to understand why they should want IPv6 in the first place.) Greater problems can be expected when translating between very different protocols, such as connectionless and connection-oriented network protocols. Given these difficulties, conversion is only rarely attempted. Arguably, even IP has only worked so well by serving as a kind of lowest common denominator. It requires little of the networks on which it runs, but offers only best-effort service as a result.
5.5.4 Connecting Endpoints Across Heterogeneous Networks
Handling the general case of making two different networks interwork is exceedingly difficult. However, there is a common special case that is manageable even for different network protocols. This case is where the source and destination hosts are on the same type of network, but there is a different network in between. As an example, think of an international bank with an IPv6 network in Paris, an IPv6 network in London, and connectivity between the offices via the IPv4 Internet. This situation is shown in Fig. 5-38.
The solution to this problem is a technique called tunneling. To send an IP packet to a host in the London office, a host in the Paris office constructs the
Figure 5-38. Tunneling a packet from Paris to London.
packet containing an IPv6 address in London, and sends it to the multiprotocol router that connects the Paris IPv6 network to the IPv4 Internet. When this router gets the IPv6 packet, it encapsulates the packet with an IPv4 header addressed to the IPv4 side of the multiprotocol router that connects to the London IPv6 network. That is, the router puts a (IPv6) packet inside a (IPv4) packet. When this wrapped packet arrives, the London router removes the original IPv6 packet and sends it onward to the destination host.
The path through the IPv4 Internet can be seen as a big tunnel extending from one multiprotocol router to the other. The IPv6 packet just travels from one end of the tunnel to the other, snug in its nice box. It does not have to worry about dealing with IPv4 at all. Neither do the hosts in Paris or London. Only the multiprotocol routers have to understand both IPv4 and IPv6 packets. In effect, the entire trip from one multiprotocol router to the other is like a hop over a single link.
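A minimal sketch of the two routers' jobs follows, assuming dict-based headers; the function names and the (documentation-range) addresses are illustrative, not taken from the text:

```python
# Illustrative sketch of IPv6-in-IPv4 tunneling, as in Fig. 5-38.
def encapsulate(ipv6_packet, tunnel_src_v4, tunnel_dst_v4):
    """Paris-side router: wrap the entire IPv6 packet in an IPv4 packet."""
    return {"v4_src": tunnel_src_v4, "v4_dst": tunnel_dst_v4,
            "payload": ipv6_packet}

def decapsulate(ipv4_packet):
    """London-side router: unwrap and forward the inner IPv6 packet."""
    return ipv4_packet["payload"]

inner = {"v6_src": "2001:db8:paris::1", "v6_dst": "2001:db8:london::7",
         "data": b"bonjour"}
wrapped = encapsulate(inner, "198.51.100.1", "203.0.113.9")
assert decapsulate(wrapped) == inner   # unchanged after crossing the tunnel
```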
An analogy may make tunneling clearer. Consider a person driving her car from Paris to London. Within France, the car moves under its own power, but when it hits the English Channel, it is loaded onto a high-speed train and transported to England through the Chunnel (cars are not permitted to drive through the Chunnel). Effectively, the car is being carried as freight, as depicted in Fig. 5-39. At the far end, the car is let loose on the English roads and once again continues to move under its own power. Tunneling of packets through a foreign network works the same way.
Tunneling is widely used to connect isolated hosts and networks using other networks. The network that results is called an overlay since it has effectively been overlaid on the base network. Deployment of a network protocol with a new feature is a common reason, as our ‘‘IPv6 over IPv4’’ example shows. The disadvantage of tunneling is that none of the hosts on the network that is tunneled over can be reached because the packets cannot escape in the middle of the tunnel. However, this limitation of tunnels is turned into an advantage with VPNs (Virtual Private Networks). A VPN is simply an overlay that is used to provide a measure of security. We will explore VPNs when we get to Chap. 8.
Figure 5-39. Tunneling a car from France to England.
5.5.5 Internetwork Routing: Routing Across Multiple Networks
Routing through an internet poses the same basic problem as routing within a single network, but with some added complications. To start, the networks may internally use different routing algorithms. For example, one network may use link state routing and another distance vector routing. Since link state algorithms need to know the topology but distance vector algorithms do not, this difference alone would make it unclear how to find the shortest paths across the internet.
Networks run by different operators lead to bigger problems. First, the operators may have different ideas about what is a good path through the network. One operator may want the route with the least delay, while another may want the least expensive route. This will lead the operators to use different quantities to set the shortest-path costs (e.g., milliseconds of delay vs. monetary cost). The weights will not be comparable across networks, so shortest paths on the internet will not be well defined.
Worse yet, one operator may not want another operator to even know the details of the paths in its network, perhaps because the weights and paths may reflect sensitive information (such as the monetary cost) that represents a competitive business advantage.
Finally, the internet may be much larger than any of the networks that comprise it. It may therefore require routing algorithms that scale well by using a hierarchy, even if none of the individual networks need to use a hierarchy.
All of these considerations lead to a two-level routing algorithm. Within each network, an intradomain or interior gateway protocol is used for routing. (‘‘Gateway’’ is an older term for ‘‘router.’’) It might be a link state protocol of the kind we have already described. Across the networks that make up the internet, an interdomain or exterior gateway protocol is used. The networks may all use different intradomain protocols, but they must use the same interdomain protocol. In the Internet, the interdomain routing protocol is called BGP (Border Gateway Protocol). We will describe it in Sec. 5.7.7.
There is one more important term to introduce. Since each network is operated independently of all the others, it is often referred to as an AS or Autonomous
System. A good mental model for an AS is an ISP network. In fact, an ISP network may be comprised of more than one AS, if it is managed as, or has been acquired as, multiple networks. But the difference is usually not significant.
The two levels are usually not strictly hierarchical, as highly suboptimal paths might result if a large international network and a small regional network were both abstracted to be a single network. However, relatively little information about routes within the networks is exposed to find routes across the internetwork. This helps to address all of the complications. It improves scaling and lets operators freely select routes within their own networks using a protocol of their choosing. It also does not require weights to be compared across networks or expose sensitive information outside of networks.
However, we have said little so far about how the routes across the networks of the internet are determined. In the Internet, a large determining factor is the business arrangements between ISPs. Each ISP may charge or receive money from the other ISPs for carrying traffic. Another factor is that if internetwork routing requires crossing international boundaries, various laws may suddenly come into play, such as Sweden’s strict privacy laws about exporting personal data about Swedish citizens from Sweden. All of these nontechnical factors are wrapped up in the concept of a routing policy that governs the way autonomous networks select the routes that they use. We will return to routing policies when we describe BGP.
5.5.6 Supporting Different Packet Sizes: Packet Fragmentation
Each network or link imposes some maximum size on its packets. These limits have various causes, among them:
1. Hardware (e.g., the size of an Ethernet frame).
2. Operating system (e.g., all buffers are 512 bytes).
3. Protocols (e.g., the number of bits in the packet length field).
4. Compliance with some (inter)national standard.
5. Desire to reduce error-induced retransmissions to some level.
6. Desire to prevent one packet from occupying the channel too long.
The result of all these factors is that the network designers are not free to choose any old maximum packet size they wish. Maximum payloads for some common technologies are 1500 bytes for Ethernet and 2272 bytes for 802.11. IP is more generous, allowing for packets as big as 65,515 bytes.
Hosts usually prefer to transmit large packets because this reduces packet overheads such as bandwidth wasted on header bytes. An obvious internetworking problem appears when a large packet wants to travel through a network whose
maximum packet size is too small. This nuisance has been a persistent issue, and solutions to it have evolved along with much experience gained on the Internet. One solution is to make sure the problem does not occur in the first place. However, this is easier said than done. A source does not usually know the path a packet will take through the network to a destination, so it certainly does not know how small a packet has to be to get there. This packet size is called the Path MTU (Path Maximum Transmission Unit). Even if the source did know the path MTU, packets are routed independently in a connectionless network such as the Internet. This routing means that paths may suddenly change, which can unexpectedly change the path MTU.
The alternative solution to the problem is to allow routers to break up packets into fragments, sending each fragment as a separate network layer packet. However, as every parent of a small child knows, converting a large object into small fragments is considerably easier than the reverse process. (Physicists have even given this effect a name: the second law of thermodynamics.) Packet-switching networks, too, have trouble putting the fragments back together again.
Two opposing strategies exist for recombining the fragments back into the original packet. The first strategy is to make all the fragmentation caused by a ‘‘small-packet’’ network transparent to any subsequent networks through which the packet must pass on its way to the ultimate destination. This option is shown in
Fig. 5-40(a). In this approach, when an oversized packet arrives at G1, the router breaks it up into fragments. Each fragment is addressed to the same exit router, G2, where the pieces are recombined. In this way, passage through the small-packet network is made transparent. Subsequent networks are not even aware that fragmentation has occurred.
Transparent fragmentation is straightforward but has some problems. For one thing, the exit router must know when it has received all the pieces, so either a count field or an ‘‘end-of-packet’’ bit must be provided. Also, because all packets must exit via the same router so that they can be reassembled, the routes are constrained. By not allowing some fragments to follow one route to the ultimate destination and other fragments a disjoint route, some performance may be lost. More significant is the amount of work that the router may have to do. It may need to buffer the fragments as they arrive, and decide when to throw them away if not all of the fragments arrive. Some of this work may be wasteful, too, as the packet may pass through a series of small-packet networks and need to be repeatedly fragmented and reassembled.
The other fragmentation strategy is to refrain from recombining fragments at any intermediate routers. Once a packet has been fragmented, each fragment is treated as though it were an original packet. The routers pass the fragments, as shown in Fig. 5-40(b), and reassembly is performed only at the destination host.
The main advantage of nontransparent fragmentation is that it requires routers to do less work. IP works this way. A complete design requires that the fragments be numbered in such a way that the original data stream can be reconstructed. The
Figure 5-40. (a) Transparent fragmentation: G1 fragments a large packet, G2 reassembles the fragments, G3 fragments again, and G4 reassembles again. (b) Nontransparent fragmentation: the fragments are not reassembled until the final destination (a host) is reached.
design used by IP is to give every fragment a packet number (carried on all packets), an absolute byte offset within the packet, and a flag indicating whether it is the end of the packet. An example is shown in Fig. 5-41. While simple, this design has some attractive properties. Fragments can be placed in a buffer at the destination in the right place for reassembly, even if they arrive out of order. Fragments can also be fragmented if they pass over a network with a yet smaller MTU. This is shown in Fig. 5-41(c). Retransmissions of the packet (if all fragments were not received) can be fragmented into different pieces. Finally, fragments can be of arbitrary size, down to a single byte plus the packet header. In all cases, the destination simply uses the packet number and fragment offset to place the data in the right position, and the end-of-packet flag to determine when it has the complete packet.
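Here is a runnable sketch of this design (the field names are illustrative, not the real IP header layout). Note how re-fragmentation preserves the absolute offsets, and how reassembly works in any arrival order:

```python
# Illustrative sketch of IP-style fragmentation: each fragment carries the
# packet number, the absolute byte offset, and an end-of-packet flag.
import random

def fragment(packet_id, data, mtu, base=0, is_last=True):
    """Split data into fragments of at most mtu payload bytes. 'base' and
    'is_last' let a fragment itself be re-fragmented correctly."""
    frags = []
    for off in range(0, len(data), mtu):
        chunk = data[off:off + mtu]
        frags.append({"id": packet_id,
                      "offset": base + off,
                      "end": is_last and off + len(chunk) == len(data),
                      "data": chunk})
    return frags

def is_complete(frags):
    """The end flag plus contiguous coverage tell us when we have it all."""
    have = sum(len(f["data"]) for f in frags)
    total = max((f["offset"] + len(f["data"]) for f in frags if f["end"]),
                default=None)
    return total is not None and have == total

def reassemble(frags):
    """Place each fragment at its byte offset; any arrival order works."""
    buf = bytearray()
    for f in frags:
        end = f["offset"] + len(f["data"])
        if end > len(buf):
            buf.extend(b"\0" * (end - len(buf)))
        buf[f["offset"]:end] = f["data"]
    return bytes(buf)

frags = fragment(27, b"ABCDEFGHIJ", mtu=8)          # as in Fig. 5-41(b)
f0 = frags[0]                                        # re-fragment at MTU 5:
refrags = fragment(27, f0["data"], 5, base=f0["offset"], is_last=f0["end"])
all_frags = refrags + frags[1:]                      # as in Fig. 5-41(c)
random.shuffle(all_frags)                            # out-of-order arrival
assert is_complete(all_frags)
assert reassemble(all_frags) == b"ABCDEFGHIJ"
```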
Unfortunately, this design still has problems. The overhead can be higher than with transparent fragmentation because fragment headers are now carried over some links where they may not be needed. But the real problem is the existence of fragments in the first place. Kent and Mogul (1987) argued that fragmentation is detrimental to performance because, as well as the header overheads, a whole packet is lost if any of its fragments are lost, and because fragmentation is more of a burden for hosts than was originally realized.
This leads us back to the original solution of getting rid of fragmentation in the network: the strategy used in the modern Internet. The process is called path MTU discovery (Mogul and Deering, 1990). It works like this. Each IP packet is sent with its header bits set to indicate that no fragmentation is allowed to be performed. If a router receives a packet that is too large, it generates an error packet,
Each fragment carries the packet number, the number of the first elementary fragment (byte) in the fragment, and an end-of-packet bit:

(a) 27 | 0 | 1 | A B C D E F G H I J
(b) 27 | 0 | 0 | A B C D E F G H      27 | 8 | 1 | I J
(c) 27 | 0 | 0 | A B C D E      27 | 5 | 0 | F G H      27 | 8 | 1 | I J

Figure 5-41. Fragmentation when the elementary data size is 1 byte. (a) Original packet, containing 10 data bytes. (b) Fragments after passing through a network with maximum packet size of 8 payload bytes plus header. (c) Fragments after passing through a size 5 gateway.
returns it to the source, and drops the packet. This is shown in Fig. 5-42. When the source receives the error packet, it uses the information inside to refragment the packet into pieces that are small enough for the router to handle. If a router further down the path has an even smaller MTU, the process is repeated.
Figure 5-42. Path MTU discovery. Successive links toward the destination have MTUs of 1400, 1200, and 900 bytes; the routers report ‘‘Try 1200’’ and then ‘‘Try 900’’ back to the source.
The advantage of path MTU discovery is that the source now knows what length packet to send. If the routes and path MTU change, new error packets will be triggered and the source will adapt to the new path. However, fragmentation is still needed between the source and the destination unless the higher layers learn the path MTU and pass the right amount of data to IP. TCP and IP are typically
SEC. 5.5 INTERNETWORKING 435
implemented together (as ‘‘TCP/IP’’) to be able to pass this sort of information. Even if this is not done for other protocols, fragmentation has still been moved out of the network and into the hosts.
The disadvantage of path MTU discovery is that there may be added startup delays simply to send a packet. More than one round-trip delay may be needed to probe the path and find the MTU before any data is delivered to the destination. This begs the question of whether there are better designs. The answer is probably ‘‘Yes.’’ Consider the design in which each router simply truncates packets that exceed its MTU. This would ensure that the destination learns the MTU as rapidly as possible (from the amount of data that was delivered) and receives some of the data.
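A small simulation makes the discovery loop concrete. The link MTUs match Fig. 5-42; everything else (function names, the error-message format) is illustrative:

```python
# Illustrative simulation of path MTU discovery along the path of Fig. 5-42.
LINK_MTUS = [1400, 1200, 900]   # MTUs of successive links to the destination

def try_send(size):
    """Return None on success, or the MTU advertised by the first
    router whose outgoing link is too small ('fragmentation needed')."""
    for mtu in LINK_MTUS:
        if size > mtu:
            return mtu
    return None

size = 1400                      # source starts at its own link MTU
round_trips = 0
while True:
    advice = try_send(size)
    if advice is None:
        break
    round_trips += 1
    size = advice                # resend with the advertised MTU
print(f"path MTU is {size}, learned after {round_trips} error round trips")
# -> path MTU is 900, learned after 2 error round trips
```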
5.6 SOFTWARE-DEFINED NETWORKING
Traffic management and engineering have historically been very challenging: they require network operators to tune the configuration parameters of routing protocols, which then re-compute routes. Traffic flows along the new paths, resulting in a re-balancing of traffic. Unfortunately, the mechanisms for controlling traffic in this manner are indirect: changes to routing configuration result in changes to routing both within the network and between networks, and these protocols can interact in unpredictable ways. SDN (Software-Defined Networking) aims to fix many of these problems. We will discuss it below.
5.6.1 Overview
In a certain way, networks have always been ‘‘software defined,’’ in the sense that configurable software running on routers is responsible for looking up information in packets and making forwarding decisions about them. Yet, the software that runs the routing algorithms and implements other logic about packet forwarding was historically vertically integrated with the networking hardware. An operator who bought a Cisco or Juniper router was, in some sense, stuck with the software technology that the vendor shipped with the hardware. For example, making changes to the way OSPF or BGP work was simply not possible. One of the main concepts driving SDN was to recognize that the control plane, the software and logic that select routes and decide how to handle forwarded traffic, runs in software and can operate completely separately from the data plane, the hardware-based technology that is responsible for actually performing lookups on packets and deciding what to do with them. The two planes are shown in Fig. 5-43.
Given the architectural separation of the control plane and the data plane, the next logical step is to recognize that the control plane need not run on the network hardware at all! In fact, one common instantiation of SDN involves a
logically centralized program, often written in a high-level language (e.g., Python, Java, Golang, C), making logical decisions about forwarding and communicating those decisions to every forwarding device in the network. That communication channel between the high-level software program and the underlying hardware could be anything that the network device understands. One of the first SDN controllers used BGP itself as a control plane (Feamster et al., 2003); subsequently, technologies such as OpenFlow, NETCONF, and YANG have emerged as more flexible ways to communicate control-plane information with network devices. In some sense, SDN was a reincarnation of a well-established idea (i.e., centralized control) at a time when various enablers (open chipset APIs, software control of distributed systems) were mature enough to enable the architectural ideas to finally gain a foothold.
Figure 5-43. Control and data plane separation in SDN.
While the technology of SDN continues to evolve rapidly, the central tenet of the separation of the data and control planes remains invariant. SDN technology has evolved over a number of years; readers who wish to trace the complete history of SDN can read further about the genesis of this increasingly popular technology (Feamster et al., 2013). Below, we survey several of the major trends in SDN: (1) control over routing and forwarding (i.e., the technology behind the control plane); (2) programmable hardware and customizable forwarding (i.e., the technology that makes the data plane more programmable); and (3) programmable network telemetry (a network management application that puts the two pieces together and in many ways may be the ‘‘killer app’’ for SDN).
5.6.2 The SDN Control Plane: Logically Centralized Software Control
One of the main technical ideas that underlies SDN is a control plane that runs separately from the routers, often as a single, logically centralized program. In some sense, SDN has always really existed: routers are configurable, and many
large networks would often even auto-generate their router configuration from a centralized database, keep it in version control, and push those configurations to the routers with scripts. While, in a pedantic sense, this kind of setup could be called an SDN, it gives operators only limited control over how traffic is forwarded through the network. More typically, SDN control programs (sometimes called ‘‘controllers’’) are responsible for more of the control logic, such as computing the paths through the network on behalf of the routers and simply updating the resulting forwarding tables remotely.
Early work in software-defined networking aimed to make it easier for network operators to perform traffic engineering tasks by directly controlling the routes that each router in the network selects, rather than relying on indirect tuning of network configuration parameters. Early incarnations of SDN thus aimed to work within the constraints of existing Internet routing protocols to use them to directly control the routes. One such example was the RCP (Routing Control Platform) (Feamster et al., 2003), which was subsequently deployed in backbone networks to perform traffic load balancing and defend against denial-of-service attacks. Subsequent developments included a system called Ethane (Casado et al., 2007), which used centralized software control to authenticate hosts within a network. One of the problems with Ethane, however, was that it required customized switches to operate, which limited its deployment in practice.
After demonstrating these benefits of SDN to network management, network operators and vendors began to take notice. Additionally, there was a convenient back door to making the switches even more flexible through a programmable control plane: many network switches relied on a common Broadcom chipset, which had an interface that allowed direct writes into switch memory. A team of researchers worked with switch vendors to expose this interface to software programs, ultimately developing a protocol called OpenFlow (McKeown et al., 2008). The OpenFlow protocol was implemented by many switch vendors who were trying to compete with the dominant incumbent switch vendor, Cisco. Initially, the protocol supported a very simple interface: writes into a content-addressable memory that acted as a simple match-action table. This match-action table allowed a switch to identify packets that matched one or more fields in the packet header (e.g., MAC address, IP address) and perform one of a set of possible actions, including forwarding the packet to a specific port, dropping it, or sending it to an off-path software controller.
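The idea of a match-action table is simple enough to sketch directly. This is an illustration of the concept, not the actual OpenFlow wire protocol; the table entries and field names are made up:

```python
# Illustrative sketch of a match-action table: entries match on header
# fields and map to an action (forward, drop, or send to the controller).
MATCH_ACTION_TABLE = [
    ({"ip_dst": "10.0.0.5"},           ("forward", "port-3")),
    ({"mac_src": "aa:bb:cc:dd:ee:ff"}, ("drop", None)),
    ({},                               ("controller", None)),  # table miss
]

def lookup(headers):
    """Return the action of the first entry whose fields all match."""
    for match, action in MATCH_ACTION_TABLE:
        if all(headers.get(k) == v for k, v in match.items()):
            return action
    return ("drop", None)

print(lookup({"ip_dst": "10.0.0.5", "mac_src": "11:22:33:44:55:66"}))
# -> ('forward', 'port-3')
```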
There were multiple versions of the OpenFlow protocol standard. An early version of OpenFlow, version 1.0, had a single match-action table, where entries in the table could refer to either exact matches on combinations of packet header fields (e.g., MAC address, IP address) or wild-card entries (e.g., an IP address or MAC address prefix). Later versions of OpenFlow (the most prominent version being OpenFlow 1.3) added more complex operations, including chains of tables, but very few vendors ever implemented these standards. Expressing AND and OR conjunctions on these types of matches turned out to be a bit tricky, especially for
programmers, so some technologies emerged to make it easier for programmers to express more complex combinations of conditionals (Foster et al., 2011), and even to incorporate temporal and other aspects into the forwarding decisions (Kim et al., 2015). In the end, adoption of some of these technologies was limited: the OpenFlow protocol gained some traction in large data centers where operators could have complete control over the network. Yet, widespread adoption in wide-area and enterprise networks proved more limited because the operations one could perform in the forwarding table were so limited. Additionally, many switch vendors never fully implemented later versions of the standard, making it difficult to deploy solutions that depended on these standards in practice. Ultimately, however, the OpenFlow protocol left several important legacies: (1) control over a network with a single, centralized software program, permitting coordination across network devices and forwarding elements, and (2) the ability to express such control over the entire network from a single high-level programming language (e.g., Python, Java).
Ultimately, OpenFlow turned out to be a very limiting interface. It was not designed with flexible network control in mind, but rather was a product of convenience: network devices already had TCAM-based lookup tables in their switches, and OpenFlow was, more than anything, a market-driven initiative to open the interface to these tables so that external software programs could write to it. It wasn’t long before networking researchers started to think about whether there was a better way to design the hardware as well, to allow for more flexible types of control in the data plane. The next section discusses the developments in programmable hardware that have ultimately made the switches themselves more programmable.
Meanwhile, programmable software control, initially focused mostly on transit and data center networks, is beginning to find its way into cellular networks as well. For example, the Central Office Re-Architected as a Datacenter (CORD) project aims to develop a 5G network from disaggregated commodity hardware and open-source software components (Peterson et al., 2019).
5.6.3 The SDN Data Plane: Programmable Hardware
Recognizing the limitations of the OpenFlow chipset, a subsequent development in SDN was to make the hardware itself programmable. A number of developments in programmable hardware, in both network interface cards (NICs) and switches, have made it possible to customize everything from packet format to forwarding behavior.
The general architecture is sometimes called a protocol-independent switch architecture. The architecture involves a fixed set of processing pipelines, each with memory for match-action tables, some amount of register memory, and simple operations such as addition (Bosshart et al., 2013). The forwarding model is often referred to as RMT (Reconfigurable Match Tables), a pipeline architecture that was inspired by RISC architectures. Each stage of the processing pipeline can read information from the packet headers and make modifications to the values in the headers.
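As a rough illustration of the idea (not the actual RMT hardware model or a real data-plane language), a pipeline stage can be modeled as a match-action table plus a register and a simple arithmetic action:

```python
# Illustrative sketch of an RMT-style pipeline: fixed stages, each with a
# match-action table, a little register memory, and simple arithmetic.
class Stage:
    def __init__(self, table):
        self.table = table       # (field, value) -> action function
        self.register = 0        # per-stage register memory

    def process(self, headers):
        for (field, value), action in self.table.items():
            if headers.get(field) == value:
                action(self, headers)
        return headers

def count_and_mark(stage, headers):
    stage.register += 1          # e.g., count matching packets (addition)
    headers["mark"] = stage.register

pipeline = [Stage({("proto", "tcp"): count_and_mark})]

headers = {"proto": "tcp"}
for stage in pipeline:
    headers = stage.process(headers)
print(headers, pipeline[0].register)   # {'proto': 'tcp', 'mark': 1} 1
```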
5.1.2 Services Provided to the Transport Layer
The network layer provides services to the transport layer at the network layer/transport layer interface. An important question is precisely what kind of services the network layer provides to the transport layer. The services need to be carefully designed with the following goals in mind:
1. The services should be independent of the router technology.
2. The transport layer should be shielded from the number, type, and topology of the routers present.
3. The network addresses made available to the transport layer should use a uniform numbering plan, even across LANs and WANs.
Given these goals, the designers of the network layer have a lot of freedom in writing detailed specifications of the services to be offered to the transport layer. This freedom often degenerates into a raging battle between two warring factions. The discussion centers on whether the network layer should provide connection-oriented service or connectionless service.
One camp (represented by the Internet community) argues that the routers’ job is moving packets around and nothing else. In this view (based on 40 years of experience with a real computer network), the network is inherently unreliable, no matter how it is designed. Therefore, the hosts should accept this fact and do error control (i.e., error detection and correction) and flow control themselves.
This viewpoint leads to the conclusion that the network service should be connectionless, with primitives SEND PACKET and RECEIVE PACKET and little else. In particular, no packet ordering or flow control should be done, because the hosts are going to do that anyway and there is usually little to be gained by doing it twice. This reasoning is an example of the end-to-end argument, a design principle that has been very influential in shaping the Internet (Saltzer et al., 1984). Furthermore, each packet must carry the full destination address, because each packet sent is carried independently of its predecessors, if any.
The other camp (represented by the telephone companies) argues that the network should provide a reliable, connection-oriented service. They claim that 100 years of successful experience with the worldwide telephone system is an excellent guide. In this view, quality of service is the dominant factor, and without connections in the network, quality of service is very difficult to achieve, especially for real-time traffic such as voice and video.
Even after several decades, this controversy is still very much alive. Early, widely used data networks, such as X.25 in the 1970s and its successor Frame Relay in the 1980s, were connection-oriented. However, since the days of the ARPANET and the early Internet, connectionless network layers have grown tremendously in popularity. The IP protocol is now an ever-present symbol of success. It was undeterred by a connection-oriented technology called ATM that was
developed to overthrow it in the 1980s; instead, it is ATM that is now found in niche uses and IP that is taking over telephone networks. Under the covers, however, the Internet is evolving connection-oriented features as quality of service becomes more important. Two examples of connection-oriented technologies are multiprotocol label switching, which we will describe in this chapter, and VLANs, which we saw in Chap. 4. Both technologies are widely used.
5.1.3 Implementation of Connectionless Service
Having looked at the two classes of service the network layer can provide to its users, it is time to see how this layer works inside. Two different organizations are possible, depending on the type of service offered. If connectionless service is offered, packets are injected into the network individually and routed independently of each other. No advance setup is needed. In this context, the packets are frequently called datagrams (in analogy with telegrams) and the network is called a datagram network. If connection-oriented service is used, a path from the source router all the way to the destination router must be established before any data packets can be sent. This connection is called a VC (Virtual Circuit), in analogy with the physical circuits set up by the (old) telephone system, and the network is called a virtual-circuit network. In this section, we will examine datagram networks; in the next one, we will examine virtual-circuit networks.
Let us now see how a datagram network works. Suppose that the process P1 in Fig. 5-2 has a long message for P2. It hands the message to the transport layer, with instructions to deliver it to process P2 on host H2. The transport layer code runs on H1, typically within the operating system. It prepends a transport header to the front of the message and hands the result to the network layer, probably just another procedure within the operating system.
Let us assume for this example that the message is four times longer than the maximum packet size, so the network layer has to break it into four packets, 1, 2, 3, and 4, and send each of them in turn to router A using some point-to-point protocol, for example, PPP. At this point the ISP takes over. Every router has an internal table telling it where to send packets for each of the possible destinations. Each table entry is a pair consisting of a destination and the outgoing line to use for that destination. Only directly connected lines can be used. For example, in Fig. 5-2, A has only two outgoing lines (to B and to C), so every incoming packet must be sent to one of these routers, even if the ultimate destination is some other router. A’s initial routing table is shown in the figure under the label ‘‘initially.’’
At A, packets 1, 2, and 3 are stored briefly, having arrived on the incoming link and had their checksums verified. Then each packet is forwarded according to A’s table, onto the outgoing link to C within a new frame. Packet 1 is then forwarded to E and then to F. When it gets to F, it is sent within a frame over the LAN to H2. Packets 2 and 3 follow the same route.
Dest.   A's table      A's table    C's table   E's table
        (initially)    (later)
A       –              –            A           C
B       B              B            A           D
C       C              C            –           C
D       B              B            E           D
E       C              B            E           –
F       C              B            E           F

Figure 5-2. Routing within a datagram network.
However, something different happens to packet 4. When it gets to A it is sent to router B, even though it is also destined for F. For some reason, A decided to send packet 4 via a different route than that of the first three packets. Perhaps it has learned of a traffic jam somewhere along the ACE path and updated its routing table, as shown under the label ‘‘later.’’ The algorithm that manages the tables and makes the routing decisions is called the routing algorithm. Routing algorithms are one of the main topics we will study in this chapter. There are several different kinds of them, as we will see.
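A sketch of A's table from Fig. 5-2 shows how little machinery is involved; the dict and the function names below are illustrative:

```python
# Illustrative sketch of the per-router table of Fig. 5-2: each entry maps
# a destination to the outgoing line, and forwarding is a simple lookup.
a_table = {"A": None, "B": "B", "C": "C", "D": "B", "E": "C", "F": "C"}

def forward(table, dst):
    line = table[dst]
    return "deliver locally" if line is None else f"send on line to {line}"

print(forward(a_table, "F"))            # initially: send on line to C
a_table.update({"E": "B", "F": "B"})    # 'later', after learning of a traffic jam
print(forward(a_table, "F"))            # now: send on line to B
```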
IP, which is the basis for the entire Internet, is the dominant example of a connectionless network service. Each packet carries a destination IP address that routers use to individually forward each packet. The addresses are 32 bits in IPv4 packets and 128 bits in IPv6 packets. We will describe IP and these two versions in much detail later in this chapter.
5.1.4 Implementation of Connection-Oriented Service
For connection-oriented service, we need to have a virtual-circuit network. Let us see how that works. The idea behind virtual circuits is to avoid having to choose a new route for every packet sent, as in Fig. 5-2. Instead, when a connection is established, a route from the source machine to the destination machine is chosen as part of the connection setup and stored in tables inside the routers. That route is used for all traffic flowing over the connection, exactly the same way
that the telephone system works. When the connection is released, the virtual circuit is also terminated. With connection-oriented service, each packet carries an identifier telling which virtual circuit it belongs to.
As an example, consider the situation illustrated in Fig. 5-3. Here, host H1 has established connection 1 with host H2. This connection is remembered as the first entry in each of the routing tables. The first line of A’s table says that if a packet bearing connection identifier 1 comes in from H1, it is to be sent to router C and given connection identifier 1. Similarly, the first entry at C routes the packet to E, also with connection identifier 1.
        A's table         C's table         E's table
        In      Out       In      Out       In      Out
        H1 1    C 1       A 1     E 1       C 1     F 1
        H3 1    C 2       A 2     E 2       C 2     F 2

Figure 5-3. Routing within a virtual-circuit network.
Now let us consider what happens if H3 also wants to establish a connection to H2. It chooses connection identifier 1 (because it is initiating the connection and this is its only connection) and tells the network to establish the virtual circuit. This leads to the second row in the tables. Please note that we have a conflict here because although A can easily distinguish connection 1 packets from H1 from connection 1 packets from H3, C cannot do this. For this reason, A assigns a different connection identifier to the outgoing traffic for the second connection. Avoiding conflicts of this kind is why routers need the ability to replace connection identifiers in outgoing packets.
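In code, the label-swapping tables of Fig. 5-3 are just mappings from (incoming line, incoming identifier) to (outgoing line, outgoing identifier); the sketch below is illustrative:

```python
# Illustrative sketch of router A's virtual-circuit table from Fig. 5-3.
a_table = {("H1", 1): ("C", 1),
           ("H3", 1): ("C", 2)}  # A renumbers H3's connection to avoid a clash

def switch(table, in_line, in_vc):
    """Forward on the outgoing line, rewriting the connection identifier."""
    return table[(in_line, in_vc)]

print(switch(a_table, "H3", 1))   # -> ('C', 2): same next hop, new identifier
```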
An example of a connection-oriented network service is MPLS (MultiProtocol Label Switching). It is used within ISP networks in the Internet, with IP packets wrapped in an MPLS header having a 20-bit connection identifier or label. MPLS
is often hidden from customers, with the ISP establishing long-term connections for large amounts of traffic, but it is increasingly being used to help when quality of service is important, as well as for other ISP traffic management tasks. We will have more to say about MPLS later in this chapter.
5.1.5 Comparison of Virtual-Circuit and Datagram Networks
Both virtual circuits and datagrams have their supporters and their detractors. We will now attempt to summarize both sets of arguments. The major issues are listed in Fig. 5-4, although purists could probably find a counterexample for everything in the figure.
Issue                      Datagram network                      Virtual-circuit network
Circuit setup              Not needed                            Required
Addressing                 Each packet contains the full         Each packet contains a
                           source and destination address        short VC number
State information          Routers do not hold state             Each VC requires router
                           information about connections         table space per connection
Routing                    Each packet is routed                 Route chosen when VC is set
                           independently                         up; all packets follow it
Effect of router failures  None, except for packets              All VCs that passed through
                           lost during the crash                 the failed router are terminated
Quality of service         Difficult                             Easy if enough resources can be
                                                                 allocated in advance for each VC
Congestion control         Difficult                             Easy if enough resources can be
                                                                 allocated in advance for each VC

Figure 5-4. Comparison of datagram and virtual-circuit networks.
Inside the network, several trade-offs exist between virtual circuits and datagrams. One trade-off is setup time versus address parsing time. Using virtual circuits requires a setup phase, which takes time and consumes resources. However, once this price is paid, figuring out what to do with a data packet in a virtual-circuit network is easy: the router just uses the circuit number to index into a table to find out where the packet goes. In a datagram network, no setup is needed but a more complicated lookup procedure is required to locate the entry for the destination.
A related issue is that the destination addresses used in datagram networks are longer than circuit numbers used in virtual-circuit networks because they have a global meaning. If the packets tend to be fairly short, including a full destination
address in every packet may represent a significant amount of overhead, and hence a waste of bandwidth.
Yet another issue is the amount of table space required in router memory. A datagram network needs to have an entry for every possible destination, whereas a virtual-circuit network just needs an entry for each virtual circuit. However, this advantage is somewhat illusory since connection setup packets have to be routed too, and they use destination addresses, the same as datagrams do.
Virtual circuits have some advantages in guaranteeing quality of service and avoiding congestion within the network because resources (e.g., buffers, bandwidth, and CPU cycles) can be reserved in advance, when the connection is established. Once the packets start arriving, the necessary bandwidth and router capacity will be there. With a datagram network, congestion avoidance is more difficult.
For transaction processing systems (e.g., stores calling up to verify credit card purchases), the overhead required to set up and clear a virtual circuit may easily dwarf the use of the circuit. If the majority of the traffic is expected to be of this kind, the use of virtual circuits inside the network makes little sense. On the other hand, for long-running uses such as VPN traffic between two corporate offices, permanent virtual circuits (that are set up manually and last for months or years) may be useful.
Virtual circuits also have a vulnerability problem. If a router crashes and loses its memory, even if it comes back up a second later, all the virtual circuits passing through it will have to be aborted. In contrast, if a datagram router goes down, only those users whose packets were queued in the router at the time need suffer (and probably not even then since the sender is likely to retransmit them shortly). The loss of a communication line is fatal to virtual circuits using it, but can easily be compensated for if datagrams are used. Datagrams also allow the routers to balance the traffic throughout the network, since routes can be changed partway through a long sequence of packet transmissions.
5.2 ROUTING ALGORITHMS IN A SINGLE NETWORK
The main function of the network layer is routing packets from the source machine to the destination machine. In this section, we discuss how the network layer achieves this function within a single administrative domain or autonomous system. In most networks, packets will require multiple hops to make the journey. The only notable exception is for broadcast networks, but even here routing is an issue if the source and destination are not on the same network segment. The algorithms that choose the routes and the data structures that they use are a major area of network layer design.
The routing algorithm is that part of the network layer software responsible for deciding which output line an incoming packet should be transmitted on. If the network uses datagrams internally, the routing decision must be made anew for
every arriving data packet since the best route may have changed since last time. If the network uses virtual circuits internally, routing decisions are made only when a new virtual circuit is being set up. Thereafter, data packets just follow the already established route. The latter case is sometimes called session routing because a route remains in force for an entire session (e.g., while logged in over a VPN).
It is sometimes useful to make a distinction between routing, which is making the decision which routes to use, and forwarding, which is what happens when a packet arrives. One can think of a router as having two processes inside it. One of them handles each packet as it arrives, looking up the outgoing line to use for it in the routing tables. This process is forwarding. The other process is responsible for filling in and updating the routing tables. That is where the routing algorithm comes into play.
Regardless of whether routes are chosen independently for each packet sent or only when new connections are established, certain properties are desirable in a routing algorithm: correctness, simplicity, robustness, stability, fairness, and efficiency. Correctness and simplicity hardly require comment, but the need for robustness may be less obvious at first. Once a major network comes on the air, it may be expected to run continuously for years without system-wide failures. During that period there will be hardware and software failures of all kinds. Hosts, routers, and lines will fail repeatedly, and the topology will change many times. The routing algorithm should be able to cope with changes in the topology and traffic without requiring all jobs in all hosts to be aborted. Imagine the havoc if the network needed to be rebooted every time some router crashed.
Stability is also an important goal for the routing algorithm. There exist routing algorithms that never converge to a fixed set of paths, no matter how long they run. A stable algorithm reaches equilibrium and stays there. It should converge quickly too, since communication may be disrupted until the routing algorithm has reached equilibrium.
Fairness and efficiency may sound obvious—surely no reasonable person would oppose them—but as it turns out, they are often contradictory goals. As a simple example of this conflict, look at Fig. 5-5. Suppose that there is enough traffic between A and A', between B and B', and between C and C' to saturate the horizontal links. To maximize the total flow, the X to X' traffic should be shut off altogether. Unfortunately, X and X' may not see it that way. Evidently, some compromise between global efficiency and fairness to individual connections is needed.
Before we can even attempt to find trade-offs between fairness and efficiency, we must decide what it is we seek to optimize. Minimizing the mean packet delay is an obvious candidate to send traffic through the network effectively, but so is maximizing total network throughput. Furthermore, these two goals are also in conflict, since operating any queueing system near capacity implies a long queueing delay. As a compromise, many networks attempt to minimize the distance a packet must travel, or alternatively, simply reduce the number of hops a packet must make. Either choice tends to improve the delay and also reduce the amount of bandwidth consumed per packet, which generally tends to improve the overall network throughput as well.

Figure 5-5. Network with a conflict between fairness and efficiency.
Routing algorithms can be grouped into two major classes: nonadaptive and adaptive. Nonadaptive algorithms do not base their routing decisions on any measurements or estimates of the current topology and traffic. Instead, the choice of the route to use to get from I to J (for all I and J) is computed in advance, offline, and downloaded to the routers when the network is booted. This procedure is sometimes called static routing. Because it does not respond to failures, static routing is mostly useful for situations in which the routing choice is clear. For example, router F in Fig. 5-3 should send packets headed into the network to router E regardless of the ultimate destination.
Adaptive algorithms, in contrast, change their routing decisions to reflect changes in the topology, and sometimes changes in the traffic as well. These dynamic routing algorithms differ in where they get their information (e.g., locally, from adjacent routers, or from all routers), when they change the routes (e.g., when the topology changes, or every ΔT seconds as the load changes), and what metric is used for optimization (e.g., distance, number of hops, or estimated transit time).
In the following sections, we will discuss a variety of routing algorithms. The algorithms cover delivery models besides sending a packet from a source to a destination. Sometimes the goal is to send the packet to multiple, all, or one of a set of destinations. All the routing algorithms we describe here make decisions based on the topology; we defer the possibility of decisions based on the traffic to Sec. 5.3.
5.2.1 The Optimality Principle
Before we get into specific algorithms, it may be helpful to note that one can make a general statement about optimal routes without regard to network topology or traffic. This statement is known as the optimality principle (Bellman, 1957).
It states that if router J is on the optimal path from router I to router K, then the optimal path from J to K also falls along the same route. To see this, call the part of the route from I to J r1 and the rest of the route r2. If a route better than r2 existed from J to K, it could be concatenated with r1 to improve the route from I to K, contradicting our statement that r1r2 is optimal. As a direct consequence of the optimality principle, we can see that the set of optimal routes from all sources to a given destination form a tree rooted at the destination. Such a tree is called a sink tree and is illustrated in Fig. 5-6(b) for the network of Fig. 5-6(a). Here, the distance metric is the number of hops. The goal of all routing algorithms is to discover and use the sink trees for all routers.
Figure 5-6. (a) A network. (b) A sink tree for router B.
Note that a sink tree is not necessarily unique; other trees with the same path lengths may exist. If we allow all of the possible paths to be chosen, the tree becomes a more general structure called a DAG (Directed Acyclic Graph). DAGs have no loops. We will use sink trees as a convenient shorthand for both cases. Both cases also depend on the technical assumption that the paths do not interfere with each other so, for example, a traffic jam on one path will not cause another path to divert.
Since a sink tree is indeed a tree, it does not contain any loops, so each packet will be delivered within a finite and bounded number of hops. In practice, life is not quite this easy. Links and routers can go down and come back up during operation, so different routers may have different ideas about the current topology. Also, we have quietly finessed the issue of whether each router has to individually acquire the information on which to base its sink tree computation or whether this information is collected by some other means. We will come back to these issues shortly. Nevertheless, the optimality principle and the sink tree provide a benchmark against which other routing algorithms can be measured.
5.2.2 Shortest Path Algorithm
Let us begin our study of routing algorithms with a simple technique for computing optimal paths given a complete picture of the network. These paths are the ones that we want a distributed routing algorithm to find, even though not all routers may know all of the details of the network.
The idea is to build a graph of the network, with each node of the graph representing a router and each edge of the graph representing a communication line, or link. To choose a route between a given pair of routers, the algorithm just finds the shortest path between them on the graph.
The concept of a shortest path deserves some explanation. One way of measuring path length is the number of hops. Using this metric, the paths ABC and ABE in Fig. 5-7 are equally long. Another metric is the geographic distance in kilometers, in which case ABC is clearly much longer than ABE (assuming the figure is drawn to scale).
Figure 5-7. The first six steps used in computing the shortest path from A to D. The arrows indicate the working node.
However, many other metrics besides hops and physical distance are also possible. For example, each edge could be labeled with the mean delay of a standard test packet, as measured by hourly runs. With this graph labeling, the shortest path is the fastest path rather than the path with the fewest edges or kilometers. In the general case, the labels on the edges could be computed as a function of the distance, bandwidth, average traffic, communication cost, measured delay, and other factors. By changing the weighting function, the algorithm would then compute the ‘‘shortest’’ path measured according to any one of a number of criteria or to a combination of criteria.
Several algorithms for computing the shortest path between two nodes of a graph are known. This one is due to Dijkstra (1959) and finds the shortest paths between a source and all destinations in the network. Each node is labeled (in parentheses) with its distance from the source node along the best known path. The distances must be non-negative, as they will be if they are based on real quantities like bandwidth and delay. Initially, no paths are known, so all nodes are labeled with infinity. As the algorithm proceeds and paths are found, the labels may change, reflecting better paths. A label may be either tentative or permanent. Initially, all labels are tentative. When it is discovered that a label represents the shortest possible path from the source to that node, it is made permanent and never changed thereafter.
To illustrate how the labeling algorithm works, look at the weighted, undirected graph of Fig. 5-7(a), where the weights represent, for example, distance. We want to find the shortest path from A to D. We start out by marking node A as permanent, indicated by a filled-in circle. Then we examine, in turn, each of the nodes adjacent to A (the working node), relabeling each one with the distance to A. Whenever a node is relabeled, we also label it with the node from which the probe was made so that we can reconstruct the final path later. If the network had more than one shortest path from A to D and we wanted to find all of them, we would need to remember all of the probe nodes that could reach a node with the same distance.
Having examined each of the nodes adjacent to A, we examine all the tentatively labeled nodes in the whole graph and make the one with the smallest label permanent, as shown in Fig. 5-7(b). This one becomes the new working node.
We now start at B and examine all nodes adjacent to it. If the sum of the label on B and the distance from B to the node being considered is less than the label on that node, we have a shorter path, so the node is relabeled.
After all the nodes adjacent to the working node have been inspected and the tentative labels changed if possible, the entire graph is searched for the tentatively labeled node with the smallest value. This node is made permanent and becomes the working node for the next round. Figure 5-7 shows the first six steps of the algorithm.
To see why the algorithm works, look at Fig. 5-7(c). At this point we have just made E permanent. Suppose that there were a shorter path than ABE, say AXYZE (for some X and Y). There are two possibilities: either node Z has already been made permanent, or it has not been. If it has, then E has already been probed (on the round following the one when Z was made permanent), so the AXYZE path has not escaped our attention and thus cannot be a shorter path.
Now consider the case where Z is still tentatively labeled. If the label at Z is greater than or equal to that at E, then AXYZE cannot be a shorter path than ABE. If the label is less than that of E, then Z and not E will become permanent first, allowing E to be probed from Z.
This algorithm is given in C in Fig. 5-8. The global variables n and dist describe the graph and are initialized before shortest_path is called. The only difference between the program and the algorithm described above is that in Fig. 5-8, we compute the shortest path starting at the terminal node, t, rather than at the source node, s.
Since the shortest paths from t to s in an undirected graph are the same as the shortest paths from s to t, it does not matter at which end we begin. The reason for searching backward is that each node is labeled with its predecessor rather than its successor. When the final path is copied into the output variable, path, the path is thus reversed. The two reversal effects cancel, and the answer is produced in the correct order.
5.2.3 Flooding
When a routing algorithm is implemented, each router must make decisions based on local knowledge, not the complete picture of the network. A simple local technique is flooding, in which every incoming packet is sent out on every outgoing line except the one it arrived on.
Flooding obviously generates vast numbers of duplicate packets, in fact, an infinite number unless some measures are taken to damp the process. One such measure is to have a hop counter contained in the header of each packet that is decremented at each hop, with the packet being discarded when the counter reaches zero. Ideally, the hop counter should be initialized to the length of the path from source to destination. If the sender does not know how long the path is, it can initialize the counter to the worst case, namely, the full diameter of the network.
Flooding with a hop count can produce an exponential number of duplicate packets as the hop count grows and routers duplicate packets they have seen before. A better technique for damming the flood is to have routers keep track of which packets have been flooded, to avoid sending them out a second time. One way to achieve this goal is to have the source router put a sequence number in each packet it receives from its hosts. Each router then needs a list per source router telling which sequence numbers originating at that source have already been seen. If an incoming packet is on the list, it is not flooded.
To prevent the list from growing without bound, each list should be augmented by a counter, k, meaning that all sequence numbers through k have been seen. When a packet comes in, it is easy to check if the packet has already been flooded (by comparing its sequence number to k); if so, it is discarded. Furthermore, the full list below k is not needed, since k effectively summarizes it.
#define MAX_NODES 1024            /* maximum number of nodes */
#define INFINITY 1000000000       /* a number larger than every maximum path */

int n, dist[MAX_NODES][MAX_NODES];/* dist[i][j] is the distance from i to j */

void shortest_path(int s, int t, int path[])
{
    struct state {                /* the path being worked on */
        int predecessor;          /* previous node */
        int length;               /* length from source to this node */
        enum {permanent, tentative} label; /* label state */
    } state[MAX_NODES];
    int i, k, min;
    struct state *p;

    for (p = &state[0]; p < &state[n]; p++) { /* initialize state */
        p->predecessor = -1;
        p->length = INFINITY;
        p->label = tentative;
    }
    state[t].length = 0; state[t].label = permanent;
    k = t;                        /* k is the initial working node */
    do {                          /* is there a better path from k? */
        for (i = 0; i < n; i++)   /* this graph has n nodes */
            if (dist[k][i] != 0 && state[i].label == tentative) {
                if (state[k].length + dist[k][i] < state[i].length) {
                    state[i].predecessor = k;
                    state[i].length = state[k].length + dist[k][i];
                }
            }

        /* Find the tentatively labeled node with the smallest label. */
        k = 0; min = INFINITY;
        for (i = 0; i < n; i++)
            if (state[i].label == tentative && state[i].length < min) {
                min = state[i].length;
                k = i;
            }
        state[k].label = permanent;
    } while (k != s);

    /* Copy the path into the output array. */
    i = 0; k = s;
    do { path[i++] = k; k = state[k].predecessor; } while (k >= 0);
}
Figure 5-8. Dijkstra’s algorithm to compute the shortest path through a graph.
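As a rough illustration of how the routine might be called, consider the following hypothetical test harness. The 4-node graph is invented for this example, and the code assumes it is compiled in the same file as Fig. 5-8 so that n, dist, and shortest_path are visible:

#include <stdio.h>

int main(void)
{
    int path[MAX_NODES], i;

    n = 4;                             /* nodes 0..3 */
    dist[0][1] = dist[1][0] = 2;       /* 0 -- 1, weight 2 */
    dist[1][2] = dist[2][1] = 3;       /* 1 -- 2, weight 3 */
    dist[2][3] = dist[3][2] = 1;       /* 2 -- 3, weight 1 */
    dist[0][3] = dist[3][0] = 7;       /* 0 -- 3, weight 7 */

    shortest_path(0, 3, path);         /* shortest path from node 0 to node 3 */
    for (i = 0; path[i] != 3; i++)
        printf("%d -> ", path[i]);
    printf("3\n");                     /* prints 0 -> 1 -> 2 -> 3 (cost 6 beats the direct 7) */
    return 0;
}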
Flooding is not practical for sending most packets, but it does have some important uses. First, it ensures that a packet is delivered to every node in the network. This may be wasteful if there is a single destination that needs the packet, but it is effective for broadcasting information. In wireless networks, all messages transmitted by a station can be received by all other stations within its radio range, which is, in fact, flooding, and some algorithms utilize this property.
Second, flooding is tremendously robust. Even if large numbers of routers are blown to smithereens (e.g., in a military network located in a war zone), flooding will find a path if one exists, to get a packet to its destination. Flooding also requires little in the way of setup. The routers only need to know their neighbors. This means that flooding can be used as a building block for other routing algorithms that are more efficient but need more in the way of setup. Flooding can also be used as a metric against which other routing algorithms can be compared. Flooding always chooses the shortest path because it chooses every possible path in parallel. Consequently, no other algorithm can produce a shorter delay (if we ignore the overhead generated by the flooding process itself).
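A minimal sketch of the per-source duplicate suppression described earlier (a seen-list summarized by the counter k) might look as follows. The names are invented, and for simplicity only a fixed window of sequence numbers above k is tracked:

#define MAX_SOURCES 1024
#define WINDOW 64                      /* sequence numbers tracked above k */

struct flood_state {
    unsigned int k;                    /* all sequence numbers <= k have been seen */
    unsigned char seen[WINDOW];        /* bitmap for numbers k+1 .. k+WINDOW */
};

struct flood_state flood[MAX_SOURCES];

/* Returns 1 if the packet is new and should be flooded, 0 if it is a duplicate. */
int should_flood(int source, unsigned int seq)
{
    struct flood_state *f = &flood[source];
    int i;

    if (seq <= f->k)
        return 0;                      /* at or below k: definitely seen before */
    if (seq - f->k > WINDOW)
        return 1;                      /* beyond the window; flood (a simplification) */
    if (f->seen[seq - f->k - 1])
        return 0;                      /* already marked within the window */
    f->seen[seq - f->k - 1] = 1;
    while (f->seen[0]) {               /* slide k forward over a contiguous run */
        f->k++;
        for (i = 0; i < WINDOW - 1; i++)
            f->seen[i] = f->seen[i + 1];
        f->seen[WINDOW - 1] = 0;
    }
    return 1;
}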
5.2.4 Distance Vector Routing
Computer networks generally use dynamic routing algorithms that are more complex than flooding, but more efficient because they find shortest paths for the current topology. Two dynamic algorithms in particular, distance vector routing and link state routing, are the most popular. In this section, we will look at the former algorithm. In the following section, we will study the latter algorithm.
A distance vector routing algorithm operates by having each router maintain a table (i.e., a vector) giving the best known distance to each destination and which link to use to get there. These tables are updated by exchanging information with the neighbors. Eventually, every router knows the best link to reach each destination.
The distance vector routing algorithm is sometimes called by other names, most commonly the distributed Bellman-Ford routing algorithm, after the researchers who developed it (Bellman, 1957; and Ford and Fulkerson, 1962). It was the original ARPANET routing algorithm and was also used in the Internet under the name RIP.
In distance vector routing, each router maintains a routing table indexed by, and containing one entry for, each router in the network. This entry has two parts: the preferred outgoing line to use for that destination, and an estimate of the distance to that destination. The distance might be measured as the number of hops or using another metric, as we discussed for computing shortest paths.
The router is assumed to know the ‘‘distance’’ to each of its neighbors. If the metric is hops, the distance is just one hop. If the metric is propagation delay, the router can measure it directly with special ECHO packets that the receiver just timestamps and sends back as fast as it can.
As an example, assume that delay is used as a metric and that the router knows the delay to each of its neighbors. Once every T msec, each router sends to each neighbor a list of its estimated delays to each destination. It also receives a similar list from each neighbor. Imagine that one of these tables has just come in from neighbor X, with Xi being X’s estimate of how long it takes to get to router i. If the router knows that the delay to X is m msec, it also knows that it can reach router i via X in Xi + m msec. By performing this calculation for each neighbor, a router can find out which estimate seems the best and use that estimate and the corresponding link in its new routing table. Note that the old routing table is not used in the calculation.
This updating process is illustrated in Fig. 5-9. Part (a) shows a network. The first four columns of part (b) show the delay vectors received from the neighbors of router J. A claims to have a 12-msec delay to B, a 25-msec delay to C, a 40-msec delay to D, etc. Suppose that J has measured or estimated its delay to its neighbors, A, I, H, and K, as 8, 10, 12, and 6 msec, respectively.
Figure 5-9. (a) A network. (b) Input from A, I, H, K, and the new routing table for J.
Consider how J computes its new route to router G. It knows that it can get to A in 8 msec, and furthermore A claims to be able to get to G in 18 msec, so J knows it can count on a delay of 26 msec to G if it forwards packets bound for G to A. Similarly, it computes the delay to G via I, H, and K as 41 (31 + 10), 18 (6 + 12), and 37 (31 + 6) msec, respectively. The best of these values is 18, so it makes an entry in its routing table that the delay to G is 18 msec and that the route to use is via H. The same calculation is performed for all the other destinations, with the new routing table shown in the last column of the figure.
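J’s computation for destination G translates directly into code. The following sketch uses only the numbers quoted above; the array names are invented for illustration:

#include <stdio.h>

int main(void)
{
    /* J's measured delays to its neighbors A, I, H, and K (msec). */
    int delay_to_neighbor[4] = { 8, 10, 12, 6 };
    /* Each neighbor's claimed delay to destination G (msec), from their vectors. */
    int neighbor_to_G[4] = { 18, 31, 6, 31 };
    const char *name[4] = { "A", "I", "H", "K" };
    int i, d, best = 0, best_delay = 1000000000;

    for (i = 0; i < 4; i++) {
        d = delay_to_neighbor[i] + neighbor_to_G[i];   /* delay to G via neighbor i */
        if (d < best_delay) {
            best_delay = d;
            best = i;
        }
    }
    printf("best route to G: via %s, %d msec\n", name[best], best_delay);
    /* Prints: best route to G: via H, 18 msec */
    return 0;
}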
The Count-to-Infinity Problem
The settling of routes to best paths across the network is called convergence. Distance vector routing is useful as a simple technique by which routers can collectively compute shortest paths, but it has a serious drawback in practice: although it converges to the correct answer, it may do so slowly. In particular, it reacts rapidly to good news, but leisurely to bad news. Consider a router whose best route to destination X is long. If, on the next exchange, neighbor A suddenly reports a short delay to X, the router just switches over to using the line to A to send traffic to X. In one vector exchange, the good news is processed.
To see how fast good news propagates, consider the five-node (linear) network of Fig. 5-10, where the delay metric is the number of hops. Suppose A is down initially and all the other routers know this. In other words, they have all recorded the delay to A as infinity.
Figure 5-10. The count-to-infinity problem.
When A comes up, the other routers learn about it via the vector exchanges. For simplicity, we will assume that there is a gigantic gong somewhere that is struck periodically to initiate a vector exchange at all routers simultaneously. At the time of the first exchange, B learns that its left-hand neighbor has zero delay to A. B now makes an entry in its routing table indicating that A is one hop away to the left. All the other routers still think that A is down. At this point, the routing table entries for A are as shown in the second row of Fig. 5-10(a). On the next exchange, C learns that B has a path of length 1 to A, so it updates its routing table to indicate a path of length 2, but D and E do not hear the good news until later. Clearly, the good news is spreading at the rate of one hop per exchange. In a network whose longest path is of length N hops, within N exchanges everyone will know about newly revived links and routers.
Now let us consider the situation of Fig. 5-10(b), in which all the links and routers are initially up. Routers B, C, D, and E have distances to A of 1, 2, 3, and 4 hops, respectively. Suddenly, either A goes down or the link between A and B is cut (which is effectively the same thing from B’s point of view).
At the first packet exchange, B does not hear anything from A. Fortunately, C says ‘‘Do not worry; I have a path to A of length 2.’’ Little does B suspect that C’s path runs through B itself. For all B knows, C might have 10 links all with separate paths to A of length 2. As a result, B thinks it can reach A via C, with a path length of 3. D and E do not update their entries for A on the first exchange.
On the second exchange, C notices that each of its neighbors claims to have a path to A of length 3. It picks one of them at random and makes its new distance to A 4, as shown in the third row of Fig. 5-10(b). Subsequent exchanges produce the history shown in the rest of Fig. 5-10(b).
From this figure, it should be clear why bad news travels slowly: no router ever has a value more than one higher than the minimum of all its neighbors. Gradually, all routers work their way up to infinity, but the number of exchanges required depends on the numerical value used for infinity. For this reason, it is wise to set infinity to the longest path plus 1.
Not entirely surprisingly, this problem is known as the count-to-infinity problem. There have been many attempts to solve it, for example, preventing routers from advertising their best paths back to the neighbors from which they heard them (the split horizon and poisoned reverse rules discussed in RFC 1058). However, none of these heuristics work well in practice despite the colorful names. The core of the problem is that when X tells Y that it has a path somewhere, Y has no way of knowing whether it itself is on the path.
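One of those heuristics, split horizon with poisoned reverse, is easy to state in code: when building the vector to send to neighbor n, any route whose next hop is n is advertised as unreachable. A minimal sketch with invented names (RIP’s value of 16 for infinity is used here):

#define DV_INFINITY 16                 /* RIP treats 16 as "infinity" */

struct route { int next_hop; int metric; };

/* Build the distance vector to be sent to neighbor n. */
void build_vector_for(int n, struct route table[], int num_dest, int vector[])
{
    int d;

    for (d = 0; d < num_dest; d++) {
        if (table[d].next_hop == n)
            vector[d] = DV_INFINITY;   /* poisoned reverse: do not route via me */
        else
            vector[d] = table[d].metric;
    }
}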
5.2.5 Link State Routing
Distance vector routing was used in the ARPANET until 1979, when it was replaced by link state routing. The primary problem that caused its demise was that the algorithm often took too long to converge after the network topology changed (due to the count-to-infinity problem). Consequently, it was replaced by an entirely new algorithm, now called link state routing. Variants of link state routing called IS-IS and OSPF are the routing algorithms that are most widely used inside large networks and the Internet today.
The idea behind link state routing is fairly simple and can be stated as five parts. Each router must do the following things to make it work:
1. Discover its neighbors and learn their network addresses.
2. Set the distance or cost metric to each of its neighbors.
3. Construct a packet telling all it has just learned.
4. Send this packet to and receive packets from all other routers.
5. Compute the shortest path to every other router.
In effect, the complete topology is distributed to every router. Then Dijkstra’s algorithm can be run at each router to find the shortest path to every other router. Below we will consider each of these five steps in more detail.
Learning about the Neighbors
When a router is booted, its first task is to learn who its neighbors are. It accomplishes this goal by sending a special HELLO packet on each point-to-point line. The router on the other end is expected to send back a reply giving its name. These names must be globally unique because when a distant router later hears that three routers are all connected to F, it is essential that it can determine whether all three mean the same F.
When two or more routers are connected by a broadcast link (e.g., a switch, ring, or classic Ethernet), the situation is slightly more complicated. Figure 5-11(a) illustrates a broadcast LAN to which three routers, A, C, and F, are directly connected. Each of these routers is connected to one or more additional routers, as shown.
Figure 5-11. (a) Nine routers and a broadcast LAN. (b) A graph model of (a).
The broadcast LAN provides connectivity between each pair of attached routers. However, modeling the LAN as many point-to-point links increases the size of the topology and leads to wasteful messages. A better way to model the LAN is to consider it as a node itself, as shown in Fig. 5-11(b). Here, we have introduced a new, artificial node, N, to which A, C, and F are connected. One designated router on the LAN is selected to play the role of N in the routing protocol. The fact that it is possible to go from A to C on the LAN is represented by the path ANC here.
Setting Link Costs
The link state routing algorithm requires each link to have a distance or cost metric for finding shortest paths. The cost to reach neighbors can be set automatically, or configured by the network operator. A common choice is to make the cost inversely proportional to the bandwidth of the link. For example, 1-Gbps Ethernet may have a cost of 1 and 100-Mbps Ethernet may have a cost of 10. This makes higher-capacity paths better choices.
If the network is geographically spread out, the delay of the links may be factored into the cost so that paths over shorter links are better choices. The most direct way to determine this delay is to send over the line a special ECHO packet that the other side is required to send back immediately. By measuring the round trip time and dividing it by two, the sending router can get an estimate of the delay.
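Both policies are simple enough to sketch in a few lines of C. The constants and names below are illustrative only; they follow the 1-Gbps-equals-cost-1 convention used in the example above:

#define REFERENCE_BW_MBPS 1000         /* 1-Gbps reference bandwidth */

/* Cost inversely proportional to bandwidth: 1 Gbps -> 1, 100 Mbps -> 10. */
int cost_from_bandwidth(int link_bw_mbps)
{
    int cost = REFERENCE_BW_MBPS / link_bw_mbps;
    return cost > 0 ? cost : 1;        /* clamp so the cost is never below 1 */
}

/* One-way delay estimate from the round-trip time of an ECHO packet. */
int delay_from_echo_rtt(int rtt_usec)
{
    return rtt_usec / 2;
}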
Building Link State Packets
Once the information needed for the exchange has been collected, the next step is for each router to build a packet containing all the data. The packet starts with the identity of the sender, followed by a sequence number and age (to be described later) and a list of neighbors. The cost to each neighbor is also given. An example network is presented in Fig. 5-12(a) with costs shown as labels on the lines. The corresponding link state packets for all six routers are shown in Fig. 5-12(b).
Figure 5-12. (a) A network. (b) The link state packets for this network.
Building the link state packets is easy. The hard part is determining when to build them. One possibility is to build them periodically, at regular intervals. Another possibility is to build them when some specific event occurs, such as a line or neighbor going down or coming back up again or changing its properties.
Distributing the Link State Packets
The trickiest part of the algorithm is distributing the link state packets. All of the routers must get all of the link state packets quickly and reliably. If different routers are using different versions of the topology, the routes they compute can have inconsistencies, such as loops, unreachable machines, and other problems.
First, we will describe the basic distribution algorithm. After that, we will give some refinements. The fundamental idea is to use flooding to distribute the link state packets to all routers. To keep the flood in check, each packet contains a sequence number that is incremented for each new packet sent. Routers keep track of all the (source router, sequence) pairs they see. When a new link state packet comes in, it is checked against the list of packets already seen. If it is new, it is forwarded on all lines except the one it arrived on. If it is a duplicate, it is discarded. If a packet with a sequence number lower than the highest one seen so far ever arrives, it is rejected as being obsolete as the router has more recent data.
This algorithm has a few problems, but they are manageable. First, if the se- quence numbers wrap around, confusion will reign. The solution here is to use a 32-bit sequence number. With one link state packet per second, it would take 137 years to wrap around, so this possibility can be ignored.
Second, if a router ever crashes, it will lose track of its sequence number. If it starts again at 0, the next packet it sends will be rejected as a duplicate. Third, if a sequence number is ever corrupted and 65,540 is received instead of 4 (a 1-bit error), packets 5 through 65,540 will be rejected as obsolete, since the current sequence number will be thought to be 65,540.
The solution to these problems is to include the age of each packet after the sequence number and decrement it once a second. When the age hits zero, the information from that router is discarded. Normally, a new packet comes in, say, every 10 sec, so router information only times out when a router is down (or six consecutive packets have been lost, an unlikely event). The Age field is also decremented by each router during the initial flooding process, to make sure no packet can get lost and live for an indefinite period of time (a packet with age zero is discarded).
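Putting the pieces together, a link state packet and the accept-or-discard rule might be sketched as follows. The structure layout and names are invented for illustration:

#define MAX_ROUTERS 1024
#define MAX_NEIGHBORS 16

struct lsp {
    int source;                        /* identity of the sending router */
    unsigned int seq;                  /* 32-bit sequence number */
    int age;                           /* decremented once per second */
    int num_neighbors;
    struct { int id; int cost; } neighbor[MAX_NEIGHBORS];
};

unsigned int highest_seq[MAX_ROUTERS]; /* highest sequence number seen per source */

/* Returns 1 if the packet is new and should be flooded on all other lines. */
int accept_lsp(const struct lsp *p)
{
    if (p->age == 0)
        return 0;                      /* timed out: discard the information */
    if (p->seq <= highest_seq[p->source])
        return 0;                      /* duplicate or obsolete: reject */
    highest_seq[p->source] = p->seq;   /* record the newest packet */
    return 1;
}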
Some refinements to this algorithm make it more robust. When a link state packet comes in to a router for flooding, it is not queued for transmission immediately. Instead, it is put in a holding area to wait a short while in case more links are coming up or going down. If another link state packet from the same source comes in before the first packet is transmitted, their sequence numbers are compared. If they are equal, the duplicate is discarded. If they are different, the older one is thrown out. To guard against errors on the links, all link state packets are acknowledged.
The data structure used by router B for the network shown in Fig. 5-12(a) is depicted in Fig. 5-13. Each row here corresponds to a recently arrived, but as yet not fully processed, link state packet. The table records where the packet originated, its sequence number and age, and the data. In addition, there are send and acknowledgement flags for each of B’s three links (to A, C, and F, respectively). The send flags mean that the packet must be sent on the indicated link. The acknowledgement flags mean that it must be acknowledged there.
In Fig. 5-13, the link state packet from A arrives directly, so it must be sent to C and F and acknowledged to A, as indicated by the flag bits. Similarly, the packet from F has to be forwarded to A and C and acknowledged to F.
SEC. 5.2 ROUTING ALGORITHMS IN A SINGLE NETWORK 381
                       Send flags    ACK flags
Source   Seq.   Age    A   C   F     A   C   F     Data
A        21     60     0   1   1     1   0   0
F        21     60     1   1   0     0   0   1
E        21     59     0   1   0     1   0   1
C        20     60     1   0   1     0   1   0
D        21     59     1   0   0     0   1   1
Figure 5-13. The packet buffer for router B in Fig. 5-12(a).
However, the situation with the third packet, from E, is different. It arrives twice, once via EAB and once via EFB. Consequently, it has to be sent only to C but must be acknowledged to both A and F, as indicated by the bits.
If a duplicate arrives while the original is still in the buffer, bits have to be changed. For example, if a copy of C’s state arrives from F before the fourth entry in the table has been forwarded, the six bits will be changed to 100011 to indicate that the packet must be acknowledged to F but not sent there.
Computing the New Routes
Once a router has accumulated a full set of link state packets, it can construct the entire network graph because every link is represented. Every link is, in fact, represented twice, once for each direction. The different directions may even have different costs. The shortest-path computations may then find different paths from router A to B than from router B to A.
Now Dijkstra’s algorithm can be run locally to construct the shortest paths to all possible destinations. The results of this algorithm tell the router which link to use to reach each destination. This information is installed in the routing tables, and normal operation is resumed.
Compared to distance vector routing, link state routing requires more memory and computation. For a network with n routers, each of which has k neighbors, the memory required to store the input data is proportional to kn, which is at least as large as a routing table listing all the destinations. Also, the computation time grows faster than kn, even with the most efficient data structures, an issue in large networks. Nevertheless, in many practical situations, link state routing works well because it does not suffer from slow convergence problems.
Link state routing is widely used in actual networks, so a few words about some example protocols are in order. Many ISPs use the IS-IS (Intermediate System-to-Intermediate System) link state protocol (Oran, 1990). It was designed for an early network called DECnet, later adopted by ISO for use with the OSI protocols and then modified to handle other protocols as well, most notably, IP. OSPF (Open Shortest Path First), which will be discussed in Sec. 5.7.6, is the other main link state protocol. It was designed by IETF several years after IS-IS and adopted many of the innovations designed for IS-IS. These innovations include a self-stabilizing method of flooding link state updates, the concept of a designated router on a LAN, and the method of computing and supporting path splitting and multiple metrics. As a consequence, there is very little difference between IS-IS and OSPF. The most important difference is that IS-IS can carry information about multiple network layer protocols at the same time (e.g., IP, IPX, and AppleTalk). OSPF does not have this feature, which gives IS-IS an advantage in large multiprotocol environments.
A general comment on routing algorithms is also in order. Link state, distance vector, and other algorithms rely on processing at all the routers to compute routes. Problems with the hardware or software at even a small number of routers can wreak havoc across the network. For example, if a router claims to have a link it does not have or forgets a link it does have, the network graph will be incorrect. If a router fails to forward packets or corrupts them while forwarding them, the route will not work as expected. Finally, if it runs out of memory or does the routing calculation wrong, bad things will happen. As the network grows into the range of tens or hundreds of thousands of nodes, the probability of some router failing occasionally becomes nonnegligible. The trick is to try to arrange to limit the damage when the inevitable happens. Perlman (1988) discusses these problems and their possible solutions in detail.
5.2.6 Hierarchical Routing within a Network
As networks grow in size, the router routing tables grow proportionally. Not only is router memory consumed by ever-increasing tables, but more CPU time is needed to scan them and more bandwidth is needed to send status reports about them. Additionally, even if every router could store the entire topology, recomputing shortest paths every time the network experienced changes in the topology would be prohibitive; imagine, for example, if a very large network had to compute shortest paths every time a link in the network failed or recovered. At a certain point, the network may grow to a size where it is no longer feasible for every router to have an entry for every other router, so the routing will have to be done hierarchically, through the use of routing areas.
When hierarchical routing is used, the routers are divided into what we will call regions or areas. Each router knows all the details about how to route packets to destinations within its own region but knows nothing about the internal structure of other regions. When different networks are interconnected, it is natural to regard each one as a separate region to free the routers in one network from having to know the topological structure of the other ones.
For huge networks, a two-level hierarchy may be insufficient; it may be necessary to group the regions into clusters, the clusters into zones, the zones into groups, and so on, until we run out of names for units of aggregation. As an example of a simple multilevel hierarchy, consider how a packet might be routed from Berkeley, California, to Malindi, Kenya. The Berkeley router would know the detailed topology within California but would send all out-of-state traffic to the Los Angeles router. The Los Angeles router would be able to route traffic directly to other domestic routers but would send all foreign traffic to New York. The New York router would be programmed to direct all traffic to the router in the destination country responsible for handling foreign traffic, say, in Nairobi. Finally, the packet would work its way down the tree in Kenya until it got to Malindi.
Figure 5-14 gives a quantitative example of routing in a two-level hierarchy with five regions. The full routing table for router 1A has 17 entries, as shown in Fig. 5-14(b). When routing is done hierarchically, as in Fig. 5-14(c), there are entries for all the local routers, as before, but all other regions are condensed into a single router, so all traffic for region 2 goes via the 1B-2A line, but the rest of the remote traffic goes via the 1C-3B line. Hierarchical routing has reduced the table from 17 to 7 entries. As the ratio of the number of regions to the number of routers per region grows, the savings in table space increase.
Figure 5-14. Hierarchical routing. (a) A network of five regions. (b) Full table for 1A (17 entries). (c) Hierarchical table for 1A:

Dest.   Line   Hops
1A      –      –
1B      1B     1
1C      1C     1
2       1B     2
3       1C     2
4       1C     3
5       1C     4
Unfortunately, these gains in space are not free. There is a penalty to be paid: increased path length. For example, the best route from 1A to 5C is via region 2, but with hierarchical routing all traffic to region 5 goes via region 3, because that is better for most destinations in region 5.
When a single network becomes very large, an interesting question is ‘‘How many levels should the hierarchy have?’’ For example, consider a network with 720 routers. If there is no hierarchy, each router needs 720 routing table entries. If the network is partitioned into 24 regions of 30 routers each, each router needs 30 local entries plus 23 remote entries for a total of 53 entries. If a three-level hierarchy is chosen, with 8 clusters each containing 9 regions of 10 routers, each router needs 10 entries for local routers, 8 entries for routing to other regions within its own cluster, and 7 entries for distant clusters, for a total of 25 entries. Kamoun and Kleinrock (1979) discovered that the optimal number of levels for an N router network is ln N, requiring a total of e ln N entries per router. They have also shown that the increase in effective mean path length caused by hierarchical routing is sufficiently small that it is usually acceptable.
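The arithmetic in this example is easy to reproduce; the small program below simply recomputes the numbers quoted in the text (compile with -lm for the math library):

#include <math.h>
#include <stdio.h>

int main(void)
{
    int N = 720;

    printf("flat: %d entries\n", N);                              /* 720 */
    printf("two-level: %d entries\n", 30 + (24 - 1));             /* 53 */
    printf("three-level: %d entries\n", 10 + (9 - 1) + (8 - 1));  /* 25 */

    /* Kamoun and Kleinrock: optimum of ln N levels, e ln N entries per router. */
    printf("optimal: %.1f levels, %.1f entries\n",
           log((double) N), exp(1.0) * log((double) N));
    return 0;
}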
5.2.7 Broadcast Routing
In some applications, hosts need to send messages to many or all other hosts. For example, a service distributing weather reports, stock market updates, or live radio programs might work best by sending to all machines and letting those that are interested read the data. Sending a packet to all destinations simultaneously is called broadcasting. Various methods have been proposed for doing it.
One broadcasting method that requires no special features from the network is for the source to simply send a distinct packet to each destination. Not only is the method wasteful of bandwidth and slow, but it also requires the source to have a complete list of all destinations. This method is not desirable in practice, even though it is widely applicable.
An improvement is multidestination routing, in which each packet contains either a list of destinations or a bit map indicating the desired destinations. When a packet arrives at a router, the router checks all the destinations to determine the set of output lines that will be needed. (An output line is needed if it is the best route to at least one of the destinations.) The router generates a new copy of the packet for each output line to be used and includes in each packet only those destinations that are to use the line. In effect, the destination set is partitioned among the output lines. After a sufficient number of hops, each packet will carry only one destination like a normal packet. Multidestination routing is like using separately addressed packets, except that when several packets must follow the same route, one of them pays full fare and the rest ride free. The network bandwidth is therefore used more efficiently. However, this scheme still requires the source to know all the destinations, plus it is as much work for a router to determine where to send one multidestination packet as it is for multiple distinct packets.
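A sketch of the partitioning step is shown below. The helper functions best_line and send_copy stand in for an ordinary unicast table lookup and the transmission of one packet copy; both are assumptions for illustration:

#define MAX_LINES 8
#define MAX_DESTS 32

extern int best_line(int destination);                 /* assumed unicast lookup */
extern void send_copy(int line, int dests[], int n);   /* assumed transmit helper */

void forward_multidest(int dests[], int num_dests)
{
    int copy[MAX_LINES][MAX_DESTS];    /* destination subset for each output line */
    int count[MAX_LINES] = { 0 };
    int i, line;

    for (i = 0; i < num_dests; i++) {  /* partition the destination set */
        line = best_line(dests[i]);
        copy[line][count[line]++] = dests[i];
    }
    for (line = 0; line < MAX_LINES; line++)
        if (count[line] > 0)           /* one copy per line actually needed */
            send_copy(line, copy[line], count[line]);
}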
We have already seen a better broadcast routing technique: flooding. When implemented with a sequence number per source, flooding uses links efficiently with a decision rule at routers that is relatively simple. Although flooding is ill-suited for ordinary point-to-point communication, it rates serious consideration for broadcasting. However, it turns out that we can do better still once the shortest path routes for regular packets have been computed.
The idea for reverse path forwarding is elegant and remarkably simple once it has been pointed out (Dalal and Metcalfe, 1978). When a broadcast packet arrives at a router, the router checks to see if the packet arrived on the link that is normally used for sending packets toward the source of the broadcast. If so, there is an excellent chance that the broadcast packet itself followed the best route from the router and is therefore the first copy to arrive at the router. This being the case, the router forwards copies of it onto all links except the one it arrived on. If, however, the broadcast packet arrived on a link other than the preferred one for reaching the source, the packet is discarded as a likely duplicate.
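The decision rule fits in a few lines. In this sketch, unicast_line stands in for the lookup of the line normally used to send packets toward the source, and flood_except for transmission on all other links; both names are assumptions:

extern int unicast_line(int source);             /* assumed: preferred line toward source */
extern void flood_except(int line);              /* assumed: send on all other links */

void rpf_receive(int source, int arrival_line)
{
    if (arrival_line == unicast_line(source))
        flood_except(arrival_line);              /* probably the first copy: forward it */
    /* else: arrived on a non-preferred line; discard as a likely duplicate */
}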
Figure 5-15. Reverse path forwarding. (a) A network. (b) Sink tree for router I. (c) The tree built by reverse path forwarding from I.
An example of reverse path forwarding is shown in Fig. 5-15. Part (a) shows a network, part (b) shows a sink tree for router I of that network, and part (c) shows how the reverse path algorithm works. On the first hop, I sends packets to F, H, J, and N, as indicated by the second row of the tree. Each of these packets arrives on the preferred path to I (assuming that the preferred path falls along the sink tree) and is so indicated by a circle around the letter. On the second hop, eight packets are generated, two by each of the routers that received a packet on the first hop. As it turns out, all eight of these arrive at previously unvisited routers, and five of these arrive along the preferred line. Of the six packets generated on the third hop, only three arrive on the preferred path (at C, E, and K); the others are duplicates. After five hops and 24 packets, the broadcasting terminates, compared with four hops and 14 packets had the sink tree been followed exactly.
The principal advantage of reverse path forwarding is that it is efficient while being easy to implement. It sends the broadcast packet over each link only once in each direction, just as in flooding, yet it requires only that routers know how to reach all destinations, without needing to remember sequence numbers (or use other mechanisms to stop the flood) or list all destinations in the packet.

Our last broadcast algorithm improves on the behavior of reverse path forwarding. It makes explicit use of the sink tree—or any other convenient spanning tree for that matter—for the router initiating the broadcast. A spanning tree is a subset of the network that includes all the routers but contains no loops. Sink trees are spanning trees. If each router knows which of its lines belong to the spanning tree, it can copy an incoming broadcast packet onto all the spanning tree lines except the one it arrived on. This method makes excellent use of bandwidth, generating the absolute minimum number of packets necessary to do the job. In Fig. 5-15, for example, when the sink tree of part (b) is used as the spanning tree, the broadcast packet is sent with the minimum 14 packets. The only problem is that each router must have knowledge of some spanning tree for the method to be applicable. Sometimes this information is available (e.g., with link state routing, all routers know the complete topology, so they can compute a spanning tree) but sometimes it is not (e.g., with distance vector routing).
5.2.8 Multicast Routing
Some applications, such as a multiplayer game or live video of a sports event streamed to many viewing locations, send packets to multiple receivers. Unless the group is very small, sending a distinct packet to each receiver is expensive. On the other hand, broadcasting a packet is wasteful if the group consists of, say, 1000 machines on a million-node network, so that most receivers are not interested in the message (or worse yet, they are definitely interested but are not supposed to see it, for example, because it is part of a pay-per-view sports event). Thus, we need a way to send messages to well-defined groups that are numerically large in size but small compared to the network as a whole.
Sending a message to such a group is called multicasting, and the routing algorithm used is called multicast routing. All multicasting schemes require some way to create and destroy groups and to identify which routers are members of a group. How these tasks are accomplished is not of concern to the routing algorithm. For now, we will assume that each group is identified by a multicast address and that routers know the groups to which they belong. We will revisit group membership when we describe Internet multicasting in Sec. 5.7.8.
Multicast routing schemes build on the broadcast routing schemes we have already studied, sending packets along spanning trees to deliver the packets to the members of the group while making efficient use of bandwidth. However, the best spanning tree to use depends on whether the group is dense, with receivers scattered over most of the network, or sparse, with much of the network not belonging to the group. In this section we will consider both cases.
If the group is dense, broadcast is a good start because it efficiently gets the packet to all parts of the network. But broadcast will reach some routers that are not members of the group, which is wasteful. The solution explored by Deering and Cheriton (1990) is to prune the broadcast spanning tree by removing links that do not lead to members. The result is an efficient multicast spanning tree.
As an example, consider the two groups, 1 and 2, in the network shown in Fig. 5-16(a). Some routers are attached to hosts that belong to none, one or both of these groups, as indicated in the figure. A spanning tree for the leftmost router is shown in Fig. 5-16(b). This tree can be used for broadcast but is overkill for multicast, as can be seen from the two pruned versions that are shown next. In Fig. 5-16(c), all the links that do not lead to hosts that are members of group 1 have been removed. The result is the multicast spanning tree for the leftmost router to send to group 1. Packets are forwarded only along this spanning tree, which is more efficient than the broadcast tree because there are 7 links instead of 10. Fig. 5-16(d) shows the multicast spanning tree after pruning for group 2. It is efficient too, with only five links this time. It also shows that different multicast groups have different spanning trees.
Figure 5-16. (a) A network. (b) A spanning tree for the leftmost router. (c) A multicast tree for group 1. (d) A multicast tree for group 2.
Various ways of pruning the spanning tree are possible. The simplest one can be used if link state routing is used and furthermore each router is aware of the complete topology, including which hosts belong to which groups. Each router can then construct its own pruned spanning tree for each sender to the group in question by constructing a sink tree for the sender as usual and then removing all links that do not connect group members to the sink node. MOSPF (Multicast OSPF) is an example of a link state protocol that works in this way (Moy, 1994).
With distance vector routing, a different pruning strategy can be followed. The basic algorithm is reverse path forwarding. However, whenever a router with no hosts interested in a particular group and no connections to other routers receives a multicast message for that group, it responds with a PRUNE message, telling the neighbor that sent the message not to send it any more multicasts from the sender for that group. When a router with no group members among its own hosts has received such messages on all the lines to which it sends the multicast, it, too, can respond with a PRUNE message. In this way, the spanning tree is recursively pruned. DVMRP (Distance Vector Multicast Routing Protocol) is an example of a multicast routing protocol that works this way (Waitzman et al., 1988).
Pruning results in efficient spanning trees that use only the links that are actually needed to reach members of the group and no others. One potential disadvantage is that it is lots of work for routers, especially for very big networks. Suppose that a network has n groups, each with an average of m nodes. At each router and for each group m pruned spanning trees must be stored, for a total of mn trees. For example, Fig. 5-16(c) gives the spanning tree for the leftmost router to send to group 1. The spanning tree for the rightmost router to send to group 1 (not shown in the figure) will look quite different, as packets will head directly for group members rather than via the left side of the graph. This in turn means that routers must forward packets destined to group 1 in different directions depending on which node is sending to the group. When many large groups with many senders exist, considerable storage is needed to store all the trees.
An alternative design uses core-based trees to compute a single spanning tree for the group (Ballardie et al., 1993). All of the routers agree on a root (called the core or rendezvous point) and build the tree by sending a packet from each member to the root. The tree is the union of the paths traced by these packets. Fig. 5-17(a) shows a core-based tree for group 1. To send to this group, a sender sends a packet to the core. When the packet reaches the core, it is forwarded down the tree. This is shown in Fig. 5-17(b) for the sender on the righthand side of the network. As a performance optimization, packets destined for the group do not need to reach the core before they are multicast. As soon as a packet reaches the tree, it can be forwarded up toward the root, as well as down all the other branches. This is the case for the sender at the top of Fig. 5-17(b).
Having a shared tree is not optimal for all sources. For example, in Fig. 5-17(b), the packet from the sender on the righthand side reaches the top-right group member via the core in three hops, instead of directly. The inefficiency depends on where the core and senders are located, but often it is reasonable when the core is in the middle of the senders. When there is only a single sender, as in a video that is streamed to a group, using the sender as the core is optimal.
Figure 5-17. (a) Core-based tree for group 1. (b) Sending to group 1.
Also of note is that shared trees can be a major savings in storage costs, messages sent, and computation. Each router has to keep only one tree per group, instead of m trees. Further, routers that are not part of the tree do no work at all to support the group. For this reason, shared tree approaches like core-based trees are used for multicasting to sparse groups in the Internet as part of popular protocols such as Protocol Independent Multicast (Fenner et al., 2006).
5.2.9 Anycast Routing
So far, we have covered delivery models in which a source sends to a single destination (called unicast), to all destinations (called broadcast), and to a group of destinations (called multicast). Another delivery model, called anycast, is sometimes also useful. In anycast, a packet is delivered to the nearest member of a group (Partridge et al., 1993). Schemes that find these paths are called anycast routing.
Why would we want anycast? Sometimes nodes provide a service, such as time of day or content distribution, for which what matters is getting the right information, not which particular node is contacted; any node will do. For example, anycast is used in the Internet as part of DNS, as we will see in Chap. 7.
Fortunately, regular distance vector and link state routing can produce anycast routes, so we do not need to devise a new routing scheme for anycast. Suppose we want to anycast to the members of group 1. They will all be given the address ‘‘1,’’ instead of different addresses. Distance vector routing will distribute vectors as usual, and nodes will choose the shortest path to destination 1. This will result in nodes sending to the nearest instance of destination 1. The routes are shown in Fig. 5-18(a). This procedure works because the routing protocol does not realize that there are multiple instances of destination 1. That is, it believes that all the instances of node 1 are the same node, as in the topology shown in Fig. 5-18(b).
Figure 5-18. (a) Anycast routes to group 1. (b) Topology seen by the routing protocol.
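As a small illustration of why this works, the sketch below runs a Bellman-Ford-style relaxation in which every member of group 1 starts at distance 0, exactly as if all the instances were one node. The link-list format and names are hypothetical.

    def anycast_distances(links, members):
        """links: (u, v, cost) triples; members: nodes advertising the anycast
        address. Each node converges on the distance to its nearest member."""
        nodes = {n for u, v, _ in links for n in (u, v)}
        dist = {n: (0 if n in members else float("inf")) for n in nodes}
        changed = True
        while changed:                      # relax until a fixed point, as
            changed = False                 # repeated vector exchanges would do
            for u, v, cost in links:
                for a, b in ((u, v), (v, u)):
                    if dist[b] + cost < dist[a]:
                        dist[a] = dist[b] + cost
                        changed = True
        return dist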
This procedure works for link state routing as well, although there is the added consideration that the routing protocol must not find seemingly short paths that pass through node 1. This would result in jumps through hyperspace, since the instances of node 1 are really nodes located in different parts of the network. However, link state protocols already make this distinction between routers and hosts. We glossed over this fact earlier because it was not needed for our discussion.
5.3 TRAFFIC MANAGEMENT AT THE NETWORK LAYER
Too many packets in any part of the network can ultimately introduce packet delay and loss that degrades performance. This situation is called congestion.
5.3.1 The Need for Traffic Management: Congestion
The network and transport layers share the responsibility for managing congestion. Because congestion occurs within the network, it is the network layer that directly experiences it and must ultimately determine what to do with the excess packets. The most effective way to control congestion is to reduce the load that the transport layer is placing on the network. This requires the network and transport layers to work together. The network layer does not automatically mitigate congestion, but network operators can configure routers, switches, and other devices at the network layer to mitigate the effects of congestion, typically by taking actions that encourage a sender to reduce the sending rate, or by sending traffic along different, less-congested paths through the network. In this chapter, we will look at the aspects of congestion that concern the network layer, and the mechanisms that the network layer uses to control and manage congestion. To avoid confusion with the more common use of the phrase ‘‘congestion control,’’ which is frequently used by some authors to describe functions of the transport layer, in this chapter we
will discuss practices for managing congestion at the network layer as congestion management or traffic management. In Chap. 6, we will finish the topic by covering the mechanisms that the transport layer uses to perform congestion control.
Figure 5-19 shows the onset of congestion. When the number of packets that hosts send into the network is well within the network’s capacity, the amount of traffic that is delivered is proportional to the amount of traffic that is sent: If twice as much traffic is sent, twice as much is delivered. However, as the offered load approaches the carrying capacity, bursts of traffic occasionally fill up the buffers inside routers and some packets are lost. These lost packets consume some of the capacity, so the number of delivered packets falls below the ideal curve. At this point, the network is experiencing congestion.
[Graph: goodput (packets/sec) versus offered load (packets/sec), showing the ideal curve, the capacity of the network, the onset of congestion, a desirable response, and congestion collapse.]
Figure 5-19. Performance drops significantly in the presence of congestion: packet loss rates increase, and latency also increases as router queues fill with packets.
At some point, the network may experience a congestion collapse, where performance plummets as the offered load increases beyond the capacity. In short, congestion collapse occurs when increasing load on the network actually results in less traffic being successfully delivered. This situation can occur if packets are sufficiently delayed inside the network that they are no longer useful when they leave the network. For example, in the early Internet, the time a packet spent waiting for a backlog of packets ahead of it to be sent over a slow 56-kbps link could reach the maximum time it was allowed to remain in the network. It then had to be thrown away. A different failure mode occurs when senders retransmit packets that are greatly delayed, thinking that they have been lost. In this case, copies of the same packet will be delivered by the network, again wasting its capacity. To capture these factors, the y-axis of Fig. 5-19 is given as goodput, which is the rate at which useful packets are delivered by the network.
We would like to design networks that avoid congestion where possible and do not suffer from congestion collapse if they somehow do become congested. Unfortunately, in a packet-switched network, congestion cannot wholly be avoided. If
all of a sudden, streams of packets begin arriving on three or four input lines and all need the same output line, a queue will build up. If there is insufficient memory to hold all of them, packets will be lost. Adding more memory may help up to a point, but Nagle (1987) realized that if routers have an infinite amount of memory, congestion frequently gets worse, not better. More recently, researchers discovered that many network devices tend to have more memory than they need, a phenomenon that became known as bufferbloat. Network devices that have too much memory can degrade network performance for a variety of reasons. First, by the time packets get to the front of the queue, they have already timed out (repeatedly) and duplicates have been sent. Second, as we will discuss in Chap. 6, senders need timely information about network congestion, and if packets are stored in router buffers rather than dropped, senders will continue to send traffic that congests the network. All of this makes matters worse, not better; it leads to congestion collapse.
Low-bandwidth links or routers that process packets more slowly than the capacity of a network link can also become congested. When the network has spare capacity elsewhere, congestion can be mitigated by directing some of the traffic away from the bottleneck to other (less congested) parts of the network. Ultimately, however, increasing traffic demands may result in congestion being pervasive throughout the network. When this occurs, there are two approaches that operators can take: shedding load (i.e., dropping traffic), or provisioning additional capacity.
It is worth pointing out the difference between congestion control, traffic management, and flow control, as the relationship among them is a subtle one. Traffic management (sometimes also called traffic engineering) has to do with making sure the network is able to carry the offered traffic; it can be performed by devices in the network, or by the senders of traffic (often through mechanisms in the transport protocol, which are often referred to as congestion control). Congestion management and control concern the behavior of all the hosts and routers. Flow control, in contrast, relates to the traffic between a particular sender and a particular receiver, and is concerned with making sure that the sender does not transmit data faster than the receiver can process it. Its job is to prevent data loss when the sender is more powerful than the receiver and can send data faster than the receiver can absorb it.
To see the difference between these concepts, consider a network made up of 100-Gbps fiber-optic links on which a supercomputer is trying to force-feed a large file to a personal computer that is capable of handling only 1 Gbps. Although there is no congestion (the network itself is not in trouble), flow control is needed to force the supercomputer to stop frequently and give the personal computer a chance to breathe.
At the other extreme, consider a network with 1-Mbps lines and 1000 large computers, half of which are trying to transfer files at 100 kbps to the other half. Here, the problem is not that of fast senders overpowering slow receivers, but that the total offered traffic exceeds what the network can handle.
The reason congestion control and flow control are often confused is that the best way to handle both problems is to get the host to slow down. Thus, a host can get a ‘‘slow-down’’ message either because the receiver cannot handle the load or because the network cannot handle it. We will come back to this point in Chap. 6.
We will start our study of congestion management by looking at the approaches that network operators can apply at different time scales. Then we will look at approaches that can prevent congestion from occurring in the first place, followed by approaches for coping with it once it has set in.
5.3.2 Approaches to Traffic Management
The presence of congestion means that the load is (temporarily) greater than the resources (in a part of the network) can handle. There are two approaches to dealing with it: increase the resources or decrease the load. As shown in Fig. 5-20, these solutions are usually applied on different time scales to either prevent congestion or react to it once it has occurred.
[Diagram: a spectrum from slower (preventative) to faster (reactive): network provisioning, traffic-aware routing, admission control, traffic throttling, load shedding.]
Figure 5-20. Timescales of approaches to traffic and congestion management.
The most straightforward way to avoid congestion is to build a network that is provisioned for the traffic load that it must carry. If there is a low-bandwidth link on the path along which most traffic is directed, congestion is likely. Sometimes resources can be added dynamically when there is serious congestion, for example, turning on spare routers or enabling lines that are normally used only as backups (to make the system fault tolerant) or purchasing bandwidth on the open market. More often, links and routers that are regularly heavily utilized are upgraded at the earliest opportunity. This is called provisioning and happens on a time scale of months, driven by long-term traffic trends.
To make the most of the existing network capacity, routes can be tailored to traffic patterns that change during the day as network users wake and sleep in different time zones. For example, routes may be changed to shift traffic away from heavily used paths by changing the shortest path weights. Some local radio stations have helicopters flying around their cities to report on road congestion to make it possible for their mobile listeners to route their packets (cars) around hotspots. This is called traffic-aware routing. Splitting traffic across multiple paths can also be helpful.
However, sometimes it is not possible to increase capacity, especially on short time scales. The only way then to beat back the congestion is to decrease the load.
In a virtual-circuit network, new connections can be refused if they would cause the network to become congested. This is one example of admission control, a concept that simply denies senders the ability to send traffic if the network capacity cannot support it.
When congestion is imminent, the network can deliver feedback to the sources whose traffic flows are responsible for the problem. The network can request these sources to slow down the sending rates, or it can simply slow down the traffic itself, a process sometimes referred to as throttling. Two difficulties with this approach are how to identify the onset of congestion, and how to inform the source that needs to slow down. To tackle the first issue, routers can monitor the average load, queueing delay, or packet loss and send feedback to senders, either explicitly or implicitly (e.g., by dropping packets) to tell them to slow down.
In the case where feedback is explicit, routers must participate in a feedback loop with the sources. For a scheme to work correctly, the time scale must be adjusted carefully. If every time two packets arrive in a row a router yells STOP, and every time a router is idle for 20 µsec it yells GO, the system will oscillate wildly and never converge. On the other hand, if it waits 30 minutes to make sure before saying anything, the congestion-control mechanism will react too sluggishly to be of any use. Delivering timely feedback is a nontrivial matter. An added concern is having routers send more messages when the network is already congested.
Another approach is for the network to discard packets that it cannot deliver. The general name for this approach is load shedding, and there are various ways to achieve it, including traffic shaping (restricting the transmission rate for a particular sender) and traffic policing (dropping traffic from a particular sender if it exceeds some rate). A good policy for choosing which packets to discard can help to prevent congestion collapse. We will discuss all of these topics below.
Traffic-Aware Routing
The first approach we will examine is traffic-aware routing. The routing approaches we looked at in Sec. 5.2 used fixed link weights that adapted to changes in topology, but not to changes in traffic load. The goal in taking load into account when computing routes is to shift traffic away from hotspots that will be the first places in the network to experience congestion.
The most direct way to do this is to set the link weight to be a function of the (fixed) link bandwidth and propagation delay plus the (variable) measured load or average queueing delay. Least-weight paths will then favor paths that are more lightly loaded, all else being equal.
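A sketch of such a load-sensitive weight follows, under assumed units and a nominal 1500-byte packet; the names are illustrative. Feeding these weights to an ordinary shortest-path computation yields routes that avoid loaded links, which is exactly what sets up the oscillation discussed next.

    import heapq

    def link_weight(bandwidth_bps, prop_delay_s, queue_delay_s, packet_bits=12000):
        """Fixed transmission + propagation time, plus the measured queueing delay."""
        return packet_bits / bandwidth_bps + prop_delay_s + queue_delay_s

    def shortest_path_costs(adj, src):
        """Dijkstra over the current weights; adj[u] = [(v, weight), ...]."""
        dist, heap = {src: 0.0}, [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                      # stale queue entry
            for v, w in adj[u]:
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    heapq.heappush(heap, (d + w, v))
        return dist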
Traffic-aware routing was used in the early Internet according to this model (Khanna and Zinky, 1989). However, there is a peril. Consider the network of Fig. 5-21, which is divided into two parts, East and West, connected by two links, CF and EI. Suppose that most of the East-West traffic is using link CF, resulting
in this link being heavily loaded with long delays. Including queueing delay in the weight used for the shortest path calculation will make EI more attractive. After the new routing tables have been installed, most of the East-West traffic will now go over EI, loading this link. Consequently, in the next update, CF will appear to be the shortest path. As a result, the routing tables may oscillate wildly, leading to erratic routing and many potential problems.
[Diagram: routers A through E form the West half and F through J the East half; the two halves are joined by the links CF and EI.]
Figure 5-21. A network in which the East and West parts are connected by two links.
If load is ignored and only bandwidth and propagation delay are considered, this problem does not occur. Attempts to include load but change weights within a narrow range only slow down routing oscillations. Two techniques can contribute to a successful solution. The first is multipath routing, in which there can be multiple paths from a source to a destination. In our example this means that the traffic can be spread across both of the East to West links. The second one is for the routing scheme to shift traffic across routes slowly enough that it is able to converge, as in the scheme of Gallager (1977).
Given these difficulties, in the Internet, routing protocols do not generally adjust their routes depending on the load. Instead, network operators make adjustments to routing protocols on slower time scales by slowly changing the routing configuration and parameters, a process sometimes called traffic engineering. Traffic engineering has long been a painstaking, manual process, akin to a black art. Some work has attempted to formalize this process, but Internet traffic loads are unpredictable enough, and the protocol configuration parameters are coarse and clunky enough, that the process has remained fairly primitive. More recently, however, the advent of software-defined networking has made it possible to automate some of these tasks, and the increasing use of technologies such as MPLS tunnels across the network has provided operators with more flexibility for a wide range of traffic engineering tasks.
Admission Control
One technique that is widely used in virtual-circuit networks to keep congestion at bay is admission control. The idea is simple: do not set up a new virtual circuit unless the network can carry the added traffic without becoming congested.
Thus, attempts to set up a virtual circuit may fail. This approach is better than the alternative, as letting more people in when the network is busy just makes matters worse. By analogy, in the telephone system, when a switch gets overloaded, it practices admission control by not giving dial tones.
The trick with this approach is working out when a new virtual circuit will lead to congestion. The task is straightforward in the telephone network because of the fixed bandwidth of calls (64 kbps for uncompressed audio). However, virtual circuits in computer networks come in all shapes and sizes. Thus, the circuit must come with some characterization of its traffic if we are to apply admission control.
Traffic is often described in terms of its rate and shape. The problem of how to describe it in a simple yet meaningful way is difficult because traffic is typically bursty; the average rate is only half the story. For example, traffic that varies while browsing the Web is more difficult to handle than a streaming movie with the same long-term throughput because the bursts of Web traffic are more likely to congest routers in the network. A commonly used descriptor that captures this effect is the leaky bucket or token bucket. A leaky bucket has two parameters that bound the average rate and the instantaneous burst size of traffic. Because these are two common mechanisms for performing traffic shaping, we will cover them in more detail when we discuss traffic shaping below.
Given traffic descriptions, the network can decide whether to admit the new virtual circuit. One possibility is for the network to reserve enough capacity along the paths of each of its virtual circuits that congestion will not occur. In this case, the traffic description is a service agreement for what the network will guarantee its users. We have prevented congestion but veered into the related topic of quality of service a little too early; we will return to it shortly.
Even without making guarantees, the network can use traffic descriptions for admission control. The task is then to estimate how many circuits will fit within the carrying capacity of the network without congestion. Suppose that virtual circuits that may blast traffic at rates up to 10 Mbps all pass through the same 100-Mbps physical link. How many circuits should be admitted? Clearly, 10 circuits can be admitted without risking congestion, but this is wasteful in the normal case since it may rarely happen that all 10 are transmitting full blast at the same time. In real networks, measurements of past behavior that capture the statistics of transmissions can be used to estimate the number of circuits to admit, to trade better performance for acceptable risk.
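For a rough sense of such an estimate, here is a sketch that treats each circuit as bursting independently with some activity probability; the 20% figure below is an assumption for illustration, not something the text specifies.

    from math import comb

    def overload_probability(n, p_active, max_simultaneous):
        """P(more than max_simultaneous of n circuits transmit at once),
        with each circuit independently active with probability p_active."""
        return sum(comb(n, k) * p_active**k * (1 - p_active)**(n - k)
                   for k in range(max_simultaneous + 1, n + 1))

    # Ten 10-Mbps circuits fit a 100-Mbps link, so max_simultaneous = 10.
    # If each circuit is assumed active 20% of the time, admitting 25
    # circuits keeps the probability of overload around half a percent:
    print(overload_probability(25, 0.2, 10))   # ~ 0.0056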
Admission control can be combined with traffic-aware routing by considering routes around traffic hotspots as part of the setup procedure. For example, consider the network of Fig. 5-22(a), in which two routers are congested, as indicated.
Figure 5-22. (a) A congested network. (b) The portion of the network that is not congested. A virtual circuit from A to B is also shown.
Suppose that a host attached to router A wants to set up a connection to a host attached to router B. Normally, this connection would pass through one of the congested routers. To avoid this situation, we can redraw the network as shown in Fig. 5-22(b), omitting the congested routers and all of their lines. The dashed line shows a possible route for the virtual circuit that avoids the congested routers. Shaikh et al. (1999) give a design for this kind of load-sensitive routing.
Load Shedding
When none of the above methods make the congestion disappear, routers can bring out the heavy artillery: load shedding. This is a fancy way of saying that when routers are being inundated by packets that they cannot handle, they just throw them away. The term comes from the world of electrical power generation, where it refers to the practice of utilities intentionally blacking out certain areas to save the entire grid from collapsing on hot summer days when the demand for electricity (to power air conditioners) greatly exceeds the supply.
The key question for a router drowning in packets is which packets to drop. The preferred choice may depend on the type of applications that use the network. For a file transfer, an old packet is worth more than a new one. This is because dropping packet 6 and keeping packets 7 through 10, for example, will only force the receiver to do more work to buffer data that it cannot yet use. In contrast, for real-time media, a new packet is worth more than an old one. This is because packets become useless if they are delayed and miss the time at which they must be played out to the user.
The former policy (old is better than new) is often called wine and the latter (new is better than old) is often called milk because most people prefer new milk over old milk and old wine over new wine.
More intelligent load shedding requires cooperation from the senders. An example is packets that carry routing information. These packets are more important than regular data packets because they establish routes; if they are lost, the network may lose connectivity. Another example is that algorithms for compressing video, like MPEG, periodically transmit an entire frame and then send subsequent frames as differences from the last full frame. In this case, dropping a packet that is part of a difference is preferable to dropping one that is part of a full frame because future packets depend on the full frame.
To implement an intelligent discard policy, applications must mark their packets to indicate to the network how important they are. Then, when packets have to be discarded, routers can first drop packets from the least important class, then the next most important class, and so on.
Of course, unless there is some significant incentive to avoid marking every packet as VERY IMPORTANT—NEVER, EVER DISCARD, nobody will do it. Often accounting and money are used to discourage frivolous marking. For example, the network might let senders transmit faster than the service they purchased allows if they mark excess packets as low priority. Such a strategy is actually not a bad idea because it makes more efficient use of idle resources, allowing hosts to use them as long as nobody else is interested, but without establishing a right to them when times get tough.
Traffic Shaping
Before the network can make performance guarantees, it must know what traffic is being guaranteed. In the telephone network, this characterization is simple. For example, a voice call (in uncompressed format) needs 64 kbps and consists of one 8-bit sample every 125 µsec. However, traffic in data networks is bursty. It typically arrives at nonuniform rates as the traffic rate varies (e.g., videoconferencing with compression), users interact with applications (e.g., browsing a new Web page), and computers switch between tasks. Bursts of traffic are more difficult to handle than constant-rate traffic because they can fill buffers and cause packets to be lost.
Traffic shaping is a technique for regulating the average rate and burstiness of a flow of data that enters the network. The goal is to allow applications to transmit a wide variety of traffic that suits their needs, including some bursts, yet have a simple and useful way to describe the possible traffic patterns to the network. When a flow is set up, the user and the network (i.e., the customer and the provider) agree on a certain traffic pattern (i.e., shape) for that flow. In effect, the customer says to the provider ‘‘My transmission pattern will look like this; can you handle it?’’
Sometimes this agreement is called an SLA (Service Level Agreement), especially when it is made over aggregate flows and long periods of time, such as all of
the traffic for a given customer. As long as the customer fulfills her part of the bargain and only sends packets according to the agreed-on contract, the provider promises to deliver them all in a timely fashion.
Traffic shaping reduces congestion and thus helps the network live up to its promise. However, to make it work, there is also the issue of how the provider can tell if the customer is following the agreement and what to do if the customer is not. Packets in excess of the agreed pattern might be dropped by the network, or they might be marked as having lower priority. Monitoring a traffic flow is called traffic policing.
Shaping and policing are not so important for peer-to-peer and other transfers that will consume any and all available bandwidth, but they are of great importance for real-time data, such as audio and video connections, which have stringent quality-of-service requirements. We have already seen one way to limit the amount of data an application sends: the sliding window, which uses one parameter to limit how much data is in transit at any given time, which indirectly limits the rate. Now we will look at a more general way to characterize traffic, with the leaky bucket and token bucket algorithms. The formulations are slightly different but give an equivalent result.
Try to imagine a bucket with a small hole in the bottom, as illustrated in Fig. 5-23(b). No matter the rate at which water enters the bucket, the outflow is at a constant rate, R, when there is any water in the bucket and zero when the bucket is empty. Also, once the bucket is full to capacity B, any additional water entering it spills over the sides and is lost.
Figure 5-23. (a) Shaping packets. (b) A leaky bucket. (c) A token bucket.
This bucket can be used to shape or police packets entering the network, as shown in Fig. 5-23(a). Conceptually, each host is connected to the network by an interface containing a leaky bucket. To send a packet into the network, it must be possible to put more water into the bucket. If a packet arrives when the bucket is full, the packet must either be queued until enough water leaks out to hold it or be discarded. The former might happen at a host shaping its traffic for the network as part of the operating system. The latter might happen in hardware at a provider
network interface that is policing traffic entering the network. This technique was proposed by Turner (1986) and is called the leaky bucket algorithm. A different but equivalent formulation is to imagine the network interface as a bucket that is being filled, as shown in Fig. 5-23(c). The tap is running at rate R and the bucket has a capacity of B, as before. Now, to send a packet we must be able to take water, or tokens, as the contents are commonly called, out of the bucket (rather than putting water into the bucket). No more than a fixed number of tokens, B, can accumulate in the bucket, and if the bucket is empty, we must wait until more tokens arrive before we can send another packet. This algorithm is called the token bucket algorithm.
Leaky and token buckets limit the long-term rate of a flow but allow short-term bursts up to a maximum regulated length to pass through unaltered and without suffering any artificial delays. Large bursts will be smoothed by a leaky bucket traffic shaper to reduce congestion in the network. As an example, imagine that a computer can produce data at up to 1000 Mbps (125 million bytes/sec) and that the first link of the network also runs at this speed. The pattern of traffic the host generates is shown in Fig. 5-24(a). This pattern is bursty. The average rate over one second is 200 Mbps, even though the host sends a burst of 16,000 KB at the top speed of 1000 Mbps (for 1/8 of the second).
[Graphs of rate (Mbps) and bucket level (KB) versus time (msec), over 1000 msec: (a) the host sends at 125 MB/s for 125 msec, then at 25 MB/s for 250 msec; (b) output shaped with R = 25 MB/s and B = 9600 KB, where the bucket empties and traffic is delayed; (c) output with R = 25 MB/s and B = 0, where the bucket is always empty; (d)-(f) the corresponding bucket levels, starting from 16,000, 9600, and 0 KB.]
Figure 5-24. (a) Traffic from a host. Output shaped by a token bucket of rate 200 Mbps and capacity (b) 9600 KB and (c) 0 KB. Token bucket level for shaping with rate 200 Mbps and capacity (d) 16,000 KB, (e) 9600 KB, and (f) 0 KB.
Now suppose that the routers can accept data at the top speed only for short in tervals, until their buffers fill up. The buffer size is 9600 KB, smaller than the
traffic burst. For long intervals, the routers work best at rates not exceeding 200 Mbps (say, because this is all the bandwidth given to the customer). The implication is that if traffic is sent in this pattern, some of it will be dropped in the network because it does not fit into the buffers at routers.
To avoid this packet loss, we can shape the traffic at the host with a token bucket. If we use a rate, R, of 200 Mbps and a capacity, B, of 9600 KB, the traffic will fall within what the network can handle. The output of this token bucket is shown in Fig. 5-24(b). The host can send full throttle at 1000 Mbps for a short while until it has fully drained the bucket. Then it has to cut back to 200 Mbps until the burst has been sent. The effect is to spread out the burst over time because it was too large to handle all at once. The level of the token bucket is shown in Fig. 5-24(e). It starts off full and is depleted by the initial burst. When it reaches zero, new packets can be sent only at the rate at which the bucket is filling; there can be no more bursts until the bucket has recovered. The bucket fills when no traffic is being sent and stays flat when traffic is being sent at the fill rate.
We can also shape the traffic to be less bursty. Fig. 5-24(c) shows the output of a token bucket with R = 200 Mbps and a capacity of 0. This is the extreme case in which the traffic has been completely smoothed. No bursts are allowed, and the traffic enters the network at a steady rate. The corresponding bucket level, shown in Fig. 5-24(f), is always empty. Traffic is being queued on the host for release into the network and there is always a packet waiting to be sent when it is allowed.
Finally, Fig. 5-24(d) illustrates the bucket level for a token bucket with R = 200 Mbps and a capacity of B = 16,000 KB. This is the smallest token bucket through which the traffic passes unaltered. It might be used at a router in the network to police the traffic that the host sends. However, if the host is sending traffic that conforms to the token bucket on which it has agreed with the network, the traffic will fit through that same token bucket run at the router at the edge of the network. If the host sends at a faster or burstier rate, the token bucket will run out of water. If this happens, a traffic policer will know that the traffic is not as was described. It will then either drop the excess packets or lower their priority, depending on the design of the network. In our example, the bucket empties only momentarily, at the end of the initial burst, then recovers enough for the next burst.
Leaky and token buckets are easy to implement. We will now describe the operation of a token bucket. Even though we have described water flowing continuously into and out of the bucket, real implementations must work with discrete quantities. A token bucket is implemented with a counter for the level of the bucket. The counter is advanced by R · ΔT units at every clock tick of ΔT seconds. This would be 200 Kbit every 1 msec in our example above. Every time a unit of traffic is sent into the network, the counter is decremented, and traffic may be sent until the counter reaches zero.
When the packets are all the same size, the bucket level can just be counted in packets (e.g., 200 Kbit is 20 packets of 1250 bytes). However, often variable-sized packets are used. In this case, the bucket level can be counted in bytes. If the
residual byte count is too low to send a large packet, the packet must wait until the next tick (or even longer, if the fill rate is small).
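Here is a minimal sketch of the counter-based implementation just described, counting the bucket level in bytes; the class and method names are illustrative.

    class TokenBucket:
        """Token bucket with fill rate R (bytes/sec) and capacity B (bytes)."""

        def __init__(self, rate, capacity):
            self.rate = rate
            self.capacity = capacity
            self.tokens = capacity            # the bucket starts full

        def tick(self, dt):
            """Advance the counter by R * dt tokens, capped at the capacity."""
            self.tokens = min(self.capacity, self.tokens + self.rate * dt)

        def try_send(self, packet_bytes):
            """Send only if enough tokens remain; otherwise the packet waits
            (shaping) or is dropped or demoted (policing)."""
            if packet_bytes <= self.tokens:
                self.tokens -= packet_bytes
                return True
            return False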
Calculating the length of the maximum burst (until the bucket empties) is slightly tricky. It is longer than just 9600 KB divided by 125 MB/sec because while the burst is being output, more tokens arrive. If we call the burst length S sec., the maximum output rate M bytes/sec, the token bucket capacity B bytes, and the token arrival rate R bytes/sec, we can see that an output burst contains a maximum of B + RS bytes. We also know that the number of bytes in a maximum-speed burst of length S seconds is MS. Hence, we have
B + RS = MS
We can solve this equation to get S = B/(M − R). For our parameters of B = 9600 KB, M = 125 MB/sec, and R = 25 MB/sec, we get a burst time of about 94 msec.

A potential problem with the token bucket algorithm is that it reduces large bursts down to the long-term rate R. It is frequently desirable to reduce the peak rate, but without going down to the long-term rate (and also without raising the long-term rate to allow more traffic into the network). One way to get smoother traffic is to insert a second token bucket after the first one. The rate of the second bucket should be much higher than that of the first one. Basically, the first bucket characterizes the traffic, fixing its average rate but allowing some bursts. The second bucket reduces the peak rate at which the bursts are sent into the network. For example, if the rate of the second token bucket is set to be 500 Mbps and the capacity is set to 0, the initial burst will enter the network at a peak rate of 500 Mbps, which is lower than the 1000 Mbps rate we had previously.
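As a quick check of the burst-time formula, the figures work out to the quoted 94 msec if the units are read as binary (1 KB = 1024 bytes, 1 MB = 2^20 bytes), which is an assumption on our part:

    B = 9600 * 1024          # token bucket capacity, bytes
    M = 125 * 2**20          # maximum output rate, bytes/sec
    R = 25 * 2**20           # token arrival rate, bytes/sec
    S = B / (M - R)          # 0.09375 sec, i.e., about 94 msec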
Using all of these buckets can be a bit tricky. When token buckets are used for traffic shaping at hosts, packets are queued and delayed until the buckets permit them to be sent. When token buckets are used for traffic policing at routers in the network, the algorithm is simulated to make sure that no more packets are sent than permitted. Nevertheless, these tools provide ways to shape the network traffic into more manageable forms to assist in meeting quality-of-service requirements.
Active Queue Management
In the Internet and many other computer networks, senders adjust their transmissions to send as much traffic as the network can readily deliver. In this setting, the network aims to operate just before the onset of congestion. When congestion is imminent, it must tell the senders to throttle back their transmissions and slow down. This feedback is business as usual rather than an exceptional situation. The term congestion avoidance is sometimes used to contrast this operating point with the one in which the network has become (overly) congested.
Let us now look at some approaches to throttling traffic that can be used in both datagram networks and virtual-circuit networks alike. Each approach must solve two problems. First, routers must determine when congestion is approaching,
ideally before it has arrived. To do so, each router can continuously monitor the resources it is using. Three possibilities are the utilization of the output links, the buffering of queued packets inside the router, and the number of packets that are lost due to insufficient buffering. Of these possibilities, the second one is the most useful. Averages of utilization do not directly account for the burstiness of most traffic: a utilization of 50% may be low for smooth traffic and too high for highly variable traffic. Counts of packet losses come too late. Congestion has already set in by the time that packets are lost.
The queueing delay inside routers directly captures any congestion experienced by packets. It should be low most of the time, but will jump when there is a burst of traffic that generates a backlog. To maintain a good estimate of the queueing delay, d, a sample of the instantaneous queue length, s, can be made periodically and d updated according to
d_new = α d_old + (1 − α) s

where the constant α determines how fast the router forgets recent history. This is called an EWMA (Exponentially Weighted Moving Average). It smooths out fluctuations and is equivalent to a low-pass filter. Whenever d moves above some predefined threshold, the router notes the onset of congestion.
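One EWMA step is a single line of code. In the sketch below, the smoothing constant 0.9 and the queue samples are made-up values for illustration:

    def ewma(d_old, sample, alpha=0.9):
        """d_new = alpha * d_old + (1 - alpha) * sample; a larger alpha
        means the router forgets recent history more slowly."""
        return alpha * d_old + (1 - alpha) * sample

    d = 0.0
    for s in [0, 0, 12, 40, 35, 5, 0]:   # instantaneous queue-length samples
        d = ewma(d, s)                    # d rises and decays smoothly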
The second problem is that routers must deliver timely feedback to the senders that are causing the congestion. Congestion is experienced in the network, but relieving congestion requires action on behalf of the senders that are using the network. To deliver feedback, the router must identify the appropriate senders. It must then warn them carefully, without sending many more packets into the already congested network. Different schemes use different feedback mechanisms, as we will now describe.
Random Early Detection
Dealing with congestion when it first starts is more effective than letting it gum up the works and then trying to deal with it. This observation leads to an interesting twist on load shedding, which is to discard packets before all the buffer space is really exhausted.
The motivation for this idea is that most Internet hosts do not yet get congestion signals from routers in the form of an explicit notification. Instead, the only reliable indication of congestion that hosts get from the network is packet loss. After all, it is difficult to build a router that does not drop packets when it is completely overloaded. Transport protocols such as TCP are thus hardwired to react to loss as congestion, slowing down the source in response. The reasoning behind this logic is that TCP was designed for wired networks and wired networks are very reliable, so lost packets are mostly due to buffer overruns rather than transmission errors. Wireless links must recover transmission errors at the link layer (so they are not seen at the network layer) to work well with TCP.
This situation can be exploited to help reduce congestion. By having routers drop packets early, before the situation has become hopeless, there is time for the source to take action before it is too late. A popular algorithm for doing this is called RED (Random Early Detection) (Floyd and Jacobson, 1993). To determine when to start discarding, routers maintain a running average of their queue lengths. When the average queue length on some link exceeds a threshold, the link is said to be congested and a small fraction of the packets are dropped at random. Picking packets at random makes it more likely that the fastest senders will see a packet drop; this is the best option since the router cannot tell which source is causing the most trouble in a datagram network. The affected sender will notice the loss when there is no acknowledgement, and then the transport protocol will slow down. The lost packet is thus delivering the same message as a notification packet, but implicitly, without the router sending any explicit signal.
RED routers improve performance compared to routers that drop packets only when their buffers are full, though they require tuning to work well. For example, the ideal number of packets to drop depends on how many senders need to be notified of congestion. However, explicit notification is the better option if it is available. It works in exactly the same manner, but delivers a congestion signal explicitly rather than as a loss; RED is used when hosts cannot receive explicit signals.
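A simplified sketch of the drop decision follows. The thresholds and maximum drop probability are tuning parameters chosen here for illustration; the published algorithm also smooths the queue length with an EWMA and spaces successive drops apart by counting the packets sent since the last drop.

    import random

    def red_should_drop(avg_queue, min_th=5, max_th=15, max_p=0.02):
        """No drops below min_th, certain drops above max_th, and a drop
        probability rising linearly to max_p in between."""
        if avg_queue < min_th:
            return False
        if avg_queue >= max_th:
            return True
        return random.random() < max_p * (avg_queue - min_th) / (max_th - min_th)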
Choke Packets
The most direct way to notify a sender of congestion is to tell it directly. In this approach, the router selects a congested packet and sends a choke packet back to the source host, giving it the destination found in the packet. The original packet may be tagged (a header bit is turned on) so that it will not generate any more choke packets farther along the path and then forwarded in the usual way. To avoid increasing load on the network during a time of congestion, the router may only send choke packets at a low rate.
When the source host gets the choke packet, it is required to reduce the traffic sent to the specified destination, for example, by 50%. In a datagram network, simply picking packets at random when there is congestion is likely to cause choke packets to be sent to fast senders, because they will have the most packets in the queue. The feedback created by this protocol can help prevent congestion yet not throttle any sender unless it causes trouble. For the same reason, it is likely that multiple choke packets will be sent to a given host and destination. The host should ignore these additional chokes for the fixed time interval until its reduction in traffic takes effect. After that period, further choke packets indicate that the net- work is still congested.
A choke packet used in the early Internet is the SOURCE QUENCH message (Postel, 1981). It never caught on, though, partly because the circumstances in which it was generated and the effect it had were not well specified. The modern Internet uses a different notification design that we will describe next.
Explicit Congestion Notification
Instead of generating additional packets to warn of congestion, a router can tag any packet it forwards (by setting a bit in the packet’s header) to signal that it is experiencing congestion. When the network delivers the packet, the destination can note that there is congestion and inform the sender when it sends a reply packet. The sender can then throttle its transmissions as before.
This design is called ECN (Explicit Congestion Notification) and is used in the Internet (Ramakrishnan et al., 2001). It is a refinement of early congestion signaling protocols, notably the binary feedback scheme of Ramakrishnan and Jain (1988) that was used in the DECnet architecture. Two bits in the IP packet header are used to record whether the packet has experienced congestion. Packets are unmarked when they are sent, as illustrated in Fig. 5-25. If any of the routers they pass through is congested, that router will then mark the packet as having experienced congestion as it is forwarded. The destination will then echo any marks it has received back to the sender as an explicit congestion signal in its next reply packet. This is shown with a dashed line in the figure to indicate that it happens above the IP level (e.g., in TCP). The sender must then throttle its transmissions, as in the case of choke packets.
Figure 5-25. Explicit congestion notification.
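A sketch of the marking step, using the standard ECN codepoints carried in the two low-order bits of the IP TOS/traffic-class byte (RFC 3168); the function name and the congestion test are hypothetical.

    ECN_MASK = 0b11                     # low two bits of the TOS byte
    ECT1, ECT0, CE = 0b01, 0b10, 0b11   # ECN-capable codepoints and the CE mark

    def mark_if_congested(tos, congested):
        """A congested router marks ECN-capable packets rather than dropping them."""
        if congested and (tos & ECN_MASK) in (ECT0, ECT1):
            return (tos & ~ECN_MASK) | CE   # rewrite the field to CE (11)
        return tos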
Hop-by-Hop Backpressure
At high speeds or over long distances, many new packets may be transmitted after congestion has been signaled because of the delay before the signal takes effect. Consider, for example, a host in San Francisco (router A in Fig. 5-26) that is sending traffic to a host in New York (router D in Fig. 5-26) at the OC-3 speed of 155 Mbps. If the New York host begins to run out of buffers, it will take about 40 msec for a choke packet to get back to San Francisco to tell it to slow down. An ECN indication will take even longer because it is delivered via the destination. Choke packet propagation is illustrated as the second, third, and fourth steps in Fig. 5-26(a). In those 40 msec, another 6.2 megabits will have been sent. Even if the host in San Francisco completely shuts down immediately, the 6.2 megabits in the pipe will continue to pour in and have to be dealt with. Only in the seventh diagram in Fig. 5-26(a) will the New York router notice a slower flow.
An alternative approach is to have the choke packet take effect at every hop it passes through, as shown in the sequence of Fig. 5-26(b). Here, as soon as the choke packet reaches F, F is required to reduce the flow to D. Doing so will require F to devote more buffers to the connection, since the source is still sending away at full blast, but it gives D immediate relief, like a headache remedy in a television commercial. In the next step, the choke packet reaches E, which tells E to reduce the flow to F. This action puts a greater demand on E’s buffers but gives F immediate relief. Finally, the choke packet reaches A and the flow genuinely slows down.
The net effect of this hop-by-hop scheme is to provide quick relief at the point of congestion, at the price of using up more buffers upstream. In this way, congestion can be nipped in the bud without losing any packets. The idea is discussed in detail by Mishra et al. (1996).
5.4 QUALITY OF SERVICE AND APPLICATION QOE
The techniques we looked at in the previous sections are designed to reduce congestion and improve network performance. However, there are applications (and customers) that demand stronger performance guarantees from the network than ‘‘the best that could be done under the circumstances,’’ sometimes referred to as best effort. Many applications require some minimum level of throughput to function and also do not perform well when latency exceeds some threshold. In this section, we will continue our study of network performance, with a sharper focus on ways to provide quality of service that can meet application needs. This is an area in which the Internet is undergoing a long-term upgrade. More recently, there has also been increased focus on user QoE (Quality of Experience), which recognizes that ultimately the user experience matters, and that different applications have very different requirements and thresholds as far as network performance goes. An increasing area of focus is estimating user QoE given the ability to observe only encrypted network traffic.
5.4.1 Application QoS Requirements
A stream of packets from a source to a destination is called a flow (Clark, 1988). A flow might be all the packets of a connection in a connection-oriented network, or all the packets sent from one process to another process in a connectionless network. The needs of each flow can be characterized by four primary parameters: bandwidth, delay, jitter, and loss. Together, these determine the QoS (Quality of Service) the flow requires.
Several common applications and the stringency of their network requirements are listed in Fig. 5-27. Note that network requirements are less demanding than application requirements in those cases where the application can improve on the
Figure 5-26. (a) A choke packet that affects only the source. (b) A choke packet that affects each hop it passes through.
service provided by the network. In particular, networks do not need to be lossless for reliable file transfer, and they do not need to deliver packets with identical delays for audio and video playout. Some amount of loss can be repaired with retransmissions, and some amount of jitter can be smoothed by buffering packets at the receiver. However, there is nothing applications can do to remedy the situation if the network provides too little bandwidth or too much delay.
Application          Bandwidth   Delay    Jitter   Loss
Email                Low         Low      Low      Medium
File sharing         High        Low      Low      Medium
Web access           Medium      Medium   Low      Medium
Remote login         Low         Medium   Medium   Medium
Audio on demand      Low         Low      High     Low
Video on demand      High        Low      High     Low
Telephony            Low         High     High     Low
Videoconferencing    High        High     High     Low
Figure 5-27. Stringency of applications’ quality-of-service requirements.
The applications differ in their bandwidth needs, with email, audio in all forms, and remote login not needing much, but file sharing and video in all forms needing a great deal.
More interesting are the delay requirements. File transfer applications, including email and video, are not delay sensitive. If all packets are delayed uniformly by a few seconds, no harm is done. Interactive applications, such as Web surfing and remote login, are more delay sensitive. Real-time applications, such as telephony and videoconferencing, have strict delay requirements. If the words in a telephone call are delayed too long, the users will find the connection unacceptable. On the other hand, playing audio or video files from a server does not require low delay.
The variation (i.e., standard deviation) in the delay or packet arrival times is called jitter. The first three applications in Fig. 5-27 are not sensitive to the packets arriving with irregular time intervals between them. Remote login is somewhat sensitive to that, since updates on the screen will appear in little bursts if the connection suffers much jitter. Video and especially audio are extremely sensitive to jitter. If a user is watching a video over the network and the frames are all delayed by exactly 2.000 seconds, no harm is done. But if the transmission time varies randomly between 1 and 2 seconds, the result will be terrible unless the application hides the jitter. For audio, a jitter of even a few milliseconds is clearly audible.
The first four applications have more stringent requirements on loss than audio and video because all bits must be delivered correctly. This goal is usually achieved by having the transport layer retransmit packets that are lost in the network. This is wasted work; it would be better if the network refused packets
it was likely to lose in the first place. Audio and video applications can tolerate some lost packets without retransmission because people do not notice short pauses or occasional skipped frames.
To accommodate a variety of applications, networks may support different categories of QoS. An influential example comes from ATM networks, which were once part of a grand vision for networking but have since become a niche technology. They support:
1. Constant bit rate (e.g., telephony).
2. Real-time variable bit rate (e.g., compressed videoconferencing).
3. Non-real-time variable bit rate (e.g., watching a movie on demand).
4. Available bit rate (e.g., file transfer).
These categories are also useful for other purposes and other networks. Constant bit rate is an attempt to simulate a wire by providing a uniform bandwidth and a uniform delay. Variable bit rate occurs when video is compressed, with some frames compressing more than others. Sending a frame with a lot of detail in it may require sending many bits, whereas a shot of a white wall may compress extremely well. Movies on demand are not actually real time because a few seconds of video can easily be buffered at the receiver before playback starts, so jitter on the network merely causes the amount of stored-but-not-played video to vary. Available bit rate is for applications such as email that are not sensitive to delay or jitter and will take what bandwidth they can get.
5.4.2 Overprovisioning
An easy solution to provide good quality of service is to build a network with enough capacity for whatever traffic will be thrown at it. The name for this solution is overprovisioning. The resulting network will carry application traffic without significant loss and, assuming a decent routing scheme, will deliver packets with low latency. Performance doesn’t get any better than this. To some extent, the telephone system is overprovisioned because it is rare to pick up a telephone and not get a dial tone instantly. There is simply so much capacity available that demand can almost always be met.
The trouble with this solution is that it is expensive. It is basically solving a problem by throwing money at it. Quality of service mechanisms let a network with less capacity meet application requirements just as well at a lower cost. Moreover, overprovisioning is based on expected traffic. All bets are off if the traffic pattern changes too much. With quality of service mechanisms, the network can honor the performance guarantees that it makes even when traffic spikes, at the cost of turning down some requests.
Four issues must be addressed to ensure quality of service:
1. What applications need from the network.
2. How to regulate the traffic that enters the network.
3. How to reserve resources at routers to guarantee performance.
4. Whether the network can safely accept more traffic.
No single technique deals efficiently with all these issues. Instead, a variety of techniques have been developed for use at the network (and transport) layer. Practical quality-of-service solutions combine multiple techniques. To this end, we will describe two versions of quality of service for the Internet called Integrated Services and Differentiated Services.
5.4.3 Packet Scheduling
Being able to regulate the shape of the offered traffic is a good start. However, to provide a performance guarantee, we must reserve sufficient resources along the route that the packets take through the network. To do this, we are assuming that the packets of a flow follow the same route. Spraying them over routers at random makes it hard to guarantee anything. As a consequence, something similar to a virtual circuit has to be set up from the source to the destination, and all the packets that belong to the flow must follow this route.
Algorithms that allocate router resources among the packets of a flow and between competing flows are called packet scheduling algorithms. Three different kinds of resources can potentially be reserved for different flows:
1. Bandwidth.
2. Buffer space.
3. CPU cycles.
The first one, bandwidth, is the most obvious. If a flow requires 1 Mbps and the outgoing line has a capacity of 2 Mbps, trying to direct three flows through that line is not going to work. Thus, reserving bandwidth means not oversubscribing any output line.
A second resource that is often in short supply is buffer space. When a packet arrives, it is buffered inside the router until it can be transmitted on the chosen outgoing line. The purpose of the buffer is to absorb small bursts of traffic as the flows contend with each other. If no buffer is available, the packet has to be discarded since there is no place to put it. For good quality of service, some buffers might be reserved for a specific flow so that flow does not have to compete for buffers with other flows. Up to some maximum value, there will always be a buffer available when the flow needs one.
Finally, CPU cycles may also be a scarce resource. It takes router CPU time to process a packet, so a router can process only a certain number of packets per second. While modern routers are able to process most packets quickly, some kinds of packets require greater CPU processing, such as the ICMP packets we will describe in Sec. 5.7.4. Making sure that the CPU is not overloaded is needed to ensure timely processing of these packets.
First-In First-Out (FIFO) Scheduling
Packet scheduling algorithms allocate bandwidth and other router resources by determining which of the buffered packets to send on the output line next. We already described the most straightforward scheduler when explaining how routers work. Each router buffers packets in a queue for each output line until they can be sent, and they are sent in the same order that they arrived. This algorithm is known as FIFO (First-In First-Out), or equivalently FCFS (First-Come First-Served).
FIFO routers usually drop newly arriving packets when the queue is full. Since the newly arrived packet would have been placed at the end of the queue, this behavior is called tail drop. It is intuitive, and you may be wondering what alternatives exist. In fact, the RED algorithm we described in Sec. 5.3.2 chose a newly arriving packet to drop at random when the average queue length grew large. The other scheduling algorithms that we will describe also create other opportunities for deciding which packet to drop when the buffers are full.
Fair Queueing
FIFO scheduling is simple to implement, but it is not suited to providing good quality of service because when there are multiple flows, one flow can easily affect the performance of the other flows. If the first flow is aggressive and sends large bursts of packets, they will lodge in the queue. Processing packets in the order of their arrival means that the aggressive sender can hog most of the capacity of the routers its packets traverse, starving the other flows and reducing their quality of service. To add insult to injury, the packets of the other flows that do get through are likely to be delayed because they had to sit in the queue behind many packets from the aggressive sender.
Many packet scheduling algorithms have been devised that provide stronger isolation between flows and thwart attempts at interference (Bhatti and Crowcroft, 2000). One of the first ones was the fair queueing algorithm devised by Nagle (1987). The essence of this algorithm is that routers have separate queues, one for each flow for a given output line. When the line becomes idle, the router scans the queues round robin, as shown in Fig. 5-28. It then takes the first packet on the next queue. In this way, with n hosts competing for the output line, each host gets to send one out of every n packets. It is fair in the sense that all flows get to send packets at the same rate. Sending more packets will not improve this rate.
Figure 5-28. Round-robin fair queueing: packets from three input queues are taken in round-robin order and sent on the output line.
Although a start, the algorithm has a flaw: it gives more bandwidth to hosts that use large packets than to hosts that use small packets. Demers et al. (1990) suggested an improvement in which the round robin is done in such a way as to simulate a byte-by-byte round robin, instead of a packet-by-packet round robin. The trick is to compute a virtual time that is the number of the round at which each packet would finish being sent. Each round drains a byte from all of the queues that have data to send. The packets are then sorted in order of their finishing times and sent in that order.
This algorithm and an example of finish times for packets arriving in three flows are illustrated in Fig. 5-29. If a packet has length L, the round at which it will finish is simply L rounds after the start time. The start time is either the finish time of the previous packet, or the arrival time of the packet, if the queue is empty when it arrives.
Packet   Arrival time   Length   Finish time   Output order
A        0              8        8             1
B        5              6        11            3
C        5              10       10            2
D        8              9        20            7
E        8              8        14            4
F        10             6        16            5
G        11             10       19            6
H        20             8        28            8

Figure 5-29. (a) Weighted Fair Queueing: three input queues feed one output line; the bottom queue has weight 2, packet F arrives after D but goes first, and packet H arrives late. (b) Finishing times for the packets.
From the table in Fig. 5-29(b), and looking only at the first two packets in the top two queues, packets arrive in the order A, B, D, and F. Packet A arrives at round 0 and is 8 bytes long, so its finish time is round 8. Similarly, the finish time for packet B is 11. Packet D arrives while B is being sent. Its finish time is 9 byte-rounds after it starts when B finishes, or 20. Similarly, the finish time for F is 16. In the absence of new arrivals, the relative sending order is A, B, F, D, even though F arrived after D. It is possible that another small packet will arrive on the top flow and obtain a finish time before D. It will only jump ahead of D if the transmission of that packet has not started. Fair queueing does not preempt packets that are currently being transmitted. Because packets are sent in their entirety, fair queueing is only an approximation of the ideal byte-by-byte scheme. But it is a very good approximation, staying within one packet transmission of the ideal scheme at all times.
Weighted Fair Queueing
One shortcoming of this algorithm in practice is that it gives all hosts the same priority. In many situations, it is desirable to give, for example, video servers more bandwidth than, say, file servers. This is easily possible by giving the video server two or more bytes per round. This modified algorithm is called WFQ (Weighted Fair Queueing). Letting the number of bytes per round be the weight of a flow, W, we can now give the formula for computing the finish time:

F_i = max(A_i, F_{i-1}) + L_i / W

where A_i is the arrival time, F_i is the finish time, and L_i is the length of packet i. The bottom queue of Fig. 5-29(a) has a weight of 2, so its packets are sent more quickly, as you can see in the finish times given in Fig. 5-29(b).
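The finish-time computation is easy to express in code. The sketch below (our own illustrative functions, not a standard API) applies the formula independently to each per-flow queue and reproduces the finish times and output order of Fig. 5-29(b) from the arrivals, lengths, and weights of Fig. 5-29(a).

```python
def finish_times(queues, weights):
    """Virtual finish times F_i = max(A_i, F_{i-1}) + L_i / W, computed
    per flow queue, as in the text's simplified model. `queues` maps a
    flow name to a list of (packet, arrival, length) tuples."""
    result = {}
    for flow, packets in queues.items():
        w, last_finish = weights[flow], 0.0
        for name, arrival, length in packets:
            last_finish = max(arrival, last_finish) + length / w
            result[name] = last_finish
    return result

# The three input queues of Fig. 5-29(a); the bottom flow has weight 2.
queues = {
    "top":    [("A", 0, 8), ("F", 10, 6), ("H", 20, 8)],
    "middle": [("B", 5, 6), ("D", 8, 9)],
    "bottom": [("C", 5, 10), ("E", 8, 8), ("G", 11, 10)],
}
finish = finish_times(queues, {"top": 1, "middle": 1, "bottom": 2})
print(sorted(finish, key=finish.get))
# ['A', 'C', 'B', 'E', 'F', 'G', 'D', 'H'] -- the output order of Fig. 5-29(b)
```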
Another practical consideration is implementation complexity. WFQ requires that packets be inserted by their finish time into a sorted queue. With N flows, this is at best an O(log N) operation per packet, which is difficult to achieve for many flows in high-speed routers. Shreedhar and Varghese (1995) describe an approximation called deficit round robin that can be implemented very efficiently, with only O(1) operations per packet. Given this approximation, WFQ is widely used.
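As a rough illustration of the deficit round robin idea (a simplified rendition of ours, not the authors' exact algorithm): each backlogged queue earns a fixed byte quantum per round and may transmit packets as long as its accumulated deficit covers them, which needs only O(1) work per packet.

```python
from collections import deque

def deficit_round_robin(queues, quantum):
    """Serve a list of per-flow deques of (packet, length) until empty."""
    deficits = [0] * len(queues)
    sent = []
    while any(queues):
        for i, q in enumerate(queues):
            if not q:
                deficits[i] = 0       # an idle queue accumulates no credit
                continue
            deficits[i] += quantum    # earn one quantum of bytes this round
            while q and q[0][1] <= deficits[i]:
                name, length = q.popleft()
                deficits[i] -= length
                sent.append(name)
    return sent

a = deque([("A", 800), ("B", 800)])              # a flow with large packets
b = deque([("C", 300), ("D", 300), ("E", 300)])  # a flow with small packets
print(deficit_round_robin([a, b], quantum=500))  # ['C', 'A', 'D', 'E', 'B']
```

Over many rounds, each flow drains the same number of bytes per round regardless of its packet sizes, which is exactly the fairness that plain round robin lacked.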
Other kinds of scheduling algorithms exist, too. A simple example is priority scheduling, in which each packet is marked with a priority. High-priority packets are always sent before any low-priority packets that are buffered. Within a priority, packets are sent in FIFO order. However, priority scheduling has the disadvantage that a burst of high-priority packets can starve low-priority packets, which may have to wait indefinitely. WFQ often provides a better alternative. By giving the high-priority queue a large weight, say 3, high-priority packets will often go through a short queue (as relatively few packets should be high priority), yet some fraction of low-priority packets will continue to be sent even when there is high-priority traffic. A high- and low-priority system is essentially a two-queue WFQ system in which the high priority has infinite weight.
As a final example of a scheduler, packets might carry timestamps and be sent in timestamp order. Clark et al. (1992) describe a design in which the timestamp records how far the packet is behind or ahead of schedule as it is sent through a sequence of routers on the path. Packets that have been queued behind other packets at a router will tend to be behind schedule, and the packets that have been serviced first will tend to be ahead of schedule. Sending packets in order of their timestamps has the beneficial effect of speeding up slow packets while at the same time slowing down fast packets. The result is that all packets are delivered by the network with a more consistent delay, which is obviously a good thing.
Putting it Together
We have now seen all the necessary elements for QoS, so it is time to put them together to actually provide it. QoS guarantees are established through the process of admission control. We first saw admission control used to control congestion, which is a performance guarantee, albeit a weak one. The guarantees we are considering now are stronger, but the model is the same. The user offers a flow with an accompanying QoS requirement to the network. The network then decides whether to accept or reject the flow based on its capacity and the commitments it has made to other flows. If it accepts, the network reserves capacity in advance at routers to guarantee QoS when traffic is sent on the new flow.
The reservations must be made at all of the routers along the route that the packets take through the network. Any routers on the path without reservations might become congested, and a single congested router can break the QoS guarantee. Many routing algorithms find the single best path between each source and each destination and send all traffic over that path. This may cause some flows to be rejected if there is not enough spare capacity along the best path. QoS guarantees for new flows may still be accommodated by choosing a different route for the flow that has excess capacity. This is called QoS routing. Chen and Nahrstedt (1998) give an overview of these techniques. It is also possible to split the traffic for each destination over multiple paths to more easily find excess capacity. A simple method is for routers to choose equal-cost paths and to divide the traffic equally or in proportion to the capacity of the outgoing links. However, more sophisticated algorithms are also available (Nelakuditi and Zhang, 2002).
Given a path, the decision to accept or reject a flow is not a simple matter of comparing the resources (bandwidth, buffers, and cycles) requested by the flow with the router's excess capacity in those three dimensions. It is a little more complicated than that. To start with, although some applications may know about their bandwidth requirements, few know about buffers or CPU cycles, so at the minimum, a different way is needed to describe flows and translate this description to router resources. We will get to this shortly.
Next, some applications are far more tolerant of an occasional missed deadline than others. The applications must choose from the type of guarantees that the network can make, whether hard guarantees or behavior that will hold most of the time. All else being equal, everyone would like hard guarantees, but the difficulty is that they are expensive because they constrain worst-case behavior. Guarantees for most of the packets are often sufficient for applications, and more flows with this guarantee can be supported for a fixed capacity.
Finally, some applications may be willing to haggle about the flow parameters and others may not. For example, a movie viewer that normally runs at 30 frames/sec may be willing to drop back to 25 frames/sec if there is not enough free bandwidth to support 30 frames/sec. Similarly, the number of pixels per frame, audio bandwidth, and other properties may be adjustable.
Because many parties may be involved in the flow negotiation (the sender, the receiver, and all the routers along the path between them), flows must be described accurately in terms of specific parameters that can be negotiated. A set of such parameters is called a flow specification. Typically, the sender (e.g., the video server) produces a flow specification proposing the parameters it would like to use. As the specification propagates along the route, each router examines it and modifies the parameters as need be. The modifications can only reduce the flow, not increase it (e.g., a lower data rate, not a higher one). When it gets to the other end, the parameters can be established.
As an example of what can be in a flow specification, consider the example of Fig. 5-30. This is based on RFC 2210 and RFC 2211 for Integrated Services, a QoS design we will cover in the next section. It has five parameters. The first two parameters, the token bucket rate and token bucket size, use a token bucket to give the maximum sustained rate the sender may transmit, averaged over a long time interval, and the largest burst it can send over a short time interval.
Parameter            Unit
Token bucket rate    Bytes/sec
Token bucket size    Bytes
Peak data rate       Bytes/sec
Minimum packet size  Bytes
Maximum packet size  Bytes

Figure 5-30. An example flow specification.
The third parameter, the peak data rate, is the maximum transmission rate tolerated, even for brief time intervals. The sender must never exceed this rate even for short bursts.
The last two parameters specify the minimum and maximum packet sizes, including the transport and network layer headers (e.g., TCP and IP). The minimum size is useful because processing each packet takes some fixed time, no matter how short. A router may be prepared to handle 10,000 packets/sec of 1 KB each, but not be prepared to handle 100,000 packets/sec of 50 bytes each, even though this represents a lower data rate. The maximum packet size is important due to internal network limitations that may not be exceeded. For example, if part of the path goes over an Ethernet, the maximum packet size will be restricted to no more than 1500 bytes no matter what the rest of the network can handle.
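To make this concrete, here is one way the five parameters of Fig. 5-30 might be represented, together with a merging rule for the hop-by-hop modification just described. Both the FlowSpec type and the restrict function are our own sketch (the actual RSVP encoding in RFC 2210 differs); the only property they capture is that a router may reduce the flow, never increase it.

```python
from dataclasses import dataclass

@dataclass
class FlowSpec:
    """The five parameters of the Fig. 5-30 flow specification."""
    token_rate: float    # token bucket rate, bytes/sec
    bucket_size: int     # token bucket size, bytes
    peak_rate: float     # peak data rate, bytes/sec
    min_packet: int      # minimum packet size, bytes (incl. headers)
    max_packet: int      # maximum packet size, bytes

def restrict(proposed: FlowSpec, local: FlowSpec) -> FlowSpec:
    """A hypothetical per-router rule: tighten every parameter, so the
    specification can only shrink as it propagates along the route."""
    return FlowSpec(
        token_rate=min(proposed.token_rate, local.token_rate),
        bucket_size=min(proposed.bucket_size, local.bucket_size),
        peak_rate=min(proposed.peak_rate, local.peak_rate),
        min_packet=max(proposed.min_packet, local.min_packet),
        max_packet=min(proposed.max_packet, local.max_packet),
    )
```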
An interesting question is how a router turns a flow specification into a set of specific resource reservations. At first glance, it might appear that if a router has a link that runs at, say, 1 Gbps and the average packet is 1000 bits, it can process 1 million packets/sec. This is not the case, though, because there will always be idle periods on the link due to statistical fluctuations in the load. If the link needs every bit of capacity to get its work done, idling for even a few bits creates a backlog it can never get rid of.
Even with a load slightly below the theoretical capacity, queues can build up and delays can occur. Consider a situation in which packets arrive at random with a mean arrival rate of λ packets/sec. The packets have random lengths and can be sent on the link with a mean service rate of µ packets/sec. Under the assumption that both the arrival and service distributions are Poisson distributions (what is called an M/M/1 queueing system, where ‘‘M’’ stands for Markov, i.e., Poisson), it can be proven using queueing theory that the mean delay experienced by a packet, T, is

T = (1/µ) × 1/(1 − λ/µ) = (1/µ) × 1/(1 − ρ)

where ρ = λ/µ is the utilization. The first factor, 1/µ, is what the service time would be in the absence of competition. The second factor is the slowdown due to competition with other flows. For example, if λ = 950,000 packets/sec and µ = 1,000,000 packets/sec, then ρ = 0.95 and the mean delay experienced by each packet will be 20 µsec instead of 1 µsec. This time accounts for both the queueing time and the service time, as can be seen when the load is very low (λ/µ ≈ 0). If there are, say, 30 routers along the flow's route, queueing delay alone will account for 600 µsec of delay.
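The arithmetic is worth checking for yourself. A few lines of Python (our own helper, applying the formula above) confirm the numbers in this example:

```python
def mm1_mean_delay(lam, mu):
    """Mean delay T = (1/mu) * 1/(1 - rho) for an M/M/1 queue,
    where rho = lam/mu; algebraically the same as T = 1/(mu - lam)."""
    rho = lam / mu
    assert rho < 1, "the queue is unstable at or above full utilization"
    return 1.0 / (mu * (1.0 - rho))

t = mm1_mean_delay(950_000, 1_000_000)   # arrival and service rates, packets/sec
print(t * 1e6)                           # 20.0 microseconds, vs. 1.0 when idle
```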
One method of relating flow specifications to router resources that correspond to bandwidth and delay performance guarantees is given by Parekh and Gallager (1993, 1994). It is based on traffic sources shaped by (R, B) token buckets and WFQ at routers. Each flow is given a WFQ weight W large enough to drain its token bucket rate R, as shown in Fig. 5-31. For example, if the flow has a rate of 1 Mbps and the router and output link have a capacity of 1 Gbps, the weight for the flow must be greater than 1/1000th of the total of the weights for all of the flows at that router for the output link. This guarantees the flow a minimum bandwidth. If it cannot be given a large enough rate, the flow cannot be admitted.
Figure 5-31. Bandwidth and delay guarantees with token buckets and WFQ: an (R, B)-shaped traffic source feeds a weighted fair queue, with weight W, at a router whose output link has capacity C.
The largest queueing delay the flow will see is a function of the burst size of the token bucket. Consider the two extreme cases. If the traffic is smooth, without any bursts, packets will be drained from the router just as quickly as they arrive. There will be no queueing delay (ignoring packetization effects). On the other hand, if the traffic is saved up in bursts, then a maximum-size burst, B, may arrive at the router all at once. In this case, the maximum queueing delay, D, will be the time taken to drain this burst at the guaranteed bandwidth, or B/R (again, ignoring packetization effects). If this delay is too large, the flow must request more bandwidth from the network.
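Both guarantees can be computed directly from the flow's (R, B) token bucket and the link capacity. The helper below is our own sketch of this arithmetic; the parameter values are made up for illustration.

```python
def wfq_guarantees(rate_bps, burst_bytes, link_capacity_bps):
    """Bandwidth and delay bounds for an (R, B)-shaped flow served by WFQ:
    the minimum fraction of the total weight the flow needs, and the
    worst-case queueing delay D = B/R to drain a maximum-size burst."""
    weight_fraction = rate_bps / link_capacity_bps
    max_delay_sec = (burst_bytes * 8) / rate_bps
    return weight_fraction, max_delay_sec

share, delay = wfq_guarantees(rate_bps=1_000_000,            # R = 1 Mbps
                              burst_bytes=125_000,            # B = 125 KB (assumed)
                              link_capacity_bps=1_000_000_000)
print(share)   # 0.001: at least 1/1000th of the weights, as in the text
print(delay)   # 1.0 second of worst-case queueing delay
```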
These guarantees are hard. The token buckets bound the burstiness of the source, and fair queueing isolates the bandwidth given to different flows. This means that the flow will meet its bandwidth and delay guarantees regardless of how the other competing flows behave at the router. Those other flows cannot break the guarantee even by saving up traffic and all sending at once.
Moreover, the result holds for a path through multiple routers in any network topology. Each flow gets a minimum bandwidth because that bandwidth is guaranteed at each router. The reason each flow gets a maximum delay is more subtle. In the worst case that a burst of traffic hits the first router and competes with the traffic of other flows, it will be delayed up to the maximum delay of D. However, this delay will also smooth the burst. In turn, this means that the burst will incur no further queueing delays at later routers. The overall queueing delay will be at most D.
5.4.4 Integrated Services
Between 1995 and 1997, IETF put a lot of effort into devising an architecture for streaming multimedia. This work resulted in over two dozen RFCs, starting with RFC 2205 through RFC 2212. The generic name for this work is integrated services. It was aimed at both unicast and multicast applications. An example of the former is a single user streaming a video clip from a news site. An example of the latter is a collection of digital television stations broadcasting their programs as streams of IP packets to many receivers at various locations. Below we will concentrate on multicast, since unicast is a special case of multicast.
In many multicast applications, groups can change membership dynamically, for example, as people enter a video conference and then get bored and switch to a soap opera or the croquet channel. Under these conditions, the approach of having the senders reserve bandwidth in advance does not work well, since it would require each sender to track all entries and exits of its audience. For a system designed to transmit television with millions of subscribers, it would not work at all.
RSVP—The Resource reSerVation Protocol
The main part of the integrated services architecture that is visible to the users of the network is RSVP (Resource reSerVation Protocol). It is described in RFC 2205 through RFC 2210. This protocol is used for making the reservations; other protocols are used for sending the data. RSVP allows multiple senders to transmit to multiple groups of receivers, permits individual receivers to switch channels freely, and also optimizes bandwidth use while at the same time eliminating congestion.
In its simplest form, the protocol uses multicast routing using spanning trees, as discussed earlier. Each group is assigned a group address. To send to a group, a sender puts the group's address in its packets. The standard multicast routing algorithm then builds a spanning tree covering all group members. The routing algorithm is not part of RSVP. The only difference from normal multicasting is a little extra information that is multicast to the group periodically to tell the routers along the tree to maintain certain data structures in their memories.
As an example, consider the network of Fig. 5-32(a). Hosts 1 and 2 are multicast senders, and hosts 3, 4, and 5 are multicast receivers. In this example, the senders and receivers are disjoint, but in general, the two sets may overlap. The multicast trees for hosts 1 and 2 are shown in Fig. 5-32(b) and Fig. 5-32(c), respectively.
Figure 5-32. (a) A network. (b) The multicast spanning tree for host 1. (c) The multicast spanning tree for host 2.
To get better reception and eliminate congestion, any of the receivers in a group can send a reservation message up the tree to the sender. The message is propagated using the reverse path forwarding algorithm discussed earlier. At each hop, the router notes the reservation and reserves the necessary bandwidth. We saw in the previous section how a weighted fair queueing scheduler can be used to make this reservation. If insufficient bandwidth is available, it reports back failure. By the time the message gets back to the source, bandwidth has been reserved all the way from the sender to the receiver making the reservation request along the spanning tree.
An example of such a reservation is shown in Fig. 5-33(a). Here host 3 has requested a channel to host 1. Once it has been established, packets can flow from 1 to 3 without congestion. Now consider what happens if host 3 next reserves a channel to the other sender, host 2, so the user can watch two television programs at once. A second path is reserved, as illustrated in Fig. 5-33(b). Note that two separate channels are needed from host 3 to router E because two independent streams are being transmitted.
Figure 5-33. (a) Host 3 requests a channel to host 1. (b) Host 3 then requests a second channel, to host 2. (c) Host 5 requests a channel to host 1.
Finally, in Fig. 5-33(c), host 5 decides to watch the program being transmitted by host 1 and also makes a reservation. First, dedicated bandwidth is reserved as far as router H. However, this router sees that it already has a feed from host 1, so if the necessary bandwidth has already been reserved, it does not have to reserve any more. Note that hosts 3 and 5 might have asked for different amounts of bandwidth (e.g., if host 3 is playing on a small screen and only wants the low-resolution information), so the capacity reserved must be large enough to satisfy the greediest receiver.
When making a reservation, a receiver can (optionally) specify one or more sources that it wants to receive from. It can also specify whether these choices are fixed for the duration of the reservation or whether the receiver wants to keep open the option of changing sources later. The routers use this information to optimize bandwidth planning. In particular, two receivers are only set up to share a path if they both agree not to change sources later on.
The reason for this strategy in the fully dynamic case is that reserved bandwidth is decoupled from the choice of source. Once a receiver has reserved bandwidth, it can switch to another source and keep that portion of the existing path that is valid for the new source. If host 2 is transmitting several video streams in real time, for example, a TV broadcaster with multiple channels, host 3 may switch between them at will without changing its reservation: the routers do not care what program the receiver is watching.
5.4.5 Differentiated Services
Flow-based algorithms have the potential to offer good quality of service to one or more flows because they reserve whatever resources are needed along the route. However, they also have a downside. They require an advance setup to establish each flow, something that does not scale well when there are thousands or millions of flows. Also, they maintain internal per-flow state in the routers, making them vulnerable to router crashes. Finally, the changes required to the router code are substantial and involve complex router-to-router exchanges for setting up the flows. As a consequence, while work continues to advance integrated services, few deployments of it or anything like it exist yet.
For these reasons, IETF has also devised a simpler approach to quality of service, one that can be largely implemented locally in each router without advance setup and without having the whole path involved. This approach is known as class-based (as opposed to flow-based) quality of service. IETF has standardized an architecture for it, called differentiated services, which is described in RFC 2474, RFC 2475, and numerous others. We will now describe it.
Differentiated services can be offered by a set of routers forming an administrative domain (e.g., an ISP or a telco). The administration defines a set of service classes with corresponding forwarding rules. If a customer subscribes to differentiated services, customer packets entering the domain are marked with the class to which they belong. This information is carried in the Differentiated services field of IPv4 and IPv6 packets (described in Sec. 5.7.1). The classes are defined as per-hop behaviors because they correspond to the treatment the packet will receive at each router, not a guarantee across the network. Better service is provided to packets with some per-hop behaviors (e.g., premium service) than to others (e.g., regular service). Traffic within a class may be required to conform to some specific shape, such as a leaky bucket with some specified drain rate. An operator with a good nose for business might charge extra for each premium packet transported or might allow up to N premium packets per month for a fixed additional monthly fee. Note that this scheme requires no advance setup, no resource reservation, and no time-consuming end-to-end negotiation for each flow, as with integrated services. This makes differentiated services relatively easy to implement.
Class-based service also occurs in other industries. For example, package delivery companies often offer overnight, two-day, and three-day service. Airlines offer first class, business class, and cattle-class service. Long-distance trains have multiple service classes. The Paris subway even had two service classes for the same quality of seating. For packets, the classes may differ in terms of delay, jitter, and probability of being discarded in the event of congestion, among other possibilities (but probably not roomier Ethernet frames).
To make the difference between flow-based quality of service and class-based quality of service clearer, consider an example: Internet telephony. With a flow-based scheme, each telephone call gets its own resources and guarantees. With a class-based scheme, all the telephone calls together get the resources reserved for the class telephony. These resources cannot be taken away by packets from the Web browsing class or other classes, but no telephone call gets any private resources reserved for it alone.
Expedited Forwarding
The choice of service classes is up to each operator, but since packets are often forwarded between networks run by different operators, IETF has defined some network-independent service classes. The simplest class is expedited forwarding, so let us start with that one. It is described in RFC 3246.
The idea behind expedited forwarding is very simple. Two classes of service are available: regular and expedited. The vast majority of the traffic is expected to be regular, but a limited fraction of the packets are expedited. The expedited packets should be able to transit the network as though no other packets were present. In this way, they will get low loss, low delay, and low jitter service, just what is needed for VoIP. A symbolic representation of this ‘‘two-tube’’ system is given in Fig. 5-34. Note that there is still just one physical line. The two logical pipes shown in the figure represent a way to reserve bandwidth for different classes of service, not a second physical line.
One way to implement this strategy is as follows. Packets are classified as expedited or regular and marked accordingly. This step might be done on the sending host or in the ingress (first) router. The advantage of doing classification on the sending host is that more information is available about which packets belong to which flows. This task may be performed by networking software or even the operating system, to avoid having to change existing applications. For example, it is becoming common for VoIP packets to be marked for expedited service by hosts. If the packets pass through a corporate network or ISP that supports expedited service, they will receive preferential treatment. If the network does not support expedited service, no harm is done, so it makes sense to at least try.
Figure 5-34. Expedited packets experience a traffic-free network.
Of course, if the marking is done by the host, the ingress router is likely to police the traffic to make sure that customers are not sending more expedited traffic than they have paid for. Within the network, the routers may have two output queues for each outgoing line, one for expedited packets and one for regular packets. When a packet arrives, it is queued accordingly. The expedited queue is given priority over the regular one, for example, by using a priority scheduler. In this way, expedited packets see an unloaded network, even when there is, in fact, a heavy load of regular traffic.
Assured Forwarding
A somewhat more elaborate scheme for managing the service classes is called assured forwarding. It is described in RFC 2597. Assured forwarding specifies that there shall be four priority classes, each class having its own resources. The top three classes might be called gold, silver, and bronze. In addition, it defines three discard classes for packets that are experiencing congestion: low, medium, and high. Taken together, these factors define 12 service classes.
Figure 5-35 shows one way packets might be processed under assured forwarding. The first step is to classify the packets into one of the four priority classes. As before, this step might be done on the sending host (as shown in the figure) or in the ingress router, and the rate of higher-priority packets may be limited by the operator as part of the service offering.
The next step is to determine the discard class for each packet. This is done by passing the packets of each priority class through a traffic policer such as a token bucket. The policer lets all of the traffic through, but it identifies packets that fit within small bursts as low discard, packets that exceed small bursts as medium discard, and packets that exceed large bursts as high discard. The combination of priority and discard class is then encoded in each packet.
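The encoding itself is standardized: RFC 2597 assigns the assured forwarding class AFxy (priority class x, drop precedence y) the 6-bit DSCP value 8x + 2y. A tiny helper (our own, not a library function) enumerates all 12 codepoints:

```python
def af_dscp(priority_class, drop_precedence):
    """DSCP codepoint for assured forwarding class AFxy (RFC 2597):
    the 6-bit value is 8*x + 2*y for class x in 1..4 and drop y in 1..3."""
    assert 1 <= priority_class <= 4 and 1 <= drop_precedence <= 3
    return 8 * priority_class + 2 * drop_precedence

# The twelve service classes: AF11 -> 10, AF12 -> 12, ..., AF43 -> 38.
for x in range(1, 5):
    print(f"AF{x}y:", [af_dscp(x, y) for y in (1, 2, 3)])
```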
Figure 5-35. A possible implementation of assured forwarding: a classifier at the packet source sorts traffic into the four priority classes (gold, silver, bronze, and one more), a policer assigns the twelve priority/drop classes and sets the DiffServ mark, and routers serve the classes with weighted fair queues.

Finally, the packets are processed by routers in the network with a packet scheduler that carefully distinguishes the different classes. A common choice is to use weighted fair queueing for the four priority classes, with higher classes given higher weights. In this way, the higher classes will get most of the bandwidth, but the lower classes will not be starved of bandwidth entirely. For example, if the weights double from one class to the next higher class, the higher class will get twice the bandwidth. Within a priority class, packets with a higher discard class can be preferentially dropped by running an algorithm such as RED. RED will start to drop packets as congestion builds but before the router has run out of buffer space. At this stage, there is still buffer space with which to accept low discard packets while dropping high discard packets.
5.5 INTERNETWORKING
Until now, we have implicitly assumed that there is a single homogeneous network, with each machine using the same protocol in each layer. Unfortunately, this assumption is wildly optimistic. Many different networks exist, including PANs, LANs, MANs, and WANs. We have described Ethernet, Internet over cable, the fixed and mobile telephone networks, 802.11, and more. Numerous protocols are in widespread use across these networks in every layer.
5.5.1 Internetworks: An Overview
In the following sections, we will take a careful look at the issues that arise when two or more networks are connected to form an internetwork, or more simply an internet.
It would be much simpler to join networks together if everyone used a single networking technology, and it is often the case that there is a dominant kind of network, such as Ethernet. Some pundits speculate that the multiplicity of technologies will go away as soon as everyone realizes how wonderful [fill in your favorite network] is. Do not count on it. History shows this to be wishful thinking. Different kinds of networks grapple with different problems, so, for example, Ethernet and satellite networks are always likely to differ. Reusing existing systems, such as running data networks on top of cable, the telephone network, and power lines, adds constraints that cause the features of the networks to diverge. Heterogeneity is here to stay.
If there will always be different networks, it would be simpler if we did not need to interconnect them. This also is very unlikely. Bob Metcalfe postulated that the value of a network with N nodes is the number of connections that may be made between the nodes, or N² (Gilder, 1993). This means that large networks are far more valuable than small networks because they allow many more connections, so there always will be an incentive to combine smaller networks.
The Internet is the prime example of this interconnection. (We will write Internet with a capital ‘‘I’’ to distinguish it from other internets, or connected networks.) The purpose of joining all these networks is to allow users on any of them to communicate with users on all the other ones. When you pay an ISP for Internet service, you may be charged depending on the bandwidth of your line, but what you are really paying for is the ability to exchange packets with any other host that is also connected to the Internet. After all, the Internet would not be very popular if you could only send packets to other hosts in the same city.
Since networks often differ in important ways, getting packets from one network to another is not always so easy. We must address problems of heterogeneity, and also problems of scale as the resulting internet grows very large. We will begin by looking at how networks can differ to see what we are up against. Then we shall see the approach used so successfully by IP, the network layer protocol of the Internet, including techniques for tunneling through networks, routing in internetworks, and packet fragmentation.
5.5.2 How Networks Differ
Networks can differ in many ways. Some of the differences, such as different modulation techniques or frame formats, are internal to the physical and data link layers. These differences will not concern us here. Instead, in Fig. 5-36 we list some of the differences that can be exposed to the network layer. It is papering over these differences that makes internetworking more difficult than operating within a single network.
When packets sent by a source on one network must transit one or more foreign networks before reaching the destination network, many problems can occur at the interfaces between networks. To start with, the source needs to be able to address the destination. What do we do if the source is on an Ethernet network and the destination is on the cellular telephone network? Assuming we can even specify a cellular destination from an Ethernet network, packets would cross from a connectionless network to a connection-oriented one. This may require that a new connection be set up on short notice, which injects a delay, and much overhead if the connection is not used for many more packets.
Item                 Some Possibilities
Service offered      Connectionless versus connection oriented
Addressing           Different sizes, flat or hierarchical
Broadcasting         Present or absent (also multicast)
Packet size          Every network has its own maximum
Ordering             Ordered and unordered delivery
Quality of service   Present or absent; many different kinds
Reliability          Different levels of loss
Security             Privacy rules, encryption, etc.
Parameters           Different timeouts, flow specifications, etc.
Accounting           By connect time, packet, byte, or not at all

Figure 5-36. Some of the many ways networks can differ.
Many specific differences may have to be accommodated as well. How do we multicast a packet to a group with some members on a network that does not support multicast? The differing maximum packet sizes used by different networks can be a major nuisance, too. How do you pass an 8000-byte packet through a network whose maximum size is 1500 bytes? If packets on a connection-oriented network transit a connectionless network, they may arrive in a different order than they were sent. That is something the sender likely did not expect, and it might come as an (unpleasant) surprise to the receiver as well.
With effort, these kinds of differences can be papered over. For example, a gateway joining two networks might generate separate packets for each destination to simulate multicast. A large packet might be broken up, sent in pieces, and then joined back together. Receivers might buffer packets and deliver them in order.
Networks also can differ in large respects that are more difficult to reconcile. The clearest example is quality of service. If one network has strong QoS and the other offers best-effort service, it will be impossible to make bandwidth and delay guarantees for real-time traffic end to end. In fact, such guarantees can only be made while the best-effort network is running at low utilization, which is unlikely to be the goal of most ISPs. Security mechanisms are problematic, but at least encryption for confidentiality and data integrity can be layered on top of networks that do not already include it. Finally, differences in accounting can lead to unwelcome bills when normal usage suddenly becomes expensive, as roaming mobile phone users with data plans have discovered.
5.5.3 Connecting Heterogeneous Networks
There are two basic choices for connecting different networks: we can build devices that translate or convert packets from each kind of network into packets for each other network, or, as computer scientists often do, we can try to solve the problem by adding a layer of indirection and building a common layer on top of the different networks. In either case, the devices are placed at the boundaries between networks; initially, these devices were called gateways.
Early on, Cerf and Kahn (1974) argued for a common layer to hide the differences of existing networks. This approach has been tremendously successful, and the layer they proposed was eventually separated into the TCP and IP protocols. Almost four decades later, IP is the foundation of the modern Internet. For this accomplishment, Cerf and Kahn were awarded the 2004 Turing Award, informally known as the Nobel Prize of computer science. IP provides a universal packet format that all routers recognize and that can be passed through almost every network. IP has extended its reach from computer networks to take over the telephone network. It also runs on sensor networks and other tiny devices that were once presumed too resource-constrained to support it.
We have discussed several different devices that connect networks, including repeaters, hubs, switches, bridges, routers, and gateways. Repeaters and hubs just move bits from one wire to another. They are mostly analog devices and do not understand anything about higher layer protocols. Bridges and switches operate at the link layer. They can be used to build networks, but only with minor protocol translation in the process, for example, among 10-, 100-, and 1000-Mbps Ethernet switches. Our focus in this section is interconnection devices that operate at the network layer, namely the routers. We will leave gateways, which are higher-layer interconnection devices, until later.
Let us first explore at a high level how interconnection with a common net- work layer can be used to interconnect dissimilar networks. An internet comprised of 802.11, MPLS, and Ethernet networks is shown in Fig. 5-37(a). Suppose that the source machine on the 802.11 network wants to send a packet to the destination machine on the Ethernet network. Since these technologies are different, and they are further separated by another kind of network (MPLS), some added processing is needed at the boundaries between the networks.
Because different networks may, in general, have different forms of addressing, the packet carries a network layer address that can identify any host across the three networks. The first boundary the packet reaches is when it transitions from an 802.11 network to an MPLS network. Remember, 802.11 provides a connectionless service, but MPLS provides a connection-oriented service. This means that a virtual circuit must be set up to cross that network. Once the packet has traveled along the virtual circuit, it will reach the Ethernet network. At this boundary, the packet may be too large to be carried, since 802.11 can work with larger frames than Ethernet. To handle this problem, the packet is divided into fragments, and each fragment is sent separately. When the fragments reach the destination, they are reassembled. Then the packet has completed its journey.
Figure 5-37. (a) A packet crossing different networks. (b) Network and link layer protocol processing.

The protocol processing for this journey is shown in Fig. 5-37(b). The source accepts data from the transport layer and generates a packet with the common network layer header, which is IP in this example. The network header contains the ultimate destination address, which is used to determine that the packet should be sent via the first router. So the packet is encapsulated in an 802.11 frame whose destination is the first router and transmitted. At the router, the packet is removed from the frame's data field and the 802.11 frame header is discarded. The router now examines the IP address in the packet and looks up this address in its routing table. Based on this address, it decides to send the packet to the second router next. For this part of the path, an MPLS virtual circuit must be established to the second router, and the packet must be encapsulated with MPLS headers that travel this circuit. At the far end, the MPLS header is discarded and the network address is again consulted to find the next network layer hop. It is the destination itself. When a packet is too long to be sent over Ethernet, it is split into two portions. Each of these portions is put into the data field of an Ethernet frame and sent to the Ethernet address of the destination. At the destination, the Ethernet header is stripped from each of the frames, and the contents are reassembled. The packet has finally reached its destination.
Observe that there is an essential difference between the routed case and the switched (or bridged) case. With a router, the packet is extracted from the frame and the network address in the packet is used for deciding where to send it. With a switch (or bridge), the entire frame is transported on the basis of its MAC address.
Switches do not have to understand the network layer protocol being used to switch packets. Routers do.
Unfortunately, internetworking is not nearly as easy as we have made it sound. In fact, when bridges were introduced, it was intended that they would join different types of networks, or at least different types of LANs. They were to do this by translating frames from one LAN into frames from another LAN. However, this did not work well, for exactly the same reason that internetworking is difficult: the differences in the features of LANs, such as different maximum packet sizes and LANs with and without priority classes, are hard to mask. Today, bridges are predominantly used to connect the same kind of network at the link layer, and routers connect different networks at the network layer.
Internetworking has been very successful at building large networks, but it only works when there is a common network layer. There have, in fact, been many network protocols over time. Getting everybody to agree on a single format is difficult when companies perceive it to their commercial advantage to have a proprietary format that they control. Examples besides IP, which is now the near-universal network protocol, were IPX, SNA, and AppleTalk. None of these protocols are still in widespread use, but there will always be other protocols. The most relevant example now is probably IPv4 and IPv6. While these are both versions of IP, they are not compatible (or it would not have been necessary to create IPv6).
A router that can handle multiple network protocols is called a multiprotocol router. It must either translate the protocols or leave the connection for a higher protocol layer. Neither approach is entirely satisfactory. Connection at a higher layer, say, by using TCP, requires that all the networks implement TCP (which may not be the case). It also limits usage across the networks to applications that use TCP (which does not include many real-time applications).
The alternative is to translate packets between the networks. However, unless the packet formats are close relatives with the same information fields, such conversions will always be incomplete and often doomed to failure. For example, IPv6 addresses are 128 bits long. They will not fit in a 32-bit IPv4 address field, no matter how hard the router tries. Getting IPv4 and IPv6 to run in the same network has proven to be a major obstacle to the deployment of IPv6. (To be fair, so has getting customers to understand why they should want IPv6 in the first place.) Greater problems can be expected when translating between very different protocols, such as connectionless and connection-oriented network protocols. Given these difficulties, conversion is only rarely attempted. Arguably, even IP has only worked so well by serving as a kind of lowest common denominator. It requires little of the networks on which it runs, but offers only best-effort service as a result.
5.5.4 Connecting Endpoints Across Heterogeneous Networks
Handling the general case of making two different networks interwork is exceedingly difficult. However, there is a common special case that is manageable even for different network protocols. This case is where the source and destination hosts are on the same type of network, but there is a different network in between. As an example, think of an international bank with an IPv6 network in Paris, an IPv6 network in London, and connectivity between the offices via the IPv4 Internet. This situation is shown in Fig. 5-38.
Figure 5-38. Tunneling a packet from Paris to London.

The solution to this problem is a technique called tunneling. To send an IP packet to a host in the London office, a host in the Paris office constructs the packet containing an IPv6 address in London and sends it to the multiprotocol router that connects the Paris IPv6 network to the IPv4 Internet. When this router gets the IPv6 packet, it encapsulates the packet with an IPv4 header addressed to the IPv4 side of the multiprotocol router that connects to the London IPv6 network. That is, the router puts an (IPv6) packet inside an (IPv4) packet. When this wrapped packet arrives, the London router removes the original IPv6 packet and sends it onward to the destination host.
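The mechanics of tunnel entry and exit are simple enough to sketch. In the toy Python fragment below, plain dictionaries stand in for real packet headers; nothing here is an actual networking API.

```python
def tunnel_entry(ipv6_packet, tunnel_src_v4, tunnel_dst_v4):
    """Paris router: wrap the whole IPv6 packet in the payload of an
    IPv4 packet addressed to the far end of the tunnel."""
    return {"src": tunnel_src_v4, "dst": tunnel_dst_v4,
            "proto": 41,                # IANA protocol number for IPv6-in-IPv4
            "payload": ipv6_packet}

def tunnel_exit(ipv4_packet):
    """London router: strip the IPv4 header and forward the inner packet."""
    return ipv4_packet["payload"]
```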
The path through the IPv4 Internet can be seen as a big tunnel extending from one multiprotocol router to the other. The IPv6 packet just travels from one end of the tunnel to the other, snug in its nice box. It does not have to worry about dealing with IPv4 at all. Neither do the hosts in Paris or London. Only the multiprotocol routers have to understand both IPv4 and IPv6 packets. In effect, the entire trip from one multiprotocol router to the other is like a hop over a single link.
An analogy may make tunneling clearer. Consider a person driving her car from Paris to London. Within France, the car moves under its own power, but when it hits the English Channel, it is loaded onto a high-speed train and transported to England through the Chunnel (cars are not permitted to drive through the Chunnel). Effectively, the car is being carried as freight, as depicted in Fig. 5-39. At the far end, the car is let loose on the English roads and once again continues to move under its own power. Tunneling of packets through a foreign network works the same way.
Tunneling is widely used to connect isolated hosts and networks using other networks. The network that results is called an overlay since it has effectively been overlaid on the base network. Deployment of a network protocol with a new feature is a common reason, as our ‘‘IPv6 over IPv4’’ example shows. The disadvantage of tunneling is that none of the hosts on the network that is tunneled over can be reached because the packets cannot escape in the middle of the tunnel. However, this limitation of tunnels is turned into an advantage with VPNs (Virtual Private Networks). A VPN is simply an overlay that is used to provide a measure of security. We will explore VPNs when we get to Chap. 8.
Figure 5-39. Tunneling a car from France to England.
5.5.5 Internetwork Routing: Routing Across Multiple Networks
Routing through an internet poses the same basic problem as routing within a single network, but with some added complications. To start, the networks may internally use different routing algorithms. For example, one network may use link state routing and another distance vector routing. Since link state algorithms need to know the topology but distance vector algorithms do not, this difference alone would make it unclear how to find the shortest paths across the internet.
Networks run by different operators lead to bigger problems. First, the operators may have different ideas about what is a good path through the network. One operator may want the route with the least delay, while another may want the most inexpensive route. This will lead the operators to use different quantities to set the shortest-path costs (e.g., milliseconds of delay vs. monetary cost). The weights will not be comparable across networks, so shortest paths on the internet will not be well defined.
Worse yet, one operator may not want another operator to even know the details of the paths in its network, perhaps because the weights and paths may reflect sensitive information (such as the monetary cost) that represents a competitive business advantage.
Finally, the internet may be much larger than any of the networks that comprise it. It may therefore require routing algorithms that scale well by using a hierarchy, even if none of the individual networks need to use a hierarchy.
All of these considerations lead to a two-level routing algorithm. Within each network, an intradomain or interior gateway protocol is used for routing. (‘‘Gateway’’ is an older term for ‘‘router.’’) It might be a link state protocol of the kind we have already described. Across the networks that make up the internet, an interdomain or exterior gateway protocol is used. The networks may all use different intradomain protocols, but they must use the same interdomain protocol. In the Internet, the interdomain routing protocol is called Border Gateway Protocol (BGP). We will describe it in Sec. 5.7.7.
There is one more important term to introduce. Since each network is operated independently of all the others, it is often referred to as an AS or Autonomous System. A good mental model for an AS is an ISP network. In fact, an ISP network may be comprised of more than one AS, if it is managed, or has been acquired, as multiple networks. But the difference is usually not significant.
The two levels are usually not strictly hierarchical, as highly suboptimal paths might result if a large international network and a small regional network were both abstracted to be a single network. However, relatively little information about routes within the networks is exposed to find routes across the internetwork. This helps to address all of the complications. It improves scaling and lets operators freely select routes within their own networks using a protocol of their choosing. It also does not require weights to be compared across networks or expose sensitive information outside of networks.
However, we have said little so far about how the routes across the networks of the internet are determined. In the Internet, a large determining factor is the business arrangements between ISPs. Each ISP may charge or receive money from the other ISPs for carrying traffic. Another factor is that if internetwork routing requires crossing international boundaries, various laws may suddenly come into play, such as Sweden's strict privacy laws about exporting personal data about Swedish citizens from Sweden. All of these nontechnical factors are wrapped up in the concept of a routing policy that governs the way autonomous networks select the routes that they use. We will return to routing policies when we describe BGP.
5.5.6 Supporting Different Packet Sizes: Packet Fragmentation
Each network or link imposes some maximum size on its packets. These limits have various causes, among them:
1. Hardware (e.g., the size of an Ethernet frame).
2. Operating system (e.g., all buffers are 512 bytes).
3. Protocols (e.g., the number of bits in the packet length field).
4. Compliance with some (inter)national standard.
5. Desire to reduce error-induced retransmissions to some level.
6. Desire to prevent one packet from occupying the channel too long.
The result of all these factors is that the network designers are not free to choose any old maximum packet size they wish. Maximum payloads for some common technologies are 1500 bytes for Ethernet and 2272 bytes for 802.11. IP is more generous: it allows packets as big as 65,515 bytes.
Hosts usually prefer to transmit large packets because this reduces packet overheads such as bandwidth wasted on header bytes. An obvious internetworking problem appears when a large packet wants to travel through a network whose maximum packet size is too small. This nuisance has been a persistent issue, and solutions to it have evolved along with much experience gained on the Internet. One solution is to make sure the problem does not occur in the first place. However, this is easier said than done. A source does not usually know the path a packet will take through the network to a destination, so it certainly does not know how small a packet has to be to get there. This packet size is called the Path MTU (Path Maximum Transmission Unit). Even if the source did know the path MTU, packets are routed independently in a connectionless network such as the Internet. This routing means that paths may suddenly change, which can unexpectedly change the path MTU.
The alternative solution to the problem is to allow routers to break up packets into fragments, sending each fragment as a separate network layer packet. However, as every parent of a small child knows, converting a large object into small fragments is considerably easier than the reverse process. (Physicists have even given this effect a name: the second law of thermodynamics.) Packet-switching networks, too, have trouble putting the fragments back together again.
Two opposing strategies exist for recombining the fragments back into the original packet. The first strategy is to make all the fragmentation caused by a ‘‘small-packet’’ network transparent to any subsequent networks through which the packet must pass on its way to the ultimate destination. This option is shown in Fig. 5-40(a). In this approach, when an oversized packet arrives at G1, the router breaks it up into fragments. Each fragment is addressed to the same exit router, G2, where the pieces are recombined. In this way, passage through the small-packet network is made transparent. Subsequent networks are not even aware that fragmentation has occurred.
Transparent fragmentation is straightforward but has some problems. For one thing, the exit router must know when it has received all the pieces, so either a count field or an ‘‘end-of-packet’’ bit must be provided. Also, because all packets must exit via the same router so that they can be reassembled, the routes are constrained. By not allowing some fragments to follow one route to the ultimate destination and other fragments a disjoint route, some performance may be lost. More significant is the amount of work that the router may have to do. It may need to buffer the fragments as they arrive, and decide when to throw them away if not all of the fragments arrive. Some of this work may be wasteful, too, as the packet may pass through a series of small-packet networks and need to be repeatedly fragmented and reassembled.
The other fragmentation strategy is to refrain from recombining fragments at any intermediate routers. Once a packet has been fragmented, each fragment is treated as though it were an original packet. The routers pass the fragments, as shown in Fig. 5-40(b), and reassembly is performed only at the destination host.
The main advantage of nontransparent fragmentation is that it requires routers to do less work. IP works this way. A complete design requires that the fragments be numbered in such a way that the original data stream can be reconstructed.
[Figure 5-40. (a) Transparent fragmentation: G1 fragments a large packet on entering the small-packet network, G2 reassembles the fragments on exit, G3 fragments again, and G4 reassembles again. (b) Nontransparent fragmentation: G1 fragments a large packet, and the fragments are not reassembled until the final destination (a host) is reached.]
The design used by IP is to give every fragment a packet number (carried on all packets), an absolute byte offset within the packet, and a flag indicating whether it is the end of the packet. An example is shown in Fig. 5-41. While simple, this design has some attractive properties. Fragments can be placed in a buffer at the destination in the right place for reassembly, even if they arrive out of order. Fragments can also be fragmented if they pass over a network with a yet smaller MTU. This is shown in Fig. 5-41(c). Retransmissions of the packet (if all fragments were not received) can be fragmented into different pieces. Finally, fragments can be of arbitrary size, down to a single byte plus the packet header. In all cases, the destination simply uses the packet number and fragment offset to place the data in the right position, and the end-of-packet flag to determine when it has the complete packet.
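The following sketch illustrates this numbering scheme using the data of Fig. 5-41. It is a simplification: real IP counts offsets in units of 8 bytes and uses a more-fragments flag rather than an end-of-packet bit, and the function names here are invented for illustration.

def fragment(packet_number, data, mtu):
    # Split data into fragments of at most mtu payload bytes. Each
    # fragment is (packet_number, byte_offset, end_of_packet_bit, chunk).
    frags = []
    for off in range(0, len(data), mtu):
        chunk = data[off:off + mtu]
        end = 1 if off + len(chunk) == len(data) else 0
        frags.append((packet_number, off, end, chunk))
    return frags

def refragment(frag, mtu):
    # A fragment can itself be fragmented on a smaller-MTU network:
    # offsets are absolute, so only the base needs shifting, and only
    # the final piece of the final fragment keeps the end bit.
    pkt, base, end, chunk = frag
    return [(pkt, base + off, bit and end, c)
            for (pkt, off, bit, c) in fragment(pkt, chunk, mtu)]

def reassemble(frags):
    # Destination: place each byte at its absolute offset; done when the
    # end bit has been seen and no gaps remain, regardless of arrival order.
    buf, total = {}, None
    for pkt, off, end, chunk in frags:
        for i, byte in enumerate(chunk):
            buf[off + i] = byte
        if end:
            total = off + len(chunk)
    if total is not None and len(buf) == total:
        return bytes(buf[i] for i in range(total))
    return None  # still incomplete

# Reproducing Fig. 5-41: packet 27, 10 data bytes, MTU 8 then MTU 5.
first = fragment(27, b"ABCDEFGHIJ", 8)                 # as in part (b)
second = [p for f in first for p in refragment(f, 5)]  # as in part (c)
assert reassemble(second) == b"ABCDEFGHIJ"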
Unfortunately, this design still has problems. The overhead can be higher than with transparent fragmentation because fragment headers are now carried over some links where they may not be needed. But the real problem is the existence of fragments in the first place. Kent and Mogul (1987) argued that fragmentation is detrimental to performance because, as well as the header overheads, a whole packet is lost if any of its fragments are lost, and because fragmentation is more of a burden for hosts than was originally realized.
This leads us back to the original solution of getting rid of fragmentation in the network—the strategy used in the modern Internet. The process is called path MTU discovery (Mogul and Deering, 1990). It works like this. Each IP packet is sent with its header bits set to indicate that no fragmentation is allowed to be performed. If a router receives a packet that is too large, it generates an error packet, returns it to the source, and drops the packet.
[Figure 5-41. Fragmentation when the elementary data size is 1 byte. Each fragment header carries the packet number, the number of the first elementary fragment in the packet, and an end-of-packet bit. (a) Original packet, containing 10 data bytes: (27, 0, 1) A B C D E F G H I J. (b) Fragments after passing through a network with maximum packet size of 8 payload bytes plus header: (27, 0, 0) A B C D E F G H and (27, 8, 1) I J. (c) Fragments after passing through a size-5 gateway: (27, 0, 0) A B C D E, (27, 5, 0) F G H, and (27, 8, 1) I J.]
This is shown in Fig. 5-42. When the source receives the error packet, it uses the information inside to refragment the packet into pieces that are small enough for the router to handle. If a router further down the path has an even smaller MTU, the process is repeated.
[Figure 5-42. Path MTU discovery. The source sends a packet with length 1400 along a path whose links have MTUs of 1400, 1200, and 900; routers along the way answer ‘‘Try 1200’’ and then ‘‘Try 900’’ before the packet fits.]
The advantage of path MTU discovery is that the source now knows what length packet to send. If the routes and path MTU change, new error packets will be triggered and the source will adapt to the new path. However, fragmentation is still needed between the source and the destination unless the higher layers learn the path MTU and pass the right amount of data to IP. TCP and IP are typically
implemented together (as ‘‘TCP/IP’’) to be able to pass this sort of information. Even if this is not done for other protocols, fragmentation has still been moved out of the network and into the hosts.
The disadvantage of path MTU discovery is that there may be added startup delays simply to send a packet. More than one round-trip delay may be needed to probe the path and find the MTU before any data is delivered to the destination. This raises the question of whether there are better designs. The answer is probably ‘‘Yes.’’ Consider the design in which each router simply truncates packets that exceed its MTU. This would ensure that the destination learns the MTU as rapidly as possible (from the amount of data that was delivered) and receives some of the data.
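A tiny sketch of this thought experiment, with hypothetical names, shows why truncation would converge in a single one-way trip:

def deliver_truncated(size, link_mtus):
    # Each router truncates rather than drops, so the length that finally
    # arrives equals the path MTU, and some data is delivered immediately.
    for mtu in link_mtus:
        size = min(size, mtu)
    return size  # destination infers path MTU from the delivered length

print(deliver_truncated(1400, [1400, 1200, 900]))  # 900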
5.6 SOFTWARE-DEFINED NETWORKING
Traffic management and engineering have historically been very challenging: they require network operators to tune the configuration parameters of routing protocols, which then recompute routes. Traffic flows along the new paths, resulting in a rebalancing of load. Unfortunately, the mechanisms for controlling traffic in this manner are indirect: changes to routing configuration result in changes to routing both within the network and between networks, and these protocols can interact in unpredictable ways. SDN (Software-Defined Networking) aims to fix many of these problems. We will discuss it below.
5.6.1 Overview
In a certain way, networks have always been ‘‘software defined,’’ in the sense that configurable software running on routers is responsible for looking up information in packets and making forwarding decisions about them. Yet, the software that runs the routing algorithms and implements other logic about packet forwarding was historically vertically integrated with the networking hardware. An operator who bought a Cisco or Juniper router was, in some sense, stuck with the software technology that the vendor shipped with the hardware. For example, making changes to the way OSPF or BGP work was simply not possible. One of the main concepts driving SDN was the recognition that the control plane, the software and logic that select routes and decide how traffic should be forwarded, runs in software and can operate completely separately from the data plane, the hardware-based technology that is responsible for actually performing lookups on packets and deciding what to do with them. The two planes are shown in Fig. 5-43.
Given the architectural separation of the control plane and the data plane, the next logical step is to recognize that the control plane need not run on the network hardware at all! In fact, one common instantiation of SDN involves a
logically centralized program, often written in a high-level language (e.g., Python, Java, Golang, C), making logical decisions about forwarding and communicating those decisions to every forwarding device in the network. That communication channel between the high-level software program and the underlying hardware could be anything that the network device understands. One of the first SDN controllers used BGP itself as a control plane (Feamster et al., 2003); subsequently, technologies such as OpenFlow, NETCONF, and YANG have emerged as more flexible ways to communicate control-plane information to network devices. In some sense, SDN was a reincarnation of a well-established idea (i.e., centralized control) at a time when various enablers (open chipset APIs, software control of distributed systems) were mature enough for the architectural ideas to finally gain a foothold.
Figure 5-43. Control and data plane separation in SDN.
While the technology of SDN continues to evolve rapidly, the central tenet of the separation of the data and control planes remains invariant. SDN technology has evolved over a number of years; readers who want the complete history and genesis of this increasingly popular technology can consult Feamster et al. (2013). Below, we survey several of the major trends in SDN: (1) control over routing and forwarding (i.e., the technology behind the control plane); (2) programmable hardware and customizable forwarding (i.e., the technology that makes the data plane more programmable); and (3) programmable network telemetry (a network management application that puts the two pieces together and in many ways may be the ‘‘killer app’’ for SDN).
5.6.2 The SDN Control Plane: Logically Centralized Software Control
One of the main technical ideas that underlies SDN is a control plane that runs separately from the routers, often as a single, logically centralized program. In some sense, SDN has always really existed: routers are configurable, and many
large networks would often even auto-generate their router configurations from a centralized database, keep them in version control, and push those configurations to the routers with scripts. While, in a pedantic sense, this kind of setup could be called an SDN, it gives operators only limited control over how traffic is forwarded through the network. More typically, SDN control programs (sometimes called ‘‘controllers’’) are responsible for more of the control logic, such as computing the paths through the network on behalf of the routers and simply updating the resulting forwarding tables remotely.
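As a rough illustration of this division of labor, the following sketch shows a logically centralized controller computing shortest-path next hops for a small, invented topology and pushing the resulting tables to each switch; push_table() is a stand-in for whatever southbound protocol the devices actually speak.

from collections import deque

TOPOLOGY = {            # invented four-switch topology (adjacency lists)
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

def next_hops(dst):
    # BFS outward from dst: for each switch, the neighbor that lies on a
    # shortest path toward dst.
    hop = {dst: None}
    queue = deque([dst])
    while queue:
        node = queue.popleft()
        for nbr in TOPOLOGY[node]:
            if nbr not in hop:
                hop[nbr] = node   # nbr reaches dst via node
                queue.append(nbr)
    return hop

def push_table(switch, table):
    # Stand-in for the real channel to the device (e.g., OpenFlow, NETCONF).
    print("install on", switch, table)

# The controller builds one forwarding entry per destination per switch.
tables = {sw: {} for sw in TOPOLOGY}
for dst in TOPOLOGY:
    for sw, nh in next_hops(dst).items():
        if nh is not None:
            tables[sw][dst] = nh
for sw, table in tables.items():
    push_table(sw, table)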
Early work in software-defined networking aimed to make it easier for network operators to perform traffic engineering tasks by directly controlling the routes that each router in the network selects, rather than relying on indirect tuning of network configuration parameters. Early incarnations of SDN thus aimed to work within the constraints of existing Internet routing protocols, using them to directly control routes. One such example was the RCP (Routing Control Platform) (Feamster et al., 2003), which was subsequently deployed in backbone networks to perform traffic load balancing and defend against denial-of-service attacks. Subsequent developments included a system called Ethane (Casado et al., 2007), which used centralized software control to authenticate hosts within a network. One of the problems with Ethane, however, was that it required customized switches to operate, which limited its deployment in practice.
After these benefits of SDN for network management were demonstrated, network operators and vendors began to take notice. Additionally, there was a convenient back door to making the switches even more flexible through a programmable control plane: many network switches relied on a common Broadcom chipset, which had an interface that allowed direct writes into switch memory. A team of researchers worked with switch vendors to expose this interface to software programs, ultimately developing a protocol called OpenFlow (McKeown et al., 2008). The OpenFlow protocol was implemented by many switch vendors trying to compete with the dominant incumbent switch vendor, Cisco. Initially, the protocol supported a very simple interface: writes into a content-addressable memory that acted as a simple match-action table. This match-action table allowed a switch to identify packets that matched one or more fields in the packet header (e.g., MAC address, IP address) and perform one of a set of possible actions, including forwarding the packet to a specific port, dropping it, or sending it to an off-path software controller.
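A minimal sketch of such a match-action table might look as follows; the field names, priority scheme, and action tuples are simplifications of what OpenFlow actually specifies.

TABLE = [  # (priority, match fields, action); omitted fields act as wildcards,
           # so the empty match at the bottom catches every packet (table miss).
    (10, {"ip_dst": "10.0.0.7", "eth_dst": "aa:bb:cc:dd:ee:ff"}, ("forward", 3)),
    (5,  {"ip_dst": "10.0.0.7"},                                 ("forward", 1)),
    (1,  {},                                                     ("controller",)),
]

def lookup(packet):
    # Return the action of the highest-priority entry whose fields all match.
    for _prio, match, action in sorted(TABLE, key=lambda e: -e[0]):
        if all(packet.get(field) == value for field, value in match.items()):
            return action
    return ("drop",)

pkt = {"eth_dst": "11:22:33:44:55:66", "ip_dst": "10.0.0.7"}
print(lookup(pkt))  # ('forward', 1): the exact entry misses, the next one hits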
There were multiple versions of the OpenFlow protocol standard. An early version of OpenFlow, version 1.0, had a single match-action table, where entries in the table could refer either to exact matches on combinations of packet header fields (e.g., MAC address, IP address) or to wildcard entries (e.g., an IP address or MAC address prefix). Later versions of OpenFlow (the most prominent being OpenFlow 1.3) added more complex operations, including chains of tables, but very few vendors ever implemented these standards. Expressing AND and OR conjunctions on these types of matches turned out to be a bit tricky, especially for
programmers, so some technologies emerged to make it easier for programmers to express more complex combinations of conditionals (Foster et al., 2011), and even to incorporate temporal and other aspects into forwarding decisions (Kim et al., 2015). In the end, adoption of some of these technologies was limited: the OpenFlow protocol gained some traction in large data centers, where operators could have complete control over the network, but widespread adoption in wide-area and enterprise networks proved elusive because the operations one could perform in the forwarding table were so restricted. Additionally, many switch vendors never fully implemented later versions of the standard, making it difficult to deploy solutions that depended on those standards in practice. Ultimately, however, the OpenFlow protocol left several important legacies: (1) control over a network with a single, centralized software program, permitting coordination across network devices and forwarding elements, and (2) the ability to express such control over the entire network from a single high-level programming language (e.g., Python, Java).
Ultimately, OpenFlow turned out to be a very limiting interface. It was not designed with flexible network control in mind, but rather was a product of convenience: network devices already had TCAM-based lookup tables in their switches, and OpenFlow was, more than anything, a market-driven initiative to open the interface to these tables so that external software programs could write to them. It was not long before networking researchers started to ask whether there was a better way to design the hardware as well, to allow for more flexible types of control in the data plane. The next section discusses the developments in programmable hardware that have ultimately made the switches themselves more programmable.
Meanwhile, programmable software control, initially focused mostly on transit and data center networks, is beginning to find its way into cellular networks as well. For example, the Central Office Re-Architected as a Datacenter (CORD) project aims to develop a 5G network from disaggregated commodity hardware and open-source software components (Peterson et al., 2019).
5.6.3 The SDN Data Plane: Programmable Hardware
A subsequent development in SDN, recognizing the limitations of the OpenFlow chipset, was to make the hardware itself programmable. A number of developments in programmable hardware, in both network interface cards (NICs) and switches, have made it possible to customize everything from packet format to forwarding behavior.
The general architecture is sometimes called a protocol-independent switch architecture. The architecture involves a fixed set of processing pipelines, each with memory for match-action tables, some amount of register memory, and simple operations such as addition (Bosshart et al., 2013). The forwarding model is often referred to as RMT (Reconfigurable Match Tables), a pipeline architecture that was inspired by RISC architectures. Each stage of the processing pipeline can read information from the packet headers, make modifications to the values in the