4
THE MEDIUM ACCESS CONTROL SUBLAYER
Many link-layer communications protocols that we studied in Chap. 3 rely on a broadcast communication medium to transmit data. Any such protocol requires additional mechanisms to allow multiple senders to efficiently and fairly share the broadcast medium. This chapter introduces these protocols.
In any broadcast network, the key issue involves determining who gets to use the channel when there is competition for it. For example, consider a conference call in which six people, on six different telephones, are all connected so that each one can hear and talk to everyone else. It is very likely that when one of them stops speaking, two or more will start talking at once, leading to chaos. In a face-to-face meeting, chaos is often avoided by a second external channel. For example, at a meeting, people raise their hands to request permission to speak. When only a single channel is available, it is much harder to determine who should go next. Many protocols for solving the problem are known. They form the contents of this chapter. In the literature, broadcast channels are sometimes referred to as multiaccess channels or random access channels.
The protocols used to determine who goes next on a multiaccess channel belong to a sublayer of the data link layer called the MAC (Medium Access Control) sublayer. The MAC sublayer is especially important in LANs, particularly wireless ones because wireless is naturally a broadcast channel. Some aspects of a WAN (e.g., a direct interconnect) are point-to-point; others (e.g., the shared access network in a cable ISP) are shared and also rely on the MAC layer to facilitate sharing. Because multiaccess channels and LANs are so closely related, in this
chapter we will discuss LANs in general, including a few issues that are not strictly part of the MAC sublayer. The main subject here will be control of the channel. Technically, the MAC sublayer is the bottom part of the data link layer, so logically we should have studied it before examining all the point-to-point protocols in Chap. 3. Nevertheless, for most people, it is easier to understand protocols involving multiple parties after two-party protocols are well understood. For that reason, we have deviated slightly from a strict bottom-up order of presentation.
4.1 THE CHANNEL ALLOCATION PROBLEM
The central theme of this chapter is how to allocate a single broadcast channel among competing users. The channel might be a portion of the wireless spectrum in a geographic region, or a single wire or optical fiber to which multiple nodes are connected. It does not matter. In both cases, the channel connects each user to all other users and any user who makes full use of the channel interferes with other users who also wish to use the channel.
We will first look at the shortcomings of static allocation schemes for bursty traffic. Then, we will lay out the key assumptions used to model the dynamic schemes that we examine in the following sections.
4.1.1 Static Channel Allocation
The conventional way of allocating a single channel, such as a telephone trunk, among multiple competing users is to chop up its capacity by using one of the multiplexing schemes we described in Sec. 2.4.4, such as FDM (Frequency Division Multiplexing). If there are N users, the bandwidth is divided into N equal-sized portions, with each user being assigned one portion. Since each user has a private frequency band, there is now no interference among users. When there is only a small and constant number of users, each of which has a steady stream or a heavy load of traffic, this division is a simple and efficient allocation mechanism. A wireless example is FM radio stations. Each station gets a portion of the FM band and uses it most of the time to broadcast its signal.
However, when the number of senders is large and varying or the traffic is bursty, FDM presents some problems. If the spectrum is cut up into N regions and fewer than N users are currently interested in communicating, a large piece of valuable spectrum will be wasted. And if more than N users want to communicate, some of them will be denied permission for lack of bandwidth, even if some of the users who have been assigned a frequency band hardly ever transmit or receive anything.
Even assuming that the number of users could somehow be held constant at N, dividing the single available channel into some number of static subchannels is inherently inefficient. The basic problem is that when some users are quiescent,
their bandwidth is simply lost. They are not using it, and no one else is allowed to use it either. A static allocation is a poor fit to most computer systems, in which data traffic is extremely bursty, often with peak traffic to mean traffic ratios of 1000:1. Consequently, most of the channels will be idle most of the time.
The poor performance of static FDM can easily be seen with a simple queueing theory calculation. Let us start by finding the mean time delay, T, to send a frame onto a channel of capacity C bps. We assume that the frames arrive randomly with an average arrival rate of λ frames/sec, and that the frames vary in length with an average length of 1/µ bits. With these parameters, the service rate of the channel is µC frames/sec. A standard queueing theory result is

T = 1/(µC − λ)

(For the curious, this result is for an ‘‘M/M/1’’ queue. It requires that the randomness of the times between frame arrivals and the frame lengths follow an exponential distribution, or equivalently be the result of a Poisson process.)
In our example, if C is 100 Mbps, the mean frame length, 1/µ, is 10,000 bits, and the frame arrival rate, λ, is 5000 frames/sec, then T = 200 µsec. Note that if we ignored the queueing delay and just asked how long it takes to send a 10,000-bit frame on a 100-Mbps network, we would get the (incorrect) answer of 100 µsec. That result only holds when there is no contention for the channel.
Now let us divide the single channel into N independent subchannels, each with capacity C/N bps. The mean input rate on each of the subchannels will now be λ/N. Recomputing T, we get

TN = 1/(µ(C/N) − λ/N) = N/(µC − λ) = NT
The mean delay for the divided channel is N times worse than if all the frames were somehow magically arranged orderly in a big central queue. This same result says that a bank lobby full of ATM machines is better off having a single queue feeding all the machines than a separate partitioned queue in front of each machine because with separate queues, there may be idle ATMs while there are long lines at other ones.
Precisely the same arguments that apply to FDM also apply to other ways of statically dividing the channel. If we were to use time division multiplexing (TDM) and allocate each user every Nth time slot, then whenever a user did not use the allocated slot it would just lie fallow. The same would hold if we split up the networks physically. Using our previous example again, if we were to replace the 100-Mbps network with 10 networks of 10 Mbps each and statically allocate each user to one of them, the mean delay would jump from 200 µsec to 2 msec.
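To make the penalty concrete, here is a minimal Python sketch of the M/M/1 delay formula above, applied to the example just given; the function and variable names are ours, purely for illustration.

```python
# Mean M/M/1 delay for one shared channel versus N static subchannels.

def mm1_delay(capacity_bps, mean_frame_bits, arrival_rate):
    """T = 1 / (mu*C - lambda), the standard M/M/1 result from the text."""
    service_rate = capacity_bps / mean_frame_bits   # mu*C, in frames/sec
    assert arrival_rate < service_rate, "offered load exceeds capacity"
    return 1.0 / (service_rate - arrival_rate)

C, frame_bits, lam = 100e6, 10_000, 5000.0           # the text's example
T_single = mm1_delay(C, frame_bits, lam)             # one big channel
T_slice = mm1_delay(C / 10, frame_bits, lam / 10)    # one of 10 static slices
print(f"single channel: {T_single * 1e6:.0f} usec")  # -> 200 usec
print(f"1/10 slice:     {T_slice * 1e6:.0f} usec")   # -> 2000 usec, N times worse
```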
Since none of the traditional static channel allocation methods work well at all with bursty traffic, we will now explore dynamic methods.
4.1.2 Assumptions for Dynamic Channel Allocation
Before we get to the first of the many channel allocation methods in this chap ter, it is worthwhile to carefully formulate the allocation problem. Underlying all the work done in this area are the following five key assumptions:
1. Independent Traffic. The model consists of N independent stations (e.g., computers, telephones), each with a program or user that generates frames for transmission. The expected number of frames generated in an interval of length Δt is λΔt, where λ is a constant (the arrival rate of new frames). Once a frame has been generated, the station is blocked and does nothing until the frame has been successfully transmitted.
2. Single Channel. A single channel is available for all communication. All stations can transmit on it and all can receive from it. The stations are assumed to be equally capable, though protocols may assign them different roles (e.g., priorities).
3. Observable Collisions. If two frames are transmitted simultaneously, they overlap in time and the resulting signal is garbled. This event is called a collision. All stations can detect that a collision has occurred. A collided frame must be transmitted again later. No errors other than those generated by collisions occur.
4. Continuous or Slotted Time. Time may be assumed continuous, in which case frame transmission can begin at any instant. Alternatively, time may be slotted or divided into discrete intervals (called slots). Frame transmissions must then begin at the start of a slot. A slot may contain 0, 1, or more frames, corresponding to an idle slot, a successful transmission, or a collision, respectively.
5. Carrier Sense or No Carrier Sense. With the carrier sense assumption, stations can tell if the channel is in use before trying to use it. No station will attempt to use the channel while it is sensed as busy. If there is no carrier sense, stations cannot sense the channel before trying to use it. They just go ahead and transmit. Only later can they determine whether the transmission was successful.
Some discussion of these assumptions is in order. The first one says that frame arrivals are independent, both across stations and at a particular station, and that frames are generated unpredictably but at a constant rate. Actually, this assumption is not a particularly good model of network traffic, as it has long been well known that packets come in bursts over a range of time scales (Paxson and Floyd, 1995). Recent research confirms that the pattern still holds (Fontugne et al., 2017). Nonetheless, Poisson models, as they are frequently called, are commonly used, in
part, because they are mathematically tractable. They help us analyze protocols to understand roughly how performance changes over an operating range and how it compares with other designs.
The single-channel assumption is the heart of the model. No external ways to communicate exist. Stations cannot raise their hands to request that the teacher call on them, so we will have to come up with better solutions.
The remaining three assumptions depend on the engineering of the system, and we will say which assumptions hold when we examine a particular protocol. The collision assumption is basic. Stations need some way to detect collisions if they are to retransmit frames rather than let them be lost. For wired channels, node hardware can be designed to detect collisions when they occur. The stations can then terminate their transmissions prematurely to avoid wasting capacity. This detection is much harder for wireless channels, so collisions are usually inferred after the fact by the lack of an expected acknowledgement frame. It is also possible for some frames involved in a collision to be successfully received, depending on the details of the signals and the receiving hardware. However, this situation is not the common case, so we will assume that all frames involved in a collision are lost. We will also see protocols that are designed to prevent collisions from occurring in the first place.
The reason for the two alternative assumptions about time is that slotted time can be used to improve performance. However, it requires the stations to follow a master clock or synchronize their actions with each other to divide time into discrete intervals. Hence, it is not always available. We will discuss and analyze systems with both kinds of time. For a given system, only one of them holds.
Similarly, a network may have carrier sensing or not. Wired networks will generally have carrier sense. Wireless networks cannot always use it effectively because not every station may be within radio range of every other station. Similarly, carrier sense will not be available in other settings in which a station cannot communicate directly with other stations, for example a cable modem in which stations must communicate via the cable headend. Note that the word ‘‘carrier’’ in this sense refers to a signal on the channel and has nothing to do with the common carriers (e.g., telephone companies) that date back to the days of the Pony Express.
To avoid any misunderstanding, it is worth noting that no multiaccess protocol guarantees reliable delivery. Even in the absence of collisions, the receiver may have copied some of the frame incorrectly for various reasons. Other parts of the link layer or higher layers provide reliability.
4.2 MULTIPLE ACCESS PROTOCOLS
Many algorithms for allocating a multiple access channel are known. In the following sections, we will study a small sample of the more interesting ones and give some examples of how they are commonly used in practice.
4.2.1 ALOHA
The story of our first MAC protocol starts out in pristine Hawaii in the early 1970s. In this case, ‘‘pristine’’ can be interpreted as ‘‘not having a working telephone system.’’ This did not make life more pleasant for researcher Norman Abramson and his colleagues at the University of Hawaii who were trying to connect users on remote islands to the main computer in Honolulu. Stringing their own cables under the Pacific Ocean for long distances was not in the cards, so they looked for a different solution.
The one they found used short-range radios, with each user terminal sharing the same upstream frequency to send frames to the central computer. It included a simple and elegant method to solve the channel allocation problem. Their work has been extended by many researchers since then (Schwartz and Abramson, 2009). Although Abramson’s work, called the ALOHA system, used ground-based radio broadcasting, the basic idea is applicable to any system in which uncoordinated users are competing for the use of a single shared channel.
We will discuss two versions of ALOHA here: pure and slotted. They differ with respect to whether time is continuous, as in the pure version; or divided into discrete slots into which all frames must fit, as in the slotted version.
Pure ALOHA
The basic idea of an ALOHA system is simple: let users transmit whenever they have data to be sent. There will be collisions, of course, and the colliding frames will be damaged. Senders need some way to find out if this is the case. In the ALOHA system, after each station has sent its frame to the central computer, this computer rebroadcasts the frame to all of the stations. A sending station can thus listen for the broadcast from the hub to see if its frame has gotten through. In other systems, such as wired LANs, the sender might be able to listen for collisions while transmitting.
If the frame was destroyed, the sender just waits a random amount of time and sends it again. The waiting time must be random or the same frames will collide over and over, in lockstep. Systems in which multiple users share a common channel in a way that can lead to conflicts are known as contention systems.
A sketch of frame generation in an ALOHA system is given in Fig. 4-1. We have made the frames all the same length because the throughput of ALOHA systems is maximized by having a uniform frame size rather than by allowing variable-length frames.
Whenever two frames try to occupy the channel at the same time, there will be a collision (as seen in Fig. 4-1) and both will be garbled. If the first bit of a new frame overlaps with just the last bit of a frame that has almost finished, both frames will be totally destroyed (i.e., have incorrect checksums) and both will have
Figure 4-1. In pure ALOHA, frames are transmitted at completely arbitrary times.
to be retransmitted later. The checksum does not (and should not) distinguish between a total loss and a near miss. Bad is bad.
An interesting question is: what is the efficiency of an ALOHA channel? In other words, what fraction of all transmitted frames escape collisions under these chaotic circumstances? Let us first consider an infinite collection of users typing at their terminals (stations). A user is always in one of two states: typing or waiting. Initially, all users are in the typing state. When a line is finished, the user stops typing, waiting for a response. The station then transmits a frame containing the line over the shared channel to the central computer and checks the channel to see if it was successful. If so, the user sees the reply and goes back to typing. If not, the user continues to wait while the station retransmits the frame over and over until it has been successfully sent.
Let the ‘‘frame time’’ denote the amount of time needed to transmit the standard, fixed-length frame (i.e., the frame length divided by the bit rate). At this point, we assume that the new frames generated by the stations are well modeled by a Poisson distribution with a mean of N frames per frame time. (The infinite-population assumption is needed to ensure that N does not decrease as users become blocked.) If N > 1, the user community is generating frames at a higher rate than the channel can handle, and nearly every frame will suffer a collision. For reasonable throughput, we would expect 0 < N < 1.
In addition to the new frames, the stations also generate retransmissions of frames that previously suffered collisions. Let us further assume that the old and new frames combined are well modeled by a Poisson distribution, with mean of G frames per frame time. Clearly, G ≥ N. At low load (i.e., N ≈ 0), there will be few collisions, hence few retransmissions, so G ≈ N. At high load, there will be many collisions, so G > N. Under all loads, the throughput, S, is just the offered load, G, times the probability, P0, of a transmission succeeding—that is, S = GP0, where P0 is the probability that a frame does not suffer a collision.
A frame will not suffer a collision if no other frames are sent within one frame time of its start, as shown in Fig. 4-2. Under what conditions will the shaded frame arrive undamaged? Let t be the time required to send one frame. If any other user has generated a frame between time t0 and t0 + t, the end of that frame will collide with the beginning of the shaded one. In fact, the shaded frame’s fate was already sealed even before the first bit was sent, but since in pure ALOHA a station does not listen to the channel before transmitting, it has no way of knowing that another frame was already underway. Similarly, any other frame started between t0 + t and t0 + 2t will bump into the end of the shaded frame.
Figure 4-2. Vulnerable period for the shaded frame.
The probability that k frames are generated during a given frame time, in which G frames are expected, is given by the Poisson distribution
Pr[k] = (G^k e^−G) / k!    (4-1)

so the probability of zero frames is just e^−G. In an interval two frame times long, the mean number of frames generated is 2G. The probability of no frames being initiated during the entire vulnerable period is thus given by P0 = e^−2G. Using S = GP0, we get

S = Ge^−2G
The relation between the offered traffic and the throughput is shown in Fig. 4-3. The maximum throughput occurs at G = 0.5, with S = 1/(2e), which is about 0.184. In other words, the best we can hope for is a channel utilization of 18%. This result is not very encouraging, but with everyone transmitting at will, we could hardly have expected a 100% success rate.
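The maximum is easy to verify numerically. The short Python sketch below scans offered loads for S = Ge^−2G; the code and names are ours, for illustration only.

```python
import math

def pure_aloha_throughput(G):
    """S = G * e^(-2G): offered load times the chance of no overlap."""
    return G * math.exp(-2 * G)

# Scan offered loads from 0.001 to 3.0 to locate the peak numerically.
best_S, best_G = max((pure_aloha_throughput(g / 1000), g / 1000)
                     for g in range(1, 3001))
print(best_G, round(best_S, 3))   # -> 0.5 0.184, i.e., S = 1/(2e) at G = 0.5
```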
[Graph: throughput per frame time, S, versus attempts per packet time, G. Slotted ALOHA: S = Ge^−G; pure ALOHA: S = Ge^−2G.]
Figure 4-3. Throughput versus offered traffic for ALOHA systems.
Slotted ALOHA
Soon after ALOHA came onto the scene, Roberts (1972) published a method for doubling the capacity of an ALOHA system. His proposal was to divide time into discrete intervals called slots, each interval corresponding to one frame. This approach requires the users to agree on slot boundaries. One way to achieve synchronization would be to have one special station emit a pip at the start of each interval, like a clock.
In Roberts’ method, which has come to be known as slotted ALOHA—in contrast to Abramson’s pure ALOHA—a station is not permitted to send whenever the user types a line. Instead, it is required to wait for the beginning of the next slot. Thus, the continuous time ALOHA is turned into a discrete time one. This halves the vulnerable period. To see this, look at Fig. 4-2 and imagine the collisions that are now possible. The probability of no other traffic during the same slot as our test frame is then e^−G, which leads to

S = Ge^−G
As you can see from Fig. 4-3, slotted ALOHA peaks at G = 1, with a throughput of S = 1/e or about 0.368, twice that of pure ALOHA. If the system is operating at G = 1, the probability of an empty slot is 0.368 (from Eq. 4-1). The best we can hope for using slotted ALOHA is 37% of the slots empty, 37% successes, and 26% collisions. Operating at higher values of G reduces the number of empties but increases the number of collisions exponentially. To see how this rapid growth of collisions with G comes about, consider the transmission of a test frame. The probability that it will avoid a collision is e^−G, which is the probability that all the other stations are silent in that slot. The probability of a collision is then just 1 − e^−G. The probability of a transmission requiring exactly k attempts (i.e., k − 1 collisions followed by one success) is

Pk = e^−G (1 − e^−G)^(k−1)

The expected number of transmissions, E, per line typed at a terminal is then

E = Σ (k=1 to ∞) k Pk = Σ (k=1 to ∞) k e^−G (1 − e^−G)^(k−1) = e^G
As a result of the exponential dependence of E upon G, small increases in the channel load can drastically reduce its performance.
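A quick numeric check, sketched in Python below, confirms both the closed form and how fast it blows up; the helper is ours and simply sums the series for E.

```python
import math

def expected_attempts(G, terms=500):
    """Sum E = sum_{k>=1} k * Pk numerically; should match e^G."""
    p = math.exp(-G)                        # chance a given attempt succeeds
    return sum(k * p * (1 - p) ** (k - 1) for k in range(1, terms))

for G in (0.5, 1.0, 2.0, 3.0):
    print(f"G={G}: E={expected_attempts(G):.2f}  (e^G = {math.exp(G):.2f})")
# At G = 3 every frame already needs about 20 transmissions on average.
```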
Slotted ALOHA is notable for a reason that may not be initially obvious. It was devised in the 1970s, used in a few early experimental systems, then almost forgotten (except by eccentric textbook authors who liked it). When Internet access over the cable was invented, all of a sudden there was a problem of how to allocate a shared channel among multiple competing users. Slotted ALOHA was pulled out of the garbage can, mixed with some new ideas, and suddenly there was a solution. It has often happened that protocols that are perfectly valid fall into disuse for political reasons (e.g., some big company wants everyone to do things its way) or due to ever-changing technology trends. Then, years later some clever person realizes that a long-discarded protocol solves a current problem. For this reason, in this chapter we will study a number of elegant protocols that are not currently in widespread use but might easily be used in future applications, provided that enough network designers are aware of them. Of course, we will also study many protocols that are in current use as well.
4.2.2 Carrier Sense Multiple Access Protocols
With slotted ALOHA, the best channel utilization that can be achieved is 1/e. This low result is hardly surprising, since with stations transmitting at will, without knowing what the other stations are doing, there are bound to be many collisions. In LANs, however, it is often possible for stations to detect what other stations are
doing, and thus adapt their behavior accordingly. These networks can achieve a much better utilization than 1/e. In this section, we will discuss some protocols for improving performance.
Protocols in which stations listen for a carrier (i.e., a transmission) and act accordingly are called carrier sense protocols. A number of them have been proposed, and they were long ago analyzed in detail. For example, see Kleinrock and Tobagi (1975). Below we will look at several versions of carrier sense protocols.
Persistent and Nonpersistent CSMA
The first carrier sense protocol that we will study here is called 1-persistent CSMA (Carrier Sense Multiple Access). That is a bit of a mouthful for the simplest CSMA scheme. When a station has data to send, it first listens to the channel to see if anyone else is transmitting at that moment. If the channel is idle, the station sends its data. Otherwise, if the channel is busy, the station just waits until it becomes idle. Then, the station transmits a frame. If a collision occurs, the station
waits a random amount of time and starts all over again. The protocol is called 1-persistent because the station transmits with a probability of 1 when it finds the channel idle.
You might expect that this scheme avoids collisions except for the rare case of simultaneous sends, but in fact it does not. It’s much worse than that. If two stations become ready in the middle of a third station’s transmission, both will wait politely until the transmission ends, and then both will begin transmitting exactly simultaneously, resulting in a collision. If they were not so impatient, there would be fewer collisions.
More subtly, the propagation delay has a very important effect on collisions. There is a chance that just after a station begins sending, another station will become ready to send and sense the channel. If the first station’s signal has not yet reached the second one, the latter will sense an idle channel and will also begin sending, resulting in a collision. This chance depends on the number of frames that fit on the channel, or the bandwidth-delay product of the channel. If only a tiny fraction of a frame fits on the channel, which is the case in most LANs since the propagation delay is small, the chance of a collision happening is small. The larger the bandwidth-delay product, the more important this effect becomes, and the worse the performance of the protocol.
Even so, this protocol has better performance than pure ALOHA because both stations have the decency to desist from interfering with the third station’s frame, so it gets through undamaged. Exactly the same holds for slotted ALOHA.
A second carrier sense protocol is nonpersistent CSMA. In this protocol, a conscious attempt is made to be less greedy than in the previous one. As before, a station senses the channel when it wants to send a frame, and if no one else is sending, the station begins doing so itself immediately. However, if the channel is already in use, the station does not continually sense it for the purpose of seizing it immediately upon detecting the end of the previous transmission. Instead, it waits a random period of time and then repeats the algorithm. Consequently, this algorithm leads to fewer collisions and better channel utilization but longer delays than 1-persistent CSMA.
The last protocol is p-persistent CSMA. It applies to slotted channels and works as follows. When a station becomes ready to send, it senses the channel. If it is idle, it transmits with a probability p. With a probability q = 1 − p, it defers until the next slot. If that slot is also idle, it either transmits or defers again, with probabilities p and q. This process is repeated until either the frame has been transmitted or another station has begun transmitting. In the latter case, the unlucky station acts as if there had been a collision by waiting a random time and starting again. If the station initially senses that the channel is busy, it waits until the next slot and then applies the above algorithm. IEEE 802.11 uses a refinement of p-persistent CSMA that we will discuss in Sec. 4.4.
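The decision rule is compact enough to sketch directly. The Python fragment below models one station’s choices, taking the channel state per slot as input; the structure and names are ours, not from any standard.

```python
import random

def p_persistent(slots_idle, p, rng=random.random):
    """One station's p-persistent CSMA rule (illustrative sketch).

    slots_idle: booleans, True if the channel is sensed idle in that slot
    (what the other stations do is baked into this input). Returns
    ('sent', slot), or ('backoff', slot) when the station deferred and
    then found the channel seized, which the protocol treats like a
    collision: wait a random time and start again.
    """
    deferred = False
    for slot, idle in enumerate(slots_idle):
        if idle:
            if rng() < p:
                return ('sent', slot)   # transmit with probability p
            deferred = True             # defer one slot with probability 1-p
        elif deferred:
            return ('backoff', slot)    # another station began transmitting
        # channel busy before we started contending: just keep waiting
    return ('undecided', len(slots_idle))

random.seed(42)
print(p_persistent([False, True, True, False], p=0.3))  # e.g., ('sent', 2)
```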
Figure 4-4 shows the computed throughput versus offered traffic for all three protocols, as well as for pure and slotted ALOHA.
[Graph: throughput per packet time, S, versus attempts per packet time, G, for pure ALOHA, slotted ALOHA, and 1-persistent, 0.5-persistent, 0.1-persistent, 0.01-persistent, and nonpersistent CSMA.]
Figure 4-4. Comparison of the channel utilization versus load for various random access protocols.
CSMA with Collision Detection
Persistent and nonpersistent CSMA protocols are definitely an improvement over ALOHA because they ensure that no station begins to transmit while the channel is busy. However, if two stations sense the channel to be idle and begin transmitting simultaneously, their signals will still collide. Another improvement is for the stations to quickly detect the collision and abruptly stop transmitting (rather than finishing their transmissions), since the frames are irretrievably garbled anyway. This strategy saves time and bandwidth.
This protocol, known as CSMA/CD (CSMA with Collision Detection), is the basis of the classic Ethernet LAN, so it is worth devoting some time to looking at it in detail. It is important to realize that collision detection is an analog process. The station’s hardware must listen to the channel while it is transmitting. If the signal it reads back is different from the signal it is putting out, it knows that a collision is occurring. The implications are that a received signal must not be tiny compared to the transmitted signal (which is difficult for wireless, as received signals may be 1,000,000 times weaker than transmitted signals) and that the modulation must be chosen to allow collisions to be detected (e.g., a collision of two 0-volt signals may well be impossible to detect).
CSMA/CD, as well as many other LAN protocols, uses the conceptual model of Fig. 4-5. At the point marked t0, a station has finished transmitting its frame. Any other station having a frame to send may now attempt to do so. If two or more stations decide to transmit simultaneously, there will be a collision. If a station detects a collision, it aborts its transmission, waits a random period of time, and then tries again (assuming that no other station has started transmitting in the meantime). Therefore, our simple model for CSMA/CD will consist of alternating
contention and transmission periods, with idle periods occurring when all stations are quiet (e.g., for lack of work).
Figure 4-5. CSMA/CD can be in transmission, contention, or idle state.
Now let us look at the details of the contention algorithm. Suppose that two stations both begin transmitting at exactly time t0. How long will it take them to realize that they have collided? The answer is vital to determining the length of the contention period and hence what the delay and throughput will be.
The minimum time to detect the collision is just the time it takes the signal to propagate from one station to the other. Based on this information, you might think that a station that has not heard a collision for a time equal to the full cable propagation time after starting its transmission can be sure it has seized the cable. By ‘‘seized,’’ we mean that all other stations know it is transmitting and will not interfere. This conclusion is wrong.
Consider the following worst-case scenario. Let the time for a signal to propagate between the two farthest stations be τ. At t0, one station begins transmitting. At t0 + τ − ε, an instant before the signal arrives at the most distant station, that station also begins transmitting. Of course, it detects the collision almost instantly and stops, but the little noise burst caused by the collision does not get back to the original station until time 2τ − ε. In other words, in the worst case a station cannot be sure that it has seized the channel until it has transmitted for 2τ without hearing a collision.
Starting with this understanding, we can think of CSMA/CD contention as a slotted ALOHA system with a slot width of 2τ. On a 1-km-long coaxial cable, τ ≈ 5 µsec. The difference for CSMA/CD compared to slotted ALOHA is that slots in which only one station transmits (i.e., in which the channel is seized) are followed by the rest of a frame. This difference will greatly improve performance if the frame time is much longer than the propagation time.
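The arithmetic behind these numbers is simple, as the sketch below shows; the propagation speed and the 10-Mbps bit rate are typical illustrative values we have assumed, not figures from the text.

```python
# How long must a station transmit before it can be sure it seized the
# channel? Worst case is 2*tau. Speed and bit rate below are assumptions.

PROP_SPEED = 2e8             # ~2/3 the speed of light in cable, m/s
cable_m = 1000               # the text's 1-km coaxial cable
tau = cable_m / PROP_SPEED   # one-way propagation time
slot = 2 * tau               # width of a CSMA/CD contention slot

bit_rate = 10e6              # e.g., a 10-Mbps LAN (our assumption)
print(f"tau = {tau * 1e6:.0f} usec, 2*tau = {slot * 1e6:.0f} usec")
print(f"a frame must span >= {slot * bit_rate:.0f} bits to outlast 2*tau")
```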
4.2.3 Collision-Free Protocols
Although collisions do not occur with CSMA/CD once a station has unambiguously captured the channel, they can still occur during the contention period. These collisions adversely affect the system performance, in particular when the
bandwidth-delay product is large, such as when the cable is long (i.e., large τ) and the frames are short. Not only do collisions reduce bandwidth, but they make the time to send a frame variable, which is not a good fit for real-time traffic such as voice over IP. CSMA/CD is also not universally applicable.
In this section, we will examine some protocols that resolve the contention for the channel without any collisions at all, not even during the contention period. Most of these protocols are not currently used in major systems, but in a rapidly changing field, having some protocols with excellent properties available for future systems is often a good thing.
In the protocols to be described, we assume that there are exactly N stations, each programmed with a unique address from 0 to N − 1. It does not matter that some stations may be inactive part of the time. We also assume that propagation delay is negligible. The basic question remains: which station gets the channel after a successful transmission? We continue using the model of Fig. 4-5 with its discrete contention slots.
A Bit-Map Protocol
In our first collision-free protocol, the basic bit-map method, each contention period consists of exactly N slots. If station 0 has a frame to send, it transmits a 1 bit during slot 0. No other station is allowed to transmit during this slot. Regardless of what station 0 does, station 1 gets the opportunity to transmit a 1 bit during slot 1, but only if it has a frame queued. In general, station j may announce that it has a frame to send by inserting a 1 bit into slot j. After all N slots have passed by, each station has complete knowledge of which stations wish to transmit. At that point, they begin transmitting frames in numerical order (see Fig. 4-6).
Figure 4-6. The basic bit-map protocol.
Since everyone agrees on who goes next, there will never be any collisions. After the last ready station has transmitted its frame, an event all stations can easily monitor, another N-bit contention period is begun. If a station becomes ready just after its bit slot has passed by, it is out of luck and must remain silent until every station has had a chance and the bit map has come around again.
Protocols like this in which the desire to transmit is broadcast before the actual transmission are called reservation protocols because they reserve channel ownership in advance and prevent collisions. Let us briefly analyze the performance of
this protocol. For convenience, we will measure time in units of the contention bit slot, with data frames consisting of d time units.
Under conditions of low load, the bit map will simply be repeated over and over, for lack of data frames. Consider the situation from the point of view of a low-numbered station, such as 0 or 1. Typically, when it becomes ready to send, the ‘‘current’’ slot will be somewhere in the middle of the bit map. On average, the station will have to wait N/2 slots for the current scan to finish and another full N slots for the following scan to run to completion before it may begin transmitting.
The prospects for high-numbered stations are brighter. Generally, these will only have to wait half a scan (N/2 bit slots) before starting to transmit. High-numbered stations rarely have to wait for the next scan. Since low-numbered stations must wait on average 1.5N slots and high-numbered stations must wait on average 0.5N slots, the mean for all stations is N slots.
The channel efficiency at low load is easy to compute. The overhead per frame is N bits and the amount of data is d bits, for an efficiency of d/(d + N). At high load, when all the stations have something to send all the time, the N-bit contention period is prorated over N frames, yielding an overhead of only 1 bit per frame, or an efficiency of d/(d + 1). The mean delay for a frame is equal to the sum of the time it queues inside its station, plus an additional (N − 1)d + N once it gets to the head of its internal queue. This interval is how long it takes to wait for all other stations to have their turn sending a frame and another bit map.
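These efficiency formulas are easy to tabulate. The sketch below (our own code) simply evaluates d/(d + N) and d/(d + 1) for a couple of frame sizes.

```python
def bitmap_efficiency(d_bits, n_stations, high_load):
    """Channel efficiency of the basic bit-map protocol, from the text:
    d/(d + N) at low load, d/(d + 1) at high load."""
    overhead = 1 if high_load else n_stations
    return d_bits / (d_bits + overhead)

for d in (32, 1000):
    low = bitmap_efficiency(d, 16, high_load=False)
    high = bitmap_efficiency(d, 16, high_load=True)
    print(f"d={d}: low load {low:.2f}, high load {high:.2f}")
# Short frames pay dearly at low load (32/48 = 0.67); long frames barely notice.
```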
Token Passing
The essence of the bit-map protocol is that it lets every station transmit a frame in turn in a predefined order. Another way to accomplish the same thing is to pass a small message called a token from one station to the next in the same predefined order. The token represents permission to send. If a station has a frame queued for transmission when it receives the token, it can send that frame before it passes the token to the next station. If it has no queued frame, it simply passes the token.
In a token ring protocol, the topology of the network is used to define the order in which stations send. The stations are connected one to the next in a single ring. Passing the token to the next station then simply consists of receiving the token in from one direction and transmitting it out in the other direction, as seen in
Fig. 4-7. Frames are also transmitted in the direction of the token. This way they will circulate around the ring and reach whichever station is the destination. However, to stop the frame circulating indefinitely (like the token), some station needs to remove it from the ring. This station may be either the one that originally sent the frame, after it has gone through a complete cycle, or the station that was the intended recipient of the frame.
Note that we do not need a physical ring to implement token passing. All that is needed is a logical ring, where each station knows its predecessor and successor. The channel connecting the stations might instead be a single long bus (cable).
Figure 4-7. Token ring.
Each station then uses the bus to send the token to the next station in the predefined sequence. Possession of the token allows a station to use the bus to send one frame. This protocol is called token bus. It is defined in IEEE 802.4, a standard that failed so badly that IEEE has withdrawn it. Standards are not always forever.
The performance of token passing is similar to that of the bit-map protocol, though the contention slots and frames of one cycle are now intermingled. After sending a frame, each station must wait for all N stations (including itself) to send the token to their neighbors and the other N − 1 stations to send a frame, if they have one. A subtle difference is that, since all positions in the cycle are equivalent, there is no bias for low- or high-numbered stations. For token ring, each station is also sending the token only as far as its neighboring station before the protocol takes the next step. Each token does not need to propagate to all stations before the protocol advances to the next step.
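A logical ring is also straightforward to sketch in code. The Python fragment below (our own structure, not any standard's) runs one full token rotation over per-station queues.

```python
from collections import deque

def token_ring_round(queues):
    """One full token rotation around a logical ring.

    queues: per-station frame queues, in ring order. A station holding
    the token may send one frame, then passes the token on (the pass is
    implicit in advancing the loop). Returns frames in sending order.
    """
    sent = []
    for station, q in enumerate(queues):
        if q:                           # token arrives; send one frame
            sent.append((station, q.popleft()))
    return sent

queues = [deque(['a1']), deque(), deque(['c1', 'c2'])]
print(token_ring_round(queues))   # -> [(0, 'a1'), (2, 'c1')]
```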
Token rings have cropped up as MAC protocols with some consistency. An early token ring protocol (called ‘‘Token Ring’’ and standardized as IEEE 802.5) was popular in the 1980s as an alternative to classic Ethernet. In the 1990s, a much faster token ring called FDDI (Fiber Distributed Data Interface ) was beaten out by switched Ethernet. In the 2000s, a token ring called RPR (Resilient Packet Ring ) was defined as IEEE 802.17 to standardize the mix of metropolitan area rings in use by ISPs. We wonder what the 2020s will have to offer.
Binary Countdown
A problem with the basic bit-map protocol, and by extension token passing, is that the overhead is 1 bit per station, so it does not scale well to networks with hundreds or thousands of stations. We can do better than that by using binary station addresses with a channel that combines transmissions in a certain way. A station wanting to use the channel now broadcasts its address as a binary bit string, starting with the high-order bit. All addresses are assumed to be the same number of bits. The bits in each address position from different stations are BOOLEAN ORed together by the channel when they are sent at the same time. We will call
this protocol binary countdown. It was used in Datakit (Fraser, 1983). It implicitly assumes that the transmission delays are negligible so that all stations see asserted bits essentially instantaneously.
To avoid conflicts, an arbitration rule must be applied: as soon as a station sees that a high-order bit position that is 0 in its address has been overwritten with a 1, it gives up. For example, if stations 0010, 0100, 1001, and 1010 are all trying to get the channel, in the first bit time the stations transmit 0, 0, 1, and 1, respectively. These are ORed together to form a 1. Stations 0010 and 0100 see the 1 and know that a higher-numbered station is competing for the channel, so they give up for the current round. Stations 1001 and 1010 continue.
The next bit is 0, and both stations continue. The next bit is 1, so station 1001 gives up. The winner is station 1010 because it has the highest address. After winning the bidding, it may now transmit a frame, after which another bidding cycle starts. The protocol is illustrated in Fig. 4-8. It has the property that higher-numbered stations have a higher priority than lower-numbered stations, which may be either good or bad, depending on the context.
Figure 4-8. The binary countdown protocol. A dash indicates silence.
The channel efficiency of this method is d/(d + log2 N). If, however, the frame format has been cleverly chosen so that the sender’s address is the first field in the frame, even these log2 N bits are not wasted, and the efficiency is 100%.
Binary countdown is an example of a simple, elegant, and efficient protocol that is waiting to be rediscovered. Hopefully, it will find a new home some day.
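Because the channel simply ORs simultaneous bits, the arbitration takes only a few lines of code. The sketch below (names are ours) replays the example from Fig. 4-8.

```python
def binary_countdown(addresses, width):
    """Binary countdown arbitration: the channel ORs the bits together.

    Stations broadcast their addresses high-order bit first; a station
    drops out as soon as one of its 0 bits is overwritten by a 1.
    Returns the winner, which is always the highest address.
    """
    contenders = set(addresses)
    for bit in range(width - 1, -1, -1):              # high-order bit first
        wire = any(a >> bit & 1 for a in contenders)  # channel-wide OR
        if wire:
            contenders = {a for a in contenders if a >> bit & 1}
    (winner,) = contenders
    return winner

stations = [0b0010, 0b0100, 0b1001, 0b1010]
print(bin(binary_countdown(stations, width=4)))   # -> 0b1010, as in Fig. 4-8
```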
4.2.4 Limited-Contention Protocols
We have now considered two basic strategies for channel acquisition in a broadcast network: contention, as in CSMA, and collision-free protocols. Each strategy can be rated as to how well it does with respect to the two important
performance measures, delay at low load and channel efficiency at high load. Under conditions of light load, contention (i.e., pure or slotted ALOHA) is preferable due to its low delay (since collisions are rare). As the load increases, contention becomes increasingly less attractive because the overhead associated with channel arbitration becomes greater. Just the reverse is true for the collision-free protocols. At low load, they have relatively high delay but as the load increases, the channel efficiency improves (since the overheads are fixed).
Obviously, it would be nice if we could combine the best properties of the contention and collision-free protocols, arriving at a new protocol that used contention at low load to provide low delay, but used a collision-free technique at high load to provide good channel efficiency. Such protocols, which we will call limited-contention protocols, do in fact exist, and will conclude our study of carrier sense networks.
Up to now, the only contention protocols we have studied have been symmetric. That is, each station attempts to acquire the channel with some probability, p, with all stations using the same p. Interestingly enough, the overall system performance can sometimes be improved by using a protocol that assigns different probabilities to different stations.
Before looking at the asymmetric protocols, let us quickly review the performance of the symmetric case. Suppose that k stations are contending for channel access. Each has a probability p of transmitting during each slot. The probability that some station successfully acquires the channel during a given slot is the probability that any one station transmits, with probability p, and all other k − 1 stations defer, each with probability 1 − p. This value is kp(1 − p)^(k−1). To find the optimal value of p, we differentiate with respect to p, set the result to zero, and solve for p. Doing so, we find that the best value of p is 1/k. Substituting p = 1/k, we get

Pr[success with optimal p] = ((k − 1)/k)^(k−1)
This probability is plotted in Fig. 4-9. For small numbers of stations, the chances of success are good, but as soon as the number of stations reaches even five, the probability has dropped close to its asymptotic value of 1/e.
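Evaluating the formula for a few values of k makes the asymptote plain; the short sketch below is ours, just plugging into the expression above.

```python
import math

def pr_success(k):
    """Pr[some station acquires the slot] with the optimal p = 1/k."""
    return 1.0 if k == 1 else ((k - 1) / k) ** (k - 1)

for k in (1, 2, 5, 15, 25):
    print(f"k={k}: {pr_success(k):.3f}")
print(f"asymptote 1/e = {1 / math.e:.3f}")
# Already by k = 5 the success probability is near its limiting value.
```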
From Fig. 4-9, it is fairly obvious that the probability of some station acquiring the channel can be increased only by decreasing the amount of competition. The limited-contention protocols do precisely that. They first divide the stations into (not necessarily disjoint) groups. Only the members of group 0 are permitted to compete for slot 0. If one of them succeeds, it acquires the channel and transmits its frame. If the slot lies fallow or if there is a collision, the members of group 1 contend for slot 1, etc. By making an appropriate division of stations into groups, the amount of contention for each slot can be reduced, thus operating each slot near the left end of Fig. 4-9.
[Graph: probability of success versus number of ready stations (0 to 25).]
Figure 4-9. Acquisition probability for a symmetric contention channel.
The trick is how to assign stations to slots. Before looking at the general case, let us consider some special cases. At one extreme, each group has only one member. Such an assignment guarantees that there will never be collisions because at most one station is contending for any given slot. We have seen such protocols before (e.g., binary countdown). The next special case is to assign two stations per group. The probability that both will try to transmit during a slot is p^2, which for a small p is negligible. As more and more stations are assigned to the same slot, the probability of a collision grows, but the length of the bit-map scan needed to give everyone a chance shrinks. The limiting case is a single group containing all stations (i.e., slotted ALOHA). What we need is a way to assign stations to slots dynamically, with many stations per slot when the load is low and few (or even just one) station per slot when the load is high.
The Adaptive Tree-Walk Protocol
One particularly simple way of performing the necessary assignment is to use the algorithm devised by the U.S. Army for testing soldiers for syphilis during World War II (Dorfman, 1943). In short, the Army took a blood sample from N soldiers. A portion of each sample was poured into a single test tube. This mixed sample was then tested for antibodies. If none were found, all the soldiers in the group were declared healthy. If antibodies were present, two new mixed samples were prepared, one from soldiers 1 through N/2 and one from the rest. The process was repeated recursively until the infected soldiers were determined.
For the computerized version of this algorithm (Capetanakis, 1979), it is convenient to think of the stations as the leaves of a binary tree, as illustrated in Fig. 4-10. In the first contention slot following a successful frame transmission, slot 0, all stations are permitted to try to acquire the channel. If one of them does
so, fine. If there is a collision, then during slot 1 only those stations falling under node 2 in the tree may compete. If one of them acquires the channel, the slot following the frame is reserved for those stations under node 3. If, on the other hand, two or more stations under node 2 want to transmit, there will be a collision during slot 1, in which case it is node 4’s turn during slot 2.
[Figure: a binary tree with root node 1, children 2 and 3, and grandchildren 4 through 7; the leaves are stations A through H.]
Figure 4-10. The tree for eight stations.
In essence, if a collision occurs during slot 0, the entire tree is searched, depth first, to locate all ready stations. Each bit slot is associated with some particular node in the tree. If a collision occurs, the search continues recursively with the node’s left and right children. If a bit slot is idle or if only one station transmits in it, the searching of its node can stop because all ready stations have been located. (Were there more than one, there would have been a collision.)
When the load on the system is heavy, it is hardly worth the effort to dedicate slot 0 to node 1 because that makes sense only in the unlikely event that precisely one station has a frame to send. Similarly, one could argue that nodes 2 and 3 should be skipped as well for the same reason. Put in more general terms, at what level in the tree should the search begin? Clearly, the heavier the load, the farther down the tree the search should begin. We will assume that each station has a good estimate of the number of ready stations, q, for example, from monitoring recent traffic.
To proceed, let us number the levels of the tree from the top, with node 1 in Fig. 4-10 at level 0, nodes 2 and 3 at level 1, etc. Notice that each node at level i has a fraction 2^−i of the stations below it. If the q ready stations are uniformly distributed, the expected number of them below a specific node at level i is just 2^−i q. Intuitively, we would expect the optimal level to begin searching the tree to be the one at which the mean number of contending stations per slot is 1, that is, the level at which 2^−i q = 1. Solving this equation, we find that i = log2 q.

Numerous improvements to the basic algorithm have been discovered and are discussed in some detail by Bertsekas and Gallager (1992). It is such a clever idea
that researchers are still tweaking it (De Marco and Kowalski, 2017). For example, consider the case of stations G and H being the only ones wanting to transmit. At node 1 a collision will occur, so 2 will be tried and discovered idle. It is pointless to probe node 3 since it is guaranteed to have a collision (we know that two or more stations under 1 are ready and none of them are under 2, so they must all be under 3). The probe of 3 can be skipped and 6 tried next. When this probe also turns up nothing, 7 can be skipped and node G tried next.
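The basic search (without the skipping optimizations just described) can be sketched recursively. In the Python fragment below, stations A through H are indices 0 to 7 and each tree node covers a contiguous range; the structure is ours, for illustration only.

```python
import math

def tree_walk(ready, n_stations):
    """Depth-first adaptive tree walk over station ranges (basic version).

    ready: set of station indices with a frame to send. Each probe of a
    node is one contention slot; a node with exactly one ready station
    yields a successful transmission, and a collision splits the node.
    """
    def probe(lo, hi):
        members = [s for s in ready if lo <= s < hi]
        if len(members) == 1:          # exactly one sender: success
            yield members[0]
        elif len(members) > 1:         # collision: recurse on both halves
            mid = (lo + hi) // 2
            yield from probe(lo, mid)
            yield from probe(mid, hi)
        # zero members: idle slot, the search of this node stops

    yield from probe(0, n_stations)

print(list(tree_walk({6, 7}, 8)))  # stations G and H of Fig. 4-10 -> [6, 7]
print("suggested start level for q = 4:", round(math.log2(4)))  # level 2
```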
4.2.5 Wireless LAN Protocols
A system of laptop computers that communicate by radio can be regarded as a wireless LAN, as we discussed in Sec. 1.4.3. Such a LAN is an example of a broadcast channel. It also has somewhat different properties than a wired LAN, which leads to different MAC protocols. In this section, we will examine some of these protocols. In Sec. 4.4, we will look at 802.11 (WiFi) in detail.
A common configuration for a wireless LAN is an office building with access points (APs) strategically placed around the building. The APs are wired together using copper or fiber and provide connectivity to the stations that talk to them. If the transmission power of the APs and laptops is adjusted to have a range of tens of meters, nearby rooms become like a single cell and the entire building becomes like the cellular telephony systems we studied in Chap. 2, except that each cell only has one channel. This channel is shared by all the stations in the cell, including the AP. It typically provides megabits/sec or even gigabits/sec of bandwidth. IEEE 802.11ac can theoretically run at 7 Gbps, but in practice, it is much slower.
We have already remarked that wireless systems cannot normally detect a collision while it is occurring. The received signal at a station may be tiny, perhaps a million times fainter than the signal that is being transmitted. Finding it is like
looking for a ripple on the ocean. Instead, acknowledgements are used to discover collisions and other errors after the fact.
There is an even more important difference between wireless LANs and wired LANs. A station on a wireless LAN may not be able to transmit frames to or receive frames from all other stations because of the limited radio range of the stations. In wired LANs, when one station sends a frame, all other stations receive it. The absence of this property in wireless LANs causes a variety of complications.
We will make the simplifying assumption that each radio transmitter has some fixed range, represented by a circular coverage region within which another station can sense and receive the station’s transmission. It is important to realize that in practice coverage regions are not nearly so regular because the propagation of radio signals depends on the environment. Walls and other obstacles that attenuate and reflect signals may cause the range to differ markedly in different directions. But a simple circular model will do for our purposes.
A naive approach to using a wireless LAN might be to try CSMA: just listen for other transmissions and only transmit if no one else is doing so. The trouble is,
this protocol is not really a good way to think about wireless because what matters for reception is interference at the receiver, not at the sender. To see the nature of the problem, consider Fig. 4-11, where four wireless stations are illustrated. For
our purposes, it does not matter which are APs and which are laptops. The radio range is such that A and B are within each other’s range and can potentially interfere with one another. C can also potentially interfere with both B and D, but not with A.
Figure 4-11. A wireless LAN. (a) A and C are hidden terminals when transmitting to B. (b) B and C are exposed terminals when transmitting to A and D.
First consider what happens when A and C transmit to B, as depicted in Fig. 4-11(a). If A sends and then C immediately senses the medium, it will not hear A because A is out of its range. Thus C will falsely conclude that it can transmit to B. If C does start transmitting, it will interfere at B, wiping out the frame from A. (We assume here that no CDMA-type scheme is used to provide multiple channels, so collisions garble the signal and destroy both frames.) We want a MAC protocol that will prevent this kind of collision from happening because it wastes bandwidth. The problem of a station not being able to detect a potential competitor for the medium because the competitor is too far away is called the hidden terminal problem.
Now let us look at a different situation: B transmitting to A at the same time that C wants to transmit to D, as shown in Fig. 4-11(b). If C senses the medium, it will hear a transmission and falsely conclude that it may not send to D (shown as a dashed line). In fact, such a transmission would cause bad reception only in the zone between B and C, where neither of the intended receivers is located. We want a MAC protocol that prevents this kind of deferral from happening because it wastes bandwidth. The problem is called the exposed terminal problem .
The difficulty is that, before starting a transmission, a station really wants to know whether there is radio activity around the receiver. CSMA merely tells it whether there is activity near the transmitter by sensing the carrier. With a wire, all signals propagate to all stations, so this distinction does not exist. However, only one transmission can then take place at once anywhere in the system. In a system based on short-range radio waves, multiple transmissions can occur simultaneously if they all have different destinations and these destinations are out of range of one another. We want this concurrency to happen as the cell gets larger and larger, in
the same way that people at a party should not wait for everyone in the room to go silent before they talk; multiple conversations can take place at once in a large room as long as they are not directed to the same location.
An early and quite influential protocol that tackles these problems for wireless LANs is MACA (Multiple Access with Collision Avoidance) (Karn, 1990; and Garcia-Luna-Aceves, 2017). The basic idea behind it is for the sender to stimulate the receiver into outputting a short frame, so stations nearby can detect this transmission and avoid transmitting for the duration of the upcoming (large) data frame. This technique is used instead of carrier sense.
MACA is illustrated in Fig. 4-12. Let us see how A sends a frame to B. A starts by sending an RTS (Request To Send) frame to B, as shown in Fig. 4-12(a). This short frame (30 bytes) contains the length of the data frame that will eventually follow. Then B replies with a CTS (Clear To Send) frame, as shown in Fig. 4-12(b). The CTS frame contains the data length (copied from the RTS frame). Upon receipt of the CTS frame, A begins transmission.
Figure 4-12. The MACA protocol. (a) A sending an RTS to B. (b) B responding with a CTS to A. (The circles show the range of A's and B's transmitters; C, D, and E are bystanders.)
Now let us see how stations overhearing either of these frames react. Any station hearing the RTS is clearly close to A and must remain silent long enough for the CTS to be transmitted back to A without conflict. Any station hearing the CTS is clearly close to B and must remain silent during the upcoming data transmission, whose length it can tell by examining the CTS frame.

In Fig. 4-12, C is within range of A but not within range of B. Therefore, it hears the RTS from A but not the CTS from B. As long as it does not interfere with the CTS, it is free to transmit while the data frame is being sent. In contrast, D is within range of B but not A. It does not hear the RTS but does hear the CTS. Hearing the CTS tips it off that it is near a station that is about to receive a frame, so it defers sending anything until that frame is expected to be finished. Station E hears both control messages and, like D, must be silent until the data frame is complete.
Despite these precautions, collisions can still occur. For example, B and C could both send RTS frames to A at the same time. These will collide and be lost. In the event of a collision, an unsuccessful transmitter (i.e., one that does not hear a CTS within the expected time interval) waits a random amount of time and tries again later.
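A minimal sketch of the MACA sender side may help. The radio primitives below are hypothetical stubs, and the timeout, retry limit, and backoff rule are illustrative choices, not details taken from Karn (1990):

```python
import random
import time

CTS_TIMEOUT = 0.001   # assumed: how long to wait for a CTS, in seconds
MAX_ATTEMPTS = 8      # assumed retry limit

def send_rts(dst, length):            # stub standing in for the radio
    print(f"RTS -> {dst} (announcing {length} data bytes)")

def wait_for_cts(dst, timeout):       # stub: pretend the CTS usually arrives
    return random.random() < 0.9

def send_data(frame, dst):            # stub
    print(f"DATA -> {dst} ({len(frame)} bytes)")

def maca_send(frame, dst):
    for attempt in range(MAX_ATTEMPTS):
        send_rts(dst, len(frame))          # RTS carries the data length
        if wait_for_cts(dst, CTS_TIMEOUT):
            send_data(frame, dst)          # stations that heard the CTS stay silent
            return True
        # No CTS within the timeout: assume our RTS collided with another;
        # wait a random time and try again, as the text describes.
        time.sleep(random.uniform(0, (2 ** attempt) * CTS_TIMEOUT))
    return False

maca_send(b"x" * 1000, "B")
```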
4.3 ETHERNET
We have now finished our discussion of channel allocation protocols in the abstract, so it is time to see how these principles apply to real systems. Many of the designs for personal, local, and metropolitan area networks have been standardized under the name of IEEE 802. A few have survived but many have not, as we saw in Fig. 1-38. Some people who believe in reincarnation think that Charles Darwin came back as a member of the IEEE Standards Association to weed out the unfit. The most important of the survivors are 802.3 (Ethernet) and 802.11 (wireless LAN). Bluetooth (wireless PAN) is widely deployed but has now been standardized outside of 802.15.
We will begin our study of real systems with Ethernet, probably the most ubiquitous kind of computer network in the world. Two kinds of Ethernet exist: classic Ethernet, which solves the multiple access problem using the techniques we have studied in this chapter; and switched Ethernet, in which devices called switches are used to connect different computers. It is important to note that, while they are both referred to as Ethernet, they are quite different. Classic Ethernet is the original form and ran at rates from 3 to 10 Mbps. Switched Ethernet is what Ethernet has become and runs at 100, 1000, 10,000, 40,000, or 100,000 Mbps, in forms called fast Ethernet, gigabit Ethernet, 10-gigabit Ethernet, 40-gigabit Ethernet, or 100-gigabit Ethernet. In practice, only switched Ethernet is used nowadays.
We will discuss these historical forms of Ethernet in chronological order, showing how they developed. Since Ethernet and IEEE 802.3 are identical except for a minor difference (which we will discuss shortly), many people use the terms ‘‘Ethernet’’ and ‘‘IEEE 802.3’’ interchangeably. We will do so, too. For more information about Ethernet, see Spurgeon and Zimmerman (2014).
4.3.1 Classic Ethernet Physical Layer
The story of Ethernet starts about the same time as that of ALOHA, when a student named Bob Metcalfe got his bachelor’s degree at M.I.T. and then moved up the river to get his Ph.D. at Harvard. During his studies there, he was exposed to Abramson’s work on ALOHA. He became so interested in it that after graduating from Harvard, he decided to spend the summer in Hawaii working with Abramson before starting work at Xerox PARC (Palo Alto Research Center). When he got to
PARC, he saw that the researchers there had designed and built what would later be called personal computers. But the machines were isolated. Using his knowledge of Abramson's work, he, together with his colleague David Boggs, designed and implemented the first local area network (Metcalfe and Boggs, 1976). It used a single long, thick coaxial cable and ran at 3 Mbps.
They called the system Ethernet after the luminiferous ether, through which electromagnetic radiation was once thought to propagate. (When the 19th-century British physicist James Clerk Maxwell discovered that electromagnetic radiation could be described by a wave equation, scientists assumed that space must be filled with some ethereal medium in which the radiation was propagating. Only after the
famous Michelson-Morley experiment in 1887 did physicists discover that electro- magnetic radiation could propagate in a vacuum.)
The Xerox Ethernet was so successful that DEC, Intel, and Xerox drew up a standard in 1978 for a 10-Mbps Ethernet, called the DIX standard. With a minor change, the DIX standard became the IEEE 802.3 standard in 1983. Unfortunately for Xerox, it already had a history of making seminal inventions (such as the personal computer) and then failing to commercialize them, a story told in Fumbling the Future (Smith and Alexander, 1988). When Xerox showed no interest in doing anything with Ethernet other than helping standardize it, Metcalfe formed his own company, 3Com, to sell Ethernet cards for PCs. It sold millions of them.
Classic Ethernet snaked around the building as a single long cable to which all the computers were attached. This architecture is shown in Fig. 4-13. The first variety, popularly called thick Ethernet, resembled a yellow garden hose, with markings every 2.5 meters to show where to attach computers. (The 802.3 standard did not actually require the cable to be yellow, but it did suggest it.) It was succeeded by thin Ethernet, which bent more easily and made connections using industry-standard BNC connectors. Thin Ethernet was much cheaper and easier to install, but it could run for only 185 meters per segment (instead of 500 m with thick Ethernet), each of which could handle only 30 machines (instead of 100).
Figure 4-13. Architecture of classic Ethernet. (Stations attach to the ether via transceivers and interface cables.)
Each version of Ethernet has a maximum cable length per segment (i.e., unamplified length) over which the signal will propagate. To allow larger networks,
multiple cables can be connected by repeaters. A repeater is a physical layer device that receives, amplifies (i.e., regenerates), and retransmits signals in both directions. As far as the software is concerned, a series of cable segments connected by repeaters is no different from a single cable (except for a small amount of delay introduced by the repeaters).
Over each of these cables, information was sent using the Manchester encoding we studied in Sec. 2.4.3. An Ethernet could contain multiple cable segments and multiple repeaters, but no two transceivers could be more than 2.5 km apart and no path between any two transceivers could traverse more than four repeaters. The reason for this restriction was to ensure that the MAC protocol, which we will look at next, would work correctly.
4.3.2 Classic Ethernet MAC Sublayer Protocol
The format used to send frames is shown in Fig. 4-14. First comes a Preamble of 8 bytes, each containing the bit pattern 10101010 (with the exception of the last byte, in which the last 2 bits are set to 11). This last byte is called the Start of Frame delimiter for 802.3. The Manchester encoding of this pattern produces a 10-MHz square wave for 6.4 µsec to allow the receiver’s clock to synchronize with the sender’s. The last two 1 bits tell the receiver that the rest of the frame is about to start.
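As a quick check of that arithmetic (our sketch, using only the numbers just given):

```python
# 7 bytes of 10101010 followed by the Start of Frame delimiter 10101011:
preamble_bits = "10101010" * 7 + "10101011"

bit_time = 1 / 10e6                      # 100 nsec per bit at 10 Mbps
print(len(preamble_bits) * bit_time)     # 6.4e-06: the 6.4 µsec in the text
```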
(a) Ethernet (DIX):  Preamble (8) | Destination address (6) | Source address (6) | Type (2) | Data (0-1500) | Pad (0-46) | Checksum (4)
(b) IEEE 802.3:  Preamble (8) | Destination address (6) | Source address (6) | Length (2) | Data (0-1500) | Pad (0-46) | Checksum (4)

Figure 4-14. Frame formats. (a) Ethernet (DIX). (b) IEEE 802.3. (Field widths in bytes.)
Next come two addresses, one for the destination and one for the source. They are each 6 bytes long. The first transmitted bit of the destination address is a 0 for ordinary addresses and a 1 for group addresses. Group addresses allow multiple stations to listen to a single address. When a frame is sent to a group address, all the stations in the group receive it. Sending to a group of stations is called multicasting. The special address consisting of all 1 bits is reserved for broadcasting. A frame containing all 1s in the destination field is accepted by all stations on the network. Multicasting is more selective, but it involves group management to define which stations are in the group. Conversely, broadcasting does not differentiate between stations at all, so it does not require any group management.
An interesting feature of station source addresses is that they are globally unique, assigned centrally by IEEE to ensure that no two stations anywhere in the world have the same address. The idea is that any station can uniquely address any other station by just giving the right 48-bit number. To do this, the first 3 bytes of the address field are used for an OUI (Organizationally Unique Identifier). Values for this field are assigned by IEEE and indicate a manufacturer. Manufacturers are assigned blocks of 2^24 addresses. The manufacturer assigns the last 3 bytes of the address and programs the complete address into the NIC before it is sold.

Next comes the Type or Length field, depending on whether the frame is Ethernet or IEEE 802.3. Ethernet uses a Type field to tell the receiver what to do with the frame. Multiple network-layer protocols may be in use at the same time on the same machine, so when an Ethernet frame arrives, the operating system has to know which one to hand the frame to. The Type field specifies which process to give the frame to. For example, a type code of 0x0800 means that the data contains an IPv4 packet.
IEEE 802.3, in its wisdom, decided that this field would carry the length of the frame, since the Ethernet length was determined by looking inside the data—a layering violation if ever there was one. Of course, this meant there was no way for the receiver to figure out what to do with an incoming frame. That problem was handled by the addition of another header for the logical link control protocol within the data, which we will look at later. It uses 8 bytes to convey the 2 bytes of protocol type information.
Unfortunately, by the time 802.3 was published, so much hardware and software for DIX Ethernet was already in use that few manufacturers and users were enthusiastic about repackaging the Type and Length fields. In 1997, IEEE threw in the towel and said that both ways were fine with it. Fortunately, all the Type fields in use before 1997 had values greater than 1500, then well established as the maximum data size. Now the rule is that any number there less than or equal to 1500 is interpreted as Length, and any number greater than or equal to 0x600 (1536) is interpreted as Type. Now IEEE can maintain that everyone is using its standard and everybody else can keep on doing what they were already doing (not bothering with the logical link control protocol) without feeling guilty about it. This is what happens when (industrial) politics meets technology.
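The 1997 compromise is easy to express in code; a minimal sketch of how a receiver might interpret the field (values between 1501 and 1535 fall into neither range):

```python
def interpret(type_or_length: int) -> str:
    if type_or_length <= 1500:
        return f"Length: {type_or_length} bytes of data (IEEE 802.3 framing)"
    if type_or_length >= 0x600:                 # 1536 and up
        return f"Type: 0x{type_or_length:04x} (DIX framing)"
    return "undefined"

print(interpret(0x0800))   # Type: IPv4, as in the example above
print(interpret(46))       # Length: a minimum-size data field
```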
Next come the data, up to 1500 bytes. This limit was chosen somewhat arbitrarily at the time the Ethernet standard was cast in stone, mostly based on the fact that a transceiver needs enough RAM to hold an entire frame and RAM was expensive in 1978. A larger upper limit would have meant more RAM, and hence a more expensive transceiver.
In addition to there being a maximum frame length, there is also a minimum frame length. While a data field of 0 bytes is sometimes useful, it causes a problem. When a transceiver detects a collision, it truncates the current frame, which means that stray bits and pieces of frames appear on the cable all the time. To make it easier to distinguish valid frames from garbage, Ethernet requires that valid frames must be at least 64 bytes long, from destination address to checksum, including both. If the data portion of a frame is less than 46 bytes, the Pad field is used to fill out the frame to the minimum size.
Another (and more important) reason for having a minimum length frame is to prevent a station from completing the transmission of a short frame before the first bit has even reached the far end of the cable, where it may collide with another frame. This problem is illustrated in Fig. 4-15. At time 0, station A, at one end of the network, sends off a frame. Let us call the propagation time for this frame to reach the other end τ. Just before the frame gets to the other end (i.e., at time τ − ε), the most distant station, B, starts transmitting. When B detects that it is receiving more power than it is putting out, it knows that a collision has occurred, so it aborts its transmission and generates a 48-bit noise burst to warn all other stations. In other words, it jams the ether to make sure the sender does not miss the collision. At about time 2τ, the sender sees the noise burst and aborts its transmission, too. It then waits a random time before trying again.
Figure 4-15. Collision detection can take as long as 2τ. (a) A's packet starts at time 0. (b) The packet is almost at B at time τ − ε. (c) A collision occurs at τ. (d) The noise burst gets back to A at 2τ.
If a station tries to transmit a very short frame, it is conceivable that a collision will occur, but the transmission will have completed before the noise burst gets back to the station at 2τ. The sender will then incorrectly conclude that the frame was successfully sent. To prevent this situation from occurring, all frames must take more than 2τ to send so that the transmission is still taking place when the noise burst gets back to the sender. For a 10-Mbps LAN with a maximum length of 2500 meters and four repeaters (from the 802.3 specification), the round-trip time (including time to propagate through the four repeaters) has been determined to be nearly 50 µsec in the worst case. Therefore, the shortest allowed frame must take at least this long to transmit. At 10 Mbps, a bit takes 100 nsec, so 500 bits is the smallest frame that is guaranteed to work. To add some margin of safety, this number was rounded up to 512 bits or 64 bytes.
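The arithmetic is worth replaying; this is a direct transcription of the numbers in the text:

```python
bit_rate = 10e6        # classic Ethernet, bits/sec
round_trip = 50e-6     # worst-case 2τ: 2500 m and four repeaters, ~50 µsec

min_bits = bit_rate * round_trip   # the frame must outlast the round trip
print(min_bits)                    # 500.0, rounded up to 512 bits = 64 bytes
```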
The final field is the Checksum. It is a 32-bit CRC of the kind we studied in Sec. 3.2. In fact, it is defined exactly by the generator polynomial we gave there,
which popped up for PPP, ADSL, and other links too. This CRC is an error-detecting code that is used to determine if the bits of the frame have been received correctly. It just does error detection, with the frame dropped if an error is detected.
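Ethernet's checksum is the standard CRC-32 (generator polynomial 0x04C11DB7), which is the same polynomial behind Python's zlib.crc32, so that function can stand in for the frame check sequence in a sketch like the one below; real NICs also handle bit ordering and complementation in hardware, details we gloss over here.

```python
import zlib

frame = b"destination, source, type, and payload bytes"   # placeholder

fcs = zlib.crc32(frame)            # 32-bit CRC, as the hardware would compute
print(hex(fcs))

# The receiver recomputes the CRC over what it received; any mismatch
# means the frame is silently dropped (error detection only, no correction).
received = frame
assert zlib.crc32(received) == fcs
```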
CSMA/CD with Binary Exponential Backoff
Classic Ethernet uses the 1-persistent CSMA/CD algorithm that we studied in Sec. 4.2. This descriptor just means that stations sense the medium when they have a frame to send and send the frame as soon as the medium becomes idle. They monitor the channel for collisions as they send. If there is a collision, they abort the transmission with a short jam signal and retransmit after a random interval.
Let us now see how the random interval is determined when a collision occurs, as it is a new method. The model is still that of Fig. 4-5. After a collision, time is divided into discrete slots whose length is equal to the worst-case round-trip propagation time on the ether (2τ). To accommodate the longest path allowed by Ethernet, the slot time has been set to 512 bit times, or 51.2 µsec.
After the first collision, each station waits either 0 or 1 slot times at random before trying again. If two stations collide and each one picks the same random number, they will collide again. After the second collision, each one picks either 0, 1, 2, or 3 at random and waits that number of slot times. If a third collision occurs (the probability of this happening is 0.25), the next time the number of slots to wait is chosen at random from the interval 0 to 2^3 − 1.

In general, after i collisions, a random number between 0 and 2^i − 1 is chosen, and that number of slots is skipped. However, after 10 collisions have been reached, the randomization interval is frozen at a maximum of 1023 slots. After 16 collisions, the controller throws in the towel and reports failure back to the computer. Further recovery is up to higher layers.
This algorithm, called binary exponential backoff, was chosen to dynamically adapt to the number of stations trying to send. If the randomization interval for all collisions were 1023, the chance of two stations colliding for a second time would be negligible, but the average wait after a collision would be hundreds of slot times, introducing significant delay. On the other hand, if each station always delayed for either 0 or 1 slots, then if 100 stations ever tried to send at once they would collide over and over until 99 of them picked 1 and the remaining station picked 0. This might take years. By having the randomization interval grow exponentially as more and more consecutive collisions occur, the algorithm ensures a low delay when only a few stations collide but also ensures that the collisions are resolved in a reasonable interval when many stations collide. Truncating the backoff at 1023 keeps the bound from growing too large.
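The rule transcribes almost line for line into code (the slot time is the classic Ethernet value from above):

```python
import random

SLOT_TIME = 51.2e-6   # seconds: 512 bit times at 10 Mbps

def backoff_slots(collisions):
    """Slots to wait after the given number of consecutive collisions."""
    if collisions > 16:
        raise RuntimeError("16 collisions: report failure to higher layers")
    k = min(collisions, 10)               # interval frozen at 2^10 - 1 = 1023
    return random.randint(0, 2 ** k - 1)

for i in (1, 2, 3, 10, 16):
    print(i, backoff_slots(i) * SLOT_TIME, "sec")
```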
If there is no collision, the sender assumes that the frame was probably successfully delivered. That is, neither CSMA/CD nor Ethernet provides acknowledgements. This choice is appropriate for wired and optical fiber channels that have low error rates. Any errors that do occur must then be detected by the CRC and recovered by higher layers. For wireless channels that have more errors, we will see that acknowledgements are used.
4.3.3 Ethernet Performance
Now let us briefly examine the performance of classic Ethernet under conditions of heavy and constant load, that is, with k stations always ready to transmit. A rigorous analysis of the binary exponential backoff algorithm is complicated. Instead, we will follow Metcalfe and Boggs (1976) and assume a constant retransmission probability in each slot. If each station transmits during a contention slot with probability p, the probability A that some station acquires the channel in that slot is

A = kp(1 − p)^(k−1)

A is maximized when p = 1/k, with A → 1/e as k → ∞. The probability that the contention interval has exactly j slots in it is A(1 − A)^(j−1), so the mean number of slots per contention is given by

∑_{j=0}^{∞} j A(1 − A)^(j−1) = 1/A

Since each slot has a duration 2τ, the mean contention interval, w, is 2τ/A. Assuming optimal p, the mean number of contention slots is never more than e, so w is at most 2τe ≈ 5.4τ.
If the mean frame takes P sec to transmit, when many stations have frames to send,

Channel efficiency = P / (P + 2τ/A)        (4-2)
Here, we see where the maximum cable distance between any two stations enters into the performance figures. The longer the cable, the longer the contention interval, which is why the Ethernet standard specifies a maximum cable length.

It is instructive to formulate Eq. (4-2) in terms of the frame length, F, the network bandwidth, B, the cable length, L, and the speed of signal propagation, c, for the optimal case of e contention slots per frame. With P = F/B, Eq. (4-2) becomes

Channel efficiency = 1 / (1 + 2BLe/cF)        (4-3)
When the second term in the denominator is large, network efficiency will be low. More specifically, increasing network bandwidth or distance (the BL product) reduces efficiency for a given frame size. Unfortunately, much research on network hardware is aimed precisely at increasing this product. People want high bandwidth over long distances (fiber optic MANs, for example), yet classic Ethernet implemented in this manner is not the best system for these applications. We will see other ways of implementing Ethernet in the next section.
In Fig. 4-16, the channel efficiency is plotted versus the number of ready stations for 2τ = 51.2 µsec and a data rate of 10 Mbps, using Eq. (4-3). With a 64-byte slot time, it is not surprising that 64-byte frames are not efficient. On the other hand, with 1024-byte frames and an asymptotic value of e 64-byte slots per contention interval, the contention period is 174 bytes long and the efficiency is 85%. This result is much better than the 37% efficiency of slotted ALOHA.
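Eq. (4-3) is easy to check against these numbers; a small sketch using the optimal case of e contention slots, each 2τ long:

```python
import math

B = 10e6            # bandwidth, bits/sec
two_tau = 51.2e-6   # worst-case round-trip time, sec (one slot)

def efficiency(frame_bytes):
    P = frame_bytes * 8 / B          # frame transmission time
    contention = math.e * two_tau    # mean contention interval at optimal p
    return P / (P + contention)

print(efficiency(64))      # ~0.27: minimum-size frames waste the channel
print(efficiency(1024))    # ~0.85: the 85% quoted in the text
```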
Figure 4-16. Efficiency of Ethernet at 10 Mbps with 512-bit slot times. (Channel efficiency is plotted against the number of stations trying to send, for frame sizes of 64, 128, 256, 512, and 1024 bytes.)
It is probably worth mentioning that there has been a large amount of theoretical performance analysis of Ethernet (and other networks). Most of the results should be taken with a grain (or better yet, a metric ton) of salt, for two reasons. First, virtually all of the theoretical work assumes Poisson traffic. When researchers began looking at real data, they discovered that network traffic is rarely Poisson. Instead, it is self-similar or bursty over a range of time scales (Paxson and Floyd, 1995; and Fontugne et al., 2017). What this means is that averaging over long periods of time does not smooth out the traffic. As well as using questionable models, many of the analyses focus on the ‘‘interesting’’ performance cases of abnormally high load. Boggs et al. (1988) showed by experimentation that Ethernet works well in reality, even at moderately high load.
4.3.4 Switched Ethernet
Ethernet soon began to evolve away from the single long cable architecture of classic Ethernet. The problems associated with finding breaks or loose connections drove it toward a different kind of wiring pattern, in which each station has a dedicated cable running to a central hub. A hub simply connects all the attached wires electrically, as if they were soldered together. This configuration is shown in Fig. 4-17(a).
Figure 4-17. (a) Hub. (b) Switch. (Lines connect to ports on each box.)
The wires were telephone company twisted pairs, since most office buildings were already wired this way and normally plenty of spares were available. This reuse was a win, but it did reduce the maximum cable run from the hub to 100 meters (200 meters if high-quality Category 5 twisted pairs were used). Adding or removing a station is simpler in this configuration, and cable breaks can be detected easily. With the advantages of being able to use existing wiring and ease of maintenance, twisted-pair hubs quickly became the dominant form of Ethernet.
However, hubs do not increase capacity because they are logically equivalent to the single long cable of classic Ethernet. As more and more stations are added, each station gets a decreasing share of the fixed capacity. Eventually, the LAN will saturate. One way out is to go to a higher speed, say, from 10 Mbps to 100 Mbps, 1 Gbps, or even higher speeds. But with the growth of multimedia and powerful servers, even a 1-Gbps Ethernet can become saturated.
Fortunately, there is another way to deal with increased load: switched Ethernet. The heart of this system is a switch containing a high-speed backplane that connects all of the ports, as shown in Fig. 4-17(b). From the outside, a switch looks just like a hub. They are both boxes, typically with 4 to 48 ports, each with a standard RJ-45 connector for a twisted-pair cable. Each cable connects the switch or hub to a single computer, as shown in Fig. 4-18. A switch has the same advantages as a hub, too. It is easy to add or remove a new station by plugging or unplugging a wire, and it is easy to find most faults since a flaky cable or port will usually affect just one station. There is still a shared component that can fail—the switch itself—but if all stations lose connectivity the IT folks know what to do to fix the problem: replace the whole switch.
Inside the switch, however, something very different is happening. Switches only output frames to the ports for which those frames are destined. When a switch port receives an Ethernet frame from a station, the switch checks the Ethernet addresses to see which port the frame is destined for.
Figure 4-18. An Ethernet switch. (Stations connect to switch ports over twisted pair; one port connects to a hub.)
This step requires the switch to be able to work out which ports correspond to which addresses, a process that we will describe in Sec. 4.8 when we get to the general case of switches connected to other switches. For now, just assume that the switch knows the frame's destination port. The switch then forwards the frame over its high-speed backplane to the destination port. The backplane typically runs at many Gbps, using a proprietary protocol that does not need to be standardized because it is entirely hidden inside the switch. The destination port then transmits the frame on the wire so that it reaches the intended station. None of the other ports even knows the frame exists.
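In code, the forwarding decision is little more than a table lookup. The table contents here are hypothetical, and we simply assume, as the text does, that the switch already knows which port each address lives on:

```python
NUM_PORTS = 8
BROADCAST = "ff:ff:ff:ff:ff:ff"

forwarding_table = {                 # hypothetical address-to-port map
    "aa:aa:aa:aa:aa:aa": 1,
    "bb:bb:bb:bb:bb:bb": 7,
}

def output_ports(dst, input_port):
    """Ports on which to transmit a frame arriving on input_port."""
    if dst == BROADCAST or dst not in forwarding_table:
        # Broadcast (or unknown destination): flood to every other port.
        return [p for p in range(NUM_PORTS) if p != input_port]
    port = forwarding_table[dst]
    if port == input_port:
        return []      # already delivered on that segment; drop the frame
    return [port]      # no other port even knows the frame exists

print(output_ports("bb:bb:bb:bb:bb:bb", input_port=1))   # [7]
print(output_ports(BROADCAST, input_port=1))             # all ports but 1
```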
What happens if more than one of the stations or ports wants to send a frame at the same time? Again, switches differ from hubs. In a hub, all stations are in the same collision domain . They must use the CSMA/CD algorithm to schedule their transmissions. In a switch, each port is its own independent collision domain. In the common case that the cable is full duplex, both the station and the port can send a frame on the cable at the same time, without worrying about other ports and stations. Collisions are now impossible and CSMA/CD is not needed. However, if the cable is half duplex, the station and the port must contend for transmission with CSMA/CD in the usual way.
A switch improves performance over a hub in two ways. First, since there are no collisions, the capacity is used more efficiently. Second, and more importantly, with a switch multiple frames can be sent simultaneously (by different stations). These frames will reach the switch ports and travel over the switch's backplane to be output on the proper ports. However, since two frames might be sent to the same output port at the same time, the switch must have buffering so that it can temporarily queue an input frame until it can be transmitted to the output port. Overall, these improvements give a large performance win that is not possible with a hub. The total system throughput can often be increased by an order of magnitude, depending on the number of ports and traffic patterns.
The change in the ports on which frames are output also has security benefits. Most LAN interfaces have a promiscuous mode, in which all frames are given to each computer, not just those addressed to it. With a hub, every computer that is attached can see the traffic sent between all of the other computers. Spies and busybodies love this feature. With a switch, traffic is forwarded only to the ports
where it is destined. This restriction provides better isolation so that traffic will not easily escape and fall into the wrong hands. However, it is better to encrypt traffic if security is really needed.
Because the switch just expects standard Ethernet frames on each input port, it is possible to use some of the ports as concentrators. In Fig. 4-18, the port in the upper-right corner is connected not to a single station, but to a 12-port hub instead. As frames arrive at the hub, they contend for the ether in the usual way, including collisions and binary backoff. Successful frames make it through the hub to the switch and are treated there like any other incoming frames. The switch does not know they had to fight their way in. Once in the switch, they are sent to the correct output line over the high-speed backplane. It is possible that the correct destination was on one of the lines attached to the hub, in which case the frame has already been delivered so the switch drops it. Hubs are simpler and cheaper than switches, but due to falling switch prices, they have become an endangered species. Modern networks largely use switched Ethernet. Nevertheless, legacy hubs still exist.
4.3.5 Fast Ethernet
At the same time that switches were becoming popular, the speed of 10-Mbps Ethernet was coming under pressure. At first, 10 Mbps seemed like heaven, just as cable modems seemed like heaven to the users of 56-kbps telephone modems. But the novelty wore off quickly. As a kind of corollary to Parkinson’s Law (‘‘Work expands to fill the time available for its completion’’), it seemed that data expanded to fill the bandwidth available for their transmission.
Many installations needed more bandwidth and thus had numerous 10-Mbps LANs connected by a maze of repeaters, hubs, and switches, although to the network managers it sometimes felt that they were being held together by bubble gum and chicken wire. But even with Ethernet switches, the maximum bandwidth of a single computer was limited by the cable that connected it to the switch port.
It was in this environment that IEEE reconvened the 802.3 committee in 1992 with instructions to come up with a faster LAN. One proposal was to keep 802.3 exactly as it was, but just make it go faster. Another proposal was to redo it totally and give it lots of new features, such as real-time traffic and digitized voice, but just keep the old name (for marketing reasons). After some wrangling, the committee decided to keep 802.3 the way it was, and just make it go faster. This strategy would get the job done before the technology changed and avoid unforeseen problems with a brand new design. The new design would also be backward-compatible with existing Ethernet LANs. The people behind the losing proposal did what any self-respecting computer-industry people would have done under these circumstances: they stomped off and formed their own committee and standardized their LAN anyway (eventually as 802.12). It flopped miserably.
The work was done quickly (by standards committees’ norms), and the result, 802.3u, was approved by IEEE in June 1995. Technically, 802.3u is not really a
new standard, but an addendum to the existing 802.3 standard (to emphasize its backward compatibility). This strategy is used a lot. Since practically everyone calls it fast Ethernet, rather than 802.3u, we will do that, too.
The basic idea behind fast Ethernet was simple: keep all the old frame formats, interfaces, and procedural rules, but reduce the bit time from 100 nsec to 10 nsec. Technically, it would have been possible to copy 10-Mbps classic Ethernet and still detect collisions on time by just reducing the maximum cable length by a factor of 10. However, the advantages of twisted-pair wiring were so overwhelming that fast Ethernet is based entirely on this design. Thus, all fast Ethernet systems use hubs and switches; multidrop cables with vampire taps or BNC connectors are not permitted.
Nevertheless, some choices still had to be made, the most important being which wire types to support. One contender was Category 3 twisted pair. The argument for it was that practically every office in the Western world had at least four Category 3 (or better) twisted pairs running from it to a telephone wiring closet within 100 meters. Sometimes two such cables existed. Thus, using Category 3 twisted pair would make it possible to wire up desktop computers using fast Ethernet without having to rewire the building, an enormous advantage for many organizations.
The main disadvantage of Category 3 twisted pair is its inability to carry 100 Mbps over 100 meters, the maximum computer-to-hub distance specified for 10-Mbps hubs. In contrast, Category 5 twisted pair wiring can handle 100 m easily, and fiber can go much farther. The compromise chosen was to allow all three possibilities, as shown in Fig. 4-19, but to pep up the Category 3 solution to give it the additional carrying capacity needed.
Name         Cable          Max. segment   Advantages
100Base-T4   Twisted pair   100 m          Uses Category 3 UTP
100Base-TX   Twisted pair   100 m          Full duplex at 100 Mbps (Cat 5 UTP)
100Base-FX   Fiber optics   2000 m         Full duplex at 100 Mbps; long runs

Figure 4-19. The original fast Ethernet cabling.
The Category 3 UTP scheme, formally called 100Base-T4, used a signaling speed of 25 MHz, only 25% faster than standard Ethernet's 20 MHz. (Remember that Manchester encoding, discussed in Sec. 2.4.3, requires two clock periods for each of the 10 million bits sent each second.) However, to achieve the necessary bit rate, 100Base-T4 requires four twisted pairs. Of the four pairs, one is always to the hub, one is always from the hub, and the other two are switchable to the current transmission direction. To get 100 Mbps out of the three twisted pairs in the transmission direction, a fairly involved scheme is used on each twisted pair. It involves sending ternary digits with three different voltage levels. This scheme is never going to win any prizes for elegance, so we will (mercifully) skip the details.
However, since standard telephone wiring for decades has had four twisted pairs per cable, most offices are able to use the existing wiring plant. It means giving up your office telephone, but that is surely a small price to pay for faster email.
100Base-T4 fell by the wayside as many office buildings were rewired with Category 5 UTP for 100Base-TX Ethernet, which came to dominate the market. This design is simpler because the wires can handle clock rates of 125 MHz. Only two twisted pairs per station are used, one to the hub and one from it. Neither straight binary coding (i.e., NRZ) nor Manchester coding is used. Instead, the 4B/5B encoding we described in Sec. 2.4.3 is used. Four data bits are encoded as 5 signal bits and sent at 125 MHz to provide 100 Mbps. This scheme is simple but has sufficient transitions for synchronization and uses the bandwidth of the wire relatively well. The 100Base-TX system is full duplex; stations can transmit at 100 Mbps on one twisted pair and receive at 100 Mbps on another twisted pair at the same time.
The last option, 100Base-FX, uses two strands of multimode fiber, one for each direction, so it, too, can run full duplex with 100 Mbps in each direction. In this setup, the distance between a station and the switch can be up to 2 km.
Fast Ethernet allows interconnection by either hubs or switches. To ensure that the CSMA/CD algorithm continues to work, the relationship between the minimum frame size and maximum cable length must be maintained as the network speed goes up from 10 Mbps to 100 Mbps. So, either the minimum frame size of 64 bytes must go up or the maximum cable length of 2500 m must come down, proportionally. The easy choice was for the maximum distance between any two stations to come down by a factor of 10, since a hub with 100-m cables falls within this new maximum already. However, 2-km 100Base-FX cables are too long to permit a 100-Mbps hub with the normal Ethernet collision algorithm. These cables must instead be connected to a switch and operate in a full-duplex mode so that there are no collisions.
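The scaling argument can be checked directly: CSMA/CD requires the minimum frame to outlast 2τ, so multiplying the bit rate by 10 while keeping 64-byte frames divides the allowable round-trip time, and hence the network span, by 10. A small sketch:

```python
MIN_FRAME_BITS = 512   # 64 bytes, unchanged from classic Ethernet

for rate, name in [(10e6, "classic Ethernet"), (100e6, "fast Ethernet")]:
    budget = MIN_FRAME_BITS / rate   # the largest 2-tau CSMA/CD can tolerate
    print(name, budget)              # 51.2 µsec vs. 5.12 µsec
```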
Users quickly started to deploy fast Ethernet, but they were not about to throw away 10-Mbps Ethernet cards on older computers. As a consequence, virtually all fast Ethernet switches can handle a mix of 10-Mbps and 100-Mbps stations. To make upgrading easy, the standard itself provides a mechanism called auto-negotiation that lets two stations automatically negotiate the optimum speed (10 or 100 Mbps) and duplexity (half or full). It works well most of the time but is known to lead to duplex mismatch problems when one end of the link autonegotiates but the other end does not and is set to full-duplex mode (Shalunov and Carlson, 2005). Most Ethernet products use this feature to configure themselves.
4.3.6 Gigabit Ethernet
The ink was barely dry on the fast Ethernet standard when the 802 committee began working on a yet faster Ethernet, quickly dubbed gigabit Ethernet. IEEE ratified the most popular form as 802.3ab in 1999. Below, we will discuss some of
the key features of gigabit Ethernet. More information is given by Spurgeon and Zimmerman (2014).
The committee's goals for gigabit Ethernet were essentially the same as the committee's goals for fast Ethernet: increase performance tenfold while maintaining compatibility with all existing Ethernet standards. In particular, gigabit Ethernet had to offer unacknowledged datagram service with both unicast and broadcast, use the same 48-bit addressing scheme already in use, and maintain the same frame format, including the minimum and maximum frame sizes. The final standard met all these goals.
Like fast Ethernet, all configurations of gigabit Ethernet use point-to-point links. In the simplest configuration, illustrated in Fig. 4-20(a), two computers are directly connected to each other. The more common case, however, uses a switch or a hub connected to multiple computers and possibly additional switches or hubs, as shown in Fig. 4-20(b). In both configurations, each individual Ethernet cable has exactly two devices on it, no more and no fewer.
Figure 4-20. (a) A two-station Ethernet. (b) A multistation Ethernet with a switch or hub.
Also like fast Ethernet, gigabit Ethernet supports two different modes of operation: full-duplex mode and half-duplex mode. The ‘‘normal’’ mode is full-duplex mode, which allows traffic in both directions at the same time. This mode is used when there is a central switch connected to computers (or other switches) on the periphery. In this configuration, all lines are buffered so each computer and switch is free to send frames whenever it wants to. The sender does not have to sense the channel to see if anybody else is using it because contention is impossible. On the line between a computer and a switch, the computer is the only possible sender to the switch, and the transmission will succeed even if the switch is currently sending a frame to the computer (because the line is full duplex). Since no contention is possible, the CSMA/CD protocol is not used, so the maximum length of the cable is determined by signal strength issues rather than by how long it takes for a noise burst to propagate back to the sender in the worst case. Switches are free to mix and match speeds. Autonegotiation is supported just as in fast Ethernet, only now the choice is among 10, 100, and 1000 Mbps.
The other mode of operation, half-duplex, is used when the computers are connected to a hub rather than a switch. A hub does not buffer incoming frames. Instead, it electrically connects all the lines internally, simulating the multidrop cable used in classic Ethernet. In this mode, collisions are possible, so the standard CSMA/CD protocol is required. Because a 64-byte frame (the shortest allowed) can now be transmitted 100 times faster than in classic Ethernet, the maximum cable length must be 100 times less, or 25 meters, to maintain the essential property that the sender is still transmitting when the noise burst gets back to it, even in the worst case. With a 2500-m-long cable, the sender of a 64-byte frame on a system running at 1 Gbps would be long finished before the frame got even a tenth of the way to the other end, let alone to the end and back.
This length restriction was painful enough that two features were added to the standard to increase the maximum cable length to 200 meters, which is probably enough for most offices. The first feature, called carrier extension, essentially tells the hardware to add its own padding after the normal frame to extend the frame to 512 bytes. Since this padding is added by the sending hardware and removed by the receiving hardware, the software is unaware of it, meaning that no changes are needed to existing software. The downside is that using 512 bytes' worth of bandwidth to transmit 46 bytes of user data (the payload of a 64-byte frame) has a line efficiency of only 9%.
The second feature, called frame bursting, allows a sender to transmit a concatenated sequence of multiple frames in a single transmission. If the total burst is less than 512 bytes, the hardware pads it again. If enough frames are waiting for transmission, this scheme is very efficient and preferred over carrier extension.
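The efficiency comparison is a one-liner each way (our arithmetic, using the numbers in the text):

```python
PAYLOAD = 46          # user data in a minimum 64-byte frame
EXTENDED = 512        # bytes occupied after carrier extension

print(PAYLOAD / EXTENDED)   # ~0.09: the 9% line efficiency quoted above

# Bursting amortizes the padding: e.g., eight 64-byte frames sent as
# one burst of 8 * 64 = 512 bytes carry 8 * 46 useful bytes.
burst = max(8 * 64, EXTENDED)
print(8 * PAYLOAD / burst)  # ~0.72, far better than carrier extension
```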
In all fairness, it is hard to imagine an organization buying modern computers with gigabit Ethernet cards and then connecting them with an old-fashioned hub to simulate classic Ethernet with all its collisions. Gigabit Ethernet interfaces and switches used to be expensive, but their prices fell rapidly as sales volumes picked up. Still, backward compatibility is sacred in the computer industry, so the committee was required to put it in. Today, most computers ship with an Ethernet interface that is capable of 10-, 100-, and 1000-Mbps operation (and maybe higher) and compatible with all of them.
Gigabit Ethernet supports both copper and fiber cabling, as listed in Fig. 4-21. Signaling at or near 1 Gbps requires encoding and sending a bit every nanosecond. This trick was initially accomplished with short, shielded copper cables (the 1000Base-CX version) and optical fibers. For the optical fibers, two wavelengths are permitted and result in two different versions: 0.85 microns (short, for 1000Base-SX) and 1.3 microns (long, for 1000Base-LX).
Signaling at the short wavelength can be achieved with cheap LEDs. It is used with multimode fiber and is useful for connections within a building, as it can run up to 500 m for 50-micron fiber. Signaling at the long wavelength requires lasers. On the other hand, when combined with single-mode (10-micron) fiber, the cable can be up to 5 km. This limit allows long-distance connections between buildings, such as for a campus backbone, as a dedicated point-to-point link. Later variations of the standard permit even longer links over single-mode fiber.

Name          Cable            Max. segment   Advantages
1000Base-SX   Fiber optics     550 m          Multimode fiber (50, 62.5 microns)
1000Base-LX   Fiber optics     5000 m         Single (10 µ) or multimode (50, 62.5 µ)
1000Base-CX   2 pairs of STP   25 m           Shielded twisted pair
1000Base-T    4 pairs of UTP   100 m          Standard Category 5 UTP

Figure 4-21. Gigabit Ethernet cabling.
To send bits over these versions of gigabit Ethernet, the 8B/10B encoding we described in Sec. 2.4.3 was borrowed from another networking technology called Fibre Channel. That scheme encodes 8 bits of data into 10-bit codewords that are sent over the wire or fiber, hence the name 8B/10B. The codewords were chosen so that they could be balanced (i.e., have the same number of 0s and 1s) with sufficient transitions for clock recovery. Sending the coded bits with NRZ requires a signaling bandwidth 25% more than that required for the uncoded bits, a big improvement over the 100% expansion of Manchester coding.
However, all of these options required new copper or fiber cables to support the faster signaling. None of them made use of the large amount of Category 5 UTP that had been installed along with fast Ethernet. Within a year, 1000Base-T came along to fill this gap, and it has been the most popular form of gigabit Ethernet ever since. People apparently dislike rewiring their buildings.
More complicated signaling is needed to make Ethernet run at 1000 Mbps over Category 5 wires. To start, all four twisted pairs in the cable are used, and each pair is used in both directions at the same time by using digital signal processing to separate signals. Over each wire, five voltage levels that carry 2 bits are used for signaling at 125 Msymbols/sec. The mapping to produce the symbols from the bits is not straightforward. It involves scrambling, for transitions, followed by an error correcting code in which four values are embedded into five signal levels.
A speed of 1 Gbps is quite fast. For example, if a receiver is busy with some other task for even 1 msec and does not empty the input buffer on some line, up to 1953 frames may have accumulated in that gap. Also, when a computer on a gigabit Ethernet is shipping data down the line to a computer on a classic Ethernet, buffer overruns are very likely. As a consequence of these two observations, gigabit Ethernet supports flow control. The mechanism consists of one end sending a special control frame to the other end telling it to pause for some period of time. These PAUSE control frames are normal Ethernet frames containing a type of 0x8808. Pauses are given in units of the minimum frame time. For gigabit Ethernet, the time unit is 512 nsec, allowing for pauses as long as 33.6 msec.
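The pause arithmetic, given that the PAUSE frame carries a 16-bit count of these 512-bit-time units (a small check of the figures in the text):

```python
bit_time = 1e-9                   # 1 nsec per bit at 1 Gbps
quantum = 512 * bit_time          # one pause unit = 512 nsec
max_pause = 0xFFFF * quantum      # largest 16-bit count

print(quantum)     # 5.12e-07 sec
print(max_pause)   # ~0.0336 sec, the 33.6 msec mentioned above
```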
There is one more extension that was introduced along with gigabit Ethernet. Jumbo frames allow for frames to be longer than 1500 bytes, usually up to 9 KB.
This extension is proprietary. It is not recognized by the standard because if it is used then Ethernet is no longer compatible with earlier versions, but most vendors support it anyway. The rationale is that 1500 bytes is a short unit at gigabit speeds. By manipulating larger blocks of information, the frame rate can be decreased, along with the processing associated with it, such as interrupting the processor to say that a frame has arrived, or splitting up and recombining messages that were too long to fit in one Ethernet frame.
4.3.7 10-Gigabit Ethernet
As soon as gigabit Ethernet was standardized, the 802 committee got bored and wanted to get back to work. IEEE told them to start on 10-gigabit Ethernet. This work followed much the same pattern as the previous Ethernet standards, with standards for fiber and shielded copper cable appearing first in 2002 and 2004, followed by the standard for copper twisted pair in 2006.
Ten Gbps is an impressive speed, 1000x faster than the original Ethernet. Where could it be needed? The answer is inside data centers and exchanges to connect high-end routers, switches, and servers, as well as in long-distance, high-bandwidth trunks between offices that are enabling entire metropolitan area networks based on Ethernet and fiber. The long distance connections use optical fiber, while the short connections may use copper or fiber.
All versions of 10-gigabit Ethernet support only full-duplex operation. CSMA/CD is no longer part of the design, and the standards concentrate on the details of physical layers that can run at very high speed. Compatibility still matters, though, so 10-gigabit Ethernet interfaces autonegotiate and fall back to the highest speed supported by both ends of the line.
The main kinds of 10-gigabit Ethernet are listed in Fig. 4-22. Multimode fiber with the 0.85µ (short) wavelength is used for medium distances, and single-mode fiber at 1.3µ (long) and 1.5µ (extended) is used for long distances. 10GBase-ER can run for distances of 40 km, making it suitable for wide area applications. All of these versions send a serial stream of information that is produced by scrambling the data bits, then encoding them with a 64B/66B code. This encoding has less overhead than an 8B/10B code.
Name          Cable               Max. segment   Advantages
10GBase-SR    Fiber optics        Up to 300 m    Multimode fiber (0.85 µ)
10GBase-LR    Fiber optics        10 km          Single-mode fiber (1.3 µ)
10GBase-ER    Fiber optics        40 km          Single-mode fiber (1.5 µ)
10GBase-CX4   4 pairs of twinax   15 m           Twinaxial copper
10GBase-T     4 pairs of UTP      100 m          Category 6a UTP

Figure 4-22. 10-gigabit Ethernet cabling.
The first copper version defined, 10GBase-CX4, uses a cable with four pairs of twinaxial copper wiring. Each pair uses 8B/10B coding and runs at 3.125 Gsymbols/sec to reach 10 Gbps. This version is cheaper than fiber and was early to market, but it remains to be seen whether it will be beaten out in the long run by 10-gigabit Ethernet over more garden-variety twisted-pair wiring.
10GBase-T is the version that uses UTP cables. While it calls for Category 6a wiring, it can for shorter runs use lower categories (including Category 5) to allow some reuse of installed cabling. Not surprisingly, the physical layer is quite involved to reach 10 Gbps over twisted pair, so we will only sketch some of the high-level details. Each of the four twisted pairs is used to send 2500 Mbps in both directions. This speed is reached using a signaling rate of 800 Msymbols/sec with symbols that use 16 voltage levels. The symbols are produced by scrambling the data, protecting it with an LDPC (Low Density Parity Check) code, and further coding for error correction.
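A quick sanity check of those numbers (the ~22% coding-overhead figure at the end is our inference, not stated in the text):

```python
import math

PAIRS = 4
SYMBOL_RATE = 800e6                  # symbols/sec per pair
BITS_PER_SYMBOL = math.log2(16)      # 16 voltage levels -> 4 raw bits

raw_per_pair = SYMBOL_RATE * BITS_PER_SYMBOL   # 3.2 Gbps raw per pair
data_per_pair = 2.5e9                          # 2500 Mbps of data per pair

print(PAIRS * data_per_pair)                   # 1e10: the full 10 Gbps
print(1 - data_per_pair / raw_per_pair)        # ~0.22 spent on LDPC, etc.
```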
Ten-gigabit Ethernet is now widespread in the market, so the 802.3 committee has moved on. At the end of 2007, IEEE created a group to standardize Ethernet operating at 40 Gbps and 100 Gbps. This upgrade lets Ethernet compete in very high-performance settings, including long-distance connections in backbone networks and short connections over equipment backplanes. The resulting standards are described in the next section.
4.3.8 40- and 100-Gigabit Ethernet
After it finished standardizing 10-gigabit Ethernet, the 802.3 committee got to work on new standards for Ethernet at 40 gigabits/sec and 100 gigabits/sec. The former is targeted at internal connections in data centers, not at ordinary offices and certainly not end users. The latter is targeted at the Internet backbone and as such has to work on optical-network runs of thousands of kilometers. A possible use is a virtual private LAN to connect a data center with a million CPUs to another million-CPU data center.
The first standard was 802.3ba, approved in 2010, followed by 802.3bj (2014) and 802.3cd (2018). All of these define Ethernet at both 40 Gbps and 100 Gbps. Design goals included:
1. Backward compatibility with 802.3 standards to 1 gigabit/sec.
2. Allowing the minimum and maximum frame sizes to stay the same.

3. Handle bit-error rates of 10^-12 and better.
4. Work well on optical networks.
5. Have data rates of either 40 Gbps or 100 Gbps.
6. Allow the use of single- or multimode fiber and specialized backplanes.
The new standards phase out copper wire in favor of optical fiber and high-performance (copper) backplanes used in data centers that support cloud computing. Half a dozen modulation schemes are supported, including 64B/66B (like 8B/10B, but with more bits). In addition, up to 10 parallel lanes at 10 Gbps each can be used to get to 100 Gbps. The lanes are typically different frequency bands over an optical fiber. Integration into existing optical networks uses ITU recommendation G.709.
Starting around 2018, a small number of companies began introducing 100-Gbps switches and network adapter cards. For the folks for whom 100 Gbps is not enough, work has already begun on standards for up to 400 gigabits/sec, sometimes referred to as 400GbE. The standards are 802.3cd, 802.3ck, 802.3cm, and 802.3cn if you want to look them up. At 400 Gbps, a typical (compressed) 4K movie can be downloaded in full in about 2 seconds.
4.3.9 Retrospective on Ethernet
Ethernet has been around for over 40 years and has no serious competitors in sight, so it is likely to be around for many more years to come. Few CPU architectures, operating systems, or programming languages have been king of the mountain for three decades and still going strong. Clearly, Ethernet did something right. What was it?
Probably the main reason for its longevity is that Ethernet is simple and flexible. In practice, simple translates into reliable, cheap, and easy to maintain. Once the hub and switch architecture was adopted, failures became extremely rare. People hesitate to replace something that works perfectly all the time, especially when they know that an awful lot of things in the computer industry work very poorly, so that many so-called ‘‘upgrades’’ are worse than what they replaced.
Simple also translates into cheap. Twisted-pair wiring is relatively inexpensive, as are the hardware components. They may start out expensive when there is a transition, for example, new gigabit Ethernet NICs or switches, but they are merely additions to a well-established network (not a replacement of it) and the prices fall quickly as the sales volume picks up.
Ethernet is easy to maintain. There is no software to install (other than the drivers) and not much in the way of configuration tables to manage (and get wrong). Also, adding new hosts is as simple as just plugging them in.
Another point is that Ethernet interworks easily with TCP/IP, which has become dominant. IP is a connectionless protocol, so it fits perfectly with Ethernet, which is also connectionless. IP fits much less well with connection-oriented alternatives such as ATM. This mismatch definitely hurt ATM's chances.
Lastly, and perhaps most importantly, Ethernet has been able to evolve in certain crucial ways. Speeds have gone up by four orders of magnitude and hubs and switches have been introduced, but these changes have not required changing the software and have often allowed the existing cabling to be reused for a time. When a network salesman shows up at a large installation and says ‘‘I have this fantastic new network for you. All you have to do is throw out all your hardware and rewrite all your software,’’ he has a problem.
Many alternative technologies that you have probably not even heard of were faster than Ethernet when they were introduced. As well as ATM, this list includes FDDI (Fiber Distributed Data Interface) and Fibre Channel,† two ring-based optical LANs. Both were incompatible with Ethernet. Neither one made it. They were too complicated, which led to complex chips and high prices. The lesson that should have been learned here was KISS (Keep It Simple, Stupid). Eventually, Ethernet caught up with them in terms of speed, often by borrowing some of their technology, for example, the 4B/5B coding from FDDI and the 8B/10B coding from Fibre Channel. Then, they had no advantages left and quietly died off or fell into specialized roles.
It looks like Ethernet will continue to expand in its applications for some time. Ten-gigabit Ethernet freed it from the distance constraints of CSMA/CD. Much effort is being put into carrier-grade Ethernet to let network providers offer Ethernet-based services to their customers for metropolitan and wide area networks (Hawkins, 2016). This application carries Ethernet frames long distances over fiber and calls for better management features to help operators offer reliable, high-quality services. Very high-speed networks like 100GbE are also finding uses in backplanes connecting components in large routers or servers. Both of these uses are in addition to that of sending frames between computers in offices. The next step is 400GbE and that may not even be the last one.
4.4 WIRELESS LANS
Wireless LANs are increasingly popular, and homes, offices, cafes, libraries, airports, zoos, and other public places are being outfitted with them to connect desktop PCs, laptops, tablets, and smartphones to the Internet. Wireless LANs can also be used to let two or more nearby computers communicate without using the Internet.
The main wireless LAN standard for over two decades has been 802.11. We gave some background information on it in Sec. 1.5.3. Now it is time to take a closer look at the technology. In the following sections, we will look at the protocol stack, physical-layer radio transmission techniques, the MAC sublayer protocol, the frame structure, and the services provided. For more information about 802.11, see Bing (2017) and Davis (2018). To get the truth from the mouth of the horse, consult the published IEEE standards.
† It is called ‘‘Fibre Channel’’ and not ‘‘Fiber Channel’’ because the document editor was British.
4.4.1 The 802.11 Architecture and Protocol Stack
802.11 networks can be used in two modes. The most popular mode is to connect clients, such as laptops and smartphones, to another network, such as a company intranet or the Internet. This mode is shown in Fig. 4-23(a). In infrastructure mode, each client is associated with an AP (Access Point) that is in turn connected to the other network. The client sends and receives its packets via the AP. Several access points may be connected together, typically by a wired network called a distribution system, to form an extended 802.11 network. In this case, clients can send frames to other clients via their APs.
Figure 4-23. 802.11 architecture. (a) Infrastructure mode. (b) Ad-hoc mode.
The other mode, shown in Fig. 4-23(b), is an ad hoc network. This mode is a collection of computers that are associated so that they can directly send frames to each other. There is no access point. Since Internet access is the killer application for wireless, ad hoc networks are not very popular.
Now we will look at the protocols. All the 802 protocols, including 802.11 and Ethernet, have a certain commonality of structure. A partial view of the 802.11 protocol stack for the major 802.11 variants is given in Fig. 4-24. The stack is the same for clients and APs. The physical layer corresponds fairly well to the OSI physical layer, but the data link layer in all the 802 protocols is split into two or more sublayers. In 802.11, the MAC sublayer determines how the channel is allocated, that is, who gets to transmit next. Above it is the logical link control sublayer, whose job it is to hide the differences between the different 802 variants and make them indistinguishable as far as the network layer is concerned. This could have been a significant responsibility, but these days the logical link control is a glue layer that identifies the protocol (e.g., IP) that is carried within an 802.11 frame.
Several transmission techniques have been added to the physical layer as 802.11 has evolved since it first appeared in 1997. Two of the initial techniques, infrared in the manner of television remote controls and frequency hopping in the 2.4-GHz band, are now defunct.
[Figure: the data link layer is split into the logical link layer and the MAC sublayer, above the physical layer and below the upper layers. Physical-layer techniques and release dates: 802.11 (legacy), frequency hopping and infrared, 1997–1999; 802.11b, spread spectrum, 1999; 802.11a, OFDM, 1999; 802.11g, OFDM, 2003; 802.11n, MIMO OFDM, 2009; 802.11ac, MU-MIMO OFDM, 2013; 802.11ax, MU-MIMO OFDMA, 2019.]
Figure 4-24. Part of the 802.11 protocol stack.
The third initial technique, direct sequence spread spectrum at 1 or 2 Mbps in the 2.4-GHz band, was extended to run at rates up to 11 Mbps and quickly became a hit. It is now known as 802.11b.
To give wireless junkies a much-wanted speed boost, new transmission techniques based on the orthogonal frequency division multiplexing scheme we described in Sec. 2.5.3 were introduced in 1999 and 2003. The first is called 802.11a and uses a different frequency band, 5 GHz. The second stuck with 2.4 GHz and compatibility. It is called 802.11g. Both give rates up to 54 Mbps.
Transmission techniques that simultaneously use multiple antennas at the transmitter and receiver for a speed boost were finalized as 802.11n in Oct. 2009. In December of 2013, IEEE ran out of letters and published the next standard as 802.11ac. As an aside, the 802.11 committee members know the whole alphabet and use the ‘‘missing’’ letters, such as 802.11r, for minor technical refinements and amendments (often for clarifications and bug fixes). 802.11ac operates in the 5-GHz band, which means that older devices that use only the 2.4-GHz band cannot use it. Most modern mobile devices use 802.11ac. Most recently, the 802.11ax standard was approved for even more speed.
We will now examine each of these transmission techniques briefly. We will only cover those that are in use, however, skipping the legacy 802.11 transmission methods. Technically, these belong to the physical layer and should have been examined in Chap. 2, but since they are so closely tied to wireless LANs in general and the 802.11 LAN in particular, we treat them here instead.
4.4.2 The 802.11 Physical Layer
Each of the transmission techniques makes it possible to send a MAC frame over the air from one station to another. They differ, however, in the technology used and speeds achievable in practice. A detailed discussion of these technologies is far beyond the scope of this book, but a few words on each one will relate the
techniques to the material we covered in Chap. 2 and provide interested readers with the key terms to search for elsewhere for more information.
All of the 802.11 techniques use short-range radios to transmit signals in either the 2.4-GHz or the 5-GHz ISM frequency bands. These bands have the advantage of being unlicensed and hence freely available to any transmitter willing to meet some restrictions, such as radiated power of at most 1 W (though 50 mW is more typical for wireless LAN radios). Unfortunately, this fact is also known to the manufacturers of garage door openers, cordless phones, microwave ovens, and countless other devices, all of which compete with laptops and smartphones using WiFi for the same spectrum. The 2.4-GHz band tends to be more crowded than the 5-GHz band, so 5 GHz can be better for some applications even though it has shorter range due to the higher frequency. Unfortunately, the shorter radio waves at 5 GHz do not penetrate walls as well as the longer ones at 2.4 GHz do, so 5 GHz is not the unquestioned champion.
All of the transmission methods also define multiple rates. The idea is that different rates can be used depending on the current conditions. If the wireless signal is weak, a low rate can be used. If the signal is clear, the highest rate can be used. This adjustment is called rate adaptation. Since the rates vary by a factor of 10 or more, good rate adaptation is important for good performance. Of course, since it
is not needed for interoperability, the standards do not say how rate adaptation should be done.
The first transmission method we shall look at is 802.11b. It is a spread-spectrum method that supports rates of 1, 2, 5.5, and 11 Mbps, though in practice the operating rate is nearly always 11 Mbps. It is similar to the CDMA system we examined in Sec. 2.4.4, except that there is only one spreading code that is shared by all users. Spreading is used to satisfy the FCC requirement that power be spread over the ISM band. The spreading sequence used by 802.11b is called a Barker sequence. It has the property that its autocorrelation is low except when the sequences are aligned. This property allows a receiver to lock onto the start of a transmission. To send at a rate of 1 Mbps, the Barker sequence is used with BPSK modulation to send 1 bit per 11 chips. The chips are transmitted at a rate of 11 Mchips/sec. To send at 2 Mbps, it is used with QPSK modulation to send 2 bits per 11 chips. The higher rates are different. These rates use a technique called CCK (Complementary Code Keying) to construct codes instead of the Barker sequence. The 5.5-Mbps rate sends 4 bits in every 8-chip code, and the 11-Mbps rate sends 8 bits in every 8-chip code.
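To make the autocorrelation property concrete, the short Python sketch below (using NumPy and one common sign convention for the length-11 Barker code; treat it as an illustration rather than the standard’s exact chip ordering) computes the autocorrelation and shows the sharp peak a receiver locks onto.

    import numpy as np

    # Length-11 Barker code (one common sign convention).
    barker = np.array([+1, +1, +1, -1, -1, -1, +1, -1, -1, +1, -1])

    # Aperiodic autocorrelation at all lags.
    auto = np.correlate(barker, barker, mode="full")
    print(auto)  # peak of 11 at zero lag; every other lag has magnitude 0 or 1

The peak of 11 against sidelobes of at most 1 is what lets a receiver find the start of a transmission even in noise.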
Next, we come to 802.11a, which supports rates up to 54 Mbps in the 5-GHz ISM band. You might have expected 802.11a to come before 802.11b, but that was not the case. Although the 802.11a group was set up first, the 802.11b standard was approved first and its products got to market well ahead of the 802.11a products, partly because of the difficulty of operating in the higher 5-GHz band.
The 802.11a method is based on OFDM (Orthogonal Frequency Division Multiplexing) because OFDM uses the spectrum efficiently and resists wireless
signal degradations such as multipath. Bits are sent over 52 subcarriers in parallel, 48 carrying data and 4 used for synchronization. Each symbol lasts 4 µs and sends 1, 2, 4, or 6 bits. The bits are coded for error correction with a binary convolutional code first, so only 1/2, 2/3, or 3/4 of the bits are not redundant. With different combinations, 802.11a can run at eight different rates, ranging from 6 to 54 Mbps. These rates are significantly faster than 802.11b rates, and there is less interference in the 5-GHz band. However, 802.11b has a range that is about seven times greater than that of 802.11a, which is more important in many situations.
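The eight rates follow directly from the numbers just given: 48 data subcarriers times the bits per subcarrier times the code rate, divided by the 4-µs symbol time. The sketch below reproduces them; the (bits, code rate) pairings correspond to the BPSK, QPSK, 16-QAM, and 64-QAM modes.

    # 802.11a rate arithmetic: 48 data subcarriers, 4-usec symbols.
    modes = [(1, 1/2), (1, 3/4), (2, 1/2), (2, 3/4),
             (4, 1/2), (4, 3/4), (6, 2/3), (6, 3/4)]
    for bits, rate in modes:
        mbps = 48 * bits * rate / 4e-6 / 1e6
        print(f"{mbps:g} Mbps")  # 6, 9, 12, 18, 24, 36, 48, 54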
Even with the greater range, the 802.11b people had no intention of letting this upstart win the speed championship. Fortunately, in May 2002, the FCC dropped its long-standing rule requiring all wireless communications equipment operating in the ISM bands in the U.S. to use spread spectrum, so the committee got to work on 802.11g, which was approved by IEEE in 2003. It copies the OFDM modulation methods of 802.11a but operates in the narrow 2.4-GHz ISM band along with 802.11b. It offers the same rates as 802.11a (6 to 54 Mbps) plus of course compatibility with any 802.11b devices that happen to be nearby. All of these different choices can be confusing for customers, so it is common for products to support 802.11a/b/g in a single network interface card.
Not content to stop there, the IEEE committee began work on a high-throughput physical layer called 802.11n. It was ratified in 2009. The goal for 802.11n was throughput of at least 100 Mbps after all the wireless overheads were removed. This goal called for a raw speed increase of at least a factor of four. To make it happen, the committee doubled the channels from 20 MHz to 40 MHz and reduced framing overheads by allowing a group of frames to be sent together. More significantly, however, 802.11n uses up to four antennas to transmit up to four streams of information at the same time. The signals of the streams interfere at the receiver, but they can be separated using MIMO (Multiple Input Multiple Output) communications techniques. The use of multiple antennas gives a large speed boost, or better range and reliability instead. MIMO, like OFDM, is one of those clever communications ideas that is changing wireless designs and which we are all likely to hear a lot about in the future. For a brief introduction to multiple antennas in 802.11, see Halperin et al. (2010).
In 2013, IEEE published the 802.11ac standard. It uses wider (80-MHz and 160-MHz) channels, 256-QAM modulation, and MU-MIMO (MultiUser MIMO) with up to eight streams and other tricks to crank the bit rate up to a theoretical maximum of 7 Gbps, although in practice this is virtually never even approached. Modern consumer mobile devices generally use 802.11ac.
Another recent 802.11 standard is 802.11ad. This one operates in the 60-GHz band (57–71 GHz), which means the radio waves are very short: only 5 mm long. These waves do not penetrate walls or anything else, so the standard is only useful within a single room. However, this is an advantage as well as a disadvantage. It means that whatever the person in the next office or apartment is doing will not interfere with what you are doing. The combination of high bandwidth and poor
penetration makes it ideal for streaming uncompressed 4K or 8K movies from a base station in a room to mobile devices in the room. An improvement to this standard, increasing the bandwidth by a factor of four, is the 802.11ay standard.
Now we come to 802.11ax, sometimes referred to as high-efficiency wireless. The consumer-friendly name for the standard is WiFi 6 (in case you thought you slept through WiFi 1 through 5, you did not; the old names were based on the IEEE standards numbers, and the WiFi Alliance decided to call this revision WiFi 6 because it is the sixth version of the WiFi standard). It allows for more efficient QAM encoding along with a new modulation scheme, OFDMA. It can (in principle) operate in unlicensed parts of the spectrum up to 7 GHz and can (theoretically) achieve a data rate of 11 Gbps. You can try this at home if you like, but unless you have a perfectly designed test lab at home, you are not going to get 11 Gbps. You might get 1 Gbps, though.
In 802.11ax OFDMA, a central scheduler allocates fixed-length resource units to each of the transmitting stations, thus reducing contention in dense deployments. 802.11ax also provides support for spatial spectrum reuse through a technique called coloring, whereby a sender marks the beginning of its transmission in a way that allows other senders to determine whether simultaneous use of the spectrum could take place. In some circumstances, a sender could transmit simultaneously if it reduces its power accordingly.
Additionally, 802.11ax uses 1024-QAM, which allows each symbol to encode 10 bits, as opposed to the 8 bits/symbol in the 256-QAM that 802.11ac uses. The standard also supports smarter scheduling through a feature called target wake time, which allows a router to put devices in the home on transmission schedules to minimize collisions. This feature is likely to be most useful in smart homes, where an increasing number of connected devices may need to periodically send heartbeats to the home router.
4.4.3 The 802.11 MAC Sublayer Protocol
Let us now return from the land of electrical engineering to the land of computer science. The 802.11 MAC sublayer protocol is quite different from that of Ethernet, due to two factors that are fundamental to wireless communication.
First, radios are nearly always half duplex, meaning that they cannot transmit and listen for noise bursts at the same time on a single frequency. The received signal can easily be a million times weaker than the transmitted signal, so it cannot be heard at the same time. With Ethernet, a station just waits until the ether goes silent and then starts transmitting. If it does not receive a noise burst back while transmitting the first 64 bytes, the frame has almost assuredly been delivered correctly. With wireless, this collision detection mechanism does not work.
Instead, 802.11 tries to avoid collisions with a protocol called CSMA/CA (CSMA with Collision Avoidance). This protocol is conceptually similar to Ethernet’s CSMA/CD, with channel sensing before sending and exponential backoff
after collisions. However, a station that has a frame to send starts with a random backoff (except when it has not used the channel recently and the channel is idle). It does not wait for a collision. The number of slots to back off is chosen in the range 0 to, say, 15 in the case of the OFDM physical layer. The station waits until the channel is idle, by sensing that there is no signal for a short period of time (called the DIFS, as we explain below), and counts down idle slots, pausing when frames are sent. It sends its frame when the counter reaches 0. If the frame gets through, the destination immediately sends a short acknowledgement. Lack of an acknowledgement is inferred to indicate an error, whether a collision or otherwise. In this case, the sender doubles the backoff period and tries again, continuing with exponential backoff as in Ethernet until the frame has been successfully transmitted or the maximum number of retransmissions has been reached.
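As a rough illustration of this procedure, here is a minimal Python sketch of the backoff loop. The helper callbacks channel_idle, transmit, and acked are hypothetical stand-ins for the physical layer; the contention-window values match the 15 (and a common maximum of 1023) of the OFDM physical layer, and the retry limit is illustrative.

    import random

    CW_MIN, CW_MAX, RETRY_LIMIT = 15, 1023, 7

    def send_frame(channel_idle, transmit, acked):
        # channel_idle(): True once the medium has been idle for DIFS.
        # transmit():     sends the frame.
        # acked():        True if a short acknowledgement arrived in time.
        cw = CW_MIN
        for attempt in range(RETRY_LIMIT + 1):
            slots = random.randint(0, cw)       # pick a random backoff
            while slots > 0:
                if channel_idle():              # count down only in idle slots;
                    slots -= 1                  # the countdown pauses while busy
            transmit()
            if acked():                         # no ACK is taken to mean an error
                return True
            cw = min(2 * (cw + 1) - 1, CW_MAX)  # double the window: 15, 31, 63, ...
        return False                            # retry limit reached; give up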
An example timeline is shown in Fig. 4-25. Station A is the first to send a frame. While A is sending, stations B and C become ready to send. They see that the channel is busy and wait for it to become idle. Shortly after A receives an acknowledgement, the channel goes idle. However, rather than sending a frame right away and colliding, B and C both perform a backoff. C picks a short backoff, and thus sends first. B pauses its countdown while it senses that C is using the channel, and resumes after C has received an acknowledgement. B soon completes its backoff and sends its frame.
Figure 4-25. Sending a frame with CSMA/CA.
Compared to Ethernet, there are two main differences. First, starting backoffs early helps to avoid collisions. This avoidance is worthwhile because collisions are expensive, as the entire frame is transmitted even if one occurs. Second, acknowledgements are used to infer collisions because collisions cannot be detected.
This mode of operation is called DCF (Distributed Coordination Function) because each station acts independently, without any kind of central control. The standard also includes an optional additional mode of operation called PCF (Point
Coordination Function) in which the access point controls all activity in its cell, just like a cellular base station. However, PCF is not used in practice because there is normally no way to prevent stations in another nearby network from transmitting competing traffic.
The second problem is that the transmission ranges of different stations may be different. With a wire, the system is engineered so that all stations can hear each other. With the complexities of RF propagation, this situation does not hold for wireless stations. Consequently, situations such as the hidden terminal problem mentioned earlier and illustrated again in Fig. 4-26(a) can arise. Since not all stations are within radio range of each other, transmissions going on in one part of a cell may not be received elsewhere in the same cell. In this example, station C is transmitting to station B. If A senses the channel, it will not hear anything and will falsely conclude that it may now start transmitting to B. This decision leads to a collision.
Figure 4-26. (a) The hidden terminal problem. (b) The exposed terminal problem.
The inverse situation is the exposed terminal problem, illustrated in Fig. 4-26(b). Here, B wants to send to C, so it listens to the channel. When it hears a transmission, it falsely concludes that it may not send to C, even though A may in fact be transmitting to D (not shown). This decision wastes a transmission opportunity.
To reduce ambiguities about which station is sending, 802.11 defines channel sensing to consist of both physical sensing and virtual sensing. Physical sensing simply checks the medium to see if there is a valid signal. With virtual sensing, each station keeps a logical record of when the channel is in use by tracking the NAV (Network Allocation Vector). Each frame carries a NAV field that says how long the sequence of which this frame is part will take to complete. Stations that overhear this frame know that the channel will be busy for the period indicated by the NAV, regardless of whether they can sense a physical signal. For example, the
NAV of a data frame includes the time needed to send an acknowledgement. All stations that hear the data frame will defer during the acknowledgement period, whether or not they can hear the acknowledgement. Essentially, the NAV serves as a countdown timer, during which period the sender assumes that the channel is busy. In 802.11, the units of the NAV are microseconds. In dense deployments, the NAV set by one sender can be reset by other senders in the same transmission range, thus causing collisions and suboptimal performance. To mitigate this effect, 802.11ax introduces two NAVs: one NAV is modified by frames belonging to the network that the station is associated with, and the second NAV is modified by frames that the station overhears but that originate in overlapping networks.
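A station’s NAV bookkeeping amounts to remembering the latest time at which any overheard exchange will finish. A minimal sketch follows; the class and method names are illustrative, and a single NAV is shown rather than the two of 802.11ax.

    import time

    class VirtualCarrierSense:
        def __init__(self):
            self.nav_expiry = 0.0          # usec timestamp; channel "busy" until then

        def on_frame_overheard(self, duration_usec):
            # The overheard frame's Duration/NAV field covers the rest of the
            # exchange, including the acknowledgement.
            now = time.monotonic() * 1e6
            self.nav_expiry = max(self.nav_expiry, now + duration_usec)

        def channel_virtually_idle(self):
            return time.monotonic() * 1e6 >= self.nav_expiry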
An optional RTS/CTS mechanism uses the NAV to prevent terminals from sending frames at the same time as hidden terminals. It is shown in Fig. 4-27. In this example, A wants to send to B. C is a station within range of A (and possibly within range of B, but that does not matter). D is a station within range of B but not within range of A.
Figure 4-27. Virtual channel sensing using CSMA/CA.
The protocol starts when A decides it wants to send data to B. A begins by sending an RTS frame to B to request permission to send it a frame. If B receives this request, it answers with a CTS frame to indicate that the channel is clear to send. Upon receipt of the CTS, A sends its frame and starts an ACK timer. Upon correct receipt of the data frame, B responds with an ACK frame, completing the exchange. If A’s ACK timer expires before the ACK gets back to it, it is treated as a collision and the whole protocol is run again after a backoff.
Now let us consider this exchange from the viewpoints of C and D. C is within range of A, so it may receive the RTS frame. If it does, it realizes that someone is going to send data soon. From the information provided in the RTS request, it can estimate how long the sequence will take, including the final ACK. So, for the good of all, it desists from transmitting anything until the exchange is completed. It does so by updating its record of the NAV to indicate that the channel is busy, as shown in Fig. 4-27. D does not hear the RTS, but it does hear the CTS, so it also updates its NAV. Note that the NAV signals are not transmitted; they are just internal reminders to keep quiet for a certain period of time.
However, while RTS/CTS sounds good in theory, it is one of those designs that has proved to be of little value in practice. Several reasons why it is seldom used are known. It does not help for short frames (which are sent in place of the RTS) or for the AP (which everyone can hear, by definition). For other situations, it only slows down operation. RTS/CTS in 802.11 is a little different than in the MACA protocol we saw in Sec. 4.2 because everyone hearing the RTS or CTS remains quiet for the duration to allow the ACK to get through without collision. Because of this, it does not help with exposed terminals as MACA did, only with hidden terminals. Most often there are few hidden terminals, and CSMA/CA already helps them by slowing down stations that transmit unsuccessfully, whatever the cause, to make it more likely that transmissions will succeed.
CSMA/CA with physical and virtual sensing is the core of the 802.11 protocol. However, there are several other mechanisms that have been developed to go with it. Each of these mechanisms was driven by the needs of real operation, so we will look at them briefly.
The first need we will take a look at is reliability. In contrast to wired networks, wireless networks are noisy and unreliable, in no small part due to interference from other kinds of devices, such as microwave ovens, which also use the unlicensed ISM bands. The use of acknowledgements and retransmissions is of little help if the probability of getting a frame through is small in the first place.
The main strategy that is used to increase successful transmissions is to lower the transmission rate. Slower rates use more robust modulations that are more likely to be received correctly for a given signal-to-noise ratio. If too many frames are lost, a station can lower the rate. If frames are delivered with little loss, a station can occasionally test a higher rate to see if it should be used.
Another strategy to improve the chance of the frame getting through undamaged is to send shorter frames. If the probability of any bit being in error is p, the probability of an n-bit frame being received entirely correctly is (1 − p)^n. For example, for p = 10^−4, the probability of receiving a full Ethernet frame (12,144 bits) correctly is less than 30%. Most frames will be lost. But if the frames are only a third as long (4048 bits), two thirds of them will be received correctly. Now most frames will get through and fewer retransmissions will be needed.
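The arithmetic is easy to check (p is the per-bit error probability):

    p = 1e-4
    print((1 - p) ** 12144)  # ~0.30: a full-length Ethernet frame rarely survives
    print((1 - p) ** 4048)   # ~0.67: a frame a third as long usually survives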
Shorter frames can be implemented by reducing the maximum size of the message that is accepted from the network layer. Alternatively, 802.11 allows frames to be split into smaller pieces, called fragments, each with its own checksum. The fragment size is not fixed by the standard, but is a parameter that can be adjusted by the AP. The fragments are individually numbered and acknowledged using a stop-and-wait protocol (i.e., the sender may not transmit fragment k + 1 until it has received the acknowledgement for fragment k). Once the channel has been acquired, multiple fragments are sent as a burst. They go one after the other with an acknowledgement (and possibly retransmissions) in between, until either the whole frame has been successfully sent or the transmission time reaches the maximum allowed. The NAV mechanism described above keeps other stations quiet only until
the next acknowledgement, but another mechanism (see below) is used to allow a burst of fragments to be sent without other stations sending a frame in the middle.

The second need we will discuss is saving power. Battery life is always an issue with mobile wireless devices. The 802.11 standard pays attention to the issue of power management so that clients need not waste power when they have no information to send or receive.
The basic mechanism for saving power builds on beacon frames. Beacons are periodic broadcasts by the AP (e.g., every 100 msec). The frames advertise the presence of the AP to clients and carry system parameters, such as the identifier of the AP, the time, how long until the next beacon, and security settings.
Clients can set a power-management bit in frames that they send to the AP to tell it that they are entering power-save mode. In this mode, the client can doze and the AP will buffer traffic intended for it. To check for incoming traffic, the client wakes up for every beacon and checks a traffic map that is sent as part of the beacon. This map tells the client if there is buffered traffic. If so, the client sends a poll message to the AP, which then sends the buffered traffic. The client can then go back to sleep until the next beacon is sent.
Another power-saving mechanism, called APSD (Automatic Power Save Delivery), was added to 802.11 in 2005. With this new mechanism, the AP buffers frames and sends them to a client just after the client sends frames to the AP. The client can then go to sleep until it has more traffic to send (and receive). This mechanism works well for applications such as VoIP that have frequent traffic in both directions. For example, a VoIP wireless phone might use it to send and receive frames every 20 msec, much more frequently than the beacon interval of 100 msec, while dozing in between.
The third and last need we will examine is quality of service. When the VoIP traffic in the preceding example competes with peer-to-peer traffic, the VoIP traffic will suffer. It will be delayed due to contention with the high-bandwidth peer-to-peer traffic, even though the VoIP bandwidth is low. These delays are likely to degrade the voice calls. To prevent this degradation, we would like to let the VoIP traffic go ahead of the peer-to-peer traffic, as it is of higher priority.
IEEE 802.11 has a clever mechanism to provide this kind of quality of service that was introduced as a set of extensions under the name 802.11e in 2005. It works by extending CSMA/CA with carefully defined intervals between frames. After a frame has been sent, a certain amount of idle time is required before any station may send a frame, to check that the channel is no longer in use. The trick is to define different time intervals for different kinds of frames.
Five intervals are depicted in Fig. 4-28. The interval between regular data frames is called the DIFS (DCF InterFrame Spacing). Any station may attempt to acquire the channel to send a new frame after the medium has been idle for DIFS. The usual contention rules apply, and binary exponential backoff may be needed if a collision occurs. The shortest interval is SIFS (Short InterFrame Spacing). It is used to allow the parties in a single dialog the chance to go first.
Examples include letting the receiver send an ACK, other control frame sequences like RTS and CTS, or letting a sender transmit a burst of fragments. Sending the next fragment after waiting only SIFS is what prevents another station from jumping in with a frame in the middle of the exchange.
[Figure: the five intervals, from shortest to longest: SIFS (a control frame or the next fragment may be sent), AIFS1 (high-priority frame), DIFS (regular DCF frame), AIFS4 (low-priority frame), and EIFS (bad-frame recovery).]
Figure 4-28. Interframe spacing in 802.11.
The two AIFS (Arbitration InterFrame Space) intervals show examples of two different priority levels. The short interval, AIFS1, is smaller than DIFS but longer than SIFS. It can be used by the AP to move voice or other high-priority traffic to the head of the line. The AP will wait for a shorter interval before it sends the voice traffic, and thus send it before regular traffic. The long interval, AIFS4, is larger than DIFS. It is used for background traffic that can be deferred until after regular traffic. The AP will wait for a longer interval before it sends this traffic, giving regular traffic the opportunity to transmit first. The complete quality of service mechanism defines four different priority levels that have different backoff parameters as well as different idle parameters.
The last time interval, EIFS (Extended InterFrame Spacing), is used only by a station that has just received a bad or unknown frame, to report the problem. The idea is that since the receiver may have no idea of what is going on, it should wait a while to avoid interfering with an ongoing dialog between two stations.
A further part of the quality of service extensions is the notion of a TXOP, or transmission opportunity. The original CSMA/CA mechanism let stations send one frame at a time. This design was fine until the range of rates increased. With 802.11a/g, one station might be sending at 6 Mbps while another sends at 54 Mbps. They each get to send one frame, but the 6-Mbps station takes nine times as long (ignoring fixed overheads) as the 54-Mbps station to send its frame. This disparity has the unfortunate side effect of slowing down a fast sender that is competing with a slow sender to roughly the rate of the slow sender. For example, again ignoring fixed overheads, when sending alone the 6-Mbps and 54-Mbps senders will get their own rates, but when sending together they will both get 5.4 Mbps on average. It is a stiff penalty for the fast sender. This issue is known as the rate anomaly (Heusse et al., 2003).
With transmission opportunities, each station gets an equal amount of airtime, not an equal number of frames. Stations that send at a higher rate for their airtime will get higher throughput. In our example, when sending together the 6-Mbps and 54-Mbps senders will now get 3 Mbps and 27 Mbps, respectively.
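Both numbers can be verified with a few lines of arithmetic (overheads ignored, frame size arbitrary):

    frame = 1.0                               # Mbit
    t_slow, t_fast = frame / 6, frame / 54    # seconds per frame at 6 and 54 Mbps

    # Per-frame fairness: the two stations alternate, one frame each.
    print(2 * frame / (t_slow + t_fast) / 2)  # 5.4 Mbps each (the rate anomaly)

    # TXOP airtime fairness: each station gets half the airtime.
    print(6 / 2, 54 / 2)                      # 3.0 and 27.0 Mbps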
4.4.4 The 802.11 Frame Structure
The 802.11 standard defines three different classes of frames in the air: data, control, and management. Each of these has a header with a variety of fields used within the MAC sublayer. In addition, there are some headers used by the physical layer, but these mostly deal with the modulation techniques used, so we will not discuss them here.
We will look at the format of the data frame as an example. It is shown in Fig. 4-29. First comes the Frame control field, which is made up of 11 subfields. The first of these is the Protocol version, set to 00. It is there to allow future versions of 802.11 to operate at the same time in the same cell. Then come the Type (data, control, or management) and Subtype fields (e.g., RTS or CTS). For a regular data frame (without quality of service), they are set to 10 and 0000 in binary. The To DS and From DS bits are set to indicate whether the frame is going to or coming from the network connected to the APs, which is called the distribution system. The More fragments bit means that more fragments will follow. The Retry bit marks a retransmission of a frame sent earlier. The Power management bit indicates that the sender is going into power-save mode. The More data bit indicates that the sender has additional frames for the receiver. The Protected Frame bit indicates that the frame body has been encrypted for security. We will discuss security briefly in the next section. Finally, the Order bit tells the receiver that the higher layer expects the sequence of frames to arrive strictly in order.
[Figure: fields of the data frame, in order, with sizes in bytes: Frame control (2), Duration (2), Address 1, recipient (6), Address 2, transmitter (6), Address 3 (6), Sequence (2), Data (0–2312), Frame check sequence (4). The Frame control subfields, with sizes in bits: Version (2, = 00), Type (2, = 10 for data), Subtype (4, = 0000), To DS (1), From DS (1), More frag. (1), Retry (1), Pwr. mgt. (1), More data (1), Protected (1), Order (1).]
Figure 4-29. Format of the 802.11 data frame.
The second field of the data frame, the Duration field, tells how long the frame and its acknowledgement will occupy the channel, measured in microseconds. It is present in all types of frames, including control frames, and is what stations use to manage the NAV mechanism.
Next come addresses. Data frames sent to or from an AP have three addresses, all in standard IEEE 802 format. The first address is the receiver, and the second address is the transmitter. They are obviously needed, but what is the third address for? Remember that the AP is simply a relay point for frames as they travel between a client and another point on the network, perhaps a distant client or a portal to the Internet. The third address gives this distant endpoint.
The Sequence field numbers frames so that duplicates can be detected. Of the 16 bits available, 4 identify the fragment and 12 carry a number that is advanced with each new transmission. The Data field contains the payload, up to 2312 bytes. The first bytes of this payload are in a format known as LLC (Logical Link Control). This layer is the glue that identifies the higher-layer protocol (e.g., IP) to which the payloads should be passed. Last comes the Frame check sequence, which is the same 32-bit CRC we saw in Sec. 3.2.2 and elsewhere.
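Packing 4 fragment bits and 12 sequence bits into the 16-bit field can be pictured with a couple of shifts. The layout below, with the fragment number in the low-order bits, follows the standard’s Sequence Control field, but take the helper itself as illustrative:

    def parse_sequence(field16):
        fragment = field16 & 0xF           # low 4 bits: fragment number
        sequence = (field16 >> 4) & 0xFFF  # high 12 bits: sequence number
        return fragment, sequence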
Management frames have the same format as data frames, plus a format for the data portion that varies with the subtype (e.g., parameters in beacon frames). Control frames are short. Like all frames, they have the Frame control, Duration, and Frame check sequence fields. However, they may have only one address and no data portion. Most of the key information is conveyed with the Subtype field (e.g., ACK, RTS, and CTS).
4.4.5 Services
The 802.11 standard defines the various services that the clients, the access points, and the network connecting them must provide in order to be a conformant wireless LAN.
Association and Data Delivery
The association service is used by mobile stations to connect themselves to APs. Typically, it is used just after a station moves within radio range of the AP. Upon arrival, the station learns the identity and capabilities of the AP, either from beacon frames or by directly asking the AP. The capabilities include the data rates supported, security arrangements, power-saving capabilities, quality of service support, and more. The AP’s beacon message also includes an SSID (Service Set IDentifier), which most people think of as the network name. The station sends a request to associate with the AP; the AP may accept or reject the request. While beacons are always broadcast, the SSID may or may not be broadcast. If the SSID is not broadcast, the station must somehow know (or discover) the name to associate to that AP.
Reassociation lets a station change its preferred AP. This is useful for mobile stations moving from one AP to another AP in the same extended 802.11 LAN, like a handover in the cellular network. If used correctly, no data will be lost as a consequence of the handover. (But 802.11, like Ethernet, is a best-effort service.)
No delivery guarantees are given. Either the station or the AP may also disassociate, ending the relationship. A station should use this service before shutting down or leaving the network. The AP may use it before going down for maintenance. The 802.11w standard added authentication to disassociation frames.
Once frames reach the AP, the distribution service determines how to route them. If the destination is local to the AP, the frames can be sent out directly over the air. Otherwise, they will have to be forwarded over the wired network. The integration service handles any translation that is needed for a frame to be sent outside the 802.11 LAN, or to arrive from outside the 802.11 LAN. The common case here is connecting the wireless LAN to the Internet.
Data transmission is what it is all about, so 802.11 naturally provides a data delivery service. This service lets stations transmit and receive data using the protocols we described earlier in this chapter. Since 802.11 is modeled on Ethernet and transmission over Ethernet is not guaranteed to be 100% reliable, transmission over 802.11 is not guaranteed to be reliable either. Higher layers must deal with detecting and correcting errors.
Security and Privacy
Stations must also authenticate before they can send frames via the AP, but authentication is handled in different ways depending on the choice of security scheme. If the 802.11 network is ‘‘open,’’ anyone is allowed to use it. Otherwise, credentials are needed to authenticate.
A common authentication approach, WPA2 (WiFi Protected Access 2), implements security as defined in the 802.11i standard. (WPA is an interim scheme that implements a subset of 802.11i. We will skip it and go straight to the complete scheme.) With WPA2, the AP can talk to an authentication server that has a username and password database to determine if the station is allowed to access the network. Alternatively, a pre-shared key, which is a fancy name for a network password, may be configured. Several frames are exchanged between the station and the AP with a challenge and response that lets the station prove it has the right credentials. This exchange happens after association.
Another authentication approach that is commonly used in enterprise networks is 802.1X, which implements an approach called port-based authentication. 802.1X relies on centralized authentication (e.g., authentication of devices to a centralized server), which creates the possibility of more fine-grained access control, accounting, billing, and attribution. The station that is authenticating is sometimes called a supplicant; this device authenticates to the network through an authenticator, which talks to the authentication server. 802.1X relies on an authentication framework called EAP (Extensible Authentication Protocol). The EAP framework defines more than 50 different methods to perform authentication, but common methods include EAP-TLS, which performs authentication based on certificates; EAP-TTLS and PEAP, which allow the client to associate using a
variety of methods, including password-based authentication; and EAP-SIM, whereby a mobile phone can authenticate using a SIM. 802.1X has many advantages over simple WPA, such as the ability to perform fine-grained access control based on user, but it requires a certificate infrastructure to administer.
The predecessor to WPA was called WEP (Wired Equivalent Privacy). For this scheme, authentication with a preshared key happens before association. WEP is now widely known to be insecure and is effectively no longer used. The first practical demonstration that WEP was broken came when Adam Stubblefield was a summer intern at AT&T (Stubblefield et al., 2002). He was able to code up and test an attack in one week, much of which was spent getting permission from management to buy the WiFi cards needed for experiments. Software to crack WEP passwords is now freely available.
With WEP broken and WPA deprecated, the next try was WPA2. It uses a privacy service that manages the details of encryption and decryption. The encryption algorithm for WPA2 is based on AES (Advanced Encryption Standard), a U.S. government standard approved in 2002. The keys that are used for encryption are determined during the authentication procedure. Unfortunately,
WPA2 was broken in 2017 (Vanhoef and Piessens, 2017). Good security is very hard, even with unbreakable crypto, because key management is the weakest link.
Prioritization and Power Control
To handle traffic with different priorities, there is a QoS traffic scheduling service. It uses the protocols we described to give voice and video traffic preferential treatment compared to best-effort and background traffic. A companion service also provides higher-layer timer synchronization. This lets stations coordinate their actions, which may be useful for media processing.
Finally, there are two services that help stations manage their use of the spectrum. The transmit power control service gives stations the information they need to meet regulatory limits on transmit power that vary from region to region. The dynamic frequency selection service gives stations the information they need to avoid transmitting on frequencies in the 5-GHz band that are being used by radar in the vicinity.
With these services, 802.11 provides a rich set of functionality for connecting nearby mobile clients to the Internet. It has been a huge success, and the standard has repeatedly been amended to add more functionality. For a perspective on where the standard has been and where it is heading, see Hiertz et al. (2010).
4.5 BLUETOOTH
In 1994, the Swedish company L. M. Ericsson became interested in connecting its mobile phones to other devices (e.g., laptops) without cables. Together with four other companies (IBM, Intel, Nokia, and Toshiba), it formed a SIG (Special
Interest Group, i.e., consortium) in 1998 to develop a wireless standard for connecting computing and communication devices and accessories using short-range, low-power, inexpensive wireless radios. The project was named Bluetooth, after Harald Blaatand (Bluetooth) II (940–981), a Viking king who unified (i.e., conquered) Denmark and Norway, also without cables.
Bluetooth 1.0 was released in July 1999, and since then the SIG has never looked back. All manner of consumer electronic devices now use Bluetooth, from mobile phones and laptops to headsets, printers, keyboards, mice, game consoles, watches, music players, navigation units, and more. The Bluetooth protocols let these devices find and connect to each other, an act called pairing, and securely transfer data.
The protocols have evolved over the past decade, too. After the initial protocols stabilized, higher data rates were added to Bluetooth 2.0 in 2004. With the 3.0 release in 2009, Bluetooth can be used for device pairing in combination with 802.11 for high-throughput data transfer. The 4.0 release in June 2010 specified low-power operation. That will be handy for people who do not want to change the batteries regularly in all of those devices around the house.
We will cover the main aspects of Bluetooth 4.0 below, as it is still the most widely used version. Afterwards, we will discuss Bluetooth 5 and how it differs from Bluetooth 4.0 (mostly in minor ways).
4.5.1 Bluetooth Architecture
Let us start our study of the Bluetooth system with a quick overview of what it contains and what it is intended to do. The basic unit of a Bluetooth system is a piconet, which consists of a controller node and up to seven active worker nodes within a distance of 10 meters. Multiple piconets can exist in the same (large) room and can even be connected via a bridge node that takes part in multiple piconets, as in Fig. 4-30. An interconnected collection of piconets is called a scatternet.
In addition to the seven active worker nodes in a piconet, there can be up to 255 parked nodes in the net. These are devices that the controller has switched to a low-power state to reduce the drain on their batteries. In parked state, a device cannot do anything except respond to an activation or beacon signal from the controller. Two minor intermediate power states, hold and sniff, also exist.
The reason for the controller/worker design is that the designers intended to facilitate the implementation of complete Bluetooth chips for under $5. The consequence of this decision is that the workers are fairly dumb, basically just doing whatever the controller tells them to do. At its heart, a piconet is a centralized TDM system, with the controller controlling the clock and determining which device gets to communicate in which time slot. All communication is between the controller and a worker; direct worker-worker communication is not possible.
[Figure: two piconets, each with a controller (C) surrounded by workers (W), some active and some parked; a bridge worker belongs to both piconets.]
Figure 4-30. Two piconets can be connected to form a scatternet.
4.5.2 Bluetooth Applications
Most network protocols just provide channels between communicating entities and let application designers figure out what they want to use them for. For example, 802.11 does not specify whether users should use their laptop computers for reading email, surfing the Web, or something else. In contrast, the Bluetooth SIG specifies particular applications to be supported and provides different protocol stacks for each one. At the time of this writing, there are more than two dozen applications, which are called profiles. Unfortunately, this approach leads to a very large amount of complexity. We will omit the complexity here but will briefly look at the profiles to see more clearly what the Bluetooth SIG is trying to accomplish with them.
Six of the profiles are for different uses of audio and video. For example, the intercom profile allows two telephones to connect as walkie-talkies. The headset and hands-free profiles both provide voice communication between a headset and its base station, as might be used for hands-free telephony while driving a car. Other profiles are for streaming stereo-quality audio and video, say, from a portable music player to headphones, or from a digital camera to a TV.
The human interface device profile is for connecting keyboards and mice to computers. Other profiles let a mobile phone or other computer receive images from a camera or send images to a printer. Perhaps of more interest is a profile to use a mobile phone as a remote control for a (Bluetooth-enabled) TV.
Still other profiles enable networking. The personal area network profile lets Bluetooth devices form an ad hoc network or remotely access another network, such as an 802.11 LAN, via an access point. The dial-up networking profile was actually the original motivation for the whole project. It allows a (laptop) computer to connect to a mobile phone containing a built-in modem without using any cables, just radio signals.
Profiles for higher-layer information exchange have also been defined. The synchronization profile is intended for loading data into a mobile phone when it leaves home and collecting data from it when it returns.
We will skip the rest of the profiles, except to mention that some profiles serve as building blocks on which the above profiles are built. The generic access profile, on which all of the other profiles are built, provides a way to establish and maintain secure links (channels) between the controller and the workers. The other generic profiles define the basics of object exchange and audio and video transport. Utility profiles are used widely for functions such as emulating a serial line, which is especially useful for many legacy applications.
Was it really necessary to spell out all these applications in detail and provide different protocol stacks for each one? Probably not, but there were a number of different working groups that devised different parts of the standard, and each one just focused on its specific problem and generated its own profile. Think of this as Conway’s Law in action. (In the April 1968 issue of Datamation magazine, Melvin Conway observed that if you assign n people to write a compiler, you will get an n-pass compiler, or more generally, the software structure mirrors the structure of the group that produced it.) It would probably have been possible to get away with two protocol stacks instead of 25, one for file transfer and one for streaming real-time communication.
4.5.3 The Bluetooth Protocol Stack
The Bluetooth standard has many protocols grouped loosely into the layers shown in Fig. 4-31. The first observation to make is that the structure does not follow the OSI model, the TCP/IP model, the 802 model, or any other model.
[Figure: from top to bottom: applications at the top, with profiles cutting vertically through the upper layers; the service discovery and RFcomm protocols sitting on L2CAP in the upper layers; the host-controller interface line; below it the link manager and the link control (baseband) layers, which form the data link layer; and the physical radio layer at the bottom.]
Figure 4-31. The Bluetooth protocol architecture.
The bottom layer is the physical radio layer, which corresponds fairly well to the physical layer in the OSI and 802 models. It deals with radio transmission and modulation. Many of the concerns here have to do with the goal of making the system inexpensive so that it can become a mass-market item.
The link control (or baseband) layer is somewhat analogous to the MAC sublayer but also includes elements of the physical layer. It deals with how the controller controls time slots and how these slots are grouped into frames.
Next come two protocols that use the link control protocol. The link manager handles the establishment of logical channels between devices, including power management, pairing and encryption, and quality of service. It lies below the host-controller interface line. This interface is a convenience for implementation: typically, the protocols below the line will be implemented on a Bluetooth chip, and the protocols above the line will be implemented on the Bluetooth device that hosts the chip.
The link protocol above the line is L2CAP (Logical Link Control Adaptation Protocol). It frames variable-length messages and provides reliability if needed. Many protocols use L2CAP, such as the two utility protocols that are shown. The service discovery protocol is used to locate services within the network. The RFcomm (Radio Frequency communication) protocol emulates the standard serial port found on PCs for connecting the keyboard, mouse, and modem, among other devices.
The top layer is where the applications are located. The profiles are represented by vertical boxes because they each define a slice of the protocol stack for a particular purpose. Specific profiles, such as the headset profile, usually contain only those protocols needed by that application and no others. For example, profiles may include L2CAP if they have packets to send but skip L2CAP if they have only a steady flow of audio samples.
In the following sections, we will examine the Bluetooth radio layer and various link protocols, since these roughly correspond to the physical and MAC sublayers in the other protocol stacks we have studied.
4.5.4 The Bluetooth Radio Layer
The radio layer moves the bits from controller to worker, or vice versa. It is a low-power system with a range of 10 meters operating in the same 2.4-GHz ISM band as 802.11. The band is divided into 79 channels of 1 MHz each. To coexist with other networks using the ISM band, frequency hopping spread spectrum is used. There can be up to 1600 hops/sec over slots with a dwell time of 625 µsec. All the nodes in a piconet hop frequencies simultaneously, following the slot timing and pseudorandom hop sequence dictated by the controller.
Unfortunately, it turned out that early versions of Bluetooth and 802.11 interfered enough to ruin each other’s transmissions. Some companies responded by banning Bluetooth altogether, but eventually a technical solution was devised. The
solution is for Bluetooth to adapt its hop sequence to exclude channels on which there are other RF signals. This process reduces the harmful interference. It is called adaptive frequency hopping.
Three forms of modulation are used to send bits on a channel. The basic scheme is to use frequency shift keying to send a 1-bit symbol every microsecond, giving a gross data rate of 1 Mbps. Enhanced rates were introduced with the 2.0 version of Bluetooth. These rates use phase shift keying to send either 2 or 3 bits per symbol, for gross data rates of 2 or 3 Mbps. The enhanced rates are only used in the data portion of frames.
4.5.5 The Bluetooth Link Layers
The link control (or baseband) layer is the closest thing Bluetooth has to a MAC sublayer. It turns the raw bit stream into frames and defines some key formats. In the simplest form, the controller in each piconet defines a series of 625-µsec time slots, with the controller’s transmissions starting in the even slots and the workers’ transmissions starting in the odd ones. This scheme is traditional time division multiplexing, with the controller getting half the slots and the workers sharing the other half. Frames can be 1, 3, or 5 slots long. Each frame has an overhead of 126 bits for an access code and header, plus a settling time of 250–260 µsec per hop to allow the inexpensive radio circuits to become stable. The payload of the frame can be encrypted for confidentiality with a key that is chosen when the controller and worker connect. Hops only happen between frames, not during a frame. The result is that a 5-slot frame is much more efficient than a 1-slot frame because the overhead is constant but more data is sent.
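A quick calculation shows how much the longer frames help. At the 1-Mbps basic rate a 625-µsec slot carries 625 bit times, so dividing the data field by the total slot time gives a rough efficiency (the sketch uses only the figures quoted in the text and in Fig. 4-32, ignoring everything else):

    slot_bits = 625  # bit times per slot at 1 Mbps
    for slots, data_bits in [(1, 240), (5, 2744)]:
        print(slots, "slots:", round(data_bits / (slots * slot_bits), 2))
        # 1 slot: ~0.38 efficiency; 5 slots: ~0.88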
The link manager protocol sets up logical channels, called links, to carry frames between the controller and a worker device that have discovered each other. A pairing procedure is followed to make sure that the two devices are allowed to communicate before the link is used. The old pairing method is that both devices must be configured with the same four-digit PIN (Personal Identification Number). The matching PIN is how each device would know that it was connecting to the right remote device. However, unimaginative users and device defaults led to PINs such as ‘‘0000’’ and ‘‘1234,’’ which meant that this method provided very little security in practice.
The new secure simple pairing method enables users to confirm that both devices are displaying the same passkey, or to observe the passkey on one device and enter it into the second device. This method is more secure because users do not have to choose or set a PIN. They merely confirm a longer, device-generated passkey. Of course, it cannot be used on some devices with limited input/output, such as a hands-free headset.
Once pairing is complete, the link manager protocol sets up the links. Two main kinds of links exist to carry the payload (user data). The first is the SCO (Synchronous Connection Oriented) link. It is used for real-time data, such as
telephone connections. This type of link is allocated a fixed slot in each direction. A worker may have up to three SCO links with its controller. Each SCO link can transmit one 64,000-bps PCM audio channel. Due to the time-critical nature of SCO links, frames sent over them are never retransmitted. Instead, forward error correction can be used to increase reliability.
The other kind is the ACL (Asynchronous ConnectionLess) link. This type of link is used for packet-switched data that is available irregularly. ACL traffic is delivered on a best-effort basis without guarantees. Frames can be lost and may have to be retransmitted. A worker may have only one ACL link to its controller.
The data sent over ACL links come from the L2CAP layer. This layer has four major functions. First, it accepts packets of up to 64 KB from the upper layers and breaks them into frames for transmission. At the far end, the frames are reassembled into packets. Second, it handles the multiplexing and demultiplexing of multiple packet sources. When a packet has been reassembled, the L2CAP layer determines which upper-layer protocol to hand it to, for example, RFcomm or service discovery. Third, L2CAP handles error control and retransmission. It detects errors and resends packets that were not acknowledged. Finally, L2CAP enforces quality of service requirements between multiple links.
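The segmentation function is easy to picture in code. Here is a minimal sketch; the frame payload size and the function names are illustrative only (the real payload size depends on the frame type in use), not taken from the Bluetooth specification.

MAX_PACKET = 64 * 1024   # L2CAP accepts upper-layer packets up to 64 KB

def segment(packet, frame_payload=339):   # 339 bytes is illustrative
    # Break one upper-layer packet into frame-sized chunks.
    assert len(packet) <= MAX_PACKET
    return [packet[i:i + frame_payload]
            for i in range(0, len(packet), frame_payload)]

def reassemble(frames):
    # The far end simply concatenates the chunks back together.
    return b"".join(frames)

data = bytes(range(256)) * 10            # a 2560-byte packet
assert reassemble(segment(data)) == data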
4.5.6 The Bluetooth Frame Structure
Bluetooth defines several frame formats, the most important of which is shown in two forms in Fig. 4-32. It begins with an access code that usually identifies the controller so that workers within radio range of two controllers can tell which traffic is for them. Next comes a 54-bit header containing typical MAC sublayer fields. If the frame is sent at the basic rate, the data field comes next. It has up to 2744 bits for a five-slot transmission. For a single time slot, the format is the same except that the data field is 240 bits.
If the frame is sent at the enhanced rate, the data portion may have up to two or three times as many bits because each symbol carries 2 or 3 bits instead of 1 bit. These data are preceded by a guard field and a synchronization pattern that is used to switch to the faster data rate. That is, the access code and header are carried at the basic rate and only the data portion is carried at the faster rate. Enhanced-rate frames end with a short trailer.
Let us take a quick look at the common header. The Address field identifies which of the eight active devices the frame is intended for. The Type field identifies the frame type (ACL, SCO, poll, or null), the type of error correction used in the data field, and how many slots long the frame is. The Flow bit is asserted by a worker when its buffer is full and cannot receive any more data. This bit enables a primitive form of flow control. The Acknowledgement bit is used to piggyback an ACK onto a frame. The Sequence bit is used to number the frames to detect retransmissions. The protocol is stop-and-wait, so 1 bit is enough. Then comes the 8-bit header Checksum. The entire 18-bit header is repeated three times to form the 54-bit header shown in Fig. 4-32.
Figure 4-32. Typical Bluetooth data frame at (a) basic and (b) enhanced data rates. The basic-rate frame carries a 72-bit access code, a 54-bit header (Addr, Type, F, A, and S fields of 3, 4, 1, 1, and 1 bits plus an 8-bit CRC, repeated three times), and 0–2744 bits of data at the 1X rate. The enhanced-rate frame carries the same 72-bit access code and 54-bit header, followed by a 16-bit guard/sync field, 0–8184 bits of data at the 2X or 3X rate, and a 2-bit trailer, filling up to five 625-µsec slots.
On the receiving side, a simple circuit examines all three copies of each bit. If all three are the same, the bit is accepted. If not, the majority opinion wins. Thus, 54 bits of transmission capacity are used to send 10 bits of header. The reason is that to reliably send data in a noisy environment using cheap, low-powered (2.5 mW) devices with little computing capacity, a great deal of redundancy is needed.
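A sketch of the receiver's vote, assuming (as the text describes) that the three 18-bit copies are laid out back to back; real hardware applies the same 2-out-of-3 logic bit by bit:

def majority_decode(bits54):
    # Accept each header bit if at least two of its three copies agree.
    assert len(bits54) == 54
    copies = [bits54[0:18], bits54[18:36], bits54[36:54]]
    return [1 if a + b + c >= 2 else 0
            for a, b, c in zip(*copies)]

header = [1, 0, 1] * 6                  # an arbitrary 18-bit header
sent = header * 3                       # repeated three times: 54 bits
sent[7] = 1 - sent[7]                   # flip one bit in transit
assert majority_decode(sent) == header  # the error is voted away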
Various formats are used for the data field for ACL and SCO frames. The basic-rate SCO frames are a simple example to study: the data field is always 240 bits. Three variants are defined, permitting 80, 160, or 240 bits of actual payload, with the rest being used for error correction. In the most reliable version (80-bit payload), the contents are just repeated three times, the same as the header.
We can work out the capacity with this frame as follows. Since the worker may use only the odd slots, it gets 800 slots/sec, just as the controller does. With an 80-bit payload, the channel capacity from the worker is 64,000 bps, as is the channel capacity from the controller. This capacity is exactly enough for a single full-duplex PCM voice channel (which is why a hop rate of 1600 hops/sec was chosen). That is, despite a raw bandwidth of 1 Mbps, a single full-duplex uncompressed voice channel can completely saturate the piconet. The efficiency of 13% is the result of spending 41% of the capacity on settling time, 20% on headers, and 26% on repetition coding. This shortcoming highlights the value of the enhanced rates and frames of more than a single slot.
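A few lines of Python reproduce this arithmetic, again assuming a nominal 255-µsec settling time within the 250–260 µsec range:

SLOTS_PER_SEC = 800      # each side gets every other 625-usec slot
PAYLOAD_BITS = 80        # useful bits in the most reliable SCO frame

print(PAYLOAD_BITS * SLOTS_PER_SEC)   # 64000 bps: one PCM voice channel

# Where each slot's 625 bit times go (1 bit per usec at the basic rate):
settle, header = 255, 126             # settling time and frame overhead
repetition = 240 - PAYLOAD_BITS       # 160 redundant bits in the data field
for name, bits in (("settling", settle), ("header", header),
                   ("repetition", repetition), ("payload", PAYLOAD_BITS)):
    print(f"{name}: {bits / 625:.0%}")

The loop prints 41%, 20%, 26%, and 13%, matching the breakdown quoted above.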
4.5.7 Bluetooth 5
In June 2016, the Bluetooth Special Interest Group introduced Bluetooth 5. In January 2019, it came out with Bluetooth 5.1. These were relatively minor upgrades to the Bluetooth 4 standard. Nevertheless, there are some differences between Bluetooth 4 and both Bluetooth 5 standards. Here is a list of the key ones in Bluetooth 5.0:
1. Support for Internet of Things devices.
2. The speed has been increased from 1 Mbps to 2 Mbps.
3. Message size has gone up from 31 bytes to 255 bytes.
4. Range indoors has gone up from 10 m to 40 m.
5. Power requirements have been reduced slightly.
6. The range of the beacons has gone up slightly.
7. Security is slightly better.
In all, these are not huge changes, but given the need for backward compatibility, huge changes were not to be expected. The Bluetooth 5.1 standard had a few minor updates in the areas of device tracking, caching, and a few other small items.
4.6 DOCSIS
The cable television network was originally designed for bringing television programs into homes. It is now also widely used as an alternative to the telephone system for bringing Internet into homes. Below we describe the ''MAC layer'' in the DOCSIS standard, which most cable providers implement.
4.6.1 Overview
The DOCSIS specification also has a MAC sublayer, in some sense, although this layer is somewhat less distinct from the link layer than in the other protocols we have studied in previous chapters. Nonetheless, the protocol has various aspects that fit the standard goals of the MAC sublayer, including channel allocation (which occurs through a request-grant process), configuration of quality of service, and a unique forwarding model. This section addresses all three of these issues. More recently, full-duplex DOCSIS 3.1 (now called DOCSIS 4.0) has introduced new technologies for scheduling and interference cancellation.
DOCSIS has a standard MAC frame format with a set of fields, including the length of the MAC frame, a checksum, and an extended header field that supports a variety of functions, such as link-layer security. Some headers support specific functions, including downstream timing, upstream power adjustment, bandwidth requests, and concatenation of frames. One specific type of frame is called a request frame, which is how the cable modem requests bandwidth, as described later in this section.
4.6.2 Ranging
A cable modem transmits what is called a ranging request, which allows the CMTS (headend) to determine the network delay to the cable modem, as well as to perform any necessary power adjustments. Ranging is effectively the periodic tuning of the various transmission parameters, specifically timing, frequency, and power. The CMTS polls the cable modem, which triggers the modem to submit a ranging request. Based on this message, the CMTS provides the modem a response to help the cable modem adjust signal transmission timing and power. By default, ranging occurs about once every 30 seconds, but it can be configured to occur more frequently; typical ranging intervals can be about 10 to 20 seconds.
4.6.3 Channel Bandwidth Allocation
A DOCSIS CMTS allocates bandwidth to each cable modem through a request-grant process. Each upstream or downstream traffic flow is typically assigned a service flow, and each service flow is allocated bandwidth by the CMTS.
Service Flows
Channel allocation in DOCSIS typically involves allocation of channels between one CMTS and one or more cable modems, which are located in the subscribers' homes. The CMTS must serve all of the upstream and downstream channels, and it discards any frame with a source MAC address that is not one of the assigned cable modems in the group. Central to the DOCSIS MAC layer is the notion of a service flow, which provides a way to manage both upstream and downstream quality of service. Each cable modem has an associated service flow ID, which is negotiated during the registration of the cable modem; each cable modem can have multiple associated service flows. Different service flows can have different limitations that are associated with different types of traffic. For example, each service flow might have a maximum packet size; or, a service flow could be dedicated to a certain type of application, such as a constant bit rate application. All cable modems must support at least one upstream and one downstream service flow, called the primary service flow.
The Request-Grant Process and Low-Latency DOCSIS
When a cable modem has data to send, it sends a short request that tells the CMTS how much data it has to send and waits for a subsequent bandwidth allocation message, which describes the upstream transmission opportunities that the sender may use to transmit its data.
Upstream transmission is divided into discrete intervals by an upstream bandwidth allocation mechanism called a minislot. A minislot is simply a time unit of granularity for upstream transmission, typically in 6.25-µsec increments. Depending on the version of DOCSIS, a minislot may need to be a power-of-two multiple of this increment; in more modern versions of DOCSIS, this restriction does not apply. By adjusting the minislots that are granted to a particular service flow, the CMTS can effectively implement quality of service and prioritization for different traffic flows.
Generally speaking, quality of service has allowed the CMTS to allocate more bandwidth to different cable modems (thus allowing a subscriber who is provisioned for a higher tier of service to achieve a higher service level). More recently, however, revisions to DOCSIS have also allowed differentiated service for latency-sensitive applications. Specifically, a new revision to the DOCSIS protocol allows for low latency, through a new specification called LLD (Low-Latency DOCSIS). LLD recognizes that for many interactive applications, such as gaming and video conferencing, low latency is as important as high throughput. In some cases, in existing DOCSIS networks, the latency for some flows can be quite high, due to both the time to acquire the shared media and the time for queueing.
LLD addresses these issues by shortening the round-trip delay associated with the request-grant process, and by using two queues: one queue for latency-sensitive application traffic and a second queue for traffic that is not latency-sensitive. The shorter request-grant delay reduces the amount of time that the CMTS uses to perform scheduling calculations, to 1 millisecond from a previous time interval of 2–4 milliseconds. LLD also uses mechanisms to proactively schedule grants to a service flow, eliminating the delay associated with the request-grant process entirely. LLD allows applications to indicate packets that should not be queued behind other traffic, through the marking of a differentiated services field in the DOCSIS frame. For more information on LLD, see White (2019).
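The dual-queue idea can be sketched in a few lines. The 'll_mark' field below stands in for the differentiated-services marking LLD inspects; all names here are illustrative, not from the specification.

from collections import deque

low_latency, classic = deque(), deque()

def enqueue(frame):
    # Marked frames join the low-latency queue; everything else waits.
    (low_latency if frame.get("ll_mark") else classic).append(frame)

def next_frame():
    # The scheduler drains the low-latency queue first.
    if low_latency:
        return low_latency.popleft()
    return classic.popleft() if classic else None

enqueue({"app": "backup"})
enqueue({"app": "game", "ll_mark": True})
print(next_frame()["app"])   # "game": the latency-sensitive frame wins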
4.7 DATA LINK LAYER SWITCHING
Many organizations have multiple LANs and wish to connect them. Would it not be convenient if we could just join the LANs together to make a larger LAN? In fact, we can do this when the connections are made with devices called bridges. The Ethernet switches we described in Sec. 4.3.4 are a modern name for bridges; they provide functionality that goes beyond classic Ethernet and Ethernet hubs to make it easy to join multiple LANs into a larger and faster network. We shall use the terms ''bridge'' and ''switch'' interchangeably.
Bridges operate in the data link layer, so they examine the data link layer addresses to forward frames. Since they are not supposed to examine the payload field of the frames they forward, they can handle IP packets as well as other kinds of packets, such as AppleTalk packets. In contrast, routers examine the addresses in packets and route based on them, so they only work with the protocols that they were designed to handle.
In this section, we will look at how bridges work and are used to join multiple physical LANs into a single logical LAN. We will also look at how to do the reverse and treat one physical LAN as multiple logical LANs, called virtual LANs. Both technologies provide useful flexibility for managing networks. For a comprehensive treatment of bridges, switches, and several related topics, see Perlman (2000) and Yu (2011).
4.7.1 Uses of Bridges
Before getting into the technology of bridges, let us take a look at some common situations in which bridges are used. We will mention three reasons why a single organization may end up with multiple LANs.
First, many university and corporate departments have their own LANs to connect their own personal computers, servers, and devices such as printers. Since the goals of the various departments differ, different departments may set up different LANs, without regard to what other departments are doing. Sooner or later, though, there is a need for interaction, so bridges are needed. In this example, multiple LANs come into existence due to the autonomy of their owners.
Second, the organization may be geographically spread over several buildings separated by considerable distances. It may be cheaper to have separate LANs in each building and connect them with bridges and a few long-distance fiber optic links than to run all the cables to a single central switch. Even if laying the cables is easy to do, there are limits on their lengths (e.g., 200 m for twisted-pair gigabit Ethernet). The network would not work for longer cables due to the excessive signal attenuation or round-trip delay. The only solution is to partition the LAN and install bridges to join the pieces to increase the total physical distance that can be covered.
Third, it may be necessary to split what is logically a single LAN into separate LANs (connected by bridges) to accommodate the load. At many large universities, for example, thousands of workstations are available for student and faculty computing. Companies may also have thousands of employees. The scale of this system precludes putting all the workstations on a single LAN; there are more computers than ports on any Ethernet hub and more stations than allowed on a single classic Ethernet.
Even if it were possible to wire all the workstations together, putting more stations on an Ethernet hub or classic Ethernet would not add capacity. All of the stations share the same, fixed amount of bandwidth. The more stations there are, the less average bandwidth per station.
However, two separate LANs have twice the capacity of a single LAN. Bridges let the LANs be joined together while keeping this capacity. The key is not to send traffic onto ports where it is not needed, so that each LAN can run at full speed. This behavior also increases reliability, since on a single LAN a defective node that keeps outputting a continuous stream of garbage can clog up the entire LAN. By deciding what to forward and what not to forward, bridges act like fire doors in a building, preventing a single node that has gone berserk from bringing down the entire system.
To make these benefits easily available, ideally bridges should be completely transparent. It should be possible to go out and buy bridges, plug the LAN cables into the bridges, and have everything work perfectly, instantly. There should be no hardware changes required, no software changes required, no setting of address switches, no downloading of routing tables or parameters, nothing at all. Just plug in the cables and walk away. Furthermore, the operation of the existing LANs should not be affected by the bridges at all. As far as the stations are concerned, there should be no observable difference whether or not they are part of a bridged LAN. It should be as easy to move stations around the bridged LAN as it is to move them around a single LAN.
Surprisingly enough, it is actually possible to create bridges that are transparent. Two algorithms are used: a backward learning algorithm to stop traffic being sent where it is not needed; and a spanning tree algorithm to break loops that may be formed when switches are cabled together willy-nilly. Let us now take a look at these algorithms in turn to learn how this magic is accomplished.
4.7.2 Learning Bridges
The topology of two LANs bridged together is shown in Fig. 4-33 for two cases. On the left-hand side, two multidrop LANs, such as classic Ethernets, are joined by a special station—the bridge—that sits on both LANs. On the right-hand side, LANs with point-to-point cables, including one hub, are joined together. The bridges are the devices to which the stations and hub are attached. If the LAN technology is Ethernet, the bridges are better known as Ethernet switches.
Figure 4-33. (a) Bridge connecting two multidrop LANs. (b) Bridges (and a hub) connecting seven point-to-point stations.
Bridges were developed when classic Ethernets were in use, so they are often shown in topologies with multidrop cables, as in Fig. 4-33(a). However, all the topologies that are encountered today consist of point-to-point cables and switches. The bridges work the same way in both settings. All of the stations attached to the same port on a bridge belong to the same collision domain, and this is different than the collision domain for other ports. If there is more than one station, as in a classic Ethernet, a hub, or a half-duplex link, the CSMA/CD protocol is used to send frames.
There is a difference, however, in how the bridged LANs are built. To bridge multidrop LANs, a bridge is added as a new station on each of the multidrop LANs, as in Fig. 4-33(a). To bridge point-to-point LANs, the hubs are either connected to a bridge or, preferably, replaced with a bridge to increase performance. In Fig. 4-33(b), bridges have replaced all but one hub.
Different kinds of cables can also be attached to one bridge. For example, the cable connecting bridge B1 to bridge B2 in Fig. 4-33(b) might be a long-distance fiber optic link, while the cable connecting the bridges to stations might be a short-haul twisted-pair line. This arrangement is useful for bridging LANs in different buildings.
Now let us consider what happens inside the bridges. Each bridge operates in promiscuous mode, that is, it accepts every frame transmitted by the stations attached to each of its ports. The bridge must decide whether to forward or discard each frame, and, if the former, on which port to output the frame. This decision is made by using the destination address. As an example, consider the topology of Fig. 4-33(a). If station A sends a frame to station B, bridge B1 will receive the frame on port 1. This frame can be immediately discarded without further ado because it is already on the correct port. However, in the topology of Fig. 4-33(b) suppose that A sends a frame to D. Bridge B1 will receive the frame on port 1 and output it on port 4. Bridge B2 will then receive the frame on its port 4 and output it on its port 1.
A simple way to implement this scheme is to have a big (hash) table inside the bridge. The table can list each possible destination and which output port it belongs on. For example, in Fig. 4-33(b), the table at B1 would list D as belonging to port 4, since all B1 has to know is which port to put frames on to reach D. The fact that more forwarding will happen later when the frame hits B2 is of no interest to B1.
When the bridges are first plugged in, all the hash tables are empty. None of the bridges know where any of the destinations are, so they use a flooding algorithm: every incoming frame for an unknown destination is output on all the ports to which the bridge is connected except the one it arrived on. As time goes on, the bridges learn where destinations are. Once a destination is known, frames destined for it are put only on the proper port; they are not flooded.
The algorithm used by the bridges is backward learning. As mentioned above, the bridges operate in promiscuous mode, so they see every frame sent on any of their ports. By looking at the source addresses, they can tell which machines are accessible on which ports. For example, if bridge B1 in Fig. 4-33(b) sees a frame on port 3 coming from C, it knows that C must be reachable via port 3, so it makes an entry in its hash table. Any subsequent frame addressed to C coming in to B1 on any other port will be forwarded to port 3.
The topology can change as machines and bridges are powered up and down and moved around. To handle dynamic topologies, whenever a hash table entry is made, the arrival time of the frame is noted in the entry. Whenever a frame whose source is already in the table arrives, its entry is updated with the current time. Thus, the time associated with every entry tells the last time a frame from that machine was seen.
Periodically, a process in the bridge scans the hash table and purges all entries more than a few minutes old. In this way, if a computer is unplugged from its LAN, moved around the building, and plugged in again somewhere else, within a few minutes it will be back in normal operation, without any manual intervention. This algorithm also means that if a machine is quiet for a few minutes, any traffic sent to it will have to be flooded until it next sends a frame itself.
The routing procedure for an incoming frame depends on the port it arrives on (the source port) and the address to which it is destined (the destination address). The procedure is as follows; a short code sketch of the whole algorithm appears after the list.
1. If the port for the destination address is the same as the source port, discard the frame.
2. If the port for the destination address and the source port are different, forward the frame on to the destination port.
3. If the destination port is unknown, use flooding and send the frame on all ports except the source port.
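Here is a minimal Python sketch of a backward-learning bridge that combines these three rules with the aging of table entries described above. The class and names are illustrative; real bridges implement this logic in hardware.

import time

AGE_LIMIT = 300.0   # seconds; entries older than "a few minutes" are purged

class LearningBridge:
    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}                          # MAC address -> (port, last seen)

    def handle(self, src, dst, in_port):
        now = time.time()
        self.table[src] = (in_port, now)         # backward learning
        entry = self.table.get(dst)
        if entry and now - entry[1] > AGE_LIMIT:
            del self.table[dst]                  # stale entry: forget it
            entry = None
        if entry is None:
            return self.ports - {in_port}        # rule 3: flood
        out_port, _ = entry
        if out_port == in_port:
            return set()                         # rule 1: discard
        return {out_port}                        # rule 2: forward

b = LearningBridge(ports={1, 2, 3, 4})
print(b.handle("A", "D", in_port=1))   # D unknown: flood to ports {2, 3, 4}
print(b.handle("D", "A", in_port=4))   # A was learned: forward to {1}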
You might wonder whether the first case can occur with point-to-point links. The answer is that it can occur if hubs are used to connect a group of computers to a bridge. An example is shown in Fig. 4-33(b) where stations E and F are connected to hub H1, which is in turn connected to bridge B2. If E sends a frame to F, the hub will relay it to B2 as well as to F. That is what hubs do; they wire all ports together so that a frame input on one port is simply output on all other ports. The frame will arrive at B2 on port 2, which is already the right output port to reach the destination. Bridge B2 need only discard the frame.
As each frame arrives, this algorithm must be applied, so it is usually implemented with special-purpose VLSI chips. The chips do the lookup and update the table entry, all in a few microseconds. Because bridges only look at the MAC addresses to decide how to forward frames, it is possible to start forwarding as soon as the destination header field has come in, before the rest of the frame has arrived (provided the output line is available, of course). This design reduces the latency of passing through the bridge, as well as the number of frames that the bridge must be able to buffer. It is referred to as cut-through switching or wormhole routing and is usually handled in hardware.
We can look at the operation of a bridge in terms of protocol stacks to understand what it means to be a link layer device. Consider a frame sent from station A to station D in the configuration of Fig. 4-33(a), in which the LANs are Ethernet. The frame will pass through one bridge. The protocol stack view of processing is shown in Fig. 4-34.
Figure 4-34. Protocol processing at a bridge.
The packet comes from a higher layer and descends into the Ethernet MAC layer. It acquires an Ethernet header (and also a trailer, not shown in the figure). This unit is passed to the physical layer, goes out over the cable, and is picked up by the bridge.
In the bridge, the frame is passed up from the physical layer to the Ethernet MAC layer. This layer has extended processing compared to the Ethernet MAC layer at a station. It passes the frame to a relay, still within the MAC layer. The bridge relay function uses only the Ethernet MAC header to determine how to handle the frame. In this case, it passes the frame to the Ethernet MAC layer of the port used to reach station D, and the frame continues on its way.
In the general case, relays at a given layer can rewrite the headers for that layer. Virtual LANs will provide an example shortly. In no case should the bridge look inside the frame and learn that it is carrying an IP packet; that is irrelevant to the bridge processing and would violate protocol layering. Also note that a bridge with k ports will have k instances of MAC and physical layers. The value of k is 2 for our simple example.
4.7.3 Spanning-Tree Bridges
To increase reliability, redundant links can be used between bridges. In the example of Fig. 4-35, there are two links in parallel between a pair of bridges. This design ensures that if one link is cut, the network will not be partitioned into two sets of computers that cannot talk to each other.
Figure 4-35. Bridges with two parallel links.
However, this redundancy introduces some additional problems because it creates loops in the topology. An example of these problems can be seen by looking at how a frame sent by A to a previously unobserved destination is handled in Fig. 4-35. Each bridge follows the normal rule for handling unknown destinations, which is to flood the frame. Call the frame from A that reaches bridge B1 frame F0. The bridge sends copies of this frame out all of its other ports. We will only consider the bridge ports that connect B1 to B2 (though the frame will be sent out the other ports, too). Since there are two links from B1 to B2, two copies of the frame will reach B2. They are shown in Fig. 4-35 as F1 and F2. Shortly thereafter, bridge B2 receives these frames. However, it does not (and cannot) know that they are copies of the same frame, rather than two different frames sent one after the other. So bridge B2 takes F1 and sends copies of it out all the other ports, and it also takes F2 and sends copies of it out all the other ports. This produces frames F3 and F4 that are sent along the two links back to B1. Bridge B1 then sees two new frames with unknown destinations and copies them again. This cycle goes on forever.
The solution to this difficulty is for the bridges to communicate with each other and overlay the actual topology with a spanning tree that reaches every bridge. In effect, some potential connections between bridges are ignored in the interest of constructing a fictitious loop-free topology that is a subset of the actual topology.
For example, in Fig. 4-36 we see five bridges that are interconnected and also have stations connected to them. Each station connects to only one bridge. There are some redundant connections between the bridges so that frames will be forwarded in loops if all of the links are used. This topology can be thought of as a graph in which the bridges are the nodes and the point-to-point links are the edges. The graph can be reduced to a spanning tree, which has no cycles by definition, by dropping the links shown as dashed lines in Fig. 4-36. Using this spanning tree, there is exactly one path from every station to every other station. Once the bridges have agreed on the spanning tree, all forwarding between stations follows the spanning tree. Since there is a unique path from each source to each destination, loops are impossible.
Figure 4-36. A spanning tree connecting five bridges. The dashed lines are links that are not part of the spanning tree.
To build the spanning tree, the bridges run a distributed algorithm. Each bridge periodically broadcasts a configuration message out all of its ports to its neighbors and processes the messages it receives from other bridges, as described next. These messages are not forwarded, since their purpose is to build the tree, which can then be used for forwarding.
The bridges must first choose one bridge to be the root of the spanning tree. To make this choice, they each include an identifier based on their MAC address in the configuration message, as well as the identifier of the bridge they believe to be the root. MAC addresses are installed by the manufacturer and guaranteed to be unique worldwide, which makes these identifiers convenient and unique. The bridges choose the bridge with the lowest identifier to be the root. After enough messages have been exchanged to spread the news, all bridges will agree on which bridge is the root. In Fig. 4-36, bridge B1 has the lowest identifier and becomes the root.
Next, a tree of shortest paths from the root to every bridge is constructed. In Fig. 4-36, bridges B2 and B3 can each be reached from bridge B1 directly, in one hop that is a shortest path. Bridge B4 can be reached in two hops, via either B2 or B3. To break this tie, the path via the bridge with the lowest identifier is chosen, so B4 is reached via B2. Bridge B5 can be reached in two hops via B3.
To find these shortest paths, bridges include the distance from the root in their configuration messages. Each bridge remembers the shortest path it finds to the root. The bridges then turn off ports that are not part of the shortest path.
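The real protocol is distributed and works by exchanging configuration messages, but the tree it converges on can be sketched centrally in a few lines of Python. The code below assumes unit-cost links and breaks ties by letting lower bridge IDs claim children first, matching the rule described above; the topology is a made-up one consistent with the description of Fig. 4-36.

def spanning_tree(links):
    # links: bridge ID -> set of neighboring bridge IDs
    root = min(links)                      # lowest ID wins the election
    parent, dist = {root: None}, {root: 0}
    frontier = [root]
    while frontier:
        nxt = []
        for b in sorted(frontier):         # lower IDs claim children first
            for nbr in sorted(links[b]):
                if nbr not in dist:
                    dist[nbr], parent[nbr] = dist[b] + 1, b
                    nxt.append(nbr)
        frontier = nxt
    return parent                          # child -> parent tree edges

links = {1: {2, 3}, 2: {1, 4}, 3: {1, 4, 5}, 4: {2, 3, 5}, 5: {3, 4}}
print(spanning_tree(links))   # {1: None, 2: 1, 3: 1, 4: 2, 5: 3}

As in the text, B4 is reached via B2 (the lower-ID choice) and B5 via B3.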
Although the tree spans all the bridges, not all the links (or even bridges) are necessarily present in the tree. This happens because turning off the ports prunes some links from the network to prevent loops. Even after the spanning tree has been established, the algorithm continues to run during normal operation to automatically detect topology changes and update the tree.
The algorithm for automatically constructing the spanning tree was invented by Radia Perlman. Her job was to solve the problem of joining LANs without loops. She was given a week to do it, but she came up with the idea for the spanning tree algorithm in a day. Fortunately, this left her enough time to write it as a poem (Perlman, 1985):
I think that I shall never see
A graph more lovely than a tree.
A tree whose crucial property
Is loop-free connectivity.
A tree which must be sure to span.
So packets can reach every LAN.
First the Root must be selected
By ID it is elected.
Least-cost paths from Root are traced
In the tree these paths are placed.
A mesh is made by folks like me
Then bridges find a spanning tree.
The spanning tree algorithm was then standardized as IEEE 802.1D and used for many years. In 2001, it was revised to more rapidly find a new spanning tree after a topology change. For a detailed treatment of bridges, see Perlman (2000).
4.7.4 Repeaters, Hubs, Bridges, Switches, Routers, and Gateways
So far in this book, we have looked at a variety of ways to get frames and packets from one computer to another. We have mentioned repeaters, hubs, bridges, switches, routers, and gateways. All of these devices are in common use, but they all differ in subtle and not-so-subtle ways. Since there are so many of them, it is probably worth taking a look at them together to see what the similarities and differences are.
The key to understanding these devices is to realize that they operate in different layers, as illustrated in Fig. 4-37(a). The layer matters because different devices use different pieces of information to decide how to switch. In a typical scenario, the user generates some data to be sent to a remote machine. Those data are passed to the transport layer, which then adds a header (for example, a TCP header) and passes the resulting unit down to the network layer. The network layer adds its own header to form a network layer packet (e.g., an IP packet). In Fig. 4-37(b), we see the IP packet shaded in gray. Then, the packet goes to the data link layer, which adds its own header and checksum (CRC) and gives the resulting frame to the physical layer for transmission, for example, over a LAN.
Now let us look at the switching devices and see how they relate to the packets and frames. At the bottom, in the physical layer, we find the repeaters. These are analog devices that work with signals on the cables to which they are connected.
Figure 4-37. (a) Which device is in which layer: repeaters and hubs in the physical layer, bridges and switches in the data link layer, routers in the network layer, transport gateways in the transport layer, and application gateways in the application layer. (b) Frames, packets, and headers: the frame built by the data link layer carries a frame header, the packet supplied by the network layer (packet header, TCP header, and user data), and a CRC.
A signal appearing on one cable is cleaned up, amplified, and put out on another cable. Repeaters do not understand frames, packets, or headers. They understand the symbols that encode bits as volts. Classic Ethernet, for example, was designed to allow four repeaters that would boost the signal to extend the maximum cable length from 500 meters to 2500 meters.
Next we come to the hubs. A hub has a number of input lines that it joins electrically. Frames arriving on any of the lines are sent out on all the others. If two frames arrive at the same time, they will collide, just as on a coaxial cable. All the lines coming into a hub must operate at the same speed. Hubs differ from repeaters in that they do not (usually) amplify the incoming signals and are designed for multiple input lines, but the differences are slight. Like repeaters, hubs are physical layer devices that do not examine the link layer addresses or use them in any way.
Now let us move up to the data link layer, where we find bridges and switches. We just studied bridges at some length. A bridge connects two or more LANs. Like a hub, a modern bridge has multiple ports, usually enough for 4 to 48 input lines of a certain type. Unlike in a hub, each port is isolated to be its own collision domain; if the port has a full-duplex point-to-point line, the CSMA/CD algorithm is not needed. When a frame arrives, the bridge extracts the destination address from the frame header and looks it up in a table to see where to send the frame. For Ethernet, this address is the 48-bit destination address shown in Fig. 4-14. The bridge only outputs the frame on the port where it is needed and can forward multiple frames at the same time.
Bridges offer much better performance than hubs, and the isolation between bridge ports also means that the input lines may run at different speeds, possibly even with different network types. A common example is a bridge with ports that connect to 10-, 100-, and 1000-Mbps Ethernet. Buffering within the bridge is needed to accept a frame on one port and transmit the frame out on a different port. If frames come in faster than they can be retransmitted, the bridge may run out of buffer space and have to start discarding frames. For example, if a gigabit Ethernet is pouring bits into a 10-Mbps Ethernet at top speed, the bridge will have to buffer them, hoping not to run out of memory. This problem still exists even if all the ports run at the same speed because more than one port may be sending frames to a given destination port.
Bridges were originally intended to be able to join different kinds of LANs, for example, an Ethernet and a Token Ring LAN. However, this never worked well because of differences between the LANs. Different frame formats require copying and reformatting, which takes CPU time, requires a new checksum calculation, and introduces the possibility of undetected errors due to bad bits in the bridge's memory. Different maximum frame lengths are also a serious problem with no good solution. Basically, frames that are too large to be forwarded must be discarded. So much for transparency.
Two other areas where LANs can differ are security and quality of service. Some LANs have link-layer encryption, for example 802.11, and some do not, for example Ethernet. Some LANs have quality of service features such as priorities, for example 802.11, and some do not, for example Ethernet. Consequently, when a frame must travel between these LANs, the security or quality of service expected by the sender may not be provided. For all of these reasons, modern bridges usually work for one network type, and routers, which we will come to soon, are used instead to join networks of different types.
Switches are modern bridges by another name. The differences are more to do with marketing than technical issues, but there are a few points worth knowing. Bridges were developed when classic Ethernet was in use, so they tend to join relatively few LANs and thus have relatively few ports. The term ''switch'' is more popular nowadays. Also, modern installations all use point-to-point links, such as twisted-pair cables, so individual computers plug directly into a switch and thus the switch will tend to have many ports. Finally, ''switch'' is also used as a general term. With a bridge, the functionality is clear. On the other hand, a switch may refer to an Ethernet switch or a completely different kind of device that makes forwarding decisions, such as a telephone switch.
So far, we have seen repeaters and hubs, which are actually quite similar, as well as bridges and switches, which are even more similar to each other. Now we move up to routers, which are different from all of the above. When a packet comes into a router, the frame header and trailer are stripped off and the packet located in the frame's payload field (shaded in Fig. 4-37) is passed to the routing software. This software uses the packet header to choose an output line. For an IP packet, the packet header will contain a 32-bit (IPv4) or 128-bit (IPv6) address, but not a 48-bit IEEE 802 address. The routing software does not see the frame addresses and does not even know whether the packet came in on a LAN or a point-to-point line. We will study routers and routing in Chap. 5.
Up another layer, we find transport gateways. These connect two computers that use different connection-oriented transport protocols. For example, suppose a computer using the connection-oriented TCP/IP protocol needs to talk to a computer using a different connection-oriented transport protocol called SCTP. The transport gateway can copy the packets from one connection to the other, reformatting them as need be.
Finally, application gateways understand the format and contents of the data and can translate messages from one format to another. An email gateway could translate Internet messages into SMS messages for mobile phones, for example. Like ‘‘switch,’’ ‘‘gateway’’ is somewhat of a general term. It refers to a forwarding process that runs at a high layer.
4.7.5 Virtual LANs
In the early days of local area networking, thick yellow cables snaked through the cable ducts of many office buildings. Every computer they passed was plugged in. No thought was given to which computer belonged on which LAN. All the people in adjacent offices were put on the same LAN, whether they belonged together or not. Geography trumped corporate organization charts.
With the advent of twisted pair and hubs in the 1990s, all that changed. Buildings were rewired (at considerable expense) to rip out all the yellow garden hoses and install twisted pairs from every office to central wiring closets at the end of each corridor or in a central machine room, as illustrated in Fig. 4-38. If the Vice President in Charge of Wiring was a visionary, Category 5 twisted pairs were installed; if he was a bean counter, the existing (Category 3) telephone wiring was used (only to be replaced a few years later, when fast Ethernet emerged).
Today, the cables have changed and hubs have become switches, but the wiring pattern is still the same. This pattern makes it possible to configure LANs logically rather than physically. For example, if a company wants k LANs, it could buy k switches. By carefully choosing which connectors to plug into which switches, the occupants of a LAN can be chosen in a way that makes organizational sense, without too much regard to geography.
Does it matter who is on which LAN? After all, in nearly all organizations, all the LANs are interconnected. In short, yes, it often matters. Network administrators like to group users on LANs to reflect the organizational structure rather than the physical layout of the building, for a variety of reasons. One issue is security. One LAN might host Web servers and other computers intended for public use. Another LAN might host computers containing the records of the Human Resources department that are not to be passed outside of the department. In such a situation, putting all the computers on a single LAN and not letting any of the servers be accessed from off the LAN makes sense. Management tends to frown when hearing that such an arrangement is impossible.
Figure 4-38. A building with centralized wiring using hubs and a switch.

A second issue is load. Some LANs are more heavily used than others and it may be desirable to separate them. For example, if the folks in research are running all kinds of nifty experiments that sometimes get out of hand and completely
saturate their LAN, the folks in management may not be enthusiastic about donating some of the capacity they were planning to use for videoconferencing to help out. Then again, this might impress on management the need to install a faster network.
A third issue is broadcast traffic. Bridges broadcast traffic when the location of the destination is unknown, and upper-layer protocols use broadcasting as well. For example, when a user wants to send a packet to an IP address x, how does it know which MAC address to put in the frame? We will study this question in Chap. 5, but briefly summarized, the answer is that it broadcasts a frame containing the question ''who owns IP address x?'' Then it waits for an answer. As the number of computers in a LAN grows, so does the number of broadcasts. Each broadcast consumes more of the LAN capacity than a regular frame because it is delivered to every computer on the LAN. By keeping LANs no larger than they need to be, the impact of broadcast traffic is reduced.
Related to broadcasts is the problem that once in a while a network interface will break down or be misconfigured and begin generating an endless stream of broadcast frames. If the network is really unlucky, some of these frames will elicit responses that lead to ever more traffic. The result of this broadcast storm is that (1) the entire LAN capacity is occupied by these frames, and (2) all the machines on all the interconnected LANs are crippled just processing and discarding all the frames being broadcast.
At first it might appear that broadcast storms could be limited in scope and reach by separating the LANs with bridges or switches, but if the goal is to achieve transparency (i.e., a machine can be moved to a different LAN across the bridge without anyone noticing it), then bridges should forward broadcast frames.
However, when the number of senders is large and varying or the traffic is bursty, FDM presents some problems. If the spectrum is cut up into N regions and fewer than N users are currently interested in communicating, a large piece of valuable spectrum will be wasted. And if more than N users want to communicate, some of them will be denied permission for lack of bandwidth, even if some of the users who have been assigned a frequency band hardly ever transmit or receive anything.
Even assuming that the number of users could somehow be held constant at N, dividing the single available channel into some number of static subchannels is inherently inefficient. The basic problem is that when some users are quiescent, their bandwidth is simply lost. They are not using it, and no one else is allowed to use it either. A static allocation is a poor fit to most computer systems, in which data traffic is extremely bursty, often with peak traffic to mean traffic ratios of 1000:1. Consequently, most of the channels will be idle most of the time.
The poor performance of static FDM can easily be seen with a simple queueing theory calculation. Let us start by finding the mean time delay, T, to send a frame onto a channel of capacity C bps. We assume that the frames arrive randomly with an average arrival rate of λ frames/sec, and that the frames vary in length with an average length of 1/µ bits. With these parameters, the service rate of the channel is µC frames/sec. A standard queueing theory result is

T = 1 / (µC - λ)

(For the curious, this result is for an ''M/M/1'' queue. It requires that the randomness of the times between frame arrivals and the frame lengths follow an exponential distribution, or equivalently be the result of a Poisson process.)
In our example, if C is 100 Mbps, the mean frame length, 1/µ, is 10,000 bits, and the frame arrival rate, λ, is 5000 frames/sec, then T = 200 µsec. Note that if we ignored the queueing delay and just asked how long it takes to send a 10,000-bit frame on a 100-Mbps network, we would get the (incorrect) answer of 100 µsec. That result only holds when there is no contention for the channel.
Now let us divide the single channel into N independent subchannels, each with capacity C/N bps. The mean input rate on each of the subchannels will now be λ/N. Recomputing T, we get

T_N = 1 / (µ(C/N) - λ/N) = N / (µC - λ) = NT
The mean delay for the divided channel is N times worse than if all the frames were somehow magically arranged orderly in a big central queue. This same result says that a bank lobby full of ATM machines is better off having a single queue feeding all the machines than a separate partitioned queue in front of each machine because with separate queues, there may be idle ATMs while there are long lines at other ones.
Precisely the same arguments that apply to FDM also apply to other ways of statically dividing the channel. If we were to use time division multiplexing (TDM) and allocate each user every Nth time slot, a slot that a user did not need would simply lie fallow. The same would hold if we split up the networks physically. Using our previous example again, if we were to replace the 100-Mbps network with 10 networks of 10 Mbps each and statically allocate each user to one of them, the mean delay would jump from 200 µsec to 2 msec.
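These numbers are easy to verify by plugging the example's values into the M/M/1 formula:

C = 100e6           # channel capacity, bps
mu = 1 / 10_000     # frames per bit (mean frame length is 10,000 bits)
lam = 5_000         # arrival rate, frames/sec

T = 1 / (mu * C - lam)
print(T)            # 0.0002 sec = 200 usec

N = 10              # ten static subchannels, each carrying load lam/N
T_N = 1 / (mu * (C / N) - lam / N)
print(T_N)          # 0.002 sec = 2 msec: N times worse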
Since none of the traditional static channel allocation methods work well at all with bursty traffic, we will now explore dynamic methods.
4.1.2 Assumptions for Dynamic Channel Allocation
Before we get to the first of the many channel allocation methods in this chapter, it is worthwhile to carefully formulate the allocation problem. Underlying all the work done in this area are the following five key assumptions:
1. Independent Traffic. The model consists of N independent stations (e.g., computers, telephones), each with a program or user that generates frames for transmission. The expected number of frames generated in an interval of length Δt is λΔt, where λ is a constant (the arrival rate of new frames). Once a frame has been generated, the station is blocked and does nothing until the frame has been successfully transmitted.
2. Single Channel. A single channel is available for all communication. All stations can transmit on it and all can receive from it. The stations are assumed to be equally capable, though protocols may assign them different roles (e.g., priorities).
3. Observable Collisions. If two frames are transmitted simultaneously, they overlap in time and the resulting signal is garbled. This event is called a collision. All stations can detect that a collision has occurred. A collided frame must be transmitted again later. No errors other than those generated by collisions occur.
4. Continuous or Slotted Time. Time may be assumed continuous, in which case frame transmission can begin at any instant. Alternatively, time may be slotted or divided into discrete intervals (called slots). Frame transmissions must then begin at the start of a slot. A slot may contain 0, 1, or more frames, corresponding to an idle slot, a successful transmission, or a collision, respectively.
5. Carrier Sense or No Carrier Sense. With the carrier sense assumption, stations can tell if the channel is in use before trying to use it. No station will attempt to use the channel while it is sensed as busy. If there is no carrier sense, stations cannot sense the channel before trying to use it. They just go ahead and transmit. Only later can they determine whether the transmission was successful.
Some discussion of these assumptions is in order. The first one says that frame arrivals are independent, both across stations and at a particular station, and that frames are generated unpredictably but at a constant rate. Actually, this assumption is not a particularly good model of network traffic, as it has long been well known that packets come in bursts over a range of time scales (Paxson and Floyd, 1995). Recent research confirms that the pattern still holds (Fontugne et al., 2017). Nonetheless, Poisson models, as they are frequently called, are commonly used, in part, because they are mathematically tractable. They help us analyze protocols to understand roughly how performance changes over an operating range and how it compares with other designs.
The single-channel assumption is the heart of the model. No external ways to communicate exist. Stations cannot raise their hands to request that the teacher call on them, so we will have to come up with better solutions.
The remaining three assumptions depend on the engineering of the system, and we will say which assumptions hold when we examine a particular protocol. The collision assumption is basic. Stations need some way to detect collisions if they are to retransmit frames rather than let them be lost. For wired channels, node hardware can be designed to detect collisions when they occur. The stations can then terminate their transmissions prematurely to avoid wasting capacity. This detection is much harder for wireless channels, so collisions are usually inferred after the fact by the lack of an expected acknowledgement frame. It is also possible for some frames involved in a collision to be successfully received, depending on the details of the signals and the receiving hardware. However, this situation is not the common case, so we will assume that all frames involved in a collision are lost. We will also see protocols that are designed to prevent collisions from occurring in the first place.
The reason for the two alternative assumptions about time is that slotted time can be used to improve performance. However, it requires the stations to follow a master clock or synchronize their actions with each other to divide time into discrete intervals. Hence, it is not always available. We will discuss and analyze systems with both kinds of time. For a given system, only one of them holds.
Similarly, a network may have carrier sensing or not. Wired networks will generally have carrier sense. Wireless networks cannot always use it effectively because not every station may be within radio range of every other station. Similarly, carrier sense will not be available in other settings in which a station cannot communicate directly with other stations, for example a cable modem in which stations must communicate via the cable headend. Note that the word ''carrier'' in this sense refers to a signal on the channel and has nothing to do with the common carriers (e.g., telephone companies) that date back to the days of the Pony Express.
To avoid any misunderstanding, it is worth noting that no multiaccess protocol guarantees reliable delivery. Even in the absence of collisions, the receiver may have copied some of the frame incorrectly for various reasons. Other parts of the link layer or higher layers provide reliability.
4.2 MULTIPLE ACCESS PROTOCOLS
Many algorithms for allocating a multiple access channel are known. In the following sections, we will study a small sample of the more interesting ones and give some examples of how they are commonly used in practice.
4.2.1 ALOHA
The story of our first MAC protocol starts out in pristine Hawaii in the early 1970s. In this case, ‘‘pristine’’ can be interpreted as ‘‘not having a working telephone system.’’ This did not make life more pleasant for researcher Norman Abramson and his colleagues at the University of Hawaii who were trying to connect users on remote islands to the main computer in Honolulu. Stringing their own cables under the Pacific Ocean for long distances was not in the cards, so they looked for a different solution.
The one they found used short-range radios, with each user terminal sharing the same upstream frequency to send frames to the central computer. It included a simple and elegant method to solve the channel allocation problem. Their work has been extended by many researchers since then (Schwartz and Abramson, 2009). Although Abramson’s work, called the ALOHA system, used ground-based radio broadcasting, the basic idea is applicable to any system in which uncoordinated users are competing for the use of a single shared channel.
We will discuss two versions of ALOHA here: pure and slotted. They differ with respect to whether time is continuous, as in the pure version, or divided into discrete slots into which all frames must fit, as in the slotted version.
Pure ALOHA
The basic idea of an ALOHA system is simple: let users transmit whenever they have data to be sent. There will be collisions, of course, and the colliding frames will be damaged. Senders need some way to find out if this is the case. In the ALOHA system, after each station has sent its frame to the central computer, this computer rebroadcasts the frame to all of the stations. A sending station can thus listen for the broadcast from the hub to see if its frame has gotten through. In other systems, such as wired LANs, the sender might be able to listen for collisions while transmitting.
If the frame was destroyed, the sender just waits a random amount of time and sends it again. The waiting time must be random or the same frames will collide over and over, in lockstep. Systems in which multiple users share a common channel in a way that can lead to conflicts are known as contention systems.
A sketch of frame generation in an ALOHA system is given in Fig. 4-1. We have made the frames all the same length because the throughput of ALOHA systems is maximized by having a uniform frame size rather than by allowing variable-length frames.
Whenever two frames try to occupy the channel at the same time, there will be a collision (as seen in Fig. 4-1) and both will be garbled. If the first bit of a new frame overlaps with just the last bit of a frame that has almost finished, both frames will be totally destroyed (i.e., have incorrect checksums) and both will have to be retransmitted later. The checksum does not (and should not) distinguish between a total loss and a near miss. Bad is bad.

Figure 4-1. In pure ALOHA, frames are transmitted at completely arbitrary times. (The figure shows users A through E sending frames at arbitrary times, with collisions wherever transmissions overlap.)
An interesting question is: what is the efficiency of an ALOHA channel? In other words, what fraction of all transmitted frames escape collisions under these chaotic circumstances? Let us first consider an infinite collection of users typing at their terminals (stations). A user is always in one of two states: typing or waiting. Initially, all users are in the typing state. When a line is finished, the user stops typing, waiting for a response. The station then transmits a frame containing the line over the shared channel to the central computer and checks the channel to see if it was successful. If so, the user sees the reply and goes back to typing. If not, the user continues to wait while the station retransmits the frame over and over until it has been successfully sent.
Let the ‘‘frame time’’ denote the amount of time needed to transmit the standard, fixed-length frame (i.e., the frame length divided by the bit rate). At this point, we assume that the new frames generated by the stations are well modeled by a Poisson distribution with a mean of N frames per frame time. (The infinite-population assumption is needed to ensure that N does not decrease as users become blocked.) If N > 1, the user community is generating frames at a higher rate than the channel can handle, and nearly every frame will suffer a collision. For reasonable throughput, we would expect 0 < N < 1.
In addition to the new frames, the stations also generate retransmissions of frames that previously suffered collisions. Let us further assume that the old and new frames combined are well modeled by a Poisson distribution, with mean of G frames per frame time. Clearly, G ≥ N. At low load (i.e., N ≈ 0), there will be few collisions, hence few retransmissions, so G ≈ N. At high load, there will be many collisions, so G > N. Under all loads, the throughput, S, is just the offered load, G, times the probability, P0, of a transmission succeeding; that is, S = GP0, where P0 is the probability that a frame does not suffer a collision.
A frame will not suffer a collision if no other frames are sent within one frame time of its start, as shown in Fig. 4-2. Under what conditions will the shaded frame arrive undamaged? Let t be the time required to send one frame. If any other user has generated a frame between time t0 and t0 + t, the end of that frame will collide with the beginning of the shaded one. In fact, the shaded frame’s fate was already sealed even before the first bit was sent, but since in pure ALOHA a station does not listen to the channel before transmitting, it has no way of knowing that another frame was already underway. Similarly, any other frame started between t0 + t and t0 + 2t will bump into the end of the shaded frame.
Figure 4-2. Vulnerable period for the shaded frame. (The shaded frame starts at t0 + t; a frame begun between t0 and t0 + t collides with its start, and one begun between t0 + t and t0 + 2t collides with its end, so the vulnerable period is two frame times wide.)
The probability that k frames are generated during a given frame time, in which G frames are expected, is given by the Poisson distribution

Pr[k] = (G^k e^{-G}) / k!        (4-1)

so the probability of zero frames is just e^{-G}. In an interval two frame times long, the mean number of frames generated is 2G. The probability of no frames being initiated during the entire vulnerable period is thus given by P0 = e^{-2G}. Using S = GP0, we get

S = Ge^{-2G}
The relation between the offered traffic and the throughput is shown in Fig. 4-3. The maximum throughput occurs at G = 0.5, with S = 1/(2e), which is about 0.184. In other words, the best we can hope for is a channel utilization of about 18%. This result is not very encouraging, but with everyone transmitting at will, we could hardly have expected a 100% success rate.
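This result is easy to check numerically. The following Monte Carlo sketch is ours (the function name and parameters are invented for illustration): it draws Poisson frame arrivals at rate G per frame time and counts the frames whose two-frame-time vulnerable period is clear, which should agree with S = Ge^{-2G}.

import math
import random

def pure_aloha_throughput(G, n_frames=200_000, seed=1):
    # Estimate pure ALOHA throughput by simulation. The frame time is
    # 1 unit; start times form a Poisson process with rate G. A frame
    # succeeds only if no other frame starts within one frame time of
    # its own start (the two-frame-time vulnerable period).
    rng = random.Random(seed)
    starts, t = [], 0.0
    for _ in range(n_frames):
        t += rng.expovariate(G)      # exponential gaps give Poisson arrivals
        starts.append(t)
    successes = 0
    for i, s in enumerate(starts):
        prev_ok = i == 0 or s - starts[i - 1] >= 1.0
        next_ok = i == len(starts) - 1 or starts[i + 1] - s >= 1.0
        if prev_ok and next_ok:
            successes += 1
    return successes / (starts[-1] - starts[0])   # frames per frame time

for G in (0.25, 0.5, 1.0):
    print(f"G={G}: simulated S={pure_aloha_throughput(G):.3f}, "
          f"theory S={G * math.exp(-2 * G):.3f}")

At G = 0.5 the simulation lands close to the theoretical maximum of about 0.184.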
Figure 4-3. Throughput versus offered traffic for ALOHA systems. (The curves plot S, the throughput per frame time, against G, the attempts per packet time: slotted ALOHA, S = Ge^{-G}, peaks at 0.368, and pure ALOHA, S = Ge^{-2G}, peaks at 0.184.)
Slotted ALOHA
Soon after ALOHA came onto the scene, Roberts (1972) published a method for doubling the capacity of an ALOHA system. His proposal was to divide time into discrete intervals called slots, each interval corresponding to one frame. This approach requires the users to agree on slot boundaries. One way to achieve synchronization would be to have one special station emit a pip at the start of each interval, like a clock.
In Roberts’ method, which has come to be known as slotted ALOHA (in contrast to Abramson’s pure ALOHA), a station is not permitted to send whenever the user types a line. Instead, it is required to wait for the beginning of the next slot. Thus, the continuous-time ALOHA is turned into a discrete-time one. This halves the vulnerable period. To see this, look at Fig. 4-2 and imagine the collisions that are now possible. The probability of no other traffic during the same slot as our test frame is then e^{-G}, which leads to

S = Ge^{-G}
As you can see from Fig. 4-3, slotted ALOHA peaks at G = 1, with a throughput of S = 1/e, or about 0.368, twice that of pure ALOHA. If the system is operating at G = 1, the probability of an empty slot is 0.368 (from Eq. 4-1). The best we can hope for using slotted ALOHA is 37% of the slots empty, 37% successes, and 26% collisions. Operating at higher values of G reduces the number of empties but increases the number of collisions exponentially. To see how this rapid growth of collisions with G comes about, consider the transmission of a test frame. The probability that it will avoid a collision is e^{-G}, which is the probability that all the other stations are silent in that slot. The probability of a collision is then just 1 - e^{-G}. The probability of a transmission requiring exactly k attempts (i.e., k - 1 collisions followed by one success) is

Pk = e^{-G}(1 - e^{-G})^{k-1}

The expected number of transmissions, E, per line typed at a terminal is then

E = Σ_{k=1}^{∞} k Pk = Σ_{k=1}^{∞} k e^{-G}(1 - e^{-G})^{k-1} = e^G
As a result of the exponential dependence of E upon G, small increases in the channel load can drastically reduce its performance.
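To make this dependence concrete, here is a short calculation of ours using the formulas just derived, S = Ge^{-G} and E = e^G:

import math

# Throughput and expected transmissions per frame for slotted ALOHA.
for G in (0.5, 1.0, 2.0, 3.0):
    S = G * math.exp(-G)             # throughput, S = G e^{-G}
    E = math.exp(G)                  # expected attempts per frame, E = e^{G}
    print(f"G={G}: S={S:.3f}, E={E:.1f}")

At G = 3 each frame already needs about 20 attempts on average, even though the channel is carrying less useful traffic than at G = 1.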
Slotted ALOHA is notable for a reason that may not be initially obvious. It was devised in the 1970s, used in a few early experimental systems, then almost forgotten (except by eccentric textbook authors who liked it). When Internet access over cable was invented, all of a sudden there was a problem of how to allocate a shared channel among multiple competing users. Slotted ALOHA was pulled out of the garbage can, mixed with some new ideas, and suddenly there was a solution. It has often happened that protocols that are perfectly valid fall into disuse for political reasons (e.g., some big company wants everyone to do things its way) or due to ever-changing technology trends. Then, years later some clever person realizes that a long-discarded protocol solves a current problem. For this reason, in this chapter we will study a number of elegant protocols that are not currently in widespread use but might easily be used in future applications, provided that enough network designers are aware of them. Of course, we will also study many protocols that are in current use as well.
4.2.2 Carrier Sense Multiple Access Protocols
With slotted ALOHA, the best channel utilization that can be achieved is 1/e. This low result is hardly surprising, since with stations transmitting at will, without knowing what the other stations are doing, there are bound to be many collisions. In LANs, however, it is often possible for stations to detect what other stations are
doing, and thus adapt their behavior accordingly. These networks can achieve a much better utilization than 1/e. In this section, we will discuss some protocols for improving performance.
Protocols in which stations listen for a carrier (i.e., a transmission) and act accordingly are called carrier sense protocols. A number of them have been proposed, and they were long ago analyzed in detail. For example, see Kleinrock and Tobagi (1975). Below we will look at several versions of carrier sense protocols.
Persistent and Nonpersistent CSMA
The first carrier sense protocol that we will study here is called 1-persistent CSMA (Carrier Sense Multiple Access). That is a bit of a mouthful for the simplest CSMA scheme. When a station has data to send, it first listens to the channel to see if anyone else is transmitting at that moment. If the channel is idle, the station sends its data. Otherwise, if the channel is busy, the station just waits until it becomes idle. Then, the station transmits a frame. If a collision occurs, the station
waits a random amount of time and starts all over again. The protocol is called 1-persistent because the station transmits with a probability of 1 when it finds the channel idle.
You might expect that this scheme avoids collisions except for the rare case of simultaneous sends, but in fact it does not. It’s much worse than that. If two stations become ready in the middle of a third station’s transmission, both will wait politely until the transmission ends, and then both will begin transmitting exactly simultaneously, resulting in a collision. If they were not so impatient, there would be fewer collisions.
More subtly, the propagation delay has a very important effect on collisions. There is a chance that just after a station begins sending, another station will become ready to send and sense the channel. If the first station’s signal has not yet reached the second one, the latter will sense an idle channel and will also begin sending, resulting in a collision. This chance depends on the number of frames that fit on the channel, or the bandwidth-delay product of the channel. If only a tiny fraction of a frame fits on the channel, which is the case in most LANs since the propagation delay is small, the chance of a collision happening is small. The larger the bandwidth-delay product, the more important this effect becomes, and the worse the performance of the protocol.
Even so, this protocol has better performance than pure ALOHA because both stations have the decency to desist from interfering with the third station’s frame, so it gets through undamaged. Exactly the same holds for slotted ALOHA.
A second carrier sense protocol is nonpersistent CSMA. In this protocol, a conscious attempt is made to be less greedy than in the previous one. As before, a station senses the channel when it wants to send a frame, and if no one else is sending, the station begins doing so itself immediately. However, if the channel is already in use, the station does not continually sense it for the purpose of seizing it immediately upon detecting the end of the previous transmission. Instead, it waits a random period of time and then repeats the algorithm. Consequently, this algorithm leads to fewer collisions and better channel utilization but longer delays than 1-persistent CSMA.
The last protocol is p-persistent CSMA. It applies to slotted channels and works as follows. When a station becomes ready to send, it senses the channel. If it is idle, it transmits with a probability p. With a probability q = 1 - p, it defers until the next slot. If that slot is also idle, it either transmits or defers again, with probabilities p and q. This process is repeated until either the frame has been transmitted or another station has begun transmitting. In the latter case, the unlucky station acts as if there had been a collision by waiting a random time and starting again. If the station initially senses that the channel is busy, it waits until the next slot and then applies the above algorithm. IEEE 802.11 uses a refinement of p-persistent CSMA that we will discuss in Sec. 4.4.
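The per-slot decision rule is compact enough to express in code. The sketch below is ours and assumes a hypothetical channel object with is_busy(), wait_slot(), transmit(), and random_backoff() operations; it illustrates the rule rather than any particular implementation.

import random

def p_persistent_send(channel, frame, p, rng=random):
    # Sketch of p-persistent CSMA for one queued frame (hypothetical API).
    while channel.is_busy():
        channel.wait_slot()          # initially busy: wait for the next slot
    while True:
        if rng.random() < p:
            channel.transmit(frame)  # send; a collision is still possible
            return
        channel.wait_slot()          # defer with probability q = 1 - p
        if channel.is_busy():
            # Another station began transmitting: act as if there had
            # been a collision, wait a random time, and start over.
            channel.random_backoff()
            return p_persistent_send(channel, frame, p, rng)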
Figure 4-4 shows the computed throughput versus offered traffic for all three protocols, as well as for pure and slotted ALOHA.
Figure 4-4. Comparison of the channel utilization versus load for various random access protocols. (The plot shows S, the throughput per packet time, against G, the attempts per packet time, from 0 to 9, for pure ALOHA, slotted ALOHA, and 1-persistent, 0.5-persistent, 0.1-persistent, 0.01-persistent, and nonpersistent CSMA; the more conservative protocols peak higher.)
CSMA with Collision Detection
Persistent and nonpersistent CSMA protocols are definitely an improvement over ALOHA because they ensure that no station begins to transmit while the channel is busy. However, if two stations sense the channel to be idle and begin transmitting simultaneously, their signals will still collide. Another improvement is for the stations to quickly detect the collision and abruptly stop transmitting (rather than finishing their transmissions), since the frames are irretrievably garbled anyway. This strategy saves time and bandwidth.
This protocol, known as CSMA/CD (CSMA with Collision Detection), is the basis of the classic Ethernet LAN, so it is worth devoting some time to looking at it in detail. It is important to realize that collision detection is an analog process. The station’s hardware must listen to the channel while it is transmitting. If the signal it reads back is different from the signal it is putting out, it knows that a collision is occurring. The implications are that a received signal must not be tiny compared to the transmitted signal (which is difficult for wireless, as received signals may be 1,000,000 times weaker than transmitted signals) and that the modulation must be chosen to allow collisions to be detected (e.g., a collision of two 0-volt signals may well be impossible to detect).
CSMA/CD, as well as many other LAN protocols, uses the conceptual model of Fig. 4-5. At the point marked t0, a station has finished transmitting its frame. Any other station having a frame to send may now attempt to do so. If two or more stations decide to transmit simultaneously, there will be a collision. If a station detects a collision, it aborts its transmission, waits a random period of time, and then tries again (assuming that no other station has started transmitting in the meantime). Therefore, our simple model for CSMA/CD will consist of alternating
contention and transmission periods, with idle periods occurring when all stations are quiet (e.g., for lack of work).
Figure 4-5. CSMA/CD can be in transmission, contention, or idle state. (Starting at t0, frames alternate with contention periods made up of contention slots; idle periods occur when no station has a frame to send.)
Now let us look at the details of the contention algorithm. Suppose that two stations both begin transmitting at exactly time t0. How long will it take them to realize that they have collided? The answer is vital to determining the length of the contention period and hence what the delay and throughput will be.
The minimum time to detect the collision is just the time it takes the signal to propagate from one station to the other. Based on this information, you might think that a station that has not heard a collision for a time equal to the full cable propagation time after starting its transmission can be sure it has seized the cable. By ‘‘seized,’’ we mean that all other stations know it is transmitting and will not interfere. This conclusion is wrong.
Consider the following worst-case scenario. Let the time for a signal to propagate between the two farthest stations be τ. At t0, one station begins transmitting. At t0 + τ - ε, an instant before the signal arrives at the most distant station, that station also begins transmitting. Of course, it detects the collision almost instantly and stops, but the little noise burst caused by the collision does not get back to the original station until time 2τ - ε. In other words, in the worst case a station cannot be sure that it has seized the channel until it has transmitted for 2τ without hearing a collision.
Starting with this understanding, we can think of CSMA/CD contention as a slotted ALOHA system with a slot width of 2τ. On a 1-km-long coaxial cable, τ ≈ 5 µsec. The difference for CSMA/CD compared to slotted ALOHA is that slots in which only one station transmits (i.e., in which the channel is seized) are followed by the rest of a frame. This difference will greatly improve performance if the frame time is much longer than the propagation time.
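For a feel for the numbers, the slot width is a one-line computation. The sketch below is ours and assumes signals propagate through coax at roughly two thirds the speed of light:

PROPAGATION_SPEED = 2.0e8            # meters/second, assumed for coax

def contention_slot(cable_length_m):
    tau = cable_length_m / PROPAGATION_SPEED   # one-way delay, seconds
    return 2 * tau                             # worst-case detection time

print(contention_slot(1000) * 1e6)   # about 10 microseconds (tau ≈ 5)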
4.2.3 Collision-Free Protocols
Although collisions do not occur with CSMA/CD once a station has unambiguously captured the channel, they can still occur during the contention period. These collisions adversely affect the system performance, in particular when the
bandwidth-delay product is large, such as when the cable is long (i.e., large τ) and the frames are short. Not only do collisions reduce bandwidth, but they make the time to send a frame variable, which is not a good fit for real-time traffic such as voice over IP. CSMA/CD is also not universally applicable.
In this section, we will examine some protocols that resolve the contention for the channel without any collisions at all, not even during the contention period. Most of these protocols are not currently used in major systems, but in a rapidly changing field, having some protocols with excellent properties available for future systems is often a good thing.
In the protocols to be described, we assume that there are exactly N stations, each programmed with a unique address from 0 to N - 1. It does not matter that some stations may be inactive part of the time. We also assume that propagation delay is negligible. The basic question remains: which station gets the channel after a successful transmission? We continue using the model of Fig. 4-5 with its discrete contention slots.
A Bit-Map Protocol
In our first collision-free protocol, the basic bit-map method, each contention period consists of exactly N slots. If station 0 has a frame to send, it transmits a 1 bit during slot 0. No other station is allowed to transmit during this slot. Regardless of what station 0 does, station 1 gets the opportunity to transmit a 1 bit during slot 1, but only if it has a frame queued. In general, station j may announce that it has a frame to send by inserting a 1 bit into slot j. After all N slots have passed by, each station has complete knowledge of which stations wish to transmit. At that point, they begin transmitting frames in numerical order (see Fig. 4-6).
Figure 4-6. The basic bit-map protocol. (Each contention period has eight slots, numbered 0 through 7; in the example, stations 1, 3, and 7 mark their slots and then send their frames, of d time units each, in numerical order, after which another contention period begins.)
Since everyone agrees on who goes next, there will never be any collisions. After the last ready station has transmitted its frame, an event all stations can easily monitor, another N-bit contention period is begun. If a station becomes ready just after its bit slot has passed by, it is out of luck and must remain silent until every station has had a chance and the bit map has come around again.
Protocols like this in which the desire to transmit is broadcast before the actual transmission are called reservation protocols because they reserve channel ownership in advance and prevent collisions. Let us briefly analyze the performance of
this protocol. For convenience, we will measure time in units of the contention bit slot, with data frames consisting of d time units.
Under conditions of low load, the bit map will simply be repeated over and over, for lack of data frames. Consider the situation from the point of view of a low-numbered station, such as 0 or 1. Typically, when it becomes ready to send, the ‘‘current’’ slot will be somewhere in the middle of the bit map. On average, the station will have to wait N/2 slots for the current scan to finish and another full N slots for the following scan to run to completion before it may begin transmitting.
The prospects for high-numbered stations are brighter. Generally, these will only have to wait half a scan (N/2 bit slots) before starting to transmit. High-numbered stations rarely have to wait for the next scan. Since low-numbered stations must wait on average 1.5N slots and high-numbered stations must wait on average 0.5N slots, the mean for all stations is N slots.
The channel efficiency at low load is easy to compute. The overhead per frame is N bits and the amount of data is d bits, for an efficiency of d/(d + N). At high load, when all the stations have something to send all the time, the N-bit contention period is prorated over N frames, yielding an overhead of only 1 bit per frame, or an efficiency of d/(d + 1). The mean delay for a frame is equal to the sum of the time it queues inside its station, plus an additional (N - 1)d + N once it gets to the head of its internal queue. This interval is how long it takes to wait for all other stations to have their turn sending a frame and another bit map.
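Both efficiency formulas are trivial to evaluate; the helper below is our own illustration:

def bitmap_efficiency(d, N, high_load=False):
    # Channel efficiency of the basic bit-map protocol. d is the frame
    # length in bit-slot units and N the number of stations. At low load
    # each frame pays the whole N-bit map; at high load the map is
    # prorated over N frames, costing only 1 bit per frame.
    overhead = 1 if high_load else N
    return d / (d + overhead)

print(bitmap_efficiency(1000, 32))                   # low load: ~0.969
print(bitmap_efficiency(1000, 32, high_load=True))   # high load: ~0.999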
Token Passing
The essence of the bit-map protocol is that it lets every station transmit a frame in turn in a predefined order. Another way to accomplish the same thing is to pass a small message called a token from one station to the next in the same predefined order. The token represents permission to send. If a station has a frame queued for transmission when it receives the token, it can send that frame before it passes the token to the next station. If it has no queued frame, it simply passes the token.
In a token ring protocol, the topology of the network is used to define the order in which stations send. The stations are connected one to the next in a single ring. Passing the token to the next station then simply consists of receiving the token in from one direction and transmitting it out in the other direction, as seen in
Fig. 4-7. Frames are also transmitted in the direction of the token. This way they will circulate around the ring and reach whichever station is the destination. However, to stop the frame circulating indefinitely (like the token), some station needs to remove it from the ring. This station may be either the one that originally sent the frame, after it has gone through a complete cycle, or the station that was the intended recipient of the frame.
Note that we do not need a physical ring to implement token passing. All that is needed is a logical ring, where each station knows its predecessor and successor. The channel connecting the stations might instead be a single long bus (cable).
Figure 4-7. Token ring. (Stations are connected in a ring; the token, and frames, travel around the ring in one direction.)
Each station then uses the bus to send the token to the next station in the predefined sequence. Possession of the token allows a station to use the bus to send one frame. This protocol is called token bus. It is defined in IEEE 802.4, a standard that failed so badly that IEEE has withdrawn it. Standards are not always forever.
The performance of token passing is similar to that of the bit-map protocol, though the contention slots and frames of one cycle are now intermingled. After sending a frame, each station must wait for all N stations (including itself) to send the token to their neighbors and the other N - 1 stations to send a frame, if they have one. A subtle difference is that, since all positions in the cycle are equivalent, there is no bias for low- or high-numbered stations. For token ring, each station is also sending the token only as far as its neighboring station before the protocol takes the next step. Each token does not need to propagate to all stations before the protocol advances to the next step.
Token rings have cropped up as MAC protocols with some consistency. An early token ring protocol (called ‘‘Token Ring’’ and standardized as IEEE 802.5) was popular in the 1980s as an alternative to classic Ethernet. In the 1990s, a much faster token ring called FDDI (Fiber Distributed Data Interface ) was beaten out by switched Ethernet. In the 2000s, a token ring called RPR (Resilient Packet Ring ) was defined as IEEE 802.17 to standardize the mix of metropolitan area rings in use by ISPs. We wonder what the 2020s will have to offer.
Binary Countdown
A problem with the basic bit-map protocol, and by extension token passing, is that the overhead is 1 bit per station, so it does not scale well to networks with hundreds or thousands of stations. We can do better than that by using binary station addresses with a channel that combines transmissions in a certain way. A station wanting to use the channel now broadcasts its address as a binary bit string, starting with the high-order bit. All addresses are assumed to be the same number of bits. The bits in each address position from different stations are BOOLEAN ORed together by the channel when they are sent at the same time. We will call
this protocol binary countdown. It was used in Datakit (Fraser, 1983). It implicitly assumes that the transmission delays are negligible so that all stations see asserted bits essentially instantaneously.
To avoid conflicts, an arbitration rule must be applied: as soon as a station sees that a high-order bit position that is 0 in its address has been overwritten with a 1, it gives up. For example, if stations 0010, 0100, 1001, and 1010 are all trying to get the channel, in the first bit time the stations transmit 0, 0, 1, and 1, respectively. These are ORed together to form a 1. Stations 0010 and 0100 see the 1 and know that a higher-numbered station is competing for the channel, so they give up for the current round. Stations 1001 and 1010 continue.
The next bit is 0, and both stations continue. The next bit is 1, so station 1001 gives up. The winner is station 1010 because it has the highest address. After winning the bidding, it may now transmit a frame, after which another bidding cycle starts. The protocol is illustrated in Fig. 4-8. It has the property that higher-numbered stations have a higher priority than lower-numbered stations, which may be either good or bad, depending on the context.
Bit time        0 1 2 3
Station 0010:   0 - - -
Station 0100:   0 - - -
Station 1001:   1 0 0 -
Station 1010:   1 0 1 0
Result:         1 0 1 0

Stations 0010 and 0100 see the 1 in bit time 0 and give up; station 1001 sees the 1 in bit time 2 and gives up.

Figure 4-8. The binary countdown protocol. A dash indicates silence.
The channel efficiency of this method is d/(d + log2 N). If, however, the frame format has been cleverly chosen so that the sender’s address is the first field in the frame, even these log2 N bits are not wasted, and the efficiency is 100%.
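The arbitration rule is easy to simulate by ORing together the address bits of the surviving stations, high-order bit first. The function below is our own sketch, reproducing the example of Fig. 4-8:

def binary_countdown(addresses, width=4):
    # One arbitration round. A station drops out as soon as it sent a 0
    # but sees a 1 on the channel (the wired-OR of all bits sent).
    survivors = set(addresses)
    for bit in range(width - 1, -1, -1):     # high-order bit first
        channel = max((a >> bit) & 1 for a in survivors)   # wired-OR
        if channel == 1:
            survivors = {a for a in survivors if (a >> bit) & 1}
    (winner,) = survivors
    return winner

print(bin(binary_countdown([0b0010, 0b0100, 0b1001, 0b1010])))   # 0b1010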
Binary countdown is an example of a simple, elegant, and efficient protocol that is waiting to be rediscovered. Hopefully, it will find a new home some day.
4.2.4 Limited-Contention Protocols
We have now considered two basic strategies for channel acquisition in a broadcast network: contention, as in CSMA, and collision-free protocols. Each strategy can be rated as to how well it does with respect to the two important
performance measures, delay at low load and channel efficiency at high load. Under conditions of light load, contention (i.e., pure or slotted ALOHA) is preferable due to its low delay (since collisions are rare). As the load increases, contention becomes increasingly less attractive because the overhead associated with channel arbitration becomes greater. Just the reverse is true for the collision-free protocols. At low load, they have relatively high delay but as the load increases, the channel efficiency improves (since the overheads are fixed).
Obviously, it would be nice if we could combine the best properties of the contention and collision-free protocols, arriving at a new protocol that used contention at low load to provide low delay, but used a collision-free technique at high load to provide good channel efficiency. Such protocols, which we will call limited-contention protocols, do in fact exist, and will conclude our study of carrier sense networks.
Up to now, the only contention protocols we have studied have been symmetric. That is, each station attempts to acquire the channel with some probability, p, with all stations using the same p. Interestingly enough, the overall system performance can sometimes be improved by using a protocol that assigns different probabilities to different stations.
Before looking at the asymmetric protocols, let us quickly review the performance of the symmetric case. Suppose that k stations are contending for channel access. Each has a probability p of transmitting during each slot. The probability that some station successfully acquires the channel during a given slot is the probability that any one station transmits, with probability p, and all other k - 1 stations defer, each with probability 1 - p. This value is kp(1 - p)^{k-1}. To find the optimal value of p, we differentiate with respect to p, set the result to zero, and solve for p. Doing so, we find that the best value of p is 1/k. Substituting p = 1/k, we get

Pr[success with optimal p] = ((k - 1)/k)^{k-1}
This probability is plotted in Fig. 4-9. For small numbers of stations, the chances of success are good, but as soon as the number of stations reaches even five, the probability has dropped close to its asymptotic value of 1/e.
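The curve of Fig. 4-9 can be reproduced directly from this formula; the short calculation below is ours:

def p_success(k):
    # Probability that some station wins a slot when k ready stations
    # each transmit with the optimal probability p = 1/k.
    return 1.0 if k == 1 else ((k - 1) / k) ** (k - 1)

for k in (1, 2, 5, 10, 25):
    print(k, round(p_success(k), 3))
# Prints 1.0, 0.5, 0.41, 0.387, 0.375: already close to 1/e ≈ 0.368.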
From Fig. 4-9, it is fairly obvious that the probability of some station acquiring the channel can be increased only by decreasing the amount of competition. The limited-contention protocols do precisely that. They first divide the stations into (not necessarily disjoint) groups. Only the members of group 0 are permitted to compete for slot 0. If one of them succeeds, it acquires the channel and transmits its frame. If the slot lies fallow or if there is a collision, the members of group 1 contend for slot 1, etc. By making an appropriate division of stations into groups, the amount of contention for each slot can be reduced, thus operating each slot near the left end of Fig. 4-9.
Figure 4-9. Acquisition probability for a symmetric contention channel. (The probability of success is plotted against the number of ready stations, from 0 to 25; the curve falls from 1.0 toward the asymptote 1/e.)
The trick is how to assign stations to slots. Before looking at the general case,
let us consider some special cases. At one extreme, each group has only one member. Such an assignment guarantees that there will never be collisions because at most one station is contending for any given slot. We have seen such protocols before (e.g., binary countdown). The next special case is to assign two stations per group. The probability that both will try to transmit during a slot is p², which for a small p is negligible. As more and more stations are assigned to the same slot, the probability of a collision grows, but the length of the bit-map scan needed to give everyone a chance shrinks. The limiting case is a single group containing all stations (i.e., slotted ALOHA). What we need is a way to assign stations to slots dynamically, with many stations per slot when the load is low and few (or even just one) station per slot when the load is high.
The Adaptive Tree-Walk Protocol
One particularly simple way of performing the necessary assignment is to use the algorithm devised by the U.S. Army for testing soldiers for syphilis during World War II (Dorfman, 1943). In short, the Army took a blood sample from N soldiers. A portion of each sample was poured into a single test tube. This mixed sample was then tested for antibodies. If none were found, all the soldiers in the group were declared healthy. If antibodies were present, two new mixed samples were prepared, one from soldiers 1 through N/2 and one from the rest. The process was repeated recursively until the infected soldiers were determined.
For the computerized version of this algorithm (Capetanakis, 1979), it is convenient to think of the stations as the leaves of a binary tree, as illustrated in Fig. 4-10. In the first contention slot following a successful frame transmission, slot 0, all stations are permitted to try to acquire the channel. If one of them does
so, fine. If there is a collision, then during slot 1 only those stations falling under node 2 in the tree may compete. If one of them acquires the channel, the slot following the frame is reserved for those stations under node 3. If, on the other hand, two or more stations under node 2 want to transmit, there will be a collision during slot 1, in which case it is node 4’s turn during slot 2.
Figure 4-10. The tree for eight stations. (Node 1 is the root, with children 2 and 3; nodes 4 through 7 are at the next level, and the stations A through H are the leaves.)
In essence, if a collision occurs during slot 0, the entire tree is searched, depth first, to locate all ready stations. Each bit slot is associated with some particular node in the tree. If a collision occurs, the search continues recursively with the node’s left and right children. If a bit slot is idle or if only one station transmits in it, the searching of its node can stop because all ready stations have been located. (Were there more than one, there would have been a collision.)
When the load on the system is heavy, it is hardly worth the effort to dedicate slot 0 to node 1 because that makes sense only in the unlikely event that precisely one station has a frame to send. Similarly, one could argue that nodes 2 and 3 should be skipped as well for the same reason. Put in more general terms, at what level in the tree should the search begin? Clearly, the heavier the load, the farther down the tree the search should begin. We will assume that each station has a good estimate of the number of ready stations, q, for example, from monitoring recent traffic.
To proceed, let us number the levels of the tree from the top, with node 1 in Fig. 4-10 at level 0, nodes 2 and 3 at level 1, etc. Notice that each node at level i has a fraction 2^{-i} of the stations below it. If the q ready stations are uniformly distributed, the expected number of them below a specific node at level i is just 2^{-i}q. Intuitively, we would expect the optimal level to begin searching the tree to be the one at which the mean number of contending stations per slot is 1, that is, the level at which 2^{-i}q = 1. Solving this equation, we find that i = log2 q. Numerous improvements to the basic algorithm have been discovered and are discussed in some detail by Bertsekas and Gallager (1992). It is such a clever idea
that researchers are still tweaking it (De Marco and Kowalski, 2017). For example, consider the case of stations G and H being the only ones wanting to transmit. At node 1 a collision will occur, so node 2 will be tried and discovered idle. It is pointless to probe node 3 since it is guaranteed to have a collision (we know that two or more stations under 1 are ready and none of them are under 2, so they must all be under 3). The probe of 3 can be skipped and 6 tried next. When this probe also turns up nothing, 7 can be skipped and node G tried next.
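The depth-first search itself fits in a few lines. The sketch below is our own simplified version: stations are numbered 0 to N - 1, each call stands for one contention slot, and the skipping optimization just described is omitted.

def tree_walk(ready, lo, hi, transmit):
    # Probe the tree node covering stations lo..hi-1. `ready` is the set
    # of stations with queued frames; `transmit` is called once for each
    # station that acquires the channel.
    contenders = [s for s in ready if lo <= s < hi]
    if not contenders:
        return                       # idle slot: stop searching this node
    if len(contenders) == 1:
        transmit(contenders[0])      # exactly one station sent: success
        return
    mid = (lo + hi) // 2             # collision: recurse on both children
    tree_walk(ready, lo, mid, transmit)
    tree_walk(ready, mid, hi, transmit)

# Stations G and H (numbers 6 and 7) of Fig. 4-10 are the only ones ready.
tree_walk({6, 7}, 0, 8, lambda s: print("station", "ABCDEFGH"[s], "sends"))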
4.2.5 Wireless LAN Protocols
A system of laptop computers that communicate by radio can be regarded as a wireless LAN, as we discussed in Sec. 1.4.3. Such a LAN is an example of a broadcast channel. It also has somewhat different properties than a wired LAN, which leads to different MAC protocols. In this section, we will examine some of these protocols. In Sec. 4.4, we will look at 802.11 (WiFi) in detail.
A common configuration for a wireless LAN is an office building with access points (APs) strategically placed around the building. The APs are wired together using copper or fiber and provide connectivity to the stations that talk to them. If the transmission power of the APs and laptops is adjusted to have a range of tens of meters, nearby rooms become like a single cell and the entire building becomes like the cellular telephony systems we studied in Chap. 2, except that each cell only has one channel. This channel is shared by all the stations in the cell, including the AP. It typically provides megabits/sec or even gigabits/sec of bandwidth. IEEE 802.11ac can theoretically run at 7 Gbps, but in practice, it is much slower.
We have already remarked that wireless systems cannot normally detect a collision while it is occurring. The received signal at a station may be tiny, perhaps a million times fainter than the signal that is being transmitted. Finding it is like
looking for a ripple on the ocean. Instead, acknowledgements are used to discover collisions and other errors after the fact.
There is an even more important difference between wireless LANs and wired LANs. A station on a wireless LAN may not be able to transmit frames to or receive frames from all other stations because of the limited radio range of the stations. In wired LANs, when one station sends a frame, all other stations receive it. The absence of this property in wireless LANs causes a variety of complications.
We will make the simplifying assumption that each radio transmitter has some fixed range, represented by a circular coverage region within which another station can sense and receive the station’s transmission. It is important to realize that in practice coverage regions are not nearly so regular because the propagation of radio signals depends on the environment. Walls and other obstacles that attenuate and reflect signals may cause the range to differ markedly in different directions. But a simple circular model will do for our purposes.
A naive approach to using a wireless LAN might be to try CSMA: just listen for other transmissions and only transmit if no one else is doing so. The trouble is,
this protocol is not really a good way to think about wireless because what matters for reception is interference at the receiver, not at the sender. To see the nature of the problem, consider Fig. 4-11, where four wireless stations are illustrated. For
our purposes, it does not matter which are APs and which are laptops. The radio range is such that A and B are within each other’s range and can potentially interfere with one another. C can also potentially interfere with both B and D, but not with A.
Figure 4-11. A wireless LAN. (a) A and C are hidden terminals when transmitting to B. (b) B and C are exposed terminals when transmitting to A and D. (The stations A, B, C, and D are laid out in a line, each with a circular radio range.)
First consider what happens when A and C transmit to B, as depicted in Fig. 4-11(a). If A sends and then C immediately senses the medium, it will not hear A because A is out of its range. Thus C will falsely conclude that it can transmit to B. If C does start transmitting, it will interfere at B, wiping out the frame from A. (We assume here that no CDMA-type scheme is used to provide multiple channels, so collisions garble the signal and destroy both frames.) We want a MAC protocol that will prevent this kind of collision from happening because it wastes bandwidth. The problem of a station not being able to detect a potential competitor for the medium because the competitor is too far away is called the hidden terminal problem.
Now let us look at a different situation: B transmitting to A at the same time that C wants to transmit to D, as shown in Fig. 4-11(b). If C senses the medium, it will hear a transmission and falsely conclude that it may not send to D (shown as a dashed line). In fact, such a transmission would cause bad reception only in the zone between B and C, where neither of the intended receivers is located. We want a MAC protocol that prevents this kind of deferral from happening because it wastes bandwidth. The problem is called the exposed terminal problem .
The difficulty is that, before starting a transmission, a station really wants to know whether there is radio activity around the receiver. CSMA merely tells it whether there is activity near the transmitter by sensing the carrier. With a wire, all signals propagate to all stations, so this distinction does not exist. However, only one transmission can then take place at once anywhere in the system. In a system based on short-range radio waves, multiple transmissions can occur simultaneously if they all have different destinations and these destinations are out of range of one another. We want this concurrency to happen as the cell gets larger and larger, in
the same way that people at a party should not wait for everyone in the room to go silent before they talk; multiple conversations can take place at once in a large room as long as they are not directed to the same location.
An early and quite influential protocol that tackles these problems for wireless LANs is MACA (Multiple Access with Collision Avoidance) (Karn, 1990; and Garcia-Luna-Aceves, 2017). The basic idea behind it is for the sender to stimulate the receiver into outputting a short frame, so stations nearby can detect this transmission and avoid transmitting for the duration of the upcoming (large) data frame. This technique is used instead of carrier sense.
MACA is illustrated in Fig. 4-12. Let us see how A sends a frame to B. A starts by sending an RTS (Request To Send) frame to B, as shown in Fig. 4-12(a). This short frame (30 bytes) contains the length of the data frame that will eventually follow. Then B replies with a CTS (Clear To Send) frame, as shown in Fig. 4-12(b). The CTS frame contains the data length (copied from the RTS frame). Upon receipt of the CTS frame, A begins transmission.
Figure 4-12. The MACA protocol. (a) A sending an RTS to B. (b) B responding with a CTS to A. (C is within range of A’s transmitter only, D is within range of B’s transmitter only, and E is within range of both.)
Now let us see how stations overhearing either of these frames react. Any station hearing the RTS is clearly close to A and must remain silent long enough for the CTS to be transmitted back to A without conflict. Any station hearing the CTS is clearly close to B and must remain silent during the upcoming data transmission, whose length it can tell by examining the CTS frame.
In Fig. 4-12, C is within range of A but not within range of B. Therefore, it hears the RTS from A but not the CTS from B. As long as it does not interfere with the CTS, it is free to transmit while the data frame is being sent. In contrast, D is
within range of B but not A. It does not hear the RTS but does hear the CTS. Hearing the CTS tips it off that it is near a station that is about to receive a frame, so it defers sending anything until that frame is expected to be finished. Station E hears both control messages and, like D, must be silent until the data frame is complete.
Despite these precautions, collisions can still occur. For example, B and C could both send RTS frames to A at the same time. These will collide and be lost. In the event of a collision, an unsuccessful transmitter (i.e., one that does not hear a CTS within the expected time interval) waits a random amount of time and tries again later.
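The overhearing rules can be sketched in a few lines. The class below is our own illustration with a made-up API; a real MACA station works with radio timing rather than method calls.

class MacaStation:
    CTRL = 1                         # assumed time to send an RTS or CTS

    def __init__(self):
        self.silent_until = 0        # time before which we must not send

    def overhear(self, now, frame_type, data_len):
        if frame_type == "RTS":
            # Near the sender: keep quiet long enough for the CTS to
            # come back to the sender without conflict.
            self.silent_until = max(self.silent_until, now + self.CTRL)
        elif frame_type == "CTS":
            # Near the receiver: keep quiet for the whole data frame,
            # whose length is carried in the CTS.
            self.silent_until = max(self.silent_until, now + data_len)

# C hears only A’s RTS; D hears only B’s CTS (cf. Fig. 4-12).
c, d = MacaStation(), MacaStation()
c.overhear(now=0, frame_type="RTS", data_len=100)
d.overhear(now=1, frame_type="CTS", data_len=100)
print(c.silent_until, d.silent_until)    # 1 and 101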
4.3 ETHERNET
We have now finished our discussion of channel allocation protocols in the abstract, so it is time to see how these principles apply to real systems. Many of the designs for personal, local, and metropolitan area networks have been standardized under the name of IEEE 802. A few have survived but many have not, as we saw in Fig. 1-38. Some people who believe in reincarnation think that Charles Darwin came back as a member of the IEEE Standards Association to weed out the unfit. The most important of the survivors are 802.3 (Ethernet) and 802.11 (wireless LAN). Bluetooth (wireless PAN) is widely deployed but has now been standardized outside of 802.15.
We will begin our study of real systems with Ethernet, probably the most ubiquitous kind of computer network in the world. Two kinds of Ethernet exist: classic Ethernet, which solves the multiple access problem using the techniques we have studied in this chapter; and switched Ethernet, in which devices called switches are used to connect different computers. It is important to note that, while they are both referred to as Ethernet, they are quite different. Classic Ethernet is the original form and ran at rates from 3 to 10 Mbps. Switched Ethernet is what Ethernet has become and runs at 100, 1000, 10,000, 40,000, or 100,000 Mbps, in forms called fast Ethernet, gigabit Ethernet, 10-gigabit Ethernet, 40-gigabit Ethernet, or 100-gigabit Ethernet. In practice, only switched Ethernet is used nowadays.
We will discuss these historical forms of Ethernet in chronological order, showing how they developed. Since Ethernet and IEEE 802.3 are identical except for a minor difference (which we will discuss shortly), many people use the terms ‘‘Ethernet’’ and ‘‘IEEE 802.3’’ interchangeably. We will do so, too. For more information about Ethernet, see Spurgeon and Zimmerman (2014).
4.3.1 Classic Ethernet Physical Layer
The story of Ethernet starts about the same time as that of ALOHA, when a student named Bob Metcalfe got his bachelor’s degree at M.I.T. and then moved up the river to get his Ph.D. at Harvard. During his studies there, he was exposed to Abramson’s work on ALOHA. He became so interested in it that after graduating from Harvard, he decided to spend the summer in Hawaii working with Abramson before starting work at Xerox PARC (Palo Alto Research Center). When he got to
PARC, he saw that the researchers there had designed and built what would later be called personal computers. But the machines were isolated. Using his knowledge of Abramson’s work, he, together with his colleague David Boggs, designed and implemented the first local area network (Metcalfe and Boggs, 1976). It used a single long, thick coaxial cable and ran at 3 Mbps.
They called the system Ethernet after the luminiferous ether, through which electromagnetic radiation was once thought to propagate. (When the 19th-century British physicist James Clerk Maxwell discovered that electromagnetic radiation could be described by a wave equation, scientists assumed that space must be filled with some ethereal medium in which the radiation was propagating. Only after the
famous Michelson-Morley experiment in 1887 did physicists discover that electro- magnetic radiation could propagate in a vacuum.)
The Xerox Ethernet was so successful that DEC, Intel, and Xerox drew up a standard in 1978 for a 10-Mbps Ethernet, called the DIX standard. With a minor change, the DIX standard became the IEEE 802.3 standard in 1983. Unfortunately for Xerox, it already had a history of making seminal inventions (such as the personal computer) and then failing to commercialize them, a story told in Fumbling the Future (Smith and Alexander, 1988). When Xerox showed no interest in doing anything with Ethernet other than helping standardize it, Metcalfe formed his own company, 3Com, to sell Ethernet cards for PCs. It sold millions of them.
Classic Ethernet snaked around the building as a single long cable to which all the computers were attached. This architecture is shown in Fig. 4-13. The first variety, popularly called thick Ethernet, resembled a yellow garden hose, with markings every 2.5 meters to show where to attach computers. (The 802.3 standard did not actually require the cable to be yellow, but it did suggest it.) It was succeeded by thin Ethernet, which bent more easily and made connections using industry-standard BNC connectors. Thin Ethernet was much cheaper and easier to install, but it could run for only 185 meters per segment (instead of 500 m with thick Ethernet), each of which could handle only 30 machines (instead of 100).
Figure 4-13. Architecture of classic Ethernet. (Computers attach to the shared ether cable through transceivers and interface cables.)
Each version of Ethernet has a maximum cable length per segment (i.e., unamplified length) over which the signal will propagate. To allow larger networks,
multiple cables can be connected by repeaters. A repeater is a physical layer device that receives, amplifies (i.e., regenerates), and retransmits signals in both directions. As far as the software is concerned, a series of cable segments connected by repeaters is no different from a single cable (except for a small amount of delay introduced by the repeaters).
Over each of these cables, information was sent using the Manchester encoding we studied in Sec. 2.4.3. An Ethernet could contain multiple cable segments and multiple repeaters, but no two transceivers could be more than 2.5 km apart and no path between any two transceivers could traverse more than four repeaters. The reason for this restriction was to ensure that the MAC protocol, which we will look at next, would work correctly.
4.3.2 Classic Ethernet MAC Sublayer Protocol
The format used to send frames is shown in Fig. 4-14. First comes a Preamble of 8 bytes, each containing the bit pattern 10101010 (with the exception of the last byte, in which the last 2 bits are set to 11). This last byte is called the Start of Frame delimiter for 802.3. The Manchester encoding of this pattern produces a 10-MHz square wave for 6.4 µsec to allow the receiver’s clock to synchronize with the sender’s. The last two 1 bits tell the receiver that the rest of the frame is about to start.
Bytes:              8          6                     6               2        0-1500   0-46   4
(a) Ethernet (DIX): Preamble | Destination address | Source address | Type   | Data  | Pad | Checksum
(b) IEEE 802.3:     Preamble | Destination address | Source address | Length | Data  | Pad | Checksum

Figure 4-14. Frame formats. (a) Ethernet (DIX). (b) IEEE 802.3.
Next come two addresses, one for the destination and one for the source. They are each 6 bytes long. The first transmitted bit of the destination address is a 0 for ordinary addresses and a 1 for group addresses. Group addresses allow multiple stations to listen to a single address. When a frame is sent to a group address, all the stations in the group receive it. Sending to a group of stations is called multicasting. The special address consisting of all 1 bits is reserved for broadcasting. A frame containing all 1s in the destination field is accepted by all stations on the network. Multicasting is more selective, but it involves group management to define which stations are in the group. Conversely, broadcasting does not differentiate between stations at all, so it does not require any group management.
An interesting feature of station source addresses is that they are globally unique, assigned centrally by IEEE to ensure that no two stations anywhere in the world have the same address. The idea is that any station can uniquely address any other station by just giving the right 48-bit number. To do this, the first 3 bytes of the address field are used for an OUI (Organizationally Unique Identifier). Values for this field are assigned by IEEE and indicate a manufacturer. Manufacturers are assigned blocks of 2^{24} addresses. The manufacturer assigns the last 3 bytes of the address and programs the complete address into the NIC before it is sold.

Next comes the Type or Length field, depending on whether the frame is Ethernet or IEEE 802.3. Ethernet uses a Type field to tell the receiver what to do with the frame. Multiple network-layer protocols may be in use at the same time on the same machine, so when an Ethernet frame arrives, the operating system has to know which one to hand the frame to. The Type field specifies which process to give the frame to. For example, a type code of 0x0800 means that the data contains an IPv4 packet.
IEEE 802.3, in its wisdom, decided that this field would carry the length of the frame, since the Ethernet length was determined by looking inside the data, a layering violation if ever there was one. Of course, this meant there was no way for the receiver to figure out what to do with an incoming frame. That problem was handled by the addition of another header for the logical link control protocol within the data, which we will look at later. It uses 8 bytes to convey the 2 bytes of protocol type information.
Unfortunately, by the time 802.3 was published, so much hardware and software for DIX Ethernet was already in use that few manufacturers and users were enthusiastic about repackaging the Type and Length fields. In 1997, IEEE threw in the towel and said that both ways were fine with it. Fortunately, all the Type fields in use before 1997 had values greater than 1500, then well established as the maximum data size. Now the rule is that any number there less than or equal to 1500 can be interpreted as a Length, and any number of 0x600 (1536) or greater can be interpreted as a Type. Now IEEE can maintain that everyone is using its standard and everybody else can keep on doing what they were already doing (not bothering with logical link control protocol) without feeling guilty about it. This is what happens when (industrial) politics meets technology.
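The resulting convention is simple enough to state in code; the function below is our own sketch of it:

def interpret_type_length(value):
    # Classify the 2-byte Type/Length field of an Ethernet frame.
    if value <= 1500:
        return ("length", value)     # IEEE 802.3 style
    if value >= 0x600:               # 1536 and up
        return ("type", hex(value))  # DIX style, e.g., 0x0800 = IPv4
    return ("undefined", value)      # 1501-1535 are not assigned

print(interpret_type_length(0x0800))   # ('type', '0x800')
print(interpret_type_length(46))       # ('length', 46)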
Next come the data, up to 1500 bytes. This limit was chosen somewhat arbitrarily at the time the Ethernet standard was cast in stone, mostly based on the fact that a transceiver needs enough RAM to hold an entire frame and RAM was expensive in 1978. A larger upper limit would have meant more RAM, and hence a more expensive transceiver.
In addition to there being a maximum frame length, there is also a minimum frame length. While a data field of 0 bytes is sometimes useful, it causes a problem. When a transceiver detects a collision, it truncates the current frame, which means that stray bits and pieces of frames appear on the cable all the time. To make it easier to distinguish valid frames from garbage, Ethernet requires that valid
frames must be at least 64 bytes long, from destination address to checksum, including both. If the data portion of a frame is less than 46 bytes, the Pad field is used to fill out the frame to the minimum size.
Another (and more important) reason for having a minimum length frame is to prevent a station from completing the transmission of a short frame before the first bit has even reached the far end of the cable, where it may collide with another frame. This problem is illustrated in Fig. 4-15. At time 0, station A, at one end of the network, sends off a frame. Let us call the propagation time for this frame to reach the other end τ. Just before the frame gets to the other end (i.e., at time τ − ε), the most distant station, B, starts transmitting. When B detects that it is receiving more power than it is putting out, it knows that a collision has occurred, so it aborts its transmission and generates a 48-bit noise burst to warn all other stations. In other words, it jams the ether to make sure the sender does not miss the collision. At about time 2τ, the sender sees the noise burst and aborts its transmission, too. It then waits a random time before trying again.
Figure 4-15. Collision detection can take as long as 2τ. (The four panels show: (a) the packet starting at A at time 0; (b) the packet almost arriving at B at time τ − ε; (c) the collision at B; (d) the noise burst getting back to A at time 2τ.)
If a station tries to transmit a very short frame, it is conceivable that a collision will occur, but the transmission will have completed before the noise burst gets back to the station at 2τ. The sender will then incorrectly conclude that the frame was successfully sent. To prevent this situation from occurring, all frames must take more than 2τ to send so that the transmission is still taking place when the noise burst gets back to the sender. For a 10-Mbps LAN with a maximum length of 2500 meters and four repeaters (from the 802.3 specification), the round-trip time (including time to propagate through the four repeaters) has been determined to be nearly 50 µsec in the worst case. Therefore, the shortest allowed frame must take at least this long to transmit. At 10 Mbps, a bit takes 100 nsec, so 500 bits is the smallest frame that is guaranteed to work. To add some margin of safety, this number was rounded up to 512 bits or 64 bytes.
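The arithmetic can be checked in a few lines of Python (a sketch using the worst-case numbers quoted above):

round_trip_sec = 50e-6           # worst-case 2*tau, four repeaters included
bit_time_sec = 1 / 10e6          # 100 nsec per bit at 10 Mbps
min_bits = round_trip_sec / bit_time_sec
print(min_bits)                  # 500 bits, rounded up to 512 (64 bytes)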
The final field is the Checksum. It is a 32-bit CRC of the kind we studied in Sec. 3.2. In fact, it is defined exactly by the generator polynomial we gave there,
which popped up for PPP, ADSL, and other links too. This CRC is an error-detecting code that is used to determine if the bits of the frame have been received correctly. It just does error detection, with the frame dropped if an error is detected.
CSMA/CD with Binary Exponential Backoff
Classic Ethernet uses the 1-persistent CSMA/CD algorithm that we studied in Sec. 4.2. This descriptor just means that stations sense the medium when they have a frame to send and send the frame as soon as the medium becomes idle. They monitor the channel for collisions as they send. If there is a collision, they abort the transmission with a short jam signal and retransmit after a random interval.
Let us now see how the random interval is determined when a collision occurs, as it is a new method. The model is still that of Fig. 4-5. After a collision, time is divided into discrete slots whose length is equal to the worst-case round-trip propagation time on the ether (2τ). To accommodate the longest path allowed by Ethernet, the slot time has been set to 512 bit times, or 51.2 µsec.
After the first collision, each station waits either 0 or 1 slot times at random before trying again. If two stations collide and each one picks the same random number, they will collide again. After the second collision, each one picks either 0, 1, 2, or 3 at random and waits that number of slot times. If a third collision occurs (the probability of this happening is 0.25), the next time the number of slots to wait is chosen at random from the interval 0 to 2³ − 1.

In general, after i collisions, a random number between 0 and 2ⁱ − 1 is chosen, and that number of slots is skipped. However, after 10 collisions have been reached, the randomization interval is frozen at a maximum of 1023 slots. After 16 collisions, the controller throws in the towel and reports failure back to the computer. Further recovery is up to higher layers.
This algorithm, called binary exponential backoff, was chosen to dynamically adapt to the number of stations trying to send. If the randomization interval for all collisions were 1023, the chance of two stations colliding for a second time would be negligible, but the average wait after a collision would be hundreds of slot times, introducing significant delay. On the other hand, if each station always delayed for either 0 or 1 slots, then if 100 stations ever tried to send at once they would collide over and over until 99 of them picked 1 and the remaining station picked 0. This might take years. By having the randomization interval grow exponentially as more and more consecutive collisions occur, the algorithm ensures a low delay when only a few stations collide but also ensures that the collisions are resolved in a reasonable interval when many stations collide. Truncating the backoff at 1023 keeps the bound from growing too large.
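The backoff rule itself fits in a few lines; the following Python sketch captures it (the constants are from the text; the scaffolding is ours, not driver code):

import random

MAX_BACKOFF_EXPONENT = 10   # interval frozen at 0..1023 slots
MAX_ATTEMPTS = 16           # callers give up and report failure after this

def backoff_slots(collisions: int) -> int:
    # After the i-th collision, wait a random number of slots chosen
    # from the interval 0 .. 2^min(i, 10) - 1.
    exponent = min(collisions, MAX_BACKOFF_EXPONENT)
    return random.randint(0, 2**exponent - 1)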
If there is no collision, the sender assumes that the frame was probably successfully delivered. That is, neither CSMA/CD nor Ethernet provides acknowledgements. This choice is appropriate for wired and optical fiber channels that have low error rates. Any errors that do occur must then be detected by the CRC
and recovered by higher layers. For wireless channels that have more errors, we will see that acknowledgements are used.
4.3.3 Ethernet Performance
Now let us briefly examine the performance of classic Ethernet under conditions of heavy and constant load, that is, with k stations always ready to transmit. A rigorous analysis of the binary exponential backoff algorithm is complicated. Instead, we will follow Metcalfe and Boggs (1976) and assume a constant retransmission probability in each slot. If each station transmits during a contention slot with probability p, the probability A that some station acquires the channel in that slot is
A = kp(1 − p)^(k−1)

A is maximized when p = 1/k, with A → 1/e as k → ∞. The probability that the contention interval has exactly j slots in it is A(1 − A)^(j−1), so the mean number of slots per contention is given by

Σ_{j=0}^{∞} jA(1 − A)^(j−1) = 1/A
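The closed form of this sum follows from differentiating the geometric series; for completeness (this one-line derivation is ours, not in the original):

\[
\sum_{j=0}^{\infty} j A q^{\,j-1}
  = A \frac{d}{dq}\sum_{j=0}^{\infty} q^{\,j}
  = \frac{A}{(1-q)^2}
  = \frac{A}{A^2} = \frac{1}{A},
\qquad q = 1 - A.
\]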
Since each slot has a duration 2τ, the mean contention interval, w, is 2τ/A. Assuming optimal p, the mean number of contention slots is never more than e, so w is at most 2τe ≈ 5.4τ.
If the mean frame takes P sec to transmit, then when many stations have frames to send,

Channel efficiency = P / (P + 2τ/A)    (4-2)
Here, we see where the maximum cable distance between any two stations enters into the performance figures. The longer the cable, the longer the contention interval, which is why the Ethernet standard specifies a maximum cable length.
It is instructive to formulate Eq. (4-2) in terms of the frame length, F, the network bandwidth, B, the cable length, L, and the speed of signal propagation, c, for the optimal case of e contention slots per frame. With P = F/B, Eq. (4-2) becomes

Channel efficiency = 1 / (1 + 2BLe/cF)    (4-3)
When the second term in the denominator is large, network efficiency will be low. More specifically, increasing network bandwidth or distance (the BL product) reduces efficiency for a given frame size. Unfortunately, much research on network hardware is aimed precisely at increasing this product. People want high bandwidth over long distances (fiber optic MANs, for example), yet classic Ethernet implemented in this manner is not the best system for these applications. We will see other ways of implementing Ethernet in the next section.
In Fig. 4-16, the channel efficiency is plotted versus the number of ready stations for 2τ = 51.2 µsec and a data rate of 10 Mbps, using Eq. (4-3). With a 64-byte slot time, it is not surprising that 64-byte frames are not efficient. On the other hand, with 1024-byte frames and an asymptotic value of e 64-byte slots per contention interval, the contention period is 174 bytes long and the efficiency is 85%. This result is much better than the 37% efficiency of slotted ALOHA.
Figure 4-16. Efficiency of Ethernet at 10 Mbps with 512-bit slot times. (The curves plot channel efficiency against the number of stations trying to send, for frame sizes of 64, 128, 256, 512, and 1024 bytes.)
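The 85% figure is easy to verify numerically; here is a sketch using Eq. (4-2) with the optimal contention interval of e slots (the constants come from the text above):

import math

TWO_TAU = 51.2e-6                 # worst-case round-trip slot, in seconds
BIT_RATE = 10e6                   # classic Ethernet, 10 Mbps

def efficiency(frame_bytes: int) -> float:
    p = frame_bytes * 8 / BIT_RATE          # frame transmission time P
    return p / (p + TWO_TAU * math.e)       # Eq. (4-2) with A = 1/e

print(efficiency(1024))           # ~0.85, as quoted above
print(efficiency(64))             # ~0.27: small frames waste the channel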
It is probably worth mentioning that there has been a large amount of theoretical performance analysis of Ethernet (and other networks). Most of the results should be taken with a grain (or better yet, a metric ton) of salt, for two reasons. First, virtually all of the theoretical work assumes Poisson traffic. When researchers began looking at real data, they discovered that network traffic is rarely Poisson. Instead, it is self-similar or bursty over a range of time scales (Paxson and Floyd, 1995; and Fontugne et al., 2017). What this means is that averaging over long periods of time does not smooth out the traffic. As well as using questionable models, many of the analyses focus on the ‘‘interesting’’ performance cases of abnormally high load. Boggs et al. (1988) showed by experimentation that Ethernet works well in reality, even at moderately high load.
4.3.4 Switched Ethernet
Ethernet soon began to evolve away from the single long cable architecture of classic Ethernet. The problems associated with finding breaks or loose connections drove it toward a different kind of wiring pattern, in which each station has a dedicated cable running to a central hub. A hub simply connects all the
attached wires electrically, as if they were soldered together. This configuration is shown in Fig. 4-17(a).
Figure 4-17. (a) Hub. (b) Switch.
The wires were telephone company twisted pairs, since most office buildings were already wired this way and normally plenty of spares were available. This reuse was a win, but it did reduce the maximum cable run from the hub to 100 meters (200 meters if high-quality Category 5 twisted pairs were used). Adding or removing a station is simpler in this configuration, and cable breaks can be detected easily. With the advantages of being able to use existing wiring and ease of maintenance, twisted-pair hubs quickly became the dominant form of Ethernet.
However, hubs do not increase capacity because they are logically equivalent to the single long cable of classic Ethernet. As more and more stations are added, each station gets a decreasing share of the fixed capacity. Eventually, the LAN will saturate. One way out is to go to a higher speed, say, from 10 Mbps to 100 Mbps, 1 Gbps, or even higher speeds. But with the growth of multimedia and powerful servers, even a 1-Gbps Ethernet can become saturated.
Fortunately, there is another way to deal with increased load: switched Ethernet. The heart of this system is a switch containing a high-speed backplane that connects all of the ports, as shown in Fig. 4-17(b). From the outside, a switch looks just like a hub. They are both boxes, typically with 4 to 48 ports, each with a standard RJ-45 connector for a twisted-pair cable. Each cable connects the switch or hub to a single computer, as shown in Fig. 4-18. A switch has the same advantages as a hub, too. It is easy to add or remove a new station by plugging or unplugging a wire, and it is easy to find most faults since a flaky cable or port will usually affect just one station. There is still a shared component that can fail—the switch itself—but if all stations lose connectivity the IT folks know what to do to fix the problem: replace the whole switch.
Inside the switch, however, something very different is happening. Switches only output frames to the ports for which those frames are destined. When a switch port receives an Ethernet frame from a station, the switch checks the Ethernet addresses to see which port the frame is destined for. This step requires the
Figure 4-18. An Ethernet switch.
switch to be able to work out which ports correspond to which addresses, a process that we will describe in Sec. 4.8 when we get to the general case of switches connected to other switches. For now, just assume that the switch knows the frame’s destination port. The switch then forwards the frame over its high-speed backplane to the destination port. The backplane typically runs at many Gbps, using a proprietary protocol that does not need to be standardized because it is entirely hidden inside the switch. The destination port then transmits the frame on the wire so that it reaches the intended station. None of the other ports even knows the frame exists.
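In outline, the per-frame decision looks something like the following Python sketch (the address-learning process that fills mac_table is deferred to Sec. 4.8; all names here are ours):

BROADCAST = b"\xff" * 6

def forward(dst: bytes, in_port: int, mac_table: dict, all_ports: list):
    out_port = mac_table.get(dst)
    if dst == BROADCAST or out_port is None:
        # Broadcast or unknown destination: flood to every other port.
        return [p for p in all_ports if p != in_port]
    if out_port == in_port:
        return []              # already delivered on that segment; drop
    return [out_port]          # unicast out the one correct port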
What happens if more than one of the stations or ports wants to send a frame at the same time? Again, switches differ from hubs. In a hub, all stations are in the same collision domain. They must use the CSMA/CD algorithm to schedule their transmissions. In a switch, each port is its own independent collision domain. In the common case that the cable is full duplex, both the station and the port can send a frame on the cable at the same time, without worrying about other ports and stations. Collisions are now impossible and CSMA/CD is not needed. However, if the cable is half duplex, the station and the port must contend for transmission with CSMA/CD in the usual way.
A switch improves performance over a hub in two ways. First, since there are no collisions, the capacity is used more efficiently. Second, and more importantly, with a switch multiple frames can be sent simultaneously (by different stations). These frames will reach the switch ports and travel over the switch’s backplane to be output on the proper ports. However, since two frames might be sent to the same output port at the same time, the switch must have buffering so that it can temporarily queue an input frame until it can be transmitted to the output port. Overall, these improvements give a large performance win that is not possible with a hub. The total system throughput can often be increased by an order of magnitude, depending on the number of ports and traffic patterns.
The change in the ports on which frames are output also has security benefits. Most LAN interfaces have a promiscuous mode, in which all frames are given to each computer, not just those addressed to it. With a hub, every computer that is attached can see the traffic sent between all of the other computers. Spies and busybodies love this feature. With a switch, traffic is forwarded only to the ports
where it is destined. This restriction provides better isolation so that traffic will not easily escape and fall into the wrong hands. However, it is better to encrypt traffic if security is really needed.
Because the switch just expects standard Ethernet frames on each input port, it is possible to use some of the ports as concentrators. In Fig. 4-18, the port in the upper-right corner is connected not to a single station, but to a 12-port hub instead. As frames arrive at the hub, they contend for the ether in the usual way, including collisions and binary backoff. Successful frames make it through the hub to the switch and are treated there like any other incoming frames. The switch does not know they had to fight their way in. Once in the switch, they are sent to the correct output line over the high-speed backplane. It is possible that the correct destination was on one of the lines attached to the hub, in which case the frame has already been delivered, so the switch drops it. Hubs are simpler and cheaper than switches, but due to falling switch prices, they have become an endangered species. Modern networks largely use switched Ethernet. Nevertheless, legacy hubs still exist.
4.3.5 Fast Ethernet
At the same time that switches were becoming popular, the speed of 10-Mbps Ethernet was coming under pressure. At first, 10 Mbps seemed like heaven, just as cable modems seemed like heaven to the users of 56-kbps telephone modems. But the novelty wore off quickly. As a kind of corollary to Parkinson’s Law (‘‘Work expands to fill the time available for its completion’’), it seemed that data expanded to fill the bandwidth available for their transmission.
Many installations needed more bandwidth and thus had numerous 10-Mbps LANs connected by a maze of repeaters, hubs, and switches, although to the network managers it sometimes felt that they were being held together by bubble gum and chicken wire. But even with Ethernet switches, the maximum bandwidth of a single computer was limited by the cable that connected it to the switch port.
It was in this environment that IEEE reconvened the 802.3 committee in 1992 with instructions to come up with a faster LAN. One proposal was to keep 802.3 exactly as it was, but just make it go faster. Another proposal was to redo it totally and give it lots of new features, such as real-time traffic and digitized voice, but just keep the old name (for marketing reasons). After some wrangling, the committee decided to keep 802.3 the way it was, and just make it go faster. This strategy would get the job done before the technology changed and avoid unforeseen problems with a brand new design. The new design would also be backward-compatible with existing Ethernet LANs. The people behind the losing proposal did what any self-respecting computer-industry people would have done under these circumstances: they stomped off and formed their own committee and standardized their LAN anyway (eventually as 802.12). It flopped miserably.
The work was done quickly (by standards committees’ norms), and the result, 802.3u, was approved by IEEE in June 1995. Technically, 802.3u is not really a
new standard, but an addendum to the existing 802.3 standard (to emphasize its backward compatibility). This strategy is used a lot. Since practically everyone calls it fast Ethernet, rather than 802.3u, we will do that, too.
The basic idea behind fast Ethernet was simple: keep all the old frame formats, interfaces, and procedural rules, but reduce the bit time from 100 nsec to 10 nsec. Technically, it would have been possible to copy 10-Mbps classic Ethernet and still detect collisions on time by just reducing the maximum cable length by a factor of 10. However, the advantages of twisted-pair wiring were so overwhelming that fast Ethernet is based entirely on this design. Thus, all fast Ethernet systems use hubs and switches; multidrop cables with vampire taps or BNC connectors are not permitted.
Nevertheless, some choices still had to be made, the most important being which wire types to support. One contender was Category 3 twisted pair. The argument for it was that practically every office in the Western world had at least four Category 3 (or better) twisted pairs running from it to a telephone wiring closet within 100 meters. Sometimes two such cables existed. Thus, using Category 3 twisted pair would make it possible to wire up desktop computers using fast Ethernet without having to rewire the building, an enormous advantage for many organizations.
The main disadvantage of a Category 3 twisted pair is its inability to carry 100 Mbps over 100 meters, the maximum computer-to-hub distance specified for 10-Mbps hubs. In contrast, Category 5 twisted pair wiring can handle 100 m easily, and fiber can go much farther. The compromise chosen was to allow all three possibilities, as shown in Fig. 4-19, but to pep up the Category 3 solution to give it the additional carrying capacity needed.
Name         Cable          Max. segment   Advantages
100Base-T4   Twisted pair   100 m          Uses Category 3 UTP
100Base-TX   Twisted pair   100 m          Full duplex at 100 Mbps (Cat 5 UTP)
100Base-FX   Fiber optics   2000 m         Full duplex at 100 Mbps; long runs
Figure 4-19. The original fast Ethernet cabling.
The Category 3 UTP scheme, formally called 100Base-T4, used a signaling speed of 25 MHz, only 25% faster than standard Ethernet’s 20 MHz. (Remember that Manchester encoding, discussed in Sec. 2.4.3, requires two clock periods for each of the 10 million bits sent each second.) However, to achieve the necessary bit rate, 100Base-T4 requires four twisted pairs. Of the four pairs, one is always to the hub, one is always from the hub, and the other two are switchable to the current transmission direction. To get 100 Mbps out of the three twisted pairs in the transmission direction, a fairly involved scheme is used on each twisted pair. It involves sending ternary digits with three different voltage levels. This scheme is never going to win any prizes for elegance, so we will (mercifully) skip the details.
However, since standard telephone wiring for decades has had four twisted pairs per cable, most offices are able to use the existing wiring plant. It means giving up your office telephone, but that is surely a small price to pay for faster email.
100Base-T4 fell by the wayside as many office buildings were rewired with Category 5 UTP for 100Base-TX Ethernet, which came to dominate the market. This design is simpler because the wires can handle clock rates of 125 MHz. Only two twisted pairs per station are used, one to the hub and one from it. Neither straight binary coding (i.e., NRZ) nor Manchester coding is used. Instead, the 4B/5B encoding we described in Sec. 2.4.3 is used. Four data bits are encoded as 5 signal bits and sent at 125 MHz to provide 100 Mbps. This scheme is simple but has sufficient transitions for synchronization and uses the bandwidth of the wire relatively well. The 100Base-TX system is full duplex; stations can transmit at 100 Mbps on one twisted pair and receive at 100 Mbps on another twisted pair at the same time.
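As a concrete sketch, the encoding is a simple table lookup. The data-symbol table below is the standard FDDI-derived one; the helper function is ours:

# The 16 4B/5B data codewords: each nibble maps to 5 line bits chosen
# to guarantee enough transitions for clock recovery.
FOUR_B_FIVE_B = {
    0x0: "11110", 0x1: "01001", 0x2: "10100", 0x3: "10101",
    0x4: "01010", 0x5: "01011", 0x6: "01110", 0x7: "01111",
    0x8: "10010", 0x9: "10011", 0xA: "10110", 0xB: "10111",
    0xC: "11010", 0xD: "11011", 0xE: "11100", 0xF: "11101",
}

def encode_4b5b(data: bytes) -> str:
    # High nibble first: 8 data bits become 10 line bits, so signaling
    # at 125 MHz carries 100 Mbps of data.
    return "".join(FOUR_B_FIVE_B[b >> 4] + FOUR_B_FIVE_B[b & 0xF]
                   for b in data)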
The last option, 100Base-FX, uses two strands of multimode fiber, one for each direction, so it, too, can run full duplex with 100 Mbps in each direction. In this setup, the distance between a station and the switch can be up to 2 km.
Fast Ethernet allows interconnection by either hubs or switches. To ensure that the CSMA/CD algorithm continues to work, the relationship between the minimum frame size and maximum cable length must be maintained as the network speed goes up from 10 Mbps to 100 Mbps. So, either the minimum frame size of 64 bytes must go up or the maximum cable length of 2500 m must come down, proportionally. The easy choice was for the maximum distance between any two stations to come down by a factor of 10, since a hub with 100-m cables falls within this new maximum already. However, 2-km 100Base-FX cables are too long to permit a 100-Mbps hub with the normal Ethernet collision algorithm. These cables must instead be connected to a switch and operate in a full-duplex mode so that there are no collisions.
Users quickly started to deploy fast Ethernet, but they were not about to throw away 10-Mbps Ethernet cards on older computers. As a consequence, virtually all fast Ethernet switches can handle a mix of 10-Mbps and 100-Mbps stations. To make upgrading easy, the standard itself provides a mechanism called auto-negotiation that lets two stations automatically negotiate the optimum speed (10 or 100 Mbps) and duplexity (half or full). It works well most of the time but is known to lead to duplex-mismatch problems when one end of the link autonegotiates but the other end does not and is set to full-duplex mode (Shalunov and Carlson, 2005). Most Ethernet products use this feature to configure themselves.
4.3.6 Gigabit Ethernet
The ink was barely dry on the fast Ethernet standard when the 802 committee began working on a yet faster Ethernet, quickly dubbed gigabit Ethernet. IEEE ratified the most popular form as 802.3ab in 1999. Below, we will discuss some of
the key features of gigabit Ethernet. More information is given by Spurgeon and Zimmerman (2014).
The committee’s goals for gigabit Ethernet were essentially the same as the committee’s goals for fast Ethernet: increase performance tenfold while maintaining compatibility with all existing Ethernet standards. In particular, gigabit Ethernet had to offer unacknowledged datagram service with both unicast and broadcast, use the same 48-bit addressing scheme already in use, and maintain the same frame format, including the minimum and maximum frame sizes. The final standard met all these goals.
Like fast Ethernet, all configurations of gigabit Ethernet use point-to-point links. In the simplest configuration, illustrated in Fig. 4-20(a), two computers are directly connected to each other. The more common case, however, uses a switch or a hub connected to multiple computers and possibly additional switches or hubs, as shown in Fig. 4-20(b). In both configurations, each individual Ethernet cable has exactly two devices on it, no more and no fewer.
Figure 4-20. (a) A two-station Ethernet. (b) A multistation Ethernet.
Also like fast Ethernet, gigabit Ethernet supports two different modes of operation: full-duplex mode and half-duplex mode. The ‘‘normal’’ mode is full-duplex mode, which allows traffic in both directions at the same time. This mode is used when there is a central switch connected to computers (or other switches) on the periphery. In this configuration, all lines are buffered so each computer and switch is free to send frames whenever it wants to. The sender does not have to sense the channel to see if anybody else is using it because contention is impossible. On the line between a computer and a switch, the computer is the only possible sender to the switch, and the transmission will succeed even if the switch is currently sending a frame to the computer (because the line is full duplex). Since no contention is possible, the CSMA/CD protocol is not used, so the maximum length of the cable is determined by signal strength issues rather than by how long it takes for a noise burst to propagate back to the sender in the worst case. Switches are free to mix and match speeds. Autonegotiation is supported just as in fast Ethernet, only now the choice is among 10, 100, and 1000 Mbps.
The other mode of operation, half-duplex, is used when the computers are connected to a hub rather than a switch. A hub does not buffer incoming frames. Instead, it electrically connects all the lines internally, simulating the multidrop cable used in classic Ethernet. In this mode, collisions are possible, so the standard CSMA/CD protocol is required. Because a 64-byte frame (the shortest allowed) can now be transmitted 100 times faster than in classic Ethernet, the maximum cable length must be 100 times less, or 25 meters, to maintain the essential property that the sender is still transmitting when the noise burst gets back to it, even in the worst case. With a 2500-m-long cable, the sender of a 64-byte frame on a system running at 1 Gbps would be long finished before the frame got even a tenth of the way to the other end, let alone to the end and back.
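The scaling is easy to check (a two-line sketch using the numbers above):

frame_time_sec = 64 * 8 / 1e9    # a minimum frame lasts only 512 nsec at 1 Gbps
print(frame_time_sec)            # 100x shorter than at 10 Mbps, so the
print(2500 / 100)                # half-duplex cable limit shrinks to 25 m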
This length restriction was painful enough that two features were added to the standard to increase the maximum cable length to 200 meters, which is probably enough for most offices. The first feature, called carrier extension, essentially tells the hardware to add its own padding after the normal frame to extend the frame to 512 bytes. Since this padding is added by the sending hardware and removed by the receiving hardware, the software is unaware of it, meaning that no changes are needed to existing software. The downside is that using 512 bytes’ worth of bandwidth to transmit 46 bytes of user data (the payload of a 64-byte frame) has a line efficiency of only 9%.
The second feature, called frame bursting , allows a sender to transmit a con- catenated sequence of multiple frames in a single transmission. If the total burst is less than 512 bytes, the hardware pads it again. If enough frames are waiting for transmission, this scheme is very efficient and preferred over carrier extension.
In all fairness, it is hard to imagine an organization buying modern computers with gigabit Ethernet cards and then connecting them with an old-fashioned hub to simulate classic Ethernet with all its collisions. Gigabit Ethernet interfaces and switches used to be expensive, but their prices fell rapidly as sales volumes picked up. Still, backward compatibility is sacred in the computer industry, so the committee was required to put it in. Today, most computers ship with an Ethernet interface that is capable of 10-, 100-, and 1000-Mbps operation (and maybe higher) and compatible with all of them.
Gigabit Ethernet supports both copper and fiber cabling, as listed in Fig. 4-21. Signaling at or near 1 Gbps requires encoding and sending a bit every nanosecond. This trick was initially accomplished with short, shielded copper cables (the 1000Base-CX version) and optical fibers. For the optical fibers, two wavelengths are permitted and result in two different versions: 0.85 microns (short, for 1000Base-SX) and 1.3 microns (long, for 1000Base-LX).
Signaling at the short wavelength can be achieved with cheap LEDs. It is used with multimode fiber and is useful for connections within a building, as it can run up to 500 m for 50-micron fiber. Signaling at the long wavelength requires lasers. On the other hand, when combined with single-mode (10-micron) fiber, the cable can be up to 5 km. This limit allows long distance connections between buildings,
Name          Cable            Max. segment   Advantages
1000Base-SX   Fiber optics     550 m          Multimode fiber (50, 62.5 microns)
1000Base-LX   Fiber optics     5000 m         Single (10 µ) or multimode (50, 62.5 µ) fiber
1000Base-CX   2 pairs of STP   25 m           Shielded twisted pair
1000Base-T    4 pairs of UTP   100 m          Standard Category 5 UTP
Figure 4-21. Gigabit Ethernet cabling.
such as for a campus backbone, as a dedicated point-to-point link. Later variations of the standard permit even longer links over single-mode fiber.
To send bits over these versions of gigabit Ethernet, the 8B/10B encoding we described in Sec. 2.4.3 was borrowed from another networking technology called Fibre Channel. That scheme encodes 8 bits of data into 10-bit codewords that are sent over the wire or fiber, hence the name 8B/10B. The codewords were chosen so that they could be balanced (i.e., have the same number of 0s and 1s) with sufficient transitions for clock recovery. Sending the coded bits with NRZ requires a signaling bandwidth 25% greater than that required for the uncoded bits, a big improvement over the 100% expansion of Manchester coding.
However, all of these options required new copper or fiber cables to support the faster signaling. None of them made use of the large amount of Category 5 UTP that had been installed along with fast Ethernet. Within a year, 1000Base-T came along to fill this gap, and it has been the most popular form of gigabit Ether- net ever since. People apparently dislike rewiring their buildings.
More complicated signaling is needed to make Ethernet run at 1000 Mbps over Category 5 wires. To start, all four twisted pairs in the cable are used, and each pair is used in both directions at the same time by using digital signal processing to separate signals. Over each wire, five voltage levels that carry 2 bits are used for signaling at 125 Msymbols/sec. The mapping to produce the symbols from the bits is not straightforward. It involves scrambling, for transitions, followed by an error correcting code in which four values are embedded into five signal levels.
A speed of 1 Gbps is quite fast. For example, if a receiver is busy with some other task for even 1 msec and does not empty the input buffer on some line, up to 1953 frames may have accumulated in that gap. Also, when a computer on a gigabit Ethernet is shipping data down the line to a computer on a classic Ethernet, buffer overruns are very likely. As a consequence of these two observations, gigabit Ethernet supports flow control. The mechanism consists of one end sending a special control frame to the other end telling it to pause for some period of time. These PAUSE control frames are normal Ethernet frames containing a type of 0x8808. Pauses are given in units of the minimum frame time. For gigabit Ethernet, the time unit is 512 nsec, allowing for pauses as long as 33.6 msec.
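These numbers check out with a little arithmetic (a sketch; we assume the pause duration is carried in a 16-bit field, so the largest value is 65,535 quanta):

pause_quantum_sec = 512 / 1e9        # one minimum-frame time at 1 Gbps
print(pause_quantum_sec)             # 512 nsec
print(pause_quantum_sec * 65535)     # ~0.0336 sec, the 33.6-msec maximum
print(1e-3 * 1e9 / 512)              # ~1953 minimum-size frames fit in 1 msec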
There is one more extension that was introduced along with gigabit Ethernet. Jumbo frames allow for frames to be longer than 1500 bytes, usually up to 9 KB.
This extension is proprietary. It is not recognized by the standard because if it is used then Ethernet is no longer compatible with earlier versions, but most vendors support it anyway. The rationale is that 1500 bytes is a short unit at gigabit speeds. By manipulating larger blocks of information, the frame rate can be decreased, along with the processing associated with it, such as interrupting the processor to say that a frame has arrived, or splitting up and recombining messages that were too long to fit in one Ethernet frame.
4.3.7 10-Gigabit Ethernet
As soon as gigabit Ethernet was standardized, the 802 committee got bored and wanted to get back to work. IEEE told them to start on 10-gigabit Ethernet. This work followed much the same pattern as the previous Ethernet standards, with standards for fiber and shielded copper cable appearing first in 2002 and 2004, followed by the standard for copper twisted pair in 2006.
Ten Gbps is an impressive speed, 1000x faster than the original Ethernet. Where could it be needed? The answer is inside data centers and exchanges to connect high-end routers, switches, and servers, as well as in long-distance, high-bandwidth trunks between offices that are enabling entire metropolitan area networks based on Ethernet and fiber. The long distance connections use optical fiber, while the short connections may use copper or fiber.
All versions of 10-gigabit Ethernet support only full-duplex operation. CSMA/CD is no longer part of the design, and the standards concentrate on the details of physical layers that can run at very high speed. Compatibility still matters, though, so 10-gigabit Ethernet interfaces autonegotiate and fall back to the highest speed supported by both ends of the line.
The main kinds of 10-gigabit Ethernet are listed in Fig. 4-22. Multimode fiber with the 0.85-µ (short) wavelength is used for medium distances, and single-mode fiber at 1.3 µ (long) and 1.5 µ (extended) is used for long distances. 10GBase-ER can run for distances of 40 km, making it suitable for wide area applications. All of these versions send a serial stream of information that is produced by scrambling the data bits, then encoding them with a 64B/66B code. This encoding has less overhead than an 8B/10B code.
Name          Cable               Max. segment   Advantages
10GBase-SR    Fiber optics        Up to 300 m    Multimode fiber (0.85 µ)
10GBase-LR    Fiber optics        10 km          Single-mode fiber (1.3 µ)
10GBase-ER    Fiber optics        40 km          Single-mode fiber (1.5 µ)
10GBase-CX4   4 pairs of twinax   15 m           Twinaxial copper
10GBase-T     4 pairs of UTP      100 m          Category 6a UTP

Figure 4-22. 10-Gigabit Ethernet cabling.
The first copper version defined, 10GBase-CX4, uses a cable with four pairs of twinaxial copper wiring. Each pair uses 8B/10B coding and runs at 3.125 Gsymbols/sec to reach 10 Gbps. This version is cheaper than fiber and was early to market, but it remains to be seen whether it will be beaten out in the long run by 10-gigabit Ethernet over more garden-variety twisted-pair wiring.
10GBase-T is the version that uses UTP cables. While it calls for Category 6a wiring, for shorter runs it can use lower categories (including Category 5) to allow some reuse of installed cabling. Not surprisingly, the physical layer is quite involved to reach 10 Gbps over twisted pair. We will only sketch some of the high-level details. Each of the four twisted pairs is used to send 2500 Mbps in both directions. This speed is reached using a signaling rate of 800 Msymbols/sec with symbols that use 16 voltage levels. The symbols are produced by scrambling the data, protecting it with an LDPC (Low-Density Parity Check) code, and further coding for error correction.
Ten-gigabit Ethernet is now widespread in the market, so the 802.3 committee has moved on. At the end of 2007, IEEE created a group to standardize Ethernet operating at 40 Gbps and 100 Gbps. This upgrade lets Ethernet compete in very high-performance settings, including long-distance connections in backbone networks and short connections over equipment backplanes. The resulting standards are the subject of the next section.
4.3.8 40- and 100-Gigabit Ethernet
After it finished standardizing 10-gigabit Ethernet, the 802.3 committee got to work on new standards for Ethernet at 40 gigabits/sec and 100 gigabits/sec. The former is targeted at internal connections in data centers, not at ordinary offices and certainly not end users. The latter is targeted at the Internet backbone and as such has to work on optical-network runs of thousands of kilometers. A possible use is a virtual private LAN to connect a data center with a million CPUs to another million-CPU data center.
The first standard was 802.3ba, approved in 2010, followed by 802.3bj (2014) and 802.3cd (2018). All of these define Ethernet at both 40 Gbps and 100 Gbps. Design goals included:
1. Backward compatibility with the 802.3 standards down to 1 gigabit/sec.

2. Unchanged minimum and maximum frame sizes.

3. Bit-error rates of 10⁻¹² or better.

4. Good operation over optical networks.

5. Data rates of either 40 Gbps or 100 Gbps.

6. Support for single-mode or multimode fiber and specialized backplanes.
The new standards phase out copper wire in favor of optical fiber and high-performance (copper) backplanes used in data centers that support cloud computing. Half a dozen modulation schemes are supported, including 64B/66B (like 8B/10B, but with more bits). In addition, up to 10 parallel lanes at 10 Gbps each can be used to get to 100 Gbps. The lanes are typically different frequency bands over an optical fiber. Integration into existing optical networks uses ITU recommendation G.709.
Starting around 2018, a small number of companies began introducing 100-Gbps switches and network adapter cards. For the folks for whom 100 Gbps is not enough, work has already begun on standards for up to 400 gigabits/sec, sometimes referred to as 400GbE. The standards are 802.3cd, 802.3ck, 802.3cm, and 802.3cn if you want to look them up. At 400 Gbps, a typical (compressed) 4K movie can be downloaded in full in about 2 seconds.
4.3.9 Retrospective on Ethernet
Ethernet has been around for over 40 years and has no serious competitors in sight, so it is likely to be around for many more years to come. Few CPU architectures, operating systems, or programming languages have been king of the mountain for three decades and still going strong. Clearly, Ethernet did something right. What was it?
Probably the main reason for its longevity is that Ethernet is simple and flexible. In practice, simple translates into reliable, cheap, and easy to maintain. Once the hub and switch architecture was adopted, failures became extremely rare. People hesitate to replace something that works perfectly all the time, especially when they know that an awful lot of things in the computer industry work very poorly, so that many so-called ‘‘upgrades’’ are worse than what they replaced.
Simple also translates into cheap. Twisted-pair wiring is relatively inexpensive, as are the hardware components. They may start out expensive when there is a transition, for example, new gigabit Ethernet NICs or switches, but they are merely additions to a well-established network (not a replacement of it) and the prices fall quickly as the sales volume picks up.
Ethernet is easy to maintain. There is no software to install (other than the drivers) and not much in the way of configuration tables to manage (and get wrong). Also, adding new hosts is as simple as just plugging them in.
Another point is that Ethernet interworks easily with TCP/IP, which has be- come dominant. IP is a connectionless protocol, so it fits perfectly with Ethernet, which is also connectionless. IP fits much less well with connection-oriented alter- natives such as ATM. This mismatch definitely hurt ATM’s chances.
Lastly, and perhaps most importantly, Ethernet has been able to evolve in certain crucial ways. Speeds have gone up by four orders of magnitude and hubs and switches have been introduced, but these changes have not required changing the
software and have often allowed the existing cabling to be reused for a time. When a network salesman shows up at a large installation and says ‘‘I have this fantastic new network for you. All you have to do is throw out all your hardware and rewrite all your software,’’ he has a problem.
Many alternative technologies that you have probably not even heard of were faster than Ethernet when they were introduced. As well as ATM, this list includes FDDI (Fiber Distributed Data Interface) and Fibre Channel,† two ring-based optical LANs. Both were incompatible with Ethernet. Neither one made it. They were too complicated, which led to complex chips and high prices. The lesson that should have been learned here was KISS (Keep It Simple, Stupid). Eventually, Ethernet caught up with them in terms of speed, often by borrowing some of their technology, for example, the 4B/5B coding from FDDI and the 8B/10B coding from Fibre Channel. Then, they had no advantages left and quietly died off or fell into specialized roles.
It looks like Ethernet will continue to expand in its applications for some time. Ten-gigabit Ethernet freed it from the distance constraints of CSMA/CD. Much effort is being put into carrier-grade Ethernet to let network providers offer Ethernet-based services to their customers for metropolitan and wide area networks (Hawkins, 2016). This application carries Ethernet frames long distances over fiber and calls for better management features to help operators offer reliable, high-quality services. Very high-speed networks like 100GbE are also finding uses in backplanes connecting components in large routers or servers. Both of these uses are in addition to that of sending frames between computers in offices. The next step is 400GbE and that may not even be the last one.
4.4 WIRELESS LANS
Wireless LANs are increasingly popular, and homes, offices, cafes, libraries, airports, zoos, and other public places are being outfitted with them to connect desktop PCs, laptops, tablets, and smartphones to the Internet. Wireless LANs can also be used to let two or more nearby computers communicate without using the Internet.
The main wireless LAN standard for over two decades has been 802.11. We gave some background information on it in Sec. 1.5.3. Now it is time to take a closer look at the technology. In the following sections, we will look at the protocol stack, physical-layer radio transmission techniques, the MAC sublayer protocol, the frame structure, and the services provided. For more information about 802.11, see Bing (2017) and Davis (2018). To get the truth from the mouth of the horse, consult the published IEEE standards.
† It is called ‘‘Fibre Channel’’ and not ‘‘Fiber Channel’’ because the document editor was British.
4.4.1 The 802.11 Architecture and Protocol Stack
802.11 networks can be used in two modes. The most popular mode is to connect clients, such as laptops and smartphones, to another network, such as a company intranet or the Internet. This mode is shown in Fig. 4-23(a). In infrastructure mode, each client is associated with an AP (Access Point) that is in turn connected to the other network. The client sends and receives its packets via the AP. Several access points may be connected together, typically by a wired network called a distribution system, to form an extended 802.11 network. In this case, clients can send frames to other clients via their APs.
Figure 4-23. 802.11 architecture. (a) Infrastructure mode. (b) Ad-hoc mode.
The other mode, shown in Fig. 4-23(b), is an ad hoc network. This mode is a collection of computers that are associated so that they can directly send frames to each other. There is no access point. Since Internet access is the killer application for wireless, ad hoc networks are not very popular.
Now we will look at the protocols. All the 802 protocols, including 802.11 and Ethernet, have a certain commonality of structure. A partial view of the 802.11 protocol stack for the major 802.11 variants is given in Fig. 4-24. The stack is the same for clients and APs. The physical layer corresponds fairly well to the OSI physical layer, but the data link layer in all the 802 protocols is split into two or more sublayers. In 802.11, the MAC sublayer determines how the channel is allocated, that is, who gets to transmit next. Above it is the logical link control sublayer, whose job it is to hide the differences between the different 802 variants and make them indistinguishable as far as the network layer is concerned. This could have been a significant responsibility, but these days the logical link control is a glue layer that identifies the protocol (e.g., IP) that is carried within an 802.11 frame.
Several transmission techniques have been added to the physical layer as 802.11 has evolved since it first appeared in 1997. Two of the initial techniques, infrared in the manner of television remote controls and frequency hopping in the 2.4-GHz band, are now defunct. The third initial technique, direct sequence spread spectrum at 1 or 2 Mbps in the 2.4-GHz band, was extended to run at rates up to 11 Mbps and quickly became a hit. It is now known as 802.11b.
Figure 4-24. Part of the 802.11 protocol stack. (Above the physical layer sit the MAC sublayer and the logical link layer, which together form the data link layer. The physical-layer variants and their release dates: legacy 802.11, frequency hopping, spread spectrum, and infrared, 1997–1999; 802.11b, spread spectrum, 1999; 802.11a, OFDM, 1999; 802.11g, OFDM, 2003; 802.11n, MIMO OFDM, 2009; 802.11ac, MU-MIMO OFDM, 2013; 802.11ax, MU-MIMO OFDMA, 2019.)
To give wireless junkies a much-wanted speed boost, new transmission techniques based on the orthogonal frequency division multiplexing scheme we described in Sec. 2.5.3 were introduced in 1999 and 2003. The first is called 802.11a and uses a different frequency band, 5 GHz. The second stuck with 2.4 GHz and compatibility. It is called 802.11g. Both give rates up to 54 Mbps.
Transmission techniques that simultaneously use multiple antennas at the transmitter and receiver for a speed boost were finalized as 802.11n in October 2009. In December 2013, IEEE ran out of letters and published the next standard as 802.11ac. As an aside, the 802.11 committee members know the whole alphabet and use the ‘‘missing’’ letters, such as 802.11r, for minor technical refinements and amendments (often for clarifications and bug fixes). 802.11ac operates in the 5-GHz band, which means that older devices that use only the 2.4-GHz band cannot use it. Most modern mobile devices use 802.11ac. Most recently, the 802.11ax standard was approved for even more speed.
We will now examine each of these transmission techniques briefly. We will only cover those that are in use, however, skipping the legacy 802.11 transmission methods. Technically, these belong to the physical layer and should have been examined in Chap. 2, but since they are so closely tied to wireless LANs in general and the 802.11 LAN in particular, we treat them here instead.
4.4.2 The 802.11 Physical Layer
Each of the transmission techniques makes it possible to send a MAC frame over the air from one station to another. They differ, however, in the technology used and speeds achievable in practice. A detailed discussion of these technologies is far beyond the scope of this book, but a few words on each one will relate the
techniques to the material we covered in Chap. 2 and provide interested readers with the key terms to search for elsewhere for more information.
All of the 802.11 techniques use short-range radios to transmit signals in either the 2.4-GHz or the 5-GHz ISM frequency bands. These bands have the advantage of being unlicensed and hence freely available to any transmitter willing to meet some restrictions, such as radiated power of at most 1 W (though 50 mW is more typical for wireless LAN radios). Unfortunately, this fact is also known to the manufacturers of garage door openers, cordless phones, microwave ovens, and countless other devices, all of which compete with laptops and smartphones using WiFi for the same spectrum. The 2.4-GHz band tends to be more crowded than the 5-GHz band, so 5 GHz can be better for some applications even though it has shorter range due to the higher frequency. Unfortunately, the shorter radio waves at 5 GHz do not penetrate walls as well as the longer ones at 2.4 GHz do, so 5 GHz is not the unquestioned champion.
All of the transmission methods also define multiple rates. The idea is that different rates can be used depending on the current conditions. If the wireless signal is weak, a low rate can be used. If the signal is clear, the highest rate can be used. This adjustment is called rate adaptation. Since the rates vary by a factor of 10 or more, good rate adaptation is important for good performance. Of course, since it is not needed for interoperability, the standards do not say how rate adaptation should be done.
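Because the policy is left to implementers, strategies vary; the sketch below shows one simple heuristic in the spirit of auto-rate fallback (the success threshold is an illustrative assumption, not from any standard):

RATES_MBPS = [6, 9, 12, 18, 24, 36, 48, 54]    # the 802.11a/g rate set

class RateAdapter:
    """Step the rate down on a loss; probe upward after a success streak."""
    def __init__(self):
        self.index = 0           # start conservatively at the lowest rate
        self.streak = 0

    def on_result(self, acked: bool) -> int:
        if acked:
            self.streak += 1
            if self.streak >= 10 and self.index < len(RATES_MBPS) - 1:
                self.index += 1      # 10 successes in a row: try a higher rate
                self.streak = 0
        else:
            self.streak = 0
            if self.index > 0:
                self.index -= 1      # a loss: fall back one step
        return RATES_MBPS[self.index]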
The first transmission method we shall look at is 802.11b. It is a spread-spectrum method that supports rates of 1, 2, 5.5, and 11 Mbps, though in practice the operating rate is nearly always 11 Mbps. It is similar to the CDMA system we examined in Sec. 2.4.4, except that there is only one spreading code that is shared by all users. Spreading is used to satisfy the FCC requirement that power be spread over the ISM band. The spreading sequence used by 802.11b is called a Barker sequence. It has the property that its autocorrelation is low except when the sequences are aligned. This property allows a receiver to lock onto the start of a transmission. To send at a rate of 1 Mbps, the Barker sequence is used with BPSK modulation to send 1 bit per 11 chips. The chips are transmitted at a rate of 11 Mchips/sec. To send at 2 Mbps, it is used with QPSK modulation to send 2 bits per 11 chips. The higher rates are different. These rates use a technique called CCK (Complementary Code Keying) to construct codes instead of the Barker sequence. The 5.5-Mbps rate sends 4 bits in every 8-chip code, and the 11-Mbps rate sends 8 bits in every 8-chip code.
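The autocorrelation property is easy to see numerically. The following sketch uses the 11-chip Barker sequence that 802.11b employs and computes its circular autocorrelation (the helper function is ours):

BARKER_11 = [+1, -1, +1, +1, -1, +1, +1, +1, -1, -1, -1]

def circular_autocorrelation(seq, shift):
    n = len(seq)
    return sum(seq[i] * seq[(i + shift) % n] for i in range(n))

for s in range(11):
    print(s, circular_autocorrelation(BARKER_11, s))
# Prints 11 at shift 0 and -1 at every other shift: a sharp peak that
# lets the receiver lock onto the start of a transmission.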
Next, we come to 802.11a, which supports rates up to 54 Mbps in the 5-GHz ISM band. You might have expected 802.11a to come before 802.11b, but that was not the case. Although the 802.11a group was set up first, the 802.11b standard was approved first and its products got to market well ahead of the 802.11a products, partly because of the difficulty of operating in the higher 5-GHz band.
The 802.11a method is based on OFDM (Orthogonal Frequency Division Multiplexing) because OFDM uses the spectrum efficiently and resists wireless
signal degradations such as multipath. Bits are sent over 52 subcarriers in parallel, 48 carrying data and 4 used for synchronization. Each symbol lasts 4 µsec and sends 1, 2, 4, or 6 bits per subcarrier. The bits are coded for error correction with a binary convolutional code first, so only 1/2, 2/3, or 3/4 of the bits are not redundant. With different combinations, 802.11a can run at eight different rates, ranging from 6 to 54 Mbps. These rates are significantly faster than 802.11b rates, and there is less interference in the 5-GHz band. However, 802.11b has a range that is about seven times greater than that of 802.11a, which is more important in many situations.
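The endpoints of that rate range can be checked from these parameters (a sketch; the modulation names in the comments are the standard OFDM choices):

def rate_mbps(bits_per_subcarrier: int, code_rate: float) -> float:
    coded_bits_per_symbol = 48 * bits_per_subcarrier   # 48 data subcarriers
    symbol_time = 4e-6                                 # 4-usec OFDM symbol
    return coded_bits_per_symbol * code_rate / symbol_time / 1e6

print(rate_mbps(1, 1/2))   # 6 Mbps  (BPSK, rate-1/2 code)
print(rate_mbps(6, 3/4))   # 54 Mbps (64-QAM, rate-3/4 code)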
Even with the greater range, the 802.11b people had no intention of letting this upstart win the speed championship. Fortunately, in May 2002, the FCC dropped its long-standing rule requiring all wireless communications equipment operating in the ISM bands in the U.S. to use spread spectrum, so the committee got to work on 802.11g, which was approved by IEEE in 2003. It copies the OFDM modulation methods of 802.11a but operates in the narrow 2.4-GHz ISM band along with 802.11b. It offers the same rates as 802.11a (6 to 54 Mbps) plus of course compatibility with any 802.11b devices that happen to be nearby. All of these different choices can be confusing for customers, so it is common for products to support 802.11a/b/g in a single network interface card.
Not content to stop there, the IEEE committee began work on a high-throughput physical layer called 802.11n. It was ratified in 2009. The goal for 802.11n was throughput of at least 100 Mbps after all the wireless overheads were removed. This goal called for a raw speed increase of at least a factor of four. To make it happen, the committee doubled the channels from 20 MHz to 40 MHz and reduced framing overheads by allowing a group of frames to be sent together. More significantly, however, 802.11n uses up to four antennas to transmit up to four streams of information at the same time. The signals of the streams interfere at the receiver, but they can be separated using MIMO (Multiple Input Multiple Output) communications techniques. The use of multiple antennas gives a large speed boost, or better range and reliability instead. MIMO, like OFDM, is one of those clever communications ideas that is changing wireless designs and which we are all likely to hear a lot about in the future. For a brief introduction to multiple antennas in 802.11, see Halperin et al. (2010).
In 2013, IEEE published the 802.11ac standard. It uses wider (80-MHz and 160-MHz) channels, 256-QAM modulation, and MU-MIMO (MultiUser MIMO) with up to eight streams, among other tricks, to crank the bit rate up to a theoretical maximum of 7 Gbps, although in practice this is virtually never even approached. Modern consumer mobile devices generally use 802.11ac.
Another recent 802.11 standard is 802.11ad. This one operates in the 60-GHz band (57–71 GHz), which means the radio waves are very short: only 5 mm long. These waves do not penetrate walls or anything else, so the standard is only useful within a single room. However, this is an advantage as well as a disadvantage. It means that whatever the person in the next office or apartment is doing will not interfere with what you are doing. The combination of high bandwidth and poor
penetration makes it ideal for streaming uncompressed 4K or 8K movies from a base station in a room to mobile devices in the same room. An improvement to this standard, increasing the bandwidth by a factor of four, is the 802.11ay standard.
Now we come to 802.11ax, sometimes referred to as high-efficiency wireless. The consumer-friendly name for the standard is WiFi 6 (in case you thought you slept through WiFi 1 through 5, you did not; the old names were based on the IEEE standards numbers, and the WiFi Alliance decided to call this revision WiFi 6 because it is the sixth version of the WiFi standard). It allows for more efficient QAM encoding along with a new modulation scheme, OFDMA. It can (in principle) operate in unlicensed parts of the spectrum up to 7 GHz and can (theoretically) achieve a data rate of 11 Gbps. You can try this at home if you like, but unless you have a perfectly designed test lab at home, you are not going to get 11 Gbps. You might get 1 Gbps, though.
In 802.11ax OFDMA, a central scheduler allocates fixed-length resource units to each of the transmitting stations, thus reducing contention in dense deployments. 802.11ax also provides support for spatial spectrum reuse, through a technique called coloring, whereby a sender marks the beginning of its transmission in a way that allows other senders to determine whether simultaneous use of the spectrum could take place. In some circumstances, a sender could transmit simultaneously if it reduces its power accordingly.
Additionally, 802.11ax uses 1024-QAM, which allows each symbol to encode 10 bits, as opposed to the 8 bits/symbol in the 256-QAM that 802.11ac uses. The standard also supports smarter scheduling through a feature called target wake time, which allows a router to put devices in the home on transmission schedules to minimize collisions. This feature is likely to be most useful in smart homes, where an increasing number of connected devices may need to periodically send heartbeats to the home router.
4.4.3 The 802.11 MAC Sublayer Protocol
Let us now return from the land of electrical engineering to the land of computer science. The 802.11 MAC sublayer protocol is quite different from that of Ethernet, due to two factors that are fundamental to wireless communication.
First, radios are nearly always half duplex, meaning that they cannot transmit and listen for noise bursts at the same time on a single frequency. The received signal can easily be a million times weaker than the transmitted signal, so it cannot be heard at the same time. With Ethernet, a station just waits until the ether goes silent and then starts transmitting. If it does not receive a noise burst back while transmitting the first 64 bytes, the frame has almost assuredly been delivered correctly. With wireless, this collision detection mechanism does not work.
Instead, 802.11 tries to avoid collisions with a protocol called CSMA/CA (CSMA with Collision Avoidance). This protocol is conceptually similar to Ethernet's CSMA/CD, with channel sensing before sending and exponential backoff after collisions. However, a station that has a frame to send starts with a random backoff (except in the case that it has not used the channel recently and the channel is idle). It does not wait for a collision. The number of slots to back off is chosen in the range 0 to, say, 15 in the case of the OFDM physical layer. The station waits until the channel is idle, by sensing that there is no signal for a short period of time (called the DIFS, as we explain below), and counts down idle slots, pausing when frames are sent. It sends its frame when the counter reaches 0. If the frame gets through, the destination immediately sends a short acknowledgement. Lack of an acknowledgement is inferred to indicate an error, whether a collision or otherwise. In this case, the sender doubles the backoff period and tries again, continuing with exponential backoff as in Ethernet until the frame has been successfully transmitted or the maximum number of retransmissions has been reached.
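In outline form, the sender's side of this protocol looks roughly as follows. This is a minimal sketch in Python, not the full 802.11 state machine; the helper functions wait_until_idle_for(), send_frame(), and wait_for_ack() are hypothetical stand-ins for the radio hardware, and the constants are the usual values for the OFDM physical layer.

    import random

    DIFS = 34                     # idle time before contending, in usec (802.11a)
    CW_MIN, CW_MAX = 15, 1023     # contention window bounds
    MAX_RETRIES = 7

    def csma_ca_send(frame):
        cw = CW_MIN
        for attempt in range(MAX_RETRIES + 1):
            slots = random.randint(0, cw)     # random backoff in slots
            while slots > 0:
                wait_until_idle_for(DIFS)     # blocks while others transmit,
                slots -= 1                    # then counts down one idle slot
            send_frame(frame)
            if wait_for_ack():                # ACK follows a good frame at once
                return True                   # success
            cw = min(2 * cw + 1, CW_MAX)      # no ACK: double the backoff range
        return False                          # give up after too many retries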
An example timeline is shown in Fig. 4-25. Station A is the first to send a frame. While A is sending, stations B and C become ready to send. They see that the channel is busy and wait for it to become idle. Shortly after A receives an acknowledgement, the channel goes idle. However, rather than sending a frame right away and colliding, B and C both perform a backoff. C picks a short backoff, and thus sends first. B pauses its countdown while it senses that C is using the channel, and resumes after C has received an acknowledgement. B soon completes its backoff and sends its frame.
Figure 4-25. Sending a frame with CSMA/CA.
Compared to Ethernet, there are two main differences. First, starting backoffs early helps to avoid collisions. This avoidance is worthwhile because collisions are expensive, as the entire frame is transmitted even if one occurs. Second, acknowledgements are used to infer collisions because collisions cannot be detected.
This mode of operation is called DCF (Distributed Coordination Function) because each station acts independently, without any kind of central control. The standard also includes an optional additional mode of operation called PCF (Point Coordination Function) in which the access point controls all activity in its cell, just like a cellular base station. However, PCF is not used in practice because there is normally no way to prevent stations in another nearby network from transmitting competing traffic.
The second problem is that the transmission ranges of different stations may be different. With a wire, the system is engineered so that all stations can hear each other. With the complexities of RF propagation, this situation does not hold for wireless stations. Consequently, situations such as the hidden terminal problem mentioned earlier and illustrated again in Fig. 4-26(a) can arise. Since not all stations are within radio range of each other, transmissions going on in one part of a cell may not be received elsewhere in the same cell. In this example, station C is transmitting to station B. If A senses the channel, it will not hear anything and will falsely conclude that it may now start transmitting to B. This decision leads to a collision.
Figure 4-26. (a) The hidden terminal problem. (b) The exposed terminal problem.
The inverse situation is the exposed terminal problem, illustrated in Fig. 4-26(b). Here, B wants to send to C, so it listens to the channel. When it hears a transmission, it falsely concludes that it may not send to C, even though A may in fact be transmitting to D (not shown). This decision wastes a transmission opportunity.
To reduce ambiguities about which station is sending, 802.11 defines channel sensing to consist of both physical sensing and virtual sensing. Physical sensing simply checks the medium to see if there is a valid signal. With virtual sensing, each station keeps a logical record of when the channel is in use by tracking the NAV (Network Allocation Vector). Each frame carries a NAV field that says how long the sequence of which this frame is part will take to complete. Stations that overhear this frame know that the channel will be busy for the period indicated by the NAV, regardless of whether they can sense a physical signal. For example, the
NAV of a data frame includes the time needed to send an acknowledgement. All stations that hear the data frame will defer during the acknowledgement period, whether or not they can hear the acknowledgement. Essentially, the NAV serves as a countdown timer, during which period the sender assumes that the channel is busy. In 802.11, the units of the NAV are microseconds. In dense deployments, the NAV set by one sender can be reset by other senders in the same transmission range, causing collisions and suboptimal performance. To mitigate this effect, 802.11ax introduces two NAVs: one NAV is modified by frames sent on the network with which the station is associated, and the second NAV is modified by frames that the station overhears but that originate in overlapping networks.
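In code, virtual sensing is little more than bookkeeping. The sketch below is a simplification in Python; the class and method names are ours, not from the standard.

    import time

    def now_usec():
        return time.monotonic_ns() // 1000     # microsecond clock

    class VirtualCarrierSense:
        """Tracks the NAV: the time until which the channel is presumed busy."""

        def __init__(self):
            self.nav_expiry_usec = 0           # 0 means no reservation known

        def on_frame_overheard(self, nav_usec):
            # The overheard frame announces how long the rest of its
            # sequence (e.g., the data frame plus its ACK) will take.
            self.nav_expiry_usec = max(self.nav_expiry_usec,
                                       now_usec() + nav_usec)

        def virtually_busy(self):
            return now_usec() < self.nav_expiry_usec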
An optional RTS/CTS mechanism uses the NAV to prevent terminals from sending frames at the same time as hidden terminals. It is shown in Fig. 4-27. In this example, A wants to send to B. C is a station within range of A (and possibly within range of B, but that does not matter). D is a station within range of B but not within range of A.
Figure 4-27. Virtual channel sensing using CSMA/CA.
The protocol starts when A decides it wants to send data to B. A begins by sending an RTS frame to B to request permission to send it a frame. If B receives this request, it answers with a CTS frame to indicate that the channel is clear to send. Upon receipt of the CTS, A sends its frame and starts an ACK timer. Upon correct receipt of the data frame, B responds with an ACK frame, completing the exchange. If A’s ACK timer expires before the ACK gets back to it, it is treated as a collision and the whole protocol is run again after a backoff.
Now let us consider this exchange from the viewpoints of C and D. C is within range of A, so it may receive the RTS frame. If it does, it realizes that someone is going to send data soon. From the information provided in the RTS request, it can estimate how long the sequence will take, including the final ACK. So, for the good of all, it desists from transmitting anything until the exchange is completed. It does so by updating its record of the NAV to indicate that the channel is busy, as shown in Fig. 4-27. D does not hear the RTS, but it does hear the CTS, so it also updates its NAV. Note that the NAV signals are not transmitted; they are just internal reminders to keep quiet for a certain period of time.
However, while RTS/CTS sounds good in theory, it is one of those designs that has proved to be of little value in practice. Several reasons why it is seldom used are known. It does not help for short frames (which are sent in place of the RTS) or for the AP (which everyone can hear, by definition). For other situations, it only slows down operation. RTS/CTS in 802.11 is a little different than in the MACA protocol we saw in Sec. 4.2 because everyone hearing the RTS or CTS remains quiet for the duration to allow the ACK to get through without collision. Because of this, it does not help with exposed terminals as MACA did, only with hidden terminals. Most often there are few hidden terminals, and CSMA/CA already helps them by slowing down stations that transmit unsuccessfully, whatever the cause, to make it more likely that transmissions will succeed.
CSMA/CA with physical and virtual sensing is the core of the 802.11 protocol. However, there are several other mechanisms that have been developed to go with it. Each of these mechanisms was driven by the needs of real operation, so we will look at them briefly.
The first need we will take a look at is reliability. In contrast to wired networks, wireless networks are noisy and unreliable, in no small part due to interference from other kinds of devices, such as microwave ovens, which also use the unlicensed ISM bands. The use of acknowledgements and retransmissions is of little help if the probability of getting a frame through is small in the first place.
The main strategy that is used to increase successful transmissions is to lower the transmission rate. Slower rates use more robust modulations that are more likely to be received correctly for a given signal-to-noise ratio. If too many frames are lost, a station can lower the rate. If frames are delivered with little loss, a station can occasionally test a higher rate to see if it should be used.
Another strategy to improve the chance of the frame getting through undamaged is to send shorter frames. If the probability of any bit being in error is p, the probability of an n-bit frame being received entirely correctly is (1 − p)^n. For example, for p = 10^−4, the probability of receiving a full Ethernet frame (12,144 bits) correctly is less than 30%. Most frames will be lost. But if the frames are only a third as long (4048 bits), two thirds of them will be received correctly. Now most frames will get through and fewer retransmissions will be needed.
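The arithmetic is easy to reproduce:

    # Probability that an n-bit frame arrives with no bit errors,
    # assuming independent bit errors with probability p.
    def frame_success_probability(n_bits, p=1e-4):
        return (1 - p) ** n_bits

    print(frame_success_probability(12144))  # full Ethernet frame: ~0.30
    print(frame_success_probability(4048))   # one third as long:   ~0.67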
Shorter frames can be implemented by reducing the maximum size of the message that is accepted from the network layer. Alternatively, 802.11 allows frames to be split into smaller pieces, called fragments, each with its own checksum. The fragment size is not fixed by the standard, but is a parameter that can be adjusted by the AP. The fragments are individually numbered and acknowledged using a stop-and-wait protocol (i.e., the sender may not transmit fragment k + 1 until it has received the acknowledgement for fragment k). Once the channel has been acquired, multiple fragments are sent as a burst. They go one after the other with an acknowledgement (and possibly retransmissions) in between, until either the whole frame has been successfully sent or the transmission time reaches the maximum allowed. The NAV mechanism described above keeps other stations quiet only until
the next acknowledgement, but another mechanism (see below) is used to allow a burst of fragments to be sent without other stations sending a frame in the middle.

The second need we will discuss is saving power. Battery life is always an issue with mobile wireless devices. The 802.11 standard pays attention to the issue of power management so that clients need not waste power when they have neither information to send nor to receive.
The basic mechanism for saving power builds on beacon frames. Beacons are periodic broadcasts by the AP (e.g., every 100 msec). The frames advertise the presence of the AP to clients and carry system parameters, such as the identifier of the AP, the time, how long until the next beacon, and security settings.
Clients can set a power-management bit in frames that they send to the AP to tell it that they are entering power-save mode. In this mode, the client can doze and the AP will buffer traffic intended for it. To check for incoming traffic, the client wakes up for every beacon and checks a traffic map that is sent as part of the beacon. This map tells the client if there is buffered traffic. If so, the client sends a poll message to the AP, which then sends the buffered traffic. The client can then go back to sleep until the next beacon is sent.
Another power-saving mechanism, called APSD (Automatic Power Save Delivery), was added to 802.11 in 2005. With this new mechanism, the AP buffers frames and sends them to a client just after the client sends frames to the AP. The client can then go to sleep until it has more traffic to send (and receive). This mechanism works well for applications such as VoIP that have frequent traffic in both directions. For example, a VoIP wireless phone might use it to send and receive frames every 20 msec, much more frequently than the beacon interval of 100 msec, while dozing in between.
The third and last need we will examine is quality of service. When the VoIP traffic in the preceding example competes with peer-to-peer traffic, the VoIP traffic will suffer. It will be delayed due to contention with the high-bandwidth peer-to-peer traffic, even though the VoIP bandwidth is low. These delays are likely to degrade the voice calls. To prevent this degradation, we would like to let the VoIP traffic go ahead of the peer-to-peer traffic, as it is of higher priority.
IEEE 802.11 has a clever mechanism to provide this kind of quality of service that was introduced as a set of extensions under the name 802.11e in 2005. It works by extending CSMA/CA with carefully defined intervals between frames. After a frame has been sent, a certain amount of idle time is required before any station may send a frame to check that the channel is no longer in use. The trick is to define different time intervals for different kinds of frames.
Five intervals are depicted in Fig. 4-28. The interval between regular data frames is called the DIFS (DCF InterFrame Spacing). Any station may attempt to acquire the channel to send a new frame after the medium has been idle for DIFS. The usual contention rules apply, and binary exponential backoff may be needed if a collision occurs. The shortest interval is SIFS (Short InterFrame Spacing). It is used to allow the parties in a single dialog the chance to go first.
Examples include letting the receiver send an ACK, other control frame sequences like RTS and CTS, or letting a sender transmit a burst of fragments. Sending the next fragment after waiting only SIFS is what prevents another station from jumping in with a frame in the middle of the exchange.
Figure 4-28. Interframe spacing in 802.11.
The two AIFS (Arbitration InterFrame Space) intervals show examples of two different priority levels. The short interval, AIFS1, is smaller than DIFS but longer than SIFS. It can be used by the AP to move voice or other high-priority traffic to the head of the line. The AP will wait for a shorter interval before it sends the voice traffic, and thus send it before regular traffic. The long interval, AIFS4, is larger than DIFS. It is used for background traffic that can be deferred until after regular traffic. The AP will wait for a longer interval before it sends this traffic, giving regular traffic the opportunity to transmit first. The complete quality of service mechanism defines four different priority levels that have different backoff parameters as well as different idle parameters.
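For concreteness, the four levels and their usual parameters can be tabulated. The values below are the common EDCA defaults for the 802.11a/g physical layer; they are offered as typical numbers, and the standard is the authoritative source.

    # Common EDCA defaults (802.11a/g PHY). A smaller AIFSN means a frame
    # waits fewer slots before contending; smaller contention windows mean
    # shorter random backoffs. AIFS1/AIFS4 in Fig. 4-28 correspond to the
    # high- and low-priority ends of this table.
    EDCA_DEFAULTS = {
        # access category      (AIFSN, CWmin, CWmax)
        "voice (AC_VO)":       (2,      3,     7),
        "video (AC_VI)":       (2,      7,    15),
        "best effort (AC_BE)": (3,     15,  1023),
        "background (AC_BK)":  (7,     15,  1023),
    }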
The last time interval, EIFS (Extended InterFrame Spacing), is used only by a station that has just received a bad or unknown frame, to report the problem. The idea is that since the receiver may have no idea of what is going on, it should wait a while to avoid interfering with an ongoing dialog between two stations.
A further part of the quality of service extensions is the notion of a TXOP, or transmission opportunity. The original CSMA/CA mechanism let stations send one frame at a time. This design was fine until the range of rates increased. With 802.11a/g, one station might be sending at 6 Mbps and another station be sending at 54 Mbps. They each get to send one frame, but the 6-Mbps station takes nine times as long (ignoring fixed overheads) as the 54-Mbps station to send its frame. This disparity has the unfortunate side effect of slowing down a fast sender who is competing with a slow sender to roughly the rate of the slow sender. For example, again ignoring fixed overheads, when sending alone the 6-Mbps and 54-Mbps senders will get their own rates, but when sending together they will both get 5.4 Mbps on average. It is a stiff penalty for the fast sender. This issue is known as the rate anomaly (Heusse et al., 2003).
With transmission opportunities, each station gets an equal amount of airtime, not an equal number of frames. Stations that send at a higher rate for their airtime will get higher throughput. In our example, when sending together the 6-Mbps and 54-Mbps senders will now get 3 Mbps and 27 Mbps, respectively.
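Both sets of numbers follow from a short calculation, again ignoring all fixed overheads:

    rates_bps = [6e6, 54e6]          # the two competing senders

    # Per-frame fairness: each round, every station sends one equal-sized
    # frame, so each station's throughput is 1 / sum(1/r) over all rates.
    per_station = 1 / sum(1 / r for r in rates_bps)
    print(per_station / 1e6)         # 5.4 Mbps for both senders

    # Airtime fairness (TXOP): each station gets an equal share of time,
    # so each achieves its own rate divided by the number of stations.
    for r in rates_bps:
        print(r / len(rates_bps) / 1e6)   # 3.0 and 27.0 Mbps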
4.4.4 The 802.11 Frame Structure
The 802.11 standard defines three different classes of frames in the air: data, control, and management. Each of these has a header with a variety of fields used within the MAC sublayer. In addition, there are some headers used by the physical layer, but these mostly deal with the modulation techniques used, so we will not discuss them here.
We will look at the format of the data frame as an example. It is shown in Fig. 4-29. First comes the Frame control field, which is made up of 11 subfields. The first of these is the Protocol version, set to 00. It is there to allow future versions of 802.11 to operate at the same time in the same cell. Then come the Type (data, control, or management) and Subtype fields (e.g., RTS or CTS). For a regular data frame (without quality of service), they are set to 10 and 0000 in binary. The To DS and From DS bits are set to indicate whether the frame is going to or coming from the network connected to the APs, which is called the distribution system. The More fragments bit means that more fragments will follow. The Retry bit marks a retransmission of a frame sent earlier. The Power management bit indicates that the sender is going into power-save mode. The More data bit indicates that the sender has additional frames for the receiver. The Protected Frame bit indicates that the frame body has been encrypted for security. We will discuss security briefly in the next section. Finally, the Order bit tells the receiver that the higher layer expects the sequence of frames to arrive strictly in order.
Figure 4-29. Format of the 802.11 data frame. The fields, in order, are Frame control (2 bytes), Duration (2), Address 1, the recipient (6), Address 2, the transmitter (6), Address 3 (6), Sequence (2), Data (0–2312), and Frame check sequence (4).
The second field of the data frame, the Duration field, tells how long the frame and its acknowledgement will occupy the channel, measured in microseconds. It is present in all types of frames, including control frames, and is what stations use to manage the NAV mechanism.
Next come addresses. Data frames sent to or from an AP have three addresses, all in standard IEEE 802 format. The first address is the receiver, and the second address is the transmitter. They are obviously needed, but what is the third address for? Remember that the AP is simply a relay point for frames as they travel between a client and another point on the network, perhaps a distant client or a portal to the Internet. The third address gives this distant endpoint.
The Sequence field numbers frames so that duplicates can be detected. Of the 16 bits available, 4 identify the fragment and 12 carry a number that is advanced with each new transmission. The Data field contains the payload, up to 2312 bytes. The first bytes of this payload are in a format known as LLC (Logical Link Control). This layer is the glue that identifies the higher-layer protocol (e.g., IP) to which the payloads should be passed. Last comes the Frame check sequence, which is the same 32-bit CRC we saw in Sec. 3.2.2 and elsewhere.
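In other words, the Sequence field is a 16-bit integer with the fragment number in the low-order 4 bits and the frame counter in the upper 12. A small sketch of the packing (the function names are ours, for illustration):

    def pack_sequence(frame_num, frag_num):
        # 12-bit frame counter, 4-bit fragment number
        assert 0 <= frame_num < 4096 and 0 <= frag_num < 16
        return (frame_num << 4) | frag_num

    def unpack_sequence(field):
        return field >> 4, field & 0xF    # (frame_num, frag_num)

    # A retransmission reuses the same field value, so the receiver can
    # recognize and discard the duplicate.
    assert unpack_sequence(pack_sequence(2047, 3)) == (2047, 3)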
Management frames have the same format as data frames, plus a format for the data portion that varies with the subtype (e.g., parameters in beacon frames). Control frames are short. Like all frames, they have the Frame control, Duration, and Frame check sequence fields. However, they may have only one address and no data portion. Most of the key information is conveyed with the Subtype field (e.g., ACK, RTS, and CTS).
4.4.5 Services
The 802.11 standard defines the services that the clients, the access points, and the network connecting them must provide in order to be a conformant wireless LAN. The 802.11 standard offers various services.
Association and Data Delivery
The association service is used by mobile stations to connect themselves to APs. Typically, it is used just after a station moves within radio range of the AP. Upon arrival, the station learns the identity and capabilities of the AP, either from beacon frames or by directly asking the AP. The capabilities include the data rates supported, security arrangements, power-saving capabilities, quality of service support, and more. The AP's beacon message also includes an SSID (Service Set IDentifier), which most people think of as the network name. The station sends a request to associate with the AP; the AP may accept or reject the request. While beacons are always broadcast, the SSID may or may not be broadcast. If the SSID is not broadcast, the station must somehow know (or discover) the name in order to associate with that AP.
Reassociation lets a station change its preferred AP. This is useful for mobile stations moving from one AP to another AP in the same extended 802.11 LAN, like a handover in the cellular network. If used correctly, no data will be lost as a consequence of the handover. (But 802.11, like Ethernet, is a best-effort service.)
No delivery guarantees are given. Either the station or the AP may also disassociate, ending the relationship. A station should use this service before shutting down or leaving the network. The AP may use it before going down for maintenance. The 802.11w standard added authentication to disassociation frames.
Once frames reach the AP, the distribution service determines how to route them. If the destination is local to the AP, the frames can be sent out directly over the air. Otherwise, they will have to be forwarded over the wired network. The integration service handles any translation that is needed for a frame to be sent outside the 802.11 LAN, or to arrive from outside the 802.11 LAN. The common case here is connecting the wireless LAN to the Internet.
Data transmission is what it is all about, so 802.11 naturally provides a data delivery service. This service lets stations transmit and receive data using the protocols we described earlier in this chapter. Since 802.11 is modeled on Ethernet and transmission over Ethernet is not guaranteed to be 100% reliable, transmission over 802.11 is not guaranteed to be reliable either. Higher layers must deal with detecting and correcting errors.
Security and Privacy
Stations must also authenticate before they can send frames via the AP, but authentication is handled in different ways depending on the choice of security scheme. If the 802.11 network is ‘‘open,’’ anyone is allowed to use it. Otherwise, credentials are needed to authenticate.
A common authentication approach, WPA2 (WiFi Protected Access 2), implements security as defined in the 802.11i standard. (WPA is an interim scheme that implements a subset of 802.11i. We will skip it and go straight to the complete scheme.) With WPA2, the AP can talk to an authentication server that has a username and password database to determine if the station is allowed to access the network. Alternatively, a pre-shared key, which is a fancy name for a network password, may be configured. Several frames are exchanged between the station and the AP with a challenge and response that lets the station prove it has the right credentials. This exchange happens after association.
Another authentication approach that is commonly used in enterprise networks is 802.1X, which implements an approach called port-based authentication. 802.1X relies on centralized authentication (e.g., authentication of devices to a centralized server), which creates the possibilities for more fine-grained access control, accounting, billing, and attribution. The station that is authenticating is sometimes called a supplicant; this device authenticates to the network through an authenticator, which talks to the authentication server. 802.1X relies on an authentication framework called EAP (Extensible Authentication Protocol). The EAP framework defines more than 50 different methods to perform authentication, but common methods include EAP-TLS, which performs authentication based on certificates; EAP-TTLS and PEAP, which allow the client to associate using a variety of methods, including password-based authentication; and EAP-SIM, whereby a mobile phone can authenticate using a SIM. 802.1X has many advantages over simple WPA, such as the ability to perform fine-grained access control based on user, but it requires a certificate infrastructure to administer.
The predecessor to WPA was called WEP (Wired Equivalent Privacy). For this scheme, authentication with a preshared key happens before association. WEP is now widely known to be insecure and is effectively no longer used. The first practical demonstration that WEP was broken came when Adam Stubblefield was a summer intern at AT&T (Stubblefield et al., 2002). He was able to code up and test an attack in one week, much of which was spent getting permission from management to buy the WiFi cards needed for experiments. Software to crack WEP passwords is now freely available.
With WEP broken and WPA deprecated, the next try was WPA2. It uses a privacy service that manages the details of encryption and decryption. The encryption algorithm for WPA2 is based on AES (Advanced Encryption Standard), a U.S. government standard approved in 2002. The keys that are used for encryption are determined during the authentication procedure. Unfortunately,
WPA2 was broken in 2017 (Vanhoef and Piessens, 2017). Good security is very hard, even with unbreakable crypto, because key management is the weakest link.
Prioritization and Power Control
To handle traffic with different priorities, there is a QoS traffic scheduling service. It uses the protocols we described to give voice and video traffic preferential treatment compared to best-effort and background traffic. A companion service also provides higher-layer timer synchronization. This lets stations coordinate their actions, which may be useful for media processing.
Finally, there are two services that help stations manage their use of the spectrum. The transmit power control service gives stations the information they need to meet regulatory limits on transmit power that vary from region to region. The dynamic frequency selection service gives stations the information they need to avoid transmitting on frequencies in the 5-GHz band that are being used for radar in the vicinity.
With these services, 802.11 provides a rich set of functionality for connecting nearby mobile clients to the Internet. It has been a huge success, and the standard has repeatedly been amended to add more functionality. For a perspective on where the standard has been and where it is heading, see Hiertz et al. (2010).
4.5 BLUETOOTH
In 1994, the Swedish company L. M. Ericsson became interested in connecting its mobile phones to other devices (e.g., laptops) without cables. Together with four other companies (IBM, Intel, Nokia, and Toshiba), it formed a SIG (Special Interest Group, i.e., consortium) in 1998 to develop a wireless standard for connecting computing and communication devices and accessories using short-range, low-power, inexpensive wireless radios. The project was named Bluetooth, after Harald Blaatand (Bluetooth) II (940–981), a Viking king who unified (i.e., conquered) Denmark and Norway, also without cables.
Bluetooth 1.0 was released in July 1999, and since then the SIG has never looked back. All manner of consumer electronic devices now use Bluetooth, from mobile phones and laptops to headsets, printers, keyboards, mice, game consoles, watches, music players, navigation units, and more. The Bluetooth protocols let these devices find and connect to each other, an act called pairing, and securely transfer data.
The protocols have evolved over the past decade, too. After the initial protocols stabilized, higher data rates were added to Bluetooth 2.0 in 2004. With the 3.0 release in 2009, Bluetooth can be used for device pairing in combination with 802.11 for high-throughput data transfer. The 4.0 release in June 2010 specified low-power operation. That will be handy for people who do not want to change the batteries regularly in all of those devices around the house.
We will cover the main aspects of Bluetooth 4.0 below, as it is still the most widely used version. Afterwards, we will discuss Bluetooth 5 and how it differs from Bluetooth 4.0 (mostly in minor ways).
4.5.1 Bluetooth Architecture
Let us start our study of the Bluetooth system with a quick overview of what it contains and what it is intended to do. The basic unit of a Bluetooth system is a piconet, which consists of a controller node and up to seven active worker nodes within a distance of 10 meters. Multiple piconets can exist in the same (large) room and can even be connected via a bridge node that takes part in multiple piconets, as in Fig. 4-30. An interconnected collection of piconets is called a scatternet.
In addition to the seven active worker nodes in a piconet, there can be up to 255 parked nodes in the net. These are devices that the controller has switched to a low-power state to reduce the drain on their batteries. In parked state, a device cannot do anything except respond to an activation or beacon signal from the controller. Two minor intermediate power states, hold and sniff, also exist.
The reason for the controller/worker design is that the designers intended to facilitate the implementation of complete Bluetooth chips for under $5. The consequence of this decision is that the workers are fairly dumb, basically just doing whatever the controller tells them to do. At its heart, a piconet is a centralized TDM system, with the controller controlling the clock and determining which device gets to communicate in which time slot. All communication is between the controller and a worker; direct worker-worker communication is not possible.
Figure 4-30. Two piconets can be connected to form a scatternet.
4.5.2 Bluetooth Applications
Most network protocols just provide channels between communicating entities and let application designers figure out what they want to use them for. For example, 802.11 does not specify whether users should use their laptop computers for reading email, surfing the Web, or something else. In contrast, the Bluetooth SIG specifies particular applications to be supported and provides different protocol stacks for each one. At the time of this writing, there are more than two dozen applications, which are called profiles. Unfortunately, this approach leads to a very large amount of complexity. We will omit the complexity here but will briefly look at the profiles to see more clearly what the Bluetooth SIG is trying to accomplish with them.
Six of the profiles are for different uses of audio and video. For example, the intercom profile allows two telephones to connect as walkie-talkies. The headset and hands-free profiles both provide voice communication between a headset and its base station, as might be used for hands-free telephony while driving a car. Other profiles are for streaming stereo-quality audio and video, say, from a portable music player to headphones, or from a digital camera to a TV.
The human interface device profile is for connecting keyboards and mice to computers. Other profiles let a mobile phone or other computer receive images from a camera or send images to a printer. Perhaps of more interest is a profile to use a mobile phone as a remote control for a (Bluetooth-enabled) TV.
Still other profiles enable networking. The personal area network profile lets Bluetooth devices form an ad hoc network or remotely access another network, such as an 802.11 LAN, via an access point. The dial-up networking profile was actually the original motivation for the whole project. It allows a (laptop) computer to connect to a mobile phone containing a built-in modem without using any cables, just radio signals.
Profiles for higher-layer information exchange have also been defined. The synchronization profile is intended for loading data into a mobile phone when it leaves home and collecting data from it when it returns.
We will skip the rest of the profiles, except to mention that some profiles serve as building blocks on which the above profiles are built. The generic access profile, on which all of the other profiles are built, provides a way to establish and maintain secure links (channels) between the controller and the workers. The other generic profiles define the basics of object exchange and audio and video transport. Utility profiles are used widely for functions such as emulating a serial line, which is especially useful for many legacy applications.
Was it really necessary to spell out all these applications in detail and provide different protocol stacks for each one? Probably not, but there were a number of different working groups that devised different parts of the standard, and each one just focused on its specific problem and generated its own profile. Think of this as Conway’s Law in action. (In the April 1968 issue of Datamation magazine, Melvin Conway observed that if you assign n people to write a compiler, you will get an n-pass compiler, or more generally, the software structure mirrors the structure of the group that produced it.) It would probably have been possible to get away with two protocol stacks instead of 25, one for file transfer and one for streaming real-time communication.
4.5.3 The Bluetooth Protocol Stack
The Bluetooth standard has many protocols grouped loosely into the layers shown in Fig. 4-31. The first observation to make is that the structure does not follow the OSI model, the TCP/IP model, the 802 model, or any other model.
Figure 4-31. The Bluetooth protocol architecture.
The bottom layer is the physical radio layer, which corresponds fairly well to the physical layer in the OSI and 802 models. It deals with radio transmission and modulation. Many of the concerns here have to do with the goal of making the system inexpensive so that it can become a mass-market item.
The link control (or baseband) layer is somewhat analogous to the MAC sublayer but also includes elements of the physical layer. It deals with how the controller controls time slots and how these slots are grouped into frames.
Next come two protocols that use the link control protocol. The link manager handles the establishment of logical channels between devices, including power management, pairing and encryption, and quality of service. It lies below the host-controller interface line. This interface is a convenience for implementation: typically, the protocols below the line will be implemented on a Bluetooth chip, and the protocols above the line will be implemented on the Bluetooth device that hosts the chip.
The link protocol above the line is L2CAP (Logical Link Control Adaptation Protocol). It frames variable-length messages and provides reliability if needed. Many protocols use L2CAP, such as the two utility protocols that are shown. The service discovery protocol is used to locate services within the network. The RFcomm (Radio Frequency communication) protocol emulates the standard serial port found on PCs for connecting the keyboard, mouse, and modem, among other devices.
The top layer is where the applications are located. The profiles are represented by vertical boxes because they each define a slice of the protocol stack for a particular purpose. Specific profiles, such as the headset profile, usually contain only those protocols needed by that application and no others. For example, profiles may include L2CAP if they have packets to send but skip L2CAP if they have only a steady flow of audio samples.
In the following sections, we will examine the Bluetooth radio layer and various link protocols, since these roughly correspond to the physical and MAC sublayers in the other protocol stacks we have studied.
4.5.4 The Bluetooth Radio Layer
The radio layer moves the bits from controller to worker, or vice versa. It is a low-power system with a range of 10 meters operating in the same 2.4-GHz ISM band as 802.11. The band is divided into 79 channels of 1 MHz each. To coexist with other networks using the ISM band, frequency hopping spread spectrum is used. There can be up to 1600 hops/sec over slots with a dwell time of 625 µsec. All the nodes in a piconet hop frequencies simultaneously, following the slot timing and pseudorandom hop sequence dictated by the controller.
Unfortunately, it turned out that early versions of Bluetooth and 802.11 interfered enough to ruin each other’s transmissions. Some companies responded by banning Bluetooth altogether, but eventually a technical solution was devised. The
solution is for Bluetooth to adapt its hop sequence to exclude channels on which there are other RF signals. This process reduces the harmful interference. It is called adaptive frequency hopping.
Three forms of modulation are used to send bits on a channel. The basic scheme is to use frequency shift keying to send a 1-bit symbol every microsecond, giving a gross data rate of 1 Mbps. Enhanced rates were introduced with the 2.0 version of Bluetooth. These rates use phase shift keying to send either 2 or 3 bits per symbol, for gross data rates of 2 or 3 Mbps. The enhanced rates are only used in the data portion of frames.
4.5.5 The Bluetooth Link Layers
The link control (or baseband) layer is the closest thing Bluetooth has to a MAC sublayer. It turns the raw bit stream into frames and defines some key formats. In the simplest form, the controller in each piconet defines a series of 625-µsec time slots, with the controller’s transmissions starting in the even slots and the workers’ transmissions starting in the odd ones. This scheme is traditional time division multiplexing, with the controller getting half the slots and the workers sharing the other half. Frames can be 1, 3, or 5 slots long. Each frame has an overhead of 126 bits for an access code and header, plus a settling time of 250–260 µsec per hop to allow the inexpensive radio circuits to become stable. The payload of the frame can be encrypted for confidentiality with a key that is chosen when the controller and worker connect. Hops only happen between frames, not during a frame. The result is that a 5-slot frame is much more efficient than a 1-slot frame because the overhead is constant but more data is sent.
The link manager protocol sets up logical channels, called links, to carry frames between the controller and a worker device that have discovered each other. A pairing procedure is followed to make sure that the two devices are allowed to communicate before the link is used. The old pairing method is that both devices must be configured with the same four-digit PIN (Personal Identification Number). The matching PIN is how each device would know that it was connecting to the right remote device. However, the fact that unimaginative users and devices default to PINs such as ‘‘0000’’ and ‘‘1234’’ meant that this method provided very little security in practice.
The new secure simple pairing method enables users to confirm that both devices are displaying the same passkey, or to observe the passkey on one device and enter it into the second device. This method is more secure because users do not have to choose or set a PIN. They merely confirm a longer, device-generated passkey. Of course, it cannot be used on some devices with limited input/output, such as a hands-free headset.
Once pairing is complete, the link manager protocol sets up the links. Two main kinds of links exist to carry the payload (user data). The first is the SCO (Synchronous Connection Oriented) link. It is used for real-time data, such as
telephone connections. This type of link is allocated a fixed slot in each direction. A worker may have up to three SCO links with its controller. Each SCO link can transmit one 64,000-bps PCM audio channel. Due to the time-critical nature of SCO links, frames sent over them are never retransmitted. Instead, forward error correction can be used to increase reliability.
The other kind is the ACL (Asynchronous ConnectionLess) link. This type of link is used for packet-switched data that is available irregularly. ACL traffic is delivered on a best-effort basis without guarantees. Frames can be lost and may have to be retransmitted. A worker may have only one ACL link to its controller.
The data sent over ACL links come from the L2CAP layer. This layer has four major functions. First, it accepts packets of up to 64 KB from the upper layers and breaks them into frames for transmission. At the far end, the frames are reassembled into packets. Second, it handles the multiplexing and demultiplexing of multiple packet sources. When a packet has been reassembled, the L2CAP layer determines which upper-layer protocol to hand it to, for example, RFcomm or service discovery. Third, L2CAP handles error control and retransmission. It detects errors and resends packets that were not acknowledged. Finally, L2CAP enforces quality of service requirements between multiple links.
4.5.6 The Bluetooth Frame Structure
Bluetooth defines several frame formats, the most important of which is shown in two forms in Fig. 4-32. It begins with an access code that usually identifies the controller so that workers within radio range of two controllers can tell which traffic is for them. Next comes a 54-bit header containing typical MAC sublayer fields. If the frame is sent at the basic rate, the data field comes next. It has up to 2744 bits for a five-slot transmission. For a single time slot, the format is the same except that the data field is 240 bits.
If the frame is sent at the enhanced rate, the data portion may have up to two or three times as many bits because each symbol carries 2 or 3 bits instead of 1 bit. These data are preceded by a guard field and a synchronization pattern that is used to switch to the faster data rate. That is, the access code and header are carried at the basic rate and only the data portion is carried at the faster rate. Enhanced-rate frames end with a short trailer.
Let us take a quick look at the common header. The Address field identifies which of the eight active devices the frame is intended for. The Type field identifies the frame type (ACL, SCO, poll, or null), the type of error correction used in the data field, and how many slots long the frame is. The Flow bit is asserted by a worker when its buffer is full and cannot receive any more data. This bit enables a primitive form of flow control. The Acknowledgement bit is used to piggyback an ACK onto a frame. The Sequence bit is used to number the frames to detect retransmissions. The protocol is stop-and-wait, so 1 bit is enough. Then comes the 8-bit header Checksum. The entire 18-bit header is repeated three times to form the 54-bit header shown in Fig. 4-32.
Figure 4-32. Typical Bluetooth data frame at (a) basic and (b) enhanced data rates.
On the receiving side, a simple circuit examines all three copies of each bit. If all three are the same, the bit is accepted. If not, the majority opinion wins. Thus, 54 bits of transmission capacity are used to send 10 bits of header. The reason is that to reliably send data in a noisy environment using cheap, low-powered (2.5 mW) devices with little computing capacity, a great deal of redundancy is needed.
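The receiver’s majority vote has a pleasantly compact bitwise form. A minimal sketch, treating the three received 18-bit header copies as integers:

    def majority_vote(c1, c2, c3):
        # A bit is 1 in the result iff it is 1 in at least two of the
        # three copies; any single-copy error is outvoted.
        return (c1 & c2) | (c1 & c3) | (c2 & c3)

    # Example: one corrupted copy does not change the outcome.
    header = 0b101011110000111101
    assert majority_vote(header, header ^ 0b1010, header) == header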
Various formats are used for the data field for ACL and SCO frames. The basic-rate SCO frames are a simple example to study: the data field is always 240 bits. Three variants are defined, permitting 80, 160, or 240 bits of actual payload, with the rest being used for error correction. In the most reliable version (80-bit payload), the contents are just repeated three times, the same as the header.
We can work out the capacity with this frame as follows. Since the worker may use only the odd slots, it gets 800 slots/sec, just as the controller does. With an 80-bit payload, the channel capacity from the worker is 64,000 bps, as is the channel capacity from the controller. This capacity is exactly enough for a single full-duplex PCM voice channel (which is why a hop rate of 1600 hops/sec was chosen). That is, despite a raw bandwidth of 1 Mbps, a single full-duplex uncompressed voice channel can completely saturate the piconet. The efficiency of 13% is the result of spending 41% of the capacity on settling time, 20% on headers, and 26% on repetition coding. This shortcoming highlights the value of the enhanced rates and frames of more than a single slot.
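These numbers are easy to verify; the percentages use the 625 bit times available in a 625-µsec slot at 1 Mbps:

    SLOTS_PER_SEC = 1600          # one 625-usec slot per hop
    PAYLOAD_BITS = 80             # useful bits in the most robust SCO frame

    worker_slots = SLOTS_PER_SEC // 2      # workers get the odd slots
    print(worker_slots * PAYLOAD_BITS)     # 64000 bps: one PCM voice channel

    # Where the 625 bit times of a slot go (at 1 Mbps, 1 bit per usec):
    slot_bits = 625
    print(PAYLOAD_BITS / slot_bits)            # ~0.13: useful payload
    print(126 / slot_bits)                     # ~0.20: access code + header
    print((240 - PAYLOAD_BITS) / slot_bits)    # ~0.26: repetition coding
    # the remaining ~0.41 of the slot is settling time between hops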
4.5.7 Bluetooth 5
In June 2016, the Bluetooth Special Interest Group introduced Bluetooth 5. In January 2019, it came out with Bluetooth 5.1. These were relatively minor upgrades to the Bluetooth 4 standard. Nevertheless, there are some differences
between Bluetooth 4 and both Bluetooth 5 standards. Here is a list of the key ones in Bluetooth 5.0:
1. Support for Internet of Things devices.
2. The speed has been increased from 1 Mbps to 2 Mbps.
3. Message size has gone up from 31 bytes to 255 bytes.
4. Range indoors has gone up from 10 m to 40 m.
5. Power requirements have been reduced slightly.
6. The range of the beacons has gone up slightly.
7. Security is slightly better.
In all, not a huge change, but given the need for backward compatibility, that was not to be expected. The Bluetooth 5.1 standard had a few minor updates in the areas of device tracking, caching, and a few other small items.
4.6 DOCSIS
The cable television network was originally designed for bringing television programs into homes. It is now also widely used as an alternative to the telephone system for bringing Internet into homes. Below we describe the ‘‘MAC layer’’ in the DOCSIS standard, which most cable providers implement.
4.6.1 Overview
The DOCSIS specification also has a MAC sublayer, in some sense, although this layer is somewhat less distinct from the rest of the link layer than in the other protocols we have studied in previous chapters. Nonetheless, the protocol has various aspects that fit into the standard goals of the MAC sublayer, including channel allocation (which occurs through a request-grant process), configuration of quality of service, and a unique forwarding model. This section addresses all three of these issues. More recently, full-duplex DOCSIS 3.1 (now called DOCSIS 4.0) has introduced new technologies for scheduling and interference cancellation.
DOCSIS has a standard MAC frame format, which includes a set of fields, including the length of the MAC frame, a checksum, and an extended header field, which supports a variety of functions, including link-layer security. Some headers support specific functions, including downstream timing, upstream power adjustment, bandwidth requests, and concatenation of frames. One specific type of frame is called a request frame, which is how the cable modem requests bandwidth, as described later in this section.
4.6.2 Ranging
A cable modem transmits what is called a ranging request, which allows the CMTS (headend) to determine the network delay to the cable modem, as well as to perform any necessary power adjustments. Ranging is effectively the periodic tuning of the various transmission parameters, specifically timing, frequency, and power. The CMTS polls the cable modem, which triggers the modem to submit a ranging request. Based on this message, the CMTS provides the modem a response to help the cable modem adjust signal transmission timing and power. By default, ranging occurs about once every 30 seconds, but it can be configured to occur more frequently; typical ranging intervals can be about 10 to 20 seconds.
4.6.3 Channel Bandwidth Allocation
A DOCSIS CMTS allocates bandwidth to each cable modem through a request-grant process. Each upstream or downstream traffic flow is typically assigned a service flow, and each service flow is allocated bandwidth by the CMTS.
Service Flows
Channel allocation in DOCSIS typically involves allocation of channels between one CMTS and one or more cable modems, which are located in the subscribers’ homes. The CMTS must serve all of the upstream and downstream channels, and it discards any frame with a source MAC address that is not one of the assigned cable modems in the group. Central to the DOCSIS MAC layer is the notion of a service flow, which provides a way to manage both upstream and downstream quality of service. Each cable modem has an associated service flow ID, which is negotiated during the registration of the cable modem; each cable modem can have multiple associated service flows. Different service flows can have different limitations that are associated with different types of traffic. For example, each service flow might have a maximum packet size; or, a service flow could be dedicated to a certain type of application, such as a constant bit rate application. All cable modems must support at least one upstream and one downstream service flow, called the primary service flow.
The Request-Grant Process and Low-Latency DOCSIS
When a cable modem has data to send, it sends a short request that tells the CMTS how much data it has to send and waits for a subsequent bandwidth allocation message, which describes the upstream transmission opportunities that a sender may have to transmit data.
Upstream transmission is divided into discrete intervals by an upstream bandwidth allocation mechanism called a minislot. A minislot is simply a time unit of
granularity for upstream transmission, typically in 6.25-µsec increments. Depending on the version of DOCSIS, a minislot may need to be a power-of-two multiple of this increment; in more modern versions of DOCSIS, this restriction does not apply. By adjusting the minislots that are granted to a particular service flow, the CMTS can effectively implement quality of service and prioritization for different traffic flows.
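As a back-of-the-envelope sketch, the modem’s bandwidth request can be thought of as a minislot count. The bits-per-minislot figure below is purely illustrative; in reality it is set by the channel’s modulation profile.

    import math

    MINISLOT_USEC = 6.25          # the DOCSIS time increment
    BITS_PER_MINISLOT = 64        # hypothetical; depends on modulation

    def minislots_needed(frame_bytes):
        # Round the bandwidth request up to whole minislots.
        return math.ceil(frame_bytes * 8 / BITS_PER_MINISLOT)

    print(minislots_needed(1500))     # a 1500-byte frame -> 188 minislots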
Generally speaking, quality of service has allowed the CMTS to allocate more bandwidth to different cable modems (thus allowing a subscriber who is provisioned for a higher tier of service to achieve a higher service level). More recently, however, revisions to DOCSIS have also allowed differentiated service for latency-sensitive applications. Specifically, a new revision to the DOCSIS protocol allows for low latency, through a new specification called LLD (Low-Latency DOCSIS). LLD recognizes that for many interactive applications, such as gaming and video conferencing, low latency is as important as high throughput. In some cases, in existing DOCSIS networks, the latency for some flows can be quite high, due to both the time to acquire the shared media and the time for queueing.
LLD addresses these issues by shortening the round-trip delay associated with the request-grant process, and by using two queues: one queue for latency-sensitive application traffic and a second queue for traffic that is not latency-sensitive. The shorter request-grant delay reduces the amount of time that the CMTS uses to perform scheduling calculations, to 1 millisecond from a previous time interval of 2–4 milliseconds. LLD also uses mechanisms to proactively schedule grants to a service flow to eliminate delay associated with the request-grant process entirely. LLD allows applications to determine whether they have packets that cannot be queued, through the marking of a differentiated service field in the DOCSIS frame. For more information on LLD, see White (2019).
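Conceptually, the dual-queue idea reduces to classification plus scheduling. The sketch below shows only the classification step; the codepoint values and attribute names are illustrative, not taken from the specification, and a real scheduler would weight the two queues rather than let the classic queue starve.

    from collections import deque

    LOW_LATENCY_CODEPOINTS = {45, 46}   # illustrative marking values only

    low_latency_q, classic_q = deque(), deque()

    def enqueue(frame):
        # LLD steers traffic by a differentiated-service marking carried
        # in the frame; latency-sensitive packets go to their own queue.
        if frame.dscp in LOW_LATENCY_CODEPOINTS:
            low_latency_q.append(frame)
        else:
            classic_q.append(frame)

    def dequeue():
        # Simplified: serve latency-sensitive traffic first when present.
        if low_latency_q:
            return low_latency_q.popleft()
        return classic_q.popleft() if classic_q else None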
4.7 DATA LINK LAYER SWITCHING
Many organizations have multiple LANs and wish to connect them. Would it not be convenient if we could just join the LANs together to make a larger LAN? In fact, we can do this when the connections are made with devices called bridges. The Ethernet switches we described in Sec. 4.3.4 are a modern name for bridges; they provide functionality that goes beyond classic Ethernet and Ethernet hubs to make it easy to join multiple LANs into a larger and faster network. We shall use the terms ‘‘bridge’’ and ‘‘switch’’ interchangeably.
Bridges operate in the data link layer, so they examine the data link layer addresses to forward frames. Since they are not supposed to examine the payload field of the frames they forward, they can handle IP packets as well as other kinds of packets, such as AppleTalk packets. In contrast, routers examine the addresses in packets and route based on them, so they only work with the protocols that they were designed to handle.
In this section, we will look at how bridges work and are used to join multiple physical LANs into a single logical LAN. We will also look at how to do the reverse and treat one physical LAN as multiple logical LANs, called virtual LANs. Both technologies provide useful flexibility for managing networks. For a comprehensive treatment of bridges, switches, and several related topics, see Perlman (2000) and Yu (2011).
4.7.1 Uses of Bridges
Before getting into the technology of bridges, let us take a look at some common situations in which bridges are used. We will mention three reasons why a single organization may end up with multiple LANs.
First, many university and corporate departments have their own LANs to connect their own personal computers, servers, and devices such as printers. Since the goals of the various departments differ, different departments may set up different LANs, without regard to what other departments are doing. Sooner or later, though, there is a need for interaction, so bridges are needed. In this example, multiple LANs come into existence due to the autonomy of their owners.
Second, the organization may be geographically spread over several buildings separated by considerable distances. It may be cheaper to have separate LANs in each building and connect them with bridges and a few long-distance fiber optic links than to run all the cables to a single central switch. Even if laying the cables is easy to do, there are limits on their lengths (e.g., 100 m for twisted-pair gigabit Ethernet). The network would not work for longer cables due to the excessive signal attenuation or round-trip delay. The only solution is to partition the LAN and install bridges to join the pieces to increase the total physical distance that can be covered.
Third, it may be necessary to split what is logically a single LAN into separate LANs (connected by bridges) to accommodate the load. At many large universities, for example, thousands of workstations are available for student and faculty computing. Companies may also have thousands of employees. The scale of this system precludes putting all the workstations on a single LAN—there are more computers than ports on any Ethernet hub and more stations than allowed on a single classic Ethernet.
Even if it were possible to wire all the workstations together, putting more stations on an Ethernet hub or classic Ethernet would not add capacity. All of the stations share the same, fixed amount of bandwidth. The more stations there are, the less average bandwidth per station.
However, two separate LANs have twice the capacity of a single LAN. Bridges let the LANs be joined together while keeping this capacity. The key is not to send traffic onto ports where it is not needed, so that each LAN can run at full speed. This behavior also increases reliability, since on a single LAN a defective node that keeps outputting a continuous stream of garbage can clog up the entire LAN. By
deciding what to forward and what not to forward, bridges act like fire doors in a building, preventing a single node that has gone berserk from bringing down the entire system.
To make these benefits easily available, ideally bridges should be completely transparent. It should be possible to go out and buy bridges, plug the LAN cables into the bridges, and have everything work perfectly, instantly. There should be no hardware changes required, no software changes required, no setting of address switches, no downloading of routing tables or parameters, nothing at all. Just plug in the cables and walk away. Furthermore, the operation of the existing LANs should not be affected by the bridges at all. As far as the stations are concerned, there should be no observable difference whether or not they are part of a bridged LAN. It should be as easy to move stations around the bridged LAN as it is to move them around a single LAN.
Surprisingly enough, it is actually possible to create bridges that are transparent. Two algorithms are used: a backward learning algorithm to stop traffic being sent where it is not needed; and a spanning tree algorithm to break loops that may be formed when switches are cabled together willy-nilly. Let us now take a look at these algorithms in turn to learn how this magic is accomplished.
4.7.2 Learning Bridges
The topology of two LANs bridged together is shown in Fig. 4-33 for two cases. On the left-hand side, two multidrop LANs, such as classic Ethernets, are joined by a special station—the bridge—that sits on both LANs. On the right-hand side, LANs with point-to-point cables, including one hub, are joined together. The bridges are the devices to which the stations and hub are attached. If the LAN technology is Ethernet, the bridges are better known as Ethernet switches.
Figure 4-33. (a) Bridge connecting two multidrop LANs. (b) Bridges (and a hub) connecting seven point-to-point stations.
Bridges were developed when classic Ethernets were in use, so they are often shown in topologies with multidrop cables, as in Fig. 4-33(a). However, all the
topologies that are encountered today are composed of point-to-point cables and switches. The bridges work the same way in both settings. All of the stations attached to the same port on a bridge belong to the same collision domain, and this is different from the collision domain for other ports. If there is more than one station, as in a classic Ethernet, a hub, or a half-duplex link, the CSMA/CD protocol is used to send frames.
There is a difference, however, in how the bridged LANs are built. To bridge multidrop LANs, a bridge is added as a new station on each of the multidrop LANs, as in Fig. 4-33(a). To bridge point-to-point LANs, the hubs are either connected to a bridge or, preferably, replaced with a bridge to increase performance. In Fig. 4-33(b), bridges have replaced all but one hub.
Different kinds of cables can also be attached to one bridge. For example, the cable connecting bridge B1 to bridge B2 in Fig. 4-33(b) might be a long-distance fiber optic link, while the cable connecting the bridges to stations might be a short-haul twisted-pair line. This arrangement is useful for bridging LANs in different buildings.
Now let us consider what happens inside the bridges. Each bridge operates in promiscuous mode, that is, it accepts every frame transmitted by the stations attached to each of its ports. The bridge must decide whether to forward or discard each frame, and, if the former, on which port to output the frame. This decision is made by using the destination address. As an example, consider the topology of Fig. 4-33(a). If station A sends a frame to station B, bridge B1 will receive the frame on port 1. This frame can be immediately discarded without further ado because it is already on the correct port. However, in the topology of Fig. 4-33(b) suppose that A sends a frame to D. Bridge B1 will receive the frame on port 1 and output it on port 4. Bridge B2 will then receive the frame on its port 4 and output it on its port 1.
A simple way to implement this scheme is to have a big (hash) table inside the bridge. The table can list each possible destination and which output port it belongs on. For example, in Fig. 4-33(b), the table at B1 would list D as belonging to port 4, since all B1 has to know is which port to put frames on to reach D. The fact that more forwarding will happen later, when the frame hits B2, is of no interest to B1.
When the bridges are first plugged in, all the hash tables are empty. None of the bridges know where any of the destinations are, so they use a flooding algorithm: every incoming frame for an unknown destination is output on all the ports to which the bridge is connected except the one it arrived on. As time goes on, the bridges learn where destinations are. Once a destination is known, frames destined for it are put only on the proper port; they are not flooded.
The algorithm used by the bridges is backward learning. As mentioned above, the bridges operate in promiscuous mode, so they see every frame sent on any of their ports. By looking at the source addresses, they can tell which machines are accessible on which ports. For example, if bridge B1 in Fig. 4-33(b)
sees a frame on port 3 coming from C, it knows that C must be reachable via port 3, so it makes an entry in its hash table. Any subsequent frame addressed to C coming in to B1 on any other port will be forwarded to port 3.
The topology can change as machines and bridges are powered up and down and moved around. To handle dynamic topologies, whenever a hash table entry is made, the arrival time of the frame is noted in the entry. Whenever a frame whose source is already in the table arrives, its entry is updated with the current time. Thus, the time associated with every entry tells the last time a frame from that machine was seen.
Periodically, a process in the bridge scans the hash table and purges all entries more than a few minutes old. In this way, if a computer is unplugged from its LAN, moved around the building, and plugged in again somewhere else, within a few minutes it will be back in normal operation, without any manual intervention. This algorithm also means that if a machine is quiet for a few minutes, any traffic sent to it will have to be flooded until it next sends a frame itself.
The routing procedure for an incoming frame depends on the port it arrives on (the source port) and the address to which it is destined (the destination address). The procedure is as follows.
1. If the port for the destination address is the same as the source port, discard the frame.
2. If the port for the destination address and the source port are different, forward the frame on to the destination port.
3. If the destination port is unknown, use flooding and send the frame on all ports except the source port.
You might wonder whether the first case can occur with point-to-point links. The answer is that it can occur if hubs are used to connect a group of computers to a bridge. An example is shown in Fig. 4-33(b) where stations E and F are connected to hub H1, which is in turn connected to bridge B2. If E sends a frame to F, the
hub will relay it to B2 as well as to F. That is what hubs do—they wire all ports together so that a frame input on one port is simply output on all other ports. The frame will arrive at B2 on port 2, which is already the right output port to reach the destination. Bridge B2 need only discard the frame.
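The whole procedure—backward learning, the three forwarding rules, and the aging of table entries—fits in a few lines of code. The following Python sketch is an illustration of the algorithm as just described, not an actual switch implementation; the class and method names are invented for the example:

import time

class LearningBridge:
    AGE_LIMIT = 300.0  # purge entries older than a few minutes

    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}  # destination address -> (port, last seen)

    def on_frame(self, src, dst, in_port):
        # Backward learning: src must be reachable via in_port.
        self.table[src] = (in_port, time.time())
        entry = self.table.get(dst)
        if entry is None:
            return self.ports - {in_port}   # rule 3: flood
        out_port, _ = entry
        if out_port == in_port:
            return set()                    # rule 1: discard
        return {out_port}                   # rule 2: forward

    def purge(self):
        # Run periodically so moved machines are relearned.
        now = time.time()
        self.table = {mac: (port, seen)
                      for mac, (port, seen) in self.table.items()
                      if now - seen < self.AGE_LIMIT}

Replaying the hub example on bridge B2 of Fig. 4-33(b): when E first sends to F, F is unknown and the frame is flooded on ports {1, 3, 4}; once F has sent a frame of its own on port 2, a later frame from E to F arrives on the very port on which F was learned, and on_frame returns the empty set—the frame is discarded.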
As each frame arrives, this algorithm must be applied, so it is usually implemented with special-purpose VLSI chips. The chips do the lookup and update the table entry, all in a few microseconds. Because bridges only look at the MAC addresses to decide how to forward frames, it is possible to start forwarding as soon as the destination header field has come in, before the rest of the frame has arrived (provided the output line is available, of course). This design reduces the latency of passing through the bridge, as well as the number of frames that the bridge must be able to buffer. It is referred to as cut-through switching or wormhole routing and is usually handled in hardware.
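To make the cut-through idea concrete, here is a minimal sketch (with invented names; real cut-through switching is done in hardware, as noted above) that chooses the output port as soon as the six destination-address bytes of an Ethernet frame have streamed in, then forwards the remaining bytes without buffering the whole frame:

DST_LEN = 6  # bytes in the Ethernet destination address

def cut_through(byte_stream, lookup):
    # 'lookup' maps a destination address to an output port,
    # playing the role of the hash table described earlier.
    header = bytearray()
    out_port = None
    for b in byte_stream:
        if out_port is None:
            header.append(b)
            if len(header) == DST_LEN:
                out_port = lookup(bytes(header))
                for h in header:       # release the buffered header
                    yield out_port, h
        else:
            yield out_port, b          # stream the rest directly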
We can look at the operation of a bridge in terms of protocol stacks to understand what it means to be a link layer device. Consider a frame sent from station A to station D in the configuration of Fig. 4-33(a), in which the LANs are Ethernet. The frame will pass through one bridge. The protocol stack view of processing is shown in Fig. 4-34.
Figure 4-34. Protocol processing at a bridge.
The packet comes from a higher layer and descends into the Ethernet MAC layer. It acquires an Ethernet header (and also a trailer, not shown in the figure). This unit is passed to the physical layer, goes out over the cable, and is picked up by the bridge.
In the bridge, the frame is passed up from the physical layer to the Ethernet MAC layer. This layer has extended processing compared to the Ethernet MAC layer at a station. It passes the frame to a relay, still within the MAC layer. The bridge relay function uses only the Ethernet MAC header to determine how to handle the frame. In this case, it passes the frame to the Ethernet MAC layer of the port used to reach station D, and the frame continues on its way.
In the general case, relays at a given layer can rewrite the headers for that layer. Virtual LANs will provide an example shortly. In no case should the bridge look inside the frame and learn that it is carrying an IP packet; that is irrelevant to the bridge processing and would violate protocol layering. Also note that a bridge with k ports will have k instances of MAC and physical layers. The value of k is 2 for our simple example.
4.7.3 Spanning-Tree Bridges
To increase reliability, redundant links can be used between bridges. In the example of Fig. 4-35, there are two links in parallel between a pair of bridges. This design ensures that if one link is cut, the network will not be partitioned into two sets of computers that cannot talk to each other.
Figure 4-35. Bridges with two parallel links.
However, this redundancy introduces some additional problems because it creates loops in the topology. An example of these problems can be seen by looking at how a frame sent by A to a previously unobserved destination is handled in Fig. 4-35. Each bridge follows the normal rule for handling unknown destinations, which is to flood the frame. Call the frame from A that reaches bridge B1 frame F0. The bridge sends copies of this frame out all of its other ports. We will only consider the bridge ports that connect B1 to B2 (though the frame will be sent out the other ports, too). Since there are two links from B1 to B2, two copies of the frame will reach B2. They are shown in Fig. 4-35 as F1 and F2. Shortly thereafter, bridge B2 receives these frames. However, it does not (and cannot) know that they are copies of the same frame, rather than two different frames sent one after the other. So bridge B2 takes F1 and sends copies of it out all the other ports, and it also takes F2 and sends copies of it out all the other ports. This produces frames F3 and F4 that are sent along the two links back to B1. Bridge B1 then sees two new frames with unknown destinations and copies them again. This cycle goes on forever.
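A few lines of simulation make it clear why the copies never die out. This sketch (illustrative only) counts the copies crossing between B1 and B2 each round, under the assumption that every copy arriving on one parallel link is flooded back out the others:

PARALLEL_LINKS = 2
in_flight = PARALLEL_LINKS  # B1 floods F0 out both parallel links
for round_no in range(1, 5):
    # Each arriving copy is flooded out the other (links - 1) links.
    in_flight *= PARALLEL_LINKS - 1
    print(f"round {round_no}: {in_flight} copies between the bridges")

With two parallel links, the two copies chase each other around the loop forever; with three or more, the count grows without bound each round.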
The solution to this difficulty is for the bridges to communicate with each other and overlay the actual topology with a spanning tree that reaches every bridge. In effect, some potential connections between bridges are ignored in the interest of constructing a fictitious loop-free topology that is a subset of the actual topology.
For example, in Fig. 4-36 we see five bridges that are interconnected and also have stations connected to them. Each station connects to only one bridge. There are some redundant connections between the bridges so that frames will be forwarded in loops if all of the links are used. This topology can be thought of as a graph in which the bridges are the nodes and the point-to-point links are the edges. The graph can be reduced to a spanning tree, which has no cycles by definition, by dropping the links shown as dashed lines in Fig. 4-36. Using this spanning tree, there is exactly one path from every station to every other station. Once the bridges have agreed on the spanning tree, all forwarding between stations follows
the spanning tree. Since there is a unique path from each source to each destination, loops are impossible.
Figure 4-36. A spanning tree connecting five bridges. The dashed lines are links that are not part of the spanning tree.
To build the spanning tree, the bridges run a distributed algorithm. Each bridge periodically broadcasts a configuration message out all of its ports to its neighbors and processes the messages it receives from other bridges, as described next. These messages are not forwarded, since their purpose is to build the tree, which can then be used for forwarding.
The bridges must first choose one bridge to be the root of the spanning tree. To make this choice, they each include an identifier based on their MAC address in the configuration message, as well as the identifier of the bridge they believe to be the root. MAC addresses are installed by the manufacturer and guaranteed to be unique worldwide, which makes these identifiers convenient and unique. The bridges choose the bridge with the lowest identifier to be the root. After enough messages have been exchanged to spread the news, all bridges will agree on which bridge is the root. In Fig. 4-36, bridge B1 has the lowest identifier and becomes the root.
Next, a tree of shortest paths from the root to every bridge is constructed. In Fig. 4-36, bridges B2 and B3 can each be reached from bridge B1 directly, in one hop that is a shortest path. Bridge B4 can be reached in two hops, via either B2 or B3. To break this tie, the path via the bridge with the lowest identifier is chosen, so B4 is reached via B2. Bridge B5 can be reached in two hops via B3.
To find these shortest paths, bridges include the distance from the root in their configuration messages. Each bridge remembers the shortest path it finds to the root. The bridges then turn off ports that are not part of the shortest path.
Although the tree spans all the bridges, not all the links (or even bridges) are necessarily present in the tree. This happens because turning off the ports prunes some links from the network to prevent loops. Even after the spanning tree has been established, the algorithm continues to run during normal operation to automatically detect topology changes and update the tree.
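The following Python sketch computes the tree that the distributed algorithm converges to. It is a centralized stand-in, not the message-passing protocol itself, and the edge list below is an assumption consistent with the description of Fig. 4-36 (the figure does not list every link explicitly):

# Bridge identifiers and the links between them (assumed topology).
links = {1: [2, 3], 2: [1, 4], 3: [1, 4, 5], 4: [2, 3, 5], 5: [3, 4]}

root = min(links)        # the bridge with the lowest identifier wins
parent = {root: None}    # tree links, child -> parent toward the root
frontier = [root]
while frontier:          # breadth-first search: shortest paths in hops
    next_frontier = []
    for bridge in sorted(frontier):        # lower IDs claim ties first
        for neighbor in sorted(links[bridge]):
            if neighbor not in parent:
                parent[neighbor] = bridge
                next_frontier.append(neighbor)
    frontier = next_frontier

print(parent)  # {1: None, 2: 1, 3: 1, 4: 2, 5: 3}

The output matches the text: B4 is reached via B2 (the tie with B3 is broken in favor of the lower identifier), and B5 via B3. Ports not on these parent links would be turned off.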
The algorithm for automatically constructing the spanning tree was invented by Radia Perlman. Her job was to solve the problem of joining LANs without loops. She was given a week to do it, but she came up with the idea for the spanning tree algorithm in a day. Fortunately, this left her enough time to write it as a poem (Perlman, 1985):
I think that I shall never see
A graph more lovely than a tree.
A tree whose crucial property
Is loop-free connectivity.
A tree which must be sure to span.
So packets can reach every LAN.
First the Root must be selected
By ID it is elected.
Least-cost paths from Root are traced
In the tree these paths are placed.
A mesh is made by folks like me
Then bridges find a spanning tree.
The spanning tree algorithm was then standardized as IEEE 802.1D and used for many years. In 2001, it was revised to more rapidly find a new spanning tree after a topology change. For a detailed treatment of bridges, see Perlman (2000).
4.7.4 Repeaters, Hubs, Bridges, Switches, Routers, and Gateways
So far in this book, we have looked at a variety of ways to get frames and packets from one computer to another. We have mentioned repeaters, hubs, bridges, switches, routers, and gateways. All of these devices are in common use, but they all differ in subtle and not-so-subtle ways. Since there are so many of them, it is probably worth taking a look at them together to see what the similarities and differences are.
The key to understanding these devices is to realize that they operate in different layers, as illustrated in Fig. 4-37(a). The layer matters because different devices use different pieces of information to decide how to switch. In a typical scenario, the user generates some data to be sent to a remote machine. Those data are passed to the transport layer, which then adds a header (for example, a TCP header) and passes the resulting unit down to the network layer. The network layer adds its own header to form a network layer packet (e.g., an IP packet). In Fig. 4-37(b), we see the IP packet shaded in gray. Then, the packet goes to the data link layer, which adds its own header and checksum (CRC) and gives the resulting frame to the physical layer for transmission, for example, over a LAN.
Now let us look at the switching devices and see how they relate to the packets and frames. At the bottom, in the physical layer, we find the repeaters. These are analog devices that work with signals on the cables to which they are connected.
Figure 4-37. (a) Which device is in which layer. (b) Frames, packets, and headers.
A signal appearing on one cable is cleaned up, amplified, and put out on another cable. Repeaters do not understand frames, packets, or headers. They understand the symbols that encode bits as volts. Classic Ethernet, for example, was designed to allow four repeaters that would boost the signal to extend the maximum cable length from 500 meters to 2500 meters.
Next we come to the hubs. A hub has a number of input lines that it joins electrically. Frames arriving on any of the lines are sent out on all the others. If two frames arrive at the same time, they will collide, just as on a coaxial cable. All the lines coming into a hub must operate at the same speed. Hubs differ from repeaters in that they do not (usually) amplify the incoming signals and are designed for multiple input lines, but the differences are slight. Like repeaters, hubs are physical layer devices that do not examine the link layer addresses or use them in any way.
Now let us move up to the data link layer, where we find bridges and switches. We just studied bridges at some length. A bridge connects two or more LANs. Like a hub, a modern bridge has multiple ports, usually enough for 4 to 48 input lines of a certain type. Unlike in a hub, each port is isolated to be its own collision
domain; if the port has a full-duplex point-to-point line, the CSMA/CD algorithm is not needed. When a frame arrives, the bridge extracts the destination address from the frame header and looks it up in a table to see where to send the frame. For Ethernet, this address is the 48-bit destination address shown in Fig. 4-14. The bridge only outputs the frame on the port where it is needed and can forward multiple frames at the same time.
Bridges offer much better performance than hubs, and the isolation between bridge ports also means that the input lines may run at different speeds, possibly even with different network types. A common example is a bridge with ports that connect to 10-, 100-, and 1000-Mbps Ethernet. Buffering within the bridge is needed to accept a frame on one port and transmit the frame out on a different port. If frames come in faster than they can be retransmitted, the bridge may run out of
buffer space and have to start discarding frames. For example, if a gigabit Ethernet is pouring bits into a 10-Mbps Ethernet at top speed, the bridge will have to buffer them, hoping not to run out of memory. This problem still exists even if all the ports run at the same speed because more than one port may be sending frames to a given destination port.
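The effect of such a speed mismatch is easy to see in a toy model. In this sketch (rates and buffer size invented for illustration), ten frames arrive per tick but only one departs, so a bounded buffer fills and the excess must be discarded:

from collections import deque

BUFFER_FRAMES = 100
ARRIVALS_PER_TICK = 10    # e.g., frames from a fast port
DEPARTURES_PER_TICK = 1   # e.g., frames onto a slow port

queue, dropped = deque(), 0
for tick in range(50):
    for _ in range(ARRIVALS_PER_TICK):
        if len(queue) < BUFFER_FRAMES:
            queue.append(tick)
        else:
            dropped += 1               # no room left: discard the frame
    for _ in range(DEPARTURES_PER_TICK):
        if queue:
            queue.popleft()

print(f"buffered={len(queue)} dropped={dropped}")  # buffered=99 dropped=351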
Bridges were originally intended to be able to join different kinds of LANs, for example, an Ethernet and a Token Ring LAN. However, this never worked well because of differences between the LANs. Different frame formats require copying and reformatting, which takes CPU time, requires a new checksum calculation, and introduces the possibility of undetected errors due to bad bits in the bridge’s memory. Different maximum frame lengths are also a serious problem with no good solution. Basically, frames that are too large to be forwarded must be discarded. So much for transparency.
Two other areas where LANs can differ are security and quality of service. Some LANs have link-layer encryption, for example 802.11, and some do not, for example Ethernet. Some LANs have quality of service features such as priorities, for example 802.11, and some do not, for example Ethernet. Consequently, when a frame must travel between these LANs, the security or quality of service expected by the sender may not be able to be provided. For all of these reasons, modern bridges usually work for one network type, and routers, which we will come to soon, are used instead to join networks of different types.
Switches are modern bridges by another name. The differences have more to do with marketing than with technical issues, but there are a few points worth knowing. Bridges were developed when classic Ethernet was in use, so they tend to join relatively few LANs and thus have relatively few ports. The term ‘‘switch’’ is more popular nowadays. Also, modern installations all use point-to-point links, such as twisted-pair cables, so individual computers plug directly into a switch and thus the switch will tend to have many ports. Finally, ‘‘switch’’ is also used as a general term. With a bridge, the functionality is clear. On the other hand, a switch may refer to an Ethernet switch or a completely different kind of device that makes forwarding decisions, such as a telephone switch.
So far, we have seen repeaters and hubs, which are actually quite similar, as well as bridges and switches, which are even more similar to each other. Now we move up to routers, which are different from all of the above. When a packet comes into a router, the frame header and trailer are stripped off and the packet located in the frame’s payload field (shaded in Fig. 4-37) is passed to the routing software. This software uses the packet header to choose an output line. For an IP packet, the packet header will contain a 32-bit (IPv4) or 128-bit (IPv6) address, but not a 48-bit IEEE 802 address. The routing software does not see the frame addresses and does not even know whether the packet came in on a LAN or a point-to-point line. We will study routers and routing in Chap. 5.
Up another layer, we find transport gateways. These connect two computers that use different connection-oriented transport protocols. For example, suppose a
computer using the connection-oriented TCP/IP protocol needs to talk to a computer using a different connection-oriented transport protocol called SCTP. The transport gateway can copy the packets from one connection to the other, reformatting them as need be.
Finally, application gateways understand the format and contents of the data and can translate messages from one format to another. An email gateway could translate Internet messages into SMS messages for mobile phones, for example. Like ‘‘switch,’’ ‘‘gateway’’ is somewhat of a general term. It refers to a forwarding process that runs at a high layer.
4.7.5 Virtual LANs
In the early days of local area networking, thick yellow cables snaked through the cable ducts of many office buildings. Every computer they passed was plugged in. No thought was given to which computer belonged on which LAN. All the people in adjacent offices were put on the same LAN, whether they belonged to- gether or not. Geography trumped corporate organization charts.
With the advent of twisted pair and hubs in the 1990s, all that changed. Buildings were rewired (at considerable expense) to rip out all the yellow garden hoses and install twisted pairs from every office to central wiring closets at the end of each corridor or in a central machine room, as illustrated in Fig. 4-38. If the Vice President in Charge of Wiring was a visionary, Category 5 twisted pairs were installed; if he was a bean counter, the existing (Category 3) telephone wiring was used (only to be replaced a few years later, when fast Ethernet emerged).
Today, the cables have changed and hubs have become switches, but the wiring pattern is still the same. This pattern makes it possible to configure LANs logically rather than physically. For example, if a company wants k LANs, it could buy k switches. By carefully choosing which connectors to plug into which switches, the occupants of a LAN can be chosen in a way that makes organizational sense, without too much regard to geography.
Does it matter who is on which LAN? After all, in nearly all organizations, all the LANs are interconnected. In short, yes, it often matters. Network administrators like to group users on LANs to reflect the organizational structure rather than the physical layout of the building, for a variety of reasons. One issue is security. One LAN might host Web servers and other computers intended for public use. Another LAN might host computers containing the records of the Human Resources department that are not to be passed outside of the department. In such a situation, putting all the computers on a single LAN and not letting any of the servers be accessed from off the LAN makes sense. Management tends to frown when hearing that such an arrangement is impossible.
A second issue is load. Some LANs are more heavily used than others and it may be desirable to separate them. For example, if the folks in research are running all kinds of nifty experiments that sometimes get out of hand and completely
Figure 4-38. A building with centralized wiring using hubs and a switch.
saturate their LAN, the folks in management may not be enthusiastic about donating some of the capacity they were planning to use for videoconferencing to help out. Then again, this might impress on management the need to install a faster network.
A third issue is broadcast traffic. Bridges broadcast traffic when the location of the destination is unknown, and upper-layer protocols use broadcasting as well. For example, when a user wants to send a packet to an IP address x, how does it know which MAC address to put in the frame? We will study this question in Chap. 5, but briefly summarized, the answer is that it broadcasts a frame containing the question ‘‘who owns IP address x?’’ Then it waits for an answer. As the number of computers in a LAN grows, so does the number of broadcasts. Each broadcast consumes more of the LAN capacity than a regular frame because it is delivered to every computer on the LAN. By keeping LANs no larger than they need to be, the impact of broadcast traffic is reduced.
Related to broadcasts is the problem that once in a while a network interface will break down or be misconfigured and begin generating an endless stream of broadcast frames. If the network is really unlucky, some of these frames will elicit responses that lead to ever more traffic. The result of this broadcast storm is that (1) the entire LAN capacity is occupied by these frames, and (2) all the machines on all the interconnected LANs are crippled just processing and discarding all the frames being broadcast.
At first it might appear that broadcast storms could be limited in scope and reach by separating the LANs with bridges or switches, but if the goal is to achieve