Medium Access Sublayer

``Data Link Layer for LANs''

Can divide networks into point-to-point and broadcast. Look at broadcast networks and their protocols.

When many stations compete for a channel (e.g., broadcast channel such as an Ethernet), an algorithm must arbitrate access to the shared channel.

Need a way of ensuring that when two or more stations wish to transmit, they all wait until doing so won't interfere with other transmitters. Broadcast links include LANs, satellites (WAN), etc.

LAN characteristics:

MANs cover a city-wide area with LAN technology. For example, cable TV.

Can have higher speed, lower error rate lines with LANs than WANs.

Performance Analysis

We want to analyze performance of the communication channel to determine how well different technologies work.

There are three general approaches to analyzing systems:

  1. Actually build the system and measure its performance. Drawback? Unfortunately, this can be expensive in time and effort.
  2. Write a computer program that simulates the system. That is, processes can represent customers or servers, and processes move from queue to queue. For example, a ``customer'' process could generate arriving packets by depositing a message in a queue. The server process would then ``consume'' these messages, removing them from the queue and then simulating the steps required to actually send the packet.
  3. Represent service times, arrival rates, etc. as a series of equations and solve the system mathematically. Drawback? Very hard to represent a system using mathematical equations without making simplifying assumptions that may invalidate the model (and thus any conclusions).
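The simulation approach (option 2) can be sketched as a minimal single-server FIFO queue model; the function name and parameters below are illustrative, not from any particular simulation package:

```python
import random

def simulate_queue(arrival_rate, service_rate, num_customers, seed=1):
    """Minimal single-server FIFO queue simulation.

    Customers arrive with exponential interarrival times and receive
    exponentially distributed service; returns the mean time each
    customer spends in the system (waiting + service).
    """
    rng = random.Random(seed)
    clock = 0.0           # time of the current arrival
    server_free_at = 0.0  # time the server finishes its current customer
    total_delay = 0.0
    for _ in range(num_customers):
        clock += rng.expovariate(arrival_rate)   # next arrival
        start = max(clock, server_free_at)       # wait if server is busy
        service = rng.expovariate(service_rate)
        server_free_at = start + service
        total_delay += server_free_at - clock    # time in system
    return total_delay / num_customers

# Lightly loaded system: mean delay stays close to the bare service time.
print(simulate_queue(arrival_rate=1.0, service_rate=10.0, num_customers=50000))
```

With these rates the analytic M/M/1 prediction is 1/(10 - 1) ≈ 0.111, and the simulated mean should land near it.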

In order to be able to solve a system mathematically, one must typically make simplifying assumptions. Some common simplifications:

  1. Queues can be infinitely long. Of course, no queue is infinitely long in practice, but if a fixed length queue never overflows, we can pretend that it has infinite length without compromising our model.
  2. Choose a service time function that is ``well behaved''. Typically, the service time is chosen to vary around a mean, with small deviations from the mean. Exponential distributions are commonly used, because their mathematical representations make analytic problems solvable. Note: in real networks, traffic patterns rarely behave at all like these functions.
  3. Likewise, assume customers arrive at a server's queue according to some well-behaved probability function.

Model Validity

The above simplifications may be made for computer simulations as well. Thus, when using simulations or mathematics to analyze a given system, one must always ask whether the assumptions are relevant to the system being considered. For instance:

Queueing Theory

Although assumptions are required, queueing theory is a powerful mathematical tool for making quantitative analyses of computer networks. We need to be aware of it, as we will use it in analysis. See the appendix of Tanenbaum.

High level overview: Model the system as a set of customers (frames needing to be delivered) and servers (part of network that processes frames). See picture.

Queueing Systems

Characterized by five components

  1. interarrival-time probability density function
  2. service-time probability density function
  3. number of servers
  4. queueing discipline
  5. amount of buffer space in the queues

Assume an infinite number of customers (i.e. what is happening in the system does not affect new arrivals)

Concentrate on infinite-buffer, single-server systems using first-come, first-served. Use the notation A/B/m where:

Density functions are chosen from:

Usually assume an M/M/1 model.
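For the M/M/1 model (Poisson arrivals at rate lambda, exponential service at rate mu, one server, infinite buffer), the standard closed-form results can be computed directly; a minimal sketch using the textbook formulas:

```python
def mm1_metrics(lam, mu):
    """Standard M/M/1 results: utilization, mean number of customers
    in the system, and mean time in the system (queueing + service)."""
    assert lam < mu, "queue is unstable unless arrival rate < service rate"
    rho = lam / mu          # utilization
    n = rho / (1 - rho)     # mean number in system
    t = 1 / (mu - lam)      # mean time in system
    return rho, n, t

# A server that handles 10 frames/s offered 8 frames/s:
rho, n, t = mm1_metrics(8.0, 10.0)
print(rho, n, t)   # utilization 0.8, ~4 frames in system, 0.5 s delay
```

Note how delay explodes as utilization approaches 1: at 9.9 frames/s offered, the mean time in the system is already 10 s.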

Handwritten Notes

Dynamic Channel Allocation Assumptions

Before looking at specific technologies, it is useful to look at general characteristics of the various technologies.

In the station model, stations generate one frame at a time and block until the frame has been successfully transmitted. Thus, a station cannot have multiple frames queued for transmission.

With the single channel assumption, all stations share one communications medium (channel), and all stations are equivalent. That is, if two or more stations wish to transmit, no station has an inherent transmission priority over the other.

When two or more stations transmit frames simultaneously, a collision takes place. The collision assumption model assumes that all stations detect a collision and the collided frames become garbled and must be retransmitted.

In the continuous time model, a station can transmit frames at any time.

In the slotted time model, a station may begin transmission of a frame only at the start of a time slot. A clock ticks at regular pulses, and a station may begin transmission only at the start of a pulse. Thus, if a station wishes to transmit, it must wait until the start of a time slot before actually attempting transmission.

In carrier sense systems, a station can sense if a channel is being used before it attempts transmission. That is, it can listen to the channel to determine if another station is currently transmitting.

With no carrier sense, stations cannot sense the channel before transmission starts.

ALOHA Protocols

Back in 1970, the University of Hawaii built a network out of radios that broadcast signals. Basic idea:

  1. Anyone may transmit whenever they want. (Continuous time model.)
  2. Each radio detects collisions by listening to its own signal. A collision is detected when a sender doesn't receive the signal that it just sent.
  3. After a collision, wait a random amount of time and transmit the same frame again. This technique is known as backoff.

With queueing theory, we can now examine ALOHA's efficiency:

  1. Assume that all frames are the same size. That is, the service time is deterministic (when no collision takes place).
  2. When a station sends, it blocks until transmission is successful. A station cannot try to send more than one packet at a time. Note that this means that if a packet's transmission is delayed due to collisions, the station will also delay generating additional packets to send.
  3. Assume new frames are generated according to a Poisson distribution, with a mean of S frames per frame time.

    Note:

    1. For S > 1, the load exceeds the channel's capacity and many collisions occur.
    2. For S close to 0, few collisions occur.
  4. Assume that the probability of n transmission attempts per frame time is Poisson distributed with a mean of G attempts per frame time. G is called the offered load and includes both new transmissions and retransmissions.

    Note: G >= S; at low loads there are few retransmissions, so G is approximately S. Under all loads, Throughput S = G * P0, where P0 is the probability that a frame escapes collision during its vulnerable period of 2t (t is one frame time).

ALOHA Analysis

The probability that n frames are transmitted (new frames plus retransmissions) during a given frame time is given by the Poisson distribution:

P(n) = (G^n e^(-G)) / n!

When will a transmission be successful? Answer: if there are no collisions. There are two cases: another station starts transmitting within one frame time after we do, or we collide with a frame whose transmission began within one frame time before ours. Thus, the vulnerable period is two frame times long, and the probability of no other traffic being generated during the entire vulnerable period is

P0 = e^(-2G), and Throughput S = G e^(-2G)

When is throughput maximized?

Answer: when G = 0.5; the throughput at this value is only 18%!

The above method is called pure ALOHA. An alternative method called slotted ALOHA behaves as follows:

  1. Only allow transmission to start at the beginning of a slot time. Need a means of synchronization. Each slot has duration equal to the maximum frame size. When a station has a new frame to send, it must wait until the start of the next slot time before attempting transmission.
  2. Because transmissions always start and stop on slot boundaries, the collision interval drops from two to one frame time. Result:

    Throughput S = G e^(-G), which doubles the max throughput to about 37% at G=1
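The two throughput curves can be compared numerically; a small sketch:

```python
from math import exp

def pure_aloha(G):
    """Throughput S = G e^(-2G): the vulnerable period is 2 frame times."""
    return G * exp(-2 * G)

def slotted_aloha(G):
    """Throughput S = G e^(-G): the vulnerable period is 1 frame time."""
    return G * exp(-G)

print(pure_aloha(0.5))      # peak: 1/(2e), about 0.184
print(slotted_aloha(1.0))   # peak: 1/e,  about 0.368
```

Sweeping G confirms the peaks: pure ALOHA is maximized at G = 0.5 and slotted ALOHA at G = 1, with slotted ALOHA doubling the maximum throughput.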

Intuitively, the throughput is poor because of the following:

CSMA Protocols

Under what conditions do collisions take place?

  1. If a station begins transmission when the channel is already busy. Obviously, if one transmits when the channel is already in use, a collision will happen. Note: Pure Aloha has this problem, which is one reason its performance is poor. We can avoid these types of collisions by sensing the channel before we attempt transmission, and send only if it is idle.
  2. If two stations begin transmission at (roughly) the same time, each thinking that the channel is idle. Because of propagation delay times, one station may think the channel is idle, when in fact another station has already begun transmission, but its signals have not reached the first node.

Collision avoidance protocols all attempt to reduce one or more of the above factors.

Carrier Sense Protocols

The problem with ALOHA is that frames are sent blindly -- there are bound to be collisions.

Stations listen for a transmission before trying to send data--carrier sense. Only send if channel is idle.

Protocol Efficiency

The following table gives maximum efficiency percentages for some of the protocols we have studied so far:

CSMA/CD

Another way to reduce the number of collisions is to abort transmissions as soon as a collision is detected. CSMA networks with Collision Detection (CSMA/CD) do just that. How long does it take to detect a collision?

Collision Avoidance Protocols

Basic Bit-Map Protocol

Another approach to reducing collisions is to prevent them from happening in the first place. Consider the Basic Bit-Map Method:

  1. Assume N stations numbered 1-N, and a contention period of N slots (bits).
  2. Each station has one slot time during the contention period (numbered 1-N).
  3. Each station J sends a 1-bit during its slot time if it wants to transmit a frame.
  4. Every station sees all the 1-bits transmitted during the contention period, so each station knows which stations want to transmit.
  5. After the contention period, each station that asserted its desire to transmit sends its frame.

At low loads, this protocol has an efficiency of:

Efficiency = d/(d + N), where d is the number of data bits in the frame, N is the number of stations, and d >> N. At low loads, only one frame is sent per contention period, i.e., an extra N bits of overhead for each transmitted frame.

At high loads, the efficiency rises to: d/(d + 1)

and the average delay is given by: N(d + 1)/2 bit times

Drawbacks: Higher numbered stations get better service than those with lower numbers. A higher numbered station has less time to wait (on average) before its next bit time in the current or next contention period.

Also, at light loads, a station must wait on average N/2 bit times before it can begin transmission (longer for low-numbered stations).
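The two efficiency formulas above can be checked numerically; a small sketch (the example values of d and N are illustrative):

```python
def bitmap_efficiency_low_load(d, N):
    """One d-bit frame sent per N-bit contention period."""
    return d / (d + N)

def bitmap_efficiency_high_load(d):
    """All N stations transmit: N frames share the N overhead bits,
    so each frame effectively pays 1 bit of overhead."""
    return d / (d + 1)

# 1000 data bits per frame, 16 stations:
print(bitmap_efficiency_low_load(1000, 16))   # about 0.984
print(bitmap_efficiency_high_load(1000))      # about 0.999
```

Since d >> N, the overhead is small at either load; the protocol's drawbacks are fairness and waiting time, not raw efficiency.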

Broadcast Recognition Alternating Priorities

The Broadcast Recognition Alternating Priorities (BRAP) protocol addresses some of these issues:

As soon as a station inserts a 1-bit into its slot, it begins transmission.

Result: Stations serviced in round-robin style; if a station has nothing to transmit, it sends nothing during its slot, and the next station is given the opportunity.

There are many other protocols aimed at squeezing out a bit more performance out of the protocols. In practice, however, they may not always be useful. Keep in mind:

  1. N, the number of stations, must be known by all stations. In practice, this is difficult to get exactly right. Consider adding or removing stations from a network. Also, each station may need to know its own number.
  2. During the contention period, each bit time (in BRAP) must be ``long enough'' that the signal propagates to both ends of the cable. We can't have bits from different stations colliding. In practice, this generally reduces the overall bit rate and throughput. In Ethernets, for example, the contention period is many times longer than one bit time -- 64 bytes!
  3. The more complex the protocol, the more complex (expensive) the hardware. One should never lose sight of this, as the ultimate goal of networking is high performance at low cost.
  4. Seemingly simple schemes may perform quite adequately in practice. For example, consider Ethernet (802.3) LANs. Indeed, neither of the schemes described here is actually used in LAN hardware.

IEEE 802 Protocol Standards for LANs

The IEEE has produced a set of LAN protocols known as the IEEE 802 protocols. These protocols have been adopted by ANSI and ISO:

Ethernet is a specific product implementing (or nearly so) the IEEE standard. Interesting to note that having an Ethernet port on a machine has become a standard (certainly for workstations).

The 802.3 protocol is described as follows:

802.3 Frame Layout

At the Medium Access (MAC) sublayer, frames consist of the following:

  1. Frames begin with a 56-bit preamble consisting of alternating 1s and 0s. Purpose? Similar to the start bit in RS-232-A: it allows the receiver to detect the start of a frame.
  2. The start of the frame is designated by the byte ``10101011''. That is, two consecutive 1 bits flag the end of the preamble.
  3. 48-bit destination address (16-bit for lower speed version).
  4. 48-bit source address (16-bit for lower speed version).
  5. 16-bit data length field; maximum data size 1500 bytes.
  6. 32-bit checksum field. The checksum is a number dependent on every bit in the frame. The sender computes a checksum and appends it to the frame. The receiver also recalculates the checksum, comparing result with value stored in frame. If the two checksums differ, some of the bits in the frame must have changed and the packet is discarded as having errors.
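As a sketch of the checksum idea in item 6, the following uses Python's zlib.crc32, which implements the same CRC-32 polynomial that Ethernet's checksum uses (the on-the-wire bit-ordering and complementation details are omitted, and the byte order chosen for appending the checksum here is just a local convention):

```python
import zlib

def append_fcs(frame: bytes) -> bytes:
    """Sender side: compute a 32-bit checksum over the frame and append it."""
    fcs = zlib.crc32(frame)
    return frame + fcs.to_bytes(4, "little")

def check_fcs(frame_with_fcs: bytes) -> bool:
    """Receiver side: recompute the checksum and compare with the stored one."""
    frame, fcs = frame_with_fcs[:-4], frame_with_fcs[-4:]
    return zlib.crc32(frame) == int.from_bytes(fcs, "little")

f = append_fcs(b"hello, ethernet")
print(check_fcs(f))                          # True: frame arrived intact
corrupted = bytes([f[0] ^ 0x01]) + f[1:]     # flip a single bit in transit
print(check_fcs(corrupted))                  # False: frame would be discarded
```

Any single-bit change makes the recomputed checksum disagree with the stored one, so the receiver discards the frame as damaged.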

There are two types of addresses:

  1. Unicast addresses start with a high-order bit of 0. Unicast addresses refer to a single machine, and every Ethernet address in the world is guaranteed to be unique (e.g., address is sort of like a serial number).
  2. Multicast (group) addresses start with a high-order bit of 1. Multicast addresses refer to a set of one or more stations.

    A broadcast address (all 1's) is a special case of multicasting. All machines process broadcast frames.

    The management of multicast addresses (e.g., entering or leaving a group) must be handled by some outside mechanism (e.g., higher-layer software).
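The unicast/multicast distinction can be tested programmatically. Note that Ethernet transmits each byte least-significant-bit first, so the ``high-order'' (first transmitted) bit of the address corresponds to the low-order bit of the first byte in the conventional written form. A sketch (assumes the usual colon-separated notation):

```python
def is_multicast(mac: str) -> bool:
    """The first bit transmitted on the wire decides unicast vs. multicast.
    Ethernet sends each byte LSB-first, so in written form that bit is
    the low-order bit of the first byte."""
    first_byte = int(mac.split(":")[0], 16)
    return bool(first_byte & 0x01)

print(is_multicast("01:00:5e:00:00:01"))   # True  (a multicast address)
print(is_multicast("ff:ff:ff:ff:ff:ff"))   # True  (broadcast: all 1s)
print(is_multicast("08:00:2b:01:02:03"))   # False (unicast)
```

Broadcast (all 1s) trivially has the multicast bit set, which is why it is a special case of multicasting.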

Binary Exponential Backoff

802.3 (Ethernet) LANs use a binary exponential backoff algorithm to reduce collisions:

  1. It is 1-persistent. When a station has a frame to send, and the channel is idle, transmission proceeds immediately.
  2. When a collision occurs, the sender aborts its transmission and generates a noise burst to ensure that all stations recognize the collision.
  3. After the collision, wait 0 or 1 contention periods before attempting transmission again. Station has 50-50 probability of waiting 0 or 1 contention period. The idea is that if two stations collide, one will transmit in the first interval, while the other one will wait until the second interval. Once the second interval begins, however, the station will sense that the channel is already busy and will not transmit.
  4. If another collision occurs: Randomly wait 0, 1, 2, or 3 slot times before attempting transmission again.
  5. In general, wait between 0 and 2^r - 1 slot times, where r is the number of collisions encountered.
  6. Finally, freeze interval at 1023 slot times after 10 attempts, and give up altogether after 16 attempts.
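The backoff schedule in steps 3-6 can be sketched as follows (a simplified model of the slot-count choice only; real adapters also handle the retransmission timing):

```python
import random

def backoff_slots(collisions, rng=random):
    """Number of slot times to wait after the given number of collisions.
    The interval doubles after each collision, freezes at 0..1023 after
    10 collisions, and the station gives up after 16 attempts."""
    if collisions > 16:
        raise RuntimeError("too many collisions: giving up")
    k = min(collisions, 10)
    return rng.randrange(2 ** k)     # uniform over 0 .. 2^k - 1

# After the 1st collision: wait 0 or 1 slots (50-50).
# After the 3rd collision: wait somewhere in 0..7 slots.
print(backoff_slots(1), backoff_slots(3), backoff_slots(12))
```

The doubling interval is what gives low delay at light loads (the first retry waits at most one slot) while spreading retries widely under heavy loads.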

Result? Low delay at light loads, yet we avoid collisions under heavy loads.

Also, note that there are no acknowledgments; a sender has no way of knowing that a frame was successfully delivered.

802.3 Analysis

Assuming a constant retransmission probability (rather than backoff), the channel efficiency of 802.3 LANs is given by:

Efficiency = P / (P + 2*tau/A),

where P is the mean time to transmit a frame, tau is the one-way propagation delay between the two farthest stations (so a contention slot lasts 2*tau), and A is the probability that some station acquires the channel during a contention slot.

What happens as tau increases? The efficiency drops.

An alternative form of the same formula:

Efficiency = 1 / (1 + 2BLe/(cF)),

where B = bandwidth, L = cable length, c = signal propagation speed, e = contention slots per frame, and F = mean frame length.

What does this say about networks with large BL products? They are inefficient. Fiber optics, satellites, and WANs in general have high bandwidth-delay products.
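The formula can be evaluated for concrete parameters; the cable length and frame size below are illustrative, the propagation speed is assumed to be about two-thirds the speed of light, and e is taken as the constant 2.718 contention slots per frame:

```python
def ethernet_efficiency(bandwidth, length, frame_bits, e=2.718, c=2e8):
    """Channel efficiency 1 / (1 + 2BLe/(cF)).
    c is the assumed signal propagation speed in the cable (m/s);
    e is the mean number of contention slots per frame."""
    return 1 / (1 + 2 * bandwidth * length * e / (c * frame_bits))

# Classic 10 Mbps Ethernet, 2500 m cable, 1024-byte (8192-bit) frames:
print(ethernet_efficiency(10e6, 2500, 8192))    # roughly 0.92
# Raising only the bandwidth to 1 Gbps (a 100x larger BL product):
print(ethernet_efficiency(1e9, 2500, 8192))     # roughly 0.11
```

Because efficiency depends on the BL product, scaling up bandwidth without shrinking the cable (or growing the frames) destroys efficiency, which is why high bandwidth-delay links fare poorly under CSMA/CD.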

Switched 802.3 LANs

Can connect hosts to a hub switch. Advantage is that stations use the same network interface card, but they can run at higher speeds.

Fast Ethernet

See Fig. 4-47. Different approaches result in 100Mbps.

100BASE-T4 splits data into 3 streams, each with an effective data rate of 33 1/3 Mbps, over four twisted pairs: one pair is used for in-bound traffic, one for out-bound, and two are bidirectional. Ternary encoding (8B6T) is used.

Shared hub (all incoming lines logically connected) vs. switched hub (each incoming frame is buffered on a plug-in line card). 100BASE-FX lines must be buffered because they are too long for the Ethernet collision detection algorithm.

IEEE Standard 802.4: Token Bus

Not everyone was happy with CSMA/CD networks. Reasons?

  1. Non-deterministic nature of protocol; no absolute bound on delay.
  2. No support for priorities. All traffic (e.g., voice, video, interactive keystrokes, background mail, etc.) is treated the same.

One solution is the token bus: Physically a bus, logically a ring.

The details of the actual protocols are quite complex. Skipping the details, the following points should be noted:

  1. It uses 75-ohm broadband technology (analog).
  2. It supports 4 priority classes of frames. A station transmits its highest priority frames first, then its next highest, etc., until it has transmitted all of its frames or until its time has elapsed. This contrasts with the token ring, in which higher priority traffic can prevent a station from ever transmitting. Note that the token bus approach also leads to starvation, but under different conditions.
  3. When a station fails (or is powered down), it must leave the ring. Note that a protocol is needed to gracefully add or remove oneself from the logical ring and also to detect ungraceful exits of failed nodes.
  4. When a station joins the network, it must join the ring. Special tokens are periodically broadcast to allow new stations to enter the ring. A new station replies with its own address and its successor's address. A special token can draw 0, 1, or multiple replies; if there are multiple replies, an arbitration algorithm similar to binary exponential backoff must be used.
  5. Because tokens can become damaged, a protocol is needed to regenerate lost tokens.
  6. Protocol must deal with duplicate tokens. For example, a station might incorrectly regenerate a token.

IEEE Standard 802.5: Token Ring

An alternative topology is to connect stations together in a ring, with each station allowed to transmit in turn.

Token rings behave like broadcast networks in that every station ``sees'' (and processes) each frame.

However, they are actually built out of a collection of point-to-point links connected in a ring.

  1. Any technology can be used for individual links. Examples? Twisted pair, fiber, etc.
  2. Unlike 802.3, stations are active rather than passive -- they participate in the network's operation. That is, a node receives a frame from its upstream neighbor and copies it to its downstream neighbor.
  3. A special control frame, called a token, circulates from station to station:
    1. A station can transmit only when it is in possession of the token. Once a station has ``seized'' the token, it may transmit one (or more) frames.
    2. A station can transmit only a single or limited number of frames before forwarding the token on. This limits the time a station can hold the token, placing an upper bound on the time a station has to wait before it can seize the token.
    3. Because only one station transmits at a time, no collisions occur.
  4. Data travels in only one direction, and transmission is totally digital. That is, each station regenerates the bits it passes along, rather than amplifying an analog signal.
  5. Each station adds only a single 1-bit delay to each frame; as soon as it has received a bit, it copies it to the outgoing link. That is, a station regenerates a bit on its outgoing link as soon as it has received the bit on its incoming link. It does not wait for the entire frame before starting to forward the frame on.
  6. Depending on ring length, the ring can hold many frames, or a fraction of a single frame.
  7. The network must handle the case of a station being powered off, thereby ``breaking'' the ring. A station being powered off must connect its input directly to its output and remove its 1-bit delay.

Station Modes

A station can be in one of two modes:

  1. In listen mode, simply copy frames from the input line to the output line. Note: Copy takes place at bit level, not frame level.
  2. In transmit mode, first seize the token and then:
    1. Break the ``copy'' connection from the input-to-output line.
    2. Transmit frame, removing the bits as they come back on the receive line.
    3. After having received a copy of the entire frame, restore the ``copy'' connection (e.g., revert to listen mode).
    4. Regenerate token.
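The listen/transmit behavior can be sketched as a toy round of token circulation (heavily simplified: one frame per token visit, no bit-level detail; the function and data layout are illustrative):

```python
def token_ring_round(stations):
    """One full circulation of the token. `stations` maps station id to a
    list of queued frames; each station may send at most one frame per
    token visit (a simplification of the holding-time rule). Returns
    the frames delivered, in ring order."""
    delivered = []
    for sid in sorted(stations):          # token visits stations in ring order
        queue = stations[sid]
        if queue:                         # transmit mode: seize the token,
            frame = queue.pop(0)          # send one frame, regenerate token
            delivered.append((sid, frame))
        # otherwise: listen mode -- pass the token downstream unchanged
    return delivered

queues = {1: ["a"], 2: [], 3: ["b", "c"]}
print(token_ring_round(queues))   # [(1, 'a'), (3, 'b')]
print(token_ring_round(queues))   # [(3, 'c')]
```

Because each station sends at most a bounded amount per visit, no station waits more than one full circulation before getting its turn -- the bounded-delay property noted below.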

Note: Adding acknowledgments would be trivial. The intended recipient could simply flip a bit to indicate that it had received the frame.

Performance

What is the bound on the transmission delay time? That is, how long will a station wait before it is allowed to transmit?

nT, where n is the number of stations, and T is the time to transmit one frame. In the worst case, a station may have to wait for every other station to transmit a single frame before it can transmit.

Overall performance:

802.5 Standard

IEEE 802.5 has defined a token ring standard:

  1. 1 and 4 Mbps versions have been standardized. Note, however, that once again the standard lags behind the technology; 80 Mbps token ring LANs have been available for several years.
  2. IBM version is NOT the same as 802.5.
  3. Uses twisted pair technology for links.

Token

In 802.5 each token frame is three bytes long:

  1. A 1-byte starting delimiter signals the start of the token.
  2. A 1-byte access control field contains the token bit and priority information.
  3. A 1-byte ending delimiter.

Note: The token differs from a regular frame by only 1 bit (the token bit) in the access control field. Thus, seizing the token consists of flipping a single bit during the copy operation.

After seizing the token, a station may transmit data for no longer than the token holding time, typically about 10 ms.

Frame Format

Each regular frame has the following format:

  1. 1-byte start-of-frame delimiter, 1-byte access control field, and 1-byte frame control field (e.g., a ``modified token'').
  2. 6 (or 2) byte destination address. (Same as 802.3.)
  3. 6 (or 2) byte source (sender) address. (Same as 802.3.)
  4. Data itself. No length field is included. Special delimiters flag the end of data.
  5. 4-byte checksum. (Same as 802.3.)
  6. 1-byte ending delimiter.

    Note: The ending delimiter contains an invalid Manchester encoding that can never appear in the data part of the frame.

  7. 1-byte frame status indication containing:

Priorities

The access control field in the token is used to support priorities. If a station wishes to send a frame at priority n:

  1. It must first seize a token of the same (or lower) priority as the frame it wishes to send. That is, if the token contains a higher priority, another station has a higher priority frame to send.
  2. It attempts to reserve the token by writing the desired priority into the access control field of a data frame that passes by. However, a station cannot replace a higher priority with a lower priority.
  3. Another station wishing to send a higher priority frame can overwrite the access control field with the higher priority value.
  4. When regenerating the token, set its priority to that in the (modified) access control field of the frame it receives. Another node may have raised the priority from what it had been.

Note that high priority traffic can starve out low priority traffic. A station that always has high priority traffic to send can prevent another station from ever transmitting.

Also, additional rules are required to restore priorities to their former state.

Wire Centers

Unfortunately, ring networks are subject to wire breaks (or station failures) -- single points of failure. One solution is a wire center:

  1. Convert the ring topology into a star topology. The wire center acts as the hub of the star.
  2. Each station connects to the wire center over two links, one for receiving and one for sending.
  3. The wiring center uses relays powered by the station: If the station shuts off, remove it from the ring. That is, when the power source vanishes, the relay effectively bypasses the station, removing it from the ring.
  4. Each station is still part of the logical ring; thus, multiple wire centers can be connected together. The existence of a wire center is transparent to the stations. That is, stations do not need to be aware that a wire center is present.

Ring Maintenance

Token ring networks must perform ring maintenance functions to handle such problems as token regeneration. In 802.5, each network has a monitor station:

  1. The monitor station is solely responsible for proper ring maintenance. (It is also a single point of failure should it ``misbehave''.) It does the following:

  2. Yet another protocol (using claim tokens) is used to select the monitor station.

Since tokens (potentially) circulate forever, the entire token must ``fit'' in the ring. That is, the ring delay must be larger than one token time. Unlike regular frames, token frames potentially circulate forever. Because the sender doesn't pull them off the ring, the first bit of the token frame can't return to the sender until after it has sent the last bit of the token frame.
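The ``token must fit'' condition can be checked with a quick calculation of the ring's bit capacity (propagation delay times bit rate, plus one bit of delay per station); the parameters below are illustrative and the propagation speed is assumed to be about two-thirds the speed of light:

```python
def ring_bits(ring_length_m, bandwidth_bps, station_count, c=2e8):
    """Number of bits 'in flight' on the ring: the propagation delay
    times the bit rate, plus the 1-bit delay each station adds."""
    return ring_length_m / c * bandwidth_bps + station_count

# A short 200 m, 4 Mbps ring with 10 stations:
print(ring_bits(200, 4e6, 10))   # 14 bits -- fewer than the 24-bit token,
                                 # so artificial delay must be added
```

When the ring's natural capacity is below the token length (3 bytes = 24 bits), extra delay has to be inserted so the token's first bit does not return before its last bit is sent.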

Slotted Token Rings

Slotted rings are a variation of the token ring approach that are particularly useful in long-delay rings:

  1. Data frames are contained in the token frame itself. That is, the data frame is piggybacked on an existing token.
  2. The token contains a used/empty bit; when an ``empty'' token goes by, a station can set it to ``used'' and fill it with data.
  3. Beneficial in long-delay rings. Why? They allow multiple tokens and frames to travel within the ring simultaneously.

With slotted rings, frames are subject to maximum size restrictions. We don't want large packets to ``overflow'' the ring. All packets must be able to fit in the ring at the same time.

Comparison of LANs

802.3 (Ethernet):

802.4:

802.5:

Finally, any specific performance analysis or traffic mix can make one technology look ``good'' and another look ``bad''. When choosing a system, all factors should be considered.

FDDI

The Fiber Distributed Data Interface (FDDI) is a standard for token rings using fiber for individual links:

  1. supports data rates of 100 Mbps over 200 km
  2. uses LEDs rather than lasers (lasers are potentially dangerous)
  3. has a bit error rate of only 1 in 2.5 x 10^10 bits
  4. cabling consists of a pair of rings (one for each direction); two pairs are specified so that the second ring can be used to reconfigure the ring in the event of a station or link failure

The protocol itself is similar to 802.5, but has the following differences:

FDDI is waiting for applications to catch up. This is similar to Ethernet, which people said was too complicated when it came out.

IEEE Standard 802.2: Logical Link Control

The LLC sublayer sits on top of the MAC layer to provide the equivalent of the data link layer (Fig 4-33).

Service options:

About this document ...

This document was generated using the LaTeX2HTML translator Version 96.1 (Feb 5, 1996) Copyright © 1993, 1994, 1995, 1996, Nikos Drakos, Computer Based Learning Unit, University of Leeds.

The command line arguments were:
latex2html -no_navigation -split 0 -t MAC Channel Sublayer week4-mac.tex.

The translation was initiated by Craig Wills on Mon Oct 18 17:32:36 EDT 1999

