``Data Link Layer for LANs''
Can divide networks into point-to-point and broadcast. Look at broadcast
networks and their protocols.
When many stations compete for a channel (e.g., broadcast
channel such as an Ethernet), an algorithm must arbitrate access to
the shared channel.
Need a way of ensuring that when two or more stations wish to
transmit, they all wait until doing so won't interfere with other
transmitters. Broadcast links include LANs, satellites (WAN), etc.
LAN characteristics:
- diameter of not more than a few kilometers
- data rate of at least several Mbps
- complete ownership by a single organization
MANs cover a city-wide area with LAN technology. For example, cable TV.
Can have higher speed, lower error rate lines with LANs than WANs.
We want to analyze performance of the communication channel to determine
how well different technologies work.
There are three general approaches to analyzing systems:
- Actually build the system and measure its
performance. Drawback?
Unfortunately, this can be expensive in time and effort.
- Write a computer program that simulates the
system.
That is, processes can represent customers or servers, and processes move
from queue to queue. For example, a ``customer'' process could
generate arriving packets by depositing a message in a queue. The
server process would then ``consume'' these messages, removing them
from the queue and then simulating the steps required to actually send
the packet.
- Represent service times, arrival rates, etc. as a series
of equations and solve the system mathematically. Drawback? Very
hard to represent a system using mathematical equations without making
simplifying assumptions that may invalidate the model (and thus any
conclusions).
In order to be able to solve a system mathematically, one must
typically make simplifying assumptions. Some common
simplifications:
- Queues can be infinitely long. Of course, no queue is
infinitely long in practice, but if a fixed length queue never
overflows, we can pretend that it has infinite length without
compromising our model.
- Choose a service time function that is ``well behaved''.
Typically, the service time is chosen to vary around a mean,
with small deviations from the mean. Exponential distributions are
commonly used, because their mathematical representations make
analytic problems solvable. Note: in real networks, traffic patterns
rarely behave at all like these functions.
- Likewise, assume customers arrive at a server's queue according to
some well-behaved probability function.
The above simplifications may be made for computer simulations as
well. Thus, when using simulations or mathematics to analyze a
given system, one must always ask whether the assumptions are
relevant to the system being considered. For instance:
- Real systems do not have an infinite amount of buffer space. If
the system has ``a lot'', the assumption is probably reasonable. That
is, just because the model assumes an infinite amount of buffer space
doesn't mean that all that buffer space is actually used. If only a
small amount of space is ever used, then the assumption is fine.
- Hot point of contention between analysts and experimentalists:
Do service times and arrival times correspond to ``well behaved''
functions? In most cases, the answer is no. In most real
systems, actual traffic patterns are quite complex and do not match
simplistic functions. For example, flow control eventually forces a
sender to stop generating packets. This contradicts the
``randomness'' assumption of arrival functions.
Although simplifying assumptions are required, queueing theory is a
powerful mathematical tool for making quantitative analyses of computer
networks. We need to be aware of it, as we will use it in our analysis.
See the appendix of Tanenbaum.
High level overview: Model the system as a set of customers (frames
needing to be delivered) and servers (part of network that processes
frames). See picture.
Characterized by five components
- interarrival-time probability density function
- service-time probability density function
- number of servers
- queueing discipline
- amount of buffer space in the queues
Assume an infinite number of customers (i.e. what is happening in the
system does not affect new arrivals)
Concentrate on infinite-buffer, single-server systems using first-come,
first-served. Use the notation A/B/m where:
- A is the interarrival-time probability density
- B is the service-time probability density
- m is the number of servers
Density functions are chosen from:
- M - exponential probability density (M stands for Markov)
- D - all customers have the same value (D stands for deterministic)
- G - general (arbitrary)
Usually assume an M/M/1 model.
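The M/M/1 assumption yields simple closed-form results. As a rough sketch
(the formulas are standard queueing theory; the numeric arrival and
service rates below are made up for illustration):

```python
# Closed-form M/M/1 results (standard queueing theory).
# lam = mean arrival rate, mu = mean service rate, both in frames/second.
def mm1_stats(lam, mu):
    assert lam < mu, "queue is unstable when arrivals outpace service"
    rho = lam / mu           # utilization of the server
    n = rho / (1 - rho)      # mean number of frames in the system
    t = 1 / (mu - lam)       # mean time a frame spends in the system
    return rho, n, t

# Illustrative rates: 400 frames/s arriving at a server that handles 500/s.
rho, n, t = mm1_stats(lam=400.0, mu=500.0)
print(rho, n, t)  # utilization 0.8, ~4 frames in system, ~10 ms mean delay
```

Note how the mean delay rises sharply as the arrival rate approaches the
service rate.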
Before looking at specific technologies, it is useful to look at
general characteristics of the various technologies.
In the station model, stations generate one frame at a
time and block until the frame has been successfully transmitted.
Thus, a station cannot have multiple frames queued for
transmission.
With the single channel assumption, all stations
share one communications medium (channel), and all stations are
equivalent. That is, if two or more stations wish to transmit,
no station has an inherent transmission priority over the other.
When two or more stations transmit frames simultaneously,
a collision takes place. The collision assumption
model assumes that all stations detect a collision and the collided frames
become garbled and must be retransmitted.
In the continuous time model, a station can
transmit frames at any time.
In the slotted time model, a station may
begin transmission of a frame only at the start of a time slot.
A clock ticks at regular pulses, and a station may begin
transmission only at the start of a pulse. Thus, if a station wishes
to transmit, it must wait until the start of a time slot before
actually attempting transmission.
In carrier sense systems, a station can sense
if a channel is being used before it attempts transmission. That
is, it can listen to the channel to determine if another station is
currently transmitting.
With no carrier sense, stations cannot sense the
channel before transmission starts.
Back in 1970, the University of Hawaii built a network out of
radios that broadcast signals. Basic idea:
- Anyone may transmit whenever they want. (Continuous time model.)
- Each radio detects collisions by listening to its own signal.
A collision is detected when a sender doesn't receive the
signal that it just sent.
- After a collision, wait a random amount of time and
transmit the same frame again. This technique is known as
backoff.
With queueing theory in hand, we can now examine ALOHA's efficiency:
- Assume that all frames are the same size. That is, the
service time is deterministic (when no collision takes place).
- When a station sends, it blocks until transmission is
successful. A station cannot try to send more than one packet at
a time. Note that this means that if a packet's transmission is delayed
due to collisions, the station will also delay generating additional
packets to send.
- Assume new frames are generated according to a Poisson
distribution, with a mean of S frames per frame time.
Note:
- For S > 1, the load exceeds the channel's capacity and many
collisions occur.
- For S close to 0, few collisions occur.
- Assume that the probability of n transmission attempts per
frame time is Poisson distributed with a mean of G per frame time. G
is called the offered load and includes both new
transmissions and retransmissions.
Note: S = λt, where t is one frame time and λ is the frame arrival
rate, and G ≥ S, with G ≈ S at low loads (few retransmissions).
The probability that n frames will be generated during a given frame
time is given by the Poisson distribution:
Pr[n] = (G^n e^(-G)) / n!
When will transmission be successful? Answer: If there
are no collisions. There are two cases: another station may start
transmitting after we do (within one frame time t after we start), or we
may collide with a frame already in transmission (begun within time t
before we start transmitting). Thus the vulnerable period is 2t, and the
probability of no other traffic being generated during the entire
vulnerable period is
P(0) = e^(-2G)
and
Throughput S = G e^(-2G)
When is throughput maximized?
Answer: when G = 0.5; the throughput at this value is
S = 1/(2e), only about 18%!
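The pure-ALOHA throughput formula is easy to check numerically; a quick
sketch (the grid of offered loads below is arbitrary):

```python
import math

# Pure ALOHA: throughput S as a function of offered load G.
def pure_aloha_throughput(G):
    return G * math.exp(-2 * G)  # S = G * e^(-2G)

# Scan a grid of offered loads; the maximum occurs at G = 0.5.
best_G = max((g / 100 for g in range(1, 201)), key=pure_aloha_throughput)
print(best_G, pure_aloha_throughput(best_G))  # 0.5 and ~0.184 (= 1/(2e))
```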
The above method is called pure ALOHA. An
alternative method called slotted ALOHA behaves as
follows:
- Only allow transmission to start at the beginning of a slot
time. Need a means of synchronization.
Each slot has duration equal to the maximum frame size. When a
station has a new frame to send, it must wait until the start of the
next slot time before attempting transmission.
- Because transmissions always start and stop on slot boundaries,
the collision interval drops from two to one frame time.
Result:
Throughput S = G e^(-G), which doubles the maximum throughput to
1/e (about 37%) at G = 1.
Intuitively, the throughput is poor because of the following:
- At low offered loads, there are few collisions, but many empty
slots. Thus bandwidth is frequently unused and wasted.
- At higher loads, few empty slots, but many collisions,
effectively wasting the full slots.
- Throughput is maximized somewhere in the middle.
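Comparing the two throughput formulas side by side makes the doubling
explicit (a small sketch using the standard expressions):

```python
import math

# Throughput vs. offered load for both ALOHA variants.
def pure(G):    return G * math.exp(-2 * G)  # vulnerable period: 2 frame times
def slotted(G): return G * math.exp(-G)      # vulnerable period: 1 frame time

# Peak throughputs: pure peaks at G=0.5, slotted at G=1.0 with twice the value.
print(pure(0.5), slotted(1.0))  # ~0.184 vs ~0.368
```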
Under what conditions do collisions take place?
- If a station begins transmission when the channel is already
busy.
Obviously, if one transmits when the channel is already in use,
a collision will happen. Note: Pure Aloha has this problem,
which is one reason its performance is poor.
We can avoid these types of collisions by sensing the
channel before we attempt transmission, and send only if it is idle.
- If two stations begin transmission at (roughly) the same
time, each thinking that the channel is idle. Because of
propagation delay times, one station may think the channel is idle,
when in fact another station has already begun transmission, but its
signals have not reached the first node.
Collision avoidance protocols all attempt to reduce one or more of the
above factors.
Problem with ALOHA is that frames are blindly sent--bound to be collisions.
Stations listen for a transmission before trying to send data--carrier sense.
Only send if channel is idle.
- 1-Persistent CSMA (Carrier Sense Multiple Access). Sense channel, if
idle then send, if busy wait until idle and then send. 1-persistent
because it sends with probability of one when senses channel is idle.
Collisions?
Effect of propagation delay (it takes time for a signal to propagate from
one station to another). Even with zero propagation delay, collisions can
still occur: two stations that become ready while a third is transmitting
will both send the instant the channel goes idle.
- nonpersistent CSMA--less greedy. Sense channel. If idle then send.
If busy then wait random amount of time before repeating the same routine.
Collisions go away? No, two stations can still sense an idle channel and
transmit at the same time.
- p-persistent CSMA--applies to slotted channels. If channel idle
then send with probability p, with probability q=(1-p), it
defers to the next time slot. Delay for random time if channel busy. Look
at Fig 3-5.
Note: p = 1 implies we transmit immediately, p = .1, we transmit with
probability .1.
The primary advantage of p-persistent protocols is that they reduce
the number of collisions under heavy load. The primary disadvantage
is that they increase the average delay before a station transmits a
frame. Under low loads, the increased delay reduces efficiency.
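The per-slot decision a p-persistent station makes can be sketched as
follows (the function and return values are my own naming; the logic
follows the description above):

```python
import random

# One slot's worth of p-persistent CSMA logic for a single station.
def p_persistent_decision(channel_idle, p, rng=random):
    if not channel_idle:
        return "backoff"   # busy channel: wait a random time, then sense again
    if rng.random() < p:
        return "transmit"  # idle channel: send with probability p
    return "defer"         # with probability 1-p, wait for the next slot

print(p_persistent_decision(channel_idle=True, p=1.0))   # always "transmit"
print(p_persistent_decision(channel_idle=False, p=0.1))  # "backoff"
```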
The following table gives maximum efficiency percentages for some of
the protocols we have studied so far:
Another way to reduce the number of collisions is to abort
collisions as soon as they are detected. CSMA networks with Collision
Detect (CSMA/CD) do just that. How long does it take to detect
a collision?
- at least twice the propagation delay, or 2τ
(in the worst case, τ for the signal to travel from one end of the cable
to the other, and another τ for the collision indication to travel back)
-- we'll call this interval the contention period
- what does this say about building broadcast networks that span
large distances? (increase propagation delay)
The longer the physical distance, the longer the
contention period. For satellite communication, the contention period
is .5 seconds, much longer than the transmission time for a single frame!
- What about small frames? A station could send an entire frame before
detecting a collision, so the standard pads frames out to a minimum size.
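The 2τ contention interval is simple arithmetic; a back-of-the-envelope
sketch (the propagation speed and cable length are illustrative
assumptions, not values from a standard):

```python
# Worst-case collision-detection time: 2 * tau, where tau = cable length / c.
def contention_period(cable_m, prop_speed_mps=2e8):  # ~2/3 the speed of light
    tau = cable_m / prop_speed_mps
    return 2 * tau

# A 2500 m cable gives 2*tau = 25 microseconds; at 10 Mbps that is ~250 bit
# times a station must keep transmitting to be sure of hearing a collision.
period = contention_period(2500.0)
print(period, period * 10e6)  # ~2.5e-05 s, ~250 bit times
```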
Another approach to reducing collisions is to prevent them from
happening in the first place. Consider the Basic Bit-Map Method:
- Assume N stations numbered 1-N, and a contention period of N
slots (bits).
- Each station has one slot time during the contention period
(numbered 1-N).
- Each station J sends a 1-bit during its slot time if it wants to
transmit a frame.
- Every station sees all the 1-bits transmitted during the contention
period, so each station knows which stations want to transmit.
- After the contention period, each station that asserted its
desire to transmit sends its frame.
At low loads, this protocol has an efficiency of
d / (d + N), where d is the number of data bits
in the frame, N is the number of stations, and d >> N. At low
loads, only one frame is sent per contention period, or an extra N
bits for each transmitted frame.
At high loads, the efficiency rises to
d / (d + 1), since all N stations send one frame per N-bit contention
period (about one overhead bit per frame), and the average delay is
given by roughly N(d + 1)/2 bit times.
Drawbacks: Higher numbered stations get better service than those with
lower numbers. A higher numbered station has less time to wait
(on average) before its next bit time in the current or next
contention period.
Also, at light loads, a station must wait an average of N bit times
before it can begin transmission.
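The efficiency expressions above can be computed directly (the d and N
values below are made up for illustration):

```python
# Basic bit-map efficiency: N overhead bits per frame at low load,
# roughly 1 overhead bit per frame at high load.
def bitmap_efficiency(d, N, high_load=False):
    return d / (d + 1) if high_load else d / (d + N)

print(bitmap_efficiency(d=1000, N=50))                  # low load: ~0.952
print(bitmap_efficiency(d=1000, N=50, high_load=True))  # high load: ~0.999
```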
The Broadcast Recognition Alternating Priorities (BRAP) protocol
addresses some of these issues:
As soon as a station inserts a 1-bit into its slot, it begins
transmission.
Result: Stations serviced in round-robin style; if a station has nothing
to transmit, it sends nothing during its slot, and the next station is
given the opportunity.
There are many other protocols aimed at squeezing out a bit more
performance out of the protocols. In practice, however, they
may not always be useful. Keep in mind:
- N, the number of stations, must be known by all stations.
In practice, this is difficult to get exactly right. Consider adding
or removing stations from a network. Also, each station may need to
know its own number.
- During the contention period, each bit time (in BRAP) must be
``long enough'' that the signal propagates to both ends of the cable.
We can't have bits from different stations colliding.
In practice, this generally reduces the overall bit rate and
throughput. In Ethernets, for example, the contention period is many
times longer than one bit time -- 64 bytes!
- The more complex the protocol, the more complex (expensive) the
hardware. One should never lose sight of this, as the
ultimate goal of networking is high performance at low cost.
- Seemingly simple schemes may perform quite adequately in
practice. For example, consider Ethernet (802.3) LANs. Indeed,
neither of the schemes described above is actually used in LAN hardware.
The IEEE has produced a set of LAN protocols known as the IEEE 802
protocols. These protocols have been adopted by ANSI and ISO:
- 802.2: logical link standard (device driver interface)
- 802.3: CSMA/CD
- 802.4: token bus
- 802.5: token ring
Ethernet is a specific product implementing (or nearly so) the IEEE
standard. Interesting to note that having an Ethernet port on a machine
has become a standard (certainly for workstations).
The 802.3 protocol is described as follows:
- 1-persistent CSMA/CD LAN (always sends if it senses the channel idle)
- its history is as follows:
- started with ALOHA
- continued at Xerox, where Metcalfe & Boggs produced a 3 Mbps LAN version
- Xerox, DEC, and Intel standardized a 10Mbps version
- IEEE standardized a 10Mbps version (with slight differences from
the Xerox standard)
- the maximum length of a segment of cable is 500 meters
- segments can be separated by repeaters, devices that
regenerate or ``amplify'' signals (not frames); a single repeater can
join multiple segments
- maximum distance between two stations 2.5 km, maximum number of
repeaters along any path: 4 (why these limits?)
- uses Manchester encoding
At the Medium Access (MAC) sublayer, frames consist of the following:
- Frames begin with 56-bit preamble consisting of alternating 1s
and 0s. Purpose? Similar to start bit in RS-232-A.
It allows the receiver to detect the start of a frame.
- Start of the frame designated by the byte ``10101011''.
That is, two consecutive 1 bits flag the end of the preamble.
- 48-bit destination address (16-bit for lower speed version).
- 48-bit source address (16-bit for lower speed version).
- 16-bit data length field; maximum data size 1500 bytes.
- 32-bit checksum field. The checksum is a number dependent
on every bit in the frame. The sender computes a checksum and appends
it to the frame. The receiver also recalculates the checksum,
comparing result with value stored in frame. If the two checksums
differ, some of the bits in the frame must have changed and the packet
is discarded as having errors.
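The checksum logic can be illustrated with a toy frame builder (a
hypothetical sketch: the field layout is simplified and the helper names
are mine, but real Ethernet does use CRC-32, the same polynomial that
`zlib.crc32` computes):

```python
import struct
import zlib

# Toy 802.3-style frame: dst + src + 16-bit length + data + 32-bit checksum.
def make_frame(dst, src, payload):
    body = dst + src + struct.pack("!H", len(payload)) + payload
    return body + struct.pack("!I", zlib.crc32(body))  # sender appends CRC

def frame_ok(frame):
    body, fcs = frame[:-4], struct.unpack("!I", frame[-4:])[0]
    return zlib.crc32(body) == fcs  # receiver recomputes; mismatch -> discard

f = make_frame(b"\xaa" * 6, b"\xbb" * 6, b"hello")
bad = f[:14] + bytes([f[14] ^ 0xFF]) + f[15:]  # corrupt one payload byte
print(frame_ok(f), frame_ok(bad))  # True False
```

CRC-32 detects any error burst of 32 bits or less, so the single
corrupted byte above is always caught.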
There are two types of addresses:
- Unicast addresses start with a high-order bit of 0.
Unicast addresses refer to a single machine, and every Ethernet address
in the world is guaranteed to be unique (e.g., address is sort of like
a serial number).
- Multicast (group) addresses start with a high-order
bit of 1. Multicast addresses refer to a set of one or more stations.
A broadcast address (all 1's) is a special case of multicasting.
All machines process broadcast frames.
The management of multicast addresses (e.g., entering or leaving a
group) must be managed by some outside mechanism (e.g, higher-layer
software).
802.3 (Ethernet) LANs use a binary exponential
backoff algorithm to reduce collisions:
- It is 1-persistent. When a station has a frame to send,
and the channel is idle, transmission proceeds
immediately.
- When a collision occurs, the sender aborts transmission and generates a
noise burst to ensure that all stations recognize the condition.
- After the collision, wait 0 or 1 contention periods
before attempting transmission again. Station has 50-50
probability of waiting 0 or 1 contention period. The idea is that if
two stations collide, one will transmit in the first interval, while
the other one will wait until the second interval. Once the second
interval begins, however, the station will sense that the channel is
already busy and will not transmit.
- If another collision occurs: Randomly wait 0, 1, 2, or 3
slot times before attempting transmission again.
- In general, wait between 0 and 2^r - 1 slot times, where r is
the number of collisions encountered.
- Finally, freeze the interval at 1023 slot times after 10 attempts,
and give up altogether after 16 attempts.
Result? Low delay at light loads, yet we avoid collisions under
heavy loads.
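The backoff rule above can be sketched as follows (function names are
mine; the constants 10, 16, and 1023 come from the description above):

```python
import random

# Binary exponential backoff: after the r-th collision, wait a uniformly
# random number of slot times in [0, 2^r - 1], freezing the range at
# 1023 slots (2^10 - 1) after 10 collisions and giving up after 16.
def backoff_slots(collisions, rng=random):
    if collisions > 16:
        raise RuntimeError("giving up after 16 failed attempts")
    k = min(collisions, 10)       # freeze the interval after 10 attempts
    return rng.randrange(2 ** k)  # 0 .. 2^k - 1 slot times

random.seed(7)
print([backoff_slots(r) for r in (1, 3, 12)])  # ranges [0,1], [0,7], [0,1023]
```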
Also, note that there are no acknowledgments; a sender has no
way of knowing that a frame was successfully delivered.
Assuming a constant retransmission probability (rather than backoff),
the channel efficiency of 802.3 LANs is given by:
Efficiency = P / (P + 2τ/A),
where P is the mean time to transmit a frame, and A is the
probability that some station transmits during each slot.
What happens as τ (the propagation delay) increases? The efficiency drops.
An alternative form of the same formula:
Efficiency = 1 / (1 + 2BLe/cF),
where B = bandwidth, L = cable length, c = signal propagation
speed, e = contention slots per frame, and F = mean frame length.
What does this say about networks with large BL products? They
are inefficient. Fiber optics, satellites, and WANs in general have
high bandwidth-delay products.
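A quick numeric sketch of the bandwidth-delay effect, using the
alternative formula above (the bandwidth, cable length, and frame size
are illustrative assumptions, with c ≈ 2×10^8 m/s and e ≈ 2.718
contention slots per frame in the optimal case):

```python
# Efficiency = 1 / (1 + 2*B*L*e / (c*F)), the alternative formula.
def efficiency(B, L, F, e=2.718, c=2e8):
    return 1 / (1 + 2 * B * L * e / (c * F))

# 1500-byte (12000-bit) frames on a 2.5 km cable:
print(efficiency(B=10e6, L=2500, F=12000))  # 10 Mbps: ~0.95, quite efficient
print(efficiency(B=1e9, L=2500, F=12000))   # 1 Gbps: ~0.15, BL product hurts
```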
Hosts can be connected to a hub or switch. The advantage is that stations
use the same network interface card, but can run at higher speeds.
See Fig. 4-47. Different approaches result in 100Mbps.
100BASE-T4 splits data onto three streams, each with an effective data
rate of 33 1/3 Mbps. It uses four twisted pairs: one pair for in-bound
traffic, one for out-bound, and two that are bidirectional. It uses
ternary encoding (8B6T).
Shared hub (all incoming lines logically connected) vs. switched hub (each
incoming frame is buffered on a plug-in line card). 100BASE-FX links must
be buffered because they are too long for the Ethernet collision-detection
algorithm.
Not everyone was happy with CSMA/CD networks. Reasons?
- Non-deterministic nature of protocol; no absolute bound
on delay.
- No support for priorities. All traffic (e.g., voice, video,
interactive keystrokes, background mail, etc.) is treated the same.
One solution is the token bus:
Physically a bus, logically a ring.
The details of the actual protocols are quite complex. Skipping the
details, the following points should be noted:
- It uses 75-ohm broadband technology (analog).
- It supports 4 priority classes of frames.
A station transmits its
highest priority frames first, then its next highest, etc., until it
has transmitted all of its frames or until its time has elapsed.
This contrasts with the token ring, in which
higher priority traffic can prevent a station from ever transmitting.
Note that the token bus approach also leads to starvation, but under
different conditions.
- When a station fails (or is powered down), it must leave the
ring. Note that a protocol is needed to gracefully add or remove
oneself from the logical ring and also to detect ungraceful exits of
failed nodes.
- When a station joins the network, it must join the ring.
Special tokens are periodically broadcast to allow new stations to enter
the ring. New stations reply with their own address and their successor's
address. A special token can receive 0, 1, or multiple replies; if there
are multiple replies, an arbitration algorithm similar to binary
exponential backoff must be used.
- Because tokens can become damaged, a protocol is needed to
regenerate lost tokens.
- Protocol must deal with duplicate tokens. For example, a
station might incorrectly regenerate a token.
An alternative topology is to connect stations together in a
ring, with each station allowed to transmit in turn.
Token rings behave like broadcast networks in that every station
``sees'' (and processes) each frame.
However, they are actually built out of a collection of point-to-point
links connected in a ring.
- Any technology can be used for individual links. Examples? Twisted
pair, fiber, etc.
- Unlike 802.3, stations are active rather than
passive -- they participate in the network's operation. That
is, a node receives a frame from its upstream neighbor and copies the
frame to its downstream neighbor.
- A special control frame, called a token, circulates from
station to station:
- A station can transmit only when it is in possession of the
token. Once a station has ``seized'' the token, it may transmit
one (or more) frames.
- A station can transmit only a single or limited number of frames before
forwarding the token on. This limits the time a station can hold
the token, placing an upper bound on the time a station has to wait
before it can seize the token.
- Because only one station transmits at a time, no collisions occur.
- Data travels in only one direction, and transmission is totally digital.
That is, each station regenerates the bits as they pass through, rather
than amplifying an analog signal.
- Each station adds only a single 1-bit delay to each frame; as
soon as it has received a bit, it copies it to the outgoing link. That
is, a station regenerates a bit on its outgoing link as soon as it has
received the bit on its incoming link. It does not wait for the entire
frame before starting to forward the frame on.
- Depending on ring length, the ring can hold many frames, or a
fraction of a single frame.
- The network must handle the case of a station being powered off,
thereby ``breaking'' the ring. In that case, the station's interface must
connect its input directly to its output, removing the 1-bit delay.
A station can be in one of two modes:
- In listen mode, simply copy frames from the input line to the
output line. Note: Copy takes place at bit level, not frame
level.
- In transmit mode, first seize the token and then:
- Break the ``copy'' connection from the input-to-output line.
- Transmit frame, removing the bits as they come back on the
receive line.
- After having received a copy of the entire frame, restore
the ``copy'' connection (e.g., revert to listen mode).
- Regenerate token.
Note: Adding acknowledgments would be trivial. The intended
recipient could simply flip a bit to indicate that it had received the
frame.
What is the bound on the transmission delay time? That is, how
long will a station wait before it is allowed to transmit?
nT, where n is the number of stations, and T is the time to
transmit one frame. In the worst case, a station may have to
wait for every other station to transmit a single frame before it can
transmit.
Overall performance:
- Under light loads, token continually circulates, and a station
can grab it quickly. However, the delay will be slightly longer
than that for an Ethernet, which does not wait at all.
- Under heavy loads, token moves from one station to the next,
giving each station an opportunity to transmit. Note that
collisions cannot occur!
IEEE 802.5 has defined a token ring standard:
- 1 and 4 Mbps versions have been standardized. Note, however, that
once again the standard lags behind the technology. 80Mbps token ring
LANs have been available for several years.
- IBM version is NOT the same as 802.5.
- Uses twisted pair technology for links.
In 802.5 each token frame is three bytes long:
- A 1-byte delimiter signals the start of a frame.
- A 1-byte access control field contains priority information.
- A 1-byte frame control field (distinguishes data frames from
control frames).
Note: Token frames differ from regular frames by only 1 bit in the
frame control field. Thus, seizing the token consists of
flipping a single bit during the copy operation.
After seizing the token, a station may transmit data for no longer
than the token holding time, typically about 10 ms.
Each regular frame has the following format:
- 1-byte start of frame, 1-byte access control, and 1-byte control
field (e.g., a ``modified token'').
- 6 (or 2) byte destination address. (Same as 802.3.)
- 6 (or 2) byte source (sender) address. (Same as 802.3.)
- Data itself. No length field is included. Special
delimiters flag the end of data.
- 4-byte checksum. (Same as 802.3.)
- 1-byte ending delimiter.
Note: The ending delimiter contains an invalid Manchester encoding
that can never appear in the data part of the frame.
- 1-byte frame status indication containing:
- A-bit: Set if the receiving station recognized the frame.
E.g., the receiving station is powered up and knows that the
frame was addressed to it.
- C-bit: Set if the receiving station copied the frame into its
memory. E.g., a hardware-level acknowledgment, frame was copied
into memory of network interface card (but not necessarily the
machine's memory).
The access control field in the token is used to support priorities.
If a station wishes to send a frame at priority n:
- It must first seize a token of the same (or lower) priority as
the frame it wishes to send. That is, if the token contains a
higher priority, another station has a higher priority frame to send.
- It attempts to reserve the token by writing the desired priority into
the access control field of a data frame that passes by.
However, a station cannot replace a higher priority with a lower
priority.
- Another station wishing to send a higher priority frame
can overwrite the access control field with the higher priority value.
- When regenerating the token,
set its priority to that in the (modified) access control field of the
frame it receives. Another node may have raised the priority
from what it had been.
Note that high priority traffic can starve out low
priority traffic. A station that always has high priority
traffic to send can prevent another station from ever transmitting.
Also, additional rules are required to restore priorities to their
former state.
Unfortunately, ring networks are subject to wire breaks (or
station failures) -- single points of failure. One solution is a
wire center:
- Convert the ring topology into a star topology. The wire
center acts as the hub of the star.
- Each station connects to the wire center over two links, one for
receiving and one for sending.
- The wiring center uses relays powered by the station: If
the station shuts off, remove it from the ring. That is, when
the power source vanishes, the relay effectively bypasses the station,
removing it from the ring.
- Each station is still part of the logical ring; thus, multiple
wire centers can be connected together. The existence of a wire
center is transparent to the stations. That is, stations do not need
to be aware that a wire center is present.
Token ring networks must perform ring maintenance
functions to handle such problems as token
regeneration. In 802.5, each network has a monitor
station:
- The monitor station is solely responsible for proper ring
maintenance. (It is also a single point of failure should it
``misbehave''.) It does the following:
- Regenerates lost tokens (which is easy to do; just drain all
frames from the ring and generate a new token).
- Detects (and removes) orphan frames, endlessly
circulating runt frames that result when a station fails while it is
transmitting and the wiring center removes it from the ring.
- Yet another protocol (using claim tokens)
is used to select the monitor station.
Since tokens (potentially) circulate forever, the entire token must
``fit'' in the ring. That is, the ring delay must be larger than
one token time. Unlike regular frames, token frames potentially
circulate forever. Because the sender doesn't pull them off the ring,
the first bit of the token frame can't return to the sender until
after it has sent the last bit of the token frame.
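The ring's bit capacity is easy to estimate: the bits "in flight" on the
cable plus one bit of delay per station (the ring length, station count,
and propagation speed below are illustrative; 802.5 tokens are 3 bytes,
i.e., 24 bits):

```python
# How many bits the ring can hold at once: propagation delay * bit rate,
# plus the 1-bit delay each station interface contributes.
def ring_bit_capacity(ring_m, n_stations, bit_rate, prop_speed_mps=2e8):
    return (ring_m / prop_speed_mps) * bit_rate + n_stations

# A 200 m, 4 Mbps ring with 20 stations holds about 24 bits -- just barely
# enough room for a 24-bit token to circulate.
print(ring_bit_capacity(200, 20, 4e6))  # ~24 bits
```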
Slotted rings are a variation of the token ring
approach that are particularly useful in long-delay rings:
- Data frames are contained in the token frame itself. That
is, the data frame is piggy backed on an existing token.
- The token contains a used/empty bit; when an ``empty'' token goes
by, a station can set it to ``used'' and fill it with data.
- Beneficial in long-delay rings. Why? They allow multiple
tokens and frames to travel within the ring simultaneously.
With slotted rings, frames are subject to maximum size restrictions.
We don't want large packets to ``overflow'' the ring. All
packets must be able to fit in the ring at the same time.
802.3 (Ethernet):
- Most widely used of the three technologies.
- Low delay under light loads.
- Simple to install and maintain. (Operations issue.)
- Minimum frame size of 64 bytes is large when
carrying single byte keystrokes.
- Non-deterministic delay characteristics.
- Limited to 10 Mbps and short distances.
- No support for multiple priorities.
802.4:
- Uses standard analog technology.
- More deterministic access time (but not 100% either, due to
ring maintenance overhead). Indeed, in worst case scenarios,
access time is no better than for 802.3.
- Supports priorities.
- Works well under heavy load.
- Relatively high delay at low loads.
- Can be mixed with TV and voice equipment.
802.5:
- All digital. 802.3 requires analog circuitry to
handle collision detection.
- Able to make use of fiber optic technology over long
distances. This is particularly important in WANs.
- Supports both very large and small frames.
- High channel utilization at high loads.
- Low delays at low loads, bounded delay in worst case.
(Assuming frames have upper limits on size.)
- Ring monitor is a single point of failure.
Finally, any specific performance analysis or traffic mix can
make one technology look ``good'' and another look ``bad''.
When choosing a system, all factors should be considered.
The Fiber Distributed Data Interface (FDDI) is a standard
for token rings using fiber for individual links:
- supports data rates of 100 Mbps over 200 km
- uses LEDs rather than lasers (lasers are potentially dangerous)
- has a bit error rate of only 1 in 2.5 × 10^10 bits
- cabling consists of a pair of rings (one for each direction);
two rings are specified so that the second ring can be used to
reconfigure the network in the event of a station or link failure
The protocol itself is similar to 802.5, but has the following differences:
- 802.5 requires that the sender receive its own frame before
regenerating the token; because of FDDI's long ring length, the sender
regenerates the token as soon as its frame is out its interface,
allowing the ring to carry multiple token/data frames simultaneously
- FDDI can also carry synchronous frames for PCM or ISDN traffic
FDDI is waiting for applications to catch up -- similar to Ethernet, which
people said was too complicated when it came out.
The logical link layer (802.2) sits on top of the MAC layer to provide the
equivalent of the DLL (Fig 4-33).
Service options:
- Unreliable datagram service
- Acknowledged datagram service
- Reliable connection-oriented service
The translation was initiated by Craig Wills on Mon Oct 18 17:32:36 EDT 1999