Networking Today:
Indeed, the network is the computer!
Definition of a computer network: an interconnected collection of autonomous computers.
interconnected--capable of exchanging messages.
autonomous--do not control one another.
There are two aspects to computer networks:
Hardware ``physically'' connects machines to one another (allows signals to be sent).
Protocols specify the services the network provides. Protocols make the hardware usable by programmers and applications software.
Which is more important? They are both important. However, software is more important because it is the software that programmers and end users see and use. In addition, by careful design, the hardware can be hidden from the user, making the hardware even less important. Of course, software will have a difficult time overcoming hardware deficiencies.
Analogy: what's more important, a machine's hardware architecture or its operating system? A machine's specific instruction set or its compilers?
There are three primary ways in which networks are used by computers.
Each of these works with the client-server paradigm (see Fig 1-1). You will gain experience with this approach in this class.
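The client-server paradigm can be sketched with TCP sockets on the loopback interface. This is a minimal illustration, not a production pattern: the port is chosen by the OS, the request/reply contents are made up, and a single `recv` is assumed to return the whole (tiny) message.

```python
import socket
import threading

# Server side: create a listening socket; binding to port 0 lets the OS
# pick a free port, so the example is self-contained.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def serve():
    """Server: accept one connection, read the request, send a reply."""
    conn, _ = srv.accept()
    request = conn.recv(1024)              # tiny message: one recv suffices
    conn.sendall(b"reply to " + request)   # perform the "service"
    conn.close()

# Run the server in a background thread so the client can run below.
t = threading.Thread(target=serve)
t.start()

# Client side: connect to the server, send a request, wait for the reply.
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"GET data")
reply = cli.recv(1024)
cli.close()
t.join()
srv.close()
```

The key asymmetry: the server passively waits for requests, while the client actively initiates the exchange.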
They make more cost-effective use of hardware and software, as well as creating new benefits.
``Information Superhighway''
Networks are changing not only computer science but all of society. Typical uses of networks include:
A host or end-system refers to a computer that a user logs into to do their work. Although a host is attached to a network, it is not usually part of the network itself.
The communications subnet refers to everything between the hosts. The subnet is responsible for transporting data from one host to another. Look at Figure 1-5.
Although many subnet designs exist, they generally fall into two broad classes (or a combination of the two):
Of course, a single point-to-point link isn't all that useful. Usually, a machine will connect to several links, resulting in a graph topology. Each machine (called a packet switch) ``switches'' packets from one line to another. Fig 1-6.
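The switching idea can be sketched with per-switch next-hop tables: each switch consults only its own table to decide which line a packet goes out on. The topology and switch names below are invented for illustration (A--B, B--C, B--D).

```python
# Each switch's table maps a destination to the next hop (outgoing line).
# Topology assumed here: A--B, B--C, B--D (a small star around B).
next_hop = {
    "A": {"B": "B", "C": "B", "D": "B"},
    "B": {"A": "A", "C": "C", "D": "D"},
    "C": {"A": "B", "B": "B", "D": "B"},
    "D": {"A": "B", "B": "B", "C": "B"},
}

def route(src, dst):
    """Hop-by-hop forwarding: repeatedly look up the destination in the
    current switch's table until the packet arrives."""
    path = [src]
    while path[-1] != dst:
        path.append(next_hop[path[-1]][dst])
    return path
```

For example, a packet from A to D crosses B on the way: `route("A", "D")` yields the path `["A", "B", "D"]`. No switch knows the whole path; each makes a purely local decision.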
Broadcast networks have the advantage that they efficiently support multicasting: sending a single packet to a group of machines on the network. This contrasts with non-broadcast networks, which must send the same packet separately to each machine in the group to achieve the same outcome.
Note that broadcasting is a special case of multicasting, in which all machines receive the packet.
Unicasting refers to the normal case, in which a packet is intended for a single recipient.
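The three delivery styles, and the cost difference between broadcast and point-to-point networks, can be sketched as follows. The machine names and the simple transmission-count model are illustrative assumptions.

```python
def delivery_type(recipients, all_machines):
    """Unicast: one recipient; broadcast: every machine on the network;
    multicast: a group in between (broadcast is thus a special case of
    multicast in which the group is everyone)."""
    if len(recipients) == 1:
        return "unicast"
    if set(recipients) == set(all_machines):
        return "broadcast"
    return "multicast"

def transmissions(recipients, has_broadcast_channel):
    """A broadcast channel reaches the whole group with one transmission;
    a point-to-point network must repeat the packet per recipient."""
    return 1 if has_broadcast_channel else len(recipients)
```

So sending to three machines costs one transmission on a broadcast network but three on a point-to-point network.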
Have done some research work with multicasting.
Primary types of networks:
A standard exists: IEEE 802.6--Distributed Queue Dual Bus (DQDB). It uses two broadcast buses, one for each direction.
Look at Figure 1-7.
Building networks is a complex problem. Where do we start?
Use divide-and-conquer: Break the problem into smaller problems (subroutines or objects), solve the individual problems, and combine them into a final solution.
Networking extends this concept into layering. Starting with the hardware, we build successive layers on top of the lower layer. The idea is to build increasingly sophisticated services, using only the services provided by the layer immediately underneath.
Conceptually, layer N communicates with its remote peer (counterpart layer) on a remote machine. In reality, only the lowest layer physically transmits data. See Fig. 1-9.
Each layer defines a protocol that describes the messages exchanged with its peer on the remote machine.
A crucial point in achieving good layering is abstraction -- providing a well-defined service. When using the services of layer N, we are not concerned with how the service is actually implemented; we only need to know how to invoke it. Thus, the service had better be easy to use and allow the layer above it to build a better service.
Each layer provides an interface that specifies how its services are accessed by the layers above (and below) it. The interface should never change -- changing it affects the layer above it. The actual implementation of the service, however, can change.
How easily can we change the underlying network from point-to-point to broadcast? If the layers are designed properly, the change will be transparent to the upper layers.
The layers and protocols form what is called a network architecture. Look at Fig. 1-11.
When discussing layering in network protocols, two concepts are fundamental: messages and encapsulation. Each layer deals with messages. Messages:
When layer N accepts data from layer N+1 (above it), it encapsulates the entire layer N+1 message in the data portion of the layer N packet. It should never look inside the data portion of the message!
When the remote peer receives a message, it strips off the header information and passes only the data to the next higher layer.
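Encapsulation and decapsulation can be sketched with plain strings standing in for headers. The layer names and the `|` separator are illustrative, not any real packet format.

```python
def encapsulate(data, headers_top_down):
    """Each layer wraps the ENTIRE message from the layer above in its own
    header, never looking inside the data portion. Headers are listed from
    the highest layer (closest to the data) down to the lowest."""
    for h in headers_top_down:
        data = h + "|" + data
    return data

def decapsulate(packet):
    """Strip off one header and pass only the remaining data up a layer."""
    header, _, rest = packet.partition("|")
    return header, rest

# Sending side: the transport header goes on first, the link header last,
# so the lowest layer's header ends up outermost on the wire.
wire = encapsulate("hello", ["TRANSPORT", "NETWORK", "LINK"])
# wire == "LINK|NETWORK|TRANSPORT|hello"
```

On the receiving side, each call to `decapsulate` peels one header and hands the rest up, mirroring the sending stack in reverse.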
As a first step in standardization, the International Standards Organization (ISO) developed a seven-layer model known as the ISO Open Systems Interconnection (OSI) reference model.
The lowest layer, the physical layer, is concerned with transmitting raw bits over a communication channel. It is concerned with ensuring that when one side sends a ``1'' bit, the other side receives a ``1'' bit.
The physical layer is usually the focus of an electrical engineer and deals with such questions as:
The data link layer transforms the raw transmission facility of the physical layer into an error-free channel. It deals with communication between two machines sharing a common physical channel. It:
The network layer controls operation of the subnet (communication between hosts). It:
The transport layer makes sure data gets delivered to a specific process on a specific machine. It:
The session layer allows users to establish sessions between them. It:
The presentation layer performs services that are requested often enough to warrant development of a general solution. For instance:
The application layer refers to the user programs themselves. Look at Figure 1-16.
Came out of work on ARPANET (predecessor to Internet). Not so much a model, but a base set of protocols. Uses a connectionless network layer. Look at Figure 1-19. Will explore all levels in more detail.
The protocols came first, whereas with the OSI reference model, the model came first. The OSI model provides a nice basis for talking about networks; TCP/IP provides better protocols for actually using them.
OSI got ``crushed'' because TCP/IP was already a working protocol. See Fig. 1-20. Dave Clark offers an observation dubbed ``apocalypse of the two elephants'':
Tanenbaum offers critiques of each model. Settles on a ``model'' of leaving out session and presentation layers and organizing material around other layers.
There are two primary styles of network communication:
Connectionless service is often called unreliable datagram (or just datagram) service because there is no guarantee that messages will be delivered correctly. Indeed, the main point of a ``connection'' is to ensure that the network can guarantee that packets will be delivered in a known order. The term unreliable refers to delivery not being acknowledged, and hence, there is no way to know for sure that the data has been delivered. The IP protocol is an example.
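A connectionless exchange can be sketched with UDP sockets on the loopback interface. Note the hedge: loopback delivery is effectively reliable, so this only illustrates the *interface* (no setup, no acknowledgement); a real network may drop, duplicate, or reorder datagrams.

```python
import socket

# Receiver: a datagram socket bound to a port the OS picks for us.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
addr = recv_sock.getsockname()

# Sender: no connection setup, no handshake -- just fire off a datagram.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"datagram", addr)   # no ack will ever come back

data, _ = recv_sock.recvfrom(1024)    # each recvfrom yields one datagram
send_sock.close()
recv_sock.close()
```

Contrast this with the connection-oriented (TCP) case, where a connection must be established before any data flows and delivery order is guaranteed.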
When thinking about network protocols, it is important to distinguish between services and protocols. A service is a high-level, abstract description of the functionality provided, while a protocol is the specific set of rules that specify the meaning of messages exchanged by peer entities. A protocol implements a service.
Novell Netware is widely popular for LAN communication among PCs. Built on the connectionless network layer protocol IPX. Above IPX is the connection-oriented protocol NCP (Network Core Protocol). SPX (a connection-oriented transport protocol) is also available. Similar to TCP/UDP/IP. Look at Figure 1-22.
Servers advertise services (using Service Advertising Protocol (SAP)) periodically, which are picked up by routers to pass on client requests.
Example: the Arpanet was a packet-switched network built in the late 1960's by the Defense Department's Advanced Research Projects Agency (ARPA). It used 56 kbps leased lines to connect packet switches called PSNs (or, in Tanenbaum's terminology, Interface Message Processors--IMPs), which routed messages through the subnet.
Goal: Serve as a testbed for resource sharing and protocol development (e.g. file transfer, mail, remote login). At the time it was proposed and built, few persons really had any idea what (if anything!) networks would be good for.
The Arpanet was a single physical network (providing a service similar to an Ethernet), meaning that only hosts connected directly to the Arpanet could communicate. The Arpanet established the US's leadership in the development of packet-switching technology.
Not directly.
To tie networks together (SATNET, WIDEBAND, MILNET), DARPA developed a technology known as internetworking, protocols that combine independent physical networks into a single virtual network.
The set of internetworking protocols developed by the DoD are known as the TCP/IP protocol suite. The suite includes:
In the early 90s, the NSFnet was the main cross-country backbone network. Like the Arpanet, it provided packet switching.
Now commercial IP services are available to interconnect the regional networks.
Government-sponsored networks, along with commercial and regional networks, many of which communicate using TCP/IP, are collectively known as the Internet.
USENET is a nationwide network consisting primarily of Unix machines. Distinguishing characteristics:
USENET is used primarily for the exchange of mail and the distribution of network news.
ATM networks are connection-oriented. Speeds of 155 Mbps to 622 Mbps. High-speed packet switching. Reference Model is in Fig. 1-31.
Comparison given in Fig. 1-32.