Computer Networks



NANNAPANENI VENKATRAO COLLEGE OF ENGINEERING AND TECHNOLOGY

DEPARTMENT OF COMPUTER SCIENCE

SUBJECT: Computer Networks    SUBJECT IN-CHARGE: G. Varaprasad

Unit-1

1. List two advantages and two disadvantages of having international standards for network protocols?

Ans: A network protocol defines rules and conventions for communication between network devices. Protocols for computer networking all generally use packet switching techniques to send and receive messages in the form of packets.

Network protocols include mechanisms for devices to identify and make connections with each other, as well as formatting rules that specify how data is packaged into messages sent and received. Some protocols also support message acknowledgement and data compression designed for reliable and/or high-performance network communication. Hundreds of different computer network protocols have been developed, each designed for specific purposes and environments.
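
As a toy illustration of how a protocol "packages data into messages", the short Python sketch below packs a source address, a destination address, a sequence number, and a payload length into a fixed binary header and parses it back out. The header layout is hypothetical, invented purely for this example; it is not the format of any real protocol.

    import struct

    # Hypothetical 12-byte header: 4-byte source, 4-byte destination,
    # 2-byte sequence number, 2-byte payload length (all big-endian).
    HEADER_FORMAT = "!IIHH"

    def make_packet(src, dst, seq, payload):
        header = struct.pack(HEADER_FORMAT, src, dst, seq, len(payload))
        return header + payload

    def parse_packet(packet):
        header_size = struct.calcsize(HEADER_FORMAT)
        src, dst, seq, length = struct.unpack(HEADER_FORMAT, packet[:header_size])
        return src, dst, seq, packet[header_size:header_size + length]

    pkt = make_packet(0x0A000001, 0x0A000002, 7, b"hello")
    print(parse_packet(pkt))   # (167772161, 167772162, 7, b'hello')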

The importance of standards in the field of communication cannot be overstressed. Standards enable equipment from different vendors and with different operating characteristics to become components of the same network. Standards also enable different networks in different geographical locations (e.g., different countries and continents) to be interconnected. From a customer's point of view, standards mean real cost savings: the same end-user device can be used for access to a variety of networks and services. On the other hand, standards also have disadvantages: they tend to freeze the technology at the point at which the standard is agreed, and the standardization process itself can be slow and driven by compromise.

Standards are developed by national and international organizations established for this exact purpose. Important standards developed by various organizations include the following:

• The International Organization for Standardization (ISO) has already been mentioned. This is a voluntary organization with representation from the national standards organizations of member countries (e.g., ANSI), major vendors, and end-users. ISO is active in many areas of science and technology, including information technology. ISO standards are published as ISO serial-no (e.g., ISO 8632).

• The Consultative Committee for International Telegraph and Telephone (CCITT) is a standards organization devoted to data and telecommunication, with representation from governments, major vendors, telecommunication carriers, and the scientific community. CCITT standards are published as Recommendation L.serial-no, where L is a letter of the alphabet (e.g., I.440). These are revised and republished every four years. CCITT standards are very influential in the field of telecommunications and are adhered to by most vendors and carriers.

• The Institute of Electrical and Electronics Engineers (IEEE) is a US standards organization with members throughout the world. IEEE is active in many electrical and electronics-related areas. The IEEE standards for local area networks are widely adopted. IEEE standards are published as IEEE serial-no (e.g., IEEE 908).

• The Electronic Industries Association (EIA) is a US trade association best known for its EIA-232 standard.

• The European Computer Manufacturers Association (ECMA) is a standards organization involved in the area of computer engineering and related technologies. ECMA directly cooperates with ISO and CCITT.

In addition to these organizations, and because of their global market influence, large vendors occasionally succeed in establishing their products as de facto standards.

2. Explain the types of networks?

Ans: One way to categorize the different types of computer network designs is by their scope or scale. For historical reasons, the networking industry refers to nearly every type of design as some kind of area network. Common examples of area network types are:

• LAN - Local Area Network

• WLAN - Wireless Local Area Network

• WAN - Wide Area Network

• MAN - Metropolitan Area Network

• SAN - Storage Area Network, System Area Network, Server Area Network, or sometimes Small Area Network

• CAN - Campus Area Network, Controller Area Network, or sometimes Cluster Area Network

• PAN - Personal Area Network

• DAN - Desk Area Network

LAN and WAN were the original categories of area networks, while the others have gradually emerged over many years of technology evolution.

LAN - Local Area Network

A LAN connects network devices over a relatively short distance. A networked office building, school, or home usually contains a single LAN, though sometimes one building will contain a few small LANs (perhaps one per room), and occasionally a LAN will span a group of nearby buildings. In TCP/IP networking, a LAN is often but not always implemented as a single IP subnet.

In addition to operating in a limited space, LANs are also typically owned, controlled, and managed by a single person or organization. They also tend to use certain connectivity technologies, primarily Ethernet and Token Ring.


WAN - Wide Area Network

As the term implies, a WAN spans a large physical distance. The Internet is the largest WAN, spanning the Earth.

A WAN is a geographically-dispersed collection of LANs. A network device called a router connects LANs to a WAN. In IP networking, the router maintains both a LAN address and a WAN address.

A WAN differs from a LAN in several important ways. Most WANs (like the Internet) are not owned by any one organization but rather exist under collective or distributed ownership and management. WANs tend to use technology like ATM, Frame Relay and X.25 for connectivity over the longer distances.

LAN, WAN and Home Networking

Residences typically employ one LAN and connect to the Internet WAN via an Internet Service Provider (ISP) using a broadband modem. The ISP provides a WAN IP address to the modem, and all of the computers on the home network use LAN (so-called private) IP addresses. All computers on the home LAN can communicate directly with each other but must go through a central gateway, typically a broadband router, to reach the ISP.
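
The split between private LAN addresses and the public WAN address can be checked programmatically. The sketch below uses Python's standard ipaddress module; the addresses are illustrative examples only.

    import ipaddress

    # Typical home-LAN (private) addresses compared with a public address.
    for addr in ["192.168.1.10", "10.0.0.5", "172.16.4.20", "8.8.8.8"]:
        ip = ipaddress.ip_address(addr)
        kind = "private (LAN)" if ip.is_private else "public (WAN)"
        print(f"{addr:>15}  ->  {kind}")
    # The first three fall in the private ranges; 8.8.8.8 is a public address.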

Other Types of Area Networks

While LAN and WAN are by far the most popular network types mentioned, you may also commonly see references to these others:

• Wireless Local Area Network - a LAN based on WiFi wireless network technology


• Metropolitan Area Network - a network spanning a physical area larger than a LAN but smaller than a WAN, such as a city. A MAN is typically owned and operated by a single entity such as a government body or a large corporation.

• Campus Area Network - a network spanning multiple LANs but smaller than a MAN, such as on a university or local business campus.

• Storage Area Network - connects servers to data storage devices through a technology like Fibre Channel.

• System Area Network - links high-performance computers with high-speed connections in a cluster configuration. Also known as Cluster Area Network.

3. Explain the ISO-OSI layered architecture model?

Ans: According to the ISO standard, the network architecture is divided into 7 layers, depending on the complexity of the functionality each of these layers provides. A detailed description of each of these layers is given below. We will first list the layers as defined by the standard, in increasing order of functional complexity:

1. Physical Layer

2. Data Link Layer

3. Network Layer

4. Transport Layer

5. Session Layer

6. Presentation Layer

7. Application Layer

Physical Layer

This layer is the lowest layer in the OSI model. It helps in the transmission of data between two machines that are communicating through a physical medium, which can be optical fibre, copper wire, wireless, etc. The following are the main functions of the physical layer:

1. Hardware Specification: The details of the physical cables, network interface cards, wireless radios, etc. are a part of this layer.

[Figure: examples of physical-layer hardware – coaxial cable, hybrid cable, wireless card, network card]

2. Encoding and Signalling: How the bits are encoded in the medium is also decided by this layer. For example, on a copper wire medium, we can use different voltage levels for a certain time interval to represent '0' and '1'. We may use +5mV for 1 nsec to represent '1' and -5mV for 1 nsec to represent '0' (see the sketch after this list). All the issues of modulation are dealt with in this layer; e.g., we may use binary phase shift keying to represent '1' and '0' rather than different voltage levels if we have to transmit over RF waves.

[Figure: Binary Phase Shift Keying]

3. Data Transmission and Reception: The transfer of each bit of data is the responsibility of this layer. This layer assures the transmission of each bit with a high probability. The transmission of the bits is not completely reliable, as there is no error correction in this layer.

4. Topology and Network Design: The network design is an integral part of the physical layer. Where in the network the router is going to be placed, where the switches will be used, where we will put the hubs, how many machines each switch is going to handle, which server is going to be placed where: many such concerns are taken care of at this level. The various kinds of topologies that we may decide to use can be ring, bus, star, or a hybrid of these, depending on our requirements.
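
The voltage-level encoding mentioned in point 2 above can be sketched in a few lines of Python. This is a toy model only (real line codes such as NRZ or Manchester encoding are more involved); the +5/-5 levels simply follow the example in the text.

    def encode_bits(bits, high=+5, low=-5):
        """Map '1' -> +5 (mV) and '0' -> -5 (mV), one level per bit interval."""
        return [high if b == "1" else low for b in bits]

    def decode_levels(levels):
        """Recover the bit string by thresholding each sampled level at 0."""
        return "".join("1" if v > 0 else "0" for v in levels)

    signal = encode_bits("10110010")
    print(signal)                 # [5, -5, 5, 5, -5, -5, 5, -5]
    print(decode_levels(signal))  # '10110010'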


Data Link Layer

This layer provides reliable transmission of a packet by using the services of the physical layer which transmits bits over the medium in an unreliable fashion. This layer is concerned with :

1. Framing : Breaking the input data into frames (typically a few hundred bytes) and taking care of the frame boundaries and the size of each frame.

2. Acknowledgment : Sent by the receiving end to inform the source that the frame was received without any error.

3. Sequence Numbering : To acknowledge which frame was received.

4. Error Detection : The frames may be damaged, lost or duplicated, leading to errors. Error control is on a link-to-link basis.

5. Retransmission : The packet is retransmitted if the source fails to receive acknowledgment.

6. Flow Control : Necessary to prevent a fast transmitter from swamping a slow receiver.

[Figure: Data Link Layer]
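
The sketch below ties several of these functions together in a highly simplified way: each frame carries a sequence number and a checksum, the receiver accepts only frames whose checksum verifies (and would acknowledge them by sequence number), and a damaged frame would trigger retransmission. It is an illustrative toy in Python, not a real link-layer protocol.

    import struct

    def checksum(data):
        """Toy error-detection code: sum of all payload bytes modulo 65536."""
        return sum(data) % 65536

    def make_frame(seq, payload):
        # Frame = 2-byte sequence number + 2-byte checksum + payload.
        return struct.pack("!HH", seq, checksum(payload)) + payload

    def receive_frame(frame):
        seq, chk = struct.unpack("!HH", frame[:4])
        payload = frame[4:]
        if checksum(payload) != chk:
            return None            # damaged frame: no acknowledgment is sent
        return seq                 # acknowledgment carries the sequence number

    frame = make_frame(3, b"some data")
    print(receive_frame(frame))                 # 3    -> ACK for frame 3
    print(receive_frame(frame[:-1] + b"\x00"))  # None -> sender must retransmit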

Network Layer

Its basic functions are routing and congestion control.

Routing: This deals with determining how packets will be routed (transferred) from source to destination. It can be of three types :

• Static : Routes are based on static tables that are "wired into" the network and are rarely changed.

• Dynamic : All packets of one application can follow different routes depending upon the topology of the network, the shortest path and the current network load.

• Semi-Dynamic : A route is chosen at the start of each conversation and then all the packets of the application follow the same route.

[Figure: Routing]
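
A static routing table of the kind described above can be pictured as a simple lookup structure. The sketch below is purely illustrative (the networks and next-hop names are made up); it chooses the outgoing hop for a destination by longest-prefix match using Python's ipaddress module.

    import ipaddress

    # Hypothetical static routing table: destination network -> next hop.
    ROUTES = {
        ipaddress.ip_network("10.1.0.0/16"): "router-A",
        ipaddress.ip_network("10.1.2.0/24"): "router-B",   # more specific route
        ipaddress.ip_network("0.0.0.0/0"):   "default-gw",
    }

    def next_hop(destination):
        dst = ipaddress.ip_address(destination)
        matches = [net for net in ROUTES if dst in net]
        best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
        return ROUTES[best]

    print(next_hop("10.1.2.7"))   # router-B
    print(next_hop("10.1.9.9"))   # router-A
    print(next_hop("8.8.8.8"))    # default-gw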

The services provided by the network can be of two types :

• Connection less service: Each packet of an application is treated as an independent entity. On each packet of the application the destination address is provided and the packet is routed.

• Connection oriented service: Here, first a connection is established and then all packets of the application follow the same route. To understand the above concept, we can also draw an analogy from real life. Connection oriented service is modeled after the telephone system. All voice packets go on the same path after the connection is established, till the connection is hung up. It acts like a tube; the sender pushes the objects in at one end and the receiver takes them out in the same order at the other end. Connection less service is modeled after the postal system. Each letter carries the destination address and is routed independently of all the others. Here, it is possible that the letter sent first is delayed so that the second letter reaches the destination before the first letter.

Congestion Control: A router can be connected to 4-5 networks. If all the networks send packets at the same time at the maximum rate possible, then the router may not be able to handle all the packets and may drop some or all packets. In this context the dropping of packets should be minimized, and the source whose packet was dropped should be informed. The control of such congestion is also a function of the network layer. Other issues related to this layer are transmission time, delays, and jitter.

Internetworking: Internetworks are multiple networks that are connected in such a way that they act as one large network, connecting multiple office or department networks. Internetworks are connected by networking hardware such as routers, switches, and bridges. Internetworking is a solution born of three networking problems: isolated LANs, duplication of resources, and the lack of a centralized network management system. With connected LANs, companies no longer have to duplicate programs or resources on each network. This in turn gives way to managing the network from one central location instead of trying to manage each separate LAN. We should be able to transmit any packet from one network to any other network even if they follow different protocols or use different addressing modes.

[Figure: Inter-Networking]

Network Layer does not guarantee that the packet will reach its intended destination. There are no reliability guarantees.

Transport Layer

Its functions are :

• Multiplexing / Demultiplexing : Normally the transport layer will create a distinct network connection for each transport connection required by the session layer. The transport layer may either create multiple network connections (to improve throughput) or it may multiplex several transport connections onto the same network connection (because creating and maintaining network connections may be expensive). In the latter case, demultiplexing will be required at the receiving end. A point to note here is that communication is always carried out between two processes and not between two machines. This is also known as process-to-process communication.

• Fragmentation and Re-assembly : The data accepted by the transport layer from the session layer is split up into smaller units (fragmentation) if needed and then passed to the network layer. Correspondingly, the data provided by the network layer to the transport layer on the receiving side is re-assembled.

[Figure: Fragmentation and Reassembly]

• Types of service : The transport layer also decides the type of service that should be provided to the session layer. The service may be perfectly reliable, or may be reliable within certain tolerances or may not be reliable at all. The message may or may not be received in the order in which it was sent. The decision regarding the type of service to be provided is taken at the time when the connection is established.

• Error Control : If reliable service is provided, then error detection and error recovery operations are also performed. It provides an error control mechanism on an end-to-end basis.

• Flow Control : A slow host cannot keep pace with a fast one. Hence, this is a mechanism to regulate the flow of information so that a fast sender does not swamp a slow receiver.

• Connection Establishment / Release : The transport layer also establishes and releases the connection across the network. This requires some sort of naming mechanism so that a process on one machine can indicate with whom it wants to communicate.
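
Fragmentation and re-assembly, as described in the list above, can be sketched in a few lines. This is an illustrative toy in Python, not the behaviour of any particular transport protocol: the sender splits a message into fixed-size fragments tagged with their offsets, and the receiver rebuilds the message even if the fragments arrive out of order.

    def fragment(message, size):
        """Split a message into (offset, chunk) fragments of at most `size` bytes."""
        return [(i, message[i:i + size]) for i in range(0, len(message), size)]

    def reassemble(fragments):
        """Rebuild the original message, whatever order the fragments arrive in."""
        return b"".join(chunk for _, chunk in sorted(fragments))

    msg = b"transport layer fragmentation demo"
    frags = fragment(msg, 8)
    frags.reverse()                  # simulate out-of-order arrival
    print(reassemble(frags) == msg)  # True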

Session Layer

It deals with the concept of sessions, i.e., when a user logs in to a remote server, he should be authenticated before getting access to the files and application programs. Another job of the session layer is to establish and maintain sessions. If during the transfer of data between two machines the session breaks down, it is the session layer which re-establishes the connection. It also ensures that the data transfer restarts from where it broke off, keeping this transparent to the end user. For example, in the case of a session with a database server, this layer introduces checkpoints at various places so that in case the connection is broken and re-established, the transaction running on the database is not lost even if the user has not committed. This activity is called Synchronization. Another function of this layer is Dialogue Control, which determines whose turn it is to speak in a session. It is useful in video conferencing.

Presentation Layer

This layer is concerned with the syntax and semantics of the information transmitted. In order to make it possible for computers with different data representations to communicate, the data structures to be exchanged can be defined in an abstract way along with a standard encoding. This layer also manages these abstract data structures and allows higher-level data structures to be defined and exchanged. It encodes the data in a standard agreed way (the network format). Suppose there are two machines A and B, where one follows 'Big Endian' and the other 'Little Endian' data representation; this layer ensures that data transmitted by one gets converted into a form compatible with the other machine. Other functions include compression, encryption, etc.
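
The big-endian/little-endian conversion mentioned above is easy to see with Python's struct module. The sketch below lays out the same 32-bit value in both byte orders and in an agreed network format (which is big-endian).

    import struct

    value = 0x12345678

    print(struct.pack("<I", value).hex())  # little-endian: 78563412
    print(struct.pack(">I", value).hex())  # big-endian:    12345678
    print(struct.pack("!I", value).hex())  # network order: 12345678 (big-endian)

    # A receiver using the agreed network format recovers the same value
    # regardless of its own native byte order:
    (received,) = struct.unpack("!I", struct.pack("!I", value))
    print(hex(received))                   # 0x12345678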

Application Layer

The seventh layer contains the application protocols with which the user gains access to the network. The choice of which specific protocols and their associated functions are to be used at the application level is up to the individual user. Thus the boundary between the presentation layer and the application layer represents a separation of the protocols imposed by the network designers from those being selected and implemented by the network users. For example, commonly used protocols are HTTP (for web browsing), FTP (for file transfer), etc.
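
As an example of an application-layer protocol in action, the sketch below issues a minimal HTTP GET request using Python's standard http.client module. The host example.com is just a placeholder; any reachable HTTP server would do.

    import http.client

    # Open a connection to an HTTP server and ask for its root document.
    conn = http.client.HTTPConnection("example.com", 80, timeout=10)
    conn.request("GET", "/", headers={"Host": "example.com"})

    response = conn.getresponse()
    print(response.status, response.reason)  # e.g. 200 OK
    print(response.read(120))                # first bytes of the HTML body
    conn.close()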

4. Difference between the OSI model and the TCP/IP model

Ans: In most networks today, we do not follow the OSI model of seven layers. What is actually implemented is as follows. The functionality of the Application layer and the Presentation layer is merged into one and called the Application layer. The functionality of the Session layer is not implemented in most networks today. Also, the Data Link layer is theoretically split into the MAC (Medium Access Control) layer and the LLC (Logical Link Control) layer, but in practice the LLC layer is not implemented by most networks. So, as of today, the network architecture is of five layers only.

[Figure: Network layers in the Internet today]

5. Describe briefly the physical layer?

Ans: The physical layer is concerned with transmitting raw bits over a communication channel. The design issues have to do with making sure that when one side sends a 1 bit, it is received by the other side as a 1 bit and not as a 0 bit. In the physical layer we deal with the communication medium used for transmission.

Types of Medium

Media can be classified into two categories.

1. Guided Media : Guided media means that the signal is guided by the presence of a physical medium, i.e., the signal is under control and remains in the physical wire, e.g., copper wire.

2. Unguided Media : Unguided media means that there is no physical path for the signal to propagate. Unguided media are essentially electromagnetic waves. There is no control on the flow of the signal, e.g., radio waves.

Communication Links

In a network, nodes are connected through links. Communication through links can be classified as

1. Simplex : Communication can take place only in one direction, e.g., TV broadcasting.

2. Half-duplex : Communication can take place in one direction at a time. Suppose nodes A and B are connected; then half-duplex communication means that at a time data can flow from A to B or from B to A, but not simultaneously, e.g., two persons talking to each other such that when one speaks the other listens, and vice versa.

3. Full-duplex : Communication can take place simultaneously in both directions, e.g., a discussion in a group without discipline.

Links can be further classified as

1. Point to Point : In this communication only two nodes are connected to each other. When a node sends a packet, it can be received only by the node on the other side and by no one else.

2. Multipoint : It is a kind of shared communication, in which the signal can be received by all nodes. This is also called broadcast.

Generally, two kinds of problems are associated with the transmission of signals.

1. Attenuation : When a signal is transmitted in a network, the quality of the signal degrades as it travels longer distances along the wire. This is called attenuation. To improve the quality of the signal, amplifiers are used at regular distances.

2. Noise : In a communication channel many signals are transmitted simultaneously, and certain random signals are also present in the medium. Due to interference from these signals, our signal gets disrupted a bit.

Bandwidth

Bandwidth simply means how many bits can be transmitted per second in the communication channel. In technical terms, it indicates the width of the frequency spectrum.
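
A quick back-of-the-envelope use of this definition (the figures below are made up for illustration): if a channel carries B bits per second, a message of L bits takes L/B seconds to transmit.

    def transmission_time(message_bits, bandwidth_bps):
        """Seconds needed to push `message_bits` onto a channel of `bandwidth_bps`."""
        return message_bits / bandwidth_bps

    # A 1 MB file (8,000,000 bits) over a 2 Mb/s channel:
    print(transmission_time(8_000_000, 2_000_000))  # 4.0 seconds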

Transmission Media

Guided Transmission Media

In guided transmission media, generally two kinds of materials are used.

1. Copper

o Coaxial Cable

o Twisted Pair

2. Optical Fiber

1. Coaxial Cable: Coaxial cable consists of an inner conductor and an outer conductor which are separated by an insulator. The inner conductor is usually copper. The outer conductor is covered by a plastic jacket. It is named coaxial because the two conductors are coaxial. The typical diameter of coaxial cable lies between 0.4 inch and 1 inch. The most common application of coaxial cable is cable TV. Coaxial cable has high bandwidth and low attenuation.


2. Twisted Pair: A twisted pair consists of two insulated copper wires, typically 1 mm thick. The wires are twisted together in a helical form; the purpose of twisting is to reduce crosstalk interference between several pairs. Twisted pair is much cheaper than coaxial cable, but it is susceptible to noise and electromagnetic interference, and attenuation is large.


Twisted Pair can be further classified in two categories:

Unshielded twisted pair: In this no shielding is provided, hence it is susceptible to interference.

Shielded twisted pair: In this a protective shield is provided, but shielded twisted pair is expensive and not commonly used.

The most common application of twisted pair is the telephone system. Nearly all telephones are connected to the telephone company office by a twisted pair. Twisted pair can run several kilometers without amplification, but for longer distances repeaters are needed. Twisted pairs can be used for both analog and digital transmission. The bandwidth depends on the thickness of wire and the distance travelled. Twisted pairs are generally limited in distance, bandwidth and data rate.

3. Optical Fiber: In optical fiber, light is used to send data. In general terms, the presence of light is taken as bit 1 and its absence as bit 0. Optical fiber consists of an inner core of either glass or plastic. The core is surrounded by cladding of the same material but of a different refractive index. This cladding is surrounded by a plastic jacket which protects the optical fiber from electromagnetic interference and harsh environments. The principle of total internal reflection is used to transfer data over optical fibers. Optical fiber is much better in bandwidth compared with copper wire, since there is hardly any attenuation or electromagnetic interference in optical fibers. Hence, there is less need to improve the quality of the signal in long-distance transmission. A disadvantage of optical fiber is that the end-point equipment (e.g., switches) is fairly expensive.

Differences between different kinds of optical fibers:

1. Depending on material

▪ Made of glass

▪ Made of plastic.

2. Depending on radius

▪ Thin optical fiber

▪ Thick optical fiber

3. Depending on light source

▪ LED (for low bandwidth)

▪ Injection laser diode (for high bandwidth)

Wireless Transmission

1. Radio: Radio is a general term that is used for any kind of frequency, but higher frequencies are usually termed microwave and the lower frequency band comes under radio frequency. There are many applications of radio, e.g., cordless keyboards, wireless LANs, and wireless Ethernet, but it is limited in range to only a few hundred meters. Depending on the frequency, radio offers different bandwidths.

2. Terrestrial microwave: In terrestrial microwave, two antennas are used for communication. A focused beam emerges from one antenna and is received by the other, provided the antennas face each other with no obstacle in between. For this reason, antennas are situated on high towers. Due to the curvature of the earth, terrestrial microwave can be used for long-distance communication with high bandwidth. Telecom departments also use this for long-distance communication. An advantage of wireless communication is that it is not necessary to lay down wires in the city; hence, no permissions are required.

3. Satellite communication: A satellite acts as a switch in the sky. On earth, VSATs (Very Small Aperture Terminals) are used to transmit data to and receive data from the satellite. Generally, one station on earth transmits a signal to the satellite, and it is received by many stations on earth. Satellite communication is generally used in places where it is very difficult to obtain line of sight, i.e., in highly irregular terrestrial regions. In terms of noise, wireless media are not as good as wired media. There are frequency bands in wireless communication, and two stations should not be allowed to transmit simultaneously in the same frequency band. The most promising advantage of satellite is broadcasting. If satellites are used for point-to-point communication, they are expensive compared with wired media.


6. Explain the topologies of networks?

Ans: A network topology is the basic design of a computer network. It is very much like a road map. It details how key network components such as nodes and links are interconnected. A network's topology is comparable to the blueprints of a new home, in which components such as the electrical system, heating and air conditioning system, and plumbing are integrated into the overall design. Taken from the Greek word "topos", meaning "place", topology, in relation to networking, describes the configuration of the network, including the location of the workstations and wiring connections. Basically, it provides a definition of the components of a Local Area Network (LAN). A topology, which is a pattern of interconnections among nodes, influences a network's cost and performance. There are three primary types of network topologies, which refer to the physical and logical layout of the network cabling. They are:

1. Star Topology: All devices connected in a star setup communicate through a central hub by cable segments. Signals are transmitted and received through the hub. It is the simplest and oldest topology, and all telephone switches are based on it. In a star topology, each network device has a home run of cabling back to a network hub, giving each device a separate connection to the network. So, there can be multiple connections in parallel.


Advantages

o Network administration and error detection are easier because a problem is isolated to the central node

o The network runs even if one host fails

o Expansion becomes easier and scalability of the network increases

o More suited for larger networks

Disadvantages

o Broadcasting and multicasting are not easy because some extra functionality needs to be provided to the central hub

o If the central node fails, the whole network goes down; thus making the switch some kind of a bottleneck

o Installation costs are high because each node needs to be connected to the central switch

2. Bus Topology: The simplest and one of the most common of all topologies, a bus consists of a single cable, called a backbone, that connects all workstations on the network using a single line. All transmissions must pass through each of the connected devices to complete the desired request. Each workstation has its own individual signal that identifies it and allows the requested data to be returned to the correct originator. In a bus network, messages are sent in both directions from a single point and are read by the node (computer or peripheral on the network) identified by the code with the message. Most Local Area Networks (LANs) are bus networks because the network will continue to function even if one computer is down. This topology works equally well for either peer-to-peer or client-server networks.


The purpose of the terminators at either end of the network is to stop the signal being reflected back.

Advantages

o Broadcasting and multicasting are much simpler

o The network is redundant in the sense that failure of one node doesn't affect the network; the other part may still function properly

o Least expensive since less amount of cabling is required and no network switches are required

o Good for smaller networks not requiring higher speeds

Disadvantages

o Troubleshooting and error detection become a problem because, logically, all nodes are equal

o Less secure because sniffing is easier

o Limited in size and speed

3. Ring Topology: All the nodes in a Ring Network are connected in a closed circle of cable. Messages that are transmitted travel around the ring until they reach the computer that they are addressed to, the signal being refreshed by each node. In a ring topology, the network signal is passed through each network card of each device and passed on to the next device. Each device processes and retransmits the signal, so it is capable of supporting many devices in a somewhat slow but very orderly fashion. There is a very nice feature that everybody gets a chance to send a packet and it is guaranteed that every node gets to send a packet in a finite amount of time.


Advantages

o Broadcasting and multicasting are simple since you just need to send out one message

o Less expensive since less cable footage is required

o It is guaranteed that each host will be able to transmit within a finite time interval

o Very orderly network where every device has access to the token and the opportunity to transmit

o Performs better than a star network under heavy network load

Disadvantages

o Failure of one node brings the whole network down

o Error detection and network administration becomes difficult

o Moves, adds and changes of devices can affect the network

o It is slower than star topology under normal load

Generally, a BUS architecture is preferred over the other topologies; of course, this is a very subjective opinion and the final design depends on the requirements of the network more than anything else. Lately, most networks are shifting towards the STAR topology. Ideally we would like to design networks which physically resemble the STAR topology but behave like the BUS or RING topology.
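
The three topologies can also be compared structurally. The small Python sketch below is illustrative only: it builds each topology as an adjacency list for four stations, making the wiring pattern of each layout explicit.

    def star(n):
        """Star: every station has a single home-run link to the central hub."""
        return {f"host{i}": ["hub"] for i in range(1, n + 1)}

    def bus(n):
        """Bus: every station taps the one shared backbone cable."""
        return {f"host{i}": ["backbone"] for i in range(1, n + 1)}

    def ring(n):
        """Ring: each station is wired to its two neighbours in a closed circle."""
        return {f"host{i}": [f"host{(i % n) + 1}", f"host{((i - 2) % n) + 1}"]
                for i in range(1, n + 1)}

    for name, topology in [("star", star(4)), ("bus", bus(4)), ("ring", ring(4))]:
        print(name, topology)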

7. Write a short note on connection-oriented and connectionless services

Ans: The two primary types of service made available by a network layer, which are also useful classifications for many non-technical service industries, are known as connection-oriented and connectionless communication.

Connection-oriented Services

One of the easiest ways to understand what a connection-oriented protocol is would be to think of a very familiar service upon which it's based: the telephone system. When I pick up the phone, I have an open circuit, and the dial tone carrier signal allows me to connect to a destination of my choosing.

Given valid input parameters, the service:

• Establishes the connection.

• Allows me to utilize the connection.

• Tears down the connection when I'm done using it.

The primary difference between this method and that of a connectionless service is that in a connection-oriented system, all of my communications take place on the same transmission channel. On the other hand, with a connectionless service, all transmissions are independently routed, and perhaps re-assembled in some order at the other end; the service in between has no inherent responsibility for ensuring ordering, it need only assure that each transmission gets delivered from its source to its destination.

Connectionless Services

A good analogy for a connectionless service is the process of sending letters through the postal system. Each transmission (the "letter") contains the full destination address and is processed independently of related messages. As described above, the service has only to ensure that each message reaches its destination within certain time parameters. Unlike a connection-oriented service, the system has free rein over what happens en route between the sender and receiver:

• a message can be delayed to ensure another arrives first.

• widely different channels of communication can be used for transmitting messages.

• a message can be handed off to a trusted third party in the distribution network.

• a message can be intercepted by a third party, copied or logged, and passed on to the intended receiver.

These operations are basically impossible for a connection-oriented service.

Types of Services Available in TCP/IP and OSI:

The OSI Reference Model provides for both connection-oriented and connectionless communication at the network level. However, it supports only connection-oriented communication at the Transport layer. It became obvious after the initial design of OSI that allowing both types of traffic at the transport layer was important, even though this violated the idea of data abstraction which was central to the design of OSI.

TCP/IP, on the other hand, supports only connectionless traffic at the Network layer, but supports both modes in the Transport layer. This allows simple request-response protocols to be easily implemented, though it complicates things somewhat for the user. At the Transport level, TCP, the Transmission Control Protocol (a connection-oriented service), as well as UDP, the User Datagram Protocol (a connectionless service), are provided.
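
The two kinds of transport service are visible directly in the standard sockets API. The sketch below, in Python, contrasts a connectionless UDP exchange, where each datagram is sent independently, with a connection-oriented TCP exchange, where a connection is established first and all data then flows over it in order. The loopback address is used so the example is self-contained; the ports are chosen by the operating system.

    import socket

    # Connectionless service (UDP): each datagram is routed independently.
    udp_recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp_recv.bind(("127.0.0.1", 0))          # let the OS pick a free port
    udp_send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp_send.sendto(b"standalone datagram", udp_recv.getsockname())
    print("UDP got:", udp_recv.recvfrom(1024)[0])

    # Connection-oriented service (TCP): a connection is set up first,
    # then all data flows over that same connection, in order.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(server.getsockname())     # three-way handshake happens here
    conn, _ = server.accept()
    client.sendall(b"stream of bytes over one connection")
    print("TCP got:", conn.recv(1024))

    for s in (udp_recv, udp_send, server, client, conn):
        s.close()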

Quality of Service

Reliability of connections achieved through connectionless and connection-oriented protocols is another major concern. All protocols are not created equal, and sacrifices in reliability can be made in exchange for greater speed, or vice versa. Often the trade-offs are worth it, assuming that we're attempting to fit our task intelligently into the capacities of the protocol.

Sometimes it's necessary for a "handshake" process to occur, especially if we need to authenticate each piece of traffic from a sender, but in many cases (such as streaming video), the performance hit involved is simply unacceptable. Not all applications require connections, and it's naive to think of either protocol as superior to the other.

In certain cases, it's not even necessary to ensure that a message gets sent, so long as the chance that it was received is high enough (think of the high-volume email transmissions of spammers). Consider how difficult and time-consuming it would be if each spam message had to be acknowledged by the receiver and tracked by the spammer!

8. Write a short note on ARPANET

Ans: The Advanced Research Projects Agency Network (ARPANET) was the world's first operational packet switching network and the core network of a set that came to compose the global Internet. The network was created by a small research team at the Massachusetts Institute of Technology and the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense. The packet switching of the ARPANET was based on designs by Lawrence Roberts of the Lincoln Laboratory.[1]

Packet switching, today the dominant basis for data communications worldwide, was a new concept at the time of the conception of the ARPANET. Data communications had been based on the idea of circuit switching, as in the traditional telephone circuit, wherein a telephone call reserves a dedicated circuit for the duration of the communication session and communication is possible only between the two parties interconnected.

With packet switching, a data system could use one communications link to communicate with more than one machine by collecting data into datagrams and transmitting these as packets onto the attached network link whenever the link is not in use. Thus, not only could the link be shared, much as a single post box can be used to post letters to different destinations, but each packet could be routed independently of other packets.

9. Explain the Internet and its applications

Ans: The Internet is a collection of networks, or a network of networks. Various networks such as LANs and WANs are connected through suitable hardware and software to work in a seamless manner. A schematic diagram of the Internet is shown in the figure below. It allows various applications such as e-mail, file transfer, remote log-in, the World Wide Web, multimedia, etc. to run across the Internet. The basic difference between a WAN and the Internet is that a WAN is owned by a single organization while the Internet is not. But with time the line between WAN and Internet is shrinking, and these terms are sometimes used interchangeably.

[Figure: Internet – network of networks]

Applications

In a short period of time, computer networks have become an indispensable part of business, industry, and entertainment, as well as of the common man's life. These applications have changed tremendously over time, and the motivations for building these networks are all essentially economic and technological.

Initially, computer networks were developed for defense purposes, to have a secure communication network that could even withstand a nuclear attack. After a decade or so, companies in various fields started using computer networks for keeping track of inventories, monitoring productivity, and communicating between their different branch offices located at different places. For example, the Railways started using computer networks by connecting their nationwide reservation counters to provide the facility of reservation and enquiry from anywhere across the country.

And now, after almost two decades, computer networks have entered a new dimension; they are now an integral part of society and people's lives. In the 1990s, computer networks started delivering services to private individuals at home. These services and the motivations for using them are quite different. Some of the services are access to remote information, person-to-person communication, and interactive entertainment. So, some of the applications of computer networks that we can see around us today are as follows:

Marketing and sales: Computer networks are used extensively in both marketing and sales organizations. Marketing professionals use them to collect, exchange, and analyze data related to customer needs and product development cycles. Sales application includes teleshopping, which uses order-entry computers or telephones connected to order processing network, and online-reservation services for hotels, airlines and so on.

Financial services: Today's financial services are totally dependent on computer networks. Applications include credit history searches, foreign exchange and investment services, and electronic fund transfer, which allows users to transfer money without going into a bank (an automated teller machine is an example of electronic fund transfer; automatic pay-check deposit is another).

Manufacturing: Computer networks are used in many aspects of manufacturing, including the manufacturing process itself. Two applications that use networks to provide essential services are computer-aided design (CAD) and computer-assisted manufacturing (CAM), both of which allow multiple users to work on a project simultaneously.

Directory services: Directory services allow lists of files to be stored in a central location to speed worldwide search operations.

Information services: A Network information service includes bulletin boards and data banks. A World Wide Web site offering technical specification for a new product is an information service.

Electronic data interchange (EDI): EDI allows business information, including documents such as purchase orders and invoices, to be transferred without using paper.

Electronic mail: probably the most widely used computer network application.

Teleconferencing: Teleconferencing allows conferences to occur without the participants being in the same place. Applications include simple text conferencing (where participants communicate through their normal keyboards and monitors) and video conferencing, where participants can see as well as talk to fellow participants. Different types of equipment are used for video conferencing depending on the quality of motion you want to capture (whether you want just to see the faces of the other participants or their exact facial expressions).

Voice over IP: Computer networks are also used to provide voice communication. This kind of voice communication is pretty cheap as compared to the normal telephonic conversation.

Video on demand: Future services provided by cable television networks may include video on demand, where a person can request a particular movie or clip at any time he wishes to see it.

Summary: The main areas of application can be broadly classified into the following categories:

• Scientific and technical computing: client-server model, distributed processing, parallel processing, communication media

• Commercial: advertisement, telemarketing, teleconferencing, worldwide financial services

• Network for the people (this is the most widely used application nowadays): telemedicine, distance education, access to remote information, person-to-person communication, interactive entertainment

UNIT-II

PHYSICAL LAYER

1. Describe briefly the physical layer?

Ans: The physical layer is concerned with transmitting raw bits over a communication channel. The design issues have to do with making sure that when one side sends a 1 bit, it is received by the other side as a 1 bit and not as a 0 bit. In the physical layer we deal with the communication medium used for transmission.

Types of Medium

Media can be classified into two categories.

1. Guided Media : Guided media means that the signal is guided by the presence of a physical medium, i.e., the signal is under control and remains in the physical wire, e.g., copper wire.

2. Unguided Media : Unguided media means that there is no physical path for the signal to propagate. Unguided media are essentially electromagnetic waves. There is no control on the flow of the signal, e.g., radio waves.

Communication Links

In a network, nodes are connected through links. Communication through links can be classified as

1. Simplex : Communication can take place only in one direction, e.g., TV broadcasting.

2. Half-duplex : Communication can take place in one direction at a time. Suppose nodes A and B are connected; then half-duplex communication means that at a time data can flow from A to B or from B to A, but not simultaneously, e.g., two persons talking to each other such that when one speaks the other listens, and vice versa.

3. Full-duplex : Communication can take place simultaneously in both directions, e.g., a discussion in a group without discipline.

Links can be further classified as

1. Point to Point : In this communication only two nodes are connected to each other. When a node sends a packet, it can be received only by the node on the other side and by no one else.

2. Multipoint : It is a kind of shared communication, in which the signal can be received by all nodes. This is also called broadcast.

Generally, two kinds of problems are associated with the transmission of signals.

1. Attenuation : When a signal is transmitted in a network, the quality of the signal degrades as it travels longer distances along the wire. This is called attenuation. To improve the quality of the signal, amplifiers are used at regular distances.

2. Noise : In a communication channel many signals are transmitted simultaneously, and certain random signals are also present in the medium. Due to interference from these signals, our signal gets disrupted a bit.

Bandwidth

Bandwidth simply means how many bits can be transmitted per second in the communication channel. In technical terms, it indicates the width of the frequency spectrum.

Transmission Media

Guided Transmission Media

In guided transmission media, generally two kinds of materials are used.

1. Copper

o Coaxial Cable

o Twisted Pair

2. Optical Fiber

1. Coaxial Cable: Coaxial cable consists of an inner conductor and an outer conductor which are separated by an insulator. The inner conductor is usually copper. The outer conductor is covered by a plastic jacket. It is named coaxial because the two conductors are coaxial. The typical diameter of coaxial cable lies between 0.4 inch and 1 inch. The most common application of coaxial cable is cable TV. Coaxial cable has high bandwidth and low attenuation.


2. Twisted Pair: A twisted pair consists of two insulated copper wires, typically 1 mm thick. The wires are twisted together in a helical form; the purpose of twisting is to reduce crosstalk interference between several pairs. Twisted pair is much cheaper than coaxial cable, but it is susceptible to noise and electromagnetic interference, and attenuation is large.


Twisted Pair can be further classified in two categories:

Unshielded twisted pair: In this no shielding is provided, hence it is susceptible to interference.

Shielded twisted pair: In this a protective shield is provided, but shielded twisted pair is expensive and not commonly used.

The most common application of twisted pair is the telephone system. Nearly all telephones are connected to the telephone company office by a twisted pair. Twisted pair can run several kilometers without amplification, but for longer distances repeaters are needed. Twisted pairs can be used for both analog and digital transmission. The bandwidth depends on the thickness of wire and the distance travelled. Twisted pairs are generally limited in distance, bandwidth and data rate.

3. Optical Fiber: In optical fiber, light is used to send data. In general terms, the presence of light is taken as bit 1 and its absence as bit 0. Optical fiber consists of an inner core of either glass or plastic. The core is surrounded by cladding of the same material but of a different refractive index. This cladding is surrounded by a plastic jacket which protects the optical fiber from electromagnetic interference and harsh environments. The principle of total internal reflection is used to transfer data over optical fibers. Optical fiber is much better in bandwidth compared with copper wire, since there is hardly any attenuation or electromagnetic interference in optical fibers. Hence, there is less need to improve the quality of the signal in long-distance transmission. A disadvantage of optical fiber is that the end-point equipment (e.g., switches) is fairly expensive.

Differences between different kinds of optical fibers:

1. Depending on material

▪ Made of glass

▪ Made of plastic.

2. Depending on radius

▪ Thin optical fiber

▪ Thick optical fiber

3. Depending on light source

▪ LED (for low bandwidth)

▪ Injection laser diode (for high bandwidth)

Wireless Transmission

1. Radio: Radio is a general term that is used for any kind of frequency, but higher frequencies are usually termed microwave and the lower frequency band comes under radio frequency. There are many applications of radio, e.g., cordless keyboards, wireless LANs, and wireless Ethernet, but it is limited in range to only a few hundred meters. Depending on the frequency, radio offers different bandwidths.

2. Terrestrial microwave: In terrestrial microwave, two antennas are used for communication. A focused beam emerges from one antenna and is received by the other, provided the antennas face each other with no obstacle in between. For this reason, antennas are situated on high towers. Due to the curvature of the earth, terrestrial microwave can be used for long-distance communication with high bandwidth. Telecom departments also use this for long-distance communication. An advantage of wireless communication is that it is not necessary to lay down wires in the city; hence, no permissions are required.

3. Satellite communication: A satellite acts as a switch in the sky. On earth, VSATs (Very Small Aperture Terminals) are used to transmit data to and receive data from the satellite. Generally, one station on earth transmits a signal to the satellite, and it is received by many stations on earth. Satellite communication is generally used in places where it is very difficult to obtain line of sight, i.e., in highly irregular terrestrial regions. In terms of noise, wireless media are not as good as wired media. There are frequency bands in wireless communication, and two stations should not be allowed to transmit simultaneously in the same frequency band. The most promising advantage of satellite is broadcasting. If satellites are used for point-to-point communication, they are expensive compared with wired media.


2. Describe the architecture of ISDN

• What is ISDN?

• What is ISDN's history?

• Why use ISDN?

• What are BRI and PRI? What are Channels?

• What do the layers look like?

• What protocols are used?

Ans: ISDN, which stands for Integrated Services Digital Network, is a system of digital phone connections which has been available for over a decade. This system allows voice and data to be transmitted simultaneously across the world using end-to-end digital connectivity.

With ISDN, voice and data are carried by bearer channels (B channels) occupying a bandwidth of 64 kb/s (kilobits per second). Some switches limit B channels to a capacity of 56 kb/s. A data channel (D channel) handles signaling at 16 kb/s or 64 kb/s, depending on the service type. Note that, in ISDN terminology, "k" means 1000 (10^3), not 1024 (2^10) as in many computer applications (the designator "K" is sometimes used to represent this value); therefore, a 64 kb/s channel carries data at a rate of 64000 b/s. A newer set of standard prefixes handles this distinction: "k" (kilo-) means 1000 (10^3), "M" (mega-) means 1000000 (10^6), and so on, while "Ki" (kibi-) means 1024 (2^10), "Mi" (mebi-) means 1048576 (2^20), and so on.


There are two basic types of ISDN service: Basic Rate Interface (BRI) and Primary Rate Interface (PRI). BRI consists of two 64 kb/s B channels and one 16 kb/s D channel for a total of 144 kb/s. This basic service is intended to meet the needs of most individual users.

PRI is intended for users with greater capacity requirements. Typically the channel structure is 23 B channels plus one 64 kb/s D channel for a total of 1536 kb/s. In Europe, PRI consists of 30 B channels plus one 64 kb/s D channel for a total of 1984 kb/s. It is also possible to support multiple PRI lines with one 64 kb/s D channel using Non-Facility Associated Signaling (NFAS).

H channels provide a way to aggregate B channels. They are implemented as:

• H0=384 kb/s (6 B channels)

• H10=1472 kb/s (23 B channels)

• H11=1536 kb/s (24 B channels)

• H12=1920 kb/s (30 B channels) - International (E1) only
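
The channel arithmetic above is easy to verify. The short Python sketch below simply recomputes the BRI, PRI, and H-channel aggregate rates from the 64 kb/s and 16 kb/s building blocks (recall that in ISDN "k" means 1000).

    # ISDN channel building blocks, in bits per second (k = 1000 in ISDN).
    B = 64_000       # bearer (B) channel
    D_BRI = 16_000   # BRI signaling (D) channel
    D_PRI = 64_000   # PRI signaling (D) channel

    print("BRI (2B + D):      ", 2 * B + D_BRI)   # 144000 b/s
    print("PRI (23B + D, US): ", 23 * B + D_PRI)  # 1536000 b/s
    print("PRI (30B + D, EU): ", 30 * B + D_PRI)  # 1984000 b/s

    # H channels aggregate B channels:
    for name, n in [("H0", 6), ("H10", 23), ("H11", 24), ("H12", 30)]:
        print(f"{name} = {n} B channels = {n * B} b/s")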

To access BRI service, it is necessary to subscribe to an ISDN phone line. Customers must be within 18000 feet (about 3.4 miles or 5.5 km) of the telephone company central office for BRI service; beyond that, expensive repeater devices are required, or ISDN service may not be available at all. Customers will also need special equipment to communicate with the phone company switch and with other ISDN devices. These devices include ISDN Terminal Adapters (sometimes called, incorrectly, "ISDN Modems") and ISDN Routers.

The early phone network consisted of a pure analog system that connected telephone users directly by a mechanical interconnection of wires. This system was very inefficient, was very prone to breakdown and noise, and did not lend itself easily to long-distance connections. Beginning in the 1960s, the telephone system gradually began converting its internal connections to a packet-based, digital switching system. Today, nearly all voice switching in the U.S. is digital within the telephone network. Still, the final connection from the local central office to the customer equipment was, and still largely is, an analog Plain-Old Telephone Service (POTS) line.

A standards movement was started by the International Telephone and Telegraph Consultative Committee (CCITT), now known as the International Telecommunications Union (ITU). The ITU is a United Nations organization that coordinates and standardizes international telecommunications. Original recommendations of ISDN were in CCITT Recommendation I.120 (1984) which described some initial guidelines for implementing ISDN.

Local phone networks, especially the regional Bell operating companies, have long hailed the system, but they had been criticized for being slow to implement ISDN. One good reason for the delay is the fact that the two major switch manufacturers, Northern Telecom (now known as Nortel Networks), and AT&T (whose switch business is now owned by Lucent Technologies), selected different ways to implement the CCITT standards. These standards didn't always interoperate. This situation has been likened to that of earlier 19th century railroading. "People had different gauges, different tracks... nothing worked well."

In the early 1990s, an industry-wide effort began to establish a specific implementation for ISDN in the U.S. Members of the industry agreed to create the National ISDN 1 (NI-1) standard so that end users would not have to know the brand of switch they are connected to in order to buy equipment and software compatible with it. However, there were problems agreeing on this standard. In fact, many western states would not implement NI-1. Both Southwestern Bell and U.S. West (now Qwest) said that they did not plan to deploy NI-1 software in their central office switches due to incompatibilities with their existing ISDN networks.

Ultimately, all the Regional Bell Operating Companies (RBOCs) did support NI-1. A more comprehensive standardization initiative, National ISDN 2 (NI-2), was later adopted. Some manufacturers of ISDN communications equipment, such as Motorola and U S Robotics (now owned by 3Com), worked with the RBOCs to develop configuration standards for their equipment. These kinds of actions, along with more competitive pricing, inexpensive ISDN connection equipment, and the desire for people to have relatively low-cost high-bandwidth Internet access have made ISDN more popular in recent years.

Most recently, ISDN service has largely been displaced by broadband internet service, such as xDSL and Cable Modem service. These services are faster, less expensive, and easier to set up and maintain than ISDN. Still, ISDN has its place, as backup to dedicated lines, and in locations where broadband service is not yet available.

Advantages:

Speed

The modem was a big breakthrough in computer communications. It allowed computers to communicate by converting their digital information into an analog signal to travel through the public phone network. There is an upper limit to the amount of information that an analog telephone line can hold. Currently, it is about 56 kb/s bidirectionally. Commonly available modems have a maximum speed of 56 kb/s, but are limited by the quality of the analog connection and routinely run at about 45-50 kb/s. Some phone lines do not support 56 kb/s connections at all. There were at one time two competing, incompatible 56 kb/s standards (X2 from U S Robotics, recently bought by 3Com, and K56flex from Rockwell/Lucent). This standards problem was resolved when the ITU released the V.90, and later V.92, standards for 56 kb/s modem communications.

ISDN allows multiple digital channels to be operated simultaneously through the same regular phone wiring used for analog lines. The change comes about when the telephone company's switches can support digital connections. Therefore, the same physical wiring can be used, but a digital signal, instead of an analog signal, is transmitted across the line. This scheme permits a much higher data transfer rate than analog lines. BRI ISDN, using a channel aggregation protocol such as BONDING or Multilink-PPP, supports an uncompressed data transfer speed of 128 kb/s, plus bandwidth for overhead and signaling. In addition, the latency, or the amount of time it takes for a communication to begin, on an ISDN line is typically about half that of an analog line. This improves response for interactive applications, such as games.

Multiple Devices

Previously, it was necessary to have a separate phone line for each device you wished to use simultaneously. For example, one line each was required for a telephone, fax, computer, bridge/router, and live video conference system. Transferring a file to someone while talking on the phone or seeing their live picture on a video screen would require several potentially expensive phone lines.

ISDN allows multiple devices to share a single line. It is possible to combine many different digital data sources and have the information routed to the proper destination. Since the line is digital, it is easier to keep the noise and interference out while combining these signals. ISDN technically refers to a specific set of digital services provided through a single, standard interface. Without ISDN, distinct interfaces are required instead.

Signaling

Instead of the phone company sending a ring voltage signal to ring the bell in your phone ("In-Band signal"), it sends a digital packet on a separate channel ("Out-of-Band signal"). The Out-of-Band signal does not disturb established connections, no bandwidth is taken from the data channels, and call setup time is very fast. For example, a V.90 or V.92 modem typically takes 30-60 seconds to establish a connection; an ISDN call setup usually takes less than 2 seconds.

The signaling also indicates who is calling, what type of call it is (data/voice), and what number was dialed. Available ISDN phone equipment is then capable of making intelligent decisions on how to direct the call. In the U.S., the telephone company provides its BRI customers with a U interface. The U interface is a two-wire (single pair) interface from the phone switch, the same physical interface provided for POTS lines. It supports full-duplex data transfer over a single pair of wires, so only a single device can be connected to a U interface. This device is called a Network Termination 1 (NT-1). The situation is different elsewhere in the world, where the phone company is allowed to supply the NT-1, so the customer is given an S/T interface.

The NT-1 is a relatively simple device that converts the 2-wire U interface into the 4-wire S/T interface. The S/T interface supports multiple devices (up to 7 devices can be placed on the S/T bus) because, while it is still a full-duplex interface, there is now a pair of wires for receive data, and another for transmit data. Today, many devices have NT-1s built into their design. This has the advantage of making the devices less expensive and easier to install, but often reduces flexibility by preventing additional devices from being connected.

Technically, ISDN devices must go through a Network Termination 2 (NT-2) device, which converts the T interface into the S interface (note: the S and T interfaces are electrically equivalent). Virtually all ISDN devices include an NT-2 in their design. The NT-2 communicates with terminal equipment and handles the Layer 2 and Layer 3 ISDN protocols. Devices most commonly expect either a U interface connection (these have a built-in NT-1) or an S/T interface connection.

Devices that connect to the S/T (or S) interface include ISDN-capable telephones and FAX machines, video teleconferencing equipment, bridge/routers, and terminal adapters. All devices that are designed for ISDN are designated Terminal Equipment 1 (TE1). All other communication devices that are not ISDN capable, but have a POTS telephone interface (also called the R interface), including ordinary analog telephones, FAX machines, and modems, are designated Terminal Equipment 2 (TE2). A Terminal Adapter (TA) connects a TE2 to an ISDN S/T bus.

Going one step in the opposite direction takes us inside the telephone switch. Remember that the U interface connects the switch to the customer premises equipment. This local loop connection is called Line Termination (LT function). The connection to other switches within the phone network is called Exchange Termination (ET function). The LT function and the ET function communicate via the V interface.

This can get rather confusing. This diagram should be helpful:

[pic] 

PHYSICAL LAYER

The ISDN Physical Layer is specified by the ITU I-series and G-series documents. The U interface provided by the telco for BRI is a 2-wire, 160 kb/s digital connection. Echo cancellation is used to reduce noise, and data encoding schemes (2B1Q in North America, 4B3T in Europe) permit this relatively high data rate over ordinary single-pair local loops.

2B1Q

2B1Q (2 Binary 1 Quaternary) is the most common signaling method on U interfaces. This protocol is defined in detail in 1988 ANSI spec T1.601. In summary, 2B1Q provides:

• Two bits per baud

• 80 kilobaud (1 baud = 1 symbol per second)

• Transfer rate of 160 kb/s

|Bits |Quaternary Symbol |Voltage Level |

|00 |-3 |-2.5 |

|01 |-1 |-0.833 |

|10 |+3 |+2.5 |

|11 |+1 |+0.833 |

This means that the input voltage level can be one of 4 distinct levels (note: 0 volts is not a valid voltage under this scheme). These levels are called Quaternaries. Each quaternary represents 2 data bits, since there are 4 possible ways to represent 2 bits, as in the table above.
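As a rough illustration of this mapping (not taken from any standard implementation; the function name and representation are invented), the following C++ sketch converts 2-bit groups into 2B1Q quaternary symbols according to the table above:

#include <cstdint>
#include <iostream>
#include <vector>

// Map a 2-bit group to its 2B1Q quaternary symbol (-3, -1, +1, +3),
// following the Bits/Symbol table above.
int ToQuaternary(unsigned bits)
{
    switch (bits & 0x3)
    {
    case 0x0: return -3;   // 00
    case 0x1: return -1;   // 01
    case 0x2: return +3;   // 10
    default:  return +1;   // 11
    }
}

int main()
{
    // Encode one octet, most significant bit pair first. At the line rate,
    // 160 kb/s of data becomes 80,000 symbols (80 kilobaud) per second.
    unsigned char octet = 0xB4;                       // bit pairs: 10 11 01 00
    std::vector<int> symbols;
    for (int shift = 6; shift >= 0; shift -= 2)
        symbols.push_back(ToQuaternary((octet >> shift) & 0x3));

    for (int s : symbols)
        std::cout << s << ' ';                        // prints: 3 1 -1 -3
    std::cout << '\n';
}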

Frame Format

Each U interface frame is 240 bits long. At the prescribed data rate of 160 kb/s, each frame is therefore 1.5 ms long. Each frame consists of:

• Frame overhead - 16 kb/s

• D channel - 16 kb/s

• 2 B channels at 64 kb/s - 128 kb/s

|Sync |12 * (B1 + B2 + D) |Maintenance |

|18 bits |216 bits |6 bits |

• The Sync field consists of 9 Quaternaries (2 bits each) in the pattern +3 +3 -3 -3 -3 +3 -3 +3 -3.

• (B1 + B2 + D) is 18 bits of data consisting of 8 bits from the first B channel, 8 bits from the second B channel, and 2 bits of D channel data.

• The Maintenance field contains CRC information, block error detection flags, and "embedded operator commands" used for loopback testing without disrupting user data.

Data is transmitted in a superframe consisting of 8 240-bit frames for a total of 1920 bits (240 octets). The sync field of the first frame in the superframe is inverted (i.e. -3 -3 +3 +3 +3 -3 +3 -3 +3).
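To see how these figures add up, here is a small, purely illustrative C++ check of the arithmetic above (the frame layout follows the text; the variable names are my own):

#include <iostream>

int main()
{
    // One U-interface frame per the text: 18 sync + 216 data + 6 maintenance bits.
    const int syncBits = 18, dataBits = 216, maintBits = 6;
    const int frameBits = syncBits + dataBits + maintBits;       // 240 bits
    const double frameSeconds = 0.0015;                          // 1.5 ms per frame

    // The data field holds 12 groups of (8 B1 + 8 B2 + 2 D) bits.
    const int bBits = 12 * 16;                                   // 192 B-channel bits
    const int dBits = 12 * 2;                                    // 24 D-channel bits
    const int overheadBits = frameBits - bBits - dBits;          // 24 bits

    std::cout << "Line rate:  " << frameBits    / frameSeconds / 1000 << " kb/s\n";  // 160
    std::cout << "B channels: " << bBits        / frameSeconds / 1000 << " kb/s\n";  // 128
    std::cout << "D channel:  " << dBits        / frameSeconds / 1000 << " kb/s\n";  // 16
    std::cout << "Overhead:   " << overheadBits / frameSeconds / 1000 << " kb/s\n";  // 16
}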

3.Describe ATM Architecture and explain it

Ans: ATM is based on the switching of 53-byte cells, in which each cell consists of a 5-byte header and a payload of 48 bytes of information. Figure 14.1 illustrates the format of the ATM cell, including an expanded view of its 5-byte header showing the fields it carries.

[pic]

Figure :The 53-byte ATM cell.

The 4-bit Generic Flow Control (GFC) field is used as a mechanism to regulate the flow of traffic in an ATM network between the network and the user. The use of this field is currently under development. As we will shortly note, ATM supports two major types of interfaces: the User-to-Network Interface (UNI) and the Network-to-Network Interface (NNI). When a cell flows from the user to the network or from the network to the user, it carries a GFC value. However, when it flows within a network or between networks, the GFC field is not used. Instead of being wasted, its space can be used to expand the length of the Virtual Path Identifier field.

The 8-bit Virtual Path Identifier (VPI) field represents one half of a two-part connection identifier used by ATM. This field identifies a virtual path that can represent a group of virtual circuits transported along the same route. Although the VPI is eight bits long in a UNI cell, the field expands to 12 bits in an NNI cell by absorbing the Generic Flow Control field. It is described in more detail later in this chapter.

The Virtual Channel Identifier (VCI) is the second half of the two-part connection identifier carried in the ATM header. The 16-bit VCI field identifies a connection between two ATM stations communicating with one another for a specific type of application. Multiple virtual channels (VCs) can be transported within one virtual path. For example, one VC could be used to transport a disk backup operation, while a second VC is used to transport a TCP/IP-based application. The virtual channel represents a one-way cell transport facility; thus, for each of the previously described operations, another series of VCIs is established in the opposite direction. You can view a virtual channel as an individual one-way end-to-end circuit, whereas a virtual path, which can represent a collection of virtual channels, can be viewed as a network trunk line. Once data is within an ATM network, the VPI is used to route a common group of virtual channels between switches by enabling ATM switches to simply examine the value of the VPI. Later in this chapter, you will examine the use of the VCI.

The Payload Type Identifier (PTI) field indicates the type of information carried in the 48-byte data portion of the ATM cell. Currently, this 3-bit field indicates whether payload data represents management information or user data. Additional PTI field designators have been reserved for future use.

The 1-bit Cell Loss Priority (CLP) field indicates the relative importance of the cell. If this bit is set to 1, the cell can be discarded by a switch experiencing congestion. If the cell should not be discarded, the CLP bit is set to 0.

The last field in the ATM cell header is the 8-bit Header Error Control field. This field represents the result of an 8-bit Cyclic Redundancy Check (CRC) code, computed only over the ATM cell header. This field provides the capability for detecting all single-bit errors and certain multiple-bit errors that occur in the 40-bit ATM cell header.
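As a rough, non-normative sketch of how these fields sit in the five header bytes of a UNI cell, the following C++ fragment unpacks them (the struct and function names are invented for illustration, and the HEC verification itself is omitted):

#include <cstdint>
#include <iostream>

// Hypothetical container for the UNI cell header fields described above.
struct AtmUniHeader
{
    uint8_t  gfc;   // 4 bits
    uint8_t  vpi;   // 8 bits
    uint16_t vci;   // 16 bits
    uint8_t  pti;   // 3 bits
    uint8_t  clp;   // 1 bit
    uint8_t  hec;   // 8 bits, a CRC computed over the first four header bytes
};

// Unpack the 5 header bytes of a UNI cell into their fields.
AtmUniHeader ParseUniHeader(const uint8_t h[5])
{
    AtmUniHeader out;
    out.gfc = h[0] >> 4;
    out.vpi = static_cast<uint8_t>(((h[0] & 0x0F) << 4) | (h[1] >> 4));
    out.vci = static_cast<uint16_t>(((h[1] & 0x0F) << 12) | (h[2] << 4) | (h[3] >> 4));
    out.pti = (h[3] >> 1) & 0x07;
    out.clp = h[3] & 0x01;
    out.hec = h[4];
    return out;
}

int main()
{
    const uint8_t header[5] = { 0x00, 0x2A, 0x01, 0x40, 0x00 };  // arbitrary example bytes
    AtmUniHeader hdr = ParseUniHeader(header);
    std::cout << "VPI=" << int(hdr.vpi) << " VCI=" << hdr.vci
              << " PTI=" << int(hdr.pti) << " CLP=" << int(hdr.clp) << '\n';
}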

Advantages of the Technology

The use of cell-switching technology in a LAN environment provides some distinct advantages over the shared-medium technology employed by Ethernet, token-ring, and FDDI networks. Two of those advantages are giving individual workstations full-bandwidth access to ATM switches and enabling attached devices to operate at different rates. Those advantages are illustrated in Figure 14.2, which shows an ATM switch supporting three distinct operating rates: workstations could be connected to the switch at 25Mbps and a local server at 155Mbps, while connections to other switches, either to form a larger local network or to reach a communications carrier's network, could operate at yet another rate.

The selection of a 53-byte cell length results in a minimum of latency in comparison to the packet length of traditional LANs, such as Ethernet, which can have a maximum 1526-byte frame length. Because the ATM cell is always 53 bytes in length, cells transporting voice, data, and video can be intermixed without the latency of one cell adversely affecting other cells. Because the length of each cell is fixed and the position of information in each header is known, ATM switching can be accomplished via the use of hardware. In comparison, on traditional LANs, bridging and routing functions are normally performed by software or firmware, which executes more slowly than hardware-based switching.

[pic]

Figure 14.2: ATM is based on the switching of 53-byte cells.

Two additional features of ATM that warrant discussion are its asynchronous operation and its connection-oriented operation. ATM cells are intermixed via multiplexing, and cells from individual connections are forwarded from switch to switch via a single-cell flow. However, the multiplexing of ATM cells occurs via asynchronous transfer, in which cells are transmitted only when data is present to send. In comparison, in conventional time division multiplexing, keep-alive or synchronization bytes are transmitted when there is no data to be sent. Concerning the connection-oriented technology used by ATM, this means that a connection between the ATM stations must be established before data transfer occurs. The connection process results in the specification of a transmission path between ATM switches and end stations, enabling the header in ATM cells to be used to route the cells on the required path through an ATM network.

Cell Routing

The actual routing of ATM cells depends on whether a connection was pre-established or set up as needed on a demand basis. The pre-established type of connection is referred to as a Permanent Virtual Connection (PVC), and the other type is referred to as a Switched Virtual Connection (SVC). Examine the 5-byte ATM cell header shown in Figure 14.1 and note the VCI and VPI fields. The VPI is 8 bits in length, whereas the VCI is 16 bits in length, enabling 256 (2^8) virtual paths, each capable of accommodating up to 65,536 (2^16) virtual connections.

By using VPs and VCs, ATM employs a two-level connection identifier that is used in its routing hierarchy. A VCI value is unique only within a particular VPI, whereas a VPI value is unique only on a particular physical link. The VPI/VCI value assignment has only local significance, and those values are translated at every switch a cell traverses between endpoints in an ATM network. The actual establishment of a virtual path is based on ATM's network management and signaling operations. During the establishment of a virtual path, routing table entries in each switch located between endpoints map an incoming physical port and Virtual Path Identifier pair to an outgoing pair. This initial mapping process is known as network provisioning, and the change of routing table entries is referred to as network reprovisioning.

Figure 14.3 illustrates an example of a few possible table entries for a switch, where a virtual path was established such that VPI=6 on port 1 and VPI=10 on port 8, representing two physical links in the established connection.

[pic]

Figure 14.3: Switch operations based on routing table entries.

Next, examine the entries in the routing table shown in Figure 14.3, and note that the table does not include values for VCIs. This is by design because a VP in an ATM network can support up to 65,536 VC connections. Thus, only one table entry is required to switch up to 65,536 individual connections if those connections all follow the same set of physical links in the same sequence. This method of switching, which is based on the VPI and port number, simplifies the construction and use of routing tables and facilitates the establishment of a connection through a series of switches. Although VCIs are not used in routing tables, they are translated at each switch. To help you understand the rationale for this technique, you must focus on their use. As previously noted, a VCI is unique within a VP and is used at an endpoint to denote a different connection within a virtual path. Thus, the VPI/VCI pair used between an endpoint and a switch has a local meaning and is translated at every switch; however, the VCI is not used for routing between switches.
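The following C++ sketch is purely illustrative (the table contents and names are invented, though the example values echo Figure 14.3): it shows the kind of (port, VPI) to (port, VPI) lookup a switch performs when relaying a cell, without consulting the VCI:

#include <iostream>
#include <map>
#include <utility>

// (incoming port, incoming VPI) -> (outgoing port, outgoing VPI)
using PortVpi = std::pair<int, int>;

int main()
{
    std::map<PortVpi, PortVpi> vpTable;

    // Hypothetical provisioning entry similar to the Figure 14.3 example:
    // cells arriving on port 1 with VPI 6 leave on port 8 with VPI 10.
    vpTable[{1, 6}] = {8, 10};

    PortVpi incoming{1, 6};
    auto it = vpTable.find(incoming);
    if (it != vpTable.end())
    {
        // The VPI is rewritten on the outgoing link. The VCI is not used
        // for this lookup; it only distinguishes connections within the path.
        std::cout << "forward to port " << it->second.first
                  << " with VPI " << it->second.second << '\n';
    }
    else
    {
        std::cout << "no virtual path provisioned for this port/VPI\n";
    }
}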

The establishment of a connection between two end stations is known as a Virtual Channel Connection (VCC). To illustrate the routing of cells in an ATM network based on a VCC, consider Figure 14.4, which represents a small two-switch–based ATM network. The VCC represents a series of virtual channel links between two ATM endpoints. In Figure 14.4, one VCC could be represented by VCI=1, VCI=3, and VCI=5, which collectively form a connection between workstations at the two endpoints shown in the network. A second VCC could be represented by VCI=2, VCI=4, and VCI=6. The second VCC could represent the transportation of a second application between the same pair of endpoints or a new application between different endpoints served by the same pair of ATM switches.

[pic]

Figure 14.4: Connections in an ATM network.

As indicated by the previous examples, each VC link consists of one or more physical links between the location where a VCI is assigned and the location where it is either translated or removed. The assignment of VCs is the responsibility of switches during the call setup process.


The ATM Protocol Reference Model

Three layers in the ATM architecture form the basis for the ATM Protocol Reference model, illustrated in Figure 14.5. Those layers are the Physical layer, the ATM layer, and the ATM Adaptation layer.

[pic]

Figure 14.5: The ATM protocol suite.

The Physical Layer

As indicated in Figure 14.5, the lowest layer in the ATM protocol stack is the Physical layer. This layer describes the physical transmission of information through an ATM network, yet ATM does not actually define a new Physical layer of its own. The absence of a Physical layer definition results from the design goal of allowing ATM to operate over a variety of physical interfaces and media types. Thus, instead of defining a specific Physical layer, ATM relies on the Physical layers defined in other networking protocols. Types of physical media specified for ATM include shielded and unshielded twisted-pair, coaxial cable, and fiber-optic cable, providing cell transport at rates ranging from the T1 rate of 1.544Mbps to the SONET rate of 622Mbps.

The ATM Layer

The ATM layer provides the interface between the ATM Adaptation layer (AAL) and the Physical layer. Thus, the ATM layer is responsible for relaying cells from the AAL to the Physical layer for transmission and, in the opposite direction, from the Physical layer to the AAL for use in an endpoint. When transporting cells to the Physical layer, the ATM layer is responsible for generating the five-byte cell header for each cell. When receiving cells from the Physical layer, the ATM layer performs the reverse operation, extracting the five-byte header from each cell.

The actual manner by which the ATM layer performs its relaying function depends on the location of the layer at a switch or at an endpoint. If the ATM layer is located in an endpoint, it receives a stream of cells from the Physical layer and transmits either cells with new data or empty cells if there is no data to send to the AAL. When located in a switch, the ATM layer is responsible for determining where incoming cells are routed and for multiplexing cells by placing cells from individual connections into a single-cell stream.

The ATM Adaptation Layer

The ATM Adaptation layer (AAL) represents the top layer in the ATM Protocol model. This layer is responsible for providing an interface between higher-layer protocols and the ATM layer. Because this interface normally occurs based on a voice, data, or video application accessing an ATM network, the operations performed by the AAL occur at endpoints and not at ATM switches. Thus, the AAL is shown in Figure 14.5 to reside at ATM endpoints.

The primary function of the ATM Adaptation layer is format conversion. That is, the AAL maps the data stream originated by the higher-layer protocol into the 48-byte payload of ATM cells, with the header placement being assigned by the ATM layer. In the reverse direction, the AAL receives the payload of ATM cells in 48-byte increments from the ATM layer and maps those increments into the format recognized by the higher-layer protocol.

Because it is not possible to address the requirements of the diverse set of applications designed to use ATM within a single AAL, the ITU-T classified the functions required by different applications based on their traffic and service requirements. This classification scheme defined four classes of applications based on whether a timing relationship is required between end stations, the type of bit rate (variable or constant), and the type of connection (connection-oriented or connectionless) required. Table 14.2 summarizes the four classes of applications with respect to their timing relationship, bit rate, and type of connection.

In Table 14.2, note that the timing relationship references whether one is required between end stations. Real-time services, such as the transportation of voice or video, represent two examples of applications that require a timing relationship. In comparison, a file transfer represents an application that does not require a timing relationship. When a timing relationship is required, clocking between two end stations must be aligned.

A constant bit-rate application represents an application that requires an unvarying amount of bandwidth, such as voice or real-time video. In comparison, a variable bit-rate application represents "bursty" traffic, such as LAN data or transmission via a packet network.

The capability to support connection-oriented or connectionless applications enables ATM to support various existing higher-layer protocols. For example, Frame Relay is a connection-oriented protocol, whereas IP is a connectionless protocol. Through the use of different AALs, both can be transported by ATM.

Based on the four application classes, four different types of AALs were defined: AAL1, 2, 3/4, and 5. At one time, AAL3 and AAL4 were separate types; however, they had a sufficient degree of commonality to be merged. Figure 14.6 illustrates the relationship between application classes and ATM Adaptation layers with respect to the different parameters used to classify the application classes.

Table 14.2 The ATM Application Classes

|Class |Timing Relationship |Bit Rate |Type of Connection |

|A |Yes |Constant |Connection-oriented |

|B |Yes |Variable |Connection-oriented |

|C |No |Variable |Connection-oriented |

|D |No |Variable |Connectionless |

[pic]

Figure: Application classification and associated AALs.

ATM Adaptation layers are distinguished from one another based on the method by which the 48-byte cell payload constructed as a data stream generated by a higher-level protocol is passed to the ATM layer. For example, consider a Class A application represented by a voice conversation. Because misordering cells can be viewed as being worse than losing cells, the payload is constructed to include a sequence number when Class A traffic is transported. Figure 14.7 illustrates the format of an AAL1 cell payload. Note that the Sequence Number Protection (SNP) field protects the Sequence Number (SN) field from the effect of bit errors occurring during transmission, in effect providing a forward error detection and correction capability.

AAL1 is designated for transporting constant bit rate (CBR) data, such as real-time voice and video traffic. The AAL1 specification defines the manner by which a continuous signal is transported as a sequence of individual ATM cells. As indicated in Figure 14.7, the first byte of the normal 48-byte cell payload is used for cell sequencing and protection of the sequence number, limiting the actual payload to 47 bytes per AAL1-generated cell. AAL2 is intended for transporting packet video services and was expected to be fully defined in the near future.

[pic]

Figure: AAL 1 cell payload format.

AAL3/4 is designed to transport delay-insensitive user data, such as Frame Relay, X.25, or IP traffic. There is a high probability that such data will have to be fragmented, because the maximum payload of an ATM cell is 48 bytes. AAL3/4 uses four additional bytes of overhead beyond the cell header, leaving 44 bytes in each cell available for transporting the actual payload. In comparison, AAL5 uses all 48 bytes beyond the cell header to transport the payload, providing roughly 9 percent more payload capacity per cell than AAL3/4.

Although several aspects of different AAL operations remain to be specified, the use of different AALs provides the mechanism for the cell-based switching technology on which ATM is based to transport different types of information using a common cell structure.

Service Definitions

Perhaps the major benefit of ATM is that it enables users to obtain a Quality of Service (QoS) for each class of service. The QoS represents a guaranteed level of service that can be based upon such parameters as peak cell rate (PCR), sustained cell rate (SCR), cell delay variation tolerance (CDVT), minimum cell rate (MCR), and burst tolerance (BT). Each of these parameters is used with other parameters to define one of the five classes of service for which a carrier may offer cell loss, cell delay, and bandwidth guarantees. Those classes of service include Continuous Bit Rate (CBR), Variable Bit Rate–Real Time (VBR–RT), Variable Bit Rate–Non-Real Time (VBR-NRT), Unspecified Bit Rate (UBR), and Available Bit Rate (ABR).

Continuous Bit Rate and Variable Bit Rate–Real Time services generally correspond to Class A and Class B services, respectively. Variable Bit Rate–Non-Real Time is a less time-stringent version of VBR–RT.

Both UBR and ABR services are for transporting delay-insensitive traffic, corresponding to Classes C and D. UBR represents a best-effort delivery mechanism for which cells can be discarded during periods of network congestion. In comparison, an ABR service is allocated all the bandwidth required by the application that is available on a connection, with a feedback mechanism employed to control the rate the originator transmits cells to minimize cell loss when available bandwidth contracts. Table 14.3 provides a summary of the five types of ATM services.

Table 14.3 ATM Services

|ATM Service |Metrics |Loss Guarantee |Delay Guarantee |Bandwidth Guarantee |Feedback |

|Constant Bit Rate (CBR) |PCR, CDVT |Yes |Yes |Yes |No |

|Variable Bit Rate–Real Time (VBR–RT) |PCR, CDVT, SCR, BT |Yes |Yes |Yes |No |

|Variable Bit Rate–Non-Real Time (VBR-NRT) |PCR, CDVT, SCR, BT |Yes |Yes |Yes |No |

|Unspecified Bit Rate (UBR) |Unspecified |No |No |No |No |

|Available Bit Rate (ABR) |PCR, CDVT, MCR |Yes |No |Yes |Yes |

Legend:

PCR = Peak Cell Rate

CDVT = Cell Delay Variation Tolerance

SCR = Sustained Cell Rate

BT = Burst Tolerance

MCR = Minimum Cell Rate

LAN Emulation

Although numerous advantages are associated with the use of ATM, its use in corporate and government offices causes a degree of interoperability problems when it's used to support legacy LANs, such as Ethernet and token-ring networks. Figure 14.8 illustrates the interoperability problems associated with using ATM as a backbone to interconnect legacy LAN switches. In Figure 14.8, note that ATM uses virtual path and virtual channel identifiers for addressing. In comparison, legacy LANs that include Ethernet use MAC addressing. Another difference between the two is that ATM is a connection-oriented protocol, whereas Ethernet is connectionless. This means there is no direct equivalent to a legacy broadcast transmission capability.

To obtain compatibility between ATM and legacy LANs, the ATM Forum developed a protocol called LAN Emulation (LANE). The goal of LANE is to provide a mechanism that enables ATM to interoperate with legacy LANs while hiding the ATM network from the legacy network. To accomplish this, the ATM LANE protocol emulates the characteristics of the legacy network.

LANE functions are performed on switches at the edge of an ATM network. As you might surmise, such switches are referred to as ATM edge devices. Four components are needed to provide LAN Emulation: a LAN Emulation Client (LEC), a LAN Emulation Configuration Server (LECS), a LAN Emulation Server (LES), and a Broadcast and Unknown Server (BUS).


[pic]

Figure 14.8: Interoperability problems associated with using ATM as a backbone to connect legacy LANs.

The Client

The functionality of an LEC is typically located in an ATM adapter card installed in a legacy switch. That card is configured with two addresses: an IEEE 48-bit MAC address and a 20-byte ATM address. The LEC is responsible for address resolution, data forwarding, and registration of MAC addresses with the LAN Emulation Server (LES). It also communicates with other LECs via ATM virtual channel connections established across the ATM network.

The LECS

The LANE Configuration Server maintains a database of emulated LANs (ELANs) and the ATM addresses of the LAN Emulation Servers (LESs) that control them. When a LANE client needs to resolve an ATM address, it first searches the Virtual Channel Connections (VCCs) it has previously opened. The LEC maintains a translation table of destination MAC addresses mapped to VCCs. If the destination address is in the table, the LEC can use the existing VCC to send the message. If not, the LEC must perform an address resolution procedure using the LAN Emulation Address Resolution Protocol (LE-ARP). To do so, it queries the LECS, which returns the ATM address of the LES that serves the appropriate emulated LAN. The LEC then uses that address to query the LES. The LECS database is defined and maintained by the network manager or LAN administrator and represents the only manual step in the entire emulation process.
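As a loose illustration of this lookup order (this is not an implementation of the LANE protocol; every name and type here is invented), a client-side resolution step might look like the following in C++:

#include <cstdint>
#include <iostream>
#include <map>
#include <optional>

using MacAddress = uint64_t;   // 48-bit MAC address held in the low bits
using VccId      = int;        // handle to an already-open virtual channel

// Hypothetical LEC-side cache: destination MAC -> open VCC.
std::map<MacAddress, VccId> vccCache;

// Stand-in for the LE-ARP exchange via the LECS, LES, and BUS described above.
std::optional<VccId> ResolveViaLeArp(MacAddress mac)
{
    std::cout << "LE-ARP query for MAC " << std::hex << mac << std::dec << '\n';
    // ... query the LECS for the LES, query the LES (and possibly the BUS),
    // then open a new VCC to the ATM address that comes back ...
    VccId newVcc = 42;                 // pretend a connection was set up
    vccCache[mac] = newVcc;            // remember it for next time
    return newVcc;
}

std::optional<VccId> GetVccFor(MacAddress mac)
{
    auto it = vccCache.find(mac);
    if (it != vccCache.end())
        return it->second;             // reuse an existing connection
    return ResolveViaLeArp(mac);       // otherwise resolve and connect
}

int main()
{
    MacAddress dest = 0x112233445566ULL;
    if (auto vcc = GetVccFor(dest))
        std::cout << "send frame on VCC " << *vcc << '\n';
}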

The LES

The LES represents a central control point for a predefined group of LECs. The LES maintains a point-to-multipoint Virtual Control Channel to all the LECs it controls. When the LEC queries the LES, the LES verifies that the LEC can join the ELAN. Assuming it can, the LES examines the LEC's request to resolve a MAC address to an ATM address by searching its tables for the ATM address that provides a path to the desired MAC address. Those tables are formed by LECs registering their ATM-to-MAC address translations with the LES. If the address is in the LES's cache memory, the LES returns the ATM address to the LEC, which uses that address to establish an ATM connection. If the LES does not have that address in cache memory, it uses the services of the BUS.

The BUS

The Broadcast and Unknown Server (BUS) functions as a central point for transmitting broadcast and multicast messages. It is required because ATM is a point-to-point, connection-oriented technology that lacks a broadcast or one-to-many transmission capability. If the LES does not have the address required by the LEC, it uses the services of the BUS. That is, the BUS transmits an address resolution request to all stations that make up the ELAN, and the station that recognizes its own MAC address returns its ATM address. The LES updates its cache memory and returns the ATM address to the LEC. The LEC can then establish a connection across the ATM network.

Although communications carriers have expended a significant amount of effort to develop an ATM infrastructure for transporting information between carrier offices, the expansion of this evolving technology to customer premises—as well as its common use on LANs—will probably take several years. This is because, as with any new technology, the cost of ATM equipment is relatively high in comparison to the cost of older technology. Over the next few years, you can expect several important standards to be promulgated, and you can also expect to see the cost of ATM equipment become more reasonable as development costs are amortized over a larger base of products. As this occurs, the use of ATM will expand considerably.

UNIT-III

DATA LINK LAYER

1.Discuss Framing in detail

Ans: The Data Link Layer is the second layer in the OSI model, above the Physical Layer. It ensures that data is transferred free of errors between adjacent nodes in the network. It breaks the datagrams passed down by the layers above into frames ready for transfer; this is called framing. The layer provides two main functionalities:

• Reliable data transfer service between two peer network layers

• Flow Control mechanism which regulates the flow of frames such that data congestion is not there at slow receivers due to fast senders.

Framing:

Since the physical layer merely accepts and transmits a stream of bits without any regard to meaning or structure, it is up to the data link layer to create and recognize frame boundaries. This can be accomplished by attaching special bit patterns to the beginning and end of the frame. If these bit patterns can accidentally occur in the data, special care must be taken to make sure they are not incorrectly interpreted as frame delimiters. The four framing methods that are widely used are

• Character count

• Starting and ending characters, with character stuffing

• Starting and ending flags, with bit stuffing

• Physical layer coding violations

Character Count

This method uses a field in the header to specify the number of characters in the frame. When the data link layer at the destination sees the character count, it knows how many characters follow and hence where the end of the frame is. The disadvantage is that if the count is garbled by a transmission error, the destination will lose synchronization and will be unable to locate the start of the next frame. So, this method is rarely used.

Character stuffing

In the second method, each frame starts with the ASCII character sequence DLE STX and ends with the sequence DLE ETX (where DLE is Data Link Escape, STX is Start of TeXt, and ETX is End of TeXt). This method overcomes the drawbacks of the character count method. If the destination ever loses synchronization, it only has to look for DLE STX and DLE ETX characters. If, however, binary data is being transmitted, then there exists a possibility of the sequences DLE STX and DLE ETX occurring in the data. Since this can interfere with the framing, a technique called character stuffing is used. The sender's data link layer inserts an ASCII DLE character just before each DLE character in the data. The receiver's data link layer removes this DLE before the data is given to the network layer. However, character stuffing is closely tied to 8-bit characters, and this is a major hurdle in transmitting characters of arbitrary size.
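A minimal sketch of the sender-side stuffing step (illustrative only; the byte values for DLE, STX, and ETX follow ASCII, and real protocols may frame differently):

#include <cstdint>
#include <iostream>
#include <vector>

const uint8_t DLE = 0x10, STX = 0x02, ETX = 0x03;   // ASCII control codes

// Build a frame: DLE STX <stuffed payload> DLE ETX.
// Every DLE inside the payload is doubled so that it cannot be
// mistaken for part of a frame delimiter.
std::vector<uint8_t> StuffFrame(const std::vector<uint8_t>& payload)
{
    std::vector<uint8_t> frame = { DLE, STX };
    for (uint8_t b : payload)
    {
        if (b == DLE)
            frame.push_back(DLE);      // stuff an extra DLE
        frame.push_back(b);
    }
    frame.push_back(DLE);
    frame.push_back(ETX);
    return frame;
}

int main()
{
    std::vector<uint8_t> data = { 'A', DLE, 'B' };
    for (uint8_t b : StuffFrame(data))
        std::cout << std::hex << int(b) << ' ';      // prints: 10 2 41 10 10 42 10 3
    std::cout << '\n';
}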

Bit stuffing

The third method allows data frames to contain an arbitrary number of bits and allows character codes with an arbitrary number of bits per character. At the start and end of each frame is a flag byte consisting of the special bit pattern 01111110. Whenever the sender's data link layer encounters five consecutive 1s in the data, it automatically stuffs a zero bit into the outgoing bit stream. This technique is called bit stuffing. When the receiver sees five consecutive 1s in the incoming data stream, followed by a zero bit, it automatically destuffs the 0 bit. The boundary between two frames can be determined by locating the flag pattern.
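A bit-level sketch of the sender-side stuffing rule (illustrative only; bits are kept in a vector of 0/1 values rather than packed into bytes):

#include <iostream>
#include <vector>

// Insert a 0 after every run of five consecutive 1s so that the payload
// can never contain the flag pattern 01111110.
std::vector<int> BitStuff(const std::vector<int>& bits)
{
    std::vector<int> out;
    int ones = 0;
    for (int b : bits)
    {
        out.push_back(b);
        ones = (b == 1) ? ones + 1 : 0;
        if (ones == 5)
        {
            out.push_back(0);   // stuffed bit
            ones = 0;
        }
    }
    return out;
}

int main()
{
    std::vector<int> data = { 0, 1, 1, 1, 1, 1, 1, 0 };   // six 1s in a row
    for (int b : BitStuff(data))
        std::cout << b;                                   // prints: 011111010
    std::cout << '\n';
}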

Physical layer coding violations

The final framing method is physical layer coding violations and is applicable to networks in which the encoding on the physical medium contains some redundancy. In such cases normally, a 1 bit is a high-low pair and a 0 bit is a low-high pair. The combinations of low-low and high-high which are not used for data may be used for marking frame boundaries.

2.Discuss Error Control in detail

Ans:The bit stream transmitted by the physical layer is not guaranteed to be error free. The data link layer is responsible for error detection and correction. The most common error control method is to compute and append some form of a checksum to each outgoing frame at the sender's data link layer and to recompute the checksum and verify it with the received checksum at the receiver's side. If both of them match, then the frame is correctly received; else it is erroneous. The checksums may be of two types:

• Error detecting: Receiver can only detect the error in the frame and inform the sender about it.

• Error detecting and correcting: The receiver can not only detect the error but also correct it.

Examples of Error Detecting methods:

• Parity bit:

A simple example of an error detection technique is the parity bit. The parity bit is chosen so that the number of 1 bits in the code word is either even (for even parity) or odd (for odd parity). For example, when 10110101 is transmitted (it contains five 1 bits), a 1 is appended for even parity and a 0 is appended for odd parity. This scheme can detect only an odd number of bit errors; if an even number of bits is changed, the error goes undetected.

• Longitudinal Redundancy Checksum:

Longitudinal Redundancy Checksum is an error-detecting scheme that overcomes the problem of two erroneous bits. The concept of the parity bit is used, but with slightly more intelligence: along with a parity bit for each byte, one additional byte is sent whose bits hold the parity of each bit position across all the bytes sent. Parity is therefore applied in both the horizontal and the vertical direction. If one bit gets flipped, we can tell which row and which column are in error, find their intersection, and so determine the erroneous bit. If two bits are in error and they lie in different columns and rows, they can still be detected; if the errors are in the same column, the rows will differentiate them, and vice versa. Parity can detect only an odd number of errors, so if the flipped bits are even in number and distributed so that every affected row and column contains an even count of them, LRC may not be able to find the error (a small sketch of this column-parity computation appears after this list).

• Cyclic Redundancy Checksum (CRC):

We have an n-bit message. The sender adds a k-bit Frame Check Sequence (FCS) to this message before sending, chosen so that the resulting (n+k)-bit message is divisible by some predetermined (k+1)-bit number. The receiver divides the received (n+k)-bit message by the same (k+1)-bit number and, if there is no remainder, assumes that there was no error. How do we choose this number?

For example, if k=12 then 1000000000000 (a 13-bit number) can be chosen, but this is a poor choice: it yields a zero remainder for all (n+k)-bit messages whose last 12 bits are zero, so any bit flips beyond the last 12 go undetected. If k=12 and we instead take 1110001000110 as the 13-bit number (incidentally, in decimal this turns out to be 7238), errors will go undetected only if the corrupt message and the original message differ by a multiple of 7238. The probability of this is low, much lower than the probability that something beyond the last 12 bits flips. In practice, this number is chosen after analyzing common network transmission errors and then selecting a number that is likely to detect these common errors.
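Returning to the Longitudinal Redundancy Checksum described above, here is a small illustrative sketch (not from any standard) of the 'column' parity computation over a block of bytes; the per-byte 'row' parity bits are omitted for brevity:

#include <cstdint>
#include <iostream>
#include <vector>

// XOR-accumulate all bytes: bit i of the result is 1 exactly when bit
// position i holds an odd number of 1s across the block (column parity).
uint8_t LongitudinalParity(const std::vector<uint8_t>& block)
{
    uint8_t lrc = 0;
    for (uint8_t b : block)
        lrc ^= b;
    return lrc;
}

int main()
{
    std::vector<uint8_t> block = { 0xA1, 0x3C, 0x5F };
    std::cout << std::hex << int(LongitudinalParity(block)) << '\n';   // prints: c2
}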

How to detect source errors?

In order to ensure that frames are delivered correctly, the receiver should inform the sender about incoming frames using positive or negative acknowledgements. On the sender's side, the receipt of a positive acknowledgement implies that the frame has arrived at the destination safely, while the receipt of a negative acknowledgement means that an error has occurred in the frame and it needs to be retransmitted. However, this scheme is too simplistic, because if a noise burst causes the frame to vanish completely, the receiver will not respond at all and the sender would hang forever waiting for an acknowledgement. To overcome this drawback, timers are introduced into the data link layer. When the sender transmits a frame it also starts a timer. The timer is set to go off after an interval long enough for the frame to reach the destination, be processed there, and have the acknowledgement propagate back to the sender. If the frame is received correctly, the positive acknowledgement arrives before the timer runs out and the timer is cancelled. If either the frame or the acknowledgement is lost, the timer goes off and the sender may retransmit the frame. Since multiple transmissions of a frame can cause the receiver to accept the same frame and pass it to the network layer more than once, sequence numbers are generally assigned to the outgoing frames.

The types of acknowledgements that are sent can be classified as follows:

• Cumulative acknowledgements: A single acknowledgement informing the sender that all the frames up to a certain number have been received.

• Selective acknowledgements: Acknowledgement for a particular frame.

They may be also classified as:

• Individual acknowledgements: Individual acknowledgement for each frame.

• Group acknowledgements: A bit-map that specifies the acknowledgements of a range of frame numbers.

3.Discuss About FlowControl in detail

Ans: Consider a situation in which the sender transmits frames faster than the receiver can accept them. If the sender keeps pumping out frames at high rate, at some point the receiver will be completely swamped and will start losing some frames. This problem may be solved by introducing flow control. Most flow control protocols contain a feedback mechanism to inform the sender when it should transmit the next frame.

Mechanisms For Flow Control:

• Stop and Wait Protocol: This is the simplest flow control protocol, in which the sender transmits a frame and then waits for an acknowledgement, either positive or negative, from the receiver before proceeding. If a positive acknowledgement is received, the sender transmits the next frame; otherwise it retransmits the same frame. However, this protocol has one major flaw: if a frame or an acknowledgement is completely destroyed in transit by a noise burst, a deadlock will occur because the sender cannot proceed until it receives an acknowledgement. This problem may be solved using timers on the sender's side. When the frame is transmitted, the timer is set. If there is no response from the receiver within a certain time interval, the timer goes off and the frame may be retransmitted.

• Sliding Window Protocols: In spite of the use of timers, the stop and wait protocol still suffers from a few drawbacks. First, if the receiver has the capacity to accept more than one frame, its resources are being underutilized. Second, if the receiver is busy and does not wish to receive any more frames, it may delay the acknowledgement; however, the timer on the sender's side may go off and cause an unnecessary retransmission. These drawbacks are overcome by the sliding window protocols.

In sliding window protocols the sender's data link layer maintains a 'sending window' which consists of a set of sequence numbers corresponding to the frames it is permitted to send. Similarly, the receiver maintains a 'receiving window' corresponding to the set of frames it is permitted to accept. The window size is dependent on the retransmission policy and it may differ in values for the receiver's and the sender's window. The sequence numbers within the sender's window represent the frames sent but as yet not acknowledged. Whenever a new packet arrives from the network layer, the upper edge of the window is advanced by one. When an acknowledgement arrives from the receiver the lower edge is advanced by one. The receiver's window corresponds to the frames that the receiver's data link layer may accept. When a frame with sequence number equal to the lower edge of the window is received, it is passed to the network layer, an acknowledgement is generated and the window is rotated by one. If however, a frame falling outside the window is received, the receiver's data link layer has two options. It may either discard this frame and all subsequent frames until the desired frame is received or it may accept these frames and buffer them until the appropriate frame is received and then pass the frames to the network layer in sequence.

[pic]

In this simple example, there is a 4-byte sliding window. Moving from left to right, the window "slides" as bytes in the stream are sent and acknowledged.

Most sliding window protocols also employ ARQ ( Automatic Repeat reQuest ) mechanism. In ARQ, the sender waits for a positive acknowledgement before proceeding to the next frame. If no acknowledgement is received within a certain time interval it retransmits the frame. ARQ is of two types :

1. Go Back 'n': If a frame is lost or received in error, the receiver may simply discard all subsequent frames, sending no acknowledgements for the discarded frames. In this case the receive window is of size 1. Since no acknowledgements are being received, the sender's window will fill up, and the sender will eventually time out and retransmit all the unacknowledged frames in order, starting from the damaged or lost frame. The maximum window size for this protocol can be obtained as follows. Assume that the sender's window size is w, so it initially contains the frames with sequence numbers 0 to (w-1). Suppose the sender transmits all these frames and the receiver's data link layer receives all of them correctly, but every acknowledgement is lost. The sender will retransmit all the frames after its timer goes off, while the receiver's window has already advanced to sequence number w. If the sequence number space were only w, the retransmitted frame 0 would be indistinguishable from the new frame the receiver is expecting. Hence, to avoid this overlap, the sum of the two window sizes (w for the sender plus 1 for the receiver) must not exceed the sequence number space (a small numeric check of both window-size limits appears after this list).

w + 1 <= Sequence Number Space

i.e., w < Sequence Number Space

Maximum Window Size = Sequence Number Space - 1

2. Selective Repeat: In this protocol, rather than discarding all the frames following a damaged or lost frame, the receiver's data link layer simply stores them in buffers. When the sender does not receive an acknowledgement for the first frame, its timer goes off after a certain time interval and it retransmits only the lost frame. Assuming error-free transmission this time, the sender's data link layer will then have a sequence of correct frames which it can hand over to the network layer. Thus there is less retransmission overhead than in the Go Back n protocol.

In the case of the selective repeat protocol, the window size may be calculated as follows. Assume that the size of both the sender's and the receiver's window is w, so initially both contain the sequence numbers 0 to (w-1). Suppose the sender's data link layer transmits all w frames, the receiver's data link layer receives them correctly and sends acknowledgements for each of them, but all the acknowledgements are lost, so the sender does not advance its window. The receiver's window at this point contains the values w to (2w-1). To avoid overlap when the sender's data link layer retransmits, the sum of these two windows must not exceed the sequence number space. Hence, we get the condition

Maximum Window Size = Sequence Number Space / 2
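A quick numeric check of these two limits for a hypothetical 3-bit sequence number field (8 sequence numbers), purely to illustrate the formulas above:

#include <iostream>

int main()
{
    const int seqBits = 3;                     // e.g., a 3-bit sequence number field
    const int space   = 1 << seqBits;          // 8 sequence numbers: 0..7

    const int maxGoBackN         = space - 1;  // Go Back n:        7
    const int maxSelectiveRepeat = space / 2;  // Selective Repeat: 4

    std::cout << "Sequence number space: " << space << '\n'
              << "Max window (Go Back n): " << maxGoBackN << '\n'
              << "Max window (Selective Repeat): " << maxSelectiveRepeat << '\n';
}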

4.Explain Cyclic Redundancy Check

Ans: Error detection is important whenever there is a non-zero chance of your data getting corrupted. Whether it's an Ethernet packet or a file under the control of your application, you can add a piece of redundant information to validate it.

The simplest example is a parity bit. Many computers use one parity bit per byte of memory. Every time the byte gets written, the computer counts the number of non-zero bits in it. If the number is odd, it sets the ninth parity bit; otherwise it clears it. When reading the byte, the computer counts the number of non-zero bits in the byte, plus the parity bit. If any one of the nine bits is flipped, the sum will be odd and the computer will halt with a memory error. (Of course, if two bits are flipped--a much rarer occurrence--this system will not detect it.)
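A tiny illustrative helper that computes such a parity bit for one byte (the bit is set when the byte contains an odd number of 1 bits, so that the nine bits together always have even parity); the function name is my own:

#include <bitset>
#include <iostream>

// Return the parity bit for one byte: 1 if the byte holds an odd
// number of 1 bits, 0 otherwise.
int ParityBit(unsigned char byte)
{
    return std::bitset<8>(byte).count() % 2;
}

int main()
{
    unsigned char b = 0xB5;                    // 10110101: five 1 bits
    std::cout << ParityBit(b) << '\n';         // prints: 1
}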

For messages longer than one byte, you'd like to store more than one bit of redundant information. You might, for instance, calculate a checksum. Just add together all the bytes in the message and append (or store somewhere else) the sum. Usually the sum is truncated to, say, 32 bits. This system will detect many types of corruption with a reasonable probability. It will, however, fail badly when the message is modified by inverting or swapping groups of bytes. Also, it will fail when you add or remove null bytes.
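The kind of simple additive checksum described here might be sketched as follows (illustrative only; the sum is kept to 32 bits as in the text):

#include <cstdint>
#include <iostream>
#include <vector>

// Add all bytes of the message together, keeping only the low 32 bits.
// Swapping two bytes, or adding and removing zero bytes, leaves the sum
// unchanged -- exactly the weakness pointed out above.
uint32_t SimpleChecksum(const std::vector<uint8_t>& msg)
{
    uint32_t sum = 0;
    for (uint8_t b : msg)
        sum += b;                 // wraps around modulo 2^32
    return sum;
}

int main()
{
    std::vector<uint8_t> msg = { 'h', 'e', 'l', 'l', 'o' };
    std::cout << SimpleChecksum(msg) << '\n';   // 104+101+108+108+111 = 532
}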

Calculating a Cyclic Redundancy Check is a much more robust error checking algorithm. In this article I will sketch the mathematical foundations of the CRC calculation and describe two C++ implementations--first the slow but simple one, then the more optimized one.

Polynomials

Here's a simple polynomial, 2x^2 - 3x + 7. It is a function of some variable x, which depends only on powers of x. The degree of a polynomial is equal to the highest power of x in it; here it is 2 because of the x^2 term. A polynomial is fully specified by listing its coefficients, in this case (2, -3, 7). Notice that to define a degree-d polynomial you have to specify d + 1 coefficients.

It's easy to multiply polynomials. For instance,

(2x^2 - 3x + 7) * (x + 2)

 = 2x^3 + 4x^2 - 3x^2 - 6x + 7x + 14

 = 2x^3 + x^2 + x + 14

Conversely, it is also possible to divide polynomials. For instance, the above equation can be rewritten as a division:

(2x^3 + x^2 + x + 14) / (x + 2) = 2x^2 - 3x + 7

Just like in integer arithmetic, one polynomial doesn't have to be divisible by another. But you can always divide out the "whole" part and be left with the remainder. For instance x^2 - 2x is not divisible by x + 1, but you can calculate the quotient to be x - 3 and the remainder to be 3:

(x^2 - 2x) = (x + 1) * (x - 3) + 3

In fact, you can use a version of long division to perform such calculations.

Arithmetic Modulo Two

Most of us are familiar with polynomials whose coefficients are real numbers. In general, however, you can define polynomials with coefficients taken from arbitrary sets. One such set (in fact a field) consists of the numbers 0 and 1 with arithmetic defined modulo 2. It means that you perform arithmetic as usual, but if you get something greater than 1 you keep only its remainder after division by 2. In particular, if you get 2, you keep 0. Here's the addition table:

0 + 0 = 0

0 + 1 = 1 + 0 = 1

1 + 1 = 0 (because 2 has remainder 0 after dividing by 2)

The multiplication table is equally simple:

0 * 0 = 0

0 * 1 = 1 * 0 = 0

1 * 1 = 1

What's more, subtraction is also well defined (in fact the subtraction table is identical to the addition table) and so is division (except for division by zero). What is nice, from the point of view of computer programming, is that both addition and subtraction modulo 2 are equivalent to bitwise exclusive or (XOR).

Now imagine a polynomial whose coefficients are zeros and ones, with the rule that all arithmetic on these coefficients is performed modulo 2. You can add, subtract, multiply and divide such polynomials (they form a ring). For instance, let's do some easy multiplication:

(1x^2 + 0x + 1) * (1x + 1)

  = 1x^3 + 1x^2 + 0x^2 + 0x + 1x + 1

  = 1x^3 + 1x^2 + 1x + 1

Let's now simplify our notation by representing a polynomial as a series of coefficients. For instance, 1x^2 + 0x + 1 has coefficients (1, 0, 1), 1x + 1 (1, 1), and 1x^3 + 1x^2 + 1x + 1 (1, 1, 1, 1).

Do you see what I am driving at? A polynomial with coefficients modulo 2 can be represented as a series of bits. Conversely, any series of bits can be looked upon as a polynomial. In particular any binary message, which is nothing but a series of bits, is equivalent to a polynomial.

CRC

Take a binary message and convert it to a polynomial then divide it by another predefined polynomial called the key. The remainder from this division is the CRC. Now transmit both the message and the CRC. The recipient of the transmission does the same operation (divides the message by the same key) and compares his CRC with yours. If they differ, the message must have been mangled. If, on the other hand, they are equal, the odds are pretty good that the message went through uncorrupted. Most localized corruptions (burst of errors) will be caught using this scheme.

Not all keys are equally good. The longer the key, the better the error checking. On the other hand, the calculations with long keys can get pretty involved. Ethernet packets use a 32-bit CRC corresponding to a degree-31 remainder (remember, you need d + 1 coefficients for a degree-d polynomial). Since the degree of the remainder is always less than the degree of the divisor, the Ethernet key must be a polynomial of degree 32. A polynomial of degree 32 has 33 coefficients, requiring a 33-bit number to store it. However, since we know that the highest coefficient (in front of x^32) is 1, we don't have to store it. The key used by the Ethernet is 0x04c11db7. It corresponds to the polynomial:

x^32 + x^26 + ... + x^2 + x + 1

There is one more trick used in packaging CRCs. First calculate the CRC for a message to which you have appended 32 zero bits. Suppose that the message had N bits, thus corresponding to a degree N-1 polynomial. After appending 32 bits, it will correspond to a degree N+31 polynomial. The top-level bit that was multiplying x^(N-1) will now be multiplying x^(N+31), and so on. In all, this operation is equivalent to multiplying the message polynomial by x^32. If we denote the original message polynomial by M(x), the key polynomial by K(x) and the CRC by R(x) (the remainder), we have:

M(x) * x^32 = Q(x) * K(x) + R(x)

Now add the CRC to the augmented message and send it away. When the recipient calculates the CRC for this sum, and there was no transmission error, he will get zero. That's because:

M(x) * x^32 + R(x) = Q(x) * K(x) (no remainder!)

You might think I made a sign mistake--it should be -R (x) on the left. Remember, however, that in arithmetic modulo 2 addition and subtraction are the same.

The CRC algorithm requires the division of the message polynomial by the key polynomial. The straightforward implementation follows the idea of long division, except that it's much simpler. The coefficients of our polynomials are ones and zeros. We start with the leftmost coefficient (leftmost bit of the message). If it's zero, we move to the next coefficient. If it's one, we subtract the divisor. Except that subtraction modulo 2 is equivalent to exclusive or, so it's very simple.

Let's do a simple example, dividing a message 100110 by the key 101. Remember that the corresponding polynomials are x^5 + x^2 + x and x^2 + 1. Since the degree of the key is 2, we start by appending two zeros to our message.

10011000 / 101

101
----
 111
 101
 ----
  100
  101
  ----
   100
   101
   ----
    01

We don't even bother calculating the quotient; all we need is the remainder (the CRC), which is 01 in this case. The original message with the CRC attached reads 10011001. You can easily convince yourself that it is divisible by the key, 101, with no remainder.

In practice we don't write the top bit of the key--it is implicit. In this particular example, we would only store bits 01 as our key.

The calculation above could be implemented using a 2-bit register for storing intermediate results (again, the top bit of the key is always one, so we don't store it). Let's rewrite the above example, showing the bits that are stored in the register at each step; the significant bits of the key appear beneath the register contents each time the key is subtracted.

0010011000 / 101

001

010

100

101

[pic]

001

011

111

101

[pic]

010

100

101

[pic]

001

010

100

101

[pic]

001

Notice that we subtract (or XOR, since this is arithmetic modulo 2) the key from the register every time a 1 is shifted out of it.

Implementation

Let's start with the basic class, Crc. It defines the type Crc::Type as a 32-bit unsigned long (corresponding to a 33-bit key). The constructor takes the key and stores it, and it zeroes the register. The method Done returns the result of the CRC calculation. It also zeroes the register, so that it can be used to calculate another CRC.

#include <iostream>

class Crc

{

public:

typedef unsigned long Type;

Crc (Type key)

: _key (key), _register (0)

{}

Type Done ()

{

Type tmp = _register;

_register = 0;

return tmp;

}

protected:

Type _key; // really 33-bit key, counting implicit 1 top-bit

Type _register;

};

The straightforward implementation of the CRC algorithm is not very efficient, but it will serve as our starting point. The class SlowCrc implements a public method PutByte as well as a private helper, PutBit. PutByte simply splits the byte into bits and sends them one-by-one to PutBit. The value of the bit is encoded as a bool (true or false).

class SlowCrc: public Crc

{

public:

SlowCrc (Crc::Type key)

: Crc (key)

{}

void PutByte (unsigned char byte);

private:

void PutBit (bool bit);

};

void SlowCrc::PutByte (unsigned char byte)

{

unsigned char mask = 0x80; // leftmost bit

for (int j = 0; j < 8; ++j)

{

PutBit ((byte & mask) != 0);

mask >>= 1;

}

}

Here is the heart of the algorithm, PutBit. We pick the top bit from the register, shift the register left by one, inserting a new message bit from the right. If the top bit was one, we XOR the key into the register.

void SlowCrc::PutBit (bool bit)

{

// Remember the bit that is about to be shifted out of the 32-bit register.

Type topBit = _register & 0x80000000UL;

// Shift the register left by one, inserting the new message bit from the right.

_register <<= 1;

if (bit)

_register |= 1;

// If the bit shifted out was a one, subtract (XOR) the key.

if (topBit != 0)

_register ^= _key;

}
