ABSTRACT
The strength of the Internet has been its immense scalability and its adaptability to accommodate a seemingly unceasing portfolio of applications. With the rising popularity of the Internet across the world, it has become imperative to enhance Quality of Service (QoS) and traffic engineering capabilities. This requires an increase in network reliability, efficiency and service quality. Internet Service Providers are responding to these developments by enhancing their networks and optimizing their performance. From this perspective, traffic engineering plays the biggest role in the design and operation of large Internet backbone networks. Traffic engineering (TE) is the process of guiding traffic across the backbone to facilitate efficient use of the network infrastructure. One of the main technologies used for TE is Multi Protocol Label Switching (MPLS). Before MPLS TE, traffic engineering was performed by either IP or ATM, depending on the protocol used between the edge routers. MPLS has helped to address some of the problems related to TE in IP networks. In this paper, we examine the role of MPLS in TE and how dramatically it has improved the performance and scalability of backbone networks. MPLS allows the establishment of several detour paths between a source and destination pair, in addition to a primary path of minimum hops. MPLS has the capability to engineer traffic tunnels to avoid congestion and to fully utilize all available bandwidth. The greatest strength of MPLS is its seamless coexistence with IP traffic and its reuse of proven IP routing protocols.
INTRODUCTION
With the rising popularity of the Internet across the world, the usage of network resources has also increased. The strength of the Internet has been its immense scalability and adaptability to accommodate a seemingly unceasing portfolio of applications. With the rise of video, voice and other high-priority applications, it has become imperative to improve quality of service (QoS). Users across the world receive service from the network based on various transport qualities, such as latency, throughput, bandwidth and reliability. Service providers operate networks to provide these services to users by carrying out operations under user-specified constraints. The process of managing the allocation of these resources and providing quality of service is known as traffic engineering (TE). To carry out improved TE, service providers use different technologies; Multi Protocol Label Switching (MPLS) is one of them. MPLS has helped solve many issues related to traffic engineering in IP networks.
What is TE?
Traffic engineering is `concerned with the performance optimization of networks' [1]. It is the process of capably allocating network resources so that user demands are met and operator benefit is maximized. Traffic engineering is important in both wireless and wired networks. Despite ever-increasing developments in technologies such as optical fiber, which make immense amounts of bandwidth available, traffic engineering remains just as important. One possible reason is that the number of users, and their demands, are also increasing. Consequently, traffic engineering still performs a useful function for users and operators, and it is valuable to perform it in an efficient manner.
The MPLS Traffic Engineering model consists of the following components: path management, traffic assignment, network state information dissemination, and network management.
Path Management
Path management is concerned with the selection, installation and maintenance of the explicit routes and LSPs. Its policies specify the criteria by which a path should be selected, as well as rules for sustaining already established LSPs.
A path selection function is used to specify explicit routes for an LSP tunnel at the origination node. This explicit route can be represented as a sequence of hops or a sequence of abstract nodes, which may contain both loose and strict subsets. Routes can be defined administratively or computed automatically by a constraint-based routing entity; constraint-based routing helps reduce the level of manual intervention. The path management function uses a signalling component, which also serves as a label distribution protocol, to instantiate LSP tunnels. A third function, path maintenance, maintains, sustains and terminates the paths. LSP tunnel attributes include traffic parameters, priority attributes, preemption attributes, resource class affinity attributes, and other policy attributes. Traffic parameters specify the available bandwidth on the LSP tunnels, peak rates, mean rates and burst sizes. Adaptive LSPs change their paths when a better path becomes available, whereas non-adaptive LSPs change paths only in case of a failure. The resilience attribute controls whether an LSP tunnel is automatically rerouted due to faults on its path. Resource attributes are used to specify additional network resources and to categorize resources and links into classes. These attributes can then be used to control traffic over specific topological regions of the network.
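As an illustration, the tunnel attributes listed above can be sketched as a simple data structure. The field and function names here are hypothetical, chosen for readability, and are not taken from any real MPLS implementation:

```python
from dataclasses import dataclass

@dataclass
class LspTunnelAttributes:
    """Illustrative attribute set for an MPLS TE LSP tunnel."""
    # Traffic parameters: resource requirements of the tunnel
    peak_rate_mbps: float
    mean_rate_mbps: float
    burst_size_kb: float
    # Priority and preemption attributes (0 = most important)
    setup_priority: int = 7
    holding_priority: int = 7
    # Adaptivity: an adaptive LSP re-optimizes when a better path appears;
    # a non-adaptive LSP reroutes only on failure
    adaptive: bool = False
    # Resilience: whether the tunnel is automatically rerouted on faults
    auto_reroute: bool = True
    # Resource class affinity: bits matched against link resource classes
    affinity_bits: int = 0x0
    affinity_mask: int = 0x0

def link_admissible(link_classes: int, attrs: LspTunnelAttributes) -> bool:
    """A link is admissible if its resource-class bits match the
    tunnel's affinity under the affinity mask."""
    return (link_classes & attrs.affinity_mask) == (attrs.affinity_bits & attrs.affinity_mask)
```

A constraint-based routing entity could use a check like `link_admissible` to discard links whose resource classes conflict with a tunnel's affinity before computing a path.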
Traffic Assignment
This component deals with all aspects of allocating traffic to LSPs after they have been established. A partitioning function partitions the ingress traffic, and an assignment function then assigns traffic to the tunnels. Load distribution is an important issue in traffic assignment. It can be handled by implicitly or explicitly assigning weights to the tunnels and partitioning traffic according to those weights. Load distribution across parallel LSP tunnels can also be implemented as a feedback function of the state of the network.
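A minimal sketch of weight-based assignment across parallel tunnels follows. It assumes a hash over some flow identifier so that all packets of one flow stay on one tunnel (avoiding reordering); the names and the use of CRC32 as the hash are illustrative choices, not part of any standard:

```python
import zlib

def assign_to_tunnel(flow_id: bytes, tunnels: list, weights: list) -> str:
    """Map a flow to one of several parallel LSP tunnels in proportion
    to explicit weights, deterministically per flow."""
    total = sum(weights)
    # Hash the flow identifier so the mapping is stable for a given flow
    bucket = zlib.crc32(flow_id) % total
    for tunnel, weight in zip(tunnels, weights):
        if bucket < weight:
            return tunnel
        bucket -= weight
    return tunnels[-1]
```

Over many flows, the share of traffic on each tunnel approaches the ratio of the configured weights, while any single flow always sees the same tunnel.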
Network Management
The success of MPLS TE depends upon the ease with which the network can be controlled and maintained, which is handled by the network management component. Multiple functions included in this component help to manage and control MPLS tunnels. Monitoring the end points of an LSP tunnel can give us the packet loss at the ingress and egress. Delays along the path can be calculated by sending probe packets through these tunnels and measuring their transit times. An operational requirement is the capability to list, at any point, all the nodes traversed by an LSP tunnel, as well as the number of LSP tunnels originating at, terminating at, and traversing each node.
Working of MPLS
MPLS uses a label-swapping forwarding paradigm known as label switching for flexible control over routing across the network. The main difference between MPLS traffic and normal IP traffic is the addition of an MPLS header containing one or more labels. MPLS creates one or more tunnels, or explicitly defined paths. MPLS extends the use of Layer 3 routing protocols such as OSPF (Open Shortest Path First) and IS-IS (Intermediate System to Intermediate System), together with the signalling protocols RSVP and CR-LDP. Using these protocols, MPLS constructs a tunnel that best utilizes network resources such as shortest path, bandwidth, delay and reliability. Tunnel paths are calculated at the tunnel head. The decision is based on a fit between required and available resources, known as constraint-based routing. The IGP component handles the routing of packets across these tunnels. The signaling component decides the best path for a tunnel; RSVP (Resource Reservation Setup Protocol) and CR-LDP (Constraint-based Routing Label Distribution Protocol) are the two main protocols used for this purpose. These protocols are used across the IP network to reserve resources. MPLS also uses them to indicate to other nodes the nature (bandwidth, jitter, maximum burst, and so forth) of the packet streams it sends or intends to receive. Explicit routes to one or more nodes in the network are calculated by different traffic engineering protocols. These explicit routes are Label Switched Paths (LSPs), also known as traffic engineering tunnels. Link-state protocols like integrated IS-IS use Dijkstra's SPF (Shortest Path First) algorithm to compute a shortest path tree to all nodes in the network. Routing tables are derived from this shortest path tree; they contain ordered sets of destination and first-hop information. In normal hop-by-hop routing, the first hop is a physical interface attached to the router.
Routers must agree on how to use these traffic engineering tunnels; otherwise, traffic might loop through two or more tunnels. During each step of the SPF computation, a router discovers the path to one node in the network. If that node is directly connected to the calculating router, the first-hop information is derived from the adjacency database. If the node is not directly connected, it inherits the first-hop information from its parent. Each node has one or more parents, and each node is the parent of zero or more downstream nodes.
Each router maintains a list of all TE tunnels that originate at it, and for each of these tunnels the router at the tail end is known. There are three possible ways to calculate the first-hop information:
1. Examine the list of tail-end routers directly reachable by way of a TE tunnel. If there is a TE tunnel to this node, use the TE tunnel as the first hop.
2. If there is no TE tunnel and the node is directly connected, use the first-hop information from the adjacency database.
3. If the node is not directly connected and is not directly reachable by way of a TE tunnel, copy the first-hop information from the parent node(s) to the new node.
As a result of these computations, traffic to nodes that are the tail end of TE tunnels flows through those tunnels. If there is more than one TE tunnel to different intermediate nodes on the path to destination node X, traffic flows over the TE tunnel whose tail-end node is closest to node X. When an unlabeled packet enters the MPLS tunnel, the ingress router first determines the forwarding equivalence class (FEC) the packet belongs to and inserts one or more labels into the packet's MPLS header. A Forwarding Equivalence Class groups packets with similar or identical characteristics that may be forwarded in the same way, i.e. that may be bound to the same MPLS label. After a packet enters an LSP, the ingress router prepends the packet with an MPLS header containing one or more labels. This is called the label stack.
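The three rules can be sketched directly as a selection function. The data structures here (dictionaries for the tunnel tail ends and the adjacency database) are simplifications for illustration, not the structures a real router uses:

```python
def first_hop(node, te_tunnel_tailends, adjacency, parent_first_hops):
    """Return the first-hop information for `node`.

    te_tunnel_tailends: dict mapping tail-end node -> TE tunnel name
    adjacency: dict mapping directly connected node -> outgoing interface
    parent_first_hops: first-hop entries inherited from the parent node(s)
    """
    # Rule 1: a TE tunnel whose tail end is this node wins
    if node in te_tunnel_tailends:
        return [te_tunnel_tailends[node]]
    # Rule 2: otherwise, if directly connected, use the adjacency database
    if node in adjacency:
        return [adjacency[node]]
    # Rule 3: otherwise, inherit the first-hop information from the parent(s)
    return list(parent_first_hops)
```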
Each label stack entry has four fields:
1. A 20-bit label value
2. A 3-bit field for experimental use
3. A 1-bit bottom-of-stack flag; if set to 1, it signifies that the current label is the last in the stack
4. An 8-bit TTL (Time To Live) field
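These four fields pack into a single 32-bit stack entry, which can be encoded and decoded as follows (a sketch, not production parsing code):

```python
def pack_label_entry(label: int, exp: int, bottom: bool, ttl: int) -> bytes:
    """Encode one 32-bit MPLS label stack entry:
    20-bit label | 3-bit experimental field | 1-bit bottom-of-stack | 8-bit TTL."""
    assert 0 <= label < 2**20 and 0 <= exp < 8 and 0 <= ttl < 256
    word = (label << 12) | (exp << 9) | (int(bottom) << 8) | ttl
    return word.to_bytes(4, "big")

def unpack_label_entry(entry: bytes):
    """Decode a 4-byte stack entry back into (label, exp, bottom, ttl)."""
    word = int.from_bytes(entry, "big")
    return word >> 12, (word >> 9) & 0x7, bool((word >> 8) & 0x1), word & 0xFF
```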
After a label is attached, the packet is passed on to the next-hop router for the tunnel. Instead of an IP table lookup, these MPLS-labeled packets are forwarded using a label lookup. Label lookup and forwarding is much faster than a normal IP lookup, as it can take place directly in the switching fabric rather than in the CPU. When an MPLS router receives a labeled packet, the topmost label is examined. Based on its contents, one of three operations is performed on the packet's label stack: swap, push or pop. Pre-built lookup tables in the routers indicate what kind of operation should be performed on the topmost label of an incoming packet. In a swap operation, the label is swapped with a new label, and the packet is forwarded along the path associated with the new label. In a push operation, a new label is pushed on top of an existing label, encapsulating the packet in another layer of MPLS; this is mostly used in MPLS VPNs. In a pop operation, the label is removed from the packet, which may reveal an inner label below. At the egress router the last label is removed and only the payload remains; this is an IP packet, or any other kind of payload. The egress router must have the routing information for the packet's payload, since it has to forward this packet without the help of any labels. Another technique used for popping is called Penultimate Hop Popping (PHP), in which the label is popped off at the hop before the egress router: transit routers connected directly to the egress router pop the last label themselves, offloading the egress router.
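The three stack operations can be sketched on a simple list-based label stack (topmost label last). This is illustrative only; real routers perform these operations in hardware on the packet header:

```python
def process_label_stack(stack, operation, new_label=None):
    """Apply one MPLS operation (swap, push or pop) to a label stack,
    represented as a list with the topmost label last."""
    stack = list(stack)  # work on a copy of the packet's stack
    if operation == "swap":
        # Replace the top label; the packet follows the new label's path
        stack[-1] = new_label
    elif operation == "push":
        # Add a label on top, encapsulating the packet in another MPLS layer
        stack.append(new_label)
    elif operation == "pop":
        # Remove the top label, possibly revealing an inner label
        stack.pop()
    else:
        raise ValueError("unknown operation: " + operation)
    return stack
```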
Path Computation
Path computation for a TE LSP is performed by LSRs using the TE topology database. The most common approach to constraint-based routing is based on the Shortest Path First (SPF) algorithm, and is hence named the Constrained Shortest Path First (CSPF) algorithm. The algorithm first removes all links that do not meet the TE LSP requirements, then uses the IGP link metric or the link TE metric to compute the shortest path. Although CSPF does not guarantee a completely optimal mapping of traffic onto network resources, it is still considered an adequate approximation. Figure 2-2 illustrates a simplified version of CSPF on a sample network. In this case, node E wants to compute the shortest path to node H with the following constraints: only links with at least 50 bandwidth units available and an administrative group value of 0xFF. Node E examines the TE topology database and disregards links with insufficient bandwidth or administrative group values other than 0xFF. The dotted lines in the topology represent links that CSPF disregards. Subsequently, node E executes the shortest path algorithm on the reduced topology using the link metric values. In this case, the shortest path is {E, F, B, C, H}. Using this result, node E can initiate the TE LSP signaling.
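The two-step CSPF procedure described above (prune non-conforming links, then run SPF on what remains) can be sketched as follows. The link representation, a tuple with a single administrative group value per link, is a simplification chosen for illustration:

```python
import heapq

def cspf(links, source, dest, min_bw, required_group):
    """Constrained SPF: prune links that fail the bandwidth and
    administrative-group constraints, then run Dijkstra's algorithm
    on the reduced topology.

    links: iterable of directed links (u, v, metric, available_bw, admin_group).
    Returns the node list of the shortest admissible path, or None.
    """
    # Step 1: prune links that do not meet the TE LSP requirements
    graph = {}
    for u, v, metric, bw, group in links:
        if bw >= min_bw and group == required_group:
            graph.setdefault(u, []).append((v, metric))
    # Step 2: ordinary shortest-path computation on what remains
    heap = [(0, source, [source])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dest:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, metric in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (cost + metric, nxt, path + [nxt]))
    return None
```

On a topology mirroring the example, with the direct links toward H failing the bandwidth or group constraint, the function returns the path {E, F, B, C, H}.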
Path computation is a difficult process and involves several computations, especially in multi-area or inter-autonomous-system scenarios. If the head end is unable to view the complete network topology, it specifies the path as a list of predefined boundary LSRs.
Signaling of TE LSPs
MPLS TE uses several extensions to RSVP to signal LSPs. RSVP uses five new objects: LABEL_REQUEST, LABEL, EXPLICIT_ROUTE, RECORD_ROUTE, and SESSION_ATTRIBUTE. The LABEL_REQUEST object is used by RSVP to request a label binding at each hop. The LABEL object is used to perform label distribution; on-demand label distribution is performed by network nodes using these two objects. The EXPLICIT_ROUTE object contains a hop list that defines the explicitly routed path that the signaling will follow. The hop and label information along the signaling path is collected in the RECORD_ROUTE object. The attribute requirements of the LSP, i.e. priority, protection, etc., are given by the SESSION_ATTRIBUTE object.
Traffic Selection
The process of traffic selection is kept separate from the process of TE LSP creation. A head end can signal a TE LSP, but traffic does not start to flow through the LSP until the LSR has implemented a traffic-selection mechanism. The only entry point for the traffic is the head end, so the selection of traffic is also a decision made by the head end. The head end can use different approaches for this decision, and the selection criteria can be dynamic or static. The decision may also depend upon the packet type, for instance IP or Ethernet, or the packet contents, e.g. class of service. MPLS can make use of several traffic selection mechanisms, depending on the services it has to offer.
In an MPLS domain, when a stream of data traverses a common path, a Label Switched Path is established, which simplifies the routing process. Each packet, when it enters the network at the ingress node, is assigned a label, which also assigns an FEC to it. At each Label Switch Router (LSR) the packet is simply forwarded to the next hop based on its label. In a DiffServ domain, all packets requiring the same DiffServ behavior are said to constitute a behavior aggregate (BA). Packets are marked with a DiffServ Code Point at the ingress node of the DiffServ domain. Based on this DiffServ Code Point, each transit node selects the queuing and scheduling treatment of a packet, and in some cases its drop probability. This shows the similarity between MPLS and DiffServ: a DiffServ BA and a DiffServ Code Point are quite similar to an MPLS LSP and an MPLS label, respectively. The main difference between the two is that DiffServ is used for queuing, scheduling and dropping, whereas MPLS is used for routing and forwarding. Because of this, the two technologies do not depend on each other; both are different ways of providing higher-quality services, and a network operator could also choose to employ both architectures at the same time.
The figure above shows an MPLS network using FRR. In this case, node E signals a TE LSP toward node H via nodes F and G. FRR has also presignaled a backup path via node I, in case the link between nodes F and G fails. Node F is thus responsible for rerouting the traffic onto the backup path, which makes it the point of local repair (PLR).
ATM
Asynchronous Transfer Mode (ATM) is a cell relay, network and data link layer protocol which encodes data traffic into small fixed-size cells of 53 bytes: 48 bytes of data and 5 bytes of header information. This is in contrast to the variable-sized packets (sometimes known as frames) used in packet-switched networks. ATM is a connection-oriented technology, in which a connection is established between the two endpoints before the actual data exchange begins.
ATM was designed to implement a low-jitter network interface. The basic purpose of the small data cells was to reduce jitter (delay variance) in the multiplexing of data streams; reducing jitter is particularly important when carrying voice traffic, because the conversion of digitized voice back into an analog audio signal is an inherently real-time process. To do a good job, the codec performing this conversion needs an evenly spaced stream of data items. If the next data item is not available when it is needed, the codec has no choice but to produce silence or guess; and if the data does arrive, but late, it is useless, because the time period in which it should have been converted to a signal has already passed. A speech signal reduced to packets may have to share a link with bursty data traffic (i.e. some data packets will be large). No matter how small the speech packets are, they can always encounter full-size data packets, and under normal conditions might experience queuing delays. ATM is a channel-based transport layer. This is encompassed in the concepts of the Virtual Path and the Virtual Circuit. Every ATM cell has an 8- or 12-bit Virtual Path Identifier (VPI) and a 16-bit Virtual Circuit Identifier (VCI) pair defined in its header. The length of the Virtual Path Identifier varies according to whether the cell is sent on the user-network interface or on the network-network interface. ATM virtual circuits can be bidirectional: switched virtual circuit (SVC) connections are bidirectional, whereas permanent virtual circuit (PVC) connections are unidirectional. When an ATM circuit is set up, each switch is informed of the traffic class of the connection. ATM traffic contracts are part of the mechanism by which Quality of Service is ensured. There are four basic types, each with a set of parameters describing the connection:
1. CBR - Constant bit rate: you specify a Peak Cell Rate (PCR), which is constant.
2. VBR - Variable bit rate: you specify an average cell rate, which can peak at a certain level for a maximum interval before becoming problematic. It has real-time and non-real-time variants, and is used for "bursty" traffic.
3. ABR - Available bit rate: you specify a minimum guaranteed rate.
4. UBR - Unspecified bit rate: your traffic is allocated all remaining transmission capacity.
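As an illustration of the cell structure described earlier, the 5-byte header for the user-network interface can be sketched by packing its fields into their bit positions. The HEC (header error control) byte is left as zero here rather than computing the real CRC-8, so this is a sketch only, and the function names are illustrative:

```python
def pack_uni_cell_header(gfc, vpi, vci, pti, clp):
    """Build the 5-byte UNI cell header. Layout of the first four bytes:
    4-bit GFC | 8-bit VPI | 16-bit VCI | 3-bit PTI | 1-bit CLP.
    The fifth byte (HEC) is left as zero for simplicity."""
    assert 0 <= gfc < 16 and 0 <= vpi < 256 and 0 <= vci < 65536
    word = (gfc << 28) | (vpi << 20) | (vci << 4) | (pti << 1) | clp
    return word.to_bytes(4, "big") + b"\x00"

def unpack_uni_cell_header(header):
    """Decode the first four header bytes back into (gfc, vpi, vci, pti, clp)."""
    word = int.from_bytes(header[:4], "big")
    return (word >> 28, (word >> 20) & 0xFF, (word >> 4) & 0xFFFF,
            (word >> 1) & 0x7, word & 0x1)
```

Together with a 48-byte payload, such a header forms the fixed 53-byte cell.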
Most traffic classes also introduce the concept of Cell Delay Variation Tolerance (CDVT), which defines the "clumping" of cells in time. Traffic contracts are usually maintained by the use of "shaping", a combination of queuing and marking of cells, and enforced by "policing". To maintain network performance, it is possible to police virtual circuits against their traffic contracts. If a circuit exceeds its traffic contract, the network can either drop the cells or mark the Cell Loss Priority (CLP) bit. Basic policing works on a cell-by-cell basis, but this is sub-optimal for encapsulated packet traffic, as discarding a single cell invalidates the whole packet. For this reason, schemes such as Partial Packet Discard (PPD) and Early Packet Discard (EPD) have been created that discard a whole series of cells until the next frame starts. This saves bandwidth for full frames, as it reduces the number of redundant cells in the network. Partial Packet Discard and Early Packet Discard work with ATM Adaptation Layer 5 connections, as AAL5 uses the frame-end bit to detect the end of packets. Another advantage of the use of virtual circuits is the ability to use them as a multiplexing layer, allowing different services (such as voice, SNA, etc.) to share a common ATM connection without interfering with one another.
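Cell-by-cell conformance checking against a traffic contract is commonly specified via the Generic Cell Rate Algorithm (GCRA), which is not named in the text above but underlies the policing it describes. A sketch of its virtual-scheduling form follows, where `increment` corresponds to 1/PCR and `limit` plays the role of the tolerance (e.g. CDVT); non-conforming cells would then be dropped or have their CLP bit marked:

```python
def gcra(arrivals, increment, limit):
    """Generic Cell Rate Algorithm, virtual-scheduling form.
    A cell is conforming if it does not arrive more than `limit`
    ahead of its theoretical arrival time (TAT).

    arrivals: cell arrival times in seconds, ascending.
    Returns a list of booleans (True = conforming)."""
    tat = 0.0  # theoretical arrival time of the next cell
    verdicts = []
    for t in arrivals:
        if t < tat - limit:
            # Too early: non-conforming; drop the cell or mark CLP
            verdicts.append(False)
        else:
            # Conforming: schedule the next theoretical arrival
            tat = max(t, tat) + increment
            verdicts.append(True)
    return verdicts
```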
DIFFERENCES
A very important difference between the two technologies lies in their encapsulation and transport methods. The first difference is in packet length: MPLS can work with variable-length packets, whereas ATM uses fixed-length 53-byte cells. ATM uses an additional adaptation layer to segment, transport and reassemble packets over the ATM network, which adds significant overhead and complexity to the data stream. MPLS does the same work far more simply, by just adding a label to the head of each packet and transmitting it over the network. The way the two technologies create and maintain their connections is also quite different. An MPLS connection (LSP) is unidirectional, allowing data to flow in only one direction between two endpoints. Establishing two-way communication between endpoints requires a pair of LSPs, so data flowing in the forward direction may use a different path than data flowing in the reverse direction. ATM's point-to-point virtual circuits, in contrast, are bidirectional, allowing data to flow in both directions over the same path. Both technologies support tunneling of connections within connections: ATM uses virtual paths, whereas MPLS uses label stacking. MPLS can stack multiple labels to form tunnels within tunnels, whereas the ATM Virtual Path Identifier and Virtual Circuit Identifier are carried together in the cell header, limiting ATM to a single level of tunneling. Modern routers are built to support the most widely used protocol, the Internet Protocol (IP). The biggest single edge that MPLS has over ATM is that it was designed to work in conjunction with IP, which gives network operators ease and great flexibility in network design and operation. ATM is incompatible with IP and requires complex adaptation, making it quite unsuitable in today's Internet world, which IP has come to dominate.
MPLS VPNs
One of the major applications of MPLS is in VPNs. Known as MPLS VPN, it provides a number of enhanced benefits to enterprises, such as any-to-any connectivity through the use of forwarding tables, the ability to retain an existing IP addressing plan by supporting overlapping IP addresses, and greater scalability at the site-to-site and data center levels. A significant technical advantage of MPLS VPNs is that they are connectionless, which limits the need for tunnels and encryption for network privacy and thus eliminates significant complexity. MPLS-based Layer 3 VPNs employ the BGP routing protocol for allocating VPN labels; an internal BGP session links the two edge routers, in this case the two provider edge routers. It is the job of the Label Distribution Protocol to distribute labels in the core of the network. Also in the core, VPN routing and forwarding instances, also called VRF tables, are derived from the global routing tables which reside in each router. One VRF is assigned to each subscriber. Since there are multiple VRFs and one global routing table, a service provider can offer a VPN service as well as Internet service over the same connection. When traffic arrives on a VPN, the forwarding decision is made according to the associated VRF; Internet traffic is still routed using the global routing table. Forwarding tables provide the functionality for any-to-any connectivity. Using MPLS, multiple routing tables can simultaneously support traffic for multiple VPNs in addition to non-VPN traffic. Each subscriber is given a VRF instance. The VRF then uses BGP to assign labels to VPN prefixes. These prefixes are used to route packets to the egress router, which also advertises the prefix. The target egress node is determined by the Interior Gateway Protocol (IGP), and the outgoing interface to which the traffic is to be sent is determined by the VPN label.
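The label imposition at the ingress provider edge can be sketched as two table lookups: the VRF yields the inner (VPN) label and the egress PE, and an IGP label table yields the outer label that reaches that PE. The table shapes and names here are hypothetical simplifications:

```python
def impose_vpn_labels(prefix, vrf_routes, igp_labels):
    """Ingress PE sketch: look up the destination prefix in the
    subscriber's VRF to get the VPN (inner) label and the egress PE,
    then add the IGP (outer) label toward that PE.

    Returns the label stack as a list, bottom label first, top label last."""
    vpn_label, egress_pe = vrf_routes[prefix]   # inner label from the VRF
    outer_label = igp_labels[egress_pe]         # outer label toward the egress PE
    return [vpn_label, outer_label]
```

Transit routers then forward on the outer label alone; only the egress PE examines the inner label to select the subscriber's VRF.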
This involves MPLS label stacking, with both an outer label and an inner label. The outer label is the IGP label, whereas the inner label is the VPN label. The outer label ensures that traffic gets from the source to the destination provider edge, while the inner label directs the traffic once it arrives. In other words, traffic is routed to the appropriate provider edge using the outer label, and the inner label then dictates where it goes from there. Routing policies are configured to import and export routes. These policies allow topologies to be built as hub-and-spoke or as full mesh, and allow sites to maintain any-to-any connectivity even when there is a single access link. New sites can be added easily by adding a new VRF to the appropriate provider edge router. There is no need to visit every other router or site to accommodate the introduction of a new site; MPLS therefore provides excellent scalability and simplified start-up of new sites. Large non-MPLS networks, when trying to implement full-mesh topologies, often end up with a scalability problem. To ensure a scalable, full-mesh topology, MPLS-based VPNs do not keep any subscriber information in the core devices. With ATM, core devices have to keep virtual circuit information about each and every subscriber, along with detailed information about the locations they can reach. This quickly limits capacity in a large full-mesh network topology. In contrast, MPLS-based VPNs are virtually unlimited in the number of sites each can reach. Today, MPLS-based VPNs have been implemented to span thousands of sites, and are far ahead of any alternative technology when it comes to scalability. Many service providers offer both Layer 3 and Layer 2 VPN support over the same MPLS network, giving enterprise IT managers the freedom to choose the best option for meeting the overall requirements of their networks. Cisco Systems has also implemented a Layer 2 MPLS solution based on Any Transport over MPLS (AToM). AToM uses the same concept of label stacking as implemented in Layer 3 BGP VPNs; it uses label stacking to carry multiple Layer 2 circuits over a single pseudowire. Due to the point-to-point nature of AToM, LDP is used instead of BGP. The following figure also shows the basic and additional network services provided by MPLS VPNs.
The strength of MPLS is its immense scalability and adaptability to accommodate application requirements and to precisely engineer traffic tunnels to avoid congestion and more fully utilize all available bandwidth. Because of these added benefits, MPLS has been adopted as a protocol for micro-mobility. The mobility-enhanced MPLS architecture (WMPLS) enables networks to locate roaming terminals for data delivery and to maintain connections as a terminal moves into a new service area. The basic network architecture comprises mobile hosts capable of initiating traffic flows towards the base stations. The traffic flow is collected towards the Mobile Label Switching Nodes (MLSNs) via the base stations. Base stations are also responsible for the termination of label switched paths. MLSNs provide support for fast handoff and location management mechanisms. An MLSN can perform the function of either a Label Switch Router (LSR) or a Label Edge Router (LER), as in a classical MPLS network. The following figure shows a simplified model of a mobile network with labels distributed between a correspondent node and the destination mobile node in a foreign network.
In a purely control-driven approach, after a mobile initiates a path establishment process, each mobile in the network would need to run a routing protocol and hold the complete topology of the network. This requires a huge amount of resources for each mobile, and is not an optimal solution. Since it is still necessary to utilize the advantages provided by routing protocols, these protocols are run across the MLSNs and across their interfaces. Whenever a mobile node wants to establish a label switched path, a participating mobile switching node in the home area provides the explicit routing information to the mobile. This allows the mobile to initiate signaling with an explicit route specification.
Two distinct operations are typically involved in forwarding for a mobile node: aggregating IP packets into a Forwarding Equivalence Class (FEC), and mapping the FEC to the next hop in the path. End-to-end MPLS allows the mobile to map packets to a FEC and encode this FEC as a label. Once the path has been established, the intermediate mobile nodes need only perform the second operation, i.e. mapping the label to the next hop and performing the appropriate label translations. Wireless packet core networks, which are in their infancy, must evolve to support future IP/PPP transport. To provide QoS and to converge voice and data onto a common packet network, MPLS should be deployed in the core of the network. This would allow the packet core to simultaneously support multiple services, including 3G wireless applications and traditional data and voice services. Standardization bodies are working on developing standards for using MPLS for end-to-end services. This not only ensures a consistent control plane across many protocols such as ATM, PPP and Ethernet, but also extends this compatibility to optical and wireless networks as well.
Conclusion
This paper has discussed the application of Multi Protocol Label Switching to traffic engineering in IP networks. The growth and adaptability of the Internet in the recent history of communications is unparalleled. With the emergence of voice and video applications, it has become difficult to utilize all the available resources over the Internet efficiently. MPLS is one of the entrants in this remarkable evolution that provides better utilization of network resources and better Quality of Service (QoS). MPLS is a technology that uses the native capabilities of IP to improve network efficiency and service guarantees. MPLS, when combined with traffic engineering, delivers a formidable tool for meeting the current, rigid requirements of differentiated services. It builds on IP routing, the proven scalability of terabit routers, and the mechanisms for end-to-end QoS.
References
[1] Richard Mortier, "Internet Traffic Engineering," April 2002.
[2] Tímea Dreilinger, "DiffServ and MPLS."
[3] Tamrat Bayle, Reiji Aibara, Kouji Nishimura, "Performance Measurements of MPLS Traffic Engineering and QoS."
[4] Wikipedia, "Multiprotocol Label Switching," http://en.wikipedia.org/wiki/MPLS.
[5] Cisco Systems Inc., "MPLS Traffic Engineering."
[6] Cisco Systems Inc., "MPLS TE Technology Overview."
[7] Avici Systems Inc., "Traffic Engineering With Multiprotocol Label Switching."
[8] Cisco Systems Inc., "MPLS Traffic Engineering (TE) Scalability and Enhancements."
[9] Joseph M. Soricelli, "Introduction to MPLS."
[10] Daniel O. Awduche, "MPLS and Traffic Engineering in IP Networks."
[11] Kaouthar Sethom, Hossam Afifi, Guy Pujolle, "Wireless MPLS: A New Layer 2.5 Micro-mobility Scheme."
[12] Subramanian Vijayarangam and Subramanian Ganesan, "QoS Implementation for MPLS Based Wireless Networks."
[13] Wikipedia, "Asynchronous Transfer Mode," http://en.wikipedia.org/wiki/Asynchronous_Transfer_Mode.