
Multi Protocol Label Switching-Traffic Engineering (MPLS-TE)

ECE-545

Obaid Ullah Khalid, ID: 10413927
Shahzad Khan Muhammad, ID: 10413642

ABSTRACT
The strength of the Internet has been its immense scalability and adaptability, accommodating a seemingly unceasing portfolio of applications. With the rising popularity of the Internet across the world, it has become imperative to enhance Quality of Service (QoS) and traffic engineering capabilities. This requires an increase in network reliability, efficiency and service quality. Internet Service Providers are responding to these developments by enhancing their networks and optimizing their performance. In this context, traffic engineering plays the biggest role in the design and operation of large Internet backbone networks. Traffic engineering (TE) is the process of guiding traffic across the backbone to facilitate the efficient use of the network infrastructure. One of the main technologies used for TE is Multi Protocol Label Switching (MPLS). Before MPLS TE, traffic engineering was performed by either IP or ATM, depending on the protocol used between the edge routers. MPLS has helped to address some of the problems related to TE in IP networks. In this paper, we examine the role of MPLS in TE and how dramatically it has improved the performance and scalability of backbone networks. MPLS allows the establishment of several detour paths between a source and destination pair, in addition to a primary path of minimum hops. MPLS has the capability to engineer traffic tunnels to avoid congestion and fully utilize all available bandwidth. The greatest strength of MPLS is its seamless coexistence with IP traffic and its reuse of proven IP routing protocols.

INTRODUCTION
With the rising popularity of the Internet across the world, the usage of network resources has also increased. The strength of the Internet has been its immense scalability and adaptability, accommodating a seemingly unceasing portfolio of applications. With the rise of video, voice and other high-priority applications, it has become imperative to improve quality of service (QoS). Users across the world receive service from the network based on various transport characteristics, such as latency, throughput, bandwidth and reliability. Service providers operate networks to deliver these services to users by carrying traffic under user-specified constraints. The process of managing the allocation of these resources and providing quality of service is known as traffic engineering (TE). To carry out improved TE, service providers use different technologies; Multi Protocol Label Switching (MPLS) is one of them. MPLS has helped in solving many issues related to traffic engineering in IP networks.

What is TE?
Traffic engineering is "concerned with the performance optimization of networks" [1]. It is the process of capably allocating network resources so that user demands are met and operator benefit is maximized. Traffic engineering is important in both wireless and wired networks. Despite ever-increasing developments in technologies like optical fiber, which make immense amounts of bandwidth available, traffic engineering remains just as important. One possible reason is that the number of users and their demands are also increasing. Consequently, traffic engineering still performs a useful function for users and operators, and it is valuable to perform it in an efficient manner.

What is Multi Protocol Label Switching (MPLS)?


Multi Protocol Label Switching-Traffic Engineering (MPLS-TE) is a growing technology in today's service provider networks. MPLS TE can play an important role in the implementation of network services with quality of service guarantees. Multi Protocol Label Switching is a technology developed by engineers at Cisco Systems Inc. It was called Tag Switching while it was Cisco proprietary; it was later handed over to the IETF for open standardization and renamed Label Switching. One original motivation was to allow the creation of simple high-speed switches, since for a significant length of time it was impossible to forward IP packets entirely in hardware [6]. MPLS's ability to re-use other protocols has made it quite popular in the operator world. It allows Multi Protocol Label Switched networks to replicate and expand upon the traffic engineering capabilities of Layer 3 and Layer 2. MPLS uses protocols from both Layer 3 and Layer 2, and is therefore sometimes referred to as a Layer 2.5 protocol. Since it uses the signaling capabilities of Layer 2 and the routing capabilities of Layer 3, the paths selected are optimal with regard to both. It can be used to carry many different kinds of traffic, including IP packets, as well as native ATM, SONET, and Ethernet frames. Historically, IP networks relied on the optimization of the underlying network infrastructure or on Interior Gateway Protocol (IGP) tuning for TE [6]. MPLS re-uses the existing IP protocols and makes use of additional capabilities to provide better quality of service. Multi Protocol Label Switching-Traffic Engineering helps to cut down network failures and enhances service compatibility.


Components of MPLS TE model


An MPLS network comprises the following components:

Label Switched Path (LSP): The MPLS tunnel or path is known as the LSP.
Label Switched Routers (LSRs): The routers along the LSP are known as LSRs.
Ingress Router: The entry router is called the ingress router.
Egress Router: The destination router is known as the egress router.
Midpoint LSRs: LSRs along the LSP between the ingress and the egress are called midpoint LSRs.

The MPLS Traffic Engineering model consists of the following components:

Path Management
Traffic Assignment
Network State Information Dissemination
Network Management

Path Management
Path management is concerned with the selection, installation and maintenance of explicit routes and LSPs. The associated policies specify the criteria on which a path should be selected, as well as rules for sustaining already established LSPs.


A path selection function is used to specify explicit routes for an LSP tunnel at the origination node. An explicit route can be represented as a sequence of hops or a sequence of abstract nodes, and it may contain both loose and strict subsets. These routes can be defined administratively or computed automatically by a constraint-based routing entity; constraint-based routing helps to reduce the level of manual intervention. A signaling component, which also serves as a label distribution protocol, is used by the path management function to instantiate LSP tunnels. A third function maintains, sustains and terminates the established paths. LSP tunnel attributes include traffic parameters, priority attributes, preemption attributes, resource class affinity attributes, and other policy attributes. Traffic attributes specify the available bandwidth on the LSP tunnels, peak rates, mean rates and burst sizes. Adaptive LSPs change their paths when a better path is available, whereas non-adaptive LSPs only change paths in case of a failure. The resilience attribute controls whether an LSP tunnel is automatically rerouted due to faults on its path. Resource attributes are used to specify additional resources of the network and to categorize resources and links into classes. These attributes can then be used to control traffic over specific topological regions of the network.
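The attribute set described above can be pictured as a small record attached to each TE LSP tunnel at its head end. The sketch below is a minimal Python illustration; the field names and defaults are assumptions of ours and do not correspond to any particular vendor implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LspTunnelAttributes:
    """Illustrative attribute record for one TE LSP tunnel (names are hypothetical)."""
    bandwidth_mbps: float          # traffic parameter: bandwidth requested for the tunnel
    peak_rate_mbps: float          # traffic parameter: peak rate
    setup_priority: int            # 0 (best) .. 7 (worst): ability to preempt other tunnels
    holding_priority: int          # 0 (best) .. 7 (worst): resistance to being preempted
    adaptive: bool = True          # re-optimize onto a better path when one appears
    resilient: bool = True         # automatically reroute around faults on the path
    include_affinities: List[int] = field(default_factory=list)  # resource classes the path must use
    exclude_affinities: List[int] = field(default_factory=list)  # resource classes the path must avoid

# Example: a high-priority, fault-protected tunnel asking for 50 Mb/s
voice_tunnel = LspTunnelAttributes(bandwidth_mbps=50, peak_rate_mbps=80,
                                   setup_priority=1, holding_priority=1)
```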

Traffic Assignment
This component deals with all aspects of allocating traffic to LSPs after they have been established. A partitioning function is used to partition ingress traffic, and an assignment function then assigns the partitioned traffic to the tunnels. Load distribution is an important issue in traffic assignment. It can be handled by implicitly or explicitly assigning weights to the tunnels and partitioning traffic according to those weights. Load distribution across parallel LSP tunnels can also be implemented as a feedback function of the state of the network.
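As a concrete illustration of weight-based load distribution, the short Python sketch below hashes each flow identifier and assigns the flow to one of several parallel tunnels in proportion to configured weights. The hashing scheme and the tunnel names are assumptions for illustration only, not a described mechanism from the paper.

```python
import hashlib
from bisect import bisect_left
from itertools import accumulate
from typing import List, Tuple

def assign_flow(flow_id: str, tunnels: List[Tuple[str, float]]) -> str:
    """Map a flow onto one of several parallel LSP tunnels in proportion to its weight.

    tunnels is a list of (tunnel_name, weight) pairs; hashing the flow identifier
    keeps all packets of one flow on the same tunnel and avoids reordering.
    """
    boundaries = list(accumulate(weight for _, weight in tunnels))   # cumulative weights
    total = boundaries[-1]
    # Hash the flow identifier into the range [0, total)
    point = (int(hashlib.md5(flow_id.encode()).hexdigest(), 16) % 10_000) / 10_000 * total
    return tunnels[bisect_left(boundaries, point)][0]

# Example: a 70/30 split between two hypothetical parallel tunnels to the same egress
tunnels = [("LSP-A", 70.0), ("LSP-B", 30.0)]
print(assign_flow("10.1.1.1->10.2.2.2:tcp/443", tunnels))
```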

Network State Information Dissemination


This component deals with the distribution of relevant topology state information throughout the MPLS domain. This is achieved by extending conventional IGPs to propagate additional information about the state of the network in link-state advertisements. The additional information distributed includes maximum bandwidth, a default traffic engineering metric, reserved bandwidth per priority class and resource class attributes [10]. Constraint-based routing entities use this topology state information to select feasible routes for LSP tunnels.
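To make the extra link-state information concrete, the fragment below models one TE link entry as a plain Python dictionary. The field names and the eight-entry per-priority bandwidth array are illustrative; real IS-IS and OSPF TE extensions carry this information in TLV encodings.

```python
# One TE link entry as it might sit in an LSR's TE topology database.
# Field names are illustrative; real IGP TE extensions use binary TLVs.
te_link = {
    "from": "R1", "to": "R2",
    "max_bandwidth_mbps": 1000.0,          # physical link capacity
    "te_metric": 10,                       # second metric used only for TE path selection
    "admin_group": 0xFF,                   # resource class / colour bits
    # unreserved bandwidth remaining at each of the eight setup priorities
    "unreserved_bandwidth_mbps": [800.0] * 8,
}

def has_room(link: dict, demand_mbps: float, priority: int) -> bool:
    """Check whether a link can still admit a TE LSP of the given size and priority."""
    return link["unreserved_bandwidth_mbps"][priority] >= demand_mbps

print(has_room(te_link, 50.0, priority=3))   # True
```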

Network Management
The success of MPLS TE depends upon the ease with which the network can be controlled and maintained, which is handled by the network management component. Multiple functions included in this component help to manage and control MPLS tunnels. Monitoring the end points of an LSP tunnel gives the packet loss between the ingress and the egress. Delays along the path can be calculated by sending probe packets through the tunnels and measuring their transit times. An operational requirement is the capability to list, at any point, all the nodes traversed by an LSP tunnel, as well as the number of LSP tunnels originating at, terminating at and traversing each node.

Working of MPLS
MPLS uses a label-swapping forwarding paradigm, known as label switching, for flexible control over routing across the network. The main difference between MPLS traffic and normal IP traffic is the addition of an MPLS header containing one or more labels. MPLS creates one or more tunnels, or explicitly defined paths. MPLS extends the use of Layer 3 and Layer 2 protocols: OSPF (Open Shortest Path First) and IS-IS (Intermediate System-Intermediate System) on one hand, and RSVP and CR-LDP on the other. Using these protocols, MPLS constructs a tunnel that makes the best use of network resources such as shortest path, bandwidth, delay and reliability.

Tunnel paths are calculated at the tunnel head. The decision is based on a fit between required and available resources, known as constraint-based routing. The IGP component handles the routing of packets across these tunnels, while the signaling component decides the best path for a tunnel. RSVP (Resource Reservation Setup Protocol) and CR-LDP (Constraint-based Routing Label Distribution Protocol) are the two main protocols used for this purpose. These protocols are used across the IP network to reserve resources, and MPLS also uses them to indicate to other nodes the nature (bandwidth, jitter, maximum burst, and so forth) of the packet streams it sends or intends to receive. Explicit routes to one or more nodes in the network are calculated by the traffic engineering protocols; these explicit routes are the LSPs (Label Switched Paths), also known as traffic engineering tunnels.

Link-state protocols like integrated IS-IS use Dijkstra's SPF (Shortest Path First) algorithm to compute a shortest path tree to all nodes in the network. The routing tables are derived from this shortest path tree; they contain ordered sets of destination and first-hop information. In normal hop-by-hop routing, the first hop is a physical interface attached to the router. Routers must agree on how to use the traffic engineering tunnels, otherwise traffic might loop through two or more tunnels. A router discovers the path to one node in the network during each step of the SPF computation. If that node is directly connected to the calculating router, the first-hop information is derived from the adjacency database. If it is not directly connected, the node inherits first-hop information from its parent. Each node has one or more parents, and each node is the parent of zero or more downstream nodes.


Each router maintains a list of all TE tunnels that originate at it, and for each of these TE tunnels the router at the tailend is known. There are three possible ways to calculate the first-hop information:

1. Examine the list of tailend routers directly reachable by way of a TE tunnel. If there is a TE tunnel to this node, use the TE tunnel as the first hop.
2. If there is no TE tunnel and the node is directly connected, use the first-hop information from the adjacency database.
3. If the node is not directly connected and is not directly reachable by way of a TE tunnel, the first-hop information is copied from the parent node(s) to the new node.

As a result of these computations, traffic to nodes that are the tailend of TE tunnels flows through those tunnels. If there is more than one TE tunnel to different intermediate nodes on the path to a destination node X, traffic flows over the TE tunnel whose tailend node is closest to node X.

When an unlabeled packet enters the MPLS tunnel, the ingress router first examines which forwarding equivalence class (FEC) the packet should belong to and inserts one or more labels into the packet's MPLS header. A Forwarding Equivalence Class groups packets with similar or identical characteristics that may be forwarded in the same way, i.e. that may be bound to the same MPLS label. After a packet enters an LSP, the ingress router prepends the packet with an MPLS header containing one or more labels; this is called the label stack.
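The three first-hop rules listed above can be condensed into a small decision function. The sketch below is a simplified Python rendering; the table names and helper structures are assumptions rather than actual router internals.

```python
from typing import Dict, Optional

def first_hop(node: str,
              te_tunnels_by_tailend: Dict[str, str],
              adjacencies: Dict[str, str],
              parent_first_hops: Dict[str, str]) -> Optional[str]:
    """Pick first-hop information for `node` during SPF, preferring TE tunnels.

    te_tunnels_by_tailend: tailend router -> local TE tunnel interface
    adjacencies:           directly connected router -> physical interface
    parent_first_hops:     node -> first hop already computed for its SPF parent
    """
    # Rule 1: a TE tunnel terminates on this node -- use the tunnel itself.
    if node in te_tunnels_by_tailend:
        return te_tunnels_by_tailend[node]
    # Rule 2: no tunnel, but the node is directly connected -- use the adjacency.
    if node in adjacencies:
        return adjacencies[node]
    # Rule 3: otherwise inherit the first hop computed for the parent node.
    return parent_first_hops.get(node)
```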

Each label stack entry has four fields:

A 20-bit label value
A 3-bit field for experimentation (the EXP bits)
A 1-bit bottom-of-stack flag; if set to 1, it signifies that the current label is the last in the stack
An 8-bit TTL (Time To Live) field
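These four fields fit into a single 32-bit word, so a label stack entry can be packed and unpacked with simple bit operations, as in the short sketch below (the example label values are arbitrary).

```python
import struct

def pack_label_entry(label: int, exp: int, bottom: bool, ttl: int) -> bytes:
    """Pack one 32-bit MPLS label stack entry: 20-bit label, 3-bit EXP, 1-bit S, 8-bit TTL."""
    assert 0 <= label < 2**20 and 0 <= exp < 8 and 0 <= ttl < 256
    word = (label << 12) | (exp << 9) | (int(bottom) << 8) | ttl
    return struct.pack("!I", word)

def unpack_label_entry(data: bytes) -> dict:
    """Split a 32-bit label stack entry back into its four fields."""
    (word,) = struct.unpack("!I", data)
    return {"label": word >> 12,
            "exp": (word >> 9) & 0x7,
            "bottom_of_stack": bool((word >> 8) & 0x1),
            "ttl": word & 0xFF}

entry = pack_label_entry(label=16004, exp=5, bottom=True, ttl=64)
print(unpack_label_entry(entry))   # {'label': 16004, 'exp': 5, 'bottom_of_stack': True, 'ttl': 64}
```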

After a label is attached, the packet is passed on to the next-hop router for the tunnel. Instead of an IP table lookup, these MPLS-labeled packets are forwarded using a label lookup. Label lookup and forwarding are much faster than a normal IP lookup because they take place directly in the switching fabric rather than in the CPU. When an MPLS router receives a labeled packet, the topmost label is examined. Based on its contents, one of three operations can be performed on the packet's label stack: swap, push or pop. Pre-built lookup tables in the routers tell them which operation should be performed on the topmost label of an incoming packet. In a swap operation the label is swapped with a new label, and the packet is forwarded along the path associated with the new label. In a push operation a new label is pushed on top of the existing label, which encapsulates the packet in another layer of MPLS; this is mostly used in MPLS VPNs. In a pop operation the label is removed from the packet, which may reveal an inner label below. At the egress router, the last label is removed and only the payload remains; this is an IP packet, or any other kind of payload. The egress router must have the routing information for the packet's payload, since it has to forward the packet without the help of any labels. Another technique used for popping is called Penultimate Hop Popping (PHP), in which the label is popped at the hop before the egress router. Transit routers connected directly to the egress router pop the last label themselves, offloading the egress router.
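The three label operations can be modelled directly on a list that represents the label stack. The sketch below is a simplified software model under that assumption; a real LSR performs these operations in hardware using its forwarding tables.

```python
from typing import List, Optional

def lsr_process(label_stack: List[int], action: str, new_label: Optional[int] = None) -> List[int]:
    """Apply a swap, push or pop operation to the top of an MPLS label stack.

    label_stack[0] is the topmost label; an empty list means a plain (IP) payload.
    """
    stack = list(label_stack)                 # work on a copy
    if action == "swap":
        stack[0] = new_label                  # replace the top label, forward on its LSP
    elif action == "push":
        stack.insert(0, new_label)            # add another level of encapsulation (e.g. a VPN)
    elif action == "pop":
        stack.pop(0)                          # remove the top label; may expose an inner label
    else:
        raise ValueError(f"unknown operation: {action}")
    return stack

print(lsr_process([1001], "swap", 2002))      # [2002]
print(lsr_process([2002], "push", 3003))      # [3003, 2002]
print(lsr_process([3003, 2002], "pop"))       # [2002]
```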

Basic Operation of MPLS TE


The operation of MPLS TE involves link information distribution, path computation, LSP signaling and traffic selection. LSRs implement the first two steps, link information distribution and path computation, when using constraint-based routing. When constraint-based routing is not used, routers only perform signaling and traffic selection. MPLS TE does not define entirely new protocols; it builds on and extends the existing IP protocols.

Link Information Distribution


An LSR requires an in-depth and detailed knowledge of the network to perform constraint-based routing. It needs to know the current state of an extended list of link attributes in order to take a set of constraints into consideration during path computation for a TE LSP. To acquire these attributes, link-state protocols such as IS-IS and OSPF are used. LSRs then use this information to build a TE topology database, which is separate from the regular topology database that LSRs build for hop-by-hop destination-based routing. Available bandwidth, an administrative group (flags) and a TE metric are introduced as new link attributes. The administrative group acts as a classification mechanism to define link inclusion and exclusion rules, and the TE metric is used as a second link metric for path optimization. LSRs also distribute a TE ID, which has a function similar to that of a router ID. An LSR in a network with multiple areas only builds a partial topology database; these partial databases also play a role in path computation. LSRs that operate in an inter-autonomous-system TE environment likewise have to deal with partial network topologies.


Path Computation
Path computation for a TE LSP is performed by LSRs using the TE topology database. The most common approach to constraint-based routing builds on the Shortest Path First (SPF) algorithm, and is therefore called the Constraint-Based Shortest Path First (CSPF) algorithm. The algorithm first removes all links that do not meet the TE LSP requirements, then uses the IGP link metric or the link TE metric to compute the shortest path over what remains. Although CSPF does not guarantee a completely optimal mapping of traffic onto the network resources, it is still considered an adequate approximation. Figure 2-2 illustrates a simplified version of CSPF on a sample network. In this case, node E wants to compute the shortest path to node H with the following constraints: only links with at least 50 bandwidth units available and an administrative group value of 0xFF. Node E examines the TE topology database and disregards links with insufficient bandwidth or administrative group values other than 0xFF; the dotted lines in the topology represent links that CSPF disregards. Subsequently, node E executes the shortest path algorithm on the reduced topology using the link metric values. In this case, the shortest path is {E, F, B, C, H}. Using this result, node E can initiate the TE LSP signaling.
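The prune-then-shortest-path logic of CSPF is easy to reproduce in a few lines. The Python sketch below is a simplified illustration rather than a production implementation: it removes links that fail the bandwidth and administrative-group constraints and then runs Dijkstra on what remains. The topology encoded here is a made-up stand-in for the figure described above.

```python
import heapq

def cspf(links, src, dst, min_bw, admin_group):
    """Constraint-based SPF: prune infeasible links, then run Dijkstra on the rest.

    links: list of (a, b, metric, available_bw, group) for bidirectional links.
    """
    # Step 1: prune links that violate the constraints.
    graph = {}
    for a, b, metric, bw, group in links:
        if bw >= min_bw and group == admin_group:
            graph.setdefault(a, []).append((b, metric))
            graph.setdefault(b, []).append((a, metric))

    # Step 2: ordinary Dijkstra shortest path on the reduced topology.
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, metric in graph.get(node, []):
            if neighbour not in seen:
                heapq.heappush(queue, (cost + metric, neighbour, path + [neighbour]))
    return None                              # no feasible path

# Hypothetical topology: (a, b, metric, available_bw, admin_group)
links = [("E", "F", 10, 100, 0xFF), ("F", "B", 10, 80, 0xFF), ("B", "C", 10, 90, 0xFF),
         ("C", "H", 10, 70, 0xFF), ("F", "G", 10, 30, 0xFF), ("G", "H", 10, 100, 0x0F)]
print(cspf(links, "E", "H", min_bw=50, admin_group=0xFF))   # (40, ['E', 'F', 'B', 'C', 'H'])
```

Here the link F-G is pruned for lack of bandwidth and G-H for a mismatched administrative group, which is why the computation returns the path {E, F, B, C, H} from the example above.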

Path computation is a difficult process and involves several computations, especially in multi-area or inter-autonomous-system networks. If the head end is unable to view the complete network topology, it specifies the path as a list of predefined boundary LSRs.


Signaling of TE LSPs
MPLS TE uses several extensions to RSVP to signal LSPs. RSVP gains five new objects: LABEL_REQUEST, LABEL, EXPLICIT_ROUTE, RECORD_ROUTE, and SESSION_ATTRIBUTE. The LABEL_REQUEST object is used to request a label binding at each hop, and the LABEL object is used to perform the label distribution; on-demand label distribution is performed by network nodes using these two objects. The EXPLICIT_ROUTE object contains a hop list that defines the explicitly routed path that the signaling will follow. The RECORD_ROUTE object collects the hop and label information along the signaling path. The attribute requirements of the LSP, such as priority and protection, are given by the SESSION_ATTRIBUTE object.
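As a rough illustration of how these objects travel together, the fragment below assembles a Path message as a plain dictionary. This is only a conceptual sketch: real RSVP-TE objects are binary encoded inside RSVP messages, the LABEL object is actually returned in the Resv message rather than the Path message, and all field names here are our own.

```python
# Conceptual RSVP-TE Path message for a new TE LSP (field names are illustrative only).
path_message = {
    "SESSION":           {"tunnel_endpoint": "H", "tunnel_id": 7},
    "LABEL_REQUEST":     {},                              # ask each hop for a label binding
    "EXPLICIT_ROUTE":    ["E", "F", "B", "C", "H"],       # hop list computed by CSPF
    "RECORD_ROUTE":      [],                              # filled in hop by hop along the path
    "SESSION_ATTRIBUTE": {"setup_priority": 1,
                          "holding_priority": 1,
                          "local_protection_desired": True},
}

# Each LSR along the EXPLICIT_ROUTE appends itself to RECORD_ROUTE as signaling proceeds;
# the LABEL bindings come back hop by hop in the reverse (Resv) direction.
for hop in path_message["EXPLICIT_ROUTE"]:
    path_message["RECORD_ROUTE"].append({"node": hop})
```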

Traffic Selection
The process of traffic selection is kept separate from the process of TE LSP creation. A head end can signal a TE LSP, but traffic does not start to flow through the LSP until the LSR has implemented a traffic-selection mechanism. The head end is the only entry point for the traffic, so the selection of traffic is also a decision made by the head end. The head end can use different approaches for this decision, and the selection criteria can be dynamic or static. The decision may also depend upon the packet type, for instance IP or Ethernet, or on the packet contents, for example the class of service. MPLS can make use of several traffic selection mechanisms depending on the services it has to offer.

MPLS and DiffServ, a comparison


Internet Service Providers have traditionally provided the same class of service to all their customers, often known as Best-Effort Service; the only differentiation among customers has been the connectivity type. In recent years, however, both service providers and customers have demanded new ways of differentiating service. The major factor has been the emergence of new applications that require specific service qualities. ISPs could also improve their revenues by applying a differentiated pricing scheme: for a higher level of service, higher rates can be charged. This distinctive capability can be delivered by the DiffServ architecture. The architecture is based on a simple model in which traffic entering a network is classified and assigned to different behavior aggregates (BAs), where each BA is identified by a single DiffServ Code Point. Within the core of the network, packets are treated according to a per-hop behavior. A DiffServ domain is the smallest autonomous unit of DiffServ in which services are assured by identical principles. A domain consists of boundary nodes (routers) and core nodes (routers); the only purpose of core nodes is to forward packets. The architecture achieves scalability by implementing complex classification and conditioning functions only at the boundary nodes. The Type of Service (ToS) octet in IPv4 and the Traffic Class byte in IPv6 are used for this purpose.
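Since classification at the boundary relies on the ToS octet (IPv4) or Traffic Class byte (IPv6), a classifier effectively reads the six DSCP bits out of that byte. A minimal sketch of that extraction, with an arbitrary example value:

```python
def dscp_from_tos(tos_octet: int) -> int:
    """Return the 6-bit DiffServ Code Point carried in the upper bits of the ToS/Traffic Class byte."""
    return (tos_octet >> 2) & 0x3F

# Example: a ToS byte of 0xB8 carries DSCP 46, a value commonly used for expedited forwarding of voice.
print(dscp_from_tos(0xB8))   # 46
```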


In an MPLS domain, when a stream of data traverses a common path, a Label Switched Path is established, which simplifies the routing process. Each packet, when it enters the network at the ingress node, is assigned a label which also assigns an FEC to it. At each Label Switch Router (LSR) the packet is forwarded to the next hop based only on its label. In a DiffServ domain, all packets requiring the same DiffServ behavior are said to constitute a behavior aggregate. The packets are marked with a DiffServ Code Point at the ingress node of the DiffServ domain. Based on this DiffServ Code Point, each transit node selects the queuing and scheduling treatment of a packet and, in some cases, the drop probability as well. This shows the similarity between MPLS and DiffServ: a DiffServ BA and its DiffServ Code Point are quite similar to an MPLS LSP and its MPLS label, respectively. The main difference between the two is that DiffServ is used for queuing, scheduling and dropping, whereas MPLS is used for routing and forwarding. Because of this, the two technologies are not dependent on each other, and both are different ways of providing higher-quality services. A network operator could also choose to employ both architectures at the same time.

Implementing MPLS with DiffServ


When an MPLS-capable router sends a packet to the next hop, the label is sent along with it. This way, the LSRs along the LSP need not analyze the packet header; they simply read the MPLS label, which is used as an index into a forwarding table maintained by each LSR that specifies the next hop. Packets marked with a DiffServ Code Point that arrive at an MPLS network therefore need a way to transfer the information provided by the Code Point onto the MPLS label: DiffServ Code Points are part of the IP header, and in an MPLS network IP headers are not examined. DiffServ must therefore be provided in a different way in order to make MPLS DiffServ-capable. Two different methods have been defined to achieve this. One way is to use the 3-bit EXP field of the MPLS header. This allows a maximum of eight Behavior Aggregate classes to be distinguished, and an EXP value is mapped to a full PHB description, i.e. queuing, scheduling and drop precedence. The other way is used if there are more than eight service classes. A different type of LSP has been proposed, in which the DiffServ Code Point is mapped to a (Label, EXP) pair. This is known as the L-LSP, i.e. label-inferred LSP, which gives all the information regarding scheduling and queuing. The Behavior Aggregate classification is then based on the EXP and label fields of the MPLS header rather than on the DiffServ Code Point. Buffer management and queue scheduling remain the same with or without MPLS, so the per-hop behavior of the packets is also identical.
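For the EXP-based approach, the ingress LSR simply needs a table that maps each incoming DiffServ Code Point onto one of the eight EXP values. The mapping below is an illustrative assumption, not a standardized one; operators define their own.

```python
# Illustrative DSCP -> EXP mapping applied at the ingress LSR for EXP-inferred LSPs.
# The specific values chosen here are assumptions for the example.
DSCP_TO_EXP = {
    46: 5,   # voice-style traffic mapped to the highest forwarding treatment used here
    26: 3,   # an assured-forwarding style class
    0:  0,   # best effort
}

def exp_for_packet(dscp: int) -> int:
    """Pick the EXP bits to write into the MPLS header (default to best effort)."""
    return DSCP_TO_EXP.get(dscp, 0)

print(exp_for_packet(46))   # 5
```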


Fast ReRoute (FRR)


MPLS provides a number of advantages besides utilizing the available bandwidth and resources. A technique known as Fast Reroute (FRR) is used by MPLS to provide traffic protection in case of a network failure. Traffic protection is important for real-time traffic or any other traffic with strict packet loss requirements. FRR uses a local protection approach in which a presignaled backup TE path is used to reroute traffic in case of failure. The head end of the backup TE LSP is the node immediately upstream of the failure and is responsible for rerouting the traffic. This ensures that no delay is incurred in computing a path and signaling a new TE LSP to reroute the traffic. FRR can reroute traffic in tens of milliseconds.

The figure above shows an MPLS network using FRR. In this case node E signals a TE LSP toward node H via nodes F and G. A backup path that passes via node I has also been presignaled, in case the link between nodes F and G fails. Node F is therefore responsible for rerouting the traffic onto the backup path, which makes it the point of local repair (PLR).
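The behaviour of the point of local repair can be sketched as follows: node F holds a presignaled backup next hop and switches traffic onto it as soon as the protected link reports a failure. Node names follow the example above; the label values and data structures are simplified assumptions.

```python
# Simplified FRR behaviour at the point of local repair (node F in the example above).
primary_next_hop = {"link": "F-G", "out_label": 2001}          # protected path toward node G
backup_next_hop  = {"link": "F-I", "out_label": 5001}          # presignaled backup LSP via node I

failed_links = set()

def forward_from_F(packet_label: int) -> dict:
    """Return the next hop to use; fall back to the backup LSP if the protected link is down."""
    if primary_next_hop["link"] in failed_links:
        return {"out_label": backup_next_hop["out_label"], "via": backup_next_hop["link"]}
    return {"out_label": primary_next_hop["out_label"], "via": primary_next_hop["link"]}

print(forward_from_F(1001))            # primary path while F-G is up
failed_links.add("F-G")                # link failure detected
print(forward_from_F(1001))            # traffic immediately rerouted via node I
```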


MPLS and QoS


The most important issue here is the need for quality of service (QoS) capability in the Internet. The current Internet only provides best-effort delivery, regardless of the needs of the traffic. This is because IP was not designed to provide a connection-oriented service, and therefore provides no guarantee of packet delivery. It is nevertheless necessary to have some means of reserving a path for all the packets of a traffic flow. This would allow the Internet to differentiate between high-priority and low-priority network traffic and ensure guaranteed QoS.

Multi Protocol Label Switching is a technology that can improve network performance and quality of service. It offers multiple classes of service, which are associated with different types of traffic. For instance, mission-critical applications might be in a Gold class of service, less important applications might be in a Silver package, recreational applications such as P2P and instant messaging might be in a Best Effort class, and VoIP traffic might be in its own class of service designed to reduce jitter. MPLS can also work in conjunction with other QoS architectures for IP networks defined by the IETF; Differentiated Services (DiffServ) is a model used with MPLS to provide QoS in IP networks, and it provides end-to-end QoS management in a scalable way.

The QoS feature of MPLS represents the capability to provide different levels of service and resource assurance. This includes techniques to manage network bandwidth, delay, jitter and packet loss. For example, to keep voice traffic within bounds of packet loss, delay and jitter, voice packets are marked with a certain priority, combined with buffer management and queuing schemes; it is here that the EXP (CoS) field is sometimes used to determine treatments such as queuing and scheduling. With MPLS, there are two approaches to marking traffic for controlling QoS within the network, so the EXP (CoS) bits are set in two ways. In the first, queuing information is encoded into the EXP (CoS) field of the MPLS header; with this technique different packets might receive different markings, depending on their requirements. This approach is known as E-LSP (experimental-bit-inferred label switched path), to indicate that the QoS information is inferred from the EXP field. In the second method, the label associated with the MPLS packet specifies how the packet should be treated, so all packets of the same traffic flow carry a fixed CoS value and all packets entering the LSP receive the same class of service. Because this approach is tied to the MPLS label, it is referred to as L-LSP (label-inferred label switched path). Whichever approach is taken, MPLS provides an efficient way to deliver QoS and manage network resources. Achieving a QoS guarantee is not difficult with MPLS, since it provides bandwidth and path reservation, especially if an explicit LSP is being used.


Why use MPLS


WAN connections are an expensive item in an ISP's budget. Traffic engineering enables ISPs to route network traffic in such a way that they can offer their users the best service in terms of throughput and delay. Some ISPs base their services on an overlay model, in which the transmission facilities are managed by Layer 2 switching. This approach is advantageous in that the routers see only a fully meshed virtual topology, making most destinations appear one hop away; the explicit Layer 2 transit layer gives the operator control over how traffic uses the available bandwidth. However, the approach also has several disadvantages. MPLS follows the same overlay idea but achieves the traffic engineering benefits without the need to run a separate network, and without needing a non-scalable full mesh of router interconnects. Cisco IOS software releases, for example Cisco IOS Release 12.0, contain a set of features that provide elementary traffic engineering capabilities, such as the ability to create static routes and to control dynamic routes through the manipulation of link-state metrics. This functionality is useful in some tactical situations but is insufficient for all the traffic engineering needs of ISPs. MPLS traffic engineering accounts for link bandwidth and for the size of the traffic flow when determining explicit routes across the backbone. MPLS traffic engineering also has a dynamic adaptation mechanism that provides a full solution to traffic engineering a backbone. Dynamic adaptation is necessary because it enables the backbone to be resilient to failures, even if many primary paths have already been calculated offline.

MPLS vs. ATM


MPLS and ATM both provide a connection-oriented service, although the two protocols and technologies are different. In both, connections are signaled between endpoints, connection state is maintained at each node along the path, and encapsulation techniques are used to carry data across the connection. Despite these similarities, significant differences remain between the two.

ATM
Asynchronous Transfer Mode (ATM) is a cell relay, network and data link layer protocol that encodes data traffic into small fixed-size cells. Each cell is 53 bytes, comprising 48 bytes of data and 5 bytes of header information, instead of the variable-sized packets (sometimes known as frames) used in packet-switched networks. ATM is a connection-oriented technology, in which a connection is established between the two endpoints before the actual data exchange begins.


ATM was designed to implement a low-jitter network interface. The basic purpose of the small data cells was to reduce jitter (delay variance) in the multiplexing of data streams; reducing jitter is particularly important when carrying voice traffic. The conversion of digitized voice back into an analog audio signal is an inherently real-time process, and to do a good job the codec needs an evenly spaced stream of data items. If the next data item is not available when it is needed, the codec has no choice but to produce silence or guess; and if the data does arrive, but late, it is useless, because the time period when it should have been converted to a signal has already passed. A speech signal reduced to packets that has to share a link with bursty data traffic (where some data packets will be large) therefore suffers: no matter how small the speech packets are, they could always encounter full-size data packets and would normally experience queuing delays.

ATM is a channel-based transport layer, a notion embodied in the concepts of the Virtual Path and the Virtual Circuit. Every ATM cell has an 8- or 12-bit Virtual Path Identifier and a 16-bit Virtual Circuit Identifier pair defined in its header. The length of the Virtual Path Identifier depends on whether the cell is sent on the user-network interface or on the network-network interface. Only switched ATM connections (SVCs) are bidirectional; permanent ATM connections (PVCs) are unidirectional. When an ATM circuit is set up, each switch is informed of the traffic class of the connection.

ATM traffic contracts are part of the mechanism by which Quality of Service is ensured. There are four basic types, each with a set of parameters describing the connection:

CBR - Constant bit rate: a Peak Cell Rate (PCR) is specified, which is constant.
VBR - Variable bit rate: an average cell rate is specified, which can peak at a certain level for a maximum interval before becoming problematic. It has real-time and non-real-time variants and is used for bursty traffic.
ABR - Available bit rate: a minimum guaranteed rate is specified.
UBR - Unspecified bit rate: traffic is allocated whatever transmission capacity remains.

Most traffic classes also introduce the concept of Cell Delay Variation Tolerance (CDVT), which defines the permissible "clumping" of cells in time. Traffic contracts are usually maintained by the use of shaping, a combination of queuing and marking of cells, and enforced by policing. To maintain network performance, it is possible to police virtual circuits against their traffic contracts. If a circuit exceeds its traffic contract, the network can either drop the cells or mark the Cell Loss Priority bit. Basic policing works on a cell-by-cell basis, but this is sub-optimal for encapsulated packet traffic, since discarding a single cell invalidates the whole packet. For this reason, schemes such as Partial Packet Discard and Early Packet Discard have been created that discard a whole series of cells until the next frame starts. This saves bandwidth for full frames, as it reduces the number of redundant cells in the network. Partial Packet Discard and Early Packet Discard work with ATM Adaptation Layer 5 connections, as they use the frame-end bit to detect the end of packets. Another advantage of virtual circuits is the ability to use them as a multiplexing layer, allowing different services (such as voice, SNA, etc.) to share a common ATM connection without interfering with each other.

DIFFERENCES
A very important difference between the two technologies lies in the encapsulation and transport methods. The first difference lies in the length of packets: MPLS can work with variable-length packets, whereas ATM uses fixed-length 53-byte cells. ATM uses an additional adaptation layer to segment, transport and re-assemble packets over the ATM network, which adds significant overhead and complexity to the data stream. MPLS does the same work far more simply, by just adding a label to the head of each packet and transmitting it over the network.

The way the two technologies create and maintain their connections is also quite different. An MPLS LSP is unidirectional, allowing data to flow in only one direction between two endpoints; establishing two-way communication between endpoints requires a pair of LSPs. Because of this, data flowing in the forward direction may use a different path than data flowing in the reverse direction. ATM point-to-point virtual circuits, by contrast, are bidirectional, allowing data to flow in both directions over the same path.

Both technologies support tunneling of connections within connections. ATM uses virtual paths to achieve this, whereas MPLS uses label stacking. MPLS can stack multiple labels to form tunnels within tunnels, while the ATM Virtual Path Identifier and Virtual Circuit Identifier are both carried together in the cell header, which limits ATM to a single level of tunneling.

Modern routers are built to support Internet Protocol (IP), currently the most widely used protocol on the Internet. The biggest single edge that MPLS has over ATM is that it has been designed to work in conjunction with IP, which gives network operators ease and great flexibility in network design and operation. ATM is incompatible with IP and requires complex adaptation, which makes it quite unsuitable in today's Internet world, dominated by IP.
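The encapsulation difference can be quantified with a little arithmetic: ATM must segment a packet into 48-byte payloads carried in 53-byte cells, while MPLS only prepends a 4-byte label per stack entry. The sketch below compares the bytes on the wire for a few packet sizes; it ignores the AAL5 trailer and padding details, so it slightly understates the real ATM overhead.

```python
import math

ATM_CELL_SIZE = 53        # bytes on the wire per cell
ATM_CELL_PAYLOAD = 48     # bytes of data carried per cell
MPLS_LABEL_ENTRY = 4      # bytes added per label in the stack

def atm_bytes_on_wire(packet_len: int) -> int:
    """Bytes transmitted when the packet is segmented into ATM cells (AAL5 trailer/padding ignored)."""
    cells = math.ceil(packet_len / ATM_CELL_PAYLOAD)
    return cells * ATM_CELL_SIZE

def mpls_bytes_on_wire(packet_len: int, labels: int = 1) -> int:
    """Bytes transmitted when the packet simply gains an MPLS label stack."""
    return packet_len + labels * MPLS_LABEL_ENTRY

for size in (64, 576, 1500):
    print(size, atm_bytes_on_wire(size), mpls_bytes_on_wire(size))
```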


MPLS VPNs
One of the major applications of MPLS is in VPNs. Known as MPLS VPN, it provides a number of enhanced benefits to enterprises, such as any-to-any connectivity through the use of forwarding tables, the ability to retain an existing IP addressing plan by supporting overlapping IP addresses, and greater scalability at the site-to-site and data center levels. A significant technical advantage of MPLS VPNs is that they are connectionless, which limits the need for tunnels and encryption for network privacy and thus eliminates significant complexity.

MPLS-based Layer 3 VPNs employ the BGP routing protocol for allocating VPN labels. An internal BGP session is set up linking the two edge routers, in this case the two provider edge routers. It is the job of the Label Distribution Protocol to distribute labels in the core of the network. Also in the core, VPN routing and forwarding instances, also called VRF tables, are derived from the global routing tables that reside in each router. One VRF is assigned to each subscriber. Since there are multiple VRFs and one global routing table, a service provider can offer a VPN service as well as Internet service over the same connection. When traffic arrives on a VPN, the forwarding decision is made according to the associated VRF; Internet traffic is still routed using the global routing table. Forwarding tables provide the functionality for any-to-any connectivity. By using MPLS, multiple routing tables can simultaneously support traffic for multiple VPNs in addition to non-VPN traffic.

Each subscriber is associated with a VRF instance. The VRF then uses BGP to assign labels to VPN prefixes. These prefixes are used to route packets to the egress router, which also advertises the prefix. The target egress node is determined by the Interior Gateway Protocol (IGP), and the outgoing interface to which the traffic is to be sent is determined by the VPN label. This involves MPLS label stacking, with both an outer label and an inner label: the outer label is the IGP label, whereas the inner label is the VPN label. The outer label makes sure that traffic gets from the source to the destination provider edge, and the inner label then directs the traffic to its destination within the appropriate VPN. In other words, traffic is routed to the appropriate provider edge using the outer label, and the inner label dictates where it goes from there.

Routing policies are configured to import and export routes. These policies allow topologies to be built as hub-and-spoke or as full mesh, and allow sites to maintain any-to-any connectivity even if there is a single access link. New sites can be added easily by adding a new VRF to the appropriate provider edge router; there is no need to touch every other router or site to accommodate a new one, so MPLS provides excellent scalability and simplified start-up of new sites.

Large non-MPLS networks that try to implement full-mesh topologies often end up with a scalability problem. To ensure a scalable, full-mesh topology, MPLS-based VPNs do not keep any subscriber information in the core devices. With ATM, core devices have to keep virtual circuit information about each and every subscriber, along with detailed information about the locations they can reach; this quickly limits capacity in a large full-mesh network topology. In contrast, MPLS-based VPNs are virtually unlimited in the number of sites each can reach. Today, MPLS-based VPNs have been implemented to span thousands of sites, and they are far ahead of any alternative technology when it comes to scalability.

Many service providers offer both Layer 3 and Layer 2 VPN support over the same MPLS network, giving enterprise IT managers the freedom to choose the best option for meeting the overall requirements of their networks. Cisco Systems has also implemented a Layer 2 MPLS solution based on Any Transport over MPLS (AToM). AToM uses the same concept of label stacking as Layer 3 BGP VPNs, carrying multiple Layer 2 circuits over a single pseudowire, and because of its point-to-point nature, LDP is used instead of BGP. The following figure also shows the basic and additional network services provided by MPLS VPNs.
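The two-level label stack described above can be illustrated with a short sketch: at the ingress provider edge, the VRF supplies the inner VPN label and the IGP machinery supplies the outer transport label toward the egress PE. All labels, prefixes and names below are made up for the illustration.

```python
from typing import List

# Hypothetical label tables at an ingress provider-edge router.
vrf_vpn_labels = {                    # inner label: learned via BGP, per VPN prefix
    ("CustomerA", "10.1.0.0/16"): 30001,
    ("CustomerB", "10.1.0.0/16"): 30002,   # overlapping customer prefixes are allowed
}
igp_transport_labels = {"PE-2": 16002}     # outer label: reaches the egress provider edge

def impose_vpn_stack(vrf: str, prefix: str, egress_pe: str) -> List[int]:
    """Build the label stack pushed onto a customer packet: [outer IGP label, inner VPN label]."""
    inner = vrf_vpn_labels[(vrf, prefix)]
    outer = igp_transport_labels[egress_pe]
    return [outer, inner]

print(impose_vpn_stack("CustomerA", "10.1.0.0/16", "PE-2"))   # [16002, 30001]
print(impose_vpn_stack("CustomerB", "10.1.0.0/16", "PE-2"))   # [16002, 30002]
```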

Wireless MPLS (W-MPLS)


The next generation will bring many enhancements in the world of wireless networks. Mobile nodes will become more capable and will be equipped with multiple interfaces, allowing them to take advantage of overlay networks. To handle the resulting micro-mobility management issues, global IP mobility solutions have to be optimized; requirements such as low signaling overhead, foreign network detection and reconfiguration are not always met by current solutions. To overcome these problems a new technology, W-MPLS (Wireless MPLS), is under development. It is also known as a Layer 2.5 mobility scheme based on the forwarding mechanism of MPLS.


The strength of MPLS is its immense scalability and adaptability to accommodate application requirements and to precisely engineer traffic tunnels to avoid congestion and more fully utilize all available bandwidth. Because of these benefits, MPLS is adopted as the protocol for micro-mobility. The mobility-enhanced MPLS architecture (W-MPLS) enables networks to locate roaming terminals for data delivery and to maintain connections as a terminal moves into a new service area. The basic network architecture comprises mobile hosts capable of initiating traffic flows towards the base stations. The traffic flow is collected towards the Mobile Label Switching Nodes (MLSNs) via the base stations, and the base stations are also responsible for the termination of label switched paths. MLSNs provide support for fast handoff and location management mechanisms, and can perform the function of either a Label Switch Router (LSR) or a Label Edge Router (LER), as in a classical MPLS network. The following figure shows a simplified model of a mobile network with labels distributed between a correspondent node and the destination mobile node in a foreign network.

In a control-driven approach, once a mobile initiates a path establishment process, each mobile in the network would need to run a routing protocol and hold the complete topology of the network. This requires a huge amount of resources on each mobile and is not an optimal solution. Since it is still necessary to utilize the advantages provided by routing protocols, these protocols are instead run across the MLSNs and their interfaces. Whenever a mobile node wants to establish a label switched path, a participating mobile switching node in the home area provides the explicit routing information to the mobile, allowing the mobile to initiate signaling with an explicit route specification.


Two distinct operations are typically involved in forwarding traffic from a mobile node: aggregating IP packets into a Forwarding Equivalence Class (FEC) and mapping the FEC to the next hop in the path. End-to-end MPLS allows the mobile to map packets to an FEC and encode this FEC as a label. Once the path has been established, the intermediate mobile nodes need only perform the second operation, i.e. mapping the label to the next hop and performing the appropriate label translations. Wireless packet core networks, which are in their infancy, must evolve to support future IP/PPP transport. To provide QoS and to converge voice and data onto a common packet network, MPLS should be deployed in the core of the network. This would allow the packet core to simultaneously support multiple services, including 3G wireless applications and traditional data and voice services. Standardization bodies are working on standards for using MPLS for end-to-end services. This not only ensures a consistent control plane across many protocols such as ATM, PPP and Ethernet, but also extends this compatibility to optical and wireless networks as well.

Conclusion
This paper has discussed the application of Multi Protocol Label Switching to traffic engineering in IP networks. The growth and adaptability of the Internet in the recent history of communications is unparalleled. With the emergence of voice and video applications, it has become difficult to utilize all the available resources over the Internet efficiently. MPLS is one of the entrants in this remarkable evolution that provides better utilization of network resources and a better Quality of Service (QoS). MPLS is a technology that uses the native capabilities of IP and improves network efficiency and service guarantees. MPLS combined with traffic engineering delivers a formidable tool for meeting the current, stringent requirements of differentiated services. It strengthens IP routing and leverages the proven scalability of terabit routers and the mechanisms for end-to-end QoS.


References
[1] Richard Mortier, Internet Traffic Engineering, April 2002.
[2] Tmea Dreilinger, DiffServ and MPLS.
[3] Tamrat Bayle, Reiji Aibara, Kouji Nishimura, Performance Measurements of MPLS Traffic Engineering and QoS.
[4] Wikipedia, the free encyclopedia, Multiprotocol Label Switching, http://en.wikipedia.org/wiki/MPLS.
[5] Cisco Systems Inc., MPLS Traffic Engineering.
[6] Cisco Systems Inc., MPLS TE Technology Overview.
[7] Avici Systems Inc., Traffic Engineering With Multiprotocol Label Switching.
[8] Cisco Systems Inc., MPLS Traffic Engineering (TE) Scalability and Enhancements.
[9] Joseph M. Soricelli, Introduction to MPLS.
[10] Daniel O. Awduche, MPLS and Traffic Engineering in IP Networks.
[11] Kaouthar Sethom, Hossam Afifi, Guy Pujolle, Wireless MPLS: A New Layer 2.5 Micro-mobility Scheme.
[12] Subramanian Vijayarangam and Subramanian Ganesan, QoS Implementation for MPLS Based Wireless Networks.
[13] Wikipedia, the free encyclopedia, Asynchronous Transfer Mode, http://en.wikipedia.org/wiki/Asynchronous_Transfer_Mode.

