1000Base-LX
1000Base-SX
1000Base-CX
1000Base-T
CS: The Convergence Sublayer provides the functions needed to support specific applications
using AAL.
24. Difference b/w AAL & AAL 3/5
AAL
AAL 3/5
25. Give the data rates for frame relay & X.25?
X.25 provides a data rate of 64 kbps. The fixed data rate for frame relay is 1.544 Mbps.
The higher data rate for frame relay is 44.376 Mbps.
26. Define Ethernet.
As packet switching has come to dominate wide-area data networking, Ethernet dominates
local area networking. The original experimental Ethernet operated at 3 Mbps over coaxial
cable. This remarkable technology now runs over twisted pair and optical fiber as well as
coaxial cable. It was released commercially at 10 Mbps and then was scaled up, first to
100 Mbps and now to 1 and 10 Gbps.
16 Marks.
1.Explain Frame relay Networks in detail.
DTEs generally are considered to be terminating equipment for a specific network and
typically are located on the premises of a customer. In fact, they may be owned by the
customer. Examples of DTE devices are terminals, personal computers, routers, and bridges.
DCEs are carrier-owned internetworking devices. The purpose of DCE equipment is to
provide clocking and switching services in a network, which are the devices that actually
transmit data through the WAN. In most cases, these are packet switches. Figure 10-1 shows
the relationship between the two categories of devices.
Frame Check Sequence (FCS) Field. Since one cannot completely ignore the bit error rate of
the medium, each switching node needs to implement error detection to avoid wasting
bandwidth on the transmission of errored frames. The error detection mechanism used in
frame relay is based on the cyclic redundancy check (CRC).
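As a sketch of the underlying mechanism, the routine below computes a bitwise CRC-16 with the CCITT generator polynomial x^16 + x^12 + x^5 + 1. The actual Q.922 FCS additionally specifies bit-ordering and ones-complement steps omitted here, and the sample frame bytes are purely illustrative:

```python
def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """Bitwise CRC-16 with the CCITT polynomial x^16 + x^12 + x^5 + 1."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# The sender appends the CRC of the frame; each switching node recomputes it
# and discards the frame on mismatch instead of forwarding the errored frame.
frame = b"\x18\x41" + b"example payload"   # illustrative header + payload bytes
fcs = crc16_ccitt(frame)
```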
Congestion-Control Mechanisms
Frame Relay reduces network overhead by implementing simple congestion-notification
mechanisms rather than explicit, per-virtual-circuit flow control. Frame Relay typically is
implemented on reliable network media, so data integrity is not sacrificed because flow
control can be left to higher-layer protocols. Frame Relay implements two congestion-notification mechanisms:
ATM is a cell-switching and multiplexing technology that combines the benefits of circuit
switching (guaranteed capacity and constant transmission delay) with those of packet
switching (flexibility and efficiency for intermittent traffic). It provides scalable bandwidth
from a few megabits per second (Mbps) to many gigabits per second (Gbps). Because of its
asynchronous nature, ATM is more efficient than synchronous technologies, such as time-division multiplexing (TDM).
With TDM, each user is assigned to a time slot, and no other station can send in that time
slot. If a station has much data to send, it can send only when its time slot comes up, even if
all other time slots are empty. However, if a station has nothing to transmit when its time slot
comes up, the time slot is sent empty and is wasted. Because ATM is asynchronous, time
slots are available on demand with information identifying the source of the transmission
contained in the header of each ATM cell.
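A toy slot count makes the contrast concrete (the per-station queue lengths below are made up for illustration):

```python
# TDM consumes a slot even when its owner has nothing to send; an ATM-style
# statistical multiplexer fills every slot from any non-empty queue.
stations = {"A": 3, "B": 0, "C": 1}   # cells queued per station (illustrative)

# TDM: fixed round-robin ownership of slots
queues = dict(stations)
order = list(queues)
tdm_slots = 0
i = 0
while any(queues.values()):
    owner = order[i % len(order)]
    if queues[owner] > 0:
        queues[owner] -= 1
    tdm_slots += 1                    # slot consumed even if it went out empty
    i += 1

# ATM-style: every slot carries some queued cell, so slots used = cells queued
atm_slots = sum(stations.values())
```

Here TDM spends 7 slots to carry 4 cells, while on-demand slot assignment needs only 4.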
ATM transfers information in fixed-size units called cells. Each cell consists of 53
octets, or bytes. The first 5 bytes contain cell-header information, and the remaining 48
contain the payload (user information). Small, fixed-length cells are well suited to
transferring voice and video traffic because such traffic is intolerant of delays that result from
having to wait for a large data packet to download, among other things. The figure illustrates
the basic format of an ATM cell. Figure: An ATM Cell Consists of a Header and Payload Data
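Assuming the standard 5-byte UNI header layout (GFC 4 bits, VPI 8, VCI 16, PT 3, CLP 1, HEC 8), a sketch of how the header fields can be unpacked:

```python
def parse_uni_header(header: bytes) -> dict:
    """Unpack a 5-byte UNI cell header: GFC(4) VPI(8) VCI(16) PT(3) CLP(1) HEC(8)."""
    h = int.from_bytes(header, "big")          # 40 bits, GFC in the top nibble
    return {
        "GFC": (h >> 36) & 0xF,
        "VPI": (h >> 28) & 0xFF,
        "VCI": (h >> 12) & 0xFFFF,
        "PT":  (h >> 9) & 0x7,
        "CLP": (h >> 8) & 0x1,
        "HEC": h & 0xFF,
    }

# Illustrative header for VPI 1, VCI 5, with the cell marked low priority (CLP = 1):
fields = parse_uni_header(bytes([0x00, 0x10, 0x00, 0x51, 0x00]))
```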
ATM is similar to cell relay and to packet switching using X.25 and frame relay. Like
packet switching and frame relay, ATM involves the transfer of data in discrete
pieces. Also, like packet switching and frame relay, ATM allows multiple logical connections
to be multiplexed over a single physical interface. In the case of ATM, the information flow on
each logical connection is organised into fixed-size packets, called cells. ATM is a
streamlined protocol with minimal error and flow control capabilities: this reduces the
overhead of processing ATM cells and reduces the number of overhead bits required with
each cell, thus enabling ATM to operate at high data rates. The use of fixed-size cells
simplifies the processing required at each ATM node, again supporting the use of ATM at
high data rates. The ATM architecture uses a logical model to describe the functionality that
it supports. ATM functionality corresponds to the physical layer and part of the data link
layer of the OSI reference model. The protocol reference model shown makes reference to
three separate planes:
User plane: provides for user information transfer, along with associated controls (e.g., flow
control, error control).
Control plane: performs call control and connection control functions.
Management plane: includes plane management, which performs management functions
related to the system as a whole and provides coordination between all the planes, and layer
management, which performs management functions relating to resources and parameters
residing in its protocol entities.
The ATM reference model is composed of the following ATM layers:
Physical layer: Analogous to the physical layer of the OSI reference model, the ATM
physical layer manages the medium-dependent transmission.
ATM layer: Combined with the ATM adaptation layer, the ATM layer is roughly
analogous to the data link layer of the OSI reference model. The ATM layer is responsible for
the simultaneous sharing of virtual circuits over a physical link (cell multiplexing) and
passing cells through the ATM network (cell relay). To do this, it uses the VPI and VCI
information in the header of each ATM cell.
ATM adaptation layer (AAL): Combined with the ATM layer, the AAL is roughly
analogous to the data link layer of the OSI model. The AAL is responsible for isolating
higher-layer protocols from the details of the ATM processes. The adaptation layer prepares
user data for conversion into cells and segments the data into 48-byte cell payloads.
Finally, the higher layers residing above the AAL accept user data, arrange it into packets,
and hand it to the AAL. Figure: the ATM reference model.
The UNI cell header fields are, in order: GFC (4 bits), VPI (8 bits), VCI (16 bits), PT
(3 bits), CLP (1 bit) and HEC (8 bits). The NNI cell header replaces the GFC field with an
extended 12-bit VPI: VPI (12 bits), VCI (16 bits), PT (3 bits), CLP (1 bit) and HEC (8 bits).
The PT field is used to designate various special kinds of cells for Operations, Administration
and Maintenance (OAM) purposes, and to delineate packet boundaries in some AALs.
Several of ATM's link protocols use the HEC field to drive a CRC-Based Framing algorithm,
which allows the position of the ATM cells to be found with no overhead required beyond
what is otherwise needed for header protection. The 8-bit CRC is used to correct single-bit
header errors and detect multi-bit header errors. When multi-bit header errors are detected,
the current and subsequent cells are dropped until a cell with no header errors is found.
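A sketch of the HEC computation as described in ITU-T I.432: a CRC-8 with generator polynomial x^8 + x^2 + x + 1 over the first four header octets, with the result XORed with the coset value 0x55:

```python
def atm_hec(header4: bytes) -> int:
    """HEC per ITU-T I.432: CRC-8 (poly x^8 + x^2 + x + 1, i.e. 0x07) over the
    first four header octets, then XOR with the coset value 0x55."""
    crc = 0
    for byte in header4:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc ^ 0x55

# Fifth octet of the idle-cell header 00 00 00 01:
hec = atm_hec(b"\x00\x00\x00\x01")
```

A receiver recomputes this value over the incoming header's first four octets and compares it with the fifth; a one-bit mismatch can be corrected, a larger one causes the cell to be dropped.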
In a UNI cell the GFC field is reserved for a local flow control/submultiplexing system
between users. This was intended to allow several terminals to share a single network
connection, in the same way that two ISDN phones can share a single basic rate ISDN
connection. All four GFC bits must be zero by default. The NNI cell format is almost identical
to the UNI format, except that the 4-bit GFC field is re-allocated to the VPI field, extending
the VPI to 12 bits. Thus, a single NNI ATM interconnection is capable of addressing almost
2^12 = 4096 VPs of up to almost 2^16 = 65536 VCs each (in practice some of the VP and VC
numbers are reserved).
A Virtual Channel (VC) denotes the transport of ATM cells which have the same unique
identifier, called the Virtual Channel Identifier (VCI). This identifier is encoded in the cell
header. A virtual channel represents the basic means of communication between two endpoints, and is analogous to an X.25 virtual circuit.
A Virtual Path (VP) denotes the transport of ATM cells belonging to virtual channels which
share a common identifier, called the Virtual Path Identifier (VPI), which is also encoded in
the cell header. A virtual path, in other words, is a grouping of virtual channels which
connect the same end-points. This two layer approach results in improved network
performance. Once a virtual path is set up, the addition/removal of virtual channels is
straightforward.
ATM Service Classes
constant bit rate (CBR): This class is used for emulating circuit switching. The cell rate is
constant with time. CBR applications are quite sensitive to cell-delay variation. Examples of
applications that can use CBR are telephone traffic (i.e., n x 64 kbps), videoconferencing,
and television.
variable bit rate, non-real time (VBR-NRT): This class allows users to send traffic at a rate
that varies with time depending on the availability of user information. Statistical
multiplexing is provided to make optimum use of network resources. Multimedia e-mail is an
example of VBR-NRT.
variable bit rate, real time (VBR-RT): This class is similar to VBR-NRT but is designed for
applications that are sensitive to cell-delay variation. Examples for real-time VBR are voice
with speech activity detection (SAD) and interactive compressed video.
available bit rate (ABR): This class of ATM services provides rate-based flow control and is
aimed at data traffic such as file transfer and e-mail. Although the standard does not require
the cell transfer delay and cell-loss ratio to be guaranteed or minimized, it is desirable for
switches to minimize delay and loss as much as possible. Depending upon the state of
congestion in the network, the source is required to control its rate. The users are allowed to
declare a minimum cell rate, which is guaranteed to the connection by the network.
unspecified bit rate (UBR): This class is the catch-all, other class and is widely used today
for TCP/IP.
Technical Parameters
cell loss ratio (CLR): CLR is the percentage of cells not delivered to their destination
because they were lost in the network due to congestion and buffer overflow.
cell transfer delay (CTD): The delay experienced by a cell between network entry and exit
points is called the CTD. It includes propagation delays, queuing delays at various
intermediate switches, and service times at queuing points.
cell delay variation (CDV): CDV is a measure of the variance of the cell transfer delay.
High variation implies larger buffering for delay-sensitive traffic such as voice and video.
peak cell rate (PCR): The maximum cell rate at which the user will transmit. PCR is the
inverse of the minimum cell inter-arrival time.
sustained cell rate (SCR): This is the average rate, as measured over a long interval, on the
order of the connection lifetime.
burst tolerance (BT): This parameter determines the maximum burst that can be sent at the
peak rate. This is the bucket-size parameter for the enforcement algorithm that is used to
control the traffic entering the network.
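The burst-tolerance "bucket" can be sketched as a simple token bucket; the class below is illustrative (the names `rate` and `depth` loosely play the roles of PCR and BT) rather than the standardized GCRA enforcement algorithm:

```python
class TokenBucket:
    """Toy policing sketch: tokens refill at `rate` per second up to `depth`;
    a cell conforms only if a whole token is available when it arrives."""

    def __init__(self, rate: float, depth: float):
        self.rate = rate       # token (cell) refill rate, PCR-like
        self.depth = depth     # maximum burst size, BT-like
        self.tokens = depth    # bucket starts full
        self.last = 0.0

    def conforms(self, now: float) -> bool:
        # Refill according to elapsed time, capped at the bucket depth.
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False           # non-conforming: discard or mark the cell

tb = TokenBucket(rate=2.0, depth=3.0)
# A burst of 4 back-to-back cells, then one more a second later:
results = [tb.conforms(t) for t in [0.0, 0.0, 0.0, 0.0, 1.0]]
```

The first three cells fit within the burst tolerance, the fourth is non-conforming, and by t = 1.0 the bucket has refilled enough to admit another cell.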
Benefits of ATM
The benefits of ATM are the following:
high performance via hardware switching
dynamic bandwidth for bursty traffic
class-of-service support for multimedia
scalability in speed and network size
common LAN/WAN architecture
opportunities for simplification via VC architecture
international standards compliance
The AAL1 PDU
The structure of the AAL1 PDU is given in the following illustration: a 1-byte header
comprising the SN field (CSI, 1 bit; SC, 3 bits) and the SNP field (CRC, 3 bits; EPC, 1 bit),
followed by a 47-byte SAR PDU payload.
SN: Sequence number. Numbers the stream of SAR PDUs of a CPCS PDU (modulo 8). The
sequence number field is comprised of the CSI and the SC.
CSI: Convergence sublayer indicator. Used for the residual time stamp for clocking.
SC: Sequence count. The sequence number for the entire CS PDU, which is generated by the
Convergence Sublayer.
SNP: Sequence number protection. Comprised of the CRC and the EPC.
CRC: Cyclic redundancy check calculated over the SAR header.
EPC: Even parity check calculated over the CRC.
SAR PDU payload: 47-byte user information field.
AAL2
AAL2 provides bandwidth-efficient transmission of low-rate, short and variable packets in
delay sensitive applications. It supports VBR and CBR. AAL2 also provides for variable
payload within cells and across cells. AAL type 2 is subdivided into the Common Part
Sublayer (CPS ) and the Service Specific Convergence Sublayer (SSCS ).
AAL2 CPS Packet
The CPS packet consists of a 3-octet header followed by a payload. The header fields are:
CID (8 bits), LI (6 bits), UUI (5 bits) and HEC (5 bits).
CID: Channel identification.
LI: Length indicator. This is the length of the packet payload associated with each individual
user. The value is one less than the packet payload length; the default maximum payload is
45 bytes (may be set to 64 bytes).
UUI: User-to-user indication. Provides a link between the CPS and an appropriate SSCS that
satisfies the higher-layer application.
HEC: Header error control.
AAL2 SAR PDU
The structure of the AAL2 SAR PDU is given in the following illustration: a start field
comprising OSF (6 bits), SN (1 bit) and P (1 bit), followed by the CPS-PDU payload and a
PAD of 0-47 bytes.
OSF: Offset field. Identifies the location of the start of the next CPS packet within the
CPS-PDU.
SN: Sequence number. Protects data integrity.
P: Parity. Protects the start field from errors.
SAR PDU payload: Information field of the SAR PDU.
PAD: Padding.
AAL2 SSCS Packet
The SSCS conveys narrowband calls consisting of voice, voiceband data or circuit mode
data. SSCS packets are transported as CPS packets over AAL2 connections. The CPS packet
contains a SSCS payload. There are 3 SSCS packet types.
Type 1 Unprotected; this is used by default.
Type 2 Partially protected.
Type 3 Fully protected: the entire payload is protected by a 10-bit CRC which is computed as
for OAM cells. The remaining 2 bits of the 2-octet trailer consist of the message type field.
AAL2 SSCS Type 3 Packets:
The type 3 packets are used for the following:
Dialled digits
Channel associated signalling bits
Facsimile demodulated control data
Alarms
User state control operations.
The following illustration gives the general structure of AAL2 SSCS Type 3 PDUs. The
format varies; each message has its own format according to the actual message type. The
fields are: Redundancy, Time stamp (14 bits), Message dependent information (16 bits),
Message type, and CRC-10 (10 bits).
Redundancy: Packets are sent 3 times to ensure error correction. The value in this field
signifies the transmission number.
Time stamp: Counters packet delay variation and allows a receiver to accurately reproduce
the relative timing of successive events separated by a short interval.
Message dependent information: Packet content that varies, depending on the message type.
Message type: The message type code.
CRC-10: The 10-bit CRC.
AAL3/4
AAL3/4 consists of message and streaming modes. It provides for point-to-point and
point-to-multipoint (ATM layer) connections. The Convergence Sublayer (CS) of the ATM
Adaptation Layer (AAL) is divided into two parts: service specific (SSCS) and common part
(CPCS). This is illustrated in the following diagram:
AAL3/4 packets are used to carry computer data, mainly SMDS traffic.
AAL3/4 CPCS PDU
The functions of the AAL3/4 CPCS include connectionless network layer (Class D), meaning
no need for an SSCS; and frame relaying telecommunication service in Class C. The CPCS
PDU is composed of the following fields:
The CPCS PDU is composed of the following fields: a header (CPI, Btag, BAsize), an
information field (0-65535 bytes), a Pad (0-3 bytes), and a trailer (an all-zero alignment
byte, Etag, and a 2-byte Length).
CPI: Message type. Set to zero when the BAsize and Length fields are encoded in bytes.
Btag: Beginning tag. This is an identifier for the packet. It is repeated as the Etag.
BAsize: Buffer allocation size. The size (in bytes) that the receiver has to allocate to capture
all the data.
CPCS SDU: Variable information field of up to 65535 bytes.
PAD: Padding field which is used to achieve 32-bit alignment of the length of the packet.
0: All-zero.
Etag: End tag. Must be the same as Btag.
Length: Must be the same as BAsize.
AAL3/4 SAR PDU
The structure of the AAL3/4 SAR PDU is illustrated below: a 2-byte header (ST, 2 bits; SN,
4 bits; MID, 10 bits), a 44-byte information field, and a 2-byte trailer (LI, 6 bits; CRC,
10 bits), for a total of 48 bytes.
ST: Segment type. Values may be as follows:
SN: Sequence number. Numbers the stream of SAR PDUs of a CPCS PDU (modulo 16).
MID: Multiplexing identification. This is used for multiplexing several AAL3/4 connections
over one ATM link.
Information: This field has a fixed length of 44 bytes and contains parts of the CPCS PDU.
LI: Length indication. Contains the length of the SAR SDU in bytes, as follows:
CRC: Cyclic redundancy check.
Functions of AAL3/4 SAR include identification of SAR SDUs; error indication and
handling; SAR SDU sequence continuity; multiplexing and demultiplexing.
AAL5
The type 5 adaptation layer is a simplified version of AAL3/4. It also consists of
message and streaming modes, with the CS divided into the service specific and common
part. AAL5 provides point-to-point and point-to-multipoint (ATM layer) connections.
AAL5 is used to carry computer data such as TCP/IP. It is the most popular AAL and is
sometimes referred to as SEAL (simple and easy adaptation layer).
AAL5 CPCS PDU
The AAL5 CPCS PDU is composed of the following fields: the CPCS payload (0-65535
bytes), a Pad (0-47 bytes), and an 8-byte trailer (UU, CPI, a 2-byte Length, and a 4-byte
CRC-32).
CPCS payload: The actual information that is sent by the user. Note that the information
comes before any length indication (as opposed to AAL3/4, where the amount of memory
required is known in advance).
Pad: Padding bytes to make the entire packet (including control and CRC) fit into a 48-byte
boundary.
UU: CPCS user-to-user indication to transfer one byte of user information.
CPI: Common part indicator is a filler byte (of value 0). This field is to be used in the future
for layer management message indication.
Length: Length of the user information without the Pad.
CRC: CRC-32. Used to allow identification of corrupted transmission.
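A sketch of how the padding and trailer fit together when building a CPCS PDU (here `zlib.crc32` stands in for the CRC-32 computation; the standard's exact bit-ordering conventions may differ):

```python
import zlib  # zlib.crc32 uses the same CRC-32 generator polynomial as AAL5

def aal5_cpcs_pdu(payload: bytes, uu: int = 0, cpi: int = 0) -> bytes:
    """Build an AAL5 CPCS PDU: payload, pad so the total (counting the 8-byte
    trailer) is a multiple of 48, then the UU, CPI, Length and CRC-32 fields."""
    pad_len = (-(len(payload) + 8)) % 48
    body = (payload + bytes(pad_len)            # payload + Pad
            + bytes([uu, cpi])                  # UU, CPI
            + len(payload).to_bytes(2, "big"))  # Length of payload without Pad
    crc = zlib.crc32(body) & 0xFFFFFFFF         # stand-in for the AAL5 CRC-32
    return body + crc.to_bytes(4, "big")

pdu = aal5_cpcs_pdu(b"hello")   # 5 payload bytes -> padded to one 48-byte cell
```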
AAL5 SAR PDU
The structure of the AAL5 SAR PDU is as follows: an Information field (1-48 bytes), the
PAD (0-47 bytes), and the trailer fields UU, CPI, Length and CRC-32 (4 bytes), forming the
8-byte trailer.
10Base5
10 Mbps
50-ohm coaxial cable bus
Maximum segment length 500 meters
10Base-T
Twisted pair, maximum length 100 meters
Star topology (hub or multipoint repeater at central point)
Layer 2 Switch
Incoming frame switched to one outgoing line
Many transmissions at same time
Bridge
Frame handling done in software
Analyze and forward one frame at a time
Store-and-forward
Layer 2 Switch
Frame handling done in hardware
Multiple data paths and can handle multiple frames at a time
Can do cut-through
Layer 2 Switches
Flat address space
Broadcast storm
Only one path between any 2 devices
Solution 1: subnetworks connected by routers
Solution 2: layer 3 switching, packet-forwarding logic in hardware
Fibre Channel
I/O channel
Hardware based, high-speed, short distance
Direct point-to-point or multipoint communications link
Data type qualifiers for routing payload
Link-level constructs for individual I/O operations
Protocol specific specifications to support e.g. SCSI
Fibre Channel Network-Oriented Facilities
Full multiplexing between multiple destinations
Peer-to-peer connectivity between any pair of ports
Internetworking with other connection technologies
Fibre Channel Requirements
Full duplex links with 2 fibres/link
100 Mbps to 800 Mbps
Distances up to 10 km
Small connectors
high-capacity
Greater connectivity than existing multidrop channels
Broad availability
Support for multiple cost/performance levels
Support for multiple existing interface command sets
2 marks
1. When will a queue be formed in a network?
A queue will be formed if the current demand for a particular service exceeds the
capacity of the service provider.
Arrival pattern
Service pattern
Number of server
Queue discipline
System capacity
Number of channels
3. What is meant by traffic intensity in queuing analysis? And write Little's formula for a
single-server queue?
Traffic intensity (or utilization factor) ρ = λ/μ = arrival rate / service rate.
Little's formula: ρ = λTs
r = λTr
w = λTw
4. Compare Single Server and Multi Server Queue.
In both the single-server and multiserver models the arrival rate is λ, and Little's formula
applies in the same form: r = λTr and w = λTw.
The queuing models take as input:
a) Arrival rate
b) Service rate
c) Number of servers
and provide as output information concerning:
a) Items waiting
b) Waiting time
c) Items queued
d) Residence time
Arrival Rate: The rate at which data enters the queuing system, i.e., the inter-arrival
rate. It is indicated as λ.
Service Rate: The rate at which data leaves the queuing system, i.e., the service rate.
It is indicated as μ.
24. Define QoS.
QoS refers to the properties of a network that contribute to the degree of satisfaction
that users perceive, relative to the network performance. The service categories
typically grouped under this term are capacity, data rate, latency (delay) and traffic
losses.
25. Define committed burst size.
The maximum amount of data that the network agrees to transfer under normal
conditions over a measurement interval T. These data may or may not be contiguous.
Define excess burst size.
The maximum amount of data in excess of Bc that the network will attempt to
transfer under normal conditions over a measurement interval T. These data are
uncommitted.
16 Marks.
1.Explain queuing analysis and the different queuing models.
Queueing analysis
In queueing theory, a queueing model is used to approximate a real queueing
situation or system, so the queueing behaviour can be analysed mathematically.
Queueing models allow a number of useful steady state performance measures to
be determined, including:
the average number in the queue, or the system,
the average time spent in the queue, or the system,
the statistical distribution of those numbers or times,
the probability the queue is full, or empty, and
the probability of finding the system in a particular state.
These performance measures are important as issues or problems caused by
queueing situations are often related to customer dissatisfaction with service or
may be the root cause of economic losses in a business. Analysis of the relevant
queueing models allows the cause of queueing issues to be identified and the
impact of any changes that might be wanted to be assessed.
Notation
Queueing models can be represented using Kendall's notation:
A/B/S/K/N/Disc
where A is the inter-arrival time distribution, B is the service time distribution, S is
the number of servers, K is the system capacity, N is the calling population, and
Disc is the service discipline.
Single-server queue
Single-server queues are, perhaps, the most commonly encountered queueing
situation in real life. One encounters a queue with a single server in many
situations, including business (e.g. sales clerk), industry (e.g. a production line),
transport (e.g. a bus, a taxi rank, an intersection), telecommunications (e.g.
Telephone line), computing (e.g. processor sharing). Even where there are multiple
servers handling the situation it is possible to consider each server individually as
part of the larger system, in many cases (e.g. a supermarket checkout has several
single server queues that the customer can select from.) Consequently, being able
to model and analyse a single server queue's behaviour is a particularly useful
thing to do.
Poisson arrivals and service
M/M/1/∞/∞ represents a single server that has unlimited queue capacity and an
infinite calling population; both arrivals and service are Poisson (or random)
processes, meaning the statistical distribution of both the inter-arrival times and the
service times follow the exponential distribution. Because of the mathematical
nature of the exponential distribution, a number of quite simple relationships are
able to be derived for several performance measures based on knowing the arrival
rate and service rate.
This is fortunate because an M/M/1 queuing model can be used to approximate
many queuing situations.
Poisson arrivals and general service
M/G/1/∞/∞ represents a single server that has unlimited queue capacity and an
infinite calling population. While the arrival process is still Poisson, meaning the
statistical distribution of the inter-arrival times still follows the exponential
distribution, the distribution of the service time does not. The distribution of the
service time may follow any general statistical distribution, not just exponential.
Relationships can still be derived for a (limited) number of performance
measures if one knows the arrival rate and the mean and variance of the service
rate. However, the derivations are generally more complex.
A number of special cases of M/G/1 provide specific solutions that give broad
insights into the best model to choose for specific queueing situations because they
permit the comparison of those solutions to the performance of an M/M/1 model.
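One such comparison is captured by the Pollaczek-Khinchine formula, Wq = λE[S^2] / (2(1 - ρ)) with ρ = λE[S]; the sketch below contrasts deterministic (M/D/1) and exponential (M/M/1) service at the same load:

```python
def mg1_wq(lam: float, es: float, var_s: float) -> float:
    """Pollaczek-Khinchine mean waiting time for M/G/1, from the arrival rate,
    the mean service time E[S], and the service-time variance Var[S]."""
    rho = lam * es                # utilization, must be < 1
    es2 = var_s + es ** 2         # second moment E[S^2]
    return lam * es2 / (2 * (1 - rho))

# Deterministic service (Var = 0) vs. exponential service (Var = E[S]^2),
# both at rho = 0.5:
wq_d = mg1_wq(lam=0.5, es=1.0, var_s=0.0)
wq_m = mg1_wq(lam=0.5, es=1.0, var_s=1.0)
```

At equal load, deterministic service yields exactly half the mean wait of exponential service, which is the kind of insight these special cases provide.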
Multiple-servers queue
Multiple (identical)-servers queue situations are frequently encountered in
telecommunications or a customer service environment. When modelling these
situations care is needed to ensure that it is a multiple servers queue, not a network
of single server queues, because results may differ depending on how the queuing
model behaves.
In this convention the arrival process, the service process and the number of
servers are specified; common distribution codes are M (Markovian, i.e.
exponential), D (Deterministic) and G (General).
Examples of queueing systems that can be defined with this convention are:
M/M/1: This is the simplest queueing system to analyze. Here the arrival and
service time are negative exponentially distributed (poisson process). The system
consists of only one server. This queueing system can be applied to a wide variety
of problems as any system with a very large number of independent customers can
be approximated as a Poisson process. Using a Poisson process for service time,
however, is not applicable in many applications and is only a crude approximation.
Refer to M/M/1 Queueing System for details.
M/D/n: Here the arrival process is Poisson and the service time distribution is
deterministic. The system has n servers (e.g. a ticket booking counter with n
cashiers). Here the service time can be assumed to be the same for all customers.
G/G/n: This is the most general queueing system where the arrival and service
time processes are both arbitrary. The system has n servers. No analytical solution
is known for this queueing system.
Markovian arrival processes
In queuing theory, Markovian arrival processes are used to model the arrival
customers to queue.
Some of the most common include the Poisson process, Markovian arrival process
and the batch Markovian arrival process.
Markovian arrival processes has two processes. A continuous-time Markov process
j(t), a Markov process which is generated by a generator or rate matrix, Q. The
other process is a counting process N(t), which has state space
(where
is the set of all natural numbers). N(t) increases every time there is a
transition in j(t) which marked.
Poisson process
The Poisson arrival process or Poisson process counts the number of arrivals, each
of which has an exponentially distributed time between arrivals. In the most general
case this can be represented by the rate matrix,
Markov arrival process
The Markov arrival process (MAP) is a generalisation of the Poisson process in
which the sojourn time between arrivals is not exponentially distributed. The
homogeneous case has rate matrix,
Little's law
In queueing theory, Little's result, theorem, lemma, or law says:
The average number of customers in a stable system (over some time interval), N,
is equal to their average arrival rate, λ, multiplied by their average time in the
system, T, or: N = λT.
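The identity can be cross-checked against the closed-form M/M/1 results, where N = rho/(1 - rho) and T = 1/(mu - lam):

```python
import math

# Closed-form M/M/1 results for lam = 3, mu = 5:
lam, mu = 3.0, 5.0
rho = lam / mu
N = rho / (1 - rho)   # average number in the system
T = 1 / (mu - lam)    # average time in the system

# Little's law says these two routes to N must agree:
assert math.isclose(N, lam * T)
```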
Effects of Congestion
Congestion-Control Mechanisms
Backpressure
Request from destination to source to reduce rate
Useful only on a logical connection basis
Requires hop-by-hop flow control mechanism
Policing
Measuring and restricting packets as they enter the network
Choke packet
Specific message back to source
E.g., ICMP Source Quench
Implicit congestion signaling
Source detects congestion from transmission delays and lost packets and reduces
flow
Frame Relay reduces network overhead by implementing simple congestion-notification mechanisms rather than explicit, per-virtual-circuit flow control.
Frame Relay typically is implemented on reliable network media, so data integrity
is not sacrificed because flow control can be left to higher-layer protocols. Frame
Relay implements two congestion-notification mechanisms:
FECN and BECN are each controlled by a single bit contained in the Frame Relay
frame header. The Frame Relay frame header also contains a Discard Eligibility
(DE) bit, which is used to identify less important traffic that can be dropped during
periods of congestion.
The FECN bit is part of the Address field in the Frame Relay frame header. The
FECN mechanism is initiated when a DTE device sends Frame Relay frames into
the network. If the network is congested, DCE devices (switches) set the value of
the frames' FECN bit to 1. When the frames reach the destination DTE device, the
Address field (with the FECN bit set) indicates that the frame experienced
congestion in the path from source to destination. The DTE device can relay this
information to a higher-layer protocol for processing. Depending on the
implementation, flow control may be initiated, or the indication may be ignored.
The BECN bit is part of the Address field in the Frame Relay frame header. DCE
devices set the value of the BECN bit to 1 in frames traveling in the opposite
direction of frames with their FECN bit set. This informs the receiving DTE device
that a particular path through the network is congested. The DTE device then can
relay this information to a higher-layer protocol for processing. Depending on the
implementation, flow-control may be initiated, or the indication may be ignored.
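Assuming the standard two-byte Q.922 address-field layout, a sketch of how a DTE can extract the DLCI and the FECN, BECN and DE bits from the two header octets:

```python
def parse_fr_address(b1: int, b2: int) -> dict:
    """Decode a two-byte Frame Relay (Q.922) address field."""
    return {
        "DLCI": ((b1 >> 2) & 0x3F) << 4 | (b2 >> 4) & 0x0F,  # 10-bit circuit id
        "CR":   (b1 >> 1) & 1,   # command/response
        "FECN": (b2 >> 3) & 1,   # forward explicit congestion notification
        "BECN": (b2 >> 2) & 1,   # backward explicit congestion notification
        "DE":   (b2 >> 1) & 1,   # discard eligibility
        "EA":   b2 & 1,          # address extension (1 in the final octet)
    }

# Illustrative frame on DLCI 16 whose FECN bit a congested switch has set:
decoded = parse_fr_address(0x04, 0x09)
```

A receiving DTE would inspect `decoded["FECN"]` (or `["BECN"]`) and either hand the indication to a higher-layer protocol or ignore it, as described above.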
Fairness
Various flows should suffer equally
Last-in-first-discarded may not be fair
Quality of Service (QoS)
Flows treated differently, based on need
Voice, video: delay sensitive, loss insensitive
File transfer, mail: delay insensitive, loss sensitive
Interactive computing: delay and loss sensitive
Reservations
Policing: excess traffic discarded or handled on best-effort basis
4.Explain frame relay congestion control
Frame Relay Congestion Control
Network Response
each frame handler monitors its queuing behavior and takes action
use FECN/BECN bits
some/all connections notified of congestion
User (end-system) Response
receipt of BECN/FECN bits in frame
BECN at sender: reduce transmission rate
FECN at receiver: notify peer (via LAPF or higher layer) to restrict flow
Frame Relay Traffic Rate Management Parameters