
Department of Computer Science and Engineering

Sri Jayachamarajendra College of Engineering, Mysore

Presenter:

Manas H.R
4JC07CS054
• Congestion collapse is a condition in which little or no useful communication happens because of congestion. The network settles into a stable state with very low throughput.

• It generally occurs at choke points in the network, where the total incoming bandwidth to a node exceeds the outgoing bandwidth.

INCOMING BANDWIDTH > OUTGOING BANDWIDTH

• Connection points between a LAN and a WAN are the most likely
choke points.

• A collapse happens when the data rate exceeds the bandwidth-delay product.

• During collapse, the network settles into a stable state in which traffic demand is high but little useful throughput is available; packet delay and loss are high, and overall QoS is extremely poor.
The prevention of network congestion and collapse requires two major components:

• A mechanism in routers to reorder or drop packets under overload.

( ACTIVE QUEUE MANAGEMENT )

• End-to-end flow-control mechanisms designed into the endpoints, which respond to congestion and behave appropriately.

( TCP CONGESTION AVOIDANCE )


• Under overload or congestion, a router must decide which packets to drop. The simplest approach is a passive drop-tail queue; Active Queue Management improves on it by dropping or marking packets earlier.

• A drop-tail discipline does the following: a packet is put onto the queue if the queue is shorter than its maximum size (measured in packets or in bytes), and dropped otherwise.
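The drop-tail rule above can be sketched in a few lines; this is an illustrative toy (class name and the 3-packet limit are made up), not a real router queue:

```python
from collections import deque

# Hypothetical sketch of a drop-tail queue: enqueue while there is room,
# otherwise drop the arriving packet. Limit measured in packets here.
class DropTailQueue:
    def __init__(self, max_packets):
        self.max_packets = max_packets
        self.queue = deque()
        self.drops = 0

    def enqueue(self, packet):
        if len(self.queue) < self.max_packets:
            self.queue.append(packet)
            return True          # accepted
        self.drops += 1
        return False             # queue full: tail drop

q = DropTailQueue(max_packets=3)
results = [q.enqueue(p) for p in range(5)]
print(results)   # [True, True, True, False, False]
print(q.drops)   # 2
```

Note that the drop decision is purely deterministic and happens only when the queue is already full, which is exactly what the active disciplines below change.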

• Active queue disciplines drop or mark packets before the queue is full.
Typically, they operate by maintaining one or more drop/mark
probabilities, and probabilistically dropping or marking packets even
when the queue is short.

• Drop-tail queues tend to penalise bursty flows and to cause global synchronisation between flows. By dropping packets probabilistically, AQM disciplines typically avoid both of these issues.

• By providing endpoints with congestion indication before the queue is full, AQM disciplines are able to maintain a shorter queue length than drop-tail queues, which reduces network latency ("ping time").
1. Adaptive Virtual Queue.

2. Random Early Detection.

3. Random Exponential Marking.

4. Blue.

5. Stochastic Fair Blue.
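As an illustration of probabilistic early dropping, here is a minimal sketch in the style of Random Early Detection, the second discipline listed above. The thresholds, maximum probability, and averaging weight are made-up values, not RED's recommended parameters:

```python
import random

# Illustrative RED-style sketch: drop probability ramps linearly between
# two thresholds on the *averaged* queue length. All constants are
# assumptions for demonstration only.
MIN_TH, MAX_TH, MAX_P, WEIGHT = 5.0, 15.0, 0.1, 0.2

def drop_probability(avg_qlen):
    """Linear ramp: 0 below MIN_TH, MAX_P at MAX_TH, forced drop beyond."""
    if avg_qlen < MIN_TH:
        return 0.0
    if avg_qlen >= MAX_TH:
        return 1.0
    return MAX_P * (avg_qlen - MIN_TH) / (MAX_TH - MIN_TH)

def red_enqueue(avg_qlen, instant_qlen, rng=random.random):
    # exponentially weighted moving average of the queue length,
    # so short bursts are not penalised
    avg_qlen = (1 - WEIGHT) * avg_qlen + WEIGHT * instant_qlen
    dropped = rng() < drop_probability(avg_qlen)
    return avg_qlen, dropped

print(drop_probability(2.0))    # 0.0  (short queue: never drop)
print(drop_probability(10.0))   # 0.05 (halfway up the ramp)
print(drop_probability(20.0))   # 1.0  (far past MAX_TH: always drop)
```

Because drops begin while the queue is still short, endpoints receive congestion signals early, which is the shorter-queue, lower-latency behaviour described above.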


• Active Queue Management handles buffer management at the packet level.

• But congestion avoidance does not need only buffer management; it also needs flow-level control that adjusts the total number of packets going out of the router. This is taken care of by the TCP congestion avoidance algorithm.

• TCP uses a network congestion avoidance algorithm that includes various aspects of an additive increase/multiplicative decrease (AIMD) scheme, together with other schemes such as slow start, to achieve congestion avoidance.
• The congestion window is one of the factors that determines the number of bytes that can be outstanding at any time.

• Maintained by the sender, it is a means of stopping the link between two endpoints from being overloaded with too much traffic. Its size is calculated by estimating how much congestion there is between the two endpoints.

• When a connection is set up, the congestion window is set to the maximum segment size (MSS) allowed on that connection. Further variation of the congestion window is dictated by an additive increase/multiplicative decrease approach.
• To avoid congestion collapse, TCP uses a multi-faceted congestion
control strategy. For each connection, TCP maintains a congestion
window, limiting the total number of unacknowledged packets that may
be in transit end-to-end.

• TCP uses a mechanism called slow start to increase the congestion window after a connection is initialized and after a timeout. It starts with a window of 2 * maximum segment size (MSS).

• Although the initial rate is low, the rate of increase is very rapid: for every packet acknowledged, the congestion window increases by 1 MSS, so the congestion window effectively doubles every round-trip time (RTT).

• When the congestion window exceeds a threshold 'ssthresh', the algorithm enters a new state, called congestion avoidance. In some implementations the initial ssthresh is large, so the first slow start usually ends after a loss. However, ssthresh is updated at the end of each slow start, and will often affect subsequent slow starts triggered by timeouts.
• Congestion avoidance: As long as non-duplicate ACKs are received, the
congestion window is additively increased by one MSS every round trip time.

• When a packet is lost, the likelihood of duplicate ACKs being received is very high (it is possible, though unlikely, that the stream just underwent extreme packet reordering, which would also prompt duplicate ACKs).

• If three duplicate ACKs are received, Reno will halve the congestion
window, perform a "fast retransmit", and enter a phase called Fast
Recovery. If an ACK times out, slow start is used.
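The Reno behaviour described in the bullets above (slow start, congestion avoidance, fast retransmit on 3 duplicate ACKs, slow start on timeout) can be sketched as a small state-update function. This is a toy model in units of MSS, not a real TCP implementation; the initial ssthresh of 64 is an arbitrary assumption:

```python
# Minimal sketch of TCP Reno's window dynamics, in units of MSS.
def reno_step(cwnd, ssthresh, event):
    if event == "ack":
        if cwnd < ssthresh:              # slow start: +1 MSS per ACK,
            cwnd += 1                    # so cwnd doubles roughly every RTT
        else:                            # congestion avoidance:
            cwnd += 1.0 / cwnd           # ~ +1 MSS per RTT
    elif event == "3dupacks":            # fast retransmit / fast recovery:
        ssthresh = max(cwnd / 2.0, 2)    # halve the window
        cwnd = ssthresh
    elif event == "timeout":             # severe loss: back to slow start
        ssthresh = max(cwnd / 2.0, 2)
        cwnd = 2                         # initial window of 2 MSS (per slide)
    return cwnd, ssthresh

cwnd, ssthresh = 2.0, 64.0
for _ in range(5):                       # five ACKs while in slow start
    cwnd, ssthresh = reno_step(cwnd, ssthresh, "ack")
print(cwnd)                              # 7.0
cwnd, ssthresh = reno_step(cwnd, ssthresh, "3dupacks")
print(cwnd, ssthresh)                    # 3.5 3.5
```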

[Figure: congestion on the path between sender and receiver]
At the flow level, the window w_i(t) of source i increases by 1 packet per RTT and decreases per unit time by x_i(t) · p_i(t) · (1/2) · (4/3) · w_i(t) packets:

    dw_i/dt = 1/T_i(t) − (2/3) · x_i(t) · p_i(t) · w_i(t)

where T_i is the round-trip time, p_i the loss probability, and x_i(t) = w_i(t)/T_i(t) the rate; (4/3) · w_i(t) is the peak window size of the sawtooth. The multiplicative decrease is too drastic considering the magnitude of x_i.
In terms of the rate x_i = w_i / T_i, the flow-level AIMD dynamics can be written as

    dx_i/dt = k_i(t) · (1 − p_i(t) / u_i(t))

where k_i is a gain function and u_i a marginal utility function. As the bandwidth-delay product of the TCP pipe increases, these dynamics produce large oscillations in the network, maintaining an uncontrollable instability.
• As the bandwidth-delay product increases, it becomes difficult to maintain equilibrium in the data transmission rate of the network, also resulting in unstable system dynamics.

• Even though equilibrium is a flow-level notion, this problem manifests itself at the packet level, where a source increments its window too slowly and decrements it too drastically.

• A peak window of 80,000 packets (corresponding to an "average" window of 60,000 packets) is necessary to sustain 7.2 Gbps using 1,500-byte packets with an RTT of 100 ms; at that size it takes 40,000 RTTs, or almost 70 minutes, to recover from a single packet loss. The network is unstable and has not recovered from congestion. These problems are addressed by a new algorithm called FAST TCP.
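The numbers in that example follow directly from the bandwidth-delay product and Reno's one-packet-per-RTT recovery; a quick check of the arithmetic:

```python
# Reproducing the slide's arithmetic: window needed to sustain 7.2 Gbps
# at 100 ms RTT with 1,500-byte packets, and Reno's recovery time.
rate_bps = 7.2e9
rtt_s    = 0.100
pkt_bits = 1500 * 8

avg_window  = rate_bps * rtt_s / pkt_bits    # packets in flight (BDP)
peak_window = avg_window * 4 / 3             # sawtooth peak is 4/3 of average

print(int(avg_window))              # 60000
print(int(peak_window))             # 80000

# After halving at a loss, Reno regains 1 packet per RTT, so recovering
# the lost half of the peak window takes peak_window / 2 RTTs.
recovery_rtts = peak_window / 2
print(int(recovery_rtts))           # 40000
print(recovery_rtts * rtt_s / 60)   # ~66.7 minutes, "almost 70 minutes"
```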
Using queueing delay as a congestion measure has advantages.

• Queueing delay can be more accurately estimated than loss probability, because packet losses in networks with large BDPs are rare events under TCP Reno (e.g., probability on the order of 10^-7 or smaller).

• Each measurement of queueing delay provides multi-bit information. This makes it easier for an equation-based implementation to stabilize a network into a steady state with a target fairness and high utilization.

• Based on the commonly used ordinary differential equation model of TCP/AQM, the dynamics of queueing delay has the right scaling with respect to network capacity. This helps maintain stability as a network scales up in capacity.
• An equation based on the average queueing delay is designed, and the congestion window size is adjusted based on the result of that equation after every packet transmission.

• The new equation deploying the FAST TCP approach makes only a very small change to TCP Reno's window-adjustment equation (the slides use the constant a = 0.0125).
• By explicitly estimating how far the current state p_i(t)/u_i(t) is from the equilibrium value of 1, FAST TCP can drive the system rapidly, yet in a fair and stable manner, towards the equilibrium.

• In Reno, by contrast, the window adjustment depends only on the current window size and is independent of where the current state is with respect to the target.

• By choosing a multi-bit congestion measure, FAST TCP eliminates the packet-level oscillation due to binary feedback. It all happens at the sender's end.

• Using queueing delay as the congestion measure p_i(t) allows the network to stabilize in the region below the overflow point, when the buffer size is sufficiently large.

• To avoid the second problem in Reno, where the required equilibrium congestion measure is too small to estimate in practice, the algorithm must adapt its parameter α_i with capacity to maintain a small but sufficient queueing delay.

• The window control algorithm must be stable, in addition to being fair and efficient, at
the flow level.
• Data control: determines which packets to transmit.

• Window control: determines how many packets to transmit, at the RTT timescale.

• Burstiness control: determines when to transmit these packets, at a smaller timescale.

• These decisions are made based on information provided by the estimation component.
Parameters to be Estimated:

Base Round Trip Time.

Average Round Trip Time.

Transmission Delay.

Queueing Delay.

Average Queueing Delay.

Initial Window Size.


• When a positive acknowledgment is received, the estimation component calculates the RTT for the corresponding data packet and updates the average queueing delay and the minimum RTT.

• When a negative acknowledgment (signalled by 3 duplicate ACKs) is received, it generates a loss indication for this data packet to the other components.

• The estimation component generates both a multi-bit queueing delay sample and a one-bit loss-or-no-loss sample for each data packet.

• The queueing delay is smoothed by taking a moving average with the weight η(t) := min{3/w_i(t), 1/4}, which depends on the window w_i(t) at time t, as follows. The k-th RTT sample T_i(k) updates the average RTT T̄_i(k) according to

    T̄_i(k+1) = (1 − η(t_k)) · T̄_i(k) + η(t_k) · T_i(k)

where t_k is the time at which the k-th RTT sample is received.

• Taking d_i(k) to be the minimum RTT observed so far, the average queueing delay is estimated as

    q̂_i(k) = T̄_i(k) − d_i(k)
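The estimation component's moving average can be sketched directly from those update rules. The class and sample values below are illustrative assumptions:

```python
# Sketch of the estimation component: average RTT updated with weight
# eta(t) = min(3/w, 1/4); queueing delay = average RTT minus the minimum
# RTT (baseRTT) observed so far.
def eta(window):
    return min(3.0 / window, 0.25)

class DelayEstimator:
    def __init__(self):
        self.avg_rtt = None
        self.base_rtt = float("inf")

    def on_ack(self, rtt_sample, window):
        self.base_rtt = min(self.base_rtt, rtt_sample)
        if self.avg_rtt is None:
            self.avg_rtt = rtt_sample          # first sample seeds the average
        else:
            w = eta(window)
            self.avg_rtt = (1 - w) * self.avg_rtt + w * rtt_sample

    def queueing_delay(self):
        return self.avg_rtt - self.base_rtt

est = DelayEstimator()
est.on_ack(0.100, window=20)           # first sample: 100 ms
est.on_ack(0.120, window=20)           # eta = min(3/20, 1/4) = 0.15
print(round(est.avg_rtt, 4))           # 0.103
print(round(est.queueing_delay(), 4))  # 0.003
```

Because the weight shrinks as the window grows, large flows average over more samples, keeping the delay estimate stable.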
• The window control component determines congestion window based on
congestion information—queueing delay and packet loss, provided by
the estimation component.

• FAST TCP periodically updates the congestion window based on the average RTT and average queueing delay provided by the estimation component, according to

    w ← min{ 2w, (1 − γ) · w + γ · ( (baseRTT/RTT) · w + α(w, qdelay) ) }

where γ ∈ (0, 1], w is the current window size, baseRTT is the minimum RTT observed so far, and α(w, qdelay) controls the number of packets the flow keeps queued (the slides give it with the constant 0.125). The min with 2w caps the window at doubling per update.

• If 3 duplicate ACKs are received, the window is halved, as in loss-based TCP.
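The update rule can be sketched as a single function. The values of γ and α below are illustrative assumptions, chosen only to show the two regimes (no queueing vs. heavy queueing):

```python
# Sketch of the FAST window update:
#   w <- min(2w, (1-g)*w + g*((baseRTT/RTT)*w + alpha))
def fast_update(w, base_rtt, avg_rtt, alpha, gamma=0.5):
    target = (base_rtt / avg_rtt) * w + alpha
    return min(2 * w, (1 - gamma) * w + gamma * target)

# No queueing delay (RTT == baseRTT): window grows by gamma * alpha.
print(fast_update(100.0, 0.1, 0.1, alpha=20))   # 110.0
# Heavy queueing (RTT = 2 * baseRTT): window shrinks toward equilibrium.
print(fast_update(100.0, 0.1, 0.2, alpha=20))   # 85.0
```

Note how the adjustment is large when the state is far from equilibrium and naturally shrinks as baseRTT/RTT · w + α approaches w, matching the equation-based behaviour described earlier.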
[Graph: this graph shows the stability of FAST TCP]
Throughput = total amount of data transferred / duration of the transfer.

Utilization = throughput / bottleneck capacity.

With 10 flows, FAST TCP achieved an aggregate throughput of 8,609 Mbps and a utilization of 88%, averaged over a 6-hour period, over a routed path between Sunnyvale, California and Baltimore, Maryland, using the standard MTU. The researchers observed this to be apparently the largest aggregate throughput ever accomplished by a delay-based algorithm.
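A quick sanity check on those reported figures using the utilization definition above (the inferred bottleneck capacity is a derived estimate, not stated in the slides):

```python
# utilization = throughput / bottleneck capacity, so the bottleneck
# capacity implied by the reported experiment is throughput / utilization.
throughput_mbps = 8609
utilization     = 0.88

bottleneck_mbps = throughput_mbps / utilization
print(round(bottleneck_mbps))   # ~9783 Mbps, consistent with a ~10 Gbps path
```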
• FAST TCP addresses the four main problems of TCP Reno in
networks with high capacities and large latencies.

• FAST TCP has a log utility function and achieves weighted proportional fairness. Its window adjustment is equation-based, under which the network moves rapidly towards equilibrium when the current state is far away and slows down when it approaches the equilibrium.

• FAST TCP uses queueing delay, in addition to packet loss, as a congestion signal. Queueing delay provides a finer measure of congestion and scales naturally with network capacity.
