
Impact of Optical Packet Loss and Reordering on TCP Performance
Franco Callegati, Walter Cerroni, Carla Raffaelli
D.E.I.S. - University of Bologna
Viale Risorgimento, 2 - 40136 Bologna - ITALY
{franco.callegati,walter.cerroni,carla.raffaelli}@unibo.it

Abstract—Next generation optical network technologies such as OPS and OBS have a non-negligible impact on the performance of the transport layer. This is related to how a packet stream traversing the optical network is affected by random behaviors such as latency variability, out-of-sequence delivery and loss. This paper tries to better understand how the use of contention resolution schemes at the optical layer affects TCP performance, focusing in particular on the impact of unordered delivery and loss of packets.

I. INTRODUCTION
Optical Burst Switching (OBS) [1] and Optical Packet Switching (OPS) [2] are, respectively, a medium-term and a long-term solution for the introduction of all-optical networks that promise increasing flexibility and efficiency in bandwidth usage, combined with the ability to support traffic with different Quality
of Service (QoS) requirements [3]. The interaction of such
advanced network paradigms with higher layer protocols, and
in particular with end-to-end flow control mechanisms, is
currently one of the main open issues in OBS/OPS network
research. This is well documented by several papers recently
published about edge-to-edge or end-to-end performance of
transport protocols, such as TCP, over OBS/OPS [4] [5] [6].
An OBS/OPS network segment is characterized by typical
random behaviors of packet-based switching paradigms, such
as latency, delay jitter, out-of-sequence delivery and loss, that
affect the end-to-end data flows and are a consequence of
network congestion events. In OPS, and sometimes also in
OBS, congestion at the nodes may be solved in the time
domain, by using some form of optical delay lines [7], but
other contention resolution schemes are also adopted by both
technologies exploiting either the wavelength domain, by
sending contending bursts/packets to different wavelengths of
the same output fiber [8], or the space domain, by means of
deflection routing toward alternative paths [9].
Both these contention resolution strategies may result in unordered delivery of packets belonging to the same end-to-end flow, because packets follow either different network paths or the same path (typically the shortest one) on different wavelengths. In the latter case the delays experienced by packets transmitted on different wavelengths may change depending on the variable congestion status of each wavelength, while in the deflection routing case the propagation delay may also differ because of the different path lengths. Therefore unordered delivery may be more frequent in these networks than in conventional ones.


The previous considerations represent the rationale behind this paper, which tries to better understand how contention resolution affects the operation of the transport layer, focusing on the joint effects that unordered delivery and loss of optical packets (either bursts or smaller packets) may have on TCP performance. It is well known that, in general, packet losses, out-of-sequence packet deliveries and delay variations have an impact on the performance of end-to-end protocols, since they may cause throughput impairments [10] [11] [12]. When considering TCP-based traffic, modifications to the packet sequence influence the typical congestion control mechanisms adopted by the protocol [13] and may result in a reduction of the transmission window size, with consequent bandwidth under-utilization.
It is therefore important to better understand how the OBS/OPS switching operations and their consequences on the optical packet flows affect the performance of higher layer protocols, taking into account how optical packets are formed
at the network edges. This is the main objective of this paper,
which is organized as follows: first, the optical packet/burst
assembly process at the edge systems is described in section
II; then, a simulation model is presented in section III, showing
how the OBS/OPS network behavior in terms of loss and
packet reordering is represented; then, simulation results in
presence of single and multiple TCP connections are discussed
in section IV; finally, section V concludes the work.
II. OPTICAL PAYLOAD ASSEMBLY AT THE EDGES
When entering the OBS/OPS network, incoming traffic is
typically groomed and several IP datagrams are multiplexed
into the same optical burst/packet, which must satisfy a minimum length requirement to guarantee reasonable switching
efficiency. For the sake of simplicity, in the remainder of this
paper the optical bursts or packets will be referred to with
the term optical packets, while the client traffic (typically
IP datagrams carrying TCP segments) will be simply called
packets. The optical packet assembly/disassembly process
takes place at network edges, where ingress and egress routers
realize the interface between the electrical and the optical
domains. In particular, the ingress node is responsible for
collecting incoming packets in order to form optical packets.
The functional blocks involved in this process are sketched in
Fig. 1.


Fig. 1. Basic architecture of the ingress OBS/OPS node (electrical input links feed the assembly queues, governed by an assembly control block; a scheduling stage and E/O conversion forward the optical packets to the DWDM output links)

Incoming traffic is classified based on the destination Optical-Service Access Point (O-SAP), which identifies the
egress edge router, and on specific service requirements, when
a QoS-aware environment is provided. The assembly process
takes place as follows: when a packet arrives at the edge node,
it is processed, classified and then inserted into the relevant
assembly queue, waiting to be included in the payload of an
optical packet. When enough packets have been collected to fill an optical packet up to its maximum size, it is ready for transmission. However, the time required to reach this point is
a random variable that depends on the statistics of the arrival
process at each assembly queue. Therefore an additional and
unpredictable latency is introduced on the data flow. In order to
preserve the optical network time transparency, the assembly
procedure is subject to an assembly time-out: in case the
maximum packet size is not reached yet when the time-out
expires, the optical packet is transmitted immediately, filled
with some form of padding in case the minimum length is
not reached or the optical packet must have a fixed length.
Therefore, either a full optical payload or a partial one is sent
to an E/O conversion unit to be transmitted to the proper fiber
and wavelength. An adaptive assembly time-out can be used
to dynamically adjust the packet length and the packetization
delay, according to the dynamics of the incoming TCP traffic
[4].
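As an illustration, the size/time-out assembly logic described above can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions: the names (AssemblyQueue, MAX_PAYLOAD, T_OUT) and the event-driven structure are hypothetical and do not come from the paper or from any specific OBS/OPS implementation.

from collections import deque

MAX_PAYLOAD = 37500   # bytes, maximum optical payload size (illustrative value)
T_OUT = 0.003         # seconds, assembly time-out (3 ms, the value used in Section III)

class AssemblyQueue:
    """One assembly queue, i.e. one destination O-SAP and one service class."""

    def __init__(self):
        self.packets = deque()     # lengths of the queued client packets
        self.size = 0              # current payload size in bytes
        self.first_arrival = None  # start of the time-out window

    def enqueue(self, packet_len, now):
        """Insert a classified packet; return a payload if the size limit is reached."""
        if not self.packets:
            self.first_arrival = now
        self.packets.append(packet_len)
        self.size += packet_len
        if self.size >= MAX_PAYLOAD:          # size-triggered transmission
            return self._flush(padded=False)
        return None

    def check_timeout(self, now):
        """Time-out-triggered transmission, possibly with padding."""
        if self.packets and now - self.first_arrival >= T_OUT:
            return self._flush(padded=True)   # pad if a minimum/fixed length is required
        return None

    def _flush(self, padded):
        payload = {"packets": list(self.packets), "padded": padded}
        self.packets.clear()
        self.size = 0
        self.first_arrival = None
        return payload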
Given that packets within the same optical payload belong to the same class, two main strategies can be adopted for upper layer data multiplexing, assuming information is organized in flows, as happens for TCP-based applications:
- per-flow assembly: an optical packet carries data belonging to the same class and to the same information flow. Per-flow queuing is needed at the ingress of the assembly unit;
- mixed-flow assembly: an optical packet may carry information belonging to different flows of the same class. Per-class queuing is needed at the ingress of the assembly unit.
The mixed-flow solution leads to a simpler and more scalable
implementation of the assembly unit, which mainly depends
on the number of O-SAPs and not on the number of flows,
as happens with the other approach. However, the per-flow
approach is expected to perform better since it benefits from
a correlation effect due to the segment aggregation.
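In practice, the difference between the two strategies reduces to the key used to select the assembly queue for an incoming packet. The following sketch is purely illustrative; the function name and parameters are assumptions, not part of the paper.

def assembly_queue_key(o_sap, qos_class, flow_id, per_flow):
    """Select the assembly queue for a classified incoming packet.

    per_flow=True  -> per-flow assembly: one queue per (O-SAP, class, flow);
                      the number of queues grows with the number of flows.
    per_flow=False -> mixed-flow assembly: one queue per (O-SAP, class);
                      the number of queues depends only on O-SAPs and classes.
    """
    if per_flow:
        return (o_sap, qos_class, flow_id)
    return (o_sap, qos_class)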
To understand this, consider that the correct or incorrect (due to either loss or reordering) delivery of a single optical packet may affect multiple TCP segments, depending
on the assembly strategy. The access bandwidth, jointly with
the assembly strategy, determines how many TCP segments
of the same connection are aggregated in the same optical
packet and therefore the influence on the TCP connection of
the correct or incorrect delivery of such an optical packet. In the
literature these phenomena have been studied with reference
to fast, medium and slow sources [5] [14]. It is known that
TCP retransmits lost segments detected by triple duplicate
ACKs or by retransmission time-out expiration. Fast sources
are able to load up to the entire TCP transmission window
into a single optical packet. Whenever such an optical packet
is lost or delivered out-of-sequence, the retransmission is
triggered, shrinking the TCP congestion window and reducing
the connection throughput, regardless of the TCP version (e.g.
Tahoe, Reno, New Reno, SACK). Similar events occur for
slow and medium sources, even though each TCP version may
react differently.
On the other hand, the correlated delivery of multiple
segments in an optical packet makes the TCP congestion
window grow faster than it would in conventional networks.
This benefit partially compensates for the negative effects of segment retransmissions. While the detrimental effect of incorrect delivery of optical packets is similar for both per-flow and mixed-flow assembly, the correlated-delivery benefit is more evident in the per-flow case, which, in general, is known to provide better performance.
III. SIMULATION MODEL AND SET-UP
In order to evaluate the end-to-end performance of an optical packet network subject to packet loss and unordered
delivery, a specific simulation model has been conceived. The
model consists of n TCP sources and m TCP receivers, each having access to the optical network with bandwidth Bfi (i = 1, ..., n at the sender side and i = 1, ..., m at the receiver side). The aggregate input bandwidth is defined as Ba = Σi=1..n Bfi, while the optical link connecting the ingress and egress routers is characterized by the bandwidth Bo.
The access network is assumed lossless. The model adopted
here does not actually implement a complete optical packet
network with a relatively complex mesh topology. Since the
goal here is to understand how TCP reacts to the occurrence
of congestion events and to the use of related control strategies
adopted in OBS/OPS networks, the simulation model assumes
that phenomena like packet loss and reordering are emulated
by the optical link connecting the edge systems. Therefore,
the optical link represents the whole network behavior. With
this choice, the simulation set-up is much simpler and the
computational complexity is reduced. Moreover, this approach makes it possible to abstract from the particular network topology and traffic distribution and to focus on the TCP behavior
under average conditions.
At this point, some assumptions must be made on how the
random effects of the network on the packet flow should be


emulated. The basic idea is to make the optical connection between the edges capable of:
- introducing random loss on the packet flow at a given average rate PL;
- providing variable latency in order to allow unordered packet delivery at the egress router.
The choice of a realistic value for PL is not critical, since an extensive literature has been produced on the performance evaluation of OPS in terms of packet loss rate. The simulation model assumes a packet loss rate ranging between typical figures, i.e. between 10^-5 and 10^-1.
On the other hand, only a few works on OBS/OPS have been published which also take into account the problem of packet sequence disruption due to wavelength multiplexing and dynamic routing. As an example, the model presented in [15] evaluates the distribution of the latency variation and quantifies the rate of unordered packets delivered to the egress router under typical OPS network conditions. In order to provide reasonable figures and to compare the effects of unordered delivery and packet loss, the probability of out-of-sequence events PR is assumed between 10^-5 and 10^-1. In particular, PR is defined as the probability that, given a generic pair of consecutive ordered packets Pn and Pn+1 transmitted by the
ingress router, Pn+1 reaches the egress router before Pn , since
the latter has experienced a higher latency than the former due
to different buffering operations or different network paths.
This is clearly shown in Fig. 2, where all the possible relative positions between Pn and Pn+1 at the egress router are illustrated. The figure assumes that Δtn = tn+1 − tn is the time difference between the departures of the two packets from the ingress router, while Δsn is the time difference between their arrivals at the egress router. Dn and Dn+1 are the optical packet durations, assumed in general to be different. Among all the alternatives, cases 6 and 7, where Pn+1 arrives before Pn, are considered out-of-sequence deliveries, while in all other cases it is assumed that the egress router can correctly order the optical packets at reception.
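In code, the out-of-sequence condition of cases 6 and 7 reduces to a sign test on Δsn. The sketch below is only illustrative and assumes that arrival times are measured at the beginning of each optical packet at the egress router.

def is_out_of_sequence(arrival_n, arrival_n1):
    """Pn and Pn+1 are consecutive packets transmitted in order by the ingress router.

    delta_s = arrival_n1 - arrival_n is negative only when Pn+1 reaches the
    egress router before Pn (cases 6 and 7 of Fig. 2), i.e. an out-of-sequence event.
    """
    delta_s = arrival_n1 - arrival_n
    return delta_s < 0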
The model described above has been simulated using the
ns-2 network simulator [16]. The number of traffic sources is
equal to the number of receivers, i.e. n = m. Each source
constantly generates asynchronous traffic and is attached to a
TCP agent that sets up connections with a remote TCP sink.
The version of TCP adopted is Reno, implementing Slow Start,
Congestion Avoidance, Fast Retransmit and Fast Recovery algorithms for congestion control. The TCP maximum segment
size is set to MSS = 472 bytes, resulting in IP datagrams
of L = 512 bytes. Each source and sink access bandwidth is
assumed to be Bf = 100 Mbps, with Tf = 10 ms propagation
delay to reach the corresponding edge router. The maximum
value allowed for the transmission window is WM = 128
segments.
The optical packet network is emulated by a link connecting
the edge systems (called the optical link) with bandwidth Bo =
2.5 Gbps and a variable latency To used to implement the
packet reordering. Loss events are introduced on the optical
link with probability PL, while packet reordering is caused by an appropriate variation of To, occurring with probability PR, that gives Δsn < 0. No loss or reordering events are generated on the return link from the sinks to the sources, in order to preserve the TCP ACK flows.

Fig. 2. A generic pair of ordered optical packets Pn and Pn+1 at the ingress node and all their possible time relationships at the egress node
The ingress router introduces the additional packet assembly
delay Ta that, according to section II, is at most equal to the
assembly time-out, set to Tout = 3 ms. Since the complete
optical network is not actually implemented, the simulator
can assume either fixed or variable optical packet formats
(including also OBS): in the former case, the assembly process
produces payloads with fixed size Lo , filled with some padding
in case the assembly time-out expires, while in the latter case
payloads can be shorter than Lo (assumed to be the maximum
size) depending on the assembly timing. In any case, loss or
reordering events, as generated by the simulator, have the same
effect on fixed and variable optical packets, since the only
difference is the additional padding which does not affect the
application performance.
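To summarize the model, the behavior of the emulated optical link can be sketched as follows. This is a minimal Python abstraction of the mechanism described in this section, not the actual ns-2 code; the class name, the structure and the choice of the extra delay are assumptions made only for illustration.

import random

class EmulatedOpticalLink:
    """Abstraction of the optical link connecting the ingress and egress routers."""

    def __init__(self, p_loss, p_reorder, base_latency, extra_latency):
        self.p_loss = p_loss                # P_L: average packet loss rate
        self.p_reorder = p_reorder          # P_R: out-of-sequence probability
        self.base_latency = base_latency    # nominal one-way latency To (illustrative value)
        self.extra_latency = extra_latency  # additional delay used to force delta_s < 0

    def send(self, optical_packet):
        """Return None if the packet is lost, otherwise its delivery latency."""
        if random.random() < self.p_loss:
            return None                     # loss event
        if random.random() < self.p_reorder:
            # delaying this packet lets the following one overtake it,
            # producing an out-of-sequence delivery at the egress router
            return self.base_latency + self.extra_latency
        return self.base_latency

# Example instantiation with probabilities in the range explored by the paper;
# the latency values are arbitrary illustrative figures.
link = EmulatedOpticalLink(p_loss=1e-3, p_reorder=1e-3,
                           base_latency=0.010, extra_latency=0.005)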
IV. NUMERICAL RESULTS
In this section, results obtained by simulating the model
discussed above are presented. Two scenarios are considered,
where either a single or multiple TCP connections have access
to the optical link at the same time. For the multiple connection
case, both per-flow and mixed-flow assembly strategies are
shown. However, the ultimate goal of this section is not to
provide a comprehensive comparison of the two assembly
techniques used, but to understand the actual impact of loss
and reordering events (individually and jointly) on TCP performance.
A. Single connection
A first set of simulations has been run with only a single traffic source active. The reason is to investigate the specific effects of packet loss and reordering on the TCP throughput, as shown in Figs. 3 and 4, respectively.


Fig. 3. TCP send rate as a function of the packet loss probability for different optical packet sizes

Fig. 4. TCP send rate as a function of the packet reordering probability for different optical packet sizes



Out-of-sequence
and loss events have similar consequences on the transport
layer, although, for a given value of the event probability,
losses are more detrimental than reordering. This is due to the
fact that, when a reordering event occurs, the missing segments
are not actually lost and, in case of retransmission, they
are very likely to arrive before their retransmitted duplicates,
causing a quicker recovery of the congestion window than in
the loss case.
The curves also show the impact of the optical payload
size Lo defined in terms of number of packets carried. The
burst case refers to the upper limit of the optical payload
size, represented by a transmission time corresponding to the
assembly time-out. For instance, with the numbers used in the
simulations, the burst length is equal to Lo = Tout · Bf / 8 = 37,500 bytes ≈ 73.24 L. The presence of the correlation effect discussed at the end of Section II is the reason for
the performance improvement when the optical payload size
increases: the higher the number of TCP segments within
the same payload, the quicker the growth of the congestion
window after a retransmission. The only exception is in Fig.
4, when packet reordering occurs for Lo = 64L: since
the payload size corresponds exactly to half the maximum
transmission window, the packetization efficiency is optimized
and the performance is slightly better than in the burst case.
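The burst-size figure quoted above can be checked directly from the simulation parameters of Section III:

T_out = 3e-3    # s, assembly time-out
B_f = 100e6     # bit/s, access bandwidth of a source
L = 512         # bytes, IP datagram size

burst_bytes = T_out * B_f / 8       # = 37500.0 bytes
burst_in_datagrams = burst_bytes / L
print(burst_bytes, round(burst_in_datagrams, 2))   # 37500.0 73.24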

Fig. 5. TCP send rate as a function of the packet reordering probability for different assembly strategies and optical packet sizes

B. Multiple connections
When traffic incoming from multiple sources is considered, the influence of the multiplexing strategy applied at the ingress node in case of packet reordering is shown in Fig. 5. Here the number of sources and sinks is set to n = m = 8, meaning that 8 sources of the same class (optical SAP, QoS, etc.) are active on the same edge node and are multiplexed into the same optical packet. The effects of per-flow and mixed-flow assembly are compared for two different values of the payload size. When the optical packet carries a considerable number of TCP segments (Lo = 30L), the per-flow strategy performs much better than the mixed-flow one, since the beneficial
correlation effect described in section II is more relevant. This
is not the case when only a few IP datagrams are carried by
an optical packet (Lo = 5L) and the two strategies behave in
a comparable way.
Figure 6 shows the TCP performance when reordering is
specifically caused by deflection routing, leading to latency
differences between subsequent out-of-sequence packets in
the order of Δsn = −30 ms. Here, the quality of the
transmission decreases, especially for the per-flow assembly
strategy, since the higher latency due to deflection routing
significantly increases the segments' round trip time.
The joint effect of loss and reordering is represented in Fig.
7, for Lo = 5L. Here, the TCP send rate behavior as a function
of the out-of-sequence probability is evaluated when the optical link is also characterized by a given loss rate. The figure
clearly shows how the effect of reordering events becomes
negligible when loss events are dominant.



Fig. 6. TCP send rate as a function of the deflection routing reordering probability for different assembly strategies and optical packet sizes
Fig. 7. TCP send rate as a function of the packet reordering probability subject to different packet loss rates and assembly strategies, for Lo = 5L

This plot can be used to make some design choices when planning the optical network. In particular, it becomes useful when the wavelength
scheduling policy at the optical nodes must be decided, by
evaluating whether it is more convenient to minimize the loss
or the reordering events. As an example, a previous work
focusing on the effects of the wavelength scheduling on the
packet sequence [17] evaluates the performance degradation in
terms of loss when some constraints are introduced in order
to avoid packet reordering due to different buffering. With the
help of a plot similar to the one shown in Fig. 7 it is possible to
specify the required level of performance in order to suitably
design the optical node, e.g. by increasing the buffering space.
V. CONCLUSION
In this paper a simulation study of the behavior of single and multiple TCP connections running over optical burst and packet switching core networks has been conducted. The TCP throughput has been analyzed by means of a simplified optical network model that summarizes the effects of optical congestion resolution schemes as parameters of a single optical link. Both loss and out-of-sequence delivery of optical packets have been considered as non-ideal network behaviors, showing that their detrimental effect on the TCP throughput may be relevant. Therefore, TCP send rate optimization is achievable through a suitable balance between loss and out-of-sequence occurrence. Since, in general, different packet scheduling and contention resolution algorithms in the core switches give different performance in terms of loss and out-of-sequence delivery, the evaluations provided here can be a useful support to the design and choice of the most appropriate one in relation to the characteristics of the operating environment.

ACKNOWLEDGMENTS
This work was partially funded by the E.U. Commission
through the IST-FP6 Network of Excellence e-Photon/ONe+.
REFERENCES
[1] C. Qiao, M. Yoo, Optical burst switching: A new paradigm for an optical
internet, Journal of High Speed Networks, vol. 8, no. 1, pp. 69-84,
January 1999.
[2] M. J. O'Mahony, D. Simeonidou, D. K. Hunter, A. Tzanakaki, The application of optical packet switching in future communication networks, IEEE Communications Magazine, vol. 39, no. 3, pp. 128-135, March 2001.
[3] W. Vanderbauwhede, D. A. Harle, Design and Modeling of an Asynchronous Optical Packet Switch for DiffServ Traffic, Proc. ONDM 2004,
Gent, Belgium, pp. 19-35, February 2004.
[4] X. Cao, J. Li, Y. Chen, C. Qiao, Assembling TCP/IP packets in optical
burst switched networks, Proc. IEEE Globecom 2002, Taipei, Taiwan,
vol. 3, pp. 2808-2812, November 2002.
[5] A. Detti, M. Listanti, Impact of Segments aggregation on TCP Reno
flows in optical burst switching network, Proc. IEEE INFOCOM 2002,
New York, USA, vol. 3, pp. 1803-1812, June 2002.
[6] A. Detti, M. Listanti, Amplification Effects of the Send Rate of TCP Connection Through an Optical Burst Switching Network, Optical Switching
and Networking, Elsevier, vol. 2, no. 1, pp. 49-69, May 2005.
[7] D. K. Hunter, M. C. Chia, I. Andonovic, Buffering in Optical Packet
Switching, IEEE/OSA Journal of Lightwave Technology, vol. 16, no. 10,
pp. 2081-2094, December 1998.
[8] L. Dittmann, et al., The European IST project DAVID: A viable approach
towards optical packet switching, IEEE Journal on Selected Areas in
Communications, vol. 21, no. 7, pp. 1026-1040, September 2003.
[9] R. Ramamurthy, B. Mukherjee, Fixed-alternate routing and wavelength
conversion in wavelength-routed optical networks, IEEE/ACM Transactions on Networking, vol. 10, no. 3, pp. 351-367, June 2002.
[10] J. C. R. Bennett, C. Partridge, Packet reordering is not a pathological network behavior, IEEE/ACM Transactions on Networking, vol. 7, no. 6, pp. 789-798, December 1999.
[11] M. Laor, L. Gendel, The effect of packet reordering in a backbone link
on application throughput, IEEE Network, vol. 16, no. 5, pp. 28-36,
September/October 2002.
[12] S. Jaiswal, G. Iannacone, C. Diot, J. Kurose, D. Towsley, Measurement
and classification of out-of-sequence packets in a tier-1 IP backbone,
Proc. IEEE INFOCOM 2003, San Francisco, CA, vol. 2, pp. 1199-1209,
March 2003.
[13] M. Allman, V. Paxson, W. Stevens, TCP congestion control, IETF RFC 2581, April 1999.
[14] Y. Chen, C. Qiao, X. Yu, Optical burst switching: a new area in optical
networking research, IEEE Network, vol. 18, no. 3, pp. 16-23, May/June
2004.
[15] F. Callegati, W. Cerroni, G. Muretto, C. Raffaelli, P. Zaffoni, A framework for performance evaluation of OPS congestion resolution, Proc.
ONDM 2005, Milan, Italy, pp. 243-250, February 2005.
[16] The Network Simulator ns-2, http://www.isi.edu/nsnam/ns
[17] F. Callegati, D. Careglio, W. Cerroni, G. Muretto, C. Raffaelli, J. Solé-Pareta, P. Zaffoni, Keeping the packet sequence in optical packet-switched networks, Optical Switching and Networking, vol. 2, no. 3, pp. 137-147, November 2005.
