
Cross Layer Design Approach for Performance Evaluation of Multimedia Contents

Miguel Almeida, Instituto de Telecomunicações, Universidade de Aveiro (UA), Aveiro, Portugal, miguel.almeida@ua.pt
Rui Inácio, COO Operational Business Software, Nokia Siemens Networks, Aveiro, Portugal, rui.inacio@nsn.com
Susana Sargento, Instituto de Telecomunicações, Universidade de Aveiro (UA), Aveiro, Portugal, susana@ua.pt

Abstract— This paper provides a cross-layer overview of current 3GPP networks, extending the analysis up to the end-to-end performance and the end-user perception. It employs a correlation between the most relevant parameters at each layer and underlines their contribution to the upper layers. It further provides an analytical, qualitative assessment of the overall service performance by evaluating simulation results which are sustained by both network and service performance metrics. We show qualitative results that give an overview of the performance of each service, testing them over a network with HSPA and UMTS technologies. For example, Voice over IP achieved a performance rating of 3.7 and 3.2 out of 5 in HSPA and UMTS, respectively. These indicators can then be enhanced to indicate whether the desired service suits the network being used.

Keywords: Cross-layer, 3G, E2E and Service performance

I. INTRODUCTION
Measuring the end-to-end performance of a service while taking into account the penalties introduced by each and every network segment and equipment entity on the data path, and in each layer, allows for a complete understanding of the problems and performance issues introduced. However, networks are significantly complex, and cellular networks are no exception, especially when considering the integration of 3GPP's access technologies with other solutions.
Internet Service Providers (ISP) are now evolving towards business models focusing on the delivery of services over IP networks under the “all-over-IP” perspective. The way a multimedia content-oriented service behaves in such a scenario depends on a wide range of factors; performance monitoring requires efficient and optimized strategies with multiple implications at the security/privacy level. Each service has specific characteristics which may make it more or less resilient to certain network performance issues. The study of the relationship between the network and the service, through cross-layer information, will help to better understand the problems of the end-clients and to track down error causes. As in every cross-layer proposal, a careful analysis of the overhead implications in terms of signaling and added processing is required. The scope of this work is to propose a solution for a general cross-layer monitoring system for any type of multimedia service in multi-operator scenarios. Given the impact of outsourcing on today's operators' business models, content providers may not be the same as internet service providers. In fact, given the diversity of access technologies already in the market, this situation is a very valid one. In order to generally and ubiquitously classify service performance, important reporting components are defined: the definition and use of general Key Performance Indicators (KPI) are proposed, based on cross-layer information, to uniformly describe the service performance. The support of visualization models will also be described: they provide different reports for different types of end users and even for different types of analysis, starting from a unique data set, i.e., different ways of showing a KPI.
The rest of this paper is organized as follows. Section II shows relevant work in the area and underlines the approach taken in this paper. Section III shows the relevant information from each layer, which is used to perform a cross-layer evaluation of the performance of a network and its services. Section IV introduces an instantiation of the proposed architecture. In Section V some results are presented regarding the validation of the previously defined metrics, and Section VI concludes the paper.

II. CROSS LAYER DESIGN FOR PERFORMANCE EVALUATION
Several works already underline how Cross-Layer Design (CLD) allows for improvements in terms of performance. In [1] it is underlined how CLD fills the gap between theoretical and practical implementations, showing the importance of such designs for optimization purposes. It further explains how network layer protocols can benefit from the underlying modules. Other proposals such as [2] and [3] present application-level optimizations for services such as IPTV in a specific type of network, using adaptive fragmentation according to the network conditions. Some proposals like [4] reinforce the impact of scheduling application-level packets (e.g., video frames), hence introducing additional complexity.
Typical CLD approaches focus on the evaluation of a particular service over a particular network. In [5] a very recent overview of existing cross-layer solutions for telecommunications scenarios is presented. Most of the surveyed proposals focus on how to deliver voice over certain transmission links or on how to improve the performance of IP data packets. In general, a global model capable of evaluating the impact of the performance of the lower layers on the layers above does not seem to be easy to achieve, especially if the bouquet of envisioned services is broad.
The novelty of this paper is the introduction of a study on the majority of performance metrics which should be considered when looking into services delivered over cellular networks. We take a bottom-up approach, identifying and relating some of the most meaningful parameters which have a direct implication on the quality of services.
III. MULTI-LAYER TAXONOMY
Multimedia services have different requirements; for instance, IPTV has a relative delay tolerance, but consumes a high amount of resources with low delay variation. VoIP has strict requirements on delay and jitter variation, but its bandwidth is typically somewhat constant. By inter-crossing the layers of analysis we can get a full view of the service performance. To do so, we propose the approach of dividing the functionalities into layers: first we evaluate the network elements and network segments; then we evaluate the E2E performance of a service; and at the end we evaluate the impact on the users, more commonly known as Quality of Experience (QoE). To evaluate this approach we use cellular networks as the use case (Figure 1), by formulating variables which condition the performance of a service over a 3GPP network. We show KPIs that are calculated per layer and propose Cross-layer KPIs (CLKPIs) which inter-relate the different layers.

Figure 1 – Layered analysis

A. L1: Physical Layer
This layer imposes the first limits to the performance and conditions all of the layers above.
1) Links
Packet loss can occur on the radio link due to the channel characteristics, power, distance and interference of handsets, but also on the core due to bursts of traffic that the network is not prepared to handle. The most relevant KPIs to be stored for the wireless and wired network segments are shown in Table I and in the wired-link table below. They relate the physical parameters with the efficiency of the wireless and wired links between the nodes.

Table I – Wireless Link Layer KPIs
BER/FER/SNR | Radio Link Establishment Success
Transmitted Radio Frames | Radio link setup time
Channel Usage per Traffic Class | Physical Channels failures
Physical Channels throughput | Number of connected subscribers
BTS Output Power | Received Power in UL

Wired link KPIs:
Frame delay | Buffering delay
Dropped Frames | Buffering Overflow
Throughput | Resource Reservations Success Ratio
Number of connections | Cell/Frame Rate

If the traffic increases beyond the link capacity, the network is not efficient. The reliability of a link (RL) is shown in the following subsection. Certain links are more relevant than others: typically, a link connecting the GGSN to the SGSN requires a different optimization degree. That is why different link types have different Link Type Index values (LTI). Equation 1 shows a way to evaluate the efficiency of the links taking the LTI into account, where GR_k is the estimated traffic growth rate and k is the link index.

EffecLinks = \frac{\sum_{k=1}^{N_{links}} \left[ 1 - \left( 1 - \frac{Traffic_k \times RL_k}{Capacity_k + GR_k} \right) \times LTI_k \right]}{N_{links}}   (EQ 1)
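As a concrete illustration of the reconstructed EQ 1 (read here with a multiplicative LTI weighting), the short Python sketch below averages the per-link efficiency terms. The field names and the example figures are illustrative assumptions, not part of any monitoring interface described in the paper.

from dataclasses import dataclass

@dataclass
class Link:
    traffic: float    # measured traffic on the link (e.g. Mbps)
    capacity: float   # provisioned capacity, same unit as traffic
    gr: float         # estimated traffic growth rate GR_k, same unit
    rl: float         # link reliability RL_k in [0, 1] (see EQ 2)
    lti: float        # Link Type Index weight LTI_k in [0, 1]

def effec_links(links):
    """Average link efficiency as sketched in EQ 1 (multiplicative LTI reading)."""
    if not links:
        return 0.0
    total = 0.0
    for k in links:
        utilisation = (k.traffic * k.rl) / (k.capacity + k.gr)
        total += 1.0 - (1.0 - utilisation) * k.lti
    return total / len(links)

# Example: a heavily used GGSN-SGSN link (high LTI) and a lighter access link.
links = [Link(traffic=700, capacity=1000, gr=100, rl=0.99, lti=0.9),
         Link(traffic=200, capacity=1000, gr=50, rl=0.95, lti=0.5)]
print(f"EffecLinks = {effec_links(links):.3f}")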
performance. To do so, we propose the approach of dividing 2) Physical Layer Reliability
the functionalities into layers: first we evaluate the network The Relevance ID is setup according to the importance of
elements and network segments; then we evaluate the E2E
an on-path node. As such, it is configurable according to the
performance of a service; and at the end we evaluate the impact
network’s architecture. A concentrating node like a gateway is
on the users, more commonly known as Quality of Experience
(QoE). To evaluate this approach we use the cellular more vital than an end-point node. RL depends on the node
networks as use case (Figure 1), by formulating variables endpoints and on its own reliability (including its time
which condition the performance of a service over a 3GPP availability and relevance).
network. We show KPIs that are calculated per each layer and RLk = RLij = RNEi × RNE j × (AvailabilityTimeij × LINKrelevanceij ) EQ 2
propose Cross-layer KPIs (CLKPIs) which inter-relate different
The Reliability of the Network Elements (RNE) with ID i
layers.
and j, corresponds to the importance of the two end points of
the link and is obtained as shown bellow, considering both NE
up time and relevance.
RNEi = (100 − (100 − NEUpTimei ) × NErelevancei ) EQ 3

The most relevant factor is the frame loss (FL) metric which is
common to all the sectors:

FL TotalL1 = ∑ FL i , ∧ i = {RadioLink ,WiredLink , NE } EQ 4


Figure 1 – Layered analysis i
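A minimal sketch of EQ 2 to EQ 4 follows, under the assumption that uptime is a percentage and that relevance and availability factors are normalized weights; the division of RNE by 100 when composing the link reliability is an assumption made here to keep RL in [0, 1], and all names and figures are placeholders.

def rne(ne_uptime_pct, ne_relevance):
    """Reliability of a Network Element (EQ 3): downtime weighted by relevance."""
    return 100 - (100 - ne_uptime_pct) * ne_relevance

def rl(rne_i, rne_j, availability_time, link_relevance):
    """Reliability of link i-j (EQ 2); RNE values are scaled to [0, 1] here."""
    return (rne_i / 100.0) * (rne_j / 100.0) * (availability_time * link_relevance)

def fl_total_l1(frame_losses):
    """Total L1 frame loss (EQ 4): sum over radio link, wired link and NE sectors."""
    return sum(frame_losses[s] for s in ("RadioLink", "WiredLink", "NE"))

# Purely illustrative values.
rne_ggsn = rne(ne_uptime_pct=99.9, ne_relevance=1.0)
rne_sgsn = rne(ne_uptime_pct=99.5, ne_relevance=0.8)
print(rl(rne_ggsn, rne_sgsn, availability_time=0.999, link_relevance=1.0))
print(fl_total_l1({"RadioLink": 120, "WiredLink": 15, "NE": 3}))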

B. L2: Link Layer
The KPIs shown in Table II express the “per-hop” L2 behavior that adds up and affects the E2E performance. Moreover, they also consider performance issues on the signaling plane. Transaction delay is of special relevance, since we are considering networks with tunneling and encapsulation/decapsulation.

Table II – L2 most relevant KPIs
Transaction Delay (NE) | Per DCH, HS-DSCH, E-DCH:
Queue occupation | Channel Usage
Transaction Capacity | Throughput
Number of MAC flows | Setup Failure Ratio
PDU and SDU Discard Ratio | Transaction Rate

C. L3: Network
This layer represents the transport of signaling and user plane data over IP on the core network. It is not session oriented. The reason why we include the control plane in this phase is that it represents a significant amount of relevant metrics for service characterization.
1) User Plane
The largest contribution to the bit rate constraints lies within the radio access network. The radio link itself usually imposes the limits; on the core side, they are imposed by the inter-node connection links. These are dimensioned to respond well to the average network occupation, but in peak situations this approach leads to packet errors and reduced bit rates.

IP Packet Loss | Unicast IP packets
Discarded IP packets | Multicast IP packets
Core Aggregated Bit Rate | Broadcast IP packets

The table above presents examples of L3 user-plane KPIs. We describe the impact of the delay caused by the lower layers on the network layer delay in EQ 5:

D_{L3} = \sum_i TD_{Node_i} + \sum_i BD_{Node_i}   (EQ 5)

Since this layer represents the transport on the wired side of the RAN as well as of the core, the delay is mainly affected by the average transaction delay (TD) introduced by the network elements, which reflects the time a packet stays in a node. It is also influenced by buffering (BD). The impact of L1 losses (L) on L3 packet losses is evaluated as presented below:

PL_{L13}(t) = \frac{L'_{L1}(t)}{L'_{L3}(t)} \times \left( 100 - L'_{L1}(t) \right)   (EQ 6)

The Layer 1 losses are given by EQ 4 (L_{L1} = FL_{TotalL1}), while the L3 losses are obtained from the layer 3 KPIs and reflect L_{L3} (Packet Loss of L3). L'_{Lx} is given by EQ 7 (with x = 1 for L_{L1} and x = 3 for L_{L3}).

L'_{Lx} = \left( L_{Lx}(t) - \bar{L}_{Lx} \right) / \bar{L}_{Lx}   (EQ 7)
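As a rough illustration of EQ 5 to EQ 7, the sketch below sums per-node transaction and buffering delays and relates L1 frame losses to L3 packet losses. It assumes the primed quantities of EQ 7 are relative deviations from a historical mean, which is how the garbled original was read here; treat it as a sketch of that reading, not the authors' implementation, and all numbers are illustrative.

def d_l3(td_per_node, bd_per_node):
    """Network-layer delay (EQ 5): transaction plus buffering delay over all nodes."""
    return sum(td_per_node) + sum(bd_per_node)

def normalized_loss(loss_t, loss_mean):
    """EQ 7 (as reconstructed): relative deviation of the loss from its mean."""
    return (loss_t - loss_mean) / loss_mean

def pl_l13(l1_loss_t, l1_mean, l3_loss_t, l3_mean):
    """Cross-layer loss indicator (EQ 6): share of L3 loss explained by L1 loss."""
    l1p = normalized_loss(l1_loss_t, l1_mean)
    l3p = normalized_loss(l3_loss_t, l3_mean)
    return (l1p / l3p) * (100 - l1p)

# Illustrative numbers only (delays in ms, losses as frame/packet counts).
print(d_l3(td_per_node=[0.21, 0.10, 0.02], bd_per_node=[0.05, 0.03, 0.01]))
print(pl_l13(l1_loss_t=130, l1_mean=100, l3_loss_t=160, l3_mean=100))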
2) Control Plane
In order to initiate the session setup process, the UE establishes a Radio Link with the respective NodeB. Using RRC signaling, the RNC then reserves the Iub resources. The UE initiates the creation of the PDP Context with the Core Network (GGSN), thus defining the QoS requirements for the packet session. In the meantime, the core network creates a Radio Access Bearer (RAB) with the RNC. The RNC runs the admission control algorithm and sets up a Radio Bearer (RB). The KPIs shown in Table III are recommended for the analysis of this layer.

Table III – L3 most relevant KPIs
PDP Activation Success Ratio | RRC Setup Time
PDP Creation Time | SGSN Context Drop Rate
RAB Setup complete Success Ratio | Iu GPRS Attach Success Rate
RAB Creation Delay | Iu GPRS Attach Time
RAB Drop Ratio | RRC Setup Success/Failure Rate

An extremely important CLKPI we want to consider is the Setup Time for the establishment of a packet call, in which we consider mainly the setup time of the RRC and the creation time of the PDP context, as well as the GPRS attach time:

ST_{PCall} = ST_{RRC} + CT_{PDP} + AttachT_{GPRS}

D. L4: Tunneling
In current 3GPP networks (excluding LTE), the transmission of IP data packets is done via the creation of a tunnel by the GPRS Tunneling Protocol. While GTP-C handles the control data between the SGSN and the GGSN, GTP-U takes care of the tunneling of the user data by creating one tunnel per active PDP context; data is carried between the NodeB and the terminal via PDCP. The main KPIs are shown in Table IV.

Table IV – L4 KPIs
GTPuDroppedPackets | PDCP Delay
GTPuThroughput | PDCP Errors
GTPuTransmissionOverhead | PDCP Throughput

In this layer we define the CLKPIs for the Loss Ratio of packet transmission, the average E2E delay and the Overhead for traffic, which includes the additional load caused by packet headers, as well as an overhead that also considers signaling.

LR_{L4} = \frac{L_{GTPu} + L_{PDCP}}{\lambda_{TotalPackets}}   (EQ 8)

The Loss Ratio (LR) is expressed as the total dropped packets in the GTP-U and PDCP traffic over the total transmitted packets. The delay here is also the sum of the average delays in both segments, as shown in EQ 9.

D_{L4} = D_{GTPu} + D_{PDCP}   (EQ 9)

GTP inflicts a significant amount of overhead, through a complex stack of layers which also add extra bits. Depending on the physical transport protocol, we can have SDH or Carrier Ethernet (OvhL1). On top of that, ATM (OvhL2), IP and UDP (OvhL3) and only then GTP must be considered. All of these protocol headers condition the network. The overhead can be shown in terms of the traffic packet overhead (TOvh in EQ 10) or also considering the total overhead including the signaling (EQ 11). To calculate the total overhead (Ovh_{Total,L4}), we also consider the traffic generated in the signaling plane of the GTP (λ_{GTPc}).

TOvh_{L4} = Ovh_{GTPu} + Ovh_{PDCP} + Ovh_{L3} + Ovh_{L2} + Ovh_{L1}   (EQ 10)

Ovh_{Total,L4} = TOvh_{L4} + \lambda_{GTPc}   (EQ 11)
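A small sketch of the L4 CLKPIs of EQ 8 to EQ 11 follows, assuming that packet counters and per-layer header overheads are already available from the lower-layer KPIs; the variable names and the example figures are placeholders, not measurements.

def loss_ratio_l4(lost_gtpu, lost_pdcp, total_packets):
    """EQ 8: dropped GTP-U plus PDCP packets over the total transmitted packets."""
    return (lost_gtpu + lost_pdcp) / total_packets

def delay_l4(d_gtpu, d_pdcp):
    """EQ 9: the L4 delay is the sum of the GTP-U and PDCP segment delays."""
    return d_gtpu + d_pdcp

def traffic_overhead_l4(ovh_gtpu, ovh_pdcp, ovh_l3, ovh_l2, ovh_l1):
    """EQ 10: header overhead accumulated over the protocol stack."""
    return ovh_gtpu + ovh_pdcp + ovh_l3 + ovh_l2 + ovh_l1

def total_overhead_l4(t_ovh_l4, gtpc_traffic):
    """EQ 11: traffic overhead plus the GTP-C signaling-plane traffic."""
    return t_ovh_l4 + gtpc_traffic

# Illustrative values: packet counts for the loss ratio, kbps for the overheads.
print(loss_ratio_l4(lost_gtpu=120, lost_pdcp=80, total_packets=1_000_000))
print(delay_l4(d_gtpu=0.00011, d_pdcp=0.0005))
t_ovh = traffic_overhead_l4(ovh_gtpu=96, ovh_pdcp=48, ovh_l3=224, ovh_l2=40, ovh_l1=16)
print(total_overhead_l4(t_ovh, gtpc_traffic=64))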
E. L5: E2E Network Connection IP/TCP
When considering performance metrics only, the content evaluation is mainly dependent on the service- and network-related information. The network impact on the E2E is evaluated according to the penalty introduced by the entities in the data/signaling path, while the service-related information tries to match this impact onto the user-perceived quality.
1) E2E delay/jitter
The E2E delay has a strong contribution from the intermediate network entities. The first parameter that should be checked is the overall round trip time (RTT) delay. This will be collected from the Real-time Control Protocol (RTCP). However, if the value is too high, it is not possible to locate the origin of the problem. In order to achieve this drill-down, we first need to check if there is a problem with a flow, then look for the flow's path, and further evaluate the layer at which the problem is occurring. The KPIs below are related to the application level, where S is the total number of sessions; EQ 14 and EQ 15 use voice as an example.

D_{E2E,IP} = \sum_i \left( D_{session,i} \right) / S   (EQ 12)

J_{E2E,IP} = \sum_i \left( J_{session,i} \right) / S   (EQ 13)

D_{voice} = \sum_i \left( D_{voice,i} \right) / S_{voice}   (EQ 14)

J_{voice} = \sum_i \left( Jitter_{voice,i} \right) / S_{voice}   (EQ 15)

2) Overhead
The additional overhead comes from the IP and UDP/TCP headers, which should be added to EQ 10.
3) Bit Rate
At this level, it is possible to track the bit rate per session and aggregate it per application type. It should also be noticed that peak and average bit rates are extremely important, as they influence the network behavior and the user experience dramatically. We underline the following CLKPIs for Bit Rate (Rb) analysis with different aggregations: total bit rate, bit rate per Application Identifier (AID = {Voice, Video, FTP, Email}) and the peak bit rate. S_AID refers to the number of sessions of a certain application.

R_{b,aggr} = \sum_{i=1}^{TotalSessions} R_{b,i} / S   (EQ 16)

R_{b,AID} = \sum_{i=1}^{TotalSessions_{AID}} R_{b,i} / S_{AID}   (EQ 17)

R^{max}_{b,x} = \max\left( R_{b,x}(t) \right), \quad t = 1,2,\dots,N, \quad x \in \{AID, aggregated\}   (EQ 18)
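The per-session aggregations of EQ 12 to EQ 18 reduce to means and maxima over session records. The sketch below groups hypothetical session tuples by Application Identifier; the peak in EQ 18 is taken over time in the paper, whereas here, for brevity, it is taken over the session records.

from collections import defaultdict

# Each session record: (application id, mean bit rate in kbps, delay in ms, jitter in ms)
sessions = [("Voice", 80, 120, 12), ("Voice", 80, 140, 15), ("Video", 300, 90, 20)]

def aggregate(sessions):
    by_aid = defaultdict(list)
    for aid, rb, delay, jitter in sessions:
        by_aid[aid].append((rb, delay, jitter))
    total = len(sessions)
    # EQ 12/13: average E2E delay and jitter over all sessions.
    d_e2e = sum(s[2] for s in sessions) / total
    j_e2e = sum(s[3] for s in sessions) / total
    # EQ 16: aggregated bit rate; EQ 17: bit rate per AID; EQ 18: peak bit rate
    # (here over session records instead of over time).
    rb_aggr = sum(s[1] for s in sessions) / total
    rb_per_aid = {aid: sum(r for r, _, _ in v) / len(v) for aid, v in by_aid.items()}
    rb_peak = max(s[1] for s in sessions)
    return d_e2e, j_e2e, rb_aggr, rb_per_aid, rb_peak

print(aggregate(sessions))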
4) Out of Order and Packet Loss
Out-of-order application packets can cause a direct visual and audio impact. They may result not only from the network conditions, but can also be an outcome of the codec operation. Jitter and routing changes can affect this parameter.

F. L6: Session
After the network resources have been configured and a path has been established, it is then possible to communicate with the respective application server. In the VoIP example, this corresponds to sending a message to a server, registering and attempting to establish communication with another peer. The implications are mainly related to the server responsiveness, occupation and user authorization. [7] defines a set of metrics for the integration of the E2E view under the Session Initiation Protocol procedures, some of which are listed in Table V.

Table V – KPI list for SIP
Registration Request Delay | Success/Failure Session Completion
Success/Failure Register | Hops per Request
Success/Failure Session Setup | Session Establishment Rate
Session Disconnect Delay | Ineffective Session Attempts
Session Duration Time | Session Disconnect Failures

G. L7: QoE
The final goal of this cross-layer process is the evaluation of the QoE for the user. In order to achieve this application-oriented feedback, we consider that it is very important to have extensions to some protocols, namely RTCP, to support the gathering of metrics related to the codec performance and the user-perceived quality. [6] provides some additional fields which are very important to our goal. In the QoS plane these are directly related with the Mean Packet Delay and bit rates (Table VI).

Table VI – KPIs extracted from RTCP
Application Loss Rate | R-Value
Burst Density | MOS
Burst Duration | End System Delay
RTT |

1) Perceived Communication Failures
By extending the typical approach with a high-resolution VoIP metrics report block, using the fields of R-value, MOS and noise level, it is possible to have a clearer understanding of the overall service behavior. From the codec information, we can obtain relevant feedback on the perceived and corrected errors occurring at the application level, as well as on the compression rate adaptation.
2) Setup Waiting Time
One of the most crucial parameters is the time a user has to wait before starting to receive a requested content. This value is imposed by the delays introduced by every layer and network entity on the logical path. The Setup Waiting Time (SWT) is directly influenced by the delay inflicted by the intermediate nodes, the signaling time and the responsiveness of the servers. These metrics involve several layers, including KPIs already defined in the previous sections (ST_PCall), while the rest regards this section. All the waiting times are added up.

SWT_{total} = ST_{PCall} + D_{ServiceRequest} + D_{SipServer} + D_{AAA} + D_{source} + D_{SIP,Signaling} + D_{Processing}   (EQ 19)
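A sketch of EQ 19 follows, assuming each contribution is available as a measured delay in seconds. The stage names mirror the terms of the equation; the numeric values are illustrative (only the PDP, attach and service-activation figures loosely echo the simulation tables of Section V).

def st_pcall(st_rrc, ct_pdp, attach_t_gprs):
    """Packet-call setup time (Section III.C.2): RRC setup + PDP creation + GPRS attach."""
    return st_rrc + ct_pdp + attach_t_gprs

def swt_total(st_pcall_s, d_service_request, d_sip_server, d_aaa,
              d_source, d_sip_signaling, d_processing):
    """Setup Waiting Time (EQ 19): all waiting contributions added up."""
    return (st_pcall_s + d_service_request + d_sip_server + d_aaa
            + d_source + d_sip_signaling + d_processing)

# Illustrative values in seconds.
setup = st_pcall(st_rrc=0.35, ct_pdp=1.4, attach_t_gprs=0.9)
print(swt_total(setup, d_service_request=0.25, d_sip_server=0.12, d_aaa=0.04,
                d_source=0.08, d_sip_signaling=0.05, d_processing=0.01))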
3) Service Availability
The availability relates mainly to the session establishment success ratio and to path unavailability. The service availability directly influences the user satisfaction and the perception of the service. The main aspects to consider are: the path availability, the individual uptime of the on-path NEs and the application server uptime.

PA = \min_{NE_{ID}} \left( UpT_{NE_{ID}} \times ConnSuccess_{NE_{ID}} \right)   (EQ 20)

Path Availability (PA) is valid for a certain Time Window and for NE_ID ∈ {GGSN, SGSN, RNC, NodeB}. ConnSuccess is a rate indicating the success of a connection.

UpT_{AID} = UpT_{Server,AID} \times ConnSuccess_{AID}   (EQ 21)
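The path availability of EQ 20 is determined by the weakest element on the path, and EQ 21 composes the application-server uptime with its connection success rate. The sketch below assumes all rates are normalized to [0, 1] and uses hypothetical per-NE figures.

def path_availability(elements):
    """EQ 20: PA is the weakest (uptime x connection success) product on the path."""
    return min(up * conn for up, conn in elements.values())

def service_uptime(server_uptime, conn_success):
    """EQ 21: application-level uptime seen for a given AID."""
    return server_uptime * conn_success

# Hypothetical per-NE figures for one time window.
path = {"GGSN": (0.9999, 0.998), "SGSN": (0.9995, 0.997),
        "RNC": (0.999, 0.995), "NodeB": (0.998, 0.990)}
print(path_availability(path))
print(service_uptime(server_uptime=0.999, conn_success=0.96))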
4) Time Resolution
The number of frames per second and the frame delay are two parameters which characterize different types of aspects at the same level. If, on one hand, Frames per Second indicates the degree of seamless service delivery for video contents, the Frame Delay is especially important for voice contents, where the E2E delay is the major underlying factor conditioning the reception of the other end's response. The delay D_{E2E,IP} is explained in Section III.E.1; we now add the delay introduced by the end systems, which is formed by the delay at the client side and at the server side. These are the times between the packets being made available by the network and being consumed by the application.

D_{AID} = D_{E2E,IP} + D_{EndSystem,AID}   (EQ 22)

D_{EndSystem,AID} = D_{Client,AID} + D_{Server,AID}   (EQ 23)

IV. SERVICE PERFORMANCE EVALUATION
In order to support the gathering of the aforementioned metrics and their treatment, we propose a base performance monitoring architecture with requirements for the correlation of information on a layered basis. In this section we briefly describe the architecture and show the service performance indicators to be achieved.

A. Architecture Entities
The main entities proposed in Figure 2 are the Network Elements, the Element Managers and the tool which performs complex data analysis and correlation.

Figure 2 – Architecture overview

1) Network Elements (NE)
Each network element monitors its performance through the Performance Mediation. There are three types of primitives relating to the data: Performance Management data (PM), Configuration Management data (CM) and Fault Management data (FM). In Figure 2 we only present NodeB and RNC network elements, but any type of NE could be included beyond the Radio Network System (RNS). NEs are responsible for gathering the KPIs up to L4.
2) Element Manager (EM)
The NE Mediation collects the PM and FM functions existing in each of the elements of the network. The Statistics Processor is responsible for converting all the diverse gathered data. The Raw Database Loader provides interfaces for the data storage management features. E2E information should be treated by the EM.
3) Reporting Tool
The Reporting Database is a data warehouse designed for the integration of diverse data sources. This data repository models the whole network topology into a hierarchical object structure. The Statistics Post Processor is responsible for the aggregation of data and time-trend analysis. The Reporting Engine is responsible for the database queries. The Automated Knowledge Discovery model searches for patterns.
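To make the data flow just described more tangible (NE mediation emitting PM/CM/FM records, an Element Manager normalizing and storing them, and the reporting tool aggregating them), here is a compact sketch; the class and method names are invented for illustration and do not correspond to any real product interface.

from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class Record:
    kind: str      # "PM", "CM" or "FM"
    ne_id: str     # network element identifier, e.g. "RNC1"
    kpi: str
    value: float

@dataclass
class ElementManager:
    raw_db: list = field(default_factory=list)

    def collect(self, records):
        """Statistics processor + raw database loader: store normalized NE data."""
        self.raw_db.extend(records)

class ReportingTool:
    def aggregate(self, raw_db):
        """Statistics post-processor: average PM KPI values per network element."""
        sums, counts = defaultdict(float), defaultdict(int)
        for r in raw_db:
            if r.kind == "PM":
                sums[(r.ne_id, r.kpi)] += r.value
                counts[(r.ne_id, r.kpi)] += 1
        return {k: sums[k] / counts[k] for k in sums}

em = ElementManager()
em.collect([Record("PM", "RNC1", "transaction_delay_ms", 0.25),
            Record("PM", "RNC1", "transaction_delay_ms", 0.21),
            Record("FM", "NodeB3", "radio_link_failure", 1)])
print(ReportingTool().aggregate(em.raw_db))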
B. Service Performance Indicators
In this section we present some performance indicators that can be obtained with the performance architecture. The Service Performance Indicator (SPI) is mainly affected by the performance of the on-path links and nodes and by the E2E conditions. It shows the overall performance of a network according to the quality of delivery of the contained services.
The Revenue Factor (RF) has a direct implication on the evaluation of the adequacy of the network to the services being provided. The under-performance of voice should reflect a larger impact on the Network Adequacy Indicator (NAI) of the services being delivered, since it is a very profitable service.

NAI = \frac{\sum_{i=1}^{TotalServices} \left( RF_i \times SPI_i \right)}{\sum_i RF_i}   (EQ 24)

The SPI depends on the type of service. When considering voice, different parameters are addressed when compared to video, as well as with a different relevance. We perform a weighted average between the importance of each KPI to a certain service and the rating that the same KPI achieves. This results in the use of universal KPIs.

SPI_i = \frac{\sum_{KPI=1}^{MaxKPIs} k_i(KPI) \times KPRV_i(KPI)}{TK_i}, \quad i = 1,2,\dots,numServices   (EQ 25)

The KPIs in Table VII were chosen according to the importance of the parameters to each of the services. The Key Performance Rating Value (KPRV) reflects the importance of each KPI to the overall service performance. k_i(KPI) is a rating value according to the KPI performance.

\begin{pmatrix} SPI_1 \\ \vdots \\ SPI_n \end{pmatrix} = \begin{pmatrix} KPRV_{1,1}\times KPIR_1 & \cdots & KPRV_{m,1}\times KPIR_m \\ \vdots & \ddots & \vdots \\ KPRV_{1,n}\times KPIR_1 & \cdots & KPRV_{m,n}\times KPIR_m \end{pmatrix} \Big/ \begin{pmatrix} TK_1 & \cdots & TK_n \\ \vdots & \ddots & \vdots \\ TK_1 & \cdots & TK_n \end{pmatrix}   (M1)

Matrix (M1) depicts the application of EQ 25. The values of KPRV_{KPI,TypeOfService} are presented in Table VII. For voice, the most important KPIs are the delay and the availability. For video delivery we considered the availability of bit rates and the reliability of the network, while browsing users are more tolerant to the network performance. TK is given in EQ 26.

TK_i = \sum_{KPI} KPRV_{KPI,i}, \quad i \in \{Voice, Video, Browsing\}   (EQ 26)

Table VII – KPRV values
KPI | Voice | Video | Browsing
E2E Delay | 5 | 2 | 3
Reliability | 4 | 4 | 3
Discard Rate | 3 | 3 | 2
Jitter | 3 | 2 | 1
Waiting Time | 2 | 1 | 2
Bit Rates | 1 | 5 | 2
TK | 18 | 17 | 13

Table VIII shows the threshold values for the definition of a rating R for each of the selected KPIs. According to these thresholds, we rank the performance of the previously evaluated services on the simulated network.

Table VIII – Threshold definition for service performance ratings
R | Delay (ms) | Jitter (ms) | Discard Rate (%) | Reliability (%) | WT (s)
5 | <50 | <0.1D | <0.001 | >99.999 | <0.01
4 | [50;100] | [0.1D;0.5D] | [0.001;0.01] | [99.99;99.999] | [0.01;0.1]
3 | [100;150] | [0.5D;0.8D] | [0.01;0.1] | [99.9;99.99] | [0.1;1]
2 | [150;300] | [0.8D;D] | [0.1;1] | [99;99.9] | [1;10]
1 | [300;600] | [D;2D] | [1;10] | [90;99] | [10;100]
0 | >600 | >2D | >10 | <90 | >100

We apply the measured service performance to EQ 25, as depicted in M2:

\left( SPI_{voice} \right) = \left( E2ED_{KPRV}\times R_1 \;\cdots\; BR_{KPRV}\times R_m \right) \Big/ \begin{pmatrix} TK_{voice} \\ \vdots \\ TK_{voice} \end{pmatrix}   (M2)
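Putting EQ 24 to EQ 26 together with the KPRV weights of Table VII, a minimal sketch of the rating pipeline could look as follows. Only the delay column of Table VIII is mapped to a rating here, the remaining ratings and the revenue factors are invented for the example, and none of this is the authors' implementation.

# KPRV weights per KPI and service, taken from Table VII.
KPRV = {
    "Voice":    {"E2EDelay": 5, "Reliability": 4, "DiscardRate": 3,
                 "Jitter": 3, "WaitingTime": 2, "BitRates": 1},
    "Video":    {"E2EDelay": 2, "Reliability": 4, "DiscardRate": 3,
                 "Jitter": 2, "WaitingTime": 1, "BitRates": 5},
    "Browsing": {"E2EDelay": 3, "Reliability": 3, "DiscardRate": 2,
                 "Jitter": 1, "WaitingTime": 2, "BitRates": 2},
}

def delay_rating(delay_ms):
    """Rating R for the delay column of Table VIII."""
    bounds = [(50, 5), (100, 4), (150, 3), (300, 2), (600, 1)]
    for limit, r in bounds:
        if delay_ms < limit:
            return r
    return 0

def spi(service, ratings):
    """EQ 25 / EQ 26: weighted average of per-KPI ratings with the KPRV weights."""
    weights = KPRV[service]
    tk = sum(weights.values())                      # EQ 26
    return sum(ratings[k] * w for k, w in weights.items()) / tk

def nai(spi_by_service, revenue_factor):
    """EQ 24: revenue-weighted average of the per-service SPIs."""
    num = sum(revenue_factor[s] * spi_by_service[s] for s in spi_by_service)
    return num / sum(revenue_factor.values())

# Illustrative ratings (0..5) for each KPI of each service.
ratings = {"Voice":    {"E2EDelay": delay_rating(147), "Reliability": 5, "DiscardRate": 5,
                        "Jitter": 3, "WaitingTime": 2, "BitRates": 5},
           "Video":    {"E2EDelay": 4, "Reliability": 5, "DiscardRate": 4,
                        "Jitter": 4, "WaitingTime": 2, "BitRates": 5},
           "Browsing": {"E2EDelay": 4, "Reliability": 5, "DiscardRate": 4,
                        "Jitter": 5, "WaitingTime": 3, "BitRates": 4}}
spis = {s: spi(s, ratings[s]) for s in ratings}
print(spis)
print(nai(spis, revenue_factor={"Voice": 3, "Video": 2, "Browsing": 1}))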
V. RESULTS EVALUATION
In order to validate the predefined metrics and approach, we present some results obtained with a network simulator, considering some of the parameters defined above for the evaluation analysis.

A. Simulation parameters
The simulations are run in OPNET considering 500 values per statistic and an update interval of 500000 events. The conditions are changed every hour. The generic scenario was deployed considering 4 RNCs, each of which connected to a NodeB; 3 RNCs are UMTS/HSDPA enabled and one is only UMTS (3GPP release 04) capable. We introduced 9 UMTS mobile nodes, 8 of which access a light HTTP service. The remaining one was used to evaluate the performance of both the VoIP (G.711) and Video (300 kbps flow) services. 22 HSDPA-enabled mobile nodes are distributed evenly. All nodes access heavy HTTP traffic (packet size of 1000 bytes, normal distribution, and page inter-arrival of 10 with an exponential distribution of 10 seconds: browsing image). The core links which connect the SGSN with the GGSN and the GGSN with the core routers have a background traffic load of 170 Mbps, and all links have a 10 Gbps capacity.

B. Simulation Results
The obtained values allow us to extrapolate cross-layer conclusions relating to Sections III and IV. On the core link the average throughput is 222 Mbps. This already includes the background traffic which was introduced to evaluate the network's behavior with other concurrent traffic. The total traffic generated by the UEs is 52 Mbps. The obtained GTP overhead is 515 kbps (nearly 0.95%). The processing delay introduced by the several entities varies according to Table IX.
The total delay introduced by the nodes is 0.57 ms for the heavily loaded network branch and 0.32 ms for the least loaded branch. We can also see that the major contributions come, as expected, from the RNC and the GGSN. Another aspect which can be observed is the fact that the processing delay experienced by the RNC is higher than that of the GGSN, which indicates that the traffic volume is not the only variable.

Table IX – NE Processing Delays
NE | Processing Delay (sec)
GGSN | 0.0002100
SGSN | 0.0001000
UMTS RNC | 0.0000150
HSDPA RNC | 0.0002500
UMTS NodeB | 0.0000008
HSDPA NodeB | 0.0000145

The average E2E delay experienced by the nodes is 147 ms in the HSDPA link and 370 ms in the pure UMTS link (Table XI). The jitter follows the same trend and the overall value is 0.10 sec for HSDPA and 0.22 sec in UMTS.

Table X – GTP Delay
GTP DL Delay (high bit rates) | 0.00011
GTP DL Delay (low bit rates) | 0.000035

Table XI – Link Delays
 | HSDPA (sec) | UMTS (sec)
GPRS attachment delay | 0.90 | 0.9
PDP context activation delay | 1.4 | 1.4
Service Activation Delay | 0.25 | 0.35
GMM Signaling Channel Setup Delay | 0.35 (up to 0.90) | 0.28 (up to 0.5)
GMM Access Delay | 0.035 | 0.28
RAN delay | 0.035 | 0.035
E2E Delay | 0.147 | 0.370
E2E jitter | 0.100 | 0.22

The Service Setup Waiting Time, i.e., the time a user needs to wait before starting to receive the desired service, is 3.26 sec for HSDPA and 3.64 sec for UMTS. These values were taken from the information in Table X, Table XI, Table XII and Table XIII, according to EQ 19.

Table XII – Lower layer information
 | HSDPA | UMTS
UMTS UE RLC/MAC total received throughput | 23 kbps or 20 pps | 2.600 kbps or 2.4
GMM Ack received | 0.06 pps | 0.1
Eb/N0 | 3.1 dB | 3.1 dB

Table XIII – Video Performance
 | HSDPA | UMTS
PDP Context Activation Delay UMTS | 1 | 1.4
GPRS Attachment Time | 0.8 | 1
RACH Access Delay | 0.039 | 0.028

C. Service Performance Evaluation
For the Voice service, while in HSDPA, the delay is less than 150 ms, so R is 3 and the KPRV is 5. Reliability is above 99.999%, so R=5 and the KPRV is 4. The jitter is 0.68 of the delay, so R is 3 and the KPRV is 3. The available bit rate is always enough to support the service and there is always a surplus of bandwidth, so we consider R=5; the WT is above 1 second and below 10 seconds, so R=2 for the KPRV of 2.
Using M2, where E2ED is the E2E delay and BR is the bit rate, we get that the voice service scored an adequacy value of 3.7 out of 5 while in HSDPA and 3.2 out of 5 while in UMTS. The Video service scored 3.8 in both types of network. In order to define the adequacy of the services to the network, we would need to consider the revenue of each service and use it in EQ 24. The revenue is a weighting parameter which is configurable. It should be set according to each network, since it depends on the individual profit of each delivered service.
configurable. It should be set according to each network, since
UMTS RNC 0.0000150 it depends on the individual profit of each delivered service.
HSDPA RNC 0.0002500 Voice scored a lower rating because the delay is penalizing
UMTS NodeB 0.0000008 the delay requirements, since it has a higher value than video.
HSDPA NodeB 0.0000145 The same happens when comparing the performance of Voice
The average E2E delay experienced by the nodes is 147 ms over the two technologies. HSPA features less delay and thus
in the HSDPA link and 370 ms in the pure UMTS link (Table prompts voice for a higher score.
VI. CONCLUSIONS
We underlined the relevant parameters at each layer of a 3GPP network and evaluated how such metrics relate to those in the layers above. We also provided an analytical way to evaluate the adequacy of a network to support a set of services, and presented an architecture proposal for cross-layer performance evaluation. Through the obtained simulation results, we showed the rating of two multimedia services over different technologies. For example, the average end-to-end delay experienced by the nodes is 147 ms in the HSDPA link and 370 ms in the pure UMTS link. Moreover, the time a user needs to wait before starting to receive the desired service is 3.26 sec for HSDPA and 3.64 sec for UMTS. As shown, 3GPP networks are still on the way to providing an optimal delivery of such contents. The video service seemed to be the best adjusted, mainly due to the high delays existing in the network, which negatively influenced voice. This study will be extended in the future to quantify and qualify the performance of other services while detailing the impact of each layer on another.

REFERENCES
[1] Ian F. Akyildiz, Xudong Wang, “Cross Layer Design in Wireless Mesh
Networks”, IEEE Transactions on Vehicular Technology, vol. 57, No. 2,
March 2008
[2] Ismail Djama, Toufik Ahmed, Daniel Négru, “Adaptive Cross-Layer
Fragmentation for QoS-based Wireless IPTV Services”, International
Conference on Computer Systems and Applications, March 31, 2008,
Doha
[3] Ismail Djama, Abdelhamid Nafaa, Raouf Boutaba, “Meet In the Middle
Cross-Layer Adaptation for Audiovisual Content Delivery”, IEEE
Transactions on Multimedia, vol 10, N1, January 2008
[4] Virginia Corvino, Velio Tralli, Roberto Verdone, "Cross-Layer
Scheduling for Multiple Video Streams over a Hierarchical Emergency-
Deployed Network", The 28th International Conference on Distributed
Computing Systems Workshops, June 2008
[5] Fotis Foukalas, Vangelis Gazis, Nancy Alonistioti, “Cross-layer design
proposals for wireless mobile networks: a survey and taxonomy”, IEEE
Communications Surveys, 1st quarter 2008, Vol.10, No. 1.
[6] A. Clark et al., “RTCP HR - High Resolution VoIP Metrics Report
Blocks”, IETF Draft, February 25, 2008
[7] D. Malas, “SIP End-to-End Performance Metrics”, IETF Draft, PMOL
WG, June 25, 2008
