
A.F. COSME.

UMTS CAPACITY SIMULATION STUDY

again here, pointing out when the Timetotrigger1a parameter is used, to
illustrate how the cell addition works. For the description of the Soft
Handover algorithm the following parameters are needed:

- AS_Th: Threshold for macro diversity (reporting range);

- AS_Th_Hyst: Hysteresis for the above threshold;

- AS_Rep_Hyst: Replacement Hysteresis;

- ∆T: Time to Trigger;

- AS_Max_Size: Maximum size of Active Set.

[Figure: measured CPICH quantities of three cells over time, showing the
thresholds AS_Th ± AS_Th_Hyst and AS_Rep_Hyst, and the triggering (each
after a time ∆T, equivalent to Timetotrigger1a) of Event 1A (add Cell 2),
Event 1C (replace Cell 1 with Cell 3) and Event 1B (remove Cell 3).]

Figure 91: WCDMA handover example [3gpp25.922]

As described in the Figure above:

- If Meas_Sign is below (Best_Ss – AS_Th – AS_Th_Hyst) for a period of ∆T,
remove the Worst cell from the Active Set (Event 1B).

- If Meas_Sign is greater than (Best_Ss – (AS_Th – AS_Th_Hyst)) for a
period of ∆T and the Active Set is not full, add the Best cell outside the
Active Set to the Active Set (Event 1A).

- If the Active Set is full and Best_Cand_Ss is greater than (Worst_Old_Ss +
AS_Rep_Hyst) for a period of ∆T, add the Best cell outside the Active Set
and remove the Worst cell from the Active Set (Event 1C).
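The three rules above can be sketched as a single evaluation step. This is a simplified, one-shot illustration only: it ignores the ∆T time-to-trigger and the measurement filtering, and the function name and data layout are assumptions, not the simulator's implementation.

```python
# Simplified sketch of the soft handover event rules (Events 1A/1B/1C).
# One-shot evaluation only: the real algorithm requires the condition to
# hold for a period Delta-T on filtered measurements.

def evaluate_events(active, monitored, as_th, as_th_hyst, as_rep_hyst, as_max_size):
    """active/monitored: dicts mapping cell id -> measured CPICH quantity [dB]."""
    best_ss = max(active.values())                 # Best_Ss
    worst_cell = min(active, key=active.get)
    worst_old_ss = active[worst_cell]              # Worst_Old_Ss

    # Event 1B: the worst cell fell below the reporting range.
    if worst_old_ss < best_ss - as_th - as_th_hyst:
        return ("1B", worst_cell)

    if monitored:
        best_cand = max(monitored, key=monitored.get)
        best_cand_ss = monitored[best_cand]        # Best_Cand_Ss
        # Event 1A: add the best candidate if the Active Set is not full.
        if (len(active) < as_max_size
                and best_cand_ss > best_ss - (as_th - as_th_hyst)):
            return ("1A", best_cand)
        # Event 1C: Active Set full, candidate beats the worst cell by the
        # replacement hysteresis.
        if (len(active) == as_max_size
                and best_cand_ss > worst_old_ss + as_rep_hyst):
            return ("1C", best_cand, worst_cell)
    return None
```

For instance, with one cell at -8 dB in the Active Set and a monitored cell at -9 dB (inside the reporting range for AS_Th = 4, AS_Th_Hyst = 2), the sketch triggers Event 1A.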

Where:

- Best_Ss: the best measured cell present in the Active Set;

- Worst_Old_Ss: the worst measured cell present in the Active Set;

- Best_Cand_Ss: the best measured cell present in the monitored set;

- Meas_Sign: the measured and filtered quantity.

The “hysteresis” parameter is defined as a sort of “guard band” in order to
avoid event triggering due to insignificant measurement fluctuations.

Event-triggered periodic measurement reporting is set up for event 1a and
event 1c. This means that if the UE sends a report to UTRAN and does not
get any ACTIVE SET UPDATE message, the UE will repeat the same event
report every repInterval1a until the Active Set is updated, or until the
condition for event triggering is no longer valid.

When a MEASUREMENT REPORT message triggered on event 1a is received at the
SRNC from the UE, the Soft/Softer Handover evaluation algorithm (SHO_Eval)
processes the report and evaluates whether the proposed candidate can be
added to the Active Set.

Knowing the working principle of the Soft Handover algorithm, we can say
that event 1a affects the handover performance in the following ways:

According to Ericsson, too short a setting for this timer causes Active Set
update events to occur too quickly after each other, leading to high
signaling load; on the other hand, too long a setting means that the Active
Set will not be updated even though the criteria have been fulfilled. Less
optimal cells will then be used, which leads to unnecessary interference
and wastes UTRAN resources.

Additionally, Ericsson mentions that if this parameter is changed without
complete knowledge of all system-internal interactions, the effects may be
unexpected and fatal. They also note that most radio networks are
non-homogeneous, so a change of a parameter value does not necessarily
have the same effect on the UE or RAN in all parts of the network.

It is important to remember that the measurements used for handover event
evaluation are made on the downlink CPICH. This means that using different
settings for primaryCpichPower (the power assigned to the CPICH) on
neighboring cells will create a more complicated interference situation.

8.5.2 The Analysis framework

The experiment setup was done according to the Design Technique called
“Two-factor full factorial design without replications” presented in [Jain].

Beneficiario COLFUTURO 2003



This means that we use two variables (or factors) which are carefully
controlled and varied to study their impact on the performance. In our
case, these two variables are the traffic densities (corresponding to the 5
columns of Table 4) and the parameter TimetoTrigger1a with its possible
levels. The following table shows the levels (values) chosen for this
parameter.

Minimum value                                            0 msec
Value from Vodafone Netherlands (same as default value)  200 msec
Value from Vodafone Global                               320 msec
Maximum value                                            5000 msec

Table 61: Chosen levels for the parameter Time to Trigger 1A
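The full factorial design enumerates every combination of the two factors. A minimal sketch, using the four timer levels from Table 61 and the five traffic mixes named later in the text:

```python
# Enumerate the two-factor full factorial design: every (traffic mix,
# TimetoTrigger1a level) pair is one experiment, run once (no replication).
from itertools import product

tt_trigger_ms = [0, 200, 320, 5000]   # Table 61
traffic_mixes = ["Mix1_10Erl", "Mix2_20Erl", "Mix3_40Erl",
                 "Mix4_80Erl", "Mix5_160Erl"]

design = list(product(traffic_mixes, tt_trigger_ms))   # 20 experiments
```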

Once the input variables have been chosen, we have to define the output
variables, i.e. the “measured” data as referred to in [Jain], although in
this case the data is not measured as such but simulated. Knowing the
expected behavior from Ericsson, we can monitor the performance in terms
of:

- Uplink Interference Load [%]

- Soft Handover Attempts [num]

Each of the indicators is provided by the simulator, so an independent
analysis was performed for each one, i.e. a different matrix and ANOVA
(Analysis of Variance) table was constructed for each of these two
indicators. The statistical ANOVA test calculations are greatly simplified
using the Microsoft Excel Data Analysis Pack, which implements the tests
“ANOVA: Two factors without replication” and “ANOVA: Two factors with
replication”. The difference between the two analyses is the number of
repetitions of each experiment; in our case, due to time restrictions,
only one experiment was performed per combination of input levels.

To make the analysis, the data has to be organized in a matrix where each
column represents one of the parameter Timetotrigger1a levels (named
factor A in this description) and each row represents one of the traffic
density levels (identified as Mix1_10Erl, Mix2_20Erl, Mix3_40Erl,
Mix4_80Erl and Mix5_160Erl respectively, named factor B in this
description). Each entry in the matrix, Yij, represents the response of
the experiment where factor A is at level j and factor B at level i. For
instance, Y12 would be equivalent in our setup to Y(Mix1, Netherlands
level).

188
The “grand mean” µ is obtained by averaging all observations. Averages per
row and column are also required. Once these averages are calculated, the
“column effects” αj (effect of factor A at level j) are obtained by
subtracting the “grand mean” from each column mean, and the row effects Bi
(effect of factor B at level i) are calculated in the same way, by
subtracting the “grand mean” from each row mean. This gives us a first
indication of how the performance of each parameter/load alternative
differs from the average performance represented by µ.
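These first steps can be sketched in a few lines. The 2x3 matrix below is a small hypothetical example, not the thesis data:

```python
# Grand mean and row/column effects for a two-factor design.
# Y is a small illustrative matrix: rows = factor B levels (traffic),
# columns = factor A levels (time to trigger).
Y = [[1.0, 2.0, 3.0],
     [3.0, 5.0, 4.0]]

rows, cols = len(Y), len(Y[0])
mu = sum(sum(r) for r in Y) / (rows * cols)      # grand mean

col_means = [sum(Y[i][j] for i in range(rows)) / rows for j in range(cols)]
row_means = [sum(Y[i]) / cols for i in range(rows)]

alpha = [m - mu for m in col_means]              # column effects (factor A)
beta = [m - mu for m in row_means]               # row effects (factor B)
```

By construction the effects in each direction sum to zero, which is what makes them deviations from the grand mean.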

The next step is to build the matrix of the estimated response, defined
as:

Ŷij = µ + αj + Bi (8-2)

Once this is defined, the error matrix can be found by subtracting,
position by position, the estimated response matrix from the “measured”
response matrix. Each entry of the error matrix is defined as follows:

Errorij = Yij (measured) – Ŷij (8-3)

As the values of µ, the αj’s and the Bi’s are computed such that the error
has zero mean, this matrix has the property that the sum of all the
entries in any row or column must be zero.

The next step is to calculate the sum of squared errors (SSE), which is
defined as

SSE = Σ eij² (8-4)

where this sum runs over all entries in the error matrix.
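Equations (8-2) to (8-4) can be sketched end to end on the same small hypothetical matrix used above (illustrative values, not the thesis data):

```python
# Estimated response (8-2), error matrix (8-3) and SSE (8-4).
Y = [[1.0, 2.0, 3.0],
     [3.0, 5.0, 4.0]]
rows, cols = len(Y), len(Y[0])
mu = sum(sum(r) for r in Y) / (rows * cols)
alpha = [sum(Y[i][j] for i in range(rows)) / rows - mu for j in range(cols)]
beta = [sum(Y[i]) / cols - mu for i in range(rows)]

# Estimated response: Y_hat[i][j] = mu + alpha[j] + beta[i]
Y_hat = [[mu + alpha[j] + beta[i] for j in range(cols)] for i in range(rows)]

# Error matrix: measured minus estimated, position by position
errors = [[Y[i][j] - Y_hat[i][j] for j in range(cols)] for i in range(rows)]

# SSE: sum of the squared entries of the error matrix
sse = sum(e * e for row in errors for e in row)
```

The zero-sum property of the error matrix (each row and each column of `errors` adds up to zero) can be checked directly on this example.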

Next, the total variation SST (which is different from the total variance)
is calculated as:

SST = SSA + SSB + SSE (8-5)

Where:

• SSA = b · Σ (αj)², where b = number of rows


• SSB = a · Σ (Bi)², where a = number of columns

At this point, we can also calculate the percentage of variation explained
by each factor (which should be higher than the percentage of variation
explained by errors for a parameter to be considered to have a significant
impact on the performance). The percentage of variation explained by each
factor is defined as:

Percentage of variation explained by A = SSA/SST

Percentage of variation explained by B = SSB/SST

Percentage of variation explained by errors = SSE/SST
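As a worked example, the sums of squares from the Uplink Load ANOVA reported later in the chapter (Table 65) give the following split; the figures are taken from the document itself:

```python
# Variation split using the Uplink Load ANOVA sums of squares (Table 65):
# rows = traffic densities (factor B), columns = time to trigger (factor A).
ss_rows, ss_cols, ss_err = 1.866556606, 0.001438522, 0.001846767

ss_total = ss_rows + ss_cols + ss_err     # SST (8-5)

pct_rows = ss_rows / ss_total             # ~99.8% explained by the load
pct_cols = ss_cols / ss_total             # <0.1% explained by the timer
pct_err = ss_err / ss_total               # ~0.1% explained by errors
```

The three percentages add up to one by construction, since SST is defined as the sum of the three components.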

To statistically test the SIGNIFICANCE of a factor, we must divide each
sum of squares by its corresponding degrees of freedom (Df). In this case,
the corresponding Dfs are:

• SSA: Df = (a-1)

• SSB: Df = (b-1)

• SSE: Df = (a-1)*(b-1)

The degrees of freedom of factors A and B are (a-1) and (b-1) because the
effects in each column, and likewise in each row, must add up to 0; for
the errors, the degrees of freedom are the product of DfA and DfB.

Next we proceed to calculate the mean squares of all factors as follows:


MSA = SSA / (a-1) (8-6)

MSB = SSB / (b-1) (8-7)

MSE = SSE/ ((a-1)*(b-1)) (8-8)

At this point we can also calculate the standard deviation of each of the
factors:

Se = standard deviation of errors = √(MSE) (8-9)

Standard deviation of µ = Se/√(ab) (8-10)

Standard deviation of αj = Se · √((a-1)/ab) (8-11)

Standard deviation of Bi = Se · √((b-1)/ab) (8-12)

Variance of each factor = (standard deviation of the factor)² (8-13)

After the mean and variance of each parameter are known, a confidence
interval (defined as a function of the mean and the standard deviation)
can be calculated using statistical methods.

The last part is to calculate the F-ratios to test the statistical
significance of the factors (a systematic confirmation of the previous
results based on a statistical test). The F-ratios are defined as follows:

Fa = MSA/MSE (8-14)

Fb = MSB/MSE (8-15)

Then, factor A is considered significant at level alpha (significance
level; alpha = 0.05 is used for a 95% confidence level) if the computed
ratio is greater than FcritA = F[1-alpha; a-1, (a-1)(b-1)], where F is
obtained from the table of quantiles of F variates. Accordingly, factor B
is considered significant at level alpha if the computed ratio is greater
than FcritB = F[1-alpha; b-1, (a-1)(b-1)]. All these values are
conveniently arranged by Excel, so now that the procedure is known, we
present the results and draw the conclusions using the criteria described.
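The F-ratio computation of equations (8-14) and (8-15) can be reproduced from the Uplink Load sums of squares reported in Table 65; the critical values below are the ones given in the document (here a = 4 timer levels and b = 5 traffic levels):

```python
# F-ratios for the Uplink Load ANOVA (figures from Table 65).
a, b = 4, 5                               # columns (factor A), rows (factor B)
ss_rows, ss_cols, ss_err = 1.866556606, 0.001438522, 0.001846767

ms_rows = ss_rows / (b - 1)               # MSB, df = 4
ms_cols = ss_cols / (a - 1)               # MSA, df = 3
mse = ss_err / ((a - 1) * (b - 1))        # MSE, df = 12

f_rows = ms_rows / mse                    # vs. Fcrit = 3.25916 -> significant
f_cols = ms_cols / mse                    # vs. Fcrit = 3.4903  -> not significant
```

The recomputed ratios match the Excel output in Table 65 (3032.15 for the rows and 3.116 for the columns), confirming that only the traffic load is statistically significant under the additive model.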

8.5.3 Simulation Results

First of all, we show the original matrices with the “measured” results
for the Uplink Interference Load and the Soft Handover Attempts. The next
two tables illustrate the original matrices.

             min-level   Netherlands level   global level   max-level
             (0 msec)    (200 msec)          (320 msec)     (5000 msec)

MIX1_10Erl   199         191                 191            174

MIX2_20Erl   2132        403                 424            372

MIX3_40Erl   1240        879                 873            1785

MIX4_80Erl   3019        1777                2704           1658

MIX5_160Erl  139329      8282                7202           2770

Table 62: Original table for ANOVA, Cell HO attempts


             min-level   Netherlands level   global level   max-level
             (0 msec)    (200 msec)          (320 msec)     (5000 msec)

MIX1_10Erl   10.27%      10.20%              10.18%         10.47%

MIX2_20Erl   20.30%      20.44%              20.52%         20.95%

MIX3_40Erl   42.32%      43.02%              42.61%         47.21%

MIX4_80Erl   72.99%      77.48%              73.75%         76.90%

MIX5_160Erl  88.61%      89.38%              90.02%         90.28%

Table 63: Original table for ANOVA, Uplink Load

Next, we show the Analysis of Variance after applying the Excel Data
Analysis Toolkit over the original tables and afterwards we draw the main
conclusions.

8.5.3.1 Analysis of Variance (ANOVA)

The results obtained for each measured response (i.e. Handover Attempts
and Uplink Interference Load) after applying the ANOVA two-factor-without-
replication analysis were as follows:

Source of Variation   SS            df   MS            F             F crit

Rows                  4695850801    4    1173962700    1.33424525    3.25916

Columns               2778331540    3    926110513.2   1.052553504   3.4903

Error                 10558442984   12   879870248.7

Total                 18032625325   19
Table 64: ANOVA of Handover attempts, additive model

Source of Variation   SS            df   MS            F             F crit

Rows                  1.866556606   4    0.466639151   3032.14678    3.25916

Columns               0.001438522   3    0.000479507   3.115761269   3.4903

Error                 0.001846767   12   0.000153897

Total                 1.869841895   19
Table 65: ANOVA of Uplink load, additive model

The ROWS factor corresponds to the different traffic densities (5 levels)
and the COLUMNS factor corresponds to the 4 levels of the parameter time
to trigger 1a (0, 200, 320 and 5000 msec). The columns of the ANOVA tables
are respectively:

• SS = sum of squares of each factor

• Df = degrees of freedom (number of independent terms used to obtain the
sum of squares)

• MS = mean square value = SSparameter / Dfparameter

• F = computed F factor = MSfactor / MSE, where MSE is the mean square
error (SSE/Dferror).

According to the ANOVA test of the Handover Attempts (Table 64), the
percentage of variation explained by each factor is as follows:

• Percentage explained by rows (load) = SSrows / SStotal = 26%

• Percentage explained by columns (time to trigger levels) =
SScolumns / SStotal = 15%

• Percentage explained by errors = SSerrors / SStotal = 59%

According to this first calculation, from the point of view of VARIATION
(which is not the same as variance), the time to trigger level is not
significant for the set of performed experiments. This was confirmed when
checking the obtained F (MScolumns/MSerror) for the time to trigger
parameter, which is not higher than Fcrit for either of the response
variables; therefore, with the assumed additive model, one cannot confirm
the statistical significance of the columns (time to trigger levels). This
is an indication that some transformation of the output variable must be
tried in order to reduce the variance of the data. In fact, looking at the
maximum output / minimum output ratio, especially in the Handover Attempts
output variable, one can see that it is rather high compared with the
order of magnitude of the data obtained: in this case [Jain] suggests
trying a logarithmic transformation over the output data (multiplicative
model). The multiplicative model with two factors is described by the
following equation:

Yij = Vi · Wj (8-16)

If we take the logarithm of both sides, we have an additive model:

Log(Yij) = Log(Vi) + Log(Wj) (8-17)

In the case of two-factor experiments, the additive model assumed so far
was:

Yij = µ + αj + Bi + eij (8-18)

If we assume a logarithmic transformation, this means that the model
becomes:

Log(Yij) = µ + αj + Bi + eij (8-19)

Therefore, the output in linear fashion would be:


Yij = 10^µ · 10^αj · 10^Bi · 10^eij (8-20)

where µ, αj, Bi and eij are obtained from the two-factor ANOVA performed
over the logarithm of each of the output values. The transformed tables
for both variables and the corresponding ANOVA tests are shown below.

             min-level     Netherlands level   global level   max-level
             (0 msec)      (200 msec)          (320 msec)     (5000 msec)

MIX1_10Erl   2.298853076   2.281033367         2.281033367    2.240549248

MIX2_20Erl   3.3287872     2.605305046         2.627365857    2.57054294

MIX3_40Erl   3.093421685   2.943988875         2.941014244    3.25163822

MIX4_80Erl   3.479863113   3.249687428         3.432006687    3.219584526

MIX5_160Erl  5.14404152    3.918135226         3.857453117    3.442479769

Table 66: Logarithm transformation of the measured variable Handover attempts
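As a consistency check, the entries of Table 66 are simply the base-10 logarithms of the corresponding entries of Table 62; for instance, the MIX1_10Erl row:

```python
# Log10 transformation of one row of the Handover Attempts matrix
# (Table 62, MIX1_10Erl); the result matches the first row of Table 66.
import math

row_mix1 = [199, 191, 191, 174]              # 0, 200, 320, 5000 msec
log_row = [math.log10(v) for v in row_mix1]
```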

             min-level   Netherlands level   global level   max-level
             (0 msec)    (200 msec)          (320 msec)     (5000 msec)

MIX1_10Erl   -0.99       -0.99               -0.99          -0.98

MIX2_20Erl   -0.69       -0.69               -0.69          -0.68

MIX3_40Erl   -0.37       -0.37               -0.37          -0.33

MIX4_80Erl   -0.14       -0.11               -0.13          -0.11

MIX5_160Erl  -0.05       -0.05               -0.05          -0.04

Table 67: Logarithm transformation of the measured variable Uplink load

Source of Variation   SS            df   MS            F             F crit

Rows                  2.041006383   3    0.680335461   19.62286238   4.757055

Columns               0.017800402   2    0.008900201   0.256707802   5.143249

Error                 0.208023309   6    0.034670552

Total                 2.266830094   11
Table 68: ANOVA of Handover attempts with the multiplicative model

Source of Variation   SS            df   MS            F             F crit

Rows                  0.745806368   3    0.248602123   1768.122551   4.757055

Columns               0.00070624    2    0.00035312    2.511480684   5.143249

Error                 0.000843614   6    0.000140602

Total                 0.747356222   11
Table 69: ANOVA of Uplink load with the multiplicative model

Again, the obtained F for the levels of the parameter time to trigger is
not higher than Fcrit; therefore, with the multiplicative model it is not
possible either to guarantee statistical significance. Chapter 15 of
[Jain] lists graphical tests to determine which kind of transformation
would be required; three of these tests were tried, but the criteria to
apply the corresponding transformations were not fulfilled by the
collected information. Therefore, due to the time and resource limitations
of this project, this verification with further transformations to reduce
the variance of the experiment remains open. It is also suggested to
perform more than one simulation per traffic density level and then apply
the two-factor ANOVA with replication, which was not possible in this
project due to the limitations of time and hardware.

Therefore, to conclude the analysis of the given parameter, another
approach also mentioned in [Jain] is used. This method is particularly
useful when the goal of the experiment is simply to find the best
combination of factor levels (the combination that produces the best
performance). The method is called the ranking method and consists of
organizing the experiments in order of increasing or decreasing response,
so that the experiment with the best response is first and the one with
the worst response is last. The factor columns are then inspected for
levels that consistently produce good or bad results.

For this analysis, the best experiment is defined as the one with the
lowest handover attempts measurement and the lowest uplink load.
Therefore, we present below a table where the rows have been sorted in
order of increasing number of handovers and increasing uplink load:

experiment   Load          time to trigger   HO attempts   UL Load

16           MIX1_10Erl    5000              174           10.47%

11           MIX1_10Erl    320               191           10.18%

6            MIX1_10Erl    200               191           10.20%

1            MIX1_10Erl    0                 199           10.27%

17           MIX2_20Erl    5000              372           20.95%

7            MIX2_20Erl    200               403           20.44%

12           MIX2_20Erl    320               424           20.52%

13           MIX3_40Erl    320               873           42.61%

8            MIX3_40Erl    200               879           43.02%

3            MIX3_40Erl    0                 1240          42.32%

19           MIX4_80Erl    5000              1658          76.90%

9            MIX4_80Erl    200               1777          77.48%

18           MIX3_40Erl    5000              1785          47.21%

2            MIX2_20Erl    0                 2132          20.30%

14           MIX4_80Erl    320               2704          73.75%

20           MIX5_160Erl   5000              2770          90.28%

4            MIX4_80Erl    0                 3019          72.99%

15           MIX5_160Erl   320               7202          90.02%

10           MIX5_160Erl   200               8282          89.38%

5            MIX5_160Erl   0                 139329        88.61%

Table 70: Ranking method applied over the simulation outcomes
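The ranking step itself is a plain sort of the experiments by the response variable. A sketch using four rows from Table 70 as illustration:

```python
# Ranking method: sort experiments by increasing handover attempts so the
# best response comes first. Tuples: (experiment, load, timer [msec], HO).
experiments = [
    (1, "MIX1_10Erl", 0, 199),
    (16, "MIX1_10Erl", 5000, 174),
    (20, "MIX5_160Erl", 5000, 2770),
    (5, "MIX5_160Erl", 0, 139329),
]

ranked = sorted(experiments, key=lambda e: e[3])   # lowest HO attempts first
```

On this subset, experiment 16 (timer = 5000 msec) ranks best and experiment 5 (timer = 0 msec) ranks worst, matching the ordering of Table 70.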

The first interesting thing we can notice in Table 70 is that in the Load
column nearly all the experiments are ordered from the lowest load (10
Erlangs) to the highest load (160 Erlangs), including all the time to
trigger levels for each traffic load level (with the exception of
experiments 2, 18 and 4, which can be attributed to random errors in the
simulator). The effect of the increased traffic load on the system
performance, measured in terms of Handover Attempts and Uplink Load, is
therefore visible: the higher the load, the worse the performance.

Looking at the second column, we can observe the pattern 5000, 320, 200
and 0 (with 320 and 200 alternating positions in some cases) in most of
the table. Again, variations of this pattern can be attributed to
experimental errors in the simulator.

Then, we can appreciate that for the same level of traffic load, the worst
results in terms of Handover Attempts are with the value 0 for the Time to
