TD (08) 410
EURO-COST
M. García-Lozano, S. Ruiz-Boqué
Av. del Canal Olímpic s/n
EPSC, C4, 305
Castelldefels (Barcelona)
SPAIN
Phone: +34 93 413 72 13
Fax: +34 93 413 70 07
Email: mariogarcia@tsc.upc.edu
Study on the Automated Tuning of HSDPA Code Allocation
M. García-Lozano, S. Ruiz-Boqué
Abstract
UMTS Rel5 [1] and Rel6 [2], among other advances, introduce higher data rates in the DL and UL through HSDPA and its counterpart HSUPA. Within this context, the objective of the current paper is to analyze the potential improvements that the incorporation of an Automatic Tuning System (ATS) could bring to HSDPA. In this sense, it is assessed to what extent it is worthwhile to dynamically manage its three most important resources: devoted power, codes and the percentage of users assigned to Rel99 and HSDPA.
From the study, one of the first conclusions reached is that the benefits of HSDPA are so high that, in general, there is no clear benefit in introducing an ATS to manage power or the percentage of UEs assigned to HSDPA; both can be handled by straightforward rules of thumb. However, code allocation deserves further study. Indeed, a full ATS to dynamically allocate HS-PDSCHs in HSDPA systems is proposed. This is done according to the channel quality indicators reported by the UEs, which are processed and turned into appropriate Key Performance Indicators (KPIs). In this way, a mid-term reservation mechanism is designed to guarantee that HSDPA performs at its most efficient level while ensuring that no codes are wasted, unnecessarily increasing the Rel99 and HSDPA blocking probability. By means of dynamic simulations, the proposal is tested and validated.
1 Introduction
HSDPA features provide a reduction in the cost per megabit through quite a smooth and simple upgrade from pure 3G systems. In fact, many operators are offering some kind of broadband service, which is the consequence, or maybe the cause, of the fact that the demand for wireless data services is growing faster than ever before. Indeed, HSDPA is a first step towards a further boost of data services usage. New improvements to this technology have been defined in Rel7 HSPA+ (also called Evolved HSDPA) [3].
Rel5 HSDPA has been designed with different performance-enhancing features to support theoretical data rates of up to 14 Mbps (28 Mbps in the HSPA+ DL). New and fast mechanisms are introduced into the MAC layer to adapt the data rate to the propagation channel conditions, mainly adaptive modulation and coding (QPSK / QAM-16, also QAM-64 in HSPA+), fast hybrid automatic repeat request (H-ARQ) and fast scheduling based on a shorter transmission time interval (TTI) of 2 ms. In addition to this, the
H-ARQ mechanism and the scheduler themselves are located in a new MAC sublayer, denoted as MAC-hs.
The MAC-hs is located in the Node-B which leads to an almost instantaneous execution of H-ARQ and
scheduling decisions.
HSDPA also introduces some changes in the UTRAN physical layer. Whereas Rel99 originally defined three different techniques to enable DL packet data, in practice the DCH carried over the Dedicated Physical Channel (DPCH) is the primary means of supporting any significant data transmission. The Forward Access Channel (FACH), transmitted on the Secondary Common Control Physical Channel (SCCPCH), is an alternative, though much more inefficient, way: it must generally be received by all UEs in a cell's coverage area, which is why high spreading factors (SF128 or SF256) are usually employed [4]; besides, neither macrodiversity nor fast power control is supported. Finally, the third mechanism is the Downlink Shared Channel (DSCH), which was not widely adopted or implemented for FDD and was eventually removed from the specifications [5].
Figure 1: Physical channels involved in an HSDPA connection with the UE in a SHO area (HSDPA serving node-B and a second node-B, both connected through Iub)
With DPCH transmission, each user is assigned a dedicated OVSF code with an SF that depends on the required data rate. Precisely, one of the novelties that allows HSDPA to achieve high data rates is the allocation of multiple codes to a single user. Indeed, to support HSDPA, three new physical channels have been defined [6]. First, the High Speed Physical Downlink Shared Channel (HS-PDSCH) is an SF16 DL channel carrying the data payload and supporting both time and code multiplexing: several UEs can be assigned to different HS-PDSCHs in the same TTI. Second, the High Speed Dedicated Physical Control Channel (HS-DPCCH) is an UL channel on which each operating HSDPA UE reports the acknowledgements of the packets received on the HS-PDSCH and also the Channel Quality Indicators (CQI). These CQIs are used by the Node-B scheduler to decide the next UE to be served. And third, the High Speed Shared Control Channel (HS-SCCH) is a fixed-rate (SF128) DL channel used to communicate to the UEs the scheduling and control information relating to each HS-PDSCH. It is worth noting that an HSDPA UE must always have an associated DCH to carry the UL user payload and to transfer the Layer 3 signalling. Whereas the HSDPA-specific physical channels do not support SHO, the associated DCH uses this mechanism normally. All these channels are graphically summarized in Figure 1, in which the UE is in a SHO area.
Apart from the improvements included in the standards, the RRM algorithms that are implemented in the
vendor equipment are a key factor to the success of HSDPA. Since the design of these algorithms is not defined
by the standard, several investigations are being carried out to find the best possible implementations. In
this context, the work in [7] shows an analysis and proposes practical considerations for realistic deployments,
through lab and field testing. The authors group the main strategies into four categories:
1. HSDPA Power Allocation: Static or dynamic strategies can be implemented each one with pros
and cons. This aspect is further explained in Section 2.1. A review of existing works is provided and complementary simulations are also presented.
2. Node-B Scheduler: Thanks to the new 2 ms TTI, opportunistic schedulers are now a fairly interesting
option to exploit the time-variant nature of the radio channel to increase the cell throughput. Further
details on this topic are given in Section 3.2.3. Scheduling possibilities are revised and some conclusions
are drawn in the context of the proposed Automatic Tuning System.
3. Link Adaptation: This concerns the aggressiveness of the Transport Format (TF) selection. In this
sense a tradeoff exists between an underuse of cell capacity and a degraded performance because of
excessive retransmissions. Conclusions from [7] recommend dynamic NACK rate target control.
4. HS-DSCH Serving cell change: Since macrodiversity is not considered in HSDPA, depending on
the implementation, the transient period after a cell reselection can vary from a few milliseconds to
several seconds, with the consequent UE degradation. This aspect is out of the scope of this work.
A fifth strategy is proposed and studied throughout this work, and it is indeed the focus of the ATS proposal: HSDPA Code Allocation, studied in depth from Section 2.2 onwards.
Figure 2: Cell throughput evolution for different number of UEs served by HSDPA
1. One-to-one overlay: HSDPA is provided through a different, dedicated carrier. In this case, traffic balancing simply consists in assigning all HSDPA-capable users directly to the HS carrier while the rest remain on the Rel99 one. By means of an inter-frequency handover, UEs are directed to the HSDPA carrier when they activate the corresponding HS services.
Considering a different UE allocation would result in a reduction of the cell throughput. This idea
was studied by means of simulation (see Section 3.1 for further details on simulation conditions) and
Figure 2 shows the main result. 50% of the total UEs are considered to be HSDPA capable. It can
be observed how the central cell throughput increases as soon as UEs are transferred into HSDPA.
The advantages of the new technology are so clear that DCH usage would only be justified when the service imposes hard constraints on delay and jitter and the HSDPA load is such that the required QoS cannot be guaranteed by the scheduler.
The one-to-one overlay strategy is simple to manage but comes at the expense of an inefficient use of
the spectrum. The possible limited number of carriers per operator as well as the costs and issues
associated with upgrading to a multicarrier network are important drawbacks as well.
A particular case of this scenario would be deploying the second carrier with HSDPA only in hotspots,
where smaller, localized high-demand areas are served by micro or picocells. In contrast with a macrocell
environment, higher peak data rates can be achieved. Nevertheless, in indoor environments, HSDPA
could only be enabled if the UE previously had coverage from the macrocell layer. Otherwise it would
be unable to enter the network and execute the corresponding handover. This is a key drawback if the
existing macrocell network does not have deep coverage in terms of in-building penetration.
2. Single carrier shared between Rel99 and HSDPA: In this second approach, a single carrier carries all types of traffic. Spectrum is now used more efficiently, but several issues not defined by
3GPP must be tackled carefully. In particular, the allocation of the two basic resources to be shared
between HS and Rel99 users: power and codes. Both topics are developed in subsequent sections.
1. Some providers design their equipment so that HSDPA power is fixed as a percentage of the total
available DL power (see Figure 3(a); Pmax represents the maximum allowable transmission power).
Figure 3: HSDPA power allocation strategies (a), (b) and (c): sharing of the maximum power Pmax [W] between DPCHs and the HS-PDSCHs/HS-SCCHs
Figure 4: Cell throughput for different % of the maximum power allocated to HSDPA
2. Others allow a dynamic allocation based on the usage of the non-HSDPA users. That is to say, HSDPA can only use the power left over by Rel99 (Figure 3(b)). In certain cases, a margin below the maximum node-B power can be set aside to avoid excessive interference.
3. Finally, some authors propose fixing a minimum amount of planned power devoted to HSDPA and, if
available, dynamically allowing more power up to a certain maximum threshold [8] (Figure 3(c)).
Approach 1 is not straightforward, since the amount of power devoted to Rel99 or HSDPA will tend to benefit one or the other type of users. Figure 4 shows the cell throughput obtained when different percentages of the maximum power are allocated to HSDPA. It can be seen that the higher the HSDPA power, the better the total cell throughput. However, this comes at the cost of degrading DCH connections. In this particular case, when 40% or more of the total power is reserved for HSDPA, the degradation probability is clearly non-null: the node-B starts to lack the power to correctly serve Rel99 UEs.
In general, operators currently aim at guaranteeing the power required by DCHs. So, with approach 1, an estimation of the power to be consumed by Rel99 must first be made, for example by analyzing reports from the nodes-B. This analysis is done next, continuing with the same example. In particular, Figure 5 shows the pdf and cdf of the power that could be devoted to HSDPA once all Rel99 UEs are served, that is to say, using approach 2. For this particular scenario, it can be calculated that the mean power used by HSDPA is close to 40 dBm. If this value were fixed and guaranteed, there would be a resource shortage in the DPCH power control 50% of the time. A more acceptable value for the probability of degradation might be 3%, which leads to reserving 37.5 dBm for HSDPA. This corresponds to 28.18% of the maximum available power (43 dBm). On the other hand, under these circumstances, in 97% of cases Rel99 UEs use less power, but the extra amount will not be used by any channel. For example, according again to Figure 5, with a probability of 50% there would be a further margin of more than 3 dB of unused power (i.e., a 50% probability of having more than 37.5 + 3 = 40.5 dBm left unused by Rel99). Therefore, by fixing a certain amount of power, resource shortages can be controlled and kept to a minimum probability, but at the cost of wasting part of the available power.
Figure 5: pdf and cdf of power devoted to HSDPA with a dynamic allocation policy
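To make this dimensioning step concrete, the following sketch estimates the fixed HSDPA power reservation from a set of samples of the power left unused by Rel99 (e.g. gathered from node-B reports). The function name and the use of NumPy are illustrative assumptions, not part of the tool chain used in this work.

```python
import numpy as np

def hsdpa_power_reservation(left_power_dbm, p_degradation=0.03):
    """Fixed HSDPA power reservation [dBm] such that the probability of
    Rel99 leaving less power than the reserved value (i.e. a DCH power
    shortage) equals p_degradation.

    left_power_dbm: samples of the power left by Rel99 and usable by
    HSDPA, one value per observation period.
    """
    # Degradation occurs when the power left by Rel99 falls below the
    # reservation, so the reservation is the p_degradation-quantile of
    # the observed distribution (cf. the 3% -> 37.5 dBm example above).
    return float(np.quantile(np.asarray(left_power_dbm), p_degradation))

# Illustrative usage with synthetic samples centred around 40 dBm:
samples = np.random.normal(loc=40.0, scale=1.0, size=10_000)
print(f"Reserve {hsdpa_power_reservation(samples):.1f} dBm for HSDPA")
```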
The third shared resource to be considered for analysis is the percentage of the OVSF code tree to be
assigned to each technology. This is another aspect to be carefully considered when deploying HSDPA over
one existing Rel99 carrier. The current subsection establishes the problem behind this topic, which is then studied in depth in the rest of the paper.
The number of codes assigned to each technology must take into account different QoS requirements such as, for example, cell throughput, throughput per user or blocking constraints. Since each HS-PDSCH uses an SF16 code, up to 15 codes could be allocated to HSDPA. However, this configuration in a single
carrier would leave Rel99 users with almost no codes, or even none at all. Figure 6 shows this situation graphically: it represents the utilization of the OVSF code tree when 15 HS-PDSCHs are used. In this example, only one HS-SCCH is used and therefore only one user could be scheduled in each TTI, so the code tree occupation would be even worse if 4 HS-SCCHs (the maximum possible number) had been reserved. Moreover, for each active HSDPA user there must be an associated Rel99 DCH (with a minimum SF of 256), so full code tree occupation is evident.
Figure 6: Utilization of the OVSF code tree with 15 HS-PDSCH codes (SF16), one HS-SCCH (SF128), the SCCPCH and the remaining common channels (CPICH, PICH, AICH, PCCPCH)
Of course, 15 HS-PDSCH codes plus 4 HS-SCCHs only leave
2 SF256 codes free, so only 2 HSDPA UEs could be active and it would make no sense allowing 4 to be
scheduled in one TTI. Under these circumstances, no codes would be available for Rel99 UEs. With only
one carrier in the cell, this configuration could only coexist with Rel99 UEs if a secondary scrambling code
were used. This would be at the expense of extra interference because of the lack of orthogonality between
channels.
So, given a certain amount of traffic to be served by Rel99 channels and another volume of traffic directed
to HSDPA, the first question to answer is how the codes should be assigned to meet QoS targets. Besides, two more questions can be posed: first, whether this assignment depends on changes in traffic patterns and, second, whether it should be considered for inclusion in the ATS of an evolved 3G network. These questions can be answered by analyzing the behavior of an operative network and deriving statistics to find trends. This is emulated by means of static simulations whose results are covered in the next section. Once statistics and trends are obtained, it is shown that performance gains appear if the number of codes for HSDPA is not fixed to a particular value but changed dynamically according to certain KPIs. Given this, the complete ATS functioning is explained and studied. In particular, the same three-block architecture described in TD(07)344 is used. These blocks are briefly summarized next:
Learning & Memory: Database accumulating statistical information about network performance. It is also responsible for finding network behaviour trends in this data.
Monitoring: Responsible for measuring a set of parameters, turning them into appropriate KPIs and triggering an alarm when certain quality thresholds are not met.
Control Algorithm: It receives the alarm from the Monitoring block and, with the information provided by Learning & Memory, decides on the actions to take, which may involve changing RRM parameters.
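As a rough illustration of how these three blocks could interact, the sketch below implements one iteration of such a tuning loop. Class and function names are hypothetical; the actual architecture is the one described in TD(07)344.

```python
from collections import deque
from statistics import mean


class LearningAndMemory:
    """Database that accumulates KPI samples and exposes simple trends."""

    def __init__(self, depth=1000):
        self.samples = deque(maxlen=depth)

    def store(self, kpis):
        self.samples.append(kpis)

    def trend(self, name):
        values = [s[name] for s in self.samples if name in s]
        return mean(values) if values else None


class Monitoring:
    """Turns raw measurements into KPIs and raises an alarm when a
    quality threshold is not met."""

    def __init__(self, kpi_fn, thresholds):
        self.kpi_fn = kpi_fn          # measurements -> dict of KPIs
        self.thresholds = thresholds  # {kpi_name: minimum acceptable value}

    def evaluate(self, measurements):
        kpis = self.kpi_fn(measurements)
        alarm = any(kpis[k] < v for k, v in self.thresholds.items())
        return kpis, alarm


class ControlAlgorithm:
    """Reacts to alarms, using memory trends to pick new RRM parameters."""

    def __init__(self, policy):
        self.policy = policy          # (kpis, memory) -> new RRM setting

    def on_alarm(self, kpis, memory):
        return self.policy(kpis, memory)


def ats_step(measurements, monitoring, memory, control, apply_setting):
    """One iteration of the automatic tuning loop."""
    kpis, alarm = monitoring.evaluate(measurements)
    memory.store(kpis)
    if alarm:
        apply_setting(control.on_alarm(kpis, memory))
```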
Table 1: Other simulation parameters
The scenario to be evaluated is a 3GPP-based, urban and macrocellular one [10], with an area of 5 × 5 km² and 42 cells in a regular layout. UEs are uniformly scattered. Propagation is modeled according to COST231-Hata, considering a 2 GHz carrier and radiation patterns from commercial antennas [11]. The two-
dimensional shadowing model proposed in [12] is employed with a correlation distance of 18 m, a standard
deviation of 8 dB and a correlation coefficient between base stations of 0.5. Table 1 shows other important
parameters.
500 users have been spread around the scenario; 50% of them are considered to use a high-speed packet-switched service and are therefore redirected to HSDPA when they become active. The other 50% remain at Rel99 and use one classical circuit-switched DCH.
Paying attention to HSDPA-capable terminals, twelve different categories exist [13], offering maximum data rates ranging from 0.9 to 14 Mbps. These differences are due to the ability of the UE to support both QPSK and 16-QAM or solely QPSK, to the maximum transport block size (TBS) transmitted in a single TTI and to the minimum inter-TTI interval, which can be 1, 2 or 3 TTIs. The maximum number of HS-PDSCHs that the UE can simultaneously decode also affects the maximum achievable rate. Finally, the number of soft bits that can be buffered by a UE in the active H-ARQ processes does not directly affect the peak data rate but does affect the effective throughput. Simulations consider UEs of the highest capability, i.e. category 10: they support both QPSK and 16-QAM, can decode up to 15 simultaneous HS-PDSCH codes with a maximum TBS of 27952 bits in one TTI, have a minimum inter-TTI interval of 1 (i.e. consecutive TTIs can be used) and have an incremental redundancy buffer size of 172800 bits. Regarding the
traffic modeling, since the objective is to determine the maximum HSDPA capacity per cell, traffic buffers are
assumed full during the simulation time. The service is considered to be a delay-tolerant and best effort one,
so scheduling can be conducted without considering minimum requirements. Further details on scheduling
will be given in Section 3.2.3.
Rel5 specifications do not stipulate power control for the HS-SCCHs, and this decision is left to the infrastructure vendors. Not power controlling them would lead to unnecessary power reservation and consequently to a poorer throughput of the data channels. Simulations consider that these channels are power controlled. Although a dynamic HSDPA power allocation is chosen for simulation, even if a fixed amount of HSDPA power were presupposed, the quantity devoted to HS-PDSCHs would vary on a TTI basis according to the radio channel conditions of the UEs to be served.
Initially, users are considered to be uniformly spread around the network. The number of HS-SCCHs is
kept to the maximum possible value, i.e. the minimum value between 4 and the number of HS-PDSCHs.
Finally, the correspondence between the CQI values and the selected TFs was obtained from the AROMA
research project, an IST project of the 6th Framework Programme of the European Community [14].
It is worth remarking that, when considering Rel99-based systems, traffic is usually quantified in terms of
number of users and corresponding channel usage. Each user was assigned a dedicated channel (DCH) or bit
pipe. In HSDPA, however, because all user traffic is carried through a downlink shared channel, a different
approach to dimensioning is necessary. The important dimensioning output is now the average throughput.
For example, interesting evaluation measurements for the operator are the average user throughput and
average cell throughput. Indeed during the initial phases of HSDPA planning, the objective is to estimate
the mean or maximum physical layer data rate achievable at the cell edge [15], [9].
(a) Throughput evolution for different number of HS-PDSCH. (b) HSDPA power evolution: snapshot values and mean evolution.
Figure 7: Throughput and HSDPA power evolution for different HS-PDSCHs allocations
Given the previous paragraphs, Figure 7(a) represents the mean throughput for both HSDPA and Rel99 as
a function of the number of codes assigned to HS-PDSCHs. The accumulated final throughput of the cell is
also plotted.
Starting with the HSDPA throughput, a monotonic increase can be observed in the figure up to an allocation of 8 codes. The initial upward trend shows quite an exponential behavior but becomes far slower from 5 to 8 codes. From this point on, the behaviour is slightly more irregular and will be addressed in subsequent paragraphs through a deeper analysis.
The sharp initial increase denotes that most UEs report a CQI equal to or higher than that of the first TF using 5 codes. That means individual peak rates equal to or higher than 1.659 Mbps could be assigned to most UEs. However, due to the lack of codes, inferior TFs are used. Because of this, not all the power left by Rel99 UEs can be used. This fact is illustrated by Figure 7(b), which depicts the evolution of the HSDPA power. The particular values obtained in each snapshot and their mean are shown. As expected, there is a strong correlation between this graph and the HSDPA throughput evolution. From 5 to 11 allocated codes, the power is kept fairly constant both in mean and variance: most of the power left by Rel99 is being successfully used. The final power increase indicates that more power is available from Rel99, for reasons explained later.
The throughput increase is restrained after the 5-code assignment. This is because UEs with CQIs allowing TFs with 7, 8, etc. HS-PDSCHs are not frequent. Moreover, TFs do not exist for every possible number of codes: for example, none of them uses 6 HS-PDSCHs, and the same happens with 9, 11, 13 and 14. Therefore, these allocations only give the chance to multiplex more users per TTI but do not contribute to raising the individual peak rates.
The final throughput increase for 12, 13 and 14 codes is justified by the growth in the transmitted HSDPA
power. This extra power is explained by the rise in blocked Rel99 UEs, which means fewer UEs to be served by the node-B. In fact, for more than 6 HS-PDSCH codes, the Rel99 blocking probability starts to take non-zero values.
Figure 8: Blocking probability of Rel'99 and HSDPA UEs for different numbers of HS-PDSCH codes
In this set of simulations, the assumed admission control algorithm only takes into account the code tree occupation. No other criteria are introduced, so as to avoid side effects that could hinder the analysis. It is important to remark that HSDPA blocking is also possible, because HSDPA UEs also need an associated DCH. Assuming that the OVSF tree is perfectly managed and appropriately updated to optimize its occupation, the number of free codes for a specific SF, F(SF_i), can be easily found as the total number of codes (= SF_i) minus the occupation due to signalling from both Rel99 and HSDPA, Occ_S(SF_i), and the occupation of Rel99 and HSDPA UEs, respectively Occ_R99(SF_i) and Occ_HS(SF_i). This is shown in equation 1:

F(SF_i) = SF_i - Occ_S(SF_i) - Occ_R99(SF_i) - Occ_HS(SF_i)        (1)

Where:
⌈x⌉ denotes the ceiling function, which returns the smallest integer not less than x.
N_HS-SCCH and N_HS-PDSCH are the numbers of channels denoted in the subindex.
N_serv is the number of different services or, rather, the number of different SFs used in the cell.
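The following sketch evaluates equation 1 under the same assumption of a perfectly packed OVSF tree. The helper names, and the exact set of common channels charged to the signalling term, are assumptions made for illustration.

```python
import math

def free_codes(sf_i, n_hs_pdsch, n_hs_scch, rel99_codes, hs_dch_codes,
               common_codes=((256, 4), (128, 1))):
    """Sketch of equation 1: free codes F(SF_i) at spreading factor sf_i.

    rel99_codes / hs_dch_codes: lists of (sf, count) pairs for the DCHs of
    Rel99 UEs and for the SF256 DCHs associated with HSDPA UEs.
    common_codes: non-HS signalling, here assumed to be CPICH, PICH, AICH
    and PCCPCH at SF256 plus the SCCPCH at SF128 (an assumption).
    """
    def occupation(codes):
        # With a perfectly packed tree, a code of spreading factor sf
        # blocks sf_i/sf codes at level sf_i; fractions are rounded up.
        return math.ceil(sum(count * sf_i / sf for sf, count in codes))

    occ_s = occupation([(16, n_hs_pdsch), (128, n_hs_scch), *common_codes])
    occ_r99 = occupation(rel99_codes)
    occ_hs = occupation(hs_dch_codes)
    return sf_i - occ_s - occ_r99 - occ_hs

# Example from the text: 15 HS-PDSCHs plus 4 HS-SCCHs leave only
# 2 SF256 codes free before any DCH has been admitted.
print(free_codes(256, n_hs_pdsch=15, n_hs_scch=4,
                 rel99_codes=[], hs_dch_codes=[]))   # -> 2
```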
Figure 8 shows the blocking probability for both Rel99 and HSDPA UEs. According to this graph,
HSDPA blocking probability starts to be non-zero for 9 codes and above. Rel99 users experience a higher blocking because they are more demanding in terms of code tree occupation: they use a 64 kbps data service with an associated TF having a fixed SF32 (vs. SF256 for the DCHs associated with HSDPA).
Therefore, using more HS-PDSCHs favors the HSDPA throughput in general but also impacts negatively on its blocking indicators. For a higher number of allocated codes, the admitted HSDPA UEs can potentially use more channels and be served with a higher throughput, but at the expense of an increased blocking. A paradoxical behaviour might even appear in the most restrictive cases (12, 13 and 14 codes), and in fact it was present in a few simulated snapshots: because HSDPA is favored too much, no HSDPA UEs are able to access the cell. This happens when Rel99 UEs occupy all the codes available for DCHs, which can easily happen if
too many HS-PDSCHs are allocated. As a consequence, blocking is an important performance indicator to
be considered as well when choosing the number of codes to be reserved, or rather, when choosing the range
of codes the ATS has to consider.
From Figure 7(a), it can be observed that for 9, 10 and 11 codes, there is even a subtle decrease in the HSDPA
throughput (and thus in the total cell throughput). The reason is also found in the number of blocked UEs.
Figure 9: (a) Average AS size evolution. (b) % of Rel99 UEs reaching the Eb/N0 target.
Not all the cells in the system occupy the code tree at the same time; therefore, soft handover users may be rejected by one base station but can still access the system through another one in their AS. The point is that the remaining cell (or cells) can be far away and is now forced to transmit 100% of the power required by the terminal. This causes more interference and a globally worse situation, which leads to less available power for HSDPA and lower CQI values. Of course, this also leads to increased degradation among Rel99 users. In a homogeneous scenario these effects tend to be milder because Rel99 UEs only represent 50% of the total UEs and only around 25% of them are in soft handover. DL interference is also bounded because the maximum DL transmission power is limited per connection (33 dBm). Finally, the increased power is soon compensated by the effect of fully blocked users, which turns the downward throughput trend upwards again.
Figure 9(a) quantifies the variation of the mean AS size and Figure 9(b) quantifies the appearance of degraded Rel99 users, i.e. those demanding more power than the 33 dBm maximum. The AS size variation is given both for all UEs in the scenario and only for those admitted in the system. In the first case, UEs with sizes equal to zero (non-admitted ones) also contribute to the final value. The average AS size is monotonically reduced with the number of codes from 1.25 to 1.14; these values are scaled down by the blocking probability when all UEs are computed. On the other hand, the effect of HSDPA on the AS selection leads to almost 7% of degraded UEs in the worst case. It can be expected that these values become even more pronounced if more HSDPA UEs operate at the cell edge and, on the contrary, are reduced if they are close to the node-B.
Regarding the evolution of the Rel99 throughput, it is fixed by the number of admitted users and starts to decrease as soon as blocking appears. For more than 3 codes assigned to HS-PDSCHs, its contribution to the global cell throughput is far less important than that of HSDPA, although its reduction is not negligible.
From the previous analysis, if the QoS requirements demand maximizing the cell throughput while keeping blocking and degradation (eventually dropping) at minimum values and below a maximum of 5%, then the best code assignment would be 8, though with a very small throughput gain with respect to a 5-code allocation. Some code configurations (e.g. 10 and 11) are bad options for both Rel99 and HSDPA jointly and should be avoided.
The basic operation of the HSDPA packet scheduler can be defined as the selection of the user to be served
in every TTI. It decides the distribution of radio resources constrained by the satisfaction of individual QoS
attributes. Indeed, the TTI reduction from 10 ms in UMTS Rel99 to 2 ms in Rel5 (HSDPA) allows the packet scheduler to better exploit the varying channel conditions of different users.
A good design of a scheduling algorithm should take into account not only the maximization of the system throughput, but also fairness among users. That is, scheduling algorithms should balance the trade-off between maximizing throughput and fairness. Several scheduling policies have been proposed in the literature; however, a complete evaluation of them and improvement proposals are outside the established objectives. In the context of HSDPA systems, the three basic scheduling algorithms are Round Robin, Maximum Carrier-to-Interference and Proportional Fair. From these, a wide variety of options exists, adapting the basics behind each one to improve particular aspects [16, 17, 18].
Round Robin (RR) is considered the basic scheduling reference. It is a channel-independent algorithm in which HSDPA users are served with equal probability in a cyclic ordering. Consequently, two clear advantages arise: first, its implementation simplicity and, second, fairness among users in the cell. The algorithm is fair in the sense of equally distributing the transmission times, but this leads to different individual throughputs, to the detriment of those far from the node-B. These users require more power to achieve a certain Eb and measure a higher N0, so their average rates will be lower when compared with the nearest ones. This is the option used in the simulations so far.
Maximum CIR scheduler (Max-CIR) maximizes cell throughput by always serving those users
with a higher CIR, that is, those users reporting a higher CQI. As a consequence, unless the cell is
very small, resources are monopolized by a subset of users and those far from the node-B will hardly
be served. Max-CIR and the next approach are channel-aware schedulers, also known as opportunis-
tic algorithms because they exploit the time-variant nature of the radio channel to increase the cell
throughput.
Proportional Fair (PF) represents an intermediate point between the two approaches. This algorithm
provides an attractive trade-off between average cell throughput and user fairness. Users are served
according to their relative channel quality. In particular, the i-th user priority λ_i(t) is given by the quotient of its instantaneous data rate R_b,i(t) and its average throughput R̄_b,i(t): λ_i(t) = R_b,i(t)/R̄_b,i(t).
The classical method to average the user throughput is the quotient between the amount of successfully transmitted data D_i(t) during the user's lifetime t_i and the corresponding period of time: R̄_b,i(t) = D_i(t)/t_i. This value is usually exponentially smoothed along time and computed on a TTI basis. In particular, R̄_b,i is updated recursively according to equation 2, which gives the user throughput at TTI n:

R̄_b,i[n] = (1 - α) R̄_b,i[n-1] + α R_b,i[n]    if user i is served
R̄_b,i[n] = (1 - α) R̄_b,i[n-1]                 otherwise        (2)

Where α is a weighting (forgetting) factor or, equivalently, 1/α is the averaging period of the smoothing filter measured in TTIs. Depending on the value of α, the PF performance tends to RR (α = 1) or to Max-CIR (α = 0); for intermediate values the performance lies somewhere in between. In the context of HSDPA networks, a fairly complete study of classical PF with different parametrizations, along with comparisons with PF variants, can be found in [19]. A minimal sketch of this classical PF recursion is given after this list.
A growing tendency in the literature is to pose the problem as an optimization one, measuring performance not in terms of generic, network-centric indicators but rather by evaluating to what extent the network satisfies each service's requirements. In this sense, the idea of user utility is exploited; see for example [20] for a specific HSDPA study case and [21] for generic CDMA networks. This is not a novel concept for other types of networks and it was first proposed in [22]. It is considered that, associated with each user i, there is a utility function U_i representing its satisfaction. From this, the scheduler should select packets so that the sum of the utility functions over all users is maximized at any given TTI:
Maximize      Σ_{i=1..N} U_i(R̄_b,i)        (3)
subject to    Σ_{i=1..N} R̄_b,i < R_ch
              R̄_b,i ≥ 0

Where R_ch is the maximum channel rate and N is the number of UEs in the cell.
Figure 10: Cell throughput evolution for different number of HS-PDSCH and scheduling policy.
This summation constitutes the objective function which, in equation 3, only depends on the mean throughput; however, other constraints such as delay could also be included. Under certain circumstances [20], the problem can be solved through the Lagrange method; however, since the channel is time varying, and so is the optimal solution, a gradient search method is usually used [23]. Hence, the priority of each user is given by:

λ_i[n] = R_b,i[n] · ∂U_i(R̄_b,i[n]) / ∂R̄_b,i[n]        (4)
For example, in the particular case of elastic data traffic (as most Internet traffic is, which implies that the transmitting application can handle temporary rate fluctuations), it is accepted that the user's perceived QoS is a concave function of the mean throughput [24], well approximated by the logarithm function [25]. Intuitively, this means that the perceived QoS increases with the mean throughput, but only marginally when the user is already correctly served. On the other hand, once the throughput is reduced below a certain level, the satisfaction drops dramatically. Given this, and after solving the optimization problem in equation 3, the scheduling algorithm that maximizes the sum of utilities is precisely PF.
With PF, on average, equal time is assigned to each user, but with the particularity that users are scheduled when they have good channel conditions and thus their instantaneous data rate exceeds the average. On the other hand, one of the criticisms usually made of PF is the lack of minimum guaranteed QoS parameters. From the utility viewpoint, this can be seen in the fact that PF maximizes an objective function that only depends on the mean throughput.
Policies considering QoS differentiation constitute the fourth group of schedulers. Classical opportunistic strategies exploit multi-user diversity considering fairness as a constraint, which is mainly efficient for best effort services. However, the need for strict QoS support for other services such as streaming, gaming or VoIP is growing. This is indeed the advantage of this fourth group of schedulers, which are QoS-aware.
They are often modifications of the basic PF algorithm aiming at meeting traffic delay constraints, guaranteeing minimum rates and so on [26]. The study in [27] deals with VoIP over HSDPA and, even though PF provides bad results for the VoIP service (because of its unawareness of the delay), the schedulers that obtained the best results are some sort of modified PF. Also, the authors in [28] propose an enhanced PF algorithm that takes into account the specific delay requirements of different sensitive data services. Finally, some authors propose the joint use of several utility functions for different types of services [29]. A comparison of the utility functions and their partial derivatives for RR, Max-CIR, PF and a set of this fourth group of schedulers can be found in [16].
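As announced in the PF item above, the following is a minimal sketch of a classical PF scheduler built around the smoothing of equation 2. It is illustrative only: the rate values, the choice of α and the class structure are placeholders, not the simulator used in this work.

```python
import random

class ProportionalFairScheduler:
    """Minimal PF scheduler sketch based on the smoothing of equation 2."""

    def __init__(self, n_users, alpha=0.01, initial_avg=1.0):
        self.alpha = alpha                      # forgetting factor (eq. 2)
        self.avg = [initial_avg] * n_users      # smoothed throughputs

    def pick_user(self, inst_rates):
        """Serve the UE with the largest R_b,i / (average R_b,i) ratio."""
        priorities = [r / a for r, a in zip(inst_rates, self.avg)]
        return max(range(len(inst_rates)), key=priorities.__getitem__)

    def update(self, served, inst_rates):
        """Equation 2: exponential smoothing of the per-user throughput."""
        for i, r in enumerate(inst_rates):
            self.avg[i] *= (1.0 - self.alpha)
            if i == served:
                self.avg[i] += self.alpha * r

# Toy usage: instantaneous rates would come from the CQI-to-TF mapping.
sched = ProportionalFairScheduler(n_users=4)
for _ in range(10):                             # ten 2 ms TTIs
    rates = [random.uniform(100e3, 3e6) for _ in range(4)]
    ue = sched.pick_user(rates)
    sched.update(ue, rates)
```

As discussed above, with α = 1 the behaviour degenerates towards RR and with α approaching 0 towards Max-CIR.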
Given the previous paragraphs, it is evident that the variety of scheduling strategies is huge, although PF
(and its variations) arises as one of the most interesting options. Hence, this algorithm was also incorporated into the simulator and results from Monte Carlo tests were obtained for this scheduler as well. The objective was to validate the previously derived conclusions.
In particular, Figure 10 contains the throughput evolution for both PF and RR (again). Rel99 throughput
has been omitted because the curves are identical in both cases and do not provide new information; they can be easily derived from the represented curves.
Figure 11: pdf of the number of multiplexed UEs for different number of HS-PDSCH.
From Figure 10, it can be seen that the previous analysis
is extensible to the PF case. The new curves pass through the same stages as RR but with higher throughput values. Figure 10 also shows how the gain of allocating 8 codes instead of 5 is slightly higher: whereas in RR this gain is just 80 kbps, in the PF case it reaches 166 kbps, so for this particular distribution of users an allocation of 7 (gain = 140 kbps) or 8 codes can be justified. Nevertheless, the results in [7] reveal that maximum PF gains in HSDPA scenarios are obtained under low mobility conditions, which is the case of the current simulations (3 km/h). For stationary and vehicular conditions the gain is minimal and both curves would remain almost identical.
It is also noticeable that for a reservation under 5 codes, the throughput differences between both schedulers are negligible. Because of the lack of available codes, PF cannot take advantage of good channel conditions and UEs are served below their possibilities.
The scheduler takes decisions on when to serve a particular user, but it also has to govern the assignment of power. In particular, taking into account that code multiplexing is supported by HSDPA, it is worth mentioning the strategy used to allocate power levels when more than one UE is scheduled in one TTI.
When studying scheduling algorithms for HSDPA, most proposals consider a single user to be served in each TTI; code multiplexing is usually neglected. However, this strategy may not be optimal, particularly if there are delay constraints or if the traffic is too bursty, so that no single user may be able to fully use the available capacity. A recent contribution [30] does propose a multiuser scheduling scheme for CDMA packet data systems, sharing power among code-multiplexed users and taking advantage of this to increase the cell throughput.
In the presented simulations, one of the aims is to find maximum capacity values, and that is why buffers are considered full during the observation time. Even so, there is another reason why single-user scheduling is sub-optimal: the existence of a finite set of TFs. The scheduler selects the best one according to the reported CQI; consequently, a quantization process appears which leaves resources that could potentially be assigned to other terminals. Indeed, the adopted approach considers that, after the scheduling algorithm has prioritized the users, the first one is served according to its reported CQI and the needed power is allocated to achieve the highest possible throughput. Next, with the remaining power (if available), it is analyzed whether a second (or third and fourth) UE can be served. This implies that the first scheduled user consumes power greedily and the next ones are an attempt to maximize the use of the total available HSDPA power. That is why these secondary users are not marked as served if the RR policy is being used, and can be considered again in the next TTI. If PF is applied, the transmitted bits do contribute to the average throughput calculation.
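This multiplexing approach can be sketched as follows. The CQI-to-TF mapping, the UE objects (assumed to expose a reported CQI) and the step-down search for a TF that fits the leftover power are simplifying assumptions; code availability checks and the RR/PF served-flag bookkeeping mentioned above are omitted for brevity.

```python
def best_tf_within(cqi, power_budget, tf_for_cqi):
    """Highest TF allowed by the reported CQI whose power requirement fits
    the budget; lower CQI entries are tried otherwise (an assumption)."""
    for c in range(cqi, 0, -1):
        codes, bits, power = tf_for_cqi(c)
        if power <= power_budget:
            return codes, bits, power
    return None

def schedule_tti(ranked_ues, hsdpa_power, tf_for_cqi, n_hs_scch=4):
    """The first UE is served greedily according to its reported CQI;
    later UEs only try to reuse the power left over by TF quantization."""
    served, remaining = [], hsdpa_power
    for ue in ranked_ues:
        if len(served) == n_hs_scch:     # one HS-SCCH per multiplexed UE
            break
        tf = best_tf_within(ue.cqi, remaining, tf_for_cqi)
        if tf is None:
            continue
        codes, bits, power = tf
        served.append((ue, codes, bits))
        remaining -= power
    return served, remaining
```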
Because of this criterion, multiplexing more than two users hardly ever occurred. This can be seen in Figure 11, in which the probability of having 1, 2 or 3 multiplexed UEs is represented; the 4-UE case is omitted because it never occurred.
Figure 12: UE throughput pdf and cdf for different geographical distributions of UEs (close to the cell edge, uniformly distributed, close to the node-B (< 150 m)).
From 1 to 5 HS-PDSCHs the pdf is plotted in black. It can be observed that, when
allocating from 1 to 3 codes, the probability of serving more than one UE is practically null. The reported CQIs indicate that, for most users, a higher rate could be assigned; consequently, the best available TF is selected and all the HSDPA power is consumed by one UE. When 4 and 5 codes are used, the probability of multiplexing two UEs increases up to 35%, but it is not until the 6-code case that a second HS-SCCH is clearly justified. On the other hand, the probability of having 3 simultaneous UEs is so low that its contribution to the final mean throughput is negligible. From 6 to 11 codes, the pdf is plotted in grey and the previous behavior is maintained. This evolution is consistent with the power availability in each case and with the fact that the users are homogeneously distributed, which implies a wider range of possible reported CQIs. Having HSDPA UEs concentrated in an area of the cell far from the node-B implies a higher power consumption per UE and, therefore, reduced probabilities of having two multiplexed UEs. Finally, the cases from 12 to 14 codes are plotted in white: more power is available for HSDPA, so TFs of higher rates can be assigned and the number of multiplexed UEs is consequently reduced.
When considering PF, the probability of multiplexing more than one UE is reduced. Since the users are prioritized taking channel conditions into account, on average higher-level TFs are selected and more power is used per user. In particular, for 6 codes and more, the reduction is around 20%. Given these numbers, the probability is also negligible.
From the previous paragraphs, an indirect conclusion can be derived: if an ATS is designed to control the number of allocated HS-PDSCHs, then whenever the reservation falls below 4 codes, the number of HS-SCCHs could also be reduced from 2 (or 3) to 1. Although HS-SCCHs occupy a small fraction of the OVSF tree (SF128), removing two HS-SCCHs might make an extra SF32 code available (Figure 6).
In operative networks UEs are not always homogeneously distributed; indeed, certain cells can have most of their users concentrated in particular areas. Thus, the CQI reports can be different and, consequently, so can the HSDPA TFs and assigned data rates. Figure 12 represents the pdf and cdf of the individual UE throughput (if several UEs are multiplexed in one TTI, only the first one is considered so as not to bias the measurement) for three scenarios: users uniformly distributed, most UEs far from the node-B and, finally, UEs close to the node-B. The graphs consider an 8 HS-PDSCH allocation.
As expected, far away UEs report lower CQI values, which leads to poorer throughputs. The closer the UEs are to the node-B, the higher the throughput per user. For instance, when UEs are concentrated in an area of 150 m radius around the node-B, the individual peak throughput is higher than 3.5 Mbps with a 50% probability.
In order to quantify the impact of these different channel conditions on the global cell throughput, Figure 13 represents its evolution for different allocations of HS-PDSCH. Both new scenarios, with users close to the cell edge (Figure 13(a)) and close to the node-B (Figure 13(b)), are represented.
Figure 13: Cell throughput evolution (RR and PF) for different number of HS-PDSCH and UE distributions. (a) UEs close to cell edge. (b) UEs close to node-B (< 150 m).
From the graphs, it can be observed that when
UEs are mostly far from the node-B, there is no gain in reserving more than 5 codes for HSDPA. Reported CQIs are low and those extra codes would hardly be used. In fact, assigning more than 7 codes would even imply a reduction in the global cell throughput, of up to 320 kbps, because of the effects previously explained.
Having UEs close to the node-B implies far higher levels of throughput, which leads to the rule: the higher the number of HS-PDSCH codes, the better. The maximum average throughput of 2240 kbps obtained by PF scheduling when UEs are close to the cell edge can be compared with the 6702 kbps obtained when they are close to the center, nearly three times as much. Under these circumstances, it is the blocking probability that upper-bounds the number of codes to reserve. Thus, for this second spatial distribution, the optimum value would be 9 codes, which means a reduction of 1.5 Mbps (RR) and 1.9 Mbps (PF) with respect to the maximum achievable throughput. It is worth mentioning that the cell blocking probability is slightly lower when UEs are close to the node-B because there are fewer users performing SHO. The system blocking probability, however, remains similar. Indeed, those UEs having only one base station in their AS are fully rejected from the system if they cannot access it.
Hence, the histogram of CQI reports can be used as an indicator of the channel conditions of the HSDPA connections in the cell. In this way, whenever it is detected that the RF channels improve, more codes could be reserved for HSDPA. If these conditions worsen, part of the code tree could be released, since it not only gives no throughput gain but could even imply losses. This idea is developed in the next section. By means of dynamic simulations, proper KPIs are derived from the histogram so that the Control block reacts correctly and the triggering of false alarms is minimized.
(a) Uniformly distributed (b) Close to cell edge (c) Close to node-B (<150 m)
Figure 14: UEs geographical distribution for different moments of the observation time
Figure 15: Temporal evolution of the normalized histogram of reported CQI values (CQI value vs. time [min])
in the sense of reported CQIs. Figure 14 shows some snapshots of the geographical distribution of UEs along time.
On the other hand, Figure 15 represents the temporal evolution of the normalized histogram of reported CQI values, averaged every 500 ms. The three parts of the simulation time can be clearly differentiated. Initially, the histograms show a high standard deviation, since the UEs are spread very homogeneously around the network and therefore around each particular cell. Besides, their movement does not contribute to generating significant accumulations in specific areas. As users concentrate at the cell edges, the standard deviation decreases and so do the reported CQI values. These values increase again when users move towards the node-B.
From this figure, it can be observed that the standard deviation takes both high and low values along time. Consequently, the evolution of the mean is only partially representative of the whole histogram behavior. That is why, to have further information about the RF characteristics of the majority of users, the first quartile Q1 (= 25th percentile) has been used. This measurement corresponds to the first performance indicator, KPI-A. In this way, it is known that 75% of UEs report a CQI higher than KPI-A.
When the dispersion is excessive, particularly when users are not concentrated in specific areas, the measurement of just one of the histogram's first moments is insufficient. That is why the standard deviation σ_CQI is also calculated and considered when taking decisions. Its value corresponds to the second indicator, KPI-B.
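A direct way in which KPI-A and KPI-B could be computed from the CQI reports gathered during one averaging period (500 ms in the simulations) is sketched below; the function name is illustrative.

```python
import numpy as np

def compute_kpis(reported_cqis):
    """KPI-A: first quartile (Q1) of the reported CQIs, so that 75% of the
    UEs report a CQI higher than KPI-A. KPI-B: standard deviation of the
    reported CQIs, capturing the dispersion of radio conditions."""
    cqis = np.asarray(reported_cqis, dtype=float)
    kpi_a = np.percentile(cqis, 25)   # first quartile
    kpi_b = cqis.std()                # sigma_CQI
    return kpi_a, kpi_b
```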
From the reported CQI values and the derived KPIs, a first step towards automation could be detecting in which cases it would make no sense to allocate more than 5 codes, because there would be no throughput gain, and, second, in which situations the majority of perceived channel conditions are so good that allocating 8 (or even 9) codes is justified. Nevertheless, between these two extreme situations, several intermediate cases can be defined. With this objective in mind, an analysis of 3 of the central cells has been done. For each one, the possible optimum commutation points have been detected; this has been done for several possibilities of HS-PDSCH allocation. Figure 16 graphically shows the result of this analysis for the most central cell in the scenario.
Figure 16: Averaged throughput evolution for fixed code assignments and optimum commutation points.
Table 2: Decision LUT relating Q1 to the number of HS-PDSCH codes to allocate

Q1        # HS-PDSCH (σ_CQI < 3)    # HS-PDSCH (σ_CQI ≥ 3)
7                  2                          3
9                  3                          4
11                 4                          5
17                 5                          7
21                 7                          8
> 21               8                          8
As was found with the static simulations, it can be observed that in certain cases there is no special benefit in increasing the code reservation for HSDPA, whereas in other situations a clear throughput gain does exist. In this way, the plotted bubbles indicate desirable points at which to commute from the current code allocation (first number in the bubble) to a new one (second number). That is to say, at those points the Control stage should receive an alarm from the Monitoring one and should reallocate codes according to the databases generated by the Learning & Memory block.
From the analysis of these cells, a decision Look-Up Table (LUT) has been defined and is shown in Table 2. The values of Q1 are related to the number of codes to apply and to the computed value of σ_CQI. The standard deviation is considered to be high when it takes a value greater than 10% of the maximum reportable CQI (= 30), that is, σ_CQI higher than 3. Under these circumstances, several UEs report a CQI fairly far from Q1 (well over 3 units) and therefore the optimal number of codes to allocate is higher. Otherwise, there would be a significant throughput reduction with respect to the optimum case, as shown later.
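A possible encoding of the LUT of Table 2 is shown below. Interpreting the Q1 values of the table as upper bounds of consecutive ranges is an assumption about how the table is read.

```python
def codes_from_lut(q1, sigma_cqi):
    """Decision LUT of Table 2: HS-PDSCH codes to allocate as a function
    of KPI-A (Q1) and KPI-B (sigma_CQI)."""
    high_dispersion = sigma_cqi >= 3       # > 10% of the maximum CQI (30)
    lut = [                                # (Q1 bound, low sigma, high sigma)
        (7, 2, 3),
        (9, 3, 4),
        (11, 4, 5),
        (17, 5, 7),
        (21, 7, 8),
    ]
    for bound, low, high in lut:
        if q1 <= bound:                    # Q1 rows read as upper bounds
            return high if high_dispersion else low
    return 8                               # Q1 > 21

print(codes_from_lut(q1=18, sigma_cqi=3.4))    # -> 8
```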
Figure 17 shows the evolution of Q1 along time as well as the number of codes to be set aside for HS-PDSCHs if the value of Q1 is directly evaluated in the proposed LUT. Reported CQIs show sharp and fast variations around their general trend and therefore some prior processing is needed to avoid an excessive number of code reconfigurations. From the figure, too frequent reallocations along with an excessive ping-pong effect can be seen. Making excessive code changes for just a short time is not desirable, since they imply extra signalling on the Iub interface. Specifically, the channelization codes available for HS-PDSCH packet scheduling in a cell are explicitly signalled by the RNC to the node-B. This signalling is defined in [31].
Figure 17: Q1 evolution along time and the corresponding number of HS-PDSCH codes obtained by directly evaluating Q1 in the proposed LUT
Figure 18: Central cell code allocations along time for different treatments of Q1: (a) running averaging window (delay 15 s), (b) time-to-trigger (delay 15 s), (c) running averaging window (delay 60 s), (d) time-to-trigger (delay 60 s)
On the other hand, it is important to recall that one of the objectives to be met by the ATS is maximum simplicity, so that it can be run continuously and in real time. Therefore, extra calculations and mathematical manipulations with the KPIs must be simple. The easiest option to avoid ping-pong is to use a classic time-to-trigger, just as is done in other RRM procedures of cellular networks, such as handover. Alternatively, by means of a FIR filter, a running average can be obtained while adding hardly any extra complexity to the ATS. Both strategies are assessed next.
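Both smoothing options can be captured by very simple helpers such as the ones sketched below. Class names and window sizes are illustrative; KPI periods of 500 ms are assumed, so a 30-sample time-to-trigger corresponds to 15 s and a 60-sample window to a 30 s running average. Presumably, in the first option it is the filtered Q1 that is evaluated in the LUT, while in the second the raw proposal must remain stable before being applied.

```python
from collections import deque

class RunningAverage:
    """FIR running average of Q1 (e.g. a 60-sample window of 500 ms KPI
    periods gives a 30 s window, i.e. a 15 s delay)."""
    def __init__(self, window):
        self.buf = deque(maxlen=window)

    def update(self, q1):
        self.buf.append(q1)
        return sum(self.buf) / len(self.buf)

class TimeToTrigger:
    """Apply a new code proposal only after it has persisted for a given
    number of consecutive KPI periods (e.g. 30 periods of 500 ms = 15 s)."""
    def __init__(self, periods):
        self.periods = periods
        self.candidate = None
        self.count = 0

    def update(self, proposal, current):
        if proposal == current:
            self.candidate, self.count = None, 0
            return current
        if proposal != self.candidate:
            self.candidate, self.count = proposal, 1
            return current
        self.count += 1
        return proposal if self.count >= self.periods else current
```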
In order to evaluate the effects of different averaging window sizes or time-to-trigger durations and to decide on a proper value, different simulations have been run. Some examples are shown in Figure 18, which reveals, as could be expected, a tradeoff between the number of reallocations and the precision of the number of codes. The more frequent the reallocations, the more precise the system is, but also the higher the number of false alarms, i.e., codes that are allocated for just a few seconds. This can be seen in Figures 18(a) and 18(b), where an averaging running window of 30 s and a time-to-trigger of 15 s are applied, respectively. Note that the time sizes are chosen so that the delay with respect to the evaluated instant of time is the same, 15 s. These figures can be compared with Figures 18(c) and 18(d); these second examples introduce a delay of 60 s, so variations do not follow the UEs' evolution as accurately. This is particularly noticeable in the time-to-trigger case: because of sharp variations, it takes longer to hold a stable situation for 60 s; on the other hand, false alarms are completely eliminated. However, in the running average case of Figure 18(c)
Figure 19: Final central cell code allocation. Concatenation of running average and time-to-trigger
Figure 20: ATS decision flow combining the checks "code proposal different from current?" and "time-to-trigger fulfilled?" with the Control actions "reallocate #HS-PDSCH codes" / "reallocate nearest #HS-PDSCH codes"
(a) Throughput variations with respect to a fixed 8-code reservation. (b) Evolution of the blocking probability for a fixed 8-code reservation. (c) Throughput variations with respect to a fixed 8-code reservation, without considering σ_CQI.
Figure 21: Full ATS results and comparison with fixed 8 codes allocation
As a benchmark, the ATS is compared with a fixed reservation of 8 codes, which implies maximum throughput. Thus, Figure 21(a) contains the difference in throughput with respect to the fixed allocation. It can be observed that using the ATS implies a throughput loss of around 100 kbps. At certain instants of time this value rises, but soon after the ATS corrects it with a new reallocation. On the other hand, there are some periods of time in which the proposed number of codes is even better in terms of throughput. The reasons for this can be found in Section 3.2.1: an excessive number of codes allocated to HS-PDSCHs can involve a worse interference pattern for Rel99. It is interesting to note that the mean value of the graph is +18.7 kbps, which quantifies the average throughput loss introduced by the ATS when compared with a fixed maximum code reservation policy. It can be concluded that the approach performs correctly and throughput levels are maintained at a quasi-optimum value.
On the other hand, the system's behavior in terms of blocking is improved thanks to the intelligent code reallocations. Figure 21(b) represents the blocking probability experienced by the central cell when the fixed strategy, again with 8 codes, is implemented. Especially in the second third of the simulation, because UEs accumulate at the edges of the cell, and therefore in SHO areas, access requests increase. The cell, however, does not have enough resources to support those new requests. By means of the proposed adaptive code allocation, on the contrary, the blocking probability is always kept equal to zero. Precisely, the number of assigned codes clearly decreases when users are far away from the node-B and their reported CQIs worsen, so in this sense the gains are obvious.
Finally, Figure 21(c) shows what the throughput variations would be if the ATS did not compute and take into account the value of σ_CQI. Again, the comparison is done against the fixed 8-code allocation, and it can be seen that the results are worse than those presented in Figure 21(a). In this case, the allocation is more conservative and UEs with high CQIs do not get the most out of HSDPA. In certain short periods of time the differences reach 700 kbps. In particular, it can be observed that it is during the first third of the simulation that the ATS performs worst. During this period, users roam around the network homogeneously and therefore the histograms computed for each cell show higher values of σ_CQI (recall Figure 15). Even though the average throughput loss over the whole observation time is just around 42 kbps, during this first period it increases up to 115 kbps. This highlights the importance of a good definition of the LUT and of considering second-moment measurements to have a wider perspective of the RF channel conditions.
In order to maximize the cell throughput, all HSDPA capable UEs should be transferred to this
technology but only if the scheduler is able to cope with delay and individual minimum throughput
constraints. This can be guaranteed by using a proper admission control combined with a QoS aware
scheduler.
On the other hand, HSDPA should consume only the power left over by Rel99, so that DCH operation is guaranteed. Alternatively, a fixed amount of power could be allocated to assure a certain HSDPA throughput at the cell edge, but at the cost of losing maximum DCH performance. The value to reserve can easily be found by means of simple link budgeting; a minimal sketch of such a calculation is given below. Trade-offs between the probability of degrading DCHs and the maximum cell-edge HSDPA throughput appear, so dynamic strategies are easier to manage and make the most of the available resources.
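The following sketch illustrates the kind of link-budget calculation mentioned above, estimating the HS-PDSCH power to reserve so that a cell-edge UE reaches a given post-despreading SINR. It is only an outline: all numeric inputs (path loss, noise, interference, processing gain) are assumptions chosen for the example, not values from this study, and own-cell orthogonality losses are ignored.

import math

def db_to_lin(x_db):
    return 10.0 ** (x_db / 10.0)

def lin_to_db(x):
    return 10.0 * math.log10(x)

# Assumed inputs (hypothetical values, not taken from this study)
path_loss_edge_db = 140.0      # total propagation loss at the cell edge
noise_floor_dbm = -99.0        # thermal noise plus UE noise figure over 3.84 MHz
other_cell_interf_dbm = -95.0  # other-cell interference received at the edge
target_sinr_db = 0.0           # post-despreading SINR for the lowest TF considered
processing_gain_db = 10.0 * math.log10(16)  # HS-PDSCH spreading factor is 16

# Node-B power on HS-PDSCHs needed for the edge UE to meet the target SINR
interf_plus_noise_mw = db_to_lin(noise_floor_dbm) + db_to_lin(other_cell_interf_dbm)
required_rx_dbm = target_sinr_db + lin_to_db(interf_plus_noise_mw) - processing_gain_db
required_tx_dbm = required_rx_dbm + path_loss_edge_db

print(f"HS-PDSCH power to reserve: {required_tx_dbm:.1f} dBm "
      f"({db_to_lin(required_tx_dbm) / 1000.0:.2f} W)")

With these example figures the reservation would be around 34 dBm (roughly 2.8 W), which is the kind of fixed value that would then compete with the Rel99 DCH power budget.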
The second part of the document is devoted to the investigation of dynamic code allocation. Codes from the OVSF tree are another of the resources to be shared when both technologies are deployed under the same carrier. Initially, three questions were posed: first, how the codes should be assigned to meet QoS targets; second, whether this assignment depends on changes in traffic patterns; and third, whether code allocation should be considered for inclusion in a UMTS ATS. To answer these questions, a detailed initial analysis was carried out by means of simulations. The cell throughput, along with several collateral effects, was studied for different code allocations. This was done for different geographical UE distributions, and several engineering rules were obtained:
Effects on blocking probability: Blocking probability upper-bounds the maximum number of codes to be considered by the ATS. Both HSDPA and Rel99 blocking increase with the number of reserved codes, so favoring HS-PDSCHs too much also has a negative effect on HSDPA itself.
Effects on SHO areas: An indirect effect of the previous point is that AS sizes are reduced if the number of allocated codes surpasses a certain threshold, whose value depends on the Rel99 TFs. This leads to connections with node-Bs that are not the best option in terms of DL power. As a consequence, DL interference increases with the number of HS-PDSCHs. Of course, when fully blocked users (AS size = 0) are not negligible, DL power is reduced again, but with a clearly inadequate network performance.
Effects on Rel99 throughput: Rel99 throughput is maintained, but degradation and eventual dropping increase with the number of HS-PDSCHs, because of the worsening of the interference patterns and of blocking.
Regarding scheduling policies, a revision of the state of the art was carried out, and PF (and its variations) arises as one of the most interesting options for elastic traffic. Hence, this algorithm was also simulated, and the previous conclusions (obtained with RR) extend to this case. The code range to be considered by the ATS can be slightly increased when UEs are uniformly distributed in the cell and PF is implemented. These differences tend to disappear when specific concentrations of users appear or when the maximum number of codes considered for HSDPA is below 5; a minimal sketch of the PF metric is given after this list.
Power assignment and multiuser code multiplexing: The scheduler is also responsible for this aspect, which is often overlooked in the HSDPA literature but should be considered in practical implementations to make the most of the available power. In the current work, the first user is always served according to the reported CQIs and subsequent ones take advantage of the remaining resources; a sketch of this greedy sharing is also given after the list. With this strategy, the number of multiplexed UEs is closely coupled with the available HSDPA TFs: the quantization between the reported CQI and the selected TF defines the power that is left for further UEs. From 1 to 4 HS-PDSCHs, the probability of multiplexing more than one user is very small, so the ATS should allocate just one HS-SCCH. For higher numbers of HS-PDSCHs, multiplexing more than two users hardly ever occurred. These probabilities are further reduced if PF is used.
The optimum number of codes to be assigned to HSDPA is tightly related to the UEs' spatial distribution. For user concentrations far away from the node-B there is no point in reserving more than 5 codes, and this value can even be decreased as the cell size increases. A higher value does not improve HSDPA performance; codes are wasted, unnecessarily increasing the blocking probabilities of both technologies. On the other hand, when users are close to the node-B (distance below 150 m) the number of codes to be allocated is limited only by the maximum allowable blocking probabilities. When UEs are homogeneously distributed, the optimum number depends on the cell size; in the presented set of simulations, since the scenario was macrocellular, it was closer to the first case.
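As announced in the scheduling item above, the following is a minimal sketch of the PF metric, not the simulator used in this study: in each TTI the UE maximizing the ratio between its instantaneous achievable rate and its exponentially averaged served throughput is selected. The rates and the averaging time constant are placeholder assumptions.

def pf_select(achievable_kbps, avg_kbps, eps=1e-3):
    """Return the index of the UE maximizing r_i / R_i for this TTI."""
    return max(range(len(achievable_kbps)),
               key=lambda i: achievable_kbps[i] / (avg_kbps[i] + eps))

def update_averages(avg_kbps, served_kbps, tc_ttis=100.0):
    """Exponential moving average of the served throughput (time constant in TTIs)."""
    return [(1.0 - 1.0 / tc_ttis) * r_avg + (1.0 / tc_ttis) * r_srv
            for r_avg, r_srv in zip(avg_kbps, served_kbps)]

# Toy run: UE 0 has a good channel, UE 1 a poor one; PF still serves UE 1 from
# time to time because its average throughput stays low.
rates_per_tti = [[3000.0, 500.0], [2800.0, 450.0], [3100.0, 480.0]]
avg = [1.0, 1.0]
for tti, rates in enumerate(rates_per_tti):
    ue = pf_select(rates, avg)
    served = [rates[i] if i == ue else 0.0 for i in range(len(rates))]
    avg = update_averages(avg, served)
    print(f"TTI {tti}: UE {ue} served")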
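The greedy power and code sharing described in the multiplexing item can be outlined as follows. The CQI-to-TF table, the resource figures and the two-HS-SCCH limit are hypothetical values chosen only for illustration; they are neither the 3GPP CQI mapping nor the settings of the simulator.

# For each CQI threshold: (HS-PDSCH codes, power in W) demanded by the chosen TF
HYPOTHETICAL_TF_TABLE = [(0, (1, 1.0)), (10, (3, 2.0)), (16, (5, 3.0)), (22, (10, 4.0))]

def tf_for_cqi(cqi):
    """Pick the most demanding TF whose CQI threshold is met."""
    chosen = HYPOTHETICAL_TF_TABLE[0][1]
    for threshold, tf in HYPOTHETICAL_TF_TABLE:
        if cqi >= threshold:
            chosen = tf
    return chosen

def multiplex(ue_cqis, free_codes, free_power_w, max_hs_scch=2):
    """Serve UEs in scheduling order until codes, power or HS-SCCHs run out."""
    served = []
    for ue, cqi in ue_cqis:
        if len(served) == max_hs_scch:
            break
        codes, power = tf_for_cqi(cqi)
        codes = min(codes, free_codes)      # shrink the demand if it does not fit
        power = min(power, free_power_w)    # (a simplification of TF reselection)
        if codes == 0 or power <= 0.0:
            break
        served.append((ue, codes, power))
        free_codes -= codes
        free_power_w -= power
    return served

print(multiplex([("UE1", 23), ("UE2", 12)], free_codes=10, free_power_w=6.0))

In this toy case a single high-CQI UE consumes the whole 10-code reservation and no second UE is code-multiplexed, in line with the low multiplexing probabilities reported above.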
Given this, the proposed ATS makes reservations based on the RF channel conditions experienced by the majority of HSDPA users. Consequently, reported CQI measurements are continuously monitored and the corresponding histogram is computed. By means of dynamic simulations, the evolution of this histogram has been analyzed in a wide range of situations. From this analysis two KPIs were derived: the first quartile, which was shown to be more representative than the mean value, and the standard deviation of the histogram.
From the analysis of several central cells of the scenario, a decision LUT was defined and incorporated into the ATS. Thanks to this, the connection between the calculated KPIs and the number of codes to be applied by the Control block could be obtained.
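As a rough illustration of this KPI extraction and LUT lookup, the sketch below computes the first quartile and standard deviation of a window of CQI reports and maps them to a number of HS-PDSCH codes. The thresholds and code values in the table are invented for the example; the actual LUT of the proposal was derived from simulations and is not reproduced here.

import statistics

def cqi_kpis(cqi_reports):
    """First quartile and standard deviation of the reported CQIs."""
    ordered = sorted(cqi_reports)
    q1 = ordered[len(ordered) // 4]          # simple first-quartile estimate
    sigma = statistics.pstdev(cqi_reports)
    return q1, sigma

def codes_from_lut(q1, sigma):
    """Hypothetical decision LUT mapping (Q1, sigma) to HS-PDSCH codes."""
    if q1 >= 20:
        return 10 if sigma < 4 else 8
    if q1 >= 14:
        return 7 if sigma < 4 else 5
    if q1 >= 8:
        return 5 if sigma < 4 else 4
    return 3

reports = [9, 11, 12, 12, 14, 15, 15, 16, 18, 22]   # one window of CQI reports
q1, sigma = cqi_kpis(reports)
print(f"Q1={q1}, sigma={sigma:.1f} -> reserve {codes_from_lut(q1, sigma)} codes")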
Post-processing of the KPIs proved to be necessary to avoid too frequent reallocations and an excessive ping-pong effect. Two strategies were analyzed, a classical time-to-trigger and a FIR-filter-based running average; from this analysis a combination of both, introducing a 30 s delay, was selected.
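A minimal sketch of such a combined post-processing is given below, assuming a 10 s KPI reporting period and a 3-tap running average; these values and the toy input sequence are assumptions made only for the example, not parameters taken from the study.

from collections import deque

class CodeReallocationFilter:
    """FIR running average plus time-to-trigger before committing a change."""

    def __init__(self, initial_codes=5, fir_taps=3,
                 time_to_trigger_s=30.0, kpi_period_s=10.0):
        self.window = deque(maxlen=fir_taps)            # FIR averaging buffer
        self.ttt_samples = int(time_to_trigger_s / kpi_period_s)
        self.applied = initial_codes                    # allocation in force
        self.pending = None                             # candidate new value
        self.pending_count = 0

    def update(self, raw_codes):
        """Feed one raw LUT decision; return the allocation to apply now."""
        self.window.append(raw_codes)
        smoothed = round(sum(self.window) / len(self.window))
        if smoothed == self.applied:
            self.pending, self.pending_count = None, 0
        elif smoothed == self.pending:
            self.pending_count += 1
            if self.pending_count >= self.ttt_samples:  # held long enough
                self.applied = smoothed
                self.pending, self.pending_count = None, 0
        else:
            self.pending, self.pending_count = smoothed, 1
        return self.applied

# Toy run: raw LUT outputs produced every 10 s
flt = CodeReallocationFilter()
print([flt.update(raw) for raw in [5, 5, 7, 8, 8, 8, 8, 8, 8]])

In this toy run the output stays at 5 codes and the jump to 8 codes is only committed after the smoothed decision has been held for three consecutive samples, i.e. 30 s, which is the ping-pong protection described above.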
The complete ATS was shown to perform correctly. When comparing its behavior with a fixed 8-code allocation, the average throughput loss was just 18.7 kbps while the blocking probability was kept at zero, whereas the fixed allocation reached values of 25% when the users moved towards the cell edges and more resources were needed. Finally, the importance of including the standard deviation in the defined LUT was addressed: omitting this parameter leads the ATS to throughput losses of up to 700 kbps in specific periods of time.
Thus, it has been shown that the ATS proposal succeeds in improving the network performance. An optimum number of codes is allocated to each technology and, hence, the cell throughput can be optimized while minimizing both Rel99 and HSDPA blocking probabilities.
References
[1] TR 25.855 (Release 5) - HSDPA; Overall UTRAN Description, 3GPP Technical Report. [Online].
Available: http://www.3gpp.org/
[2] TR 25.808 (Release 6) - FDD Enhanced Uplink; Physical Layer Aspects, 3GPP Technical Report.
[Online]. Available: http://www.3gpp.org/
[3] TR 25.999 (Release 7) - High Speed Packet Access (HSPA) Evolution; Frequency Division Duplex
(FDD), 3GPP Technical Report. [Online]. Available: http://www.3gpp.org/
[4] C. Chevallier, C. Brunner, A. Garavaglia, K. P. Murray, and K. R. Baker, WCDMA (UMTS) Deployment Handbook: Planning and Optimization Aspects, 1st ed. Chichester, UK: John Wiley & Sons, 2006.
[5] RP-050248. Removal of DSCH (FDD Mode), 3GPP Report. [Online]. Available: http://www.3gpp.org/
[6] TS 25.211 (Release 5) - Physical Channels and mapping of Transport Channels onto Physical Channels,
3GPP Technical Specification. [Online]. Available: http://www.3gpp.org/
[7] P. Tapia, D. Wellington, J. Liu, and Y. Karimli, Practical Considerations of HSDPA Performance, in
Proc. of IEEE Vehicular Technology Conference Fall (VTC 2007 Fall), Baltimore (USA), Sep. 30/Oct.
1 2007.
[8] A. Mader, D. Staehle, and M. Spahn, Impact of HSDPA Radio Resource Allocation Schemes on the
System Performance of UMTS, in Proc. of IEEE Vehicular Technology Conference Fall (VTC 2007
Fall), Baltimore (USA), Sep. 30/Oct. 1 2007.
[9] P. Zanier and D. Soldani, A Simple Approach to HSDPA Dimensioning, in Proc. of IEEE International Symposium on Personal, Indoor and Mobile Radio Commun. (PIMRC 2005), Berlin (Germany), Sep. 11–14, 2005.
[10] TS 25.942 (Release 4) - RF System Scenarios, 3GPP Technical Specification. [Online]. Available:
http://www.3gpp.org/
[11] Kathrein website, 2006. [Online]. Available: http://www.kathrein.de/
[12] R. Fraile, O. Lazaro, and N. Cardona, Two Dimensional Shadowing Model, COST 273, Prague (Czech
Rep.), Tech. Rep. available as TD(03)171, Sep. 2003.
[13] TS 25.306 (Release 6). UE Radio Access Capabilities, 3GPP Technical Specification. [Online]. Available:
http://www.3gpp.org/
[14] AROMA (Advanced Resource management solutions for future all IP heterOgeneous Mobile rAdio
environments) IST Project, 6th Framework Program of the European Community, 2007. [Online].
Available: http://www.aroma-ist.upc.edu/
[15] G. Thrasivoulos and D. Esmael, HSDPA Network Dimensioning Challenges and Key Performance Parameters, Bechtel Telecommunications Technical Journal (BTTJ), vol. 4, no. 2, pp. 77–82, Jun. 2006.
[16] H. Holma and A. Toskala, HSDPA / HSUPA for UMTS, 1st ed. Chichester, UK: John Wiley & Sons, 2006.
[17] P. Ameigeiras, Packet Scheduling and Quality of Service in HSDPA, Ph.D. dissertation, Institute of
Electronic Systems, Aalborg University, Aalborg, Denmark, Oct. 2003.
[18] B. Al-Manthari, H. Hassanein, and N. Nasser, Packet Scheduling in 3.5G High-Speed Downlink Packet Access Networks: Breadth and Depth, IEEE Network, vol. 21, no. 1, pp. 41–46, Jan. 2007.
[19] F. Feller and M. C. Necker, Comparison of Opportunistic Scheduling Algorithms for HSDPA Networks, in Proc. of 12th EUNICE Open European Summer School (EUNICE 2006), Stuttgart (Germany), Sep. 18–20, 2006.
[20] A. Haider and R. Harris, A Novel Proportional Fair Scheduling Algorithm for HSDPA in UMTS Networks, in Proc. of IEEE International Conference on Wireless Broadband and Ultra Wideband Communications (AusWireless 2007), Sydney (Australia), Aug. 27–30, 2007.
[21] S. Shen and C. Chang, A Utility-based Scheduling Algorithm with Differentiated QoS Provisioning for Multimedia CDMA Cellular Networks, in Proc. of IEEE Vehicular Technology Conference Spring (VTC 2004 Spring), Milan (Italy), May 17–19, 2004.
[22] F. Kelly, Charging and Rate Control for Elastic Traffic, European Transactions on Telecommunications, vol. 8, no. 1, pp. 33–37, Jan. 1997.
[23] P. A. Hossein, QoS Control for WCDMA High Speed Packet Data, in Proc. of 4th International Workshop on Mobile and Wireless Communications Network, Stockholm (Sweden), Sep. 9–11, 2002.
[24] Z. Jiang, H. Mason, B. J. Kim, N. K. Shankaranarayanan, and P. Henry, A Subjective Survey of User Experience for Data Applications for Future Cellular Wireless Networks, in Proc. of IEEE Symposium on Applications and the Internet (SAINT 2001), San Diego (USA), Jan. 8–12, 2001.
[25] N. Enderle and X. Lagrange, User Satisfaction Models and Scheduling Algorithms for Packet-Switched Services in UMTS, in Proc. of IEEE Vehicular Technology Conference Spring (VTC 2003 Spring), Jeju (Korea), Apr. 22–25, 2003.
[26] J. S. Gomes, M. Yun, H. Choi, J. Kim, J. Sohn, and H. I. Choi, Scheduling Algorithms For Policy Driven QoS Support in HSDPA Networks, in Proc. of IEEE Vehicular Technology Conference Spring (VTC 2007 Spring), Dublin (Ireland), Apr. 22–25, 2007.
[27] A. R. Braga, E. B. Rodrigues, and F. R. P. Cavalcanti, Novel Scheduling Algorithms Aiming for QoS Guarantees for VoIP over HSDPA, in Proc. of International Telecommunications Symposium (ITS 2006), Fortaleza (Brazil), Sep. 3–6, 2006.
[28] Z. Yong, X. Zhang, and D. Yang, QoS Based Proportional Fair Scheduling Algorithm for CDMA Forward Links, in Proc. of IEEE Vehicular Technology Conference Spring (VTC 2007 Spring), Dublin (Ireland), Apr. 22–25, 2007.
[29] A. Aguiar, A. Wolisz, and H. Lederer, Utility-based Packet Scheduler for Wireless Communications, in Proc. of IEEE Workshop on Wireless Local Networks (WLN 2006), Tampa (USA), Nov. 14–17, 2006.
[30] S. Vangipuram and S. Bhashyam, Multiuser Scheduling and Power Sharing for CDMA Packet Data Systems, in Proc. of National Conference on Communications (NCC 2007), Kanpur (India), Jan. 26–28, 2007.
[31] TS 25.433 (Release 5). UTRAN Iub interface Node B Application Part (NBAP) signalling, 3GPP
Technical Specification. [Online]. Available: http://www.3gpp.org/