Research Article
QoE Evaluation: The TRIANGLE Testbed Approach
Copyright © 2018 Almudena Díaz Zayas et al. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly
cited.
This paper presents the TRIANGLE testbed approach to score the Quality of Experience (QoE) of mobile applications, based
on measurements extracted from tests performed on an end-to-end network testbed. The TRIANGLE project approach is a
methodology flexible enough to generalize the computation of the QoE for any mobile application. The process produces a final
TRIANGLE mark, a quality score, which could eventually be used to certify applications.
project follows also a parametric approach to compute the QoE.

Conclusions in [5] point out that a large number of parameters in the model could be cumbersome due to the difficulty of obtaining the required measurements and because it would require significantly more data points and radio scenarios to tune the model. The TRIANGLE approach has overcome this limitation through the large variety of measurements collected, the variety of end-to-end network scenarios designed, and, mostly, the degree of automation reached, which enables the execution of intensive test campaigns covering all scenarios.

Although there are many proposals to calculate the quality of experience, in general they are very much oriented to specific services, for example, voice [6] or video streaming [7, 8]. This paper introduces a methodology to compute the QoE of any application, even if the application supports more than one service.

The QoE, as perceived by the user, depends on many factors: the network conditions, both at the core (CN) and at the radio access (RAN), the terminal, the service servers, and human factors difficult to control. Due to the complexity and the time needed to run experiments or make measurements, most studies limit the evaluation of the QoE to a limited set of, or even noncontrolled, network conditions, especially those that affect the radio interface (fading, interference, etc.).

TRIANGLE presents a methodology and a framework to compute the QoE out of technical parameters, weighting the impact of the network conditions based on the actual use cases for the specific application. As in ITU-T Recommendations G.1030 [9] and G.1031 [10], the user's influence factors are outside the scope of the methodology developed in TRIANGLE.

TRIANGLE has developed an end-to-end cellular network testbed and a set of test cases to automatically test applications under multiple changing network conditions and/or terminals and provide a single quality score. The score is computed weighting the results obtained testing the different use cases applicable to the application, for the different aspects relevant to the user (the domains in TRIANGLE), and under the network scenarios relevant for the application. The framework allows specific QoS-to-QoE translations to be incorporated based on the outcome of subjective experiments on new services.

Note that although the TRIANGLE project also provides means to test devices and services, only the process to test applications is presented here.

The rest of the paper is organized as follows. Section 2 provides an overview of related work. Section 3 presents an overview of the TRIANGLE testbed. Section 4 introduces the TRIANGLE approach. Section 5 describes in detail how the quality score is obtained in the TRIANGLE framework. Section 6 provides an example and the outcome of this approach applied to the evaluation of a simple app, the Exoplayer. Finally, Section 7 summarizes the conclusions.

2. State of the Art

Modelling and evaluating QoE in current and next generation mobile networks is an important and active research area [8]. Different types of testbeds can be found in the literature, ranging from simulated to emulated mobile/wireless testbeds, which are used to obtain subjective or objective QoE metrics, to extract a QoE model, or to assess the correctness of a previously generated QoE model. Many of the testbeds reviewed have been developed for a specific research purpose, instead of a more general one such as the TRIANGLE testbed, which can serve a wide range of users (researchers, app developers, service providers, etc.). In this section, some QoE-related works that rely on testbeds are reviewed.

The QoE Doctor tool [12] is closely related to the TRIANGLE testbed, since its main purpose is the evaluation of mobile apps' QoE in an accurate, systematic, and repeatable way. However, QoE Doctor is just an Android tool that can take measurements at different layers, from the app user interface (UI) to the network, and quantify the factors that impact the app QoE. It can be used to identify the causes of a degraded QoE, but it is not able to control or monitor the mobile network. QoE Doctor uses a UI automation tool to reproduce user behaviour in the terminal (app user flows in TRIANGLE nomenclature) and to measure the user-perceived latency by detecting changes on the screen. Other QoE metrics computed by QoE Doctor are the mobile data consumption and the network energy consumption of the app, obtained by means of an offline analysis of the TCP flows. The authors have used QoE Doctor to evaluate the QoE of popular apps such as YouTube, Facebook, or mobile web browsers. One of the drawbacks of this approach is that most metrics are based on detecting specific changes on the UI. Thus, the module in charge of detecting UI changes has to be adapted for each specific app under test.

QoE-Lab [13] is a multipurpose testbed that allows the evaluation of QoE in mobile networks. One of its purposes is to evaluate the effect of new network scenarios on services such as VoIP, video streaming, or web applications. To this end, QoE-Lab extends the BERLIN [14] testbed framework with support for next generation mobile networks and some new services, such as VoIP and video streaming. The testbed allows the study of the effect of network handovers between wireless technologies, dynamic migrations, and virtualized resources. Similar to TRIANGLE, the experiments are executed in a repeatable and controlled environment. However, in the experiments presented in [13], the user equipment consisted of laptops, which usually have better performance and more resources than smartphones (battery, memory, and CPU). The experiments also evaluated the impact of different scenarios on the multimedia streaming services included in the testbed. The main limitations are that it is not possible to evaluate different mobile apps running on different smartphones or to relate the QoE with the CPU, battery usage, and so forth.

De Moor et al. [15] proposed a user-centric methodology for the multidimensional evaluation of QoE in a mobile real-life environment. The methodology relies on a distributed testbed that monitors the network QoS and context information and integrates the subjective user experience based on real-life settings. The main component of the proposed architecture is the Mobile Agent, a component to be installed in the user device that monitors contextual data (location,
[Figure: Overview of the TRIANGLE testbed architecture: testbed management (compositor, orchestrator, executor), measurements and data collection (ETL framework and databases), and the TAP drivers (app, DEKRA WebDriver, Android, iOS, EPC, transport, instrumentation) controlling the UE.]
Figure 2: The process to obtain the synthetic-MOS score in a TRIANGLE test case.
phones (UEs) and the ones that can be integrated in, for example, gaming consoles, advanced VR gear, car units, or IoT systems.

The TRIANGLE domains group different aspects that can affect the final QoE perceived by the users. The current testbed implementation supports three of the several domains that have been identified: Apps User Experience (AUE), Apps Energy Consumption (AEC), and Applications Device Resources Usage (RES).

Table 1 provides the use cases and Table 2 lists the domains initially considered in TRIANGLE.

Table 1: Use cases defined in the TRIANGLE project.

Identifier | Use Case
VR | Virtual Reality
GA | Gaming
AR | Augmented Reality
CS | Content Distribution Streaming Services
LS | Live Streaming Services
SN | Social Networking
HS | High Speed Internet
PM | Patient Monitoring
ES | Emergency Services
SM | Smart Metering
SG | Smart Grids
CV | Connected Vehicles

To produce data to evaluate the QoE, a series of test cases have been designed, developed, and implemented to be run on the TRIANGLE testbed. Obviously, not all test cases are applicable to all applications under test, because not all applications need, or are designed, to support all the functionalities that can be tested in the testbed. In order to automatically determine the test cases that are applicable to an application under test, a questionnaire (identified as features questionnaire in the portal), equivalent to the classical conformance testing ICS (Implementation Conformance Statement), has been developed and is accessible through the portal. After filling in the questionnaire, the applicable test plan, that is, the test campaign with the list of applicable test cases, is automatically generated.

The sequence of user actions (type, swipe, tap, etc.) a user needs to perform in the terminal (UE) to complete a task (e.g., play a video) is called the "app user flow." In order to be able to automatically run a test case, the actual application user flow, with the user actions a user would need to perform on the phone to complete certain tasks defined in the test case, also has to be provided, as sketched below.
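In code terms, an app user flow is simply an ordered list of UI actions. The following minimal sketch is illustrative Python; the (action, target) tuple format and the widget names are hypothetical, not the testbed's actual app user flow syntax:

```python
# Hypothetical app user flow for a "play a video" task.
# The (action, target) format and widget names are illustrative only;
# the TRIANGLE testbed defines its own format for app user flows.
play_video_flow = [
    ("tap", "search_box"),     # focus the search field
    ("type", "demo video"),    # enter the name of the video to play
    ("tap", "search_button"),  # launch the search
    ("tap", "first_result"),   # open the video
    ("tap", "play_button"),    # start playback
]
```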
Each test case univocally defines the conditions of execution, the sequence of actions the user would perform (i.e., the app user flow), the sequence of actions that the elements of the testbed must perform, the traffic injected, the collection of measurements to take, and so forth. In order to obtain statistical significance, each test case includes a number of executions (iterations) under certain network conditions (herein called scenarios). Out of the various measurements made in the different iterations under any specific network conditions (scenario), a number of KPIs (Key Performance Indicators) are computed. The KPIs are normalized onto a standard 1-to-5 scale, as typically used in MOS scores, and referred to as synthetic-MOS, a terminology that has been adopted from previous works [7, 21]. The synthetic-MOS values are aggregated across network scenarios to produce a number of intermediate synthetic-MOS scores, which are finally aggregated to obtain a synthetic-MOS score for each test case (see Figure 2).
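The process in Figure 2 can be summarized in a short sketch (illustrative Python; the function and variable names are assumptions, and the equal-weight aggregation reflects the current TRIANGLE implementation mentioned in Section 6):

```python
# Illustrative sketch of the Figure 2 process, not the TRIANGLE implementation.
from statistics import mean

def kpi_from_iterations(values):
    """Reduce the per-iteration measurements to a KPI, e.g., the average
    time to load the first media frame across the N iterations."""
    return mean(values)

def test_case_score(scenario_results, normalize):
    """scenario_results: {scenario: {kpi_name: [one measurement per iteration]}}.
    normalize: {kpi_name: function mapping a KPI value to a synthetic-MOS}."""
    scenario_scores = []
    for kpis in scenario_results.values():
        # One synthetic-MOS value per KPI under these network conditions.
        mos_values = [normalize[name](kpi_from_iterations(values))
                      for name, values in kpis.items()]
        # Aggregate the per-KPI values into one score for the scenario.
        scenario_scores.append(mean(mos_values))
    # Aggregate across scenarios to obtain the test case synthetic-MOS.
    return mean(scenario_scores)
```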
The process to obtain the final TRIANGLE mark is sequential. First, for each domain, a weighted average of the synthetic-MOS scores obtained in each test case in the domain is calculated. Next, a weighted average of the synthetic-MOS values in all the domains of a use case is calculated to provide a single synthetic-MOS value per use case.
An application will usually be developed for one specific use case, as those defined in Table 1, but it may be designed for more than one use case. In the latter case, a further weighted average is made with the synthetic-MOS scores obtained in each use case supported by the application. These sequential steps produce a single TRIANGLE mark, an overall quality score, as shown in Figure 3.

This approach provides a common framework for testing applications, for benchmarking applications, or even for certifying disparate applications. The overall process for an app that implements features of different use cases is depicted in Figure 3.

5. Details of the TRIANGLE QoE Computation

For each use case identified (see Table 1) and domain (see Table 2), a number of test cases have been developed within the TRIANGLE project. Each test case intends to test an individual feature, aspect, or behaviour of the application under test, as shown in Figure 4.

Each test case defines a number of measurements, and because the results of the measurements depend on many factors, they are not, in general, deterministic; thus, each test case has been designed not to perform just one single measurement but to run a number of iterations (N) of the same measurement. Out of those measurements, KPIs are computed. For example, if the time to load the first media frame is the measurement taken in one specific test case, the average user waiting time KPI can be calculated by computing the mean of the values across all iterations. In general, different use case-domain pairs have a different set of KPIs. The reader is encouraged to read [11] for further details about the terminology used in TRIANGLE.

Recommendation P.10/G.100 Amendment 1, Definition of Quality of Experience [2], notes that the overall acceptability may be influenced by user expectations and context. For the definition of the context, technical specifications ITU-T
be set to the extreme value minKPI (if it is worse) or maxKPI (if it is better).

Type II. This function performs a logarithmic interpolation and is inspired by the opinion model recommended by the ITU-T in [9] for a simple web search task. This function maps a value, v, of a KPI, to v' (synthetic-MOS) in the range [1, 5] by computing the following formula:

$$ v' = \frac{5.0 - 1.0}{\ln\left(\frac{a \cdot worstKPI + b}{worstKPI}\right)} \cdot \left(\ln(v) - \ln(a \cdot worstKPI + b)\right) + 5 \qquad (2) $$
The default values of a and b correspond to the simple web search task case (a = 0.003 and b = 0.12) [9, 22], and the worst value has been extracted from ITU-T G.1030. If during experimentation a future input case falls outside the data range of the KPI, the parameters a and b will be updated accordingly. Likewise, if through subjective experimentation other values are considered better adjustments for specific services, the function can be easily updated.
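Both translation functions are straightforward to express in code. The sketch below (Python) transcribes formula (2) for Type II; Type I is assumed here to be a linear interpolation between minKPI (mapped to 1) and maxKPI (mapped to 5), consistent with the clipping rule quoted above:

```python
import math

def type_i(v, min_kpi, max_kpi):
    """Assumed Type I mapping: linear interpolation of a KPI value onto the
    1-to-5 synthetic-MOS scale, where min_kpi is the value mapped to 1 and
    max_kpi the value mapped to 5 (min_kpi may be numerically larger, as for
    CPU usage). Out-of-range values are clipped to the extremes."""
    mos = 1.0 + 4.0 * (v - min_kpi) / (max_kpi - min_kpi)
    return min(max(mos, 1.0), 5.0)

def type_ii(v, worst_kpi, a=0.003, b=0.12):
    """Type II mapping, transcribing formula (2): logarithmic interpolation
    inspired by the ITU-T opinion model for a simple web search task [9].
    v must be positive (e.g., a time); defaults are the web search values."""
    best_kpi = a * worst_kpi + b  # KPI value mapped to the top score (5)
    mos = ((5.0 - 1.0) / math.log(best_kpi / worst_kpi)
           * (math.log(v) - math.log(best_kpi)) + 5.0)
    return min(max(mos, 1.0), 5.0)  # clip to [1, 5], as for Type I
```

By construction, type_ii returns 5 for v = a · worstKPI + b and 1 for v = worstKPI, degrading logarithmically in between.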
Once all KPIs are translated into synthetic-MOS values, they can be averaged with suitable weights. In the averaging process, the first step is to average over the network scenarios considered relevant for the use case, as shown in Figure 2. This provides the synthetic-MOS output value for the test case. If there is more than one test case per domain, which is generally the case, a weighted average is calculated in order to provide one synthetic-MOS value per domain, as depicted in Figure 3. The final step is to average the synthetic-MOS scores over all use cases supported by the application (see Figure 3). This provides the final score, that is, the TRIANGLE mark.
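These sequential averaging steps amount to only a few lines of code (again an illustrative sketch; the weights are placeholders, and in the current TRIANGLE implementation all of them are equal):

```python
def weighted_average(scores, weights=None):
    """Weighted average of synthetic-MOS scores; equal weights by default."""
    weights = weights or [1.0] * len(scores)
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

def triangle_mark(app_results):
    """app_results: {use_case: {domain: [synthetic-MOS per test case]}}.
    Test case scores are averaged per domain, domain scores per use case,
    and use case scores into the final TRIANGLE mark."""
    use_case_scores = []
    for domains in app_results.values():
        domain_scores = [weighted_average(scores)
                         for scores in domains.values()]
        use_case_scores.append(weighted_average(domain_scores))
    return weighted_average(use_case_scores)
```

For instance, with made-up domain scores, triangle_mark({"CS": {"AUE": [3.2], "AEC": [4.8], "RES": [4.6]}}) returns 4.2.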
6. A Practical Case: Exoplayer under Test

For better understanding, the complete process of obtaining the TRIANGLE mark for a specific application, the Exoplayer, is described in this section. This application only has one use case: content distribution streaming services (CS).

Exoplayer is an application-level media player for Android promoted by Google. It provides an alternative to Android's MediaPlayer API for playing audio and video both locally and over the Internet. Exoplayer supports features not currently supported by Android's MediaPlayer API, including DASH and SmoothStreaming adaptive playback.

The TRIANGLE project has concentrated on testing just two of the Exoplayer features: "Noninteractive Playback" and "Play and Pause." These features result in 6 applicable test cases, out of the test cases defined in TRIANGLE. These are test cases AUE/CS/001 and AUE/CS/002, in the App User Experience domain; test cases AEC/CS/001 and AEC/CS/002, in the App Energy Consumption domain; and test cases RES/CS/001 and RES/CS/002, in the Device Resources Usage domain.

The AUE/CS/002 "Play and Pause" test case description, belonging to the AUE domain, is shown in Table 3. The test case description specifies the test conditions, the generic app user flow, and the raw measurements, which shall be collected during the execution of the test.

The TRIANGLE project also offers a library that includes the measurement points that should be inserted in the source code of the app to enable the collection of the measurements specified. Table 4 shows the measurement points required to compute the measurements specified in test case AUE/CS/002.

Table 4: Measurement points associated with test case AUE/CS/002.

Measurements | Measurement points
Time to load first media frame | Media File Playback - Start; Media File Playback - First Picture
Playback cut-off | Media File Playback - Start; Media File Playback - End
Pause | Media File Playback - Pause
Table 5: Synthetic-MOS calculation applied to each KPI.

Feature | Domain | KPI | Synthetic MOS Calculation | KPI min | KPI max
Non-Interactive Playback | AEC | Average power consumption | Type I | 10 W | 0.8 W
Non-Interactive Playback | AUE | Time to load first media frame | Type II | KPI worst = 20 ms |
Non-Interactive Playback | AUE | Playback cut-off ratio | Type I | 50% | 0%
Non-Interactive Playback | AUE | Video resolution | Type I | 240p | 720p
Non-Interactive Playback | RES | Average CPU usage | Type I | 100% | 16%
Non-Interactive Playback | RES | Average memory usage | Type I | 100% | 40%
Play and Pause | AEC | Average power consumption | Type I | 10 W | 0.8 W
Play and Pause | AUE | Pause operation success rate | Type I | 50% | 100%
Play and Pause | RES | Average CPU usage | Type I | 100% | 16%
Play and Pause | RES | Average memory usage | Type I | 100% | 40%
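As a worked illustration of how the Table 5 values are applied (assuming the linear Type I mapping sketched in Section 5 and an invented measurement), an average CPU usage of 37% would map to v' = 1 + 4 · (100 − 37)/(100 − 16) = 4.0 on the synthetic-MOS scale, while any value beyond the 100% and 16% extremes would be clipped to 1 and 5, respectively.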
point "Media File Playback - First Picture."

As specified in [11], all scenarios defined are applicable to the content streaming use case. Therefore, test cases in the three domains currently supported by the testbed are executed in all the scenarios.

Once the test campaign has finished, the raw measurement results are processed to obtain the KPIs associated with each test case: average current consumption, average time to load first media frame, average CPU usage, and so forth. The processes applied are detailed in Table 5. Based on previous experiments performed by the authors, the behaviour of the time to load the first media frame KPI resembles the web response time KPI (i.e., the amount of time the user has to wait for the service); thus, as recommended in the opinion model for web search introduced in [9], a logarithmic interpolation (Type II) has been used for this metric.
The results of the initial process, that is, the KPIs computation, are translated into synthetic-MOS values. To compute these values, reference benchmarking values for each of the KPIs need to be used, according to the normalization and interpolation process described in Section 5. Table 5 shows what has been currently used by TRIANGLE for the App User Experience domain, which is also used by NGMN as a reference in their precommercial trials document [23]. For example, for the "time to load first media frame" KPI shown in Table 5, the type of aggregation applied is averaging and the interpolation formula used is Type II.
To achieve stable results, each test case is executed 10 times (10 iterations) in each network scenario. The synthetic-MOS value in each domain is calculated by averaging the measured synthetic-MOS values in the domain. For example, the synthetic-MOS value in the RES domain is obtained by averaging the synthetic-MOS values of "average CPU usage" and "average memory usage" from the two test cases.
Although Exoplayer supports several video streaming protocols, in this work only DASH [24] (Dynamic Adaptive Streaming over HTTP) has been tested. DASH clients should seamlessly adapt to changing network conditions by making decisions on which video segment to download (videos are encoded at multiple bitrates). The Exoplayer's default adaptation algorithm is basically throughput-based, and some parameters control how often and when switching can occur.
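As a rough illustration of this kind of throughput-based rule (a generic sketch, not ExoPlayer's actual algorithm; the safety factor is an invented parameter), such a client periodically picks the highest encoded bitrate that fits under a fraction of the estimated throughput:

```python
def select_bitrate(available_bitrates_bps, estimated_throughput_bps,
                   safety_factor=0.75):
    """Generic throughput-based DASH adaptation rule (illustrative only):
    choose the highest representation whose bitrate fits within a fraction
    of the estimated throughput, falling back to the lowest one."""
    budget = safety_factor * estimated_throughput_bps
    feasible = [b for b in sorted(available_bitrates_bps) if b <= budget]
    return feasible[-1] if feasible else min(available_bitrates_bps)
```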
During the testing, the testbed was configured with the different network scenarios defined in [11]. In these scenarios, the network configuration changes dynamically following a random pattern, resulting in different maximum throughput rates. The expected behaviour of the application under test is that the video streaming client adapts to the available throughput by decreasing or increasing the resolution of the received video. Figure 6 depicts how the client effectively adapts to the channel conditions.

[Figure 6: Video Resolution evolution in the Driving Urban Normal scenario.]

However, the objective of the testing carried out in the TRIANGLE testbed is not just to verify that the video streaming client actually adapts to the available maximum throughput but also to check whether this adaptation improves the users' quality of experience.

Table 6 shows a summary of the synthetic-MOS values obtained per scenario in one test case of each domain. The scores obtained in the RES and AEC domains are always high. In the AUE domain, the synthetic MOS associated with the video resolution shows low scores in some of the scenarios because the resolution decreases, reasonably good scores in the time to load the first media frame, and high scores in the playback cut-off ratio. Overall, it can be concluded that the DASH implementation of the video streaming client under test is able to adapt to the changing conditions of the network, maintaining an acceptable rate of video cut-off, rebuffering times, and resources usage.

Table 6: Synthetic MOS values per test case and scenario for the feature "Noninteractive Playback".

The final score in each domain is obtained by averaging the synthetic-MOS values from all the tested network scenarios. Figure 7 shows the spider diagram for the three domains tested. In the User Experience domain, the score obtained is lower than in the other domains, due to the low synthetic-MOS values obtained for the video resolution.

[Figure 7: Spider diagram of the synthetic-MOS scores obtained in the three domains tested.]

The final synthetic MOS for the use case Content Distribution Streaming is obtained as a weighted average of the three domains, representing the overall QoE as perceived by the user. The final score for the Exoplayer version 1.516 and the features tested (Noninteractive Playback and Play and Pause) is 4.2, which means that the low score obtained in the video resolution is compensated by the high scores in other KPIs.

If an application under test has more than one use case, the next steps in the TRIANGLE mark approach would be the aggregation per use case and the aggregation over all use cases. The final score, the TRIANGLE mark, is an estimation of the overall QoE as perceived by the user.

In the current TRIANGLE implementation, the weights in all aggregations are the same. Further research is needed to appropriately define the weights of each domain and each use case in the overall score of the applications.

7. Conclusions

and enables the execution of extensive and repeatable test campaigns to obtain meaningful QoE scores. The TRIANGLE project has also defined a methodology, which is based on the computation and aggregation of KPIs, their transformation into synthetic-MOS values, and their aggregation over the different domains and use cases.

The TRIANGLE approach is a methodology flexible enough to generalize the computation of QoE for any application/service. The methodology has been validated testing the DASH implementation in the Exoplayer App. To confirm the suitability of the weights used in the averaging process and the interpolation parameters, as well as to verify the correlation of the obtained MOS with that scored by users, the authors have started experiments with real users, and initial results are encouraging.

The process described produces a final TRIANGLE mark, a single quality score, which could eventually be used to certify applications after achieving a consensus on the different values of the process (weights, limits, etc.) to use.

Data Availability

The methodology and results used to support the findings of this study are included within the article.
References

[1] ETSI, "Human factors: quality of experience (QoE) requirements for real-time communication services," Tech. Rep. 102 643, 2010.
[2] ITU-T, "P.10/G.100 (2006) Amendment 1 (01/07): new Appendix I - Definition of quality of experience (QoE)," 2007.
[3] F. Kozamernik, V. Steinmann, P. Sunna, and E. Wyckens, "SAMVIQ - A new EBU methodology for video quality evaluations in multimedia," SMPTE Motion Imaging Journal, vol. 114, no. 4, pp. 152–160, 2005.
[4] ITU-T, "G.107: The E-model: a computational model for use in transmission planning," 2015.
[5] J. De Vriendt, D. De Vleeschauwer, and D. C. Robinson, "QoE model for video delivered over an LTE network using HTTP adaptive streaming," Bell Labs Technical Journal, vol. 18, no. 4, pp. 45–62, 2014.
[6] S. Jelassi, G. Rubino, H. Melvin, H. Youssef, and G. Pujolle, "Quality of experience of VoIP service: a survey of assessment approaches and open issues," IEEE Communications Surveys & Tutorials, vol. 14, no. 2, pp. 491–513, 2012.
[7] M. Li, C.-L. Yeh, and S.-Y. Lu, "Real-time QoE monitoring system for video streaming services with adaptive media playout," International Journal of Digital Multimedia Broadcasting, vol. 2018, Article ID 2619438, 11 pages, 2018.
[11] EU H2020 TRIANGLE Project, "Deliverable D2.2: final report on the formalization of the certification process, requirements and use cases," 2017, https://www.triangle-project.eu/project-old/deliverables/.
[12] Q. A. Chen, H. Luo, S. Rosen et al., "QoE Doctor: diagnosing mobile app QoE with automated UI control and cross-layer analysis," in Proceedings of the Internet Measurement Conference (IMC '14), pp. 151–164, ACM, Vancouver, Canada, November 2014.
[13] M. A. Mehmood, A. Wundsam, S. Uhlig, D. Levin, N. Sarrar, and A. Feldmann, "QoE-Lab: towards evaluating quality of experience for future internet conditions," in Testbeds and Research Infrastructure: Development of Networks and Communities (TridentCom 2011), T. Korakis, H. Li, P. Tran-Gia, and H.-S. Park, Eds., vol. 90 of LNICST, pp. 286–301, Springer, Berlin, Germany, 2012.
[14] D. Levin, A. Wundsam, A. Mehmood, and A. Feldmann, "BERLIN: The Berlin Experimental Router Laboratory for Innovative Networking," in Testbeds and Research Infrastructures: Development of Networks and Communities (TridentCom 2010), T. Magedanz, A. Gavras, N. H. Thanh, and J. S. Chase, Eds., vol. 46 of LNICST, pp. 602–604, Springer, Heidelberg, Germany, 2011.