Efficiency Analysis of Provisioning Microservices


2016 IEEE 8th International Conference on Cloud Computing Technology and Science



Hamzeh Khazaei, Cornel Barna, Nasim Beigi-Mohammadi, Marin Litoiu
School of Computer Science, York University
Toronto, Ontario, Canada
Email: {hkh,cornel.barna,nbm,mlitoiu}@yorku.ca

Abstract—Microservice architecture has started a new trend for application development and deployment in the cloud due to its flexibility, scalability, manageability and performance. Various microservice platforms have emerged to facilitate the whole software engineering cycle for cloud applications, from design, development and testing to deployment and maintenance. In this paper, we propose a performance analytical model, validated by experiments, to study the provisioning performance of microservice platforms. We design and develop a microservice platform on the Amazon EC2 cloud using the Docker technology family to identify the important elements contributing to the performance of microservice platforms. We leverage the results and insights from the experiments to build a tractable analytical performance model that can be used to perform what-if analysis and capacity planning for large-scale microservices in a systematic manner, with a minimum amount of time and cost.

I. INTRODUCTION

Infrastructure-as-a-Service (IaaS) cloud providers, such as Amazon EC2 and IBM Cloud, deliver on-demand operating system (OS) instances in the form of virtual machines (VMs). A virtual machine manager (VMM), or hypervisor, is usually used to manage all virtual machines on a physical machine. This virtualization technology is now quite mature and can provide good performance and security isolation among VM instances. An individual VM has no awareness of other VMs running on the same physical machine (PM). However, for applications that require higher flexibility at runtime and less isolation, hypervisor-based virtualization may not satisfy the entire set of quality-of-service (QoS) requirements.

A container runs directly on the Linux kernel, with performance isolation and allocation characteristics similar to those of VMs but without the expensive VM runtime management overhead [1], [2]. Containerization of applications, that is, the deployment of an application or its components in containers, has become popular in the cloud service industry. For example, Google provides many of its popular products through a container-based cloud. A Docker container [3] comes with all the software packages an application depends on, providing a fast and simple way to develop and deploy different versions of an application [3]. Container-based services are popularly known as microservices and are being leveraged by many service providers for a number of reasons: (1) to reduce complexity by using tiny services; (2) to scale, remove and deploy parts of the system or application easily; (3) to improve flexibility by using different frameworks and tools; (4) to increase overall scalability; and (5) to improve the resilience of the system. Containers have empowered the usage of microservice architectures by being lightweight, providing fast start-up times, and having low overhead [4].

A flexible computing model combines IaaS-based clouds with container-based PaaS (Platform-as-a-Service) clouds. Platforms such as Nirmata [5], Docker Cloud [6] (previously known as Tutum) and Giant Swarm [7] offer platforms for managing virtual environments made of containers while relying on IaaS public/private clouds as the backend resource providers.

Service availability and service response time are two important quality measures from the cloud user's perspective [8]. Quantifying and characterizing such performance measures requires accurate modeling and coverage of a large parameter space, using models that are tractable and solvable in a timely manner so that they can assist with runtime decisions. This paper considers a container-based PaaS operating on top of a VM-based IaaS and introduces a performance model for microservice provisioning. The model supports microservice management use cases and incorporates the following contributions:

• It supports virtualization at both the VM and container layers;
• It captures the different delays imposed by the microservice platform on users' requests;
• It characterizes service availability and elasticity;
• It provides capacity planning and what-if analysis for microservice platform providers;
• It provides insights into performance vs. cost trade-offs.

The rest of the paper is organized as follows. Section II describes the new trend of emerging microservice platforms. Section III describes the details of the platform that we model in this work. Section IV elaborates the performance analytical model. Sections V and VI present the details of our experiments and the numerical results obtained from the analytical model. In Section VII we survey related work in cloud performance analysis, and finally, Section VIII summarizes our findings and concludes the paper.
Fig. 1: Leveraging both virtualization techniques, i.e., VMs and containers, to offer microservices on an IaaS cloud.

Fig. 2: Conceptual model, including both the microservice platform and the backend public/private cloud.

II. MICROSERVICE PLATFORMS

Recently, a pattern has been adopted by many software-as-a-service providers in which both VMs and containers are leveraged to provide so-called microservices. Microservices is an approach that allows more complex applications to be configured from basic building blocks, where each building block is deployed in a container and the constituent containers are linked together to form the cohesive application. The application's functionality can then be scaled by deploying more containers of the appropriate building blocks, rather than entire new iterations of the full application. Microservice platforms (MSPs) such as Nirmata [5], Docker Cloud [6] and Giant Swarm [7] facilitate the management of this service paradigm. MSPs automate the deployment, scaling and operation of application containers across clusters of physical machines in the cloud. They enable software-as-a-service providers to quickly and efficiently respond to customer demand by scaling applications on demand, seamlessly rolling out new features, and optimizing hardware usage by using only the resources that are needed. Fig. 1 shows the layered architecture in which both virtualization techniques are leveraged to deliver microservices on the cloud.

Fig. 2 depicts the high-level architecture of MSPs and the way they leverage the backend public or private cloud (i.e., Infrastructure-as-a-Service clouds). Various microservice platform providers, such as Nirmata, Docker Tutum and Giant Swarm, implemented their platforms based on this conceptual model.

III. SYSTEM DESCRIPTION

In this section we describe the system under modeling with respect to Fig. 3, which shows the servicing steps of a request in microservice platforms. In microservice platforms (MSPs), a request may originate from two sources: first, direct requests from users (i.e., the microservice users in Fig. 2) who want to deploy a new instance of an application or service; second, runtime requests from applications (e.g., adaptive applications) by which applications adapt to runtime conditions, for example, scaling up the number of containers to cope with a traffic peak. We model these two types of requests together as a single Poisson process.

The steps incurred in servicing a provisioning request in an MSP are shown in Fig. 3. User requests for containers are submitted to a global finite queue and then processed on a first-come, first-served (FCFS) basis. A request that finds the queue full is rejected immediately. Once the request is admitted to the queue, it must wait until the VM Assigning Module (VMAM) processes it. The VMAM finds the most appropriate VM in the user's cluster (based on the placement policy) and then sends the request to that VM so that the Container Provisioning Module (CPM) can initiate and deploy the container. When a request is processed in the CPM, a pre-built or customized container image is used to create a container instance. These images can be loaded from a public repository like Docker Hub [3] or from private repositories.

If there are not enough resources (i.e., VMs) in the MSP to accommodate the new request, the scheduler asks for a VM from the backend IaaS (see Fig. 3). The request is rejected if the application (or the user) has already reached its capacity. Otherwise, a VM is provisioned and the last request for a container is deployed on this new VM. In the IaaS cloud, when a request is processed, a pre-built or customized disk image is used to create a VM instance [8]. In this work we assume that pre-built images fulfill all user requests.

To model the behavior of this system, we design a provisioning performance model that includes all the servicing steps in fulfilling requests for containers. We then solve this model to compute the provisioning performance metrics: request rejection probability, mean response delay, and cluster utilization as functions of variations in workload (request arrival rate), container lifetime and cluster size. We describe our analysis in detail in Section IV, using the symbols and acronyms listed in Table I.

Fig. 3: Servicing steps of a provisioning request in microservice platforms, derived from the conceptual model in Fig. 2.
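To make the servicing pipeline concrete, the following is a minimal Python sketch (ours, not the authors' code) of the flow in Fig. 3: a finite FCFS global queue that rejects requests when the user's capacity Lq = S × M is exhausted, a VMAM step that places the head-of-queue request on a VM with room, and a scale-out to the backend IaaS when no VM has room. The VM and MicroservicePlatform classes are illustrative stand-ins.

```python
from collections import deque

class VM:
    """Stand-in for a worker VM that can host up to M containers."""
    def __init__(self, M):
        self.M, self.containers = M, []
    def has_room(self):
        return len(self.containers) < self.M
    def deploy(self, request):
        self.containers.append(request)      # CPM: create container from image

class MicroservicePlatform:
    def __init__(self, s, S, M):
        self.S, self.M = S, M
        self.queue = deque()                 # global finite queue, capacity Lq = S * M
        self.vms = [VM(M) for _ in range(s)] # cluster never shrinks below s VMs

    def submit(self, request):
        in_system = len(self.queue) + sum(len(v.containers) for v in self.vms)
        if in_system >= self.S * self.M:     # user at capacity: reject immediately
            return "rejected"
        self.queue.append(request)           # FCFS admission
        return "queued"

    def vmam_step(self):
        """VMAM: assign the head-of-queue request; scale out if no VM has room."""
        if not self.queue:
            return
        vm = next((v for v in self.vms if v.has_room()), None)
        if vm is None and len(self.vms) < self.S:
            vm = VM(self.M)                  # ask the backend IaaS for one more VM
            self.vms.append(vm)
        if vm is not None:
            vm.deploy(self.queue.popleft())  # CPM deploys the container
```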

TABLE I: Symbols and corresponding descriptions.

Symbol    Description
λ         Mean arrival rate of requests for containers
1/α       Mean time to provision a VM
1/β       Mean time to decommission a VM
s         Min size of the cluster (VMs)
S         Max size of the cluster (VMs)
M         Max number of containers that can be deployed on a VM
1/μ       Mean container lifetime
u         Utilization of the user's cluster
Lq        Microservice global queue size, Lq = S × M
bpq       Blocking probability in the microservice global queue
Preq      Probability of requesting a VM
Prel      Probability of releasing a VM
λc        Arrival rate of requests for VMs
1/ηc      Mean VM lifetime
1/φ       Mean time to provision a container
wtq       Mean waiting time in the microservice global queue
rt        Mean response time of a request
vmno      Mean number of VMs in the cluster
contno    Mean number of containers in the cluster
util      Mean cluster utilization

IV. ANALYTICAL MODEL

The performance model of the microservice platform is shown in Fig. 4. It is realized as a 3-dimensional continuous-time Markov chain (CTMC) with states labeled (i, j, k): i indicates the number of requests in the microservice global queue, j denotes the number of running containers in the platform, and k is the number of active VMs in the user's cluster. Each VM can accommodate up to M containers, a limit set by the user. Request arrivals can be adequately modeled as a Poisson process [9] with arrival rate λ. Let φ be the rate at which containers can be deployed on a VM and μ the service rate of each container (so that 1/μ is the mean container lifetime). Therefore, the total service rate of a VM is the number of its running containers multiplied by μ. Assume α and β are the rates at which the MSP can obtain and release a VM from the IaaS cloud, respectively.

The scheduler asks for a new VM from the backend IaaS cloud when explicitly ordered by the MSP user (or application), or when the utilization of the host group is equal to or greater than a predefined value. For state (i, j, k), the utilization u is defined as

    $u = \dfrac{i + j}{k \times M}$    (1)

in which M is the maximum number of containers that can run on a single VM. Conversely, if utilization drops below a predefined value, the MSP releases one VM to optimize cost. A VM can be released only when no containers are running on it, so it must be fully decommissioned first. The MSP also keeps a minimum number of VMs (i.e., s) in the cluster regardless of utilization, in order to maintain the availability of the microservices. The user may also set a maximum value for its application(s) (i.e., S), indicating that the MSP cannot request more than S VMs from the IaaS cloud on the user's behalf. Thus the application scales up to at most S VMs in times of high traffic and down to s VMs in times of low utilization. We set the global queue size (i.e., Lq) to the total number of containers that the cluster can accommodate at full capacity (i.e., Lq = S × M). Note that requests are blocked if the user has reached its capacity, regardless of the global queue state.

State (0, 0, s) indicates that there is no request in the queue, no running container, and the cluster consists of s VMs, the minimum number that the user maintains in its host group. From an arbitrary state (i, j, k), five transitions may happen (a solver sketch follows the list):

1) Upon arrival of a new request, the system moves with rate λ to state (i + 1, j, k) if the user still has capacity (i.e., i + j < S × M); otherwise the request is blocked and the system stays in the current state.
2) A container is instantiated, with rate φ, for the request at the head of the global queue, and the system moves to (i − 1, j + 1, k).
3) The service time (i.e., lifetime) of a container finishes, with rate jμ, and the system moves to (i, j − 1, k).
4) If the utilization gets higher than the upper threshold, the scheduler requests a new VM, and the system moves to state (i, j, k + 1) with rate α.
5) If the utilization drops below the lower threshold, the MSP decommissions an idle VM, and the system moves to (i, j, k − 1) with rate β.
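These five transitions are enough to mechanically assemble and solve the CTMC for small parameter values. Below is a minimal NumPy sketch; it is our reconstruction of the model as described, not the authors' code. The default high/low utilization thresholds and the guard conditions (e.g., requiring j < kM before a queued container can start, and dropping any transition whose target state is infeasible) are our reading of the rules above.

```python
import numpy as np

def solve_msp_ctmc(lam, phi, mu, alpha, beta, s, S, M,
                   u_high=0.9, u_low=0.7):
    """Return dict {(i, j, k): steady-state probability} for the CTMC of Fig. 4."""
    Lq = S * M
    states = [(i, j, k)
              for k in range(s, S + 1)
              for j in range(0, k * M + 1)
              for i in range(0, Lq - j + 1)]          # enforce i + j <= Lq
    idx = {st: n for n, st in enumerate(states)}
    Q = np.zeros((len(states), len(states)))

    def add(src, dst, rate):
        if dst in idx:                                # drop infeasible targets
            Q[idx[src], idx[dst]] += rate

    for (i, j, k) in states:
        u = (i + j) / (k * M)
        if i + j < Lq:                                # 1) arrival admitted
            add((i, j, k), (i + 1, j, k), lam)
        if i > 0 and j < k * M:                       # 2) container deployed
            add((i, j, k), (i - 1, j + 1, k), phi)
        if j > 0:                                     # 3) a container finishes
            add((i, j, k), (i, j - 1, k), j * mu)
        if u >= u_high and k < S:                     # 4) scale out: new VM
            add((i, j, k), (i, j, k + 1), alpha)
        if u <= u_low and k > s:                      # 5) scale in: release a VM
            add((i, j, k), (i, j, k - 1), beta)

    np.fill_diagonal(Q, -Q.sum(axis=1))               # generator rows sum to zero
    # Solve pi Q = 0 together with sum(pi) = 1.
    A = np.vstack([Q.T, np.ones(len(states))])
    b = np.zeros(len(states) + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return {st: pi[idx[st]] for st in states}
```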
Fig. 4: Microservice platform model.

Suppose that $\pi_{(i,j,k)}$ is the steady-state probability of the model (Fig. 4) being in state (i, j, k). The blocking probability in the MSP can then be calculated as

    $bp_q = \sum_{(i,j,k) \in \mathcal{S}} \pi_{(i,j,k)} \quad \text{if } i + j = L_q$    (2)

We are also interested in the two probabilities with which the MSP requests ($P_{req}$) or releases ($P_{rel}$) a VM:

    $P_{req} = \sum_{(i,j,k) \in \mathcal{S}} \pi_{(i,j,k)} \quad \text{if } u \ge \text{high-util and } k < S$    (3)

    $P_{rel} = \sum_{(i,j,k) \in \mathcal{S}} \pi_{(i,j,k)} \quad \text{if } u \le \text{low-util and } k > s$    (4)

Using these probabilities, the rates at which the microservice platform requests (λc) or releases (ηc) a VM can be calculated:

    $\lambda_c = \lambda \times P_{req}$    (5)

    $\eta_c = \mu \times P_{rel}$    (6)

In order to calculate the mean waiting time in the queue, we first establish the probability generating function (PGF) of the number of requests in the queue [10]:

    $Q(z) = \sum_{i=0}^{L_q} \pi_{(i,j,k)}\, z^i$    (7)

The mean number of requests in the queue is then

    $q = Q'(1)$    (8)

Applying Little's law [10], the mean waiting time in the global queue is given by

    $wt_q = \dfrac{q}{\lambda\,(1 - bp_q)}$    (9)

The total response time of a request is the sum of the waiting time in the queue, the VM provisioning time from the IaaS when a new VM is needed, and the container provisioning time at the MSP. Thus, the response time can be calculated as

    $rt = wt_q + P_{req} \times \dfrac{1}{\alpha} + \dfrac{1}{\phi}$    (10)

We set α and φ based on our experiments on Amazon EC2 and the Docker ecosystem, which are described in Section V. The mean number of running VMs, the mean number of running containers, and the mean utilization of the system can be calculated as follows:

    $vm_{no} = \sum_{(i,j,k) \in \mathcal{S}} k \cdot \pi_{(i,j,k)}$    (11)

    $cont_{no} = \sum_{(i,j,k) \in \mathcal{S}} j \cdot \pi_{(i,j,k)}$    (12)

    $util = \sum_{(i,j,k) \in \mathcal{S}} \dfrac{i + j}{k \times M} \cdot \pi_{(i,j,k)}$    (13)
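A direct transcription of Eqs. (2) and (8)–(13) in Python, ours rather than the authors' implementation, operating on the steady-state probabilities (for example, those returned by the solver sketched above); the high-util threshold of Eq. (3) is passed in as a parameter:

```python
def provisioning_metrics(pi, lam, phi, alpha, S, M, u_high=0.9):
    """pi: dict mapping state (i, j, k) -> steady-state probability."""
    Lq = S * M
    bp_q = sum(p for (i, j, k), p in pi.items() if i + j == Lq)        # Eq. (2)
    q = sum(i * p for (i, j, k), p in pi.items())                      # q = Q'(1), Eq. (8)
    wt_q = q / (lam * (1.0 - bp_q))                                    # Eq. (9), Little's law
    p_req = sum(p for (i, j, k), p in pi.items()
                if (i + j) / (k * M) >= u_high and k < S)              # Eq. (3)
    rt = wt_q + p_req / alpha + 1.0 / phi                              # Eq. (10)
    vm_no = sum(k * p for (i, j, k), p in pi.items())                  # Eq. (11)
    cont_no = sum(j * p for (i, j, k), p in pi.items())                # Eq. (12)
    util = sum((i + j) / (k * M) * p for (i, j, k), p in pi.items())   # Eq. (13)
    return {"bp_q": bp_q, "wt_q": wt_q, "rt": rt,
            "vm_no": vm_no, "cont_no": cont_no, "util": util}
```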
V. EXPERIMENTAL SETUP AND RESULTS

In this section, we present our microservice platform and discuss the experiments we performed on it. For the experiments we could not use available third-party platforms such as Docker Cloud or Nirmata, as we needed full control of the platform for monitoring, parameter setting and performance measurement. As a result, we created a microservice platform from scratch based on the conceptual architecture presented in Fig. 2. We employed Docker Swarm as the cluster management system, Docker as the container engine, and Amazon EC2 as the backend public cloud. We developed a front-end in Java for the microservice platform that interacts with the cluster management system (i.e., Swarm) through REST APIs. The microservice platform starts with three initial VMs: two configured in worker mode and one in master mode to manage the Docker Swarm cluster. All VMs are of type m3.medium (1 virtual CPU with 3.5 GB of memory). In our deployment, we used Consul as the discovery service, installed on the Swarm manager VM.

For the containers, we used the Ubuntu 16.04 image available on Docker Hub. Each running container was restricted to 512 MB of memory, making the capacity of a VM 7 containers. The Swarm manager's strategy for distributing containers on worker nodes was binpack. The advantage of this strategy is that fewer VMs are used, since Swarm attempts to put as many containers as possible on the current VMs before using another VM.
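As an illustration of the container setup just described, the same 512 MB cap can be expressed with the Docker SDK for Python; note this SDK is our illustrative stand-in here, since the paper's actual front-end is written in Java and talks to Swarm over REST:

```python
import docker

client = docker.from_env()          # connect to the local Docker daemon
container = client.containers.run(
    "ubuntu:16.04",                 # image pulled from Docker Hub, as in Section V
    command="sleep 60",             # placeholder workload (~1-minute lifetime)
    mem_limit="512m",               # 512 MB cap => 7 containers fit on a 3.5 GB VM
    detach=True,
)
print(container.short_id, container.status)
```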
Table II presents the input values for the two scenarios in our experiments.

TABLE II: Range of parameters for experiments.

Parameter             Scenario 1     Scenario 2     Unit
Arrival rate          20             20 . . 40      req/min
VM capacity           7              7              container
Container lifetime    1              2              minute
Desired utilization   50% . . 80%    70% . . 90%    N/A
Initial cluster size  2              2              VM
Max cluster size      10             10             VM
To trigger VM elasticity, we defined two cluster utilization thresholds (based on Eq. 1): an upper threshold signifying that the cluster is overloaded, and a lower threshold showing that the cluster is underloaded. In order to avoid oscillation (a ping-pong effect) in provisioning and releasing VMs, we do not add or remove a VM immediately after a threshold is crossed; rather, we employ Algorithm 1 as the elasticity mechanism.
Algorithm 1: The decision-making algorithm for adding and removing VMs.

    if utilization ≥ upper threshold then          // cluster overloaded
        if heat < 0 then heat ← 0                  // reset any buildup toward VM removal
        heat ← heat + 1
    else if utilization ≤ lower threshold then     // cluster underloaded
        if heat > 0 then heat ← 0                  // reset any buildup toward adding a VM
        heat ← heat − 1
    else                                           // utilization is within range:
        if heat > 0 then heat ← heat − 1           // move heat one unit toward 0
        else if heat < 0 then heat ← heat + 1
    if heat = 6 then
        add a new VM; heat ← 0
    else if heat = −6 then
        remove a VM; heat ← 0
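A compact Python rendering of Algorithm 1, as a sketch under the assumption that the controller is invoked once per iteration (i.e., once a minute, as in our experiments); scale_up and scale_down stand in for the platform's VM add/remove calls:

```python
HEAT_LIMIT = 6   # consecutive out-of-range observations required before acting

class ElasticityController:
    def __init__(self, low, high, scale_up, scale_down):
        self.low, self.high = low, high
        self.scale_up, self.scale_down = scale_up, scale_down
        self.heat = 0

    def observe(self, utilization):
        if utilization >= self.high:           # cluster overloaded
            self.heat = max(self.heat, 0) + 1  # reset buildup toward removal, then count up
        elif utilization <= self.low:          # cluster underloaded
            self.heat = min(self.heat, 0) - 1  # reset buildup toward adding, then count down
        elif self.heat > 0:                    # within range: decay heat toward 0
            self.heat -= 1
        elif self.heat < 0:
            self.heat += 1
        if self.heat == HEAT_LIMIT:
            self.scale_up()
            self.heat = 0
        elif self.heat == -HEAT_LIMIT:
            self.scale_down()
            self.heat = 0
```

Six consecutive out-of-range observations are needed before a VM is added or removed, which provides the hysteresis that prevents the ping-pong effect described above.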
In order to control the experiments' costs, we limited the cluster size to a maximum of 10 running VMs for the application, which gives a maximum capacity of 70 running containers. For the same reason, we set the container lifetime to 1 and 2 minutes for the two scenarios. Under this configuration, our experiments take up to 1000 minutes combined (300 minutes for the first scenario and 700 for the second). The results of our experiments are presented in Fig. 5. Note that the X axis in Fig. 5 is the experiment time, over which we report the average values of the performance indicators for every single minute; hereafter we call each minute of the experiments an iteration.

In the first experiment scenario (Fig. 5(a)), we set the average lifetime of a container to one minute; the lower and upper utilization thresholds are set to 50% and 80%, respectively (the shaded areas in the third plot of Fig. 5(a) show where the cluster is underloaded or overloaded). The arrival rate has a Poisson distribution with a mean value of 20 requests per minute, shown by the blue line in the second plot of Fig. 5(a). In the first plot, the red line shows the number of running VMs and the blue line the number of running containers.

An interesting observation here is the behavior of the system at the beginning of the experiment. Because the workload is very high and the capacity of the cluster is low, the response time becomes very long (i.e., up to approximately 85 s), which is attributed to the long wait in the queue for capacity to become available. Once enough VMs have been provisioned, the queue empties and the response time drops. Occasional spikes appear in the response time when there is no available capacity (and a new VM has to be added to the cluster), but they are less severe. All in all, in scenario 1 the system operates well, with the desired utilization and response time and no rejected requests.

In the second experiment scenario, presented in Fig. 5(b), we increased the lifetime of a container to two minutes and changed the cluster utilization thresholds to 70% and 90%; we also increased the arrival rate from 20 req/min to 40 req/min around iteration 400. The other parameters of the experiments remained the same. We noticed the same high response time at the beginning of the experiment; this time the cluster scaled up to 10 VMs (the maximum allowed number) while the queue still did not get cleared; this is attributed to the longer lifetime of the containers, which makes resource release slower compared to the first scenario. At this point, because the maximum capacity of the cluster had been reached, we witnessed a large number of rejected requests (around iteration 20, the red line in the second plot). After 50 iterations, the behaviour of the cluster is similar to that of scenario 1. Around iteration 400, we started to increase the workload (blue line in the second plot of Fig. 5(b)). This eventually resulted in the utilization of all allowed VMs and the rejection of requests, as there was no capacity available.

Fig. 5: Experimental results for (a) scenario 1 and (b) scenario 2; see Table II for parameter settings.

VI. NUMERICAL VALIDATION AND ANALYSIS

In this section, we first validate the analytical model against the results of the experiments presented in Section V. Second, thanks to the minimal cost and runtime associated with the analytical performance model, we leverage it to study interesting scenarios at large scale with different configurations and parameter settings, to shed some light on MSP provisioning performance.

We use the same parameters, outlined in Table II, for both the experiments and the numerical analysis. The analytical model has been implemented and solved in Python using the NumPy, SciPy, Sympy and Matplotlib libraries [11].

Table III shows the comparison between the results from the analytical model and the experiments for both scenarios. As can be seen, the analytical model and the experimental results are in good agreement, with differences of less than 10%. Note that in the analytical model, for the sake of equilibrium conditions (i.e., steady state), we put a limit on the queue size, while we have no such limitation in the experiments. Also, in the experiments we employ a more sophisticated elasticity policy (i.e., Algorithm 1) compared with the simple threshold approach in the analytical model. These two differences (i.e., the queue limit and the elasticity policies) are the source of the narrow divergence between the analytical model and the experiments.

TABLE III: Corresponding results from the analytical model (AM) and experiment (Exp) for both scenarios.

                          Scenario 1          Scenario 2
Parameter                 AM       Exp        AM       Exp
Response Time             2.115    2.233      2.98     3.168
Utilization               63.3%    64.7%      79.5%    82.4%
Mean No of VMs            4.6      5.15       7.33     7.85
Mean No of Containers     20       22.1       39.99    44.2

Now that the analytical model captures the microservice platforms accurately enough, we leverage it to study interesting scenarios at large scale. Note that the analytical model takes less than a minute to produce its results, while the experiments needed around 1000 minutes to compute the desired performance indicators; more importantly, the cost of the analytical model is negligible compared to the cost of the experiments. Table IV presents the range of individual parameter values, either set or calculated from our experiments, used for the numerical analysis of the analytical model.

Microservice applications may span a large number of VMs and, as a result, incorporate a very large number of containers. Containers may also live longer, depending on the application; for example, consider an email application deployed as microservices that spins up a container for each active user; this container serves the user while he/she is checking or composing emails and is released when the user logs out of the system [12]. Motivated by these facts, we employed the analytical model to investigate the microservice provisioning performance under such assumptions. In general, since the analytical model scales linearly¹ with increasing input parameters, such as the number of VMs and the number of containers per VM, it can be leveraged to study microservice platforms at large scale.

¹ We described a formal approach to prove linear scalability for performance models in [13]. It can be used to prove the linear scalability of the performance model proposed in this paper as well.
TABLE IV: Range of parameters for the analytical model.

Parameter                           Value(s)       Unit
Arrival rate                        16             request/min
VM capacity                         7              container
Mean container provisioning time    0.435*         second
Mean VM provisioning time           104.6*         second
Container lifetime                  8 . . 20       minute
Desired utilization                 70% . . 90%    N/A
Initial cluster size                2              VM
Max cluster size                    28 . . 44      VM

* These values have been obtained from experiments.
We let the mean container lifetime range over [8 . . 20] minutes; we also set various cluster sizes, covering [28 . . 44] VMs. Under this parameter setting we obtained the performance indicators of interest. Fig. 6(a) shows the trend of the total response time when the above control variables change. As can be seen, in order to fulfill a request in under 5 seconds, the container lifetime should not exceed 12 minutes and the cluster should include at least 40 VMs. It can also be noticed that none of the clusters can maintain a response time lower than 25 seconds when the mean lifetime of containers is 20 minutes on average.

Fig. 6(b) shows the probability that a request is rejected due to either a lack of room in the global queue or a lack of capacity in the microservice platform. A linear relationship can be noticed between the container lifetime and the rejection probability. Also, to keep the rejection rate below 5%, the application should employ at least 40 VMs.

We also characterize the utilization of the cluster under various cluster sizes and container lifetimes. As can be seen in Fig. 6(c), we set the desired utilization between 70% and 90%. If the mean container lifetime is 8 minutes, regardless of the cluster size, the utilization is less than 70%, which is not economically desirable. On the other hand, for containers with an average lifetime of 20 minutes, the utilization of the cluster is more than 90%, which is not desirable for the sake of performance and reliability.

In addition to the results presented here, we also characterized the response time, rejection probability, utilization, number of VMs and number of containers for different arrival rates of requests; these results have been omitted due to the page limit.

Fig. 6: Analytical model results; see Table IV for input parameters. (a) Total response time for requests. (b) Blocking probability of requests. (c) Mean utilization of clusters.
VII. RELATED WORK

Performance analysis of cloud computing services has attracted considerable research attention, although most works have considered hypervisor-based virtualization, in which VMs are the sole way of providing an isolated environment for users. Recently, however, container-based virtualization has been gaining momentum due to its advantages over VMs for providing microservices.

Performance analysis of cloud services that considers containers as a virtualization option is in its infancy. Much of the work has focused on comparing implementations of various applications deployed either as VMs or as containers. In [14], the authors showed that containers outperform VMs in terms of performance and scalability: container deployment processed 5x more requests than VM deployment, and containers outperformed VMs by 22x in terms of scalability. This work shows promising performance when using containers instead of VMs for service delivery to end users.

Another study was carried out in [15] to compare the performance of three implementations of an application: native, Docker and KVM.
In general, Docker equals or exceeds KVM performance in every case. The results showed that both KVM and Docker introduce negligible overhead for CPU and memory performance.

The authors in [16] performed a more comprehensive study on the performance evaluation of containers under different deployments. They used various benchmarks to study the performance of native deployment, VM deployment, native Docker and VM Docker. All in all, they showed that, in addition to the well-known security, isolation and manageability advantages of virtualization, running an application in a Docker container in a vSphere VM adds very little performance overhead compared to running the application in a Docker container on a native OS. Furthermore, they found that a container in a VM delivers near-native performance for Redis and most of the micro-benchmark tests.

Amaral et al. [4] evaluated the performance impact of choosing between two models for implementing related processes in containers. In the first approach, master-slave, all child containers are peers of each other and of a parent container that serves to manage them; the second approach, nested-container, involves the parent container being a privileged container and the child containers living in its namespace. Their results showed that the nested-container approach is a suitable model, thanks to improved resource sharing and guaranteed fate sharing among the containers in the same nested-container.

These studies reveal a promising future for using both virtualization techniques in order to deliver secure, scalable and high-performance services to the end user [17], [18]. The recent popularity of microservice platforms such as Docker Cloud, Nirmata and Giant Swarm is attributed to the advantages mentioned above. However, to the best of our knowledge, there is no comprehensive performance model that incorporates the details of the provisioning performance of microservice platforms. In this work, we studied the performance of a PaaS and an IaaS collaborating with each other, leveraging both virtualization techniques to provide fine-grained, secure, scalable and performant microservices.

VIII. CONCLUSIONS

In this paper, we presented our work on the provisioning performance analysis of microservice platforms. We first designed and developed a microservice platform on the Amazon cloud using Docker family technologies. We performed experiments to study important performance indicators, such as total request response time, request rejection probability, cluster size and utilization, while capturing the details of provisioning at both the container and VM layers. We then developed a tractable analytical performance model that showed high fidelity to the experiments. Thanks to the linear scalability of the analytical model and realistic parameters from the experiments, we could study the provisioning performance of microservice platforms at large scale. As a result, by leveraging the model and experiments proposed in this paper, what-if analysis and capacity planning for microservice platforms can be carried out systematically with a minimum amount of time and cost.

ACKNOWLEDGMENTS

We would like to thank Dr. Murray Woodside for his valuable technical comments and input. This research was supported by Fuseforward Solutions Group Ltd., the Natural Sciences and Engineering Research Council of Canada (NSERC), and the Ontario Research Fund for Research Excellence under the Connected Vehicles and Smart Transportation (CVST) project.

REFERENCES

[1] S. Soltesz, H. Pötzl, M. E. Fiuczynski, A. Bavier, and L. Peterson, "Container-based operating system virtualization: a scalable, high-performance alternative to hypervisors," in ACM SIGOPS Operating Systems Review, vol. 41, no. 3. ACM, 2007, pp. 275–287.
[2] W. Felter, A. Ferreira, R. Rajamony, and J. Rubio, "An updated performance comparison of virtual machines and linux containers," technology, vol. 28, p. 32, 2014.
[3] D. Merkel, "Docker: lightweight linux containers for consistent development and deployment," Linux Journal, vol. 2014, no. 239, p. 2, 2014.
[4] M. Amaral, J. Polo, D. Carrera, I. Mohomed, M. Unuvar, and M. Steinder, "Performance evaluation of microservices architectures using containers," in Network Computing and Applications (NCA), 2015 IEEE 14th International Symposium on, Sept 2015, pp. 27–34.
[5] Nirmata, Inc. (2016, 6) Microservices operations and management. [Online]. Available: http://nirmata.com
[6] Docker Cloud. (2016, 6) The Docker platform for dev and ops. [Online]. Available: https://cloud.docker.com
[7] O. Thylmann. (2016, 6) Giant Swarm microservices framework. [Online]. Available: https://giantswarm.io
[8] H. Khazaei, J. Mišić, and V. B. Mišić, "A fine-grained performance model of cloud computing centers," IEEE Transactions on Parallel and Distributed Systems, vol. 24, no. 11, pp. 2138–2147, November 2013.
[9] G. Grimmett and D. Stirzaker, Probability and Random Processes, 3rd ed. Oxford University Press, Jul 2010.
[10] L. Kleinrock, Queueing Systems, Volume 1: Theory. Wiley-Interscience, 1975.
[11] SciPy. (2016, 6) A Python-based ecosystem of open-source software for mathematics, science, and engineering. [Online]. Available: http://scipy.org
[12] J. Beda. (2015, 05) Containers at scale: the Google Cloud Platform and beyond. [Online]. Available: https://speakerdeck.com/jbeda/containers-at-scale
[13] H. Khazaei, J. Mišić, V. B. Mišić, and S. Rashwand, "Analysis of a pool management scheme for cloud computing centers," IEEE Transactions on Parallel and Distributed Systems, vol. 24, no. 5, pp. 849–861, 2013.
[14] A. M. Joy, "Performance comparison between linux containers and virtual machines," in Computer Engineering and Applications (ICACEA), 2015 International Conference on Advances in. IEEE, 2015, pp. 342–346.
[15] W. Felter, A. Ferreira, R. Rajamony, and J. Rubio, "An updated performance comparison of virtual machines and linux containers," in Performance Analysis of Systems and Software (ISPASS), 2015 IEEE International Symposium On. IEEE, 2015, pp. 171–172.
[16] VMware, Inc. (2016, 6) Docker containers performance in VMware vSphere. [Online]. Available: https://blogs.vmware.com/performance/2014/10/docker-containers-performance-vmware-vsphere.html
[17] U. Gupta, "Comparison between security majors in virtual machine and linux containers," arXiv preprint arXiv:1507.07816, 2015.
[18] M. Villamizar, O. Garces, H. Castro, M. Verano, L. Salamanca, R. Casallas, and S. Gil, "Evaluating the monolithic and the microservice architecture pattern to deploy web applications in the cloud," in Computing Colombian Conference. IEEE, 2015, pp. 583–590.
