Cloud Computing 2 - Dewakar
The following characteristics set apart cloud from other computing techniques:
● On-Demand Self-Service - resources are provisioned without requiring human interaction with the
service provider
● Ubiquitous Network Access - services can be accessed from any device over the network
● Location-Independent Resource Pooling - the customer has no control over, or knowledge of, the
physical location of the servers
● Rapid Elasticity - scale-out and scale-in happen automatically and rapidly
● Measured Service - you get what you pay for (services and transactions)
Public Cloud
The infrastructure of the public cloud is made available to the general public: resources are provided
over the internet, any user can access them, and the infrastructure is owned by the cloud vendor.
The customer cannot see where the public cloud infrastructure is hosted.
Private Cloud
The infrastructure of a private cloud is made available only to a specific organization and not to
others. Resources in a private cloud can be accessed by internal users, that is, anyone within the
organization, but users outside the organization cannot access them. Commercial data and the
private cloud infrastructure are maintained entirely by the organization itself. A private cloud is
therefore considerably more protected than a public cloud.
Hybrid Cloud
The infrastructure of a hybrid cloud is a combination of more than one cloud, such as public, private,
or community clouds. An organization can host critical data on the private cloud and keep data with
lower security requirements on the public cloud.
Community Cloud
The infrastructure of a community cloud is deployed for several organizations rather than a single
organization, and it supports a specific community or interest group. Organizations that have
similar policies, objectives and targets, or that belong to a specific community, build a shared cloud
datacenter that can be used by all members. A community cloud is based on trust between all its
members, who work through it for their mutual benefit.
Cloud Services
There are five commonly used categories in the spectrum of cloud offerings, intended to suit
different target audiences (the distinction between them is not clear-cut):
● Platform-as-a-service (PaaS) -Provision H/W, OS, Framework, Database
● Software-as-a-service (SaaS) - Provision H/W, OS, Special purpose S/W
● Infrastructure-as-a-service (IaaS) - Provision H/W and organization has control
over OS
● Storage-as-a-service (STaaS) - Provision of DB-like storage services, metered per
gigabyte/month
● Desktop-as-a-service (DaaS) - Provision of Desktop environment within a browser
Platform as a service
1. PaaS stands for platform as a service.
2. PaaS provides a computing platform with a programming language execution
environment.
3. PaaS provide a development and deployment platform for running applications in the
cloud.
4. PaaS constitute the middleware on top of which applications are built.
5. Application management is the core functionality of the middleware.
6. PaaS provides run time environments for the applications.
7. PaaS provides:
Application deployment
Configuring application components
Provisioning and configuring supporting technologies
Software-as-a-service
1. SaaS stands for software as a service.
2. Software as a service (SaaS) allows users to connect to and use cloud-based apps over
the Internet.
3. SaaS is the service with which end users interact directly.
4. It provides a means to free users from complex hardware and software management.
5. In SaaS, customers do not need to purchase the software or acquire a license.
6. They simply access the application website, enter their credentials and billing details,
and can instantly use the application.
7. Customers can customize their software.
8. The application is available to the customer on demand.
9. SaaS can be considered a “one-to-many” software delivery model.
10. In SaaS, applications are built as per user needs.
11. The examples mentioned below show why SaaS is considered a one-to-many
model.
12. Some examples:
Gmail
Google drive
Dropbox
WhatsApp
Characteristics of SaaS:
1. The product sold to customer is application access.
2. The application is centrally managed.
3. The service delivered is one-to-many.
4. The service delivered is an integrated solution delivered under a contract, which means
it is provided as promised.
Infrastructure-as-a-service
1. IaaS stands for infrastructure as a service.
2. Infrastructure as service or IaaS is the basic layer in cloud computing model.
3. IaaS offers servers, network devices, load balancers, database, Web servers etc.
4. IaaS delivers customizable infrastructure on demand.
5. IaaS examples can be categorized in two categories
a. IaaS Management layer
b. IaaS Physical infrastructure
6. Some service providers offer both of the above categories, while others provide only the
management layer.
7. An IaaS management layer may also require integration with other IaaS solutions that
provide the physical infrastructure.
8. Applications are installed and deployed on virtual machines.
9. One example of a virtual machine platform is Oracle VM.
10. Hardware virtualization includes workload partitioning, application isolation,
sandboxing, and hardware tuning.
11. Instead of purchasing hardware, users can access this virtual hardware on a pay-per-use basis.
12. Users can take advantage of the full customization offered by virtualization to deploy
their infrastructure in the cloud.
13. Some virtual machines come with pre-installed operating systems and other software.
14. On other virtual machines, operating systems and other software can be installed as
required.
15. Some examples:
Amazon Web Services (AWS),
Microsoft Azure,
Google Compute Engine (GCE)
1.5 Benefits and Challenges of Cloud Computing
Challenges:
Security and Privacy: Security and privacy of information is the biggest challenge in cloud
computing. Security and privacy issues can be mitigated by employing encryption, security
hardware and security applications (a minimal client-side encryption sketch follows this list).
Portability: Another challenge is that applications should be easy to migrate from one cloud
provider to another; there must be no vendor lock-in. However, this is not yet fully possible
because each cloud provider uses different standards and languages for its platform.
Reliability and Availability: Cloud systems need to be reliable and robust because most
businesses are now becoming dependent on services provided by third parties.
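As a concrete illustration of the encryption point above, here is a minimal, hedged Python sketch of client-side encryption using the third-party cryptography package (chosen purely as an example); the key stays with the data owner, so the cloud provider only ever stores ciphertext:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()                  # kept by the data owner, never handed to the provider
    cipher = Fernet(key)
    token = cipher.encrypt(b"customer record")   # only this ciphertext would be stored in the cloud
    print(cipher.decrypt(token))                 # only the key holder can recover the plaintext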
Business Applications
Cloud computing has made businesses more collaborative and easy by incorporating various
apps such as MailChimp, Chatter, Google Apps for business, and Quickbooks.
Development & Test
Cloud customers can develop and test their complete production environment on demand in the cloud.
Developers save time and expense compared with traditional development and testing scenarios, and
can hand off more quickly from design to working function. The cloud also supports patterns of
iterative, active development, the opportunity to experiment, and the ability to roll out a quick
competitive differentiator.
Cloud-Based Anti-Spam and Anti-Virus Services
1. Total Cost of Ownership. With cloud storage, there is no hardware to purchase, storage to
provision, or capital being used for "someday" scenarios. You can add or remove capacity on
demand, quickly change performance and retention characteristics, and only pay for storage that you
actually use. Less frequently accessed data can even be automatically moved to lower cost tiers in
accordance with auditable rules, driving economies of scale.
2. Time to Deployment. When development teams are ready to execute, infrastructure should never
slow them down. Cloud storage allows IT to quickly deliver the exact amount of storage needed,
right when it's needed. This allows IT to focus on solving complex application problems instead of
having to manage storage systems.
3. Information Management. Centralizing storage in the cloud creates a tremendous leverage point
for new use cases. By using cloud storage lifecycle management policies, you can perform powerful
information management tasks including automated tiering or locking down data in support of
compliance requirements.
2. PaaS:
1. PaaS stands for platform as a service.
2. PaaS provides a computing platform with a programming language execution
environment.
3. PaaS offers the user a development platform.
4. PaaS solutions generally include the infrastructure as well.
5. Pure PaaS offers only the user-level middleware.
6. Some examples:
Google App Engine
Force.com
3. SaaS:
1. SaaS stands for software as a service.
2. Software as a service (SaaS) allows users to connect to and use cloud-based apps over
the Internet.
3. SaaS is the service with which end users interact directly.
4. Some examples:
Gmail
Google drive
Dropbox
WhatsApp
4. User applications:
1. It includes cloud applications through which end users interact.
2. There may be different types of user applications, like scientific, gaming, social etc.
3. Some of the examples are Gmail, Facebook.com, etc.
5. User-level middleware:
1. It includes cloud programming environment and tools.
2. There may be different types of programming environments and tools, depending on the
user applications.
3. Some of the examples of user level middleware are web 2.0, libraries, scripting.
6. Core middleware:
1. It includes cloud hosting platforms.
2. It manages quality of service.
3. It handles execution management.
4. It handles accounting, metering, etc.
5. Virtual machines are the part of core middleware.
7. System infrastructure:
1. It includes cloud resources.
2. Storage hardware
3. Servers, databases are part of it.
Cloud computing has several deployment models, of which the main ones are:
Private: a cloud infrastructure operated solely for an organisation, being accessible only
within a private network and being managed by the organisation or a third party
(potentially off-premise)
Public: a publicly accessible cloud infrastructure
Community: a cloud infrastructure shared by several organisations with shared concerns
Hybrid: a composition of two or more clouds that remain separate but between which there
can be data and application portability
Partner: cloud services offered by a provider to a limited and well-defined number of
parties
Privacy:
In the commercial, consumer context, privacy entails the protection and appropriate use of
the personal information of customers, and the meeting of expectations of customers about
its use. For organisations, privacy entails the application of laws, policies, standards and
processes by which personal information is managed. What is appropriate will depend on
the applicable laws, individuals’ expectations about the collection, use and disclosure of
their personal information and other contextual information, hence one way of thinking
about privacy is just as ‘the appropriate use of personal information under the
circumstances’.
Privacy differs from security, in that it relates to handling mechanisms for personal
information, dealing with individual rights and aspects like fairness of use, notice, choice,
access, accountability and security. Many privacy laws also restrict the transborder flow of
personal data. Security mechanisms, on the other hand, focus on the provision of protection
mechanisms that include authentication, access controls, availability, confidentiality,
integrity, retention, storage, backup, incident response and recovery. Privacy relates to
personal information only, whereas security and confidentiality can relate to all
information.
Trust:
Trust is a broader notion than security as it includes subjective criteria and experience.
Correspondingly, there exist both hard (security-oriented) and soft trust (i. e. non-security
oriented trust) solutions [29]. “Hard” trust involves aspects like authenticity, encryption,
and security in transactions, whereas “soft” trust involves human psychology, brand
loyalty, and user-friendliness [30]. Some soft issues are involved in security, nevertheless.
An example of soft trust is reputation, which is a component of online trust that is perhaps
a company’s most valuable asset [31] (although of course a CSP’s reputation may not be
justified). Brand image is associated with trust and suffers if there is a breach of trust or
privacy.
People often find it harder to trust on-line services than off-line services [32], often
because in the digital world there is an absence of physical cues and there may not be
established centralized authorities [33]. The distrust of on-line services can even negatively
affect the level of trust accorded to organizations that may have long been respected as
trustworthy. There are many different ways in which on-line trust can be established:
security may be one of these (although security, on its own, does not necessarily imply
trust [31]). Some would argue that security is not even a component of trust: Nissenbaum
argues that the level of security does not affect trust [35]. On the other hand, an example of
increasing security to increase trust comes from people being more willing to engage in
ecommerce if they are assured that their credit card numbers and personal data are
cryptographically protected [36].
Cloud providers need to safeguard the privacy and security of personal and confidential
data that they hold on behalf of organisations and users. In particular, it is essential for the
adoption of public cloud systems that consumers and citizens are reassured that privacy
and security is not compromised. It will be necessary to address the problems of privacy
and security raised in this chapter in order to provide and support trustworthy and
innovative cloud computing services that are useful for a range of different situations.
Why Virtualization?
Here are some reasons for going for virtualization
● Lower cost of infrastructure
● Reducing the cost of adding to that infrastructure
● Gathering information across IT set up for increased utilization and collaboration
● Deliver on SLA (service-level agreement) response times during spikes in
production
● Building heterogeneous infrastructures that are responsive
Following are the causes for Virtualization technology in demand:
1. Increased performance and computing capacity:
Nowadays, computers are capable enough to support virtualization technologies.
2. Underutilized hardware and software resources:
Most computers are used only during office hours, so outside those hours these resources
can be used for other work too.
3. Lack of space:
Due to additional storage requirements, companies such as Google and Microsoft keep
expanding their data centers, and virtualization technology provides additional
capabilities to these data centers.
4. Greening initiatives:
Maintaining a data center involves keeping servers on, and servers need to be kept cool.
Infrastructures for cooling have a significant impact on the carbon footprint of a data
center. Hence, reducing the number of servers through server consolidation will definitely
reduce the impact of cooling and power consumption of a data center. Virtualization
technologies can provide an efficient way of consolidating servers.
5. Rise of administrative costs:
Power consumption and cooling costs have now become higher than the cost of IT
equipment.
3.2 Some types of virtualization:
1. Storage virtualization:
Storage virtualization is an array of servers that are managed by a virtual storage system.
The servers aren’t aware of exactly where their data is stored, and instead function more
like worker bees in a hive. It allows storage from multiple sources to be managed and
utilized as a single repository. Storage virtualization software maintains smooth operations,
consistent performance and a continuous suite of advanced functions despite changes,
breakdowns and differences in the underlying equipment.
2. Network virtualization:
Network virtualization refers to the management and monitoring of an entire computer
network as a single administrative entity from a single software-based administrator’s
console. It provides the ability to run multiple virtual networks, each with a separate control and
data plane, co-existing on top of one physical network. The virtual networks can be managed by
individual parties while being kept potentially confidential from each other. Network virtualization
provides a facility to create and provision virtual networks, including logical switches, routers,
firewalls, load balancers, Virtual Private Networks (VPNs), and workload security, within
days or even weeks.
3. Desktop virtualization:
Desktop virtualization is a technology that lets users simulate a workstation load to access a
desktop from a connected device, remotely or locally. Desktop virtualization allows the
user's OS to be stored remotely on a server in the data centre. It allows users to access
their desktop virtually, from any location and from a different machine. Users who want
specific operating systems other than Windows Server will need a virtual
desktop. The main benefits of desktop virtualization are user mobility, portability, and easy
management of software installation, updates, and patches.
4. Application server virtualization:
Application server virtualization abstracts a collection of application servers that provide
the same services as a single virtual application server. Application-server virtualization is
another large presence in the virtualization space and has been around since the inception
of the concept. It is often referred to as ‘advanced load balancing,’ as it spreads
applications across servers and servers across applications. This enables IT departments to
balance the workload of specific software in an agile way that doesn’t overload a specific
server or underload a specific application in the event of a large project or change. In
addition to load balancing it also allows for easier management of servers and applications,
since you can manage them as a single instance. Additionally, it gives way to greater
network security, as only one server is visible to the public while the rest are hidden behind
a reverse proxy network security appliance.
5. Application Virtualization
Application virtualization is often confused with application-server virtualization. What it
means is that applications operate on computers as if they reside naturally on the hard
drive, but instead are running on a server. The ability to use RAM and CPU to run the
programs while storing them centrally on a server, like through Microsoft Terminal
Services and cloud-based software, improves how software security updates are pushed
and how software is rolled out. Application virtualization lets a user access an
application remotely from a server. The server stores all personal information and
other characteristics of the application, which can still run on a local workstation through the
internet. An example would be a user who needs to run two different versions of the
same software. Technologies that use application virtualization are hosted applications
and packaged applications.
6. Server Virtualization:
This is a kind of virtualization in which masking of server resources takes place. The
central (physical) server is divided into multiple virtual servers by changing the identity
numbers and processors, so each virtual server can run its own operating system in an
isolated manner, while each sub-server knows the identity of the central server. It increases
performance and reduces operating costs by dividing the main server's resources into
sub-server resources. It is beneficial for virtual migration, reducing energy consumption,
reducing infrastructure costs, etc.
7. Data virtualization:
This is the kind of virtualization in which data is collected from various sources and
managed in a single place, without users needing to know technical details such as
how the data is collected, stored and formatted. The data is then arranged logically so that
a virtual view of it can be accessed remotely by interested people, stakeholders and users
through various cloud services. Many large companies provide such services, for example
Oracle, IBM, AtScale and CData. Data virtualization can be used to perform various kinds
of tasks such as:
Data-integration
Business-integration
Service-oriented architecture data-services
Searching organizational data
Advantages of virtualization:
1. Increased security:
The ability to control the execution of a guest in a completely transparent manner opens
new possibilities for delivering a secure, controlled execution environment.
2. Managed execution:
Provides sharing, aggregation, emulation, isolation etc.
3. Portability:
User works can be safely moved and executed on top of different virtual machines.
Disadvantages of virtualization:
1. Performance degradation:
Since virtualization interposes an abstraction layer between the guest and the host, the
guest can experience increased latencies.
2. Inefficiency and degraded user experience:
Some of the specific features of the host cannot be exposed by the abstraction layer and
then become inaccessible.
3. Security holes and new threats:
Virtualization opens the door to a new and unexpected form of phishing. In case of
hardware virtualization, malicious programs can preload themselves before the operating
system and act as a thin virtual machine manager.
Virtualization technology examples:
1. Xen
2. VMware
3.3 IMPLEMENTATION LEVELS OF VIRTUALIZATION
Virtualization is a computer architecture technology by which multiple virtual
machines (VMs) are multiplexed in the same hardware machine. The idea of VMs can be
dated back to the 1960s. The purpose of a VM is to enhance resource sharing by many users
and improve computer performance in terms of resource utilization and application
flexibility. Hardware resources (CPU, memory, I/O devices, etc.) or software resources
(operating system and software libraries) can be virtualized in various functional layers. This
virtualization technology has been revitalized as the demand for distributed and cloud
computing increased sharply in recent years.
The idea is to separate the hardware from the software to yield better system efficiency. For
example, computer users gained access to much enlarged memory space when the concept
of virtual memory was introduced. Similarly, virtualization techniques can be applied to
enhance the use of compute engines, networks, and storage. According to a 2009 Gartner
Report, virtualization was the top strategic technology poised to change the computer
industry. With sufficient storage, any computer platform can be installed in another host
computer, even if they use processors with different instruction sets and run with distinct
operating systems on the same hardware.
1. Levels of Virtualization Implementation
A traditional computer runs with a host operating system specially tailored for its hardware
architecture, as shown in Figure 3.1(a). After virtualization, different user applications
managed by their own operating systems (guest OS) can run on the same hardware,
independent of the host OS. This is often done by adding additional software, called
a virtualization layer as shown in Figure 3.1(b). This virtualization layer is known
as hypervisor or virtual machine monitor (VMM) [54]. The VMs are shown in the upper
boxes, where applications run with their own guest OS over the virtualized CPU, memory,
and I/O resources.
The main function of the software layer for virtualization is to virtualize the physical
hardware of a host machine into virtual resources to be used by the VMs, exclusively. This
can be implemented at various operational levels, as we will discuss shortly. The
virtualization software creates the abstraction of VMs by interposing a virtualization layer at
various levels of a computer system. Common virtualization layers include the instruction
set architecture (ISA) level, hardware level, operating system level, library support level, and
application level (see Figure 3.2).
Hardware-level virtualization is performed right on top of the bare hardware. On the one
hand, this approach generates a virtual hardware environment for a VM. On the other hand,
the process manages the underlying hardware through virtualization. The idea is to virtualize
a computer’s resources, such as its processors, memory, and I/O devices. The intention is to
upgrade the hardware utilization rate by multiple users concurrently. The idea was
implemented in the IBM VM/370 in the 1960s. More recently, the Xen hypervisor has been
applied to virtualize x86-based machines to run Linux or other guest OS applications.
1.3 Operating System Level
This refers to an abstraction layer between traditional OS and user applications. OS-level
virtualization creates isolated containers on a single physical server and the OS instances to
utilize the hardware and software in data centers. The containers behave like real servers.
OS-level virtualization is commonly used in creating virtual hosting environments to allocate
hardware resources among a large number of mutually distrusting users. It is also used, to a
lesser extent, in consolidating server hardware by moving services on separate hosts into
containers or VMs on one server.
1.4 Library Support Level
Most applications use APIs exported by user-level libraries rather than using lengthy system
calls to the OS. Since most systems provide well-documented APIs, such an interface becomes
another candidate for virtualization.
Utilizing a non-virtualized environment can be inefficient because when you are not
consuming the application on the server, the compute is sitting idle and can't be used for
other applications. When you virtualize an environment, that single physical
server transforms into many virtual machines. These virtual machines can have
different operating systems and run different applications while still all being hosted on
the single physical server. The consolidation of applications onto virtualized
environments is a more cost-effective approach because you’ll be able to use fewer
physical servers, helping you spend significantly less money on servers and bring cost
savings to your organization.
When a disaster affects a physical server, someone is responsible for replacing or fixing
it—this could take hours or even days. With a virtualized environment, it’s easy to
provision and deploy, allowing you to replicate or clone the virtual machine that’s been
affected. The recovery process would take mere minutes—as opposed to the hours it would
take to provision and set up a new physical server—significantly enhancing the resiliency
of the environment and improving business continuity.
With fewer servers, your IT teams will be able to spend less time maintaining the physical
hardware and IT infrastructure. You’ll be able to install, update, and maintain the
environment across all the VMs in the virtual environment on the server instead of going
through the laborious and tedious process of applying the updates server-by-server. Less
time dedicated to maintaining the environment increases your team’s efficiency and
productivity.
Since the virtualized environment is segmented into virtual machines, your developers can
quickly spin up a virtual machine without impacting a production environment. This is
ideal for Dev/Test, as the developer can quickly clone the virtual machine and run a test on
the environment. For example, if a new software patch has been released, someone can
clone the virtual machine and apply the latest software update, test the environment, and
then pull it into their production application. This increases the speed and agility of an
application.
When you are able to cut down on the number of physical servers you’re using, it’ll lead to
a reduction in the amount of power being consumed. This has two green benefits:
It reduces expenses for the business, and that money can be reinvested elsewhere.
It reduces the carbon footprint of the data center.
3.5 Server Virtualization is the partitioning of a physical server into a number of small
virtual servers, each running its own operating system. These operating systems are known
as guest operating systems. These are running on another operating system known as the
host operating system. Each guest running in this manner is unaware of any other guests
running on the same host. Different virtualization techniques are employed to achieve this
transparency.
To create virtual server instances, you first need to set up virtualization software. This
essential piece of software is called a hypervisor. Its main role is to create a virtualization
layer that separates CPUs/processors, RAM and other physical resources from the virtual
instances. Once you install the hypervisor on your host machine, you can use that
virtualization software to emulate the physical resources and create a new virtual server on
top of it (see the sketch below).
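As a hedged illustration of driving a hypervisor programmatically, the sketch below lists the virtual servers defined on a local QEMU/KVM host; it assumes the libvirt-python bindings are installed and that the host exposes the usual qemu:///system connection:

    import libvirt

    conn = libvirt.open("qemu:///system")      # connect to the local hypervisor
    for dom in conn.listAllDomains():          # every virtual server defined on this host
        state = "running" if dom.isActive() else "stopped"
        print(dom.name(), state)
    conn.close()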
2. Paravirtualization –
Advantages:
Easier
Enhanced Performance
No emulation overhead
Limitations:
Requires modification to a guest operating system
3. Full Virtualization –
It is very much similar to Paravirtualization. It can emulate the underlying hardware when
necessary. The hypervisor traps the machine operations used by the operating system to
perform I/O or modify the system status. After trapping, these operations are emulated in
software and the status codes are returned very much consistent with what the real
hardware would deliver. This is why an unmodified operating system is able to run on top
of the hypervisor.
Example: VMWare ESX server uses this method. A customized Linux version known as
Service Console is used as the administrative operating system. It is not as fast as
Paravirtualization.
Advantages:
No special administrative software is required.
Very little overhead
Limitations:
Hardware Support Required
6. System Level or OS Virtualization –
Runs multiple but logically distinct environments on a single instance of the operating
system kernel. Also called shared kernel approach as all virtual machines share a common
kernel of host operating system. Based on the change root concept “chroot”.
chroot starts during bootup. The kernel uses a root filesystem to load drivers and perform
other early-stage system initialization tasks. It then switches to another root filesystem
using the chroot command to mount an on-disk file system as its final root filesystem and
continues system initialization and configuration within that file system.
The chroot mechanism of system-level virtualization is an extension of this concept. It
enables the system to start virtual servers with their own set of processes that execute
relative to their own filesystem root directories.
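A minimal sketch of the chroot idea described above, assuming root privileges and a prepared guest filesystem at the hypothetical path /srv/guest-rootfs:

    import os

    new_root = "/srv/guest-rootfs"   # hypothetical directory holding a guest root filesystem
    os.chroot(new_root)              # everything below new_root now appears as "/"
    os.chdir("/")                    # step into the new root
    # From here on, this process and its children cannot see files outside new_root;
    # OS-level virtualization extends exactly this isolation into full virtual servers.
    print(os.listdir("/"))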
The main difference between system-level and server virtualization is whether different
operating systems can be run on different virtual systems. If all virtual servers must share
the same copy of the operating system it is system-level virtualization and if different
servers can have different operating systems ( including different versions of a single
operating system) it is server virtualization.
Examples: FreeVPS, Linux Vserver, and OpenVZ are some examples.
Type 1 Hypervisor
Type 1 or bare-metal hypervisors are installed directly on the physical hardware of the host
machine, providing a layer between the hardware and an OS. On top of this layer, you can
install many virtual machines. The machines are not connected in any way and can have
different instances of operating systems and act as different application servers.
Management Console
System administrators and advanced users control the hypervisor remotely through an
interface called a management console.
With it, you can connect to and manage instances of operating systems. You can also turn
servers on and off, transfer operating systems from one server to another (in case of
downtime or malfunction) and perform many other operations.
Type 2 Hypervisor
Unlike type 1, a type 2 hypervisor is installed on top of an existing operating system. This
allows users to utilize their personal computer or server as a host for virtual machines.
Therefore, you have the underlying hardware, an operating system serving as a host, a
hypervisor and a guest operating system.
Note: The guest machine is not aware that it is part of a larger system, and all actions you run
on it are isolated from the host.
Although a VM is isolated, the primary OS is still directly connected to the hardware. This
makes it less secure than type 1 hypervisors.
In environments where security is paramount, this type of hypervisor may not suit your
needs. However, end-users and clients with small businesses may find this type of
environment more fitting.
Having a hosted hypervisor allows more than one instance of an operating system to be
installed. However, you should be careful with resource allocation. In the case of type 2
hypervisors, over-allocation may result in your host machine crashing.
A Virtual Load Balancer provides more flexibility to balance the workload of a server by
distributing traffic across multiple network servers. Virtual load balancing aims to mimic
software-driven infrastructure through virtualization. It runs the software of a physical load
balancing appliance on a virtual machine.
A virtual network load balancer promises to deliver software load balancing by taking the
software of a physical appliance and running it on a virtual machine load balancer. Virtual
load balancers, however, are a short-term solution. The architectural challenges of
traditional hardware appliances remain, such as limited scalability and automation, and
lack of central management (including the separation of control plane and data plane) in
data centers.
The traditional application delivery controller companies build virtual load balancers that
utilize code from legacy hardware load balancers. The code simply runs on a virtual
machine. But these virtual load balancers are still monolithic load balancers with static
capacity.
A virtual load balancer uses the same code from a physical appliance. It also tightly
couples the data and control plane in the same virtual machine. This leads to the same
inflexibility as the hardware load balancer.
For example, while an F5 virtual load balancer lowers the CapEx compared to hardware
load balancers, virtual appliances are in reality hardware-defined software.
Virtual load balancers seem similar to a software load balancer, but the key difference is
that virtual versions are not software-defined. That means virtual load balancers do not
solve the issues of inelasticity, cost and manual operations plagued by traditional
hardware-based load balancers.
Software load balancers, however, are an entirely different architecture designed for high
performance and agility. Software load balancers also offer lower cost without being
locked into any one vendor.
A virtual infrastructure architecture can help organizations transform and manage their IT
system infrastructure through virtualization. But it requires the right building blocks to
deliver results. These include:
Host: A virtualization layer that manages resources and other services for
virtual machines. Virtual machines run on these individual hosts, which
continuously perform monitoring and management activities in the
background. Multiple hosts can be grouped together to work on the same
network and storage subsystems, culminating in combined computing and
memory resources to form a cluster. Machines can be dynamically added or
removed from a cluster.
Hypervisor: A software layer that enables one host computer to
simultaneously support multiple virtual operating systems, also known as
virtual machines. By sharing the same physical computing resources, such as
memory, processing and storage, the hypervisor stretches available resources
and improves IT flexibility.
Virtual machine: These software-defined computers encompass operating
systems, software programs and documents. Managed by a virtual
infrastructure, each virtual machine has its own operating system called a guest
operating system.
The key advantage of virtual machines is that IT teams can provision them
faster and more easily than physical machines without the need for hardware
procurement. Better yet, IT teams can easily deploy and suspend a virtual machine as needed.
(Figure: the Aneka thread programming model. A client application process starts several AnekaThreads, which execute on remote machines; when the application calls join, the calling application thread blocks until the corresponding AnekaThread completes.)
Thread Life-Cycle
The following diagram depicts the possible execution states for local threads supported by the .Net
framework. A thread may, at any given time, be in one or more of these states. When a new thread is
created its state is Unstarted, and transitions to Running when the Start method is invoked. There is also an
additional state called Background which indicates whether a thread is running in the background or the
foreground.
Figure 5 illustrates the life-cycle of an AnekaThread. As both thread types are fundamentally different, one
being local and the other distributed, the possible states they take differ from instantiation to termination.
An instance of AnekaThread transitions from Unstarted to Started when its Start method is invoked. It then
transitions to Queued when it is scheduled for execution at a remote computing node. When execution
begins, its state is Running, and it finally transitions to Completed when all work is done. During any one of
these stages, an AnekaThread may fail, resulting in the Failed state. Other states such as StagingIn and
StagingOut are used when a thread requires files for execution and produces files as output. Programming
threads that require or produce files is beyond the scope of this tutorial and you are encouraged to refer to
the online documentation for more details. Lastly, from your perspective as a programmer you only get to
initiate the first state change from Unstarted to Started. Thereafter, all state changes are carried out by the
Aneka runtime environment.
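AnekaThread itself is a .NET type, but the programmer-initiated start and the blocking join described above can be pictured with ordinary local threads. The Python sketch below is an analogy only and does not use Aneka:

    import threading
    import time

    def work():
        time.sleep(1)                      # stands in for computation that could run on a remote node

    t = threading.Thread(target=work)      # freshly created, i.e. not yet started ("Unstarted")
    t.start()                              # the only transition the programmer initiates directly
    t.join()                               # the calling thread blocks until the work completes
    print(t.is_alive())                    # False: the thread has reached the end of its life cycle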
Problem Decomposition
One of the key challenges in developing parallel applications lies in breaking down a large
problem into smaller units of work, such that they can be executed concurrently on
different machines. Decomposing a problem might not seem very evident at first, but it is
often a good idea to start with a piece of paper. Two common approaches used for
problem decomposition are:
The first approach is the most common and involves identifying repetitive calculations in
the problem. Often these take the form of for or while loops in a sequential program.
Every iteration of the loop is thus potentially a unit of work that can be computed
independently of other iterations. The two examples shown later in this tutorial, calculating Pi and
matrix multiplication, use this approach. If an iteration is dependent
on the values produced in the previous iteration, then the units of work can no longer be
computed independently and some form of communication is required. The following
diagram illustrates this:
The second approach involves identifying sufficiently large but isolated computations in
the problem. Each of these distinct computations would then form a unit of work for
concurrent execution. Unlike the first approach where each unit of work does the same
amount of computation and would thus take more or less the same time to complete, the
second approach involves distinct units of work, each of which may take significantly
different amounts of time to complete. The first example program shown later in this
tutorial, where the results of three trigonometric functions are computed, uses this
approach.
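To make the first decomposition approach concrete, here is a small illustrative sketch (not taken from the tutorial) that splits the iterations of a Pi-approximating loop into independent units of work and runs them concurrently with a Python process pool:

    from multiprocessing import Pool

    def partial_pi(unit):
        start, count, step = unit
        # each unit of work integrates 4/(1+x^2) over its own slice of [0, 1]
        return step * sum(4.0 / (1.0 + ((i + 0.5) * step) ** 2)
                          for i in range(start, start + count))

    if __name__ == "__main__":
        n, workers = 1_000_000, 4
        step, chunk = 1.0 / n, n // workers
        units = [(w * chunk, chunk, step) for w in range(workers)]   # independent iterations
        with Pool(workers) as pool:
            print(sum(pool.map(partial_pi, units)))                  # approximately 3.141592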
Task computing is a wide area of distributed system programming encompassing several different
models of architecting distributed applications. A task represents a program, which requires input files
and produces output files as a result of its execution. Applications are then constituted of a collection of
tasks. These are submitted for execution and their output data are collected at the end of their
execution.
Task computing
A task identifies one or more operations that produce a distinct output and that can be isolated as
a single logical unit.
In practice, a task is represented as a distinct unit of code, or a program, that can be separated
and executed in a remote run time environment.
The middleware is a software layer that enables the coordinated use of multiple resources, which are
drawn from a datacentre or geographically distributed networked computers. A user submits the
collection of tasks to the access point(s) of the middleware, which will take care of scheduling and
monitoring the execution of tasks. Each computing resource provides an appropriate runtime
environment. Task submission is done using the APIs provided by the middleware, whether a Web or
programming language interface. Appropriate APIs are also provided to monitor task status and collect
their results upon completion. It is possible to identify a set of common operations that the middleware
needs to support the creation and execution of task-based applications. These operations are:
• Coordinating and scheduling tasks for execution on a set of remote nodes
• Moving programs to remote nodes and managing their dependencies
• Creating an environment for execution of tasks on the remote nodes
• Monitoring each task’s execution and informing the user about its status
• Access to the output produced by the task
Characterizing a task
A task represents a component of an application that can be logically isolated and
executed separately. A task can be represented by different elements:
• A shell script composing together the execution of several applications
• A single program
• A unit of code (a Java/C++/.NET class) that executes within the context of a specific runtime
environment. A task is characterized by input files, executable code (programs, shell scripts, etc.), and
output files. The runtime environment in which tasks execute is the operating system or an equivalent
sandboxed environment. A task may also need specific software appliances on the remote execution
nodes.
A map function takes a key/value pair as input and produces a list of key/value pairs as output. The type
of output key and value can be different from input key and value:
map::(key1,value1) => list(key2,value2)
A reduce function takes a key and an associated list of values as input and generates a list of new values as
output:
reduce::(key2,list(value2)) => list(value3)
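As an illustrative, non-Hadoop sketch of these signatures, the classic word-count example below writes map and reduce as plain Python functions and groups the intermediate pairs by key in between:

    from collections import defaultdict

    def map_fn(key, value):
        # (key1, value1) -> list of (key2, value2): emit (word, 1) for every word of a document
        return [(word, 1) for word in value.split()]

    def reduce_fn(key, values):
        # (key2, list(value2)) -> list(value3): sum the counts collected for one word
        return [sum(values)]

    docs = {"d1": "the cloud in the cloud", "d2": "cloud computing"}
    groups = defaultdict(list)
    for name, text in docs.items():
        for word, one in map_fn(name, text):
            groups[word].append(one)                   # group intermediate pairs by key
    print({w: reduce_fn(w, vs)[0] for w, vs in groups.items()})
    # {'the': 2, 'cloud': 3, 'in': 1, 'computing': 1}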
MapReduce Execution
A MapReduce application is executed in a parallel manner through two phases. In the first phase, all
map operations can be executed independently of each other. In the second phase, each reduce
operation may depend on the outputs generated by any number of map operations. However, similar to
map operations, all reduce operations can be executed independently. From the perspective of
dataflow, MapReduce execution consists of m independent map tasks and r independent reduce tasks,
each of which may be dependent on m map tasks. Generally the intermediate results are partitioned
into r pieces for r reduce tasks. The MapReduce runtime system schedules map and reduce tasks to
distributed resources. It manages many technical problems: parallelization, concurrency control,
network communication, and fault tolerance. Furthermore, it performs several optimizations to
decrease overhead involved in scheduling, network communication and intermediate grouping of
results.
4.3.1 Parallel efficiency of MapReduce
MapReduce is a distributed/parallel computing algorithm which is divided into two phases:
1. Map Phase
2. Reduce Phase
When it comes to parallel computing, there are three primary parallel computing models:
For a given computation, speedup is defined as T1/TP, where T1 = running time of the
computation on 1 processor and TP = Running time of the computation on P processors.
1. Control dependency: The program or computation may have portions that are
inherently sequential, e.g. a for-loop that updates a variable and uses the updated
value in the subsequent iteration. Amdahl's law illustrates the theoretical speedup
that can be obtained by running a program on a parallel computer with P
processors.
a. S = 1 / (F + (1 − F)/P), where F = sequential portion of the program and, hence,
1 − F = parallelizable portion of the program (a small worked sketch follows this list).
2. Data dependency: Subproblems are dynamically created and allotted, as seen in the
tree-parallel model. Processor utilisation remains sub-optimal during the
initial phases of division. Further, subproblems are allotted to machines by message
passing, thereby causing communication latency.
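A small worked sketch of Amdahl's law, with F and P chosen arbitrarily for illustration:

    def amdahl_speedup(F, P):
        # F: sequential fraction of the program, P: number of processors
        return 1.0 / (F + (1.0 - F) / P)

    print(amdahl_speedup(0.10, 10))    # ~5.26: a 10% sequential part nearly halves the ideal speedup of 10
    print(amdahl_speedup(0.10, 1000))  # ~9.91: however many processors, speedup stays below 1/F = 10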
The performance of algorithms using the above two parallel programming models is as
follows:
The data-parallel model achieves close to ideal speedups if the data is uniformly
distributed among the processors. In other words, for a parallel computer with P
machines, speedup = T1/(T1/P) = P.
The control dependency in the program limits the speedup achieved in the tree-
parallel model.
An example of a batch processing job could be reading all the sale logs from an online shop for a
single day and aggregating it into statistics for that day (number of users per country, the average
spent amount, etc.). Doing this as a daily job could give insights into customer trends.
MapReduce is a programming model that was introduced in a white paper by Google in 2004.
Today, it is implemented in various data processing and storing systems
(Hadoop, Spark, MongoDB, …) and it is a foundational building block of most big data batch
processing systems.
To run a MapReduce job, the user has to implement two functions, map and reduce, and those
implemented functions are distributed by the MapReduce framework to the nodes that contain the data.
The computation performance of MapReduce comes at the cost of its expressivity. When writing
a MapReduce job we have to follow the strict interface (return and input data structure) of
the map and the reduce functions. The map phase generates key-value data pairs from the input
data (partitions), which are then grouped by key and used in the reduce phase by the reduce task.
Everything except the interface of the functions is programmable by the user.
Hadoop, along with its many other features, had the first open-source implementation of
MapReduce. It also has its own distributed file storage called HDFS. In Hadoop, the typical input
into a MapReduce job is a directory in HDFS. In order to increase parallelization, each directory
is made up of smaller units called partitions and each partition can be processed separately by a
map task (the process that executes the map function). This is hidden from the user, but it is
important to be aware of it because the number of partitions can affect the speed of execution.
The map task (mapper) is called once for every input partition and its job is to extract key-value
pairs from the input partition. The mapper can generate any number of key-value pairs from a
single input
The MapReduce framework collects all the key-value pairs produced by the mappers, arranges
them into groups with the same key and applies the reduce function. All the grouped values
entering the reducers are sorted by the framework. The reducer can produce output files which
can serve as input into another MapReduce job, thus enabling multiple MapReduce jobs to
chain into a more complex data processing pipeline.
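One common way to supply the two functions is Hadoop Streaming, which pipes each input partition through external scripts over standard input/output. The word-count mapper and reducer below are a hedged sketch of that style; the file names mapper.py and reducer.py are illustrative, and the reducer relies on the framework having sorted the intermediate pairs by key:

    # mapper.py: receives the lines of one input partition on stdin
    import sys
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")              # emit a tab-separated (key, value) pair per word

    # reducer.py: receives the mappers' pairs, grouped (sorted) by key, on stdin
    import sys
    current, total = None, 0
    for line in sys.stdin:
        word, count = line.rsplit("\t", 1)
        if word != current and current is not None:
            print(f"{current}\t{total}")     # close off the previous key's group
            total = 0
        current = word
        total += int(count)
    if current is not None:
        print(f"{current}\t{total}")         # emit the final group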
A thread is a fundamental unit of CPU utilization that consists of a program counter, a stack, and a
collection of registers. Threads share their process's program and memory space. A thread of execution is
the smallest sequence of programmed instructions that a scheduler can manage independently. Threads
are a built-in feature of the OS.
A task is anything that we want to be completed; it is a higher-level abstraction on top of threads.
It is a collection of software instructions stored in memory. When the software instructions are
placed into memory, they are referred to as a task or process. A task can inform us whether it has been
completed and whether the procedure has produced a result. A task will use the thread pool by
default, which saves resources, because creating threads is costly: a large block of memory has
to be allocated and initialized for the thread stack, and system calls need to be made to create and
register the native thread with the host OS. When requests are frequent and lightweight, as
they are in most server applications, establishing a new thread for each request can consume
substantial computing resources (see the sketch below).
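The thread-pool behaviour described above can be sketched with Python's concurrent.futures (illustrative only; the notes refer to the .NET Task and ThreadPool): tasks are submitted to a pool of reusable threads, and each returned future reports completion and the result.

    from concurrent.futures import ThreadPoolExecutor

    def task(n):
        return n * n                                    # a small unit of work submitted as a task

    with ThreadPoolExecutor(max_workers=4) as pool:     # threads are created once and reused
        futures = [pool.submit(task, i) for i in range(8)]
        results = [f.result() for f in futures]         # result() blocks until each task completes
        print(results)                                  # [0, 1, 4, 9, 16, 25, 36, 49]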
MapReduce is a framework that allows us to design programs that can process massive volumes of
data in parallel on vast clusters of commodity hardware in a dependable manner. MapReduce is a
programming architecture for distributed computing. The MapReduce method consists of two
key tasks: map and reduce. Map translates one collection of data into another, where
individual pieces are broken down into tuples (key/value pairs). Reduce takes the result of map
as an input and merges those data tuples into a smaller collection of tuples. The reduce job is
always executed after the map job, as the name MapReduce indicates.
M-R only does one big split, with the mapped splits not talking to each other at all,
and then reduces everything together. A single tier, no inter-split communication until
reduce, and massively scalable. Great for taking advantage of your share of the cloud.
Security Issues:
Data breaches:
A data breach might be the primary goal of a targeted attack, or might be the consequence of
human error, application vulnerabilities or inadequate security policies. It might involve any material
that is not meant for public distribution, such as personal health information, financial information,
trade secrets, etc.
Insufficient identity, credential, and access management:
Inadequate identity, credential or key management can allow for illegal data access and possibly
catastrophic (huge or extreme) damage to companies or end users.
Data Loss:
Data loss is one of the issues faced in cloud computing. It is also known as data
leakage. Our sensitive data is in the hands of somebody else, and we do not
have full control over our database. So if the security of the cloud service is broken by hackers,
they may get access to our sensitive data or personal files.
Interference of Hackers and Insecure API’s:
As we know, when we talk about the cloud and its services, we are talking about the
internet. We also know that the easiest way to communicate with the cloud is through APIs. So it is
important to protect the interfaces and APIs which are used by external users. In
cloud computing, a few services are available in the public domain, and these are a vulnerable part of
cloud computing because they may be accessed by third
parties. So it may be possible that with the help of these services hackers can easily hack or
harm our data.
User Account Hijacking:
Account hijacking is the most serious security issue in cloud computing. If the account of a user
or an organization is hijacked by a hacker, the hacker has full authority
to perform unauthorized activities.
To address the security issues listed above, SaaS providers will need to incorporate and enhance
security practices used by the managed service providers and develop new ones as the cloud
computing environment evolves. The baseline security practices for the SaaS environment as
currently formulated are discussed in the following sections.
Legal issues that can arise “in the cloud” include liability for copyright infringement,
data breaches, security violations, privacy and HIPAA violations, data loss, data
management, electronic discovery (“e-discovery”), hacking, cybersecurity, and many
other complex issues that can lead to complex litigation and regulatory matters before
courts and agencies in the United States, Europe, and elsewhere.
Cloud security monitoring is the practice of continuously supervising both virtual and physical
servers to analyze data for threats and vulnerabilities. Cloud security monitoring solutions often
rely on automation to measure and assess behaviors related to data, applications and
infrastructure.
Cloud security monitoring solutions can be built natively into the cloud server hosting
infrastructure (like AWS’s CloudWatch, for example) or they can be third-party solutions that
are added to an existing environment (like Blumira). Organizations can also perform cloud
Like a SIEM, cloud security monitoring works by collecting log data across servers. Advanced
cloud monitoring solutions analyze and correlate gathered data for anomalous activity, then send
alerts and enable incident response. A cloud security monitoring service will typically offer:
Visibility. Moving to the cloud inherently lowers an organization's visibility across its
infrastructure, so cloud security monitoring tools should bring a single pane of glass that treats the
cloud as one of the monitored layers and provides visibility into the overall environment.
Auditing. It's a challenge for organizations to manage and meet compliance requirements, so
cloud security monitoring tools should provide robust auditing and monitoring capabilities, and should
monitor behavior in real time to quickly identify malicious activity and prevent an attack.
Integration. To maximize visibility, a cloud monitoring solution should ideally integrate with an
organization's existing services, such as productivity suites (i.e. Microsoft 365 and G Suite),
endpoint security solutions (i.e. Crowdstrike and VMware Carbon Black) and identity and
access management tools.
Maintain compliance. Monitoring is a requirement for nearly every major regulation, from
HIPAA to PCI DSS. Cloud-based organizations must use monitoring tools to avoid compliance
violations.
Identify vulnerabilities. Automated monitoring solutions can quickly alert IT and security
teams about anomalies and help identify patterns that point to risky or malicious behavior.
Overall, this brings a deeper level of observability and visibility to cloud environments.
Prevent loss of business. An overlooked security incident can be detrimental and even result in
shutting down business operations, leading to a decrease in customer trust and satisfaction —
especially if customer data was leaked. Cloud security monitoring can help with business
continuity and data security, while avoiding a potentially catastrophic data breach.
Cloud computing exists in an exciting, complex, and dynamic legal environment spanning both
public and private law: countries actively try to protect the rights of their citizens and encourage
adoption of the cloud through strict, effective, and fair regulatory approaches, and businesses and
cloud service providers (CSPs) work together to craft contracts to the benefit of both. The
standards and requirements contained in these laws and contracts vary considerably, and may be
in conflict with one another. In addition, there are special considerations that must be undertaken
when data flows across borders between organizations that operate in two different jurisdictions.
In this chapter, we discuss the legal landscape in which cloud computing exists; provide a high-
level overview of relevant laws and regulations that govern it, including how countries have
addressed the problem of transborder dataflows, and describe the increasingly important role
played by contracts between cloud service providers and their clients.
Legal issues have arisen with the changing landscape of computing, especially when the service,
data and infrastructure are not owned by the user. With the cloud, the question arises as to who is
in the “possession” of the data. The Cloud provider can be considered as a legal custodian, owner
or possessor of the data thereby causing complexities in legal matters around trademark
infringement, privacy of users and their data, abuse and security. By introducing Cloud design
focusing on privacy, legal as a service on a Cloud and service provider accountability, users can
expect the service providers to be accountable for privacy and data in addition to their regular
SLAs.
Multi-tenancy
In cloud computing, multitenancy means that multiple customers of a cloud vendor are using the
same computing resources. Despite the fact that they share resources, cloud customers aren't
aware of each other, and their data is kept totally separate. Multitenancy is a crucial component
of cloud computing; without it, cloud services would be far less practical. Multitenant
architecture is a feature in many types of public cloud computing,
including IaaS, PaaS, SaaS, containers, and serverless computing.
Many of the benefits of cloud computing are only possible because of multitenancy. Here are
two crucial ways multitenancy improves cloud computing:
Better use of resources: One machine reserved for one tenant isn't efficient, as that one tenant is
not likely to use all of the machine's computing power. By sharing machines among multiple
tenants, use of available resources is maximized.
Lower costs: With multiple customers sharing resources, a cloud vendor can offer their services
to many customers at a much lower cost than if each customer required their own dedicated
infrastructure.
Multitenancy also has potential drawbacks. Possible security risks and compliance issues: Some companies may not be able to store data
within shared infrastructure, no matter how secure, due to regulatory requirements. Additionally,
security problems or corrupted data from one tenant could spread to other tenants on the same
machine, although this is extremely rare and shouldn't occur if the cloud vendor has configured
their infrastructure correctly. These security risks are somewhat mitigated by the fact that cloud
vendors typically are able to invest more in their security than individual businesses can.
The "noisy neighbor" effect: If one tenant is using an inordinate amount of computing power,
this could slow down performance for the other tenants. Again, this should not occur if the cloud
vendor has set up their infrastructure correctly.
Multi-tenancy Issues
Multi-tenancy issues in cloud computing are a growing concern, especially as the industry expands and large enterprises shift their workloads to the cloud. Cloud computing delivers different services over the internet, such as access to servers and databases, and lets users work remotely with shared networking and software. Tenants pay only for the services they use. Still, there are some multi-tenancy issues in cloud computing that you must look out for:
Security: This is one of the most challenging and risky issues in multi-tenancy cloud computing.
There is always a risk of data loss, data theft, and hacking. The database administrator can grant
access to an unauthorized person accidentally. Despite software and cloud computing companies
saying that client data is safer than ever on their servers, there are still security risks.
There is potential for security threats whenever information is stored on remote servers and accessed via the internet, and with cloud computing there is always some risk of hacking. Even encrypted data can be exposed if keys or access credentials are compromised, and an attacker who gains access to a multitenant cloud system can gather data from many businesses and use it to their advantage. Businesses therefore need a high level of trust when putting data on remote servers and running their software on resources provided by the cloud company. The multi-tenancy model also introduces new security challenges and vulnerabilities that require new techniques and solutions: for example, a tenant's data being returned to the wrong tenant, or one tenant affecting another through shared resources.
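To make the data-isolation requirement concrete, here is a minimal sketch of tenant-scoped data access. The table, column names, and in-memory database are hypothetical; the point is simply that every query is filtered by a tenant identifier so one tenant's request can never return another tenant's rows.

# Minimal sketch of tenant-scoped data access in a multi-tenant service.
# Table/column names and the in-memory SQLite database are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (tenant_id TEXT, invoice_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO invoices VALUES (?, ?, ?)",
    [("tenant_a", "inv-1", 100.0), ("tenant_b", "inv-2", 250.0)],
)

def invoices_for(tenant_id):
    # Every query is filtered by tenant_id, so a request made on behalf of
    # tenant_a can never return tenant_b's rows, even though both tenants
    # share the same database and hardware.
    cur = conn.execute(
        "SELECT invoice_id, amount FROM invoices WHERE tenant_id = ?", (tenant_id,)
    )
    return cur.fetchall()

print(invoices_for("tenant_a"))  # [('inv-1', 100.0)] only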
Performance: SaaS applications are hosted at remote locations, which affects response time. They usually take longer to respond and can be noticeably slower than locally hosted server applications. This slowness affects the overall performance of the system and makes it less efficient. In the competitive and growing world of cloud computing, poor performance drags cloud service providers down, so it is important for multi-tenant cloud service providers to keep enhancing their performance.
Less powerful: Many cloud services run as Web 2.0 applications, with new user interfaces and the latest templates, but they lack many essential features. Without adequate features, multi-tenant cloud services can be a nuisance for clients.
Noisy neighbor effect: If one tenant uses an excessive share of the computing resources, other tenants may suffer from reduced computing power. However, this is rare and generally only happens if the cloud architecture and infrastructure are poorly configured.
Interoperability: Users remain restricted by their cloud service providers and cannot go beyond the limitations the providers set to optimize their systems. For example, users often cannot interact with other vendors and service providers, or even communicate with local applications.
Monitoring: Constant monitoring is vital for cloud service providers to detect issues in a multi-tenant cloud system. Because computing resources are shared by many users simultaneously, such systems require continuous monitoring, and any problem must be solved immediately so that the system's efficiency is not disturbed. However, monitoring a multi-tenant cloud system is difficult, as it is hard to find flaws in the system and adjust accordingly.
Capacity optimization: Before giving users access, administrators must know which tenant to place on which network, using up-to-date tools that allocate tenants correctly. Sufficient capacity must be provisioned, or the multi-tenant cloud system will incur increased costs. As demands keep changing, multi-tenant cloud systems must keep upgrading and providing sufficient capacity.
Multi-tenant cloud computing is growing at a rapid pace. It is a requirement for the future and has significant potential, and it will keep improving as more large organizations look to move their workloads to the cloud.
KEY DIFFERENCE
SOAP stands for Simple Object Access Protocol whereas REST stands for
Representational State Transfer.
SOAP is a protocol whereas REST is an architectural pattern.
SOAP uses service interfaces to expose its functionality to client applications, while REST uses uniform resource locators (URLs) to access resources on the server.
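The difference is easiest to see side by side. The sketch below calls a hypothetical "get user" operation twice: once as a SOAP request, where the operation and its arguments are wrapped in an XML envelope and POSTed to a single service endpoint, and once as a REST request, where the user is addressed directly as a resource URL. The endpoint URLs and payload structure are illustrative assumptions.

# Hypothetical endpoints used only to illustrate the SOAP vs. REST calling styles.
import requests

# SOAP: one service endpoint, operation and arguments wrapped in an XML envelope.
soap_envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetUser xmlns="http://example.com/users">
      <UserId>42</UserId>
    </GetUser>
  </soap:Body>
</soap:Envelope>"""
soap_response = requests.post(
    "https://example.com/soap/UserService",  # hypothetical service endpoint
    data=soap_envelope,
    headers={"Content-Type": "text/xml; charset=utf-8"},
)

# REST: the user is addressed directly as a resource via its URL.
rest_response = requests.get("https://example.com/api/users/42")  # hypothetical URL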
Amazon Web Services (AWS)
Amazon Web Services (AWS) is a cloud computing platform with functionalities such as database storage, delivery of content, and secure IT infrastructure for companies, among others. It is known for its on-demand services, namely Elastic Compute Cloud (EC2) and Simple Storage Service (S3). Amazon EC2 and Amazon S3 are essential tools to understand if you want to make the most of the AWS cloud.
Amazon EC2, short for Elastic Compute Cloud, is a service for running cloud servers. Amazon launched EC2 in 2006; it allowed companies to rapidly and easily spin up servers in the cloud instead of having to buy, set up, and manage their own servers on premises. While EC2 also offers bare-metal instances, which let you host a workload on a physical computer rather than a virtual machine, most EC2 server instances are virtual machines housed on Amazon's infrastructure. The underlying server is operated by the cloud provider, so you do not need to set up or maintain the hardware. A vast number of EC2 instance types are available at different prices; generally speaking, the more computing capacity you need, the larger (and more expensive) the EC2 instance you choose. Certain EC2 instance types are optimized for particular kinds of applications, such as GPU instances for the parallel processing of big data workloads. EC2 also offers functionality such as auto-scaling, which automates the process of increasing or decreasing the compute resources available for a given workload, not just making the deployment of a server simpler and quicker. Auto-scaling thus helps to optimize costs and efficiency, especially for workloads with significant variations in volume.
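For instance, with the AWS SDK for Python (boto3), launching an EC2 instance takes only a few calls. The sketch below is illustrative: the AMI ID, key pair name, and region are placeholders you would replace with your own values.

# Sketch of launching a single EC2 instance with boto3 (AWS SDK for Python).
# The AMI ID, key pair name and region are placeholder assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t2.micro",          # small general-purpose instance type
    KeyName="my-key-pair",            # placeholder key pair for SSH access
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched instance:", instance_id)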
Amazon S3, short for Simple Storage Service, is a storage service operating on the AWS cloud. It enables users to store virtually any form of data in the cloud and access that storage over a web interface, the AWS Command Line Interface, or the AWS API. To use S3, you first create what Amazon calls a "bucket", a container that you use to store and retrieve objects; you can set up as many buckets as you like. Amazon S3 is an object storage system that works especially well for massive, unstructured, or highly dynamic data.
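Working with S3 follows the bucket/object model just described. The sketch below uses boto3 to create a bucket and store an object; the bucket name is a placeholder and would need to be globally unique in practice.

# Sketch of creating an S3 bucket and storing an object with boto3.
# The bucket name is a placeholder; S3 bucket names must be globally unique.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

s3.create_bucket(Bucket="my-example-bucket-12345")

# Store an object ("key") in the bucket and read it back.
s3.put_object(
    Bucket="my-example-bucket-12345",
    Key="notes/hello.txt",
    Body=b"Hello from S3",
)
obj = s3.get_object(Bucket="my-example-bucket-12345", Key="notes/hello.txt")
print(obj["Body"].read().decode())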
AppEngine
Google App Engine (GAE) is a cloud computing service (belonging to the platform-as-a-service (PaaS) category) for creating and hosting web-based applications within Google's data centers. GAE web applications are sandboxed and run across many redundant servers so that resources can be scaled up according to current traffic requirements; App Engine assigns additional resources to servers to handle increased load. Google App Engine is a Google platform for developers and businesses to create and run apps on Google's infrastructure. These apps must be written in one of the few supported languages, namely Java, Python, PHP, and Go. GAE also requires the use of Google's query language, with Google Bigtable as the underlying datastore. Applications must comply with these standards, so they must either be developed for GAE from the start or modified to comply.
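As a rough illustration, a minimal App Engine (standard environment) application in Python consists of an app.yaml descriptor and a small web application that App Engine runs and scales for you. The sketch below uses Flask and the python39 runtime; the route and project configuration details are illustrative assumptions.

# main.py, a minimal sketch of a Google App Engine (standard environment) web app.
# It would be deployed alongside an app.yaml descriptor containing, at minimum:
#     runtime: python39
# Project ID, scaling settings, and other configuration are omitted here.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # App Engine routes incoming HTTP requests to this WSGI application and
    # automatically scales the number of instances up or down with traffic.
    return "Hello from App Engine!"

if __name__ == "__main__":
    # Local testing only; in production App Engine serves the app itself.
    app.run(host="127.0.0.1", port=8080)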
Aneka
5. Organizational aspects
The role of the IT department in an enterprise that completely or significantly relies on the cloud poses a
number of challenges from an organizational point of view that must be faced. The lack of control over the
Scientific applications
Scientific applications are increasingly using cloud computing systems and technology. The most relevant option is IaaS solutions, which offer the optimal environment for running bag-of-tasks applications and workflows. Problems that require a higher degree of flexibility in structuring their computation model can leverage platforms such as Aneka.
Healthcare: ECG analysis in the cloud
Cloud computing allows the remote monitoring of a patient's heartbeat and data analysis in
minimal time, and the notification of first-aid personnel and doctors should the data reveal
potentially dangerous conditions. An ECG (electrocardiogram) is the electrical manifestation of the contractile activity of
the heart's myocardium. Wearable computing devices equipped with ECG sensors constantly
monitor the patient's heartbeat. Such information is transmitted to the patient's mobile device,
which will forward it to the cloud-hosted Web service for analysis. The Web service is the SaaS
application that will store ECG data in Amazon S3 and issue a processing request to the scalable
cloud platform. The number of workload engine instances is controlled according to the queue of
each instance, while Aneka controls the number of EC2 instances used to execute the single tasks
for a single ECG job.
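A highly simplified version of the ingestion step in such a pipeline might look like the sketch below: a web service endpoint receives an ECG reading, stores it in Amazon S3, and enqueues a processing request for the analysis back end. The route, bucket, and queue names are hypothetical, and the SQS queue stands in for whatever mechanism the platform (e.g., Aneka) actually uses to dispatch processing jobs.

# Simplified sketch of the ECG ingestion step: receive a reading, store it in S3,
# and enqueue it for analysis. Bucket/queue names and the route are hypothetical.
import json
import uuid

import boto3
from flask import Flask, request

app = Flask(__name__)
s3 = boto3.client("s3")
sqs = boto3.client("sqs")

BUCKET = "ecg-data-bucket"                                                # placeholder
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/ecg-jobs"   # placeholder

@app.route("/ecg", methods=["POST"])
def ingest_ecg():
    reading = request.get_json()            # ECG samples forwarded by the mobile device
    key = f"readings/{uuid.uuid4()}.json"
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(reading).encode())
    # Ask the processing back end (e.g., Aneka-managed workers) to analyze the reading.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=key)
    return {"stored_as": key}, 202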
Social networking:
Social networking applications have grown considerably in the last few years to become the most active
sites on the Web. To sustain their traffic and serve millions of users seamlessly, services such as Twitter
and Facebook have leveraged cloud computing technologies. The possibility of continuously adding
capacity while systems are running is the most attractive feature for social networks, which constantly
increase their user base.
Media applications
Media applications are a niche that has taken considerable advantage of cloud computing
technologies. In particular, video-processing operations, such as encoding, transcoding, composition, and
rendering, are good candidates for a cloud-based environment. These are computationally intensive tasks
that can be easily offloaded to cloud computing infrastructures.
Multiplayer online gaming:
Online multiplayer gaming attracts millions of gamers around the world who share a common experience
by playing together in a virtual environment that extends beyond the boundaries of a normal LAN. Online
games support hundreds of players in the same session, made possible by the specific architecture used to
forward interactions, which is based on game log processing. Players update the game server hosting the
game session, and the server integrates all the updates into a log that is made available to all the players
through a TCP port. The client software used for the game connects to the log port and, by reading the
log, updates the local user interface with the actions of other players.
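A toy version of the log-reading client described above might look like the following sketch. The host, port, and "player:action" line format are illustrative assumptions; the client simply connects to the game server's log port over TCP and applies each update line to the local state.

# Toy sketch of a game client reading the session log over TCP.
# Host, port, and the "player:action" line format are illustrative assumptions.
import socket

HOST, PORT = "game.example.com", 9000   # hypothetical log port

def apply_update(line):
    # Each log line describes another player's action; update the local UI/state.
    player, _, action = line.partition(":")
    print(f"{player} -> {action}")

def follow_game_log():
    with socket.create_connection((HOST, PORT)) as sock:
        buffer = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break                     # server closed the session
            buffer += chunk
            while b"\n" in buffer:
                line, buffer = buffer.split(b"\n", 1)
                apply_update(line.decode())

if __name__ == "__main__":
    follow_game_log()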