
Cloud Computing

Unit 1 : Introduction to Cloud Computing


1.0 Overview of Cloud Computing
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network
access to a shared pool of configurable computing resources (e.g. networks, servers, storage,
applications and services) that can be rapidly provisioned and released with minimal
management effort or service-provider interaction. Cloud computing is the use of
networked infrastructure, software and capacity to provide resources to users in an
on-demand environment. It is an emerging consumption and delivery model that enables
provisioning of standardised business and computing services through a shared
infrastructure, in which the end user controls the interaction in order to accomplish the
business task. Computing resources such as hardware, software, networks, storage,
services and interfaces are no longer confined within the four walls of the enterprise.
Cloud computing is an emerging style of computing where applications, data
and resources are provided to users as services over the web.

1.1 Evolution of Cloud Computing



1.3 Characteristics of Cloud Computing
● Cloud computing uses commodity-based hardware as its base - hardware can be replaced
without affecting the cloud.
● It uses a commodity-based software container system - e.g. a service should be able to
move from one cloud provider to any other cloud provider with no effect on the service.
● Virtualization
● Abstraction layer for hardware, software and configuration systems
● Multi-tenant system
● Pay as you go, with no lock-in
● Privacy and security of data
● Flexible migration and restart capabilities
● Autonomic computing - automated restarts, automated resource expansion and
contraction
● Dynamic scaling - horizontal / vertical

5 Essential Characteristics of Cloud Computing



The following characteristics set the cloud apart from other computing techniques:
● On-demand service - without requiring human interaction with the service provider
● Ubiquitous network access - access from any device
● Location-independent resource pooling - no control over, or knowledge of, the physical
location of the server
● Rapid elasticity - scale out and scale in, automatically and rapidly
● Measured service - you get what you pay for; services and transactions are metered (see the sketch below)
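As a rough illustration of the measured-service / pay-as-you-go idea, the sketch below computes a monthly bill from metered usage. All unit prices and usage figures are invented for illustration and do not correspond to any real provider.

```python
# Hypothetical pay-per-use bill: every rate and usage figure below is illustrative only.
RATES = {
    "storage_gb_month": 0.02,   # $ per GB stored per month
    "compute_hours": 0.05,      # $ per VM-hour
    "egress_gb": 0.09,          # $ per GB transferred out
}

usage = {"storage_gb_month": 250, "compute_hours": 720, "egress_gb": 40}

def monthly_bill(usage, rates):
    """Sum each metered quantity multiplied by its unit price."""
    return sum(usage[item] * rates[item] for item in usage)

print(f"Total for the month: ${monthly_bill(usage, RATES):.2f}")
# 250*0.02 + 720*0.05 + 40*0.09 = 5.00 + 36.00 + 3.60 = $44.60
```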

1.4 Types of Cloud and Cloud Services


Cloud computing is a style of computing in which business processes, applications, data
and any type of IT resource can be provided as a service to users.
Cloud has the following delivery models (types of cloud):
● Public
● Private
● Hybrid
● Community

Public Cloud
The infrastructure of a public cloud is made available to the general public: the resources are
provided over the internet and can be accessed by any user, and the infrastructure is owned by the
cloud vendor. The customer has no visibility into where the public cloud infrastructure is hosted.

Private Cloud
The infrastructure of a private cloud is made available only to a specific organization and not to
others. Resources in a private cloud can be accessed by internal users, i.e. anyone within the
organization, but users outside that organization cannot access them. Business data and the
private cloud infrastructure are maintained entirely by the organization itself. A private cloud is
considerably more protected than a public cloud.
Hybrid Cloud
The infrastructure of a hybrid cloud is a combination of more than one cloud, such as public,
private or community clouds. Critical data can be hosted by the organization on the private cloud,
while less security-sensitive data is placed on the public cloud.
Community Cloud
The infrastructure of a community cloud is deployed for several organizations rather than a single
one, but it supports a specific community or interest group. Organizations that have similar
policies, objectives and targets, or that belong to a specific community, build a shared cloud
datacenter that can be used by all of the members. It is based on trust between all the members of
the community cloud, who share its mutual benefits.

Cloud Services
There are five commonly used categories in the spectrum of cloud offerings (the distinction is not
clear-cut; each suits a different target audience):
● Platform-as-a-Service (PaaS) - provision of hardware, OS, framework and database
● Software-as-a-Service (SaaS) - provision of hardware, OS and special-purpose software
● Infrastructure-as-a-Service (IaaS) - provision of hardware, with the organization retaining
control over the OS (see the provisioning sketch after this list)
● Storage-as-a-Service (STaaS) - provision of database-like storage services, metered e.g. per
gigabyte/month
● Desktop-as-a-Service (DaaS) - provision of a desktop environment within a browser
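As a rough sketch of how IaaS provisioning is typically driven programmatically: the endpoint URL, request fields and token below are hypothetical and do not belong to any real provider's API; the `requests` library is assumed to be installed.

```python
import requests

# Hypothetical IaaS REST endpoint and API token -- the URL, fields and
# token below are illustrative, not a real provider's API.
API = "https://iaas.example.com/v1"
TOKEN = "replace-with-your-api-token"

def provision_server(name, cpus, ram_gb, image):
    """Ask the (hypothetical) IaaS API to create a virtual server."""
    response = requests.post(
        f"{API}/servers",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"name": name, "cpus": cpus, "ram_gb": ram_gb, "image": image},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # e.g. {"id": "...", "status": "provisioning"}

if __name__ == "__main__":
    server = provision_server("web-01", cpus=2, ram_gb=4, image="ubuntu-22.04")
    print(server)
```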

Platform as a service
1. PaaS stands for platform as a service.
2. PaaS provides a computing platform with a programming language execution
environment.
3. PaaS provides a development and deployment platform for running applications in the
cloud.
4. PaaS constitutes the middleware on top of which applications are built.
5. Application management is the core functionality of the middleware.
6. PaaS provides runtime environments for the applications.
7. PaaS provides:
● Application deployment
● Configuration of application components
● Provisioning and configuration of supporting technologies



8. For users, PaaS interfaces can take the form of a web-based interface or of programming
APIs and libraries.
9. PaaS solutions generally include the infrastructure as well.
10. Pure PaaS offers only the user-level middleware.
11. Some examples:
● Google App Engine
● Force.com
Characteristics of PaaS:
1. Runtime framework: The runtime framework executes end-user code according to the
policies set by the user and the provider.
2. Abstraction: PaaS offers a way to deploy and manage applications on the cloud, rather
than just virtual machines on top of which the IT infrastructure is built and configured.
3. Automation: PaaS deploys applications automatically.
4. Cloud services: Provides services for the creation, delivery, monitoring, management and
reporting of applications.

Software-as-a-service
1. SaaS stands for software as a service.
2. Software as a service (SaaS) allows users to connect to and use cloud-based apps over
the Internet.
3. SaaS is the service with which end users interact directly.
4. It provides a means to free users from complex hardware and software management.
5. In SaaS, customers do not need to purchase the software or acquire a licence.
6. They simply access the application website, enter their credentials and billing details,
and can instantly use the application.
7. Customers can customize their software.
8. The application is available to the customer on demand.
9. SaaS can be considered a "one-to-many" software delivery model.
10. In SaaS, applications are built as per the users' needs.
11. The examples mentioned below show why SaaS is considered a one-to-many model.
12. Some examples:
● Gmail
● Google Drive
● Dropbox
● WhatsApp
Characteristics of SaaS:
1. The product sold to the customer is application access.
2. The application is centrally managed.
3. The service delivered is one-to-many (a minimal multi-tenant sketch follows this list).
4. The service delivered is an integrated solution delivered as contracted, i.e. provided as
promised.
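A minimal sketch of the one-to-many (multi-tenant) idea, using Python's built-in sqlite3; the table and column names are illustrative only.

```python
import sqlite3

# One application instance, many tenants: every row carries a tenant_id so
# customers share the same schema and code but never see each other's data.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE invoices (tenant_id TEXT, number TEXT, amount REAL)")
db.executemany(
    "INSERT INTO invoices VALUES (?, ?, ?)",
    [("acme", "INV-1", 120.0), ("acme", "INV-2", 80.0), ("globex", "INV-1", 999.0)],
)

def invoices_for(tenant_id):
    """Every query is scoped to the calling tenant."""
    rows = db.execute(
        "SELECT number, amount FROM invoices WHERE tenant_id = ?", (tenant_id,)
    )
    return rows.fetchall()

print(invoices_for("acme"))    # [('INV-1', 120.0), ('INV-2', 80.0)]
print(invoices_for("globex"))  # [('INV-1', 999.0)]
```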
Infrastructure-as-a-service
1. IaaS stands for infrastructure as a service.
2. Infrastructure as a service, or IaaS, is the basic layer in the cloud computing model.
3. IaaS offers servers, network devices, load balancers, databases, web servers, etc.
4. IaaS delivers customizable infrastructure on demand.
5. IaaS offerings can be grouped into two categories:
a. IaaS management layer
b. IaaS physical infrastructure
6. Some service providers provide both categories; some provide only the management layer.
7. An IaaS management layer then requires integration with other IaaS solutions that
provide the physical infrastructure.
8. Applications are installed and deployed on virtual machines.
9. One example of a virtual machine platform is Oracle VM.
10. Hardware virtualization includes workload partitioning, application isolation,
sandboxing, and hardware tuning.
11. Instead of purchasing hardware, users can access virtual hardware on a pay-per-use basis.
12. Users can take advantage of the full customization offered by virtualization to deploy
their infrastructure in the cloud.
13. Some virtual machines come with pre-installed operating systems and other software.
14. On other virtual machines, operating systems and software can be installed as required.
15. Some examples:
● Amazon Web Services (AWS)
● Microsoft Azure
● Google Compute Engine (GCE)
1.5 Benefits and Challenges of Cloud Computing



Benefits of Cloud Computing
Reduced Cost
Organizations that want to reduce the cost of managing and maintaining IT can shift toward the
resources of a cloud computing vendor. Using cloud provider services, organizations keep their
applications up to date without having to purchase and install them on their own systems.
Flexibility
A main reason for the popularity of cloud computing is flexibility: users have the ability to
access data anywhere and anytime, for example from home or on holiday anywhere in the world.
A user who is away from the office and wants to access data can connect through a virtual office
quickly and easily. Applicable devices include laptops, desktops, smartphones, etc. with an
internet connection.
Availability and Reliability
Availability of cloud resources is high because the vendor keeps them available 24x7; they are
also more reliable, the chances of failure are minimal, and disaster recovery gets an immediate
response. One can log in and access the information from anywhere.
Simplicity
Simplicity means that a user does not require training or a strong technical background to work on
a cloud; anyone with a little knowledge of hardware and software can use cloud resources.
Greener
Cloud computing is naturally a green technology since it enables resource sharing among users,
so each user does not need a large data center that consumes a large amount of power. Users can
get anything from the cloud at any time and from anywhere.
Centralized
Because the system is centralized, you can easily apply patches and upgrades. This means your
users always have access to the latest software versions.
Mobility
Cloud users do not need to carry their personal computers, because they can access their
documents anytime, anywhere.
Unlimited Storage Capacity
Cloud computing supports virtually unlimited data storage capacity. Users can store hundreds of
petabytes (a petabyte is a million gigabytes), compared with a personal computer's typical
capacity of 500 GB or 1 TB, so there is no need to worry about running out of space for your data.

Challenges:



Cloud computing, an emergent technology, poses many challenges in different aspects
of data and information handling. Some of these are discussed below:

Security and Privacy: Security and Privacy of information is the biggest challenge to
cloud computing. Security and privacy issues can be overcome by employing encryption,
security hardware and security applications.
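As a small illustration of the encryption mitigation mentioned above, the sketch below encrypts data on the client side before it is handed to a cloud provider, using Fernet from the third-party `cryptography` package; key management is deliberately left out.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# Encrypt data before it leaves the organization, so the cloud provider only
# ever sees ciphertext. Key management (where the key lives, how it is
# rotated) is the hard part and is omitted here.
key = Fernet.generate_key()        # keep this key outside the cloud
cipher = Fernet(key)

record = b"customer: Jane Doe, card ending 4242"
ciphertext = cipher.encrypt(record)     # safe to upload
plaintext = cipher.decrypt(ciphertext)  # only possible with the key

assert plaintext == record
print(ciphertext[:40], b"...")
```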

Portability: Another challenge is that applications should be easily migratable from one cloud
provider to another; there must be no vendor lock-in. This is not yet possible because each
cloud provider uses different standards and languages for its platform.

Interoperability: An application on one platform should be able to incorporate services from
other platforms. This is made possible via web services, but developing such web services is
very complex.

Computing Performance: Data-intensive applications on the cloud require high network
bandwidth, which results in high cost. Low bandwidth does not meet the desired computing
performance of a cloud application.

Reliability and Availability: It is necessary for cloud systems to be reliable and robust
because most businesses are now becoming dependent on services provided by third parties.

1.6 Applications of Cloud Computing


Cloud computing has applications in almost all fields, such as business, entertainment, data
storage, social networking, management, education, art and global positioning systems. Some
widely known cloud computing applications are discussed below:

Business Applications

Cloud computing has made businesses more collaborative and easy by incorporating various
apps such as MailChimp, Chatter, Google Apps for business, and Quickbooks.
Development & Test
Cloud customers can develop and test their complete product on demand in the cloud. Developers
save time and expense compared with traditional development and testing scenarios, enabling a
quicker hand-off from design to working function. The cloud also supports iterative, agile
development, the opportunity to experiment, and the ability to roll out a competitive
differentiator quickly.
Cloud-Based Anti-Spam and Anti-Virus Services

A number of organizations use cloud services that perform anti-spam filtering and deliver
antivirus services. Even where the majority of services are hosted internally by the organization,
these functions can easily be placed in the cloud instead.
IT Education and Research
The IT field is swiftly heading in the direction of cloud computing, because the cloud supports
multiple types of deployment (public, private, hybrid or community), multiple application
programming models, and an extensible framework that enables educators and researchers to
develop their own programming models and application schedulers [23]. Because of this platform,
the software industry is also concentrating on shifting from developing applications for PCs to
developing them for data centers.

1.7 Cloud Storage


Cloud storage is a cloud computing model that stores data on the Internet through a cloud
computing provider who manages and operates data storage as a service. It’s delivered on
demand with just-in-time capacity and costs, and eliminates buying and managing your
own data storage infrastructure. This gives you agility, global scale and durability, with
“anytime, anywhere” data access.

Storing data in the cloud lets IT departments transform three areas:

1. Total Cost of Ownership. With cloud storage, there is no hardware to purchase, storage to
provision, or capital being used for "someday" scenarios. You can add or remove capacity on
demand, quickly change performance and retention characteristics, and only pay for storage that you
actually use. Less frequently accessed data can even be automatically moved to lower cost tiers in
accordance with auditable rules, driving economies of scale.

2. Time to Deployment. When development teams are ready to execute, infrastructure should never
slow them down. Cloud storage allows IT to quickly deliver the exact amount of storage needed,
right when it's needed. This allows IT to focus on solving complex application problems instead of
having to manage storage systems.

3. Information Management. Centralizing storage in the cloud creates a tremendous leverage point
for new use cases. By using cloud storage lifecycle management policies, you can perform powerful
information management tasks including automated tiering or locking down data in support of
compliance requirements.
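A minimal sketch of the kind of lifecycle/tiering rule described above; the tier names and age thresholds are arbitrary examples, not any provider's defaults.

```python
from datetime import date, timedelta

# Toy lifecycle policy: pick a storage tier from the number of days since an
# object was last accessed. Tier names and thresholds are arbitrary examples.
def choose_tier(last_access, today=None):
    today = today or date.today()
    idle_days = (today - last_access).days
    if idle_days <= 30:
        return "hot"        # frequently accessed, fastest/most expensive tier
    if idle_days <= 180:
        return "cool"       # infrequent access, cheaper per GB
    return "archive"        # rarely accessed, cheapest, slower retrieval

print(choose_tier(date.today() - timedelta(days=7)))    # hot
print(choose_tier(date.today() - timedelta(days=90)))   # cool
print(choose_tier(date.today() - timedelta(days=400)))  # archive
```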

1.8 Cloud Services requirements


1. Efficiency / cost reduction
By using cloud infrastructure, you don’t have to spend huge amounts of money on
purchasing and maintaining equipment.
2. Data security
The cloud offers many advanced security features that help ensure data is securely stored
and handled. Cloud storage providers implement baseline protections for their platforms
and the data they process, such as authentication, access control, and encryption.
3. Scalability
Different companies have different IT needs: a large enterprise of 1000+ employees will not have
the same IT requirements as a start-up. Using the cloud is a great solution because it enables
enterprises to efficiently and quickly scale up or down according to business demand (a minimal
autoscaling sketch appears after this list).
4. Mobility
Cloud computing allows mobile access to corporate data via smartphones and devices,
which is a great way to ensure that no one is ever left out of the loop. Staff with busy
schedules, or who live a long way away from the corporate office, can use this feature to
keep instantly up-to-date with clients and coworkers.
5. Disaster recovery
Data loss is a major concern for all organizations, along with data security. Storing your
data in the cloud guarantees that data is always available, even if your equipment like
laptops or PCs, is damaged. Cloud-based services provide quick data recovery for all kinds
of emergency scenarios.
6. Control
The cloud gives you complete visibility and control over your data. You can easily decide
which users have what level of access to which data.
7. Market reach
Developing in the cloud enables users to get their applications to market quickly.
8. Automatic Software Updates
Cloud-based applications automatically refresh and update themselves.
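A minimal autoscaling sketch for the scalability requirement above; the CPU thresholds and instance limits are arbitrary illustrative values.

```python
# Toy horizontal-scaling rule: add instances when average CPU is high,
# remove them when it is low. All thresholds and limits are arbitrary.
def desired_instances(current, avg_cpu_percent, min_instances=2, max_instances=20):
    if avg_cpu_percent > 80:          # overloaded: scale out
        current += 1
    elif avg_cpu_percent < 20:        # underused: scale in
        current -= 1
    return max(min_instances, min(max_instances, current))

print(desired_instances(current=4, avg_cpu_percent=92))  # 5
print(desired_instances(current=4, avg_cpu_percent=10))  # 3
print(desired_instances(current=2, avg_cpu_percent=10))  # 2 (never below the minimum)
```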

1.9 Cloud and dynamic infrastructure



1. Service management
This facility is provided to cloud IT services by the cloud service providers. It includes the
visibility, automation and control needed to deliver first-class IT services.
2. Asset management
Here the assets and property involved in providing the cloud services are managed.
3. Virtualization and consolidation
Consolidation is an effort to reduce the cost of a technology by improving its operating
efficiency and effectiveness. It means migrating from a large number of resources to fewer
ones, which is achieved through virtualization technology.
4. Information infrastructure
It helps business organizations achieve information compliance, availability, retention and
security objectives.
5. Energy efficiency
Here the IT infrastructure of the organization is made sustainable and energy-efficient, so that
it is not likely to damage or adversely affect anything else.
6. Security
The cloud infrastructure is responsible for risk management. Risk management refers to the
risks involved in the services being provided by the cloud service providers.
7. Resilience
This infrastructure provides resilience, meaning the services are resilient: the infrastructure is
protected on all sides and IT operations are not easily affected.

1.10 Cloud adoption


Cloud adoption means adopting a service or technology from a cloud service provider.
1. Here "cloud" means the cloud environment in which the cloud services are operated.
2. "Adoption" means accepting the services of a new technology.
3. Adoption means following a new or existing trend or technology.
4. Cloud adoption is suitable for low-priority business applications.
5. It supports interactive applications that combine two or more data sources.



6. For example, if a marketing company needs to grow its business across the whole country
in a short span of time, it needs a quick promotion across the country.
7. Cloud adoption is useful when recovery management and backup/recovery-based
implementations are required.
8. Considering the above key points, we conclude that it is most suitable for applications
that are modular and loosely coupled.
9. It works well with research and development projects.
10. This includes the testing of new services and design models, and applications that can be
accommodated on small servers.
11. Applications that require different levels of infrastructure throughout the day or
throughout the month should be deployed through the cloud.
12. Applications whose demand is unknown can also be deployed using clouds.
Benefits of cloud adoption:
1. Data security
2. Increased resource sharing
3. Flexibility
4. Business agility
5. Facilitates innovation
6. Great efficiency at lower price
7. Better collaboration
8. Better backup



Chapter 2: Cloud Computing Architecture

Cloud computing is a utility-oriented and Internet-centric way of delivering IT services on
demand, as seen in the figure below.
Cloud computing architecture includes:
1. IaaS, Infrastructure as a Service
2. PaaS, Platform as a Service
3. SaaS, Software as a Service

fig: Cloud computing Architecture

Cloud infrastructure can be heterogeneous in nature because a variety of resources are used,
such as:
● Clusters
● Networked PCs
● Databases
● Cloud applications
● Cloud programming tools
● Hosting platforms
● Virtual machines, etc.
From the diagram above, we will discuss:
1. IaaS
2. PaaS
3. SaaS
4. User applications
5. User-level middleware
6. Core middleware
7. System infrastructure
1. IaaS:
1. IaaS stands for infrastructure as a service.
2. Infrastructure as a service, or IaaS, is the basic layer in the cloud computing model.
3. IaaS offers servers, network devices, load balancers, databases, web servers, etc.
4. IaaS offerings can be grouped into two categories:
a. IaaS management layer
b. IaaS physical infrastructure
5. Some service providers provide both categories; some provide only the management layer.
6. An IaaS management layer then requires integration with other IaaS solutions that
provide the physical infrastructure.
7. The main technology behind IaaS is hardware virtualization.
8. Some examples:
● Amazon Web Services (AWS)
● Microsoft Azure
● Google Compute Engine (GCE)

2. PaaS:
1. PaaS stands for platform as a service.
2. PaaS provides a computing platform with a programming language execution
environment.
3. The PaaS offered to the user is a development platform.
4. PaaS solutions generally include the infrastructure as well.
5. Pure PaaS offers only the user-level middleware.
6. Some examples:
● Google App Engine
● Force.com

3. SaaS:
1. SaaS stands for software as a service.
2. Software as a service (SaaS) allows users to connect to and use cloud-based apps over
the Internet.
3. SaaS is the service with which end users interact directly.
4. Some examples:
● Gmail
● Google Drive
● Dropbox
● WhatsApp
4. User applications:
1. It includes the cloud applications through which end users interact.
2. There may be different types of user applications, like scientific, gaming, social etc.
3. Some of the examples are Gmail, Facebook.com, etc.
5. User-level middleware:
1. It includes cloud programming environment and tools.
2. There may be different types of programming environments and tools, depending on the
user applications.
3. Some of the examples of user level middleware are web 2.0, libraries, scripting.
6. Core middleware:
1. It includes cloud hosting platforms.
2. It manages quality of service.
3. Execution management.
4. Accounting, metering etc.
5. Virtual machines are the part of core middleware.
7. System infrastructure:
1. It includes cloud resources.
2. Storage hardware
3. Servers, databases are part of it.

Cloud computing has several deployment models, of which the main ones are:
Private: a cloud infrastructure operated solely for an organisation, being accessible only
within a private network and being managed by the organisation or a third party
(potentially off-premise)
Public: a publicly accessible cloud infrastructure
Community: a cloud infrastructure shared by several organisations with shared concerns
Hybrid: a composition of two or more clouds that remain separate but between which there
can be data and application portability
Partner: cloud services offered by a provider to a limited and well-defined number of
parties

Cloud design and implementation using SOA


As more firms move file storage to the cloud, it makes sense to employ cloud computing
and Service-Oriented Architecture (SOA) together. Cloud computing enables users to quickly and
easily develop services suited to their clients without needing to consult an IT staff. One
disadvantage of combining SOA and cloud is that some factors, such as security and
availability, are not fully assessed. The integration of current data and systems into the cloud
solution is a significant problem for enterprises when combining cloud computing and
Service-Oriented Architecture. It is crucial to remember that not every IT function can be
outsourced to the cloud.

Elasticity, self-service provisioning, standards-based interfaces, and pay-as-you-go are some of
the major characteristics of the cloud. To achieve these characteristics, the cloud's basis must be
properly conceived and properly architected. Cloud services aid businesses by bringing SOA's
best practices and business-process emphasis to the next level. To offer services with the desired
level of flexibility and scalability, cloud service providers must develop solutions using a
service-oriented approach.
SOA is an essential component of the 'component as a service' layer of the cloud service stack.
SOA's structure enables business processes to be emphasized so as to provide interoperability and
quick delivery of functionality. It facilitates system-to-system integration by generating loosely
coupled services that may be utilized for a variety of applications. The concept of SOA is similar
to object-oriented programming, where objects are generalized so that they can be reused for
multiple purposes.
The cloud’s services and structure should be built using a modular architecture approach.
Component based, modular architecture allows for flexibility and reuse. This flexibility is
supported by SOA. SOA is more than just a technology strategy and technique for
developing IT Systems. Companies have embraced SOA concepts to improve
understanding between business and IT and to assist business in adapting to change.

Fig: Stack of service categories


Service orientation is a design paradigm comprising a specific set of design principles. Its
most important feature is its reliance on the separation-of-concerns design philosophy.
Separation of concerns (SoC) is based on the simple fact that a problem becomes easier to
approach if it is divided into small units that are handled separately.
Example of SoC
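A minimal, hypothetical illustration of separation of concerns: each class below handles exactly one concern behind a small interface, so the pieces stay loosely coupled and reusable. All names are invented for the example.

```python
# Separation of concerns: each class handles exactly one concern and exposes
# a small interface, so concerns can be developed, tested and replaced
# independently.
class Catalog:
    def price_of(self, item):
        return {"book": 12.0, "pen": 2.5}[item]

class Billing:
    def invoice(self, amount):
        return f"Invoice for ${amount:.2f}"

class OrderService:
    """Coordinates the other services without knowing their internals."""
    def __init__(self, catalog, billing):
        self.catalog, self.billing = catalog, billing

    def place_order(self, item):
        return self.billing.invoice(self.catalog.price_of(item))

print(OrderService(Catalog(), Billing()).place_order("book"))  # Invoice for $12.00
```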

Security, trust and privacy


Cloud computing refers to the underlying infrastructure for an emerging model of service
provision that has the advantage of reducing cost by sharing computing and storage
resources, combined with an on-demand provisioning mechanism relying on a pay-per-use
business model. These new features have a direct impact on information technology (IT)
budgeting but also affect traditional security, trust and privacy mechanisms. The
advantages of cloud computing—its ability to scale rapidly, store data remotely and share
services in a dynamic environment—can become disadvantages in maintaining a level of
assurance sufficient to sustain confidence in potential customers. Some core traditional
mechanisms for addressing privacy (such as model contracts) are no longer flexible or
dynamic enough, so new approaches need to be developed to fit this new paradigm. In this
chapter, we assess how security, trust and privacy issues occur in the context of cloud
computing and discuss ways in which they may be addressed.

Privacy:
In the commercial, consumer context, privacy entails the protection and appropriate use of
the personal information of customers, and the meeting of expectations of customers about
its use. For organisations, privacy entails the application of laws, policies, standards and
processes by which personal information is managed. What is appropriate will depend on
the applicable laws, individuals’ expectations about the collection, use and disclosure of
their personal information and other contextual information, hence one way of thinking
about privacy is just as ‘the appropriate use of personal information under the
circumstances’



Personal information describes facts, communications or opinions which relate to the
individual and which it would be reasonable to expect him or her to regard as intimate or
sensitive and therefore about which he or she might want to restrict collection, use or
sharing.
To summarise, privacy is regarded as a human right in Europe, whereas in America it has
been traditionally viewed more in terms of avoiding harm to people in specific contexts. It
is a complex but important notion and correspondingly the collection and processing of
personal information is subject to regulation in many countries across the world. Hence
cloud business scenarios need to take this into account.
Security:
Security is defined as “Preservation of confidentiality, integrity and availability of
information; in addition, other properties such as authenticity, accountability, non-
repudiation and reliability can also be involved.”
Note that security is actually one of the core privacy principles, as considered in the
previous subsection. Correspondingly, it is a common requirement under the law that if a
company outsources the handling of personal information or confidential data to another
company, it has some responsibility to make sure the outsourcer uses “reasonable security”
to protect those data. This means that any organization creating, maintaining, using or
disseminating records of PII must ensure that the records haven't been tampered with, and
must take precautions to prevent misuse of the information. Specifically, to ensure the
security of the processing of such information, data controllers must implement appropriate
technical and organizational measures to protect it against:
• Unauthorised access or disclosure: in particular where the processing involves the
transmission of data over a network
• Destruction: accidental or unlawful destruction or loss
• Modification: inappropriate alteration
• Unauthorised use: all other unlawful forms of processing

Privacy differs from security, in that it relates to handling mechanisms for personal
information, dealing with individual rights and aspects like fairness of use, notice, choice,
access, accountability and security. Many privacy laws also restrict the transborder flow of
personal data. Security mechanisms, on the other hand, focus on the provision of protection
mechanisms that include authentication, access controls, availability, confidentiality,
integrity, retention, storage, backup, incident response and recovery. Privacy relates to
personal information only, whereas security and confidentiality can relate to all
information.

Trust:
Trust is a broader notion than security as it includes subjective criteria and experience.
Correspondingly, there exist both hard (security-oriented) and soft trust (i. e. non-security
oriented trust) solutions [29]. “Hard” trust involves aspects like authenticity, encryption,
and security in transactions, whereas "soft" trust involves human psychology, brand loyalty,
and user-friendliness [30]. Some soft issues are involved in security, nevertheless.
An example of soft trust is reputation, which is a component of online trust that is perhaps
a company’s most valuable asset [31] (although of course a CSP’s reputation may not be
justified). Brand image is associated with trust and suffers if there is a breach of trust or
privacy.
People often find it harder to trust on-line services than off-line services [32], often
because in the digital world there is an absence of physical cues and there may not be
established centralized authorities [33]. The distrust of on-line services can even negatively
affect the level of trust accorded to organizations that may have been long respected as
trustworthy. There are many different ways in which on-line trust can be established:
security may be one of these (although security, on its own, does not necessarily imply
trust [31]). Some would argue that security is not even a component of trust: Nissenbaum
argues that the level of security does not affect trust [35]. On the other hand, an example of
increasing security to increase trust comes from people being more willing to engage in
ecommerce if they are assured that their credit card numbers and personal data are
cryptographically protected [36]

Cloud providers need to safeguard the privacy and security of the personal and confidential
data that they hold on behalf of organisations and users. In particular, it is essential for the
adoption of public cloud systems that consumers and citizens are reassured that privacy
and security are not compromised. The problems of privacy and security raised in this chapter
will need to be addressed in order to provide and support trustworthy and innovative cloud
computing services that are useful for a range of different situations.



Chapter 3: Cloud Virtualization technology
3.1 Overview of Virtualization
● Virtualization presents a logical view of computing resources rather than a physical one -
the power to compute in virtualized environments.
● It is a technique that has been used in large mainframe computers for 30+ years. It is
used to manage a group of computers together, instead of managing resources
separately.

Virtualization is an abstraction layer (the hypervisor) that decouples the physical hardware
from the operating system to deliver greater IT resource utilization and flexibility.
Virtualization can bring the following benefits:
● save money
● increased control
● simplified disaster recovery
● business readiness assessment

Why Virtualization?
Here are some reasons for adopting virtualization:
● Lower cost of infrastructure
● Reduced cost of adding to that infrastructure
● Gathering information across the IT set-up for increased utilization and collaboration
● Delivering on SLA (service-level agreement) response times during spikes in
production
● Building heterogeneous infrastructures that are responsive
Following are the causes for Virtualization technology in demand:
1. Increased performance and computing capacity:
Now a days computers are enough capable to support virtualization technologies.
2. Underutilized hardware and software resources:
Most computers are used only during office hours, so outside those hours the same
resources can be used for other work.
3. Lack of space:
Growing storage requirements force companies such as Google and Microsoft to expand
their data centers; virtualization technology provides additional capacity within these
data centers.
4. Greening initiatives:
Maintaining a data center involves keeping servers on, and servers need to be kept cool.
Infrastructures for cooling have a significant impact on the carbon footprint of a data
center. Hence, reducing the number of servers through server consolidation will definitely
reduce the impact of cooling and power consumption of a data center. Virtualization
technologies can provide an efficient way of consolidating servers.
5. Rise of administrative costs:
Power consumption and cooling costs have now become higher than the cost of IT
equipment.
3.2 Some types of virtualization:
1. Storage virtualization:
Storage virtualization is an array of servers that are managed by a virtual storage system.
The servers aren’t aware of exactly where their data is stored, and instead function more
like worker bees in a hive. It makes managing storage from multiple sources to be
managed and utilized as a single repository. Storage virtualization software maintains
smooth operations, consistent performance and a continuous suite of advanced functions
despite changes, break down and differences in the underlying equipment.
2. Network virtualization:
Network virtualization refers to the management and monitoring of an entire computer
network as a single administrative entity from a single software-based administrator’s
console. It provides the ability to run multiple virtual networks, each with a separate control and
data plane, co-existing on top of one physical network. The virtual networks can be managed by
individual parties that are kept confidential from each other. Network virtualization provides a
facility to create and provision virtual networks (logical switches, routers, firewalls, load
balancers, Virtual Private Networks (VPNs), and workload security) within days or even weeks.
3. Desktop virtualization:
Desktop virtualization is technology that lets users simulate a workstation load to access a
desktop from a connected device remotely or locally. Desktop virtualization allows the
users’ OS to be remotely stored on a server in the data centre. It allows the user to access
their desktop virtually, from any location by a different machine. Users who want
specific operating systems other than Windows Server will need to have a virtual
desktop. Main benefits of desktop virtualization are user mobility, portability, easy
management of software installation, updates, and patches.
4. Application server virtualization:
Application server virtualization abstracts a collection of application servers that provide
the same services as a single virtual application server. Application-server virtualization is
another large presence in the virtualization space, and has been around since the inception
of the concept. It is often referred to as ‘advanced load balancing,’ as it spreads
applications across servers, and servers across applications.This enables IT departments to
balance the workload of specific software in an agile way that doesn’t overload a specific
server or underload a specific application in the event of a large project or change. In
addition to load balancing it also allows for easier management of servers and applications,
since you can manage them as a single instance. Additionally, it gives way to greater
network security, as only one server is visible to the public while the rest are hidden behind
a reverse proxy network security appliance.
5. Application Virtualization
Application virtualization is often confused with application-server virtualization. What it
means is that applications operate on computers as if they reside naturally on the hard
drive, but instead are running on a server. The ability to use RAM and CPU to run the
programs while storing them centrally on a server, like through Microsoft Terminal
Services and cloud-based software, improves how software security updates are pushed,
and how software is rolled out. Application virtualization helps a user to have remote
access of an application from a server. The server stores all personal information and
other characteristics of the application but can still run on a local workstation through the
internet. Example of this would be a user who needs to run two different versions of the
same software. Technologies that use application virtualization are hosted applications
and packaged applications.
6. Server Virtualization:
This is a kind of virtualization in which masking of server resources takes place. The
central (physical) server is divided into multiple virtual servers by changing the identity
numbers and processors, so each system can run its own operating system in an isolated
manner, while each sub-server knows the identity of the central server. It increases
performance and reduces operating cost by turning main server resources into sub-server
resources. It is beneficial for virtual migration, reducing energy consumption, reducing
infrastructure cost, etc.

7. Data virtualization:
This is the kind of virtualization in which data is collected from various sources and managed
in a single place, without users needing to know the technical details of how the data is
collected, stored and formatted. The data is then arranged logically so that its virtual view can
be accessed remotely by interested people, stakeholders and users through various cloud
services. Many large companies provide data virtualization services, such as Oracle, IBM,
AtScale and CData. It can be used to perform various kinds of tasks, such as:
● Data integration
● Business integration
● Service-oriented architecture data services
● Searching organizational data

Advantages of virtualization:
1. Increased security:
The ability to control the execution of a guest in a completely transparent manner opens
new possibilities for delivering a secure, controlled execution environment.
2. Managed execution:
Provides sharing, aggregation, emulation, isolation etc.
3. Portability:
User workloads can be safely moved and executed on top of different virtual machines.
Disadvantages of virtualization:
1. Performance degradation:
Since virtualization interposes an abstraction layer between the guest and the host, the
guest can experience increased latencies.
2. Inefficiency and degraded user experience:
Some of the specific features of the host cannot be exposed by the abstraction layer and
then become inaccessible.
3. Security holes and new threats:
Virtualization opens the door to a new and unexpected form of phishing. In case of
hardware virtualization, malicious programs can preload themselves before the operating
system and act as a thin virtual machine manager.
Virtualization technology examples:
1. Xen
2. VMware
3.3 IMPLEMENTATION LEVELS OF VIRTUALIZATION
Virtualization is a computer architecture technology by which multiple virtual
machines (VMs) are multiplexed in the same hardware machine. The idea of VMs can be
dated back to the 1960s. The purpose of a VM is to enhance resource sharing by many users
and improve computer performance in terms of resource utilization and application
flexibility. Hardware resources (CPU, memory, I/O devices, etc.) or software resources
(operating system and software libraries) can be virtualized in various functional layers. This
virtualization technology has been revitalized as the demand for distributed and cloud
computing increased sharply in recent years.
The idea is to separate the hardware from the software to yield better system efficiency. For
example, computer users gained access to much enlarged memory space when the concept
of virtual memory was introduced. Similarly, virtualization techniques can be applied to
enhance the use of compute engines, networks, and storage. According to a 2009 Gartner
Report, virtualization was the top strategic technology poised to change the computer
industry. With sufficient storage, any computer platform can be installed in another host
computer, even if they use processors with different instruction sets and run with distinct
operating systems on the same hardware.
1. Levels of Virtualization Implementation
A traditional computer runs with a host operating system specially tailored for its hardware
architecture, as shown in Figure 3.1(a). After virtualization, different user applications
managed by their own operating systems (guest OS) can run on the same hardware,
independent of the host OS. This is often done by adding additional software, called
a virtualization layer as shown in Figure 3.1(b). This virtualization layer is known
as hypervisor or virtual machine monitor (VMM) [54]. The VMs are shown in the upper
boxes, where applications run with their own guest OS over the virtualized CPU, memory,
and I/O resources.
The main function of the software layer for virtualization is to virtualize the physical
hardware of a host machine into virtual resources to be used by the VMs, exclusively. This
can be implemented at various operational levels, as we will discuss shortly. The
virtualization software creates the abstraction of VMs by interposing a virtualization layer at
various levels of a computer system. Common virtualization layers include the instruction
set architecture (ISA) level, hardware level, operating system level, library support level, and
application level (see Figure 3.2).



1.1 Instruction Set Architecture Level



At the ISA level, virtualization is performed by emulating a given ISA using the ISA of the host
machine. For example, MIPS binary code can run on an x86-based host machine with the
help of ISA emulation. With this approach, it is possible to run a large amount of legacy
binary code written for various processors on any given new hardware host machine.
Instruction set emulation leads to virtual ISAs created on any hardware machine. The basic
emulation method is code interpretation. An interpreter program interprets the source
instructions into target instructions one by one. One source instruction may require tens
or hundreds of native target instructions to perform its function. Obviously, this process is
relatively slow. For better performance, dynamic binary translation is desired. This approach
translates basic blocks of dynamic source instructions into target instructions. The basic blocks
can also be extended to program traces or super blocks to increase translation efficiency.
Instruction set emulation requires binary translation and optimization. A virtual instruction
set architecture (V-ISA) thus requires adding a processor-specific software translation layer
to the compiler.
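A toy sketch of the code-interpretation approach described above: each source instruction is decoded and carried out by several host operations, one instruction at a time. The three-instruction "ISA" is invented purely for this example.

```python
# Toy instruction-set interpreter: each source instruction is decoded and
# carried out by several host (Python) operations, one instruction at a time.
def interpret(program, registers):
    for op, *args in program:
        if op == "LOAD":            # LOAD rd, imm
            rd, imm = args
            registers[rd] = imm
        elif op == "ADD":           # ADD rd, rs1, rs2
            rd, rs1, rs2 = args
            registers[rd] = registers[rs1] + registers[rs2]
        elif op == "PRINT":         # PRINT rs
            print(registers[args[0]])
        else:
            raise ValueError(f"unknown opcode {op}")
    return registers

program = [("LOAD", "r1", 5), ("LOAD", "r2", 7),
           ("ADD", "r3", "r1", "r2"), ("PRINT", "r3")]
interpret(program, registers={})   # prints 12
```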

1.2 Hardware Abstraction Level

Hardware-level virtualization is performed right on top of the bare hardware. On the one
hand, this approach generates a virtual hardware environment for a VM. On the other hand,
the process manages the underlying hardware through virtualization. The idea is to virtualize
a computer’s resources, such as its processors, memory, and I/O devices. The intention is to
upgrade the hardware utilization rate by multiple users concurrently. The idea was
implemented in the IBM VM/370 in the 1960s. More recently, the Xen hypervisor has been
applied to virtualize x86-based machines to run Linux or other guest OS applications.
1.3 Operating System Level
This refers to an abstraction layer between traditional OS and user applications. OS-level
virtualization creates isolated containers on a single physical server and the OS instances to
utilize the hardware and software in data centers. The containers behave like real servers.
OS-level virtualization is commonly used in creating virtual hosting environments to allocate
hardware resources among a large number of mutually distrusting users. It is also used, to a
lesser extent, in consolidating server hardware by moving services on separate hosts into
containers or VMs on one server.
1.4 Library Support Level
Most applications use APIs exported by user-level libraries rather than using lengthy system
calls by the OS. Since most systems provide well-documented APIs, such an interface
becomes another candidate for virtualization. Virtualization with library interfaces is
possible by controlling the communication link between applications and the rest of a system
through API hooks. The software tool WINE has implemented this approach to support
Windows applications on top of UNIX hosts. Another example is the vCUDA which allows
applications executing within VMs to leverage GPU hardware acceleration.
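The API-hook idea can be mimicked in miniature by interposing a wrapper between an application and a library call. The sketch below hooks Python's own time.time; real library-level virtualization such as WINE or vCUDA intercepts native library interfaces in the same spirit.

```python
import time

# Miniature "API hook": the application keeps calling time.time(), but a
# wrapper is interposed that intercepts and augments the call.
_real_time = time.time

def hooked_time():
    value = _real_time()
    print(f"[hook] time.time() intercepted -> {value:.0f}")
    return value

time.time = hooked_time            # install the hook

def application():
    return time.time()             # unchanged application code

application()
time.time = _real_time             # remove the hook
```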
1.5 User-Application Level
Virtualization at the application level virtualizes an application as a VM. On a traditional
OS, an application often runs as a process. Therefore, application-level virtualization is also
known as process-level virtualization. The most popular approach is to deploy high level
language (HLL) VMs. In this scenario, the virtualization layer sits as an application program
on top of the operating system, and the layer exports an abstraction of a VM that can run
programs written and compiled to a particular abstract machine definition. Any program
written in the HLL and compiled for this VM will be able to run on it. The Microsoft .NET
CLR and Java Virtual Machine (JVM) are two good examples of this class of VM.
3.4 Five benefits of virtualization
Virtualizing your environment can increase scalability while simultaneously reducing expenses;
the following are just a few of the many benefits that virtualization can bring to your
organization:
1. Slash your IT expenses

Utilizing a non-virtualized environment can be inefficient because when the application on a
server is not being used, the compute sits idle and can't be used for other applications. When you
virtualize an environment, that single physical server transforms into many virtual machines.
These virtual machines can have different operating systems and run different applications while
still all being hosted on the single physical server. Consolidating applications onto virtualized
environments is a more cost-effective approach because you'll be able to use fewer physical
servers, helping you spend significantly less money on servers and bring cost savings to your
organization.

2. Reduce downtime and enhance resiliency in disaster recovery situations

When a disaster affects a physical server, someone is responsible for replacing or fixing
it—this could take hours or even days. With a virtualized environment, it’s easy to
provision and deploy, allowing you to replicate or clone the virtual machine that’s been
affected. The recovery process would take mere minutes—as opposed to the hours it would
take to provision and set up a new physical server—significantly enhancing the resiliency
of the environment and improving business continuity.



3. Increase efficiency and productivity

With fewer servers, your IT teams will be able to spend less time maintaining the physical
hardware and IT infrastructure. You’ll be able to install, update, and maintain the
environment across all the VMs in the virtual environment on the server instead of going
through the laborious and tedious process of applying the updates server-by-server. Less
time dedicated to maintaining the environment increases your team’s efficiency and
productivity.

4. Control independence and DevOps

Since the virtualized environment is segmented into virtual machines, your developers can
quickly spin up a virtual machine without impacting a production environment. This is
ideal for Dev/Test, as the developer can quickly clone the virtual machine and run a test on
the environment. For example, if a new software patch has been released, someone can
clone the virtual machine and apply the latest software update, test the environment, and
then pull it into their production application. This increases the speed and agility of an
application.

5. Move to be more green-friendly (organizational and environmental)

When you are able to cut down on the number of physical servers you’re using, it’ll lead to
a reduction in the amount of power being consumed. This has two green benefits:

● It reduces expenses for the business, and that money can be reinvested elsewhere.
● It reduces the carbon footprint of the data center.
3.5 Server Virtualization
Server virtualization is the partitioning of a physical server into a number of smaller virtual
servers, each running its own operating system. These operating systems are known as guest
operating systems, and they run on another operating system known as the host operating
system. Each guest running in this manner is unaware of any other guests running on the same
host. Different virtualization techniques are employed to achieve this transparency.

How Does Server Virtualization Work?

To create virtual server instances you first need to set up virtualization software. This
essential piece of software is called a hypervisor. Its main role is to create a virtualization
layer that separates CPUs/processors, RAM and other physical resources from the virtual
instances. Once you install the hypervisor on your host machine, you can use that
virtualization software to emulate the physical resources and create a new virtual server on
top of them.
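A minimal sketch of booting a virtual server with QEMU/KVM; it assumes qemu-system-x86_64 is installed, that a disk image already exists at the illustrative path shown, and that the host supports KVM.

```python
import subprocess

# Boot a virtual server with QEMU/KVM. Assumes qemu-system-x86_64 is
# installed, /var/vms/web01.qcow2 exists, and the host CPU/kernel support KVM;
# adjust paths and sizes for a real setup.
cmd = [
    "qemu-system-x86_64",
    "-enable-kvm",                 # use hardware-assisted virtualization
    "-m", "2048",                  # 2 GB of RAM for the guest
    "-smp", "2",                   # 2 virtual CPUs
    "-drive", "file=/var/vms/web01.qcow2,format=qcow2",
    "-nographic",                  # serial console instead of a GUI window
]
subprocess.run(cmd, check=True)
```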

Types of Server virtualization :


1. Hypervisor –

A Hypervisor or VMM (virtual machine monitor) is a layer that exists between the
operating system and hardware. It provides the necessary services and features for the
smooth running of multiple operating systems. It identifies traps, responds to privileged
CPU instructions, and handles queuing, dispatching, and returning the hardware requests.
A host operating system also runs on top of the hypervisor to administer and manage the
virtual machines.
2. Para Virtualization –
It is based on Hypervisor. Much of the emulation and trapping overhead in software
implemented virtualization is handled in this model. The guest operating system is
modified and recompiled before installation into the virtual machine.
Due to the modification in the Guest operating system, performance is enhanced as the
modified guest operating system communicates directly with the hypervisor and emulation
overhead is removed. Example: Xen primarily uses Paravirtualization, where a customized
Linux environment is used to support the administrative environment known as domain 0.

Advantages:
● Easier
● Enhanced performance
● No emulation overhead
Limitations:
● Requires modification to a guest operating system
3. Full Virtualization –
It is very much similar to Paravirtualization. It can emulate the underlying hardware when
necessary. The hypervisor traps the machine operations used by the operating system to
perform I/O or modify the system status. After trapping, these operations are emulated in
software and the status codes are returned very much consistent with what the real
hardware would deliver. This is why an unmodified operating system is able to run on top
of the hypervisor.
Example: VMWare ESX server uses this method. A customized Linux version known as
Service Console is used as the administrative operating system. It is not as fast as
Paravirtualization.



Advantages:
● No modification to the guest operating system is required.
Limitations:
● Complex
● Slower due to emulation
● Installation of new device drivers is difficult.
4. Hardware-Assisted Virtualization –
It is similar to Full Virtualization and Paravirtualization in terms of operation except that it
requires hardware support. Much of the hypervisor overhead due to trapping and emulating
I/O operations and status instructions executed within a guest OS is dealt with by relying
on the hardware extensions of the x86 architecture. Unmodified OS can be run as the
hardware support for virtualization would be used to handle hardware access requests,
privileged and protected operations, and to communicate with the virtual machine.
Examples: AMD-V (Pacifica) and Intel VT (Vanderpool) provide hardware support for
virtualization.
Advantages:
● No modification to a guest operating system is required.
● Very little hypervisor overhead
Limitations:
● Hardware support required (the sketch below shows how to check for it on a Linux host)
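On a Linux host you can check for these extensions by looking for the vmx (Intel VT-x) or svm (AMD-V) flags in /proc/cpuinfo, as in the minimal sketch below.

```python
# Check /proc/cpuinfo (Linux only) for the CPU flags that indicate
# hardware-assisted virtualization: "vmx" for Intel VT-x, "svm" for AMD-V.
def virtualization_support():
    with open("/proc/cpuinfo") as f:
        flags = {
            word
            for line in f
            if line.startswith("flags")
            for word in line.split()
        }
    if "vmx" in flags:
        return "Intel VT-x"
    if "svm" in flags:
        return "AMD-V"
    return None

print(virtualization_support() or "no hardware virtualization support detected")
```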
5. Kernel level Virtualization –
Instead of using a hypervisor, it runs a separate version of the Linux kernel and sees the
associated virtual machine as a user-space process on the physical host. This makes it easy
to run multiple virtual machines on a single host. A device driver is used for
communication between the main Linux kernel and the virtual machine.
Processor support is required for virtualization (Intel VT or AMD-V). A slightly
modified QEMU process is used as the display and execution containers for the virtual
machines. In many ways, kernel-level virtualization is a specialized form of server
virtualization.



Examples: User-Mode Linux (UML) and Kernel Virtual Machine (KVM)

Advantages:
● No special administrative software is required.
● Very little overhead
Limitations:
● Hardware support required
6. System Level or OS Virtualization –
Runs multiple, logically distinct environments on a single instance of the operating system kernel. It is also called the shared-kernel approach because all virtual machines share a common kernel of the host operating system. It is based on the change-root concept, “chroot”.
chroot starts during boot-up. The kernel uses an initial root filesystem to load drivers and perform other early-stage system initialization tasks. It then switches to another root filesystem using the chroot command, mounts an on-disk file system as its final root filesystem, and continues system initialization and configuration within that file system.
The chroot mechanism of system-level virtualization is an extension of this concept. It enables the system to start virtual servers with their own sets of processes that execute relative to their own filesystem root directories.
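The following Python sketch (illustrative only; it assumes a Unix-like OS, root privileges, and a hypothetical /srv/vserver1 directory populated with its own /bin/sh and libraries) shows the change-root idea this mechanism builds on: a child process is confined to its own filesystem root.

# Minimal change-root sketch (requires root; paths are hypothetical).
import os

def run_in_chroot(new_root="/srv/vserver1"):
    pid = os.fork()
    if pid == 0:                      # child: becomes the confined "virtual server"
        os.chroot(new_root)           # from now on, "/" means new_root for this process
        os.chdir("/")
        os.execv("/bin/sh", ["sh"])   # every path the shell sees is relative to new_root
    else:                             # parent: wait for the isolated process to finish
        os.waitpid(pid, 0)

# run_in_chroot()  # uncomment on a suitably prepared host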
The main difference between system-level and server virtualization is whether different
operating systems can be run on different virtual systems. If all virtual servers must share
the same copy of the operating system it is system-level virtualization and if different
servers can have different operating systems (including different versions of a single operating system), it is server virtualization.
Examples: FreeVPS, Linux-VServer, and OpenVZ.

Advantages:
● Significantly more lightweight than complete machines (including a kernel)
● Can host many more virtual servers
● Enhanced security and isolation
● Virtualizing an operating system usually has little to no overhead.
● Live migration is possible with OS virtualization.
● It can also leverage dynamic container load balancing between nodes and clusters.
● With OS virtualization, the file-level copy-on-write (CoW) method is possible, making it easier to back up data, more space-efficient, and easier to cache than block-level copy-on-write schemes.
Limitations:
● Kernel or driver problems can take down all virtual servers.

HYPERVISOR MANAGEMENT SOFTWARE:


A hypervisor, also known as a virtual machine manager/monitor (VMM), is computer hardware platform virtualization software that allows several operating systems to share a single hardware host.
Each operating system appears to have the host’s processor, memory, and other resources all to itself. In reality, the hypervisor controls the host processor and resources, allocating what is needed to each operating system in turn and ensuring that the guest operating systems/virtual machines cannot disrupt each other.
The term ‘hypervisor’ originated in IBM’s CP-370 reimplementation of CP-67 for the System/370, released in 1972 as VM/370.
The term ‘hypervisor call’ refers to the paravirtualization interface, by which a guest operating system accesses services directly from the higher-level control program.

This is the same concept as making a supervisor call to the same level
operating system.
Types of Hypervisors

Two types of hypervisors are used to create virtual environments:

● Type 1 hypervisors (native/bare-metal hypervisors)
● Type 2 hypervisors (hosted hypervisors)

Type 1 Hypervisor

Type 1 or bare-metal hypervisors are installed directly on the physical hardware of the host
machine, providing a layer between the hardware and an OS. On top of this layer, you can
install many virtual machines. The machines are not connected in any way and can have
different instances of operating systems and act as different application servers.

Management Console

System administrators and advanced users control the hypervisor remotely through an
interface called a management console.

With it, you can connect to and manage instances of operating systems. You can also turn
servers on and off, transfer operating systems from one server to another (in case of
downtime or malfunction) and perform many other operations.

A type 1 hypervisor is highly secure since it doesn’t have an attack surface of an underlying operating system (host). Also, it controls and assigns the resources allocated to each virtual machine based on its usage to avoid wasting resources.

Examples of type 1 hypervisors include VMware ESXi, KVM, Oracle VM, Citrix
XenServer, Microsoft Hyper-V, and others.

Type 2 Hypervisor

Unlike type 1, a type 2 hypervisor is installed on top of an existing operating system. This
allows users to utilize their personal computer or server as a host for virtual machines.
Therefore, you have the underlying hardware, an operating system serving as a host, a
hypervisor and a guest operating system.

Note: The guest machine is not aware that it is part of a larger system, and all actions you run on it are isolated from the host.

Although a VM is isolated, the primary OS is still directly connected to the hardware. This
makes it less secure than type 1 hypervisors.

In environments where security is paramount, this type of hypervisor may not suit your
needs. However, end-users and clients with small businesses may find this type of
environment more fitting.

Having a hosted hypervisor allows more than one instance of an operating system to be
installed. However, you should be careful with resource allocation. In the case of type 2
hypervisors, over-allocation may result in your host machine crashing.

Examples of type 2 hypervisors include VMware Workstation, KVM, Oracle VM VirtualBox, Microsoft Virtual PC, Red Hat Enterprise Virtualization and others.

Virtual Load Balancer

A Virtual Load Balancer provides more flexibility to balance the workload of a server by
distributing traffic across multiple network servers. Virtual load balancing aims to mimic
software-driven infrastructure through virtualization. It runs the software of a physical load
balancing appliance on a virtual machine.
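Regardless of whether the balancer runs as a hardware box, a virtual appliance or pure software, its core job is spreading requests over a pool of servers. The Python sketch below shows the simplest such policy, round robin; the class name and server addresses are made up for illustration.

# Minimal round-robin distribution sketch (illustrative only).
from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, servers):
        self._servers = cycle(servers)   # endless iterator over the back-end pool

    def route(self, request):
        server = next(self._servers)     # pick the next server in turn
        return server, request

balancer = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
for req in ["GET /a", "GET /b", "GET /c", "GET /d"]:
    print(balancer.route(req))           # traffic is spread evenly across the pool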

What is a Virtual Load Balancer?

A virtual network load balancer promises to deliver software load balancing by taking the software of a physical appliance and running it on a virtual machine. Virtual load balancers, however, are a short-term solution. The architectural challenges of traditional hardware appliances remain, such as limited scalability and automation, and the lack of central management (including the separation of control plane and data plane) in data centers.

How Does Virtual Load Balancing Work?

Traditional application delivery controller (ADC) companies build virtual load balancers that reuse code from legacy hardware load balancers. The code simply runs on a virtual machine. But these virtual load balancers are still monolithic load balancers with static capacity.

Virtual Load Balancer vs. Hardware Load Balancer?

The complexity and limitations of a virtual load balancer are similar to those of a hardware load balancer.

A hardware load balancer uses rack-mounted, on-premises physical hardware. Hardware load balancers are proven to handle high traffic volumes well, but the hardware can be expensive and limits flexibility.

A virtual load balancer uses the same code from a physical appliance. It also tightly
couples the data and control plane in the same virtual machine. This leads to the same
inflexibility as the hardware load balancer.

For example, while an F5 virtual load balancer lowers the CapEx compared to hardware
load balancers, virtual appliances are in reality hardware-defined software.

Virtual Load Balancer vs. Software Load Balancer?

Virtual load balancers seem similar to software load balancers, but the key difference is that virtual versions are not software-defined. That means virtual load balancers do not solve the issues of inelasticity, cost and manual operations that plague traditional hardware-based load balancers.

Software load balancers, however, are an entirely different architecture designed for high
performance and agility. Software load balancers also offer lower cost without being
locked into any one vendor.

3.7 Virtual infrastructure

Virtual infrastructure is a collection of software-defined components that make up an enterprise IT environment. A virtual infrastructure provides the same IT capabilities as physical resources, but with software, so that IT teams can allocate these virtual resources quickly and across multiple systems, based on the varying needs of the enterprise.

By decoupling physical hardware from an operating system, a virtual infrastructure can help organizations achieve greater IT resource utilization, flexibility, scalability and cost savings. These benefits are especially helpful to small businesses that require reliable infrastructure but can’t afford to invest in costly physical hardware.

Benefits of virtual infrastructure

The benefits of virtualization touch every aspect of an IT infrastructure, from storage and server systems to networking tools. Here are some key benefits of a virtual infrastructure:

● Cost savings: By consolidating servers, virtualization reduces capital and operating costs associated with variables such as electrical power, physical security, hosting and server development.
● Scalability: A virtual infrastructure allows organizations to react quickly to changing customer demands and market trends by ramping up CPU utilization or scaling back accordingly.
● Increased productivity: Faster provisioning of applications and resources allows IT teams to respond more quickly to employee demands for new tools and technologies. The result: increased productivity, efficiency and agility for IT teams, and an enhanced employee experience and increased talent retention rates without hardware procurement delays.
● Simplified server management: From seasonal spikes in consumer demand to unexpected economic downturns, organizations need to respond quickly. Simplified server management makes sure IT teams can spin up, or down, virtual machines when required and re-provision resources based on real-time needs. Furthermore, many management consoles offer dashboards, automated alerts and reports so that IT teams can respond immediately to server performance issues.
Virtual infrastructure components

By separating physical hardware from operating systems, virtualization can provision compute, memory, storage and networking resources across multiple virtual machines (VMs) for greater application performance, increased cost savings and easier management. Despite variances in design and functionality, a virtual infrastructure typically consists of these key components:

● Virtualized compute: This component offers the same capabilities as physical servers, but with the ability to be more efficient. Through virtualization, many operating systems and applications can run on a single physical server, whereas in traditional infrastructure servers were often underutilized. Virtual compute also makes newer technologies like cloud computing and containers possible.
● Virtualized storage: This component frees organizations from the constraints and limitations of hardware by combining pools of physical storage capacity into a single, more manageable repository. By connecting storage arrays to multiple servers using storage area networks, organizations can bolster their storage resources and gain more flexibility in provisioning them to virtual machines. Widely used storage solutions include Fibre Channel SAN arrays, iSCSI SAN arrays, and NAS arrays.
● Virtualized networking and security: This component decouples networking services from the underlying hardware and allows users to access network resources from a centralized management system. Key security features ensure a protected environment for virtual machines, including restricted access, virtual machine isolation and user provisioning measures.
● Management solution: This component provides a user-friendly console for configuring, managing and provisioning virtualized IT infrastructure, as well as automating processes. A management solution allows IT teams to migrate virtual machines from one physical server to another without delays or downtime, while enabling high availability for applications running in virtual machines, disaster recovery and back-up administration.
Virtual infrastructure requirements
Virtualization products have strict requirements on backend infrastructure components
including storage, backup, system management, security and time sync.
Ensuring that these components have the required configuration is critical for successful implementation.

From design to disaster recovery, there are certain virtual infrastructure requirements organizations must meet to reap long-term value from their investment.

● Plan ahead: When designing a virtual infrastructure, IT teams should consider how business growth, market fluctuations and advancements in technology might impact their hardware requirements and reliance on compute, networking and storage resources.
● Look for ways to cut costs: IT infrastructure costs can become unwieldy if IT teams don’t take the time to continuously examine a virtual infrastructure and its deliverables. Cost-cutting initiatives may range from replacing old servers and renegotiating vendor agreements to automating time-consuming server management tasks.
● Prepare for failure: Despite its failover hardware and high availability, even the most resilient virtual infrastructure can experience downtime. IT teams should prepare for worst-case scenarios by taking advantage of monitoring tools, purchasing extra hardware and relying on clusters to better manage host resources.
Virtual infrastructure architecture

A virtual infrastructure architecture can help organizations transform and manage their IT
system infrastructure through virtualization. But it requires the right building blocks to
deliver results. These include:

● Host: A virtualization layer that manages resources and other services for virtual machines. Virtual machines run on these individual hosts, which continuously perform monitoring and management activities in the background. Multiple hosts can be grouped together to work on the same network and storage subsystems, culminating in combined computing and memory resources to form a cluster. Machines can be dynamically added to or removed from a cluster.
● Hypervisor: A software layer that enables one host computer to simultaneously support multiple virtual operating systems, also known as virtual machines. By sharing the same physical computing resources, such as memory, processing and storage, the hypervisor stretches available resources and improves IT flexibility.
● Virtual machine: These software-defined computers encompass operating systems, software programs and documents. Managed by a virtual infrastructure, each virtual machine has its own operating system, called a guest operating system. The key advantage of virtual machines is that IT teams can provision them faster and more easily than physical machines, without the need for hardware procurement. Better yet, IT teams can easily deploy and suspend a virtual machine, and control access privileges, for greater security. These privileges are based on policies set by a system administrator.
● User interface: This front-end element means administrators can view and manage virtual infrastructure components by connecting directly to the server host or through a browser-based interface.

Unit 4: Cloud Programming Models (12 Hrs.)


4.1 Thread programming
Traditional thread programming has always been carried out within the realm of a single process. That is, a process is composed of multiple threads of execution that share memory and other resources but execute within the confines of the process’s memory space. The operating system allocates a fraction of the timeslice assigned to the entire process to each of the threads within. This
process, known as context switching, creates the illusion of running multiple threads concurrently.
In modern machines however, where multi-core processors are not uncommon, true parallelism is
achieved by assigning each thread to a single core. A multi-threaded program brings numerous
advantages such as improved responsiveness in interactive applications, increased throughput in
I/O intensive applications, increased server responsiveness when handling multiple clients, and a
simplified program structure. Threads within a single process however cannot communicate with
threads in other processes by sharing memory, and must resort to using other forms of inter-
process communication, such as named pipes and sockets. As applications become increasingly
complex there is greater demand for computational power than can be delivered by a single
multi-core machine. Often this requires utilizing an entire cluster of, possibly multi-core,
machines. Such problems typically require a large number of repetitive calculations on different
data sets. As a result, the problem can be broken down into smaller manageable units of work and
then distributed across each of the nodes in the cluster. As with traditional threads, concurrent execution is thus achieved by executing each of these units of work simultaneously, but on
different machines. Once the partial results have been computed by the different nodes, the
results can be gathered at the client machine and combined to produce the final result. A simple
relationship can be established between the total time taken to complete the application, and the
number of nodes available for execution. The larger the number of nodes available, the greater is
the number of work units that can be distributed and executed 2 simultaneously and thus shorter
is the time taken to complete the application. Aneka takes traditional thread programming a step
further. It lets you write multi-threaded applications in the traditional way, with the added twist
that each of these threads can now be executed outside the parent process and on a separate
machine. In the strict sense of the word, these “threads” are independent processes executing on
different nodes, and do not share memory or other resources. But AnekaThreads, as they are
called, lets you write applications using the same thread constructs for concurrency and
synchronization as with traditional threads. This lets you easily port existing multi-threaded
compute intensive applications to parallel versions that can run faster by utilizing multiple
machines simultaneously.
The Aneka Environment
Aneka is both a development and a runtime environment. As a development environment, Aneka
provides a set of libraries that allow you to write parallel applications using one of the three
programming models supported. These are the Task, Thread and MapReduce models. As a
runtime environment, Aneka executes the units of work that constitute your application, in
parallel.
Note that running applications on Aneka requires access to a runtime environment pre-installed
on a cluster. For development purposes however, you may run Aneka standalone on your
personal computer.
Defining AnekaThreads
A running program consists of one or more threads executing within a process. A process is an
instance of a program in execution. Each process has its own memory address space, the
executable program and data. A program with a single thread is called a single-threaded program,
while a program with multiple threads is called a multithreaded program. Each thread within a
program has its own stack for maintaining its state. A process is therefore a grouping of resources,
while a thread is an entity that can be scheduled for execution on the CPU. Threads are also
known as light-weight processes. In .Net a thread is represented by the Thread class.
An AnekaThread is a unit of work that can be executed on a remote computer. Unlike standard
threads, each AnekaThread is executed within a process of its own on the remote machine. An
AnekaThread has a similar interface to the standard Thread class, and can be started and stopped
(aborted) in much the same manner. AnekaThreads are, however, simpler and do not
support all behaviors exposed by the .Net Thread class, such as managing thread priorities and
operations such as Suspend and Resume. Figures 1 and 2 below show the essential differences
between traditional threads and AnekaThreads.

Figure 1: A process containing multiple threads

Figure 2: A process executing AnekaThreads on remote machines


Comparing Distributed Threads with Local Threads
An important difference between local and distributed threads lies in the sharing of
resources. Local threads execute within the domain of a single process and
communicate by sharing memory and other resources. For instance two local
threads can read and write to the same data structure, and may coordinate their
work using synchronization primitives such as locking and signaling. On the
contrary, each distributed thread executes in isolation within a process of its own.
AnekaThreads therefore cannot communicate with each other through shared data
structures, even if they were all created by the same process. This restriction limits
the use of distributed threads to applications where such communication and
coordination is not required.
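As a plain illustration of that local-thread behaviour (ordinary Python threading, nothing Aneka-specific), the sketch below has several threads update one shared counter under a lock; this is exactly the kind of shared-memory coordination that distributed AnekaThreads cannot rely on.

# Local threads sharing one data structure, coordinated with a lock.
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:          # only one thread updates the shared value at a time
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 -- possible only because the threads share the process's memory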
Thread Synchronization
Learning to appreciate the differences between local and distributed threads will
help you write more complex applications that utilize threads of both types. The
.Net framework provides a number of synchronization primitives for controlling the
interactions between local threads and avoiding race conditions, such as locking and
signaling. The Aneka threading library on the other hand only supports a single
synchronization mechanism using the AnekaThread.Join method as illustrated in
figure 3 below.
Invoking AnekaThread’s join method will cause the main application thread to block
until the AnekaThread terminates by either completing successfully or failing. This
basic level of synchronization can be useful in applications where the partial results
of computations are required in order to proceed further. Since each of these threads executes in isolation, completely independent of the others and using its own private data structures, no other forms of synchronization such as locking and signaling are necessary. As the Aneka runtime environment is shared amongst a number of users, where multiple applications utilize the execution nodes, it is not possible to perform operations such as Suspend, Resume, Interrupt and Sleep that may result in holding a resource indefinitely, preventing it from being used.
Figure 3: Basic synchronization provided by Aneka’s threading library


Thread Priorities
.Net’s Thread class supports thread priorities, where the scheduling priority can be one of Highest,
AboveNormal, Normal, BelowNormal or Lowest. Operating systems are however not required to honor the
priority of a thread. The current version of Aneka does not support thread priorities for AnekaThreads.

Thread Life-Cycle
The following diagram depicts the possible execution states for local threads supported by the .Net
framework. A thread may, at any given time, be in one or more of these states. When a new thread is
created its state is Unstarted, and transitions to Running when the Start method is invoked. There is also an
additional state called Background which indicates whether a thread is running in the background or the
foreground.
Figure 5 illustrates the life-cycle of an AnekaThread. As both thread types are fundamentally different, one
being local and the other distributed, the possible states they take differ from instantiation to termination.
An instance of AnekaThread transitions from Unstarted to Started when its Start method is invoked. It then
transitions to Queued when it is scheduled for execution at a remote computing node. When execution begins, its state is Running, and it finally transitions to Completed when all work is done. During any one of these stages, an AnekaThread may fail, resulting in the Failed state. Other states such as StagingIn and StagingOut are used when a thread requires files for execution, and produces files as output. Programming threads that require or produce files is beyond the scope of this tutorial and you are encouraged to refer to the online documentation for more details. Lastly, from your perspective as a programmer you only get to initiate the first state change from Unstarted to Started. Thereafter, all state changes are carried out by the Aneka runtime environment.

Developing Parallel Applications with AnekaThreads


Developing parallel applications requires an understanding of the problem and its logical structure. Once you have figured this out, it is fairly easy to approach the solution. The following sections provide simple guidelines to follow when developing parallel applications with Aneka.

Problem Decomposition
One of the key challenges in developing parallel applications lies in breaking down a large
problem into smaller units of work, such that they can be executed concurrently on
different machines. Decomposing a problem might not seem very evident at first, but it is
often a good idea to start with a piece of paper. Two common approaches used for
problem decomposition are:

• Identifying patterns of repetitive, but independent computations
• Identifying distinct, but independent computations

The first approach is the most common and involves identifying repetitive calculations in
the problem. Often these take the form of for or while loops in a sequential program.
Every iteration of the loop is thus potentially a unit of work that can be computed
independently from other iterations. The two examples, calculating Pi and matrix multiplication, shown later in this tutorial use this approach. If an iteration is dependent on the values produced in the previous iteration, then the units of work can no longer be computed independently and some form of communication is required.

The second approach involves identifying sufficiently large but isolated computations in
the problem. Each of these distinct computations would then form a unit of work for
concurrent execution. Unlike the first approach where each unit of work does the same
amount of computation and would thus take more or less the same time to complete, the
second approach involves distinct units of work, each of which may take significantly different amounts of time to complete. The first example program shown later in this tutorial, where the results of three trigonometric functions are computed, uses this approach.

A Parallel Math Program using AnekaThreads
The following program demonstrates the use of AnekaThreads to perform a simple
mathematical computation. While the result might not be of any practical use, this
program serves as a useful example to introduce programming with AnekaThreads.
Consider the following mathematical equation:

p = sin(x) + cos(y) + tan(z)

As these trigonometric functions are independent operations, they can be executed in isolation. The only requirement is that after computation, the results of these operations must be combined to produce the final result, as illustrated in the figure below.

The class MathExample is a single threaded client application that runs on the local
machine. It spawns three AnekaThreads, each to compute the sin, cos and tan of a given
angle. Each of these threads is then dispatched to remote compute nodes for concurrent
execution, by invoking the start operation on the threads. Although MathExample can
now continue execution, it needs to wait until all three threads have completed before it
can combine their results. To do this, it invokes the join operation on each of the threads
which causes MathExample to block until the threads have completed their operation on
the remote node.
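Since the actual Aneka (C#/.NET) listing is not reproduced in these notes, the following Python analogue sketches the same flow: three workers are started, the client blocks on each (the role Join plays for AnekaThreads), and the partial results are combined. The angle value is an arbitrary example.

# Local Python analogue of the MathExample flow (not the Aneka API).
import math
from concurrent.futures import ThreadPoolExecutor

def compute(func, angle):
    return func(angle)

x = y = z = math.radians(30)
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(compute, f, a) for f, a in
               [(math.sin, x), (math.cos, y), (math.tan, z)]]
    # result() blocks until each worker finishes, like Join on an AnekaThread
    p = sum(f.result() for f in futures)

print("p =", p)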

4.2 Task programming
In computer programming, a task is a basic unit of programming that an operating system controls. ... All of today's widely-used operating systems support multitasking, which allows multiple tasks to run concurrently, taking turns using the resources of the computer. The Task Programming Model is a high-level multithreaded programming model. It is designed to allow Maple code to be written that takes advantage of multiple processors, while avoiding much of the complexity of traditional multithreaded programming. Task computing is a wide area of distributed system programming encompassing several different models of architecting distributed applications, which, eventually, are based on the same fundamental abstraction: the task. ... Applications are then constituted of a collection of tasks.

Here are a few advantages of using the Task Programming Model:
• No explicit threading, users create Tasks, not Threads.
• Maple schedules the Tasks to the processors so that the code scales to the number of available processors.
• Multiple algorithms written using the Task Programming Model can run at the same time without significant performance impact.
• Complex problems can be solved without requiring traditional synchronization tools such as Mutexes and Condition Variables.
• If such synchronization tools are not used, the function cannot be deadlocked.
• The Task functions are simple and the model mirrors conventional function calling.

Task computing is a wide area of distributed system programming encompassing several different models of architecting distributed applications. A task represents a program, which may require input files and produce output files as a result of its execution. Applications are then constituted of a collection of tasks. These are submitted for execution and their output data are collected at the end of their execution.
Task computing
A task identifies one or more operations that produce a distinct output and that can be isolated as
a single logical unit.
In practice, a task is represented as a distinct unit of code, or a program, that can be separated
and executed in a remote run time environment.

Multithreaded programming is mainly concerned with providing a support for parallelism within
a single machine. Task computing provides distribution by harnessing the compute power of
several computing nodes. Hence, the presence of a distributed infrastructure is explicit in this
model. Now clouds have emerged as an attractive solution for obtaining huge computing power on demand for the execution of distributed applications. To achieve this, suitable middleware is needed. A reference scenario for task computing is depicted in Figure 7.1.

The middleware is a software layer that enables the coordinated use of multiple resources, which are
drawn from a datacentre or geographically distributed networked computers. A user submits the
collection of tasks to the access point(s) of the middleware, which will take care of scheduling and
monitoring the execution of tasks. Each computing resource provides an appropriate runtime
environment. Task submission is done using the APIs provided by the middleware, whether a Web or
programming language interface. Appropriate APIs are also provided to monitor task status and collect
their results upon completion. It is possible to identify a set of common operations that the middleware needs to support the creation and execution of task-based applications. These operations are:
• Coordinating and scheduling tasks for execution on a set of remote nodes
• Moving programs to remote nodes and managing their dependencies
• Creating an environment for execution of tasks on the remote nodes
• Monitoring each task’s execution and informing the user about its status
• Access to the output produced by the task
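As a hedged, single-machine analogue of this submit/monitor/collect cycle, the Python sketch below uses the standard library's process pool in place of real task middleware; a framework such as Aneka or Condor would instead dispatch each task to a remote node.

# Local sketch of task submission and result collection (illustrative only).
from concurrent.futures import ProcessPoolExecutor, as_completed

def task(n):                      # a "task": an isolated unit of code with its own input
    return n, sum(i * i for i in range(n))

if __name__ == "__main__":
    inputs = [10_000, 20_000, 30_000]
    with ProcessPoolExecutor() as middleware:                    # plays the access-point role
        futures = [middleware.submit(task, n) for n in inputs]   # task submission
        for fut in as_completed(futures):                        # monitoring / collecting output
            n, result = fut.result()
            print(f"task({n}) -> {result}")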
Characterizing a task
A task represents a component of an application that can be logically isolated and executed separately. A task can be represented by different elements:
• A shell script composing together the execution of several applications
• A single program
• A unit of code (a Java/C++/.NET class) that executes within the context of a specific runtime environment
A task is characterized by input files, executable code (programs, shell scripts, etc.), and output files. The runtime environment in which tasks execute is the operating system or an equivalent sandboxed environment. A task may also need specific software appliances on the remote execution nodes.

Computing categories
These categories provide an overall view of the characteristics of the problems. They implicitly impose requirements on the infrastructure and the middleware. The categories are:
1. High-performance computing
2. High-throughput computing
3. Many-task computing
1. High-performance computing
High-performance computing (HPC) is the use of distributed computing facilities for solving problems
that need large computing power.
The general profile of HPC applications is constituted by a large collection of compute-intensive tasks that need to be processed in a short period of time. The metric used to evaluate HPC systems is floating-point operations per second (FLOPS), now measured in tera-FLOPS or even peta-FLOPS, which identifies the number of floating-point operations a system can perform per second. Examples: supercomputers and clusters are specifically designed to support HPC applications that are developed to solve “Grand Challenge” problems in science and engineering.
2 High-throughput computing
High-throughput computing (HTC) is the use of distributed computing facilities for applications requiring
large computing power over a long period of time. HTC systems need to be robust and to reliably
operate over a long time scale. The general profile of HTC applications is that they are made up of a
large number of tasks of which the execution can last for a considerable amount of time. Ex: scientific
simulations or statistical analyses. It is quite common to have independent tasks that can be scheduled
in distributed resources because they do not need to communicate. HTC systems measure their
performance in terms of jobs completed per month.
3 Many-task computing
MTC denotes high-performance computations comprising multiple distinct activities coupled via file system operations. The distinguishing feature of MTC is the heterogeneity of tasks, which might be of different natures: tasks may be small or large, single-processor or multiprocessor, compute-intensive or data-intensive, static or dynamic, homogeneous or heterogeneous. MTC applications include loosely coupled applications that are communication-intensive but not naturally expressed using the message-passing interface. MTC aims to bridge the gap between HPC and HTC. MTC is similar to HTC, but it concentrates on the use of many computing resources over a short period of time to accomplish many computational tasks.

Task Computing Framework

Many frameworks are available for the execution of task-based applications on distributed computing resources like clouds. Some of them are:
• Condor
• Oracle Grid Engine
• Berkeley Open Infrastructure for Network Computing (BOINC)
• Aneka
4.3 Map-reduce programming
MapReduce is inspired by the map and reduce operations found in functional languages such as Lisp. This model
abstracts computation problems through two functions: map and reduce. All problems formulated in
this way can be parallelized automatically. All data processed by MapReduce are in the form of
key/value pairs. The execution happens in two phases. In the first phase, a map function is invoked once
for each input key/value pair and it can generate output key/value pairs as intermediate results. In the
second one, all the intermediate results are merged and grouped by keys. The reduce function is called
once for each key with associated values and produces output values as final results.

A map function takes a key/value pair as input and produces a list of key/value pairs as output. The type
of output key and value can be different from input key and value:
map::(key1,value1) => list(key2,value2)
A reduce function takes a key and an associated list of values as input and generates a list of new values as output:
reduce::(key2, list(value2)) => list(value3)
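A word-count sketch in plain Python makes these signatures concrete (illustrative only; a real framework such as Hadoop would run the mappers and reducers on many nodes and perform the grouping itself).

# Word count in the map/reduce style described above.
from collections import defaultdict

def map_fn(doc_id, text):                  # (key1, value1) -> list of (key2, value2)
    return [(word, 1) for word in text.split()]

def reduce_fn(word, counts):               # (key2, list of value2) -> list of value3
    return [sum(counts)]

documents = {"d1": "the cloud stores the data", "d2": "the cloud scales"}

grouped = defaultdict(list)                # shuffle/group: gather values under the same key
for doc_id, text in documents.items():
    for word, count in map_fn(doc_id, text):
        grouped[word].append(count)

result = {word: reduce_fn(word, counts)[0] for word, counts in grouped.items()}
print(result)   # {'the': 3, 'cloud': 2, 'stores': 1, 'data': 1, 'scales': 1}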
MapReduce Execution
A MapReduce application is executed in a parallel manner through two phases. In the first phase, all
map operations can be executed independently with each other. In the second phase, each reduce
operation may depend on the outputs generated by any number of map operations. However, similar to
map operations, all reduce operations can be executed independently. From the perspective of
dataflow, MapReduce execution consists of m independent map tasks and r independent reduce tasks,
each of which may be dependent on m map tasks. Generally the intermediate results are partitioned
into r pieces for r reduce tasks. The MapReduce runtime system schedules map and reduce tasks to
distributed resources. It manages many technical problems: parallelization, concurrency control,
network communication, and fault tolerance. Furthermore, it performs several optimizations to
decrease overhead involved in scheduling, network communication and intermediate grouping of
results.
4.3.1 Parallel efficiency of MapReduce
MapReduce is a distributed/parallel computing algorithm which is divided into two phases:

1. Map Phase
2. Reduce Phase
When it comes to parallel computing, there are three primary parallel computing models:

1. Data Parallel Model


a. In the data-parallel model, a given problem is split into several smaller
instances and distributed among the available machines. Each machine
independently processes the subproblem assigned to it. After all the
machines finish processing their instances, the results are aggregated if
required.

b. The data-parallel model is most suitable for problems that can be broken
down into a large number of independent subproblems, e.g. Keyword
extraction from a large database of documents.
2. Tree Parallel Model
a. The tree-parallel model, as the name indicates, accounts for dependencies
between a problem and its subproblems.
b. In this model, each subproblem is dynamically allocated to an idle machine,
thereby optimizing the processor utilisation, e.g. in parallel quicksort, each
array is partitioned and the resulting subarrays are partitioned in parallel.
c. This process continues resulting in a tree of subproblems.
3. Task Parallel Model
The Map phase implements the data parallel model and the Reduce phase implements the
reverse tree parallel model.

In evaluating the performance of parallel programming models, speedup is an important concept.

For a given computation, speedup is defined as T1/TP, where T1 = running time of the computation on 1 processor and TP = running time of the computation on P processors.

The ideal speedup of P is rarely achieved due to the following:

1. Control dependency: The program or computation may have portions that are
inherently sequential, e.g. a for-loop that updates a variable and uses the updated
value in the subsequent iteration. Amdahl's law illustrates the theoretical speedup
that can be obtained by running a program on a parallel computer with P
processors.
a. S = 1 / (F + (1 − F)/P), where F = sequential portion of the program and hence 1 − F = parallelizable portion of the program (a short numeric sketch is given after this list).
2. Data dependency: Subproblems are dynamically created and allotted as seen in the tree-parallel model. The processor utilisation remains sub-optimal during the initial phases of division. Further, subproblems are allotted to machines by message passing, thereby causing communication latency.
The performance of algorithms using the above two parallel programming models is as follows:

● The data-parallel model achieves close to ideal speedups if the data is uniformly distributed among the processors. In other words, for a parallel computer with P machines, speedup = T1/(T1/P) = P.
● The control dependency in the program limits the speedup achieved in the tree-parallel model.
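To make Amdahl's law concrete, here is a small numeric sketch using the corrected formula; the 10% sequential fraction is an assumed example value.

# Amdahl's law: S = 1 / (F + (1 - F) / P)
def amdahl_speedup(F, P):
    return 1.0 / (F + (1.0 - F) / P)

for P in (2, 4, 8, 16, 1000):
    print(P, round(amdahl_speedup(0.10, P), 2))
# Even with 1000 processors the speedup stays below 1/F = 10 when 10% of the work is sequential.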

4.3.2 Enterprise batch processing using Map-Reduce
Today, the volume of data is often too big for a single server – node – to process. Therefore,
there was a need to develop code that runs on multiple nodes. Writing distributed systems is an
endless array of problems, so people developed multiple frameworks to make our lives easier.
MapReduce is a framework that allows the user to write code that is executed on multiple nodes
without having to worry about fault tolerance, reliability, synchronization or availability. Batch
processing is an automated job that does some computation, usually done as a periodical job. It
runs the processing code on a set of inputs, called a batch. Usually, the job will read the batch
data from a database and store the result in the same or different database.

An example of a batch processing job could be reading all the sale logs from an online shop for a
single day and aggregating it into statistics for that day (number of users per country, the average
spent amount, etc.). Doing this as a daily job could give insights into customer trends.

MapReduce is a programming model that was introduced in a white paper by Google in 2004.
Today, it is implemented in various data processing and storing systems
(Hadoop, Spark, MongoDB, …) and it is a foundational building block of most big data batch
processing systems.

For MapReduce to be able to do computation on large amounts of data, it has to be a distributed model that executes its code on multiple nodes. This allows the computation to handle larger amounts of data by adding more machines – horizontal scaling. This is different from vertical scaling, which implies increasing the performance of a single machine.

In order to decrease the duration of our distributed computation, MapReduce tries to reduce shuffling (moving) the data from one node to another by distributing the computation so that it is done on the same node where the data is stored. This way, the data stays on the same node, but the code is moved via the network. This is ideal because the code is much smaller than the data.

To run a MapReduce job, the user has to implement two functions, map and reduce, and those implemented functions are distributed to the nodes that contain the data by the MapReduce framework. Each node runs (executes) the given functions on the data it has in order to minimize network traffic (shuffling data).

The computation performance of MapReduce comes at the cost of its expressivity. When writing
a MapReduce job we have to follow the strict interface (return and input data structure) of
the map and the reduce functions. The map phase generates key-value data pairs from the input
data (partitions), which are then grouped by key and used in the reduce phase by the reduce task.
Everything except the interface of the functions is programmable by the user.
Hadoop, along with its many other features, had the first open-source implementation of
MapReduce. It also has its own distributed file storage called HDFS. In Hadoop, the typical input
into a MapReduce job is a directory in HDFS. In order to increase parallelization, each directory
is made up of smaller units called partitions and each partition can be processed separately by a
map task (the process that executes the map function). This is hidden from the user, but it is
important to be aware of it because the number of partitions can affect the speed of execution.

The map task (mapper) is called once for every input partition and its job is to extract key-value pairs from the input partition. The mapper can generate any number of key-value pairs from a single input.

The MapReduce framework collects all the key-value pairs produced by the mappers, arranges
them into groups with the same key and applies the reduce function. All the grouped values
entering the reducers are sorted by the framework. The reducer can produce output files which
can serve as input into another MapReduce job, thus enabling multiple MapReduce jobs to
chain into a more complex data processing pipeline.
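Tying this back to the sales-log batch job described earlier, the sketch below expresses "users per country and average amount spent" in the same mapper/reducer style; the record format and field names are made up for illustration, and a real job would run on Hadoop or a similar framework rather than in-process.

# Per-country sales statistics in the mapper/reducer style (illustrative only).
from collections import defaultdict

def mapper(log_line):
    user, country, amount = log_line.split(",")     # one sale record per line
    return [(country, (user, float(amount)))]

def reducer(country, values):
    users = {user for user, _ in values}
    avg_spent = sum(amount for _, amount in values) / len(values)
    return (country, {"users": len(users), "avg_spent": round(avg_spent, 2)})

logs = ["u1,NP,12.50", "u2,NP,7.00", "u1,NP,3.25", "u3,IN,20.00"]

grouped = defaultdict(list)                          # the framework's grouping-by-key step
for line in logs:
    for key, value in mapper(line):
        grouped[key].append(value)

print([reducer(country, values) for country, values in grouped.items()])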

4.4 Comparisons between Thread, Task and Map reduce


High-throughput computing is concerned with delivering large amounts of compute in the form of transactions. This is accomplished by multithreading, and multithreaded programming may now be extended to a distributed environment as well. High-throughput computing (HTC) refers to the usage of a large number of resources over a lengthy period to complete a computational job. The most obvious and frequent method for designing parallel and distributed computing applications, and for gaining the advantages of high-throughput computing, is task programming.

A task describes a program that may need input files and generate output files as a result of its execution, and applications are collections of tasks. Tasks are submitted for execution and their output data is gathered; whether they need data interchange distinguishes the application models that come under the task programming umbrella. Similarly, another important computing category is data-intensive computing, which deals with enormous amounts of data. Data-intensive computing uses MapReduce as a programming model for creating data-intensive applications and deploying them on clouds.

A thread is a fundamental unit of CPU utilization that consists of a program counter, a stack, and a collection of registers. Threads have their own program and memory access. A thread of execution is the smallest sequence of programmed instructions that a scheduler can manage independently. Threads are a built-in feature of the OS.
A task is anything that we want to be completed; it is a higher-level abstraction on top of threads. It is a collection of software instructions stored in memory. When the software instructions are placed into memory, they are referred to as a task or process. A task can inform us whether it has been completed and whether the procedure has produced a result. A task will use the thread pool by default, which saves resources, because creating threads is costly: a large block of memory has to be allocated and initialized for the thread stack, and system calls need to be made to create and register the native thread with the host OS. When requests are frequent and lightweight, as they are in most server applications, establishing a new thread for each request can consume substantial computer resources.
MapReduce is a framework that allows us to design programs that can process massive volumes of data in parallel on vast clusters of commodity hardware in a dependable manner. MapReduce is a programming architecture for distributed computing. The MapReduce method consists of two key tasks: map and reduce. Map translates one collection of data into another, where individual pieces are broken down into tuples (key/value pairs). Reduce takes the result of map as input and merges those data tuples into a smaller collection of tuples. The reduce work is always executed after the map job, as the name MapReduce indicates.

Multithreaded programming enables parallelism within the confines of a single processor. Applications that require a high level of parallelism cannot be handled by traditional multithreaded programming and must rely on distributed infrastructures such as clusters, grids, or clouds. The usage of these facilities necessitates the development of programs against certain APIs, which may require considerable changes to existing programs. To overcome this issue, Aneka provides the thread programming model, which extends the multithreaded programming philosophy beyond the bounds of a single node and enables the use of heterogeneous distributed infrastructure. (Traditional multithreaded programming here means APIs such as POSIX threads.)
The most natural technique for dividing an application's computation among a group of nodes is task programming. The idea of a task, which represents a series of actions that may be isolated and executed as a single unit, is the core abstraction of task-based programming. A task might be as basic as a shell program or as sophisticated as a piece of code that requires a certain runtime environment to execute and generates output files as a result. A task can return a result; there is no direct mechanism to return a result from a thread. It is usually recommended to utilize tasks rather than threads, since tasks draw on a thread pool of system-managed threads to boost performance. A task by default runs in the background and cannot run in the foreground, whereas a thread can run in the background as well as the foreground. Aneka supports task-based programs and services, and is a real example of a framework that facilitates the development and execution of task-based distributed applications.
Data-intensive computing is a discipline that began with high-speed WAN applications, but nowadays it is the province of storage clouds with data dimensions reaching terabytes if not petabytes. Such data is referred to as big data and is typically in semi-structured or unstructured form. Traditional techniques based on relational databases are incapable of serving data-intensive applications efficiently.
To address this issue, new techniques and storage models have been developed. The major efforts in the area of storage systems have been devoted to the creation of high-performance distributed file systems, storage clouds, and NoSQL systems.
The most significant improvement in support of programming data-intensive applications has been the advent of MapReduce. The first step of MapReduce, “map”, retrieves useful information from the data and saves it in key-value pairs, which are then aggregated together in the reduce step.

Threading offers facilities to partition a task into several subtasks, in a recursive-looking
fashion; more tiers, possibility of 'inter-fork' communication at this stage, much more
traditional programming. Does not extend (at least in the paper) beyond a single machine.
Great for taking advantage of your eight-core.

M-R only does one big split, with the mapped splits not talking to each other at all, and then reduces everything together. A single tier, no inter-split communication until reduce, and massively scalable. Great for taking advantage of your share of the cloud.

Unit 5: Cloud security

Security Issues:
Data breaches:
Data breaches might be the primary goal of a targeted attack, or might be the consequence of a human mistake, application flaws or inadequate security policies. A breach might include any material that is not meant for public distribution, like personal health information, financial information, trade secrets, etc.
Insufficient identity, credential, and access management:
Inadequate identity, credential or key management can allow illegal data access and possibly catastrophic (huge or extreme) damage to companies or end users.

Data Loss:
Data loss is one of the issues faced in cloud computing. This is also known as data leakage. Our sensitive data is in the hands of somebody else, and we don’t have full control over our database. So if the security of the cloud service is broken by hackers, it may be possible that hackers will get access to our sensitive data or personal files.
Interference of Hackers and Insecure API’s:
If we are talking about the cloud and its services, we are talking about the Internet. We also know that the easiest way to communicate with the cloud is using APIs. So it is important to protect the interfaces and APIs which are used by external users. In cloud computing, a few services are available in the public domain; these are the vulnerable parts of cloud computing, because it may be possible that these services are accessed by third parties. So it may be possible that with the help of these services hackers can easily hack or harm our data.
User Account Hijacking:
Account hijacking is one of the most serious security issues in cloud computing. If the account of a user or an organization is hijacked by a hacker, the hacker has full authority to perform unauthorized activities.

Changing Service Provider
Vendor lock-in is also an important security issue in cloud computing. Many organizations face different problems while shifting from one vendor to another. For example, if an organization wants to shift from AWS to Google Cloud, it faces various problems like migrating all of its data; also, both cloud services have different techniques and functions, so it faces problems regarding that as well. It may also be the case that the charges of AWS are different from those of Google Cloud, etc.
Lack of Skill
Shifting to another service provider, needing an extra feature, not knowing how to use a feature, etc. are the main problems faced by an IT company that doesn’t have skilled employees. Working with cloud computing therefore requires skilled people.
Denial of service (Dos) attack:
This type of attack occurs when the system receives too much traffic. DoS attacks mostly occur in large organizations such as the banking sector, government sector, etc. When a DoS attack occurs, data may be lost, and recovering it requires a great amount of money as well as time.

Cloud Security Risks


Cloud environments come with different security risks than traditional on-premises
environments. Some common cloud security threats include:
Misconfigurations.
Human error — or failing to set the right security controls in a cloud platform — is one of the
biggest cloud security threats. Examples of misconfigurations include accidentally allowing
unrestricted outbound access or opening up access to an S3 bucket. Cloud misconfiguration can
be extremely damaging; one real-life example of this was the Capital One breach in 2019, in
which a former Amazon employee was able to expose personal records of Capital One customers
due to a misconfigured web application firewall (WAF).
Data loss. The collaboration and shareability of cloud services are double-edged swords; these benefits often make it too easy for users to share data with the wrong internal parties or external third parties. 64% of cybersecurity professionals cited data loss and leakage as a top cloud security concern, according to Synopsys’ Cloud Security Report.
API vulnerabilities.
Cloud applications use APIs to interact with each other, but those APIs aren’t always secure.
Malicious actors can launch denial-of-service (DoS) attacks to exploit APIs, allowing them to
access company data.
Malware
Malware is a real threat in the cloud. Data and documents constantly travel to and from the
cloud, which means that there are more opportunities for threat actors to launch malware attacks
such as hyperjacking and hypervisor infections.
IAM complexity
Identity and access management (IAM) in a cloud or hybrid cloud environment can be extremely
complex. For larger organizations, the process of simply understanding who has access to which
resources can be time-consuming and difficult. Other IAM challenges in the cloud include
‘zombie’ SaaS accounts (inactive users), and improper user provisioning and deprovisioning.
Hybrid environments where users must access a mix of SaaS apps and on-premises applications
can introduce siloes and further complicate IAM, leading to misconfigurations and security gaps.

Challenges of Cloud Computing
Security
In reality, the security challenge has been a major obstruction influencing cloud computing acceptance [16]. Without hesitation, moving your records and running your software on someone else's hard disk using someone else's CPU appears daunting to many [17]. Well-known security challenges in cloud computing are confidentiality, integrity, privacy, accountability, phishing and so on.
Services Delivery and Billing
Cloud computing has an on-demand nature of service, so it is a challenging task to figure out the costs of delivery. Planning and study relating to cost would be very challenging until the provider has a few good and comparable benchmarks to present.
Load balancing
Load balancing is a challenging task when some components fail during service delivery. Load
balancing must stay in action for the duration of the service: components are watched regularly,
and when one becomes nonresponsive the load balancer steps in, redistributes the load of that
nonresponsive part, and stops sending traffic to it.
Transferability
Cloud customers who want to migrate from one cloud to another, that is, from one hosting
provider to another, face many problems. Migration is not easy because the process takes time
to transfer files, which indirectly leaves your business offline for some hours or days.
Ownership
Once data has been moved to the cloud, some people worry that they might lose some of their
rights or be unable to protect the rights of their users. A large number of cloud providers address
this challenge with well-crafted, user-sided agreements. That said, users would be wise to seek
advice from their preferred legal representative and to confirm that the provider does not claim
any kind of ownership over the data it uses in its provision of the service [19].

Software as a Service Security:


The seven security issues which one should discuss with a cloud-computing vendor:
1. Privileged user access —inquire about who has specialized access to data, and about the
hiring and management of such administrators.
2. Regulatory compliance—make sure that the vendor is willing to undergo external audits
and/or security certifications.
3. Data location—does the provider allow for any control over the location of data?
4. Data segregation —make sure that encryption is available at all stages, and that these
encryption schemes were designed and tested by experienced professionals.
5. Recovery —Find out what will happen to data in the case of a disaster. Do they offer complete
restoration? If so, how long would that take?

6. Investigative support —Does the vendor have the ability to investigate any inappropriate or
illegal activity?
7. Long-term viability —What will happen to data if the company goes out of business? How
will data be returned, and in what format?

To address the security issues listed above, SaaS providers will need to incorporate and enhance
security practices used by the managed service providers and develop new ones as the cloud
computing environment evolves. The baseline security practices for the SaaS environment as
currently formulated are discussed in the following sections.

Security Management (People):


One of the most important actions for a security team is to develop a formal charter for the
security organization and program. This will foster a shared vision among the team of what
security leadership is driving toward and expects, and will also foster “ownership” in the success
of the collective team. The charter should be aligned with the strategic plan of the organization
or company the security team works for. Lack of clearly defined roles and responsibilities, and
agreement on expectations, can result in a general feeling of loss and confusion among the
security team about what is expected of them, how their skills and experience can be leveraged,
and how to meet their performance goals. Morale and pride within the team are lowered,
and security suffers as a result.
Security Governance:
A security steering committee should be developed whose objective is to focus on providing
guidance about security initiatives and alignment with business and IT strategies. A charter for
the security team is typically one of the first deliverables from the steering committee. This
charter must clearly define the roles and responsibilities of the security team and other groups
involved in performing information security functions. Lack of a formalized strategy can lead to
an unsustainable operating model and security level as it evolves. In addition, lack of attention to
security governance can result in key needs of the business not being met, including but not
limited to, risk management, security monitoring, application security, and sales support. Lack of
proper governance and management of duties can also result in potential security risks being left
unaddressed and opportunities to improve the business being missed because the security team is
not focused on the key security functions and activities that are critical to the business.
Risk Management:
Effective risk management entails identification of technology assets; identification of data and
its links to business processes, applications, and data stores; and assignment of ownership and
custodial responsibilities. Actions should also include maintaining a repository of information
assets. Owners have authority and accountability for information assets including protection
requirements, and custodians implement confidentiality, integrity, availability, and privacy
controls. A formal risk assessment process should be created that allocates security resources
linked to business continuity.
Risk Assessment:
Security risk assessment is critical to helping the information security organization make
informed decisions when balancing the dueling priorities of business utility and protection of
assets. Lack of attention to completing formalized risk assessments can contribute to an increase

in information security audit findings, can jeopardize certification goals, and can lead to
inefficient and ineffective selection of security controls that may not adequately mitigate
information security risks to an acceptable level. A formal information security risk management
process should proactively assess information security risks as well as plan and manage them on
a periodic or as-needed basis. More detailed and technical security risk assessments in the form
of threat modeling should also be applied to applications and infrastructure. Doing so can help
the product management and engineering groups to be more proactive in designing and testing
the security of applications and systems and to collaborate more closely with the internal security
team. Threat modeling requires both IT and business process knowledge, as well as technical
knowledge of how the applications or systems under review work.
Security Monitoring and Incident Response:
Centralized security information management systems should be used to provide notification of
security vulnerabilities and to monitor systems continuously through automated technologies to
identify potential issues. They should be integrated with network and other systems monitoring
processes (e.g., security information management, security event management, security
information and event management, and security operations centers that use these systems for
dedicated 24/7/365 monitoring). Management of periodic, independent third-party security
testing should also be included. Many of the security threats and issues in SaaS center around
application and data layers, so the types and sophistication of threats and attacks for a SaaS
organization require a different approach to security monitoring than traditional infrastructure
and perimeter monitoring. The organization may thus need to expand its security monitoring
capabilities to include application- and data-level activities. This may also require subject-matter
experts in applications security and the unique aspects of maintaining privacy in the cloud.
Without this capability and expertise, a company may be unable to detect and prevent security
threats and attacks to its customer data and service stability.
Third-Party Risk Management: As SaaS moves into cloud computing for the storage and
processing of customer data, there is a higher expectation that the SaaS will effectively manage
the security risks with third parties. Lack of a third-party risk management program may result in
damage to the provider’s reputation, revenue losses, and legal actions should the provider be
found not to have performed due diligence on its third-party vendors.

Security Architecture Design:


A security architecture framework should be established with consideration of processes
(enterprise authentication and authorization, access control, confidentiality, integrity,
nonrepudiation, security management, etc.), operational procedures, technology specifications,
people and organizational management, and security program compliance and reporting. A
security architecture document should be developed that defines security and privacy principles
to meet business objectives. Documentation is required for management controls and metrics
specific to asset classification and control, physical security, system access controls, network and
computer management, application development and maintenance, business continuity, and
compliance. A design and implementation program should also be integrated with the formal
system development life cycle to include a business case, requirements definition, design, and
implementation plans. Technology and design methods should be included, as well as the
security processes necessary to provide the following services across all technology layers:

1. Authentication 2. Authorization 3. Availability 4. Confidentiality 5. Integrity 6.
Accountability 7. Privacy
The creation of a secure architecture provides the engineers, data center operations personnel,
and network operations personnel a common blueprint to design, build, and test the security of
the applications and systems. Design reviews of new changes can be better assessed against this
architecture to assure that they conform to the principles described in the architecture, allowing
for more consistent and effective design reviews.
Vulnerability Assessment:
Vulnerability assessment classifies network assets to more efficiently prioritize vulnerability-
mitigation programs, such as patching and system upgrading. It measures the effectiveness of
risk mitigation by setting goals of reduced vulnerability exposure and faster mitigation.
Vulnerability management should be integrated with discovery, patch management, and upgrade
management processes to close vulnerabilities before they can be exploited.
Data Privacy:
A risk assessment and gap analysis of controls and procedures must be conducted. Based on this
data, formal privacy processes and initiatives must be defined, managed, and sustained. As with
security, privacy controls and protection must be an element of the secure architecture design.
Depending on the size of the organization and the scale of operations, either an individual or a
team should be assigned and given responsibility for maintaining privacy. A member of the
security team who is responsible for privacy or a corporate security compliance team should
collaborate with the company legal team to address data privacy issues and concerns. As with
security, a privacy steering committee should also be created to help make decisions related to
data privacy. Typically, the security compliance team, if one even exists, will not have
formalized training on data privacy, which will limit the ability of the organization to address
adequately the data privacy issues they currently face and will be continually challenged on in
the future. The answer is to hire a consultant in this area, hire a privacy expert, or have one of
your existing team members trained properly. This will ensure that your organization is prepared
to meet the data privacy demands of its customers and regulators. For example, customer
contractual requirements/agreements for data privacy must be adhered to, accurate inventories of
customer data, where it is stored, who can access it, and how it is used must be known, and,
though often overlooked, Request for Interest/Request for Proposal questions regarding privacy
must be answered accurately. This requires special skills, training, and experience that do not
typically exist within a security team. As companies move away from a service model under
which they do not store customer data to one under which they do store customer data, the data
privacy concerns of customers increase exponentially. This new service model pushes companies
into the cloud computing space, where many companies do not have sufficient experience in
dealing with customer privacy concerns, permanence of customer data throughout its globally
distributed systems, cross-border data sharing, and compliance with regulatory or lawful
intercept requirements.
Data Security:
The ultimate challenge in cloud computing is data-level security, and sensitive data is the domain
of the enterprise, not the cloud computing provider. Security will need to move to the data level
so that enterprises can be sure their data is protected wherever it goes. For example, with data-
level security, the enterprise can specify that this data is not allowed to go outside of the United
States. It can also force encryption of certain types of data, and permit only specified users to

access the data. It can provide compliance with the Payment Card Industry Data Security
Standard (PCI DSS). True unified end-to-end security in the cloud will likely require an
ecosystem of partners.
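
To make the idea of data-level controls concrete, the sketch below shows a minimal, hypothetical
policy check in Python. The policy names, regions, and helper function are illustrative
assumptions, not part of any particular product or standard; a real deployment would enforce
such rules inside the storage and key-management layers.

# Minimal sketch of data-level policy enforcement (hypothetical policy model).
# Each record carries a classification; each destination has a region and an
# "encrypted at rest" flag. The check runs before any data is moved or stored.

POLICIES = {
    "pci":      {"allowed_regions": {"us-east-1", "us-west-2"}, "require_encryption": True},
    "internal": {"allowed_regions": {"us-east-1", "eu-west-1"}, "require_encryption": False},
}

def may_store(classification, destination_region, encrypted):
    """Return True only if the destination satisfies the record's policy."""
    policy = POLICIES.get(classification)
    if policy is None:
        return False  # unknown classification: deny by default
    if destination_region not in policy["allowed_regions"]:
        return False  # e.g. PCI-scoped data must not leave the permitted regions
    if policy["require_encryption"] and not encrypted:
        return False
    return True

print(may_store("pci", "eu-west-1", True))   # False: region not allowed
print(may_store("pci", "us-east-1", True))   # True
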
Application Security:
Application security is one of the critical success factors for a world-class SaaS company. This
is where the security features and requirements are defined and application security test results
are reviewed. Application security processes, secure coding guidelines, training, and testing
scripts and tools are typically a collaborative effort between the security and the development
teams. Although product engineering will likely focus on the application layer, the security
design of the application itself, and the infrastructure layers interacting with the application, the
security team should provide the security requirements for the product development engineers to
implement. This should be a collaborative effort between the security and product development
team. External penetration testers are used for application source code reviews, and attack and
penetration tests provide an objective review of the security of the application as well as
assurance to customers that attack and penetration tests are performed regularly. Fragmented and
undefined collaboration on application security can result in lower-quality design, coding efforts,
and testing results.
Virtual Machine Security:
In the cloud environment, physical servers are consolidated to multiple virtual machine instances
on virtualized servers. Not only can data center security teams replicate typical security controls
for the data center at large to secure the virtual machines, they can also advise their customers on
how to prepare these machines for migration to a cloud environment when appropriate.
Firewalls, intrusion detection and prevention, integrity monitoring, and log inspection can all be
deployed as software on virtual machines to increase protection and maintain compliance
integrity of servers and applications as virtual resources move from on-premises to public cloud
environments. By deploying this traditional line of defense to the virtual machine itself, you can
enable critical applications and data to be moved to the cloud securely. To facilitate the
centralized management of a server firewall policy, the security software loaded onto a virtual
machine should include a bidirectional stateful firewall that enables virtual machine isolation and
location awareness, thereby enabling a tightened policy and the flexibility to move the virtual
machine from on-premises to cloud resources. Integrity monitoring and log inspection software
must be applied at the virtual machine level. This approach to virtual machine security, which
connects the machine back to the mother ship, has some advantages in that the security software
can be put into a single software agent that provides for consistent control and management
throughout the cloud while integrating seamlessly back into existing security infrastructure
investments, providing economies of scale, deployment, and cost savings for both the service
provider and the enterprise.

Legal issues and Aspects:


Computing in “the Cloud” has quickly become a fact of everyday life for many
businesses, and everyone needs to know that it creates many legal issues that can lead
to problems if not handled proactively. Legal standards, regulations, and norms
relating to cloud computing are evolving rapidly in the United States, in Europe, and
elsewhere. Businesses in industries including healthcare, software, financial services,
and social media are offering cloud-based products and services to customers and

clients that provide unprecedented convenience and mobility, but also create
unprecedented risks. Nearly every business that uses computers is itself a consumer of
cloud-based solutions.

Legal issues that can arise “in the cloud” include liability for copyright infringement,
data breaches, security violations, privacy and HIPAA violations, data loss, data
management, electronic discovery (“e-discovery”), hacking, cybersecurity, and many
other complex issues that can lead to complex litigation and regulatory matters before
courts and agencies in the United States, Europe, and elsewhere.

Whether as vendors or as consumers of cloud-based services, many businesses assume that other
participants in the process are taking the necessary steps to ensure data security and otherwise
address the many potential legal issues. Although it is widespread, that assumption can be
dangerous if it leads businesses not to take adequate steps to protect themselves and their
customers and clients.
What Is Cloud Security Monitoring?

Cloud security monitoring is the practice of continuously supervising both virtual and physical
servers to analyze data for threats and vulnerabilities. Cloud security monitoring solutions often
rely on automation to measure and assess behaviors related to data, applications and
infrastructure.

How Does Cloud Security Monitoring Work?

Cloud security monitoring solutions can be built natively into the cloud server hosting
infrastructure (like AWS's CloudWatch, for example) or they can be third-party solutions that
are added to an existing environment (like Blumira). Organizations can also perform cloud
monitoring on premises using existing security management tools.

Like a SIEM, cloud security monitoring works by collecting log data across servers. Advanced
cloud monitoring solutions analyze and correlate the gathered data for anomalous activity, then
send alerts and enable incident response; a minimal correlation sketch is given after the list of
capabilities below. A cloud security monitoring service will typically offer:

Visibility. Moving to the cloud inherently lowers an organization's visibility across its
infrastructure, so cloud security monitoring tools should bring a single pane of glass to monitor
application, user and file behavior and identify potential attacks.
Scalability. Cloud security monitoring tools should be able to monitor large amounts of data
across a variety of distributed locations.

Auditing. It is a challenge for organizations to manage and meet compliance requirements, so
cloud security monitoring tools should provide robust auditing and monitoring capabilities.

Continuous monitoring. Advanced cloud security monitoring solutions should continuously
monitor behavior in real time to quickly identify malicious activity and prevent an attack.

Integration. To maximize visibility, a cloud monitoring solution should ideally integrate with an
organization's existing services, such as productivity suites (i.e. Microsoft 365 and G Suite),
endpoint security solutions (i.e. Crowdstrike and VMware Carbon Black) and identity and
authentication services (i.e. Duo and Okta).
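
The following Python sketch illustrates, in a deliberately simplified way, the kind of log
correlation described above: normalized log events are scanned and an alert is raised when one
user accumulates too many failed logins within a short window. The event format, threshold and
window are illustrative assumptions, not the behavior of any specific monitoring product.

# Minimal sketch of SIEM-style log correlation; real cloud monitoring tools do far more.
from collections import defaultdict
from datetime import datetime, timedelta

THRESHOLD = 5                   # failed logins tolerated per user per window
WINDOW = timedelta(minutes=10)

def failed_login_alerts(events):
    """events: dicts with 'time' (datetime), 'user', 'action' and 'result' keys."""
    recent_failures = defaultdict(list)
    alerts = []
    for event in sorted(events, key=lambda e: e["time"]):
        if event["action"] == "login" and event["result"] == "failure":
            window = [t for t in recent_failures[event["user"]]
                      if event["time"] - t <= WINDOW]
            window.append(event["time"])
            recent_failures[event["user"]] = window
            if len(window) >= THRESHOLD:
                alerts.append(f"possible brute force against {event['user']} at {event['time']}")
    return alerts

# Usage: feed in normalized log records gathered from servers or cloud provider APIs.
sample = [{"time": datetime(2024, 1, 1, 9, minute), "user": "alice",
           "action": "login", "result": "failure"} for minute in range(6)]
print(failed_login_alerts(sample))
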

Benefits of Cloud Security Monitoring

Cloud security monitoring provides the following benefits:

Maintain compliance. Monitoring is a requirement for nearly every major regulation, from
HIPAA to PCI DSS. Cloud-based organizations must use monitoring tools to avoid compliance
violations and costly fees.

Identify vulnerabilities. Automated monitoring solutions can quickly alert IT and security teams
about anomalies and help identify patterns that point to risky or malicious behavior. Overall,
this brings a deeper level of observability and visibility to cloud environments.

Prevent loss of business. An overlooked security incident can be detrimental and even result in
shutting down business operations, leading to a decrease in customer trust and satisfaction,
especially if customer data was leaked. Cloud security monitoring can help with business
continuity and data security, while avoiding a potentially catastrophic data breach.
Increase security maturity. An organization with a mature infosec model has a proactive,
multi-layered approach to security. A cloud monitoring solution enables organizations to include
the cloud as one of those layers and provides visibility into the overall environment.
Legal issues and Aspects:

Cloud computing exists in an exciting, complex, and dynamic legal environment spanning both
public and private law: countries actively try to protect the rights of their citizens and encourage
adoption of the cloud through strict, effective, and fair regulatory approaches, and businesses and
cloud service providers (CSPs) work together to craft contracts to the benefit of both. The
standards and requirements contained in these laws and contracts vary considerably, and may be
in conflict with one another. In addition, there are special considerations that must be undertaken
when data flows across borders between organizations that operate in two different jurisdictions.
In this chapter, we discuss the legal landscape in which cloud computing exists; provide a high-
level overview of relevant laws and regulations that govern it, including how countries have
addressed the problem of transborder dataflows, and describe the increasingly important role
played by contracts between cloud service providers and their clients.
Legal issues have risen with the changing landscape of computing, especially when the service,
data and infrastructure is not owned by the user. With the Cloud, the question arises as to who is
in the “possession” of the data. The Cloud provider can be considered as a legal custodian, owner
or possessor of the data thereby causing complexities in legal matters around trademark
infringement, privacy of users and their data, abuse and security. By introducing Cloud design
focusing on privacy, legal as a service on a Cloud and service provider accountability, users can
expect the service providers to be accountable for privacy and data in addition to their regular
SLAs.

Cross Border Legal Issues


• The cloud, being inherently stateless and having servers located in different locations and
countries, creates issues related to conflict of laws, applicable law and jurisdiction.
• Cross-border data flow, potentially conflicting regulations, applicable regulations

Involvement of multiple parties

• Cloud services usually involve multiple parties, which lets the onus and liability be shifted
from one party to another. The liability and responsibility of sub-contractors is often limited or
disclaimed in its entirety.
• Contractual privity is often lacking between the parties, which makes it difficult for the client
to hold a provider to account for a breach.
• Agreements should include liability of provider for acts of subcontractor.
• Right to conduct due diligence and to understand the model of delivery of services should be
given to the customer.

Privacy and Security


• Multi-tenant architecture
• Data from different users are usually stored on a single virtual server
• Multiple virtual servers run on a single physical server
• Data security depends upon the integrity of the virtualization

Service Level Agreements


• Cloud services are usually provided on standard service level agreements which are usually
non-negotiable.
• Even if the SLA itself is non-negotiable, a higher degree of reporting should be integrated into
the agreement.
• Additional options for termination should be provided.

IPR and Ownership Issues


• Trade secret protection: third parties might have access to data, which can be detrimental to
the trade secrets of a company.
• Companies should have non-disclosure agreements with the vendor.
• Ensure that no rights in IPR are transferred to the vendor.

Multi-tenancy

In cloud computing, multitenancy means that multiple customers of a cloud vendor are using the
same computing resources. Despite the fact that they share resources, cloud customers aren't
aware of each other, and their data is kept totally separate. Multitenancy is a crucial component
of cloud computing; without it, cloud services would be far less practical. Multitenant
architecture is a feature in many types of public cloud computing,
including IaaS, PaaS, SaaS, containers, and serverless computing.
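
A common way to realize this separation in a SaaS application is a shared-schema design in
which every row carries a tenant identifier and every query is scoped to the calling tenant. The
Python/SQLite sketch below is a minimal illustration of that idea; the table, column names and
tenants are made up for the example, and production systems add further isolation (separate
schemas, encryption, row-level security and so on).

# Minimal sketch of shared-schema multitenancy: one table, every row tagged
# with a tenant_id, and every query scoped to the calling tenant.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE invoices (tenant_id TEXT, invoice_no TEXT, amount REAL)")
db.executemany("INSERT INTO invoices VALUES (?, ?, ?)",
               [("acme", "A-1", 120.0), ("globex", "G-1", 75.5)])

def invoices_for(tenant_id):
    """Tenants share the table but can only ever see their own rows."""
    cursor = db.execute(
        "SELECT invoice_no, amount FROM invoices WHERE tenant_id = ?", (tenant_id,))
    return cursor.fetchall()

print(invoices_for("acme"))    # [('A-1', 120.0)] -- globex's data stays invisible
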

What are the benefits of multitenancy?

Many of the benefits of cloud computing are only possible because of multitenancy. Here are
two crucial ways multitenancy improves cloud computing:

Better use of resources: One machine reserved for one tenant isn't efficient, as that one tenant is
not likely to use all of the machine's computing power. By sharing machines among multiple
tenants, use of available resources is maximized.

Lower costs: With multiple customers sharing resources, a cloud vendor can offer their services
to many customers at a much lower cost than if each customer required their own dedicated
infrastructure.

What are the drawbacks of multitenancy?

Possible security risks and compliance issues: Some companies may not be able to store data
within shared infrastructure, no matter how secure, due to regulatory requirements. Additionally,
security problems or corrupted data from one tenant could spread to other tenants on the same
machine, although this is extremely rare and shouldn't occur if the cloud vendor has configured
their infrastructure correctly. These security risks are somewhat mitigated by the fact that cloud
vendors typically are able to invest more in their security than individual businesses can.

The "noisy neighbor" effect: If one tenant is using an inordinate amount of computing power,
this could slow down performance for the other tenants. Again, this should not occur if the cloud
vendor has set up their infrastructure correctly.

Multi-tenancy Issues

Multi-tenancy issues in cloud computing are a growing concern, especially as the industry
expands and big business enterprises shift their workloads to the cloud. Cloud computing
provides different services over the internet, including giving users access to resources such as
servers and databases.

Cloud computing lets you work remotely with networking and software.

There is no need to be at a specific place to store data. Information or data can be available on
the internet. One can work from wherever he wants. Cloud computing brings many benefits for
its users or tenants, like flexibility and scalability. Tenants can expand and shrink their resources
according to the needs of their workload. Tenants or users do not need to worry about the
maintenance of the cloud.

Tenants need to pay for only the services they use. Still, there are some multi-tenancy issues in
cloud computing that you must look out for:

Security: This is one of the most challenging and risky issues in multi-tenancy cloud computing.
There is always a risk of data loss, data theft, and hacking. The database administrator can grant
access to an unauthorized person accidentally. Despite software and cloud computing companies
saying that client data is safer than ever on their servers, there are still security risks.

There is a potential for security threats when information is stored using remote servers and
accessed via the internet. There is always a risk of hacking with cloud computing. No matter how
secure encryption is, someone can always decrypt it with the proper knowledge. A hacker getting
access to a multitenant cloud system can gather data from many businesses and use it to his
advantage. Businesses need high-level trust when putting data on remote servers and using
resources provided by the cloud company to run the software.

The multi-tenancy model has many new security challenges and vulnerabilities. These new
security challenges and vulnerabilities require new techniques and solutions. For example, a
tenant may gain access to someone else's data, data may be returned to the wrong tenant, or one
tenant may affect another through resource sharing.

Performance: SaaS applications are hosted at remote locations, and this affects response time.
SaaS applications usually take longer to respond and are much slower than locally hosted server
applications. This slowness affects the overall performance of the systems and makes them less
efficient. In the competitive and growing world of cloud computing, lack of performance pushes
cloud service providers down, so it is important for multi-tenancy cloud service providers to
enhance their performance.

Less Powerful: Many cloud services run on web 2.0, with new user interfaces and the latest
templates, but they lack many essential features. Without the necessary and adequate features,
multi-tenancy cloud computing services can be a nuisance for clients.

Noisy Neighbor Effect: If a tenant uses a lot of the computing resources, other tenants may
suffer because of their low computing power. However, this is a rare case and only happens if
the cloud architecture and infrastructure are inappropriate.

Interoperability: Users remain restricted by their cloud service providers. They cannot go
beyond the limitations set by the providers to optimize their systems. For example, they cannot
interact with other vendors and service providers and cannot even communicate with local
applications.

This prevents users from optimizing their systems by integrating with other service providers
and local applications. Organizations cannot even integrate with their existing systems, such as
on-premises data centers.

Monitoring: Constant monitoring is vital for cloud service providers to check whether there is
an issue in the multi-tenancy cloud system. Multi-tenancy cloud systems require continuous
monitoring, as computing resources are shared with many users simultaneously. If any problem
arises, it must be solved immediately so as not to disturb the system's efficiency.

However, monitoring a multi-tenancy cloud system is very difficult as it is tough to find flaws in
the system and adjust accordingly.

Capacity Optimization: Before giving users access, database administrators must know which
tenant to place on which network. The tools applied should be modern and should support the
correct allocation of tenants. Sufficient capacity must be provisioned, or the multi-tenancy cloud
system will incur increased costs. As demands keep changing, multi-tenancy cloud systems must
keep upgrading and providing sufficient capacity.

Multi-tenancy cloud computing is growing at a rapid pace. It is a requirement for the future and
has significant potential. Multi-tenancy cloud computing will keep improving as large
organizations look to move more of their workloads to the cloud.

Unit 6: Cloud Platforms and Applications (12 Hrs.)


Web services:
A web service is a structured method for distributing client-server communication over the
World Wide Web. A web service is a software module that performs a specific set of tasks. Web
services can be searched for across the network and invoked appropriately. When invoked, a
web service provides the client with the functionality it exposes.

Internally, a web service works as follows. The client makes a series of web service calls, in the
form of requests, to the server that hosts the web service. These calls are made through so-called
remote procedure calls (RPCs), that is, calls to procedures hosted by the web service. Amazon,
for example, provides a web service for products sold online through amazon.com. The front
end or presentation layer may be in .NET or Java, but the web service can interact with either
programming language. The primary component of a web service is the data transmitted
between the client and the server, namely XML. XML is a counterpart to HTML, an
intermediate language that many programming languages can easily understand; applications
speak XML to one another. This provides a common interface through which applications
written in different programming languages can interact. Web services use SOAP (Simple
Object Access Protocol) to transfer XML data between applications, and the data is transmitted
over standard HTTP. A SOAP message is simply an XML document, so the client application
that calls the web service can be written in any programming language.
Why do you need a web service? Everyday software systems use a wide range of web-based
programming tools: some applications are built in Java, others in .NET, others in AngularJS or
Node.js, and so on. These heterogeneous applications most often require some kind of
communication between them, and since they are constructed in different programming
languages, effective communication is very difficult to ensure. This is where web services come
in: they provide a shared platform that enables applications based on various programming
languages to communicate with each other.
Types of Web Services
Two kinds of web services are mainly available:
1. SOAP web services.
2. RESTful web services.
There are some components which must be in place to make a web service fully functional.
Regardless of which programming language is used to implement the web service, these
components must be present. Let us take a closer look at these elements.
SOAP messages: SOAP is regarded as a transport-independent messaging protocol based on the
transfer of XML data. Every message is an XML document; only the structure of the XML
document follows a certain pattern, while the contents do not. The best part of web services and
SOAP is that everything is delivered via HTTP, the standard web protocol. A root element
called the <Envelope> is needed in every SOAP document, and the root element is the first
element of an XML document. The Envelope is in turn divided into two parts: the first is the
header and the second is the body. The header comprises the routing data, that is, the
information about where the XML document should be sent; the actual message is in the body.
A simple example of a SOAP request, sketched in Python, is given below.
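
The sketch below builds a small SOAP Envelope with a Header and a Body and prepares it for
delivery over HTTP using only the Python standard library. The endpoint URL, namespace,
operation name and SOAPAction value are placeholders invented for the illustration; an actual
service defines these in its WSDL.

# Minimal sketch of a SOAP request: an XML Envelope with a Header and a Body,
# carried over plain HTTP. Endpoint and operation names are placeholders.
import urllib.request

SOAP_ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <!-- routing and security information would go here -->
  </soap:Header>
  <soap:Body>
    <GetProductPrice xmlns="http://example.com/products">
      <ProductId>12345</ProductId>
    </GetProductPrice>
  </soap:Body>
</soap:Envelope>"""

request = urllib.request.Request(
    "http://example.com/productservice",              # placeholder endpoint
    data=SOAP_ENVELOPE.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://example.com/products/GetProductPrice"},
)
# response = urllib.request.urlopen(request)          # send against a real service
# print(response.read().decode("utf-8"))              # the reply is also a SOAP envelope
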

KEY DIFFERENCE
● SOAP stands for Simple Object Access Protocol, whereas REST stands for Representational
State Transfer.
● SOAP is a protocol, whereas REST is an architectural pattern.
● SOAP uses service interfaces to expose its functionality to client applications, while REST
uses uniform resource locators to access the components on the hardware device.
● SOAP needs more bandwidth for its usage, whereas REST does not need much bandwidth.
● Comparing SOAP and REST APIs, SOAP only works with XML formats, whereas REST
works with plain text, XML, HTML and JSON.
● SOAP cannot make use of REST, whereas REST can make use of SOAP.
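
By contrast, a RESTful call is just an ordinary HTTP request against a resource URL, usually
returning JSON. The sketch below uses only the Python standard library; the URL and resource
layout are placeholders for illustration.

# Minimal sketch of a RESTful call: an HTTP GET on a resource URL whose
# representation comes back as JSON. The URL is a placeholder.
import json
import urllib.request

def get_product(product_id):
    url = f"http://example.com/api/products/{product_id}"   # placeholder endpoint
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read().decode("utf-8"))

# Usage against a real service:
# product = get_product(12345)
# print(product["price"])
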

Amazon Web Services (AWS)
Amazon Web Services (AWS) is a cloud computing platform with functionalities such as
database storage, content delivery, and secure IT infrastructure for companies, among others. It
is known for its on-demand services, namely Elastic Compute Cloud (EC2) and Simple Storage
Service (S3). Amazon EC2 and Amazon S3 are essential tools to understand if you want to
make the most of the AWS cloud. Amazon EC2, short for Elastic Compute Cloud, is a service
for running cloud servers. Amazon launched EC2 in 2006; it allows companies to rapidly and
easily spin up servers in the cloud instead of having to buy, set up, and manage their own
servers on premises. Although bare-metal EC2 instances are also available, most Amazon EC2
server instances are virtual machines housed on Amazon's infrastructure; the servers are
operated by the cloud provider, and you do not need to set up or maintain the hardware. A vast
number of EC2 instance types are available at different prices; generally speaking, the more
computing capacity you use, the larger the EC2 instance you need. (Bare-metal cloud instances
permit you to host a workload on a physical computer rather than a virtual machine.) Certain
Amazon EC2 instance types are optimized for particular applications, such as GPU instances
for the parallel processing of big data workloads. Beyond making server deployment simpler
and quicker, EC2 offers functionality such as auto-scaling, which automates the process of
increasing or decreasing the compute resources available for a given workload. Auto-scaling
thus helps to optimize costs and efficiency, especially for workloads with significant variations
in volume. Amazon S3 (its full name is Simple Storage Service) is a storage service operating
on the AWS cloud. It enables users to store virtually every form of data in the cloud and access
that storage over a web interface, the AWS Command Line Interface, or the AWS API. To use
S3 you create what Amazon calls a 'bucket', a container that you use to store and retrieve data;
you can set up many buckets if you like. Amazon S3 is an object storage system that works
especially well for massive, uneven or highly dynamic data storage.
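
A small sketch of how these two services are driven programmatically is shown below, using
boto3, the AWS SDK for Python. It assumes AWS credentials are already configured on the
machine; the bucket name, AMI ID and instance type are placeholders chosen only for
illustration.

# Minimal boto3 sketch: store an object in S3 and launch one EC2 instance.
# Bucket name, AMI ID and instance type are placeholders.
import boto3

# Amazon S3: create a bucket and put an object into it.
s3 = boto3.client("s3")
s3.create_bucket(Bucket="my-example-course-bucket")
s3.put_object(Bucket="my-example-course-bucket",
              Key="notes/hello.txt",
              Body=b"hello from the cloud")

# Amazon EC2: launch a single virtual server instance.
ec2 = boto3.client("ec2")
ec2.run_instances(ImageId="ami-0123456789abcdef0",   # placeholder AMI
                  InstanceType="t3.micro",
                  MinCount=1, MaxCount=1)
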
AppEngine
Google App Engine (GAE) is a cloud computing service (belonging to the Platform as a Service
(PaaS) category) for creating and hosting web-based applications within Google's data centers.
GAE web applications are sandboxed and run across many redundant servers so that resources
can be scaled up according to current traffic requirements. App Engine assigns additional
resources to servers to handle increased load. Google App Engine is a Google platform for
developers and businesses to create and run apps using advanced Google infrastructure. These
apps must be written in one of the few supported languages, namely Java, Python, PHP and Go.
GAE also requires the use of the Google query language, and Google BigTable is the database
used. Applications must comply with these standards, so they must either be developed with
GAE in mind or modified to comply. GAE is a platform for running and hosting web apps,
whether accessed from mobile devices or from the Web. Without this all-in-one platform,
developers would be responsible for building their own servers, database software and APIs and
for making them all work together correctly. GAE takes this pressure off developers so that they
can concentrate on the app's front end and on features that enhance the user experience.
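
As an illustration, a typical App Engine application in Python is just an ordinary web app
handed over to the platform to run and scale. The Flask handler below is a minimal sketch; the
deployment descriptor (app.yaml) and project configuration are omitted, and the route and
message are invented for the example.

# Minimal sketch of a web application of the kind that can be deployed to
# Google App Engine's Python runtime (deployment descriptor omitted).
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # App Engine routes incoming HTTP requests to this handler and scales the
    # number of serving instances up or down with traffic.
    return "Hello from App Engine"

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=8080)   # local development server only
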
Azure Platform
Microsoft Azure is a Platform as a Service (PaaS) for developing and managing applications
using Microsoft products and running them in Microsoft's data centers. It is a complete suite of
cloud products that allows users to build business-class applications without building their own
infrastructure. Three cloud-centric products are available on the Azure platform: Windows
Azure, SQL Azure and the Azure AppFabric controller. Together they provide the infrastructure
hosting facility for applications. In Azure, a cloud service role is a set of managed, load-
balanced, Platform-as-a-Service virtual machines that work together to accomplish tasks. Cloud
service roles are controlled by the Azure fabric controller and provide a combination of
scalability, control and customization. A Web Role is an Azure cloud service role that is
configured and adapted to run web applications developed for Internet Information Services
(IIS) using programming languages and technologies such as ASP.NET, PHP, Windows
Communication Foundation and FastCGI. A Worker Role is any Azure role that runs
applications and services that do not generally require IIS; IIS is not enabled by default in
Worker Roles. Worker Roles are mainly used to run background processes for web applications
and to perform tasks such as automatically compressing uploaded images, running scripts when
something changes in the database, getting new messages from a queue and processing them,
and more. VM Role: the VM role is a type of Azure platform role that supports the automated
management of already installed service packages, fixes, updates and applications for Windows
Azure. The principal difference is that a Web Role automatically deploys and hosts the
application via IIS, whereas a Worker Role does not use IIS and runs the application standalone.
The two can be managed similarly and can run on the same Azure instances when they are
deployed via the Azure Service Platform. In some cases, Web Role and Worker Role instances
work together and are used concurrently by an application; for example, a Web Role instance
can accept requests from users and then pass them to a Worker Role instance for processing.
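
Conceptually, a Worker Role is a loop that pulls work items from a queue and processes them in
the background. The Python sketch below only illustrates that pattern with an in-memory queue;
a real Azure Worker Role would read from Azure Queue Storage or Service Bus through the
Azure SDK, which is not shown here.

# Conceptual sketch of the Worker Role pattern: pull a message from a queue,
# process it in the background, repeat. The in-memory queue stands in for an
# Azure queue; task names are invented for the example.
import queue

work_queue = queue.Queue()
work_queue.put("compress image-001.png")
work_queue.put("compress image-002.png")

def run_worker():
    while not work_queue.empty():
        task = work_queue.get()
        print("processing:", task)     # e.g. compress an uploaded image
        work_queue.task_done()

run_worker()
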
Azure Services
Azure includes a range of services in its cloud offering. These are:
1. Compute Services: includes Azure services such as Azure VM, Azure Website, Mobile
services and so on.
2. Data Services: includes Azure Storage, Azure SQL Database and so on.
3. Application Services: includes services that help users build and operate applications, such as
Azure Active Directory, Service Bus for connecting distributed systems, big data processing
and so on.
4. Network Services: includes Virtual Network, the Content Delivery Network and the Azure
Traffic Manager.
There are other services as well, such as BizTalk, Big Compute, Identity, Messaging, Media,
CDN, etc.

Aneka

Manjrasoft Aneka: Manjrasoft Pvt. Ltd. is an organization that works on cloud computing
technology by developing software for distributed networks spanning multiple servers. Its goals
with the Aneka platform are to:
• Create scalable, customizable building blocks essential to cloud computing platforms.
• Build software to accelerate applications designed for networked multi-core computers.
• Provide quality of service (QoS) and service level agreement (SLA)-based solutions which
allow the scheduling, dispatching, pricing and accounting of applications and services in
business and/or public computing network environments.
• Enable rapid development of new applications, and migration of legacy ones, using innovative
parallel and distributed programming models.
• Give organizations the ability to use computing resources to speed up compute- or data-
intensive applications.
Open challenges:
1. Cloud definition:
There are several attempts to define cloud computing and to propose a classification of all the
services and technologies identified as such. The NIST working definition of cloud computing
characterizes it by on-demand self-service, broad network access, resource pooling, rapid
elasticity and measured service. A different approach has been taken at the University of
California, Santa Barbara, which tries to define an ontology for cloud computing; such an
ontology could address the needs of different classes of users within the cloud computing
community and most likely provide a more robust interaction model between the different cloud
entities. These characterizations and taxonomies reflect what is meant by cloud computing at
present, but they remain attempts to capture its real nature.

2. Cloud interoperability and standards:
Cloud computing is a service-based model for delivering IT infrastructure and applications like
utilities such as power, water and electricity, so interoperability between solutions offered by
different vendors is of fundamental importance. The current state of standards and
interoperability in cloud computing is still at an early stage; some progress has been made, and
a few organizations are leading the path.

3. Scalability and fault tolerance:
The ability to scale on demand constitutes one of the most attractive features of cloud
computing. Cloud middleware has to be designed with the principle of scalability along
different dimensions in mind, since it manages a huge number of resources and users. These
costs are a reality for whoever develops, manages and maintains the cloud middleware and
offers the service; the challenge in this case is designing highly scalable and fault-tolerant
systems that are easy to manage and at the same time provide competitive performance.

4. Security, Trust and Privacy:
Security, trust and privacy issues are major obstacles to the massive adoption of cloud
computing. Using existing technologies in new ways creates new opportunities for additional
threats to the security of applications, and the lack of control over their own data and processes
also poses severe problems for the trust users place in cloud service providers. The challenges in
this area are concerned with devising secure and trustable systems.

5. Organizational aspects:
The role of the IT department in an enterprise that completely or significantly relies on the cloud
poses a number of challenges from an organizational point of view that must be faced. The lack
of control over the management of data and processes poses not only security threats but also
new problems that previously did not exist. At the same time, one of the major advantages of
moving IT infrastructure and services to the cloud is to reduce or completely remove the costs
related to maintenance and support.
Scientific applications
Scientific applications are increasingly using cloud computing systems and technologies. The
most relevant option is IaaS solutions, which offer the optimal environment for running bag-of-
tasks applications and workflows. Problems that require a higher degree of flexibility in
structuring their computation model can leverage platforms such as Aneka.
Healthcare: ECG analysis in the cloud
Cloud computing allows the remote monitoring of a patient's heartbeat and data analysis in
minimal time, and the notification of first-aid personnel and doctors should the data reveal
potentially dangerous conditions. ECG is the electrical manifestation of the contractile activity of
the heart's myocardium. Wearable computing devices equipped with ECG sensors constantly
monitor the patient's heartbeat. Such information is transmitted to the patient's mobile device,
which will forward it to the cloud-hosted Web service for analysis. The Web service is the SaaS
application that will store ECG data in Amazon S3 and issue a processing request to the scalable
cloud platform. The number of workload engine instances is controlled according to the queue of
each instance, while Aneka controls the number of EC2 instances used to execute the single tasks
for a single ECG job.
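
The ingestion side of such a workflow can be sketched in a few lines: a small web service
accepts ECG samples posted by the patient's mobile device, stores them, and queues an analysis
job for the scalable back end. In the Python/Flask sketch below, the in-memory dictionary and
queue are simplified stand-ins for Amazon S3 and the cloud platform's job queue, and the route
and payload format are invented for the illustration.

# Minimal sketch of the ECG ingestion step: the mobile device POSTs samples,
# the service stores them and queues an analysis job. The in-memory store and
# queue stand in for Amazon S3 and the platform's workload queue.
import queue
from flask import Flask, request, jsonify

app = Flask(__name__)
ecg_store = {}                       # patient_id -> list of samples (stand-in for S3)
analysis_jobs = queue.Queue()        # jobs picked up later by worker instances

@app.route("/ecg/<patient_id>", methods=["POST"])
def upload_ecg(patient_id):
    samples = request.get_json()["samples"]
    ecg_store.setdefault(patient_id, []).extend(samples)
    analysis_jobs.put(patient_id)    # a worker analyzes the trace asynchronously
    return jsonify({"queued": True, "samples_received": len(samples)})

if __name__ == "__main__":
    app.run(port=5000)
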

Biology: protein structure prediction


Biology applications have made extensive use of supercomputing and cluster computing
infrastructures. Cloud computing grants access to such capacity on a pay-per-use basis and can
be leveraged in a more dynamic fashion for bioinformatics applications. The computational
power required for protein structure prediction can now be acquired on demand, without
owning a cluster. The prediction task uses machine learning techniques for determining the
secondary structure of proteins. It is possible to take advantage of parallel execution in the
classification phase, where multiple classifiers are executed concurrently.
The prediction algorithm is then translated into a task graph that is submitted to Aneka. Once the
task is completed, the middleware makes the results available for visualization through the Jeeva
Portal.
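
The concurrent classification phase mentioned above can be pictured with a few lines of Python
using the standard concurrent.futures module. The classifiers here are dummies that just return a
placeholder label; in the real workflow each task would load a trained model and run on a cloud
node provisioned by the middleware.

# Minimal sketch of running several classifiers concurrently over one protein
# sequence. The classifiers are placeholders, not real secondary-structure models.
from concurrent.futures import ProcessPoolExecutor

def classify(args):
    classifier_id, sequence = args
    # A real implementation would load a trained model and predict the
    # secondary structure; here we return a dummy label per classifier.
    return classifier_id, f"prediction-for-{sequence[:8]}"

if __name__ == "__main__":
    sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"      # example amino-acid string
    jobs = [(i, sequence) for i in range(4)]            # four independent classifiers
    with ProcessPoolExecutor() as pool:
        for classifier_id, label in pool.map(classify, jobs):
            print(classifier_id, label)
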
Biology: gene expression data analysis for cancer diagnosis
Gene expression profiling is the measurement of the expression levels of thousands of genes at a cellular
level. This activity is a fundamental component of drug design, since it allows scientists to identify the
effects of a specific treatment. Gene expression profiling can be used to provide a more accurate
classification of tumors.
Geoscience: satellite image processing
Satellite remote sensing generates hundreds of gigabytes of raw images that need to be further
processed to become the basis of several different GIS products. A cloud-based implementation
of such a workflow has been developed by the Department of Space, Government of India. The
platform leverages a Xen private cloud and the Aneka technology to dynamically provision the
required resources (i.e., grow or shrink) on demand.

Business and Consumer applications:
Cloud computing innovations are likely to help the commercial and consumer sectors the most.
On the one hand, the ability to convert capital expenses into operating costs makes the cloud a
more appealing alternative for any IT-centric business. On the other hand, the cloud's sense of
ubiquity in terms of accessing data and services makes it appealing to end users. Furthermore,
because cloud technologies are elastic, they do not necessitate large upfront investments,
allowing innovative ideas to be easily converted into products and services that can readily
scale with demand.

CRM and ERP:


Cloud CRM applications constitute a great opportunity for small enterprises and start-ups to
have fully functional CRM software without large up-front costs, by paying subscriptions. CRM
is not an activity that requires specific needs, and it can be easily moved to the cloud. ERP
solutions on the cloud are less mature and have to compete with well-established in-house
solutions.
Productivity:
Productivity applications replicate in the cloud some of the most common tasks that we are used to
performing on our desktop: from document storage to office automation and complete desktop
environments hosted in the cloud.

Social networking:
Social networking applications have grown considerably in the last few years to become the most active
sites on the Web. To sustain their traffic and serve millions of users seamlessly, services such as Twitter
and Facebook have leveraged cloud computing technologies. The possibility of continuously adding
capacity while systems are running is the most attractive feature for social networks, which constantly
increase their user base.

Media applications
Media applications are a niche that has taken a considerable advantage from leveraging cloud computing
technologies. In particular, video-processing operations, such as encoding, transcoding, composition, and
rendering, are good candidates for a cloud-based environment. These are computationally intensive tasks
that can be easily offloaded to cloud computing infrastructures.
Multiplayer online gaming:
Online multiplayer gaming attracts millions of gamers around the world who share a common experience
by playing together in a virtual environment that extends beyond the boundaries of a normal LAN. Online
games support hundreds of players in the same session, made possible by the specific architecture used to
forward interactions, which is based on game log processing. Players update the game server hosting the
game session, and the server integrates all the updates into a log that is made available to all the players
through a TCP port. The client software used for the game connects to the log port and, by reading the
log, updates the local user interface with the actions of other players.
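
The log-processing architecture just described can be sketched as a client that opens a TCP
connection to the game server's log port and applies each log line to its local view of the game.
In the Python sketch below, the host, port and update handling are placeholders invented for the
illustration.

# Minimal sketch of a game client following the shared game log over TCP and
# applying each entry to its local state. Host and port are placeholders.
import socket

def apply_update(entry):
    print("other player action:", entry)    # a real client would update the UI

def follow_game_log(host="game.example.com", port=9000):
    with socket.create_connection((host, port)) as sock:
        buffer = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:                    # server closed the connection
                break
            buffer += chunk
            while b"\n" in buffer:
                line, buffer = buffer.split(b"\n", 1)
                apply_update(line.decode("utf-8"))

# follow_game_log()   # would be run against a live game server
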

Summary of scientific and business application:


Scientific applications take great benefit from the elastic scalability of cloud environments, which also
provide the required degree of customization to allow the deployment and execution of scientific
experiments.

Business and consumer applications can leverage several other characteristics: CRM and ERP
applications in the cloud can reduce or even eliminate maintenance costs due to hardware management,
system administration, and software upgrades. Moreover, they can also become ubiquitous and accessible
from any device and anywhere. Productivity applications, such as office automation products, can make
your document not only accessible but also modifiable from anywhere. This eliminates, for instance, the
need to copy documents between devices. Media applications such as video encoding can offload lengthy
and compute-intensive encoding tasks onto the cloud. Social networks can leverage the capability of
continuously adding capacity without major service disruptions and by maintaining expected performance
levels. All these new opportunities have transformed the way we use these applications on a daily basis,
but they also introduced new challenges for developers, who have to rethink their designs to better benefit
from elastic scalability, on-demand resource provisioning, and ubiquity. These are key features of cloud
technology that make it an attractive solution in several domains.