Provisioning of resources
● The infrastructure layer is built with virtualized compute, storage and network resources.
● The abstraction of these hardware resources is meant to provide the flexibility
demanded by users.
● Internally, virtualization realizes automated provisioning of resources and optimizes the
infrastructure management process.
● The platform layer is for general purpose and repeated usage of the collection of
software resources.
● This layer provides users with an environment to develop their applications, to test
operation flows and to monitor execution results and performance.
● The platform should be able to assure users that they have scalability, dependability, and
security protection.
● In a way, the virtualized cloud platform serves as a “system middleware” between the
infrastructure and application layers of the cloud.
● The application layer is formed with a collection of all needed software modules for SaaS
applications.
● Service applications in this layer include daily office management work such as information
retrieval, document processing and calendar and authentication services.
● The application layer is also heavily used by enterprises in business marketing and sales,
consumer relationship management (CRM), financial transactions and supply chain
management.
● From the provider’s perspective, the services at various layers demand different amounts of
functionality support and resource management by providers.
● In general, SaaS demands the most work from the provider, PaaS is in the middle, and IaaS
demands the least.
● For example, Amazon EC2 provides not only virtualized CPU resources to users but also
management of these provisioned resources.
● Services at the application layer demand more work from providers.
● The best example of this is the Salesforce.com CRM service in which the provider supplies
not only the hardware at the bottom layer and the software at the top layer but also the platform
and software tools for user application development and monitoring.
● In Market Oriented Cloud Architecture, as consumers rely on cloud providers to meet more
of their computing needs, they will require a specific level of QoS to be maintained by their
providers, in order to meet their objectives and sustain their operations.
● Market-oriented resource management is necessary to regulate the supply and demand of
cloud resources and to achieve market equilibrium.
● Such a market-oriented cloud is basically built with the following entities:
● Users or brokers acting on user’s behalf submit service requests from anywhere in the world
to the data center and cloud to be processed.
● The request examiner ensures that there is no overloading of resources whereby many service
requests cannot be fulfilled successfully due to limited resources.
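As a rough illustration of this admission-control role, the following Python sketch (with made-up capacity figures and a hypothetical Request type, not any provider's API) accepts a request only if enough capacity remains:

from dataclasses import dataclass

@dataclass
class Request:
    req_id: str
    vcpus: int
    memory_gb: int

class RequestExaminer:
    """Toy admission controller: reject requests that would overload the pool."""
    def __init__(self, total_vcpus: int, total_memory_gb: int):
        self.free_vcpus = total_vcpus
        self.free_memory_gb = total_memory_gb

    def admit(self, req: Request) -> bool:
        # Admit only if the remaining capacity can serve the request.
        if req.vcpus <= self.free_vcpus and req.memory_gb <= self.free_memory_gb:
            self.free_vcpus -= req.vcpus
            self.free_memory_gb -= req.memory_gb
            return True
        return False

examiner = RequestExaminer(total_vcpus=64, total_memory_gb=256)
print(examiner.admit(Request("r1", vcpus=8, memory_gb=32)))    # True: admitted
print(examiner.admit(Request("r2", vcpus=128, memory_gb=16)))  # False: would overload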
● The Pricing mechanism decides how service requests are charged. For instance, requests can
be charged based on submission time (peak/off-peak), pricing rates (fixed/changing), or
availability of resources (supply/demand).
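To make the peak/off-peak idea concrete, here is a small hedged sketch in Python; the rates and the peak window are invented for illustration, not any provider's tariff:

def charge(cpu_hours: float, submission_hour: int,
           peak_rate: float = 0.12, off_peak_rate: float = 0.08) -> float:
    # Charge a higher rate for requests submitted during the peak window.
    is_peak = 9 <= submission_hour < 18   # 09:00-18:00 treated as peak
    rate = peak_rate if is_peak else off_peak_rate
    return cpu_hours * rate

print(charge(cpu_hours=10, submission_hour=11))  # peak pricing
print(charge(cpu_hours=10, submission_hour=23))  # off-peak pricing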
The conceptual reference architecture involves five actors. Each actor is an entity that
participates in cloud computing.
[Figure: NIST cloud computing reference architecture — cloud consumer; cloud provider (service layer with IaaS/PaaS/SaaS, resource abstraction and control layer, physical resource layer, business support, provisioning/configuring, portability and interoperability, security, privacy); cloud auditor (security audit, privacy impact audit, performance audit); cloud broker (service intermediation, aggregation, arbitrage); and cloud carrier.]
● Cloud consumer: A person or an organization that maintains a business relationship with, and uses
services from, cloud providers.
● Cloud provider: A person, organization or entity responsible for making a service available to
interested parties
● Cloud auditor: A party that conducts independent assessment of cloud services, information
system operation, performance and security of the cloud implementation.
● Cloud broker: An entity that manages the performance and delivery of cloud services and
negotiates relationships between cloud providers and consumers.
● Cloud carrier: An intermediary that provides connectivity and transport of cloud services from
cloud providers to consumers.
[Figure: basic interactions among the cloud consumer, cloud provider, cloud broker, and cloud auditor.]
● The figure above illustrates the common interactions between the cloud consumer and the cloud
provider, where the broker provides services to the consumer and the auditor collects audit
information.
● The interactions between the actors may lead to different usage scenarios.
[Figure: usage scenario in which the consumer obtains a service from a cloud broker, which combines services from Provider 1 and Provider 2.]
● The figure above shows one such scenario, in which the cloud consumer requests a service from
a cloud broker instead of contacting a cloud provider directly. In this case, the cloud broker can
create a new service by combining multiple services.
[Figure: usage scenario in which the cloud auditor independently assesses the service the provider delivers to the consumer.]
The figure above shows the scenario where the cloud auditor conducts an independent assessment of
the operation and security of the cloud service implementation.
The cloud consumer is a principal stakeholder of the cloud computing service and requires service
level agreements to specify the performance requirements to be fulfilled by a cloud provider.
IaaS, PaaS, and SaaS are the three most prevalent cloud delivery models, and together they
have been widely adopted and formalized. A cloud delivery service model is a specific,
preconfigured combination of IT resources made available by a cloud service provider. But
the functionality and degree of administrative control each of these three delivery types offer
cloud users varies. These abstraction layers can also be considered a tiered architecture, where
services from one layer can be combined with services from another; for example, IaaS can
supply the infrastructure on which services from a higher layer are built. The development of cloud computing
introduces the concept of everything as a Service (XaaS). This is one of the most important elements
of cloud computing. Cloud services from different providers can be combined to provide a
completely integrated solution covering all the computing stack of a system. IaaS providers can
offer the raw infrastructure, in the form of virtual machines, on which PaaS solutions are deployed.
The consumer can choose from ready-made virtual machines with pre-installed operating
systems.
The cloud provider has full control over the data centers and the other hardware involved in
them.
It has the ability to scale resources based on user demand.
It can also replicate data worldwide so that the data can be accessed from anywhere in the world
as quickly as possible.
Characteristics of IaaS:
There are the following characteristics of IaaS:
Resources are available as a service
Services are highly scalable
Dynamic and flexible
GUI and API-based access
Automated administrative tasks
Example: DigitalOcean, Linode, Amazon Web Services (AWS), Microsoft Azure, Google
Compute Engine (GCE), Rackspace, and Cisco Metacloud
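As a brief illustration of consuming IaaS programmatically, the following sketch requests a small EC2 virtual machine through the boto3 SDK; the AMI ID and region are placeholders, and valid AWS credentials are assumed:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # region is an example

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print("Launched IaaS instance:", response["Instances"][0]["InstanceId"])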
● The management of a cloud service by a single company is often the source of single points
of failure.
● To achieve HA, one can consider using multiple cloud providers.
● Even if a company has multiple data centers located in different geographic regions, it may
have common software infrastructure and accounting systems.
● Therefore, using multiple cloud providers may provide more protection from failures.
● Another availability obstacle is distributed denial of service (DDoS) attacks.
● Criminals threaten to cut off the incomes of SaaS providers by making their services
unavailable.
● Some utility computing services offer SaaS providers the opportunity to defend against DDoS
attacks by using quick scale ups.
● Software stacks have improved interoperability among different cloud platforms, but the APIs
themselves are still proprietary. Thus, customers cannot easily extract their data and programs from
one site to run them on another.
● The obvious solution is to standardize the APIs so that a SaaS developer can deploy services
and data across multiple cloud providers.
● This would also protect against the loss of all data due to the failure of a single company.
● In addition to mitigating data lock-in concerns, standardization of APIs enables a new usage
model in which the same software infrastructure can be used in both public and private clouds.
● Such an option could enable surge computing, in which the public cloud is used to capture
the extra tasks that cannot be easily run in the data center of a private cloud.
● Current cloud offerings are essentially public (rather than private) networks, exposing the
system to more attacks.
● Multiple VMs can share CPUs and main memory in cloud computing, but I/O sharing is
problematic.
● For example, to run 75 EC2 instances with the STREAM benchmark requires a mean
bandwidth of 1,355 MB/second.
● However, for each of the 75 EC2 instances to write 1 GB files to the local disk requires a
mean disk write bandwidth of only 55 MB/second.
● This demonstrates the problem of I/O interference between VMs.
● One solution is to improve I/O architectures and operating systems to efficiently
virtualize interrupts and I/O channels.
● Internet applications continue to become more data intensive.
● If we assume applications to be pulled apart across the boundaries of clouds, this may
complicate data placement and transport.
● Cloud users and providers have to think about the implications of placement and traffic at
every level of the system, if they want to minimize costs.
● This kind of reasoning can be seen in Amazon’s development of its new CloudFront
service.
● Therefore, data transfer bottlenecks must be removed, bottleneck links must be widened and
weak servers should be removed.
● The level of virtualization may make it possible to capture valuable information in ways that
are impossible without using VMs.
● Debugging over simulators is another approach to attacking the problem, if the simulator is
well designed.
● The pay as you go model applies to storage and network bandwidth; both are counted in
terms of the number of bytes used.
● Computation is different depending on virtualization level.
● GAE automatically scales in response to load increases or decreases and the users are
charged by the cycles used.
● AWS charges by the hour for the number of VM instances used, even if the machine is
idle.
● The opportunity here is to scale quickly up and down in response to load variation, in
order to save money, but without violating SLAs.
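A back-of-the-envelope sketch of the two billing styles mentioned above; the rates are invented purely for illustration:

def hourly_vm_cost(hours_provisioned: float, rate_per_hour: float = 0.10) -> float:
    # AWS-style: billed for every provisioned hour, even when the VM is idle.
    return hours_provisioned * rate_per_hour

def usage_based_cost(busy_hours: float, rate_per_busy_hour: float = 0.15) -> float:
    # GAE-style: billed only for the cycles actually consumed while serving load.
    return busy_hours * rate_per_busy_hour

# A service provisioned 24 hours a day but busy only 6 of them:
print(hourly_vm_cost(24))    # pays for idle time as well
print(usage_based_cost(6))   # pays only while busy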
● Open Virtualization Format (OVF) describes an open, secure, portable, efficient and
extensible format for the packaging and distribution of VMs.
● It also defines a format for distributing software to be deployed in VMs.
● This VM format does not rely on the use of a specific host platform, virtualization
platform or guest operating system.
● The approach addresses virtual-platform-agnostic packaging with certification and
integrity of packaged software.
● The package supports virtual appliances to span more than one VM.
● OVF also defines a transport mechanism for VM templates and the format can apply to
different virtualization platforms with different levels of virtualization.
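Since an OVF package is described by an XML envelope, a descriptor can be inspected with ordinary XML tooling. The sketch below (file name hypothetical; namespace URI is the one used by OVF 1.x envelopes) lists the virtual systems in a package:

import xml.etree.ElementTree as ET

OVF_NS = {"ovf": "http://schemas.dmtf.org/ovf/envelope/1"}

envelope = ET.parse("appliance.ovf").getroot()   # hypothetical descriptor file

# A package may contain one VirtualSystem or a collection spanning several VMs.
for vs in envelope.findall(".//ovf:VirtualSystem", OVF_NS):
    print("Virtual system:", vs.get("{http://schemas.dmtf.org/ovf/envelope/1}id"))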
● In terms of cloud standardization, a key requirement is the ability for virtual appliances to run on any
virtualization platform.
● Users also need to be able to run VMs on heterogeneous hardware platform
hypervisors.
● This requires hypervisor-agnostic VMs.
● Users also need cross-platform live migration between x86 Intel and AMD technologies, and
support for legacy hardware for load balancing.
Challenge 6: Software Licensing and Reputation Sharing
● Many cloud computing providers originally relied on open source software because the
licensing model for commercial software is not ideal for utility computing.
● The primary opportunity is either for open source to remain popular or simply for commercial
software companies to change their licensing structure to better fit cloud computing.
● One can consider using both pay for use and bulk use licensing schemes to widen the business
coverage.
By using virtualization, you can interact with any hardware resource with greater flexibility. Physical
servers consume electricity, take up storage space, and need maintenance. You are often limited by
physical proximity and network design if you want to access them. Virtualization removes all these
limitations by abstracting physical hardware functionality into software. You can manage, maintain, and
use your hardware infrastructure like an application on the web.
Virtualization technologies have gained renewed interest recently due to the confluence of several
phenomena:
Increased performance and computing capacity.
Underutilized hardware and software resources.
Lack of space.
Greening initiatives.
Rise of administrative costs
Benefits of Virtualization:
More flexible and efficient allocation of resources.
Enhance development productivity.
It lowers the cost of IT infrastructure.
Remote access and rapid scalability.
High availability and disaster recovery.
Pay-per-use of the IT infrastructure on demand.
Enables running multiple operating systems.
Drawback of Virtualization:
High Initial Investment: Clouds have a very high initial investment, but they also help in reducing
companies' costs over time.
Learning New Infrastructure: As companies shift from physical servers to the cloud, they require highly
skilled staff who can work with the cloud, so they must hire new staff or
provide training to current staff.
Risk of Data: Hosting data on third-party resources can put the data at risk, since it can more
easily be attacked by hackers or crackers.
Characteristics of Virtualization:
Increased Security: The ability to control the execution of a guest program in a completely
transparent manner opens new possibilities for delivering a secure, controlled execution
environment. All the operations of the guest programs are generally performed against the virtual
machine, which then translates and applies them to the host programs.
Managed Execution: In particular, sharing, aggregation, emulation, and isolation are the most
relevant features.
Sharing: Virtualization allows the creation of a separate computing environment within the
same host.
Aggregation: It is possible to share physical resources among several guests, but virtualization
also allows aggregation, which is the opposite process.
Types of Virtualization:
1. Application Virtualization
2. Network Virtualization
3. Desktop Virtualization
4. Storage Virtualization
5. Server Virtualization
6. Data virtualization
3. Desktop Virtualization: Desktop virtualization allows the users’ OS to be remotely stored on a server in
the data center. It allows the user to access their desktop virtually, from any location by a different machine.
Users who want specific operating systems other than Windows Server will need to have a virtual desktop.
The main benefits of desktop virtualization are user mobility, portability, and easy management of software
installation, updates, and patches.
4. Storage Virtualization: Storage virtualization is an array of servers that are managed by a virtual storage
system. The servers aren't aware of exactly where their data is stored and instead function more like worker
bees in a hive. It allows storage from multiple sources to be managed and utilized as a single
repository. Storage virtualization software maintains smooth operations, consistent performance, and a
continuous suite of advanced functions despite changes, breakdowns, and differences in the underlying
equipment.
5. Server Virtualization: This is a kind of virtualization in which the masking of server resources takes
place. Here, the central server (physical server) is divided into multiple virtual servers by changing
the identity number and processors, so each virtual server can run its own operating system in isolation,
while each sub-server knows the identity of the central server. This increases performance and
reduces operating cost by dividing the main server's resources into sub-server resources. It is
beneficial in virtual migration, reducing energy consumption, reducing infrastructural costs, etc.
6. Data Virtualization: This is the kind of virtualization in which data is collected from various sources
and managed in a single place, without needing to know the technical details of how the data is
collected, stored, and formatted. The data is then arranged logically so that its virtual view can be
accessed remotely by interested stakeholders and users through various cloud services. Many big
companies such as Oracle, IBM, AtScale, and CData provide such services.
Uses of Virtualization
Data integration
Business integration
Service-oriented architecture data services
Searching organizational data
Virtualization example
Consider a company that needs servers for three functions:
Store business email securely
Run a customer-facing application
Run internal business applications
Each of these functions has different configuration requirements:
The email application requires more storage capacity and a Windows operating system.
The customer-facing application requires a Linux operating system and high processing power to
handle large volumes of website traffic.
The internal business application requires iOS and more internal memory (RAM).
To meet these requirements, the company sets up three different dedicated physical servers, one for each
application. The company must make a high initial investment and perform ongoing maintenance and
upgrades for one machine at a time. The company also cannot optimize its computing capacity. It pays 100%
of the servers’ maintenance costs but uses only a fraction of their storage and processing capacities.
Efficient hardware use
With virtualization, the company creates three digital servers, or virtual machines, on a single physical
server. It specifies the operating system requirements for the virtual machines and can use them like the
physical servers. However, the company now has less hardware and fewer related expenses.
Infrastructure as a service
The company can go one step further and use a cloud instance or virtual machine from a cloud computing
provider such as AWS. AWS manages all the underlying hardware, and the company can request server
resources with varying configurations. All the applications run on these virtual servers without the users
noticing any difference. Server management also becomes easier for the company’s IT team.
● To support virtualization, processors such as the x86 employ a special running mode and
instructions known as hardware assisted virtualization.
● For the x86 architecture, Intel and AMD have proprietary technologies for hardware assisted
virtualization.
The figure provides an overview of Intel's full virtualization techniques. For processor virtualization,
Intel offers the VT-x or VT-i technique. VT-x adds a privileged mode (VMX Root Mode) and some
instructions to processors. This enhancement traps all sensitive instructions in the VMM
automatically. For memory virtualization, Intel offers EPT (Extended Page Tables), which translates
guest physical addresses to the machine's physical addresses to improve performance. For I/O virtualization,
Intel implements VT-d and VT-c to support this.
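On Linux, the presence of hardware-assisted virtualization can be checked from the CPU flags; "vmx" advertises Intel VT-x and "svm" advertises AMD-V. A minimal sketch:

def hw_virt_support(cpuinfo_path: str = "/proc/cpuinfo") -> str:
    # Scan the CPU flags exposed by the kernel for the virtualization extensions.
    with open(cpuinfo_path) as f:
        flags = f.read()
    if " vmx" in flags:
        return "Intel VT-x available"
    if " svm" in flags:
        return "AMD-V available"
    return "no hardware-assisted virtualization flag found"

print(hw_virt_support())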
● The VMware Workstation is a VM software suite for x86 and x86-64 computers.
● This software suite allows users to set up multiple x86 and x86-64 virtual computers and
to use one or more of these VMs simultaneously with the host operating system.
● The VMware Workstation assumes the host-based virtualization.
● Xen is a hypervisor for use in IA-32, x86-64, Itanium and PowerPC 970 hosts.
● One or more guest OS can run on top of the hypervisor.
● KVM is a Linux kernel virtualization infrastructure.
● KVM can support hardware assisted virtualization and paravirtualization by using the Intel
VT-x or AMD-v and VirtIO framework, respectively.
● The VirtIO framework includes a paravirtual Ethernet card, a disk I/O controller and a
balloon device for adjusting guest memory usage and a VGA graphics interface using
VMware drivers.
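KVM guests are commonly managed through libvirt; the following hedged sketch (it assumes the libvirt-python bindings and a running libvirtd reachable at qemu:///system) lists the domains on a KVM host:

import libvirt

conn = libvirt.open("qemu:///system")
for dom in conn.listAllDomains():
    state, _ = dom.state()   # state() returns (state, reason)
    running = state == libvirt.VIR_DOMAIN_RUNNING
    print(dom.name(), "running" if running else "not running")
conn.close()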
CPU virtualization
● Therefore, operating systems can still run at Ring 0 and the hypervisor can run at Ring -1.
● All the privileged and sensitive instructions are trapped in the hypervisor automatically.
Memory virtualization
● Virtual memory virtualization is similar to the virtual memory support provided by modern
operating systems.
● The guest OS continues to control the mapping of virtual addresses to the physical
memory addresses of VMs.
● But the guest OS cannot directly access the actual machine memory.
● The VMM is responsible for mapping the guest physical memory to the actual machine
memory.
● Since each page table of the guest OSes has a separate page table in the VMM
corresponding to it, the VMM page table is called the shadow page table.
● Nested page tables add another layer of indirection to virtual memory.
● The MMU already handles virtual-to-physical translations as defined by the OS. Then the
physical memory addresses are translated to machine addresses using another set of
page tables defined by the hypervisor.
● VMware uses shadow page tables to perform virtual-memory-to-machine-memory address
translation.
● Processors use TLB hardware to map the virtual memory directly to the machine memory
to avoid the two levels of translation on every access.
● When the guest OS changes the virtual memory to a physical memory mapping, the VMM
updates the shadow page tables to enable a direct lookup.
● The AMD Barcelona processor has featured hardware assisted memory virtualization
since 2007.
● It provides hardware assistance to the two stage address translation in a virtual execution
environment by using a technology called nested paging.
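The two-stage translation can be pictured with a toy model: the guest OS maps guest-virtual pages to guest-physical pages, and the VMM (via shadow or nested page tables) maps guest-physical pages to machine pages. The page numbers below are invented for illustration:

PAGE_SIZE = 4096

guest_page_table = {0: 7, 1: 3}    # guest virtual page  -> guest physical page
vmm_page_table   = {7: 42, 3: 15}  # guest physical page -> machine page

def translate(guest_virtual_addr: int) -> int:
    vpn, offset = divmod(guest_virtual_addr, PAGE_SIZE)
    gpn = guest_page_table[vpn]       # first stage: guest OS page table
    mpn = vmm_page_table[gpn]         # second stage: VMM (shadow/nested) table
    return mpn * PAGE_SIZE + offset   # resulting machine address

print(hex(translate(0x0123)))         # falls in machine page 42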
I/O virtualization
● I/O virtualization involves managing the routing of I/O requests between virtual devices
and the shared physical hardware.
● There are three ways to implement I/O virtualization: full device emulation,
paravirtualization, and direct I/O.
● Full device emulation is the first approach for I/O virtualization. Generally, this approach
emulates well known and real world devices.
● All the functions of a device or bus infrastructure, such as device enumeration,
identification, interrupts, and DMA are replicated in software.
● This software is located in the VMM and acts as a virtual device.
● The I/O access requests of the guest OS are trapped in the VMM which interacts with
the I/O devices.
● A single hardware device can be shared by multiple VMs that run concurrently.
However, software emulation runs much slower than the hardware it emulates.
[Figure: full device emulation — the guest OS and its guest device driver run on virtual hardware; device emulation, the I/O stack, and the real device driver sit in the virtualization layer above the physical hardware.]
● The para-virtualization method of I/O virtualization is typically used in Xen. It is also known
as the split driver model consisting of a frontend driver and a backend driver.
● The frontend driver is running in Domain U and the backend driver is running in Domain 0.
They interact with each other via a block of shared memory.
● The frontend driver manages the I/O requests of the guest OSes and the backend driver
is responsible for managing the real I/O devices and multiplexing the I/O data of different
VMs.
● Para I/O-virtualization achieves better device performance than full device emulation, but it
comes with a higher CPU overhead.
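The split-driver idea can be sketched with a toy model in which a queue stands in for the shared-memory ring and a Python list stands in for the real disk; the function names are illustrative only:

from collections import deque

shared_ring = deque()                   # stands in for the shared-memory ring
physical_disk = ["" for _ in range(8)]  # stands in for the real device

def frontend_write(block: int, data: str) -> None:
    # Guest-side (Domain U) driver: post the request instead of touching hardware.
    shared_ring.append(("write", block, data))

def backend_service() -> None:
    # Domain 0-side driver: drain the ring and perform the real I/O.
    while shared_ring:
        op, block, data = shared_ring.popleft()
        if op == "write":
            physical_disk[block] = data

frontend_write(2, "guest data")
backend_service()
print(physical_disk[2])   # "guest data" written by the backend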
● Direct I/O virtualization lets the VM access devices directly. It can achieve close-to-native
performance without high CPU costs.
● However, current direct I/O virtualization implementations focus on networking for
mainframes. There are a lot of challenges for commodity hardware devices.
● For example, when a physical device is reclaimed (required by workload migration) for
later reassignment, it may have been set to an arbitrary state (e.g., DMA to some arbitrary
memory locations) that can function incorrectly or even crash the whole system.
● Since software based I/O virtualization requires a very high overhead of device emulation,
hardware-assisted I/O virtualization is critical.
● Intel VT-d supports the remapping of I/O DMA transfers and device generated interrupts.
The architecture of VT-d provides the flexibility to support multiple usage models that may
run unmodified, special-purpose, or “virtualization-aware” guest OSes.
● Another way to help I/O virtualization is via self virtualized I/O (SV-IO).
● The key idea of SV-IO is to harness the rich resources of a multicore processor. All tasks
associated with virtualizing an I/O device are encapsulated in SV-IO.
● It provides virtual devices and an associated access API to VMs and a management API
to the VMM.
● SV-IO defines one virtual interface (VIF) for every kind of virtualized I/O device, such as
virtual network interfaces, virtual block devices (disk), virtual camera devices.
● One very distinguishing feature of cloud computing infrastructure is the use of system
virtualization and the modification to provisioning tools.
● Virtualization of servers on a shared cluster can consolidate web services.
● In cloud computing, virtualization also means the resources and fundamental infrastructure
are virtualized.
● The user will not care about the computing resources that are used for providing the
services.
● Cloud users do not need to know, and have no way to discover, the physical resources that are
involved while processing a service request.
● In addition, application developers do not care about some infrastructure issues such as
scalability and fault tolerance. Application developers focus on service logic.
Hardware Virtualization:
● Virtualization software is also used as the platform for developing new cloud applications
that enable developers to use any operating systems and programming environments they
like.
● The development environment and deployment environment can now be the same, which
eliminates some runtime problems.
● VMs provide flexible runtime services to free users from worrying about the system
environment.
● Using VMs in a cloud computing platform ensures extreme flexibility for users. As the
computing resources are shared by many users, a method is required to maximize the user’s
privileges and still keep them separated safely.
● Traditional sharing of cluster resources depends on the user and group mechanism on a
system.
○ Such sharing is not flexible.
○ Users cannot customize the system for their special purposes.
○ Operating systems cannot be changed.
○ The separation is not complete.
● An environment that meets one user’s requirements often cannot satisfy another user.
Virtualization allows us to have full privileges while keeping them separate.
● Users have full access to their own VMs, which are completely separate from other
users' VMs.
● Multiple VMs can be mounted on the same physical server. Different VMs may run
with different OSes.
● The virtualized resources form a resource pool.
● The virtualization is carried out by special servers dedicated to generating the virtualized
resource pool.
● The virtualized infrastructure (black box in the middle) is built with many
virtualizing integration managers.
● These managers handle loads, resources, security, data, and provisioning functions.
● Each platform carries out a virtual solution to a user job. All cloud services are managed
in the boxes at the top.
[Figure: conventional recovery workflow — install hardware, install OS, configure the OS and backup agent, then automatic recovery.]
● AWS provides extreme flexibility (VMs) for users to execute their own applications.
● GAE provides limited application-level virtualization for users to build applications
only based on the services that are created by Google.
● Microsoft provides programming-level virtualization (.NET virtualization) for users to
build their applications.
● The VMware tools apply to workstations, servers, and virtual infrastructure.
● The Microsoft tools are used on PCs and some special servers.
● As shown in the top timeline of Figure 2.13, traditional disaster recovery from one
physical machine to another is rather slow, complex, and expensive.
● Total recovery time is attributed to the hardware configuration, installing and configuring
the OS, installing the backup agents and the long time to restart the physical machine.
● To recover a VM platform, the installation and configuration times for the OS and backup
agents are eliminated.
● Virtualization aids in fast disaster recovery by VM encapsulation.
● The cloning of VMs offers an effective solution.
● The idea is to make a clone VM on a remote server for every running VM on a local
server.
● Among all the clone VMs, only one needs to be active.
● The remote VM should be in a suspended mode.
● A cloud control center should be able to activate this clone VM in case of failure of the
original VM, taking a snapshot of the VM to enable live migration in a minimal amount of
time.
● The migrated VM can run on a shared Internet connection. Only updated data and modified
states are sent to the suspended VM to update its state.
● The Recovery Point Objective (RPO) and Recovery Time Objective (RTO) are
affected by the number of snapshots taken.
● Security of the VMs should be enforced during live migration of VMs.
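The clone-and-suspend recovery scheme above can be outlined as follows; every helper here is a hypothetical stub standing in for a real virtualization management API, not an actual library call:

def clone_vm(vm: str, remote_host: str) -> str:
    # Stub: create a suspended clone of the VM on a remote server.
    return f"{vm}-clone@{remote_host}"

def send_delta(clone: str, delta: dict) -> None:
    # Stub: ship only updated data and modified state to the suspended clone.
    print(f"sync {delta} -> {clone}")

def activate(clone: str) -> None:
    # Stub: resume the suspended clone when the original VM fails.
    print(f"resuming {clone}")

clone = clone_vm("web-vm", "remote-site")  # one suspended clone per local VM
send_delta(clone, {"dirty_pages": 128})    # periodic deltas bound the RPO
activate(clone)                            # failover bounds the RTO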
● The virtualization layer converts portions of the real hardware into virtual hardware.
● Therefore, different operating systems such as Linux and Windows can run on the same
physical machine, simultaneously.
● Depending on the position of the virtualization layer, there are several classes of VM
architectures, namely the hypervisor architecture, paravirtualization and host based
virtualization.
● The hypervisor is also known as the VMM (Virtual Machine Monitor). They both perform
the same virtualization operations.
● The hypervisor supports hardware level virtualization on bare metal devices like CPU,
memory, disk and network interfaces.
● The hypervisor software sits directly between the physical hardware and its OS. This
virtualization layer is referred to as either the VMM or the hypervisor.
● The hypervisor provides hypercalls for the guest OSes and applications.
● Depending on the functionality, a hypervisor can assume a micro-kernel architecture, like
Microsoft Hyper-V.
● Or it can assume a monolithic hypervisor architecture, like VMware ESX for server
virtualization.
● A micro kernel hypervisor includes only the basic and unchanging functions (such as
physical memory management and processor scheduling).
● The device drivers and other changeable components are outside the hypervisor.
Xen architecture
● Xen is a microkernel hypervisor, which separates the policy from the mechanism.
● The Xen hypervisor implements all the mechanisms, leaving the policy to be handled by
Domain 0. Figure 2.9 shows the architecture of the Xen hypervisor.
● Xen does not include any device drivers natively. It just provides a mechanism by which a
guest OS can have direct access to the physical devices.
● Xen provides a virtual environment located between the hardware and the OS.
[Figure 2.9: Xen architecture — applications run in guest domains on top of the Xen hypervisor and the hardware devices. Xen Domain 0 is used for control and I/O; guest domains host user applications.]
● The core components of a Xen system are the hypervisor, kernel, and applications.
● However, not all guest OSes are created equal, and one in particular controls the others.
● The guest OS, which has control ability, is called Domain 0, and the others are called
Domain U.
● Domain 0 is a privileged guest OS of Xen. It is first loaded when Xen boots without any
file system drivers being available.
● Domain 0 is designed to access hardware directly and manage devices. Therefore, one of
the responsibilities of Domain 0 is to allocate and map hardware resources for the guest
domains (the Domain U domains).
● For example, Xen is based on Linux and its security level is C2. Its
management VM is named Domain 0, which has the privilege to manage
other VMs implemented on the same host.
● If Domain 0 is compromised, the hacker can control the entire system. So, in
the VM system, security policies are needed to improve the security of Domain
0.
Google AppEngine
● Google AppEngine is a scalable runtime environment mostly dedicated to executing web applications.
● These utilize the benefits of the large computing infrastructure of
Google to dynamically scale as per the demand.
● AppEngine offers both a secure execution environment and a collection
of services that simplify the development of scalable and high-performance
Web applications.
● These services include: in-memory caching, scalable data store, job
queues, messaging, and cron tasks.
● Currently, the supported programming languages are Python, Java, and Go.
● Microsoft Azure is a Cloud operating system and a platform in which
users can develop their applications in the cloud.
● Azure provides a set of services that support storage, networking, caching,
content delivery, and others.
Hadoop
● Hadoop is an implementation of MapReduce, an application programming
model developed by Google.
● This model provides two fundamental operations for data processing: map and
reduce.
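The two operations can be illustrated with a tiny word count written in plain Python (not the Hadoop API): map emits (key, value) pairs and reduce aggregates the values for each key:

from itertools import groupby
from operator import itemgetter

def map_phase(line: str):
    for word in line.split():
        yield (word, 1)                        # emit a (key, value) pair per word

def reduce_phase(pairs):
    pairs = sorted(pairs, key=itemgetter(0))   # shuffle/sort by key
    for word, group in groupby(pairs, key=itemgetter(0)):
        yield (word, sum(count for _, count in group))

lines = ["the cloud", "the grid and the cloud"]
pairs = [kv for line in lines for kv in map_phase(line)]
print(dict(reduce_phase(pairs)))   # {'and': 1, 'cloud': 2, 'grid': 1, 'the': 3}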
Force.com and Salesforce.com –
● Force.com is a Cloud computing platform at which users can develop social
enterprise applications.
● The platform is the basis of SalesForce.com – a Software-as-a-Service
solution for customer relationship management.
● Force.com allows creating applications by composing ready-to-use blocks: a
complete set of components supporting all the activities of an enterprise is
available.
● Everything from the design of the data layout to the definition of business rules and the user
interface is provided by Force.com as support.
● This platform is completely hosted in the Cloud, and provides complete
access to its functionalities, and to those implemented in the hosted
applications, through Web services technologies.