
VIRTUALIZATION

Seminar report
MASTER OF TECHNOLOGY
IN
COMPUTER SCIENCE AND TECHNOLOGY
BY

Mekala Hari Ranjitha Nalini


Regd. No.: 23B81D5906

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING


SIR C. R. REDDY COLLEGE OF ENGINEERING
(PERMANENTLY AFFILIATED TO JNTUK)
ELURU, A.P, INDIA
A.Y 2023-2024
DECLARATION

I, Mekala Hari Ranjitha Nalini, a student of I M.Tech II Semester, Computer Science & Engineering Department, SIR CRR COLLEGE OF ENGINEERING, have successfully completed the seminar work entitled "Virtualization".

Mekala Hari Ranjitha Nalini


(23B81D5906)
TABLE OF CONTENTS
Introduction
Virtualization in a Nutshell
Virtualization Approaches
Virtualization for Server Consolidation and Containment
How Virtualization Complements New Generation Hardware
Para-virtualization
VMware's Virtualization Portfolio
Glossary
Implementation
Evaluation
Conclusion
References
INTRODUCTION

Virtualization provides a set of tools for increasing flexibility and lowering costs, things that are important in every enterprise and Information Technology organization. Virtualization solutions are becoming increasingly available and rich in features. Since virtualization can provide significant benefits to your organization in multiple areas, you should be establishing pilots, developing expertise and putting virtualization technology to work now. In essence, virtualization increases flexibility by decoupling an operating system, and the services and applications supported by that system, from a specific physical hardware platform. It allows the establishment of multiple virtual environments on a shared hardware platform.

Organizations looking to innovate find that the ability to create new systems and services without installing additional hardware (and to quickly tear down those systems and services when they are no longer needed) can be a significant boost to innovation. Virtualization also excels at supporting innovation through the use of virtual environments for training and learning; these services are ideal applications for virtualization technology. A student can start course work with a known, standard system environment. Class work can be isolated from the production network. Learners can establish unique software environments without demanding exclusive use of hardware resources.

Virtualization can also be used to lower costs. One obvious benefit comes from the consolidation of servers onto a smaller set of more powerful hardware platforms running a collection of virtual environments. Not only can costs be reduced by reducing the amount of hardware and the amount of unused capacity, but application performance can actually improve, since the virtual guests execute on more powerful hardware. Further benefits include the ability to add hardware capacity in a non-disruptive manner and to dynamically migrate workloads to available resources.

VIRTUALIZATION IN A NUTSHELL

Simply put, virtualization is an idea whose time has come. The term virtualization broadly
describes the separation of a resource or request for a service from the underlying physical
delivery of that service. With virtual memory, for example, computer software gains access to
more memory than is physically installed, via the background swapping of data to disk storage.
Similarly, virtualization techniques can be applied to other IT infrastructure layers - including
networks, storage, laptop or server hardware, operating systems and applications. This blend
of virtualization technologies - or virtual infrastructure - provides a layer of abstraction between
computing, storage and networking hardware, and the applications running on it (see Figure
1). The deployment of virtual infrastructure is non-disruptive, since the user experiences are
largely unchanged. However, virtual infrastructure gives administrators the advantage of
managing pooled resources across the enterprise, allowing IT managers to be more responsive
to dynamic organizational needs and to better leverage infrastructure investments.

Before Virtualization:

➢ Single OS image per machine
➢ Software and hardware tightly coupled
➢ Running multiple applications on the same machine often creates conflicts
➢ Underutilized resources
➢ Inflexible and costly infrastructure

After Virtualization:
➢ Hardware-independence of operating system and applications
➢ Virtual machines can be provisioned to any system
➢ Can manage OS and application as a single unit by encapsulating them into a virtual machine

Using virtual infrastructure solutions such as those from VMware, enterprise IT managers can
address challenges that include:

Server Consolidation and Containment


➢ Eliminating ‘server sprawl’ via deployment of systems as virtual machines (VMs) that can run safely and move transparently across shared hardware, increasing server utilization rates from 5-15% to 60-80% (a rough sizing sketch follows).
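As a back-of-the-envelope illustration of those utilization figures, the sketch below works out a rough consolidation ratio. The numbers are the ones quoted above; the sizing logic itself is only a sketch that ignores memory, I/O and peak-load headroom.

```python
# Rough consolidation sizing sketch: how many workloads averaging ~10% CPU
# utilization can share one host kept at a ~70% target? Integer percentages
# are used to avoid floating-point surprises. Illustrative only.

def consolidation_ratio(avg_utilization_pct=10, target_utilization_pct=70):
    return target_utilization_pct // avg_utilization_pct

if __name__ == "__main__":
    print("Approximate workloads per physical host:", consolidation_ratio())
```

In practice the ratio is bounded by memory and I/O as much as by CPU, which is why the monitoring and migration capabilities discussed later matter.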

Test and Development Optimization


➢ Rapidly provisioning test and development servers by reusing pre-configured systems,
enhancing developer collaboration and standardizing development environments.

Business Continuity
➢ Reducing the cost and complexity of – Reducing the cost and complexity of business
continuity (high availability and disaster recovery solutions) by encapsulating entire
systems into single files that can be replicated and restored on any target server, thus
minimizing downtime.

Enterprise Desktop
➢ Securing unmanaged PCs, workstations and laptops without compromising end user
autonomy by layering a security policy in software around desktop virtual machines.

VIRTUALIZATION APPROACHES

While virtualization has been a part of the IT landscape for decades, it is only recently (in 1998)
that VMware delivered the benefits of virtualization to industry-standard x86-based platforms,
which now form the majority of desktop, laptop and server shipments. A key benefit of
virtualization is the ability to run multiple operating systems on a single physical system and
share the underlying hardware resources – known as partitioning.
Today, virtualization can apply to a range of system layers, including hardware-level
virtualization, operating system level virtualization, and high-level language virtual machines.
Hardware-level virtualization was pioneered on IBM mainframes in the 1970s, and then more
recently Unix/RISC system vendors began with hardware-based partitioning capabilities
before moving on to software-based partitioning.
For Unix/RISC and industry-standard x86 systems, the two approaches typically used with
software-based partitioning are hosted and hypervisor architectures (See Figure 2). A hosted
approach provides partitioning services on top of a standard operating system and supports the
broadest range of hardware configurations. In contrast, a hypervisor architecture is the first layer of software installed on a clean x86-based system (hence it is often referred to as a "bare metal" approach). Since it has direct access to the hardware
resources, a hypervisor is more efficient than hosted architectures, enabling greater scalability,
robustness and performance.

Hosted Architecture
➢ Installs and runs as an application
➢ Relies on host OS for device support and physical resource management

Bare-Metal (Hypervisor) Architecture


➢ Lean virtualization-centric kernel
➢ Service Console for agents and helper applications.
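To make the distinction concrete from the guest's point of view, here is a minimal sketch, assuming a Linux guest, of how software can detect that it is running on a hypervisor at all. The "hypervisor" CPU flag and the DMI system-vendor file are standard Linux interfaces; the vendor strings listed are common examples only, and robust detection would need further checks.

```python
# Minimal sketch: guess from inside a Linux guest whether it is running on a
# hypervisor, using the "hypervisor" CPU flag and the DMI system vendor string.
# Illustrative only; robust detection requires additional checks.
import os

COMMON_VM_VENDORS = ("VMware", "QEMU", "Xen", "Microsoft Corporation", "innotek")

def looks_virtualized():
    # Most hypervisors set the "hypervisor" CPUID flag for their guests.
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags") and "hypervisor" in line.split():
                    return True
    except OSError:
        pass
    # Fall back to the platform vendor string exposed through DMI.
    vendor_file = "/sys/class/dmi/id/sys_vendor"
    if os.path.exists(vendor_file):
        with open(vendor_file) as f:
            vendor = f.read().strip()
        return any(v in vendor for v in COMMON_VM_VENDORS)
    return False

if __name__ == "__main__":
    print("Running inside a virtual machine:", looks_virtualized())
```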

Hypervisors can be designed to be tightly coupled with operating systems or can be agnostic
to operating systems. The latter approach provides customers with the capability to implement
an OS-neutral management paradigm, thereby providing further rationalization of the data
center.
Application-level partitioning is another approach, whereby many applications share a single
operating system, but this offers less isolation (and higher risk) than hardware or software
partitioning, and limited support for legacy applications or heterogeneous environments.
However, various partitioning techniques can be combined, albeit with increased complexity.
Hence, virtualization is a broad IT initiative, of which partitioning is just one facet. Other
benefits include the isolation of virtual machines and the hardware-independence that results
from the virtualization process. Virtual machines are highly portable, and can be moved or
copied to any industry-standard (x86-based) hardware platform, regardless of the make or
model. Thus, virtualization facilitates adaptive IT resource management, and greater
responsiveness to changing business conditions (see Figures 3-5).
To provide advantages beyond partitioning, several system resources must be virtualized and
managed, including CPUs, main memory, and I/O, in addition to having an inter-partition
resource management capability. While partitioning is a useful capability for IT organizations,
true virtual infrastructure delivers business value well beyond that.

Virtual infrastructure transforms farms of individual x86 servers, storage, and networking into a pool of computing resources:

➢ Infrastructure is what connects resources to your business.
➢ Virtual infrastructure is a dynamic mapping of your resources to your business.
➢ Result: decreased costs and increased efficiencies and responsiveness.

VIRTUALIZATION FOR SERVER CONSOLIDATION AND
CONTAINMENT
Virtual infrastructure initiatives often spring from data center server consolidation projects,
which focus on reducing existing infrastructure “box count”, retiring older hardware or life-
extending legacy applications. Server consolidation benefits result from a reduction in the
overall number of systems and related recurring costs (power, cooling, rack space, etc.)
While server consolidation addresses the reduction of existing infrastructure, server containment takes a more strategic view, leading to a goal of infrastructure unification. Server containment uses an incremental approach
to workload virtualization, whereby new projects are provisioned with virtual machines rather
than physical servers, thus deferring hardware purchases.
It is important to note that neither consolidation nor containment should be viewed as a standalone exercise. In either case, the most significant benefits result from adopting a total cost-of-ownership (TCO) perspective, with a focus on the ongoing, recurring support and management costs, in addition to one-time, up-front costs.
Data center environments are becoming more complex and heterogeneous, with correspondingly higher management costs. Virtual infrastructure enables more effective optimization of IT resources, through the standardization of data center elements that need to be managed.

[Figure: the ESX Server virtual infrastructure stack, showing the hypervisor layer (CPU, MMU and I/O virtualization, virtual networking, VMFS, MPIO), the virtual machine monitors (VMMs) hosting the VMs, and VirtualCenter distributed services such as VMotion, provisioning, consolidated backup, DRS and DAS.]
Partitioning alone does not deliver server consolidation or containment, and in turn
consolidation does not equate to full virtual infrastructure management. Beyond partitioning
and basic component-level resource management, a core set of systems management
capabilities is required to effectively implement realistic data center solutions (see Figure 6).
These management capabilities should include comprehensive system resource monitoring (of
metrics such as CPU activity, disk access, memory utilization and network bandwidth),
automated provisioning, high availability and workload migration support.
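To illustrate the kind of per-host metrics such monitoring gathers, the sketch below samples CPU, memory, disk and network counters. It assumes the third-party psutil package is installed and is only an illustrative stand-in for the management tooling discussed here, not part of it.

```python
# Minimal monitoring sketch: sample the resource metrics named above (CPU
# activity, memory utilization, disk access and network bandwidth).
# Assumes the third-party "psutil" package is installed (pip install psutil).
import psutil

def sample_host_metrics():
    disk = psutil.disk_io_counters()
    net = psutil.net_io_counters()
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),    # averaged over 1 s
        "memory_percent": psutil.virtual_memory().percent,
        "disk_read_mb": disk.read_bytes / 2**20,          # cumulative since boot
        "disk_write_mb": disk.write_bytes / 2**20,
        "net_sent_mb": net.bytes_sent / 2**20,
        "net_recv_mb": net.bytes_recv / 2**20,
    }

if __name__ == "__main__":
    for name, value in sample_host_metrics().items():
        print(f"{name}: {value:.1f}")
```

A real virtual infrastructure manager collects the same classes of metrics per virtual machine as well as per host, and feeds them into provisioning and migration decisions.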

HOW VIRTUALIZATION COMPLEMENTS NEW
GENERATION HARDWARE
Extensive ‘scale-out’ and multi-tier application architectures are becoming increasingly
common, and the adoption of smaller form-factor blade servers is growing dramatically. Since
the transition to blade architectures is generally driven by a desire for physical consolidation
of IT resources, virtualization is an ideal complement for blade servers, delivering benefits
such as resource optimization, operational efficiency and rapid provisioning.
The latest generation of x86-based systems features processors with 64-bit extensions supporting very large memory capacities. This enhances their ability to host large, memory-
intensive applications, as well as allowing many more virtual machines to be hosted by a
physical server deployed within a virtual infrastructure. The continual decrease in memory
costs will further accelerate this trend.
Likewise, the forthcoming dual-core processor technology significantly benefits IT
organizations by dramatically lowering the costs of increased performance. Compared to
traditional single-core systems, systems utilizing dual-core processors will be less expensive,
since only half the number of sockets will be required for the same number of CPUs. By
significantly lowering the cost of multi-processor systems, dual-core technology will accelerate
data center consolidation and virtual infrastructure projects.
Beyond these enhancements, VMware is also working closely with both Intel and AMD to
ensure that new processor technology features are exploited by virtual infrastructure to the
fullest extent. In particular, the new virtualization hardware assist enhancements (Intel’s “VT”
and AMD’s “Pacifica”) will enable robust virtualization of the CPU functionality. Such
hardware virtualization support does not replace virtual infrastructure, but allows it to run more
efficiently.

PARA VIRTUALIZATION
Although virtualization is rapidly becoming mainstream technology, the concept has attracted
a huge amount of interest, and enhancements continue to be investigated. One of these is
para-virtualization, whereby operating system compatibility is traded off against performance
for certain CPU-bound applications running on systems without virtualization hardware assist
(see Figure 7). The para-virtualized model offers potential performance benefits when a guest
operating system or application is ‘aware’ that it is running within a virtualized environment,
and has been modified to exploit this. One potential downside of this approach is that such
modified guests cannot ever be migrated back to run on physical hardware.
In addition to requiring modified guest operating systems, paravirtualization leverages a
hypervisor for the underlying technology. In the case of Linux distributions, this approach
requires extensive changes to an operating system kernel so that it can coexist with the
hypervisor. Accordingly, mainstream Linux distributions (such as Red Hat or SUSE) cannot
be run in a paravirtualized mode without some level of modification. Likewise, Microsoft has
suggested that a future version of the Windows operating system will be developed that can
coexist with a new hypervisor offering from Microsoft.
Yet para-virtualization is not an entirely new concept. For example, VMware has employed it
by making available as an option enhanced device drivers (packaged as VMware Tools) that
increase the efficiency of guest operating systems. Furthermore, if and when para-
virtualization optimizations are eventually built into commercial enterprise Linux
distributions, VMware’s hypervisor will support those, as it does all mainstream operating
systems.

VMWARE’S VIRTUALIZATION PORTFOLIO

VMware pioneered x86-based virtualization in 1998 and continues to be the innovator in that
market, providing the fundamental virtualization technology for all leading x86-based
hardware suppliers. The company offers a variety of software-based partitioning approaches,
utilizing both hosted (Workstation and VMware Server) and hypervisor (ESX Server)
architectures. (see Figure 8)
VMware’s virtual machine (VM) approach creates a uniform hardware image – implemented
in software – on which operating systems and applications run. On top of this platform,
VMware’s VirtualCenter provides management and provisioning of virtual machines,
continuous workload consolidation across physical servers and VMotion™ technology for
virtual machine mobility.
VirtualCenter is virtual infrastructure management software that centrally manages an
enterprise’s virtual machines as a single, logical pool of resources. With VirtualCenter, an
administrator can manage thousands of Windows NT, Windows 2000, Windows 2003, Linux
and NetWare servers from a single point of control.
Unique to VMware is the VMotion technology, whereby live, running virtual machines can be
moved from one physical system to another while maintaining continuous service availability.
VMotion thus allows fast reconfiguration and optimization of resources across the virtual
infrastructure.
VMware is the only provider of high-performance virtualization products that give customers a real choice in operating systems. VMware supports: Windows 95/98/NT/2000/2003/XP/3.1 and MS-DOS 6; Linux (Red Hat, SUSE, Mandrake, Caldera); FreeBSD (3.x, 4.0-4.9); Novell NetWare (4, 5, 6); and Sun Solaris 9 and 10 (experimental).
VMware is designed from the ground up to ensure compatibility with customers’ existing
software infrastructure investments. This includes not just operating systems, but also software
for management, high availability, clustering, replication, multipathing, and so on.
VMware’s hypervisor-based products and solutions have been running at customer sites since
2001, with more than 75% of customers running ESX Server in production deployments. As
the clear x86 virtualization market leader, VMware is uniquely positioned to continue
providing robust, supportable, high-performance virtual infrastructure for real-world, enterprise
data center applications.

GLOSSARY
Virtual Machine
• A representation of a real machine using software that provides an operating
environment which can run or host a guest operating system.

Guest Operating System


• An operating system running in a virtual machine environment that would otherwise
run directly on a separate physical system.

Virtual Machine Monitor


• Software that runs in a layer between a hypervisor or host operating system and one or
more virtual machines that provides the virtual machine abstraction to the guest
operating systems. With full virtualization, the virtual machine monitor exports a
virtual machine abstraction identical to a physical machine, so that standard operating
systems (e.g., Windows 2000, Windows Server 2003, Linux, etc.) can run just as they
would on physical hardware.

Hypervisor
• A thin layer of software that provides virtual partitioning capabilities and runs directly on the hardware, underneath higher-level virtualization services. Sometimes referred to as a "bare metal" approach.

Hosted Virtualization
• A virtualization approach where partitioning and virtualization services run on top of a
standard operating system (the host). In this approach, the virtualization software relies
on the host operating system to provide the services to talk directly to the underlying
hardware.

Para-virtualization
• A virtualization approach that exports a modified hardware abstraction which requires
operating systems to be explicitly modified and ported to run.

Virtualization Hardware Support


• Industry standard servers will provide improved hardware support for virtualization.
Initial hardware support includes processor extensions to address CPU and some
memory virtualization. Future support will include I/O virtualization, and eventually
more complex memory virtualization management.

Hardware-level virtualization
• Here the virtualization layer sits right on top of the hardware exporting the virtual
machine abstraction. Because the virtual machine looks like the hardware, all the
software written for it will run in the virtual machine.

Operating system–level virtualization
• In this case the virtualization layer sits between the operating system and the application
programs that run on the operating system. The virtual machine runs applications, or
sets of applications, that are written for the particular operating system being
virtualized.

High-level language virtual machines


• In high-level language virtual machines, the virtualization layer sits as an application
program on top of an operating system. The layer exports an abstraction of the virtual
machine that can run programs written and compiled to the particular abstract machine
definition. Any program written in the high-level language and compiled for this virtual
machine will run in it.

IMPLEMENTATION

Intel and AMD have independently developed virtualization extensions to the x86 architecture.
They are not directly compatible with each other, but serve largely the same functions. Either
will allow a virtual machine hypervisor to run an unmodified guest operating system without
incurring significant emulation performance penalties.

Hardware Implementation
➢ AMD Virtualization (AMD-V)
AMD's virtualization extensions to the 64-bit x86 architecture are named AMD Virtualization, abbreviated AMD-V, and are still sometimes referred to by the AMD internal project code name "Pacifica". AMD-V is present in the AMD Athlon 64 and Athlon 64 X2 with family "F" or "G" on socket AM2 (not 939), the Turion 64 X2, second- and third-generation Opterons, the Phenom, and all newer processors. Sempron processors do not include support for AMD-V.
On May 23, 2006, AMD released the Athlon 64 ("Orleans"), the Athlon 64 X2 ("Windsor")
and the Athlon 64 FX ("Windsor") as the first AMD processors to support AMD-V. Prior
processors do not have AMD-V.
AMD has also published a specification for a complementary technology named the I/O Memory Management Unit (IOMMU). This provides a way of configuring interrupt delivery to individual virtual machines and an I/O memory translation unit for preventing a virtual machine from using DMA to break isolation. The IOMMU also plays an important role in advanced operating systems (absent virtualization) and the AMD Torrenza architecture.

➢ Intel Virtualization Technology for x86 (Intel VT-x)
Previously codenamed "Vanderpool", VT-x is Intel's technology for virtualization on the x86 platform. Intel plans to add Extended Page Tables (EPT), a technology for page-table virtualization, in the upcoming Nehalem architecture.
The following modern Intel processors include support for VT-x:
• Pentium 4 - 662 and 672
• Pentium Extreme Edition 955 and 965 (not Pentium 4 Extreme Edition with HT)
• Pentium D 920-960 except 945, 935, 925, 915
• some models of the Core processor family
• some models of the Core 2 processor family
• Xeon 3000 series
• Xeon 5000 series
• Xeon 7000 series
• some models of the Atom processor family.
Neither Intel Celeron nor Pentium Dual-Core nor Pentium M processors have VT technology.
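As a quick practical check of the hardware support described above, the following sketch, assuming a Linux system, scans /proc/cpuinfo for the "vmx" flag (Intel VT-x) or the "svm" flag (AMD-V). It is an illustrative probe only, not part of either vendor's tooling.

```python
# Minimal sketch: report whether the processor advertises hardware
# virtualization support by scanning the flags line of /proc/cpuinfo on Linux.
# "vmx" indicates Intel VT-x; "svm" indicates AMD-V. Illustrative only.

def hardware_virtualization_support(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                if "vmx" in flags:
                    return "Intel VT-x"
                if "svm" in flags:
                    return "AMD-V"
                return "none advertised"
    return "unknown (no flags line found)"

if __name__ == "__main__":
    print("Hardware virtualization support:", hardware_virtualization_support())
```

Note that the extensions can still be disabled in firmware even when the flag is present, which a flag check alone will not reveal.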

SOFTWARE IMPLEMENTATION
➢ Virtual Server 2005 R2
One way to support multiple virtual machines on a single physical machine is to run
virtualization software largely on top of the operating system. Writing this software is
challenging, especially for older processors that don’t provide built-in support for hardware
virtualization. Yet it’s a viable solution, one that’s proven quite successful in practice. One
example of this success is Virtual Server 2005 R2, a freely available technology for Windows
Server 2003. Figure 5 illustrates how Virtual Server supports multiple virtual machines on a
single physical machine.
Whatever guest operating systems are running, all of them require storage. To allow this,
Microsoft has defined a virtual hard disk (VHD) format. A VHD is really just a file, but to a
virtual machine, it appears to be an attached disk drive. Guest operating systems and their
applications rely on one or more VHDs for storage. In fact, all of Microsoft’s hardware
virtualization technologies use the same VHD format, making it easier to move information
among them.
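To make the point that a VHD is really just a file concrete, here is a minimal sketch that reads the 512-byte footer every VHD carries at the end of the file. The field offsets follow the published VHD format specification; the file name is a placeholder, and this is an illustration rather than a complete parser.

```python
# Minimal VHD footer reader: a VHD is an ordinary file whose last 512 bytes
# form a footer beginning with the ASCII cookie "conectix". Field offsets
# follow the published VHD specification. Illustrative only.
import struct

DISK_TYPES = {2: "fixed", 3: "dynamic", 4: "differencing"}

def read_vhd_footer(path):
    with open(path, "rb") as f:
        f.seek(-512, 2)                      # footer is the last 512 bytes
        footer = f.read(512)
    if footer[0:8] != b"conectix":
        raise ValueError("not a VHD footer (cookie mismatch)")
    original_size = struct.unpack(">Q", footer[40:48])[0]   # big-endian bytes
    current_size = struct.unpack(">Q", footer[48:56])[0]
    disk_type = struct.unpack(">I", footer[60:64])[0]
    return {
        "disk_type": DISK_TYPES.get(disk_type, f"unknown ({disk_type})"),
        "virtual_size_gb": current_size / 2**30,
        "original_size_gb": original_size / 2**30,
    }

if __name__ == "__main__":
    # "guest.vhd" is a placeholder path for illustration.
    print(read_vhd_footer("guest.vhd"))
```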

➢ Virtual PC 2007
The most commercially important aspect of hardware virtualization today is the ability to
consolidate workloads from multiple physical servers onto one machine. Yet it can also be
useful to run guest operating systems on a desktop machine. Virtual PC is architecturally much
like Virtual Server. Virtual Server is significantly more scalable than Virtual PC, and it
supports a wider array of storage options. Virtual Server also includes administrative tools that
target professional IT staff, while Virtual PC is designed to be managed by users. While Virtual
PC does provide a few things that are lacking in Virtual Server, such as sound card support,
it’s fair to think of it as offering a simpler approach to hardware virtualization for desktop users.

➢ Xen Virtualization
A functional Red Hat Virtualization system is multilayered and is driven by the privileged Red Hat Virtualization component. Red Hat Virtualization can host multiple guest operating systems. Each guest operating system runs in its own domain, and Red Hat Virtualization schedules virtual CPUs within the virtual machines to make the best use of the available physical CPUs. Each guest operating system handles its own applications and schedules them accordingly.
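Xen domains of this kind are commonly inspected and managed through the libvirt API. The sketch below assumes the libvirt Python bindings are installed and that the connection URI "xen:///system" is appropriate for the host (the URI varies with distribution and libvirt version); it simply lists each domain with its virtual CPU and memory allocation.

```python
# Minimal sketch: list Xen guest domains through the libvirt Python bindings.
# Assumes python-libvirt is installed; the URI "xen:///system" is an assumption
# and may need adjusting for the local libvirt version. Illustrative only.
import libvirt

def list_domains(uri="xen:///system"):
    conn = libvirt.openReadOnly(uri)
    try:
        for dom in conn.listAllDomains():
            state, max_mem_kib, mem_kib, vcpus, cpu_time = dom.info()
            print(f"{dom.name():20s} active={bool(dom.isActive())} "
                  f"vcpus={vcpus} memory={mem_kib // 1024} MiB")
    finally:
        conn.close()

if __name__ == "__main__":
    list_domains()
```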

EVALUATION
The points below evaluate operating system-level virtualization (containers) in comparison with whole-system and para-virtualizing approaches.

Overhead
• Operating system-level virtualization usually imposes little or no overhead, because programs in a virtual partition use the operating system's normal system call interface and do not need to be subject to emulation or run in an intermediate virtual machine, as is the case with whole-system virtualizers (such as VMware and QEMU) or para-virtualizers (such as Xen and UML).
• It also does not require hardware assistance to perform efficiently.

Flexibility
• Operating system-level virtualization is not as flexible as other virtualization
approaches since it cannot host a guest operating system different from the host one, or
a different guest kernel. For example, with Linux, different distributions are fine, but
other operating systems such as Windows cannot be hosted. This limitation is partially overcome in
Solaris Containers by its branded zones feature, which provides the ability to run an
environment within a container that emulates a Linux 2.4-based release or an older
Solaris release.

Storage
• Some operating system-level virtualizers provide file-level copy-on-write mechanisms. (Most commonly, a standard file system is shared between partitions, and partitions which change the files automatically create their own copies.) This is easier to back up, more space-efficient and simpler to cache than the block-level copy-on-write schemes common on whole-system virtualizers. Whole-system virtualizers, however, can work with non-native file systems and create and roll back snapshots of the entire system state. (A toy sketch of the file-level idea follows.)
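The following toy model is purely illustrative of the file-level copy-on-write idea described above; real implementations work inside the file system, not in application code. Reads fall through to a shared base directory, and the first write copies the file into the partition's private layer.

```python
# Toy model of file-level copy-on-write between partitions sharing a base
# file system: reads use the shared copy until the partition writes, at which
# point it gets a private copy. Purely illustrative of the idea, not a real
# virtualization mechanism.
import os
import shutil

class CowPartition:
    def __init__(self, base_dir, private_dir):
        self.base_dir = base_dir
        self.private_dir = private_dir
        os.makedirs(private_dir, exist_ok=True)

    def _private_path(self, name):
        return os.path.join(self.private_dir, name)

    def read(self, name):
        path = self._private_path(name)
        if not os.path.exists(path):              # unmodified: share the base copy
            path = os.path.join(self.base_dir, name)
        with open(path) as f:
            return f.read()

    def write(self, name, data):
        path = self._private_path(name)
        base = os.path.join(self.base_dir, name)
        if not os.path.exists(path) and os.path.exists(base):
            shutil.copy2(base, path)              # copy on first write
        with open(path, "w") as f:
            f.write(data)
```

Backing up or caching such a partition only has to consider the files in its private layer, which is the space and simplicity advantage noted above.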

Restrictions Inside the Container

The following actions are often prohibited inside an operating system-level container (a small probe sketch follows the list):
• Modifying the running kernel by direct access and loading kernel modules.
• Mounting and dismounting file systems.
• Creating device nodes.
• Accessing raw, divert, or routing sockets.
• Modifying kernel runtime parameters, such as most sysctl settings.
• Changing securelevel-related file flags.
• Accessing network resources not associated with the container.
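A crude way to observe these restrictions from inside a container is simply to attempt one of the prohibited operations and note the refusal. The sketch below is a hypothetical probe, assuming a Linux container; it tries to create a device node and to change a sysctl, and should be run with care since both actions would actually take effect on an unconfined root shell.

```python
# Crude probe sketch: attempt two of the commonly prohibited actions listed
# above and report whether the container refuses them. Run with care; on an
# unconfined host these operations would actually take effect.
import os
import stat

def probe(label, action):
    try:
        action()
        print(f"{label}: allowed")
    except OSError as exc:
        print(f"{label}: refused ({exc.__class__.__name__}: {exc})")

def try_mknod():
    # Creating a device node (a dummy /dev/null clone) is usually blocked.
    path = "/tmp/probe_null"
    os.mknod(path, stat.S_IFCHR | 0o600, os.makedev(1, 3))
    os.remove(path)  # clean up if it unexpectedly succeeded

def try_sysctl_write():
    # Writing kernel runtime parameters via /proc/sys is usually blocked.
    with open("/proc/sys/kernel/hostname", "w") as f:
        f.write("probe-hostname\n")

if __name__ == "__main__":
    probe("create device node", try_mknod)
    probe("modify sysctl (kernel.hostname)", try_sysctl_write)
```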

Conclusion
Virtualization technologies have matured to the point where the technology is being deployed across a wide range of platforms and environments. The usage of virtualization has gone beyond increasing the utilization of infrastructure to areas such as data replication and data protection. This report has looked at the continuing evolution of virtualization, its potential, ways of optimizing it, and how to future-proof the technology.
After all, server virtualization's value is well-established. Many companies have migrated significant percentages of their servers to virtual machines hosted on larger servers, gaining benefits in hardware utilization, energy use, and data centre space. And those companies that haven't done so thus far are hatching plans to consolidate their servers in the future. These are all capital or infrastructure costs, though. What does server virtualization do for human costs, the IT operations piece of the puzzle?
Base level server consolidation offers a few benefits for IT operations. It makes
hardware maintenance much easier, since virtual machines can be moved to other
physical servers when it's time to maintain or repair the original server. This moves
hardware maintenance from a weekend and late night effort to a part of the regular
business day—certainly a great convenience.
The next step for most companies is to leverage the portability of virtual machines to
achieve IT operational agility. Because virtual machines are captured in disk images,
they are portable, no longer bound to an individual physical server. The ease of
reproducing virtual images means that application capacity can be easily dialed up and
down with the creation or tear down of additional virtual images. Server pooling allows
virtual machines to be automatically migrated according to application load. More
sophisticated virtualization uses include high availability, where virtual machines can be moved automatically, by the virtualization management software itself, when hardware failures occur. Seeing a virtual machine automatically brought up on a new server after its original host goes down, all without any human intervention, vividly demonstrates the power of more sophisticated virtualization use.
Certainly these kinds of uses of virtualization demonstrate its power to transform IT
operations, enabling IT organizations to offer the kind of responsiveness and creativity
that could only be dreamed of a few years ago. The deftness with which applications
can be migrated, upsized, downsized, cloned, etc. is something that will forever change
the way IT does its job.

References
1. Graziano, Charles. "A Performance Analysis of Xen and KVM Hypervisors for Hosting the Xen Worlds Project". Retrieved 2013-01-29.
2. Turban, E.; King, D.; Lee, J.; Viehland, D. (2008). "Chapter 19: Building E-Commerce Applications and Infrastructure". Electronic Commerce: A Managerial Perspective. Prentice-Hall. p. 27.
3. "Virtualization in Education" (PDF). IBM. October 2007. Retrieved 6 July 2010. "A virtual computer is a logical representation of a computer in software. By decoupling the physical hardware from the operating system, virtualization provides more operational flexibility and increases the utilization rate of the underlying physical hardware."
4. Wasserman, Orit (Red Hat) (2013). "Nested Virtualization: Shadow Turtles" (PDF). KVM Forum. Retrieved 2021-05-07.
5. Ben-Yehuda, Muli; Day, Michael D.; Dubitzky, Zvi; Factor, Michael; Har'El, Nadav; Gordon, Abel; Liguori, Anthony; Wasserman, Orit; Yassour, Ben-Ami (2010-09-23). "The Turtles Project: Design and Implementation of Nested Virtualization" (PDF). usenix.org. Retrieved 2014-12-16.
6. Fishman, Alex; Rapoport, Mike; Budilovsky, Evgeny; Eidus, Izik (2013-06-25). "HVX: Virtualizing the Cloud" (PDF). rackcdn.com. Retrieved 2014-12-16.
7. "4th-Gen Intel Core vPro Processors with Intel VMCS Shadowing" (PDF). Intel. 2013. Retrieved 2014-12-16.
8. Additional sources: VMware.com, Microsoft.com, SANS.org, Gartner.com, Trendmicro.com, Symantec.com.

