CCS Module 2
• Virtualization uses software that simulates hardware functionality to create a virtual system.
This practice lets IT organizations run multiple operating systems, virtual systems, and applications on a single server. The benefits of virtualization include greater
efficiency and economies of scale.
• OS virtualization uses software that enables a piece of hardware to run multiple operating
system images at the same time. The technology got its start on mainframes decades ago to
save on expensive processing power
• Organizations that divide their hard drives into different partitions already engage in
virtualization. A partition is the logical division of a hard disk drive that, in effect, creates two
separate hard drives
• Server virtualization is a key use of virtualization technology. It uses a software layer called a
hypervisor to emulate the underlying hardware. This includes the central processing unit's
(CPU's) memory, input/output and network traffic.
• Hypervisors take the physical resources and separate them for the virtual environment. They
can sit on top of an OS or be directly installed onto the hardware.
• The term virtualization is often synonymous with hardware virtualization, which plays a
fundamental role in efficiently delivering Infrastructure-as-a-Service (IaaS) solutions for cloud
computing. Moreover, virtualization technologies provide a virtual environment for not only
executing applications but also for storage, memory, and networking.
• Host Machine: The machine on which the virtual machine is going to be built is known as
Host Machine.
Components of Virtualization.
Virtual Machines (VMs):
These are virtual instances of a physical computer, complete with their own operating system, applications, and resources. VMs allow multiple operating systems to run concurrently on a single physical server, increasing resource utilization.
Hypervisors:
This software acts as a bridge between the physical hardware and the virtual machines, managing
the allocation and sharing of resources. There are two main types of hypervisors:
Type 1 (Bare Metal): Runs directly on the physical hardware and has direct access to hardware
resources.
Type 2: Runs on top of a traditional operating system and relies on it for resource allocation.
Virtualization Software:
This includes the software and tools used to manage and create virtualized resources, such as VM
management platforms and networking virtualization tools.
Networking Virtualization:
This component allows for the creation of virtual networks within the physical network
infrastructure, enabling isolated and flexible networking environments.
Storage Virtualization:
This involves creating a unified, virtualized view of storage resources, making them accessible to
multiple virtual machines and applications, even if the physical storage devices are geographically
separated.
• Physical resources:
• Cloud providers have physical servers (the "host") with CPUs, memory, storage, and network
interfaces.
• Virtualization layer:
• A hypervisor creates a layer of abstraction between the physical hardware and the VMs.
• The hypervisor creates virtual copies of hardware components (CPU, memory, storage) for
each VM, allowing multiple VMs to run on the same physical server.
• Isolation:
• Each VM has its own isolated environment, meaning they can run different operating
systems and applications without interfering with each other.
• 2. Hypervisors:
• Function:
• The hypervisor is the core component that manages the virtualized environment.
• Resource management:
• The hypervisor allocates and schedules the host's CPU, memory, storage, and network resources among the running VMs.
• Types:
• There are different types of hypervisors, including Type 1 (bare metal) and Type 2 (hosted).
• Example:
• VMware ESXi and Microsoft Hyper-V are Type 1 hypervisors; VMware Workstation and Oracle VirtualBox are Type 2.
• Foundation:
• Virtualization is the foundational technology behind cloud computing and Infrastructure-as-a-Service (IaaS) offerings.
• Resource sharing:
• It allows multiple users or organizations to share the same physical infrastructure (servers,
storage).
• Cost reduction:
• By sharing resources, cloud providers can optimize hardware utilization and reduce costs.
• On-demand access:
• Users can access virtualized resources on demand, such as virtual servers, storage, and
networks, through cloud services
• A guest operating system is the operating system installed on either a virtual machine (VM)
or partitioned disk. It is usually different from the host operating system (OS). Simply put, a
host OS runs on hardware while a guest OS runs on a VM.
• The OS installed on a computer that lets it communicate with its various hardware and
software elements is its host OS. As the originally installed OS, the host OS is the system's
"primary" OS
• The guest OS can also be used for hardware abstraction. It ensures efficient resource
management in the virtual environment and includes all required drivers for communication
with virtual hardware
• One can also directly create application-based virtual machine instances; the OS and the respective application are deployed together for you.
Advantages of Virtualization
• 1. Cost Savings
• Reduced Hardware Costs: Virtualization allows multiple virtual machines (VMs) to run on a
single physical machine, reducing the need for additional hardware.
• Energy Efficiency: Fewer physical servers mean lower power and cooling costs.
• Optimized Resource Utilization: Maximizes the use of existing hardware, eliminating
underutilization.
• Dynamic Resource Allocation: Resources such as CPU, memory, and storage can be allocated
dynamically based on demand.
• Ease of Deployment: Quickly create and deploy new virtual machines or applications without
the need for new hardware.
• Consolidation: Multiple virtual machines (VMs) can run on a single physical server,
maximizing the use of hardware resources like CPU, memory, and storage.
• Dynamic Allocation: Resources can be dynamically allocated to different VMs based on their
workload.
• Improved Disaster Recovery and Backup
• Snapshot Capabilities: VMs can be backed up easily through snapshots, allowing quick recovery in case of failure.
• Isolation: Faults in one VM do not affect others, improving overall system stability.
• Replication: Virtual machines can be replicated across different locations for robust disaster
recovery plans.
• Automation: Tasks such as provisioning, load balancing, and updates can be automated.
• Multi-Platform Support: Run multiple operating systems on the same hardware, increasing
compatibility.
• Enhanced Security
• Isolation: Each VM operates independently, reducing the risk of spreading malware or security breaches.
• Controlled Access: Better access control and monitoring features enhance security.
• Increased Productivity and Agility
• Faster Testing and Development: Developers can quickly spin up environments for testing and development.
• Mobility: Virtual machines can be easily migrated between physical servers without
downtime.
• High Availability
• Load Balancing: Distribute workloads across multiple servers to maintain availability.
• Failover Mechanisms: Virtualization tools often include failover capabilities to ensure
business continuity.
Levels of Virtualization.
Instruction Set Architecture (ISA) Level
• At the ISA level, virtualization is performed by emulating a given ISA by the ISA of the host
machine. For example, MIPS binary code can run on an x86-based host machine with the
help of ISA emulation.
• One source instruction may require tens or hundreds of native target instructions to perform
its function. Obviously, this process is relatively slow. For better performance, dynamic binary
translation is desired.
• This approach translates basic blocks of dynamic source instructions to target instructions.
• The basic blocks can also be extended to program traces or super blocks to increase
translation efficiency. Instruction set emulation requires binary translation and optimization.
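To make the interpret-versus-translate idea above concrete, here is a minimal Python sketch. The three-operand "source ISA" and the translate_block helper are invented purely for illustration (this is not any real emulator's code): instructions are interpreted one by one, while translated basic blocks are cached so each block is translated only once.

    # Minimal sketch of ISA emulation with a basic-block translation cache.
    # The "source ISA" used here is invented purely for illustration.

    def interpret(instr, regs):
        """Interpret one source instruction (the slow path)."""
        op, dst, a, b = instr
        if op == "ADD":
            regs[dst] = regs[a] + regs[b]
        elif op == "SUB":
            regs[dst] = regs[a] - regs[b]
        else:
            raise ValueError(f"unknown opcode {op}")

    def translate_block(block):
        """'Dynamic binary translation': turn a basic block into one host-level function."""
        def translated(regs):
            for instr in block:
                interpret(instr, regs)   # a real translator would emit native code here
        return translated

    translation_cache = {}

    def run_block(pc, block, regs):
        # Translate each basic block once, then reuse the cached translation.
        if pc not in translation_cache:
            translation_cache[pc] = translate_block(block)
        translation_cache[pc](regs)

    regs = {"r1": 2, "r2": 3, "r3": 0}
    run_block(0x0, [("ADD", "r3", "r1", "r2"), ("SUB", "r1", "r3", "r2")], regs)
    print(regs)   # {'r1': 2, 'r2': 3, 'r3': 5}

The cache stands in for the super blocks and program traces mentioned above: the more often a block is re-executed, the better its one-time translation cost is amortized.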
Hardware Abstraction Level
• Hardware-level virtualization is performed right on top of the bare hardware. On the one
hand, this approach generates a virtual hardware environment for a VM. On the other hand,
the process manages the underlying hardware through virtualization.
• The idea is to virtualize a computer’s resources, such as its processors, memory, and I/O
devices. The intention is to upgrade the hardware utilization rate by multiple users
concurrently. The idea was implemented in the IBM VM/370 in the 1960s. More recently, the
Xen hypervisor has been applied to virtualize x86-based machines to run Linux or other guest
OS applications.
Operating System Level
• This refers to an abstraction layer between the traditional OS and user applications. OS-level virtualization creates isolated containers on a single physical server and the OS instances to utilize the hardware and software in data centers.
• The containers behave like real servers. OS-level virtualization is commonly used in creating
virtual hosting environments to allocate hardware resources among a large number of
mutually distrusting users. It is also used, to a lesser extent, in consolidating server hardware
by moving services on separate hosts into containers or VMs on one server.
Library Support Level
• Most applications use APIs exported by user-level libraries rather than using lengthy system
calls by the OS. Since most systems provide well-documented APIs, such an interface
becomes another candidate for virtualization. Virtualization with library interfaces is possible
by controlling the communication link between applications and the rest of a system through
API hooks.
• The software tool WINE has implemented this approach to support Windows applications on
top of UNIX hosts. Another example is the vCUDA which allows applications executing within
VMs to leverage GPU hardware acceleration
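As a rough Python sketch of the API-hook idea (function names are invented; this is not how WINE is implemented, only an illustration of interposition): calls made against a guest library API are routed through a hook table that redirects them to host implementations.

    # Sketch of library-call interposition: the application calls the API it was
    # written against, and a hook layer redirects the call to a host routine.

    def native_open_file(path):            # stand-in for a host library routine
        return f"host handle for {path}"

    def guest_api_open(path):              # the API the application expects
        raise NotImplementedError("guest library is not present on this host")

    api_hooks = {"guest_api_open": native_open_file}   # guest API -> host implementation

    def call(api_name, *args):
        """Dispatch an application call through the hook table when a hook exists."""
        impl = api_hooks.get(api_name, globals().get(api_name))
        return impl(*args)

    print(call("guest_api_open", "C:\\data\\report.txt"))   # served by the host routine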
• User-Application Level
• The most popular approach at this level is to deploy high-level language (HLL) VMs. In this scenario, the virtualization layer sits as an application program on top of the operating system, and the layer exports an abstraction of a VM that can run programs written and compiled to a particular abstract machine definition. Any program written in the HLL and compiled for this VM will be able to run on it.
• The result is an application that is much easier to distribute and remove from user
workstations. An example is the LANDesk application virtualization platform which deploys
software applications as self-contained, executable files in an isolated environment without
requiring installation, system modifications, or elevated security privileges.
Hypervisor
• A hypervisor, also known as a virtual machine monitor or VMM, is software that creates and
runs virtual machines (VMs). A hypervisor allows one host computer to support multiple
guest VMs by virtually sharing its resources, such as memory and processing.
• Benefits of hypervisors
• There are several benefits to using a hypervisor that hosts multiple virtual machines:
• Speed: Hypervisors allow virtual machines to be created instantly, unlike bare-metal servers.
This makes it easier to provision resources as needed for dynamic workloads.
• Efficiency: Hypervisors that run several virtual machines on one physical machine’s resources
also allow for more efficient utilization of one physical server. It is more cost- and
energy-efficient to run several virtual machines on one physical machine than to run multiple
underutilized physical machines for the same task
• Flexibility: Bare-metal hypervisors allow operating systems and their associated applications
to run on a variety of hardware types because the hypervisor separates the OS from the
underlying hardware, so the software no longer relies on specific hardware devices or drivers
• Portability: Hypervisors allow multiple operating systems to reside on the same physical
server (host machine). Because the virtual machines that the hypervisor runs are
independent from the physical machine, they are portable.
• IT teams can shift workloads and allocate networking, memory, storage and processing
resources across multiple servers as needed, moving from machine to machine or platform
to platform. When an application needs more processing power, the virtualization software
allows it to seamlessly access additional machines.
• Types of hypervisors
• There are two main hypervisor types, referred to as “Type 1” (or “bare metal”) and “Type 2”
(or “hosted”). A type 1 hypervisor acts like a lightweight operating system and runs directly
on the host’s hardware, while a type 2 hypervisor runs as a software layer on an operating
system, like other computer programs.
Type 1 Hypervisor:
The hypervisor runs directly on the underlying host system. It is also known as a “Native Hypervisor”
or “Bare metal hypervisor”. It does not require any base server operating system. It has direct access
to hardware resources. Examples of Type 1 hypervisors include VMware ESXi, Citrix XenServer, and
Microsoft Hyper-V hypervisor.
• Pros: Such hypervisors are very efficient because they have direct access to the physical hardware resources (CPU, memory, network, and physical storage). This also strengthens security, because there is no third-party layer in between that an attacker could compromise.
• Cons: One problem with Type 1 hypervisors is that they usually need a dedicated, separate machine to perform their operations, manage the different VMs, and control the host hardware resources.
• Type 2 Hypervisor: It is also known as a “Hosted Hypervisor”. Such hypervisors do not run directly on the underlying hardware; instead, they run as an application on a host system (physical machine). Basically, the software is installed on an operating system, and the hypervisor asks the operating system to make hardware calls.
• Pros: Such hypervisors allow quick and easy access to a guest operating system while the host machine keeps running. They usually come with additional useful features for guest machines, and such tools enhance the coordination between the host machine and the guest machine.
Cons: Because there is no direct access to the physical hardware resources, these hypervisors lag behind Type 1 hypervisors in performance. Potential security risks also exist: if an attacker gains access to the host operating system, they can also access the guest operating systems.
Comparison of Type 1 and Type 2 hypervisors:
• Can it negotiate dedicated resources? Type 1: Yes. Type 2: No.
• Knowledge required: Type 1: system administrator-level knowledge. Type 2: basic user knowledge.
Full virtualization: Fully simulates the hardware to enable a guest OS to run in an isolated instance.
In a fully virtualized instance, an application would run on top of a guest OS, which would operate on
top of the hypervisor and finally the host OS and hardware. Full virtualization creates an environment
similar to an OS operating on an individual server. Utilizing full virtualization enables administrators
to run a virtual environment unchanged from its physical counterpart. For example, the full
virtualization approach was used for IBM's CP-40 and CP-67.
Other examples of fully virtualized systems include Oracle VM and ESXi. Full virtualization enables
administrators to combine both existing and new systems; however, each feature that the hardware
has must also appear in each VM for the process to be considered full virtualization. This means, to
integrate older systems, hardware must be upgraded to match newer systems.
• Advantages:
• The full virtualization technique makes it possible to consolidate existing systems onto newer ones with increased efficiency and well-organized hardware.
• This methodology contributes effectively to trimming down the operating costs involved in repairing and maintaining older systems.
• Less capable systems can be consolidated with this technique, reducing physical space and improving the overall performance of the organization.
Paravirtualization: Runs a modified and recompiled version of the guest OSes in a VM. This
modification enables the VM to differ somewhat from the hardware. The hardware isn't necessarily
simulated in paravirtualization but uses an application program interface (API) that can modify guest
OSes. To modify the guest OS, the source code for the guest OS must be accessible to replace
portions of code with customizable instructions, such as calls to VMM APIs. The OS is then
recompiled to use the new modifications.
The hypervisor then handles commands sent from the OS to the hypervisor, called hypercalls. These
are used for kernel operations, such as management of memory. Paravirtualization can improve
performance by decreasing the amount of VMM calls; however, paravirtualization requires the
modification of the OS, which also creates a large dependency between the OS and hypervisor that
could potentially limit further updates. For example, Xen is a product that can aid in
paravirtualization.
• Advantages:
• It enhances the performance notably by decreasing the number of VMM calls and prevents
the needless use of privileged instructions.
• This method is considered as the most advantageous one as it augments the performance
per server without the operating cost of a host operating system.
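The following Python sketch (class and method names invented for illustration; real hypercall ABIs are low-level, register-based interfaces) shows the structural idea behind paravirtualization: the modified guest kernel issues an explicit call into the VMM instead of executing a privileged operation directly.

    # Conceptual sketch of paravirtualization: sensitive operations in the guest
    # kernel are replaced by explicit hypercalls into the VMM.

    class VMM:
        def __init__(self):
            self.machine_pages = {}                # machine memory owned by the VMM

        def hypercall_update_page_table(self, guest_id, virt, phys):
            # The VMM validates and applies the mapping on the guest's behalf.
            self.machine_pages[(guest_id, virt)] = phys
            return 0                               # success status

    class ParavirtGuestKernel:
        def __init__(self, guest_id, vmm):
            self.guest_id = guest_id
            self.vmm = vmm

        def map_page(self, virt, phys):
            # Natively this would be a privileged MMU update; the modified,
            # recompiled guest issues a hypercall instead.
            return self.vmm.hypercall_update_page_table(self.guest_id, virt, phys)

    vmm = VMM()
    guest = ParavirtGuestKernel(guest_id=1, vmm=vmm)
    assert guest.map_page(virt=0x4000, phys=0x9000) == 0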
Virtual machines: Virtual machines, popularly known as VMs, emulate real or fictional hardware and require real resources from the host, which is the actual machine running the VMs. A virtual machine monitor (VMM) is used in cases where CPU instructions require extra privileges and cannot be executed in user space.
• Advantages:
• This methodology benefits system emulators, which use it to run an arbitrary guest operating system without altering the guest OS.
• VMMs are used to examine the executed code and facilitate its secure execution. Because of these varied benefits it is widely used by Microsoft Virtual Server, QEMU, Parallels, VirtualBox, and many VMware products.
Operating System Level Virtualization: Operating system level virtualization is intended to provide the security and separation needed to run multiple applications and replicas of the same operating system on the same server. Isolation and a safe environment enable numerous applications to run on, and share, a single server easily. This technique is used by Linux-VServer, FreeBSD Jails, OpenVZ, Solaris Zones and Virtuozzo.
Advantages:
• When compared with all the above-mentioned techniques, OS-level virtualization is considered to give the best performance and scalability.
Designers soon realized that virtualization functions could be implemented far more efficiently in
hardware rather than software, driving the development of extended command sets for Intel and
AMD processors, such as Intel VT and AMD-V extensions.
So, the hypervisor can simply make calls to the processor, which then does the heavy lifting of
creating and maintaining VMs. System overhead is significantly reduced, enabling the host system to
host more VMs and provide greater VM performance for more demanding workloads.
Hardware-assisted virtualization is the most common form of virtualization.
OS virtualization virtualizes hardware at the OS level to create multiple isolated virtualized instances
to run on a single system. Additionally, this process is done without the use of a hypervisor. This is
possible because OS virtualization will have the guest OS use the same running OS as the host
system. OS virtualization uses the host OS as the base of all independent VMs in a system. OS
virtualization gets rid of the need for driver emulation.
This leads to better performance and the possibility to run more machines simultaneously. However,
recommended LAN speeds are up to 100 Mbps, and not all OSes, such as some Linux distributions, can
support software virtualization. Examples of OS virtualization-based systems include Docker, Solaris
Containers and Linux-VServer.
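A heavily simplified, hypothetical Python sketch of the OS-level idea on Linux: the host kernel is shared, and isolation here comes only from confining a child process to its own root directory. Real container engines such as Docker additionally use namespaces, cgroups, and image layers; this fragment assumes root privileges and a prepared root filesystem at the made-up path /srv/minirootfs.

    # Toy illustration of OS-level isolation: one shared kernel, a confined child.
    # Requires root and a minimal root filesystem at NEW_ROOT; illustration only.
    import os

    NEW_ROOT = "/srv/minirootfs"        # hypothetical path to a prepared rootfs

    def run_isolated(command_argv):
        pid = os.fork()
        if pid == 0:                    # child: becomes the "container" process
            os.chroot(NEW_ROOT)         # confine the child's filesystem view
            os.chdir("/")
            os.execv(command_argv[0], command_argv)
        else:                           # parent: acts as a simple supervisor
            _, status = os.waitpid(pid, 0)
            return status

    # Example (commented out because it needs root and a prepared rootfs):
    # run_isolated(["/bin/sh", "-c", "echo hello from the container"])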
Modern operating systems and processors permit multiple processes to run simultaneously. If there
is no protection mechanism in a processor, all instructions from different processes will access the
hardware directly and cause a system crash. Therefore, all processors have at least two modes, user
mode and supervisor mode, to ensure controlled access of critical hardware. Instructions running in
supervisor mode are called privileged instructions. Other instructions are unprivileged instructions.
In a virtualized environment, it is more difficult to make OSes and applications run correctly because
there are more layers in the machine stack. Example 3.4 discusses Intel’s hardware support
approach.
At the time of this writing, many hardware virtualization products were available. The VMware
Workstation is a VM software suite for x86 and x86-64 computers. This software suite allows users to
set up multiple x86 and x86-64 virtual computers and to use one or more of these VMs
simultaneously with the host operating system. The VMware Workstation assumes the host-based
virtualization. Xen is a hypervisor for use in IA-32, x86-64, Itanium, and PowerPC 970 hosts. Actually,
Xen modifies Linux as the lowest and most privileged layer, or a hypervisor.
One or more guest OS can run on top of the hypervisor. KVM (Kernel-based Virtual Machine) is a
Linux kernel virtualization infrastructure. KVM can support hardware-assisted virtualization and
paravirtualization by using the Intel VT-x or AMD-v and VirtIO framework, respectively. The VirtIO
framework includes a paravirtual Ethernet card, a disk I/O controller, a balloon device for adjusting
guest memory usage, and a VGA graphics interface using VMware drivers.
Example 3.4 Hardware Support for Virtualization in the Intel x86 Processor
Since software-based virtualization techniques are complicated and incur performance overhead,
Intel provides a hardware-assist technique to make virtualization easy and improve performance.
Figure 3.10 provides an overview of Intel’s full virtualization techniques. For processor virtualization,
Intel offers the VT-x or VT-i technique. VT-x adds a privileged mode (VMX Root Mode) and some
instructions to processors. This enhancement traps all sensitive instructions in the VMM
automatically. For memory virtualization, Intel offers the EPT, which translates the virtual address to
the machine’s physical addresses to improve performance. For I/O virtualization, Intel implements
VT-d and VT-c to support this.
2. CPU Virtualization
A CPU architecture is virtualizable if it supports the ability to run the VM’s privileged and
unprivileged instructions in the CPU’s user mode while the VMM runs in supervisor mode. When the
privileged instructions including control- and behavior-sensitive instructions of a VM are executed,
they are trapped in the VMM. In this case, the VMM acts as a unified mediator for hardware access
from different VMs to guarantee the correctness and stability of the whole system. However, not all
CPU architectures are virtualizable. RISC CPU architectures can be naturally virtualized because all
control- and behavior-sensitive instructions are privileged instructions. On the contrary, x86 CPU
architectures are not primarily designed to support virtualization. This is because about 10 sensitive
instructions, such as SGDT and SMSW, are not privileged instructions. When these instructions
execute in virtualization, they cannot be trapped in the VMM.
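The trap-and-emulate behavior described above can be sketched in a few lines of Python (instruction names and VM state are invented; this only models the control flow, not any real CPU): unprivileged instructions run directly, while privileged ones trap into the VMM, which emulates them.

    # Sketch of trap-and-emulate: the guest runs deprivileged, and any privileged
    # instruction traps into the VMM, which emulates it on the guest's behalf.

    PRIVILEGED = {"LOAD_CR3", "HLT"}             # control-/behavior-sensitive ops

    class VMM:
        def handle_trap(self, vm_state, instr, operand):
            # Emulate the privileged instruction for the guest.
            if instr == "LOAD_CR3":
                vm_state["shadow_cr3"] = operand
            elif instr == "HLT":
                vm_state["halted"] = True

    def run_guest(instructions, vmm):
        state = {"acc": 0, "shadow_cr3": None, "halted": False}
        for instr, operand in instructions:
            if instr in PRIVILEGED:
                vmm.handle_trap(state, instr, operand)   # trap into the VMM
            elif instr == "ADD":
                state["acc"] += operand                  # unprivileged: runs directly
        return state

    print(run_guest([("ADD", 5), ("LOAD_CR3", 0x1000), ("HLT", None)], VMM()))

A sensitive but unprivileged x86 instruction such as SMSW would take the non-trapping branch in this model and silently bypass the VMM, which is exactly why the classic x86 ISA could not be virtualized by trap-and-emulate alone.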
On a native UNIX-like system, a system call triggers the 80h interrupt and passes control to the OS
kernel. The interrupt handler in the kernel is then invoked to process the system call. On a
para-virtualization system such as Xen, a system call in the guest OS first triggers the 80h interrupt
normally. Almost at the same time, the 82h interrupt in the hypervisor is triggered. Incidentally,
control is passed on to the hypervisor as well. When the hypervisor completes its task for the guest
OS system call, it passes control back to the guest OS kernel. Certainly, the guest OS kernel may also
invoke the hypercall while it’s running. Although paravirtualization of a CPU lets unmodified
applications run in the VM, it causes a small performance penalty.
Although x86 processors are not primarily virtualizable, great effort has been taken to virtualize them. They are used more widely than RISC processors, and the bulk of x86-based legacy systems cannot be discarded easily. Virtualization of x86 processors is detailed in the following sections. Intel's VT-x
technology is an example of hardware-assisted virtualization, as shown in Figure 3.11. Intel calls the
privilege level of x86 processors the VMX Root Mode. In order to control the start and stop of a VM
and allocate a memory page to maintain the
CPU state for VMs, a set of additional instructions is added. At the time of this writing, Xen, VMware,
and the Microsoft Virtual PC all implement their hypervisors by using the VT-x technology.
Generally, hardware-assisted virtualization should have high efficiency. However, since the transition
from the hypervisor to the guest OS incurs high overhead switches between processor modes, it
sometimes cannot outperform binary translation. Hence, virtualization systems such as VMware now
use a hybrid approach, in which a few tasks are offloaded to the hardware but the rest is still done in
software. In addition, para-virtualization and hardware-assisted virtualization can be combined to
improve the performance further.
3. Memory Virtualization
Virtual memory virtualization is similar to the virtual memory support provided by modern
operating systems. In a traditional execution environment, the operating system maintains mappings
of virtual memory to machine memory using page tables, which is a one-stage mapping from
virtual memory to machine memory. All modern x86 CPUs include a memory management unit
(MMU) and a translation lookaside buffer (TLB) to optimize virtual memory performance. However,
in a virtual execution environment, virtual memory virtualization involves sharing the physical system
memory in RAM and dynamically allocating it to the physical memory of the VMs.
That means a two-stage mapping process should be maintained by the guest OS and the VMM,
respectively: virtual memory to physical memory and physical memory to machine memory.
Furthermore, MMU virtualization should be supported, which is transparent to the guest OS. The
guest OS continues to control the mapping of virtual addresses to the physical memory addresses of
VMs. But the guest OS cannot directly access the actual machine memory. The VMM is responsible
for mapping the guest physical memory to the actual machine memory. Figure 3.12 shows the
two-level memory mapping procedure.
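A minimal Python sketch of the two-stage mapping (page size and sample mappings invented for illustration): the guest page table maps virtual to guest-physical pages, and a VMM-maintained table, standing in for the EPT or a shadow structure, maps guest-physical to machine (host-physical) pages.

    # Sketch of two-stage address translation: GVA -> GPA (guest OS) -> HPA (VMM).

    PAGE = 0x1000                                   # 4 KB pages

    guest_page_table = {0x4000: 0x8000}             # GVA page -> GPA page (guest OS)
    vmm_page_table = {0x8000: 0x12000}              # GPA page -> HPA page (VMM)

    def translate(gva):
        page, offset = gva & ~(PAGE - 1), gva & (PAGE - 1)
        gpa_page = guest_page_table[page]           # stage 1: guest OS mapping
        hpa_page = vmm_page_table[gpa_page]         # stage 2: VMM mapping
        return hpa_page | offset

    print(hex(translate(0x4abc)))                   # 0x12abc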
Since each page table of the guest OSes has a separate page table in the VMM corresponding to it,
the VMM page table is called the shadow page table. Nested page tables add another layer of
indirection to virtual memory. The MMU already handles virtual-to-physical translations as defined
by the OS. Then the physical memory addresses are translated to machine addresses using another
set of page tables defined by the hypervisor. Since modern operating systems maintain a set of page
tables for every process, the shadow page tables will get flooded. Consequently, the performance
overhead and cost of memory will be very high.
Since the efficiency of the software shadow page table technique was too low, Intel developed a
hardware-based EPT technique to improve it, as illustrated in Figure 3.13. In addition, Intel offers a
Virtual Processor ID (VPID) to improve use of the TLB. Therefore, the performance of memory
virtualization is greatly improved. In Figure 3.13, the page tables of the guest OS and EPT are all
four-level.
When a virtual address needs to be translated, the CPU will first look for the L4 page table pointed to
by Guest CR3. Since the address in Guest CR3 is a physical address in the guest OS, the CPU needs to
convert the Guest CR3 GPA to the host physical address (HPA) using EPT. In this procedure, the CPU
will check the EPT TLB to see if the translation is there. If there is no required translation in the EPT
TLB, the CPU will look for it in the EPT. If the CPU cannot find the translation in the EPT, an EPT
violation exception will be raised.
When the GPA of the L4 page table is obtained, the CPU will calculate the GPA of the L3 page table by
using the GVA and the content of the L4 page table. If the entry corresponding to the GVA in the L4
page table is a page fault, the CPU will generate a page fault interrupt and will let the guest OS kernel
handle the interrupt. When the GPA of the L3 page table is obtained, the CPU will look for the EPT to
get the HPA of the L3 page table, as described earlier. To get the HPA corresponding to a GVA, the
CPU needs to look for the EPT five times, and each time, the memory needs to be accessed four
times. Therefore, there are 20 memory accesses in the worst case, which is still very slow. To overcome this shortcoming, Intel increased the size of the EPT TLB to decrease the number of
memory accesses.
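The worst-case count in the paragraph above can be written out as a tiny script (assuming four-level guest page tables, a four-level EPT, and all TLBs missing, as in the text):

    # Worst-case memory accesses for one GVA -> HPA translation.
    guest_levels = 4
    ept_levels = 4

    # Guest CR3 plus each guest page-table level produces a guest-physical
    # address that must itself be translated through the EPT.
    ept_walks = 1 + guest_levels              # = 5 EPT walks
    memory_accesses = ept_walks * ept_levels  # 4 accesses per EPT walk
    print(ept_walks, memory_accesses)         # 5 20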
4. I/O Virtualization
I/O virtualization involves managing the routing of I/O requests between virtual devices and the
shared physical hardware. At the time of this writing, there are three ways to implement I/O
virtualization: full device emulation, para-virtualization, and direct I/O. Full device emulation is the
first approach for I/O virtualization. Generally, this approach emulates well-known, real-world
devices.
All the functions of a device or bus infrastructure, such as device enumeration, identification,
interrupts, and DMA, are replicated in software. This software is located in the VMM and acts as a
virtual device. The I/O access requests of the guest OS are trapped in the VMM which interacts with
the I/O devices. The full device emulation approach is shown in Figure 3.14.
A single hardware device can be shared by multiple VMs that run concurrently. However, software
emulation runs much slower than the hardware it emulates [10,15]. The para-virtualization method
of I/O virtualization is typically used in Xen. It is also known as the split driver model consisting of a
frontend driver and a backend driver. The frontend driver is running in Domain U and the backend
driver is running in Domain 0. They interact with each other via a block of shared memory. The
frontend driver manages the I/O requests of the guest OSes and the backend driver is responsible for
managing the real I/O devices and multiplexing the I/O data of different VMs. Although
para-I/O-virtualization achieves better device performance than full device emulation, it comes with
a higher CPU overhead.
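A structural Python sketch of the split driver model (names and the dictionary-based "disk" are invented; real Xen uses grant tables and shared ring buffers): the frontend queues requests into a shared structure, and the backend drains and services them against the real device, multiplexing requests from many VMs.

    # Sketch of Xen's split driver model: frontend (Domain U) and backend (Domain 0)
    # communicate through a shared ring and the backend services the real I/O.
    from collections import deque

    shared_ring = deque()                         # stand-in for shared memory

    class FrontendDriver:                         # runs in Domain U (the guest)
        def submit_read(self, vm_id, sector):
            shared_ring.append({"vm": vm_id, "op": "read", "sector": sector})

    class BackendDriver:                          # runs in Domain 0
        def __init__(self, real_disk):
            self.real_disk = real_disk

        def service_ring(self):
            results = []
            while shared_ring:
                req = shared_ring.popleft()       # requests from many VMs
                results.append((req["vm"], self.real_disk.get(req["sector"], b"")))
            return results

    disk = {7: b"block-7-data"}
    FrontendDriver().submit_read(vm_id=1, sector=7)
    print(BackendDriver(disk).service_ring())     # [(1, b'block-7-data')]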
Direct I/O virtualization lets the VM access devices directly. It can achieve close-to-native
performance without high CPU costs. However, current direct I/O virtualization implementations
focus on networking for mainframes. There are a lot of challenges for commodity hardware devices.
For example, when a physical device is reclaimed (required by workload migration) for later
reassignment, it may have been set to an arbitrary state (e.g., DMA to some arbitrary memory
locations) that can function incorrectly or even crash the whole system. Since software-based I/O
virtualization requires a very high overhead of device emulation, hardware-assisted I/O virtualization
is critical. Intel VT-d supports the remapping of I/O DMA transfers and device-generated interrupts.
The architecture of VT-d provides the flexibility to support multiple usage models that may run
unmodified, special-purpose, or “virtualization-aware” guest OSes.
Another way to help I/O virtualization is via self-virtualized I/O (SV-IO) [47]. The key idea of SV-IO is to
harness the rich resources of a multicore processor. All tasks associated with virtualizing an I/O
device are encapsulated in SV-IO. It provides virtual devices and an associated access API to VMs and
a management API to the VMM. SV-IO defines one virtual interface (VIF) for every kind of virtualized
I/O device, such as virtual network interfaces, virtual block devices (disk), virtual camera devices, and
others. The guest OS interacts with the VIFs via VIF device drivers. Each VIF consists of two message
queues. One is for outgoing messages to the devices and the other is for incoming messages from
the devices. In addition, each VIF has a unique ID for identifying it in SV-IO.
The VMware Workstation runs as an application. It leverages the I/O device support in guest OSes,
host OSes, and VMM to implement I/O virtualization. The application portion (VMApp) uses a driver
loaded into the host operating system (VMDriver) to establish the privileged VMM, which runs
directly on the hardware. A given physical processor is executed in either the host world or the VMM
world, with the VMDriver facilitating the transfer of control between the two worlds. The VMware
Workstation employs full device emulation to implement I/O virtualization. Figure 3.15 shows the
functional blocks used in sending and receiving packets via the emulated virtual NIC.
The virtual NIC models an AMD Lance Am79C970A controller. The device driver for a Lance controller
in the guest OS initiates packet transmissions by reading and writing a sequence of virtual I/O ports;
each read or write switches back to the VMApp to emulate the Lance port accesses. When the last
OUT instruction of the sequence is encountered, the Lance emulator calls a normal write() to the
VMNet driver. The VMNet driver then passes the packet onto the network via a host NIC and then
the VMApp switches back to the VMM. The switch raises a virtual interrupt to notify the guest device
driver that the packet was sent. Packet receives occur in reverse.
5. Virtualization in Multi-Core Processors
Virtualizing a multi-core processor is more complicated than virtualizing a single-core processor: application programs must be parallelized to use all cores fully, and software must explicitly assign tasks to the cores. Concerning the first challenge, new programming models, languages, and libraries are needed to make parallel programming easier. The second challenge has spawned research involving scheduling
algorithms and resource management policies. Yet these efforts cannot balance well among
performance, complexity, and other issues. What is worse, as technology scales, a new challenge
called dynamic heterogeneity is emerging to mix the fat CPU core and thin GPU cores on the same
chip, which further complicates the multi-core or many-core resource management. The dynamic
heterogeneity of hardware infrastructure mainly comes from less reliable transistors and increased
complexity in using the transistors [33,66].
Wells, et al. [74] proposed a multicore virtualization method to allow hardware designers to get an
abstraction of the low-level details of the processor cores. This technique alleviates the burden and
inefficiency of managing hardware resources by software. It is located under the ISA and remains
unmodified by the operating system or VMM (hypervisor). Figure 3.16 illustrates the technique of a
software-visible VCPU moving from one core to another and temporarily suspending execution of a
VCPU when there are no appropriate cores on which it can run.
The emerging many-core chip multiprocessors (CMPs) provide a new computing landscape. Instead of supporting time-sharing jobs on one or a few cores, we can use the abundant cores in a space-sharing manner, where single-threaded or multithreaded jobs are simultaneously assigned to separate
groups of cores for long time intervals. This idea was originally suggested by Marty and Hill [39]. To
optimize for space-shared workloads, they propose using virtual hierarchies to overlay a coherence
and caching hierarchy onto a physical processor. Unlike a fixed physical hierarchy, a virtual hierarchy
can adapt to fit how the work is space shared for improved performance and performance isolation.
Today’s many-core CMPs use a physical hierarchy of two or more cache levels that statically
determine the cache allocation and mapping. A virtual hierarchy is a cache hierarchy that can adapt
to fit the workload or mix of workloads [39]. The hierarchy’s first level locates data blocks close to the
cores needing them for faster access, establishes a shared-cache domain, and establishes a point of
coherence for faster communication. When a miss leaves a tile, it first attempts to locate the block
(or sharers) within the first level. The first level can also provide isolation between independent
workloads. A miss at the L1 cache can invoke the L2 access.
The idea is illustrated in Figure 3.17(a). Space sharing is applied to assign three workloads to three
clusters of virtual cores: namely VM0 and VM3 for database workload, VM1 and VM2 for web server
workload, and VM4–VM7 for middleware workload. The basic assumption is that each workload runs
in its own VM. However, space sharing applies equally within a single operating system. Statically
distributing the directory among tiles can do much better, provided operating systems or
hypervisors carefully map virtual pages to physical frames. Marty and Hill suggested a two-level
virtual coherence and caching hierarchy that harmonizes with the assignment of tiles to the virtual
clusters of VMs.
Figure 3.17(b) illustrates a logical view of such a virtual cluster hierarchy in two levels. Each VM
operates in an isolated fashion at the first level. This will minimize both miss access time and
performance interference with other workloads or VMs. Moreover, the shared resources of cache
capacity, interconnect links, and miss handling are mostly isolated between VMs. The second level
maintains a globally shared memory. This facilitates dynamically repartitioning resources without
costly cache flushes. Furthermore, maintaining globally shared memory minimizes changes to
existing system software and allows virtualization features such as content-based page sharing. A
virtual hierarchy adapts to space-shared workloads like multiprogramming and server consolidation.
Figure 3.17 shows a case study focused on consolidated server workloads in a tiled architecture. This
many-core mapping scheme can also optimize for space-shared multiprogrammed workloads in a
single-OS environment.
• Kernel-based Virtual Machine (KVM) is a software feature that you can install on physical
Linux machines to create virtual machines. A virtual machine is a software application that
acts as an independent computer within another physical computer. It shares resources like
CPU cycles, network bandwidth, and memory with the physical machine. KVM is a Linux
operating system component that provides native support for virtual machines on Linux.
• Linux kernel
• Linux kernel is the core of the open-source operating system. A kernel is a low-level program
that interacts with computer hardware. It also ensures that software applications running on
the operating system receive the required computing resources. Linux distributions, such as
Red Hat Enterprise Linux, Fedora, and Ubuntu, pack the Linux kernel and additional programs
into a user-friendly commercial operating system
• Kernel-based Virtual Machine (KVM) and VMware ESXi both provide virtualization
infrastructure to deploy type 1 hypervisors on the Linux kernel. However, KVM is an
open-source feature while VMware ESXi is available via commercial licenses.
• Organizations using VMware’s virtualization components enjoy professional support from its
technical team. Meanwhile, KVM users rely on a vast open-source community to address
potential issues.
KVM (Kernel-based Virtual Machine) is a virtualization technology built into the Linux kernel that
allows you to run multiple virtual machines (VMs) on a physical server. Each VM has its own virtual
CPU, memory, disk, and network interface.
Key KVM features:
• Security: Uses Linux security modules like SELinux and sVirt for VM isolation.
• Live Migration: Move running VMs between hosts with little or no downtime.
• Resource Control: Manage CPU, memory, disk, and network per VM using cgroups.
• Storage Support: Works with different storage backends like LVM, iSCSI, NFS, etc.
• Networking Flexibility: Supports bridged, NAT, and virtual networking via Linux bridges or Open vSwitch.
• Tool Integration: Works with tools like libvirt, virt-manager, and QEMU.
• Guest OS Support: Runs Linux, Windows, BSD, Solaris, and more as guest VMs.
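As a small, hedged example of working with KVM through libvirt (it assumes the libvirt-python package, a running libvirtd, and the local qemu:///system URI; adjust these for your environment), the following script checks whether the KVM device is present and lists the defined VMs:

    # Check for KVM support and list domains via libvirt (sketch; needs libvirt-python).
    import os
    import libvirt

    print("KVM device present:", os.path.exists("/dev/kvm"))

    conn = libvirt.open("qemu:///system")   # local QEMU/KVM hypervisor connection
    try:
        for dom in conn.listAllDomains():
            state = "running" if dom.isActive() else "shut off"
            print(dom.name(), state)
    finally:
        conn.close()

Tools such as virt-manager and virsh mentioned above are built on this same libvirt API.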
Xen Server
• Xen enables high-performance execution of guest operating systems. It achieves this by removing the performance loss incurred when executing instructions that require significant handling, and by modifying the portions of the guest operating system executed by Xen with respect to such instructions.
• The Xen architecture maps onto the classic x86 privilege model. A Xen-based system is managed by the Xen hypervisor, which executes in the most privileged mode and controls the access of guest operating systems to the underlying hardware. Guest operating systems run within domains, which represent virtual machine instances.
• In addition, particular control software, which has privileged access to the host and handles all other guest OSes, runs in a special domain called Domain 0.
• This is the only domain loaded once the virtual machine manager has fully booted; it hosts an HTTP server that serves requests for virtual machine creation, configuration, and termination.
• This component establishes the primary version of a shared virtual machine manager (VMM), which is a necessary part of a cloud computing system delivering an Infrastructure-as-a-Service (IaaS) solution.
• Various x86 implementations support four distinct security levels, termed rings, i.e.,
• Ring 0,
• Ring 1,
• Ring 2,
• Ring 3
• Here, Ring 0 represents the level with the most privilege and Ring 3 represents the level with the least privilege. Almost all frequently used operating systems, except OS/2, use only two levels: Ring 0 for kernel code and Ring 3 for user applications and non-privileged OS programs.
• This gives Xen the opportunity to implement paravirtualization while keeping the Application Binary Interface (ABI) unchanged, thus allowing a simple shift to Xen-virtualized solutions from an application perspective.
• Due to the structure of the x86 instruction set, some instructions allow code executing in Ring 3 to switch to Ring 0 (kernel mode).
• Such an operation is performed at the hardware level; within a virtualized environment it leads to a trap or a silent fault, which prevents the normal operation of the guest OS, since the guest is now running in Ring 1.
• This condition is caused by a subset of system calls. To eliminate it, the operating system implementation must be modified, and all sensitive system calls need to be re-implemented with hypercalls.
• Here, hypercalls are the specific calls exposed by the virtual machine (VM) interface of Xen; by using them, the Xen hypervisor catches the execution of all sensitive instructions, manages them, and returns control to the guest OS with the help of a supplied handler.
• Paravirtualization demands that the OS codebase be changed, and hence not all operating systems can be used as guest OSes in a Xen-based environment.
• This condition holds where hardware-assisted virtualization cannot be leveraged; such support enables running the hypervisor in Ring -1 and the guest OS in Ring 0. Hence, Xen shows some limitations in terms of legacy hardware and legacy operating systems.
• In fact, these cannot be modified to run safely in Ring 1 because their codebase is not accessible, and, at the same time, the underlying hardware provides no support to execute them in a mode more privileged than Ring 0.
• Open-source OSes such as Linux can easily be modified, as their code is openly available, and Xen delivers full support for virtualizing them, while components of Windows are basically not compatible with Xen unless hardware-assisted virtualization is available. As new OS releases are designed to be virtualized and new hardware supports x86 virtualization, this problem is being resolved.
• Pros:
• a) XenServer is developed on top of the open-source Xen hypervisor and uses a combination of hardware-based virtualization and paravirtualization. This tightly coupled collaboration between the operating system and the virtualization platform yields a lighter and more flexible hypervisor that delivers its functionality in an optimized manner.
• b) Xen supports efficient balancing of large workloads across CPU, memory, disk input-output, and network input-output. It offers two modes to handle such workloads: performance enhancement and data-density handling.
• c) It also comes equipped with a special storage feature called Citrix StorageLink, which allows a system administrator to use the features of storage arrays from vendors such as HP, NetApp, and Dell EqualLogic.
• d) It also supports multiple processors, live migration from one machine to another, physical-server-to-virtual-machine and virtual-server-to-virtual-machine conversion tools, centralized multiserver management, and real-time performance monitoring on Windows and Linux.
• Cons:
• b) Xen relies on third-party components to manage resources such as drivers, storage, backup, recovery, and fault tolerance.
• c) A Xen deployment can become burdensome for your Linux kernel system as time passes.
• d) Xen may sometimes increase the load on your resources through high input-output rates and may cause starvation of other VMs.
Cloud computing vs. virtualization: the total cost of cloud computing is higher than that of virtualization, while the total cost of virtualization is lower than that of cloud computing.