
UNIT-I INTRODUCTION TO VIRTUALIZATION

What is Virtualization

Virtualization is technology that you can use to create virtual representations of servers,
storage, networks, and other physical machines. Virtual software mimics the functions of
physical hardware to run multiple virtual machines simultaneously on a single physical
machine. Businesses use virtualization to use their hardware resources efficiently and get
greater returns from their investment.

Why is virtualization important

By using virtualization, you can interact with any hardware resource with greater
flexibility. Physical servers consume electricity, take up storage space, and need maintenance.
You are often limited by physical proximity and network design if you want to access them.
Virtualization removes all these limitations by abstracting physical hardware functionality into
software. You can manage, maintain, and use your hardware infrastructure like an application
on the web.

Virtualization example

Consider a company that needs servers for three functions:

1. Store business email securely


2. Run a customer-facing application
3. Run internal business applications

Each of these functions has different configuration requirements:

 The email application requires more storage capacity and a Windows operating system.
 The customer-facing application requires a Linux operating system and high processing power
to handle large volumes of website traffic.
 The internal business application requires iOS and more internal memory (RAM).

To meet these requirements, the company sets up three different dedicated physical servers for
each application. The company must make a high initial investment and perform ongoing
maintenance and upgrades for one machine at a time. The company also cannot optimize its
computing capacity. It pays 100% of the servers’ maintenance costs but uses only a fraction of
their storage and processing capacities.
Efficient hardware use

With virtualization, the company creates three digital servers, or virtual machines, on a single
physical server. It specifies the operating system requirements for the virtual machines and can
use them like the physical servers. However, the company now has less hardware and fewer
related expenses.

Infrastructure as a service

The company can go one step further and use a cloud instance or virtual machine from a cloud
computing provider such as AWS. AWS manages all the underlying hardware, and the
company can request server resources with varying configurations. All the applications run on
these virtual servers without the users noticing any difference. Server management also
becomes easier for the company’s IT team.
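
To make the IaaS step concrete, the following minimal Python sketch (using the AWS SDK for Python, boto3) requests one small virtual server; the region, AMI ID, and instance type shown are placeholder assumptions, not values from this document.

    import boto3  # AWS SDK for Python

    # Connect to the EC2 service in a chosen region (the region is an assumption).
    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Request one small virtual server. The AMI ID below is a placeholder and must
    # be replaced with a real image ID that exists in the chosen region.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder image ID
        InstanceType="t3.micro",          # small general-purpose configuration
        MinCount=1,
        MaxCount=1,
    )

    print("Launched instance:", response["Instances"][0]["InstanceId"])

The IT team can then treat this virtual server like any other, while AWS manages the underlying hardware.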

What is virtualization?

To properly understand Kernel-based Virtual Machine (KVM), you first need to understand
some basic concepts in virtualization. Virtualization is a process that allows a computer to
share its hardware resources with multiple digitally separated environments. Each virtualized
environment runs within its allocated resources, such as memory, processing power, and
storage. With virtualization, organizations can switch between different operating systems on
the same server without rebooting.

Virtual machines and hypervisors are two important concepts in virtualization.

Virtual machine

A virtual machine is a software-defined computer that runs on a physical computer with a separate operating system and computing resources. The physical computer is called the host
machine and virtual machines are guest machines. Multiple virtual machines can run on a
single physical machine. Virtual machines are abstracted from the computer hardware by a
hypervisor.

Hypervisor

The hypervisor is a software component that manages multiple virtual machines in a computer.
It ensures that each virtual machine gets the allocated resources and does not interfere with the
operation of other virtual machines. There are two types of hypervisors.

Type 1 hypervisor

A type 1 hypervisor, or bare-metal hypervisor, is a hypervisor program installed directly on the computer’s hardware instead of the operating system. Therefore, type 1 hypervisors have better performance and are commonly used by enterprise applications. KVM uses the type 1 hypervisor to host multiple virtual machines on the Linux operating system.
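
As an illustration of how a management tool talks to a KVM host, the following Python sketch uses the libvirt bindings to list the guests the hypervisor is managing; it assumes the libvirt-python package is installed and that a local qemu:///system connection is available.

    import libvirt  # Python bindings for the libvirt virtualization API

    # Connect to the local KVM/QEMU hypervisor (read-only is enough for listing).
    conn = libvirt.openReadOnly("qemu:///system")
    if conn is None:
        raise SystemExit("Failed to connect to the hypervisor")

    # Enumerate the guests known to the hypervisor and show their resources.
    for dom in conn.listAllDomains():
        state, max_mem_kib, mem_kib, vcpus, cpu_time = dom.info()
        print(f"{dom.name()}: state={state}, vCPUs={vcpus}, memory={mem_kib // 1024} MiB")

    conn.close()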

Type 2 hypervisor

Also known as a hosted hypervisor, the type 2 hypervisor is installed on an operating system.
Type 2 hypervisors are suitable for end-user computing.
What are the benefits of virtualization?

Virtualization provides several benefits to any organization:

Efficient resource use

Virtualization improves the utilization of hardware resources in your data center. For example, instead of
running one server on one computer system, you can create a virtual server pool on the same
computer system by using and returning servers to the pool as required. Having fewer
underlying physical servers frees up space in your data center and saves money on electricity,
generators, and cooling appliances.

Automated IT management

Now that physical computers are virtual, you can manage them by using software tools.
Administrators create deployment and configuration programs to define virtual machine
templates. You can duplicate your infrastructure repeatedly and consistently and avoid error-
prone manual configurations.
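
As a minimal sketch of such template-driven configuration (the template values and VM names below are illustrative assumptions), an administrator might stamp out identical guest definitions from one description:

    # Simplified, libvirt-style domain definitions generated from one template.
    TEMPLATE = {"vcpus": 2, "memory_mib": 2048, "os_arch": "x86_64"}

    def render_domain_xml(name: str, spec: dict) -> str:
        # A real definition would also declare disks, networks, and firmware.
        return (
            f"<domain type='kvm'>"
            f"<name>{name}</name>"
            f"<vcpu>{spec['vcpus']}</vcpu>"
            f"<memory unit='MiB'>{spec['memory_mib']}</memory>"
            f"<os><type arch='{spec['os_arch']}'>hvm</type></os>"
            f"</domain>"
        )

    # Duplicate the same configuration repeatedly and consistently, avoiding
    # error-prone manual setup.
    for i in range(1, 4):
        print(render_domain_xml(f"app-vm-{i:02d}", TEMPLATE))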

Faster disaster recovery

When events such as natural disasters or cyberattacks negatively affect business operations,
regaining access to IT infrastructure and replacing or fixing a physical server can take hours or
even days. By contrast, the process takes minutes with virtualized environments. This prompt
response significantly improves resiliency and facilitates business continuity so that operations
can continue as scheduled.

How does virtualization work?

Virtualization uses specialized software, called a hypervisor, to create several cloud instances
or virtual machines on one physical computer.

Cloud instances or virtual machines

After you install virtualization software on your computer, you can create one or more virtual
machines. You can access the virtual machines in the same way that you access other
applications on your computer. Your computer is called the host, and the virtual machine is
called the guest. Several guests can run on the host. Each guest has its own operating system,
which can be the same or different from the host operating system.

From the user’s perspective, the virtual machine operates like a typical server. It has settings,
configurations, and installed applications. Computing resources, such as central processing
units (CPUs), Random Access Memory (RAM), and storage appear the same as on a physical
server. You can also configure and update the guest operating systems and their applications
as necessary without affecting the host operating system.

Hypervisors

The hypervisor is the virtualization software that you install on your physical machine. It is a
software layer that acts as an intermediary between the virtual machines and the underlying
hardware or host operating system. The hypervisor coordinates access to the physical
environment so that several virtual machines have access to their own share of physical
resources.

For example, if the virtual machine requires computing resources, such as computer processing
power, the request first goes to the hypervisor. The hypervisor then passes the request to the
underlying hardware, which performs the task.

The following are the two main types of hypervisors.

Type 1 hypervisors

A type 1 hypervisor—also called a bare-metal hypervisor—runs directly on the computer hardware. It has some operating system capabilities and is highly efficient because it interacts directly with the physical resources.

Type 2 hypervisors

A type 2 hypervisor runs as an application on computer hardware with an existing operating system. Use this type of hypervisor when running multiple operating systems on a single machine.

What are the different types of virtualization?

You can use virtualization technology to get the functions of many different types of physical
infrastructure and all the benefits of a virtualized environment. You can go beyond virtual
machines to create a collection of virtual resources in your virtual environment.

Server virtualization

Server virtualization is a process that partitions a physical server into multiple virtual servers.
It is an efficient and cost-effective way to use server resources and deploy IT services in an
organization. Without server virtualization, physical servers use only a small amount of their
processing capacities, which leaves devices idle.

Storage virtualization

Storage virtualization combines the functions of physical storage devices such as network
attached storage (NAS) and storage area network (SAN). You can pool the storage hardware
in your data center, even if it is from different vendors or of different types. Storage
virtualization uses all your physical data storage and creates a large unit of virtual storage that
you can assign and control by using management software. IT administrators can streamline
storage activities, such as archiving, backup, and recovery, because they can combine multiple
network storage devices virtually into a single storage device.

Network virtualization

Any computer network has hardware elements such as switches, routers, and firewalls. An
organization with offices in multiple geographic locations can have several different network
technologies working together to create its enterprise network. Network virtualization is a
process that combines all of these network resources to centralize administrative tasks.
Administrators can adjust and control these elements virtually without touching the physical
components, which greatly simplifies network management.
The following are two approaches to network virtualization.

Software-defined networking

Software-defined networking (SDN) controls traffic routing by taking over routing management from data routing in the physical environment. For example, you can program your system to prioritize your video call traffic over application traffic to ensure consistent call quality in all online meetings.
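
The sketch below is purely conceptual (it is not a real SDN controller API; the traffic classes, ports, and priority values are assumptions) and shows how such a software-defined policy might classify and reorder traffic:

    # Conceptual SDN-style policy: lower priority number is forwarded first.
    PRIORITY = {"video_call": 0, "application": 1, "bulk_transfer": 2}

    def classify(packet: dict) -> str:
        # Map a packet to a traffic class using simple header fields.
        if packet.get("dst_port") in (3478, 5004):   # typical real-time media ports
            return "video_call"
        if packet.get("dst_port") in (80, 443):      # ordinary application traffic
            return "application"
        return "bulk_transfer"

    def schedule(packets):
        # Forward video-call traffic ahead of everything else.
        return sorted(packets, key=lambda p: PRIORITY[classify(p)])

    queue = [{"dst_port": 443}, {"dst_port": 3478}, {"dst_port": 21}]
    print([classify(p) for p in schedule(queue)])  # video_call comes out first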

Network function virtualization

Network function virtualization technology combines the functions of network appliances, such as firewalls, load balancers, and traffic analyzers that work together, to improve network performance.

Data virtualization

Modern organizations collect data from several sources and store it in different formats. They
might also store data in different places, such as in a cloud infrastructure and an on-premises
data center. Data virtualization creates a software layer between this data and the applications
that need it. Data virtualization tools process an application’s data request and return results in
a suitable format. Thus, organizations use data virtualization solutions to increase flexibility
for data integration and support cross-functional data analysis.
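
As a conceptual illustration only (the sources, table, and record values below are assumptions, not a real data virtualization product), the software layer can be pictured as a facade that answers one request from several stores in a single format:

    import json
    import sqlite3  # an in-memory database stands in for an on-premises source

    class DataVirtualizationLayer:
        def __init__(self, cloud_documents):
            # In-memory SQLite plays the role of the on-premises relational store.
            self.sql = sqlite3.connect(":memory:")
            self.sql.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
            self.sql.execute("INSERT INTO customers VALUES (1, 'Acme Ltd')")
            # A plain list plays the role of JSON documents held in cloud storage.
            self.documents = cloud_documents

        def query_customers(self):
            # Combine both sources and return one consistent JSON result.
            rows = self.sql.execute("SELECT id, name FROM customers").fetchall()
            combined = [{"id": i, "name": n, "source": "sql"} for i, n in rows]
            combined += [{**doc, "source": "cloud"} for doc in self.documents]
            return json.dumps(combined)

    layer = DataVirtualizationLayer([{"id": 2, "name": "Globex Inc"}])
    print(layer.query_customers())  # the application sees one uniform answer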

Application virtualization

Application virtualization pulls out the functions of applications to run on operating systems
other than the operating systems for which they were designed. For example, users can run a
Microsoft Windows application on a Linux machine without changing the machine
configuration. To achieve application virtualization, follow these practices:

 Application streaming – Users stream the application from a remote server, so it runs only on
the end user's device when needed.
 Server-based application virtualization – Users can access the remote application from their
browser or client interface without installing it.
 Local application virtualization – The application code is shipped with its own environment to
run on all operating systems without changes.
Desktop virtualization

Most organizations have nontechnical staff that use desktop operating systems to run common
business applications. For instance, you might have the following staff:

 A customer service team that requires a desktop computer with Windows 10 and customer-
relationship management software
 A marketing team that requires Windows Vista for sales applications

You can use desktop virtualization to run these different desktop operating systems on virtual
machines, which your teams can access remotely. This type of virtualization makes desktop
management efficient and secure, saving money on desktop hardware. The following are types
of desktop virtualization.

Virtual desktop infrastructure

Virtual desktop infrastructure runs virtual desktops on a remote server. Your users can access
them by using client devices.

Local desktop virtualization

In local desktop virtualization, you run the hypervisor on a local computer and create a virtual
computer with a different operating system. You can switch between your local and virtual
environment in the same way you can switch between applications.

Need of Virtualization and its Reference Model

There are five major needs of virtualization which are described below:

Figure: Major needs of Virtualization.

1. ENHANCED PERFORMANCE-
Currently, the typical end-user system (a PC) is powerful enough to fulfill all of the user's basic computation requirements, along with additional capabilities that are rarely used. Most of these systems have enough spare resources to host a virtual machine manager and run a virtual machine with acceptable performance.
2. LIMITED USE OF HARDWARE AND SOFTWARE RESOURCES-
Limited use of resources leads to under-utilization of hardware and software. Users' PCs are capable enough to meet their regular computational needs, yet they sit idle for much of the day even though they could run 24/7 without interruption. The efficiency of the IT infrastructure could be increased by using these resources after hours for other purposes, and virtualization makes such an environment attainable.
3. SHORTAGE OF SPACE-
The continuous need for additional capacity, whether storage or compute power, makes data centers grow rapidly. Companies like Google, Microsoft, and Amazon expand their infrastructure by building data centers as their needs dictate. Most enterprises, however, cannot afford to build another data center to accommodate additional resource capacity. This has driven the spread of a technique known as server consolidation.
4. ECO-FRIENDLY INITIATIVES-
Corporations are actively seeking ways to reduce the amount of power consumed by their systems. Data centers are major power consumers: their operations need a continuous power supply, and a good amount of additional energy is needed to keep them cool so that they function well. Server consolidation reduces both the power consumed and the cooling load by lowering the number of servers, and virtualization provides a sophisticated way to achieve server consolidation.
5. ADMINISTRATIVE COSTS-
Furthermore, the growing demand for capacity translates into more servers in a data center, which is responsible for a significant increase in administrative costs. Common system administration tasks include hardware monitoring, server setup and updates, replacement of defective hardware, monitoring of server resources, and backups. These are personnel-intensive operations, and administrative costs grow with the number of servers. Virtualization decreases the number of servers required for a given workload and hence reduces the cost of administrative staff.

VIRTUALIZATION REFERENCE MODEL-


Figure: Reference Model of Virtualization.

Three major components make up a virtualized environment:


1. GUEST:
The guest represents the system component that interacts with the virtualization layer rather
than with the host, as would normally happen. Guests usually consist of one or more virtual
disk files, and a VM definition file. Virtual Machines are centrally managed by a host
application that sees and manages each virtual machine as a different application.
2. HOST:
The host represents the original environment where the guest is supposed to be managed.
Each guest runs on the host using shared resources donated to it by the host. The operating system works as the host and handles physical resource management and device support.
3. VIRTUALIZATION LAYER:
The virtualization layer is responsible for recreating the same or a different environment
where the guest will operate. It is an additional abstraction layer between the network, storage, and compute hardware and the applications running on it. Without virtualization, a machine usually runs a single operating system, which is very inflexible by comparison.
Limitations of virtualization

There are a few limitations of hardware (VM) virtualization, which have led to containerization. Let's look at a few of them.

Machine turn up time


VMs run a fully-fledged OS. Every time a machine needs to be started, restarted, or shut down, it runs through the full OS life cycle and boot procedure. A few enterprises employ rigid policies for procuring new IT resources. All of this increases the time required by the team to deliver a VM or to upgrade an existing one, because each new request must be fulfilled by a whole set of steps. For example, machine provisioning involves gathering the requirements, provisioning a new VM, procuring a license and installing the OS, allocating storage, configuring the network, and setting up redundancy and security policies.

Every time you wish to deploy your application you also have to ensure application specific
software requirements such as web servers, database servers, runtimes, and any support
software such as plugin drivers are installed on the machine. With teams obliged to deliver at
light speed, the current VM virtualization will create more friction and latency.

Low resource utilization


The preceding problem can be partially solved by using cloud platforms, which offer on-demand resource provisioning, but public cloud vendors offer a predefined set of VM configurations, and not every application utilizes all of the allocated compute and memory.

In a common enterprise scenario, every small application is deployed in a separate VM for isolation and security benefits. Further, to ensure scalability and availability, identical VMs are created and traffic is balanced among them. If the application utilizes only 5-10% of the CPU's capacity, the IT infrastructure is heavily underutilized. Power and cooling needs for such systems are also high, which adds to the costs. Some applications are used seasonally or by a limited set of users, but the servers still have to be up and running. Another important drawback of VMs is that, inside a VM, the OS and supporting services occupy more space than the application itself.

Operational costs
Every IT organization needs an operations team to manage the infrastructure's regular
maintenance activities. The team's responsibility is to ensure that activities such as procuring
machines, maintaining SOX Compliance, executing regular updates, and security patches are
done in a timely manner. The following are a few drawbacks that add up to operational costs
due to VM virtualization:

 The size of the operations team is proportional to the size of the IT infrastructure. Large infrastructures require larger teams and therefore cost more to maintain.

 Every enterprise is obliged to provide continuous business to its customers for which it
has to employ redundant and recovery systems. Recovery systems often take the same
amount of resources and configuration as original ones, which means twice the original
costs.

 Enterprises also have to pay for licenses for each guest OS no matter how little the
usage may be.

Application packaging and deployment


VMs are not easily shippable. Every application has to be tested on developer machines, and proper instruction sets have to be documented for the operations or deployment teams to prepare the machine and deploy the application. No matter how well you document and take precautions, in many instances deployments fail because, at the end of the day, the application runs in a completely different environment than the one it was tested on, which makes it riskier.

Let us imagine you have successfully installed the application on a VM; even then, VMs are not easily shareable as application packages due to their extremely large sizes, which makes them a misfit for DevOps-style work cultures. Imagine your applications need to go through rigorous
testing cycles to ensure high quality. Every time you want to deploy and test a developed
feature a new environment needs to be created and configured. The application should be
deployed on the machine and then the test cases should be executed. In agile teams, release
happens quite often, so the turnaround time for the testing phase to begin and results to be out
will be quite high because of the machine provisioning and preparation work.

Choosing between VM virtualization and containerization is purely a matter of scope and need. It might not always be feasible to use containers. One advantage, for example, is that in VM virtualization the guest OS of the VM and the host OS need not be the same. A Linux VM and a Windows VM can run in parallel on Hyper-V. This is possible because in VM virtualization only the hardware layer is virtualized. Since containers share the host's OS kernel, a Linux container cannot be shipped to a Windows machine. Having said that, the future holds good
things for both containers and VMs in both private and public clouds. There might be cases
where an enterprise opts to use a hybrid model depending on scope and need.
Types of hardware virtualization techniques and their advantages: Virtualization techniques are used to generate numerous isolated partitions on a single physical server. These techniques vary in their methods and level of abstraction while offering similar traits and working toward the same goal. The most popular virtualization techniques are:

1. Full Virtualization: This technique fully virtualizes the physical server so that applications and software operate on the virtualized partitions in much the same way they would on a dedicated server, creating an environment as if each were running on its own unique machine. The full virtualization technique enables administrators to run an unchanged, entirely virtualized operating system.

Advantages:

 The full virtualization technique makes it possible to consolidate existing systems onto newer ones with increased efficiency and better-organized hardware.
 It helps to trim down the operating costs involved in repairing and upgrading older systems.
 Less capable systems can be given a performance boost with this technique, while reducing physical space requirements and improving the overall performance of the company.

2. Virtual machines: Virtual machines, popularly known as VMs, imitate certain real or imaginary hardware and require real resources from the host, which is the actual machine operating the VMs. A virtual machine monitor (VMM) is used in certain cases where the CPU instructions need extra privileges and may not be executed in user space.

Advantages:

 This methodology benefits numerous system emulators, which use it to run an arbitrary guest operating system without altering the guest OS.
 VMMs are used to examine the executed code and facilitate its secure running. For such varied benefits it is widely used by Microsoft Virtual Server, QEMU, Parallels, VirtualBox, and many VMware products.

3. Para-Virtualization: This methodology runs modified versions of operating systems. The software and programs are adapted to run in their own environments without any hardware simulation. Using this technique, the guest is fully aware of its environment, because the para-virtualized OS is altered to be aware of its virtualization.

Advantages:

 It enhances the performance notably by decreasing the number of VMM calls and
prevents the needless use of privileged instructions.
 It allows running many operating systems on a single server.
 This method is considered the most advantageous one, as it increases the performance per server without the overhead of a host operating system.
4. Operating System level Virtualization: Operating system level virtualization is specifically intended to provide the security and separation needed to run multiple applications and replicas of the same operating system on the same server. By isolating and segregating workloads in a safe environment, it enables numerous applications to run on and share a single server easily. This technique is used by Linux-VServer, FreeBSD Jails, OpenVZ, Solaris Zones, and Virtuozzo.

Advantages:

 When compared with all the above-mentioned techniques, OS-level virtualization is considered to give the best performance and scalability.
 This technique is easy to control and comparatively uncomplicated to manage, as everything can be administered from the host system.

Virtualization has become a widespread concept in today's world of information technology. Skilled and experienced designers can do everything required to optimize the performance of virtualized systems while staying focused on your business needs.

Hypervisor
A hypervisor is a form of virtualization software used in Cloud hosting to divide and allocate
the resources on various pieces of hardware. The program which provides partitioning,
isolation, or abstraction is called a virtualization hypervisor. The hypervisor is a hardware
virtualization technique that allows multiple guest operating systems (OS) to run on a single
host system at the same time. A hypervisor is sometimes also called a virtual machine
manager (VMM).

Types of Hypervisor –

TYPE-1 Hypervisor:
The hypervisor runs directly on the underlying host system. It is also known as a “Native
Hypervisor” or “Bare metal hypervisor”. It does not require any base server operating system.
It has direct access to hardware resources. Examples of Type 1 hypervisors include VMware
ESXi, Citrix XenServer, and Microsoft Hyper-V hypervisor.

Pros & Cons of Type-1 Hypervisor:


Pros: Such hypervisors are very efficient because they have direct access to the physical hardware resources (such as CPU, memory, network, and physical storage). This also strengthens security, because there is no third-party layer in between that an attacker could compromise.
Cons: One problem with Type-1 hypervisors is that they usually need a dedicated, separate machine to perform their operation, to manage the different VMs, and to control the host hardware resources.
TYPE-2 Hypervisor:
The hypervisor runs on top of a host operating system that runs on the underlying host system. It is also known as a “Hosted Hypervisor”. Such hypervisors don’t run directly on the underlying hardware; rather, they run as an application in a host system (physical machine). Basically, the software is installed on an operating system, and the hypervisor asks the operating system to make hardware calls. Examples of Type 2 hypervisors include VMware Player and Parallels Desktop. Hosted hypervisors are often found on endpoints like PCs. The type-2 hypervisor is very useful for engineers and security analysts (for example, for checking malware, malicious source code, and newly developed applications).
Pros & Cons of Type-2 Hypervisor:
Pros: Such hypervisors allow quick and easy access to a guest operating system while the host machine keeps running. These hypervisors usually come with additional useful features for guest machines, and such tools enhance the coordination between the host machine and the guest machine.
Cons: Because there is no direct access to the physical hardware resources, these hypervisors lag behind type-1 hypervisors in performance. There are also potential security risks: if an attacker exploits a security weakness and gains access to the host operating system, they can also access the guest operating systems.

Choosing the right hypervisor:


Type 1 hypervisors offer much better performance than Type 2 ones because there’s no
middle layer, making them the logical choice for mission-critical applications and workloads.
But that’s not to say that hosted hypervisors don’t have their place – they’re much simpler to
set up, so they’re a good bet if, say, you need to deploy a test environment quickly. One of
the best ways to determine which hypervisor meets your needs is to compare their
performance metrics. These include CPU overhead, the amount of maximum host and guest
memory, and support for virtual processors. The following factors should be examined before
choosing a suitable hypervisor:
1. Understand your needs: The company and its applications are the reason for the data center
(and your job). Besides your company’s needs, you (and your co-workers in IT) also have
your own needs. Needs for a virtualization hypervisor are:
a. Flexibility
b. Scalability
c. Usability
d. Availability
e. Reliability
f. Efficiency
g. Reliable support
2. The cost of a hypervisor: For many buyers, the toughest part of choosing a hypervisor is
striking the right balance between cost and functionality. While a number of entry-level
solutions are free, or practically free, the prices at the opposite end of the market can be
staggering. Licensing frameworks also vary, so it’s important to be aware of exactly what
you’re getting for your money.
3. Virtual machine performance: Virtual systems should meet or exceed the performance of
their physical counterparts, at least in relation to the applications within each server.
Everything beyond meeting this benchmark is profit.
4. Ecosystem: It’s tempting to overlook the role of a hypervisor’s ecosystem – that is, the
availability of documentation, support, training, third-party developers and consultancies,
and so on – in determining whether or not a solution is cost-effective in the long term.
5. Test for yourself: You can gain basic experience from your existing desktop or laptop. You
can run both VMware vSphere and Microsoft Hyper-V in either VMware Workstation or
VMware Fusion to create a nice virtual learning and testing environment.

HYPERVISOR REFERENCE MODEL:


There are three main modules that coordinate in order to emulate the underlying hardware:

1. DISPATCHER:
The dispatcher behaves like the entry point of the monitor and reroutes the instructions
of the virtual machine instance to one of the other two modules.

2. ALLOCATOR:
The allocator is responsible for deciding the system resources to be provided to the virtual
machine instance. It means whenever a virtual machine tries to execute an instruction that
results in changing the machine resources associated with the virtual machine, the
allocator is invoked by the dispatcher.

3. INTERPRETER:
The interpreter module consists of interpreter routines. These are executed, whenever a
virtual machine executes a privileged instruction.
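
The following is a purely conceptual Python sketch of this reference model (all class names, method names, and instruction strings are illustrative assumptions, not a real hypervisor): the dispatcher receives each trapped instruction and hands it to the allocator when it changes resource assignments, or to an interpreter routine when it is privileged.

    class Allocator:
        def allocate(self, vm_id, resource, amount):
            # Decide which system resources to hand to the virtual machine instance.
            return f"granted {amount} units of {resource} to {vm_id}"

    class Interpreter:
        def execute(self, vm_id, instruction):
            # Interpreter routines run whenever a guest issues a privileged instruction.
            return f"emulated privileged instruction '{instruction}' for {vm_id}"

    class Dispatcher:
        # Entry point of the monitor: reroutes guest instructions to the other modules.
        def __init__(self):
            self.allocator, self.interpreter = Allocator(), Interpreter()

        def dispatch(self, vm_id, instruction):
            if instruction.startswith("alloc"):  # instruction changes machine resources
                _, resource, amount = instruction.split()
                return self.allocator.allocate(vm_id, resource, int(amount))
            return self.interpreter.execute(vm_id, instruction)

    monitor = Dispatcher()
    print(monitor.dispatch("vm-01", "alloc memory 512"))
    print(monitor.dispatch("vm-01", "halt"))
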
UNIT-II SERVER AND DESKTOP VIRTUALIZATION
Virtual machine defined

A VM is a virtualized instance of a computer that can perform almost all of the same
functions as a computer, including running applications and operating systems.
Virtual machines run on a physical machine and access computing resources from software
called a hypervisor. The hypervisor abstracts the physical machine’s resources into a pool that
can be provisioned and distributed as needed, enabling multiple VMs to run on a single
physical machine.
How multiple virtual machines work

Multiple VMs can be hosted on a single physical machine, often a server, and then
managed using virtual machine software. This provides flexibility for compute resources
(compute, storage, network) to be distributed among VMs as needed, increasing overall
efficiency. This architecture provides the basic building blocks for the advanced virtualized
resources we use today, including cloud computing.

Types of Virtual Machines

1. System virtual machines

These kinds of VMs are completely virtualized to replace a real machine. The way they
virtualize depends on a hypervisor such as VMware ESXi, which can operate on an operating
system or bare hardware.
The hardware resources of the host can be shared and managed by more than one virtual
machine. This makes it possible to create more than one environment on the host system. Even
though these environments are on the same physical host, they are kept separate. This lets
several single-tasking operating systems share resources concurrently.

Different VMs on a single physical host can share memory by applying memory overcommitment techniques. In this way, memory pages with identical content can be shared among multiple virtual machines on the same host, which is especially helpful for read-only pages.
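
On Linux/KVM hosts this kind of page sharing is typically provided by Kernel Samepage Merging (KSM). The short sketch below (assuming a Linux host whose kernel exposes the standard /sys/kernel/mm/ksm counters) reads those counters to estimate how much memory deduplication is saving:

    from pathlib import Path

    KSM = Path("/sys/kernel/mm/ksm")  # standard KSM sysfs interface on Linux

    def read_counter(name):
        f = KSM / name
        return int(f.read_text()) if f.exists() else 0

    pages_sharing = read_counter("pages_sharing")  # guest pages backed by a shared copy
    pages_shared = read_counter("pages_shared")    # deduplicated copies actually kept

    # Pages are normally 4 KiB on x86, so the saving is roughly the difference.
    saved_mib = (pages_sharing - pages_shared) * 4096 / (1024 * 1024)
    print(f"KSM is currently saving roughly {saved_mib:.1f} MiB across co-located VMs")
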
Advantages of system VMs are:
System virtual machines can provide a simulated hardware environment, either via emulation or just-in-time compilation, that is distinct from the host's instruction set architecture (ISA).
The virtual machine software that users choose comes packed with features such as application provisioning and packaging, high availability, maintenance, and disaster recovery. This makes virtual machine tools more straightforward to use and makes it possible for many operating systems to operate effectively on a single host.
The presence of a virtual partition allows for multiple OS environments to co-exist on
the same primary drive. This partition allows for sharing files generated from the host or the
guest operating environment. Other operations, such as software installations, wireless connections, and remote tasks such as printing, can be performed efficiently in the host’s or guest’s environment.
It allows developers to perform tasks without changing operating systems. All the
generated data is stored on the host’s hard drive.

Disadvantages of system virtual machines are:


When virtual machines indirectly access the host’s hard drive, they become less
efficient than actual machines. Depending on the system, the performance of several virtual
machines running on the same host can be different. The speed of execution and the protection
against malware can also vary, leading to unstable behaviour. Users can mitigate this problem
using virtual machine software that provides temporal isolation.
The guest operating system may not necessarily be compatible with the malware
protections provided by the host resources. Therefore, it may require additional separate
software leading to increased costs.

Process virtual machines


These virtual machines are sometimes called application virtual machines or Managed
Runtime Environments (MREs). They run as standard applications inside the host’s operating
system, supporting a single process. The VM is created when the process starts and destroyed when it exits. It offers a platform-independent programming environment to the
process, allowing it to execute similarly on any platform.

Process virtual machines are implemented using interpreters and provide high-level abstractions. They are often used with the Java programming language, which uses the Java Virtual Machine to execute programs. Two more examples of process VMs are the Parrot virtual machine and the .NET Framework, which runs on the Common Language Runtime VM. Additionally, they operate as an abstraction layer for whatever computer language is being used.
A process virtual machine may, under some circumstances, take on the role of an abstraction layer between its users and the underlying communication mechanisms of a computer cluster. In place of a single process, such a process VM consists of one process for each physical computer that is part of the cluster.

This special case of process VMs enables programmers to concentrate on the algorithm instead of on the communication mechanisms provided by the interconnect and the OS.

These VMs are based on an existing language, so they don't come with a specific programming language of their own; instead, such systems provide bindings for several programming languages, such as Fortran and C. In contrast to other process VMs, they can access all OS services and aren't limited by the system model. Therefore, they cannot be strictly categorized as virtual machines.

Server Virtualization

Server Virtualization is the process of dividing a physical server into several virtual
servers, called virtual private servers. Each virtual private server can run independently. The
concept of Server Virtualization is widely used in IT infrastructure to minimize costs by increasing the utilization of existing resources.

Types of Server Virtualization


1. Hypervisor

In Server Virtualization, the hypervisor plays an important role. It is a layer between the operating system (OS) and the hardware. There are two types of hypervisors.

Type 1 hypervisor (Bare metal or native hypervisors)


Type 2 hypervisor (Hosted or Embedded hypervisors)

The hypervisor is mainly used to perform various tasks such as allocating physical hardware resources to several smaller independent virtual machines, called "guests," on the host machine.

2. Full Virtualization
Full Virtualization uses a hypervisor to directly communicate with the CPU and
physical server. It provides the best isolation and security mechanism to the virtual machines.
The biggest disadvantage of using hypervisor in full virtualization is that a hypervisor has its
own processing needs, so it can slow down the application and server performance. VMWare
ESX server is the best example of full virtualization.

3. Para Virtualization

Para Virtualization is quite similar to Full Virtualization. The advantages of using this virtualization are that it is easier to use, it offers enhanced performance, and it does not require emulation overhead. Xen and UML primarily use Para Virtualization. The difference between full and para virtualization is that, in para virtualization, the hypervisor does not need as much processing power to manage the OS.

4. Operating System Virtualization

Operating system virtualization is also called system-level virtualization. It is a server virtualization technology that divides one operating system into multiple isolated user spaces called virtual environments. The biggest advantage of using this type of virtualization is that it reduces the use of physical space, so it saves money. Linux OS Virtualization and Windows OS Virtualization are the types of Operating System virtualization. FreeVPS, OpenVZ, and Linux VServer are some examples of System-Level Virtualization.

5. Hardware Assisted Virtualization

Hardware Assisted Virtualization was introduced by AMD and Intel. It is also known
as Hardware virtualization, AMD virtualization, and Intel virtualization. It is designed to
increase the performance of the processor. The advantage of using Hardware Assisted
Virtualization is that it requires less hypervisor overhead.

6. Kernel-Level Virtualization

Kernel-level virtualization is one of the most important types of server virtualization. It is an open-source form of virtualization that uses the Linux kernel as a hypervisor. The advantage of using kernel virtualization is that it does not require any special administrative software and has very little overhead. User Mode Linux (UML) and the Kernel-based Virtual Machine (KVM) are some examples of kernel virtualization.

Advantages of Server Virtualization

1. Independent Restart

In Server Virtualization, each virtual server can be restarted independently without affecting the working of the other virtual servers.

2. Low Cost
Server Virtualization can divide a single server into multiple virtual private servers, so
it reduces the cost of hardware components.

3. Disaster Recovery

Disaster Recovery is one of the best advantages of Server Virtualization. In Server Virtualization, data can be moved easily and quickly from one server to another, and this data can be stored and retrieved from anywhere.

4. Faster deployment of resources

Server virtualization allows us to deploy our resources in a simpler and faster way.

5. Security

It allows users to store their sensitive data inside the data centres.

Disadvantages of Server Virtualization


The biggest disadvantage of server virtualization is that when the server goes offline,
all the websites that are hosted by the server will also go down.
There is no way to measure the performance of virtualized environments.
It consumes a large amount of RAM.
It is difficult to set up and maintain.
Some core applications and databases do not support virtualization.
It requires extra hardware resources.

Uses of Server Virtualization


Server Virtualization is used in the testing and development environment.
It improves the availability of servers.
It allows organizations to make efficient use of resources.
It reduces redundancy without purchasing additional hardware components.

Business Cases of Server Virtualization

Business challenge: We have many stand-alone HMI workstations running different versions of the HMI client application and are looking for a simpler and more cost-effective approach to managing our applications and client operating systems.

Business solution: Investment in virtualization technology and hardware that would provide the foundation for future operations for managing the application and client hardware infrastructure.

Business benefits: Allowed the customer to introduce zero clients running ACP ThinManager for device management on a server-based virtual infrastructure, in which application delivery was also made simpler through server-based deployment via Citrix/RDS.
Business Benefits of Server Virtualization

Virtualization is not just an IT trend. It is also not new, but it is new in many
organizations, as companies of all sizes invest in virtualization technologies to reap its many
benefits: server and desktop provisioning, reduction in physical servers, increased uptime and
availability, better disaster recovery, energy savings…and the list goes on.

Switching to virtualization means that the workloads happening on servers are not tied
to a specific piece of physical hardware and that multiple virtual workloads can occur
simultaneously on the same piece of machinery. The immediate benefits of virtualization
include higher server utilization rates and lower costs, but there are more sophisticated
advantages as well.

Server virtualization has proven itself to be a revolutionary technology solution for IT management, presenting capabilities that would never be possible within a physical infrastructure. From an economic standpoint, the benefits of server virtualization are focused on cost
savings because it allows multiple applications to be installed on a single physical server. So
how exactly can your company benefit from a virtual infrastructure? Let’s take a look at the
top five ways:

1. Reduced Hardware Costs

It is said that humans theoretically only use 10% of their brain capacity; similarly, most of the
servers in a strictly physical environment are heavily under-utilized, using an estimated 5-15%
of their capacity. When you implement a virtualized server / cloud computing approach,
hardware utilization is increased because one physical server can now hold multiple virtual
machines. Applications no longer need their own server because each virtual machine on the
physical server now runs them. In 2011, IDC reported a 40% reduction in hardware and
software costs for IT departments that adopted a server virtualization strategy.

2. Faster Server Provisioning and Deployment

Server virtualization enables system provisioning and deployment within minutes, allowing you to clone an existing virtual machine without the hours and costs normally spent
procuring and installing a new physical server. Companies with virtual environments already
look back and cringe at the grueling process of filling out a purchase order, waiting for the
server to arrive and then waiting hours for the operating system and applications to finish
installing.

Time and cost add up substantially, not to mention the growing number of racks and
cables you would have to purchase to accommodate for the increasing number of physical
servers. Virtualization is most certainly necessary for most businesses to keep up with the
explosion of data resources needed to keep pace with competitors.

3. Greatly Improved Disaster Recovery


Perhaps the greatest benefit of server virtualization is the capability to move a virtual
machine from one server to another quickly and safely. Backing up critical data is done quickly
and effectively because your company can effortlessly create a replication site. Most enterprise
virtualization platforms contain software that helps automate the failover during a disaster. The
software also allows you to test a disaster recovery failover; think of it as your data center’s own fire escape plan. If a data center disaster occurs, your infrastructure is already set up to take appropriate measures for a swift and safe recovery. Try achieving that with arrays of physical servers; now that’s a real disaster.

4. Increased Productivity

Fewer physical servers means there are fewer of them to maintain and manage. Applications that used to take days or weeks to provision are now ready in minutes. This
leaves your IT/OT staff more time to spend on more productive tasks such as driving new
business initiatives, cutting expenses and raising revenue.

5. Energy Cost Savings

Among other server virtualization benefits, the migration of physical servers to virtual
machines allows you to consolidate them onto fewer physical servers. The result? Cooling and
power costs are significantly reduced, which means not only will you be “going green,” but
you will also have more green to spend elsewhere. According to VMware, server consolidation
reduces energy costs by up to 80%. Another major plus is the ability to power down servers
without affecting applications or users.

Server virtualization brings positive transformations, such as reduced hardware costs, improved server provisioning and deployment, better disaster recovery solutions, efficient and
economic use of energy, and increased staff productivity. Still, it may seem like a daunting task
to move to a virtual infrastructure.

Uses of virtual server consolidation

Server consolidation in cloud computing refers to the process of combining multiple servers into a single, more powerful server or cluster of servers. This can be done in order to
improve the efficiency and cost-effectiveness of the cloud computing environment. Server
consolidation is typically achieved through the use of virtualization technology, which allows
multiple virtual servers to run on a single physical server. This allows for better utilization of
resources, as well as improved scalability and flexibility. It also allows organizations to reduce
the number of physical servers they need to maintain, which can lead to cost savings on
hardware, power, and cooling.
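
As a back-of-the-envelope sketch of why consolidation pays off (every number below is an illustrative assumption, not data from this document), you can estimate how many virtualization hosts a lightly loaded server estate really needs:

    import math

    physical_servers = 30        # existing lightly loaded machines (assumed)
    avg_utilization = 0.10       # each uses about 10% of its capacity (assumed)
    target_utilization = 0.70    # how heavily we are willing to load a host (assumed)

    # Total useful work, expressed in "fully busy server" units.
    total_load = physical_servers * avg_utilization

    # Hosts needed once that load is packed onto shared, virtualized hardware.
    hosts_needed = math.ceil(total_load / target_utilization)

    print(f"Workload is equivalent to {total_load:.1f} fully busy servers")
    print(f"About {hosts_needed} virtualization hosts could replace {physical_servers} physical servers")
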
Benefits of Server Consolidation
Cost savings: By consolidating servers, organizations can reduce the number of physical
servers they need to maintain, which can lead to cost savings on hardware, power, and cooling.
Improved performance: Consolidating servers can also improve the performance of the cloud
computing environment. By using virtualization technology, multiple virtual servers can run
on a single physical server, which allows for better utilization of resources. This can lead to
faster processing times and better overall performance.

Scalability and flexibility: Server consolidation can also improve the scalability and
flexibility of the cloud environment. By using virtualization technology, organizations can
easily add or remove virtual servers as needed, which allows them to more easily adjust to
changing business needs.
Management simplicity: Managing multiple servers can be complex and time-consuming.
Consolidating servers can help to reduce the complexity of managing multiple servers, by
providing a single point of management. This can help organizations to reduce the effort and
costs associated with managing multiple servers.
Better utilization of resources: By consolidating servers, organizations can improve the
utilization of resources, which can lead to better performance and cost savings. Server
consolidation in cloud computing is a process of combining multiple servers into a single, more
powerful server or cluster of servers, in order to improve the efficiency and cost-effectiveness
of the cloud computing environment.

Virtualization Platform

A big part of a virtualized system’s success depends on implementation architecture choices. Here we’ll take a look at how to choose a virtualization platform.

Hardware Environment

These are the servers, storage and networking components of the data center being
virtualized. It is possible to reuse the hardware already in place. In fact, virtualization supports
this by providing a consistent interface to the application to be deployed even if the hardware
differs. The hardware environment chosen plays a crucial role in determining the software
platform to be used.
Software Platform

The software layer abstracts the hardware environment to provide the hosted
environments with an idealized environment. Distinct virtualization software has unique
hardware requirements so when existing hardware is to be used, software choices are limited
by compatibility with the hardware. Even when compatible hardware is used, the specifics
influence performance. If exceptionally high performance is critical, hardware should be
chosen very carefully.

Virtualization Platforms

VMware’s vSphere is the celebrity in virtualization’s climate-controlled data room, but it is not the only option. There are other platforms available for virtualization, including
XenSource and Hyper-V, which are stable, enterprise-ready products backed by established
players and that provide long-term support.

The platform to use should not be decided based on cost alone, as sub-optimal solutions
will eventually necessitate additional expenditures, resulting in higher long-term costs. There
is no best solution here as finding the right fit involves individual considerations. That said,
there are a few pointers when it comes to choosing the best possible virtualization platform.

Features to Look for in a Virtualization Platform

Reduction of Capital Expenditures (CAPEX): Typically servers have low utilization levels
averaging around 15 percent. Virtualization can increase throughput four-fold. This means that
a company can use less hardware and reduce energy costs. Note that this must be weighed
against the total cost of ownership of the virtualization platform.

System Consolidation: By running multiple applications and operating systems, an organization reduces the number of physical servers required.

Server Consolidation: This is an approach to the efficient usage of computer server resources
in order to reduce the total number of servers or server locations that an organization requires.

Ease of System Management: Easy management is how rapidly new services such as
platform as a service (PaaS), infrastructure as a service (IaaS), and software as a service (SaaS)
can be deployed. It also refers to agility and speed in deploying new application stacks.

Platform Maturity: Virtualization is an expensive long-term investment which will be wasted if vendors are switched. As such, new entrants to the market may not be a good option for
mission-critical data centers.

Hardware Compatibility: Virtualization platforms have issues with hardware, including outright incompatibility and sub-optimal performance. Hosted hypervisors tend to have the greatest support for a range of hardware types, while bare-metal virtualizers are typically
choosy about supported hardware. If you are planning to repurpose existing hardware, you
should check if your platform of choice supports that hardware.
Total Cost of Ownership: Total cost of ownership is not just the price of purchasing a
virtualization platform. Additional expenses such as guest OS licenses, training and support
plus hardware should be factored in. One cost component that is often hidden is the purchase
of advanced management tools. As a rule, these are not included in the initial purchase price.
For instance, if you go for VMware you are likely to need vCenter Server as the installation
scales up while System Center Virtual Machine Manager is the comparable offering from
Microsoft to go with its Hyper-V. It might also be necessary to invest in third-party tools for
auditing, capacity planning or reporting.
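
To make the hidden cost components concrete, the short sketch below simply adds up illustrative TCO line items (every figure is a placeholder assumption, not vendor pricing):

    # Illustrative total-cost-of-ownership breakdown; all figures are assumptions.
    tco_components = {
        "hypervisor licenses": 20000,
        "guest OS licenses": 12000,
        "advanced management tools": 8000,
        "training and support": 6000,
        "server hardware": 25000,
        "third-party auditing and capacity tools": 4000,
    }

    total = sum(tco_components.values())
    for item, cost in tco_components.items():
        print(f"{item:<40} {cost:>10,}")
    print(f"{'estimated total cost of ownership':<40} {total:>10,}")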

Technical Differences: When virtualizing, it is also important to consider technical concerns early on in the process in order to fully realize the benefits mentioned above without compromising efficiency. There are two main approaches to virtualizing:

Type 1 Hypervisor: In bare-metal virtualization, the hypervisor is the first layer of software on a clean x86 system. This provides more efficiency, performance, and stability, and greater scalability, than hosted architectures. vSphere is a good example of this approach.

Type 2 Hypervisor: In hosted virtualization, the virtualization layer is implemented on top of a standard operating system and supports the widest range of hardware. VMware Workstation and Oracle VirtualBox are common examples of this approach (Microsoft’s Hyper-V, although installed through Windows, actually runs as a Type 1 hypervisor).

The top virtualization platforms are more or less evenly matched as far as features are concerned. However, features are not the only consideration when making a hypervisor choice. Some of the other important factors are hardware compatibility and cost of ownership. System support, stability and scalability deserve a long hard look as well. The key is to assess all of a platform’s features and costs against your company’s needs and budget. The key isn’t just to virtualize; for true success, companies must find the right fit.

What is desktop virtualization

Desktop virtualization replaces traditional physical desktop computing environments with virtual computing environments. It helps create and store multiple user desktop environments on a single host, residing in the cloud or a data center.

Desktop virtualization is the one-stop solution to many problems that organizations face today,
including regulatory compliance, security, cost control, business continuity and manageability.

Types of desktop virtualization

1.Virtual Desktop Infrastructure

Virtual desktop infrastructure (VDI) uses host-based virtual machines (VMs) to run the
operating system. It delivers non-persistent and persistent virtual desktops to all connected
devices. With a non-persistent virtual desktop, employees can access a virtual desktop from a
shared pool, whereas in a persistent virtual desktop, each user gets a unique desktop image that
can be customized with data and applications. VDI gives each user their virtual machine and
supports only one user per operating system.
2. Remote Desktop Services

Remote desktop services (RDS) or remote desktop session host (RDSH) are beneficial
where only limited applications require virtualization. They allow users to remotely access
Windows applications and desktops using the Microsoft Windows Server operating system.
RDS is a more cost-effective solution, since one Windows server can support multiple users.

3.Desktop-as-a-Service (DaaS)

Desktop-as-a-service (DaaS) is a flexible desktop virtualization solution that uses cloud-based virtual machines backed by a third-party provider. Using DaaS, organizations can outsource desktop virtualization solutions that help a user to access computer applications and desktops from any endpoint platform or device.

To understand which type of solution suits your business, you should:

Identify the costs associated with setting up the infrastructure and deployment of virtual desktops
Determine whether you have the required resources and expertise to adopt these solutions
Determine the infrastructure control capabilities of the virtualization providers
Determine the level of elasticity and agility you want in your desktop virtualization solution
Why do you need desktop virtualization for your business

Beyond saving money and time, desktop virtualization offers various other benefits for
organizations. These include:

Better security and control: Virtual desktops store data in a secure environment and allow
central management of confidential information to prevent data leaks. Desktop virtualization
solutions restrict the users from saving or copying data to any source other than its servers,
making it hard to get crucial company information out. Reliable virtualization solution
providers offer multiple layers of cloud safeguards such as the highest quality encryption,
switches, routers and constant monitoring to eliminate threats and protect users’ data.

Ease of maintenance: Unlike traditional computers, virtual desktops are far easier to maintain.
End-users don’t need to update or download the necessary programs individually, since these are centrally managed by the IT department.

IT admin can also easily keep track of the software assets through a virtual desktop. Once a
user logs off from the virtual desktop it can reset, and any customizations or software programs
downloaded on the desktop can be easily removed. It also helps prevent system slowdown
caused by customizations and software downloads.

Remote work: Since virtual desktops are connected to a central server, provision for new
desktops can be made in minutes so they are instantly available to new users. Instead of
manually setting up a new desktop for new employees, IT admins can deploy a ready-to-go
virtual desktop to the new user’s device using desktop virtualization. The users can access and
interact with the operating systems and applications from virtually anywhere with an internet
connection.

Resource management: Resources for desktop virtualization are located in a data center,
which allows the pooling of resources for better efficiency. With desktop virtualization, IT
admins can maximize their hardware investment returns by consolidating the majority of their
computing in a data center. This helps eliminate the need to push application and operating
system updates to the end-user machines.

Organizations can deploy less expensive and less powerful devices to end-users since
they are only used for input and output. IT departments can save money and resources that
would otherwise be used to deploy more expensive and powerful machines.

Reduced costs: Desktop virtualization solutions help shift the IT budget from capital expenses
to operating expenses. Organizations can prolong the shelf life of their traditional computers
and other less powerful machines by delivering compute-intensive applications via VMs hosted
on a data center.

IT departments can also save significantly on software licensing costs, as the software only needs to be installed and updated on a single, central server instead of multiple end-user workstations. Savings on energy bills, capital costs, licensing costs, IT support costs and upfront purchasing costs can reduce overall IT operating costs substantially – by some estimates, by almost 70%.

Increased employee productivity: Employees’ productivity may increase when they can
easily access the organization’s computing resources from any supported device, anywhere and
anytime. Employees can work in a comfortable environment and still be able to access all
applications and software programs that would otherwise only be available on their office
desktops. Desktop virtualization also allows for seamless, faster employee onboarding and provisioning for remote workers.

Improved flexibility: Desktop virtualization reduces the need to configure desktops for each
end-user. These solutions help organizations manage and customize desktops through a single
interface and eliminate the need to personalize each desktop. Desktop virtualization lets the
administrator set permissions for access to programs and files already stored on the central
server with just a few clicks. Employees can access the required programs from anywhere,
offering them better work flexibility.

UNIT-III NETWORK VIRTUALIZATION


Introduction to Network Virtualization:
What is network virtualization
Network Virtualization (NV) refers to abstracting network resources that were traditionally delivered in hardware into software. NV can combine multiple physical networks into one virtual, software-based network, or it can divide one physical network into separate, independent virtual networks.
Network virtualization software allows network administrators to move virtual machines
across different domains without reconfiguring the network. The software creates a network
overlay that can run separate virtual network layers on top of the same physical network fabric.

Why network virtualization


Network virtualization is rewriting the rules for the way services are delivered, from
the software-defined data center (SDDC), to the cloud, to the edge. This approach moves
networks from static, inflexible, and inefficient to dynamic, agile, and optimized. Modern
networks must keep up with the demands for cloud-hosted, distributed apps, and the increasing
threats of cybercriminals while delivering the speed and agility you need for faster time to
market for your applications. With network virtualization, you can forget about spending days
or weeks provisioning the infrastructure to support a new application. Apps can be deployed
or updated in minutes for rapid time to value.

How does network virtualization work


Network virtualization decouples network services from the underlying hardware and
allows virtual provisioning of an entire network. It makes it possible to programmatically
create, provision, and manage networks all in software, while continuing to leverage the
underlying physical network as the packet-forwarding backplane. Physical network resources,
such as switching, routing, firewalling, load balancing, virtual private networks (VPNs), and
more, are pooled, delivered in software, and require only Internet Protocol (IP) packet
forwarding from the underlying physical network.
Network and security services in software are distributed to a virtual layer (hypervisors) in the data center and “attached” to individual workloads, such as your virtual machines (VMs)
or containers, in accordance with networking and security policies defined for each connected
application. When a workload is moved to another host, network services and security policies
move with it. And when new workloads are created to scale an application, necessary policies
are dynamically applied to these new workloads, providing greater policy consistency and
network agility.
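
The following toy Python model (not any vendor’s API) illustrates the idea that policies are attached to the workload itself, so they travel with it when it migrates between hosts:

# Toy model: a security policy is attached to the workload object itself,
# so migrating the workload to another host carries the policy with it.
class Workload:
    def __init__(self, name, policy):
        self.name = name
        self.policy = policy          # e.g. {"allow": ["tcp/443"], "deny": ["any"]}

class Host:
    def __init__(self, name):
        self.name = name
        self.workloads = []

    def admit(self, workload):
        # On admission, the host programs its virtual layer from the
        # workload's own policy rather than from per-port switch config.
        self.workloads.append(workload)
        print(f"{self.name}: enforcing {workload.policy} for {workload.name}")

def migrate(workload, src, dst):
    src.workloads.remove(workload)
    dst.admit(workload)               # policy moves with the workload

web = Workload("web-vm", {"allow": ["tcp/443"], "deny": ["any"]})
host_a, host_b = Host("host-a"), Host("host-b")
host_a.admit(web)
migrate(web, host_a, host_b)          # same policy now enforced on host-b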

Benefits of network virtualization


Network virtualization helps organizations achieve major advances in speed, agility,
and security by automating and simplifying many of the processes that go into running a data
center network and managing networking and security in the cloud. Here are some of the key
benefits of network virtualization:
Reduce network provisioning time from weeks to minutes
Achieve greater operational efficiency by automating manual processes
Place and move workloads independently of physical topology
Improve network security within the data center

Network Virtualization Example


One example of network virtualization is virtual LAN (VLAN). A VLAN is a
subsection of a local area network (LAN) created with software that combines network devices
into one group, regardless of physical location. VLANs can improve the speed and performance
of busy networks and simplify changes or additions to the network.
Another example is network overlays. There are various overlay technologies. One
industry-standard technology is called virtual extensible local area network (VXLAN).
VXLAN provides a framework for overlaying virtualized layer 2 networks over layer 3
networks, defining both an encapsulation mechanism and a control plane. Another is generic
network virtualization encapsulation (GENEVE), which takes the same concepts but makes
them more extensible by being flexible to multiple control plane mechanisms.
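
As a small illustration of the encapsulation idea, the Python sketch below builds the 8-byte VXLAN header defined in RFC 7348, where a 24-bit VXLAN Network Identifier (VNI) marks which virtual layer 2 segment a frame belongs to; the surrounding UDP/IP underlay packet is omitted:

import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header defined in RFC 7348.

    Layout: 8-bit flags (I flag = 0x08 marks a valid VNI), 24 reserved bits,
    a 24-bit VXLAN Network Identifier (VNI), then 8 more reserved bits.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags_and_reserved = 0x08 << 24            # I flag set, reserved bits zero
    vni_and_reserved = vni << 8                # VNI in the upper 24 bits
    return struct.pack("!II", flags_and_reserved, vni_and_reserved)

# Two tenants share the same physical underlay but live in separate overlays.
print(vxlan_header(5001).hex())   # 0800000000138900
print(vxlan_header(5002).hex())   # 0800000000138a00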

VMware NSX Data Center – Network Virtualization Platform


VMware NSX Data Center is a network virtualization platform that delivers networking
and security components like firewalling, switching, and routing that are defined and consumed
in software. NSX takes an architectural approach built on scale-out network virtualization that
delivers consistent, pervasive connectivity and security for apps and data wherever they reside,
independent of underlying physical infrastructure.

ADVANTAGES of Network Virtualization:

Reduces hardware costs – With network virtualization, companies save money by using less
physical equipment. This approach uses software to handle tasks that once needed lots of
machines.

Enhances network flexibility – Changing network setups becomes easier and quicker, as
virtual networks can be adjusted without altering physical wires or devices.

Simplifies management tasks – Keeping track of a network and making changes is less
complicated because you can manage everything from one place, instead of handling lots of
separate devices.

Improves disaster recovery – If something goes wrong, like a natural disaster, network
virtualization helps businesses get their systems back up and running quickly because they can
move their network to another location virtually.

Supports multiple applications – Different programs and services can run on the same
physical network without interfering with each other, making it easier for businesses to use
various applications smoothly.
Disadvantages of Network Virtualization

Complex setup process – Setting up a virtual network can be complicated. It often requires
specialized knowledge and careful planning to ensure everything works together correctly.

Increased management overhead – Keeping track of a virtual network needs extra effort.
More components and layers mean more things to manage, which can be time-consuming.

Potential security vulnerabilities – When you create virtual networks, there might be more
chances for hackers to find a way in. This can happen if the system isn’t set up with strong
protections.

Compatibility issues with hardware – Sometimes, the hardware you have might not work
well with virtual networks. This can lead to extra costs if you need to buy new equipment that’s
compatible.

Latency and performance concerns – Virtual networks can sometimes slow down data. This
is because information has to travel through more steps before it gets to where it’s going.
Network functions virtualization
Network functions virtualization (NFV) is the replacement of network appliance
hardware with virtual machines. The virtual machines use a hypervisor to run networking
software and processes such as routing and load balancing.

Why network functions virtualization


NFV allows for the separation of communication services from dedicated hardware,
such as routers and firewalls. This separation means network operations can provide new
services dynamically and without installing new hardware. Deploying network components
with network functions virtualization takes hours instead of months like with traditional
networking. Also, the virtualized services can run on less expensive, generic servers instead of
proprietary hardware.
Additional reasons to use network functions virtualization include:

Pay-as-you-go: Pay-as-you-go NFV models can reduce costs because businesses pay only for
what they need.

Fewer appliances: Because NFV runs on virtual machines instead of physical machines, fewer
appliances are necessary and operational costs are lower.

Scalability: Scaling the network architecture with virtual machines is faster and easier, and it
does not require purchasing additional hardware.

How does network functions virtualization work

Essentially, network functions virtualization replaces the functionality provided by individual hardware networking components. This means that virtual machines run software that accomplishes the same networking functions as the traditional hardware. Load balancing, routing and firewall security are all performed by software instead of hardware components. A hypervisor or software-defined networking controller allows network engineers to program all of the different segments of the virtual network, and even automate the provisioning of the network. IT managers can configure various aspects of the network functionality through one pane of glass, in minutes.
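
To make “load balancing performed by software” concrete, here is a deliberately tiny Python sketch of a round-robin load-balancing function running as plain software on a generic server; it is a toy, not a production VNF or any particular vendor’s implementation:

from itertools import cycle

# Toy virtual network function: round-robin load balancing done purely in
# software, instead of in a dedicated hardware appliance.
class RoundRobinLoadBalancer:
    def __init__(self, backends):
        self._pool = cycle(backends)

    def route(self, request_id):
        backend = next(self._pool)
        return f"request {request_id} -> {backend}"

lb = RoundRobinLoadBalancer(["10.0.0.11", "10.0.0.12", "10.0.0.13"])
for i in range(5):
    print(lb.route(i))
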
Benefits of network functions virtualization
Many service providers feel that the benefits of network functions virtualization
outweigh the risks. With traditional hardware-based networks, network managers have to
purchase dedicated hardware devices and manually configure and connect them to build a
network. This is time-consuming and requires specialized networking expertise.
NFV allows virtual network functions to run on a standard generic server, controlled by a hypervisor, which is far less expensive than purchasing proprietary hardware
devices. Network configuration and management is much simpler with a virtualized network.
Best of all, network functionality can be changed or added on demand because the network
runs on virtual machines that are easily provisioned and managed.

Risks of network functions virtualization


NFV makes a network more responsive and flexible, and easily scalable. It can
accelerate time to market and significantly reduce equipment costs. However, there are security
risks, and network functions virtualization security concerns have proven to be a hurdle for
wide adoption among telecommunications providers. Here are some of the risks of
implementing network functions virtualization that service providers need to consider:
Physical security controls are not effective: Virtualizing network components increases their
vulnerability to new kinds of attacks compared to physical equipment that is locked in a data
center.
Malware is difficult to isolate and contain: It is easier for malware to travel among virtual
components that are all running off of one virtual machine than between hardware components
that can be isolated or physically separated.
Network traffic is less transparent: Traditional traffic monitoring tools have a hard time
spotting potentially malicious anomalies within network traffic that is traveling east-west
between virtual machines, so NFV requires more fine-grained security solutions.
Complex layers require multiple forms of security: Network functions virtualization
environments are inherently complex, with multiple layers that are hard to secure with blanket
security policies.

NFV architecture
In a traditional network architecture, individual proprietary hardware devices such as
routers, switches, gateways, firewalls, load balancers and intrusion detection systems all carry
out different networking tasks. A virtualized network replaces these pieces of equipment with
software applications that run on virtual machines to perform networking tasks.
An NFV architecture consists of three parts:
Centralized virtual network infrastructure: An NFV infrastructure may be based on either
a container management platform or a hypervisor that abstracts the compute, storage and
network resources.
Software applications: Software replaces the hardware components of a traditional network
architecture to deliver the different types of network functionality.
Framework: A framework is needed to manage the infrastructure and provision network
functionality.

History of network functions virtualization


The European Telecommunications Standards Institute (ETSI), a consortium of service
providers including AT&T, China Mobile, BT Group, Deutsche Telekom and many others,
first presented the idea of a network functions virtualization standard at the OpenFlow World
Congress in 2012. These service providers had been looking for a way to accelerate the
deployment of network services.
Launching new network services used to be a cumbersome process that required space
and power for additional hardware boxes. As energy and space costs increased and the number
of skilled networking hardware engineers decreased, the ETSI committee turned to network
functions virtualization to solve both of these problems. NFV eliminates the need for physical
space for hardware appliances, and does not require intensive networking experience to
configure and manage.
Several open source projects and standards bodies are working on developing NFV standards, including ETSI, Open Platform for NFV, Open Network Automation Platform, Open Source MANO and MEF (formerly the Metro Ethernet Forum). So many different organizations with competing proposals
for standards have made it challenging for service providers to get comfortable with network
functions virtualization. Still, it is growing in popularity because of the quickly expanding
complexity and requirements of enterprise networks today.

NFV vs. SDN


While NFV separates networking services from dedicated hardware appliances, software-defined networking (SDN) separates network control functions such as routing, policy definition and applications from network forwarding functions.
control plane decides where to send traffic, enabling entire networks to be programmed through
one pane of glass. SDN allows network control functions to be automated, which makes it
possible for the network to respond quickly to dynamic workloads. A software-defined network
can sit on top of either a virtual network or a physical network, but a virtual network does not
require SDN to operate. Both SDN and NFV rely on virtualization technology to function.

Network virtualization tools


In order to hone your network virtualization skills, you need to take advantage of the diverse
tools available in the market, or even create your own if necessary. Commonly used tools for
network virtualization include virtualization platforms like VMware, Hyper-V, KVM, or
Docker that enable you to create and manage VMs and containers on physical servers. SDN
controllers such as OpenDaylight, ONOS, or Cisco ACI allow you to program and control the
behavior of network devices and services with software. NFV platforms like OpenStack, ETSI
MANO, or Cloudify enable you to deploy and orchestrate network functions as software
applications on virtualized infrastructure. Additionally, network testing and monitoring tools
such as Wireshark, Iperf, or PingPlotter can be used to measure and analyze network
performance and troubleshoot network issues.

Architectural Components of WAN


The exact design of your WAN architecture will vary based on your business requirements and the type of WAN in use. However, several architectural components are essential for WAN implementation; the common SD-WAN deployment models are outlined below.

1. On-Premises SD-WAN
The SD-WAN hardware resides on-site. Network operators have direct, secure access
and control over the network and hardware, offering enhanced security for sensitive
information.

2. Cloud-Enabled SD-WAN
This form of SD-WAN architecture connects to a virtual cloud gateway over the
internet, enhancing network accessibility and facilitating better integration and performance
with cloud-native applications.
3. Cloud-Enabled with Backbone SD-WAN
This architecture gives organizations an extra layer of security by connecting the
network to a nearby point of presence (PoP), like a data center. It allows traffic to shift from
the public internet to a private connection, enhancing network security and providing a fallback
in case of connection failures.

How Does SD-WAN Work


SD-WAN works by abstracting the underlying network infrastructure and separating
the control plane from the data plane. This allows for centralized control and management of
the network, regardless of the physical devices and connections being used.
There are two main components:
SD-WAN controller
SD-WAN edge devices
The controller acts as the centralized brain of the network, overseeing the entire network
and making intelligent decisions about traffic routing and optimization. The edge devices,
which can be physical or virtual appliances, are deployed at each branch location and connect
to the controller. They establish secure tunnels over various connections, such as MPLS,
broadband, or LTE, and forward traffic based on the policies set by the controller.

In simple terms, think of SD-WAN as a smart traffic cop for your internet connections.
It looks at all the different paths data can take to get from one place to another and chooses the
best route based on factors like speed, cost, and reliability. This helps businesses improve their
network performance, reduce costs, and make their network more flexible and responsive to
their needs.
Through the use of intelligent algorithms and real-time monitoring, SD-WAN can dynamically
route traffic across the most optimal paths, based on factors like latency, packet loss, and
available bandwidth. This ensures that critical applications receive the necessary resources and
guarantees a consistent user experience.
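
The sketch below shows the path-selection idea in Python; the metrics, weights and link names are invented for illustration, and real SD-WAN controllers use their own vendor-specific policies and measurements:

# Illustrative path scoring: pick the link with the best weighted combination
# of latency, packet loss, cost and spare bandwidth. Weights are arbitrary.
paths = {
    "mpls":      {"latency_ms": 18, "loss_pct": 0.1, "free_mbps": 40,  "cost": 3.0},
    "broadband": {"latency_ms": 35, "loss_pct": 0.5, "free_mbps": 300, "cost": 1.0},
    "lte":       {"latency_ms": 60, "loss_pct": 1.5, "free_mbps": 50,  "cost": 2.0},
}

def score(p):
    # Lower is better: penalise latency, loss and cost, reward spare bandwidth.
    return p["latency_ms"] + 40 * p["loss_pct"] + 5 * p["cost"] - 0.1 * p["free_mbps"]

best = min(paths, key=lambda name: score(paths[name]))
print(f"Routing latency-sensitive traffic over: {best}")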

SD-WAN architecture explained:

Key Features of SD-WAN

SD-WAN brings a multitude of essential features that set it apart from conventional WAN solutions:
1. Centralized management: SD-WAN provides a single interface to monitor and configure
the entire network, simplifying management tasks and reducing operational overhead.
2. Dynamic path selection: SD-WAN can intelligently select the best path for traffic based on
real-time conditions, ensuring optimal performance and reliability.
3. Quality of Service (QoS): SD-WAN enables prioritization of critical applications and traffic
types, ensuring that they receive the necessary bandwidth and performance.
4. Security: SD-WAN incorporates security features like encryption, segmentation, and threat
detection to protect network traffic and data.
5. Scalability: SD-WAN can easily scale to accommodate growing network demands, allowing
organizations to add new branches or increase bandwidth without significant hardware
investments.

Benefits of SD-WAN
SD-WAN offers several benefits for organizations:
Cost savings: By leveraging inexpensive broadband or LTE connections alongside
traditional MPLS, organizations can significantly reduce their WAN costs.
Improved performance: SD-WAN can optimize traffic routing and prioritize critical
applications, resulting in faster application performance and reduced latency.
Enhanced security: With built-in security features, SD-WAN improves data protection
and network security, reducing the risk of unauthorized access and data breaches.
Simplified management: SD-WAN provides a centralized management interface,
making it easier to configure, monitor, and troubleshoot the network.
Flexibility and agility: SD-WAN enables organizations to quickly adapt to changing
business needs, allowing for rapid deployment of new branches or services.

UNIT-IV STORAGE VIRTUALIZATION

Storage virtualization is a major component of storage servers, in the form of functional RAID levels and controllers. Applications and operating systems on the device can directly access the discs for writing. Local storage is configured by the controllers into RAID groups, and the operating system sees the storage based on that configuration. However, the storage is abstracted, and the controller determines how to write the data or retrieve the requested data for the operating system. Storage virtualization is important in various other forms:

 File servers: The operating system doesn't need to know how to write to physical
media; it can write data to a remote location.
 WAN Accelerators: WAN accelerators allow you to provide re-requested blocks at
LAN speed without affecting WAN performance. This eliminates the need to transfer
duplicate copies of the same material over WAN environments.
 SAN and NAS: Storage is presented to the operating system over the network. NAS (Network Attached Storage) presents the storage as file-level operations (like NFS). SAN (Storage Area Network) technologies present the storage as block-level storage (like Fibre Channel), and the operating system issues instructions as if the storage were a locally attached device.
 Storage Tiering: Using the storage pool concept as an entry point, storage tiering analyses the most frequently used data and allocates it to the best-performing storage pool. The least used data is stored in the storage pool with the lowest performance (a minimal sketch of this idea follows this list).
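
A minimal Python sketch of the tiering idea, assuming made-up block names, access counts and pool names:

# Toy tiering pass: promote the most frequently accessed blocks to the fast
# pool and demote the rest. Thresholds and pool names are illustrative only.
access_counts = {"blk-01": 420, "blk-02": 3, "blk-03": 188, "blk-04": 9}
FAST_POOL_SLOTS = 2

hot_first = sorted(access_counts, key=access_counts.get, reverse=True)
placement = {blk: ("ssd-pool" if i < FAST_POOL_SLOTS else "nearline-pool")
             for i, blk in enumerate(hot_first)}
print(placement)   # the hottest blocks land on the best-performing pool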

Advantages of Storage Virtualization


 Data is stored in more convenient locations, away from the specific host. In the case of a host failure, the data is not necessarily compromised.
 The storage devices can perform advanced functions like replication, deduplication, and disaster recovery functionality.
 By abstracting the storage level, IT operations become more flexible in how storage is provided, partitioned, and protected.

Drawbacks of Storage Virtualization


 Agility and scalability: Storage virtualization cannot always be a smooth implementation. It comes with a few technical hurdles, such as scalability.
 Data security: Data security also remains a concern. Though some may argue that
virtual machines and servers are more secure than physical ones, virtual
environments can attract new kinds of cyber-attacks.
 Manageability and integration: Virtualisation breaks the end-to-end view of your
data. The virtualized storage solution must be capable of integrating with existing
tools and systems.

Memory Virtualization:
 Memory virtualization gathers volatile random access memory (RAM) resources from
many data centre systems, making them accessible to any cluster member machine.
 Software performance issues commonly occur from physical memory limits. Memory
virtualization solves this issue by enabling networked, and hence distributed, servers to
share a pool of memory. Applications can utilise a vast quantity of memory to boost
system utilisation, enhance memory usage efficiency, and open up new use cases when
this feature is integrated into the network.
 Shared memory systems and memory virtualization solutions are different. Because
shared memory systems do not allow memory resources to be abstracted, they can only
be implemented with a single instance of an operating system (that is, not in a clustered
application environment).
 Memory virtualization differs from flash memory-based storage, like solid-state drives
(SSDs), in that the former replaces or enhances regular RAM, while the latter replaces
hard drives (networked or not).
 Products based on Memory Virtualization are: ScaleMP, RNA Networks Memory
Virtualization Platform, Oracle Coherence and GigaSpaces.

Implementations
Application level integration
In this case, applications running on connected computers connect to the memory pool directly
through an API or the file system.

Operating System Level Integration


In this case, the operating system connects to the memory pool, and makes pooled memory
available to applications.

Features

1. Virtual Address Space: The first stage in memory virtualization is creating a virtual address space for each programme that is mapped onto physical memory addresses. Because the combined virtual address spaces can be larger than the available physical memory, numerous applications can run simultaneously.

2. Page Tables: To manage the mapping between virtual and physical memory addresses, the operating system keeps track of the memory pages used by each application and their matching physical memory addresses. This data structure is known as a page table (see the sketch after this list).

3. Memory Paging: A page fault occurs when an application tries to access a memory page
that is not already in physical memory. The OS reacts to this by loading the requested page
from disc into physical memory and swapping out a page of memory from physical memory
to disc.
4. Memory compression: Different memory compression algorithms, which analyse the
contents of memory pages and compress them to conserve space, are used to make better
use of physical memory. A compressed page is instantly decompressed when a programme
wants to access it.

5. Memory Overcommitment: Memory overcommitment, in which applications are given access to more virtual memory than is physically available, is made possible by virtualization. Because not all memory pages are actively being used at once, the system can employ memory paging and compression to release physical memory as needed.

6. Memory Ballooning: Several virtualization technologies utilise a method called ballooning to further minimise memory use. This entails dynamically modifying the memory allotted to each virtual machine in accordance with its usage trends. The hypervisor can reclaim some of a virtual machine’s allocated memory if it is not being fully utilised and make it accessible to other virtual machines.
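
The following Python sketch ties points 1–3 together: a tiny page table maps virtual page numbers to physical frames, and a miss triggers a simulated page fault with a simple FIFO eviction (real operating systems use far more sophisticated replacement policies):

from collections import OrderedDict

# Minimal demand-paging sketch: virtual pages map to physical frames via a
# page table; a miss is a "page fault" that loads the page, evicting the
# oldest resident page when physical memory is full.
PHYSICAL_FRAMES = 2
page_table = OrderedDict()        # virtual page number -> physical frame

def access(vpn):
    if vpn in page_table:
        return f"hit: page {vpn} in frame {page_table[vpn]}"
    # Page fault: bring the page in from disk.
    if len(page_table) >= PHYSICAL_FRAMES:
        evicted_vpn, frame = page_table.popitem(last=False)   # swap out oldest page
    else:
        frame = len(page_table)
    page_table[vpn] = frame
    return f"page fault: loaded page {vpn} into frame {frame}"

for vpn in [0, 1, 0, 2, 3]:
    print(access(vpn))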

Benefits of Memory Virtualization


1. Increased Address Space: It allows processes to utilize a larger address space than what is
physically available, enabling the execution of larger programs or multiple programs
concurrently.

2. Memory Isolation: Each process has its own virtual memory space, which provides memory
isolation and protects processes from interfering with each other’s memory.

3. Simplified Memory Management: Memory virtualization simplifies memory management for both the operating system and application developers. It abstracts away the details of physical memory allocation and allows for more flexible memory usage.

4. Efficient Memory Utilization: By using techniques like demand paging and page
replacement, memory virtualization optimizes the usage of physical memory by keeping
frequently accessed pages in memory and swapping out less used pages to disk.

Types Of Storage Virtualization:


1. Block-Level: When you write to a hard drive on your desktop computer, it writes
directly to the hard disk. This is block-level storage. When you use virtualized block
storage, the server acts as a desktop computer and can access virtual disks, which act
like regular hard drives. This gives you benefits such as booting off of a block device
along with increased performance and scalability.

2. Object-Level: Data is not immediately stored on a disc when using object storage. Data buckets are used to abstract it instead. You can retrieve this data from your programme using API (Application Programming Interface) calls. This may be a more scalable option than block storage when dealing with big data volumes. Hence, after arranging your buckets, you won’t need to be concerned about running out of room.
3. File-Level: When someone wants another server to host their data, they use file server
software such as Samba and NFS. The files are kept in directories known as shares. As
a result, this eliminates the requirement for disc space management and permits
numerous users to share a storage device. File servers are useful for desktop PCs, virtual
servers, and servers.

4. Host-based: Access to the host or any connected devices is made possible via host-
based storage virtualization. The server's installed driver intercepts and reroutes the
input and output (IO) requests. These input/output (IO) requests are typically sent
towards a hard disc, but they can also be directed towards other devices, including a
USB flash drive. This kind of storage is mostly used for accessing actual installation CDs or DVDs, which makes it simple to install an operating system on the virtual computer.

5. Network-based: The host and the storage are separated by a fibre channel switch. The
virtualization takes place and the IO requests are redirected at the switch. No specific
drivers are needed for this approach to function on any operating system.

6. Array-based: All of the arrays' IO requests are handled by a master array. This makes
data migrations easier and permits management from a single location.

Block-Level Storage Virtualization:


 Block-level storage virtualization is a storage service that provides a flexible, logical
arrangement of storage capacity to applications and users while abstracting its physical
location. As a software layer, it intercepts I/O requests to that logical capacity and maps
them to the appropriate physical locations.
 Block-level storage virtualization is a technology that abstracts physical storage devices
into a virtualized layer, providing a unified and simplified view of storage resources.
This virtualization occurs at the block level, where data is divided into fixed-size blocks,
typically ranging from a few kilobytes to several megabytes. Each block is assigned a
unique address, and the storage virtualization layer manages the mapping of these
addresses to physical storage locations.

Key Aspects Of Block-Level Storage Virtualization

1. Abstraction and Pooling:

Abstraction: Block-level virtualization abstracts the underlying physical storage devices, presenting them as a single, logical storage pool.
Pooling: Multiple storage devices, such as hard disk drives (HDDs) or solid-state drives (SSDs), can be pooled together to create a larger and more flexible storage resource.

2. Storage Virtualization Layer: A storage virtualization layer sits between the applications
and the physical storage devices. It manages the allocation and retrieval of data blocks,
providing a transparent interface to the applications.

3. Uniform Addressing: Each block of data is assigned a unique address within the virtualized
storage space. This allows for consistent addressing regardless of the physical location of the
data.
4. Dynamic Provisioning: Block-level virtualization enables dynamic provisioning of storage
space. Storage can be allocated or de-allocated on-the-fly without disrupting ongoing
operations.

5. Data Migration and Load Balancing: The virtualization layer can facilitate data migration
across different storage devices without affecting the applications using the data. This helps in
load balancing and optimizing storage performance.

6. Improving Utilization and Efficiency: By pooling and dynamically allocating storage resources, block-level storage virtualization improves overall storage utilization, ensuring that available storage capacity is used efficiently.

7. Vendor Independence: Users can often mix and match storage devices from different
vendors within the virtualized storage pool. This promotes vendor independence and flexibility
in choosing hardware components.

8. Snapshot and Backup: Many block-level storage virtualization solutions offer features like snapshots and backups. Snapshots allow for point-in-time copies of data, and backup processes can be streamlined through centralized management (a copy-on-write sketch follows this list).

9. Centralized Management: Administrators can centrally manage the storage infrastructure, monitor performance, and implement policies for data protection and access control.

10. Scalability: Block-level storage virtualization is scalable, allowing organizations to easily expand their storage infrastructure by adding new devices to the virtualized pool.
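
As a minimal illustration of the snapshot feature mentioned in point 8, the Python sketch below models a copy-on-write snapshot at block level; the block contents are made up and no particular product’s mechanism is implied:

# Copy-on-write snapshot sketch: the snapshot shares the live volume's blocks
# until one is overwritten, at which point the original block is preserved.
volume = {0: b"AAAA", 1: b"BBBB", 2: b"CCCC"}   # block number -> data
snapshot_saved = {}                              # blocks preserved on first write

def write_block(block, data):
    if block not in snapshot_saved:              # first overwrite since snapshot
        snapshot_saved[block] = volume[block]    # copy the old block aside
    volume[block] = data

def read_snapshot(block):
    # The snapshot sees preserved blocks where they exist, live blocks otherwise.
    return snapshot_saved.get(block, volume[block])

write_block(1, b"XXXX")
print(volume[1], read_snapshot(1))               # b'XXXX' b'BBBB'
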
Benefits of Block-Level Storage Virtualization:
1. Centralized Management: Administrators can manage storage resources centrally,
streamlining tasks such as provisioning, monitoring, and data migration.

2. Improved Utilization: Virtualization allows for efficient use of storage capacity, as it enables
pooling and dynamic allocation of resources based on demand.

3. Vendor Independence: Users can integrate storage devices from different vendors into a
unified storage pool, promoting flexibility and preventing vendor lock-in.

4. Scalability: Block-level storage virtualization is scalable, enabling organizations to easily expand their storage infrastructure by adding new devices to the virtualized pool.

5. Data Migration and Load Balancing: The virtualization layer facilitates seamless data
migration across storage devices, aiding in load balancing and optimizing storage performance.

Drawbacks of Block-Level Storage Virtualization:

1. Complexity: Implementing and managing a block-level storage virtualization solution can be complex. It may require specialized knowledge and skills.

2. Performance Overhead: Depending on the virtualization implementation, there may be some level of performance overhead introduced, potentially impacting the speed of data access.

3. Initial Setup Costs: The initial investment in virtualization infrastructure, including hardware and software, can be significant.

4. Compatibility Issues: Integrating storage devices from different vendors may lead to
compatibility issues or require additional effort to ensure seamless operation.

5. Security Concerns: Centralized management of storage resources requires robust security measures to protect against unauthorized access and data breaches.

In summary, while block-level storage virtualization offers numerous benefits in terms of flexibility, efficiency, and centralized management, organizations should carefully consider the associated complexities, costs, and potential drawbacks. The decision to implement storage virtualization should align with the specific needs and goals of the organization.

File-Level Storage Virtualization:


 File-level virtualization is a method that operates at the file system layer, which is the
level that organizes and manages the files and directories on a storage device.
 File-level virtualization allows multiple file systems to be pooled together and accessed
as a single namespace, regardless of their physical location, size, or format. This can
simplify the administration and migration of files, as well as provide load balancing
and fault tolerance
 File-level storage virtualization is a technology that abstracts physical file storage
systems and presents a unified view of files and directories to users and applications.
Unlike block-level storage virtualization, which operates at a lower level with data
blocks, file-level virtualization deals with entire files and the hierarchical structure of
file systems.

Key Aspects Of File-Level Storage Virtualization:


1. Abstraction of Physical Storage: File-level storage virtualization abstracts the
underlying physical storage systems, presenting them as a single, logical file system.

2. Unified Namespace: It provides a unified namespace for files and directories, allowing users and applications to interact with a centralized and standardized file system (a minimal sketch follows this list).

3. Hierarchical Structure: File-level virtualization maintains the hierarchical structure of files and directories, resembling traditional file systems.

4. Access Control and Security: Administrators can implement access control and security
policies at the file level, managing permissions for individual files or directories.

5. Dynamic Expansion and Contraction: The virtualization layer allows for dynamic
expansion or contraction of storage resources, making it easier to manage changing storage
requirements.

6. Data Migration: File-level virtualization facilitates the movement of files across different storage devices or locations without affecting how users or applications access the data.

7. Centralized Management: Administrators can centrally manage file storage, monitor usage, and implement policies for data protection and access control.

8. Vendor Independence: Similar to block-level virtualization, file-level virtualization allows the integration of storage devices from different vendors into a unified file system.

9. Compatibility with Network Attached Storage (NAS): File-level virtualization is often used in conjunction with Network Attached Storage (NAS) environments, where it can simplify storage management across multiple NAS devices.
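
A minimal Python sketch of the unified-namespace idea from point 2: one logical path tree resolves, by longest-prefix match, to backing shares on different devices. The share and server names are invented examples:

# Unified-namespace sketch: one logical file tree fans out to several backend
# shares. The mount prefixes and share names below are made-up examples.
namespace = {
    "/corp/finance": "nas-01:/export/finance",
    "/corp/design":  "nas-02:/export/design",
    "/corp/archive": "nas-03:/export/archive",
}

def resolve(logical_path):
    # Longest-prefix match from logical path to the backing share.
    for prefix in sorted(namespace, key=len, reverse=True):
        if logical_path.startswith(prefix):
            return namespace[prefix] + logical_path[len(prefix):]
    raise FileNotFoundError(logical_path)

print(resolve("/corp/design/logo.svg"))   # nas-02:/export/design/logo.svg
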
Benefits of File-Level Storage Virtualization:
1. Simplified Data Management: Users and applications interact with a single, unified file
system, simplifying data management and reducing the complexities associated with
multiple storage systems.

2. Improved Scalability: File-level virtualization supports dynamic expansion and contraction of storage resources, improving scalability to meet changing storage needs.

3. Efficient Data Migration: Files can be migrated between storage devices without
affecting user access, facilitating efficient data movement for load balancing or hardware
upgrades.

4. Centralized Control: Administrators have centralized control over file-level permissions, security settings, and storage policies.

5. Enhanced Access Control: Access control can be applied at the file level, allowing for
fine-grained permissions management.

Drawbacks of File-Level Storage Virtualization:


1. Performance Overhead: Depending on the implementation, file-level virtualization may
introduce some performance overhead, potentially affecting data access speeds.

2. Complexity: Implementing and managing file-level virtualization solutions can be complex, requiring specialized knowledge and skills.

3. Compatibility Challenges: Some legacy applications or systems may not fully support
file-level virtualization, leading to compatibility challenges.

4. Initial Setup Costs: There can be significant initial setup costs associated with
implementing file-level virtualization, including hardware and software investments.

5. Learning Curve: Adopting file-level storage virtualization may involve a learning curve
for administrators, especially if they are not familiar with the specific virtualization solution.

File-level storage virtualization is often employed in scenarios where simplifying storage management, supporting scalable file systems, and providing centralized control over file access are critical requirements. Organizations considering file-level virtualization should carefully evaluate their specific needs and weigh the benefits against potential drawbacks.

Address Space Remapping:


 Virtualization of storage helps achieve location independence by abstracting the
physical location of the data. The virtualization system presents to the user a logical
space for data storage and handles the process of mapping it to the actual physical
location.

 It is possible to have multiple layers of virtualization or mapping. It is then possible that the output of one layer of virtualization can then be used as the input for a higher layer of virtualization. Virtualization maps space between back-end resources, to front-end resources. In this instance, "back-end" refers to a logical unit number (LUN) that is not presented to a computer, or host system for direct use. A "front-end" LUN or volume is presented to a host or computer system for use.

 The actual form of the mapping will depend on the chosen implementation. Some
implementations may limit the granularity of the mapping which may limit the
capabilities of the device. Typical granularities range from a single physical disk down
to some small subset (multiples of megabytes or gigabytes) of the physical disk.

 In a block-based storage environment, a single block of information is addressed using a LUN identifier and an offset within that LUN – known as logical block addressing (LBA).

Address space remapping is a technique used in storage virtualization to manage and control the mapping of logical addresses to physical storage locations. This process involves dynamically associating logical addresses, which are used by applications or the operating system, with physical addresses on the storage devices. Address space remapping plays a crucial role in achieving flexibility, efficiency, and abstraction in storage environments. Below are key aspects and considerations related to address space remapping in storage virtualization:
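
Before the considerations below, here is a minimal Python sketch of the remapping idea: a front-end (LUN, LBA) address is looked up in a mapping table at a fixed extent granularity and redirected to a back-end device and offset. The extent size and device names are illustrative assumptions:

# Address-space remapping sketch: front-end (LUN, LBA) -> back-end location.
EXTENT_BLOCKS = 1024   # granularity of the mapping

# (front-end LUN, extent number) -> (back-end device, starting block)
mapping = {
    ("lun0", 0): ("array-A:disk3", 0),
    ("lun0", 1): ("array-B:disk7", 4096),
}

def remap(lun, lba):
    extent, offset = divmod(lba, EXTENT_BLOCKS)
    device, base = mapping[(lun, extent)]
    return device, base + offset

print(remap("lun0", 10))     # ('array-A:disk3', 10)
print(remap("lun0", 1030))   # ('array-B:disk7', 4102)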

Considerations and Challenges:

1. Performance Impact: The remapping process may introduce some performance overhead, depending on the implementation. Storage virtualization solutions aim to minimize this impact to ensure efficient data access.

2. Complexity: Managing the dynamic mapping of addresses adds complexity to storage virtualization systems. Administrators must carefully configure and monitor the remapping process.
3. Compatibility: Compatibility with existing applications and systems may be a
consideration. Address space remapping should be transparent to applications to ensure
a smooth integration.

4. Security: Security measures must be in place to protect the mapping information and
prevent unauthorized access or tampering with the address space remapping process.

5. Data Integrity: Ensuring data integrity during address space remapping is crucial.
The virtualization layer must guarantee that data is correctly mapped to the intended
physical locations.

Risks of Storage Virtualization:


The major challenges linked with storage virtualisation are as follows:

 Managing the different software and hardware can get difficult when there are several
hardware and software elements.
 Storage systems need frequent upgrades to meet the challenging nature of applications and huge data volumes.
 Despite the ease of accessing data with storage virtualisation, there is always a risk of
cyber-attacks and various cyber threats in virtual environments. That is, for the data
stored in virtual machines, data security and its governance are the major challenges.
 Amongst the various vendors delivering storage virtualisation solutions, it is important to find a reliable one. Too often, vendors provide storage solutions but ignore the complexities of backing up virtual storage pools.
 Similarly, vendors can struggle when there is a need for immediate recovery of data in case of hardware failure or any other issue.
 Storage virtualisation, at times, can lead to access issues. This can be if the LAN
connection is disrupted, or internet access is lost due to some reason.
 There comes a time when there is a need to switch from a smaller network to a larger
one, as the capacity of the current one is insufficient. The migration process is time-
consuming and can even result in downtime.
 Additionally, problems like more significant data analysis, lack of agility, scalability,
and more rapid access to data are the common challenges companies face while
selecting storage solutions.

Storage Area Network (SAN):

 A Storage Area Network (SAN) is a network of storage devices that can be accessed by multiple servers or computers, providing a shared pool of storage space. Each computer on the network can access storage on the SAN as though they were local disks connected directly to the computer.
 A SAN is typically assembled with cabling, host bus adapters, and SAN switches
attached to storage arrays and servers. Each switch and storage system on the SAN
must be interconnected.
SANs are often used to:

 Improve application availability (e.g., multiple data paths),


 Enhance application performance (e.g., off-load storage functions, segregate or zone
networks, etc.),
 Increase storage utilization and effectiveness (e.g., consolidate storage resources,
provide tiered storage, etc.), and improve data protection and security.

 A SAN presents storage devices to a host such that the storage appears to be locally
attached. This simplified presentation of storage to a host is accomplished through the use
of different types of virtualization.
 SANs perform an important role in an organization's Business Continuity Management
(BCM) activities (e.g., by spanning multiple sites).
 SANs are commonly based on a switched fabric technology. Examples include Fibre
Channel (FC), Ethernet, and InfiniBand. Gateways may be used to move data between
different SAN technologies.
 Fibre Channel is commonly used in enterprise environments. Fibre Channel may be used
to transport SCSI, NVMe, FICON, and other protocols.
 Ethernet is commonly used in small and medium sized organizations. Ethernet
infrastructure can be used for SANs to converge storage and IP protocols onto the same
network. Ethernet may be used to transport SCSI, FCoE, NVMe, RDMA, and other
protocols.
 InfiniBand is commonly used in high performance computing environments. InfiniBand
may be used to transport SRP, NVMe, RDMA, and other protocols.

SAN Fabric Architecture and Operation:

 The core of a SAN is its fabric: the scalable, high-performance network that interconnects
hosts -- servers -- and storage devices or subsystems. The design of the fabric is directly
responsible for the SAN's reliability and complexity. At its simplest, an FC SAN can simply
attach HBA ports on servers directly to corresponding ports on SAN storage arrays, often
using optical cables for top speed and support for networking over greater physical distances.
 But such simple connectivity schemes belie the true power of a SAN. In actual practice, the SAN fabric is designed to enhance storage reliability and availability by eliminating single points of failure. A central strategy in creating a SAN is to employ a minimum of two connections between any SAN elements. The goal is to ensure that at least one working network path is always available between SAN hosts and SAN storage.

SAN architecture includes host components, fabric components and storage components.

 Consider a simple example in the image above where two SAN hosts must communicate with two SAN storage subsystems. Each host employs separate HBAs -- not a single multiport HBA, because the HBA device itself would be a single point of failure. The port from each HBA is connected to a port on a different SAN switch, such as a Fibre Channel switch. Similarly, multiple ports on the SAN switch connect to different storage target devices or systems. This is a simple redundant fabric; remove any one connection in the diagram, and both servers can still communicate with both storage systems to preserve storage access for the workloads on both servers (a minimal reachability sketch follows these bullets).

 Consider the basic behaviour of a SAN and its fabric. A host server requires access to SAN storage; the host will internally create a request to access the storage device. The traditional SCSI commands used for storage access are encapsulated into packets for the network -- in this case FC packets -- and the packets are structured according to the rules of the FC protocol. The packets are delivered to the host's HBA, where the packets are placed onto the network's optical or copper cables. The HBA transmits the request packet(s) to the SAN, where the request will arrive at the SAN switch(es). One of the switches will receive the request and send it along to the corresponding storage device. In a storage array, the storage processor will receive the request and interact with storage devices within the array to accommodate the host's request.
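
The redundancy argument above can be checked with a small Python sketch: model the fabric as a graph of hosts, switches and arrays, fail each link in turn, and confirm a path still exists. Node and link names are invented for illustration:

# Redundancy check for a toy SAN fabric: with dual HBAs and dual switches,
# removing any single link still leaves a path from host to storage.
links = {("host1", "sw1"), ("host1", "sw2"),
         ("sw1", "array1"), ("sw2", "array1")}

def reachable(src, dst, edges):
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    seen, stack = {src}, [src]
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        for nxt in graph.get(node, set()) - seen:
            seen.add(nxt)
            stack.append(nxt)
    return False

# Fail each link in turn; the fabric should survive every single failure.
for failed in links:
    ok = reachable("host1", "array1", links - {failed})
    print(f"link {failed} down -> path available: {ok}")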

SAN Switches:

 The SAN switch is the focal point of any SAN. As with most network switches, the SAN
switch receives a data packet, determines the source and destination of the packet and then
forwards that packet to the intended destination device. Ultimately, the SAN fabric
topology is defined by number of switches, the type of switches -- such as backbone
switches, or modular or edge switches -- and the way in which the switches are
interconnected. Smaller SANs might use modular switches with 16, 24 or even 32 ports,
while larger SANs might use backbone switches with 64 or 128 ports. SAN switches can
be combined to create large and complex SAN fabrics that connect thousands of servers
and storage devices.

Alternative SAN Approaches:

 Virtual SAN. Virtualization technology was a natural fit for the SAN, encompassing both
storage and storage network resources to add flexibility and scalability to the underlying
physical SAN. A virtual SAN -- denoted with a capital V in VSAN -- is a form of isolation,
reminiscent of traditional SAN zoning, which essentially uses virtualization to create one
or more logical partitions or segments within the physical SAN. Traditional VSANs can
employ such isolation to manage SAN network traffic, enhance performance and improve
security. Thus, VSAN isolation can prevent potential problems on one segment of the SAN
from affecting other SAN segments, and the segments can be changed logically as needed
without the need to touch any physical SAN components. VMware offers Virtual SAN
Technology.

 Unified SAN. A SAN is noted for its support of block storage, which is typical for
enterprise applications. But file, object and other types of storage would traditionally
demand a separate storage system, such as network-attached storage (NAS). A SAN that
supports unified storage is capable of supporting multiple approaches -- such as file, block
and object-based storage -- within the same storage subsystem. Unified storage provides
such capabilities by handling multiple protocols, including file-based SMB and NFS, as
well as block-based, such as FC and iSCSI. By using a single storage platform for block
and file storage, users can take advantage of powerful features that are usually reserved
for traditional block-based SANs, such as storage snapshots, data replication, storage
tiering, data encryption, data compression and data deduplication.

 Converged SAN. One common disadvantage to a traditional FC SAN is the cost and
complexity of a separate network dedicated to storage. iSCSI is one means of overcoming
the cost of a SAN by using common Ethernet networking components rather than FC
components. FCoE supports a converged SAN that can run FC communication directly
over Ethernet network components -- converging both common IP and FC storage
protocols onto a single low-cost network. FCoE works by encapsulating FC frames within
Ethernet frames to route and transport FC data across an Ethernet network. However,
FCoE relies on end-to-end support in network devices, which has been difficult to achieve
on a broad basis, making the choice of vendor limited.

 Hyper-converged infrastructure. The data center use of HCI has grown dramatically in
recent years. HCI combines compute and storage resources into pre-packaged modules,
allowing modules -- also called nodes -- to be added as needed and managed through a
single common utility. HCI employs virtualization, which abstracts and pools all the
compute and storage resources. IT administrators then provision virtual machines and
storage from the available resource pools. The fundamental goal of HCI is to simplify
hardware deployment and management while allowing fast scalability.

SAN Benefits:
 High performance. The typical SAN uses a separate network fabric that is dedicated to
storage tasks. The fabric is traditionally FC for top performance, though iSCSI and
converged networks are also available.
 High scalability. The SAN can support extremely large deployments encompassing
thousands of SAN host servers and storage devices or even storage systems. New hosts and
storage can be added as required to build out the SAN to meet the organization's specific
requirements.
 High availability. A traditional SAN is based on the idea of a network fabric, which --
ideally -- interconnects everything to everything else. This means a full-featured SAN
deployment has no single point of failure between a host and a storage device, and
communication across the fabric can always find an alternative path to maintain storage
availability to the workload.
 Advanced management features. A SAN will support an array of useful enterprise-class
storage features, including data encryption, data deduplication, storage replication and self-
healing technologies intended to maximize storage capacity, security and data resilience.
Features are almost universally centralized and can easily be applied to all the storage
resources on the SAN.

SAN Disadvantages:
 Complexity. Although more convergence options, such as FCoE and unified options,
exist for SANs today, traditional SANs present the added complexity of a second
network -- complete with costly, dedicated HBAs on the host servers, switches and
cabling within a complex and redundant fabric and storage processor ports at the
storage arrays. Such networks must be designed and monitored with care, but the
complexity is increasingly troublesome for IT organizations with fewer staff and
smaller budgets.

 Scale. Considering the cost, a SAN is generally effective only in larger and more
complex environments where there are many servers and significant storage. It's
certainly possible to implement a SAN on a small scale, but the cost and complexity
are difficult to justify. Smaller deployments can often achieve satisfactory results using
an iSCSI SAN, a converged SAN over a single common network -- such as FCoE -- or
an HCI deployment, which is adept at pooling and provisioning resources.

 Management. Beyond the complexity of the hardware itself, SAN management poses a
significant challenge. Configuring features, such as LUN
mapping or zoning, can be problematic for busy organizations. Setting up RAID and
other self-healing technologies as well as corresponding logging and reporting -- not to
mention security -- can be time-consuming but unavoidable to maintain the
organization's compliance, DR and BC postures.

Network Attached Storage (NAS):


 A NAS device is a storage device connected to a network that allows storage and
retrieval of data from a central location for authorized network users and varied clients.
NAS devices are flexible and scale out, meaning that as you need additional storage,
you can add to what you have. NAS is like having a private cloud in the office. It’s
faster, less expensive and provides all the benefits of a public cloud on site, giving you
complete control.
 NAS devices typically don't have a keyboard or display and are configured and
managed with a browser-based utility. Each NAS resides on the LAN as an independent
network node, defined by its own unique IP address.

NAS Uses:

 The purpose of network-attached storage is to enable users to collaborate and share
data more effectively. It is useful for distributed teams that need remote access or
work in different time zones. NAS connects to a wireless router, making it easy for
distributed workers to access files from any desktop or mobile device with a
network connection.
 Some NAS products are designed for use in large enterprises. Others are for home
offices or small businesses. Devices usually contain at least two drive bays,
although single-bay systems are available for noncritical data.
 In addition, most NAS vendors partner with cloud storage providers to give
customers the flexibility of redundant backup.
 Network-attached storage relies on hard disk drives (HDDs) to serve data. I/O
contention can occur when too many users overwhelm the system with requests at
the same time.
 Higher-end NAS products have enough disks to support redundant arrays of
independent disks, or RAID, which is a storage configuration that turns multiple
hard disks into one logical unit to boost performance, high availability and
redundancy.
NAS Components:

 CPU. The heart of every NAS is a computer that includes the central processing
unit (CPU) and memory. The CPU is responsible for running the NAS OS, reading
and writing data against storage, handling user access and even integrating with
cloud storage if so designed. Where typical computers or servers use a general-
purpose CPU, a dedicated device such as NAS might use a specialized CPU
designed for high performance and low power consumption in NAS use cases.
 Network interface. Small NAS devices designed for desktop or single-user use
might allow for direct computer connections, such as USB or limited wireless (Wi-
Fi) connectivity. But any business NAS intended for data sharing and file serving
will demand a physical network connection, such as a cabled Ethernet interface,
giving the NAS a unique IP address. This is often considered part of the NAS
hardware suite, along with the CPU.
 Storage. Every NAS must provide physical storage, which is typically in the form
of disk drives. The drives might include traditional magnetic HDDs, SSDs or other
non-volatile memory devices, often supporting a mix of different storage devices.
The NAS might support logical storage organization for redundancy and
performance, such as mirroring and other RAID implementations -- but it's the
CPU, not the disks, that handle such logical organization.
 OS. Just as with a conventional computer, the OS organizes and manages the NAS
hardware and makes storage available to clients, including users and other
applications. Simple NAS devices might not highlight a specific OS, but more
sophisticated NAS systems might employ a discrete OS such as Netgear
ReadyNAS, QNAP QTS, Zyxel FW, among others.

Types and Alternatives of NAS:


Scale up and scale out are two versions of NAS. Object storage is an alternative to
NAS for handling unstructured data.
Scale-up NAS
In a network-attached storage deployment, the NAS head is the hardware that
performs the control functions. It provides access to back-end storage through an
internet connection. This configuration is known as scale-up architecture. A two-
controller system expands capacity with the addition of drive shelves, depending on
the scalability of the controllers.

Scale-out NAS
With scale-out systems, the storage administrator installs larger heads and more hard
disks to boost storage capacity. Scaling out provides the flexibility to adapt to an
organization's business needs. Enterprise scale-out systems can store billions of files
without the performance tradeoff of doing metadata searches.

Object storage
Some industry experts speculate that object storage will overtake scale-out NAS.
However, it's possible the two technologies will continue to function side by side.
Both scale-out and object storage methodologies deal with scale, but in different ways.

NAS files are centrally managed via the Portable Operating System Interface
(POSIX). It provides data security and ensures multiple applications can share a scale-
out device without fear that one application will overwrite a file being accessed by
other users.

Object storage is a new method for easily scalable storage in web-scale environments.
It is useful for unstructured data that is not easily compressible, particularly large
video files.

Object storage does not use POSIX or any file system. Instead, all the objects are
presented in a flat address space. Bits of metadata are added to describe each object,
enabling quick identification within a flat address namespace.

Advantages of NAS:

 Many NAS products are multiprotocol, which gives users flexibility.
 Multiple users can access data at any time.
 Offers data backup capabilities like replication and redundancy.
 Provides 24/7 and remote data availability.
 Easy to operate
 Offers data protection capabilities such as RAID.

Disadvantages of NAS:

 Requires special skills to manage high-end NAS.
 Does not work well with block or object storage.
 Performance can degrade as network traffic increases.
 Not well suited for highly transactional environments.
 Scalability is limited compared to other storage systems.
 The protocol admins choose may affect performance.
Redundant Array of Independent Disks (RAID):

• Technology used in computer systems to organize and manage multiple physical hard
drives as a single logical unit.
• RAID is designed to improve the reliability, performance, and/or capacity of data
storage systems.
• It achieves this by storing data across multiple disks in a way that provides redundancy
and/or data striping.
• There are different levels of RAID, each with its own set of characteristics and
advantages.
• Some common RAID levels include
 RAID 0
 RAID 1
 RAID 2
 RAID 3
 RAID 4
 RAID 5
 RAID 6

RAID 0:

• RAID 0 implements data striping.
• The data blocks are spread across multiple disks with no redundancy. Because no disk is
used for redundant copies, if one disk fails all the data in the array is lost. No data block is
repeated on any disk. For example, blocks 10, 11, 12 and 13 written across four disks
together form one stripe.

Instead of placing one block of data on a disk, we can place more than one block of data on a
disk and then move to the next disk.
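
The striping idea can be illustrated with a short sketch (Python; stripe_blocks is a hypothetical
helper written for this example, not part of any RAID implementation):

# A minimal sketch of RAID 0 block striping: logical blocks are distributed
# round-robin across the disks, with no redundancy.
def stripe_blocks(blocks, num_disks):
    disks = [[] for _ in range(num_disks)]
    for i, block in enumerate(blocks):
        disks[i % num_disks].append(block)
    return disks

# Blocks 10, 11, 12 and 13 land on four different disks and form one stripe.
print(stripe_blocks(list(range(10, 18)), 4))
# [[10, 14], [11, 15], [12, 16], [13, 17]]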

Pros of RAID 0:
• All the disk space is utilized, and performance is increased.
• Data requests can be served by multiple disks rather than a single disk, improving
throughput.
Cons of RAID 0:

• Failure of one disk can lead to complete data loss in the respective array.
• No data Redundancy is implemented so one disk failure can lead to system failure.

RAID 1:

• RAID 1 implements mirroring which means the data of one disk is replicated in
another disk.
• This helps in preventing system failure as if one disk fails then the redundant disk
takes over.

Here Disk 0 and Disk 1 have the same data as disk 0 is copied to disk 1. Same is the case with
Disk 2 and Disk 3.

Pros of RAID 1:

• Failure of one Disk does not lead to system failure as there is redundant data in other
disk.

Cons of RAID 1:

• Extra capacity is required, because the data on each disk is also copied to another disk.

RAID 2:

• RAID 2 is used when errors in data have to be checked at the bit level, using a
Hamming code error-detection method.
• Two groups of disks are used in this technique.
• One group stores the bits of each data word, and the other stores the error-correcting
code (parity bits) for the data words.
• The structure of this RAID level is complex, so it is not commonly used.

Here Disk 3, Disk 4 and Disk 5 store the parity bits of the data stored in Disk 0, Disk 1 and Disk
2 respectively. The parity bits are used to detect errors in the data.

Pros of RAID 2:

• It checks for error at a bit level for every data word.


• One full disk is used to store parity bits which helps in detecting error.

Cons of RAID 2:

• Large extra space is used for parity bit storage.

RAID 3:

• RAID 3 implements byte-level striping of data.
• Data is striped across disks, with the parity bits stored on a separate disk. The parity bits
help to reconstruct the data when data is lost.
Here Disk 3 contains the parity bits for Disk 0, Disk 1 and Disk 2. If any one disk's data is lost,
the data can be reconstructed using the parity bits on Disk 3.

Pros of RAID 3:

 Data can be recovered with the help of parity bits

Cons of RAID 3:

• Extra space for storing parity bits is used.

RAID 4:

• RAID 4 implements block-level striping of data with a dedicated parity drive.
• If the data on only one disk is lost, it can be reconstructed with the help of the
parity drive.
• Parity is calculated by applying an XOR operation over the corresponding data blocks
on each disk.

Here P0 is calculated as XOR(0, 1, 0) = 1 and P1 is calculated as XOR(1, 1, 0) = 0: if there is an
even number of 1s the XOR is 0, and for an odd number of 1s the XOR is 1. If Disk 0's data is
lost, then from the parity P0 = 1 we know that Disk 0 must have held 0, because a 1 on Disk 0
would have made the parity P0 = 0, which contradicts the stored parity value.
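
The parity arithmetic above can be checked with a minimal Python sketch (illustrative only; real
RAID controllers operate on whole blocks, not single bits):

from functools import reduce

data = [0, 1, 0]                           # bits on Disk 0, Disk 1, Disk 2
parity = reduce(lambda a, b: a ^ b, data)  # P0 = 0 ^ 1 ^ 0 = 1

# Suppose Disk 0 is lost: XOR the parity with the surviving bits to rebuild it.
surviving = data[1:]                       # bits from Disk 1 and Disk 2
rebuilt = reduce(lambda a, b: a ^ b, surviving, parity)
assert rebuilt == data[0]                  # recovers the original 0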

Pros of RAID 4:

• Parity bits help to reconstruct the data if at most one disk's data is lost.

Cons of RAID 4:

• Extra space for parity is required.
• If data is lost from more than one disk, the parity cannot help us reconstruct the data.

RAID 5:

• RAID 5 is similar to RAID 4, with only one difference.
• The parity rotates among the disks instead of living on a single dedicated drive.
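
A small sketch of one common rotation pattern (Python; the exact placement order varies between
RAID 5 implementations, so this layout is only illustrative):

# Each stripe places its parity block (P) on a different disk, so no single
# disk becomes a dedicated parity bottleneck as in RAID 4.
def raid5_layout(num_disks, num_stripes):
    layout = []
    for stripe in range(num_stripes):
        parity_disk = (num_disks - 1 - stripe) % num_disks
        layout.append(["P" if d == parity_disk else "D" for d in range(num_disks)])
    return layout

for row in raid5_layout(4, 4):
    print(row)
# ['D', 'D', 'D', 'P']
# ['D', 'D', 'P', 'D']
# ['D', 'P', 'D', 'D']
# ['P', 'D', 'D', 'D']
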
Pros of RAID 5:

• Parity is distributed over the disk and makes the performance better.
• Data can be reconstructed using parity bits.

Cons of RAID 5:

• Parity bits are useful only when there is data loss in at most one Disk.
• If there is loss in more than one Disk block then parity is of no use.
• Extra space for parity is required.

RAID 6:

• If more than one disk fails, a RAID 6 implementation helps in that case.
• In RAID 6 there are two parity blocks in each array/row. It is similar to RAID 5 with an
extra parity block.

Here P0, P1, P2, P3 and Q0, Q1, Q2, Q3 are the two sets of parity used to reconstruct the data if
at most two disks fail.

Pros of RAID 6:

• The additional parity allows the data of at most two failed disks to be reconstructed.

Cons of RAID 6:

• Extra space is used for both parities (P and Q).
• More than two disk failures cannot be corrected.

In summary, RAID is used to protect data when a disk fails, and it comes in several levels:

• RAID 0 implements data striping.
• RAID 1 implements mirroring, which creates redundant data.
• RAID 2 uses the Hamming code error-detection method to detect and correct errors in data.
• RAID 3 does byte-level data striping and has parity bits for each data word.
• RAID 4 does block-level data striping with a dedicated parity disk.
• RAID 5 has rotating parity across the disks.
• RAID 6 has two parity blocks and can handle at most two disk failures.

UNIT-V VMware Tools


VMware Tools is a suite of utilities that enhances the performance of the virtual machine's guest operating
system and improves management of the virtual machine. Without VMware Tools installed in your guest
operating system, the guest lacks important functionality. Installing VMware Tools eliminates or improves
these issues and adds these capabilities:

 Low video resolution


 Inadequate color depth
 Incorrect display of network speed
 Restricted movement of the mouse
 Inability to copy and paste and drag-and-drop files
 Missing sound
 Provides the ability to take quiesced snapshots of the guest OS
 Synchronizes the time in the guest operating system with the time on the host

VMware Tools includes these components:

 VMware Tools service


 VMware device drivers
 VMware user process
 VMware Tools control panel

VMware Tools is provided in these formats:


 ISOs (containing installers): These are packaged with the product and are installed in a number of ways,
depending upon the VMware product and the guest operating system installed in the virtual machine. For
more information, see the Installing VMware Tools section. VMware Tools provides a different ISO file
for each type of supported guest operating system: Mac OS X, Windows, Linux, NetWare, Solaris, and
FreeBSD.
 Operating System Specific Packages (OSPs): Downloadable binary packages that are built and
provided by VMware for particular versions of Linux distributions. OSPs are typically available for older
releases, such as RHEL 6. Most current versions of Linux include Open VM Tools, eliminating the need
to separately install OSPs. To download OSPs and to find important information and instructions.
 Open VM Tools (OVT): This is the open source implementation of VMware Tools intended for Linux
distribution maintainers and virtual appliance vendors. OVTs are generally included in the current
versions of popular Linux distributions, allowing administrators to effortlessly install and update
VMware Tools alongside other Linux packages.

Installing VMware Tools


The steps to install VMware Tools vary depending on your VMware product and the
guest operating system you have installed. The general procedure below applies to most
VMware products.

Installing VMware Tools


The following are the general steps used to start the VMware Tools installation in most
VMware products. Certain guest operating systems may require different steps, but these steps
work for most operating systems; a short Linux-guest example follows the numbered steps.

To install VMware Tools in most VMware products:


1. Power on the virtual machine.
2. Log in to the virtual machine using an account with Administrator or root privileges.
3. Wait for the desktop to load and be ready.
4. In the user interface for the VMware product, locate the Install VMware Tools menu
item, which is typically found under a Virtual Machine or Guest Operating System
menu.
5. Based on the operating system you specified when creating the virtual machine, the
correct ISO CD-ROM image containing VMware Tools is mounted to the virtual CD-
ROM of the virtual machine.
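
For Windows guests, the mounted ISO starts (or offers) the VMware Tools installer. For current
Linux guests, installing the distribution's Open VM Tools package is usually preferred over the
ISO. A minimal sketch for a Debian or Ubuntu guest follows (Python calling the system package
manager; it assumes sudo rights, network access, and the open-vm-tools package names shipped
by those distributions):

import subprocess

# Install Open VM Tools from the distribution's repositories inside the guest.
subprocess.run(["sudo", "apt-get", "update"], check=True)
subprocess.run(["sudo", "apt-get", "install", "-y", "open-vm-tools"], check=True)
# For guests running a desktop environment, open-vm-tools-desktop adds
# copy/paste and drag-and-drop support.
subprocess.run(["sudo", "apt-get", "install", "-y", "open-vm-tools-desktop"], check=True)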

Amazon Web Services.

o AWS stands for Amazon Web Services.


o AWS is provided by Amazon and uses distributed IT infrastructure to make different IT
resources available on demand. It provides services such as infrastructure as a service
(IaaS), platform as a service (PaaS) and packaged software as a service (SaaS).
o Amazon launched AWS, a cloud computing platform, to allow different organizations to
take advantage of reliable IT infrastructure.

Uses of AWS
o A small manufacturing organization can focus its expertise on growing its business by
leaving IT management to AWS.
o A large enterprise spread across the globe can use AWS to deliver training to its
distributed workforce.
o An architecture consulting company can use AWS for high-compute rendering of
construction prototypes.
o A media company can use AWS to deliver different types of content, such as e-books or
audio files, to users worldwide.

Pay-As-You-Go

AWS provides its services to customers on a pay-as-you-go basis: services are delivered on
demand, without any prior commitment or upfront investment. Pay-as-you-go lets customers
procure services from AWS in areas such as the following (a minimal example of launching a
compute instance programmatically appears after the list):

o Computing
o Programming models
o Database storage
o Networking
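
As an illustration of on-demand, pay-as-you-go computing, the sketch below launches a single
virtual server with the AWS SDK for Python (boto3). The AMI ID, region and instance type are
placeholder values, and the call assumes AWS credentials are already configured:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")    # assumed region
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",                  # hypothetical AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print("Launched:", response["Instances"][0]["InstanceId"])
# You pay only while the instance runs; terminate it when no longer needed:
# ec2.terminate_instances(InstanceIds=[...])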
Advantages of AWS

1. Flexibility
o We can get more time for core business tasks due to the instant availability of new
features and services in AWS.
o It provides effortless hosting of legacy applications. AWS does not require learning new
technologies, and migrating applications to AWS provides advanced computing and
efficient storage.
o AWS also offers a choice of whether to run applications and services entirely in AWS or
not. We can choose to run part of the IT infrastructure in AWS and the remaining part in
our own data centres.

2. Cost-effectiveness

AWS requires no upfront investment, no long-term commitment, and minimal expense
compared to traditional IT infrastructure, which requires a huge investment.

3. Scalability/Elasticity

With AWS, auto scaling and elastic load balancing automatically scale resources up or
down as demand increases or decreases. These techniques are ideal for handling unpredictable
or very high loads, so organizations enjoy the benefits of reduced cost and increased user
satisfaction.
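
As a small example of this elasticity, the boto3 sketch below changes the desired capacity of an
existing Auto Scaling group; the group name "web-asg" is hypothetical, and in practice scaling
policies usually adjust capacity automatically rather than by hand:

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")  # assumed region
# Ask the (hypothetical) group to run four instances; AWS launches or
# terminates instances to converge on this number.
autoscaling.set_desired_capacity(
    AutoScalingGroupName="web-asg",
    DesiredCapacity=4,
    HonorCooldown=True,
)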

4. Security
o AWS provides end-to-end security and privacy to customers.
o AWS has a virtual infrastructure that offers optimum availability while managing full
privacy and isolation of their operations.
o Customers can expect a high level of physical security because of Amazon's many years
of experience in designing, developing and maintaining large-scale IT operation centers.
o AWS ensures the three aspects of security, i.e., Confidentiality, integrity, and
availability of user's data.
Features of AWS

The following are the features of AWS:

o Flexibility
o Cost-effective
o Scalable and elastic
o Secure
o Experienced

1. Flexibility

o The difference between AWS and traditional IT models is flexibility.


o Traditional models deliver IT solutions that require large investments in new
architectures, programming languages, and operating systems. Although these
investments are valuable, it takes time to adopt new technologies, which can also slow
down your business.
o The flexibility of AWS allows us to choose which programming models, languages, and
operating systems are better suited for our project, so we do not have to learn new skills
to adopt new technologies.
o Flexibility means that migrating legacy applications to the cloud is easy and cost-
effective. Instead of rewriting applications to adopt new technologies, you just need to
move the applications to the cloud and tap into advanced computing capabilities.
o Building applications in AWS is like building applications using existing hardware
resources.
o Larger organizations run in a hybrid mode, i.e., some pieces of an application run in their
own data center and other portions run in the cloud.
o The flexibility of AWS is a great asset for organizations, helping them deliver products
with up-to-date technology on time and enhancing overall productivity.

2. Cost-effective

o Cost is one of the most important factors that need to be considered in delivering IT
solutions.
o For example, developing and deploying an application can incur a low cost, but after
successful deployment there is a need for hardware and bandwidth. Owning our own
infrastructure can incur considerable costs, such as power, cooling, real estate, and staff.
o The cloud provides on-demand IT infrastructure that lets you consume only the resources
you actually need. In AWS, you are not limited to a set amount of storage, bandwidth or
computing resources, since it is very difficult to predict the requirements of every
resource. The cloud therefore provides flexibility by maintaining the right balance of
resources.
o AWS requires no upfront investment, long-term commitment, or minimum spend.
o You can scale up or scale down as the demand for resources increases or decreases.
o AWS lets you access resources almost instantly. The ability to respond to changes
quickly, whether those changes are large or small, means we can take up new
opportunities and meet business challenges that could increase revenue and reduce cost.

3. Scalable and elastic

o In a traditional IT organization, scalability and elasticity come at the cost of additional
investment and infrastructure, while in the cloud, scalability and elasticity provide
savings and improved ROI (return on investment).
o Scalability in AWS is the ability to scale computing resources up or down as demand
increases or decreases.
o Elastic Load Balancing in AWS distributes incoming application traffic across multiple
targets such as Amazon EC2 instances, containers, IP addresses, and Lambda functions.
o Elastic load balancing and auto scaling automatically scale your AWS computing
resources to meet unexpected demand and scale them down automatically when demand
decreases.
o The AWS cloud is also useful for implementing short-term jobs, mission-critical jobs,
and jobs repeated at regular intervals.

4. Secure

o AWS provides a scalable cloud-computing platform that provides customers with end-
to-end security and end-to-end privacy.
o AWS builds security into its services and provides documentation describing how to use
the security features.
o AWS maintains the confidentiality, integrity, and availability of your data, which is of
the utmost importance to AWS.

Physical security: Amazon has many years of experience in designing, constructing, and
operating large-scale data centers. The AWS infrastructure is housed in AWS-controlled data
centers throughout the world. The data centers are physically secured to prevent unauthorized
access.

Secure services: Each service provided by the AWS cloud is secure.

Data privacy: Personal and business data can be encrypted to maintain data privacy.

5. Experienced

o The AWS cloud provides levels of scale, security, reliability, and privacy.
o AWS has built an infrastructure based on lessons learned from over sixteen years of
experience managing the multi-billion dollar Amazon.com business.
o Amazon continues to benefit its customers by enhancing their infrastructure
capabilities.
o Today, Amazon is a global web platform that serves millions of customers, and AWS
has been evolving since 2006, serving hundreds of thousands of customers worldwide.

Hyper-V

Hyper-V is Microsoft's hardware virtualization product for Windows and Windows Server. It
lets you create and run virtual machines and can help you:

 Establish or expand a private cloud environment. Provide more flexible, on-demand IT
services by moving to or expanding your use of shared resources, and adjust utilization as
demand changes.
 Use your hardware more effectively. Consolidate servers and workloads onto fewer,
more powerful physical computers to use less power and physical space.
 Improve business continuity. Minimize the impact of both scheduled and unscheduled
downtime of your workloads.
 Establish or expand a virtual desktop infrastructure (VDI). Using a centralized desktop
strategy with VDI can help you increase business agility and data security, as well as
simplify regulatory compliance and the management of desktop operating systems and
applications.
Deploy Hyper-V and Remote Desktop Virtualization Host (RD Virtualization Host) on
the same server to make personal virtual desktops or virtual desktop pools available to
your users.
 Make development and test more efficient. Reproduce different computing
environments without having to buy or maintain all the hardware you'd need if you only
used physical systems.

Virtualization Products

Hyper-V in Windows and Windows Server replaces older hardware virtualization
products, such as Microsoft Virtual PC, Microsoft Virtual Server, and Windows Virtual PC.
Hyper-V offers networking, performance, storage and security features not available in these
older products.

Hyper-V and most third-party virtualization applications that require the same
processor features aren't compatible. That's because the processor features, known as hardware
virtualization extensions, are designed to not be shared.

Features of Hyper-V

Computing environment - A Hyper-V virtual machine includes the same basic parts as a
physical computer, such as memory, processor, storage, and networking. All these parts have
features and options that you can configure different ways to meet different needs. Storage and
networking can each be considered categories of their own, because of the many ways you can
configure them.

Disaster recovery and backup - For disaster recovery, Hyper-V Replica creates copies of
virtual machines, intended to be stored in another physical location, so you can restore the
virtual machine from the copy. For backup, Hyper-V offers two types. One uses saved states
and the other uses Volume Shadow Copy Service (VSS) so you can make application-
consistent backups for programs that support VSS.

Optimization - Each supported guest operating system has a customized set of services and
drivers, called integration services, that make it easier to use the operating system in a Hyper-
V virtual machine.

Portability - Features such as live migration, storage migration, and import/export make it
easier to move or distribute a virtual machine.

Remote connectivity - Hyper-V includes Virtual Machine Connection, a remote connection
tool for use with both Windows and Linux. Unlike Remote Desktop, this tool gives you console
access, so you can see what's happening in the guest even when the operating system isn't
booted yet.

Security - Secure boot and shielded virtual machines help protect against malware and other
unauthorized access to a virtual machine and its data.

Hyper-V Components

Hyper-V has required parts that work together so you can create and run virtual machines.
Together, these parts are called the virtualization platform. They're installed as a set when you
install the Hyper-V role. The required parts include Windows hypervisor, Hyper-V Virtual
Machine Management Service, the virtualization WMI provider, the virtual machine bus
(VMbus), virtualization service provider (VSP) and virtual infrastructure driver (VID).

Hyper-V also has tools for management and connectivity. You can install these on the same
computer that the Hyper-V role is installed on, and on computers without the Hyper-V role
installed. These tools are listed below; a small scripted sketch of using them follows the list:

 Hyper-V Manager
 Hyper-V module for Windows PowerShell
 Virtual Machine Connection
 Windows PowerShell Direct
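
A minimal sketch of driving these tools from a script on a Windows host with the Hyper-V role
and PowerShell module installed (the VM name, memory size and generation are illustrative
values):

import subprocess

# List existing virtual machines using the Hyper-V PowerShell module.
subprocess.run(["powershell", "-Command", "Get-VM"], check=True)

# Create a new generation 2 VM with 2 GB of startup memory.
subprocess.run(
    ["powershell", "-Command",
     "New-VM -Name 'DemoVM' -MemoryStartupBytes 2GB -Generation 2"],
    check=True,
)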

Oracle VM VirtualBox
Oracle VM VirtualBox is a free and open-source hosted hypervisor for x86 virtualization,
developed and maintained by Oracle Corporation. It allows users to create and run virtual
machines on their desktop or laptop computers, enabling them to run multiple operating
systems simultaneously.

VirtualBox supports a wide range of guest operating systems including various versions of
Windows, Linux, macOS, Solaris, and others. It provides features such as snapshotting, which
allows users to save the current state of a virtual machine and revert back to it later if needed,
as well as support for virtual networking, USB device passthrough, and more.
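
These operations are also scriptable through the VBoxManage command-line tool that ships with
VirtualBox. A minimal sketch (Python wrapping VBoxManage; the VM name, OS type and sizes
are illustrative, and the commands assume VirtualBox is installed and on the PATH):

import subprocess

def vbox(*args):
    # Thin wrapper around the VBoxManage CLI bundled with VirtualBox.
    subprocess.run(["VBoxManage", *args], check=True)

vbox("createvm", "--name", "demo-vm", "--ostype", "Ubuntu_64", "--register")
vbox("modifyvm", "demo-vm", "--memory", "2048", "--cpus", "2")
vbox("snapshot", "demo-vm", "take", "clean-base")   # save the current state
vbox("list", "vms")                                 # confirm the VM exists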

VirtualBox is commonly used for purposes such as software development and testing, running
legacy applications, experimenting with different operating systems, and creating virtualized
environments for training or educational purposes. It's popular among developers, IT
professionals, and enthusiasts due to its versatility, ease of use, and the fact that it's available
for free under the GNU General Public License (GPL).

Oracle VM VirtualBox for Windows


Oracle VM VirtualBox is indeed available for Windows operating systems. Users can
download and install the VirtualBox software on their Windows computers to create and run
virtual machines. This allows them to utilize multiple operating systems simultaneously on
their Windows-based systems for various purposes such as development, testing,
experimentation, and more.
To download Oracle VM VirtualBox for Windows, you can visit the official VirtualBox
website (https://www.virtualbox.org/) and navigate to the "Downloads" section. From there,
you can select the version of VirtualBox compatible with your Windows operating system and
download the installation package. Once downloaded, you can proceed with the installation
process, which typically involves running the installer and following the on-screen instructions
to complete the setup.

After installation, you can launch VirtualBox and start creating and managing virtual machines
to meet your specific needs and requirements on your Windows computer.

IBM PowerVM

IBM PowerVM is a virtualization solution designed specifically for IBM Power Systems
servers, which are based on IBM's POWER architecture. PowerVM provides virtualization
capabilities for these servers, enabling the creation and management of virtualized partitions
or logical partitions (LPARs).

Here are some key features and components of IBM PowerVM:

1. Logical Partitioning (LPAR): PowerVM allows administrators to divide a single physical
Power Systems server into multiple logical partitions (LPARs), each running its own operating
system instance. This enables better utilization of server resources by running multiple
workloads on a single physical server.
2. Micro-Partitioning: PowerVM supports micro-partitioning, which allows the subdivision of
CPU and memory resources within a single logical partition (LPAR) into smaller units. This
fine-grained resource allocation helps optimize resource utilization and performance for
workloads with varying resource requirements.
3. Shared Processor Pool: PowerVM includes a feature called Shared Processor Pool, which
enables dynamic sharing of processor resources among multiple logical partitions (LPARs)
based on workload demands. This feature helps improve resource utilization and flexibility in
dynamic workload environments.
4. Live Partition Mobility (LPM): PowerVM provides Live Partition Mobility (LPM), allowing
administrators to migrate running logical partitions (LPARs) from one physical server to
another within the same Power Systems environment without interrupting workload
operations. LPM helps in workload balancing, planned maintenance, and disaster recovery
scenarios.
5. Virtual I/O Server (VIOS): PowerVM utilizes Virtual I/O Server (VIOS), a specialized
partition responsible for handling I/O operations and providing virtualized I/O services to other
logical partitions (LPARs) on the same physical server. VIOS helps streamline I/O
management, improve scalability, and enhance performance in virtualized environments.
6. Capacity on Demand (CoD): PowerVM supports Capacity on Demand (CoD), allowing
organizations to dynamically activate additional processor and memory resources on-demand
to meet workload requirements without requiring hardware upgrades.

IBM PowerVM is widely used in enterprise environments that rely on IBM Power Systems
servers for their mission-critical workloads, providing advanced virtualization capabilities
tailored to the unique architecture and capabilities of IBM's POWER processors.

Google Virtualization Offerings

Google offers several virtualization solutions and services, primarily targeted at cloud
computing and enterprise customers. Some of the key virtualization offerings from Google
include:

1. Google Cloud Platform (GCP) Compute Engine: GCP Compute Engine is Google's
Infrastructure-as-a-Service (IaaS) offering that provides virtual machines (VMs) running on
Google's global infrastructure. Customers can create and manage VM instances in the cloud,
choosing from various machine types and operating systems to run their workloads.
2. Google Kubernetes Engine (GKE): GKE is a managed Kubernetes service provided by
Google Cloud Platform. Kubernetes is an open-source container orchestration platform for
automating the deployment, scaling, and management of containerized applications. GKE
enables users to deploy and manage containerized applications using Kubernetes clusters
running on Google Cloud infrastructure.
3. Google Cloud VMware Engine: Google Cloud VMware Engine is a fully managed VMware
service that allows customers to migrate and run VMware workloads natively on Google Cloud
Platform. It provides a dedicated VMware environment running on Google Cloud
infrastructure, enabling organizations to leverage their existing VMware-based solutions while
benefiting from Google Cloud's scalability, reliability, and global reach.
4. Anthos: Anthos is Google Cloud's hybrid and multi-cloud platform that enables customers to
build, deploy, and manage applications across on-premises data centers, Google Cloud
Platform, and other public cloud environments. Anthos provides a consistent platform for
deploying and managing workloads using containers and Kubernetes, offering capabilities for
modernizing existing applications and building new cloud-native applications.
5. Google Cloud Functions: Google Cloud Functions is a serverless compute service that allows
developers to build and deploy event-driven functions in the cloud. Functions are triggered by
various events such as HTTP requests, cloud storage changes, or pub/sub messages, and they
automatically scale in response to demand, eliminating the need for managing infrastructure.
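
A minimal HTTP-triggered function in Python, using the open-source Functions Framework that
Cloud Functions builds on (the function name and query parameter here are illustrative):

import functions_framework

@functions_framework.http
def hello_http(request):
    # 'request' is a Flask request object; read an optional query parameter.
    name = request.args.get("name", "world")
    return f"Hello, {name}!"

# Typically deployed with something like:
#   gcloud functions deploy hello_http --runtime python312 --trigger-http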

These are some of the key virtualization offerings and services provided by Google, catering
to different use cases and deployment scenarios in cloud computing and enterprise
environments.

Case Study
Here's a hypothetical case study illustrating the benefits of virtualization in an enterprise
environment:

Company Background: XYZ Corporation is a medium-sized enterprise with offices located
in multiple cities. The company operates in the finance sector and handles sensitive financial
data from clients. They have a traditional IT infrastructure with physical servers and
workstations distributed across various locations.

Challenges:

1. Resource Underutilization: The company's physical servers are running at low utilization
levels, resulting in inefficient resource allocation and increased hardware costs.
2. Disaster Recovery: XYZ Corporation lacks a robust disaster recovery plan, making them
vulnerable to data loss and extended downtime in case of a disaster or system failure.
3. Testing and Development: The IT team struggles to provision and manage testing and
development environments efficiently, leading to delays in application development and
deployment.
4. Flexibility and Scalability: There is a lack of flexibility and scalability in the existing
infrastructure, making it challenging to adapt to changing business requirements and scale
resources as needed.

Solution: XYZ Corporation decides to implement virtualization technology to address these
challenges. They choose VMware vSphere as their virtualization platform due to its reliability,
performance, and comprehensive feature set.

Implementation:

1. Server Virtualization: XYZ Corporation virtualizes their physical servers using VMware
vSphere, consolidating multiple virtual machines (VMs) onto a smaller number of physical
servers. This improves resource utilization, reduces hardware costs, and simplifies server
management.
2. Disaster Recovery: They implement VMware Site Recovery Manager (SRM) to automate the
replication and failover of virtual machines to a secondary data center in case of a disaster. This
ensures business continuity and minimizes downtime in the event of a disaster.
3. Testing and Development: The IT team creates isolated virtualized environments for testing
and development purposes using VMware vSphere. They can easily provision and manage
virtual machines for different development projects, speeding up the application development
lifecycle.
4. Flexibility and Scalability: With VMware vSphere's dynamic resource allocation and
scalability features, XYZ Corporation gains the flexibility to scale resources up or down based
on demand. They can easily add or remove virtual machines as needed, enabling them to adapt
to changing business requirements more effectively.

Results:

1. Cost Savings: By consolidating their physical servers through virtualization, XYZ Corporation
reduces hardware costs and achieves higher resource utilization levels, leading to cost savings.
2. Improved Disaster Recovery: With VMware SRM, XYZ Corporation strengthens their
disaster recovery capabilities, ensuring data protection and minimizing downtime in case of a
disaster.
3. Faster Time-to-Market: The IT team can provision and manage testing and development
environments more efficiently using virtualization, resulting in faster application development
and deployment.
4. Increased Agility: Virtualization enables XYZ Corporation to scale resources quickly and
adapt to changing business needs, increasing their agility and competitiveness in the market.

In conclusion, virtualization technology, exemplified here through VMware vSphere, helps
XYZ Corporation overcome their IT challenges, improve operational efficiency, and achieve
business objectives effectively.
