VIRTUALIZATION LECTURE
What is Virtualization?
Virtualization is technology that you can use to create virtual representations of servers,
storage, networks, and other physical machines. Virtual software mimics the functions of
physical hardware to run multiple virtual machines simultaneously on a single physical
machine. Businesses use virtualization to use their hardware resources efficiently and get
greater returns from their investment.
By using virtualization, you can interact with any hardware resource with greater
flexibility. Physical servers consume electricity, take up storage space, and need maintenance.
You are often limited by physical proximity and network design if you want to access them.
Virtualization removes all these limitations by abstracting physical hardware functionality into
software. You can manage, maintain, and use your hardware infrastructure like an application
on the web.
Virtualization example
Consider a company that needs servers for three functions:
The email application requires more storage capacity and a Windows operating system.
The customer-facing application requires a Linux operating system and high processing power
to handle large volumes of website traffic.
The internal business application requires iOS and more internal memory (RAM).
To meet these requirements, the company sets up three different dedicated physical servers for
each application. The company must make a high initial investment and perform ongoing
maintenance and upgrades for one machine at a time. The company also cannot optimize its
computing capacity. It pays 100% of the servers’ maintenance costs but uses only a fraction of
their storage and processing capacities.
Efficient hardware use
With virtualization, the company creates three digital servers, or virtual machines, on a single
physical server. It specifies the operating system requirements for the virtual machines and can
use them like the physical servers. However, the company now has less hardware and fewer
related expenses.
Infrastructure as a service
The company can go one step further and use a cloud instance or virtual machine from a cloud
computing provider such as AWS. AWS manages all the underlying hardware, and the
company can request server resources with varying configurations. All the applications run on
these virtual servers without the users noticing any difference. Server management also
becomes easier for the company’s IT team.
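To make this concrete, the following is a minimal Python sketch of how such a cloud instance could be requested programmatically with boto3, the AWS SDK for Python. The AMI ID, instance type, and region are placeholders, and configured AWS credentials are assumed; this is an illustration, not a production deployment script.
# Minimal sketch: request one small virtual server (EC2 instance) from AWS.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # region is an assumption

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image ID
    InstanceType="t3.micro",           # small general-purpose configuration
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Requested virtual server: {instance_id}")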
What is virtualization?
To properly understand Kernel-based Virtual Machine (KVM), you first need to understand
some basic concepts in virtualization. Virtualization is a process that allows a computer to
share its hardware resources with multiple digitally separated environments. Each virtualized
environment runs within its allocated resources, such as memory, processing power, and
storage. With virtualization, organizations can switch between different operating systems on
the same server without rebooting.
Virtual machine
A virtual machine is a virtualized instance of a computer that runs its own operating system and applications by using resources allocated to it from the physical host.
Hypervisor
The hypervisor is a software component that manages multiple virtual machines in a computer.
It ensures that each virtual machine gets the allocated resources and does not interfere with the
operation of other virtual machines. There are two types of hypervisors.
Type 1 hypervisor
Also known as a native or bare-metal hypervisor, the type 1 hypervisor runs directly on the host's hardware without an underlying operating system. Type 1 hypervisors are typically used in server and data center environments.
Type 2 hypervisor
Also known as a hosted hypervisor, the type 2 hypervisor is installed on an operating system. Type 2 hypervisors are suitable for end-user computing.
What are the benefits of virtualization?
Efficient resource use
Virtualization improves the use of hardware resources in your data center. For example, instead of
running one server on one computer system, you can create a virtual server pool on the same
computer system by using and returning servers to the pool as required. Having fewer
underlying physical servers frees up space in your data center and saves money on electricity,
generators, and cooling appliances.
Automated IT management
Now that physical computers are virtual, you can manage them by using software tools.
Administrators create deployment and configuration programs to define virtual machine
templates. You can duplicate your infrastructure repeatedly and consistently and avoid error-
prone manual configurations.
Faster disaster recovery
When events such as natural disasters or cyberattacks negatively affect business operations,
regaining access to IT infrastructure and replacing or fixing a physical server can take hours or
even days. By contrast, the process takes minutes with virtualized environments. This prompt
response significantly improves resiliency and facilitates business continuity so that operations
can continue as scheduled.
How does virtualization work?
Virtualization uses specialized software, called a hypervisor, to create several cloud instances
or virtual machines on one physical computer.
After you install virtualization software on your computer, you can create one or more virtual
machines. You can access the virtual machines in the same way that you access other
applications on your computer. Your computer is called the host, and the virtual machine is
called the guest. Several guests can run on the host. Each guest has its own operating system,
which can be the same or different from the host operating system.
From the user’s perspective, the virtual machine operates like a typical server. It has settings,
configurations, and installed applications. Computing resources, such as central processing
units (CPUs), Random Access Memory (RAM), and storage appear the same as on a physical
server. You can also configure and update the guest operating systems and their applications
as necessary without affecting the host operating system.
Hypervisors
The hypervisor is the virtualization software that you install on your physical machine. It is a
software layer that acts as an intermediary between the virtual machines and the underlying
hardware or host operating system. The hypervisor coordinates access to the physical
environment so that several virtual machines have access to their own share of physical
resources.
For example, if the virtual machine requires computing resources, such as computer processing
power, the request first goes to the hypervisor. The hypervisor then passes the request to the
underlying hardware, which performs the task.
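As an illustration of this intermediary role, the short Python sketch below queries a hypervisor (KVM/QEMU in this example) for its virtual machines and the resources allocated to each. It assumes the libvirt Python bindings are installed and that a local libvirt daemon is reachable at qemu:///system; it is a read-only example, not a management tool.
# Minimal sketch: list the VMs a hypervisor manages and their allocated resources.
import libvirt

conn = libvirt.openReadOnly("qemu:///system")   # assumed connection URI
try:
    for dom in conn.listAllDomains():
        # info() returns [state, max memory KiB, memory KiB, vCPUs, CPU time ns]
        state, max_mem_kib, mem_kib, vcpus, cpu_time_ns = dom.info()
        print(f"{dom.name()}: active={bool(dom.isActive())}, "
              f"vCPUs={vcpus}, memory={mem_kib // 1024} MiB")
finally:
    conn.close()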
Type 1 hypervisors
A type 1 hypervisor, also called a bare-metal hypervisor, runs directly on the host's hardware without an underlying operating system. Examples include VMware ESXi and Microsoft Hyper-V.
Type 2 hypervisors
A type 2 hypervisor, also called a hosted hypervisor, runs as an application on the host operating system and is commonly used on end-user machines. Examples include Oracle VirtualBox and VMware Workstation.
Types of virtualization
You can use virtualization technology to get the functions of many different types of physical
infrastructure and all the benefits of a virtualized environment. You can go beyond virtual
machines to create a collection of virtual resources in your virtual environment.
Server virtualization
Server virtualization is a process that partitions a physical server into multiple virtual servers.
It is an efficient and cost-effective way to use server resources and deploy IT services in an
organization. Without server virtualization, physical servers use only a small portion of their
processing capacity, leaving the rest of the hardware idle.
Storage virtualization
Storage virtualization combines the functions of physical storage devices such as network
attached storage (NAS) and storage area network (SAN). You can pool the storage hardware
in your data center, even if it is from different vendors or of different types. Storage
virtualization uses all your physical data storage and creates a large unit of virtual storage that
you can assign and control by using management software. IT administrators can streamline
storage activities, such as archiving, backup, and recovery, because they can combine multiple
network storage devices virtually into a single storage device.
Network virtualization
Any computer network has hardware elements such as switches, routers, and firewalls. An
organization with offices in multiple geographic locations can have several different network
technologies working together to create its enterprise network. Network virtualization is a
process that combines all of these network resources to centralize administrative tasks.
Administrators can adjust and control these elements virtually without touching the physical
components, which greatly simplifies network management.
The following are two approaches to network virtualization. Software-defined networking (SDN) virtualizes the hardware that controls traffic routing, called the control plane. Network function virtualization virtualizes hardware appliances that deliver dedicated network functions, such as firewalls and load balancers; it is discussed in more detail later in this material.
Data virtualization
Modern organizations collect data from several sources and store it in different formats. They
might also store data in different places, such as in a cloud infrastructure and an on-premises
data center. Data virtualization creates a software layer between this data and the applications
that need it. Data virtualization tools process an application’s data request and return results in
a suitable format. Thus, organizations use data virtualization solutions to increase flexibility
for data integration and support cross-functional data analysis.
Application virtualization
Application virtualization pulls out the functions of applications to run on operating systems
other than the operating systems for which they were designed. For example, users can run a
Microsoft Windows application on a Linux machine without changing the machine
configuration. To achieve application virtualization, follow these practices:
Application streaming – Users stream the application from a remote server, so it runs only on
the end user's device when needed.
Server-based application virtualization – Users can access the remote application from their
browser or client interface without installing it.
Local application virtualization – The application code is shipped with its own environment to
run on all operating systems without changes.
Desktop virtualization
Most organizations have nontechnical staff that use desktop operating systems to run common
business applications. For instance, you might have the following staff:
A customer service team that requires a desktop computer with Windows 10 and customer-
relationship management software
A marketing team that requires Windows Vista for sales applications
You can use desktop virtualization to run these different desktop operating systems on virtual
machines, which your teams can access remotely. This type of virtualization makes desktop
management efficient and secure, saving money on desktop hardware. The following are types
of desktop virtualization.
Virtual desktop infrastructure runs virtual desktops on a remote server. Your users can access
them by using client devices.
In local desktop virtualization, you run the hypervisor on a local computer and create a virtual
computer with a different operating system. You can switch between your local and virtual
environment in the same way you can switch between applications.
Need for Virtualization
There are five major needs of virtualization which are described below:
Figure: Major needs of Virtualization.
1. ENHANCED PERFORMANCE
Currently, the typical end-user system (PC) is powerful enough to meet the user's basic computational requirements, along with many additional capabilities that are rarely used. Most such systems have enough spare resources to host a virtual machine manager and run a virtual machine with acceptable performance.
2. LIMITED USE OF HARDWARE AND SOFTWARE RESOURCES
Because users' PCs are already capable of meeting their regular computational needs, much of this hardware and software sits under-utilized, even though the machines could run 24/7 without interruption. The efficiency of the IT infrastructure could be increased by using these idle resources after hours for other purposes, and virtualization makes such an environment possible.
3. SHORTAGE OF SPACE
The continuous demand for additional capacity, whether storage or compute power, makes data centers grow rapidly. Companies such as Google, Microsoft, and Amazon expand their infrastructure by building data centers as their needs dictate, but most enterprises cannot afford to build another data center to accommodate additional resource capacity. This has led to the spread of a technique known as server consolidation.
4. ECO-FRIENDLY INITIATIVES
Corporations are actively looking for ways to reduce the power consumed by their systems. Data centers are major power consumers: keeping them operational requires a continuous power supply, and a large amount of additional energy is needed to keep them cool. Server consolidation reduces both the power consumed and the cooling load by lowering the number of servers, and virtualization provides a sophisticated way to achieve server consolidation.
5. ADMINISTRATIVE COSTS
The growing demand for capacity, which translates into more servers in a data center, is also responsible for a significant increase in administrative costs. Common system administration tasks include hardware monitoring, server setup and updates, replacement of defective hardware, monitoring of server resources, and backups. These are personnel-intensive operations, and administrative costs grow with the number of servers. Virtualization decreases the number of servers required for a given workload and therefore reduces the cost of administrative staff.
Limitations of VM virtualization
There are a few limitations of hardware (VM) virtualization that lead toward containerization. Let's look at a few of them.
Deployment complexity
Every time you wish to deploy your application, you also have to ensure that its software prerequisites, such as web servers, database servers, runtimes, and supporting software such as plugins and drivers, are installed on the machine. With teams obliged to deliver at high speed, VM virtualization creates extra friction and latency.
Operational costs
Every IT organization needs an operations team to manage the infrastructure's regular
maintenance activities. The team's responsibility is to ensure that activities such as procuring
machines, maintaining SOX Compliance, executing regular updates, and security patches are
done in a timely manner. The following are a few drawbacks that add up to operational costs
due to VM virtualization:
The size of the operations team is proportional to the size of the IT infrastructure. Larger infrastructures require larger teams, and therefore more cost to maintain.
Every enterprise is obliged to provide continuous service to its customers, for which it has to employ redundant and recovery systems. Recovery systems often require the same amount of resources and configuration as the original ones, which means roughly twice the original cost.
Enterprises also have to pay for licenses for each guest OS no matter how little the
usage may be.
Shareability and turnaround time
Even once you have successfully installed an application on a VM, VMs are not easily shareable as application packages because of their very large size, which makes them a poor fit for DevOps-style work cultures. Imagine that your applications need to go through rigorous testing cycles to ensure high quality. Every time you want to deploy and test a developed feature, a new environment needs to be created and configured, the application deployed on the machine, and the test cases executed. In agile teams, releases happen quite often, so the turnaround time for the testing phase to begin and for results to come out is high because of the machine provisioning and preparation work.
Virtualization Techniques
1. Full Virtualization: This technique fully virtualizes the underlying physical server so that applications and software operate on the virtualized partitions in much the same way as on a dedicated machine, creating an environment that behaves as if it were a unique server. Full virtualization enables administrators to run an unchanged, entirely virtualized operating system.
Advantages:
Full virtualization makes it possible to consolidate existing systems onto newer ones with increased efficiency and better-organized hardware.
This methodology contributes effectively to trimming the operating costs involved in repairing and upgrading older systems.
Less capable systems can be consolidated with this technique, reducing physical space requirements and improving the overall performance of the organization.
2. Virtual machines: Virtual machines, popularly known as VMs, imitate real or hypothetical hardware and require real resources from the host, which is the actual machine running the VMs. A virtual machine monitor (VMM) is used in cases where CPU instructions require extra privileges and cannot be executed in user space.
Advantages:
This methodology allows system emulators to run an arbitrary guest operating system without altering the guest OS.
VMMs are used to examine the executed code and ensure that it runs safely. For these benefits the approach is widely used by Microsoft Virtual Server, QEMU, Parallels, VirtualBox, and many VMware products.
3. Paravirtualization: In this technique the guest operating system is modified so that it communicates with the hypervisor through explicit calls rather than executing privileged instructions directly, which reduces virtualization overhead.
Advantages:
It enhances performance notably by decreasing the number of VMM calls and preventing the needless use of privileged instructions.
It allows many operating systems to run on a single server.
This method is considered highly advantageous because it increases performance per server without the overhead of a full host operating system.
4. Operating System level Virtualization: Operating-system-level virtualization is specially intended to provide the security and isolation needed to run multiple applications, or replicas of the same operating system, on the same server. Isolating and segregating workloads in a safe environment enables numerous applications to run on and share a single server easily. This technique is used by Linux-VServer, FreeBSD Jails, OpenVZ, Solaris Zones, and Virtuozzo.
Advantages:
When compared with all the techniques mentioned above, OS-level virtualization is considered to give the best performance and scalability.
This technique is easy to control and comparatively uncomplicated to manage, as everything can be administered from the host system.
Hypervisor
A hypervisor is a form of virtualization software used in Cloud hosting to divide and allocate
the resources on various pieces of hardware. The program which provides partitioning,
isolation, or abstraction is called a virtualization hypervisor. The hypervisor is a hardware
virtualization technique that allows multiple guest operating systems (OS) to run on a single
host system at the same time. A hypervisor is sometimes also called a virtual machine
manager (VMM).
Types of Hypervisor –
TYPE-1 Hypervisor:
The hypervisor runs directly on the underlying host system. It is also known as a “Native
Hypervisor” or “Bare metal hypervisor”. It does not require any base server operating system.
It has direct access to hardware resources. Examples of Type 1 hypervisors include VMware
ESXi, Citrix XenServer, and Microsoft Hyper-V hypervisor.
TYPE-2 Hypervisor:
The hypervisor runs as an application on a host operating system, which is why it is also known as a "Hosted Hypervisor". Examples of Type 2 hypervisors include Oracle VirtualBox and VMware Workstation.
The virtual machine monitor itself is commonly described in terms of three modules:
1. DISPATCHER:
The dispatcher behaves like the entry point of the monitor and reroutes the instructions issued by the virtual machine instance to one of the other two modules.
2. ALLOCATOR:
The allocator is responsible for deciding the system resources to be provided to the virtual
machine instance. It means whenever a virtual machine tries to execute an instruction that
results in changing the machine resources associated with the virtual machine, the
allocator is invoked by the dispatcher.
3. INTERPRETER:
The interpreter module consists of interpreter routines. These are executed, whenever a
virtual machine executes a privileged instruction.
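The toy Python model below is only an illustration of the control flow described above, not a real hypervisor: a dispatcher receives a trapped instruction from a guest and routes it either to the allocator (when it changes resource allocations) or to the interpreter (when it is a privileged instruction that must be emulated). The instruction names are invented for the example.
# Toy model of the monitor modules: dispatcher, allocator, interpreter.
class Allocator:
    def handle(self, vm, instruction):
        print(f"[allocator] adjusting resources of {vm} for '{instruction}'")

class Interpreter:
    def handle(self, vm, instruction):
        print(f"[interpreter] emulating privileged instruction '{instruction}' for {vm}")

class Dispatcher:
    RESOURCE_OPS = {"set_memory", "attach_disk"}      # change VM resources
    PRIVILEGED_OPS = {"halt", "write_cr3", "out_port"}  # must be emulated

    def __init__(self):
        self.allocator = Allocator()
        self.interpreter = Interpreter()

    def dispatch(self, vm, instruction):
        if instruction in self.RESOURCE_OPS:
            self.allocator.handle(vm, instruction)
        elif instruction in self.PRIVILEGED_OPS:
            self.interpreter.handle(vm, instruction)
        else:
            print(f"[dispatcher] '{instruction}' runs directly on the hardware")

monitor = Dispatcher()
monitor.dispatch("vm-1", "set_memory")   # routed to the allocator
monitor.dispatch("vm-1", "write_cr3")    # routed to the interpreter
monitor.dispatch("vm-1", "add")          # unprivileged, no intervention needed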
UNIT-II: SERVER AND DESKTOP VIRTUALIZATION
Virtual machine defined
A VM is a virtualized instance of a computer that can perform almost all of the same
functions as a computer, including running applications and operating systems.
Virtual machines run on a physical machine and access computing resources from software
called a hypervisor. The hypervisor abstracts the physical machine’s resources into a pool that
can be provisioned and distributed as needed, enabling multiple VMs to run on a single
physical machine.
How multiple virtual machines work
Multiple VMs can be hosted on a single physical machine, often a server, and then
managed using virtual machine software. This provides flexibility for compute resources
(compute, storage, network) to be distributed among VMs as needed, increasing overall
efficiency. This architecture provides the basic building blocks for the advanced virtualized
resources we use today, including cloud computing.
System virtual machines
These kinds of VMs are completely virtualized to substitute for a real machine. They rely on a hypervisor, such as VMware ESXi, which can run either on a host operating system or directly on bare hardware.
The hardware resources of the host can be shared and managed by more than one virtual
machine. This makes it possible to create more than one environment on the host system. Even
though these environments are on the same physical host, they are kept separate. This lets
several single-tasking operating systems share resources concurrently.
Different VMs running on a single host can share memory by applying memory-overcommitment techniques. In this way, memory pages with identical content can be shared among multiple virtual machines on the same host, which is especially helpful for read-only pages.
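The following toy Python sketch illustrates the idea of sharing identical read-only pages: pages are keyed by a hash of their content, so two guests that map the same page consume only one physical copy. It is a conceptual model only, not how a real hypervisor implements page sharing.
# Toy content-based page sharing between virtual machines.
import hashlib

shared_pages = {}          # content hash -> single stored copy of the page
vm_page_tables = {}        # vm name -> list of content hashes it maps

def map_page(vm, content: bytes):
    digest = hashlib.sha256(content).hexdigest()
    shared_pages.setdefault(digest, content)          # store only the first copy
    vm_page_tables.setdefault(vm, []).append(digest)

kernel_page = b"\x90" * 4096       # identical read-only page used by both guests
map_page("vm-a", kernel_page)
map_page("vm-b", kernel_page)
map_page("vm-b", b"\x00" * 4096)   # a page unique to vm-b

print("pages mapped by guests:", sum(len(v) for v in vm_page_tables.values()))
print("physical copies stored:", len(shared_pages))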
Advantages of system VMs are:
System virtual machines can provide a simulated hardware environment, either via emulation or just-in-time compilation, that is distinct from the instruction set architecture (ISA) of the host.
The virtual machine software that users choose typically comes with application provisioning, high availability, maintenance, and disaster-recovery features. This makes virtual machine tooling more straightforward to use and allows many operating systems to operate effectively on a single host.
The presence of a virtual partition allows for multiple OS environments to co-exist on
the same primary drive. This partition allows for sharing files generated from the host or the
guest operating environment. Other processes, such as software installations, wireless
connections, and remote replications, such as printing, can be performed efficiently in the
host’s or guest’s environment.
It allows developers to perform tasks without changing operating systems. All the
generated data is stored on the host’s hard drive.
Process virtual machines
Process virtual machines are implemented using interpreters and provide high-level abstractions. They are often used with the Java programming language, whose programs execute on the Java virtual machine. Two further examples of process VMs are the Parrot virtual machine and the .NET Framework, which runs on the Common Language Runtime VM. Such VMs also operate as an abstraction layer for the programming language being used.
A process virtual machine may, under some circumstances, take on the role of an
abstraction layer between its users and the underlying communication mechanisms of a
computer cluster. Instead of a single process, such a virtual machine consists of one process running on each physical computer in the cluster.
These VMs are based on an existing language, so they do not come with a specific programming language; instead, their systems provide bindings for several programming languages, such as Fortran and C. In contrast to other process VMs, they can access all OS services and are not limited by the VM's system model, so they cannot be strictly categorized as virtual machines.
Server Virtualization
Server Virtualization is the process of dividing a physical server into several virtual
servers, called virtual private servers. Each virtual private server can run independently. The
concept of server virtualization is widely used in IT infrastructure to minimize costs by increasing the utilization of existing resources.
Types of Server Virtualization
1. Hypervisor Virtualization
The hypervisor is mainly used to allocate physical hardware resources to several smaller independent virtual machines, called "guests", on the host machine.
2. Full Virtualization
Full Virtualization uses a hypervisor to directly communicate with the CPU and
physical server. It provides the best isolation and security mechanism to the virtual machines.
The biggest disadvantage of full virtualization is that the hypervisor has its own processing needs, which can slow down application and server performance. VMware ESX Server is a well-known example of full virtualization.
3. Para Virtualization
Para virtualization is quite similar to full virtualization. Its advantages are that it is easier to use, offers enhanced performance, and does not require emulation overhead. Xen and UML primarily use para virtualization. The difference between full and para virtualization is that, in para virtualization, the hypervisor does not need as much processing power to manage the guest operating systems.
Hardware-assisted virtualization was introduced by AMD and Intel. It is also known as hardware virtualization, AMD virtualization (AMD-V), and Intel virtualization (Intel VT). It is designed to increase the performance of the processor, and its advantage is that it requires less hypervisor overhead.
6. Kernel-Level Virtualization
Advantages of Server Virtualization
1. Independent Restart
In server virtualization, each virtual server can be restarted independently without affecting the operation of the other virtual servers.
2. Low Cost
Server Virtualization can divide a single server into multiple virtual private servers, so
it reduces the cost of hardware components.
3. Disaster Recovery
Server virtualization allows us to deploy our resources in a simpler and faster way.
5. Security
It allows users to store their sensitive data inside the data centres.
Business benefits (example): One customer introduced zero clients running ACP ThinManager for device management on a server-based virtual infrastructure, in which application delivery was also made simpler through server-based deployment via Citrix/RDS.
Business Benefits of Server Virtualization
Virtualization is not just an IT trend. It is also not new, but it is new in many
organizations, as companies of all sizes invest in virtualization technologies to reap its many
benefits: server and desktop provisioning, reduction in physical servers, increased uptime and
availability, better disaster recovery, energy savings…and the list goes on.
Switching to virtualization means that the workloads happening on servers are not tied
to a specific piece of physical hardware and that multiple virtual workloads can occur
simultaneously on the same piece of machinery. The immediate benefits of virtualization
include higher server utilization rates and lower costs, but there are more sophisticated
advantages as well.
It is said that humans theoretically use only 10% of their brain capacity; similarly, most of the servers in a strictly physical environment are heavily under-utilized, using an estimated 5-15%
of their capacity. When you implement a virtualized server / cloud computing approach,
hardware utilization is increased because one physical server can now hold multiple virtual
machines. Applications no longer need their own server because each virtual machine on the
physical server now runs them. In 2011, IDC reported a 40% reduction in hardware and
software costs for IT departments that adopted a server virtualization strategy.
Time and cost add up substantially, not to mention the growing number of racks and
cables you would have to purchase to accommodate the increasing number of physical
servers. Virtualization is most certainly necessary for most businesses to keep up with the
explosion of data resources needed to keep pace with competitors.
4. Increased Productivity
Fewer physical servers means there are fewer of them to maintain and manage. Applications that used to take days or weeks to provision are now set up in minutes. This
leaves your IT/OT staff more time to spend on more productive tasks such as driving new
business initiatives, cutting expenses and raising revenue.
Among other server virtualization benefits, the migration of physical servers to virtual
machines allows you to consolidate them onto fewer physical servers. The result? Cooling and
power costs are significantly reduced, which means not only will you be “going green,” but
you will also have more green to spend elsewhere. According to VMware, server consolidation
reduces energy costs by up to 80%. Another major plus is the ability to power down servers
without affecting applications or users.
Scalability and flexibility: Server consolidation can also improve the scalability and
flexibility of the cloud environment. By using virtualization technology, organizations can
easily add or remove virtual servers as needed, which allows them to more easily adjust to
changing business needs.
Management simplicity: Managing multiple servers can be complex and time-consuming.
Consolidating servers can help to reduce the complexity of managing multiple servers, by
providing a single point of management. This can help organizations to reduce the effort and
costs associated with managing multiple servers.
Better utilization of resources: By consolidating servers, organizations can improve the utilization of resources, which can lead to better performance and cost savings. In short, server consolidation in cloud computing is the process of combining multiple servers into a single, more powerful server or cluster of servers in order to improve the efficiency and cost-effectiveness of the cloud computing environment.
Virtualization Platform
Hardware Environment
These are the servers, storage and networking components of the data center being
virtualized. It is possible to reuse the hardware already in place. In fact, virtualization supports
this by providing a consistent interface to the application to be deployed even if the hardware
differs. The hardware environment chosen plays a crucial role in determining the software
platform to be used.
Software Platform
The software layer abstracts the hardware environment to provide the hosted
environments with an idealized environment. Distinct virtualization software has unique
hardware requirements so when existing hardware is to be used, software choices are limited
by compatibility with the hardware. Even when compatible hardware is used, the specifics
influence performance. If exceptionally high performance is critical, hardware should be
chosen very carefully.
Virtualization Platforms
The platform to use should not be decided based on cost alone, as sub-optimal solutions
will eventually necessitate additional expenditures, resulting in higher long-term costs. There
is no best solution here as finding the right fit involves individual considerations. That said,
there are a few pointers when it comes to choosing the best possible virtualization platform.
Reduction of Capital Expenditures (CAPEX): Typically, servers have low utilization levels, averaging around 15 percent. Virtualization can increase throughput four-fold, which means a company can use less hardware and reduce energy costs (see the worked example after this list). Note that this must be weighed against the total cost of ownership of the virtualization platform.
Server Consolidation: This is an approach to the efficient usage of computer server resources
in order to reduce the total number of servers or server locations that an organization requires.
Ease of System Management: Ease of management refers to how rapidly new services such as platform as a service (PaaS), infrastructure as a service (IaaS), and software as a service (SaaS) can be deployed. It also refers to agility and speed in deploying new application stacks.
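As a rough worked example, using only the illustrative figures quoted above (around 15 percent average utilization and a four-fold throughput increase), the short Python calculation below estimates how many physical servers remain after consolidation. The starting server count is invented for the example.
# Back-of-the-envelope consolidation estimate with the illustrative figures above.
physical_servers = 100            # hypothetical starting fleet
avg_utilization = 0.15            # fraction of capacity actually used today
consolidation_factor = 4          # four-fold throughput improvement claimed above

servers_after = physical_servers / consolidation_factor
print(f"Servers needed after consolidation: {servers_after:.0f}")
print(f"Effective utilization per remaining server: {avg_utilization * consolidation_factor:.0%}")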
The top virtualization platforms are more or less evenly matched as far as features are
concerned. However, features are not the only consideration when choosing a hypervisor.
Some of the other important factors are hardware compatibility and cost of ownership. System
support, stability and scalability deserve a long hard look as well. The key is to assess all of a
platform's features and costs against your company's needs and budget. The key isn't just to virtualize; for true success, companies must find the right fit.
Desktop virtualization is the one-stop solution to many problems that organizations face today,
including regulatory compliance, security, cost control, business continuity and manageability.
1. Virtual Desktop Infrastructure (VDI)
Virtual desktop infrastructure (VDI) uses host-based virtual machines (VMs) to run the operating system. It delivers non-persistent and persistent virtual desktops to all connected devices. With a non-persistent virtual desktop, employees access a virtual desktop from a shared pool, whereas with a persistent virtual desktop each user gets a unique desktop image that can be customized with data and applications. VDI gives each user their own virtual machine and supports only one user per operating system.
2. Remote Desktop Services
Remote desktop services (RDS) or remote desktop session host (RDSH) are beneficial
where only limited applications require virtualization. They allow users to remotely access
Windows applications and desktops using the Microsoft Windows Server operating system.
RDS is a more cost-effective solution, since one Windows server can support multiple users.
3. Desktop-as-a-Service (DaaS)
With Desktop-as-a-Service (DaaS), a third-party cloud provider hosts and manages the virtual desktop infrastructure and delivers virtual desktops to users over the internet, typically on a subscription basis.
When choosing among these desktop virtualization options, consider the following:
Identify the costs associated with setting up the infrastructure and deployment of virtual desktops
Determine whether you have the required resources and expertise to adopt these solutions
Determine the infrastructure control capabilities of the virtualization providers
Determine the level of elasticity and agility you want in your desktop virtualization solution
Why do you need desktop virtualization for your business?
Beyond saving money and time, desktop virtualization offers various other benefits for
organizations. These include:
Better security and control: Virtual desktops store data in a secure environment and allow
central management of confidential information to prevent data leaks. Desktop virtualization
solutions restrict the users from saving or copying data to any source other than its servers,
making it hard to get crucial company information out. Reliable virtualization solution
providers offer multiple layers of cloud safeguards such as the highest quality encryption,
switches, routers and constant monitoring to eliminate threats and protect users’ data.
Ease of maintenance: Unlike traditional computers, virtual desktops are far easier to maintain.
End users don't need to update or download the necessary programs individually, since these
are centrally managed by the IT department.
IT admin can also easily keep track of the software assets through a virtual desktop. Once a
user logs off from the virtual desktop it can reset, and any customizations or software programs
downloaded on the desktop can be easily removed. It also helps prevent system slowdown
caused by customizations and software downloads.
Remote work: Since virtual desktops are connected to a central server, provision for new
desktops can be made in minutes so they are instantly available to new users. Instead of
manually setting up a new desktop for new employees, IT admins can deploy a ready-to-go
virtual desktop to the new user’s device using desktop virtualization. The users can access and
interact with the operating systems and applications from virtually anywhere with an internet
connection.
Resource management: Resources for desktop virtualization are located in a data center,
which allows the pooling of resources for better efficiency. With desktop virtualization, IT
admins can maximize their hardware investment returns by consolidating the majority of their
computing in a data center. This helps eliminate the need to push application and operating
system updates to the end-user machines.
Organizations can deploy less expensive and less powerful devices to end-users since
they are only used for input and output. IT departments can save money and resources that
would otherwise be used to deploy more expensive and powerful machines.
Reduced costs: Desktop virtualization solutions help shift the IT budget from capital expenses
to operating expenses. Organizations can prolong the shelf life of their traditional computers
and other less powerful machines by delivering compute-intensive applications via VMs hosted
on a data center.
IT departments can also significantly save costs on software licensing as you only need
to install and update the software on a single, central server instead of multiple end-user
workstations. Savings on energy bills, capital costs, licensing costs, IT support costs and
upfront purchasing costs can reduce the overall IT operating costs by almost 70%.
Increased employee productivity: Employees’ productivity may increase when they can
easily access the organization’s computing resources from any supported device, anywhere and
anytime. Employees can work in a comfortable environment and still be able to access all
applications and software programs that would otherwise only be available on their office
desktops. Desktop virtualization also enables a seamless and faster employee onboarding process and easier provisioning for remote workers, supporting their productivity.
Improved flexibility: Desktop virtualization reduces the need to configure desktops for each
end-user. These solutions help organizations manage and customize desktops through a single
interface and eliminate the need to personalize each desktop. Desktop virtualization lets the
administrator set permissions for access to programs and files already stored on the central
server with just a few clicks. Employees can access the required programs from anywhere,
offering them better work flexibility.
Advantages of Network Virtualization
Reduces hardware costs – With network virtualization, companies save money by using less physical equipment. This approach uses software to handle tasks that once needed many separate machines.
Enhances network flexibility – Changing network setups becomes easier and quicker, as
virtual networks can be adjusted without altering physical wires or devices.
Simplifies management tasks – Keeping track of a network and making changes is less
complicated because you can manage everything from one place, instead of handling lots of
separate devices.
Improves disaster recovery – If something goes wrong, like a natural disaster, network
virtualization helps businesses get their systems back up and running quickly because they can
move their network to another location virtually.
Supports multiple applications – Different programs and services can run on the same
physical network without interfering with each other, making it easier for businesses to use
various applications smoothly.
Disadvantages of Network Virtualization
Complex setup process – Setting up a virtual network can be complicated. It often requires
specialized knowledge and careful planning to ensure everything works together correctly.
Increased management overhead – Keeping track of a virtual network needs extra effort.
More components and layers mean more things to manage, which can be time-consuming.
Potential security vulnerabilities – When you create virtual networks, there might be more
chances for hackers to find a way in. This can happen if the system isn’t set up with strong
protections.
Compatibility issues with hardware – Sometimes, the hardware you have might not work
well with virtual networks. This can lead to extra costs if you need to buy new equipment that’s
compatible.
Latency and performance concerns – Virtual networks can sometimes slow down data. This
is because information has to travel through more steps before it gets to where it’s going.
Network Functions Virtualization (NFV)
Network functions virtualization (NFV) is the replacement of network appliance
hardware with virtual machines. The virtual machines use a hypervisor to run networking
software and processes such as routing and load balancing.
Benefits of NFV include the following:
Pay-as-you-go: Pay-as-you-go NFV models can reduce costs because businesses pay only for
what they need.
Fewer appliances: Because NFV runs on virtual machines instead of physical machines, fewer
appliances are necessary and operational costs are lower.
Scalability: Scaling the network architecture with virtual machines is faster and easier, and it
does not require purchasing additional hardware.
NFV architecture
In a traditional network architecture, individual proprietary hardware devices such as
routers, switches, gateways, firewalls, load balancers and intrusion detection systems all carry
out different networking tasks. A virtualized network replaces these pieces of equipment with
software applications that run on virtual machines to perform networking tasks.
An NFV architecture consists of three parts:
Centralized virtual network infrastructure: An NFV infrastructure may be based on either
a container management platform or a hypervisor that abstracts the compute, storage and
network resources.
Software applications: Software replaces the hardware components of a traditional network
architecture to deliver the different types of network functionality.
Framework: A framework is needed to manage the infrastructure and provision network
functionality.
SD-WAN (Software-Defined Wide Area Network) Architectures
1. On-Premises SD-WAN
The SD-WAN hardware resides on-site. Network operators have direct, secure access
and control over the network and hardware, offering enhanced security for sensitive
information.
2. Cloud-Enabled SD-WAN
This form of SD-WAN architecture connects to a virtual cloud gateway over the
internet, enhancing network accessibility and facilitating better integration and performance
with cloud-native applications.
3. Cloud-Enabled with Backbone SD-WAN
This architecture gives organizations an extra layer of security by connecting the
network to a nearby point of presence (PoP), like a data center. It allows traffic to shift from
the public internet to a private connection, enhancing network security and providing a fallback
in case of connection failures.
In simple terms, think of SD-WAN as a smart traffic cop for your internet connections.
It looks at all the different paths data can take to get from one place to another and chooses the
best route based on factors like speed, cost, and reliability. This helps businesses improve their
network performance, reduce costs, and make their network more flexible and responsive to
their needs.
Through the use of intelligent algorithms and real-time monitoring, SD-WAN can dynamically
route traffic across the most optimal paths, based on factors like latency, packet loss, and
available bandwidth. This ensures that critical applications receive the necessary resources and
guarantees a consistent user experience.
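The following simplified Python sketch illustrates this kind of dynamic path selection: each available WAN link is scored from measured latency, packet loss, and free bandwidth, and traffic is steered over the best-scoring path. The metrics and weights are illustrative only and are not taken from any particular SD-WAN product.
# Simplified path selection: score each WAN link and pick the best one.
paths = [
    {"name": "MPLS",      "latency_ms": 20, "loss_pct": 0.1, "free_mbps": 50},
    {"name": "Broadband", "latency_ms": 35, "loss_pct": 0.5, "free_mbps": 400},
    {"name": "LTE",       "latency_ms": 60, "loss_pct": 2.0, "free_mbps": 30},
]

def score(path):
    # Lower latency and loss are better; more free bandwidth is better.
    return path["free_mbps"] - 2 * path["latency_ms"] - 100 * path["loss_pct"]

best = max(paths, key=score)
print(f"Routing critical traffic over: {best['name']}")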
SD-WAN brings a multitude of essential features that set it apart from conventional WAN solutions:
1. Centralized management: SD-WAN provides a single interface to monitor and configure
the entire network, simplifying management tasks and reducing operational overhead.
2. Dynamic path selection: SD-WAN can intelligently select the best path for traffic based on
real-time conditions, ensuring optimal performance and reliability.
3. Quality of Service (QoS): SD-WAN enables prioritization of critical applications and traffic
types, ensuring that they receive the necessary bandwidth and performance.
4. Security: SD-WAN incorporates security features like encryption, segmentation, and threat
detection to protect network traffic and data.
5. Scalability: SD-WAN can easily scale to accommodate growing network demands, allowing
organizations to add new branches or increase bandwidth without significant hardware
investments.
Benefits of SD-WAN
SD-WAN offers several benefits for organizations:
Cost savings: By leveraging inexpensive broadband or LTE connections alongside
traditional MPLS, organizations can significantly reduce their WAN costs.
Improved performance: SD-WAN can optimize traffic routing and prioritize critical
applications, resulting in faster application performance and reduced latency.
Enhanced security: With built-in security features, SD-WAN improves data protection
and network security, reducing the risk of unauthorized access and data breaches.
Simplified management: SD-WAN provides a centralized management interface,
making it easier to configure, monitor, and troubleshoot the network.
Flexibility and agility: SD-WAN enables organizations to quickly adapt to changing
business needs, allowing for rapid deployment of new branches or services.
Storage Virtualization
Storage virtualization is a major component of storage servers, in the form of functional RAID levels and controllers. Applications and operating systems on the device can directly access the discs for writing. Local storage is configured by the controllers into RAID groups, and the operating system sees the storage based on that configuration. However, the storage is abstracted, and the controller determines how to write or retrieve the requested data for the operating system. Storage virtualization is also important in various other forms:
File servers: The operating system doesn't need to know how to write to physical
media; it can write data to a remote location.
WAN Accelerators: WAN accelerators allow you to provide re-requested blocks at
LAN speed without affecting WAN performance. This eliminates the need to transfer
duplicate copies of the same material over WAN environments.
SAN and NAS: Storage is presented to the operating system over the network. NAS (Network Attached Storage) presents the storage as file operations (like NFS). SAN (Storage Area Network) technologies present the storage as block-level storage (like Fibre Channel), and the operating system issues its instructions as if the storage were a locally attached device.
Storage Tiering: Using the storage pool concept as an entry point, storage tiering analyses the most frequently used data and places it in the best-performing storage pool, while the least-used data is kept in the storage pool with the lowest performance (see the sketch below).
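The toy Python sketch below illustrates the tiering decision: blocks with the highest access counts are kept in the fast pool and the rest fall to the slower pool. The pool names, block names, and capacity are illustrative.
# Toy storage tiering: place the hottest blocks in the fast pool.
access_counts = {"blk-1": 950, "blk-2": 12, "blk-3": 430, "blk-4": 3, "blk-5": 88}
fast_pool_capacity = 2            # number of blocks the fast tier can hold

ranked = sorted(access_counts, key=access_counts.get, reverse=True)
fast_tier = set(ranked[:fast_pool_capacity])
slow_tier = set(ranked[fast_pool_capacity:])

print("fast tier (e.g. SSD pool):", sorted(fast_tier))
print("slow tier (e.g. HDD pool):", sorted(slow_tier))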
Memory Virtualization:
Memory virtualization gathers volatile random access memory (RAM) resources from
many data centre systems, making them accessible to any cluster member machine.
Software performance issues commonly occur from physical memory limits. Memory
virtualization solves this issue by enabling networked, and hence distributed, servers to
share a pool of memory. Applications can utilise a vast quantity of memory to boost
system utilisation, enhance memory usage efficiency, and open up new use cases when
this feature is integrated into the network.
Shared memory systems and memory virtualization solutions are different. Because
shared memory systems do not allow memory resources to be abstracted, they can only
be implemented with a single instance of an operating system (that is, not in a clustered
application environment).
Memory virtualization differs from flash memory-based storage, like solid-state drives
(SSDs), in that the former replaces or enhances regular RAM, while the latter replaces
hard drives (networked or not).
Products based on Memory Virtualization are: ScaleMP, RNA Networks Memory
Virtualization Platform, Oracle Coherence and GigaSpaces.
Implementations
Application level integration
In this case, applications running on connected computers connect to the memory pool directly
through an API or the file system.
Features
1. Virtual Address Space: The first stage in memory virtualization is creating a virtual address space for each programme that maps to physical memory addresses. Because the virtual address spaces can together exceed the available physical memory, numerous applications can run simultaneously.
2. Page Tables: The operating system keeps track of the memory pages used by each app and
their matching physical memory addresses in order to manage the mapping between virtual
and physical memory addresses. This data structure is known as a page table.
3. Memory Paging: A page fault occurs when an application tries to access a memory page
that is not already in physical memory. The OS reacts to this by loading the requested page
from disc into physical memory and swapping out a page of memory from physical memory
to disc.
4. Memory compression: Different memory compression algorithms, which analyse the
contents of memory pages and compress them to conserve space, are used to make better
use of physical memory. A compressed page is instantly decompressed when a programme
wants to access it.
5. Memory Overcommitment: Virtualization makes memory overcommitment possible, in which applications are given access to more virtual memory than is physically available. Because not all memory pages are actively used at once, the system can employ memory paging and compression to release physical memory as needed (see the sketch below).
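The following toy Python simulation ties these ideas together: a small page table maps virtual pages to physical frames, an access to an unmapped page triggers a "page fault", and a resident page is evicted (FIFO here, purely for simplicity) when physical memory is full. Real operating systems use far more sophisticated replacement policies.
# Toy demand-paging simulation with a FIFO replacement policy.
from collections import OrderedDict

PHYSICAL_FRAMES = 3
page_table = OrderedDict()        # virtual page -> physical frame (insertion order = FIFO)

def access(virtual_page):
    if virtual_page in page_table:
        print(f"page {virtual_page}: hit in frame {page_table[virtual_page]}")
        return
    print(f"page {virtual_page}: page fault, loading from disk")
    if len(page_table) >= PHYSICAL_FRAMES:
        evicted, frame = page_table.popitem(last=False)   # evict the oldest page
        print(f"  evicting page {evicted} from frame {frame}")
    else:
        frame = len(page_table)                           # next free frame
    page_table[virtual_page] = frame

for vp in [0, 1, 2, 0, 3, 1]:
    access(vp)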
Benefits of memory virtualization include the following:
2. Memory Isolation: Each process has its own virtual memory space, which provides memory isolation and protects processes from interfering with each other's memory.
4. Efficient Memory Utilization: By using techniques like demand paging and page
replacement, memory virtualization optimizes the usage of physical memory by keeping
frequently accessed pages in memory and swapping out less used pages to disk.
Types of Storage Virtualization
1. Block-Level: Data is abstracted as fixed-size blocks; the logical block addresses seen by the host are mapped to physical blocks on the underlying devices (block-level virtualization is covered in more detail below).
2. Object-Level: Data is not immediately stored on a disc when using object storage. Data buckets are used to abstract it instead. You can retrieve this data from your programme using API (Application Programming Interface) calls. This may be a more scalable option than block storage when dealing with big data volumes. Hence, after arranging your buckets, you won't need to be concerned about running out of room.
3. File-Level: When someone wants another server to host their data, they use file server
software such as Samba and NFS. The files are kept in directories known as shares. As
a result, this eliminates the requirement for disc space management and permits
numerous users to share a storage device. File servers are useful for desktop PCs, virtual
servers, and servers.
4. Host-based: Access to the host or any connected devices is made possible via host-
based storage virtualization. The server's installed driver intercepts and reroutes the
input and output (IO) requests. These input/output (IO) requests are typically sent
towards a hard disc, but they can also be directed towards other devices, including a
USB flash drive. This kind of storage is mostly used for accessing actual installation
CDs or DVDs, which make it simple to install an operating system on the virtual
computer.
5. Network-based: The host and the storage are separated by a fibre channel switch. The
virtualization takes place and the IO requests are redirected at the switch. No specific
drivers are needed for this approach to function on any operating system.
6. Array-based: All of the arrays' IO requests are handled by a master array. This makes
data migrations easier and permits management from a single location.
Block-Level Storage Virtualization
2. Storage Virtualization Layer: A storage virtualization layer sits between the applications and the physical storage devices. It manages the allocation and retrieval of data blocks, providing a transparent interface to the applications (a simple sketch of this mapping follows this list).
3. Uniform Addressing: Each block of data is assigned a unique address within the virtualized
storage space. This allows for consistent addressing regardless of the physical location of the
data.
4. Dynamic Provisioning: Block-level virtualization enables dynamic provisioning of storage
space. Storage can be allocated or de-allocated on-the-fly without disrupting ongoing
operations.
5. Data Migration and Load Balancing: The virtualization layer can facilitate data migration
across different storage devices without affecting the applications using the data. This helps in
load balancing and optimizing storage performance.
7. Vendor Independence: Users can often mix and match storage devices from different
vendors within the virtualized storage pool. This promotes vendor independence and flexibility
in choosing hardware components.
8. Snapshot and Backup: Many block-level storage virtualization solutions offer features like
snapshots and backups. Snapshots allow for point-in-time copies of data, and backup processes
can be streamlined through centralized management.
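A minimal Python sketch of the mapping idea behind block-level virtualization follows: a table maps logical block addresses, as seen by the host, to (device, physical block) pairs, so data can be placed on any device in the pool and migrated without the application noticing. The device names and addresses are invented for the example.
# Toy block-level address remapping by a virtualization layer.
mapping = {}                                # logical block -> (device, physical block)

def allocate(logical_block, device, physical_block):
    mapping[logical_block] = (device, physical_block)

def read(logical_block):
    device, physical_block = mapping[logical_block]
    return f"reading logical block {logical_block} from {device}:{physical_block}"

def migrate(logical_block, new_device, new_physical_block):
    # Data is copied behind the scenes; only the mapping entry changes for the host.
    mapping[logical_block] = (new_device, new_physical_block)

allocate(0, "array-A", 1042)
print(read(0))
migrate(0, "array-B", 7)          # e.g. load balancing onto a newer array
print(read(0))                    # same logical address, new physical location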
Advantages of block-level storage virtualization include the following:
2. Improved Utilization: Virtualization allows for efficient use of storage capacity, as it enables pooling and dynamic allocation of resources based on demand.
3. Vendor Independence: Users can integrate storage devices from different vendors into a
unified storage pool, promoting flexibility and preventing vendor lock-in.
5. Data Migration and Load Balancing: The virtualization layer facilitates seamless data
migration across storage devices, aiding in load balancing and optimizing storage performance.
Disadvantages include the following:
4. Compatibility Issues: Integrating storage devices from different vendors may lead to compatibility issues or require additional effort to ensure seamless operation.
File-Level Storage Virtualization
2. Unified Namespace: It provides a unified namespace for files and directories, allowing users and applications to interact with a centralized and standardized file system.
4. Access Control and Security: Administrators can implement access control and security
policies at the file level, managing permissions for individual files or directories.
5. Dynamic Expansion and Contraction: The virtualization layer allows for dynamic
expansion or contraction of storage resources, making it easier to manage changing storage
requirements.
Advantages include the following:
3. Efficient Data Migration: Files can be migrated between storage devices without affecting user access, facilitating efficient data movement for load balancing or hardware upgrades.
5. Enhanced Access Control: Access control can be applied at the file level, allowing for
fine-grained permissions management.
Challenges include the following:
3. Compatibility Challenges: Some legacy applications or systems may not fully support file-level virtualization, leading to compatibility challenges.
4. Initial Setup Costs: There can be significant initial setup costs associated with
implementing file-level virtualization, including hardware and software investments.
5. Learning Curve: Adopting file-level storage virtualization may involve a learning curve
for administrators, especially if they are not familiar with the specific virtualization solution.
Address Space Remapping
The actual form of the mapping depends on the chosen implementation. Some implementations may limit the granularity of the mapping, which may limit the capabilities of the device. Typical granularities range from a single physical disk down to some small subset (multiples of megabytes or gigabytes) of the physical disk.
4. Security: Security measures must be in place to protect the mapping information and
prevent unauthorized access or tampering with the address space remapping process.
5. Data Integrity: Ensuring data integrity during address space remapping is crucial.
The virtualization layer must guarantee that data is correctly mapped to the intended
physical locations.
Challenges of Storage Virtualisation
Managing the different software and hardware can become difficult when there are many separate hardware and software elements.
Storage systems need frequent upgrades to meet the demands of evolving applications and huge volumes of data.
Despite the ease of accessing data with storage virtualisation, there is always a risk of
cyber-attacks and various cyber threats in virtual environments. That is, for the data
stored in virtual machines, data security and its governance are the major challenges.
Amongst the various vendors delivering storage virtualisation solutions, it’s important
to find a reliable one. As many a time, it happens when vendors provide storage
solutions but ignore the complexities of backing up virtual storage pools.
Similarly, they fall in situations when there is a need for immediate recovery of data in
case of hardware failure or any other issue.
Storage virtualisation, at times, can lead to access issues. This can be if the LAN
connection is disrupted, or internet access is lost due to some reason.
There comes a time when there is a need to switch from a smaller network to a larger
one, as the capacity of the current one is insufficient. The migration process is time-
consuming and can even result in downtime.
Additionally, problems like more significant data analysis, lack of agility, scalability,
and more rapid access to data are the common challenges companies face while
selecting storage solutions.
A SAN presents storage devices to a host such that the storage appears to be locally
attached. This simplified presentation of storage to a host is accomplished through the use
of different types of virtualization.
SANs perform an important role in an organization's Business Continuity Management
(BCM) activities (e.g., by spanning multiple sites).
SANs are commonly based on a switched fabric technology. Examples include Fibre
Channel (FC), Ethernet, and InfiniBand. Gateways may be used to move data between
different SAN technologies.
Fibre Channel is commonly used in enterprise environments. Fibre Channel may be used
to transport SCSI, NVMe, FICON, and other protocols.
Ethernet is commonly used in small and medium sized organizations. Ethernet
infrastructure can be used for SANs to converge storage and IP protocols onto the same
network. Ethernet may be used to transport SCSI, FCoE, NVMe, RDMA, and other
protocols.
InfiniBand is commonly used in high performance computing environments. InfiniBand
may be used to transport SRP, NVMe, RDMA, and other protocols.
The core of a SAN is its fabric: the scalable, high-performance network that interconnects
hosts -- servers -- and storage devices or subsystems. The design of the fabric is directly
responsible for the SAN's reliability and complexity. At its simplest, an FC SAN can simply
attach HBA ports on servers directly to corresponding ports on SAN storage arrays, often
using optical cables for top speed and support for networking over greater physical distances.
But such simple connectivity schemes belie the true power of a SAN. In actual practice, the
SAN fabric is designed to enhance storage reliability and availability by eliminating single
points of failure. A central strategy in creating a SAN is to employ a minimum of two
connections between any SAN elements. The goal is to ensure that at least one working
network path is always available between SAN hosts and SAN storage.
SAN architecture includes host components, fabric components and storage components.
Consider a simple example in which two SAN hosts must communicate with two SAN storage subsystems. Each host employs separate HBAs rather than a single multiport HBA, because a single HBA device would itself be a single point of failure. A port from each HBA is connected to a port on a different SAN switch, such as a Fibre Channel switch. Similarly, multiple ports on each SAN switch connect to different storage target devices or systems. This is a simple redundant fabric: remove any one connection, and both servers can still communicate with both storage systems, preserving storage access for the workloads on both servers.
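A minimal sketch of this redundancy principle, using a hypothetical two-host / two-switch / two-array topology like the one just described: removing any single link still leaves every host connected to every storage array. All names are illustrative.

# Minimal sketch: verify a redundant SAN fabric has no single point of failure
# at the link level.
from collections import deque
from itertools import product

links = {
    ("host1", "sw1"), ("host1", "sw2"),
    ("host2", "sw1"), ("host2", "sw2"),
    ("sw1", "array1"), ("sw1", "array2"),
    ("sw2", "array1"), ("sw2", "array2"),
}

def connected(a, b, live_links):
    """Breadth-first search over the surviving links."""
    adj = {}
    for x, y in live_links:
        adj.setdefault(x, set()).add(y)
        adj.setdefault(y, set()).add(x)
    seen, queue = {a}, deque([a])
    while queue:
        node = queue.popleft()
        if node == b:
            return True
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Remove each link in turn; every host must still reach every array.
for failed in links:
    remaining = links - {failed}
    assert all(connected(h, s, remaining)
               for h, s in product(["host1", "host2"], ["array1", "array2"]))
print("No single link failure isolates a host from storage.")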
Consider the basic behavior of a SAN and its fabric. A host server requires access to SAN storage, so the host internally creates a request to access the storage device. The traditional SCSI commands used for storage access are encapsulated into packets for the network -- in this case FC packets -- and the packets are structured according to the rules of the FC protocol. The packets are delivered to the host's HBA, which places them onto the network's optical or copper cables. The HBA transmits the request packet(s) to the SAN, where the request arrives at one of the SAN switches. The switch receives the request and forwards it to the corresponding storage device. In a storage array, the storage processor receives the request and interacts with the storage devices within the array to satisfy the host's request.
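Purely as a conceptual sketch (not a real Fibre Channel stack), the fragment below models a SCSI read being wrapped in an FC frame and forwarded by a switch based on its destination port; all identifiers and field names are simplified and invented.

# Conceptual sketch of the I/O path described above: a SCSI read request is
# wrapped in an FC frame and "routed" by a switch to the target storage port.
from dataclasses import dataclass

@dataclass
class ScsiCommand:
    opcode: str        # e.g. "READ(10)"
    lba: int           # logical block address
    blocks: int

@dataclass
class FcFrame:
    source_wwpn: str   # initiator port (host HBA)
    dest_wwpn: str     # target port (storage array)
    payload: ScsiCommand

def switch_forward(frame, forwarding_table):
    """A SAN switch forwards the frame based on its destination port."""
    return forwarding_table[frame.dest_wwpn]

cmd = ScsiCommand("READ(10)", lba=8192, blocks=64)
frame = FcFrame("10:00:aa:bb:cc:dd:ee:01", "50:06:01:60:88:99:aa:01", cmd)
table = {"50:06:01:60:88:99:aa:01": "switch port 7 -> array1 storage processor"}
print(switch_forward(frame, table))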
SAN Switches:
The SAN switch is the focal point of any SAN. As with most network switches, the SAN
switch receives a data packet, determines the source and destination of the packet and then
forwards that packet to the intended destination device. Ultimately, the SAN fabric
topology is defined by the number of switches, the type of switches -- such as backbone
switches, or modular or edge switches -- and the way in which the switches are
interconnected. Smaller SANs might use modular switches with 16, 24 or even 32 ports,
while larger SANs might use backbone switches with 64 or 128 ports. SAN switches can
be combined to create large and complex SAN fabrics that connect thousands of servers
and storage devices.
Virtual SAN. Virtualization technology was a natural fit for the SAN, encompassing both
storage and storage network resources to add flexibility and scalability to the underlying
physical SAN. A virtual SAN -- denoted with a capital V in VSAN -- is a form of isolation,
reminiscent of traditional SAN zoning, which essentially uses virtualization to create one
or more logical partitions or segments within the physical SAN. Traditional VSANs can
employ such isolation to manage SAN network traffic, enhance performance and improve
security. Thus, VSAN isolation can prevent potential problems on one segment of the SAN
from affecting other SAN segments, and the segments can be changed logically as needed
without the need to touch any physical SAN components. VMware also offers its own Virtual SAN (vSAN) technology.
Unified SAN. A SAN is noted for its support of block storage, which is typical for
enterprise applications. But file, object and other types of storage would traditionally
demand a separate storage system, such as network-attached storage (NAS). A SAN that
supports unified storage is capable of supporting multiple approaches -- such as file, block
and object-based storage -- within the same storage subsystem. Unified storage provides
such capabilities by handling multiple protocols, including file-based SMB and NFS, as
well as block-based, such as FC and iSCSI. By using a single storage platform for block
and file storage, users can take advantage of powerful features that are usually reserved
for traditional block-based SANs, such as storage snapshots, data replication, storage
tiering, data encryption, data compression and data deduplication.
Converged SAN. One common disadvantage to a traditional FC SAN is the cost and
complexity of a separate network dedicated to storage. iSCSI is one means of overcoming
the cost of a SAN by using common Ethernet networking components rather than FC
components. FCoE supports a converged SAN that can run FC communication directly
over Ethernet network components -- converging both common IP and FC storage
protocols onto a single low-cost network. FCoE works by encapsulating FC frames within
Ethernet frames to route and transport FC data across an Ethernet network. However,
FCoE relies on end-to-end support in network devices, which has been difficult to achieve
on a broad basis, making the choice of vendor limited.
Hyper-converged infrastructure. The data center use of HCI has grown dramatically in
recent years. HCI combines compute and storage resources into pre-packaged modules,
allowing modules -- also called nodes -- to be added as needed and managed through a
single common utility. HCI employs virtualization, which abstracts and pools all the
compute and storage resources. IT administrators then provision virtual machines and
storage from the available resource pools. The fundamental goal of HCI is to simplify
hardware deployment and management while allowing fast scalability.
SAN Benefits:
High performance. The typical SAN uses a separate network fabric that is dedicated to
storage tasks. The fabric is traditionally FC for top performance, though iSCSI and
converged networks are also available.
High scalability. The SAN can support extremely large deployments encompassing
thousands of SAN host servers and storage devices or even storage systems. New hosts and
storage can be added as required to build out the SAN to meet the organization's specific
requirements.
High availability. A traditional SAN is based on the idea of a network fabric, which --
ideally -- interconnects everything to everything else. This means a full-featured SAN
deployment has no single point of failure between a host and a storage device, and
communication across the fabric can always find an alternative path to maintain storage
availability to the workload.
Advanced management features. A SAN will support an array of useful enterprise-class
storage features, including data encryption, data deduplication, storage replication and self-
healing technologies intended to maximize storage capacity, security and data resilience.
Features are almost universally centralized and can easily be applied to all the storage
resources on the SAN.
SAN Disadvantages:
Complexity. Although more convergence options, such as FCoE and unified options,
exist for SANs today, traditional SANs present the added complexity of a second
network -- complete with costly, dedicated HBAs on the host servers, switches and
cabling within a complex and redundant fabric and storage processor ports at the
storage arrays. Such networks must be designed and monitored with care, but the
complexity is increasingly troublesome for IT organizations with fewer staff and
smaller budgets.
Scale. Considering the cost, a SAN is generally effective only in larger and more
complex environments where there are many servers and significant storage. It's
certainly possible to implement a SAN on a small scale, but the cost and complexity
are difficult to justify. Smaller deployments can often achieve satisfactory results using
an iSCSI SAN, a converged SAN over a single common network -- such as FCoE -- or
an HCI deployment, which is adept at pooling and provisioning resources.
NAS Components:
CPU. The heart of every NAS is a computer that includes the central processing
unit (CPU) and memory. The CPU is responsible for running the NAS OS, reading
and writing data against storage, handling user access and even integrating with
cloud storage if so designed. Where typical computers or servers use a general-
purpose CPU, a dedicated device such as NAS might use a specialized CPU
designed for high performance and low power consumption in NAS use cases.
Network interface. Small NAS devices designed for desktop or single-user use
might allow for direct computer connections, such as USB or limited wireless (Wi-
Fi) connectivity. But any business NAS intended for data sharing and file serving
will demand a physical network connection, such as a cabled Ethernet interface,
giving the NAS a unique IP address. This is often considered part of the NAS
hardware suite, along with the CPU.
Storage. Every NAS must provide physical storage, which is typically in the form
of disk drives. The drives might include traditional magnetic HDDs, SSDs or other
non-volatile memory devices, often supporting a mix of different storage devices.
The NAS might support logical storage organization for redundancy and
performance, such as mirroring and other RAID implementations -- but it's the CPU, not the disks, that handles such logical organization.
OS. Just as with a conventional computer, the OS organizes and manages the NAS
hardware and makes storage available to clients, including users and other
applications. Simple NAS devices might not highlight a specific OS, but more
sophisticated NAS systems might employ a discrete OS such as Netgear
ReadyNAS, QNAP QTS, Zyxel FW, among others.
Scale-out NAS
With scale-out systems, the storage administrator installs larger heads and more hard
disks to boost storage capacity. Scaling out provides the flexibility to adapt to an
organization's business needs. Enterprise scale-out systems can store billions of files
without the performance tradeoff of doing metadata searches.
Object storage
Some industry experts speculate that object storage will overtake scale-out NAS.
However, it's possible the two technologies will continue to function side by side.
Both scale-out and object storage methodologies deal with scale, but in different ways.
Files on a scale-out NAS are accessed and managed through Portable Operating System Interface (POSIX) file-system semantics. POSIX provides data security and ensures multiple applications can share a scale-out device without fear that one application will overwrite a file being accessed by other users.
Object storage is a new method for easily scalable storage in web-scale environments.
It is useful for unstructured data that is not easily compressible, particularly large
video files.
Object storage does not use POSIX or any file system. Instead, all the objects are
presented in a flat address space. Bits of metadata are added to describe each object,
enabling quick identification within a flat address namespace.
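A minimal, hypothetical sketch of the contrast just described: the object store below keeps everything in a flat address space keyed by object ID with attached metadata, with no directories or POSIX permissions; all names and metadata fields are invented.

# Minimal sketch of a flat object store: object id -> (data, metadata).
import uuid

object_store = {}

def put_object(data, **metadata):
    obj_id = str(uuid.uuid4())
    object_store[obj_id] = (data, metadata)
    return obj_id

def get_object(obj_id):
    return object_store[obj_id]

vid = put_object(b"...video bytes...", content_type="video/mp4",
                 camera="lobby-2", retention_days=90)
data, meta = get_object(vid)
print(vid, meta)    # no directories, no POSIX permissions -- just id + metadata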
Advantages of NAS:
Disadvantages of NAS:
• Technology used in computer systems to organize and manage multiple physical hard
drives as a single logical unit.
• RAID is designed to improve the reliability, performance, and/or capacity of data
storage systems.
• It achieves this by storing data across multiple disks in a way that provides redundancy
and/or data striping.
• There are different levels of RAID, each with its own set of characteristics and
advantages.
• Some common RAID levels include
RAID 0
RAID 1
RAID 2
RAID 3
RAID 4
RAID 5
RAID 6
RAID 0:
RAID 0 implements striping: instead of placing all the data on a single disk, one or more blocks are written to a disk and then the write moves on to the next disk, so data is spread across every disk in the array (a small striping sketch follows the pros and cons below).
Pros of RAID 0:
• All the disk space is utilized and hence performance is increased.
• Data requests can be on multiple disks and not on a single disk hence improving the
throughput.
Cons of RAID 0:
• Failure of one disk can lead to complete data loss in the respective array.
• No data Redundancy is implemented so one disk failure can lead to system failure.
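A minimal sketch of the striping idea, assuming a four-disk array and one logical block per stripe unit (both assumptions are purely for illustration):

# Minimal sketch of RAID 0 striping: logical blocks are spread round-robin
# across the disks in the array.
NUM_DISKS = 4

def stripe(logical_block):
    """Return (disk index, block offset on that disk) for a logical block."""
    return logical_block % NUM_DISKS, logical_block // NUM_DISKS

for lb in range(8):
    disk, offset = stripe(lb)
    print(f"logical block {lb} -> disk {disk}, offset {offset}")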
RAID 1:
• RAID 1 implements mirroring, which means the data of one disk is replicated on another disk (a small mirroring sketch follows the pros and cons below).
• This helps prevent system failure: if one disk fails, the redundant disk takes over.
Here Disk 0 and Disk 1 have the same data as disk 0 is copied to disk 1. Same is the case with
Disk 2 and Disk 3.
Pros of RAID 1:
• Failure of one disk does not lead to system failure, as the redundant data is available on the other disk.
Cons of RAID 1:
• Extra capacity is required, since the data on each disk is also copied to another disk.
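A minimal sketch of mirroring, with in-memory dictionaries standing in for the two disks (purely illustrative):

# Minimal sketch of RAID 1 mirroring: every write goes to both disks, and a
# read can be served from the surviving copy if one disk fails.
disk0, disk1 = {}, {}

def mirrored_write(block, data):
    disk0[block] = data
    disk1[block] = data          # identical copy on the mirror

def read(block, failed_disk=None):
    source = disk1 if failed_disk == 0 else disk0
    return source[block]

mirrored_write(42, b"payroll record")
print(read(42, failed_disk=0))   # served from disk 1 after disk 0 fails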
RAID 2:
• RAID 2 is used when errors in data must be checked at the bit level, using a Hamming code for error detection.
• Two sets of disks are used in this technique: one set stores the bits of each data word, and the other stores the error-correction code (parity bits) for those words.
• The structure of this RAID level is complex, so it is not commonly used.
Here, Disk 3, Disk 4, and Disk 5 store the parity bits for the data stored on Disk 0, Disk 1, and Disk 2. The parity bits are used to detect errors in the data.
Pros of RAID 2:
Cons of RAID 2:
RAID 3:
Pros of RAID 3:
Cons of RAID 3:
RAID 4:
Here P0 is calculated as XOR(0, 1, 0) = 1 and P1 as XOR(1, 1, 0) = 0: the XOR of an even number of 1s is 0, and of an odd number of 1s is 1. If the data on Disk 0 is lost, it can be reconstructed by XOR-ing the surviving data bits with the parity: XOR(1, 0, P0) = XOR(1, 0, 1) = 0, which is exactly the missing bit. Any other value would contradict the stored parity.
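A minimal sketch of this XOR parity calculation and reconstruction, using the same bits as the example above:

# Minimal sketch of XOR parity: the parity bit is the XOR of the data bits in a
# stripe, so any single lost bit can be rebuilt from the survivors.
from functools import reduce

def parity(bits):
    return reduce(lambda a, b: a ^ b, bits)

stripe = [0, 1, 0]                 # data bits on disks 0, 1, 2
p = parity(stripe)                 # parity on the dedicated parity disk -> 1

# Suppose disk 0 is lost: XOR the surviving data bits with the parity.
recovered = parity(stripe[1:] + [p])
assert recovered == stripe[0]
print("recovered disk 0 bit:", recovered)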
Pros of RAID 4:
• Parity bits help to reconstruct the data if at most one disk's data is lost.
Cons of RAID 4:
RAID 5:
• Parity is distributed across the disks, which improves performance.
• Data can be reconstructed using parity bits.
Cons of RAID 5:
• Parity bits are useful only when there is data loss in at most one Disk.
• If there is loss in more than one Disk block then parity is of no use.
• Extra space for parity is required.
RAID 6:
• RAID 6 helps when more than one disk fails.
• In RAID 6 there are two parity blocks in each array/row; it is similar to RAID 5 with an extra parity.
Here P0, P1, P2, P3 and Q0, Q1, Q2, Q3 are the two sets of parity used to reconstruct the data if at most two disks fail.
Pros of RAID 6:
In summary, RAID protects data against disk failure through redundancy and/or striping, and the different RAID levels trade off capacity, performance, and fault tolerance in different ways.
Uses of AWS
o A small manufacturing organization can focus its expertise on expanding its business by leaving IT management to AWS.
o A large enterprise spread across the globe can use AWS to deliver training to its distributed workforce.
o An architecture consulting company can use AWS for high-compute rendering of construction prototypes.
o A media company can use AWS to deliver different types of content, such as e-books or audio files, to a worldwide audience.
Pay-As-You-Go
AWS provides its services to customers on a pay-as-you-go basis: services are available when required, without any prior commitment or upfront investment. On this model, customers can procure services from AWS such as:
o Computing
o Programming models
o Database storage
o Networking
Advantages of AWS
1. Flexibility
o We can get more time for core business tasks due to the instant availability of new
features and services in AWS.
o It provides effortless hosting of legacy applications. AWS does not require learning new technologies, and migrating applications to AWS provides advanced computing and efficient storage.
o AWS also offers the choice of whether to run applications and services together or not. We can also choose to run part of the IT infrastructure in AWS and the remaining part in on-premises data centers.
2. Cost-effectiveness
3. Scalability/Elasticity
With AWS auto scaling and elastic load balancing, capacity is automatically scaled up or down as demand increases or decreases. These techniques are ideal for handling unpredictable or very high loads, so organizations benefit from reduced cost and increased user satisfaction.
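As an illustrative sketch (not an official template), the following Python snippet uses the boto3 SDK to attach a target-tracking scaling policy to a hypothetical Auto Scaling group named web-asg, keeping average CPU utilization near 50%; it assumes boto3 is installed, AWS credentials are configured, and the group already exists.

# Hedged sketch: a target-tracking policy scales the group out under load and
# back in when demand drops. The group and policy names are hypothetical.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="keep-cpu-near-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)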
4. Security
o AWS provides end-to-end security and privacy to customers.
o AWS has a virtual infrastructure that offers optimum availability while managing full
privacy and isolation of their operations.
o Customers can expect a high level of physical security because of Amazon's several
years of experience in designing, developing and maintaining large-scale IT operation
centers.
o AWS ensures the three aspects of security, i.e., confidentiality, integrity, and availability of users' data.
Features of AWS
o Flexibility
o Cost-effective
o Scalable and elastic
o Secure
o Experienced
1. Flexibility
2. Cost-effective
o Cost is one of the most important factors that need to be considered in delivering IT
solutions.
o For example, developing and deploying an application can incur a low cost, but after successful deployment there is an ongoing need for hardware and bandwidth. Owning our own infrastructure can incur considerable costs, such as power, cooling, real estate, and staff.
o The cloud provides on-demand IT infrastructure that lets you consume only the resources you actually need. In AWS, you are not limited to a fixed amount of resources such as storage, bandwidth, or computing capacity, since it is very difficult to predict the requirements of every resource in advance. Therefore, the cloud provides flexibility by maintaining the right balance of resources.
o AWS provides no upfront investment, long-term commitment, or minimum spend.
o You can scale up or scale down as the demand for resources increases or decreases
respectively.
o AWS allows you to access resources almost instantly. The ability to respond to changes quickly, whether they are large or small, means that we can take on new opportunities and meet business challenges in ways that increase revenue and reduce cost.
4. Secure
o AWS provides a scalable cloud-computing platform that provides customers with end-
to-end security and end-to-end privacy.
o AWS incorporates security into its services and provides documentation describing how to use the security features.
o AWS maintains the confidentiality, integrity, and availability of your data, which is of the utmost importance to AWS.
Physical security: Amazon has many years of experience in designing, constructing, and
operating large-scale data centers. The AWS infrastructure is housed in AWS-controlled
data centers throughout the world. The data centers are physically secured to prevent
unauthorized access.
Data privacy: Personal and business data can be encrypted to maintain data privacy.
5. Experienced
o The AWS cloud provides high levels of scale, security, reliability, and privacy.
o AWS has built an infrastructure based on lessons learned from over sixteen years of experience managing the multi-billion dollar Amazon.com business.
o Amazon continues to benefit its customers by enhancing its infrastructure capabilities.
o Today, Amazon is a global web platform that serves millions of customers, and AWS has been evolving since 2006, serving hundreds of thousands of customers worldwide.
Virtualization Products
Hyper-V
Hyper-V is not compatible with most third-party virtualization applications that require the same processor features. That's because those processor features, known as hardware virtualization extensions, are designed not to be shared.
Features of Hyper-V
Computing environment - A Hyper-V virtual machine includes the same basic parts as a
physical computer, such as memory, processor, storage, and networking. All these parts have
features and options that you can configure different ways to meet different needs. Storage and
networking can each be considered categories of their own, because of the many ways you can
configure them.
Disaster recovery and backup - For disaster recovery, Hyper-V Replica creates copies of
virtual machines, intended to be stored in another physical location, so you can restore the
virtual machine from the copy. For backup, Hyper-V offers two types. One uses saved states
and the other uses Volume Shadow Copy Service (VSS) so you can make application-
consistent backups for programs that support VSS.
Optimization - Each supported guest operating system has a customized set of services and
drivers, called integration services, that make it easier to use the operating system in a Hyper-
V virtual machine.
Portability - Features such as live migration, storage migration, and import/export make it
easier to move or distribute a virtual machine.
Security - Secure boot and shielded virtual machines help protect against malware and other
unauthorized access to a virtual machine and its data.
Hyper-V Components
Hyper-V has required parts that work together so you can create and run virtual machines.
Together, these parts are called the virtualization platform. They're installed as a set when you
install the Hyper-V role. The required parts include Windows hypervisor, Hyper-V Virtual
Machine Management Service, the virtualization WMI provider, the virtual machine bus
(VMbus), virtualization service provider (VSP) and virtual infrastructure driver (VID).
Hyper-V also has tools for management and connectivity. You can install these on the same computer that the Hyper-V role is installed on, and on computers without the Hyper-V role installed. These tools are:
Hyper-V Manager
Hyper-V module for Windows PowerShell
Virtual Machine Connection
Windows PowerShell Direct
Oracle VM VirtualBox
Oracle VM VirtualBox is a free and open-source hosted hypervisor for x86 virtualization,
developed and maintained by Oracle Corporation. It allows users to create and run virtual
machines on their desktop or laptop computers, enabling them to run multiple operating
systems simultaneously.
VirtualBox supports a wide range of guest operating systems including various versions of
Windows, Linux, macOS, Solaris, and others. It provides features such as snapshotting, which
allows users to save the current state of a virtual machine and revert back to it later if needed,
as well as support for virtual networking, USB device passthrough, and more.
VirtualBox is commonly used for purposes such as software development and testing, running
legacy applications, experimenting with different operating systems, and creating virtualized
environments for training or educational purposes. It's popular among developers, IT
professionals, and enthusiasts due to its versatility, ease of use, and the fact that it's available
for free under the GNU General Public License (GPL).
After installation, you can launch VirtualBox and start creating and managing virtual machines
to meet your specific needs and requirements on your Windows computer.
IBM PowerVM
IBM PowerVM is a virtualization solution designed specifically for IBM Power Systems
servers, which are based on IBM's POWER architecture. PowerVM provides virtualization
capabilities for these servers, enabling the creation and management of virtualized partitions
or logical partitions (LPARs).
IBM PowerVM is widely used in enterprise environments that rely on IBM Power Systems
servers for their mission-critical workloads, providing advanced virtualization capabilities
tailored to the unique architecture and capabilities of IBM's POWER processors.
Google offers several virtualization solutions and services, primarily targeted at cloud
computing and enterprise customers. Some of the key virtualization offerings from Google
include:
1. Google Cloud Platform (GCP) Compute Engine: GCP Compute Engine is Google's
Infrastructure-as-a-Service (IaaS) offering that provides virtual machines (VMs) running on
Google's global infrastructure. Customers can create and manage VM instances in the cloud,
choosing from various machine types and operating systems to run their workloads.
2. Google Kubernetes Engine (GKE): GKE is a managed Kubernetes service provided by
Google Cloud Platform. Kubernetes is an open-source container orchestration platform for
automating the deployment, scaling, and management of containerized applications. GKE
enables users to deploy and manage containerized applications using Kubernetes clusters
running on Google Cloud infrastructure.
3. Google Cloud VMware Engine: Google Cloud VMware Engine is a fully managed VMware
service that allows customers to migrate and run VMware workloads natively on Google Cloud
Platform. It provides a dedicated VMware environment running on Google Cloud
infrastructure, enabling organizations to leverage their existing VMware-based solutions while
benefiting from Google Cloud's scalability, reliability, and global reach.
4. Anthos: Anthos is Google Cloud's hybrid and multi-cloud platform that enables customers to
build, deploy, and manage applications across on-premises data centers, Google Cloud
Platform, and other public cloud environments. Anthos provides a consistent platform for
deploying and managing workloads using containers and Kubernetes, offering capabilities for
modernizing existing applications and building new cloud-native applications.
5. Google Cloud Functions: Google Cloud Functions is a serverless compute service that allows
developers to build and deploy event-driven functions in the cloud. Functions are triggered by
various events such as HTTP requests, cloud storage changes, or pub/sub messages, and they
automatically scale in response to demand, eliminating the need for managing infrastructure.
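As a minimal sketch of the event-driven model described above, the following Python function uses the functions-framework library that Cloud Functions supports to respond to an HTTP trigger; the function name and response text are illustrative.

# Hedged sketch: an HTTP-triggered function that scales automatically with demand.
import functions_framework

@functions_framework.http
def hello_http(request):
    """Triggered by an HTTP request; 'request' is a Flask request object."""
    name = request.args.get("name", "world")
    return f"Hello, {name}!"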
These are some of the key virtualization offerings and services provided by Google, catering
to different use cases and deployment scenarios in cloud computing and enterprise
environments.
Case Study
Here's a hypothetical case study illustrating the benefits of virtualization in an enterprise environment, for a fictional company, XYZ Corporation:
Challenges:
1. Resource Underutilization: The company's physical servers are running at low utilization
levels, resulting in inefficient resource allocation and increased hardware costs.
2. Disaster Recovery: XYZ Corporation lacks a robust disaster recovery plan, making them
vulnerable to data loss and extended downtime in case of a disaster or system failure.
3. Testing and Development: The IT team struggles to provision and manage testing and
development environments efficiently, leading to delays in application development and
deployment.
4. Flexibility and Scalability: There is a lack of flexibility and scalability in the existing
infrastructure, making it challenging to adapt to changing business requirements and scale
resources as needed.
Implementation:
1. Server Virtualization: XYZ Corporation virtualizes their physical servers using VMware
vSphere, consolidating multiple virtual machines (VMs) onto a smaller number of physical
servers. This improves resource utilization, reduces hardware costs, and simplifies server
management.
2. Disaster Recovery: They implement VMware Site Recovery Manager (SRM) to automate the
replication and failover of virtual machines to a secondary data center in case of a disaster. This
ensures business continuity and minimizes downtime in the event of a disaster.
3. Testing and Development: The IT team creates isolated virtualized environments for testing
and development purposes using VMware vSphere. They can easily provision and manage
virtual machines for different development projects, speeding up the application development
lifecycle.
4. Flexibility and Scalability: With VMware vSphere's dynamic resource allocation and
scalability features, XYZ Corporation gains the flexibility to scale resources up or down based
on demand. They can easily add or remove virtual machines as needed, enabling them to adapt
to changing business requirements more effectively.
Results:
1. Cost Savings: By consolidating their physical servers through virtualization, XYZ Corporation
reduces hardware costs and achieves higher resource utilization levels, leading to cost savings.
2. Improved Disaster Recovery: With VMware SRM, XYZ Corporation strengthens their
disaster recovery capabilities, ensuring data protection and minimizing downtime in case of a
disaster.
3. Faster Time-to-Market: The IT team can provision and manage testing and development
environments more efficiently using virtualization, resulting in faster application development
and deployment.
4. Increased Agility: Virtualization enables XYZ Corporation to scale resources quickly and
adapt to changing business needs, increasing their agility and competitiveness in the market.