Cloud Computing Unit-III Cloud Management & Virtualization Technology By. Dr. Samta Gajbhiye
Unit-III
Cloud Management & Virtualization Technology
Resiliency
Resiliency Cont…….
Measures: There are several steps that can be taken to improve a cloud
computing system’s resilience:
1. Implement redundant systems: Using redundant systems, such as multiple
servers or data centers, can help ensure that the system continues to function
even if one component fails.
2. Use load balancers: Load balancers can distribute traffic across multiple
servers, preventing a single server from becoming overburdened and ensuring
that the system remains operational.
3. Use backup and recovery systems: Using backup and recovery systems can
help ensure that data is protected and recoverable in the event of a disaster.
4. Use monitoring and alerting tools: Monitoring tools can assist in identifying
issues before they become problems, and alerting systems can notify the
appropriate personnel when problems arise.
5. Implement security measures: Encryption and access controls, for example,
can help protect data and systems from unauthorized access.
6. Use disaster recovery as a service (DRaaS): DRaaS is a cloud-based service that
provides backup and recovery capabilities for cloud systems. In the event of a
disaster, using DRaaS can help ensure that a system is quickly recovered.
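The failover idea behind steps 1 and 2 (redundant systems plus traffic steering) can be sketched in a few lines of Python. This is a minimal illustration, not a production pattern; the endpoint names and the `call_service` stub are purely hypothetical.

```python
# Hypothetical endpoints in two regions; names are illustrative only.
ENDPOINTS = ["https://app.region-a.example.com", "https://app.region-b.example.com"]

def call_service(endpoint: str) -> str:
    """Stand-in for a real network call; raises to simulate an outage."""
    if endpoint.endswith("region-a.example.com"):
        raise ConnectionError(f"{endpoint} is down")
    return f"200 OK from {endpoint}"

def resilient_call(endpoints: list[str]) -> str:
    """Try each redundant endpoint in turn; fail over on error."""
    errors = []
    for ep in endpoints:
        try:
            return call_service(ep)
        except ConnectionError as exc:
            errors.append(str(exc))  # record the failure, try the next replica
    raise RuntimeError("all endpoints failed: " + "; ".join(errors))

print(resilient_call(ENDPOINTS))  # → 200 OK from https://app.region-b.example.com
```

Real systems put this logic in a load balancer or DNS failover layer rather than in client code, but the principle is the same: no single component is a point of failure.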
Advantages
Resiliency Cont…….
1. Reduced Downtime: A resilient cloud system can lessen users' downtime (time
during which a machine, especially a computer, is out of action or unavailable
for use) by promptly recovering from faults.
2. Greater Adaptability: Because a resilient cloud system can recover from faults and
scale up or down as necessary, it can be more adaptive and flexible to changing
needs and workloads.
3. Increased Availability: The ability of a resilient cloud system to recover from errors
and carry on operating might increase the system’s overall availability.
4. Increased reliability: A resilient system is less likely to be disrupted or fail, which
can lead to increased reliability and a better user experience.
5. Faster recovery: A resilient system can withstand and recover from disruptions more
quickly, resulting in a shorter recovery time and less downtime.
6. Increased security: A resilient system can withstand and recover from security
breaches and other types of attacks, which can help protect data and assets.
7. Cost savings: Putting in place resiliency measures can help cut the costs associated
with disruptions and failures, such as lost revenue, repair costs, and reputation
damage.
8. Increased competitiveness: A resilient system is more appealing to customers and
partners, which can lead to increased market competitiveness.
Resiliency Cont…….
Limitations of Resiliency in Cloud
1. Cost: Putting steps in place to make a cloud system more resilient can be
expensive, especially if doing so entails buying extra hardware or creating and
testing a thorough disaster recovery plan.
2. Human Error: Despite all efforts to create a resilient system, human mistakes can
sometimes result in interruptions, such as incorrect setups or unintentional data
erasure.
3. Complexity: Establishing a durable cloud system can be difficult because it calls
for coordinating the work of numerous teams and incorporating numerous
technologies and procedures.
4. Limited Control: The user may only have a limited amount of control over the
underlying infrastructure and may not be able to adopt certain resiliency
measures, depending on the type of cloud service being utilized.
5. Dependence on External Elements: A cloud system may be susceptible to
interruptions brought on by external circumstances, which may be out of the
user’s control, such as network problems or power outages.
Provisioning
Provisioning Cont…….
1. Server provisioning: Server provisioning is the process of setting up physical or
virtual hardware; installing and configuring software, such as the operating system
and applications; and connecting it to middleware, network, and storage
components.
Provisioning can encompass all of the operations needed to create a new
machine and bring it to the desired state, which is defined according to business
requirements.
There are many servers categorized according to their uses. Each of them has
unique provisioning requirements, and the choice of the server itself will be
driven by the intended use.
For example, there are file servers, policy servers, mail servers, and
application servers
Server provisioning includes processes such as adjusting control panels,
installing operating systems and other software, or even replicating the
setup of other servers.
Generally, server provisioning is a process that constructs a new machine,
bringing it up to speed and defining the system’s desired state.
Provisioning Cont…….
2. User Provisioning: User provisioning is a type of identity management that
involves granting permissions to services and applications within a corporate
environment—like email, a database, or a network—often based on a user's job
title or area of responsibility.
The act of revoking user access is often referred to as deprovisioning.
An example of user provisioning is role based access control (RBAC). Configuring
RBAC includes assigning user accounts to a group, defining the group’s role—for
example, read-only, editor, or administrator—and then granting these roles
access rights to specific resources based on the users’ functional needs.
The user provisioning process is often managed between IT and human
resources.
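The RBAC example above can be sketched as a minimal in-memory model, where roles map to permission sets and users map to roles. The role names follow the read-only/editor/administrator example; everything else is illustrative.

```python
# Minimal RBAC sketch: roles map to permissions, users map to roles.
ROLE_PERMISSIONS = {
    "read-only": {"read"},
    "editor": {"read", "write"},
    "administrator": {"read", "write", "delete", "grant"},
}

USER_ROLES = {}  # user -> assigned role

def provision_user(user: str, role: str) -> None:
    """Grant a user the access rights associated with a role."""
    if role not in ROLE_PERMISSIONS:
        raise ValueError(f"unknown role: {role}")
    USER_ROLES[user] = role

def deprovision_user(user: str) -> None:
    """Revoking access is often called deprovisioning."""
    USER_ROLES.pop(user, None)

def is_allowed(user: str, action: str) -> bool:
    """Check a user's functional access via their role."""
    role = USER_ROLES.get(user)
    return role is not None and action in ROLE_PERMISSIONS[role]

provision_user("alice", "editor")
print(is_allowed("alice", "write"))   # True
deprovision_user("alice")
print(is_allowed("alice", "read"))    # False
```

In practice this mapping lives in an identity provider or directory service, but the group-to-role-to-resource chain is the same.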
3. Network Provisioning: When referring to IT infrastructure, network provisioning is
the setting up of components such as routers, switches, and firewalls; allocating IP
addresses; and performing operational health checks and fact gathering.
For telecommunications companies, the term “network provisioning” refers to
providing users with a telecommunications service, such as assigning a phone
number, or installing equipment and wiring.
Provisioning Cont…….
Provisioning Cont…….
Benefits of automated provisioning
• Provisioning often requires IT teams to repeat the same process over and over,
such as granting a developer access to a virtual machine in order to deploy and test
a new application.
This makes manually provisioning resources time-consuming and prone to
human error, which can delay the time-to-market of new products and
services.
Manual provisioning also pulls busy IT teams away from projects that are more
important to an organization’s larger strategy.
• Provisioning tasks can easily be handled through automation, using
infrastructure-as-code (IaC).
With IaC, infrastructure specifications are stored in configuration files, which
means that developers just need to execute a script to provision the same
environment every time.
Codifying infrastructure gives IT teams a template to follow for provisioning new resources.
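A toy illustration of the IaC idea: the desired environment lives in a configuration file, and re-running the same script produces the same result. The `provision_vm` function is a hypothetical stand-in for a real provider API call (real tools such as Terraform work declaratively, but the idempotent-from-config principle is the same).

```python
import json

# Desired state, as it might be stored in a version-controlled config file.
DESIRED_STATE = json.loads("""
{
  "vms": [
    {"name": "web-1", "cpus": 2, "memory_gb": 4},
    {"name": "db-1",  "cpus": 4, "memory_gb": 16}
  ]
}
""")

def provision_vm(spec: dict, inventory: dict) -> None:
    """Create the VM only if it is not already provisioned (idempotent)."""
    if spec["name"] in inventory:
        print(f"{spec['name']}: already provisioned, skipping")
        return
    inventory[spec["name"]] = spec
    print(f"{spec['name']}: provisioned with {spec['cpus']} CPUs, {spec['memory_gb']} GB RAM")

inventory = {}
for vm in DESIRED_STATE["vms"]:
    provision_vm(vm, inventory)
# Running the loop again changes nothing -- the config is the single template.
```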
• Automated provisioning provides greater consistency across modern IT
environments, reduces the likelihood of errors and loss of productivity, and frees
IT teams to focus on strategic business objectives.
Provisioning Cont…….
With this more efficient provisioning process:
End users and developers can gain access to the IT resources and systems they
need in less time, empowering them to be more productive.
Developers can bring applications and services to market faster, which can
improve customer experience and revenue.
IT teams can spend less time on menial (trivial), repetitive tasks—like correcting
errors and misconfigurations—which allows them to focus on more critical
priorities.
Cloud provisioning tools and software: Organizations can manually provision
whatever resources and services they need, but public cloud providers offer tools
to provision multiple resources and services:
• AWS CloudFormation
• Microsoft Azure Resource Manager
• Google Cloud Deployment Manager
• IBM Cloud Orchestrator
Some organizations further automate the provisioning process as part of a
broader cloud management strategy through orchestration and configuration
management tools, such as HashiCorp's Terraform, Red Hat Ansible, Chef, and
Puppet.
Provisioning Cont…….
Asset Management
Cloud asset management is the process used to control an organization's cloud
infrastructure and the application data within the cloud.
Many organizations use a variety of cloud-based applications to store and manage
their digital assets.
Cloud asset management software offers organizations robust capabilities,
from asset tagging and mobile apps, to advanced reporting capabilities.
With a collection of cloud-based asset sources, CAM helps organize assets to avoid
operational hiccups and security concerns.
An asset cloud is a centralized digital storage facility that operates over the
internet.
Though asset records can be stored on internal company hard drives and servers,
cloud-based asset storage solutions utilize a remote server.
Asset clouds ultimately help avoid costly internal infrastructure, enhance
company-wide functionality, and provide detailed asset history
Asset cloud can also enhance employee productivity. With digital assets available
from anywhere in the world with an internet connection
Incorporating the use of cloud asset management practices provides an
organization with visibility and easy control over the digital assets within the
company cloud.
Asset Management Cont…..
Cloud asset management software is often better suited for larger organizations
that simply possess a higher quantity and wider variety of both physical and digital
assets.
Software improves the visibility into each digital asset and its data. This way,
your company can keep a finger on the pulse of depreciation, application
updates, warranty expirations, and more.
Software also comes equipped with strong security features, like dedicated
servers and encrypted data, to keep company (and consumer) data safe and out of
the wrong hands.
Pros of Cloud asset management
Centralized Data: Cloud-based software centralizes all data in one location,
making it easy to search and organize
Advanced Accessibility: Cloud-based solutions allow for multiple users and
remote access
Attachment Functionality: Software increases asset efficiency by allowing users to
attach vital asset information such as pictures, SOPs, user manuals, and more
Less Time and Effort: Software automation helps reduce typically time-consuming
management processes while maintaining data accuracy.
Mobile Apps: Mobile apps are handy for scanning the barcodes and QR codes that
display on asset tags.
Asset Management Cont…..
Asset Management Cont…..
Features of Cloud based Asset management
Asset Management Cont…..
Benefits of Cloud based Asset management
Concepts of Map reduce
MapReduce is a data processing tool used to process data in parallel in a
distributed form. It was introduced in 2004 in the Google paper
"MapReduce: Simplified Data Processing on Large Clusters."
MapReduce is a paradigm with two phases: the mapper phase and the
reducer phase.
In the Mapper, the input is given in the form of a key-value pair. The output of the
Mapper is fed to the reducer as input.
The reducer runs only after the Mapper is over. The reducer too takes input in
key-value format, and the output of reducer is the final output.
The MapReduce task is mainly divided into two phases: Map Phase and Reduce
Phase.
Map Reduce Cont…..
Map Reduce Cont…..
Map Reduce Architecture
Map Reduce Cont…..
Map Reduce Cont…..
The MapReduce task is mainly divided into 2 phases i.e. Map phase and Reduce
phase.
• Map: As the name suggests, its main use is to map the input data into key-value
pairs. The input to the map is a key-value pair, where the key can be an identifier
of some kind (such as an address) and the value is the actual data it holds.
The Map() function is executed on each of these input key-value pairs and
generates intermediate key-value pairs, which serve as input for the Reducer,
or Reduce(), function.
• Reduce: The intermediate key-value pairs that serve as input for the Reducer are
shuffled, sorted, and sent to the Reduce() function. The Reducer aggregates or
groups the data based on key, as per the reducer algorithm written by the
developer.
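The two phases can be sketched with the classic word-count example. This is a minimal in-memory pipeline that mimics the paradigm, not a real Hadoop job; the document names and texts are made up.

```python
from collections import defaultdict

def map_fn(doc_id, text):
    """Map phase: emit an intermediate (word, 1) pair for every word."""
    for word in text.split():
        yield word, 1

def reduce_fn(word, counts):
    """Reduce phase: aggregate all values that share a key."""
    return word, sum(counts)

documents = {1: "cloud cloud virtualization", 2: "cloud storage"}

# Group intermediate pairs by key before reducing (the shuffle step).
groups = defaultdict(list)
for doc_id, text in documents.items():
    for word, count in map_fn(doc_id, text):
        groups[word].append(count)

result = dict(reduce_fn(w, c) for w, c in groups.items())
print(result)  # {'cloud': 3, 'virtualization': 1, 'storage': 1}
```

Note that `reduce_fn` never sees raw mapper output; it only runs after all values for a key have been collected, which mirrors the rule that the reducer starts only after the mapper is done.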
Map Reduce Cont…..
How the Job Tracker and the Task Tracker handle MapReduce:
• Job Tracker: The role of the Job Tracker is to manage all the resources and all the jobs
across the cluster, and also to schedule each map task on a Task Tracker running on
the same data node, since there can be hundreds of data nodes available in the
cluster.
• Task Tracker: The Task Tracker can be considered the worker that carries out the
instructions given by the Job Tracker. A Task Tracker is deployed
on each node available in the cluster and executes the Map and Reduce
tasks as instructed by the Job Tracker.
Map Reduce Cont…..
Map Reduce Cont…..
Sort and Shuffle
• Sort and shuffle occur on the output of the Mapper, before the Reducer runs. When
the Mapper task is complete, the results are sorted by key, partitioned if there are
multiple reducers, and then written to disk. Using the input from each Mapper,
<k2, v2>, we collect all the values for each unique key k2. This output of the
shuffle phase, in the form <k2, list(v2)>, is sent as input to the reduce phase.
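The sort-and-shuffle step itself can be illustrated with Python's `itertools.groupby`, which (like the shuffle phase) requires its input to be sorted by key first. The pairs below are made-up mapper output.

```python
from itertools import groupby

# Intermediate <k2, v2> pairs as emitted by several mappers.
mapper_output = [("b", 1), ("a", 1), ("b", 1), ("a", 1), ("c", 1)]

# Sort by key, then group: this is the sort-and-shuffle step.
mapper_output.sort(key=lambda kv: kv[0])
shuffled = {k: [v for _, v in group]
            for k, group in groupby(mapper_output, key=lambda kv: kv[0])}

print(shuffled)  # {'a': [1, 1], 'b': [1, 1], 'c': [1]} -- i.e. <k2, list(v2)>
```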
Map Reduce Cont…..
Usage of MapReduce
It can be used in various applications, such as document clustering, distributed
sorting, and web link-graph reversal [the map function outputs <target, source>
pairs for each link to a target URL found in a page named “source”; the reduce
function concatenates the list of all source URLs associated with a given target
URL and emits the pair <target, list(source)>].
It can be used for distributed pattern-based searching.
We can also use MapReduce in machine learning.
It was used by Google to regenerate Google's index of the World Wide Web.
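The link-graph reversal described in the first bullet can be sketched the same way as word count. This is a tiny in-memory example with made-up URLs, not a distributed implementation.

```python
from collections import defaultdict

def map_fn(source, page_links):
    """Emit <target, source> for each link found in the page named `source`."""
    for target in page_links:
        yield target, source

web = {"a.com": ["b.com", "c.com"], "b.com": ["c.com"]}

# Shuffle: collect all sources that point at each target.
groups = defaultdict(list)
for source, links in web.items():
    for target, src in map_fn(source, links):
        groups[target].append(src)

# Reduce: each key already holds list(source), the desired output.
print(dict(groups))  # {'b.com': ['a.com'], 'c.com': ['a.com', 'b.com']}
```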
Cloud Governance
Cloud governance is the process of defining, implementing, and monitoring a
framework of policies that guides an organization's cloud operations.
This process regulates how users work in cloud environments to facilitate
consistent performance of cloud services and systems.
It is the set of policies or principles that act as guidance for the adoption, use,
and management of cloud technology services.
Cloud computing can alleviate (lessen) infrastructure and resource limitations, but
with users spanning multiple business units, it can be difficult to ensure cost-
effective use of cloud resources, minimize security issues, and enforce
established policies.
Cloud Governance Cont…..
High Availability
As organizations move their systems to the hybrid cloud, resilience is often a
critical concern.
A few moments of disruption in such environments can cause a loss of credibility,
clients, and billions of dollars.
The ability to withstand errors and failures without data loss is key to providing
reliable application services that contribute to business continuity.
High Availability (HA) provides a failover solution in the event a Control Room
service, server, or database fails.
In order to provide continuous service over a given period, a highly available
architecture needs numerous modules functioning simultaneously. Response time
to users' queries also matters: software solutions need to be not only available
but also responsive.
Highly available systems can recover from unforeseen incidents in the quickest
way possible.
These systems reduce downtime or remove it by transferring the functions to
replacement modules.
In order to ensure that there are no trouble spots, this normally includes routine
servicing, inspection, and preliminary in-depth checks.
High Availability Cont……
High Availability Cont……
How is the availability percentage measured for high availability?
Normally, availability is expressed using a score such as the "five nines."
The availability target will be specified in the Service Level Agreement (SLA) if
you opt for a unified platform.
Five nines, or 99.999 percent uptime, targets systems that need to stay
functional around the clock throughout the year. The remaining 0.001 percent
may not seem like a significant difference, but it still amounts to measurable
downtime.
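The difference between the nines is easy to quantify: the allowed downtime per year is simply the total minutes in a year times the unavailable fraction.

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes (ignoring leap years)

for label, availability in [("three nines", 99.9),
                            ("four nines", 99.99),
                            ("five nines", 99.999)]:
    downtime_min = MINUTES_PER_YEAR * (1 - availability / 100)
    print(f"{label} ({availability}%): ~{downtime_min:.1f} minutes of downtime per year")
```

Five nines allows only about 5.3 minutes of downtime per year, versus roughly 8.8 hours at three nines, which is why each extra nine is substantially harder and more expensive to achieve.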
High Availability Cont……
High availability (HA) is the elimination of single points of failure to enable
applications to continue to operate even if one of the IT components it depends
on, such as a server, fails. IT professionals eliminate single points of failure to
ensure continuous operation and uptime of at least 99.99% annually.
High availability clusters are groups of servers that support business-critical
applications. Applications are run on a primary server and in the event of a
failure, application operation is moved to secondary server(s) where they
continue to operate.
Most organizations rely on business-critical databases and applications, such as
ERPs, data warehouses, e-commerce applications, customer relationship
management systems (CRM), financial systems, supply chain management, and
business intelligence systems. When a system, database, or application fails, these
organizations require high availability protection to keep systems up and running
and minimize the risk of lost revenue, unproductive employees, and unhappy
customers.
High Availability Cont……
Highly Available Clusters Incorporate Five Design Principles:
They automatically failover [a procedure by which a system automatically
transfers control to a duplicate system when it detects a fault or failure.] to a
redundant system to pick up an operation when an active component fails.
This eliminates single points of failure.
They can automatically detect application-level failures as they happen,
regardless of the causes.
They ensure no data loss during a system failure.
They automatically and quickly failover to redundant components to
minimize downtime.
They provide the ability to manually failover and failback [failback operation
is the process of returning production to its original location after a disaster or a
scheduled maintenance period.] to minimize downtime during planned
maintenance.
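The failover and failback principles above can be sketched as a toy two-node cluster. The boolean health model is a stand-in for a real heartbeat mechanism; everything here is illustrative.

```python
class Cluster:
    def __init__(self):
        self.nodes = {"primary": True, "secondary": True}  # name -> healthy?
        self.active = "primary"

    def is_healthy(self, node: str) -> bool:
        """Stand-in for a heartbeat / application-level health check."""
        return self.nodes[node]

    def heartbeat(self) -> str:
        """Detect failure of the active node and fail over automatically."""
        if not self.is_healthy(self.active):
            standby = next(n for n, ok in self.nodes.items()
                           if ok and n != self.active)
            self.active = standby  # redundant component picks up the work
        return self.active

    def failback(self, node: str = "primary") -> str:
        """Manual failback once the original node is repaired."""
        if self.is_healthy(node):
            self.active = node
        return self.active

cluster = Cluster()
cluster.nodes["primary"] = False   # simulate a primary failure
print(cluster.heartbeat())         # secondary
cluster.nodes["primary"] = True    # repair the primary
print(cluster.failback())          # primary
```

A real HA cluster adds quorum, fencing, and data replication so that the standby has current data when it takes over; this sketch shows only the control flow.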
Disaster Recovery
Disaster recovery as a service (DRaaS) is a cloud computing service model that
allows an organization to back up its data and IT infrastructure in a third-party
cloud computing environment and provides all the DR orchestration through a
SaaS solution, to regain access and functionality to IT infrastructure after a disaster.
The as-a-service model means that the organization itself doesn’t have to own all
the resources or handle all the management for disaster recovery, instead relying
on the service provider.
Why virtualization?
Virtualization enables cloud providers to serve users with their existing
physical computer hardware.
It enables cloud users to purchase only the computing resources they need
when they need it, and to scale those resources cost-effectively as their
workloads grow.
Virtualization Cont…..
4. Data virtualization: Modern organizations collect data from several sources and
store it in different formats. They might also store data in different places, such as
in a cloud infrastructure and an on-premises data center.
Data virtualization creates a software layer between this data and the
applications that need it.
Data virtualization tools process an application’s data request and return
results in a suitable format.
Thus, organizations use data virtualization solutions to increase flexibility for
data integration and support cross-functional data analysis.
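A minimal sketch of the idea: a software layer that answers one query against two differently formatted sources and returns a uniform result. The two "sources" below are in-memory stand-ins for a cloud store and an on-premises database; names are made up.

```python
cloud_source = [{"id": 1, "customerName": "Acme"}]   # JSON-style records
onprem_source = [(2, "Globex")]                      # SQL-style rows

def query_customers() -> list[dict]:
    """Virtualization layer: normalize both sources into one format."""
    unified = []
    for rec in cloud_source:
        unified.append({"id": rec["id"], "name": rec["customerName"]})
    for cid, name in onprem_source:
        unified.append({"id": cid, "name": name})
    return unified  # callers see one format, regardless of the source

print(query_customers())
# [{'id': 1, 'name': 'Acme'}, {'id': 2, 'name': 'Globex'}]
```

Real data virtualization tools do this declaratively and at query time, without copying the data; the sketch only shows the normalization idea.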
Virtualization Cont…..
5. Application virtualization: Application virtualization abstracts the functions of
applications so they can run on operating systems other than the operating
systems for which they were designed.
For example, users can run a Microsoft Windows application on a Linux
machine without changing the machine configuration.
Application virtualization runs application software without installing it directly
on the user’s OS.
There are three types of application virtualization:
Local application virtualization: The entire application runs on the endpoint
device.
Application streaming: The application lives on a server which sends small
components of the software to run on the end user's device when needed.
Server-based application virtualization: The application runs entirely on a
server that sends only its user interface to the client device.
Admins can run multiple redundant virtual machines alongside each other and
failover between them when problems arise.
3. Faster provisioning:
Buying, installing, and configuring hardware for each application is time-
consuming. Provisioning virtual machines is much faster, and
you can even automate it using management software and build it into existing
workflows.
Virtualization Cont…..
4. Easier management:
Replacing physical computers with software-defined VMs makes it easier to use
and manage policies written in software. This allows you to create automated IT
service management workflows.
Policies can even increase resource efficiency by retiring unused virtual machines
to save on space and computing power.
Server Virtualization
Server virtualization is the process of dividing a physical server into multiple
unique and isolated virtual servers called virtual private servers by means of a
software application.
It is used to mask server resources from server users; that is, server resources are
kept hidden from the user. This can include the number and identity of operating
systems, processors, and individual physical servers.
Each virtual private server can run its own operating system independently.
These operating systems are known as guest operating systems. These are
running on another operating system known as the host operating system. Each
guest running in this manner is unaware of any other guests running on the
same host. Different virtualization techniques are employed to achieve this
transparency.
This partitioning of a physical server into several virtual environments allows
one server to be dedicated to a single application or task.
This technique is mainly used for web servers, where it reduces the cost of web-
hosting services. Instead of having a separate system for each web server,
multiple virtual servers can run on the same system/computer.
Server Virtualization Cont….
Server Virtualization: Portability
When migrating from one physical server to another, the processors can differ,
but they should be from the same manufacturer.
Server Virtualization Cont….
Usage of Server Virtualization: This technique is mainly used for web servers,
where it reduces the cost of web-hosting services. Instead of having a separate
system for each web server, multiple virtual servers can run on the same system/computer.
The primary uses of server virtualization are:
To centralize the server administration
Improve the availability of servers
Helps in disaster recovery
Ease in development & testing
Make efficient use of server resources.
Server Virtualization Cont….
Advantages:
Cost Reduction: Server virtualization reduces cost because less hardware is
required.
Independent Restart: Each server can be rebooted independently and that reboot
won't affect the working of other virtual servers.
Disadvantages
The biggest disadvantage of server virtualization is that when the server goes
offline, all the websites that are hosted by the server will also go down.
It can be difficult to measure the performance of virtualized environments accurately.
It requires a huge amount of RAM consumption.
It is difficult to set up and maintain.
Some core applications and databases do not support virtualization.
It requires extra hardware resources.
Hypervisor Management Software
Before hypervisors hit the mainstream, most physical computers could only run
one operating system (OS) at a time. This made them stable because the
computing hardware only had to handle requests from that one OS. The
downside of this approach was that it wasted resources because the operating
system couldn’t always use all of the computer’s power.
It provides the necessary services and features for the smooth running of
multiple operating systems.
It responds to privileged CPU instructions, and handles queuing, dispatching,
and returning the hardware requests.
The hypervisor is a hardware virtualization technique that allows multiple guest
operating systems (OS) to run on a single host system at the same time.
It separates VMs from each other logically, assigning each its own slice of the
underlying computing power, memory, and storage. This prevents the VMs from
interfering with each other; so if, for example, one OS suffers a crash or a
security compromise, the others survive.
Hypervisor Management Software Cont….
Types of Hypervisor: There are two types of hypervisors: "Type 1" (also known as
"bare metal") and "Type 2" (also known as "hosted").
1. Type-1 Hypervisor:
The hypervisor runs directly on the underlying host system. It has direct access to
hardware resources interacting directly with its CPU, memory, and physical
storage.
Therefore it is also known as a “Native Hypervisor” or “Bare metal hypervisor”.
A Type 1 hypervisor takes the place of the host operating system. It does not
require any base server operating system.
Examples of Type 1 hypervisors include VMware ESXi, Citrix XenServer,
Microsoft Hyper-V, and KVM.
KVM was integrated into the Linux kernel in 2007, so any reasonably recent
Linux kernel already includes it.
Pros of Type-1 Hypervisor: Such hypervisors are very efficient because they have
direct access to the physical hardware resources (like CPU, memory, network,
and physical storage). This also strengthens security, because there is no
intervening third-party layer that an attacker could compromise.
Cons of Type-1 Hypervisor: One problem with Type-1 hypervisors is that they
usually need a separate management machine to administer the different VMs
and control the host hardware resources.
Hypervisor Management Software Cont….
Type 1 hypervisors can virtualize more than just server operating systems. They can
also virtualize desktop operating systems for companies that want to centrally
manage their end-user IT resources. Virtual desktop infrastructure (VDI) lets users
work on desktops running inside virtual machines on a central server, making it
easier for IT staff to administer and maintain their OSs.
Hyper-V hypervisor
Hyper-V is Microsoft’s hypervisor designed for use on Windows systems. It shipped
in 2008 as part of Windows Server, meaning that customers needed to install the
entire Windows operating system to use it. Hyper-V is also available on Windows
clients.
Microsoft designates Hyper-V as a Type 1 hypervisor, even though it runs
differently from many competitors. Hyper-V installs on Windows but runs directly on
the physical hardware, inserting itself underneath the host OS. All guest operating
systems then run through the hypervisor, but the host operating system gets
special access to the hardware, giving it a performance advantage.
Hypervisor Management Software Cont….
2. Type-2 Hypervisor:
Also known as a hosted hypervisor, the Type 2 hypervisor is a software layer or
framework that runs on a traditional operating system.
A host operating system runs on the underlying host system, which is why this is
also known as a "hosted hypervisor." Such hypervisors don't run directly on the
underlying hardware; rather, they run as an application on a host system (physical
machine).
Basically, the software is installed on an operating system, and the hypervisor
asks that operating system to make hardware calls.
Individual users who wish to operate multiple operating systems on a personal
computer should use a Type 2 hypervisor.
A Type 2 hypervisor doesn’t run directly on the underlying hardware. Instead, it
runs as an application in an OS.
Hypervisor Management Software Cont….
Type 2 hypervisors are suitable for individual PC users needing to run multiple
operating systems.
Examples include engineers, security professionals analyzing malware, and
business users that need access to applications only available on other software
platforms.
Type 2 hypervisors often feature additional toolkits for users to install into the
guest OS. These tools provide enhanced connections between the guest and the
host OS, often enabling the user to cut and paste between the two or access host
OS files and folders from within the guest VM.
A Type 2 hypervisor enables quick and easy access to an alternative guest OS
alongside the primary one running on the host system. This makes it great for
end-user productivity. A consumer might use it to access their favorite Linux-
based development tools while using a speech dictation system only found in
Windows, for example.
Examples of Type 2 hypervisors include VMware Player and Parallels Desktop.
Another Type 2 hypervisor for desktop and laptop users is
VirtualBox [a Type 2 hypervisor running on Linux, macOS, and Windows
operating systems. Oracle inherited the product when it bought Sun
Microsystems in 2010.]
Hypervisor Management Software Cont….
Files, blocks, and objects are storage formats that hold, organize, and present
data in different ways—each with their own capabilities and limitations.
File storage organizes and represents data as a hierarchy of files in folders
Block storage chunks data into arbitrarily organized, evenly sized volumes
Object storage manages data and links it to associated metadata.
Storage Virtualization
To access any file, the server's operating system uses the unique address to pull
the blocks back together into the file, which takes less time than navigating
through directories and file hierarchies to access a file.
Block storage works well for critical business applications, transactional databases,
and virtual machines that require low-latency (minimal delay).
Blocks can be stored anywhere in the system, rather than conforming to a static
directory/subdirectory/folder structure. Block storage also provides more
granular access control and reliable performance.
Block Level Storage Virtualization Cont…..
Block storage breaks up data into blocks and then stores those blocks as separate
pieces, each with a unique identifier.
In block-level storage, a storage device such as a hard disk drive (HDD) is
identified as a storage volume.
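The split-and-reassemble idea can be sketched in a few lines. This is a toy model with tiny 4-byte blocks for readability; real systems use much larger blocks and store the ID-to-location map in filesystem or SAN metadata.

```python
import uuid

def split_into_blocks(data: bytes, block_size: int = 4) -> dict:
    """Break data into fixed-size blocks, each stored under a unique ID."""
    blocks = {}
    order = []
    for i in range(0, len(data), block_size):
        block_id = str(uuid.uuid4())       # unique identifier per block
        blocks[block_id] = data[i:i + block_size]
        order.append(block_id)             # kept so the file can be reassembled
    return {"blocks": blocks, "order": order}

def reassemble(volume: dict) -> bytes:
    """Pull the blocks back together into the original data by address."""
    return b"".join(volume["blocks"][bid] for bid in volume["order"])

volume = split_into_blocks(b"hello block storage")
print(reassemble(volume) == b"hello block storage")  # True
```

Because each block is addressed directly by its identifier, there is no directory hierarchy to traverse, which is the property that gives block storage its low latency.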
Block Storage as a Service: Block Storage as a Service (BSSaaS) falls into the much
larger category of Enterprise Storage as a Service (STaaS), where those looking for
cloud-based storage can select from block, file, or object storage to support their
workloads.
Block Level Storage Virtualization Cont….
Storage Area Network (SAN)
Block storage, sometimes referred to as block-level storage, is a technology that is
used to store data files on Storage Area Networks (SAN) or cloud-based storage
environments.
A Storage Area Network (SAN) is a dedicated, independent high-speed network
that interconnects and delivers shared pools of storage devices to multiple
servers.
Block Level Storage Virtualization Cont….
Each server can access shared storage as if it were a drive directly attached to
the server.
SANs are typically composed of hosts, switches, storage elements, and storage
devices that are interconnected using a variety of technologies, topologies, and
protocols.
The host layer is connected to the fabric layer, which is a collection of devices,
such as SAN switches, routers, protocol bridges, gateway devices, and cables.
The fabric layer interacts with the storage layer, which consists of the physical
storage devices, such as disk drives, magnetic tape, or optical media.
What are the benefits of block-level storage virtualization? There are several
benefits to using block-level storage virtualization.
First, it can improve the performance of a computer system by allowing the
system to access the virtual device faster than the physical device.
Second, it can provide a higher level of security for the data stored on the
virtual device, as the physical device can be kept offline and away from
potential threats.
Finally, it can allow for easier management of storage devices, as the virtual
devices can be created, deleted, and modified as needed.
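The management benefit above can be sketched with a toy model (hypothetical classes, not a real product API): a virtualization layer keeps a map from logical block addresses to physical locations, so the host sees one virtual device while blocks can be placed or moved independently.

```python
# Hypothetical sketch of block-level virtualization: logical block
# addresses are mapped to (physical device, physical block) pairs.

class VirtualVolume:
    def __init__(self):
        self.mapping = {}  # logical block -> (device, physical block)

    def map_block(self, logical, device, physical):
        self.mapping[logical] = (device, physical)

    def resolve(self, logical):
        # The host asks for "block N" of the volume; the layer redirects
        # the request to wherever that block physically lives.
        return self.mapping[logical]

    def migrate(self, logical, new_device, new_physical):
        # Management benefit: a block can be moved between devices
        # without the host changing how it addresses the volume.
        self.mapping[logical] = (new_device, new_physical)


vol = VirtualVolume()
vol.map_block(0, "disk-A", 1042)
vol.map_block(1, "disk-B", 7)  # blocks may sit on different devices
vol.migrate(0, "disk-C", 55)
assert vol.resolve(0) == ("disk-C", 55)
```

Because the host only ever sees logical addresses, volumes can be created, deleted, resized, or relocated behind this mapping without disrupting applications.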
Virtual Storage Area Network
A Virtual Storage Area Network (VSAN) is a logical partition created within a
physical storage area network. It is a storage solution used to create and
manage storage for virtual machines.
A VSAN allows end users and organizations to provision a logical storage area
network on top of the physical SAN through storage virtualization.
The virtualized SAN can be used to build a virtual storage pool for multiple
services; however, it is generally provisioned to be integrated with virtual
machines and virtual servers.
A VSAN provides services and features similar to those of a typical SAN, but
because it is virtualized, it allows subscribers to be added or relocated without
changing the network’s physical layout. It also provides flexible storage capacity
that can be increased or decreased over time.
Virtual Storage Area Network Cont…..
It is intended for use in cloud computing scenarios, especially with virtualized
infrastructure such as VMware vSphere.
Using VSAN, storage resources from various physical servers can be combined and
presented as a single, shared storage pool.
Virtual machine disk files (VMDKs) and other data can then be kept in this pool.
VSAN dynamically allocates storage to virtual machines according to their
requirements.
This can make virtual environments simpler to administer and lower the expense
of obtaining and maintaining physical storage.
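The pooling and need-based allocation described above can be sketched as a toy model (not the real VMware vSAN API; all class and method names are illustrative):

```python
# Toy model: aggregate per-host capacity into one shared pool and
# allocate VMDK space from it on demand.

class StoragePool:
    def __init__(self, host_capacities_gb):
        # Capacity from several physical hosts appears as one datastore
        self.capacity_gb = sum(host_capacities_gb.values())
        self.allocations = {}  # VM name -> GB allocated

    def free_gb(self):
        return self.capacity_gb - sum(self.allocations.values())

    def allocate_vmdk(self, vm_name, size_gb):
        if size_gb > self.free_gb():
            raise RuntimeError("pool exhausted")
        # Dynamic, need-based allocation for the virtual machine
        self.allocations[vm_name] = size_gb
        return f"{vm_name}.vmdk"


pool = StoragePool({"host1": 500, "host2": 500, "host3": 250})
assert pool.capacity_gb == 1250  # presented as a single shared pool
assert pool.allocate_vmdk("web01", 100) == "web01.vmdk"
assert pool.free_gb() == 1150
```

The key point is that the consumer of the pool never deals with individual hosts: capacity is one number, and allocations draw from it wherever free space exists.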
Virtual Storage Area Network Cont…..
Benefits of VSAN
Cost-effectiveness: VSAN does not require dedicated physical storage arrays, so it
is less expensive than conventional storage systems.
Scalability: VSAN is well-suited for cloud computing environments that demand the
capacity to scale rapidly and efficiently to meet changing storage requirements.
Increased performance: To deliver quick and dependable storage performance,
VSAN makes use of the high-speed interconnects found inside the cloud
computing architecture.
Flexibility: Block and file storage are both supported by VSAN, giving clients the
option to select the type of storage that best suits their requirements.
Data security: VSAN has tools like data replication and snapshots that guard
against data loss and guarantee that vital information is always accessible.
Simplified Management: Administration is made easier because of VSAN’s
integration with the VMware vSphere virtualization platform, which offers a
single pane of glass for management.
Virtual Storage Area Network Cont…..
Applications of VSAN
Hybrid Cloud Storage: VSAN can be used to create a hybrid cloud environment
where data can be stored and managed on-premises and in the cloud.
Virtual Desktop Infrastructure (VDI): VSAN can be used to offer storage for virtual
desktop environments (VDI), enabling effective virtual desktop management and
storage in the cloud.
Disaster Recovery and Business Continuity: VSAN can be used to build disaster
recovery and business continuity solutions in which data is copied to a secondary
site in the cloud for protection against outages and data loss.
Application Development and Testing: VSAN can be used to offer storage for
environments used for developing and testing applications, allowing programmers
to build and test cloud-based apps.
Backup and Archiving: Storage for backup and archiving solutions can be provided
by VSAN, allowing businesses to store and safeguard their data in the cloud.
Virtual Storage Area Network Cont…..
Difference Between SAN and VSAN

Parameter: Definition
SAN: A SAN is a dedicated network that provides block-level access to storage
devices.
VSAN: A VSAN is a software-defined storage solution that aggregates the physical
storage resources of hosts in a vSphere cluster and presents them as a single,
shared datastore.
File Level Storage Virtualization
In file storage, data is stored in files, the files are organized in folders, and the
folders are organized under a hierarchy of directories and subdirectories.
To locate a file, all you or your computer system needs is the path: from
directory to subdirectory to folder to file.
Hierarchical file storage works well with easily organized amounts of structured
data. But, as the number of files grows, the file retrieval process can become
cumbersome and time-consuming.
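The hierarchical lookup described above can be shown in miniature (a throwaway sketch using a temporary directory; the names are illustrative):

```python
import tempfile
from pathlib import Path

# A file is reached purely by its path:
# directory -> subdirectory -> folder -> file.

root = Path(tempfile.mkdtemp())
(root / "reports" / "2024" / "q1").mkdir(parents=True)

report = root / "reports" / "2024" / "q1" / "sales.txt"
report.write_text("Q1 sales figures")

# All a client needs to retrieve the data is the full path.
assert report.read_text() == "Q1 sales figures"
```

With a handful of folders this lookup is instant; the cons discussed later arise when the tree grows to millions of files and the path-based search itself becomes the bottleneck.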
File Level Storage Virtualization Cont….
Scaling requires adding more hardware devices or continually replacing these with
higher-capacity devices, both of which can get expensive.
To some extent, you can mitigate these scaling and performance issues with
cloud-based file storage services.
These services allow multiple users to access and share the same file data located
in off-site data centers (the cloud). You simply pay a monthly subscription fee to
store your file data in the cloud, and you can easily scale up capacity and specify
your data performance and protection criteria.
Moreover, with cloud services you eliminate the expense of maintaining your own
on-site hardware, since this infrastructure is managed and maintained by the cloud
service provider in its data center. This is also known as Infrastructure-as-a-Service
(IaaS).
File Level Storage Virtualization Cont….
File storage has been a popular storage technique for decades—it’s familiar to
virtually every computer user, and it’s well-suited to storing and organizing
transactional data or manageable structured data volumes that can be neatly
stored in a database in a disk drive on a server.
Network attached storage (NAS) is a centralized file server, which allows multiple
users to store and share files over a TCP/IP network via Wi-Fi or an Ethernet cable.
It is also commonly known as a NAS box, NAS unit, NAS server, or NAS head.
These devices rely on a few components to operate, such as hard drives, network
protocols, and a lightweight operating system (OS).
Hard drives or hard disk drives (HDDs): HDDs provide storage capacity for a NAS
unit as well as an easy way to scale. As more data storage is needed, additional
hard disks can be added to meet the system demand, earning it the name “scale-
out” NAS.
The use case for the NAS device usually determines the type of HDD used. For
example, sharing large media files, such as streaming video, across an organization
requires more resources than a file system for a single user at home.
File Level Storage Virtualization Cont….
Network Protocols: TCP/IP protocols, i.e., Transmission Control Protocol (TCP) and
Internet Protocol (IP), are used for data transfer, but the network protocols for
data sharing can vary based on the type of client. For example, a Windows client
will typically use the Server Message Block (SMB) protocol, while a Linux or UNIX
client will use the Network File System (NFS) protocol.
Usually, file-level storage can be accessed using standard file-level protocols such as
SMB/CIFS (Windows) and NFS (Linux, VMware).
File level storage system or NAS has to manage user access control and the
assignment of permissions in certain instances. Some devices may be integrated
into existing systems of authentication and security.
File Level Storage Virtualization Cont….
Pros and cons of file storage: Examples of data typically saved using file storage
include presentations, reports, spreadsheets, graphics, photos, etc.
File storage is familiar to most users and allows access rights and limits to be set by the
user, but managing large numbers of files and the cost of hardware can become challenges.
Pros
1. Easy to access on a small scale: With a small-to-moderate number of files, users can easily
locate and click on a desired file, and the file with the data opens. Users then save the file to
the same or a different location when they’re finished with it.
2. Familiar to most users: As the most common storage type for end users, most people with
basic computer skills can easily navigate file storage with little assistance or additional
training.
3. Users can manage their own files: Using a simple interface, end users can create, move and
delete their files.
4. Allows access rights/file sharing/file locking to be set at user level: Users and
administrators can set a file as write (meaning users can make changes to the file), read-only
(users can only view the data) or locked (specific users cannot access the file even as read
only). Files can also be password-protected.
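The per-user rights in item 4 can be sketched with a toy in-memory model (hypothetical class, not a real filesystem ACL or NAS permission system):

```python
# Toy model of user-level access rights: write, read-only, or locked.

class SharedFile:
    def __init__(self, content):
        self.content = content
        self.rights = {}  # user -> "write" | "read-only" | "locked"

    def set_right(self, user, right):
        self.rights[user] = right

    def read(self, user):
        if self.rights.get(user) == "locked":
            raise PermissionError(f"{user} cannot access this file")
        return self.content

    def write(self, user, new_content):
        if self.rights.get(user) != "write":
            raise PermissionError(f"{user} has no write access")
        self.content = new_content


doc = SharedFile("draft v1")
doc.set_right("alice", "write")      # can view and change
doc.set_right("bob", "read-only")    # can only view
doc.set_right("eve", "locked")       # cannot access at all

doc.write("alice", "draft v2")
assert doc.read("bob") == "draft v2"
```

A real NAS enforces the same three outcomes (change, view-only, no access) through the operating system and network protocol rather than application code.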
File Level Storage Virtualization Cont….
Cons
1. Challenging to manage and retrieve large numbers of files: While hierarchical storage
works well for, say, 20 folders with 10 subfolders each, file management becomes
increasingly complicated as the number of folders, subfolders, and files increases. As the
volume grows, the time the search feature needs to find a desired file increases, and
the lost time adds up across employees throughout an organization.
2. Becomes expensive at large scales: When the amount of storage space on devices and
networks reaches capacity, additional hardware devices must be purchased.
File Level Storage Virtualization Cont….
2. Backup and recovery: Cloud backup and external backup devices typically use file storage
for creating copies of the latest versions of files.
3. Archiving: Because of the ability to set permissions at a file level for sensitive data and the
simplicity of management, many organizations use file storage for archiving documents for
compliance or historical reasons.
Virtual Local Area Network
Through VLANs, a network is divided into smaller sub-networks that are
comparatively easy to manage.
Virtual Local Area Networks or Virtual LANs (VLANs) are a logical group of
computers that appear to be on the same LAN irrespective of the configuration
of the underlying physical network.
Network administrators partition the network to match the functional
requirements of the VLANs, so that each VLAN comprises a subset of ports on
one or more switches or bridges. This allows computers and devices in a VLAN
to communicate as if they were on a separate LAN.
A Virtual LAN (VLAN) is a concept in which devices are divided logically at
layer 2 (the data link layer) of the OSI model. Generally, layer-3 devices divide
broadcast domains, but with VLANs a broadcast domain can also be divided by
switches.
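The broadcast-domain splitting described above can be illustrated with a toy simulation (not real switching logic; device and port names are invented):

```python
# Toy simulation: a switch forwards a broadcast only to ports in the
# sender's VLAN, so one physical LAN is split into separate layer-2
# broadcast domains.

class Switch:
    def __init__(self):
        self.ports = {}  # port number -> (device, vlan_id)

    def connect(self, port, device, vlan_id):
        self.ports[port] = (device, vlan_id)

    def broadcast(self, sender):
        sender_vlan = next(v for d, v in self.ports.values() if d == sender)
        # Frames never cross VLAN boundaries at layer 2
        return sorted(d for d, v in self.ports.values()
                      if v == sender_vlan and d != sender)


sw = Switch()
sw.connect(1, "hr-pc1", vlan_id=10)
sw.connect(2, "hr-pc2", vlan_id=10)
sw.connect(3, "eng-pc1", vlan_id=20)
sw.connect(4, "eng-pc2", vlan_id=20)

assert sw.broadcast("hr-pc1") == ["hr-pc2"]    # stays within VLAN 10
assert sw.broadcast("eng-pc2") == ["eng-pc1"]  # stays within VLAN 20
```

All four PCs share one physical switch, yet HR and Engineering traffic never mix at layer 2; reaching across VLANs would require a layer-3 device (a router).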
Cloud Infrastructure Requirements Cont….
Most data centers utilize a variety of IT tools for systems management, security,
provisioning, customer care, billing, and directories, among others.
And these work with cloud management services and open APIs to integrate
existing operation, administration, maintenance, and provisioning (OAM&P)
systems.
A modern cloud service should support a data center’s existing infrastructure as
well as leveraging modern software, hardware, and virtualization, and other
technology.
Cloud Infrastructure Requirements Cont….
3. Requirement 3: Reporting, Visibility, Reliability, and Security
Data centers need high levels of real-time reporting and visibility capabilities in
cloud environments to guarantee compliance, SLAs, security, billing, and charge-
backs.
Without robust reporting and visibility, managing system performance, customer
service, and other processes is nearly impossible.
To be wholly reliable, cloud infrastructures must continue operating even when
one or more components fail.
To safeguard the cloud, services must ensure data and apps are secure while
providing access to those who are authorized.
Cloud Infrastructure Requirements Cont….