Cloud Computing Unit-III Cloud Management & Virtualization Technology By. Dr. Samta Gajbhiye

The document discusses resiliency and provisioning in cloud computing. Regarding resiliency, it defines resiliency as a cloud system's ability to continue functioning after failures. It provides examples of failures like hardware issues and disasters. It also lists measures to improve resiliency like implementing redundancy, load balancing, monitoring tools, and disaster recovery services. Regarding provisioning, it defines provisioning as allocating cloud resources and services to customers. It discusses different types of provisioning like server, user, network, and cloud provisioning. It notes that provisioning involves setting up infrastructure and managing access to resources.

Uploaded by

Pulkit Jain
Copyright
© All Rights Reserved

Cloud Computing

Unit-III
Cloud Management & Virtualization Technology

By Dr. Samta Gajbhiye

Resiliency

 In cloud computing, resilience refers to a cloud system’s capacity to bounce
back from setbacks and carry on operating normally.
 Resiliency is the ability of the system to react to failure and still remain functional.
 Hardware malfunctions, software flaws, and natural disasters are just a few
examples of the different failures that a resilient cloud system can survive and
recover from with little to no service interruption.

Resiliency Cont…….
 Measures: There are several steps that can be taken to improve a cloud
computing system’s resilience:
1. Implement redundant systems: Using redundant systems, such as multiple
servers or data centers, can help ensure that the system continues to function
even if one component fails.
2. Use load balancers: Load balancers can distribute traffic across multiple
servers, preventing a single server from becoming overburdened and ensuring
that the system remains operational.
3. Use backup and recovery systems: Using backup and recovery systems can
help ensure that data is protected and recoverable in the event of a disaster.
4. Use monitoring and alerting tools: Monitoring tools can assist in identifying
issues before they become problems, and alerting systems can notify the
appropriate personnel when problems arise.
5. Implement security measures: Encryption and access controls, for example,
can help protect data and systems from unauthorized access.
6. Use disaster recovery as a service (DRaaS): DRaaS is a cloud-based service that
provides backup and recovery capabilities for cloud systems. In the event of a
disaster, using DRaaS can help ensure that a system is quickly recovered.
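As a small illustration of the redundancy and monitoring measures above (all names here are hypothetical, not a real cloud API), a client can retry a request against redundant replicas so that one server's failure does not interrupt service:

```python
# Illustrative sketch of redundancy plus failover; Replica and
# resilient_request are hypothetical names, not a real cloud API.

class Replica:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def handle(self, request):
        if not self.healthy:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name} served {request}"

def resilient_request(replicas, request):
    """Try each redundant replica in turn; fail only if all are down."""
    for replica in replicas:
        try:
            return replica.handle(request)
        except ConnectionError:
            continue  # a monitoring/alerting tool would record this failure
    raise RuntimeError("all replicas failed")

replicas = [Replica("server-1", healthy=False), Replica("server-2")]
print(resilient_request(replicas, "GET /index"))  # server-2 served GET /index
```

In a real deployment the retry loop would sit behind a load balancer and the health checks would run continuously, but the principle is the same: no single point of failure.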
Resiliency Cont…….
 Advantages
1. Reduced Downtime: A robust cloud system can lessen users’ downtime (time
during which a machine, especially a computer, is out of action or unavailable
for use) by promptly recovering from faults.
2. Greater Adaptability: Because a resilient cloud system can recover from faults and
scale up or down as necessary, it can be more adaptive and flexible to changing
needs and workloads.
3. Increased Availability: The ability of a resilient cloud system to recover from errors
and carry on operating might increase the system’s overall availability.
4. Increased reliability: A resilient system is less likely to be disrupted or fail, which
can lead to increased reliability and a better user experience.
5. Faster recovery: A resilient system can withstand and recover from disruptions more
quickly, resulting in a shorter recovery time and less downtime.
6. Increased security: A resilient system can withstand and recover from security
breaches and other types of attacks, which can help protect data and assets.
7. Cost savings: Putting in place resiliency measures can help cut the costs associated
with disruptions and failures, such as lost revenue, repair costs, and reputation
damage.
8. Increased competitiveness: A resilient system is more appealing to customers and
partners, which can lead to increased market competitiveness.
Resiliency Cont…….
 Limitations of Resiliency in Cloud
1. Cost: Putting steps in place to make a cloud system more resilient can be
expensive, especially if doing so entails buying extra hardware or creating and
testing a thorough disaster recovery plan.
2. Human Error: Despite all efforts to create a resilient system, human mistakes can
sometimes result in interruptions, such as incorrect setups or unintentional data
erasure.
3. Complexity: Establishing a durable cloud system can be difficult because it calls
for coordinating the work of numerous teams and incorporating numerous
technologies and procedures.
4. Limited Control: The user may only have a limited amount of control over the
underlying infrastructure and may not be able to adopt certain resiliency
measures, depending on the type of cloud service being utilized.
5. Dependence on External Elements: A cloud system may be susceptible to
interruptions brought on by external circumstances, which may be out of the
user’s control, such as network problems or power outages.

Provisioning

 Cloud provisioning is the allocation of a cloud provider's resources and
services to a customer.
 Provisioning is the process of creating and setting up IT infrastructure, and
includes the steps required to manage user and system access to various
resources.
 The term “provisioning” is sometimes confused with “configuration,” although both are
steps in the deployment process. Provisioning comes first; configuration follows,
after something has been provisioned.
 Provisioning covers a variety of processes, including: Server Provisioning, User
Provisioning, Network Provisioning, Device Provisioning, Internet Access Provisioning

Provisioning Cont…….
1. Server provisioning: Server provisioning is the process of setting up physical or
virtual hardware; installing and configuring software, such as the operating system
and applications; and connecting it to middleware, network, and storage
components.
 Provisioning can encompass all of the operations needed to create a new
machine and bring it to the desired state, which is defined according to business
requirements.
 There are many types of servers, categorized according to their uses. Each of
them has unique provisioning requirements, and the choice of the server itself
will be driven by the intended use.
 For example, there are file servers, policy servers, mail servers, and
application servers
 Server provisioning includes processes such as adjusting control panels,
installing operating systems and other software, or even replicating the
setup of other servers.
 Generally, server provisioning is a process that constructs a new machine,
bringing it up to speed and defining the system’s desired state.

Provisioning Cont…….
2. User Provisioning:  User provisioning is a type of identity management that
involves granting permissions to services and applications within a corporate
environment—like email, a database, or a network—often based on a user's job
title or area of responsibility.
 The act of revoking user access is often referred to as deprovisioning.
 An example of user provisioning is role based access control (RBAC). Configuring
RBAC includes assigning user accounts to a group, defining the group’s role—for
example, read-only, editor, or administrator—and then granting these roles
access rights to specific resources based on the users’ functional needs.
 The user provisioning process is often managed between IT and human
resources.
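The RBAC configuration described above can be sketched in a few lines; the users, groups, and roles below are hypothetical, and a real identity management system would store these mappings in a directory service rather than in dictionaries:

```python
# Minimal sketch of role-based access control (RBAC): users are assigned to
# groups, groups map to roles, and roles grant access rights.
# All names are hypothetical.

ROLE_PERMISSIONS = {
    "read-only": {"read"},
    "editor": {"read", "write"},
    "administrator": {"read", "write", "delete"},
}

GROUP_ROLE = {"finance-analysts": "read-only", "content-team": "editor"}
USER_GROUP = {"alice": "content-team", "bob": "finance-analysts"}

def is_allowed(user, action):
    """Provisioned access: user -> group -> role -> permissions."""
    group = USER_GROUP.get(user)
    role = GROUP_ROLE.get(group)
    return action in ROLE_PERMISSIONS.get(role, set())

def deprovision(user):
    """Deprovisioning: revoking the user's access entirely."""
    USER_GROUP.pop(user, None)

print(is_allowed("alice", "write"))  # True  (content-team -> editor)
print(is_allowed("bob", "write"))    # False (finance-analysts -> read-only)
```

Note that access is granted to the role, never to the individual user, which is what makes provisioning and deprovisioning a single group assignment rather than a per-resource change.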
3. Network Provisioning:  When referring to IT infrastructure, network provisioning is
the setting up of components such as routers, switches, and firewalls; allocating IP
addresses; and performing operational health checks and fact gathering. 
 For telecommunications companies, the term “network provisioning” refers to
providing users with a telecommunications service, such as assigning a phone
number, or installing equipment and wiring.

Provisioning Cont…….

4. Device Provisioning: This technology is mostly used when deploying the IoT
network. In this, a device is configured, secured, customized, and certified, after
which a user is allocated these devices. This enables improved device
management, flexibility, and device sharing.
5. Cloud provisioning: Cloud provisioning includes creating the underlying
infrastructure for an organization’s cloud environment, like installing networking
elements, services, and more. Once the basic cloud infrastructure is in place,
provisioning involves setting up the resources, services, and applications inside a
cloud.
6. Service provisioning: Service provisioning includes the setup of IT-dependent
services for an end user and managing the related data. Examples of service
provisioning may include granting an employee access to a software-as-a-service
platform, and setting up credentials and system privileges to limit access to
certain types of data and activities.

Provisioning Cont…….
 Benefits of automated provisioning
• Provisioning often requires IT teams to repeat the same process over and over,
such as granting a developer access to a virtual machine in order to deploy and test
a new application.
 This makes manually provisioning resources time-consuming and prone to
human error, which can delay the time-to-market of new products and
services.
 Manual provisioning also pulls busy IT teams away from projects that are more
important to an organization’s larger strategy. 
• Provisioning tasks can easily be handled through automation, using infrastructure-
as-code (IaC).
 With IaC, infrastructure specifications are stored in configuration files, which
means that developers just need to execute a script to provision the same
environment every time.
 Codifying infrastructure gives IT teams a template to follow for provisioning.
• Automated provisioning provides greater consistency across modern IT
environments, reduces the likelihood of errors and loss of productivity, and frees
IT teams to focus on strategic business objectives.
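The IaC idea can be sketched as a declarative specification plus an idempotent script: the desired environment lives in a config structure, and re-running the script provisions the same environment every time. The spec and the provisioning function below are stand-ins, not a real cloud API; tools like CloudFormation or Terraform work from similar declarative specs.

```python
# Hedged sketch of infrastructure-as-code: desired state is declared as data,
# and the script only creates what is missing, so reruns are idempotent.
# DESIRED_STATE and provision are hypothetical names.

DESIRED_STATE = {
    "vms": [
        {"name": "web-1", "cpus": 2, "memory_gb": 4},
        {"name": "web-2", "cpus": 2, "memory_gb": 4},
    ]
}

def provision(current, desired):
    """Compare current vs. desired state; create only the missing VMs."""
    existing = {vm["name"] for vm in current}
    created = []
    for vm in desired["vms"]:
        if vm["name"] not in existing:
            created.append(vm["name"])  # stand-in for a real create-VM API call
    return created

print(provision([], DESIRED_STATE))                    # ['web-1', 'web-2']
print(provision(DESIRED_STATE["vms"], DESIRED_STATE))  # [] (nothing to do)
```

Because the script converges on the declared state rather than executing a fixed sequence of steps, a developer can run it after every change without fear of duplicating resources.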
Provisioning Cont…….
 With this more efficient provisioning process: 
 End users and developers can gain access to the IT resources and systems they
need in less time, empowering them to be more productive. 
 Developers can bring applications and services to market faster, which can
improve customer experience and revenue. 
 IT teams can spend less time on menial (trivial), repetitive tasks—like correcting
errors and misconfigurations—which allows them to focus on more critical
priorities.
 Cloud provisioning tools and software: Organizations can manually provision
whatever resources and services they need, but public cloud providers offer tools
to provision multiple resources and services:
• AWS CloudFormation
• Microsoft Azure Resource Manager
• Google Cloud Deployment Manager
• IBM Cloud Orchestrator
 Some organizations further automate the provisioning process as part of a
broader cloud management strategy through orchestration and configuration
management tools, such as HashiCorp Terraform, Red Hat Ansible, Chef, and
Puppet.
Asset Management
 Cloud asset management is the process used to control an organization's cloud
infrastructure and the application data within the cloud.
 Many organizations use a variety of cloud-based applications to store and manage
their digital assets.
 Cloud asset management software offers organizations robust capabilities,
from asset tagging and mobile apps, to advanced reporting capabilities. 
 With a collection of cloud-based asset sources, CAM helps organize assets to avoid
operational hiccups and security concerns. 
 An asset cloud is a centralized digital storage facility that operates over the
internet. 
 Though asset records can be stored on internal company hard drives and servers,
cloud-based asset storage solutions utilize a remote server. 
 Asset clouds ultimately help avoid costly internal infrastructure, enhance
company-wide functionality, and provide detailed asset history
 Asset cloud can also enhance employee productivity. With digital assets available
from anywhere in the world with an internet connection
 Incorporating cloud asset management practices provides an organization
with visibility and easy control over the digital assets within the
company cloud.
Asset Management Cont…..
 Cloud asset management software is often better suited for larger organizations
that simply possess a higher quantity and wider variety of both physical and digital
assets. 
 Software improves the visibility into each digital asset and its data. This way,
your company can keep a finger on the pulse of depreciation, application
updates, warranty expirations, and more.
 Software also comes equipped with strong security features, like dedicated
servers and encrypted data, to keep company (and consumer) data safe and out of
the wrong hands. 
 Pros of Cloud asset management
 Centralized Data: Cloud-based software centralizes all data in one location to
search and organize 
 Advanced Accessibility: Cloud-based solutions allow for multiple users and
remote access
 Attachment Functionality: Software increases asset efficiency by allowing users to
attach vital asset information such as pictures, SOPs, user manuals, and more
 Less Time and Effort: Software automation helps reduce typically time-consuming
management processes while maintaining data accuracy.
 Mobile Apps: Mobile apps are handy for scanning barcodes and QR codes
displayed on asset tags.
Asset Management Cont…..

 Cons of Cloud asset management
• Price Point: Cloud asset management solutions can become pricey depending on
your company’s needs
• Internet Reliance: Cloud access requires an internet connection, meaning assets
may not be available if the connection is lost
• Learning Curve: It can be a challenge to reach complete software efficiency
without the appropriate training and education

Asset Management Cont…..
 Features of Cloud based Asset management

Asset Management Cont…..
 Benefits of Cloud based Asset management

Concepts of MapReduce
 MapReduce is a data processing tool used to process data in parallel in a
distributed form. It was developed in 2004, on the basis of the paper titled
"MapReduce: Simplified Data Processing on Large Clusters," published by Google.

 MapReduce is a paradigm that has two phases: the mapper phase and the
reducer phase.
 In the Mapper, the input is given in the form of a key-value pair. The output of the
Mapper is fed to the reducer as input.
 The reducer runs only after the Mapper is over. The reducer too takes input in
key-value format, and the output of reducer is the final output.
 The MapReduce task is mainly divided into two phases: Map Phase and Reduce
Phase.  

 MapReduce and HDFS are the two major components of Hadoop that make it
so powerful and efficient to use. MapReduce is a programming model used for
efficient parallel processing over large data sets in a distributed manner. The
data is first split and then combined to produce the final result.
[Apache Hadoop is an open source framework that is used to efficiently store and
process large datasets ranging in size from gigabytes to petabytes of data]
Map Reduce Cont…..
 Map Reduce Architecture

Map Reduce Cont…..

 Components of MapReduce Architecture:


1. Client: The MapReduce client is the one who brings the Job to the MapReduce
for processing. There can be multiple clients available that continuously send jobs
for processing to the Hadoop MapReduce Manager.
2. Job: The MapReduce Job is the actual work that the client wanted to do which is
comprised of so many smaller tasks that the client wants to process or execute.
3. Hadoop MapReduce Master: It divides the particular job into subsequent job-
parts.
4. Job-Parts: The tasks or sub-jobs that are obtained after dividing the main job.
The results of all the job-parts are combined to produce the final output.
5. Input Data: The data set that is fed to the MapReduce for processing.
6. Output Data: The final result obtained after processing.

Map Reduce Cont…..

 The MapReduce task is mainly divided into 2 phases i.e. Map phase and Reduce
phase.
• Map: As the name suggests, its main use is to map the input data into key-value
pairs. The input to the map may be a key-value pair where the key can be the ID of
some kind of address and the value is the actual value that it keeps.
The Map() function is executed on each of these input key-value pairs and
generates intermediate key-value pairs, which serve as input for the Reducer,
i.e., the Reduce() function.
 
• Reduce: The intermediate key-value pairs that serve as input for the Reducer are
shuffled, sorted, and sent to the Reduce() function. The reducer aggregates or
groups the data based on its key-value pairs, as per the reducer algorithm written
by the developer.

Map Reduce Cont…..

 How Job tracker and the task tracker deal with MapReduce:
• Job Tracker: The job tracker manages all the resources and all the jobs
across the cluster, and schedules each map task on a Task Tracker running on
the same data node, since there can be hundreds of data nodes available in the
cluster.
 
• Task Tracker: Task Trackers can be considered the actual workers that carry
out the instructions given by the Job Tracker. A Task Tracker is deployed
on each of the nodes available in the cluster and executes the Map and Reduce
tasks as instructed by the Job Tracker.

Map Reduce Cont…..

 Steps in Map Reduce


1. The map takes data in the form of pairs and returns a list of <key, value> pairs. The
keys will not be unique in this case.
2. Using the output of Map, sort and shuffle are applied by the Hadoop architecture.
This sort and shuffle acts on these list of <key, value> pairs and sends out unique
keys and a list of values associated with this unique key <key, list(values)>.
3. The output of sort and shuffle is sent to the reducer phase. The reducer performs
a defined function on the list of values for each unique key, and the final output
<key, value> is stored/displayed.

Map Reduce Cont…..
 Sort and Shuffle
• Sort and shuffle occur on the output of the Mapper, before the reducer. When
the Mapper task is complete, the results are sorted by key, partitioned if there are
multiple reducers, and then written to disk. Using the input from each Mapper,
<k2, v2>, we collect all the values for each unique key k2. This output from the
shuffle phase, in the form of <k2, list(v2)>, is sent as input to the reducer phase.
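The classic word-count example sketches the map, shuffle/sort, and reduce steps above in plain Python. This is a sequential, single-process illustration only; a real Hadoop job distributes these phases across nodes.

```python
from collections import defaultdict

# Word count, the canonical MapReduce example:
#   map    -> list of <key, value> pairs (keys not unique)
#   shuffle -> <k2, list(v2)>, grouped and sorted by key
#   reduce -> final <key, value> output

def map_phase(document):
    """Emit a <word, 1> pair for every word in the input."""
    return [(word, 1) for word in document.split()]

def shuffle_phase(pairs):
    """Collect all values for each unique key and sort by key."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)        # builds <k2, list(v2)>
    return dict(sorted(grouped.items()))  # results sorted by key

def reduce_phase(grouped):
    """Apply the reducer function (here: sum) to each key's value list."""
    return {key: sum(values) for key, values in grouped.items()}

pairs = map_phase("to be or not to be")
result = reduce_phase(shuffle_phase(pairs))
print(result)  # {'be': 2, 'not': 1, 'or': 1, 'to': 2}
```

The reducer never sees raw mapper output: it only receives each unique key with its full list of values, which is exactly what the shuffle phase guarantees.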

Map Reduce Cont…..

 Usage of MapReduce
 It can be used in various applications such as document clustering, distributed
sorting, and web link-graph reversal [the map function outputs <target, source>
pairs for each link to a target URL found in a page named “source”; the reduce
function concatenates the list of all source URLs associated with a given target
URL and emits the pair <target, list(source)>].
 It can be used for distributed pattern-based searching.
 We can also use MapReduce in machine learning.
 It was used by Google to regenerate Google's index of the World Wide Web.

Cloud Governance
 Cloud governance is the process of defining, implementing, and monitoring a
framework of policies that guides an organization's cloud operations.
 This process regulates how users work in cloud environments, to facilitate
consistent performance of cloud services and systems.

 It is the set of policies or principles that act as guidance for the adoption, use,
and management of cloud technology services.

 It is an ongoing process that must sit on top of existing governance models.


 A cloud governance framework is commonly built from existing IT practices, but in
some instances, organizations choose to develop a new set of rules and policies
specifically for the cloud.

 Implementing and monitoring this framework allows organizations to improve
oversight and control over vital areas of cloud operations—such as data
management, data security, risk management, legal procedures, cost management,
and much more—and makes sure that they are all working together to meet
business goals.
Cloud Governance Cont…..

 As organizations opt for the flexibility and scalability of cloud and hybrid
cloud operating models, their IT teams must also manage the new complexity that
comes with decentralized cloud environments.

 Cloud computing can alleviate (lessen) infrastructure and resource limitations, but
with users spanning multiple business units, it can be difficult to ensure cost-
effective use of cloud resources, minimize security issues, and enforce
established policies. 

 Building a cloud governance framework with strict processes for enforcement
and monitoring can help keep the many moving parts of cloud environments
aligned and working efficiently.

Cloud Governance Cont…..

 A comprehensive cloud governance strategy can help organizations:
• Improve business continuity. With better visibility across all business units,
organizations can develop clearer action plans in the case of data breaches or
downtime.
• Optimize resources and infrastructure. Increasing monitoring and control over
cloud resources and infrastructure can inform efforts to use resources effectively
and keep cloud costs low.
• Maximize performance. With a clearer view of their entire cloud environment,
organizations can improve operational efficiency by eliminating productivity
bottlenecks and simplifying management processes.
• Minimize security risks. A good governance model includes clear identity and
access management strategy and security monitoring processes, so that IT teams
are better positioned to identify and mitigate (lessen) vulnerabilities and
improve cloud security.

High Availability
 As organizations move their systems to the hybrid cloud, resilience is often a
critical concern.
 A few moments of disruption in such environments can cause a loss of credibility,
clients, and billions of dollars.
 The ability to withstand errors and failures without data loss is key to providing
reliable application services that contribute to business continuity. 
 High Availability (HA) provides a failover solution in the event a Control Room
service, server, or database fails.
 To provide continuous service over a given period, a highly available
architecture needs numerous modules functioning simultaneously. This also
affects the response time to users’ queries; software solutions need to be
not only available but also responsive.
 Highly available networks are designed to recover as quickly as possible
from unforeseen incidents.
 These systems reduce or eliminate downtime by transferring functions to
replacement modules.
 To ensure that there are no trouble spots, this normally includes routine
servicing, inspection, and preliminary in-depth checks.
High Availability Cont……

 The Need for High Availability
 Downtime is the amount of time during which your device (or connection) is
unavailable or inaccessible.
 Downtime may cause severe damage to a business: when its systems are down,
all of its operations are temporarily suspended.
 Amazon went down for 15 minutes in August 2013 (across its web and mobile
operations) and lost more than $66,000 per minute.

 There are two kinds of downtime: scheduled and unscheduled.
 Planned downtime is an inevitable consequence of maintenance. This
involves the application of patches, software updates, and even modifications
to the data structure.
 Unplanned downtime is induced by some unexpected occurrence, such as
system failure. It can occur because of a module’s power shortage or failure.

High Availability Cont……
 How is availability measured for high
availability?
 Normally, availability is demonstrated using a score of “nines,” the highest
common target being five 9’s.
 The target will be specified in the Service Level Agreement (SLA) if you opt for
a unified platform.
 Five nines, 99.999 percent uptime, targets systems that need to stay
functional around the clock throughout the year. It might seem like a
fraction of a percent doesn’t make a significant difference.
High Availability Cont……
 High availability (HA) is the elimination of single points of failure to enable
applications to continue to operate even if one of the IT components it depends
on, such as a server, fails. IT professionals eliminate single points of failure to
ensure continuous operation and uptime at least 99.99% annually.
 High availability clusters are groups of servers that support business-critical
applications. Applications are run on a primary server and in the event of a
failure, application operation is moved to secondary server(s) where they
continue to operate.
 Most organizations rely on business-critical databases and applications, such as
ERPs, data warehouses, e-commerce applications, customer relationship
management systems (CRM), financial systems, supply chain management, and
business intelligence systems. When a system, database, or application fails, these
organizations require high availability protection to keep systems up and running
and minimize the risk of lost revenue, unproductive employees, and unhappy
customers.
High Availability Cont……
 Highly Available Clusters Incorporate Five Design Principles:
 They automatically failover [a procedure by which a system automatically
transfers control to a duplicate system when it detects a fault or failure.] to a
redundant system to pick up an operation when an active component fails.
This eliminates single points of failure.
 They can automatically detect application-level failures as they happen,
regardless of the causes.
 They ensure no data loss during a system failure.
 They automatically and quickly failover to redundant components to
minimize downtime.
 They provide the ability to manually failover and failback [failback operation
is the process of returning production to its original location after a disaster or a
scheduled maintenance period.] to minimize downtime during planned
maintenance.
Disaster Recovery
 Disaster recovery as a service (DRaaS) is a cloud computing service model that
allows an organization to back up its data and IT infrastructure in a third-party
cloud computing environment and provides all the DR orchestration, all through a
SaaS solution, to regain access and functionality to IT infrastructure after a disaster.

 The as-a-service model means that the organization itself doesn’t have to own all
the resources or handle all the management for disaster recovery, instead relying
on the service provider.

 Disaster recovery planning is critical to business continuity. Many disasters that
have the potential to wreak havoc on an IT organization have become more
frequent in recent years:
 Natural disasters such as hurricanes, floods, wildfires and earthquakes
 Equipment failures and power outages
 Cyberattacks 
 Technology failures
 System incompatibilities
 Simple human error 
 Intentional unauthorized access by third parties
Disaster Recovery Cont……

 True DRaaS mirrors a complete infrastructure in fail-safe mode on virtual servers,
including compute, storage, and networking functions.
 An organization can continue to run applications—it just runs them from the
service provider’s cloud or hybrid cloud environment instead of from the disaster-
affected physical servers.
 This means recovery time after a disaster can be much faster, or even
instantaneous.
 Once the physical servers are recovered or replaced, the processing and data is
migrated back onto them.
Disaster Recovery Cont……
 Key benefits
1. Ensures business continuity: A disaster interrupts normal business
operations, as the team’s productivity is reduced due to limited access to the
tools they require to work. A disaster recovery plan prompts the quick restart of
backup systems and data so that operations can continue as scheduled. 
2. Enhances system security: Integrating data protection, backup, and restoring
processes into a disaster recovery plan limits the impact of ransomware,
malware, or other security risks for business.
3. Improves customer retention: If a disaster occurs, customers question the
reliability of an organization’s security practices and services. The longer a
disaster impacts a business, the greater the customer frustration.
 A good disaster recovery plan mitigates this risk by training employees to
handle customer inquiries. Customers gain confidence when they observe that
the business is well-prepared to handle any disaster. 
4. Reduces recovery costs: Depending on its severity, a disaster causes both loss of
income and productivity. A robust disaster recovery plan avoids unnecessary
losses as systems return to normal soon after the incident. For example, cloud
storage solutions are a cost-effective data backup method. You can manage,
monitor, and maintain data while the business operates as usual. 
Disaster Recovery Cont……
 How does disaster recovery work?: Disaster recovery focuses on getting
applications up and running within minutes of an outage. Organizations address
the following three components.
1. Prevention: To reduce the likelihood of a technology-related disaster, businesses
need a plan to ensure that all key systems are as reliable and secure as possible.
This requires to set up the right tools and techniques to prevent disaster. For
example, system-testing software that auto-checks all new configuration files
before applying them can prevent configuration mistakes and failures. 
2. Anticipation: Anticipation includes predicting possible future disasters, knowing
the consequences, and planning appropriate disaster recovery procedures.
 For example, backing up all critical business data to the cloud in anticipation of
future hardware failure of on-premises devices is a pragmatic approach to data
management.
3. Mitigation: Mitigation is how a business responds after a disaster scenario. A
mitigation strategy aims to reduce the negative impact on normal business
procedures. All key stakeholders know what to do in the event of a disaster,
including the following steps: Updating documentation, Conducting regular
disaster recovery testing, Identifying manual operating procedures in the event of
an outage, Coordinating a disaster recovery strategy with corresponding
personnel
Disaster Recovery Cont……
 AWS Elastic Disaster Recovery
Virtualization
 Virtualization is also the technology that drives cloud computing economics.
 Virtualization is technology that you can use to create virtual representations of
servers, storage, networks, and other physical machines.
 Virtualization is a technique that allows a single physical instance of a resource or an application to be shared among multiple customers and organizations. It does this by assigning a logical name to a physical resource and providing a pointer to that physical resource on demand.
 Virtualization software mimics the functions of physical hardware to run multiple virtual machines simultaneously on a single physical machine.
 Virtualization uses software to create an abstraction layer over computer
hardware that allows the hardware elements of a single computer—processors,
memory, storage and more—to be divided into multiple virtual computers,
commonly called virtual machines (VMs).
 Each VM runs its own operating system (OS) and behaves like an
independent computer, even though it is running on just a portion of the
actual underlying computer hardware.
 Virtual machines (VMs) are virtual environments that simulate a physical computer in software. They normally comprise several files containing the VM’s configuration, the storage for the virtual hard drive, and snapshots of the VM that preserve its state at a particular point in time.
Virtualization Cont…..
 Why virtualization?
 Virtualization enables cloud providers to serve users with their existing physical computer hardware.
 It enables cloud users to purchase only the computing resources they need when they need them, and to scale those resources cost-effectively as their workloads grow.
Virtualization Cont…..
 Types of virtualization: You can go beyond virtual machines to create a collection of virtual resources in your virtual environment.
• Desktop virtualization
• Network virtualization
• Storage virtualization
• Data virtualization
• Application virtualization
• Server virtualization
• Cloud virtualization
Virtualization Cont…..
1. Desktop virtualization: Most organizations have nontechnical staff that use
desktop operating systems to run common business applications.
 For instance, you might have the following staff:
– A customer service team that requires a desktop computer with Windows 10
and customer-relationship management software
– A marketing team that requires Windows Vista for sales applications
 You can use desktop virtualization to run these different desktop operating systems on virtual machines on the same computer, which your teams can access remotely. This type of virtualization makes desktop management efficient and secure, saving money on desktop hardware.
 There are two types of desktop virtualization:
 Virtual desktop infrastructure (VDI) runs multiple desktops in VMs on a central server and streams them to users who log in on thin client devices. In this way, VDI lets an organization provide its users access to a variety of OSs from any device, without installing those OSs on the devices themselves.
 Local desktop virtualization runs a hypervisor on a local computer, enabling
the user to run one or more additional OSs on that computer and switch
from one OS to another as needed without changing anything about the
primary OS.
Virtualization Cont…..
2. Network virtualization: Network virtualization uses software to create a “view” of
the network that an administrator can use to manage the network from a single
console.
 Any computer network has hardware elements such as switches, routers, and
firewalls. An organization with offices in multiple geographic locations can have
several different network technologies working together to create its enterprise
network.
 Network virtualization is a process that combines all of these network resources
to centralize administrative tasks and abstracts them into software running on a
hypervisor. Administrators can adjust and control these elements virtually
without touching the physical components, which greatly simplifies network
management.
 The following are two approaches to network virtualization.
 Software-defined networking: Software-defined networking (SDN) moves traffic-routing decisions out of the physical routing hardware and into software. For example, you can program your system to prioritize your video call traffic over application traffic to ensure consistent call quality in all online meetings.
 Network function virtualization: Network function virtualization (NFV) replaces dedicated network appliances, such as firewalls, load balancers, and traffic analyzers, with software functions that work together to improve network performance and simplify management.
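The SDN prioritization example above can be illustrated with a toy packet scheduler. The traffic classes and priority values here are invented for illustration and are not any real SDN controller's API; the idea is only that higher-priority traffic is always dequeued first.

```python
import heapq

# Invented traffic classes: lower number = higher priority.
PRIORITY = {"video": 0, "app": 1, "bulk": 2}

class Scheduler:
    """Toy priority scheduler: video traffic always leaves the queue first."""
    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker preserves arrival order within a class

    def enqueue(self, traffic_class, packet):
        heapq.heappush(self._queue, (PRIORITY[traffic_class], self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._queue)[2]
```

Even if application packets arrive before a video packet, the video packet is transmitted first, which is the behavior the call-quality example describes.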
Virtualization Cont…..
3. Storage virtualization: Storage virtualization uses all your physical data storage
and creates a large unit of virtual storage that you can assign and control by using
management software.
 IT administrators can streamline storage activities, such as archiving, backup,
and recovery, because they can combine multiple network storage devices
virtually into a single storage device.
 Storage virtualization makes it easier to provision storage for VMs and makes
maximum use of all available storage on the network.
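As a rough illustration of pooling several physical devices into one virtual storage unit, here is a toy allocator in Python. The device names and capacities are made up; the sketch only shows how a virtual volume can be carved out of whichever devices have free space, so no capacity is stranded on any single device.

```python
def pool_capacity(devices):
    """The virtual pool's capacity is the sum of all physical devices."""
    return sum(devices.values())

def allocate(devices, size_gb):
    """Carve a virtual volume out of whichever devices have free space."""
    placement = {}
    remaining = size_gb
    for name, free_gb in devices.items():
        take = min(free_gb, remaining)
        if take:
            placement[name] = take
            devices[name] -= take
            remaining -= take
    if remaining:
        raise ValueError("pool exhausted")
    return placement  # which device holds which slice of the volume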

4. Data virtualization: Modern organizations collect data from several sources and
store it in different formats. They might also store data in different places, such as
in a cloud infrastructure and an on-premises data center.
 Data virtualization creates a software layer between this data and the
applications that need it.
 Data virtualization tools process an application’s data request and return
results in a suitable format.
 Thus, organizations use data virtualization solutions to increase flexibility for
data integration and support cross-functional data analysis.
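A minimal sketch of the data-virtualization idea: one query function hides two differently formatted backends (a CSV-like export and a dict-shaped "API") behind a unified view. Both data sources are invented for illustration.

```python
# Two backends in different formats, standing in for e.g. an on-premises
# export and a cloud API (both invented for this sketch).
CSV_ROWS = "id,region\n1,EU\n2,US"
API_DATA = {1: {"revenue": 100}, 2: {"revenue": 250}}

def query_customers():
    """Return unified records; the caller never sees either source format."""
    rows = [line.split(",") for line in CSV_ROWS.splitlines()[1:]]
    result = []
    for cid, region in rows:
        cid = int(cid)
        record = {"id": cid, "region": region}
        record.update(API_DATA.get(cid, {}))  # join in the second source
        result.append(record)
    return result
```

The application asks one question and gets one answer shape, which is the flexibility-for-integration benefit the bullet describes.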
Virtualization Cont…..
5. Application virtualization: Application virtualization abstracts applications from the operating system so that they can run on operating systems other than the ones they were designed for.
 For example, users can run a Microsoft Windows application on a Linux
machine without changing the machine configuration.
 Application virtualization runs application software without installing it directly
on the user’s OS.
 There are three types of application virtualization: 
 Local application virtualization: The entire application runs on the endpoint
device.
 Application streaming: The application lives on a server which sends small
components of the software to run on the end user's device when needed.
 Server-based application virtualization: The application runs entirely on a server that sends only its user interface to the client device.

6. Server virtualization: Server virtualization is the process of dividing a physical server into multiple unique and isolated virtual servers by means of a software application. Each virtual server can run its own operating system independently.
Virtualization Cont…..
7. Cloud virtualization: As noted above, the cloud computing model depends on virtualization. By virtualizing servers, storage, and other physical data center resources, cloud computing providers can offer a range of services to customers, including the following:
 Infrastructure as a service (IaaS): Virtualized server, storage, and network resources you can configure based on your requirements.
 Platform as a service (PaaS): Virtualized development tools, databases, and other cloud-based services you can use to build your own cloud-based applications and solutions.
 Software as a service (SaaS): Software applications you use on the cloud.
Virtualization Cont…..
 Benefits of virtualization: Virtualization brings several benefits to data center operators and service providers:
1. Resource efficiency: 
 Before virtualization, each application server required its own dedicated
physical CPU—IT staff would purchase and configure a separate server for each
application they wanted to run. Invariably, each physical server would be
underused.
 In contrast, server virtualization lets you run several applications—each on its own VM with its own OS—on a single physical computer. This enables maximum utilization of the physical hardware’s computing capacity.
Virtualization Cont…..
2. Minimal downtime: 
 OS and application crashes can cause downtime and disrupt user productivity.
 Admins can run multiple redundant virtual machines alongside each other and fail over between them when problems arise.
 Running multiple redundant physical servers is more expensive.
3. Faster provisioning: 
 Buying, installing, and configuring hardware for each application is time-consuming.
 Provided that the hardware is already in place, provisioning virtual machines to run all your applications is significantly faster.
 You can even automate it using management software and build it into existing workflows.
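The template-driven automation mentioned above might look like the following toy sketch. The template names and VM fields are hypothetical, not any real management tool's API; the point is that every VM is stamped out of a template instead of being set up by hand.

```python
# Hypothetical VM templates; fields and sizes are illustrative only.
TEMPLATES = {
    "web": {"cpus": 2, "ram_gb": 4, "image": "ubuntu-22.04"},
    "db":  {"cpus": 8, "ram_gb": 32, "image": "ubuntu-22.04"},
}

inventory = []  # stands in for the management tool's VM inventory

def provision(name, template):
    """Create a VM record from a template instead of manual per-host setup."""
    spec = dict(TEMPLATES[template])  # copy, so per-VM edits don't alter the template
    spec["name"] = name
    inventory.append(spec)
    return spec
```

Running `provision("web-01", "web")` repeatedly for each new host gives consistent results every time, which is the "repeatable and consistent" property the slide attributes to software templates.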
Virtualization Cont…..
4. Easier management: 
 Replacing physical computers with software-defined VMs makes it easier to use and manage policies written in software. This allows you to create automated IT service management workflows.
 For example, automated deployment and configuration tools enable administrators to define collections of virtual machines and applications as services, in software templates. This means that they can install those services repeatedly and consistently without cumbersome, time-consuming, and error-prone manual setup.
 Admins can use virtualization security policies to mandate certain security configurations based on the role of the virtual machine.
 Policies can even increase resource efficiency by retiring unused virtual machines to save on space and computing power.
Server Virtualization
 Server virtualization is the process of dividing a physical server into multiple
unique and isolated virtual servers called virtual private servers by means of a
software application.
 It is used to mask server resources from server users, i.e., server resources are kept hidden from the user. This can include the number and identity of operating systems, processors, and individual physical servers.
 Each virtual private server can run its own operating system independently.
 These operating systems are known as guest operating systems. They run on another operating system known as the host operating system. Each guest running in this manner is unaware of any other guests running on the same host. Different virtualization techniques are employed to achieve this transparency. 
 This partitioning of a physical server into several virtual environments results in the dedication of one server to performing a single application or task.
 This technique is mainly used for web servers, where it reduces the cost of web-hosting services. Instead of having a separate system for each web server, multiple virtual servers can run on the same system/computer.
Server Virtualization Cont….
Server Virtualization : Portability
When migrating a VM from one physical server to another, the servers can have different processors, but they should be from the same manufacturer.
Server Virtualization Cont….
 There are three types of server virtualization:
 Hypervisor
 Para Virtualization
 Full Virtualization
 Approaches to Virtualization: For server virtualization, there are three ways to create a virtual server. Each approach allocates physical server resources to virtual server needs differently:
 Full Virtualization
 Para-virtual Machine model
 Operating System (OS) layer Virtualization
Server Virtualization Cont….
 Hypervisor:
 In the Server Virtualization, Hypervisor plays an important role.
 A Cloud Hypervisor is software that enables the sharing of a cloud provider's physical compute and memory resources across multiple virtual machines (VMs).
 A hypervisor, or VMM (virtual machine monitor), is a layer between the operating system (OS) and the hardware.
 A hypervisor is a form of virtualization software used in Cloud hosting to divide
and allocate the resources on various pieces of hardware.
 It provides the necessary services and features for the smooth running of
multiple operating systems. 
 It identifies traps, responds to privileged CPU instructions, and handles queuing,
dispatching, and returning the hardware requests.
 The program which provides partitioning, isolation, or abstraction is called a
virtualization hypervisor. The hypervisor is a hardware virtualization technique
that allows multiple guest operating systems (OS) to run on a single host system at
the same time. 
 A host operating system also runs on top of the hypervisor to administer and
manage the virtual machines
Server Virtualization Cont….
1. Full Virtualization
 Full virtualization uses a hypervisor.
 The hypervisor monitors the physical server's resources and keeps each virtual server independent and unaware of the other virtual servers.
 As the virtual servers run applications, the hypervisor relays resources from the physical server to the correct virtual server.
 Each guest server runs its own OS: one may be running Linux and another Windows.
 This technique of virtualization allows a guest OS to run without modification.
 It can emulate the underlying hardware when necessary. The hypervisor traps
the machine operations used by the operating system to perform I/O or modify
the system status. After trapping, these operations are emulated in software
and the status codes returned are consistent with what the real hardware would deliver. This is why an unmodified operating system is able to run on top of the hypervisor. 
 The biggest limitation of using full virtualization is that a hypervisor has its own
processing needs which means physical server must reserve resources to run
hypervisor. This can slow down applications and impact server performance.
 Example: VMWare ESX server uses this method. A customized Linux version
known as Service Console is used as the administrative operating system.
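The trap-and-emulate behavior described above can be sketched as a toy dispatch loop. The instruction names and the one-function "device model" here are invented for illustration; a real hypervisor traps the privileged opcodes defined by the CPU architecture, not strings.

```python
# Invented set of "privileged" operations that must be trapped and emulated.
PRIVILEGED = {"OUT", "HLT"}

def run_guest(instructions, emulate):
    """Execute guest instructions; trap privileged ones into the hypervisor."""
    log = []
    for op, arg in instructions:
        if op in PRIVILEGED:
            log.append(emulate(op, arg))   # trap: emulated in software
        else:
            log.append(f"direct:{op}")     # unprivileged: runs natively
    return log
```

Because the guest's privileged operations are intercepted and answered consistently with real hardware, the guest OS needs no modification, which is the defining property of full virtualization stated above.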
Server Virtualization Cont….
2. Para Virtualization:
 Unlike Full virtualization, the guest servers in a para-virtualization system are
aware of one another.
 The entire system work as cohesive unit.
 A para-virtualization hypervisor doesn't need as much processing power to
manage the guest operating system, because each OS is already aware of the
demands the other operating systems are placing on the physical server.
 Like the full-virtualization model, the para-virtual machine model is also capable of executing multiple operating systems.
 In this model the VMM modifies the guest operating system's code which is called
'porting'.
 The guest operating system is modified and recompiled before installation into
the virtual machine.
 Due to the modification in the Guest operating system, performance is enhanced
as the modified guest operating system communicates directly with the
hypervisor and emulation overhead is removed. 
 Limitations: Requires modification to a guest operating system
 Example: Xen primarily uses Paravirtualization, where a customized Linux
environment is used to support the administrative environment known as domain 0
Server Virtualization Cont….
3. Operating System (OS) layer Virtualization
 Unlike full and para-virtualization, OS-level virtualization does not use a hypervisor.
 Instead, the virtualization capability is part of the host operating system, which performs all the tasks of a hypervisor.
 Limitation of this approach: all the virtual (guest) servers must run the same operating system in this server virtualization method.
 Because all guest operating systems must be the same, this is called a homogeneous environment.
 Each virtual server remains independent from all the others.
Server Virtualization Cont….
 Which method is best?
 It depends on the network administrator's needs:
 If the administrator's physical servers all run the same operating system, then the OS-level approach might work.
 If the physical servers run several different operating systems, then para-virtualization might be a better choice.
 Usage of Server Virtualization: This technique is mainly used for web servers, where it reduces the cost of web-hosting services. Instead of having a separate system for each web server, multiple virtual servers can run on the same system/computer.
 The primary uses of server virtualization are:
 To centralize the server administration
 Improve the availability of server
 Helps in disaster recovery
 Ease in development & testing
 Make efficient use of server resources.
Server Virtualization Cont….

 Advantages:
 Cost Reduction: Server virtualization reduces cost because less hardware is
required.
 Independent Restart: Each server can be rebooted independently and that reboot
won't affect the working of other virtual servers.
 Disadvantages
 The biggest disadvantage of server virtualization is that when the server goes
offline, all the websites that are hosted by the server will also go down.
 There is no way to measure the performance of virtualized environments.
 It requires a huge amount of RAM consumption.
 It is difficult to set up and maintain.
 Some core applications and databases do not support virtualization.
 It requires extra hardware resources.
Hypervisor Management Software
 Before hypervisors hit the mainstream, most physical computers could only run one operating system (OS) at a time. This made them stable because the computing hardware only had to handle requests from that one OS. The downside of this approach was that it wasted resources because the operating system couldn’t always use all of the computer’s power.
 A hypervisor solves that problem. It is a small software layer that enables multiple instances of operating systems to run alongside each other, sharing the same physical computing resources. This process is called virtualization, and the operating system instances are referred to as virtual machines (VMs)—software emulations of physical computers.
 A Cloud Hypervisor is software that enables the sharing of a cloud provider's physical compute and memory resources across multiple virtual machines (VMs).
 A hypervisor, or VMM (virtual machine monitor), is a layer between the operating system (OS) and the hardware.
Hypervisor Management Software Cont….

 It provides the necessary services and features for the smooth running of
multiple operating systems. 
 It responds to privileged CPU instructions, and handles queuing, dispatching,
and returning the hardware requests.
 The hypervisor is a hardware virtualization technique that allows multiple guest
operating systems (OS) to run on a single host system at the same time. 
 It separates VMs from each other logically, assigning each its own slice of the
underlying computing power, memory, and storage. This prevents the VMs from
interfering with each other; so if, for example, one OS suffers a crash or a
security compromise, the others survive.
Hypervisor Management Software Cont….
 Types of Hypervisor: There are two types of hypervisors: "Type 1" (also known as
"bare metal") and "Type 2" (also known as "hosted").
1. Type-1 Hypervisor:
 The hypervisor runs directly on the underlying host system. It has direct access to
hardware resources interacting directly with its CPU, memory, and physical
storage.
 Therefore it is also known as a “Native Hypervisor” or “Bare metal hypervisor”.
 A Type 1 hypervisor takes the place of the host operating system. It does not
require any base server operating system.
 Examples of Type 1 hypervisors include VMware ESXi, Citrix XenServer, Microsoft Hyper-V, and KVM.
 KVM has been integrated into the Linux kernel since 2007, so recent Linux kernels include it by default.
 Pros of Type-1 Hypervisor: Such hypervisors are very efficient because they have direct access to the physical hardware resources (CPU, memory, network, and physical storage). This also strengthens security, because there is no third-party layer in between that an attacker could compromise. 
 Cons of Type-1 Hypervisor: One problem with Type-1 hypervisors is that they usually need a dedicated separate machine to perform their operation, manage the different VMs, and control the host hardware resources.
Hypervisor Management Software Cont….

 Type 1 hypervisors can virtualize more than just server operating systems. They can
also virtualize desktop operating systems for companies that want to centrally
manage their end-user IT resources. Virtual desktop infrastructure (VDI) lets users work on desktops running inside virtual machines on a central server, making it easier for IT staff to administer and maintain their OSs.
 Hyper-V hypervisor
 Hyper-V is Microsoft’s hypervisor designed for use on Windows systems. It shipped
in 2008 as part of Windows Server, meaning that customers needed to install the
entire Windows operating system to use it. Hyper-V is also available on Windows
clients.
 Microsoft designates Hyper-V as a Type 1 hypervisor, even though it runs differently from many competitors. Hyper-V installs on Windows but runs directly on
the physical hardware, inserting itself underneath the host OS. All guest operating
systems then run through the hypervisor, but the host operating system gets
special access to the hardware, giving it a performance advantage.
Hypervisor Management Software Cont….

2. Type-2 Hypervisor:
 Also known as a hosted hypervisor, a Type 2 hypervisor is a software layer or framework that runs on a traditional operating system.
 A host operating system runs on the underlying host system. Such hypervisors don’t run directly on the underlying hardware; instead, they run as an application in a host system (physical machine).
 Basically, the software is installed on an operating system, and the hypervisor asks that operating system to make hardware calls.
 Individual users who wish to operate multiple operating systems on a personal computer should use a Type 2 hypervisor.
Hypervisor Management Software Cont….
 Type 2 hypervisors are suitable for individual PC users needing to run multiple
operating systems.
 Examples include engineers, security professionals analyzing malware, and
business users that need access to applications only available on other software
platforms.
 Type 2 hypervisors often feature additional toolkits for users to install into the
guest OS. These tools provide enhanced connections between the guest and the
host OS, often enabling the user to cut and paste between the two or access host
OS files and folders from within the guest VM
 A Type 2 hypervisor enables quick and easy access to an alternative guest OS
alongside the primary one running on the host system. This makes it great for
end-user productivity. A consumer might use it to access their favorite Linux-
based development tools while using a speech dictation system only found in
Windows, for example.
 Examples of Type 2 hypervisors include VMware Player and Parallels Desktop.
 VMware also offers other Type 2 hypervisor products for desktop and laptop users, such as VMware Workstation and Fusion. Another example is VirtualBox, a Type 2 hypervisor running on Linux, macOS, and Windows; Oracle inherited the product when it bought Sun Microsystems in 2010.
Hypervisor Management Software Cont….
 Pros & Cons of Type-2 Hypervisor:
 Pros: Such hypervisors allow quick and easy access to a guest operating system alongside the host machine's own.
 These hypervisors usually come with additional useful features for guest machines. Such tools enhance the coordination between the host machine and the guest machine.
 Cons: Because a Type 2 hypervisor must access computing, memory, and network resources via the host OS, it introduces latency issues that can affect performance.
 There are also potential security risks: if an attacker compromises the host operating system, they can then manipulate any guest operating system running on it.
Hypervisor Management Software Cont….
 Choosing the right hypervisor:
 Type 1 hypervisors offer much better performance than Type 2 ones because
there’s no middle layer, making them the logical choice for mission-critical
applications and workloads.
 But that’s not to say that hosted hypervisors don’t have their place – they’re
much simpler to set up, so they’re a good bet if, say, you need to deploy a test
environment quickly.
 One of the best ways to determine which hypervisor meets your needs is to
compare their performance metrics. These include CPU overhead, the amount of
maximum host and guest memory, and support for virtual processors.
Hypervisor Management Software Cont….
 The following factors should be examined before choosing a suitable hypervisor: 
1. Understand your needs: The needs for a virtualization hypervisor include flexibility, scalability, usability, availability, reliability, efficiency, and reliable support. 
2. The cost of a hypervisor 
3. Virtual machine performance: Virtual systems should meet or exceed the
performance of their physical counterparts, at least in relation to the
applications within each server.
4. Test for yourself: You can gain basic experience from your existing desktop or
laptop.
Hypervisor Management Software Cont….
 HYPERVISOR REFERENCE MODEL: Three main modules coordinate in order to emulate the underlying hardware:
 DISPATCHER: 
The dispatcher behaves like the entry point of the monitor and reroutes the
instructions of the virtual machine instance to one of the other two modules. 
 
 ALLOCATOR: 
The allocator is responsible for deciding the system resources to be provided to
the virtual machine instance. It means whenever a virtual machine tries to execute
an instruction that results in changing the machine resources associated with the
virtual machine, the allocator is invoked by the dispatcher. 
 
 INTERPRETER: 
The interpreter module consists of interpreter routines. These are executed,
whenever a virtual machine executes a privileged instruction.
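The three modules of the reference model can be sketched as a toy monitor in Python. The instruction format (dicts with a `kind` field) is invented for illustration; the structure only mirrors the dispatcher-allocator-interpreter split described above.

```python
class Monitor:
    """Toy sketch of the dispatcher/allocator/interpreter reference model."""
    def __init__(self):
        self.resources = {"memory_pages": 0}

    def allocator(self, instr):
        # Decides the system resources to be provided to the VM instance.
        self.resources["memory_pages"] += instr["pages"]
        return f"allocated {instr['pages']} pages"

    def interpreter(self, instr):
        # Routine run whenever the VM executes a privileged instruction.
        return f"emulated privileged op {instr['op']}"

    def dispatcher(self, instr):
        # Entry point: reroutes each instruction to the right module.
        if instr["kind"] == "resource":
            return self.allocator(instr)
        return self.interpreter(instr)
```

An instruction that changes machine resources reaches the allocator via the dispatcher; a privileged instruction reaches an interpreter routine, exactly as the three bullets describe.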
Storage Virtualization
Block level and File Level Storage Virtualization
 Storage virtualization: Storage virtualization uses all your physical data storage
and creates a large unit of virtual storage that you can assign and control by using
management software.
 IT administrators can streamline storage activities, such as archiving, backup,
and recovery, because they can combine multiple network storage devices
virtually into a single storage device.
 Storage virtualization makes it easier to provision storage for VMs and makes
maximum use of all available storage on the network.

 Files, blocks, and objects are storage formats that hold, organize, and present
data in different ways—each with their own capabilities and limitations.
 File storage organizes and represents data as a hierarchy of files in folders
 Block storage chunks data into arbitrarily organized, evenly sized volumes
 Object storage manages data and links it to associated metadata.
Block Level Storage Virtualization Cont…..
 Block storage breaks a file into equally sized chunks (or blocks) of data and stores each block separately under a unique address.
 Rather than conforming to a rigid directory/subdirectory/folder structure, blocks can be stored anywhere in the system.
 To access any file, the server's operating system uses the unique addresses to pull the blocks back together into the file, which takes less time than navigating through directories and file hierarchies.
 Block storage works well for critical business applications, transactional databases, and virtual machines that require low latency (minimal delay). It also provides more granular data access and reliable performance.
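A toy sketch of splitting a file into fixed-size blocks with unique addresses and reassembling it. The 4-byte block size is used only to keep the example small; real systems use much larger blocks (e.g. 4 KiB), and real addresses come from the storage system, not a counter.

```python
BLOCK_SIZE = 4  # bytes per block, tiny for illustration only

def write_file(store, data):
    """Split data into fixed-size blocks, each stored under a unique address."""
    addresses = []
    for i in range(0, len(data), BLOCK_SIZE):
        addr = len(store)           # next free address; blocks may land anywhere
        store[addr] = data[i:i + BLOCK_SIZE]
        addresses.append(addr)
    return addresses                # the "file" is just this list of addresses

def read_file(store, addresses):
    """Reassemble the file by following the block addresses."""
    return b"".join(store[a] for a in addresses)
```

The file itself is nothing more than the ordered list of block addresses, which is why retrieval needs no directory hierarchy, only address lookups.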
Block Level Storage Virtualization Cont…..
 Block storage breaks up data into blocks and then stores those blocks as separate
pieces, each with a unique identifier.

 In block-level storage, a storage device such as a hard disk drive (HDD) is identified as a storage volume.
 A storage volume can be treated as an individual drive, a “block”.
 The storage blocks can be modified by an administrator, adding more capacity when necessary, which makes block storage fast, flexible, and reliable.
 The blocks can be stored in different environments, such as one block in Windows
and the rest in Linux. When a user retrieves a block, the storage system
reassembles the blocks into a single unit.
 Block storage is the default storage both for hard disk drives and for frequently updated data. You can store blocks on Storage Area Networks (SANs) or in cloud storage environments.
 When a user or application requests data from a block storage system, the
underlying storage system reassembles the data blocks and presents the data to
the user or application.
Block Level Storage Virtualization Cont….
 Block storage allows for the creation of raw storage volumes, which server-based
operating systems can connect to.
 You can treat those raw volumes as individual hard drives. This lets you use block
storage for almost any kind of application, including file storage, database storage,
Virtual Machine File System (VMFS) volumes, and more.
[VMFS stores VM files, which include disk images, snapshots, and so on]
 For example, organizations often use block storage when deploying VMFS across the enterprise, such as for rolling out virtual machines enterprise-wide. 
 With block storage, you can easily create and format a block-based storage
volume to store the VMFS.
 A physical server can then attach to that block, creating multiple virtual
machines.
 Creating a block-based volume, installing an operating system, and attaching to
that volume allows users to share files using that native operating system.

 Block Storage as a Service: Block Storage as a Service (BSSaaS) falls into the much larger category of Enterprise Storage as a Service (STaaS), where those looking for cloud-based storage can select from block, file, or object storage to support their workloads.
Block Level Storage Virtualization Cont….
Storage Area Network (SAN)
 Block storage, sometimes referred to as block-level storage, is a technology that is
used to store data files on Storage Area Networks (SAN) or cloud-based storage
environments.
 A Storage Area Network (SAN) is a dedicated, independent high-speed network
that interconnects and delivers shared pools of storage devices to multiple
servers.
Block Level Storage Virtualization Cont….
 Each server can access shared storage as if it were a drive directly attached to
the server.

 SANs are typically composed of hosts, switches, storage elements, and storage
devices that are interconnected using a variety of technologies, topologies, and
protocols.

 A SAN switch is hardware that connects servers to shared pools of storage
devices. It is dedicated to moving storage traffic in a SAN.
Block Level Storage Virtualization Cont….
 The SAN places blocks of data wherever it is most efficient. That means it can
store those blocks across different systems and each block can be configured (or
partitioned) to work with different operating systems.
 SANs present block storage to other networked systems as if those blocks were
locally attached devices.
 For example, a server can attach to a SAN using a data network connection—
such as Fibre Channel, Internet Small Computer System Interface (iSCSI), or
Infiniband—to access a block as if it was a locally accessed volume.
 Multiple storage arrays can also be configured on a SAN.
 Multiple servers can be attached to the SAN.
Block Level Storage Virtualization Cont….
 A SAN consists of many elements or layers.
 The first is the host layer, which consists of the server—connected to the
storage network using a cable.

 The host layer is connected to the fabric layer, which is a collection of devices,
such as SAN switches, routers, protocol bridges, gateway devices, and cables.

 The fabric layer interacts with the storage layer, which consists of the physical
storage devices, such as disk drives, magnetic tape, or optical media.

 SANs are commonly based on a switched fabric technology.
 Examples include Fibre Channel (FC), Ethernet, and InfiniBand.
 Gateways may be used to move data between different SAN technologies. 
 Fibre Channel is commonly used in enterprise environments.
 Ethernet infrastructure can be used for SANs to converge storage and IP
protocols onto the same network.
 InfiniBand is commonly used in high performance computing environments.
Block Level Storage Virtualization Cont….
Block-level storage virtualization
 What is block level storage virtualization?:
 Block level storage virtualization is the process of creating a virtual disk from
one or more physical disks.
 This virtual disk can then be used like any other disk, except that it can be located
on any server in the network.
 This allows for greater flexibility in storage management and can make it easier
to move data between servers.

 Block-level storage virtualization is a storage service that provides a flexible,
logical arrangement of storage capacity to applications and users while abstracting
its physical location.
 As a software layer, it intercepts I/O requests to that logical capacity and maps
them to the appropriate physical locations.
 In this way virtualization enables administrators to provide the storage capacity
when and where it’s needed while isolating users from the potentially disruptive
details of expansion, data protection and system maintenance.
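The mapping described above can be sketched in a few lines (hypothetical class and disk names): the virtualization layer keeps a table from logical block addresses to (physical disk, physical block) pairs, and every I/O request is translated through that table, so the physical layout can change without the host noticing.

```python
# Hypothetical sketch of the mapping table a block-virtualization layer keeps:
# logical block addresses are translated to (physical_disk, physical_block),
# so capacity can be expanded or migrated behind the scenes.
class VirtualizationLayer:
    def __init__(self):
        self.mapping = {}    # logical block -> (disk id, physical block)
        self.next_free = {}  # disk id -> next free physical block

    def add_disk(self, disk_id):
        self.next_free[disk_id] = 0

    def map_block(self, logical, disk_id):
        # Allocate the next free physical block on the chosen disk.
        phys = self.next_free[disk_id]
        self.next_free[disk_id] = phys + 1
        self.mapping[logical] = (disk_id, phys)

    def resolve(self, logical):
        # I/O interception: translate the logical address on every request.
        return self.mapping[logical]

v = VirtualizationLayer()
v.add_disk("disk-A")
v.add_disk("disk-B")
v.map_block(0, "disk-A")
v.map_block(1, "disk-B")  # neighbouring logical blocks on different disks
print(v.resolve(1))       # ('disk-B', 0)
```

Because hosts only ever see logical addresses, an administrator can add disks or move blocks by updating this table alone.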
Block Level Storage Virtualization Cont….
 How does block level storage virtualization work?: Block level storage
virtualization can be used to create a storage area network (SAN), which can
improve the performance of a server or cluster of servers by providing them with
their own dedicated storage. A SAN can also provide storage redundancy by
replicating data across multiple physical disks.
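The replication idea can be sketched as follows (hypothetical class, not any vendor's implementation): every write is mirrored to all disks in a replica set, so a read still succeeds after one disk fails.

```python
# Hypothetical sketch of write replication for redundancy: writes are
# mirrored to every disk in a replica set, so data survives a disk failure.
class MirroredVolume:
    def __init__(self, n_replicas):
        self.disks = [dict() for _ in range(n_replicas)]  # block -> data
        self.failed = set()  # indices of disks that have gone offline

    def write(self, block, data):
        for disk in self.disks:
            disk[block] = data  # mirror the write to every replica

    def read(self, block):
        # Serve the read from any healthy replica that holds the block.
        for i, disk in enumerate(self.disks):
            if i not in self.failed and block in disk:
                return disk[block]
        raise IOError("no healthy replica holds this block")

vol = MirroredVolume(n_replicas=2)
vol.write(0, b"payload")
vol.failed.add(0)   # simulate losing the first disk
print(vol.read(0))  # b'payload' (served from the surviving replica)
```

Real SANs implement the same principle in hardware or firmware, often with more sophisticated schemes such as RAID or erasure coding.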

 What are the benefits of block level storage virtualization?: There are several
benefits to using block level storage virtualization.
 First, it can improve the performance of a computer system by allowing the
system to access the virtual device faster than the physical device.
 Second, it can provide a higher level of security for the data stored on the
virtual device, as the physical device can be kept offline and away from
potential threats.
 Finally, it can allow for easier management of storage devices, as the virtual
devices can be created, deleted, and modified as needed.
Virtual Storage Area Network
 A Virtual Storage Area Network (VSAN) is a logical partitioning created within a
physical storage area network.
 VSAN (Virtual Storage Area Network) is a storage solution that is used to create
and manage storage for virtual machines.

 This implementation model of a storage virtualization technique divides and
allocates some or an entire storage area network into one or more logical SANs to
be used by internal or external IT services and solutions.

 A VSAN allows end users and organizations to provision a logical storage area
network on top of the physical SAN through storage virtualization.
 The virtualized SAN can be used to build a virtual storage pool for multiple
services; however, it is generally provisioned to be integrated with virtual
machines and virtual servers.

 A VSAN provides similar services and features as a typical SAN, but because it is
virtualized, it allows for the addition and relocation of subscribers without having
to change the network’s physical layout. It also provides flexible storage capacity
that can be increased or decreased over time.
Virtual Storage Area Network
 It is intended for usage in scenarios that leverage cloud computing, especially with
virtualized infrastructure like VMware vSphere.

 VSAN offers centralized storage management for virtual machines and
applications running in a virtualized environment.

 Storage resources from various physical servers can be combined and presented as
a single, shared storage pool using VSAN.

 The virtual machine disk files (VMDKs) and other data can then be kept in this
pool. VSAN dynamically allocates storage for virtual machines according to
requirements.

 It uses a distributed architecture and enables IT organizations to pool storage
resources and dynamically provide storage to virtual machines on an as-needed
basis.
 VSAN is especially well-suited for cloud computing environments.

 This can make it simpler to administer virtual environments and lower the
expense of obtaining and maintaining physical storage.
Virtual Storage Area Network Cont…..
Benefits of VSAN
 Cost-effectiveness: VSAN does not require physical storage arrays therefore it is
less expensive than conventional storage systems.
 Scalability: VSAN is well-suited for cloud computing environments that demand the
capacity to scale rapidly and efficiently to meet changing storage requirements.
 Increased performance: To deliver quick and dependable storage performance,
VSAN makes use of the high-speed interconnects found inside the cloud
computing architecture.
 Flexibility: Block and file storage are both supported by VSAN, giving clients the
option to select the type of storage that best suits their requirements.
 Data security: VSAN has tools like data replication and snapshots that guard
against data loss and guarantee that vital information is always accessible.
 Simplified Management: Administration is made easier because of VSAN’s
integration with the VMware vSphere virtualization technology, which offers a
single management panel.
Virtual Storage Area Network Cont…..
Applications of VSAN
 Hybrid Cloud Storage: VSAN can be used to create a hybrid cloud environment
where data can be stored and managed on-premises and in the cloud.

 Virtual Desktop Infrastructure (VDI): VSAN can be used to offer storage for virtual
desktop environments (VDI), enabling effective virtual desktop management and
storage in the cloud.

 Disaster Recovery and Business Continuity: Data can be copied to a secondary site
in the cloud for protection against outages and data loss using VSAN to develop
disaster recovery and business continuity solutions. 

 Application Development and Testing: VSAN can be used to offer storage for
environments used for developing and testing applications, allowing programmers
to build and test cloud-based apps.

 Backup and Archiving: Storage for backup and archiving solutions can be provided
by VSAN, allowing businesses to store and safeguard their data in the cloud. 
Virtual Storage Area Network Cont…..
Difference Between SAN and VSAN

 Definition: A SAN is a dedicated network that provides block-level access to
storage devices. A VSAN is a software-defined storage solution that aggregates the
physical storage resources of hosts in a vSphere cluster and presents them as a
single, shared datastore.

 Physical Hardware: SAN requires dedicated physical storage hardware, such as
disk arrays and switches. VSAN utilizes the existing physical storage resources of
the hosts in a vSphere cluster.

 Scalability: SAN can be difficult to scale, as it requires the addition of more
physical hardware. VSAN is highly scalable, as it can dynamically allocate more
storage resources as needed.
Virtual Storage Area Network Cont…..
Difference Between SAN and VSAN

 Management: SAN requires a separate storage administrator to manage the
storage hardware. VSAN is managed by vSphere, so it can be managed by the same
administrator that manages the virtual infrastructure.

 Performance: SAN can be optimized for high performance through the use of
specialized hardware, such as storage controllers and high-speed switches. VSAN
performance is optimized through the use of caching, data mirroring, and data
distribution.

 Cost: SAN can be expensive, as it requires dedicated hardware and a separate
network. VSAN can be more cost-effective, as it utilizes existing infrastructure and
eliminates the need for separate storage hardware.
File Level Storage Virtualization
 File storage—also called file-level or file-based storage—is a hierarchical storage
methodology used to organize and store data on a computer hard drive or on a
network-attached storage (NAS) device.

 In file storage, data is stored in files, the files are organized in folders, and the
folders are organized under a hierarchy of directories and subdirectories.

 To locate a file, all you or your computer system need is the path—from directory
to subdirectory to folder to file.

 Hierarchical file storage works well with easily organized amounts of structured
data. But, as the number of files grows, the file retrieval process can become
cumbersome and time-consuming.
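The path-based lookup described above can be shown with a small sketch (the directory and file names are made up for illustration): the file is retrieved purely by walking its path, directory by directory.

```python
# Sketch of hierarchical file lookup: a file is located by its full path
# alone. The directory tree below is a made-up example.
import os
import tempfile

root = tempfile.mkdtemp()  # a throwaway root directory
os.makedirs(os.path.join(root, "reports", "2024"))
path = os.path.join(root, "reports", "2024", "q1.txt")

with open(path, "w") as f:
    f.write("quarterly data")

# All the system needs to retrieve the file is the path:
# <root>/reports/2024/q1.txt
with open(path) as f:
    print(f.read())  # quarterly data
```

As the slide notes, this works well at small scale; the cost appears when millions of such paths must be traversed or searched.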
File Level Storage Virtualization

 Scaling requires adding more hardware devices or continually replacing these with
higher-capacity devices, both of which can get expensive.

 To some extent, you can mitigate these scaling and performance issues with
cloud-based file storage services.

 These services allow multiple users to access and share the same file data located
in off-site data centers (the cloud). You simply pay a monthly subscription fee to
store your file data in the cloud, and you can easily scale-up capacity and specify
your data performance and protection criteria.

 Moreover, with cloud services you eliminate the expense of maintaining your own
on-site hardware, since this infrastructure is managed and maintained by the cloud
service provider in its data center. This is also known as Infrastructure-as-a-Service
(IaaS).
File Level Storage Virtualization Cont….

 File storage has been a popular storage technique for decades—it’s familiar to
virtually every computer user, and it’s well-suited to storing and organizing
transactional data or manageable structured data volumes that can be neatly
stored in a database in a disk drive on a server.

 However, many organizations are now struggling to manage mounting volumes of
web-based digital content or unstructured data. If you need to store very large or
unstructured data volumes, you should consider block-based or object-based
storage, which organizes and accesses data differently.

 Depending on the various speed and performance requirements of your IT
operations and various applications, you might require a combination of these
approaches.
File Level Storage Virtualization
 File-level storage, or file storage, is a storage technology used on hard drives,
Network Attached Storage (NAS) systems, and similar storage systems.
 A file-level storage system can access and handle individual files and folders.
 File-level storage or file storage is the storage that is typically deployed in Network
Attached Storage (NAS) systems that are used for unstructured data.
 It uses Linux's Network File System (NFS) and Windows' Common Internet File
System (CIFS) or Server Message Block (SMB) standard file-level protocols.
 It is easy to implement and use file-level storage, and it is often less costly to
maintain than block-level storage, which are important reasons for its success in
terms of use on personal computers and in narrower business systems.
 NAS devices offer a lot of space at what is typically a lower cost than storage at the
block level.
File Level Storage Virtualization

 Network attached storage (NAS) is a centralized file server, which allows multiple
users to store and share files over a TCP/IP network via Wi-Fi or an Ethernet cable.

 It is also commonly known as a NAS box, NAS unit, NAS server, or NAS head.
These devices rely on a few components to operate, such as hard drives, network
protocols, and a lightweight operating system (OS). 

 Hard drives or hard disk drives (HDDs): HDDs provide storage capacity for a NAS
unit as well as an easy way to scale. As more data storage is needed, additional
hard disks can be added to meet the system demand, earning it the name “scale-
out” NAS.

 The use case for the NAS device usually determines the type of HDD used. For
example, sharing large media files, such as streaming video, across an organization
requires more resources than a file system for a single user at home. 
File Level Storage Virtualization

 Network Protocols: TCP/IP protocols –i.e. Transmission Control Protocol (TCP) and
Internet Protocol (IP)—are used for data transfer, but the network protocols for
data sharing can vary based on the type of client. For example, a Windows client
will typically have a server message block (SMB) protocol while a Linux or UNIX
client will have a network file system (NFS) protocol. 

 Operating System: While standard operating systems can handle thousands of
requests, the NAS OS restricts the system to two types of requests: data storage
and file sharing. 
File Level Storage Virtualization Cont….

 Usually, file-level storage can be accessed using standard file-level protocols such as
SMB/CIFS (Windows) and NFS (Linux, VMware).

 File level storage system or NAS has to manage user access control and the
assignment of permissions in certain instances. Some devices may be integrated
into existing systems of authentication and security.
File Level Storage Virtualization Cont….

Pros and cons of file storage: Examples of data typically saved using file storage
include presentations, reports, spreadsheets, graphics, photos, etc.

File storage is familiar to most users and allows access rights and limits to be set by the user,
but managing large numbers of files and hardware costs can become a challenge.

Pros
1. Easy to access on a small scale: With a small-to-moderate number of files, users can easily
locate and click on a desired file, and the file with the data opens. Users then save the file to
the same or a different location when they’re finished with it.
2. Familiar to most users: As the most common storage type for end users, most people with
basic computer skills can easily navigate file storage with little assistance or additional
training.
3. Users can manage their own files: Using a simple interface, end users can create, move and
delete their files.
4. Allows access rights/file sharing/file locking to be set at user level: Users and
administrators can set a file as write (meaning users can make changes to the file), read-only
(users can only view the data) or locked (specific users cannot access the file even as read
only). Files can also be password-protected.
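The per-file access rights described in point 4 can be sketched as a small policy check (hypothetical function and user names; real systems use OS ACLs or POSIX permission bits):

```python
# Hypothetical sketch of per-file access rights: a file may be writable,
# read-only, or locked for specific users, as described above.
def can_access(acl, user, action):
    if user in acl.get("locked", []):
        return False                   # locked users get no access at all
    if action == "read":
        return True                    # everyone else may read
    return acl.get("mode") == "write"  # writes need a writable file

acl = {"mode": "read-only", "locked": ["mallory"]}
print(can_access(acl, "alice", "read"))    # True
print(can_access(acl, "alice", "write"))   # False (file is read-only)
print(can_access(acl, "mallory", "read"))  # False (user is locked out)
```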
File Level Storage Virtualization Cont….
Cons
1. Challenging to manage and retrieve large numbers of files: While hierarchical storage
works well for, say, 20 folders with 10 subfolders each, file management becomes
increasingly complicated as the number of folders, subfolders and files increases. As the
volume grows, the amount of time for the search feature to find a desired file increases
and becomes a significant waste of time spread over employees throughout an
organization.

2. Hard to work with unstructured data: While it’s possible to save unstructured data like
text, mobile activity, social media posts and Internet of Things (IoT) sensor data in file
storage, it is typically not the best option for unstructured data storage, especially in large
amounts.

3. Becomes expensive at large scales: When the amount of storage space on devices and
networks reaches capacity, additional hardware devices must be purchased.
File Level Storage Virtualization Cont….

Use cases for file storage


1. Collaboration of documents: While it’s easy to collaborate on a single document with
cloud storage or Local Area Network (LAN) file storage, users must create a versioning
system or use versioning software to prevent overwriting each other’s changes.

2. Backup and recovery: Cloud backup and external backup devices typically use file storage
for creating copies of the latest versions of files.

3. Archiving: Because of the ability to set permissions at a file level for sensitive data and the
simplicity of management, many organizations use file storage for archiving documents for
compliance or historical reasons.   
File Level Storage Virtualization Cont….
Virtual Local Area Network
Virtual Local Area Network Cont…..
Virtual Local Area Network Cont…..
 Through VLAN, different small-size sub-networks are created which are
comparatively easy to handle. 
 Virtual Local Area Networks or Virtual LANs (VLANs) are a logical group of
computers that appear to be on the same LAN irrespective of the configuration
of the underlying physical network.
 Network administrators partition the networks to match the functional
requirements of the VLANs so that each VLAN comprises a subset of ports on a
single switch or on multiple switches or bridges. This allows computers and devices
in a VLAN to communicate in the simulated environment as if they were on a
separate LAN.

 Virtual LAN (VLAN) is a concept in which we can divide the devices logically on
layer 2 (data link layer) of OSI model. Generally, layer 3 devices divide the
broadcast domain but the broadcast domain can be divided by switches using
the concept of VLAN.

 A broadcast domain is a network segment in which, if a device broadcasts a packet,
all the devices in the same broadcast domain will receive it.
 Routers don’t forward broadcast packets.
 To forward packets between different VLANs (from one VLAN to another) or
broadcast domains, inter-VLAN routing is needed.
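How VLANs shrink broadcast domains can be shown with a tiny simulation (host names and VLAN assignments are made up): a broadcast from one host is delivered only to hosts assigned the same VLAN ID.

```python
# Sketch of VLAN broadcast domains: a switch floods a broadcast frame only
# to ports in the sender's VLAN. Hosts and VLAN IDs below are hypothetical.
vlan_of = {"pc1": 10, "pc2": 10, "pc3": 20, "pc4": 20}

def broadcast(sender):
    # Deliver to every other host in the same VLAN (broadcast domain).
    vid = vlan_of[sender]
    return [h for h in vlan_of if h != sender and vlan_of[h] == vid]

print(broadcast("pc1"))  # ['pc2'] -- pc3 and pc4 are in another domain
```

To reach pc3 from pc1 in this setup, the frame would have to pass through an inter-VLAN router, exactly as the bullet above describes.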
Virtual Local Area Network Cont…..
Switch
 Switches are networking devices operating at layer 2 or a data link layer of the OSI model.
They connect devices in a network and use packet switching to send, receive or forward
data packets or data frames over the network.
 A switch has many ports, to which computers are plugged in. When a data frame arrives at
any port of a network switch, it examines the destination address, performs necessary
checks and sends the frame to the corresponding device(s).
 Features of Switches
1. A switch operates in the layer 2, i.e. data link layer of the OSI model.
2. It is an intelligent network device that can be conceived as a multiport network bridge.
3. It uses MAC addresses (addresses of medium access control sublayer) to send data
packets to selected destination ports.
4. It uses packet switching technique to receive and forward data packets from the source
to the destination device.
5. It supports unicast (one-to-one), multicast (one-to-many) and broadcast (one-to-all)
communications.
6. Transmission mode is full duplex, i.e. communication in the channel occurs in both the
directions at the same time.
7. Switches are active devices, equipped with network software and network
management capabilities.
8. Switches can perform some error checking before forwarding data to the destined port.
9. The number of ports is higher, typically 24 or 48.
Virtual Local Area Network Cont…..
Virtual Local Area Network Cont…..
 Features of VLANs
1. A VLAN forms a sub-network by grouping together devices on separate physical LANs.
2. VLANs help the network manager to segment LANs logically into different broadcast
domains.
3. VLANs function at layer 2, i.e. Data Link Layer of the OSI model.
4. There may be one or more network bridges or switches to form multiple, independent
VLANs.
5. Using VLANs, network administrators can easily partition a single switched network
into multiple networks depending upon the functional and security requirements of
their systems.
6. VLANs eliminate the requirement to run new cables or reconfiguring physical
connections in the present network infrastructure.
7. VLANs help large organizations to re-partition devices aiming improved traffic
management.
8. VLANs also provide better security management, allowing partitioning of devices
according to their security criteria and ensuring a higher degree of control over
connected devices.
9. VLANs are more flexible than physical LANs since they are formed by logical
connections. This allows quicker and cheaper reconfiguration of devices when the logical
partitioning needs to be changed.
Virtual Local Area Network Cont…..
Virtual Local Area Network Cont…..
Virtual Local Area Network Cont…..
Virtual Local Area Network Cont…..
VLANs offer several features and benefits

1. Improved network security: VLANs can be used to separate network traffic and
limit access to specific network resources. This improves security by preventing
unauthorized access to sensitive data and network resources.
2. Better network performance: By segregating network traffic into smaller logical
networks, VLANs can reduce the amount of broadcast traffic and improve
network performance.
3. Simplified network management: VLANs allow network administrators to group
devices together logically, rather than physically, which can simplify network
management tasks such as configuration, troubleshooting, and maintenance.
4. Flexibility: VLANs can be configured dynamically, allowing network
administrators to quickly and easily adjust network configurations as needed.
5. Cost savings: VLANs can help reduce hardware costs by allowing multiple virtual
networks to share a single physical network infrastructure.
6. Scalability: VLANs can be used to segment a network into smaller, more
manageable groups as the network grows in size and complexity.
Virtual Local Area Network Cont…..

Some of the key features of VLANs include


1. VLAN tagging: VLAN tagging is a way to identify and distinguish VLAN traffic from
other network traffic. This is typically done by adding a VLAN tag to the Ethernet
frame header.

2. VLAN membership: VLAN membership determines which devices are assigned to
which VLANs. Devices can be assigned to VLANs based on port, MAC address, or
other criteria.

3. VLAN trunking: VLAN trunking allows multiple VLANs to be carried over a single
physical link. This is typically done using a protocol such as IEEE 802.1Q.

4. VLAN management: VLAN management involves configuring and managing
VLANs, including assigning devices to VLANs, configuring VLAN tags, and
configuring VLAN trunking.
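The 802.1Q tagging mentioned above can be sketched concretely: a 4-byte tag (the 0x8100 TPID followed by priority bits and a 12-bit VLAN ID) is inserted into the Ethernet header so switches can tell VLANs apart. The helper functions below are illustrative, not a full frame builder.

```python
# Sketch of an IEEE 802.1Q tag: TPID 0x8100 followed by the Tag Control
# Information (3-bit priority, 1-bit DEI, 12-bit VLAN ID). DEI is left 0 here.
import struct

def add_tag(vlan_id, priority=0):
    tci = (priority << 13) | (vlan_id & 0x0FFF)  # tag control information
    return struct.pack("!HH", 0x8100, tci)       # network byte order

def parse_tag(tag):
    tpid, tci = struct.unpack("!HH", tag)
    assert tpid == 0x8100                        # the 802.1Q ethertype
    return {"priority": tci >> 13, "vlan_id": tci & 0x0FFF}

tag = add_tag(vlan_id=20, priority=5)
print(parse_tag(tag))  # {'priority': 5, 'vlan_id': 20}
```

The 12-bit VLAN ID field is also what puts the ~4094-VLAN ceiling on scalability noted in the disadvantages later in this section.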
Virtual Local Area Network Cont…..
Advantages
1. Performance – Network traffic is full of broadcast and multicast. VLANs reduce
the need to send such traffic to unnecessary destinations. For example, if traffic is
intended for 2 users but 10 devices are present in the same broadcast domain, all
10 will receive it, wasting bandwidth; with VLANs, the broadcast or multicast
packet goes to the intended users only.
2. Formation of virtual groups –As there are different departments in every
organization namely sales, finance etc., VLANs can be very useful in order to group
the devices logically according to their departments.
3. Security – In the same network, sensitive data can be broadcast which can be
accessed by the outsider but by creating VLAN, we can control broadcast domains,
set up firewalls, restrict access. Also, VLANs can be used to inform the network
manager of an intrusion. Hence, VLANs greatly enhance network security.
4. Flexibility –VLAN provide flexibility to add, remove the number of host we want.
5. Cost reduction – VLANs can be used to create broadcast domains, which eliminates
the need for expensive routers. By using VLANs, the number of small broadcast
domains can be increased, and these are easier to handle than one bigger
broadcast domain.
Virtual Local Area Network Cont…..
Disadvantages of VLAN 
1. Complexity: VLANs can be complex to configure and manage, particularly in large
or dynamic cloud computing environments.
2. Limited scalability: VLANs are limited by the number of available VLAN IDs, which
can be a constraint in larger cloud computing environments.
3. Limited security: VLANs do not provide complete security and can be
compromised by malicious actors who are able to gain access to the network.
4. Limited interoperability: VLANs may not be fully compatible with all types of
network devices and protocols, which can limit their usefulness in cloud
computing environments.
5. Limited mobility: VLANs may not support the movement of devices or users
between different network segments, which can limit their usefulness in mobile or
remote cloud computing environments. 
6. Cost: Implementing and maintaining VLANs can be costly, especially if specialized
hardware or software is required.
7. Limited visibility: VLANs can make it more difficult to monitor and troubleshoot
network issues, as traffic is isolated in different segments.
Virtual Local Area Network Cont…..
Real-Time Applications of VLAN 
Virtual LANs (VLANs) are widely used in cloud computing environments to improve
network performance and security. Here are a few examples of real-time applications
of VLANs:
1. Voice over IP (VoIP) : VLANs can be used to isolate voice traffic from data traffic,
which improves the quality of VoIP calls and reduces the risk of network
congestion.
2. Video Conferencing : VLANs can be used to prioritize video traffic and ensure that
it receives the bandwidth and resources it needs for high-quality video
conferencing.
3. Remote Access : VLANs can be used to provide secure remote access to cloud-
based applications and resources, by isolating remote users from the rest of the
network.
4. Cloud Backup and Recovery : VLANs can be used to isolate backup and recovery
traffic, which reduces the risk of network congestion and improves the
performance of backup and recovery operations.
5. Gaming : VLANs can be used to prioritize gaming traffic, which ensures that gamers
receive the bandwidth and resources they need for a smooth gaming experience.
6. IoT : VLANs can be used to isolate Internet of Things (IoT) devices from the rest of
the network, which improves security and reduces the risk of network congestion.
Cloud Infrastructure Requirements
Requirements for Building a Cloud Infrastructure
When building out a cloud strategy, there are several in-depth steps that must be
taken to ensure a robust infrastructure.
1. Requirement 1: Service and Resource Management
 A cloud infrastructure virtualizes all components of a data center.
 Service management is a measured package of applications and services that end
users can easily deploy and manage via a public and/or private cloud vendor.
 A simplified tool to outline and gauge services is vital for cloud
administrators to market functionality.
 Service management needs to contain resource maintenance, resource
guarantees, billing cycles, and measured regulations.
 Once deployed, management services should help create policies for data and
workflows to make sure it’s fully efficient and processes are delivered to systems
in the cloud.
Cloud Infrastructure Requirements

2. Requirement 2: Data Center Management Tools Integration

 Most data centers utilize a variety of IT tools for systems management, security,
provisioning, customer care, billing, and directories, among others.
 And these work with cloud management services and open APIs to integrate
existing operation, administration, maintenance, and provisioning (OAM&P)
systems. 
 A modern cloud service should support a data center’s existing infrastructure as
well as leveraging modern software, hardware, and virtualization, and other
technology.
Cloud Infrastructure Requirements Cont….
3. Requirement 3: Reporting, Visibility, Reliability, and Security

 Data centers need high levels of real-time reporting and visibility capabilities in
cloud environments to guarantee compliance, SLAs, security, billing, and charge-
backs.
 Without robust reporting and visibility, managing system performance, customer
service, and other processes are nearly impossible.
 To be wholly reliable, cloud infrastructures must continue to operate even when
one or more components fail.
 To safeguard the cloud, services must ensure data and apps are secure while
providing access to those who are authorized.
Cloud Infrastructure Requirements Cont….

4. Requirement 4: Interfaces for Users, Admins, and Developers

 Automated deployment and self-service interfaces ease complex cloud services
for end users, helping lower operating costs and drive adoption.
 Self-service interfaces offer customers the ability to effectively launch a cloud
service by managing their own data centers virtually, designing and driving
templates, maintaining virtual storage, networking resources, and utilizing
libraries.
 Administrator interfaces present better visibility to all resources, virtual
machines, templates, service offers, and various cloud users.
 And all of these structures integrate by way of APIs for developers.
