Azure Resiliency
Azure Resiliency
Azure Resiliency
Azure Resiliency feature page
Resiliency in Azure
Design resilient applications for Azure
Availability Zones fundamentals
Regions and Availability Zones in Azure
Azure Services
Azure Services that support Availability Zones
Migration Guidance
API Management
App Service Environment
App Service
Cache for Redis
Recovery Services vault
Storage accounts
Virtual Machines and virtual machine scale sets
Terminology
High Availability
Quickstart guides
Virtual machines
Create a Linux VM in an Availability Zone with CLI
Create a Windows VM in an Availability Zone with PowerShell
Create a Windows VM in an Availability Zone with Azure portal
Managed disks
Add a managed disk in Availability Zones with CLI
Add a managed disk in Availability Zones with PowerShell
Virtual machine scale sets
Create a scale set in an Availability Zone
Load Balancer
What is Load Balancer?
Load Balancer Standard and Availability Zones
Create a zone redundant public Standard Load Balancer
Create a zone redundant public Standard Load Balancer (PowerShell)
Create a zone redundant public Standard Load Balancer (CLI)
Create a zonal public Standard Load Balancer
Create a zonal public Standard Load Balancer (PowerShell)
Create a zonal public Standard Load Balancer (CLI)
Load balance VMs across availability zones
Load balance VMs across availability zones with Azure CLI
Public IP address
SQL Database
Availability zones with SQL Database general purpose tier
Availability zones with SQL Database premium & business critical tiers
Storage
Zone-redundant storage
Event Hubs
Event Hubs geo-disaster recovery
Service Bus
Service Bus geo-disaster recovery
VPN Gateway
Create a zone-redundant virtual network gateway
ExpressRoute
Create a zone-redundant virtual network gateway
Application Gateway v2
Autoscaling and Zone-redundant Application Gateway v2
Identity
Create an Azure Active Directory Domain Services instance
Disaster Recovery
Business continuity management in Azure
Cross-region replication in Azure
Use Azure Site Recovery
Use Azure Backup
Microsoft Azure Well-Architected Framework
Resources
Azure Roadmap
Azure Regions
Resiliency in Azure
Resiliency is a system’s ability to recover from failures and continue to function. It’s not only about avoiding
failures but also involves responding to failures in a way that minimizes downtime or data loss. Because failures
can occur at various levels, it’s important to have protection for all types based on your service availability
requirements. Resiliency in Azure supports and advances capabilities that respond to outages in real time to
ensure continuous service and data protection assurance for mission-critical applications that require near-zero
downtime and high customer confidence.
Azure includes built-in resiliency services that you can leverage and manage based on your business needs.
Whether it’s a single hardware node failure, a rack level failure, a datacenter outage, or a large-scale regional
outage, Azure provides solutions that improve resiliency. For example, availability sets ensure that the virtual
machines deployed on Azure are distributed across multiple isolated hardware nodes in a cluster. Availability
zones protect customers’ applications and data from datacenter failures across multiple physical locations
within a region. Regions and availability zones are central to your application design and resiliency strategy
and are discussed in greater detail later in this article.
Resiliency requirements
The required level of resilience for any Azure solution depends on several considerations. Availability and
latency SLA and other business requirements drive the architectural choices and resiliency level and should be
considered first. Availability requirements range from how much downtime is acceptable – and how much it
costs your business – to the amount of money and time that you can realistically invest in making an application
highly available.
Building resilient systems on Azure is a shared responsibility. Microsoft is responsible for the reliability of the
cloud platform, including its global network and data centers. Azure customers and partners are responsible for
the resilience of their cloud applications, using architectural best practices based on the requirements of each
workload. While Azure continually strives for the highest possible resiliency in its SLAs for the cloud platform, you must
define your own target SLAs for each workload in your solution. An SLA makes it possible to evaluate whether
the architecture meets the business requirements. As you strive for higher percentages of SLA guaranteed
uptime, the cost and complexity to achieve that level of availability grows. An uptime of 99.99 percent translates
to about five minutes of total downtime per month. Is it worth the additional complexity and cost to reach that
percentage? The answer depends on the individual business requirements. While deciding final SLA
commitments, understand Microsoft’s supported SLAs. Each Azure service has its own SLA.
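As a quick, illustrative check of that arithmetic (a minimal sketch, not an SLA statement), 99.99 percent uptime over a 30-day month leaves roughly 4.3 minutes of downtime:

# 30 days x 24 hours x 60 minutes = 43,200 minutes in the month
echo "scale=2; 43200 * (1 - 0.9999)" | bc   # prints 4.32, the allowed downtime in minutes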
Building resiliency
You should define your application’s availability requirements at the beginning of planning. Many applications
do not need 100% high availability; being aware of this can help to optimize costs during non-critical periods.
Identify the type of failures an application can experience as well as the potential effect of each failure. A
recovery plan should cover all critical services by finalizing recovery strategy at the individual component and
the overall application level. Design your recovery strategy to protect against zonal, regional, and application-
level failure. And perform testing of the end-to-end application environment to measure application resiliency
and recovery against unexpected failure.
The following checklist covers the scope of resiliency planning.
RESILIENCY PLANNING
Design the resiliency features of your applications based on the availability requirements.
Use availability zones and disaster recovery planning where applicable to improve reliability and optimize costs.
Identify possible failure points in the system; application design should tolerate dependency failures by implementing circuit breakers.
Shared responsibility
Building resilient systems on Azure is a shared responsibility. Microsoft is responsible for the reliability of the
cloud platform, which includes its global network and datacenters. Azure customers and partners are
responsible for the resilience of their cloud applications, using architectural best practices based on the
requirements of each workload. See Business continuity management program in Azure for more information.
Next steps
Regions and availability zones in Azure
Azure services that support availability zones
Azure Resiliency whitepaper
Azure Well-Architected Framework
Azure architecture guidance
Regions and availability zones
Azure regions and availability zones are designed to help you achieve resiliency and reliability for your
business-critical workloads. Azure maintains multiple geographies. These discrete demarcations define disaster
recovery and data residency boundaries across one or multiple Azure regions. Maintaining many regions
ensures customers are supported across the world.
Regions
Each Azure region features datacenters deployed within a latency-defined perimeter. They're connected through
a dedicated regional low-latency network. This design ensures that Azure services within any region offer the
best possible performance and security.
Availability zones
Azure availability zones are physically separate locations within each Azure region that are tolerant to local
failures. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires.
Tolerance to failures is achieved because of redundancy and logical isolation of Azure services. To ensure
resiliency, a minimum of three separate availability zones are present in all availability zone-enabled regions.
Azure availability zones are connected by a high-performance network with a round-trip latency of less than
2ms. They help your data stay synchronized and accessible when things go wrong. Each zone is composed of
one or more datacenters equipped with independent power, cooling, and networking infrastructure. Availability
zones are designed so that if one zone is affected, regional services, capacity, and high availability are supported
by the remaining two zones.
Datacenter locations are selected by using rigorous vulnerability risk assessment criteria. This process identifies
all significant datacenter-specific risks and considers shared risks between availability zones.
With availability zones, you can design and operate applications and databases that automatically transition
between zones without interruption. Azure availability zones are highly available, fault tolerant, and more
scalable than traditional single or multiple datacenter infrastructures.
Each data center is assigned to a physical zone. Physical zones are mapped to logical zones in your Azure
subscription. Azure subscriptions are automatically assigned this mapping at the time a subscription is created.
You can use the dedicated ARM API checkZonePeers to compare the zone mapping for resilient solutions that span multiple subscriptions.
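A minimal sketch of calling that API with az rest follows; the subscription IDs and region are placeholders, and the API version shown is an assumption that may need adjusting for your environment:

# Compare the logical-to-physical zone mapping of this subscription with another subscription
az rest --method post \
    --url "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Resources/checkZonePeers?api-version=2022-12-01" \
    --body '{ "location": "eastus2", "subscriptionIds": [ "subscriptions/{otherSubscriptionId}" ] }'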
You can design resilient solutions by using Azure services that use availability zones. Co-locate your compute,
storage, networking, and data resources across an availability zone, and replicate this arrangement in other
availability zones.
Azure availability zones-enabled services are designed to provide the right level of resiliency and flexibility. They
can be configured in two ways. They can be either zone redundant, with automatic replication across zones, or
zonal, with instances pinned to a specific zone. You can also combine these approaches.
Some organizations require high availability of availability zones and protection from large-scale phenomena
and regional disasters. Azure regions are designed to offer protection against localized disasters with availability
zones and protection from regional or large geography disasters with disaster recovery, by making use of
another region. To learn more about business continuity, disaster recovery, and cross-region replication, see
Cross-region replication in Azure.
Brazil South, France Central, Qatar Central*, South Africa North, Australia East, West US 3
* To learn more about Availability Zones and available services support in these regions, contact your Microsoft
sales or customer representative. For the upcoming regions that will support Availability Zones, see Azure
geographies.
Next steps
Microsoft commitment to expand Azure availability zones to more regions
Azure services that support availability zones
Azure services
Azure services
Availability of services across Azure regions depends on a region's type. There are two types of regions in Azure:
recommended and alternate.
Recommended : These regions provide the broadest range of service capabilities and currently support
availability zones. Designated in the Azure portal as Recommended.
Alternate : These regions extend Azure's footprint within a data residency boundary where a recommended
region currently exists. Alternate regions help to optimize latency and provide a second region for disaster
recovery needs but don't support availability zones. Azure conducts regular assessments of alternate regions
to determine if they should become recommended regions. Designated in the Azure portal as Other.
Recommended   Y   Y   Y   Demand-driven   Y   Y
Strategic Services
As mentioned previously, Azure classifies services into three categories: foundational, mainstream, and strategic.
Service categories are assigned at general availability. Often, services start their lifecycle as a strategic service, and as demand and utilization increase, they may be promoted to mainstream or foundational. The following table
lists strategic services.
STRATEGIC
Azure Automation
Azure Databricks
Microsoft Purview
Older generations of services or virtual machines aren't listed. For more information, see Previous generations
of virtual machine sizes.
To learn more about preview services that aren't yet in general availability and to see a listing of these services,
see Products available by region. For a complete listing of services that support availability zones, see Azure
services that support availability zones.
Next steps
Azure services that support availability zones
Regions and availability zones in Azure
Azure services that support availability zones
Azure availability zones are physically separate locations within each Azure region that are tolerant to datacenter failures because of redundant infrastructure and logical isolation of Azure services.
Azure services that support availability zones are designed to provide the right level of resiliency and flexibility
along with ultra-low latency. With Azure services that support availability zones, whether you architect your own
resiliency or opt for automatic replication and distribution, the benefit is the same. You get superior resiliency
across highly available services, no matter the service type.
Azure strives to enable high resiliency across every service and offering. Running Azure services that support
availability zones provides fully transparent and consistent resiliency against nearly all scenarios, without
interruption.
Brazil South France Central Qatar Central* South Africa North Australia East
West US 3
* To learn more about Availability Zones and available services support in these regions, contact your Microsoft
sales or customer representative. For the upcoming regions that will support Availability Zones, see Azure
geographies.
For a list of Azure services that support availability zones by Azure region, see the availability zones
documentation.
Foundational services
Azure Backup
Azure Cosmos DB
Azure ExpressRoute
Azure Public IP
Azure SQL
Virtual Machines: Av2-Series
Virtual Machines: Bs-Series
Virtual Machines: DSv2-Series
Virtual Machines: DSv3-Series
Virtual Machines: Dv2-Series
Virtual Machines: Dv3-Series
Virtual Machines: ESv3-Series
Virtual Machines: Ev3-Series
Virtual Machines: F-Series
Virtual Machines: FS-Series
*VMs that support availability zones: AV2-series, B-series, DSv2-series, DSv3-series, Dv2-series, Dv3-series,
ESv3-series, Ev3-series, F-series, FS-series, FSv2-series, and M-series.*
Mainstream services
Azure Bastion
Azure Batch
Azure Firewall
Azure Functions
Azure HDInsight
Azure Monitor
Power BI Embedded
Virtual Machines: Ddsv4-Series
Virtual Machines: Ddv4-Series
Virtual Machines: Dsv4-Series
Virtual Machines: Dv4-Series
Virtual Machines: Edsv4-Series
Virtual Machines: Edv4-Series
Virtual Machines: Esv4-Series
Virtual Machines: Ev4-Series
Virtual Machines: Fsv2-Series
Virtual Machines: M-Series
Strategic services
Azure Advisor
Azure Blueprints
Azure DNS
Azure Lighthouse
Azure Maps
Azure Policy
Azure portal
Microsoft Graph
Microsoft Intune
Microsoft Sentinel
For a list of Azure services that support availability zones by Azure region, see the availability zones
documentation.
Pricing for virtual machines in availability zones
You can access Azure availability zones by using your Azure subscription. To learn more, see Bandwidth pricing.
Next steps
Building solutions for high availability using availability zones
High availability with Azure services
Design patterns for high availability
Migrate Azure API Management to availability zone
support
This guide describes how to enable availability zone support for your API Management instance. The API
Management service supports zone redundancy, which provides resiliency and high availability to a service
instance in a specific Azure region. With zone redundancy, the gateway and the control plane of your API
Management instance (Management API, developer portal, Git configuration) are replicated across datacenters
in physically separated zones, making it resilient to a zone failure.
In this article, we'll take you through the different options for availability zone migration.
Prerequisites
To configure API Management for zone redundancy, your instance must be in one of the following
regions:
Australia East
Brazil South
Canada Central
Central India
Central US
East Asia
East US
East US 2
France Central
Germany West Central
Japan East
Korea Central (*)
North Europe
Norway East (*)
South Africa North (*)
South Central US
Southeast Asia
Switzerland North
UK South
West Europe
West US 2
West US 3
IMPORTANT
Regions marked with * have restricted access in an Azure subscription for enabling availability zone support. Please work with your Microsoft sales or customer representative.
If you haven't yet created an API Management service instance, see Create an API Management service
instance. Select the Premium service tier.
API Management service must be in the Premium tier. If it isn't, you can upgrade to the Premium tier.
If your API Management instance is deployed (injected) in an Azure virtual network (VNet), check the
version of the compute platform (stv1 or stv2) that hosts the service.
Downtime requirements
There are no downtime requirements for any of the migration options.
Considerations
Changes can take from 15 to 45 minutes to apply. The API Management gateway can continue to handle
API requests during this time.
Migrating to availability zones or changing the availability zone configuration will trigger a public IP
address change.
If you've configured autoscaling for your API Management instance in the primary location, you might
need to adjust your autoscale settings after enabling zone redundancy. The number of API Management
units in autoscale rules and limits must be a multiple of the number of zones.
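As a sketch of what enabling zone redundancy can look like outside the portal, the instance's zones and sku.capacity properties can be patched through the ARM REST API. The resource names, API version, and unit count below are illustrative assumptions, not values from this guide:

# Enable three availability zones on a Premium instance; capacity must be a multiple of the zone count
az rest --method patch \
    --url "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.ApiManagement/service/{apimName}?api-version=2021-08-01" \
    --body '{ "zones": [ "1", "2", "3" ], "sku": { "name": "Premium", "capacity": 3 } }'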
Migrate App Service Environment to availability zone support
This guide describes how to migrate an App Service Environment from non-availability zone support to availability zone support. We'll take you through the different options for migration.
NOTE
This article is about App Service Environment v3, which is used with Isolated v2 App Service plans. Availability zones are
only supported on App Service Environment v3. If you're using App Service Environment v1 or v2 and want to use
availability zones, you'll need to migrate to App Service Environment v3.
Azure App Service Environment can be deployed across Availability Zones (AZ) to help you achieve resiliency
and reliability for your business-critical workloads. This architecture is also known as zone redundancy.
When you configure your App Service Environment to be zone redundant, the platform automatically spreads the instances of the Azure App
Service plan across all three zones in the selected region. If you specify a capacity larger than three, and the
number of instances is divisible by three, the instances are spread evenly. Otherwise, instance counts beyond
3*N are spread across the remaining one or two zones.
Prerequisites
You configure availability zones when you create your App Service Environment.
All App Service plans created in that App Service Environment will automatically be zone redundant.
You can only specify availability zones when creating a new App Service Environment. A pre-existing App
Service Environment can't be converted to use availability zones.
Availability zones are only supported in a subset of regions.
Downtime requirements
Downtime will be dependent on how you decide to carry out the migration. Since you can't convert pre-existing
App Service Environments to use availability zones, migration will consist of a side-by-side deployment where
you'll create a new App Service Environment with availability zones enabled.
Downtime will depend on how you choose to redirect traffic from your old to your new availability zone enabled
App Service Environment. For example, if you're using an Application Gateway, a custom domain, or Azure Front
Door, downtime will be dependent on the time it takes to update those respective services with your new app's
information. Alternatively, you can route traffic to multiple apps at the same time using a service such as Azure
Traffic Manager and only fully cutover to your new availability zone enabled apps when everything is deployed
and fully tested. For more information on App Service Environment migration options, see App Service
Environment migration. If you're already using App Service Environment v3, disregard the information about
migration from previous versions and focus on the app migration strategies.
To create an App Service Environment with availability zones using the Azure portal, enable the zone
redundancy option during the "Create App Service Environment v3" experience on the Hosting tab.
The only change needed in an Azure Resource Manager template to specify an App Service Environment with
availability zones is the zoneRedundant property on the Microsoft.Web/hostingEnvironments resource. The
zoneRedundant property should be set to true .
"resources": [
{
"apiVersion": "2019-08-01",
"type": "Microsoft.Web/hostingEnvironments",
"name": "MyAppServiceEnvironment",
"kind": "ASEV3",
"location": "West US 3",
"properties": {
"name": "MyAppServiceEnvironment",
"location": "West US 3",
"dedicatedHostCount": "0",
"zoneRedundant": true,
"InternalLoadBalancingMode": 0,
"virtualNetwork": {
"id": "/subscriptions/00000000-0000-0000-0000-
000000000000/resourceGroups/MyResourceGroup/providers/Microsoft.Network/virtualNetworks/MyVNet/subnets/MySub
net"
}
}
}
]
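If you prefer the CLI over an ARM template, App Service Environment v3 creation exposes an equivalent option. This is a hedged sketch: the --zone-redundant flag and the resource names below are assumptions to verify against your version of the az CLI.

# Create an ASEv3 with zone redundancy enabled (flag availability depends on your CLI version)
az appservice ase create \
    --name MyAppServiceEnvironment \
    --resource-group MyResourceGroup \
    --location westus3 \
    --vnet-name MyVNet \
    --subnet MySubnet \
    --kind ASEv3 \
    --zone-redundant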
Pricing
There's a minimum charge of nine App Service plan instances in a zone redundant App Service Environment.
There's no added charge for availability zone support if you have nine or more instances. If you have fewer than
nine instances (of any size) across App Service plans in the zone redundant App Service Environment, you're
charged for the difference between nine and the running instance count. This difference is billed as Windows
I1v2 instances.
Next steps
Learn more about availability zones
Migrate App Service to availability zone support
This guide describes how to migrate the public multi-tenant App Service from non-availability zone support to availability zone support. We'll take you through the different options for migration.
Azure App Service can be deployed into Availability Zones (AZ) to help you achieve resiliency and reliability for
your business-critical workloads. This architecture is also known as zone redundancy.
An App Service lives in an App Service plan (ASP), and the App Service plan exists in a single scale unit. App
Services are zonal services, which means that App Services can be deployed using one of the following
methods:
For App Services that aren't configured to be zone redundant, the VM instances are placed in a single zone
that is selected by the platform in the selected region.
For App Services that are configured to be zone redundant, the platform automatically spreads the VM
instances in the App Service plan across all three zones in the selected region. If a VM instance capacity
larger than three is specified and the number of instances is divisible by three, the instances will be spread
evenly. Otherwise, instance counts beyond 3*N will get spread across the remaining one or two zones.
Prerequisites
Availability zone support is a property of the App Service plan. The following are the current
requirements/limitations for enabling availability zones:
Both Windows and Linux are supported.
Requires either Premium v2 or Premium v3 App Service plans.
Minimum instance count of three is enforced.
The platform will enforce this minimum count behind the scenes if you specify an instance count
fewer than three.
Can be enabled in any of the following regions:
West US 2
West US 3
Central US
East US
East US 2
Canada Central
Brazil South
North Europe
West Europe
Germany West Central
France Central
UK South
Japan East
Southeast Asia
Australia East
Availability zones can only be specified when creating a new App Service plan. A pre-existing App Service
plan can't be converted to use availability zones.
Availability zones are only supported in the newer portion of the App Service footprint.
Currently, if you're running on Pv3, then it's possible that you're already on a footprint that supports
availability zones. In this scenario, you can create a new App Service plan and specify zone
redundancy.
If you aren't using Pv3 or a scale unit that supports availability zones, are in an unsupported region, or
are unsure, see the migration guidance.
Downtime requirements
Downtime will be dependent on how you decide to carry out the migration. Since you can't convert pre-existing
App Service plans to use availability zones, migration will consist of a side-by-side deployment where you'll
create new App Service plans. Downtime will depend on how you choose to redirect traffic from your old to
your new availability zone enabled App Service. For example, if you're using an Application Gateway, a custom
domain, or Azure Front Door, downtime will be dependent on the time it takes to update those respective
services with your new app's information. Alternatively, you can route traffic to multiple apps at the same time
using a service such as Azure Traffic Manager and only fully cutover to your new availability zone enabled apps
when everything is deployed and fully tested.
TIP
To decide instance capacity, you can use the following calculation:
Since the platform spreads VMs across three zones and you need to account for at least the failure of one zone, multiply your peak workload instance count by a factor of zones/(zones-1), or 3/2. For example, if your typical peak workload requires four instances, you should provision six instances: if one zone is lost, the remaining two zones still serve (2/3 * 6 instances) = 4 instances.
To create an App Service with availability zones using the Azure portal, enable the zone redundancy option
during the "Create Web App" or "Create App Service Plan" experiences.
The capacity/number of workers/instance count can be changed once the App Service plan is created by navigating to the Scale out (App Service plan) settings.
The only changes needed in an Azure Resource Manager template to specify an App Service with availability
zones are the zoneRedundant property (required) and optionally the App Service plan instance count
(capacity ) on the Microsoft.Web/serverfarms resource. The zoneRedundant property should be set to true
and capacity should be set based on the same conditions described previously.
The Azure Resource Manager template snippet below shows the new zoneRedundant property and capacity
specification.
"resources": [
{
"type": "Microsoft.Web/serverfarms",
"apiVersion": "2018-02-01",
"name": "your-appserviceplan-name-here",
"location": "West US 3",
"sku": {
"name": "P1v3",
"tier": "PremiumV3",
"size": "P1v3",
"family": "Pv3",
"capacity": 3
},
"kind": "app",
"properties": {
"zoneRedundant": true
}
}
]
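The same configuration can also be sketched with the Azure CLI. The plan and resource group names are placeholders, and the --zone-redundant flag is an assumption to confirm in your CLI version:

# Create a zone-redundant Premium v3 plan with three instances
az appservice plan create \
    --name your-appserviceplan-name-here \
    --resource-group MyResourceGroup \
    --location westus3 \
    --sku P1v3 \
    --number-of-workers 3 \
    --zone-redundant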
Pricing
There's no additional cost associated with enabling availability zones. Pricing for a zone redundant App Service
is the same as a single zone App Service. You'll be charged based on your App Service plan SKU, the capacity
you specify, and any instances you scale to based on your autoscale criteria. If you enable availability zones but
specify a capacity less than three, the platform will enforce a minimum instance count of three and charge you
for those three instances.
Next steps
Learn how to create and deploy ARM templates
ARM Quickstart Templates
Learn how to scale up an app in Azure App Service
Overview of autoscale in Microsoft Azure
Manage disaster recovery
Migrate an Azure Cache for Redis instance to
availability zone support
This guide describes how to migrate your Azure Cache for Redis instance from non-availability zone support to
availability zone support.
Azure Cache for Redis supports zone redundancy in its Premium, Enterprise, and Enterprise Flash tiers. A zone-redundant cache runs on VMs spread across multiple availability zones to provide high resilience and availability.
Currently, the only way to convert a resource from non-availability zone support to availability zone support is
to redeploy your current cache.
Prerequisites
To migrate to availability zone support, you must have an Azure Cache for Redis resource in either the Premium,
Enterprise, or Enterprise Flash tiers.
Downtime requirements
There are multiple ways to migrate data to a new cache. Many of them require some downtime.
TIP
To ease the migration process, it is recommended that you create the cache to use the same tier, SKU, and region as your
current cache.
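Creating the new zone-redundant cache can be sketched with the CLI before starting step 1 below; the cache name, resource group, region, and size are placeholders, not values from this guide:

# Create a new Premium cache spread across three availability zones
az redis create \
    --name my-zone-redundant-cache \
    --resource-group MyResourceGroup \
    --location eastus2 \
    --sku Premium \
    --vm-size p1 \
    --zones 1 2 3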
1. Migrate your data from the current cache to the new zone redundant cache. To learn the most common
ways to migrate based on your requirements and constraints, see Cache migration guide - Migration
options.
2. Configure your application to point to the new zone redundant cache
3. Delete your old cache
Next Steps
Learn more about:
Regions and Availability Zones in Azure
Azure Services that support Availability Zones
Migrate Azure Recovery Services vault to
availability zone support
This article describes how to migrate a Recovery Services vault from non-availability zone support to availability zone support.
Recovery Services vault supports local redundancy, zone redundancy, and geo-redundancy for storage. Storage
redundancy is a setting that must be configured before protecting any workloads. Once a workload is protected
in Recovery Services vault, the setting is locked and can't be changed. To learn more about different storage
redundancy options, see Set storage redundancy.
To change your current Recovery Services vault to availability zone support, you need to deploy a new vault.
Perform the following actions to create a new vault and migrate your existing workloads.
Prerequisites
Standard SKU is supported.
Downtime requirements
Because you're required to deploy a new Recovery Services vault and migrate your workloads to the new vault,
some downtime is expected.
Considerations
When switching recovery vaults for backup, the existing backup data is in the old recovery vault and can't be
migrated to the new one.
If your workloads are backed up by the old vault and you want to reassign them to the new vault, follow these
steps:
1. Stop backup for:
a. Virtual Machines.
b. SQL Server database in Azure VM.
c. Storage Files.
d. SAP HANA database in Azure VM.
2. To unregister from old vault, follow these steps:
a. Virtual Machines.
b. SQL Server database in Azure VM.
Move the SQL database on Azure VM to another resource group to completely break the
association with the old vault.
c. Storage Files.
d. SAP HANA database in Azure VM.
Move the SAP HANA database on Azure VM to another resource group to completely break the
association with the old vault.
3. Configure the various backup items for protection in the new vault.
IMPORTANT
Existing recovery points in the old vault are retained, and objects can be restored from them. However, because protection is stopped, the backup policy no longer applies to the retained data. As a result, recovery points won't expire through policy and must be deleted manually. If this isn't done, the recovery points are retained indefinitely and continue to incur cost. To avoid the cost for
the remaining recovery points, see Delete protected items in the cloud.
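As a sketch of the 'deploy a new vault' step described above, the replacement vault's storage redundancy can be set to zone-redundant with the CLI while the vault is still empty; the vault, resource group, and region names are placeholders:

# Create the replacement vault and set zone-redundant backup storage before protecting any workloads
az backup vault create --name MyNewVault --resource-group MyResourceGroup --location eastus2
az backup vault backup-properties set --name MyNewVault --resource-group MyResourceGroup \
    --backup-storage-redundancy ZoneRedundant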
Next steps
Learn more about:
Regions and Availability Zones in Azure
Azure Services that support Availability Zones
Migrate Azure Storage accounts to availability zone
support
This guide describes how to migrate Azure Storage accounts from non-availability zone support to availability zone support. We'll take you through the different options for migration.
Azure Storage always stores multiple copies of your data so that it is protected from planned and unplanned
events, including transient hardware failures, network or power outages, and massive natural disasters.
Redundancy ensures that your storage account meets the Service-Level Agreement (SLA) for Azure Storage
even in the face of failures.
Azure Storage offers the following types of replication:
Locally redundant storage (LRS)
Zone-redundant storage (ZRS)
Geo-redundant storage (GRS) or read-access geo-redundant storage (RA-GRS)
Geo-zone-redundant storage (GZRS) or read-access geo-zone-redundant storage (RA-GZRS)
For an overview of each of these options, see Azure Storage redundancy.
You can switch a storage account from one type of replication to any other type, but some scenarios are more
straightforward than others. This article describes two basic options for migration. The first is a manual
migration and the second is a live migration that you must initiate by contacting Microsoft support.
Prerequisites
Make sure your storage account(s) are in a region that supports ZRS. To determine whether or not the
region supports ZRS, see Zone-redundant storage.
Confirm that your storage account(s) is a general-purpose v2 account. If your storage account is v1, you'll
need to upgrade it to v2. To learn how to upgrade your v1 account, see Upgrade to a general-purpose v2
storage account.
Downtime requirements
If you choose manual migration, downtime is required. If you choose live migration, there's no downtime
requirement.
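For the manual migration option, a common pattern is to create a new general-purpose v2 account that uses ZRS and copy the data across with AzCopy. The account and container names below are placeholders, and authentication (SAS tokens or azcopy login) is assumed:

# Create the ZRS destination account
az storage account create --name mydestzrsaccount --resource-group MyResourceGroup \
    --location eastus2 --sku Standard_ZRS --kind StorageV2

# Copy a container from the source account to the new ZRS account
azcopy copy "https://mysourceaccount.blob.core.windows.net/mycontainer" \
    "https://mydestzrsaccount.blob.core.windows.net/mycontainer" --recursive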
Migrate Virtual Machines and virtual machine scale sets to availability zone support
This guide describes how to migrate Virtual Machines (VMs) and Virtual Machine Scale Sets (VMSS) from non-availability zone support to availability zone support. We'll take you through the different options for migration, including how you can use availability zone support for disaster recovery solutions.
Virtual Machines (VMs) and Virtual Machine Scale Sets (VMSS) are zonal services, which means that VM resources can be deployed by using one of the following methods:
VM resources are deployed to a specific, self-selected availability zone to achieve more stringent latency
or performance requirements.
VM resources are replicated to one or more zones within the region to improve the resiliency of the
application and data in a High Availability (HA) architecture.
When you migrate resources to availability zone support, we recommend that you select multiple zones for
your new VMs and VMSS, to ensure high-availability of your compute resources.
Prerequisites
To migrate to availability zone support, your VM SKUs must be available across the zones in your region. To check for VM SKU availability, use one of the following methods:
Use PowerShell to Check VM SKU availability.
Use the Azure CLI to Check VM SKU availability (see the sketch after this list).
Go to Foundational Services.
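For reference, the CLI check mentioned above can look like the following; the region is a placeholder:

# List VM SKUs in a region and the zones in which each size is available
az vm list-skus --location eastus2 --resource-type virtualMachines --zone --output table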
Downtime requirements
All migration options mentioned in this article require downtime during deployment, because zonal VMs are created across the availability zones.
Next Steps
Learn more about:
Regions and Availability Zones in Azure
Azure Services that support Availability Zones
Terminology
To better understand regions and availability zones in Azure, it helps to understand key terms or concepts.
geography: An area of the world that contains at least one Azure region. Geographies define a discrete market that preserves data-residency and compliance boundaries. Geographies allow customers with specific data-residency and compliance needs to keep their data and applications close. Geographies are fault tolerant to withstand complete region failure through their connection to our dedicated high-capacity networking infrastructure.
availability zone: Unique physical locations within a region. Each zone is made up of one or more datacenters equipped with independent power, cooling, and networking.
alternate (other) region: A region that extends Azure's footprint within a data-residency boundary where a recommended region also exists. Alternate regions help to optimize latency and provide a second region for disaster recovery needs. They aren't designed to support availability zones, although Azure conducts regular assessment of these regions to determine if they should become recommended regions. These regions are designated in the Azure portal as Other.
cross-region replication (formerly paired region): A reliability strategy and implementation that combines high availability of availability zones with protection from region-wide incidents to meet both disaster recovery and business continuity needs.
foundational service: A core Azure service that's available in all regions when the region is generally available.
regional service: An Azure service that's deployed regionally and enables the customer to specify the region into which the service will be deployed. For a complete list, see Products available by region.
zonal service: An Azure service that supports availability zones, and that enables a resource to be deployed to a specific, self-selected availability zone to achieve more stringent latency or performance requirements.
zone-redundant service: An Azure service that supports availability zones, and that enables resources to be replicated or distributed across zones automatically.
always-available service: An Azure service that supports availability zones, and that enables resources to be always available across all Azure geographies as well as resilient to zone-wide and region-wide outages.
Create a virtual machine in an availability zone
using Azure CLI
Applies to: ✔️ Linux VMs ✔️ Flexible scale sets
This article steps through using the Azure CLI to create a Linux VM in an Azure availability zone. An availability
zone is a physically separate zone in an Azure region. Use availability zones to protect your apps and data from
an unlikely failure or loss of an entire datacenter.
To use an availability zone, create your virtual machine in a supported Azure region.
Make sure that you have installed the latest Azure CLI and logged in to an Azure account with az login.
The output is similar to the following condensed example, which shows the Availability Zones in which each VM
size is available:
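A typical way to produce such a listing is a SKU query like the following; the size filter is an illustrative assumption:

az vm list-skus --location eastus2 --size Standard_D --zone --output table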
The resource group is specified when you create or modify a VM, as you can see throughout this article.
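If the resource group doesn't already exist, create it first; the name and location below match the VM created next:

az group create --name myResourceGroupVM --location eastus2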
az vm create --resource-group myResourceGroupVM --name myVM --location eastus2 \
    --image UbuntuLTS --generate-ssh-keys --zone 1
It may take a few minutes to create the VM. Once the VM has been created, the Azure CLI outputs information
about the VM. Take note of the zones value, which indicates the availability zone in which the VM is running.
{
"fqdns": "",
"id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-
xxxxxxxxxxxx/resourceGroups/myResourceGroupVM/providers/Microsoft.Compute/virtualMachines/myVM",
"location": "eastus2",
"macAddress": "00-0D-3A-23-9A-49",
"powerState": "VM running",
"privateIpAddress": "10.0.0.4",
"publicIpAddress": "52.174.34.95",
"resourceGroup": "myResourceGroupVM",
"zones": "1"
}
The output shows that the managed disk is in the same availability zone as the VM:
{
"creationData": {
"createOption": "FromImage",
"imageReference": {
"id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-
xxxxxxxxxxxx/Providers/Microsoft.Compute/Locations/westeurope/Publishers/Canonical/ArtifactTypes/VMImage/Off
ers/UbuntuServer/Skus/16.04-LTS/Versions/latest",
"lun": null
},
"sourceResourceId": null,
"sourceUri": null,
"storageAccountId": null
},
"diskSizeGb": 30,
"encryptionSettings": null,
"id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-
xxxxxxxxxxxx/resourceGroups/myResourceGroupVM/providers/Microsoft.Compute/disks/osdisk_761c570dab",
"location": "eastus2",
"managedBy": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-
xxxxxxxxxxxx/resourceGroups/myResourceGroupVM/providers/Microsoft.Compute/virtualMachines/myVM",
"name": "myVM_osdisk_761c570dab",
"osType": "Linux",
"provisioningState": "Succeeded",
"resourceGroup": "myResourceGroupVM",
"sku": {
"name": "Premium_LRS",
"tier": "Premium"
},
"tags": {},
"timeCreated": "2018-03-05T22:16:06.892752+00:00",
"type": "Microsoft.Compute/disks",
"zones": [
"1"
]
}
Use the az vm list-ip-addresses command to return the name of the public IP address resource in myVM. In this example, the name is stored in a variable that is used in a later step.
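A sketch of that step follows; the JMESPath query is an assumption about the command's output shape:

# Store the name of the VM's public IP address resource, then show its details (including zones)
ipName=$(az vm list-ip-addresses --resource-group myResourceGroupVM --name myVM \
    --query "[0].virtualMachine.network.publicIpAddresses[0].name" --output tsv)
az network public-ip show --resource-group myResourceGroupVM --name $ipName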
The output shows that the IP address is in the same availability zone as the VM:
{
"dnsSettings": null,
"etag": "W/\"b7ad25eb-3191-4c8f-9cec-c5e4a3a37d35\"",
"id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-
xxxxxxxxxxxx/resourceGroups/myResourceGroupVM/providers/Microsoft.Network/publicIPAddresses/myVMPublicIP",
"idleTimeoutInMinutes": 4,
"ipAddress": "52.174.34.95",
"ipConfiguration": {
"etag": null,
"id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-
xxxxxxxxxxxx/resourceGroups/myResourceGroupVM/providers/Microsoft.Network/networkInterfaces/myVMVMNic/ipConf
igurations/ipconfigmyVM",
"name": null,
"privateIpAddress": null,
"privateIpAllocationMethod": null,
"provisioningState": null,
"publicIpAddress": null,
"resourceGroup": "myResourceGroupVM",
"subnet": null
},
"location": "eastUS2",
"name": "myVMPublicIP",
"provisioningState": "Succeeded",
"publicIpAddressVersion": "IPv4",
"publicIpAllocationMethod": "Dynamic",
"resourceGroup": "myResourceGroupVM",
"resourceGuid": "8c70a073-09be-4504-0000-000000000000",
"tags": {},
"type": "Microsoft.Network/publicIPAddresses",
"zones": [
"1"
]
}
Next steps
In this article, you learned how to create a VM in an availability zone. Learn more about availability for Azure
VMs.
Create a virtual machine in an availability zone
using Azure PowerShell
Applies to: ✔️ Windows VMs
This article details using Azure PowerShell to create an Azure virtual machine running Windows Server 2016 in
an Azure availability zone. An availability zone is a physically separate zone in an Azure region. Use availability
zones to protect your apps and data from an unlikely failure or loss of an entire datacenter.
To use an availability zone, create your virtual machine in a supported Azure region.
Sign in to Azure
Sign in to your Azure subscription with the Connect-AzAccount command and follow the on-screen directions.
Connect-AzAccount
The output is similar to the following condensed example, which shows the Availability Zones in which each VM
size is available:
# Create an inbound network security group rule for port 3389 - change -Access to "Allow" if you want to allow RDP access
$nsgRuleRDP = New-AzNetworkSecurityRuleConfig -Name myNetworkSecurityGroupRuleRDP -Protocol Tcp `
    -Direction Inbound -Priority 1000 -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * `
    -DestinationPortRange 3389 -Access Deny

# Create an inbound network security group rule for port 80 - change -Access to "Allow" if you want to allow TCP traffic over port 80
$nsgRuleWeb = New-AzNetworkSecurityRuleConfig -Name myNetworkSecurityGroupRuleWWW -Protocol Tcp `
    -Direction Inbound -Priority 1001 -SourceAddressPrefix * -SourcePortRange * -DestinationAddressPrefix * `
    -DestinationPortRange 80 -Access Deny
# Create a virtual network card and associate with public IP address and NSG
$nic = New-AzNetworkInterface -Name myNic -ResourceGroupName myResourceGroup -Location eastus2 `
-SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id -NetworkSecurityGroupId $nsg.Id
The output shows that the managed disk is in the same availability zone as the VM:
ResourceGroupName : myResourceGroup
AccountType : PremiumLRS
OwnerId : /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM
ManagedBy : /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM
Sku : Microsoft.Azure.Management.Compute.Models.DiskSku
Zones : {2}
TimeCreated : 9/7/2017 6:57:26 PM
OsType : Windows
CreationData : Microsoft.Azure.Management.Compute.Models.CreationData
DiskSizeGB : 127
EncryptionSettings :
ProvisioningState : Succeeded
Id : /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myResourceGroup/providers/Microsoft.Compute/disks/myVM_OsDisk_1_bd921920bb0a4650becfc2d830000000
Name : myVM_OsDisk_1_bd921920bb0a4650becfc2d830000000
Type : Microsoft.Compute/disks
Location : eastus2
Tags : {}
Next steps
In this article, you learned how to create a VM in an availability zone. Learn more about availability for Azure
VMs.
Add a disk to a Linux VM
Applies to: ✔️ Linux VMs ✔️ Flexible scale sets
This article shows you how to attach a persistent disk to your VM so that you can preserve your data - even if
your VM is reprovisioned due to maintenance or resizing.
az vm disk attach \
-g myResourceGroup \
--vm-name myVM \
--name myDataDisk \
--new \
--size-gb 50
Lower latency
In select regions, the disk attach latency has been reduced, so you'll see an improvement of up to 15%. This is
useful if you have planned/unplanned failovers between VMs, you're scaling your workload, or are running a
high scale stateful workload such as Azure Kubernetes Service. However, this improvement is limited to the
explicit disk attach command, az vm disk attach . You won't see the performance improvement if you call a
command that may implicitly perform an attach, like az vm update . You don't need to take any action other than
calling the explicit attach command to see this improvement.
Lower latency is currently available in every public region except for:
Canada Central
Central US
East US
East US 2
South Central US
West US 2
Germany North
Jio India West
North Europe
West Europe
ssh azureuser@10.123.123.25
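Once connected, identify the new disk; a typical check lists the block devices with their LUN values:

# List block devices with their HCTL (LUN) values and sizes to find the new 50 GiB disk
lsblk -o NAME,HCTL,SIZE,MOUNTPOINT | grep -i "sd"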
Here, sdc is the disk that we want, because it is 50G. If you add multiple disks and aren't sure which disk it is based on size alone, you can go to the VM page in the portal, select Disks, and check the LUN number for the disk under Data disks. Compare the LUN number from the portal to the last number of the HCTL portion of the output, which is the LUN.
Format the disk
Format the disk with parted. If the disk size is 2 tebibytes (TiB) or larger, you must use GPT partitioning; if it is under 2 TiB, you can use either MBR or GPT partitioning.
NOTE
It is recommended that you use the latest version of parted that is available for your distro. If the disk size is 2 tebibytes (TiB) or larger, you must use GPT partitioning. If the disk size is under 2 TiB, then you can use either MBR or GPT partitioning.
The following example uses parted on /dev/sdc , which is where the first data disk will typically be on most
VMs. Replace sdc with the correct option for your disk. We are also formatting it using the XFS filesystem.
sudo parted /dev/sdc --script mklabel gpt mkpart xfspart xfs 0% 100%
sudo mkfs.xfs /dev/sdc1
sudo partprobe /dev/sdc1
Use the partprobe utility to make sure the kernel is aware of the new partition and filesystem. Failure to use partprobe can cause the blkid or lsblk commands to not return the UUID for the new filesystem immediately.
Mount the disk
Now, create a directory to mount the file system using mkdir . The following example creates a directory at
/datadrive :
Then use mount to mount the filesystem. The following example mounts the /dev/sdc1 partition to the /datadrive mount point:
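The corresponding commands for those two steps, using the device name and mount point from the example above:

sudo mkdir /datadrive
sudo mount /dev/sdc1 /datadrive

The blkid output that follows provides the UUID needed for the /etc/fstab entry.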
sudo blkid
NOTE
Improperly editing the /etc/fstab file could result in an unbootable system. If unsure, refer to the distribution's
documentation for information on how to properly edit this file. It is also recommended that a backup of the /etc/fstab file
is created before editing.
In this example, use the UUID value for the /dev/sdc1 device that was created in the previous steps, and the
mountpoint of /datadrive . Add the following line to the end of the /etc/fstab file:
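A representative entry looks like the following; the UUID is a placeholder and must be replaced with the value blkid reported for /dev/sdc1:

# /etc/fstab entry - replace the UUID with the value from blkid
UUID=33333333-3b3b-3c3c-3d3d-3e3e3e3e3e3e   /datadrive   xfs   defaults,nofail   1   2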
In this example, we are using the nano editor, so when you are done editing the file, use Ctrl+O to write the file
and Ctrl+X to exit the editor.
NOTE
Later removing a data disk without editing fstab could cause the VM to fail to boot. Most distributions provide either the
nofail and/or nobootwait fstab options. These options allow a system to boot even if the disk fails to mount at boot time.
Consult your distribution's documentation for more information on these parameters.
The nofail option ensures that the VM starts even if the filesystem is corrupt or the disk does not exist at boot time.
Without this option, you may encounter behavior as described in Cannot SSH to Linux VM due to FSTAB errors
The Azure VM Serial Console can be used for console access to your VM if modifying fstab has resulted in a boot failure.
More details are available in the Serial Console documentation.
In some cases, the discard option may have performance implications. Alternatively, you can run the
fstrim command manually from the command line, or add it to your crontab to run regularly:
Ubuntu
RHEL/CentOS
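A typical sequence for the two distributions listed above, assuming fstrim is provided by the util-linux package:

# Ubuntu
sudo apt-get install util-linux
sudo fstrim /datadrive

# RHEL/CentOS
sudo yum install util-linux
sudo fstrim /datadrive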
Troubleshooting
When adding data disks to a Linux VM, you may encounter errors if a disk does not exist at LUN 0. If you are adding a disk manually using the az vm disk attach --new command and you specify a LUN (--lun) rather than allowing the Azure platform to determine the appropriate LUN, take care that a disk already exists (or will exist) at LUN 0.
Consider the following example showing a snippet of the output from lsscsi :
The two data disks exist at LUN 0 and LUN 1 (the first column in the lsscsi output details
[host:channel:target:lun] ). Both disks should be accessible from within the VM. If you had manually specified
the first disk to be added at LUN 1 and the second disk at LUN 2, you may not see the disks correctly from
within your VM.
NOTE
The Azure host value is 5 in these examples, but this may vary depending on the type of storage you select.
This disk behavior is not an Azure problem, but the way in which the Linux kernel follows the SCSI specifications.
When the Linux kernel scans the SCSI bus for attached devices, a device must be found at LUN 0 in order for the
system to continue scanning for additional devices. As such:
Review the output of lsscsi after adding a data disk to verify that you have a disk at LUN 0.
If your disk does not show up correctly within your VM, verify a disk exists at LUN 0.
Next steps
To ensure your Linux VM is configured correctly, review the Optimize your Linux machine performance
recommendations.
Expand your storage capacity by adding additional disks and configure RAID for additional performance.
Attach a data disk to a Windows VM with
PowerShell
Applies to: ✔️ Windows VMs ✔️ Flexible scale sets
This article shows you how to attach both new and existing disks to a Windows virtual machine by using
PowerShell.
First, review these tips:
The size of the virtual machine controls how many data disks you can attach. For more information, see Sizes
for virtual machines.
To use premium SSDs, you'll need a premium storage-enabled VM type, like the DS-series or GS-series
virtual machine.
This article uses PowerShell within the Azure Cloud Shell, which is constantly updated to the latest version. To open the Cloud Shell, select Try it from the top of any code block.
Lower latency
In select regions, the disk attach latency has been reduced, so you'll see an improvement of up to 15%. This is
useful if you have planned/unplanned failovers between VMs, you're scaling your workload, or are running a
high scale stateful workload such as Azure Kubernetes Service. However, this improvement is limited to the
explicit disk attach command, Add-AzVMDataDisk . You won't see the performance improvement if you call a
command that may implicitly perform an attach, like Update-AzVM . You don't need to take any action other than
calling the explicit attach command to see this improvement.
Lower latency is currently available in every public region except for:
Canada Central
Central US
East US
East US 2
South Central US
West US 2
Germany North
Jio India West
North Europe
West Europe
$diskConfig = New-AzDiskConfig -SkuName $storageType -Location $location -CreateOption Empty -DiskSizeGB 128
$dataDisk1 = New-AzDisk -DiskName $dataDiskName -Disk $diskConfig -ResourceGroupName $rgName
$rgName = 'myResourceGroup'
$vmName = 'myVM'
$location = 'East US 2'
$storageType = 'Premium_LRS'
$dataDiskName = $vmName + '_datadisk1'
$diskConfig = New-AzDiskConfig -SkuName $storageType -Location $location -CreateOption Empty -DiskSizeGB 128 -Zone 1
$dataDisk1 = New-AzDisk -DiskName $dataDiskName -Disk $diskConfig -ResourceGroupName $rgName
$location = "location-name"
$scriptName = "script-name"
$fileName = "script-file-name"
Set-AzVMCustomScriptExtension -ResourceGroupName $rgName -Location $locName -VMName $vmName -Name
$scriptName -TypeHandlerVersion "1.4" -StorageAccountName "mystore1" -StorageAccountKey "primary-key" -
FileName $fileName -ContainerName "scripts"
The script file can contain code to initialize the disks, for example:
$disks = Get-Disk | Where partitionstyle -eq 'raw' | sort number
$rgName = "myResourceGroup"
$vmName = "myVM"
$dataDiskName = "myDisk"
$disk = Get-AzDisk -ResourceGroupName $rgName -DiskName $dataDiskName
Next steps
You can also deploy managed disks using templates. For more information, see Using Managed Disks in Azure
Resource Manager Templates or the quickstart template for deploying multiple data disks.
Create a virtual machine scale set that uses
Availability Zones
To protect your virtual machine scale sets from datacenter-level failures, you can create a scale set across
Availability Zones. Azure regions that support Availability Zones have a minimum of three separate zones, each
with their own independent power source, network, and cooling. For more information, see Overview of
Availability Zones.
Availability considerations
When you deploy a scale set into one or more zones as of API version 2017-12-01, you
have the following availability options:
Max spreading (platformFaultDomainCount = 1)
Static fixed spreading (platformFaultDomainCount = 5)
Spreading aligned with storage disk fault domains (platformFaultDomainCount = 2 or 3)
With max spreading, the scale set spreads your VMs across as many fault domains as possible within each zone.
This spreading could be across greater or fewer than five fault domains per zone. With static fixed spreading, the
scale set spreads your VMs across exactly five fault domains per zone. If the scale set cannot find five distinct
fault domains per zone to satisfy the allocation request, the request fails.
We recommend deploying with max spreading for most workloads , as this approach provides the best
spreading in most cases. If you need replicas to be spread across distinct hardware isolation units, we
recommend spreading across Availability Zones and utilize max spreading within each zone.
NOTE
With max spreading, you only see one fault domain in the scale set VM instance view and in the instance metadata
regardless of how many fault domains the VMs are spread across. The spreading within each zone is implicit.
Placement groups
IMPORTANT
Placement groups only apply to virtual machine scale sets running in Uniform orchestration mode.
When you deploy a scale set, you also have the option to deploy with a single placement group per Availability
Zone, or with multiple per zone. For regional (non-zonal) scale sets, the choice is to have a single placement
group in the region or to have multiple in the region. If the scale set property called singlePlacementGroup is set
to false, the scale set can be composed of multiple placement groups and has a range of 0-1,000 VMs. When set
to the default value of true, the scale set is composed of a single placement group, and has a range of 0-100
VMs. For most workloads, we recommend multiple placement groups, which allows for greater scale. In API
version 2017-12-01, scale sets default to multiple placement groups for single-zone and cross-zone scale sets,
but they default to single placement group for regional (non-zonal) scale sets.
NOTE
If you use max spreading, you must use multiple placement groups.
Zone balancing
Finally, for scale sets deployed across multiple zones, you also have the option of choosing "best effort zone
balance" or "strict zone balance". A scale set is considered "balanced" if each zone the same number of VMs or
+\- 1 VM in all other zones for the scale set. For example:
A scale set with 2 VMs in zone 1, 3 VMs in zone 2, and 3 VMs in zone 3 is considered balanced. There is only
one zone with a different VM count and it is only 1 less than the other zones.
A scale set with 1 VM in zone 1, 3 VMs in zone 2, and 3 VMs in zone 3 is considered unbalanced. Zone 1 has
2 fewer VMs than zones 2 and 3.
It's possible that VMs in the scale set are successfully created, but extensions on those VMs fail to deploy. These
VMs with extension failures are still counted when determining if a scale set is balanced. For instance, a scale set
with 3 VMs in zone 1, 3 VMs in zone 2, and 3 VMs in zone 3 is considered balanced even if all extensions failed
in zone 1 and all extensions succeeded in zones 2 and 3.
With best-effort zone balance, the scale set attempts to scale in and out while maintaining balance. However, if
for some reason this is not possible (for example, if one zone goes down, the scale set cannot create a new VM
in that zone), the scale set allows temporary imbalance to successfully scale in or out. On subsequent scale-out
attempts, the scale set adds VMs to zones that need more VMs for the scale set to be balanced. Similarly, on
subsequent scale in attempts, the scale set removes VMs from zones that need fewer VMs for the scale set to be
balanced. With "strict zone balance", the scale set fails any attempts to scale in or out if doing so would cause
imbalance.
To use best-effort zone balance, set zoneBalance to false. This setting is the default in API version 2017-12-01. To
use strict zone balance, set zoneBalance to true.
NOTE
The zoneBalance property can only be set if the zones property of the scale set contains more than one zone. If there
are no zones or only one zone specified, then the zoneBalance property should not be set.
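To check how an existing scale set is configured, you can query the zone-related properties with the Azure CLI. This is a sketch that assumes the CLI's usual flattening of the resource's properties in its output:
# Inspect zone-related settings of an existing scale set
az vmss show \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --query "{zones:zones, zoneBalance:zoneBalance, singlePlacementGroup:singlePlacementGroup, platformFaultDomainCount:platformFaultDomainCount}"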
Single-zone scale set
The following example uses the Azure CLI to create a single-zone scale set named myScaleSet in zone 1. The scale
set and supporting resources, such as the Azure load balancer and public IP address, are created in the single
zone that you specify.
az vmss create \
--resource-group myResourceGroup \
--name myScaleSet \
--image UbuntuLTS \
--upgrade-policy-mode automatic \
--admin-username azureuser \
--generate-ssh-keys \
--zones 1
For a complete example of a single-zone scale set and network resources, see this sample CLI script.
Zone-redundant scale set
To create a zone-redundant scale set, you use a Standard SKU public IP address and load balancer. For enhanced
redundancy, the Standard SKU creates zone-redundant network resources. For more information, see Azure
Load Balancer Standard overview and Standard Load Balancer and Availability Zones.
To create a zone-redundant scale set, specify multiple zones with the --zones parameter. The following example
creates a zone-redundant scale set named myScaleSet across zones 1,2,3:
az vmss create \
--resource-group myResourceGroup \
--name myScaleSet \
--image UbuntuLTS \
--upgrade-policy-mode automatic \
--admin-username azureuser \
--generate-ssh-keys \
--zones 1 2 3
It takes a few minutes to create and configure all the scale set resources and VMs in the zone(s) that you specify.
For a complete example of a zone-redundant scale set and network resources, see this sample CLI script.
You can also create scale sets with Azure PowerShell by using the -Zone parameter of New-AzVmss. The first of the
following examples creates a single-zone scale set in zone 1, and the second creates a zone-redundant scale set
across zones 1, 2, and 3:
New-AzVmss `
-ResourceGroupName "myResourceGroup" `
-Location "EastUS2" `
-VMScaleSetName "myScaleSet" `
-VirtualNetworkName "myVnet" `
-SubnetName "mySubnet" `
-PublicIpAddressName "myPublicIPAddress" `
-LoadBalancerName "myLoadBalancer" `
-UpgradePolicy "Automatic" `
-Zone "1"
New-AzVmss `
-ResourceGroupName "myResourceGroup" `
-Location "EastUS2" `
-VMScaleSetName "myScaleSet" `
-VirtualNetworkName "myVnet" `
-SubnetName "mySubnet" `
-PublicIpAddressName "myPublicIPAddress" `
-LoadBalancerName "myLoadBalancer" `
-UpgradePolicy "Automatic" `
-Zone "1", "2", "3"
You can also deploy scale sets with Azure Resource Manager templates. For a complete example of a single-zone
scale set and network resources, see this sample Resource Manager template.
Zone-redundant scale set
To create a zone-redundant scale set, specify multiple values in the zones property for the
Microsoft.Compute/virtualMachineScaleSets resource type. The following example creates a zone-redundant
scale set named myScaleSet across East US 2 zones 1,2,3:
{
"type": "Microsoft.Compute/virtualMachineScaleSets",
"name": "myScaleSet",
"location": "East US 2",
"apiVersion": "2017-12-01",
"zones": [
"1",
"2",
"3"
]
}
If you create a public IP address or a load balancer, specify the "sku": { "name": "Standard" } property to create
zone-redundant network resources. You also need to create a Network Security Group and rules to permit any
traffic. For more information, see Azure Load Balancer Standard overview and Standard Load Balancer and
Availability Zones.
For a complete example of a zone-redundant scale set and network resources, see this sample Resource
Manager template
Next steps
Now that you have created a scale set in an Availability Zone, you can learn how to Deploy applications on
virtual machine scale sets or Use autoscale with virtual machine scale sets.
What is Azure Load Balancer?
7/17/2022 • 3 minutes to read • Edit Online
Load balancing refers to evenly distributing load (incoming network traffic) across a group of backend resources
or servers.
Azure Load Balancer operates at layer 4 of the Open Systems Interconnection (OSI) model. It's the single point of
contact for clients. Load balancer distributes inbound flows that arrive at the load balancer's front end to
backend pool instances. These flows are distributed according to configured load-balancing rules and health probes. The
backend pool instances can be Azure Virtual Machines or instances in a virtual machine scale set.
A public load balancer can provide outbound connections for virtual machines (VMs) inside your virtual
network. These connections are accomplished by translating their private IP addresses to public IP addresses.
Public Load Balancers are used to load balance internet traffic to your VMs.
An internal (or private) load balancer is used where private IPs are needed at the frontend only. Internal
load balancers are used to load balance traffic inside a virtual network. A load balancer frontend can be
accessed from an on-premises network in a hybrid scenario.
Figure: Balancing multi-tier applications by using both public and internal Load Balancer
For more information on the individual load balancer components, see Azure Load Balancer components.
NOTE
Azure provides a suite of fully managed load-balancing solutions for your scenarios.
If you are looking to do DNS based global routing and do not have requirements for Transport Layer Security (TLS)
protocol termination ("SSL offload"), per-HTTP/HTTPS request or application-layer processing, review Traffic Manager.
If you want to load balance between your servers in a region at the application layer, review Application Gateway.
If you need to optimize global routing of your web traffic and optimize top-tier end-user performance and reliability
through quick global failover, see Front Door.
Your end-to-end scenarios may benefit from combining these solutions as needed. For an Azure load-balancing options
comparison, see Overview of load-balancing options in Azure.
What's new?
Subscribe to the RSS feed and view the latest Azure Load Balancer feature updates on the Azure Updates page.
Next steps
See Create a public standard load balancer to get started with using a load balancer.
For more information on Azure Load Balancer limitations and components, see Azure Load Balancer
components and Azure Load Balancer concepts
Learn module: Introduction to Azure Load Balancer.
Load Balancer and Availability Zones
7/17/2022 • 4 minutes to read • Edit Online
Azure Load Balancer supports availability zones scenarios. You can use Standard Load Balancer to increase
availability throughout your scenario by aligning resources with, and distributing them across, zones. Review this
document to understand these concepts and fundamental scenario design guidance.
A Load Balancer can be zone redundant, zonal, or non-zonal. To configure the zone-related properties
(mentioned above) for your load balancer, select the appropriate type of frontend needed.
Zone redundant
In a region with Availability Zones, a Standard Load Balancer can be zone-redundant, with traffic served by a
single IP address.
A single frontend IP address will survive zone failure. The frontend IP may be used to reach all (non-impacted)
backend pool members no matter the zone. One or more availability zones can fail and the data path survives as
long as one zone in the region remains healthy.
The frontend's IP address is served simultaneously by multiple independent infrastructure deployments in
multiple availability zones. Any retries or reestablishment will succeed in other zones not affected by the zone
failure.
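For example, with the Azure CLI, a zone-redundant frontend is typically backed by a Standard SKU public IP that spans all zones. This is a minimal sketch with illustrative resource names:
# Create a zone-redundant Standard public IP to use as a load balancer frontend
az network public-ip create \
  --resource-group myResourceGroup \
  --name myZRPublicIP \
  --sku Standard \
  --zone 1 2 3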
Zonal
You can choose to have a frontend guaranteed to a single zone, which is known as a zonal frontend. This scenario means
any inbound or outbound flow is served by a single zone in a region. Your frontend shares fate with the health
of the zone. The data path is unaffected by failures in zones other than where it was guaranteed. You can use
zonal frontends to expose an IP address per Availability Zone.
Additionally, the use of zonal frontends directly for load-balanced endpoints within each zone is supported. You
can use this configuration to expose per zone load-balanced endpoints to individually monitor each zone. For
public endpoints, you can integrate them with a DNS load-balancing product like Traffic Manager and use a
single DNS name.
Figure: Zonal load balancer
For a public load balancer frontend, you add a zones parameter to the public IP. This public IP is referenced by
the frontend IP configuration used by the respective rule.
For an internal load balancer frontend, add a zones parameter to the internal load balancer frontend IP
configuration. A zonal frontend guarantees an IP address in a subnet to a specific zone.
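A minimal sketch of both variants with the Azure CLI follows; all names are illustrative, and the internal example assumes an existing internal load balancer and virtual network:
# Public zonal frontend: pin the Standard public IP to zone 1
az network public-ip create \
  --resource-group myResourceGroup \
  --name myZonalPublicIP \
  --sku Standard \
  --zone 1

# Internal zonal frontend: pin the frontend IP configuration to zone 1
az network lb frontend-ip create \
  --resource-group myResourceGroup \
  --lb-name myInternalLoadBalancer \
  --name myZonalFrontend \
  --vnet-name myVNet \
  --subnet myBackendSubnet \
  --zone 1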
Non-Zonal
Load Balancers can also be created in a non-zonal configuration by use of a "no-zone" frontend (a public IP or
public IP prefix in the case of a public load balancer; a private IP in the case of an internal load balancer). This
option does not give a guarantee of redundancy. Note that all public IP addresses that are upgraded will be of
type "no-zone".
Design considerations
Now that you understand the zone-related properties for Standard Load Balancer, the following design
considerations might help as you design for high availability.
Tolerance to zone failure
A zone redundant frontend can serve a zonal resource in any zone with a single IP address. The IP can
survive one or more zone failures as long as at least one zone remains healthy within the region.
A zonal frontend is a reduction of the service to a single zone and shares fate with the respective zone. If the
deployment in your zone goes down, your load balancer will not survive this failure.
Members in the backend pool of a load balancer are normally associated with a single zone (e.g. zonal virtual
machines). A common design for production workloads would be to have multiple zonal resources (e.g. virtual
machines from zone 1, 2, and 3) in the backend of a load balancer with a zone-redundant frontend.
Multiple frontends
Using multiple frontends allows you to load balance traffic on more than one port and/or IP address. When
designing your architecture, it is important to account for the way zone redundancy and multiple frontends can
interact. Note that if the goal is to always have every frontend be resilient to failure, then all IP addresses
assigned as frontends must be zone-redundant. If a set of frontends is intended to be associated with a single
zone, then every IP address for that set must be associated with that specific zone. It is not required to have a
load balancer for each zone; rather, each zonal frontend (or set of zonal frontends) could be associated with
virtual machines in the backend pool that are part of that specific availability zone.
Transition between regional and zonal models
In the case where a region is augmented to have availability zones, any existing IPs (e.g., used for load balancer
frontends) would remain non-zonal. In order to ensure your architecture can take advantage of the new zones, it
is recommended that new frontend IPs be created, and the appropriate rules and configurations be replicated to
utilize these new IPs.
Control vs data plane implications
Zone-redundancy doesn't imply hitless data plane or control plane. Zone-redundant flows can use any zone and
your flows will use all healthy zones in a region. In a zone failure, traffic flows using healthy zones aren't
affected.
Traffic flows using a zone at the time of zone failure may be affected but applications can recover. Traffic
continues in the healthy zones within the region upon retransmission when Azure has converged around the
zone failure.
Review Azure cloud design patterns to improve the resiliency of your application to failure scenarios.
Limitations
Zones can't be changed, updated, or created for the resource after creation.
Resources can't be updated from zonal to zone-redundant or vice versa after creation.
Next steps
Learn more about Availability Zones
Learn more about Standard Load Balancer
Learn how to load balance VMs within a zone using a zonal Standard Load Balancer
Learn how to load balance VMs across zones using a zone redundant Standard Load Balancer
Learn about Azure cloud design patterns to improve the resiliency of your application to failure scenarios.
Quickstart: Create a public load balancer to load
balance VMs using the Azure portal
7/17/2022 • 9 minutes to read • Edit Online
Get started with Azure Load Balancer by using the Azure portal to create a public load balancer and two virtual
machines.
Prerequisites
An Azure account with an active subscription. Create an account for free.
Sign in to Azure
Sign in to the Azure portal at https://portal.azure.com.
Enter or select the required values under Project details and Instance details.
4. Select the IP Addresses tab or select Next: IP Addresses at the bottom of the page.
5. In the IP Addresses tab, enter the IP address information for the virtual network.
6. Under Subnet name , select the word default . If a subnet isn't present, select + Add subnet .
7. In Edit subnet, enter the subnet information.
11. Select the Review + create tab or select the Review + create button.
12. Select Create .
NOTE
The virtual network and subnet are created immediately. The Bastion host creation is submitted as a job and will
complete within 10 minutes. You can proceed to the next steps while the Bastion host is created.
In the Basics tab, enter or select the values under Project details and Instance details.
NOTE
IPv6 isn't currently supported with Routing Preference or Cross-region load-balancing (Global Tier).
NOTE
For more information on IP prefixes, see Azure Public IP address prefix.
NOTE
In regions with Availability Zones, you have the option to select no-zone (default option), a specific zone, or zone-
redundant. The choice will depend on your specific domain failure requirements. In regions without Availability
Zones, this field won't appear.
For more information on availability zones, see Availability zones overview.
Port: Enter 80.
Outbound source network address translation (SNAT): Leave the default of (Recommended) Use outbound rules to
provide backend pool members access to the internet.
NOTE
In this example, we'll create a NAT gateway to provide outbound Internet access. The outbound rules tab in the
configuration is bypassed because it's optional and isn't needed with the NAT gateway. For more information on Azure NAT
gateway, see What is Azure Virtual Network NAT? For more information about outbound connections in Azure,
see Source Network Address Translation (SNAT) for outbound connections.
Enter or select the NAT gateway's Project details and Instance details values.
4. Select the Outbound IP tab or select Next: Outbound IP at the bottom of the page.
5. In Outbound IP , select Create a new public IP address next to Public IP addresses .
6. Enter myNATgatewayIP in Name .
7. Select OK .
8. Select the Subnet tab or select the Next: Subnet button at the bottom of the page.
9. In Virtual network in the Subnet tab, select myVNet.
10. Select myBackendSubnet under Subnet name .
11. Select the blue Review + create button at the bottom of the page, or select the Review + create tab.
12. Select Create .
Enter or select the virtual machine's Project details, Instance details, and Administrator account values.
4. Select the Networking tab, or select Next: Disks , then Next: Networking .
5. In the Networking tab, select or enter the following information:
Under Network interface and Load balancing, for Place this virtual machine behind an existing load-balancing
solution?, select the check box.
For the second virtual machine, set Name to myVM2.
Install IIS
1. In the search box at the top of the portal, enter Virtual machine. Select Virtual machines in the search
results.
2. Select myVM1.
3. On the Overview page, select Connect, then Bastion.
4. Enter the username and password entered during VM creation.
5. Select Connect .
6. On the server desktop, navigate to Windows Administrative Tools > Windows PowerShell .
7. In the PowerShell Window, run the following commands to:
Install the IIS server
Remove the default iisstart.htm file
Add a new iisstart.htm file that displays the name of the VM:
Clean up resources
When no longer needed, delete the resource group, load balancer, and all related resources. To do so, select the
resource group CreatePubLBQS-rg that contains the resources and then select Delete .
Next steps
In this quickstart, you:
Created an Azure Load Balancer
Attached 2 VMs to the load balancer
Tested the load balancer
To learn more about Azure Load Balancer, continue to:
What is Azure Load Balancer?
Quickstart: Create a public load balancer to load
balance VMs using Azure PowerShell
7/17/2022 • 8 minutes to read • Edit Online
Get started with Azure Load Balancer by using Azure PowerShell to create a public load balancer and two virtual
machines.
Prerequisites
An Azure account with an active subscription. Create an account for free
Azure PowerShell installed locally or Azure Cloud Shell
If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version
5.4.1 or later. Run Get-Module -ListAvailable Az to find the installed version. If you need to upgrade, see Install
Azure PowerShell module. If you're running PowerShell locally, you also need to run Connect-AzAccount to create
a connection with Azure.
$rg = @{
Name = 'CreatePubLBQS-rg'
Location = 'eastus'
}
New-AzResourceGroup @rg
$publicip = @{
Name = 'myPublicIP'
ResourceGroupName = 'CreatePubLBQS-rg'
Location = 'eastus'
Sku = 'Standard'
AllocationMethod = 'static'
Zone = 1,2,3
}
New-AzPublicIpAddress @publicip
## For loop with variable to create virtual machines for load balancer backend pool. ##
for ($i=1; $i -le 2; $i++)
{
## Command to create network interface for VMs ##
$nic = @{
Name = "myNicVM$i"
ResourceGroupName = 'CreatePubLBQS-rg'
Location = 'eastus'
Subnet = $vnet.Subnets[0]
NetworkSecurityGroup = $nsg
LoadBalancerBackendAddressPool = $bepool
}
$nicVM = New-AzNetworkInterface @nic

## Build a VM configuration that uses this NIC and create the VM in zone $i as a background job. ##
## This completes the loop as a minimal sketch; the VM size, image, and credential prompt are assumptions. ##
$cred = Get-Credential -Message "Enter the administrator credentials for myVM$i"
$vmConfig = New-AzVMConfig -VMName "myVM$i" -VMSize 'Standard_DS1_v2' |
    Set-AzVMOperatingSystem -Windows -ComputerName "myVM$i" -Credential $cred |
    Set-AzVMSourceImage -PublisherName 'MicrosoftWindowsServer' -Offer 'WindowsServer' -Skus '2019-Datacenter' -Version 'latest' |
    Add-AzVMNetworkInterface -Id $nicVM.Id
New-AzVM -ResourceGroupName 'CreatePubLBQS-rg' -Location 'eastus' -VM $vmConfig -Zone $i -AsJob
}
The deployments of the virtual machines and bastion host are submitted as PowerShell jobs. To view the status
of the jobs, use Get-Job:
Get-Job
Ensure the State of the VM creation is Completed before moving on to the next steps.
NOTE
Azure provides a default outbound access IP for VMs that either aren't assigned a public IP address or are in the back-end
pool of an internal basic Azure load balancer. The default outbound access IP mechanism provides an outbound IP
address that isn't configurable.
For more information, see Default outbound access in Azure.
The default outbound access IP is disabled when either a public IP address is assigned to the VM or the VM is placed in
the back-end pool of a standard load balancer, with or without outbound rules. If an Azure Virtual Network network
address translation (NAT) gateway resource is assigned to the subnet of the virtual machine, the default outbound access
IP is disabled.
VMs that are created by virtual machine scale sets in flexible orchestration mode don't have default outbound access.
For more information about outbound connections in Azure, see Use source network address translation (SNAT) for
outbound connections.
Install IIS
Use Set-AzVMExtension to install the Custom Script Extension.
The extension runs PowerShell Add-WindowsFeature Web-Server to install the IIS webserver and then updates the
Default.htm page to show the hostname of the VM:
IMPORTANT
Ensure the virtual machine deployments have completed from the previous steps before proceeding. Use Get-Job to
check the status of the virtual machine deployment jobs.
## For loop with variable to install custom script extension on virtual machines. ##
for ($i=1; $i -le 2; $i++)
{
$ext = @{
Publisher = 'Microsoft.Compute'
ExtensionType = 'CustomScriptExtension'
ExtensionName = 'IIS'
ResourceGroupName = 'CreatePubLBQS-rg'
VMName = "myVM$i"
Location = 'eastus'
TypeHandlerVersion = '1.8'
SettingString = '{"commandToExecute":"powershell Add-WindowsFeature Web-Server; powershell Add-Content -
Path \"C:\\inetpub\\wwwroot\\Default.htm\" -Value $($env:computername)"}'
}
Set-AzVMExtension @ext -AsJob
}
The extensions are deployed as PowerShell jobs. To view the status of the installation jobs, use Get-Job:
Get-Job
Ensure the State of the jobs is Completed before moving on to the next steps.
$ip = @{
ResourceGroupName = 'CreatePubLBQS-rg'
Name = 'myPublicIP'
}
Get-AzPublicIPAddress @ip | select IpAddress
Copy the public IP address, and then paste it into the address bar of your browser. The default page of IIS Web
server is displayed on the browser.
Clean up resources
When no longer needed, you can use the Remove-AzResourceGroup command to remove the resource group,
load balancer, and the remaining resources.
Next steps
In this quickstart, you:
Created an Azure Load Balancer
Attached 2 VMs to the load balancer
Tested the load balancer
To learn more about Azure Load Balancer, continue to:
What is Azure Load Balancer?
Quickstart: Create a public load balancer to load
balance VMs using the Azure CLI
7/17/2022 • 8 minutes to read • Edit Online
Get started with Azure Load Balancer by using the Azure CLI to create a public load balancer and two virtual
machines.
If you don't have an Azure subscription, create an Azure free account before you begin.
Prerequisites
Use the Bash environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart -
Bash.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you're running on Windows
or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the
Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish
the authentication process, follow the steps displayed in your terminal. For other sign-in options,
see Sign in with the Azure CLI.
When you're prompted, install the Azure CLI extension on first use. For more information about
extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the
latest version, run az upgrade.
This quickstart requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version
is already installed.
az group create \
--name CreatePubLBQS-rg \
--location eastus
az network lb create \
--resource-group CreatePubLBQS-rg \
--name myLoadBalancer \
--sku Standard \
--public-ip-address myPublicIP \
--frontend-ip-name myFrontEnd \
--backend-pool-name myBackEndPool
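A health probe and a load-balancing rule tie this frontend to the backend pool. A minimal sketch of those commands follows; the probe and rule names are illustrative, while the other names come from the command above:
# Create a TCP health probe on port 80
az network lb probe create \
  --resource-group CreatePubLBQS-rg \
  --lb-name myLoadBalancer \
  --name myHealthProbe \
  --protocol tcp \
  --port 80

# Create a rule that load balances TCP port 80 to the backend pool
az network lb rule create \
  --resource-group CreatePubLBQS-rg \
  --lb-name myLoadBalancer \
  --name myHTTPRule \
  --protocol tcp \
  --frontend-port 80 \
  --backend-port 80 \
  --frontend-ip-name myFrontEnd \
  --backend-pool-name myBackEndPool \
  --probe-name myHealthProbe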
It can take a few minutes for the Azure Bastion host to deploy.
array=(myNicVM1 myNicVM2)
for vmnic in "${array[@]}"
do
az network nic create \
--resource-group CreatePubLBQS-rg \
--name $vmnic \
--vnet-name myVNet \
--subnet myBackEndSubnet \
--network-security-group myNSG
done
az vm create \
--resource-group CreatePubLBQS-rg \
--name myVM1 \
--nics myNicVM1 \
--image win2019datacenter \
--admin-username azureuser \
--zone 1 \
--no-wait
az vm create \
--resource-group CreatePubLBQS-rg \
--name myVM2 \
--nics myNicVM2 \
--image win2019datacenter \
--admin-username azureuser \
--zone 2 \
--no-wait
It may take a few minutes for the VMs to deploy. You can continue to the next steps while the VMs are creating.
NOTE
Azure provides a default outbound access IP for VMs that either aren't assigned a public IP address or are in the back-end
pool of an internal basic Azure load balancer. The default outbound access IP mechanism provides an outbound IP
address that isn't configurable.
For more information, see Default outbound access in Azure.
The default outbound access IP is disabled when either a public IP address is assigned to the VM or the VM is placed in
the back-end pool of a standard load balancer, with or without outbound rules. If an Azure Virtual Network network
address translation (NAT) gateway resource is assigned to the subnet of the virtual machine, the default outbound access
IP is disabled.
VMs that are created by virtual machine scale sets in flexible orchestration mode don't have default outbound access.
For more information about outbound connections in Azure, see Use source network address translation (SNAT) for
outbound connections.
array=(myNicVM1 myNicVM2)
for vmnic in "${array[@]}"
do
az network nic ip-config address-pool add \
--address-pool myBackEndPool \
--ip-config-name ipconfig1 \
--nic-name $vmnic \
--resource-group CreatePubLBQS-rg \
--lb-name myLoadBalancer
done
array=(myVM1 myVM2)
for vm in "${array[@]}"
do
az vm extension set \
--publisher Microsoft.Compute \
--version 1.8 \
--name CustomScriptExtension \
--vm-name $vm \
--resource-group CreatePubLBQS-rg \
--settings '{"commandToExecute":"powershell Add-WindowsFeature Web-Server; powershell Add-Content -
Path \"C:\\inetpub\\wwwroot\\Default.htm\" -Value $($env:computername)"}'
done
Clean up resources
When no longer needed, use the az group delete command to remove the resource group, load balancer, and all
related resources.
az group delete \
--name CreatePubLBQS-rg
Next steps
In this quickstart:
You created a standard public load balancer
Attached two virtual machines
Configured the load balancer traffic rule and health probe
Tested the load balancer
To learn more about Azure Load Balancer, continue to:
What is Azure Load Balancer?
Create, change, or delete an Azure public IP
address
7/17/2022 • 10 minutes to read • Edit Online
Learn about a public IP address and how to create, change, and delete one. A public IP address is a resource with
configurable settings. Assigning a public IP address to an Azure resource that supports public IP addresses
enables:
Inbound communication from the Internet to the resource, such as Azure Virtual Machines (VM), Azure
Application Gateways, Azure Load Balancers, Azure VPN Gateways, and others.
Outbound connectivity to the Internet using a predictable IP address.
NOTE
Azure provides a default outbound access IP for VMs that either aren't assigned a public IP address or are in the back-end
pool of an internal basic Azure load balancer. The default outbound access IP mechanism provides an outbound IP
address that isn't configurable.
For more information, see Default outbound access in Azure.
The default outbound access IP is disabled when either a public IP address is assigned to the VM or the VM is placed in
the back-end pool of a standard load balancer, with or without outbound rules. If an Azure Virtual Network network
address translation (NAT) gateway resource is assigned to the subnet of the virtual machine, the default outbound access
IP is disabled.
VMs that are created by virtual machine scale sets in flexible orchestration mode don't have default outbound access.
For more information about outbound connections in Azure, see Use source network address translation (SNAT) for
outbound connections.
NOTE
Though the portal provides the option to create two public IP address resources (one IPv4 and one IPv6), the PowerShell
and CLI commands create one resource with an address for one IP version or the other. If you want two public IP address
resources, one for each IP version, you must run the command twice, specifying different names and IP versions for the
public IP address resources.
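For instance, a sketch with placeholder names: creating both versions with the Azure CLI means running the command once per IP version.
# IPv4 public IP
az network public-ip create \
  --resource-group myResourceGroup \
  --name myPublicIP-ipv4 \
  --sku Standard \
  --version IPv4

# IPv6 public IP (a separate resource with a different name)
az network public-ip create \
  --resource-group myResourceGroup \
  --name myPublicIP-ipv6 \
  --sku Standard \
  --version IPv6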
For more detail on the specific attributes of a public IP address during creation, see the following table:
SETTING: Name (only visible if you select IP Version of Both)
REQUIRED?: Yes, if you select IP Version of Both
DETAILS: The name must be different than the name you enter for the first Name in this list. If you choose to
create both an IPv4 and an IPv6 address, the portal creates two separate public IP address resources, one with
each IP address version assigned to it.
SETTING: IP address assignment (only visible if you select IP Version of Both)
REQUIRED?: Yes, if you select IP Version of Both
DETAILS: Same restrictions as IP address assignment above.
WARNING
Remove the address from any applicable IP configurations (see Delete section) to change assignment for a public IP from
static to dynamic. When you change the assignment method from static to dynamic, you lose the IP address that was
assigned to the public IP resource. While the Azure public DNS servers maintain a mapping between static or dynamic
addresses and any DNS name label (if you defined one), a dynamic IP address can change when the virtual machine is
started after being in the stopped (deallocated) state. To prevent the address from changing, assign a static IP address.
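As a sketch, assuming a Basic SKU public IP named myPublicIP that isn't associated with any IP configuration, the assignment method can then be switched with the Azure CLI:
# Change an unassociated Basic SKU public IP from static to dynamic allocation
az network public-ip update \
  --resource-group myResourceGroup \
  --name myPublicIP \
  --allocation-method Dynamic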
Delete : Deletion of public IPs requires that the public IP object isn't associated to any IP configuration or
virtual machine network interface. For more information, see the following table.
RESOURCE: Load balancer frontend
AZURE PORTAL: Browse to an unused public IP address and select Associate. Pick the load balancer with the
relevant front-end IP configuration to replace the IP. The old IP can be deleted using the same method as a
virtual machine.
AZURE POWERSHELL: Use Set-AzLoadBalancerFrontendIpConfig to associate a new front-end IP config with a
public load balancer. Use Remove-AzPublicIpAddress to delete a public IP. You can also use
Remove-AzLoadBalancerFrontendIpConfig to remove a frontend IP config if there are more than one.
AZURE CLI: Use az network lb frontend-ip update to associate a new frontend IP config with a public load
balancer. Use az network public-ip delete to delete a public IP. You can also use az network lb frontend-ip delete
to remove a frontend IP config if there are more than one.
Region availability
Azure Public IP is available in all regions for both Public and US Gov clouds. Azure Public IP doesn't move or
store customer data out of the region it's deployed in.
Permissions
To manage public IP addresses, your account must be assigned to the network contributor role. A custom role is
also supported. The custom role must be assigned the appropriate actions for the Microsoft.Network/publicIPAddresses
resource type: read, write, delete, and join.
Next steps
Public IP addresses have a nominal charge. To view the pricing, read the IP address pricing page.
Create a public IP address using PowerShell or Azure CLI sample scripts, or using Azure Resource Manager
templates
Create and assign Azure Policy definitions for public IP addresses
Azure Storage redundancy
7/17/2022 • 19 minutes to read • Edit Online
Azure Storage always stores multiple copies of your data so that it's protected from planned and unplanned
events, including transient hardware failures, network or power outages, and massive natural disasters.
Redundancy ensures that your storage account meets its availability and durability targets even in the face of
failures.
When deciding which redundancy option is best for your scenario, consider the tradeoffs between lower costs
and higher availability. The factors that help determine which redundancy option you should choose include:
How your data is replicated in the primary region.
Whether your data is replicated to a second region that is geographically distant to the primary region, to
protect against regional disasters (geo-replication).
Whether your application requires read access to the replicated data in the secondary region if the primary
region becomes unavailable for any reason (geo-replication with read access).
NOTE
The features and regional availability described in this article are also available to accounts that have a hierarchical
namespace (Azure Data Lake Storage Gen2).
The services that comprise Azure Storage are managed through a common Azure resource called a storage
account. The storage account represents a shared pool of storage that can be used to deploy storage resources
such as blob containers (Blob Storage), file shares (Azure Files), tables (Table Storage), or queues (Queue
Storage). For more information about Azure Storage accounts, see Storage account overview.
The redundancy setting for a storage account is shared for all storage services exposed by that account. All
storage resources deployed in the same storage account have the same redundancy setting. You may want to
isolate different types of resources in separate storage accounts if they have different redundancy requirements.
NOTE
Microsoft recommends using ZRS in the primary region for Azure Data Lake Storage Gen2 workloads.
ZRS provides excellent performance, low latency, and resiliency for your data if it becomes temporarily
unavailable. However, ZRS by itself may not protect your data against a regional disaster where multiple zones
are permanently affected. For protection against regional disasters, Microsoft recommends using geo-zone-
redundant storage (GZRS), which uses ZRS in the primary region and also geo-replicates your data to a
secondary region.
The Archive tier for Blob Storage isn't currently supported for ZRS accounts. Unmanaged disks don't support
ZRS or GZRS.
For more information about which regions support ZRS, see Azure regions with availability zones.
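A minimal sketch of creating accounts with these redundancy settings using the Azure CLI follows; the account names and location are placeholders, and Standard_ZRS and Standard_GZRS are the corresponding SKU names:
# General-purpose v2 account using zone-redundant storage (ZRS)
az storage account create \
  --resource-group myResourceGroup \
  --name mystoragezrs001 \
  --location westeurope \
  --kind StorageV2 \
  --sku Standard_ZRS

# Geo-zone-redundant storage (GZRS) for added protection against regional disasters
az storage account create \
  --resource-group myResourceGroup \
  --name mystoragegzrs001 \
  --location westeurope \
  --kind StorageV2 \
  --sku Standard_GZRS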
Standard storage accounts
ZRS is supported for all Azure Storage services through standard general-purpose v2 storage accounts,
including:
Azure Blob storage (hot and cool block blobs, non-disk page blobs)
Azure Files (all standard tiers: transaction optimized, hot, and cool)
Azure Table storage
Azure Queue storage
ZRS for standard general-purpose v2 storage accounts is available for a subset of Azure regions:
(Africa) South Africa North
(Asia Pacific) Australia East
(Asia Pacific) Central India
(Asia Pacific) East Asia
(Asia Pacific) Japan East
(Asia Pacific) Korea Central
(Asia Pacific) South India
(Asia Pacific) Southeast Asia
(Europe) France Central
(Europe) Germany West Central
(Europe) North Europe
(Europe) Norway East
(Europe) Sweden Central
(Europe) Switzerland North
(Europe) UK South
(Europe) West Europe
(North America) Canada Central
(North America) Central US
(North America) East US
(North America) East US 2
(North America) South Central US
(North America) US Gov Virginia
(North America) West US 2
(North America) West US 3
(South America) Brazil South
Premium block blob accounts
ZRS is supported for premium block blob accounts. For more information about premium block blobs, see
Premium block blob storage accounts.
Premium block blobs are available in a subset of Azure regions:
(Asia Pacific) Australia East
(Asia Pacific) East Asia
(Asia Pacific) Japan East
(Asia Pacific) Southeast Asia
(Europe) France Central
(Europe) North Europe
(Europe) West Europe
(Europe) UK South
(North America) East US
(North America) East US 2
(North America) West US 2
(North America) South Central US
(South America) Brazil South
Premium file share accounts
ZRS is supported for premium file shares (Azure Files) through the FileStorage storage account kind.
ZRS for premium file shares is available for a subset of Azure regions:
(Asia Pacific) Australia East
(Asia Pacific) Japan East
(Asia Pacific) Southeast Asia
(Europe) France Central
(Europe) North Europe
(Europe) West Europe
(Europe) UK South
(North America) East US
(North America) East US 2
(North America) West US 2
(North America) South Central US
(South America) Brazil South
NOTE
The primary difference between GRS and GZRS is how data is replicated in the primary region. Within the secondary
region, data is always replicated synchronously three times using LRS. LRS in the secondary region protects your data
against hardware failures.
With GRS or GZRS, the data in the secondary region isn't available for read or write access unless there's a
failover to the secondary region. For read access to the secondary region, configure your storage account to use
read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS). For more
information, see Read access to data in the secondary region.
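For example, a minimal Azure PowerShell sketch (placeholder names; assumes the account already uses GZRS) that enables read access to the secondary region by changing the account's SKU:
# Switch a GZRS account to RA-GZRS so the secondary endpoint can serve read requests.
Set-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystoragegzrs" -SkuName Standard_RAGZRS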
If the primary region becomes unavailable, you can choose to fail over to the secondary region. After the
failover has completed, the secondary region becomes the primary region, and you can again read and write
data. For more information on disaster recovery and to learn how to fail over to the secondary region, see
Disaster recovery and storage account failover.
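A customer-managed failover can be initiated from Azure PowerShell; a minimal sketch with placeholder names follows. After the failover completes, the account is configured as LRS in the new primary region until you re-enable geo-redundancy.
# Fail the storage account over to its secondary region (prompts for confirmation).
Invoke-AzStorageAccountFailover -ResourceGroupName "myResourceGroup" -Name "mystoragegzrs"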
IMPORTANT
Because data is replicated to the secondary region asynchronously, a failure that affects the primary region may result in
data loss if the primary region cannot be recovered. The interval between the most recent writes to the primary region
and the last write to the secondary region is known as the recovery point objective (RPO). The RPO indicates the point in
time to which data can be recovered. The Azure Storage platform typically has an RPO of less than 15 minutes, although
there's currently no SLA on how long it takes to replicate data to the secondary region.
Because data is replicated asynchronously from the primary to the secondary region, the secondary region is
typically behind the primary region in terms of write operations. If a disaster were to strike the primary region,
it's likely that some data would be lost. For more information about how to plan for potential data loss, see
Anticipate data loss.
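To estimate potential data loss before failing over, you can check the account's Last Sync Time. A brief PowerShell sketch (placeholder names):
# Retrieve geo-replication statistics for the account.
$account = Get-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystoragegzrs" -IncludeGeoReplicationStats
# Writes made after this timestamp may not yet have reached the secondary region.
$account.GeoReplicationStats.LastSyncTime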
NOTE
Azure Files does not support read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage
(RA-GZRS).
Availability for read requests
LRS: at least 99.9% (99% for Cool or Archive access tiers)
ZRS: at least 99.9% (99% for Cool or Archive access tiers)
GRS/RA-GRS: at least 99.9% (99% for Cool or Archive access tiers) for GRS
GZRS/RA-GZRS: at least 99.9% (99% for Cool or Archive access tiers) for GZRS
Availability for write requests
LRS: at least 99.9% (99% for Cool or Archive access tiers)
ZRS: at least 99.9% (99% for Cool or Archive access tiers)
GRS/RA-GRS: at least 99.9% (99% for Cool or Archive access tiers) 1
GZRS/RA-GZRS: at least 99.9% (99% for Cool or Archive access tiers) 1
Number of copies of data maintained on separate nodes
LRS: three copies within a single region
ZRS: three copies across separate availability zones within a single region
GRS/RA-GRS: six copies total, including three in the primary region and three in the secondary region
GZRS/RA-GZRS: six copies total, including three across separate availability zones in the primary region and three locally redundant copies in the secondary region
1 Account failover is required to restore write availability if the primary region becomes unavailable. For more
information, see Disaster recovery and storage account failover.
Supported Azure Storage services
The following table shows which redundancy options are supported by each Azure Storage service.
LRS: Blob storage (including Data Lake Storage), Queue storage, Table storage, Azure Files 1,2, Azure managed disks, Page blobs
ZRS: Blob storage (including Data Lake Storage), Queue storage, Table storage, Azure Files 1,2, Azure managed disks 3
GRS: Blob storage (including Data Lake Storage), Queue storage, Table storage, Azure Files 1
RA-GRS: Blob storage (including Data Lake Storage), Queue storage, Table storage
GZRS: Blob storage (including Data Lake Storage), Queue storage, Table storage, Azure Files 1
RA-GZRS: Blob storage (including Data Lake Storage), Queue storage, Table storage
1 Standard file shares are supported on LRS and ZRS. Standard file shares are supported on GRS and GZRS as
long as they're less than or equal to 5 TiB in size.
2 Premium file shares are supported on LRS and ZRS.
3 ZRS managed disks have certain limitations. See the Limitations section of the redundancy options for
managed disks.
[Table: storage account types and the redundancy options (LRS, ZRS, GRS/RA-GRS, GZRS/RA-GZRS) that each supports.]
1 Accounts of this type with a hierarchical namespace enabled also support the specified redundancy option.
All data for all storage accounts is copied according to the redundancy option for the storage account. Objects
including block blobs, append blobs, page blobs, queues, tables, and files are copied.
Data in all tiers, including the Archive tier, is copied. For more information about blob tiers, see Hot, Cool, and
Archive access tiers for blob data.
For pricing information for each redundancy option, see Azure Storage pricing.
NOTE
Azure Premium Disk Storage currently supports only locally redundant storage (LRS). Block blob storage accounts support
locally redundant storage (LRS) and zone redundant storage (ZRS) in certain regions.
NOTE
Customer-managed account failover is not yet supported in accounts that have a hierarchical namespace (Azure Data
Lake Storage Gen2). To learn more, see Blob storage features available in Azure Data Lake Storage Gen2.
In the event of a disaster that affects the primary region, Microsoft will manage the failover for accounts with a
hierarchical namespace. For more information, see Microsoft-managed failover.
Data integrity
Azure Storage regularly verifies the integrity of data stored using cyclic redundancy checks (CRCs). If data
corruption is detected, it's repaired using redundant data. Azure Storage also calculates checksums on all
network traffic to detect corruption of data packets when storing or retrieving data.
See also
Change the redundancy option for a storage account
Geo replication (GRS/GZRS/RA-GRS/RA-GZRS)
Check the Last Sync Time property for a storage account
Disaster recovery and storage account failover
Pricing
Blob Storage
Azure Files
Table Storage
Queue Storage
Azure Event Hubs - Geo-disaster recovery
7/17/2022 • 12 minutes to read • Edit Online
Resilience against disastrous outages of data processing resources is a requirement for many enterprises and in
some cases even required by industry regulations.
Azure Event Hubs already spreads the risk of catastrophic failures of individual machines, or even complete racks,
across clusters that span multiple failure domains within a datacenter, and it implements transparent failure
detection and failover mechanisms so that the service continues to operate within the assured service levels,
typically without noticeable interruptions, when such failures occur. If an Event Hubs namespace has been created
with the availability zones option enabled, the outage risk is further spread across three physically separated
facilities, and the service has enough capacity reserves to instantly cope with the complete, catastrophic loss of an
entire facility.
The all-active Azure Event Hubs cluster model with availability zone support provides resiliency against grave
hardware failures and even catastrophic loss of entire datacenter facilities. Still, there might be grave situations
with widespread physical destruction that even those measures cannot sufficiently defend against.
The Event Hubs Geo-disaster recovery feature is designed to make it easier to recover from a disaster of this
magnitude and abandon a failed Azure region for good and without having to change your application
configurations. Abandoning an Azure region will typically involve several services and this feature primarily
aims at helping to preserve the integrity of the composite application configuration.
The Geo-Disaster recovery feature ensures that the entire configuration of a namespace (Event Hubs, Consumer
Groups and settings) is continuously replicated from a primary namespace to a secondary namespace when
paired, and it allows you to initiate a once-only failover move from the primary to the secondary at any time.
The failover move will re-point the chosen alias name for the namespace to the secondary namespace and then
break the pairing. The failover is nearly instantaneous once initiated.
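As an illustrative sketch only (the Az.EventHub PowerShell module is assumed, all names are placeholders, and parameter names can vary slightly between module versions), pairing two namespaces under an alias looks roughly like this:
# Resolve the secondary namespace so its resource ID can be used as the partner.
$secondary = Get-AzEventHubNamespace -ResourceGroupName "rg-secondary" -Name "contoso-eh-secondary"
# Pair the primary namespace with the secondary under the alias "contoso-eh-alias".
New-AzEventHubGeoDRConfiguration -ResourceGroupName "rg-primary" -NamespaceName "contoso-eh-primary" -Name "contoso-eh-alias" -PartnerNamespace $secondary.Id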
IMPORTANT
The feature enables instantaneous continuity of operations with the same configuration, but doesn't replicate the
event data. Unless the disaster caused the loss of all zones, the event data that is preserved in the primary event
hub remains recoverable after failover, and the historic events can be obtained from there once access is restored. For
replicating event data and operating corresponding namespaces in active/active configurations to cope with outages
and disasters, don't lean on this Geo-disaster recovery feature set; instead, follow the replication guidance.
Azure Active Directory (Azure AD) role-based access control (RBAC) assignments to entities in the primary namespace
aren't replicated to the secondary namespace. Create role assignments manually in the secondary namespace to
secure access to them.
[Table: namespace tiers supported for Geo-disaster recovery pairing, including Premium with Premium and Dedicated with Dedicated.]
NOTE
You can't pair namespaces that are in the same dedicated cluster. You can pair namespaces that are in separate clusters.
WARNING
Failing over will activate the secondary namespace and remove the primary namespace from the Geo-
Disaster Recovery pairing. Create another namespace to have a new geo-disaster recovery pair.
Finally, you should add some monitoring to detect if a failover is necessary. In most cases, the service is one part
of a large ecosystem, thus automatic failovers are rarely possible, as often failovers must be performed in sync
with the remaining subsystem or infrastructure.
Example
In one example of this scenario, consider a Point of Sale (POS) solution that emits either messages or events.
Event Hubs passes those events to some mapping or reformatting solution, which then forwards mapped data
to another system for further processing. At that point, all of these systems might be hosted in the same Azure
region. The decision of when and what part to fail over depends on the flow of data in your infrastructure.
You can automate failover either with monitoring systems, or with custom-built monitoring solutions. However,
such automation takes extra planning and work, which is out of the scope of this article.
Failover flow
If you initiate the failover, two steps are required:
1. If another outage occurs, you want to be able to fail over again. Therefore, set up another passive
namespace and update the pairing.
2. Pull messages from the former primary namespace once it's available again. After that, use that
namespace for regular messaging outside of your geo-recovery setup, or delete the old primary
namespace.
NOTE
Only fail-forward semantics are supported. In this scenario, you fail over and then re-pair with a new namespace. Failing
back to the original primary, as you would in a SQL Server failover cluster, isn't supported.
Manual failover
This section shows how to manually fail over using Azure portal, CLI, PowerShell, C#, etc.
Azure portal
Azure CLI
Azure PowerShell
C#
WARNING
Failing over will activate the secondary namespace and remove the primary namespace from the Geo-Disaster
Recovery pairing. Create another namespace to have a new geo-disaster recovery pair.
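A minimal PowerShell sketch of the failover step (placeholder names; assumes the Az.EventHub module, and parameter names can vary slightly between module versions):
# Initiate failover; the command is issued against the secondary namespace.
Set-AzEventHubGeoDRConfigurationFailOver -ResourceGroupName "rg-secondary" -NamespaceName "contoso-eh-secondary" -Name "contoso-eh-alias"
# After failover the pairing is broken; pair the promoted namespace with a new passive namespace to stay protected.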
Management
If you made a mistake (for example, you paired the wrong regions during the initial setup), you can break the
pairing of the two namespaces at any time. If you want to use the paired namespaces as regular namespaces,
delete the alias.
Considerations
Note the following considerations to keep in mind:
1. By design, Event Hubs geo-disaster recovery does not replicate data, and therefore you cannot reuse the
old offset value of your primary event hub on your secondary event hub. We recommend restarting your
event receiver with one of the following methods:
EventPosition.FromStart() - If you wish to read all data on your secondary event hub.
EventPosition.FromEnd() - If you wish to read all new data from the time of connection to your
secondary event hub.
EventPosition.FromEnqueuedTime(dateTime) - If you wish to read all data received in your secondary
event hub starting from a given date and time.
2. In your failover planning, you should also consider the time factor. For example, if you lose connectivity
for longer than 15 to 20 minutes, you might decide to initiate the failover.
3. The fact that no data is replicated means that currently active sessions aren't replicated. Additionally,
duplicate detection and scheduled messages may not work. New sessions, scheduled messages, and new
duplicates will work.
4. Failing over a complex distributed infrastructure should be rehearsed at least once.
5. Synchronizing entities can take some time, approximately 50-100 entities per minute.
Availability Zones
Event Hubs supports Availability Zones, providing fault-isolated locations within an Azure region. The
Availability Zones support is only available in Azure regions with availability zones. Both metadata and data
(events) are replicated across data centers in the availability zone.
When creating a namespace, you see the following highlighted message when you select a region that has
availability zones.
NOTE
When you use the Azure portal, zone redundancy via support for availability zones is automatically enabled. You can't
disable it in the portal. You can use the Azure CLI command az eventhubs namespace create with --zone-redundant false,
or the PowerShell command New-AzEventHubNamespace with -ZoneRedundant:$false, to create a namespace with
zone redundancy disabled.
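For instance, a hedged PowerShell sketch of creating a namespace with zone redundancy explicitly enabled (placeholder names; the -ZoneRedundant switch is assumed to be available in your Az.EventHub version):
# Create a Standard namespace whose metadata and events are replicated across availability zones.
New-AzEventHubNamespace -ResourceGroupName "rg-primary" -Name "contoso-eh-zr" -Location "eastus" -SkuName Standard -ZoneRedundant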
Private endpoints
This section provides more considerations when using Geo-disaster recovery with namespaces that use private
endpoints. To learn about using private endpoints with Event Hubs in general, see Configure private endpoints.
New pairings
If you try to create a pairing between a primary namespace with a private endpoint and a secondary namespace
without a private endpoint, the pairing will fail. The pairing will succeed only if both primary and secondary
namespaces have private endpoints. We recommend that you use same configurations on the primary and
secondary namespaces and on virtual networks in which private endpoints are created.
NOTE
When you try to pair a primary namespace that has a private endpoint with a secondary namespace, the validation process
only checks whether a private endpoint exists on the secondary namespace. It doesn't check whether the endpoint works
or will work after failover. It's your responsibility to ensure that the secondary namespace with private endpoint will work
as expected after failover.
To test that the private endpoint configurations are same on primary and secondary namespaces, send a read request (for
example: Get Event Hub) to the secondary namespace from outside the virtual network, and verify that you receive an
error message from the service.
Existing pairings
If pairing between primary and secondary namespace already exists, private endpoint creation on the primary
namespace will fail. To resolve, create a private endpoint on the secondary namespace first and then create one
for the primary namespace.
NOTE
Although access to the secondary namespace is otherwise read-only, updates to the private endpoint configurations
are permitted.
Recommended configuration
When creating a disaster recovery configuration for your application and Event Hubs namespaces, you must
create private endpoints for both primary and secondary Event Hubs namespaces against virtual networks
hosting both primary and secondary instances of your application.
Let's say you have two virtual networks: VNET-1, VNET-2 and these primary and secondary namespaces:
EventHubs-Namespace1-Primary, EventHubs-Namespace2-Secondary. You need to do the following steps:
On EventHubs-Namespace1-Primary, create two private endpoints that use subnets from VNET-1 and VNET-2
On EventHubs-Namespace2-Secondary, create two private endpoints that use the same subnets from VNET-1 and VNET-2
The advantage of this approach is that failover can happen at the application layer independently of the Event Hubs
namespace. Consider the following scenarios:
Application-only failover: Here, the application won't exist in VNET-1 but will move to VNET-2. As both
private endpoints are configured on both VNET-1 and VNET-2 for both primary and secondary namespaces, the
application will just work.
Event Hubs namespace-only failover: Here again, since both private endpoints are configured on both
virtual networks for both primary and secondary namespaces, the application will just work.
NOTE
For guidance on geo-disaster recovery of a virtual network, see Virtual Network - Business Continuity.
Next steps
Review the following samples or reference documentation.
.NET GeoDR sample
Java GeoDR sample
.NET - Azure.Messaging.EventHubs samples
.NET - Microsoft.Azure.EventHubs samples
Java - azure-messaging-eventhubs samples
Java - azure-eventhubs samples
Python samples
JavaScript samples
TypeScript samples
REST API reference
Azure Service Bus Geo-disaster recovery
7/17/2022 • 13 minutes to read • Edit Online
Resilience against disastrous outages of data processing resources is a requirement for many enterprises and in
some cases even required by industry regulations.
Azure Service Bus already spreads the risk of catastrophic failures of individual machines, or even complete
racks, across clusters that span multiple failure domains within a datacenter, and it implements transparent
failure detection and failover mechanisms so that the service continues to operate within the assured
service levels, typically without noticeable interruptions, when such failures occur. If a Service Bus
namespace has been created with the availability zones option enabled, the outage risk is further spread
across three physically separated facilities, and the service has enough capacity reserves to instantly cope with
the complete, catastrophic loss of an entire facility.
The all-active Azure Service Bus cluster model with availability zone support is superior to any on-premises
message broker product in terms of resiliency against grave hardware failures and even catastrophic loss of
entire datacenter facilities. Still, there might be grave situations with widespread physical destruction that even
those measures can't sufficiently defend against.
The Service Bus Geo-disaster recovery feature is designed to make it easier to recover from a disaster of this
magnitude and abandon a failed Azure region for good and without having to change your application
configurations. Abandoning an Azure region will typically involve several services and this feature primarily
aims at helping to preserve the integrity of the composite application configuration. The feature is globally
available for the Service Bus Premium SKU.
The Geo-Disaster recovery feature ensures that the entire configuration of a namespace (Queues, Topics,
Subscriptions, Filters) is continuously replicated from a primary namespace to a secondary namespace when
paired, and it allows you to initiate a once-only failover move from the primary to the secondary at any time.
The failover move will repoint the chosen alias name for the namespace to the secondary namespace and then
break the pairing. The failover is nearly instantaneous once initiated.
TIP
For replicating the contents of queues and topic subscriptions and operating corresponding namespaces in active/active
configurations to cope with outages and disasters, don't lean on this Geo-disaster recovery feature set, but follow the
replication guidance.
Setup
The following section is an overview to set up pairing between the namespaces.
You first create or use an existing primary namespace, and a new secondary namespace, then pair the two. This
pairing gives you an alias that you can use to connect. Because you use an alias, you don't have to change
connection strings. Only new namespaces can be added to your failover pairing.
1. Create the primary namespace.
2. Create the secondary namespace in a different region. This step is optional. You can create the secondary
namespace while creating the pairing in the next step.
3. In the Azure portal, navigate to your primary namespace.
4. Select Geo-recovery on the left menu, and select Initiate pairing on the toolbar.
5. On the Initiate pairing page, follow these steps:
a. Select an existing secondary namespace or create one in a different region. In this example, an
existing namespace is used as the secondary namespace.
b. For Alias, enter an alias for the geo-dr pairing.
c. Then, select Create.
6. You should see the Service Bus Geo-DR Alias page as shown in the following image. You can also
navigate to the Geo-DR Alias page from the primary namespace page by selecting Geo-recovery
on the left menu.
7. On the Geo-DR Alias page, select Shared access policies on the left menu to access the primary
connection string for the alias. Use this connection string instead of using the connection string to the
primary/secondary namespace directly. Initially, the alias points to the primary namespace.
8. Switch to the Overview page. You can do the following actions:
a. Break the pairing between primary and secondary namespaces. Select Break pairing on the toolbar.
b. Manually fail over to the secondary namespace.
a. Select Failover on the toolbar.
b. Confirm that you want to fail over to the secondary namespace by typing in your alias.
c. Turn ON the Safe Failover option to safely fail over to the secondary namespace. This
feature makes sure that pending Geo-DR replications are completed before switching over
to the secondary.
d. Then, select Failover.
IMPORTANT
Failing over will activate the secondary namespace and remove the primary namespace from the
Geo-Disaster Recovery pairing. Create another namespace to have a new geo-disaster recovery
pair.
9. Finally, you should add some monitoring to detect if a failover is necessary. In most cases, the service is
one part of a large ecosystem, thus automatic failovers are rarely possible, as often failovers must be
performed in sync with the remaining subsystem or infrastructure.
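The same pairing can be scripted. The following is a rough sketch with the Az.ServiceBus module (all names are placeholders, and parameter names can vary slightly between module versions):
# Resolve the secondary Premium namespace to get its resource ID.
$secondary = Get-AzServiceBusNamespace -ResourceGroupName "rg-secondary" -Name "contoso-sb-secondary"
# Pair the primary namespace with the secondary under an alias; clients connect through the alias.
New-AzServiceBusGeoDRConfiguration -ResourceGroupName "rg-primary" -NamespaceName "contoso-sb-primary" -Name "contoso-sb-alias" -PartnerNamespace $secondary.Id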
Service Bus standard to premium
If you have migrated your Azure Service Bus Standard namespace to Azure Service Bus Premium, then you
must use the pre-existing alias (that is, your Service Bus Standard namespace connection string) to create the
disaster recovery configuration through PowerShell, the CLI, or the REST API.
This is because, during migration, your Azure Service Bus Standard namespace connection string/DNS name itself
becomes an alias to your Azure Service Bus Premium namespace.
Your client applications must utilize this alias (that is, the Azure Service Bus Standard namespace connection
string) to connect to the Premium namespace where the disaster recovery pairing has been set up.
If you use the Portal to set up the Disaster recovery configuration, then the portal will abstract this caveat from
you.
Failover flow
A failover is triggered manually by the customer (either explicitly through a command, or through client-owned
business logic that triggers the command) and never by Azure. It gives the customer full ownership and visibility
into outage resolution on Azure's backbone.
After the failover is triggered -
1. The alias connection string is updated to point to the Secondary Premium namespace.
2. Clients (senders and receivers) automatically connect to the Secondary namespace.
3. The existing pairing between Primary and Secondary premium namespace is broken.
Once the failover is initiated -
1. If another outage occurs, you want to be able to fail over again. So, set up another passive namespace
and update the pairing.
2. Pull messages from the former primary namespace once it's available again. After that, use that
namespace for regular messaging outside of your geo-recovery setup, or delete the old primary
namespace.
NOTE
Only fail-forward semantics are supported. In this scenario, you fail over and then re-pair with a new namespace. Failing
back to the original primary, as you would in a SQL Server failover cluster, isn't supported.
You can automate failover either with monitoring systems, or with custom-built monitoring solutions. However,
such automation takes extra planning and work, which is out of the scope of this article.
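A scripted failover follows the same pattern; a minimal, hedged PowerShell sketch (placeholder names) run against the secondary namespace:
# Promote the secondary namespace; the alias is re-pointed and the pairing is broken.
Set-AzServiceBusGeoDRConfigurationFailOver -ResourceGroupName "rg-secondary" -NamespaceName "contoso-sb-secondary" -Name "contoso-sb-alias"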
Management
If you made a mistake (for example, you paired the wrong regions during the initial setup), you can break the
pairing of the two namespaces at any time. If you want to use the paired namespaces as regular namespaces,
delete the alias.
Samples
The samples on GitHub show how to set up and initiate a failover. These samples demonstrate the following
concepts:
A .NET sample and settings that are required in Azure Active Directory to use Azure Resource Manager with
Service Bus, to set up, and enable Geo-disaster recovery.
Steps required to execute the sample code.
How to use an existing namespace as an alias.
Steps to alternatively enable Geo-disaster recovery via PowerShell or CLI.
Send and receive from the current primary or secondary namespace using the alias.
Considerations
Note the following considerations to keep in mind with this release:
1. In your failover planning, you should also consider the time factor. For example, if you lose connectivity
for longer than 15 to 20 minutes, you might decide to initiate the failover.
2. The fact that no data is replicated means that currently active sessions aren't replicated. Additionally,
duplicate detection and scheduled messages may not work. New sessions, new scheduled messages, and
new duplicates will work.
3. Failing over a complex distributed infrastructure should be rehearsed at least once.
4. Synchronizing entities can take some time, approximately 50-100 entities per minute. Subscriptions and
rules also count as entities.
Availability Zones
The Service Bus Premium SKU supports Availability Zones, providing fault-isolated locations within the same
Azure region. Service Bus manages three copies of the messaging store (one primary and two secondary). Service Bus
keeps all three copies in sync for data and management operations. If the primary copy fails, one of the
secondary copies is promoted to primary with no perceived downtime. If applications see transient
disconnects from Service Bus, the retry logic in the SDK will automatically reconnect to Service Bus.
When you use availability zones, both metadata and data (messages) are replicated across data centers in the
availability zone.
NOTE
The Availability Zones support for Azure Service Bus Premium is only available in Azure regions where availability zones
are present.
You can enable Availability Zones on new namespaces only, using the Azure portal. Service Bus does not
support migration of existing namespaces. You cannot disable zone redundancy after enabling it on your
namespace.
Private endpoints
This section provides more considerations when using Geo-disaster recovery with namespaces that use private
endpoints. To learn about using private endpoints with Service Bus in general, see Integrate Azure Service Bus
with Azure Private Link.
New pairings
If you try to create a pairing between a primary namespace with a private endpoint and a secondary namespace
without a private endpoint, the pairing will fail. The pairing will succeed only if both primary and secondary
namespaces have private endpoints. We recommend that you use same configurations on the primary and
secondary namespaces and on virtual networks in which private endpoints are created.
NOTE
When you try to pair the primary namespace with a private endpoint and the secondary namespace, the validation
process only checks whether a private endpoint exists on the secondary namespace. It doesn't check whether the
endpoint works or will work after failover. It's your responsibility to ensure that the secondary namespace with private
endpoint will work as expected after failover.
To test that the private endpoint configurations are same, send a Get queues request to the secondary namespace from
outside the virtual network, and verify that you receive an error message from the service.
Existing pairings
If pairing between primary and secondary namespace already exists, private endpoint creation on the primary
namespace will fail. To resolve, create a private endpoint on the secondary namespace first and then create one
for the primary namespace.
NOTE
Although access to the secondary namespace is otherwise read-only, updates to the private endpoint configurations
are permitted.
Recommended configuration
When creating a disaster recovery configuration for your application and Service Bus, you must create private
endpoints for both primary and secondary Service Bus namespaces against virtual networks hosting both
primary and secondary instances of your application.
Let's say you have two virtual networks: VNET-1 and VNET-2, and these primary and secondary namespaces:
ServiceBus-Namespace1-Primary, ServiceBus-Namespace2-Secondary. You need to do the following steps:
On ServiceBus-Namespace1-Primary, create two private endpoints that use subnets from VNET-1 and VNET-2
On ServiceBus-Namespace2-Secondary, create two private endpoints that use the same subnets from VNET-1 and VNET-2
The advantage of this approach is that failover can happen at the application layer independently of the Service Bus
namespace. Consider the following scenarios:
Application-only failover: Here, the application won't exist in VNET-1 but will move to VNET-2. As both
private endpoints are configured on both VNET-1 and VNET-2 for both primary and secondary namespaces, the
application will just work.
Service Bus namespace-only failover: Here again, since both private endpoints are configured on both
virtual networks for both primary and secondary namespaces, the application will just work.
NOTE
For guidance on geo-disaster recovery of a virtual network, see Virtual Network - Business Continuity.
Next steps
See the Geo-disaster recovery REST API reference here.
Run the Geo-disaster recovery sample on GitHub.
See the Geo-disaster recovery sample that sends messages to an alias.
To learn more about Service Bus messaging, see the following articles:
Service Bus queues, topics, and subscriptions
Get started with Service Bus queues
How to use Service Bus topics and subscriptions
REST API
Create a zone-redundant virtual network gateway
in Azure Availability Zones
7/17/2022 • 4 minutes to read • Edit Online
You can deploy VPN and ExpressRoute gateways in Azure Availability Zones. This brings resiliency, scalability,
and higher availability to virtual network gateways. Deploying gateways in Azure Availability Zones physically
and logically separates gateways within a region, while protecting your on-premises network connectivity to
Azure from zone-level failures. For information, see About zone-redundant virtual network gateways and About
Azure Availability Zones.
$RG1 = "TestRG1"
$VNet1 = "VNet1"
$Location1 = "CentralUS"
$FESubnet1 = "FrontEnd"
$BESubnet1 = "Backend"
$GwSubnet1 = "GatewaySubnet"
$VNet1Prefix = "10.1.0.0/16"
$FEPrefix1 = "10.1.0.0/24"
$BEPrefix1 = "10.1.1.0/24"
$GwPrefix1 = "10.1.255.0/27"
$Gw1 = "VNet1GW"
$GwIP1 = "VNet1GWIP"
$GwIPConf1 = "gwipconf1"
# Retrieve the existing virtual network, add the gateway subnet, and save the change.
$getvnet = Get-AzVirtualNetwork -ResourceGroupName $RG1 -Name $VNet1
Add-AzVirtualNetworkSubnetConfig -Name $GwSubnet1 -AddressPrefix $GwPrefix1 -VirtualNetwork $getvnet
$getvnet | Set-AzVirtualNetwork
# Request a Standard SKU public IP address for a zone-redundant gateway (no zone specified).
$pip1 = New-AzPublicIpAddress -ResourceGroupName $RG1 -Location $Location1 -Name $GwIP1 -AllocationMethod Static -Sku Standard
# For a zonal gateway, request the public IP address in a specific zone instead.
$pip1 = New-AzPublicIpAddress -ResourceGroupName $RG1 -Location $Location1 -Name $GwIP1 -AllocationMethod Static -Sku Standard -Zone 1
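The snippet above stops short of creating the gateway itself. A hedged continuation, reusing the variables already defined (VpnGw2AZ is chosen here purely as an example of an availability zone gateway SKU), might look like this:
# Build the gateway IP configuration from the gateway subnet and the public IP address.
$subnet = Get-AzVirtualNetworkSubnetConfig -Name $GwSubnet1 -VirtualNetwork $getvnet
$ipconf = New-AzVirtualNetworkGatewayIpConfig -Name $GwIPConf1 -Subnet $subnet -PublicIpAddress $pip1
# Create the VPN gateway with an availability zone SKU; this step can take 30 to 45 minutes.
New-AzVirtualNetworkGateway -ResourceGroupName $RG1 -Location $Location1 -Name $Gw1 -IpConfigurations $ipconf -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw2AZ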
FAQ
What will change when I deploy these new SKUs?
From your perspective, you can deploy your gateways with zone-redundancy. This means that all instances of
the gateways will be deployed across Azure Availability Zones, and each Availability Zone is a different fault and
update domain. This makes your gateways more reliable, available, and resilient to zone failures.
Can I use the Azure portal?
Yes, you can use the Azure portal to deploy the new SKUs. However, you will see these new SKUs only in those
Azure regions that have Azure Availability Zones.
What regions are available for me to use the new SKUs?
See Availability Zones for the latest list of available regions.
Can I change/migrate/upgrade my existing virtual network gateways to zone-redundant or zonal gateways?
Migrating your existing virtual network gateways to zone-redundant or zonal gateways is currently not
supported. You can, however, delete your existing gateway and re-create a zone-redundant or zonal gateway.
Can I deploy both VPN and Express Route gateways in same virtual network?
Co-existence of both VPN and Express Route gateways in the same virtual network is supported. However, you
should reserve a /27 IP address range for the gateway subnet.
Scaling Application Gateway v2 and WAF v2
7/17/2022 • 2 minutes to read • Edit Online
Next steps
Learn more about Application Gateway v2
Create an autoscaling, zone redundant application gateway with a reserved virtual IP address using Azure
PowerShell
Tutorial: Create and configure an Azure Active
Directory Domain Services managed domain
7/17/2022 • 12 minutes to read • Edit Online
Azure Active Directory Domain Services (Azure AD DS) provides managed domain services such as domain join,
group policy, LDAP, Kerberos/NTLM authentication that is fully compatible with Windows Server Active
Directory. You consume these domain services without deploying, managing, and patching domain controllers
yourself. Azure AD DS integrates with your existing Azure AD tenant. This integration lets users sign in using
their corporate credentials, and you can use existing groups and user accounts to secure access to resources.
You can create a managed domain using default configuration options for networking and synchronization, or
manually define these settings. This tutorial shows you how to use default options to create and configure an
Azure AD DS managed domain using the Azure portal.
In this tutorial, you learn how to:
Understand DNS requirements for a managed domain
Create a managed domain
Enable password hash synchronization
If you don't have an Azure subscription, create an account before you begin.
Prerequisites
To complete this tutorial, you need the following resources and privileges:
An active Azure subscription.
If you don't have an Azure subscription, create an account.
An Azure Active Directory tenant associated with your subscription, either synchronized with an on-premises
directory or a cloud-only directory.
If needed, create an Azure Active Directory tenant or associate an Azure subscription with your
account.
You need Application Administrator and Groups Administrator Azure AD roles in your tenant to enable Azure
AD DS.
You need Domain Services Contributor Azure role to create the required Azure AD DS resources.
A virtual network with DNS servers that can query necessary infrastructure such as storage. DNS servers
that can't perform general internet queries might block the ability to create a managed domain.
Although not required for Azure AD DS, it's recommended to configure self-service password reset (SSPR) for
the Azure AD tenant. Users can change their password without SSPR, but SSPR helps if they forget their
password and need to reset it.
IMPORTANT
You can't move the managed domain to a different subscription, resource group, region, virtual network, or subnet after
you create it. Take care to select the most appropriate subscription, resource group, region, virtual network, and subnet
when you deploy the managed domain.
TIP
If you create a custom domain name, take care with existing DNS namespaces. It's recommended to use a domain name
separate from any existing Azure or on-premises DNS name space.
For example, if you have an existing DNS name space of contoso.com, create a managed domain with the custom domain
name of aaddscontoso.com. If you need to use secure LDAP, you must register and own this custom domain name to
generate the required certificates.
You may need to create some additional DNS records for other services in your environment, or conditional DNS
forwarders between existing DNS name spaces in your environment. For example, if you run a webserver that hosts a site
using the root DNS name, there can be naming conflicts that require additional DNS entries.
In these tutorials and how-to articles, the custom domain of aaddscontoso.com is used as a short example. In all
commands, specify your own domain name.
TIP
Availability Zones are unique physical locations within an Azure region. Each zone is made up of one or more
datacenters equipped with independent power, cooling, and networking. To ensure resiliency, there's a minimum of
three separate zones in all enabled regions.
There's nothing for you to configure for Azure AD DS to be distributed across zones. The Azure platform
automatically handles the zone distribution of resources. For more information and to see region availability, see
What are Availability Zones in Azure?
3. The SKU determines the performance and backup frequency. You can change the SKU after the managed
domain has been created if your business demands or requirements change. For more information, see
Azure AD DS SKU concepts.
For this tutorial, select the Standard SKU.
4. A forest is a logical construct used by Active Directory Domain Services to group one or more domains.
By default, a managed domain is created as a User forest. This type of forest synchronizes all objects from
Azure AD, including any user accounts created in an on-premises AD DS environment.
A Resource forest only synchronizes users and groups created directly in Azure AD. For more information
on Resource forests, including why you may use one and how to create forest trusts with on-premises AD
DS domains, see Azure AD DS resource forests overview.
For this tutorial, choose to create a User forest.
To quickly create a managed domain, you can select Review + create to accept additional default configuration
options. The following defaults are configured when you choose this create option:
Creates a virtual network named aadds-vnet that uses the IP address range of 10.0.2.0/24.
Creates a subnet named aadds-subnet using the IP address range of 10.0.2.0/24.
Synchronizes All users from Azure AD into the managed domain.
Select Review + create to accept these default configuration options.
3. The page will load with updates on the deployment process, including the creation of new resources in
your directory.
4. Select your resource group, such as myResourceGroup, then choose your managed domain from the list
of Azure resources, such as aaddscontoso.com. The Overview tab shows that the managed domain is
currently Deploying. You can't configure the managed domain until it's fully provisioned.
5. When the managed domain is fully provisioned, the Overview tab shows the domain status as Running.
IMPORTANT
The managed domain is associated with your Azure AD tenant. During the provisioning process, Azure AD DS creates two
Enterprise Applications named Domain Controller Services and AzureActiveDirectoryDomainControllerServices in the
Azure AD tenant. These Enterprise Applications are needed to service your managed domain. Don't delete these
applications.
2. To update the DNS server settings for the virtual network, select the Configure button. The DNS settings
are automatically configured for your virtual network.
TIP
If you selected an existing virtual network in the previous steps, any VMs connected to the network only get the new
DNS settings after a restart. You can restart VMs using the Azure portal, Azure PowerShell, or the Azure CLI.
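If you prefer scripting, a brief sketch of restarting such a VM with Azure PowerShell (resource group and VM names are placeholders):
# Restart a VM so it picks up the updated virtual network DNS settings.
Restart-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM"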
The steps to generate and store these password hashes are different for cloud-only user accounts created in
Azure AD versus user accounts that are synchronized from your on-premises directory using Azure AD Connect.
A cloud-only user account is an account that was created in your Azure AD directory using either the Azure
portal or Azure AD PowerShell cmdlets. These user accounts aren't synchronized from an on-premises directory.
In this tutorial, let's work with a basic cloud-only user account. For more information on the additional steps
required to use Azure AD Connect, see Synchronize password hashes for user accounts synced from your
on-premises AD to your managed domain.
TIP
If your Azure AD tenant has a combination of cloud-only users and users from your on-premises AD, you need to
complete both sets of steps.
For cloud-only user accounts, users must change their passwords before they can use Azure AD DS. This
password change process causes the password hashes for Kerberos and NTLM authentication to be generated
and stored in Azure AD. The account isn't synchronized from Azure AD to Azure AD DS until the password is
changed. Either expire the passwords for all cloud users in the tenant who need to use Azure AD DS, which
forces a password change on next sign-in, or instruct cloud users to manually change their passwords. For this
tutorial, let's manually change a user password.
Before a user can reset their password, the Azure AD tenant must be configured for self-service password reset.
To change the password for a cloud-only user, the user must complete the following steps:
1. Go to the Azure AD Access Panel page at https://myapps.microsoft.com.
2. In the top-right corner, select your name, then choose Profile from the drop-down menu.
3. On the Profile page, select Change password .
4. On the Change password page, enter your existing (old) password, then enter and confirm a new
password.
5. Select Submit .
It takes a few minutes after you've changed your password for the new password to be usable in Azure AD DS
and to successfully sign in to computers joined to the managed domain.
Next steps
In this tutorial, you learned how to:
Understand DNS requirements for a managed domain
Create a managed domain
Add administrative users to domain management
Enable user accounts for Azure AD DS and generate password hashes
Before you domain-join VMs and deploy applications that use the managed domain, configure an Azure virtual
network for application workloads.
Configure Azure virtual network for application workloads to use your managed domain
Business continuity management in Azure
7/17/2022 • 6 minutes to read • Edit Online
Azure maintains one of the most mature and respected business continuity management programs in the
industry. The goal of business continuity in Azure is to build and advance recoverability and resiliency for all
independently recoverable services, whether a service is customer-facing (part of an Azure offering) or an
internal supporting platform service.
In understanding business continuity, it's important to note that many offerings are made up of multiple
services. At Azure, each service is statically identified through tooling and is the unit of measure used for
privacy, security, inventory, risk, business continuity management, and other functions. To properly measure
capabilities of a service, the three elements of people, process, and technology are included for each service,
whatever the service type.
For example:
If there's a business process based on people, such as a help desk or team, the service delivery is what they
do. The people use processes and technology to perform the service.
If there's technology as a service, such as Azure Virtual Machines, the service delivery is the technology along
with the people and processes that support its operation.
A good example of the shared responsibility model is the deployment of virtual machines. If a customer wants
to set up cross-region replication for resiliency if there's region failure, they must deploy a duplicate set of
virtual machines in an alternate enabled region. Azure doesn't automatically replicate these services over if
there's a failure. It's the customer's responsibility to deploy necessary assets. The customer must have a process
to manually change primary regions, or they must use a traffic manager to detect and automatically fail over.
Customer-enabled disaster recovery services all have public-facing documentation to guide you. For an example
of public-facing documentation for customer-enabled disaster recovery, see Azure Data Lake Analytics.
For more information on the shared responsibility model, see Microsoft Trust Center.
Dependencies: Each service maps the dependencies (other services) it requires to operate, no matter
how critical, and each dependency is classified as needed at runtime, needed for recovery only, or both. If
there are storage dependencies, additional data is mapped that defines what's stored and whether it
requires point-in-time snapshots, for example.
Workforce: As noted in the definition of a service, it's important to know the location and quantity of the
workforce able to support the service, ensuring there are no single points of failure and that critical
employees are dispersed to avoid failures caused by cohabitation in a single location.
External suppliers: Microsoft keeps a comprehensive list of external suppliers, and the suppliers
deemed critical are measured for capabilities. If identified by a service as a dependency, supplier
capabilities are compared to the needs of the service to ensure a third-party outage doesn't disrupt Azure
services.
Recovery rating: This rating is unique to the Azure Business Continuity Management program. This
rating measures several key elements to create a resiliency score:
Willingness to fail over: Although there can be a process, it might not be the first choice for short-term
outages.
Automation of failover.
Automation of the decision to fail over.
The most reliable and shortest time to failover is a service that's automated and requires no human
decision. An automated service uses heartbeat monitoring or synthetic transactions to determine a
service is down and to start immediate remediation.
Recovery plan and test: Azure requires every service to have a detailed recovery plan and to test that
plan as if the service has failed because of catastrophic outage. The recovery plans are required to be
written so that someone with similar skills and access can complete the tasks. A written plan avoids
relying on subject matter experts being available.
Testing is done in several ways, including self-test in a production or near-production environment, and
as part of Azure full-region down drills in canary region sets. These enabled regions are identical to
production regions but can be disabled without affecting customers. Testing is considered integrated
because all services are affected simultaneously.
Customer enablement: When the customer is responsible for setting up disaster recovery, Azure is
required to have public-facing documentation guidance. For all such services, links are provided to
documentation and details about the process.
Next steps
Regions that support availability zones in Azure
Azure Resiliency whitepaper
Quickstart templates
Cross-region replication in Azure: Business
continuity and disaster recovery
7/17/2022 • 5 minutes to read • Edit Online
Many organizations require both the high availability provided by availability zones and protection from
large-scale phenomena and regional disasters. As discussed in the resiliency overview for regions and
availability zones, Azure regions are designed to offer protection against local disasters with availability zones.
They can also provide protection from regional or large-geography disasters by using another region for
disaster recovery through cross-region replication.
Cross-region replication
To ensure customers are supported across the world, Azure maintains multiple geographies. These discrete
demarcations define a disaster recovery and data residency boundary across one or multiple Azure regions.
Cross-region replication is one of several important pillars in the Azure business continuity and disaster
recovery strategy. Cross-region replication builds on the synchronous replication of your applications and data
that exists by using availability zones within your primary Azure region for high availability. Cross-region
replication asynchronously replicates the same applications and data across other Azure regions for disaster
recovery protection.
Some Azure services take advantage of cross-region replication to ensure business continuity and protect
against data loss. Azure provides several storage solutions that make use of cross-region replication to ensure
data availability. For example, Azure geo-redundant storage (GRS) replicates data to a secondary region
automatically. This approach ensures that data is durable even if the primary region isn't recoverable.
Not all Azure services automatically replicate data or automatically fall back from a failed region to cross-
replicate to another enabled region. In these scenarios, recovery and replication must be configured by the
customer. These examples are illustrations of the shared responsibility model. It's a fundamental pillar in your
disaster recovery strategy. For more information about the shared responsibility model and to learn about
business continuity and disaster recovery in Azure, see Business continuity management in Azure.
Shared responsibility becomes the crux of your strategic decision-making when it comes to disaster recovery.
Azure doesn't require you to use cross-region replication, and you can use services to build resiliency without
cross-replicating to another enabled region. But we strongly recommend that you configure your essential
services across regions to benefit from isolation and improve availability.
For applications that support multiple active regions, we recommend that you use available multiple enabled
regions. This practice ensures optimal availability for applications and minimized recovery time if an event
affects availability. Whenever possible, design your application for maximum resiliency and ease of disaster
recovery.
[Table: cross-region replication pairings by geography; for example, in the UK geography, UK West pairs with UK South.]
(*) Certain regions are access restricted to support specific customer scenarios, such as in-country disaster
recovery. These regions are available only upon request by creating a new support request in the Azure portal.
IMPORTANT
West India is paired in one direction only. West India's secondary region is South India, but South India's secondary
region is Central India.
Brazil South is unique because it's paired with a region outside of its geography. Brazil South's secondary region is
South Central US. The secondary region of South Central US isn't Brazil South.
Next steps
Regions that support availability zones in Azure
Quickstart templates