Architecting Cloud Native .NET Apps for Azure
PUBLISHED BY
All rights reserved. No part of the contents of this book may be reproduced or transmitted in any
form or by any means without the written permission of the publisher.
This book is provided “as-is” and expresses the author’s views and opinions. The views, opinions, and
information expressed in this book, including URL and other Internet website references, may change
without notice.
Some examples depicted herein are provided for illustration only and are fictitious. No real association
or connection is intended or should be inferred.
Microsoft and the trademarks listed at https://www.microsoft.com on the “Trademarks” webpage are
trademarks of the Microsoft group of companies.
The Docker whale logo is a registered trademark of Docker, Inc. Used by permission.
All other marks and logos are property of their respective owners.
Authors:
Editors:
A secondary audience is technical decision-makers who are deciding whether to build their
applications using a cloud-native approach.
Contents
Microservices
Containers
Automation
Summary
Data
Resiliency
Azure Key Vault
References
When should you avoid using containers with Azure Functions?
When does it make sense to deploy to App Service for Containers?
How to deploy to App Service for Containers
References
Queries
Commands
Events
gRPC
gRPC usage
Summary
Distributed data
Database-per-microservice, why?
CQRS
Relational vs. NoSQL data
Challenges with detecting and responding to potential app health issues
Azure security for cloud-native apps
Versioning releases
CHAPTER 1
Introduction to cloud-native applications
Another day at the office, working on “the next big thing.”
Your cellphone rings. It’s your friendly recruiter - the one who calls you twice a day about new jobs.
But this time it’s different: Start-up, equity, and plenty of funding.
The mention of the cloud and cutting-edge technology pushes you over the edge.
Fast forward a few weeks and you’re now a new employee in a design session architecting a major
eCommerce application. You’re going to compete with the leading eCommerce sites.
If you follow the guidance from the past 15 years, you’ll most likely build the system shown in Figure 1-1.
You construct a large core application containing all of your domain logic. It includes modules such as
Identity, Catalog, Ordering, and more. The core app communicates with a large relational database.
The core exposes functionality via an HTML interface.
Not all is bad. Monoliths offer some distinct advantages. For example, they’re straightforward to…
• build
At some point, however, you begin to feel uncomfortable. You find yourself losing control of the
application. As time goes on, the feeling becomes more intense and you eventually enter a state
known as the Fear Cycle.
• The app has become so overwhelmingly complicated that no single person understands it.
• You fear making changes - each change has unintended and costly side effects.
• New features/fixes become tricky, time-consuming, and expensive to implement.
• Releases become as small as possible, yet each one requires a full deployment of the entire application.
• One unstable component can crash the entire system.
• New technologies and frameworks aren’t an option.
• It’s difficult to implement agile delivery methodologies.
• Architectural erosion sets in as the code base deteriorates with never-ending “special cases.”
• The consultants tell you to rewrite it.
Many organizations have addressed the monolithic fear cycle by adopting a cloud-native approach to
building systems. Figure 1-2 shows the same system built applying cloud-native techniques and
practices.
Note how the application is decomposed across a set of small isolated microservices. Each service is
self-contained and encapsulates its own code, data, and dependencies. Each is deployed in a software
container and managed by a container orchestrator. Instead of a large relational database, each
service owns its own datastore, the type of which varies based upon its data needs. Note how some
services depend on a relational database, while others use NoSQL databases. One service stores its state
in a distributed cache. Note how all traffic routes through an API Gateway service that is responsible
for directing traffic to the core back-end services and enforcing many cross-cutting concerns. Most
importantly, the application takes full advantage of the scalability, availability, and resiliency features
found in modern cloud platforms.
Cloud-native computing
Hmm… We just used the term, Cloud Native. Your first thought might be, “What exactly does that
mean? Another industry buzzword concocted by software vendors to market more stuff?”
Fortunately, it’s far different, and hopefully this book will help convince you.
Within a short time, cloud native has become a driving trend in the software industry. It’s a new way
to think about building large, complex systems, an approach that takes full advantage of modern
software development practices, technologies, and cloud infrastructure. The approach changes the
way you design, implement, deploy, and operationalize systems.
Unlike the continuous hype that drives our industry, cloud native is for real. Consider the Cloud Native
Computing Foundation (CNCF), a consortium of over 300 major corporations with a charter to make
cloud-native computing ubiquitous across technology and cloud stacks.
The CNCF fosters an ecosystem of open source and vendor neutrality. Following that lead, this book
presents cloud-native principles, patterns, and best practices that are technology agnostic. At the
same time, we discuss the services and infrastructure available in the Microsoft Azure cloud for
constructing cloud-native systems.
So, what exactly is Cloud Native? Sit back, relax, and let us help you explore this new world.
Cloud native is all about changing the way you think about constructing critical business systems.
Cloud-native systems are designed to embrace rapid change, large scale, and resilience.
Cloud-native technologies empower organizations to build and run scalable applications in modern,
dynamic environments such as public, private, and hybrid clouds. Containers, service meshes,
microservices, immutable infrastructure, and declarative APIs exemplify this approach.
These techniques enable loosely coupled systems that are resilient, manageable, and observable.
Combined with robust automation, they allow engineers to make high-impact changes frequently and
predictably with minimal toil.
Applications have become increasingly complex with users demanding more and more. Users expect
rapid responsiveness, innovative features, and zero downtime. Performance problems, recurring
errors, and the inability to move fast are no longer acceptable. They’ll easily move to your competitor.
Cloud native is about speed and agility. Business systems are evolving from enabling business
capabilities to being weapons of strategic transformation that accelerate business velocity and
growth. It’s imperative to get ideas to market immediately.
Here are some companies who have implemented these techniques. Think about the speed, agility,
and scalability they’ve achieved.
Company: Experience
Netflix: Has 600+ services in production. Deploys a hundred times per day.
Uber: Has 1,000+ services in production. Deploys several thousand times each week.
WeChat: Has 3,000+ services in production. Deploys 1,000 times a day.
As you can see, Netflix, Uber, and WeChat expose systems that consist of hundreds of independent
microservices. This architectural style enables them to rapidly respond to market conditions. They can
instantaneously update small areas of a live, complex application without a full redeployment.
The speed and agility of cloud native come about from a number of factors. Foremost is cloud
infrastructure. Five additional foundational pillars shown in Figure 1-3 also provide the bedrock for
cloud-native systems.
Let’s take some time to better understand the significance of each pillar.
The cloud…
Cloud-native systems take full advantage of the cloud service model.
Designed to thrive in a dynamic, virtualized cloud environment, these systems make extensive use of
Platform as a Service (PaaS) compute infrastructure and managed services. They treat the underlying
infrastructure as disposable - provisioned in minutes and resized, scaled, moved, or destroyed on
demand – via automation.
Consider the widely accepted DevOps concept of Pets vs. Cattle. In a traditional data center, servers
are treated as Pets: a physical machine, given a meaningful name, and cared for. You scale by adding
more resources to the same machine (scaling up). If the server becomes sick, you nurse it back to
health. Should the server become unavailable, everyone notices.
The Cattle service model is different. You provision each instance as a virtual machine or container.
They’re identical and assigned a system identifier such as Service-01, Service-02, and so on. You scale
by creating more of them (scaling out). When one becomes unavailable, nobody notices.
The cattle model embraces immutable infrastructure. Servers aren’t repaired or modified. If one fails or
requires updating, it’s destroyed and a new one is provisioned – all done via automation.
Cloud-native systems embrace the Cattle service model. They continue to run as the infrastructure
scales in or out with no regard to the machines upon which they’re running.
The Azure cloud platform supports this type of highly elastic infrastructure with automatic scaling,
self-healing, and monitoring capabilities.
While applicable to any web-based application, many practitioners consider Twelve-Factor as a solid
foundation for building cloud-native apps. Systems built upon these principles can deploy and scale
rapidly and add features to react quickly to market changes.
1. Code Base: A single code base for each microservice, stored in its own repository. Tracked with version control, it can deploy to multiple environments (QA, Staging, Production).
2. Dependencies: Each microservice isolates and packages its own dependencies, embracing changes without impacting the entire system.
3. Configurations: Configuration information is moved out of the microservice and externalized through a configuration management tool outside of the code. The same deployment can propagate across environments with the correct configuration applied.
4. Backing Services: Ancillary resources (data stores, caches, message brokers) should be exposed via an addressable URL. Doing so decouples the resource from the application, enabling it to be interchangeable.
5. Build, Release, Run: Each release must enforce a strict separation across the build, release, and run stages. Each should be tagged with a unique ID and support the ability to roll back. Modern CI/CD systems help fulfill this principle.
6. Processes: Each microservice should execute in its own process, isolated from other running services. Externalize required state to a backing service such as a distributed cache or data store.
7. Port Binding: Each microservice should be self-contained with its interfaces and functionality exposed on its own port. Doing so provides isolation from other microservices.
8. Concurrency: Services scale out across a large number of small identical processes (copies) as opposed to scaling up a single large instance on the most powerful machine available.
In the book, Beyond the Twelve-Factor App, author Kevin Hoffman details each of the original 12
factors (written in 2011). He also discusses three additional factors that reflect today’s modern
cloud application design.
We’ll refer to many of the 12+ factors in this chapter and throughout the book.
Communication
How will front-end client applications communicate with back-end core services? Will you allow
direct communication? Or, might you abstract the back-end services with a gateway façade that
provides flexibility, control, and security?
How will back-end core services communicate with each other? Will you allow direct HTTP calls that
lead to coupling and impact performance and agility? Or might you consider decoupled messaging
with queue and topic technologies?
Distributed Data
By design, each microservice encapsulates its own data, exposing operations via its public interface.
That being the case, how do you query data or implement a transaction across multiple services?
Identity
How will your service identify who is accessing it and what permissions they have?
Microservices
Cloud-native systems embrace microservices, a popular architectural style for constructing modern
applications.
Built as a distributed set of small, independent services that interact through a shared fabric,
microservices share the following characteristics:
• Each is self-contained encapsulating its own data storage technology (SQL, NoSQL) and
programming platform.
• Each runs in its own process and communicates with others using standard communication
protocols such as HTTP/HTTPS, WebSockets, or AMQP.
Figure 1-4 contrasts a monolithic application approach with a microservices approach. Note how the
monolith is composed of a layered architecture, which executes in a single process. It typically
consumes a relational database. The microservice approach, however, segregates functionality into
independent services that include logic and data. Each microservice hosts its own datastore.
Note how microservices promote the “One Codebase, One Application” principle from the Twelve-
Factor Application, discussed earlier in the chapter.
Factor #1 specifies “A single codebase for each microservice, stored in its own repository. Tracked with
version control, it can deploy to multiple environments.”
Why microservices?
Microservices provide agility.
Earlier in the chapter, we compared an eCommerce application built as a monolith to that with
microservices. In the example, we saw some clear benefits:
• Each microservice has an autonomous lifecycle and can evolve independently and deploy
frequently. You don’t have to wait for a quarterly release to deploy a new feature or update.
You can update a small area of a complex application with less risk of disrupting the entire
system.
• Each microservice can scale independently. Instead of scaling the entire application as a single
unit, you scale out only those services that require more processing power or network
bandwidth. This fine-grained approach to scaling provides for greater control of your system
and helps to reduce overall costs as you scale portions of your system, not everything.
An excellent reference guide for understanding microservices is .NET Microservices: Architecture for
Containerized .NET Applications. The book deep dives into microservices design and architecture. It’s
a companion for a full-stack microservice reference architecture available as a free download from
Microsoft.
Developing microservices
Microservices can be created with any modern development platform.
.NET is highly performant and has scored well in comparison to Node.js and other competing
platforms. TechEmpower conducts an extensive set of performance benchmarks across many web
application platforms and frameworks; in those tests, .NET scored in the top 10 - well above Node.js
and the other competing platforms.
Containers
Nowadays, it’s natural to hear the term container mentioned in any conversation concerning cloud
native. In the book, Cloud Native Patterns, author Cornelia Davis observes that, “Containers are a great
enabler of cloud-native software.” The Cloud Native Computing Foundation places microservice
containerization as the first step in their Cloud-Native Trail Map - guidance for enterprises beginning
their cloud-native journey.
Containerizing a microservice is simple and straightforward. The code, its dependencies, and runtime
are packaged into a binary called a container image. Images are stored in a container registry, which
acts as a repository or library for images. A registry can be located on your development computer, in
your data center, or in a public cloud. Docker itself maintains a public registry via Docker Hub. The
Azure cloud features a container registry to store container images close to the cloud applications
that will run them.
When needed, you transform the image into a running container instance. The instance runs on any
computer that has a container runtime engine installed. You can have as many instances of the
containerized service as needed.
Figure 1-5 shows three different microservices, each in its own container, running on a single host.
Note how well the container model embraces the “Dependencies” principle from the Twelve-Factor
Application.
Factor #2 specifies that “Each microservice isolates and packages its own dependencies, embracing
changes without impacting the entire system.”
Containers support both Linux and Windows workloads. The Azure cloud openly embraces both.
Interestingly, it’s Linux, not Windows Server, that has become the most popular operating system in
Azure.
While several container vendors exist, Docker has captured the lion’s share of the market. The
company has been driving the software container movement. It has become the de facto standard for
packaging, deploying, and running cloud-native applications.
Why containers?
Containers provide portability and guarantee consistency across environments. By encapsulating
everything into a single package, you isolate the microservice and its dependencies from the
underlying infrastructure.
You can deploy that same container in any environment that has the Docker runtime engine.
Containerized workloads also eliminate the expense of pre-configuring each environment with
frameworks, software libraries, and runtime engines.
By sharing the underlying operating system and host resources, containers have a much smaller
footprint than a full virtual machine. The smaller size increases the density, or number of
microservices, that a given host can run at one time.
Container orchestration
While tools such as Docker create images and run containers, you also need tools to manage them.
Container management is done with a special software program called a container orchestrator. When
operating at scale, container orchestration is essential.
Scheduling: Automatically provision container instances.
Affinity/anti-affinity: Provision containers nearby or far apart from each other, helping availability and performance.
Health monitoring: Automatically detect and correct failures.
Failover: Automatically reprovision a failed instance to a healthy machine.
Scaling: Automatically add or remove container instances to meet demand.
Networking: Manage a networking overlay for container communication.
Service discovery: Enable containers to locate each other.
Rolling upgrades: Coordinate incremental upgrades with zero-downtime deployment. Automatically roll back problematic changes.
Note how orchestrators embrace the disposability and concurrency principles from the Twelve-Factor
Application, discussed earlier in the chapter.
Factor #9 specifies that “Service instances should be disposable, favoring fast startups to increase
scalability opportunities and graceful shutdowns to leave the system in a correct state. Docker containers
along with an orchestrator inherently satisfy this requirement.”
Factor #8 specifies that “Services scale out across a large number of small identical processes (copies) as
opposed to scaling-up a single large instance on the most powerful machine available.”
While several container orchestrators exist, Kubernetes has become the de facto standard for the
cloud-native world. It’s a portable, extensible, open-source platform for managing containerized
workloads.
You could host your own instance of Kubernetes, but then you’d be responsible for provisioning and
managing its resources - which can be complex. The Azure cloud features Kubernetes as a managed
service, Azure Kubernetes Service (AKS). A managed service allows you to fully leverage its features,
without having to install and maintain it.
Figure 1-7 shows many common backing services that cloud-native systems consume.
Backing services promote the “Statelessness” principle from the Twelve-Factor Application, discussed
earlier in the chapter.
Factor #6 specifies that, “Each microservice should execute in its own process, isolated from other
running services. Externalize required state to a backing service such as a distributed cache or data
store.”
You could host your own backing services, but then you’d be responsible for licensing, provisioning,
and managing those resources.
Cloud providers offer a rich assortment of managed backing services. Instead of owning the service,
you simply consume it. The provider operates the resource at scale and bears the responsibility for
performance, security, and maintenance. Monitoring, redundancy, and availability are built into the
service. Providers fully support their managed services - open a ticket and they fix your issue.
Cloud-native systems favor managed backing services from cloud vendors. The savings in time and
labor are significant. Hosting your own backing services, and troubleshooting them when things go
wrong, can get expensive fast.
Factor #3 specifies that “Configuration information is moved out of the microservice and externalized
through a configuration management tool outside of the code.”
With this pattern, a backing service can be attached and detached without code changes. You might
promote a microservice from QA to a staging environment. You update the microservice
configuration to point to the backing services in staging and inject the settings into your container
through an environment variable.
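As a minimal sketch of that idea (the image, secret, and setting names below are illustrative assumptions, not the eShopOnContainers configuration), a Kubernetes container spec can pull a backing-service connection string from a per-environment Secret and expose it as an environment variable:

apiVersion: v1
kind: Pod
metadata:
  name: catalog-api
spec:
  containers:
  - name: catalog-api
    image: myregistry.azurecr.io/catalog-api:1.0    # hypothetical image name
    env:
    - name: ConnectionStrings__CatalogDb            # .NET maps the double underscore to the ':' configuration separator
      valueFrom:
        secretKeyRef:
          name: catalog-secrets                     # Secret created per environment (QA, staging, production)
          key: catalog-db

Swapping the Secret contents per environment changes which backing service the container talks to, without rebuilding the image or touching the service code.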
Cloud vendors provide APIs for you to communicate with their proprietary backing services. These
libraries encapsulate the plumbing and complexity. Communicating directly with these APIs will tightly
couple your code to the backing service. It’s a better practice to insulate the implementation details of
the vendor API. Introduce an intermediation layer, or intermediate API, exposing generic operations to
your service code. This loose coupling enables you to swap out one backing service for another or
move your code to a different public cloud without having to make changes to the mainline service
code.
Backing services are discussed in detail in Chapter 5, Cloud-Native Data Patterns, and Chapter 4, Cloud-
Native Communication Patterns.
Automation
As you’ve seen, cloud-native systems embrace microservices, containers, and modern system design
to achieve speed and agility. But, that’s only part of the story. How do you provision the cloud
environments upon which these systems run? How do you rapidly deploy app features and updates?
How do you round out the full picture?
With Infrastructure as Code (IaC), you automate platform provisioning and application deployment. You essentially apply
software engineering practices such as testing and versioning to your DevOps practices. Your
infrastructure and deployments are automated, consistent, and repeatable.
Automating infrastructure
Tools like Azure Resource Manager, Terraform, and the Azure CLI enable you to declaratively script
the cloud infrastructure you require. Resource names, locations, capacities, and secrets are
parameterized and dynamic. The script is versioned and checked into source control as an artifact of
your project. You invoke the script to provision a consistent and repeatable infrastructure across
system environments, such as QA, staging, and production.
Under the hood, IaC is idempotent, meaning that you can run the same script over and over without
side effects. If the team needs to make a change, they edit and rerun the script. Only the updated
resources are affected.
In the article, What is Infrastructure as Code, author Sam Guckenheimer describes how, “Teams who
implement IaC can deliver stable environments rapidly and at scale. Teams avoid manual
configuration of environments and enforce consistency by representing the desired state of their
environments via code.”
Automating deployments
The Twelve-Factor Application, discussed earlier, calls for separate steps when transforming
completed code into a running application.
Factor #5 specifies that “Each release must enforce a strict separation across the build, release and run
stages. Each should be tagged with a unique ID and support the ability to roll back.”
Modern CI/CD systems help fulfill this principle. They provide separate deployment steps and help
ensure consistent and quality code that’s readily available to users.
The developer constructs a feature in their development environment, iterating through what is called
the “inner loop” of code, run, and debug. When complete, that code is pushed into a code repository,
such as GitHub, Azure DevOps, or BitBucket.
The push triggers a build stage that transforms the code into a binary artifact. The work is
implemented with a Continuous Integration (CI) pipeline. It automatically builds, tests, and packages
the application.
The release stage picks up the binary artifact, applies external application and environment
configuration information, and produces an immutable release. The release is deployed to a specified environment.
Finally, the released feature is run in the target execution environment. Releases are immutable
meaning that any change must create a new release.
Applying these practices, organizations have radically evolved how they ship software. Many have
moved from quarterly releases to on-demand updates. The goal is to catch problems early in the
development cycle when they’re less expensive to fix. The longer the duration between integrations,
the more expensive problems become to resolve. With consistency in the integration process, teams
can commit code changes more frequently, leading to better collaboration and software quality.
Azure Pipelines
The Azure cloud includes a CI/CD service called Azure Pipelines, which is part of the Azure
DevOps offering shown in Figure 1-9.
Azure Pipelines is a cloud service that combines continuous integration (CI) and continuous delivery
(CD). You can automatically test, build, and ship your code to any target.
You define your pipeline in code in a YAML file alongside the rest of the code for your app.
• The pipeline is versioned with your code and follows the same branching structure.
• You get validation of your changes through code reviews in pull requests and branch build
policies.
• Every branch you use can customize the build policy by modifying the azure-pipelines.yml file.
• The pipeline file is checked into version control and can be investigated if there’s a problem.
The Azure Pipelines service supports most Git providers and can generate deployment pipelines for
applications written on the Linux, macOS, or Windows platforms. It includes support for Java, .NET,
JavaScript, Python, PHP, Go, XCode, and C++.
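As a rough illustration of such a YAML pipeline (this is a sketch, not the book’s pipeline; the repository name, Dockerfile path, and service connection are assumptions), a minimal azure-pipelines.yml that builds and pushes a container image might look like this:

trigger:
- main

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: Docker@2
  displayName: Build and push the container image
  inputs:
    command: buildAndPush
    repository: catalog-api                       # hypothetical image repository
    dockerfile: src/Services/Catalog/Dockerfile   # hypothetical path in the repo
    containerRegistry: my-acr-connection          # assumed service connection to a container registry
    tags: |
      $(Build.BuildId)

Because the pipeline is just YAML in the repository, it’s versioned with the code, reviewed in pull requests, and adjustable per branch as described above.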
Applying a cost/benefit analysis, there’s a good chance that most applications wouldn’t justify the hefty
price tag required to be cloud native. The cost of being cloud native would far exceed the business
value of the application.
• A system where individual features must release without a full redeployment of the entire system
Then there are legacy systems. While we’d all like to build new applications, we’re often responsible
for modernizing legacy workloads that are critical to the business. Over time, a legacy application
could be decomposed into microservices, containerized, and ultimately “replatformed” into a cloud-
native architecture.
Monolithic apps that are non-critical largely benefit from a quick lift-and-shift (Cloud Infrastructure-
Ready) migration. Here, the on-premises workload is rehosted to a cloud-based VM, without changes.
This approach uses the IaaS (Infrastructure as a Service) model. Azure includes several tools such as
Azure Migrate, Azure Site Recovery, and Azure Database Migration Service to make such a move easier.
Monolithic apps that are critical to the business oftentimes benefit from an enhanced lift-and-shift
(Cloud Optimized) migration. This approach includes deployment optimizations that enable key cloud
services - without changing the core architecture of the application. For example, you might
containerize the application and deploy it to a container orchestrator, like Azure Kubernetes Services,
discussed later in this book. Once in the cloud, the application could consume other cloud services
such as databases, message queues, monitoring, and distributed caching.
Finally, monolithic apps that perform strategic enterprise functions might best benefit from a Cloud-
Native approach, the subject of this book. This approach provides agility and velocity. But, it comes at
a cost of replatforming, rearchitecting, and rewriting code.
If you and your team believe a cloud-native approach is appropriate, it behooves you to rationalize
the decision with your organization. What exactly is the business problem that a cloud-native
approach will solve? How would it align with business needs?
• Blend development platforms and data stores to arrive at the best tool for the job?
The right migration strategy depends on organizational priorities and the systems you’re targeting.
For many, it may be more cost effective to cloud-optimize a monolithic application or add coarse-
grained services to an N-Tier app. In these cases, you can still make full use of cloud PaaS capabilities
like the ones offered by Azure App Service.
Summary
In this chapter, we introduced cloud-native computing. We provided a definition along with the key
capabilities that drive a cloud-native application. We looked at the types of applications that might
justify this investment and effort.
With the introduction behind us, we now dive into a much more detailed look at cloud native.
References
• Cloud Native Computing Foundation
• Modernize existing .NET applications with Azure cloud and Windows Containers
Before starting this chapter, we recommend that you download the eShopOnContainers reference
application. If you do so, it should be easier for you to follow along with the information presented.
• It needs to be highly available and it must scale automatically to meet increased traffic (and
scale back down once traffic subsides).
• It should provide easy-to-use monitoring of its health and diagnostic logs to help troubleshoot
any issues it encounters.
• It should support an agile development process, including support for continuous integration
and deployment (CI/CD).
• In addition to the two web front ends (traditional and Single Page Application), the application
must also support mobile client apps running different kinds of operating systems.
• It should support cross-platform hosting and cross-platform development.
The eShopOnContainers application is accessible from web or mobile clients that access the
application over HTTPS targeting either the ASP.NET Core MVC server application or an appropriate
API Gateway. API Gateways offer several advantages, such as decoupling back-end services from
individual front-end clients and providing better security. The application also makes use of a related
pattern known as Backends-for-Frontends (BFF), which recommends creating separate API gateways
for each front-end client. The reference architecture demonstrates breaking up the API gateways
based on whether the request is coming from a web or mobile client.
The application’s functionality is broken up into many distinct microservices. There are services
responsible for authentication and identity, listing items from the product catalog, managing users’
shopping baskets, and placing orders. Each of these separate services has its own persistent storage.
There’s no single master data store with which all services interact. Instead, coordination and
communication between the services is done on an as-needed basis and by using a message bus.
Each of the different microservices is designed differently, based on its individual requirements. This
means their technology stacks may differ, although they’re all built using .NET and designed for
the cloud. Simpler services provide basic Create-Read-Update-Delete (CRUD) access to the underlying
data stores, while more advanced services use Domain-Driven Design approaches and patterns to
manage business complexity.
The code is organized to support the different microservices, and within each microservice, the code is
broken up into domain logic, infrastructure concerns, and user interface or service endpoint. In many
cases, each service’s dependencies can be fulfilled by Azure services in production, with alternative
options available for local development. Let’s examine how the application’s requirements map to Azure
services.
The application’s architecture is shown in Figure 2-5. On the left are the client apps, broken up into
mobile, traditional Web, and Web Single Page Application (SPA) flavors. On the right are the server-
side components that make up the system, each of which can be hosted in Docker containers and
Kubernetes clusters. The traditional web app is powered by the ASP.NET Core MVC application shown
in yellow. This app and the mobile and web SPA applications communicate with the individual
microservices through one or more API gateways. The API gateways follow the “backends for front
ends” (BFF) pattern, meaning that each gateway is designed to support a given front-end client. The
individual microservices are listed to the right of the API gateways and include both business logic
and some kind of persistence store. The different services make use of SQL Server databases, Redis
cache instances, and MongoDB/CosmosDB stores. On the far right is the system’s Event Bus, which is
used for communication between the microservices.
The server-side components of this architecture all map easily to Azure services.
AKS provides management services for individual clusters of containers. The application will deploy
separate containers for each microservice in the AKS cluster, as shown in the architecture diagram
above. This approach allows each individual service to scale independently according to its resource
demands. Each microservice can also be deployed independently, and ideally such deployments
should incur zero system downtime.
API Gateway
The eShopOnContainers application has multiple front-end clients and multiple different back-end
services. There’s no one-to-one correspondence between the client applications and the microservices
that support them. In such a scenario, there may be a great deal of complexity when writing client
software to interface with the various back-end services in a secure manner. Each client would need to
address this complexity on its own, resulting in duplication and many places in which to make updates
as services change or new policies are implemented.
Azure API Management (APIM) helps organizations publish APIs in a consistent, manageable fashion.
APIM consists of three components: the API Gateway, an administration portal (the Azure portal),
and a developer portal.
The Azure portal is where you define the API schema and package different APIs into products. You
also configure user access, view reports, and configure policies for quotas or transformations.
The developer portal serves as the main resource for developers. It provides developers with API
documentation, an interactive test console, and reports on their own usage. Developers also use the
portal to create and manage their own accounts, including subscription and API key support.
Using APIM, applications can expose several different groups of services, each providing a back end
for a particular front-end client. APIM is recommended for complex scenarios. For simpler needs, the
lightweight API Gateway Ocelot can be used. The eShopOnContainers app uses Ocelot because of its
simplicity and because it can be deployed into the same application environment as the application
itself. Learn more about eShopOnContainers, APIM, and Ocelot.
Another option if your application is using AKS is to deploy the Azure Gateway Ingress Controller as a
pod within your AKS cluster. This approach allows your cluster to integrate with an Azure Application
Gateway, allowing the gateway to load-balance traffic to the AKS pods. Learn more about the Azure
Gateway Ingress Controller for AKS.
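As a rough sketch of that approach (the host path and service name are assumptions, not taken from eShopOnContainers), an Ingress resource annotated for the Application Gateway Ingress Controller might look like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: eshop-web
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway   # routes this Ingress through Application Gateway
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webmvc         # assumed front-end service name
            port:
              number: 80

With this in place, Application Gateway load-balances external traffic directly to the pods backing the webmvc service.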
Data
The various back-end services used by eShopOnContainers have different storage requirements.
Several microservices use SQL Server databases. The Basket microservice leverages a Redis cache for
its persistence. The Locations microservice expects a MongoDB API for its data. Azure supports each
of these data formats.
For SQL Server database support, Azure has products for everything from single databases up to
highly scalable SQL Database elastic pools. Individual microservices can be configured to
communicate with their own individual SQL Server databases quickly and easily. These databases can
be scaled as needed to support each separate microservice according to its needs.
The eShopOnContainers application stores the user’s current shopping basket between requests. This
aspect is managed by the Basket microservice that stores the data in a Redis cache. In development,
this cache can be deployed in a container, while in production it can utilize Azure Cache for Redis.
Azure Cache for Redis is a fully managed service offering high performance and reliability without the
need to deploy and manage Redis instances or containers on your own.
The Locations microservice uses a MongoDB NoSQL database for its persistence. During
development, the database can be deployed in its own container, while in production the service can
leverage Azure Cosmos DB’s API for MongoDB. One of the benefits of Azure Cosmos DB is its support
for multiple communication protocols, including a SQL API and common NoSQL APIs such as
MongoDB, Cassandra, Gremlin, and Azure Table Storage. Azure Cosmos DB offers a
fully managed and globally distributed database as a service that can scale to meet the needs of the
services that use it.
Resiliency
Once deployed to production, the eShopOnContainers application would be able to take advantage
of several Azure services available to improve its resiliency. The application publishes health checks,
which can be integrated with Application Insights to provide reporting and alerts based on the app’s
availability. Azure resources also provide diagnostic logs that can be used to identify and correct bugs
and performance issues. Resource logs provide detailed information on when and how different Azure
resources are used by the application. You’ll learn more about cloud-native resiliency features in
chapter 6.
You can now deploy the eShop application to the cluster using Helm.
Using Helm, applications include text-based configuration files, called Helm charts, which declaratively
describe the application and configuration in Helm packages. Charts use standard YAML-formatted
files to describe a related set of Kubernetes resources. They’re versioned alongside the application
code they describe. Helm Charts range from simple to complex depending on the requirements of the
installation they describe.
Helm is composed of a command-line client tool, which consumes Helm charts and launches
commands to a server component named Tiller. Tiller communicates with the Kubernetes API to
ensure the correct provisioning of your containerized workloads. Helm is maintained by the Cloud
Native Computing Foundation.
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.app.svc.marketing }}
  labels:
    app: {{ template "marketing-api.name" . }}
    chart: {{ template "marketing-api.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app: {{ template "marketing-api.name" . }}
    release: {{ .Release.Name }}
Note how the template describes a dynamic set of key/value pairs. When the template is invoked,
values enclosed in curly braces are pulled in from other YAML-based configuration files.
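For example, the {{ .Values.* }} placeholders above resolve against the chart’s values.yaml file. A simplified sketch of such a file follows; the keys mirror the template above, but the specific values are illustrative assumptions rather than the actual eShopOnContainers settings:

app:
  svc:
    marketing: marketing-api      # becomes the Service name
service:
  type: ClusterIP                 # becomes .Values.service.type
  port: 80                        # becomes .Values.service.port

Overriding these values at install time lets the same chart produce differently configured releases per environment.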
You’ll find the eShopOnContainers Helm charts in the /k8s/helm folder. Figure 2-6 shows how the
different components of the application are organized into a folder structure used by Helm to define
and manage deployments.
Each individual component is installed using a helm install command. eShop includes a “deploy all”
script that loops through and installs the components using their respective helm charts. The result is
a repeatable process, versioned with the application in source control, that anyone on the team can
deploy to an AKS cluster with a one-line script command.
Note that version 3 of Helm officially removes the need for the Tiller server component. More
information on this enhancement can be found here.
Developers share a running (development) instance in an AKS cluster that contains the entire
containerized application. But they use personal spaces set up on their machine to locally develop
their services. When ready, they test from end-to-end in the AKS cluster - without replicating
dependencies. Azure Dev Spaces merges code from the local machine with services in AKS. Team
members can see how their changes will behave in a real AKS environment. Developers can rapidly
iterate and debug code directly in Kubernetes using Visual Studio 2017 or Visual Studio Code.
In Figure 2-7, you can see that Developer Susie has deployed an updated version of the Bikes
microservice into her dev space. She’s then able to test her changes using a custom URL starting with
the name of her space (susie.s.dev.myapp.eus.azds.io).
Figure 2-7. Developer Susie deploys her own version of the Bikes microservice and tests it.
At the same time, developer John is customizing the Reservations microservice and needs to test his
changes. He deploys his changes to his own dev space without conflicting with Susie’s changes as
shown in Figure 2-8. John then tests his changes using his own URL that is prefixed with the name of
his space (john.s.dev.myapp.eus.azds.io).
Figure 2-8. Developer John deploys his own version of the Reservations microservice and tests it without conflicting with
other developers.
Using Azure Dev Spaces, teams can work directly with AKS while independently changing, deploying,
and testing their changes. This approach reduces the need for separate dedicated hosted
environments since every developer effectively has their own AKS environment. Developers can work
Centralized configuration
Unlike a monolithic app in which everything runs within a single instance, a cloud-native application
consists of independent services distributed across virtual machines, containers, and geographic
regions. Managing configuration settings for dozens of interdependent services can be challenging.
Duplicate copies of configuration settings across different locations are error prone and difficult to
manage. Centralized configuration is a critical requirement for distributed cloud-native applications.
As discussed in Chapter 1, the Twelve-Factor App recommendations require strict separation between
code and configuration. Configuration must be stored externally from the application and read-in as
needed. Storing configuration values as constants or literal values in code is a violation. The same
configuration values are often used by many services in the same application. Additionally, we
must support the same values across multiple environments, such as dev, testing, and production. The
best practice is to store them in a centralized configuration store.
App Configuration automatically caches each setting to avoid excessive calls to the configuration
store. The refresh operation waits until the cached value of a setting expires to update that setting,
even when its value changes in the configuration store. The default cache expiration time is 30
seconds. You can override the expiration time.
App Configuration encrypts all configuration values in transit and at rest. Key names and labels are
used as indexes for retrieving configuration data and aren’t encrypted.
Although App Configuration provides hardened security, Azure Key Vault is still the best place for
storing application secrets. Key Vault provides hardware-level encryption, granular access policies, and
management operations such as certificate rotation. You can create App Configuration values that
reference secrets stored in a Key Vault.
Key Vault greatly reduces the chances that secrets may be accidentally leaked. When using Key Vault,
application developers no longer need to store security information in their application, which
eliminates the need to keep it in code. For example, an application may need
to connect to a database. Instead of storing the connection string in the app’s code, you can store it
securely in Key Vault.
Your applications can securely access the information they need by using URIs. These URIs allow the
applications to retrieve specific versions of a secret. There’s no need to write custom code to protect
any of the secret information stored in Key Vault.
Access to Key Vault requires proper caller authentication and authorization. Typically, each cloud-
native microservice uses a ClientId/ClientSecret combination. It’s important to keep these credentials
outside source control. A best practice is to set them in the application’s environment. Direct access to
Key Vault from AKS can be achieved using Key Vault FlexVolume.
Configuration in eShop
The eShopOnContainers application includes local application settings files with each microservice.
These files are checked into source control, but don’t include production secrets such as connection
strings or API keys. In production, individual settings may be overwritten with per-service environment
variables. Injecting secrets in environment variables is a common practice for hosted applications, but
doesn’t provide a central configuration store. To support centralized management of configuration
settings, each microservice includes a setting to toggle between its use of local settings or Azure Key
Vault settings.
In this chapter, we discuss technologies that enable cloud-native applications to scale to meet user
demand. These technologies include:
• Containers
• Orchestrators
• Serverless computing
Although they have the benefit of simplicity, monolithic architectures face a number of challenges:
Deployment
Additionally, they require a restart of the application, which may temporarily impact availability if
zero-downtime techniques are not applied while deploying.
Scaling
A monolithic application is hosted entirely on a single machine instance, often requiring high-
capability hardware. If any part of the monolith requires scaling, another copy of the entire application
must be deployed to another machine. With a monolith, you can’t scale application components
individually - it’s all or nothing. Scaling components that don’t require scaling results in inefficient and
costly resource usage.
Environment
Monolithic applications are typically deployed to a hosting environment with a pre-installed operating
system, runtime, and library dependencies. This environment may not match that upon which the
application was developed or tested. Inconsistencies across application environments are a common
source of problems for monolithic deployments.
Coupling
A monolithic application is likely to experience high coupling across its functional components.
Without hard boundaries, system changes often result in unintended and costly side effects. New
features/fixes become tricky, time-consuming, and expensive to implement. Updates require extensive
testing. Coupling also makes it difficult to refactor components or swap in alternative
implementations. Even when constructed with a strict separation of concerns, architectural erosion
sets in as the monolithic code base deteriorates with never-ending “special cases.”
Docker is the most popular container management platform. It works with containers on both Linux
and Windows. Containers provide separate but reproducible application environments that run the same
way on any system. This aspect makes them perfect for developing and hosting cloud-native services.
Containers are isolated from one another. Two containers on the same host hardware can have
different versions of software, without causing conflicts.
Containers are defined by simple text-based files that become project artifacts and are checked into
source control. While full servers and virtual machines require manual effort to update, containers are
easily version-controlled. Apps built to run in containers can be developed, tested, and deployed
using automated tools as part of a build pipeline.
Containers are immutable. Once you define a container, you can recreate and run it exactly the same
way. This immutability lends itself to component-based design. If some parts of an application evolve
differently than others, why redeploy the entire app when you can just deploy the parts that change
most frequently? Different features and cross-cutting concerns of an app can be broken up into
separate units. Figure 3-2 shows how a monolithic app can take advantage of containers and
microservices by delegating certain features or functionality. The remaining functionality in the app
itself has also been containerized.
Each cloud-native service is built and deployed in a separate container. Each can update as needed.
Individual services can be hosted on nodes with resources appropriate to each service. The
environment each service runs in is immutable, shared across dev, test, and production environments,
and easily versioned. Coupling between different areas of the application occurs explicitly as calls or
messages between services, not compile-time dependencies within the monolith. You can also choose
the technology that best suits a given capability without requiring changes to the rest of the app.
In the cloud-native eco-system, Kubernetes has become the de facto container orchestrator. It’s an
open-source platform managed by the Cloud Native Computing Foundation (CNCF). Kubernetes
automates the deployment, scaling, and operational concerns of containerized workloads across a
machine cluster. However, installing and managing Kubernetes is notoriously complex.
AKS is a cluster-based technology. A pool of federated virtual machines, or nodes, is deployed to the
Azure cloud. Together they form a highly available environment, or cluster. The cluster appears as a
seamless, single entity to your cloud-native application. Under the hood, AKS deploys your
containerized services across these nodes following a predefined strategy that evenly distributes the
load.
Scaling containerized workloads is a key feature of container orchestrators. AKS supports automatic
scaling across two dimensions: Container instances and compute nodes. Together they give AKS the
ability to quickly and efficiently respond to spikes in demand and add additional resources. We
discuss scaling in AKS later in this chapter.
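For the container-instance dimension, scaling is typically expressed declaratively with a HorizontalPodAutoscaler. The following is a minimal sketch, with an assumed deployment name and thresholds rather than values from this book’s samples:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: catalog-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: catalog-api                   # assumed deployment to scale
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70    # add instances when average CPU exceeds 70%

Node-level scaling is handled separately by the AKS cluster autoscaler, which adds or removes virtual machines in the node pool as the workload demands.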
Imperative commands are great for learning and interactive experimentation. However, you’ll want to
declaratively create Kubernetes manifest files to embrace an infrastructure as code approach,
providing for reliable and repeatable deployments. The manifest file becomes a project artifact and is
used in your CI/CD pipeline for automating Kubernetes deployments.
If you’ve already configured your cluster using imperative commands, you can export a declarative
manifest by using kubectl get svc SERVICENAME -o yaml > service.yaml. This command produces a
manifest similar to one shown below:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-09-13T13:58:47Z"
  labels:
    component: apiserver
    provider: kubernetes
  name: kubernetes
  namespace: default
  resourceVersion: "153"
  selfLink: /api/v1/namespaces/default/services/kubernetes
  uid: 9b1fac62-d62e-11e9-8968-00155d38010d
spec:
  clusterIP: 10.96.0.1
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 6443
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
When using declarative configuration, you can preview the changes that will be made before
committing them by using kubectl diff -f FOLDERNAME against the folder where your configuration
files are located. Once you’re sure you want to apply the changes, run kubectl apply -f FOLDERNAME.
Add -R to recursively process a folder hierarchy.
You can also use declarative configuration with other Kubernetes features, one of which being
deployments. Declarative deployments help manage releases, updates, and scaling. They instruct the
Kubernetes deployment controller on how to deploy new changes, scale out load, or roll back to a
previous revision. If a cluster is unstable, a declarative deployment will automatically return the cluster
back to a desired state. For example, if a node should crash, the deployment mechanism will redeploy
a replacement to achieve your desired state.
Development resources
This section shows a short list of development resources that may help you get started using
containers and orchestrators for your next application. If you’re looking for guidance on how to
design your cloud-native microservices architecture app, read this book’s companion, .NET
Microservices: Architecture for Containerized .NET Applications.
Minikube runs a local, single-node Kubernetes cluster and supports most of the features found in a full cluster, including:
• DNS
• NodePorts
• ConfigMaps and secrets
• Dashboards
• Container runtimes: Docker, rkt, CRI-O, and containerd
• Enabling Container Network Interface (CNI)
• Ingress
After installing Minikube, you can quickly start using it by running the minikube start command, which
downloads an image and starts the local Kubernetes cluster. Once the cluster is started, you interact
with it using the standard Kubernetes kubectl commands.
Docker Desktop
You can also work with Kubernetes directly from Docker Desktop on Windows. It is your only option if
you’re using Windows Containers, and is a great choice for non-Windows containers as well. Figure 3-
4 shows how to enable local Kubernetes support when running Docker Desktop.
When this option is selected, the project is created with a Dockerfile in its root, which can be used to
build and host the app in a Docker container. An example Dockerfile is shown in Figure 3-6.
The default behavior when the app runs is configured to use Docker as well. Figure 3-7 shows the
different run options available from a new ASP.NET Core project created with Docker support added.
In addition to local development, Azure Dev Spaces provides a convenient way for multiple
developers to work with their own Kubernetes configurations within Azure. As you can see in Figure 3-
7, you can also run the application in Azure Dev Spaces.
Also, at any time you can add Docker support to an existing ASP.NET Core application. From the
Visual Studio Solution Explorer, right-click on the project and select Add > Docker Support, as shown
in Figure 3-8.
You can also add Container Orchestration Support, also shown in Figure 3-8. By default, the
orchestrator uses Kubernetes and Helm. Once you’ve chosen the orchestrator, an azds.yaml file is
added to the project root and a charts folder is added containing the Helm charts used to configure
and deploy the application to Kubernetes. Figure 3-9 shows the resulting files in a new project.
Microsoft provides the Docker for Visual Studio Code extension. This extension simplifies the process
of adding container support to applications. It scaffolds required files, builds Docker images, and
enables you to debug your app inside a container. The extension features a visual explorer that makes
it easy to take actions on containers and images such as start, stop, inspect, remove, and more. The
extension also supports Docker Compose, enabling you to manage multiple running containers as a
single unit.
What is serverless?
Serverless is a relatively new service model of cloud computing. It doesn’t mean that servers are
optional - your code still runs on a server somewhere. The distinction is that the application team no
longer needs to provision or manage those servers; that responsibility shifts to the cloud platform.
Serverless computing uses event-triggered stateless containers to host your services. They can scale
out and in to meet demand as-needed. Serverless platforms like Azure Functions have tight
integration with other Azure services like queues, events, and storage.
An application might need to send an email as a step in a workflow. Instead of sending the
notification as part of a microservice request, place the message details onto a queue. An Azure
Function can dequeue the message and asynchronously send the email. Doing so could improve the
performance and scalability of the microservice. Queue-based load leveling can be implemented to
avoid bottlenecks related to sending the emails. Additionally, this stand-alone service could be reused
as a utility across many different applications.
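To make the hand-off concrete, here’s a minimal sketch of such a function using the in-process Azure Functions queue trigger. The queue name, the connection setting name, and the IEmailSender abstraction are illustrative assumptions, not part of a specific sample.

using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public class EmailRequest
{
    public string To { get; set; }
    public string Subject { get; set; }
    public string Body { get; set; }
}

public interface IEmailSender
{
    Task SendAsync(string to, string subject, string body);
}

public class SendEmailFunction
{
    private readonly IEmailSender _emailSender;

    public SendEmailFunction(IEmailSender emailSender) => _emailSender = emailSender;

    // Fires whenever a message lands on the (illustrative) "email-requests" queue.
    [FunctionName("SendEmail")]
    public async Task Run(
        [QueueTrigger("email-requests", Connection = "StorageConnection")] EmailRequest request,
        ILogger log)
    {
        log.LogInformation("Sending email to {Recipient}", request.To);

        // IEmailSender stands in for whatever email provider the application uses.
        await _emailSender.SendAsync(request.To, request.Subject, request.Body);
    }
}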
Asynchronous messaging from queues and topics is a common pattern to trigger serverless functions.
However, Azure Functions can be triggered by other events, such as changes to Azure Blob Storage. A
service that supports image uploads could have an Azure Function responsible for optimizing each image as it’s added to Blob Storage.
Many services have long-running processes as part of their workflows. Often these tasks are done as
part of the user’s interaction with the application. These tasks can force the user to wait, negatively
impacting their experience. Serverless computing provides a great way to move slower tasks outside
of the user interaction loop. These tasks can scale with demand without requiring the entire
application to scale.
Figure 3-10 shows a cold-start pattern. Note the extra steps required when the app is cold.
To avoid cold starts entirely, you might switch from a consumption plan to a dedicated plan. You can
also configure one or more pre-warmed instances with the premium plan upgrade. In these cases,
when you need to add another instance, it’s already up and ready to go. These options can help
mitigate the cold start issue associated with serverless computing.
Cloud providers bill for serverless based on compute execution time and consumed memory. Long
running operations or high memory consumption workloads aren’t always the best candidates for
serverless. Serverless functions favor small chunks of work that can complete quickly. Most serverless
platforms require individual functions to complete within a few minutes. Azure Functions defaults to a five-minute time-out per execution, although this limit is configurable.
Finally, leveraging Azure Functions for application tasks adds complexity. It’s wise to first architect
your application with a modular, loosely coupled design. Then, identify if there are benefits serverless
would offer that justify the additional complexity.
When the project is created, it includes a Dockerfile and has its worker runtime configured to dotnet.
Now, you can create and test your function locally. Build and run it using the docker build and docker
run commands. For detailed steps to get started building Azure Functions with Docker support, see
the Create a function on Linux using a custom image tutorial.
KEDA provides event-driven scaling functionality to the Functions’ runtime in a Docker container.
KEDA can scale from zero instances (when no events are occurring) out to n instances, based on load.
It enables autoscaling by exposing custom metrics to the Kubernetes autoscaler (Horizontal Pod
Autoscaler). Using Functions containers with KEDA makes it possible to replicate serverless function
capabilities in any Kubernetes cluster.
It is worth noting that the KEDA project is now managed by the Cloud Native Computing Foundation
(CNCF).
Once created, container images are stored in container registries. They enable you to build, store, and
manage container images. There are many registries available, both public and private. Azure
Container Registry (ACR) is a fully managed container registry service in the Azure cloud. It persists
your images inside the Azure network, reducing the time to deploy them to Azure container hosts.
You can also secure them using the same security and identity procedures that you use for other
Azure resources.
You create an Azure Container Registry using the Azure portal, Azure CLI, or PowerShell tools.
Creating a registry in Azure is simple. It requires an Azure subscription, resource group, and a unique
name. Figure 3-11 shows the basic options for creating a registry, which will be hosted at
registryname.azurecr.io.
Once you’ve created the registry, you’ll need to authenticate with it before you can use it. Typically,
you’ll log into the registry using the Azure CLI command:
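# log in to the registry (substitute your own registry name)
az acr login --name registryname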
Once authenticated, you can use docker commands to push container images to it. Before you can do
so, however, you must tag your image with the fully qualified name (URL) of your ACR login server. It
will have the format registryname.azurecr.io.
After you’ve tagged the image, you use the docker push command to push the image to your ACR
instance.
After you push an image to the registry, it’s a good idea to remove the image from your local Docker
environment, using this command:
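# remove the local copy of the image (the tag shown is illustrative)
docker rmi registryname.azurecr.io/sample/hello-world:v1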
As a best practice, developers shouldn’t manually push images to a container registry. Instead, a build
pipeline defined in a tool like GitHub or Azure DevOps should be responsible for this process. Learn
more in the Cloud-Native DevOps chapter.
ACR Tasks
ACR Tasks is a set of features available from the Azure Container Registry. It extends your inner-loop
development cycle by building and managing container images in the Azure cloud. Instead of
invoking a docker build and docker push locally on your development machine, they’re automatically
handled by ACR Tasks in the cloud.
The following AZ CLI command both builds a container image and pushes it to ACR:
# build container image in ACR and push it into your container registry
az acr build --image sample/hello-world:v1 --registry myContainerRegistry008 --file
Dockerfile .
As you can see from the previous command block, there’s no need to install Docker Desktop on your
development machine. Additionally, you can configure ACR Task triggers to rebuild container images
on both source code and base image updates.
Once you deploy an image to a registry, such as ACR, you can configure AKS to automatically pull and
deploy it. With a CI/CD pipeline in place, you might configure a canary release strategy to minimize
the risk involved when rapidly deploying updates. The new version of the app is initially configured in
production with no traffic routed to it. Then, the system will route a small percentage of users to the
newly deployed version. As the team gains confidence in the new version, it can roll out more
instances and retire the old. AKS easily supports this style of deployment.
As with most resources in Azure, you can create an Azure Kubernetes Service cluster using the portal,
command-line, or automation tools like Terraform or Azure Resource Manager templates. To get started with a new cluster, you
need to provide the following information:
• Azure subscription
• Resource group
• Kubernetes cluster name
• Region
• Kubernetes version
• DNS name prefix
• Node size
• Node count
This information is sufficient to get started. As part of the creation process in the Azure portal, you can
also configure options for the following features of your cluster:
• Scale
• Authentication
• Networking
• Monitoring
• Tags
This quickstart walks through deploying an AKS cluster using the Azure portal.
Developers share a running (development) instance in an AKS cluster that contains the entire
containerized application. But they use personal spaces set up on their machine to locally develop
their services. When ready, they test from end-to-end in the AKS cluster - without replicating
dependencies. Azure Dev Spaces merges code from the local machine with services in AKS.
Developers can rapidly iterate and debug code directly in Kubernetes using Visual Studio or Visual
Studio Code.
To understand the value of Azure Dev Spaces, let me share this quotation from Gabe Monroy, PM
Lead of Containers at Microsoft Azure:
“Imagine you’re a new employee trying to fix a bug in a complex microservices application consisting
of dozens of components, each with their own configuration and backing services. To get started, you
must configure your local development environment so that it can mimic production including setting
up your IDE, building tool chain, containerized service dependencies, a local Kubernetes environment,
mocks for backing services, and more. With all the time involved setting up your development
environment, fixing that first bug could take days. Or you could use Dev Spaces and AKS.”
The process for working with Azure Dev Spaces involves setting up a shared dev space in the AKS cluster, creating your own space within it, and then running and debugging your services there from Visual Studio or Visual Studio Code.
Serverless apps scale up by choosing the premium Functions plan or premium instance sizes from a
dedicated app service plan.
First, the Horizontal Pod Autoscaler monitors resource demand and automatically scales your pod
replicas to meet it. When traffic increases, additional replicas are automatically provisioned to scale
out your services. Likewise, when demand decreases, they’re removed to scale-in your services. You
define the metric on which to scale, for example, CPU usage. You can also specify the minimum and
maximum number of replicas to run. AKS monitors that metric and scales accordingly.
Next, the AKS Cluster Autoscaler feature enables you to automatically scale compute nodes across a
Kubernetes cluster to meet demand. With it, you can automatically add new VMs to the underlying
Azure Virtual Machine Scale Set whenever more compute capacity is required. It also removes
nodes when no longer required.
Figure 3-13 shows the relationship between these two scaling services.
Working together, both ensure an optimal number of container instances and compute nodes to
support fluctuating demand. The horizontal pod autoscaler optimizes the number of pods required.
The cluster autoscaler optimizes the number of nodes required.
While the default consumption plan provides an economical and scalable solution for most apps, the
premium option allows developers flexibility for custom Azure Functions requirements. Upgrading to
the premium plan provides control over instance sizes, pre-warmed instances (to avoid cold start
delays), and dedicated VMs.
Creating an instance in ACI can be done quickly. Specify the image registry, Azure resource group
information, the amount of memory to allocate, and the port on which to listen. This quickstart shows
how to deploy a container instance to ACI using the Azure portal.
Once the deployment completes, find the newly deployed container’s IP address and communicate
with it over the port you specified.
Azure Container Instances offers the fastest way to run simple container workloads in Azure. You don’t
need to configure an app service, orchestrator, or virtual machine. For scenarios where you require full
container orchestration, service discovery, automatic scaling, or coordinated upgrades, we
recommend Azure Kubernetes Service (AKS).
References
• What is Kubernetes?
• Installing Kubernetes with Minikube
• MiniKube vs Docker Desktop
• Visual Studio Tools for Docker
Communication considerations
In a monolithic application, communication is straightforward. The code modules execute together in
the same executable space (process) on a server. This approach can have performance advantages as
everything runs together in shared memory, but results in tightly coupled code that becomes difficult
to maintain, evolve, and scale.
A cluster groups a pool of virtual machines together to form a highly available environment. They’re
managed with an orchestration tool, which is responsible for deploying and managing the
containerized microservices. Figure 4-1 shows a Kubernetes cluster deployed into the Azure cloud
with the fully managed Azure Kubernetes Service (AKS).
Across the cluster, microservices communicate with each other through APIs and messaging
technologies.
While they provide many benefits, microservices are no free lunch. Local in-process method calls
between components are now replaced with network calls. Each microservice must communicate over
a network protocol, which adds complexity to your system:
• Each message must be serialized and then deserialized - which can be expensive.
The book .NET Microservices: Architecture for Containerized .NET Applications, available for free from
Microsoft, provides in-depth coverage of communication patterns for microservice applications. In
this chapter, we provide a high-level overview of these patterns along with implementation options
available in the Azure cloud.
In this chapter, we’ll first address communication between front-end applications and back-end
microservices. We’ll then look at how back-end microservices communicate with each other.
To keep things simple, a front-end client could directly communicate with the back-end microservices,
shown in Figure 4-2.
With this approach, each microservice has a public endpoint that is accessible by front-end clients. In
a production environment, you’d place a load balancer in front of the microservices, routing traffic
proportionately.
While simple to implement, direct client communication would be acceptable only for simple
microservice applications. This pattern tightly couples front-end clients to core back-end services and
opens the door to a number of problems.
In the previous figure, note how the API Gateway service abstracts the back-end core microservices.
Implemented as a web API, it acts as a reverse proxy, routing incoming traffic to the internal
microservices.
The gateway insulates the client from internal service partitioning and refactoring. If you change a
back-end service, you accommodate for it in the gateway without breaking the client. It’s also your
first line of defense for cross-cutting concerns, such as identity, caching, resiliency, metering, and
throttling. Many of these cross-cutting concerns can be off-loaded from the back-end core services to
the gateway, simplifying the back-end services.
Care must be taken to keep the API Gateway simple and fast. Typically, business logic is kept out of
the gateway. A complex gateway risks becoming a bottleneck and eventually a monolith itself. Larger
systems often expose multiple API Gateways segmented by client type (mobile, web, desktop) or
back-end functionality. The Backend for Frontends pattern provides direction for implementing
multiple gateways. The pattern is shown in Figure 4-4.
Note in the previous figure how incoming traffic is sent to a specific API gateway - based upon client
type: web, mobile, or desktop app. This approach makes sense as the capabilities of each device differ
significantly across form factor, performance, and display limitations. Typically, mobile applications
expose less functionality than browser or desktop applications. Each gateway can be optimized to
match the capabilities and functionality of the corresponding device.
To start, you could build your own API Gateway service. A quick search of GitHub will provide many
examples. However, there are several frameworks and commercial gateway products available.
Ocelot Gateway
For simple .NET cloud-native applications, you might consider the Ocelot Gateway. Ocelot is an open-
source API Gateway created for .NET microservices that require a unified point of entry into their
system. It’s lightweight, fast, and scalable.
Like any API Gateway, its primary functionality is to forward incoming HTTP requests to downstream
services. Additionally, it supports a wide variety of capabilities that are configurable in a .NET
middleware pipeline. Its feature set includes:
• Routing
• Request aggregation
• Service discovery (with Consul and Eureka)
• Load balancing
• Authentication
• Authorization
• Throttling
• Logging and tracing
Each Ocelot gateway specifies the upstream and downstream addresses and configurable features in a
JSON configuration file. The client sends an HTTP request to the Ocelot gateway. Once received,
Ocelot passes the HttpRequest object through its pipeline manipulating it into the state specified by
its configuration. At the end of the pipeline, Ocelot creates a new HTTPResponseObject and passes it to
the downstream service. For the response, Ocelot reverses the pipeline, sending the response back to
the client.
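If it helps to see where that JSON configuration file plugs in, the following minimal sketch shows one way an ASP.NET Core host might wire up Ocelot. The ocelot.json file name is the library’s conventional default; everything else here is illustrative rather than taken from a specific sample.

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Ocelot.DependencyInjection;
using Ocelot.Middleware;

public class Program
{
    public static void Main(string[] args) =>
        Host.CreateDefaultBuilder(args)
            // ocelot.json holds the upstream/downstream routes and feature settings.
            .ConfigureAppConfiguration(config =>
                config.AddJsonFile("ocelot.json", optional: false, reloadOnChange: true))
            .ConfigureWebHostDefaults(web => web.UseStartup<Startup>())
            .Build()
            .Run();
}

public class Startup
{
    public void ConfigureServices(IServiceCollection services) =>
        services.AddOcelot();          // registers Ocelot using the loaded configuration

    public void Configure(IApplicationBuilder app) =>
        app.UseOcelot().Wait();        // plugs the Ocelot pipeline into the request path
}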
Ocelot is available as a NuGet package. It targets .NET Standard 2.0, making it compatible with
both .NET Core 2.0+ and .NET Framework 4.6.1+ runtimes. Ocelot integrates with anything that
speaks HTTP and runs on the platforms which .NET Core supports: Linux, macOS, and Windows.
Ocelot is extensible and supports many modern platforms, including Docker containers, Azure
Kubernetes Services, or other public clouds. Ocelot integrates with open-source packages like Consul,
GraphQL, and Netflix’s Eureka.
Consider Ocelot for simple cloud-native applications that don’t require the rich feature-set of a
commercial API gateway.
The Application Gateway Ingress Controller enables Azure Application Gateway to work directly with
Azure Kubernetes Service. Figure 4.5 shows the architecture.
Kubernetes includes a built-in feature that supports HTTP (Level 7) load balancing, called Ingress.
Ingress defines a set of rules for how microservice instances inside AKS can be exposed to the outside
world. In the previous image, the ingress controller interprets the ingress rules configured for the
cluster and automatically configures the Azure Application Gateway. Based on those rules, the
Application Gateway routes traffic to microservices running inside AKS. The ingress controller listens
for changes to ingress rules and makes the appropriate changes to the Azure Application Gateway.
To start, API Management exposes a gateway server that allows controlled access to back-end
services based upon configurable rules and policies. These services can be in the Azure cloud, your
on-prem data center, or other public clouds. API keys and JWT tokens determine who can do what. All
traffic is logged for analytical purposes.
For developers, API Management offers a developer portal that provides access to services,
documentation, and sample code for invoking them. Developers can use Swagger/Open API to
inspect service endpoints and analyze their usage. The service works across the major development
platforms: .NET, Java, Golang, and more.
The publisher portal exposes a management dashboard where administrators expose APIs and
manage their behavior. Service access can be granted, service health monitored, and service telemetry
gathered. Administrators apply policies to each endpoint to affect behavior. Policies are pre-built
statements that execute sequentially for each service call. Policies are configured for an inbound call,
outbound call, or invoked upon an error. Policies can be applied at different service scopes to
enable deterministic ordering when combining policies. The product ships with a large number of
prebuilt policies.
Policies can affect the behavior of your cloud-native services in many ways, from transforming requests and responses to enforcing quotas and rate limits. Azure API Management is offered across several pricing tiers:
• Developer
• Basic
• Standard
• Premium
The Developer tier is meant for non-production workloads and evaluation. The other tiers offer
progressively more power, features, and higher service level agreements (SLAs). The Premium tier
provides Azure Virtual Network and multi-region support. All tiers have a fixed price per hour.
The Azure cloud also offers a serverless tier for Azure API Management. Referred to as the
consumption pricing tier, the service is a variant of API Management designed around the serverless
computing model. Unlike the “pre-allocated” pricing tiers previously shown, the consumption tier
provides instant provisioning and pay-per-action pricing.
The consumption tier is a good fit for scenarios such as:
• Microservices implemented using serverless technologies such as Azure Functions and Azure
Logic Apps.
• Azure backing service resources such as Service Bus queues and topics, Azure storage, and
others.
• Microservices where traffic has occasional large spikes but remains low most of the time.
The consumption tier uses the same underlying API Management service components, but employs
an entirely different architecture based on dynamically allocated resources. It aligns perfectly with the
serverless computing model:
• No infrastructure to manage.
• No idle capacity.
• High-availability.
• Automatic scaling.
• Cost is based on actual usage.
The new consumption tier is a great choice for cloud-native systems that expose serverless resources
as APIs.
Real-time systems are often characterized by high-frequency data flows and large numbers of
concurrent client connections. Manually implementing real-time connectivity can quickly become
complex, requiring non-trivial infrastructure to ensure scalability and reliable messaging to connected
clients. You could find yourself managing an instance of Azure Redis Cache and a set of load
balancers configured with sticky sessions for client affinity.
Azure SignalR Service is a fully managed Azure service that simplifies real-time communication for
your cloud-native applications. Technical implementation details like capacity provisioning, scaling,
and persistent connections are abstracted away. They’re handled for you with a 99.9% service-level
agreement. You focus on application features, not infrastructure plumbing.
Once enabled, a cloud-based HTTP service can push content updates directly to connected clients,
including browser, mobile and desktop applications. Clients are updated without the need to poll the
server. Azure SignalR abstracts the transport technologies that create real-time connectivity, including
WebSockets, Server-Sent Events, and Long Polling. Developers focus on sending messages to all or
specific subsets of connected clients.
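As a rough illustration, the following sketch shows the server side of this model. The hub name, route, and “priceChanged” client method are assumptions for the example; AddAzureSignalR() reads the service connection string from configuration.

using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.SignalR;
using Microsoft.Extensions.DependencyInjection;

// Clients connect to this hub; the service pushes updates to them without polling.
public class NotificationsHub : Hub
{
}

public class Startup
{
    public void ConfigureServices(IServiceCollection services) =>
        // AddAzureSignalR() offloads connection management to Azure SignalR Service.
        services.AddSignalR().AddAzureSignalR();

    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();
        app.UseEndpoints(endpoints => endpoints.MapHub<NotificationsHub>("/hubs/notifications"));
    }
}

// Any component can broadcast through the hub context - here, a price update.
public class PriceNotifier
{
    private readonly IHubContext<NotificationsHub> _hub;

    public PriceNotifier(IHubContext<NotificationsHub> hub) => _hub = hub;

    public Task PublishPriceChangedAsync(string productId, decimal price) =>
        _hub.Clients.All.SendAsync("priceChanged", productId, price);
}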
Figure 4-7 shows a set of HTTP Clients connecting to a Cloud-native application with Azure SignalR
enabled.
Another advantage of Azure SignalR Service comes with implementing Serverless cloud-native
services. Perhaps your code is executed on demand with Azure Functions triggers. This scenario can
be tricky because your code doesn’t maintain long connections with clients. Azure SignalR Service can
handle this situation since the service already manages connections for you.
Azure SignalR Service closely integrates with other Azure services, such as Azure SQL Database,
Service Bus, or Redis Cache, opening up many possibilities for your cloud-native applications.
Service-to-service communication
Moving from the front-end client, we now address how back-end microservices communicate with each
other.
When constructing a cloud-native application, you’ll want to be sensitive to how back-end services
communicate with each other. Ideally, the less inter-service communication, the better. However,
avoidance isn’t always possible as back-end services often rely on one another to complete an
operation.
There are several widely accepted approaches to implementing cross-service communication. The type
of communication interaction will often determine the best approach:
• Query – when a calling microservice requires a response from a called microservice, such as,
“Hey, give me the buyer information for a given customer Id.”
• Command – when a calling microservice needs another microservice to perform an action, such
as, “Hey, create a shipment for this approved order.” The caller typically doesn’t require a
response and can fire-and-forget the message.
• Event – when a microservice, called the publisher, raises an event that state has changed or an
action has occurred. Other microservices that are interested, called subscribers, can react to the
event appropriately. The publisher and the subscribers aren’t aware of each other.
Microservice systems typically use a combination of these interaction types when executing
operations that require cross-service interaction. Let’s take a close look at each and how you might
implement them.
Queries
Many times, one microservice might need to query another, requiring an immediate response to
complete an operation. A shopping basket microservice may need product information and a price to
add an item to its basket. There are many approaches for implementing query operations.
Request/Response Messaging
One option for implementing this scenario is for the calling back-end microservice to make direct
HTTP requests to the microservices it needs to query, shown in Figure 4-8.
While direct HTTP calls between microservices are relatively simple to implement, care should be
taken to minimize this practice. To start, these calls are always synchronous and will block the
operation until a result is returned or the request times out. What were once self-contained,
independent services, able to evolve independently and deploy frequently, now become coupled to
each other. As coupling among microservices increases, their architectural benefits diminish.
You can certainly imagine the risk in the design shown in the previous image. What happens if Step
#3 fails? Or Step #8 fails? How do you recover? What if Step #6 is slow because the underlying service
is busy? How do you continue? Even if all works correctly, think of the latency this call would incur,
which is the sum of the latency of each step.
The large degree of coupling in the previous image suggests the services weren’t optimally modeled.
It would behoove the team to revisit their design.
The Service Aggregator pattern isolates an operation that makes calls to multiple back-end microservices, centralizing its
logic into a specialized microservice. The purple checkout aggregator microservice in the previous
figure orchestrates the workflow for the Checkout operation. It includes calls to several back-end
microservices in a sequenced order. Data from the workflow is aggregated and returned to the caller.
While it still implements direct HTTP calls, the aggregator microservice reduces direct dependencies
among back-end microservices.
Request/Reply Pattern
Another approach for decoupling synchronous HTTP messages is a Request-Reply Pattern, which uses
queuing communication. Communication using a queue is always a one-way channel, with a producer
sending the message and consumer receiving it. With this pattern, both a request queue and response
queue are implemented, shown in Figure 4-11.
Here, the message producer creates a query-based message that contains a unique correlation ID and
places it into a request queue. The consuming service dequeues the message, processes it, and places
the response into the response queue with the same correlation ID. The producer service dequeues
the message, matches it with the correlation ID and continues processing. We cover queues in detail
in the next section.
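A minimal sketch of this pattern with Azure Service Bus might look like the following. The queue names are illustrative, and error handling and the receiver plumbing are omitted for brevity.

using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public class PriceQueryClient
{
    private readonly ServiceBusClient _client;

    public PriceQueryClient(ServiceBusClient client) => _client = client;

    // Producer side: send a query message stamped with a correlation ID.
    public async Task<string> SendQueryAsync(string productId)
    {
        var correlationId = Guid.NewGuid().ToString();

        var sender = _client.CreateSender("price-requests");
        await sender.SendMessageAsync(new ServiceBusMessage(productId)
        {
            CorrelationId = correlationId,
            ReplyTo = "price-responses"   // tells the consumer where to send the reply
        });

        return correlationId;
    }

    // Consumer side (in the called service): reply on the response queue,
    // echoing the correlation ID so the producer can match the answer.
    public async Task HandleRequestAsync(ServiceBusReceivedMessage request, string price)
    {
        var responder = _client.CreateSender(request.ReplyTo);
        await responder.SendMessageAsync(new ServiceBusMessage(price)
        {
            CorrelationId = request.CorrelationId
        });
    }
}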
Commands
Another type of communication interaction is a command. A microservice may need another
microservice to perform an action. The Ordering microservice may need the Shipping microservice to
create a shipment for an approved order. In Figure 4-12, one microservice, called a Producer, sends a
message to another microservice, the Consumer, commanding it to do something.
Most often, the Producer doesn’t require a response and can fire-and-forget the message. If a reply is
needed, the Consumer sends a separate message back to Producer on another channel. A command
message is best sent asynchronously with a message queue, supported by a lightweight message
broker. In the previous diagram, note how a queue separates and decouples both services.
A message queue is an intermediary construct through which a producer and consumer pass a
message. Queues implement an asynchronous, point-to-point messaging pattern. The Producer
knows where a command needs to be sent and routes appropriately. The queue guarantees that a
message is processed by exactly one of the consumer instances that are reading from the channel. In
this scenario, either the producer or consumer service can scale out without affecting the other. As
well, technologies can be disparate on each side, meaning that we might have a Java microservice
calling a Golang microservice.
In chapter 1, we talked about backing services. Backing services are ancillary resources upon which
cloud-native systems depend. Message queues are backing services. The Azure cloud supports two
types of message queues that your cloud-native systems can consume to implement command
messaging: Azure Storage Queues and Azure Service Bus Queues.
Azure Storage Queues feature a REST-based queuing mechanism with reliable and persistent
messaging. They provide a minimal feature set, but are inexpensive and store millions of messages.
Their capacity ranges up to 500 TB. A single message can be up to 64 KB in size.
You can access messages from anywhere in the world via authenticated calls using HTTP or HTTPS.
Storage queues can scale out to large numbers of concurrent clients to handle traffic spikes.
However, the service has some limitations:
• A message can only persist for seven days before it’s automatically removed.
In the previous figure, note how storage queues store their messages in the underlying Azure Storage
account.
For developers, Microsoft provides several client and server-side libraries for Storage queue
processing. Most major platforms are supported including .NET, Java, JavaScript, Ruby, Python, and
Go. Developers should never communicate directly with these libraries. Doing so will tightly couple
your microservice code to the Azure Storage Queue service. It’s a better practice to insulate the
implementation details of the API. Introduce an intermediation layer, or intermediate API, that exposes
generic operations and encapsulates the concrete library. This loose coupling enables you to swap out
one queuing service for another without having to make changes to the mainline service code.
Azure Storage queues are an economical option to implement command messaging in your cloud-
native applications, especially when a queue size will exceed 80 GB or a simple feature set is
acceptable. You only pay for the storage of the messages; there are no fixed hourly charges.
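Tying this back to the intermediation advice above, here’s a minimal sketch of a generic queue abstraction backed by an Azure Storage queue. The interface and names are illustrative; only the QueueClient calls come from the Azure.Storage.Queues library.

using System.Threading.Tasks;
using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;

// Generic abstraction so mainline service code never depends on a specific queue technology.
public interface ICommandQueue
{
    Task EnqueueAsync(string message);
    Task<string> DequeueAsync();
}

// Concrete implementation backed by an Azure Storage queue.
public class StorageCommandQueue : ICommandQueue
{
    private readonly QueueClient _queue;

    public StorageCommandQueue(string connectionString, string queueName) =>
        _queue = new QueueClient(connectionString, queueName);

    public Task EnqueueAsync(string message) => _queue.SendMessageAsync(message);

    public async Task<string> DequeueAsync()
    {
        QueueMessage[] messages = (await _queue.ReceiveMessagesAsync(maxMessages: 1)).Value;
        if (messages.Length == 0)
        {
            return null;   // queue was empty
        }

        var received = messages[0];

        // Delete the message so another consumer doesn't process it again.
        await _queue.DeleteMessageAsync(received.MessageId, received.PopReceipt);
        return received.MessageText;
    }
}

Swapping this implementation for one built on Azure Service Bus would require no changes to code that depends only on ICommandQueue.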
Sitting atop a robust message infrastructure, Azure Service Bus supports a brokered messaging model.
Messages are reliably stored in a broker (the queue) until received by the consumer. The queue
guarantees First-In/First-Out (FIFO) message delivery, respecting the order in which messages were
added to the queue.
The size of a message can be much larger, up to 256 KB. Messages are persisted in the queue for an
unlimited period of time. Service Bus supports not only HTTP-based calls, but also provides full support for the AMQP 1.0 protocol.
Service Bus provides a rich set of features, including transaction support and a duplicate detection
feature. The queue guarantees “at most once delivery” per message. It automatically discards a
message that has already been sent. If a producer is in doubt, it can resend the same message, and
Service Bus guarantees that only one copy will be processed. Duplicate detection frees you from
having to build additional infrastructure plumbing.
Two more enterprise features are partitioning and sessions. A conventional Service Bus queue is
handled by a single message broker and stored in a single message store. But, Service Bus Partitioning
spreads the queue across multiple message brokers and message stores. The overall throughput is no
longer limited by the performance of a single message broker or messaging store. A temporary
outage of a messaging store doesn’t render a partitioned queue unavailable.
Service Bus Sessions provide a way to group related messages. Imagine a workflow scenario where
messages must be processed together and the operation completed at the end. To take advantage,
sessions must be explicitly enabled for the queue and each related message must contain the same
session ID.
However, there are some important caveats: Service Bus queue size is limited to 80 GB, which is much
smaller than what’s available from storage queues. Additionally, Service Bus queues incur a base cost
and charge per operation.
In the previous figure, note the point-to-point relationship. Two instances of the same producer are
enqueuing messages into a single Service Bus queue. Each message is consumed by only one of three
consumer instances on the right. Next, we discuss how to implement messaging where different
consumers may all be interested in the same message.
Events
Message queuing is an effective way to implement communication where a producer can
asynchronously send a consumer a message. However, what happens when many different consumers
are interested in the same message? To address this scenario, we move to the third type of message
interaction, the event. One microservice announces that an action has occurred. Other microservices, if
interested, react to the action, or event.
Eventing is a two-step process. For a given state change, a microservice publishes an event to a
message broker, making it available to any other interested microservice. The interested microservice
is notified by subscribing to the event in the message broker. You use the Publish/Subscribe pattern
to implement event-based communication.
Figure 4-15 shows a shopping basket microservice publishing an event with two other microservices
subscribing to it.
Note the event bus component that sits in the middle of the communication channel. It’s a custom
class that encapsulates the message broker and decouples it from the underlying application. The
ordering and inventory microservices operate independently, with no knowledge of each
other, nor of the shopping basket microservice. When the registered event is published to the event bus,
they act upon it.
With eventing, we move from queuing technology to topics. A topic is similar to a queue, but supports
a one-to-many messaging pattern. One microservice publishes a message. Multiple subscribing
microservices can choose to receive and act upon that message. Figure 4-16 shows a topic
architecture.
In the previous figure, publishers send messages to the topic. At the end, subscribers receive
messages from subscriptions. In the middle, the topic forwards messages to subscriptions based on a
set of rules, shown in dark blue boxes. Rules act as a filter that forward specific messages to a
subscription. Here, a “GetPrice” event would be sent to the price and logging subscriptions as the
logging subscription has chosen to receive all messages. A “GetInformation” event would be sent to
the information and logging subscriptions.
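The following sketch shows one way to express rules like these in code with Azure Service Bus topics. The topic, subscription, and “EventType” property names are assumptions for the example; the filter types come from the Azure.Messaging.ServiceBus.Administration namespace.

using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;
using Azure.Messaging.ServiceBus.Administration;

public static class TopicSetup
{
    public static async Task CreateSubscriptionsAsync(string connectionString)
    {
        var admin = new ServiceBusAdministrationClient(connectionString);

        // The "price" subscription only receives "GetPrice" events.
        await admin.CreateSubscriptionAsync(
            new CreateSubscriptionOptions("events", "price"),
            new CreateRuleOptions("PriceOnly", new SqlRuleFilter("EventType = 'GetPrice'")));

        // The "logging" subscription receives every message published to the topic.
        await admin.CreateSubscriptionAsync(
            new CreateSubscriptionOptions("events", "logging"),
            new CreateRuleOptions("All", new TrueRuleFilter()));
    }

    // The publisher stamps each message with the property the rules filter on.
    public static async Task PublishAsync(ServiceBusClient client, string eventType, string body)
    {
        var sender = client.CreateSender("events");
        var message = new ServiceBusMessage(body);
        message.ApplicationProperties["EventType"] = eventType;
        await sender.SendMessageAsync(message);
    }
}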
The Azure cloud supports two different topic services: Azure Service Bus Topics and Azure EventGrid.
Many advanced features from Azure Service Bus queues are also available for topics, including
Duplicate Detection and Transaction support. By default, Service Bus topics are handled by a single
message broker and stored in a single message store. But, Service Bus Partitioning scales a topic by
spreading it across many message brokers and message stores.
Scheduled Message Delivery tags a message with a specific time for processing. The message won’t
appear in the topic before that time. Message Deferral enables you to defer a retrieval of a message
to a later time. Both are commonly used in workflow processing scenarios where operations are
processed in a particular order. You can postpone processing of received messages until prior work
has been completed.
Service Bus topics are a robust and proven technology for enabling publish/subscribe communication
in your cloud-native systems.
As a centralized eventing backplane, or pipe, Event Grid reacts to events inside Azure resources and
from your own services.
Event notifications are published to an Event Grid Topic, which, in turn, routes each event to a
subscription. Subscribers map to subscriptions and consume the events. Like Service Bus, Event Grid
supports a filtered subscriber model where a subscription sets rules for the events it wishes to receive.
Event Grid provides fast throughput with a guarantee of 10 million events per second enabling near
real-time delivery - far more than what Azure Service Bus can generate.
A sweet spot for Event Grid is its deep integration into the fabric of Azure infrastructure. An Azure
resource, such as Cosmos DB, can publish built-in events directly to other interested Azure resources -
without the need for custom code. Event Grid can publish events from an Azure Subscription,
Resource Group, or Service, giving developers fine-grained control over the lifecycle of cloud
resources. However, Event Grid isn’t limited to Azure. It’s an open platform that can consume custom
HTTP events published from applications or third-party services and route events to external
subscribers.
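As a rough example of that openness, the sketch below publishes a custom event to an Event Grid topic using the Azure.Messaging.EventGrid client library. The endpoint, key, subject, and event type shown are illustrative.

using System;
using System.Threading.Tasks;
using Azure;
using Azure.Messaging.EventGrid;

public static class BasketEventPublisher
{
    public static async Task PublishCheckoutAsync(string topicEndpoint, string topicKey)
    {
        var client = new EventGridPublisherClient(
            new Uri(topicEndpoint),
            new AzureKeyCredential(topicKey));

        // Custom event: subject, event type, data version, and payload are all up to you.
        var checkoutEvent = new EventGridEvent(
            "baskets/12345",
            "Basket.CheckedOut",
            "1.0",
            new { BasketId = "12345", Total = 42.50m });

        await client.SendEventAsync(checkoutEvent);
    }
}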
When publishing and subscribing to native events from Azure resources, no coding is required. With
simple configuration, you can integrate events from one Azure resource to another leveraging built-in
plumbing for Topics and Subscriptions. Figure 4-17 shows the anatomy of Event Grid.
Service Bus implements an older style pull model in which the downstream subscriber actively polls
the topic subscription for new messages. On the upside, this approach gives the subscriber full control
of the pace at which it processes messages. It controls when and how many messages to process at
any given time. Unread messages remain in the subscription until processed. A significant
shortcoming is the latency between the time the event is generated and the polling operation that
pulls that message to the subscriber for processing. Also, the overhead of constant polling for the
next event consumes resources and money.
EventGrid, however, is different. It implements a push model in which events are sent to the
EventHandlers as received, giving near real-time event delivery. It also reduces cost as the service is
triggered only when it’s needed to consume an event – not continually as with polling. That said, an
event handler must handle the incoming load and provide throttling mechanisms to protect itself
from becoming overwhelmed. Many Azure services that consume these events, such as Azure
Functions and Logic Apps, provide automatic autoscaling capabilities to handle increased loads.
Event Grid is a fully managed serverless cloud service. It dynamically scales based on your traffic and
charges you only for your actual usage, not pre-purchased capacity. The first 100,000 operations per
month are free – operations being defined as event ingress (incoming event notifications),
subscription delivery attempts, management calls, and filtering by subject. With 99.99% availability,
EventGrid guarantees the delivery of an event within a 24-hour period, with built-in retry functionality
for unsuccessful delivery. Undelivered messages can be moved to a “dead-letter” queue for resolution.
Unlike Azure Service Bus, Event Grid is tuned for fast performance and doesn’t support features like
ordered messaging, transactions, and sessions.
Azure Event Hub is a data streaming platform and event ingestion service that collects, transforms,
and stores events. It’s fine-tuned to capture streaming data, such as continuous event notifications
emitted from a telemetry context. The service is highly scalable and can store and process millions of
events per second. Shown in Figure 4-18, it’s often a front door for an event pipeline, decoupling
ingest stream from event consumption.
Event Hub supports low latency and configurable time retention. Unlike queues and topics, Event
Hubs keep event data after it’s been read by a consumer. This feature enables other data analytic
services, both internal and external, to replay the data for further analysis. Events stored in an event hub
are only deleted upon expiration of the retention period, which is one day by default, but
configurable.
Event Hub supports common event publishing protocols including HTTPS and AMQP. It also supports
Kafka 1.0. Existing Kafka applications can communicate with Event Hub using the Kafka protocol
providing an alternative to managing large Kafka clusters. Many open-source cloud-native systems
embrace Kafka.
Event Hubs implements message streaming through a partitioned consumer model in which each
consumer only reads a specific subset, or partition, of the message stream. This pattern enables
tremendous horizontal scale for event processing and provides other stream-focused features that are
unavailable in queues and topics. A partition is an ordered sequence of events that is held in an event
hub. As newer events arrive, they’re added to the end of this sequence. Figure 4-19 shows partitioning
in an Event Hub.
Instead of reading from the same resource, each consumer group reads across a subset, or partition,
of the message stream.
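The sketch below illustrates the partitioned model from the producer side using the Azure.Messaging.EventHubs library. The hub name and the use of a device ID as the partition key are assumptions for the example; events that share a key land in the same partition, so their ordering is preserved.

using System.Text;
using System.Threading.Tasks;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

public static class TelemetryPublisher
{
    public static async Task SendAsync(string connectionString, string deviceId, string payload)
    {
        await using var producer = new EventHubProducerClient(connectionString, "telemetry");

        // Events that share a partition key are routed to the same partition,
        // so readings from a single device stay in order.
        var options = new SendEventOptions { PartitionKey = deviceId };

        await producer.SendAsync(
            new[] { new EventData(Encoding.UTF8.GetBytes(payload)) },
            options);
    }
}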
gRPC
So far in this book, we’ve focused on REST-based communication. We’ve seen that REST is a flexible
architectural style that defines CRUD-based operations against entity resources. Clients interact with
resources across HTTP with a request/response communication model. While REST is widely
implemented, a newer communication technology, gRPC, has gained tremendous momentum across
the cloud-native community.
What is gRPC?
gRPC is a modern, high-performance framework that evolves the age-old remote procedure call (RPC)
protocol. At the application level, gRPC streamlines messaging between clients and back-end services.
Originating from Google, gRPC is open source and part of the Cloud Native Computing Foundation
(CNCF) ecosystem of cloud-native offerings. CNCF considers gRPC an incubating project. Incubating
means end users are using the technology in production applications, and the project has a healthy
number of contributors.
A typical gRPC client app will expose a local, in-process function that implements a business
operation. Under the covers, that local function invokes another function on a remote machine. What
appears to be a local call essentially becomes a transparent out-of-process call to a remote service.
The RPC plumbing abstracts the point-to-point networking communication, serialization, and
execution between computers.
In cloud-native applications, developers often work across programming languages, frameworks, and
technologies. This interoperability complicates message contracts and the plumbing required for
cross-platform communication. gRPC provides a “uniform horizontal layer” that abstracts these
concerns. Developers code in their native platform focused on business functionality, while gRPC
handles communication plumbing.
gRPC offers comprehensive support across most popular development stacks, including Java,
JavaScript, C#, Go, Swift, and NodeJS.
gRPC Benefits
gRPC uses HTTP/2 for its transport protocol. While compatible with HTTP 1.1, HTTP/2 features many
advanced capabilities:
• A binary framing protocol for data transport - unlike HTTP 1.1, which is text based.
• Multiplexing support for sending multiple parallel requests over the same connection - HTTP 1.1
limits processing to one request/response message at a time.
• Bidirectional full-duplex communication for sending both client requests and server responses
simultaneously.
• Built-in streaming enabling requests and responses to asynchronously stream large data sets.
• Header compression that reduces network usage.
Protocol Buffers
gRPC embraces an open-source technology called Protocol Buffers. They provide a highly efficient
and platform-neutral serialization format for serializing structured messages that services send to
each other. Using a cross-platform Interface Definition Language (IDL), developers define a service
contract for each microservice. The contract, implemented as a text-based .proto file, describes the
methods, inputs, and outputs for each service. The same contract file can be used for gRPC clients and
services built on different development platforms.
Using the proto file, the Protobuf compiler, protoc, generates both client and service code for your
target platform. The code includes the following components:
• Strongly typed objects, shared by the client and service, that represent the service operations
and data elements for a message.
• A strongly typed base class with the required network plumbing that the remote gRPC service
can inherit and extend.
• A client stub that contains the required plumbing to invoke the remote gRPC service.
At runtime, each message is serialized as a standard Protobuf representation and exchanged between
the client and remote service. Unlike JSON or XML, Protobuf messages are serialized as compiled
binary bytes.
The book, gRPC for WCF Developers, available from the Microsoft Architecture site, provides in-depth
coverage of gRPC and Protocol Buffers.
You can build gRPC services in .NET with any of the following tooling:
• Visual Studio 2019, version 16.3 or later, with the web development workload installed.
• Visual Studio Code
• the dotnet CLI
The SDK includes tooling for endpoint routing, built-in IoC, and logging. The open-source Kestrel web
server supports HTTP/2 connections. Figure 4-20 shows a Visual Studio 2019 template that scaffolds a
skeleton project for a gRPC service. Note how .NET fully supports Windows, Linux, and macOS.
Figure 4-21 shows the skeleton gRPC service generated from the built-in scaffolding included in
Visual Studio 2019.
In the previous figure, note the proto description file and service code. As you’ll see shortly, Visual
Studio generates additional configuration in both the Startup class and underlying project file.
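For reference, the service class the template generates looks roughly like the following. The Greeter, HelloRequest, and HelloReply types are produced by the Protobuf tooling from the template’s greet.proto contract, so they exist only after code generation.

using System.Threading.Tasks;
using Grpc.Core;

// GreeterService inherits the base class generated from greet.proto and
// overrides the RPC method defined in the contract.
public class GreeterService : Greeter.GreeterBase
{
    public override Task<HelloReply> SayHello(HelloRequest request, ServerCallContext context)
    {
        return Task.FromResult(new HelloReply
        {
            Message = $"Hello {request.Name}"
        });
    }
}

The template also registers gRPC in ConfigureServices with services.AddGrpc() and maps the service to an endpoint with endpoints.MapGrpcService<GreeterService>().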
gRPC usage
Favor gRPC for synchronous, high-throughput, low-latency communication between back-end microservices, especially when you control both ends of the conversation.
gRPC implementation
The microservice reference architecture, eShop on Containers, from Microsoft, shows how to
implement gRPC services in .NET applications. Figure 4-22 presents the back-end architecture.
In the previous figure, note how eShop embraces the Backend for Frontends pattern (BFF) by
exposing multiple API gateways. We discussed the BFF pattern earlier in this chapter. Pay close
attention to the Aggregator microservice (in gray) that sits between the Web-Shopping API Gateway
and backend Shopping microservices. The Aggregator receives a single request from a client,
dispatches it to several back-end microservices, and then aggregates the results into a single response for the client.
gRPC communication requires both client and server components. In the previous figure, note how
the Shopping Aggregator implements a gRPC client. The client makes synchronous gRPC calls (in red)
to backend microservices, each of which implement a gRPC server. Both the client and server take
advantage of the built-in gRPC plumbing from the .NET SDK. Client-side stubs provide the plumbing
to invoke remote gRPC calls. Server-side components provide gRPC plumbing that custom service
classes can inherit and consume.
Microservices that expose both a RESTful API and gRPC communication require multiple endpoints to
manage traffic. You would open an endpoint that listens for HTTP traffic for the RESTful calls and
another for gRPC calls. The gRPC endpoint must be configured for the HTTP/2 protocol that is
required for gRPC communication.
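One way to configure the two endpoints is in Kestrel, as in the sketch below. The port numbers are arbitrary, and GreeterService is the service from the earlier sketch; a real project would map its own controllers and gRPC services.

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Server.Kestrel.Core;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(web =>
            {
                web.ConfigureKestrel(kestrel =>
                {
                    // Port 5000 serves the RESTful API over HTTP 1.1.
                    kestrel.ListenAnyIP(5000, o => o.Protocols = HttpProtocols.Http1);

                    // Port 5001 serves gRPC, which requires HTTP/2.
                    kestrel.ListenAnyIP(5001, o => o.Protocols = HttpProtocols.Http2);
                });
                web.UseStartup<Startup>();
            })
            .Build()
            .Run();
}

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();
        services.AddGrpc();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();
        app.UseEndpoints(endpoints =>
        {
            endpoints.MapControllers();                  // RESTful endpoints
            endpoints.MapGrpcService<GreeterService>();  // gRPC endpoint
        });
    }
}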
Looking ahead
Looking ahead, gRPC will continue to gain traction for cloud-native systems. The performance
benefits and ease of development are compelling. However, REST will likely be around for a long time.
It excels for publicly exposed APIs and for backward compatibility reasons.
A more modern approach to microservice communication centers around a new and rapidly evolving
technology called Service Mesh. A service mesh is a configurable infrastructure layer with built-in
capabilities to handle service-to-service communication, resiliency, and many cross-cutting concerns.
It moves the responsibility for these concerns out of the microservices and into service mesh layer.
Communication is abstracted away from your microservices.
Note in the previous figure how messages are intercepted by a proxy that runs alongside each
microservice. Each proxy can be configured with traffic rules specific to the microservice. It
understands messages and can route them across your services and the outside world.
Along with managing service-to-service communication, the Service Mesh provides support for
service discovery and load balancing.
Once configured, a service mesh is highly functional. The mesh retrieves a corresponding pool of
instances from a service discovery endpoint. It sends a request to a specific service instance, recording
the latency and response type of the result. It chooses the instance most likely to return a fast
response based on different factors, including the observed latency for recent requests.
A service mesh manages traffic, communication, and networking concerns at the application level. It
understands messages and requests. A service mesh typically integrates with a container orchestrator.
Kubernetes supports an extensible architecture in which a service mesh can be added.
In chapter 6, we deep-dive into Service Mesh technologies including a discussion on its architecture
and available open-source implementations.
Summary
In this chapter, we discussed cloud-native communication patterns. We started by examining how
front-end clients communicate with back-end microservices. Along the way, we talked about API
Gateway platforms and real-time communication. We then looked at how microservices communicate
with other back-end services. We looked at both synchronous HTTP communication and
asynchronous messaging across services. We covered gRPC, an upcoming technology in the cloud-
native world. Finally, we introduced a new and rapidly evolving technology called Service Mesh that
can streamline microservice communication.
Special emphasis was placed on managed Azure services that can help implement communication in cloud-
native systems, including Azure Service Bus, Azure Event Grid, Azure Event Hubs, and Azure SignalR Service.
References
• .NET Microservices: Architecture for Containerized .NET applications
• gRPC Documentation
Experienced developers will easily recognize the architecture on the left-side of figure 5-1. In this
monolithic application, business service components collocate together in a shared services tier,
sharing data from a single relational database.
In many ways, a single database keeps data management simple. Querying data across multiple tables
is straightforward. Changes to data update together or they all rollback. ACID transactions guarantee
strong and immediate consistency.
Designing for cloud-native, we take a different approach. On the right-side of Figure 5-1, note how
business functionality segregates into small, independent microservices. Each microservice
encapsulates a specific business capability and its own data. The monolithic database decomposes
into a set of smaller data stores, each owned by a single microservice.
Database-per-microservice, why?
The database-per-microservice model provides many benefits, especially for systems that must evolve rapidly
and support massive scale. With this model…
Note in the previous figure how each microservice supports a different type of data store.
• The product catalog microservice consumes a relational database to accommodate the rich
relational structure of its underlying data.
• The shopping cart microservice consumes a distributed cache that supports its simple, key-value
data store.
• The ordering microservice consumes both a NoSQL document database for write operations
along with a highly denormalized key/value store to accommodate high-volumes of read
operations.
While encapsulating data into separate microservices can increase agility, performance, and scalability,
it also presents many challenges. In the next section, we discuss these challenges along with patterns
and practices to help overcome them.
Cross-service queries
While microservices are independent and focus on specific functional capabilities, like inventory,
shipping, or ordering, they frequently require integration with other microservices. Often the
integration involves one microservice querying another for data. Figure 5-3 shows the scenario.
In the preceding figure, we see a shopping basket microservice that adds an item to a user’s shopping
basket. While the data store for this microservice contains basket and line item data, it doesn’t
maintain product or pricing data. Instead, those data items are owned by the catalog and pricing
microservices. This aspect presents a problem. How can the shopping basket microservice add a
product to the user’s shopping basket when it doesn’t have product nor pricing data in its database?
One option discussed in Chapter 4 is a direct HTTP call from the shopping basket to the catalog and
pricing microservices. However, in chapter 4, we said synchronous HTTP calls couple microservices
together, reducing their autonomy and diminishing their architectural benefits.
We could also implement a request-reply pattern with separate inbound and outbound queues for
each service. However, this pattern is complicated and requires plumbing to correlate request and
response messages. While it does decouple the backend microservice calls, the calling service must
still synchronously wait for the call to complete. Network congestion, transient faults, or an
overloaded microservice can result in long-running and even failed operations.
With the Materialized View pattern, you place a local data table (known as a read model) in the shopping basket service.
This table contains a denormalized copy of the data needed from the product and pricing
microservices. Copying the data directly into the shopping basket microservice eliminates the need for
expensive cross-service calls. With the data local to the service, you improve the service’s response
time and reliability. Additionally, having its own copy of the data makes the shopping basket service
more resilient. If the catalog service should become unavailable, it wouldn’t directly impact the
shopping basket service. The shopping basket can continue operating with the data from its own
store.
The catch with this approach is that you now have duplicate data in your system. However,
strategically duplicating data in cloud-native systems is an established practice and not considered an
anti-pattern, or bad practice. Keep in mind that one and only one service can own a data set and have
authority over it. You’ll need to synchronize the read models when the system of record is updated.
Synchronization is typically implemented via asynchronous messaging with a publish/subscribe
pattern, as shown in Figure 5-4.
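To make the synchronization concrete, here's a minimal C# sketch of a handler inside the shopping basket service, assuming a hypothetical ProductPriceChanged integration event and a simple read-model abstraction; both types are placeholders rather than part of any specific framework.

using System;
using System.Threading.Tasks;

// Hypothetical integration event published by the catalog service when a price changes.
public record ProductPriceChanged(Guid ProductId, decimal NewPrice);

// Minimal abstraction over the basket service's local, denormalized read model (assumed).
public interface IBasketReadModelStore
{
    Task UpdatePriceAsync(Guid productId, decimal newPrice);
}

// Subscriber in the shopping basket service that keeps the local copy in sync.
public class ProductPriceChangedHandler
{
    private readonly IBasketReadModelStore _readModel;

    public ProductPriceChangedHandler(IBasketReadModelStore readModel) => _readModel = readModel;

    public Task HandleAsync(ProductPriceChanged priceChanged) =>
        // The catalog service remains the system of record; this only refreshes the local copy.
        _readModel.UpdatePriceAsync(priceChanged.ProductId, priceChanged.NewPrice);
}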
Distributed transactions
While querying data across microservices is difficult, implementing a transaction across several
microservices is even more complex. The inherent challenge of maintaining data consistency across
independent data sources in different microservices can't be overstated. The lack of distributed
transactions in cloud-native applications means that you must manage distributed transactions
programmatically. You move from a world of immediate consistency to that of eventual consistency.
In the preceding figure, five independent microservices participate in a distributed transaction that
creates an order. Each microservice maintains its own data store and implements a local transaction
for its store. To create the order, the local transaction for each individual microservice must succeed,
or all must abort and roll back the operation. While built-in transactional support is available inside
each of the microservices, there’s no support for a distributed transaction that would span across all
five services to keep data consistent.
A popular pattern for adding distributed transactional support is the Saga pattern. It’s implemented
by grouping local transactions together programmatically and sequentially invoking each one. If any
of the local transactions fail, the Saga aborts the operation and invokes a set of compensating
transactions. The compensating transactions undo the changes made by the preceding local
transactions and restore data consistency. Figure 5-6 shows a failed transaction with the Saga pattern.
Saga patterns are typically choreographed as a series of related events, or orchestrated as a set of
related commands. In Chapter 4, we discussed the service aggregator pattern that would be the
foundation for an orchestrated saga implementation. We also discussed eventing along with Azure
Service Bus and Azure Event Grid topics that would be a foundation for a choreographed saga
implementation.
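As a rough illustration only, the following C# sketch shows the core mechanics of an orchestrated saga: local transactions run in sequence, and if one fails, compensating actions run in reverse order. The step types are hypothetical and not tied to any particular framework.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// One saga step: a local transaction plus the compensating action that undoes it.
public record SagaStep(Func<Task> Execute, Func<Task> Compensate);

public static class OrderSagaOrchestrator
{
    // Invokes each local transaction in order; on failure, runs compensations in reverse.
    public static async Task RunAsync(IReadOnlyList<SagaStep> steps)
    {
        var completed = new Stack<SagaStep>();
        try
        {
            foreach (var step in steps)
            {
                await step.Execute();
                completed.Push(step);
            }
        }
        catch (Exception)
        {
            while (completed.Count > 0)
            {
                await completed.Pop().Compensate();
            }
            throw; // surface the failure after data consistency has been restored
        }
    }
}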
CQRS
CQRS, or Command and Query Responsibility Segregation, is an architectural pattern that can help maximize performance, scalability, and security. The
pattern separates operations that read data from those operations that write data.
For normal scenarios, the same entity model and data repository object are used for both read and
write operations.
However, a high-volume data scenario can benefit from separate models and data tables for reads
and writes. To improve performance, the read operation could query against a highly denormalized
representation of the data to avoid expensive repetitive table joins and table locks. The write
operation, known as a command, would update against a fully normalized representation of the data
that would guarantee consistency. You then need to implement a mechanism to keep both
representations in sync. Typically, whenever the write table is modified, it publishes an event that
replicates the modification to the read table.
This separation enables reads and writes to scale independently. Read operations use a schema
optimized for queries, while the writes use a schema optimized for updates. Read queries go against
denormalized data, while complex business logic can be applied to the write model. As well, you
might impose tighter security on write operations than those exposing reads.
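As a minimal, illustrative C# sketch, the write side and read side of a CQRS design might be modeled with separate types; the names below are placeholders, and the read model would be kept in sync by the event mechanism described above.

using System;
using System.Threading.Tasks;

// Write side: a command mutates the fully normalized model and publishes a change event.
public record PlaceOrderCommand(Guid OrderId, Guid CustomerId, decimal Total);

public interface IOrderCommandHandler
{
    Task HandleAsync(PlaceOrderCommand command);
}

// Read side: queries return a denormalized view shaped for the UI, not the write tables.
public record OrderSummary(Guid OrderId, string CustomerName, decimal Total, string Status);

public interface IOrderQueryService
{
    Task<OrderSummary?> GetOrderSummaryAsync(Guid orderId);
}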
Implementing CQRS can improve application performance for cloud-native services. However, it does
result in a more complex design. Apply this principle carefully and strategically to those sections of
your cloud-native application that will benefit from it. For more on CQRS, see the Microsoft book .NET
Microservices: Architecture for Containerized .NET Applications.
Event sourcing
Another approach to optimizing high-volume data scenarios involves Event Sourcing.
A system typically stores the current state of a data entity. If a user changes their phone number, for
example, the customer record is updated with the new number. We always know the current state of a
data entity, but each update overwrites the previous state.
In most cases, this model works fine. In high-volume systems, however, the overhead from transactional locking and frequent update operations can degrade database performance and responsiveness, and limit scalability.
Event Sourcing takes a different approach to capturing data. Each operation that affects data is
persisted to an event store. Instead of updating the state of a data record, we append each change to
a sequential list of past events - similar to an accountant’s ledger. The Event Store becomes the
system of record for the data. It’s used to propagate various materialized views within the bounded
context of a microservice. Figure 5-8 shows the pattern.
In the previous figure, note how each entry (in blue) for a user’s shopping cart is appended to an
underlying event store. In the adjoining materialized view, the system projects the current state by
replaying all the events associated with each shopping cart. This view, or read model, is then exposed
back to the UI. Events can also be integrated with external systems and applications or queried to
determine the current state of an entity. With this approach, you maintain history. You know not only
the current state of an entity, but also how you reached this state.
Mechanically speaking, event sourcing simplifies the write model. There are no updates or deletes.
Appending each data entry as an immutable event minimizes contention, locking, and concurrency
conflicts associated with relational databases. Building read models with the materialized view pattern
enables you to decouple the view from the write model and choose the best data store to optimize
the needs of your application UI.
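The following C# sketch illustrates those mechanics with hypothetical shopping cart events and a projection that replays the stream to build a read model; a production system would use a dedicated event store rather than an in-memory list.

using System;
using System.Collections.Generic;

// Immutable events appended to a cart's stream (hypothetical event types).
public abstract record CartEvent(Guid CartId);
public record ItemAdded(Guid CartId, string Sku, int Quantity) : CartEvent(CartId);
public record ItemRemoved(Guid CartId, string Sku) : CartEvent(CartId);

// A materialized view projected by replaying the event stream.
public class CartView
{
    public Dictionary<string, int> Items { get; } = new();

    public static CartView Project(IEnumerable<CartEvent> stream)
    {
        var view = new CartView();
        foreach (var cartEvent in stream)
        {
            switch (cartEvent)
            {
                case ItemAdded added:
                    view.Items[added.Sku] = view.Items.GetValueOrDefault(added.Sku) + added.Quantity;
                    break;
                case ItemRemoved removed:
                    view.Items.Remove(removed.Sku);
                    break;
            }
        }
        return view;
    }
}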
For this pattern, consider a data store that directly supports event sourcing. Azure Cosmos DB,
MongoDB, Cassandra, CouchDB, and RavenDB are good candidates.
As with all patterns and technologies, implement strategically and when needed. While event sourcing
can provide increased performance and scalability, it comes at the expense of complexity and a
learning curve.
Relational vs. NoSQL data
Relational databases have been a prevalent technology for decades. They're mature, proven, and
widely implemented. Competing database products, tooling, and expertise abound. Relational
databases provide a store of related data tables. These tables have a fixed schema, use SQL
(Structured Query Language) to manage data, and support ACID guarantees.
NoSQL databases refer to high-performance, non-relational data stores. They excel in their ease-of-use,
scalability, resilience, and availability characteristics. Instead of joining tables of normalized data,
NoSQL stores unstructured or semi-structured data, often in key-value pairs or JSON documents.
NoSQL databases typically don't provide ACID guarantees beyond the scope of a single database
partition. High-volume services that require sub-second response time favor NoSQL data stores.
The impact of NoSQL technologies for distributed cloud-native systems can’t be overstated. The
proliferation of new data technologies in this space has disrupted solutions that once exclusively
relied on relational databases.
NoSQL databases include several different models for accessing and managing data, each suited to
specific use cases. Figure 5-9 presents four common models.
Model Characteristics
Document Store Data and metadata are stored hierarchically in JSON-based documents inside the database.
Key Value Store The simplest of the NoSQL databases, data is represented as a collection of key-value pairs.
Wide-Column Store Related data is stored as a set of nested key/value pairs within a single column.
Graph Store Data is stored in a graph structure as node, edge, and data properties.
The CAP theorem states that distributed data systems will offer a trade-off between consistency,
availability, and partition tolerance, and that any database can only guarantee two of the three
properties:
• Consistency. Every node in the cluster responds with the most recent data, even if the system
must block the request until all replicas update. If you query a “consistent system” for an item
that is currently updating, you’ll wait for that response until all replicas successfully update.
However, you’ll receive the most current data.
• Availability. Every node returns an immediate response, even if that response isn’t the most
recent data. If you query an “available system” for an item that is updating, you’ll get the best
possible answer the service can provide at that moment.
• Partition Tolerance. Guarantees the system continues to operate even if a replicated data node
fails or loses connectivity with other replicated data nodes.
Relational databases typically provide consistency and availability, but not partition tolerance. They’re
typically provisioned to a single server and scale vertically by adding more resources to the machine.
Many relational database systems support built-in replication features where copies of the primary
database can be made to other secondary server instances. Write operations are made to the primary
instance and replicated to each of the secondaries. Upon a failure, the primary instance can fail over
to a secondary to provide high availability. Secondaries can also be used to distribute read operations.
Data can also be horizontally partitioned across multiple nodes, such as with sharding. But, sharding dramatically increases operational overhead by splitting data across many pieces that can't easily communicate. It can be costly and time-consuming to manage, and it can end up hurting performance, table joins, and referential integrity.
If data replicas were to lose network connectivity in a “highly consistent” relational database cluster,
you wouldn’t be able to write to the database. The system would reject the write operation as it can’t
replicate that change to the other data replica. Every data replica has to update before the transaction
can complete.
NoSQL databases typically support high availability and partition tolerance. They scale out
horizontally, often across commodity servers. This approach provides tremendous availability, both
within and across geographical regions at a reduced cost. You partition and replicate data across
these machines, or nodes, providing redundancy and fault tolerance. The downside is consistency. A
change to data on one NoSQL node can take some time to propagate to other nodes. Typically, a
NoSQL database node will provide an immediate response to a query - even if the data that is
presented is stale and hasn’t updated yet.
If data replicas were to lose connectivity in a “highly available” NoSQL database cluster, you could still
complete a write operation to the database. The database cluster would allow the write operation and
update each data replica as it becomes available.
This kind of result is known as eventual consistency, a characteristic of distributed data systems where
ACID transactions aren’t supported. It’s a brief delay between the update of a data item and time that
it takes to propagate that update to each of the replica nodes. Under normal conditions, the lag is
typically short, but can increase when problems arise. For example, what would happen if you were to
update a product item in a NoSQL database in the United States and query that same data item from
a replica node in Europe? You would receive the earlier product information, until the cluster updates
the European node with the product change. By immediately returning a query result and not waiting
for all replica nodes to update, you gain enormous scale and volume, but with the possibility of
presenting older data.
High availability and massive scalability are often more critical to the business than strong
consistency. Developers can implement techniques and patterns such as Sagas, CQRS, and
asynchronous messaging to embrace eventual consistency.
These days, however, the CAP theorem constraints aren't absolute. A new type of database, called NewSQL, has emerged that extends the relational database engine to support both horizontal scalability and the scalable performance of NoSQL systems.
In the next sections, we’ll explore the options available in the Azure cloud for storing and managing
your cloud-native data.
Database as a Service
To start, you could provision an Azure virtual machine and install your database of choice for each
service. While you’d have full control over the environment, you’d forgo many built-in features of the
cloud platform. You’d also be responsible for managing the virtual machine and database for each
service. This approach could quickly become time-consuming and expensive.
Instead, cloud-native applications favor data services exposed as a Database as a Service (DBaaS).
Fully managed by a cloud vendor, these services provide built-in security, scalability, and monitoring.
Instead of owning the service, you simply consume it as a backing service. The provider operates the
resource at scale and bears the responsibility for performance and maintenance.
They can be configured across cloud availability zones and regions to achieve high availability. They
all support just-in-time capacity and a pay-as-you-go model. Azure features different kinds of
managed data service options, each with specific benefits.
We’ll first look at relational DBaaS services available in Azure. You’ll see that Microsoft’s flagship SQL
Server database is available along with several open-source options. Then, we’ll talk about the NoSQL
data services in Azure.
In the previous figure, note how each database service sits upon a common DBaaS infrastructure which features key
capabilities at no additional cost.
These features are especially important to organizations who provision large numbers of databases,
but have limited resources to administer them. You can provision an Azure database in minutes by
selecting the number of processing cores, the amount of memory, and the underlying storage. You can scale the
database on-the-fly and dynamically adjust resources with little to no downtime.
For use with a cloud-native microservice, Azure SQL Database is available with three deployment
options:
• A Single Database represents a fully managed SQL Database running on an Azure SQL Database
server in the Azure cloud. The database is considered contained as it has no configuration
dependencies on the underlying database server.
• A Managed Instance is a fully managed instance of the Microsoft SQL Server Database Engine
that provides near-100% compatibility with an on-premises SQL Server. This option supports
larger databases, up to 35 TB, and is placed in an Azure Virtual Network for better isolation.
• Azure SQL Database serverless is a compute tier for a single database that automatically scales
based on workload demand. It bills only for the amount of compute used per second. The
service is well suited for workloads with intermittent, unpredictable usage patterns, interspersed with periods of inactivity. The serverless compute tier also automatically pauses databases during inactive periods so that only storage charges are billed.
Beyond the traditional Microsoft SQL Server stack, Azure also features managed versions of three
popular open-source databases.
Developers can easily self-host any open-source database on an Azure VM. While providing full
control, this approach puts you on the hook for the management, monitoring, and maintenance of
the database and VM.
However, Microsoft continues its commitment to keeping Azure an “open platform” by offering
several popular open-source databases as fully managed DBaaS services.
Azure Database for MySQL is a managed relational database service based on the open-source
MySQL Server engine. It uses the MySQL Community edition. The Azure MySQL server is the
administrative point for the service. It’s the same MySQL server engine used for on-premises
deployments. The engine can create a single database per server or multiple databases per server that
share resources. You can continue to manage data using the same open-source tools without having
to learn new skills or manage virtual machines.
Azure Database for MariaDB is a fully managed relational database service based on the open-source MariaDB Server engine. MariaDB has a strong community and is used by many large enterprises. While Oracle continues to maintain, enhance, and support MySQL, the MariaDB foundation manages MariaDB, allowing public contributions to the product and documentation.
Azure Database for PostgreSQL is a fully managed relational database service, based on the open-
source Postgres database engine. The service supports many development platforms, including C++,
Java, Python, Node, C#, and PHP. You can migrate PostgreSQL databases to it using the command-
line interface tool or Azure Data Migration Service.
• The Single Server deployment option is a central administrative point to which you can deploy many databases. The pricing is structured per server, based upon cores and storage.
• The Hyperscale (Citus) option is powered by Citus Data technology. It enables high performance
by horizontally scaling a single database across hundreds of nodes to deliver fast performance
and scale. This option allows the engine to fit more data in memory, parallelize queries across
hundreds of nodes, and index data faster.
Azure Cosmos DB
If your services require fast response from anywhere in the world, high availability, or elastic scalability, Azure Cosmos DB is a great choice. Figure 5-12 shows Cosmos DB.
The previous figure presents many of the built-in cloud-native capabilities available in Cosmos DB. In
this section, we’ll take a closer look at them.
Global support
Cloud-native applications often have a global audience and require global scale.
You can distribute Cosmos databases across regions or around the world, placing data close to your
users, improving response time, and reducing latency. You can add or remove a database from a
region without pausing or redeploying your services. In the background, Cosmos DB transparently
replicates the data to each of the configured regions.
Cosmos DB supports active/active clustering at the global level, enabling you to configure any of your
database regions to support both writes and reads.
The Multi-Master protocol is an important feature in Cosmos DB that enables the following
functionality:
• Guaranteed reads and writes served in less than 10 milliseconds at the 99th percentile.
With the Cosmos DB Multi-Homing APIs, your microservice is automatically aware of the nearest
Azure region and sends requests to it. The nearest region is identified by Cosmos DB without any
configuration changes. Should a region become unavailable, the Multi-Homing feature will
automatically route requests to the next nearest available region.
Multi-model support
When replatforming monolithic applications to a cloud-native architecture, development teams sometimes have to migrate open-source, NoSQL data stores. Cosmos DB can help you preserve your investment in these NoSQL data stores with its multi-model data platform. The following table shows the supported compatibility APIs.
Provider Description
SQL API Proprietary API that supports JSON documents and SQL-based queries
Mongo DB API Supports Mongo DB APIs and JSON documents
Gremlin API Supports Gremlin API with graph-based nodes and edge data representations
Cassandra API Supports Cassandra API for wide-column data representations
Table API Supports Azure Table Storage with premium enhancements
etcd API Enables Cosmos DB as a backing store for Azure Kubernetes Service clusters
Development teams can migrate existing Mongo, Gremlin, or Cassandra databases into Cosmos DB
with minimal changes to data or code. For new apps, development teams can choose among open-
source options or the built-in SQL API model.
Internally, Cosmos stores the data in a simple struct format made up of primitive data types. For each
request, the database engine translates the primitive data into the model representation you’ve
selected.
In the previous table, note the Table API option. This API is an evolution of Azure Table Storage. Both
share the same underlying table model, but the Cosmos DB Table API adds premium enhancements
not available in the Azure Storage API. The following table contrasts the features.
Microservices that consume Azure Table storage can easily migrate to the Cosmos DB Table API. No
code changes are required.
Tunable consistency
Earlier in the Relational vs. NoSQL section, we discussed the subject of data consistency. Data
consistency refers to the integrity of your data. Cloud-native services with distributed data rely on
replication and must make a fundamental tradeoff between read consistency, availability, and latency.
Azure Cosmos DB offers five well-defined consistency models shown in Figure 5-13.
These options enable you to make precise choices and granular tradeoffs for consistency, availability,
and the performance for your data. The levels are presented in the following table.
Consistency Level Description
Eventual No ordering guarantee for reads. Replicas will eventually converge.
Consistent Prefix Reads are still eventual, but data is returned in the ordering in which it is written.
Session Guarantees you can read any data written during the current session. It is the default consistency level.
Bounded Staleness Reads trail writes by an interval that you specify.
Strong Reads are guaranteed to return the most recent committed version of an item. A client never sees an uncommitted or partial read.
In the article Getting Behind the 9-Ball: Cosmos DB Consistency Levels Explained, Microsoft Program
Manager Jeremy Likness provides an excellent explanation of the five models.
Partitioning
Azure Cosmos DB embraces automatic partitioning to scale a database to meet the performance
needs of your cloud-native services.
You manage data in Cosmos DB by creating databases, containers, and items.
Containers live in a Cosmos DB database and represent a schema-agnostic grouping of items. Items
are the data that you add to the container. They’re represented as documents, rows, nodes, or edges.
All items added to a container are automatically indexed.
To partition the container, items are divided into distinct subsets called logical partitions. Logical
partitions are populated based on the value of a partition key that is associated with each item in a
container. Figure 5-14 shows two containers each with a logical partition based on a partition key
value.
Note in the previous figure how each item includes a partition key of either ‘city’ or ‘airport’. The key
determines the item’s logical partition. Items with a city code are assigned to the container on the left,
and items with an airport code, to the container on the right. Combining the partition key value with
the ID value creates an item’s index, which uniquely identifies the item.
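As a rough sketch using the Microsoft.Azure.Cosmos .NET SDK, the following code requests session consistency and creates a container whose logical partitions are keyed on a '/city' path; the connection string, database, and container names are placeholders.

using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class CosmosSetup
{
    public static async Task<Container> CreateLocationsContainerAsync(string connectionString)
    {
        // Session is the default consistency level; the client can also request it explicitly.
        var client = new CosmosClient(connectionString, new CosmosClientOptions
        {
            ConsistencyLevel = ConsistencyLevel.Session
        });

        // Items sharing the same '/city' value land in the same logical partition.
        Database database = await client.CreateDatabaseIfNotExistsAsync("travel");
        return await database.CreateContainerIfNotExistsAsync(
            new ContainerProperties("locations", "/city"));
    }
}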
NewSQL databases
NewSQL is an emerging database technology that combines the distributed scalability of NoSQL with
the ACID guarantees of a relational database. NewSQL databases are important for business systems
that must process high volumes of data, across distributed environments, with full transactional
support and ACID compliance. While a NoSQL database can provide massive scalability, it does not
guarantee data consistency. Intermittent problems from inconsistent data can place a burden on the
development team. Developers must construct safeguards into their microservice code to manage
problems caused by inconsistent data.
The Cloud Native Computing Foundation (CNCF) features several NewSQL database projects.
Project Characteristics
CockroachDB An ACID-compliant, relational database that scales globally. Add a new node to a cluster and CockroachDB takes care of balancing the data across instances and geographies. It creates, manages, and distributes replicas to ensure reliability. It's open source and freely available.
The open-source projects in the previous figure are available from the Cloud Native Computing
Foundation. Three of the offerings are full database products, which include .NET support. The other,
Vitess, is a database clustering system that horizontally scales large clusters of MySQL instances.
A key design goal for NewSQL databases is to work natively in Kubernetes, taking advantage of the
platform’s resiliency and scalability.
NewSQL databases are designed to thrive in ephemeral cloud environments where underlying virtual
machines can be restarted or rescheduled at a moment’s notice. The databases are designed to
survive node failures without data loss or downtime. CockroachDB, for example, is able to survive a
machine loss by maintaining three consistent replicas of any data across the nodes in a cluster.
Kubernetes uses a Service construct to allow a client to address a group of identical NewSQL database processes from a single DNS entry. By decoupling the database instances from the address of the service with which they're associated, we can scale without disrupting existing application instances.
Sending a request to any service at a given time will always yield the same result.
In this scenario, all database instances are equal. There are no primary or secondary relationships.
Techniques like consensus replication found in CockroachDB allow any database node to handle any
request. If the node that receives a load-balanced request has the data it needs locally, it responds
immediately. If not, the node becomes a gateway and forwards the request to the appropriate nodes
to get the correct answer. From the client’s perspective, every database node is the same: They appear
as a single logical database with the consistency guarantees of a single-machine system, despite
having dozens or even hundreds of nodes that are working behind the scenes.
For a detailed look at the mechanics behind NewSQL databases, see the DASH: Four Properties of
Kubernetes-Native Databases article.
Why?
As discussed in the Microsoft caching guidance, caching can increase performance, scalability, and
availability for individual microservices and the system as a whole. It reduces the latency and
contention of handling large volumes of concurrent requests to a data store. As data volume and the number of users increase, the benefits of caching become greater.
Caching is most effective when a client repeatedly reads data that is immutable or that changes
infrequently. Examples include reference information such as product and pricing information, or
shared static resources that are costly to construct.
While microservices should be stateless, a distributed cache can support concurrent access to session
state data when absolutely required.
Also consider caching to avoid repetitive computations. If an operation transforms data or performs a
complicated calculation, cache the result for subsequent requests.
Caching architecture
Cloud-native applications typically implement a distributed caching architecture. The cache is hosted
as a cloud-based backing service, separate from the microservices. Figure 5-15 shows the architecture.
In the previous figure, note how the cache is independent of and shared by the microservices. In this
scenario, the cache is invoked by the API Gateway. As discussed in chapter 4, the gateway serves as a
front end for all incoming requests. The distributed cache increases system responsiveness by
returning cached data whenever possible. Additionally, separating the cache from the services allows
the cache to scale up or out independently to meet increased traffic demands.
The previous figure presents a common caching pattern known as the cache-aside pattern. For an
incoming request, you first query the cache (step #1) for a response. If found, the data is returned
immediately. If the data doesn’t exist in the cache (known as a cache miss), it’s retrieved from a local
database in a downstream service (step #2). It’s then written to the cache for future requests (step #3),
and returned to the caller. Care must be taken to periodically evict cached data so that the system
remains timely and consistent.
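A minimal C# sketch of those three steps, assuming the IDistributedCache abstraction (which Azure Cache for Redis can back) and a hypothetical Product type and downstream lookup, might look like this.

using System;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

public record Product(int Id, string Name, decimal Price);

public class ProductCatalogCache
{
    private readonly IDistributedCache _cache;                 // for example, backed by Azure Cache for Redis
    private readonly Func<int, Task<Product>> _loadFromStore;  // downstream database lookup (assumed)

    public ProductCatalogCache(IDistributedCache cache, Func<int, Task<Product>> loadFromStore)
    {
        _cache = cache;
        _loadFromStore = loadFromStore;
    }

    public async Task<Product> GetProductAsync(int id)
    {
        var key = $"product:{id}";

        // Step 1: try the cache first.
        var cached = await _cache.GetStringAsync(key);
        if (cached is not null)
            return JsonSerializer.Deserialize<Product>(cached)!;

        // Step 2: cache miss - read from the downstream service's database.
        var product = await _loadFromStore(id);

        // Step 3: write the result back with an expiration so stale entries are eventually evicted.
        await _cache.SetStringAsync(key, JsonSerializer.Serialize(product),
            new DistributedCacheEntryOptions { AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5) });

        return product;
    }
}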
As a shared cache grows, it might prove beneficial to partition its data across multiple nodes. Doing
so can help minimize contention and improve scalability. Many caching services support the ability to
dynamically add and remove nodes and rebalance data across partitions. This approach typically
involves clustering. Clustering exposes a collection of federated nodes as a seamless, single cache.
Internally, however, the data is dispersed across the nodes following a predefined distribution strategy
that balances the load evenly.
The Azure Cache for Redis service manages access to open-source Redis servers hosted across Azure
data centers. The service acts as a facade providing management, access control, and security. The
Azure Cache for Redis is more than a simple cache server. It can support a number of scenarios that enhance a microservices architecture, including use as a distributed data cache, a session store, and a message broker.
Azure Cache for Redis is available across a number of predefined configurations and pricing tiers. The
Premium tier features many enterprise-level features such as clustering, data persistence, geo-
replication, and virtual-network isolation.
From the Microsoft Azure Marketplace, developers can use preconfigured templates to quickly deploy an Elasticsearch cluster on Azure. Using the Azure-managed offering, you can deploy up to 50 data nodes, 20 coordinating nodes, and three dedicated master nodes.
Summary
This chapter presented a detailed look at data in cloud-native systems. We started by contrasting data
storage in monolithic applications with data storage patterns in cloud-native systems. We looked at
data patterns implemented in cloud-native systems, including cross-service queries, distributed
transactions, and patterns to deal with high-volume systems. We contrasted SQL with NoSQL data.
We looked at data storage options available in Azure that include both Microsoft-centric and open-
source options. Finally, we discussed caching and Elasticsearch in a cloud-native application.
References
• Command and Query Responsibility Segregation (CQRS) pattern
• Why isn’t RDBMS Partition Tolerant in CAP Theorem and why is it Available?
• Materialized View
• Saga Pattern
• CockroachDB
• TiDB
• YugabyteDB
• Vitess
Unlike traditional monolithic applications, where everything runs together in a single process, cloud-
native systems embrace a distributed architecture as shown in Figure 6-1:
In the previous figure, each microservice and cloud-based backing service executes in a separate process, across server infrastructure, communicating via network-based calls.
• Unexpected network latency - the time for a service request to travel to the receiver and back.
• An in-flight orchestrator operation such as a rolling upgrade or moving a service from one node
to another.
Cloud platforms can detect and mitigate many of these infrastructure issues. The platform may restart, scale out, or even redistribute your service to a different node. However, to take full advantage of this built-in protection, you must design your services to react to it and thrive in this dynamic environment.
In the following sections, we’ll explore defensive techniques that your service and managed cloud
resources can leverage to minimize downtime and disruption.
While you could invest considerable time writing your own resiliency framework, such products
already exist. Polly is a comprehensive .NET resilience and transient-fault-handling library that allows
developers to express resiliency policies in a fluent and thread-safe manner. Polly targets applications
built with either the .NET Framework or .NET 5. The following table describes the resiliency features,
called policies, available in the Polly Library. They can be applied individually or grouped together.
Policy Experience
Retry Configures retry operations on designated operations.
Circuit Breaker Blocks requested operations for a predefined period when faults exceed a configured threshold.
Timeout Places a limit on the duration for which a caller can wait for a response.
Bulkhead Constrains actions to a fixed-size resource pool to prevent failing calls from swamping a resource.
Cache Stores responses automatically.
Fallback Defines structured behavior upon a failure.
Note how in the previous figure the resiliency policies apply to request messages, whether coming from an external client or a back-end service. The goal is to compensate for requests to a service that might be momentarily unavailable. These short-lived interruptions typically manifest themselves with the HTTP status codes shown in the following table.
Retry pattern
In a distributed cloud-native environment, calls to services and cloud resources can fail because of
transient (short-lived) failures, which typically correct themselves after a brief period of time.
Implementing a retry strategy helps a cloud-native service mitigate these scenarios.
The Retry pattern enables a service to retry a failed request operation a (configurable) number of
times with an exponentially increasing wait time. Figure 6-2 shows a retry in action.
In the previous figure, a retry pattern has been implemented for a request operation. It’s configured to
allow up to four retries before failing with a backoff interval (wait time) starting at two seconds, which
exponentially doubles for each subsequent attempt.
• The first invocation fails and returns an HTTP status code of 500. The application waits for two
seconds and retries the call.
• The second invocation also fails and returns an HTTP status code of 500. The application now
doubles the backoff interval to four seconds and retries the call.
• Finally, the third call succeeds.
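A minimal sketch of that behavior using the Polly library might look like the following; the client type, endpoint, and retry counts are placeholders chosen to mirror the sequence above.

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Polly;

public static class CatalogClient
{
    // Waits 2, 4, 8, then 16 seconds between attempts: an exponentially doubling backoff.
    private static readonly IAsyncPolicy RetryPolicy = Policy
        .Handle<HttpRequestException>()
        .WaitAndRetryAsync(4, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

    public static Task<string> GetCatalogAsync(HttpClient httpClient, Uri endpoint) =>
        RetryPolicy.ExecuteAsync(() => httpClient.GetStringAsync(endpoint));
}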
To make things worse, executing continual retry operations against a non-responsive service can create a self-imposed denial-of-service scenario: you flood the service with continual calls, exhausting resources such as memory, threads, and database connections, and causing failures in unrelated parts of the system that use the same resources.
In these situations, it would be preferable for the operation to fail immediately and only attempt to
invoke the service if it’s likely to succeed.
The Circuit Breaker pattern can prevent an application from repeatedly trying to execute an operation
that’s likely to fail. After a pre-defined number of failed calls, it blocks all traffic to the service.
Periodically, it will allow a trial call to determine whether the fault has resolved. Figure 6-3 shows the
Circuit Breaker pattern in action.
Keep in mind that the intent of the Circuit Breaker pattern is different than that of the Retry pattern.
The Retry pattern enables an application to retry an operation in the expectation that it will succeed.
The Circuit Breaker pattern prevents an application from doing an operation that is likely to fail.
Typically, an application will combine these two patterns by using the Retry pattern to invoke an
operation through a circuit breaker.
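A rough Polly-based sketch of that combination follows; the failure threshold, break duration, and retry settings are placeholder values.

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Polly;

public static class ResilientCatalogClient
{
    // Open the circuit after 5 consecutive handled failures and keep it open for 30 seconds.
    private static readonly IAsyncPolicy CircuitBreaker = Policy
        .Handle<HttpRequestException>()
        .CircuitBreakerAsync(5, TimeSpan.FromSeconds(30));

    // Retry transient failures, but route every attempt through the circuit breaker.
    private static readonly IAsyncPolicy Combined = Policy.WrapAsync(
        Policy.Handle<HttpRequestException>()
              .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt))),
        CircuitBreaker);

    public static Task<string> GetAsync(HttpClient httpClient, Uri endpoint) =>
        Combined.ExecuteAsync(() => httpClient.GetStringAsync(endpoint));
}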
Application resiliency is a must for handling problematic requested operations. But, it’s only half of the
story. Next, we cover resiliency features available in the Azure cloud.
• Hardware failure. Build redundancy into the application by deploying components across
different fault domains. For example, ensure that Azure VMs are placed in different racks by
using Availability Sets.
• Regional failure. Replicate the data and components into another region so that applications can
be quickly recovered. For example, use Azure Site Recovery to replicate Azure VMs to another
Azure region.
• Heavy load. Load balance across instances to handle spikes in usage. For example, put two or
more Azure VMs behind a load balancer to distribute traffic to all VMs.
• Accidental data deletion or corruption. Back up data so it can be restored if there’s any deletion
or corruption. For example, use Azure Backup to periodically back up your Azure VMs.
Redundancy is one way to provide application resilience. The exact level of redundancy needed
depends upon your business requirements and will affect both the cost and complexity of your
system. For example, a multi-region deployment is more expensive and more complex to manage
than a single-region deployment. You’ll need operational procedures to manage failover and failback.
The additional cost and complexity might be justified for some business scenarios, but not others.
To architect redundancy, you need to identify the critical paths in your application, and then determine whether there's redundancy at each point in the path. If a subsystem should fail, will the application fail over to something else? Finally, you need a clear understanding of those features built
into the Azure cloud platform that you can leverage to meet your redundancy requirements. Here are
recommendations for architecting redundancy:
• Plan for multiregion deployment. If you deploy your application to a single region, and that
region becomes unavailable, your application will also become unavailable. This may be
unacceptable under the terms of your application’s service level agreements. Instead, consider
deploying your application and its services across multiple regions. For example, an Azure
Kubernetes Service (AKS) cluster is deployed to a single region. To protect your system from a
regional failure, you might deploy your application to multiple AKS clusters across different regions.
• Enable geo-replication. Geo-replication for services such as Azure SQL Database and Cosmos DB
will create secondary replicas of your data across multiple regions. While both services will
automatically replicate data within the same region, geo-replication protects you against a
regional outage by enabling you to fail over to a secondary region. Another best practice for
geo-replication centers around storing container images. To deploy a service in AKS, you need
to store and pull the image from a repository. Azure Container Registry integrates with AKS and
can securely store container images. To improve performance and availability, consider geo-
replicating your images to a registry in each region where you have an AKS cluster. Each AKS
cluster then pulls container images from the local container registry in its region as shown in
Figure 6-4:
• Implement a DNS traffic load balancer. Azure Traffic Manager provides high-availability for
critical applications by load-balancing at the DNS level. It can route traffic to different regions
based on geography, cluster response time, and even application endpoint health. For example,
Azure Traffic Manager can direct customers to the closest AKS cluster and application instance. If
you have multiple AKS clusters in different regions, use Traffic Manager to control how traffic
flows to the applications that run in each cluster. Figure 6-5 shows this scenario.
• Design for scaling. An application must be designed for scaling. To start, services should be
stateless so that requests can be routed to any instance. Having stateless services also means
that adding or removing an instance doesn’t adversely impact current users.
• Favor scale-out. Cloud-based applications favor scaling out resources as opposed to scaling up.
Scaling out (also known as horizontal scaling) involves adding more service resources to an
existing system to meet and share a desired level of performance. Scaling up (also known as
vertical scaling) involves replacing existing resources with more powerful hardware (more disk,
memory, and processing cores). Scaling out can be invoked automatically with the autoscaling
features available in some Azure cloud resources. Scaling out across multiple resources also adds
redundancy to the overall system. Finally, scaling up a single resource is typically more expensive
than scaling out across many smaller resources. Figure 6-6 shows the two approaches:
• Scale proportionally. When scaling a service, think in terms of resource sets. If you were to
dramatically scale out a specific service, what impact would that have on back-end data stores,
caches and dependent services? Some resources such as Cosmos DB can scale out
proportionally, while many others can’t. You want to ensure that you don’t scale out a resource
to a point where it will exhaust other associated resources.
• Avoid affinity. A best practice is to ensure a node doesn’t require local affinity, often referred to
as a sticky session. A request should be able to route to any instance. If you need to persist state,
it should be saved to a distributed cache, such as Azure Cache for Redis.
• Take advantage of platform autoscaling features. Use built-in autoscaling features whenever
possible, rather than custom or third-party mechanisms. Where possible, use scheduled scaling
rules to ensure that resources are available without a startup delay, but add reactive autoscaling
to the rules as appropriate, to cope with unexpected changes in demand. For more information,
see Autoscaling guidance.
• Scale out aggressively. A final practice would be to scale out aggressively so that you can quickly
meet immediate spikes in traffic without losing business. And, then scale in (that is, remove
unneeded instances) conservatively to keep the system stable. A simple way to implement this is
to set the cool down period, which is the time to wait between scaling operations, to five
minutes for adding resources and up to 15 minutes for removing instances.
• Azure Cosmos DB. The DocumentClient class from the client API automatically retries failed attempts. The number of retries and maximum wait time are configurable. Exceptions thrown by
the client API are either requests that exceed the retry policy or non-transient errors.
• Azure Service Bus. The Service Bus client exposes a RetryPolicy class that can be configured with
a back-off interval, retry count, and TerminationTimeBuffer, which specifies the maximum time
an operation can take. The default policy is nine maximum retry attempts with a 30-second
backoff period between attempts.
• Azure SQL Database. Retry support is provided when using the Entity Framework Core library, as shown in the sketch after this list.
• Azure Storage. The storage client library supports retry operations. The strategies vary across Azure storage tables, blobs, and queues. As well, alternate retries switch between primary and secondary storage service locations when the geo-redundancy feature is enabled.
• Azure Event Hubs. The Event Hub client library features a RetryPolicy property, which includes a
configurable exponential backoff feature.
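For the Azure SQL Database case, a minimal sketch of turning on the Entity Framework Core SQL Server provider's built-in retry support might look like the following; the context and method names are placeholders.

using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;

public class OrderingContext : DbContext
{
    public OrderingContext(DbContextOptions<OrderingContext> options) : base(options) { }
}

public static class PersistenceSetup
{
    // Registers a DbContext whose SQL connections automatically retry transient failures.
    public static IServiceCollection AddOrderingDb(this IServiceCollection services, string connectionString) =>
        services.AddDbContext<OrderingContext>(options =>
            options.UseSqlServer(connectionString,
                sqlOptions => sqlOptions.EnableRetryOnFailure()));
}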
Resilient communications
Throughout this book, we’ve embraced a microservice-based architectural approach. While such an
architecture provides important benefits, it presents many challenges:
• Service discovery. How do microservices discover and communicate with each other when
running across a cluster of machines with their own IP addresses and ports?
• Resiliency. How do you manage short-lived failures and keep the system stable?
• Load balancing. How does inbound traffic get distributed across multiple instances of a
microservice?
• Security. How are security concerns such as transport-level encryption and certificate
management enforced?
• Distributed Monitoring. How do you correlate and capture traceability and monitoring for a
single request across multiple consuming microservices?
You can address these concerns with different libraries and frameworks, but the implementation can
be expensive, complex, and time-consuming. You also end up with infrastructure concerns coupled to
business logic.
Service mesh
A better approach is an evolving technology called Service Mesh. A service mesh is a configurable
infrastructure layer with built-in capabilities to handle service communication and the other
challenges mentioned above. It decouples these concerns by moving them into a service proxy. The proxy is deployed into a separate process (called a sidecar) to provide isolation from business code.
In the previous figure, note how the proxy intercepts and manages communication among the
microservices and the cluster.
A service mesh is logically split into two disparate components: a data plane and a control plane. Figure
6-8 shows these components and their responsibilities.
Once configured, a service mesh is highly functional. It can retrieve a corresponding pool of instances from a service discovery endpoint. The mesh can then send a request to a specific instance, recording the latency and response type of the result. If an instance is unresponsive or fails, the mesh will retry the request on another instance. If an instance returns errors, the mesh will evict it from the load-balancing pool and reinstate it after it heals. If a request times out, the mesh can fail the request and then retry it. The mesh captures and emits metrics and distributed tracing to a centralized metrics system.
• Retry pattern
• network latency
• Redundancy
• geo-replication
• Autoscaling guidance
• Istio
• Envoy proxy
“DevOps is the union of people, process, and products to enable continuous delivery of value to our
end users.”
Unfortunately, with terse definitions, there’s always room to say more things. One of the key
components of DevOps is ensuring that the applications running in production are functioning
properly and efficiently. To gauge the health of the application in production, it’s necessary to monitor
the various logs and metrics being produced from the servers, hosts, and the application proper. The
number of different services running in support of a cloud-native application makes monitoring the
health of individual components and the application as a whole a critical challenge.
Observability patterns
Just as patterns have been developed to aid in the layout of code in applications, there are patterns
for operating applications in a reliable way. Three useful patterns in maintaining applications have
emerged: logging, monitoring, and alerts.
The usefulness of logging to a flat file on a single machine is vastly reduced in a cloud environment.
Applications producing logs may not have access to the local disk or the local disk may be highly
transient as containers are shuffled around physical machines. Even simple scaling up of monolithic
applications across multiple nodes can make it challenging to locate the appropriate log file.
Cloud-native applications developed using a microservices architecture also pose some challenges for
file-based loggers. User requests may now span multiple services that are run on different machines.
Finally, the number of users in some cloud-native applications is high. Imagine that each user
generates a hundred lines of log messages when they log into an application. In isolation, that is
manageable, but multiply that over 100,000 users and the volume of logs becomes large enough that
specialized tools are needed to support effective use of the logs.
• Verbose
• Debug
• Information
• Warning
• Error
• Fatal
These different log levels provide granularity in logging. When the application is functioning properly
in production, it may be configured to only log important messages. When the application is misbehaving, the log level can be increased so that more verbose messages are gathered.
The high performance of logging tools and the tunability of verbosity should encourage developers to
log frequently. Many favor a pattern of logging the entry and exit of each method. This approach may
sound like overkill, but it’s infrequent that developers will wish for less logging. In fact, it’s not
uncommon to perform deployments for the sole purpose of adding logging around a problematic
method. Err on the side of too much logging rather than too little. Some tools can be used to
automatically provide this kind of logging.
Because of the challenges associated with using file-based logs in cloud-native apps, centralized logs
are preferred. Logs are collected by the applications and shipped to a central logging application
which indexes and stores the logs. This class of system can ingest tens of gigabytes of logs every day.
It’s also helpful to follow some standard practices when building logging that spans many services.
For instance, generating a correlation ID at the start of a lengthy interaction, and then logging it in
each message that is related to that interaction, makes it easier to search for all related messages. One
need only find a single message and extract the correlation ID to find all the related messages.
Another example is ensuring that the log format is the same for every service, whatever the language
or logging library it uses. This standardization makes reading logs much easier. Figure 7-4
demonstrates how a microservices architecture can leverage centralized logging as part of its
workflow.
Figure 7-4. Logs from various sources are ingested into a centralized log store.
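As a small illustration, a Serilog-based sketch of stamping a correlation ID on every event written during an interaction might look like this; it assumes the logger is configured with Enrich.FromLogContext, and the method and property names are placeholders.

using Serilog;
using Serilog.Context;

public static class BasketRequestHandler
{
    public static void Handle(string correlationId, string itemId)
    {
        // Every event written inside this scope carries the CorrelationId property,
        // so one message is enough to find all related messages across services.
        using (LogContext.PushProperty("CorrelationId", correlationId))
        {
            Log.Information("Adding item {ItemId} to basket", itemId);
            // ... call downstream services, forwarding the same correlation ID ...
            Log.Information("Item {ItemId} added to basket", itemId);
        }
    }
}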
• One service in your application keeps failing and restarting, resulting in intermittent slow
responses.
• At some times of the day, your application’s response time is slow.
• After a recent deployment, load on the database has tripled.
Implemented properly, monitoring can let you know about conditions that will lead to problems,
letting you address underlying conditions before they result in any significant user impact.
The metric-gathering capabilities of the monitoring tools can also be fed manually from within the
application. Business flows that are of particular interest such as new users signing up or orders being
placed, may be instrumented such that they increment a counter in the central monitoring system.
This aspect unlocks the monitoring tools to not only monitor the health of the application but the
health of the business.
Queries can be constructed in the log aggregation tools to look for certain statistics or patterns, which
can then be displayed in graphical form, on custom dashboards. Frequently, teams will invest in large,
wall-mounted displays that rotate through the statistics related to an application. This way, it’s simple
to see the problems as they occur.
Cloud-native monitoring tools provide real-time telemetry and insight into apps regardless of whether
they’re single-process monolithic applications or distributed microservice architectures. They include
tools that allow collection of data from the app as well as tools for querying and displaying
information about the app’s health.
Generally, alerts are layered on top of monitoring such that certain conditions trigger appropriate
alerts to notify team members of urgent problems. Some scenarios that may require alerts include:
Typically, though, a single 500 error isn’t enough to determine that a problem has occurred. It could
mean that a user mistyped their password or entered some malformed data. The alert queries can be
crafted to only fire when a larger than average number of 500 errors are detected.
One of the most damaging patterns in alerting is to fire too many alerts for humans to investigate.
Service owners will rapidly become desensitized to errors that they’ve previously investigated and
found to be benign. Then, when true errors occur, they’ll be lost in the noise of hundreds of false
positives. The parable of the Boy Who Cried Wolf is frequently told to children to warn them of this
very danger. It’s important to ensure that the alerts that do fire are indicative of a real problem.
Collectively, these tools (Elasticsearch, Logstash, and Kibana) are known as the Elastic Stack, or ELK stack.
Elastic Stack
The Elastic Stack is a powerful option for gathering information from a Kubernetes cluster. Kubernetes
supports sending logs to an Elasticsearch endpoint, and for the most part, all you need to get started
is to set the environment variables as shown in Figure 7-5:
KUBE_LOGGING_DESTINATION=elasticsearch
KUBE_ENABLE_NODE_LOGGING=true
This step will install Elasticsearch on the cluster and target sending all the cluster logs to it.
Logstash
The first component is Logstash. This tool is used to gather log information from a large variety of
different sources. For instance, Logstash can read logs from disk and also receive messages from
logging libraries like Serilog. Logstash can do some basic filtering and expansion on the logs as they
arrive. For instance, if your logs contain IP addresses then Logstash may be configured to do a
geographical lookup and obtain a country or even city of origin for that message.
Serilog is a logging library for .NET languages, which allows for parameterized logging. Instead of
generating a textual log message that embeds fields, parameters are kept separate. This library allows
for more intelligent filtering and searching. A sample Serilog configuration for writing to Logstash
appears in Figure 7-7.
Figure 7-7. Serilog config for writing log information directly to logstash over HTTP
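As a rough sketch of such a configuration, assuming the Serilog.Sinks.Http sink package (whose exact overloads vary between versions) and a placeholder endpoint address:

using Serilog;

public static class LoggingSetup
{
    public static ILogger Create() =>
        new LoggerConfiguration()
            // Posts batched, structured log events to the Logstash HTTP input.
            .WriteTo.Http("http://localhost:8080")
            .CreateLogger();
}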
Logstash would use a configuration like the one shown in Figure 7-8.
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "sales-%{+xxxx.ww}"
  }
}
For scenarios where extensive log manipulation isn’t needed there’s an alternative to Logstash known
as Beats. Beats is a family of tools that can gather a wide variety of data from logs to network data
and uptime information. Many applications will use both Logstash and Beats.
Once the logs have been gathered by Logstash, it needs somewhere to put them. While Logstash supports many different outputs, one of the more exciting ones is Elasticsearch.
Elasticsearch
Elasticsearch is a powerful search engine that can index logs as they arrive. It makes running queries against the logs quick. Elasticsearch can handle huge quantities of logs and, in extreme cases, can be scaled out across many nodes.
Log messages that have been crafted to contain parameters, or that have had parameters split from them through Logstash processing, can be queried directly because Elasticsearch preserves this information. A query that searches for the top 10 pages visited by jill@example.com appears in Figure 7-9.
"query": {
"match": {
"user": "jill@example.com"
}
},
"aggregations": {
"top_10_pages": {
"terms": {
"field": "page",
"size": 10
}
}
}
Figure 7-9. An Elasticsearch query for finding top 10 pages visited by a user
An option with less overhead is to make use of one of the many Docker containers on which the
Elastic Stack has already been configured. These containers can be dropped into an existing
Kubernetes cluster and run alongside application code. The sebp/elk container is a well-documented
and tested Elastic Stack container.
References
• Install Elastic Stack on Azure
Prometheus is a popular open-source metric monitoring solution. It is part of the Cloud Native Computing Foundation. Typically, using Prometheus requires managing a Prometheus server with its
own store. However, Azure Monitor for Containers provides direct integration with Prometheus
metrics endpoints, so a separate server is not required.
Log and metric information is gathered not just from the containers running in the cluster but also from the cluster hosts themselves. Correlating log information from the two sources makes it much easier to track down an error.
Installing the log collectors differs on Windows and Linux clusters. But in both cases the log collection
is implemented as a Kubernetes DaemonSet, meaning that the log collector is run as a container on
each of the nodes.
No matter which orchestrator or operating system is running the Azure Monitor daemon, the log
information is forwarded to the same Azure Monitor tools with which users are familiar. This approach
ensures a parallel experience in environments that mix different log sources such as a hybrid
Kubernetes/Azure Functions environment.
Log.Finalize()
Logging is one of the most overlooked and yet most important parts of deploying any application at
scale. As the size and complexity of applications increase, then so does the difficulty of debugging
them. Having top quality logs available makes debugging much easier and moves it from the realm of
“nearly impossible” to “a pleasant experience”.
Azure Monitor
No other cloud provider has as mature a cloud application monitoring solution as the one found in Azure. Azure Monitor is an umbrella name for a collection of tools designed to provide visibility into
the state of your system. It helps you understand how your cloud-native services are performing and
proactively identifies issues affecting them. Figure 7-12 presents a high-level view of Azure Monitor.
Application level metrics and events aren’t possible to instrument automatically because they’re
specific to the application being deployed. In order to gather these metrics, there are SDKs and APIs
available to directly report such information, such as when a customer signs up or completes an order.
Exceptions can also be captured and reported back into Azure Monitor via Application Insights. The SDKs support nearly every language found in cloud-native applications, including Go, Python, JavaScript, and the .NET languages.
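As a small sketch, reporting a custom business event through the Application Insights SDK might look like the following; the event and property names are placeholders, and the TelemetryClient is assumed to be registered with dependency injection.

using System.Collections.Generic;
using Microsoft.ApplicationInsights;

public class SignupTelemetry
{
    private readonly TelemetryClient _telemetry;

    public SignupTelemetry(TelemetryClient telemetry) => _telemetry = telemetry;

    public void RecordSignup(string plan)
    {
        // Custom business event that surfaces in Azure Monitor / Application Insights.
        _telemetry.TrackEvent("CustomerSignedUp",
            new Dictionary<string, string> { ["Plan"] = plan });
    }
}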
The ultimate goal of gathering information about the state of your application is to ensure that your
end users have a good experience. What better way to tell if users are experiencing issues than doing
outside-in web tests? These tests can be as simple as pinging your website from locations around the
world or as involved as having agents log into the site and simulate user actions.
Reporting data
Once the data is gathered, it can be manipulated, summarized, and plotted into charts, which allow
users to instantly see when there are problems. These charts can be gathered into dashboards or into
Workbooks, a multi-page report designed to tell a story about some aspect of the system.
Application Insights provides a powerful (SQL-like) query language called Kusto that can query
records, summarize them, and even plot charts. For example, the following query will locate all records
for the month of November 2007, group them by state, and plot the top 10 as a pie chart.
StormEvents
| where StartTime >= datetime(2007-11-01) and StartTime < datetime(2007-12-01)
| summarize count() by State
| top 10 by count_
| render piechart
There is a playground for experimenting with Kusto queries. Reading sample queries can also be
instructive.
Dashboards
There are several different dashboard technologies that may be used to surface the information from
Azure Monitor. Perhaps the simplest is to just run queries in Application Insights and plot the data
into a chart.
These charts can then be embedded in the Azure portal proper through use of the dashboard feature.
For users with more exacting requirements, such as being able to drill down into several tiers of data,
Azure Monitor data is available to Power BI. Power BI is an industry-leading, enterprise class, business
intelligence tool that can aggregate data from many different data sources.
Alerts
Sometimes, having data dashboards is insufficient. If nobody is awake to watch the dashboards, then
it can still be many hours before a problem is addressed, or even detected. To this end, Azure Monitor
also provides a top notch alerting solution. Alerts can be triggered by a wide range of conditions
including:
• Metric values
• Log search queries
• Activity Log events
• Health of the underlying Azure platform
• Tests for web site availability
When triggered, the alerts can perform a wide variety of tasks. On the simple side, the alerts may just
send an e-mail notification to a mailing list or a text message to an individual. More involved alerts
might call a webhook, run an Azure Function or Logic App, or kick off an automation runbook to take
corrective action.
As common causes of alerts are identified, the alerts can be enhanced with details about the common
causes of the alerts and the steps to take to resolve them. Highly mature cloud-native application
deployments may opt to kick off self-healing tasks, which perform actions such as removing failing
nodes from a scale set or triggering an autoscaling activity. Eventually it may no longer be necessary
to wake up on-call personnel at 2AM to resolve a live-site issue as the system will be able to adjust
itself to compensate or at least limp along until somebody arrives at work the next morning.
Azure Monitor automatically leverages machine learning to understand the normal operating
parameters of deployed applications. This approach enables it to detect services that are operating
outside of their normal parameters. For instance, the typical weekday traffic on the site might be
10,000 requests per minute. And then, on a given week, suddenly the number of requests hits a highly
unusual 20,000 requests per minute. Smart Detection will notice this deviation from the norm and
trigger an alert. At the same time, the trend analysis is smart enough to avoid firing false positives
when the traffic load is expected.
References
• Azure Monitor
While this solution is effective within corporate networks, it isn’t designed for use by users or
applications that are outside of the AD domain. With the growth of Internet-based applications and
the rise of cloud-native apps, security models have evolved.
Modern cloud-native identity solutions typically use access tokens that are issued by a secure token
service/server (STS) to a security principal once their identity is determined. The access token, typically
a JSON Web Token (JWT), includes claims about the security principal. These claims will minimally
include the user’s identity but may also include other claims that can be used by applications to
determine the level of access to grant the principal.
Typically, the STS is only responsible for authenticating the principal. Determining their level of access
to resources is left to other parts of the application.
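As a hedged sketch (the authority URL, audience, and claim names below are placeholders, not values from this book), an ASP.NET Core API might trust such an STS and translate claims into access decisions like this:
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    // Validate JWT access tokens issued by the trusted STS
    services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
        .AddJwtBearer(options =>
        {
            options.Authority = "https://sts.example.com"; // placeholder STS address
            options.Audience = "ordering-api";              // placeholder API identifier
        });

    // The application, not the STS, decides what a claim means for access
    services.AddAuthorization(options =>
        options.AddPolicy("CanManageOrders",
            policy => policy.RequireClaim("role", "order-manager")));
}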
References
• Microsoft identity platform
Many organizations still rely on local authentication services like Active Directory Federation Services
(ADFS). While this approach has traditionally served organizations well for on premises authentication
needs, cloud-native applications benefit from systems designed specifically for the cloud. A 2019
United Kingdom National Cyber Security Centre (NCSC) advisory states that “organizations using
Azure AD as their primary authentication source will actually lower their risk compared to ADFS.”
The NCSC analysis outlines several reasons for this recommendation.
References
• Authentication basics
• Access tokens and claims
• It may be time to ditch your on premises authentication services
Azure AD is built for the cloud. It’s truly a cloud-native identity solution that uses a REST-based Graph
API and OData syntax for queries, unlike Windows AD, which uses LDAP. On premises Active Directory
can sync user attributes to the cloud using Azure AD Connect, allowing all authentication to take
place in the cloud using Azure AD. Alternately, Azure AD Connect can be configured to pass
authentication back to local Active Directory via ADFS, to be completed by Windows AD on premises.
Azure AD supports company branded sign-in screens, multi-factor authentication, and cloud-based
application proxies that are used to provide SSO for applications hosted on premises. It offers
different kinds of security reporting and alert capabilities.
References
• Microsoft identity platform
In each of these scenarios, the exposed functionality needs to be secured against unauthorized use. At
a minimum, this typically requires authenticating the user or principal making a request for a resource.
This authentication may use one of several common protocols such as SAML2p, WS-Fed, or OpenID
Connect. Communicating with APIs typically uses the OAuth2 protocol and its support for security
tokens. Separating these critical cross-cutting security concerns and their implementation details from
the applications themselves ensures consistency and improves security and maintainability.
IdentityServer provides middleware that runs within an ASP.NET Core application and adds support
for OpenID Connect and OAuth2 (see supported specifications). Organizations would create their own
ASP.NET Core app using IdentityServer middleware to act as the STS for all of their token-based
security protocols. The IdentityServer middleware exposes endpoints to support standard
functionality, including:
• Authorize (authenticate the end user)
• Token (request a token programmatically)
• Discovery (metadata about the server)
• User Info (get user information with a valid access token)
• Device Authorization (start device flow authorization)
• Introspection (token validation)
• Revocation (token revocation)
• End Session (trigger single sign-out across all apps)
Getting started
IdentityServer4 is open-source and free to use. You can add it to your applications using its NuGet
packages. The main package is IdentityServer4, which has been downloaded over four million times. The
base package doesn't include any user interface code and only supports in-memory configuration. To
use it with a database, you’ll also want a data provider like IdentityServer4.EntityFramework that uses
Entity Framework Core to store configuration and operational data for IdentityServer. For user
interface, you can copy files from the Quickstart UI repository into your ASP.NET Core MVC
application to add support for sign in and sign out using IdentityServer middleware.
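A minimal sketch of that setup, assuming in-memory configuration objects (Config.Clients, Config.ApiResources, and Config.IdentityResources are hypothetical helpers you would define), might look like the following; a production deployment would replace the developer signing credential and in-memory stores with durable equivalents.
public void ConfigureServices(IServiceCollection services)
{
    services.AddIdentityServer()
        .AddInMemoryClients(Config.Clients)
        .AddInMemoryApiResources(Config.ApiResources)
        .AddInMemoryIdentityResources(Config.IdentityResources)
        .AddDeveloperSigningCredential(); // not for production use
}

public void Configure(IApplicationBuilder app)
{
    app.UseIdentityServer(); // exposes the token, discovery, and related endpoints
}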
Configuration
IdentityServer supports different kinds of protocols and social authentication providers that can be
configured as part of each custom installation. This is typically done in the ASP.NET Core application’s
Startup class in the ConfigureServices method. The configuration involves specifying the supported
protocols and the paths to the servers and endpoints that will be used. Figure 8-2 shows an example
configuration taken from the IdentityServer4 Quickstart UI project:
services.AddAuthentication()
    .AddGoogle("Google", options =>
    {
        options.SignInScheme = IdentityServerConstants.ExternalCookieAuthenticationScheme;
        options.Authority = "https://demo.identityserver.io/";
        options.ClientId = "implicit";
        options.ResponseType = "id_token";
        options.SaveTokens = true;
        options.CallbackPath = new PathString("/signin-idsrv");
        options.SignedOutCallbackPath = new PathString("/signout-callback-idsrv");
        options.RemoteSignOutPath = new PathString("/signout-idsrv");
    });
IdentityServer also hosts a public demo site that can be used to test various protocols and
configurations. It’s located at https://demo.identityserver.io/ and includes information on how to
configure its behavior based on the client_id provided to it.
JavaScript clients
Many cloud-native applications leverage server-side APIs and rich client single page applications
(SPAs) on the front end. IdentityServer ships a JavaScript client (oidc-client.js) via NPM that can be
added to SPAs to enable them to use IdentityServer for sign in, sign out, and token-based
authentication of web APIs.
References
• IdentityServer documentation
• Application types
• JavaScript OIDC client
However, there are starting to be real-world consequences for not maintaining a security mindset
when building and deploying applications. Many companies learned the hard way what can happen
when servers and desktops aren’t patched during the 2017 outbreak of NotPetya. The cost of these
attacks has easily reached into the billions, with some estimates putting the losses from this single
attack at 10 billion US dollars.
Even governments aren’t immune to hacking incidents. The city of Baltimore was held ransom by
criminals making it impossible for citizens to pay their bills or use city services.
There has also been an increase in legislation that mandates certain data protections for personal
data. In Europe, GDPR has been in effect for more than a year and, more recently, California passed
their own version called the CCPA (California Consumer Privacy Act), which comes into effect January 1, 2020. The fines under GDPR can be
so punishing as to put companies out of business. Google has already been fined 50 million Euros for
violations, but that’s just a drop in the bucket compared with the potential fines.
On the flip side, smaller services, each with their own data store, limit the scope of an attack. If an
attacker compromises one system, it’s probably more difficult for the attacker to make the jump to
another system than it is in a monolithic application. Process boundaries are strong boundaries. Also,
if a database backup gets exposed, then the damage is more limited, as that database contains only a
subset of data and is unlikely to contain personal data.
Once the list of threats has been established, you need to decide whether they’re worth mitigating.
Sometimes a threat is so unlikely and expensive to plan for that it isn’t worth spending energy on it.
For instance, some state-level actor could inject changes into the design of a processor that is used by
millions of devices. Now, instead of running a certain piece of code in Ring 3, that code is run in Ring
0. This process allows an exploit that can bypass the hypervisor and run the attack code on the bare
metal machines, allowing attacks on all the virtual machines that are running on that hardware.
The altered processors are difficult to detect without a microscope and advanced knowledge of the
silicon design of that processor. This scenario is unlikely to happen and expensive to mitigate, so
probably no threat model would recommend building exploit protection for it.
More likely threats, such as broken access controls permitting Id incrementing attacks (replacing Id=2
with Id=3 in the URL) or SQL injection, are more attractive to build protections against. The
mitigations for these threats are quite reasonable to build and prevent embarrassing security holes
that smear the company’s reputation.
As an example, think of the tellers at a bank: accessing the safe is an uncommon activity. So, the
average teller can’t open the safe themselves. To gain access, they need to escalate their request
through a bank manager, who performs additional security checks.
In a computer system, a fantastic example is the rights of a user connecting to a database. In many
cases, there’s a single user account used to both build the database structure and run the application.
Except in extreme cases, the account running the application doesn’t need the ability to update
schema information. There should be several accounts that provide different levels of privilege. The
application should only use the permission level that grants read and write access to the data in the
tables. This kind of protection would eliminate attacks that aimed to drop database tables or
introduce malicious triggers.
Penetration testing
As applications become more complicated the number of attack vectors increases at an alarming rate.
Threat modeling is flawed in that it tends to be executed by the same people building the system. In
the same way that many developers have trouble envisioning user interactions and then build
unusable user interfaces, most developers have difficulty seeing every attack vector. It’s also possible
that the developers building the system aren’t well versed in attack methodologies and miss
something crucial.
Penetration testing or “pen testing” involves bringing in external actors to attempt to attack the
system. These attackers may be an external consulting company or other developers with good
security knowledge from another part of the business. They’re given carte blanche to attempt to
subvert the system. Frequently, they’ll find extensive security holes that need to be patched.
Sometimes the attack vector will be something totally unexpected like exploiting a phishing attack
against the CEO.
Azure itself is constantly undergoing attacks from a team of hackers inside Microsoft. Over the years,
they’ve been the first to find dozens of potentially catastrophic attack vectors, closing them before
they can be exploited externally. The more tempting a target, the more likely that external actors will
attempt to exploit it, and there are few targets in the world more tempting than Azure.
Monitoring
Should an attacker attempt to penetrate an application, there should be some warning of it.
Frequently, attacks can be spotted by examining the logs from services. Attacks leave telltale signs
that can be spotted before they succeed. For instance, an attacker attempting to guess a password
will make many requests to a login system. Monitoring around the login system can detect weird
patterns that are out of line with the typical access pattern. This monitoring can be turned into an
alert that can, in turn, alert an operations person to activate some sort of countermeasure. A highly
mature monitoring system might even take action based on these deviations proactively adding rules
to block requests or throttle responses.
Imagine that an attacker is looking to steal the passwords of people signing into a web application.
They could introduce a build step that modifies the checked-out code to mirror any login request to
another server. The next time code goes through the build, it’s silently updated. The source code
vulnerability scanning won’t catch this vulnerability as it runs before the build. Equally, nobody will
This scenario is a perfect example of a seemingly low-value target that can be used to break into the
system. Once an attacker breaches the perimeter of the system, they can start working on finding
ways to elevate their permissions to the point that they can cause real harm anywhere they like.
There are many ways to make .NET code more secure. Following guidelines such as the Secure coding
guidelines for .NET article is a reasonable step to take to ensure that the code is secure from the
ground up. The OWASP top 10 is another invaluable guide to build secure code.
The build process is a good place to put scanning tools to detect problems in source code before they
make it into production. Almost every project has dependencies on some other packages. A tool that
can scan for outdated packages will catch problems in a nightly build. Even when building Docker
images, it’s useful to check and make sure that the base image doesn’t have known vulnerabilities.
Another thing to check is that nobody has accidentally checked in credentials.
Built-in security
Azure is designed to balance usability and security for most users. Different users are going to have
different security requirements, so they need to fine-tune their approach to cloud security. Microsoft
publishes a great deal of security information in the Trust Center. This resource should be the first
stop for those professionals interested in understanding how the built-in attack mitigation
technologies work.
Within the Azure portal, the Azure Advisor is a system that is constantly scanning an environment and
making recommendations. Some of these recommendations are designed to save users money, but
others are designed to identify potentially insecure configurations, such as having a storage container
open to the world and not protected by a Virtual Network.
Out of the box, most PaaS Azure resources have only the most basic and permissive networking setup.
For instance, anybody on the Internet can access an app service. New SQL Server instances typically start out with similarly permissive network rules.
Fortunately, most Azure resources can be placed into an Azure Virtual Network that allows fine-
grained access control. Similar to the way that on-premises networks establish private networks that
are protected from the wider world, virtual networks are islands of private IP addresses that are
located within the Azure network.
In the same way that on-premises networks have a firewall governing access to the network, you can
establish a similar firewall at the boundary of the virtual network. By default, all the resources on a
virtual network can still talk to the Internet. It’s only incoming connections that require some form of
explicit firewall exception.
With the network established, internal resources like storage accounts can be set up to only allow for
access by resources that are also on the Virtual Network. This firewall provides an extra level of
security: should the keys for that storage account be leaked, attackers wouldn't be able to connect to
it to exploit them. This scenario is another example of the principle of least privilege.
The nodes in an Azure Kubernetes cluster can participate in a virtual network just like other resources
that are more native to Azure. This functionality is called Azure Container Networking Interface. In
effect, it allocates a subnet within the virtual network on which virtual machines and container images
are allocated.
Continuing down the path of illustrating the principle of least privilege, not every resource within a
Virtual Network needs to talk to every other resource. For instance, in an application that provides a
web API over a storage account and a SQL database, it’s unlikely that the database and the storage
account need to talk to one another. Any data sharing between them would go through the web
application. So, a network security group (NSG) could be used to deny traffic between the two
services.
Virtual Networks can also be useful when setting up communication between on-premises and cloud
resources. A virtual private network can be used to seamlessly attach the two networks together. This
approach allows running a virtual network without any sort of gateway for scenarios where all the
users are on-site. There are a number of technologies that can be used to establish this network. The
simplest is to use a site-to-site VPN that can be established between many routers and Azure. Traffic
is encrypted and tunneled over the Internet at the same cost per byte as any other traffic. For
scenarios where more bandwidth or more security is desirable, Azure offers a service called
ExpressRoute that uses a private circuit between an on-premises network and Azure. It's more costly and
difficult to establish but also more secure.
Security Principals
The first component in RBAC is a security principal. A security principal can be a user, group, service
principal, or managed identity.
• User - Any user who has an account in Azure Active Directory is a user.
• Group - A collection of users from Azure Active Directory. As a member of a group, a user takes
on the roles of that group in addition to their own.
• Service principal - A security identity under which services or applications run.
• Managed identity - An identity in Azure Active Directory that is managed automatically by Azure. Managed identities are typically used so that cloud applications can authenticate to Azure services without storing credentials.
Roles
A security principal can take on many roles or, using a more sartorial analogy, wear many hats. Each
role defines a series of permissions such as “Read messages from Azure Service Bus endpoint”. The
effective permission set of a security principal is the combination of all the permissions assigned to all
the roles that a security principal has. Azure has a large number of built-in roles and users can define
their own roles.
Built into Azure are also a number of high-level roles such as Owner, Contributor, Reader, and User
Access Administrator. With the Owner role, a security principal can access all resources and assign
permissions to others. A Contributor has the same level of access to all resources, but can't assign
permissions. A Reader can only view existing Azure resources, and a User Access Administrator can
manage access to Azure resources.
More granular built-in roles such as DNS Zone Contributor have rights limited to a single service.
Security principals can take on any number of roles.
Scopes
The third component in RBAC is the scope, which defines the set of resources to which a role
assignment applies. The scope can be as narrow as a single resource or it can be applied to an entire
resource group, subscription, or even management group.
When testing whether a security principal has a certain permission, the combination of role and scope
is taken into account. This combination provides a powerful authorization mechanism.
Deny
Previously, only “allow” rules were permitted for RBAC. This behavior made some scopes complicated
to build. For instance, allowing a security principal access to all storage accounts except one required
granting explicit permission to a potentially endless list of storage accounts. Every time a new storage
account was created, it would have to be added to this list of accounts. This added management
overhead that certainly wasn’t desirable.
Deny rules take precedence over allow rules. The same “allow all but one” scope can now be
represented as two rules: “allow all” and “deny this one specific account”. Deny rules not only
ease management but also allow for resources that are extra secure by denying access to everybody.
Checking access
As you can imagine, having a large number of roles and scopes can make figuring out the effective
permissions of a service principal quite difficult. Piling deny rules on top of that only serves to increase
the complexity. Fortunately, there’s a permissions calculator that can show the effective permissions
for any service principal. It’s typically found under the IAM tab in the portal, as shown in Figure 9-3.
Many security experts suggest that using a password manager to keep your own passwords is the
best approach. While it centralizes your passwords in one location, it also allows using highly complex
passwords and ensuring they're unique for each account. The same system exists within Azure: a
central store for secrets called Azure Key Vault.
Access to the key vault is provided through RBAC, meaning that not just any user can access the
information in the vault. Say a web application wishes to access the database connection string stored
in Azure Key Vault. To gain access, applications need to run using a service principal. Under this
assumed role, they can read the secrets from the safe. There are a number of different security
settings that can further limit the access that an application has to the vault, so that it can’t update
secrets but only read them.
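A small sketch of that read path, using the Azure SDK packages Azure.Identity and Azure.Security.KeyVault.Secrets (the vault URL and secret name are placeholders):
using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

// DefaultAzureCredential resolves to the app's service principal or managed identity
var client = new SecretClient(
    new Uri("https://my-vault.vault.azure.net/"),
    new DefaultAzureCredential());

KeyVaultSecret secret = await client.GetSecretAsync("DatabaseConnectionString");
string connectionString = secret.Value;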
Access to the key vault can be monitored to ensure that only the expected applications are accessing
the vault. The logs can be integrated back into Azure Monitor, unlocking the ability to set up alerts
when unexpected conditions are encountered.
Kubernetes
Within Kubernetes, there’s a similar service for maintaining small pieces of secret information.
Kubernetes Secrets can be set via the typical kubectl executable.
Creating a secret is as simple as finding the base64 version of the values to be stored:
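The command itself isn't reproduced in this text; on a Linux or macOS shell it would typically be something like the following, where the value is a placeholder:
echo -n 'my-connection-string-value' | base64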
Then add the encoded value to a secrets file, named secret.yml for example, that looks similar to the
following (the key name shown is illustrative):
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  connectionString: <base64-encoded value from the previous step>
Finally, this file can be loaded into Kubernetes by running the following command:
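The exact command isn't reproduced in this text; with the file above it would typically be:
kubectl apply -f ./secret.yml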
These secrets can then be mounted into volumes or exposed to container processes through
environment variables. The Twelve-factor app approach to building applications suggests using the
lowest common denominator to transmit settings to an application. Environment variables are the
lowest common denominator, because they’re supported no matter the operating system or
application.
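For illustration, a container process might read such a value with nothing more than the standard library (the variable name is hypothetical):
string connectionString = Environment.GetEnvironmentVariable("DB_CONNECTION_STRING");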
An alternative to using the built-in Kubernetes secrets is to access the secrets in Azure Key Vault from
within Kubernetes. The simplest way to do this is to assign an RBAC role to the container looking to
load secrets. The application can then use the Azure Key Vault APIs to access the secrets. However,
this approach requires modifications to the code and doesn’t follow the pattern of using environment
variables. Instead, it’s possible to inject values into a container. This approach is actually more secure
than using the Kubernetes secrets directly, as they can be accessed by users on the cluster.
In transit
There are several ways to encrypt traffic on the network in Azure. The access to Azure services is
typically done over connections that use Transport Layer Security (TLS). For instance, all the
connections to the Azure APIs require TLS connections. Equally, connections to endpoints in Azure
storage can be restricted to work only over TLS encrypted connections.
TLS is a complicated protocol and simply knowing that the connection is using TLS isn’t sufficient to
ensure security. For instance, TLS 1.0 is chronically insecure, and TLS 1.1 isn’t much better. Even within
the versions of TLS, there are various settings that can make the connections easier to decrypt. The
best course of action is to check and see if the server connection is using up-to-date and
well-configured protocols.
This check can be done by an external service such as SSL labs’ SSL Server Test. A test run against a
typical Azure endpoint, in this case a service bus endpoint, yields a near perfect score of A.
Even services like Azure SQL databases use TLS encryption to keep data hidden. The interesting part
about encrypting the data in transit using TLS is that it isn’t possible, even for Microsoft, to listen in on
the connection between computers running TLS. This should provide comfort for companies
concerned that their data may be at risk from Microsoft proper or even a state actor with more
resources than the standard attacker.
While this level of encryption isn’t going to be sufficient for all time, it should inspire confidence that
Azure TLS connections are quite secure. Azure will continue to evolve its security standards as
encryption improves. It’s nice to know that there’s somebody watching the security standards and
updating Azure as they improve.
At rest
In any application, there are a number of places where data rests on the disk. The application code
itself is loaded from some storage mechanism. Most applications also use some kind of a database
such as SQL Server, Cosmos DB, or even the amazingly price-efficient Table Storage. These databases
all use heavily encrypted storage to ensure that nobody other than the applications with proper
permissions can read your data. Even the system operators can’t read data that has been encrypted.
So customers can remain confident their secret information remains secret.
Storage
The underpinning of much of Azure is the Azure Storage engine. Virtual machine disks are mounted
on top of Azure Storage. Azure Kubernetes Service runs on virtual machines that, themselves, are
hosted on Azure Storage. Even serverless technologies, such as Azure Functions Apps and Azure
Container Instances, run out of disk that is part of Azure Storage.
If Azure Storage is well encrypted, then it provides a foundation for almost everything else to also
be encrypted. Azure Storage is encrypted with FIPS 140-2 compliant 256-bit AES. This is a
well-regarded encryption technology, having been the subject of extensive academic scrutiny over the last
20 or so years. At present, there’s no known practical attack that would allow someone without
knowledge of the key to read data encrypted by AES.
By default, the keys used for encrypting Azure Storage are managed by Microsoft. There are extensive
protections in place to prevent malicious access to these keys. However, users with
particular encryption requirements can also provide their own storage keys that are managed in Azure Key Vault.
Virtual machines use encrypted storage, but it’s possible to provide another layer of encryption by
using technologies like BitLocker on Windows or DM-Crypt on Linux. These technologies mean that
even if the disk image were leaked from storage, it would remain nearly impossible to read.
Azure SQL
Databases hosted on Azure SQL use a technology called Transparent Data Encryption (TDE) to ensure
data remains encrypted. It’s enabled by default on all newly created SQL databases, but must be
enabled manually for legacy databases. TDE executes real-time encryption and decryption of not just
the database, but also the backups and transaction logs.
The encryption parameters are stored in the master database and, on startup, are read into memory
for the remaining operations. This means that the master database must remain unencrypted. The
actual key is managed by Microsoft. However, users with exacting security requirements may provide
their own key in Key Vault in much the same way as is done for Azure Storage. The Key Vault provides
for such services as key rotation and revocation.
The “Transparent” part of TDE comes from the fact that no client changes are needed to use an
encrypted database. While this approach provides for good security, leaking the database password is
enough for users to be able to decrypt the data. There’s another approach that encrypts individual
columns or tables in a database. Always Encrypted ensures that at no point the encrypted data
appears in plain text inside the database.
Setting up this tier of encryption requires running through a wizard in SQL Server Management Studio
to select the sort of encryption and where in Key Vault to store the associated keys.
Client applications that read information from these encrypted columns need to make special
allowances to read encrypted data. Connection strings need to be updated with Column Encryption
Setting=Enabled and client credentials must be retrieved from the Key Vault. The SQL Server client
must then be primed with the column encryption keys. Once that is done, the remaining actions use
the standard interfaces to SQL Client. That is, tools like Dapper and Entity Framework, which are built
on top of SQL Client, will continue to work without changes. Always Encrypted may not yet be
available for every SQL Server driver on every language.
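A hedged sketch of that client-side change, using Microsoft.Data.SqlClient (server and database names are placeholders):
using Microsoft.Data.SqlClient;

var builder = new SqlConnectionStringBuilder
{
    DataSource = "my-server.database.windows.net",
    InitialCatalog = "my-database",
    // Turns on client-side decryption of Always Encrypted columns
    ColumnEncryptionSetting = SqlConnectionColumnEncryptionSetting.Enabled
};

using var connection = new SqlConnection(builder.ConnectionString);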
The combination of TDE and Always Encrypted, both of which can be used with client-specific keys,
ensures that even the most exacting encryption requirements are supported.
Cosmos DB
Cosmos DB is the newest database provided by Microsoft in Azure. It has been built from the ground
up with security and cryptography in mind. AES 256-bit encryption is standard for all Cosmos DB databases.
While Cosmos DB doesn’t provide for supplying customer encryption keys, there has been significant
work done by the team to ensure it remains PCI-DSS compliant without that. Cosmos DB also doesn’t
support any sort of single column encryption similar to Azure SQL’s Always Encrypted yet.
Keeping secure
Azure has all the tools necessary to release a highly secure product. However, a chain is only as strong
as its weakest link. If the applications deployed on top of Azure aren’t developed with a proper
security mindset and good security audits, then they become the weak link in the chain. There are
many great static analysis tools, encryption libraries, and security practices that can be used to ensure
that the software installed on Azure is as secure as Azure itself.
Take, for instance, the two major schools of developing web applications: Single Page Applications
(SPAs) versus server-side applications. On the one hand, the user experience tends to be better with
SPAs and the amount of traffic to the web server can be minimized making it possible to host them
on something as simple as static hosting. On the other hand, SPAs tend to be slower to develop and
more difficult to test. Which one is the right choice? Well, it depends on your situation.
Cloud-native applications aren’t immune to that same dichotomy. They have clear advantages in
terms of speed of development, stability, and scalability, but managing them can be quite a bit more
difficult.
Years ago, it wasn’t uncommon for the process of moving an application from development to
production to take a month, or even more. Companies released software on a six-month or even
yearly cadence. One needs to look no further than Microsoft Windows to get an idea of the release
cadence that was acceptable before the evergreen days of Windows 10. Five years passed between
Windows XP and Vista, and a further three between Vista and Windows 7.
It’s now fairly well established that being able to release software rapidly gives fast-moving companies
a huge market advantage over their more sloth-like competitors. It’s for that reason that major
updates to Windows 10 are now approximately every six months.
The patterns and practices that enable faster, more reliable releases to deliver value to the business
are collectively known as DevOps. They consist of a wide range of ideas spanning the entire software
development life cycle from specifying an application all the way up to delivering and operating that
application.
DevOps emerged before microservices, and it's likely that the movement towards smaller, more
fit-for-purpose services wouldn't have been possible without DevOps to make releasing and operating not
just one but many applications in production easier.
Through good DevOps practices, it’s possible to realize the advantages of cloud-native applications
without suffocating under a mountain of work actually operating the applications.
There’s no golden hammer when it comes to DevOps. Nobody can sell a complete and all-
encompassing solution for releasing and operating high-quality applications. This is because each
application is wildly different from all others. However, there are tools that can make DevOps a far less
daunting proposition. One of these tools is known as Azure DevOps.
Azure DevOps
Azure DevOps has a long pedigree. It can trace its roots back to when Team Foundation Server first
moved online and through the various name changes: Visual Studio Online and Visual Studio Team
Services. Through the years, however, it has become far more than its predecessors.
Azure Repos - Source code management that supports the venerable Team Foundation Version
Control (TFVC) and the industry favorite Git. Pull requests provide a way to enable social coding by
fostering discussion of changes as they're made.
Azure Boards - A work item tracking tool that provides backlogs, Kanban boards, and sprint planning
for organizing and tracking the work to be done.
Azure Pipelines - A build and release management system that supports tight integration with Azure.
Builds can be run on various platforms from Windows to Linux to macOS. Build agents may be
provisioned in the cloud or on-premises.
Azure Test Plans - No QA person will be left behind with the test management and exploratory
testing support offered by the Test Plans feature.
Azure Artifacts - An artifact feed that allows companies to create their own, internal, versions of
NuGet, npm, and others. It serves a double purpose of acting as a cache of upstream packages if
there’s a failure of a centralized repository.
The top-level organizational unit in Azure DevOps is known as a Project. Within each project the
various components, such as Azure Artifacts, can be turned on and off. Each of these components
provides different advantages for cloud-native applications. The three most useful are repositories,
boards, and pipelines. If users want to manage their source code in another repository stack, such as
GitHub, but still take advantage of Azure Pipelines and other components, that’s perfectly possible.
Fortunately, development teams have many options when selecting a repository. One of them is
GitHub.
GitHub Actions
Founded in 2009, GitHub is a widely popular web-based repository for hosting projects,
documentation, and code. Many large tech companies, such as Apple, Amazon, and Google, as well as
mainstream corporations, use GitHub. GitHub uses the open-source, distributed version control system
named Git as its foundation. On top, it then adds its own set of features, including defect tracking,
feature and pull requests, tasks management, and wikis for each code base.
As GitHub evolves, it too is adding DevOps features. For example, GitHub has its own continuous
integration/continuous delivery (CI/CD) pipeline, called GitHub Actions. GitHub Actions is a
community-powered workflow automation tool. It lets DevOps teams integrate with their existing
tooling, mix and match new products, and hook into their software lifecycle, including existing CI/CD
partners."
GitHub has over 40 million users, making it the largest host of source code in the world. In October of
2018, Microsoft purchased GitHub. Microsoft has pledged that GitHub will remain an open platform
that any developer can plug into and extend. It continues to operate as an independent company.
GitHub offers plans for enterprise, team, professional, and free accounts.
Source control
Organizing the code for a cloud-native application can be challenging. Instead of a single giant
application, the cloud-native applications tend to be made up of a web of smaller applications that
talk with one another. As with all things in computing, the best arrangement of code remains an open
question. There are examples of successful applications using different kinds of layouts, but two
variants seem to have the most popularity.
Before getting down into the actual source control itself, it’s probably worth deciding on how many
projects are appropriate. Within a single project, there’s support for multiple repositories, and build
pipelines. Boards are a little more complicated, but there too, the tasks can easily be assigned to
multiple teams within a single project. It’s possible to support hundreds, even thousands of
developers, out of a single Azure DevOps project. Doing so is likely the best approach, as it provides a
single place for all developers to work out of and reduces the confusion of finding that one application
when developers are unsure which project it resides in.
Splitting up code for microservices within the Azure DevOps project can be slightly more challenging. A
common approach is to create a separate repository for each microservice. This approach has several advantages:
1. Instructions for building and maintaining the application can be added to a README file at the
root of each repository. When flipping through the repositories, it’s easy to find these
instructions, reducing spin-up time for developers.
2. Every service is located in a logical place, easily found by knowing the name of the service.
3. Builds can easily be set up such that they’re only triggered when a change is made to the
owning repository.
4. The number of changes coming into a repository is limited to the small number of developers
working on the project.
However, this approach isn’t without its issues. One of the more gnarly development problems of our
time is managing dependencies. Consider the number of files that make up the average
node_modules directory. A fresh install of something like create-react-app is likely to bring with it
thousands of packages. The question of how to manage these dependencies is a difficult one.
If a dependency is updated, then downstream packages must also update this dependency.
Unfortunately, that takes development work so, invariably, the node_modules directory ends up with
multiple versions of a single package, each one a dependency of some other package that is
versioned at a slightly different cadence. When deploying an application, which version of a
dependency should be used? The version that is currently in production? The version that is currently
in Beta but is likely to be in production by the time the consumer makes it to production? Difficult
problems that aren’t resolved by just using microservices.
There are libraries that are depended upon by a wide variety of projects. By dividing the microservices
up with one in each repository the internal dependencies can best be resolved by using the internal
repository, Azure Artifacts. Builds for libraries will push their latest versions into Azure Artifacts for
internal consumption. The downstream project must still be manually updated to take a dependency
on the newly updated packages.
Another disadvantage presents itself when moving code between services. Although it would be nice
to believe that the first division of an application into microservices is 100% correct, the reality is that
we're rarely so prescient as to make no service division mistakes. Thus, functionality and the code that
drives it will need to move from service to service: repository to repository. When leaping from one
repository to another, the code loses its history. There are many cases, especially in the event of an
audit, where having full history on a piece of code is invaluable.
The final and most important disadvantage is coordinating changes. In a true microservices
application, there should be no deployment dependencies between services. It should be possible to
deploy services A, B, and C in any order as they have loose coupling. In reality, however, there are
times when it’s desirable to make a change that crosses multiple repositories at the same time. Some
examples include updating a library to close a security hole or changing a communication protocol
used by all services.
Single repository
In this approach, sometimes referred to as a monorepository, all the source code for every service is
put into the same repository. At first, this approach seems like a terrible idea likely to make dealing
with source code unwieldy. There are, however, some marked advantages to working this way.
The first advantage is that it’s easier to manage dependencies between projects. Instead of relying on
some external artifact feed, projects can directly import one another. This means that updates are
instant, and conflicting versions are likely to be found at compile time on the developer's workstation.
In effect, this shifts some of the integration testing to the left.
When moving code between projects, it’s now easier to preserve the history as the files will be
detected as having been moved rather than being rewritten.
Another advantage is that wide ranging changes that cross service boundaries can be made in a
single commit. This activity reduces the overhead of having potentially dozens of changes to review
individually.
There are many tools that can perform static analysis of code to detect insecure programming
practices or problematic use of APIs. In a multi-repository world, each repository will need to be
iterated over to find the problems in them. The single repository allows running the analysis all in one
place.
There are also many disadvantages to the single repository approach. One of the most worrying ones
is that having a single repository raises security concerns. If the contents of a repository are leaked in
a repository per service model, the amount of code lost is minimal. With a single repository,
everything the company owns could be lost. There have been many examples in the past of this
happening and derailing entire game development efforts. Having multiple repositories exposes less
surface area, which is a desirable trait in most security practices.
The size of the single repository is likely to become unmanageable rapidly. This presents some
interesting performance implications. It may become necessary to use specialized tools such as Virtual
File System for Git, which was originally designed to improve the experience for developers on the
Windows team.
Frequently the argument for using a single repository boils down to an argument that Facebook or
Google use this method for source code arrangement. If the approach is good enough for these
companies, then, surely, it’s the correct approach for all companies. The truth of the matter is that few
companies operate on anything like the scale of Facebook or Google. The problems that occur at
those scales are different from those most developers will face. What is good for the goose may not
be good for the gander.
In the end, either solution can be used to host the source code for microservices. However, in most
cases, the management, and engineering overhead of operating in a single repository isn’t worth the
meager advantages. Splitting code up over multiple repositories encourages better separation of
concerns and encourages autonomy among development teams.
Whenever a new project is created, a template that puts in place the correct structure should be used.
This template can also include such useful items as a skeleton README file and an azure-
pipelines.yml. In any microservice architecture, a high degree of variance between projects makes bulk
operations against the services more difficult.
There are many tools that can provide templating for an entire directory, containing several source
code directories. Yeoman is popular in the JavaScript world, and GitHub has recently released
Repository Templates, which provide much of the same functionality.
Task management
Managing tasks in any project can be difficult. Up front there are countless questions to be answered
about the sort of workflows to set up to ensure optimal developer productivity.
Cloud-native applications tend to be smaller than traditional software products, or at least they're
divided into smaller services. Tracking of issues or tasks related to these services remains as important
as with any other software project. Nobody wants to lose track of some work item or explain to a
customer that their issue wasn’t properly logged. Boards are configured at the project level but within
each project, areas can be defined. These allow breaking down issues across several components. The
advantage to keeping all the work for the entire application in one place is that it’s easy to move work
items from one team to another as they’re understood better.
One of the more important parts of Agile methodologies is self-introspection at regular intervals.
These reviews are meant to provide insight into what problems the team is facing and how they can
be improved. Frequently, this means changing the flow of issues and features through the
development process. So, it’s perfectly healthy to expand the layouts of the boards with additional
stages.
The stages in the boards aren’t the only organizational tool. Depending on the configuration of the
board, there’s a hierarchy of work items. The most granular item that can appear on a board is a task.
Out of the box, a task contains fields for a title, a description, a priority, an estimate of the amount of
work remaining, and the ability to link to other work items or development items (branches, commits,
pull requests, builds, and so forth). Work items can be classified into different areas of the application
and different iterations (sprints) to make finding them easier.
The description field supports the normal styles you'd expect (bold, italic, underscore, and
strikethrough) and the ability to insert images. This makes it a powerful tool for use when specifying work
or bugs.
Tasks can be rolled up into features, which define a larger unit of work. Features, in turn, can be rolled
up into epics. Classifying tasks in this hierarchy makes it much easier to understand how close a large
feature is to rolling out.
There are different kinds of views into the issues in Azure Boards. Items that aren’t yet scheduled
appear in the backlog. From there, they can be assigned to a sprint. A sprint is a time box during
which it’s expected some quantity of work will be completed. This work can include tasks but also the
resolution of tickets. Once there, the entire sprint can be managed from the Sprint board section. This
view shows how work is progressing and includes a burndown chart to give an ever-updating
estimate of whether the sprint will be successful.
By now, it should be apparent that there’s a great deal of power in the Boards in Azure DevOps. For
developers, there are easy views of what is being worked on. For project managers, there are views into
upcoming work as well as an overview of existing work. For managers, there are plenty of reports
about resourcing and capacity. Unfortunately, there's nothing magical about cloud-native applications
that eliminates the need to track work. But if you must track work, there are few places where the
experience is better than in Azure DevOps.
CI/CD pipelines
Almost no change in the software development life cycle has been so revolutionary as the advent of
continuous integration (CI) and continuous delivery (CD). Building and running automated tests
against the source code of a project as soon as a change is checked in catches mistakes early. Prior to
the advent of continuous integration builds, it wouldn't be uncommon to pull code from the repository
and discover that it didn't even compile.
Traditionally shipping software to the production environment required extensive documentation and
a list of steps. Each one of these steps needed to be manually completed in a very error-prone
process.
The sister of continuous integration is continuous delivery in which the freshly built packages are
deployed to an environment. The manual process can’t scale to match the speed of development so
automation becomes more important. Checklists are replaced by scripts that can execute the same
tasks faster and more accurately than any human.
The environment to which continuous delivery delivers might be a test environment or, as is being
done by many major technology companies, it could be the production environment. The latter
requires an investment in high-quality tests that can give confidence that a change isn’t going to
break production for users. In the same way that continuous integration caught issues in the code
early, continuous delivery catches issues in the deployment process early.
The importance of automating the build and delivery process is accentuated by cloud-native
applications. Deployments happen more frequently and to more environments, so manually deploying
borders on the impossible.
Azure Builds
Azure DevOps provides a set of tools to make continuous integration and deployment easier than
ever. These tools are located under Azure Pipelines. The first of them is Azure Builds, which is a tool
for running YAML-based build definitions at scale. Users can either bring their own build machines
(great if the build requires a meticulously set up environment) or use a machine from a constantly
refreshed pool of Azure-hosted virtual machines. These hosted build agents come pre-installed with a
wide range of development tools.
Azure DevOps includes a wide range of out-of-the-box build definitions that can be customized for any build.
The build definitions are defined in a file called azure-pipelines.yml and checked into the repository so
they can be versioned along with the source code. This makes it much easier to make changes to the
build pipeline in a branch, as the changes can be checked into just that branch. An example
azure-pipelines.yml for building an ASP.NET web application on full framework is shown in Figure 10-9.
name: $(rev:r)

variables:
  version: 9.2.0.$(Build.BuildNumber)
  solution: Portals.sln
  artifactName: drop
  buildPlatform: any cpu
  buildConfiguration: release

pool:
  name: Hosted VS2017
  demands:
  - msbuild
  - visualstudio
  - vstest

steps:
- task: NuGetToolInstaller@0
  displayName: 'Use NuGet 4.4.1'
  inputs:
    versionSpec: 4.4.1

- task: NuGetCommand@2
  displayName: 'NuGet restore'
  inputs:
    restoreSolution: '$(solution)'

- task: VSBuild@1
  displayName: 'Build solution'
  inputs:
    solution: '$(solution)'
    msbuildArgs: '-p:DeployOnBuild=true -p:WebPublishMethod=Package -p:PackageAsSingleFile=true -p:SkipInvalidConfigurations=true -p:PackageLocation="$(build.artifactstagingdirectory)\\"'
    platform: '$(buildPlatform)'
    configuration: '$(buildConfiguration)'

- task: VSTest@2
  displayName: 'Test Assemblies'
  inputs:
    testAssemblyVer2: |
      **\$(buildConfiguration)\**\*test*.dll
      !**\obj\**
      !**\*testadapter.dll
    platform: '$(buildPlatform)'
    configuration: '$(buildConfiguration)'

- task: CopyFiles@2
  displayName: 'Copy UI Test Files to: $(build.artifactstagingdirectory)'
  inputs:

- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact'
  inputs:
    PathtoPublish: '$(build.artifactstagingdirectory)'
    ArtifactName: '$(artifactName)'
  condition: succeededOrFailed()
This build definition uses a number of built-in tasks that make creating builds as simple as building a
Lego set (simpler than the giant Millennium Falcon). For instance, the NuGet task restores NuGet
packages, while the VSBuild task calls the Visual Studio build tools to perform the actual compilation.
There are hundreds of different tasks available in Azure DevOps, with thousands more that are
maintained by the community. It’s likely that no matter what build tasks you’re looking to run,
somebody has built one already.
Builds can be triggered manually, by a check-in, on a schedule, or by the completion of another build.
In most cases, building on every check-in is desirable. Builds can be filtered so that different builds run
against different parts of the repository or against different branches. This allows for scenarios like
running fast builds with reduced testing on pull requests and running a full regression suite against
the trunk on a nightly basis.
The end result of a build is a collection of files known as build artifacts. These artifacts can be passed
along to the next step in the build process or added to an Azure Artifact feed, so they can be
consumed by other builds.
Each stage in the build can be automatically triggered by the completion of the previous phase. In
many cases, however, this isn’t desirable. Moving code into production might require approval from
somebody. The Releases tool supports this by allowing approvers at each step of the release pipeline.
Rules can be set up such that a specific person or group of people must sign off on a release before it
can proceed to the next environment.
Versioning releases
One drawback to using the Releases functionality is that it can’t be defined in a checked-in azure-
pipelines.yml file. There are many reasons you might want to do that, from having per-branch release
definitions to including a release skeleton in your project template. Fortunately, work is ongoing to
shift some of the stages support into the Build component. This will be known as multi-stage build
and the first version is available now!
Feature flags
In chapter 1, we affirmed that cloud native is very much about speed and agility. Users expect rapid
responsiveness, innovative features, and zero downtime. Feature flags are a modern deployment
technique that helps increase agility for cloud-native applications. They enable you to deploy new
features into a production environment, but restrict their availability. With the flick of a switch, you can
activate a new feature for specific users without restarting the app or deploying new code. They
separate the release of new features from their code deployment.
Feature flags are built upon conditional logic that control visibility of functionality for users at runtime.
In modern cloud-native systems, it’s common to deploy new features into production early, but test
them with a limited audience. As confidence increases, the feature can be incrementally rolled out to
wider audiences. Feature flags support other scenarios as well:
• Restrict premium functionality to specific customer groups willing to pay higher subscription
fees.
• Stabilize a system by quickly deactivating a problem feature, avoiding the risks of a rollback or
immediate hotfix.
• Disable an optional feature with high resource consumption during peak usage periods.
• Conduct experimental feature releases to small user segments to validate feasibility and
popularity.
Feature flags also promote trunk-based development. It’s a source-control branching model where
developers collaborate on features in a single branch. The approach minimizes the risk and complexity
of merging large numbers of long-running feature branches. Features are unavailable until activated. Behind the scenes, a feature flag is backed by a Boolean state variable evaluated in conditional logic:
if (featureFlag) {
// Run this code block if the featureFlag value is true
} else {
// Run this code block if the featureFlag value is false
}
Note how this approach separates the decision logic from the feature code.
In chapter 1, we discussed the Twelve-Factor App. The guidance recommended keeping configuration
settings external from application executable code. When needed, settings can be read in from the
external source. Feature flag configuration values should also be independent from their codebase. By
externalizing flag configuration in a separate repository, you can change flag state without modifying
and redeploying the application.
Azure App Configuration provides a centralized repository for feature flags. With it, you define
different kinds of feature flags and manipulate their states quickly and confidently. You add the App
Configuration client libraries to your application to enable feature flag functionality. Various
programming language frameworks are supported.
Feature flags can be easily implemented in an ASP.NET Core service. Installing the .NET Feature Management libraries and the App Configuration provider enables you to declaratively add feature flags to your code. They expose a FeatureGate attribute so that you don’t have to manually write if statements across your codebase.
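A minimal registration sketch, assuming the Microsoft.FeatureManagement.AspNetCore and Microsoft.Extensions.Configuration.AzureAppConfiguration packages and an illustrative “AppConfig” connection string name, might look like this:

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.FeatureManagement;

public class Program
{
    public static void Main(string[] args) => CreateHostBuilder(args).Build().Run();

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.ConfigureAppConfiguration(config =>
                {
                    var settings = config.Build();

                    // Pull feature flags from Azure App Configuration at startup.
                    // "AppConfig" is an illustrative connection string name.
                    config.AddAzureAppConfiguration(options =>
                        options.Connect(settings.GetConnectionString("AppConfig"))
                               .UseFeatureFlags());
                });
                webBuilder.UseStartup<Startup>();
            });
}

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllersWithViews();

        // Enables the FeatureGate attribute and the IFeatureManager service.
        services.AddFeatureManagement();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();
        app.UseEndpoints(endpoints => endpoints.MapDefaultControllerRoute());
    }
}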
Once configured in your Startup class, you can add feature flag functionality at the controller, action, or middleware level. Figure 10-12 presents a controller and action implementation:
[FeatureGate(MyFeatureFlags.FeatureA)]
public class ProductController : Controller
{
...
}
[FeatureGate(MyFeatureFlags.FeatureA)]
public IActionResult UpdateProductStatus()
{
    return new ObjectResult(productDto);
}
If a feature flag is disabled, the user will receive a 404 (Not Found) status code with no response body.
Feature flags can also be injected directly into C# classes. Figure 10-13 shows feature flag injection.
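A minimal sketch of such injection, assuming the IFeatureManager service from the Microsoft.FeatureManagement library and an illustrative controller and view name, might look like this:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.FeatureManagement;

public class CatalogController : Controller
{
    private readonly IFeatureManager _featureManager;

    // IFeatureManager is registered by services.AddFeatureManagement().
    public CatalogController(IFeatureManager featureManager)
    {
        _featureManager = featureManager;
    }

    public async Task<IActionResult> Index()
    {
        // Evaluate the flag at runtime instead of decorating with FeatureGate.
        if (await _featureManager.IsEnabledAsync(nameof(MyFeatureFlags.FeatureA)))
        {
            return View("IndexWithFeatureA"); // illustrative view name
        }

        return View();
    }
}

Because the FeatureGate attribute uses the same IFeatureManager service under the covers, the attribute-based and injected checks are driven by the same flag state.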
The Feature Management libraries manage the feature flag lifecycle behind the scenes. For example,
to minimize high numbers of calls to the configuration store, the libraries cache flag states for a
specified duration. They can guarantee the immutability of flag states during a request call. Azure App Configuration also offers point-in-time snapshots: you can reconstruct the history of any key-value and retrieve its past value at any moment within the previous seven days.
Infrastructure as code
Cloud-native systems embrace microservices, containers, and modern system design to achieve speed
and agility. They provide automated build and release stages to ensure consistent and quality code.
But, that’s only part of the story. How do you provision the cloud environments upon which these
systems run?
Modern cloud-native applications embrace the widely accepted practice of Infrastructure as Code, or
IaC. With IaC, you automate platform provisioning. You essentially apply software engineering
practices such as testing and versioning to your DevOps practices. Your infrastructure and
deployments are automated, consistent, and repeatable. Just as continuous delivery automated the
traditional model of manual deployments, Infrastructure as Code (IaC) is evolving how application
environments are managed.
Tools like Azure Resource Manager (ARM), Terraform, and the Azure Command Line Interface (CLI)
enable you to declaratively script the cloud infrastructure you require.
Azure Resource Manager templates
Azure Resource Manager templates are a JSON-based language for defining various resources in Azure. The basic schema looks something like Figure 10-14.
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "",
  "parameters": {},
  "variables": {},
  "functions": [],
  "resources": [],
  "outputs": {}
}
Within this template, one might define a storage account inside the resources section like so:
"resources": [
{
"type": "Microsoft.Storage/storageAccounts",
"name": "[variables('storageAccountName')]",
"location": "[parameters('location')]",
"apiVersion": "2018-07-01",
"sku": {
"name": "[parameters('storageAccountType')]"
},
"kind": "StorageV2",
"properties": {}
}
],
An ARM template can be parameterized with dynamic environment and configuration information.
Doing so enables it to be reused to define different environments, such as development, QA, or
production. Normally, the template creates all resources within a single Azure resource group. It’s
possible to define multiple resource groups in a single Resource Manager template, if needed. You
can delete all resources in an environment by deleting the resource group itself. Cost analysis can also
be run at the resource group level, allowing for quick accounting of how much each environment is
costing.
There are many example ARM templates available in the Azure Quickstart Templates project on GitHub. They can help accelerate creating a new template or modifying an existing one.
Resource Manager templates can be run in many ways. Perhaps the simplest is to paste them into the Azure portal. For experimental deployments, this method can be quick. They can also be run as part of a build or release process in Azure DevOps. There are tasks that leverage connections into Azure to run the templates. Changes to Resource Manager templates are applied incrementally, meaning that adding a new resource requires only adding it to the template. The tooling will reconcile differences between the current resources and those defined in the template. Resources will then be created or altered so they match what’s defined in the template.
Terraform
Cloud-native applications are often constructed to be cloud agnostic. Being so means the application isn’t tightly coupled to a particular cloud vendor and can be deployed to any public cloud. Terraform, from HashiCorp, is a popular IaC tool that supports this approach by working with many different cloud providers through a common template language. An example Terraform file that creates the same resources as the previous Resource Manager template (Figure 10-15) is shown in Figure 10-16. It begins by declaring the azurerm provider:
provider "azurerm" {
version = "=1.28.0"
}
Terraform also provides intuitive error messages for problematic templates. There’s also a handy validate command that can be used in the build phase to catch template errors early.
As with Resource Manager templates, command-line tools are available to deploy Terraform
templates. There are also community-created tasks in Azure Pipelines that can validate and apply
Terraform templates.
Sometimes Terraform and ARM templates output meaningful values, such as a connection string to a
newly created database. This information can be captured in the build pipeline and used in
subsequent tasks.
Azure CLI scripts and tasks
Azure CLI scripts work well when you need to tear down and redeploy your infrastructure. Updating an existing environment can be trickier because many CLI commands aren’t idempotent: they’ll recreate the resource each time they’re run, even if the resource already exists. It’s always possible to add code that checks for the existence of each resource before creating it, but doing so can leave your script bloated and difficult to manage.
These scripts can also be embedded in Azure DevOps pipelines as Azure CLI tasks. Executing the
pipeline invokes the script.
- task: AzureCLI@2
  displayName: Azure CLI
  inputs:
    azureSubscription: <Name of the Azure Resource Manager service connection>
    scriptType: ps
    scriptLocation: inlineScript
    inlineScript: |
      az --version
      az account show
In the article What is Infrastructure as Code?, author Sam Guckenheimer describes how “Teams who
implement IaC can deliver stable environments rapidly and at scale. Teams avoid manual
configuration of environments and enforce consistency by representing the desired state of their
environments via code. Infrastructure deployments with IaC are repeatable and prevent runtime issues
caused by configuration drift or missing dependencies. DevOps teams can work together with a
unified set of practices and tools to deliver applications and their supporting infrastructure rapidly,
reliably, and at scale.”
Cloud Native Application Bundles
Different parts of a cloud-native application are often provisioned with different technologies. The Docker containers may run on Kubernetes using a Helm chart for deployment. The Azure Functions may be allocated using Terraform templates. Finally, the virtual machines may be allocated with Terraform but built out using Ansible. That’s a wide variety of technologies, and there has been no way to package them all together into a reasonable package. Until now.
Cloud Native Application Bundles (CNABs) are a joint effort by many community-minded companies
such as Microsoft, Docker, and HashiCorp to develop a specification to package distributed
applications.
The effort was announced in December 2018, so there’s still a fair bit of work to do to expose it to the greater community. However, there’s already an open specification and a reference implementation known as Duffle. This tool, written in Go, is a joint effort between Docker and Microsoft.
The CNABs can contain different kinds of installation technologies. This aspect allows things like Helm
Charts, Terraform templates, and Ansible Playbooks to coexist in the same package. Once built, the
packages are self-contained and portable; they can be installed from a USB stick. The packages are
cryptographically signed to ensure they originate from the party they claim. For example, the bundle.json file below describes a bundle that installs Terraform templates:
{
  "name": "terraform",
  "version": "0.1.0",
  "schemaVersion": "v1.0.0-WD",
  "parameters": {
    "backend": {
      "type": "boolean",
      "defaultValue": false,
      "destination": {
        "env": "TF_VAR_backend"
      }
    }
  },
  "invocationImages": [
    {
      "imageType": "docker",
      "image": "cnab/terraform:latest"
    }
  ],
  "credentials": {
    "tenant_id": {
      "env": "TF_VAR_tenant_id"
    },
    "client_id": {
      "env": "TF_VAR_client_id"
    },
    "client_secret": {
      "env": "TF_VAR_client_secret"
    },
    "subscription_id": {
      "env": "TF_VAR_subscription_id"
    },
    "ssh_authorized_key": {
      "env": "TF_VAR_ssh_authorized_key"
    }
  },
  "actions": {
    "status": {
      "modifies": true
    }
  }
}
The bundle.json also defines a set of parameters that are passed down into the Terraform templates. Parameterizing the bundle allows it to be installed in a variety of environments.
The CNAB format is also flexible, allowing it to be used against any cloud. It can even be used against
on-premises solutions such as OpenStack.
References
• Azure DevOps
• Azure Resource Manager
• Terraform
• Azure CLI
Summary
• Cloud-native is about designing modern applications that embrace rapid change, large scale, and resilience, in modern, dynamic environments such as public, private, and hybrid clouds.
• CNCF guidelines recommend that cloud-native applications embrace six important pillars, as shown in Figure 11-1.
• gRPC is a modern, high-performance framework that evolves the age-old remote procedure call
(RPC) protocol. Cloud-native applications often embrace gRPC to streamline messaging between
back-end services. gRPC uses HTTP/2 for its transport protocol. It can be up to 8x faster than
JSON serialization with message sizes 60-80% smaller. gRPC is open source and managed by the
Cloud Native Computing Foundation (CNCF).
• NoSQL databases refer to high-performance, non-relational data stores. They excel in their ease-of-use, scalability, resilience, and availability characteristics. High-volume services that require sub-second response time favor NoSQL datastores. The proliferation of NoSQL technologies for distributed cloud-native systems can’t be overstated.
• NewSQL is an emerging database technology that combines the distributed scalability of NoSQL
and the ACID guarantees of a relational database. NewSQL databases target business systems
that must process high volumes of data, across distributed environments, with full
transactional/ACID compliance. The Cloud Native Computing Foundation (CNCF) features
several NewSQL database projects.
• Resiliency is the ability of your system to react to failure and still remain functional. Cloud-native
systems embrace distributed architecture where failure is inevitable. Applications must be
constructed to respond elegantly to failure and quickly return to a fully functioning state.
• A service mesh is a configurable infrastructure layer with built-in capabilities to handle service communication and other cross-cutting challenges. It decouples cross-cutting responsibilities from your business code by moving them into a service proxy. Referred to as the Sidecar pattern, the proxy is deployed into a separate process to provide isolation from your business code.
• Infrastructure as Code is a widely accepted practice that automates platform provisioning. Your
infrastructure and deployments are automated, consistent, and repeatable. Tools like Azure
Resource Manager, Terraform, and the Azure CLI enable you to declaratively script the cloud
infrastructure you require.