Istio Explained
Getting Started with Service Mesh
The O’Reilly logo is a registered trademark of O’Reilly Media, Inc. Istio Explained,
the cover image, and related trade dress are trademarks of O’Reilly Media, Inc.
The views expressed in this work are those of the authors, and do not represent the
publisher’s views. While the publisher and the authors have used good faith efforts
to ensure that the information and instructions contained in this work are accurate,
the publisher and the authors disclaim all responsibility for errors or omissions,
including without limitation responsibility for damages resulting from the use of or
reliance on this work. Use of the information and instructions contained in this
work is at your own risk. If any code samples or other technology this work contains
or describes is subject to open source licenses or the intellectual property rights of
others, it is your responsibility to ensure that your use thereof complies with such
licenses and/or rights.
This work is part of a collaboration between O’Reilly and IBM. See our statement of
editorial independence.
978-1-492-07393-2
Table of Contents

Foreword
Preface
1. Introduction to Service Mesh
2. Introducing Istio
    Why Do We Love Istio?
    Istio Features
    Istio Architecture
    Installing Istio
    Conclusion
3. Adding Services to the Mesh
4. Securing Communication Within Istio
    Istio Security
    Enable mTLS Communication Between Services
    Securing Inbound Traffic
    Conclusion
5. Control Traffic
    Dark Launch
    Canary Testing
    Resiliency and Chaos Testing
    Controlling Outbound Traffic
    Conclusion
6. Wrap-Up
    Takeaways
    Next Steps
Foreword
complement to that original work. I had the pleasure of reviewing
this report, and Dan and Lin’s approach is notable in taking the time
to help you understand not just what Istio provides in terms of func‐
tionality, but also how it works and things to watch out for. Istio
provides a wealth of functionality, but in this report Dan and Lin
introduce you to its core capabilities, using consumable examples to
help you get up to speed. I hope you enjoy the report, and on behalf
of the Istio community, welcome to the world of service mesh!
— Christian Posta
Global Field CTO, Solo.io
Preface
incremental approach to teaching you how to adopt a service mesh
like Istio, our goal being to set you up for gradual adoption so that
you can see benefits as quickly as possible.
Why Istio?
Though there are many service mesh options to choose from (as
you’ll see in Chapter 1), because we are most familiar with it, we
chose Istio to illustrate the benefits a service mesh can offer through
its features. We encourage you to use the information we provide in
this book to evaluate the available options and choose the solution
best suited for your needs.
Prerequisites
The working examples in this book build on Kubernetes for manag‐
ing the sample’s containers, and Kubernetes also serves as the plat‐
form for Istio itself. Thus, to get the most from our examples, it
would be helpful for you to have a basic understanding of Kuber‐
netes. To quickly get up to speed, we recommend that you check out
this Kubernetes overview and its related links: “Kubernetes: A Sim‐
ple Overview.” As with the adoption process for any new technology,
you are likely to run into difficulties with configuration or setup.
In Chapter 2, we provide an introduction to techniques and com‐
mands to help you troubleshoot the issues you may encounter on
your journey to adopting service mesh.
Acknowledgments
We would like to thank the entire Istio community for its passion,
dedication, and tremendous commitment to the Istio project.
Without the project maintainers and contributors to the project
over the years, Istio would not have the rich feature set, diverse code
base, and large ecosystem that it has today.
We also extend our thanks to John Alcorn and Ryan Claussen, the
original authors of the Kubernetes application that we use as an example in this book. Also, we would like to thank Christian Posta,
Burr Sutter, and Virginia Wilson for their reviews, feedback, and
overall wisdom that was provided during the creation of the book. A
very special thanks to Peter Wassel and Jason McGee for all of their
support and encouragement during this endeavor.
CHAPTER 1
Introduction to Service Mesh
In this chapter, we explore the notion of a service mesh and the vast
ecosystem that has emerged in support of service mesh solutions.
Organizations face many challenges when managing services, espe‐
cially in a cloud native environment. We are introducing service
mesh as a key solution on your cloud native journey because we
believe that a service mesh should be a serious consideration for
managing complex interactions between services. An understanding
of service mesh and its ecosystem will help you choose an appropri‐
ate implementation for your cloud solution.
• Traffic management at each service endpoint becomes more
important to enable specialized routing for A/B testing or can‐
ary deployments without impacting clients within the system.
• Securing communication by encrypting the data flows is more
complicated when the services are decoupled with different
binary processes and possibly written in different languages.
• Managing timeouts and communication failures between the
services can lead to cascading failures and is more difficult to do
correctly when the services are distributed.
In Figure 1-1, you can see that the communication between two
applications such as App1 and App2 is executed via the proxies ver‐
sus directly between the applications themselves, as indicated by the
red arrows. By having communication routed between the proxies,
the proxies serve as a key control point for performing complicated
tasks such as initiating transport layer security (TLS) handshakes for
encrypted communication (shown on the red line with the lock in
Figure 1-1). Since the communication is performed between the
proxies, there is no need to embed complex networking logic in the
applications themselves. Each service mesh implementation option
has various features, but they all share this general approach.
The fact that there are many service mesh options validates the interest in service meshes, and it shows that the community has not yet selected a de facto standard as it has with other projects such as Kubernetes for container orchestration. Your answers to these questions will have an impact on the type of service mesh that you prefer, whether it is a single-vendor-controlled project or a multivendor, open source project. Let’s take a moment to review the service mesh
ecosystem and describe each implementation so that you have a bet‐
ter understanding of what is available.
Envoy
The Envoy proxy is an open source project originally created by the
folks at Lyft. The Envoy proxy is an edge and service proxy that was
custom built to deal with the complexities and challenges of cloud
native applications. While Envoy itself does not constitute a service
mesh, it is definitely a key component of the service mesh ecosys‐
tem. What you will see from exploring the service mesh implemen‐
tations is that the client-side proxy from the reference architecture
in Figure 1-1 is often implemented using an Envoy proxy.
Envoy is one of the six graduated projects in the Cloud Native Com‐
puting Foundation (CNCF). The CNCF is part of the Linux Founda‐
tion, and it hosts a number of open source projects that are used to
manage modern cloud native solutions. The fact that Envoy is a
CNCF graduated project is an indicator that it has a strong commu‐
nity with adopters using an Envoy proxy in production settings.
Although Envoy was originally created by Lyft, the open source
project has grown into a diverse community, as shown by the com‐
pany contributions in the CNCF DevStats graph shown in
Figure 1-2.
Istio
The Istio project is an open source project cofounded by IBM, Goo‐
gle, and Lyft in 2017. Istio makes it possible to connect, secure, and
observe your microservices while being language agnostic. Istio has
grown to include contributions from companies beyond its original
cofounders, companies such as VMware, Cisco, and Huawei, among
others. Figure 1-3 shows company contributions over the past 12
months using the CNCF DevStats tool. As of this writing, the open
source project Knative also builds upon the Istio project, providing
tools and capabilities to build, deploy, and manage serverless work‐
loads. Istio itself builds upon many other open source projects such
as Envoy, Kubernetes, Jaeger, and Prometheus. Istio is listed as part
of the CNCF Cloud Native Landscape, under the Service Mesh
category.
The Istio control plane extends the Kubernetes API server and uti‐
lizes the popular Envoy proxy for its client-side proxies. Istio sup‐
ports mutual TLS authentication (mTLS) communication between
services, traffic shifting, mesh gateways, monitoring and metrics
with Prometheus and Grafana, as well as custom policy injection.
Istio has installation profiles such as demo and production to make
it easier to provision and configure the Istio control plane for spe‐
cific use cases.
Consul Connect
Consul Connect is a service mesh developed by HashiCorp. Consul
Connect extends HashiCorp’s existing Consul offering, which has
service discovery as a primary feature as well as other built-in fea‐
tures such as a key-value store, health checking, and service segmen‐
tation for secure TLS communication between services. Consul
Connect is available as an open source project with HashiCorp itself
being the predominant contributor. HashiCorp has an enterprise
offering for Consul Connect for purchase with support. At the time
of writing, Consul Connect was not contributed to the CNCF or
another foundation. Consul is listed as part of the CNCF Cloud
Native Landscape under the Service Mesh category.
Consul Connect uses Envoy as the sidecar proxy and the Consul
server as the control plane for programming the sidecars. Consul
Connect includes secure mTLS support between microservices and
observability using the Prometheus and Grafana projects.
Linkerd
The Linkerd service mesh project is an open source project as well
as a CNCF incubating project focusing on providing an ultralight
weight mesh implementation with a minimalist design. The pre‐
dominant contributors to the Linkerd project are from Buoyant, as
shown in the 12-month CNCF DevStats company contribution
graph in Figure 1-4. Linkerd has the key capabilities of a service
mesh, including observability using Prometheus and Grafana,
secure mTLS communication, and—recently added—support for
service traffic shifting. The client-side proxy used with Linkerd was
developed specifically for and within the Linkerd project itself, and
written in Rust. Linkerd provides an injector to inject proxies during
a Kubernetes pod deployment based on an annotation to the Kuber‐
netes pod specification. Linkerd also includes a user interface (UI)
dashboard for viewing and configuring the mesh settings.
Kong
Kong’s service mesh builds upon Kong’s edge capabilities for managing APIs and delivers these capabilities throughout the entire mesh. Though Kong is an open source project, it appears that
its contributions are heavily dominated by Kong members. Kong is
not a member of a foundation, but is listed as part of the CNCF
Cloud Native Landscape under the API Gateway category. Kong
does provide Kong Enterprise, which is a paid product with support.
Much like all the other service mesh implementations, Kong has
both a control plane to program and manage the mesh as well as a
client-side proxy. In Kong’s case the client-side proxy is unique to
the Kong project. Kong includes support for end-to-end mTLS
encryption between services. Kong promotes its extensibility feature
as a key advantage. You can extend the Kong proxy using Lua plug-
ins to inject custom behavior at the proxies.
AspenMesh
AspenMesh is unlike the other service mesh implementations in
being a supported distribution of the Istio project. AspenMesh does
have many open source projects on GitHub, but its primary direc‐
tion is not to build a new service mesh implementation but to
harden and support an open source service mesh implementation
through a paid offering. AspenMesh also hosts components of Istio as part of that offering.
Conclusion
The service mesh ecosystem is vibrant, and you have learned that
there are many open source projects as well as vendor-specific
projects that provide implementations for a service mesh. As we
continue to explore service mesh more deeply, we will turn our
attention to the Istio project. We have selected the Istio project
because it uses the Envoy proxy, it is rich in features, it has a diverse
open source community, and, most important, we both have experi‐
ence with the project.
CHAPTER 2
Introducing Istio
Now we turn our attention to our prime running example: the Istio
service mesh. After reading this chapter, you should have a good
understanding of Istio’s architecture, how it operates, and the key
tenets of the project. Because we want to provide you with a hands-
on cloud journey, we will walk you through getting Istio installed so
you can use it for tasks in later chapters.
Individually managing hundreds if not thousands of sidecar proxies
would be unwieldy. The Istio control plane provides you with a
declarative API for defining your service configurations and poli‐
cies, which are then propagated to the sidecar proxies with the
proper configurations by the Istio control plane. Ultimately, Istio
enables you to focus on solving business problems because error-
prone logic is removed from the application code.
In Chapter 1, we introduced you to challenges that you may experi‐
ence as you move to use the cloud and microservices. The Istio ser‐
vice mesh has capabilities that simplify your ability to solve these
challenges.
Now, let’s dive deeper into the key features of Istio and provide you
with a base understanding for how it’s architected. Having a base
understanding of the Istio architecture will help you to understand
how your interactions with the Istio control plane affect behavior
within the mesh.
Istio Features
Istio solves the challenges of managing microservices by using a
core set of features that allow you to observe, connect, and secure
your services. These features can be broken down into three main
categories: observability, traffic management, and security.
Observability
Simply by installing Istio and adding services to the mesh, you will
begin to get rich tracing, monitoring, and logging for your services.
Traffic Management
By using easy-to-configure rules, you have fine-grained control over
how traffic flows between services at both the application layer
(Layer 7) as well as the network IP address level (Layer 4). For
example, in Kubernetes you have simple round-robin load balancing
across all service endpoints. With Istio you can organize service
endpoints by version and declare policies with the control plane to
control load balancing. You can then also make determinations as to
which service to use, based on a plethora of conditions, including
the source client identity, the client input type, percentage distribu‐
tion, geography, and more. Using such rules to control the traffic
flow, you can easily adjust traffic as conditions within your applica‐
tion change.
Using Istio’s traffic-shaping features allows you to deliver changes
more rapidly because you can reduce delivery risks. Istio enables
controlled rollout of changes using various deployment patterns
such as percentage based, canary, A/B testing, and more. Istio’s
traffic-shaping support also includes features that increase the resil‐
iency of your application without having to change the code. For
example, with distributed microservices, it is more likely that you
could see network failures between service calls, or disjointed time‐
outs between service calls, which result in a poor user experience.
Istio allows you to set conditions that control how services recover
from service call failures such as circuit breakers, timeouts, and
retries. The Istio traffic-shaping features coupled with automatic
insights make it far simpler for you to program higher resiliency and
control flows directly into the network of your service mesh.
Security
One of the most difficult features to enable in a distributed cloud
application is secure communication between services where you
have data encryption and authentication between the services. This
is challenging because coding the logic in each service is compli‐
cated, and it takes only one improper configuration to expose a
security threat. Istio provides a feature that automatically establishes
a secure channel between services by managing service identities,
certificates, and mTLS handshaking. Istio uses first-class service
identity such as a Kubernetes service account to determine the iden‐
tity of the service, which we cover in detail in "Istio Identities" in Chapter 4. This means you can ensure that secure chan‐
nels exist between your services with certificates that are generated
and constantly rotated, dramatically reducing possible security
threats between services.
As with all of the features in Istio, managing security between serv‐
ices is also declarative using the APIs available in the Istio control
plane. Enabling secure communication within the mesh is not an
all-or-nothing setting. Istio has settings that allow permissive secure
channels between services. Selective permissive channels make it
convenient for you to incrementally add services to the mesh
without causing failures. This feature greatly simplifies your journey
to the cloud.
Istio Architecture
At a high level, Istio consists of a data plane and a control plane, as
shown in the service mesh reference architecture in Figure 1-1.
Figure 2-1 depicts the Istio components used to implement the ser‐
vice mesh reference architecture. The Istio data plane is composed
of Envoy sidecar proxies running in the same network space as each
service to control all network communication between services, as
well as Mixer, to provide extensible policy evaluation between serv‐
ices. The Istio control plane is responsible for the APIs used to con‐
figure the proxies and Mixer as part of the data plane. The key
components of the control plane are Pilot, Citadel, Mixer, and
Galley.
Envoy
Istio uses the Envoy proxy for the sidecars as well as gateways. You
will learn more about gateways in Chapters 4 and 5. The Envoy
proxy was developed to be extensible, and Istio uses an extended
version of the Envoy proxy to provide the features and capabilities
needed to work with the Istio control plane. Envoy is deployed as a
sidecar to each service endpoint. Within a Kubernetes environment,
Envoy is injected into each Kubernetes pod as a separate container.
Ingress and egress network traffic in and out of the pod is config‐
ured to flow through the sidecar Envoy proxy. Flowing all traffic
through the Envoy sidecar provides a control point to allow Istio to
gather metrics, control traffic, evaluate policies, and encrypt data
transfer. Istio adds to many of Envoy’s built-in features, such as load
balancing, TLS termination, circuit breakers, health checking,
HTTP/2, gRPC, and much more.
Pilot
Pilot is the essential component that programs the Envoy sidecars: it
converts Istio-defined APIs into Envoy-specific configurations,
which are propagated to Envoy proxy sidecars. Responsible for ser‐
vice discovery within the service mesh, Pilot is also primarily
responsible for traffic-management capabilities as well as resiliency
features such as circuit breakers and retry logic. To support service
discovery, Pilot abstracts platform-specific service discovery imple‐
mentations and converts them into a standard format used by side‐
cars that conform to the Envoy data plane APIs. The abstraction
provided by Pilot allows Istio to be used with multiple environ‐
ments, including Kubernetes, Consul, or Nomad, and provides you
with a common interface. We explore more of the details about
Pilot’s traffic management capabilities in Chapter 5.
Citadel
Citadel provides critical security capabilities within the Istio service
mesh. Citadel’s primary responsibility is to manage certificates and
provide strong service identities to enable strong service-to-service
as well as end-user authentication. With the use of Citadel, you can
upgrade communication between your microservices from sending
plain text to having data sent fully encrypted using mTLS authenti‐
cation and authorization. We’ll get into how Citadel is used to secure
communication between services in Chapter 4.
Mixer
Mixer has a dual role within Istio. It enforces access control and
usage policies across the service mesh and collects telemetry data
from the sidecar proxies as well as other Istio control-plane services.
Mixer has been designed to be extensible by allowing you to inject
your own specialized policies to be executed by Envoy proxies when
communicating between services. This same extensibility frame‐
work enables Istio to work with multiple host environments and
backends. Request-level telemetry metrics are extracted by the prox‐
ies and forwarded to Mixer for evaluation. We get into more of the
details about collecting and viewing telemetry data in Chapter 3.
Galley
Galley manages Istio’s configuration. It validates, ingests, processes,
and distributes Istio’s configuration to the other control-plane serv‐
ices. Galley ultimately insulates the other Istio components from the
details of obtaining data from the underlying platform, such as
Kubernetes.
Installing Istio
Setting Up the Istio Command-Line Interface
You need to set up the Istio command-line interface (CLI) before
proceeding. The Istio CLI, istioctl, is an executable that you set up
on your local development environment. You install istioctl from
the release download by adding it to your PATH environment
variable:
$ export PATH=$PWD/bin:$PATH
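With istioctl on your PATH, you can run Istio’s preinstallation verification against your cluster. A minimal sketch, assuming the release of istioctl used in this report supports the verify-install subcommand:
$ istioctl verify-install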
You will get an output similar to the following. The istioctl utility
checks the Kubernetes API Server, Kubernetes version, whether Istio
is already installed, and whether you have permission to create the
required Kubernetes resources:
Checking the cluster to make sure it is ready for Istio installation...
Kubernetes-api
-----------------------
Can initialize the Kubernetes client.
Can query the Kubernetes API Server.
Kubernetes-version
-----------------------
Istio is compatible with Kubernetes: v1.13.10+IKS.
Istio-existence
-----------------------
Istio will be installed in the istio-system namespace.
Kubernetes-setup
-----------------------
Can create necessary Kubernetes configurations: Namespace,ClusterRole,ClusterRoleBinding,CustomResourceDefinition,Role,ServiceAccount,Service,Deployments,ConfigMap.
SideCar-Injector
-----------------------
This Kubernetes cluster supports automatic sidecar injection. To enable automatic
sidecar injection see https://istio.io/docs/setup/kubernetes/additional-setup/
sidecar-injection/#deploying-an-app
-----------------------
Install Pre-Check passed! The cluster is ready for Istio installation.
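Before applying the demo profile, you need to install the Istio custom resource definitions (CRDs). A minimal sketch, assuming the directory layout of the Istio release download used in this report:
# install the Istio CRDs from the release download
$ for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done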
When the CRDs are installed, you can then install the Istio demo
profile using the istio-demo.yaml Kubernetes resources:
$ kubectl apply -f install/kubernetes/istio-demo.yaml
Conclusion
You should now have a firm understanding of the key features that
the Istio service mesh offers for managing, securing, and observing
microservices, as well as the core components involved in the imple‐
mentation of these features. With this basic understanding, you are
now ready to continue your cloud journey. We’ll get deeper into
each of the key features with hands-on tasks that will provide you
with the information you’ll need to incrementally adopt a service
mesh such as Istio for managing your microservices.
CHAPTER 3
Adding Services to the Mesh
The Stock Trader application shown in Figure 3-1 is a simple stock
trading sample with which you can create various stock portfolios
and add shares of stock with a commission. It keeps track of each
portfolio’s total value and detailed stock holdings. You can find the
source code of the application in repositories found in the istio-
explained GitHub organization.
One strategy for incrementally adding services into the mesh would
be to separate additional services by namespace. For example, serv‐
ices added to the mesh will be in a separate namespace from services
that will continue to remain outside the mesh.
Sidecar Injection
Adding services to the mesh requires that the client-side proxies be
associated with the service components and registered with the con‐
trol plane. With Istio, you have two methods to inject the Envoy proxy sidecar into the microservice Kubernetes pods: manual sidecar injection and automatic sidecar injection.
Sidecar-Injection Approaches
You can find more information about the different
approaches for injecting the Istio Envoy sidecar on the istio.io documentation site.
Using the code snippets that follow, you can create the stock-
trader namespace enabled with automatic sidecar injection by
adding the istio-injection label to the namespace. By executing
these steps, you have identified the namespace that will be used to
add services from the Stock Trader application into the mesh:
# create the stock-trader namespace
$ kubectl create namespace stock-trader
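To enable automatic sidecar injection, you then add the istio-injection label to the namespace; a minimal sketch using Istio’s standard label:
# enable automatic sidecar injection for the namespace
$ kubectl label namespace stock-trader istio-injection=enabled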
Reviewing Service Requirements
Before you add Kubernetes services to the mesh, you need to be
aware of the pod and service requirements to ensure that your
Kubernetes services meet the minimum requirements.
Service descriptors:
• Each service port name must start with the protocol name, for
example, name: http.
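For illustration, a minimal service descriptor that satisfies this requirement by naming its port http (the service name and port shown here are hypothetical):
apiVersion: v1
kind: Service
metadata:
  name: portfolio-service
  labels:
    app: portfolio
spec:
  ports:
  - name: http
    port: 9080
  selector:
    app: portfolio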
Deployment descriptors:
• Each deployment should declare explicit app and version labels on its pod template; these labels provide additional context for the metrics and telemetry that Istio collects.
The steps to deploy the portfolio service work exactly the same
with or without Istio. There is nothing in the steps or the configuration that is specific to Istio. After deploying, you can describe the pod to inspect the injected containers; for example, the istio-init container runs to completion after configuring the pod’s traffic redirection:
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 17 Sep 2019 21:52:02 -0400
You will notice that the istio-proxy container has requested 0.01
CPU and 40 MB memory to start with as well as 2 CPU and 1 GB
memory for limits. You will need to budget for these settings when
managing the capacity of the cluster. Further, the requested resource
and resource limit may vary per installation profile. Also notice that
the istio-certs are mounted to the pod for the purpose of imple‐
menting mTLS, which we explore in Chapter 4.
Run the command that follows to validate that the pods are running and
that the Envoy sidecar has been injected by inspecting the number
of containers that are running. You should see 2/2, since the deploy‐
ment descriptors have a container for each service plus the newly
injected istio-proxy container for each service.
# validate pod has reached running status with sidecar injected
$ kubectl get pods -l app=trader -n stock-trader
NAME READY STATUS RESTARTS AGE
trader-7fc498f64-wkn58 2/2 Running 0 13d
At this point, you have deployed all of the services in the Stock
Trader example with some services associated with the mesh and the
data service not included in the mesh.
Use the cluster name from the output to view the list of worker
nodes in your cluster using this command:
$ ibmcloud ks workers --cluster $CLUSTER_NAME
$ export STOCK_TRADER_IP=<public IP of one of the worker nodes>
The output of the worker nodes will show the public IP addresses of
the worker nodes. If you are using another Kubernetes environment
other than IKS, you can try this next command to obtain the IP
address from Kubernetes:
$ export STOCK_TRADER_IP=$(kubectl get po -l istio=ingressgateway \
-n istio-system -o jsonpath='{.items[0].status.hostIP}')
If you are using Minikube, you can try this command to obtain the
IP address from Minikube:
$ export STOCK_TRADER_IP=$(minikube ip)
Confirm that you can access the application via its node-port. Open
a web browser and enter the following in the URL field, replacing
$STOCK_TRADER_IP with the environment variable that you have set
previously. Log in using username stock and password trader.
http://$STOCK_TRADER_IP:32388/trader/login
You can view distributed tracing information using the Jaeger dash‐
board, which you can launch using istioctl dashboard jaeger, as
shown in Figure 3-4. Select the trader.stock-trader service to
view all traces related to the service.
Click each trace to view the detailed trace spans among the micro‐
services, as shown in Figure 3-5.
When there are errors for some traces, you can click one of the
traces that contains errors to investigate the problem. In this case,
there is a 500 error return code when the trader service called the
portfolio service, as shown in Figure 3-6. These traces can help
you quickly pin down which service(s) to troubleshoot further.
Conclusion
In this chapter, you learned that adding services to a service mesh
requires little effort and requires no code changes to get valuable
telemetry support out of the box. Istio makes it even easier to add
services to the mesh by enabling automatic sidecar injection per
Kubernetes namespace. More important is that you have the ability
to control which services are added to the mesh to enable incremen‐
tal adoption of services, which is especially important for existing
applications. In the next chapter, we explore enabling secure com‐
munication between your services, taking into account that not all services are added to the mesh at once.
CHAPTER 4
Securing Communication Within Istio
Istio Security
It is always imperative to secure communication to your application
by ensuring that only trusted identities can call your services. In tra‐
ditional applications, we often see that communication to services is
secured at the edge of the application, or, to be more explicit, a net‐
work gateway (appliance or software) is configured on the network
in which the application is deployed. In these topologies the first
line of defense—and often the only line of defense—is at the edge of
the network prior to getting into the application. Such a deployment
topology exhibits faults when moving to a highly distributed, cloud
native solution.
Istio aims to provide security in depth to ensure that an application can be secured even when running on an untrusted network. Security in depth places security controls at every endpoint of the mesh and not simply at the edge. Placing security controls at each endpoint defends against man-in-the-middle attacks by enabling mTLS: encrypted traffic flows backed by secure service identities. Traditionally,
application code would be modified using common libraries and
approaches to establish TLS communication to other services in the
application. The traditional approach is complex, varies between
languages, and relies on the developers to follow development
guidelines to enable TLS communication between services. The tra‐
ditional approach is fraught with errors, and a single error to secure
a connection can compromise the entire application. Istio, on the
other hand, establishes and manages mTLS connections within the
mesh itself and not within the application code. Thus mTLS com‐
munication can be enabled without changing code, and it can be
done with a high degree of consistency and control ensuring far less
opportunity for error. Figure 4-1 shows the key Istio components
involved in providing mTLS communication between services in the
mesh, including the following:
Citadel
Manages keys and certificates including generation and rota‐
tion.
Istio (Envoy) Proxy
Implements secure communication between clients and servers.
Pilot
Distributes secure naming, mapping, and authentication poli‐
cies to the proxies.
Istio Identities
A critical aspect of securing communication between services is having a consistent approach to defining the service iden‐
tities. For mutual authentication between two services, the services
must exchange credentials encoded with their identity. In Kuber‐
netes, service accounts are used to provide service identities. Istio
uses secure naming information on the client side of a service invo‐
cation to determine whether the client is allowed to call the server-
side service. On the server side, authorization policies determine how the client can access the service and what information can be accessed.
Along with service identities being encoded in certificates, secure
naming in Istio will map the service identities to the service names
that have been discovered. In simple Kubernetes terms this means a
mapping of service account (i.e., the service identity) X to a service
named Z indicates that “service account X is authorized to run ser‐
vice Z.” What this means in practice is that when a client attempts to
call service Z, Istio will check whether the identity running the ser‐
vice is actually authorized to run the service before allowing the cli‐
ent to use the service. As you learned earlier, Istio Pilot is
responsible for configuring the Envoy proxies. In a Kubernetes envi‐
ronment, Pilot watches the Kubernetes api-server for services being added or removed and generates the secure naming mapping
information, which is then securely distributed to all of the Envoy
proxies in the mesh. Secure naming prevents DNS spoofing attacks
with the mapping of service identities (service accounts) to service
names.
Authorization policies are modeled after Kubernetes Role-Based
Access Control (RBAC), which defines roles with actions used
within a Kubernetes cluster and role bindings to associate roles to
identities, either user or service. Authorization policies are defined
using a ServiceRole and ServiceRoleBinding. A ServiceRole is used to
define permissions for accessing services, and a ServiceRoleBinding
grants a ServiceRole to subjects that can be a user, a group, or a ser‐
vice. This combination defines who is allowed to do what under
which conditions. Here is a simple example of a ServiceRole that
provides read access to all services under the /quotes path in the
trader namespace:
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
name: quotes-viewer
namespace: trader
spec:
rules:
- services: ["*"]
paths: ["*/quotes"]
methods: ["GET"]
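A ServiceRoleBinding then grants this role to one or more subjects. A minimal sketch, in which the subject’s service account identity is hypothetical:
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: bind-quotes-viewer
  namespace: trader
spec:
  subjects:
  # a hypothetical service account identity
  - user: "cluster.local/ns/trader/sa/quotes-reader"
  roleRef:
    kind: ServiceRole
    name: "quotes-viewer"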
You can use the istioctl authn command to validate the existing
TLS settings, both client side and server side, for accessing the portfolio service from the point of view of a trader service pod. Use these
commands to check the TLS settings for a trader pod (the client)
using the portfolio service (the server):
$ TRADER_POD=$(kubectl get pod -l app=trader -o jsonpath={.items..metadata.name} -n stock-trader)
$ istioctl authn tls-check ${TRADER_POD}.stock-trader portfolio-service.stock-trader.svc.cluster.local
Using Kiali
For information about using Kiali, refer to Chapter 3.
First, log in to the sample application site, which will be used to gen‐
erate load against the Stock Trader application. You can use the fol‐
lowing cURL command to log in to the Stock Trader application
from a terminal to obtain an authentication cookie:
$ curl -X POST \
http://$STOCK_TRADER_IP:32388/trader/login \
-H 'Content-Type: application/x-www-form-urlencoded' \
-H 'Referer: http://$STOCK_TRADER_IP:32388/trader/login' \
-H 'cache-control: no-cache' \
-d 'id=admin&password=admin&submit=Submit' \
--insecure --cookie-jar stock-trader-cookie
Then, you can generate load against the summary page using the
cached cookie:
$ while sleep 2.0; do curl -L http://$STOCK_TRADER_IP:32388/trader/summary \
    --insecure --cookie stock-trader-cookie; done
Default DestinationRule
There can be only one default DestinationRule in a
given namespace, and it must be named “default.” The
default DestinationRule will override the global set‐
ting and provide a default setting for all services
defined in the namespace.
You will see that you now have a default namespace-scoped Destina‐
tionRule (default/stock-trader) that applies to the pod being exam‐
ined. Notice that the results show that the client-side authentication
requires mTLS as defined in the default DestinationRule in the
stock-trader namespace, meaning that mesh clients will send
encrypted messages. The server-side authentication remains PERMIS
SIVE, accepting both plain text and mTLS from the client due to the
mesh-wide default policy:
Executing the tls-check once again, you see that both client-side
and server-side authentication requires mTLS:
$ TRADER_POD=$(kubectl get pod -l app=trader -o jsonpath={.items..metadata.name} -n stock-trader)
$ istioctl authn tls-check ${TRADER_POD}.stock-trader portfolio-service.stock-trader.svc.cluster.local
You will see failures now in your cURL calls from earlier since the
server side is requiring clients to send mTLS traffic, which also
includes clients from the internet. It will be necessary to secure
inbound traffic to the service mesh as well to remove these errors:
curl: (56) Recv failure: Connection reset by peer
If you do not recall the name of your cluster, you can use the follow‐
ing command to list all the clusters that belong to you:
$ ibmcloud ks clusters
Now you can register a DNS entry against the IP address of the Istio ingress gateway using the IBM Cloud Kubernetes Service (IKS) nlb-dns commands.
Check the status of the DNS entry using the IKS nlb-dns command,
which will have a result similar to this:
$ ibmcloud ks nlb-dns ls --cluster $CLUSTER_NAME
Retrieving hostnames, certificates, IPs, and health check monitors for network
load balancer (NLB) pods in cluster <YOUR_CLUSTER_NAME>...
OK
Hostname                istio-book-f0a5715bb2873122b708ede2bf765701-0001.us-east.containers.appdomain.cloud
IP(s)                   169.63.159.157
Health Monitor          None
SSL Cert Status         created
SSL Cert Secret Name    istio-book-f0a5715bb2873122b708ede2bf765701-0001
The SSL certificate is encoded in the SSL Cert Secret Name Kuber‐
netes secret stored in the default namespace. You will need to copy
the secret into the istio-system namespace where the gateway ser‐
vice is deployed, and you will need to name the secret istio-
ingressgateway-certs. This is a reserved name, and the certificate will automatically be loaded by the ingress gateway when a secret with the name istio-ingressgateway-certs is found.
Copy the SSL secret generated by your cloud provider into the
istio-ingressgateway-certs secret. To do this you’ll need to
export the secret name using this command where you change
<YOUR_SSL_SECRET_NAME> to the name generated by your cloud
provider:
$ export SSL_SECRET_NAME=<YOUR_SSL_SECRET_NAME>
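One way to perform the copy is to extract the certificate and key from the provider-managed secret and re-create them under the reserved name. This is a sketch, assuming the provider secret stores its data under the tls.crt and tls.key keys:
# extract the certificate and key from the provider-managed secret
$ kubectl get secret $SSL_SECRET_NAME -n default -o jsonpath='{.data.tls\.crt}' | base64 --decode > tls.crt
$ kubectl get secret $SSL_SECRET_NAME -n default -o jsonpath='{.data.tls\.key}' | base64 --decode > tls.key
# re-create the secret under the reserved name in the istio-system namespace
$ kubectl create secret generic istio-ingressgateway-certs -n istio-system \
    --from-file=tls.crt --from-file=tls.key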
Validate that the secret was created using the following command:
$ kubectl get secret istio-ingressgateway-certs -n istio-system
NAME TYPE DATA AGE
istio-ingressgateway-certs Opaque 2 3h5m
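The ingress gateway itself is configured with an Istio Gateway resource that terminates TLS using the copied certificates. A minimal sketch of what the trader-gateway referenced in the rest of this chapter could look like, assuming the default location where the ingress gateway mounts the istio-ingressgateway-certs secret:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: trader-gateway
  namespace: stock-trader
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - '*'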
At this point you have secured the ingress gateway, but you haven’t
defined any services to be accessed via the gateway. Exposing a ser‐
vice outside the mesh requires that a service is bound to the gateway
using an Istio VirtualService resource. The VirtualService is
bound to the gateway using the gateways section. In this case the
VirtualService is bound to the newly configured trader-gateway.
Specifying a Hostname
You can specify a hostname instead of using “*” for a
given gateway resource and virtual service resource.
For example, you can use <YOUR_DNS_NLB_HOSTNAME>
as the hostname.
Conclusion
In this chapter, you learned that Istio provides a simple yet powerful
mechanism to manage mTLS communication between services
using strong identities defined with the SPIFFE format. Citadel is
the key Istio component responsible for the generation and rotation
of keys and certificates used for secure communication between
services within the mesh. You learned that Istio uses a declarative
model to set security policies enabling the ability to incrementally
onboard secure services into the mesh. By using a permissive model,
it is possible to have services that support both plain text and mTLS
communication, which makes it easier to incrementally move serv‐
ices into the mesh. Using Istio gateways ensures secure, encrypted communication from clients outside of the mesh to the services that are exposed from the mesh. Now that you
have discovered how to secure services in the mesh, we turn your
attention toward controlling traffic within the mesh.
CHAPTER 5
Control Traffic
You are now ready to take control of how traffic flows between serv‐
ices. In a Kubernetes environment, there is simple round-robin load
balancing between service endpoints. While Kubernetes does sup‐
port deployment strategies such as a rolling deployment, it is quite
coarse grained and is limited to moving to a new version of the ser‐
vice. You may find it necessary to have more than one version of the
service running and perform a dark launch or a canary test. A ser‐
vice mesh enables these types of traffic management patterns by
controlling requests and resiliency between services and controlling
the traffic entering and leaving the cluster. This chapter explores
many of these types of features to control the traffic between serv‐
ices including increasing the resiliency between the services.
Dark Launch
Dark launch allows you to deploy a service or a new version of a ser‐
vice while minimizing the impact to users; in other words, you can
keep the service in the dark. It is imperative that you can develop
and deliver new versions of your application with agility and low
risk. Using a dark-launch approach enables you to deliver new func‐
tions rapidly with reduced risk. Since Istio allows you to precisely
control how new versions of services are rolled out and accessed by
clients, you can use a dark-launch approach for delivering changes.
Introducing Changes as a New Version
For example, you may want to create a new version of the Stock
Trader service with the loyalty information for each portfolio owner,
starting with basic loyalty level with the possibility to generate the
loyalty level based on the portfolio’s total value. In the Trader Git‐
Hub repository, we have already created a v1 branch with the origi‐
nal version of the application and left the master branch for new
development.
After you have updated the trader service in the master branch,
you can update the version value in the deployment labels, selector
match labels, and template labels in the deploy.yaml file to reflect the
v2 version, as shown in the example that follows. Recall from Chap‐
ter 3 that version labels were added to the deployment descriptors to
provide more context for metrics and telemetry:
$ cat trader/manifests/deploy.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: traderv2
  labels:
    app: trader
    solution: stock-trader
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: trader
      version: v2
  template:
    metadata:
      labels:
        app: trader
        version: v2
...
You can execute the next set of commands if you want to make the
changes in your own fork of the GitHub repositories for the Stock
Trader example. With these commands you can build a new image
with the updated changes:
$ git checkout master
$ mvn package
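The remaining build steps depend on your container registry. A minimal sketch, in which the registry name and tag are placeholders:
# build and push the updated trader image (registry and tag are hypothetical)
$ docker build -t <YOUR_REGISTRY>/trader:v2 .
$ docker push <YOUR_REGISTRY>/trader:v2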
Basic Traffic Routing
Istio provides fine-grained traffic routing controls for both the client
and destination service using the Istio virtual services and destina‐
tion rules. A virtual service provides you with the ability to config‐
ure a list of routing rules that control how the Envoy sidecar proxies
route requests to a service within the service mesh. In this example,
you can define a virtual service to define the routing rules that
would be used when invoking the trader service. Since the trader
service is an edge service (i.e., a service accessible from outside the
mesh), you’ll need to bind the virtual service to the trader-gateway
to describe the route rules from the gateway to the trader service.
Using this approach, you are able to dark launch the trader-v2
deployment changes. Inspect the virtual-service-trader Virtual‐
Service using the next command to see the configured route, which
sends 100% of the requests to the destination trader-service within
the mesh. With this virtual service definition, none of the requests
to the trader-service would be routed to the v2 deployment end‐
points because all the requests are routed to the pods with the “v1”
subset label:
$ cat manifests/trader-vs-100-v1.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtual-service-trader
spec:
  hosts:
  - '*'
  gateways:
  - trader-gateway
  http:
  - match:
    - uri:
        prefix: /trader
    route:
    - destination:
        host: trader-service
        subset: "v1"
        port:
          number: 9080
      weight: 100
TLS Settings
In Chapter 4, you enabled a default destination rule
within the namespace for the mTLS settings. When
you define a destination rule for a destination virtual
service, you need to specify the TLS settings because
the declaration of the destination rule for the virtual
service will override the default mTLS settings, either
global or namespace scoped.
If you look at the trader-dr.yaml file from the example Stock Trader
application that follows, you’ll see that the destination-rule-trader
resource exposes two subsets, “v1” and “v2,” based on labels found
in the destination trader-service. The destination-rule-trader
shown in the example is an extremely simple case that uses the
default round-robin load-balancing strategy. It is common to have
other rules such as load balancing, connection pool size, and outlier
detection settings to detect and evict unhealthy hosts from the load-
balancing pool:
$ cat manifests/trader-dr.yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: destination-rule-trader
spec:
  host: trader-service
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
  subsets:
  - name: v1
    labels:
      version: "v1"
  - name: v2
    labels:
      version: "v2"
You can apply the virtual service and destination rule resources,
which will program the Istio mesh and change the routing behavior.
Execute the commands that follow to essentially instruct Pilot to
program the Istio ingress gateway to route requests on port 443 via
https and a URI path with /trader to the v1 subset of the trader-
service. Note, you do not need to redeploy either the trader v1 or v2
deployments or the trader service for the changes to take effect:
$ kubectl apply -f manifests/trader-vs-100-v1.yaml -n stock-trader
virtualservice.networking.istio.io/virtual-service-trader configured
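The destination rule shown earlier is applied in the same way, assuming the manifest path used previously:
$ kubectl apply -f manifests/trader-dr.yaml -n stock-trader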
Namespace Scoped
Notice that you have deployed the trader-gateway,
the virtual-service-trader, and the destination-
rule-trader resources in the same stock-trader
namespace. It isn’t necessary to have the gateway
resource in the same namespace as the virtual service
resource. However, it is necessary in this case, because
the virtual-service-trader resource is referring to
the trader-gateway without a namespace. Therefore
the references are all scoped to the local namespace.
With the new mesh configurations applied, you can test the effects
with the sample application. Open a Firefox browser and visit the
https://<YOUR_DNS_NLB_HOSTNAME>/trader URL. When using
Firefox as the client browser to visit the application, you will see
your requests are routed to the v2 deployment of the trader service,
as demonstrated in Figure 5-1. Open another web browser client
such as Chrome or Safari. Now visit the same https://
<YOUR_DNS_NLB_HOSTNAME>/trader URL. You should now
see requests being routed to the v1 deployment of the trader
service.
Canary Testing
A canary test1 is when you deploy a new version (the canary) along
with the previous version and route a percentage of requests to the
new version to determine whether there are problems before rout‐
ing all traffic to the new version. After satisfactorily testing a new
feature with a selective set of requests, a canary test is often per‐
formed to ensure that the new version of the service not only func‐
tions properly, but also doesn’t cause a degradation in performance
or reliability. You may even place higher load on the canary deploy‐
ment monitoring the effects over time. If there are no observed ill
effects on the environment, you would adjust the routing rules to
direct all of the traffic to the canary deployment.
1 The term comes from coal mining; miners took canary birds into the mine since the
birds would be affected by carbon monoxide before the miners, thus giving crucial
advance warning about the problem.
You can now deploy the updated virtual service definition for the
trader service using the following command. There is no need to
redeploy either versions of the trader deployments to change to the
desired traffic distribution because Istio is dynamically reconfigur‐
ing the Envoy sidecars within the mesh:
$ kubectl apply -f manifests/trader-vs-80-20.yaml -n stock-trader
virtualservice.networking.istio.io/virtual-service-trader configured
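Based on the v1-only definition shown earlier, the weighted routes in trader-vs-80-20.yaml likely look similar to this sketch:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtual-service-trader
spec:
  hosts:
  - '*'
  gateways:
  - trader-gateway
  http:
  - match:
    - uri:
        prefix: /trader
    route:
    - destination:
        host: trader-service
        subset: "v1"
        port:
          number: 9080
      weight: 80
    - destination:
        host: trader-service
        subset: "v2"
        port:
          number: 9080
      weight: 20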
Visit the trader service in your favorite web browser using https://
<YOUR_DNS_NLB_HOSTNAME>/trader/login. Validate that 20%
of the requests go to the trader v2 deployment while 80% of the requests continue to show the trader v1 UI. You will not immediately see the 80/20 distribution mix until the Envoy sidecars have processed the configuration change and adjusted the routing distribution. You may need to refresh /trader/login multiple times, perhaps 15 or more, to see the proper distribution.
Canary Without Istio
Container orchestration platforms such as Kubernetes manage traffic routing between deployments through instance scaling, using the number of replicas to control the weight between the deployment endpoints.
With Istio, you can have multiple versions of the
trader service deployed at the same time and allow
them to scale up and down independently, without
affecting the traffic distribution between them. As a
result, you can scale up or down either version of the
trader service without worrying about causing an
impact to the traffic distribution among the versions of
the service. Istio allows you to decouple deployments
from traffic routing.
Retries
Istio has support to program retries for your services in the mesh
without you specifying changes to your code. By default, client
requests to each of your services in the mesh will be retried twice.
When using Istio, you can configure the number of retries and the
timeout for each retry from the point of view of a client that is call‐
ing the service. You can configure retries per service within the Istio virtual service resource.
To set the new retries configuration, you can apply the trader-vs-
retries.yaml file using this command:
$ kubectl apply -f manifests/trader-vs-retries.yaml -n stock-trader
virtualservice.networking.istio.io/virtual-service-trader configured
Timeouts
Istio has built-in support for timeouts with client requests to serv‐
ices within the mesh. The default timeout for HTTP requests is 15
seconds. You can override the default timeout setting of a service
route within the route rule for a virtual service resource. For exam‐
ple, in the route rule within the virtual-service-trader resource,
you can add the following timeout configuration to set the timeout
of the /trader route to be 10 seconds, along with 3 retries with each
retry timing out after 2 seconds.
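A minimal sketch of that route rule, reusing the match, host, subset, and port from the trader virtual service shown earlier:
  http:
  - match:
    - uri:
        prefix: /trader
    route:
    - destination:
        host: trader-service
        subset: "v1"
        port:
          number: 9080
    timeout: 10s
    retries:
      attempts: 3
      perTryTimeout: 2s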
To see the new timeouts and retries in action, you can use this com‐
mand to apply the trader-vs-retries-timeout.yaml:
$ kubectl apply -f manifests/trader-vs-retries-timeout.yaml -n stock-trader
virtualservice.networking.istio.io/virtual-service-trader configured
Circuit Breakers
Circuit breaking is an important pattern for creating resilient micro‐
service applications. Circuit breaking allows you to limit the impact
of failures and network delays, which are often beyond your control
when making requests to dependent services. Generally, it would be
necessary to add logic directly within your code to handle situations
when the calling service fails to provide the desirable result. You
would code logic to capture the failure and make decisions on the
proper course of action, which would provide a more desirable
result to the client rather than an error message.
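As a sketch of how Istio expresses this pattern without application code, a DestinationRule can declare connection-pool limits and outlier detection for a service such as portfolio; the specific thresholds here are illustrative rather than taken from the original manifests:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: destination-rule-portfolio
spec:
  host: portfolio-service
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
    connectionPool:
      tcp:
        maxConnections: 1
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutiveErrors: 3
      interval: 30s
      baseEjectionTime: 120s
      maxEjectionPercent: 100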
Second, you will inject a 90-second fault delay for 100% of the client
requests when the portfolio_user HTTP header value exactly
matches the value Jason. Using a fault injection such as this allows
you to minimize the impact to most client requests since you are
injecting the failure only on a specific client request.
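A minimal sketch of such a fault-injection rule; the virtual service name matches the output shown below, while the destination host is an assumption based on the stock-quote service used in this example:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtual-service-stock-quote
spec:
  hosts:
  - stock-quote-service
  http:
  - match:
    - headers:
        portfolio_user:
          exact: Jason
    fault:
      delay:
        percentage:
          value: 100
        fixedDelay: 90s
    route:
    - destination:
        host: stock-quote-service
  - route:
    - destination:
        host: stock-quote-service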
Apply the virtual service and destination rule changes using the fol‐
lowing commands to view the effects of the fault injection:
$ cd ..
$ kubectl apply -f stock-quote/manifests/stock-quote-vs-fault-match.yaml \
-n stock-trader
virtualservice.networking.istio.io/virtual-service-stock-quote created
To test the new fault-injection settings that you created using the
preceding steps, you’ll need to use the sample Stock Trader applica‐
tion and a user portfolio with an owner named “Jason.” You can visit
the Stock Trader application in a web browser using https://
<YOUR_DNS_NLB_HOSTNAME>/trader and log in as user stock
with password trader. Once you have logged in to the application
in your web browser, complete the following steps to see the fault
injection in action:
Since the HTTP delay was injected earlier, it will take 90 seconds to
get a response. In this case, an error occurs that needs to be fixed in
the trader or portfolio service to ensure that it can handle the net‐
work degradation failure properly and serve a useful message to the
client.
Create another user portfolio with a name other than “Jason.” Fol‐
lowing the same steps as above, retrieve the new user’s portfolio. You
will find that the request succeeds because the fault injection is not
applied to portfolios without the user name “Jason.”
The stock-quote service needs to reach out to the IEX Cloud exter‐
nal service to get the current quote of the stock. Go back to the
Stock Trader application and add a new stock to one of the portfo‐
lios, using a different stock symbol to ensure that the stock-quote
service must call the IEX Cloud external service to get the most
recent stock price. Figure 5-3 shows an example view of using the
Stock Trader application to purchase shares into a portfolio.
Because all outbound traffic is blocked by default, you will see the
following exception thrown by the application class:
org.apache.cxf.microprofile.client.DefaultResponseExceptionMapper.toThrowable:33
You can examine the stock-quote pod log to get more detailed
information for the connection failure by entering the following in
your terminal:
$ kubectl logs -c stock-quote --namespace=stock-trader \
--selector="app=stock-quote,solution=stock-trader"
You should see in the log an entry similar to the example that fol‐
lows, which indicates a connection problem with the IEX Cloud
external service. This is expected because we have restricted any ser‐
vice in the mesh from accessing any other service that is external to
the mesh:
{"type":"liberty_message","host":"stock-quote-78848589c6-cgqvh",
"ibm_userDir":"\/opt\/ol\/wlp\/usr\/","ibm_serverName":"defaultServer",
"message":"javax.ws.rs.ProcessingException:
javax.net.ssl.SSLHandshakeException: SSLHandshakeException
Istio has the ability to selectively access external services using a Ser‐
vice Entry. A Service Entry allows you to define a service that is
external to the mesh and allows access by services within the mesh.
Through service entries, you can make external services participants in the mesh. Create a service entry to ensure that services can
access the IEX Cloud external service while still preventing access to
all other external services, like so:
$ cat stock-quote/manifests/se-iex.yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: iex-service-entry
spec:
  hosts:
  - "cloud.iexapis.com"
  ports:
  - number: 443
    name: https
    protocol: https
  resolution: DNS
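Apply the service entry in the same namespace used for the other Stock Trader resources (the namespace here is an assumption carried over from the earlier steps):
$ kubectl apply -f stock-quote/manifests/se-iex.yaml -n stock-trader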
Repeat the same step as earlier to revisit the Stock Trader application
to add a new stock to a portfolio. This time, you should see that the
stock is successfully added given that the call to the IEX Cloud
external service is no longer being blocked.
Similar to requests between services within the mesh, Istio routing rules can be used with external services to define retries, timeouts, and fault-injection policies. For example, you can set a timeout rule on calls to the cloud.iexapis.com service used in the Stock Trader application as shown here:
$ cat stock-quote/manifests/iex-vs.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: iex-virtual-service
spec:
  hosts:
  - "cloud.iexapis.com"
  https:
  - timeout: 3s
    route:
    - destination:
        host: "cloud.iexapis.com"
      weight: 100
Egress Gateway
Although service entries provide controlled access to
external services, when combined with an Istio egress
gateway, you can ensure that all external services are
accessed through a single exit point. Having a single
exit point allows you to apply specific security constraints to the nodes and pods through which all traffic leaving the mesh will pass. Refer to the official egress tasks for more details on using an egress gateway.
Conclusion
When you move to developing a cloud native solution, the dis‐
tributed nature of the services requires greater control over the flow
of traffic between the services. Basic routing and load-balancing
support in Kubernetes often falls short of what is needed to manage
traffic in these highly distributed applications. A service mesh like Istio provides you with the ability to manage traffic flows within the mesh as well as traffic entering and leaving the
mesh. These capabilities allow you to efficiently control rollout and
access to new features, and they make it possible to build more resil‐
ient services within the mesh, all without having to make compli‐
cated changes to your application code.
CHAPTER 6
Wrap-Up
Takeaways
Above and beyond everything else, it is important to have a service
mesh strategy when using microservices in the cloud in order to get
control over the complexities introduced by the highly distributed
nature of microservices and the cloud. Beyond this key point, you
should have a better understanding of the following points:
• You can add services to the mesh using Istio’s automatic sidecar
injection support per Kubernetes namespace, making it easier to
incrementally adopt a service mesh, which is important for
brownfield applications.
Next Steps
If you read through this book but you did not run the steps outlined
in each chapter, we recommend that you take another pass through
and actually work through the steps with the Stock Trader applica‐
tion. You will retain more of the lessons if you try them out yourself.
If after reading this book we have piqued your interest in either ser‐
vice meshes or Istio itself and you would like to get more informa‐
tion, we recommend that you check out the following:
When you are ready, the next logical step is to apply what you have
learned on your own projects to truly see the value that you can ach‐
ieve with a service mesh.
About the Authors
Lin Sun is a senior technical staff member and Master Inventor at
IBM. She is a maintainer on the Istio project and also serves on the
Istio Steering Committee and Technical Oversight Committee. She
is passionate about new technologies and loves to play with them.
She holds more than 150 patents issued by the USPTO.
Daniel Berg is an IBM Distinguished Engineer responsible for the
technical architecture and delivery of the IBM Cloud Kubernetes
Service and Istio. Daniel has deep knowledge of container technolo‐
gies including Docker and Kubernetes and has extensive experience
building and operating highly available cloud native services. Daniel
is a member of the Technical Oversight Committee for the Istio.io
open source service mesh project, and he is responsible for driving
the technical integration of Istio into IBM Cloud.