Unit 3 Final 1
It allows you to isolate an application from its host system so that the
application becomes portable.
Unlike a VM, a container does not include a full operating system; it contains only
the elements the application requires, such as system libraries, binaries, and
code dependencies.
Introduction to Docker
The principal difference between VMs and containers is that each VM that is
hosted on a hypervisor contains a complete OS.
● Containers vs. Virtual Machines
Why use Docker:
It is easy to install and run software without worrying about setup or dependencies.
Developers use Docker to eliminate "works on my machine" problems (i.e., "but the
code worked on my laptop") when working on code together with co-workers.
Operators use Docker to run and manage apps in isolated containers for better
compute density.
Enterprises use Docker to build secure, agile software delivery pipelines that ship
new application features faster and more securely.
Docker is not only used for deployment; it is also a great platform for
development, and it helps increase customer satisfaction.
Advantages of Docker:
Containers start in seconds instead of minutes.
Docker allows you to use a remote repository to share your container images with
others.
Limitation: Docker is not a good solution for applications that require a rich
graphical interface.
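As a quick sketch of these points (assuming Docker is installed; the nginx image here is purely an illustrative example, not from the slides), a container starts in seconds and can be shared via a remote repository:

```shell
# Start a container in the background; this typically takes seconds,
# not the minutes a VM needs to boot a full guest OS.
docker run -d --name web -p 8080:80 nginx:latest

docker ps                          # verify the container is running
docker stop web && docker rm web   # clean up
```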
The three main components of Docker are:
Docker image
Docker registry
Docker container
Components of Docker
Docker Client and Server:
The Docker client communicates with the Docker host (daemon) via a REST API.
Example: a docker pull command sends an instruction to the daemon, which
performs the operation by interacting with the other components (image,
container, registry).
The Docker daemon itself is a server that interacts with the operating
system and performs services.
The Docker daemon constantly listens on the REST API for requests it needs to
perform.
To trigger and start the whole process, run the dockerd command, which launches
the Docker daemon and all of its services.
Then there is the Docker host, which runs the Docker daemon and registry.
Docker Image:
That template is a Dockerfile, a plain-text file with its own simple instruction
syntax (not YAML, despite what some course material claims).
The image has several key layers, and each layer depends on the layer
below it.
Image layers are created by executing each instruction in the Dockerfile, and
each layer is read-only.
You start with the base layer, which typically contains the base image and base
operating system.
The remaining instructions in the Dockerfile then add layers on top of that base.
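A minimal sketch of such a layered Dockerfile (the image name and file paths are illustrative assumptions, not taken from the slides):

```dockerfile
# Base layer: base image and operating system
FROM ubuntu:20.04

# Each instruction below adds a read-only layer on top of the previous one
RUN apt-get update && apt-get install -y apache2   # layer: installed packages
COPY ./site/ /var/www/html/                        # layer: application files
CMD ["apachectl", "-D", "FOREGROUND"]              # default command when a container starts
```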
Docker Image:
In the image referenced above (a slide illustration not reproduced here), there
are four layers of instructions: FROM, PULL, RUN, and CMD.
The FROM instruction creates a layer based on Ubuntu, and files from the Docker
repository are then added on top of that base layer.
The Docker registry is used to host and distribute various types of images.
The repository itself is just a collection of Docker images, which are built from
the instructions in a Dockerfile and are easily stored and shared.
Give name tags to your Docker images so that they are easy to find and share
within the Docker registry.
You can also create your own registry for internal use.
The registry that you create internally can hold both public and private images.
The commands used to interact with a registry are push and pull.
Use a pull command to retrieve a Docker image from the Docker registry.
Docker Registry:
A pull command retrieves a Docker image from the Docker registry.
A push command allows you to take a new image that you've created and push it
to the registry, whether that is Docker Hub or your own private registry.
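A sketch of the pull and push workflow against Docker Hub (the image and account names are placeholders):

```shell
docker pull httpd:latest                     # pull: retrieve an image from the registry
docker tag httpd:latest mylogin/myimage:v1   # tag the image for your own repository
docker push mylogin/myimage:v1               # push: upload it to the registry
```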
Docker Container:
A container is a running instance of an image, which provides all the
instructions for the solution you're looking to run.
This is useful, especially when you have a virtual machine that has a defined
amount of memory for each environment.
If the Redis image is not available locally, it will be pulled from the registry.
After this, the new Redis Docker container will be available within your
environment so you can start using it.
Containers are lightweight because they do not have some of the additional
layers that virtual machines do.
The biggest layer Docker does away with is the hypervisor, and each container
does not need its own guest operating system.
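The Redis example described above can be sketched as follows; if the image is absent locally, Docker pulls it from the registry first:

```shell
docker run -d --name myredis redis   # pulls the redis image from Docker Hub if not cached locally
docker ps --filter name=myredis      # the new Redis container is available in your environment
```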
Advanced Components of Docker
1. Docker Compose:
Use Docker Compose when, for example, you are running an Apache server with a
database and need to create additional containers for additional services,
without having to start each one separately.
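A minimal docker-compose.yml sketch for the Apache-plus-database scenario described above (image names, ports, and credentials are illustrative assumptions):

```yaml
version: "3"
services:
  web:
    image: httpd:latest        # Apache server
    ports:
      - "8080:80"
  db:
    image: mysql:5.7           # single database service
    environment:
      MYSQL_ROOT_PASSWORD: example
```

Running `docker-compose up -d` then starts both containers together instead of launching each one separately.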
2. Docker Swarm:
Each node of a Docker swarm is a Docker daemon, and all of the daemons interact
using the Docker API.
Worker nodes receive and execute tasks from the manager node.
Installing Docker
● Docker's community edition (CE) is free and is very well suited to
developers and small teams.
● It is also natively available on some cloud providers, such as AWS and Azure.
● To operate, Docker needs the following elements:
1. The Docker client: This allows you to perform various operations on the
command line.
3. Docker Hub: This is a public (with a free option available) registry of Docker
images.
● Registering on Docker Hub
● To register on Docker Hub and list Docker images, perform the following
steps:
1. Go to https://hub.docker.com/ and click on the Sign up for Docker Hub button:
3. Once your account is created, you can then log in to the site, and this
account will allow you to upload custom images and download Docker
Desktop.
4. To view and explore the images available from Docker Hub, go to the
Explore section.
● Docker Installation:
3. Then, take the single configuration step, which is a choice between using
Windows or Linux containers:
5. Finally, to start Docker, launch the Docker Desktop program. An icon will
appear in the notification bar indicating that Docker is starting. It will then
ask you to log in to Docker Hub via a small window. The startup steps of
Docker Desktop are shown in the following screenshot:
● docker --help
● An overview of Docker's elements:
● This space will allow the storage of persistent elements (files or databases).
Creating a Dockerfile
● A basic Docker element is a file called a Dockerfile, which contains step-by-step instructions for building a Docker image.
● Writing a Dockerfile:
● The required FROM statement defines the base image that will be used for our
Docker image.
● This base image can be stored either in Docker Hub or in another registry
(for example, Artifactory, Nexus Repository, or Azure Container Registry).
● In this code example, the Apache httpd image is used, tagged with the latest
version: https://hub.docker.com/_/httpd/.
● Then, the COPY instruction copies the application files into the image during
the build process.
● Dockerfile Instructions Overview:
● There are other instructions that can be used to build a Docker image:
● FROM: This instruction is used to define the base image for our image, as
shown in the example detailed in the Writing a Dockerfile section.
● COPY and ADD: These are used to copy one or more local files into an image.
The ADD instruction supports two extra capabilities: it can refer to a URL,
and it can extract compressed files.
● RUN and CMD: Each of these takes a command as a parameter. RUN executes the
command during the construction of the image, while CMD defines the default
command executed when a container starts.
● The RUN instruction creates a layer so that it can be cached and versioned.
● RUN and CMD: The following example (shown in the slide image) uses the RUN
instruction in a Dockerfile to execute the apt-get command.
● This instruction updates the apt packages that are already present in the
image and creates a layer.
● The CMD instruction in the example that follows displays a Docker message.
● ENV sets an environment variable in the image, for example:
ENV myvar=mykey
● WORKDIR sets the working directory for subsequent instructions, for example:
WORKDIR /usr/local/apache2
● EXPOSE: This command exposes the ports that the software uses, ready for
you to map to the host when running a container with the -p argument.
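A sketch combining EXPOSE with the -p mapping (the image name and port numbers are illustrative assumptions):

```dockerfile
FROM httpd:latest
EXPOSE 80   # documents that the software inside listens on port 80
```

The exposed port can then be mapped to the host at run time, e.g. `docker run -d -p 8080:80 myimage`.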
Building and Running a Container on a Local Machine
● Building a Docker image:
● The -t argument indicates the name of the image and its tag. In this
example, demobook is the image name and v1 is the tag.
● The . (dot) at the end of the command specifies to use the files in the
current directory.
● Executing the docker build command downloads the base image indicated
in the Dockerfile from Docker Hub, and then Docker executes the various
instructions that are mentioned in the Dockerfile.
● docker images
● Instantiating a new Container of an Image:
● The -d parameter indicates that the container will run in the background.
● In the -p parameter, we indicate the desired port translation; that is, in our
example, port 80 of the container will be translated to port 8080 on our
local machine.
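The build and run steps described above can be sketched as a shell sequence (demobook:v1 comes from the text; the host port 8080 matches the example):

```shell
docker build -t demobook:v1 .          # build the image from the Dockerfile in the current directory
docker images                          # check that demobook:v1 is listed
docker run -d -p 8080:80 demobook:v1   # port 80 in the container is translated to 8080 on the host
docker ps                              # the container runs in the background and its ID is displayed
```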
● And finally, the last parameter of the command is the name of the image
and its tag.
● This command displays the ID of the container, and the container runs in
the background.
● docker ps
● Testing a Container locally:
● However, with the port translation and with the run command, you can test
your container on your local machine.
Pushing an image to Docker Hub
● The goal of creating a Docker image that contains an application is to be
able to use it on servers that contain Docker and host the company's
applications.
● If you want to create a public image, you can push it (or upload it) to Docker
Hub, which is Docker's public (and free) registry.
● To push a Docker image to Docker Hub, perform the following steps:
1. Sign in to Docker Hub: Log in to Docker Hub using the following command:
docker login -u <your dockerhub login>
2. Retrieving the image ID: The next step consists of retrieving the ID of the
image that has been created. Execute the docker images command to
display the list of images with their ID.
3. Tag the image for Docker Hub: With the ID of the image we retrieved, we
will now tag the image for Docker Hub. To do so, the following command is
executed: docker tag <image ID> <dockerhub login>/demobook:v1
4. Push the image to Docker Hub: After tagging the image, the last step is to
push the tagged image to Docker Hub.
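The four steps above can be summarized as a shell sequence (the `<...>` placeholders are from the text and must be filled in with your own values):

```shell
docker login -u <your dockerhub login>                # 1. sign in to Docker Hub
docker images                                         # 2. retrieve the image ID
docker tag <image ID> <dockerhub login>/demobook:v1   # 3. tag the image for Docker Hub
docker push <dockerhub login>/demobook:v1             # 4. push the tagged image
```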
● To view the pushed image in Docker Hub, connect to the Docker Hub web
portal at https://hub.docker.com/ and see that the image is present.
● By default, the image pushed to Docker Hub is in public mode – everybody
can view it in the explorer and use it.
● To make this image private – that is, you must be authenticated to be able
to use it – you must go to the Settings of the image and click on the Make
private button:
Deploying a container to ACI with a CI/CD pipeline
● One of the reasons Docker has quickly become attractive to developers and
operations teams is that the deployment of Docker images and containers
has made CI and CD pipelines for enterprise applications easier.
● ACI is a managed service from Azure that allows you to deploy containers
very easily, without having to worry about the hardware architecture.
● In this section: the Terraform code for the Azure ACI and its integration
with our Docker image.
● The Terraform code for ACI:
● In this code, we provide Terraform code for a resource group and an ACI
resource using the azurerm_container_group Terraform resource.
● Add the Terraform code for the ACI with the azurerm_container_group
resource block:
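The course shows this code as a screenshot; the following is a hedged sketch of what such a block might look like (resource names, location, variable names, and the image reference are assumptions, not the course's actual code):

```hcl
resource "azurerm_resource_group" "rg" {
  name     = "rg-aci-demo"
  location = "West Europe"
}

resource "azurerm_container_group" "aci" {
  name                = "aci-app"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  os_type             = "Linux"
  ip_address_type     = "Public"
  dns_name_label      = "demobook-aci"   # gives the container a public FQDN

  container {
    name   = "mydemoapp"
    image  = "${var.dockerhub_login}/demobook:${var.image_tag}"  # the -var parameters from the pipeline
    cpu    = "0.5"
    memory = "1.5"

    ports {
      port     = 80
      protocol = "TCP"
    }
  }
}
```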
● Finally, in order to protect the tfstate file, use the Terraform remote
backend with Azure Blob Storage.
● Creating a CI/CD pipeline for the container:
● To create a CI/CD pipeline that builds the image and executes the Terraform
code, we use the tools covered in the Continuous Integration and Continuous
Delivery stages.
● To build and visualize the pipeline, we use Azure Pipelines, one of the tools
detailed earlier.
2. Then, on the Variables tab, define the variables that will be used in the
pipeline. The following screenshot shows the information on the Variables
tab:
6. The last task, Bash, allows you to execute a Bash script, and this screenshot
shows its configuration:
3. Execute terraform apply to apply the changes, passing the two -var
parameters: our Docker Hub username and the tag to apply. These parameters
allow a container to run with the new image that has just been pushed to
Docker Hub.
7. Then, to configure the build agent in the Agent job options, use the Azure
Pipelines hosted Ubuntu 16.04 agent, shown in the following screenshot:
● In the Azure portal, we have our ACI, aci-app, with our container,
mydemoapp:
● Notice that the container is running well.
● Now, to access our application, we need to retrieve the public FQDN URL
of the container provided in the Azure portal:
● Open a web browser with this URL. Our web application is displayed
correctly:
● The next time the application is updated, the CI/CD build is triggered, a
new version of the image will be pushed into Docker Hub, and a new
container will be loaded with this new version of the image.
Managing Containers
Effectively with Kubernetes
Introduction to Kubernetes
There are two major container orchestration tools on the market:
Docker Swarm
Kubernetes
Kubernetes is the more powerful and feature-rich of the two; on the other hand,
Docker Swarm is designed for ease of use, making it a preferable choice for
simple applications.
Features of Kubernetes
Automated Scheduling
Self-Healing Capabilities
Application-centric management
Auto-scalable infrastructure
Kubernetes Basics
Kubernetes Architecture
Master Node:
The master node is the first and most vital component, responsible for the
management of the Kubernetes cluster.
There may be more than one master node in the cluster, to provide fault
tolerance.
The master node hosts various components: the API Server, Controller Manager,
Scheduler, and etcd.
API Server:
The API server acts as an entry point for all the REST commands used for
controlling the cluster.
Scheduler:
The scheduler assigns tasks to the worker nodes.
It also helps you track how the workload is utilized on cluster nodes.
It helps you place workloads on nodes that have available resources and can
accept the workload.
etcd:
The etcd component stores configuration details and key-value state for the
cluster.
Kubelet:
This gets the configuration of a pod from the API server and ensures that the
described containers are up and running.
Docker Container:
Docker container runs on each of the worker nodes, which runs the configured
pods.
Kube-proxy:
Kube-proxy acts as a network proxy and load balancer for services on a single
worker node.
Pods:
A pod is a group of one or more containers that logically run together on a
node.
Kubernetes - Other Key Terminologies
Replication Controllers
A replication controller is an object that defines a pod template and maintains
a set of identical pod replicas.
Replica Sets
Replica sets are an iteration on the replication controller design, with more
flexibility in how the controller recognizes the pods it is meant to manage.
Deployments use replica sets as a building block and add lifecycle-management
features.
Stateful Sets
A stateful set is a specialized pod controller that offers ordering and
uniqueness.
It is mainly used for fine-grained control when you have particular needs
regarding deployment order, stable networking, or persistent data.
Daemon Sets
Daemon sets are another specialized form of pod controller that runs a copy of
a pod on every node in the cluster.
This type of pod controller is an effective method for deploying pods that allows
you to perform maintenance and offers services for the nodes themselves.
Kubernetes vs. Docker Swarm
Advantages of Kubernetes:
Easy organization of service with pods
Kubernetes can run on-premises on bare metal, on OpenStack, and on public
clouds such as Google, Azure, AWS, etc.
Helps you avoid vendor lock-in issues, as it does not rely on vendor-specific
APIs or services except where Kubernetes provides an abstraction, e.g., load
balancing and storage.
Installing Kubernetes on a local machine
When developing a containerized application that is to be hosted on
Kubernetes, it is important to be able to run the application (with its
containers) on your local machine, before deploying it on remote
Kubernetes production clusters.
2. After clicking on the Apply button, Docker Desktop will install a mini
Kubernetes cluster, and the kubectl client tool, on the local machine.
Installing the Kubernetes dashboard
After installing our Kubernetes cluster, there is a need for another element,
which is the Kubernetes dashboard.
In order to install the Kubernetes dashboard, which is a pre-packaged containerized
web application that will be deployed in our cluster, we will run the following
command in a Terminal:
Its execution is shown in the following screenshot:
To open the dashboard and connect to it from our local machine, first
create a proxy between the Kubernetes cluster and our machine by
performing the following steps:
The proxy is open on the localhost address (127.0.0.1) with the 8001 port.
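The proxy step described above can be sketched as a single command (requires kubectl configured against the local cluster):

```shell
kubectl proxy   # opens a proxy on 127.0.0.1:8001 between the Kubernetes cluster and the local machine
```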
Then, in a web browser, open the URL
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login
First example of Kubernetes application deployment
After installing our Kubernetes cluster, deploy an application in it.
In this code, the description of the deployment is as follows:
In this example, we chose two replicas, which can, at a minimum, distribute
the traffic load of the application (add more replicas if there is a high
volume of load).
Therefore, if one of the two pods has a problem, the other, which is an
identical replica, will ensure the proper functioning of the application.
Then, in the containers section, we indicate the image (from Docker Hub)
with its name and tag.
Finally, the ports property indicates the port that the container will use
within the cluster.
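A sketch of what myapp-deployment.yml might contain, matching the description above (the webapp name and two replicas come from the text; the image reference is a placeholder assumption):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 2                # two identical pods to distribute the traffic load
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: mylogin/demobook:v1   # image name and tag from Docker Hub (placeholder)
          ports:
            - containerPort: 80        # port the container uses within the cluster
```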
To deploy our application, we go to our Terminal, and execute one of the
essential kubectl commands (kubectl apply) as follows:
kubectl apply -f myapp-deployment.yml
To do this in the Terminal, we execute the kubectl get pods command,
which returns the list of cluster pods.
In the preceding screenshot, the second command displays two pods, with
the name (webapp) specified in the YAML file, followed by a unique ID, and
Running status.
You can also visualize the status of the cluster on the Kubernetes web
dashboard: the webapp deployment with the Docker image that was used, and
the two pods that were created.
But, for the moment, the application is accessible only inside the cluster.
The execution of the command creates the service within the cluster, and,
to test the application, open a web browser with the http://localhost:31000
URL, and the page is displayed as follows:
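The service that exposes the application on port 31000 could look like the following sketch (the names are assumptions consistent with the webapp deployment described in this section):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
    - port: 80          # service port inside the cluster
      targetPort: 80    # container port
      nodePort: 31000   # exposed on each node, hence http://localhost:31000
```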
Using HELM as a package manager
Helm is, therefore, a package manager whose repositories allow the sharing of
packages called charts, which contain ready-to-use Kubernetes specification
file templates.
A server tool called Tiller runs in the Kubernetes cluster; it receives
information from the client tool and installs the chart packages. (Tiller
applies to Helm v2; it was removed in Helm v3.)
Installing Helm, and how to use it to deploy an application:
On Windows, after installing the Helm client, initialize it (Helm v2) with:
helm init
3. Search charts: The packages that are contained in a HELM repository are
called charts.
So, to deploy an application, we will use its corresponding chart, and we will
pass some configuration variables of this application.
3. Search charts: Once HELM is installed, install a chart that is in the HELM
public repository, but first, to display the list of public charts, run the following
command:
4. Deploy an application with Helm: To illustrate the use of Helm, we will
deploy a WordPress application in the Kubernetes cluster using a Helm chart.
Helm installs a WordPress instance called mywp, along with all of its
Kubernetes components, on the local Kubernetes cluster.
You can also display the list of Helm packages installed on the cluster by
executing the following command:
helm ls
And, to remove a package and all of its components, for example, to remove
the application installed with this chart, execute the helm delete command.
The --purge parameter indicates that everything related to this application
should be deleted.
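The Helm v2 workflow described in this section can be sketched as follows (mywp comes from the text; the stable/wordpress chart name is an assumption based on the old public stable repository):

```shell
helm search wordpress                       # 3. search the public charts
helm install --name mywp stable/wordpress   # 4. deploy WordPress as the mywp release
helm ls                                     # list the releases installed on the cluster
helm delete mywp --purge                    # remove the release and all of its components
```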
Using Azure Kubernetes Service (AKS)
A production Kubernetes cluster can often be complex to install and
configure.
AKS is an Azure service that allows us to create and manage a real
Kubernetes cluster as a managed service.
The use of this service is free; what is charged is the cost of the VMs on
which the nodes are installed.
Advantages of AKS
Integrated monitoring services
Very easy to scale: AKS allows the quick and direct scaling of the number of
nodes of a cluster via the portal, or via scripts.
Using the kubectl tool does not require any changes compared to a local
Kubernetes.
If we have an Azure subscription and we want to use Kubernetes, it's
intuitive and quick to install.
Creating a CI/CD pipeline for Kubernetes with Azure
Pipelines
We will create a complete CI/CD pipeline for Kubernetes, from the creation of
a new Docker image pushed to Docker Hub, to its deployment in an AKS cluster.
To build this pipeline, we'll use the Azure Pipelines service in Azure DevOps.
A release will use our YAML deployment specification file to deploy the latest
version of the image in an AKS cluster.
The build and push of the image in the Docker Hub
In Azure DevOps, create a new build definition in Classic designer mode,
pointing to the source code that contains the Dockerfile.
In this build definition, configure the Tasks tab with two steps, in this order:
The build and push of the Docker image.
The publication of the build artifacts, which are the K8S YAML specification files
that will be deployed during the release.
The sequences of the tasks that configure the build pipeline are
demonstrated in the following screenshot:
Detailed configuration steps of this build pipeline:
3. In the Variables tab, a variable is added that contains the Docker Hub
username, as shown here:
5. In the Options tab, we indicate the build number with the 2.0.patch pattern.
This build number will be the tag of the Docker image that is uploaded to
Docker Hub. Once the configuration is finished, we save the build definition
and execute it.
● If the builds were successfully executed, notice the following:
● Build artifacts that contain the YAML specification files for Kubernetes:
In Docker Hub, there is a new tag on the image corresponding to the build
number, as well as the latest tag, as shown in the following screenshot:
Automatic deployment of the application in Kubernetes
Create a new definition of release that automatically deploys our
application in the AKS cluster that we created in the previous Using AKS
section.
1. Regarding the choice of template for the release, select the Empty template.
2. Create a stage called AKS, and inside add a task that allows the kubectl
commands (this task is present by default in the Azure DevOps tasks catalog):
3. Add the Deploy to Kubernetes task from the Azure Pipelines tasks catalog,
with the following configuration.
● The settings for the Deploy to Kubernetes task are as follows:
○ Choose the endpoint of the Kubernetes cluster—the New button allows us to
add a new endpoint configuration of a cluster.
○ Finally, choose the directory, coming from the artifacts, which contains the YAML
specification files.
● This command displays the list of pods and services that are present in our
AKS Kubernetes cluster, and the result of this command is shown in the
following screenshot:
● We can see our two deployed web application pods and the NodePort
service that exposes our applications outside the cluster.
● Then, we open a web browser with the http://localhost:31000 URL, and our
application is displayed correctly:
● We have created a complete CI/CD pipeline that deploys an application in a
Kubernetes cluster.
● If our application (HTML file) is modified, the build will create and push a
new version of the image (in the latest tag), and then the release will apply
the deployment on the Kubernetes cluster.
Thank You