The Docker Handbook - 2021 Edition
Prerequisites
Familiarity with the Linux Terminal
Familiarity with JavaScript (some later projects use
JavaScript)
Table of Contents
Introduction to Containerization and Docker
What is a Container?
Conclusion
Project Code
Code for the example projects can be found in the following
repository:
fhsinchy/docker-handbook-projects - Project codes used in "The Docker Handbook" (GitHub)
Contributions
This book is completely open-source and quality contributions are
more than welcome. You can find the full content in the following
repository:
fhsinchy/the-docker-handbook - Open-source book on Docker (GitHub)
If you're looking for a frozen but stable version of the book, then
freeCodeCamp will be the best place to go:
Introduction to Containerization
and Docker
According to IBM,
If you make a list of the dependencies, that list may look as follows:
Node.js
Express.js
SQLite3
Well, theoretically this should be it. But practically there are some
other things as well. It turns out Node.js uses a build tool known as
node-gyp for building native add-ons. And according to the installation
instructions in the official repository, this build tool requires Python
2 or 3 and a proper C/C++ compiler tool-chain.
Node.js
Express.js
SQLite3
Python 2 or 3
C/C++ tool-chain
Let's assume that you've gone through all the hassle of setting up
the dependencies and have started working on the project. Does
that mean you're out of danger now? Of course not.
What if you have a teammate who uses Windows while you're using
Linux? Now you have to consider the inconsistencies in how these
two different operating systems handle paths. Or the fact that
popular technologies like nginx are not well optimized to run on
Windows. Some technologies like Redis don't even come pre-built
for Windows.
Even if you get through the entire development phase, what if the
person responsible for managing the servers follows the wrong
deployment procedure?
Your teammates will then be able to download the image from the
registry, run the application as it is within an isolated environment
free from the platform specific inconsistencies, or even deploy
directly on a server, since the image comes with all the proper
production configurations.
Now, Docker is not the only containerization tool on the market, it's
just the most popular one. Another containerization engine that I
love is called Podman developed by Red Hat. Other tools like Kaniko
by Google, rkt by CoreOS are amazing, but they're not ready to be a
drop-in replacement for Docker just yet.
Also, if you want a history lesson, you may read the amazing A Brief
History of Containers: From the 1970s Till Now which covers most
of the major turning points for the technology.
You’ll get a regular looking Apple Disk Image file and inside the file,
there will be the application. All you have to do is drag the file and
drop it in your Applications directory.
On Linux however, you don’t get such a bundle. Instead you install all
the necessary tools you need manually. Installation procedures for
different distributions are as follows:
Once the installation is done, open up the terminal and execute
docker --version and docker-compose --version to ensure the
success of the installation.
Although Docker performs quite well regardless of the platform
you’re on, I prefer Linux over the others. Throughout the book, I’ll be
switching between my Ubuntu 20.10 and Fedora 33 workstations.
Another thing that I would like to clarify right from the get go, is that
I won't be using any GUI tool for working with Docker throughout
the entire book.
I'm aware of the nice GUI tools available for different platforms, but
learning the common docker commands is one of the primary goals
of this book.
Container
Image
Registry
I've listed the three concepts in alphabetical order and will begin my
explanations with the first one on the list.
What is a Container?
In the world of containerization, there cannot be anything more
fundamental than the concept of a container.
uname -a
# Linux alpha-centauri 5.8.0-22-generic #23-Ubuntu SMP Fri Oct 9
As you can see in the output, the container is indeed using the kernel
from my host operating system. This goes to prove the point that
containers virtualize the host operating system instead of having an
operating system of their own.
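If you want to reproduce this check yourself, a rough sketch (assuming the alpine image) is to run uname -a inside a throwaway container and compare it with your host:

uname -a
docker container run --rm alpine uname -a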
You can share any number of public images on Docker Hub for free.
People around the world will be able to download them and use
them freely. Images that I've uploaded are available on my profile
(fhsinchy) page.
Apart from Docker Hub or Quay, you can also create your own
image registry for hosting private images. There is also a local
registry that runs within your computer that caches images pulled
from remote registries.
To understand what happens when you run the docker run hello-world
command, let me show you a little diagram I've made:
It's the default behavior of Docker daemon to look for images in the
hub that are not present locally. But once an image has been
fetched, it'll stay in the local cache. So if you execute the command
again, you won't see the following lines in the output:
In this syntax:
The image name can be of any image from an online registry or your
local system. As an example, you can try to run a container using the
fhsinchy/hello-dock image. This image contains a simple Vue.js
application that runs on port 80 inside the container.
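As a rough sketch, running that image and publishing its internal port 80 to local port 8080 could look like this:

docker container run --publish 8080:80 fhsinchy/hello-dock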
You can stop the container by simply hitting the ctrl + c key
combination while the terminal window is in focus, or by closing off
the terminal window completely.
# 9f21cb77705810797c4b847dbd330d9c732ffddba14fb435470567a7a3f46cd
Unlike the previous example, you won't get a wall of text thrown at
you this time. Instead what you'll get is the ID of the newly created
container.
The order of the options you provide doesn't really matter. If you put
the --publish option before the --detach option, it'll work just the
same. One thing that you have to keep in mind in case of the run
command is that the image name must come last. If you put anything
after the image name then that'll be passed as an argument to the
container entry-point (explained in the Executing Commands Inside
a Container sub-section) and may result in unexpected situations.
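For example, both of the following invocations should behave identically, as long as the image name stays at the end (a sketch using the fhsinchy/hello-dock image):

docker container run --publish 8080:80 --detach fhsinchy/hello-dock
docker container run --detach --publish 8080:80 fhsinchy/hello-dock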
How to List Containers
The container ls command can be used to list out containers that
are currently running. To do so, execute the following command:
docker container ls
Listed under the PORTS column, port 8080 from your local network
is pointing towards port 80 inside the container. The name
gifted_sammet is generated by Docker and can be something completely
different on your computer.
# b1db06e400c4c5e81a93a64d30acc1bf821bed63af36cab5cdb95d25e114f5f
You can even rename old containers using the container rename
command. Syntax for the command is as follows:
The command doesn't yield any output but you can verify that the
changes have taken place using the container ls command. The
rename command works for containers both in the running state and the
stopped state.
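As a sketch, the generic form and a concrete rename of the auto-named container from earlier could look like this:

docker container rename <container identifier> <new name>
docker container rename gifted_sammet hello-dock-container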
I hope that you remember the container you started in the previous
section. It's still running in the background. Get the identifier for
that container using docker container ls (I'll be using the
hello-dock-container container for this demo). Now execute the following
command to stop the container:
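A sketch of that command, using the container name as the identifier:

docker container stop hello-dock-container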
# hello-dock-container
If you use the name as identifier, you'll get the name thrown back to
you as output. The stop command shuts down a container
gracefully by sending a SIGTERM signal. If the container doesn't stop
within a certain period, a SIGKILL signal is sent which shuts down
the container immediately.
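If you'd rather send the SIGKILL signal right away instead of waiting, the container kill command can be used; a sketch:

docker container kill hello-dock-container-2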
# hello-dock-container-2
You can get the list of all containers by executing the container ls
--all command. Then look for the containers with Exited status.
# hello-dock-container
Now you can ensure that the container is running by looking at the
list of running containers using the container ls command.
# hello-dock-container-2
The main difference between the two commands is that the
container restart command attempts to stop the target container and
then starts it back up again, whereas the start command just starts
an already stopped container.
In case of a stopped container, both commands are exactly the same.
But in case of a running container, you must use the container
restart command.
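Creating a container without starting it right away is also possible; a sketch, assuming the fhsinchy/hello-dock image and a container name of hello-dock:

docker container create --name hello-dock --publish 8080:80 fhsinchy/hello-dock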
# 2e7ef5098bab92f4536eb9a372d9b99ed852a9a816c341127399f51a6d05385
A container with the name of hello-dock has been created using the
fhsinchy/hello-dock image. The STATUS of the container is Created
at the moment, and, given that it's not running, it won't be listed
without the use of the --all option.
Once the container has been created, it can be started using the
container start command.
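A sketch of that:

docker container start hello-dock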
# hello-dock
docker container ls
Although you can get away with the container run command for
the majority of the scenarios, there will be some situations later on
in the book that require you to use this container create
command.
To find out which containers are not running, use the container ls
--all command and look for containers with Exited status.
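Removing a stopped container then boils down to passing its identifier to the container rm command; a sketch:

docker container rm 6cf52771dde1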
# 6cf52771dde1
You can check if the container was deleted or not by using the
container ls command. You can also remove multiple containers at once
by passing their identifiers one after another separated by spaces.
You can check the container list using the container ls --all
command to make sure that the dangling containers have been
removed:
If you are following the book exactly as written so far, you should
only see the hello-dock-container and hello-dock-container-2
in the list. I would suggest stopping and removing both containers
before going on to the next section.
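If you want to follow that suggestion, a sketch of the cleanup could be:

docker container stop hello-dock-container hello-dock-container-2
docker container rm hello-dock-container hello-dock-container-2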
There is also the --rm option for the container run and
container start commands which indicates that you want the containers
removed as soon as they're stopped. To start another hello-dock
container with the --rm option, execute the following command:
# 0d74e14091dc6262732bee226d95702c21894678efb4043663f7911c53fb79f
docker container ls
# CONTAINER ID IMAGE COMMAND C
# 0d74e14091dc fhsinchy/hello-dock "/docker-entrypoint.…" A
Now if you stop the container and then check again with the container ls --all command:
# hello-dock-volatile
The container has been removed automatically. From now on I'll use
the --rm option for most of the containers. I'll explicitly mention
where it's not needed.
Well, not all images are that simple. Images can encapsulate an
entire Linux distribution inside them.
The -it option sets the stage for you to interact with any
interactive program inside a container. This option is actually two
separate options mashed together.

The -i or --interactive option connects you to the input stream
of the container, so that you can send inputs to the program
running inside it.

The -t or --tty option makes sure that you get some good
formatting and a native terminal-like experience by
allocating a pseudo-tty.
You need to use the -it option whenever you want to run a
container in interactive mode. Another example can be running the
node image as follows:
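A sketch of that command (run interactively, the node image drops you into the Node.js REPL):

docker container run --rm -it node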
Any valid JavaScript code can be executed in the node shell. Instead
of writing -it you can be more verbose by writing --interactive
--tty separately.
Assume that you want to encode a string using the base64 program.
This is something that's available on almost any Linux or Unix based
operating system (but not on Windows).
In this situation you can quickly spin up a container using images like
busybox and let it do the job.
# bXktc2VjcmV0
To perform the base64 encoding using the busybox image, you can
execute the following command:
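A sketch of such a command, assuming the busybox image's default shell is used to run the pipeline:

docker container run --rm busybox sh -c "echo -n my-secret | base64"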
# bXktc2VjcmV0
What happens here is that, in a container run command, whatever
you pass after the image name gets passed to the default entry point
of the image.
fhsinchy/rmbyext - Recursively removes all files with given extension(s) (GitHub)
If you have both Git and Python installed, you can install this script
by executing the following command:
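One way to do that (assuming the repository exposes a standard Python package that pip can install straight from GitHub) is:

pip install git+https://github.com/fhsinchy/rmbyext.git#egg=rmbyext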
ls
To delete all the pdf files from this directory, you can execute the
following command:
rmbyext pdf
# Removing: PDF
# b.pdf
# a.pdf
# d.pdf
Now the problem is that containers are isolated from your local
system, so the rmbyext program running inside the container
doesn't have any access to your local file system. So, if somehow you
can map the local directory containing the pdf files to the /zone
directory inside the container, the files should be accessible to the
container.
One way to grant a container direct access to your local file system
is by using bind mounts.
A bind mount lets you form a two-way data binding between the
content of a local file system directory (source) and another
directory inside a container (destination). This way any changes
made in the destination directory will take effect on the source
directory and vice versa.
Let's see a bind mount in action. To delete files using this image
instead of the program itself, you can execute the following
command:
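A sketch of what that command could look like, assuming the image's entry point is the rmbyext script and your shell supports $(pwd) command substitution:

docker container run --rm -v $(pwd):/zone fhsinchy/rmbyext pdf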
# Removing: PDF
# b.pdf
# a.pdf
# d.pdf
The third field is optional but you must pass the absolute path of
your local directory and the absolute path of the directory inside the
container.
You can learn more about command substitution here if you want to.
I would suggest installing Visual Studio Code with the official
Docker Extension from the marketplace. This will greatly help your
development experience.
# b379ecd5b6b9ae27c144e4fa12bdc5d0635543666f75c14039eea8d5f38e3f5
docker container ls
That's all nice and good, but what if you want to make a custom
NGINX image which functions exactly like the official one, but that's
built by you? That's a completely valid scenario to be honest. In fact,
let's do that.
FROM ubuntu:latest
EXPOSE 80
Images are multi-layered files and, in this file, each line (known as
an instruction) that you've written creates a layer for your image.
Now that you have a valid Dockerfile you can build an image out of
it. Just like the container related commands, the image related
commands can be issued using the following syntax:
The . at the end sets the context for this build. The context
means the directory accessible by the daemon during the
build process.
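A sketch of the simplest possible build invocation, run from the directory containing the Dockerfile:

docker image build .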
Now to run a container using this image, you can use the container
run command coupled with the image ID that you received as the
result of the build process. In my case the ID is 3199372aa3fc, as
evident by the Successfully built 3199372aa3fc line in the
previous code block.
# ec09d4e1f70c903c3b954c8d7958421cdd1ae3d079b57f929e44131fbf8069a
docker container ls
# CONTAINER ID IMAGE COMMAND
# ec09d4e1f70c 3199372aa3fc "nginx -g 'daemon of…"
The repository is usually known as the image name and the tag
indicates a certain build or version.
Take the official mysql image, for example. If you want to run a
container using a specific version of MySQL, like 5.7, you can
execute docker container run mysql:5.7 where mysql is the
image repository and 5.7 is the tag.
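Tagging works the same way for your own image. A sketch of tagging during the build, and of tagging an already built image afterwards (the image ID here is just the one from the earlier build, used as an example):

docker image build --tag custom-nginx:packaged .
docker image tag 3199372aa3fc custom-nginx:packaged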
Nothing will change except the fact that you can now refer to your
image as custom-nginx:packaged instead of some long random
string.
docker image ls
The identifier can be the image ID or image repository. If you use the
repository, you'll have to identify the tag as well. To delete the
custom-nginx:packaged image, you may execute the following command:
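A sketch:

docker image rm custom-nginx:packaged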
# Untagged: custom-nginx:packaged
# Deleted: sha256:f8837621b99d3388a9e78d9ce49fbb773017f770eea8047
# Deleted: sha256:fdc6cdd8925ac25b9e0ed1c8539f96ad89ba1b21793d061
# Deleted: sha256:c20e4aa46615fe512a4133089a5cd66f9b7da76366c9654
# Deleted: sha256:6d6460a744475a357a2b631a4098aa1862d04510f3625fe
You can also use the image prune command to clean up all untagged
dangling images as follows:
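A sketch (the --force option skips the confirmation prompt):

docker image prune --force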
# Deleted Images:
# deleted: sha256:ba9558bdf2beda81b9acc652ce4931a85f0fc7f69dbc91b
# deleted: sha256:ad9cc3ff27f0d192f8fa5fadebf813537e02e6ad472f653
# deleted: sha256:f1e9b82068d43c1bb04ff3e4f0085b9f8903a12b27196df
# deleted: sha256:ec16024aa036172544908ec4e5f842627d04ef99ee9b8d9
To visualize the many layers of an image, you can use the image
history command. The various layers of the custom-nginx:packaged
image can be visualized as follows:
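A sketch:

docker image history custom-nginx:packaged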
There are eight layers in this image. The uppermost layer is the
latest one, and as you go down, the layers get older. The uppermost
layer is the one that you usually use for running containers.
Now, let's have a closer look at the images beginning from image
d70eaf7277ea down to 7f16387f7307. I'll ignore the bottom four
layers where the IMAGE is <missing> as they are not of our
concern.
As you can see, the image comprises many read-only layers, each
recording a new set of changes to the state triggered by certain
instructions. When you start a container using an image, you get a
new writable layer on top of the other layers.
This layering happens every time you work with Docker.
By utilizing this concept, Docker can avoid data duplication and can
use previously created layers as a cache for later builds. This results
in compact, efficient images that can be used everywhere.
In order to build NGINX from source, you first need the source of
NGINX. If you've cloned my projects repository you'll see a file
named nginx-1.19.2.tar.gz inside the custom-nginx directory.
You'll use this archive as the source for building NGINX.
Before diving into writing some code, let's plan out the process first.
The image creation process this time can be done in seven steps.
These are as follows:
Get a good base image for building the application, like ubuntu.
Now that you have a plan, let's begin by opening up the old Dockerfile
and updating its contents as follows:
FROM ubuntu:latest
COPY nginx-1.19.2.tar.gz .
As you can see, the code inside the Dockerfile reflects the seven
steps I talked about above.
This code is alright but there are some places where we can make
improvements.
FROM ubuntu:latest
libpcre3 \
libpcre3-dev \
zlib1g \
zlib1g-dev \
libssl1.1 \
libssl-dev \
-y && \
ARG FILENAME="nginx-1.19.2"
ARG EXTENSION="tar.gz"
ADD https://nginx.org/download/${FILENAME}.${EXTENSION} .
./configure \
--sbin-path=/usr/bin/nginx \
--conf-path=/etc/nginx/nginx.conf \
--error-log-path=/var/log/nginx/error.log \
--http-log-path=/var/log/nginx/access.log \
--with-pcre \
--pid-path=/var/run/nginx.pid \
--with-http_ssl_module && \
The code is almost identical to the previous code block except for a
new instruction called ARG on lines 13 and 14, and the usage of the ADD instruction.
docker container ls
And here is the trusty default response page from NGINX. You can
visit the official reference site to learn more about the available
instructions.
docker image ls
# REPOSITORY TAG IMAGE ID CREATED SI
# custom-nginx built 1f3aaf40bb54 16 minutes ago 34
For an image containing only NGINX, that's too much. If you pull the
official image and check its size, you'll see how small it is:
docker image ls
In order to find out the root cause, let's have a look at the
Dockerfile first:
FROM ubuntu:latest
libpcre3 \
libpcre3-dev \
zlib1g \
zlib1g-dev \
libssl1.1 \
libssl-dev \
-y && \
ARG FILENAME="nginx-1.19.2"
ARG EXTENSION="tar.gz"
ADD https://nginx.org/download/${FILENAME}.${EXTENSION} .
./configure \
--sbin-path=/usr/bin/nginx \
--conf-path=/etc/nginx/nginx.conf \
--error-log-path=/var/log/nginx/error.log \
--http-log-path=/var/log/nginx/access.log \
--with-pcre \
--pid-path=/var/run/nginx.pid \
--with-http_ssl_module && \
As you can see on line 3, the RUN instruction installs a lot of stuff.
Although these packages are necessary for building NGINX from
source, they are not necessary for running it.
Out of the 6 packages that we installed, only two are necessary for
running NGINX. These are libpcre3 and zlib1g . So a better idea
would be to uninstall the other packages once the build process is
done.
FROM ubuntu:latest
EXPOSE 80
ARG FILENAME="nginx-1.19.2"
ARG EXTENSION="tar.gz"
ADD https://nginx.org/download/${FILENAME}.${EXTENSION} .
libpcre3 \
libpcre3-dev \
zlib1g \
zlib1g-dev \
libssl1.1 \
libssl-dev \
-y && \
./configure \
--sbin-path=/usr/bin/nginx \
--conf-path=/etc/nginx/nginx.conf \
--error-log-path=/var/log/nginx/error.log \
--http-log-path=/var/log/nginx/access.log \
--with-pcre \
--pid-path=/var/run/nginx.pid \
--with-http_ssl_module && \
libpcre3-dev \
zlib1g-dev \
libssl-dev \
-y && \
As you can see, on line 10 a single RUN instruction is doing all the
necessary heavy-lifting. The exact chain of events is as follows:
From line 10 to line 17, all the necessary packages are being
installed.
Let's build an image using this Dockerfile and see the differences.
docker image ls
As you can see, the image size has gone from being 343MB to
81.6MB. The official image is 133MB. This is a pretty optimized
build, but we can go a bit further in the next sub-section.
But the good thing about Alpine is that it's built around musl libc
and busybox and is lightweight. Where the latest ubuntu image
weighs in at around 28MB, alpine is 2.8MB.
FROM alpine:latest
EXPOSE 80
ARG FILENAME="nginx-1.19.2"
ARG EXTENSION="tar.gz"
ADD https://nginx.org/download/${FILENAME}.${EXTENSION} .
--virtual .build-deps \
build-base \
pcre-dev \
zlib-dev \
openssl-dev && \
./configure \
--sbin-path=/usr/bin/nginx \
--conf-path=/etc/nginx/nginx.conf \
--error-log-path=/var/log/nginx/error.log \
--http-log-path=/var/log/nginx/access.log \
--with-pcre \
--pid-path=/var/run/nginx.pid \
--with-http_ssl_module && \
The code is almost identical except for a few changes. I'll be listing
the changes and explaining them as I go:
The --virtual option for the apk add command is used for
bundling a bunch of packages into a single virtual package for
easier management. Packages that are needed only for building the
software can then be removed in one go by removing that single
virtual package.
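In isolation, the pattern looks roughly like this (a sketch, reusing the build dependencies from the Dockerfile above):

apk add --virtual .build-deps build-base pcre-dev zlib-dev openssl-dev
# ... build and install the software ...
apk del .build-deps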
Now build a new image using this Dockerfile and see the
difference in file size:
docker image ls
Where the ubuntu version was 81.6MB, the alpine one has come
down to 12.8MB which is a massive gain. Apart from the apk
package manager, there are some other things that differ in Alpine
from Ubuntu but they're not that big a deal. You can just search the
internet whenever you get stuck.
FROM python:3-alpine
WORKDIR /zone
ENTRYPOINT [ "rmbyext" ]
The FROM instruction sets python as the base image, making
an ideal environment for running Python scripts. The 3-alpine
tag indicates that you want the Alpine variant of Python 3.
docker image ls
Here I haven't provided any tag after the image name, so the image
has been tagged as latest by default. You should be able to run the
image as you saw in the previous section. Remember to refer to the
actual image name you've set, instead of fhsinchy/rmbyext here.
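As a sketch, assuming you tag the image simply as rmbyext and build from the directory containing this Dockerfile, building and using it could look like this:

docker image build --tag rmbyext .
docker container run --rm -v $(pwd):/zone rmbyext pdf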
Once you've created the account, you'll have to sign in to it using the
docker CLI. So open up your terminal and execute the following
command to do so:
docker login
# Login with your Docker ID to push and pull images from Docker H
# Username: fhsinchy
# Password:
# WARNING! Your password will be stored unencrypted in /home/fhsi
# Configure a credential helper to remove this warning. See
# https://docs.docker.com/engine/reference/commandline/login/#cre
#
# Login Succeeded
If you do not give the image any tag, it'll be automatically tagged as
latest . But that doesn't mean that the latest tag will always refer
to the latest version. If, for some reason, you explicitly tag an older
version of the image as latest , then Docker will not make any extra
effort to cross check that.
Once the image has been built, you can then upload it by executing
the following command:
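A sketch of the push, assuming the image was tagged with your Docker Hub username (fhsinchy here is the author's; substitute your own):

docker image push fhsinchy/custom-nginx:latest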
Depending on the image size, the upload may take some time. Once
it's done you should be able to find the image on your hub profile page.
Just like any other project you've done in the previous sub-sections,
you'll begin by making a plan of how you want this application to
run. In my opinion, the plan should be as follows:
This plan should always come from the developer of the application
that you're containerizing. If you're the developer yourself, then you
should already have a proper understanding of how this application
needs to be run.
FROM node:lts-alpine
EXPOSE 3000
USER node
WORKDIR /home/node/app
COPY ./package.json .
COPY . .
The USER instruction sets the default user for the image to
node . By default Docker runs containers as the root user.
But according to Docker and Node.js Best Practices this can
pose a security threat. So it's a better idea to run as a non-
root user whenever possible. The node image comes with a
non-root user named node which you can set as the default
user using the USER instruction.
Given that the filename is not Dockerfile, you have to explicitly pass
the filename using the --file option. A container can be run using this
image by executing the following command:
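A sketch of both steps, assuming the file is named Dockerfile.dev and you tag the image hello-dock:dev:

docker image build --file Dockerfile.dev --tag hello-dock:dev .
docker container run --rm --detach --publish 3000:3000 --name hello-dock-dev hello-dock:dev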
# 21b9b1499d195d85e81f0e8bce08f43a64b63d589c5f15cbbd0b9c0cb07ae26
But if you make any changes in your code right now, you'll see
nothing happening to your application running in the browser. This
is because you're making changes in the code that you have in your
local file system, but the application you're seeing in the browser
resides inside the container file system.
To solve this issue, you can again make use of a bind mount. Using
bind mounts, you can easily mount one of your local file system
directories inside a container. Instead of making a copy of the local
file system, the bind mount can reference the local file system
directly from inside the container.
This way, any changes you make to your local source code will
reflect immediately inside the container, triggering the hot reload
feature of the vite development server. Changes made to the file
system inside the container will be reflected on your local file
system as well.
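A sketch of running the container with the project root bind-mounted onto the working directory set in the Dockerfile (assuming you run this from the project root):

docker container run --rm --publish 3000:3000 --name hello-dock-dev --volume $(pwd):/home/node/app hello-dock:dev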
Now that you're mounting the project root on your local file system
as a volume inside the container, the content inside the container
gets replaced along with the node_modules directory containing all
the dependencies. This means that the vite package has gone
missing.
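A common fix (and the one the compose file later in the book uses) is an additional anonymous volume for the node_modules directory, so that particular directory inside the container is left untouched by the bind mount; a sketch:

docker container run --rm --detach --publish 3000:3000 --name hello-dock-dev --volume $(pwd):/home/node/app --volume /home/node/app/node_modules hello-dock:dev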
# 53d1cfdb3ef148eb6370e338749836160f75f076d0fbec3c2a9b059a8992de8
The vite development server not only serves the files but also provides the hot reload
feature.
In production mode, the npm run build command compiles all your
JavaScript code into some static HTML, CSS, and JavaScript files. To
run these files you don't need node or any other runtime
dependencies. All you need is a server like nginx for example.
Install nginx inside the node image and use that to serve the
static files.
This approach is completely valid. But the problem is that the node
image is big and most of the stuff it carries is unnecessary to serve
your static files. A better approach to this scenario is as follows:
Create the final image based on nginx and discard all node
related stuff.
This way your image only contains the files that are needed and
becomes really handy.
WORKDIR /app
COPY ./package.json ./
COPY . .
FROM nginx:stable-alpine
EXPOSE 80
As you can see the Dockerfile looks a lot like your previous ones
with a few oddities. The explanation for this file is as follows:
From line 3 to line 9, it's standard stuff that you've seen many
times before. The RUN npm run build command actually
compiles the entire application and tucks it inside /app/dist
directory where /app is the working directory and /dist is
the default output directory for vite applications.
#
# - Building production bundle...
#
# [write] dist/index.html 0.39kb, brotli: 0.15kb
# [write] dist/_assets/docker-handbook-github.3adb4865.webp 12.32
# [write] dist/_assets/index.eabcae90.js 42.56kb, brotli: 15.40kb
# [write] dist/_assets/style.0637ccc5.css 0.16kb, brotli: 0.10kb
# - Building production bundle...
#
# Build completed in 1.71s.
#
# Removing intermediate container 4d918cf18584
# ---> 187fb3e82d0d
# Step 7/9 : EXPOSE 80
# ---> Running in b3aab5cf5975
# Removing intermediate container b3aab5cf5975
# ---> d6fcc058cfda
# Step 8/9 : FROM nginx:stable-alpine
# stable: Pulling from library/nginx
# 6ec7b7d162b2: Already exists
# 43876acb2da3: Pull complete
# 7a79edd1e27b: Pull complete
# eea03077c87e: Pull complete
# eba7631b45c5: Pull complete
# Digest: sha256:2eea9f5d6fff078ad6cc6c961ab11b8314efd91fb8480b5d
# Status: Downloaded newer image for nginx:stable
# ---> 05f64a802c26
# Step 9/9 : COPY --from=builder /app/dist /usr/share/nginx/html
# ---> 8c6dfc34a10d
# Successfully built 8c6dfc34a10d
# Successfully tagged hello-dock:prod
Once the image has been built, you may run a new container by
executing the following command:
docker container run \
--rm \
--detach \
--name hello-dock-prod \
--publish 8080:80 \
hello-dock:prod
# 224aaba432bb09aca518fdd0365875895c2f5121eb668b2e7b2d5a99c019b95
Here you can see my hello-dock application in all its glory. Multi-
staged builds can be very useful if you're building large applications
with a lot of dependencies. If configured properly, images built in
multiple stages can be very optimized and compact.
.git
*Dockerfile*
*docker-compose*
node_modules
These two containers are completely isolated from each other and
are oblivious to each other's existence. So how do you connect the
two? Won't that be a challenge?
You may think of two possible solutions to this problem. They are as
follows:
The first one involves exposing a port from the postgres container
and the notes-api will connect through that. Assume that the
exposed port from the postgres container is 5432. Now if you try
to connect to 127.0.0.1:5432 from inside the notes-api container,
you'll find that the notes-api can't find the database server at all.
The reason is that when you say 127.0.0.1 inside the notes-api
container, you're simply referring to the localhost of that
container, and that container only. The postgres server simply
doesn't exist there. As a result, the notes-api application will fail to
connect.
The second solution you may think of is finding the exact IP address
of the postgres container using the container inspect command
and using that with the port. Assuming the name of the postgres
container is notes-api-db-server you can easily get the IP address
by executing the following command:
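A sketch, using the --format option to pull out just the IP address:

docker container inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' notes-api-db-server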
# 172.17.0.2
Now given that the default port for postgres is 5432, you can very
easily access the database server by connecting to 172.17.0.2:5432
from the notes-api container.
Now that I've dismissed the possible wrong answers to the original
question, the correct answer is, you connect them by putting them
under a user-defined bridge network.
docker network ls
You should see three networks in your system. Now look at the
DRIVER column of the table here. These drivers can be treated as
the type of the network.
By default, Docker has five networking drivers. They are as follows:
docker network ls
As you can see, Docker comes with a default bridge network named
bridge. Any container you run will be automatically attached to
this default bridge network.
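The examples in this section use a user-defined bridge network named skynet. Creating one is a single command; a sketch:

docker network create skynet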
# 7bd5f351aa892ac6ec15fed8619fc3bbb95a7dcdd58980c28304627c8f7eb07
docker network ls
As you can see, a new network has been created with the given
name. No container is currently attached to this network. In the
next sub-section, you'll learn about attaching containers to a
network.
# hello-dock
# hello-dock
As you can see from the outputs of the two network inspect
commands, the hello-dock container is now attached to both the
skynet and the default bridge networks.
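For reference, attaching an already-running container to a network like this can be sketched as:

docker network connect skynet hello-dock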
/ # ping hello-dock
As you can see, running ping hello-dock from inside the alpine-box
container works because both of the containers are under the
same user-defined bridge network and automatic DNS resolution is
working.
You can use the network disconnect command for this task. The
generic syntax for the command is as follows:
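A sketch of the syntax along with a concrete example:

docker network disconnect <network identifier> <container identifier>
docker network disconnect skynet hello-dock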
To remove the skynet network from your system, you can execute
the following command:
docker network rm skynet
You can also use the network prune command to remove any
unused networks from your system. The command also has the -f
or --force and -a or --all options.
In this project there are two containers in total that you'll have to
connect using a network. Apart from this, you'll also learn about
concepts like environment variables and named volumes. So without
further ado, let's jump right in.
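The run command below attaches the container to a network called notes-api-network, so that network needs to exist first; a sketch:

docker network create notes-api-network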
To run the database server you can execute the following command:
docker container run \
--detach \
--name=notes-db \
--env POSTGRES_DB=notesdb \
--env POSTGRES_PASSWORD=secret \
--network=notes-api-network \
postgres:12
# a7b287d34d96c8e81a63949c57b83d7c1d71b5660c87f5172f074bd1606196d
docker container ls
The --env option for the container run and container create
commands can be used for providing environment variables to a
container. As you can see, the database container has been created
successfully and is running now.
Now what if the container gets destroyed for some reason? You'll
lose all your data. To solve this problem, a named volume can be
used.
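A named volume can be created with the volume create command; a sketch using the name the book uses:

docker volume create notes-db-data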
# notes-db-data
docker volume ls
# notes-db
# notes-db
Now run a new container and assign the volume using the --volume
or -v option.
docker container run \
--detach \
--volume notes-db-data:/var/lib/postgresql/data \
--name=notes-db \
--env POSTGRES_DB=notesdb \
--env POSTGRES_PASSWORD=secret \
--network=notes-api-network \
postgres:12
# 37755e86d62794ed3e67c19d0cd1eba431e26ab56099b92a3456908c1d34679
# notes-db-data
Now the data will safely be stored inside the notes-db-data volume
and can be reused in the future. A bind mount can also be used
instead of a named volume here, but I prefer a named volume in
such scenarios.
To access the logs from the notes-db container, you can execute the
following command:
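A sketch:

docker container logs notes-db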
As evident by the text in line 57, the database is up and ready to accept
connections from the outside. There is also the --follow or -f
option for the command which lets you attach the console to the
logs output and get a continuous stream of text.
How to Write the Dockerfile
Go to the directory where you've cloned the project code. Inside
there, go inside the notes-api/api directory, and create a new
Dockerfile. Put the following code in the file:
# stage one
FROM node:lts-alpine as builder
WORKDIR /app
COPY ./package.json .
RUN npm install --only=prod
# stage two
FROM node:lts-alpine
EXPOSE 3000
ENV NODE_ENV=production
USER node
RUN mkdir -p /home/node/app
WORKDIR /home/node/app
COPY . .
COPY --from=builder /app/node_modules /home/node/app/node_module
This is a multi-staged build. The first stage is used for building and
installing the dependencies using node-gyp and the second stage is
for running the application. I'll go through the steps briefly:
Before you run a container using this image, make sure the database
container is running, and is attached to the notes-api-network .
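One way to check is inspecting the database container; a sketch:

docker container inspect notes-db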
# [
# {
# ...
# "State": {
# "Status": "running",
# "Running": true,
# "Paused": false,
# "Restarting": false,
# "OOMKilled": false,
# "Dead": false,
# "Pid": 11521,
# "ExitCode": 0,
# "Error": "",
# "StartedAt": "2021-01-26T06:55:44.928510218Z",
# "FinishedAt": "2021-01-25T14:19:31.316854657Z"
# },
# ...
# "Mounts": [
# {
# "Type": "volume",
# "Name": "notes-db-data",
# "Source": "/var/lib/docker/volumes/notes-db-dat
# "Destination": "/var/lib/postgresql/data",
# "Driver": "local",
# "Mode": "z",
# "RW": true,
# "Propagation": ""
# }
# ],
# ...
# "NetworkSettings": {
# ...
# "Networks": {
# "bridge": {
# "IPAMConfig": null,
# "Links": null,
# "Aliases": null,
# "NetworkID": "e4c7ce50a5a2a49672155ff498597
# "EndpointID": "2a2587f8285fa020878dd38bdc63
# "Gateway": "172.17.0.1",
# "IPAddress": "172.17.0.2",
# "IPPrefixLen": 16,
# "IPv6Gateway": "",
# "GlobalIPv6Address": "",
# "GlobalIPv6PrefixLen": 0,
# "MacAddress": "02:42:ac:11:00:02",
# "DriverOpts": null
# },
# "notes-api-network": {
# "IPAMConfig": {},
# "Links": null,
# "Aliases": [
# "37755e86d627"
# ],
# "NetworkID": "06579ad9f93d59fc3866ac628ed25
# "EndpointID": "5b8f8718ec9a5ec53e7a13cce3cb
# "Gateway": "172.18.0.1",
# "IPAddress": "172.18.0.2",
# "IPPrefixLen": 16,
# "IPv6Gateway": "",
# "GlobalIPv6Address": "",
# "GlobalIPv6PrefixLen": 0,
# "MacAddress": "02:42:ac:12:00:02",
# "DriverOpts": {}
# }
# }
# }
# }
# ]
I've shortened the output for easy viewing here. On my system, the
notes-db container is running, uses the notes-db-data volume,
and is attached to the notes-api-network bridge.
Once you're assured that everything is in place, you can run a new
container by executing the following command:
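A sketch of such a command, assuming the image was tagged notes-api and that the API reads its database connection details from environment variables like the ones in the compose file shown later in the book:

docker container run \
--detach \
--name=notes-api \
--env DB_HOST=notes-db \
--env DB_DATABASE=notesdb \
--env DB_PASSWORD=secret \
--publish=3000:3000 \
--network=notes-api-network \
notes-api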
# f9ece420872de99a060b954e3c236cbb1e23d468feffa7fed1e06985d99fb91
To check if the container is running properly or not, you can use the
container ls command:
docker container ls
The API has five routes in total that you can see inside the
/notes-api/api/api/routes/notes.js file.
Although the container is running, there is one last thing that you'll
have to do before you can start using it. You'll have to run the
database migration necessary for setting up the database tables,
and you can do that by executing the npm run db:migrate command
inside the container.
For this, you'll have to use the exec command to execute a custom
command inside a running container.
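A sketch of running the migration with exec, assuming the API container is named notes-api; and if you want an interactive shell inside the running container instead, exec works with the -it option as well:

docker container exec notes-api npm run db:migrate
docker container exec -it notes-api sh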
# / # uname -a
# Linux b5b1367d6b31 5.10.9-201.fc33.x86_64 #1 SMP Wed Jan 20 16:
You'll find four shell scripts in the notes-api directory. They are as
follows:
# ./shutdown.sh
# stopping api container --->
# notes-api
# api container stopped --->
# ./destroy.sh
# removing api container --->
# notes-api
# api container removed --->
I'm not going to explain these scripts because they're simple
if-else statements along with some Docker commands that you've
already seen many times. If you have some understanding of the
Linux shell, you should be able to understand the scripts as well.
# stage one
WORKDIR /app
COPY ./package.json .
RUN npm install
# stage two
FROM node:lts-alpine
ENV NODE_ENV=development
USER node
WORKDIR /home/node/app
COPY . .
version: "3.8"

services:
  db:
    image: postgres:12
    container_name: notes-db-dev
    volumes:
      - notes-db-dev-data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: notesdb
      POSTGRES_PASSWORD: secret
  api:
    build:
      context: ./api
      dockerfile: Dockerfile.dev
    image: notes-api:dev
    container_name: notes-api-dev
    environment:
      DB_DATABASE: notesdb
      DB_PASSWORD: secret
    volumes:
      - /home/node/app/node_modules
      - ./api:/home/node/app
    ports:
      - 3000:3000

volumes:
  notes-db-dev-data:
    name: notes-db-dev-data
db:
  image: postgres:12
  container_name: notes-db-dev
  volumes:
    - db-data:/var/lib/postgresql/data
  environment:
    POSTGRES_DB: notesdb
    POSTGRES_PASSWORD: secret
The image key holds the image repository and tag used for
this container. We're using the postgres:12 image for
running the database container.
api:
  build:
    context: ./api
    dockerfile: Dockerfile.dev
  image: notes-api:dev
  container_name: notes-api-dev
  environment:
    DB_HOST: db ## same as the database service name
    DB_DATABASE: notesdb
    DB_PASSWORD: secret
  volumes:
    - /home/node/app/node_modules
    - ./api:/home/node/app
  ports:
    - 3000:3000
The ports map defines any port mapping. The syntax, <host
port>:<container port> is identical to the --publish
option you used before.
volumes:
  db-data:
    name: notes-db-dev-data
You can learn about the different options for volume configuration
in the official docs.
How to Start Services in Docker Compose
There are a few ways of starting services defined in a YAML file. The
first command that you'll learn about is the up command. The up
command builds any missing images, creates containers, and starts
them in one go.
Before you execute the command, though, make sure you've opened your terminal in the same directory where the docker-compose.yaml file is.
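A sketch of bringing everything up in the background:

docker-compose --file docker-compose.yaml up --detach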
Apart from the up command, there is the start command. The
main difference between these two is that the start command
doesn't create missing containers; it only starts existing containers.
It's basically the same as the container start command.
docker-compose ps
Unlike the container exec command, you don't need to pass the -
it flag for interactive sessions. docker-compose does that
automatically.
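A sketch of executing a command inside the api service (here the database migration from earlier):

docker-compose exec api npm run db:migrate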
To access the logs from the api service, execute the following
command:
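A sketch:

docker-compose logs api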
# Attaching to notes-api-dev
# notes-api-dev | [nodemon] 2.0.7
# notes-api-dev | [nodemon] reading config ./nodemon.json
# notes-api-dev | [nodemon] to restart at any time, enter `rs`
# notes-api-dev | [nodemon] or send SIGHUP to 1 to restart
# notes-api-dev | [nodemon] ignoring: *.test.js
# notes-api-dev | [nodemon] watching path(s): *.*
# notes-api-dev | [nodemon] watching extensions: js,mjs,json
# notes-api-dev | [nodemon] starting `node bin/www`
# notes-api-dev | [nodemon] forking
# notes-api-dev | [nodemon] child pid: 19
# notes-api-dev | [nodemon] watching 18 files
# notes-api-dev | app running -> http://127.0.0.1:3000
This is just a portion of the log output. You can kind of hook into
the output stream of the service and get the logs in real-time by
using the -f or --follow option. Any later log will show up
instantly in the terminal as long as you don't exit by pressing ctrl +
c or closing the window. The container will keep running even if you
exit out of the log window.
If you've cloned the project code repository, then go inside the
fullstack-notes-application directory. Each directory inside the
project root contains the code for each service and the
corresponding Dockerfile.

Before we start with the docker-compose.yaml file, let's look at a
diagram of how the application is going to work:
The router will then see if the requested end-point has /api in it. If
yes, the router will route the request to the back-end; if not, it
will route the request to the front-end.
I will not get into the configuration of NGINX here. That topic is
a bit out of the scope of this book. But if you want to have a look at
it, go ahead and check out the /notes-api/nginx/development.conf file.
FROM nginx:stable-alpine
version: "3.8"

services:
  db:
    image: postgres:12
    container_name: notes-db-dev
    volumes:
      - db-data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: notesdb
      POSTGRES_PASSWORD: secret
    networks:
      - backend
  api:
    build:
      context: ./api
      dockerfile: Dockerfile.dev
    image: notes-api:dev
    container_name: notes-api-dev
    volumes:
      - /home/node/app/node_modules
      - ./api:/home/node/app
    environment:
      DB_PORT: 5432
      DB_USER: postgres
      DB_DATABASE: notesdb
      DB_PASSWORD: secret
    networks:
      - backend
  client:
    build:
      context: ./client
      dockerfile: Dockerfile.dev
    image: notes-client:dev
    container_name: notes-client-dev
    volumes:
      - /home/node/app/node_modules
      - ./client:/home/node/app
    networks:
      - frontend
  nginx:
    build:
      context: ./nginx
      dockerfile: Dockerfile.dev
    image: notes-router:dev
    container_name: notes-router-dev
    restart: unless-stopped
    ports:
      - 8080:80
    networks:
      - backend
      - frontend

volumes:
  db-data:
    name: notes-db-dev-data

networks:
  frontend:
    name: fullstack-notes-application-network-frontend
    driver: bridge
  backend:
    name: fullstack-notes-application-network-backend
    driver: bridge
The file is almost identical to the previous one you worked with. The
only thing that needs some explanation is the network
configuration. The code for the networks block is as follows:
networks:
  frontend:
    name: fullstack-notes-application-network-frontend
    driver: bridge
  backend:
    name: fullstack-notes-application-network-backend
    driver: bridge
# ---> 8c03fdb920f9
# Step 4/13 : COPY ./package.json .
# ---> a1d5715db999
# Step 5/13 : RUN npm install
# ---> Running in fabd33cc0986
### LONG INSTALLATION STUFF GOES HERE ###
# Removing intermediate container fabd33cc0986
# ---> e09913debbd1
# Step 6/13 : FROM node:lts-alpine
# ---> 471e8b4eb0b2
# Step 7/13 : ENV NODE_ENV=development
# ---> Using cache
# ---> b7c12361b3e5
# Step 8/13 : USER node
# ---> Using cache
# ---> f5ac66ca07a4
# Step 9/13 : RUN mkdir -p /home/node/app
# ---> Using cache
# ---> 60094b9a6183
# Step 10/13 : WORKDIR /home/node/app
# ---> Using cache
# ---> 316a252e6e3e
# Step 11/13 : COPY . .
# ---> Using cache
# ---> 3a083622b753
# Step 12/13 : COPY --from=builder /app/node_modules /home/node/a
# ---> Using cache
# ---> 707979b3371c
# Step 13/13 : CMD [ "./node_modules/.bin/nodemon", "--config", "
# ---> Using cache
# ---> f2da08a5f59b
# Successfully built f2da08a5f59b
# Successfully tagged notes-api:dev
# Building client
# Sending build context to Docker daemon 43.01kB
#
# Step 1/7 : FROM node:lts-alpine
# ---> 471e8b4eb0b2
# Step 2/7 : USER node
# ---> Using cache
# ---> 4be5fb31f862
# Step 3/7 : RUN mkdir -p /home/node/app
# ---> Using cache
# ---> 1fefc7412723
# Step 4/7 : WORKDIR /home/node/app
# ---> Using cache
# ---> d1470d878aa7
# Step 5/7 : COPY ./package.json .
# ---> Using cache
# ---> bbcc49475077
# Step 6/7 : RUN npm install
# ---> Using cache
# ---> 860a4a2af447
# Step 7/7 : CMD [ "npm", "run", "serve" ]
# ---> Using cache
# ---> 11db51d5bee7
# Successfully built 11db51d5bee7
# Successfully tagged notes-client:dev
# Building nginx
# Sending build context to Docker daemon 5.12kB
#
# Step 1/2 : FROM nginx:stable-alpine
# ---> f2343e2e2507
# Step 2/2 : COPY ./development.conf /etc/nginx/conf.d/default.co
# ---> Using cache
# ---> 02a55d005a98
# Successfully built 02a55d005a98
# Successfully tagged notes-router:dev
# Creating notes-client-dev ... done
# Creating notes-api-dev ... done
# Creating notes-router-dev ... done
# Creating notes-db-dev ... done
The application should now be up and running properly. The project also comes with shell scripts and a Makefile.
Explore them to see how you can run this project without the help of
docker-compose, like you did in the previous section.
Conclusion
I would like to thank you from the bottom of my heart for the time
you've spent reading this book. I hope you've enjoyed it and have
learned all the essentials of Docker.