15 Monitoring Part4 01

The document discusses configuring application-level monitoring of a Kubernetes cluster using Prometheus client libraries. It explains that custom monitoring is needed for applications running in the cluster. Prometheus client libraries for different programming languages expose metrics in a format that Prometheus can scrape. The document walks through integrating the Node.js Prometheus client library into a sample app to expose metrics for requests and request durations. It shows how the app code defines, tracks, and exposes the metrics via an HTTP endpoint for Prometheus to scrape. This allows visualizing and alerting on the custom application metrics alongside other infrastructure and cluster metrics.

Uploaded by

Sheroze Masood

At this point we have configured monitoring at four different levels in our Kubernetes cluster.

We have monitoring set up for the Kubernetes nodes and the resource consumption on those nodes. We have monitoring for Kubernetes components, as well as for third-party applications running inside the cluster, like the monitoring stack applications themselves and the Redis application.

Now, as a final part, what's missing is monitoring for our own applications. And obviously, for our own applications there is no exporter or ready-made application that we can just deploy in the cluster and start scraping metrics from. Right?

We have to actually define those metrics, and we have to write that into the application ourselves. So we need some custom solution here to start monitoring our own applications.

So, how do we do that? In order to monitor our own applications with Prometheus, we need to use Prometheus client libraries in those applications. Prometheus client libraries are basically libraries for different programming languages that give you an abstract interface for defining the metrics you want to expose in your application, as well as for exposing those metrics in the time-series format that Prometheus can scrape.

So that's what Prometheus client libraries give you. And as I said, each programming language has its own Prometheus client library that you can use to start exposing your application's metrics.

So, in this part we're going to learn how to expose metrics in our own application, using the example of a very simple Node.js application and the Prometheus client library for Node.js. Once we have exposed metrics using the Prometheus client library, we're going to deploy our simple Node.js application in the cluster, and we're going to configure Prometheus to start scraping the metrics that our application exposes through that client library.

And once we have Prometheus scraping our own application's metrics, we can of course configure alert rules, create Grafana dashboards based on those metrics to visualize the data, and so on.

So, we can use those metrics just like any other metrics for other resources and applications. With that, let's get started. As a first step, let's see what a Prometheus client library looks like for one of the programming languages, in this case Node.js. As an example, we're going to use the simple Node.js application that you see here.

So, if I start the application, it's going to start on port 3000, and let's see what's there. There you go. That's basically the whole application: we have a backend that sends the browser this response, or this simple UI.

So, now as a first step for monitoring this application, we want to expose metrics from it, and we're going to explicitly expose two metrics that we're interested in. One of them is the number of requests the application is getting; with this, we can check at any point in time how much load the application has. The second one is the duration of requests.

So basically, how long does the application take to handle a request? Right? If the application is too slow in handling requests, that's obviously a bad sign, and we need to tell the developers that there is something to be fixed.

So, we want to expose these two metrics. In real projects, the way this usually works is that the engineer or Kubernetes administrator who set up the monitoring in the cluster, and who also wants to bring the custom internal applications under monitoring, will go to the developers and ask them to configure their own application to expose metrics, so that it can be monitored with Prometheus. And since the developers know the code, they will find the Prometheus client library and integrate it into their code.

Basically, they expose the metrics that they, or you, are interested in monitoring, and they write that code. So this is not going to be your responsibility as the engineer running the cluster, but rather the developer team's job.

So, in our case, let's say we already told the developers to integrate the Prometheus client library into the code and expose those two metrics that we can scrape with Prometheus, and they did it.

Going back to the application, let's say the developers have already integrated the Prometheus client into this Node.js application.

So, we're going to take a look at this. First of all, the client for Node.js is called prom-client. This is one of the dependencies that we have here in package.json, with a version, and if you look up the Prometheus client for Node.js in the npm package registry, you're going to see that this is the library that is mostly used for this case. Plus, you have all the documentation there, which developers will use to understand how this library and its functions work, so they can start exposing the metrics that they, or you, want to monitor.
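As a rough sketch, the relevant part of the app's package.json might look like this (the version numbers here are illustrative, not necessarily the ones the sample app pins):

```json
{
  "dependencies": {
    "express": "^4.18.2",
    "prom-client": "^14.2.0"
  }
}
```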

And again, you will have different client libraries for different programming languages; this is just the one for Node.js. Once it's been added to the dependencies, it can then be used in the code. We have just one Node.js file, which is server.js, and everything is basically happening right there. It's a very simple application, and this is where we import the prom-client library in the code and do some basic configuration to expose metrics.
So, first of all, we want to expose the number of requests and the request duration. But this library also lets you expose some default metrics out of the box, without you configuring anything. Just by calling the client's collectDefaultMetrics function, it will give you some additional metrics out of the box, which is a great thing, so we're going to expose them as well. The probe will be taken every five seconds, so those default metrics will be updated, or exposed, every five seconds.
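A minimal sketch of that setup could look like the following. Note that the timeout option (the probe interval in milliseconds) applies to older prom-client versions; newer versions compute the default metrics at scrape time instead.

```javascript
// Import the Prometheus client library for Node.js.
const client = require('prom-client');

// Enable default Node.js process metrics (heap usage, event loop lag,
// CPU time, etc.) without defining anything ourselves.
const collectDefaultMetrics = client.collectDefaultMetrics;
collectDefaultMetrics({ timeout: 5000 }); // probe every 5 seconds (older API)
```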

Now it's time for the two metrics that we want to explicitly expose for our application. The first one is the total number of HTTP requests to our application, and for that we're using a counter, because it's a counter type of metric. We have the name of the metric, and we have help, which is basically a description of the metric. So this tells the client library: hey, we're interested in creating a metric with this name and description. The second one is the duration of requests, which is going to be a histogram type. A histogram works in buckets: you have multiple buckets, from the lowest to the highest, and depending on the value of the duration in seconds, which is what we want to monitor, each observation gets assigned to one of the buckets. So that's how the histogram type is made up, and this basically tells the library that these are the two metrics we want to register.
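A sketch of those two definitions with prom-client could look like this. The counter name matches the one the transcript mentions later; the histogram name and bucket boundaries are illustrative assumptions:

```javascript
const client = require('prom-client');

// Counter: a monotonically increasing count of handled requests.
const httpRequestsTotal = new client.Counter({
  name: 'http_request_operations_total',
  help: 'Total number of HTTP requests',
});

// Histogram: observed request durations in seconds, grouped into
// buckets from the lowest to the highest upper bound.
const httpRequestDuration = new client.Histogram({
  name: 'http_request_duration_seconds', // assumed name
  help: 'Duration of HTTP requests in seconds',
  buckets: [0.1, 0.5, 2, 5, 10],         // illustrative bucket bounds
});
```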

Now, this just defines those metrics. As a next step, we want to actually track those numbers. Basically, whenever a new request is made to the application, we want to increment the number of requests, right? And for that request, we want to measure how long it took. So, where is the logic for actually measuring those numbers? Well, that is in the application logic itself, right here, so let's see what we are doing. First of all, we're simulating the request-handling duration with a random amount of time. Basically, we're saying the request entered here, so we start the timer, and then we simulate the processing. This is just mock code; you're not going to have this in real code. We're just simulating that the request has been processed by the application logic, and after that period of time, the response is returned.

And before sending the response, we are also incrementing the total number of requests. In real code, your application will be a little more complex, and you're going to be doing a couple of things: for each request, you're going to run a timer, basically registering the time when the request comes in and the time when the response goes out, and that difference is the request duration. We are just simulating that here. Right? As I said, we're using a random duration so that the value is not always the same; we're going to see some variation, which makes it more interesting, but generally that's how it works.
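The tracking logic described above might be sketched like this, assuming an Express app and the two metric objects defined earlier (the variable names are illustrative):

```javascript
app.get('/', (req, res) => {
  // Start a timer for the duration histogram; calling end() later
  // records the elapsed time in seconds as one observation.
  const end = httpRequestDuration.startTimer();

  // Mock only: simulate request processing taking a random amount
  // of time. A real handler would do actual work here instead.
  const simulatedDelayMs = Math.floor(Math.random() * 3) * 1000;
  setTimeout(() => {
    httpRequestsTotal.inc(); // count this request
    end();                   // observe its duration
    res.send('<h1>Hello world</h1>');
  }, simulatedDelayMs);
});
```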
So, the bottom line here is: you don't have to understand the syntax, and you don't have to understand how the client library API works in detail. You have to understand the logic. The logic is that for every metric you want to expose in your application, you have to explicitly define that metric in your code, and you have to track its value in your application logic. You don't get this out of the box; you have to explicitly track it.

And as I said, this is going to be part of the developers' job, because they are going to integrate this into their own code using their respective client library. And finally, once the metrics are collected, they should also be exposed. Right? As you already know, the default endpoint where metrics get exposed is /metrics. So, we have a /metrics endpoint here that we are handling. On that /metrics endpoint, we are setting the content type of the metrics data, which is the time-series data format, and we get that content type from the client, meaning the Prometheus client. And finally, we send the whole data to the UI, or to the browser, using the client.register.metrics function.
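That endpoint could be sketched as follows. In recent prom-client versions register.metrics() returns a Promise, hence the await; older versions returned the string directly:

```javascript
const client = require('prom-client');

app.get('/metrics', async (req, res) => {
  // Advertise the Prometheus text exposition format...
  res.set('Content-Type', client.register.contentType);
  // ...and dump every registered metric (defaults plus our own).
  res.end(await client.register.metrics());
});
```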

So, this should have all the metrics that we collected using this logic, and we should see them all in the browser. To check that, let's go back to the application and the /metrics endpoint, and there you go: we're exposing metrics data. You see we have lots of data, not just two metrics; these are the default metrics that we enabled right here. And here you can see what those metrics are. Of course, you can decide whether they're interesting for you or not, and disable them if you want.

Now let's see where our own two metrics actually are. If I scroll all the way down, the last two are our metrics. We have http_request_operations_total, which is the name of the metric we defined, and then we have the request duration histogram, which is this metric right here, with the buckets that we defined. Each request basically gets assigned to the respective bucket depending on how long it took. And then we have this duration data right here. So, the total request count is at five; if I make another request to the application and refresh, we should see six total requests made to the application. Great.
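The bucket assignment works cumulatively: an observation increments every bucket whose upper bound (the le label) is greater than or equal to the observed value, plus the implicit +Inf bucket. This tiny model, independent of prom-client, reproduces that behavior with the illustrative bucket bounds used above:

```javascript
// Record one observation into a cumulative Prometheus-style histogram.
function observe(buckets, counts, value) {
  for (const le of buckets) {
    if (value <= le) counts[le] = (counts[le] || 0) + 1;
  }
  counts['+Inf'] = (counts['+Inf'] || 0) + 1; // +Inf catches everything
  return counts;
}

const buckets = [0.1, 0.5, 2, 5, 10]; // upper bounds in seconds (illustrative)
let counts = {};
counts = observe(buckets, counts, 0.7); // one request that took 0.7s

// The 0.7s request lands in every bucket with le >= 0.7:
console.log(counts); // → { '2': 1, '5': 1, '10': 1, '+Inf': 1 }
```

This is why, in the exposed data, larger buckets never show smaller counts than the buckets below them.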
So, now we have an application that exposes metrics we can track and monitor. As the next step, we want to deploy that application in the cluster, so we can connect Prometheus to it and start scraping those application metrics, which will then allow us to configure alert rules for them, or just visualize them in Grafana, for example.
So, let's do that as a next step: let's build a Docker image from our application and push it to a private repository, so that we can deploy it in Kubernetes. Going back to the application, we also have a Dockerfile that defines how the application should be built into a Docker image. To go through this very quickly: we create a /usr/app directory inside the container, then we copy package.json and the whole app folder into that directory, and we set it as the working directory. Since package.json is in the folder we just copied, we can run npm install there, so the node_modules folder will be generated and the dependencies like express and prom-client will be installed and available for the code. Then we just start server.js and expose port 3000 inside the container. So, that's our Dockerfile, and with that we can build the image.
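The steps just described could be sketched as a Dockerfile like this (the base image tag and exact paths are assumptions, not necessarily what the sample repo uses):

```dockerfile
FROM node:13-alpine

# Create the app directory inside the container and copy the sources.
RUN mkdir -p /usr/app
COPY package*.json /usr/app/
COPY app /usr/app/

WORKDIR /usr/app

# Install dependencies (express, prom-client, ...) from package.json.
RUN npm install

EXPOSE 3000
CMD ["node", "server.js"]
```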
I'm going to do that in the terminal right here, and I'm in the directory where I have the Dockerfile. So, I'm going to run docker build, and we're going to name the image with the private repository name and a tag, which I'm going to call just node-app, plus the location of the Dockerfile. So, let's build the image. That's basically the whole name of the image for the Docker Hub repository; that's where I have my private Docker repository, and that's where I'm going to push this image. To push the image, I first have to log in to my private Docker Hub repository with the username and password, and then run docker push with the name of the image. Let's execute. And if I go to my Docker Hub repository, this is the name of the repository, and under tags, this is the latest tag that I pushed a few seconds ago. So, I have my node-app image already available here.
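The build-and-push sequence might look like this, where <repo-user>/<repo-name> is a placeholder for your own private Docker Hub repository:

```shell
# Build the image from the Dockerfile in the current directory,
# tagging it with the private repository name and a "node-app" tag.
docker build -t <repo-user>/<repo-name>:node-app .

# Authenticate against Docker Hub, then push the tagged image.
docker login
docker push <repo-user>/<repo-name>:node-app
```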

We can now deploy it to Kubernetes. Of course, if you have a CI/CD pipeline set up for your application, you would just commit to the repository; that would trigger a build, and the build would then push the image to the private Docker repository. That would be the proper workflow for building an application. In our case, I just pushed it directly from my local computer for the sake of simplicity in this example.
Now that we have our application available in Docker Hub, let's deploy it in the Kubernetes cluster. We'll create Kubernetes deployment and service configurations for our node-app application, and I'm going to create that file right here in the same repository as the application. Let's call it config.yaml, or kubernetes-config.yaml. As always, I'm going to paste in a basic configuration that we can then adjust. Basically, this is a deployment with a name and label; we have the app: nodeapp label everywhere, labelling the deployment, the pod, and the service with the same label, and here we have the image, which is the name of the image from our Docker repository. The application is configured to run on port 3000, so let's give the service port 3000 and targetPort 3000 as well. So, this is just a very basic configuration for our application. Now, because we're deploying an image from a private repository, we are also going to need to give Kubernetes access to that private repository. Right?
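A sketch of that basic configuration might look like the following, with the image name as a placeholder and the imagePullSecrets entry referencing the my-registry-key secret created in the next step:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodeapp
  labels:
    app: nodeapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodeapp
  template:
    metadata:
      labels:
        app: nodeapp
    spec:
      imagePullSecrets:
      - name: my-registry-key
      containers:
      - name: nodeapp
        image: <repo-user>/<repo-name>:node-app  # placeholder
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: nodeapp
spec:
  selector:
    app: nodeapp
  ports:
  - port: 3000
    targetPort: 3000
```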
So, what we need to do is create a Docker login secret in the cluster that the deployment can use when fetching this image from the repository, which you actually learned about in an earlier module. But as a refresher, let's see how it's done. We're going to create the secret using a kubectl command instead of a config file. So, we run kubectl create secret with the docker-registry type, and we can call our secret my-registry-key. This is the name of the secret, and the type has to be docker-registry. Then we need to pass three values for the secret: the registry URL (or registry server), the username, and the password. So, --docker-server equals... and if you're unsure what the registry URL for Docker Hub is, you can look it up using docker info, which gives you a bunch of information, one item being the registry URL.

So, that's the Docker Hub server endpoint we can use here. Then we have --docker-username, in my case my username, and then --docker-password with the password. Execute, and the output confirms: secret my-registry-key created. Now that we have that secret, let's clear this up; we're going to configure it in the deployment so that Kubernetes can fetch the image using that Docker registry secret.
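The full command described above might look like this, with <user> and <password> as placeholders for real credentials:

```shell
# Create a docker-registry type secret holding the Docker Hub credentials.
kubectl create secret docker-registry my-registry-key \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<user> \
  --docker-password=<password>
```

The server value is the Docker Hub registry endpoint that docker info reports; for other registries, substitute their URL instead.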

And for that we have imagePullSecrets, with the name of the secret, which is my-registry-key. So, now let's apply our configuration file. This will create the node application deployment and service in the default namespace in the cluster. Let's execute and switch to the command line. And if I run kubectl get pod, there you go: we have the nodeapp application running in the cluster.
I have actually deleted all the microservices so that we have a cleaner output here. The pod is in a running state, and we also have our service for nodeapp. And just to make sure everything is running properly, and our application is exposing metrics inside the Kubernetes cluster using the Docker image, let's do a quick check: kubectl port-forward on the nodeapp service with port 3000. We're basically port-forwarding this service to bind on localhost:3000. Let's do that and access localhost:3000; this sends the request to the application instance running inside the cluster. Right? So, let's refresh a couple of times and then check the /metrics endpoint. Everything looks fine: we have our metrics exposed and being tracked right here. Great.
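The check just described could look like this:

```shell
# Forward local port 3000 to port 3000 of the nodeapp service in the cluster.
kubectl port-forward service/nodeapp 3000:3000

# In a second terminal: hit the app a few times, then inspect its metrics.
curl http://localhost:3000/
curl http://localhost:3000/metrics
```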

So, our application is running successfully in the cluster. Now, as a next step it's time to
configure Prometheus to start scraping the metrics endpoint of our nodeapp application.
