Observability Patterns
Context:
Distributed tracing
Log aggregation
Application metrics
Problem:
How do we combine the logs from multiple services into a central location where they can be indexed, searched, filtered, and grouped to find the bugs that are contributing to a problem?
Distributed Tracing
When multiple microservices interact with each other to complete business use cases, any of them may fail during this inter-service communication. Such failures can break the business flow, and without tracing it becomes complex and tedious to identify and debug the issue.
Logging alone is not enough; it is very tedious to read through thousands of lines of logs from multiple microservices running on different containers in a multi-cloud environment and identify issues.
In this pattern, request and response logs for the microservices' REST APIs are recorded and monitored asynchronously.
Advantages:
• Allows custom queries on the API tracing logs through dashboards and APIs.
• Tracks request and response times and peak usage durations.
Use cases:
• Tracing a request end-to-end across microservices, for example the brands service calling the category service (see the implementation below).
Implementation:
• It adds trace and span IDs to all the logs, so we can extract everything belonging to a given trace or span in a log aggregator.
• It does this by adding filters and interacting with other Spring components so that the correlation IDs it generates are passed through to all the system calls.
Spring Cloud Sleuth adds three pieces of information to every log entry written by a microservice, i.e. [<App Name>, <Trace ID>, <Span ID>]:
1. App Name: The name of the application (its spring.application.name value) that wrote the log entry.
2. Trace ID: The equivalent term for correlation ID. It is a unique ID that represents an entire transaction.
3. Span ID: A unique ID that represents one part of the overall transaction. Each service participating in the transaction has its own span ID. Span IDs are particularly relevant when we integrate with Zipkin to visualize our transactions.
Step-1: Add the Sleuth dependency to the pom.xml file of the category service and the product-brands service, as shown below.
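A minimal sketch of that dependency (assuming the Spring Cloud BOM is already imported, so no explicit version is needed):

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>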
Step-2: Add logger statements in the category and brands services, as shown below, so that logs are written when inter-microservice communication happens.
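A minimal sketch of such a logger statement in the brands service (the class, endpoint, and Feign client names are assumed; an equivalent statement goes into the category service):

import java.util.List;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical Feign client in the brands service pointing at the category service
@FeignClient(name = "category-service")
interface CategoryClient {
    @GetMapping("/categories")
    List<String> getAllCategories();
}

@RestController
class BrandController {

    private static final Logger log = LoggerFactory.getLogger(BrandController.class);

    private final CategoryClient categoryClient;

    BrandController(CategoryClient categoryClient) {
        this.categoryClient = categoryClient;
    }

    @GetMapping("/brands/categories")
    List<String> getProductCategories() {
        // Sleuth automatically prefixes this line with [app-name, traceId, spanId]
        log.info("Fetching product categories; calling category-service via Feign");
        return categoryClient.getAllCategories();
    }
}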
Step-3: Start the product-brands service and the category service; in the console log we can see the statements below.
Step-4: Invoke the get-product-categories endpoint, which internally invokes the category service's get-all-categories endpoint through the Feign client. In the category service logs we can then see the same trace ID as the one in the brands service logs.
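For illustration only (the IDs below are made up), the correlated console output might look like this, with an identical trace ID in both services but different span IDs:

INFO [product-brands-service,7f3a2b1c9d8e4f65,7f3a2b1c9d8e4f65] Fetching product categories; calling category-service via Feign
INFO [category-service,7f3a2b1c9d8e4f65,2c4d6e8f0a1b3c5d] Returning all product categories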
Zipkin (https://zipkin.io/):
To get started with Zipkin, we do not need to build a separate microservice inside our application. We can always set up Zipkin on a separate server using its Docker image or its installer.
Step-1: Download the self-executable Zipkin jar file, as shown below:
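For example, using the quick-start script from zipkin.io (the jar can then be started with java -jar):

curl -sSL https://zipkin.io/quickstart.sh | bash -s
java -jar zipkin.jar
# the Zipkin UI is then available at http://localhost:9411 by default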
Step-4: Add the Zipkin dependency in both the brands and the categories service, as shown below:
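A minimal sketch, assuming a Sleuth 3.x / recent Spring Cloud setup (older Spring Cloud releases use spring-cloud-starter-zipkin instead):

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-sleuth-zipkin</artifactId>
</dependency>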
Step-5: So that Zipkin can collect the trace data, add the properties below to the application.properties file of both the brands and the categories service:
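For example (property names from Spring Cloud Sleuth; a sampler probability of 1.0 exports every trace, which is fine for a demo):

spring.zipkin.base-url=http://localhost:9411
spring.sleuth.sampler.probability=1.0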
Step-6: Restart the brands and the category service; we can then see the services appear in the Zipkin server, as shown below:
Step-7: Now invoke the brands service's get-all-categories endpoint, as shown below:
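For example, assuming the brands service runs locally on port 8080 and exposes the hypothetical endpoint used earlier:

curl http://localhost:8080/brands/categories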
Step-8: We can then see the complete trace of the request from the brands service to the category service, as shown below:
Log aggregation
This is a pattern for aggregating logs at a centralized location. It is a technique to collect logs from various microservices and other apps and persist them at a central location for querying and visualization on log-monitoring UI dashboards.
To complete business use cases, multiple microservices interact with each other across different containers and servers. During this interaction they also write thousands of lines of log messages on those different containers and servers.
It is difficult for developers and DevOps operators to analyse such humongous end-to-end logs of business use cases spread across different servers. Sometimes it takes days and nights of tedious work to analyze logs and identify production issues, which may cause loss of revenue and, most importantly, loss of customer trust.
Log aggregation and analysis are therefore very important for any organization. The following diagram shows two microservices which write logs locally/externally; all the logs are finally aggregated and forwarded to the centralized log aggregation service:
There should be a mechanism to aggregate the end-to-end logs of a given use case sequentially, for faster analysis and debugging. There are many open-source (OSS) and enterprise tools which aggregate these logs from many sources and persist them asynchronously at an external location dedicated to central logging and analysis. That location can be on the same cluster or in the cloud.
It is important to write logs asynchronously to improve application performance, because then the actual API/application response time is not impacted.
There are multiple log aggregation solutions such as ELK, EFK, Splunk, and other enterprise APM tools.
Structured logging can be done using various tools that filter and format unstructured logs into the required format. The open-source tools Fluentd/Fluent Bit and Logstash are useful for structuring log data.
Advantages
• Stores structured and organized logs, enabling complex queries for a given filter condition.
• Keeps logs at an external, centralized location, avoiding extra storage and compute overhead on the application servers.
• Logs and analyzes centrally and asynchronously; asynchronous logging makes the application faster.
Use cases
Asynchronous logging to an external server using the file system or a messaging topic.
Implementation
Download the latest version of Elasticsearch from its download page and unzip it into any folder.
Run bin\elasticsearch.bat from a command prompt.
By default, it starts at http://localhost:9200
Download the latest Kibana distribution from its download page and unzip it into any folder.
Open config/kibana.yml in an editor and set elasticsearch.url to point at your Elasticsearch instance. In our case, since we use the local instance, just uncomment:
elasticsearch.url: "http://localhost:9200"
Run bin\kibana.bat from a command prompt.
Once started successfully, Kibana runs on the default port 5601 and the Kibana UI is available at http://localhost:5601
Download the latest Logstash distribution from its download page and unzip it into any folder.
Create a file logstash.conf with the following content in the bin directory.
Now run bin/logstash -f logstash.conf to start Logstash.
input {
  tcp {
    port => 5000          # must match the <destination> of the Logback appender below
    codec => json_lines   # the Logstash Logback encoder sends newline-delimited JSON
  }
}
output {
  elasticsearch {
    hosts => ["localhost"]
  }
}
Add the following dependencies to the pom.xml of each service whose logs should be shipped to Logstash:
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>4.9</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.3</version>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-core</artifactId>
    <version>1.2.3</version>
</dependency>
<appender name="STASH"
class="net.logstash.logback.appender.LogstashTcpSocketAppender">
<destination>localhost:5000</destination>
<encoder
class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
<providers>
<mdc /> <!-- MDC variables on the Thread will be written as JSON fields
-->
<context /> <!--Outputs entries from logback's context -->
<version /> <!-- Logstash json format version, the @version field in the
output -->
<logLevel />
<loggerName />
Observability and Monitoring
<pattern>
<pattern>
{
"serviceName": "product-brands-service"
}
</pattern>
</pattern>
<threadName />
<message />
<logstashMarkers />
<stackTrace />
</providers>
</encoder>
</appender>
<root level="info">
<appender-ref ref="STDOUT" />
<appender-ref ref="STASH" />
</root>
</configuration>
Application metrics
The application metrics pattern helps us check application metrics, performance, audit logs, and so on. A metric is a quantifiable or countable measure of a microservice application's/REST API's characteristics.
It helps check the performance of a REST API, such as how many requests the API handles per second and what its response time is.
It helps scale the application and deliver faster applications to web/mobile clients. It also checks Transactions Per Second (TPS) and other application metrics.
There are many tools available to check application metrics, such as Spring Boot's Micrometer, Prometheus, and APM tools like Wavefront, Dynatrace, Datadog, and so on.
They work with either push or pull models over REST APIs. For example, Grafana pulls metrics through its integrated Prometheus data source (Prometheus itself scrapes the applications' Micrometer endpoints) and visualizes them on the Grafana dashboard.
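As a minimal sketch of the pull model in a Spring Boot service (assuming spring-boot-starter-actuator is already on the classpath), the Micrometer Prometheus registry can be added and the scrape endpoint exposed like this:

pom.xml:
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>

application.properties:
management.endpoints.web.exposure.include=health,metrics,prometheus

Prometheus can then scrape http://localhost:8080/actuator/prometheus (port assumed), and Grafana visualizes whatever Prometheus has collected.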
Advantages
• Helps plan the scaling of hardware resources for the future.
• Identifies the performance of REST APIs.
• Helps identify Return on Investment (ROI).
• Monitors and analyzes application behavior during peak and off-peak hours.
Limitations
It needs extra hardware such as compute, memory, and storage. It can slow down application performance, because the metrics collection runs alongside the application and consumes the same memory and compute resources of the server.
Kibana Installation
Step-4: Update the Kibana network settings so that Kibana knows where Elasticsearch (ES) is. Edit the configuration file:
vim /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]
Install MetricBeat
Step-1: Update the machine's packages, as shown below.
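For example, on a yum-based host (assumed; use apt on Debian/Ubuntu instead):

sudo yum update -y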
Logstash-ElasticSearch Integration
Step-1: Create a file called apache-01.conf in the /etc/logstash/conf.d/ directory:
input {
  file {
    path => "/var/log/apache2/access.log"      # assumed Apache access log path; adjust as needed
    start_position => "beginning"
  }
  tcp { port => 5000 codec => json_lines }     # optional TCP input, e.g. for the Logback appender
}
filter {
  grok { match => { "message" => "%{COMBINEDAPACHELOG}" } }
  date { match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ] }
  geoip { source => "clientip" }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
}