Using Loki To Aggregate Logs in an OpenShift Cluster
The Loki roadmap for OpenShift outlines the plans for this logging and
observability solution, which replaces the old Elasticsearch-based log
store. As a user-friendly and scalable platform, Loki aims to enhance
the logging experience on OpenShift by providing efficient log
aggregation, storage, and querying.
Go to the MinIO web console and create a bucket and a user for Loki.
Remember to set the region in the MinIO settings.
Then create a secret holding the object storage configuration that Loki will use.
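A minimal sketch of such a secret is shown below, assuming the bucket is called loki-bucket and MinIO is reachable at an in-cluster service endpoint; the user, password, bucket name, endpoint, and region are placeholders you should replace with your own values:

apiVersion: v1
kind: Secret
metadata:
  name: loki-minio-secret
  namespace: openshift-logging
stringData:
  # Credentials of the MinIO user created above (placeholders)
  access_key_id: loki-user
  access_key_secret: loki-password
  # Bucket, endpoint, and region as configured in MinIO (placeholders)
  bucketnames: loki-bucket
  endpoint: http://minio.minio.svc:9000
  region: us-east-1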
Here we assume that the Loki Operator and the OpenShift Logging
Operator have already been installed on your cluster. Next, create the
LokiStack using the Operator UI.
We choose the extra-small size for demo purposes. Refer to OpenShift's
official documentation for size selection in your environment.
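The YAML produced by the Operator UI would look roughly like the sketch below. The name loki matches what the ClusterLogging CR references later; the secret name and storage class are assumptions to adapt to your cluster:

apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: loki
  namespace: openshift-logging
spec:
  # 1x.extra-small is the demo-sized deployment
  size: 1x.extra-small
  storage:
    secret:
      # Object storage secret created in the previous step (assumed name)
      name: loki-minio-secret
      type: s3
  # Replace with a storage class available in your cluster
  storageClassName: gp3-csi
  tenants:
    mode: openshift-logging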
Then create the ClusterLogging and ClusterLogForwarder CRs referencing the LokiStack.
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  logStore:
    type: lokistack
    lokistack:
      name: loki
  collection:
    logs:
      type: vector
---
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  pipelines:
  - name: all-to-default
    inputRefs:
    - infrastructure
    - application
    - audit
    outputRefs:
    - default
After a short while you will be asked to refresh your console. You
will then find a new 'Logs' menu entry under the Observe section.
If you cannot find the menu entry, you probably have to enable the
console plugin in the Cluster Logging Operator UI.
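Alternatively, the plugin can be enabled by adding it to the console operator configuration. A sketch of the relevant part of the Console CR is shown below, assuming the plugin name logging-view-plugin; keep any plugins already listed in your cluster:

apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  plugins:
    # Keep any existing plugins and append the logging view plugin
    - logging-view-plugin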
Now you can have fun with the new Loki-based log store.