Logging with EFK

With ever-increasing complexity in distributed systems and the growth of cloud-native solutions, observability has become a very important aspect of understanding how systems behave. Logs, metrics, and traces are often known as the three pillars of observability. That said, the overarching goal of observability is to bring better visibility into systems.

Logging is a powerful debugging mechanism for developers and operations teams when troubleshooting issues. Containerized applications write logs to standard output, which is redirected to local ephemeral storage by default. These logs are lost when the container is terminated and are not available for troubleshooting unless they are successfully routed to a centralized logging destination. That is why a logging stack is an instrumental component of running and managing Kubernetes clusters at scale. In this blog post, we will look at one of the popular centralized logging solutions: EFK.

Elasticsearch is a real-time, distributed object store, search and analytics engine. It excels at indexing semi-structured data such as logs. Information is serialized as JSON documents, indexed in real time, and distributed across nodes in the cluster. For full-text search, Elasticsearch uses an inverted index, which lists every unique word and the documents it appears in; this capability is built on the Apache Lucene search engine library.

Fluentd is a powerful log aggregator that supports collecting logs from multiple sources and sending them to multiple outputs. It's written in Ruby with a plug-in oriented architecture.

Fluent Bit is a lightweight alternative to Fluentd that can be installed as an agent on edge servers. It's implemented solely in C and has a restricted feature set compared to Fluentd, but it shines because of its tiny memory footprint.

Kibana is the visualization engine for Elasticsearch data, with features like time-series analysis, machine learning, graph and location analysis.

Together, Fluentd, Elasticsearch and Kibana are known as the “EFK stack”.

One of the most common patterns for deploying Fluent Bit and Fluentd is the forwarder/aggregator pattern. It involves a lightweight instance, known as the forwarder, deployed at the edge, generally where data is created, such as on Kubernetes nodes. These forwarders do minimal processing and then use the forward protocol to send data to a much heavier instance of Fluentd. This heavier instance, known as the aggregator, may perform more filtering and processing before routing to the appropriate backend(s).

The advantage of this pattern is that resource utilization on the edge devices stays low and non-disruptive.

Before you begin, ensure you have the following available to you:

First things first, create the kube-logging namespace for the deployment of the logging solution:
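The namespace can be created with a single kubectl command:

```bash
kubectl create namespace kube-logging
```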

We are good to start with our first deployment: Elasticsearch, which will receive the log stream. An Elasticsearch cluster can be deployed using the Elastic Helm charts, which deploy separate master, data and client pods. But to keep things simple, we will deploy Elasticsearch as a StatefulSet and expose it as a ClusterIP service. Depending on your environment (cloud or on-prem), you can leverage different storage solutions to persist the data.
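A minimal sketch of such a manifest is shown below. The image tag, single replica, JVM heap size and 10Gi storage request are illustrative assumptions for a dev setup; the single-node discovery setting simply avoids master election on a one-node cluster.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: kube-logging
spec:
  selector:
    app: elasticsearch
  ports:
    - name: http
      port: 9200
      targetPort: 9200
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: kube-logging
spec:
  serviceName: elasticsearch
  replicas: 1                              # single node, dev only
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.17.9   # illustrative version
          env:
            - name: discovery.type
              value: single-node           # skip master election on a one-node cluster
            - name: ES_JAVA_OPTS
              value: "-Xms512m -Xmx512m"   # modest heap for a dev environment
          ports:
            - containerPort: 9200
              name: http
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi                  # pick a size/storage class for your environment
```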

In Fluentd's configuration file, the forward plugin is used to aggregate logs from the Fluent Bit pods, and the elasticsearch plugin forwards those logs to Elasticsearch.
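A minimal fluent.conf along those lines could look like the following (typically mounted from a ConfigMap); the Elasticsearch host matches the ClusterIP service created above, and the logstash-style index prefix is an illustrative choice:

```
<source>
  @type forward                  # accept logs forwarded by the Fluent Bit agents
  port 24224
  bind 0.0.0.0
</source>

<match **>
  @type elasticsearch            # ship everything to Elasticsearch
  host elasticsearch.kube-logging.svc.cluster.local
  port 9200
  logstash_format true           # write daily logstash-YYYY.MM.DD style indices
  logstash_prefix fluentd        # illustrative index prefix
</match>
```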

We need to authorize our Fluentd application to get/list/watch pods and namespaces inside our Kubernetes cluster. Since namespaces are cluster-scoped objects, we need to create a ClusterRole to regulate access to them. This way, Fluentd gains read-only access to the pods and namespaces inside the cluster.
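A sketch of the RBAC objects, assuming a ServiceAccount named fluentd in the kube-logging namespace:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-logging
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
rules:
  - apiGroups: [""]
    resources: ["pods", "namespaces"]
    verbs: ["get", "list", "watch"]        # read-only access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluentd
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluentd
subjects:
  - kind: ServiceAccount
    name: fluentd
    namespace: kube-logging
```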

Thereafter, we can proceed to deploy Fluentd as a Deployment.
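Below is a minimal sketch of the aggregator Deployment, together with a ClusterIP Service so the Fluent Bit agents can reach it on the forward port. The image tag and the fluentd-config ConfigMap name (holding the fluent.conf above) are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: fluentd
  namespace: kube-logging
spec:
  selector:
    app: fluentd
  ports:
    - name: forward
      port: 24224
      targetPort: 24224
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fluentd
  namespace: kube-logging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      serviceAccountName: fluentd                    # from the RBAC manifest above
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1.16-debian-elasticsearch7-1  # illustrative image that bundles the elasticsearch plugin
          ports:
            - containerPort: 24224                   # forward protocol input
          volumeMounts:
            - name: config
              mountPath: /fluentd/etc/fluent.conf
              subPath: fluent.conf
      volumes:
        - name: config
          configMap:
            name: fluentd-config                     # assumed ConfigMap holding the fluent.conf above
```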

As discussed above for the logging pattern, Fluent Bit is deployed on each node using a DaemonSet, and each node-level Fluent Bit agent collects logs from /var/log/containers and forwards them to a single Fluentd instance.

Use the following YAML to create the Fluent Bit DaemonSet.
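This is a minimal sketch rather than a complete production manifest: the fluent-bit-config ConfigMap name and the image tag are assumptions, and on containerd-based nodes the cri parser should be used in place of docker.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: kube-logging
data:
  fluent-bit.conf: |
    [SERVICE]
        Parsers_File  parsers.conf

    [INPUT]
        Name          tail
        Path          /var/log/containers/*.log
        Parser        docker
        Tag           kube.*

    [OUTPUT]
        Name          forward
        Match         *
        Host          fluentd.kube-logging.svc.cluster.local
        Port          24224
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: kube-logging
spec:
  selector:
    matchLabels:
      app: fluent-bit
  template:
    metadata:
      labels:
        app: fluent-bit
    spec:
      containers:
        - name: fluent-bit
          image: fluent/fluent-bit:2.1.10            # illustrative version
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers  # Docker nodes keep the actual log files here
              readOnly: true
            - name: config
              mountPath: /fluent-bit/etc/fluent-bit.conf
              subPath: fluent-bit.conf
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: config
          configMap:
            name: fluent-bit-config
```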

Before proceeding, let's verify the status of all the pods:
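```bash
kubectl get pods -n kube-logging
```

All of the elasticsearch, fluentd and fluent-bit pods should report a Running status.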

Everything is looking good! Let's connect to Kibana by port forwarding to our localhost:
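Assuming a Kibana Service named kibana exposing the default port 5601 in the kube-logging namespace:

```bash
kubectl port-forward svc/kibana 5601:5601 -n kube-logging
```

Kibana should then be reachable at http://localhost:5601.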

To view the logs, we have to point Kibana to the Elasticsearch indices containing the data. Follow the steps below:

This index pattern is set as default automatically. If not, choose the star icon, as shown in the screenshot preview below.

Once the index pattern has been created, you should be able to see logs from any Kubernetes deployment or pod.

Congratulations, your EFK stack has been successfully deployed!

Please note that this setup is suitable for a development environment only. For production, you would need to incorporate authentication for accessing Kibana. Of course, Kibana is not accessible outside of the cluster yet, so you also need to create an Ingress rule that configures your Ingress controller to direct traffic to the pod.
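A hedged sketch of such an Ingress rule, assuming an NGINX Ingress controller, a kibana Service on port 5601, and an illustrative hostname:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana
  namespace: kube-logging
spec:
  ingressClassName: nginx              # assumes an NGINX Ingress controller
  rules:
    - host: kibana.example.com         # illustrative hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kibana           # assumed Kibana Service name
                port:
                  number: 5601
```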

In this blog, we have learned how to deploy a logging solution based on the EFK stack on our Kubernetes cluster to aggregate and visualize application logs.

Liked the blog? Don’t forget to give me a “clap”
