Tech Blog: Connecting Elasticsearch logs to Grafana and Kibana

What is the point of collecting logs and metrics if you don’t use them? In this blog post we build upon our previous post and connect the Fluent Bit log collector to Elasticsearch, along with a basic setup and comparison of Kibana and Grafana, two tools often used for visualizing logs.

This blog post is a part of the Velebit AI Tech Blog series where we discuss good practices for a scalable and robust production deployment.



Before we start

This post builds heavily on the previous blog post in this series, so you can always go back and check it out if you are feeling a little lost with the Fluent Bit side of the configuration. We are going to modify all configuration files except the log generator service defined in ./service.yml. Just like last time, you can find the configuration files on GitHub. Now you should be ready for the rest of the blog.

Store logs to Elasticsearch

To visualize logs, we first need to store their historical values somewhere. We opted for Elasticsearch as we have experience with it and it has served us well for years. This means that along with the Fluent Bit service that collects logs, we now also need an Elasticsearch instance. Let’s start by creating logging.yml with both of these configurations:

./logging.yml
version: '2.3'

services:
    elasticsearch:
        image: elasticsearch:7.8.0
        ports:
            - "9200:9200"
        environment:
            ES_JAVA_OPTS: "-Xms1g -Xmx1g"
            discovery.type: "single-node"
        volumes:
            - elasticsearch-data:/usr/share/elasticsearch/data

    fluentbit:
        image: fluent/fluent-bit:1.4.6
        command: "/fluent-bit/bin/fluent-bit -c /configs/config.conf"
        ports:
            - "24224:24224/tcp"
            - "24224:24224/udp"
        volumes:
            - ./config.conf:/configs/config.conf
            - ./parsers.conf:/configs/parsers.conf

volumes:
    elasticsearch-data:

Note. You can easily add Kibana or Grafana to the above setup by adding

    kibana:
        image: kibana:7.8.0
        ports:
            - "5601:5601"

Kibana will automatically connect to your Elasticsearch instance, but in order to view logs you will need to create a new index pattern with the following settings:

Setting                  Value
Index pattern            logs-*
Time Filter field name   @timestamp
Table 1. Kibana Elasticsearch index pattern settings
Image 1. Kibana interface
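
If you prefer to script this step instead of clicking through the UI, one option is Kibana's saved objects API. The command below is only a sketch assuming the default, unauthenticated single-node setup from logging.yml; the index pattern id logs-star is an arbitrary name chosen for this example.

# create the logs-* index pattern via Kibana's saved objects API
curl -X POST "http://localhost:5601/api/saved_objects/index-pattern/logs-star" \
    -H "kbn-xsrf: true" \
    -H "Content-Type: application/json" \
    -d '{"attributes": {"title": "logs-*", "timeFieldName": "@timestamp"}}'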

or

    grafana:
        image: grafana/grafana:6.5.2
        ports:
            - "3000:3000"
        volumes:
            - grafana-data:/var/lib/grafana

somewhere in the services section, along with updating the volumes section to

volumes:
    elasticsearch-data:
    grafana-data:

in order to preserve Grafana configuration and user data. Note that Kibana saves its configuration in the Elasticsearch instance itself, which is why this step is not needed for Kibana.

To connect Grafana to Elasticsearch, you simply need to add a new data source and fill in:

Setting           Value
URL               http://elasticsearch:9200
Pattern           Daily
Index name        [logs-]YYYY.MM.DD
Time field name   @timestamp
Version           7.0+
Table 2. Grafana Elasticsearch Data Source settings

After this, press the Save & Test button and you are ready to go!

Image 2. Grafana interface
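
The same data source can also be created programmatically through Grafana's HTTP API. The snippet below is a sketch under a few assumptions: the default admin:admin credentials, the 3000 port mapping from above, and the data source name "Elasticsearch logs", which is arbitrary. The numeric esVersion value 70 corresponds to the 7.0+ option from the table.

# create the Elasticsearch data source via Grafana's HTTP API
curl -X POST "http://admin:admin@localhost:3000/api/datasources" \
    -H "Content-Type: application/json" \
    -d '{
        "name": "Elasticsearch logs",
        "type": "elasticsearch",
        "access": "proxy",
        "url": "http://elasticsearch:9200",
        "database": "[logs-]YYYY.MM.DD",
        "jsonData": {"interval": "Daily", "timeField": "@timestamp", "esVersion": 70}
    }'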

Which one to choose?

Kibana is a free and open user interface that lets you visualize your Elasticsearch data and navigate the Elastic Stack. It integrates seamlessly with Elasticsearch and provides an excellent experience for exploring both logs and various metrics extracted from logs. Elastic also offers the X-Pack license, which provides even more capabilities. One example is alerting, which is not included in the Elastic Stack without X-Pack.

Grafana is in general more oriented towards metrics than logs. Even though you can inspect logs and stack traces, the experience you get from Kibana is simply superior. The most important advantage Grafana has over Kibana is that even the free version of Grafana allows you to query, visualize, alert on, and understand your metrics no matter where they are stored. This opens up a lot of possibilities. For example, we used Grafana together with InfluxDB to build a historical overview of official COVID-19 data for Croatia. You can see the dashboard in action at https://covid19hr.velebit.ai.

Still can’t choose? Use both of them. That is exactly what we at Velebit AI do! We prefer Grafana for metrics and alerting, while we reach for Kibana when something goes wrong and we need to delve into the application logs.

Configure Fluent Bit to work with Elasticsearch

Finally, we must configure Fluent Bit to send the logs it collects to Elasticsearch. We can do this by simply adding another OUTPUT section at the end of the config.conf file from the previous blog post. With Logstash_Format turned on, Fluent Bit writes to daily indices named after Logstash_Prefix, in our case logs-YYYY.MM.DD, which is exactly the index pattern we configured in Kibana and Grafana above.

# send logs to elasticsearch
[OUTPUT]
    Name es
    Match *
    Host elasticsearch
    Port 9200
    Logstash_Format On
    Logstash_Prefix logs
    Generate_ID On

This would make the final file look like this:

./config.conf
[SERVICE]
    # import parsers defined in a parsers file
    Parsers_File /configs/parsers.conf

# collect docker logs using the fluentd logging driver
[INPUT]
    Name forward
    Listen 0.0.0.0
    Port 24224

# make SERVICE_NAME key lowercase
[FILTER]
    Name modify
    Match *
    Rename SERVICE_NAME service_name

# try parsing log as json and lift its keys to the first-level
[FILTER]
    Name parser
    Match *
    Parser json
    Key_Name log
    Reserve_Data On

# send logs to stdout
[OUTPUT]
    Name stdout
    Match *

# send logs to elasticsearch
[OUTPUT]
    Name es
    Match *
    Host elasticsearch
    Port 9200
    Logstash_Format On
    Logstash_Prefix logs
    Generate_ID On

After modifying all the necessary files, we can start the defined services using

docker-compose -f logging.yml up -d
docker-compose -f service.yml up -d
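
Elasticsearch can take a little while to become available after startup. Before looking for logs you can run a quick sanity check against the cluster health endpoint, assuming the default 9200 port mapping from logging.yml.

# single-node clusters typically report "yellow" status, which is fine here
curl "http://localhost:9200/_cluster/health?pretty"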

An easy way to check whether the whole setup is working is to make an HTTP request directly to Elasticsearch, for example

curl http://localhost:9200/logs-$(date +'%Y.%m.%d')/_search\?pretty=true

which should return a list of entries in the specified index. This means you are ready to go and explore your logs in Grafana and Kibana!
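
If the search comes back empty, it can also help to list which daily indices the Fluent Bit es output has created so far, again assuming Elasticsearch is exposed on localhost:9200.

# list all indices matching the logs- prefix
curl "http://localhost:9200/_cat/indices/logs-*?v"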

In this blog post we used logs created by the dummy log-generating service defined in the ./service.yml file. In production environments you will have to collect logs from various sources, and you can do that easily as long as Fluent Bit can parse their logs. We decided to use the JSON log format for everything as it simplifies a lot of things. If you want to learn more about this, you can check out our previous blog posts where we discuss how we solved the problems with formatting uwsgi and Python logs as JSON.

Finally, complete cleanup can be executed using

docker-compose -f service.yml down --volumes
docker-compose -f logging.yml down

