After setting up your Kubernetes cluster in the cloud or in an on-premise environment, you will need an easy and flexible way to check what is going on inside it. One of the best ways to do that is to investigate logs whenever you need to fix an issue or find out what happened at a particular time. While it is possible to log into your cluster and check the logs of your pods or hosts, it quickly becomes tedious to check each pod’s logs individually, especially if you have many pods in your cluster. To make it easier to check the status of your cluster on one platform, we are going to deploy Elasticsearch and Kibana on an external server, then ship logs from the cluster to Elasticsearch using Elastic’s Beats (Filebeat, Metricbeat, etc.). If you already have an ELK stack running, even better.

The diagram below shows the architecture we shall accomplish in this guide: a three-node Kubernetes cluster, plus one Elasticsearch and Kibana server that receives logs from the cluster via the Filebeat and Metricbeat log collectors.

[Diagram: a three-node Kubernetes cluster shipping logs to an external Elasticsearch and Kibana server via Filebeat and Metricbeat]

First, we shall need an Elasticsearch server with Kibana installed as well. You can leave out Logstash, but in case you need further filtering of your logs, you can have it installed too. Follow the guides below to install Elasticsearch and Kibana:

How To Install ElasticSearch 7.x on CentOS 7

How To Install Elasticsearch 7 on Debian

How To Install Elasticsearch 7,6,5 on Ubuntu

On your Elasticsearch host, make sure it can be reached from outside. Edit the following parts of your config:

$ sudo vim /etc/elasticsearch/elasticsearch.yml

# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
#
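
Note that binding Elasticsearch to a non-loopback address such as 0.0.0.0 switches it into production mode, where bootstrap checks enforce discovery settings. If this is a standalone single-node server, as assumed in this guide, the simplest way to satisfy them is to declare it as such in the same file:

discovery.type: single-node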

Then allow the port on your firewall:

sudo firewall-cmd --permanent --add-port=9200/tcp
sudo firewall-cmd --reload
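
At this point you can confirm reachability from one of your Kubernetes nodes (192.168.10.123 stands in for your Elasticsearch server’s IP throughout this guide):

curl http://192.168.10.123:9200

A JSON response containing the cluster name and version number confirms that Elasticsearch is listening.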

Secondly, you must have a Kubernetes cluster, since that is where we shall be fetching logs from. We have a couple of guides that will help you set one up in case you need to bootstrap one quickly. They are shared below:

Install Kubernetes Cluster on Ubuntu with kubeadm

Install Kubernetes Cluster on CentOS 7 with kubeadm

Easily Setup Kubernetes Cluster on AWS with EKS

Deploy Kubernetes Cluster with Ansible & Kubespray

Install Production Kubernetes Cluster with Rancher RKE

Once everything is ready, we can proceed to install the Filebeat and Metricbeat pods in our cluster to start collecting and shipping logs to ELK. Make sure you are able to run the kubectl command against your Kubernetes cluster.
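
A quick sanity check confirms that kubectl can reach the cluster:

kubectl get nodes

All nodes should be in the Ready state before you proceed.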

Step 1: Download Sample Filebeat and Metricbeat files

Log into your Kubernetes master node and run the commands below to fetch the Filebeat and Metricbeat YAML files provided by Elastic.

cd ~
curl -L -O https://raw.githubusercontent.com/elastic/beats/7.9/deploy/kubernetes/filebeat-kubernetes.yaml
curl -L -O https://raw.githubusercontent.com/elastic/beats/7.9/deploy/kubernetes/metricbeat-kubernetes.yaml

Step 2: Edit the files to suit your environment

In both files, we only need to change a few things. Under the ConfigMap, you will find the elasticsearch output as shown below. Change the IP (192.168.10.123) and port (9200) to those of your Elasticsearch server.

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:192.168.10.123}:${ELASTICSEARCH_PORT:9200}']
      #username: ${ELASTICSEARCH_USERNAME}
      #password: ${ELASTICSEARCH_PASSWORD}
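
If you prefer to script the change, the stock manifests default the host to elasticsearch, so a substitution like the one below (a sketch, assuming unmodified files) works on both files; verify the result with grep afterwards:

sed -i 's/value: elasticsearch$/value: "192.168.10.123"/' filebeat-kubernetes.yaml metricbeat-kubernetes.yaml
grep -A1 'ELASTICSEARCH_HOST' filebeat-kubernetes.yaml metricbeat-kubernetes.yaml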

Under the DaemonSet within the same file, you will find the following configuration. Note that we are only showing the areas to change. Edit the IP (192.168.10.123) and port (9200) to match those of your Elasticsearch server as well. If you have configured a username and password for your Elasticsearch, you are free to add them in the commented sections shown.

        env:
        - name: ELASTICSEARCH_HOST
          value: "192.168.10.123"
        - name: ELASTICSEARCH_PORT
          value: "9200"
        #- name: ELASTICSEARCH_USERNAME
        #  value: elastic
        #- name: ELASTICSEARCH_PASSWORD
        #  value: changeme
        - name: ELASTIC_CLOUD_ID

Note that if you wish to deploy your Filebeat and Metricbeat resources in another namespace, simply edit all instances of “kube-system” to the namespace of your choice.
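
For example, the following (using a hypothetical logging namespace) would relocate everything in one pass; remember to create the namespace first:

kubectl create namespace logging
sed -i 's/namespace: kube-system/namespace: logging/g' filebeat-kubernetes.yaml metricbeat-kubernetes.yaml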

You can change the version of Filebeat and Metricbeat to be deployed by editing the image (docker.elastic.co/beats/metricbeat:7.9.0) wherever it appears in the pod specs, as seen in the snippet below. I am going to use version 7.9.0.

###For Metricbeat####
    spec:
      serviceAccountName: metricbeat
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: metricbeat
        image: docker.elastic.co/beats/metricbeat:7.9.0

Do the same for the Filebeat YAML file if you wish to change its version as well.

###For Filebeat####
    spec:
      serviceAccountName: filebeat
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.9.0
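
If you later decide to run a different release, a substitution per file keeps the tag consistent everywhere it appears; 7.9.2 below is only an illustrative version:

sed -i 's#beats/filebeat:7.9.0#beats/filebeat:7.9.2#' filebeat-kubernetes.yaml
sed -i 's#beats/metricbeat:7.9.0#beats/metricbeat:7.9.2#' metricbeat-kubernetes.yaml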

Step 3: Important thing to note

If you wish to deploy the Beats on the master node as well, you will have to add tolerations. An example for Metricbeat is shown below. This is not the whole DaemonSet config, just the part we are interested in; you can leave the other parts intact. Add the toleration under spec as shown in the configuration below. The same can apply to the Filebeat configuration, depending on your needs.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: metricbeat
  namespace: kube-system
  labels:
    k8s-app: metricbeat
spec:
  selector:
    matchLabels:
      k8s-app: metricbeat
  template:
    metadata:
      labels:
        k8s-app: metricbeat
    spec:
###PART TO EDIT###
      # This toleration is to have the daemonset runnable on master nodes
      # Remove it if your masters can't run pods
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
####END OF EDIT###
      serviceAccountName: metricbeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: metricbeat
        image: docker.elastic.co/beats/metricbeat:7.9.0
        args: [
          "-c", "/etc/metricbeat.yml",
          "-e",
        ]

Step 4: Deploying to Kubernetes

After we have done all of our edits and Elasticsearch is reachable from the Kubernetes cluster, it is time to deploy our Beats. Log into your master node and run the commands below:

kubectl apply -f metricbeat-kubernetes.yaml
kubectl apply -f filebeat-kubernetes.yaml
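
The DaemonSets should roll out within a few seconds; you can watch their progress with the commands below (adjust the namespace if you changed it earlier):

kubectl -n kube-system rollout status daemonset/filebeat
kubectl -n kube-system rollout status daemonset/metricbeat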

Confirm that the pods are deployed and running successfully after some time.

$ kubectl get pods -n kube-system

NAME                                             READY   STATUS    RESTARTS   AGE
calico-kube-controllers-c9784d67d-k85hf          1/1     Running   5          11d
calico-node-brjnk                                1/1     Running   7          10d
calico-node-nx869                                1/1     Running   1          10d
calico-node-whlzf                                1/1     Running   6          11d
coredns-f9fd979d6-6vztd                          1/1     Running   5          11d
coredns-f9fd979d6-8gz4l                          1/1     Running   5          11d
etcd-kmaster.diab.mfs.co.ke                      1/1     Running   5          11d
filebeat-hlzhc                                   1/1     Running   7          7d23h <==
filebeat-mcs67                                   1/1     Running   1          7d23h <==
kube-apiserver-kmaster.diab.mfs.co.ke            1/1     Running   5          11d
kube-controller-manager-kmaster.diab.mfs.co.ke   1/1     Running   5          11d
kube-proxy-nlrbv                                 1/1     Running   5          11d
kube-proxy-zdcbg                                 1/1     Running   1          10d
kube-proxy-zvf6c                                 1/1     Running   7          10d
kube-scheduler-kmaster.diab.mfs.co.ke            1/1     Running   5          11d
metricbeat-5fw98                                 1/1     Running   7          8d  <==
metricbeat-5zw9b                                 1/1     Running   0          8d  <==
metricbeat-jbppx                                 1/1     Running   1          8d  <==
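
If any of the Beats pods is stuck or crash-looping, its logs usually point at the cause, most often a connectivity or authentication error against Elasticsearch:

kubectl -n kube-system logs -l k8s-app=filebeat --tail=20
kubectl -n kube-system logs -l k8s-app=metricbeat --tail=20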

Step 5: Create an Index Pattern on Kibana

Once our pods begin running, they will immediately load their index templates into Elasticsearch and start shipping logs, which creates the indices. Log into your Kibana, click “Stack Management” > “Index Management”, and you should be able to see your indices.
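
You can also confirm the indices exist straight from the command line:

curl 'http://192.168.10.123:9200/_cat/indices/filebeat-*,metricbeat-*?v'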


Click on “Index Management”


And there are our indices.


To create an index pattern, click on “Index Patterns”, then hit “Create Index Pattern”.


On the next page, type in an index pattern name that matches either filebeat or metricbeat (for example filebeat-*) and the matching indices should show up.


Create the pattern, then click “Next Step”.


Choose “@timestamp” from the drop-down, then click “Create Index Pattern”.


Step 6: Discover your Data

After the index pattern has been created, click on “Discover”.


Then choose the index pattern we created.


Conclusion

Now we have lightweight Beats fetching logs and metrics from your Kubernetes cluster and shipping them to an external Elasticsearch for indexing and flexible searching.