MicroK8s is a CNCF-certified upstream Kubernetes deployment that runs entirely on your workstation or edge device. Being a snap, it runs all Kubernetes services natively (i.e. no virtual machines) while packing the entire set of libraries and binaries needed. Installation is limited only by how fast you can download a couple of hundred megabytes, and removing MicroK8s leaves nothing behind (source: the Ubuntu MicroK8s page).

CNCF runs the Certified Kubernetes Conformance Program to ensure smooth interoperability from one Kubernetes installation to the next. Software conformance ensures that every vendor's version of Kubernetes supports the required APIs, just as the open source community versions do (source: the CNCF site).

In this guide we will get the following done together:

  • Install a Kubernetes cluster using MicroK8s
  • Enable core Kubernetes addons such as dns and dashboard
  • Deploy pods and add new nodes
  • Configure storage
  • Enable logging, Prometheus and Grafana monitoring
  • Configure a registry

All we need is a Linux distribution with support for snaps, and in this guide we are going to stick with CentOS 8. Let us begin.

Step 1: Update Server and Install Snap

To kick off on a clean and ready platform, we will update our server to get the latest patches and software, add the EPEL repository, then install the snapd package from EPEL. Run the commands below to get this accomplished.

sudo dnf install epel-release -y
sudo dnf update
sudo dnf -y install snapd

Disable SELinux

If you have SELinux in enforcing mode, turn it off or switch it to permissive mode.

sudo setenforce 0
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config
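
You can confirm the active mode afterwards:

$ getenforce
Permissive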

Once the package has finished installing, the systemd unit that manages the main snap communication socket needs to be enabled as follows:

sudo systemctl enable --now snapd.socket

Furthermore, to enable classic snap support, enter the following to create a symbolic link between /var/lib/snapd/snap and /snap, then add snap to your PATH variable:

sudo ln -s /var/lib/snapd/snap /snap
echo 'export PATH=$PATH:/var/lib/snapd/snap/bin' | sudo tee -a /etc/profile.d/mysnap.sh

After that, either log out and back in again or restart your system to ensure snap's paths are updated correctly. Snap is now installed. To test it out, we can search for a package and see if it is working as expected:

$ snap find microk8s

Name      Version  Publisher   Notes    Summary
microk8s  v1.19.0  canonical✓  classic  Lightweight Kubernetes for workstations and appliances

Step 2: Install MicroK8s on CentOS 8

Now that our server is updated and Snap is installed, we are ready to fetch MicroK8s comfortably and begin utilizing it to test and run our applications the containers way. To get MicroK8s installed, run the simple snap command below and we will be set. Such is the beauty of Snappy.

$ sudo snap install microk8s --classic
microk8s v1.19.0 from Canonical✓ installed

If you do not add the --classic switch, you will get an error, so kindly add it.

To be able to run the microk8s command as a regular user, you will have to add the user to the microk8s group, then log out and log back in. Add the user as illustrated below:

sudo usermod -aG microk8s $USER
sudo chown -f -R $USER ~/.kube

Once the permissions have been applied, log out and log back in.
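
If you would rather not log out, you can also pick up the new group membership in your current shell with newgrp:

$ newgrp microk8s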

After that, we can view the installed snaps:

$ snap list

Name      Version      Rev   Tracking       Publisher   Notes  
core      16-2.45.3.1  9804  latest/stable  canonical✓  core   
microk8s  v1.19.0      1668  latest/stable  canonical✓  classic

For the purposes of adding new nodes later, we will need to open ports on the server. This applies if you are running a firewall on your server. Add the ports as follows:

sudo firewall-cmd --permanent --add-port={10255,12379,25000,16443,10250,10257,10259,32000}/tcp
sudo firewall-cmd --reload
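
If you would like to confirm that the ports were applied, firewall-cmd can list them (the order in the output may differ):

$ sudo firewall-cmd --list-ports
10255/tcp 12379/tcp 25000/tcp 16443/tcp 10250/tcp 10257/tcp 10259/tcp 32000/tcp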

Step 3: Manage MicroK8s on CentOS 8

MicroK8s is now installed on our server and we are ready to roll. To manage MicroK8s (i.e. start, check status, stop, enable, disable, list nodes etc.), you simply do the following:

Check Status

$ microk8s status

microk8s is running
high-availability: no
  datastore master nodes: 127.0.0.1:19001
  datastore standby nodes: none

Stop MicroK8s

$ microk8s stop

stop of [microk8s.daemon-apiserver microk8s.daemon-apiserver-kicker microk8s.daemon-cluster-agent microk8s.daemon-containerd microk8s.daemon-contr…Stopped

Start MicroK8s

$ microk8s start

Started.

List MicroK8s Nodes

$ microk8s kubectl get nodes

NAME     STATUS   ROLES    AGE    VERSION
master   Ready    <none>   2m6s   v1.19.0-34+1a52fbf0753680

Disable MicroK8s

$ sudo snap disable microk8s

Enable MicroK8s

$ sudo snap enable microk8s

Awesome stuff! MicroK8s has been installed and is responding to our commands without complaints. Let us get onto the next step.

Step 4: Deploy Pods and enable dashboard

Here, we shall go ahead and deploy pods and enable the dashboard to have our work simplified with good visuals. Let us deploy a sample redis pod as follows:

$ microk8s kubectl create deployment my-redis --image=redis
deployment.apps/my-redis created

List the pods deployed

$ microk8s kubectl get pods --all-namespaces

NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE  
kube-system   calico-kube-controllers-847c8c99d-f7zd2   1/1     Running   2          3h48m
kube-system   calico-node-xxhwx                         1/1     Running   2          3h48m
default       my-redis-56dcdd56-tlfpf                   1/1     Running   0          70s

And our recent redis pod is up and purring!!

In case you would wish to log into the redis instance, proceed as illustrated below:

$ microk8s kubectl exec -it my-redis-56dcdd56-tlfpf -- bash
root@my-redis-56dcdd56-tlfpf:/data#
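
Once inside, a quick ping makes for an easy sanity check, since redis-cli ships in the official redis image:

root@my-redis-56dcdd56-tlfpf:/data# redis-cli ping
PONG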

To check the logs of a pod, make sure you include its namespace, because only the “default” namespace is checked if none is provided.

$ microk8s kubectl logs my-redis-56dcdd56-tlfpf -n default

1:C 14 Sep 2020 12:59:32.350 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 14 Sep 2020 12:59:32.350 # Redis version=6.0.8, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 14 Sep 2020 12:59:32.350 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 14 Sep 2020 12:59:32.352 * Running mode=standalone, port=6379.
1:M 14 Sep 2020 12:59:32.352 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 14 Sep 2020 12:59:32.352 # Server initialized
1:M 14 Sep 2020 12:59:32.352 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo madvise > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled (set to 'madvise' or 'never').
1:M 14 Sep 2020 12:59:32.352 * Ready to accept connections

Next, let us enable the dashboard and dns to enjoy the view of our workloads. Enable them as follows:

$ microk8s enable dns dashboard

Enabling Kubernetes Dashboard
Enabling Metrics-Server
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created

We will need a token to log in to the Dashboard. To fetch the token, issue the two commands below.

$ token=$(microk8s kubectl -n kube-system get secret | grep default-token | cut -d " " -f1)
$ microk8s kubectl -n kube-system describe secret $token

Name:         default-token-gnj26
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: default
              kubernetes.io/service-account.uid: 40394cbe-7761-4de9-b49c-6d8df82aea32

Type:  kubernetes.io/service-account-token

Data

ca.crt:     1103 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InBOVTc3NVd5ZDJHT1FYRmhWZFJ5ZlBVbVpMRWN5M1BEVDdwbE9zNU5XTDgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuLWduajI2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI0MDM5NGNiZS03NzYxLTRkZTktYjQ5Yy02ZDhkZjgyYWVhMzIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06ZGVmYXVsdCJ9.vHqwKlMGX650dTuChwYWsPYZFH7zRhRjuh-BEPtgYYPfrayKU08DSC5v3CixkrZH-wXZydOPit50H5SfCZPwY1TqDNCFgbz--0GnB7GhgwSoK4laXpr42Al7EBgbMWfUEAtnWXVkObUYzF31Sfhac2AnPIBp2kFlqJt8M03uoperJuFLl5x-fDacGrcXTQqvY2m5K1oE4zE38vtaJXdzgNfBMbtUrMneihoFczzOzwPLxzJJ4eZ7vAz1svG6JHO5PDDYbV0gded0egoLQkhu4Saavf8ILUjupJdYywA2VCqB6ERrrElMBHs5tYfckfyi4f6eR59_EZkf7-neCDWTAg

Copy the token and save it in a safe place.

Next, you need to connect to the dashboard service. The kubernetes-dashboard service only has a Cluster IP, which is reachable from within the cluster, so the easiest way to reach the dashboard from outside is to forward its port to a free one on your host with:

microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard --address 0.0.0.0 30560:443

Note that we have added --address 0.0.0.0 so that the dashboard is reachable from any IP instead of only locally (127.0.0.1) on the server. You will now be able to access the dashboard via port 30560. Make sure to enable this port on your firewall in case you have one set up in your environment.

sudo firewall-cmd --permanent --add-port=30560/tcp
sudo firewall-cmd --reload

Now open your browser and point it to the IP or FQDN of your server, that is https://[IP or FQDN]:30560, and the following login page should be displayed. You will notice it needs either a token or a kubeconfig file. We already generated a token above (Fetch token). Simply copy it and paste it into the login page.

[Screenshot: Kubernetes dashboard login page]

Paste the Token

[Screenshot: dashboard login page with the token pasted]

And you should be ushered in.

[Screenshot: Kubernetes dashboard after login]

Step 5: Add Nodes to your Cluster

So far, we have been working on a single node (server), and if you wish to scale and distribute your applications across two or more nodes (servers), then this will get you there. To add another node to your cluster, you simply need to install Snap and MicroK8s on it, as already covered in Step 1 and Step 2. Follow Step 1 and Step 2 on your new CentOS 8 server, then continue below.

If firewalld is running, allow the ports:

node-01 ~ $ export OPENSSL_CONF=/var/lib/snapd/snap/microk8s/current/etc/ssl/openssl.cnf
node-01 ~ $ sudo firewall-cmd --add-port={25000,10250,10255}/tcp --permanent
node-01 ~ $ sudo firewall-cmd --reload

On the master node (the one we installed first), execute the following command to get our token and join command:

$ microk8s add-node

From the node you wish to join to this cluster, run the following:
microk8s join 172.26.24.237:25000/dbb2fa9e23dfbdda83c8cb2ae53eaa53

As you can observe from above, we now have the command to run on our worker node to join the cluster. Without hesitating, copy the command, log into your worker node and execute it as shown below:

On the new node, execute the following command:

node-01 ~ $ microk8s join 172.26.24.237:25000/dbb2fa9e23dfbdda83c8cb2ae53eaa53

Contacting cluster at 172.26.16.92
Waiting for this node to finish joining the cluster. ..
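
Back on the master node, the new node should now show up in the node list (the node name and ages below are illustrative):

$ microk8s kubectl get nodes

NAME      STATUS   ROLES    AGE   VERSION
master    Ready    <none>   26h   v1.19.0-34+1a52fbf0753680
node-01   Ready    <none>   2m    v1.19.0-34+1a52fbf0753680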

Step 6: Configure Storage

MicroK8s comes with built-in storage that just needs to be enabled. To enable the storage, add the /lib64 directory to the LD_LIBRARY_PATH environment variable, then enable the storage as below on the master node:

$ export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/lib64"
$ microk8s enable storage

deployment.apps/hostpath-provisioner created
storageclass.storage.k8s.io/microk8s-hostpath created
serviceaccount/microk8s-hostpath created
clusterrole.rbac.authorization.k8s.io/microk8s-hostpath created
clusterrolebinding.rbac.authorization.k8s.io/microk8s-hostpath created
Storage will be available soon

To check whether storage has been enabled, we should check our pods and ensure that the hostpath-provisioner pod has started.

$ microk8s kubectl get pods --all-namespaces

NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE
kube-system   kubernetes-dashboard-7ffd448895-ht2j2        1/1     Running   2          22h
kube-system   dashboard-metrics-scraper-6c4568dc68-p7q6t   1/1     Running   2          22h
kube-system   metrics-server-8bbfb4bdb-mgddj               1/1     Running   2          22h
kube-system   coredns-86f78bb79c-58sbs                     1/1     Running   2          22h
kube-system   calico-kube-controllers-847c8c99d-j84p5      1/1     Running   2          22h
kube-system   calico-node-zv994                            1/1     Running   2          21h
kube-system   hostpath-provisioner-5c65fbdb4f-llsnl        1/1     Running   0          71s <==

Confirm the StorageClass created by running the command below:

$ microk8s kubectl get storageclasses

NAME                          PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
microk8s-hostpath (default)   microk8s.io/hostpath   Delete          Immediate           false                  8m42s

As you can see, a storage class named “microk8s-hostpath” has been created. This is important because this name will be used when creating PersistentVolumeClaims, as will be illustrated next.

Create PersistentVolumeClaim

To have our sample PersistentVolumeClaim created, simply open up your favorite editor and add the following YAML lines. Notice microk8s-hostpath on storageClassName.

$ nano sample-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elk-data-1
spec:
  storageClassName: microk8s-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

Then create the PVC by running the create command as illustrated below. You should see the created message printed out.

$ microk8s kubectl create -f sample-pvc.yaml

persistentvolumeclaim/elk-data-1 created

To confirm that our PVC has been created, simply issue the magic MicroK8S command as below. And yeah, the PVC is truly created.

$ microk8s kubectl get pvc

NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
elk-data-1   Bound    pvc-fe391d65-6909-4c76-a6c9-87b3fd5a58a1   2Gi        RWO            microk8s-hostpath   5s 

And since MicroK8s provisions Persistent Volumes dynamically, our PVC will create a Persistent Volume, as can be confirmed by the command below.

$ microk8s kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS       REASON   AGE
pvc-fe391d65-6909-4c76-a6c9-87b3fd5a58a1   2Gi        RWO            Delete           Bound    default/elk-data-1   microk8s-hostpath            5m39s
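
To see the claim in action, you can mount it into a pod. Below is a minimal sketch (the pod name, image and mount path are illustrative, not part of this guide's workloads) that references elk-data-1 through a volume:

$ nano sample-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: elk-data-1

$ microk8s kubectl create -f sample-pod.yaml
pod/pvc-test created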

Step 7: Configure Registry

A registry is basically a storage and content delivery system, holding named Docker images, available in different tagged versions that follow development progress and releases. MicroK8s has a built-in registry that also just needs to be enabled and put to use. Enabling the registry is pretty straightforward, as we have seen so far with the other services. The only thing to consider is that it picks 20Gi as the default size of the registry if you do not specify one. If you wish to specify the size, simply add the size config as shown below; if you are happy with 20Gi, then ignore the size option.

$ microk8s enable registry:size=25Gi

The registry is enabled
The size of the persistent volume is 25Gi

Confirm if the registry pod was deployed

$ microk8s kubectl get pods --all-namespaces

NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE  
kube-system          kubernetes-dashboard-7ffd448895-ht2j2        1/1     Running   2          22h  
kube-system          dashboard-metrics-scraper-6c4568dc68-p7q6t   1/1     Running   2          22h  
kube-system          metrics-server-8bbfb4bdb-mgddj               1/1     Running   2          22h  
kube-system          coredns-86f78bb79c-58sbs                     1/1     Running   2          22h  
kube-system          calico-kube-controllers-847c8c99d-j84p5      1/1     Running   2          22h  
kube-system          calico-node-zv994                            1/1     Running   2          22h  
kube-system          hostpath-provisioner-5c65fbdb4f-llsnl        1/1     Running   0          52m  
container-registry   registry-9b57d9df8-6djrn                     1/1     Running   0          3m34s <==

To test the performance of our newly created registry, we shall install Podman, pull an image and push it to the local registry. All commands are illustrated below:

$ sudo dnf -y install podman
$ podman pull redis

Confirm that the image has been pulled

$ podman images

REPOSITORY                TAG      IMAGE ID       CREATED      SIZE
docker.io/library/redis   latest   84c5f6e03bf0   5 days ago   108 MB

As you can see, our image is from docker.io repository.

Next, edit the Podman configuration file and include the local registry under [registries.insecure], since we shall not be using any certificates. Make sure you add the IP or the hostname of the server so that other nodes in your cluster can reach it. The registry listens on port 32000, which we already opened in the firewall in Step 2.

$ sudo vim /etc/containers/registries.conf
[registries.insecure]
registries = ['172.26.16.92', '127.0.0.1']

As you can see from the podman images command above, our image is from the docker.io repository, as already mentioned. Let us tag it and customize it so that it matches and gets stored in our local registry.

$ podman tag 84c5f6e03bf0 172.26.16.92:32000/custom-redis:geeksregistry
$ podman push 172.26.16.92:32000/custom-redis:geeksregistry

Getting image source signatures
Copying blob ea96cbf71ac4 done
Copying blob 2e9c060aef92 done
Copying blob 7fb1fa4d4022 done
Copying blob 07cab4339852 done
Copying blob 47d8fadc6714 done
Copying blob 45b5e221b672 done
Copying config 84c5f6e03b done
Writing manifest to image destination
Storing signatures

Run the podman images command again to confirm the changes.

$ podman images

REPOSITORY                        TAG             IMAGE ID       CREATED      SIZE  
172.26.16.92:32000/custom-redis   geeksregistry   84c5f6e03bf0   5 days ago   108 MB
docker.io/library/redis           latest          84c5f6e03bf0   5 days ago   108 MB
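
You can also ask the registry itself what it holds. The MicroK8s registry is the standard Docker registry, so its v2 HTTP API is available; the output below is what you would expect, not captured from this setup:

$ curl http://172.26.16.92:32000/v2/_catalog
{"repositories":["custom-redis"]}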

Log in to the worker node and pull an image

Now we are ready to pull an image from the local registry we have just enabled. Log into your worker node, or any node with Podman installed, and try to pull the image from the master server. If you do not have Podman installed, simply issue the command below.

node-01 ~ $ sudo dnf install -y podman

On the worker node again, or on any server you wish to pull images to, edit the Podman configuration file and include the local registry under [registries.insecure], since we shall not be using any certificates.

$ sudo vim /etc/containers/registries.conf

[registries.insecure]
registries = ['172.26.16.92', '127.0.0.1']

After everything is well set, let us now try to pull the image from the MicroK8s registry.

node-01 ~ $ podman pull 172.26.16.92:32000/custom-redis:geeksregistry

Trying to pull 172.26.16.92:32000/custom-redis:geeksregistry...
Getting image source signatures
Copying blob 08c34a4060bc done
Copying blob 50fae304733d done
Copying blob 8de9fbb8976d done
Copying blob 72c3268a4367 done
Copying blob edbd7b7fe272 done
Copying blob b6c3777aabad done
Copying config 84c5f6e03b done
Writing manifest to image destination
Storing signatures
84c5f6e03bf04e139705ceb2612ae274aad94f8dcf8cc630fbf6d91975f2e1c9

Check image details

$ podman images

REPOSITORY                        TAG             IMAGE ID       CREATED      SIZE  
172.26.16.92:32000/custom-redis   geeksregistry   84c5f6e03bf0   5 days ago   108 MB
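
Since the image now lives in the local registry, the cluster itself can run it. As a quick sketch (the deployment name is illustrative), you could deploy straight from the registry on the master node:

$ microk8s kubectl create deployment registry-redis --image=172.26.16.92:32000/custom-redis:geeksregistry
deployment.apps/registry-redis created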

And we now have a well-functioning registry! We shall configure logging and monitoring next on our MicroK8s cluster.

Step 8: Enable logging with FluentD, Elasticsearch and Kibana

MicroK8s comes with an addon called fluentd that deploys Fluentd, Elasticsearch and Kibana (EFK) automatically! This makes it pretty easy to enable logging in your cluster with the mature EFK tools available, which makes it even sweeter.

For EFK to start without errors, you will need a minimum of 8GB of memory and 4 vCPUs. In case you are limited on memory, you can edit its StatefulSet, as will be shown after we enable fluentd.

Enable fluentd as follows:

$ microk8s enable fluentd

Enabling Fluentd-Elasticsearch
Labeling nodes
node/master labeled
Addon dns is already enabled.
Adding argument --allow-privileged to nodes.
service/elasticsearch-logging created
serviceaccount/elasticsearch-logging created
clusterrole.rbac.authorization.k8s.io/elasticsearch-logging created
clusterrolebinding.rbac.authorization.k8s.io/elasticsearch-logging created
statefulset.apps/elasticsearch-logging created
configmap/fluentd-es-config-v0.2.0 created
serviceaccount/fluentd-es created
clusterrole.rbac.authorization.k8s.io/fluentd-es created
clusterrolebinding.rbac.authorization.k8s.io/fluentd-es created
daemonset.apps/fluentd-es-v3.0.2 created
deployment.apps/kibana-logging created
service/kibana-logging created
Fluentd-Elasticsearch is enabled

In case elasticsearch-logging-0 stays Pending with its status not changing, and fluentd together with kibana-logging are in a CrashLoopBackOff state, log in to the dashboard and click on the elasticsearch-logging-0 pod so that we can see its events. If you see a “0/1 nodes are available: 1 Insufficient memory.” error, then proceed to edit the StatefulSet as shown below.

$ microk8s kubectl get pods --all-namespaces

NAMESPACE            NAME                                         READY   STATUS              RESTARTS   AGE
kube-system          kubernetes-dashboard-7ffd448895-ht2j2        1/1     Running             2          24h
kube-system          dashboard-metrics-scraper-6c4568dc68-p7q6t   1/1     Running             2          24h
kube-system          metrics-server-8bbfb4bdb-mgddj               1/1     Running             2          24h
kube-system          coredns-86f78bb79c-58sbs                     1/1     Running             2          24h
kube-system          calico-node-zv994                            1/1     Running             2          24h
kube-system          hostpath-provisioner-5c65fbdb4f-llsnl        1/1     Running             0          156m 
container-registry   registry-9b57d9df8-6djrn                     1/1     Running             0          107m 
kube-system          calico-kube-controllers-847c8c99d-j84p5      1/1     Running             2          24h  
kube-system          elasticsearch-logging-0                      0/1     Pending             0          4m57s <==
kube-system          kibana-logging-7cf6dc4687-bvk46              0/1     ContainerCreating   0          4m57s
kube-system          fluentd-es-v3.0.2-lj7m8                      0/1     Running             1          4m57s
[Screenshot: editing the elasticsearch-logging StatefulSet memory request in the dashboard]
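
If you prefer the command line to the dashboard, the same change can be made by editing the StatefulSet directly; the 2Gi value below is only an illustrative example, so pick whatever your host can spare:

$ microk8s kubectl -n kube-system edit statefulset elasticsearch-logging

Then, inside the editor, lower the memory request of the elasticsearch-logging container:

        resources:
          requests:
            memory: 2Gi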

After editing, delete the elasticsearch-logging-0 pod for it to be re-created with the new configuration changes. Give MicroK8s time to pull and deploy the pod; later, everything should be running as follows. Note that if you have enough memory and CPU, you are highly unlikely to experience these errors, because the elasticsearch pod requests 3GB of memory by default.

$ microk8s kubectl get pods --all-namespaces
NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE
kube-system          metrics-server-8bbfb4bdb-mgddj               1/1     Running   3          40h
kube-system          dashboard-metrics-scraper-6c4568dc68-p7q6t   1/1     Running   3          40h
kube-system          kubernetes-dashboard-7ffd448895-ht2j2        1/1     Running   3          40h
kube-system          hostpath-provisioner-5c65fbdb4f-llsnl        1/1     Running   1          18h
kube-system          coredns-86f78bb79c-58sbs                     1/1     Running   3          40h
container-registry   registry-9b57d9df8-6djrn                     1/1     Running   1          18h
kube-system          calico-kube-controllers-847c8c99d-j84p5      1/1     Running   3          41h
kube-system          calico-node-zv994                            1/1     Running   3          40h
kube-system          elasticsearch-logging-0                      1/1     Running   0          20m <==
kube-system          fluentd-es-v3.0.2-j4hxt                      1/1     Running   10         25m <==
kube-system          kibana-logging-7cf6dc4687-mpsx2              1/1     Running   10         25m <==

Accessing Kibana

After the pods are running elegantly, we would wish to access the Kibana interface to configure our indexes and start analyzing our logs. To do that, let us get to know the kibana, fluentd and elasticsearch details. Issue the cluster-info command as follows:

$ microk8s kubectl cluster-info

Kubernetes master is running at https://127.0.0.1:16443
CoreDNS is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
Elasticsearch is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy
Kibana is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/kibana-logging/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

The kibana-logging service likewise only has a Cluster IP reachable from within the cluster, so you can reach Kibana by forwarding its port to a free one on your host as follows:

$ microk8s kubectl port-forward -n kube-system service/kibana-logging --address 0.0.0.0 8080:5601

You can check the services in the namespace where we deployed EFK by issuing the command below. You will see that the “kibana-logging” service listens on port 5601 internally, which we have forwarded to a free port on the server.

$ microk8s kubectl get services -n kube-system

NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
kube-dns                    ClusterIP   10.152.183.10    <none>        53/UDP,53/TCP,9153/TCP   42h
metrics-server              ClusterIP   10.152.183.8     <none>        443/TCP                  42h
kubernetes-dashboard        ClusterIP   10.152.183.88    <none>        443/TCP                  42h
dashboard-metrics-scraper   ClusterIP   10.152.183.239   <none>        8000/TCP                 42h
elasticsearch-logging       ClusterIP   10.152.183.64    <none>        9200/TCP                 48m
kibana-logging              ClusterIP   10.152.183.44    <none>        5601/TCP                 48m <==

Our Kibana is now listening on port 8080. Allow this port on the firewall if you have one running on your server, as illustrated below:

sudo firewall-cmd --add-port=8080/tcp --permanent
sudo firewall-cmd --reload

After that is successful, open your browser and access Kibana's interface by pointing it to the following URL: http://[IP or FQDN]:8080. You should see an interface as shown below.

[Screenshot: Kibana welcome page]

Click on “Explore on my own”.

Create an “Index Pattern”. Logstash should show up by default thanks to fluentd. Choose it and create the pattern as follows:

[Screenshot: Kibana index pattern creation]

Choose @timestamp as the time filter, then click on “Create index pattern”.

[Screenshot: selecting @timestamp as the time filter]
[Screenshot: the created index pattern]

After the index pattern is created, click on the “Discover” icon and you should see hits as shown below.

[Screenshot: Kibana Discover view showing hits]

Step 9: Enable Prometheus

MicroK8s comes with an inbuilt Prometheus addon that simply needs to be enabled. The addon deploys Prometheus together with the amazing Grafana. Nothing can be better than this! Enable it as the others have been enabled, as follows:

$ microk8s enable prometheus

Then check that they are being deployed:

$ microk8s kubectl get pods --all-namespaces

NAMESPACE            NAME                                         READY   STATUS              RESTARTS   AGE
kube-system          metrics-server-8bbfb4bdb-mgddj               1/1     Running             4          42h
kube-system          dashboard-metrics-scraper-6c4568dc68-p7q6t   1/1     Running             4          42h
kube-system          kubernetes-dashboard-7ffd448895-ht2j2        1/1     Running             4          42h
kube-system          hostpath-provisioner-5c65fbdb4f-llsnl        1/1     Running             2          20h
container-registry   registry-9b57d9df8-6djrn                     1/1     Running             2          19h
kube-system          elasticsearch-logging-0                      1/1     Running             0          39m
kube-system          kibana-logging-7cf6dc4687-6b48m              1/1     Running             0          38m
kube-system          calico-node-zv994                            1/1     Running             4          42h
kube-system          calico-kube-controllers-847c8c99d-j84p5      1/1     Running             4          42h
kube-system          fluentd-es-v3.0.2-pkcjh                      1/1     Running             0          38m  
kube-system          coredns-86f78bb79c-58sbs                     1/1     Running             4          42h  
monitoring           kube-state-metrics-66b65b78bc-txzpm          0/3     ContainerCreating   0          2m45s <==
monitoring           node-exporter-dq4hv                          0/2     ContainerCreating   0          2m45s <==
monitoring           prometheus-adapter-557648f58c-bgtkw          0/1     ContainerCreating   0          2m44s <==
monitoring           prometheus-operator-5b7946f4d6-bdgqs         0/2     ContainerCreating   0          2m51s <==
monitoring           grafana-7c9bc466d8-g4vss                     1/1     Running             0          2m45s <==

Accessing Prometheus web interface

Similar to the manner in which we accessed the previous web interfaces, we are going to port-forward the internal service port to a free one on the server. As we confirmed in the previous command, Prometheus gets deployed in a namespace called “monitoring”. We can get all services in this namespace as shown below:

$ microk8s kubectl get services -n monitoring

NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
prometheus-operator     ClusterIP   None            <none>        8443/TCP                     7m41s
alertmanager-main       ClusterIP   10.152.183.34   <none>        9093/TCP                     7m36s
grafana                 ClusterIP   10.152.183.35   <none>        3000/TCP                     7m35s
kube-state-metrics      ClusterIP   None            <none>        8443/TCP,9443/TCP            7m35s
node-exporter           ClusterIP   None            <none>        9100/TCP                     7m35s
prometheus-adapter      ClusterIP   10.152.183.22   <none>        443/TCP                      7m34s
prometheus-k8s          ClusterIP   10.152.183.27   <none>        9090/TCP                     7m33s
alertmanager-operated   ClusterIP   None            <none>        9093/TCP,9094/TCP,9094/UDP   4m
prometheus-operated     ClusterIP   None            <none>        9090/TCP                     3m59s

Let us port forward Prometheus and access it from the browser.

$ microk8s kubectl port-forward -n monitoring service/prometheus-k8s --address 0.0.0.0 9090:9090
Forwarding from 0.0.0.0:9090 -> 9090

Then point your browser to the IP address or FQDN of your server on port 9090, that is http://[IP or FQDN]:9090. As usual, if you have a firewall running on your CentOS 8 box, kindly allow it. Since we shall be port-forwarding Grafana on port 3000 as well, add that port too.

sudo firewall-cmd --add-port={9090,3000}/tcp --permanent
sudo firewall-cmd --reload
[Screenshot: Prometheus web interface]

Accessing Grafana web interface

In a similar manner, port-forward Grafana on port 3000 as we have done previously.

$ microk8s kubectl port-forward -n monitoring service/grafana --address 0.0.0.0 3000:3000

Forwarding from 0.0.0.0:3000 -> 3000

Then point your browser to the IP address or FQDN of your server on port 3000, that is http://[IP or FQDN]:3000, and you should be greeted with the beautiful Grafana dashboard as illustrated below. The default username and password are “admin” and “admin”; you will be prompted to change them immediately. Key in your new password, submit, and you will be allowed in.

[Screenshot: Grafana login page]

Enter new credentials

[Screenshot: Grafana change-password prompt]

And you should be allowed in.

[Screenshot: Grafana home dashboard]

For more information about MicroK8s, visit the official MicroK8s website.

Conclusion

The setup voyage has been long and had some challenges along the way, but we successfully deployed MicroK8s together with logging, monitoring, the dashboard and everything else we encountered. We hope the guide was helpful, and in case of any errors, kindly let us know. We continue to feel honored by your relentless support and we highly appreciate it. Cheers to all the people out there who work tirelessly to create the tools used by developers and engineers all over the world. For other guides similar to this one, the list shared below will be helpful to look at.

Install Lens – Best Kubernetes Dashboard & IDE

Deploy Lightweight Kubernetes Cluster in 5 minutes with K3s

Install Kubernetes Cluster on Ubuntu with kubeadm

Setup Kubernetes Cluster on CentOS 7 with kubeadm