Whenever we set up a new Kubernetes cluster, there are specific things we have to take care of. We have to make sure that the node pool has an appropriate size, that the application runs in the correct namespace, and that we are properly observing the cluster. This can be a chore for inexperienced users. Kubernetes exposes numerous objects, such as pods and namespaces, that may be difficult to track.

This article covers the essentials of monitoring CPU and memory usage. There is a lot to discuss about monitoring, but first we have to make sure that the metrics are collected and checked. There are various techniques to monitor the resources and several ways to approach them. Thus, it is important to ensure that the application uses only the proposed amount of resources to avoid running out of capacity.
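One common way to keep an application within its proposed resources is to declare requests and limits on its workload. As a sketch, this can be done with kubectl; the deployment name `web-app` and the figures below are hypothetical:

```shell
# Reserve a quarter core and 128 MiB for scheduling, and cap the
# (hypothetical) deployment at half a core and 256 MiB at runtime:
kubectl set resources deployment web-app \
  --requests=cpu=250m,memory=128Mi \
  --limits=cpu=500m,memory=256Mi
```

With limits in place, a pod that exceeds its memory ceiling is restarted instead of starving its neighbors, which makes the metrics discussed below much easier to interpret.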

Although it is simple to set up auto-scaling in Kubernetes, we still have to observe the metrics to ensure that the cluster always has sufficient nodes to handle the workload. Another reason to monitor the CPU and memory usage indicators is to be aware of abrupt changes in performance. A sudden surge in memory usage may indicate a memory leak, and a sudden surge in CPU usage can indicate an infinite loop. These metrics are genuinely useful, and that is why we observe them. We have run the equivalent commands on a Linux system, such as the top command. Once we understand these commands, we can use them efficiently in Kubernetes.
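For reference, the Linux top command mentioned above can produce a one-shot, non-interactive snapshot of the same CPU and memory figures, which is the closest plain-Linux analogue of kubectl top:

```shell
# Batch mode (-b), single iteration (-n 1): print system load and
# per-process CPU/memory usage once, then exit:
top -b -n 1 | head -n 12
```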

To run the commands in Kubernetes, we install Ubuntu 20.04; we use the Linux operating system to execute the kubectl commands. We then install Minikube to run Kubernetes in Linux. Minikube offers a very smooth experience, as it provides an efficient way to test commands and applications.

Start Minikube:

After installing Minikube, we start Ubuntu 20.04. Now, we have to open a terminal to run the commands. For this purpose, we press "Ctrl + Alt + T" on the keyboard.

In the terminal, we run the command "minikube start" and wait a while until the cluster starts successfully.
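The start command, and a follow-up check that the cluster is actually up (the exact output depends on your installation):

```shell
# Start a local single-node Kubernetes cluster:
minikube start

# Confirm the cluster components are running before using kubectl:
minikube status
```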

Install the Metrics API:

The kubectl top command cannot collect the metrics by itself. It requests them from the Metrics API and displays them. Clusters provisioned through cloud services, such as one delivered by Docker Desktop, often have the Metrics API installed already. We can verify that the Metrics API is available by querying the API services registered with the cluster.

If the query returns results, the API is installed and ready to use. If not, we need to install the metrics-server first.
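A sketch of the check and the install step. The addon route assumes Minikube; on other clusters, the official metrics-server manifest is the usual approach:

```shell
# Check whether the Metrics API is registered with the API server:
kubectl get apiservices | grep metrics.k8s.io

# On Minikube, the simplest install is the bundled addon:
minikube addons enable metrics-server

# On other clusters, apply the official metrics-server manifest:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
```

Note that the metrics-server needs a minute or so after installation before `kubectl top` stops reporting that metrics are unavailable.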

Using the Kubectl Top:

Once the Metrics API is installed, we can use the kubectl top command. Executing "kubectl top pod --namespace default" displays the metrics for the pods in the default namespace. Whenever we need to obtain the metrics from another namespace, we have to name that namespace explicitly.
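The commands used here; `kube-system` below stands in for whichever namespace you want to query:

```shell
# Pod metrics in the default namespace:
kubectl top pod --namespace default

# The same query against a specific namespace:
kubectl top pod --namespace kube-system

# Sort so the heaviest consumers appear first:
kubectl top pod --namespace default --sort-by=memory
```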

We observe that the metrics on offer are not numerous: only CPU and memory usage can be obtained for each pod. That may not seem like much in the context of Kubernetes, but it is enough to troubleshoot a wide variety of problems.

If resource usage unexpectedly spikes in the cluster, we can swiftly find the pod causing the problem. This is very useful when we run multiple pods, because the kubectl top command can also display metrics from the individual containers.
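To illustrate spotting the spiking pod, here is a small sketch that sorts sample `kubectl top pod` output by memory; the pod names and figures are invented for the demonstration:

```shell
# Sample output in the shape kubectl top pod prints (invented figures):
sample='NAME            CPU(cores)   MEMORY(bytes)
web-app-1       12m          45Mi
log-collector   8m           512Mi
web-app-2       15m          47Mi'

# Skip the header, strip the Mi suffix, sort the memory column in
# descending numeric order, and print the name of the heaviest pod:
printf '%s\n' "$sample" | tail -n +2 | sed 's/Mi$//' \
  | sort -k3 -n -r | head -n 1 | awk '{print $1}'
# → log-collector
```

In practice, `kubectl top pod --sort-by=memory` does the sorting for you; the pipeline above is only to show how little output there is to reason about.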

If we need to obtain metrics from the web-app namespace, we pass that namespace to the command instead.
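The per-container view used in this example; `web-app` is the namespace name from the article:

```shell
# Pod metrics in the web-app namespace:
kubectl top pod --namespace web-app

# Break the numbers down per container inside each pod, which is what
# lets us separate the log collector from the application itself:
kubectl top pod --namespace web-app --containers
```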

In this instance, we have a web app that uses a sidecar container to collect logs. The output of this example makes it clear that the log collector, not the web application, is causing the resource usage problem. This is something many people find confusing at first, but now we know exactly where to begin troubleshooting.

We can also use the command to check resources other than pods. Here, we use the "kubectl top node" command to observe the metrics for each node in the cluster.
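The node-level query; on Minikube there is only a single node, so a single row is expected:

```shell
# CPU and memory usage for every node, with percentages of capacity:
kubectl top node

# Sort nodes by CPU to surface the busiest one first:
kubectl top node --sort-by=cpu
```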

Conclusion:

In this article, we developed a detailed understanding of Kubernetes metrics, how to use them for resource monitoring, and why we need to keep an eye on them. CPU and memory usage are simple indicators that we can monitor. Doing so may not seem necessary on a highly scalable platform such as Kubernetes, but it can be essential to go through the fundamentals and make use of the tools provided. We have used the kubectl top command to monitor Kubernetes. We hope you found this article helpful. Check out Linux Hint for more tips and information.