In our previous guide we discussed the installation of a Kubernetes cluster on AWS using the Amazon EKS service. It is the quickest and easiest way to get a running Kubernetes cluster in AWS within minutes. The setup process is largely automated with eksctl, which uses CloudFormation stacks behind the scenes to bootstrap a working cluster powered by Amazon Linux worker machines.

In this tutorial I’ll walk you through the steps of installing and configuring the Kubernetes Metrics Server in an EKS cluster. Metrics Server is a scalable, efficient source of container resource metrics for Kubernetes built-in autoscaling pipelines. It collects resource metrics from Kubelets and exposes them in the Kubernetes apiserver through the Metrics API, for use by the Horizontal Pod Autoscaler and Vertical Pod Autoscaler.

Metrics Server offers:

  • A single deployment that works on most clusters
  • Scalable support up to 5,000 node clusters
  • Resource efficiency: Metrics Server uses 0.5m core of CPU and 4 MB of memory per node

Before you begin the installation of Kubernetes Metrics Server on an Amazon EKS cluster, confirm you have a working EKS cluster. You can use the eksctl command to list available EKS clusters.

$ eksctl get cluster
NAME			REGION
prod-eks-cluster	eu-west-1
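
If you don’t yet have a kubeconfig for the cluster locally, you can write one with eksctl or the AWS CLI. A minimal sketch, assuming the cluster name and region from the output above (the kubeconfig output path is just an illustration):

# Write a kubeconfig for the cluster with eksctl (output path is an example)
$ eksctl utils write-kubeconfig --cluster prod-eks-cluster --region eu-west-1 \
    --kubeconfig ~/.kube/eksctl/clusters/prod-eks-cluster

# Alternatively, generate or update a kubeconfig with the AWS CLI
$ aws eks update-kubeconfig --name prod-eks-cluster --region eu-west-1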

If you have a kubeconfig for the cluster locally, use it to confirm the Kubernetes API server is responsive.

$ kubectl --kubeconfig=/Users/jkmutai/.kube/eksctl/clusters/prod-eks-cluster get nodes
NAME                                            STATUS   ROLES    AGE   VERSION
ip-192-168-138-244.eu-west-1.compute.internal   Ready    <none>   13h   v1.17.9-eks-4c6976
ip-192-168-176-247.eu-west-1.compute.internal   Ready    <none>   13h   v1.17.9-eks-4c6976

Metrics Server Requirements

Metrics Server has specific requirements for cluster and network configuration. These requirements aren’t the default for all cluster distributions, so ensure that your cluster distribution supports them before using Metrics Server:

  • Metrics Server must be reachable from the kube-apiserver
  • The kube-apiserver must be correctly configured to enable an aggregation layer
  • Nodes must have kubelet authorization configured to match the Metrics Server configuration
  • The container runtime must implement container metrics RPCs

An EKS cluster created with eksctl meets these requirements out of the box, so no extra cluster configuration is needed.

How To Install Kubernetes Metrics Server on Amazon EKS Cluster

Save the path to your kubeconfig in the KUBECONFIG environment variable.

export KUBECONFIG=/Users/jkmutai/.kube/eksctl/clusters/prod-eks-cluster

Confirm you can run kubectl commands without manually passing the path to your kubeconfig file.

$ kubectl get nodes
NAME                                            STATUS   ROLES    AGE   VERSION
ip-192-168-138-244.eu-west-1.compute.internal   Ready    <none>   13h   v1.17.9-eks-4c6976
ip-192-168-176-247.eu-west-1.compute.internal   Ready    <none>   13h   v1.17.9-eks-4c6976

Apply the Metrics Server manifests, which are published on the Metrics Server releases page and installable directly via URL:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml

Here is the output showing the resources being created:

clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
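
If you need to customize the Metrics Server flags, for example to change how often metrics are collected, you can download the manifest, edit the container args, and apply the local copy instead. A rough sketch against the same v0.3.7 release (the extra arg shown is illustrative; check the flags supported by your Metrics Server version):

# Download the manifest locally
$ wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml

# Edit the metrics-server container args, e.g. add:
#   - --metric-resolution=30s
$ vim components.yaml

# Apply the edited manifest
$ kubectl apply -f components.yaml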

Use the following command to verify that the metrics-server deployment is running the desired number of pods:

$ kubectl get deployment metrics-server -n kube-system

NAME             READY   UP-TO-DATE   AVAILABLE   AGE
metrics-server   1/1     1            1           7m23s

$ kubectl get pods -n kube-system | grep metrics

metrics-server-7cb45bbfd5-kbrt7   1/1     Running   0          8m42s
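
You can also wait on the rollout directly, which is handy in automation scripts:

$ kubectl rollout status deployment/metrics-server -n kube-system
deployment "metrics-server" successfully rolled out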

Confirm the Metrics Server APIService is registered and available.

$ kubectl get apiservice v1beta1.metrics.k8s.io -o yaml

apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apiregistration.k8s.io/v1beta1","kind":"APIService","metadata":{"annotations":{},"name":"v1beta1.metrics.k8s.io"},"spec":{"group":"metrics.k8s.io","groupPriorityMinimum":100,"insecureSkipTLSVerify":true,"service":{"name":"metrics-server","namespace":"kube-system"},"version":"v1beta1","versionPriority":100}}
  creationTimestamp: "2020-08-12T11:27:13Z"
  name: v1beta1.metrics.k8s.io
  resourceVersion: "130943"
  selfLink: /apis/apiregistration.k8s.io/v1/apiservices/v1beta1.metrics.k8s.io
  uid: 83c44e41-6346-4dff-8ce2-aff665199209
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
    port: 443
  version: v1beta1
  versionPriority: 100
status:
  conditions:
  - lastTransitionTime: "2020-08-12T11:27:18Z"
    message: all checks passed
    reason: Passed
    status: "True"
    type: Available
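
If you only need the availability status rather than the full object, a shorter check works too (the AGE value below is illustrative):

$ kubectl get apiservice v1beta1.metrics.k8s.io
NAME                     SERVICE                      AVAILABLE   AGE
v1beta1.metrics.k8s.io   kube-system/metrics-server   True        10m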

The Metrics API can also be accessed using the kubectl top command, which makes it easier to debug autoscaling pipelines.

$ kubectl top --help
Display Resource (CPU/Memory/Storage) usage.

 The top command allows you to see the resource consumption for nodes or pods.

 This command requires Metrics Server to be correctly configured and working on the server.

Available Commands:
  node        Display Resource (CPU/Memory/Storage) usage of nodes
  pod         Display Resource (CPU/Memory/Storage) usage of pods

Usage:
  kubectl top [flags] [options]

Use "kubectl  --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).

To display cluster node resource usage (CPU/Memory), run the command:

$ kubectl top nodes
NAME                                            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
ip-192-168-138-244.eu-west-1.compute.internal   50m          2%     445Mi           13%
ip-192-168-176-247.eu-west-1.compute.internal   58m          3%     451Mi           13%
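
You can also target a single node by name, for example one of the nodes listed above:

$ kubectl top node ip-192-168-138-244.eu-west-1.compute.internal
NAME                                            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
ip-192-168-138-244.eu-west-1.compute.internal   50m          2%     445Mi           13%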

A similar command can be used for pods.

$ kubectl top pods -A
NAMESPACE     NAME                              CPU(cores)   MEMORY(bytes)
kube-system   aws-node-glfrs                    4m           51Mi
kube-system   aws-node-sgh8p                    5m           51Mi
kube-system   coredns-6987776bbd-2mgxp          2m           6Mi
kube-system   coredns-6987776bbd-vdn8j          2m           6Mi
kube-system   kube-proxy-5glzs                  1m           7Mi
kube-system   kube-proxy-hgqm5                  1m           8Mi
kube-system   metrics-server-7cb45bbfd5-kbrt7   1m           11Mi
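
kubectl top pod also accepts a --containers flag if you want per-container rather than per-pod usage, for example against the metrics-server pod shown above (the pod name will differ in your cluster):

$ kubectl top pod metrics-server-7cb45bbfd5-kbrt7 -n kube-system --containers
POD                               NAME             CPU(cores)   MEMORY(bytes)
metrics-server-7cb45bbfd5-kbrt7   metrics-server   1m           11Mi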

You can also use kubectl get --raw to pull raw resource usage metrics for all nodes in the cluster.

$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" | jq

{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "https://computingforgeeks.com/apis/metrics.k8s.io/v1beta1/nodes"
  },
  "items": [
    {
      "metadata": {
        "name": "ip-192-168-176-247.eu-west-1.compute.internal",
        "selfLink": "https://computingforgeeks.com/apis/metrics.k8s.io/v1beta1/nodes/ip-192-168-176-247.eu-west-1.compute.internal",
        "creationTimestamp": "2020-08-12T11:44:41Z"
      },
      "timestamp": "2020-08-12T11:44:17Z",
      "window": "30s",
      "usage": {
        "cpu": "55646953n",
        "memory": "461980Ki"
      }
    },
    {
      "metadata": {
        "name": "ip-192-168-138-244.eu-west-1.compute.internal",
        "selfLink": "https://computingforgeeks.com/apis/metrics.k8s.io/v1beta1/nodes/ip-192-168-138-244.eu-west-1.compute.internal",
        "creationTimestamp": "2020-08-12T11:44:41Z"
      },
      "timestamp": "2020-08-12T11:44:09Z",
      "window": "30s",
      "usage": {
        "cpu": "47815890n",
        "memory": "454944Ki"
      }
    }
  ]
}
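
The same approach works for pod metrics. For example, to pull raw usage for pods in the kube-system namespace and list just their names:

$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods" | jq '.items[].metadata.name'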

In our next article we’ll see how you can configure the Horizontal Pod Autoscaler (HPA) in your EKS Kubernetes cluster. In the meantime, check out other Kubernetes-related articles available on our website.

How to force delete a Kubernetes Namespace

Install Kubernetes Cluster on Ubuntu 20.04 with kubeadm

How To Install Kubernetes Dashboard with NodePort