Kubernetes is the most widely adopted container orchestration platform, powering millions of applications in production environments. One big challenge for most new Linux and Kubernetes users is setting up the cluster. Though we have a number of guides on the installation and configuration of Kubernetes clusters, this is our first guide on setting up a Kubernetes cluster in the AWS cloud with Amazon EKS.

For users new to Amazon EKS: it is a managed service that makes it easy for you to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes. It runs Kubernetes control plane instances across multiple Availability Zones to ensure high availability. Since Amazon EKS is fully compatible with the community version of Kubernetes, you can easily migrate any standard Kubernetes application to Amazon EKS without any code modification.

Amazon EKS eliminates headaches around high availability by automatically detecting and replacing unhealthy control plane instances. It also makes it easy to perform version upgrades in an automated fashion. Amazon EKS is integrated with many AWS services to provide scalability and security for your applications, including the following:

  • Amazon ECR for container images
  • Elastic Load Balancing for load distribution
  • IAM for authentication
  • Amazon VPC for isolation

How To Deploy Kubernetes Cluster on AWS with EKS

The next sections dig deeper into the installation of a Kubernetes cluster on AWS with the Amazon EKS managed service. The setup diagram is shown below.

[Diagram: Amazon EKS cluster setup architecture]

Step 1: Install and Configure AWS CLI Tool

We need to set up the AWS CLI tooling since our installation will be command-line driven. This is done on your local workstation. The installation instructions below cover both Linux and macOS.

--- Install AWS CLI on macOS ---
curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
sudo installer -pkg AWSCLIV2.pkg -target /

--- Install AWS CLI on Linux ---
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

You can then check the version of the AWS CLI that you have installed with the command below.

$ aws --version
aws-cli/2.0.38 Python/3.7.3 Linux/4.18.0-193.6.3.el8_2.x86_64 exe/x86_64.centos.8

Configure AWS CLI credentials

After installation we need to configure our AWS CLI credentials. We’ll use the aws configure command to set up the AWS CLI for general use.

$ aws configure
AWS Access Key ID [None]: 
AWS Secret Access Key [None]: 
Default region name [None]: 
Default output format [None]: json
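
If you prefer to avoid the interactive prompts, the same values can be written non-interactively with the aws configure set subcommand. The key values below are placeholders, not real credentials; substitute your own:

aws configure set aws_access_key_id AKIAEXAMPLEACCESSKEY       # placeholder
aws configure set aws_secret_access_key wJalrExampleSecretKey  # placeholder
aws configure set region eu-west-1
aws configure set output json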

Your AWS CLI details will be saved in the ~/.aws directory:

$ ls ~/.aws
config
credentials
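
The config file stores the profile settings and the credentials file stores the keys. Their contents will look roughly like the sketch below (the key values are placeholders, not real credentials):

$ cat ~/.aws/config
[default]
region = eu-west-1
output = json

$ cat ~/.aws/credentials
[default]
aws_access_key_id = AKIAEXAMPLEACCESSKEY
aws_secret_access_key = wJalrExampleSecretKey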

Step 2: Install eksctl on Linux | macOS

eksctl is a simple CLI tool for creating EKS clusters on AWS. It is written in Go and uses CloudFormation under the hood. With it you can have a running cluster in minutes.

It has the following features as of this writing:

  • Create, get, list and delete clusters
  • Create, drain and delete nodegroups
  • Scale a nodegroup
  • Update a cluster
  • Use custom AMIs
  • Configure VPC Networking
  • Configure access to API endpoints
  • Support for GPU nodegroups
  • Spot instances and mixed instances
  • IAM Management and Add-on Policies
  • List cluster Cloudformation stacks
  • Install coredns
  • Write kubeconfig file for a cluster

Install the eksctl tool on a Linux or macOS machine with the commands below.

--- Linux ---
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin

--- macOS ---
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
brew tap weaveworks/tap
brew install weaveworks/tap/eksctl
brew upgrade eksctl && brew link --overwrite eksctl # When upgrading

Test that your installation was successful with the following command.

$ eksctl version
0.25.0

Enable Shell Completion:

--- Bash ---
echo ". > ~/.bashrc

--- Zsh ---
mkdir -p ~/.zsh/completion/
eksctl completion zsh > ~/.zsh/completion/_eksctl
# and put the following in ~/.zshrc:
fpath=($fpath ~/.zsh/completion)

# Note if you're not running a distribution like oh-my-zsh you may first have to enable autocompletion:
autoload -U compinit
compinit

Step 3: Install and configure kubectl on Linux | macOS

The kubectl command line tool is used to control Kubernetes clusters from a command line interface. The tool is installed by running the following commands in your terminal.

--- Linux ---
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.17.7/2020-07-08/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin

--- macOS ---
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.17.7/2020-07-08/bin/darwin/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin

After you install kubectl, you can verify its version with the following command:

$ kubectl version --short --client
Client Version: v1.17.7-eks-bffbac

The kubectl tool looks for a file named config in the $HOME/.kube directory. You can also specify a different kubeconfig file by setting the KUBECONFIG environment variable or by passing the --kubeconfig flag.
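
For example, to point kubectl at the kubeconfig that eksctl generates later in this guide, you can either export the variable for the whole shell session or pass the flag per command (the path below assumes the --auto-kubeconfig default location used in Step 4):

# Whole shell session
export KUBECONFIG="$HOME/.kube/eksctl/clusters/prod-eks-cluster"
kubectl get nodes

# Single command
kubectl --kubeconfig="$HOME/.kube/eksctl/clusters/prod-eks-cluster" get nodes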

Step 4: Create an Amazon EKS cluster and compute

With all the dependencies set up, we can now create an Amazon EKS cluster with a compute option to run our microservice applications. We’ll install the latest Kubernetes version available in Amazon EKS so we can take advantage of the latest EKS features.

You can create a cluster with one compute option and then add any of the other options after your cluster is created. There are two standard compute options:

  • AWS Fargate: Create a cluster that only runs Linux applications on serverless AWS Fargate compute. AWS Fargate with Amazon EKS is only available in some regions (see the sketch after this list).
  • Managed nodes: Choose this option if you want to run Linux applications on Amazon EC2 instances.
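For reference, a minimal sketch of a Fargate-only cluster using eksctl's --fargate flag (the cluster name here is just an example):

eksctl create cluster \
  --name fargate-eks-cluster \
  --region eu-west-1 \
  --fargate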

In this setup we’ll install an EKS cluster running Kubernetes version 1.17 using managed EC2 compute nodes. These are my cluster details:

  • Region: Ireland (eu-west-1)
  • Cluster name: prod-eks-cluster
  • Version: 1.17 – See all available EKS versions
  • Node type: t3.medium – See all AWS Node types available
  • Total number of nodes (for a static ASG): 2
  • Maximum nodes in ASG: 3
  • Minimum nodes in ASG: 1
  • SSH public key to use for nodes (import from local path, or use existing EC2 key pair): ~/.ssh/eks.pub
  • Make nodegroup networking private
  • Let eksctl manage cluster credentials under the ~/.kube/eksctl/clusters directory

Run the command below to create the cluster:

eksctl create cluster \
--version 1.17 \
--name prod-eks-cluster \
--region eu-west-1 \
--nodegroup-name eks-ec2-linux-nodes \
--node-type t3.medium \
--nodes 2 \
--nodes-min 1 \
--nodes-max 3 \
--ssh-access \
--ssh-public-key ~/.ssh/eks.pub \
--managed \
--auto-kubeconfig \
--node-private-networking \
--verbose 3
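
If you prefer a declarative workflow, eksctl also accepts a ClusterConfig file via its -f flag. Below is a minimal sketch that should be roughly equivalent to the flags above, written as a shell heredoc for easy copy-pasting:

cat > prod-eks-cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: prod-eks-cluster
  region: eu-west-1
  version: "1.17"

managedNodeGroups:
  - name: eks-ec2-linux-nodes
    instanceType: t3.medium
    desiredCapacity: 2
    minSize: 1
    maxSize: 3
    privateNetworking: true
    ssh:
      allow: true
      publicKeyPath: ~/.ssh/eks.pub
EOF

eksctl create cluster -f prod-eks-cluster.yaml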

The eksctl installer will automatically create and configure a VPC, Internet gateway, NAT gateway and routing tables for you.

[Screenshots: the VPC and subnets created by eksctl, as viewed in the AWS VPC console]

Be patient as the installation may take some time.

[ℹ]  eksctl version 0.25.0
[ℹ]  using region eu-west-1
[ℹ]  setting availability zones to [eu-west-1a eu-west-1c eu-west-1b]
[ℹ]  subnets for eu-west-1a - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ]  subnets for eu-west-1c - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ]  subnets for eu-west-1b - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ]  using SSH public key "/Users/jkmutai/.ssh/eks.pub" as "eksctl-prod-eks-cluster-nodegroup-eks-ec2-linux-nodes-52:ad:b5:4f:a6:01:10:b6:c1:6b:ba:eb:5a:fb:0c:b2"
[ℹ]  using Kubernetes version 1.17
[ℹ]  creating EKS cluster "prod-eks-cluster" in "eu-west-1" region with managed nodes
[ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
[ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=eu-west-1 --cluster=prod-eks-cluster'
[ℹ]  CloudWatch logging will not be enabled for cluster "prod-eks-cluster" in "eu-west-1"
[ℹ]  you can enable it with 'eksctl utils update-cluster-logging --region=eu-west-1 --cluster=prod-eks-cluster'
[ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "prod-eks-cluster" in "eu-west-1"
[ℹ]  2 sequential tasks: { create cluster control plane "prod-eks-cluster", 2 sequential sub-tasks: { no tasks, create managed nodegroup "eks-ec2-linux-nodes" } }
[ℹ]  building cluster stack "eksctl-prod-eks-cluster-cluster"
[ℹ]  deploying stack "eksctl-prod-eks-cluster-cluster"
[ℹ]  building managed nodegroup stack "eksctl-prod-eks-cluster-nodegroup-eks-ec2-linux-nodes"
[ℹ]  deploying stack "eksctl-prod-eks-cluster-nodegroup-eks-ec2-linux-nodes"
[ℹ]  waiting for the control plane availability...
[✔]  saved kubeconfig as "/Users/jkmutai/.kube/eksctl/clusters/prod-eks-cluster"
[ℹ]  no tasks
[✔]  all EKS cluster resources for "prod-eks-cluster" have been created
[ℹ]  nodegroup "eks-ec2-linux-nodes" has 4 node(s)
[ℹ]  node "ip-192-168-21-191.eu-west-1.compute.internal" is ready
[ℹ]  node "ip-192-168-35-129.eu-west-1.compute.internal" is ready
[ℹ]  node "ip-192-168-49-234.eu-west-1.compute.internal" is ready
[ℹ]  node "ip-192-168-78-146.eu-west-1.compute.internal" is ready
[ℹ]  waiting for at least 1 node(s) to become ready in "eks-ec2-linux-nodes"
[ℹ]  nodegroup "eks-ec2-linux-nodes" has 4 node(s)
[ℹ]  node "ip-192-168-21-191.eu-west-1.compute.internal" is ready
[ℹ]  node "ip-192-168-35-129.eu-west-1.compute.internal" is ready
[ℹ]  node "ip-192-168-49-234.eu-west-1.compute.internal" is ready
[ℹ]  node "ip-192-168-78-146.eu-west-1.compute.internal" is ready
[ℹ]  kubectl command should work with "/Users/jkmutai/.kube/eksctl/clusters/prod-eks-cluster", try 'kubectl --kubeconfig=/Users/jkmutai/.kube/eksctl/clusters/prod-eks-cluster get nodes'
[✔]  EKS cluster "prod-eks-cluster" in "eu-west-1" region is ready

To list available clusters use the command below:

$ eksctl get cluster
NAME			REGION
prod-eks-cluster	eu-west-1

Use the generated kubeconfig file to confirm that the installation was successful.

$ kubectl --kubeconfig=/Users/jkmutai/.kube/eksctl/clusters/prod-eks-cluster get nodes
NAME                                           STATUS   ROLES    AGE   VERSION
ip-192-168-21-191.eu-west-1.compute.internal   Ready    <none>   18m   v1.17.9-eks-4c6976
ip-192-168-35-129.eu-west-1.compute.internal   Ready    <none>   14m   v1.17.9-eks-4c6976
ip-192-168-78-146.eu-west-1.compute.internal   Ready    <none>   14m   v1.17.9-eks-4c6976

$ kubectl --kubeconfig=/Users/jkmutai/.kube/eksctl/clusters/prod-eks-cluster get pods -A
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   aws-node-254fk             1/1     Running   0          19m
kube-system   aws-node-nmjwd             1/1     Running   0          14m
kube-system   aws-node-z47mq             1/1     Running   0          15m
kube-system   coredns-6987776bbd-8s5ct   1/1     Running   0          14m
kube-system   coredns-6987776bbd-bn5js   1/1     Running   0          14m
kube-system   kube-proxy-79bcs           1/1     Running   0          14m
kube-system   kube-proxy-bpznt           1/1     Running   0          15m
kube-system   kube-proxy-xchxs           1/1     Running   0          19m

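Typing --kubeconfig on every invocation gets tedious. eksctl can merge the cluster credentials into the default ~/.kube/config instead:

eksctl utils write-kubeconfig --cluster=prod-eks-cluster --region=eu-west-1
kubectl get nodes
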
Get info about the nodegroup in use:

$ eksctl get nodegroup --cluster prod-eks-cluster
CLUSTER			NODEGROUP		CREATED			MIN SIZE	MAX SIZE	DESIRED CAPACITY	INSTANCE TYPE	IMAGE ID
prod-eks-cluster	eks-ec2-linux-nodes	2020-08-11T19:21:46Z	1		4		3			t3.medium
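
The nodegroup can later be resized within its minimum and maximum bounds using eksctl scale nodegroup, for example to a desired capacity of 3 nodes:

eksctl scale nodegroup --cluster=prod-eks-cluster --name=eks-ec2-linux-nodes --nodes=3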

Create a cluster with existing private subnets:

When using existing public and private subnets, you’ll need to pass their subnet IDs with the --vpc-private-subnets and --vpc-public-subnets flags:

eksctl create cluster \
  --version 1.17 \
  --name prod-eks-cluster \
  --region eu-west-1 \
  --nodegroup-name eks-ec2-linux-nodes \
  --node-type t3.medium \
  --nodes 2 \
  --nodes-min 1 \
  --nodes-max 3 \
  --ssh-access \
  --ssh-public-key ~/.ssh/eks.pub \
  --managed \
  --vpc-private-subnets=subnet-0597dd879c602d516,subnet-06dcc9817981d25db,subnet-0c4a73dfb9857be6a \
  --vpc-public-subnets=subnet-025b7029b62f7f922,subnet-03d1c9ee286b5e9e2,subnet-04218d8a1bf2acb11 \
  --auto-kubeconfig \
  --node-private-networking \
  --verbose 3

Deleting EKS cluster

If you ever want to delete an EKS cluster, use the eksctl delete cluster command.

$ eksctl delete cluster --region=eu-west-1 --name=prod-eks-cluster

The removal process will have an output similar to one shown below.

[ℹ]  eksctl version 0.25.0
[ℹ]  using region eu-west-1
[ℹ]  deleting EKS cluster "prod-eks-cluster"
[ℹ]  deleted 0 Fargate profile(s)
[ℹ]  cleaning up AWS load balancers created by Kubernetes objects of Kind Service or Ingress
[ℹ]  2 sequential tasks: { delete nodegroup "eks-ec2-linux-nodes", delete cluster control plane "prod-eks-cluster" [async] }
[ℹ]  will delete stack "eksctl-prod-eks-cluster-nodegroup-eks-ec2-linux-nodes"
[ℹ]  waiting for stack "eksctl-prod-eks-cluster-nodegroup-eks-ec2-linux-nodes" to get deleted
[ℹ]  will delete stack "eksctl-prod-eks-cluster-cluster"
[✔]  all cluster resources were deleted
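
As the [async] marker in the output shows, the control plane deletion is only kicked off, not awaited. If you want the command to block until all resources are actually gone, eksctl supports a --wait flag:

eksctl delete cluster --region=eu-west-1 --name=prod-eks-cluster --wait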

We will keep updating this article with additional settings as we build out this Kubernetes cluster setup on AWS using the EKS service.

Similar guides:

Install Kubernetes Cluster on Ubuntu 20.04 with kubeadm

Install Kubernetes Cluster on CentOS 7 with kubeadm

Check Pod / Container Metrics on OpenShift & Kubernetes

Migrate Docker Compose Application to Kubernetes With Kompose