Let’s learn about Kops, a Kubernetes operations tool.

Kubernetes is one of the most popular DevOps tools because of its powerful container orchestration system and features. But although Kubernetes offers so much functionality, setting up a Kubernetes cluster from scratch is painful. This is where Kops comes into the picture.

With Kops, it is a cakewalk to create a Kubernetes cluster on cloud providers like AWS, Google Cloud, etc. It makes setting up a Kubernetes cluster hassle-free, and in this article, I will discuss this awesome tool.

What is Kops?

Kops, also known as Kubernetes Operations, is an official open-source Kubernetes project that allows you to create, maintain, upgrade, and destroy a highly available, production-grade Kubernetes cluster. It also provisions the required cloud infrastructure. The developers of Kops describe it as kubectl for Kubernetes clusters.

Kops is mostly used to deploy Kubernetes clusters on AWS and GCE. Kops officially supports only AWS; other cloud providers such as DigitalOcean, GCE, and OpenStack are still in the beta stage.

If you have worked with kubectl before, you will feel comfortable working with Kops. Kops provides commands to create, get, update, and delete clusters. In addition, Kops knows how to apply changes to existing clusters because it uses declarative configuration. With Kops, you can also scale a Kubernetes cluster up and down, as sketched below.
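
For example, scaling the worker nodes of an existing cluster is just an edit to the instance group followed by an update. Here is a minimal sketch, assuming a cluster named mycluster.k8s.local, an instance group named nodes, and a state store already exported (all placeholders):

kops edit ig --name=mycluster.k8s.local nodes                  # change minSize/maxSize in the editor
kops update cluster --name=mycluster.k8s.local                 # preview the change
kops update cluster --name=mycluster.k8s.local --yes           # apply it
kops rolling-update cluster --name=mycluster.k8s.local --yes   # replace running instances if needed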

Below are the features of Kops:

  • Deploys Kubernetes masters with high availability
  • Rolling cluster updates are supported
  • Automates the provisioning of AWS and GCE Kubernetes clusters
  • Manages cluster add-ons
  • Autocompletion of commands in the command line
  • Generates CloudFormation and Terraform configurations (see the example after this list)
  • Supports state-sync model for dry-runs and automatic idempotency
  • Creates instance groups to support heterogeneous clusters
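
As an example of the Terraform support mentioned above, Kops can generate Terraform configuration instead of applying changes itself. A minimal sketch, with a placeholder cluster name and output directory:

kops update cluster --name=mycluster.k8s.local --target=terraform --out=./kops-terraform

You would then review and apply the generated configuration with Terraform from that output directory.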

Installing Kops

Below are simple steps to install Kops on a Linux environment. I am using Ubuntu 20.x.

First, download the Kops binary from the GitHub releases page. The command below downloads the latest release of Kops.

[email protected]:~$ curl -Lo kops-linux-amd64 https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64

Saving to: ‘kops-linux-amd64’
100%[=========================================================================================================================================================================>] 81,964,000  8.37MB/s   in 7.1s   
2021-06-10 16:23:19 (7.84 MB/s) - ‘kops-linux-amd64’ saved [81964000/81964000]

You need to provide executable permission to the kops file you downloaded and move it to the /usr/local/bin/ directory.

[email protected]:~$ sudo chmod +x kops-linux-amd64
[email protected]:~$ sudo mv kops-linux-amd64 /usr/local/bin/kops

Installation is done. Now you can run the kops command to verify the installation.

[email protected]:~$ kops
kops is Kubernetes ops.
kops is the easiest way to get a production grade Kubernetes cluster up and running. We like to think of it as kubectl for clusters.

kops helps you create, destroy, upgrade and maintain production-grade, highly available, Kubernetes clusters from the command line. AWS (Amazon Web Services) is currently officially supported, with GCE and VMware vSphere in alpha support.

Usage:
kops [command]

Available Commands:
completion      Output shell completion code for the given shell (bash or zsh).
create          Create a resource by command line, filename or stdin.
delete          Delete clusters, instancegroups, or secrets.
describe        Describe a resource.
edit            Edit clusters and other resources.
export          Export configuration.
get             Get one or many resources.
import          Import a cluster.
replace         Replace cluster resources.
rolling-update  Rolling update a cluster.
toolbox         Misc infrequently used commands.
update          Update a cluster.
upgrade         Upgrade a kubernetes cluster.
validate        Validate a kops cluster.
version         Print the kops version information.

Flags:
    --alsologtostderr                  log to standard error as well as files
    --config string                    config file (default is $HOME/.kops.yaml)
-h, --help                             help for kops
    --log_backtrace_at traceLocation   when logging hits line file:N, emit a stack trace (default :0)
    --log_dir string                   If non-empty, write log files in this directory
    --logtostderr                      log to standard error instead of files (default false)
    --name string                      Name of cluster
    --state string                     Location of state storage
    --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)
-v, --v Level                          log level for V logs
    --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging

Use "kops [command] --help" for more information about a command.

Check the Kops version to be sure that Kops was installed correctly.

[email protected]:~$ kops version
Version 1.20.1 (git-5a27dad)

Let us now look at a few important Kops commands that admins widely use for Kubernetes operations.

Kops Commands

Below are the widely used Kops commands you must know.

kops create

The kops create command is used to register a cluster.

Syntax: kops create cluster

There are many other parameters like zone, region, instance type, number of nodes, etc., which you can add in addition to the default command.
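
For example, a fuller create command might look like this (a sketch; the zone, instance sizes, cluster name, and state bucket are placeholders):

kops create cluster --cloud=aws --zones=us-west-2a --node-count=2 --node-size=t3.medium --master-size=t3.medium --name=mycluster.k8s.local --state=s3://my-kops-state-bucket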

kops update

The kops update command is used to update the cluster with the specified cluster specification.

Syntax: kops update cluster --name

To be on the safer side, you can first run this command in preview mode; once the preview output matches your expectations, run it again with the --yes flag to apply the changes to the cluster.
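
A sketch of that flow, with placeholder cluster and state-store names:

kops update cluster --name=mycluster.k8s.local --state=s3://my-kops-state-bucket         # preview only
kops update cluster --name=mycluster.k8s.local --state=s3://my-kops-state-bucket --yes   # apply the changes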

kops get

The kops get command is used to list all the clusters.

Syntax: kops get clusters
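
For example, with a placeholder state store, you can list all clusters or dump a single cluster's full specification:

kops get clusters --state=s3://my-kops-state-bucket
kops get cluster mycluster.k8s.local -o yaml --state=s3://my-kops-state-bucket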

kops delete

The kops delete command is used to delete a specific cluster from the registry and all the cloud resources assigned to that cluster.

Syntax: kops delete cluster --name

Just like kops update, you can also run this command in preview mode first.
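
For example, with placeholder names, preview first and then delete:

kops delete cluster --name=mycluster.k8s.local --state=s3://my-kops-state-bucket         # preview what would be removed
kops delete cluster --name=mycluster.k8s.local --state=s3://my-kops-state-bucket --yes   # actually delete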

kops rolling-update

The kops rolling-update command is used to update a Kubernetes cluster to match the cloud and kops specifications.

Syntax: kops rolling-update cluster --name

Just like kops update, you can also run this command in preview mode first.
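
Again with placeholder names:

kops rolling-update cluster --name=mycluster.k8s.local --state=s3://my-kops-state-bucket         # preview which instances need replacing
kops rolling-update cluster --name=mycluster.k8s.local --state=s3://my-kops-state-bucket --yes   # perform the rolling update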

kops validate

The kops validate command checks whether the cluster you created is up and healthy. For example, if pods or nodes are still in the Pending state, the validate command reports that the cluster is not healthy yet.

Syntax: kops validate cluster --wait

This command waits and validates the cluster for the specified time. So, if you want to validate the cluster for five minutes, run it with 5m as the wait time, as in the example below.
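
For example, to validate for up to five minutes (placeholder names again):

kops validate cluster --name=mycluster.k8s.local --state=s3://my-kops-state-bucket --wait 5m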

That was all about the Kops fundamentals. Let me now show you how to create a Kubernetes cluster on AWS using Kops.

Set Up Kubernetes on AWS using Kops

Before you create the cluster, there are a few prerequisites to take care of: kubectl, the AWS CLI (installed and configured), an S3 bucket for the Kops state store, and an SSH key pair. Let's go through them.

Installing kubectl

First, I will install kubectl.

kubectl is the command-line tool used to run commands against Kubernetes clusters. Download the latest stable kubectl binary.

[email protected]:~$  curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 39.6M  100 39.6M    0     0  6988k      0  0:00:07  0:00:07 --:--:-- 6988k

You need to provide executable permission to the downloaded file and move it to the /usr/local/bin/ directory.

[email protected]:~$ chmod +x ./kubectl
[email protected]:~$ sudo mv ./kubectl /usr/local/bin/kubectl
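
To confirm that kubectl is on your PATH, you can check the client version:

kubectl version --client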

Create S3 Bucket

Once you have the AWS CLI installed and configured on your Linux machine, you will be able to run aws commands.
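
If you have not set up the AWS CLI yet, here is a minimal sketch for Ubuntu (assuming the distribution's awscli package is sufficient; AWS also provides an official installer):

sudo apt-get update && sudo apt-get install -y awscli
aws configure   # enter your access key ID, secret access key, default region, and output format

I have the AWS CLI installed and configured on my Ubuntu system, so let me run a simple command that lists all the S3 buckets.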

[email protected]:~$ aws s3 ls

The output is empty because I don't have any S3 buckets as of now. Let me also check whether any EC2 instances are running.

[email protected]:~$ aws ec2 describe-instances
{
    "Reservations": []
}

This means no EC2 instances are running as of now.

Now you need to create an S3 bucket where Kops will save all of the cluster's state information. Here I am creating an S3 bucket named geekkops-bucket-1132 in the us-west-2 region. For any region other than us-east-1, you need to pass a LocationConstraint to avoid a region mismatch error.

[email protected]:~$ aws s3api create-bucket --bucket geekkops-bucket-1132 --region us-west-2 --create-bucket-configuration LocationConstraint=us-west-2
{
    "Location": "http://geekkops-bucket-1132.s3.amazonaws.com/"
}

If I list the S3 buckets again, I can see the bucket I just created.

[email protected]:~$ aws s3 ls
2021-06-10 16:30:13 geekkops-bucket-1132

Run the command below to enable versioning on the S3 bucket, so that previous versions of the cluster state can be recovered if needed.

[email protected]:~$ aws s3api put-bucket-versioning --bucket geekkops-bucket-1132 --versioning-configuration Status=Enabled

Generate Key

Generate an SSH key pair, which Kops will use for cluster login and password generation.

[email protected]:~$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ubuntu/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/ubuntu/.ssh/id_rsa.
Your public key has been saved in /home/ubuntu/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:fH4JCBXMNRqzk1hmoK cXmwSFaeBsuGBA5IWMkNuvq0 [email protected]
The key's randomart image is:
+---[RSA 2048]----+
|O=. .  Xoo |
|B   .. @o* . |
|.= =. = = |
|o o o o o |
| . . . S o |
| o. = o . . |
| . .=   . o |
| ..   . |
| E . |
+----[SHA256]-----+

Export Environment Variables

Export the cluster name and the S3 bucket as environment variables. These will apply only to the current terminal session. I am using the suffix ‘.k8s.local’ because I am not using any preconfigured DNS; with this suffix, Kops falls back to gossip-based DNS.

[email protected]:~$ export KOPS_CLUSTER_NAME=geekdemo1.k8s.local
[email protected]:~$ export KOPS_STATE_STORE=s3://geekkops-bucket-1132

Create the Cluster

Use the kops create command to create the cluster. Below are the parameters I am using to create a Kubernetes cluster on AWS using Kops:

  • --cloud specifies the cloud provider I am using
  • --zones is the zone where the cluster instance will get deployed
  • --node-count is the number of nodes to deploy in the Kubernetes cluster
  • --node-size and --master-size set the EC2 instance types; I am using t2.micro instances
  • --name is the cluster name
[email protected]:~$ kops create cluster --cloud=aws --zones=eu-central-1a --node-count=1 --node-size=t2.micro --master-size=t2.micro --name=${KOPS_CLUSTER_NAME}
I0216 16:35:24.225238    4326 subnets.go:180] Assigned CIDR 172.20.32.0/19 to subnet eu-central-1a
I0216 16:35:24.068088    4326 create_cluster.go:717] Using SSH public key: /home/ubuntu/.ssh/id_rsa.pub
Previewing changes that will be made:

I0216 16:35:24.332590    4326 apply_cluster.go:465] Gossip DNS: skipping DNS validation
I0216 16:35:24.392712    4326 executor.go:111] Tasks: 0 done / 83 total; 42 can run
W0216 16:35:24.792113    4326 vfs_castore.go:604] CA private key was not found
I0216 16:35:24.938057    4326 executor.go:111] Tasks: 42 done / 83 total; 17 can run
I0216 16:35:25.436407    4326 executor.go:111] Tasks: 59 done / 83 total; 18 can run
I0216 16:35:25.822395    4326 executor.go:111] Tasks: 77 done / 83 total; 2 can run
I0216 16:35:25.823088    4326 executor.go:111] Tasks: 79 done / 83 total; 2 can run
I0216 16:35:26.406919    4326 executor.go:111] Tasks: 81 done / 83 total; 2 can run
I0216 16:35:27.842148    4326 executor.go:111] Tasks: 83 done / 83 total; 0 can run

  LaunchTemplate/master-eu-central-1a.masters.geekdemo1.k8s.local
        AssociatePublicIP       true
        HTTPPutResponseHopLimit 1
        HTTPTokens              optional
        IAMInstanceProfile      name:masters.geekdemo1.k8s.local id:masters.geekdemo1.k8s.local
        ImageID                 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210119.1
        InstanceType            t2.micro
        RootVolumeSize          64
        RootVolumeType          gp2
        RootVolumeEncryption    false
        RootVolumeKmsKey
        SSHKey                  name:kubernetes.geekdemo1.k8s.local-3e:19:92:ca:dd:64:d5:cf:ff:ed:3a:92:0f:40:d4:e8 id:kubernetes.geekdemo1.k8s.local-3e:19:92:ca:dd:64:d5:cf:ff:ed:3a:92:0f:40:d4:e8
        SecurityGroups          [name:masters.geekdemo1.k8s.local]
        SpotPrice
        Tags                    {k8s.io/cluster-autoscaler/node-template/label/kubernetes.io/role: master, k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/instancegroup: master-eu-central-1a, k8s.io/role/master: 1, kops.k8s.io/instancegroup: master-eu-central-1a, Name: master-eu-central-1a.masters.geekdemo1.k8s.local, KubernetesCluster: geekdemo1.k8s.local, kubernetes.io/cluster/geekdemo1.k8s.local: owned, k8s.io/cluster-autoscaler/node-template/label/node-role.kubernetes.io/master: }

  Subnet/eu-central-1a.geekdemo1.k8s.local
        ShortName               eu-central-1a
        VPC                     name:geekdemo1.k8s.local
        AvailabilityZone        eu-central-1a
        CIDR                    172.20.32.0/19
        Shared                  false
        Tags                    {KubernetesCluster: geekdemo1.k8s.local, kubernetes.io/cluster/geekdemo1.k8s.local: owned, SubnetType: Public, kubernetes.io/role/elb: 1, Name: eu-central-1a.geekdemo1.k8s.local}

  VPC/geekdemo1.k8s.local
        CIDR                    172.20.0.0/16
        EnableDNSHostnames      true
        EnableDNSSupport        true
        Shared                  false
        Tags                    {kubernetes.io/cluster/geekdemo1.k8s.local: owned, Name: geekdemo1.k8s.local, KubernetesCluster: geekdemo1.k8s.local}

  VPCDHCPOptionsAssociation/geekdemo1.k8s.local
        VPC                     name:geekdemo1.k8s.local
        DHCPOptions             name:geekdemo1.k8s.local

Must specify --yes to apply changes

Cluster configuration has been created.

Suggestions:
 * list clusters with: kops get cluster
 * edit this cluster with: kops edit cluster geekdemo1.k8s.local
 * edit your node instance group: kops edit ig --name=geekdemo1.k8s.local nodes-eu-central-1a
 * edit your master instance group: kops edit ig --name=geekdemo1.k8s.local master-eu-central-1a

Finally configure your cluster with: kops update cluster --name geekdemo1.k8s.local --yes --admin

Run the kops get command to see if the cluster configuration was created.

[email protected]:~$ kops get cluster
NAME                    CLOUD   ZONES
geekdemo1.k8s.local     aws     eu-central-1a

Update the Cluster

To apply the cluster specification and actually provision the cloud resources, run the kops update command with the --yes flag.

[email protected]:~$ kops update cluster --name geekdemo1.k8s.local --yes --admin
I0216 16:38:16.800767    4344 apply_cluster.go:465] Gossip DNS: skipping DNS validation
I0216 16:38:16.919282    4344 executor.go:111] Tasks: 0 done / 83 total; 42 can run
W0216 16:38:17.343336    4344 vfs_castore.go:604] CA private key was not found
I0216 16:38:18.421652    4344 keypair.go:195] Issuing new certificate: "etcd-clients-ca"
I0216 16:38:18.450699    4344 keypair.go:195] Issuing new certificate: "etcd-peers-ca-main"
I0216 16:38:19.470785    4344 keypair.go:195] Issuing new certificate: "etcd-manager-ca-main"
I0216 16:38:19.531852    4344 keypair.go:195] Issuing new certificate: "etcd-peers-ca-events"
I0216 16:38:19.551601    4344 keypair.go:195] Issuing new certificate: "apiserver-aggregator-ca"
I0216 16:38:19.571834    4344 keypair.go:195] Issuing new certificate: "etcd-manager-ca-events"
I0216 16:38:19.592090    4344 keypair.go:195] Issuing new certificate: "master"
W0216 16:38:19.652894    4344 vfs_castore.go:604] CA private key was not found
I0216 16:38:19.653013    4344 keypair.go:195] Issuing new certificate: "ca"
I0216 16:38:24.344075    4344 executor.go:111] Tasks: 42 done / 83 total; 17 can run
I0216 16:38:24.306125    4344 executor.go:111] Tasks: 59 done / 83 total; 18 can run
I0216 16:38:26.189798    4344 executor.go:111] Tasks: 77 done / 83 total; 2 can run
I0216 16:38:26.190464    4344 executor.go:111] Tasks: 79 done / 83 total; 2 can run
I0216 16:38:26.738600    4344 executor.go:111] Tasks: 81 done / 83 total; 2 can run
I0216 16:38:28.810100    4344 executor.go:111] Tasks: 83 done / 83 total; 0 can run
I0216 16:38:29.904257    4344 update_cluster.go:313] Exporting kubecfg for cluster
kops has set your kubectl context to geekdemo1.k8s.local

Cluster is starting.  It should be ready in a few minutes.

Suggestions:
 * validate cluster: kops validate cluster --wait 10m
 * list nodes: kubectl get nodes --show-labels
 * ssh to the master: ssh -i ~/.ssh/id_rsa [email protected]
 * the ubuntu user is specific to Ubuntu. If not using Ubuntu please use the appropriate user based on your OS.
 * read about installing addons at: https://kops.sigs.k8s.io/operations/addons.

If you immediately check whether the Kubernetes nodes are running, you will get an error. Be a little patient and wait a few minutes (5 to 10) until the cluster is fully created.

[email protected]:~$ kubectl get nodes
Unable to connect to the server: dial tcp: lookup api-geekdemo1-k8s-local-dason2-1001342368.eu-central-1.elb.amazonaws.com on 127.0.0.53:53: no such host

Validate the Cluster

I am running the validate command with a five-minute wait to check whether the cluster is up and healthy. Once the nodes are up, you will see their details in the validate output.

[email protected]:~$ kops validate cluster --wait 5m
Validating cluster geekdemo1.k8s.local
INSTANCE GROUPS
NAME                    ROLE    MACHINETYPE     MIN     MAX     SUBNETS
master-eu-central-1a    Master  t2.micro        1       1       eu-central-1a
nodes-eu-central-1a     Node    t2.micro        1       1       eu-central-1a

List the Nodes and Pods

Now run the command below to check whether all the nodes are ready and running. You can see that both the master and the worker node are in Ready status.

[email protected]:~$ kubectl get nodes
NAME                                             STATUS   ROLES    AGE     VERSION
ip-173-19-35-156.eu-central-1.compute.internal   Ready    master   10m     v1.20.1
ip-172-36-23-149.eu-central-1.compute.internal   Ready    node     5m38s   v1.20.1

You can also check all the pods running in the Kubernetes cluster.

[email protected]:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                                                     READY   STATUS    RESTARTS   AGE
kube-system   dns-controller-8d8889c4b-xp9dl                                           1/1     Running   0          8m26s
kube-system   etcd-manager-events-ip-173-19-35-156.eu-central-1.compute.internal       1/1     Running   0          10m
kube-system   etcd-manager-main-ip-173-19-35-156.eu-central-1.compute.internal         1/1     Running   0          10m
kube-system   kops-controller-9skdk                                                    1/1     Running   3          6m51s
kube-system   kube-apiserver-ip-173-19-35-156.eu-central-1.compute.internal            2/2     Running   0          10m
kube-system   kube-controller-manager-ip-173-19-35-156.eu-central-1.compute.internal   1/1     Running   6          10m
kube-system   kube-dns-696cb84c7-g8nhb                                                 3/3     Running   0          4m27s
kube-system   kube-dns-autoscaler-55f8f75459-zlxbr                                     1/1     Running   0          7m18s
kube-system   kube-proxy-ip-173-19-35-156.eu-central-1.compute.internal                1/1     Running   0          10m
kube-system   kube-proxy-ip-172-36-23-149.eu-central-1.compute.internal                1/1     Running   0          7m2s
kube-system   kube-scheduler-ip-173-19-35-156.eu-central-1.compute.internal            1/1     Running   5          10m

Delete the Cluster

Just like creating a Kubernetes cluster, deleting one using Kops is very straightforward. The kops delete command removes all the cluster's cloud resources as well as the cluster's entry in the registry.

[email protected]:~$ kops delete cluster --name geekdemo1.k8s.local --yes
TYPE                    NAME                                                                            ID
autoscaling-config      master-eu-central-1a.masters.geekdemo1.k8s.local                                lt-0cc11aec1943204e4
autoscaling-config      nodes-eu-central-1a.geekdemo1.k8s.local                                         lt-0da65d2eaf6de9f5c
autoscaling-group       master-eu-central-1a.masters.geekdemo1.k8s.local                                master-eu-central-1a.masters.geekdemo1.k8s.local
autoscaling-group       nodes-eu-central-1a.geekdemo1.k8s.local                                         nodes-eu-central-1a.geekdemo1.k8s.local
dhcp-options            geekdemo1.k8s.local                                                             dopt-0403a0cbbfbc0c72b
iam-instance-profile    masters.geekdemo1.k8s.local                                                     masters.geekdemo1.k8s.local
iam-instance-profile    nodes.geekdemo1.k8s.local                                                       nodes.geekdemo1.k8s.local
iam-role                masters.geekdemo1.k8s.local                                                     masters.geekdemo1.k8s.local
iam-role                nodes.geekdemo1.k8s.local                                                       nodes.geekdemo1.k8s.local
instance                master-eu-central-1a.masters.geekdemo1.k8s.local                                i-069c73f2c23eb502a
instance                nodes-eu-central-1a.geekdemo1.k8s.local                                         i-0401d6b0d4fc11e77
iam-instance-profile:nodes.geekdemo1.k8s.local  ok
load-balancer:api-geekdemo1-k8s-local-dason2    ok
iam-instance-profile:masters.geekdemo1.k8s.local        ok
iam-role:masters.geekdemo1.k8s.local    ok
instance:i-069c73f2c23eb502a    ok
autoscaling-group:nodes-eu-central-1a.geekdemo1.k8s.local       ok
iam-role:nodes.geekdemo1.k8s.local      ok
instance:i-0401d6b0d4fc11e77    ok
autoscaling-config:lt-0cc11aec1943204e4 ok
autoscaling-config:lt-0da65d2eaf6de9f5c ok
autoscaling-group:master-eu-central-1a.masters.geekdemo1.k8s.local      ok
keypair:key-0d82g920j421b89dn   ok
Deleted kubectl config for geekdemo1.k8s.local

Deleted cluster: "geekdemo1.k8s.local"

Conclusion

I hope this article on Kops was helpful and that you learned something new today. Kops is a fantastic tool for working with Kubernetes in the cloud, so go ahead and try out the steps mentioned in this article to set up your own Kubernetes cluster on AWS.