Provision and Manage Amazon EKS Cluster using eksctl

What is Amazon EKS?

Amazon Elastic Kubernetes Service (EKS) is a managed Kubernetes service provided by AWS. EKS runs upstream Kubernetes releases, which means anything that works with a vanilla Kubernetes installation will also work with EKS.

In addition to supporting the upstream releases, AWS provides several EKS-specific features that make it easier to manage a Kubernetes deployment on AWS.

Some of the notable features are:

  1. Managed Control Plane — It consists of managed master nodes which run system components like the API server, etcd, the controller manager, etc. By default, the API server runs in two Availability Zones (AZs) and etcd runs across three AZs. All master nodes run in an EKS/AWS-managed VPC.
  2. Managed Node Groups — AWS has introduced managed node groups as well, which means you can run worker nodes using a managed node group. It simplifies some of the admin tasks like worker node upgrades and automates the provisioning and lifecycle management of worker nodes.
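Once a cluster exists, you can see both of these managed pieces from the AWS CLI. A quick sketch, assuming the AWS CLI is configured with valid credentials and a cluster named eks-demo (the name is illustrative):

```shell
# Inspect the managed control plane: status, Kubernetes version and API endpoint.
aws eks describe-cluster --name eks-demo \
  --query 'cluster.{Status: status, Version: version, Endpoint: endpoint}'

# List the managed node groups attached to the cluster.
aws eks list-nodegroups --cluster-name eks-demo
```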

What is eksctl?

eksctl is a utility used to create and manage Amazon EKS clusters. It was first developed by Weaveworks and is now officially supported by AWS.

Let’s take a look at how we can use eksctl to create and manage an EKS cluster.

First, we need to install the eksctl utility on our workspace. You can download the latest version from the eksctl releases page. At the time of this writing, the latest eksctl version is 0.24.0, which supports EKS version 1.17, itself the latest Kubernetes version supported by EKS at the time of this writing.
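On a Linux amd64 workspace, the installation boils down to downloading the release tarball and placing the binary on your PATH (adjust the platform string for macOS or other architectures):

```shell
# Download the latest eksctl release and extract it to /tmp.
curl --silent --location \
  "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" \
  | tar xz -C /tmp

# Move the binary onto the PATH.
sudo mv /tmp/eksctl /usr/local/bin

# Verify the installation.
eksctl version
```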

At a bare minimum, you can simply run the command eksctl create cluster and it will provision several resources for you, including:

1. VPC
2. EKS Cluster
3. Managed node group with 2 instances of type m5.large
4. All required IAM roles, security groups, etc.
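The defaults above can also be overridden with flags. A sketch, with an illustrative cluster name and region:

```shell
# Bare-minimum invocation: VPC, cluster, node group and IAM roles with defaults.
eksctl create cluster

# Same command with a few defaults overridden.
eksctl create cluster \
  --name eks-demo \
  --region us-east-1 \
  --nodes 2 \
  --node-type t3.small \
  --managed
```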

Now that may sound fascinating, but it rarely matches real-world scenarios. Most of the time a VPC already exists, and if you are working in an enterprise, the VPC and other network-related components may well be managed by a dedicated network team.

Hence, I would strongly suggest not using eksctl to create the VPC, but using it only for EKS-related resources such as the EKS cluster, managed node groups, Fargate profiles, Spot instances, IAM roles, etc.
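eksctl can reuse an existing VPC directly from the command line by pointing it at the subnets; a sketch with placeholder subnet IDs:

```shell
# Reuse an existing VPC instead of letting eksctl create one:
# pass the IDs of the subnets the cluster should use (placeholders below).
eksctl create cluster \
  --name eks-demo \
  --region us-east-1 \
  --vpc-private-subnets subnet-0aaaaaaaa,subnet-0bbbbbbbb \
  --node-private-networking
```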

eksctl also supports provisioning resources from a config file, which is the preferred approach because it lets you version-control your EKS cluster configuration. Below is a sample config file that can be used to create an EKS cluster with a managed node group:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: eks-demo
  region: us-east-1
  version: "1.16"
vpc:
  id: "vpc-088bef105xxxxxxxxxx"
  cidr: "172.32.0.0/16"
  subnets:
    private:
      us-east-1a:
        id: "subnet-0a30872473xxxxxxxx"
        cidr: "172.32.2.0/24"
      us-east-1b:
        id: "subnet-0234d17566xxxxxxx"
        cidr: "172.32.3.0/24"
managedNodeGroups:
  - name: eks-ng
    minSize: 2
    maxSize: 2
    desiredCapacity: 2
    instanceType: t3.small
    volumeSize: 5
    privateNetworking: true
    ssh:
      allow: true
      publicKeyPath: ~/.ssh/id_rsa.pub
    labels: {env: dev}
    tags:
      costid: devops
    iam:
      withAddonPolicies:
        externalDNS: true
        autoScaler: true
        ebs: true
        efs: true
        cloudWatch: true
        albIngress: true
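After the cluster is up, the same node group can be resized from the command line; a sketch using the cluster and node group names from the config above:

```shell
# Scale the managed node group to three nodes (adjust minSize/maxSize
# in the config first if the new size falls outside the current bounds).
eksctl scale nodegroup --cluster=eks-demo --name=eks-ng --nodes=3
```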

The above config file creates an EKS cluster of version 1.16 in an existing VPC. It also places the cluster in the private subnets for better security.

The managedNodeGroups section creates a managed worker node group with 2 instances of type t3.small, each with a 5 GB EBS volume attached. Setting privateNetworking to true ensures that your worker nodes also run in the private subnets. If you want to allow SSH to the worker nodes, you can specify the path to a public key on the workspace where you are running the eksctl utility. If you want to use an SSH key pair generated in AWS instead, simply provide the name of the existing key pair (without specifying a path).
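For the AWS-managed key pair case, the ssh block uses publicKeyName instead of publicKeyPath; a sketch with an illustrative key pair name:

```yaml
ssh:
  allow: true
  publicKeyName: my-existing-keypair   # name of an EC2 key pair, not a file path
```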

Finally, you can specify additional policies to attach to the IAM role provisioned by eksctl. The example above would grant access to Route 53, Auto Scaling, EBS, EFS, CloudWatch and the ALB Ingress Controller in addition to the necessary EKS-related permissions.

Once your definition file is ready, you can simply create a cluster using the command eksctl create cluster -f /path/to/configfile.yaml, and it will provision everything required for the EKS cluster in about 15–20 minutes.
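The typical lifecycle around that config file looks like this (the file path is illustrative; eksctl writes the cluster credentials into ~/.kube/config for you):

```shell
# Provision the cluster and everything defined in the config file.
eksctl create cluster -f cluster.yaml

# Verify access once provisioning completes.
kubectl get nodes

# Tear everything down again when finished.
eksctl delete cluster -f cluster.yaml
```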

eksctl also supports adding Fargate capability to your EKS cluster, running clusters on Spot instances, simplifying EKS cluster upgrades, and much more.
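Two of those capabilities as quick sketches, assuming the eks-demo cluster from earlier (profile and namespace names are illustrative):

```shell
# Upgrade the control plane by one minor version (e.g. 1.16 -> 1.17);
# run without --approve first to see the plan without applying it.
eksctl upgrade cluster --name eks-demo --approve

# Add a Fargate profile so pods in the selected namespace run on Fargate.
eksctl create fargateprofile --cluster eks-demo \
  --name fp-default --namespace default
```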
