published: 10th of March 2024
One of my goals for 2024 is to get intimate with Kubernetes. I am a total noob when it comes to Kubernetes, so running a Kubernetes cluster on AWS is a good place to start. A lot of companies use a hosted Kubernetes service like Amazon Elastic Kubernetes Service (EKS), so I think it's a good idea to get familiar with it.
In this post, I will show you how to use Terraform to create an EKS cluster on AWS and deploy an application to it with the Helm provider. We will also be using kubectl to interact with the cluster.
The software versions used in this post are pinned in the Terraform code below: Terraform ~> 1.3, the AWS provider ~> 5.7.0, and Kubernetes 1.29 for the cluster itself.
Before you start, you will need to install a few CLI tools. I am using Terraform locally, so you will need to install it if you haven't already. Additionally, you will need to install the AWS CLI, kubectl, and Helm. Please consult the official documentation for each tool to install it on your system.
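If you want to confirm everything is on your PATH, each tool has a version command. A quick check looks something like this (your version numbers will obviously differ).
# Confirm the CLI tools are installed and available
terraform version
aws --version
kubectl version --client
helm version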
This post assumes that you already have an AWS account and have the necessary permissions to create an EKS cluster.
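As a quick sanity check that your AWS credentials are configured and pointing at the account you expect, you can ask STS who you are.
# Verify the AWS CLI is authenticated against the intended account
aws sts get-caller-identity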
We will create the following files as part of this project.
.
├── main.tf
├── outputs.tf
├── provider.tf
├── variables.tf
└── nginx-variables.yaml
The provider.tf file is used to define the provider plugins that we will be using in our project.
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "~> 5.7.0"
    }
    tls = {
      source = "hashicorp/tls"
      version = "~> 4.0.4"
    }
    cloudinit = {
      source = "hashicorp/cloudinit"
      version = "~> 2.3.2"
    }
  }
  required_version = "~> 1.3"
}
provider "aws" {
  region = var.region
}
provider "kubernetes" {
  host = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command = "aws"
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}
provider "helm" {
  kubernetes {
    host = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command = "aws"
      args = ["eks", "get-token", "--cluster-name", var.cluster_name]
    }
  }
}
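Note that the kubernetes and helm providers don't use static credentials; the exec block shells out to the AWS CLI for a short-lived token. Once the cluster exists, you can run the same command the providers use to confirm token generation works. This is just a sketch using the default cluster name and region from variables.tf.
# The providers run this under the hood to obtain a short-lived auth token
aws eks get-token --cluster-name test-eks-cluster --region ap-southeast-2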
We define the project variables in the variables.tf file.
variable "region" {
description = "AWS region"
type = string
default = "ap-southeast-2"
}
variable "cluster_name" {
description = "Name of the EKS cluster"
type = string
default = "test-eks-cluster"
}
variable "cluster_version" {
description = "Version of the EKS cluster"
type = string
default = "1.29"
}
variable "vpc_cidr" {
description = "CIDR range for the VPC"
type = string
default = "10.0.0.0/20"
}
variable "subnet_addbits" {
description = "The number of additional bits to add to the VPC CIDR to create subnets"
type = number
default = 4
}
variable "eks_add_on_repo" {
description = "The repo number from https://docs.aws.amazon.com/eks/latest/userguide/add-ons-images.html"
type = number
default = 602401143452
}
variable "eks_ami_type" {
description = "The AMI type for the node group"
type = string
default = "AL2_x86_64"
}
variable "eks_ami_variant" {
description = "The AMI variant for the node group"
type = string
default = "t3.small"
}
variable "eks_node_group" {
description = "The node group configuration"
type = object({
min_size = number
max_size = number
desired_size = number
})
default = {
min_size = 1
max_size = 2
desired_size = 1
}
}
variable "app_namespace" {
description = "The namespace to deploy the application"
type = string
default = "test-namespace"
}
variable "app_name" {
description = "The name of the application"
type = string
default = "test-app"
}
The eks_add_on_repo variable is region dependent. Find the correct number for your region here.
The vpc_cidr variable determines the size of the CIDR for the VPC. I am using a /20 as I want to use /24s for my subnets.
The subnet_addbits variable determines the size of the subnets. I am using 4 additional bits (/20 from the VPC CIDR + 4) to create /24s.
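If you want to sanity-check the subnet maths before creating anything, terraform console can evaluate cidrsubnets() directly. Run this from an empty directory (or after terraform init in the project directory); it should return six /24 networks, 10.0.0.0/24 through 10.0.5.0/24.
# Evaluate the subnet layout without creating any infrastructure
echo 'cidrsubnets("10.0.0.0/20", 4, 4, 4, 4, 4, 4)' | terraform console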
The main.tf file is used to define the resources that we will be creating in our project.
Thankfully, there are a couple of Terraform modules that do most of the heavy lifting for us. The supporting VPC is built with the terraform-aws-modules/vpc module, and the cluster itself with the terraform-aws-modules/eks module.
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "5.0.0"
name = var.cluster_name
cidr = var.vpc_cidr
azs = ["${var.region}a", "${var.region}b", "${var.region}c"]
private_subnets = slice(cidrsubnets(var.vpc_cidr, var.subnet_addbits, var.subnet_addbits, var.subnet_addbits, var.subnet_addbits, var.subnet_addbits, var.subnet_addbits), 0, 3)
public_subnets = slice(cidrsubnets(var.vpc_cidr, var.subnet_addbits, var.subnet_addbits, var.subnet_addbits, var.subnet_addbits, var.subnet_addbits, var.subnet_addbits), 3, 6)
enable_nat_gateway = true
single_nat_gateway = true
enable_dns_hostnames = true
public_subnet_tags = {
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
"kubernetes.io/role/elb" = 1
}
private_subnet_tags = {
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
"kubernetes.io/role/internal-elb" = 1
}
}
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "19.15.3"
cluster_name = var.cluster_name
cluster_version = var.cluster_version
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
cluster_endpoint_public_access = true
eks_managed_node_group_defaults = {
ami_type = var.eks_ami_type
}
eks_managed_node_groups = {
one = {
name = "node-group-1"
instance_types = [var.eks_ami_variant]
min_size = var.eks_node_group.min_size
max_size = var.eks_node_group.max_size
desired_size = var.eks_node_group.desired_size
}
two = {
name = "node-group-2"
instance_types = [var.eks_ami_variant]
min_size = var.eks_node_group.min_size
max_size = var.eks_node_group.max_size
desired_size = var.eks_node_group.desired_size
}
}
}
data "aws_iam_policy" "ebs_csi_policy" {
arn = "arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy"
}
module "irsa_ebs_csi" {
source = "terraform-aws-modules/iam/aws//modules/iam-assumable-role-with-oidc"
version = "4.7.0"
create_role = true
role_name = "AmazonEKSTFEBSCSIRole-${module.eks.cluster_name}"
provider_url = module.eks.oidc_provider
role_policy_arns = [data.aws_iam_policy.ebs_csi_policy.arn]
oidc_fully_qualified_subjects = ["system:serviceaccount:kube-system:ebs-csi-controller-sa"]
}
resource "aws_eks_addon" "ebs_csi" {
cluster_name = module.eks.cluster_name
addon_name = "aws-ebs-csi-driver"
addon_version = "v1.20.0-eksbuild.1"
service_account_role_arn = module.irsa_ebs_csi.iam_role_arn
tags = {
"eks_addon" = "ebs-csi"
"terraform" = "true"
}
}
resource "kubernetes_service_account" "service_account" {
metadata {
name = "aws-load-balancer-controller"
namespace = "kube-system"
labels = {
"app.kubernetes.io/name" = "aws-load-balancer-controller"
"app.kubernetes.io/component" = "controller"
}
annotations = {
"eks.amazonaws.com/role-arn" = module.irsa_ebs_csi.iam_role_arn
"eks.amazonaws.com/sts-regional-endpoints" = "true"
}
}
}
resource "helm_release" "alb_controller" {
name = "aws-load-balancer-controller"
repository = "https://aws.github.io/eks-charts"
chart = "aws-load-balancer-controller"
namespace = "kube-system"
depends_on = [
kubernetes_service_account.service_account
]
set {
name = "region"
value = var.region
}
set {
name = "vpcId"
value = module.vpc.vpc_id
}
set {
name = "image.repository"
value = "${var.eks_add_on_repo}.dkr.ecr.${var.region}.amazonaws.com/amazon/aws-load-balancer-controller"
}
set {
name = "serviceAccount.create"
value = "false"
}
set {
name = "serviceAccount.name"
value = "aws-load-balancer-controller"
}
set {
name = "clusterName"
value = var.cluster_name
}
}
resource "kubernetes_namespace" "test_app" {
metadata {
name = var.app_namespace
}
}
resource "helm_release" "test_app" {
name = var.app_name
repository = "https://charts.bitnami.com/bitnami"
chart = "nginx"
namespace = var.app_namespace
values = [
file("${path.module}/nginx-variables.yaml")
]
}
data "kubernetes_service" "test_app" {
depends_on = [helm_release.test_app]
metadata {
name = var.app_name
namespace = var.app_namespace
}
}
At a high level, the main.tf file does the following: it creates a VPC with three public and three private subnets, provisions the EKS cluster with two managed node groups, sets up an IAM role for service accounts (IRSA) and installs the EBS CSI driver add-on, creates a service account for the AWS Load Balancer Controller and installs it with Helm, and finally creates a namespace and deploys the nginx application via the Helm provider.
The outputs.tf file exposes configuration values that are useful for interacting with the cluster.
output "region" {
description = "AWS region"
value = var.region
}
output "cluster_name" {
description = "Kubernetes Cluster Name"
value = module.eks.cluster_name
}
output "cluster_endpoint" {
description = "Endpoint for EKS control plane"
value = module.eks.cluster_endpoint
}
output "cluster_security_group_id" {
description = "Security group ids attached to the cluster control plane"
value = module.eks.cluster_security_group_id
}
output "app_endpoint" {
value = "http://${data.kubernetes_service.test_app.status.0.load_balancer.0.ingress.0.hostname}"
}
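After an apply, these values can be read back at any time with terraform output, which we will lean on later to configure kubectl.
# List all outputs, or grab a single value for use in other commands
terraform output
terraform output -raw cluster_name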
The nginx-variables.yaml file is where we define the configuration for the nginx Helm chart.
replicaCount: 2
serverBlock: |-
  server {
    listen 0.0.0.0:8080;
    location / {
      default_type text/html;
      return 200 "hello from kubernetes!\n";
    }
  }
The replicaCount value defines the number of replicas of the application to run. The serverBlock value defines the nginx server configuration.
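If you would like to see what these values render to before Terraform installs the chart, you can template it locally with Helm. This step is optional and assumes you have network access to the Bitnami chart repository.
# Render the chart locally with our values to confirm the custom server block is applied
helm repo add bitnami https://charts.bitnami.com/bitnami
helm template test-app bitnami/nginx --values nginx-variables.yaml | grep -A 7 'server {'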
Now that we have all the files in place, we can create the EKS cluster. Use the terraform init command to initialize the project and install all the provider plugins.
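For reference, the initialisation step looks like this.
# Download the provider plugins and modules defined in the project
terraform init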
Use the terraform apply command to create the cluster. It takes around 15 minutes to create the cluster, so grab a coffee, do some pushups, or whatever you like to pass the time.
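The apply step prints a plan and prompts for confirmation before creating anything.
# Review the plan, then type "yes" to create the cluster and supporting resources
terraform apply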
Once the cluster is created, you will see outputs similar to this.
app_endpoint = "http://a3c79431042194fbdaf0277d3888386b-877897415.ap-southeast-2.elb.amazonaws.com"
cluster_endpoint = "https://5A00986DD5DC17F14B84A16225646089.gr7.ap-southeast-2.eks.amazonaws.com"
cluster_name = "test-eks-cluster"
cluster_security_group_id = "sg-035c9fe5dfa9ca830"
region = "ap-southeast-2"
With the cluster up and running, we can use kubectl to interact with it.
First, use the aws CLI to update your local kubeconfig file with the cluster's access credentials.
aws eks --region $(terraform output -raw region) update-kubeconfig \
--name $(terraform output -raw cluster_name)
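You can confirm kubectl is now pointing at the new cluster by checking the current context, which should reference the EKS cluster.
# The current context should reference the new EKS cluster
kubectl config current-context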
Inspect the cluster information.
kubectl cluster-info
Let's look at the services running in the cluster.
kubectl get services --all-namespaces
Check out the running pods.
kubectl get pods --all-namespaces
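You can also narrow the view to just the application's namespace (test-namespace by default) to see the two nginx replicas and their LoadBalancer service.
# Inspect only the application's namespace
kubectl get pods,services -n test-namespace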
You can also browse to the app_endpoint to see the application running.
curl http://a3c79431042194fbdaf0277d3888386b-877897415.ap-southeast-2.elb.amazonaws.com
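Rather than copying the hostname by hand, you can feed the Terraform output straight into curl; it should return the "hello from kubernetes!" message from our server block.
# Pull the endpoint from the Terraform outputs and hit it
curl "$(terraform output -raw app_endpoint)"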
In this post, we created an Amazon Elastic Kubernetes Service (EKS) cluster using Terraform. We also deployed an application to the cluster via the Helm provider and interacted with the cluster using kubectl.
✌️ Peace out nerds. Stay weird! ✌️
https://developer.hashicorp.com/terraform/tutorials/kubernetes/eks
https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html
https://www.linkedin.com/learning/running-kubernetes-on-aws-eks-22163437
https://docs.aws.amazon.com/eks/latest/userguide/eks-optimized-ami.html
https://docs.aws.amazon.com/eks/latest/userguide/network_reqs.html
https://developer.hashicorp.com/terraform/language/functions/cidrsubnet
https://developer.hashicorp.com/terraform/tutorials/kubernetes/helm-provider
https://helm.sh/docs/intro/install/
https://getbetterdevops.io/terraform-with-helm/