
Building an EKS Cluster with Terraform: A Modern, Modular Approach

Ravindra Singh
30 October 2025 · 5 min read

If EKS already runs Kubernetes for you, why complicate it with Terraform?

That’s usually the first question developers ask, and it’s a good one. EKS gives you a managed control plane, but not a complete setup. The real complexity lives outside it: networking, IAM, storage, and scaling rules.

That’s where most teams hit a wall. The first cluster works, but reproducing it is another story. One missed configuration or permission, and suddenly, everything behaves differently than it should.

Terraform brings order to that chaos. It adds structure, consistency, and confidence: the kind that comes from knowing your infrastructure will behave the same way every single time.

According to Firefly’s State of Infrastructure as Code 2025 report, nearly 90 percent of organizations have adopted IaC practices to manage their cloud environments, with Terraform leading as the most widely used tool. The reason isn’t hype; it is reliability.

And that’s exactly what we’ll build in this guide. But before we dive into the cluster itself, let’s clear the basics first.

1. Why Pair Terraform with EKS?

Terraform and EKS complement each other perfectly. EKS delivers a managed Kubernetes foundation, while Terraform provides the structure to build everything around it through reusable, version-controlled code. Together, they make your infrastructure predictable, scalable, and easy to manage across environments.

When paired, they create a balance of automation and control that most teams spend months trying to achieve manually. Terraform defines the “how,” and EKS handles the “where,” making it simpler to build clusters that scale without surprises.

1.1 Why Teams Choose This Combination

Predictability: Every environment, from development to production, follows the same configuration. No more surprises or hidden differences.

Speed: One command replaces hours of provisioning work. Terraform automates everything that would otherwise require manual setup.

Traceability: Every change is captured in version control, making it easy to audit, review, or roll back safely.

Scalability: Modules let you extend your infrastructure or deploy new clusters quickly without rewriting code.

These are the reasons why Terraform and EKS have become the preferred foundation for modern cloud operations.

Now that the “why” is clear, let’s get practical!

2. Setting Up Your Environment

Before writing any Terraform code, you need a ready environment. Think of this step as laying the foundation before building the structure. Ensuring the right tools and versions are in place will make your deployment smooth and error-free.

2.1 Required Tools

  • Terraform – The core of your setup. Install it by following the official HashiCorp guide.
  • AWS CLI – Your command-line bridge to AWS. Install it and configure it with your credentials.
  • kubectl – The tool for communicating with your Kubernetes cluster once it is up. Install it from the official Kubernetes documentation.
Pro Tip: Always keep these tools up to date. Many deployment issues arise from mismatches between Terraform providers or outdated CLI binaries.
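As an illustration, pinning versions in a required_providers block keeps everyone on the same tooling; the exact constraints below are assumptions, not values from this repository:

terraform {
  required_version = ">= 1.5.0"   # assumed minimum; set this to your team's tested baseline

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"          # assumed constraint; pin to the provider series you have tested
    }
  }
}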

2.2 Verify Your Setup

Once installed, run the following commands to confirm everything’s configured properly:

terraform -v
aws --version
kubectl version --client

If all three return valid versions, you’re good to go.

3. Cloning and Initializing Your Project

Your environment is ready. Now it’s time to pull the Terraform project that contains all the configurations for your EKS setup, including modules for IAM, ACM, and ECR.

Step 1: Clone the Repository

Run the following commands in your terminal.

git clone https://github.com/ravindrasinghh/Kubernetes-Playlist.git
cd Kubernetes-Playlist/Lesson1/

This repository is structured to help you quickly deploy a complete EKS environment. Each file represents a component of the infrastructure that we will explore shortly.

Pro Tip: Create a new workspace before you make any changes. It keeps your environments clean and avoids accidental conflicts later.

terraform workspace new dev

Step 2: Initialize, Plan, and Apply

Now that the files are in place, initialize Terraform and check the plan before applying any changes.

terraform init
terraform plan
terraform apply
  • terraform init sets up your local environment by downloading all required providers and modules.
  • terraform plan displays a preview of what Terraform will create or modify in AWS.
  • terraform apply executes those changes and begins provisioning resources.

It is good practice to always review the plan output before applying it. A quick scan can identify potential misconfigurations and prevent the creation of unwanted resources.
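One way to make that review binding rather than a habit is to save the plan to a file and apply exactly that file; a quick sketch:

terraform plan -out=tfplan    # write the reviewed plan to disk
terraform show tfplan         # inspect the saved plan in readable form
terraform apply tfplan        # apply exactly what was reviewed, nothing more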

4. Understanding the Terraform Components

Every file in the repository has a specific role in building your EKS cluster and the surrounding AWS infrastructure. Understanding this structure helps you customize and scale it later with confidence.

4.1 ACM Certificates (acm.tf)

This file manages SSL certificates through AWS Certificate Manager and validates them automatically using Route 53.

module "acm_backend" {

  source      = "terraform-aws-modules/acm/aws"

  version     = "4.0.1"

  domain_name = "codedevops.cloud"

  subject_alternative_names = [

    "*.codedevops.cloud"

  ]

  zone_id             = data.aws_route53_zone.main.id

  validation_method   = "DNS"

  wait_for_validation = true

  tags = {

    Name = "${local.project}-${local.env}-backend-validation"

  }

}

data "aws_route53_zone" "main" {

  name = "codedevops.cloud."

}

This setup allows your services to use HTTPS securely. It is essential when exposing endpoints, APIs, or ingress controllers to the public internet.
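As an example of putting the certificate to work, it can be attached to an HTTPS listener on a load balancer. The aws_lb_listener below is a hypothetical resource, not part of this repository, and assumes an ALB and target group defined elsewhere; only the acm_certificate_arn output comes from the module above:

resource "aws_lb_listener" "https" {
  load_balancer_arn = aws_lb.app.arn                          # hypothetical ALB defined elsewhere
  port              = 443
  protocol          = "HTTPS"
  certificate_arn   = module.acm_backend.acm_certificate_arn  # output of the ACM module

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app.arn            # hypothetical target group
  }
}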

4.2 AWS Account Data (data.tf)

data "aws_caller_identity" "current" {}

This block fetches your AWS account ID dynamically. It is useful for tagging and referencing without hardcoding account numbers in your configuration.
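For instance, the account ID can feed into values you would otherwise hardcode; a small illustrative snippet (the ECR registry hostname format is standard, the local names are made up):

locals {
  account_id   = data.aws_caller_identity.current.account_id

  # Build the ECR registry hostname without hardcoding the account number
  ecr_registry = "${local.account_id}.dkr.ecr.ap-south-1.amazonaws.com"
}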

4.3 Container Registry (ecr.tf)

resource "aws_ecr_repository" "foo" {

  for_each             = toset(local.ecr_names)

  name                 = each.key

  image_tag_mutability = "MUTABLE"

  tags = {

    Name = "${local.project}-${local.env}-ecr"

  }

}

This creates an Amazon ECR repository for each name in local.ecr_names. You can push, version, and manage Docker images here before deploying them to your EKS cluster.

Pro Tip: Use immutable tags in production environments to prevent overwriting critical images.
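A hardened production repository might look like the sketch below; the resource name and scanning choice are assumptions, not part of this repository:

resource "aws_ecr_repository" "prod" {
  name                 = "codedevops-prod"
  image_tag_mutability = "IMMUTABLE"   # a pushed tag can never be overwritten

  image_scanning_configuration {
    scan_on_push = true                # scan every image as it lands
  }
}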

4.4 The EKS Cluster (eks.tf)

This is the core module that provisions the cluster, node groups, and add-ons.

module "eks" {

  source  = "terraform-aws-modules/eks/aws"

  version = "20.13.1"

  cluster_name = local.cluster_name

  cluster_version = local.cluster_version

  ...

  tags = local.default_tags

}

This file defines how your cluster is created and managed. It specifies the Kubernetes version, VPC details, and node configurations, and it links the necessary IAM roles.

Pro Tip: Enable audit logging for your EKS cluster. It helps track API activity and quickly identify security issues.
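In this repository that setting arrives through terraform.tfvars, which enables only audit logs (see section 5). Broadening coverage is a one-line change; the extra log types below are optional additions, not the repo's defaults:

# in terraform.tfvars
cluster_enabled_log_types = ["audit", "api", "authenticator"]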

4.5 Networking Permissions (IAM Role for VPC CNI)

resource "aws_iam_role" "vpc_cni" {

  name = "${local.prefix}-vpc-cni"

  assume_role_policy = <<EOF

{

  "Version": "2012-10-17",

  "Statement": [

    {

      "Effect": "Allow",

      "Principal": {

        "Federated": "${module.eks.oidc_provider_arn}"

      },

      "Action": "sts:AssumeRoleWithWebIdentity",

      "Condition": {

        "StringEquals": {

          "${module.eks.oidc_provider}:sub": "system:serviceaccount:kube-system:aws-node"

        }

      }

    }

  ]

}

EOF

}

This IAM role allows the AWS VPC CNI add-on to manage network interfaces and IP assignments for pods. Without it, cluster networking and pod communication may fail under scale.
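The trust policy above only lets the aws-node service account assume the role; the role still needs permissions attached. A minimal sketch using the AWS managed policy for the VPC CNI (the attachment resource name is illustrative):

resource "aws_iam_role_policy_attachment" "vpc_cni" {
  role       = aws_iam_role.vpc_cni.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"  # AWS managed policy for the CNI add-on
}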

4.6 Environment Variables and Defaults (locals.tf and variables.tf)

These files define reusable configurations such as project names, environments, tags, and cluster details.

locals {
  environment = terraform.workspace
  project     = "codedevops"

  default_tags = {
    environment = local.environment
    managed_by  = "terraform"
    project     = local.project
  }
}

Keeping these values in one place helps you maintain consistency across all files.
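The repository's variables.tf is not reproduced here; a hedged sketch of the environments variable it would need to declare for the terraform.tfvars shown in section 5 might be:

variable "environments" {
  description = "Per-workspace cluster, networking, and node group settings"
  type        = any   # loose typing keeps the sketch short; the real file may declare a full object type
}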

4.7 Outputs (output.tf)

After deployment, Terraform displays key outputs such as the cluster name, endpoint, and certificate ARN.

output "cluster_name" {

  value       = module.eks.cluster_name

  description = "The name of the created EKS cluster."

}

You can reference these outputs in CI/CD pipelines or connect them to monitoring tools later.
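For example, a pipeline step can read an output and reuse it directly; a short sketch:

# Read a single output without Terraform's default quoting
CLUSTER_NAME=$(terraform output -raw cluster_name)

# Feed it straight into kubectl configuration
aws eks --region ap-south-1 update-kubeconfig --name "$CLUSTER_NAME"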

4.8 Provider Configuration and State Management (provider.tf)

This file defines the Terraform backend and required providers.

terraform {
  backend "s3" {
    bucket = "devsecops-backend-codedevops"
    key    = "secops-dev.tfstate"
    region = "ap-south-1"
  }
}

Storing the Terraform state file in S3 ensures that your state is consistent across teams and remains intact even after local machine resets.
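A common hardening step not shown above is state locking, so two people cannot apply at once; the DynamoDB table name below is an assumption:

terraform {
  backend "s3" {
    bucket         = "devsecops-backend-codedevops"
    key            = "secops-dev.tfstate"
    region         = "ap-south-1"
    encrypt        = true                    # encrypt state at rest
    dynamodb_table = "terraform-state-lock"  # assumed table name; provides state locking
  }
}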

5. Configuring Environment Variables

Now that the components are in place, it is time to define environment-specific values. This is where Terraform becomes truly powerful. You can manage multiple environments, such as development, staging, and production, using a single codebase with different configurations.

5.1 The terraform.tfvars File

This file contains all the variable values for your environment. It includes your cluster details, region, VPC information, and node group configuration.

environments = {
  default = {
    cluster_name                   = "codedevops-cluster"
    env                            = "default"
    region                         = "ap-south-1"
    vpc_id                         = "vpc-02af529e05c41b6bb"
    vpc_cidr                       = "10.0.0.0/16"
    public_subnet_ids              = ["subnet-09aeb297a112767b2", "subnet-0e25e76fb4326ce99"]
    cluster_version                = "1.29"
    cluster_endpoint_public_access = true
    ecr_names                      = ["codedevops"]

    eks_managed_node_groups = {
      generalworkload-v4 = {
        min_size       = 1
        max_size       = 1
        desired_size   = 1
        instance_types = ["m5a.xlarge"]
        capacity_type  = "SPOT"
        disk_size      = 60
        ebs_optimized  = true
      }
    }

    cluster_enabled_log_types = ["audit"]
  }
}

The file above describes a default environment that creates an EKS cluster with managed node groups, auditing enabled, and Elastic Container Registry names defined.

You can duplicate this block to create configurations for other environments, such as staging or production, simply by adjusting variables like region or instance type.
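For instance, a production entry might differ only in sizing and capacity type; every value in this sketch is an assumption for illustration:

environments = {
  production = {
    cluster_name    = "codedevops-cluster-prod"
    env             = "production"
    region          = "ap-south-1"
    cluster_version = "1.29"

    eks_managed_node_groups = {
      generalworkload = {
        min_size       = 2
        max_size       = 5
        desired_size   = 3
        instance_types = ["m5a.xlarge"]
        capacity_type  = "ON_DEMAND"   # on-demand instead of SPOT for production stability
      }
    }
  }
}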

5.2 Why This Step Matters

Keeping configurations environment-specific ensures flexibility without breaking consistency. You can safely test updates or upgrades in one workspace before applying them to another.

Pro Tip: Never hardcode credentials or secrets in your .tfvars file. Use AWS Secrets Manager or Terraform Cloud variables to handle sensitive information securely.

With your environment variables defined, the cluster is now ready to be deployed. The next step is to verify the setup and ensure your cluster is accessible through the command line.

6. Verifying and Accessing the Cluster

After applying your Terraform configuration, the cluster is now live in AWS. The next step is to verify that everything is working correctly and that you can connect to it using the AWS CLI and kubectl.

6.1 Accessing the Cluster

To configure kubectl and connect to your new EKS cluster, run the following command:

aws eks --region <region> update-kubeconfig --name <cluster_name>

This command updates your local kubeconfig file with the cluster’s authentication details. Once complete, you can start interacting with your cluster just like any other Kubernetes environment.
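With the values from this guide's terraform.tfvars, the command looks like this, followed by a quick context check:

aws eks --region ap-south-1 update-kubeconfig --name codedevops-cluster
kubectl config current-context   # should now point at the new cluster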

6.2 Checking the Cluster Status

To verify the connection, run:

kubectl get nodes

If everything is set up correctly, you should see a list of worker nodes with the status Ready. Each node represents an EC2 instance managed by EKS as part of your defined node groups.

If your nodes are not showing up or remain in a non-ready state, check the following:

  • The security groups associated with your cluster may be too restrictive.
  • The IAM role attached to the node group might be missing necessary permissions.
  • The subnets may not have proper tags or internet access if you used public nodes (see the tagging sketch after the Pro Tip below).
Pro Tip: Always double-check your networking configuration before troubleshooting at the application layer. Most EKS connectivity issues trace back to VPC or security group settings.
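For public subnets, the tags EKS and its load balancer controllers expect look like the sketch below; the subnet ID and cluster name are taken from this guide's terraform.tfvars:

resource "aws_ec2_tag" "cluster_shared" {
  resource_id = "subnet-09aeb297a112767b2"                   # one of the subnets from terraform.tfvars
  key         = "kubernetes.io/cluster/codedevops-cluster"
  value       = "shared"
}

resource "aws_ec2_tag" "public_elb" {
  resource_id = "subnet-09aeb297a112767b2"
  key         = "kubernetes.io/role/elb"                     # marks the subnet for public load balancers
  value       = "1"
}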

6.3 Validating Add-ons

You can also check that EKS add-ons such as CoreDNS and the VPC CNI plugin are running properly by executing:

kubectl get pods -n kube-system

All system pods should appear in the Running state. If any are Pending or crash looping, the cause is often the IAM or network permissions defined earlier in your Terraform setup.
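A couple of commands usually reveal the cause quickly; the label selector assumes the default aws-node labels:

# Events at the bottom of the output explain Pending or crash looping pods
kubectl describe pod <pod-name> -n kube-system

# Tail the VPC CNI logs across its DaemonSet pods
kubectl logs -n kube-system -l k8s-app=aws-node --tail=50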

Once you have confirmed connectivity and system stability, your EKS cluster is ready for workloads. The final step is to review common troubleshooting scenarios and fine-tune your setup for reliability.

7. Troubleshooting and Best Practices

Even with Terraform automating most of the work, EKS clusters can still run into issues. Understanding where to look and how to fix problems quickly is what separates a smooth deployment from a frustrating one.

7.1 Common Issues and Fixes

7.1.1 Cluster Creation Fails

If your cluster does not finish creating, review the terraform plan output for missing permissions or misconfigured variables. Check IAM policies first, as many errors stem from roles lacking required EKS or EC2 permissions.

7.1.2 Nodes Do Not Appear or Remain NotReady

This usually points to networking issues. Verify that your subnets have proper routing and that security groups allow required ports between worker nodes and the control plane. Additionally, ensure that the node IAM role has access to the EKS APIs.

7.1.3 kubectl Cannot Connect to the Cluster

Confirm that your AWS CLI region matches the cluster’s region and that your kubeconfig file was updated correctly. Run aws eks list-clusters to verify the cluster name.

7.1.4 ACM Certificate Validation is Stuck

If your ACM certificate remains in a pending state, check Route 53 for the automatically created DNS records. Ensure they exist and point to the correct hosted zone.

7.2 Best Practices for Reliable EKS Management

7.2.1 Always Review Before You Apply

Run terraform plan before every terraform apply to confirm what resources are being created or modified. This habit prevents accidental changes in shared environments.

7.2.2 Use Version Control

Keep all Terraform files in Git. It provides a complete history of infrastructure changes and allows for rollbacks when needed.

7.2.3 Separate Environments by Workspaces

Using Terraform workspaces for dev, staging, and production helps isolate environments and reduces the risk of cross-environment conflicts.
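The day-to-day workspace commands are short; a quick sketch, reusing the dev workspace created earlier:

terraform workspace list          # show all workspaces and mark the current one
terraform workspace select dev    # switch to the dev workspace
terraform workspace new staging   # create an isolated staging workspace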

7.2.4 Monitor Your Cluster

Set up AWS CloudWatch or Prometheus to track cluster metrics and logs. Observability helps you catch performance issues before they affect users.

7.2.5 Use Infrastructure Modules Wisely

Leverage official Terraform modules for EKS and ACM, but version them carefully. Updates can introduce breaking changes, so always test in a non-production workspace first.
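One option is a pessimistic constraint that accepts patch releases but refuses the next major version; the bound below is an assumption based on the version used in eks.tf:

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.13"   # allows 20.x updates, refuses 21.0 and above
  # ...inputs as before
}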

With your cluster verified and fine-tuned, you now have a fully functional EKS environment managed entirely through Terraform. The final step is to summarize what you have achieved and highlight key lessons for maintaining and scaling your setup.

8. Conclusion

You have now built a complete Amazon EKS cluster using Terraform, from setting up the environment to validating your deployment and resolving common issues. What once required hours of manual configuration through the AWS Console can now be replicated in minutes using infrastructure as code.

This approach simplifies provisioning and brings stability and consistency to every deployment. With Terraform and EKS working together, your clusters become scalable, auditable, and easy to manage across environments. You gain time to focus on performance, security, and innovation instead of constant reconfiguration.

At Coditas, we help teams reach this same level of automation and reliability through AWS-native solutions. From optimizing Kubernetes environments to modernizing cloud infrastructure, our engineers design systems that are efficient, secure, and built to scale.

Ready to take your AWS infrastructure to the next level?

Let’s talk.

References:

https://www.firefly.ai/state-of-iac-2025

https://dev.to/aws-builders/create-an-eks-cluster-using-terraform-1734
