Provision an EKS cluster (AWS)
Amazon Elastic Kubernetes Service (EKS) is a managed service that lets you deploy, manage, and scale containerized applications on Kubernetes.
In this tutorial, you will deploy an EKS cluster using Terraform. Then, you will configure kubectl using Terraform output and verify that your cluster is ready to use.
Warning
AWS EKS clusters cost $0.10 per hour, so you may incur charges by running this tutorial. The cost should be a few dollars at most, but be sure to delete your infrastructure promptly to avoid additional charges. We are not responsible for any charges you may incur.
Why deploy with Terraform?
While you could use the built-in AWS provisioning processes (UI, CLI, CloudFormation) for EKS clusters, Terraform provides you with several benefits:
Unified Workflow - If you already use Terraform to deploy AWS infrastructure, you can use the same workflow to deploy both EKS clusters and applications into those clusters.
Full Lifecycle Management - Terraform creates, updates, and deletes tracked resources without requiring you to inspect an API to identify those resources.
Graph of Relationships - Terraform determines and observes dependencies between resources. For example, if an AWS Kubernetes cluster needs a specific VPC and subnet configuration, Terraform will not attempt to create the cluster if it fails to provision the VPC and subnet first.
Prerequisites
The tutorial assumes some basic familiarity with Kubernetes and kubectl but does not assume any pre-existing deployment.
You can complete this tutorial using the same workflow with either Terraform Community Edition or HCP Terraform. HCP Terraform is a platform that you can use to manage and execute your Terraform projects. It includes features like remote state and execution, structured plan output, workspace resource summaries, and more.
Select the Terraform Community Edition tab to complete this tutorial using Terraform Community Edition.
This tutorial assumes that you are familiar with the Terraform and HCP Terraform workflows. If you are new to Terraform, complete the Get Started collection first. If you are new to HCP Terraform, complete the HCP Terraform Get Started tutorials first.
For this tutorial, you will need:
- Terraform v1.3+ installed locally
- an HCP Terraform account and organization
- HCP Terraform locally authenticated
- an HCP Terraform variable set configured with your AWS credentials
- an AWS account
- the AWS CLI v2.7.0/v1.24.0 or newer, installed and configured
- the AWS IAM Authenticator
- kubectl v1.24.0 or newer
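As a quick sanity check, you can confirm that the CLI tools are installed and meet these version requirements; this is a convenience sketch, not a tutorial step.

```
# Confirm tool versions meet the tutorial's minimums
$ terraform version
$ aws --version
$ kubectl version --client
$ aws-iam-authenticator version
```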
Set up and initialize your Terraform workspace
In your terminal, clone the example repository for this tutorial.
Change into the repository directory.
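Assuming the example repository is HashiCorp's learn-terraform-provision-eks-cluster repository (confirm the URL on the tutorial page before cloning), these steps look like the following:

```
# Clone the example configuration and change into it
# (repository URL is assumed; verify it against the tutorial page)
$ git clone https://github.com/hashicorp/learn-terraform-provision-eks-cluster
$ cd learn-terraform-provision-eks-cluster
```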
This example repository contains configuration to provision a VPC, security groups, and an EKS cluster.
The configuration defines a new VPC in which to provision the cluster, and uses the public EKS module to create the required resources, including Auto Scaling Groups, security groups, and IAM Roles and Policies.
Open the main.tf file to review the module configuration. The eks_managed_node_groups parameter configures the cluster with three nodes across two node groups.
Initialize configuration
Set the TF_CLOUD_ORGANIZATION environment variable to your HCP Terraform organization name. This will configure your HCP Terraform integration.
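For example, replacing the placeholder with your organization name:

```
# Point the CLI integration at your HCP Terraform organization
$ export TF_CLOUD_ORGANIZATION="<YOUR-HCP-TERRAFORM-ORG>"
```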
Initialize your configuration. Terraform will automatically create the learn-terraform-eks workspace in your HCP Terraform organization.
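The initialization step is the standard one, run from the repository directory:

```
# Initialize the working directory and create the HCP Terraform workspace
$ terraform init
```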
Note
This tutorial assumes that you are using a tutorial-specific HCP Terraform organization with a global variable set of your AWS credentials. Review the Create a Credential Variable Set tutorial for detailed guidance. If you are using a scoped variable set, assign it to your new workspace now.
Provision the EKS cluster
Run terraform apply to create your cluster and other necessary resources. Confirm the operation with a yes.
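The apply step looks like this:

```
# Create the VPC, EKS cluster, and supporting resources
$ terraform apply
```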
This process can take up to 10 minutes. Upon completion, Terraform will print your configuration's outputs.
Configure kubectl
After provisioning your cluster, you need to configure kubectl to interact with it.
First, open the outputs.tf file to review the output values. You will use the region and cluster_name outputs to configure kubectl.
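If you want to inspect these values directly, the terraform output command prints them; the -raw flag prints a single value without quotes:

```
# List all output values from the latest apply
$ terraform output

# Print a single output value, for example the cluster name
$ terraform output -raw cluster_name
```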
Run the following command to retrieve the access credentials for your cluster and configure kubectl.
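One common way to do this is with the AWS CLI's update-kubeconfig command, fed by the Terraform outputs; the sketch below assumes that approach.

```
# Merge the new cluster's credentials into your kubeconfig,
# using the region and cluster_name outputs from Terraform
$ aws eks update-kubeconfig \
    --region $(terraform output -raw region) \
    --name $(terraform output -raw cluster_name)
```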
You can now use kubectl to manage your cluster and deploy Kubernetes configurations to it.
Verify the cluster
Use kubectl commands to verify your cluster configuration.
First, get information about the cluster.
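For example:

```
# Print the control plane endpoint and core cluster services
$ kubectl cluster-info
```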
Notice that the Kubernetes control plane location matches the cluster_endpoint value from the terraform apply output above.
Now verify that all three worker nodes are part of the cluster.
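To list the nodes:

```
# List the worker nodes; all three should report a Ready status
$ kubectl get nodes
```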
You have verified that you can connect to your cluster using kubectl and that all three worker nodes are healthy. Your cluster is ready to use.
Clean up your workspace
You have now provisioned an EKS cluster, configured kubectl, and verified that your cluster is ready to use.
To learn how to manage your EKS cluster using the Terraform Kubernetes Provider, leave your cluster running and continue to the Kubernetes provider tutorial.
Note
This tutorial's example configuration contains only the configuration to create an EKS cluster. You can also use Terraform to manage the Kubernetes objects deployed to a cluster, such as Deployments and Services. We recommend that you keep the Terraform configuration for each of those concerns separate, since their workflows and owners often differ.
Destroy the resources you created in this tutorial to avoid incurring extra charges. Respond yes to the prompt to confirm the operation.
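The destroy step mirrors the apply:

```
# Tear down everything this configuration created
$ terraform destroy
```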
If you used HCP Terraform for this tutorial, after destroying your resources, delete the learn-terraform-eks workspace from your HCP Terraform organization.
Next steps
For more information on the EKS module, visit the EKS module page in the Terraform Registry.
To learn how to manage Kubernetes resources, your EKS cluster, or existing Kubernetes clusters, visit the Kubernetes provider tutorial.
You can also use the Kubernetes provider to deploy custom resources.
For a more in-depth Kubernetes example, see Deploy Consul and Vault on a Kubernetes Cluster using Run Triggers (this tutorial is GKE-based).