Deploying an EKS cluster with Terragrunt
The goal of this walk-through is to create and provision an AWS EKS cluster with Terragrunt and to introduce a possible architecture for complex cloud infrastructure with multiple resources in more than one environment.
Introduction
Terragrunt is an open-source wrapper around Terraform. It provides tools for keeping your Terraform configurations DRY (Don't Repeat Yourself) and offers several advantages over plain Terraform:
- eliminating duplicated backend code
- executing Terraform commands across multiple modules at once
- provisioning multiple environments
- working with multiple AWS accounts
Prerequisites
Before we begin, make sure that:
- you have an AWS account
- your AWS credentials are properly configured
- Terraform and Terragrunt are installed on your machine
Project Architecture
Folder structure
For configurations that apply to the whole project, we create a terragrunt.hcl file in the repo root. This is where we configure the Terraform backend; on the first run, Terragrunt will offer to create the S3 state bucket and the DynamoDB lock table if they don't already exist.
remote_state {
  backend = "s3"
  generate = {
    path      = "state.tf"
    if_exists = "overwrite_terragrunt"
  }
  config = {
    key            = "${path_relative_to_include()}/terraform.tfstate"
    bucket         = "<PROJECT_NAME>-state"
    region         = "<AWS_REGION>"
    encrypt        = true
    dynamodb_table = "<PROJECT_NAME>-lock-table"
  }
}
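For reference, the state.tf that Terragrunt generates into each module's working directory looks roughly like this (illustrative; the key reflects the module's path relative to the root):

# state.tf (generated by Terragrunt, do not edit by hand)
terraform {
  backend "s3" {
    bucket         = "<PROJECT_NAME>-state"
    key            = "dev/vpc/terraform.tfstate"
    region         = "<AWS_REGION>"
    encrypt        = true
    dynamodb_table = "<PROJECT_NAME>-lock-table"
  }
}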
generate "provider" {
path = "provider.tf"
if_exists = "overwrite_terragrunt"
contents = <<EOF
provider "aws" {
region = "<AWS_REGION>"
}
EOF
}
In the repo root, we can also define every other file that applies to the whole project. For example, we can pin the versions of Terraform and Terragrunt the project uses, as shown below.
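A minimal sketch of such pinning, using Terragrunt's built-in version-constraint attributes in the root terragrunt.hcl (the version numbers below are placeholders, not values from this walk-through):

# Root terragrunt.hcl -- version constraints (example values)
terraform_version_constraint  = ">= 1.5.0"
terragrunt_version_constraint = ">= 0.48.0"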
live/
  prod/
  dev/
    eks/
      terragrunt.hcl
    vpc/
      terragrunt.hcl
    env.hcl
terragrunt.hcl
Configuring environment variables
In the same Terragrunt project, we can define as many environments as we want. Each environment folder contains the AWS resources as modules, along with an env.hcl file holding the environment variables. In this file we define every attribute of our resources. Ideally, it acts as a single source of truth: most changes to the project can be achieved by editing this one file.
locals {
  env     = "dev"
  project = "terragrunt-eks"

  # EKS variables
  eks_cluster_name              = "${local.env}-${local.project}-cluster"
  eks_cluster_version           = "1.27"
  eks_create_aws_auth_configmap = false
  eks_manage_aws_auth_configmap = true

  # EKS allowed users
  aws_auth_users = [
    {
      userarn  = "arn:aws:iam::<AWS_ID>:user/<USER>"
      username = "<USERNAME>"
      groups   = ["system:masters"]
    }
  ]

  # VPC variables
  vpc_cidr                      = "10.0.0.0/16"
  vpc_enable_nat_gateway        = true
  vpc_enable_single_nat_gateway = true
  availability_zone             = [<YOUR-AVAILABILITY-ZONES>]

  tags = {
    Name        = "${local.env}-${local.project}"
    Environment = "${local.env}"
  }
}
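As an aside, the include "env" blocks shown in the next sections are one way to consume these locals. A sketch of an alternative, using Terragrunt's read_terragrunt_config helper inside a module's terragrunt.hcl, would look like this:

locals {
  # Load env.hcl directly instead of exposing it through an include block
  env_vars     = read_terragrunt_config(find_in_parent_folders("env.hcl"))
  cluster_name = local.env_vars.locals.eks_cluster_name
}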
Choosing Terraform Modules
Working with community Terraform modules limits how highly customized our infrastructure can be; however, it comes with many advantages: it reduces the time we spend and the amount of code we have to write and later maintain.
The Terraform registry has thousands of modules. In this case, for our EKS cluster, we will use terraform-aws-modules/eks/aws. In our environment, we create a directory for the eks module with a terragrunt.hcl file. This Terragrunt file contains only the information specific to this module.
terraform {
  source = "tfr:///terraform-aws-modules/eks/aws//.?version=19.15.3"
}
include "root" {
  path = find_in_parent_folders()
}
include "env" {
  path           = find_in_parent_folders("env.hcl")
  expose         = true
  merge_strategy = "no_merge"
}
inputs = {
  cluster_version = include.env.locals.eks_cluster_version
  cluster_name    = include.env.locals.eks_cluster_name
  vpc_id          = dependency.vpc.outputs.vpc_id
  subnet_ids      = dependency.vpc.outputs.private_subnets
  // other optional inputs
  create_aws_auth_configmap = include.env.locals.eks_create_aws_auth_configmap
  manage_aws_auth_configmap = include.env.locals.eks_manage_aws_auth_configmap
  // pass the users defined in env.hcl so they end up in the aws-auth ConfigMap
  aws_auth_users            = include.env.locals.aws_auth_users
}
dependency "vpc" {
  config_path = "${get_original_terragrunt_dir()}/../vpc"
  mock_outputs = {
    vpc_id = "vpc-00000000"
    private_subnets = [
      "subnet-00000000",
      "subnet-00000001",
      "subnet-00000002",
    ]
  }
}
- terraform block: defines the source module and pins its version
- include root block: lets Terragrunt use the configuration files in parent folders
- include env block: lets Terragrunt use the variables defined in our env.hcl file
- inputs block: the module's inputs
- dependency block: the modules this module depends on; the mock outputs let Terragrunt evaluate this configuration before the VPC exists (see the sketch after this list)
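By default, the mock values can be returned for any command while the VPC has not been applied yet. Terragrunt's dependency block supports an allow-list to restrict them to read-only commands; a sketch, adding only one attribute to the configuration above:

dependency "vpc" {
  config_path = "${get_original_terragrunt_dir()}/../vpc"
  # Only fall back to mock values for commands that never touch real infrastructure
  mock_outputs_allowed_terraform_commands = ["validate", "plan"]
  mock_outputs = {
    vpc_id          = "vpc-00000000"
    private_subnets = ["subnet-00000000", "subnet-00000001", "subnet-00000002"]
  }
}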
Creating the vpc module
terraform {
  source = "tfr:///terraform-aws-modules/vpc/aws//.?version=5.1.1"
}
include "root" {
  path = find_in_parent_folders()
}
include "env" {
  path           = find_in_parent_folders("env.hcl")
  expose         = true
  merge_strategy = "no_merge"
}
inputs = {
  cidr            = include.env.locals.vpc_cidr
  azs             = include.env.locals.availability_zone
  private_subnets = [for k, v in include.env.locals.availability_zone : cidrsubnet(include.env.locals.vpc_cidr, 4, k)]
  public_subnets  = [for k, v in include.env.locals.availability_zone : cidrsubnet(include.env.locals.vpc_cidr, 8, k + 48)]

  enable_nat_gateway = include.env.locals.vpc_enable_nat_gateway
  single_nat_gateway = include.env.locals.vpc_enable_single_nat_gateway
  tags               = include.env.locals.tags
}
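To make the cidrsubnet arithmetic concrete: with the example vpc_cidr of 10.0.0.0/16 and three availability zones, the two expressions above produce the following ranges (worked out by hand; you can double-check them with terraform console):

# cidrsubnet("10.0.0.0/16", 4, k)      adds 4 bits -> /20 private subnets
#   k = 0 -> 10.0.0.0/20
#   k = 1 -> 10.0.16.0/20
#   k = 2 -> 10.0.32.0/20
# cidrsubnet("10.0.0.0/16", 8, k + 48) adds 8 bits -> /24 public subnets
#   k = 0 -> 10.0.48.0/24
#   k = 1 -> 10.0.49.0/24
#   k = 2 -> 10.0.50.0/24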
Manage EKS aws-auth configmap
In order to access the cluster, we add our AWS user to the cluster's aws-auth ConfigMap. For Terraform to manage this ConfigMap while the cluster is being created, it needs the Kubernetes provider configured against the new cluster.
Step 1: Create a generate provider block in the eks module's terragrunt.hcl file
generate "provider-local" {
path = "provider-local.tf"
if_exists = "overwrite"
contents = file("../../../provider-config/eks/eks.tf")
}
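One caveat: a relative path like ../../../ silently breaks if the module folder is ever moved. A sketch of a more robust variant, assuming the provider-config folder lives next to the root terragrunt.hcl, uses Terragrunt's get_parent_terragrunt_dir() helper:

generate "provider-local" {
  path      = "provider-local.tf"
  if_exists = "overwrite"
  # get_parent_terragrunt_dir() resolves to the folder holding the root terragrunt.hcl
  contents  = file("${get_parent_terragrunt_dir()}/provider-config/eks/eks.tf")
}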
Step 2: Define the provider for authenticating to the cluster in the provider-config/eks/eks.tf file. Because this file is generated into the EKS module's working directory, it can reference the module's internal aws_eks_cluster.this resource directly.
provider "kubernetes" {
host = aws_eks_cluster.this[0].endpoint
cluster_ca_certificate = base64decode(aws_eks_cluster.this[0].certificate_authority.0.data)
token = data.aws_eks_cluster_auth.default.token
exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
args = ["eks", "get-token", "--cluster-name", aws_eks_cluster.this[0].id]
}
}
data "aws_eks_cluster_auth" "default" {
name = var.cluster_name
}
Conclusion
We defined an AWS EKS cluster with the help of Terraform and Terragrunt. The only thing left is to apply it. From the project root, we can use these commands:
- terragrunt run-all init: initialize every module in the project
- terragrunt run-all plan: see the detailed plan for the whole project
- terragrunt run-all apply: create the VPC and the cluster
Since run-all operates on the directory tree beneath the current working directory, running the same commands from live/dev limits them to that single environment.