Refactor monolithic Terraform configuration
Some Terraform projects start as a monolith, a Terraform project managed by a single main configuration file in a single directory, with a single state file. Small projects may be convenient to maintain this way. However, as your infrastructure grows, restructuring your monolith into logical units will make your Terraform configurations less confusing and safer to manage.
These tutorials are for Terraform users who need to restructure Terraform configurations as they grow. In this tutorial, you will provision two instances of a web application hosted in an S3 bucket that represent production and development environments. The configuration you use to deploy the application will start as a monolith. You will modify it to step through the common phases of evolution for a Terraform project, until each environment has its own independent configuration and state.
Prerequisites
Although the concepts in this tutorial apply to any module creation workflow, this tutorial uses Amazon Web Services (AWS) modules.
To follow this tutorial you will need:
- An AWS account. Configure one of the authentication methods described in our AWS Provider Documentation. The examples in this tutorial assume that you are using the Shared Credentials file method with the default AWS credentials file and default profile.
- The AWS CLI
- The Terraform CLI
Apply a monolith configuration
In your terminal, clone the example repository. It contains the configuration used in this tutorial.
Tip
Throughout this tutorial, you will have the option to check out branches that correspond to the version of Terraform configuration in that section. You can use this as a failsafe if your deployment is not working correctly, or to run the tutorial without making changes manually.
Navigate to the directory.
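A minimal command sketch for these two steps; the repository name shown here is an assumption based on HashiCorp's usual naming for this tutorial's example repository, so use the URL from the tutorial page if it differs:

```shell
git clone https://github.com/hashicorp/learn-terraform-code-organization
cd learn-terraform-code-organization
```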
Your root directory contains four files and an "assets" folder. The root directory files compose the configuration as well as the inputs and outputs of your deployment.
- `main.tf` configures the resources that make up your infrastructure.
- `variables.tf` declares input variables for your `dev` and `prod` environment prefixes, and the AWS region to deploy to.
- `terraform.tfvars.example` defines your region and environment prefixes.
- `outputs.tf` specifies the website endpoints for your `dev` and `prod` buckets.
- `assets` houses your webapp HTML file.
In your text editor, open the `main.tf` file. The file consists of a few different resources:
- The `random_pet` resource creates a string to be used as part of the unique name of your S3 bucket.
- Two `aws_s3_bucket` resources designated `prod` and `dev`, which each create an S3 bucket. Notice that the `bucket` argument defines the S3 bucket name by interpolating the environment prefix and the `random_pet` resource name.
- Two `aws_s3_bucket_acl` resources designated `prod` and `dev`, which set a `public-read` ACL for your buckets.
- Two `aws_s3_bucket_website_configuration` resources designated `prod` and `dev`, which configure your buckets to host websites.
- Two `aws_s3_bucket_policy` resources designated `prod` and `dev`, which allow anyone to read the objects in the corresponding bucket.
- Two `aws_s3_object` resources designated `prod` and `dev`, which load the file in the local `assets` directory using the built-in `file()` function and upload it to your S3 buckets.
Terraform requires unique identifiers - in this case `prod` or `dev` for each S3 resource - to create separate resources of the same type.
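As a minimal sketch of what this looks like in `main.tf` (the resource labels and variable names below are illustrative and may not match the example repository exactly):

```hcl
# Illustrative only: labels and variable names may differ in the example repository.
resource "random_pet" "petname" {
  length    = 3
  separator = "-"
}

resource "aws_s3_bucket" "dev" {
  # The environment prefix and the random pet name are interpolated into the bucket name.
  bucket = "${var.dev_prefix}-${random_pet.petname.id}"
}

resource "aws_s3_bucket" "prod" {
  bucket = "${var.prod_prefix}-${random_pet.petname.id}"
}
```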
Open the `terraform.tfvars.example` file in your repository and edit it with your own variable definitions. Change the region to your nearest location in your text editor.

Save your changes in your editor and rename the file to `terraform.tfvars`. Terraform automatically loads variable values from any files that end in `.tfvars`.
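For example, your finished `terraform.tfvars` might look like the following; the variable names and values are placeholders, so match them to the declarations in `variables.tf`:

```hcl
# Placeholder values; use your nearest AWS region and your own prefixes.
region      = "us-east-1"
prod_prefix = "prod"
dev_prefix  = "dev"
```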
In your terminal, initialize your Terraform project.
Then, apply the configuration.
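These two steps use the standard Terraform workflow commands:

```shell
terraform init
terraform apply
```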
Accept the apply plan by entering `yes` in your terminal to create the resources.

Navigate to the web address from the Terraform output to display the deployment in a browser. Your directory now contains a state file, `terraform.tfstate`.
Separate configuration
Defining multiple environments in the same `main.tf` file may become hard to manage as you add more resources. The HashiCorp Configuration Language (HCL), which is the language used to write Terraform configurations, is meant to be human-readable and supports using multiple configuration files to help organize your infrastructure.
You will organize your current configuration by separating the configurations into two separate files — one root module for each environment.
To split the configuration, first make a copy of `main.tf` and name it `dev.tf`. Then rename the `main.tf` file to `prod.tf`.
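A quick way to do this from the command line, assuming you are in the repository root:

```shell
cp main.tf dev.tf
mv main.tf prod.tf
```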
You now have two identical files. Open `dev.tf` and remove any references to the production environment by deleting the resource blocks with the `prod` ID. Repeat the process for `prod.tf` by removing any resource blocks with the `dev` ID.
Tip
To fast-forward to this file-separated configuration, check out the branch in your example repository by running `git checkout file-separation`.
Your directory structure will look similar to the one below.
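A rough sketch of that structure; the HTML file name under `assets` is an assumption and may differ in your repository:

```
.
├── assets
│   └── index.html
├── dev.tf
├── prod.tf
├── outputs.tf
├── terraform.tfstate
├── terraform.tfvars
└── variables.tf
```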
Although your resources are organized in environment-specific files, your `variables.tf` and `terraform.tfvars` files contain the variable declarations and definitions for both environments.
Terraform loads all configuration files within a directory and appends them together, so any resources or providers with the same name in the same directory will cause a validation error. If you were to run a `terraform` command now, your `random_pet` resource and `provider` block would cause errors since they are duplicated across the two files.
Edit the `prod.tf` file by commenting out the `terraform` block, the `provider` block, and the `random_pet` resource. You can comment out the configuration by adding `/*` at the beginning of the commented-out block and `*/` at the end, as shown below.
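A sketch of the commented-out section in `prod.tf`; the exact contents of your `terraform`, `provider`, and `random_pet` blocks may differ:

```hcl
/*
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = var.region
}

resource "random_pet" "petname" {
  length    = 3
  separator = "-"
}
*/
```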
With your `prod.tf` shared resources commented out, your production environment will still inherit the value of the `random_pet` resource in your `dev.tf` file.
Simulate a hidden dependency
You may want your development and production environments to share bucket names, but the current configuration is particularly dangerous because of the hidden resource dependency built into it. Imagine that you want to test a random pet name with four words in development. In `dev.tf`, update your `random_pet` resource's `length` attribute to `4`.
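For example (the resource label `petname` is illustrative):

```hcl
resource "random_pet" "petname" {
  length    = 4
  separator = "-"
}
```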
You might think you are only updating the development environment because you only changed `dev.tf`, but remember, this value is referenced by both `prod` and `dev` resources.
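Apply the change to see this in action:

```shell
terraform apply
```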
Enter `yes` when prompted to apply the changes.
Note that the operation updated all of your resources by destroying and recreating them. In this scenario, you encountered a hidden resource dependency because both bucket names rely on the same resource.

Carefully review Terraform execution plans before applying them. If an operator does not carefully review the plan output, or if CI/CD pipelines automatically apply changes, you may accidentally apply breaking changes to your resources.
Destroy your resources before moving on. Respond to the confirmation prompt with `yes`.
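Run the destroy from the root directory:

```shell
terraform destroy
```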
Separate states
The previous operation destroyed both the development and production environment resources. When working with monolithic configuration, you can use the `terraform apply` command with the `-target` flag to scope the resources to operate on, but that approach can be risky and is not a sustainable way to manage distinct environments. For safer operations, you need to separate your development and production state.
State separation signals more mature usage of Terraform; with additional maturity comes additional complexity. There are two primary methods to separate state between environments: directories and workspaces.
To separate environments with potential configuration differences, use a directory structure. Use workspaces for environments that do not greatly deviate from one another, to avoid duplicating your configurations. Try both methods in the tabs below to understand which will serve your infrastructure best.
By creating separate directories for each environment, you can shrink the blast radius of your Terraform operations and ensure you will only modify intended infrastructure. Terraform stores your state files on disk in their corresponding configuration directories. Terraform operates only on the state and configuration in the working directory by default.
Directory-separated environments rely on duplicate Terraform code. This may be useful if you want to test changes in a development environment before promoting them to production. However, the directory structure runs the risk of creating drift between the environments over time. If you want to reconfigure a project with a single state file into directory-separated states, you must perform advanced state operations to move the resources.
After reorganizing your environments into directories, your file structure should look like the one below.
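A sketch of the target layout; the HTML file name under `assets` is an assumption:

```
.
├── assets
│   └── index.html
├── dev
│   ├── main.tf
│   ├── variables.tf
│   ├── terraform.tfvars
│   └── outputs.tf
└── prod
    ├── main.tf
    ├── variables.tf
    ├── terraform.tfvars
    └── outputs.tf
```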
Create prod and dev directories

Create directories named `prod` and `dev`.
Move the `dev.tf` file to the `dev` directory, and rename it to `main.tf`.
Copy the `variables.tf`, `terraform.tfvars`, and `outputs.tf` files to the `dev` directory.
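From the repository root, the three steps above might look like this; adjust paths if your layout differs:

```shell
mkdir prod dev
mv dev.tf dev/main.tf
cp variables.tf terraform.tfvars outputs.tf dev/
```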
Your environment directories are now one step removed from the `assets` folder where your webapp lives. Open the `dev/main.tf` file in your text editor and edit the file to reflect this change by adding `/..` to the file path in the `content` argument of the `aws_s3_object` resource, before the `assets` subdirectory.
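The updated resource might look like the sketch below; the object key and HTML file name are assumptions, and only the added `/..` in the path is the point:

```hcl
resource "aws_s3_object" "dev" {
  # bucket, key, and other arguments stay as they are in your configuration;
  # only the path passed to file() gains a "/.." segment.
  bucket  = aws_s3_bucket.dev.id
  key     = "index.html"
  content = file("${path.module}/../assets/index.html")
}
```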
You will need to remove the references to the `prod` environment from your `dev` configuration files.
First, open `dev/outputs.tf` in your text editor and remove the reference to the `prod` environment.

Next, open `dev/variables.tf` and remove the reference to the `prod` environment.

Finally, open `dev/terraform.tfvars` and remove the reference to the `prod` environment.
Create a prod directory
Rename `prod.tf` to `main.tf` and move it to your production directory.

Move the `variables.tf`, `terraform.tfvars`, and `outputs.tf` files to the `prod` directory.
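Again from the repository root, a command sketch for these moves:

```shell
mv prod.tf prod/main.tf
mv variables.tf terraform.tfvars outputs.tf prod/
```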
Repeat the steps you took in the `dev` directory, and uncomment the `random_pet` and `provider` blocks in `main.tf`.
First, open `prod/main.tf` and edit it to reflect the new directory structure by adding `/..` to the file path in the `content` argument of the `aws_s3_object` resource, before the `assets` subdirectory.
Next, remove the references to the `dev` environment from `prod/variables.tf`, `prod/outputs.tf`, and `prod/terraform.tfvars`.
Finally, uncomment the `terraform` block, the `provider` block, and the `random_pet` resource in `prod/main.tf`.
Tip
To fast-forward to this configuration, run `git checkout directories`.
Deploy environments
To deploy, change directories into your development environment.
This directory is new to Terraform, so you must initialize it.
Run an apply for the development environment and enter `yes` when prompted to accept the changes.
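A minimal sketch of these steps, starting from the repository root:

```shell
cd dev
terraform init
terraform apply
```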
You now have only one output from this deployment. Check your website endpoint in a browser.
Repeat these steps for your production environment.
This directory is new to Terraform, so you must initialize it first.
Run your apply for your production environment and enter `yes` when prompted to accept the changes. Check your website endpoint in a browser.
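The equivalent commands for production, assuming you are still in the `dev` directory:

```shell
cd ../prod
terraform init
terraform apply
```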
Now your development and production environments are in separate directories, each with their own configuration files and state.
Destroy infrastructure
Before moving on to the second approach to environment separation, destroy both the `dev` and `prod` resources.
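Run a destroy in each environment directory, for example starting from the repository root:

```shell
cd dev && terraform destroy
cd ../prod && terraform destroy
```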
To learn about another method of environment separation, navigate to the "Workspaces" tab.
Next steps
In this exercise, you learned how to restructure a monolithic Terraform configuration that managed multiple environments. You separated those environments by creating different directories or workspaces, with separate state files for each. To learn more about how to organize your configuration, review the following resources:
Learn how to use and create modules to combat configuration drift.
Learn about how HCP Terraform eases state management and using Terraform as a team.
Learn how to use remote backends and migrate your configuration.