In this project, I aimed to consolidate my Terraform knowledge by demonstrating best practices while provisioning AWS resources. The source code for this project can be found here!
Project Overview
The architecture diagram below illustrates an AWS EC2 instance running a containerized Nginx server inside a VPC, complete with a subnet and an internet gateway. To deploy our application, we need to create the following AWS resources:
AWS VPC
AWS Subnet
AWS Internet Gateway
AWS Route Table
AWS EC2 Instance
AWS Key Pair
AWS Security Group
Integration of Resources: The Non-Terraform Way
1. Create a VPC
Navigate to the VPC Dashboard in the AWS Management Console.
Click on "Create VPC".
Specify a CIDR block (e.g., 10.0.0.0/16).
Name your VPC and click "Create".
2. Create a Subnet
Navigate to the Subnets section within the VPC dashboard.
Click on "Create Subnet".
Select your VPC, specify a CIDR block (e.g., 10.0.1.0/24), and select an availability zone.
Name your subnet and click "Create".
3. Create an Internet Gateway
Navigate to the Internet Gateways section in the VPC dashboard.
Click on "Create Internet Gateway".
Name your Internet Gateway and click "Create".
Attach the Internet Gateway to your VPC by selecting it and clicking "Actions" -> "Attach to VPC".
4. Create a Route Table
Navigate to the Route Tables section in the VPC dashboard.
Click on "Create Route Table".
Select your VPC and name your route table.
Add a route by selecting the route table, clicking "Routes" -> "Edit routes" -> "Add route".
Specify 0.0.0.0/0 as the destination and your Internet Gateway as the target.
Associate the route table with your subnet by selecting "Subnet Associations" -> "Edit subnet associations" and selecting your subnet.
5. Create a Security Group
Navigate to the Security Groups section in the EC2 dashboard.
Click on "Create Security Group".
Name your security group and select your VPC.
Add inbound rules for HTTP (port 80) and SSH (port 22) with the source type as "Anywhere".
Save the security group.
6. Create a Key Pair
Navigate to the Key Pairs section in the EC2 dashboard.
Click on "Create Key Pair".
Specify a name for your key pair.
Download the key pair file (.pem) and save it securely.
7. Launch an EC2 Instance
Navigate to the EC2 dashboard.
Click on "Launch Instance".
Select an Amazon Machine Image (AMI) (e.g., Ubuntu).
Choose an instance type (e.g., t2.micro).
Configure instance details:
Select your VPC.
Select your subnet.
Enable auto-assign public IP.
Add storage if needed.
Add tags to name your instance.
Configure the security group by selecting the security group you created earlier.
Select the key pair you created and launch the instance.
8. Connect to Your Instance
SSH into your instance using the key pair file:
ssh -i /path/to/your-key-pair.pem ubuntu@your-instance-public-ip
9. Install Docker and Run Nginx
Update the package list and install Docker:
sudo apt-get update
sudo apt-get install -y docker.io
Start Docker:
sudo systemctl enable docker
sudo systemctl start docker
Pull and run the Nginx container:
sudo docker pull nginx
sudo docker run -d -p 80:80 nginx
That's a lot of work. What if I asked you to create the same architecture 10 more times? It would be a long and frustrating task, and there's a big chance you'd miss some resources or misconfigure the connections between them.
The Problem
There are two major problems -
Creating resources manually is time-consuming.
There's a high chance of missing resources or incorrect configurations.
The Solution - Terraform (IaC) Way
Terraform provides a wide range of providers that can be used to provision and manage resources on different platforms such as AWS, GCP, Azure, and many others. Providers are plugins that enable Terraform to interact with APIs of cloud platforms, SaaS providers, and other services. Each provider offers a set of resource types and data sources that enable the creation, modification, and deletion of infrastructure components.
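To make that concrete, the snippet below is a minimal sketch (not part of this project) showing that targeting a different platform mostly means swapping the provider block - the hashicorp/google source is real, but the project ID and region are only placeholders:
terraform {
  required_providers {
    google = {
      source = "hashicorp/google"
    }
  }
}

provider "google" {
  project = "my-sample-project" # placeholder project ID
  region  = "us-central1"       # placeholder region
}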
Provisioning Resources Using Terraform
All the code for the steps below can be found here.
Download and install Terraform from the official website.
Create a directory on your system and, inside it, create the providers.tf and main.tf files.
mkdir my-terraform-project
cd my-terraform-project
touch providers.tf main.tf
- Define the AWS provider in providers.tf -
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.53.0"
    }
  }
}

provider "aws" {
  region     = var.aws_region
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
}
You have to create an IAM user on AWS to get the access and secret keys.
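Note that passing access_key and secret_key through variables is only one option - the AWS provider can also pick the credentials up from the standard environment variables, so a sketch like the following (with placeholder values) works just as well:
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
With the provider configured, we can define all the resources - starting with the VPC - in main.tf -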
resource "aws_vpc" "my-vpc" {
cidr_block = var.vpc_cidr_block
tags = {
Name = "${var.env_prefix}-${var.vpc_name}"
}
}
resource "aws_subnet" "my-subnet-1" {
vpc_id = aws_vpc.my-vpc.id
cidr_block = var.subnet_cidr_block
availability_zone = var.availability_zone
tags = {
Name = "${var.env_prefix}-${var.subnet_name}"
}
}
resource "aws_internet_gateway" "my-igw" {
vpc_id = aws_vpc.my-vpc.id
tags = {
Name = "${var.env_prefix}-my-igw"
}
}
resource "aws_route_table" "my-route-table" {
vpc_id = aws_vpc.my-vpc.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.my-igw.id
}
tags = {
Name = "${var.env_prefix}-my-route-table"
}
}
resource "aws_route_table_association" "my-route-table-association" {
subnet_id = aws_subnet.my-subnet-1.id
route_table_id = aws_route_table.my-route-table.id
}
resource "aws_security_group" "my-sg" {
name = "${var.env_prefix}-my-sg"
vpc_id = aws_vpc.my-vpc.id
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = [var.my_ip]
}
ingress {
from_port = 8080
to_port = 8080
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "${var.env_prefix}-my-sg"
}
}
resource "aws_key_pair" "my-key" {
key_name = var.key_name
public_key = file("${var.key_path}")
}
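The aws_key_pair resource above expects an existing public key on disk at var.key_path. If you don't have one yet, you can generate a key pair locally first - the file name below is just an example:
ssh-keygen -t rsa -b 4096 -f ~/.ssh/my-terraform-key
This creates my-terraform-key (the private key you'll later SSH with) and my-terraform-key.pub (the public key that var.key_path should point to).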
resource "aws_instance" "my-ec2" {
ami = data.aws_ami.latest-ubuntu-image.id
instance_type = var.instance_type
subnet_id = aws_subnet.my-subnet-1.id
key_name = var.key_name
vpc_security_group_ids = [aws_security_group.my-sg.id]
associate_public_ip_address = true
user_data = file("setup.sh")
tags = {
Name = "${var.env_prefix}-my-ec2"
}
}
Here, we have defined all the resources we need and set up the connections between them by referencing each other's IDs (for example, aws_vpc.my-vpc.id), which also lets Terraform work out the creation order.
- Lastly, we have to create a few more files -
variables.tf - Here we define all the variables used in the main.tf and providers.tf files.
variable "aws_region" { type = string description = "AWS Default Region" } variable "aws_access_key" { type = string description = "AWS Admin Access Key" } variable "aws_secret_key" { type = string description = "AWS Admin Secret Key" } variable "vpc_cidr_block" { type = string description = "CIDR block for the VPC" } variable "vpc_name" { type = string description = "Name of the VPC" } variable "subnet_cidr_block" { type = string description = "CIDR block for the subnet" } variable "subnet_name" { type = string description = "Name of the subnet" } variable "availability_zone" { type = string description = "Availability Zone" } variable "env_prefix" { type = string description = "Environment Prefix" } variable "my_ip" { type = string description = "My IP" } variable "instance_type" { type = string description = "Instance Type" } variable "key_name" { type = string description = "Key Pair Name" } variable "key_path" { type = string description = "Key Pair Path" }
data.tf - Declares data sources whose values are fetched from AWS. For example - the latest-ubuntu-image AMI.
data "aws_ami" "latest-ubuntu-image" { most_recent = true owners = ["099720109477"] filter { name = "name" values = ["ubuntu/images/hvm-ssd/ubuntu-*"] } }
outputs.tf - Stores all the output values which we can access after the apply.
output "aws_ami_id" { value = data.aws_ami.latest-ubuntu-image.id } output "public_ip" { value = aws_instance.my-ec2.public_ip }
varvalues.tfvars - Here, we store/pass the values of the variables defined in the variables.tf file. It is recommended to keep this file out of your Git history so that credentials are not exposed; hence, you won't find it in my repository either. Define the values for all the variables in the file as follows -
variable1-name = "variable1-value"
variable2-name = "variable2-value"
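For reference, a varvalues.tfvars for this setup could look roughly like the sketch below - every value is illustrative, so substitute your own region, keys, IP, and paths:
aws_region        = "us-east-1"
aws_access_key    = "<your-access-key>"
aws_secret_key    = "<your-secret-key>"
vpc_cidr_block    = "10.0.0.0/16"
vpc_name          = "my-vpc"
subnet_cidr_block = "10.0.1.0/24"
subnet_name       = "my-subnet-1"
availability_zone = "us-east-1a"
env_prefix        = "dev"
my_ip             = "<your-ip>/32"
instance_type     = "t2.micro"
key_name          = "my-key"
key_path          = "/home/youruser/.ssh/my-terraform-key.pub"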
- setup.sh - This is the script referenced in main.tf via user_data. It contains a basic Bash script that installs Docker and runs Nginx in a container on the provisioned EC2 instance.
#!/bin/bash
apt-get update
apt-get install -y docker.io
systemctl enable docker
systemctl start docker
docker pull nginx
docker run -d -p 8080:80 --name nginx-container nginx
That's all the files we need. Still quite lengthy, right? But you won't have to redefine anything again - the same configuration can be tweaked or reused to create similar resources.
- Initialize Terraform to download the provider plugins and related data.
terraform init
- Run the below command to create an execution plan, allowing you to see what Terraform will do when you apply the configuration.
terraform plan -var-file="varvalues.tfvars"
- Run the below command to apply the configuration.
terraform apply -var-file="varvalues.tfvars"
- Finally, you can verify the changes in your AWS account to ensure all the resources were created successfully.
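As a quick sanity check, you can also hit the Nginx container directly - the security group opens port 8080 and setup.sh maps it to the container's port 80 - though you may need to give the instance a minute or two to finish running the startup script:
curl http://$(terraform output -raw public_ip):8080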
That's how you create resources using Terraform. To update or delete resources, all you have to do is change the configuration files and re-run the plan and apply commands above.
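And when you're done experimenting, the whole stack can be torn down in one step:
terraform destroy -var-file="varvalues.tfvars"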
Best Practices
Explore the code available in this repository to understand what Terraform modules are and how they can be used to make your code more modular. You can compare Terraform modules to functions in traditional programming, which can be reused with different inputs to produce different outputs.
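As an illustration of the idea (the module path and inputs below are hypothetical, not taken from the repository), a networking module could be called like this:
module "networking" {
  source = "./modules/networking"

  vpc_cidr_block    = var.vpc_cidr_block
  subnet_cidr_block = var.subnet_cidr_block
  env_prefix        = var.env_prefix
}
Inside ./modules/networking you would keep the VPC, subnet, internet gateway, and route table definitions, and expose whatever callers need (such as the subnet ID) through outputs.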
Thank you for reading! I hope you learned something from this blog. Do share your thoughts using comments.
Happy Learning!