To deploy a static website with AWS and Terraform, a robust and scalable solution involves using S3 for storage, CloudFront for content distribution, and Route 53 for DNS management. In this guide, we will walk through the process of setting up an S3-backed static website using Terraform.
Prerequisites
- Terraform Cloud account
  - Once the account is created, we create a Project and a Workspace to use as a remote store for the tfstate.
- AWS account
  - After creating the account, we need to set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, both locally and in the Terraform Workspace.
- Domain name
  - A domain name is required. We have to create a hosted zone for it before continuing with the article.
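The credential variables mentioned above can be exported in the local shell like this (the key values below are placeholders — substitute your own IAM access key pair; in Terraform Cloud, add them as sensitive workspace variables instead):

```shell
# Placeholder credentials — substitute your own IAM access key pair
export AWS_ACCESS_KEY_ID="AKIAXXXXXXXXXXXXXXXX"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

# Sanity check: both variables must be non-empty
[ -n "$AWS_ACCESS_KEY_ID" ] && [ -n "$AWS_SECRET_ACCESS_KEY" ] && echo "AWS credentials exported"
```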
Project Folder Structure
/project-root
├── backend/
├── frontend/
│ ├── provider.tf
│ ├── backend.tf
│ ├── variables.tf
│ ├── outputs.tf
│ ├── locals.tf
│ ├── aws_acm_certificate.tf
│ ├── aws_cloudfront_distribution.tf
│ ├── aws_route53_record.tf
│ ├── aws_s3_bucket_policy.tf
│ ├── aws_s3_bucket.tf
│ ├── aws_s3_object.tf
│ ├── cloudfront_function.js
│ ├── terraform.tfvars.example
│ ├── README.md
│ └── dist/
│ ├── index.html
│ └── error.html
├── bin/
│ └── create-s3-bucket.sh
├── .gitignore
└── README.md
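The backend.tf listed above wires Terraform to the Terraform Cloud workspace created in the prerequisites. A minimal sketch, assuming hypothetical organization and workspace names (replace them with your own):

```hcl
# backend.tf — hypothetical org/workspace names; replace with your own
terraform {
  cloud {
    organization = "my-org"
    workspaces {
      name = "static-website"
    }
  }
}
```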
Populating the files
Local Variables
First we create a locals.tf file containing local variables that can be used in every .tf file we will create:
locals {
dist_dir = "${path.module}/dist"
module_name = basename(abspath(path.module))
prefix = var.prefix
# Aligns file extensions with their appropriate MIME types, guaranteeing accurate content delivery for various assets.
content_types = {
".html" : "text/html",
".css" : "text/css",
".js" : "application/javascript",
".json" : "application/json",
".xml" : "application/xml",
".jpg" : "image/jpeg",
".jpeg" : "image/jpeg",
".png" : "image/png",
".gif" : "image/gif",
".svg" : "image/svg+xml",
".webp" : "image/webp",
".ico" : "image/x-icon",
".woff" : "font/woff",
".woff2" : "font/woff2",
".ttf" : "font/ttf",
".eot" : "application/vnd.ms-fontobject",
".otf" : "font/otf"
}
}
Input Variables
Next we create the variables.tf file, which holds the input variables passed to Terraform via a .tfvars file:
variable "aws_region" {
description = "AWS Region"
type = string
}
variable "prefix" {
type = string
description = "Prefix for resources"
}
variable "domain_name" {
type = string
description = "Domain name for the website"
}
variable "bucket_name" {
type = string
description = "Name of the S3 bucket"
}
variable "common_tags" {
type = map(string)
description = "Common tags for all resources"
}
.tfvars file
This is a variable definitions file. It contains the values of the variables passed to Terraform at execution time:
aws_region = "eu-west-1"
prefix = "static-website"
domain_name = "<your-domain>"
bucket_name = "<bucket-name-created-later-with-the-script-create-s3-bucket>"
common_tags = {
ManagedBy = "Terraform"
Project = "Static Website"
}
This file should not be committed to any VCS, so we add it to the .gitignore file. To give an example of its contents, we create a .tfvars.example file.
Amazon S3
Amazon S3 is used to store our static website files. Before diving into the resource definitions in Terraform, we need to create a bucket with a unique name in our account:
- Create an S3 bucket with a random name using the create-s3-bucket script: ./bin/create-s3-bucket.sh
- Substitute the generated bucket name into .tfvars.
- Use the bucket name in the Terraform configurations.
Content of ./bin/create-s3-bucket.sh
#!/bin/bash
# Generate a random bucket name using a timestamp and random string
BUCKET_NAME="my-bucket-$(date +%s)-$RANDOM"
# Define the AWS region (change if needed)
AWS_REGION="eu-west-2"
# Create the S3 bucket
aws s3api create-bucket --bucket "$BUCKET_NAME" --region "$AWS_REGION" --create-bucket-configuration LocationConstraint="$AWS_REGION"
# Output the bucket name
echo "S3 bucket created: $BUCKET_NAME"
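The generated name can be sanity-checked offline against S3 naming rules (3–63 characters; lowercase letters, digits, and hyphens, starting and ending with a letter or digit). A sketch of such a check, using the portable `$$` in place of Bash's `$RANDOM`:

```shell
# Reproduce the naming scheme locally (no AWS call); $$ stands in for Bash's $RANDOM
BUCKET_NAME="my-bucket-$(date +%s)-$$"

# S3 bucket names: 3-63 chars of lowercase letters, digits, and hyphens,
# starting and ending with a letter or digit
if echo "$BUCKET_NAME" | grep -Eq '^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$'; then
  echo "valid bucket name: $BUCKET_NAME"
else
  echo "invalid bucket name: $BUCKET_NAME" >&2
  exit 1
fi
```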
S3 Bucket
We retrieve the bucket using a data source instead of a resource.
data "aws_s3_bucket" "website" {
bucket = var.bucket_name
}
Explanation:
- data “aws_s3_bucket” “website”
- This defines a data source, meaning Terraform will look up an existing S3 bucket instead of provisioning a new one.
- bucket = var.bucket_name
- This specifies the bucket name to fetch, using the value stored in the Terraform variable var.bucket_name.
S3 Website Configuration
We configure the bucket to act as a static website.
resource "aws_s3_bucket_website_configuration" "website" {
bucket = data.aws_s3_bucket.website.id
index_document {
suffix = "index.html"
}
error_document {
key = "error.html"
}
}
Breakdown
- resource “aws_s3_bucket_website_configuration” “website”
- This defines a resource to configure an S3 bucket for website hosting.
- bucket = data.aws_s3_bucket.website.id
- This links the configuration to an existing S3 bucket fetched via data.aws_s3_bucket.website.
- index_document
- Defines the default page users see when accessing the site.
- error_document
- Specifies the file to display when an error occurs (e.g., 404 Not Found).
Bucket Ownership and Access Control
Ensuring proper ownership and private access:
resource "aws_s3_bucket_ownership_controls" "website" {
bucket = data.aws_s3_bucket.website.id
rule {
object_ownership = "BucketOwnerPreferred"
}
}
resource "aws_s3_bucket_public_access_block" "website" {
bucket = data.aws_s3_bucket.website.id
block_public_acls = true
block_public_policy = true
ignore_public_acls = true
restrict_public_buckets = true
}
resource "aws_s3_bucket_acl" "website" {
depends_on = [
aws_s3_bucket_ownership_controls.website,
aws_s3_bucket_public_access_block.website
]
bucket = data.aws_s3_bucket.website.id
acl = "private"
}
Explanation:
- aws_s3_bucket_ownership_controls
- Defines who owns objects uploaded to the S3 bucket.
- object_ownership = “BucketOwnerPreferred” ensures that the bucket owner retains control over uploaded objects, even if uploaded by other AWS accounts or IAM users.
- aws_s3_bucket_public_access_block
- Blocks all forms of public access to the S3 bucket by enabling all four restrictions.
- aws_s3_bucket_acl
- Ensures the Access Control List (ACL) for the bucket is set to “private”, meaning:
- Only the bucket owner has access.
- No public or external access is granted.
Uploading Static Files
Deploy all static files from the dist_dir.
resource "aws_s3_object" "static_files" {
for_each = fileset(local.dist_dir, "**")
bucket = data.aws_s3_bucket.website.id
key = each.key
source = "${local.dist_dir}/${each.key}"
# try() guards files without an extension, where regex() would raise an error
content_type = lookup(local.content_types, try(regex("\\.[^.]+$", each.key), ""), null)
etag = filemd5("${local.dist_dir}/${each.key}")
}
Explanation:
- for_each = fileset(local.dist_dir, “**”)
- Uses Terraform’s fileset() function to list all files inside the directory defined in local.dist_dir.
- The ** pattern ensures that all files, including those in subdirectories, are included.
- Terraform will iterate over each file, treating each one as an individual aws_s3_object resource.
- bucket = data.aws_s3_bucket.website.id
- Specifies the existing S3 bucket where files will be uploaded.
- Uses data.aws_s3_bucket.website.id, which retrieves the bucket ID dynamically.
- key = each.key
- Defines the destination path for the file inside S3.
- Since each.key represents the file’s relative path within local.dist_dir, the file structure is preserved.
- source = “${local.dist_dir}/${each.key}”
- Specifies the absolute path to the local file being uploaded.
- Ensures Terraform knows where to find the file on disk.
- content_type
- Determines the correct MIME type for each file by matching its extension against the local.content_types map.
- etag = filemd5(“${local.dist_dir}/${each.value}”)
- Prevents unnecessary uploads by using a checksum (MD5 hash) of the file.
- If the file hasn’t changed, Terraform won’t re-upload it, optimizing deployments.
SSL with AWS ACM
To enable HTTPS, we provision an SSL certificate using AWS Certificate Manager.
resource "aws_acm_certificate" "ssl_cert" {
provider = aws.acm_provider
domain_name = "static-web.${var.domain_name}"
subject_alternative_names = ["*.static-web.${var.domain_name}"]
validation_method = "DNS"
tags = var.common_tags
lifecycle {
create_before_destroy = true
}
}
Explanation:
- provider = aws.acm_provider
- Specifies which AWS provider configuration to use.
- domain_name = “static-web.${var.domain_name}”
- Defines the primary domain the SSL certificate will secure.
- subject_alternative_names = [“*.static-web.${var.domain_name}”]
- Adds a wildcard domain (*) for subdomains.
- lifecycle { create_before_destroy = true }
- Ensures zero downtime when replacing the certificate.
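The aws.acm_provider alias referenced above is expected to be defined in provider.tf. CloudFront only accepts ACM certificates issued in us-east-1, so a sketch of that file might look like this (the alias name comes from the article; the default region is taken from the .tfvars example):

```hcl
# provider.tf — sketch; the default provider uses var.aws_region,
# while the aliased provider pins us-east-1, which is required for
# ACM certificates attached to CloudFront distributions.
provider "aws" {
  region = var.aws_region
}

provider "aws" {
  alias  = "acm_provider"
  region = "us-east-1"
}
```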
Certificate Validation
resource "aws_acm_certificate_validation" "ssl_cert" {
provider = aws.acm_provider
certificate_arn = aws_acm_certificate.ssl_cert.arn
validation_record_fqdns = [for record in aws_route53_record.ssl_cert_validation : record.fqdn]
timeouts {
create = "30m"
}
}
Explanation:
- validation_record_fqdns = [for record in aws_route53_record.ssl_cert_validation : record.fqdn]
- Extracts the fully qualified domain names (FQDNs) from Route 53 DNS records created for certificate validation.
- Uses Terraform’s for expression to gather all fqdn values from the aws_route53_record.ssl_cert_validation records.
- timeouts { create = “30m” }
- Extends Terraform’s timeout for certificate validation to 30 minutes.
Route53 Records Creation
resource "aws_route53_record" "ssl_cert_validation" {
for_each = {
for dvo in aws_acm_certificate.ssl_cert.domain_validation_options : dvo.domain_name => {
name = dvo.resource_record_name
record = dvo.resource_record_value
type = dvo.resource_record_type
}
}
allow_overwrite = true
name = each.value.name
records = [each.value.record]
ttl = 60
type = each.value.type
zone_id = data.aws_route53_zone.existing.zone_id
}
Explanation:
- for_each = {…} (Dynamic DNS Record Creation)
- Uses Terraform’s for loop to create multiple DNS records dynamically.
CloudFront (CDN Configuration)
CloudFront speeds up website delivery and ensures security.
Origin Access Control (OAC)
This resource creates an origin access control (OAC) for a CloudFront distribution that restricts access to the origin.
resource "aws_cloudfront_origin_access_control" "oac" {
name = "OAC ${data.aws_s3_bucket.website.bucket}"
description = "Origin Access Controls for Static Website Hosting on ${var.bucket_name}"
origin_access_control_origin_type = "s3"
signing_behavior = "always"
signing_protocol = "sigv4"
}
CloudFront Distribution
This resource defines the CloudFront distribution, which is responsible for delivering content from an S3 bucket to end users.
resource "aws_cloudfront_distribution" "s3_distribution" {
origin {
domain_name = data.aws_s3_bucket.website.bucket_regional_domain_name
origin_id = "static-web.${var.bucket_name}-origin"
origin_access_control_id = aws_cloudfront_origin_access_control.oac.id
}
comment = "static-web.${var.domain_name} distribution"
enabled = true
is_ipv6_enabled = true
http_version = "http2and3"
price_class = "PriceClass_100" // Use only North America and Europe
aliases = [
"static-web.${var.domain_name}",
"www.static-web.${var.domain_name}"
]
default_root_object = "index.html"
default_cache_behavior {
cache_policy_id = "4135ea2d-6df8-44a3-9df3-4b5a84be39ad" // Managed-CachingDisabled
viewer_protocol_policy = "redirect-to-https"
allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
cached_methods = ["GET", "HEAD"]
compress = true
target_origin_id = "static-web.${var.bucket_name}-origin"
function_association {
event_type = "viewer-request"
function_arn = aws_cloudfront_function.www_redirect.arn
}
}
restrictions {
geo_restriction {
restriction_type = "none"
locations = []
}
}
viewer_certificate {
acm_certificate_arn = aws_acm_certificate_validation.ssl_cert.certificate_arn
ssl_support_method = "sni-only"
minimum_protocol_version = "TLSv1.2_2021"
}
tags = var.common_tags
}
Explanation:
- origin
- Specifies the source of the content that CloudFront will distribute.
- aliases
- A list of domain names (CNAMEs) associated with the distribution. This allows CloudFront to respond to requests for these custom domain names.
- viewer_certificate
- Specifies the SSL/TLS certificate settings for the distribution
CloudFront Function for Redirection
A CloudFront function to redirect “www.” to the root domain.
resource "aws_cloudfront_function" "www_redirect" {
name = "${local.prefix}-www-redirect"
comment = "Redirects www to root domain"
runtime = "cloudfront-js-1.0"
code = file("${path.module}/cloudfront_function.js")
publish = true
}
cloudfront_function.js file
function handler(event) {
var request = event.request;
var hostHeader = request.headers.host.value;
// Requests that do not start with "www." pass through untouched
if (!hostHeader.startsWith('www.')) {
return request;
}
// Strip the leading "www." to obtain the canonical host
// (a last-two-labels regex would wrongly drop the "static-web" label here)
var rootDomain = hostHeader.slice(4);
// Construct and return the redirect response
return {
statusCode: 301,
statusDescription: 'Moved Permanently',
headers: {
"location": { "value": "https://" + rootDomain + request.uri },
"cache-control": { "value": "max-age=3600" }
}
};
}
We decided to go with a CloudFront Function because it provides some crucial benefits in high-traffic environments compared to Lambda@Edge:
- Simplicity
- Low Latency
- Cost-Effective
- Ease of Deployment
More on this in the official documentation: https://aws.amazon.com/it/blogs/aws/introducing-cloudfront-functions-run-your-code-at-the-edge-with-low-latency-at-any-scale/
S3 Bucket Policy for CloudFront
Ensure CloudFront has read-only access to the S3 bucket.
resource "aws_s3_bucket_policy" "allow_cloudfront" {
bucket = data.aws_s3_bucket.website.id
policy = data.aws_iam_policy_document.cloudfront.json
}
data "aws_iam_policy_document" "cloudfront" {
statement {
sid = "AllowCloudFrontServicePrincipalReadOnlyAccess"
effect = "Allow"
actions = ["s3:GetObject"]
resources = [
data.aws_s3_bucket.website.arn,
"${data.aws_s3_bucket.website.arn}/*",
]
principals {
type = "Service"
identifiers = ["cloudfront.amazonaws.com"]
}
condition {
test = "StringEquals"
variable = "AWS:SourceArn"
values = [
aws_cloudfront_distribution.s3_distribution.arn
]
}
}
}
Route 53 – Domain Configuration
We use Route 53 for domain management and DNS configuration.
Hosted Zone
This defines a data block that retrieves information about an existing Route 53 hosted zone.
data "aws_route53_zone" "existing" {
name = var.domain_name
}
DNS Records
We define A records to point to CloudFront.
resource "aws_route53_record" "root_a" {
zone_id = data.aws_route53_zone.existing.zone_id
name = "static-web.${var.domain_name}"
type = "A"
alias {
name = aws_cloudfront_distribution.s3_distribution.domain_name
zone_id = aws_cloudfront_distribution.s3_distribution.hosted_zone_id
evaluate_target_health = false
}
}
resource "aws_route53_record" "www_a" {
zone_id = data.aws_route53_zone.existing.zone_id
name = "www.static-web.${var.domain_name}"
type = "A"
alias {
name = aws_cloudfront_distribution.s3_distribution.domain_name
zone_id = aws_cloudfront_distribution.s3_distribution.hosted_zone_id
evaluate_target_health = false
}
}
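The folder structure also lists an outputs.tf, which the article does not show. A possible sketch exposing the distribution domain and the final site URL (the output names here are our own choice):

```hcl
# outputs.tf — sketch; output names are illustrative
output "cloudfront_domain_name" {
  description = "CloudFront distribution domain name"
  value       = aws_cloudfront_distribution.s3_distribution.domain_name
}

output "website_url" {
  description = "Public URL of the static website"
  value       = "https://static-web.${var.domain_name}"
}
```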
How to Set Up
- Run Terraform to deploy the infrastructure:
terraform init
terraform apply
This setup ensures a secure, scalable, and highly available static website deployment using AWS and Terraform. 🚀
Thank you for reading this article. For a deeper dive and a more hands-on experience, check out the full project in the GitHub repository: https://github.com/denisgulev/application-boilerplate
🔗 Read More
For a step-by-step guide to deploying a Flask Backend to EC2, check out the full article: