Hybrid Cloud Application Deployment using Terraform

Shaileshchoudhary
5 min read · Jun 16, 2020

Create multiple resources on the AWS cloud and integrate them with GitHub for static web applications using Terraform.

High-level flow

Terraform on AWS

Terraform, an intelligent tool from HashiCorp, lets you control all your resources on the AWS cloud: creating them, destroying them, and displaying what has been created, all through a few commands and declarative scripts. The tool goes far beyond AWS; it can perform actions on any remote system (such as a cloud) or local system (your personal OS). Working with Terraform also produces code that documents your environment, so the same setup deployed from your working machine can be reproduced anywhere in the world.

Terraform Basic commands

  • terraform init: Initializes a Terraform working directory
    – It must be run in the same directory as the .tf files, or nothing will happen.
  • terraform validate: Confirms that the syntax of the Terraform files is correct
    – Always run this to confirm the code is built correctly and will not raise errors.
  • terraform apply: Builds or changes the infrastructure
    – It shows the execution plan and requires a yes or no before executing, unless you pass the --auto-approve flag, which makes it execute automatically.
  • terraform destroy: Deletes and removes Terraform-managed infrastructure
    – This permanently removes everything that was created and recorded in the state file.

Task: Create/launch Application using Terraform

1. Create a key pair and a security group that allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance, use the key and security group created in step 1.

4. Launch one EBS volume and mount it at /var/www/html.

5. The developer has uploaded the code to a GitHub repo, which also contains some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permission to public-readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Solution:

Before moving on to the task, we need to download Terraform, extract it, and add its path to the system variables so that it is accessible from any location on the machine.

set environment variable

Extract the Terraform zip into a folder, copy its path, and follow:

System Properties > Environment Variables > System variables > Path (double-click)

Click New > paste the copied path > press OK

$ aws configure --profile terrauser
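The profile configured above can then be referenced from the provider block of a .tf file. A minimal sketch; the region shown here is an assumption, use whichever region you configured:

```hcl
# Configure the AWS provider to use the named CLI profile.
# Region "ap-south-1" is an assumption; substitute your own.
provider "aws" {
  region  = "ap-south-1"
  profile = "terrauser"
}
```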

Now create a folder and initialize Terraform in it:

notepad ec2.tf

The above file contains the code to launch an instance, create a key pair, configure a security group, create a 1 GiB EBS volume, and mount it on the web-hosting folder, i.e. /var/www/html (for Amazon and Red Hat AMIs).
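The core of such an ec2.tf can be sketched as below. This is only an outline under stated assumptions: the AMI ID, key name, and security group name are hypothetical, and the key-pair generation relies on the tls provider.

```hcl
# Generate a key pair locally and register the public key with AWS.
resource "tls_private_key" "webkey" {
  algorithm = "RSA"
}

resource "aws_key_pair" "deploy_key" {
  key_name   = "web-deploy-key"                # hypothetical name
  public_key = tls_private_key.webkey.public_key_openssh
}

# Launch the instance with the key and security group from step 1.
resource "aws_instance" "web" {
  ami             = "ami-0447a12f28fddb066"    # hypothetical Amazon Linux 2 AMI
  instance_type   = "t2.micro"
  key_name        = aws_key_pair.deploy_key.key_name
  security_groups = ["allow-http-ssh"]         # hypothetical SG defined elsewhere in ec2.tf
}

# 1 GiB EBS volume in the same availability zone as the instance.
resource "aws_ebs_volume" "webvol" {
  availability_zone = aws_instance.web.availability_zone
  size              = 1
}

resource "aws_volume_attachment" "attach" {
  device_name  = "/dev/sdh"
  volume_id    = aws_ebs_volume.webvol.id
  instance_id  = aws_instance.web.id
  force_detach = true
}
```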

notepad s3.tf

The s3.tf file will create an S3 bucket and a CloudFront distribution, download the GitHub image folder, and upload the images to the S3 bucket; CloudFront then delivers the bucket's content and provides a CloudFront URL for the image in the S3 bucket.

Files should be saved with the .tf extension.

Create a Key Pair

creates a key pair

Configure Security Group

profile connect & security group

The security group has SSH enabled so you can get into the instance using PuTTY or ssh, and inbound port 80 enabled so a web server can be hosted using this security group.
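Such a security group can be written as follows. A sketch only; the group name is an assumption, and the default VPC is assumed:

```hcl
# Allow inbound SSH (22) and HTTP (80) from anywhere; allow all outbound traffic.
resource "aws_security_group" "allow_http_ssh" {
  name = "allow-http-ssh"          # hypothetical name

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```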

Launch Instance

ec2 instance with above-configured security group

Create an EBS volume in the same availability zone as the launched instance

ebs created in same zone

The developer uploaded some code with the CloudFront URL

simple HTML code

Mount the EBS volume at the web-hosting location on the instance

Establish connection with instance and mount
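The connect-and-mount step can be sketched with a null_resource and a remote-exec provisioner. The resource names referenced below (key, instance, attachment) and the device path are assumptions for illustration:

```hcl
# SSH into the instance, install the web stack, format and mount the volume,
# then clone the GitHub code into the web-hosting folder.
resource "null_resource" "mount_and_deploy" {
  depends_on = [aws_volume_attachment.attach]   # hypothetical attachment resource

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.webkey.private_key_pem   # hypothetical key resource
    host        = aws_instance.web.public_ip               # hypothetical instance resource
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install -y httpd git",
      "sudo systemctl start httpd",
      "sudo mkfs.ext4 /dev/xvdh",                          # device name is an assumption
      "sudo mount /dev/xvdh /var/www/html",
      "sudo git clone https://github.com/shaileshchoudhary/multihybridtask1images.git /var/www/html/repo",
    ]
  }
}
```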

GitHub images source file

Create S3 Bucket And CloudFront

//Creating S3 bucket
resource "aws_s3_bucket" "s3bucket" {
  bucket        = "shailesh12341234"
  acl           = "private"
  force_destroy = true
  versioning {
    enabled = true
  }
}

//Downloading content from GitHub
resource "null_resource" "download" {
  depends_on = [aws_s3_bucket.s3bucket]
  provisioner "local-exec" {
    command = "git clone https://github.com/shaileshchoudhary/multihybridtask1images.git"
  }
}

// Uploading file to bucket
resource "aws_s3_bucket_object" "upload_image1" {
  depends_on = [aws_s3_bucket.s3bucket, null_resource.download]
  bucket     = aws_s3_bucket.s3bucket.id
  key        = "mainpage.png"
  source     = "multihybridtask1images/mainpage.png"
  acl        = "public-read"
}

// Creating CloudFront distribution
resource "aws_cloudfront_distribution" "cdndistribution" {
  depends_on = [aws_s3_bucket.s3bucket, null_resource.download]

  origin {
    domain_name = aws_s3_bucket.s3bucket.bucket_regional_domain_name
    origin_id   = "S3-shailesh12341234-id"
    custom_origin_config {
      http_port              = 80
      https_port             = 443   // HTTPS uses port 443
      origin_protocol_policy = "match-viewer"
      origin_ssl_protocols   = ["TLSv1", "TLSv1.1", "TLSv1.2"]
    }
  }

  enabled = true

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "S3-shailesh12341234-id"

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

output "domain-name" {
  value = aws_cloudfront_distribution.cdndistribution.domain_name
}
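Step 8, updating the code in /var/www/html with the CloudFront URL, can be sketched as one more remote-exec that runs after the distribution exists. The HTML file name, image key, and the connection details (key and instance resources) are assumptions:

```hcl
# Append an <img> tag pointing at the CloudFront domain to the deployed page.
resource "null_resource" "update_page" {
  depends_on = [aws_cloudfront_distribution.cdndistribution]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.webkey.private_key_pem   # hypothetical key resource
    host        = aws_instance.web.public_ip               # hypothetical instance resource
  }

  provisioner "remote-exec" {
    inline = [
      "echo '<img src=\"https://${aws_cloudfront_distribution.cdndistribution.domain_name}/mainpage.png\">' | sudo tee -a /var/www/html/index.html",
    ]
  }
}
```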

Terraform output on AWS

s3 and CloudFront deployment
deployment successfully completed
Allow port 80 and 22
1 GiB volume in same region
ec2-instance launched
bucket created and image uploaded
1GiB ebs mount on web host folder

Final Static webpage launched on ec2 instance

Thanks for reading….
