AWS Terraform.

Project Description:

  1. Create a key pair and a security group that allows port 80.
  2. Launch an EC2 instance using the key pair and security group created in Step 1.
  3. Create an EBS volume and mount it on /var/www/html.
  4. The developer has uploaded the code to a GitHub repo, which also contains some images.
  5. Copy the GitHub repo code into /var/www/html.
  6. Create an S3 bucket, copy/deploy the images from the GitHub repo into it, and change their permission to publicly readable.
  7. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.
  8. Create a snapshot of the EBS volume.

Pre-requisites:

  1. You should have an AWS account.
  2. Terraform should be installed and configured on the local system.
  3. The AWS CLI should be configured so Terraform can connect to your account (see the example below).
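
For reference, configuring the AWS CLI profile that the provider block below expects (the profile name "terraform1" is taken from that block) looks roughly like this:

aws configure --profile terraform1
AWS Access Key ID [None]: <your-access-key-id>
AWS Secret Access Key [None]: <your-secret-access-key>
Default region name [None]: ap-south-1
Default output format [None]: json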

Running the following commands will set up the whole environment:

terraform init
terraform apply -auto-approve

Create the key pair and security group which allows port 80.

Provider

The provider block tells Terraform which cloud the tasks need to be performed in, and it also lets Terraform download the corresponding plugins.

Creating Keypair

We are going to create a key pair that can be used to log in to our instance. To create it, we use the RSA algorithm, which gives us an RSA-encoded private key.

Creating a Security Group

A Security Group (SG) acts as a virtual firewall for our EC2 instances to control incoming and outgoing traffic. Inbound (ingress) rules control the traffic coming into the instance, and outbound (egress) rules control the traffic going out of it.


provider "aws" {
  region  = "ap-south-1"
  profile = "terraform1"
}

resource "tls_private_key" "task_key1" {
  algorithm = "RSA"
}

resource "aws_key_pair" "generated_key" {
  key_name   = "task_key1"
  public_key = tls_private_key.task_key1.public_key_openssh

  depends_on = [
    tls_private_key.task_key1
  ]
}

resource "local_file" "taskkey1-file" {
  content  = tls_private_key.task_key1.private_key_pem
  filename = "task_key1.pem"

  depends_on = [
    tls_private_key.task_key1
  ]
}

resource "aws_security_group" "my_security2" {
  name        = "my_security2"
  description = "Allow inbound traffic"

  ingress {
    description = "allow_my_clients of HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "allow_my_clients of SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "allow_my_client"
  }
}


Launch an EC2 instance using the key pair and security group created in Step 1.

We are launching an EC2 instance with a Linux OS. After it launches, we connect to the instance, install packages such as Git, httpd, and PHP, and start the web service.

resource "aws_instance" "task_instance" {

  depends_on = [
    aws_security_group.my_security2
  ]

  ami             = "ami-005956c5f0f757d37"
  instance_type   = "t2.micro"
  key_name        = aws_key_pair.generated_key.key_name
  security_groups = ["my_security2"]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.task_key1.private_key_pem
    host        = self.public_ip # "self" avoids a self-reference error inside the resource's own connection block
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install git httpd php -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "task_instance"
  }
}

Launch an EBS volume and mount it on /var/www/html

resource "aws_ebs_volume" "my_volume" {
  availability_zone = aws_instance.task_instance.availability_zone
  size              = 1

  tags = {
    Name = "ebs_volume1"
  }
}

Volume attachment

resource "aws_volume_attachment" "volume_att" {
  device_name  = "/dev/sdh"
  volume_id    = aws_ebs_volume.my_volume.id
  instance_id  = aws_instance.task_instance.id
  force_detach = true
}

resource "null_resource" "nullremote1" {
  depends_on = [
    aws_volume_attachment.volume_att
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.task_key1.private_key_pem
    host        = aws_instance.task_instance.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdh",
      "sudo mount /dev/xvdh /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/srinivas-reddy4244/my_task.git /var/www/html"
    ]
  }
}

resource "null_resource" "nulllocal31" {
  depends_on = [
    null_resource.nullremote1,
  ]

  provisioner "local-exec" {
    when    = destroy
    command = "git clone https://github.com/srinivas-reddy4244/my_task.git \"C:/Users/sathish reddy/Desktop/task2/repo/\""
  }
}

The developer has uploaded the code to a GitHub repo, which also contains some images.

Copy the GitHub repo code into /var/www/html

Create an S3 bucket, copy/deploy the images from the GitHub repo into it, and change their permission to publicly readable.

resource "aws_s3_bucket" "task1bucket-buck" {
  depends_on = [
    null_resource.nulllocal31,
  ]

  bucket        = "task1bucket-buck"
  force_destroy = true
  acl           = "public-read"

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Id": "MYBUCKETPOLICY",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::task1bucket-buck/*"
    }
  ]
}
POLICY
}

S3 Bucket Object:

resource "aws_s3_bucket_object" "object" {
  depends_on = [
    aws_s3_bucket.task1bucket-buck,
    null_resource.nullremote1,
    null_resource.nulllocal31,
  ]

  bucket       = aws_s3_bucket.task1bucket-buck.id
  key          = "one"
  source       = "C:/Users/sathish reddy/Desktop/task2/img/srinivas.jpg"
  etag         = filemd5("C:/Users/sathish reddy/Desktop/task2/img/srinivas.jpg")
  acl          = "public-read"
  content_type = "image/jpeg"
}

Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html

locals {
  s3_origin_id = "myS3Origin"
}

resource "aws_cloudfront_origin_access_identity" "oai" {
  comment = "CloudFront S3 sync"
}

resource "aws_cloudfront_distribution" "s3_distribution" {
  depends_on = [
    aws_key_pair.generated_key,
    aws_instance.task_instance
  ]

  origin {
    domain_name = aws_s3_bucket.task1bucket-buck.bucket_regional_domain_name
    origin_id   = local.s3_origin_id

    s3_origin_config {
      origin_access_identity = aws_cloudfront_origin_access_identity.oai.cloudfront_access_identity_path
    }
  }

  enabled         = true
  is_ipv6_enabled = true
  comment         = "CloudFront S3 sync"

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  # Cache behavior with precedence 0
  ordered_cache_behavior {
    path_pattern     = "/content/immutable/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD", "OPTIONS"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false
      headers      = ["Origin"]

      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 86400
    max_ttl                = 31536000
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  # Cache behavior with precedence 1
  ordered_cache_behavior {
    path_pattern     = "/content/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
  }

  price_class = "PriceClass_200"

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  tags = {
    Environment = "production"
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

resource "null_resource" "nullremote2" {
  depends_on = [
    aws_cloudfront_distribution.s3_distribution,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.task_key1.private_key_pem
    host        = aws_instance.task_instance.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo su << EOF",
      "echo \"<img src='https://${aws_cloudfront_distribution.s3_distribution.domain_name}/${aws_s3_bucket_object.object.key}'>\" >> /var/www/html/index.html",
      "EOF"
    ]
  }
}
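
Create a snapshot of the EBS volume

Step 8 of the project description asks for a snapshot of the EBS volume, which the code above does not cover. A minimal sketch could look like this (the resource name "ebs_snapshot1" and the snapshot tag are assumptions; aws_ebs_snapshot only needs the volume ID):

resource "aws_ebs_snapshot" "ebs_snapshot1" {
  # Snapshot the volume only after it has been formatted, mounted, and populated
  depends_on = [
    null_resource.nullremote1
  ]

  volume_id = aws_ebs_volume.my_volume.id

  tags = {
    Name = "ebs_volume1_snapshot"
  }
}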

Opening in Chrome:

resource "null_resource" "nulllocal3" {
  depends_on = [
    null_resource.nullremote2,
  ]

  provisioner "local-exec" {
    command = "start chrome ${aws_instance.task_instance.public_ip}/index.html"
  }
}

output "myos_ip" {
  value = aws_instance.task_instance.public_ip
}

output "private_key" {
  value = tls_private_key.task_key1.private_key_pem
}
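
Note: on newer Terraform versions (0.14 and later), an output derived from a sensitive value such as the private key must be explicitly marked as sensitive, otherwise the plan fails. A hedged variant of the output above would be:

output "private_key" {
  value     = tls_private_key.task_key1.private_key_pem
  sensitive = true # required when the underlying value is flagged sensitive by the provider
}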

To tear down the whole environment, run the command:

terraform destroy -auto-approve

Conclusion:

We have successfully created AWS infrastructure using services like EC2, EBS, S3, and CloudFront, and configured a web server on it. The content of the web page served by that web server comes from the GitHub repo maintained by the developer, and the image is delivered through CloudFront from the S3 bucket.

Github Link:

https://github.com/srinivas-reddy4244/my_task
