Terraform: Infrastructure as code.

Originally developed by HashiCorp, Terraform allows you to define your infrastructure, platform and services using declarative code, i.e. you tell it what end state you want, and it takes care of the in-between steps in getting you there.

CODE PHASE

E.g., if you wanted to get from a current state of “nothing” to a state where a virtual machine (VM) and a Kubernetes (K8s) cluster are networked together by a virtual private cloud (VPC), you’d start with an empty Terraform file (an ASCII text file with a .tf extension) and declare the three main elements inside it: the VM, the K8s cluster and the VPC. This is called the “Code” phase.
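As a sketch (the resource types below are illustrative placeholders, not real provider syntax), the starting .tf file would declare those three elements:

```hcl
# Illustrative only -- real resource types vary by provider.
resource "example_vpc" "network" {
  cidr_block = "10.0.0.0/16"
}

resource "example_vm" "server" {
  network_id = "${example_vpc.network.id}"
}

resource "example_k8s_cluster" "cluster" {
  network_id = "${example_vpc.network.id}"
}
```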

PLAN PHASE

The plan phase compares the current state with the desired state, and forms a plan based on the differences, i.e. we need a VM, we need a K8s cluster and we need a VPC network.

APPLY PHASE

Next is the Apply phase. Terraform calls the cloud provider’s real APIs, authenticated with your API token, to spin up these resources, and emits some autogenerated output values along the way, such as the Kubernetes dashboard URL.

PLUGGABLE BY DESIGN

Terraform has a strong open source community and is pluggable by design. There are providers and modules available to connect to any IaaS cloud provider and automate the deployment of infrastructure there.

Although Terraform is best known for provisioning Infrastructure as Code against various Infrastructure as a Service (IaaS) cloud providers, its use cases have expanded into Platform as a Service (PaaS) and Software as a Service (SaaS) areas as well.

DEVOPS FIRST

Terraform is a DevOps tool, designed to work with the DevOps feedback loop in mind. If we take our desired scenario above of a VPC, VM and Kubernetes cluster and decide that we want to add a load balancer, then we add the load balancer requirement to the .tf file in the Code phase, and the Plan phase compares against the current state and sees that the load balancer is the only change. By controlling the infrastructure as code in the Terraform pipeline, instead of configuring and changing the infrastructure directly away from its original Terraform “recipe”, you avoid “configuration drift”. This is called a “DevOps first” approach, and it is what gives us the consistency and scalability we want from cloud-based infrastructure managed using DevOps practices.
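In .tf terms, that new requirement is just one more resource block appended to the file (the arguments below are illustrative placeholders, not a working load balancer definition):

```hcl
# Hypothetical addition -- name and subnet IDs are placeholders.
resource "aws_lb" "front_end" {
  name               = "openshot-lb"
  load_balancer_type = "application"
  subnets            = ["subnet-aaaa1111", "subnet-bbbb2222"]
}
```

The next plan would then report just 1 to add, 0 to change, 0 to destroy.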

INFRASTRUCTURE AS CODE

These days it’s increasingly important to automate your infrastructure, as applications can be deployed into production hundreds of times a day. In addition, infrastructure is increasingly ephemeral, and can be provisioned or de-provisioned according to demand, meeting customer requirements while keeping cloud provider costs under control.

IMPERATIVE vs DECLARATIVE APPROACH

An imperative approach allows you to storyboard and define how your infrastructure is provisioned from nothing through to the final state, using a CLI in, say, a bash script. This is great for recording how the infrastructure was initially provisioned, and can also make creating similar environments for testing easier, but it doesn’t scale well, and you are still at risk of others making undocumented changes that send your Dev, Test and other environments out of sync (they should always match), risking the aforementioned configuration drift.

So, we use a declarative approach instead, defining the FINAL STATE of the infrastructure using a tool like Terraform, and letting it handle the in-between steps against the public cloud provider. Instead of defining every step, you just define the final state.

IMMUTABLE vs MUTABLE INFRASTRUCTURE

Imagine a scenario where you have your scripts and you run them to get to v1.0 of your infrastructure. You then have a new requirement for a database to be added to the current infrastructure mix of VPC, VM and K8s elements. So, you modify your declarative code, execute it against your existing Dev environment, then, assuming you’re happy, make the same change to 100 or 1,000 other similar environments, only for it not to work properly, leaving you in a configuration drift state.

To eliminate the risk of this occurring, we can copy and modify the original code that got us to v1.0, then execute it to create an entirely new and separate v2.0 environment. This also helps your infrastructure scale. It is more expensive while the v1.0 and v2.0 infrastructures are running simultaneously, but it is considered best practice, and you can always revert to v1.0, which remains running while v2.0 is deployed.

So, this immutable infrastructure approach (i.e. the infrastructure cannot be changed or mutated once deployed) is preferable, as it reduces or eliminates the risks that come with changing mutable infrastructure.

INSTALLING TERRAFORM

I found that Terraform was not available via my package repositories, nor from the Software Manager on Linux Mint, so I downloaded the latest 64-bit Linux package from Terraform’s website.

Unzipping the download reveals a single executable file, terraform.
After unzipping, move the executable to /usr/local/bin so that it’s on your PATH.

I’m using an AWS account, so I’ll need to use the AWS Provider in Terraform.

Next, create a .tf file for our project that initializes the Terraform AWS provider in the AWS region we want, and run the terraform init command.

#OpenShot Terraform Project
provider "aws" {
  region  = "eu-west-2"
}
Our openshot.tf file simply declares the provider (aws) and the region, and is read by the terraform init command executed in the same directory.
Initialization takes a few seconds.

If you haven’t already got one then you’ll need to set up an account on aws.amazon.com. It uses card payment data but is free for personal use for one year. The setup process is slick as you’d expect from amazon, and you’ll be up and running in just a few minutes.

Log into your AWS Management Console.

In your AWS Cloud Management Console, Click Services, EC2.
Click Instances, Launch Instance.
Click AWS Marketplace and search for openshot

We don’t need to select it, since we’re using Terraform to build our infrastructure as code, so we can have Terraform perform this search for us. The OpenShot AMI is a Terraform data source AMI, since it’s an image that already exists, not a resource AMI: we’re using an existing AMI, not creating a new one. Terraform can perform this search for us in our .tf file.

Note that the documentation on terraform’s website uses /d/ or /r/ in the URL for datasources and resources of similarly named elements.

The Google search aws ami datasource terraform will take you here

We can add a few more lines to our openshot.tf file to have terraform search for the AMI in the AWS Marketplace as follows.

#OpenShot Terraform Project
provider "aws" {
  region  = "eu-west-2"
}

data "aws_ami" "openshot_ami" {
  most_recent = true
  owners = ["aws-marketplace"]

  filter {
    name = "name"
    values = ["*OpenShot*"]
  }
}
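To sanity-check which image the filter actually resolves to before building anything, a hypothetical output block can surface the AMI ID (this output name is my own, not from the provider docs):

```hcl
# Hypothetical: shows the resolved AMI ID once the plan is applied.
output "openshot_ami_id" {
  value = "${data.aws_ami.openshot_ami.id}"
}
```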

The next thing we need to do is create a security group, so that we can access our EC2 instance.

Google search for terraform aws security groups will take us here

Copy and paste the example code into our openshot.tf file and make a few adjustments to allow access via HTTP and SSH. Note that you should restrict SSH access to your own IP address to avoid exposing your SSH server to the world.

#OpenShot Terraform Project
provider "aws" {
  region  = "eu-west-2"
}

data "aws_ami" "openshot_ami" {
  most_recent = true
  owners = ["aws-marketplace"]

  filter {
    name = "name"
    values = ["*OpenShot*"]
  }
}
resource "aws_security_group" "openshot_allow_http_ssh" {
  name        = "openshot_allow_http_ssh"
  description = "Allow HTTP and SSH inbound traffic"

  ingress {
    description = "Inbound HTTP Rule"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "Inbound SSH Rule"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    #NOTE: YOU SHOULD RESTRICT THIS TO YOUR IP ADDRESS!
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "openshot_allow_http_ssh"
  }
}

We need to configure a key pair for our Amazon EC2 instance for SSH to ultimately work. We’ll return to this later when we configure the EC2 instance details in the openshot.tf file we’re constructing.

Configure an SSH key pair here.
Creating an SSH keypair for our openshot project

Save the openshot-ssh.pem file to your project folder when prompted.
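Once the .pem file is saved, its permissions need tightening before SSH will accept it. A quick sketch (the touch line merely stands in for the file you downloaded, so the snippet is self-contained, and the user name and IP in the commented ssh line are placeholders):

```shell
# Stand-in for the key pair file downloaded from the AWS console.
touch openshot-ssh.pem

# SSH refuses private keys that are group- or world-readable.
chmod 400 openshot-ssh.pem

# Once the instance is running, you'd connect along these lines
# (the public IP comes from the terraform output added later):
# ssh -i openshot-ssh.pem ubuntu@<public-ip>
```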

TERRAFORM PLAN

Next, we can test this with the terraform plan command.

terraform plan fails at this stage as we’ve not specified the credentials for our AWS account.

AWS CREDENTIALS

The AWS credentials can be specified as environment variables, or in a shared credentials file in your home directory at ~/.aws/credentials; more on the options here.

The ~/.aws/credentials file can be created by installing the awscli package from your repositories and running aws configure.
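If you’d rather create the file by hand than run aws configure, it’s a simple INI-style file. The key values below are AWS’s documented example placeholders, not real credentials:

```shell
mkdir -p "$HOME/.aws"

# Placeholder values from the AWS documentation -- substitute your own.
cat > "$HOME/.aws/credentials" <<'EOF'
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
EOF
```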

Note that you should create an IAM user in your AWS Management Console, and not create credentials for your AWS root user account. I have created an account called matt, for example. You’ll receive an email with a 12-digit account ID, which each IAM user needs to log in, along with that user’s username and password. The root user just uses an email address and password to log in.

Once you’ve logged in to your AWS Management Console, the credentials are obtained here…

Click on your account user@<12-digit-number>, My Security Credentials

After pasting the Access Key ID in, you’ll be asked for the Secret Access Key next. In the event you don’t have it, you can simply Ctrl-C out of aws configure, go back to the AWS Management Console and generate a new one. You’ll only be shown the Secret Access Key one time, so be sure to copy it, then re-run aws configure and enter the new Access Key ID and Secret Access Key. I specified eu-west-2 (London) as my default region and json as my output format.

Note that the credentials are stored in plain text in ~/.aws/credentials

Depending on your situation, you may want to deactivate the previous Access Key ID and delete it from the AWS Management Console if it’s never going to be used.

TERRAFORM PLAN USING CREDENTIALS FILE

Now if we re-run our terraform plan we see it succeeds.

re-run terraform plan and this time it used the creds found in ~/.aws/credentials

You can see that the Plan outcome at the end is 1 to add, 0 to change, 0 to destroy.

EC2 INSTANCE

Now we’re ready to specify our EC2 instance. We just need to add the final section, edited as shown below based on the data source and resource names specified elsewhere in the Terraform file, adding key_name = "openshot-ssh" to refer to the .pem file we created earlier when we generated an SSH key pair for the EC2 instance on the AWS Management Console.

#OpenShot Terraform Project
provider "aws" {
  region  = "eu-west-2"
}

data "aws_ami" "openshot_ami" {
  most_recent = true
  owners = ["aws-marketplace"]

  filter {
    name = "name"
    values = ["*OpenShot*"]
  }
}
resource "aws_security_group" "openshot_allow_http_ssh" {
  name        = "openshot_allow_http_ssh"
  description = "Allow HTTP and SSH inbound traffic"

  ingress {
    description = "Inbound HTTP Rule"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "Inbound SSH Rule"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    #NOTE: YOU SHOULD RESTRICT THIS TO YOUR IP ADDRESS!
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "openshot_allow_http_ssh"
  }
}

resource "aws_instance" "web" {
  ami           = "${data.aws_ami.openshot_ami.id}"
  instance_type = "c5.xlarge"
  security_groups = ["${aws_security_group.openshot_allow_http_ssh.name}"]
  key_name = "openshot-ssh"

  tags = {
    Name = "OpenShot"
  }
}

Re-running terraform plan now shows our OpenShot AMI resource and security group for our AWS EC2 instance, and ends with the message Plan: 2 to add, 0 to change, 0 to destroy.

OUTPUT

Lastly, we can add an output section that outputs the public IP address of our OpenShot instance:

output "IP" {
  value = "${aws_instance.web.public_ip}"
}

TERRAFORM APPLY

When our terraform plan is ready, we can run terraform apply; Terraform re-executes the plan and prompts for confirmation before executing it against the AWS cloud provider and building our infrastructure.

terraform apply
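Putting the whole loop together, the sequence from an empty project directory looks like this (a sketch, assuming terraform is on your PATH and AWS credentials are configured; these commands need a live AWS account to actually run):

```shell
terraform init     # downloads the aws provider plugin into .terraform/
terraform plan     # preview; should end "Plan: 2 to add, 0 to change, 0 to destroy."
terraform apply    # re-plans, prompts for "yes", then builds the infrastructure
# When you've finished experimenting, tear it all down to avoid AWS charges:
terraform destroy
```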
