Terraform for Azure — Infrastructure as Code (IaC) with multiple environments and deployment pipelines using GitHub Actions

Gibin Francis
7 min read · Nov 27, 2023

Hi all, today I am planning to share how we can automate our infrastructure using Infrastructure as Code (IaC). For that we will be using Terraform, with Azure as our cloud platform. I am not going deep into Terraform or Azure here; we will go straight into the code.

We will be using the folder structure below throughout this session:

Infra
|- backend.tf
|- variables.tf
|- main.tf
|- envs
   |- dev
      |- dev.tfvars
   |- test
      |- test.tfvars
   |- prod
      |- prod.tfvars

Let's take the first file.

backend.tf

This file holds the backend configuration and other supporting information Terraform needs to run.


terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">=3.53.0"
    }
  }

  backend "azurerm" {
    resource_group_name  = "your_resource_group"
    storage_account_name = "your_storage_account_name"
    container_name       = "tfstate"           // you can change the name if needed
    key                  = "terraform.tfstate" // you can change the name if needed
    subscription_id      = "your_subscription_id"
    tenant_id            = "your_tenant_id"
  }
}

provider "azurerm" {
  features {}

  client_id                  = var.AZURE_AD_CLIENT_ID
  client_secret              = var.AZURE_AD_CLIENT_SECRET
  tenant_id                  = var.AZURE_AD_TENANT_ID
  subscription_id            = var.AZURE_SUBSCRIPTION_ID
  skip_provider_registration = true
}

Don’t worry; let's walk through how it works.

  • First, we declare the terraform block.
  • Inside it we list the required providers; in our case the provider is “azurerm”.
  • Next comes the backend section. Why do we need a backend? Whenever we run Terraform, the respective cloud resources get created, but when we make a change we want to avoid recreation and apply only the changes. For that, Terraform needs to store its “state”, and the backend is where that state lives.
  • In this case we are using an Azure storage account as our state store, so we provide the necessary information such as the storage account name, resource group, etc.
  • Wait a minute: how do we have storage in the cloud even before our infra script runs?
  • There are two ways to achieve this.
  • First, run Terraform with “local” as your backend to create the storage account, then run the remaining Terraform with the storage account as your state store; during that run you will be offered the option to copy the local state file to the cloud, and you can proceed with that option to sync.
  • The second option is a manually created common storage account used for all executions. You have to specify the subscription id and tenant id if you are creating resources in a different subscription.
  • Along with these settings we also provide a “key” for the state; this becomes the file name of the state. Let's keep the default name for now; we will show later how to change it per environment.
  • Finally, we configure the provider. Here we use “azurerm” and supply the necessary information through variables.
  • We will cover the remaining settings later.

Let's get into the variables file.

variables.tf

The variables file is used to declare the variables that the rest of the script can access.

// Infrastructure environment name
variable "INFRA_ENVIRONMENT" {}

// Resource group name
variable "RESOURCE_GROUP_NAME" {}

// Azure AD client id
variable "AZURE_AD_CLIENT_ID" {}

// Azure subscription id
variable "AZURE_SUBSCRIPTION_ID" {}

// Azure AD tenant id
variable "AZURE_AD_TENANT_ID" {}

// Resource prefix to be used
variable "RESOURCE_PREFIX" {}

// Resource location
variable "RESOURCE_LOCATION" {}


//------------------resource specific ------------------//

// App service plan OS type
variable "appserviceplan_os_type" {}

// App service plan SKU name
variable "appserviceplan_sku_name" {}


//------------------SECRETS------------------//
// Azure AD client secret
variable "AZURE_AD_CLIENT_SECRET" {
sensitive = true
}

It's a very straightforward file: we define all our variables here.

  • We define variables using the “variable” keyword.
  • Add more information like description, type, and default based on need.
  • We can mark a variable as “sensitive” to avoid printing sensitive values in tool output or pipeline logs.
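As a sketch of those optional fields, a fully annotated variable could look like this (the variable name and default below are hypothetical, not taken from the files above):

```hcl
// Hypothetical example of a fully annotated variable
variable "appserviceplan_worker_count" {
  type        = number
  description = "Number of workers in the app service plan"
  default     = 1
}
```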

Now let's go straight into main.tf, where we write the main code for our infrastructure.

main.tf

This is the file where we write all our Terraform infra creation code. In this example I am simply creating an app service plan in Azure.

# Create an app service plan
resource "azurerm_service_plan" "appserviceplan" {
  name                = "${var.RESOURCE_PREFIX}-appservplan-${var.INFRA_ENVIRONMENT}"
  location            = var.RESOURCE_LOCATION
  resource_group_name = var.RESOURCE_GROUP_NAME
  os_type             = var.appserviceplan_os_type
  sku_name            = var.appserviceplan_sku_name
  worker_count        = 1
}

Here we use this block to create an app service plan; you can find a similar template for your respective resource on the Terraform website. I will share a couple of them later below.

  • We create resources using the “resource” keyword.
  • Specify which resource type you are using and a name to identify the resource within our Terraform code.
  • Provide all the necessary information. Here we use an interpolated name for the resource, attaching a prefix and the environment name for readability.
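To illustrate the interpolation, here is a sketch with hypothetical values (a prefix of "contoso" and environment "test" are assumptions for the example, not values from this article):

```hcl
# Hypothetical illustration: with RESOURCE_PREFIX = "contoso"
# and INFRA_ENVIRONMENT = "test", the interpolated name
locals {
  example_name = "${var.RESOURCE_PREFIX}-appservplan-${var.INFRA_ENVIRONMENT}"
  # resolves to "contoso-appservplan-test"
}
```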

Now let's create the respective environment folders with a “.tfvars” file in each; let's look at one sample.

envs/test/test.tfvars

This file holds the variable values for the environment.

INFRA_ENVIRONMENT     = "test" // change based on your environment
RESOURCE_GROUP_NAME   = "your_resource_group_name"
AZURE_AD_CLIENT_ID    = "your_client_id"
AZURE_SUBSCRIPTION_ID = "your_subscription_id"
AZURE_AD_TENANT_ID    = "your_tenant_id"
RESOURCE_PREFIX       = "your_prefix_key"
RESOURCE_LOCATION     = "West Europe" // change based on your location

appserviceplan_os_type  = "Linux" // change based on your need
appserviceplan_sku_name = "B1"    // change based on your need

Now we are ready to start executing our Terraform script. For that we will use the Terraform commands below.

terraform fmt : formats the Terraform files.

terraform init : initializes the working directory. Here we pass the parameter “-backend-config=key=test.terraform_state” to override the key provided in our “backend.tf” file, so the state becomes individual to the environment; rename it based on your environment.

terraform validate : validates that the configuration is correct.

terraform plan : checks the changes and creates a plan based on the existing state. While planning we want to provide the values for our variables based on our environment, so we add arguments like “-var-file=./envs/test/test.tfvars -out=tfplan”.

terraform apply : applies the changes. Here we pass “-auto-approve tfplan” to avoid the interactive approval on the pipeline and to pick the respective plan.

terraform destroy : removes the created infrastructure if needed.
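Putting those commands together, a local run for the test environment could look like the sketch below (this assumes you run from inside the Infra folder with valid Azure credentials; the state key and tfvars path match the test environment above):

```shell
terraform fmt --recursive                                     # format the files
terraform init -backend-config=key=test.terraform_state       # per-environment state key
terraform validate                                            # validate the configuration
terraform plan -var-file=./envs/test/test.tfvars -out=tfplan  # plan with test values
terraform apply -auto-approve tfplan                          # apply the saved plan
```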

Now let's get into the GitHub Actions pipeline to automate the same. I am not going deep into the steps, as we already covered them.

name: 'Infra Creation - Test'

on:
  workflow_dispatch:

jobs:
  terraformTest:
    name: 'Terraform apply - Test'
    environment: test
    env:
      ARM_CLIENT_ID: ${{ vars.AZURE_AD_CLIENT_ID }}
      ARM_CLIENT_SECRET: ${{ secrets.AZURE_AD_CLIENT_SECRET }}
      ARM_SUBSCRIPTION_ID: ${{ vars.AZURE_SUBSCRIPTION_ID }}
      ARM_TENANT_ID: ${{ vars.AZURE_AD_TENANT_ID }}
      TF_VAR_AZURE_AD_CLIENT_SECRET: ${{ secrets.AZURE_AD_CLIENT_SECRET }}

    runs-on: ubuntu-latest

    defaults:
      run:
        shell: bash

    steps:
      # Checkout the repository to the GitHub Actions runner
      - name: Checkout
        uses: actions/checkout@v2

      - name: 'Terraform Format'
        uses: hashicorp/terraform-github-actions@master
        with:
          tf_actions_version: 0.14.8
          tf_actions_subcommand: 'fmt'
          tf_actions_working_dir: "./Infra"
          args: "--recursive"

      - name: 'Terraform Init'
        uses: hashicorp/terraform-github-actions@master
        with:
          tf_actions_version: 0.14.8
          tf_actions_subcommand: 'init'
          tf_actions_working_dir: "./Infra"
          args: "-backend-config=key=test.terraform_state"

      - name: 'Terraform Validate'
        uses: hashicorp/terraform-github-actions@master
        with:
          tf_actions_version: 0.14.8
          tf_actions_subcommand: 'validate'
          tf_actions_working_dir: "./Infra"

      - name: 'Terraform Plan'
        uses: hashicorp/terraform-github-actions@master
        with:
          tf_actions_version: 0.14.8
          tf_actions_subcommand: 'plan'
          tf_actions_working_dir: "./Infra"
          args: "-var-file=./envs/test/test.tfvars -out=tfplan"

      - name: Terraform Apply
        if: github.ref == 'refs/heads/main'
        uses: hashicorp/terraform-github-actions@master
        with:
          tf_actions_version: 0.14.8
          tf_actions_subcommand: 'apply'
          tf_actions_working_dir: "./Infra"
          args: "-auto-approve tfplan"

Cool…

Now your infra will be created on request, and you can reuse the pipeline to automate the rest of the environments, each of which will have its own state file.

Let's add a few more bits and pieces to the same.

You can add data "azurerm_client_config" "current" {} to your backend file to access the current azurerm client context, like below:


data "azurerm_client_config" "current" {}

and use it in your main.tf like this (e.g., in an Azure key vault access policy):

tenant_id    = data.azurerm_client_config.current.tenant_id

If you want to use data from a previously created resource, you can reference it by its name, like below:

key_vault_id = azurerm_key_vault.keyvault.id

In case you want Terraform to ignore changes to some properties, you can use a lifecycle block like below:

lifecycle {
  ignore_changes = [
    public_network_access_enabled,
    network_acls.0.bypass
  ]
}

Please find the combined example below.



# key vault
resource "azurerm_key_vault" "keyvault" {
  name                            = "${var.RESOURCE_PREFIX}-azkeyvault-${var.INFRA_ENVIRONMENT}"
  location                        = var.RESOURCE_LOCATION
  resource_group_name             = var.RESOURCE_GROUP_NAME
  enabled_for_disk_encryption     = true
  tenant_id                       = var.AZURE_AD_TENANT_ID
  soft_delete_retention_days      = 7
  purge_protection_enabled        = true
  sku_name                        = "standard"
  public_network_access_enabled   = true
  enabled_for_deployment          = true
  enabled_for_template_deployment = true

  network_acls {
    default_action = "Allow"
    bypass         = "AzureServices"
  }

  access_policy {
    tenant_id = data.azurerm_client_config.current.tenant_id
    object_id = data.azurerm_client_config.current.object_id

    secret_permissions = [
      "Get",
      "List",
      "Set",
      "Delete",
      "Recover",
      "Backup",
      "Restore"
    ]
  }

  lifecycle {
    ignore_changes = [
      public_network_access_enabled,
      enabled_for_deployment,
      enabled_for_template_deployment,
      network_acls.0.default_action,
      network_acls.0.bypass
    ]
  }
}



resource "azurerm_key_vault_access_policy" "keyvaultaccesspolicy" {
key_vault_id = azurerm_key_vault.keyvault.id
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = azurerm_app_service.webappui.identity.0.principal_id

secret_permissions = [
"Get",
"List",
"Set",
"Delete",
"Recover",
"Backup",
"Restore"
]
}

If you want to set an expiry date on a key vault secret, you can use local variables like below and derive the date from them.

If you want to mark a dependency on existing resources, you can use “depends_on”.

locals {
  days_to_hours   = var.DAYS_TO_EXPIRE * 24
  expiration_date = timeadd(formatdate("YYYY-MM-DD'T'HH:mm:ssZ", timestamp()), "${local.days_to_hours}h")
}

resource "azurerm_key_vault_secret" "keyvaultsecpgpwd" {
  name            = "POSTGRES-KEY"
  value           = var.AZURE_POSTGRES_KEY
  key_vault_id    = azurerm_key_vault.keyvault.id
  expiration_date = local.expiration_date
  depends_on = [
    azurerm_key_vault.keyvault
  ]
}

And you can use an environment variable named “TF_VAR_your_variable_name” to pass a value into the matching Terraform variable.
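For example, to feed the AZURE_AD_CLIENT_SECRET variable from the shell (the secret value below is a placeholder), the environment variable name is the variable name prefixed with TF_VAR_, which is exactly what the workflow's env block does:

```shell
# Terraform maps TF_VAR_<name> environment variables onto variables of that name
export TF_VAR_AZURE_AD_CLIENT_SECRET="your_client_secret"
terraform plan -var-file=./envs/test/test.tfvars -out=tfplan
```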

Cool, now our pipeline and infrastructure are ready.

Enjoy, and keep coding… :)


Gibin Francis

Technical guy interested in Microsoft Technologies, IoT, Azure, Docker, UI frameworks, and more