Now that we have completed the groundwork from part 3 and the certificate from part 4, we are ready to build the infrastructure to support our to-do app on Microsoft Azure.
We will use an extremely basic to-do app for this demo, the same one used in our AWS demo earlier in the series. Create forks of the frontend repository and the backend repository to use for the rest of this demo.
As with the previous two stages, we create a new stage with the following structure:
stage_name
- main.tf
- outputs.tf
- variables.tf
- versions.tf
- config.tfvars
Tl;dr - You can find the stage 3 implementation here.
First, we set up versions.tf, which configures the providers for this stage of the demo. In addition to the Azure providers, it adds the GitHub provider, which allows Terraform to create GitHub resources such as Actions secrets.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.74.0"
    }
    azuread = {
      source = "hashicorp/azuread"
    }
    github = {
      source = "integrations/github"
    }
  }
}

provider "azurerm" {
  features {}
}

provider "azuread" {}

provider "github" {
  token = var.github_token
  owner = var.github_username
}
Next, we define the input variables for the third stage of the demo in variables.tf; these values are used throughout the rest of the implementation.
variable "prefix" {
description = "Prefix for all resources"
type = string
}
variable "application_port" {
description = "Application Port"
type = number
}
variable "virtual_network" {
description = "Azure Virtual Network Name"
type = string
}
variable "public_subnet" {
description = "Azure Virtual Network Public Subnet Name"
type = string
}
variable "private_subnet" {
description = "Azure Virtual Network Private Subnet Name"
type = string
}
variable "key_vault" {
description = "Azure Key Vault Name"
type = string
}
variable "resource_group" {
description = "Azure Resource Group Name"
type = string
}
variable "public_ip_id" {
description = "Azure Public IP Address Id"
type = string
}
variable "container_registry" {
description = "Azure Container Registry Name"
type = string
}
variable "repository_name_backend" {
description = "Github Backend Repository Name"
type = string
}
variable "repository_name_frontend" {
description = "Github Frontend Repository Name"
type = string
}
variable "repository_branch_frontend" {
description = "Github Frontend Repository Branch"
type = string
}
variable "repository_branch_backend" {
description = "Github Backend Repository Branch"
type = string
}
variable "github_token" {
description = "Github Token"
type = string
}
variable "github_username" {
description = "Github Username"
type = string
}
variable "subdomain" {
description = "Subdomain"
type = string
default = "www"
}
variable "zone_name" {
description = "Zone Name"
type = string
}
As in the previous post, we map the input variables to the values used by this stage's implementation in config.tfvars; applying the stage with these values is sketched after the listing. You can find instructions for generating a suitable GitHub personal access token here.
## Values imported from Stages 1 and 2 of the Demo
public_ip_id       = "todoapp-publicip"
virtual_network    = "todoapp-network"
public_subnet      = "public-subnet"
private_subnet     = "private-subnet"
container_registry = "todoappacr"
key_vault          = "todoapp-kv"
resource_group     = "todoapp-resources"
prefix             = "todoapp"

## Values specific to the script's implementation
zone_name       = "YOURDOMAINHERE.COM" # your preregistered domain name
github_token    = "YOUR_GITHUB_PERSONAL_ACCESS_TOKEN"
github_username = "YOUR_GITHUB_USERNAME"

# Forked demo repositories
repository_name_frontend = "todo-app-frontend"
repository_name_backend  = "todo-app-backend"
subdomain                = "www" # must match previous stages

# Fixed values for this demo's implementation - do not modify
repository_branch_frontend = "main"
repository_branch_backend  = "main"
application_port           = 5000
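With the variables mapped, the stage can be initialized and applied like the previous ones. A minimal sketch, assuming you run the commands from this stage's directory and have already authenticated with az login:
# Initialize providers and modules for this stage
terraform init

# Preview the changes using the values from config.tfvars
terraform plan -var-file=config.tfvars

# Create the resources
terraform apply -var-file=config.tfvars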
Next, we begin building the central portion of the main.tf implementation, starting by importing the infrastructure created in stages one and two. We use data blocks in Terraform to reference the resource group, network configuration, container registry, user-assigned identity, key vault, DNS zone, and public IP. We also retrieve the GitHub Actions public keys for the application repositories.
data "azurerm_client_config" "current" {}
data "azuread_client_config" "current" {}
data "azurerm_resource_group" "this" {
name = var.resource_group
}
data "azurerm_user_assigned_identity" "this" {
name = "${var.prefix}-identity"
resource_group_name = data.azurerm_resource_group.this.name
}
data "azurerm_public_ip" "this" {
name = var.public_ip_id
resource_group_name = data.azurerm_resource_group.this.name
}
data "azurerm_dns_zone" "this" {
name = var.zone_name
resource_group_name = data.azurerm_resource_group.this.name
}
data "azurerm_key_vault" "this" {
name = var.key_vault
resource_group_name = data.azurerm_resource_group.this.name
}
data "azurerm_key_vault_secret" "domain_certificate" {
name = "${var.prefix}-domain-certificate"
key_vault_id = data.azurerm_key_vault.this.id
}
data "azurerm_subnet" "public" {
name = var.public_subnet
virtual_network_name = var.virtual_network
resource_group_name = data.azurerm_resource_group.this.name
}
data "azurerm_subnet" "private" {
name = var.private_subnet
virtual_network_name = var.virtual_network
resource_group_name = data.azurerm_resource_group.this.name
}
data "azurerm_container_registry" "this" {
name = var.container_registry
resource_group_name = data.azurerm_resource_group.this.name
}
data "github_actions_public_key" "backend_public_key" {
repository = var.repository_name_backend
}
data "github_actions_public_key" "frontend_public_key" {
repository = var.repository_name_frontend
}
Next, we build the database using Azure Cosmos DB with MongoDB compatibility. Azure Cosmos DB with the MongoDB API provides a globally distributed, multi-model database service for large-scale applications, offering broad scalability and geographic distribution while remaining compatible with MongoDB applications, drivers, and tools.
## Database
resource "azurerm_cosmosdb_account" "this" {
  name                = "${var.prefix}-cosmosdb"
  location            = data.azurerm_resource_group.this.location
  resource_group_name = data.azurerm_resource_group.this.name
  offer_type          = "Standard"
  kind                = "MongoDB"

  consistency_policy {
    consistency_level       = "Session"
    max_interval_in_seconds = 10
    max_staleness_prefix    = 200
  }

  geo_location {
    location          = data.azurerm_resource_group.this.location
    failover_priority = 0
  }
}

resource "azurerm_cosmosdb_mongo_database" "this" {
  name                = "${var.prefix}-mongo-db"
  resource_group_name = data.azurerm_resource_group.this.name
  account_name        = azurerm_cosmosdb_account.this.name
}

locals {
  cosmosdb_credentials = jsonencode({
    endpoint          = azurerm_cosmosdb_account.this.endpoint
    username          = azurerm_cosmosdb_account.this.name
    password          = azurerm_cosmosdb_account.this.primary_key
    database          = azurerm_cosmosdb_mongo_database.this.name
    connection_string = azurerm_cosmosdb_account.this.connection_strings[0]
  })
}

resource "azurerm_key_vault_secret" "cosmosdb_credentials" {
  name         = "${var.prefix}-cosmosdb-credentials"
  value        = local.cosmosdb_credentials
  key_vault_id = data.azurerm_key_vault.this.id
}
In this section, we set up the Cosmos DB account, which acts as a container for the database itself. We also store the database credentials in Azure Key Vault as a secret so they can be passed to the container group.
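If you want to confirm that the credentials landed in the vault, you can read the secret back with the Azure CLI. A quick sketch, assuming the todoapp prefix and the todoapp-kv vault name from config.tfvars:
# Read the JSON-encoded CosmosDB credentials back from Key Vault
az keyvault secret show \
  --vault-name todoapp-kv \
  --name todoapp-cosmosdb-credentials \
  --query value -o tsv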
Next, we add Azure Container Instances to host the containerized backend. Azure Container Instances (ACI) runs isolated containers without requiring orchestration, providing a managed, serverless Azure environment for containerized applications.
## Container Group
locals {
  parsed_credentials = jsondecode(azurerm_key_vault_secret.cosmosdb_credentials.value)
}

resource "azurerm_container_group" "this" {
  name                = "${var.prefix}-containergroup"
  location            = data.azurerm_resource_group.this.location
  resource_group_name = data.azurerm_resource_group.this.name
  os_type             = "Linux"
  subnet_ids          = [data.azurerm_subnet.private.id]

  image_registry_credential {
    server   = data.azurerm_container_registry.this.login_server
    username = data.azurerm_container_registry.this.admin_username
    password = data.azurerm_container_registry.this.admin_password
  }

  container {
    name   = "${var.prefix}-container"
    image  = "${data.azurerm_container_registry.this.login_server}/${var.prefix}acr:v1"
    cpu    = "0.5"
    memory = "1.5"

    environment_variables = {
      COSMOSDB_USERNAME          = local.parsed_credentials["username"]
      COSMOSDB_PASSWORD          = local.parsed_credentials["password"]
      COSMOSDB_ENDPOINT          = local.parsed_credentials["endpoint"]
      COSMOSDB_DATABASE          = local.parsed_credentials["database"]
      COSMOSDB_CONNECTION_STRING = local.parsed_credentials["connection_string"]
      NODEPORT                   = "${var.application_port}"
      DOMAIN                     = "${var.subdomain}.${data.azurerm_dns_zone.this.name}"
    }

    ports {
      port     = var.application_port
      protocol = "TCP"
    }
  }

  ip_address_type = "Private"

  depends_on = [
    azurerm_cosmosdb_account.this,
    data.azurerm_container_registry.this,
    github_repository_file.backend_workflow
  ]
}
We pass the database credentials into the container instance as environment variables, and we pull the application image from the container registry created in the first stage.
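To check that the container group came up, you can query its state and private IP with the Azure CLI. A minimal sketch, assuming the todoapp names from config.tfvars; note that the IP is private, so the app is only reachable from inside the virtual network (or through the Application Gateway added next):
# Show the provisioning state and private IP of the container group
az container show \
  --resource-group todoapp-resources \
  --name todoapp-containergroup \
  --query "{state: provisioningState, ip: ipAddress.ip}" -o table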
Next, we add an Azure Application Gateway. Azure Application Gateway is a web traffic load balancer that manages traffic to your web applications, distributing incoming HTTP(S) requests across backend servers for high availability and reliability.
resource "azurerm_application_gateway" "this" {
name = "${var.prefix}-appgateway"
location = data.azurerm_resource_group.this.location
resource_group_name = data.azurerm_resource_group.this.name
identity {
type = "UserAssigned"
identity_ids = [azurerm_user_assigned_identity.this.id]
}
sku {
name = "Standard_v2"
tier = "Standard_v2"
capacity = 1
}
ssl_policy {
policy_type = "Predefined"
policy_name = "AppGwSslPolicy20170401S"
}
probe {
name = "aci-health-probe"
protocol = "Http"
path = "/health"
port = var.application_port
host = azurerm_container_group.this.ip_address
interval = 30 # Time interval between probes in seconds
timeout = 30 # Timeout for the probe in seconds
unhealthy_threshold = 3 # Number of consecutive failures before marking as unhealthy
}
gateway_ip_configuration {
name = "${var.prefix}-gateway-ip-configuration"
subnet_id = data.azurerm_subnet.public.id
}
frontend_port {
name = "frontend-port-https"
port = 443
}
ssl_certificate {
name = "${var.prefix}-domain-certificate"
key_vault_secret_id = data.azurerm_key_vault_secret.domain_certificate.id
}
frontend_ip_configuration {
name = "frontend-ip-configuration"
public_ip_address_id = data.azurerm_public_ip.this.id
}
backend_address_pool {
name = "backend-address-pool"
fqdns = [azurerm_container_group.this.ip_address]
}
backend_http_settings {
name = "backend-http-settings"
cookie_based_affinity = "Disabled"
port = var.application_port
protocol = "Http"
request_timeout = 60
probe_name = "aci-health-probe"
}
http_listener {
name = "https-listener"
frontend_ip_configuration_name = "frontend-ip-configuration"
frontend_port_name = "frontend-port-https"
protocol = "Https"
ssl_certificate_name = "${var.prefix}-domain-certificate"
}
request_routing_rule {
name = "https-request-routing-rule"
priority = 1
rule_type = "Basic"
http_listener_name = "https-listener"
backend_address_pool_name = "backend-address-pool"
backend_http_settings_name = "backend-http-settings"
}
}
The code above deploys an Azure Application Gateway, specifying its identity, SKU, SSL policy, health probe, IP configurations, and SSL certificate using a mix of predefined values and variables. It also defines the frontend and backend settings, the HTTPS listener, and a request routing rule, so the gateway terminates HTTPS requests and routes them to the container group according to the routing rule and health probe, drawing on other Azure resources such as subnets, the public IP, and Key Vault secrets.
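Once the gateway is deployed, you can verify it from the Azure CLI. A hedged sketch, assuming the todoapp names from config.tfvars and that your domain resolves to the gateway's public IP (created in an earlier stage); the /health path matches the probe defined above:
# Ask the gateway for the health of its backend pool (the container group)
az network application-gateway show-backend-health \
  --resource-group todoapp-resources \
  --name todoapp-appgateway

# Hit the backend health endpoint through the gateway once DNS resolves
curl -i https://YOURDOMAINHERE.COM/health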
Next, we implement an Azure Static Web App to host the frontend React application. Azure Static Web Apps is an Azure service that lets developers build and deploy full-stack web apps directly from a GitHub repository. It integrates with GitHub Actions for CI/CD and provides features such as automatic SSL certificates, custom domains, authentication, and authorization, while handling scaling automatically, making it a good fit for static web applications built with HTML, CSS, JavaScript, or frameworks like Angular, React, and Vue.js.
## Frontend Static Site
resource "azurerm_static_site" "this" {
  name                = "${var.prefix}-static-site"
  location            = "eastus2"
  resource_group_name = data.azurerm_resource_group.this.name
  sku_size            = "Standard"
  sku_tier            = "Free"
}

resource "azurerm_dns_cname_record" "this" {
  depends_on          = [data.azurerm_dns_zone.this]
  name                = var.subdomain
  zone_name           = data.azurerm_dns_zone.this.name
  resource_group_name = data.azurerm_resource_group.this.name
  ttl                 = 300
  record              = azurerm_static_site.this.default_host_name
}

resource "azurerm_static_site_custom_domain" "this" {
  static_site_id  = azurerm_static_site.this.id
  domain_name     = "${azurerm_dns_cname_record.this.name}.${azurerm_dns_cname_record.this.zone_name}"
  validation_type = "cname-delegation"
  depends_on      = [data.external.dns_cname_check]
}

resource "azurerm_key_vault_secret" "deployment_secret" {
  name         = "deployment-secret"
  value        = azurerm_static_site.this.api_key
  key_vault_id = data.azurerm_key_vault.this.id
}

data "external" "dns_cname_check" {
  depends_on = [azurerm_dns_cname_record.this]
  program    = ["./scripts/check_dns_propagation.sh", "www.${data.azurerm_dns_zone.this.name}", "${azurerm_static_site.this.default_host_name}.", "CNAME"]
}
The code above deploys an Azure Static Web App (azurerm_static_site) with the specified SKU and location, and configures a custom domain for it by creating a DNS CNAME record (azurerm_dns_cname_record) and validating it (azurerm_static_site_custom_domain), while also storing the static site's API key as a secret in Azure Key Vault (azurerm_key_vault_secret). It also uses an external data source (data.external) to run a shell script that checks DNS propagation, ensuring the CNAME record has propagated before the custom domain validation proceeds. The check_dns_propagation.sh script is shown below.
#!/bin/bash
# Ensure a domain, expected value, and record type are provided as parameters
if [ "$#" -ne 3 ]; then
  echo "Usage: $0 <domain> <expected_value> <record_type>"
  exit 1
fi

# Domain, expected record value, and record type
DOMAIN="$1"
EXPECTED_VALUE="$2"
RECORD_TYPE="$3"

# DNS servers to check
DNS_SERVERS=(
  "1.1.1.1"     # Cloudflare
  "1.0.0.1"     # Cloudflare
  "75.75.75.75" # Comcast
  "75.75.76.76" # Comcast
)

# Max attempts to check DNS propagation
MAX_ATTEMPTS=2
# Time to sleep between attempts in seconds
SLEEP_TIME=10
# Attempt counter
ATTEMPT=0

# Log file path
LOG_FILE="./scripts/dns_${RECORD_TYPE}_propagation.log"
# Clear previous log file
echo "" > "$LOG_FILE"

# JSON output variable
JSON_OUTPUT="{"

while [ "$ATTEMPT" -lt "$MAX_ATTEMPTS" ]; do
  FOUND="true"
  JSON_OUTPUT="{"
  for server in "${DNS_SERVERS[@]}"; do
    RESULT=$(dig @$server +short $RECORD_TYPE $DOMAIN)
    RESULT=$(echo $RESULT | tr -d '"')
    # Get the current timestamp
    TIMESTAMP=$(date +"%Y-%m-%d %H:%M:%S")
    # Check if the result matches the expected value
    if [ "$RESULT" != "$EXPECTED_VALUE" ]; then
      FOUND="false"
      JSON_OUTPUT+="\"server_${server//./_}\": \"not found\","
      echo "$TIMESTAMP - $DOMAIN @ $server: not found" >> "$LOG_FILE"
    else
      JSON_OUTPUT+="\"server_${server//./_}\": \"found\","
      echo "$TIMESTAMP - $DOMAIN @ $server: found" >> "$LOG_FILE"
    fi
  done

  # Remove trailing comma and close JSON object
  JSON_OUTPUT="${JSON_OUTPUT%,}}"

  # If the record was found on all DNS servers, break the loop
  if [ "$FOUND" == "true" ]; then
    break
  fi

  # Increment the attempt counter and sleep before trying again
  ATTEMPT=$((ATTEMPT + 1))
  #echo "Attempt $ATTEMPT/$MAX_ATTEMPTS . Trying again in $SLEEP_TIME seconds..."
  sleep "$SLEEP_TIME"
done

# Output the JSON object with the result
echo $JSON_OUTPUT
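The script can also be run by hand while you wait for DNS to propagate. A small usage sketch; the static site hostname below is a placeholder for the default_host_name value that Terraform passes in (it typically ends in azurestaticapps.net):
# Check whether the www CNAME for your domain resolves to the static site hostname
./scripts/check_dns_propagation.sh \
  "www.YOURDOMAINHERE.COM" \
  "<your-static-site>.azurestaticapps.net." \
  "CNAME"
The script prints a small JSON object with one found/not found entry per DNS server, which is exactly what the data "external" block above consumes.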
Finally, we get to the heart of the matter: setting up a CI/CD pipeline with GitHub Actions. GitHub Actions is GitHub's built-in CI/CD and automation feature; it lets developers create, customize, and run workflows directly in their repositories, automatically building, testing, and deploying applications in response to triggers such as pushes, pull requests, or issue creation.
## GitHub Actions CI/CD
resource "github_actions_secret" "deployment_secret" {
  repository      = var.repository_name_frontend
  secret_name     = "DEPLOYMENT_SECRET"
  plaintext_value = azurerm_key_vault_secret.deployment_secret.value
}

resource "github_actions_secret" "acr_username" {
  repository      = var.repository_name_backend
  secret_name     = "ACR_USERNAME"
  plaintext_value = data.azurerm_container_registry.this.admin_username
}

resource "github_actions_secret" "acr_password" {
  repository      = var.repository_name_backend
  secret_name     = "ACR_PASSWORD"
  plaintext_value = data.azurerm_container_registry.this.admin_password
}

resource "github_repository_file" "frontend_workflow" {
  depends_on          = [azurerm_static_site.this, github_actions_secret.deployment_secret]
  overwrite_on_create = true
  repository          = var.repository_name_frontend
  branch              = var.repository_branch_frontend
  file                = ".github/workflows/frontend.yml"
  content             = <<-EOF
    name: CI/CD Pipeline
    on:
      push:
        branches:
          - ${var.repository_branch_frontend}
    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - name: Checkout code
            uses: actions/checkout@v3
          - name: Set up Node.js
            uses: actions/setup-node@v3
            with:
              node-version: '18'
          - name: Install dependencies
            run: yarn install
          - name: Build
            run: yarn build
          - name: Create config.json
            run: |
              echo '{
                "REACT_APP_BACKEND_URL": "${data.azurerm_dns_zone.this.name}"
              }' > build/config.json
          - name: Deploy to Azure Static Web App
            uses: azure/static-web-apps-deploy@v1
            with:
              azure_static_web_apps_api_token: $${{ secrets.DEPLOYMENT_SECRET }}
              action: "upload"
              app_location: "build"
  EOF
}

resource "github_repository_file" "backend_workflow" {
  depends_on = [
    data.azurerm_container_registry.this,
    github_actions_secret.acr_username,
    github_actions_secret.acr_password
  ]
  overwrite_on_create = true
  repository          = var.repository_name_backend
  branch              = var.repository_branch_backend
  file                = ".github/workflows/backend.yml"
  content             = <<-EOT
    name: Push Docker image to custom registry
    on:
      push:
        branches:
          - ${var.repository_branch_backend}
    jobs:
      push_to_registry:
        name: Build and push Docker image
        runs-on: ubuntu-latest
        steps:
          - name: Check out the repo
            uses: actions/checkout@v2
          - name: Log in to Docker registry
            uses: azure/docker-login@v1
            with:
              login-server: ${data.azurerm_container_registry.this.login_server}
              username: $${{ secrets.ACR_USERNAME }}
              password: $${{ secrets.ACR_PASSWORD }}
          - name: Build and push Docker image
            uses: docker/build-push-action@v2
            with:
              context: .
              file: ./Dockerfile
              push: true
              tags: ${data.azurerm_container_registry.this.login_server}/${var.prefix}acr:v1
  EOT
}

data "http" "dispatch_event_backend" {
  url    = "https://api.github.com/repos/${var.github_username}/${var.repository_name_backend}/dispatches"
  method = "POST"

  request_headers = {
    Accept        = "application/vnd.github.everest-preview+json"
    Authorization = "token ${var.github_token}"
  }

  request_body = jsonencode({
    event_type = "my-event"
  })

  depends_on = [github_repository_file.backend_workflow]
}

data "http" "dispatch_event_frontend" {
  url    = "https://api.github.com/repos/${var.github_username}/${var.repository_name_frontend}/dispatches"
  method = "POST"

  request_headers = {
    Accept        = "application/vnd.github.everest-preview+json"
    Authorization = "token ${var.github_token}"
  }

  request_body = jsonencode({
    event_type = "my-event"
  })

  depends_on = [github_repository_file.frontend_workflow]
}
The code above automates the GitHub Actions CI/CD setup for the two repositories: the frontend React application and the backend Node/Express API. The resources from the Terraform GitHub integration create the necessary secrets (such as the deployment secret and the Azure Container Registry credentials) and define workflows in the repositories that handle the build and deployment tasks: building the static web app, creating a configuration file, deploying to Azure Static Web Apps, and building and pushing a Docker image to the container registry. Finally, it uses HTTP data sources to send GitHub repository dispatch events intended to kick off the pipelines once the workflow files are in place.
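If you ever need to re-trigger the dispatch events outside Terraform, the same GitHub API call can be made directly. A sketch mirroring the data "http" blocks above, assuming your own username, token, and forked repository name:
# Manually send the dispatch event that Terraform posts after writing the workflow file
curl -X POST \
  -H "Accept: application/vnd.github.everest-preview+json" \
  -H "Authorization: token YOUR_GITHUB_PERSONAL_ACCESS_TOKEN" \
  https://api.github.com/repos/YOUR_GITHUB_USERNAME/todo-app-backend/dispatches \
  -d '{"event_type": "my-event"}'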
And there you have it - a working to-do app running on Microsoft Azure with CI/CD enabled. Read on to the next article in the series, which looks at the same implementation on Google Cloud Platform.