A Comprehensive Guide to a GCP-GitHub Terraform Implementation
In the realm of cloud architecture, the magic unfolds when you marry the robustness of Terraform with the scalability of Google Cloud Platform (GCP). The crux of any such integration lies in the providers and variables, setting the stage for the subsequent deployment details. This guide is a comprehensive dissection of a Terraform script intended for GCP, demystifying each section for aspiring cloud architects.
We will use the same extremely basic todo app used in our AWS and Azure demos earlier in the series. Fork the frontend and backend repositories for the rest of this demo, and be sure to use the 'gcp' branch in each.
Tl;dr - You can find the entire implementation here. These scripts are ready to run (for example, terraform apply -var-file=config.tfvars) once config.tfvars is properly configured:
github_token = "<YOUR_GITHUB_TOKEN>"
github_username = "<YOUR_GITHUB_USERNAME>"
repository_name_frontend = "todo-app-frontend"
repository_name_backend = "todo-app-backend"
repository_branch_frontend = "gcp"
repository_branch_backend = "gcp"
project_name = "<YOUR_GCP_PROJECT_NAME>"
You can get the value of the default project created with your GCP account here.
Prerequisites:
- A free GCP account with a Google Service Account that has 'Owner' privileges.
- The gcloud CLI (https://cloud.google.com/sdk/docs/install) installed and authenticated at the command line:
gcloud auth login
- Terraform installed locally with the Google Cloud Provider and the GitHub Provider
As before, we set up the Terraform project with the following structure:
project-name
-main.tf
-outputs.tf
-variables.tf
-versions.tf
-config.tfvars
Providers and Variables: The Cornerstones
The bedrock of the script is its providers and variables:
terraform {
required_providers {
google = {
source = "hashicorp/google"
version = "~> 5.1"
}
github = {
source = "integrations/github"
}
}
}
provider "google" {
credentials = file("gcp-creds-v2.json")
region = "us-central1"
zone = "us-central1-a"
}
provider "github" {
token = var.github_token
owner = var.github_username
}
provider "random" {}
Providers essentially define the platforms Terraform will interact with. While google and github are straightforward, integrating with GCP and GitHub respectively, the random provider is versatile and often used for generating unique resource identifiers or randomized values.
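For completeness, the utility providers used later in this guide (random, archive, http, and time) can also be pinned explicitly. Terraform resolves the hashicorp/* defaults for them automatically, so this is optional; a hedged sketch of the extended required_providers block (the added version constraints are assumptions):

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 5.1"
    }
    github = {
      source = "integrations/github"
    }
    # Optional pins for the utility providers used later in this guide.
    random = {
      source  = "hashicorp/random"
      version = "~> 3.5" # assumed constraint
    }
    archive = {
      source  = "hashicorp/archive"
      version = "~> 2.4" # assumed constraint
    }
    http = {
      source  = "hashicorp/http"
      version = "~> 3.4" # assumed constraint
    }
    time = {
      source  = "hashicorp/time"
      version = "~> 0.9" # assumed constraint
    }
  }
}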
Meanwhile, variables bestow flexibility, allowing users to modify deployments without touching the core logic, enhancing reusability.
variable "repository_name_backend" {
description = "The name of the backend repository"
type = string
}
variable "repository_name_frontend" {
description = "The name of the frontend repository"
type = string
}
variable "repository_branch_backend" {
description = "The name of the backend repository branch"
type = string
}
variable "repository_branch_frontend" {
description = "The name of the frontend repository branch"
type = string
}
variable "github_username" {
description = "The name of the GitHub user"
type = string
}
variable "github_token" {
description = "The GitHub token"
type = string
}
variable "region" {
description = "The region to deploy to"
type = string
default = "us-central1"
}
variable "container_port" {
description = "The port the container listens on"
type = number
default = 5000
}
variable "prefix" {
description = "The prefix to use for all resources"
type = string
default = "todoapp"
}
variable "project_name" {
description = "The name of the project"
type = string
}
variable "collection_name" {
description = "The name of the collection"
type = string
default = "todos"
}
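Variables can also guard against bad input. As an optional hardening step (not part of the original scripts), the container_port declaration above could be extended with a validation block; a minimal sketch:

variable "container_port" {
  description = "The port the container listens on"
  type        = number
  default     = 5000

  validation {
    condition     = var.container_port > 0 && var.container_port < 65536
    error_message = "The container_port value must be a valid TCP port between 1 and 65535."
  }
}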
Google Project Services: Enabling Cloud Features
Every cloud platform is structured as a suite of services, each specializing in a particular domain such as storage, compute, or artificial intelligence. Google Cloud is no exception:
resource "google_project_service" "services" {
project = var.project_name
for_each = toset([
"run.googleapis.com",
"compute.googleapis.com",
"containerregistry.googleapis.com",
"secretmanager.googleapis.com",
"cloudresourcemanager.googleapis.com",
"cloudapis.googleapis.com",
"servicemanagement.googleapis.com",
"servicecontrol.googleapis.com",
"container.googleapis.com",
"firestore.googleapis.com",
"iam.googleapis.com",
"cloudfunctions.googleapis.com",
"cloudbuild.googleapis.com",
])
service = each.key
disable_on_destroy = false
}
The google_project_service Terraform resource is pivotal when you're setting up your Google Cloud architecture. It acts as a switchboard, enabling or disabling specific Google Cloud services for your project. Think of a Google Cloud project as an empty house. While the house has the potential for lighting, plumbing, and heating, you'd need to enable each utility individually. Similarly, while Google Cloud offers many services, not all are enabled by default for every project.
Let's discuss why this selective approach matters. Firstly, it's a boon for security. You minimize potential attack vectors by ensuring only necessary services are turned on. If a service isn't being used, it shouldn't be left enabled. Secondly, it aids in cost management. Google Cloud pricing is often tied to service usage. By toggling off unneeded services, you prevent inadvertent costs.
Lastly, it's about decluttering and focus. As any cloud architect will attest, large-scale cloud projects can get complex quickly. Ensuring only the services vital to your application's architecture are active brings clarity to your project, streamlining development and operational workflows.
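Because the rest of the script assumes these APIs are already enabled, it can help to make that ordering explicit with depends_on. A hedged sketch, using an illustrative resource (the real Firestore database is defined later in this guide):

resource "google_firestore_database" "example" {
  project     = var.project_name
  name        = "example-db" # hypothetical name
  location_id = "nam5"
  type        = "FIRESTORE_NATIVE"

  # Wait until all of the APIs above have been enabled before creating the database.
  depends_on = [google_project_service.services]
}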
Google Cloud Service Account
The service account is the linchpin:
resource "google_service_account" "application_sa" {
account_id = "terraform"
display_name = "created-sa-for-app"
project = var.project_name
}
resource "google_service_account_key" "application_sa_key" {
service_account_id = google_service_account.application_sa.name
}
resource "google_service_account_iam_binding" "sa_actas_binding" {
service_account_id = google_service_account.application_sa.name
role = "roles/iam.serviceAccountUser"
members = [
"serviceAccount:${google_service_account.application_sa.email}"
]
}
It's more than just an identity; it defines the permissions, scopes, and roles that GCP resources can assume. Deploying without a service account is analogous to entering a secured facility without an ID: neither feasible nor advisable. This pivotal component ensures that services act within authorized confines. Note that this service account differs from the one you use to authenticate your local Terraform development environment. Following best practice, we create a dedicated service account that is used only to create and run the application. We also add an iam_binding to allow Cloud Run to use this service account (more on that later).
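If you want the new service account's identity handy, for example to wire up additional IAM grants or to find it quickly in the console, an optional output (not part of the original outputs.tf) could expose it; a minimal sketch:

output "application_sa_email" {
  description = "Email of the service account created for the application"
  value       = google_service_account.application_sa.email
}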
Artifact Registry and CI/CD Workflow Integration
The world of containerized applications demands efficient storage and management of container images. Google's Artifact Registry caters to this need, providing a single place to manage deployment artifacts with fine-grained access control, artifact metadata, and consistency across your deployments.
resource "google_artifact_registry_repository" "backend" {
project = var.project_name
location = var.region
repository_id = var.repository_name_backend
description = "Backend repository"
format = "DOCKER"
docker_config {
immutable_tags = false
}
}
resource "google_project_iam_member" "artifact_registry_reader" {
project = var.project_name
role = "roles/artifactregistry.reader"
member = "serviceAccount:${google_service_account.application_sa.email}"
}
resource "google_project_iam_member" "artifact_registry_writer" {
project = var.project_name
role = "roles/artifactregistry.writer"
member = "serviceAccount:${google_service_account.application_sa.email}"
}
resource "github_actions_secret" "deployment_secret_backend" {
repository = var.repository_name_backend
secret_name = "GCP_SA_KEY"
plaintext_value = base64decode(google_service_account_key.application_sa_key.private_key)
}
resource "github_repository_file" "backend_workflow" {
overwrite_on_create = true
repository = var.repository_name_backend
branch = var.repository_branch_backend
file = ".github/workflows/backend-gcp-workflow.yml"
content = templatefile("backend-github-workflow.yml", {
gcp_region = var.region
gcp_branch = var.repository_branch_backend
gcp_service = "${var.prefix}-api"
gcp_repo_backend = var.repository_name_backend
gcp_image = "${var.region}-docker.pkg.dev/${var.project_name}/${var.repository_name_backend}/${var.repository_name_backend}:latest"
})
depends_on = [github_actions_secret.deployment_secret_backend, google_artifact_registry_repository.backend]
}
Artifact Registry for Backend: Within our Terraform code, we define a google_artifact_registry_repository resource for the backend. This creates a Docker image repository within Artifact Registry, where Docker images for our backend service will be stored and managed. This centralizes container image management and integrates seamlessly with other GCP services, like Google Cloud Run, which we use to deploy our service. Having an Artifact Registry repository ensures that our container images are stored securely, are easily accessible for deployments, and benefit from the reliability and speed of Google Cloud.
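The image reference follows Artifact Registry's naming convention, REGION-docker.pkg.dev/PROJECT/REPOSITORY/IMAGE:TAG. Because the same string is needed both in the workflow template and in the Cloud Run service later, it could optionally be factored into a local value; a hedged sketch (the original scripts simply repeat the interpolation):

locals {
  # Fully qualified Artifact Registry reference for the backend image.
  backend_image = "${var.region}-docker.pkg.dev/${var.project_name}/${var.repository_name_backend}/${var.repository_name_backend}:latest"
}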
But storage is just one part of the equation. The real magic happens when this integrates with the CI/CD workflow, enabling continuous deployment and integration from code change to actual service update. The backend-github-workflow.yml template referenced by the github_repository_file resource above renders to the following workflow:
resource "github_repository_file" "backend_workflow" {
overwrite_on_create = true
repository = var.repository_name_backend
branch = var.repository_branch_backend
file = ".github/workflows/backend-gcp-workflow.yml"
content = templatefile("backend-github-workflow.yml", {
gcp_region = var.region
gcp_branch = var.repository_branch_backend
gcp_service = "${var.prefix}-api"
gcp_repo_backend = var.repository_name_backend
gcp_image = "${var.region}-docker.pkg.dev/${var.project_name}/${var.repository_name_backend}/${var.repository_name_backend}:latest"
})
depends_on = [github_actions_secret.deployment_secret_backend, google_artifact_registry_repository.backend]
}
name: Backend CI/CD Pipeline
on:
  push:
    branches:
      - ${gcp_branch}
jobs:
  push_to_registry:
    name: Build Backend API
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Setup GCP Authentication
        uses: google-github-actions/auth@v1
        with:
          credentials_json: $${{ secrets.GCP_SA_KEY }}
      - name: Login to Artifact Registry
        uses: docker/login-action@v3
        with:
          registry: ${gcp_region}-docker.pkg.dev
          username: _json_key
          password: $${{ secrets.GCP_SA_KEY }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v2
        with:
          context: .
          file: ./Dockerfile
          push: true
          tags: ${gcp_image}
      - name: Check if Cloud Run Service Exists
        id: check_service
        run: |
          if gcloud run services describe ${gcp_service} --region=${gcp_region} --platform=managed; then
            echo "service_exists=true" >> $GITHUB_OUTPUT
          else
            echo "service_exists=false" >> $GITHUB_OUTPUT
          fi
      - name: Deploy to Cloud Run
        if: steps.check_service.outputs.service_exists == 'true'
        uses: 'google-github-actions/deploy-cloudrun@v1'
        with:
          service: ${gcp_service}
          region: ${gcp_region}
          image: ${gcp_image}
GitHub Repository File for CI/CD Workflow: The github_repository_file resource is pivotal in our GitHub Actions integration. It represents a specific file within our GitHub repository, in this case our CI/CD workflow file for the backend. GitHub Actions uses YAML files to define the CI/CD workflow. This Terraform resource essentially pushes our predefined CI/CD logic (usually stored in a local .yml or .yaml file) to the GitHub repository, ensuring that every code push or specified trigger initiates the CI/CD process.
Our workflow contains steps like building the backend application, creating a Docker container image, and then pushing this image to our Artifact Registry repository. Once in the Artifact Registry, our container image is ready for deployment.
What makes GitHub Actions genuinely powerful is its deep integration with GitHub. Developers can view the progress of their workflows directly in their repositories, quickly access logs, and troubleshoot if needed. This tight-knit feedback loop can significantly accelerate the development cycle, making it easier to spot and rectify errors early.
Furthermore, with the vast marketplace of actions available, the possibilities are limitless. Whether you want to integrate with third-party tools, set up complex deployment strategies, or even automate mundane tasks like labeling issues, there's probably an action for that.
Repository Dispatch: The repository_dispatch event is particularly intriguing. It allows a GitHub Actions workflow to be triggered from outside GitHub. In this scenario, the start-frontend-workflow event is sent from Terraform (via an HTTP POST to the GitHub dispatches API, shown in the frontend section below) so that the frontend gets deployed only after certain conditions, like a successful backend deployment, are met.
Integrating Artifact Registry and our CI/CD workflow bridges two crucial domains: container image storage and automated deployment. The updated container image finds its place in the Artifact Registry with each code change. It stands ready for subsequent deployment, ensuring an efficient, secure, and streamlined development-to-deployment lifecycle.
Google Cloud Functions, Secrets Manager, and Firestore: Serverless Meets NoSQL
The power of serverless is magnified when integrated with modern databases:
resource "google_secret_manager_secret" "this" {
project = var.project_name
secret_id = "${var.prefix}-secret"
replication {
auto {}
}
}
resource "google_secret_manager_secret_version" "individual_secret" {
secret = google_secret_manager_secret.this.id
secret_data = base64decode(google_service_account_key.application_sa_key.private_key)
}
resource "google_secret_manager_secret_iam_member" "function_secret_access" {
project = var.project_name
secret_id = google_secret_manager_secret.this.secret_id
role = "roles/secretmanager.secretAccessor"
member = "serviceAccount:${google_service_account.application_sa.email}"
}
resource "google_project_iam_member" "secret_access" {
project = var.project_name
role = "roles/secretmanager.secretAccessor"
member = "serviceAccount:${google_service_account.application_sa.email}"
}
resource "random_pet" "project_id" {
length = 2
separator = "-"
}
resource "google_firestore_database" "database" {
project = var.project_name
name = "${random_pet.project_id.id}-db"
location_id = "nam5"
type = "FIRESTORE_NATIVE"
}
resource "google_project_iam_member" "firestore_iam" {
project = var.project_name
role = "roles/datastore.owner"
member = "serviceAccount:${google_service_account.application_sa.email}"
}
resource "google_cloudfunctions_function" "insert_firestore_doc" {
name = "insert-firestore-doc"
description = "Inserts a default document into Firestore"
available_memory_mb = 128
source_archive_bucket = google_storage_bucket.bucket.name
source_archive_object = google_storage_bucket_object.archive.name
trigger_http = true
runtime = "python310"
entry_point = "main"
project = var.project_name
service_account_email = google_service_account.application_sa.email
environment_variables = {
PROJECT_ID = var.project_name
COLLECTION_NAME = var.collection_name
DATABASE_NAME = google_firestore_database.database.name
SECRET_NAME = google_secret_manager_secret.this.secret_id
}
}
resource "google_project_iam_member" "function_invoker" {
project = var.project_name
role = "roles/cloudfunctions.invoker"
member = "serviceAccount:${google_service_account.application_sa.email}"
}
resource "google_project_iam_binding" "cloud_function_firestore_writer" {
project = var.project_name
role = "roles/datastore.user"
members = [
"serviceAccount:${google_service_account.application_sa.email}"
]
}
resource "google_storage_bucket" "bucket" {
name = "${var.prefix}-cloud-function-bucket"
location = "US"
force_destroy = true
project = var.project_name
}
variable "gcf_insert_firestore_doc_zip" {
type = string
default = "./scripts/insert-firestore-doc/index.zip"
}
resource "null_resource" "delete_old_archive_firestore_doc" {
provisioner "local-exec" {
command = "rm -f ${var.gcf_insert_firestore_doc_zip}"
}
triggers = {
always_recreate = "${timestamp()}" # Ensure it runs every time
}
}
data "archive_file" "gcf_insert_firestore_doc" {
depends_on = [null_resource.delete_old_archive_firestore_doc]
type = "zip"
source_dir = "./scripts/insert-firestore-doc"
output_path = var.gcf_insert_firestore_doc_zip
}
resource "google_storage_bucket_object" "archive" {
depends_on = [data.archive_file.gcf_insert_firestore_doc]
name = "insert-firestore-doc.zip"
bucket = google_storage_bucket.bucket.name
source = data.archive_file.gcf_insert_firestore_doc.output_path
}
resource "null_resource" "invoke_function" {
depends_on = [
google_cloudfunctions_function.insert_firestore_doc,
google_firestore_database.database,
google_secret_manager_secret.this
]
provisioner "local-exec" {
command = "gcloud functions call ${google_cloudfunctions_function.insert_firestore_doc.name} --region ${var.region}"
}
triggers = {
always_run = "${timestamp()}" # This will always execute the provisioner.
}
}
resource "null_resource" "destroy_database" {
triggers = {
project_name = var.project_name
database_name = google_firestore_database.database.name
}
provisioner "local-exec" {
when = destroy
command = "gcloud alpha firestore databases delete --database=${self.triggers.database_name} --project=${self.triggers.project_name} --quiet"
}
}
Firestore is Google Cloud's flagship NoSQL database, designed for the modern web. It offers a flexible, scalable, and real-time data storage solution with offline synchronization and robust querying capabilities. In a sense, Firestore is to data what Cloud Functions are to logic—dynamic, scalable, and real-time.
Google Cloud Functions is a serverless execution environment that allows developers to build and deploy scalable, single-purpose functions without managing the underlying infrastructure. These functions can be triggered by various Google Cloud events or HTTP requests, enabling easy integration with other Google Cloud services and external systems.
We use Google Cloud Functions here to create a collection and insert data for the demo application. We provision a bucket in Google Cloud Storage to store the function, archive the function's code, and upload it to the bucket. Finally, we invoke the function that writes a document to the Firestore collection.
import json
import requests
from flask import jsonify
from google.oauth2 import service_account
from google.cloud import secretmanager
from google.auth.transport import requests as gauth_requests
import os


def create_firestore_document(project_id, collection_id, database_name, access_token, data):
    # Firestore URL
    url = f"https://firestore.googleapis.com/v1/projects/{project_id}/databases/{database_name}/documents/{collection_id}"
    headers = {
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
        "x-goog-request-params": f"project_id={project_id}&database_id={database_name}"
    }
    response = requests.post(url, headers=headers, data=json.dumps(data))
    return response.json()


def does_collection_exist(project_id, collection_id, database_name, access_token):
    # Firestore URL to list documents
    url = f"https://firestore.googleapis.com/v1/projects/{project_id}/databases/{database_name}/documents/{collection_id}?pageSize=1"
    headers = {
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
        "x-goog-request-params": f"project_id={project_id}&database_id={database_name}"
    }
    response = requests.get(url, headers=headers)
    json_response = response.json()
    # Check if the 'documents' key exists in the response
    return 'documents' in json_response and len(json_response['documents']) > 0


def get_secret(secret_id, project_id):
    # Create the Secret Manager client.
    client = secretmanager.SecretManagerServiceClient()
    # Build the resource name of the secret.
    name = f"projects/{project_id}/secrets/{secret_id}/versions/latest"
    # Get the secret value.
    response = client.access_secret_version(request={"name": name})
    # Return the decoded payload.
    return response.payload.data.decode("UTF-8")


def main(request):
    SECRET_NAME = os.environ.get("SECRET_NAME")
    PROJECT_ID = os.environ.get("PROJECT_ID")
    COLLECTION_ID = os.environ.get("COLLECTION_NAME")
    DATABASE_NAME = os.environ.get("DATABASE_NAME")

    secret_value = get_secret(SECRET_NAME, PROJECT_ID)
    service_account_info = json.loads(secret_value)
    credentials = service_account.Credentials.from_service_account_info(
        service_account_info,
        scopes=["https://www.googleapis.com/auth/cloud-platform"]
    )
    request_object = gauth_requests.Request()
    credentials.refresh(request_object)
    access_token = credentials.token

    # Check if the collection exists
    collection_exists = does_collection_exist(PROJECT_ID, COLLECTION_ID, DATABASE_NAME, access_token)

    if not collection_exists:
        DATA = {
            'fields': {
                'title': {'stringValue': 'This is an example task'},
                'completed': {'booleanValue': True}
            }
        }
        response = create_firestore_document(PROJECT_ID, COLLECTION_ID, DATABASE_NAME, access_token, DATA)
        return json.dumps(response, indent=4)
    else:
        return (jsonify({"error": "Collection already exists and has documents. No document was added."}), 409)
Google Secret Manager is a cloud service that allows users to store, manage, and access sensitive information such as API keys, passwords, and certificates securely. It provides versioning, auditing, and fine-grained access controls, ensuring that secrets are kept safe and accessible only to authorized entities.
One aside for clarity: as of this writing, the GCP Terraform provider does not support destroying Firestore databases. To keep the implementation clean, I have included a local execution of a gcloud CLI command that deletes the database when running terraform destroy.
Deploying Containers with Google Cloud Run and Artifact Registry Integration
In the realm of serverless computing, Google Cloud Run stands out. It empowers developers to run containers in a fully managed environment, abstracting away infrastructure management and focusing purely on the code.
// Deploy container to Cloud Run
resource "google_cloud_run_service" "api_service" {
project = var.project_name
depends_on = [
time_sleep.wait_for_it,
github_repository_file.backend_workflow,
google_firestore_database.database,
google_artifact_registry_repository.backend,
]
name = "${var.prefix}-api"
location = var.region
template {
spec {
containers {
image = "${var.region}-docker.pkg.dev/${var.project_name}/${var.repository_name_backend}/${var.repository_name_backend}:latest"
ports {
container_port = var.container_port
}
env {
name = "NODEPORT"
value = var.container_port
}
env {
name = "PROJECT_ID"
value = var.project_name
}
env {
name = "SECRET_NAME"
value = google_secret_manager_secret.this.secret_id
}
env {
name = "SECRET_VERSION"
value = google_secret_manager_secret_version.individual_secret.version
}
env {
name = "COLLECTION_NAME"
value = var.collection_name
}
env {
name = "DATABASE_NAME"
value = google_firestore_database.database.name
}
}
service_account_name = google_service_account.application_sa.email
}
}
traffic {
percent = 100
latest_revision = true
}
}
resource "google_project_iam_member" "api_service_iam_member" {
role = "roles/run.admin"
member = "serviceAccount:${google_service_account.application_sa.email}"
project = var.project_name
}
// IAM: Allow unauthenticated access to Cloud Run service
resource "google_cloud_run_service_iam_member" "public_access" {
service = google_cloud_run_service.api_service.name
location = google_cloud_run_service.api_service.location
role = "roles/run.invoker"
member = "allUsers"
project = var.project_name
}
resource "time_sleep" "wait_for_it" {
depends_on = [google_artifact_registry_repository.backend, data.http.dispatch_event_backend]
create_duration = "3m"
}
Google Cloud Run, at its essence, allows you to take a container (which packages an application and its dependencies) and run it in the cloud without worrying about the underlying infrastructure. This is ideal for microservices architectures or any situation where scalability and speed are paramount. You only pay for the exact resources you use, and the service scales up or down automatically based on the traffic.
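That automatic scaling can also be tuned per revision through Knative annotations on the template metadata. A hedged excerpt, assuming it is merged into the api_service resource shown above (the min/max values are placeholders):

  # Excerpt: belongs inside google_cloud_run_service.api_service, alongside the existing template block.
  template {
    metadata {
      annotations = {
        "autoscaling.knative.dev/minScale" = "0" # scale to zero when idle (assumed value)
        "autoscaling.knative.dev/maxScale" = "5" # upper bound on instances (assumed value)
      }
    }
    # spec { ... } remains exactly as defined above
  }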
The integration between Artifact Registry and Cloud Run becomes evident in the Terraform code. The container images stored in the Artifact Registry are pulled and deployed on Cloud Run. The benefits of this duo are numerous. For one, the transition from development to production becomes smoother. Changes in application code are transformed into container images, stored in the Artifact Registry, and then swiftly deployed to Cloud Run for live traffic. Environment variables are passed to the service to allow the service to authenticate to other parts of the infrastructure, including Secret Manager and Firestore.
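As an aside, rather than passing only the secret's name and having the backend call Secret Manager itself, Cloud Run can inject the secret value directly as an environment variable. A hedged excerpt that would sit alongside the other env blocks in the containers section above (the variable name is hypothetical):

      env {
        name = "GCP_SA_KEY_JSON" # hypothetical variable name
        value_from {
          secret_key_ref {
            name = google_secret_manager_secret.this.secret_id
            key  = "latest" # use the most recent secret version
          }
        }
      }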
Moreover, with the Artifact Registry's fine-grained access controls and vulnerability scanning, you add an extra layer of security to your deployment pipeline. Ensuring that only secure and compliant images make their way to Cloud Run safeguards your application's integrity.
In summary, the tandem of Google Cloud Run and Artifact Registry offers a streamlined, secure, and efficient way to manage containerized applications from development to deployment. It exemplifies how the modular nature of Google Cloud services can be harnessed to create robust and scalable cloud architectures.
Hosting a Static Website on Google Cloud Platform
Google Cloud Storage (GCS) is a robust, scalable, and durable object storage service offered by Google Cloud Platform. Beyond its primary use case of storing and retrieving large amounts of data, GCS also provides the capability to host static websites. Users can serve content directly from a GCS bucket, making it an ideal solution for hosting static assets like HTML, CSS, JavaScript, and images. Developers can deploy static websites without needing a traditional web server by configuring a bucket for website hosting and pointing a custom domain to it. Furthermore, when combined with Google's global infrastructure and edge locations, the static content is delivered rapidly and reliably to end-users, often benefiting from reduced latency and increased availability.
resource "google_storage_bucket" "static_website_bucket" {
name = "${var.prefix}-static-website-bucket"
location = "US"
force_destroy = true
project = var.project_name
website {
main_page_suffix = "index.html"
not_found_page = "404.html"
}
}
resource "google_storage_bucket_iam_member" "bucket_iam_member" {
bucket = google_storage_bucket.static_website_bucket.name
role = "roles/storage.objectAdmin"
member = "serviceAccount:${google_service_account.application_sa.email}"
}
resource "google_storage_bucket_iam_member" "public_read" {
bucket = google_storage_bucket.static_website_bucket.name
role = "roles/storage.legacyObjectReader"
member = "allUsers"
}
resource "google_artifact_registry_repository" "frontend" {
project = var.project_name
location = var.region
repository_id = var.repository_name_frontend
description = "Frontend repository"
format = "DOCKER"
docker_config {
immutable_tags = false
}
}
resource "github_actions_secret" "deployment_secret_frontend" {
repository = var.repository_name_frontend
secret_name = "GCP_SA_KEY"
plaintext_value = base64decode(google_service_account_key.application_sa_key.private_key)
}
resource "github_repository_file" "frontend_workflow" {
depends_on = [
github_actions_secret.deployment_secret_frontend,
google_artifact_registry_repository.frontend,
google_storage_bucket.static_website_bucket,
google_cloud_run_service.api_service
]
overwrite_on_create = true
repository = var.repository_name_frontend
branch = var.repository_branch_frontend
file = ".github/workflows/frontend-gcp-workflow.yml"
content = templatefile("frontend-github-workflow.yml", {
gcp_branch = var.repository_branch_frontend
gcp_backend_url = replace(google_cloud_run_service.api_service.status[0].url, "https://", "")
gcp_bucket_name = google_storage_bucket.static_website_bucket.name
})
}
data "http" "dispatch_event_frontend" {
url = "https://api.github.com/repos/${var.github_username}/${var.repository_name_frontend}/dispatches"
method = "POST"
request_headers = {
Accept = "application/vnd.github.everest-preview+json"
Authorization = "token ${var.github_token}"
}
request_body = jsonencode({
event_type = "start-frontend-workflow"
})
depends_on = [github_repository_file.frontend_workflow]
}
Here, we create a bucket to store the build output of our React.js application and, with a single repository file, define a workflow that builds the web assets, uploads them to the bucket, and makes the website available to view.
Outputs from the script, found in outputs.tf, display both the URL of the API and the URL of the publicly accessible website.
output "api_url" {
description = "The URL of the deployed API."
value = google_cloud_run_service.api_service.status[0].url
}
output "website_url" {
description = "The URL of the static website."
value = "https://${google_storage_bucket.static_website_bucket.name}.storage.googleapis.com/index.html"
}
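If you also want the seeding function's endpoint surfaced after an apply, an optional extra output (not in the original outputs.tf) could expose its trigger URL; a minimal sketch:

output "seed_function_url" {
  description = "HTTPS trigger URL of the Cloud Function that seeds Firestore"
  value       = google_cloudfunctions_function.insert_firestore_doc.https_trigger_url
}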
Integrating GitHub with GitHub Actions in our Terraform code bridges the gap between code changes and cloud deployments, creating a seamless pipeline that ensures rapid and reliable delivery of software changes.
Conclusion
Mastering cloud architecture is akin to weaving a complex tapestry—every stitch matters. This deep dive into Terraform's orchestration for GCP offers a blueprint for architects to design, deploy, and dominate the cloud landscape. With each section intricately connected to the next, the integration showcases the meticulousness required to build robust, scalable, and secure cloud solutions.
That's the last part of our deep dives into CI/CD implementations on the big three cloud platforms. Continue to the next post for an analysis and final thoughts on this series.