This challenge lab is the final lab in the Automating Infrastructure on Google Cloud with Terraform quest. It builds upon the previous labs and tests your ability to import and build infrastructure, add a remote backend, update and destroy resources, and lastly test connectivity.
Task 1. Create the configuration files
touch main.tf
touch variables.tf
mkdir modules
cd modules
mkdir instances
cd instances
touch instances.tf
touch outputs.tf
touch variables.tf
cd ..
mkdir storage
cd storage
touch storage.tf
touch outputs.tf
touch variables.tf
cd
The commands above create the empty files and directories in Cloud Shell that Terraform will use. Run them from your home directory.
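If you prefer, the same layout can be created in one pass; this is an equivalent sketch using mkdir -p and touch:

```shell
# Create the module directories, then all empty Terraform files, in one pass.
mkdir -p modules/instances modules/storage
touch main.tf variables.tf
touch modules/instances/instances.tf modules/instances/outputs.tf modules/instances/variables.tf
touch modules/storage/storage.tf modules/storage/outputs.tf modules/storage/variables.tf
```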
Task 2. Add the following to each variables.tf file
variable "region" {
  default = "us-east1"
}

variable "zone" {
  default = "us-east1-a"
}

variable "project_id" {
  default = "<FILL IN PROJECT ID>"
}
The default region and zone depend on your lab's current instructions, so change them if necessary. Also, replace <FILL IN PROJECT ID> with your current Project ID.
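Rather than editing the defaults in place, the values can also be overridden per run. As a sketch, a hypothetical terraform.tfvars file in the root directory (the values below are placeholders, not lab-specific) would look like:

```hcl
# terraform.tfvars — overrides the defaults declared in variables.tf.
region     = "us-east1"
zone       = "us-east1-a"
project_id = "my-lab-project-id" # placeholder: substitute your own Project ID
```

Terraform loads terraform.tfvars automatically on plan and apply, so this keeps lab-specific values out of the version-controlled configuration.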
Add the following to the main.tf file
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "3.55.0"
    }
  }
}

provider "google" {
  project = var.project_id
  region  = var.region
  zone    = var.zone
}

module "instances" {
  source = "./modules/instances"
}
Run terraform init from the root directory in Cloud Shell to initialize Terraform.
Import infrastructure

Navigate to Compute Engine > VM Instances. Click on tf-instance-1 and copy its Instance ID down somewhere to use later. Do the same for tf-instance-2.
Next, navigate to modules/instances/instances.tf. Copy the following configuration into the file.
resource "google_compute_instance" "tf-instance-1" {
  name         = "tf-instance-1"
  machine_type = "n1-standard-1"
  zone         = var.zone

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }

  network_interface {
    network = "default"
  }
}

resource "google_compute_instance" "tf-instance-2" {
  name         = "tf-instance-2"
  machine_type = "n1-standard-1"
  zone         = var.zone

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }

  network_interface {
    network = "default"
  }
}
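The outputs.tf files can stay empty for this lab, but if you want to confirm the module wiring, a minimal sketch of modules/instances/outputs.tf might expose the instance names (optional, not required by the lab):

```hcl
# Optional: expose the managed instance names from the instances module.
output "instance_1_name" {
  value = google_compute_instance.tf-instance-1.name
}

output "instance_2_name" {
  value = google_compute_instance.tf-instance-2.name
}
```

After an apply, these values would appear under the module's outputs, which is a quick sanity check that the module is wired into the root configuration.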
To import the first instance, use the following command, using the Instance ID for tf-instance-1 you copied down earlier.
terraform import module.instances.google_compute_instance.tf-instance-1 <Instance ID – 1>
To import the second instance, use the following command, using the Instance ID for tf-instance-2 you copied down earlier.
terraform import module.instances.google_compute_instance.tf-instance-2 <Instance ID – 2>
The two instances have now been imported into your Terraform state. Run the following commands to bring the state in line with the configuration. Type yes at the prompt after you run the apply command to accept the changes.
- terraform plan
- terraform apply
Task 3. Configure a remote backend
Add the following code to the modules/storage/storage.tf file. The bucket name is provided in the lab instructions.
resource "google_storage_bucket" "storage-bucket" {
  name                        = "BUCKET_NAME"
  location                    = "US"
  force_destroy               = true
  uniform_bucket_level_access = true
}
Next, add the following module to the main.tf file:
module "storage" {
  source = "./modules/storage"
}
Run the following commands to initialize the module and create the storage bucket resource. Type yes at the prompt after you run the apply command to accept the changes.
- terraform init
- terraform apply
Next, update the main.tf file so that the terraform block looks like the following. Fill in the name of your storage bucket for the bucket argument.
terraform {
  backend "gcs" {
    bucket = "BUCKET_NAME"
    prefix = "terraform/state"
  }
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "3.55.0"
    }
  }
}
Run the following command to initialize the remote backend. Type yes at the prompt to confirm copying the existing state to the new backend.
terraform init
Task 4. Modify and update infrastructure.
Navigate to modules/instances/instances.tf. Replace the entire contents of the file with the following:
resource "google_compute_instance" "tf-instance-1" {
  name                      = "tf-instance-1"
  machine_type              = "n1-standard-2"
  zone                      = var.zone
  allow_stopping_for_update = true

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }

  network_interface {
    network = "default"
  }
}

resource "google_compute_instance" "tf-instance-2" {
  name                      = "tf-instance-2"
  machine_type              = "n1-standard-2"
  zone                      = var.zone
  allow_stopping_for_update = true

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }

  network_interface {
    network = "default"
  }
}

resource "google_compute_instance" "tf-instance-3" {
  name                      = "INSTANCE_NAME"
  machine_type              = "n1-standard-2"
  zone                      = var.zone
  allow_stopping_for_update = true

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }

  network_interface {
    network = "default"
  }
}
Replace INSTANCE_NAME in the tf-instance-3 resource with the name given in the lab instructions. After saving the file, run the following commands. Type yes at the prompt after you run the apply command.
- terraform init
- terraform apply
Task 5. Destroy resources.
Taint the tf-instance-3 resource by running the following command. Tainting marks the resource for destruction and recreation on the next apply.
terraform taint module.instances.google_compute_instance.tf-instance-3
Run the following commands to recreate the tainted instance. Type yes at the prompt:
- terraform init
- terraform apply
Next, remove the tf-instance-3 resource from the instances.tf file by deleting the following code block:
resource "google_compute_instance" "tf-instance-3" {
  name                      = "INSTANCE_NAME"
  machine_type              = "n1-standard-2"
  zone                      = var.zone
  allow_stopping_for_update = true

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }

  network_interface {
    network = "default"
  }
}
Run the following commands to apply the changes. Type yes at the prompt.
terraform apply
Task 6. Use a module from the Registry.

Copy and paste the following into the main.tf file:
module "vpc" {
  source       = "terraform-google-modules/network/google"
  version      = "~> 3.2.2"
  project_id   = var.project_id
  network_name = "VPC_NAME"
  routing_mode = "GLOBAL"

  subnets = [
    {
      subnet_name   = "subnet-01"
      subnet_ip     = "10.10.10.0/24"
      subnet_region = "us-central1"
    },
    {
      subnet_name           = "subnet-02"
      subnet_ip             = "10.10.20.0/24"
      subnet_region         = "us-central1"
      subnet_private_access = "true"
      subnet_flow_logs      = "true"
      description           = "This subnet has a description"
    },
  ]
}
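As a side note, the network module publishes outputs such as network_name and network_self_link (assuming the documented outputs of the terraform-google-modules/network module), so code in main.tf can reference the VPC without repeating its name. A minimal sketch:

```hcl
# Sketch: surface the VPC name from the module output instead of hard-coding it.
output "vpc_network_name" {
  value = module.vpc.network_name
}
```

This is optional for the lab, but referencing module outputs keeps the VPC name defined in exactly one place.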
Replace the VPC name, subnet regions, and subnet IPs with the values given in the lab instructions.
Run the following commands to initialize the module and create the VPC. Type yes at the prompt.
- terraform init
- terraform apply
Navigate to modules/instances/instances.tf. Replace the entire contents of the file with the following:
resource "google_compute_instance" "tf-instance-1" {
  name                      = "tf-instance-1"
  machine_type              = "n1-standard-2"
  zone                      = var.zone
  allow_stopping_for_update = true

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }

  network_interface {
    network    = "VPC_NAME"
    subnetwork = "subnet-01"
  }
}

resource "google_compute_instance" "tf-instance-2" {
  name                      = "tf-instance-2"
  machine_type              = "n1-standard-2"
  zone                      = var.zone
  allow_stopping_for_update = true

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-10"
    }
  }

  network_interface {
    network    = "VPC_NAME"
    subnetwork = "subnet-02"
  }
}
Run the following commands to initialize the module and update the instances. Type yes at the prompt
- terraform init
- terraform apply
Task 7. Configure a firewall.
Add the following resource to the main.tf file and fill in the GCP Project ID:
resource "google_compute_firewall" "tf-firewall" {
  name    = "tf-firewall"
  network = "projects/<PROJECT_ID>/global/networks/VPC_NAME"

  allow {
    protocol = "tcp"
    ports    = ["80"]
  }

  source_tags   = ["web"]
  source_ranges = ["0.0.0.0/0"]
}
Replace the Project ID and VPC name with the values from the current lab instructions.
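Since the firewall resource lives in the same main.tf as the vpc module, an alternative sketch (assuming the module's network_self_link output) avoids hand-building the network path and the Project ID substitution entirely:

```hcl
# Alternative sketch: derive the network path from the module output.
resource "google_compute_firewall" "tf-firewall" {
  name    = "tf-firewall"
  network = module.vpc.network_self_link

  allow {
    protocol = "tcp"
    ports    = ["80"]
  }

  source_tags   = ["web"]
  source_ranges = ["0.0.0.0/0"]
}
```

This also makes Terraform aware that the firewall depends on the VPC, so it orders their creation correctly.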
The firewall is configured to accept HTTP requests (TCP port 80), and the source range covers the entire public internet (0.0.0.0/0). If you are unfamiliar with port numbers, check out my previous article, where I highlight common port numbers for cloud professionals.