Build a Load Balanced Application#

This lab walks through several changes that together create a load balanced application. Those changes are:

  1. Instantiate multiple instances to handle the load

  2. Create the resources necessary to perform load balancing

  3. Design health checks that determine if our application is running

Further Reading#

The Cloud Load Balancing documentation on Google is excellent and well written, and Google provides Terraform examples alongside it.

Make a lab13 Directory#

In your git repository, copy the base directory to create a new directory for this lab.

$ cp -R base lab13
$ cd lab13

Step 1: Make Multiple Instances#

Terraform makes it easy to repeat a resource, which is great because it would be awful to copy-and-paste an instance 100 times! We’re going to start by changing the instance declaration to look like this:

resource "google_compute_instance" "webservers" {
  count        = 3
  name         = "web${count.index}"
  machine_type = "e2-micro"

  boot_disk {
    initialize_params {
      image = "ubuntu-os-cloud/ubuntu-2004-lts"
    }
  }

  network_interface {
    network = google_compute_network.vpc_network.name
    access_config {
    }
  }

  labels = {
    role = "web"
  }
}

We made three changes:

  1. We added the count declaration:

    count = 3
    
  2. We changed the name to make sure each gets a unique name:

    name = "web${count.index}"
    
  3. We added a label that we can use with Ansible:

    labels = {
        role = "web"
    }
    

Step 2: Update the Output#

We’ll be creating three instances so we want to see all three IP addresses. Update the output block to look like this:

output "external-ip" {
  value = google_compute_instance.webservers[*].network_interface[0].access_config[0].nat_ip
}

Stop!

Apply your Terraform configuration to make sure it works.
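
A cautious sequence before the full apply might look like this (fmt and validate are optional, but they catch mistakes cheaply):

```shell
$ terraform fmt        # normalize formatting in place
$ terraform validate   # check syntax and references without touching GCP
$ terraform apply      # review the plan, then type "yes" to confirm
```

After the apply succeeds, the external-ip output should list three addresses.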

Step 3: Configure the Servers with Ansible#

Now that you have three servers running, we can really use the power of Ansible. The GCP inventory plugin configuration (inventory.gcp.yaml) has this line in it:

keyed_groups:
  - prefix: gcp
    key: labels

This creates inventory groups based on labels applied in GCP. The ansible-inventory tool will confirm that the labels are applied:

$ ansible-inventory --graph -i ../inventory.gcp.yaml
@all:
  |--@gcp_role_web:
  |  |--34.132.145.119
  |  |--34.133.65.186
  |  |--35.193.204.3
  |--@ungrouped:

Add the following play to your playbook.yaml. Notice the play is only run on hosts in the gcp_role_web group!

- hosts: gcp_role_web
  name: Install Apache on the web servers
  become: yes
  tasks:
    - name: Install packages
      ansible.builtin.apt:
        name:
          - apache2
          - php
        state: present
        update_cache: yes
    - name: Update index.html so we can see the difference between hosts
      ansible.builtin.blockinfile:
        path: /var/www/html/index.html
        owner: www-data
        marker: "<!-- {mark} ANSIBLE MANAGED BLOCK -->"
        insertafter: "<body>"
        block: |
          <h1>This is the server {{ inventory_hostname }}</h1>

Run the playbook to install and configure Apache on your hosts, then verify that you can see each host's web page.
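
Concretely, the run might look like this (the inventory path matches the earlier ansible-inventory example; substitute your own external IPs from the Terraform output):

```shell
$ ansible-playbook -i ../inventory.gcp.yaml playbook.yaml
$ curl http://34.132.145.119/    # repeat for each external IP; each page names its host
```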

Step 4: Create a Health Check#

In order for load balancing to work we have to ensure that each of the individual instances in the group can be assessed for proper operation. In the case of a web server, a health check is as simple as loading any web page on the server to make sure it comes back okay.

resource "google_compute_health_check" "webservers" {
  name = "webserver-health-check"

  timeout_sec        = 1
  check_interval_sec = 1

  http_health_check {
    port = 80
  }
}

It's sometimes necessary to update the firewall to allow health checks, because Google's health-check probes originate from a dedicated set of Google address ranges. In our case the health check uses the same port ("80") that we already allow globally, so no firewall change is needed.
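
For reference, if port 80 were not already open, a rule along these lines would admit the probes (a sketch; the network reference assumes the google_compute_network.vpc_network resource used above, and the source ranges are Google's documented health-check probe ranges):

```terraform
resource "google_compute_firewall" "allow-health-checks" {
  name    = "allow-health-checks"
  network = google_compute_network.vpc_network.name

  allow {
    protocol = "tcp"
    ports    = ["80"]
  }

  # Google health-check probes come from these documented ranges.
  source_ranges = ["130.211.0.0/22", "35.191.0.0/16"]
}
```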

Step 5: Create an Instance Group#

An instance group binds instances together so they can be referred to as a unit. This resource makes each instance in google_compute_instance.webservers[*] a member of the group.

resource "google_compute_instance_group" "webservers" {
  name        = "cis91-webservers"
  description = "Webserver instance group"

  instances = google_compute_instance.webservers[*].self_link

  named_port {
    name = "http"
    port = "80"
  }
}

Step 6: Create a Service#

The backend service resource tells Google that each of the instances in the google_compute_instance_group.webservers implements a service on a particular port. When we create the load balancer we’ll point it toward this backend service.

resource "google_compute_backend_service" "webservice" {
  name      = "web-service"
  port_name = "http"
  protocol  = "HTTP"

  backend {
    group = google_compute_instance_group.webservers.id
  }

  health_checks = [
    google_compute_health_check.webservers.id
  ]
}

Step 7: The Resources that Make a Load Balancer#

A load balancer is not a single resource; it's a group of related resources. Add each of the resources below to create the load balancer:

URL Map#

A URL map maps URL patterns to backends. Ours is simple: it maps all URLs to the only backend in our application:

resource "google_compute_url_map" "default" {
  name            = "my-site"
  default_service = google_compute_backend_service.webservice.id
}

HTTP Proxy#

The HTTP proxy does the computational work of forwarding. According to Google:

Target proxies are referenced by one or more forwarding rules. In the case of external HTTP(S) load balancers and internal HTTP(S) load balancers, proxies route incoming requests to a URL map. In the case of SSL proxy load balancers and TCP proxy load balancers, target proxies route incoming requests directly to backend services.

resource "google_compute_target_http_proxy" "default" {
  name     = "web-proxy"
  url_map  = google_compute_url_map.default.id
}

Reserve an IP Address for the Load Balancer#

Load balancers need their own IP address. This resource reserves one for us:

resource "google_compute_global_address" "default" {
  name = "external-address"
}

Forwarding Rule#

Finally, we tie the pieces together with a forwarding rule. The forwarding rule binds the target proxy (which contains the URL map) to the external address we reserved for the load balancer. With this resource in place, the load balancer will become ready:

resource "google_compute_global_forwarding_rule" "default" {
  name                  = "forward-application"
  ip_protocol           = "TCP"
  load_balancing_scheme = "EXTERNAL"
  port_range            = "80"
  target                = google_compute_target_http_proxy.default.id
  ip_address            = google_compute_global_address.default.address
}

We need to see the IP address of the load balancer, so add this output:

output "lb-ip" {
  value = google_compute_global_address.default.address
}

Step 8: Apply and Wait#

With all of the configuration entered, you can apply the changes and wait. It takes a few minutes for an external load balancer to become ready.
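
Once it's ready, you can confirm that requests are spread across the backends (a sketch; expect HTTP errors for the first few minutes while the load balancer warms up):

```shell
$ LB=$(terraform output -raw lb-ip)
$ for i in 1 2 3 4 5 6; do curl -s "http://$LB/" | grep 'This is the server'; done
```

Each line should name one of web0, web1, or web2.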

Turn In#

Turn in a screenshot of your load balanced application along with your main.tf and playbook.yaml files.