Introduction
Part 3 of this series showed how to create multiple virtual servers in Terraform. We also used Terraform to configure the servers so that they are aware of each other, allowing the two servers to act as a highly available failover pair.
In this blog post, we'll look at how to leverage multiple providers in Terraform in order to build a Consul cluster. Even better, we'll be able to make the cluster any size we like: whether 3 nodes or 300 nodes, all we'll have to do is tell Terraform the number.
As mentioned in Part 1 of this series, one of the many great features of Terraform is its ability to support multiple cloud service providers (such as Amazon AWS, Microsoft Azure and OpenStack). Up until now, we've been leveraging a single OpenStack provider. Now we'll add in a second provider, Amazon AWS, to provide a cloud-based DNS service.
We'll also start using Consul, which is another tool from HashiCorp (the same group that makes Terraform). Consul provides service discovery, as well as key/value storage and monitoring services.
Service discovery can be thought of as a catalogue of services. When an application wants to know if the environment has a MySQL service available, it can query the catalogue to get more details. If Consul already existed in our Terraform configuration, we could use it to help us build the Consul cluster. However, since Consul is providing the service catalogue, it can't exactly query itself while it's still being built. A bit of a chicken and egg scenario. To get around this, we'll use Amazon's Route 53 DNS service.
Required Materials
Just like Part 3, this is a complex demo that has several different pieces. I have bundled everything together here.
This demo will be run on Cybera's Rapid Access Cloud, which is an IPv6-enabled cloud. This matters because the configuration references each instance's IPv6 address directly; to follow along without modifying the configuration for a different IaaS cloud, your cloud will need to support IPv6.
In addition, you'll also need an Amazon AWS account. If you've never used Amazon AWS before, you can sign up for their free tier. If you're not eligible for the free tier, the Route 53 service should only cost $1-2 per month.
To follow along, make sure you have a DNS zone hosted with Route 53. Instructions on how to do that can be found here.
The Terraform Configuration
count
First, we declare a variable called "count" with a default value of 3. This one number controls the size of the cluster:
variable "count" {
  default = 3
}
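The default can be overridden at apply time, so resizing the cluster is a one-flag change:

```shell
# Build a 5-node cluster instead of the default 3
$ terraform apply -var count=5
```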
Key Pair
Next, we upload an SSH public key that will be installed on each instance:
resource "openstack_compute_keypair_v2" "consul" {
  name       = "consul"
  public_key = "${file("keys/consul.pub")}"
}
Generate the key pair locally before running Terraform:
$ ssh-keygen -f keys/consul
Security Group
This security group allows all TCP, UDP, and ICMP traffic over both IPv4 and IPv6. It's left wide open for demo purposes; in production you would restrict it to the ports Consul actually uses:
resource "openstack_compute_secgroup_v2" "consul" {
  name        = "consul"
  description = "Rules for consul tests"

  rule {
    from_port   = 1
    to_port     = 65535
    ip_protocol = "tcp"
    cidr        = "0.0.0.0/0"
  }

  rule {
    from_port   = 1
    to_port     = 65535
    ip_protocol = "tcp"
    cidr        = "::/0"
  }

  rule {
    from_port   = 1
    to_port     = 65535
    ip_protocol = "udp"
    cidr        = "0.0.0.0/0"
  }

  rule {
    from_port   = 1
    to_port     = 65535
    ip_protocol = "udp"
    cidr        = "::/0"
  }

  rule {
    from_port   = -1
    to_port     = -1
    ip_protocol = "icmp"
    cidr        = "0.0.0.0/0"
  }

  rule {
    from_port   = -1
    to_port     = -1
    ip_protocol = "icmp"
    cidr        = "::/0"
  }
}
Server Group
The server group applies an "anti-affinity" policy, so each Consul node is scheduled on a different compute host. This keeps a single hardware failure from taking out the whole cluster:
resource "openstack_compute_servergroup_v2" "consul" {
  name     = "consul"
  policies = ["anti-affinity"]
}
Instances
This resource creates var.count virtual machines, named consul-01, consul-02, and so on. Note that the security group references the "consul" group defined above:
resource "openstack_compute_instance_v2" "consul" {
  count           = "${var.count}"
  name            = "${format("consul-%02d", count.index+1)}"
  image_name      = "Ubuntu 14.04"
  flavor_name     = "m1.tiny"
  key_pair        = "consul"
  security_groups = ["${openstack_compute_secgroup_v2.consul.name}"]

  scheduler_hints {
    group = "${openstack_compute_servergroup_v2.consul.id}"
  }
}
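The format() interpolation zero-pads the instance names. A quick shell equivalent shows the names this produces for the first three indices:

```shell
# count.index runs 0..2; format("consul-%02d", count.index+1)
# yields zero-padded names for each instance
for i in 0 1 2; do
  printf 'consul-%02d\n' $((i + 1))
done
# → consul-01
#   consul-02
#   consul-03
```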
Null Resource
One null resource is created per instance. Each connects to its virtual machine over IPv6 and runs the provisioners:
resource "null_resource" "consul" {
  count = "${var.count}"

  connection {
    user     = "ubuntu"
    key_file = "keys/consul"
    host     = "${element(openstack_compute_instance_v2.consul.*.access_ip_v6, count.index)}"
  }

  provisioner "file" {
    source      = "files"
    destination = "files"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo cp /home/ubuntu/.ssh/authorized_keys /root/.ssh/",
      "sudo bash /home/ubuntu/files/install.sh"
    ]
  }
}
The "file" provisioner copies everything from the local "files" directory to the instance.
The "remote-exec" provisioner performs two steps:
- Copies the "ubuntu" user's authorized_keys file to the root user, and
- Runs the "install.sh" script to install and configure Consul.
Amazon Route 53 Resources
resource "aws_route53_record" "consul-aaaa" {
  zone_id = "YOUR-ZONE-ID"
  name    = "consul.YOUR-DOMAIN-NAME"
  type    = "AAAA"
  ttl     = "60"
  records = ["${replace(openstack_compute_instance_v2.consul.*.access_ip_v6, "/[[]]/", "")}"]
}

resource "aws_route53_record" "consul-txt" {
  zone_id = "YOUR-ZONE-ID"
  name    = "consul.YOUR-DOMAIN-NAME"
  type    = "TXT"
  ttl     = "60"
  records = ["${formatlist("%s.YOUR-DOMAIN-NAME", openstack_compute_instance_v2.consul.*.name)}"]
}

resource "aws_route53_record" "consul-individual" {
  count   = "${var.count}"
  zone_id = "YOUR-ZONE-ID"
  name    = "${format("consul-%02d.YOUR-DOMAIN-NAME", count.index+1)}"
  type    = "AAAA"
  ttl     = "60"
  records = ["${replace(element(openstack_compute_instance_v2.consul.*.access_ip_v6, count.index), "/[[]]/", "")}"]
}
The above three resources create three types of DNS records:
- A round-robin AAAA record, "consul.YOUR-DOMAIN-NAME", containing the IPv6 addresses of all nodes.
- A TXT record containing the host name of each node.
- An individual AAAA record for each node, such as "consul-01.YOUR-DOMAIN-NAME".
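One detail worth noting: the cloud reports each instance's IPv6 access address in bracketed form, which is why the AAAA records use replace() to strip the brackets. The effect is easy to illustrate in shell (the address below is just an example):

```shell
# OpenStack reports the IPv6 access address wrapped in brackets,
# but AAAA records need the bare address
addr='[2605::3eff:fe07:1875]'
echo "$addr" | tr -d '[]'
# → 2605::3eff:fe07:1875
```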
Consul Installation Script
The consul installation script can be found here. It does the following:
- Installs the "unzip" package (since Consul is distributed as a .zip file).
- Creates a user called "consul", as well as the files and directories required by the Consul service.
- Downloads and installs Consul and the Consul web application.
- Makes a DNS query for the TXT record of "consul.YOUR-DOMAIN-NAME".
- Loops through the results of the TXT record and adds them to the Consul configuration file.
- Starts the Consul service.
Step 4 is the most important part here, as it breaks the chicken-and-egg scenario of bootstrapping our Consul cluster. Terraform created the virtual machines and then created DNS records for those virtual machines. When the install script runs, it queries those DNS records, allowing Consul to configure itself. This is a very powerful feature to have in a tool.
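A minimal sketch of what steps 4 and 5 look like inside the install script, assuming the TXT record comes back as a space-separated, quoted list of host names (the real script's parsing may differ, and "example.net" is a placeholder):

```shell
# Simulated lookup result; in the real script this would come from
# something like: dig +short TXT consul.YOUR-DOMAIN-NAME
txt='"consul-01.example.net" "consul-02.example.net" "consul-03.example.net"'

# Strip the quotes and emit one peer per line, ready to be added
# to the Consul configuration as join targets
echo "$txt" | tr -d '"' | tr ' ' '\n'
# → consul-01.example.net
#   consul-02.example.net
#   consul-03.example.net
```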
Action
With all of the resources in place, running "terraform apply" will do the following:
- Declare a variable called "count" with a default value of 3.
- Create the key pair, security group, server group, and three virtual machines.
- Create three null resources that provision the three virtual machines to:
  1. Copy the "files" directory to the virtual machine.
  2. Run the "install.sh" script.
- Create an AAAA DNS record with the IPv6 addresses of all three virtual machines.
- Create a TXT DNS record with the host names of all three virtual machines.
- Create an individual AAAA DNS record with the IPv6 address of each virtual machine.
Results
Once Terraform has finished, you can now log in to your Consul cluster:
$ ssh consul.YOUR-DOMAIN-NAME
root@consul-02:~# consul members
Node       Address                      Status  Type    Build  Protocol  DC
consul-03  [2605::3eff:fe07:1875]:8301  alive   server  0.5.2  2         honolulu
consul-01  [2605::3eff:feea:e84]:8301   alive   server  0.5.2  2         honolulu
consul-02  [2605::3eff:fe79:1586]:8301  alive   server  0.5.2  2         honolulu
All three have discovered each other with no help from us!
With the Consul web application installed, you can also visit http://consul.YOUR-DOMAIN-NAME:8500/ui and interact with Consul through a browser.