Ubuntu Bionic Beaver changes
Ubuntu Bionic deprecates ifupdown in favor of netplan.io (see the Bionic release notes). The loopback interface configuration used in this article has been updated accordingly. The up and down hooks may also be achieved via the mechanism explained in the netplan FAQ entry "Use pre-up, post-up, etc. hook scripts".
A look at the canonical Elastic IP use-case: highly available load balancers
Exoscale recently introduced Elastic IP addresses, which provide a way to share a reserved IP address across several instances. In this article we will explore the canonical use of Elastic IPs: a shared IP on two load-balancers with automatic failover. We will make use of the Exoscale API to infer most configuration details.
Elastic IP addresses can be held by any number of instances, but in the case of instance failure, they must be revoked through the Exoscale API. Automatic failover typically involves using tools such as keepalived, heartbeat, and corosync.
In this article, we will take advantage of our own tool, exoip, to ensure the liveness of load-balancer instances and to guarantee the Elastic IP is held by a functioning instance. We will perform the following, step by step:
- Defining our sample architecture
- Setting up security groups
- Creating instances
- Application server configuration
- Load-balancer configuration
- Reserving an Elastic IP address
- Automatic failover
- Setting up exoip
Defining our sample architecture
For the purpose of this article, let’s assume we have two load-balancers distributing requests to three application servers:
- lb1.example.com: 198.51.100.11
- lb2.example.com: 198.51.100.12
- app1.example.com: 198.51.100.21
- app2.example.com: 198.51.100.22
- app3.example.com: 198.51.100.23
Both load-balancer and application server instances will expose their service on port 80. Setting up a proper TLS environment is beyond the scope of this article. For an in-depth description of how to correctly deploy a secure TLS web serving environment, this article from NGINX is a must-read.
Throughout the article, we will assume that DNS records exist for the above list of instances. Setting up records won’t be covered here. For an in-depth description of how to set up DNS zones and records, you may refer to the Exoscale DNS documentation.
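As a quick sanity check before moving on, a small loop can confirm that each record resolves. This is only a sketch using the hypothetical hostnames from the list above; getent is assumed available (standard on Linux):

```shell
# Confirm each record from our sample architecture resolves.
# The pipeline ends in awk, so an unresolvable name simply prints
# "NOT RESOLVED" instead of aborting the loop.
for host in lb1 lb2 app1 app2 app3; do
  addr=$(getent hosts "$host.example.com" | awk '{ print $1; exit }')
  echo "$host.example.com -> ${addr:-NOT RESOLVED}"
done
```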
The default security group is not described below; for an introductory guide on initial security group configuration, please refer to the security group documentation.
Setting up security groups
We will start by taking care of firewalling between instances by creating rules in three security groups:
- default will allow ICMP access to all instances.
- load-balancer will allow external access to port 80.
- app-server will allow port 80 access from the load-balancer security group.
Using our cs utility, rules may be provisioned as follows (assuming the EXOSCALE_ACCOUNT environment variable is set to the target organization’s name):
cs createSecurityGroup name=load-balancer \
    description="Load balancer instances"

cs createSecurityGroup name=app-server \
    description="Application server instances"

cs authorizeSecurityGroupIngress startPort=80 \
    endPort=80 \
    protocol=TCP \
    cidrlist=0.0.0.0/0 \
    securitygroupname=load-balancer

cs authorizeSecurityGroupIngress startPort=80 \
    endPort=80 \
    securitygroupname=app-server \
    protocol=TCP \
    'usersecuritygrouplist.account'=$EXOSCALE_ACCOUNT \
    'usersecuritygrouplist.group'=load-balancer
Now that our firewalling configuration is ready, we can create instances as described above. We will create instances using Ubuntu 16.04 as the operating system template.
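Instance creation can also be scripted with cs. The sketch below only prints the deployVirtualMachine calls rather than running them; the zone, template, and service offering IDs are placeholders to look up with cs listZones, cs listTemplates, and cs listServiceOfferings. Remove the echo to actually deploy:

```shell
# Placeholders: replace with real IDs from your Exoscale account.
ZONE_ID="<zone-id>"
TEMPLATE_ID="<ubuntu-16.04-template-id>"
OFFERING_ID="<service-offering-id>"

# Load-balancers go in the load-balancer security group...
for name in lb1 lb2; do
  echo cs deployVirtualMachine name=$name zoneid=$ZONE_ID \
       templateid=$TEMPLATE_ID serviceofferingid=$OFFERING_ID \
       securitygroupnames=load-balancer
done

# ...and application servers in the app-server security group.
for name in app1 app2 app3; do
  echo cs deployVirtualMachine name=$name zoneid=$ZONE_ID \
       templateid=$TEMPLATE_ID serviceofferingid=$OFFERING_ID \
       securitygroupnames=app-server
done
```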
This gives us the following instance list:
Application server configuration
For the purpose of the article, we will stick to the basics and our application servers will not serve much of an application… On app1, app2, and app3 we will simply install nginx and keep its default configuration:
sudo apt install nginx
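Optionally, each backend can be made identifiable by replacing the default landing page with one naming the host, which makes it easy to see which application server answered a given request once the load-balancers are in front. A small sketch, assuming nginx’s default web root of /var/www/html on Ubuntu; the installation command is shown commented out so the page content can be previewed first:

```shell
# Build a minimal landing page identifying this backend.
page="Served by $(hostname)"
echo "$page"    # preview the page content

# To install it as the nginx landing page, run:
# echo "$page" | sudo tee /var/www/html/index.html
```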
Load-balancer configuration
Our load-balancer instances will run HAProxy. First, let’s install it on both lb1 and lb2:
sudo apt install haproxy
HAProxy comes bundled with good defaults in /etc/haproxy/haproxy.cfg. We will keep those and add our additional configuration at the end of the file:
cat << EOF | sudo tee -a /etc/haproxy/haproxy.cfg
listen web
    bind 0.0.0.0:80
    mode http
    server app1 app1.example.com:80 check
    server app2 app2.example.com:80 check
    server app3 app3.example.com:80 check
EOF
Once the configuration is provisioned on both lb1 and lb2 (haproxy -c -f /etc/haproxy/haproxy.cfg can be used to check the file for errors beforehand), HAProxy will need to be reloaded:
sudo systemctl reload haproxy
We can now verify that we are able to see the default nginx landing page on the IPs held by lb1 and lb2. At this point, we have a valid load-balancing configuration to our three application servers.
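This verification can be scripted as well. The sketch below requests each load-balancer over HTTP and prints the status code; the hostnames are the hypothetical ones from our architecture, and || true keeps the loop going if an endpoint is unreachable:

```shell
# Expect "200" from both load-balancers once HAProxy is serving.
for lb in lb1 lb2; do
  code=$(curl -s -o /dev/null -w '%{http_code}' "http://$lb.example.com/" || true)
  echo "$lb.example.com -> ${code:-unreachable}"
done
```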
Up until now, performing failover for high availability required DNS records pointing to both lb1 and lb2, and relied on HTTP clients’ DNS resolving libraries.
For the remainder of this article, we will focus on using Exoscale Elastic IP addresses and our exoip tool to host both of these load-balancers behind a single IP address, with automatic failover in case of failure.
Reserving an Elastic IP address
Allocating a new Elastic IP address is a simple operation from our console or through the API. For details about how to perform console operations, please refer to the EIP documentation.
At Exoscale, Elastic IPs are additional IPs which need to be provisioned on instances. To help share a single Elastic IP across several instances without worrying too much about configuration, we have written a small tool called exoip.
Now that we have an Elastic IP, we can assign it to any or all of our load-balancers, but we need to take care of removing it from instances in the case of failure. Doing this by hand would incur downtime and be burdensome.
Automatic failover
A typical approach would be to couple one of the tools mentioned earlier (keepalived, heartbeat, or corosync) with scripts to be run during state transitions. While these tools are good solutions, they are generic and do not take advantage of the API capabilities a cloud environment can offer. On the other hand, the network protocol they use to ensure peer liveness and perform state transitions is very simple and well documented.
To ease the configuration of automatic failover, we have introduced exoip which implements a protocol similar in nature to VRRP or CARP, but takes advantage of the Exoscale API to infer most configuration.
The basic idea behind exoip is that any number of hosts may participate in the ownership of an Elastic IP. When hosts fail to report their status, their Elastic IP association is revoked by other cluster members. Additionally, exoip provides support for the most common use-case: when all owners of an Elastic IP share a common Security Group.
By default, exoip uses UDP port 12345 to communicate, so we will add this to our load-balancer security group:
cs authorizeSecurityGroupIngress startPort=12345 \
    endPort=12345 \
    securitygroupname=load-balancer \
    protocol=UDP \
    'usersecuritygrouplist.account'=$EXOSCALE_ACCOUNT \
    'usersecuritygrouplist.group'='load-balancer'
Now, let’s retrieve exoip from https://github.com/exoscale/exoip; the latest release is always available at https://github.com/exoscale/exoip/releases/latest:
VERSION=0.3.6
wget https://github.com/exoscale/exoip/releases/download/$VERSION/exoip
Trusting executables fetched directly from the internet is never a great idea, so we can verify that exoip comes from a trusted source using the following:
wget https://github.com/exoscale/exoip/releases/download/$VERSION/exoip.asc
gpg --recv-keys E458F9F85608DF5A22ECCD158B58C61D4FFE0C86
gpg --verify exoip.asc
Provided the signature verification process succeeded, the binary can be moved to an executable location:
sudo mv exoip /usr/local/bin
sudo chmod +x /usr/local/bin/exoip
Once exoip is installed, all that is left to do is to configure a new interface; let’s add the following stanza to the interfaces configuration:
auto lo:1
iface lo:1 inet static
    address 198.51.100.50
    netmask 255.255.255.255
    exoscale-peer-group load-balancer
    exoscale-api-key EXO.......
    exoscale-api-secret LZ.....
    up /usr/local/bin/exoip -W &
This ensures the following steps are taken:
- Configure a new loopback interface holding our allocated IP.
- Use members of the load-balancer security group as peers.
- Configure Exoscale API credentials.
- Launch exoip in watchdog mode when the interface comes up.
The interface may now be brought up:
sudo ifup lo:1
We can confirm in the console that our EIP is correctly assigned.
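The assignment can also be checked from the instances themselves: on the load-balancer currently holding the EIP, the address appears on the lo:1 alias alongside 127.0.0.1:

```shell
# List the IPv4 addresses configured on the loopback device; on the
# active load-balancer, look for 198.51.100.50 labelled lo:1.
ip -4 addr show dev lo
```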
We can now try stopping and starting instances to see the assignment operation performed by exoip.
exoip logs all of its operations through syslog; we can access its event log with:
journalctl -xe -t exoip
In this guide, we described a simple way of setting up robust automatic failover for load-balancers or other applications that need to share IPs.
It must be said that exoip is only one tool and will not cater to all possible use-cases. When implementing automatic failover, other approaches should also be investigated to find the best fit. In any case, exoip might come in handy, since it can also be run to perform one-off associations and dissociations between Elastic IPs and instances.
We have also written in-depth documentation on subjects pertaining to Elastic IPs.
That being said, we understand that you would rather focus on writing your application than on configuring HAProxy. Rest assured that we are working hard to expand our catalog of network services this year to provide you with a programmable load-balancing service. Stay tuned!