This article is a contribution from Camptocamp, an official Exoscale partner and renowned open source experts, headquartered in Lausanne with a presence in France and Germany. It shows how you can quickly build on top of Exoscale products and the official Terraform provider to add functionality and an even simpler user experience.

Terraform usage at Camptocamp

Following the recent announcement of Exoscale’s managed Kubernetes service, we gave it a test run to deploy our standard stack of tools. As usual, we wanted to do it “as Code”, so we chose Terraform for the task.

Since the release of Exoscale’s Terraform provider v0.22.0, it is possible to create SKS clusters as code.
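To pin that provider version, your configuration would typically start with a `required_providers` block along these lines (the exact version constraint is up to you):

```hcl
terraform {
  required_providers {
    exoscale = {
      source  = "exoscale/exoscale"
      version = ">= 0.22.0"
    }
  }
}

# Credentials are read from the EXOSCALE_API_KEY / EXOSCALE_API_SECRET
# environment variables, so the provider block can stay empty.
provider "exoscale" {}
```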

To deploy a cluster you’ll need to create all these resources:

  • An exoscale_sks_cluster,
  • One or more exoscale_sks_nodepool resources,
  • An exoscale_affinity per node pool, so that all nodes in a pool belong to the same anti-affinity group and are spread across hypervisors, limiting the impact of a hypervisor outage,
  • An exoscale_security_group for your node pools,
  • An exoscale_security_group_rule to allow Calico traffic between your nodes,
  • An exoscale_security_group_rule to allow NodePort access from everywhere,
  • An exoscale_security_group_rule to allow access to logs and exec.
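As a rough sketch of what wiring these resources up by hand looks like, here is an abridged example for a single node pool. The names, ports, and attribute values are illustrative assumptions (Calico VXLAN on UDP 4789, NodePorts on TCP 30000–32767, kubelet logs/exec on TCP 10250), not the module's actual code:

```hcl
resource "exoscale_sks_cluster" "this" {
  zone = "de-fra-1"
  name = "test"
}

resource "exoscale_affinity" "router" {
  name = "test-router"
  type = "host anti-affinity"
}

resource "exoscale_security_group" "nodes" {
  name = "test-nodes"
}

# Calico overlay traffic between the nodes themselves
resource "exoscale_security_group_rule" "calico" {
  security_group_id      = exoscale_security_group.nodes.id
  type                   = "INGRESS"
  protocol               = "UDP"
  start_port             = 4789
  end_port               = 4789
  user_security_group_id = exoscale_security_group.nodes.id
}

# NodePort services, reachable from anywhere
resource "exoscale_security_group_rule" "nodeports" {
  security_group_id = exoscale_security_group.nodes.id
  type              = "INGRESS"
  protocol          = "TCP"
  start_port        = 30000
  end_port          = 32767
  cidr              = "0.0.0.0/0"
}

resource "exoscale_sks_nodepool" "router" {
  zone                    = "de-fra-1"
  cluster_id              = exoscale_sks_cluster.this.id
  name                    = "router"
  instance_type           = "medium"
  size                    = 2
  anti_affinity_group_ids = [exoscale_affinity.router.id]
  security_group_ids      = [exoscale_security_group.nodes.id]
}
```

Multiply this by every node pool and rule you need, and the appeal of a module becomes obvious.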

To ease the deployment of all these resources, we decided to write a Terraform module that we published on the Terraform registry.

In order to use it, simply copy this HCL code:

module "sks" {
  source  = "camptocamp/sks/exoscale"
  version = "0.3.1"

  name = "test"
  zone = "de-fra-1"

  nodepools = {
    "router" = {
      instance_type = "medium"
      size          = 2
    }
    "compute" = {
      instance_type = "small"
      size          = 3
    }
  }
}

output "kubeconfig" {
  value     = module.sks.kubeconfig
  sensitive = true
}
Export your API keys:

$ export EXOSCALE_API_KEY=...
$ export EXOSCALE_API_SECRET=...

Then run:

$ terraform apply

This will deploy an SKS cluster with two node pools (one we’ll dedicate to our Ingress Controller and one to host our applications), one anti-affinity group per node pool, and a security group with the proper rules so that everything runs smoothly (you’ll still have to open access to the HTTP and HTTPS ports if needed).
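If you do want to expose HTTP and HTTPS, you can add the rules yourself alongside the module. A sketch, assuming you pass in the id of the node security group created by the module (how you obtain that id depends on your setup):

```hcl
variable "node_security_group_id" {
  type        = string
  description = "Id of the security group attached to the SKS node pools"
}

resource "exoscale_security_group_rule" "http" {
  security_group_id = var.node_security_group_id
  type              = "INGRESS"
  protocol          = "TCP"
  start_port        = 80
  end_port          = 80
  cidr              = "0.0.0.0/0"
}

resource "exoscale_security_group_rule" "https" {
  security_group_id = var.node_security_group_id
  type              = "INGRESS"
  protocol          = "TCP"
  start_port        = 443
  end_port          = 443
  cidr              = "0.0.0.0/0"
}
```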

You can retrieve the kubeconfig for the kube-admin user using this command:

$ terraform output -json kubeconfig | jq -r . > ~/.kube/config

NOTE: make sure not to overwrite an existing cluster configuration, or prefer working with an environment variable, e.g. KUBECONFIG=~/path/to/sks-config.

You should then be able to connect to the cluster:

$ kubectl get pods --all-namespaces

And voilà.