
Platform Engineering: Building an Internal Developer Platform

April 27, 2026  
Platform Engineering · IDP · Kubernetes · Karpenter

Platform Engineering with Exoscale: How We Built an Internal Developer Platform on SKS with Karpenter

Introduction

At Gravitek, we help companies design and operate platform engineering solutions. A core part of that work is building Internal Developer Platforms (IDPs): the kind of tooling that gives development teams a standardized, self-service foundation for shipping software without drowning in infrastructure complexity.

When Exoscale offered us the opportunity to test their Scalable Kubernetes Service (SKS) and its new Karpenter integration, we decided to build a concrete example: a fully functional IDP running entirely on Exoscale, showcasing what SKS and Karpenter can do in a real-world platform engineering scenario.

This article walks through what we built and how we leveraged SKS features. Whether you are evaluating Exoscale for your Kubernetes workloads or curious about what a modern Internal Developer Platform or Portal looks like in practice, this should give you a solid picture.


What is an Internal Developer Platform?

Before diving in, a quick primer for those unfamiliar with the concept.

An Internal Developer Platform (IDP) is a set of tools and automations that form a technical foundation for development teams. In the context of platform engineering, the goal is to give teams a standardized, self-service way to build and ship software. Instead of having every team reinvent the wheel — setting up their own CI/CD pipeline, monitoring stack, and Kubernetes deployments — an IDP provides “golden paths”: curated, secure, and maintained workflows managed by a platform team.

Think of it as a product built for your developers. They get self-service capabilities (spin up a database, deploy an app) while the platform team ensures everything stays consistent, secure, and well-governed.


The Gravitek IDP: Architecture Overview

To put SKS through its paces, we designed a GitOps-driven, self-service Internal Developer Platform that exercises many of the platform’s capabilities and supports key platform engineering requirements: managed Kubernetes, Karpenter autoscaling, Cilium networking, and integration with the broader cloud-native ecosystem. Here is what we deployed.

Cluster setup for an Internal Developer Platform

We provisioned an Exoscale SKS Pro cluster in the ch-gva-2 (Geneva) zone, using Cilium as the CNI (eBPF-based, replacing kube-proxy). The infrastructure is managed with Terraform and the Exoscale provider.

The cluster uses a two-tier node pool strategy:

  • A static infrastructure pool (1x standard.small node, tainted) that runs always-on platform tools: ArgoCD, Crossplane with its providers (Exoscale, OVH, Scaleway, Kubernetes), CloudNativePG Operator, External Secrets Operator, Tailscale Operator, Trivy Operator, and Victoria Metrics with Grafana.
  • A Karpenter-managed workload pool (0 to N nodes) that scales dynamically for application workloads, including Backstage and its PostgreSQL database (managed by CloudNativePG).

Terraform state is stored in Exoscale SOS (S3-compatible object storage), keeping the whole infrastructure lifecycle within the Exoscale ecosystem.
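Because SOS speaks the S3 API, Terraform's standard `s3` backend can point at it. A minimal sketch of what such a backend block can look like — the bucket name and key are placeholders, not our actual configuration:

```hcl
terraform {
  backend "s3" {
    bucket = "idp-terraform-state"     # placeholder bucket name
    key    = "sks/terraform.tfstate"   # placeholder state path
    region = "ch-gva-2"

    # Point the backend at Exoscale SOS instead of AWS S3.
    endpoints = {
      s3 = "https://sos-ch-gva-2.exo.io"
    }

    # Required when using an S3-compatible store that is not AWS.
    skip_credentials_validation = true
    skip_region_validation      = true
    skip_requesting_account_id  = true
    skip_metadata_api_check     = true
    skip_s3_checksum            = true
  }
}
```

Credentials come from the usual `AWS_ACCESS_KEY_ID`/`AWS_SECRET_ACCESS_KEY` environment variables, populated with an Exoscale API key pair.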

This separation ensures platform tools are always available, while workload nodes scale (and scale to zero) based on actual demand, supporting a more efficient and resilient platform engineering setup.
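The mechanics of the split are plain Kubernetes scheduling: platform components tolerate the infrastructure pool's taint and select it explicitly, while application pods carry no toleration and therefore land on Karpenter-managed nodes. A minimal pod-spec sketch — the taint key `node-role.gravitek.io/infra` is a hypothetical example, not our actual key:

```yaml
# Fragment of a platform tool's pod spec, pinning it to the static pool.
# The label/taint key below is an illustrative placeholder.
spec:
  nodeSelector:
    node-role.gravitek.io/infra: "true"
  tolerations:
    - key: node-role.gravitek.io/infra
      operator: Equal
      value: "true"
      effect: NoSchedule
```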

Platform components when engineering the IDP

Everything is deployed via ArgoCD using Helm charts, following a pure GitOps workflow. Here is what runs on the cluster:

  • ArgoCD: GitOps continuous delivery, watching our Git repositories and applying changes automatically.
  • Crossplane: Cloud resource provisioning engine with providers for Exoscale, OVH, Scaleway, and Kubernetes. Lets developers request infrastructure (PostgreSQL databases, virtual machines) through Kubernetes-native APIs using a custom platform.gravitek.io API group.
  • External Secrets Operator: Syncs secrets from Infisical (EU instance) into Kubernetes. Secrets never touch Git.
  • Tailscale Operator: Provides VPN-only access to all platform services. More on this below.
  • Victoria Metrics + Grafana: Observability stack for metrics and dashboards.
  • Trivy Operator: Continuous container image vulnerability scanning.
  • CloudNativePG: In-cluster PostgreSQL operator, used as the database backend for Backstage.
  • Backstage: The developer portal, providing a service catalog, scaffolder templates, TechDocs, Kubernetes cluster visibility, Crossplane resource catalog, automatic template generation from XRDs, GitHub SSO, and DORA metrics tracking.
  • Karpenter: Exoscale-managed node autoscaler and the star of this article.
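Each of these components is described declaratively by an ArgoCD Application pointing at a Helm chart in Git. A representative sketch — the repository URL, path, and namespace are illustrative, not our actual layout:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: victoria-metrics
  namespace: argocd
spec:
  project: platform
  source:
    repoURL: https://github.com/example-org/idp-gitops  # placeholder repo
    path: charts/victoria-metrics
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: monitoring
  syncPolicy:
    automated:
      prune: true     # delete resources removed from Git
      selfHeal: true  # revert out-of-band cluster changes
```

With `automated` sync enabled, a merged pull request is all it takes for the change to reach the cluster.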

[Figure: Architecture overview of the Gravitek IDP on Exoscale SKS]

Developer workflow on the Internal Developer Platform

The developer experience works like this:

  1. A developer goes to Backstage and picks a software template (e.g., “provision a PostgreSQL database”).
  2. Backstage generates a Crossplane claim and commits it to the claims repository via a pull request.
  3. An auto-merge CI workflow validates and merges the claim.
  4. ArgoCD picks up the change and applies it to the cluster.
  5. Crossplane translates the claim into actual cloud resources (e.g., an Exoscale managed database).
  6. The resource appears in the Backstage catalog, visible and documented.

Each resource supports T-shirt sizing (small, medium, large) with provider-specific mappings, and developers can target different cloud providers via label selectors, making multi-cloud provisioning a first-class capability of the platform.
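The claim a developer ends up committing is a small Kubernetes manifest. A hypothetical example using the platform.gravitek.io API group mentioned above — the kind, the size field, and the provider label are illustrative, not our exact schema:

```yaml
apiVersion: platform.gravitek.io/v1alpha1
kind: PostgreSQLInstance            # hypothetical claim kind
metadata:
  name: orders-db
  namespace: team-payments
spec:
  size: small                       # T-shirt sizing: small | medium | large
  compositionSelector:
    matchLabels:
      provider: exoscale            # target cloud via label selector
```

Crossplane matches the claim to a Composition carrying the `provider: exoscale` label, which maps `small` to concrete, provider-specific instance parameters.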

No tickets, no manual provisioning. Just Git and self-service.


Karpenter on SKS: Platform Engineering with Smart Autoscaling

Karpenter was the SKS feature we were most eager to put to the test, and the main reason this Internal Developer Platform example is worth sharing.

What is Karpenter and why does it matter for platform engineering?

Karpenter is a next-generation Kubernetes autoscaler. Unlike the classic Cluster Autoscaler, which scales predefined node groups up or down, Karpenter provisions individual nodes based on the actual resource requirements of each pod. It picks the best instance type, reacts faster to load changes, and aggressively consolidates underutilized capacity.

On Exoscale SKS, Karpenter is available as a managed add-on on Pro clusters. There is no manual installation and no IAM configuration to set up: you enable it at cluster creation, and Exoscale handles the rest.

Our Karpenter configuration for the Internal Developer Platform

We use Karpenter with a straightforward two-pool approach: a static infrastructure pool and a Karpenter-managed workload pool.

The static infrastructure pool is a classic SKS node pool (1 node, tainted for platform tools). It is not managed by Karpenter; it runs the always-on components that need to be available before Karpenter can even scale up workload nodes.

The workload pool is entirely Karpenter-managed. It uses two Kubernetes CRDs:

  • ExoscaleNodeClass: Defines Exoscale-specific parameters (image template, disk size, security groups)
  • NodePool: Defines scaling constraints (allowed instance types, resource limits, consolidation policy)
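To make the NodePool side concrete, here is a sketch based on the upstream Karpenter v1 schema. The Exoscale-specific `nodeClassRef` group and the instance type names are assumptions for illustration, not our exact manifests:

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: workloads
spec:
  template:
    spec:
      nodeClassRef:
        group: karpenter.exoscale.com   # placeholder API group
        kind: ExoscaleNodeClass
        name: default
      requirements:
        - key: node.kubernetes.io/instance-type
          operator: In
          values: ["standard.medium", "standard.large"]
  limits:
    cpu: "32"                           # hard cap on total provisioned CPU
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 1m                # drain idle nodes after one minute
```

The `limits` block bounds total capacity, and the `disruption` block is what drives both consolidation and scale-to-zero: with no pending workload pods, every workload node eventually qualifies as empty and is removed.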

[Figure: Dashboard]

The key benefit for our platform engineering stack: scale-to-zero. When no application workloads are running (which is most of the time for a demo/testing platform), the workload pool drops to zero nodes. When someone deploys something (Backstage, a test application), Karpenter spins up the right-sized node in seconds.

What we observed using Karpenter

  • Fast provisioning: New nodes come up quickly, which was a pleasant surprise compared to our experience with Cluster Autoscaler.
  • Consolidation works well: When the load decreases, Karpenter properly drains nodes (respecting Pod Disruption Budgets) before terminating them. No sudden workload disruption.
  • Drift mechanism: When we upgraded the cluster’s Kubernetes version, Karpenter automatically used the new version for newly provisioned nodes. No manual image updates needed.
  • Cost savings: For an Internal Developer Platform that is not running 24/7, scale-to-zero is a game changer. We only pay for the static infrastructure node when no workloads are active.

From a platform engineering perspective, the fact that Karpenter comes pre-installed and pre-configured on SKS Pro clusters saved us significant setup time compared to doing it ourselves on a bare Kubernetes cluster.


Challenge: Zero Public Ingress with Tailscale

One design choice we made early on: no service of our Internal Developer Platform should be exposed publicly. ArgoCD, Grafana, Backstage — these are internal tools that have no business being on the public internet.

The traditional approach and its problems

Typically, you would set up an ingress controller or Gateway API implementation, a load balancer, TLS certificates, and then add authentication layers on top. On a cloud provider, this means extra costs (load balancer billing) and a larger attack surface.

Our approach: Tailscale Operator

Instead, we deployed the Tailscale Operator on the cluster. Tailscale creates a WireGuard-based mesh VPN that connects authorized users directly to cluster services, without exposing any public endpoint.

Each service that needs to be accessible gets a Tailscale hostname:

  • ArgoCD: argocd-server.<tailnet>.ts.net
  • Grafana: grafana.<tailnet>.ts.net
  • Backstage: backstage.<tailnet>.ts.net

These hostnames are only reachable from devices on the Tailscale network. Tailscale handles identity-based authentication (SSO), end-to-end WireGuard encryption, and automatic TLS certificates via Let’s Encrypt.
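Exposing a service over the tailnet comes down to annotating its Kubernetes Service; the Tailscale Operator then creates a proxy pod for it. A sketch for Grafana — the namespace, selector, and ports are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: monitoring
  annotations:
    tailscale.com/expose: "true"      # operator creates a tailnet proxy
    tailscale.com/hostname: grafana   # becomes grafana.<tailnet>.ts.net
spec:
  selector:
    app.kubernetes.io/name: grafana
  ports:
    - port: 80
      targetPort: 3000
```

No Ingress, no LoadBalancer Service, no public IP — just an annotation and a device entry in the tailnet.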

[Figure: Tailscale Operator]

Why this matters when building an Internal Developer Platform

  • Zero public attack surface: Nothing is reachable from the internet.
  • No load balancer costs: Tailscale replaces the traditional ingress + NLB pattern entirely, which is a real cost saver.
  • Simple configuration: No complex network rules or firewall management. Access is identity-based — if you are on the Tailscale network and authorized, you can reach the service.
  • No single point of failure: Tailscale is a mesh network, not a centralized VPN gateway.

This platform engineering approach fits into a broader security model that also includes secret management through Infisical (EU instance, Machine Identity authentication with read-only scope), continuous image scanning via Trivy, node isolation through dedicated tainted pools, and kubeconfig certificate rotation every 30 days.

On Exoscale specifically, this worked without any particular issue. The Tailscale Operator runs on the tainted infrastructure node and integrates cleanly with the SKS networking model.


A European Cloud: A Real Alternative for Platform Engineering

Exoscale is a European cloud provider, hosted in datacenters across Europe, with native GDPR compliance. In a context where digital sovereignty is becoming a strategic concern for many organizations, this is worth highlighting.

Our experience shows that SKS is a mature offering, capable of running serious platform engineering workloads. Karpenter brings the level of automation you would expect from a modern managed Kubernetes service, and it all runs on 100% European infrastructure. This is no longer a trade-off between sovereignty and technical capability — it is a credible alternative to American hyperscalers.


Conclusion: Why Exoscale Works for Platform Engineering and an Internal Developer Portal

Building this Internal Developer Platform (IDP) on Exoscale SKS confirmed what we hoped: you can run a serious, production-grade platform engineering stack on a European cloud without compromise. SKS is solid, Karpenter delivers real autoscaling benefits out of the box, and the overall experience is smooth enough that we would confidently recommend it for platform engineering teams looking at sovereign Kubernetes options.

We’ve demonstrated just one way to use SKS and Karpenter — yours might look very different depending on your workloads. The point is that the building blocks are there, and they work well together.

In terms of budget, this configuration runs a comprehensive Internal Developer Platform with initial operating costs under €100 per month.

If digital sovereignty matters to your organization and you are looking to build or modernize your Internal Developer Platform, Gravitek can help you get there. Let’s talk.
