Once you start using Kubernetes to manage your containerized applications, you will most likely embrace its command-line interface, kubectl. Since the start of Kubernetes, I have found kubectl great to work with (compared to other CLIs). It is relatively easy to get used to once you know the various Kubernetes API objects, and it is very empowering.

Being able to scale an application with kubectl scale, edit manifests on the fly with kubectl edit, and check the history of a deployment with kubectl rollout are all terrific capabilities that make using Kubernetes extremely enjoyable.
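For example, assuming a Deployment named web already exists in your cluster (the name is made up for illustration), these commands look like this:

kubectl scale deployment web --replicas=5
kubectl edit deployment web
kubectl rollout history deployment web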

Very quickly, however, you will realize that the CLI is limited (they all are) and that to take full advantage of the API specification you need to start dealing with the configuration files (i.e. manifests) for each API object.

You will also do so because you want to keep a source of truth and have a change review process for all your objects. If you don’t, multiple users might step on each other and manipulate live objects, leaving you in a state of confusion: what should my cluster state be?

Hence you will slowly abandon one-off imperative commands like kubectl set and start applying your changes by declaring them in their source with kubectl apply. One command to rule them all!
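To make the contrast concrete, here is the same image change done both ways; the deployment name web, the container name, and the image tag are hypothetical:

# imperative: mutate the live object directly
kubectl set image deployment/web web=nginx:1.15.0

# declarative: change the manifest in version control, then apply it
kubectl apply -f deployment.yaml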

However one problem will remain: how do you author the object manifests and what is the best way to modify them?

In this blog post I will introduce you to kustomize, a relatively new tool that lets you keep a declarative mindset while modifying your objects.

The Kubernetes kustomize Command

Imperative versus Declarative Kubernetes Object Management

If you have not already embraced a declarative mindset, the journey from an imperative to a declarative mode of operation will feel fairly natural. The documentation is terrific on this subject, and I invite you to read about the differences and how to migrate from one mode to the other.

Using Kubernetes does this to you! You truly embrace the API, you forget about the underlying infrastructure and you manipulate your cluster state both from an operational and user perspective using object declaration.

This will naturally bring you to using your version control system as the source of truth. Your admins will start operating the cluster via pull requests, which will unify the workflows between developers and operators (what some people now call GitOps).

Configuration Tools Sprawl in the Kubernetes Ecosystem

It is impossible to talk about Kubernetes manifest authoring and application configuration without mentioning that there are a ton of Kubernetes configuration tools out there.

While a lot of people will ask “What is THE tool that I should use?”, I believe there is no one tool, no silver bullet. Your background, your preferences, and your familiarity with a programming language and an API will dictate what you’d rather use. The power is in the API; if you’d rather use Python, Golang, or Bash to interact with it, go for it. The tool does not matter as long as it does the job reliably and efficiently.

Brian Grant, one of the Kubernetes steering committee members and a core founder of the project, wrote a long document about application configuration, with pros and cons (the document is also on GitHub). In it, he listed over 50 tools and linked to a spreadsheet that, when I checked, contained 67 of them.

A comparison of these tools would be pointless. They are all at different stages of development, are seeing different levels of adoption, and are written in different languages. More importantly, while they all live in the Kubernetes application configuration space, they each tackle slightly different problems.

Kompose, for instance (which I started), was meant to ease the transition from Docker Compose to Kubernetes. Ksonnet started as a manifest authoring tool to solve the “face full of YAML” problem.

Different problems, different times, different languages, different personas lead to different tools. Still only one API.

Thoughts on the Cloud Native Computing Foundation (CNCF) Helm Project

When dealing with Kubernetes applications, one of the first challenges is to manage the multiple object manifests and all the .yaml files that start accumulating. You need a way to keep them organized.

Helm was the first tool to tackle that problem. I had the chance to see its rebirth after it was first created by Deis; the Deis team has since joined Microsoft.

Helm is terrific at providing a one-command install:

helm install stable/redis

The application package is a so-called chart. When Helm started, charts were a relatively light templatization of the Kubernetes objects making up an application, but little by little the templatization took over, and now the manifests that make up a chart are fully templatized. See below for an example:

apiVersion: v1
kind: Service
metadata:
  name: {{ template "minecraft.fullname" . }}
  labels:
    app: {{ template "minecraft.fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
spec:
  type: {{ .Values.minecraftServer.serviceType }}
  ports:
  - name: minecraft
    port: 25565
    targetPort: minecraft
    protocol: TCP
  selector:
    app: {{ template "minecraft.fullname" . }}

I am not a big fan of this, because it basically moves the API into what is called the values.yaml file, or, as I have heard it called, the override file. You can override these values with the --set flag at install time, and this makes me feel very imperative. I don’t know where my source of truth is anymore; it is a mix of my override file and my object templates.
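For instance, with the minecraft chart shown above, the same setting can come either from a values file or straight from the command line; the NodePort value here is purely for illustration:

# override via a custom values file
helm install -f my-values.yaml stable/minecraft

# or override a single value imperatively
helm install --set minecraftServer.serviceType=NodePort stable/minecraft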

That said, the alternative is to keep a bunch of YAML files around with a lot of identical parts, leaving you to track all the different variants and remember why you made those changes. Not ideal either.

This feeling makes me particularly open to kustomize, which:

… provides a new, purely declarative approach to configuration customization that adheres to and leverages the familiar and carefully designed Kubernetes API.

The original blog is a great read and recaps some of what I have been talking about. Let’s give it a try.

Kubernetes kustomize Installation

Installing kustomize is straightforward: you can grab a release from GitHub or, if you have a working Golang environment, you should be able to do:

go get sigs.k8s.io/kustomize

With kustomize installed you are ready to use it, but first have a look at the available commands:

kustomize --help

Potential incompatibilities

Kustomize belongs to the kubernetes-sigs organization and is evolving at a fast pace. At the time of this article, the current version was v1.0.8. Newer versions are encouraged, but they may require some adaptation of the following examples.
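You can check which release you are running with the built-in version subcommand:

kustomize version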

Using kustomize to generate Kubernetes Manifests

While Helm leverages templates and override files, kustomize aims to stick to the Kubernetes API objects as-is (i.e. no templatization) and to generate new objects using a kustomization.yaml file, which declaratively defines the changes that need to happen to a given API resource (aka object).

In other words, you write a patch that kustomize applies to your objects.

For instance, say you want to add labels to a resource. Instead of being imperative with kubectl label, you can use a kustomization.yaml file and then build the new object.

Consider a pod.yaml like:

apiVersion: v1
kind: Pod
metadata:
  name: kusto
spec:
  containers:
  - name: kusto
    image: nginx

And a kustomization.yaml like:

commonLabels:
  super: blog
resources:
- pod.yaml

Then you can generate the new Pod object with:

$ kustomize build
apiVersion: v1
kind: Pod
metadata:
  labels:
    super: blog
  name: kusto
spec:
  containers:
  - image: nginx
    name: kusto

You have extracted the common pattern (i.e. your basic Pod spec) and declaratively defined a customization (aka variant) of it.
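Note that kustomize build only prints the result to stdout; to actually create the object, a common pattern is to pipe the output straight into kubectl:

kustomize build . | kubectl apply -f -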

Another quick example for the road. Say you wanted to update the image version. You would add an imageTags entry to your kustomization.yaml like:

commonLabels:
  super: blog
imageTags:
  - name: nginx
    newTag: 1.8.0
resources:
- pod.yaml

And this would render:

apiVersion: v1
kind: Pod
metadata:
  labels:
    super: blog
  name: kusto
spec:
  containers:
  - image: nginx:1.8.0
    name: kusto

So clearly kustomize also has a bit of its own “API/DSL”, but you can avoid most of it with patches.
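As a sketch, the same image change can be expressed as a strategic merge patch instead of an imageTags entry; note that the kustomization.yaml field for this was called patches in the version used here and was later renamed patchesStrategicMerge:

# kustomization.yaml
resources:
- pod.yaml
patches:
- pod-patch.yaml

# pod-patch.yaml: only the fields you want to change,
# plus enough metadata to identify the target object
apiVersion: v1
kind: Pod
metadata:
  name: kusto
spec:
  containers:
  - name: kusto
    image: nginx:1.8.0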

That’s all for now, I hope you learned something new. I definitely believe it is worth having a look at kustomize. Personally, I now keep all my manifests in version control and operate on them via PRs. I do not templatize them, and I trigger updates in my cluster by running kubectl apply at the end of my build pipeline.

My dirty secret is that I have some sed in there, but now I will move to kustomize.