Pragmatic GitOps: Part 1 — Kustomize

Andrew Pitt
10 min read · Mar 7, 2021

The Missing Link

GitOps is a hot topic. If you’re working with Kubernetes then you have probably already started reading up on GitOps or you are already well down the GitOps path. This mini blog series is intended for those who are just starting out. Particularly those who might not be in a “green field” situation and want to see how they can ease themselves into GitOps, re-use what they already have, and not feel overwhelmed by the process.

If this describes your situation, then you might also have some questions about where GitOps fits with your existing CI/CD processes. If you already have a mature CI/CD pipeline in place, then you might be tempted (as I was originally) to write off GitOps as unnecessary. As you will see, it really boils down to using the right tool for the job. Once you start to understand the benefits, GitOps feels like the missing link in your DevOps process. What benefits am I talking about?

  1. Consistency — With a GitOps approach, the state of your cluster and the applications running on it is described as yaml files stored and versioned in a git repository (or multiple git repositories).
  2. Control — Changes to your cluster should happen as pull requests that can be accepted (or rejected) either by cluster admins, team leads, or whoever has ownership over a particular aspect of a cluster (namespace, group membership, application configuration, etc…). This also provides a natural audit trail of cluster and application configuration.
  3. Disaster Recovery — Although GitOps doesn't protect your data (such as persistent volumes), taking a GitOps approach to managing your cluster gives you at least half of a DR strategy. Your cluster configuration is stored and versioned in git!

On that note… let’s get started!

Templating with Kustomize

I still remember that "a-ha!" moment when the simplicity and power of GitOps really clicked. It started off as an exercise in reproducing a lab I had attended at a conference and ended in a quest to see if there was anything that couldn't (or maybe shouldn't) be GitOps-ified(tm)!

Before we get deep into the full GitOps experience, it's important to start from a solid foundation. For many, that foundation is Kustomize. As is often the case, there are many different options out there (plain yaml or Helm, to name two alternatives), and there's really no reason why you can't mix and match. In this post, however, I'll be focusing on Kustomize!

Kustomize takes the copy/paste out of managing multiple versions of your Kubernetes resources. Rather than duplicating an entire directory of yaml (Deployments, Services, PersistentVolumeClaims, etc…) for each "environment" you may have (DEV, TEST, QA, PROD, etc…), Kustomize lets you have a single "base" directory containing your common resources, along with "overlay" directories that only contain files or patches specific to each environment.

Did I mention that Kustomize is also baked right into both kubectl and the oc CLI? Kustomize is sounding better and better!
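In practice, that means you can render or apply a kustomization directly, with no extra tooling to install (the directory name below is just a placeholder):

# Render the manifests a kustomization produces, without applying anything.
$ kubectl kustomize ./my-app

# Build and apply in one step (oc accepts the same flag).
$ kubectl apply -k ./my-app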

Although I’ll only touch on the basics today, Kustomize offers a lot of great features to help you stick to DRY (Don’t Repeat Yourself) principles.

So where should you start? Perhaps a “brown field” example will help jump start your Kustomize journey!

Brown Field to Kustomize in Three Easy Steps

Brown Field — Image credit: https://jooinn.com

Not everyone has the luxury of starting with brand new projects (Green Field). I’d say most of us have a lot of existing applications out there, many of them running in a Kubernetes environment already. So, it would be natural to ask:

“If I didn’t start out using Kustomize, is it too late to start now?”

Of course not! In fact, it's not all that hard to get on the GitOps bandwagon even with existing applications. It only takes three easy steps:

  1. Export your existing Kubernetes resources.
  2. Tidy things up a bit.
  3. Add kustomization.yaml files listing your resources, namespaces, and patches.

Take a look at this example application. It consists of a Spring Boot application that connects to a MySQL database. I'm using OpenShift 4 (my Kubernetes platform of choice); however, these tools and processes apply to any reasonably recent Kubernetes distribution.

A Spring Boot app and MySQL running on Red Hat OpenShift Container Platform.

Just read along for now, but if you would like to replicate this experiment yourself, you'll find instructions and a link to the source GitHub repository at the bottom of the page.

In terms of resources, this project includes:

  • 1 x Deployment (app)
  • 1 x DeploymentConfig (MySQL)
  • 2 x Service (app and MySQL)
  • 1 x PersistentVolumeClaim (MySQL storage)
  • 1 x Secret* (MySQL user and password)
  • 1 x Route (ingress for app)

* No… you shouldn’t check an unencrypted Secret into a git repository. There are a number of ways to properly handle Secrets in a GitOps world (Sealed Secrets, Vault, etc…), but that’s a topic for a different post.

Since all of this already exists in the gitops-demo namespace, the question is how do you GitOps-ify it?

The easiest answer is to use kubectl or oc (the OpenShift CLI) to export the yaml representation of these objects.

Step 1: Export your resources

When you export Kubernetes resources the resulting yaml can be quite busy. There is a lot of extra metadata that gets added to each resource by Kubernetes.

To make exports cleaner, I’ve started using kubectl-neat. It’s a nice tool that can help strip out unnecessary fields, leaving you with much tidier yaml files.

There are a few different ways you can use this tool. Personally, I like piping my kubectl get commands to kubectl-neat and then redirecting the result into a file. For example:

$ kubectl get deployment myapp -o yaml -n demo \
| kubectl-neat > myapp-deployment.yaml

Since I’m doing this a lot, I’ve made a little function I call “neat” that I’ve added to my .bash_profile. It looks like this:

# Export a resource as tidy yaml: neat <kind> <name> -n <namespace>
neat() {
  if [[ "$3" != "-n" ]]; then
    echo "Usage: neat <kind> <name> -n <namespace>"
  else
    kubectl get "$1" "$2" -o yaml -n "$4" | kubectl-neat > "$2-$1.yaml"
  fi
}

Since this demo application is called "Pet Clinic", I've created a directory on my machine called petclinic. This will be the root directory for a new git repository and it's also where we will begin.

Using my neat function (you can always use the long form), I exported all of the resources in the gitops-demo namespace into a subdirectory of petclinic called base, like so:

# Create and switch to the base directory.
$ mkdir base
$ cd base
# Database resources.
$ neat deploymentconfig petclinicdb -n gitops-demo
$ neat svc petclinicdb -n gitops-demo
$ neat pvc petclinicdb -n gitops-demo
$ neat secret petclinicdb -n gitops-demo
# App resources.
$ neat deployment petclinic -n gitops-demo
$ neat svc petclinic -n gitops-demo
$ neat route petclinic -n gitops-demo
# List the files that were created.
$ ls -A1
petclinic-deployment.yaml
petclinic-route.yaml
petclinic-svc.yaml
petclinicdb-deploymentconfig.yaml
petclinicdb-pvc.yaml
petclinicdb-secret.yaml
petclinicdb-svc.yaml

There are now 7 files in our “base” directory that are 98% ready to go! So, what’s left?

Step 2: Tidy up the yaml

Thankfully, kubectl-neat did most of the hard work. There are still a few small details to cover, such as:

  1. You will want to remove the namespace field from each file's metadata. It's not a hard requirement, but it will save some confusion, as you will be applying these manifests to different namespaces.
  2. In petclinicdb-deploymentconfig.yaml, I deleted spec.template.spec.containers[0].image, since it references an OpenShift ImageStream that is part of the OpenShift catalog. I also deleted the lastTriggeredImage element from the ImageChange trigger.
  3. In petclinic-svc.yaml and petclinicdb-svc.yaml, delete spec.clusterIP and spec.clusterIPs. Your services will get new IPs when they are created.
  4. For petclinic-route.yaml, I deleted spec.host so that the exposed URL will be automatically generated for each new namespace.
  5. I deleted all of the “annotations” in petclinicdb-pvc.yaml, as they are specific to the state of the existing PVC. I also deleted the volumeName (since this is generated by Kubernetes) and the storageClassName, since I want to use whatever the “default” storage class is in my clusters.

You can use your discretion to delete any extra metadata that you might not need or want (for example, deployment revision annotations).
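To give you an idea of the end state, here is roughly what a cleaned-up Service can look like once the generated fields are gone. The port and selector values below are illustrative assumptions, not copied from the real repository:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: petclinic
  name: petclinic
spec:
  ports:
  - name: http        # port values are illustrative assumptions
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: petclinic

Note there is no namespace, no clusterIP, and no status section left behind.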

You can see the final result in the base directory of the accompanying GitHub repository.

What we have now is clean yaml that can be re-used in any new namespace. Nice! It doesn’t take a lot of work to clean things up, and kubectl-neat does most of the heavy lifting. I think you will find the end result worth the effort, as you now have a great base that you can apply to as many new environments (or even clusters) as you like.

Step 3: Add a kustomization.yaml file and your first overlays

Overlays allow for a solid base, with small changes layered on top.
Layers — Credit: Ambaday Sasi via https://pixabay.com

Now that we have a generic yaml representation of our application environment, it’s time to add our first kustomization.yaml file.

In the same “base” directory where our yaml files exist, we create a file called kustomization.yaml with the following contents:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- petclinic-deployment.yaml
- petclinic-route.yaml
- petclinic-svc.yaml
- petclinicdb-deploymentconfig.yaml
- petclinicdb-pvc.yaml
- petclinicdb-secret.yaml
- petclinicdb-svc.yaml

Easy, right? This is the most basic kustomization.yaml file there is. It simply lists the resources from this directory that need to be applied.
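At this point you can already sanity check the base by rendering it locally. Run this from the root of the petclinic directory; it prints the combined yaml to stdout without touching the cluster:

$ kubectl kustomize base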

Now for something truly useful: let's create two new environments for our app! The new environments will live in the gitops-dev and gitops-prod namespaces.

For this, we make dev and prod "overlays". Create an "overlays" directory at the root of your application directory (at the same level as "base"). Inside "overlays", create directories for dev and prod. Our directory structure now looks like this:

petclinic/
├── base/
└── overlays/
    ├── dev/
    └── prod/

For this simple application, the only difference between “dev” and “prod” is the namespace and the tag that is used in each deployment.

First, in the "dev" directory create a new file called namespace.yaml. This file contains the yaml definition of the "dev" namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: gitops-dev

In the same “dev” directory, we will also create a new kustomization.yaml file with the following contents:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: gitops-dev

bases:
- ../../base

resources:
- namespace.yaml

images:
- name: quay.io/pittar/petclinic
  newTag: gitops-dev

Now the power of Kustomize starts to reveal itself.

This file is simple yet very powerful. To go stanza by stanza:

  • namespace declares the namespace that will be added to all of your resources, including those from bases.
  • bases lists any base directories that should be included. In this case, this is the base directory with the app and database manifests. Each directory that is listed as a “base” should have its own kustomization.yaml file in its root.
  • resources lists any resources in this directory that should be applied. Only namespace.yaml in this case.
  • images allows you to replace the location and/or tag of images found in your manifests. In this case, we want to change the tag of the petclinic image to gitops-dev (it uses latest in the base yaml).
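An easy way to convince yourself that the images stanza is doing its job is to render the dev overlay and look for the image line (the grep is just a convenience):

$ kubectl kustomize overlays/dev | grep "image:"
# Expect to see quay.io/pittar/petclinic:gitops-dev in the output.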

Next, we switch over to the “prod” directory and add a namespace.yaml and kustomization.yaml file that look like this:

apiVersion: v1
kind: Namespace
metadata:
  name: gitops-prod

And…

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: gitops-prod

bases:
- ../../base

resources:
- namespace.yaml

images:
- name: quay.io/pittar/petclinic
  newTag: gitops-prod

That's it! We now have dev and prod Pet Clinic environments defined. If you want to add more environments (for example TEST, DEMO, etc…), simply create more directories inside "overlays", each with its own namespace.yaml and kustomization.yaml (pointing to the appropriate image tags, of course).
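For example, a hypothetical "test" overlay would just be a third directory containing the same two files, pointing at its own namespace and image tag. The gitops-test names below are made up to illustrate the pattern and assume a matching image tag exists in the registry:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: gitops-test

bases:
- ../../base

resources:
- namespace.yaml

images:
- name: quay.io/pittar/petclinic
  newTag: gitops-test    # assumes an image with this tag has been pushed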

The payoff… deploying your environments

Now that the hard work is done, it’s time to try it out!

Earlier, I mentioned that Kustomize is built into kubectl and oc. This means we can use the -k flag to apply our resources based on a specific overlay.

To create the dev environment, we simply run the following command from the root of the petclinic directory:

$ kubectl apply -k overlays/dev

This will gather the resources listed in the kustomization.yaml file as well as those in bases, apply the specified namespace to these resources, then update the image to the defined tag.
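If you would rather see what is going to change before committing to it, kubectl can also diff a kustomization against the live cluster:

$ kubectl diff -k overlays/dev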

To create the prod environment, run the same command, but specify the “prod” overlay:

$ kubectl apply -k overlays/prod

You should now notice that you have two new namespaces in your cluster — gitops-dev and gitops-prod. Woot!

If you explore these namespaces you can see that the database and app have been deployed, and the app deployment is using the correct image tag.
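A couple of quick checks will confirm this. The jsonpath expression below assumes the app container is the first container in the Deployment:

# List the workloads created in the dev namespace.
$ kubectl get all -n gitops-dev

# Confirm the Deployment picked up the gitops-dev image tag.
$ kubectl get deployment petclinic -n gitops-dev \
  -o jsonpath='{.spec.template.spec.containers[0].image}'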

Conclusion

As you can see, templating with Kustomize is powerful and easy. It’s also an important foundation for the next step in your GitOps journey — adding a GitOps lifecycle tool into the equation!

In the next entry in this series, I’ll show you how to build on this example and use Argo CD (the upstream for OpenShift GitOps) to add lifecycle management to your GitOps repositories.

Reproducing this scenario

If you would like to reproduce this scenario, everything you need is available in this GitHub repository: https://github.com/pittar-blogs/pragmatic-gitops-01-kustomize

This has been developed and tested using OpenShift 4.7, but OpenShift 4.5+ should work fine. You can also run through this scenario on your local machine using CodeReady Containers.

If you would prefer to try this on a real OpenShift cluster you can use the free OpenShift Developer Sandbox! Note that you have three pre-provisioned namespaces in a Developer Sandbox. If you choose to use the Developer Sandbox, simply update the “namespace” elements in your kustomization.yaml overlay files accordingly and remove the namespace.yaml files, since you will not be able to create your own namespaces.
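As a rough sketch, a Sandbox-friendly dev overlay kustomization.yaml would drop the namespace.yaml entry and point at one of your pre-provisioned namespaces. The your-username-dev name below is a placeholder for whatever your Sandbox actually gives you:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: your-username-dev    # placeholder: use one of your Sandbox namespaces

bases:
- ../../base

images:
- name: quay.io/pittar/petclinic
  newTag: gitops-dev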


Andrew Pitt

Solutions Architect at Red Hat, specializing in Application Development, Middleware, and Cloud-Native Design.