Real-world Helm practices for perfect continuous delivery to Kubernetes

Tim Evdokimov
10 min read · Feb 15, 2021

Exploring Helm capabilities to go far beyond ‘kubectl apply’: non-redundant multi-environment configurations, convenient secret management and reliable continuous delivery pipelines.

(photo courtesy of https://unsplash.com/photos/XWNbUhUINB8)

Kubernetes has brought us the concept of descriptor-based software configuration management. Helm has become the de facto standard for automated Kubernetes deployments, providing a set of features that goes far beyond bare kubectl apply -f ...

Let’s look at smart ways to implement typical real-world scenarios for Kubernetes deployments.

For those of you not familiar with Helm, it has three essential parts:

  • templating
  • cluster state management
  • chart repositories

We will talk about templating and cluster state management in this article, leaving chart repositories for later.

This is how the Helm templating engine works:

(diagram: Helm workflow)

Helm takes sources:

  • files, text and binary
  • .yaml templates
  • values, from .yaml files and command line

and produces .yaml files to be applied to a Kubernetes cluster. Files get embedded into Secrets and ConfigMaps, and values are inlined into the templates.

You can use the resulting templates as-is, with kubectl, or you may let Helm apply them to the cluster to leverage its powerful state management features: the advantage of native Helm deployments is that Helm saves the data of each deployment in a separate Secret, and is able to roll back the current deployment in case of failure.
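For illustration, here is a hedged sketch of both approaches (the release name my-release and chart path ./mychart are hypothetical):

# render the manifests locally and apply them yourself
helm template my-release ./mychart | kubectl apply -f -

# or let Helm apply them and track the release state, enabling rollbacks
helm upgrade --install my-release ./mychart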

Let’s see how you can apply some clever Helm techniques to solve quite typical tasks that a real-world software engineer may face:

  • managing the configuration of your environments,
  • keeping secrets safe but convenient to use,
  • making your continuous delivery pipelines reliable.

Multi-environment configurations

Most likely your application runs in a variety of environments. The simplest division could be between test and production, but in practice it tends to be more complex. Test environments come in different flavours: some are disposable, unstable and experimental, running code from unmerged pull requests that is likely to break often. Other environments are more like user acceptance (non-production, but very much alike); such environments may be exposed to multiple teams and are expected to be stable most of the time.

Production environments share some features, like real private keys for payment integrations, but can also vary: per region, canary release vs. main bulk, or per customer/tenant if your delivery is more SaaS-ish.

You may have heard of the 12-factor app. One of its factors is to keep configuration values in the environment, rather than in the app. What they didn’t tell you, however, is how exactly to get these values into the environments in a transparent, safe and controlled way, and where to store it all — especially when you’ve got to support many environments that are similar in some aspects but different in others, and what to do with configuration values that are sensitive and better kept secret.

To successfully manage all that complexity with Helm charts, some enhancements are needed to the standard chart layout.

We want to eliminate redundancy and duplication of similar configuration items and templates, and keep things as simple as an application running on a developer’s laptop.

Therefore:

  • we don’t want to make our app’s configuration dependent on the way it is deployed — in most cases, all it needs is just a bunch of configuration files in a directory,
  • each configuration value and/or file should be kept at one, and only one place,
  • if the values or files vary per environment, only varying parts are kept per environment,
  • adding a file to configuration must require just that — adding a file: no changes in five other yamls!

So this is the way to store environment configurations in the Helm chart, as part of the application deployment manifest. The /env and /files directories are additions to the standard chart layout.

/env                // environment-specific values
/<chart-name>       // chart directory
  /files            // configuration files and templates
  /templates        // Helm template files
  Chart.yaml        // chart title file
  values.yaml       // file with default values

Value files for environments

Let’s look at the structure of your environments closer.

For this example, assume you have TEST and PROD envs; for TEST you also have -STABLE and -PR (pull request, unstable) flavours, and for PROD you have EU and US regions.

This brings the following structure for /env (values) and /files directories:

/env
  TEST.yaml         // common values for test envs
  TEST-PR.yaml      // values only for PR/unstable
  TEST-STABLE.yaml  // values only for stable
  PROD.yaml         // common values for prod envs
  PROD-EU.yaml      // values only for EU
  PROD-US.yaml      // values only for US
/files
  /TEST             // common files for test envs
  /TEST-PR          // files only for PR/unstable
  /TEST-STABLE      // files only for stable
  /PROD             // common files for prod envs
  /PROD-EU          // files only for EU
  /PROD-US          // files only for US
  /shared           // files shared by all envs

Now, let’s look inside each of the directories inside /files:

...
/PROD
  /binary
  /text
/PROD-EU
  /binary
  /text
...
/shared
  /binary
  /text
  /secret-text

If some type of file is not needed for a given environment/flavour configuration, that directory may be omitted.

We assume for now that you won’t have secret binary files: a secret binary file, encrypted by a symmetric key with decent (let’s say 120+ bit) entropy using reliable tools, can be kept as-is in the git repo, provided that the key is kept properly secret. Without knowing the key, the content of the binary file looks like totally random bits.

Disclaimer: this particular setup was approved by the security department of a large international financial institution, for JKS (Java keystores) storing private keys of workload identities. Please check with your own security department whether the same is applicable in your case. For some environments the requirements may be different, but in the worst case you’d have to store binary secret files as a whole in the vault, make them available to the pipeline, and add some template code to store them in Kubernetes Secrets. But you’ll manage it, I believe!
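As a hedged sketch of producing such an encrypted file (the file names and the KEYSTORE_KEY variable are hypothetical), a keystore could be encrypted with openssl before being committed:

# encrypt the keystore with a symmetric key taken from the environment
openssl enc -aes-256-cbc -pbkdf2 \
  -in keystore.jks \
  -out files/PROD/binary/keystore.jks.enc \
  -pass env:KEYSTORE_KEY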

Most of the time, you won’t need secret text files that are not shared — after all, you can’t keep them in your git repo! Hence, secret text files are always just templates into which the secret values are injected. (We’ll cover secret injection in detail in the next section.)
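For example, a hypothetical files/shared/secret-text/db.properties template could look like this; the actual values arrive only at deployment time:

# rendered through tpl, then stored base64-encoded in the Secret
db.username={{ .Values.database_username }}
db.password={{ .Values.database_password }}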

When this chart gets deployed, the values from different files are first flattened into one single yaml, in order of decreasing priority:

  • environment-flavour (TEST-STABLE.yaml)
  • environment-class (TEST.yaml)
  • general values (values.yaml)

With Helm, you can do that by just stacking the --values options one after another:

helm upgrade --install <chart> path/to/<chart> --values env/<env>.yaml --values env/<env>-<flavour>.yaml

Flavour and class .yaml files contain their distinctive attributes; TEST-STABLE.yaml contains, among others:

envFlavour: STABLE

and TEST.yaml contains

envClass: TEST

In the final, flattened values file, all these values come together. In the next section, we will use them to pick files from the /files directories.
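To illustrate the precedence, assume a hypothetical logLevel value defined at every layer; the most specific file wins:

# values.yaml (defaults)
logLevel: INFO

# env/TEST.yaml
logLevel: DEBUG

# env/TEST-STABLE.yaml (the effective value on TEST-STABLE deployments)
logLevel: WARN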

Single ConfigMap and Secret to inject all files

Now, behold. Here’s a single ConfigMap and Secret configuration that will pull all your files into the Kubernetes yaml descriptors: text and binary files go into the ConfigMap, and secret text files go into the Secret.

{{- /* keep a 'self' reference for use inside nested loops */ -}}
{{- $self := . -}}
{{- /* directories to pick files from: shared, then environment class, then class-flavour (e.g. TEST-STABLE) */ -}}
{{- $sources := list "shared" .Values.envClass (printf "%s-%s" .Values.envClass .Values.envFlavour) -}}
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp
# inject non-secret text files, processed as templates
data:
{{- range $env := $sources }}
{{- range $path, $bytes := $self.Files.Glob (printf "files/%s/text/*" $env) }}
  {{ base $path }}: {{ tpl ($self.Files.Get $path) $ | quote }}
{{- end }}
{{- end }}
# inject binary files
binaryData:
{{- range $env := $sources }}
{{- range $path, $bytes := $self.Files.Glob (printf "files/%s/binary/*" $env) }}
  {{ base $path }}: {{ $self.Files.Get $path | b64enc | quote }}
{{- end }}
{{- end }}
---
apiVersion: v1
kind: Secret
metadata:
  name: myapp
type: Opaque
# inject secret text files, processed as templates
data:
{{- range $env := $sources }}
{{- range $path, $bytes := $self.Files.Glob (printf "files/%s/secret-text/*" $env) }}
  {{ base $path }}: {{ tpl ($self.Files.Get $path) $ | b64enc | quote }}
{{- end }}
{{- end }}

That’s all you need! You won’t ever need to change this template when you add another configuration file: once a file is added to its proper place, it is automatically picked up into the Kubernetes configuration yaml.

Everything’s a template!

You may wonder, why bother doing that, if Helm contains standard functions for injecting files into ConfigMaps and Secrets already.

Well, here’s why: have you noticed that little tpl function applied when text files are injected? Yes — this is exactly what we need there, to treat each text file as a template on its own. This allows you to treat all text configuration files as Helm templates: you can use {{ .Values.myValue }} in each and every file, not just your Kubernetes yamls.

Any type of configuration file — plaintext/properties, yaml, HOCON — can be templated, as in the example below:

akka {
  persistence {
    cassandra {
      keyspace = {{ .Values.cassandra.keyspace | quote }}
      table = "{{ .Values.cassandra.tablePrefix }}messages"
    }
  }
}

Handling quotes in templates

Some values may require quoting before being injected into yaml files:

databasePassword: {{ .Values.databasePassword | quote }}

For other files you may need to remove the quotes, e.g. to set values in a plaintext .properties file:

param/username={{ .Values.username | trimAll "\"" }}

Projected volumes

It is good practice to combine ConfigMaps and Secrets into one volume, so that the application works with plain files and doesn’t really care where they came from. Kubernetes supports projected volumes to assist with that. Add a projected volume to your deployment:

volumes:
  - name: properties
    projected:
      defaultMode: 0640
      sources:
        - configMap:
            name: myapp
        - secret:
            name: myapp

And then you can mount it as a single volume into the /conf directory of your application.
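A minimal sketch of the corresponding container mount; the volume name matches the one declared above:

volumeMounts:
  - name: properties
    mountPath: /conf
    readOnly: true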

Linting charts

It’s good practice to run a ‘linting’ check of your charts against all environment values, so that a missing parameter or a syntax error in a template fails early in the pipeline, rather than at deployment time.

# repeat this command for all valid combinations of <env> and <flavour>
helm lint --debug path/to/<chart> --strict --values env/<env>.yaml --values env/<env>-<flavour>.yaml
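In a pipeline, that repetition is just a short loop; the env/flavour matrix below is hypothetical and should match your own environments:

for combo in "TEST PR" "TEST STABLE" "PROD EU" "PROD US"; do
  set -- $combo   # $1 = env class, $2 = flavour
  helm lint --debug path/to/<chart> --strict \
    --values "env/$1.yaml" --values "env/$1-$2.yaml" || exit 1
done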

Secret management

Now let’s talk about values that we want to keep secret: API keys, database passwords and keys opening binary keystores.

Such secrets are usually kept in some sort of vault, with controlled and audited access. Helm has a plugin for secret management — but in most corporate (oh shi.., compliance, governance, risk management — what does it all mean??) environments, secret management is centralised, and developers are not allowed to, and generally don’t need to, touch the production secrets.

When a delivery pipeline runs a deployment job in such environment, the identity of this pipeline allows it to fetch the secrets from the vault and inject them into the configuration files.

In most cases, secrets, at the time of deployment, become just humble environment variables of the pipeline, as in Azure DevOps or CircleCI. Hence, all you need is to inject them into the stack of values. To achieve that, just keep a section like this in your chart’s values.yaml:

# secrets
database_username: "${UserNameSecret}"
database_password: "${DatabasePasswordSecret}"

And then, in your pipeline, use envsubst or a similar utility:

cat <chart>/values.yaml | envsubst > <chart>/values-injected.yaml
mv <chart>/values-injected.yaml <chart>/values.yaml
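Note that envsubst replaces every $-reference it finds; the gettext version lets you whitelist the variables, so nothing else in the file gets touched:

envsubst '${UserNameSecret} ${DatabasePasswordSecret}' \
  < <chart>/values.yaml > <chart>/values-injected.yaml
mv <chart>/values-injected.yaml <chart>/values.yaml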

This gives you the advantage of treating your secret values just as {{ .Values.xxx }} all over your chart, so you can conveniently inject them where needed.

Keep in mind, however, that nothing prevents you from injecting secrets into some non-secret files by mistake, and exposing them unwittingly. If this becomes a concern, consider a convention of naming all secret values XXXSecret, and doing something like this (in Bash syntax):

EXPOSED_SECRETS=$(grep -r Secret <chart>/files | grep -v secret-text | wc -l)
if [ "$EXPOSED_SECRETS" -gt 0 ]; then
  echo "Secrets are exposed" && exit 1
fi

This adds some protection against accidental secret exposure, provided that the naming convention is followed.

Atomic environment updates

Now, the final part — how to leverage Helm hooks to make your delivery pipelines reliable.

Let’s assume you have just deployed a new build of the application, or a new set of configuration values. Now you definitely want to know whether that deployment is broken or not. If it’s OK, you can continue working on your other sprint tasks; but if it is broken, for whatever reason, you want the environment to be restored to the last known good state, automatically. Without any intervention from your side, and without all those annoying colleagues’ messages: “why am I getting 5xx errors once again?”

That may sound complicated — especially if you want to implement it yourself, armed with just bare kubectl apply -f ... But, fortunately, Helm does a lot of this out of the box.

Atomic deployments

If you don’t do it yet — just add Helm’s --atomic flag to force a rollback of the environment update if the update didn’t result in pods starting up:

helm upgrade --install my-chart some/path/to/my-chart --atomic --debug --timeout 300s

Helm will roll back the release if the pods, with all their health/readiness checks, didn’t start up within the provided timeout.
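Because each deployment is recorded in the release history, you can also inspect and restore revisions manually when needed:

helm history <release>              # list revisions and their statuses
helm rollback <release> <revision>  # restore a previous revision by hand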

Test hooks

As a cherry on top, you can add a Kubernetes Job to your deployment that runs smoke tests just after the pods have started, driven by a couple of Helm annotations. If such a job fails, the deployment rolls back.

apiVersion: batch/v1
kind: Job
metadata:
  name: myapp
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  template:
    metadata:
      name: myapp-smoke-test
    spec:
      restartPolicy: Never
      containers:
        - name: tests
          image: test-image:<tag>
          command: ['/bin/sh', '-c', '/test/run-test.sh']

When both the --atomic flag and a post-upgrade/post-install test hook with smoke tests are implemented in your pipeline, you can be sure that the pipeline is reliable. It goes green when (and only when) the application has been successfully deployed, has successfully started, and some end-to-end smoke tests have validated it. Should anything fail, the environment is restored to the last known good state. All automated!

Helm allows you to define several hooks that run in a particular order. A dedicated job could also be used after a successful smoke test, to warm up application instances before hitting them with live traffic.
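Ordering is controlled with the helm.sh/hook-weight annotation (lower weights run first); a sketch for such a hypothetical warm-up job:

annotations:
  "helm.sh/hook": post-install,post-upgrade
  "helm.sh/hook-weight": "10"  # the smoke test job gets a lower weight, e.g. "0"
  "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded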

Conclusion

Helm is an open-source tool to manage deployments and configuration changes in Kubernetes clusters.

With advanced templating and test hooks, it opens a possibility to make continuous deployments smooth and reliable, to the point they are barely noticed by anyone.
