2020-11-10: Helm v3 (CNCF)
Helm version 1
In 2015, during a hackathon at Deis (a PaaS company), Matt Butcher codes a "package manager for Kubernetes" named "Kid's Place". His team wins the hackathon. For most people on the team, it is their first time coding in Go.
The name is changed to "Helm" (keeping with the nautical vocabulary, like Kubernetes). A v1 version is presented at KubeCon 2015.
Helm basic principle
Objective: abstract the cluster configuration and K8S resources into a set of easier-to-read YAML files.
- Write a Helm Chart in YAML describing your app: what goes into the pod, which labels, env vars, which semver, which git repository. The chart format supports templating.
- Lint your chart
helm lint my/chart
- Version your chart in a chart repo
- When you want to release your app, you use the helm client to install a Helm Release on the cluster.
helm install my/chart (a --dry-run option is available)
- K8S gets configured according to your chart.
helm ls to see what's installed
- Rollback / uninstall / upgrade your chart on the cluster:
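The whole lifecycle above can be sketched with the v3 CLI (a minimal sketch; the chart path ./mychart and release name myapp are placeholders, not from the notes):

```shell
helm lint ./mychart                        # validate the chart
helm install myapp ./mychart --dry-run     # render manifests without touching the cluster
helm install myapp ./mychart               # create the release
helm ls                                    # list installed releases
helm upgrade myapp ./mychart               # roll out a new chart/app version
helm rollback myapp 1                      # revert to revision 1
helm uninstall myapp                       # remove the release
```

Note the v3 syntax: Helm v2 took the release name as a flag (helm install ./mychart --name myapp) and used helm delete instead of helm uninstall.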
In January 2016, Google calls Deis and proposes to buy the company and merge Helm v1 with Kubernetes DM (Deployment Manager).
Helm v2 is the result of the merge between Kubernetes DM and Helm v1.
Helm v2's (wrong) assumption
Helm v2 is based on the following assumption, which was true in 2015:
K8S will NEVER be multi-tenant. Each team will install its own K8S and run 5-15 nodes tops.
- No "one namespace per team": everyone is in the same namespace
- There were no CRDs / Operators at the time.
- There was no RBAC in K8S back then! No authz, no authn!
- Tiller is the central authority, installed on the cluster.
- Tiller is heavily loaded with logic: rendering charts into K8S manifests, calling the K8S API, etc.
- The Helm v2 client is very lightweight.
- Releases are stored in the default kube-system namespace, via a ConfigMap.
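On a v2 cluster you can inspect that release storage directly (a sketch, assuming Tiller's default labeling, where each release ConfigMap carries the OWNER=TILLER label):

```shell
# List Helm v2 release storage: one ConfigMap per release revision,
# written by Tiller into kube-system
kubectl get configmaps -n kube-system -l "OWNER=TILLER"
```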
Tiller is bad:
- cluster-wide permissions!
- not architected around users & permissions
- workaround: people have to install multiple instances of Tiller on the cluster
Helm Release object is bad:
- cannot store different releases of the same chart because everything is in the kube-system namespace (ex: user1 wants to deploy version N but user2 wants to deploy version N-1)
- the ConfigMap is stored in cleartext: not the best
Helm v3 is almost a rollback to Helm v1 (Helm Classic)
- Tiller has been removed
- The Helm client talks directly to K8S
- Release objects are stored in the chart's target namespace via Secrets. Ex: deploying a chart in the Staging env will store the release object in the Staging namespace.
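A sketch of how to see those release objects on a v3 cluster (the staging namespace is an example; owner=helm is the label Helm v3 puts on its release Secrets):

```shell
# One Secret (type helm.sh/release.v1) per release revision,
# stored in the release's own namespace instead of kube-system
kubectl get secrets -n staging -l "owner=helm"
```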
Helm v3 and Charts v2
Helm v3 is compatible with Helm Charts v2 and v1.
Helm v1 and v2 are compatible with Helm Charts v1.
(The creator admits it's a bad versioning idea, but it's too late xD)
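The chart's own version is declared in the apiVersion field of Chart.yaml, which is what decides which Helm client can read it. A minimal sketch (the /tmp/mychart path and chart contents are made-up examples):

```shell
# Create a throwaway chart manifest to illustrate the field
mkdir -p /tmp/mychart
cat > /tmp/mychart/Chart.yaml <<'EOF'
apiVersion: v2    # v2 -> Helm v3 only; v1 -> Helm v2 and v3
name: mychart
version: 0.1.0
EOF

# Check which chart apiVersion a chart declares
grep '^apiVersion' /tmp/mychart/Chart.yaml
```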