Stop pretending it isn’t your own clumsy distribution. Seriously.


Haven’t heard of “vanilla Kubernetes”?

In the ever-competitive business of selling Kubernetes, there is a trend toward promising vendor neutrality. After all, the promise of Kubernetes is to free you from the tyranny of your cloud provider, right? It’s only natural that the mentality of avoiding lock-in be applied to Kubernetes itself. The community has arrived at the colloquial term “vanilla Kubernetes” to describe an installation of only the upstream components as released by the Kubernetes project.

“Vanilla Kubernetes” has been the subject of many marketing campaigns. Whether out of a philosophical disagreement with modifying the upstream experience or because the product simply arrived late to market, Kubernetes businesses have been appealing to the zeitgeist that there is value in maintaining installations that are as true as possible to the Kubernetes project. You’re most likely to hear this term in a marketing white paper (not to be confused with a technical white paper) or when someone is talking about a cluster they stood up on their own from scratch.

“But I installed Kubernetes from upstream!” – You

Imagine a similar argument being applied to GNU/Linux distributions. If you took the upstream Linux kernel and started adding your own userspace to it, what do you think most people would call it? I believe most would agree to call it your own distribution (or Linux From Scratch, if you followed the book). So why aren’t we calling these kinds of Kubernetes installations distributions, too?

Why is it that almost every major Linux distribution patches their kernel? It’s to ensure their supported workloads are guaranteed to run on top of their configuration. The only effort in the Kubernetes community that has focused on this degree of compatibility is OpenShift. It may not be important for most users now, but will be critical when Kubernetes branches out to more diverse hardware and has years of legacy with which to remain compatible.

The Linux analogy can only go so far.

Kubernetes does bring its own “userspace” with the core API objects like Secrets, Pods, ReplicaSets, and Deployments. So if you only use these objects, every cluster is the same, right? As it turns out, there is a great deal of configurable functionality in Kubernetes, like API server flags and the details of storage configuration, that often cannot be introspected from within the cluster. Unless you set the cluster up yourself or have good documentation from those who installed it, you’re left partially in the dark.
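To make this concrete, here is a minimal sketch of where that hidden configuration lives, assuming a kubeadm-style cluster where the API server runs as a static pod. The flags shown are real kube-apiserver flags, but every value (and the image version) is illustrative; other installers put the equivalent settings in systemd units or provider-managed control planes you may not be able to see at all:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (typical kubeadm location)
# These flags change cluster behavior, but they are not exposed as API
# objects, so workloads and Operators cannot discover them from inside
# the cluster.
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    image: k8s.gcr.io/kube-apiserver:v1.x.y        # version illustrative
    command:
    - kube-apiserver
    - --enable-admission-plugins=NodeRestriction   # value illustrative
    - --feature-gates=SomeFeature=true             # value illustrative
    - --service-cluster-ip-range=10.96.0.0/12      # value illustrative
```

Nothing in the cluster’s API surface tells a Pod which admission plugins or feature gates are active; you have to read this file on the control-plane host, or trust documentation.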

This configuration issue only compounds over time as your use of Kubernetes becomes more sophisticated. If Operators and custom controllers, which use the Kubernetes API to simplify user interactions within the cluster, cannot introspect configuration, then that configuration must be maintained manually in two places. Now imagine updating your cluster only to find that an Operator is logging errors every time it makes a Kubernetes API call, but not crashing its pod! Depending on how critical the Operator is, this can be a silent killer.
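The failure mode above can be sketched in a few lines. Everything here is hypothetical, not any real Operator framework’s API; `fetch_config` stands in for a Kubernetes API call whose behavior changed underneath the Operator after a cluster upgrade:

```python
import logging

def reconcile(fetch_config):
    """One pass of a hypothetical Operator's reconcile loop."""
    try:
        return fetch_config()
    except Exception as exc:
        # The error is logged, but the process keeps running -- so the
        # pod stays healthy-looking while the Operator silently stops
        # doing useful work.
        logging.error("reconcile failed: %s", exc)
        return None

def broken_call():
    # Simulates an API call that fails after a cluster upgrade.
    raise RuntimeError("API group no longer served in this version")

result = reconcile(broken_call)  # logs the error; the "pod" never crashes
```

Because `reconcile` swallows the exception, no liveness probe fails and no pod restarts; the only evidence is in the logs, which is exactly what makes this pattern dangerous.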

Where do we go from here?

The obvious answer is to stop installing directly from upstream. Rather, the community needs to reach consensus that there is more value in working together to maintain a variety of Kubernetes distributions that each have a focus. Strongly opinionated distributions, where users can refer to the mantra or philosophy of the distribution to intuit how something is done under the covers, are something powerful that we already expect from Linux distributions.

As of right now, Typhoon is the only community distribution that I know of and, while I believe Typhoon could be far more opinionated and smaller in scope, Dalton has done an amazing job. It is only natural that Typhoon currently has a large scope, given the lack of competition. My recommendation: the next time someone wants to install their own Kubernetes, get them to install Typhoon and contribute back to the project. Struggling behind closed doors isn’t the way anyone wants to run Kubernetes.