Kubernetes was designed to manage a data center, but developers use it to run a website

Recently, Google Cloud announced that it will start charging for the GKE control plane. The fee is small, but users were angry. One reason is that many companies run many Kubernetes clusters, which apparently surprised the Google folks. Reading through the discussion cleared up something I had long been curious about.

Its Intended Usage

Remember that Kubernetes started out competing with DC/OS, which manages thousands of machines. It is meant to manage a whole data center, with all of your applications deployed inside it and namespaces providing isolation between them. In the cloud, the equivalent is for a company to run a single Kubernetes cluster (or a few, but not many), shared by all teams and isolated from other companies. One Ops team manages Kubernetes, and every other team focuses on applications.

How We Got Here

But most developers compare Kubernetes with much simpler orchestrators such as Swarm, Nomad, and Service Fabric, which are typically used to host a single application (or a few). Of course Kubernetes wins that comparison, because it offers many more advanced features “for free”. Only as more and more people came on board did developers start to complain about its complexity.

That complexity is good for cloud vendors, though. The effort required to operate a Kubernetes cluster is a strong argument for a managed cloud offering, and that is good business, especially with enterprise clients who want to lift and shift their legacy systems while staying portable across cloud vendors. Even AWS, which prefers its own solutions and long resisted Kubernetes, eventually provided a managed Kubernetes option. That helped make Kubernetes universal.

Real-World Usage

Big enterprise systems should be a natural fit for Kubernetes, right? It turns out not always. Enterprise software commonly has very strong security and isolation requirements, and Kubernetes namespaces are not strong isolation. There is ongoing work to make Kubernetes safe for multi-tenant use, but it requires Hyper-V-like isolation for every container instance, so it is not very promising. The solution we can use today is to run separate clusters. Yes, a cluster per tenant.

On the other hand, a large number of small projects are really small: they used to run on just a few VMs, and only a small percentage of them need a cluster with more than 50 nodes. There are valid uses, but the majority adopt Kubernetes only because everyone is talking about it. Why not group small projects into a shared Kubernetes cluster, then? It does happen in some companies, especially those with a dedicated Ops team serving many dev teams. However, it is still not easy for an ordinary company to guarantee fault isolation within a single cluster, so we typically still need separate clusters for dev, QA, and production. Even within production, we may need to isolate critical applications from less critical ones.
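For a sense of what sharing a cluster looks like in practice, here is a minimal sketch using Go and client-go: it creates a namespace for one hypothetical small project (“team-a”) and attaches a ResourceQuota so that project cannot starve its neighbors. The project name, quota values, and kubeconfig path are all assumptions for illustration, and note that this namespace-plus-quota setup gives resource fairness and scoping, not the hard security isolation discussed above.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config); adjust for your environment.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Create a namespace for one small project ("team-a" is a hypothetical name).
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: "team-a"}}
	if _, err := clientset.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Cap the namespace's total CPU and memory so one project cannot starve the others.
	// The limits below are arbitrary example values.
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "team-a-quota", Namespace: "team-a"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourceRequestsCPU:    resource.MustParse("4"),
				corev1.ResourceRequestsMemory: resource.MustParse("8Gi"),
				corev1.ResourceLimitsCPU:      resource.MustParse("8"),
				corev1.ResourceLimitsMemory:   resource.MustParse("16Gi"),
			},
		},
	}
	if _, err := clientset.CoreV1().ResourceQuotas("team-a").Create(ctx, quota, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	fmt.Println("namespace and quota created for team-a")
}
```

In real setups this is usually expressed as YAML manifests applied by the Ops team rather than code, but the mechanism is the same: namespaces plus quotas (and RBAC) keep teams from stepping on each other, while anything that needs real fault or security isolation still ends up in its own cluster.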

Conclusion

So, few people realize that Kubernetes was designed to manage large clusters. People just use it as a one-size-fits-all solution, and it works. It is hard to say whether that is good or bad. But since this is already the reality, it would be better to have some simplification and optimization for small-scale use cases.
