K8s deployment tools – An overview
Kubernetes is just another infrastructure platform to run your modern cloud-native applications on. But if you have been working with it, you probably know that it is complex, difficult to set up and maintain, and that many organisations have security concerns and challenges running it at scale.
In the cloud-native landscape there is a proliferation of tools, software and services that try to address these problems. But the landscape is as complex as Kubernetes itself and many standards are still under development.
So, how do you decide which tool(s) to use when there is so much to choose from? What is the best tool? Which tool best suits your needs?
Today I am not going into full depth on things like Service Mesh, Security & Compliance, Observability, CI/CD, et cetera. I would like to start at the beginning of every Kubernetes journey: how do I stand up a new, upstream-compliant Kubernetes cluster in an easy, fully automated way? K8s deployment tools are the answer, but which one?
Well, it depends
What purpose does your new Kubernetes cluster have?
Is it for testing new Kubernetes features? Do you want to run and test your newly developed application? Do you want it to run production workloads, or to run pipelines that build container images? There are many use cases to choose from, but probably the most important question you need to ask yourself is: where do I want to run my cluster?
On your local machine, in an edge location, in your on-prem data center or in the cloud? And from a day 2 operations standpoint, what about availability, scalability and manageability?
These are all platform related questions. When I go to the CNCF landscape and zoom in on Platform tools there’s a distinction between Certified Kubernetes Installers, Distributions and Hosted Kubernetes.
Installers, Distributions and Hosted
An Installer is a mechanism to deploy and maintain upstream-compliant Kubernetes clusters, while a Distribution is more opinionated: it has specific features, pre-installed network & storage interfaces, capabilities for day 2 operations and, in most cases, the ability to install or hook up tooling for cluster management, monitoring, observability, CI/CD etc.
Examples of Installers are Minikube, Kind, kOps and Kubespray. Examples of Distributions are K3s, Rancher, OpenShift and VMware TKG. And finally, there are the hosted Kubernetes services like EKS (AWS), AKS (Azure) and GKE (Google). While these platform tools focus primarily on Kubernetes, I should also mention Ansible, Terraform and Pulumi. These are Infrastructure-as-Code tools that can not only deploy Kubernetes on-prem or in the cloud, but can also deploy and configure a variety of other cloud infrastructure components like VPCs, Virtual Machines, Load Balancers, Storage (NetApp CVO ;-) ), Security Groups etc.
Edge, on-prem or in the cloud
Some of the tools I mentioned above are better suited for running Kubernetes locally, while others can be used in on-prem data centers to deploy on bare metal and/or virtual machines, others to deploy Kubernetes in the cloud, and some can do both.
| Tool | Local | Edge/Data Center | Cloud |
|---|---|---|---|
| Minikube | yes | | |
| Kind | yes | | |
| kOps | | | yes |
| Kubespray | | yes | yes |
| K3s | yes | yes | |
| Rancher | | yes | yes |
| OpenShift | | yes | yes |
| VMware TKG | | yes | yes |
| EKS (eksctl) | | | yes |
| Terraform | | yes | yes |
K8s deployment tools that I use
The tools that I make use of in my lab environment are Kind, K3s, Rancher, VMware TKG, kOps, eksctl and Terraform.
Let’s dive a bit deeper in each tool.
Kind
Kind is a tool for running local Kubernetes clusters. Kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI.
What’s special about Kind is that the cluster is deployed inside Docker containers and that the tool allows you to deploy different types of clusters: a single node, a single master with multiple workers, or multiple masters with multiple workers. These clusters are very easy to deploy using a single YAML file.
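As a sketch of what that single YAML file looks like (the cluster name and file name are illustrative, the config schema is Kind's documented `v1alpha4` format):

```shell
# Write a Kind cluster config: three control-plane (master) nodes, two workers
cat <<'EOF' > kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: control-plane
  - role: control-plane
  - role: worker
  - role: worker
EOF

# Create the cluster from the config file
kind create cluster --name demo --config kind-config.yaml
```

Dropping the `nodes` section entirely gives you the default single-node cluster.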
There’s one disadvantage though: on macOS and Windows, Docker does not expose the Docker network to the host. Because of this limitation, containers (including Kind nodes) are only reachable from the host via port-forwards. Setting up an Ingress Controller can be used as a cross-platform workaround.
K3s
K3s is a lightweight Kubernetes distribution developed by Rancher. It’s created for production workloads in unattended, resource-constrained, remote locations or inside IoT appliances. The binary is less than 40 MB and it can easily run on a very small server with minimal requirements, like a Raspberry Pi.
To keep it a light distribution, external dependencies have been minimized and some functionality has been removed or replaced, for example:
- The default cluster datastore is SQLite
- The default container runtime is containerd
- Automatic TLS management
Powerful “batteries-included” features have been added, such as: local storage provider, Flannel network provider, service load balancer, network policy controller, Helm controller and Traefik ingress controller.
Just like Kind, K3s supports deploying different types of cluster architectures, including multiple masters. What’s different, though, is that K3s supports other cluster datastores for high availability, such as etcd, MySQL, MariaDB and PostgreSQL.
The installation and cluster deployment are very easy: a single command kicks off the installation script.
Additional utilities are installed as well, including kubectl, crictl, ctr, k3s-killall.sh and k3s-uninstall.sh.
To install worker nodes and add them to the cluster, run the installation script with some additional environment variables.
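As a sketch based on the K3s documentation (the server IP and token are placeholders you substitute with your own values):

```shell
# On the server (master) node: one command installs and starts K3s
curl -sfL https://get.k3s.io | sh -

# The join token for workers is written to this file on the server
sudo cat /var/lib/rancher/k3s/server/node-token

# On each worker node: the K3S_URL and K3S_TOKEN environment variables
# switch the same installation script into agent (worker) mode
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<node-token> sh -
```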
To make it even more “automated” you can use k3sup for installing K3s on remote systems, as I did in a previous post.
Rancher
Rancher is a multi-cluster orchestration platform. It lets operations teams deploy, manage and secure Kubernetes across the enterprise, on-prem and in public cloud.
Rancher includes RKE, an extremely simple, lightning-fast Kubernetes installer that can deploy Kubernetes on bare-metal and virtualized servers on-prem or in public cloud.
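As a sketch of how simple RKE is, a minimal cluster.yml (the node address and SSH user below are hypothetical) plus one command is enough to stand up a cluster:

```shell
# Minimal RKE cluster definition: one node carrying all roles.
# The address and user are placeholders for your own environment.
cat <<'EOF' > cluster.yml
nodes:
  - address: 10.0.0.10
    user: ubuntu
    role: [controlplane, etcd, worker]
EOF

# RKE reads cluster.yml from the current directory and deploys Kubernetes
rke up
```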
Rancher can manage any certified Kubernetes distribution, such as RKE and K3s, but also the major hosted offerings in public cloud, including EKS, AKS and GKE.
It provides a rich catalog of services for building, deploying and scaling containerized applications, including app packaging, CI/CD, logging and monitoring.
VMware TKG
VMware Tanzu Kubernetes Grid (TKG) provisions and manages the lifecycle of Tanzu Kubernetes clusters making use of Cluster API. A Tanzu Kubernetes cluster is an opinionated installation of open-source Kubernetes that is built and supported by VMware.
TKG can be deployed using the Tanzu CLI across on-prem and public cloud environments, including VMware vSphere, Microsoft Azure and Amazon EC2.
It provides services that enterprise Kubernetes requires, such as networking (Antrea, Calico), authentication (Pinniped), ingress control (Contour), logging (Fluent Bit) and registry (Harbor), and can easily be hooked up with VMware Tanzu Mission Control for multi-team, multi-cluster management.
TKG is also available as a fully integrated service (TKGS) within vSphere and lets you create and operate Tanzu Kubernetes clusters natively in vSphere 7 with Tanzu. vSphere with Tanzu leverages many reliable vSphere features to improve the Kubernetes experience, including vCenter SSO, the content library for Kubernetes software distributions, vSphere networking, vSphere storage, vSphere HA and DRS and vSphere security.
One big disadvantage of the VMware Tanzu solutions is that, although most of it is based on open-source technology, VMware makes it enterprise-consumable, and that unfortunately is not free.
kOps
kOps, or Kubernetes Operations, can be compared to Rancher’s RKE: it’s an installer to create, destroy, upgrade and maintain production-grade Kubernetes clusters in public cloud.
AWS and GCP are currently officially supported, while Azure support is in alpha status.
What’s special about kOps is that it also provisions (and destroys) the necessary cloud infrastructure and offers extensive options to customize your cluster and enable all kinds of add-ons, such as the AWS Load Balancer Controller, Cluster Autoscaler, cert-manager, Metrics Server, Snapshot Controller etc.
kOps stores the state and representation of your cluster in a dedicated S3 bucket. This bucket becomes the source of truth for the cluster configuration, enables idempotent operations, and makes it possible to do dry runs before applying changes.
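As an illustration of that workflow (the bucket name, cluster name and zone below are hypothetical):

```shell
# Point kOps at its S3 state store, the source of truth for the cluster config
export KOPS_STATE_STORE=s3://my-kops-state-bucket

# Dry run: render the cluster spec as YAML without creating anything
kops create cluster --name=demo.k8s.local \
  --zones=eu-west-1a --node-count=2 \
  --dry-run -o yaml

# Create the cluster spec in the state store, then apply it for real
kops create cluster --name=demo.k8s.local --zones=eu-west-1a --node-count=2
kops update cluster --name=demo.k8s.local --yes
```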
Eksctl
Eksctl is a simple CLI tool for creating and managing clusters on EKS – Amazon’s managed Kubernetes service for EC2. It’s written in Go and uses CloudFormation for deploying and destroying the necessary cloud infrastructure.
Clusters can be created directly from the CLI using flags, but you can also create them from a single YAML config file. And just as with kOps, the dry-run feature allows you to inspect and change your configuration before proceeding to create a cluster.
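A sketch of the config-file approach (the cluster name, region and node group settings are illustrative; the schema is eksctl's documented `ClusterConfig` format):

```shell
# A minimal eksctl ClusterConfig with one managed-by-eksctl node group
cat <<'EOF' > cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster
  region: eu-west-1
nodeGroups:
  - name: ng-1
    instanceType: t3.medium
    desiredCapacity: 2
EOF

# Dry run: print the fully expanded config without creating anything
eksctl create cluster -f cluster.yaml --dry-run

# Create the cluster for real
eksctl create cluster -f cluster.yaml
```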
Terraform
Terraform is a tool to manage the entire lifecycle of infrastructure using Infrastructure-as-Code. That means declaring infrastructure components in configuration files that are then used by Terraform to provision, adjust and tear down infrastructure resources in various cloud providers. Examples of resources include physical machines, VMs, network switches, storage arrays, containers etc.
Terraform relies on plugins called “Providers” to interact with remote systems and understand API interactions with the underlying infrastructure such as a public cloud service (AWS, Azure, GCP) or on-prem resources (vSphere). Terraform presently supports more than 70 providers. All providers integrate into and operate with Terraform exactly the same way.
State management is a key component of Terraform. The Terraform state file keeps track of all changes in an environment. It is important to keep the state file safe and secure. By keeping the state file in highly available object storage, teams can safely share and interact with a single state that is always current.
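A minimal sketch of how this fits together, assuming AWS and a hypothetical S3 backend bucket (provider, backend and resource names are illustrative):

```shell
# A minimal Terraform configuration: AWS provider, remote state in S3,
# and a single VPC resource. Bucket and region are placeholders.
cat <<'EOF' > main.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
  # Remote state in object storage, so the team shares one source of truth
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "k8s/terraform.tfstate"
    region = "eu-west-1"
  }
}

provider "aws" {
  region = "eu-west-1"
}

resource "aws_vpc" "demo" {
  cidr_block = "10.0.0.0/16"
}
EOF

terraform init   # download the provider plugin, configure the backend
terraform plan   # preview the changes before applying them
```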
As you can see, there are a lot of tools to choose from, each with their pros and cons. Is there a tool that can do it all? Probably not, but some solutions come very close, for example Rancher, or VMware TKG with Tanzu Mission Control.
Most of these tools/solutions are open source and free of cost, which makes it very easy to install them, try them out for your own use cases and see whether they work for you.
So, grab a tool and start deploying Kubernetes!