Last October, VMware held its VMworld conference. As usual, a lot of new VMware products, updates and features were announced, but one stood out: the announcement of Tanzu Community Edition.

In my recent post on Kubernetes deployment tools, I wrote a bit about VMware Tanzu Kubernetes Grid (TKG). TKG is available as a commercial stand-alone version and as part of vSphere 7.x with Tanzu Standard. Both require a commercial license.

Now there is Tanzu Community Edition, TCE for short. TCE is a freely available, community-supported, open-source distribution of VMware Tanzu. It’s a full-featured, easy-to-use and easy-to-manage Kubernetes platform that leverages Cluster API to provide declarative deployment and management of Kubernetes clusters.

In this post I describe how to:

  • Deploy a TCE Management cluster in VMware vSphere using the UI
  • Roll out your first TCE Workload cluster
  • Install a software package on it
  • Scale the cluster using the Tanzu CLI

Prerequisites

Before rolling out Kubernetes with TCE, make sure the prerequisites are installed and available on your local machine:

  • Latest version of docker
  • Latest version of kubectl
  • Latest version of Tanzu Community Edition (TCE)
  • An SSH public key

For more details, go to the online documentation on installing TCE.

In my case, I use a Mac and installed Docker Desktop plus kubectl and TCE using Homebrew.
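On macOS, that can be sketched as follows. The TCE tap/formula name is taken from the TCE docs at the time of writing and may differ per release, so treat it as an assumption and check the installation documentation:

```shell
# Install the prerequisites via Homebrew (macOS).
# The TCE formula name below may change between releases; check the TCE docs.
brew install --cask docker          # Docker Desktop
brew install kubernetes-cli         # kubectl
brew install vmware-tanzu/tce/tanzu-community-edition

# Generate an SSH key pair if you don't already have one.
ssh-keygen -t rsa -b 4096
```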

You need to download supported OS images, based on Photon OS v3 or Ubuntu 20.04, and import them into the vSphere environment that TCE will use.

Deploying the Management cluster

Tanzu Community Edition supports two methods of provisioning Kubernetes clusters: Standalone clusters, and Workload clusters managed by a Management cluster.

Standalone clusters are the easiest way to provision a development environment. But the Standalone cluster feature is under active development and still experimental, and at this moment Standalone clusters cannot scale. That’s why I don’t recommend them for anything beyond a quick development environment.

Managed Workload clusters, on the other hand, are fully featured, can be made highly available, can scale, and can provide user access to the Kubernetes API via identity and authentication management. They are therefore production-ready.

Let’s start by deploying a Management cluster using the following CLI command in your terminal.

tanzu management-cluster create --ui

A web browser opens and shows the TCE installer. Click Deploy in the vSphere tile.

Enter your vCenter Server FQDN or IP address plus credentials. Click Connect. Select your Datacenter and enter an SSH public key to be used for logging in to the provisioned nodes. Click Next.

Select a plan. In my case, I use the Development plan with just a single control plane node. Next, select the instance type (size) for the Control plane node(s) and Worker node(s). Enter a cluster name and select the Control plane endpoint provider. In my case, I select kube-vip because I do not have VMware NSX-ALB installed. Enter the IP address of the Control plane endpoint. This requires a static IP address! Click Next.

Skip steps 3 and 4; they are optional.

Select the VM folder, Datastore and Cluster where the cluster nodes will be provisioned. Click Next.

Select the network to connect the nodes to and, if necessary, change the cluster service and/or cluster pod CIDR. If needed, enable Proxy Settings. Click Next.

Enter your OIDC or LDAPS details if you want to provide user access to the Kubernetes API using one of these methods. Otherwise, disable it. Click Next.

Select the OS image with the correct OS and Kubernetes version to be used. In my case, Ubuntu 20.04 with Kubernetes version 1.21.2. Click Next.

Review the configuration. Click Deploy Management Cluster to start the installation. You can also copy the CLI command and use it in your terminal instead of the UI.
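The CLI route can be sketched like this; the file name below is hypothetical, since the installer writes its generated configuration to ~/.config/tanzu/tkg/clusterconfigs/:

```shell
# Create the Management cluster from a configuration file instead of the UI.
# The file name is hypothetical; use the one the installer generated for you.
tanzu management-cluster create --file ~/.config/tanzu/tkg/clusterconfigs/mgmt-config.yaml
```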

Wait for the installation to successfully complete and close your web browser.

Log in to vCenter Server and verify that the Management cluster nodes are up and running.

In your terminal, switch the kubectl context to the Management cluster. To view the details of the cluster, use the following commands:

  • tanzu management-cluster get
  • kubectl get nodes
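Put together, with `tanzuce-mgmt` as a hypothetical Management cluster name, the context switch and checks look like this:

```shell
# Fetch the admin kubeconfig for the Management cluster and switch to it.
# The cluster name tanzuce-mgmt is a placeholder for your own.
tanzu management-cluster kubeconfig get --admin
kubectl config use-context tanzuce-mgmt-admin@tanzuce-mgmt

# Inspect the Management cluster.
tanzu management-cluster get
kubectl get nodes
```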

Deploying your first Workload cluster

With the Management cluster in place, you can use the TCE CLI to deploy Workload clusters. In TCE, your application workloads run on Workload clusters. To deploy a Workload cluster, you first have to create a configuration file.

The easiest way to obtain an initial configuration file for a Workload cluster is to make a copy of the Management cluster configuration file and to update it. As I installed the Management cluster from the installer interface, the YAML configuration file is located here: ~/.config/tanzu/tkg/clusterconfigs/<MGNT-CLUSTER-NAME>.yaml
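That copy-and-edit step can be sketched as follows, using the path quoted above and the development configuration file name used later in this post:

```shell
# Start from the Management cluster configuration.
cp ~/.config/tanzu/tkg/clusterconfigs/<MGNT-CLUSTER-NAME>.yaml tanzuce-vsphere-dev.yaml

# Then edit at least these keys for the Workload cluster:
#   NAMESPACE, WORKER_MACHINE_COUNT, and
#   VSPHERE_CONTROL_PLANE_ENDPOINT (a new, unused static IP).
```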

You can also use a configuration file template. Here’s an example for vSphere:

#! ---------------------------------------------------------------------
#! Basic cluster creation configuration
#! ---------------------------------------------------------------------

# CLUSTER_NAME:
CLUSTER_PLAN: dev
NAMESPACE: vsphere-dev
CNI: antrea
INFRASTRUCTURE_PROVIDER: vsphere

#! ---------------------------------------------------------------------
#! Node configuration
#! ---------------------------------------------------------------------

# SIZE:
# CONTROLPLANE_SIZE:
# WORKER_SIZE:

# VSPHERE_NUM_CPUS: 2
# VSPHERE_DISK_GIB: 40
# VSPHERE_MEM_MIB: 4096

VSPHERE_CONTROL_PLANE_NUM_CPUS: 2
VSPHERE_CONTROL_PLANE_DISK_GIB: 20
VSPHERE_CONTROL_PLANE_MEM_MIB: 4096
VSPHERE_WORKER_NUM_CPUS: 2
VSPHERE_WORKER_DISK_GIB: 20
VSPHERE_WORKER_MEM_MIB: 4096

# CONTROL_PLANE_MACHINE_COUNT: 1
WORKER_MACHINE_COUNT: 2
# WORKER_MACHINE_COUNT_0:
# WORKER_MACHINE_COUNT_1:
# WORKER_MACHINE_COUNT_2:

#! ---------------------------------------------------------------------
#! vSphere configuration
#! ---------------------------------------------------------------------

VSPHERE_USERNAME: <VC_USERNAME>
VSPHERE_PASSWORD: <VC_PASSWORD>
VSPHERE_SERVER: <VC_FQDN_OR_IP>
VSPHERE_DATACENTER: /Homelab
VSPHERE_RESOURCE_POOL: /Homelab/host/Cluster-01/Resources
# VSPHERE_TEMPLATE:
VSPHERE_NETWORK: /Homelab/network/VM-network
VSPHERE_DATASTORE: /Homelab/datastore/VMFS_NAS01_DS01
# VSPHERE_STORAGE_POLICY_ID:
VSPHERE_FOLDER: "/Homelab/vm/Tanzu Kubernetes"
VSPHERE_TLS_THUMBPRINT: AA:3B:64:D1:50:BA:7F:C5:51:ED:1E:13:A2:23:F3:1C:D5:A1:99:ED
VSPHERE_INSECURE: false
VSPHERE_SSH_AUTHORIZED_KEY: <PUB_SSH_KEY>
VSPHERE_CONTROL_PLANE_ENDPOINT: <STATIC_IP>

#! ---------------------------------------------------------------------
#! NSX-T specific configuration for enabling NSX-T routable pods
#! ---------------------------------------------------------------------

# NSXT_POD_ROUTING_ENABLED: false
# NSXT_ROUTER_PATH: ""
# NSXT_USERNAME: ""
# NSXT_PASSWORD: ""
# NSXT_MANAGER_HOST: ""
# NSXT_ALLOW_UNVERIFIED_SSL: false
# NSXT_REMOTE_AUTH: false
# NSXT_VMC_ACCESS_TOKEN: ""
# NSXT_VMC_AUTH_HOST: ""
# NSXT_CLIENT_CERT_KEY_DATA: ""
# NSXT_CLIENT_CERT_DATA: ""
# NSXT_ROOT_CA_DATA: ""
# NSXT_SECRET_NAME: "cloud-provider-vsphere-nsxt-credentials"
# NSXT_SECRET_NAMESPACE: "kube-system"

#! ---------------------------------------------------------------------
#! Machine Health Check configuration
#! ---------------------------------------------------------------------

ENABLE_MHC: true
MHC_UNKNOWN_STATUS_TIMEOUT: 5m
MHC_FALSE_STATUS_TIMEOUT: 12m

#! ---------------------------------------------------------------------
#! Common configuration
#! ---------------------------------------------------------------------

IDENTITY_MANAGEMENT_TYPE: none

# TKG_CUSTOM_IMAGE_REPOSITORY: ""
# TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE: ""

TKG_HTTP_PROXY_ENABLED: false
# TKG_HTTP_PROXY: ""
# TKG_HTTPS_PROXY: ""
# TKG_NO_PROXY: ""

ENABLE_AUDIT_LOGGING: true

ENABLE_CEIP_PARTICIPATION: false

ENABLE_DEFAULT_STORAGE_CLASS: true

CLUSTER_CIDR: 100.96.0.0/11
SERVICE_CIDR: 100.64.0.0/13

# OS_ARCH: amd64
# OS_NAME: ubuntu
# OS_VERSION: "20.04"

#! ---------------------------------------------------------------------
#! Autoscaler configuration
#! ---------------------------------------------------------------------

ENABLE_AUTOSCALER: false
# AUTOSCALER_MAX_NODES_TOTAL: "0"
# AUTOSCALER_SCALE_DOWN_DELAY_AFTER_ADD: "10m"
# AUTOSCALER_SCALE_DOWN_DELAY_AFTER_DELETE: "10s"
# AUTOSCALER_SCALE_DOWN_DELAY_AFTER_FAILURE: "3m"
# AUTOSCALER_SCALE_DOWN_UNNEEDED_TIME: "10m"
# AUTOSCALER_MAX_NODE_PROVISION_TIME: "15m"
# AUTOSCALER_MIN_SIZE_0:
# AUTOSCALER_MAX_SIZE_0:
# AUTOSCALER_MIN_SIZE_1:
# AUTOSCALER_MAX_SIZE_1:
# AUTOSCALER_MIN_SIZE_2:
# AUTOSCALER_MAX_SIZE_2:

#! ---------------------------------------------------------------------
#! Antrea CNI configuration
#! ---------------------------------------------------------------------

# ANTREA_NO_SNAT: false
# ANTREA_TRAFFIC_ENCAP_MODE: "encap"
# ANTREA_PROXY: false
# ANTREA_POLICY: true
# ANTREA_TRACEFLOW: false

First, create two namespaces. This helps to organize and manage your different projects and related clusters.

Open the terminal and switch the kubectl context to the Management cluster. Create two namespaces, one for development and one for production, using the following commands:

  • kubectl create namespace vsphere-dev
  • kubectl create namespace vsphere-prd

Then use the configuration file template from above to create your first development Workload cluster.

  • tanzu cluster create tanzuce-dev1 --file tanzuce-vsphere-dev.yaml

When the cluster is deployed successfully, log in to vCenter Server and verify that the Workload cluster nodes are up and running.

To view the details of your newly deployed cluster, use the following commands in your terminal:

  • tanzu cluster list
  • tanzu cluster get tanzuce-dev1 -n vsphere-dev

To manage and interact with the Workload cluster using kubectl, you first need to get the kubeconfig file and switch the context.

  • tanzu cluster kubeconfig get tanzuce-dev1 --admin -n vsphere-dev
  • kubectl config use-context tanzuce-dev1-admin@tanzuce-dev1 (tip: use kubectx)
  • kubectl get nodes
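As a quick smoke test, not part of the TCE tooling itself, you can deploy something small against the new context:

```shell
# Deploy a test workload to verify the cluster accepts deployments.
kubectl create deployment nginx --image=nginx
kubectl get pods -w   # Ctrl-C to stop watching
```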

Installing additional software packages

Once your Workload cluster(s) are deployed, you may want to add extra capabilities like monitoring, logging, backup or a local container registry. These capabilities can be added via the Tanzu Community Edition package repository, which provides a set of pre-packaged software bundles.

Let’s view which repositories are available by default and which packages are already installed:

  • tanzu package repository list -A
  • tanzu package installed list -A

To install additional packages, you first need to add the TCE repository.

  • tanzu package repository add tce-repo --url projects.registry.vmware.com/tce/main:0.9.1 --namespace tanzu-package-repo-global --create-namespace
  • tanzu package repository list -A

With the TCE repository added, let’s view which packages are available.

  • tanzu package available list -A

To install cert-manager, for example, first view the available versions. Then install the latest version of cert-manager.

  • tanzu package available list cert-manager.community.tanzu.vmware.com -A
  • tanzu package install cert-manager --package-name cert-manager.community.tanzu.vmware.com --version 1.5.3
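Packages that take custom settings can additionally be configured through a values file. A sketch with Contour as an illustrative example; the package version and file name are assumptions, so check what your repository offers with `tanzu package available list -A`:

```shell
# Install a package with custom settings via a values file.
# The version and values file below are illustrative, not prescriptive.
tanzu package install contour \
  --package-name contour.community.tanzu.vmware.com \
  --version 1.17.1 \
  --values-file contour-values.yaml
```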

Verify that the cert-manager pods are successfully deployed.

  • kubectl get pods -A

Scaling your workload cluster

As you use your Workload clusters and deploy packages and other software, you may run into capacity problems and need to add extra resources. TCE can help you with that by scaling your cluster using the CLI.

Clusters can be scaled up or down, and Control plane nodes and Worker nodes can be scaled independently. For example, a cluster with a single Control plane node and two Worker nodes can be scaled to a highly available cluster with three Control plane nodes and three Worker nodes, using just one command.

tanzu cluster scale <CLUSTER_NAME> --controlplane-machine-count 3 --worker-machine-count 3 --namespace <NAMESPACE>

Let’s see this in action. To scale your deployed Workload cluster from two to three Worker nodes, use the command:

  • tanzu cluster scale tanzuce-dev1 -w 3 -n vsphere-dev 
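While the scale operation runs, the new machine can be watched from the Management cluster context, where the Cluster API objects live; a small sketch:

```shell
# Watch the cluster's machines come up (run against the
# Management cluster context; Ctrl-C to stop).
kubectl get machines -n vsphere-dev -w
```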

When the cluster is updated successfully, log in to vCenter Server and verify that a Worker node has been added and is up and running.

Tanzu Community Edition

To scale the cluster down to just one Worker node, use the command:

  • tanzu cluster scale tanzuce-dev1 -w 1 -n vsphere-dev

In Closing

Tanzu Community Edition is a very easy-to-use tool to build, run and manage Kubernetes platforms to support modern applications. It’s completely free and fully featured, and can be installed on your local machine, in vSphere, and on AWS and Azure as supported public clouds. Hopefully, GCP will soon be added as a supported cloud.

There are some things I miss, though. Compared to SUSE Rancher, for example, TCE lacks a management tool with a UI and API that gives you visibility across multiple clusters, clouds and teams, and that lets you deploy from that UI (or API) and configure global policies and guardrails.

VMware Tanzu Mission Control (TMC) can help with that; in the last step of the installer UI, TCE can integrate with TMC. At this time, Tanzu Mission Control is commercially licensed only. But there is light on the horizon: VMware recently announced a free tier for Tanzu Mission Control, planned for 2022: Tanzu Mission Control Starter!

With this addition, VMware will have a complete open-source Kubernetes platform ready to take on OpenShift and Rancher. I can’t wait to see how this develops!