In 2017 I wrote a blog on Containers as a Service. At that time, Docker containers were the next big thing, and VMware created a tightly integrated solution for vSphere, named vSphere Integrated Containers (VIC), to run containers on vSphere like VMs and manage them with vCenter Server, while developers could leverage the Docker API/CLI. To manage these containers, container images and configurations in a cool interface, VMware created Admiral, which was eventually integrated into vRealize Automation 7.x to provide self-service capabilities for applications running in containers.

Today, containers are the basis for all modern applications, and a well-written 12-factor application can easily consist of tens of containers that need to be managed and orchestrated as one entity. This is where Kubernetes comes in: Kubernetes is the de facto standard for declarative container orchestration. Meanwhile, VMware introduced a new Modern Application strategy and a set of products (Tanzu) to easily Build, Run and Manage applications based on containers and running on Kubernetes. Also, vRealize Automation 8 and vRealize Automation Cloud were released, with a completely new architecture and agile Cloud Management capabilities. The Admiral integration for containers was removed, but new capabilities were introduced to support Kubernetes. Kubernetes as a Service was born.

Clusters, Namespaces and Pipelines

vRealize Automation supports multiple Kubernetes use cases, such as requesting and deploying Kubernetes clusters and namespaces from the self-service catalog in Service Broker and managing them through Cloud Assembly, or using Kubernetes as an endpoint in Code Stream for deploying modern applications through delivery pipelines.

All use cases start with a blueprint. I created two blueprints, a single-node Kubernetes cluster and a scalable multi-node Kubernetes cluster, that deploy Kubernetes in a fully automated way so that it can easily be connected to vRealize Automation. There’s also a third blueprint that creates a Kubernetes namespace with a memory quota limit.

The blueprints can be downloaded here from my GitHub account.

Before importing the blueprints and deploying them, make sure that you have an Ubuntu 18.04 template ready to be used as an image mapping for Cloud Assembly. The Ubuntu template must have cloud-init installed and correctly configured. To create such an image, follow this blog.
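For reference, the key cloud-init settings for such a template usually look something like the following sketch. The file name is illustrative and the exact settings depend on how the template was built, so treat this as a starting point rather than the definitive configuration:

```yaml
# /etc/cloud/cloud.cfg.d/99-vmware.cfg -- illustrative file name
# Let cloud-init pick up the userdata that Cloud Assembly supplies via OVF,
# and fall back to no datasource instead of hanging at boot.
datasource_list: [OVF, None]
# Keep cloud-init active alongside VMware guest customization
disable_vmware_customization: false
```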

The blueprint makes use of a static IP address, so make sure that you have a network profile set up using the internal (or external) IPAM functionality.

Blueprint details

The single-node cluster blueprint is built up as follows:

  • Use inputs for configuring the CPU and memory specs of the node, select the version of Kubernetes and set the IP-range for the load balancer
  • Write configuration files for MetalLB and a Kubernetes service account
  • Set a new root password and allow root login from SSH
  • Install Docker as container runtime
  • Install Kubernetes (kubeadm, kubectl, kubelet and kubernetes-cni)
  • Disable swap
  • Setup Kubernetes cluster using kubeadm
  • Export KUBECONFIG file
  • Install Weave Net as network overlay for Kubernetes
  • Remove the taint from the Master node so it can schedule containers
  • Install MetalLB for load-balancing 
  • Install Kubernetes dashboard
  • Export Kubernetes certificate
  • Export Kubernetes bearer token
  • Install Helm

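The core of that sequence can be sketched in a few commands. This is a simplified sketch: the Kubernetes version and the MetalLB address range come from the blueprint inputs, and the exact manifests the blueprint applies may differ.

```shell
# Disable swap, as required by the kubelet
swapoff -a && sed -i '/ swap / s/^/#/' /etc/fstab

# Bootstrap the control plane
kubeadm init

# Point kubectl at the new cluster
export KUBECONFIG=/etc/kubernetes/admin.conf

# Install Weave Net as the pod network overlay
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

# Single-node only: remove the default taint so the master schedules pods
kubectl taint nodes --all node-role.kubernetes.io/master-

# Configure MetalLB's layer 2 address pool (assumes the MetalLB manifests
# have already been applied; the address range is a blueprint input)
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
EOF
```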
The scalable multi-node cluster blueprint uses the same construct but consists of one Master node and a minimum of one Worker node. The blueprint has an extra input to deploy between 1 and 4 Worker nodes, with 2 as the default. Here the Master node keeps its taint and therefore does not schedule containers.

The multi-node deployment starts with the Worker node(s), then deploys the Master node and finally joins the Worker node(s) to the cluster. For this, two important scripts are added in the blueprint to “register the nodes” and run the join command.
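The join step itself boils down to two commands; the blueprint automates passing the token and CA hash between the nodes, and the placeholders below are illustrative:

```shell
# On the master: print a join command with a fresh token and CA hash
kubeadm token create --print-join-command

# On each worker: run the command the master printed, e.g.
kubeadm join <MASTER NODE IP>:6443 --token <TOKEN> \
  --discovery-token-ca-cert-hash sha256:<HASH>
```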

Once the deployment is successful, the generated token and certificate can be used to register the Kubernetes cluster with vRealize Automation and to access the Kubernetes dashboard.

Deploy Kubernetes from Catalog

Once you have imported the Kubernetes blueprints into Cloud Assembly, adjust them to fit your environment. Then version and release them, and add them to the Service Broker catalog.

First, request a single-node Kubernetes cluster for dev/test.

  1. Go to the Service Broker catalog and request the single-node Kubernetes cluster.
  2. Enter a deployment name, select your project, and select the size of the master node and the Kubernetes version. Finally, enter an IP-range for the MetalLB load balancer.
  3. After a successful deployment, use SSH to log on to the console of the master node and check whether the Kubernetes cluster was installed properly by running the following commands:
    • export KUBECONFIG=/etc/kubernetes/admin.conf
    • kubectl get nodes
    • kubectl get pods -A
  4. A certificate and bearer token are automatically created and available on the master node. These can be used to register the Kubernetes cluster in Cloud Assembly.
  5. In Cloud Assembly, select the Infrastructure tab. In the left menu, under Resources, select Kubernetes. Then select Add External.
  6. Enter a name, the address of the cluster (https://<MASTER NODE IP>:6443) and the certificate. Make sure to select Global for Sharing. Select your location and, optionally, your Cloud Proxy. Select Bearer token as the Credentials type and enter the token. Click Validate, then Save.
  7. Once added, click Open to see the details of the cluster, such as the node configuration and namespaces. From here you can also download the kubeconfig file to get access to the cluster; this file can also be downloaded from the deployment details.
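If you need to retrieve the certificate and bearer token manually, something like the following works on the master node. This assumes the blueprint created a service account named admin-user in the kube-system namespace; adjust the names to match your blueprint:

```shell
export KUBECONFIG=/etc/kubernetes/admin.conf

# Cluster CA certificate (paste into the Certificate field)
cat /etc/kubernetes/pki/ca.crt

# Bearer token from the service account's secret
kubectl -n kube-system get secret \
  "$(kubectl -n kube-system get serviceaccount admin-user -o jsonpath='{.secrets[0].name}')" \
  -o jsonpath='{.data.token}' | base64 --decode
```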

Repeat steps 1 to 7 to deploy a multi-node Kubernetes cluster. The only difference is that during the request you can select the number and size of the worker nodes.

Create a Kubernetes zone

Requesting and deploying Kubernetes clusters using the self-service catalog is one use case. For the second use case, requesting Kubernetes namespaces, we need a Kubernetes zone added to a project.

  1. In Cloud Assembly, select the Infrastructure tab. Under Configure, select Kubernetes Zones.
  2. Add a new Kubernetes zone. Select the All external clusters account and enter a name for the zone.
  3. Select the Clusters tab and add the deployed single-node and multi-node clusters.
  4. Add tags to the clusters to be used in your namespace blueprint, for example env:dev and env:prod.
  5. Finally, add the Kubernetes zone to your project.

Requesting Kubernetes namespaces

Import the Kubernetes namespace blueprint into Cloud Assembly and adjust it to fit your environment. Then version and release it, add it to the Service Broker catalog, and request it.

  1. Enter a deployment name, select your project, and select the Kubernetes environment in which to create the namespace. Enter a name for the namespace and select the memory resource quota limit.
  2. Once successfully deployed, you can get the kubeconfig file from the deployment details to access the Kubernetes cluster namespace.
  3. Details of the namespace can be verified by using SSH to log on to the console of the master node and running the following commands:
    • export KUBECONFIG=/etc/kubernetes/admin.conf
    • kubectl get namespaces
    • kubectl describe namespace <YOUR NAMESPACE>
  4. This can also be verified in Cloud Assembly by opening the cluster details and selecting the Namespace tab.
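What the blueprint creates on the cluster is essentially the following pair of objects; the namespace name and quota value below are illustrative and come from the request inputs:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace          # illustrative name from the request input
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: memory-quota
  namespace: my-namespace
spec:
  hard:
    limits.memory: 4Gi        # illustrative memory quota limit
```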

In this blog I showed how easy it is to deliver Kubernetes as a Service: requesting and deploying Kubernetes clusters and namespaces using the self-service catalog in Service Broker and managing them through Cloud Assembly.

In some follow-up blogs I’ll explain how to monitor these clusters using vRealize Operations and Log Insight, and how to use them as endpoints in Code Stream delivery pipelines for deploying modern applications.