How to install Rancher on k3s
For those who don’t know Rancher: Rancher is an enterprise container-management platform built for organizations with multiple teams that deploy containers in production across multiple cloud environments. Rancher makes it easy to run Kubernetes everywhere, meet IT requirements, and empower DevOps teams. It’s a great open-source alternative to VMware vSphere with Tanzu, VMware Tanzu Kubernetes Grid with Tanzu Mission Control, or Red Hat OpenShift.
Rancher provides an intuitive user interface to manage application workloads, so users don’t need in-depth knowledge of Kubernetes concepts to get started. The Rancher catalog contains a set of useful DevOps tools, and Rancher is certified with a wide selection of cloud-native ecosystem products, including security tools, monitoring systems, container registries, and storage and networking drivers.
The following figure illustrates the role Rancher plays in IT and DevOps organizations. Each team deploys their applications on the public or private clouds they choose. IT administrators gain visibility and enforce policies across all users, clusters, and clouds.
To install a k3s cluster with a highly available control plane, you need a minimum of three nodes. In my case, I deployed three virtual machines with a fixed IP address on VMware vSphere 7.0u1, with Ubuntu 20.04 installed and a user vmware configured.
You also need to create four DNS entries in your domain. Three for the k3s nodes and one for the kube-vip load balancer VIP address. For example:
k3s-rancher.homelab.int – 10.0.0.180 (VIP)
k3s-node-1.homelab.int – 10.0.0.181
k3s-node-2.homelab.int – 10.0.0.182
k3s-node-3.homelab.int – 10.0.0.183
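If you don’t run a DNS server in your homelab, the equivalent /etc/hosts entries (using the example addresses above) would look like this:

```
10.0.0.180  k3s-rancher.homelab.int
10.0.0.181  k3s-node-1.homelab.int
10.0.0.182  k3s-node-2.homelab.int
10.0.0.183  k3s-node-3.homelab.int
```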
Also make sure you can log in to the k3s nodes remotely using a username and SSH key. To do so, create (or reuse) an SSH key and install it on each remote machine.
```shell
# On your local machine
git clone https://github.com/centic9/generate-and-send-ssh-key
cd generate-and-send-ssh-key
chmod +x generate-and-send-ssh-key.sh
./generate-and-send-ssh-key.sh --file ~/.ssh/sshkey --user vmware --host 10.0.0.181
```
Login to the first node and configure SSH to use Public Key Authentication.
```shell
ssh vmware@10.0.0.181
sudo sed -i "s/.*PubkeyAuthentication.*/PubkeyAuthentication yes/g" /etc/ssh/sshd_config
sudo systemctl restart sshd
```
Add user vmware to the sudo group and change the sudoers file.
```shell
sudo adduser vmware sudo
sudo visudo
# replace this line:
#   %sudo ALL=(ALL:ALL) ALL
# with this line:
#   %sudo ALL=(ALL:ALL) NOPASSWD:ALL
```
Disable swap. Then log out and reboot.
```shell
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
sudo reboot
```
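If you want to see what that sed expression does before touching your real /etc/fstab, you can try it on a throwaway copy first (the sample file content below is made up for illustration):

```shell
# Create a throwaway fstab with a sample swap line (illustrative content only)
cat > /tmp/fstab.demo <<'EOF'
UUID=abcd-1234 / ext4 defaults 0 1
/swap.img none swap sw 0 0
EOF
# Same sed as above: comment out any line containing " swap "
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.demo
cat /tmp/fstab.demo
# the swap line is now commented: #/swap.img none swap sw 0 0
```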
Repeat these steps on the second and third node.
TIP! Use HashiCorp Packer to create vSphere templates with the above settings (and others) already in place. Then you only have to deploy a VM from the template and apply a fixed IP address using a vSphere customization specification. This is what I did ;-)
Verify you can log in to the first node using SSH.
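For example, run a remote command with the key created earlier (key path and user as configured above):

```shell
ssh -i ~/.ssh/sshkey vmware@10.0.0.181 hostname
```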
Install k3s using k3sup
For the next steps, make sure you have installed the following tools on your local machine.
K3s is a fully CNCF-compliant lightweight Kubernetes distribution: easy to install, half the memory, all in a binary of less than 100 MB.
k3sup is a lightweight utility to get from zero to KUBECONFIG with k3s on any local or remote VM. All you need is SSH access and the k3sup binary to get kubectl access immediately.
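A quick, generic way to confirm the tools used in this guide are on your PATH (kubectx is used further down to switch contexts, and helm is needed later for the Rancher install) is:

```shell
# Check that the CLI tools used in this guide are on the PATH
for tool in k3sup kubectl kubectx helm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```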
Install the first k3s master server using k3sup.
```shell
k3sup install --ip 10.0.0.181 --tls-san 10.0.0.180 --cluster --k3s-channel latest \
  --merge --local-path $HOME/.kube/config --context=k3s-ha-cluster \
  --user vmware --ssh-key $HOME/.ssh/sshkey
```
- --tls-san is required to advertise the kube-vip VIP address, so that k3s will create a valid certificate for the API server.
- --k3s-channel latest specifies the latest version of k3s, which in this instance will be 1.20; by the time you run this tutorial it may have changed, in which case you can give 1.20 as the channel, or pin a specific version.
- --cluster tells the server to use embedded etcd to create a cluster for the servers we will join later on.
- --merge merges the KUBECONFIG from the k3s cluster into our local file.
Check whether k3s is installed successfully.
```shell
kubectx k3s-ha-cluster
kubectl get nodes
```
Kube-vip is a lightweight solution that provides Kubernetes Virtual IP and Load-Balancer for both control plane and Kubernetes services.
Log in as root on the first k3s node and apply the RBAC settings for kube-vip.
```shell
curl -s https://kube-vip.io/manifests/rbac.yaml > /var/lib/rancher/k3s/server/manifests/kube-vip-rbac.yaml
```
Because this manifest is placed in the k3s manifests directory, k3s applies it automatically, creating a kube-vip ServiceAccount, a kube-vip-role ClusterRole, and a kube-vip-binding ClusterRoleBinding.
The next step is to fetch the kube-vip container image, create a kube-vip alias, and generate a kube-vip manifest that deploys a daemonset.
```shell
ctr image pull docker.io/plndr/kube-vip:0.3.2
alias kube-vip="ctr run --rm --net-host docker.io/plndr/kube-vip:0.3.2 vip /kube-vip"
export VIP=10.0.0.180
export INTERFACE=ens192
kube-vip manifest daemonset \
  --arp \
  --interface $INTERFACE \
  --address $VIP \
  --controlplane \
  --leaderElection \
  --taint \
  --inCluster | sudo tee /var/lib/rancher/k3s/server/manifests/kube-vip.yaml
```

Then edit kube-vip.yaml and add the following tolerations to the daemonset:

```yaml
tolerations:
- effect: NoSchedule
  key: node-role.kubernetes.io/master
  operator: Exists
```
Check if kube-vip is correctly installed and ping the VIP address.
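A quick check could look like this (a sketch; the exact pod names come from the generated daemonset manifest, so adjust the grep as needed):

```shell
kubectl get pods -n kube-system | grep kube-vip
ping -c 3 10.0.0.180
```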
Log out and change your local KUBECONFIG to use the VIP address.
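One way to do that is a simple search-and-replace of the first node’s IP with the VIP. Demonstrated here on a minimal sample file so you can see the effect; in practice, point the sed at your real ~/.kube/config:

```shell
# Minimal kubeconfig fragment (sample file; use ~/.kube/config in practice)
cat > /tmp/kubeconfig.demo <<'EOF'
clusters:
- cluster:
    server: https://10.0.0.181:6443
  name: k3s-ha-cluster
EOF
# Rewrite the first node's address to the kube-vip VIP
sed -i 's#https://10.0.0.181:6443#https://10.0.0.180:6443#' /tmp/kubeconfig.demo
grep 'server:' /tmp/kubeconfig.demo
# the server line now points at https://10.0.0.180:6443
```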
Joining the other nodes
The next step is to join the second and third nodes to the k3s cluster using k3sup.
```shell
k3sup join --ip 10.0.0.182 --server --server-ip 10.0.0.180 --k3s-channel latest \
  --user vmware --ssh-key $HOME/.ssh/sshkey
k3sup join --ip 10.0.0.183 --server --server-ip 10.0.0.180 --k3s-channel latest \
  --user vmware --ssh-key $HOME/.ssh/sshkey
```
Check if the nodes are added to the cluster as additional master nodes.
Now you have a highly available k3s cluster with an embedded etcd database, using kube-vip as a load balancer in front of the Kubernetes control plane.
With the highly available Kubernetes cluster in place, it’s finally time to install Rancher.
Make sure you have installed helm on your local machine!
Let’s start with some prerequisites: creating the namespaces and adding the helm repos for Rancher and cert-manager.
```shell
kubectl create namespace cattle-system
kubectl create namespace cert-manager
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.0.4/cert-manager.crds.yaml
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo add jetstack https://charts.jetstack.io
helm repo update
```
The next step is to install cert-manager using helm.
```shell
helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.0.4
kubectl get pods --namespace cert-manager
```
The final step is to install Rancher using helm.
```shell
helm install rancher rancher-latest/rancher --namespace cattle-system --set hostname=k3s-rancher.homelab.int
kubectl -n cattle-system rollout status deploy/rancher
kubectl -n cattle-system get deploy rancher
```
Now try to log in to the Rancher GUI by opening a web browser and pointing it at the DNS name that resolves to your kube-vip VIP address. In my case, k3s-rancher.homelab.int (10.0.0.180).
If this page pops up then Rancher is installed correctly.
Enter a password for the admin account. Choose the default view “I want to create or manage multiple clusters”. Select the “I agree to the terms and conditions for using Rancher” and click Continue.
Enter the Rancher Server URL and click Save URL.
And there you have it: Rancher is up and running, ready to deploy and manage Kubernetes clusters and their workloads in the private and/or public cloud of your choice.