Kubernetes persistent volumes with NetApp Trident - Part 2

In my previous blog post I wrote about using NetApp Trident to orchestrate Kubernetes Persistent Volumes on NetApp ONTAP storage. In Part 1 I explained how to install and configure Trident and how to create and resize a Persistent Volume. One of the advantages of using Trident is that it offers capabilities some other CSI drivers don’t have, such as snapshotting and cloning. In this part I’ll explain how to enable Volume Snapshots in Kubernetes, create a snapshot of a Persistent Volume Claim, make a clone of the snapshot, and use that clone with a new version of an application.

Enable Volume Snapshots in Kubernetes

Before we start, let me point out a very useful tool. If you want to know whether your CSI driver is installed correctly and which capabilities it supports, kubestr is a great tool for that! Download kubestr here.
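
Running kubestr without any arguments lists the storage provisioners in your cluster and the snapshot capabilities it detects (assuming you unpacked the binary into your current working directory):

./kubestr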

Using kubestr on my config shows that Trident supports snapshots, but that no Volume Snapshot Classes were found. The Kubernetes Volume Snapshot feature has been generally available since Kubernetes v1.20, but it is not enabled by default.

To enable and use it, the Volume Snapshot CRDs and the snapshot controller have to be installed.

Let’s download the Kubernetes CSI External Snapshotter, install the CRDs and then the controller.

git clone https://github.com/kubernetes-csi/external-snapshotter.git

cd external-snapshotter

kubectl create -f client/config/crd

kubectl create -f deploy/kubernetes/snapshot-controller
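
To verify the installation before continuing, check that the three snapshot CRDs (volumesnapshots, volumesnapshotclasses and volumesnapshotcontents) are now present:

kubectl get crd | grep snapshot.storage.k8s.io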

Next step is defining a Volume Snapshot Class.

Create a yaml file for the Volume Snapshot Class, for example sc-volumesnapshot.yaml, and use the Trident CSI driver.

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: ontap-csi-snapclass
driver: csi.trident.netapp.io
deletionPolicy: Delete

Notice the deletionPolicy. The deletionPolicy configures what happens to a VolumeSnapshotContent when the VolumeSnapshot object it is bound to is deleted. If the deletionPolicy is Delete, the underlying storage snapshot is deleted along with the VolumeSnapshotContent object. If the deletionPolicy is Retain, both the underlying snapshot and the VolumeSnapshotContent remain.
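
If you would rather keep the underlying ONTAP snapshot when the VolumeSnapshot object is deleted, you can define a second class with Retain. A minimal sketch (the class name ontap-csi-snapclass-retain is just an illustrative choice):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: ontap-csi-snapclass-retain
driver: csi.trident.netapp.io
deletionPolicy: Retain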

Create the Volume Snapshot Class using kubectl

kubectl create -f sc-volumesnapshot.yaml

kubectl get volumesnapshotclasses.snapshot.storage.k8s.io

With Kubernetes Volume Snapshots enabled and a Volume Snapshot Class in place, let’s run kubestr again.

Now it sees the new Volume Snapshot Class. It also shows a command to test the snapshot functionality. Let’s test it.

kubestr csicheck -s ontap-nas-auto-export -v ontap-csi-snapclass

As you can see, kubestr creates a pod with a PVC, takes a snapshot of the PVC, and then restores (clones) the pod and PVC from the snapshot. If everything works as expected, the test is successful.

Create a snapshot

Ok, we’re ready to snapshot an existing Persistent Volume Claim (PVC). In this example I’ll use the PVC blog-content and its content created in Part 1.

Create a yaml file for the PVC snapshot, for example pvc-snapshot.yaml. Use the newly created Volume Snapshot Class ontap-csi-snapclass and point it to the existing PVC blog-content.

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: blog-snapshot
spec:
  volumeSnapshotClassName: ontap-csi-snapclass
  source:
    persistentVolumeClaimName: blog-content

Create the snapshot in the ghost Namespace using kubectl

kubectl create -n ghost -f pvc-snapshot.yaml

kubectl get volumesnapshot -n ghost
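
Before using the snapshot as the source for a clone, you can check that it is ready to use; the command below should print true (readyToUse is a standard field in the snapshot.storage.k8s.io/v1 status):

kubectl get volumesnapshot blog-snapshot -n ghost -o jsonpath='{.status.readyToUse}'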

Now, check the PVC and the snapshot using tridentctl

cd trident-installer

./tridentctl -n trident get volume

./tridentctl -n trident get snapshot

Also check out the volume in NetApp ONTAP System Manager and the snapshot created under the Snapshot Copies tab.

Now that you have an application with a Persistent Volume (PV) and a snapshot of that PV, what can you do with it?

First of all, having a snapshot makes it a lot easier to recover data if someone accidentally (or on purpose) deletes something.
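
For example, one quick way to get deleted files back is to clone a snapshot into a new PVC (exactly what we do with blog-content-fromsnap later in this post) and mount that clone in a throwaway pod. A hypothetical sketch, with restore-browser as an arbitrary pod name, created in the ghost Namespace like the other manifests:

apiVersion: v1
kind: Pod
metadata:
  name: restore-browser
spec:
  containers:
  - name: shell
    image: busybox
    # keep the pod alive so we can exec into it and inspect the restored data
    command: ["sleep", "3600"]
    volumeMounts:
    - mountPath: /restore
      name: restored-content
  volumes:
  - name: restored-content
    persistentVolumeClaim:
      claimName: blog-content-fromsnap

kubectl exec -n ghost -it restore-browser -- ls /restore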

Another use case for snapshots of your data is using such a copy to perform a data-in-place application upgrade. Incorporating this flow (create a data snapshot, deploy a new version of your application, mount the data snapshot) into a CI/CD pipeline makes it very easy to test your new application version and the compatibility of your data with it, as sketched below.
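
A minimal sketch of what such a pipeline stage could look like, assuming the manifest file names used in the rest of this post:

kubectl create -n ghost -f pvc-snapshot.yaml      # snapshot the current data
kubectl create -n ghost -f pvc-from-snap.yaml     # clone the snapshot into a new PVC
kubectl create -n ghost -f deployv3.yaml          # deploy the new application version on the clone
kubectl create -n ghost -f servicev3.yaml
kubectl rollout status -n ghost deployment/blogv3 # wait for the rollout, then run your tests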

Perform a data-in-place application upgrade

So, let’s do a data-in-place upgrade of the Ghost web application from version 2.6 to 3.13. First we create a new PVC by making a clone of the snapshot previously taken. Then we deploy a new version of the Ghost application, mount the newly created PVC, and test that the application works by viewing the existing content created in Part 1.

Create a yaml file for the new PVC, for example pvc-from-snap.yaml.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: blog-content-fromsnap
  labels:
    scenario: clone
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: ontap-nas-auto-export
  dataSource:
    name: blog-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io

Notice the dataSource. In this case I create a new PVC blog-content-fromsnap from the Volume Snapshot blog-snapshot, which results in an exact copy of the existing PVC blog-content.

Create the PVC in the ghost Namespace using kubectl

kubectl create -n ghost -f pvc-from-snap.yaml

kubectl get pvc -n ghost
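
If you script this step, you can wait for the clone to be bound before deploying the new application version (the jsonpath form of kubectl wait requires kubectl v1.23 or later):

kubectl wait -n ghost --for=jsonpath='{.status.phase}'=Bound pvc/blog-content-fromsnap --timeout=60s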

Check NetApp ONTAP System Manager for the new Persistent Volume being created. Also check out the Clone Hierarchy tab.

Now, deploy a new version of the Ghost application.

Create two yaml files, one for the Ghost Deployment and one for the Ghost Service, for example deployv3.yaml and servicev3.yaml. In the Deployment manifest, use the newly created PVC blog-content-fromsnap.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: blogv3
  labels:
    scenario: clone
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blogv3
  template:
    metadata:
      labels:
        app: blogv3
    spec:
      containers:
      - name: blog
        image: ghost:3.13-alpine
        imagePullPolicy: Always
        ports:
        - containerPort: 2368
        env:
        - name: url
          value: http://my-blogv3.homelab.int
        volumeMounts:
        - mountPath: /var/lib/ghost/content
          name: content
      volumes:
      - name: content
        persistentVolumeClaim:
          claimName: blog-content-fromsnap

apiVersion: v1
kind: Service
metadata:
  name: blogv3
  labels:
    scenario: clone
spec:
  #if you have a load balancer installed use type: LoadBalancer
  type: NodePort
  selector:
    app: blogv3
  ports:
  - protocol: TCP
    port: 80
    targetPort: 2368
    nodePort: 30071

Create the Ghost Deployment and Service in the ghost Namespace using kubectl

kubectl create -n ghost -f deployv3.yaml

kubectl create -n ghost -f servicev3.yaml


Log in to Rancher to verify the deployment. Select your Cluster and Project, then the Workloads tab to view your newly deployed Ghost version 3 application.

Select the Volumes tab to view the Persistent Volumes.

Finally, select the Service Discovery tab to view the available services and click on the tcp link to open the Ghost version 3 web page.
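
If you prefer the command line over the Rancher UI, you can run the same checks with kubectl using the scenario=clone label from the manifests above; the blog should then respond on NodePort 30071 of any node (replace <node-ip> with one of your node addresses):

kubectl get deployment,service -n ghost -l scenario=clone

curl http://<node-ip>:30071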

As you can see, the content saved in the initial PV blog-content is still available and shown as a story on the home screen. The data-in-place upgrade was successful! Now redirect your users to the new website, make the old website inaccessible, and shut it down. That’s it. Mission accomplished, again!