NetApp is certainly one of the most popular storage brands, with a broad portfolio of cost-effective storage solutions. With the spread of Kubernetes, they developed Trident, their own Container Storage Interface (CSI) compliant storage orchestrator.
In this "NetApp Astra Trident Tutorial NFS Kubernetes CSI HowTo" we see how easy it is to deploy and configure it.
What Is Astra Trident
Astra Trident is an open-source storage orchestrator for containers and Kubernetes distributions, including Anthos, specifically designed to provide seamless integration with the entire NetApp storage portfolio, including NetApp ONTAP, and supporting both NFS and iSCSI connections.
Since it implements CSI, it enables the creation and configuration of persistent storage external to the Kubernetes cluster, as well as advanced functionality such as snapshots.
Once deployed, it runs:
- The Trident Controller Pod: it runs the CSI Controller plugin and is responsible for provisioning and managing volumes
- The Trident Node Pods: these privileged Pods run the CSI Node plugin and are responsible for mounting and unmounting storage for the Pods scheduled on their Kubernetes node - a quick way to check both objects is shown right after this list
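Once the installation described below is complete, you can quickly verify that both objects exist - a minimal check, assuming the "trident" namespace used later in this post:
kubectl get deployment,daemonset -n trident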
Install Astra Trident
There are two ways to deploy Trident: the Helm chart and the Trident installer. In this post we use the Trident installer, downloading it on every Kubernetes master node and using the "tridentctl" command line utility to deploy Trident inside Kubernetes, but you may use Helm if you are more comfortable with it.
I am showing this method not only because it is the recommended way, but also because in my opinion it is handy to have the "tridentctl" command line utility available on the Kubernetes nodes, so that you can run statements directly from the host's shell.
Kubernetes Masters
Browse the Trident project on GitHub, look for the most up-to-date stable release, and store its URL in the TRIDENT_PACKAGE environment variable as follows:
TRIDENT_PACKAGE=https://github.com/NetApp/trident/releases/download/v24.10.0/trident-installer-24.10.0.tar.gz
then, on every Kubernetes master node, create the "/opt/trident" directory and extract the installer into it by running:
sudo mkdir -m 755 /opt/trident
wget -qO- ${TRIDENT_PACKAGE} | sudo tar xvz --strip-components=1 -C /opt/trident
lastly, add the "/opt/trident" directory to the "PATH" variable - just create the "/etc/profile.d/trident.sh" file with the following contents:
export PATH=${PATH}:/opt/trident
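You can then verify the setup by loading the new profile script in the current shell and asking "tridentctl" for its version - a minimal check; the "--client" flag, which skips contacting the not-yet-installed server side, is assumed here:
source /etc/profile.d/trident.sh
tridentctl version --client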
Kubernetes Worker Nodes
On every worker node, and in general on every node where you want to run workloads with NFS Persistent Volumes managed by Astra Trident, it is necessary to install the NFS utilities:
sudo dnf install -y nfs-utils
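A trivial check that the package is actually in place on RHEL-family distributions like the one assumed above:
rpm -q nfs-utils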
Deploy On Kubernetes
On one of the Kubernetes master nodes, deploy Astra Trident on the Kubernetes cluster - in this example we are deploying to the "trident" namespace:
tridentctl install -n trident
the output should look as follows:
INFO Starting Trident installation. namespace=trident
INFO Created namespace. namespace=trident
INFO Created controller service account.
INFO Created controller role.
INFO Created controller role binding.
INFO Created controller cluster role.
INFO Created controller cluster role binding.
INFO Created node linux service account.
INFO Creating or patching the Trident CRDs.
INFO Applied latest Trident CRDs.
INFO Added finalizers to custom resource definitions.
INFO Created Trident service.
INFO Created Trident encryption secret.
INFO Created Trident protocol secret.
INFO Created Trident resource quota.
INFO Created Trident deployment.
INFO Created Trident daemonset.
INFO Waiting for Trident pod to start.
INFO Trident pod started. deployment=trident-controller namespace=trident pod=trident-controller-4134215492-atlqe
INFO Waiting for Trident REST interface.
INFO Trident REST interface is up. version=24.10.0
let's have a look at the Kubernetes side - we can check the deployed Pods by typing:
kubectl get pods -n trident -o custom-columns=NAME:.metadata.name,STATUS:.status.phase,NODE:.spec.nodeName
the output is as follows:
NAME                                   STATUS    NODE
trident-controller-4134215492-atlqe   Running   kubew-ca-up1a005
trident-node-linux-27hw4              Running   kubea-ca-up1a001
trident-node-linux-bdmpj              Running   kubea-ca-up1b002
trident-node-linux-jclcm              Running   kubea-ca-up1c003
trident-node-linux-jx7hr              Running   kubew-ca-up1a005
trident-node-linux-vr8bl              Running   kubew-ca-up1b006
trident-node-linux-zh7rn              Running   kubew-ca-up1c007
Configure Astra Trident
We can now begin the actual configuration.
Configure A Storage Provisioning Backend
The first thing to configure is the connection to a storage provisioning backend: this is what the Astra Trident controller uses to provision on the fly the storage volumes backing the Persistent Volumes.
Connecting a storage provisioning backend requires creating a YAML file with all the required settings: on one of the Kubernetes master nodes, create the "backend.yml" file using the following snippet as a template:
---
version: 1
storageDriverName: ontap-nas
backendName: netap-p1-0
managementLIF: 10.21.15.120
dataLIF: 10.21.20.219
svm: nfs-kube-p1-0
username: kube-p1-0
password: B4q3v0e.db1e-
limitAggregateUsage: 80%
limitVolumeSize: 200Gi
nfsMountOptions: nfsvers=4
defaults:
  spaceReserve: volume
  exportPolicy: nfs-kube-p1-0
  snapshotPolicy: default
  snapshotReserve: '10'
Of course the actual values depend on how you configured your storage device - the above example is for an ONTAP NAS.
I don't provide an explanation of each setting since it would merely duplicate the official manual - you can find the best explanation for each of them in the official Trident documentation.
Once done, use the file to create and connect the "netap-p1-0" backend:
tridentctl -n trident create backend -f backend.yml
If everything worked properly, the backend should be listed when typing:
tridentctl get backend -n trident
the output should look as follows:
+------------+----------------+--------------------------------------+--------+------------+---------+
|    NAME    | STORAGE DRIVER |                 UUID                 | STATE  | USER-STATE | VOLUMES |
+------------+----------------+--------------------------------------+--------+------------+---------+
| netap-p1-0 | ontap-nas      | 1b3f4204-8ba9-4980-9be7-903c7a3946bf | online | normal     |       0 |
+------------+----------------+--------------------------------------+--------+------------+---------+
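Should you later need to change any of the backend settings, edit "backend.yml" and re-apply it - to the best of my knowledge this is done with the "update backend" subcommand, so verify the exact syntax against "tridentctl update backend --help":
tridentctl update backend netap-p1-0 -f backend.yml -n trident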
Configure Storage Classes
The next (and last) step is to configure one or more storage classes, setting one of them as the default one. In this post we create the following two storage classes:
- ontap-gold-delete - Persistent Volumes (PV) provisioned with this storage class are automatically deleted after the bound Persistent Volume Claim (PVC) is deleted
- ontap-gold-retain - Persistent Volumes (PV) provisioned with this storage class are retained after the bound Persistent Volume Claim (PVC) is deleted
Create the "storage-classes.yml" file with the following contents:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ontap-gold-delete
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.trident.netapp.io
parameters:
  backendType: "ontap-nas"
  media: "ssd"
  provisioningType: "thin"
  snapshots: "true"
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ontap-gold-retain
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: csi.trident.netapp.io
parameters:
  backendType: "ontap-nas"
  media: "ssd"
  provisioningType: "thin"
  snapshots: "true"
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: Immediate
then apply it using the "kubectl" command line utility:
kubectl apply -f storage-classes.yml
we can display them by running:
kubectl get StorageClass
the output should look as follows:
NAME                          PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ontap-gold-delete (default)   csi.trident.netapp.io   Delete          Immediate           true                   1m
ontap-gold-retain             csi.trident.netapp.io   Retain          Immediate           true                   1m
note how the "ontap-gold-delete" storage class is marked as default (as requested).
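Should you later want to move the default flag to the other class, it is enough to patch the "storageclass.kubernetes.io/is-default-class" annotation with plain kubectl - a minimal sketch (remember to unset the annotation on the old default as well):
kubectl patch storageclass ontap-gold-retain -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
kubectl patch storageclass ontap-gold-delete -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'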
If everything worked properly, the storage classes should also show up when using the "tridentctl" command-line utility:
tridentctl get storageclass -n trident
the output should look as follows:
+-------------------+
| NAME |
+-------------------+
| ontap-gold-delete |
| ontap-gold-retain |
+-------------------+
Checking The Logs
Of course things do not always work as expected, so sometimes it is necessary to troubleshoot. The easiest way to have a look at the logs is to use the "tridentctl" command line tool with the "logs" argument.
For example:
./tridentctl logs -n trident
in this example the output is:
time="2024-10-11T14:45:40Z" level=info msg="Running Trident storage orchestrator." binary=/trident_orchestrator build_time="Fri Jul 26 11:49:09 PM EDT 2024" version=24.06.1
time="2024-10-11T14:45:40Z" level=info msg="Initializing metrics frontend." address=":8001" logLayer=metrics_frontend requestID=b9b7f5fa-73f8-4287-93b8-9ed06650f697 requestSource=Internal workflow="plugin=create"
time="2024-10-11T14:45:40Z" level=info msg="Added frontend." name=metrics
time="2024-10-11T14:45:40Z" level=info msg="Initializing K8S helper frontend." logLayer=csi_frontend requestID=317989a9-fca0-4d96-b4fa-718b8c194c29 requestSource=Internal workflow="plugin=create"
time="2024-10-11T14:45:40Z" level=info msg="K8S helper determined the container orchestrator version." gitVersion=v1.28.13+rke2r1 logLayer=csi_frontend requestID=317989a9-fca0-4d96-b4fa-718b8c194c29 requestSource=Internal version=1.28 workflow="plugin=create"
time="2024-10-11T14:45:40Z" level=info msg="K8S helper frontend initialized." logLayer=csi_frontend requestID=317989a9-fca0-4d96-b4fa-718b8c194c29 requestSource=Internal workflow="plugin=create"
time="2024-10-11T14:45:40Z" level=info msg="Initializing K8S helper frontend." logLayer=csi_frontend requestID=cc6f2723-0d0b-45ef-a2d0-fc6f94edf1c7 requestSource=Internal workflow="plugin=create"
time="2024-10-11T14:45:40Z" level=info msg="Added frontend." logLayer=core name=k8s_csi_helper requestID=20498d3e-a23b-4cb6-a2e9-9f319cb4bcbd requestSource=Unknown workflow="core=init"
time="2024-10-11T14:45:40Z" level=info msg="Initializing CSI frontend." name=kubew-ca-up1a005 version=24.06.1
time="2024-10-11T14:45:40Z" level=info msg="Initializing CSI controller frontend." logLayer=csi_frontend requestID=1b59286a-0419-4865-a5a3-dfd674e794be requestSource=Internal workflow="plugin=create"
time="2024-10-11T14:45:40Z" level=info msg="Enabling controller service capability." capability=CREATE_DELETE_VOLUME logLayer=csi_frontend requestID=1b59286a-0419-4865-a5a3-dfd674e794be requestSource=Internal workflow="plugin=create"
time="2024-10-11T14:45:40Z" level=info msg="Enabling controller service capability." capability=PUBLISH_UNPUBLISH_VOLUME logLayer=csi_frontend requestID=1b59286a-0419-4865-a5a3-dfd674e794be requestSource=Internal workflow="plugin=create"
time="2024-10-11T14:45:40Z" level=info msg="Enabling controller service capability." capability=LIST_VOLUMES logLayer=csi_frontend requestID=1b59286a-0419-4865-a5a3-dfd674e794be requestSource=Internal workflow="plugin=create"
time="2024-10-11T14:45:40Z" level=info msg="Enabling controller service capability." capability=CREATE_DELETE_SNAPSHOT logLayer=csi_frontend requestID=1b59286a-0419-4865-a5a3-dfd674e794be requestSource=Internal workflow="plugin=create"
time="2024-10-11T14:45:40Z" level=info msg="Enabling controller service capability." capability=LIST_SNAPSHOTS logLayer=csi_frontend requestID=1b59286a-0419-4865-a5a3-dfd674e794be requestSource=Internal workflow="plugin=create"
time="2024-10-11T14:45:40Z" level=info msg="Enabling controller service capability." capability=EXPAND_VOLUME logLayer=csi_frontend requestID=1b59286a-0419-4865-a5a3-dfd674e794be requestSource=Internal workflow="plugin=create"
time="2024-10-11T14:45:40Z" level=info msg="Enabling controller service capability." capability=CLONE_VOLUME logLayer=csi_frontend requestID=1b59286a-0419-4865-a5a3-dfd674e794be requestSource=Internal workflow="plugin=create"
time="2024-10-11T14:45:40Z" level=info msg="Enabling controller service capability." capability=LIST_VOLUMES_PUBLISHED_NODES logLayer=csi_frontend requestID=1b59286a-0419-4865-a5a3-dfd674e794be requestSource=Internal workflow="plugin=create"
time="2024-10-11T14:45:40Z" level=info msg="Enabling controller service capability." capability=SINGLE_NODE_MULTI_WRITER logLayer=csi_frontend requestID=1b59286a-0419-4865-a5a3-dfd674e794be requestSource=Internal workflow="plugin=create"
time="2024-10-11T14:45:40Z" level=info msg="Enabling volume access mode." logLayer=csi_frontend mode=SINGLE_NODE_WRITER requestID=1b59286a-0419-4865-a5a3-dfd674e794be requestSource=Internal workflow="plugin=create"
time="2024-10-11T14:45:40Z" level=info msg="Enabling volume access mode." logLayer=csi_frontend mode=SINGLE_NODE_SINGLE_WRITER requestID=1b59286a-0419-4865-a5a3-dfd674e794be requestSource=Internal workflow="plugin=create"
time="2024-10-11T14:45:40Z" level=info msg="Enabling volume access mode." logLayer=csi_frontend mode=SINGLE_NODE_MULTI_WRITER requestID=1b59286a-0419-4865-a5a3-dfd674e794be requestSource=Internal workflow="plugin=create"
time="2024-10-11T14:45:40Z" level=info msg="Enabling volume access mode." logLayer=csi_frontend mode=SINGLE_NODE_READER_ONLY requestID=1b59286a-0419-4865-a5a3-dfd674e794be requestSource=Internal workflow="plugin=create"
time="2024-10-11T14:45:40Z" level=info msg="Enabling volume access mode." logLayer=csi_frontend mode=MULTI_NODE_READER_ONLY requestID=1b59286a-0419-4865-a5a3-dfd674e794be requestSource=Internal workflow="plugin=create"
time="2024-10-11T14:45:40Z" level=info msg="Enabling volume access mode." logLayer=csi_frontend mode=MULTI_NODE_SINGLE_WRITER requestID=1b59286a-0419-4865-a5a3-dfd674e794be requestSource=Internal workflow="plugin=create"
time="2024-10-11T14:45:40Z" level=info msg="Enabling volume access mode." logLayer=csi_frontend mode=MULTI_NODE_MULTI_WRITER requestID=1b59286a-0419-4865-a5a3-dfd674e794be requestSource=Internal workflow="plugin=create"
time="2024-10-11T14:45:40Z" level=info msg="Added frontend." logLayer=core name=csi requestID=20498d3e-a23b-4cb6-a2e9-9f319cb4bcbd requestSource=Unknown workflow="core=init"
time="2024-10-11T14:45:40Z" level=info msg="Added frontend." logLayer=core name=crd requestID=20498d3e-a23b-4cb6-a2e9-9f319cb4bcbd requestSource=Unknown workflow="core=init"
time="2024-10-11T14:45:40Z" level=info msg="Initializing HTTP REST frontend." address="127.0.0.1:8000"
time="2024-10-11T14:45:40Z" level=info msg="Added frontend." name="HTTP REST"
time="2024-10-11T14:45:40Z" level=info msg="Initializing HTTPS REST frontend." address=":8443"
time="2024-10-11T14:45:40Z" level=info msg="Added frontend." name="HTTPS REST"
time="2024-10-11T14:45:40Z" level=info msg="Created Trident-ACP REST API Client."
time="2024-10-11T14:45:40Z" level=info msg="Activating metrics frontend." address=":8001" logLayer=metrics_frontend requestID=980a56e9-a6a6-457c-b8ac-2eef2369cb82 requestSource=Internal workflow="plugin=activate"
time="2024-10-11T14:45:40Z" level=info msg="Activating HTTP REST frontend." address="127.0.0.1:8000"
time="2024-10-11T14:45:40Z" level=info msg="Activating HTTPS REST frontend." address=":8443"
time="2024-10-11T14:45:45Z" level=error msg="Error creating ONTAP REST API client for initial call. Falling back to ZAPI." error="unable to get details for SVM nfs-kube-c1; &{<nil>} (*models.ErrorResponse) is not supported by the TextConsumer, can be resolved by supporting TextUnmarshaler interface" logLayer=core requestID=e4d60ec8-e33e-495a-a2dc-b9c53a659a05 requestSource=Internal workflow="core=bootstrap"
time="2024-10-11T14:45:45Z" level=info msg="Using ZAPI client" Backend=netap-p1-0 logLayer=core requestID=e4d60ec8-e33e-495a-a2dc-b9c53a659a05 requestSource=Internal workflow="core=bootstrap"
time="2024-10-11T14:45:45Z" level=info msg="Controller serial numbers." logLayer=core requestID=e4d60ec8-e33e-495a-a2dc-b9c53a659a05 requestSource=Internal serialNumbers="651727000149,651727000148" workflow="core=bootstrap"
time="2024-10-11T14:45:46Z" level=info msg="Storage driver initialized." driver=ontap-nas logLayer=core requestID=e4d60ec8-e33e-495a-a2dc-b9c53a659a05 requestSource=Internal workflow="core=bootstrap"
time="2024-10-11T14:45:46Z" level=info msg="Created new storage backend." backend="&{0xc000235808 netap-p1-0 true online normal map[naspm_dotnaa_1_data:0xc0000c12c0] map[] false}" logLayer=core requestID=e4d60ec8-e33e-495a-a2dc-b9c53a659a05 requestSource=Internal workflow="core=bootstrap"
time="2024-10-11T14:45:46Z" level=info msg="Newly added backend satisfies no storage classes." backend=netap-p1-0 logLayer=core requestID=e4d60ec8-e33e-495a-a2dc-b9c53a659a05 requestSource=Internal workflow="core=bootstrap"
time="2024-10-11T14:45:46Z" level=info msg="Added an existing storage class." handler=Bootstrap logLayer=core requestID=e4d60ec8-e33e-495a-a2dc-b9c53a659a05 requestSource=Internal storageClass=ontap-gold workflow="core=bootstrap"
time="2024-10-11T14:45:46Z" level=info msg="Added 6 existing volume(s)" logLayer=core requestID=e4d60ec8-e33e-495a-a2dc-b9c53a659a05 requestSource=Internal workflow="core=bootstrap"
time="2024-10-11T14:45:46Z" level=info msg="Added an existing node." handler=Bootstrap logLayer=core node=kubea-ca-up1a001 requestID=e4d60ec8-e33e-495a-a2dc-b9c53a659a05 requestSource=Internal workflow="core=bootstrap"
time="2024-10-11T14:45:46Z" level=info msg="Added an existing node." handler=Bootstrap logLayer=core node=kubea-ca-up1b002 requestID=e4d60ec8-e33e-495a-a2dc-b9c53a659a05 requestSource=Internal workflow="core=bootstrap"
time="2024-10-11T14:45:46Z" level=info msg="Added an existing node." handler=Bootstrap logLayer=core node=kubea-ca-up1c003 requestID=e4d60ec8-e33e-495a-a2dc-b9c53a659a05 requestSource=Internal workflow="core=bootstrap"
time="2024-10-11T14:45:46Z" level=info msg="Added an existing node." handler=Bootstrap logLayer=core node=kubew-ca-up1a005 requestID=e4d60ec8-e33e-495a-a2dc-b9c53a659a05 requestSource=Internal workflow="core=bootstrap"
time="2024-10-11T14:45:46Z" level=info msg="Added an existing node." handler=Bootstrap logLayer=core node=kubew-ca-up1b006 requestID=e4d60ec8-e33e-495a-a2dc-b9c53a659a05 requestSource=Internal workflow="core=bootstrap"
time="2024-10-11T14:45:46Z" level=info msg="Added an existing node." handler=Bootstrap logLayer=core node=kubew-ca-up1c007 requestID=e4d60ec8-e33e-495a-a2dc-b9c53a659a05 requestSource=Internal workflow="core=bootstrap"
time="2024-10-11T14:45:46Z" level=error msg="Transaction monitor blocked by bootstrap error." error="Trident is initializing, please try again later" logLayer=core requestID=e4d60ec8-e33e-495a-a2dc-b9c53a659a05 requestSource=Internal workflow="core=bootstrap"
time="2024-10-11T14:45:46Z" level=info msg="Trident bootstrapped successfully." logLayer=core requestID=e4d60ec8-e33e-495a-a2dc-b9c53a659a05 requestSource=Internal workflow="core=bootstrap"
time="2024-10-11T14:45:46Z" level=info msg="Activating K8S helper frontend." logLayer=csi_frontend requestID=9d926a4d-af92-45c5-bb42-6ef402ae4dfb requestSource=Internal workflow="plugin=activate"
time="2024-10-11T14:45:46Z" level=info msg="Activating CRD frontend." logLayer=crd_frontend logSource=trident-crd-controller requestID=462caeb4-79fc-46cc-afcf-7196b2f78db4 requestSource=Unknown
time="2024-10-11T14:45:46Z" level=info msg="Activating CSI frontend." logLayer=csi_frontend requestID=d8719cfc-3250-4d52-b7ea-be42928216e0 requestSource=Internal workflow="plugin=activate"
time="2024-10-11T14:45:46Z" level=info msg="Starting Trident CRD controller."
time="2024-10-11T14:45:46Z" level=info msg="Starting periodic node access reconciliation service." logLayer=core requestID=b7567c2c-453f-4176-b3f1-a0576c7ed102 requestSource=Periodic workflow="core=node_reconcile"
time="2024-10-11T14:45:46Z" level=info msg="Starting periodic backend state reconciliation service." logLayer=core requestID=2132782d-1b89-40f3-a2a0-f31c3a2f6e4d requestSource=Periodic workflow="core=backend_reconcile"
time="2024-10-11T14:45:46Z" level=info msg="Listening for GRPC connections." name=/plugin/csi.sock net=unix
time="2024-10-11T14:45:46Z" level=warning msg="K8S helper could not add a storage class: storage class ontap-gold already exists" logLayer=csi_frontend name=ontap-gold parameters="map[backendType:ontap-nas media:ssd provisioningType:thin snapshots:true]" provisioner=csi.trident.netapp.io requestID=7fe47354-13ef-4d4a-b568-006670cce418 requestSource=Kubernetes workflow="storage_class=create"
time="2024-10-11T14:45:46Z" level=info msg="Node reconciliation complete." logLayer=csi_frontend requestID=9d926a4d-af92-45c5-bb42-6ef402ae4dfb requestSource=Internal workflow="plugin=activate"
time="2024-10-11T14:45:59Z" level=info msg="Determined topology labels for node: map[]" logLayer=rest_frontend node=kubea-ca-up1a001 requestID=df3646bc-46e8-4c8a-be70-426e3d2deeb5 requestSource=REST workflow="trident_rest=logger"
time="2024-10-11T14:45:59Z" level=info msg="Added a new node." handler=AddOrUpdateNode logLayer=rest_frontend node=kubea-ca-up1a001 requestID=df3646bc-46e8-4c8a-be70-426e3d2deeb5 requestSource=REST workflow="trident_rest=logger"
time="2024-10-11T14:46:20Z" level=info msg="Determined topology labels for node: map[]" logLayer=rest_frontend node=kubew-ca-up1a005 requestID=61ff372f-6530-429c-adc4-a993cee4cc6c requestSource=REST workflow="trident_rest=logger"
time="2024-10-11T14:46:20Z" level=info msg="Added a new node." handler=AddOrUpdateNode logLayer=rest_frontend node=kubew-ca-up1a005 requestID=61ff372f-6530-429c-adc4-a993cee4cc6c requestSource=REST workflow="trident_rest=logger"
time="2024-10-11T14:47:45Z" level=info msg="Determined topology labels for node: map[]" logLayer=rest_frontend node=kubea-ca-up1c003 requestID=e15650aa-ee44-4552-aafa-a44f97fb286a requestSource=REST workflow="trident_rest=logger"
time="2024-10-11T14:47:45Z" level=info msg="Added a new node." handler=AddOrUpdateNode logLayer=rest_frontend node=kubea-ca-up1c003 requestID=e15650aa-ee44-4552-aafa-a44f97fb286a requestSource=REST workflow="trident_rest=logger"
time="2024-10-11T14:47:45Z" level=info msg="Determined topology labels for node: map[]" logLayer=rest_frontend node=kubea-ca-up1b002 requestID=ef14d470-efd7-488b-9b50-f03b3813f854 requestSource=REST workflow="trident_rest=logger"
time="2024-10-11T14:47:45Z" level=info msg="Added a new node." handler=AddOrUpdateNode logLayer=rest_frontend node=kubea-ca-up1b002 requestID=ef14d470-efd7-488b-9b50-f03b3813f854 requestSource=REST workflow="trident_rest=logger"
time="2024-10-11T14:48:07Z" level=info msg="Unpublishing volume from node." logLayer=core node=kubew-ca-up1a005 requestID=355061db-0813-49e2-9c57-fba28ee594a3 requestSource=CSI volume=pvc-9c91463e-8714-4d9f-90ec-1c3e81e663f5 workflow="controller=unpublish"
time="2024-10-11T14:48:07Z" level=info msg="Unpublishing volume from node." logLayer=core node=kubew-ca-up1a005 requestID=38dd6d64-6259-4ddf-9d08-e627e99297a0 requestSource=CSI volume=pvc-9c91463e-8714-4d9f-90ec-1c3e81e663f5 workflow="controller=unpublish"
time="2024-10-11T14:48:07Z" level=warning msg="Volume is not published to node; doing nothing." error="unable to get volume publication record" logLayer=core nodeName=kubew-ca-up1a005 requestID=38dd6d64-6259-4ddf-9d08-e627e99297a0 requestSource=CSI volumeName=pvc-9c91463e-8714-4d9f-90ec-1c3e81e663f5 workflow="controller=unpublish"
Provision A Volume
If everything went smoothly so far, we can now test the automatic provisioning of a Persistent Volume - it is enough to declare a PVC manifest and apply it.
As an example, create the "test-pvc.yml" file with the following contents:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
The above manifest defines a very trivial 1Gi Persistent Volume Claim (PVC) with the ReadWriteOnce access mode, meaning the volume can be mounted read-write by a single node at a time.
Apply it using the "kubectl" command line utility as usual:
kubectl apply -f test-pvc.yml
we can check how the PVC has actually been processed by Kubernetes by typing:
kubectl get pvc
the output should look like the following:
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        VOLUMEATTRIBUTESCLASS   AGE
test-pvc   Bound    pvc-257d268d-e6fe-4885-a600-cb85c96cdc3f   1Gi        RWO            ontap-gold-delete   <unset>                 5s
as you can see, the PVC called "test-pvc" was picked up by the Astra Trident controller, which created the "pvc-257d268d-e6fe-4885-a600-cb85c96cdc3f" Persistent Volume (PV) on the fly: as soon as Kubernetes saw a PV matching the criteria of the "test-pvc" PVC, it immediately bound them together.
Since the "ontap-gold-delete" storage class is marked as the default one, and the PVC did not explicitly specify a storage class, the "ontap-gold-delete" storage class was automatically selected.
We can of course have a look at the provisioned PV by typing:
kubectl get pv
the output should look like the following:
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS        VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-257d268d-e6fe-4885-a600-cb85c96cdc3f   1Gi        RWO            Delete           Bound    default/test-pvc   ontap-gold-delete   <unset>
As expected, since the storage class is "ontap-gold-delete", the PV's reclaim policy is "Delete".
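If a specific workload needs the retain behaviour instead, the claim can explicitly request the non-default storage class - a minimal sketch, where "test-pvc-retain" is just an illustrative name:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc-retain
spec:
  storageClassName: ontap-gold-retain
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi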
Mind that the Astra Trident controller can be configured to use more than one backend - for example a fast one for I/O intensive traffic, and a slower one for less I/O sensitive applications.
The above output says nothing about the PV's backend source: if you need storage-specific information like that, you can rely again on the "tridentctl" command line utility.
For example:
tridentctl get volume -n trident
in this case the output is:
+------------------------------------------+---------+-------------------+----------+--------------------------------------+-------+---------+
|                   NAME                   |  SIZE   |   STORAGE CLASS   | PROTOCOL |             BACKEND UUID             | STATE | MANAGED |
+------------------------------------------+---------+-------------------+----------+--------------------------------------+-------+---------+
| pvc-257d268d-e6fe-4885-a600-cb85c96cdc3f | 1.0 GiB | ontap-gold-delete | file     | 1b3f4204-8ba9-4980-9be7-903c7a3946bf |       | true    |
+------------------------------------------+---------+-------------------+----------+--------------------------------------+-------+---------+
If everything worked properly, we can now delete the PVC we created:
kubectl delete pvc test-pvc
Since the storage class's reclaimPolicy is set to Delete, the Persistent Volume is deleted as well, and Astra Trident takes care of dropping the volume from the backend, actually freeing the storage.
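Had the claim used the "ontap-gold-retain" storage class instead, both the PV and the volume on the backend would survive the PVC deletion and would have to be cleaned up manually - a hedged sketch, where the PV name is just a placeholder and the "delete volume" subcommand is an assumption to verify against "tridentctl delete --help":
kubectl delete pv <pv-name>
tridentctl delete volume <pv-name> -n trident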
Footnotes
As you can see, Astra Trident seamlessly integrates with Kubernetes, implementing CSI in a very effective way - it also exposes Prometheus metrics to keep track of provisioned and used storage capacity, ... but this of course requires Prometheus.
This will be the topic of the next post: deploying Prometheus and Grafana to make sense of what happens in the Kubernetes cluster.
Stay tuned, and if you like the contents, ... a tip is always welcome.
If you appreciate this effort and you like this post and the other ones, please share them on LinkedIn - shares and comments are an inexpensive way to push me to keep on writing - this blog makes sense only if it gets visited.