
NetApp is one of the most popular storage brands, with a broad portfolio of cost-effective storage solutions. With the spread of Kubernetes, they developed Trident, their own Container Storage Interface (CSI) compatible storage orchestrator.
In this "NetApp Astra Trident Tutorial NFS Kubernetes CSI HowTo" we see how easy it is to deploy and set it up.
What Is Astra Trident
Astra Trident is an open-source storage orchestrator for containers and Kubernetes distributions, including Anthos, specifically designed to provide seamless integration with the entire NetApp storage portfolio, including NetApp ONTAP, and supporting both NFS and iSCSI connections.
Since it supports CSI, it enables the creation and configuration of persistent storage external to the Kubernetes cluster, as well as advanced functionality such as snapshots.
Once deployed, it runs:
- The Trident Controller Pod: it is the one actually running the CSI Controller plugin, and is responsible for provisioning and managing volumes
- The Trident Node Pods: these privileged Pods, running the CSI Node plugin, are responsible for mounting and unmounting storage for Pods running on the Kubernetes node
Install Astra Trident
There are two ways to deploy Trident: the Helm chart and the Trident installer. In this post we use the Trident installer, downloading it on every Kubernetes master node and using the "tridentctl" command line utility to deploy it inside Kubernetes; you may use Helm instead if you are more comfortable with it.

I am showing this method not only because it is the recommended way, but also because in my opinion it is nice to have the "tridentctl" command line utility available on the Kubernetes node, being able to run statements directly from the host's shell.
Kubernetes Masters
Browse the Trident project on GitHub looking for the most up-to-date stable release and store its URL in the TRIDENT_PACKAGE environment variable as follows:
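For instance (the release number below is only an assumption for illustration - replace it with whatever the latest stable release is when you read this):

```shell
# URL of the Trident installer tarball - version 24.02.0 is just an
# example; pick the latest stable release from the GitHub releases page
export TRIDENT_PACKAGE=https://github.com/NetApp/trident/releases/download/v24.02.0/trident-installer-24.02.0.tar.gz
```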
then, on every Kubernetes master node, create the "/opt/trident" directory and install Trident by running:
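A possible way of doing it, assuming the TRIDENT_PACKAGE variable set in the previous step, is the following (the --strip-components trick assumes the tarball still unpacks into a single "trident-installer" top-level directory):

```shell
# create the target directory and unpack the installer into it;
# --strip-components=1 drops the "trident-installer" top-level
# directory so that tridentctl lands directly in /opt/trident
sudo mkdir -p /opt/trident
curl -L "${TRIDENT_PACKAGE}" | sudo tar -xz -C /opt/trident --strip-components=1
```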
lastly, add the "/opt/trident" directory to the "PATH" variable - just create the "/etc/profile.d/trident.sh" file with the following contents:
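A minimal sketch of the script:

```shell
# append the Trident installer directory to the PATH
# of every login shell, making tridentctl available
export PATH="${PATH}:/opt/trident"
```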
Kubernetes Worker Nodes
On every worker node, and in general on every node where you want to run workloads with NFS Persistent Volumes managed by Astra Trident, it is necessary to install the NFS utilities:
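On Red Hat family distributions this means, for example:

```shell
# NFS client utilities, needed by the Trident node Pods
# to mount the NFS exports on the host
sudo dnf install -y nfs-utils

# on Debian/Ubuntu nodes the equivalent package is nfs-common:
# sudo apt-get install -y nfs-common
```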
Deploy On Kubernetes
On one of the Kubernetes master nodes, deploy Astra Trident on the Kubernetes cluster - in this example we are deploying to the "trident" namespace:
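For example:

```shell
# deploy Astra Trident into the "trident" namespace
tridentctl install -n trident
```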
the output must look as follows:
let's have a look at the Kubernetes side - we can check the deployed Pods by typing:
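For example:

```shell
# list the Pods deployed by the Trident installer
kubectl get pods -n trident
```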
the output is as follows:
Configure Astra Trident
We can now begin the actual configuration.
Configure A Storage Provisioning Backend
The first configuration step is connecting a storage provisioning backend: this is what the Astra Trident controller uses to provision, on the fly, the storage volumes backing Persistent Volumes.
Connecting a storage provisioning backend requires creating a manifest YAML file with all the required settings: on one of the Kubernetes master nodes, create the "backend.yml" manifest file using the following snippet as a template:
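A minimal template for an "ontap-nas" backend might look like the following - the backend name matches the one used later in this post, while the LIF addresses, SVM name, and credentials are placeholder values you must replace with your own:

```yaml
version: 1
storageDriverName: ontap-nas
backendName: netap-p1-0
managementLIF: 10.0.0.1
dataLIF: 10.0.0.2
svm: svm_trident
username: admin
password: secret
```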
Of course the actual values depend on how you configured your storage device - the above example is for an ONTAP NAS.

I don't provide an explanation of the settings since it would be a duplicate of the official manual - you can get the best explanation for each of them on the Trident official documentation.
Once done, apply the manifest to create and connect the "netap-p1-0" backend:
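Note that backends are managed through "tridentctl" rather than "kubectl":

```shell
# create the backend from the manifest file
tridentctl create backend -f backend.yml -n trident
```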
If everything worked properly, the backend must be listed when typing:
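```shell
# list the backends known to the Trident controller
tridentctl get backend -n trident
```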
the output must look as follows:
Configure Storage Classes

The next (and last) step is to configure one or more storage classes, setting one of them as the default. In this post we create the following two storage classes:
- ontap-gold-delete - Persistent Volumes (PV) provisioned with this storage class are automatically deleted after the bound Persistent Volume Claim (PVC) is deleted
- ontap-gold-retain - Persistent Volumes (PV) provisioned with this storage class are retained after the bound Persistent Volume Claim (PVC) is deleted
Create the "storage-classes.yml" file with the following contents:
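A possible manifest, assuming the "ontap-nas" backend driver configured earlier, might look as follows - the annotation on the first class is what marks it as the cluster default:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ontap-gold-delete
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.trident.netapp.io
reclaimPolicy: Delete
parameters:
  backendType: ontap-nas
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ontap-gold-retain
provisioner: csi.trident.netapp.io
reclaimPolicy: Retain
parameters:
  backendType: ontap-nas
```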
then apply it using the "kubectl" command line utility:
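```shell
kubectl apply -f storage-classes.yml
```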
we can display them by running:
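```shell
# storage classes are cluster-scoped, so no namespace is needed
kubectl get storageclass
```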
the output must look as follows:
note how the "ontap-gold-delete" storage class is marked as the default (as requested).
if everything worked properly, the storage classes must also be listed when using the "tridentctl" command-line utility:
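```shell
# list the storage classes as seen by the Trident controller
tridentctl get storageclass -n trident
```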
the output must look as follows:
Checking The Logs
Of course things do not always work as expected, so sometimes it is necessary to troubleshoot. The easiest way to have a look at the logs is to use the "tridentctl" command line tool with the "logs" argument.
For example:
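```shell
# fetch the logs of the Trident controller
tridentctl logs -n trident
```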
in this example the output is:
Provision A Volume
If everything went smoothly so far, we can now test the automatic provisioning of a Persistent Volume - it is enough to declare a PVC manifest and apply it.
As an example, create the "test-pvc.yml" file with the following contents:
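A minimal sketch of such a claim - note that no storage class is specified, so the default one is used:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```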
The above manifest defines a very trivial 1Gi Persistent Volume Claim (PVC) with the ReadWriteOnce access mode - the volume can be mounted read-write by a single node at a time.
Apply it using the "kubectl" command line utility as usual:
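```shell
kubectl apply -f test-pvc.yml
```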
we can have a look at how the PVC has actually been processed by Kubernetes by typing:
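```shell
kubectl get pvc test-pvc
```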
the output must look like the following one:
as you see, the PVC called "test-pvc" was caught by the Astra Trident controller, which created the "pvc-257d268d-e6fe-4885-a600-cb85c96cdc3f" Persistent Volume (PV) on the fly: as soon as Kubernetes saw a PV matching the criteria of the "test-pvc" PVC, it immediately bound them.
Since the "ontap-gold-delete" storage class was marked as the default, and the PVC did not explicitly specify a storage class, Kubernetes automatically selected the "ontap-gold-delete" storage class.
We can of course have a look at the provisioned PV by typing:
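```shell
# Persistent Volumes are cluster-scoped, so no namespace is needed
kubectl get pv
```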
the output must look like the following one:
As expected, since the storage class is "ontap-gold-delete", the PV's reclaim policy is "Delete".
Mind that the Astra Trident controller can be configured to use more than just one backend - for example a fast one for IO intensive traffic, and a slower one for less IO sensitive applications.
The above output says nothing about the PV's backend source. If you need storage-specific information like that, you can rely again on the "tridentctl" command line utility.
For example:
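```shell
# list the volumes provisioned by Trident, along with the
# backend each of them was carved from
tridentctl get volume -n trident
```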
in this case the output is:
If everything worked properly, we can now delete the PVC we created:
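```shell
kubectl delete pvc test-pvc
```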
Since the storage class's reclaim policy was set to "Delete", the Persistent Volume is deleted as well, and Astra Trident takes care of dropping the volume from the backend, actually freeing the storage.
Footnotes
As you can see, Astra Trident seamlessly integrates with Kubernetes, implementing CSI in a very effective way - it also exposes Prometheus metrics to report provisioned and used storage capacity, ... but this of course requires Prometheus.
This will be the topic of the next post: deploying Prometheus and Grafana to make sense of what happens in the Kubernetes cluster.
Stay tuned, and if you like the contents, ... a tip is always welcome.
If you appreciate this effort and you like this post and the other ones, please share them on LinkedIn - sharing and comments are an inexpensive way to encourage me to keep writing - this blog makes sense only if it gets visited.