This guide provides a simple example of how to use the Lustre CSI driver with dynamic provisioning. Dynamic provisioning allows you to create storage backed by Google Cloud Managed Lustre instances on demand and use them as volumes for stateful workloads.
Ensure that you have followed the PSA User Guide to set up private services access (PSA) for your VPC network.
If you followed the installation guide to install the CSI driver, a StorageClass named `lustre-rwx` should already exist in your cluster. Alternatively, you can create a custom StorageClass with specific parameters (a sketch of a custom StorageClass follows the list). The Lustre CSI driver supports the following parameters:

- `network`: (Optional) The VPC network where the Lustre instance will be created. If not specified, the network of the GKE cluster is used. To create a Lustre instance in a Shared VPC network, provide the full network name, e.g., `projects/<host-project-id>/global/networks/<vpc-network-name>`.
- `filesystem`: (Optional) The filesystem name for the Lustre instance. It must be an alphanumeric string (up to 8 characters), beginning with a letter. If not provided, the CSI driver automatically generates a filesystem name in the format `lfs<NNNNN>` (e.g., `lfs97603`). Note: If you want to mount multiple Lustre instances on the same node, it is recommended to create a separate StorageClass for each instance and to ensure a unique filesystem name for each, because the filesystem name must be unique on each node.
- `labels`: (Optional) A set of key-value pairs to assign as labels to the Managed Lustre instance.
- `description`: (Optional) A description of the instance (2048 characters or less).
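For example, a custom StorageClass using these parameters might look like the sketch below. The provisioner name `lustre.csi.storage.gke.io` and the comma-separated `key=value` encoding for `labels` are assumptions; the project, network, filesystem, and label values are placeholders to replace with your own:

```yaml
# Sketch of a custom StorageClass for the Lustre CSI driver.
# The provisioner name and the labels encoding are assumptions;
# all parameter values are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: lustre-rwx-custom
provisioner: lustre.csi.storage.gke.io
parameters:
  network: projects/<host-project-id>/global/networks/<vpc-network-name>
  filesystem: lfsdemo
  labels: env=dev,team=storage
  description: "Lustre instance for the dynamic provisioning example"
reclaimPolicy: Delete
```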
Apply the example PVC configuration:
```bash
kubectl apply -f ./examples/dynamic-prov/dynamic-pvc.yaml
```
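The example PVC requests a ReadWriteMany volume from the `lustre-rwx` StorageClass. A minimal sketch of such a manifest is shown below; the name and capacity mirror the expected output further down, but the actual `dynamic-pvc.yaml` shipped in the repository may differ:

```yaml
# Minimal PVC sketch for dynamic provisioning with the lustre-rwx StorageClass.
# Name and capacity mirror the expected output below; the shipped example may differ.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lustre-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: lustre-rwx
  resources:
    requests:
      storage: 16Ti
```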
Check that the PVC has been successfully bound to a PV:

```bash
kubectl get pvc
```

Expected output:

```
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
lustre-pvc   Bound    pvc-be98607a-7a37-40b7-b7d7-28c9adce7b77   16Ti       RWX            lustre-rwx     <unset>                 24s
```
Deploy the example Pod that consumes the PVC:

```bash
kubectl apply -f ./examples/dynamic-prov/dynamic-pod.yaml
```
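A Pod uses the dynamically provisioned volume like any other PVC-backed volume. The sketch below is illustrative (the container image, command, and mount path are assumptions); the actual `dynamic-pod.yaml` in the repository may differ:

```yaml
# Sketch of a Pod mounting the dynamically provisioned Lustre volume.
# Image, command, and mount path are illustrative assumptions; see the
# shipped dynamic-pod.yaml for the actual example.
apiVersion: v1
kind: Pod
metadata:
  name: lustre-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: lustre-volume
          mountPath: /data
  volumes:
    - name: lustre-volume
      persistentVolumeClaim:
        claimName: lustre-pvc
```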
It may take a few minutes for the Pod to reach the `Running` state:

```bash
kubectl get pods
```

Expected output:

```
NAME         READY   STATUS    RESTARTS   AGE
lustre-pod   1/1     Running   0          11s
```
Once you've completed your experiment, delete the Pod and PVC.

Note: The PV is created with a `Delete` `persistentVolumeReclaimPolicy`, meaning that deleting the PVC also deletes the PV and the underlying Lustre instance.
```bash
kubectl delete pod lustre-pod
kubectl delete pvc lustre-pvc
```

Verify that the PV has been removed:

```bash
kubectl get pv
```

Expected output:

```
No resources found
```
Note: It may take a few minutes for the underlying Lustre instance to be fully deleted.