
# Importing an Existing Lustre Instance - User Guide

This guide provides a simple example of how to use the Lustre CSI driver to import and connect to an existing Lustre instance that has been pre-provisioned by an administrator.

## Importing a Lustre Instance as a Persistent Volume

If you haven't already provisioned a Google Cloud Managed Lustre instance, follow the instructions here to create one.

### Creating a Persistent Volume for a Lustre Instance

#### Prerequisite

Before applying the Persistent Volume (PV) and Persistent Volume Claim (PVC) manifest, update `./examples/pre-prov/preprov-pvc-pv.yaml` with the correct values (see the sketch after this list):

- `volumeHandle`: Update with the correct project ID, zone, and Lustre instance name.
- `storage`: This value should match the size of the underlying Lustre instance.
- `volumeAttributes`:
  - `ip` must point to the Lustre instance's IP address.
  - `filesystem` must be the Lustre instance's filesystem name.
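
For reference, a minimal sketch of what the manifest might contain is shown below. The project ID, zone, instance name, IP address, and filesystem name are placeholders, and the driver name `lustre.csi.storage.gke.io` is an assumption; treat the manifest in the repository as authoritative.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: preprov-pv
spec:
  storageClassName: ""
  capacity:
    storage: 16Ti                        # match the size of the Lustre instance
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: lustre.csi.storage.gke.io    # assumed driver name
    # assumed format: <project-id>/<zone>/<instance-name> -- placeholders below
    volumeHandle: "my-project/us-central1-a/my-lustre-instance"
    volumeAttributes:
      ip: "10.0.0.2"                     # placeholder: the Lustre instance IP
      filesystem: "lustrefs"             # placeholder: the Lustre filesystem name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: preprov-pvc
spec:
  storageClassName: ""
  volumeName: preprov-pv
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 16Ti
```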

#### 1. Create a Persistent Volume (PV) and Persistent Volume Claim (PVC)

Apply the example PV and PVC configuration:

```bash
kubectl apply -f ./examples/pre-prov/preprov-pvc-pv.yaml
```

#### 2. Verify that the PVC and PV are bound

```bash
kubectl get pvc
```

Expected output:

```
NAME          STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
preprov-pvc   Bound    preprov-pv   16Ti       RWX                           76s
```

## Using the Persistent Volume in a Pod

### 1. Deploy the Pod

```bash
kubectl apply -f ./examples/pre-prov/preprov-pod.yaml
```
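
For reference, `preprov-pod.yaml` is essentially a Pod that mounts the PVC created above. A minimal sketch follows; the container image, command, and mount path are illustrative assumptions rather than the repository's exact manifest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lustre-pod
spec:
  containers:
    - name: app
      image: busybox                   # illustrative image
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: lustre-volume
          mountPath: /data             # illustrative mount path
  volumes:
    - name: lustre-volume
      persistentVolumeClaim:
        claimName: preprov-pvc
```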

### 2. Verify that the Pod is running

It may take a few minutes for the Pod to reach the `Running` state:

```bash
kubectl get pods
```

Expected output:

```
NAME           READY   STATUS    RESTARTS   AGE
lustre-pod     1/1     Running   0          11s
```

## Cleaning Up

### 1. Delete the Pod and PVC

Once you've completed your experiment, delete the Pod and PVC.

Note: The PV was created with `persistentVolumeReclaimPolicy: Retain`, meaning that deleting the PVC will not remove the PV or the underlying Lustre instance.

```bash
kubectl delete pod lustre-pod
kubectl delete pvc preprov-pvc
```

### 2. Check the PV status

After deleting the Pod and PVC, the PV should report a `Released` state:

```bash
kubectl get pv
```

Expected output:

```
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                 STORAGECLASS   REASON   AGE
preprov-pv   16Ti       RWX            Retain           Released   default/preprov-pvc                           2m28s
```

### 3. Reuse the PV

To reuse the PV, remove its claim reference (`claimRef`):

```bash
kubectl patch pv preprov-pv --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'
```

The PV should now report an `Available` status, making it ready to be bound to a new PVC:

```bash
kubectl get pv
```

Expected output:

```
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
preprov-pv   16Ti       RWX            Retain           Available                                    19m
```
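
For example, a new PVC can bind to the released PV by naming it explicitly. A minimal sketch, reusing the values from the example above (the claim name `preprov-pvc-2` is hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: preprov-pvc-2        # hypothetical name for the new claim
spec:
  storageClassName: ""
  volumeName: preprov-pv     # bind directly to the released PV
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 16Ti
```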

### 4. Delete the PV (If No Longer Needed)

If the PV is no longer needed, delete it.

Note: Deleting the PV does not remove the underlying Lustre instance.

```bash
kubectl delete pv preprov-pv
```