# csi_cordon_drain
This example tests a Deployment with a single pod that uses a ReadWriteOnce (RWO) PVC: when the pod's node is cordoned and drained, the pod is rescheduled onto another node while the PVC remains available with its data intact.
Define a PersistentVolumeClaim requesting 1Gi of storage:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc-opennebula
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: opennebula-fs
```

Apply the PVC:
```shell
kubectl apply -f pvc.yaml
```
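Optionally, check the PVC status before continuing. Depending on the StorageClass's volume binding mode, it may stay `Pending` until a pod consumes it. This check is not part of the original test, just a convenience:

```shell
# Inspect the PVC created above; STATUS may be Pending or Bound
# depending on the StorageClass's volumeBindingMode.
kubectl get pvc test-pvc-opennebula
```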
Create a Deployment with a single replica that mounts the PVC:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pvc-cordon-drain-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pvc-cordon-drain
  template:
    metadata:
      labels:
        app: pvc-cordon-drain
    spec:
      containers:
        - name: app
          image: busybox
          command: ["sh", "-c", "echo $HOSTNAME >> /data/example && sleep infinity"]
          volumeMounts:
            - mountPath: /data
              name: pvc-storage
      volumes:
        - name: pvc-storage
          persistentVolumeClaim:
            claimName: test-pvc-opennebula
```

Apply the Deployment:
```shell
kubectl apply -f deployment.yaml
```
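As an optional sanity check (not part of the original walkthrough), confirm that the PVC is now bound and a PersistentVolume was provisioned for it:

```shell
# The PVC should report Bound and reference a dynamically provisioned PV.
kubectl get pvc test-pvc-opennebula
kubectl get pv
```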
Check that the pod is running and see which node it is on:

```console
$ kubectl get pods -l app=pvc-cordon-drain -o wide
NAME                               READY   STATUS    NODE
pvc-cordon-drain-deployment-<XX>   1/1     Running   capone-workload-md-0-gt8rh-tn866
```
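Before cordoning the node, you can verify that the pod wrote its hostname to the volume; this gives a baseline to compare against after rescheduling. This extra step is not in the original test; `kubectl exec` against the Deployment picks one of its pods:

```shell
# Read the file the container appends its hostname to on startup.
kubectl exec deploy/pvc-cordon-drain-deployment -- cat /data/example
```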
Mark the node as unschedulable so no new pods are scheduled there:

```shell
kubectl cordon capone-workload-md-0-gt8rh-tn866
```
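If you want to confirm the cordon took effect, the node should now report `SchedulingDisabled` (optional check, not part of the original page):

```shell
kubectl get node capone-workload-md-0-gt8rh-tn866
```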
Evict the pod safely so it moves to another node:

```shell
kubectl drain capone-workload-md-0-gt8rh-tn866 --ignore-daemonsets --delete-emptydir-data
```
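While the drain runs, you can watch the pod being evicted and rescheduled in another terminal (optional; press Ctrl-C to stop watching):

```shell
kubectl get pods -l app=pvc-cordon-drain -o wide -w
```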
Check that the pod has been rescheduled on a new node and is running:

```console
$ kubectl get pods -l app=pvc-cordon-drain -o wide
NAME                               READY   STATUS    NODE
pvc-cordon-drain-deployment-<XX>   1/1     Running   capone-workload-md-0-gt8rh-7snfw
```
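To confirm the data survived the move, read the file again from the rescheduled pod; since the container appends its hostname on startup, both pod hostnames should now be present. Finally, uncordon the node to make it schedulable again. These follow-up commands are a suggested addition, not part of the original page:

```shell
# The file should contain the hostnames of both the old and the new pod.
kubectl exec deploy/pvc-cordon-drain-deployment -- cat /data/example

# Restore the drained node so it can accept pods again.
kubectl uncordon capone-workload-md-0-gt8rh-tn866
```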