
csi_cordon_drain

Jaime Conchello edited this page Sep 3, 2025 · 1 revision

Cordon and Drain Test

This example tests a Deployment with a single pod that uses a ReadWriteOnce (RWO) PVC: when the original node is cordoned and drained, the pod is rescheduled onto another node while the PVC remains available with its data intact.

Step 1: Create a PVC

Define a PersistentVolumeClaim requesting 1Gi of storage:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc-opennebula
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: opennebula-fs

Apply the PVC:

kubectl apply -f pvc.yaml
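Before continuing, you can confirm that the claim was provisioned and bound (output abridged; the volume name will vary with your cluster):

```shell
# Check that the PVC has been bound to a dynamically provisioned PV
kubectl get pvc test-pvc-opennebula
```

Note that if the `opennebula-fs` StorageClass uses `volumeBindingMode: WaitForFirstConsumer`, the PVC will stay `Pending` until the Deployment in the next step schedules a pod that mounts it; that is expected.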

Step 2: Create a Deployment

Create a Deployment with a single replica that mounts the PVC:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pvc-cordon-drain-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pvc-cordon-drain
  template:
    metadata:
      labels:
        app: pvc-cordon-drain
    spec:
      containers:
        - name: app
          image: busybox
          command: ["sh", "-c", "echo $HOSTNAME >> /data/example && sleep infinity"]
          volumeMounts:
            - mountPath: /data
              name: pvc-storage
      volumes:
        - name: pvc-storage
          persistentVolumeClaim:
            claimName: test-pvc-opennebula

Apply the Deployment:

kubectl apply -f deployment.yaml
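Before verifying placement, you can wait for the rollout to finish (the timeout value here is an arbitrary choice for this sketch):

```shell
# Block until the single replica is available, or fail after 120s
kubectl rollout status deployment/pvc-cordon-drain-deployment --timeout=120s
```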

Step 3: Verify Pod and Node

Check that the pod is running and see which node it is on:

$ kubectl get pods -l app=pvc-cordon-drain -o wide
NAME                               READY   STATUS   NODE
pvc-cordon-drain-deployment-<XX>   1/1     Running  capone-workload-md-0-gt8rh-tn866

Step 4: Cordon the Node

Mark the node as unschedulable so no new pods are scheduled there:

kubectl cordon capone-workload-md-0-gt8rh-tn866
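To confirm the cordon took effect, check the node status (the node name is the one from this example; substitute your own):

```shell
# The STATUS column should now read Ready,SchedulingDisabled
kubectl get node capone-workload-md-0-gt8rh-tn866
```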

Step 5: Drain the Node

Evict the pod safely so it moves to another node. `--ignore-daemonsets` skips DaemonSet-managed pods (which cannot be evicted), and `--delete-emptydir-data` allows eviction of pods using emptyDir volumes:

kubectl drain capone-workload-md-0-gt8rh-tn866 --ignore-daemonsets --delete-emptydir-data
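While the drain runs, you can watch the pod being evicted and recreated, and check that the CSI driver detaches the volume from the old node and attaches it to the new one (a sketch; press Ctrl-C to stop the watch):

```shell
# Watch the pod terminate on the old node and start on a new one
kubectl get pods -l app=pvc-cordon-drain -o wide -w

# List VolumeAttachment objects to see which node the CSI volume is attached to
kubectl get volumeattachments
```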

Step 6: Verify Pod Rescheduling

Check that the pod has been rescheduled on a new node and is running:

$ kubectl get pods -l app=pvc-cordon-drain -o wide
NAME                               READY   STATUS   NODE
pvc-cordon-drain-deployment-<XX>   1/1     Running  capone-workload-md-0-gt8rh-7snfw
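To confirm the PVC's data survived the move, read back the file that the container appends its hostname to on startup. This is a sketch: the pod name is looked up dynamically rather than typed in. Since each pod appends its own `$HOSTNAME`, the file should now contain two lines, one from the original pod and one from the rescheduled pod:

```shell
# Look up the current pod name and print the shared file from the PVC
POD=$(kubectl get pods -l app=pvc-cordon-drain -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$POD" -- cat /data/example
```

Once verified, the original node can be returned to service with `kubectl uncordon capone-workload-md-0-gt8rh-tn866`.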
