csi_dev
This section walks through the process of deploying the CSI driver on a CAPONE cluster. The deployment consists of three main steps:
- Deploy the Management Cluster: Create a local management cluster to orchestrate workload clusters.
- Deploy the Workload Cluster: Spin up a Kubeadm workload cluster with Flannel installed.
- Build and Deploy CSI Driver: Use Tilt to build the CSI container image and deploy the CSI Helm chart in the workload cluster.
These steps are automated with several Makefile targets for your convenience.
Before starting, make sure the following prerequisites are met:
- Kubectl: Required for interacting with both the management and workload clusters.
- Docker: Must be installed on the machine.
- When using a local insecure container registry, the container runtime must be configured to trust it. For Docker, add the registry to the list of insecure registries in /etc/docker/daemon.json (for reference, the value of <REGISTRY_IP> is defined in the section below) and restart the daemon (see the sketch after this list):

  ```json
  { "insecure-registries": ["<REGISTRY_IP>:5005"] }
  ```
- Virtual Networks (VNETs) required by the CAPONE chart, with the following network names:
  - PUBLIC NETWORK NAME: service
  - PRIVATE NETWORK NAME: private

  To use different network names, edit the chart installation in the Makefile. You can confirm the VNETs exist as shown in the sketch below.
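Below is a minimal sketch showing how you might apply the Docker registry change and check that the expected VNETs exist; it assumes a systemd-managed Docker daemon and that the OpenNebula CLI tools are available on the frontend:

```bash
# Restart Docker so the insecure-registries entry in /etc/docker/daemon.json takes effect
# (assumes Docker is managed by systemd)
sudo systemctl restart docker

# Confirm the registry is now listed as insecure
docker info | grep -A 2 "Insecure Registries"

# On the OpenNebula frontend, confirm the VNETs expected by the CAPONE chart exist
# (assumes the OpenNebula CLI tools are installed there)
onevnet list
```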
Before proceeding, gather the following information:
- Registry IP: The IP address where the container registry will be exposed. It must be reachable from the workload cluster. In this example, we use the bridge IP 172.20.0.1.
- OpenNebula API endpoint: The IP or hostname of the OpenNebula XML-RPC endpoint, reachable from the workload cluster (e.g., the bridge IP 172.20.0.1).
- OpenNebula credentials: A user account and password with access to the necessary resources.
Create a file named .env in your working directory with the following content (an example .env.sample file exists in the repository), and ensure its variables are exported in your environment (see the sketch after the variable table below):

Important: Replace the placeholder values (<>) with your specific installation details.
```
LOCAL_REGISTRY=<REGISTRY_IP>:5005
LOCAL_TAG=latest
ONE_XMLRPC=http://<OPENNEBULA_ENDPOINT>:2633/RPC2
ONE_AUTH=<USERNAME>:<PASSWORD>
DEBUG_PORT=5000
WORKLOAD_CLUSTER_NAME=capone-workload
WORKER_NODES=2
```
Below is a complete description of all the variables:
| Variable Name | Description |
|---|---|
| LOCAL_REGISTRY | Local container registry to push the CSI driver image to. |
| LOCAL_TAG | Tag to use for the CSI driver image. |
| ONE_XMLRPC | OpenNebula XML-RPC endpoint URL. |
| ONE_AUTH | Authentication credentials for OpenNebula (user:password). |
| DEBUG_PORT | Port for Tilt debugging. |
| WORKLOAD_CLUSTER_NAME | Name of the workload cluster to create. |
| WORKER_NODES | Number of worker nodes in the workload cluster. |
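As a minimal sketch of one way to load these variables into the current shell (assuming a POSIX-compatible shell such as bash), so that the Makefile targets can read them from the environment:

```bash
# Export every variable defined in .env into the current shell session
set -a
source .env
set +a

# Quick sanity check; the output depends on your values
echo "$LOCAL_REGISTRY $ONE_XMLRPC $WORKLOAD_CLUSTER_NAME"
```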
Run the following command in the root directory of the repository to create the management cluster:
make mgmt-cluster-create
Before proceeding with the workload cluster deployment, ensure the management cluster has been initialized by running:
kubectl get pods -A
You should see output similar to the following, with the capone-controller-manager up and running:

```
NAMESPACE       NAME                    READY   STATUS    RESTARTS   AGE
capone-system   capone-controller-...   1/1     Running   0          10m
[...]
```
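If you prefer to wait non-interactively rather than polling kubectl get pods, something like the following can be used (a sketch; it assumes the controller runs as a Deployment in the capone-system namespace, as the pod name above suggests):

```bash
# Block until all deployments in the capone-system namespace report Available (up to 5 minutes)
kubectl wait --for=condition=Available deployment --all -n capone-system --timeout=300s
```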
Next, deploy the Kubeadm workload cluster with Flannel:
make workload-cluster-deploy
Retrieve the workload cluster kubeconfig to access it. This command generates a file named kubeconfig-workload.yaml in the root directory:
make workload-cluster-kubeconfig
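Optionally, to avoid passing --kubeconfig to every command, you can point the KUBECONFIG environment variable at the generated file for the current shell session (a convenience only; the examples below keep the explicit flag):

```bash
# Make kubectl target the workload cluster by default in this shell
export KUBECONFIG="$PWD/kubeconfig-workload.yaml"
kubectl get nodes
```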
Finally, confirm that the cluster is up and running before proceeding with the CSI driver installation:
```
$ kubectl get nodes --kubeconfig kubeconfig-workload.yaml
NAME                               STATUS   ROLES           AGE    VERSION
capone-workload-584br              Ready    control-plane   3m3s   v1.31.4
capone-workload-md-0-nsjcr-7j4mm   Ready    <none>          104s   v1.31.4
capone-workload-md-0-nsjcr-t4fcb   Ready    <none>          89s    v1.31.4
```
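If you are scripting this step, you can wait for all nodes to become Ready instead of checking manually (a sketch; the timeout value is an arbitrary choice):

```bash
# Wait until every node in the workload cluster reports the Ready condition (up to 10 minutes)
kubectl wait --for=condition=Ready nodes --all \
  --kubeconfig kubeconfig-workload.yaml --timeout=600s
```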
Use Tilt to build the CSI container image and deploy the CSI Helm chart:
make tilt-up
After running the command, press the Space bar or open http://localhost:10350/ in a browser to monitor the deployment. The CSI driver will be ready once all resources reach a healthy state.
Check that the CSI DaemonSet is running on all nodes:
```
$ kubectl get ds opennebula-csi-node -n kube-system --kubeconfig kubeconfig-workload.yaml
NAME                  DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
opennebula-csi-node   3         3         3       3            3           <none>          82s
```
Then, verify that the CSI controller StatefulSet is healthy:
```
$ kubectl get sts opennebula-csi-controller -n kube-system --kubeconfig kubeconfig-workload.yaml
NAME                        READY   AGE
opennebula-csi-controller   1/1     5m56s
```
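As an additional sanity check, you can confirm that the driver has registered with the kubelet on each node by listing the cluster's CSINode objects (these are standard Kubernetes objects; the driver name reported in the DRIVERS column depends on the Helm chart values):

```bash
# Each node should list the OpenNebula CSI driver once node registration completes
kubectl get csinodes --kubeconfig kubeconfig-workload.yaml
```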
From now on, any change to the CSI driver code base will be automatically rebuilt and redeployed to the development Kubernetes cluster by Tilt.
To debug the OpenNebula CSI driver with Tilt, follow the steps above up to Step 3: Build and Deploy the CSI Driver. Then, instead of executing make tilt-up, execute:
make tilt-up-debug
This builds a CSI driver container with a Delve debug server listening on the specified DEBUG_PORT (5000 by default).
Then, you can port-forward the pod's debug port to your localhost with kubectl:
```
kubectl port-forward pod/opennebula-csi-controller-0 <local_port>:<DEBUG_PORT> -n kube-system --kubeconfig kubeconfig-workload.yaml &
```
e.g.:
```
kubectl port-forward pod/opennebula-csi-controller-0 5001:5000 -n kube-system --kubeconfig kubeconfig-workload.yaml &
```
Now, you can start debugging the OpenNebula CSI driver from your IDE or debug client by connecting to that remote port, e.g., in Visual Studio Code:
```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Connect to server",
      "type": "go",
      "request": "attach",
      "mode": "remote",
      "remotePath": "${workspaceFolder}",
      "port": 5001,
      "host": "127.0.0.1"
    }
  ]
}
```
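Alternatively, if you prefer the Delve command-line client over an IDE, you can attach directly to the forwarded port (assuming dlv is installed locally and the port-forward above is still running):

```bash
# Attach the Delve CLI to the debug server exposed through the port-forward
dlv connect 127.0.0.1:5001
```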
First, delete the CSI driver deployment created with Tilt:
make tilt-down
or, if you started it in debug mode:
make tilt-down-debug
Then you can destroy the workload cluster:
make workload-cluster-destroy
Once the workload cluster is destroyed (i.e., the OpenNebula VMs backing the workload cluster have been deleted), you can delete the management cluster running in Kind:
make mgmt-cluster-destroy
Finally, if you want to clean up all built artifacts and temporary directories, you can also execute:
make clean