
Conversation

@jseguillon

Virtink is a solution for deploying micro-VMs in Kubernetes. It supports Cluster API.

It's a small project, but it works very well, and coupling it with Kamaji gives great results.

I propose to include it in Kamaji with this PR.

@prometherion
Member

We're happy when Kamaji can easily integrate with other community projects, and we're definitely open to this.

However, I'm not sure the Virtink CAPI integration works without issues, mostly due to this:
https://github.com/smartxworks/cluster-api-provider-virtink/blob/cf3d5668db51a8ff1674c53a4e1df715631d7ce4/controllers/virtinkcluster_controller.go#L150

Essentially, the provider takes for granted that it has to create a Service for the Control Plane, but this is already managed by Kamaji.

We have two options here:

  1. Interact with the Virtink community, explain the use case, and add a flag or an annotation to wait for the assignment of a control plane address by an external Control Plane provider.
  2. As we're doing with KubeVirt, ignore the Cluster handler using the CAPI annotation (managed-by) so that Kamaji creates the Service and patches the Control Plane Endpoint field.

@jseguillon
Author

jseguillon commented Oct 28, 2025

Hi,
FYI, here is the cluster YAML source if you want to reproduce and test a Kamaji + Virtink cluster: https://gist.github.com/jseguillon/1e588d3cea095ce67ecce205f29fd89f

Indeed, I ran into trouble with Services and had to patch things a little to make this cluster work. The reason is that the default control plane Service does not have any endpoints:

NAME                        ENDPOINTS                                                        AGE
capi-kamaji-virtink00       <none>                                                           3m24s
capi-kamaji-virtink00-kcp   10.42.0.162:6443,10.42.0.163:6443,10.42.0.164:6443 + 3 more...   11d
kubernetes                  172.19.34.189:6443                                               6d6h

In the example gist you may notice I override the control plane endpoint in order to get a working cluster:

apiVersion: cluster.x-k8s.io/v1beta2
kind: Cluster
...
  controlPlaneEndpoint:                        
    host: capi-kamaji-virtink00-kcp.default.svc
    port: 6443

I also tried tweaking the annotations with:

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VirtinkCluster
...
spec:
  controlPlaneServiceTemplate:
    metadata:
      annotations:
        cluster.x-k8s.io/managed-by: kamaji
      namespace: default

but it did not help.

Maybe there are some things I still don't understand? I'm a new user of both Kamaji and Virtink :)

Please let me know if I can give more information to help you add Virtink to Kamaji.

PS: I also pushed up-to-date container images. They are still a work in progress, and the OS container is probably sub-optimal in terms of layer size, but it should be good enough to help you test the gist above.

@prometherion
Member

@jseguillon you have to disable it by setting the annotation cluster.x-k8s.io/managed-by on the Infrastructure Cluster itself: https://github.com/clastix/cluster-api-control-plane-provider-kamaji/blame/955882eed5a67e3cb45b9a3d87f7bbe793793433/docs/providers-kubevirt.md#L44-L50
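
Based on the linked KubeVirt documentation, the annotation belongs in the VirtinkCluster's top-level metadata, not inside the controlPlaneServiceTemplate as in the earlier attempt. A sketch, reusing the resource name from the gist above (the annotation value shown is an assumption):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: VirtinkCluster
metadata:
  name: capi-kamaji-virtink00   # name reused from the gist above
  namespace: default
  annotations:
    # Marks this Infrastructure Cluster as externally managed, so the
    # Virtink provider does not create its own control plane Service.
    cluster.x-k8s.io/managed-by: kamaji
```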

@jseguillon
Author

jseguillon commented Nov 18, 2025

Hey, thanks for the update.
I did a test with the annotation now in the correct place, but it does not really change anything on the VMs/Services, or maybe I'm missing something.

Could you please give me some pointers to the source code related to

As we're doing with KubeVirt, ignore the Cluster handler using the CAPI annotation (managed-by) so that Kamaji creates the Service and patches the Control Plane Endpoint field.

I'd like to dig more and maybe propose some patch on this solution for virtink.
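
For context, the mechanism quoted above can be sketched as a simple annotation check in an infrastructure provider's reconciler. This is a minimal, self-contained illustration; shouldSkipReconcile is a hypothetical helper, not actual Virtink or Kamaji code:

```go
package main

import "fmt"

// The CAPI "externally managed" annotation; when present on an
// infrastructure cluster, the provider is expected to skip parts of
// its reconciliation (such as creating the control plane Service).
const managedByAnnotation = "cluster.x-k8s.io/managed-by"

// shouldSkipReconcile (hypothetical helper) reports whether the
// infrastructure provider should leave the Cluster alone because
// an external Control Plane provider manages it.
func shouldSkipReconcile(annotations map[string]string) bool {
	_, ok := annotations[managedByAnnotation]
	return ok
}

func main() {
	infra := map[string]string{managedByAnnotation: "kamaji"}
	fmt.Println(shouldSkipReconcile(infra)) // prints "true"
	fmt.Println(shouldSkipReconcile(nil))   // prints "false"
}
```

With the handler skipped on the provider side, Kamaji's control plane provider is then free to create the Service and patch the Control Plane Endpoint field itself, as described for the KubeVirt integration.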

@prometherion
Member

@jseguillon according to your changes, it seems you're missing RBAC to patch the Virtink cluster:

//+kubebuilder:rbac:groups=infrastructure.cluster.x-k8s.io,resources=awsclusters;azureclusters;hetznerclusters;kubevirtclusters;nutanixclusters;packetclusters;ionoscloudclusters,verbs=patch;get;list;watch
//+kubebuilder:rbac:groups=infrastructure.cluster.x-k8s.io,resources=kubevirtclusters/status;nutanixclusters/status;packetclusters/status,verbs=patch
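
For illustration, the amended markers might look like the following (a sketch; whether virtinkclusters/status also needs the patch verb is an assumption to verify against the controller code):

```go
//+kubebuilder:rbac:groups=infrastructure.cluster.x-k8s.io,resources=awsclusters;azureclusters;hetznerclusters;kubevirtclusters;nutanixclusters;packetclusters;ionoscloudclusters;virtinkclusters,verbs=patch;get;list;watch
//+kubebuilder:rbac:groups=infrastructure.cluster.x-k8s.io,resources=kubevirtclusters/status;nutanixclusters/status;packetclusters/status;virtinkclusters/status,verbs=patch
```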

Once the entry is there, run make release and it will create the required RBAC manifests.

