v1.10 MaxLength validation breaks machine creation with large ignition config #12168


Open
networkhell opened this issue May 8, 2025 · 3 comments
Labels
help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
kind/bug: Categorizes issue or PR as related to a bug.
priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.
triage/accepted: Indicates an issue or PR is ready to be actively worked on.

Comments

@networkhell

What steps did you take and what happened?

After upgrading to Cluster API v1.10.1, specifying an Ignition config larger than 10240 bytes in spec.ignition.containerLinuxConfig.additionalConfig prevents machines from being created or updated. This affects manifests referencing any of the following CRDs:
kubeadmconfigs.bootstrap.cluster.x-k8s.io
kubeadmconfigtemplates.bootstrap.cluster.x-k8s.io
kubeadmcontrolplanes.controlplane.cluster.x-k8s.io
kubeadmcontrolplanetemplates.controlplane.cluster.x-k8s.io
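
For reference, a minimal sketch of a KubeadmConfigTemplate that trips the limit; the names and the Ignition payload below are placeholders, not taken from the original report:

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
  name: example-workers        # hypothetical name
  namespace: default
spec:
  template:
    spec:
      format: ignition
      ignition:
        containerLinuxConfig:
          # Any additionalConfig whose total size exceeds 10240 bytes is
          # rejected by the apiserver under the v1.10 validation.
          additionalConfig: |
            storage:
              files:
                - path: /etc/example/placeholder.conf
                  filesystem: root
                  mode: 0644
                  contents:
                    inline: |
                      # ...enough content here to push the whole
                      # additionalConfig string past 10240 bytes...
```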

What did you expect to happen?

I expect my machines to be created / updated, so the MaxLength validation should be set to a sane limit. I guess the right value depends on a) the cloud provider and b) the Ignition spec itself.
I will try to provide a) for a Kolla-installed OpenStack with default values.

As already discussed on the Cluster API Slack, it would be great if we could provide large Ignition configs via a Secret or ConfigMap, but that does not necessarily have to be in the scope of this issue.

Cluster API version

v1.10.1 for core, kubeadm-bootstrap and kubeadm-control-plane
v0.12.3 for CAPO

Kubernetes version

v1.32.4

Anything else you would like to add?

Logs from capi-controller-manager

E0508 09:58:58.505422       1 controller.go:347] "Reconciler error" err="failed to sync replicas: failed to clone bootstrap configuration from KubeadmConfigTemplate test01-dev-nsc01/test01-dev-nsc01-test01-dev-nsc01-node-muc5-b-xmk6f while creating a machine: KubeadmConfig.bootstrap.cluster.x-k8s.io \"test01-dev-nsc01-node-muc5-b-qljnw-cr6xw-ddwfw\" is invalid: spec.ignition.containerLinuxConfig.additionalConfig: Too long: may not be more than 10240 bytes" controller="machineset" controllerGroup="cluster.x-k8s.io" controllerKind="MachineSet" MachineSet="test01-dev-nsc01/test01-dev-nsc01-node-muc5-b-qljnw-cr6xw" namespace="test01-dev-nsc01" name="test01-dev-nsc01-node-muc5-b-qljnw-cr6xw" reconcileID="5ed0acd9-6546-4dd4-8334-dd21c1515419"

Label(s) to be applied

/kind bug
One or more /area labels. See https://github.com/kubernetes-sigs/cluster-api/labels?q=area for the list of labels.

@k8s-ci-robot k8s-ci-robot added kind/bug Categorizes issue or PR as related to a bug. needs-priority Indicates an issue lacks a `priority/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels May 8, 2025
@chrischdi
Member

/priority important-soon
/triage accepted

Next step: Identify what new max value is suitable.

xref: https://kubernetes.slack.com/archives/C8TSNPY4T/p1746706504153769
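
For anyone picking this up: the limit surfaces as a maxLength constraint in the generated CRD schemas. Below is an abbreviated sketch of the relevant excerpt, assuming the constraint is produced by a kubebuilder MaxLength validation marker on the AdditionalConfig field of the bootstrap API types:

```yaml
# Abbreviated excerpt of the generated KubeadmConfig CRD schema in v1.10
# (field path shortened; the exact file layout is an assumption):
ignition:
  properties:
    containerLinuxConfig:
      properties:
        additionalConfig:
          maxLength: 10240   # the value this issue asks to revisit
          type: string
```

Whatever new value is chosen would need to be applied consistently across all four CRDs listed in the issue description and regenerated from the API types.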

@k8s-ci-robot k8s-ci-robot added priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-priority Indicates an issue lacks a `priority/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels May 14, 2025
@chrischdi
Member

/help

@k8s-ci-robot
Contributor

@chrischdi:
This request has been marked as needing help from a contributor.

Guidelines

Please ensure that the issue body includes answers to the following questions:

  • Why are we solving this issue?
  • To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
  • Does this issue have zero to low barrier of entry?
  • How can the assignee reach out to you for help?

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.

In response to this:

/help

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label May 14, 2025