Description
What steps did you take and what happened?
When a Kubernetes upgrade is performed on a managed cluster, new nodes come up with new UIDs. However, the MachinePool controller has an early-return condition that only validates the count of NodeRefs and never checks whether the referenced UIDs are still valid. As a result, MachinePools retain stale NodeRef UIDs after an upgrade, causing UID mismatches that persist until manual intervention.
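To make the failure mode concrete, here is a minimal Go sketch of the pattern described above. The function names, signatures, and field accesses are simplified illustrations, not the actual Cluster API controller source; the second function shows one possible fix, assuming `c` is a client for the workload cluster:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	expv1 "sigs.k8s.io/cluster-api/exp/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// Hypothetical sketch of the problematic early return: once the NodeRef
// count matches the ready replica count, reconciliation is skipped, so
// stale UIDs left behind by an upgrade are never revalidated.
func reconcileNodeRefsSketch(ctx context.Context, c client.Client, mp *expv1.MachinePool) error {
	if len(mp.Status.NodeRefs) == int(mp.Status.ReadyReplicas) {
		return nil // early return: stale NodeRef UIDs survive here
	}
	// ... NodeRefs are only (re)built below this point ...
	return nil
}

// One possible fix: before taking the early return, confirm each stored
// NodeRef UID still matches the live Node in the workload cluster.
func nodeRefsAreValid(ctx context.Context, c client.Client, mp *expv1.MachinePool) (bool, error) {
	for _, ref := range mp.Status.NodeRefs {
		var node corev1.Node
		if err := c.Get(ctx, client.ObjectKey{Name: ref.Name}, &node); err != nil {
			if apierrors.IsNotFound(err) {
				return false, nil // node was deleted during the upgrade
			}
			return false, err
		}
		if node.UID != ref.UID {
			return false, nil // node was replaced; UID is stale
		}
	}
	return true, nil
}
```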
What did you expect to happen?
Expected the MachinePool's nodeRefs to contain the correct Node UIDs even after a Kubernetes upgrade.
Cluster API version
Cluster API - v1.9.4
Kubernetes version
Kubernetes version - v1.30 & v1.31
Anything else you would like to add?
Created an AKS cluster.
Cluster-API-Provider-Azure version - v1.18.0
Label(s) to be applied
/kind bug
One or more /area label. See https://github.com/kubernetes-sigs/cluster-api/labels?q=area for the list of labels.