feat: ✨ add support for multiple worker_node types and labels #262
Conversation
Commitlint-Check: Thanks for your contribution ❤️ commitlint has detected that all commit messages in this PR follow the conventional commit format 🎉
Terraform-Check (version: 1.9.8): ✅
🖌 Terraform Format: ✅
⚙️ Terraform Init: ✅
🤖 Terraform Validate: ✅

Terraform-Check (version: 1.8.5): ✅
🖌 Terraform Format: ✅
⚙️ Terraform Init: ✅
🤖 Terraform Validate: ✅
Hey @mrclrchtr,
Hi, thank you very much for the PR. I'm currently swamped with work, but I'll try to find time for a review soon.
Hey @mrclrchtr. I understand we all have a varying amount of time we can donate to open source projects. Take your time. Could you just please answer me one question? Otherwise I'll need to roll my own, because time is a limiting factor for me. Thank you so much, and I hope you're doing well :)
Thank you for your understanding. I'm trying to do too many things at once at the moment... That's a very good PR, and I'll definitely merge it. But I'd like to test it a little more and make some changes if necessary. Thanks again.
Force-pushed from 0678fb4 to 691a551
So, as I said, I think the PR is good in terms of code. But there was one problem:

```
# module.talos.hcloud_server.workers["worker-1"] will be destroyed
# (because hcloud_server.workers is not in configuration)
- resource "hcloud_server" "workers" {
...
# module.talos.hcloud_server.workers_legacy["worker-1"] will be created
+ resource "hcloud_server" "workers_legacy" {
...
```

I found taints here: siderolabs/talos#9895

Adding new workers to a legacy cluster worked. That's fine with me now. Do you have any comments?
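For reference, Terraform's `moved` block is the standard way to rename a resource address without destroying the existing instances; a minimal sketch of what that could look like here (not part of this PR):

```hcl
# Tell Terraform that the state entries for hcloud_server.workers
# now live at hcloud_server.workers_legacy, avoiding destroy/recreate.
moved {
  from = hcloud_server.workers
  to   = hcloud_server.workers_legacy
}
```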
- Add taints field to worker_nodes variable for workload isolation
- Implement registerWithTaints in kubelet configuration per Talos best practices
- Update README with taint configuration examples
- Add taints to both legacy and new worker configurations
- Remove unused debug variable in talos_patch_worker.tf

Based on Talos discussion #9895, taints are applied at node registration using kubelet.registerWithTaints to comply with NodeRestriction admission.

🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
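As a usage illustration of the taints support described in this commit, a worker_nodes entry might look like the sketch below; the attribute names are assumed from the commit summary, not verified against the merged README:

```hcl
worker_nodes = [
  {
    type   = "cax21"                      # ARM server type
    labels = { "workload" = "batch" }     # Kubernetes node labels
    # Applied at node registration via kubelet.registerWithTaints.
    taints = [
      {
        key    = "dedicated"
        value  = "batch"
        effect = "NoSchedule"
      }
    ]
  }
]
```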
Thank you so much for taking the time to review!
I've already made the changes. Just take another look at it.
LGTM! 🔥
🎉 This PR is included in version 2.17.0 🎉 The release is available on GitHub release. Your semantic-release bot 📦🚀
Why
I want to cut costs for everyone by supporting different kinds of nodes.
Users of the cluster can use node affinities tied to node labels to schedule their arm64/amd64 workloads onto the appropriate nodes.
I would have included taints as well, but I couldn't find Talos documentation for it.
Maybe you have some ideas.
How to expand from here
Keeping support for the legacy module syntax is important.
We could unify the way control-plane nodes are configured as well.
We could add further settings to worker_nodes, such as additional placement_groups or node-specific patches.
We could provide static names/IDs for preexisting workers/control planes so an existing cluster can be adopted.
Key Changes Made
variables.tf
Added a new worker_nodes variable that accepts a list of worker node configurations (sketched after this list)
Each worker node group can specify:
type: Server type (e.g. cx22 or cax21)
labels: Kubernetes labels to apply (optional)
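A minimal sketch of what this variable definition could look like, based only on the attributes listed above (the merged variables.tf may differ):

```hcl
variable "worker_nodes" {
  description = "List of worker node group configurations."
  type = list(object({
    type   = string                     # Hetzner server type, e.g. "cx22" or "cax21"
    labels = optional(map(string), {})  # Kubernetes node labels, omittable per group
  }))
  default = []
}
```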
Enhanced Server Logic
server.tf
Backward Compatibility: The module now supports both the old and new variable formats simultaneously
Mixed Architecture Support: Automatically detects ARM vs. x86 server types and uses the appropriate images (see the sketch after this list)
Individual Configuration: Each worker node can have a different server type and labels
Smart Indexing: Maintains proper indexing for IP addresses and naming across both legacy and new workers
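A hedged sketch of the architecture detection, assuming Hetzner's ARM server types are identified by their cax prefix and that separate arm/x86 image data sources exist (the names data.hcloud_image.arm and data.hcloud_image.x86 are illustrative):

```hcl
locals {
  # Hetzner ARM (Ampere) server types start with "cax"; everything else is x86.
  worker_image_ids = {
    for i, node in var.worker_nodes :
    i => startswith(node.type, "cax") ? data.hcloud_image.arm.id : data.hcloud_image.x86.id
  }
}
```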
Network Updates
network.tf
Updated IP allocation to handle the total count from both legacy and new worker configurations (see the sketch after this list)
Maintains proper IP address assignment for all worker nodes regardless of configuration method
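A sketch of how the combined count could drive IP assignment; var.worker_count (the legacy variable), the subnet name, and the host offset are assumptions, not taken from the PR:

```hcl
locals {
  # Legacy workers come first; new worker_nodes continue the index sequence,
  # so IP assignments of existing nodes stay stable.
  total_worker_count = var.worker_count + length(var.worker_nodes)

  worker_ips = [
    for i in range(local.total_worker_count) :
    cidrhost(hcloud_network_subnet.nodes.ip_range, i + 201)
  ]
}
```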
Talos Configuration
talos_patch_worker.tf
Per-Node Labels: Each worker node can have custom Kubernetes labels
Flexible Configuration: Supports both simple and complex worker node setups (a patch sketch follows this list)
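A hedged sketch of how such a patch could be generated with yamlencode; machine.nodeLabels and machine.kubelet.registerWithTaints are real Talos machine config fields, while the surrounding wiring is illustrative:

```hcl
locals {
  worker_patches = [
    for node in var.worker_nodes : yamlencode({
      machine = {
        # Node labels are supported directly in the Talos machine config.
        nodeLabels = node.labels
        kubelet = {
          # Taints applied at registration, per Talos discussion siderolabs/talos#9895,
          # to comply with the NodeRestriction admission plugin.
          registerWithTaints = node.taints
        }
      }
    })
  ]
}
```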