GCP: experiment nodepools per SIG #8004
Comments
Related: - kubernetes#8004 Set up a dedicated nodepool with taints using an external terraform module. We want to evaluate running prowjobs on COS with newer machine types. Signed-off-by: Arnaud Meukam <ameukam@gmail.com>
Wait, why would we do nodepools per SIG? What's the purpose? Won't this give us worse bin-packing than sharing one node pool?
cc @kubernetes/sig-k8s-infra-leads
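For context, a minimal sketch of what such a dedicated, tainted nodepool might look like with the Terraform google provider. The pool name, machine type, taint key/value, and cluster reference are all illustrative assumptions, not the actual module used in the linked PR:

```hcl
# Hypothetical sketch: a dedicated GKE nodepool tainted so that only
# prowjobs carrying a matching toleration are scheduled onto it.
# All names and sizes below are illustrative assumptions.
resource "google_container_node_pool" "prowjobs_experiment" {
  name    = "pool-experiment"                  # assumed pool name
  cluster = google_container_cluster.prow.id   # assumed cluster resource

  autoscaling {
    min_node_count = 0   # scale to zero when no experiment jobs run
    max_node_count = 6
  }

  node_config {
    machine_type = "c3-standard-8"    # a newer machine type to evaluate
    image_type   = "COS_CONTAINERD"   # COS, as mentioned above

    taint {
      key    = "dedicated"
      value  = "experiment"
      effect = "NO_SCHEDULE"
    }
  }
}
```

With `effect = "NO_SCHEDULE"`, ordinary prowjobs never land on these nodes; only jobs that explicitly tolerate the taint do, which keeps the experiment isolated from the shared pools.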
I think splitting prowjobs could help simplify maintenance and potentially improve performance. A split per SIG looks like the simplest approach to me for this experiment. Also, we don't have to do it for all the SIGs; we can target only 2-3 SIGs. With more nodepools we can have different instance types, disk types, etc.
I think not, really, for the case of prowjobs owned by SIG Node.
Doing a few SIGs in order to experiment with different node configs and then consolidating on one afterwards makes sense to me. Permanently splitting wouldn't; the issue description could do with more detail, since as written it sounds like an experiment towards permanently splitting.
I still wouldn't frame them as per-SIG node pools, or even split them. We should just try the desired node pool configs on specific jobs, which will span SIGs.
🤔 what are the cons of doing a permanent split?
Worse bin-packing / autoscaling; more complex and confusing prowjob config; poorer portability between clusters; false sense of per-SIG utilization (in reality a lot of jobs are for many SIGs — maybe "sig-testing" owns them or something, but they're producing test results for many SIGs).
Related to: - kubernetes/k8s.io#8004 Use tolerations to schedule e2e-containerd prowjobs to a dedicated nodepool added in kubernetes/k8s.io#8035. Signed-off-by: Arnaud Meukam <ameukam@gmail.com>
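A sketch of how a single prowjob could be routed to the tainted pool via a toleration, per the comment above about trying node pool configs on specific jobs. The job name, container image, taint key/value, and node label are illustrative assumptions, not the actual job definition:

```yaml
# Hypothetical prowjob snippet: the toleration lets the pod schedule onto
# the tainted experiment pool; the nodeSelector pins it there (this assumes
# the pool's nodes also carry a matching "dedicated: experiment" label).
periodics:
- name: ci-containerd-e2e-experiment   # assumed job name
  cluster: k8s-infra-prow-build
  spec:
    tolerations:
    - key: dedicated
      operator: Equal
      value: experiment
      effect: NoSchedule
    nodeSelector:
      dedicated: experiment
    containers:
    - image: gcr.io/k8s-staging-test-infra/kubekins-e2e:latest  # assumed image
      command:
      - runner.sh
```

The toleration alone only *permits* scheduling on the tainted nodes; the nodeSelector is what *forces* it, so both are needed to keep the job off the shared pools.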
Related to: - kubernetes/k8s.io#8004 Signed-off-by: Arnaud Meukam <ameukam@gmail.com>
Experiment to determine whether splitting prowjobs across different GKE nodepools allocated per SIG is viable.
SIG Node gave a +1 for participating in the experiment.
The experiment should be done after 1.33 is out and before the 1.34 code freeze.
TODO:
cc @SergeyKanzhelev
/assign
/sig k8s-infra
/sig node
/priority backlog
/milestone v1.34