Seeing slow updates to client-side field indexers #3093
It is a bit hidden, but the relevant code is not maintained in controller-runtime but imported; specifically it's the Refs:
I suggest creating an issue in k/k and linking it here.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
While scale testing kubernetes-sigs/karpenter, we created thousands of pending pods at one time. We have a cache field indexer on pending pods that we use to get all of the pending pods on the cluster before we start executing our scheduling loop. We noticed (under high load) that the pending pods were deployed to the cluster, but Karpenter was not attempting to schedule them and did not retrieve them when pulling them from the field indexer.

In particular, it seemed like the field indexer may be single-threaded or may be bottlenecking under high load. Stranger still, Karpenter runs other threads in its containers that publish metrics on pods. Those threads (which weren't using the field indexers) observed the pending pods on the cluster much more quickly than the thread that was using the field indexers.
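For context, here is a minimal sketch of how a field index like this is typically registered and queried with controller-runtime. The `spec.nodeName` field, the empty-value convention for unscheduled pods, and the function names are illustrative assumptions, not necessarily what Karpenter actually registers:

```go
package indexexample

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// setupPendingPodIndex registers a cache field index so that unscheduled
// (pending) pods can be listed straight from the informer cache.
// The "spec.nodeName" field name is an illustrative assumption.
func setupPendingPodIndex(ctx context.Context, mgr ctrl.Manager) error {
	return mgr.GetFieldIndexer().IndexField(ctx, &corev1.Pod{}, "spec.nodeName",
		func(o client.Object) []string {
			pod := o.(*corev1.Pod)
			// Empty string for pods that have not been assigned a node yet.
			return []string{pod.Spec.NodeName}
		})
}

// listPendingPods reads the indexed pods back from the cache-backed client.
func listPendingPods(ctx context.Context, c client.Client) (*corev1.PodList, error) {
	pods := &corev1.PodList{}
	err := c.List(ctx, pods, client.MatchingFields{"spec.nodeName": ""})
	return pods, err
}
```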
Do we know of anything that could cause the field indexers to be slow under high load (are they single-threaded)? Are there ways we could improve this performance so that we can rely on these indexers, or should we look to remove them if we care deeply about the performance of retrieving these pods?
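If we did remove the index, the fallback would presumably be listing all pods from the cache and filtering in-process, trading the index lookup for a full scan of cached pods. A rough sketch under the same assumptions as above (function name hypothetical):

```go
package indexexample

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// listPendingPodsWithoutIndex lists every pod from the cache and filters
// in-process instead of relying on a field index.
func listPendingPodsWithoutIndex(ctx context.Context, c client.Client) ([]corev1.Pod, error) {
	all := &corev1.PodList{}
	if err := c.List(ctx, all); err != nil {
		return nil, err
	}
	var pending []corev1.Pod
	for _, p := range all.Items {
		if p.Spec.NodeName == "" && p.Status.Phase == corev1.PodPending {
			pending = append(pending, p)
		}
	}
	return pending, nil
}
```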