Feature/helm separate workers #4679
PR Summary
This PR separates the monolithic Celery background worker into specialized worker deployments in the Helm charts, enabling better resource allocation and task management.
- Added 7 distinct Celery worker deployments in `/deployment/helm/charts/onyx/templates/` for specialized tasks: indexing, heavy processing, light tasks, monitoring, primary tasks, user files indexing, and beat scheduling
- Introduced `celery_shared` configuration in `values.yaml` for common image settings across workers
- Each worker deployment uses dedicated queues (e.g., `connector_indexing`, `monitoring`, `periodic_tasks`) for task specialization
- All workers currently run with a privileged security context and root user access, which may need security review
- Resource limits and requests are undefined in `values.yaml` and should be configured for production deployments
9 file(s) reviewed, 5 comment(s)
…ure/helm-separate-workers
# Conflicts:
#	backend/tests/integration/multitenant_tests/tenants/test_tenant_creation.py
```yaml
privileged: true
runAsUser: 0
enableMiniChunk: "true"
resources: {}
```
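Since the review flags the privileged root context, a hardened `securityContext` could look like the sketch below. These settings are standard Kubernetes hardening options, not values from this PR, and whether the workers actually run without privileges would need to be verified:

```yaml
# Illustrative hardened securityContext (not from this PR).
# The chart currently runs workers privileged as root.
securityContext:
  privileged: false
  runAsNonRoot: true
  runAsUser: 1000              # hypothetical non-root UID
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]              # drop all Linux capabilities by default
```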
Is it best to leave all the resources blank? Or do we have an idea of a reasonable resource limit/request we can put in based on our cloud?
We could put something in. Was just defaulting to what was there.
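For reference, a filled-in `resources` block could look like the following. The numbers are purely illustrative assumptions, not values from this PR; appropriate requests and limits depend on the cluster and the workload of each worker:

```yaml
# Hypothetical starting point only; the PR leaves resources: {}.
# Tune per worker based on observed usage.
resources:
  requests:
    cpu: 500m
    memory: 1Gi
  limits:
    cpu: "2"
    memory: 4Gi
```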
* add test
* try breaking out background workers
* fix helm lint complaints
* rename disabled files more
* try different folder structure
* fix beat selector
* vespa setup should break on success
* improved instructions for basic helm chart testing

Co-authored-by: Richard Kuo (Onyx) <rkuo@onyx.app>
Description
Fixes https://linear.app/danswer/issue/DAN-1956/update-helm-charts-to-separate-workers
How Has This Been Tested?
[Describe the tests you ran to verify your changes]
Backporting (check the box to trigger backport action)
Note: you have to check that the action passes; otherwise, resolve the conflicts manually and tag the patches.