
feat(syncctl): make sync pod resource limits configurable via env vars #1241

Open

mpanius wants to merge 1 commit into jitsucom:newjitsu from mpanius:feat/configurable-sync-pod-resources

Conversation


@mpanius mpanius commented Feb 14, 2026

Summary

Sync task pods spawned by syncctl have hardcoded resource limits (source: 1 CPU / 8Gi RAM, sidecar: 500m CPU / 4Gi RAM). This makes it impossible to tune resource allocations for smaller clusters without forking.

This PR adds environment variables to configure CPU/memory requests and limits for both source and sidecar containers. Default values match the current hardcoded values — zero breaking changes.

New environment variables

| Variable | Default | Description |
|---|---|---|
| SYNCCTL_SOURCE_CPU_REQUEST_MILLICORES | 100 | CPU request for source container (millicores) |
| SYNCCTL_SOURCE_CPU_LIMIT_MILLICORES | 1000 | CPU limit for source container (millicores) |
| SYNCCTL_SOURCE_MEMORY_REQUEST_MI | 256 | Memory request for source container (MiB) |
| SYNCCTL_SOURCE_MEMORY_LIMIT_MI | 8192 | Memory limit for source container (MiB) |
| SYNCCTL_SOURCE_JAVA_OPTS | -Xmx7000m | JAVA_OPTS for source container (JVM heap) |
| SYNCCTL_SIDECAR_CPU_REQUEST_MILLICORES | 0 | CPU request for sidecar container (millicores) |
| SYNCCTL_SIDECAR_CPU_LIMIT_MILLICORES | 500 | CPU limit for sidecar container (millicores) |
| SYNCCTL_SIDECAR_MEMORY_REQUEST_MI | 0 | Memory request for sidecar container (MiB) |
| SYNCCTL_SIDECAR_MEMORY_LIMIT_MI | 4096 | Memory limit for sidecar container (MiB) |
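To make the mapping concrete, here is a minimal sketch of what the new config fields could look like, assuming the sync-controller's existing mapstructure-based configuration loading. Field names, tag conventions, and how the SYNCCTL_ prefix is applied are illustrative, not the actual diff:

```go
package syncctl

// Sketch only: nine new fields on the sync-controller config struct,
// mirroring the environment variables and defaults listed above.
// The SYNCCTL_ prefix is assumed to be added by the config loader.
type Config struct {
	// ...existing syncctl fields...

	SourceCPURequestMillicores  int    `mapstructure:"SOURCE_CPU_REQUEST_MILLICORES" default:"100"`
	SourceCPULimitMillicores    int    `mapstructure:"SOURCE_CPU_LIMIT_MILLICORES" default:"1000"`
	SourceMemoryRequestMi       int    `mapstructure:"SOURCE_MEMORY_REQUEST_MI" default:"256"`
	SourceMemoryLimitMi         int    `mapstructure:"SOURCE_MEMORY_LIMIT_MI" default:"8192"`
	SourceJavaOpts              string `mapstructure:"SOURCE_JAVA_OPTS" default:"-Xmx7000m"`
	SidecarCPURequestMillicores int    `mapstructure:"SIDECAR_CPU_REQUEST_MILLICORES" default:"0"`
	SidecarCPULimitMillicores   int    `mapstructure:"SIDECAR_CPU_LIMIT_MILLICORES" default:"500"`
	SidecarMemoryRequestMi      int    `mapstructure:"SIDECAR_MEMORY_REQUEST_MI" default:"0"`
	SidecarMemoryLimitMi        int    `mapstructure:"SIDECAR_MEMORY_LIMIT_MI" default:"4096"`
}
```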

Important: SYNCCTL_SOURCE_MEMORY_LIMIT_MI and SYNCCTL_SOURCE_JAVA_OPTS must be adjusted together

The source container runs Airbyte connectors (mostly JVM-based). The default -Xmx7000m caps the JVM heap at roughly 7Gi, leaving about 1Gi of the 8Gi memory limit for non-heap JVM memory (metaspace, thread stacks, native memory).

When reducing SYNCCTL_SOURCE_MEMORY_LIMIT_MI, SYNCCTL_SOURCE_JAVA_OPTS must be lowered accordingly to prevent OOM kills:

| Memory limit | Recommended JAVA_OPTS |
|---|---|
| 8192 MiB (default) | -Xmx7000m |
| 4096 MiB | -Xmx3500m |
| 2048 MiB | -Xmx1500m |
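The recommended values follow a simple rule of thumb: subtract roughly 0.5–1 GiB of non-heap headroom from the memory limit and give the rest to the heap. A small, self-contained Go sketch of that arithmetic (the helper is hypothetical and not part of this PR; the table above rounds down slightly further for safety):

```go
package main

import "fmt"

// javaOptsFor derives an illustrative -Xmx value from the container memory
// limit, reserving headroom for metaspace, thread stacks, and native memory.
func javaOptsFor(memoryLimitMi int) string {
	headroomMi := memoryLimitMi / 8 // ~12.5% of the limit for non-heap memory
	if headroomMi < 512 {
		headroomMi = 512 // never reserve less than 512 MiB
	}
	return fmt.Sprintf("-Xmx%dm", memoryLimitMi-headroomMi)
}

func main() {
	for _, limit := range []int{8192, 4096, 2048} {
		fmt.Printf("limit %d MiB -> %s\n", limit, javaOptsFor(limit))
	}
	// Output:
	// limit 8192 MiB -> -Xmx7168m
	// limit 4096 MiB -> -Xmx3584m
	// limit 2048 MiB -> -Xmx1536m
}
```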

Motivation

On small self-hosted clusters, the hardcoded 8Gi memory limit on the source container can cause node-level memory pressure when connectors consume memory up to the limit, leading to OOM kills and eviction of other workloads. There is currently no way to restrict this without modifying the source code.

Changes

  • bulker/sync-controller/config.go — added 9 config fields with mapstructure tags and defaults
  • bulker/sync-controller/job_runner.go — replaced hardcoded resource values with config references
  • bulker/sync-controller/README.md — documented new environment variables in Configuration table
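For illustration, the job_runner.go change amounts to building the containers' ResourceRequirements from config values instead of literals. A hedged sketch, reusing the illustrative Config fields from the block above (the actual code may construct the quantities differently):

```go
package syncctl

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// sourceResources builds the source container's resources from the config
// rather than hardcoded values; the sidecar container is handled analogously.
func sourceResources(c *Config) corev1.ResourceRequirements {
	return corev1.ResourceRequirements{
		Requests: corev1.ResourceList{
			corev1.ResourceCPU:    *resource.NewMilliQuantity(int64(c.SourceCPURequestMillicores), resource.DecimalSI),
			corev1.ResourceMemory: resource.MustParse(fmt.Sprintf("%dMi", c.SourceMemoryRequestMi)),
		},
		Limits: corev1.ResourceList{
			corev1.ResourceCPU:    *resource.NewMilliQuantity(int64(c.SourceCPULimitMillicores), resource.DecimalSI),
			corev1.ResourceMemory: resource.MustParse(fmt.Sprintf("%dMi", c.SourceMemoryLimitMi)),
		},
	}
}
```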

Test plan

  • Code compiles (go build ./sync-controller/...)
  • Deploy with default values — verify sync pods have the same resources as before
  • Deploy with custom values — verify sync pods respect the new limits

Commit message

Currently, resource limits and requests for sync task pods (source and
sidecar containers) are hardcoded. This makes it impossible to adjust
resource allocations for smaller clusters or specific workloads without
forking the codebase.

This change introduces environment variables to configure CPU and memory
requests/limits for both source and sidecar containers spawned by syncctl.
Default values match the current hardcoded values, so this is a
non-breaking change.

New environment variables:
- SYNCCTL_SOURCE_CPU_REQUEST_MILLICORES (default: 100)
- SYNCCTL_SOURCE_CPU_LIMIT_MILLICORES (default: 1000)
- SYNCCTL_SOURCE_MEMORY_REQUEST_MI (default: 256)
- SYNCCTL_SOURCE_MEMORY_LIMIT_MI (default: 8192)
- SYNCCTL_SOURCE_JAVA_OPTS (default: -Xmx7000m)
- SYNCCTL_SIDECAR_CPU_REQUEST_MILLICORES (default: 0)
- SYNCCTL_SIDECAR_CPU_LIMIT_MILLICORES (default: 500)
- SYNCCTL_SIDECAR_MEMORY_REQUEST_MI (default: 0)
- SYNCCTL_SIDECAR_MEMORY_LIMIT_MI (default: 4096)
