2 changes: 1 addition & 1 deletion deployment/helm/charts/onyx/Chart.yaml
@@ -5,7 +5,7 @@ home: https://www.onyx.app/
 sources:
   - "https://github.com/onyx-dot-app/onyx"
 type: application
-version: 0.2.9
+version: 0.2.10
 appVersion: latest
 annotations:
   category: Productivity
32 changes: 32 additions & 0 deletions deployment/helm/charts/onyx/templates/keda/api-server-scaledobject.yaml
@@ -0,0 +1,32 @@
{{- if and .Values.keda.enabled .Values.keda.apiServer .Values.keda.apiServer.enabled }}
Contributor:
logic: The triple conditional check will fail if any of the nested values don't exist. Consider adding the corresponding KEDA configuration section to values.yaml to prevent template rendering failures.
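A minimal sketch of the corresponding values.yaml stanza that keeps all three lookups defined (mirroring the keys the template reads):

```yaml
keda:
  enabled: false
  apiServer:
    enabled: false
```

Alternatively, a nil-safe guard with sprig's `dig` avoids the render failure even when the section is absent from values; a sketch, assuming `.Values.keda` itself is always defined:

```yaml
{{- if and .Values.keda.enabled (dig "enabled" false (.Values.keda.apiServer | default dict)) }}
```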

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: {{ include "onyx-stack.fullname" . }}-api-server-scaledobject
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "onyx-stack.labels" . | nindent 4 }}
    app: api-server
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "onyx-stack.fullname" . }}
  pollingInterval: {{ .Values.keda.apiServer.pollingInterval | default 30 }}
Contributor:
Using default here overrides an explicit value of 0 for pollingInterval by falling back to 30; prefer preserving user-provided zero values.
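A sketch of a presence check with sprig's `hasKey`, which preserves an explicit `0` instead of coercing it to the default; the same pattern would apply to `cooldownPeriod` and `minReplicaCount` below:

```yaml
  pollingInterval: {{ if hasKey .Values.keda.apiServer "pollingInterval" }}{{ .Values.keda.apiServer.pollingInterval }}{{ else }}30{{ end }}
```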


  cooldownPeriod: {{ .Values.keda.apiServer.cooldownPeriod | default 300 }}
Contributor:
Using default here overrides an explicit value of 0 for cooldownPeriod, changing semantics (e.g., immediate scale-down) by forcing 300 instead.


  minReplicaCount: {{ .Values.keda.apiServer.minReplicas | default 1 }}
Contributor:
Using default here prevents minReplicaCount from being set to 0 (scale-to-zero), because 0 is considered empty and falls back to 1. Prefer checking key presence to preserve 0.


  maxReplicaCount: {{ .Values.keda.apiServer.maxReplicas | default 10 }}
  # Use HPA mode to generate an HPA that works alongside existing HPA infrastructure
  hpaName: {{ include "onyx-stack.fullname" . }}-api-server-keda-hpa
  triggers:
    - type: cpu
      metadata:
        type: Utilization
        value: {{ .Values.keda.apiServer.cpuThreshold | default "70" | quote }}
    {{- if .Values.keda.apiServer.memoryThreshold }}
    - type: memory
      metadata:
        type: Utilization
        value: {{ .Values.keda.apiServer.memoryThreshold | quote }}
    {{- end }}
{{- end }}
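A note on `hpaName`: should the installed KEDA release not accept a top-level `hpaName` field, KEDA's documented place to name the generated HPA is `spec.advanced.horizontalPodAutoscalerConfig`; a minimal sketch:

```yaml
  advanced:
    horizontalPodAutoscalerConfig:
      name: {{ include "onyx-stack.fullname" . }}-api-server-keda-hpa
```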
49 changes: 49 additions & 0 deletions deployment/helm/charts/onyx/templates/keda/celery-worker-common-scaledobject.yaml
@@ -0,0 +1,49 @@
{{- if and .Values.keda.enabled .Values.keda.celeryWorkers.enabled }}
{{- range $workerType, $workerConfig := .Values.keda.celeryWorkers }}
{{- if and (ne $workerType "enabled") $workerConfig.enabled (ne $workerType "docprocessing") (ne $workerType "docfetching") }}
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: {{ include "onyx-stack.fullname" $ }}-celery-worker-{{ $workerType }}-scaledobject
  namespace: {{ $.Release.Namespace }}
  labels:
    {{- include "onyx-stack.labels" $ | nindent 4 }}
    app: celery-worker-{{ $workerType }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "onyx-stack.fullname" $ }}-celery-worker-{{ $workerType }}
  pollingInterval: {{ $workerConfig.pollingInterval | default 30 }}
  cooldownPeriod: {{ $workerConfig.cooldownPeriod | default 300 }}
  minReplicaCount: {{ $workerConfig.minReplicas | default 1 }}
  maxReplicaCount: {{ $workerConfig.maxReplicas | default 10 }}
  triggers:
    # Default Prometheus-based trigger for Redis queue depth
    # Scaling Logic:
    # - When queue depth > 5: Scale up by factor of 2 (moderate scaling)
    # - When queue depth <= 5: Scale down by factor of 0.5 (conservative scaling)
    # - Threshold of 1 ensures scaling triggers when metric value > 1
    - type: prometheus
      metadata:
        serverAddress: "http://prometheus-redis.monitoring.svc.cluster.local:9090"
        metricName: "redis_key_size_sum"
        metricType: "Value"
        threshold: "1"
        query: |
          # Simplified scaling logic for generic celery workers
Contributor:
This # comment line is embedded in the PromQL query string and will break query parsing; move comments above query: or remove them.

Suggested change: remove the "# Simplified scaling logic for generic celery workers" line from the query.

          # Returns 2 when queue depth > 5, 0.5 when <= 5
          # This creates a clear scaling decision boundary
          (
            (sum(redis_key_size{key=~"connector_{{ $workerType }}.*"}) > 5)
Contributor:
PromQL comparison lacks bool, so it won’t produce 1/0; add bool to return 1 when the condition is true and 0 otherwise to match the intended constant-factor scaling.
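A sketch of the corrected expression with the `bool` modifier on both comparisons, so each branch yields 1 or 0 before the multiplication (the same fix applies to the docfetching and docprocessing queries below):

```yaml
        query: |
          (
            (sum(redis_key_size{key=~"connector_{{ $workerType }}.*"}) > bool 5)
              * 2
          )
          +
          (
            (sum(redis_key_size{key=~"connector_{{ $workerType }}.*"}) <= bool 5)
              * 0.5
          )
```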


              * 2
          )
          +
          (
            (sum(redis_key_size{key=~"connector_{{ $workerType }}.*"}) <= 5)
Contributor:
Add bool to the <= comparison so the expression evaluates to 1/0 before multiplying by 0.5.


              * 0.5
          )
{{- end }}
{{- end }}
{{- end }}
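To make the intended arithmetic concrete, a worked example assuming the standard HPA formula for Value-type metrics, desiredReplicas = ceil(currentReplicas * metricValue / threshold), and assuming the `bool` fix flagged above so the query returns a constant factor:

```yaml
# Assumed: 3 current replicas, threshold "1"
# queue depth = 12 -> query returns 2   -> desired = ceil(3 * 2 / 1) = 6 (capped at maxReplicaCount)
# queue depth = 3  -> query returns 0.5 -> desired = ceil(3 * 0.5 / 1) = 2 (floored at minReplicaCount)
```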
44 changes: 44 additions & 0 deletions deployment/helm/charts/onyx/templates/keda/celery-worker-docfetching-scaledobject.yaml
@@ -0,0 +1,44 @@
{{- if and .Values.keda.enabled .Values.keda.celeryWorkers.docfetching .Values.keda.celeryWorkers.docfetching.enabled }}
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: {{ include "onyx-stack.fullname" . }}-celery-worker-docfetching-scaledobject
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "onyx-stack.labels" . | nindent 4 }}
    app: celery-worker-docfetching
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "onyx-stack.fullname" . }}-celery-worker-docfetching
  pollingInterval: {{ .Values.keda.celeryWorkers.docfetching.pollingInterval | default 30 }}
  cooldownPeriod: {{ .Values.keda.celeryWorkers.docfetching.cooldownPeriod | default 300 }}
  minReplicaCount: {{ .Values.keda.celeryWorkers.docfetching.minReplicas | default 1 }}
  maxReplicaCount: {{ .Values.keda.celeryWorkers.docfetching.maxReplicas | default 10 }}
  triggers:
    # Default Prometheus-based trigger for Redis queue depth
    # Scaling Logic:
    # - When queue depth > 5: Scale up by factor of 2 (aggressive scaling)
    # - When queue depth <= 5: Scale down by factor of 0.5 (conservative scaling)
    # - Threshold of 1 ensures scaling triggers when metric value > 1
    - type: prometheus
      metadata:
        serverAddress: "http://prometheus-redis.monitoring.svc.cluster.local:9090"
        metricName: "redis_key_size_sum"
        metricType: "Value"
        threshold: "1"
        query: |
          # Simplified scaling logic for docfetching workers
          # Returns 2 when queue depth > 5, 0.5 when <= 5
          # This creates a clear scaling decision boundary
          (
            (sum(redis_key_size{key=~"connector_docfetching.*"}) > 5)
              * 2
          )
          +
          (
            (sum(redis_key_size{key=~"connector_docfetching.*"}) <= 5)
              * 0.5
          )
{{- end }}
44 changes: 44 additions & 0 deletions deployment/helm/charts/onyx/templates/keda/celery-worker-docprocessing-scaledobject.yaml
@@ -0,0 +1,44 @@
{{- if and .Values.keda.enabled .Values.keda.celeryWorkers.docprocessing .Values.keda.celeryWorkers.docprocessing.enabled }}
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: {{ include "onyx-stack.fullname" . }}-celery-worker-docprocessing-scaledobject
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "onyx-stack.labels" . | nindent 4 }}
    app: celery-worker-docprocessing
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "onyx-stack.fullname" . }}-celery-worker-docprocessing
  pollingInterval: {{ .Values.keda.celeryWorkers.docprocessing.pollingInterval | default 30 }}
  cooldownPeriod: {{ .Values.keda.celeryWorkers.docprocessing.cooldownPeriod | default 300 }}
  minReplicaCount: {{ .Values.keda.celeryWorkers.docprocessing.minReplicas | default 1 }}
  maxReplicaCount: {{ .Values.keda.celeryWorkers.docprocessing.maxReplicas | default 50 }}
  triggers:
    # Default Prometheus-based trigger for Redis queue depth
    # Scaling Logic:
    # - When queue depth > 20: Scale up by factor of 4 (very aggressive scaling)
    # - When queue depth <= 20: Scale down by factor of 0.25 (very conservative scaling)
    # - Threshold of 1 ensures scaling triggers when metric value > 1
    - type: prometheus
      metadata:
        serverAddress: "http://prometheus-redis.monitoring.svc.cluster.local:9090"
        metricName: "redis_key_size_sum"
        metricType: "Value"
        threshold: "1"
        query: |
          # Simplified scaling logic for docprocessing workers
          # Returns 4 when queue depth > 20, 0.25 when <= 20
          # This creates a clear scaling decision boundary for high-volume processing
          (
            (sum(redis_key_size{key=~"connector_docprocessing.*"}) > 20)
Contributor:
PromQL comparison lacks 'bool', returning the metric value instead of 1/0; this makes the query scale by 4×queue depth rather than a constant factor.


              * 4
          )
          +
          (
            (sum(redis_key_size{key=~"connector_docprocessing.*"}) <= 20)
              * 0.25
          )
{{- end }}
@@ -0,0 +1,29 @@
{{- if and .Values.keda.enabled .Values.keda.modelServers.enabled }}
{{- range $serverType, $serverConfig := .Values.keda.modelServers }}
{{- if and (ne $serverType "enabled") $serverConfig.enabled }}
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: {{ include "onyx-stack.fullname" $ }}-{{ $serverType }}-model-server-scaledobject
  namespace: {{ $.Release.Namespace }}
  labels:
    {{- include "onyx-stack.labels" $ | nindent 4 }}
    app: {{ $serverType }}-model-server
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "onyx-stack.fullname" $ }}-{{ $serverType }}-model
  pollingInterval: {{ $serverConfig.pollingInterval | default 30 }}
  cooldownPeriod: {{ $serverConfig.cooldownPeriod | default 300 }}
  minReplicaCount: {{ $serverConfig.minReplicas | default 1 }}
  maxReplicaCount: {{ $serverConfig.maxReplicas | default 5 }}
  triggers:
    - type: cpu
      metadata:
        type: Utilization
        value: {{ $serverConfig.cpuThreshold | default "70" | quote }}
{{- end }}
{{- end }}
{{- end }}
@@ -0,0 +1,34 @@
{{- if and .Values.keda.enabled .Values.keda.slackbot .Values.keda.slackbot.enabled }}
# Note: This KEDA ScaledObject works alongside existing HPA using KEDA's HPA mode
# KEDA generates an HPA that can coexist with traditional HPA infrastructure
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: {{ include "onyx-stack.fullname" . }}-slackbot-scaledobject
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "onyx-stack.labels" . | nindent 4 }}
    app: slackbot
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "onyx-stack.fullname" . }}-slackbot
Contributor:
logic: Deployment name inconsistency: This targets {{ include "onyx-stack.fullname" . }}-slackbot but existing HPA templates (api-hpa.yaml) target {{ include "onyx-stack.fullname" . }}. Verify the actual slackbot deployment name to ensure correct targeting.

  pollingInterval: {{ .Values.keda.slackbot.pollingInterval | default 30 }}
  cooldownPeriod: {{ .Values.keda.slackbot.cooldownPeriod | default 300 }}
  minReplicaCount: {{ .Values.keda.slackbot.minReplicas | default 1 }}
  maxReplicaCount: {{ .Values.keda.slackbot.maxReplicas | default 3 }}
  # Use HPA mode to generate an HPA that works alongside existing HPA infrastructure
  hpaName: {{ include "onyx-stack.fullname" . }}-slackbot-keda-hpa
  triggers:
    - type: cpu
      metadata:
        type: Utilization
        value: {{ .Values.keda.slackbot.cpuThreshold | default "70" | quote }}
    {{- if .Values.keda.slackbot.memoryThreshold }}
    - type: memory
      metadata:
        type: Utilization
        value: {{ .Values.keda.slackbot.memoryThreshold | quote }}
    {{- end }}
{{- end }}
32 changes: 32 additions & 0 deletions deployment/helm/charts/onyx/templates/keda/web-server-scaledobject.yaml
@@ -0,0 +1,32 @@
{{- if and .Values.keda.enabled .Values.keda.webServer .Values.keda.webServer.enabled }}
Contributor:
logic: Risk of conflict: Both KEDA ScaledObject and existing HPA (webserver.autoscaling.enabled) can target the same deployment simultaneously, causing scaling conflicts. Consider adding mutual exclusion logic.

Contributor:
Guard the ScaledObject so it does not render when the webserver HPA is enabled to prevent conflicting autoscalers on the same Deployment.
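A sketch of such a guard, assuming the existing HPA toggle lives at `webserver.autoscaling.enabled` as the comments above suggest, and that the key is defined in the chart's default values:

```yaml
{{- if and .Values.keda.enabled .Values.keda.webServer .Values.keda.webServer.enabled (not .Values.webserver.autoscaling.enabled) }}
```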


apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: {{ include "onyx-stack.fullname" . }}-web-server-scaledobject
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "onyx-stack.labels" . | nindent 4 }}
    app: web-server
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "onyx-stack.fullname" . }}
  pollingInterval: {{ .Values.keda.webServer.pollingInterval | default 30 }}
  cooldownPeriod: {{ .Values.keda.webServer.cooldownPeriod | default 300 }}
  minReplicaCount: {{ .Values.keda.webServer.minReplicas | default 1 }}
  maxReplicaCount: {{ .Values.keda.webServer.maxReplicas | default 5 }}
  # Use HPA mode to generate an HPA that works alongside existing HPA infrastructure
  hpaName: {{ include "onyx-stack.fullname" . }}-web-server-keda-hpa
  triggers:
    - type: cpu
      metadata:
        type: Utilization
        value: {{ .Values.keda.webServer.cpuThreshold | default "70" | quote }}
    {{- if .Values.keda.webServer.memoryThreshold }}
    - type: memory
      metadata:
        type: Utilization
        value: {{ .Values.keda.webServer.memoryThreshold | quote }}
    {{- end }}
{{- end }}
84 changes: 84 additions & 0 deletions deployment/helm/charts/onyx/values.yaml
@@ -8,6 +8,90 @@
# Global pull policy for all Onyx component images
pullPolicy: "IfNotPresent"

keda:
  # Master switch for all KEDA functionality - disabled by default
  # KEDA works alongside existing HPA infrastructure using HPA mode
  # This provides advanced scaling triggers while maintaining HPA stability and coexistence
  enabled: false

Check failure on line 16 in deployment/helm/charts/onyx/values.yaml (GitHub Actions / helm-chart-check): 16:1 [trailing-spaces] trailing spaces
  # API Server autoscaling configuration
  # KEDA generates an HPA named: {release}-api-server-keda-hpa
  # Can coexist with api.autoscaling.* configuration
  apiServer:
    enabled: false
    pollingInterval: 30
    cooldownPeriod: 300
    minReplicas: 1
    maxReplicas: 10
    cpuThreshold: "70"
    memoryThreshold: "80" # Optional: enable memory-based scaling

Check failure on line 28 in deployment/helm/charts/onyx/values.yaml (GitHub Actions / helm-chart-check): 28:1 [trailing-spaces] trailing spaces
  # Web Server autoscaling configuration
  # KEDA generates an HPA named: {release}-web-server-keda-hpa
  # Can coexist with webserver.autoscaling.* configuration
  webServer:
    enabled: false
    pollingInterval: 30
    cooldownPeriod: 300
    minReplicas: 1
    maxReplicas: 5
    cpuThreshold: "70"
    memoryThreshold: "80" # Optional: enable memory-based scaling

Check failure on line 40 in deployment/helm/charts/onyx/values.yaml (GitHub Actions / helm-chart-check): 40:1 [trailing-spaces] trailing spaces
  # Slackbot autoscaling configuration
  # KEDA generates an HPA named: {release}-slackbot-keda-hpa
  # Can coexist with slackbot.autoscaling.* configuration
  slackbot:
    enabled: false
    pollingInterval: 30
    cooldownPeriod: 300
    minReplicas: 1
    maxReplicas: 3
    cpuThreshold: "70"
    memoryThreshold: "80" # Optional: enable memory-based scaling

Check failure on line 52 in deployment/helm/charts/onyx/values.yaml (GitHub Actions / helm-chart-check): 52:1 [trailing-spaces] trailing spaces
  # Model Servers autoscaling configuration
  # KEDA generates HPAs for each enabled model server
  # Can coexist with existing model server autoscaling
  modelServers:
    enabled: false
    # Individual model server configurations can be added here
    # Example:
    # inference:
    #   enabled: false
    #   pollingInterval: 30
    #   cooldownPeriod: 300
    #   minReplicas: 1
    #   maxReplicas: 5
    #   cpuThreshold: "70"
    #   memoryThreshold: "80"

Check failure on line 68 in deployment/helm/charts/onyx/values.yaml (GitHub Actions / helm-chart-check): 68:1 [trailing-spaces] trailing spaces
  # Celery Workers autoscaling configuration
  # KEDA provides Redis queue-based scaling for high-performance worker management
  # Can coexist with existing celery worker autoscaling
  celeryWorkers:
    enabled: false
    # Individual worker configurations can be added here
    # Example:
    # docprocessing:
    #   enabled: false
    #   pollingInterval: 30
    #   cooldownPeriod: 300
    #   minReplicas: 1
    #   maxReplicas: 50
    # light:
    #   enabled: false
    #   pollingInterval: 30
    #   cooldownPeriod: 300
    #   minReplicas: 1
    #   maxReplicas: 10
    # primary:
    #   enabled: false
    #   pollingInterval: 30
    #   cooldownPeriod: 300
    #   minReplicas: 1
    #   maxReplicas: 10

postgresql:
  primary:
    persistence:
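For reference, a minimal user override enabling one of the new scalers, assuming the values.yaml defaults above (hypothetical file name values-keda.yaml):

```yaml
# values-keda.yaml (hypothetical user override)
keda:
  enabled: true      # master switch for all KEDA templates
  apiServer:
    enabled: true    # renders the api-server ScaledObject
    minReplicas: 2
    maxReplicas: 10
```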