
Is there a way around the Helm operator treating every reconciliation as an "upgrade"? #6897


Open
vishalbalaji-v opened this issue Jan 15, 2025 · 1 comment
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@vishalbalaji-v

Type of question

General operator-related help

Question

What did you do?

I have an operator deployed that watches for a specific custom resource to deploy an application. I create this CR in specific namespaces that I want the app to be deployed in.

- group: charts.ORG.com
  version: v1alpha1
  kind: CR_kind
  chart: CR_name
  watchDependentResources: false
  overrideValues:
    image.tag: 0.0.x

This app has a Job that needs to run only once, on startup. I have defined it like this.

apiVersion: batch/v1
kind: Job
metadata:
  name: app-job
  annotations:
    helm.sh/hook: post-install, post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded

Also note that the Custom Resource in my case is used only to update the version. The operator version is tied to the app version, so the steps to deployment are:

  • The operator at v0.0.x gets deployed
  • It reads the override values, which the CI pipeline sets to image tag v0.0.x
  • Once the operator is deployed, it deploys the app at v0.0.x in the specific namespaces
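
The pipeline above implies one CR instance per target namespace. A minimal sketch of such an instance (the name and namespace here are hypothetical; the version itself comes from overrideValues in the watches file, so the spec can stay empty):

apiVersion: charts.ORG.com/v1alpha1
kind: CR_kind
metadata:
  name: app
  namespace: target-namespace  # hypothetical; one CR per namespace the app runs in
spec: {}  # image.tag is injected via overrideValues, not set here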

What did you expect to see?

I expected this job to run once each time the operator version is upgraded, which in turn upgrades the version of the service.

What did you see instead? Under which circumstances?

I see that the job runs every ~10 minutes in each namespace, coinciding with each reconcile from the Helm operator. I surmise this is because the Helm operator performs a helm upgrade on every reconcile of the CR, and that upgrade triggers the job's post-upgrade hook.

Environment

Operator type:

Kubernetes cluster type:

$ operator-sdk version
1.37

$ go version (if language is Go)

$ kubectl version
1.29.8

Additional context

I have tried removing the post-upgrade hook from the job:

apiVersion: batch/v1
kind: Job
metadata:
  name: app-job
  annotations:
    helm.sh/hook: post-install
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded

However, this results in the job running literally only once, when the service is first created.
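
One workaround I am considering, as an untested sketch (it assumes the chart exposes the tag as .Values.image.tag): drop the hook annotations entirely and key the Job name to the image tag. Since Jobs are immutable, a reconcile that re-upgrades the release without a version change would leave the existing Job untouched, while a version bump would render a new Job name and therefore a single fresh run:

apiVersion: batch/v1
kind: Job
metadata:
  # hypothetical templating; dots are replaced because they are not valid in resource names
  name: app-job-{{ .Values.image.tag | replace "." "-" }}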

@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci bot added the lifecycle/stale label on Apr 16, 2025