A Kubernetes controller to manage the lifecycle of any Kubernetes resource, namespaced or not, through simple, time-based annotations.
This controller allows you to automate common operational tasks such as cleaning up temporary resources or scheduling periodic application restarts without needing to create complex CronJobs or custom wrapper objects.
Here are a few scenarios where lifecycle-controller can simplify your workflow.
Scenario: A developer spins up resources for a feature branch. To prevent cluttering the cluster, these resources should be automatically deleted after 3 days.
Solution: Apply a `delete-after` annotation to the resources.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: feature-branch-x-backend
  namespace: dev-features
  annotations:
    lifecycle.cezary.dev/delete-after: "3d" # Deletes this deployment after 3 days
spec:
  # ...
```

Scenario: A legacy application has a slow memory leak. To ensure stability, the operations team wants to restart it every night at 3:00 AM in their local timezone.
Solution: Use the `restart-cron` and `timezone` annotations.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-app
  namespace: production
  annotations:
    lifecycle.cezary.dev/restart-cron: "0 3 * * *" # Daily at 3:00 AM
    lifecycle.cezary.dev/timezone: "America/New_York"
spec:
  # ...
```

Scenario: A database migration is scheduled for Saturday at 2:00 AM UTC. The application pods need to be restarted immediately after to pick up the new schema. This can be scheduled in advance.
Solution: Use the `restart-at` annotation with a specific timestamp.
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: database
  namespace: production
  annotations:
    lifecycle.cezary.dev/restart-at: "2025-10-25T02:00:00Z"
spec:
  # ...
```

The controller's behavior is configured entirely through annotations.
- For all resources:
  - `lifecycle.cezary.dev/timezone`: The timezone to use for all time-based annotations on the resource. Default is `UTC`. This timezone is used to interpret all datetime strings and cron expressions for this resource. In case an invalid timezone is provided, the controller will post a warning Event on the resource and take no action.
  - `lifecycle.cezary.dev/reference-point`: (string) Specifies the starting point for relative duration timers (for `-after` annotations).
    - `applyTimestamp` (default): The timer starts when the controller processes the `-after` annotation. Re-applying the manifest resets the timer ("keep-alive" behavior).
    - `creationTimestamp`: The timer starts from the resource's creation time. This creates a fixed TTL that is not affected by subsequent updates.
  - `lifecycle.cezary.dev/delete-at`: Absolute TTL. The controller deletes the resource at or after this specific date and time (e.g., `2024-12-31T23:59:59`).
    - The value should be an ISO 8601 format timestamp. The timezone is determined by the `timezone` annotation.
    - This can be applied directly to a `Namespace` to trigger its deletion. Kubernetes will handle the subsequent removal of all resources within that namespace.
  - `lifecycle.cezary.dev/delete-after`: Relative TTL (e.g., `5m`, `1h`, `3d`). The controller processes this annotation by calculating an absolute deletion time based on the time it first notices the annotation. It then adds a `lifecycle.cezary.dev/delete-at` annotation to the resource with this calculated time. To prevent re-calculation and make the state explicit, the original `lifecycle.cezary.dev/delete-after` annotation is then removed. Supports `s`, `m`, `h`, and `d` (days).
  - `lifecycle.cezary.dev/dry-run: "true"`: A per-resource annotation that makes the controller log the actions it would take without executing them. This can also be set via a global flag on the controller.
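As an illustration of how a relative TTL is rewritten, here is what a resource might look like before and after the controller processes a `delete-after` annotation (a sketch; the resource name and the computed timestamp are placeholders):

```yaml
# Before: as applied by the user
apiVersion: v1
kind: ConfigMap
metadata:
  name: temp-config               # hypothetical resource
  namespace: dev-features
  annotations:
    lifecycle.cezary.dev/delete-after: "12h"
---
# After: the controller has calculated an absolute deadline, added delete-at,
# and removed the original delete-after annotation.
apiVersion: v1
kind: ConfigMap
metadata:
  name: temp-config
  namespace: dev-features
  annotations:
    lifecycle.cezary.dev/delete-at: "2025-01-01T12:00:00"  # placeholder value
```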
- Only for pod-spawning resources (Deployments, StatefulSets, DaemonSets, etc.):
  - `lifecycle.cezary.dev/restart-at`: Performs a one-time rolling restart at a specific date and time.
  - `lifecycle.cezary.dev/restart-after`: Performs a one-time rolling restart after a relative duration (e.g., `1h`). The controller converts this to an absolute `restart-at` annotation. Supports `s`, `m`, `h`, and `d` (days).
  - `lifecycle.cezary.dev/restart-every`: Performs a rolling restart on a recurring, relative basis (e.g., `7d` to restart weekly). Supports `s`, `m`, `h`, and `d` (days).
  - `lifecycle.cezary.dev/restart-cron`: Performs a rolling restart based on a cron expression (e.g., `"0 3 * * *"` for daily at 3 AM).
  - A resource is considered pod-spawning if it has a `spec.template.metadata.annotations` field. The restart mechanism works by patching this field, which is the standard Kubernetes pattern for triggering a rolling update.
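For example, a weekly rolling restart can be declared with `restart-every` (an illustrative manifest; the workload name is a placeholder):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cache-warmer              # hypothetical workload
  namespace: production
  annotations:
    lifecycle.cezary.dev/restart-every: "7d" # rolling restart once a week
spec:
  # ...
```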
- Restart Mechanism
  - Triggering a Restart - To initiate a rolling restart, the controller injects a `lifecycle.cezary.dev/restartedAt: "<timestamp>"` annotation into the resource's `spec.template.metadata.annotations`. This is the standard mechanism that causes Kubernetes to detect a change in the pod template and trigger a rollout.
  - State Tracking for Recurring Restarts - For `restart-every` and `restart-cron` schedules, the controller maintains its state using a top-level `lifecycle.cezary.dev/last-restart-timestamp: "<timestamp>"` annotation on the resource. This timestamp serves as the anchor for calculating the next restart, ensuring the schedule remains stable over time.
    - Initialization - If the `last-restart-timestamp` annotation is missing on a resource with a recurring restart schedule, the controller adds it and sets its value to the current time. This bootstraps the schedule.
    - Reconciliation Logic - On each check, the controller performs the following steps:
      - Reads the schedule (`restart-every` or `restart-cron`) and the `last-restart-timestamp`,
      - Calculates the `nextScheduledRestart` time based on the last one,
      - Compares the current time to the `nextScheduledRestart` time,
      - If the current time is at or after `nextScheduledRestart`, the controller triggers the restart,
      - It then updates the `lifecycle.cezary.dev/last-restart-timestamp` to the value of `nextScheduledRestart`. This anchors the next cycle to the previous scheduled time, preventing schedule drift.
  - Cleanup - After a one-time `restart-at` action is successfully triggered, the controller will remove the original `lifecycle.cezary.dev/restart-at` annotation to ensure the action is idempotent.
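As a sketch of the resulting state, a Deployment with a recurring schedule that has already been restarted once might carry annotations like these (the timestamps are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-app
  namespace: production
  annotations:
    lifecycle.cezary.dev/restart-cron: "0 3 * * *"
    lifecycle.cezary.dev/timezone: "America/New_York"
    # Maintained by the controller; anchors the calculation of the next restart.
    lifecycle.cezary.dev/last-restart-timestamp: "2025-10-24T03:00:00-04:00"
spec:
  template:
    metadata:
      annotations:
        # Injected by the controller to trigger the rolling update.
        lifecycle.cezary.dev/restartedAt: "2025-10-24T03:00:00-04:00"
```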
- Note on Precedence:
  - Conflicting action types - If a resource mixes annotations from different action families (any combination of `restart-*` and `delete-*`), the controller treats it as a misconfiguration. It will post a warning `Event` on the resource and take no action.
  - Multiple annotations of one family - If more than one annotation of the same family is present on a resource, the controller applies the most specific one ("most specific wins").
    - In case of restarts:
      - `restart-after` is a convenience annotation that is converted into `restart-at` by the controller.
      - `restart-at` (a specific, one-time event) takes highest priority.
      - `restart-cron` (a specific, recurring schedule) is next.
      - `restart-every` (a relative interval) has the lowest priority.
    - In case of deletes:
      - `delete-after` is a convenience annotation that is converted into `delete-at` by the controller.
      - `delete-at` (a specific, one-time event) takes highest priority.
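To illustrate the restart precedence, if a resource carries both a one-time and a recurring schedule, only the more specific `restart-at` is honored (an illustrative snippet; the workload name is a placeholder):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server                # hypothetical workload
  annotations:
    lifecycle.cezary.dev/restart-at: "2025-10-25T02:00:00Z"  # highest priority; this wins
    lifecycle.cezary.dev/restart-cron: "0 3 * * *"           # lower priority; not applied while restart-at is present
spec:
  # ...
```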
Annotations that use relative durations (`delete-after`, `restart-after`, `restart-every`) start their timers from a configurable reference point. While the controller's processing of annotations is usually immediate, factors like high cluster load or controller downtime can introduce delays.
By default, the timer starts from the `applyTimestamp`. This means the timer begins when the controller processes the annotation on the resource.
Crucially, the timer will be reset every time you re-apply the manifest containing the `-after` annotation. The controller treats each apply as a new declaration of intent and recalculates the absolute `-at` timestamp based on the current time. This "keep-alive" behavior is the default and is very useful for development environments where resources should persist as long as they are actively being worked on.
To set a fixed lifetime that is not reset on subsequent updates, you can change the reference point. By adding the annotation `lifecycle.cezary.dev/reference-point: "creationTimestamp"`, you instruct the controller to start the timer from the moment the resource was created (`metadata.creationTimestamp`). This creates a strict, fixed TTL that is ideal for CI preview environments or automated test resources that must be cleaned up after a set period, regardless of any updates.
For time-critical operations or to set a fixed expiration that does not change on subsequent applies, it is recommended to use the absolute time annotations (`delete-at`, `restart-at`, `restart-cron`). These define a specific, unambiguous point in time for the action to occur, making them more reliable for scheduled maintenance or cleanup.
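For example, a fixed TTL that survives re-applies, such as a CI preview environment, could combine the two annotations like this (an illustrative manifest; the namespace name is a placeholder):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: preview-pr-123            # hypothetical preview environment
  annotations:
    lifecycle.cezary.dev/reference-point: "creationTimestamp" # timer starts at resource creation
    lifecycle.cezary.dev/delete-after: "2d"                   # fixed two-day lifetime
```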
By default, the controller discovers and watches all available resources (`*.*`) across all namespaces. In production or multi-tenant environments, you may want to restrict this scope to improve security and performance.
The controller supports glob-style patterns for filtering resources and namespaces via command-line flags (or Helm values).
- `--watch-resource` - (Repeatable) Glob pattern for resources to watch.
  - Format: `<resource>.<group>` (e.g. `deployments.apps`, `pods`, `*.k8s.io`).
  - If not provided, all resources are watched (unless excluded by ignore rules).
- `--ignore-resource` - (Repeatable) Glob pattern for resources to strictly ignore. Takes precedence over watch rules.
- `--watch-namespace` - (Repeatable) Glob pattern for namespaces to watch (e.g. `default`, `dev-*`).
  - Strict Scoping: If provided, the controller will only watch resources inside matching namespaces. It will automatically exclude cluster-scoped resources (like `Nodes`) with the exception of `Namespace` objects themselves, provided their name matches the pattern.
- `--ignore-namespace` - (Repeatable) Glob pattern for namespaces to strictly ignore. Takes precedence over watch rules.
Watch only deployments in `dev-*` namespaces:

```bash
--watch-resource=deployments.apps --watch-namespace=dev-*
```

Watch everything except secrets and anything in kube-system:

```bash
--ignore-resource=secrets --ignore-namespace=kube-system
```

lifecycle-controller is available both as:
- A standalone container that you can configure to run in your own environment
- A Helm chart for easy deployment into a Kubernetes cluster
By default, the controller generates a ClusterRole with wildcard (*) permissions for resources and apiGroups. This allows it to dynamically discover and manage any resource type.
When installing via Helm, if you configure the scope.watchResources value, the chart will automatically restrict the generated ClusterRole to only contain permissions for those specific resources. This ensures the controller operates with the Principle of Least Privilege.
If you are running the binary manually or managing RBAC yourself, you should ensure your ClusterRole permissions match the resources you intend to manage.
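As a rough illustration only, a manually managed ClusterRole scoped to the two workload types used in the Helm example below might look like the following. The verb list is an assumption, not a documented requirement; the controller needs to read resources, update their annotations, delete them when a TTL expires, and emit the warning Events described above:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: lifecycle-controller        # hypothetical name
rules:
  - apiGroups: ["apps"]
    resources: ["deployments", "statefulsets"]
    verbs: ["get", "list", "watch", "patch", "update", "delete"]  # assumed verb set
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]      # assumed: needed to post warning Events
```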
You can install the controller directly from the Helm repository:
```bash
helm repo add lifecycle-controller https://cstanislawski.github.io/lifecycle-controller
helm repo update
helm install lifecycle-controller lifecycle-controller/lifecycle-controller \
  --namespace lifecycle-controller \
  --create-namespace
```

To configure scoping via Helm (which also tightens RBAC):
```yaml
controllerManager:
  scope:
    watchResources:
      - "deployments.apps"
      - "statefulsets.apps"
    watchNamespaces:
      - "default"
      - "dev-*"
```