Summary
helm-charts/infisical-standalone-postgres/templates/infisical.yaml renders a fresh timestamp into both the Deployment metadata and the pod template on every helm template / helm upgrade:
# lines 7 and 23
updatedAt: {{ now | date "2006-01-01 MST 15:04:05" | quote }}
Because this value lands on spec.template.metadata.annotations, Kubernetes recomputes the pod-template-hash every time the chart re-renders. Under any GitOps controller that reconciles on a schedule — ArgoCD, Flux, Helmfile + cron — this creates a fresh ReplicaSet on every reconcile cycle, continuously rolling pods without any actual change.
Impact observed (2026-04-17/18, prod)
- Deployment revision 62 reached in ~17 days on a single-replica prod deployment
- 11 ReplicaSets in 8 hours on 2026-04-17, each differing only in the updatedAt annotation value
- User-visible "Infisical is slow" reports: each rollout leaves a ~30s readiness gap while the new pod bootstraps Node.js (the readiness probe also has no configurable timeoutSeconds, so it defaults to 1s, and any GC pause > 1s during the window fails the probe)
- kubectl rollout history shows dozens of revisions with no actual change behind them
Reproduction
$ helm template infisical-prod infisical-standalone --version 1.8.0 \
| grep updatedAt
updatedAt: "2026-04-04 UTC 23:29:54"
updatedAt: "2026-04-04 UTC 23:29:54"
$ sleep 7 && helm template infisical-prod infisical-standalone --version 1.8.0 \
| grep updatedAt
updatedAt: "2026-04-04 UTC 23:30:01"
updatedAt: "2026-04-04 UTC 23:30:01"
Each render is a different manifest. Any system that applies the second render as an "update" triggers a pod rotation.
Root cause
{{ now }} is non-deterministic. Emitting it into pod-hash-sensitive fields (annotations on spec.template.metadata) tells Kubernetes the pod spec changed when the user's intent has not.
Proposed fix
Remove the updatedAt annotation entirely. It provides no operational value (the Deployment's .status already tracks observedGeneration / lastTransitionTime; kubectl rollout history tracks revisions). If some consumer truly needs a rendered timestamp, expose it as an opt-in value:
{{- if $infisicalValues.emitUpdatedAtAnnotation }}
updatedAt: {{ now | date "2006-01-02 MST 15:04:05" | quote }}
{{- end }}
…and document the side effect in the chart's values README.
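If the opt-in route is taken, the values wiring could look like this (emitUpdatedAtAnnotation is the placeholder name from the template sketch above, not an existing chart value):

```yaml
infisical:
  # Off by default. Enabling this makes every render produce a distinct
  # manifest, which rolls pods under schedule-based GitOps reconcilers.
  emitUpdatedAtAnnotation: false
```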
(Note: the reference format string in the current template uses 2006-01-01 instead of Go's canonical 2006-01-02. The second 01 collides with the month placeholder, so both positions render the current month — which is why the observed output repeats the same two digits in the date portion. Not the cause of the churn, but worth fixing at the same time.)
Related chart ergonomics
While we're here, these readiness/liveness defaults are aggressive enough to matter in prod:
readinessProbe:
  httpGet:
    path: /api/status
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5
  # (no timeoutSeconds → Kubernetes defaults to 1s, which is tight for Node.js)
# (no livenessProbe, no startupProbe)
Would welcome exposing the probe block as a values knob so operators can tune it without forking the chart:
infisical:
  readinessProbe:
    httpGet: {path: /api/status, port: 8080}
    initialDelaySeconds: 10
    periodSeconds: 5
    timeoutSeconds: 5
    failureThreshold: 3
  livenessProbe: { ... }
  startupProbe: { ... }
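The template side of that knob could be a sketch like the following — assuming the chart's existing $infisicalValues scope and container indentation, untested against the real template:

```yaml
{{- with $infisicalValues.readinessProbe }}
readinessProbe:
  {{- toYaml . | nindent 2 }}
{{- end }}
```

Rendering the whole probe block with toYaml keeps the chart agnostic to which probe fields an operator sets.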
Happy to open the PR myself if this reproduces on your side and you'd accept the fix. Just wanted to file the issue first so other operators can find this thread.
Environment
- Chart:
infisical-standalone 1.8.0 (Cloudsmith registry)
- Source:
helm-charts/infisical-standalone-postgres/templates/infisical.yaml lines 7, 23
- Kubernetes: 1.34.6+rke2r1
- Controller: ArgoCD 3.3.2
- Observed on: production, 2026-04-17 through 2026-04-18