This article describes how to run background (daemon) containers in Argo Workflows and why you might choose them. Daemon containers allow a workflow to start a container that keeps running in the background while subsequent steps execute. This pattern is useful for bringing up short-lived supporting services—such as databases, caches, or test fixtures—that other steps use during the workflow run. Argo automatically terminates daemon containers when execution leaves the scope in which they were started (for example, when the workflow completes or is terminated). Here is a minimal example that launches a background Redis instance used by a test step:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  name: daemon-workflow
  namespace: argo
spec:
  entrypoint: main
  templates:
  - name: main
    steps:
    - - name: start-database
        template: mock-database
    - - name: run-tests
        template: test-step
  - name: mock-database
    daemon: true # This keeps the container running in the background
    container:
      image: redis:alpine
      command: [redis-server]
      readinessProbe:
        tcpSocket:
          port: 6379
        initialDelaySeconds: 5
        periodSeconds: 5
  - name: test-step
    container:
      image: alpine
      command: [sh, -c]
      args: ["echo 'Testing against database...'; sleep 60"]
How it works
  • templates.main: defines two sequential step groups: start-database launches the daemon, then run-tests executes the tests. Placing them in separate step groups ensures the tests begin only after the daemon step has succeeded.
  • templates.mock-database: sets daemon: true. Argo marks this step as successful once the container is up and passing its readinessProbe, and does not wait for it to finish—so the container runs in the background for the rest of the workflow. The readinessProbe ensures Redis is accepting connections before dependent steps start.
  • templates.test-step: an example step that prints a message and sleeps for 60 seconds. While this step runs, the Redis daemon remains available, so tests can connect to it.
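If the test step needs the daemon's address, Argo exposes the IP of a daemon step to later steps via the steps.<name>.ip template variable. A minimal sketch of wiring it through (the db-host parameter name is illustrative; the step and template names match the example above):

```yaml
# Sketch: passing the daemon's IP to a dependent step.
# "db-host" is an illustrative parameter name, not an Argo built-in.
  - name: main
    steps:
    - - name: start-database
        template: mock-database
    - - name: run-tests
        template: test-step
        arguments:
          parameters:
          - name: db-host
            value: "{{steps.start-database.ip}}"  # IP of the daemon's pod
  - name: test-step
    inputs:
      parameters:
      - name: db-host
    container:
      image: redis:alpine
      command: [sh, -c]
      args: ["redis-cli -h {{inputs.parameters.db-host}} ping"]
```

Because each step runs in its own pod, the daemon is not reachable on localhost; passing its pod IP as a parameter is the usual way to connect.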
Daemon containers are intended for short-lived supporting services used only during a workflow run. Use daemon: true for ephemeral helpers; for long-lived or production-facing services, use Kubernetes Deployments or StatefulSets instead.
Observing the daemon lifecycle
  • While the workflow is running, you will see a pod for the mock-database in the Running state.
  • When the workflow completes (or is terminated), Argo transitions the daemon pod to Completed and cleans it up automatically.
[Screenshot: the Argo Workflows web UI showing the "daemon-workflow" DAG with both steps completed ("start-database" and "run-tests"), and a details panel listing the workflow name, ID, pod name, host node, phase (Succeeded), and start/end times.]
For troubleshooting or verification, run kubectl in the workflow namespace (for example, argo):
kubectl get pods -n argo
You should observe the database pod marked Running while the workflow steps are executing, and then Completed after the workflow finishes. Using daemon: true ensures a specific container remains available to other steps for the workflow’s lifetime.
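To follow the transition live, you can watch pod phases and inspect the daemon's logs. A sketch, assuming the argo namespace from the example ("main" is the default name Argo gives a workflow pod's main container; substitute the actual pod name):

```shell
# Watch pod phases change as the workflow runs
kubectl get pods -n argo --watch

# Tail the daemon's Redis logs (replace <start-database-pod-name>;
# -c main targets the workflow's main container)
kubectl logs -n argo -c main -f <start-database-pod-name>
```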
| Resource Type | When to use | Example pattern |
| --- | --- | --- |
| Daemon container (daemon: true) | Short-lived support services scoped to a workflow run (tests, ephemeral caches) | Background Redis for an integration test |
| Sidecar container | Per-pod companion processes that must share lifecycle with the primary container | Logging agent or proxy alongside an application container |
| Deployment/StatefulSet | Long-running, production services that require scaling, persistence, or stable network identities | Production Redis cluster or database |
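For comparison, the sidecar pattern from the table can be expressed in Argo with a template's sidecars field. A minimal sketch (names are illustrative): unlike a daemon, the sidecar lives and dies with this one step's pod, and because it shares the pod's network namespace, it is reachable on localhost:

```yaml
  - name: test-with-sidecar
    container:
      image: alpine
      command: [sh, -c]
      # The sidecar shares the pod network, so Redis is reachable on localhost
      args: ["echo 'Testing against localhost Redis...'; sleep 10"]
    sidecars:
    - name: redis-sidecar   # illustrative name
      image: redis:alpine
      command: [redis-server]
```

Choose a sidecar when only a single step needs the helper; choose daemon: true when several steps across the workflow must share it.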
Do not rely on daemon containers for production services or long-lived state. Daemon containers are cleaned up when the workflow exits; for persistent, scalable services use Kubernetes Deployments, StatefulSets, or external managed services.
