This article explains the Argo Workflows architecture: how Argo runs on Kubernetes, how the control and execution planes are separated, and how Argo integrates with external systems (artifact stores, identity providers, monitoring, etc.). Understanding these components and their interactions helps you design secure, scalable workflow deployments on Kubernetes. At a high level there are two logical planes:
  • Control plane (Argo namespace) — components that manage and orchestrate workflows.
  • Execution plane (user namespaces) — where the workflow tasks run as Kubernetes pods.
Below are the key components and how they interact.

Argo namespace (control plane)

  • Argo Server: exposes the Argo REST API and the web UI. Users and CLIs interact with Argo Server; it’s commonly fronted by a LoadBalancer or Ingress for external access. For high availability, Argo Server can be deployed with multiple replicas.
  • Workflow Controller: the reconciliation engine that watches Workflow custom resources (CRs), advances their state, and creates or updates Kubernetes resources (Pods, ConfigMaps, Secrets, etc.) to execute workflow steps. The controller runs in the Argo namespace and requires RBAC permissions, granted via its ServiceAccount, to manage cluster resources.
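The controller's watch-and-reconcile pattern can be sketched in a few lines. This is an illustrative toy, not the controller's real internals: the `Workflow` class, `reconcile` function, and phase strings below are simplified stand-ins for the actual CR schema and reconciliation logic.

```python
from dataclasses import dataclass, field

@dataclass
class Workflow:
    """Toy stand-in for a Workflow CR: ordered steps plus observed progress."""
    name: str
    steps: list                                  # step names, in order
    completed: set = field(default_factory=set)  # steps whose pods succeeded
    phase: str = "Pending"

def reconcile(wf: Workflow, pod_status: dict, create_pod) -> Workflow:
    """One reconciliation pass: record finished pods, then launch the next step.

    pod_status maps step name -> observed pod phase; create_pod(workflow, step)
    stands in for a call to the Kubernetes API to create the step's Pod.
    """
    # Fold observed pod results into the workflow's recorded state.
    for step in wf.steps:
        if pod_status.get(step) == "Succeeded":
            wf.completed.add(step)

    pending = [s for s in wf.steps if s not in wf.completed]
    if not pending:
        wf.phase = "Succeeded"      # nothing left: the workflow is done
        return wf

    wf.phase = "Running"
    nxt = pending[0]
    if pod_status.get(nxt) is None:  # no pod exists yet for the next step
        create_pod(wf.name, nxt)
    return wf
```

Each pass is idempotent: the controller only creates a pod for a step that does not already have one, which is what makes repeated reconciliation safe.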

User namespace (execution plane)

  • Workflow Pods: when the controller schedules work, it creates pods in a target namespace (commonly one namespace per workflow or per tenant). These pods run containers that perform each workflow step. Large or parallel workflows generally produce multiple pods (often one per step). Pod lifecycle is managed by the Kubernetes control plane (kube-scheduler, kubelet, etc.).
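The "one pod per step" pattern can be sketched as a function that stamps out a minimal Pod manifest per step. The field names follow the Kubernetes Pod schema, and the `workflows.argoproj.io/workflow` label is the convention Argo uses to tie pods back to their workflow; the workflow, step, image, and namespace values are made up for illustration.

```python
def pod_for_step(workflow: str, step: str, image: str, namespace: str) -> dict:
    """Return a minimal Pod manifest for one workflow step."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {
            "name": f"{workflow}-{step}",
            "namespace": namespace,
            # Label linking the pod back to its owning workflow.
            "labels": {"workflows.argoproj.io/workflow": workflow},
        },
        "spec": {
            "containers": [{"name": "main", "image": image}],
            "restartPolicy": "Never",  # step pods run to completion, no restarts
        },
    }

# A two-step workflow yields two pods in the tenant's namespace.
pods = [pod_for_step("demo", s, "alpine:3.19", "team-a") for s in ("fetch", "train")]
```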

Kubernetes API (cluster interactions)

  • Argo components (controller, server, agents) interact with the Kubernetes API server to create and manage resources such as Pods, Services, ConfigMaps, Secrets, and PersistentVolumeClaims.
  • Access is governed by Kubernetes RBAC and the ServiceAccounts assigned to Argo components.
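To make the RBAC point concrete, here is a hedged sketch of the kind of namespaced Role a controller ServiceAccount needs, expressed as a Python dict mirroring the Kubernetes Role schema. The exact resources and verbs should be taken from the official Argo install manifests; this list is illustrative, not authoritative.

```python
# Sketch of a Role granting the verbs a workflow controller typically needs.
# Resource/verb lists are an assumption for illustration; consult the official
# install manifests for the real set.
controller_role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "argo-workflow-controller", "namespace": "argo"},
    "rules": [
        {   # full pod lifecycle management for step execution
            "apiGroups": [""],
            "resources": ["pods"],
            "verbs": ["create", "get", "list", "watch", "update", "patch", "delete"],
        },
        {   # supporting resources workflows may mount or reference
            "apiGroups": [""],
            "resources": ["configmaps", "secrets", "persistentvolumeclaims"],
            "verbs": ["get", "list", "watch", "create"],
        },
        {   # the Workflow CRs themselves, including status updates
            "apiGroups": ["argoproj.io"],
            "resources": ["workflows", "workflows/status"],
            "verbs": ["get", "list", "watch", "update", "patch"],
        },
    ],
}
```

A Role like this is bound to the controller's ServiceAccount with a RoleBinding; multi-namespace or cluster-scoped setups use a ClusterRole and ClusterRoleBinding instead.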

User interaction paths

Users and systems submit and manage workflows through several interfaces. Below is a quick reference.
  • argo CLI: submit, manage, and inspect workflows from the terminal. (https://argoproj.github.io/argo-workflows/cli_installation/)
  • kubectl: interact with Workflow CRs directly using standard Kubernetes tooling. (https://kubernetes.io/docs/reference/kubectl/)
  • Web UI: visualize and manage workflows in a browser. (https://argoproj.github.io/argo-workflows/)
  • API clients / SDKs: automate workflow submission and retrieval programmatically, e.g. with the Python SDK. (https://github.com/argoproj/argo-workflows/tree/master/sdk/python)
  • Webhooks / event triggers: event-driven workflow triggers via Argo Events or custom webhook integrations. (https://argoproj.github.io/argo-events/)
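Programmatic submission ultimately means a POST against the Argo Server REST API. The sketch below builds (but does not send) such a request with only the standard library; the `/api/v1/workflows/{namespace}` path matches the server's API, while the server address, namespace, manifest, and token are placeholders, and a real client (SDK or `argo` CLI) is usually preferable.

```python
import json
import urllib.request

def build_submit_request(server: str, namespace: str, manifest: dict, token: str):
    """Prepare (but do not send) a workflow-create request for the Argo Server."""
    url = f"{server}/api/v1/workflows/{namespace}"
    # The create endpoint wraps the manifest in a {"workflow": ...} envelope.
    body = json.dumps({"workflow": manifest}).encode()
    return urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # token auth; placeholder value
        },
    )

req = build_submit_request(
    "https://argo.example.com",          # placeholder server address
    "team-a",                            # target namespace
    {"metadata": {"generateName": "hello-"}, "spec": {"entrypoint": "main"}},
    "REDACTED",
)
# urllib.request.urlopen(req) would submit it against a live server.
```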

External integrations

Argo Workflows deployments typically rely on external systems for storage, authentication, and observability. Common integration patterns:
  • Artifact stores: persist workflow inputs and outputs (artifacts). Examples: Amazon S3, Google Cloud Storage, MinIO.
  • Workflow archiving: archive workflow metadata, logs, and history for retention and auditing. Examples: MySQL, PostgreSQL, S3-compatible backends.
  • Authentication / authorization: delegate user authentication and map identities to RBAC. Examples: OAuth/OIDC (OpenID Connect) providers.
  • Monitoring / metrics: emit metrics for scraping and alerting. Examples: Prometheus with Grafana.
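As a concrete example of the artifact-store integration, here is a hedged sketch of an S3-style artifact repository configuration, expressed as a Python dict for illustration. In a real install this lives in cluster configuration (a ConfigMap); the endpoint, bucket, and secret names below are made up, so check the official configuration docs for the authoritative keys.

```python
# Sketch of an S3-compatible artifact repository config; values are placeholders.
artifact_repository = {
    "s3": {
        "endpoint": "minio.example.com:9000",  # MinIO or any S3-compatible store
        "bucket": "workflow-artifacts",
        "insecure": False,                     # use TLS to the store
        # Credentials are referenced from a Kubernetes Secret, not inlined.
        "accessKeySecret": {"name": "artifact-creds", "key": "accesskey"},
        "secretKeySecret": {"name": "artifact-creds", "key": "secretkey"},
    },
    "archiveLogs": True,  # also persist container logs as artifacts
}
```

Keeping credentials in a Secret reference rather than in the config itself is the important pattern here, whatever the exact key names.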

Lifecycle summary

  1. A user submits a Workflow to the Argo Server (via argo CLI, UI, or API).
  2. The Workflow Controller detects the new Workflow CR and starts reconciling its state.
  3. The controller creates the necessary Kubernetes pods in the chosen namespace to execute each step defined in the Workflow spec.
  4. Pods read inputs and write outputs to configured artifact stores; the controller observes pod status and advances the workflow accordingly.
  5. After completion, workflow metadata and logs can be archived to a database or object store and metrics emitted for monitoring and alerting.
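The five steps above can be traced with a toy simulation. Everything here is illustrative: the trace strings, step names, and the dict standing in for an artifact store are invented to show the ordering, not to model real components.

```python
def lifecycle_trace(steps, artifact_store: dict) -> list:
    """Simulate the submit -> reconcile -> execute -> archive sequence."""
    trace = ["workflow submitted via Argo Server"]
    for step in steps:
        trace.append(f"controller created pod for step '{step}'")
        # The pod writes its output to the configured artifact store.
        artifact_store[step] = f"output-of-{step}"
        trace.append(f"pod for step '{step}' succeeded; controller advanced workflow")
    trace.append("workflow archived; metrics emitted")
    return trace

store = {}
trace = lifecycle_trace(["fetch", "train"], store)
```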
[Figure: high-level architecture of Argo Workflows on a Kubernetes cluster. Users and CLIs reach the Argo Server through a load balancer; the Workflow Controller creates workflow pods in user namespaces; external services provide artifact storage, OAuth authentication, and monitoring.]
Note: The Workflow Controller reconciles Workflow CRs and must have ServiceAccount RBAC permissions to create and manage the resources your workflows require (Pods, ConfigMaps, Secrets, PVCs, etc.). Verify Role/ClusterRole bindings for multi-namespace or cluster-scoped workflows.
