In this lesson we will create and verify an EventBus for Argo Events. An EventBus is the messaging backbone that routes events from event sources to sensors. Argo Events supports multiple EventBus backends; this guide compares the common options and shows how to create a NATS-based EventBus. The supported backends are:
- NATS (native, managed by Argo Events)
- NATS (exotic — connect to an existing NATS streaming server)
- NATS JetStream
- Kafka

EventBus backend comparison
| Backend | Use case | Notes |
|---|---|---|
| NATS (native) | Simple deployments where Argo Events should manage the NATS streaming cluster | Argo Events will create StatefulSet(s) and services |
| NATS (exotic) | Use an existing, externally managed NATS cluster | Useful when you already operate NATS and want Argo Events to connect to it |
| JetStream | Durable stream storage and advanced delivery guarantees | Use a pinned JetStream version in production |
| Kafka | Enterprise-grade messaging systems already in use | Kafka must be provisioned and managed outside Argo Events |
NATS (native)
The most common EventBus for Argo Events is the native NATS streaming implementation, which Argo Events manages for you. NATS streaming (native) typically expects at least three replicas to form a quorum for high availability. Set replicas to match your HA requirements and ensure your cluster has capacity to schedule those pods.
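A minimal native EventBus might look like the following sketch (three replicas for quorum; the token auth setting is an illustrative choice, not required):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: EventBus
metadata:
  name: default
  namespace: argo-events
spec:
  nats:
    native:
      # Three replicas so the NATS streaming cluster can form a quorum
      replicas: 3
      # Token-based auth between clients and the bus (illustrative)
      auth: token
```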
NATS (exotic) — connect to an existing NATS streaming server
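A sketch of an exotic EventBus that points at an externally managed NATS streaming service (the clusterID and url values are placeholders for your own deployment):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: EventBus
metadata:
  name: default
  namespace: argo-events
spec:
  nats:
    exotic:
      # Placeholder values: point these at your existing NATS streaming cluster
      clusterID: example-stan
      url: nats://nats-streaming.example.svc:4222
```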
With the exotic configuration, Argo Events connects to a NATS streaming cluster that you already operate instead of creating and managing one itself.
JetStream
Argo Events supports JetStream for advanced streaming features such as durable storage and stronger delivery guarantees. Enable JetStream in the EventBus spec and pin a version for production stability. Do NOT use "latest" in production; pin a concrete JetStream version to avoid unexpected upgrades or incompatibilities.
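A JetStream EventBus with a pinned version might look like this sketch (the version string is an example; check the Argo Events release notes for the versions your installation supports):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: EventBus
metadata:
  name: default
  namespace: argo-events
spec:
  jetstream:
    # Pin a concrete version; never "latest" in production
    version: "2.8.1"  # example value
    replicas: 3
```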
Kafka
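A sketch of a Kafka-backed EventBus that references externally managed brokers (the url and topic values are placeholders; verify the exact field layout against the Argo Events EventBus specification for your version):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: EventBus
metadata:
  name: default
  namespace: argo-events
spec:
  kafka:
    # Placeholder address of a Kafka cluster you manage outside Argo Events
    url: kafka.example.svc:9092
    # Placeholder topic name used by the bus
    topic: eventbus
```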
Kafka is supported as an EventBus backend as well. The brokers themselves must be provisioned and managed outside Argo Events; the EventBus only references them.
Anti-affinity and scheduling considerations
To improve availability, configure Kubernetes pod anti-affinity so EventBus pods are spread across nodes or zones. Anti-affinity helps prevent correlated failures (for example, all replicas on a single node). The exact anti-affinity configuration depends on your cluster topology and scheduling constraints.
Recommendation checklist:
- Ensure storageClassName and volume sizes meet persistence needs.
- Set pod anti-affinity to spread replicas across nodes/availability-zones.
- Confirm the cluster has capacity for the required replica count.
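As one illustration, a preferred pod anti-affinity rule that spreads native NATS EventBus pods across nodes might be added to the spec like this (the eventbus-name label is an assumption about how the controller labels pods; verify against your pods' actual labels):

```yaml
spec:
  nats:
    native:
      replicas: 3
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                # Prefer scheduling replicas on distinct nodes
                topologyKey: kubernetes.io/hostname
                labelSelector:
                  matchLabels:
                    eventbus-name: default  # assumed pod label
```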
Create the EventBus in the cluster
- Create a file named eventbus.yaml with your chosen EventBus configuration (for example, the native NATS YAML above).
- Apply it to the argo-events namespace: kubectl -n argo-events apply -f eventbus.yaml

Monitor and verify the EventBus
After you apply the YAML, the EventBus controller will create Kubernetes resources. For native NATS this typically includes a StatefulSet and associated Services. Monitor pod creation and readiness:
- Apply the EventBus YAML: kubectl -n argo-events apply -f eventbus.yaml
- Inspect pods: kubectl get pods -n argo-events
- Check EventBus status: kubectl get eventbus default -o yaml
- Inspect the computed config: kubectl get eventbus default -o json | jq '.status.config'