HashiCorp Certified: Consul Associate Certification

Explain Consul Architecture

Consensus Protocol: Raft

In this lesson, we dive into Consul's consensus protocol, Raft. Running exclusively on Consul server agents, Raft provides reliable replication of log entries and cluster state changes across the server peer set. By replicating every new entry across the servers, Consul avoids losing committed data when individual servers fail. Raft orchestrates leadership election, log replication, and quorum management, and we'll explore each of these components in detail.

Note

This Raft implementation was later extended to power HashiCorp Vault’s integrated storage starting in Vault 1.4.


Consensus Glossary

The image is a slide titled "Consensus Glossary" with definitions for "Log" and "Peer Set," explaining their roles in a consistent log and log replication.

Log
An ordered sequence of entries representing every change to the cluster. Entries include server additions/removals and key-value writes, updates, or deletions. Replaying the log reconstructs the cluster’s current state, so all servers must agree on entry content and order to form a consistent log.
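To make the "replay" idea concrete, here is a minimal Go sketch (not Consul's actual implementation) that applies an ordered log of key-value operations to an empty state machine. The `Entry` type and `replay` function are illustrative names only; real Raft entries also carry a term, an index, and an entry type.

```go
package main

import "fmt"

// Entry is an illustrative log entry: one operation against the key-value store.
type Entry struct {
	Op    string // "set" or "delete"
	Key   string
	Value string
}

// replay applies every entry, in order, to an empty state machine.
// Because all servers agree on the same ordered log, replaying it on
// any server yields the same final state.
func replay(log []Entry) map[string]string {
	state := make(map[string]string)
	for _, e := range log {
		switch e.Op {
		case "set":
			state[e.Key] = e.Value
		case "delete":
			delete(state, e.Key)
		}
	}
	return state
}

func main() {
	log := []Entry{
		{Op: "set", Key: "app/port", Value: "8080"},
		{Op: "set", Key: "app/env", Value: "prod"},
		{Op: "delete", Key: "app/port"},
	}
	fmt.Println(replay(log)) // map[app/env:prod]
}
```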

Peer Set
All server nodes that participate in Raft replication within a datacenter. In a five-server cluster example, the peer set consists of those five servers.

Quorum
A majority of nodes in the peer set required to elect a leader and commit log entries.

quorum = floor(n / 2) + 1

For a five-node cluster:

quorum = floor(5 / 2) + 1 = 2 + 1 = 3

Losing more than two servers drops you below quorum, halting cluster operations.
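The quorum arithmetic is easy to check in code. The following Go sketch is purely illustrative (the `quorum` and `faultTolerance` helpers are not Consul APIs); it prints the quorum size and failure tolerance for the common odd-sized server counts.

```go
package main

import "fmt"

// quorum returns the number of servers that must agree: a strict majority.
func quorum(servers int) int {
	return servers/2 + 1 // integer division: floor(n/2) + 1
}

// faultTolerance is how many servers can fail before quorum is lost.
func faultTolerance(servers int) int {
	return servers - quorum(servers)
}

func main() {
	for _, n := range []int{1, 3, 5, 7} {
		fmt.Printf("%d servers: quorum=%d, tolerates %d failure(s)\n",
			n, quorum(n), faultTolerance(n))
	}
}
```

For the five-node example above, this prints a quorum of 3 and a tolerance of 2 failed servers.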

Warning

If your cluster falls below quorum, no new entries can be committed, and the cluster becomes unavailable for writes.


Server Roles and Responsibilities

Each Consul server (Raft node) operates in one of three states:

Follower
Default state. Forwards client requests to the leader, accepts log entries replicated from the leader, and casts votes in elections.

Candidate
Transient state entered during an election. Solicits votes from the other servers in the peer set.

Leader
Handles all client writes and queries, appends new log entries, and replicates and commits them across the peer set.

The image outlines the responsibilities of leaders and followers in a consensus protocol, detailing tasks such as log entry ingestion, query processing, and vote casting.

Only the leader can append entries and decide when they’re committed based on acknowledgments from a quorum of servers. Followers forward write requests to the leader and store the replicated entries.
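The commit decision can be sketched as follows. This is an illustrative Go snippet, not Consul's internals: `matchIndex` is assumed to hold, for each server in the peer set (leader included), the highest log index known to be stored on that server, and the committed index is the highest index present on a quorum of servers.

```go
package main

import (
	"fmt"
	"sort"
)

// committedIndex returns the highest log index that is safely committed:
// the largest index already stored on a quorum of the peer set.
func committedIndex(matchIndex []int) int {
	sorted := append([]int(nil), matchIndex...)
	sort.Sort(sort.Reverse(sort.IntSlice(sorted)))
	quorum := len(matchIndex)/2 + 1
	return sorted[quorum-1] // index held by at least `quorum` servers
}

func main() {
	// Five-server peer set: the leader is at index 7, followers lag behind.
	matchIndex := []int{7, 7, 6, 4, 4}
	fmt.Println(committedIndex(matchIndex)) // 6: stored on 3 of 5 servers
}
```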


Leader Election Process

Raft elections use randomized timeouts and heartbeat messages (a short sketch follows the steps below):

  1. The leader sends periodic heartbeats to all followers.
  2. Each follower has a randomly assigned election timeout (e.g., 150–300 ms).
  3. If a follower doesn’t receive a heartbeat before its timeout expires, it assumes the leader has failed, becomes a candidate, votes for itself, and requests votes from peers.
  4. A new leader is elected once a candidate secures votes from a majority of the peer set.
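The sketch below simulates the timeout-driven transition from follower to candidate. It is a simplified, assumption-laden model (a single follower, a leader that stops heartbeating after three beats, and illustrative timing values), not the hashicorp/raft implementation.

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	// Each follower picks a randomized election timeout (e.g. 150-300 ms),
	// so one follower's timer usually fires first and becomes a candidate.
	timeout := time.Duration(150+rand.Intn(150)) * time.Millisecond

	heartbeats := make(chan struct{})

	// Illustrative leader: sends heartbeats every 50 ms, then "fails".
	go func() {
		for i := 0; i < 3; i++ {
			time.Sleep(50 * time.Millisecond)
			heartbeats <- struct{}{}
		}
		// The leader stops sending heartbeats here.
	}()

	// Follower loop: reset the timer on every heartbeat; if the timer
	// expires first, assume the leader failed and start an election.
	timer := time.NewTimer(timeout)
	for {
		select {
		case <-heartbeats:
			fmt.Println("heartbeat received, resetting election timer")
			if !timer.Stop() {
				<-timer.C
			}
			timer.Reset(timeout)
		case <-timer.C:
			fmt.Println("election timeout expired: becoming candidate, voting for self")
			return
		}
	}
}
```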

The image explains the leader election process in a consensus protocol, highlighting randomized election timeouts and the steps involved when a heartbeat isn't received from the leader.

In a five-node cluster example, the follower with the shortest timeout triggers the first election when the leader’s heartbeats stop:

The image explains a consensus protocol for leader election based on randomized election timeouts, detailing the process of leader heartbeats, server timeouts, and the election process when a heartbeat isn't received. It includes a diagram showing the transition from leader to candidate and followers with specific timeout values.


Client Interaction with the Raft Cluster

Clients only need to contact any single Consul server. The leader handles all writes and coordinates replication:

  1. A client issues a key-value write to one server.
  2. If that server isn’t the leader, it forwards the request to the current leader.
  3. The leader appends the entry to its log, replicates it to the followers, and commits it once a quorum acknowledges the entry.

The image illustrates a "Consensus Protocol" involving a release engineer interacting with a Consul system, which includes a leader node and multiple follower nodes, highlighting operations like Consul Raft and replication.

This model ensures clients connect to a single endpoint while Raft maintains strong consistency and durability across the cluster.
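A minimal Go sketch of this flow is shown below. The `Server` type and its `Write` method are hypothetical, and replication is modeled as a direct in-memory append rather than Consul's RPC layer; the point is only that a write arriving at any server still lands on the leader, which replicates it and commits it once a quorum has stored it.

```go
package main

import "fmt"

// Server is an illustrative Raft node. Followers know who the leader is
// and forward writes to it; only the leader appends and replicates.
type Server struct {
	name     string
	isLeader bool
	leader   *Server
	peers    []*Server // rest of the peer set (leader's view)
	log      []string
}

// Write accepts a client request on any server in the cluster.
func (s *Server) Write(entry string) {
	if !s.isLeader {
		fmt.Printf("%s: not the leader, forwarding to %s\n", s.name, s.leader.name)
		s.leader.Write(entry)
		return
	}
	// Leader path: append locally, replicate to followers, then commit
	// once a quorum of the peer set has stored the entry.
	s.log = append(s.log, entry)
	acks := 1 // the leader's own copy counts toward quorum
	for _, p := range s.peers {
		p.log = append(p.log, entry)
		acks++
	}
	quorum := (len(s.peers)+1)/2 + 1
	if acks >= quorum {
		fmt.Printf("%s: entry %q committed (%d/%d acks)\n",
			s.name, entry, acks, len(s.peers)+1)
	}
}

func main() {
	leader := &Server{name: "server-1", isLeader: true}
	f2 := &Server{name: "server-2", leader: leader}
	f3 := &Server{name: "server-3", leader: leader}
	leader.peers = []*Server{f2, f3}

	// The client can contact any server; the write still lands on the leader.
	f3.Write("kv set app/env=prod")
}
```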

