Vault cluster lost quorum recovery
With Vault's Integrated Storage, maintaining Raft quorum is a key consideration when configuring and operating your Vault environment. Quorum is lost permanently when there is no way to recover enough nodes to reach consensus and elect a leader. Without quorum, read and write operations cannot be performed within the cluster.
The cluster quorum is dynamically updated when additional servers join the cluster. Quorum is calculated with the formula (n+1)/2, where n is the number of servers in the cluster. For example, a 3-server cluster needs at least 2 servers operational for the cluster to function properly, since (3+1)/2 = 2. Specifically, you need 2 servers active at all times to be able to perform read and write operations. (See the deployment table.)
Note
There is an exception to this rule when the -non-voter option is used while joining the cluster; this option is only available in Vault Enterprise.
Scenario overview
When two of the three servers encounter an outage, the cluster loses quorum and becomes inoperable. Although one of the servers is still fully functional, the cluster cannot process read or write requests.
Example:
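The following is a minimal sketch of what this looks like from the surviving server. The API address is an assumption, and the exact error text varies by Vault version; the point is that the node itself responds, but anything that needs an elected leader does not.

```
# Assumes the surviving server (vault_1) listens on the default API port 8200.
export VAULT_ADDR='http://127.0.0.1:8200'

# The node itself still responds to local status checks.
vault status

# Operations that require an active (leader) node fail or hang.
vault operator raft list-peers
```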
In this tutorial, you will recover from the permanent loss of two of the three Vault servers by converting the surviving server into a single-server cluster. The last server must be fully operational to successfully complete this procedure.
Note
Sometimes quorum is lost because autopilot marks nodes as unhealthy while the Vault service is still running on them. On such unhealthy node(s), you must stop the Vault service before running the peers.json procedure.
In a 5-node cluster, or when non-voters are present, you must also stop the other healthy nodes before performing the peers.json recovery.
Locate the storage directory
On the healthy Vault server, locate the Raft storage directory. To discover the location of the directory, review your Vault configuration file. The storage stanza contains the path to the directory.
Example:
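A hedged example of such a stanza, assuming the configuration file lives at /etc/vault.d/vault.hcl (adjust the path to your installation):

```
# Show the raft storage stanza from the Vault configuration file.
$ grep -A 3 'storage "raft"' /etc/vault.d/vault.hcl
storage "raft" {
  path    = "/vault/data"
  node_id = "vault_1"
}
```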
In this example, the path is the file system path where all the Vault data is stored, and the node_id is the identifier for the server in the Raft cluster. The example node_id is vault_1.
Create the peers.json file
Inside the storage directory (/vault/data), there is a folder named raft.
To enable the single, remaining Vault server to reach quorum and elect itself as the leader, create a raft/peers.json file that contains the server information. The file should be formatted as a JSON array containing the node ID, address:port, and suffrage information of the healthy Vault server (e.g. vault_1).
Example:
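A minimal sketch, assuming vault_1 listens for cluster traffic on 127.0.0.1 and Vault's default cluster port 8201; substitute your server's cluster address:

```
# Write the recovery peer list into the raft folder of the storage directory.
# Ensure the file is readable by the user the Vault service runs as.
cat > /vault/data/raft/peers.json <<'EOF'
[
  {
    "id": "vault_1",
    "address": "127.0.0.1:8201",
    "non_voter": false
  }
]
EOF
```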
id (string: <required>) - Specifies the node ID of the server.
address (string: <required>) - Specifies the host and port of the server. The port is the server's cluster port.
non_voter (bool: <false>) - Controls whether the server is a non-voter.
Restart Vault
Restart the Vault process to enable Vault to load the new peers.json file.
Note
If you use systemd, a SIGHUP signal will not work; restart the service instead.
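For example, assuming Vault runs under a systemd unit named vault:

```
# A full restart (not a reload) is required so Vault reads raft/peers.json at startup.
# The unit name "vault" is an assumption; match it to your installation.
sudo systemctl restart vault
```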
Verify success
The recovery procedure is successful when Vault starts up and the system logs show messages indicating that the Raft recovery configuration from peers.json was applied.
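One way to review the startup logs, assuming Vault runs under systemd with journal logging and a unit named vault:

```
# Inspect recent Vault logs for the Raft recovery messages.
# The unit name and time window are assumptions.
sudo journalctl -u vault --since "10 minutes ago" | grep -i raft
```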
Unseal Vault
If not configured to use auto-unseal, unseal Vault and then check the status.
Example:
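A minimal sketch, assuming a Shamir seal and the VAULT_ADDR exported earlier:

```
# Unseal with your existing unseal keys; repeat until the threshold is met.
vault operator unseal

# Confirm the server is unsealed and reports itself as the active node.
vault status
```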
View the peer list
You now have a cluster with one server, which can reach quorum on its own. Verify that there is only one server in the cluster with the vault operator raft list-peers command.
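The output should resemble the following; the address shown is an assumption carried over from the earlier example:

```
$ vault operator raft list-peers
Node       Address           State     Voter
----       -------           -----     -----
vault_1    127.0.0.1:8201    leader    true
```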
Next steps
In this tutorial, you recovered from the loss of quorum by converting a 3-server cluster into a single-server cluster using a peers.json file. The peers.json file enabled you to manually overwrite the Raft peer list with the one remaining server, which allowed that server to reach quorum and complete a leader election.
If the failed servers are recoverable, the best option is to bring them back online and have them reconnect to the cluster using the same host addresses. This will return the cluster to a fully healthy state. In such an event, the raft/peers.json file should contain the node ID, address:port, and suffrage information of each Vault server you wish to be in the cluster.
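As a hedged sketch, a three-server raft/peers.json might look like the following; the node IDs, addresses, and the default cluster port 8201 are assumptions to adjust for your environment:

```
# A peers.json listing every server that should be in the recovered cluster.
# Place the same file in the raft directory on each stopped server before restarting it.
cat > /vault/data/raft/peers.json <<'EOF'
[
  {
    "id": "vault_1",
    "address": "10.0.1.1:8201",
    "non_voter": false
  },
  {
    "id": "vault_2",
    "address": "10.0.1.2:8201",
    "non_voter": false
  },
  {
    "id": "vault_3",
    "address": "10.0.1.3:8201",
    "non_voter": false
  }
]
EOF
```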
See the Outage Recovery documentation for more detail.