# Autopilot
Autopilot is a set of new features added in Nomad 0.8 to allow for automatic operator-friendly management of Nomad servers. It includes cleanup of dead servers, monitoring the state of the Raft cluster, and stable server introduction.
To enable Autopilot features (with the exception of dead server cleanup), the `raft_protocol` setting in the server stanza must be set to 3 on all servers. This setting defaults to 2; a cluster configured with protocol 2 can be upgraded to protocol 3 with a rolling update, provided time is allowed for membership to stabilize after each server is updated. During an upgrade from Raft protocol 2 to 3, use the `nomad operator raft list-peers` command between server updates to verify that each server's identifier is replaced with a UUID. For more information, consult the Version Upgrade section on Raft Protocol versions.
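For example, a sketch of the expected output once all servers are on protocol 3 (node names, UUIDs, and addresses here are illustrative):

```shell
$ nomad operator raft list-peers
Node          ID                                    Address         State     Voter  RaftProtocol
node1.global  72c2a0ef-8a06-4c7e-9bdf-3e91f3c8a1a2  10.0.1.10:4647  leader    true   3
node2.global  f9c9b1b2-3c5d-4e6f-8a7b-9c0d1e2f3a4b  10.0.1.11:4647  follower  true   3
node3.global  3a4b5c6d-7e8f-9a0b-1c2d-3e4f5a6b7c8d  10.0.1.12:4647  follower  true   3
```

On Raft protocol 2, the ID column shows the server's address rather than a UUID.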
## Configuration
The configuration of Autopilot is loaded by the leader from the agent's Autopilot settings when initially bootstrapping the cluster:
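A typical `autopilot` stanza in the agent configuration looks like this (the values shown are the documented defaults as of Nomad 0.8):

```hcl
autopilot {
  cleanup_dead_servers      = true
  last_contact_threshold    = "200ms"
  max_trailing_logs         = 250
  server_stabilization_time = "10s"
  enable_redundancy_zones   = false
  disable_upgrade_migration = false
  enable_custom_upgrades    = false
}
```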
After bootstrapping, the configuration can be viewed or modified either via the `operator autopilot` subcommand or the `/v1/operator/autopilot/configuration` HTTP endpoint:
View the configuration:
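```shell
$ nomad operator autopilot get-config
CleanupDeadServers = true
LastContactThreshold = 200ms
MaxTrailingLogs = 250
ServerStabilizationTime = 10s
EnableRedundancyZones = false
DisableUpgradeMigration = false
EnableCustomUpgrades = false
```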
Update the configuration, for example to turn off dead server cleanup:
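```shell
$ nomad operator autopilot set-config -cleanup-dead-servers=false
Configuration updated!
```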
View the configuration again to confirm your changes:
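```shell
$ nomad operator autopilot get-config
CleanupDeadServers = false
LastContactThreshold = 200ms
MaxTrailingLogs = 250
ServerStabilizationTime = 10s
EnableRedundancyZones = false
DisableUpgradeMigration = false
EnableCustomUpgrades = false
```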
## Dead server cleanup
Dead servers will periodically be cleaned up and removed from the Raft peer set, to prevent them from interfering with the quorum size and leader elections. This cleanup will also happen whenever a new server is successfully added to the cluster.
Prior to Autopilot, it would take 72 hours for dead servers to be automatically reaped, or operators had to script a `nomad force-leave`. If another server failure occurred, it could jeopardize the quorum, even if the failed Nomad server had been automatically replaced. Autopilot helps prevent these kinds of outages by quickly removing failed servers as soon as a replacement Nomad server comes online. When servers are removed by the cleanup process, they enter the "left" state.
This option can be disabled by running `nomad operator autopilot set-config` with the `-cleanup-dead-servers=false` option.
## Server health checking
An internal health check runs on the leader to monitor the stability of servers. A server is considered healthy if all of the following conditions are true:
- Its status according to Serf is 'Alive'
- The time since its last contact with the current leader is below `LastContactThreshold`
- Its latest Raft term matches the leader's term
- The number of Raft log entries it trails the leader by does not exceed `MaxTrailingLogs`
The status of these health checks can be viewed through the `/v1/operator/autopilot/health` HTTP endpoint, with a top-level `Healthy` field indicating the overall status of the cluster:
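A representative response for a one-server cluster (the ID, address, and timestamps are illustrative):

```json
{
  "Healthy": true,
  "FailureTolerance": 0,
  "Servers": [
    {
      "ID": "e349749b-3303-3ddf-959c-b5885a0e1f6e",
      "Name": "node1.global",
      "Address": "10.0.1.10:4647",
      "SerfStatus": "alive",
      "Version": "0.8.0",
      "Leader": true,
      "LastContact": "0s",
      "LastTerm": 2,
      "LastIndex": 10,
      "Healthy": true,
      "Voter": true,
      "StableSince": "2018-03-28T18:28:52Z"
    }
  ]
}
```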
## Stable server introduction
When a new server is added to the cluster, there is a waiting period where it must be healthy and stable for a certain amount of time before being promoted to a full, voting member. This can be configured via the `ServerStabilizationTime` setting.
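For instance, a hedged example of raising the stabilization window to 30 seconds via the corresponding `set-config` flag:

```shell
$ nomad operator autopilot set-config -server-stabilization-time=30s
Configuration updated!
```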
The following Autopilot features are available only in Nomad Enterprise version 0.8.0 and later.
## Server read and scheduling scaling
With the `non_voting_server` option, a server can be explicitly marked as a non-voter and will never be promoted to a voting member. This can be useful when more read scaling is needed; being a non-voter means that the server will still have data replicated to it, but it will not be part of the quorum that the leader must wait for before committing log entries. Non-voting servers can also act as scheduling workers to increase scheduling throughput in large clusters.
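A minimal sketch of an agent configuration marking a server as a non-voter (this assumes Nomad Enterprise, where the `non_voting_server` parameter is available in the server stanza):

```hcl
server {
  enabled           = true
  non_voting_server = true
}
```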
## Redundancy zones
Prior to Autopilot, it was difficult to deploy servers in a way that took advantage of isolated failure domains such as AWS Availability Zones; users would be forced to either have an overly-large quorum (2-3 nodes per AZ) or give up redundancy within an AZ by deploying only one server in each.
If the `EnableRedundancyZones` setting is enabled, Nomad will look for a zone in each server's specified `redundancy_zone` field.
Here's an example showing how to configure this:
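First, specify a zone in each server's agent configuration:

```hcl
/* config.hcl */
server {
  redundancy_zone = "west-1"
}
```

Then enable the feature through Autopilot (the flag follows the `set-config` options shown earlier):

```shell
$ nomad operator autopilot set-config -enable-redundancy-zones=true
Configuration updated!
```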
Nomad will then use these values to partition the servers by redundancy zone, and will aim to keep one voting server per zone. Extra servers in each zone will stay as non-voters on standby to be promoted if the active voter leaves or dies.
## Upgrade migrations
Autopilot in Nomad Enterprise supports upgrade migrations by default. To disable this functionality, set `DisableUpgradeMigration` to true.
When a new server is added and Autopilot detects that its Nomad version is newer than that of the existing servers, Autopilot will avoid promoting the new server until enough newer-versioned servers have been added to the cluster. When the count of new servers equals or exceeds that of the old servers, Autopilot will begin promoting the new servers to voters and demoting the old servers. After this is finished, the old servers can be safely removed from the cluster.
To check the Nomad version of the servers, use either the autopilot health endpoint or the `nomad server members` command:
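A sketch of the output partway through a migration from 0.7.1 to 0.8.0 (node names and addresses are illustrative); the Build column carries the version that Autopilot compares:

```shell
$ nomad server members
Name          Address    Port  Status  Leader  Protocol  Build  Datacenter  Region
node1.global  10.0.1.10  4648  alive   true    3         0.7.1  dc1         global
node2.global  10.0.1.11  4648  alive   false   3         0.7.1  dc1         global
node3.global  10.0.1.12  4648  alive   false   3         0.8.0  dc1         global
```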
## Migrations without a Nomad version change
The `EnableCustomUpgrades` field can be used to override the version information used during a migration, so that the migration logic can be used for updating the cluster when changing configuration. If the `EnableCustomUpgrades` setting is set to `true`, Nomad will look for a version in each server's specified `upgrade_version` tag. The upgrade logic follows semantic versioning, and the `upgrade_version` must be in the form of either `X`, `X.Y`, or `X.Y.Z`.
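A minimal sketch, assuming Nomad Enterprise with the `upgrade_version` parameter set in each server's stanza:

```hcl
server {
  upgrade_version = "1.1.0"
}
```

with the feature enabled via `set-config`:

```shell
$ nomad operator autopilot set-config -enable-custom-upgrades=true
Configuration updated!
```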