Stateful workloads with Nomad host volumes
Nomad host volumes can manage storage for stateful workloads running inside your Nomad cluster. This tutorial walks you through deploying a MySQL workload using a host volume for persistent storage.
Nomad host volumes provide a workload-agnostic way to specify resources, available for Nomad drivers like `exec`, `java`, and `docker`. See the `host_volume` specification for more information about supported drivers. Nomad is also aware of host volumes during the scheduling process, enabling it to make scheduling decisions based on the availability of host volumes on a specific client.
Contrast this with Nomad's support for Docker volumes. Because Docker volumes are managed outside of Nomad and the Nomad scheduler is not aware of them, Docker volumes must either be deployed to all clients, or operators must use an additional, manually maintained constraint to inform the scheduler where they are present.
Prerequisites
To perform the tasks described in this guide, you need a Nomad environment (v0.12.0 or greater) with Consul installed. You can use this Terraform environment to provision a sandbox environment. This tutorial assumes a cluster with one server node and three client nodes.
Note
This tutorial is for demo purposes and assumes only a single server node. Please consult the reference architecture for production configuration.
Install the MySQL client
You will use the MySQL client to connect to your MySQL database and verify your data. Ensure it is installed on a node with access to port 3306 on your Nomad clients. For example (exact package names can vary by platform version):
Ubuntu:
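```shell-session
$ sudo apt install mysql-client
```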
CentOS:
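```shell-session
$ sudo yum install mysql
```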
macOS via Homebrew:
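```shell-session
$ brew install mysql-client
```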
Build the host volume
Create a target directory
On a Nomad client node in your cluster, create a directory that will be used for persisting the MySQL data. For this example, create the directory `/opt/mysql/data`:
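```shell-session
$ sudo mkdir -p /opt/mysql/data
```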
You might need to change the owner on this folder if the Nomad client does not run as the root user.
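For example, if the client runs as a dedicated `nomad` user (an assumed username; substitute whatever user your client actually runs as):

```shell-session
$ sudo chown -R nomad:nomad /opt/mysql/data
```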
Configure the client
Edit the Nomad configuration on this Nomad client to create the host volume. Add a `host_volume` block to the `client` block of your Nomad configuration:
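```hcl
client {
  # Expose the directory created above as a host volume named "mysql".
  host_volume "mysql" {
    path      = "/opt/mysql/data"
    read_only = false
  }
}
```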
Save this change, and then restart the Nomad service on this client to make the host volume active. While still on the client, you can verify that the host volume is configured by using the `nomad node status` command as shown below:
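```shell-session
# Assumes systemd manages Nomad; restart however your environment runs it.
$ sudo systemctl restart nomad
$ nomad node status -short -self
```

The output should include a `Host Volumes` line listing `mysql`.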
Deploy MySQL
Create the job file
You are now ready to deploy a MySQL database that can use Nomad host volumes for storage. Create a file called `mysql.nomad.hcl` and provide it the following contents:
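The job file below is a sketch. The datacenter name (`dc1`) and resource values are assumptions to adjust for your cluster, and the image is assumed to be a demo image pre-seeded with an `itemcollection` database containing an `items` table, which later steps in this tutorial query. If you substitute a plain MySQL image, create that database and table yourself.

```hcl
job "mysql-server" {
  datacenters = ["dc1"] # assumed datacenter name; match your cluster
  type        = "service"

  group "mysql-server" {
    count = 1

    # Claim the host volume defined in the client configuration.
    volume "mysql" {
      type      = "host"
      read_only = false
      source    = "mysql"
    }

    network {
      port "db" {
        static = 3306
      }
    }

    restart {
      attempts = 10
      interval = "5m"
      delay    = "25s"
      mode     = "delay"
    }

    task "mysql-server" {
      driver = "docker"

      # Mount the claimed volume at MySQL's data directory.
      volume_mount {
        volume      = "mysql"
        destination = "/var/lib/mysql"
        read_only   = false
      }

      env {
        MYSQL_ROOT_PASSWORD = "password" # demo only; see the note below
      }

      config {
        image = "hashicorp/mysql-portworx-demo:latest" # assumed demo image
        ports = ["db"]
      }

      resources {
        cpu    = 500 # placeholder values; adjust for your cluster
        memory = 1024
      }

      # Register the service in Consul so clients can discover it by name.
      service {
        name = "mysql-server"
        port = "db"

        check {
          type     = "tcp"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}
```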
Notes about the above job specification
- The service name is `mysql-server`, which you will use later to connect to the database.
- The `read_only` argument is supplied on all of the volume-related stanzas to help highlight all of the places you would need to change in order to make a read-only volume mount. Please see the `host_volume`, `volume`, and `volume_mount` specifications for more details.
- For lower-memory instances, you might need to reduce the requested memory in the `resources` stanza to harmonize with available resources in your cluster.
Run the job
Register the job file you created in the previous step with the following command:
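```shell-session
$ nomad job run mysql.nomad.hcl
```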
Check the status of the allocation and ensure the task is running:
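```shell-session
$ nomad job status mysql-server
```

The Allocations table at the bottom of the output shows the allocation's status; `nomad alloc status <alloc_id>` (with the allocation ID from that table) gives task-level detail.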
Write data to MySQL
Connect to MySQL
Using the mysql client (installed earlier), connect to the database and access the information:
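The `web` user and `itemcollection` database below are assumptions tied to the demo image in the job sketch above; adjust them if you used a different image.

```shell-session
$ mysql -h mysql-server.service.consul -u web -p -D itemcollection
```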
The password for this demo database is `password`.
Note
This tutorial is for demo purposes and does not follow best practices for securing database passwords. See Keeping Passwords Secure for more information.
Consul is installed alongside Nomad in this cluster, so you are able to connect using the `mysql-server` service name you registered with your task in your job file.
Add test data
Once you are connected to the database, verify that the table `items` exists:
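```sql
SHOW TABLES;
```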
Display the contents of this table with the following command:
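```sql
SELECT * FROM items;
```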
Now add some data to this table. After you terminate your database in Nomad and bring it back up, this data should still be intact:
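For example, assuming the demo table's single `name` column:

```sql
INSERT INTO items (name) VALUES ('bike');
```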
Run the `INSERT INTO` command as many times as you like with different values.
Once you are done, type `exit` to return to the Nomad client command line:
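```shell-session
mysql> exit
```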
Destroy the database job
Run the following command to stop and purge the MySQL job from the cluster:
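```shell-session
$ nomad job stop -purge mysql-server
```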
Verify no jobs are running in the cluster:
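```shell-session
$ nomad job status
```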
In more advanced cases, the directory backing the host volume could be a mounted network filesystem, such as NFS, or a cluster-aware filesystem, such as GlusterFS. This can enable more complex, automatic failure-recovery scenarios in the event of a node failure.
Re-deploy and verify
Using the `mysql.nomad.hcl` job file from earlier, re-deploy the database to the Nomad cluster:
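```shell-session
$ nomad job run mysql.nomad.hcl
```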
Once you re-connect to MySQL, you should be able to see that the information you added prior to destroying the database is still present:
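```sql
SELECT * FROM items;
```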
Clean up
Once you have completed this guide, you should perform the following cleanup steps:
1. Stop and purge the `mysql-server` job.
2. Remove the `host_volume "mysql"` stanza from your Nomad client configuration, and restart the Nomad service on that client.
3. Remove the `/opt/mysql/data` folder, along with any of the enclosing directory tree that you no longer require.
Summary
In this guide, you configured a host volume on a Nomad client using a client-local directory. You created a job that mounted this volume to a Docker MySQL container and wrote data that persisted beyond the job's lifecycle.