Stateful workloads with Container Storage Interface
Nomad’s Container Storage Interface (CSI) integration can manage external storage volumes for stateful workloads running inside your cluster. CSI providers are third-party plugins that run as Nomad jobs and can mount volumes created by your cloud provider. Nomad is aware of CSI-managed volumes during the scheduling process, enabling it to schedule your workloads based on the availability of volumes on a specific client.
Each storage provider builds its own CSI plugin, and you can leverage all of them in Nomad. You can launch jobs that claim storage volumes from AWS Elastic Block Storage (EBS) or Elastic File System (EFS) volumes, GCP persistent disks, Digital Ocean droplet storage volumes, or vendor-agnostic third-party providers like Portworx. This also means that many plugins written by storage providers to support Kubernetes will support Nomad as well. You can find a list of plugins in the Kubernetes CSI Developer Documentation.
Unlike Nomad’s host_volume feature, CSI-managed volumes can be added and removed from a Nomad cluster without changing the Nomad client configuration.
Using Nomad’s CSI integration consists of three core workflows: running CSI plugins, registering volumes for those plugins, and running jobs that claim those volumes. In this guide, you'll run the AWS Elastic Block Storage (EBS) plugin, register an EBS volume for that plugin, and deploy a MySQL workload that claims that volume for persistent storage.
Prerequisites
To perform the tasks described in this guide, you need:
a Nomad environment on AWS with Consul installed. You can use this Terraform environment to provision a sandbox environment. This tutorial will assume a cluster with one server node and two client nodes.
Nomad v1.3.0 or greater
Note
This tutorial is for demo purposes and only assumes a single server node. Consult the reference architecture for production configuration.
Install the MySQL client
You will use the MySQL client to connect to your MySQL database and verify your data. Ensure it is installed on a node with access to port 3306 on your Nomad clients; example install commands for each platform follow the list below.
Ubuntu:
CentOS:
macOS via Homebrew:
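For example (the package names below are typical but may vary by release, so treat them as a sketch):

```shell
# Ubuntu:
sudo apt-get update && sudo apt-get install -y mysql-client

# CentOS:
sudo yum install -y mysql

# macOS via Homebrew:
brew install mysql-client
```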
Deploy an AWS EBS volume
Next, create an AWS EBS volume for the CSI plugin to mount where needed for your jobs using the same Terraform stack you used to create the Nomad cluster.
Add the following new resources to your Terraform stack.
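A sketch of those resources follows. The policy document and role policy attach mount permissions to your existing instance role, and the aws_ebs_volume resource is the data volume itself. The resource names, the aws_iam_role.instance_role reference, the availability zone, the volume size, and the aws-ebs0 plugin ID in the output are assumptions; align them with the names already in your Terraform stack.

```hcl
# Allow the client instances to attach and detach (but not create) volumes.
data "aws_iam_policy_document" "mount_ebs_volumes" {
  statement {
    effect = "Allow"

    actions = [
      "ec2:DescribeInstances",
      "ec2:DescribeTags",
      "ec2:DescribeVolumes",
      "ec2:AttachVolume",
      "ec2:DetachVolume",
    ]

    resources = ["*"]
  }
}

resource "aws_iam_role_policy" "mount_ebs_volumes" {
  name   = "mount-ebs-volumes"
  role   = aws_iam_role.instance_role.id # assumed name of the existing instance role
  policy = data.aws_iam_policy_document.mount_ebs_volumes.json
}

# The data volume you will attach via CSI later.
resource "aws_ebs_volume" "mysql" {
  availability_zone = "us-east-1a" # must match the AZ of your Nomad clients
  size              = 40
}

# Volume definition rendered for `nomad volume register`.
output "ebs_volume" {
  value = <<EOM
type        = "csi"
id          = "mysql"
name        = "mysql"
external_id = "${aws_ebs_volume.mysql.id}"
plugin_id   = "aws-ebs0"

capability {
  access_mode     = "single-node-writer"
  attachment_mode = "file-system"
}
EOM
}
```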
Run terraform plan and terraform apply to create the new IAM policy and EBS volume. Then run terraform output ebs_volume > volume.hcl. You'll use this file later to register the volume with Nomad.
Notes about the above Terraform configuration
The IAM policy document and role policy are being added to the existing instance role for your EC2 instances. This policy will give the EC2 instances the ability to mount the volume you've created in Terraform, but will not give them the ability to create new volumes.
The EBS volume resource is the data volume you will attach via CSI later. The output will be used to register the volume with Nomad.
Enable privileged Docker jobs
CSI Node plugins must run as privileged Docker jobs because they use bidirectional mount propagation in order to mount disks to the underlying host.
CSI plugins running as node or monolith type require root privileges (or CAP_SYS_ADMIN on Linux) to mount volumes on the host. With the Docker task driver, you can use the privileged = true configuration, but no other default task drivers currently have this option.
Nomad’s default configuration does not allow privileged Docker jobs, and must be edited to allow them.
Bidirectional mount propagation can be dangerous and can damage the host operating system. For this reason, it is only allowed in privileged containers.
To enable privileged jobs, edit the configuration for all of your Nomad clients, set allow_privileged to true inside of the Docker plugin’s configuration, and restart the Nomad client process to load this new configuration.
If your Nomad client configuration does not already specify a Docker plugin configuration, this minimal one will allow privileged containers. Add it to your Nomad client configuration and restart Nomad.
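A minimal Docker plugin block along these lines should be enough (the rest of your client configuration is assumed to stay as-is):

```hcl
plugin "docker" {
  config {
    allow_privileged = true
  }
}
```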
There are certain Docker configurations that can prevent privileged containers from performing mounts on the host. The error message will likely contain the phrase "linux mounts: path ... is mounted on ... but it is not a shared mount". More information can be found in the Docker forums.
If you do not have privileged containers enabled in Nomad, you will receive an error when you submit the plugin-aws-ebs-nodes job.
Deploy the EBS plugin
Plugins for CSI are run as Nomad jobs with a csi_plugin stanza. The official plugin for AWS EBS can be found on GitHub in the aws-ebs-csi-driver repository. It’s packaged as a Docker container that you can run with the Docker task driver.
Each CSI plugin supports one or more types: Controllers and Nodes. Node instances of a plugin need to run on every Nomad client node where you want to mount volumes. You'll probably want to run Node plugin instances as Nomad system jobs. Some plugins also require coordinating Controller instances that can run on any Nomad client node.
The AWS EBS plugin requires a controller plugin to coordinate access to the EBS volume, and node plugins to mount the volume to the EC2 instance. You'll create the controller job as a Nomad service job and the node job as a Nomad system job.
Create a file for the controller job called plugin-ebs-controller.nomad.hcl with the following content.
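The following is a sketch rather than the exact upstream job specification: the datacenter name, container image tag, resource sizes, and the plugin ID aws-ebs0 are assumptions you may need to adjust.

```hcl
# plugin-ebs-controller.nomad.hcl -- sketch; image tag, plugin ID, and
# datacenter are assumptions.
job "plugin-aws-ebs-controller" {
  datacenters = ["dc1"]

  group "controller" {
    task "plugin" {
      driver = "docker"

      config {
        image = "amazon/aws-ebs-csi-driver:v1.5.1"

        args = [
          "controller",
          "--endpoint=unix://csi/csi.sock",
          "--logtostderr",
          "--v=5",
        ]
      }

      csi_plugin {
        id        = "aws-ebs0"
        type      = "controller"
        mount_dir = "/csi"
      }

      resources {
        cpu    = 500
        memory = 256
      }
    }
  }
}
```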
Create a file for the node job named plugin-ebs-nodes.nomad.hcl with the following content.
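Again, this is a sketch under the same assumptions as the controller job; note the privileged = true setting, which ties back to the previous section.

```hcl
# plugin-ebs-nodes.nomad.hcl -- sketch; image tag, plugin ID, and
# datacenter are assumptions.
job "plugin-aws-ebs-nodes" {
  datacenters = ["dc1"]

  # Run a node plugin instance on every client node.
  type = "system"

  group "nodes" {
    task "plugin" {
      driver = "docker"

      config {
        image = "amazon/aws-ebs-csi-driver:v1.5.1"

        args = [
          "node",
          "--endpoint=unix://csi/csi.sock",
          "--logtostderr",
          "--v=5",
        ]

        # Node plugins use bidirectional mount propagation, which requires
        # a privileged container (see the previous section).
        privileged = true
      }

      csi_plugin {
        id        = "aws-ebs0"
        type      = "node"
        mount_dir = "/csi"
      }

      resources {
        cpu    = 500
        memory = 256
      }
    }
  }
}
```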
Deploy the plugin jobs
Deploy both jobs with nomad job run plugin-ebs-controller.nomad.hcl and nomad job run plugin-ebs-nodes.nomad.hcl. It will take a few moments for the plugins to register themselves as healthy with Nomad after the job itself is running. You can check the plugin status with the nomad plugin status command.
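For example, assuming the plugin ID aws-ebs0 from the job sketches above:

```shell
nomad job run plugin-ebs-controller.nomad.hcl
nomad job run plugin-ebs-nodes.nomad.hcl

# Check the plugin until the controller and node instances report healthy.
nomad plugin status aws-ebs0
```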
Note that the plugin does not have a namespace, even though the jobs that launched it do. Plugins are treated as resources available to the whole cluster in the same way as Nomad clients.
Register the volume
The CSI plugins need to be told about each volume they manage, so for each volume you'll run nomad volume register. Earlier you used Terraform to output a volume.hcl file with the volume definition.
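Register the volume with that file, then check its status. The volume ID mysql below matches the ID used in the Terraform output sketch and in the deregister step of the cleanup section; adjust it if you chose a different ID.

```shell
nomad volume register volume.hcl
nomad volume status mysql
```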
The volume status output indicates that the volume is ready to be scheduled, but has no allocations currently using it.
Deploy MySQL
Create the job file
You are now ready to deploy a MySQL database that can use the CSI volume you registered for storage. Create a file called mysql.nomad.hcl and add the following contents.
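The sketch below shows the shape of such a job. The mysql-server job and service names, the mysql volume source, and the password value follow this guide; the container image, data directory, port layout, and resource sizes are assumptions, so adjust them for your environment.

```hcl
# mysql.nomad.hcl -- sketch; image, datadir, and sizing are assumptions.
job "mysql-server" {
  datacenters = ["dc1"]
  type        = "service"

  group "mysql-server" {
    count = 1

    # Claim the CSI volume registered earlier.
    volume "mysql" {
      type            = "csi"
      read_only       = false
      source          = "mysql"
      access_mode     = "single-node-writer"
      attachment_mode = "file-system"
    }

    network {
      port "db" {
        static = 3306
      }
    }

    task "mysql-server" {
      driver = "docker"

      # Mount the claimed volume into the task.
      volume_mount {
        volume      = "mysql"
        destination = "/srv"
        read_only   = false
      }

      env {
        MYSQL_ROOT_PASSWORD = "password"
      }

      config {
        image = "hashicorp/mysql-portworx-demo:latest" # assumed demo image
        args  = ["--datadir", "/srv/mysql"]
        ports = ["db"]
      }

      resources {
        cpu    = 500
        memory = 1024
      }

      service {
        name = "mysql-server"
        port = "db"

        check {
          type     = "tcp"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}
```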
Notes about the above job specification
The service name is mysql-server, which you will use later to connect to the database.
The read_only argument is supplied on all of the volume-related stanzas to help highlight all of the places you would need to change to make a read-only volume mount. Consult the volume and volume_mount specifications for more details.
For lower-memory instances, you might need to reduce the requested memory in the resources stanza to fit the resources available in your cluster.
Run the job
Register the job file you created in the previous step with the following command.
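```shell
nomad job run mysql.nomad.hcl
```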
The allocation status will have a section for the CSI volume, and the volume status will show the allocation claiming the volume.
Write data to MySQL
Connect to MySQL
Using the mysql client (installed earlier), connect to the database and access the information.
The password for this demo database is password.
Note
This tutorial is for demo purposes and does not follow best practices for securing database passwords. Consult Keeping Passwords Secure for more information.
Consul is installed alongside Nomad in this cluster, so you are able to connect using the mysql-server service name you registered with the task in your job file.
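For example, assuming Consul DNS resolution works from the node where you installed the client, and that the demo image creates an itemcollection database reachable with the root user from the job sketch above (both names are assumptions):

```shell
mysql -h mysql-server.service.consul -u root -p -D itemcollection
```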
Add test data
Once you are connected to the database, verify the table items exists.
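For example, assuming the demo image pre-creates the table in the database you connected to:

```sql
SHOW TABLES;
```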
Display the contents of this table with the following command.
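```sql
SELECT * FROM items;
```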
Now add some data to this table (after you terminate your database in Nomad and bring it back up, this data should still be intact).
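For example (the name column is an assumption about the demo table's schema):

```sql
INSERT INTO items (name) VALUES ('glove');
```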
Run the INSERT INTO command as many times as you like with different values.
Once you are done, type exit and return to the Nomad client command line.
Destroy the database job
Run the following command to stop and purge the MySQL job from the cluster.
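The job name below matches the mysql-server job from the sketch above; the -purge flag removes the stopped job from Nomad entirely.

```shell
nomad job stop -purge mysql-server
```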
Verify mysql is no longer running in the cluster.
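You can confirm with the job status command; once the purge completes, Nomad reports that no job with that ID is found.

```shell
nomad job status mysql-server
```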
Re-deploy and verify
Using the mysql.nomad.hcl job file from earlier, re-deploy the database to the Nomad cluster.
Once you re-connect to MySQL, you should be able to verify that the information you added prior to destroying the database is still present.
Cleanup
Once you have completed this guide, you should perform the following cleanup steps.
Stop and purge the mysql-server job.
Unregister the EBS volume from Nomad with nomad volume deregister mysql.
Stop and purge the plugin-aws-ebs-controller and plugin-aws-ebs-nodes jobs.
Destroy the EBS volume with terraform destroy.
Summary
In this guide, you deployed a CSI plugin to Nomad, registered an AWS EBS volume for that plugin, and created a job that mounted the volume into a Docker MySQL container whose data persisted beyond the job’s lifecycle.