Users with the exec driver and host volumes
With Nomad's host volume support, you can mount any directory on the Nomad client into an allocation. These directories can be simple filesystem directories on a client, or mounted filesystems like NFS or GlusterFS.
Jobs can ask Nomad to mount these volumes to tasks within a task group.
In this tutorial, you will use a host volume named scratch to provide persistent storage to the job. To provide isolation between the tasks, the job uses the default user for exec jobs (nobody) and a user named user1.
Prerequisites
- Nomad with host volume support (v0.10.0 or greater)
- Linux (any version, any distribution)
- Nomad's exec driver available and enabled
Create the directory structure
Create a directory to back your Nomad host volume on your client node. Set it as your current directory.
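For example, using /opt/scratch as the backing path (the path itself is only an illustration; any directory on the client works, as long as you use it consistently in the later steps):

```shell
# Create the backing directory for the host volume and move into it.
sudo mkdir -p /opt/scratch
cd /opt/scratch
```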
Inside that directory, make a directory named nobody. Change the owner of the directory to the nobody user.
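For example, from inside the backing directory:

```shell
# Create the nobody directory and hand ownership to the nobody user.
sudo mkdir nobody
sudo chown nobody:nobody nobody
```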
Create a user named user1 on the same machine, then repeat the previous steps.
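A sketch, assuming a distribution where adduser accepts the useradd-style flags described below (on Debian-based systems, call useradd directly):

```shell
# Create the user and its matching group, then a directory owned by them.
sudo adduser -M -U user1
sudo mkdir user1
sudo chown user1:user1 user1
```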
You may choose to use a different username if you prefer; just remember to
stay consistent as you go.
The -M flag stops adduser from creating a home directory. The -U flag creates a group with the same name as the user and adds the user to this group.
Remove group and world access to the user1 directory.
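For example:

```shell
# Leave only the owner (user1) with access to the directory.
sudo chmod 700 user1
```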
Configure the host volume
Add the configuration to present this directory as a host volume in your Nomad configuration.
Did you know? Nomad has an option to read all configuration files in a
directory and automatically merge them together. You can use this feature to
create modular configuration layouts. You can use the -config flag more than once when running Nomad.
Add the host volume specification inside of your client stanza like this.
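A minimal sketch, assuming the /opt/scratch backing directory used earlier:

```hcl
client {
  # Expose the backing directory as a host volume named "scratch".
  host_volume "scratch" {
    path      = "/opt/scratch"
    read_only = false
  }
}
```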
If you are using more than one configuration file as suggested in the tip, don't forget to include the surrounding client stanza so that the configuration merges properly.
Restart the nomad service on this node. For example, to restart a service named nomad using systemd, run the following:
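```shell
sudo systemctl restart nomad
```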
Verify the host volume is available
Once Nomad has restarted on the client, use the nomad node status command to verify that the host volume is available. Specify the -verbose flag to view the list of host volumes. If you are still in a shell on the client node, you can use the -self flag rather than providing the node's ID as the parameter.
The command prints a lot of useful output. For now, focus on the Host Volumes section. Verify that scratch is present and agrees with the configuration you specified earlier.
Run the job
Fetch the scratch job from GitHub.
Open the scratch.nomad file in a text editor and take a moment to familiarize yourself with the job. It contains two exec tasks that start simple bash sleep loops and mounts the scratch host volume into both of them. This provides just enough of an environment to connect to in the later steps.
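The file you fetched is the source of truth; the following is only a rough sketch of what it contains, based on this description. The datacenter name and the exact sleep loop are assumptions, while the task names nobody and user1 and the /scratch mount point are reused in the steps that follow.

```hcl
job "scratch" {
  datacenters = ["dc1"]

  group "scratch" {
    # Request the host volume configured on the client.
    volume "scratch" {
      type   = "host"
      source = "scratch"
    }

    # Runs as the exec driver's default user, nobody.
    task "nobody" {
      driver = "exec"

      config {
        command = "/bin/bash"
        args    = ["-c", "while true; do sleep 30; done"]
      }

      volume_mount {
        volume      = "scratch"
        destination = "/scratch"
      }
    }

    # Runs as the user1 user created earlier.
    task "user1" {
      driver = "exec"
      user   = "user1"

      config {
        command = "/bin/bash"
        args    = ["-c", "while true; do sleep 30; done"]
      }

      volume_mount {
        volume      = "scratch"
        destination = "/scratch"
      }
    }
  }
}
```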
Run the scratch job.
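```shell
nomad job run scratch.nomad
```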
Troubleshooting startup
If your output contains a message like the following, verify that your host volume is properly configured and look for typos.
Nomad jobs using a host volume create an implicit scheduler constraint. This ensures they run on client nodes with the host volume available or wait until one is available. This message indicates that there was no client available that met the constraint.
Interact with the scratch directory
Run nomad status scratch and make note of the Allocation ID from the command output. You need it for running the nomad alloc exec command to interact with the sample environments. Export it into an environment variable named ALLOC_ID for convenience.
This one-liner can collect it for you from Nomad and store it into the variable.
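One possible one-liner; it scans the Allocations table in the status output and picks the first running allocation:

```shell
export ALLOC_ID=$(nomad status scratch | awk '/^Allocations/{found=1; next} found && /running/ {print $1; exit}')
```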
Connect to the "nobody" task
Use nomad alloc exec to connect to the "nobody" task.
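For example, assuming the task is named nobody as in the job sketch above:

```shell
nomad alloc exec -task nobody $ALLOC_ID /bin/bash
```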
At the task container's default bash prompt, verify that you are the "nobody" user by running whoami.
Change to the /scratch folder.
Do a long listing of the files in the scratch directory.
Verify that you do not have access to the user1 folder using ls and cd.
Change into the nobody folder. Use echo and redirection to create a file.
Exit the nomad alloc exec session by typing exit.
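Putting the steps above together, a session inside the nobody task might look like the following sketch (the file name and contents are placeholders):

```shell
whoami                                   # prints: nobody
cd /scratch
ls -l                                    # both the nobody and user1 directories are listed
ls user1                                 # denied: nobody cannot read the user1 directory
cd user1                                 # denied as well
cd nobody
echo "written by nobody" > nobody.txt    # succeeds: nobody owns this directory
exit
```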
Connect to the "user1" task
Start a new exec session, this time to the "user1" task.
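For example, assuming the task is named user1:

```shell
nomad alloc exec -task user1 $ALLOC_ID /bin/bash
```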
At the task container's default bash prompt, verify that you are the "user1" user by running whoami.
Change to the /scratch folder.
Do a long listing of the files in the scratch directory.
Since the nobody folder allows for read access, read the file you created earlier.
Change into the nobody folder and verify that user1 does not have write access.
Change back to the /scratch/user1 folder. Create a file here using echo and redirection.
Exit the nomad alloc exec session by typing exit.
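As a sketch, the user1 session might look like this (file names are placeholders; nobody.txt is the file created in the previous session):

```shell
whoami                                   # prints: user1
cd /scratch
ls -l
cat nobody/nobody.txt                    # readable: the nobody directory allows read access
cd nobody
touch blocked.txt                        # fails: user1 cannot write here
cd /scratch/user1
echo "written by user1" > user1.txt      # succeeds in user1's own directory
exit
```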
Verify persistence
Stop the job and restart it.
Fetch its new allocation ID into your ALLOC_ID environment variable.
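For example, reusing the one-liner from earlier to pick up the new running allocation:

```shell
nomad job stop scratch
nomad job run scratch.nomad

export ALLOC_ID=$(nomad status scratch | awk '/^Allocations/{found=1; next} found && /running/ {print $1; exit}')
```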
Connect to the "user1" task
Start a new exec session to the "user1" task.
At the task container's default bash prompt, verify that you are the "user1" user by running whoami.
Use the cat command to output the files that you created earlier in the scratch folder.
Exit the nomad alloc exec session by typing exit.
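A sketch of the verification session, again using the placeholder file names from earlier:

```shell
nomad alloc exec -task user1 $ALLOC_ID /bin/bash
whoami                            # prints: user1
cat /scratch/nobody/nobody.txt    # still present after the job restart
cat /scratch/user1/user1.txt
exit
```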
Clean up
To clean up after this tutorial, do the following.
Stop the "scratch" job
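```shell
nomad job stop scratch   # add -purge to also remove the job from status listings
```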
Remove tutorial artifacts from Nomad client
If you are no longer in a shell session to your Nomad client, connect to the Nomad client you created and configured the host volume on.
Remove the host volume configuration; restart the Nomad client process
Edit the Nomad configuration to remove the host volume stanza or additional configuration file you created earlier. Restart the Nomad process.
Verify that the host volume doesn't appear in the node status output.
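For example:

```shell
sudo systemctl restart nomad
nomad node status -verbose -self   # the Host Volumes section should no longer list scratch
```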
Remove the folder backing the host volume
Delete the folder that you created to back the host volume.
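For example, if you used /opt/scratch as the backing path:

```shell
sudo rm -rf /opt/scratch
```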
Remove the tutorial user
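For example:

```shell
sudo userdel user1
# If the matching user1 group is left behind, remove it too:
# sudo groupdel user1
```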
Dig deeper
In this tutorial, you used Nomad's ability to set an exec task's user context to provide finer-grained access permissions to a directory backing a Nomad host volume. You can use this technique to reduce the number of host volume configurations that you need to manage in your Nomad configuration.
Learn more about the key concepts used in this tutorial.