Worker-aware targets
This tutorial explores worker-aware target configuration and applying worker filters.
Background
In the multi-datacenter and multi-cloud operating models, patterns of dividing up controllers, workers and targets into appropriate regions or networks is often desired to reduce latency or comply with security standards.
Worker-aware targets allow for specification of a boolean-expression filter against worker tags to control which workers are allowed to handle a given target's session. This pattern effectively "ties" a worker to a given target. A common example is allowing a single set of controllers to live in one region, and then place workers in many regions where the targets they proxy live.
This tutorial covers the process of defining worker tags, and applying target filters to force Boundary to only connect with workers available on the target's network.
Tutorial Contents
- Get setup
- Define worker tags
- Restart the workers
- Define worker filters
- Configure updated target filters
- Verify target availability
Prerequisites
- Docker is installed
- Docker Compose is installed

Tip

Docker Desktop 20.10 and above includes the Docker Compose binary, so Compose does not require a separate installation.
- A Boundary binary greater than 0.1.5 in your `PATH`. This tutorial uses the 0.7.5 version of Boundary.
- Terraform 0.13.0 or greater in your `PATH`
- A `psql` binary greater than 13.0 in your `PATH`
- A `redis-cli` binary greater than 6.0 in your `PATH`
- A `mysql` binary greater than 8.0 in your `PATH`
In addition to Docker, Terraform, and the Boundary binary, it is important that the `psql`, `redis-cli`, and `mysql` executables are available in your `PATH` to complete this tutorial. Ensure they are properly installed before attempting to connect to the database targets provided with the Docker lab environment.
Get setup
The demo environment provided for this tutorial includes a Docker Compose cluster that deploys these containers:
- A Boundary 0.7.5 controller server
- A Postgres database
- 2 worker instances (worker1, worker2)
- 3 database targets (postgres, mysql, redis)
The Terraform Boundary provider is also used in this tutorial to easily provision resources using Docker, so Terraform must be available in your `PATH` when deploying the demo environment.
To learn more about the various Boundary components, refer back to the Start a Development Environment tutorial.
Deploy the lab environment
The lab environment can be downloaded or cloned from the following GitHub repository:
In your terminal, clone the repository to get the example files locally:
Move into the `learn-boundary-target-aware-workers` folder. Ensure that you are in the correct directory by listing its contents.
The repository contains the following files:
- `run`: A script used to deploy and tear down the Docker Compose configuration.
- `compose/docker-compose.yml`: The Docker Compose configuration file describing how to provision and network the Boundary cluster and targets.
- `compose/controller.hcl`: The controller configuration file.
- `compose/worker1.hcl`: The worker1 configuration file.
- `compose/worker2.hcl`: The worker2 configuration file.
- `terraform/main.tf`: The Terraform provisioning instructions using the Boundary provider.
- `terraform/outputs.tf`: The Terraform outputs file for printing user connection details.
This tutorial makes it easy to launch the test environment with the `run` script. Any resource deprecation warnings in the output can safely be ignored.
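Based on the re-provisioning command referenced later in this tutorial, launching the full environment should look like:

```shell
# Deploy the Docker Compose cluster and provision resources with Terraform
./run all
```
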
The user details are printed in the shell output, and can also be viewed by inspecting the `terraform/terraform.tfstate` file. You will need user1's `auth_method_id` to authenticate via the CLI and establish sessions later on. You can tear down the environment at any time by executing `./run cleanup`.

To verify that the environment deployed correctly, print the running Docker containers and notice the ones named with the prefix "boundary".
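For example, one way to filter the container list down to the Boundary resources:

```shell
# Print only container names, then keep those with the boundary prefix
docker ps --format "{{.Names}}" | grep boundary
```
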
This tutorial focuses on the relationship between the controller, workers, and the three targets `boundary_postgres_1`, `boundary_redis_1`, and `boundary_mysql_1`.
Here is a diagram that shows the Boundary cluster network configuration. The targets are only able to communicate with the worker that lives on their same network.
The pre-defined network schema associates the workers with these targets:
- postgres: worker1
- mysql: worker1
- redis: worker2
If a target's worker filter is misconfigured and associates it with the incorrect worker, Boundary produces an error stating that no workers are available to handle a connection request.
Query the targets
Start by authenticating using the CLI as user1 with the password of `password`. You will need user1's `auth_method_id` printed when deploying the lab environment. In the example below, the auth method ID is `ampw_1tT18L3AZd`.
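A sketch of the authentication command, assuming the example auth method ID shown above:

```shell
# Authenticate as user1; the CLI prompts for the password interactively
boundary authenticate password \
  -auth-method-id ampw_1tT18L3AZd \
  -login-name user1
```

Enter `password` at the prompt to complete authentication.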
In this tutorial connections are established using the `boundary connect` command. This requires the `psql`, `redis-cli`, and `mysql` CLI tools to be available in your path, along with the additional connection options provided in the examples. If these tools don't work for you, refer back to the tutorial prerequisites and ensure they are installed and in your `PATH`.
Try establishing a connection with the `boundary_postgres_1` target. The target name of `postgres` was defined in the `terraform/main.tf` file, and provisioned with the Boundary Terraform provider.
In the example below, the psql CLI option `-l` lists the available databases after a connection is made. Re-run the command a few times until you are prompted for a password, which is `postgres`.
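A sketch of the connection command. The `$POSTGRES_TARGET_ID` variable is hypothetical (substitute the ID from your environment), and the `-username` value is an assumption based on the password prompt described above:

```shell
# $POSTGRES_TARGET_ID is a hypothetical variable holding the postgres target ID
# -username postgres is an assumption for this lab database
boundary connect postgres \
  -target-id $POSTGRES_TARGET_ID \
  -username postgres \
  -- -l
```
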
After running the command a few times you should eventually be able to establish a session. If you were prompted for a password on the first try, re-run the command until you receive the error message above.
The `test1` database output confirms the existence of a test database, defined in the `compose/docker-compose.yml` Docker configuration for this sample target.
Note
If you are unable to connect to the postgres target at all, expand the Troubleshooting section below.
Occasionally the controller container may have issues initializing connections with the workers on first boot. If this occurs, you may be unable to establish a connection to any of the targets.
Remember, the following error is expected:
The above error should occur occasionally, but sometimes you should be prompted for the password for the postgres database.
If the controller container needs to be restarted, you may receive one of the following errors when connecting to the postgres target:
Error 1:
Error 2:
You may also simply receive a 400 message that `No workers are available to handle this session, or all have been filtered` with every request that you make.
If any of these error messages persist after executing `boundary connect` repeatedly, try restarting the `boundary_controller_1` container once or twice:
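Assuming the container name above:

```shell
# Restart the controller container to re-establish worker connections
docker restart boundary_controller_1
```
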
If you are still unable to establish a connection, re-provision the environment by executing `./run cleanup` followed by `./run all`, and then try again.
The connection to the Postgres target is intermittent. What's going on?
Boundary's current configuration does not define which worker is allowed to handle a request.
Recall that the targets are isolated to the following network configuration:
- postgres: worker1
- mysql: worker1
- redis: worker2
When Boundary attempts to establish a connection to the postgres target via worker2, a `psql: error` message is returned stating that the connection could not be made, because that target is not available on worker2's network.
Next, try querying the redis target. You should find similar behavior. After trying a few times, you should be able to get a response of `PONG` from the redis target.
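One hedged way to issue the query, using `boundary connect -exec` with the proxy port templated into the client arguments; the `$REDIS_TARGET_ID` variable is hypothetical:

```shell
# $REDIS_TARGET_ID is a hypothetical variable holding the redis target ID
# boundary connect substitutes {{boundary.port}} in the arguments after --
boundary connect -exec redis-cli \
  -target-id $REDIS_TARGET_ID \
  -- -p {{boundary.port}} ping
```
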
Both the postgres and redis targets only allow for connections when the correct worker is selected by Boundary to handle the request.
The lab environment purposely misconfigured the mysql target to demonstrate what happens when worker filters are applied incorrectly. You will fix this issue by updating the tags and filters in the following sections.
Try to make a connection to the mysql target.
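One hedged sketch of the attempt, using the generic `-exec` form; the variable and the mysql client flags are assumptions for this lab:

```shell
# $MYSQL_TARGET_ID is a hypothetical variable holding the mysql target ID
# boundary connect substitutes {{boundary.ip}} and {{boundary.port}} in the
# arguments passed after --; -u root is an assumption for this lab database
boundary connect -exec mysql \
  -target-id $MYSQL_TARGET_ID \
  -- --host {{boundary.ip}} --port {{boundary.port}} -u root -p
```
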
The request returns Message: No workers are available to handle this session, or all have been filtered.
In the following sections you will learn how to
correctly assign worker tags, and create filters that assign targets to the
worker available on the target's same network.
This tutorial makes use of the `boundary connect` command to establish sessions, but the Boundary Desktop app could also be used to open connections.
Worker tags
Worker tags describe where traffic should be routed and which targets a worker should be tied to. These tags are arbitrary, and left to the administrator to define and enforce.
Worker tag structure
Tags are defined as sets of key/value pairs in a worker's HCL configuration file.
HCL is JSON-compatible, so the tags can also be written in pure JSON. This has the benefit of mapping closely to the filter structure that will be implemented later.
Note that filter tags can also be specified using a pure key=value syntax. This format has some limitations, such as the inability to use an `=` as part of the key name.
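For illustration, the key=value form might look like this inside a worker stanza (the name and tag values are illustrative):

```hcl
worker {
  name = "workerN"
  # Each list entry is a single key=value pair
  tags = ["region=us-east-1", "type=prod"]
}
```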
Define worker tags
The lab environment for this tutorial includes predefined worker tags. Here are the contents of the worker stanza in the `compose/worker1.hcl` file:
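A stanza along these lines, reconstructed from the tags described next (the exact file contents may differ):

```hcl
worker {
  name = "worker1"
  tags {
    region = ["us-east-1"]
    type   = ["prod"]
  }
}
```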
With the current configuration, the tags that could be used for this worker are `name: worker1`, `region: us-east-1`, and `type: prod`.
For more specificity, these tags can be updated to specify what types of targets the worker communicates with. The postgres and mysql database targets should be handled by worker1, which lives on the same network.
Update the worker stanza in the `compose/worker1.hcl` file to include three new tags under the type key: database, postgres, and mysql.
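The updated stanza might look like the following sketch, where only the type list changes:

```hcl
worker {
  name = "worker1"
  tags {
    region = ["us-east-1"]
    # Three new type tags added alongside the existing prod tag
    type   = ["prod", "database", "postgres", "mysql"]
  }
}
```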
Perhaps our dev environments live in the us-west-1 region, and use Redis for testing purposes. Update the worker stanza in the `compose/worker2.hcl` file with type tags of database and redis.
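A sketch of the updated worker2 stanza, assuming a us-west-1 region tag as described above:

```hcl
worker {
  name = "worker2"
  tags {
    region = ["us-west-1"]
    type   = ["database", "redis"]
  }
}
```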
Restart the workers
With the updated worker tags in place, restart the workers to deploy the new configuration file.
Note
In non-containerized environments it is sufficient to stop the `boundary server` process and restart it with an updated `worker.hcl` file.
Restart worker1:
And restart worker2:
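Assuming Docker Compose's default naming with the `boundary_` prefix, the restarts might look like:

```shell
# Restart each worker container to pick up the updated HCL configuration
docker restart boundary_worker1_1
docker restart boundary_worker2_1
```
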
With the new tags in place it is time to apply filters to the targets.
Worker filters
Targets need filters applied to enable the controller to associate them with the appropriate worker. These filters can be applied via the CLI or using the Boundary Terraform provider.
Worker filter structure
Target filters are boolean expressions that reference worker tags.
As an example, here is a simple filter that matches workers whose name begins with `worker`.

"/name" matches "worker[12]"

This expression would return worker1 or worker2 in the final worker set.
Stricter filtering can easily be applied, like the following expression that matches only workers named worker2.

"/name" == "worker2"
Complex filters can be created by grouping expressions. In the next example, only workers with a name of `worker1` and a region tag of `us-east-1` would match.

"/name" == "worker1" and "us-east-1" in "/tags/region"
Further complexity can be created by compounding expressions. These are created by grouping an expression in parentheses and combining groups with operators like `and`, `or`, and `not`. In the last example, workers must have a region tag of `us-east-1` and a name of `worker1`, or a type tag of `redis`, to match. With the worker configurations defined above, this would allow worker1 or worker2 to handle the request.

("us-east-1" in "/tags/region" and "/name" == "worker1") or "redis" in "/tags/type"
If an expression fails due to a key not being found within the input data, the worker is not included in the final set. Ensure all workers that should match a given filter are populated with tags referenced in the filter string. As a corollary, it is not possible to distinguish between a worker that is not included due to the expression itself and a worker that did not have correct tags.
Define target worker filters
Next, target filters will be applied for the postgres, redis, and mysql targets.
Discover the target ids for the postgres, redis and mysql targets using recursive listing and filters.
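A sketch of the discovery step; any additional `-filter` refinement on the output is left to the reader:

```shell
# List targets across all scopes to find the postgres, redis, and mysql IDs
boundary targets list -recursive
```
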
Copy the postgres target ID, and update the target with a simple filter that selects workers with a name of `worker1`. When using the CLI, a filter is specified using the `-worker-filter` option.
Double quotes are part of the filter syntax. When using the CLI, it is likely easier to surround the `-worker-filter` argument with single quotes. Otherwise escape syntax needs to be used when surrounding the expression with double quotes.
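A sketch of the update command, with the target ID held in a hypothetical shell variable and the filter expression wrapped in single quotes:

```shell
# $POSTGRES_TARGET_ID is a hypothetical variable holding the postgres target ID
boundary targets update tcp \
  -id $POSTGRES_TARGET_ID \
  -worker-filter '"/name" == "worker1"'
```
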
Notice the `Worker Filter:` line, and ensure it contains the correct filter expression. If an expression fails due to a key not being found within the input data, the worker will not be included in the final set.
For redis apply the following filter:
"us-west-1" in "/tags/region" or "redis" in "/tags/type"
For redis this will return worker2, which has a region tag of us-west-1
and a
type tag of redis
.
And for mysql implement this filter:
"/name" == "worker1" or ("prod" in "/tags/type" and "database" in "/tags/type")"
For the mysql target only worker1 is allowed to handle requests, because the
filter allows a name tag of worker1
or a type tag of prod
and database
.
Verify target availability
With the workers tagged and filters in place for the targets, read the target data to ensure the filters were applied correctly.
First use recursive listing and a filter to find the target IDs for postgres, redis, and mysql.
Copy the target id for postgres and verify that it was set correctly by reading the target details.
Look at the `Worker Filter:` line and verify that the filter query is correct.
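For example, assuming the ID is stored in a hypothetical variable:

```shell
# Read the target details, including the Worker Filter line
boundary targets read -id $POSTGRES_TARGET_ID
```
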
Repeat this process for the `redis` and `mysql` targets.
Establish sessions
Now the filters can be validated by establishing sessions using `boundary connect`.

Verify that a session can be reliably established to the postgres target, entering the password `postgres` when prompted.
Verify that a session can be reliably established to the redis target.
Notice that the proxy information is displayed prior to the client response when using `boundary connect -exec`.
Verify that a session can be reliably established to the mysql target.
If you are not able to establish sessions to these targets, carefully check the filters applied in the previous section, and re-define them if any are set incorrectly. If the filters look correct, verify that the tags were properly applied, and that the workers were restarted to apply the new configuration.
Cleanup and teardown
The Boundary cluster containers and network resources can be cleaned up using the provided `run` script.
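Using the cleanup subcommand shown earlier:

```shell
# Tear down the containers and network resources
./run cleanup
```
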
Check your work with a quick `docker ps` and ensure there are no more containers with the `boundary_` prefix leftover. If unexpected containers still exist, execute `docker rm -f CONTAINER_NAME` against each to remove them.