Manage workers with HCP Boundary
HCP Boundary allows organizations to register their own self-managed workers. Self-managed workers can be deployed in private networks while still communicating with an upstream HCP Boundary cluster.
Note
Deploying self-managed workers with HCP Boundary requires the Boundary Enterprise binary, which is available for Linux, macOS, Windows, BSD, and Solaris. The worker version must also be kept up to date with the HCP control plane, otherwise new features will not work. The control plane version can be checked in the HCP Boundary portal. This tutorial was tested using Boundary 0.13.2.
HCP Boundary is an identity-aware proxy that sits between users and the infrastructure they want to connect to. The proxy has two components:
- A control plane that manages state around users, targets, and access policies.
- Worker nodes, assigned by the control plane once a user authenticates into HCP Boundary and selects a target.
Self-managing your workers allows Boundary users to securely connect to private endpoints (such as SSH services on hosts, databases, or HashiCorp Vault) without exposing a private network to the public or HashiCorp-managed resources.
This tutorial demonstrates the basics of how to register and manage workers using HCP Boundary.
Prerequisites
This tutorial assumes you have:
- Access to an HCP Boundary instance
- Completed the previous HCP Administration tutorials
- A publicly accessible Ubuntu instance configured as a target (see the Manage Targets tutorial)
Self-managed HCP worker binaries exist for Linux, macOS, Windows, BSD, and Solaris. This tutorial provides two options for configuring the worker instance:
- Deploy a publicly accessible Ubuntu instance as a worker, OR
- Deploy a worker locally
Regardless of the method used, the worker must have the Boundary Enterprise binary installed to be registered with HCP. If using the first option, you can follow this guide to create a publicly accessible Amazon EC2 instance to use for this tutorial.
Configure the worker
To configure a self-managed worker, the following details are required:
- HCP Cluster URL (Boundary address)
- Auth Method ID (from the Admin Console)
- Admin login name and password
Visit the Getting Started on HCP tutorial if you need to locate any of these values.
Warning
For the purposes of this tutorial, the security group policy for the AWS worker instance must accept incoming TCP connections on port 9202 to allow Boundary client connections. To learn more about creating this security group and attaching it to your instance, check the AWS EC2 security group documentation.
Log in and download Boundary Enterprise
Log in to the Ubuntu instance that will be configured as a worker.
For example, using SSH:
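A minimal sketch, assuming the default `ubuntu` login name, the example public IP used later in this tutorial, and a placeholder key file (substitute your own values):

```shell
ssh ubuntu@107.22.128.152 -i ~/.ssh/your-key.pem
```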
Note
The above example is for demonstration purposes. You will need to supply your Ubuntu instance's username, public IP address, and private key to connect. If using AWS EC2, check this article to learn more about connecting to a Linux instance using SSH.
Create a new folder to store your Boundary config file. This tutorial creates the `boundary/` directory in the user's home directory to store the worker config. If you do not have permission to create this directory, create the folder elsewhere.
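For example, assuming the `ubuntu` user's home directory:

```shell
mkdir ~/boundary/
```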
Next, download and install the Boundary Enterprise binary.
Note
The binary version should match the version of the HCP control plane. Check the control plane's version in the HCP Boundary portal, and download the appropriate version using wget. The example below installs the 0.13.2 version of the boundary binary, versioned as `0.13.2+ent`.
Enter the following commands to download and install the Boundary Enterprise binary on Ubuntu.
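A sketch of the download-and-install steps, assuming the `0.13.2+ent` release, an amd64 instance, and that `unzip` is available (adjust the version and architecture to match your control plane and machine):

```shell
# Download the enterprise release that matches the HCP control plane version
wget -q https://releases.hashicorp.com/boundary/0.13.2+ent/boundary_0.13.2+ent_linux_amd64.zip

# Extract the binary and move it onto the PATH
unzip boundary_0.13.2+ent_linux_amd64.zip
sudo mv boundary /usr/local/bin/
```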
Once installed, verify the version of the boundary binary.
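```shell
boundary version
```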
Ensure the Version Number matches the version of the HCP Boundary control plane; the versions must match to use the latest HCP Boundary features.
Write the worker config
Next, create a new file named `/home/ubuntu/boundary/worker.hcl`.
Open the file with a text editor, such as Vi.
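For example, using Vi:

```shell
vi /home/ubuntu/boundary/worker.hcl
```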
Paste the following configuration into the worker config file:
```hcl
disable_mlock = true

hcp_boundary_cluster_id = "<cluster_id>"

listener "tcp" {
  address = "0.0.0.0:9202"
  purpose = "proxy"
}

worker {
  public_addr = "<worker_public_addr>"
  auth_storage_path = "/home/ubuntu/boundary/worker1"
  tags {
    type = ["worker1", "upstream"]
  }
}
```
Update the following values in the `worker.hcl` file:

- `<cluster_id>` on line 3 should be replaced with the HCP Boundary Cluster ID, such as `c3a7a20a-f663-40f3-a8e3-1b2f69b36254`
- `<worker_public_addr>` on line 11 should be replaced with the public IP address of the Ubuntu worker, such as `107.22.128.152`
The `<cluster_id>` on line 3 can be determined from the UUID in the HCP Boundary Cluster URL. For example, if your Cluster URL is:

`https://c3a7a20a-f663-40f3-a8e3-1b2f69b36254.boundary.hashicorp.cloud`

then the Cluster ID is `c3a7a20a-f663-40f3-a8e3-1b2f69b36254`.
The `public_addr` should match the public IP or DNS name of your Ubuntu instance.
Note the `listener "tcp"` stanza:
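```hcl
listener "tcp" {
  address = "0.0.0.0:9202"
  purpose = "proxy"
}
```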
The `address` port is set to `0.0.0.0:9202`. This port should already be configured by the AWS security group for this instance to accept inbound TCP connections. If a custom listener port is desired, it should be defined here.
Save this file.
Workers have three configuration fields that can be specified:
- `auth_storage_path` is a local path where a worker will store its credentials. Storage should not be shared between workers.
- `hcp_boundary_cluster_id` accepts a Boundary cluster ID and is used by a worker when initially connecting to HCP Boundary. This field is set outside the `worker` stanza. Your cluster ID is the UUID in the controller URL. For example, if your controller URL is `https://abcd1234-e567-f890-1ab2-cde345f6g789.boundary.hashicorp.cloud`, then your cluster ID is `abcd1234-e567-f890-1ab2-cde345f6g789`.
- `initial_upstreams` indicates the address or addresses a worker will use when initially connecting to Boundary. This is an alternative to setting the HCP cluster ID, and is set within the `worker` stanza. Unless utilizing multi-hop sessions, this field should be left unset, as setting `hcp_boundary_cluster_id` is sufficient.

The worker configured with the `hcp_boundary_cluster_id` is known as the ingress worker, which provides access to the HCP Boundary cluster. Ingress workers ensure that connectivity is always available even if the HCP-managed upstream workers change.
Note
In the above example both the `auth_storage_path` and `hcp_boundary_cluster_id` are specified. If `initial_upstreams` was configured instead, then the `hcp_boundary_cluster_id` would be omitted. Do not set both `hcp_boundary_cluster_id` and `initial_upstreams` together, as the HCP cluster ID will take precedence.
To see all valid config options, refer to the worker configuration docs.
Start the worker
With the worker config defined, start the worker server. Provide the full path to the worker config file (such as `/home/ubuntu/boundary/worker.hcl`).
```shell
boundary server -config="/home/ubuntu/boundary/worker.hcl"
```
The worker will start and begin attempting to connect to the upstream Controller, printing a log message "worker is not authenticated to an upstream, not sending status".
The worker also outputs its authorization request as the Worker Auth Registration Request. This value is also saved to a file, `auth_request_token`, defined by the `auth_storage_path` in the worker config.
Note the Worker Auth Registration Request: value in the worker's startup output. This value can also be located in the `/boundary/auth_request_token` file. Copy this value.
Exit the Ubuntu worker.
Open a terminal session on your local machine, where Boundary 0.9.0 or greater is installed.
Register the worker with HCP
HCP workers can be registered using the Boundary CLI or Admin Console Web UI.
Authenticate to HCP Boundary as the admin user.
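A sketch of the CLI login, assuming the cluster URL, auth method ID, and admin login name gathered in the prerequisites (substitute your own values):

```shell
# Point the CLI at your HCP Boundary cluster and password auth method
export BOUNDARY_ADDR="https://<cluster_id>.boundary.hashicorp.cloud"
export BOUNDARY_AUTH_METHOD_ID="<auth_method_id>"

# Authenticate as the admin user; you will be prompted for the password
boundary authenticate password \
  -auth-method-id=$BOUNDARY_AUTH_METHOD_ID \
  -login-name=<admin_login_name>
```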
Log in to the HCP portal.
From the HCP Portal's Boundary page, click Open Admin UI. A new page will open.
Enter the admin username and password you created when you deployed the new instance and click Authenticate.
Once logged in, navigate to the Workers page.
Notice that only HCP workers are listed.
Click New.
The new workers page can be used to construct the contents of the `worker.hcl` file.
Do not fill in any of the worker fields.
Providing the following details will construct the worker config file contents for you:
- Boundary Cluster ID
- Worker Public Address
- Config file path
- Worker Tags
The instructions on this page provide details for installing the Boundary Enterprise binary and deploying the constructed config file.
Because the worker has already been deployed, only the Worker Auth Registration Request key needs to be provided on this page.
Scroll down to the bottom of the New Worker page and paste the Worker Auth Registration Request key you copied earlier.
Click Register Worker.
Click Done and notice the new worker on the Workers page.
Worker-aware targets
From the Manage Targets tutorial you should already have a configured target.
List the available targets:
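For example, using the CLI (a sketch; `-recursive` lists targets across all scopes you can read):

```shell
boundary targets list -recursive
```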
Export the target ID as an environment variable:
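A sketch, assuming a placeholder target ID (TCP target IDs are prefixed with `ttcp_`):

```shell
export TARGET_ID=ttcp_1234567890
```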
Boundary workers can be assigned tags, key-value pairs that targets use to determine which workers should route their connections.
A simple tag was included in the `worker.hcl` file from before:
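```hcl
tags {
  type = ["worker1", "upstream"]
}
```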
This config creates the resulting tags on the worker:
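A sketch of the resulting tag set; the exact rendering in the CLI or Admin UI may differ:

```json
{
  "tags": {
    "type": ["worker1", "upstream"]
  }
}
```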
In this scenario, only one worker is allowed to handle connections to the Ubuntu target. This worker functions as both the "ingress" worker, which handles initial connections from clients, and an "egress" worker, which establishes the final connection to the target.
In a "multi-hop" worker scenario the egress worker is the last worker in a
series of "hops" to reach targets in private network enclaves. Multi-hop workers
are explored in the next tutorial. The upstream
worker tag will be used in
next tutorial to set up multi-hop.
The `Tags` or `Name` of the worker (`worker1`) can be used to create a worker filter for the target.
Update this target to add a worker tag filter that searches for workers that have the `worker1` tag. Boundary will consider any worker with this tag assigned to it an acceptable proxy for this target.
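A sketch of the update, assuming a TCP target and the egress worker filter flag (the filter expression matches the `worker1` value under the worker's `/tags/type`):

```shell
boundary targets update tcp \
  -id $TARGET_ID \
  -egress-worker-filter='"worker1" in "/tags/type"'
```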
Note
The type: "upstream"
tag could have also been used, or a filter that searches for the name of the worker directly ("/name" == "worker1"
).
With the filter assigned, any connections to this target will be forced to proxy through the worker.
Finally, establish a connection to the target. Enter your instance's login name after the `-l` option and the path to your instance's private key after the `-i` option.
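A minimal sketch, assuming the `ubuntu` login name and a placeholder key path; flags after `--` are passed through to the underlying ssh client:

```shell
boundary connect ssh -target-id $TARGET_ID -- -l ubuntu -i ~/.ssh/your-key.pem
```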
Sessions can be managed using the same methods discussed in the Manage Sessions tutorial.
When finished, the session can be terminated manually, or canceled via another authenticated Boundary command. Sessions can also be managed using the Admin Console UI.
Note
To cancel this session using the CLI, you will need to open a new terminal window and re-export the `BOUNDARY_ADDR` and `BOUNDARY_AUTH_METHOD_ID` environment variables. Then log back into Boundary using `boundary authenticate`.
Cancel the existing session.
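A sketch of canceling via the CLI, assuming a placeholder session ID (session IDs are prefixed with `ss_`):

```shell
# Locate the session ID
boundary sessions list -recursive

# Cancel the session
boundary sessions cancel -id ss_1234567890
```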
Summary
This tutorial demonstrated self-managed worker registration with HCP Boundary and discussed worker management. You deployed a self-managed worker, registered the worker with HCP Boundary, and tested the proxy connection to a target.
To continue learning about Boundary, check out the Multi-Hop Sessions tutorial.