Cluster peering on Kubernetes technical specifications
This reference topic describes the technical specifications associated with using cluster peering in your Kubernetes deployments. These specifications include required Helm values and required custom resource definitions (CRDs), as well as required Consul components and their configurations. To learn more about Consul's cluster peering feature, refer to cluster peering overview.
For cluster peering requirements in non-Kubernetes deployments, refer to cluster peering technical specifications.
General requirements
Make sure your Consul environment meets the following prerequisites:
- Consul v1.14 or higher
- Consul on Kubernetes v1.0.0 or higher
- At least two Kubernetes clusters
You must also configure the service mesh components described in the following sections in order to establish cluster peering connections.
Helm specifications
Consul's default configuration supports cluster peering connections directly between clusters. In production environments, we recommend using mesh gateways to securely route service mesh traffic between partitions with cluster peering connections. To enable mesh gateways, set the required values in the Helm chart, as in the following example configuration:
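The exact values depend on your deployment, but a minimal sketch of the Helm values that enable mesh gateways might look like the following. It assumes TLS is enabled for the service mesh; adjust the gateway's service type and replica count for your environment.

```yaml
# values.yaml (sketch): enable TLS and a mesh gateway for cluster peering traffic
global:
  tls:
    enabled: true   # mesh gateways require TLS for traffic between clusters
meshGateway:
  enabled: true     # deploys a mesh gateway that routes traffic between peered clusters
```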
After mesh gateways are enabled in the Helm chart, you can separately configure Mesh CRDs.
CRD specifications
You must create the following CRDs in order to establish a peering connection:
PeeringAcceptor
: Generates a peering token and accepts an incoming peering connection.

PeeringDialer
: Uses a peering token to make an outbound peering connection with the cluster that generated the token.
Refer to the following example CRDs:
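The following sketches show what these CRDs can look like. The peer names cluster-01 and cluster-02 and the peering-token secret name are illustrative; the Kubernetes secret backend stores the generated token so the dialing cluster can use it.

```yaml
# PeeringAcceptor (sketch): applied in cluster-01 to generate a token for a peer named cluster-02
apiVersion: consul.hashicorp.com/v1alpha1
kind: PeeringAcceptor
metadata:
  name: cluster-02
spec:
  peer:
    secret:
      name: peering-token   # Kubernetes secret that stores the generated peering token
      key: data
      backend: kubernetes
```

```yaml
# PeeringDialer (sketch): applied in cluster-02 to dial the peer named cluster-01 using the copied token
apiVersion: consul.hashicorp.com/v1alpha1
kind: PeeringDialer
metadata:
  name: cluster-01
spec:
  peer:
    secret:
      name: peering-token   # Kubernetes secret that contains the peering token from the acceptor
      key: data
      backend: kubernetes
```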
Mesh gateway specifications
To change Consul's default configuration and enable cluster peering through mesh gateways, use a mesh configuration entry to update your network's service mesh proxies globally:
1. In cluster-01, create the Mesh custom resource with peeringThroughMeshGateways set to true and save it as mesh.yaml, as shown in the sketch after this list.
2. Apply the mesh CRD to cluster-01.
3. Apply the mesh CRD to cluster-02.
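A sketch of the mesh.yaml resource referenced in step 1 follows. It assumes the Mesh CRD exposes the Mesh configuration entry's PeerThroughMeshGateways parameter as peerThroughMeshGateways under spec.peering; verify the exact schema against the Mesh CRD reference.

```yaml
# mesh.yaml (sketch): route cluster peering traffic through mesh gateways
apiVersion: consul.hashicorp.com/v1alpha1
kind: Mesh
metadata:
  name: mesh
spec:
  peering:
    peerThroughMeshGateways: true
```

Apply the same resource to both clusters, for example with kubectl apply and the cluster context variables mentioned in the note below.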
Note: For help setting up the cluster context variables used in this example, refer to assign cluster IDs to environmental variables.
When cluster peering through mesh gateways, consider the following deployment requirements:
- A Consul cluster requires a registered mesh gateway in order to export services to peers in other regions or cloud providers.
- The mesh gateway must also be registered in the same admin partition as the exported services and their exported-services configuration entry. An enterprise license is required to use multiple admin partitions with a single cluster of Consul servers.
- To use the local mesh gateway mode, you must register a mesh gateway in the importing cluster.
- Define the Proxy.Config settings using opaque parameters compatible with your proxy. For additional Envoy proxy configuration information, refer to Gateway options and Escape-hatch overrides.
Mesh gateway modes
By default, all cluster peering connections use mesh gateways in remote mode. Be aware of these additional requirements when changing a mesh gateway's mode.
- For mesh gateways that connect peered clusters, you can set the mode as either remote or local.
- The none mode is invalid for mesh gateways with cluster peering connections.
To learn how to change the mesh gateway mode to local on your Kubernetes deployment, refer to configure the mesh gateway mode for traffic between services.
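As an illustration, a minimal sketch of setting the mode globally with a ProxyDefaults resource follows. The local mode shown here is one option; the linked page covers the supported approaches and per-service settings.

```yaml
# ProxyDefaults (sketch): use the local mesh gateway mode for all proxies
apiVersion: consul.hashicorp.com/v1alpha1
kind: ProxyDefaults
metadata:
  name: global
spec:
  meshGateway:
    mode: local   # route traffic through a mesh gateway registered in the local (importing) cluster
```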
Exported service specifications
The exported-services CRD is required in order for services to communicate across partitions with cluster peering connections. Basic guidance on using the exported-services configuration entry is included in Establish cluster peering connections. Refer to the exported-services configuration entry for more information.
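A minimal sketch of such a CRD follows. The service name backend and the peer name cluster-01 are placeholders for your own exported service and the peer that consumes it.

```yaml
# ExportedServices (sketch): make the backend service available to the peer named cluster-01
apiVersion: consul.hashicorp.com/v1alpha1
kind: ExportedServices
metadata:
  name: default   # matches the admin partition the services are exported from
spec:
  services:
    - name: backend
      consumers:
        - peer: cluster-01
```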