CN2 Intercluster Endpoint Discovery

SUMMARY Starting with Release 23.4, Cloud-Native Contrail Networking (CN2) supports intercluster endpoint discovery.

Overview

CN2 provides a BGP-based control plane that allows you to interconnect multiple Kubernetes clusters so that they exchange routing information. This enables cross-cluster pod-to-pod and pod-to-service communication. Previously, this communication was limited to IP address-based access.

DNS within Kubernetes is automatically configured to make a service discoverable by DNS name, following the convention <service>.<namespace>.svc.<cluster-domain>. For example, foo.default.svc.cluster.local. To make the service discoverable on a peer cluster, you must create a corresponding service on that cluster with an attribute that allows the remote pods to be addressed.

This feature leverages BGP in CN2 to export information about available services and their backing pods to peer CN2 clusters, so that those services can be discovered by name and accessed from the peer cluster.

Configure CN2 Clusters to Share Routes

This section describes how to configure two CN2 clusters to leak routes to each other, using a two-cluster setup as the running example.

Prerequisites

This feature requires the following:

  • CN2 Release 23.4 is installed and operational.

  • You are operating in a working cloud networking environment using Kubernetes orchestration.

  • You have two CN2 clusters running.

Configure BGP Peering

Each of the two CN2 clusters must be configured with a unique autonomous system (AS) number to identify the cluster in the BGP network. In the following example, we have two CN2 clusters, the first with AS 64513 and the second with AS 64514.
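As a sketch only (the resource and field names below are assumptions; verify them against the CRD reference for your CN2 release), the cluster-wide AS number is held in a cluster-scoped configuration object along the following lines:

```yaml
# Hypothetical sketch for cluster 1 -- verify the exact resource kind and
# field names against your CN2 release before applying.
apiVersion: core.contrail.juniper.net/v1alpha1
kind: GlobalSystemConfig
metadata:
  name: default-global-system-config
spec:
  autonomousSystem: 64513   # cluster 2 would use 64514
```

The key point is simply that each cluster carries a distinct AS number; where that number is set depends on how the cluster was deployed.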

Define a Custom Route Target (RT)

A custom route target is required to make service addresses routable from the CN2 VirtualNetworks default-servicenetwork and default-podnetwork. In the following example, the RT value 42000000 is used, which falls in the user-defined range when 32-bit ASN support is enabled.
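As a sketch (the spec field name is an assumption based on the Contrail VirtualNetwork data model; check the VirtualNetwork CRD in your release), the custom RT is attached to each default network's route target list:

```yaml
# Assumed structure: add the custom RT to default-servicenetwork's spec
# (repeat the same change for default-podnetwork).
# RT format: target:<AS>:<value> -- here cluster 1's AS 64513 and RT 42000000.
spec:
  routeTargetList:
    - target:64513:42000000
```

Both clusters must agree on the RT value so that routes tagged with it on one cluster are imported by the other.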

Configure BGPRouter for Cluster 2 on Cluster 1

Note that in the following example, the name, address, and identifier fields should match the values configured at deployment time on cluster 2.
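The original example manifest is not reproduced here; the following is a hedged sketch of a BGPRouter object for cluster 2 created on cluster 1 (the metadata name is hypothetical, and the spec field names are assumptions to be checked against the BGPRouter CRD in your release; the address and AS values come from the examples in this document):

```yaml
# Sketch: BGPRouter on cluster 1 describing cluster 2's control node.
# name, address, and identifier must match cluster 2's deployment-time values.
apiVersion: core.contrail.juniper.net/v1alpha1
kind: BGPRouter
metadata:
  name: cluster2-control-0        # hypothetical; must match cluster 2's control node name
  namespace: contrail
spec:
  bgpRouterParameters:
    address: 10.74.190.55         # cluster 2 IP from the verification example below
    identifier: 10.74.190.55
    autonomousSystem: 64514       # cluster 2's AS number
```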

Configure BGPRouter for Cluster 1 on Cluster 2

Note that in the following example, the name, address, and identifier fields should match the values configured at deployment time on cluster 1.
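This is the mirror image of the previous step. A hedged sketch (hypothetical name; assumed spec field names; address and AS values from the examples in this document):

```yaml
# Sketch: BGPRouter on cluster 2 describing cluster 1's control node.
# name, address, and identifier must match cluster 1's deployment-time values.
apiVersion: core.contrail.juniper.net/v1alpha1
kind: BGPRouter
metadata:
  name: cluster1-control-0        # hypothetical; must match cluster 1's control node name
  namespace: contrail
spec:
  bgpRouterParameters:
    address: 10.74.190.74         # cluster 1 IP from the verification example below
    identifier: 10.74.190.74
    autonomousSystem: 64513       # cluster 1's AS number
```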

Verify that BGP Peering Works

In the above example, the cluster 1 IP address is 10.74.190.74 and the cluster 2 IP address is 10.74.190.55. You can validate the peering by querying the BGP neighbor status on each cluster's control node (for example, through the control node's introspect interface). On each system you should see two BGP neighbors listed, and the state of each neighbor should be Established.

Create Export Service

On the cluster exporting the service (here cluster-export.local), create the service you want to export with the additional label core.juniper.net/serviceExport: <export-name>.
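For illustration, a minimal exported service might look as follows (the service name and namespace match the DNS example later in this document; the selector and ports are hypothetical placeholders — only the serviceExport label is prescribed by this feature):

```yaml
# Sketch: service on cluster-export.local, marked for export.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-namespace
  labels:
    core.juniper.net/serviceExport: my-export   # <export-name> of your choosing
spec:
  selector:
    app: my-app        # hypothetical pod selector
  ports:
    - port: 80
      targetPort: 80
```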

Create Import Service

On the cluster importing the service (here cluster-import.local), create a corresponding service with the same name and the additional label core.juniper.net/serviceImport: <export-name>.

The clusterIP must be set to None. This identifies the service as "headless," and no Endpoint is created for it. Port configuration is also required, but do not set targetPort explicitly: targetPort must match port, which is its default when omitted.
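Putting those requirements together, the importing service might look as follows (name, namespace, export name, and port mirror the export example; only the serviceImport label, the headless clusterIP, and the port rules are prescribed by this feature):

```yaml
# Sketch: headless service on cluster-import.local, bound to the export.
apiVersion: v1
kind: Service
metadata:
  name: my-service                 # same name as the exported service
  namespace: my-namespace
  labels:
    core.juniper.net/serviceImport: my-export   # matches <export-name>
spec:
  clusterIP: None                  # headless; no Endpoint is created
  ports:
    - port: 80                     # do not set targetPort; it must match port
```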

After the YAML file is deployed, you can access the service from pods in cluster-import.local via my-service.my-namespace.svc.cluster-import.local. You now have two CN2 clusters that are exchanging routes through BGP.

Example YAML files are located in feature_tests/tests/test-yaml/endpoint-discovery/.