Manage Multi-Cluster CN2

SUMMARY Learn how to perform life cycle management tasks specific to a multi-cluster installation.

This section covers tasks that are specific to a multi-cluster installation. If you want to perform management tasks in a specific cluster within the multi-cluster installation, then see Manage Single Cluster CN2.

Attach a Workload Cluster in Release 23.1

Use this procedure to create and attach a distributed workload cluster to an existing central cluster in release 23.1.

The general procedure is:

  1. Create the workload cluster (without a node group).

  2. Enable communications between the workload cluster and the central cluster:

    • Allow connectivity between the workload cluster VPC and the central cluster VPC. You can accomplish this in a number of ways. The method that we'll use is VPC peering.

    • Create routes for traffic to flow between the two clusters.

    • Configure secrets to authenticate Kubernetes control plane traffic between the two clusters.

  3. On the central cluster, create the kubemanager that manages the new distributed workload cluster.

  4. Apply the CN2 distributed workload cluster manifest and add a node group.

The manifests that you will use in this example procedure are multi-cluster/distributed_cluster_deployer_example.yaml, multi-cluster/distributed_cluster_certmanager_example.yaml, and multi-cluster/distributed_cluster_vrouter_example.yaml. The procedure assumes that you've placed these manifests into a manifests directory.

Note:

Before starting, make sure you've created the central cluster. See Install Multi-Cluster CN2 in Release 23.1.

  1. Store relevant central cluster information into variables that we'll use later.
    If you're coming directly from installing the central cluster in Install Multi-Cluster CN2 in Release 23.1, then skip this step because you've already set the variables.
    Otherwise:
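    For example, a sketch that uses the hypothetical central cluster name cn2-m-central and our own variable names CENTRAL_REGION, CENTRAL_VPC_ID, and CENTRAL_VPC_CIDR (substitute your cluster name and region):

      CENTRAL_REGION=us-west-2
      CENTRAL_VPC_ID=$(aws eks describe-cluster --name cn2-m-central --region $CENTRAL_REGION --query 'cluster.resourcesVpcConfig.vpcId' --output text)
      CENTRAL_VPC_CIDR=$(aws ec2 describe-vpcs --vpc-ids $CENTRAL_VPC_ID --region $CENTRAL_REGION --query 'Vpcs[0].CidrBlock' --output text)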
  2. Create the distributed workload cluster. Do not add node groups to the cluster yet.
    1. Create a yaml file that describes the cluster. We'll name this file eksctl-workload-cluster.yaml.
      For example:
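      A minimal sketch of such a file; the region and CIDR values are placeholders, and the serviceIPv4CIDR shown matches the example used later in this procedure:

        apiVersion: eksctl.io/v1alpha5
        kind: ClusterConfig
        metadata:
          name: cn2-m-workload
          region: us-west-2
        kubernetesNetworkConfig:
          serviceIPv4CIDR: 172.30.0.0/16
        vpc:
          cidr: 192.168.0.0/16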

      Populate the file with your desired values:

      • name - the name you want to call the cluster

      • region - the AWS region of your cluster

      • serviceIPv4CIDR - the IP address subnet you want to assign to Kubernetes services

      • cidr - the IP address subnet you want to assign to your VPC

      Make sure the service IP and VPC CIDRs in the workload cluster differ from those in the central cluster.

    2. Apply this yaml to create the distributed workload cluster (without node groups).
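      For example:

        eksctl create cluster -f eksctl-workload-cluster.yaml --without-nodegroup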
    3. Store the workload cluster's region and VPC CIDR into variables for later use.
      For example:
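      Use the values from your eksctl-workload-cluster.yaml; the variable names are our own:

        WORKLOAD_REGION=us-west-2
        WORKLOAD_VPC_CIDR=192.168.0.0/16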
  3. Show the contexts of the clusters.
    When we issue kubectl commands later, we'll need to direct the commands to the desired cluster. We'll use <central-cluster-context> and <workload-cluster-context> for that purpose.
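    For example, list the contexts and note the names for the two clusters:

      kubectl config get-contexts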
  4. Configure VPC peering to allow the central cluster VPC and the distributed workload cluster VPC to communicate with each other.
    1. Store the workload VPC ID into a variable for later use.
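      For example (WORKLOAD_VPC_ID is our own variable name):

        WORKLOAD_VPC_ID=$(aws eks describe-cluster --name cn2-m-workload --region $WORKLOAD_REGION --query 'cluster.resourcesVpcConfig.vpcId' --output text)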
    2. Create the peering request and store into a variable for later use.
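      For example, a sketch that requests the peering from the central cluster's side:

        PEERING_ID=$(aws ec2 create-vpc-peering-connection --region $CENTRAL_REGION --vpc-id $CENTRAL_VPC_ID --peer-vpc-id $WORKLOAD_VPC_ID --peer-region $WORKLOAD_REGION --query 'VpcPeeringConnection.VpcPeeringConnectionId' --output text)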
    3. Accept the peering request.
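      For example, accepting from the workload cluster's region:

        aws ec2 accept-vpc-peering-connection --region $WORKLOAD_REGION --vpc-peering-connection-id $PEERING_ID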
  5. Add routes to each VPC so that the central and workload cluster VPCs can route to each other.
    1. Add a route to the central cluster VPC to allow it to reach the workload cluster VPC.
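      For example, a sketch that adds the route to the first route table returned for the VPC; if your cluster subnets use other route tables, add the route to each of them:

        CENTRAL_RTB_ID=$(aws ec2 describe-route-tables --region $CENTRAL_REGION --filters Name=vpc-id,Values=$CENTRAL_VPC_ID --query 'RouteTables[0].RouteTableId' --output text)
        aws ec2 create-route --region $CENTRAL_REGION --route-table-id $CENTRAL_RTB_ID --destination-cidr-block $WORKLOAD_VPC_CIDR --vpc-peering-connection-id $PEERING_ID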
    2. Add a route to the workload cluster VPC to allow it to reach the central cluster VPC.
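      For example, with the same caveat about route tables:

        WORKLOAD_RTB_ID=$(aws ec2 describe-route-tables --region $WORKLOAD_REGION --filters Name=vpc-id,Values=$WORKLOAD_VPC_ID --query 'RouteTables[0].RouteTableId' --output text)
        aws ec2 create-route --region $WORKLOAD_REGION --route-table-id $WORKLOAD_RTB_ID --destination-cidr-block $CENTRAL_VPC_CIDR --vpc-peering-connection-id $PEERING_ID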
  6. Configure the appropriate central cluster VPC security group to accept traffic from this workload cluster VPC.
    If you're coming to this procedure directly from Install Multi-Cluster CN2 in Release 23.1, then go straight to step 6.3 because you've already set the CENTRAL_SG_ID variable.
    1. Find the security group ID by looking up the EC2 instance ID of the central cluster.
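      For example, a sketch that assumes the central cluster's nodes carry the eks:cluster-name tag (managed node groups do) and continues with the example name cn2-m-central:

        CENTRAL_INSTANCE_ID=$(aws ec2 describe-instances --region $CENTRAL_REGION --filters "Name=tag:eks:cluster-name,Values=cn2-m-central" --query 'Reservations[0].Instances[0].InstanceId' --output text)
        aws ec2 describe-instances --region $CENTRAL_REGION --instance-ids $CENTRAL_INSTANCE_ID --query 'Reservations[0].Instances[0].SecurityGroups'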
      Look in the output for the security group you created in Install Multi-Cluster CN2 in Release 23.1. If you're following this example, the security group is called workload-to-central.
    2. Store the GroupID corresponding to workload-to-central into a variable.
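      For example:

        CENTRAL_SG_ID=$(aws ec2 describe-security-groups --region $CENTRAL_REGION --filters Name=group-name,Values=workload-to-central --query 'SecurityGroups[0].GroupId' --output text)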
    3. Add a rule to allow incoming traffic from the workload cluster VPC.
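      For example, a sketch that opens all protocols from the workload cluster VPC (restrict this to the ports your deployment needs if you prefer):

        aws ec2 authorize-security-group-ingress --region $CENTRAL_REGION --group-id $CENTRAL_SG_ID --protocol all --cidr $WORKLOAD_VPC_CIDR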
  7. Similarly, configure a workload cluster VPC security group to accept traffic from the central cluster VPC.
    1. Create a security group on the workload cluster VPC.
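      For example, a sketch that names the new security group central-to-workload (the name is our own choice):

        WORKLOAD_SG_ID=$(aws ec2 create-security-group --region $WORKLOAD_REGION --vpc-id $WORKLOAD_VPC_ID --group-name central-to-workload --description "Allow traffic from the central cluster VPC" --query 'GroupId' --output text)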
    2. Add a rule to allow incoming traffic from the central cluster VPC.
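      For example:

        aws ec2 authorize-security-group-ingress --region $WORKLOAD_REGION --group-id $WORKLOAD_SG_ID --protocol all --cidr $CENTRAL_VPC_CIDR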
  8. Configure the distributed workload cluster to allow access from the central cluster.
    There are a few ways to do this. We'll configure the aws-auth config map to add a mapping for the central cluster's EC2 instance role. This allows all workloads on the central cluster to access the workload cluster.
    If you want finer-grained access control that grants access only to the necessary CN2 components, you can use fine-grained IAM roles for service accounts. With this approach, you would create new roles for the central cluster's contrail-k8s-deployer and contrail-k8s-kubemanager service accounts and map those instead. See the AWS documentation (https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/) on how to configure IAM roles for service accounts.
    1. Get the central cluster role and store into a variable for later use.
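      For example, a sketch that reads the node role from one of the central cluster's node groups; list the node groups first if you don't know the name, and replace <central-nodegroup-name> with it:

        aws eks list-nodegroups --region $CENTRAL_REGION --cluster-name cn2-m-central
        CENTRAL_ROLE_ARN=$(aws eks describe-nodegroup --region $CENTRAL_REGION --cluster-name cn2-m-central --nodegroup-name <central-nodegroup-name> --query 'nodegroup.nodeRole' --output text)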
    2. Create a mapping for the central cluster's role.
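      For example, one way is to let eksctl edit the aws-auth config map for you; the username is arbitrary, and system:masters grants the broad access described above:

        eksctl create iamidentitymapping --region $WORKLOAD_REGION --cluster cn2-m-workload --arn $CENTRAL_ROLE_ARN --username central-cluster --group system:masters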
  9. On the central cluster, attach the new workload cluster by creating a kubemanager for the new distributed workload cluster.
    1. Create the secret that describes the distributed workload cluster.
      Store the following into a yaml file. In this example, we call it access-workload.yaml. Then create the secret:
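      For example:

        kubectl --context <central-cluster-context> apply -f access-workload.yaml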
    2. Create the kubemanager manifest with the following content. Choose a meaningful name for the manifest (for example, kubemanager-cluster1.yaml).
      Table 1 explains the parameters that you need to set.
      Table 1: Kubemanager CRD

      Parameter | Meaning | Example
      name | The name of the custom resource. | kubemanager-cluster1
      image | The repository that you pull the image from, including the image path and tag. | enterprise-hub.juniper.net/contrail-container-prod/contrail-k8s-kubemanager:23.1.0.282
      podV4Subnet | The IPv4 pod subnet that you want to use for the distributed workload cluster. This subnet must be unique within the entire multi-cluster. | 10.111.64.0/18
      serviceV4Subnet | The IPv4 service subnet that you configured earlier for the distributed workload cluster. This subnet must be unique within the entire multi-cluster. | 172.30.0.0/16
      clusterName | The name of the workload cluster. | cn2-m-workload
      kubeconfigSecretName | The name of the secret that you just created for the workload cluster. Note: Despite its name, this secret is not derived from the kubeconfig. | access-workload
      listenerPort | The port that the Contrail controller listens on for communications with this workload cluster. Set the port for the first workload cluster to 19446 and increment by 1 for each subsequent workload cluster. | 19446
      constantRouteTargetNumber | The route target for this workload cluster. Set the route target for the first workload cluster to 7699 and increment by 100 for each subsequent workload cluster. | 7699
    3. On the central cluster, apply the kubemanager manifest you just created.
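      For example:

        kubectl --context <central-cluster-context> apply -f kubemanager-cluster1.yaml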
  10. Install CN2 components on the distributed workload cluster.
    1. Create the secret that describes the central cluster.
      Store the following into a yaml file. In this example, we call it access-central.yaml. Then create the contrail-deploy namespace and create the secret:
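      For example:

        kubectl --context <workload-cluster-context> create namespace contrail-deploy
        kubectl --context <workload-cluster-context> apply -f access-central.yaml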
    2. On the workload cluster, apply the deployer manifest. The deployer provides life cycle management for the CN2 components.
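      For example, using the manifests directory described at the start of this procedure:

        kubectl --context <workload-cluster-context> apply -f manifests/distributed_cluster_deployer_example.yaml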
    3. On the workload cluster, apply the cert-manager manifest. The cert-manager provides encryption for all management and control plane connections.
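      For example:

        kubectl --context <workload-cluster-context> apply -f manifests/distributed_cluster_certmanager_example.yaml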
    4. On the workload cluster, add the vRouter.
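      For example:

        kubectl --context <workload-cluster-context> apply -f manifests/distributed_cluster_vrouter_example.yaml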
    5. On the workload cluster, add the worker nodes (node group).
      eksctl create nodegroup --region $WORKLOAD_REGION --cluster cn2-m-workload --node-type m5.large --node-ami-family AmazonLinux2 --max-pods-per-node 100 --node-private-networking --node-security-groups $WORKLOAD_SG_ID
    6. Verify that all pods are up. This may take a few minutes.
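      For example:

        kubectl --context <workload-cluster-context> get pods -A

      All pods should show a status of Running or Completed.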
  11. Verify the installation.
    1. On the central cluster, verify that you can see the distributed workload cluster's namespaces.
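      For example:

        kubectl --context <central-cluster-context> get namespaces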

      The namespaces are in the following format: <kubemanager-name>-<workload-cluster-name>-<namespace>. With the example names used in this procedure, you would expect to see entries such as:
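        kubemanager-cluster1-cn2-m-workload-default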

You've now created and attached a distributed workload cluster to an existing central cluster.