Manage Multi-Cluster CN2
SUMMARY Learn how to perform life cycle management tasks specific to a multi-cluster installation.
This section covers tasks that are specific to a multi-cluster installation. If you want to perform management tasks in a specific cluster within the multi-cluster installation, then see Manage Single Cluster CN2.
Attach a Workload Cluster in Release 23.1
Use this procedure to create and attach a distributed workload cluster to an existing central cluster in release 23.1.
The general procedure is:

- Create the workload cluster (without a node group).
- Enable communications between the workload cluster and the central cluster:
  - Allow connectivity between the workload cluster VPC and the central cluster VPC. You can accomplish this in a number of ways. The method that we'll use is VPC peering.
  - Create routes for traffic to flow between the two clusters.
  - Configure secrets to authenticate Kubernetes control plane traffic between the two clusters.
- On the central cluster, create the kubemanager that manages the new distributed workload cluster.
- Apply the CN2 distributed workload cluster manifest and add a node group.
The manifests that you will use in this example procedure are multi-cluster/distributed_cluster_deployer_example.yaml, multi-cluster/distributed_cluster_certmanager_example.yaml, and multi-cluster/distributed_cluster_vrouter_example.yaml. The procedure assumes that you've placed these manifests into a manifests directory.
Before starting, make sure you've created the central cluster. See Install Multi-Cluster CN2 in Release 23.1.
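If you want to confirm that the central cluster is reachable before you proceed, a quick check (assuming the cn2-m-central name and us-west-2 region from that install procedure):
eksctl get cluster --name cn2-m-central --region us-west-2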
1. Store relevant central cluster information into variables that we'll use later.
   If you're coming directly from installing the central cluster in Install Multi-Cluster CN2 in Release 23.1, then skip this step because you've already set the variables. Otherwise:
export CENTRAL_REGION=us-west-2
export CENTRAL_VPC_CIDR=192.168.0.0/16
export CENTRAL_VPC_ID=$(eksctl get cluster --name cn2-m-central -o json --region $CENTRAL_REGION | jq -r .[0].ResourcesVpcConfig.VpcId)
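To confirm the lookups succeeded, a quick sanity check (these are just the variables set above; all three should print non-empty values):
echo $CENTRAL_REGION $CENTRAL_VPC_CIDR $CENTRAL_VPC_ID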
2. Create the distributed workload cluster. Do not add node groups to the cluster yet.
   a. Create a yaml file that describes the cluster. We'll name this file eksctl-workload-cluster.yaml.
      For example:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: cn2-m-workload
  region: us-east-2
kubernetesNetworkConfig:
  serviceIPv4CIDR: 172.30.0.0/16
vpc:
  cidr: 10.120.0.0/16
Populate the file with your desired values:
- name - the name you want to call the cluster
- region - the AWS region of your cluster
- serviceIPv4CIDR - the IP address subnet you want to assign to Kubernetes services
- cidr - the IP address subnet you want to assign to your VPC
Make sure the service IP and VPC CIDRs in the workload cluster differ from those in the central cluster.
   b. Apply this yaml to create the distributed workload cluster (without node groups).
eksctl create cluster --without-nodegroup --config-file eksctl-workload-cluster.yaml
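Cluster creation can take several minutes. To confirm the cluster came up (assuming the name and region used in the yaml above):
eksctl get cluster --name cn2-m-workload --region us-east-2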
   c. Store the workload cluster's region and VPC CIDR into variables for later use.
For example:
export WORKLOAD_REGION=us-east-2
export WORKLOAD_VPC_CIDR=10.120.0.0/16
3. Show the contexts of the clusters.
kubectl config get-contexts
When we issue kubectl commands later, we'll need to direct the commands to the desired cluster. We'll use <central-cluster-context> and <workload-cluster-context> for that purpose.
CURRENT   NAME                          CLUSTER                               AUTHINFO                   NAMESPACE
*         <central-cluster-context>    cn2-m-central.us-west-2.eksctl.io     <central-cluster-auth>
          <workload-cluster-context>   cn2-m-workload.us-east-2.eksctl.io    <workload-cluster-auth>
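If the auto-generated context names are unwieldy, you can optionally rename them with standard kubectl (the short names central and workload here are just examples) and then substitute those names wherever the placeholders appear:
kubectl config rename-context <central-cluster-context> central
kubectl config rename-context <workload-cluster-context> workload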
4. Configure VPC peering to allow the central cluster VPC and the distributed workload cluster VPC to communicate with each other.
   a. Store the workload VPC ID into a variable for later use.
WORKLOAD_VPC_ID=$(eksctl get cluster --name cn2-m-workload -o json --region $WORKLOAD_REGION | jq -r .[0].ResourcesVpcConfig.VpcId) && echo $WORKLOAD_VPC_ID
   b. Create the peering request and store the peering connection ID into a variable for later use.
PEER_ID=$(aws ec2 create-vpc-peering-connection --vpc-id="$CENTRAL_VPC_ID" --peer-vpc-id="$WORKLOAD_VPC_ID" --peer-region="$WORKLOAD_REGION" --query VpcPeeringConnection.VpcPeeringConnectionId --output text) && echo $PEER_ID
   c. Accept the peering request.
aws ec2 accept-vpc-peering-connection --region $WORKLOAD_REGION --vpc-peering-connection-id $PEER_ID
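To confirm the peering connection is established (querying the stored connection ID; the status code should read active):
aws ec2 describe-vpc-peering-connections --region $WORKLOAD_REGION --vpc-peering-connection-ids $PEER_ID --query 'VpcPeeringConnections[0].Status.Code' --output text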
5. Add routes to each VPC so that the central and workload cluster VPCs can route to each other.
   a. Add a route to the central cluster VPC to allow it to reach the workload cluster VPC.
for rtb in $(aws ec2 describe-route-tables --region $CENTRAL_REGION --filters Name=vpc-id,Values="$CENTRAL_VPC_ID" --query 'RouteTables[*].RouteTableId' | jq -r .[]); do
  aws ec2 create-route --region $CENTRAL_REGION --route-table-id "$rtb" --destination-cidr-block "$WORKLOAD_VPC_CIDR" --vpc-peering-connection-id "$PEER_ID"
done
   b. Add a route to the workload cluster VPC to allow it to reach the central cluster VPC.
for rtb in $(aws ec2 describe-route-tables --region $WORKLOAD_REGION --filters Name=vpc-id,Values="$WORKLOAD_VPC_ID" --query 'RouteTables[*].RouteTableId' | jq -r .[]); do
  aws ec2 create-route --region $WORKLOAD_REGION --route-table-id "$rtb" --destination-cidr-block "$CENTRAL_VPC_CIDR" --vpc-peering-connection-id "$PEER_ID"
done
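To spot-check that the peering routes were installed, you can list each route's destination and peering connection ID (shown here for the central VPC; the same pattern works for the workload VPC):
aws ec2 describe-route-tables --region $CENTRAL_REGION --filters Name=vpc-id,Values="$CENTRAL_VPC_ID" --query 'RouteTables[*].Routes[*].[DestinationCidrBlock,VpcPeeringConnectionId]' --output text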
6. Configure the appropriate central cluster VPC security group to accept traffic from this workload cluster VPC.
   If you're coming to this procedure directly from Install Multi-Cluster CN2 in Release 23.1, then go straight to step 6.c because you've already set the CENTRAL_SG_ID variable.
   a. Find the security group ID for the security group by looking up the EC2 instance ID of the central cluster.
aws ec2 describe-instances --instance-ids <ec2-instance-id> | grep -A10 SecurityGroups
      Look in the output for the security group you created in Install Multi-Cluster CN2 in Release 23.1. If you're following this example, the security group is called workload-to-central.
   b. Store the GroupID corresponding to workload-to-central into a variable.
export CENTRAL_SG_ID=<GroupID>
   c. Add a rule to allow incoming traffic from the workload cluster VPC.
aws ec2 authorize-security-group-ingress --region $CENTRAL_REGION --group-id "$CENTRAL_SG_ID" --protocol "all" --cidr "$WORKLOAD_VPC_CIDR"
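To confirm the ingress rule is in place (listing the security group's inbound permissions):
aws ec2 describe-security-groups --region $CENTRAL_REGION --group-ids $CENTRAL_SG_ID --query 'SecurityGroups[0].IpPermissions'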
7. Similarly, configure a workload cluster VPC security group to accept traffic from the central cluster VPC.
   a. Create a security group on the workload cluster VPC.
WORKLOAD_SG_ID=$(aws ec2 create-security-group --region $WORKLOAD_REGION --vpc-id $WORKLOAD_VPC_ID --group-name central-to-workload --description "Allow central to workload" | jq -r .GroupId) && echo $WORKLOAD_SG_ID
   b. Add a rule to allow incoming traffic from the central cluster VPC.
aws ec2 authorize-security-group-ingress --region $WORKLOAD_REGION --group-id "$WORKLOAD_SG_ID" --protocol "all" --cidr "$CENTRAL_VPC_CIDR"
8. Configure the distributed workload cluster to allow access from the central cluster.
   There are a few ways to do this. We'll configure the aws-auth config map to add a mapping for the central cluster's EC2 instance role. This allows all workloads on the central cluster to access the workload cluster.
   If you want finer-grained access control that grants access only to the necessary CN2 components, then you can use fine-grained IAM roles for service accounts. With this approach, you would create new roles for the central cluster's contrail-k8s-deployer and contrail-k8s-kubemanager service accounts and map those instead. See AWS documentation (https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/) on how to configure IAM roles for service accounts.
   a. Get the central cluster role and store it into a variable for later use.
CENTRAL_NODE_ROLE=$(eksctl get iamidentitymapping --cluster cn2-m-central --region $CENTRAL_REGION -o json | jq -r .[].rolearn) && echo $CENTRAL_NODE_ROLE
   b. Create a mapping for the central cluster's role.
eksctl create iamidentitymapping --cluster cn2-m-workload --region=$WORKLOAD_REGION --arn $CENTRAL_NODE_ROLE --group system:masters
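To confirm the mapping was added on the workload cluster:
eksctl get iamidentitymapping --cluster cn2-m-workload --region $WORKLOAD_REGION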
9. On the central cluster, attach the new workload cluster by creating a kubemanager for the new distributed workload cluster.
   a. Create the secret that describes the distributed workload cluster.
      Store the following into a yaml file. In this example, we call it access-workload.yaml.
kind: EKS
config:
  aws_region: us-east-2
  eks_cluster_name: cn2-m-workload
      Create the secret:
kubectl --context=<central-cluster-context> create secret generic access-workload -n contrail --from-file=kubeconfig=access-workload.yaml
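To confirm the secret exists in the contrail namespace:
kubectl --context=<central-cluster-context> get secret access-workload -n contrail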
   b. Create the kubemanager manifest with the following content. Choose a meaningful name for the manifest (for example, kubemanager-cluster1.yaml). Table 1 explains the parameters that you need to set.
apiVersion: configplane.juniper.net/v1alpha1
kind: Kubemanager
metadata:
  name: <CR name>
  namespace: contrail
spec:
  common:
    replicas: 1
    containers:
    - image: <contrail-image-repository>
      name: contrail-k8s-kubemanager
  podV4Subnet: <pod-v4-subnet-of-remote-cluster>
  serviceV4Subnet: <service-v4-subnet-of-remote-cluster>
  clusterName: <worker-cluster-name>
  kubeconfigSecretName: <secret-name>
  listenerPort: <listener-port>
  constantRouteTargetNumber: <rt-number>
Table 1: Kubemanager CRD

Parameter | Meaning | Example
name | The name of the custom resource. | kubemanager-cluster1
image | The repository where you pull images. | enterprise-hub.juniper.net/contrail-container-prod/contrail-k8s-kubemanager:23.1.0.282
podV4Subnet | The IPv4 pod subnet that you want to use for the distributed workload cluster. This subnet must be unique within the entire multi-cluster. | 10.111.64.0/18
serviceV4Subnet | The IPv4 service subnet that you configured earlier for the distributed workload cluster. This subnet must be unique within the entire multi-cluster. | 172.30.0.0/16
clusterName | The name of the workload cluster. | cn2-m-workload
kubeconfigSecretName | The name of the secret that you just created for the workload cluster. Note: This secret is not derived from a kubeconfig, as the name might suggest. | access-workload
listenerPort | The port that the Contrail controller listens on for communications with this workload cluster. Set the port for the first workload cluster to 19446 and increment by 1 for each subsequent workload cluster. | 19446
constantRouteTargetNumber | The route target for this workload cluster. Set the route target for the first workload cluster to 7699 and increment by 100 for each subsequent workload cluster. | 7699

   c. On the central cluster, apply the kubemanager manifest you just created.
kubectl --context=<central-cluster-context> apply -f kubemanager-cluster1.yaml
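To check that the central cluster accepted the new resource (assuming the Kubemanager CRD follows the usual Kubernetes convention of a lowercase plural resource name):
kubectl --context=<central-cluster-context> get kubemanagers -n contrail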
10. Install CN2 components on the distributed workload cluster.
    a. Create the secret that describes the central cluster.
       Store the following into a yaml file. In this example, we call it access-central.yaml.
kind: EKS
config:
  aws_region: us-west-2
  eks_cluster_name: cn2-m-central
       Create the contrail-deploy namespace:
kubectl --context=<workload-cluster-context> create ns contrail-deploy
       Create the secret:
kubectl --context=<workload-cluster-context> create secret generic access-central -n contrail-deploy --from-file=kubeconfig=access-central.yaml
    b. On the workload cluster, apply the deployer manifest. The deployer provides life cycle management for the CN2 components.
kubectl --context=<workload-cluster-context> apply -f manifests/distributed_cluster_deployer_example.yaml
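To watch the deployer come up in the contrail-deploy namespace:
kubectl --context=<workload-cluster-context> get pods -n contrail-deploy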
    c. On the workload cluster, apply the cert-manager manifest. The cert-manager provides encryption for all management and control plane connections.
kubectl apply --context=<workload-cluster-context> -f manifests/distributed_cluster_certmanager_example.yaml
    d. On the workload cluster, add the vRouter.
kubectl apply --context=<workload-cluster-context> -f manifests/distributed_cluster_vrouter_example.yaml
    e. On the workload cluster, add the worker nodes (node group).
eksctl create nodegroup --region $WORKLOAD_REGION --cluster cn2-m-workload --node-type m5.large --node-ami-family AmazonLinux2 --max-pods-per-node 100 --node-private-networking --node-security-groups $WORKLOAD_SG_ID
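Node group creation takes a few minutes. To confirm the worker nodes joined the workload cluster:
kubectl --context=<workload-cluster-context> get nodes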
    f. Verify that all pods on the workload cluster are up. This may take a few minutes.
kubectl --context=<workload-cluster-context> get pods -A
11. Check over the installation.
    a. On the central cluster, verify that you can see the distributed workload cluster's namespaces.
kubectl --context=<central-cluster-context> get ns
       The namespaces are in the following format: <kubemanager-name>-<workload-cluster-name>-<namespace>. For example: kubemanager-workload1-cn2-m-workload-contrail