Install Single Cluster CN2 on Amazon EKS

Release: CN2 23.1
21-Jul-23

SUMMARY See examples of how to install single cluster CN2 on Amazon EKS.

In a single cluster deployment, CN2 is the networking platform and CNI plug-in for that cluster. Figure 1 shows an Amazon EKS cluster with three worker nodes running the Contrail controller. The Amazon EKS control plane communicates with worker nodes in the user VPC over an Elastic Network Interface (ENI). In a typical deployment, there would be additional worker nodes that run the user workloads.

Figure 1: CN2 on Amazon EKS

The procedures in this section show basic examples of how you can use the provided Amazon EKS blueprints, Helm charts, and YAML manifests to install CN2 on an Amazon EKS cluster. We cover installing CN2 both in a brand-new cluster and in an existing cluster.

You're not limited to the deployment described in these sections, nor are you limited to using the provided files and manifests. CN2 supports a wide range of deployments that are too numerous to cover in detail. Use the provided examples as a starting point to create your own manifests tailored to your specific situation.

Install Single Cluster CN2 Using Amazon EKS Blueprints in Release 23.1

Use this procedure to install CN2 using Amazon EKS blueprints for Terraform in release 23.1.

The blueprint that we provide performs the following:

  • creates a new sample VPC, 3 private subnets, and 3 public subnets

  • creates an Internet gateway for the public subnets and a NAT gateway for the private subnets

  • creates an EKS cluster control plane with one managed node group (desired nodes set to 3)

  • deploys CN2 as the Amazon EKS cluster CNI

  1. Clone the AWS Integration and Automation repository. This is where the Terraform manifests are stored.
    git clone https://github.com/Juniper/terraform-aws-eks-blueprints.git -b awseks-23.1 --single-branch
  2. Add your enterprise-hub.juniper.net access credentials to terraform-aws-eks-blueprints/examples/eks-cluster-with-cn2/variables.tf for the container_pull_secret variable.
    The credentials that you add must be base64-encoded. See Configure Repository Credentials for an example of how to obtain and encode your credentials, or see the encoding sketch after this procedure.
  3. Run terraform init. This command initializes a working directory containing Terraform configuration files.
    cd examples/eks-cluster-with-cn2
    terraform init
  4. Run terraform plan. This command creates an execution plan, which lets you preview the changes that Terraform plans to make to your infrastructure.
    export AWS_REGION=<ENTER YOUR REGION>   # Select your own region
    terraform plan
    Verify the resources that this plan will create.
  5. Run terraform apply. This command executes the Terraform plan you just created.
    terraform apply
    Enter yes to apply and create the cluster.
  6. Obtain the cluster name and other details of your new Amazon EKS cluster from the Terraform output or from the AWS Console.
  7. Copy the kubeconfig onto your local computer.
    aws eks --region <enter-your-region> update-kubeconfig --name <cluster-name>
  8. Check over your new cluster.
    List your worker nodes:
    kubectl get nodes
    List all the pods:
    kubectl get pods -A
  9. (Optional) Run postflight checks. See Run Preflight and Postflight Checks in Release 23.1.
  10. If you run into problems, clean up the cluster and try the installation again.
    To clean up the cluster, destroy the Kubernetes addons, the Amazon EKS cluster, and the VPC. You must run these terraform commands in the examples/eks-cluster-with-cn2 directory.
    cd examples/eks-cluster-with-cn2
    terraform destroy -target="module.eks_blueprints_kubernetes_addons" -auto-approve
    terraform destroy -target="module.eks_blueprints" -auto-approve
    terraform destroy -target="module.vpc" -auto-approve
    Then destroy any remaining resources:
    terraform destroy -auto-approve
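
The following is a minimal sketch of one way to produce the base64-encoded credential used for the container_pull_secret variable in step 2. The JSON layout and the use of the enterprise-hub.juniper.net registry name are illustrative assumptions; the authoritative steps are in Configure Repository Credentials.

    # Hypothetical sketch: build a Docker-style registry auth file for
    # enterprise-hub.juniper.net and base64-encode it. Replace <username>
    # and <access-token> with your own credentials.
    AUTH=$(echo -n '<username>:<access-token>' | base64 -w0)
    printf '{"auths":{"enterprise-hub.juniper.net":{"auth":"%s"}}}' "$AUTH" > config.json
    base64 -w0 config.json    # use this output as the container_pull_secret value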

Install Single Cluster CN2 Using Helm Charts in Release 23.1

Use this procedure to install CN2 on an existing Amazon EKS cluster using Helm charts in release 23.1. In this example, the existing Amazon EKS cluster is running the VPC CNI.
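
If you want to confirm that the existing cluster is still running the VPC CNI before installing CN2, one quick check (shown only as an illustration) is to look for the aws-node daemonset that the VPC CNI runs in the kube-system namespace:

    kubectl get daemonset aws-node -n kube-system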

  1. Add the Juniper Networks CN2 Helm repository.
    helm repo add cn2 https://juniper.github.io/cn2-helm/
  2. Install CN2.
    helm install cn2eks cn2/cn2-eks --set imagePullSecret="<base64-encoded-credential>"
    See Configure Repository Credentials for one way to get your credentials.
  3. Use standard kubectl commands to check on the installation. To inspect the Helm release itself, see the example after this procedure.
    kubectl get nodes
    Check that the nodes are up. If the nodes are not up, wait a few minutes and check again.
    kubectl get pods -n contrail

    Check that the pods have a STATUS of Running. If not, wait a few minutes for the pods to come up.

  4. (Optional) Run postflight checks. See Run Preflight and Postflight Checks in Release 23.1.
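
In addition to the kubectl checks in step 3, you can inspect the Helm release itself. This is a generic Helm sketch rather than part of the documented procedure; the release name cn2eks matches the name used in step 2.

    helm list -A           # confirm that the cn2eks release shows a deployed status
    helm status cn2eks     # display the release status and any notes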

Install Single Cluster CN2 Using YAML Manifests in Release 23.1

Use this procedure to install CN2 using YAML manifests in release 23.1.

We use eksctl to create a cluster in this example, but you can use any other method as long as you remember to remove the default VPC CNI.
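
If you create the cluster some other way and need to remove the default VPC CNI yourself, one common approach (shown here only as an illustration; adapt it to how your cluster was built) is to delete the aws-node daemonset that the VPC CNI installs:

    kubectl delete daemonset aws-node -n kube-system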

The manifests that you will use in this example procedure are amazon-eks/single-cluster/single_cluster_deployer_example.yaml and amazon-eks/single-cluster/cert-manager.yaml. The procedure assumes that you've placed these manifests into a manifests directory.

  1. Create an EKS cluster without a node group.
    eksctl create cluster --name mycluster --without-nodegroup
    Take note of the service IP address subnet. You'll need this in a later step. By default, Amazon EKS assigns service IP addresses from either the 10.100.0.0/16 or the 172.20.0.0/16 CIDR blocks.
    eksctl get cluster --name mycluster -o json | jq .[0].KubernetesNetworkConfig
  2. Configure the service IP address subnet for the Contrail kubemanager. This subnet must match the service IP address subnet of the cluster.
    Edit the single_cluster_deployer_example.yaml manifest and look for the serviceV4Subnet configuration in the Kubemanager section.
    serviceV4Subnet: 172.20.0.0/16
    Change the subnet as necessary to match the service IP address subnet of the cluster.
  3. If desired, specify the three nodes where you want to run the Contrail controller.
    By default, the supplied manifest contains tolerations that allow the Contrail controller to tolerate any taint, which means that the Contrail controller can install on any node. Use node selectors (or node affinity) to force the Contrail controller to install on the nodes that you want, and then taint those nodes to prevent other pods from being scheduled there. Repeat the labeling and tainting for each of the three controller nodes. For one way to label and taint a node, see the sketch after this procedure.
  4. Apply the cert-manager manifest. The cert-manager provides encryption for all CN2 management and control plane connections.
    kubectl apply -f manifests/cert-manager.yaml
  5. Apply the Contrail deployer manifest.
    kubectl apply -f manifests/single_cluster_deployer_example.yaml
  6. Attach managed or self-managed worker nodes running an Amazon EKS-optimized AMI to the cluster.
    Ensure that the AMI you pick runs a kernel supported by CN2.
    eksctl create nodegroup --cluster mycluster --node-type m5.xlarge --node-ami-family AmazonLinux2 --max-pods-per-node 100 --node-private-networking
  7. (Optional) Install Contrail tools and run preflight checks. See Run Preflight and Postflight Checks in Release 23.1.
    Correct any errors before proceeding.
  8. Use standard kubectl commands to check on the deployment.
    kubectl get nodes
    Check that the nodes are up. If the nodes are not up, wait a few minutes and check again.
    kubectl get pods -n contrail

    Check that the pods have a STATUS of Running. If not, wait a few minutes for the pods to come up.

  9. (Optional) Run postflight checks. See Run Preflight and Postflight Checks in Release 23.1.
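
The following is a minimal sketch of the labeling and tainting described in step 3. The label key/value and taint shown here are arbitrary names chosen for illustration, not values taken from the CN2 manifests; whatever names you choose must match the node selectors (or node affinity) that you configure for the Contrail controller in single_cluster_deployer_example.yaml.

    # Hypothetical example: pin the Contrail controller to a chosen node.
    # Replace <node-name> with one of your worker nodes.
    kubectl label node <node-name> contrail-controller=enabled
    kubectl taint node <node-name> contrail-controller=enabled:NoSchedule
    # Repeat the label and taint commands for the other two controller nodes.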