Install Single Cluster Shared Network Contrail

20-Oct-22

SUMMARY See examples of how to install single cluster Contrail in a deployment where Kubernetes traffic and Contrail traffic share the same network.

In a single cluster shared network deployment:

  • Contrail is the networking platform and CNI plug-in for that cluster. The Contrail controller runs in the Kubernetes control plane, and the Contrail data plane components run on all nodes in the cluster.

  • Kubernetes and Contrail traffic share a single network.

Figure 1 shows the cluster that you'll create if you follow the single cluster shared network example. The cluster consists of a single control plane node and two worker nodes.

All nodes shown can be VMs or bare metal servers.

Figure 1: Single Cluster Shared Network Contrail

All communication between nodes in the cluster and between nodes and external sites takes place over the single 172.16.0.0/24 fabric virtual network. The fabric network provides the underlay over which the cluster runs.

The local administrator is shown attached to a separate network reachable through a gateway. This is typical of many installations where the local administrator manages the fabric and cluster from the corporate LAN. In the procedures that follow, we refer to the local administrator station as your local computer.

Note:

The data center fabric connects all the nodes in the cluster. It's shown in the example as a single subnet, but in real installations the data center fabric is a network of spine and leaf switches that provides the physical connectivity for the cluster.

In an Apstra-managed data center, this connectivity would be specified through the overlay virtual networks that you create across the underlying fabric switches.

The procedures in this section show basic examples of how you can use the provided manifests to create the specified Contrail deployment. You're not limited to the deployment described in this section, nor are you limited to using the provided manifests. Contrail supports a wide range of deployments that are too numerous to cover in detail. Use the provided examples as a starting point to roll your own manifest tailored to your specific situation.

Table 1: Single Cluster Shared Network Examples

  • Release 22.3, kernel mode data plane: Install Single Cluster Shared Network Contrail Running Kernel Mode Data Plane in Release 22.3

  • Release 22.3, DPDK data plane: Install Single Cluster Shared Network Contrail Running DPDK Data Plane in Release 22.3
Note:

The provided manifests may not be compatible between releases. Make sure you use the manifests for the release that you're running.

Install Single Cluster Shared Network Contrail Running Kernel Mode Data Plane in Release 22.3

Use this procedure to install Contrail in a single cluster shared network deployment running a kernel mode data plane in release 22.3.

The manifest that you will use in this example procedure is single-cluster/single_cluster_deployer_example.yaml. The procedure assumes that you've placed this manifest into a manifests directory.
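
For example, here is a minimal sketch of staging the manifest, assuming you've already downloaded and extracted the Contrail manifests bundle into a local directory named contrail-manifests-k8s (a hypothetical path used here for illustration only):

  # Create the manifests directory that the commands in this procedure use.
  mkdir -p manifests
  # Copy the single cluster deployer manifest into it.
  cp contrail-manifests-k8s/single-cluster/single_cluster_deployer_example.yaml manifests/

After this, the kubectl apply command in step 2 can reference manifests/single_cluster_deployer_example.yaml directly.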

  1. Create a Kubernetes cluster. You can follow the example procedure in Create a Kubernetes Cluster or you can use any other method. Create the cluster with the following characteristics:
    • Cluster has no CNI plug-in.
    • Disable Node Local DNS.
  2. Apply the Contrail deployer manifest.
    kubectl apply -f manifests/single_cluster_deployer_example.yaml

    It may take a few minutes for the nodes and pods to come up.

  3. Use standard kubectl commands to check on the deployment.
    1. Show the status of the nodes.
      kubectl get nodes
      NAME          STATUS   ROLES                  AGE   VERSION
      k8s-cp0       Ready    control-plane,master   65m   v1.20.7
      k8s-worker0   Ready    <none>                 63m   v1.20.7
      k8s-worker1   Ready    <none>                 62m   v1.20.7
      
      You can see that the nodes are now up. If the nodes are not up, wait a few minutes and check again.
    2. Show the status of the pods.
      kubectl get pods -A -o wide
      NAMESPACE         NAME                                        READY   STATUS    RESTARTS   AGE     IP            NODE          <trimmed>
      contrail-deploy   contrail-k8s-deployer-747689445-7rx52       1/1     Running   0          44m     172.16.0.11    k8s-cp0       <none>       <none>
      contrail          contrail-control-0                          2/2     Running   0          40m     172.16.0.11    k8s-cp0       <none>       <none>
      contrail          contrail-k8s-apiserver-6b544788f4-mpk5d     1/1     Running   0          40m     172.16.0.11    k8s-cp0       <none>       <none>
      contrail          contrail-k8s-controller-75b8d7b846-rvg7h    1/1     Running   2          40m     172.16.0.11    k8s-cp0       <none>       <none>
      contrail          contrail-k8s-kubemanager-6c8b7bd5f5-mwdpj   1/1     Running   5          40m     172.16.0.11    k8s-cp0       <none>       <none>
      contrail          contrail-vrouter-masters-pl4zf              3/3     Running   0          40m     172.16.0.11    k8s-cp0       <none>       <none>
      contrail          contrail-vrouter-nodes-2tnqq                3/3     Running   0          40m     172.16.0.12    k8s-worker0   <none>       <none>
      contrail          contrail-vrouter-nodes-66xnw                3/3     Running   0          40m     172.16.0.13    k8s-worker1   <none>       <none>
      kube-system       coredns-657959df74-25sdx                    1/1     Running   0          3m19s   10.233.64.2    k8s-cp0       <none>       <none>
      kube-system       coredns-657959df74-rprzv                    1/1     Running   0          66m     10.233.65.0    k8s-worker0   <none>       <none>
      kube-system       dns-autoscaler-b5c786945-pcgsq              1/1     Running   0          66m     10.233.65.1    k8s-worker0   <none>       <none>
      kube-system       kube-apiserver-k8s-cp0                      1/1     Running   0          69m     172.16.0.11    k8s-cp0       <none>       <none>
      kube-system       kube-controller-manager-k8s-cp0             1/1     Running   0          69m     172.16.0.11    k8s-cp0       <none>       <none>
      kube-system       kube-proxy-k5mcp                            1/1     Running   0          67m     172.16.0.13    k8s-worker1   <none>       <none>
      kube-system       kube-proxy-sccjm                            1/1     Running   0          67m     172.16.0.11    k8s-cp0       <none>       <none>
      kube-system       kube-proxy-wqbt8                            1/1     Running   1          67m     172.16.0.12    k8s-worker0   <none>       <none>
      kube-system       kube-scheduler-k8s-cp0                      1/1     Running   0          69m     172.16.0.11    k8s-cp0       <none>       <none>
      kube-system       nginx-proxy-k8s-worker0                     1/1     Running   0          67m     172.16.0.12    k8s-worker0   <none>       <none>
      kube-system       nginx-proxy-k8s-worker1                     1/1     Running   0          67m     172.16.0.13    k8s-worker1   <none>       <none>
      

      All pods should now have a STATUS of Running. If not, wait a few minutes for the pods to come up.

    3. If some pods remain down, debug the deployment as you normally do. Use the kubectl describe command to see why a pod is not coming up. A common error is a network or firewall issue preventing the node from reaching the Juniper Networks repository.

      Here is an example of a DNS problem.

      Log in to each node having a problem and check name resolution for enterprise-hub.juniper.net. For example:

      user@node:~# ping enterprise-hub.juniper.net
      ping: enterprise-hub.juniper.net: Temporary failure in name resolution
      
      Note:

      Although enterprise-hub.juniper.net is not configured to respond to pings, we can use the ping command to check domain name resolution.

      In this example, the domain name is not resolving. Check the domain name server configuration to make sure it's correct.

      For example, on an Ubuntu system running systemd-resolved, check /etc/systemd/resolved.conf to make sure the DNS entry is correct for your network (a sketch of this file follows this procedure). After you update the configuration, restart the DNS service:

      user@node:~# systemctl restart systemd-resolved
    4. If you run into a problem you can't solve or if you made a mistake during the install, simply uninstall Contrail and start over. To uninstall Contrail, see Uninstall Contrail.
  4. (Optional) Run postflight checks. See Run Preflight and Postflight Checks.
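
If you need to correct the DNS configuration described in step 3, here is a minimal sketch of /etc/systemd/resolved.conf for an Ubuntu node running systemd-resolved. The name server address shown is a placeholder; substitute the DNS server that serves your network.

  # /etc/systemd/resolved.conf
  [Resolve]
  # Replace with the name server for your network; the address below is a placeholder.
  DNS=172.16.0.254

After editing the file, restart systemd-resolved as shown in step 3 and repeat the ping test to confirm that enterprise-hub.juniper.net now resolves.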

Install Single Cluster Shared Network Contrail Running DPDK Data Plane in Release 22.3

Use this procedure to install Contrail in a single cluster shared network deployment running a DPDK data plane in release 22.3.

The manifest that you will use in this example procedure is single-cluster/single_cluster_deployer_example.yaml. The procedure assumes that you've placed this manifest into a manifests directory.

  1. Create a Kubernetes cluster. You can follow the example procedure in Create a Kubernetes Cluster or you can use any other method. Create the cluster with the following characteristics:
    • Cluster has no CNI plug-in.
    • Disable Node Local DNS.
    • Enable multus version 0.3.1.
  2. Specify the DPDK nodes.
    For each node running DPDK, label it as follows:
    kubectl label node <node-name> agent-mode=dpdk
    This label tells Contrail to apply the DPDK configuration specified in the manifest to these nodes. (You can verify the labels after installation, as shown in the sketch following this procedure.)
  3. Apply the Contrail deployer manifest.
    kubectl apply -f manifests/single_cluster_deployer_example.yaml

    It may take a few minutes for the nodes and pods to come up.

  4. Use standard kubectl commands to check on the deployment.
    1. Show the status of the nodes.
      kubectl get nodes
      NAME          STATUS   ROLES                  AGE   VERSION
      k8s-cp0       Ready    control-plane,master   65m   v1.20.7
      k8s-worker0   Ready    <none>                 63m   v1.20.7
      k8s-worker1   Ready    <none>                 62m   v1.20.7
      
      You can see that the nodes are now up. If the nodes are not up, wait a few minutes and check again.
    2. Show the status of the pods.
      kubectl get pods -A -o wide
      NAMESPACE         NAME                                        READY   STATUS    RESTARTS   AGE     IP            NODE          <trimmed>
      contrail-deploy   contrail-k8s-deployer-747689445-7rx52       1/1     Running   0          44m     172.16.0.11    k8s-cp0       <none>       <none>
      contrail          contrail-control-0                          2/2     Running   0          40m     172.16.0.11    k8s-cp0       <none>       <none>
      contrail          contrail-k8s-apiserver-6b544788f4-mpk5d     1/1     Running   0          40m     172.16.0.11    k8s-cp0       <none>       <none>
      contrail          contrail-k8s-controller-75b8d7b846-rvg7h    1/1     Running   2          40m     172.16.0.11    k8s-cp0       <none>       <none>
      contrail          contrail-k8s-kubemanager-6c8b7bd5f5-mwdpj   1/1     Running   5          40m     172.16.0.11    k8s-cp0       <none>       <none>
      contrail          contrail-vrouter-masters-pl4zf              3/3     Running   0          40m     172.16.0.11    k8s-cp0       <none>       <none>
      contrail          contrail-vrouter-nodes-2tnqq                3/3     Running   0          40m     172.16.0.12    k8s-worker0   <none>       <none>
      contrail          contrail-vrouter-nodes-66xnw                3/3     Running   0          40m     172.16.0.13    k8s-worker1   <none>       <none>
      kube-system       coredns-657959df74-25sdx                    1/1     Running   0          3m19s   10.233.64.2    k8s-cp0       <none>       <none>
      kube-system       coredns-657959df74-rprzv                    1/1     Running   0          66m     10.233.65.0    k8s-worker0   <none>       <none>
      kube-system       dns-autoscaler-b5c786945-pcgsq              1/1     Running   0          66m     10.233.65.1    k8s-worker0   <none>       <none>
      kube-system       kube-apiserver-k8s-cp0                      1/1     Running   0          69m     172.16.0.11    k8s-cp0       <none>       <none>
      kube-system       kube-controller-manager-k8s-cp0             1/1     Running   0          69m     172.16.0.11    k8s-cp0       <none>       <none>
      kube-system       kube-proxy-k5mcp                            1/1     Running   0          67m     172.16.0.13    k8s-worker1   <none>       <none>
      kube-system       kube-proxy-sccjm                            1/1     Running   0          67m     172.16.0.11    k8s-cp0       <none>       <none>
      kube-system       kube-proxy-wqbt8                            1/1     Running   1          67m     172.16.0.12    k8s-worker0   <none>       <none>
      kube-system       kube-scheduler-k8s-cp0                      1/1     Running   0          69m     172.16.0.11    k8s-cp0       <none>       <none>
      kube-system       nginx-proxy-k8s-worker0                     1/1     Running   0          67m     172.16.0.12    k8s-worker0   <none>       <none>
      kube-system       nginx-proxy-k8s-worker1                     1/1     Running   0          67m     172.16.0.13    k8s-worker1   <none>       <none>
      

      All pods should now have a STATUS of Running. If not, wait a few minutes for the pods to come up.

    3. If some pods remain down, debug the deployment as you normally do. Use the kubectl describe command to see why a pod is not coming up. A common error is a network or firewall issue preventing the node from reaching the Juniper Networks repository.

      Here is an example of a DNS problem.

      Log in to each node having a problem and check name resolution for enterprise-hub.juniper.net. For example:

      user@node:~# ping enterprise-hub.juniper.net
      ping: enterprise-hub.juniper.net: Temporary failure in name resolution
      
      Note:

      Although enterprise-hub.juniper.net is not configured to respond to pings, we can use the ping command to check domain name resolution.

      In this example, the domain name is not resolving. Check the domain name server configuration to make sure it's correct.

      For example, on an Ubuntu system running systemd-resolved, check /etc/systemd/resolved.conf to make sure the DNS entry is correct for your network (see the resolved.conf sketch following the kernel mode procedure earlier in this topic). After you update the configuration, restart the DNS service:

      user@node:~# systemctl restart systemd-resolved
    4. If you run into a problem you can't solve or if you made a mistake during the install, simply uninstall Contrail and start over. To uninstall Contrail, see Uninstall Contrail.
  5. (Optional) Run postflight checks. See Run Preflight and Postflight Checks.
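
To confirm that the agent-mode=dpdk labels from step 2 are in place, you can use standard kubectl label selectors. This is a usage sketch; k8s-worker0 is simply the example node name used in this section.

  # List only the nodes labeled for the DPDK data plane.
  kubectl get nodes -l agent-mode=dpdk

  # Show all labels on a specific node.
  kubectl get node k8s-worker0 --show-labels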