Install Single Cluster Contrail

Use this procedure to install Contrail in a single cluster deployment.

In a single cluster deployment, Contrail is the networking platform and CNI plug-in for that cluster. The Contrail controller runs in the Kubernetes control plane, and the Contrail data plane components run on all nodes in the cluster.

Figure 1 shows the cluster that you'll create if you follow the single cluster setup example. The cluster consists of a single control plane node and two worker nodes.

All nodes shown can be VMs or bare metal servers.

Figure 1: Single Cluster Contrail

All communication between nodes in the cluster and between nodes and external sites takes place over the single 172.16.0.0/24 fabric virtual network. The fabric network provides the underlay over which Kubernetes runs. You're not limited to using a single fabric virtual network. You can segregate your traffic onto multiple fabric virtual networks (subnets) if you desire. Configuring multiple fabric virtual networks in this fashion is beyond the scope of this document.

The local administrator is shown attached to a separate network reachable through a gateway. This is typical of many installations where the local administrator manages the fabric and cluster from the corporate LAN. In the procedures that follow, we refer to the local administrator station as your local computer.

Note:

The data center fabric connects all of the cluster nodes together; in this example, it is simplified into a single subnet. In real installations, the data center fabric is a network of spine and leaf switches that provides the physical connectivity for the cluster.

In an Apstra-managed data center, this connectivity would be specified through the overlay virtual networks that you create across the underlying fabric switches.

  1. Create a Kubernetes cluster. You can follow the example procedure in Create a Kubernetes Cluster or you can use any other method. Create the cluster with the following characteristics (a sketch of example settings follows this list):
    • The cluster has no CNI plug-in.
    • Node Local DNS is disabled.
    • If you're running DPDK, Multus version 0.3.1 is enabled.
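
    If you use Kubespray to create the cluster (an assumption; the referenced procedure may use a different tool), these characteristics roughly map to the following group_vars settings. The variable names below are Kubespray's, not part of this procedure:

      # inventory/<cluster>/group_vars/k8s_cluster/k8s-cluster.yml (illustrative only)
      kube_network_plugin: cni          # create the cluster without a CNI plug-in; Contrail provides it later
      enable_nodelocaldns: false        # disable Node Local DNS
      kube_network_plugin_multus: true  # only if you're running DPDK (Multus, CNI version 0.3.1)
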
  2. Modify the deployer.yaml manifest as necessary. The deployer.yaml manifest that we provide is a sample that you may need to tailor to your setup.

    Edit deployer.yaml to match your setup:

    1. If you're running DPDK, change the contrail-vrouter-dpdk-nodes section as follows:
      • Change the interface name from eth1 to match the interface name in your setup.

      • Additionally, specify the correct driver in the dpdk section (either uio_pci_generic or vfio-pci).

      • Change other parameters in the dpdk section as desired. (A sketch of what these edits might look like follows this list.)
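
      The exact layout of the contrail-vrouter-dpdk-nodes section depends on the deployer.yaml you downloaded, so the following is only a hypothetical sketch of the kind of edit. The field names are placeholders; only the interface name and driver values come from this step:

        # Hypothetical excerpt -- keep the actual key names used in your deployer.yaml.
        spec:
          agent:
            physicalInterface: ens3      # hypothetical key; replace eth1 with the interface name in your setup
            dpdk:
              driver: vfio-pci           # or uio_pci_generic
              # adjust other dpdk parameters (core mask, huge pages, and so on) as desired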

    2. Change the contrail-vrouter-masters section as appropriate.

      The provided deployer.yaml file specifies a distinct interface for the data (vRouter) traffic, which differs from our example cluster. In our cluster, the nodes have a single interface that carries all control and data traffic. Remove the lines in this section that describe the distinct vRouter interface.

    3. Similarly, remove the corresponding lines in the contrail-vrouter-nodes section. (A hypothetical illustration of these lines follows.)
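
      Because the removed snippets are not reproduced here, the following is only a hypothetical illustration of the kind of lines that describe a distinct vRouter interface; compare it against the actual contents of your deployer.yaml:

        # Hypothetical example of the lines to remove from both sections.
        # The real key names in your deployer.yaml may differ.
        agent:
          physicalInterface: eth2   # remove when one interface carries all control and data traffic
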
    4. Edit the replicas configuration to match your setup.
      The provided deployer.yaml is intended for a cluster that has three control plane nodes. In our cluster, we have one control plane node. Change the replicas configuration from 3 to 1 for all pods. A sketch of the result follows.
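
      A sketch of the result, assuming the controller resources in deployer.yaml carry a plain replicas field (the surrounding layout is illustrative; only the change from 3 to 1 comes from this step):

        # Illustrative only -- apply the same change everywhere replicas is set in deployer.yaml.
        spec:
          common:
            replicas: 1   # changed from 3; this example cluster has one control plane node
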
  3. If you're running DPDK, then set the agent-mode to dpdk on all nodes running DPDK, as shown in the example below.
    When you label the nodes in this way, Contrail pulls the appropriate images and uses the appropriate configuration for those nodes.
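
    For example, assuming the label key and value follow the agent-mode=dpdk form used in this step, label each DPDK node (substitute your node names):

      kubectl label node <node-name> agent-mode=dpdk
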
  4. Apply the Contrail deployer manifest.

    It may take a few minutes for the nodes and pods to come up.
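
    For example, assuming you saved the manifest locally as deployer.yaml:

      kubectl apply -f deployer.yaml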

  5. Use standard kubectl commands to check on the deployment.
    1. Show the status of the nodes.
      You can see that the nodes are now up. If the nodes are not up, wait a few minutes and check again.
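
      For example (node names, ages, and versions are illustrative):

        kubectl get nodes

        NAME        STATUS   ROLES           AGE   VERSION
        k8s-cp0     Ready    control-plane   10m   v1.27.4
        k8s-node0   Ready    <none>          9m    v1.27.4
        k8s-node1   Ready    <none>          9m    v1.27.4
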
    2. Show the status of the pods.

      All pods should now have a STATUS of Running. If not, wait a few minutes for the pods to come up.
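
      For example, to list the pods in all namespaces:

        kubectl get pods -A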

    3. If some pods remain down, debug the deployment as you normally do. Use the kubectl describe command to see why a pod is not coming up. A common error is a network or firewall issue preventing the node from reaching the Juniper Networks repository.
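
      For example (substitute the pod name and namespace from the kubectl get pods output):

        kubectl describe pod <pod-name> -n <namespace>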

      Here is an example of a DNS problem.

      Log in to each node having a problem and check name resolution for hub.juniper.net. For example:
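
      The output varies by system; a name resolution failure typically looks something like this:

        ping -c 3 hub.juniper.net
        ping: hub.juniper.net: Temporary failure in name resolution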

      Note:

      Although hub.juniper.net is not configured to respond to pings, we can use the ping command to check domain name resolution.

      In this example, the domain name is not resolving. Check the domain name server configuration to make sure it's correct.

      For example, on an Ubuntu system running systemd-resolved, check /etc/systemd/resolved.conf to make sure the DNS entry is correct for your network. After you update the configuration, restart the DNS service:
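
      An illustrative example follows; substitute the DNS server address for your network:

        # In /etc/systemd/resolved.conf, set the DNS entry, for example:
        #   [Resolve]
        #   DNS=<your-dns-server-address>
        # Then restart systemd-resolved:
        sudo systemctl restart systemd-resolved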

    4. If you run into a problem you can't solve or if you made a mistake during the install, simply uninstall Contrail and start over. To uninstall Contrail, see Uninstall Contrail.