Deployment Models

SUMMARY Learn about single-cluster and multi-cluster CN2 deployments.

Cloud-Native Contrail Networking (CN2) is available both as an integrated networking platform in a single Kubernetes cluster and as a centralized networking platform to multiple distributed Kubernetes clusters. In both cases, Contrail works as an integrated component of your infrastructure by watching where workloads are instantiated and connecting those workloads to the appropriate overlay networks.

Single Cluster Deployment

In a single-cluster deployment, CN2 serves as the integrated networking platform for the Kubernetes cluster in which it is installed, watching where workloads are instantiated and connecting those workloads to the appropriate overlay networks.

In a single-cluster deployment (Figure 1), the Contrail controller sits in the Kubernetes control plane and provides the network configuration and network control planes for the host cluster. The Contrail data plane components run on all nodes and handle packet sending and receiving for the workloads.

Figure 1: Single Cluster Deployment
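
The controller/data-plane split described above is easy to check against a running cluster. The following is a minimal sketch using the Kubernetes Python client; it assumes the CN2 components run in a namespace named contrail, which may differ in your installation.

# Minimal sketch: list Contrail pods and the nodes they run on to see the
# controller/data-plane split. Assumes the "kubernetes" Python client is
# installed and that CN2 components run in a namespace named "contrail"
# (adjust for your installation).
from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() when run in-cluster
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod(namespace="contrail").items:
    # Controller pods should land only on control plane nodes, while vRouter
    # (data plane) pods should appear on every node.
    print(f"{pod.metadata.name:60s} -> {pod.spec.node_name}")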

Multi-Cluster Deployment

In a multi-cluster deployment (Figure 2), the Contrail controller resides in its own Kubernetes cluster and provides networking to other clusters. The Kubernetes cluster that the Contrail controller resides in is called the central cluster. The Kubernetes clusters that house the workloads are called the distributed workload clusters.

Figure 2: Multi-Cluster Deployment

Centralizing the network function in this way not only makes the network easier to configure and manage, but also makes it easier to apply consistent network policy and security.

Figure 3 provides more detail on this setup. The Contrail controller sits in the Kubernetes control plane of the central cluster and contains a kubemanager for each workload cluster that it serves. There are typically no worker nodes in the central cluster; instead, the workloads reside on the worker nodes in the distributed workload clusters. The Contrail CNI plugin and vRouter run on the worker nodes of the workload clusters. The Kubernetes control planes in the workload clusters do not contain any Contrail controller components.

Figure 3: Multi-Cluster Components
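
One way to see this structure from the central cluster is to list the kubemanager deployments, because there is one per connected workload cluster. The sketch below uses the Kubernetes Python client and assumes that the components run in a namespace named contrail and that the deployment names contain the string kubemanager; verify both assumptions against your installation.

# Minimal sketch: from the central cluster, list the contrail-k8s-kubemanager
# deployments (one per connected workload cluster). The "contrail" namespace
# and the name filter are assumptions; adjust them for your installation.
from kubernetes import client, config

config.load_kube_config()   # kubeconfig pointing at the central cluster
apps = client.AppsV1Api()

deployments = apps.list_namespaced_deployment(namespace="contrail").items
kubemanagers = [d for d in deployments if "kubemanager" in d.metadata.name]

for d in kubemanagers:
    ready = d.status.ready_replicas or 0
    print(f"{d.metadata.name}: {ready}/{d.spec.replicas} replicas ready")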

The multi-cluster Contrail controller differs from the single-cluster Contrail controller in two main ways:

  • The multi-cluster Contrail controller has a contrail-k8s-kubemanager pod instantiated for each distributed workload cluster. As part of the procedure to connect a distributed workload cluster to the central cluster, you explicitly create and assign a contrail-k8s-kubemanager deployment that watches for changes to resources that affect its assigned workload cluster.
  • The multi-cluster Contrail controller uses multi-cluster watch technology to detect changes in the distributed workload clusters.

The function of the multi-cluster contrail-k8s-kubemanager pod is identical to its single-cluster counterpart. It watches for changes to regular Kubernetes resources that affect its assigned cluster and acts on the changes accordingly.
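
The watch-and-react pattern itself is ordinary Kubernetes machinery. The sketch below illustrates the idea with the Kubernetes Python client by streaming Service events; it is a conceptual illustration only, not CN2 code, since the real kubemanager is a controller built into the Contrail controller.

# Conceptual sketch of the watch pattern a kubemanager relies on: react to
# changes in regular Kubernetes resources (here, Services) as they happen.
# Illustration only; a real kubemanager translates such events into Contrail
# network configuration.
from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()

w = watch.Watch()
for event in w.stream(v1.list_service_for_all_namespaces, timeout_seconds=30):
    svc = event["object"]
    print(f"{event['type']:10s} service {svc.metadata.namespace}/{svc.metadata.name}")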

All other Contrail components in a multi-cluster deployment behave in the same way as in a single-cluster deployment. The network control plane, for example, communicates with data plane components using XMPP, outside of regular Kubernetes REST channels. Because of this, the network control plane is indifferent to whether the data plane components that it communicates with reside in the same cluster or in different clusters. The only requirement is that the data plane components are reachable.
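
Because XMPP runs outside the Kubernetes API, that reachability is worth verifying explicitly when workload clusters sit on different networks from the central cluster. The sketch below is a plain TCP connectivity probe run from a workload node; the control node addresses are hypothetical placeholders, and TCP 5269 is the port conventionally used by Contrail for XMPP, so check both against your deployment.

# Minimal sketch: probe TCP connectivity from a workload node to the Contrail
# control nodes on the XMPP port. The addresses below are hypothetical, and
# port 5269 is the conventional Contrail XMPP port; verify both for your
# deployment. This checks reachability only, not the XMPP session itself.
import socket

CONTROL_NODES = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]   # hypothetical addresses
XMPP_PORT = 5269

for node in CONTROL_NODES:
    try:
        with socket.create_connection((node, XMPP_PORT), timeout=3):
            print(f"{node}:{XMPP_PORT} reachable")
    except OSError as err:
        print(f"{node}:{XMPP_PORT} NOT reachable ({err})")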