CN2 Components
This section provides a general overview of the components that make up the Cloud-Native Contrail Networking (CN2) solution. These components are largely common to all Kubernetes distributions running CN2. Differences do exist, however, and these are called out explicitly where necessary.
The CN2 architecture consists of pods that perform the network configuration plane and network control plane functions, and pods that perform the network data plane functions.
- The network configuration plane refers to the functionality that enables CN2 to manage its resources and interact with the rest of the Kubernetes control plane.
- The network control plane represents CN2's full-featured SDN capability. It uses BGP to communicate with other controllers and XMPP to communicate with the distributed data plane components on the worker nodes.
- The network data plane refers to the packet transmit and receive function on every node, especially on worker nodes where the workloads reside.
The pods that perform the configuration and control plane functions reside on Kubernetes control plane nodes in most distributions. The pods that perform the data plane functions reside on both Kubernetes control plane nodes and Kubernetes worker nodes. (In Amazon EKS, all CN2 pods reside on worker nodes.)
Table 1 describes the main CN2 components. Depending on configuration, there might be other components as well (not shown) that perform ancillary functions such as certificate management and status monitoring.
| Plane | Pod Name | Where | Description |
|---|---|---|---|
| Configuration Plane¹ | contrail-k8s-apiserver | Control Plane Node² | This pod is an aggregated API server that is the entry point for managing all Contrail resources. It is registered with the regular kube-apiserver as an APIService, and the regular kube-apiserver forwards all network-related requests to the contrail-k8s-apiserver for handling. There is one contrail-k8s-apiserver pod per Kubernetes control plane node. |
| Configuration Plane¹ | contrail-k8s-controller | Control Plane Node² | This pod performs the Kubernetes control loop function to reconcile networking resources. It constantly monitors networking resources to make sure the actual state of each resource matches its intended state. There is one contrail-k8s-controller pod per Kubernetes control plane node. |
| Configuration Plane¹ | contrail-k8s-kubemanager | Control Plane Node² | This pod is the interface between Kubernetes resources and Contrail resources. It watches the kube-apiserver for changes to regular Kubernetes resources, such as Service and Namespace, and acts on any changes that affect the networking resources. In a single-cluster deployment, there is one contrail-k8s-kubemanager pod per Kubernetes control plane node. In a multi-cluster deployment, there is additionally one contrail-k8s-kubemanager pod for every distributed workload cluster. |
| Control Plane¹ | contrail-control | Control Plane Node² | This pod passes configuration to the worker nodes and performs route learning and distribution. It watches the kube-apiserver for anything affecting the network control plane and then communicates with its BGP peers and/or vRouter agents (over XMPP) as appropriate. There is one contrail-control pod per Kubernetes control plane node. |
| Data Plane | contrail-vrouter-nodes | Worker Node | This pod contains the vRouter agent and the vRouter itself. The vRouter agent acts on behalf of the local vRouter when interacting with the Contrail controller. There is one agent per node. For redundancy, the agent establishes XMPP sessions with two Contrail controllers, through which it receives configuration and exchanges routes. The vRouter provides the packet send and receive function for the co-located pods and workloads, and provides the CNI plug-in functionality. |
| Data Plane | contrail-vrouter-masters³ | Control Plane Node | This pod provides the same functionality as the contrail-vrouter-nodes pod, but resides on the control plane nodes. |

¹ The components that make up the network configuration plane and the network control plane are collectively called the Contrail controller.
² Worker node if running on Amazon EKS.
³ Not present if running on Amazon EKS.
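To make the APIService registration concrete, the sketch below shows the general shape of an aggregated API server registration in Kubernetes. The group name, version, service name, and namespace are illustrative assumptions, not the exact values CN2 installs:

```yaml
# Hypothetical sketch of registering an aggregated API server.
# The group/version, service name, and namespace are assumptions
# for illustration; the actual CN2 manifests may differ.
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.core.contrail.juniper.net   # assumed group/version
spec:
  group: core.contrail.juniper.net           # assumed Contrail API group
  version: v1alpha1
  groupPriorityMinimum: 1000
  versionPriority: 15
  service:
    name: contrail-k8s-apiserver             # assumed Service fronting the pods
    namespace: contrail                      # assumed namespace
```

With a registration of this shape in place, the regular kube-apiserver proxies every request under the registered API group to the contrail-k8s-apiserver, which is how network-related API calls reach CN2.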
Figure 1 shows these components in the context of a Kubernetes cluster in most distributions, and Figure 2 shows these components in the context of Amazon EKS.
For clarity and to reduce clutter, the figures do not show the data plane pods on the node with the Contrail controller.
When running on upstream Kubernetes, CN2 uses the main Kubernetes etcd database. When running on OpenShift or when running on Amazon EKS, CN2 uses its own etcd database.
For all distributions, the kube-apiserver is the entry point for Kubernetes REST API calls for the cluster. It directs all networking requests to the contrail-k8s-apiserver, which is the entry point for Contrail API calls. The contrail-k8s-apiserver translates incoming networking requests into REST API calls to the respective CN2 objects. In some cases, these calls may result in the Contrail controller sending XMPP messages to the vRouter agent on one or more worker nodes or sending BGP messages (not shown) to other control plane nodes or external routers. These XMPP and BGP messages are sent outside of regular Kubernetes node-to-node communications.
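As an illustration of this flow, a client could submit a Contrail networking object through the regular kube-apiserver with an ordinary `kubectl apply`. The resource kind, group, and spec fields below are assumptions for illustration only:

```yaml
# Hypothetical Contrail custom resource. Because its API group is
# registered as an APIService, the kube-apiserver forwards this
# request to contrail-k8s-apiserver. Kind, group, and fields are
# assumed for illustration and may not match the shipped CRDs.
apiVersion: core.contrail.juniper.net/v1alpha1
kind: VirtualNetwork
metadata:
  name: blue-net
  namespace: default
spec:
  v4SubnetReference:   # assumed field referencing a subnet object
    name: blue-subnet
```

Processing such a request may in turn cause the Contrail controller to push configuration to vRouter agents over XMPP or advertise routes to BGP peers, as described above.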
The contrail-k8s-kubemanager (cluster) components are only present in multi-cluster deployments. For more information on the different types of deployment, see Deployment Models.
Figure 3 shows a cluster with multiple Contrail controllers. These controllers reside on control plane nodes (most distributions). The Kubernetes components communicate with each other using REST. The Contrail controllers exchange routes with each other using iBGP, outside of the regular Kubernetes REST interface. For redundancy, the vRouter agents on worker nodes always establish XMPP communications with two Contrail controllers.