Manage Single Cluster CN2

09-Jan-23

SUMMARY Learn how to perform life cycle management tasks in a single cluster installation.

Overview

The way that you manage a Kubernetes cluster does not change when CN2 is the CNI plug-in. Once CN2 is installed, CN2 components work seamlessly with Kubernetes components to provide the networking infrastructure.

The Contrail controller is constantly watching and reacting to cluster events as they occur. When you add a new node, the Contrail data plane components are automatically deployed. When you delete a node, the Contrail controller automatically deletes networking resources associated with that node. CN2 works seamlessly with kubectl and other tools such as Prometheus and Grafana.
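For example, you can list the CN2 control plane and data plane pods with standard kubectl commands. The contrail-system and contrail namespaces shown here are the same ones used in the procedures below.

kubectl get pods -n contrail-system
kubectl get pods -n contrail -o wide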

Back Up the Contrail Etcd Database in Release 22.4

Use this example procedure in release 22.4 to back up the Contrail etcd database.

Note:

The following steps refer to a Contrail controller node. A Contrail controller node is a worker node that is running a Contrail controller.

  1. Install etcdctl on all Contrail controller nodes.
    1. Log in to one of the Contrail controller nodes.
      For example:
      ssh core@172.16.0.11
    2. Download etcd. This example downloads to the /tmp directory.
      ETCD_VER=v3.4.13
      curl -L https://storage.googleapis.com/etcd/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
    3. Untar the archive and move the etcdctl executable to a directory in your path (for example, /usr/local/bin).
      tar -xzvf /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz -C /tmp
      sudo mv /tmp/etcd-${ETCD_VER}-linux-amd64/etcdctl /usr/local/bin
    4. Check that you've installed etcdctl.
      etcdctl version
      etcdctl version: 3.4.13
      API version: 3.4
      
    5. Repeat on all the Contrail controller nodes.
  2. Get a list of the contrail-etcd pods.
    kubectl get pods -A | grep contrail-etcd
    Take note of the contrail-etcd pod names, the IP addresses, and the nodes they're running on. You'll need this information in the next few steps.
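    Alternatively, you can use the -o wide option to show the pod IP addresses and node names in a single command:
    kubectl get pods -n contrail-system -o wide | grep contrail-etcd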
  3. Copy the etcd certificate and key files from the pods to the Contrail controller nodes.
    We run kubectl on the Contrail controller nodes in this step. We assume you've set up kubeconfig on these nodes in its default location (~/.kube/config).
    1. Pick a contrail-etcd pod (for example, contrail-etcd-0) and log in to the Contrail controller node that's hosting that pod.
    2. Copy the certificate and key files from that contrail-etcd pod to the hosting Contrail controller node.
      In this example, we're copying the certificates and key files from the contrail-etcd-0 pod to local files on this node.
      kubectl exec --namespace contrail-system contrail-etcd-0 -c contrail-etcd -- cat /etc/member-tls/ca.crt > ./ca.crt
      kubectl exec --namespace contrail-system contrail-etcd-0 -c contrail-etcd -- cat /etc/member-tls/tls.crt > ./tls.crt
      kubectl exec --namespace contrail-system contrail-etcd-0 -c contrail-etcd -- cat /etc/member-tls/tls.key > ./tls.key
      
      This copies the certificate and key files from the contrail-etcd-0 pod to ca.crt, tls.crt, and tls.key in the current directory on this Contrail controller node.
    3. Repeat for each contrail-etcd pod.
  4. Back up the etcd database on one of the Contrail controller nodes. You only need to back up the database on one node.
    1. Log back in to one of the Contrail controller nodes.
    2. Back up the etcd database.
      This example saves the database to /tmp/etcdbackup.db on this Contrail controller node.
      etcdctl snapshot save /tmp/etcdbackup.db --endpoints=<etcd-pod-ip>:<etcd-port> \
      --cacert=ca.crt --cert=tls.crt --key=tls.key
      where <etcd-pod-ip> is the IP address of the contrail-etcd pod on this node and <etcd-port> is the port that etcd is listening on (by default, 12379).
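      (Optional) You can verify the saved snapshot with the etcdctl snapshot status command:
      etcdctl snapshot status /tmp/etcdbackup.db --write-out=table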
  5. Copy the database to a safe location.
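    For example, you can copy the snapshot off the node with scp. The destination below is only a placeholder; use whatever backup location you normally use.
    scp /tmp/etcdbackup.db <backup-host>:<backup-path>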

Restore the Contrail Etcd Database in Release 22.4

Use this example procedure in release 22.4 to restore the Contrail etcd database from a snapshot.

Note:

The following steps refer to a Contrail controller node. A Contrail controller node is a worker node that is running a Contrail controller.

  1. Copy the snapshot you want to restore to all the Contrail controller nodes.
    The steps below assume you've copied the snapshot to /tmp/etcdbackup.db on all the Contrail controller nodes.
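    For example, from the machine that holds the backup, you might copy the snapshot to each node with scp (the node address is a placeholder):
    scp /tmp/etcdbackup.db core@<contrail-controller-node-ip>:/tmp/etcdbackup.db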
  2. Restore the snapshot.
    1. Log in to one of the Contrail controller nodes. In this example, we're logging in to the Contrail controller node that is hosting contrail-etcd-0.
    2. Restore the etcd database to the contrail-etcd-0 pod on this Contrail controller node.
      This creates a contrail-etcd-0.etcd directory on the node.
      ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcdbackup.db \
      --name=contrail-etcd-0 \
      --initial-cluster=contrail-etcd-0=https://<contrail-etcd-0-ip>:12380,\
      contrail-etcd-1=https://<contrail-etcd-1-ip>:12380,\
      contrail-etcd-2=https://<contrail-etcd-2-ip>:12380 \
      --initial-advertise-peer-urls=https://<contrail-etcd-0-ip>:12380 \
      --cacert=ca.crt --cert=tls.crt --key=tls.key
      where --name=contrail-etcd-0 specifies that this command is restoring the database to contrail-etcd-0, --initial-cluster=... lists all the contrail-etcd members in the cluster, and --initial-advertise-peer-urls=... refers to the IP address and port number that the contrail-etcd-0 pod is listening on.
    3. Repeat for the other contrail-etcd pods on their respective Contrail controller nodes, substituting the --name and --initial-advertise-peer-urls values with the respective contrail-etcd pod name and IP address.
  3. Stop the contrail-etcd pods.
    This sets the replicas to 0, which effectively stops the pods.
    kubectl patch etcds.datastore.juniper.net contrail-etcd -n contrail-system --type=merge -p '{"spec": {"common": {"replicas": 0}}}'
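    You can confirm that the contrail-etcd pods have terminated before you continue:
    kubectl get pods -n contrail-system | grep contrail-etcd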
  4. Replace contrail-etcd data with the data from the snapshot.
    1. SSH into one of the Contrail controller nodes.
    2. Replace the data. Recall that the restored data is in the contrail-etcd-<xxx>.etcd directory created in the previous step.
      sudo rm -rf /var/lib/contrail-etcd/snapshots 
      sudo mv /var/lib/contrail-etcd/etcd/member /var/lib/contrail-etcd/etcd/member.bak 
      sudo mv contrail-etcd-<xxx>.etcd/member /var/lib/contrail-etcd/etcd/ 
       
      where contrail-etcd-<xxx> is the name of the contrail-etcd pod on the Contrail controller node that you logged in to.
    3. Repeat for the other Contrail controller nodes.
  5. Start the contrail-etcd pods.
    This sets the replicas to 3, which effectively starts the pods.
    kubectl patch etcds.datastore.juniper.net contrail-etcd -n contrail-system --type=merge -p '{"spec": {"common": {"replicas": 3}}}'
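    Wait until all three contrail-etcd pods are back in Running state before you continue:
    kubectl get pods -n contrail-system | grep contrail-etcd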
  6. Restart the contrail-system apiserver and controller.
    Delete all the contrail-k8s-apiserver and contrail-k8s-controller pods.
    kubectl delete pod <contrail-k8s-apiserver-xxx> -n contrail-system
    kubectl delete pod <contrail-k8s-controller-xxx> -n contrail-system
    These pods will automatically restart.
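    If you need to look up the exact pod names first, you can list them:
    kubectl get pods -n contrail-system | grep -E 'contrail-k8s-apiserver|contrail-k8s-controller'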
  7. Restart the vrouters.
    Delete all the contrail-vrouter-nodes pods.
    kubectl delete pod <contrail-vrouter-nodes-xxx> -n contrail
    These pods will automatically restart.
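    As in the previous step, you can list the vrouter pod names first:
    kubectl get pods -n contrail | grep contrail-vrouter-nodes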
  8. Check that all pods are in running state.
    kubectl get pods -n contrail-system
    kubectl get pods -n contrail