Create a Kubernetes Cluster
Use this example procedure to create an upstream Kubernetes cluster.
We provide this example procedure purely for informational purposes. There are multiple ways you can create a cluster, such as with kubeadm, kOps, or kubespray, among others.
In this example, we'll use kubespray and Ansible to create the cluster. Kubespray uses Ansible playbooks, which makes cluster creation fairly straightforward. To make the steps easier to follow, we'll use a separate installer machine to perform the installation and to run kubectl and other tools.
For more information on creating a cluster, see the official Kubernetes documentation (https://kubernetes.io/docs/home/).
The command line examples below don't always show absolute directory paths. We leave it to you to apply these commands within your directory structure.
-
Install a fresh OS on the installer machine, configuring the OS minimally for the following (a minimal configuration sketch follows this list):
- static IP address and mask (for example, 172.16.0.10/24 for our single cluster) and gateway
- access to one or more DNS servers
- SSH connectivity including root SSH access
- NTP
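For example, on an Ubuntu installer machine that uses netplan, you might configure the static IP address, gateway, DNS, and NTP settings as shown below. This is only a sketch: the interface name (ens3), gateway (172.16.0.1), and DNS server (172.16.0.2) are placeholders, so substitute the values for your own network.
# Hypothetical netplan configuration; adjust interface, addresses, gateway, and DNS to your network
sudo tee /etc/netplan/01-static.yaml <<'EOF'
network:
  version: 2
  ethernets:
    ens3:                          # interface name on your machine
      addresses: [172.16.0.10/24]  # static IP address and mask
      routes:
        - to: default
          via: 172.16.0.1          # gateway (placeholder)
      nameservers:
        addresses: [172.16.0.2]    # DNS server(s) (placeholder)
EOF
sudo netplan apply
# Enable NTP time synchronization
sudo timedatectl set-ntp true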
- From your local computer, SSH into the installer machine as the sudo user.
-
Install ansible.
sudo apt install ansible
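To confirm the installation succeeded, you can check the installed version. For example:
ansible --version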
-
Install kubectl. In this example, we run kubectl on the installer machine. If you want
to run kubectl on another machine (for example, your local computer), download and install
kubectl on that machine instead.
- Set up and update the Kubernetes
repository.
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
- Install kubectl.
sudo apt install kubectl
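You can verify that kubectl installed correctly by checking the client version. For example:
kubectl version --client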
-
If you want to install Contrail Analytics, then install Helm 3.0 or later.
-
Set up and update the Helm repository.
curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt update
-
Install Helm.
sudo apt install helm
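You can verify that the installed Helm meets the 3.0-or-later requirement by checking its version. For example:
helm version --short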
-
-
Configure SSH password-less root access from the installer machine to the control plane
and worker nodes. This allows ansible to log in to these nodes when you run the playbook
later.
-
Create an SSH key.
user@installer:~$ ssh-keygen
In this example, we store the SSH key in its default location ~/.ssh/id_rsa.pub.
-
Copy the key to the root user on the control plane and worker nodes. For
example:
ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.16.0.11
ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.16.0.12
ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.16.0.13
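To confirm that password-less root access works, you can run a quick command on each node from the installer machine. Using the example addresses above, each command should return the node's hostname without prompting for a password:
ssh root@172.16.0.11 hostname
ssh root@172.16.0.12 hostname
ssh root@172.16.0.13 hostname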
-
Clone the kubespray repository.
For example:
user@installer:~/contrail$ git clone https://github.com/kubernetes-sigs/kubespray -b release-2.16
This creates a clone of the kubespray repository in the current directory (~/contrail in this example).
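If you want to confirm that you cloned the expected branch, you can check it from the clone. For example:
user@installer:~/contrail$ git -C kubespray branch --show-current
release-2.16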
-
Configure the pod and service subnets if desired.
The default subnets used by kubespray are defined in the kubespray/roles/kubespray-defaults/defaults/main.yaml file. Look for the following parameters in that file and change accordingly.
kube_service_addresses: 10.233.0.0/18
kube_pods_subnet: 10.233.64.0/18
kube_service_addresses_ipv6: fd85:ee78:d8a6:8607::1000/116
kube_pods_subnet_ipv6: fd85:ee78:d8a6:8607::1:0000/112
Note: If you're creating a multi-cluster CN2 setup, you must configure different pod and service subnets on each cluster. These subnets must be unique within the entire multi-cluster.
If you're following the multi-cluster example in this document, then leave the subnets on the central cluster at their defaults and configure the subnets on the workload cluster as follows:
kube_service_addresses: 10.234.0.0/18
kube_pods_subnet: 10.234.64.0/18
kube_service_addresses_ipv6: fd85:ee78:d8a6:8608::1000/116
kube_pods_subnet_ipv6: fd85:ee78:d8a6:8608::1:0000/112
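Whichever values you choose, you can double-check them after editing by searching the defaults file for the subnet parameters. For example:
user@installer:~/contrail$ grep -E 'kube_service_addresses|kube_pods_subnet' kubespray/roles/kubespray-defaults/defaults/main.yaml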
-
Disable Node Local DNS.
In the kubespray/roles/kubespray-defaults/defaults/main.yaml file, set:
enable_nodelocaldns: false
-
If you're running DPDK in your cluster, then configure multus.
Multus is required when running DPDK.
-
Enable multus.
In kubespray/roles/kubespray-defaults/defaults/main.yaml, enable multus:
kube_network_plugin_multus: true
-
Set the multus version to 0.3.1, which is the version required for running DPDK.
You set the version in two files.
In kubespray/roles/network_plugin/multus/defaults/main.yml, configure the multus version:
multus_cni_version: "0.3.1"
In kubespray/extra_playbooks/roles/network_plugin/multus/defaults/main.yml, configure the multus version:
multus_cni_version: "0.3.1"
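You can confirm that both files now specify the same version. For example:
user@installer:~/contrail$ grep -n multus_cni_version kubespray/roles/network_plugin/multus/defaults/main.yml kubespray/extra_playbooks/roles/network_plugin/multus/defaults/main.yml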
-
Create the inventory file for ansible to use. For example:
all:
  hosts:     # list all nodes here
    k8s-cp0:     # desired hostname
      ansible_host: 172.16.0.11
    k8s-worker0:
      ansible_host: 172.16.0.12
    k8s-worker1:
      ansible_host: 172.16.0.13
  vars:
    ansible_user: root
    artifacts_dir: /tmp/mycluster
    cluster_name: mycluster.contrail.lan
    container_manager: crio     # container runtime
    docker_image_repo: <your docker repository URL>
    download_container: false
    download_localhost: true
    download_run_once: true
    enable_dual_stack_networks: true
    enable_nodelocaldns: false
    etcd_deployment_type: host
    host_key_checking: false
    kube_network_plugin: cni
    kube_network_plugin_multus: false
    kubeconfig_localhost: true
    kubectl_localhost: true
    kubelet_deployment_type: host
    override_system_hostname: true
kube-master:
  hosts:     # hostname of control plane node (from hosts section)
    k8s-cp0:
kube-node:
  hosts:     # hostnames of worker nodes (from hosts section)
    k8s-worker0:
    k8s-worker1:
etcd:
  hosts:     # hostname of control plane node (from hosts section)
    k8s-cp0:
k8s-cluster:
  children:
    kube-master:
    kube-node:
The host names (k8s-cp0, k8s-worker0, k8s-worker1) that you specify in the file are automatically configured on the nodes when the override_system_hostname parameter is set to true.
Note: If you're creating a multi-cluster CN2 setup, you must configure different node names for each node in the multi-cluster. Node names must be unique across the entire multi-cluster.
Note: If you're running DPDK, set kube_network_plugin_multus: true.
If you want to run with a different container runtime, change the container_manager value above.
Ensure enable_nodelocaldns is set to false.
If you want to run with a different number of control plane and worker nodes, adjust the inventory accordingly.
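Before running Ansible against the nodes, you can optionally confirm that the inventory parses the way you expect. For example, ansible-inventory can display the group and host structure (assuming the inventory.yaml file is in the current directory):
user@installer:~/contrail$ ansible-inventory -i inventory.yaml --graph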
-
Check that ansible can SSH into the control plane and worker nodes based on the
contents of the inventory.yaml file. In this example, the
inventory.yaml file is in the ~/contrail
directory.
user@installer:~/contrail$ ansible -i inventory.yaml -m ping all
[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details
k8s-cp0 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
k8s-worker0 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
k8s-worker1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
-
To create the cluster, run the playbook from the kubespray
directory. Adjust the command below to reference the inventory.yaml
within your directory structure.
user@installer:~/contrail/kubespray$ ansible-playbook -i ../inventory.yaml cluster.yml
This step can take an hour or more to complete, depending on the size of your cluster.
Note: You can safely ignore network and CNI warnings and errors because you haven't configured a CNI yet. If a fatal error occurs, ansible stops the playbook.
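If you want to keep a record of the run for later troubleshooting, one option is to tee the playbook output to a log file. For example (the log file path is only a suggestion):
user@installer:~/contrail/kubespray$ ansible-playbook -i ../inventory.yaml cluster.yml 2>&1 | tee ../cluster-install.log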
-
Copy the cluster's secure token to the default ~/.kube/config
location. The kubeconfig must be at that default location for CN2 tools to work.
You can find the secure token location from the inventory.yaml file. If you use the inventory file in this example, the token is in /tmp/mycluster.
mkdir ~/.kube
cp /tmp/mycluster/admin.conf ~/.kube/config
Note: If you have a kubeconfig that already holds tokens for existing clusters, then you'll need to merge rather than overwrite the ~/.kube/config file.
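One common way to merge is to point KUBECONFIG at both files and flatten the result into a single file. This is a sketch using the example paths from this procedure:
# Merge the existing kubeconfig with the new cluster's admin.conf (example paths)
KUBECONFIG=~/.kube/config:/tmp/mycluster/admin.conf kubectl config view --flatten > /tmp/config.merged
mv /tmp/config.merged ~/.kube/config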
-
Use standard kubectl commands to check on the health of the cluster.
-
Show the status of the nodes.
You can see that the nodes are not ready because there is no CNI plug-in. This is expected because you haven't installed CN2 yet.
user@installer:~$ kubectl get nodes
NAME          STATUS     ROLES                  AGE     VERSION
k8s-cp0       NotReady   control-plane,master   4m57s   v1.20.7
k8s-worker0   NotReady   <none>                 3m17s   v1.20.7
k8s-worker1   NotReady   <none>                 2m45s   v1.20.7
user@installer:~$ kubectl describe node k8s-cp0
<trimmed>
Conditions:
  Type             Status   <trimmed>  Reason                       Message
  ----             ------              ------                       -------
  MemoryPressure   False               KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False               KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False               KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False               KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network
Addresses:
<trimmed>
-
Show the status of the pods.
All pods should have a STATUS of Running except for the DNS pods. The DNS pods do not come up because there is no networking. This is what we expect.
user@installer:~$ kubectl get pods -A -o wide
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE     IP            NODE          <trimmed>
kube-system   coredns-657959df74-rprzv   0/1     Pending   0          6m44s   <none>        <none>        <none>   <none>
kube-system   dns-autoscaler-b5c786945   0/1     Pending   0          6m40s   <none>        <none>        <none>   <none>
kube-system   kube-apiserver-k8s-cp0     1/1     Running   0          9m27s   172.16.0.11   k8s-cp0       <none>   <none>
kube-system   kube-controller-manager-   1/1     Running   0          9m27s   172.16.0.11   k8s-cp0       <none>   <none>
kube-system   kube-proxy-k5mcp           1/1     Running   0          7m28s   172.16.0.13   k8s-worker1   <none>   <none>
kube-system   kube-proxy-sccjm           1/1     Running   0          7m28s   172.16.0.11   k8s-cp0       <none>   <none>
kube-system   kube-proxy-wqbt8           1/1     Running   0          7m28s   172.16.0.12   k8s-worker0   <none>   <none>
kube-system   kube-scheduler-k8s-cp0     1/1     Running   0          9m27s   172.16.0.11   k8s-cp0       <none>   <none>
kube-system   nginx-proxy-k8s-worker0    1/1     Running   0          8m2s    172.16.0.12   k8s-worker0   <none>   <none>
kube-system   nginx-proxy-k8s-worker1    1/1     Running   0          7m30s   172.16.0.13   k8s-worker1   <none>   <none>