Create a Kubernetes Cluster
Use this example procedure to create an upstream Kubernetes cluster.
We provide this example procedure purely for informational purposes. There are multiple ways you can create a cluster. In this example, we'll use kubespray.
To make the steps easier to follow, we'll use a separate installer machine to perform the installation and to run kubectl and other tools.
For more information on creating a cluster, see the official Kubernetes documentation (https://kubernetes.io/docs/home/).
The command line examples below don't show the directory paths. We leave it to you to apply these commands within your directory structure.
Before you start, make sure you've brought up the servers or VMs that you plan to use for the cluster nodes.
-
Install a fresh OS on the installer machine, configuring the OS minimally for the
following:
- static IP address and mask (for example, 172.16.0.10/24 for our single cluster) and gateway
- access to one or more DNS servers
- SSH connectivity including root SSH access
- NTP
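As a sketch, on an Ubuntu-based installer machine the static address, gateway, and DNS settings above might be applied with a netplan file such as the following. The interface name (enp0s3), gateway, and DNS server address are assumptions; adjust them for your environment.

```yaml
# /etc/netplan/01-installer.yaml (hypothetical example)
network:
  version: 2
  ethernets:
    enp0s3:                      # assumed interface name
      addresses:
        - 172.16.0.10/24         # static IP and mask from this example
      routes:
        - to: default
          via: 172.16.0.1        # assumed gateway
      nameservers:
        addresses: [172.16.0.2]  # assumed DNS server
```

Apply the configuration with sudo netplan apply.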
- From your local computer, SSH into the installer machine as the root user.
-
Install kubectl. In this example, we run kubectl on the installer machine. If you want
to run kubectl on another machine (for example, your local computer), download and install
kubectl on that machine instead.
-
This downloads kubectl version 1.24.3:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.24.3/bin/linux/amd64/kubectl
-
Make the file executable and move it to a directory in your path (for example,
/usr/local/bin).
chmod +x kubectl
sudo mv kubectl /usr/local/bin
-
Install the Python virtual environment package for the Python version you're running. In this
example, we're running Python 3.8.
python3 --version
apt install -y python3.8-venv
-
If you want to install Contrail Analytics, then install Helm 3.0 or later.
For information on how to install Helm, see https://helm.sh/docs/intro/install/.
-
Configure SSH password-less root access from the installer machine to the control plane
and worker nodes. This allows Ansible to log in to these nodes when you run the playbook
later.
-
Create an SSH key.
ssh-keygen
In this example, we store the SSH key pair in its default location; the public key is ~/.ssh/id_rsa.pub.
-
Copy the key to the root user on the control plane and worker nodes. For
example:
ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.16.0.11
ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.16.0.12
ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.16.0.13
-
Clone the kubespray repository. In this example, we're cloning release 2.20.
For example:
git clone https://github.com/kubernetes-sigs/kubespray -b release-2.20
-
Install the required packages to run this version of kubespray. The required packages
are listed in kubespray/requirements.txt. We'll install these
packages inside a Python virtual environment.
-
Set up the virtual environment.
cd kubespray
python3 -m venv env
source env/bin/activate
The virtual environment is indicated in the prompt by (env).
-
Install the required packages within the virtual environment.
pip3 install -r requirements.txt
Note: Perform subsequent steps in this virtual environment until the cluster is set up. This virtual environment ensures you're running the correct version of Ansible for this version of kubespray.
-
Configure the pod and service subnets if desired.
The default subnets used by kubespray are defined in the kubespray/roles/kubespray-defaults/defaults/main.yaml file. Look for the following parameters in that file and change accordingly.
kube_service_addresses: 10.233.0.0/18
kube_pods_subnet: 10.233.64.0/18
kube_service_addresses_ipv6: fd85:ee78:d8a6:8607::1000/116
kube_pods_subnet_ipv6: fd85:ee78:d8a6:8607::1:0000/112
Note: If you're creating a multi-cluster CN2 setup, you must configure different pod and service subnets on each cluster. These subnets must be unique within the entire multi-cluster.
If you're following the multi-cluster example in this document, then leave the subnets on the central cluster at their defaults and configure the subnets on the workload cluster as follows:
kube_service_addresses: 10.234.0.0/18
kube_pods_subnet: 10.234.64.0/18
kube_service_addresses_ipv6: fd85:ee78:d8a6:8608::1000/116
kube_pods_subnet_ipv6: fd85:ee78:d8a6:8608::1:0000/112
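To double-check that the workload cluster's subnets are disjoint from the central cluster's defaults, you can run a quick check with Python's standard ipaddress module. This is only a sketch; the subnet lists are the values shown above.

```shell
python3 - <<'EOF'
# Hypothetical check: assert the central (default) and workload cluster
# subnets do not overlap, as required for a multi-cluster CN2 setup.
import ipaddress
central  = ["10.233.0.0/18", "10.233.64.0/18",
            "fd85:ee78:d8a6:8607::1000/116", "fd85:ee78:d8a6:8607::1:0000/112"]
workload = ["10.234.0.0/18", "10.234.64.0/18",
            "fd85:ee78:d8a6:8608::1000/116", "fd85:ee78:d8a6:8608::1:0000/112"]
for c in central:
    for w in workload:
        nc, nw = ipaddress.ip_network(c), ipaddress.ip_network(w)
        # overlaps() only compares networks of the same IP version
        if nc.version == nw.version and nc.overlaps(nw):
            raise SystemExit(f"overlap: {c} and {w}")
print("subnets are disjoint")
EOF
```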
-
If you're running DPDK in your cluster, then configure multus.
Multus is required when running DPDK.
-
Enable multus.
In kubespray/roles/kubespray-defaults/defaults/main.yaml, enable multus:
kube_network_plugin_multus: true
-
Set the multus version to 0.3.1, which is the version required for running DPDK.
You set the version in two files.
In kubespray/roles/network_plugin/multus/defaults/main.yml, configure the multus version:
multus_cni_version: "0.3.1"
In kubespray/extra_playbooks/roles/network_plugin/multus/defaults/main.yml, configure the multus version:
multus_cni_version: "0.3.1"
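If you prefer, the same substitution can be made non-interactively with sed. This is only a sketch, demonstrated here on a throwaway temporary file; in practice you would point the same sed expression at the two defaults/main.yml files listed above.

```shell
# Demonstrate the substitution on a temporary file with hypothetical contents.
f=$(mktemp)
echo 'multus_cni_version: "0.4.0"' > "$f"
# Replace whatever version is set with the one required for DPDK.
sed -i 's/^multus_cni_version:.*/multus_cni_version: "0.3.1"/' "$f"
cat "$f"    # multus_cni_version: "0.3.1"
rm -f "$f"
```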
-
Create the inventory file for Ansible to use. For example:
all:
  hosts: # list all nodes here
    k8s-cp0: # desired hostname
      ansible_host: 172.16.0.11
    k8s-worker0:
      ansible_host: 172.16.0.12
    k8s-worker1:
      ansible_host: 172.16.0.13
  vars:
    ansible_user: root
    artifacts_dir: /tmp/mycluster
    cluster_name: mycluster.contrail.lan
    container_manager: crio # container runtime
    download_container: false
    download_localhost: true
    download_run_once: true
    enable_dual_stack_networks: true
    enable_nodelocaldns: false
    etcd_deployment_type: host
    host_key_checking: false
    kube_network_plugin: cni
    kube_network_plugin_multus: false
    kube_version: v1.24.3
    kubeconfig_localhost: true
    kubectl_localhost: true
    kubelet_deployment_type: host
    override_system_hostname: true
kube-master:
  hosts: # hostname of control plane node (from hosts section)
    k8s-cp0:
kube-node:
  hosts: # hostnames of worker nodes (from hosts section)
    k8s-worker0:
    k8s-worker1:
etcd:
  hosts: # hostname of control plane node (from hosts section)
    k8s-cp0:
k8s-cluster:
  children:
    kube-master:
    kube-node:
The host names (k8s-cp0, k8s-worker0, k8s-worker1) that you specify in the file are automatically configured on the nodes when the override_system_hostname parameter is set to true.
Note: If you're creating a multi-cluster CN2 setup, you must configure different node names for each node in the multi-cluster. Node names must be unique across the entire multi-cluster.
Note: If you're running DPDK, set kube_network_plugin_multus: true.
If you want to run with a different container runtime, change the container_manager value above.
Ensure enable_nodelocaldns is set to false.
If you want to run with a different number of control plane and worker nodes, adjust the inventory accordingly.
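For the multi-cluster node-name requirement above, a quick way to spot duplicates is to run all node names through sort and uniq. This sketch feeds a hypothetical sample list; in practice, collect the node names from every cluster's inventory.yaml.

```shell
# Any name printed below appears more than once (sample list is hypothetical).
printf '%s\n' k8s-cp0 k8s-worker0 k8s-worker1 | sort | uniq -d
# No output means every node name is unique.
```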
-
Check that Ansible can SSH into the control plane and worker nodes based on the
contents of the inventory.yaml file. In this example, the
inventory.yaml file is in the ~/contrail
directory.
ansible -i inventory.yaml -m ping all
[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details
k8s-cp0 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
k8s-worker0 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
k8s-worker1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
-
To create the cluster, run the playbook from the kubespray
directory. Adjust the command below to reference the inventory.yaml
within your directory structure.
ansible-playbook -i inventory.yaml cluster.yml
This step can take an hour or more to complete, depending on the size of your cluster.
Note: You can safely ignore network and CNI warnings and errors because you haven't configured a CNI yet.
-
Optionally, deactivate the Python virtual environment. We've finished running Ansible,
so we no longer need the virtual environment.
deactivate
-
Copy the cluster's secure token to the default ~/.kube/config
location. The kubeconfig must be at that default location for CN2 tools to work.
You can find the secure token location from the inventory.yaml file. If you use the inventory file in this example, the token is in /tmp/mycluster.
mkdir ~/.kube
cp /tmp/mycluster/admin.conf ~/.kube/config
Note: If you have a kubeconfig that already holds tokens for existing clusters, then you'll need to merge rather than overwrite the ~/.kube/config file.
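One way to merge is with kubectl itself: list both files in the KUBECONFIG environment variable and flatten them into a single file. A sketch, assuming kubectl is installed and using the paths from this example:

```shell
# Guarded so the commands are skipped when kubectl isn't on the PATH.
if command -v kubectl >/dev/null 2>&1; then
  # KUBECONFIG may list several files; `kubectl config view --flatten`
  # merges them into one document that can replace ~/.kube/config.
  KUBECONFIG="$HOME/.kube/config:/tmp/mycluster/admin.conf" \
    kubectl config view --flatten > /tmp/merged-config &&
    mv /tmp/merged-config "$HOME/.kube/config" ||
    echo "merge failed; check that both kubeconfig files exist"
else
  echo "kubectl not found; nothing merged"
fi
```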
-
Use standard kubectl commands to check on the health of the cluster.
-
Show the status of the nodes.
kubectl get nodes
You can see that the nodes are not ready because there is no CNI plug-in. This is expected because you haven't installed CN2 yet.
NAME          STATUS     ROLES                  AGE     VERSION
k8s-cp0       NotReady   control-plane,master   4m57s   v1.20.7
k8s-worker0   NotReady   <none>                 3m17s   v1.20.7
k8s-worker1   NotReady   <none>                 2m45s   v1.20.7
user@installer:~$ kubectl describe node k8s-cp0
<trimmed>
Conditions:
  Type             Status   <trimmed>   Reason                       Message
  ----             ------               ------                       -------
  MemoryPressure   False                KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False                KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False                KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False                KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network <trimmed>
Addresses: <trimmed>
-
Show the status of the pods.
kubectl get pods -A -o wide
All pods should have a STATUS of Running except for the DNS pods. The DNS pods do not come up because there is no networking. This is what we expect.
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE     IP            NODE          <trimmed>
kube-system   coredns-657959df74-rprzv   0/1     Pending   0          6m44s   <none>        <none>        <none> <none>
kube-system   dns-autoscaler-b5c786945   0/1     Pending   0          6m40s   <none>        <none>        <none> <none>
kube-system   kube-apiserver-k8s-cp0     1/1     Running   0          9m27s   172.16.0.11   k8s-cp0       <none> <none>
kube-system   kube-controller-manager-   1/1     Running   0          9m27s   172.16.0.11   k8s-cp0       <none> <none>
kube-system   kube-proxy-k5mcp           1/1     Running   0          7m28s   172.16.0.13   k8s-worker1   <none> <none>
kube-system   kube-proxy-sccjm           1/1     Running   0          7m28s   172.16.0.11   k8s-cp0       <none> <none>
kube-system   kube-proxy-wqbt8           1/1     Running   0          7m28s   172.16.0.12   k8s-worker0   <none> <none>
kube-system   kube-scheduler-k8s-cp0     1/1     Running   0          9m27s   172.16.0.11   k8s-cp0       <none> <none>
kube-system   nginx-proxy-k8s-worker0    1/1     Running   0          8m2s    172.16.0.12   k8s-worker0   <none> <none>
kube-system   nginx-proxy-k8s-worker1    1/1     Running   0          7m30s   172.16.0.13   k8s-worker1   <none> <none>