Add a Worker Node
Use this procedure to add a worker node to an OpenShift cluster.
We provide this example procedure purely for informational purposes. See Red Hat OpenShift documentation (https://docs.openshift.com/) for the official procedure.
This procedure shows an example of early binding. In early binding, you generate an ISO that is preconfigured for the existing cluster. When the node boots with that ISO, the node automatically reaches out to the existing cluster.
This causes one or more CertificateSigningRequests (CSRs) to be sent from the new node to the existing cluster. A CSR is simply a request to obtain the client certificates for the (existing) cluster. You'll need to explicitly approve these requests. Once approved, the existing cluster provides the client certificates to the new node, and the new node is allowed to join the existing cluster.
- Log in to the machine (VM or BMS) that you're using as the Assisted Installer client. The Assisted Installer client machine is where you issue Assisted Installer API calls to the Assisted Installer service hosted by Red Hat.
- Prepare the deployment by setting the environment variables that you'll use in later steps.
- Set up the same SSH key that you use for the existing cluster. In this example, we retrieve that SSH key from its default location ~/.ssh/id_rsa.pub and store it in a variable.
export CLUSTER_SSHKEY=$(cat ~/.ssh/id_rsa.pub)
- If you no longer have the image pull secret, then download the image pull secret from your Red Hat account onto your local computer. The pull secret allows your installation to access services and registries that serve container images for OpenShift components.
If you're using the Red Hat hosted Assisted Installer, you can download the pull secret file (pull-secret) from the https://console.redhat.com/openshift/downloads page. Copy the pull-secret file to the Assisted Installer client machine. In this example, we store the pull-secret in a file called pull-secret.txt.
Strip out any whitespace, convert the contents to JSON string format, and store to an environment variable, as follows:
export PULL_SECRET=$(sed '/^[[:space:]]*$/d' pull-secret.txt | jq -R .)
- If you no longer have your offline access token, then copy the offline access token from your Red Hat account. The OpenShift Cluster Manager API Token allows you (on the Assisted Installer client machine) to interact with the Assisted Installer API service hosted by Red Hat.
The token is a string that you can copy and paste to a local environment variable. If you're using the Red Hat hosted Assisted Installer, you can copy the API token from https://console.redhat.com/openshift/downloads.
export OFFLINE_ACCESS_TOKEN='<paste offline access token here>'
- Generate (refresh) the token from the OFFLINE_ACCESS_TOKEN. You will use this generated token whenever you issue API commands.
export TOKEN=$(curl --silent --data-urlencode "grant_type=refresh_token" --data-urlencode "client_id=cloud-services" --data-urlencode "refresh_token=${OFFLINE_ACCESS_TOKEN}" https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token | jq -r .access_token)
Note: This token expires regularly. When this token expires, you will get an HTTP 4xx response whenever you issue an API command. Refresh the token when it expires, or alternatively, refresh the token regularly prior to expiry. There is no harm in refreshing a token that hasn't expired.
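Because the refresh is needed repeatedly, it can help to wrap it in a small helper function. The sketch below is an assumption for convenience, not part of any Red Hat tooling: the function name refresh_token is our own, and it presumes jq is installed and OFFLINE_ACCESS_TOKEN is already exported.

```shell
# Hypothetical helper: re-generate TOKEN from OFFLINE_ACCESS_TOKEN on demand.
# Call it before issuing API commands if the previous token may have expired.
refresh_token() {
  TOKEN=$(curl --silent \
    --data-urlencode "grant_type=refresh_token" \
    --data-urlencode "client_id=cloud-services" \
    --data-urlencode "refresh_token=${OFFLINE_ACCESS_TOKEN}" \
    https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token \
    | jq -r .access_token)
  export TOKEN
}
```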
- Get the OpenShift cluster ID of the existing cluster. For example:
oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}'
1777102a-1fe1-407a-9441-9d0bad4f5968
Save it to a variable:
export OS_CLUSTER_ID="1777102a-1fe1-407a-9441-9d0bad4f5968"
- Set up the remaining environment variables. Table 1 lists all the environment variables that you need to set in this procedure, including the ones described in the previous steps.
Table 1: Environment Variables
- CLUSTER_SSHKEY: The (public) SSH key you use for the existing cluster. You must use this same key for the new node you're adding.
- PULL_SECRET: The image pull secret that you downloaded, stripped, and converted to JSON string format.
- OFFLINE_ACCESS_TOKEN: The OpenShift Cluster Manager API Token that you copied.
- TOKEN: The token that you generated (refreshed) from the OFFLINE_ACCESS_TOKEN.
- CLUSTER_NAME: The name of the existing cluster. Example: mycluster
- CLUSTER_DOMAIN: The base domain of the existing cluster. Example: contrail.lan
- OS_CLUSTER_ID: The OpenShift cluster ID of the existing cluster. Example: 1777102a-1fe1-407a-9441-9d0bad4f5968
- AI_URL: The URL of the Assisted Installer service. This example uses the Red Hat hosted Assisted Installer. Example: https://api.openshift.com
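For example, the variables not covered by the earlier steps can be exported directly. The values below are the illustrative ones from Table 1; substitute your own cluster name, base domain, and service URL.

```shell
# Illustrative values only; replace with your own cluster's details.
export CLUSTER_NAME="mycluster"
export CLUSTER_DOMAIN="contrail.lan"
export AI_URL="https://api.openshift.com"   # Red Hat hosted Assisted Installer
```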
- Import the existing cluster.
curl -X POST "$AI_URL/api/assisted-install/v2/clusters/import" -H "accept: application/json" -H "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" -d "{\"name\":\"$CLUSTER_NAME\",\"openshift_cluster_id\":\"$OS_CLUSTER_ID\",\"api_vip_dnsname\":\"api.$CLUSTER_NAME.$CLUSTER_DOMAIN\"}"
When you import the cluster, the Assisted Installer service returns a cluster ID for the AddHostsCluster. Look carefully for the cluster ID embedded in the response. For example:
"id":"19b809b5-69c4-42d8-9e5e-56aae4aba386"
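Rather than copying the ID by hand, you can extract it with jq. A minimal sketch, using a sample response in place of the live curl output; the variable name ADD_HOSTS_CLUSTER_ID is our own choice, not prescribed by the API.

```shell
# Sample import response standing in for the live API output.
IMPORT_RESPONSE='{"id":"19b809b5-69c4-42d8-9e5e-56aae4aba386","kind":"Cluster"}'
# Extract the AddHostsCluster ID and keep it for the InfraEnv step below.
export ADD_HOSTS_CLUSTER_ID=$(echo "$IMPORT_RESPONSE" | jq -r '.id')
echo "$ADD_HOSTS_CLUSTER_ID"
```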
- Generate the discovery boot ISO. You will use this ISO to boot up the node that you're adding to the cluster. The ISO is customized to your infrastructure based on the infrastructure environment that you'll set up.
- Create a file that describes the infrastructure environment. In this example, we name it infra-envs-addhost.json.
cat << EOF > ./infra-envs-addhost.json
{
  "name": "<InfraEnv Name>",
  "ssh_authorized_key": "$CLUSTER_SSHKEY",
  "pull_secret": $PULL_SECRET,
  "cluster_id": "<AddHostsCluster ID>",
  "openshift_version": "4.8",
  "user_managed_networking": <same as for existing cluster>,
  "vip_dhcp_allocation": <same as for existing cluster>,
  "base_dns_domain": "$CLUSTER_DOMAIN"
}
EOF
where:
- InfraEnv Name is the name you want to call the InfraEnv.
- AddHostsCluster ID is the cluster ID of the AddHostsCluster (obtained in the previous step).
- user_managed_networking and vip_dhcp_allocation are set to the same values as for the existing cluster.
- Register the InfraEnv. In response, the Assisted Installer service assigns an InfraEnv ID and builds the discovery boot ISO based on the specified infrastructure environment.
curl -X POST "$AI_URL/api/assisted-install/v2/infra-envs" -H "accept: application/json" -H "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" -d @infra-envs-addhost.json
When you register the InfraEnv, the Assisted Installer service returns an InfraEnv ID. Look carefully for the InfraEnv ID embedded in the response. For example:
"id":"78d20699-f25b-462c-bc1d-4738590a9344"
Store the InfraEnv ID into a variable. For example:
export INFRA_ENV_ID="78d20699-f25b-462c-bc1d-4738590a9344"
- Get the image download URL.
curl -s $AI_URL/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID/downloads/image-url -H "accept: application/json" -H "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" | jq '.url'
The Assisted Installer service returns the image URL.
- Download the ISO and save it to a file. In this example, we save it to ai-liveiso-addhosts.iso.
curl -L "<image URL>" -H "Authorization: Bearer $TOKEN" -o ./ai-liveiso-addhosts.iso
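The two previous steps can also be combined so the URL never needs to be pasted by hand. A sketch, shown here against a sample response (the URL below is made up for illustration); in practice you would run the commented-out live version instead.

```shell
# Sample response standing in for the image-url API call; this URL is illustrative.
RESPONSE='{"url":"https://api.openshift.com/images/example?type=full-iso"}'
IMAGE_URL=$(echo "$RESPONSE" | jq -r '.url')
echo "$IMAGE_URL"
# Live version (requires network access, a valid $TOKEN, and $INFRA_ENV_ID set):
#   IMAGE_URL=$(curl -s "$AI_URL/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID/downloads/image-url" \
#     -H "Authorization: Bearer $TOKEN" | jq -r '.url')
#   curl -L "$IMAGE_URL" -H "Authorization: Bearer $TOKEN" -o ./ai-liveiso-addhosts.iso
```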
- Boot the new worker node with the discovery boot ISO. Choose the boot method most convenient for your infrastructure. Ensure that the new node boots up attached to a network that has access to the Red Hat hosted Assisted Installer.
- Install the new node to the existing cluster.
- Inspect the new node to make sure its role is set to worker.
curl -s -X GET --header "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" $AI_URL/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID/hosts | jq '.'
Proceed only when the host role is set to worker.
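The role check can also be done mechanically with a jq filter over the hosts response. A sketch against sample data; against the live cluster, pipe the output of the curl command above through the same filter.

```shell
# Sample hosts response; only the role field matters for this check.
HOSTS='[{"id":"11111111-2222-3333-4444-555555555555","role":"worker"}]'
# jq -e exits non-zero unless every host in the list has the worker role.
if echo "$HOSTS" | jq -e 'all(.[]; .role == "worker")' >/dev/null; then
  echo "all hosts have the worker role"
fi
```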
- Get the host ID of the new node.
export HOST_ID=$(curl -s -X GET --header "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" $AI_URL/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID/hosts | jq -r '.[].id')
- Install the new node.
curl -X POST "$AI_URL/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID/hosts/$HOST_ID/actions/install" -H "accept: application/json" -H "Authorization: Bearer $TOKEN" | jq -r
- Check on the progress of the installation.
curl -s -X GET --header "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" $AI_URL/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID/hosts | jq '.'
The new node will eventually reboot.
- Once the new node has rebooted, it will try to join the existing cluster. This causes one or more CertificateSigningRequests (CSRs) to be sent from the new node to the existing cluster. You will need to approve these CSRs.
- Check for pending CSRs. For example:
root@ai-client:~/contrail# oc get csr -A
NAME        AGE   SIGNERNAME                                    REQUESTOR                                                                     CONDITION
csr-gblnm   20s   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper     Pending
You may need to repeat this command periodically until you see a pending CSR.
- Approve the CSRs. For example:
root@ai-client:~/contrail# oc adm certificate approve csr-gblnm
certificatesigningrequest.certificates.k8s.io/csr-gblnm approved
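If several CSRs arrive, you can select all pending ones with a jq filter and approve them in one pass rather than one at a time. A sketch against sample `oc get csr -o json` output (the CSR names below are made up); a pending CSR is one whose status has no conditions yet.

```shell
# Sample CSR list: csr-gblnm is pending (empty status), csr-done is already approved.
CSRS='{"items":[{"metadata":{"name":"csr-gblnm"},"status":{}},{"metadata":{"name":"csr-done"},"status":{"conditions":[{"type":"Approved"}]}}]}'
# Select the names of CSRs that have no conditions yet (i.e., Pending).
PENDING=$(echo "$CSRS" | jq -r '.items[] | select(.status.conditions == null) | .metadata.name')
echo "$PENDING"
# Against the live cluster:
#   oc get csr -o json \
#     | jq -r '.items[] | select(.status.conditions == null) | .metadata.name' \
#     | xargs --no-run-if-empty oc adm certificate approve
```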
- Verify that the new node is up and running in the existing cluster.
oc get nodes