Install with User-Managed Networking
Use this procedure to bring up a cluster with user-managed networking. User-managed networking refers to a deployment where you explicitly provide an external load balancer for your installation.
Figure 1 shows a cluster of 3 control plane nodes and 2 worker nodes running on bare metal servers or virtual machines in a user-managed networking setup that includes an external load balancer. All communication between nodes in the cluster and between nodes and external sites takes place over the single 172.16.0.0/24 fabric virtual network.
A separate machine acts as the Assisted Installer client. The Assisted Installer client is where you use curl commands to issue API calls to the Assisted Installer service to create the cluster. In the example in this procedure, we use the Assisted Installer client machine to host a DNS/DHCP server for the subnet as well.
The local administrator is shown attached to a separate network reachable through a gateway. This is typical of many installations where the local administrator manages the fabric and cluster from the corporate LAN. In the procedures that follow, we refer to the local administrator station as your local computer.
Connecting all nodes together is the data center fabric, which is shown in the example as a single subnet. In real installations, the data center fabric is a network of spine-and-leaf switches that provide the physical connectivity for the cluster.
In an Apstra-managed data center, you would specify this connectivity through the overlay virtual networks that you create across the underlying fabric switches.
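Because user-managed networking means you provide the external load balancer yourself, that load balancer must forward OpenShift API traffic (TCP 6443) and machine config server traffic (TCP 22623) to the control plane nodes, and ingress traffic (TCP 80 and 443) to the worker nodes. The following HAProxy fragment is a minimal sketch only; the node addresses (172.16.0.11 through 172.16.0.15) are hypothetical hosts on the 172.16.0.0/24 fabric, and your load balancer software and layout may differ.
cat << EOF > ./haproxy.cfg
# Sketch only: substitute your own node names and addresses.
defaults
    mode tcp
    timeout connect 5s
    timeout client 1m
    timeout server 1m
listen api
    bind *:6443
    server cp1 172.16.0.11:6443 check
    server cp2 172.16.0.12:6443 check
    server cp3 172.16.0.13:6443 check
listen machine-config
    bind *:22623
    server cp1 172.16.0.11:22623 check
    server cp2 172.16.0.12:22623 check
    server cp3 172.16.0.13:22623 check
listen ingress-http
    bind *:80
    server worker1 172.16.0.14:80 check
    server worker2 172.16.0.15:80 check
listen ingress-https
    bind *:443
    server worker1 172.16.0.14:443 check
    server worker2 172.16.0.15:443 check
EOF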
This procedure uses the Assisted Installer service hosted by Red Hat, and covers the more common early binding use case where you register the cluster before bringing up the hosts.
- Log in to the Assisted Installer client machine. The Assisted Installer client machine is where you issue Assisted Installer API calls to the Assisted Installer service.
- Prepare the deployment by setting the environment variables that you'll use in later steps.
- Create an SSH key that you'll use to access the nodes in your cluster. Save the key to an environment variable.
root@ai-client:~# ssh-keygen
In this example, we've stored the SSH key in its default location ~/.ssh/id_rsa.pub.
export CLUSTER_SSHKEY=$(cat ~/.ssh/id_rsa.pub)
- Download the image pull secret from your Red Hat account onto your local computer. The pull secret allows your installation to access services and registries that serve container images for OpenShift components.
If you're using the Red Hat hosted Assisted Installer, you can download the pull secret file (pull-secret) from the https://console.redhat.com/openshift/downloads page. Copy the pull-secret file to the Assisted Installer client machine. In this example, we store the pull-secret in a file called pull-secret.txt.
Strip out any whitespace, convert the contents to JSON string format, and store to an environment variable, as follows:
export PULL_SECRET=$(sed '/^[[:space:]]*$/d' pull-secret.txt | jq -R .)
- Copy the offline access token from your Red Hat account. The OpenShift Cluster Manager API Token allows you (on the Assisted Installer client machine) to interact with the Assisted Installer API service hosted by Red Hat.
The token is a string that you can copy and paste to a local environment variable. If you're using the Red Hat hosted Assisted Installer, you can copy the API token from https://console.redhat.com/openshift/downloads.
export OFFLINE_ACCESS_TOKEN='<paste offline access token here>'
- Generate (refresh) the token from the OFFLINE_ACCESS_TOKEN. You will use this generated token whenever you issue API commands.
export TOKEN=$(curl --silent --data-urlencode "grant_type=refresh_token" --data-urlencode "client_id=cloud-services" --data-urlencode "refresh_token=${OFFLINE_ACCESS_TOKEN}" https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token | jq -r .access_token)
Note: This token expires regularly. When the token expires, you will get an HTTP 4xx response whenever you issue an API command. Refresh the token when it expires, or refresh it regularly prior to expiry. There is no harm in refreshing a token that hasn't expired.
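Because you'll refresh the token repeatedly, you may find it convenient to wrap the same curl call shown above in a small shell function. This is just a convenience sketch, not part of the procedure:
refresh_token() {
  export TOKEN=$(curl --silent --data-urlencode "grant_type=refresh_token" --data-urlencode "client_id=cloud-services" --data-urlencode "refresh_token=${OFFLINE_ACCESS_TOKEN}" https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token | jq -r .access_token)
}
Call refresh_token whenever an API command starts returning a 4xx response.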
- Set up the remaining environment variables.
Table 1 lists all the environment variables that you need to set in this procedure, including the ones described in the previous steps.
Table 1: Environment Variables

| Variable | Description | Example |
| --- | --- | --- |
| CLUSTER_SSHKEY | The (public) SSH key that you generated. This key will be installed on all cluster nodes. | – |
| PULL_SECRET | The image pull secret that you downloaded, stripped, and converted to JSON string format. | – |
| OFFLINE_ACCESS_TOKEN | The OpenShift Cluster Manager API Token that you copied. | – |
| TOKEN | The token that you generated (refreshed) from the OFFLINE_ACCESS_TOKEN. | – |
| CLUSTER_NAME | The name you want to call the cluster. This is the name that will appear in the Red Hat Hybrid Cloud Console UI. Note: This name must be in lowercase. | mycluster |
| CLUSTER_DOMAIN | The base domain that you want to assign to the cluster. Cluster objects will be assigned names in this domain. | contrail.lan |
| CLUSTER_NET | The overlay cluster network. Pods are assigned IP addresses on this network. | 10.128.0.0/14 |
| CLUSTER_SVC_NET | The overlay service network. Services are assigned IP addresses on this network. | 172.31.0.0/16 |
| CLUSTER_HOST_PFX | The subnet prefix length for assigning IP addresses from CLUSTER_NET. This defines the subset of CLUSTER_NET IP addresses to use for pod IP address assignment. | 23 |
| AI_URL | The URL of the Assisted Installer service. This example uses the Red Hat hosted Assisted Installer. | https://api.openshift.com |
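For example, using the sample values from Table 1 (adjust these for your environment):
export CLUSTER_NAME=mycluster
export CLUSTER_DOMAIN=contrail.lan
export CLUSTER_NET=10.128.0.0/14
export CLUSTER_SVC_NET=172.31.0.0/16
export CLUSTER_HOST_PFX=23
export AI_URL=https://api.openshift.com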
- Register the cluster with the Assisted Installer service. By registering, you're telling the Assisted Installer service about the attributes of the cluster you want to create. In response, the Assisted Installer service creates a cluster resource and returns a cluster identifier that uniquely identifies that cluster resource.
- Create the file that describes the cluster you want to create. In this example, we name the file deployment.json.
cat << EOF > ./deployment.json
{
  "kind": "Cluster",
  "name": "$CLUSTER_NAME",
  "openshift_version": "<openshift_version>",
  "ocp_release_image": "quay.io/openshift-release-dev/ocp-release:<coreOS_version>-x86_64",
  "base_dns_domain": "$CLUSTER_DOMAIN",
  "hyperthreading": "all",
  "cluster_network_cidr": "$CLUSTER_NET",
  "cluster_network_host_prefix": $CLUSTER_HOST_PFX,
  "service_network_cidr": "$CLUSTER_SVC_NET",
  "user_managed_networking": true,
  "ssh_public_key": "$CLUSTER_SSHKEY",
  "pull_secret": $PULL_SECRET
}
EOF
Note: Ensure you specify a <coreOS_version> (for example, 4.12.0) that is listed in https://www.juniper.net/documentation/us/en/software/cn-cloud-native/cn2-tested-integrations/cn-cloud-native-tested-integrations/concept/cn-cloud-native-tested-integrations.html for the <openshift_version> (for example, 4.12) you're using. Also ensure that the <openshift_version> that you specify is compatible with the CN2 release that you're installing.
- Register the cluster and save the CLUSTER_ID to an environment variable. Reference the deployment.json file you just created.
export CLUSTER_ID=$(curl -s -X POST "$AI_URL/api/assisted-install/v2/clusters" -d @./deployment.json --header "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" | jq -r '.id')
- Update the cluster attributes to specify that you want the cluster to use CN2 as the networking technology.
curl --header "Content-Type: application/json" --request PATCH --data '"{\"networking\":{\"networkType\":\"Contrail\"}}"' -H "Authorization: Bearer $TOKEN" $AI_URL/api/assisted-install/v2/clusters/$CLUSTER_ID/install-config
- Review the changes.
curl -s -X GET --header "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" $AI_URL/api/assisted-install/v2/clusters/$CLUSTER_ID/install-config | jq -r
Once you've registered your cluster, you can point your browser to the Red Hat Hybrid Cloud Console (https://console.redhat.com/openshift) to watch the progress of the installation. You can search for your cluster by cluster name or cluster ID.
Create the file that describes the cluster you want to create. In this example, we
name the file deployment.json.
- Generate the discovery boot ISO. You will use this ISO to boot the nodes in your cluster.
The ISO is customized to your infrastructure based on the infrastructure environment that you'll set up.
- Create a file that describes the infrastructure environment. In this example, we name it infra-envs.json.
With early binding, the infrastructure environment includes the cluster details.
cat << EOF > ./infra-envs.json
{
  "name": "$CLUSTER_NAME",
  "ssh_authorized_key": "$CLUSTER_SSHKEY",
  "pull_secret": $PULL_SECRET,
  "image_type": "full-iso",
  "cluster_id": "$CLUSTER_ID",
  "openshift_version": "4.12",
  "cpu_architecture": "x86_64"
}
EOF
- Register the InfraEnv. In response, the Assisted Installer service assigns an InfraEnv ID and builds the discovery boot ISO based on the specified infrastructure environment. Reference the infra-envs.json file you just created. Store the InfraEnv ID in a variable.
export INFRA_ENVS_ID=$(curl -s -X POST "$AI_URL/api/assisted-install/v2/infra-envs" -d @infra-envs.json --header "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" | jq -r '.id')
- Patch the ISO with the proper certificate for the extended API server.
You do this by applying an ignition file. Copy the contents below into an infra-ignition.json file. The contents contain an encoded script that configures the extended API server with the proper certificate.
{"ignition_config_override": "{\"ignition\":{\"version\":\"3.1.0\"},\"systemd\":{\"units\":[{\"name\":\"ca-patch.service\",\"enabled\":true,\"contents\":\"[Service]\\nType=oneshot\\nExecStart=/usr/local/bin/ca-patch.sh\\n\\n[Install]\\nWantedBy=multi-user.target\"}]},\"storage\":{\"files\":[{\"path\":\"/usr/local/bin/ca-patch.sh\",\"mode\":720,\"contents\":{\"source\":\"data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKc3VjY2Vzcz0wCnVudGlsIFsgJHN1Y2Nlc3MgLWd0IDEgXTsgZG8KICB0bXA9JChta3RlbXApCiAgY2F0IDw8RU9GPiR7dG1wfSB8fCB0cnVlCmRhdGE6CiAgcmVxdWVzdGhlYWRlci1jbGllbnQtY2EtZmlsZTogfAokKHdoaWxlIElGUz0gcmVhZCAtYSBsaW5lOyBkbyBlY2hvICIgICAgJGxpbmUiOyBkb25lIDwgPChjYXQgL2V0Yy9rdWJlcm5ldGVzL2Jvb3RzdHJhcC1zZWNyZXRzL2FnZ3JlZ2F0b3ItY2EuY3J0KSkKRU9GCiAgS1VCRUNPTkZJRz0vZXRjL2t1YmVybmV0ZXMvYm9vdHN0cmFwLXNlY3JldHMva3ViZWNvbmZpZyBrdWJlY3RsIC1uIGt1YmUtc3lzdGVtIHBhdGNoIGNvbmZpZ21hcCBleHRlbnNpb24tYXBpc2VydmVyLWF1dGhlbnRpY2F0aW9uIC0tcGF0Y2gtZmlsZSAke3RtcH0KICBpZiBbWyAkPyAtZXEgMCBdXTsgdGhlbgoJcm0gJHt0bXB9CglzdWNjZXNzPTIKICBmaQogIHJtICR7dG1wfQogIHNsZWVwIDYwCmRvbmUK\"}}]},\"kernelArguments\":{\"shouldExist\":[\"ipv6.disable=1\"]}}"}
Apply the ignition file that you just created.
curl -s -X PATCH $AI_URL/api/assisted-install/v2/infra-envs/$INFRA_ENVS_ID -d @infra-ignition.json --header "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" | jq -r '.id'
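Optionally, you can decode and review the script embedded in the ignition override. This sketch assumes the file is named infra-ignition.json as above and uses only jq, sed, and base64:
jq -r '.ignition_config_override | fromjson | .storage.files[0].contents.source' infra-ignition.json | sed 's|^data:text/plain;charset=utf-8;base64,||' | base64 -d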
- Download the ISO. In this example, we call it ai-liveiso-$CLUSTER_ID.iso, referencing the cluster identifier in the name.
export IMAGE_URL=$(curl -H "Authorization: Bearer $TOKEN" -L "$AI_URL/api/assisted-install/v2/infra-envs/$INFRA_ENVS_ID/downloads/image-url" | jq -r '.url')
curl -H "Authorization: Bearer $TOKEN" -L "$IMAGE_URL" -o ./ai-liveiso-$CLUSTER_ID.iso
- Boot the 3 control plane nodes with the discovery boot ISO.
- Choose the boot method most convenient for your infrastructure. Ensure that the machines boot up attached to a network that has access to the Red Hat hosted Assisted Installer service.
In the example network shown in Figure 1, the nodes have a single interface and that interface is attached to the 172.16.0.0/24 network, which has external connectivity to the Assisted Installer service.
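How you boot from the ISO depends on your infrastructure: on bare metal servers you typically attach the ISO as virtual media through the server's BMC, while on virtual machines you attach it as a boot CD-ROM. As an illustrative sketch only, assuming KVM/libvirt virtual machines, a host bridge named br0 attached to the 172.16.0.0/24 network, and that you've copied the discovery ISO to the hypervisor host (all assumptions, not part of this procedure):
virt-install --name cp1 --memory 32768 --vcpus 8 --disk size=120 --cdrom ./ai-liveiso-$CLUSTER_ID.iso --network bridge=br0 --os-variant generic --noautoconsole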
- Check the cluster status:
curl -s -X GET --header "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" -H "get_unregistered_clusters: false" $AI_URL/api/assisted-install/v2/clusters/$CLUSTER_ID?with_hosts=true | jq -r '.status'
The status should indicate ready when the nodes come up successfully.
- Check the validations status:
curl -s -X GET --header "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" -H "get_unregistered_clusters: false" $AI_URL/api/assisted-install/v2/clusters/$CLUSTER_ID?with_hosts=true | jq -r '.validations_info' | jq .
The validations status shows whether you've defined your cluster properly. There should be no errors in the output.
- Check the hosts:
curl -s -X GET --header "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" -H "get_unregistered_clusters: false" $AI_URL/api/assisted-install/v2/clusters/$CLUSTER_ID?with_hosts=true | jq -r '.hosts'
The output shows details on all the nodes you've booted.
You can filter for specific information, such as host IDs:
curl -s -X GET --header "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" -H "get_unregistered_clusters: false" $AI_URL/api/assisted-install/v2/clusters/$CLUSTER_ID?with_hosts=true | jq -r '.hosts[].id'
and host roles:
curl -s -X GET --header "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" -H "get_unregistered_clusters: false" $AI_URL/api/assisted-install/v2/clusters/$CLUSTER_ID?with_hosts=true | jq -r '.hosts[].role'
- Confirm that the roles have been assigned.
On your browser, go to the Red Hat Hybrid Cloud Console (https://console.redhat.com/openshift) and click on your cluster to see details of your cluster. You can search for your cluster by cluster name or by cluster ID.
Proceed to the next step only when the nodes (hosts) come up and the roles have been assigned successfully as control plane nodes. Since we've only booted the 3 control plane nodes, you'll see the roles assigned in the UI as control plane, worker.
- Repeat step 5 to boot up the worker nodes.
When you query the host roles, you'll see the worker nodes in auto-assign state. This is expected. The Assisted Installer service assigns roles for these nodes later when you install the cluster.
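For example, to list each host together with its assigned role and current status on one line (using field names returned by the v2 API):
curl -s -X GET --header "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" $AI_URL/api/assisted-install/v2/clusters/$CLUSTER_ID?with_hosts=true | jq -r '.hosts[] | "\(.requested_hostname)  \(.role)  \(.status)"'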
- If you're running your worker nodes with a DPDK data path, label each worker node that is running a DPDK data path as follows, where $HOST_ID is the host identifier of the worker node that you want to run DPDK:
curl --location --request PATCH "$AI_URL/api/assisted-install/v2/infra-envs/$INFRA_ENVS_ID/hosts/$HOST_ID" --header "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" --data-raw '{"node_labels":[{"key": "agent-mode","value": "dpdk"},{"key": "node-role.kubernetes.io/dpdk","value": ""}]}'
- Upload the CN2 manifests (that you downloaded in Before You Install) to the Assisted Installer service.
For convenience, you can use the following bash script. The script assumes you've placed the manifests in the manifests directory. If you use this script, make sure that:
- you place all the manifests you want to use in this directory, including all manifests that you want to use from the subdirectories
- you don't place any other YAML files in this directory
The script loops through all the *.yaml files in the manifests directory, encodes them in base64, and applies them to the cluster.
#!/bin/bash
MANIFESTS=(manifests/*.yaml)
total=${#MANIFESTS[@]}
i=0
for file in "${MANIFESTS[@]}"; do
  i=$(( i + 1 ))
  eval "CONTRAIL_B64_MANIFEST=$(cat $file | base64 -w0)";
  eval "BASEFILE=$(basename $file)";
  eval "echo '{\"file_name\":\"$BASEFILE\", \"folder\":\"manifests\", \"content\":\"$CONTRAIL_B64_MANIFEST\"}' > $file.b64";
  printf "\nProcessing file: $file\n"
  curl -s -X POST "$AI_URL/api/assisted-install/v2/clusters/$CLUSTER_ID/manifests" -d @$file.b64 --header "Content-Type: application/json" -H "Authorization: Bearer $TOKEN"
done
echo
echo
echo "Total Manifests: $total"
echo
echo "Manifest List:"
curl -s -X GET "$AI_URL/api/assisted-install/v2/clusters/$CLUSTER_ID/manifests" --header "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" | jq -r
- Start the installation of the cluster.
curl -X POST "$AI_URL/api/assisted-install/v2/clusters/$CLUSTER_ID/actions/install" -H "accept: application/json" -H "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" | jq
The Assisted Installer service creates the cluster based on the cluster resource that you defined. First, the Assisted Installer assigns one of the control plane nodes as the bootstrap node, which in turn prepares the other nodes. One by one, you'll see the non-bootstrap nodes reboot into the cluster, with the non-bootstrap control plane nodes rebooting first, then the worker nodes, and finally the bootstrap node.
The installation can take an hour or more. You can watch the progress of the installation on the Red Hat Hybrid Cloud Console (https://console.redhat.com/openshift).
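You can also follow the installation from the Assisted Installer client machine by periodically polling the cluster status and status message, for example:
curl -s -X GET --header "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" $AI_URL/api/assisted-install/v2/clusters/$CLUSTER_ID | jq -r '.status, .status_info'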
If the Red Hat Hybrid Cloud Console shows the installation stalling, log in to each node with username core and make sure the host can resolve domain names, especially the ones you configured in your DNS/DHCP server as well as the Red Hat Assisted Installer service and Juniper Networks repository domain names. Most common installation problems can be traced back to network configuration errors such as incorrect DNS configuration. Also, in some environments, the nodes might shut down instead of rebooting. In that case, manually boot the nodes that have shut down.
- Download the kubeconfig for the cluster.
- Download the kubeconfig to a local kubeconfig file.
curl -s -X GET "$AI_URL/api/assisted-install/v2/clusters/$CLUSTER_ID/downloads/credentials?file_name=kubeconfig" -H "accept: application/octet-stream" -H "Authorization: Bearer $TOKEN" > kubeconfig
- Copy the kubeconfig to ~/.kube/config. The kubeconfig must be at this default location because the contrailstatus command that you'll use later expects to find it there.
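For example, on the Assisted Installer client machine (assuming kubectl is installed there):
mkdir -p ~/.kube
cp kubeconfig ~/.kube/config
kubectl get nodes
Once the installation completes, kubectl get nodes should list all cluster nodes in the Ready state.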
- Verify the installation.
- Check the status of all the pods. Confirm that all pods are either in Running or Completed states.
kubectl get pods -A -o wide
- For those pods not in Running or Completed states, use the kubectl describe command to investigate further.
kubectl describe pod <pod name> -n <namespace>
A common problem is failing to download an image. If this is the case (see the example checks after this list):
- check that your network can reach the Juniper Networks repository
- check that the node where the failed pod resides is configured to access a DNS server
- check that the node where the failed pod resides can ping the repository by hostname
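As a sketch, you can log in to the node hosting the failed pod with the core user and the SSH key you created earlier, then run checks such as the following, where <node-ip> and <repository-hostname> are placeholders for your node address and the container image repository hostname:
ssh core@<node-ip>
cat /etc/resolv.conf
getent hosts <repository-hostname>
ping -c 3 <repository-hostname>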
- (Optional) Log in to the OpenShift Web Console to monitor your cluster.
The Web Console URL is in the form: https://console-openshift-console.apps.<cluster-name>.<cluster-domain>.
- Ensure the browser where you want to access the Web Console is on a machine that has access to the Web Console URL. You may need to add an entry for the hostname of that console to the /etc/hosts file on that machine. Recall that this mapping is the *.apps.mycluster.contrail.lan mapping that you configured in the DNS server in Before You Install.
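For example, if your *.apps.mycluster.contrail.lan wildcard record points at 172.16.0.10 (a hypothetical ingress/load balancer address for this topology), the /etc/hosts entry on that machine might look like this (the oauth-openshift route is also needed to log in):
172.16.0.10  console-openshift-console.apps.mycluster.contrail.lan oauth-openshift.apps.mycluster.contrail.lan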
- Download the kubeadmin password.
curl -s -X GET "$AI_URL/api/assisted-install/v2/clusters/$CLUSTER_ID/downloads/credentials?file_name=kubeadmin-password" -H "accept: application/octet-stream" -H "Authorization: Bearer $TOKEN"
- Log in to the OpenShift Web Console with username kubeadmin and the downloaded password.