Before You Install
- Set up an account with Red Hat and an account with Juniper Networks.
You'll need the Red Hat account to use the hosted Assisted Installer service, and you'll need the Juniper Networks account to download the CN2 manifests from the Juniper Networks download site (https://support.juniper.net/support/downloads/?p=contrail-networking) and to access the container repository at https://enterprise-hub.juniper.net.
- Set up the fabric network and connect your nodes to the fabric, depending on whether you're installing with user-managed networking or cluster-managed networking.
- Configure the installation machine. This is the computer where you issue the Assisted Installer commands or the Advanced Cluster Management commands.
- Install a fresh OS on the installation machine, configuring the OS minimally for the following (a sample package setup follows the list):
- static IP address and mask (for example, 172.16.0.10/24) and gateway
- access to one or more DNS servers
- SSH connectivity including root SSH access
- NTP
- curl (for Assisted Installer only)
- jq (for Assisted Installer only)
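As one possible starting point, the following sketch installs curl and jq and enables time synchronization on a RHEL-family installation machine. The package names and the use of chronyd for NTP are assumptions about a typical RHEL setup; adjust for your OS:
sudo dnf install -y curl jq
sudo systemctl enable --now chronyd   # time synchronization (NTP)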
- Install Helm 3.0 or later (optional). Helm is needed only if you want to install Contrail Analytics.
The following steps are copied from https://helm.sh/docs/intro/install/ for your convenience:
- Download the get_helm.sh script:
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
- Install Helm:
chmod 700 get_helm.sh
./get_helm.sh
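Once the script completes, you can confirm that the Helm client installed correctly:
helm version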
- Download the CN2 manifests from Juniper Networks.
- Download the CN2 manifests (see Manifests) onto your local computer.
- Copy the downloaded manifests and tools package to the installation machine and extract:
tar -xzvf contrail-manifests-openshift-<version>.tgz
- Identify the manifests you want to use and copy them to a separate directory. For an explanation of the manifests, see Manifests for your release.
Make sure you copy all the manifests that you plan to use, including the manifests from the subdirectories if applicable. In our example, we copy the manifests to a manifests directory. Don't copy the subdirectories themselves; the manifests directory should be a flat directory. (A copy sketch follows.)
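For example, a minimal sketch that gathers chosen manifests into a flat manifests directory. The specific files copied here are placeholders for whatever set you identified, not a required list:
mkdir -p manifests
cp contrail-manifests-openshift/99-disable-offload-master.yaml manifests/
# flatten a subdirectory without copying the subdirectory itself
find contrail-manifests-openshift/plugins -name '*.yaml' -exec cp {} manifests/ \;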
- Populate the manifests with your repository login credentials.
Add your repository login credentials to the contrail-manifests-openshift/auth-registry manifests. These manifests should be in the manifests directory from the previous step. See Configure Repository Credentials for information on how to add your repository credentials.
- Customize the manifests for your environment as necessary.
If you're running your cluster nodes on VMs, edit the following files to reference the actual names of your interfaces. These manifests disable checksum offloads on the named interface on the VM. (Checksum offload is usually supported only on real NICs on bare-metal servers.)
- 99-disable-offload-master.yaml - This manifest disables offload on the control plane nodes on the interface used for Kubernetes control plane traffic. This is the interface that attaches to the 172.16.0.0/24 network in our examples.
- 99-disable-offload-worker.yaml - This manifest disables offload on the worker nodes on the interface used for Kubernetes control plane traffic. This is the interface that attaches to the 172.16.0.0/24 network in our examples.
- 99-disable-offload-master-vrrp.yaml - This manifest disables offload on the control plane nodes on the interface used for Contrail control plane and user data plane traffic. Include this only when running a separate interface for Contrail control and data traffic (such as when using cluster-managed networking). This is the interface that attaches to the 10.16.0.0/24 network in our cluster-managed networking example.
- 99-disable-offload-worker-vrrp.yaml - This manifest disables offload on the worker nodes on the interface used for Contrail control plane and user data plane traffic. Include this only when running a separate interface for Contrail control and data traffic (such as when using cluster-managed networking). This is the interface that attaches to the 10.16.0.0/24 network in our cluster-managed networking example.
Look for the line
ExecStart=/sbin/ethtool -K ens3 tx off
or
ExecStart=/sbin/ethtool -K ens4 tx off
in these manifests and change the interface name to match the interface name on your control plane or worker node, as appropriate. (A sed sketch follows.)
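For example, a minimal sed sketch, assuming your VM interface is named enp1s0 (a placeholder; substitute your actual interface name):
# repeat for each offload manifest you plan to use, matching ens3 or ens4 as present in the file
sed -i 's/ens3/enp1s0/' manifests/99-disable-offload-master.yaml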
- Specify the Contrail control and data network if you're using cluster-managed networking.
Edit the following file to reference the subnet and gateway that you're using for Contrail control plane and user data plane traffic (an example follows the list):
- 99-network-configmap.yaml - This manifest specifies the network for Contrail control plane and user data plane traffic. Uncomment the contrail-network-config ConfigMap specification in the manifest and specify the appropriate subnet and gateway (for example, 10.16.0.0/24 and 10.16.0.254).
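As an illustration only, the uncommented ConfigMap might look like the following. The field names are assumptions based on the shipped manifest; verify this sketch against your downloaded copy of 99-network-configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: contrail-network-config
  namespace: contrail
data:
  networkConfig: |
    controlDataNetworks:
    - subnet: 10.16.0.0/24
      gateway: 10.16.0.254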
- If you're integrating CN2 with Juniper Apstra, configure your Juniper Apstra login credentials.
Configure your Apstra login credentials in the contrail-manifests-openshift/plugins/111-apstra-secret.yaml manifest. Make sure the username and password that you specify are base64-encoded. For more information, see https://www.juniper.net/documentation/us/en/software/cn-cloud-native23.4/cn-cloud-native-feature-guide/index.html.
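For example, you can base64-encode the credentials on the installation machine as follows (the values shown are placeholders; the -n flag prevents a trailing newline from being encoded):
echo -n 'admin' | base64       # YWRtaW4=
echo -n 'password' | base64    # cGFzc3dvcmQ=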
- Install contrailstatus on the installation machine. Contrailstatus is a kubectl plug-in you can use to query CN2 microservices and CN2-specific resources.
The contrailstatus executable is packaged within the downloaded tools package. Extract and copy the kubectl-contrailstatus executable to /usr/local/bin.
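A minimal sketch of this step, assuming a placeholder tools package name (the actual name varies by release). Because kubectl discovers plug-ins named kubectl-<name> on your PATH, you can verify discovery afterward:
tar -xzvf <tools-package>.tgz
sudo cp kubectl-contrailstatus /usr/local/bin/
kubectl plugin list            # the plug-in should appear as kubectl-contrailstatus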
- Install a load balancer (if you're running with user-managed networking). This step is not required when running with cluster-managed networking.
In this example, we run haproxy on the installation machine. You can choose to run a different load balancer for your installation.
- Install the load balancer. For example:
sudo dnf install haproxy
- Configure the load balancer.
We use a single IP address (172.16.0.10) that distributes API and ingress traffic to the nodes in the cluster.
Table 1: Example Load Balancer Entries

| Type of Traffic | Front End         | Back End                                                 |
|-----------------|-------------------|----------------------------------------------------------|
| api             | 172.16.0.10:6443  | 172.16.0.11:6443, 172.16.0.12:6443, 172.16.0.13:6443     |
| api-int         | 172.16.0.10:22623 | 172.16.0.11:22623, 172.16.0.12:22623, 172.16.0.13:22623  |
| https           | 172.16.0.10:443   | 172.16.0.14:443, 172.16.0.15:443                         |
| http            | 172.16.0.10:80    | 172.16.0.14:80, 172.16.0.15:80                           |
The corresponding haproxy configuration (typically /etc/haproxy/haproxy.cfg) looks like this:

frontend api
    bind 172.16.0.10:6443
    default_backend controlplaneapi

frontend apiinternal
    bind 172.16.0.10:22623
    default_backend controlplaneapiinternal

frontend secure
    bind 172.16.0.10:443
    default_backend secure

frontend insecure
    bind 172.16.0.10:80
    default_backend insecure

backend controlplaneapi
    balance roundrobin
    server cp0 172.16.0.11:6443 check
    server cp1 172.16.0.12:6443 check
    server cp2 172.16.0.13:6443 check

backend controlplaneapiinternal
    balance roundrobin
    server cp0 172.16.0.11:22623 check
    server cp1 172.16.0.12:22623 check
    server cp2 172.16.0.13:22623 check

backend secure
    balance roundrobin
    server worker0 172.16.0.14:443 check
    server worker1 172.16.0.15:443 check

backend insecure
    balance roundrobin
    server worker0 172.16.0.14:80 check
    server worker1 172.16.0.15:80 check
- Start the load balancer. For example:
systemctl start haproxy
Note: If you're running with SELinux, you may need to explicitly allow haproxy to listen on its configured ports (setsebool -P haproxy_connect_any 1).
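To confirm that the load balancer is listening on the expected front-end ports, you can inspect the listening sockets:
ss -tlnp | grep haproxy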
- Install a DNS/DHCP server in your network to serve the Kubernetes nodes.
In this example, we run dnsmasq on the installation machine. You can choose to run a different DNS/DHCP server for your installation.
- Install the DNS/DHCP server.
Dnsmasq is preinstalled in some RHEL installations. If it's not preinstalled, you can install it as follows:
sudo dnf install dnsmasq
- Configure the domain name and DHCP entries (a dnsmasq sketch follows the table).

Table 2: Example DHCP Assignments

| Fully-Qualified Domain Name | IP Address  |
|-----------------------------|-------------|
| ocp1.mycluster.contrail.lan | 172.16.0.11 |
| ocp2.mycluster.contrail.lan | 172.16.0.12 |
| ocp3.mycluster.contrail.lan | 172.16.0.13 |
| ocp4.mycluster.contrail.lan | 172.16.0.14 |
| ocp5.mycluster.contrail.lan | 172.16.0.15 |

Note: When using the Assisted Installer service, the fully-qualified domain name is constructed as follows:
<hostname>.<cluster name>.<domain name>
In this example, we use ocp<n> as the hostname, mycluster as the cluster name, and contrail.lan as the domain name.
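A minimal dnsmasq sketch for the domain and DHCP scope shown above. The static-only dhcp-range form is one possible choice, not the only one; the per-host reservations themselves appear in the configuration under the next step:
domain=mycluster.contrail.lan
dhcp-range=172.16.0.0,static   # enable DHCP, serving only the static dhcp-host reservations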
- Configure your DNS entries.

Table 3: Example DNS Entries

| Hostname                       | IP Address  | Note                                                                               |
|--------------------------------|-------------|------------------------------------------------------------------------------------|
| ocp1.mycluster.contrail.lan    | 172.16.0.11 | Same as DHCP assignment                                                            |
| ocp2.mycluster.contrail.lan    | 172.16.0.12 | Same as DHCP assignment                                                            |
| ocp3.mycluster.contrail.lan    | 172.16.0.13 | Same as DHCP assignment                                                            |
| ocp4.mycluster.contrail.lan    | 172.16.0.14 | Same as DHCP assignment                                                            |
| ocp5.mycluster.contrail.lan    | 172.16.0.15 | Same as DHCP assignment                                                            |
| api.mycluster.contrail.lan     | 172.16.0.10 | Load balancer for external API traffic. Required for user-managed networking only. |
| api-int.mycluster.contrail.lan | 172.16.0.10 | Load balancer for internal API traffic. Required for user-managed networking only. |
| apps.mycluster.contrail.lan    | 172.16.0.10 | Load balancer for ingress traffic. Required for user-managed networking only.      |
| *.apps.mycluster.contrail.lan  | 172.16.0.10 | Load balancer for ingress traffic. Required for user-managed networking only.      |

The corresponding dnsmasq configuration entries:

dhcp-host=52:54:00:00:11:11,ocp1.mycluster.contrail.lan,172.16.0.11
dhcp-host=52:54:00:00:11:22,ocp2.mycluster.contrail.lan,172.16.0.12
dhcp-host=52:54:00:00:11:33,ocp3.mycluster.contrail.lan,172.16.0.13
dhcp-host=52:54:00:00:11:44,ocp4.mycluster.contrail.lan,172.16.0.14
dhcp-host=52:54:00:00:11:55,ocp5.mycluster.contrail.lan,172.16.0.15
host-record=api.mycluster.contrail.lan,172.16.0.10
address=/.apps.mycluster.contrail.lan/172.16.0.10
address=/api-int.mycluster.contrail.lan/172.16.0.10
- Start the DNS/DHCP server. For example:
systemctl start dnsmasq
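To spot-check name resolution against the new server, you can query one of the records directly (dig is provided by the bind-utils package on RHEL-family systems):
dig +short ocp1.mycluster.contrail.lan @172.16.0.10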
- Download the OpenShift command line interface tool (oc) from Red Hat. This package includes kubectl.
- On the browser of your local computer, go to https://console.redhat.com/openshift/downloads#tool-oc and download the OpenShift command line interface tool (oc).
- Copy the downloaded package to the installation machine and untar:
tar -xzvf openshift-client-linux.tar.gz
README.md
oc
kubectl
- Copy the oc and kubectl executables into a directory in your path (for example, /usr/local/bin).
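You can then verify that both tools are on your path:
oc version --client
kubectl version --client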