This topic covers Contrail Networking in Red Hat OpenShift
environments that are using Contrail Networking Release 21-based releases.
Starting in Release 22.1, Contrail Networking evolved into Cloud-Native
Contrail Networking. Cloud-Native Contrail offers significant enhancements
to optimize networking performance in Kubernetes-orchestrated environments.
Cloud-Native Contrail supports Red Hat OpenShift, and we strongly recommend
using Cloud-Native Contrail for networking in environments using Red
Hat OpenShift.
Starting in Contrail Networking Release 2011, you can install
Contrail Networking with Red Hat OpenShift 4.5 in multiple
environments.
This document shows one method of installing Red Hat OpenShift
4.5 with Contrail Networking in two separate contexts: on a VM
running in a KVM module and within Amazon Web Services (AWS).
There are many implementation and configuration options available
for installing and configuring Red Hat OpenShift 4.5, and covering
all of those options is beyond the scope of this document. For additional
information on Red Hat OpenShift 4.5 implementation options, see the OpenShift Container Platform 4.5 Documentation from Red Hat.
This document includes the following sections:
How to Install Contrail Networking and Red Hat OpenShift 4.5
using a VM Running in a KVM Module
This section illustrates how to install Contrail
Networking with Red Hat OpenShift 4.5 orchestration, where Contrail
Networking and Red Hat OpenShift are running on virtual machines (VMs)
in a Kernel-based Virtual Machine (KVM) module.
You can also perform this procedure to configure an environment
where Contrail Networking and Red Hat OpenShift 4.5 are running on
bare metal servers. You can, for instance, use
this procedure to establish an environment where the master nodes
host the VMs that run the control plane on KVM while the worker nodes
operate on physical bare metal servers.
This procedure is used to install Contrail Networking and Red
Hat OpenShift 4.5 orchestration on a virtual machine (VM) running
in a Kernel-based Virtual Machine (KVM) module. Support for Contrail
Networking installations onto VMs in Red Hat OpenShift 4.5 environments
is introduced in Contrail Networking Release 2011. See Contrail Networking Supported Platforms.
You can also use this procedure to install Contrail Networking
and Red Hat OpenShift 4.5 orchestration on a bare metal server.
This procedure should work with all versions of OpenShift 4.5.
Prerequisites
This document makes the following assumptions about your environment:
Note:
The term master node refers to the nodes that build the control plane in this document.
Worker nodes: 4 CPU, 16GB RAM, 120GB SSD storage
Note:
The term worker node refers to nodes running compute services using the data plane in this document.
Helper node: 4 CPU, 8GB RAM, 30GB SSD storage
In single node deployments, do not use spinning disk arrays
with low Input/Output Operations Per Second (IOPS) when using Contrail
Networking with Red Hat OpenShift. Higher IOPS disk arrays are required
because the control plane always operates as a high availability setup
in single node deployments.
IOPS requirements vary by environment due to multiple factors
beyond Contrail Networking and Red Hat OpenShift. We therefore provide
this guideline but do not provide specific IOPS requirements.
Install Contrail Networking and Red Hat OpenShift 4.5
Perform these steps to install Contrail Networking
and Red Hat OpenShift 4.5 using a VM running in a KVM module:
If the worker nodes are running on physical bare metal
servers in your environment, this virtual network will be a bridge
network with IP address allocations within the same subnet. This addressing
scheme is similar to the scheme for the KVM server.
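For illustration only, a libvirt-managed virtual network of this kind could be created from the hypervisor along the following lines. The network name and addressing below are placeholder assumptions, not values taken from this document:
# cat <<EOF > openshift4-net.xml
<network>
  <name>openshift4</name>
  <bridge name="openshift4"/>
  <forward mode="nat"/>
  <ip address="192.168.7.1" netmask="255.255.255.0"/>
</network>
EOF
# virsh net-define openshift4-net.xml
# virsh net-start openshift4
# virsh net-autostart openshift4
In this sketch, DHCP is intentionally not defined on the libvirt network because the helper node created later in this procedure provides DHCP.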
Create a Helper Node with a Virtual Machine Running CentOS
7 or 8
This procedure requires a helper node with a virtual
machine that is running either CentOS 7 or 8.
To create this helper node:
Download the Kickstart file for the helper node from the
Red Hat repository:
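(Illustration only: the repository below is an assumption about the source of the helper node files, not a location taken from this document. Verify the repository and path for your release.)
# git clone https://github.com/RedHatOfficial/ocp4-helpernode
# cd ocp4-helpernode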
Copy the vars.yaml file into the top-level directory:
# cp docs/examples/vars.yaml .
Review the vars.yaml file and change any values that require
changing for your environment.
The following values should be reviewed especially carefully (see the illustrative sketch after this list):
The domain name, which is defined using the domain: parameter in the dns: hierarchy.
If you are using local DNS servers, modify the forwarder parameters—forwarder1: and forwarder2: are used
in this example—to connect to these DNS servers.
Hostnames for master and worker nodes. Hostnames are defined
using the name: parameter in either the primaries: or workers: hierarchies.
IP and DHCP settings. If you are using a custom bridge
network, modify the IP and DHCP settings accordingly.
VM and BMS settings.
If you are using a VM, set the disk: parameter
as disk: vda.
If you are using a BMS, set the disk: parameter
as disk: sda.
If you are using physical servers to host worker nodes,
change the provisioning interface for the worker nodes to the MAC
address.
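The following fragment is a hypothetical illustration of the parameters discussed above; the domain, forwarders, and hostnames are placeholders, and the surrounding structure may differ in your copy of vars.yaml:
disk: vda
dns:
  domain: "example.com"
  forwarder1: "8.8.8.8"
  forwarder2: "8.8.4.4"
primaries:
  - name: "master0"
  - name: "master1"
  - name: "master2"
workers:
  - name: "worker0"
  - name: "worker1"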
Review the vars/main.yml file to
ensure that it reflects the correct version of Red Hat OpenShift.
If you need to change the Red Hat OpenShift version in the file, change
it.
In the following sample main.yml file,
Red Hat OpenShift 4.5 is installed:
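A hypothetical fragment is shown here purely for illustration; the variable names and URLs are assumptions and vary by helper node playbook version. The point to verify is that the installer and client artifacts reference OpenShift 4.5:
ocp_installer: "https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.5.21/openshift-install-linux-4.5.21.tar.gz"
ocp_client: "https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.5.21/openshift-client-linux-4.5.21.tar.gz"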
After the playbook is run, gather information about your
environment and confirm that all services are active and running:
# /usr/local/bin/helpernodecheck services
Status of services:
===================
Status of dhcpd svc -> Active: active (running) since Mon 2020-09-28 05:40:10 EDT; 33min ago
Status of named svc -> Active: active (running) since Mon 2020-09-28 05:40:08 EDT; 33min ago
Status of haproxy svc -> Active: active (running) since Mon 2020-09-28 05:40:08 EDT; 33min ago
Status of httpd svc -> Active: active (running) since Mon 2020-09-28 05:40:10 EDT; 33min ago
Status of tftp svc -> Active: active (running) since Mon 2020-09-28 06:13:34 EDT; 1s ago
Status of local-registry svc -> Unit local-registry.service could not be found.
Create the Ignition Configurations
To create Ignition configurations:
On your hypervisor and helper nodes, check that your NTP
server is properly configured in the /etc/chrony.conf file:
chronyc tracking
The installation fails with an X509: certificate has
expired or is not yet valid message when NTP is not properly
configured.
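As a hypothetical illustration, /etc/chrony.conf typically contains one or more server or pool entries such as the following; the server names here are placeholders for your own NTP sources:
server ntp1.example.com iburst
server ntp2.example.com iburst
You can then use chronyc tracking, as shown above, to confirm that the clock is synchronized.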
Create a location to store your pull secret objects:
# mkdir -p ~/.openshift
From the Get Started
with OpenShift website, download your pull secret and save it
as the ~/.openshift/pull-secret file.
# ls -1 ~/.openshift/pull-secret
/root/.openshift/pull-secret
An SSH key is created for you as the ~/.ssh/helper_rsa file after you complete the previous step. You can use this key
or create a unique key for authentication.
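If you prefer to create your own key, a minimal sketch of generating one follows; the output path simply reuses the helper_rsa name and the empty passphrase is an assumption for this example:
# ssh-keygen -t rsa -b 4096 -f ~/.ssh/helper_rsa -N ''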
From the hypervisor, use PXE booting to launch the virtual
machine or machines. If you are using a bare metal server, use PXE
booting to boot the servers.
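For illustration only, PXE booting the bootstrap VM from the hypervisor can take a form like the following sketch; the VM name, resources, network name, and MAC address are placeholder assumptions, and the MAC address typically must match the reservation defined on the helper node:
# virt-install --name bootstrap --vcpus 4 --memory 16384 \
    --disk size=120 --network network=openshift4,mac=52:54:00:a1:b2:c3 \
    --pxe --os-variant rhel8.0 --noautoconsole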
The following actions occur as a result of this step:
A bootstrap node virtual machine is created.
The bootstrap node VM is connected to the PXE server.
The PXE server is our helper node.
An IP address is assigned from DHCP.
A Red Hat Enterprise Linux CoreOS (RHCOS) image is downloaded
from the HTTP server.
The ignition file is embedded at the end of the installation
process.
Use SSH with the helper_rsa key to log in to the bootstrap node:
# ssh -i ~/.ssh/helper_rsa core@192.168.7.20
Review the logs:
journalctl -f
On the bootstrap node, temporary etcd and bootkube services are
created.
You can monitor these services when they are running by entering
the sudo crictl ps command.
[core@bootstrap ~]$ sudo crictl ps
CONTAINER IMAGE CREATED STATE NAME POD ID
33762f4a23d7d 976cc3323... 54 seconds ago Running manager 29a...
ad6f2453d7a16 86694d2cd... About a minute ago Running kube-apiserver-insecure-readyz 4cd...
3bbdf4176882f quay.io/... About a minute ago Running kube-scheduler b3e...
57ad52023300e quay.io/... About a minute ago Running kube-controller-manager 596...
a1dbe7b8950da quay.io/... About a minute ago Running kube-apiserver 4cd...
5aa7a59a06feb quay.io/... About a minute ago Running cluster-version-operator 3ab...
ca45790f4a5f6 099c2a... About a minute ago Running etcd-metrics 081...
e72fb8aaa1606 quay.io/... About a minute ago Running etcd-member 081...
ca56bbf2708f7 1ac19399... About a minute ago Running machine-config-server c11...
Note:
Output modified for readability.
From the hypervisor, launch the VMs on the master nodes:
Look for the DEBUG Bootstrap status: complete and the INFO It is now safe to remove the bootstrap resources messages to confirm that the installation is complete.
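The messages shown below typically come from the OpenShift installer's bootstrap wait command, run from the installation directory; as a sketch:
# openshift-install wait-for bootstrap-complete --log-level debug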
INFO Waiting up to 30m0s for the Kubernetes API at https://api.ocp4.example.com:6443...
INFO API v1.13.4+838b4fa up
INFO Waiting up to 30m0s for bootstrapping to complete...
DEBUG Bootstrap status: complete
INFO It is now safe to remove the bootstrap resources
Do not proceed to the next step until you see these messages.
From the hypervisor, delete the bootstrap VM and launch
the worker nodes.
Note:
If you are using physical bare metal servers as worker
nodes, skip this step.
Your installation might be waiting for the worker nodes'
certificate signing requests (CSRs) to be approved. The machineconfig node
approval operator typically handles CSR approval.
CSR approval, however, sometimes has to be performed manually.
To check pending CSRs:
# oc get csr
To approve all pending CSRs:
# oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
You may have to approve all pending CSRs multiple times, depending
on the number of worker nodes in your environment and other factors.
To monitor incoming CSRs:
# watch -n5 oc get csr
Do not move to the next step until incoming CSRs have stopped.
Monitor the remainder of the installation and confirm that it completes:
# openshift-install wait-for install-complete
INFO Waiting up to 30m0s for the cluster at https://api.ocp4.example.com:6443 to initialize...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/root/ocp4/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.ocp4.example.com
INFO Login to the console with user: kubeadmin, password: XXX-XXXX-XXXX-XXXX
How to Install Contrail Networking and Red Hat OpenShift 4.5
on Amazon Web Services (AWS)
This procedure is used to install Contrail Networking and Red
Hat OpenShift 4.5 orchestration in AWS. Support for Contrail Networking
and Red Hat OpenShift 4.5 environments is introduced in Contrail Networking
Release 2011. See Contrail Networking Supported Platforms.
Prerequisites
This document makes the following assumptions about your environment:
You have an SSH key that you can generate or provide on
your local machine during the installation.
Configure DNS
A DNS zone must be created and available in Route 53 for your
AWS account before starting this installation. You must also register
a domain for your Contrail cluster in AWS Route 53. All entries created
in AWS Route 53 are expected to be resolvable from the nodes in the
Contrail cluster.
For information on configuring DNS zones in AWS Route 53, see
the Amazon Route 53 Developer Guide from AWS.
Configure AWS Credentials
The installer used in this procedure creates multiple resources
in AWS that are needed to run your cluster. These resources include
Elastic Compute Cloud (EC2) instances, Virtual Private Clouds (VPCs),
security groups, IAM roles, and other necessary network building blocks.
AWS credentials are needed to access these resources and should
be configured before starting this installation.
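As an illustration only, one common way to provide these credentials is a shared credentials file; the values below are placeholders:
# cat ~/.aws/credentials
[default]
aws_access_key_id = <access-key-id>
aws_secret_access_key = <secret-access-key>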
An install-config.yaml file needs to be
created and added to the current directory. A sample install-config.yaml file is provided below.
Be aware of the following factors while creating the install-config.yaml file:
The networkType field is usually
set to OpenShiftSDN in the YAML file by default.
For a cluster using Contrail Networking, the networkType field needs to be set to Contrail.
OpenShift master nodes need larger instances. We recommend
setting the instance type to m5.2xlarge or larger for
OpenShift master nodes.
Most OpenShift worker nodes can use the default instance
sizes. You should consider using larger instances, however, for high
demand performance workloads.
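The following is a minimal, hypothetical install-config.yaml sketch illustrating these points; the base domain, cluster name, region, replica counts, and CIDRs are placeholders for your own values:
apiVersion: v1
baseDomain: example.com
compute:
- hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 3
controlPlane:
  hyperthreading: Enabled
  name: master
  platform:
    aws:
      type: m5.2xlarge
  replicas: 3
metadata:
  name: w1
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: Contrail
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: eu-west-1
pullSecret: '<your-pull-secret>'
sshKey: '<your-ssh-public-key>'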
Covering every potential configuration change is beyond
the scope of this document.
Common configuration changes include:
Modify the 00-contrail-02-registry-secret.yaml file to
provide the proper configuration and credentials for a registry
(see the sketch after this list). The
most commonly used registry is the Contrail repository at hub.juniper.net.
Note:
You can create a base64 encoded value for configuration
with the script provided in this directory. If you want to use this
value for security, copy the output of the script and paste it into
the Contrail registry secret configuration by replacing the DOCKER_CONFIG variable with the generated base64 encoded
value string.
If you are using non-default network-CIDR subnets for
your pods or services, open the deploy/openshift/manifests/cluster-network-02-config.yml file and update the CIDR values.
The default number of master nodes in a Kubernetes cluster
is 3. If you are using a different number of master nodes, modify
the deploy/openshift/manifests/00-contrail-09-manager.yaml file and set the spec.commonConfiguration.replicas field to the
number of master nodes.
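For the registry secret change described in the first item above, the following is a minimal sketch of generating a base64 encoded Docker configuration by hand for hub.juniper.net; the credentials are placeholders and this is not the script provided in the repository:
# AUTH=$(echo -n "<username>:<password>" | base64 -w0)
# echo -n "{\"auths\":{\"hub.juniper.net\":{\"auth\":\"${AUTH}\"}}}" | base64 -w0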
Contrail Networking needs to open some networking ports
for operation within AWS. These ports are opened by adding rules to
security groups.
Follow this procedure to add rules to security groups when AWS
resources are manually created:
Build the Contrail CLI tool for managing security group
ports on AWS. This tool allows you to automatically open the ports
that Contrail requires on the AWS security groups that
are attached to Contrail cluster resources.
To build this tool:
go build .
After entering this command, you should see the contrail-sc-open binary
in your directory. This binary is the compiled tool.
Start the tool:
./contrail-sc-open -cluster-name <name of your OpenShift cluster> -region <AWS region where cluster is located>
Verify that the service has been created:
oc -n openshift-ingress get service router-default
Proceed to the next step after confirming the service was created.
When the service router-default is created in openshift-ingress,
use the following command to patch the configuration:
The final messages from a sample successful installation:
INFO Waiting up to 10m0s for the openshift-console route to be created...
DEBUG Route found in openshift-console namespace: console
DEBUG Route found in openshift-console namespace: downloads
DEBUG OpenShift console route is created
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/Users/ovaleanu/aws1-ocp4/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.w1.ovsandbox.com
INFO Login to the console with user: kubeadmin, password: XXXxx-XxxXX-xxXXX-XxxxX
The kubeadmin user can now safely be removed. See the Removing the kubeadmin user document from Red Hat OpenShift.
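As a sketch of what that removal typically involves, run the following only after another cluster administrator has been configured (see the Red Hat document for the full procedure):
# oc delete secrets kubeadmin -n kube-system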
How to Install Earlier Releases of Contrail Networking and
Red Hat OpenShift
If you need to install Contrail Networking with earlier
versions of Red Hat OpenShift, Contrail Networking is also supported
with Red Hat OpenShift versions 4.4 and 3.11.