
How to Install Contrail Networking and Red Hat OpenShift 4.5

07-Jun-23
Note:

This topic covers Contrail Networking in Red Hat OpenShift environments running Contrail Networking Release 21-based releases.

Starting in Release 22.1, Contrail Networking evolved into Cloud-Native Contrail Networking. Cloud-Native Contrail offers significant enhancements to optimize networking performance in Kubernetes-orchestrated environments. It supports Red Hat OpenShift, and we strongly recommend it for networking in Red Hat OpenShift environments.

For general information about Cloud-Native Contrail, see the Cloud-Native Contrail Networking Techlibrary homepage.

Starting in Contrail Networking Release 2011, you can install Contrail Networking with Red Hat OpenShift 4.5 in multiple environments.

This document shows one method of installing Red Hat Openshift 4.5 with Contrail Networking in two separate contexts—on a VM running in a KVM module and within Amazon Web Services (AWS).

There are many implementation and configuration options available for installing and configuring Red Hat OpenShift 4.5; covering every option is beyond the scope of this document. For additional information on Red Hat OpenShift 4.5 implementation options, see the OpenShift Container Platform 4.5 Documentation from Red Hat.

This document includes the following sections:

How to Install Contrail Networking and Red Hat OpenShift 4.5 using a VM Running in a KVM Module

This section illustrates how to install Contrail Networking with Red Hat OpenShift 4.5 orchestration, where Contrail Networking and Red Hat Openshift are running on virtual machines (VMs) in a Kernel-based Virtual Machine (KVM) module.

You can also use this procedure to configure an environment where Contrail Networking and Red Hat OpenShift 4.5 run on bare metal servers. You can, for instance, use this procedure to establish an environment where the master nodes host the VMs that run the control plane on KVM while the worker nodes operate on physical bare metal servers.

When to Use This Procedure

This procedure is used to install Contrail Networking and Red Hat OpenShift 4.5 orchestration on a virtual machine (VM) running in a Kernel-based Virtual Machine (KVM) module. Support for Contrail Networking installations onto VMs in Red Hat OpenShift 4.5 environments is introduced in Contrail Networking Release 2011. See Contrail Networking Supported Platforms.

You can also use this procedure to install Contrail Networking and Red Hat OpenShift 4.5 orchestration on a bare metal server.

This procedure should work with all versions of Openshift 4.5.

Prerequisites

This document makes the following assumptions about your environment:

  • The KVM environment is operational.

  • The server meets the platform requirements for the Contrail Networking installation. See Contrail Networking Supported Platforms.

  • Minimum server requirements:

    • Master nodes: 8 CPU, 40GB RAM, 250GB SSD storage

      Note:

      In this document, the term master node refers to the nodes that make up the control plane.

    • Worker nodes: 4 CPU, 16GB RAM, 120GB SSD storage

      Note:

      In this document, the term worker node refers to the nodes that run compute services in the data plane.

    • Helper node: 4 CPU, 8GB RAM, 30GB SSD storage

  • In single node deployments, do not use spinning disk arrays with low Input/Output Operations Per Second (IOPS) when using Contrail Networking with Red Hat OpenShift. Higher-IOPS disk arrays are required because the control plane always operates as a high-availability setup in single node deployments.

    IOPS requirements vary by environment due to multiple factors beyond Contrail Networking and Red Hat OpenShift. We therefore provide this guideline but do not provide specific IOPS requirements.
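
    If you want to gauge whether a disk can sustain the needed IOPS before you deploy, a benchmarking tool such as fio can help. The following is only an illustrative sketch, not a required step; fio may need to be installed separately, and the test file path, size, and block size are example values:

    # fio --name=iops-test --filename=/var/lib/libvirt/images/fio-testfile \
      --size=1G --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio \
      --direct=1 --runtime=60 --time_based --group_reporting
    # rm /var/lib/libvirt/images/fio-testfile

    Compare the reported write IOPS against the expected load of the control plane services.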

Install Contrail Networking and Red Hat Openshift 4.5

Perform these steps to install Contrail Networking and Red Hat OpenShift 4.5 using a VM running in a KVM module:

Create a Virtual Network or a Bridge Network for the Installation

To create a virtual network or a bridge network for the installation:

  1. Log in to the server that will host the VM running Contrail Networking.

    Download the virt-net.xml virtual network configuration file from the Red Hat repository.

    # wget https://raw.githubusercontent.com/RedHatOfficial/ocp4-helpernode/master/docs/examples/virt-net.xml
  2. Create a virtual network using the virt-net.xml file.

    You may need to modify your virtual network for your environment.

    Example:

    # virsh net-define --file virt-net.xml
  3. Set the OpenShift 4 virtual network to autostart on bootup:
    # virsh net-autostart openshift4
    # virsh net-start openshift4
    Note:

    If the worker nodes are running on physical bare metal servers in your environment, this virtual network will be a bridge network with IP address allocations within the same subnet. This addressing scheme is similar to the scheme for the KVM server.
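
    The following is a minimal sketch of defining such a bridge network with virsh, assuming an existing Linux bridge named br0 on the KVM host; the bridge name, file name, and network name are examples and should match your environment:

    # cat > bridge-net.xml <<'XML'
    <network>
      <name>openshift4</name>
      <forward mode="bridge"/>
      <bridge name="br0"/>
    </network>
    XML
    # virsh net-define --file bridge-net.xml
    # virsh net-autostart openshift4
    # virsh net-start openshift4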

Create a Helper Node with a Virtual Machine Running CentOS 7 or 8

This procedure requires a helper node with a virtual machine that is running either CentOS 7 or 8.

To create this helper node:

  1. Download the Kickstart file for the helper node from the Red Hat repository:

    CentOS 8

    # wget https://raw.githubusercontent.com/RedHatOfficial/ocp4-helpernode/master/docs/examples/helper-ks8.cfg -O helper-ks.cfg

    CentOS 7

    # wget https://raw.githubusercontent.com/RedHatOfficial/ocp4-helpernode/master/docs/examples/helper-ks.cfg -O helper-ks.cfg
  2. If you haven't already configured a root password and the NTP servers for the helper node, add entries like the following to the helper-ks.cfg file:

    Example Root Password

    rootpw --plaintext password

    Example NTP Configuration

    timezone America/Los_Angeles --isUtc --ntpservers=0.centos.pool.ntp.org,1.centos.pool.ntp.org,2.centos.pool.ntp.org,3.centos.pool.ntp.org
  3. Edit the helper-ks.cfg file for your environment and use it to install the helper node.

    The following examples show how to install the helper node without having to take further actions:

    CentOS 8

    # virt-install --name="ocp4-aHelper" --vcpus=2 --ram=4096 \
    --disk path=/var/lib/libvirt/images/ocp4-aHelper.qcow2,bus=virtio,size=50 \
    --os-variant centos8 --network network=openshift4,model=virtio \
    --boot hd,menu=on --location /var/lib/libvirt/iso/CentOS-8.2.2004-x86_64-dvd1.iso \
    --initrd-inject helper-ks.cfg --extra-args "inst.ks=file:/helper-ks.cfg" --noautoconsole

    CentOS 7

    # virt-install --name="ocp4-aHelper" --vcpus=2 --ram=4096 \
    --disk path=/var/lib/libvirt/images/ocp4-aHelper.qcow2,bus=virtio,size=30 \
    --os-variant centos7.0 --network network=openshift4,model=virtio \
    --boot hd,menu=on --location /var/lib/libvirt/iso/CentOS-7-x86_64-Minimal-2003.iso \
    --initrd-inject helper-ks.cfg --extra-args "inst.ks=file:/helper-ks.cfg" --noautoconsole

    The helper node is installed with the following settings, which are pulled from the virt-net.xml file:

    • HELPER_IP: 192.168.7.77

    • NetMask: 255.255.255.0

    • Default Gateway: 192.168.7.1

    • DNS Server: 8.8.8.8

  4. Monitor the helper node installation progress in the viewer:
    # virt-viewer --domain-name ocp4-aHelper

    When the installation process is complete, the helper node shuts off.

  5. Start the helper node:
    # virsh start ocp4-aHelper

Prepare the Helper Node

To prepare the helper node after the helper node installation:

  1. Log in to the helper node:
    # ssh -l root HELPER_IP
    Note:

    The default HELPER_IP, which was pulled from the virt-net.xml file, is 192.168.7.77.

  2. Install the EPEL (Extra Packages for Enterprise Linux) repository and update CentOS.
    # yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-$(rpm -E %rhel).noarch.rpm
    # yum -y update
  3. Install Ansible and Git and clone the helpernode repository onto the helper node.
    # yum -y install ansible git
    # git clone https://github.com/RedHatOfficial/ocp4-helpernode
    # cd ocp4-helpernode
  4. Copy the vars.yaml file into the top-level directory:
    # cp docs/examples/vars.yaml .

    Review the vars.yaml file and change any values that need to be changed for your environment.

    The following values should be reviewed especially carefully:

    • The domain name, which is defined using the domain: parameter in the dns: hierarchy. If you are using local DNS servers, modify the forwarder parameters—forwarder1: and forwarder2: are used in this example—to connect to these DNS servers.

    • Hostnames for master and worker nodes. Hostnames are defined using the name: parameter in either the masters: or workers: hierarchies.

    • IP and DHCP settings. If you are using a custom bridge network, modify the IP and DHCP settings accordingly.

    • VM and BMS settings.

      If you are using a VM, set the disk: parameter as disk: vda.

      If you are using a BMS, set the disk: parameter as disk: sda.

    A sample vars.yaml file:

    disk: vda
    helper:
      name: "helper"
      ipaddr: "192.168.7.77"
    dns:
      domain: "example.com"
      clusterid: "ocp4"
      forwarder1: "8.8.8.8"
      forwarder2: "8.8.4.4"
    dhcp:
      router: "192.168.7.1"
      bcast: "192.168.7.255"
      netmask: "255.255.255.0"
      poolstart: "192.168.7.10"
      poolend: "192.168.7.30"
      ipid: "192.168.7.0"
      netmaskid: "255.255.255.0"
    bootstrap:
      name: "bootstrap"
      ipaddr: "192.168.7.20"
      macaddr: "52:54:00:60:72:67"
    masters:
      - name: "master0"
        ipaddr: "192.168.7.21"
        macaddr: "52:54:00:e7:9d:67"
      - name: "master1"
        ipaddr: "192.168.7.22"
        macaddr: "52:54:00:80:16:23"
      - name: "master2"
        ipaddr: "192.168.7.23"
        macaddr: "52:54:00:d5:1c:39"
    workers:
      - name: "worker0"
        ipaddr: "192.168.7.11"
        macaddr: "52:54:00:f4:26:a1"
      - name: "worker1"
        ipaddr: "192.168.7.12"
        macaddr: "52:54:00:82:90:00"
    
    Note:

    If you are using physical servers to host worker nodes, set the macaddr: value for each worker node to the MAC address of that server's provisioning interface.

  5. Review the vars/main.yml file to ensure that it reflects the correct Red Hat OpenShift version, and update the version if needed.

    In the following sample main.yml file, Red Hat Openshift 4.5 is installed:

    ssh_gen_key: true
    install_filetranspiler: false
    staticips: false
    force_ocp_download: false
    remove_old_config_files: false
    ocp_bios: "https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.5/4.5.6/rhcos-4.5.6-x86_64-metal.x86_64.raw.gz"
    ocp_initramfs: "https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.5/4.5.6/rhcos-4.5.6-x86_64-installer-initramfs.x86_64.img"
    ocp_install_kernel: "https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.5/4.5.6/rhcos-4.5.6-x86_64-installer-kernel-x86_64"
    ocp_client: "https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.5.21/openshift-client-linux-4.5.21.tar.gz"
    ocp_installer: "https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.5.21/openshift-install-linux-4.5.21.tar.gz"
    helm_source: "https://get.helm.sh/helm-v3.2.4-linux-amd64.tar.gz"
    chars: (\\_|\\$|\\\|\\/|\\=|\\)|\\(|\\&|\\^|\\%|\\$|\\#|\\@|\\!|\\*)
    ppc64le: false
    chronyconfig:
      enabled: false
    setup_registry:
      deploy: false
      autosync_registry: false
      registry_image: docker.io/library/registry:2
      local_repo: "ocp4/openshift4"
      product_repo: "openshift-release-dev"
      release_name: "ocp-release"
      release_tag: "4.5.21-x86_64"
  6. Run the playbook to set up the helper node:
    # ansible-playbook -e @vars.yaml tasks/main.yml
  7. After the playbook is run, gather information about your environment and confirm that all services are active and running:
    # /usr/local/bin/helpernodecheck services
    Status of services:
    ===================
    Status of dhcpd svc 		->    Active: active (running) since Mon 2020-09-28 05:40:10 EDT; 33min ago
    Status of named svc 		->    Active: active (running) since Mon 2020-09-28 05:40:08 EDT; 33min ago
    Status of haproxy svc 	->    Active: active (running) since Mon 2020-09-28 05:40:08 EDT; 33min ago
    Status of httpd svc 		->    Active: active (running) since Mon 2020-09-28 05:40:10 EDT; 33min ago
    Status of tftp svc 		->    Active: active (running) since Mon 2020-09-28 06:13:34 EDT; 1s ago
    Unit local-registry.service could not be found.
    Status of local-registry svc 		->
    

Create the Ignition Configurations

To create Ignition configurations:

  1. On your hypervisor and helper nodes, check that your NTP server is properly configured in the /etc/chrony.conf file:
    chronyc tracking

    The installation fails with an X509: certificate has expired or is not yet valid message when NTP is not properly configured.
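
    If chronyc tracking shows that the clock is not synchronized, a quick check of the configured time sources can help. This is only an illustrative sketch; the pool name shown is an example, and chronyd must be restarted after any configuration change:

    # grep -E '^(server|pool)' /etc/chrony.conf
    pool 2.centos.pool.ntp.org iburst
    # systemctl restart chronyd
    # chronyc sources -v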

  2. Create a location to store your pull secret objects:
    # mkdir -p ~/.openshift
  3. From the Get Started with OpenShift website, download your pull secret and save it as ~/.openshift/pull-secret.
    # ls -1 ~/.openshift/pull-secret
    /root/.openshift/pull-secret
  4. An SSH key is created for you at ~/.ssh/helper_rsa after completing the previous step. You can use this key or create a unique key for authentication.
    # ls -1 ~/.ssh/helper_rsa
    /root/.ssh/helper_rsa
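
    If you prefer a dedicated key, you can generate one as shown in this sketch; the file name my_ocp4_rsa is only an example, and you would then reference the matching public key later in install-config.yaml:

    # ssh-keygen -t rsa -b 4096 -f ~/.ssh/my_ocp4_rsa -N ""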
  5. Create an installation directory.
    # mkdir ~/ocp4
    # cd ~/ocp4
  6. Create an install-config.yaml file.

    An example file:

    # cat <<EOF > install-config.yaml
    apiVersion: v1
    baseDomain: example.com
    compute:
    - hyperthreading: Enabled
      name: worker
      replicas: 0
    controlPlane:
      hyperthreading: Enabled
      name: master
      replicas: 3
    metadata:
      name: ocp4
    networking:
      clusterNetworks:
      - cidr: 10.254.0.0/16
        hostPrefix: 24
      networkType: Contrail
      serviceNetwork:
      - 172.30.0.0/16
    platform:
      none: {}
    pullSecret: '$(< ~/.openshift/pull-secret)'
    sshKey: '$(< ~/.ssh/helper_rsa.pub)'
    EOF
  7. Create the installation manifests:
    # openshift-install create manifests
  8. Set the mastersSchedulable: variable to false in the manifests/cluster-scheduler-02-config.yml file.
    # sed -i 's/mastersSchedulable: true/mastersSchedulable: false/g' manifests/cluster-scheduler-02-config.yml

    A sample cluster-scheduler-02-config.yml file after this configuration change:

    # cat manifests/cluster-scheduler-02-config.yml
    apiVersion: config.openshift.io/v1
    kind: Scheduler
    metadata:
      creationTimestamp: null
      name: cluster
    spec:
      mastersSchedulable: false
      policy:
        name: ""
    status: {}

    This configuration change is needed to prevent pods from being scheduled on control plane machines.

  9. Download the YAML files that apply the Contrail configuration:
    bash <<EOF
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/manifests/00-contrail-01-namespace.yaml -o manifests/00-contrail-01-namespace.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/manifests/00-contrail-02-admin-password.yaml -o manifests/00-contrail-02-admin-password.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/manifests/00-contrail-02-rbac-auth.yaml -o manifests/00-contrail-02-rbac-auth.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/manifests/00-contrail-02-registry-secret.yaml -o manifests/00-contrail-02-registry-secret.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/manifests/00-contrail-03-cluster-role.yaml -o manifests/00-contrail-03-cluster-role.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/manifests/00-contrail-04-serviceaccount.yaml -o manifests/00-contrail-04-serviceaccount.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/manifests/00-contrail-05-rolebinding.yaml -o manifests/00-contrail-05-rolebinding.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/manifests/00-contrail-06-clusterrolebinding.yaml -o manifests/00-contrail-06-clusterrolebinding.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_cassandras_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_cassandras_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_commands_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_commands_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_configs_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_configs_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_contrailcnis_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_contrailcnis_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_fernetkeymanagers_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_fernetkeymanagers_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_contrailmonitors_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_contrailmonitors_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_contrailstatusmonitors_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_contrailstatusmonitors_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_controls_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_controls_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_keystones_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_keystones_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_kubemanagers_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_kubemanagers_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_managers_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_managers_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_memcacheds_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_memcacheds_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_postgres_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_postgres_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_provisionmanagers_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_provisionmanagers_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_rabbitmqs_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_rabbitmqs_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_swiftproxies_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_swiftproxies_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_swifts_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_swifts_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_swiftstorages_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_swiftstorages_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_vrouters_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_vrouters_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_webuis_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_webuis_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_zookeepers_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_zookeepers_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/releases/R2011/manifests/00-contrail-08-operator.yaml -o manifests/00-contrail-08-operator.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/releases/R2011/manifests/00-contrail-09-manager.yaml -o manifests/00-contrail-09-manager.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/manifests/cluster-network-02-config.yml -o manifests/cluster-network-02-config.yml
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/openshift/99_master-iptables-machine-config.yaml -o openshift/99_master-iptables-machine-config.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/openshift/99_master-kernel-modules-overlay.yaml -o openshift/99_master-kernel-modules-overlay.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/openshift/99_master_network_functions.yaml -o openshift/99_master_network_functions.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/openshift/99_master_network_manager_stop_service.yaml -o openshift/99_master_network_manager_stop_service.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/openshift/99_master-pv-mounts.yaml -o openshift/99_master-pv-mounts.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/openshift/99_worker-iptables-machine-config.yaml -o openshift/99_worker-iptables-machine-config.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/openshift/99_worker-kernel-modules-overlay.yaml -o openshift/99_worker-kernel-modules-overlay.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/openshift/99_worker_network_functions.yaml -o openshift/99_worker_network_functions.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/openshift/99_worker_network_manager_stop_service.yaml -o openshift/99_worker_network_manager_stop_service.yaml;
    EOF
  10. If your environment has to use a specific NTP server, configure it using the steps in the OpenShift 4.x Chrony Configuration document.
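
    The following is a minimal sketch of the kind of MachineConfig that document describes for the master nodes, assuming OpenShift 4.5 Ignition syntax; the file name and the base64 payload are placeholders, and a matching worker variant is typically created as well:

    # cat <<EOF > openshift/99_master-chrony-configuration.yaml
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: master
      name: 99-master-chrony-configuration
    spec:
      config:
        ignition:
          version: 2.2.0
        storage:
          files:
          - contents:
              source: data:text/plain;charset=utf-8;base64,<base64-encoded chrony.conf>
            filesystem: root
            mode: 420
            path: /etc/chrony.conf
    EOF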
  11. Generate the Ignition configurations:
    # openshift-install create ignition-configs
  12. Copy the Ignition files into the Ignition directory on the web server:
    # cp ~/ocp4/*.ign /var/www/html/ignition/
    # restorecon -vR /var/www/html/
    # restorecon -vR /var/lib/tftpboot/
    # chmod o+r /var/www/html/ignition/*.ign

Launch the Virtual Machines

To launch the virtual machines:

  1. From the hypervisor, use PXE booting to launch the virtual machine or machines. If you are using a bare metal server, use PXE booting to boot the servers.
  2. Launch the bootstrap virtual machine:
    # virt-install --pxe --network bridge=openshift4 --mac=52:54:00:60:72:67 --name ocp4-bootstrap --ram=8192 --vcpus=4 --os-variant rhel8.0 --disk path=/var/lib/libvirt/images/ocp4-bootstrap.qcow2,size=120 --vnc

    The following actions occur as a result of this step:

    • A bootstrap node virtual machine is created.

    • The bootstrap node VM is connected to the PXE server. The PXE server is our helper node.

    • An IP address is assigned from DHCP.

    • A Red Hat Enterprise Linux CoreOS (RHCOS) image is downloaded from the HTTP server.

    The ignition file is embedded at the end of the installation process.

  3. Use SSH with the helper RSA key to log in to the bootstrap node:
    # ssh -i ~/.ssh/helper_rsa core@192.168.7.20
  4. Review the logs:
    journalctl -f
  5. On the bootstrap node, a temporary etcd instance and bootkube are created.

    You can monitor these services when they are running by entering the sudo crictl ps command.

    [core@bootstrap ~]$ sudo crictl ps
    CONTAINER      IMAGE         CREATED             STATE    NAME                            POD ID
    33762f4a23d7d  976cc3323...  54 seconds ago      Running  manager                         29a...
    ad6f2453d7a16  86694d2cd...  About a minute ago  Running  kube-apiserver-insecure-readyz  4cd...
    3bbdf4176882f  quay.io/...   About a minute ago  Running  kube-scheduler                  b3e...
    57ad52023300e  quay.io/...   About a minute ago  Running  kube-controller-manager         596...
    a1dbe7b8950da  quay.io/...   About a minute ago  Running  kube-apiserver                  4cd...
    5aa7a59a06feb  quay.io/...   About a minute ago  Running  cluster-version-operator        3ab...
    ca45790f4a5f6  099c2a...     About a minute ago  Running  etcd-metrics                    081...
    e72fb8aaa1606  quay.io/...   About a minute ago  Running  etcd-member                     081...
    ca56bbf2708f7  1ac19399...   About a minute ago  Running  machine-config-server           c11...
    Note:

    Output modified for readability.

  6. From the hypervisor, launch the master node VMs:
    # virt-install --pxe --network bridge=openshift4 --mac=52:54:00:e7:9d:67 --name ocp4-master0 --ram=40960 --vcpus=8 --os-variant rhel8.0 --disk path=/var/lib/libvirt/images/ocp4-master0.qcow2,size=250 --vnc
    # virt-install --pxe --network bridge=openshift4 --mac=52:54:00:80:16:23 --name ocp4-master1 --ram=40960 --vcpus=8 --os-variant rhel8.0 --disk path=/var/lib/libvirt/images/ocp4-master1.qcow2,size=250 --vnc
    # virt-install --pxe --network bridge=openshift4 --mac=52:54:00:d5:1c:39 --name ocp4-master2 --ram=40960 --vcpus=8 --os-variant rhel8.0 --disk path=/var/lib/libvirt/images/ocp4-master2.qcow2,size=250 --vnc

    You can log in to the master nodes from the helper node after they have been provisioned:

    # ssh -i ~/.ssh/helper_rsa core@192.168.7.21
    # ssh -i ~/.ssh/helper_rsa core@192.168.7.22
    # ssh -i ~/.ssh/helper_rsa core@192.168.7.23

    Enter the sudo crictl ps command at any point to monitor pod creation as the VMs are launching.

Monitor the Installation Process and Delete the Bootstrap Virtual Machine

To monitor the installation process:

  1. From the helper node, navigate to the ~/ocp4 directory.
  2. Track the install process log:
    # openshift-install wait-for bootstrap-complete --log-level debug

    Look for the DEBUG Bootstrap status: complete and INFO It is now safe to remove the bootstrap resources messages to confirm that the bootstrap process is complete.

    INFO Waiting up to 30m0s for the Kubernetes API at https://api.ocp4.example.com:6443...
    INFO API v1.13.4+838b4fa up
    INFO Waiting up to 30m0s for bootstrapping to complete...
    DEBUG Bootstrap status: complete
    INFO It is now safe to remove the bootstrap resources

    Do not proceed to the next step until you see these messages.

  3. From the hypervisor, delete the bootstrap VM and launch the worker nodes.
    Note:

    If you are using physical bare metal servers as worker nodes, skip this step.

    Boot the bare metal servers using PXE instead.

    # virt-install --pxe --network bridge=openshift4 --mac=52:54:00:f4:26:a1 --name ocp4-worker0 --ram=16384 --vcpus=4 --os-variant rhel8.0 --disk path=/var/lib/libvirt/images/ocp4-worker0.qcow2,size=120 --vnc
    
    # virt-install --pxe --network bridge=openshift4 --mac=52:54:00:82:90:00 --name ocp4-worker1 --ram=16384 --vcpus=4 --os-variant rhel8.0 --disk path=/var/lib/libvirt/images/ocp4-worker1.qcow2,size=120 --vnc

Finish the Installation

To finish the installation:

  1. Log in to your Kubernetes cluster:
    # export KUBECONFIG=/root/ocp4/auth/kubeconfig
  2. Your installation might be waiting for the worker node certificate signing requests (CSRs) to be approved. The machine approver operator typically handles CSR approval.

    CSR approval, however, sometimes has to be performed manually.

    To check pending CSRs:

    # oc get csr

    To approve all pending CSRs:

    # oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve

    You may have to approve all pending CSRs multiple times, depending on the number of worker nodes in your environment and other factors.

    To monitor incoming CSRs:

    # watch -n5 oc get csr

    Do not move to the next step until incoming CSRs have stopped.

  3. Set the image registry operator's management state to Managed:
    # oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}'
  4. Set up your registry storage.

    For most environments, see Configuring registry storage for bare metal in the Red Hat Openshift documentation.

    For proof of concept labs and other smaller environments, you can set storage to emptyDir.

    # oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'
  5. If you need to make the registry accessible from outside the cluster:
    # oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{"spec":{"defaultRoute":true}}'
  6. Wait for the installation to finish:
    # openshift-install wait-for install-complete
    INFO Waiting up to 30m0s for the cluster at https://api.ocp4.example.com:6443 to initialize...
    INFO Waiting up to 10m0s for the openshift-console route to be created...
    INFO Install complete!
    INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/root/ocp4/auth/kubeconfig'
    INFO Access the OpenShift web-console here: https://console-openshift-console.apps.ocp4.example.com
    INFO Login to the console with user: kubeadmin, password: XXX-XXXX-XXXX-XXXX
  7. Add a user to the cluster. See How to Add a User After Completing the Installation.

How to Install Contrail Networking and Red Hat OpenShift 4.5 on Amazon Web Services

Follow these procedures to install Contrail Networking and Red Hat Openshift 4.5 on Amazon Web Services (AWS):

When to Use This Procedure

This procedure is used to install Contrail Networking and Red Hat OpenShift 4.5 orchestration in AWS. Support for Contrail Networking and Red Hat OpenShift 4.5 environments is introduced in Contrail Networking Release 2011. See Contrail Networking Supported Platforms.

Prerequisites

This document makes the following assumptions about your environment:

  • The server meets the platform requirements for the Contrail Networking installation. See Contrail Networking Supported Platforms.

  • You have the OpenShift binaries, version 4.4.8 or later. See the OpenShift Installation site if you need to update your binary files.

  • You can access Openshift image pull secrets. See Using image pull secrets from Red Hat.

  • You have an active AWS account.

  • AWS CLI is installed. See Installing the AWS CLI from AWS.

  • You have an SSH key that you can generate or provide on your local machine during the installation.

Configure DNS

A DNS zone must be created and available in Route 53 for your AWS account before starting this installation. You must also register a domain for your Contrail cluster in AWS Route 53. All entries created in AWS Route 53 are expected to be resolvable from the nodes in the Contrail cluster.

For information on configuring DNS zones in AWS Route 53, see the Amazon Route 53 Developer Guide from AWS.
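
If you prefer the AWS CLI over the console, you can create and verify a hosted zone as shown in this sketch; example.com is a placeholder, and registering the domain itself is still done through Route 53 or your registrar:

    $ aws route53 create-hosted-zone --name example.com --caller-reference "$(date +%s)"
    $ aws route53 list-hosted-zones --query 'HostedZones[].Name'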

Configure AWS Credentials

The installer used in this procedure creates multiple resources in AWS that are needed to run your cluster. These resources include Elastic Compute Cloud (EC2) instances, Virtual Private Clouds (VPCs), security groups, IAM roles, and other necessary network building blocks.

AWS credentials are needed to access these resources and should be configured before starting this installation.

To configure AWS credentials, see the Configuration and credential file settings section of the AWS Command Line Interface User Guide from AWS.
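
If the credentials are not configured yet, the interactive aws configure command is a common approach; it writes ~/.aws/credentials and ~/.aws/config, and the values shown here are placeholders. You can then confirm access with aws sts get-caller-identity:

    $ aws configure
    AWS Access Key ID [None]: <access-key-id>
    AWS Secret Access Key [None]: <secret-access-key>
    Default region name [None]: eu-west-1
    Default output format [None]: json
    $ aws sts get-caller-identity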

Download the OpenShift Installer and the Command Line Tools

To download the installer and the command line tools:

  1. Check which versions of the OpenShift installer are available:
    $ curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/ | \
      awk '{print $5}'| \
      grep -o '4.[0-9].[0-9]*' | \
      uniq | \
      sort | \
      column
  2. Set the version and download the OpenShift installer and the CLI tool.

    In this example, the OpenShift version is 4.5.21.

    $ VERSION=4.5.21
    $ wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$VERSION/openshift-install-mac-$VERSION.tar.gz
    $ wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/$VERSION/openshift-client-mac-$VERSION.tar.gz
    
    $ tar -xvzf openshift-install-mac-${VERSION}.tar.gz -C /usr/local/bin
    $ tar -xvzf openshift-client-mac-${VERSION}.tar.gz -C /usr/local/bin
    
    $ openshift-install version
    $ oc version
    $ kubectl version

Deploy the Cluster

To deploy the cluster:

  1. Generate an SSH private key and add it to the agent:
    $ ssh-keygen -b 4096 -t rsa -f ~/.ssh/id_rsa -N ""
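
    To also add the key to the SSH agent, as the step title suggests, you can typically run the following; this assumes ssh-agent is available on your workstation:

    $ eval "$(ssh-agent -s)"
    $ ssh-add ~/.ssh/id_rsa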
  2. Create a working folder:

    In this example, a working folder named aws-ocp4 is created and the user is then moved into the new directory.

    $ mkdir ~/aws-ocp4 ; cd ~/aws-ocp4
  3. Create an installation configuration file. See Creating the installation configuration file section of the Installing a cluster on AWS with customizations document from Red Hat OpenShift.
    $ openshift-install create install-config

    This command creates an install-config.yaml file in the current directory. A sample install-config.yaml file is provided below.

    Be aware of the following factors while creating the install-config.yaml file:

    • The networkType field is usually set as OpenShiftSDN in the YAML file by default.

      When the configuration points at Contrail cluster nodes, the networkType field must be set to Contrail.

    • OpenShift master nodes need larger instances. We recommend setting the type to m5.2xlarge or larger for the OpenShift master nodes.

    • Most OpenShift worker nodes can use the default instance sizes. You should consider using larger instances, however, for high demand performance workloads.

    • Many of the installation parameters in the YAML file are described in more detail in the Installation configuration parameters section of the Installing a cluster on AWS with customizations document from Red Hat OpenShift.

    A sample install-config.yaml file:

    apiVersion: v1
    baseDomain: ovsandbox.com
    compute:
    - architecture: amd64
      hyperthreading: Enabled
      name: worker
      platform:
        aws:
          rootVolume:
            iops: 2000
            size: 500
            type: io1
          type: m5.4xlarge
      replicas: 3
    controlPlane:
      architecture: amd64
      hyperthreading: Enabled
      name: master
      platform:
        aws:
          rootVolume:
            iops: 4000
            size: 500
            type: io1
          type: m5.2xlarge
      replicas: 3
    metadata:
      creationTimestamp: null
      name: w1
    networking:
      clusterNetwork:
      - cidr: 10.128.0.0/14
        hostPrefix: 23
      machineNetwork:
      - cidr: 10.0.0.0/16
      networkType: Contrail
      serviceNetwork:
      - 172.30.0.0/16
    platform:
      aws:
        region: eu-west-1
    publish: External
    pullSecret: '{"auths"...}'
    sshKey: |
      ssh-rsa ...
  4. Create the installation manifests:
    # openshift-install create manifests
  5. Download the YAML files that apply the Contrail configuration:
    bash <<EOF
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/manifests/00-contrail-01-namespace.yaml -o manifests/00-contrail-01-namespace.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/manifests/00-contrail-02-admin-password.yaml -o manifests/00-contrail-02-admin-password.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/manifests/00-contrail-02-rbac-auth.yaml -o manifests/00-contrail-02-rbac-auth.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/manifests/00-contrail-02-registry-secret.yaml -o manifests/00-contrail-02-registry-secret.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/manifests/00-contrail-03-cluster-role.yaml -o manifests/00-contrail-03-cluster-role.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/manifests/00-contrail-04-serviceaccount.yaml -o manifests/00-contrail-04-serviceaccount.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/manifests/00-contrail-05-rolebinding.yaml -o manifests/00-contrail-05-rolebinding.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/manifests/00-contrail-06-clusterrolebinding.yaml -o manifests/00-contrail-06-clusterrolebinding.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_cassandras_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_cassandras_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_commands_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_commands_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_configs_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_configs_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_contrailcnis_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_contrailcnis_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_fernetkeymanagers_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_fernetkeymanagers_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_contrailmonitors_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_contrailmonitors_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_contrailstatusmonitors_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_contrailstatusmonitors_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_controls_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_controls_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_keystones_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_keystones_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_kubemanagers_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_kubemanagers_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_managers_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_managers_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_memcacheds_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_memcacheds_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_postgres_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_postgres_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_provisionmanagers_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_provisionmanagers_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_rabbitmqs_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_rabbitmqs_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_swiftproxies_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_swiftproxies_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_swifts_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_swifts_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_swiftstorages_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_swiftstorages_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_vrouters_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_vrouters_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_webuis_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_webuis_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/crds/contrail.juniper.net_zookeepers_crd.yaml -o manifests/00-contrail-07-contrail.juniper.net_zookeepers_crd.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/releases/R2011/manifests/00-contrail-08-operator.yaml -o manifests/00-contrail-08-operator.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/releases/R2011/manifests/00-contrail-09-manager.yaml -o manifests/00-contrail-09-manager.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/manifests/cluster-network-02-config.yml -o manifests/cluster-network-02-config.yml
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/openshift/99_master-iptables-machine-config.yaml -o openshift/99_master-iptables-machine-config.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/openshift/99_master-kernel-modules-overlay.yaml -o openshift/99_master-kernel-modules-overlay.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/openshift/99_master_network_functions.yaml -o openshift/99_master_network_functions.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/openshift/99_master_network_manager_stop_service.yaml -o openshift/99_master_network_manager_stop_service.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/openshift/99_master-pv-mounts.yaml -o openshift/99_master-pv-mounts.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/openshift/99_worker-iptables-machine-config.yaml -o openshift/99_worker-iptables-machine-config.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/openshift/99_worker-kernel-modules-overlay.yaml -o openshift/99_worker-kernel-modules-overlay.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/openshift/99_worker_network_functions.yaml -o openshift/99_worker_network_functions.yaml;\
    curl https://raw.githubusercontent.com/Juniper/contrail-operator/R2011/deploy/openshift/openshift/99_worker_network_manager_stop_service.yaml -o openshift/99_worker_network_manager_stop_service.yaml;
    EOF
  6. Modify the YAML files for your environment.

    Describing every potential configuration change is beyond the scope of this document.

    Common configuration changes include:

    • Modify the 00-contrail-02-registry-secret.yaml file to provide the proper credentials for a registry. The most commonly used registry is the Contrail repository at hub.juniper.net.

      Note:

      You can create a base64-encoded value for this configuration with the script provided in this directory. If you want to use this value, copy the output of the script and paste it into the Contrail registry secret configuration, replacing the DOCKER_CONFIG variable with the generated base64-encoded string. A manual alternative is sketched after this list.

    • If you are using non-default network-CIDR subnets for your pods or services, open the deploy/openshift/manifests/cluster-network-02-config.yml file and update the CIDR values.

    • The default number of master nodes in a Kubernetes cluster is 3. If you are using a different number of master nodes, modify the deploy/openshift/manifests/00-contrail-09-manager.yaml file and set the spec.commonConfiguration.replicas field to the number of master nodes.
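
    If you would rather produce the base64-encoded registry credentials by hand instead of using the script, the following sketch shows one way to do it; the file name and JSON layout are illustrative, so match the structure that 00-contrail-02-registry-secret.yaml expects:

    $ cat > docker-config.json <<'EOF'
    {
      "auths": {
        "hub.juniper.net": {
          "username": "<registry-username>",
          "password": "<registry-password>"
        }
      }
    }
    EOF
    $ base64 docker-config.json | tr -d '\n'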

  7. Create the cluster:
    $ openshift-install create cluster --log-level=debug
    • Contrail Networking needs to open some networking ports for operation within AWS. These ports are opened by adding rules to security groups.

      Follow this procedure to add rules to security groups when AWS resources are manually created:

      1. Build the Contrail CLI tool for managing security group ports on AWS. This tool automatically opens the ports that Contrail requires in the security groups attached to the Contrail cluster resources.

        To build this tool:

        go build .

        After entering this command, you should see the contrail-sc-open binary in your directory. This binary is the compiled tool.

      2. Start the tool:

        ./contrail-sc-open -cluster-name <name of your OpenShift cluster> -region <AWS region where the cluster is located>
      3. Verify that the service has been created:

        oc -n openshift-ingress get service router-default

        Proceed to the next step after confirming the service was created.

  8. When the service router-default is created in openshift-ingress, use the following command to patch the configuration:

    $ oc -n openshift-ingress patch service router-default --patch '{"spec": {"externalTrafficPolicy": "Cluster"}}'
  9. Monitor the screen messages.

    Look for the INFO Install complete! message.

    The final messages from a sample successful installation:

    INFO Waiting up to 10m0s for the openshift-console route to be created...
    DEBUG Route found in openshift-console namespace: console
    DEBUG Route found in openshift-console namespace: downloads
    DEBUG OpenShift console route is created
    INFO Install complete!
    INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/Users/ovaleanu/aws1-ocp4/auth/kubeconfig'
    INFO Access the OpenShift web-console here: https://console-openshift-console.apps.w1.ovsandbox.com
    INFO Login to the console with user: kubeadmin, password: XXXxx-XxxXX-xxXXX-XxxxX
  10. Access the cluster:
    $ export KUBECONFIG=~/aws-ocp4/auth/kubeconfig
  11. Add a user to the cluster. See How to Add a User After Completing the Installation.

How to Add a User After Completing the Installation

The process for adding an OpenShift user is identical on KVM and AWS.

Red Hat OpenShift 4.5 supports a single kubeadmin user by default. The kubeadmin user is used to deploy the initial cluster configuration.

You can use this procedure to create a Custom Resource (CR) that defines an HTPasswd identity provider.

  1. Generate a flat file that contains the user names and passwords for your cluster by using the HTPasswd identity provider:
    $ htpasswd -c -B -b users.htpasswd testuser MyPassword

    A file called users.htpasswd is created.

  2. Define a secret that contains the HTPasswd user file:
    $ oc create secret generic htpass-secret --from-file=htpasswd=/root/ocp4/users.htpasswd -n openshift-config

    This custom resource shows the parameters and acceptable values for an HTPasswd identity provider.

    $ cat htpasswdCR.yaml
    apiVersion: config.openshift.io/v1
    kind: OAuth
    metadata:
      name: cluster
    spec:
      identityProviders:
      - name: testuser
        mappingMethod: claim
        type: HTPasswd
        htpasswd:
          fileData:
            name: htpass-secret
  3. Apply the defined custom resource:
    $ oc create -f htpasswdCR.yaml
  4. Add the user and assign the cluster-admin role:
    $ oc adm policy add-cluster-role-to-user cluster-admin testuser
  5. Log in using the new user credentials:
    oc login -u testuser
    Authentication required for https://api.ocp4.example.com:6443 (openshift)
    Username: testuser
    Password:
    Login successful.

    The kubeadmin user can now safely be removed. See the Removing the kubeadmin user document from Red Hat OpenShift.
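
    For reference, the removal that document describes comes down to deleting the kubeadmin secret; run it only after confirming that the new cluster-admin user can log in, because the action cannot be undone:

    $ oc delete secrets kubeadmin -n kube-system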

How to Install Earlier Releases of Contrail Networking and Red Hat OpenShift

If you need to install Contrail Networking with earlier versions of Red Hat OpenShift, Contrail Networking is also supported with Red Hat OpenShift Releases 4.4 and 3.11.

For information on installing Contrail Networking with Red Hat Openshift 4.4, see How to Install Contrail Networking and Red Hat OpenShift 4.4.

For information on installing Contrail Networking with Red Hat Openshift 3.11, see the following documentation:
