Provisioning Red Hat OpenShift Container Platform Clusters Using Ansible Deployer

Contrail Release 5.0.2 supports the following ways of installing and provisioning standalone and nested Red Hat OpenShift Container Platform version 3.9 clusters. These instructions apply to systems running on Microsoft Azure, Amazon Web Services (AWS), or bare metal servers (BMS).

Installing a Standalone OpenShift Cluster Using Ansible Deployer

Prerequisites

Ensure that your system meets the following requirements.

  • Master Node (x1 or x3 for high availability)

    • Image: RHEL 7.5

    • CPU/RAM: 4 CPU, 32 GB RAM

    • Disk: 250 GB

    • Security Group: Allow all traffic from everywhere

  • Slave Node (xn)

    • Image: RHEL 7.5

    • CPU/RAM: 8 CPU, 64 GB RAM

    • Disk: 250 GB

    • Security Group: Allow all traffic from everywhere

  • Load Balancer Node (x1; required only for high availability, not needed for a single master node installation)

    • Image: RHEL 7.5

    • CPU/RAM: 2 CPU, 16 GB RAM

    • Disk: 100 GB

    • Security Group: Allow all traffic from everywhere

Note:

Ensure that you launch the instances in the same subnet.

Installing a standalone OpenShift cluster using Ansible deployer

Perform the following steps to install a standalone OpenShift cluster with Contrail as the networking provider, and to provision the cluster using contrail-ansible-deployer.

  1. Re-image all the servers.
  2. Set up environment nodes:
    1. Register all nodes in the cluster with Red Hat Subscription Manager (RHSM) so that they can subscribe to OpenShift Container Platform.

      (all-nodes)# subscription-manager register --username username --password password --force

    2. List the available subscriptions.

      (all-nodes)# subscription-manager list --available --matches '*OpenShift*'

    3. From the list of available subscriptions, find and attach the pool ID for the OpenShift Container Platform subscription.

      (all-nodes)# subscription-manager attach --pool=pool-ID

    4. Disable all yum repositories.

      (all-nodes)# subscription-manager repos --disable="*"

    5. Enable only the required repositories.
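
      The repository set below follows the Red Hat OpenShift Container Platform 3.9 host preparation documentation and is shown only as a reference; confirm the exact repository IDs available to your subscription before enabling them.

      (all-nodes)# subscription-manager repos --enable="rhel-7-server-rpms" --enable="rhel-7-server-extras-rpms" --enable="rhel-7-server-ose-3.9-rpms" --enable="rhel-7-fast-datapath-rpms" --enable="rhel-7-server-ansible-2.4-rpms"
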
    6. Install Extra Packages for Enterprise Linux (EPEL).

      (all-nodes)# yum install wget -y && wget -O /tmp/epel-release-latest-7.noarch.rpm https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm && rpm -ivh /tmp/epel-release-latest-7.noarch.rpm

    7. Update the system to use the latest packages.

      (all-nodes)# yum update -y

    8. Install the following packages, which provide OpenShift Container Platform utilities.

      (all-nodes)# yum install atomic-openshift-excluder atomic-openshift-utils git python-netaddr -y

    9. Remove the atomic-openshift packages from the yum excluder list for the duration of the installation.

      (all-nodes)# atomic-openshift-excluder unexclude -y

    10. Enable SSH access for the root user.
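
      One way to do this, assuming the default sshd configuration file location, is to permit root login in /etc/ssh/sshd_config and restart the SSH daemon:

      (all-nodes)# sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
      (all-nodes)# systemctl restart sshd
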
    11. Log out.

      (all-nodes)# logout

    12. Log in as the root user.

      ssh node-ip -l root

    13. Enforce the SELinux security policy.
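
      For example, verify that SELinux is in enforcing mode with the targeted policy before you continue; this sketch assumes the standard /etc/selinux/config location. getenforce must return Enforcing, and the configuration file must contain SELINUX=enforcing.

      (all-nodes)# getenforce
      (all-nodes)# grep '^SELINUX=' /etc/selinux/config
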
  3. Install the supported Ansible version by running the following command:
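
    The exact version to install depends on the Contrail and OpenShift releases; the command below is a sketch that assumes Ansible 2.5 is the supported version and that ansible-node is the machine from which you run the deployment. Adjust the version pin as required.

    (ansible-node)# yum install -y ansible-2.5*
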
  4. Get the files from the latest tarball. Download the OpenShift Container Platform install package from the Juniper software download site and modify the contents of the openshift-ansible inventory file.
    1. Download the OpenShift deployer (contrail-openshift-deployer-5.0.X.tar) installer tarball from the Juniper software download site: https://www.juniper.net/support/downloads/?p=contrail#sw
    2. Copy the install package to the node from where Ansible must be deployed. Ensure that the node has password-free access to the OpenShift master and slave nodes.

      scp contrail-openshift-deployer-5.0.X.tar openshift-ansible-node:/root/

    3. Untar the contrail-openshift-deployer-5.0.X.tar package.

      tar -xvf contrail-openshift-deployer-5.0.X.tar -C /root/

    4. Verify the contents of the openshift-ansible directory.

      cd /root/openshift-ansible/

    5. Modify the inventory file to match your OpenShift environment.

      Populate the install file with Contrail configuration parameters specific to your system. Refer to the following example.

      Add the master nodes in the [nodes] section of the inventory to ensure that the Contrail control pods will come up on the OpenShift master nodes.

      For more information about each of these parameters and for an example for a HA master, see https://github.com/Juniper/contrail-kubernetes-docs/blob/master/install/openshift/3.9/standalone-openshift.md.

    Note:

    Juniper Networks recommends that you obtain the Ansible source files from the latest release.

    This procedure assumes that there is one master node, one infra node, and one compute node.

  5. Edit /etc/hosts to allow all machines to access all nodes.
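
    For example, with a hypothetical single master, infra, and compute node (the IP addresses and host names below are placeholders), add entries such as the following to /etc/hosts on every machine:

    10.0.0.10  master.example.com   master
    10.0.0.11  infra.example.com    infra
    10.0.0.12  compute.example.com  compute
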
  6. Set up password free SSH access to the Ansible node and all the nodes.
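
    A minimal sketch using ssh-keygen and ssh-copy-id from the Ansible node; the host names are the placeholders used in the previous step.

    (ansible-node)# ssh-keygen -t rsa
    (ansible-node)# ssh-copy-id root@master.example.com
    (ansible-node)# ssh-copy-id root@infra.example.com
    (ansible-node)# ssh-copy-id root@compute.example.com
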
  7. Run the Ansible playbooks to install OpenShift Container Platform with Contrail. Before you run the playbooks, ensure that you have edited the inventory/ose-install file as described in Step 4.
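
    The playbook paths below follow the openshift-ansible layout extracted from the contrail-openshift-deployer package and are shown as a reference; confirm the paths in your extracted package before running them.

    (ansible-node)# cd /root/openshift-ansible
    (ansible-node)# ansible-playbook -i inventory/ose-install playbooks/prerequisites.yml
    (ansible-node)# ansible-playbook -i inventory/ose-install playbooks/deploy_cluster.yml
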
  8. Verify that Contrail has been installed and is operational.
  9. Install the customized web console, which runs on the infra nodes. To do this, disable the OpenShift web console, enable the Contrail web console, and add the corresponding parameters to the ose-install file.
  10. Create a password for the admin user to log in to the UI from the master node.
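
    Assuming the htpasswd identity provider is configured (the note below also refers to the htpasswd file), a typical command from the master node is:

    (master-node)# htpasswd /etc/origin/master/htpasswd admin
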
    Note:

    If you are using a load balancer, you must manually copy the htpasswd file into all your master nodes.

  11. Assign cluster-admin role to admin user.
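
    For example, from the master node, assuming the admin user created in the previous step:

    (master-node)# oc adm policy add-cluster-role-to-user cluster-admin admin
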
  12. Open a Web browser and enter the fully qualified domain name (FQDN) of your master node or load balancer node, followed by :8443/console.
    Note:

    Use the user name and password created in Step 10 to log in to the Web console.

    Note:

    Your DNS must resolve the host name for access. If the host name does not resolve, add an entry for the host to the /etc/hosts file.

  13. Verify the provisioning process.

    (master-node)# oc get pods -n kube-system

    The status of all the pods must be displayed as Running.

    (master-node)# contrail-status

    All contrail-services must be displayed as active.

  14. Access the Contrail and OpenShift Web user interfaces and attempt to log in to each.

    Contrail: https://master-node-ip:8143, using the admin/c0ntrail123 login credentials.

    OpenShift: https://infra-node-ip:8443, using the admin user and the password created in Step 10.

You can test the system by launching pods, services, namespaces, network policies, ingress, and so on. For more information, see the examples listed in https://github.com/juniper/openshift-contrail/tree/master/openshift/examples.

Sample ose-install File

Use the following sample ose-install file for reference.
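
The full sample is not reproduced here. The fragment below is only an illustrative sketch of the inventory layout for a single-master cluster; the Contrail-specific variable names and the host names are assumptions, so take the authoritative inventory from the contrail-openshift-deployer package and the GitHub documentation referenced above.

[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=openshift-enterprise
openshift_release=v3.9
# Disable the native SDN and use the Contrail CNI plugin (variable names are assumptions).
openshift_use_openshift_sdn=false
os_sdn_network_plugin_name=cni
openshift_use_contrail=true
contrail_version=5.0
contrail_container_tag=<container-tag>
contrail_registry=hub.juniper.net/contrail
contrail_registry_username=<username>
contrail_registry_password=<password>

[masters]
master.example.com

[etcd]
master.example.com

[nodes]
master.example.com
infra.example.com openshift_node_labels="{'region': 'infra'}"
compute.example.com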

Sample ose-install File for a HA setup

Use the following sample ose-install file for reference.

Provisioning of Nested OpenShift Clusters Using Ansible Deployer—Beta

When Contrail provides networking for an OpenShift cluster that is provisioned on a Contrail-OpenStack cluster, it is called a nested OpenShift cluster. Contrail components are shared between the two clusters.

The following steps describe how to provision a nested OpenShift cluster.

Note:

Provisioning of nested OpenShift Clusters is supported only as a Beta feature. Ensure that you have an operational Contrail-OpenStack cluster based on Contrail Release 5.0 before provisioning a nested OpenShift cluster.

Configure network connectivity to Contrail configuration and data plane functions

A nested OpenShift cluster is managed by the same Contrail control processes that manage the underlying OpenStack cluster, so the nested cluster needs IP reachability to those control processes. Because the OpenShift cluster is an overlay on the OpenStack cluster, you can use the Contrail link local service feature, or link local services combined with fabric Source Network Address Translation (SNAT), to provide IP reachability between the OpenShift cluster on the overlay and the OpenStack cluster.

Use one of the following options to create link local services.

  • Fabric SNAT with link local service

    To provide IP reachability to and from the nested OpenShift cluster using fabric SNAT with link local services, perform the following steps.

    1. Enable fabric SNAT on the virtual network of the VMs.

      The fabric SNAT feature must be enabled on the virtual network of the virtual machines on which the OpenShift master and slave nodes are running.

    2. Using the Contrail GUI, create one link local service that enables the Container Network Interface (CNI) to communicate with its vRouter.

    The following link local service is required.

    Contrail Process | Service IP                     | Service Port | Fabric IP | Fabric Port
    -----------------+--------------------------------+--------------+-----------+------------
    vRouter          | Service_IP for the active node | 9091         | 127.0.0.1 | 9091

    Note:

    The fabric IP address is 127.0.0.1 because the CNI must communicate with the vRouter on its own underlay node.

    For example, the following link local service must be created:

    Link Local Service Name | Service IP | Service Port | Fabric IP | Fabric Port
    ------------------------+------------+--------------+-----------+------------
    K8s-cni-to-agent        | 10.10.10.5 | 9091         | 127.0.0.1 | 9091

    Note:

    Here, 10.10.10.5 is the service IP address that you chose. It can be any unused IP address in the cluster; it is used only to identify link local traffic and has no other significance.

  • Link local only

    To configure a link local service, you need a service IP address and a fabric IP address. The fabric IP address is the IP address of the node on which the Contrail processes are running. The service IP address, along with the port number, is used by the data plane to identify the fabric IP address, and it must be a unique, unused IP address in the entire OpenStack cluster. Identify one service IP address for each node of the OpenStack cluster.

    The following link local services are required:

    Contrail Process   | Service IP                     | Service Port | Fabric IP                   | Fabric Port
    -------------------+--------------------------------+--------------+-----------------------------+------------
    Contrail Config    | Service_IP for the active node | 8082         | Node_IP for the active node | 8082
    Contrail Analytics | Service_IP for the active node | 8086         | Node_IP for the active node | 8086
    Contrail Msg Queue | Service_IP for the active node | 5673         | Node_IP for the active node | 5673
    Contrail VNC DB    | Service_IP for the active node | 9161         | Node_IP for the active node | 9161
    Keystone           | Service_IP for the active node | 35357        | Node_IP for the active node | 35357
    vRouter            | Service_IP for the active node | 9091         | 127.0.0.1                   | 9091

For example, consider a hypothetical OpenStack cluster made up of seven nodes. Allocate seven unused service IP addresses, one for each node (10.10.10.1 through 10.10.10.7 in the following example).

The following link local services must be created:

Link Local Service Name | Service IP | Service Port | Fabric IP     | Fabric Port
------------------------+------------+--------------+---------------+------------
Contrail Config         | 10.10.10.1 | 8082         | 192.168.1.100 | 8082
Contrail Analytics      | 10.10.10.1 | 8086         | 192.168.1.100 | 8086
Contrail Analytics 2    | 10.10.10.2 | 8086         | 192.168.1.101 | 8086
Contrail Msg Queue      | 10.10.10.1 | 5673         | 192.168.1.100 | 5673
Contrail VNC DB 1       | 10.10.10.1 | 9161         | 192.168.1.100 | 9161
Contrail VNC DB 2       | 10.10.10.2 | 9161         | 192.168.1.101 | 9161
Contrail VNC DB 3       | 10.10.10.3 | 9161         | 192.168.1.102 | 9161
Keystone                | 10.10.10.4 | 35357        | 192.168.1.200 | 35357
VRouter-192.168.1.300   | 10.10.10.5 | 9091         | 127.0.0.1     | 9091
VRouter-192.168.1.400   | 10.10.10.6 | 9091         | 127.0.0.1     | 9091
VRouter-192.168.1.500   | 10.10.10.7 | 9091         | 127.0.0.1     | 9091

Installing a Nested OpenShift Cluster Using Ansible Deployer

Perform the steps in Installing a Standalone OpenShift Cluster Using Ansible Deployer to continue installing and provisioning the OpenShift cluster.

Sample ose-install File

Add the nested cluster configuration parameters to the sample ose-install file described earlier in this topic.

For more information, see https://github.com/Juniper/contrail-kubernetes-docs/tree/master/install/openshift/3.9.