Contrail Getting Started Guide

Provisioning Red Hat OpenShift Container Platform Clusters Using Ansible Deployer

16-Oct-23

Contrail Release 5.0.2 supports the following ways of installing and provisioning standalone and nested Red Hat OpenShift Container Platform version 3.9 clusters. These instructions are valid for systems on Microsoft Azure, Amazon Web Services (AWS), or bare metal servers (BMS).

Installing a Standalone OpenShift Cluster Using Ansible Deployer

Prerequisites

Ensure that your servers meet the following system requirements.

  • Master Node (x1 or x3 for high availability)

    • Image: RHEL 7.5

    • CPU/RAM: 4 CPU, 32 GB RAM

    • Disk: 250 GB

    • Security Group: Allow all traffic from everywhere

  • Slave Node (xn)

    • Image: RHEL 7.5

    • CPU/RAM: 8 CPU, 64 GB RAM

    • Disk: 250 GB

    • Security Group: Allow all traffic from everywhere

  • Load Balancer Node (x1, only when using high availability. Not needed for single master node installation.)

    • Image: RHEL 7.5

    • CPU/RAM: 2 CPU, 16 GB RAM

    • Disk: 100 GB

    • Security Group: Allow all traffic from everywhere

Note:

Ensure that you launch the instances in the same subnet.
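
A quick way to confirm that a node meets these requirements is to check its image, CPU, memory, and disk from the shell. The following commands are a minimal sketch using standard RHEL tools; the exact values reported depend on your instance type.

cat /etc/redhat-release    # expect Red Hat Enterprise Linux Server release 7.5
nproc                      # CPU count (4 for master, 8 for slave, 2 for load balancer)
free -g                    # total memory in GB
df -h /                    # root disk size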

Installing a standalone OpenShift cluster using Ansible deployer

Perform the following steps to install a standalone OpenShift cluster with Contrail as the networking provider, and provision the cluster using contrail-ansible-deployer.

  1. Re-image all the servers.
    /server-manager reimage --server_id server1 redhat-7.5-minimal
  2. Set up environment nodes:
    1. Register all nodes in the cluster using Red Hat Subscription Manager (RHSM) to subscribe them to OpenShift Container Platform.

      (all-nodes)# subscription-manager register --username username --password password --force

    2. List the available subscriptions.

      (all-nodes)# subscription-manager list --available --matches '*OpenShift*'

    3. From the list of available subscriptions, find and attach the pool ID for the OpenShift Container Platform subscription.

      (all-nodes)# subscription-manager attach --pool=pool-ID

    4. Disable all yum repositories.

      (all-nodes)# subscription-manager repos --disable="*"

    5. Enable only the required repositories.
       (all-nodes)# subscription-manager repos \
          --enable="rhel-7-server-rpms" \
          --enable="rhel-7-server-extras-rpms" \
          --enable="rhel-7-server-ose-3.9-rpms" \
          --enable="rhel-7-fast-datapath-rpms" \
          --enable="rhel-7-server-ansible-2.5-rpms"
      
    6. Install Extra Packages for Enterprise Linux (EPEL).

      (all-nodes)# yum install wget -y && wget -O /tmp/epel-release-latest-7.noarch.rpm https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm && rpm -ivh /tmp/epel-release-latest-7.noarch.rpm

    7. Update the system to use the latest packages.

      (all-nodes)# yum update -y

    8. Install the following packages, which provide OpenShift Container Platform utilities.

      (all-nodes)# yum install atomic-openshift-excluder atomic-openshift-utils git python-netaddr -y

    9. Remove the atomic-openshift packages from the yum exclude list for the duration of the installation.

      (all-nodes)# atomic-openshift-excluder unexclude -y

    10. Enable SSH access for the root user.
       (all-nodes)# sudo su
       (all-nodes)# passwd
       (all-nodes)# sed -i -e 's/#PermitRootLogin yes/PermitRootLogin yes/g' -e 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
       (all-nodes)# service sshd restart
       (all-nodes)# logout 
    11. Log out.

      (all-nodes)# logout

    12. Log in as root user.

      ssh node-ip -l root

    13. Enforce the SELinux security policy.
      (all-nodes)# vi /etc/selinux/config
      
              SELINUX=enforcing
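
      The setting in /etc/selinux/config takes effect at the next boot. To check the mode that is currently in effect on a node, you can run the following optional check:

      (all-nodes)# getenforce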
  3. Install the supported Ansible version by running the following command:
    yum install ansible
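
    The rhel-7-server-ansible-2.5-rpms repository enabled earlier provides the supported Ansible 2.5 release. You can confirm the installed version before continuing:

    ansible --version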
  4. Get the files from the latest tarball. Download the OpenShift Container Platform install package from the Juniper software download site, and modify the contents of the openshift-ansible inventory file.
    1. Download the OpenShift Deployer (contrail-openshift-deployer-5.0.X.tar) installer tarball from the Juniper software download site: https://www.juniper.net/support/downloads/?p=contrail#sw
    2. Copy the install package to the node from where Ansible must be deployed. Ensure that the node has password-free access to the OpenShift master and slave nodes.

      scp contrail-openshift-deployer-5.0.X.tar openshift-ansible-node:/root/

    3. Untar the contrail-openshift-deployer-5.0.X.tar package.

      tar -xvf contrail-openshift-deployer-5.0.X.tar -C /root/

    4. Change to the openshift-ansible directory and verify its contents.

      cd /root/openshift-ansible/

    5. Modify the inventory file to match your OpenShift environment.

      Populate the install file with Contrail configuration parameters specific to your system. Refer to the following example.

      Add the master nodes in the [nodes] section of the inventory to ensure that the Contrail control pods will come up on the OpenShift master nodes.

      (ansible-node)# vi /root/openshift-ansible/inventory/ose-install
      
              [OSEv3:vars]
              ...
              contrail_version=5.0
              contrail_container_tag=5.0.X-0.X
              contrail_registry=hub.juniper.net/contrail
              contrail_registry_username=username-for-contrail-container-registry
              contrail_registry_password=password-for-contrail-container-registry
                      ...
      

      For more information about each of these parameters and for an example of an HA master configuration, see https://github.com/Juniper/contrail-kubernetes-docs/blob/master/install/openshift/3.9/standalone-openshift.md.

    Note:

    Juniper Networks recommends that you obtain the Ansible source files from the latest release.

    This procedure assumes that there is one master node, one infra node, and one compute node.

    master : server1 (1x.xx.xx.11)
    
    infra : server2 (1x.xx.xx.22)
    
    compute : server3 (1x.xx.xx.33)
  5. Edit /etc/hosts so that all machines can reach all nodes by host name.
    [root@server1]# cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    10.84.5.100 puppet
    1x.xx.xx.11 server1.contrail.juniper.net server1
    1x.xx.xx.22 server2.contrail.juniper.net server2
    1x.xx.xx.33 server3.contrail.juniper.net server3
  6. Set up password-free SSH access from the Ansible node to all the nodes.
    ssh-keygen -t rsa
    ssh-copy-id root@1x.xx.xx.11
    ssh-copy-id root@1x.xx.xx.22
    ssh-copy-id root@1x.xx.xx.33
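
    To confirm that password-free access works before running the playbooks, a check such as the following (a sketch; substitute the node IP addresses from your environment) should print each host name without prompting for a password:

    for ip in 1x.xx.xx.11 1x.xx.xx.22 1x.xx.xx.33; do ssh -o BatchMode=yes root@$ip hostname; done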
  7. Run the Ansible playbooks to install OpenShift Container Platform with Contrail. Before you run the playbooks, ensure that you have edited the inventory/ose-install file as described in the previous steps.
    (ansible-node)# cd /root/openshift-ansible
    (ansible-node)# ansible-playbook -i inventory/ose-install playbooks/prerequisites.yml
    (ansible-node)# ansible-playbook -i inventory/ose-install playbooks/deploy_cluster.yml
    
  8. Verify that Contrail has been installed and is operational.
    (master)# oc get ds -n kube-system
    (master)# oc get pods -n kube-system
  9. Install the customized Web console, which runs on the infra nodes. To do this, disable the default OpenShift Web console and enable the Contrail Web console by adding the following lines to ose-install:
    openshift_web_console_install=false
    openshift_web_console_contrail_install=true
    
    
  10. On the master node, create a password for the admin user to log in to the UI.
    (master-node)# htpasswd /etc/origin/master/htpasswd admin
    Note:

    If you are using a load balancer, you must manually copy the htpasswd file into all your master nodes.
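
    For example, from the master node where you created the file (the host names here are placeholders for your other master nodes):

    (master-node)# scp /etc/origin/master/htpasswd master2:/etc/origin/master/htpasswd
    (master-node)# scp /etc/origin/master/htpasswd master3:/etc/origin/master/htpasswd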

  11. Assign the cluster-admin role to the admin user.
    (master-node)# oc adm policy add-cluster-role-to-user cluster-admin admin
    (master-node)# oc login -u admin
  12. Open a Web browser and enter the fully qualified domain name (FQDN) of your master node or load balancer node, followed by :8443/console.
    https://<your host name from your ose-install inventory>:8443/console
    Note:

    Use the user name and password created above to log into the Web console.

    Note:

    Your DNS must resolve the host name for access. If the host name is not resolved, add an entry for the host to your /etc/hosts file.
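
    For example, on the machine running the browser, an entry like the following makes the console host resolvable (the IP address and host name shown are placeholders; use the values from your environment and ose-install inventory):

    1x.xx.xx.11   server1.contrail.juniper.net server1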

  13. Verify the provisioning process.

    (master-node)# oc get pods -n kube-system

    The status of all the pods must be displayed as Running.

    (master-node)# contrail-status

    All contrail-services must be displayed as active.

  14. Access the Contrail and OpenShift Web user interfaces and attempt to log in to each.

    Contrail: https://master-node-ip:8143 with <admin/c0ntrail123> login credentials.

    OpenShift: https://infra-node-ip:8443 with <admin/password created in step 10> login credentials.

You can test the system by launching pods, services, namespaces, network policies, ingress, and so on. For more information, see the examples listed in https://github.com/juniper/openshift-contrail/tree/master/openshift/examples.
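
For example, a minimal smoke test from a master node creates a project and a pod, and then checks that the pod reaches the Running state and receives an IP address from the pod subnet. The project and pod names below are arbitrary examples.

(master-node)# oc new-project test-net
(master-node)# oc run test-pod --image=busybox --restart=Never -- sleep 3600
(master-node)# oc get pod test-pod -o wide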

Sample ose-install File

Use the following sample ose-install file for reference.

[OSEv3:children]
masters
nodes
etcd
openshift_ca

[OSEv3:vars]
ansible_ssh_user=root
ansible_become=yes
debug_level=2
deployment_type=origin #openshift-enterprise for Redhat
openshift_release=v3.9
#openshift_repos_enable_testing=true
containerized=false
openshift_install_examples=true
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
osm_cluster_network_cidr=10.32.0.0/12
openshift_portal_net=10.96.0.0/12
openshift_use_dnsmasq=true
openshift_clock_enabled=true
openshift_hosted_manage_registry=false
openshift_hosted_manage_router=false
openshift_enable_service_catalog=false
openshift_use_openshift_sdn=false
os_sdn_network_plugin_name='cni'
openshift_disable_check=memory_availability,package_availability,disk_availability,package_version,docker_storage
openshift_docker_insecure_registries=opencontrailnightly
openshift_web_console_install=false
#openshift_web_console_nodeselector={'region':'infra'}

openshift_web_console_contrail_install=true
openshift_use_contrail=true
nested_mode_contrail=false
contrail_version=5.0
contrail_container_tag=queens-5.0-156
contrail_registry=opencontrailnightly
# Username/Password for private Docker registries
#contrail_registry_username=test
#contrail_registry_password=test
# Below option presides over contrail masters if set
#vrouter_physical_interface=ens160
#docker_version=1.13.1
ntpserver=10.1.1.1 # a proper ntpserver is required for contrail.

# Contrail_vars
# below variables are used by contrail kubemanager to configure the cluster,
# you can configure all options below. All values are defaults and can be modified.

#kubernetes_api_server=10.84.13.52         # in our case this is the master, which is default
#kubernetes_api_port=8080               
#kubernetes_api_secure_port=8443
#cluster_name=myk8s                  
#cluster_project={}
#cluster_network={}
#pod_subnets=10.32.0.0/12
#ip_fabric_subnets=10.64.0.0/12
#service_subnets=10.96.0.0/12
#ip_fabric_forwarding=false
#ip_fabric_snat=false
#public_fip_pool={}
#vnc_endpoint_ip=20.1.1.1
#vnc_endpoint_port=8082

[masters]
10.84.13.52 openshift_hostname=openshift-master

[etcd]
10.84.13.52 openshift_hostname=openshift-master

[nodes]
10.84.13.52 openshift_hostname=openshift-master
10.84.13.53 openshift_hostname=openshift-compute
10.84.13.54 openshift_hostname=openshift-infra openshift_node_labels="{'region': 'infra'}"

[openshift_ca]
10.84.13.52 openshift_hostname=openshift-master
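
Before running the playbooks against an inventory like this, you can sanity-check its structure with Ansible's built-in inventory tooling (available in the Ansible 2.5 release used here); this command only parses the file and does not change anything:

(ansible-node)# ansible-inventory -i inventory/ose-install --graph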

Sample ose-install File for an HA Setup

Use the following sample ose-install file for reference.

[OSEv3:children]
masters
nodes
etcd
lb
openshift_ca

[OSEv3:vars]
ansible_ssh_user=root
ansible_become=yes
debug_level=2
deployment_type=openshift-enterprise
openshift_release=v3.9
openshift_repos_enable_testing=true
containerized=false
openshift_install_examples=true
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
osm_cluster_network_cidr=10.32.0.0/12
openshift_portal_net=10.96.0.0/12
openshift_use_dnsmasq=true
openshift_clock_enabled=true
openshift_enable_service_catalog=false
openshift_use_openshift_sdn=false
os_sdn_network_plugin_name='cni'
openshift_disable_check=disk_availability,package_version,docker_storage
openshift_docker_insecure_registries=ci-repo.englab.juniper.net:5010
openshift_web_console_install=false
openshift_web_console_contrail_install=true
openshift_web_console_nodeselector={'region':'infra'}
openshift_hosted_manage_registry=true
openshift_hosted_registry_selector="region=infra"
openshift_hosted_manage_router=true
openshift_hosted_router_selector="region=infra"
ntpserver=10.84.5.100


# Openshift HA
openshift_master_cluster_method=native
openshift_master_cluster_hostname=lb
openshift_master_cluster_public_hostname=lb


# Below are Contrail variables. Comment them out if you don't want to install Contrail through ansible-playbook
contrail_version=5.0
openshift_use_contrail=true
#rhel-queens-5.0-latest
#contrail_container_tag=rhel-queens-5.0-319
#contrail_registry=ci-repo.englab.juniper.net:5010
contrail_registry=hub.juniper.net/contrail
contrail_registry_username=JNPR-Customer200
contrail_registry_password=F********************f
contrail_container_tag=5.0.2-0.309-rhel-queens
contrail_nodes=[10.0.0.7, 10.0.0.8, 10.0.0.13]
vrouter_physical_interface=eth0

[masters]
10.0.0.7 openshift_hostname=master1
10.0.0.8 openshift_hostname=master2
10.0.0.13 openshift_hostname=master3

[lb]
10.0.0.5 openshift_hostname=lb

[etcd]
10.0.0.7 openshift_hostname=master1
10.0.0.8 openshift_hostname=master2
10.0.0.13 openshift_hostname=master3

[nodes]
10.0.0.7 openshift_hostname=master1
10.0.0.8 openshift_hostname=master2
10.0.0.13 openshift_hostname=master3
10.0.0.10 openshift_hostname=slave1
10.0.0.4 openshift_hostname=slave2
10.0.0.6 openshift_hostname=infra1 openshift_node_labels="{'region': 'infra'}"
10.0.0.11 openshift_hostname=infra2 openshift_node_labels="{'region': 'infra'}"
10.0.0.12 openshift_hostname=infra3 openshift_node_labels="{'region': 'infra'}"

[openshift_ca]
10.0.0.7 openshift_hostname=master1
10.0.0.8 openshift_hostname=master2
10.0.0.13 openshift_hostname=master3
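
After an HA deployment with an inventory like the one above, you can quickly confirm that the load balancer is fronting the OpenShift API by querying the API health endpoint through the lb host name (assuming it resolves from where you run the check); a healthy cluster returns ok:

(ansible-node)# curl -k https://lb:8443/healthz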

Provisioning of Nested OpenShift Clusters Using Ansible Deployer—Beta

When Contrail provides networking for an OpenShift cluster that is provisioned on a Contrail-OpenStack cluster, it is called a nested OpenShift cluster. Contrail components are shared between the two clusters.

The following steps describe how to provision a nested OpenShift cluster.

Note:

Provisioning of nested OpenShift Clusters is supported only as a Beta feature. Ensure that you have an operational Contrail-OpenStack cluster based on Contrail Release 5.0 before provisioning a nested OpenShift cluster.

Configure network connectivity to Contrail configuration and data plane functions

A nested OpenShift cluster is managed by the same Contrail control processes that manage the underlying OpenStack cluster, so the nested cluster needs IP reachability to those processes. Because the OpenShift cluster is an overlay on the OpenStack cluster, you can use the Contrail link local service feature, or link local services combined with the fabric Source Network Address Translation (SNAT) feature, to provide IP reachability between the overlay OpenShift cluster and the underlying OpenStack cluster.

Use one of the following options to create link local services.

  • Fabric SNAT with link local service

    To provide IP reachability to and from the Kubernetes cluster using fabric SNAT with a link local service, perform the following steps.

    1. Enable fabric SNAT on the virtual network of the VMs.

      The fabric SNAT feature must be enabled on the virtual network of the virtual machines on which the Kubernetes master and minions are running.

    2. Using the Contrail GUI, create one link local service that allows the Container Network Interface (CNI) to communicate with its vRouter.

    The following link local service is required:

    Contrail Process      Service IP                        Service Port    Fabric IP    Fabric Port
    vRouter               Service_IP for the active node    9091            127.0.0.1    9091

    Note:

    The fabric IP address is 127.0.0.1 because the CNI must communicate with the vRouter on its own underlay node.

    For example, the following link local service must be created:

    Link Local Service Name    Service IP    Service Port    Fabric IP    Fabric Port
    K8s-cni-to-agent           10.10.10.5    9091            127.0.0.1    9091

    Note:

    Here, 10.10.10.5 is a service IP address that you choose; it can be any unused IP address in the cluster. This IP address is used primarily to identify link local traffic and has no other significance.

  • Link local only

    To configure a link local service, you need a service IP address and a fabric IP address. The fabric IP address is the IP address of the node on which the Contrail processes are running. The data plane uses the service IP address, along with the port number, to identify the fabric IP address. The service IP address must be a unique, unused IP address in the entire OpenStack cluster, and you must identify one service IP address for each node of the OpenStack cluster.

    The following link local services are required:

    Contrail Process      Service IP                        Service Port    Fabric IP                      Fabric Port
    Contrail Config       Service_IP for the active node    8082            Node_IP for the active node    8082
    Contrail Analytics    Service_IP for the active node    8086            Node_IP for the active node    8086
    Contrail Msg Queue    Service_IP for the active node    5673            Node_IP for the active node    5673
    Contrail VNC DB       Service_IP for the active node    9161            Node_IP for the active node    9161
    Keystone              Service_IP for the active node    35357           Node_IP for the active node    35357
    vRouter               Service_IP for the active node    9091            127.0.0.1                      9091

For example, consider the following hypothetical OpenStack cluster:

Contrail Config : 192.168.1.100
Contrail Analytics : 192.168.1.100, 192.168.1.101
Contrail Msg Queue : 192.168.1.100
Contrail VNC DB : 192.168.1.100, 192.168.1.101, 192.168.1.102
Keystone: 192.168.1.200
Vrouter: 192.168.1.300, 192.168.1.400, 192.168.1.500

This cluster is made of seven nodes. You must allocate seven unused IP addresses for these nodes:

192.168.1.100  --> 10.10.10.1
192.168.1.101  --> 10.10.10.2
192.168.1.102  --> 10.10.10.3
192.168.1.200  --> 10.10.10.4
192.168.1.300  --> 10.10.10.5
192.168.1.400  --> 10.10.10.6
192.168.1.500  --> 10.10.10.7

The following link local services must be created:

Link Local Service Name    Service IP    Service Port    Fabric IP        Fabric Port
Contrail Config            10.10.10.1    8082            192.168.1.100    8082
Contrail Analytics         10.10.10.1    8086            192.168.1.100    8086
Contrail Analytics 2       10.10.10.2    8086            192.168.1.101    8086
Contrail Msg Queue         10.10.10.1    5673            192.168.1.100    5673
Contrail VNC DB 1          10.10.10.1    9161            192.168.1.100    9161
Contrail VNC DB 2          10.10.10.2    9161            192.168.1.101    9161
Contrail VNC DB 3          10.10.10.3    9161            192.168.1.102    9161
Keystone                   10.10.10.4    35357           192.168.1.200    35357
VRouter-192.168.1.300      10.10.10.5    9091            127.0.0.1        9091
VRouter-192.168.1.400      10.10.10.6    9091            127.0.0.1        9091
VRouter-192.168.1.500      10.10.10.7    9091            127.0.0.1        9091
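
After the link local services are in place, you can verify them from a virtual machine in the overlay network by connecting to a service IP address and port from the table. For example, querying the Contrail configuration API through its link local address (using the IP addresses from the hypothetical cluster above; the prompt label is illustrative) should return a response from the corresponding fabric node:

(overlay-vm)# curl http://10.10.10.1:8082/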

Installing a Nested OpenShift Cluster Using Ansible Deployer

Perform the steps in Installing a Standalone OpenShift Cluster Using Ansible Deployer to continue installing and provisioning the OpenShift cluster.

Sample ose-install File

Add the following information to the sample ose-install file.

#Nested mode vars
nested_mode_contrail=true
auth_mode=keystone
keystone_auth_host=192.168.24.12
keystone_auth_admin_tenant=admin
keystone_auth_admin_user=admin
keystone_auth_admin_password=MAYffWrX7ZpPrV2AMAa9zAUvG
keystone_auth_admin_port=35357
keystone_auth_url_version=/v3
#k8s_nested_vrouter_vip is a service IP for the running node which we configured above
k8s_nested_vrouter_vip=10.10.10.5
#k8s_vip is kubernetes api server ip
k8s_vip=192.168.1.3
#cluster_network is the one which vm network belongs to
cluster_network="{'domain': 'default-domain', 'project': 'admin', 'name': 'net1'}"

For more information, see https://github.com/Juniper/contrail-kubernetes-docs/tree/master/install/openshift/3.9.
