Deploying Contrail with Red Hat OpenStack Platform Director 13

16-Oct-23

This document explains how to integrate a Contrail 5.0.1 installation with Red Hat OpenStack Platform Director 13.

Overview

Red Hat OpenStack Platform provides an installer named Director (RHOSPd). The Director installer is based on the OpenStack project TripleO (OOO, OpenStack on OpenStack). TripleO is an open source project that uses features of OpenStack to deploy a fully functional, tenant-facing OpenStack environment.

TripleO can be used to deploy an RDO-based OpenStack environment integrated with Tungsten Fabric. Red Hat OpenStack Platform Director (RHOSPd) can be used to deploy a RHOSP-based OpenStack environment integrated with Contrail.

OSPd Features

OSPd uses the concepts of undercloud and overcloud. OSPd sets up an undercloud, an operator-facing deployment cloud that contains the OpenStack components needed to deploy and manage an overcloud, a tenant-facing cloud that hosts user workloads.

The overcloud is the deployed solution that can represent a cloud for any purpose, such as production, staging, or test. The operator can choose to deploy any of the available overcloud roles, such as controller and compute, to their environment.

OSPd leverages existing core components of OpenStack including Nova, Ironic, Neutron, Heat, Glance, and Ceilometer to deploy OpenStack on bare metal hardware.

  • Nova and Ironic are used in the undercloud to manage the bare metal instances that comprise the infrastructure for the overcloud.

  • Neutron is used to provide a networking environment in which to deploy the overcloud.

  • Glance stores machine images.

  • Ceilometer collects metrics about the overcloud.

For more information about OSPd architecture, see the OSPd documentation.

Composable Roles

OSPd enables composable roles. Each role is a group of services that are defined in Heat templates. Composable roles give the operator the flexibility to add and modify roles as needed.

The following are the Contrail roles used for integrating Contrail into the overcloud environment:

  • Contrail Controller

  • Contrail Analytics

  • Contrail Analytics Database

  • Contrail-TSN

  • Contrail-DPDK

Figure 1 shows the relationship and components of an undercloud and overcloud architecture for Contrail.

Figure 1: undercloud and overcloud with Roles

Preparing the Environment for Deployment

The overcloud roles can be deployed to bare metal servers or to virtual machines (VMs). The compute nodes must be deployed to bare metal systems.

Ensure your environment is prepared for the Red Hat deployment. Refer to Red Hat documentation.

Preparing for the Contrail Roles

Ensure the following requirements are met for the Contrail nodes per role.

  • Non-high availability: A minimum of 4 overcloud nodes are needed for control plane roles for a non-high availability deployment:

    • 1x contrail-config (includes Contrail control)

    • 1x contrail-analytics

    • 1x contrail-analytics-database

    • 1x OpenStack controller

  • High availability: A minimum of 12 overcloud nodes are needed for control plane roles for a high availability deployment:

    • 3x contrail-config (includes Contrail control)

    • 3x contrail-analytics

    • 3x contrail-analytics-database

    • 3x OpenStack controller

    • If the control plane roles will be deployed to VMs, use 3 separate physical servers and deploy one role of each kind to each physical server.

RHOSP Director expects the nodes to be provided by the administrator. For example, if you are deploying to VMs, the administrator must create the VMs before starting the deployment.

Preparing for the Underlay Network

Refer to Red Hat documentation for planning and implementing the underlay networking, including the kinds of networks used and the purpose of each.

At a high level, every overcloud node must support IPMI.

Preparing for the Provisioning Network

Ensure the following requirements are met for the provisioning network.

  • One NIC from every machine must be in the same broadcast domain of the provisioning network, and it should be the same NIC on each of the overcloud machines. For example, if you use the second NIC on the first overcloud machine, you should use the second NIC on each additional overcloud machine.

    During installation, these NICs will be referenced by a single name across all overcloud machines.

  • The provisioning network NIC should not be the same NIC that you are using for remote connectivity to the undercloud machine. During the undercloud installation, an Open vSwitch bridge is created for Neutron, and the provisioning NIC is bridged to it. Consequently, connectivity would be lost if the provisioning NIC were also used for remote connectivity to the undercloud machine.

  • The provisioning NIC on the overcloud nodes must be untagged.

  • You must have the MAC address of the NIC that will PXE boot on the provisioning network, as well as the IPMI information for the machine. The IPMI information includes such things as the IP address of the IPMI NIC and the IPMI username and password.

  • All of the networks must be available to all of the Contrail roles and computes.

Network Isolation

OSPd enables configuration of isolated overcloud networks. Using this approach, it is possible to host traffic in isolated networks for specific types of network traffic, such as tenants, storage, API, and the like. This enables assigning network traffic to specific network interfaces or bonds.

When isolated networks are configured, the OpenStack services are configured to use the isolated networks. If no isolated networks are configured, all services run on the provisioning network.

The following networks are typically used when using network isolation topology:

  • Provisioning: for the undercloud control plane

  • Internal API: for OpenStack internal APIs

  • Tenant

  • Storage

  • Storage Management

  • External

    • Floating IP: can either be merged with External or be a separate network.

  • Management

Supported Combinations

The following combinations of Operating System/OpenStack/Deployer/Contrail are supported:

Table 1: Compatibility Matrix

Operating System    OpenStack             Deployer                 Contrail
RHEL 7.5            OSP13                 OSPd13                   Contrail 5.0.1
CentOS 7.5          RDO queens/stable     tripleo queens/stable    Tungsten Fabric latest

Creating Infrastructure

There are many ways to create the infrastructure that provides the control plane elements. The following example hosts all control plane functions as virtual machines on KVM hosts.

Table 2: Control Plane Functions

KVM Host    Virtual Machines
KVM1        undercloud
KVM2        OpenStack Controller 1, Contrail Controller 1
KVM3        OpenStack Controller 2, Contrail Controller 2
KVM4        OpenStack Controller 3, Contrail Controller 3

Sample Topology

Layer 1: Physical Layer

Layer 2: Logical Layer

undercloud Configuration

Physical Switch

Use the following information to create ports and trunked VLANs.

Table 3: Physical Switch

Port    Trunked VLAN                    Native VLAN
ge0     -                               -
ge1     700, 720                        -
ge2     700, 710, 720, 730, 740, 750    -
ge3     -                               -
ge4     710, 730                        700
ge5     -                               -

undercloud and overcloud KVM Host Configuration

The undercloud and overcloud KVM hosts need virtual switches and virtual machine definitions configured. You can use any KVM host operating system version that supports KVM and OVS. The following example shows a RHEL/CentOS based system. If you are using RHEL, the system must be subscribed.

  • Install Basic Packages

    yum install -y libguestfs \
      libguestfs-tools \
      openvswitch \
      virt-install \
      kvm libvirt \
      libvirt-python \
      python-virtualbmc \
      python-virtinst

  • Start libvirtd and ovs

    systemctl start libvirtd
    systemctl start openvswitch

  • Configure vSwitch

    Table 4: Configure vSwitch

    Bridge    Trunked VLAN               Native VLAN
    br0       710, 720, 730, 740, 750    700
    br1       -                          -

    Create bridges

    ovs-vsctl add-br br0
    ovs-vsctl add-br br1
    ovs-vsctl add-port br0 NIC1
    ovs-vsctl add-port br1 NIC2
    cat << EOF > br0.xml
    <network>
      <name>br0</name>
      <forward mode='bridge'/>
      <bridge name='br0'/>
      <virtualport type='openvswitch'/>
      <portgroup name='overcloud'>
        <vlan trunk='yes'>
          <tag id='700' nativeMode='untagged'/>
          <tag id='710'/>
          <tag id='720'/>
          <tag id='730'/>
          <tag id='740'/>
          <tag id='750'/>
        </vlan>
      </portgroup>
    </network>
    EOF
    cat << EOF > br1.xml
    <network>
      <name>br1</name>
      <forward mode='bridge'/>
      <bridge name='br1'/>
      <virtualport type='openvswitch'/>
    </network>
    EOF
    virsh net-define br0.xml
    virsh net-start br0
    virsh net-autostart br0
    virsh net-define br1.xml
    virsh net-start br1
    virsh net-autostart br1
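
    Before defining VMs, you can optionally sanity-check the result. A minimal verification sketch using standard OVS and libvirt commands (output varies by host):

    # Confirm both bridges exist and NIC1/NIC2 are attached as ports
    ovs-vsctl show
    # Confirm the libvirt networks br0 and br1 are active and set to autostart
    virsh net-list --all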

  • Create overcloud VM Definitions on the overcloud KVM Hosts (KVM2-KVM4)

    Note:

    The overcloud VM definitions must be created on each overcloud KVM host.

    Note:

    Use the following variable to define the number and kind of roles per overcloud KVM host:

    ROLES=compute:2,contrail-controller:1,control:1

    The following example defines:

    2x compute nodes
    1x contrail controller node
    1x openstack controller node

    num=0
    ipmi_user=<user>
    ipmi_password=<password>
    libvirt_path=/var/lib/libvirt/images
    port_group=overcloud
    prov_switch=br0
    /bin/rm ironic_list
    IFS=',' read -ra role_list <<< "${ROLES}"
    for role in ${role_list[@]}; do
      role_name=`echo $role|cut -d ":" -f 1`
      role_count=`echo $role|cut -d ":" -f 2`
      for count in `seq 1 ${role_count}`; do
        echo $role_name $count
        qemu-img create -f qcow2 ${libvirt_path}/${role_name}_${count}.qcow2 99G
        virsh define /dev/stdin <<EOF
    $(virt-install --name ${role_name}_${count} \
      --disk ${libvirt_path}/${role_name}_${count}.qcow2 \
      --vcpus=4 \
      --ram=16348 \
      --network network=br0,model=virtio,portgroup=${port_group} \
      --network network=br1,model=virtio \
      --virt-type kvm \
      --cpu host \
      --import \
      --os-variant rhel7 \
      --serial pty \
      --console pty,target_type=virtio \
      --graphics vnc \
      --print-xml)
    EOF
        vbmc add ${role_name}_${count} --port 1623${num} --username ${ipmi_user} --password ${ipmi_password}
        vbmc start ${role_name}_${count}
        prov_mac=`virsh domiflist ${role_name}_${count}|grep ${prov_switch}|awk '{print $5}'`
        vm_name=${role_name}-${count}-`hostname -s`
        kvm_ip=`ip route get 1 |grep src |awk '{print $7}'`
        echo ${prov_mac} ${vm_name} ${kvm_ip} ${role_name} 1623${num} >> ironic_list
        num=$(expr $num + 1)
      done
    done

    CAUTION:

    One ironic_list file is created per KVM host. You must combine the ironic_list files from all the KVM hosts on the undercloud; a sketch follows the sample output below.

    The following output shows the combined list from all three overcloud KVM hosts:

    52:54:00:e7:ca:9a compute-1-5b3s31 10.87.64.32 compute 16230
    52:54:00:30:6c:3f compute-2-5b3s31 10.87.64.32 compute 16231
    52:54:00:9a:0c:d5 contrail-controller-1-5b3s31 10.87.64.32 contrail-controller 16232
    52:54:00:cc:93:d4 control-1-5b3s31 10.87.64.32 control 16233
    52:54:00:28:10:d4 compute-1-5b3s30 10.87.64.31 compute 16230
    52:54:00:7f:36:e7 compute-2-5b3s30 10.87.64.31 compute 16231
    52:54:00:32:e5:3e contrail-controller-1-5b3s30 10.87.64.31 contrail-controller 16232
    52:54:00:d4:31:aa control-1-5b3s30 10.87.64.31 control 16233
    52:54:00:d1:d2:ab compute-1-5b3s32 10.87.64.33 compute 16230
    52:54:00:ad:a7:cc compute-2-5b3s32 10.87.64.33 compute 16231
    52:54:00:55:56:50 contrail-controller-1-5b3s32 10.87.64.33 contrail-controller 16232
    52:54:00:91:51:35 control-1-5b3s32 10.87.64.33 control 16233
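
    A minimal sketch for collecting the per-host files onto the undercloud, assuming SSH access from the undercloud to each KVM host and that ironic_list sits in root's home directory (addresses are from this example topology; adjust to your environment):

    # Append each KVM host's ironic_list to a combined file on the undercloud
    for kvm in 10.87.64.31 10.87.64.32 10.87.64.33; do
      ssh root@${kvm} cat ironic_list >> ~/ironic_list
    done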

  • Create undercloud VM Definitions on the undercloud KVM host (KVM1)

    Note:

    The undercloud VM definition must be created only on the undercloud KVM host.

    1. Create images directory

      mkdir ~/images
      cd ~/images

    2. Retrieve the image

      Note:

      The image must be retrieved based on the operating system:

      • CentOS

        curl https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1802.qcow2.xz \
          -o CentOS-7-x86_64-GenericCloud-1802.qcow2.xz
        xz -d CentOS-7-x86_64-GenericCloud-1802.qcow2.xz
        cloud_image=~/images/CentOS-7-x86_64-GenericCloud-1802.qcow2

      • RHEL

        Download rhel-server-7.5-update-1-x86_64-kvm.qcow2 from the Red Hat portal to ~/images, then set:

        cloud_image=~/images/rhel-server-7.5-update-1-x86_64-kvm.qcow2

    3. Customize the undercloud image

      undercloud_name=queensa
      undercloud_suffix=local
      root_password=<password>
      stack_password=<password>
      export LIBGUESTFS_BACKEND=direct
      qemu-img create -f qcow2 /var/lib/libvirt/images/${undercloud_name}.qcow2 100G
      virt-resize --expand /dev/sda1 ${cloud_image} /var/lib/libvirt/images/${undercloud_name}.qcow2
      virt-customize -a /var/lib/libvirt/images/${undercloud_name}.qcow2 \
        --run-command 'xfs_growfs /' \
        --root-password password:${root_password} \
        --hostname ${undercloud_name}.${undercloud_suffix} \
        --run-command 'useradd stack' \
        --password stack:password:${stack_password} \
        --run-command 'echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack' \
        --chmod 0440:/etc/sudoers.d/stack \
        --run-command 'sed -i "s/PasswordAuthentication no/PasswordAuthentication yes/g" /etc/ssh/sshd_config' \
        --run-command 'systemctl enable sshd' \
        --run-command 'yum remove -y cloud-init' \
        --selinux-relabel

    4. Define the undercloud virsh template

      vcpus=8
      vram=32000
      virt-install --name ${undercloud_name} \
        --disk /var/lib/libvirt/images/${undercloud_name}.qcow2 \
        --vcpus=${vcpus} \
        --ram=${vram} \
        --network network=default,model=virtio \
        --network network=br0,model=virtio,portgroup=overcloud \
        --virt-type kvm \
        --import \
        --os-variant rhel7 \
        --graphics vnc \
        --serial pty \
        --noautoconsole \
        --console pty,target_type=virtio

    5. Start the undercloud VM

      virsh start ${undercloud_name}

    6. Retrieve the undercloud IP

      It might take several seconds before the IP is available.

      undercloud_ip=`virsh domifaddr ${undercloud_name} |grep ipv4 |awk '{print $4}' |awk -F"/" '{print $1}'`
      ssh-copy-id ${undercloud_ip}
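
      If you prefer to script the wait, a minimal sketch that polls until the lease appears (assuming the first NIC is on the default libvirt network, which assigns the address via DHCP):

      # Poll virsh every few seconds until the undercloud VM reports an IPv4 address
      undercloud_ip=""
      while [ -z "${undercloud_ip}" ]; do
        sleep 5
        undercloud_ip=`virsh domifaddr ${undercloud_name} |grep ipv4 |awk '{print $4}' |awk -F"/" '{print $1}'`
      done
      ssh-copy-id ${undercloud_ip}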

undercloud Configuration

  1. Log in to the undercloud VM from the undercloud KVM host

    ssh ${undercloud_ip}

  2. Configure Hostname

    undercloud_name=`hostname -s`
    undercloud_suffix=`hostname -d`
    hostnamectl set-hostname ${undercloud_name}.${undercloud_suffix}
    hostnamectl set-hostname --transient ${undercloud_name}.${undercloud_suffix}

    Note:

    Make sure to add the undercloud IP to the hosts file located at /etc/hosts.

    Assuming the management NIC is eth0, the commands are as follows:

    undercloud_ip=`ip addr sh dev eth0 |grep "inet " |awk '{print $2}' |awk -F"/" '{print $1}'`
    echo ${undercloud_ip} ${undercloud_name}.${undercloud_suffix} ${undercloud_name} >> /etc/hosts

  3. Setup Repositories

    Note:

    The repositories must be set up based on the operating system:

    • CentOS

      tripleo_repos=`python -c 'import requests;r = requests.get("https://trunk.rdoproject.org/centos7-queens/current"); print r.text ' |grep python2-tripleo-repos|awk -F"href=\"" '{print $2}'|awk -F"\"" '{print $1}'`
      yum install -y https://trunk.rdoproject.org/centos7-queens/current/${tripleo_repos}
      tripleo-repos -b queens current

    • RHEL

      # Register with Satellite (can be done with CDN as well)
      satellite_fqdn=device.example.net
      act_key=xxx
      org=example
      yum localinstall -y http://${satellite_fqdn}/pub/katello-ca-consumer-latest.noarch.rpm
      subscription-manager register --activationkey=${act_key} --org=${org}

  4. Install Tripleo Client

    yum install -y python-tripleoclient tmux

  5. Copy undercloud.conf

    su - stack
    cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf

undercloud Installation

Run the following commands to install the undercloud:

openstack undercloud install
source stackrc
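
As an optional sanity check (not part of the documented procedure), confirm the undercloud API responds and the provisioning network exists once stackrc is sourced:

# List the registered undercloud services and the ctlplane network
openstack service list
openstack network list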

undercloud Post Configuration

Complete the following configuration tasks after the undercloud installation:

  • Configure forwarding:

    sudo iptables -A FORWARD -i br-ctlplane -o eth0 -j ACCEPT
    sudo iptables -A FORWARD -i eth0 -o br-ctlplane -m state --state RELATED,ESTABLISHED -j ACCEPT
    sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

  • Add external API interface:

    sudo ip link add name vlan720 link br-ctlplane type vlan id 720
    sudo ip addr add 10.2.0.254/24 dev vlan720
    sudo ip link set dev vlan720 up

  • Add stack user to the docker group:

    newgrp docker
    exit
    su - stack
    source stackrc

overcloud Configuration

Configuration

  • Configure nameserver for overcloud nodes

    undercloud_nameserver=8.8.8.8
    openstack subnet set `openstack subnet show ctlplane-subnet -c id -f value` --dns-nameserver ${undercloud_nameserver}
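
    As an optional check, confirm the nameserver was applied to the subnet:

    openstack subnet show ctlplane-subnet -c dns_nameservers -f value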

  • overcloud images

    1. Create image directory

      mkdir images
      cd images

    2. Get overcloud images

      • TripleO

        curl -O https://images.rdoproject.org/queens/rdo_trunk/current-tripleo-rdo/ironic-python-agent.tar
        curl -O https://images.rdoproject.org/queens/rdo_trunk/current-tripleo-rdo/overcloud-full.tar
        tar xvf ironic-python-agent.tar
        tar xvf overcloud-full.tar

      • OSP13

        sudo yum install -y rhosp-director-images rhosp-director-images-ipa
        for i in /usr/share/rhosp-director-images/overcloud-full-latest-13.0.tar \
                 /usr/share/rhosp-director-images/ironic-python-agent-latest-13.0.tar ; do
          tar -xvf $i
        done

    3. Upload overcloud images

      cd
      openstack overcloud image upload --image-path /home/stack/images/
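
      As an optional check, confirm the overcloud and IPA images are now in Glance:

      openstack image list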

  • Prepare Ironic

    OpenStack bare metal provisioning, also known as Ironic, is an integrated OpenStack program that provisions bare metal machines instead of virtual machines. It was forked from the Nova bare metal driver and is best thought of as a bare metal hypervisor API and a set of plugins that interact with bare metal hypervisors.

    Note:

    Make sure to combine the ironic_list files from the three overcloud KVM hosts.

    1. Add the overcloud VMs to Ironic

      ipmi_password=<password>
      ipmi_user=<user>
      while IFS= read -r line; do
        mac=`echo $line|awk '{print $1}'`
        name=`echo $line|awk '{print $2}'`
        kvm_ip=`echo $line|awk '{print $3}'`
        profile=`echo $line|awk '{print $4}'`
        ipmi_port=`echo $line|awk '{print $5}'`
        uuid=`openstack baremetal node create --driver ipmi \
                                              --property cpus=4 \
                                              --property memory_mb=16348 \
                                              --property local_gb=100 \
                                              --property cpu_arch=x86_64 \
                                              --driver-info ipmi_username=${ipmi_user}  \
                                              --driver-info ipmi_address=${kvm_ip} \
                                              --driver-info ipmi_password=${ipmi_password} \
                                              --driver-info ipmi_port=${ipmi_port} \
                                              --name=${name} \
                                              --property capabilities=profile:${profile},boot_option:local \
                                              -c uuid -f value`
        openstack baremetal port create --node ${uuid} ${mac}
      done < <(cat ironic_list)
      
      DEPLOY_KERNEL=$(openstack image show bm-deploy-kernel -f value -c id)
      DEPLOY_RAMDISK=$(openstack image show bm-deploy-ramdisk -f value -c id)
      
      for i in `openstack baremetal node list -c UUID -f value`; do
        openstack baremetal node set $i --driver-info deploy_kernel=$DEPLOY_KERNEL --driver-info deploy_ramdisk=$DEPLOY_RAMDISK
      done
      
      for i in `openstack baremetal node list -c UUID -f value`; do
        openstack baremetal node show $i -c properties -f value
      done
    2. Introspect overcloud node

      for node in $(openstack baremetal node list -c UUID -f value) ; do
        openstack baremetal node manage $node
      done
      openstack overcloud node introspect --all-manageable --provide
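
      Introspection can take several minutes per node. To optionally watch its progress (the subcommand is provided by python-ironic-inspector-client, installed with the TripleO client):

      # Shows one row per node with started/finished timestamps and error status
      openstack baremetal introspection list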
    3. Add Baremetal Server (BMS) to Ironic

      • Automated profiling

        Evaluate the attributes of the physical server. The server will be automatically profiled based on the rules.

        The following example shows how to create a rule that matches a system manufacturer of “Supermicro” and memory greater than or equal to 128 GB:

        cat << EOF > ~/rule_compute.json
        [
         {
             "description": "set physical compute",
             "conditions": [
                 {"op": "eq", "field": "data://auto_discovered", "value": true},
                 {"op": "eq", "field": "data://inventory.system_vendor.manufacturer",
                  "value": "Supermicro"},
                 {"op": "ge", "field": "memory_mb", "value": 128000}
             ],
             "actions": [
                 {"action": "set-attribute", "path": "driver_info/ipmi_username",
                  "value": "<user>"},
                 {"action": "set-attribute", "path": "driver_info/ipmi_password",
                  "value": "<password>"},
                 {"action": "set-capability", "name": "profile", "value": "compute"},
                 {"action": "set-attribute", "path": "driver_info/ipmi_address","value": "{data[inventory][bmc_address]}"}
             ]
         }
        ]
        EOF

        You can import the rule with:

        openstack baremetal introspection rule import ~/rule_compute.json
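
        To optionally confirm the rule was stored:

        openstack baremetal introspection rule list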
      • Scanning of BMC ranges

        Scan the BMC IP range and automatically add new servers that match the above rule:

        ipmi_range=10.87.122.25/32
        ipmi_password=<password>
        ipmi_user=<user>
        openstack overcloud node discover --range ${ipmi_range} \
          --credentials ${ipmi_user}:${ipmi_password} \
          --introspect --provide
  • Create Flavor

    for i in compute-dpdk \
    compute-sriov \
    contrail-controller \
    contrail-analytics \
    contrail-database \
    contrail-analytics-database; do
      openstack flavor create $i --ram 4096 --vcpus 1 --disk 40
      openstack flavor set --property "capabilities:boot_option"="local" \
                           --property "capabilities:profile"="${i}" ${i}
    done
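
    As an optional check, confirm the profile capability landed on a flavor, for example:

    openstack flavor show contrail-controller -c properties -f value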
  • Create TripleO-Heat-Template Copy

    cp -r /usr/share/openstack-tripleo-heat-templates/ tripleo-heat-templates
    git clone https://github.com/juniper/contrail-tripleo-heat-templates -b stable/queens
    cp -r contrail-tripleo-heat-templates/* tripleo-heat-templates/
  • Create and Upload Containers

    • OpenStack Containers

      1. Create OpenStack container file

        Note:

        The container must be created based on the OpenStack program:

        • TripleO

          openstack overcloud container image prepare \
            --namespace docker.io/tripleoqueens \
            --tag current-tripleo \
            --tag-from-label rdo_version \
            --output-env-file=~/overcloud_images.yaml
          
          tag=`grep "docker.io/tripleoqueens" ~/overcloud_images.yaml |tail -1 |awk -F":" '{print $3}'`
          
          openstack overcloud container image prepare \
            --namespace docker.io/tripleoqueens \
            --tag ${tag} \
            --push-destination 192.168.24.1:8787 \
            --output-env-file=~/overcloud_images.yaml \
            --output-images-file=~/local_registry_images.yaml
        • OSP13

          openstack overcloud container image prepare \
            --push-destination=192.168.24.1:8787 \
            --tag-from-label {version}-{release} \
            --output-images-file ~/local_registry_images.yaml \
            --namespace=registry.access.redhat.com/rhosp13 \
            --prefix=openstack- \
            --output-env-file ~/overcloud_images.yaml
      2. Upload OpenStack Containers

        openstack overcloud container image upload --config-file ~/local_registry_images.yaml
    • Contrail Containers

      1. Create Contrail container file

        Note:

        This step is optional. The Contrail containers can be downloaded from external registries later.

        cd ~/tripleo-heat-templates/tools/contrail
        ./import_contrail_container.sh -f container_outputfile -r registry -t tag [-i insecure] [-u username] [-p password] [-c certificate path]

        Here are a few examples of importing Contrail containers from different sources:

        • Import from password protected public registry:

          ./import_contrail_container.sh -f /tmp/contrail_container -r hub.juniper.net/contrail -u USERNAME -p PASSWORD -t 1234
          
        • Import from Dockerhub:

          ./import_contrail_container.sh -f /tmp/contrail_container -r docker.io/opencontrailnightly -t 1234
          
        • Import from private secure registry:

          ./import_contrail_container.sh -f /tmp/contrail_container -r device.example.net:5443 -c http://device.example.net/pub/device.example.net.crt -t 1234
          
        • Import from private insecure registry:

          ./import_contrail_container.sh -f /tmp/contrail_container -r 10.0.0.1:5443 -i 1 -t 1234
          
      2. Upload Contrail containers to undercloud registry

        openstack overcloud container image upload --config-file /tmp/contrail_container
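
        As an optional check, query the undercloud registry's catalog to confirm the Contrail images were pushed (assuming the default registry address 192.168.24.1:8787):

        curl -s http://192.168.24.1:8787/v2/_catalog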

Templates

Different YAML templates can be used to customize the overcloud.

  • Contrail Services customization

    vi ~/tripleo-heat-templates/environments/contrail-services.yaml
    parameter_defaults:
      ContrailSettings:
        VROUTER_GATEWAY: 10.0.0.1
        # KEY1: value1
        # KEY2: value2
  • Contrail registry settings

    vi ~/tripleo-heat-templates/environments/contrail-services.yaml

    Here are a few examples of default values for various registries:

    • Public Juniper registry

      parameter_defaults:
        ContrailRegistry: hub.juniper.net/contrail
        ContrailRegistryUser: <USER>
        ContrailRegistryPassword: <PASSWORD>
    • Insecure registry

      parameter_defaults:
        ContrailRegistryInsecure: true
        DockerInsecureRegistryAddress: 10.87.64.32:5000,192.168.24.1:8787
        ContrailRegistry: 10.87.64.32:5000
    • Private secure registry

      parameter_defaults:
        ContrailRegistryCertUrl: http://device.example.net/pub/device.example.net.crt
        ContrailRegistry: device.example.net:5443
  • Contrail Container image settings

    parameter_defaults:
      ContrailImageTag: queens-5.0-104-rhel-queens
  • Network customization

    To customize the network, define the different networks and configure the NIC layout of the overcloud nodes. TripleO supports a flexible way of customizing the network.

    The following network customization example uses these networks:

    Table 5: Network Customization

    Network         VLAN    overcloud Nodes
    provisioning    -       All
    internal_api    710     All
    external_api    720     OpenStack CTRL
    storage         740     OpenStack CTRL, Computes
    storage_mgmt    750     OpenStack CTRL
    tenant          -       Contrail CTRL, Computes

    • Network activation in roles_data

      The networks must be activated per role in the roles_data file:

      vi ~/tripleo-heat-templates/roles_data_contrail_aio.yaml
      • OpenStack Controller

        ###############################################################################
        # Role: Controller                                                            #
        ###############################################################################
        - name: Controller
          description: |
            Controller role that has all the controller services loaded and handles
            Database, Messaging and Network functions.
          CountDefault: 1
          tags:
            - primary
            - controller
          networks:
            - External
            - InternalApi
            - Storage
            - StorageMgmt
      • Compute Node

        ###############################################################################
        # Role: Compute                                                               #
        ###############################################################################
        - name: Compute
          description: |
            Basic Compute Node role
          CountDefault: 1
          networks:
            - InternalApi
            - Tenant
            - Storage
      • Contrail Controller

        ###############################################################################
        # Role: ContrailController                                                    #
        ###############################################################################
        - name: ContrailController
          description: |
            ContrailController role that has all the Contrail controller services loaded
            and handles config, control and webui functions
          CountDefault: 1
          tags:
            - primary
            - contrailcontroller
          networks:
            - InternalApi
            - Tenant
      • Compute DPDK

        ###############################################################################
        # Role: ContrailDpdk                                                          #
        ###############################################################################
        - name: ContrailDpdk
          description: |
            Contrail Dpdk Node role
          CountDefault: 0
          tags:
            - contraildpdk
          networks:
            - InternalApi
            - Tenant
            - Storage
      • Compute SRIOV

        ###############################################################################
        # Role: ContrailSriov
        ###############################################################################
        - name: ContrailSriov
          description: |
            Contrail Sriov Node role
          CountDefault: 0
          tags:
            - contrailsriov
          networks:
            - InternalApi
            - Tenant
            - Storage
      • Compute CSN

        ###############################################################################
        # Role: ContrailTsn
        ###############################################################################
        - name: ContrailTsn
          description: |
            Contrail Tsn Node role
          CountDefault: 0
          tags:
            - contrailtsn
          networks:
            - InternalApi
            - Tenant
            - Storage
    • Network parameter configuration

      cat ~/tripleo-heat-templates/environments/contrail/contrail-net.yaml
      resource_registry:
        OS::TripleO::Controller::Net::SoftwareConfig: ../../network/config/contrail/controller-nic-config.yaml
        OS::TripleO::ContrailController::Net::SoftwareConfig: ../../network/config/contrail/contrail-controller-nic-config.yaml
        OS::TripleO::ContrailControlOnly::Net::SoftwareConfig: ../../network/config/contrail/contrail-controller-nic-config.yaml
        OS::TripleO::Compute::Net::SoftwareConfig: ../../network/config/contrail/compute-nic-config.yaml
        OS::TripleO::ContrailDpdk::Net::SoftwareConfig: ../../network/config/contrail/contrail-dpdk-nic-config.yaml
        OS::TripleO::ContrailSriov::Net::SoftwareConfig: ../../network/config/contrail/contrail-sriov-nic-config.yaml
        OS::TripleO::ContrailTsn::Net::SoftwareConfig: ../../network/config/contrail/contrail-tsn-nic-config.yaml
      
      parameter_defaults:
        # Customize all these values to match the local environment
        TenantNetCidr: 10.0.0.0/24
        InternalApiNetCidr: 10.1.0.0/24
        ExternalNetCidr: 10.2.0.0/24
        StorageNetCidr: 10.3.0.0/24
        StorageMgmtNetCidr: 10.4.0.0/24
        # CIDR subnet mask length for provisioning network
        ControlPlaneSubnetCidr: '24'
        # Allocation pools
        TenantAllocationPools: [{'start': '10.0.0.10', 'end': '10.0.0.200'}]
        InternalApiAllocationPools: [{'start': '10.1.0.10', 'end': '10.1.0.200'}]
        ExternalAllocationPools: [{'start': '10.2.0.10', 'end': '10.2.0.200'}]
        StorageAllocationPools: [{'start': '10.3.0.10', 'end': '10.3.0.200'}]
        StorageMgmtAllocationPools: [{'start': '10.4.0.10', 'end': '10.4.0.200'}]
        # Routes
        ControlPlaneDefaultRoute: 192.168.24.1
        InternalApiDefaultRoute: 10.1.0.1
        ExternalInterfaceDefaultRoute: 10.2.0.1
        # Vlans
        InternalApiNetworkVlanID: 710
        ExternalNetworkVlanID: 720
        StorageNetworkVlanID: 730
        StorageMgmtNetworkVlanID: 740
        TenantNetworkVlanID: 3211
        # Services
        EC2MetadataIp: 192.168.24.1  # Generally the IP of the undercloud
        DnsServers: ["172.x.x.x"]
        NtpServer: 10.0.0.1
    • Network interface configuration

      There are NIC configuration files per role.

      cd ~/tripleo-heat-templates/network/config/contrail
      • OpenStack Controller

        heat_template_version: queens
        
        description: >
          Software Config to drive os-net-config to configure multiple interfaces
          for the compute role. This is an example for a Nova compute node using
          Contrail vrouter and the vhost0 interface.
        parameters:
          ControlPlaneIp:
            default: ''
            description: IP address/subnet on the ctlplane network
            type: string
          ExternalIpSubnet:
            default: ''
            description: IP address/subnet on the external network
            type: string
          InternalApiIpSubnet:
            default: ''
            description: IP address/subnet on the internal_api network
            type: string
          InternalApiDefaultRoute: # Not used by default in this template
            default: '10.0.0.1'
            description: The default route of the internal api network.
            type: string
          StorageIpSubnet:
            default: ''
            description: IP address/subnet on the storage network
            type: string
          StorageMgmtIpSubnet:
            default: ''
            description: IP address/subnet on the storage_mgmt network
            type: string
          TenantIpSubnet:
            default: ''
            description: IP address/subnet on the tenant network
            type: string
          ManagementIpSubnet: # Only populated when including environments/network-management.yaml
            default: ''
            description: IP address/subnet on the management network
            type: string
          ExternalNetworkVlanID:
            default: 10
            description: Vlan ID for the external network traffic.
            type: number
          InternalApiNetworkVlanID:
            default: 20
            description: Vlan ID for the internal_api network traffic.
            type: number
          StorageNetworkVlanID:
            default: 30
            description: Vlan ID for the storage network traffic.
            type: number
          StorageMgmtNetworkVlanID:
            default: 40
            description: Vlan ID for the storage mgmt network traffic.
            type: number
          TenantNetworkVlanID:
            default: 50
            description: Vlan ID for the tenant network traffic.
            type: number
          ManagementNetworkVlanID:
            default: 60
            description: Vlan ID for the management network traffic.
            type: number
          ControlPlaneSubnetCidr: # Override this via parameter_defaults
            default: '24'
            description: The subnet CIDR of the control plane network.
            type: string
          ControlPlaneDefaultRoute: # Override this via parameter_defaults
            description: The default route of the control plane network.
            type: string
          ExternalInterfaceDefaultRoute: # Not used by default in this template
            default: '10.0.0.1'
            description: The default route of the external network.
            type: string
          ManagementInterfaceDefaultRoute: # Commented out by default in this template
            default: unset
            description: The default route of the management network.
            type: string
          DnsServers: # Override this via parameter_defaults
            default: []
            description: A list of DNS servers (2 max for some implementations) that will be added to resolv.conf.
            type: comma_delimited_list
          EC2MetadataIp: # Override this via parameter_defaults
            description: The IP address of the EC2 metadata server.
            type: string
        
        resources:
          OsNetConfigImpl:
            type: OS::Heat::SoftwareConfig
            properties:
              group: script
              config:
                str_replace:
                  template:
                    get_file: ../../scripts/run-os-net-config.sh
                  params:
                    $network_config:
                      network_config:
                      - type: interface
                        name: nic1
                        use_dhcp: false
                        dns_servers:
                          get_param: DnsServers
                        addresses:
                        - ip_netmask:
                            list_join:
                              - '/'
                              - - get_param: ControlPlaneIp
                                - get_param: ControlPlaneSubnetCidr
                        routes:
                        - ip_netmask: 169.x.x.x/32
                          next_hop:
                            get_param: EC2MetadataIp
                        - default: true
                          next_hop:
                            get_param: ControlPlaneDefaultRoute
                      - type: vlan
                        vlan_id:
                          get_param: InternalApiNetworkVlanID
                        device: nic1
                        addresses:
                        - ip_netmask:
                            get_param: InternalApiIpSubnet
                      - type: vlan
                        vlan_id:
                          get_param: ExternalNetworkVlanID
                        device: nic1
                        addresses:
                        - ip_netmask:
                            get_param: ExternalIpSubnet
                      - type: vlan
                        vlan_id:
                          get_param: StorageNetworkVlanID
                        device: nic1
                        addresses:
                        - ip_netmask:
                            get_param: StorageIpSubnet
                      - type: vlan
                        vlan_id:
                          get_param: StorageMgmtNetworkVlanID
                        device: nic1
                        addresses:
                        - ip_netmask:
                            get_param: StorageMgmtIpSubnet
        outputs:
          OS::stack_id:
            description: The OsNetConfigImpl resource.
            value:
              get_resource: OsNetConfigImpl
        
      • Contrail Controller

        heat_template_version: queens
        description: >
          Software Config to drive os-net-config to configure multiple interfaces
          for the compute role. This is an example for a Nova compute node using
          Contrail vrouter and the vhost0 interface.
        
        parameters:
          ControlPlaneIp:
            default: ''
            description: IP address/subnet on the ctlplane network
            type: string
          ExternalIpSubnet:
            default: ''
            description: IP address/subnet on the external network
            type: string
          InternalApiIpSubnet:
            default: ''
            description: IP address/subnet on the internal_api network
            type: string
          InternalApiDefaultRoute: # Not used by default in this template
            default: '10.0.0.1'
            description: The default route of the internal api network.
            type: string
          StorageIpSubnet:
            default: ''
            description: IP address/subnet on the storage network
            type: string
          StorageMgmtIpSubnet:
            default: ''
            description: IP address/subnet on the storage_mgmt network
            type: string
          TenantIpSubnet:
            default: ''
            description: IP address/subnet on the tenant network
            type: string
          ManagementIpSubnet: # Only populated when including environments/network-management.yaml
            default: ''
            description: IP address/subnet on the management network
            type: string
          ExternalNetworkVlanID:
            default: 10
            description: Vlan ID for the external network traffic.
            type: number
          InternalApiNetworkVlanID:
            default: 20
            description: Vlan ID for the internal_api network traffic.
            type: number
          StorageNetworkVlanID:
            default: 30
            description: Vlan ID for the storage network traffic.
            type: number
          StorageMgmtNetworkVlanID:
            default: 40
            description: Vlan ID for the storage mgmt network traffic.
            type: number
          TenantNetworkVlanID:
            default: 50
            description: Vlan ID for the tenant network traffic.
            type: number
          ManagementNetworkVlanID:
            default: 60
            description: Vlan ID for the management network traffic.
            type: number
          ControlPlaneSubnetCidr: # Override this via parameter_defaults
            default: '24'
            description: The subnet CIDR of the control plane network.
            type: string
          ControlPlaneDefaultRoute: # Override this via parameter_defaults
            description: The default route of the control plane network.
            type: string
          ExternalInterfaceDefaultRoute: # Not used by default in this template
            default: '10.0.0.1'
            description: The default route of the external network.
            type: string
          ManagementInterfaceDefaultRoute: # Commented out by default in this template
            default: unset
            description: The default route of the management network.
            type: string
          DnsServers: # Override this via parameter_defaults
            default: []
            description: A list of DNS servers (2 max for some implementations) that will be added to resolv.conf.
            type: comma_delimited_list
          EC2MetadataIp: # Override this via parameter_defaults
            description: The IP address of the EC2 metadata server.
            type: string
        resources:
          OsNetConfigImpl:
            type: OS::Heat::SoftwareConfig
            properties:
              group: script
              config:
                str_replace:
                  template:
                    get_file: ../../scripts/run-os-net-config.sh
                  params:
                    $network_config:
                      network_config:
                      - type: interface
                        name: nic1
                        use_dhcp: false
                        dns_servers:
                          get_param: DnsServers
                        addresses:
                        - ip_netmask:
                            list_join:
                              - '/'
                              - - get_param: ControlPlaneIp
                                - get_param: ControlPlaneSubnetCidr
                        routes:
                        - ip_netmask: 169.x.x.x/32
                          next_hop:
                            get_param: EC2MetadataIp
                        - default: true
                          next_hop:
                            get_param: ControlPlaneDefaultRoute
                      - type: vlan
                        vlan_id:
                          get_param: InternalApiNetworkVlanID
                        device: nic1
                        addresses:
                        - ip_netmask:
                            get_param: InternalApiIpSubnet
                      - type: interface
                        name: nic2
                        use_dhcp: false
                        addresses:
                        - ip_netmask:
                            get_param: TenantIpSubnet
        outputs:
          OS::stack_id:
            description: The OsNetConfigImpl resource.
            value:
              get_resource: OsNetConfigImpl
      • Compute Node

        heat_template_version: queens
        description: >
          Software Config to drive os-net-config to configure multiple interfaces
          for the compute role. This is an example for a Nova compute node using
          Contrail vrouter and the vhost0 interface.
        parameters:
          ControlPlaneIp:
            default: ''
            description: IP address/subnet on the ctlplane network
            type: string
          ExternalIpSubnet:
            default: ''
            description: IP address/subnet on the external network
            type: string
          InternalApiIpSubnet:
            default: ''
            description: IP address/subnet on the internal_api network
            type: string
          InternalApiDefaultRoute: # Not used by default in this template
            default: '10.0.0.1'
            description: The default route of the internal api network.
            type: string
          StorageIpSubnet:
            default: ''
            description: IP address/subnet on the storage network
            type: string
          StorageMgmtIpSubnet:
            default: ''
            description: IP address/subnet on the storage_mgmt network
            type: string
          TenantIpSubnet:
            default: ''
            description: IP address/subnet on the tenant network
            type: string
          ManagementIpSubnet: # Only populated when including environments/network-management.yaml
            default: ''
            description: IP address/subnet on the management network
            type: string
          ExternalNetworkVlanID:
            default: 10
            description: Vlan ID for the external network traffic.
            type: number
          InternalApiNetworkVlanID:
            default: 20
            description: Vlan ID for the internal_api network traffic.
            type: number
          StorageNetworkVlanID:
            default: 30
            description: Vlan ID for the storage network traffic.
            type: number
          StorageMgmtNetworkVlanID:
            default: 40
            description: Vlan ID for the storage mgmt network traffic.
            type: number
          TenantNetworkVlanID:
            default: 50
            description: Vlan ID for the tenant network traffic.
            type: number
          ManagementNetworkVlanID:
            default: 60
            description: Vlan ID for the management network traffic.
            type: number
          ControlPlaneSubnetCidr: # Override this via parameter_defaults
            default: '24'
            description: The subnet CIDR of the control plane network.
            type: string
          ControlPlaneDefaultRoute: # Override this via parameter_defaults
            description: The default route of the control plane network.
            type: string
          ExternalInterfaceDefaultRoute: # Not used by default in this template
            default: '10.0.0.1'
            description: The default route of the external network.
            type: string
          ManagementInterfaceDefaultRoute: # Commented out by default in this template
            default: unset
            description: The default route of the management network.
            type: string
          DnsServers: # Override this via parameter_defaults
            default: []
            description: A list of DNS servers (2 max for some implementations) that will be added to resolv.conf.
            type: comma_delimited_list
          EC2MetadataIp: # Override this via parameter_defaults
            description: The IP address of the EC2 metadata server.
            type: string
        resources:
          OsNetConfigImpl:
            type: OS::Heat::SoftwareConfig
            properties:
              group: script
              config:
                str_replace:
                  template:
                    get_file: ../../scripts/run-os-net-config.sh
                  params:
                    $network_config:
                      network_config:
                      - type: interface
                        name: nic1
                        use_dhcp: false
                        dns_servers:
                          get_param: DnsServers
                        addresses:
                        - ip_netmask:
                            list_join:
                              - '/'
                              - - get_param: ControlPlaneIp
                                - get_param: ControlPlaneSubnetCidr
                        routes:
                        - ip_netmask: 169.x.x.x/32
                          next_hop:
                            get_param: EC2MetadataIp
                        - default: true
                          next_hop:
                            get_param: ControlPlaneDefaultRoute
                      - type: vlan
                        vlan_id:
                          get_param: InternalApiNetworkVlanID
                        device: nic1
                        addresses:
                        - ip_netmask:
                            get_param: InternalApiIpSubnet
                      - type: vlan
                        vlan_id:
                          get_param: StorageNetworkVlanID
                        device: nic1
                        addresses:
                        - ip_netmask:
                            get_param: StorageIpSubnet
                      - type: contrail_vrouter
                        name: vhost0
                        use_dhcp: false
                        members:
                          -
                            type: interface
                            name: nic2
                            use_dhcp: false
                        addresses:
                        - ip_netmask:
                            get_param: TenantIpSubnet
        
        outputs:
          OS::stack_id:
            description: The OsNetConfigImpl resource.
            value:
              get_resource: OsNetConfigImpl
    • Advanced Network Configuration

      • Advanced vRouter Kernel Mode Configurations

        In addition to the standard NIC configuration, the vRouter kernel mode supports the following modes:

        • VLAN

        • Bond

        • Bond + VLAN

        NIC Template Configurations

        The snippets below show only the relevant section of the NIC configuration for each mode.

        • VLAN

          - type: vlan
            vlan_id:
              get_param: TenantNetworkVlanID
            device: nic2
          - type: contrail_vrouter
            name: vhost0
            use_dhcp: false
            members:
              -
                type: interface
                name:
                  str_replace:
                    template: vlanVLANID
                    params:
                      VLANID: {get_param: TenantNetworkVlanID}
                use_dhcp: false
            addresses:
            - ip_netmask:
                get_param: TenantIpSubnet
        • Bond

          - type: linux_bond
            name: bond0
            bonding_options: "mode=4 xmit_hash_policy=layer2+3"
            use_dhcp: false
            members:
             -
               type: interface
               name: nic2
             -
               type: interface
               name: nic3
          - type: contrail_vrouter
            name: vhost0
            use_dhcp: false
            members:
              -
                type: interface
                name: bond0
                use_dhcp: false
            addresses:
            - ip_netmask:
                get_param: TenantIpSubnet
        • Bond + VLAN

          - type: linux_bond
            name: bond0
            bonding_options: "mode=4 xmit_hash_policy=layer2+3"
            use_dhcp: false
            members:
             -
               type: interface
               name: nic2
             -
               type: interface
               name: nic3
          - type: vlan
            vlan_id:
              get_param: TenantNetworkVlanID
            device: bond0
          - type: contrail_vrouter
            name: vhost0
            use_dhcp: false
            members:
              -
                type: interface
                name:
                  str_replace:
                    template: vlanVLANID
                    params:
                      VLANID: {get_param: TenantNetworkVlanID}
                use_dhcp: false
            addresses:
            - ip_netmask:
                get_param: TenantIpSubnet
      • Advanced vRouter DPDK Mode Configurations

        In addition to the standard NIC configuration, the vRouter DPDK mode supports the following modes:

        • Standard

        • VLAN

        • Bond

        • Bond + VLAN

        Network Environment Configuration

        Set the number of hugepages:

        parameter_defaults:
          ContrailDpdkHugepages1GB: 10
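
        After the overcloud is deployed, you can optionally verify on a DPDK compute node that the 1 GB hugepages were allocated:

        # HugePages_Total should reflect the configured count
        grep -i hugepages /proc/meminfo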

        NIC Template Configurations

        • Standard

          - type: contrail_vrouter_dpdk
            name: vhost0
            use_dhcp: false
            driver: uio_pci_generic
            cpu_list: 0x01
            members:
              -
                type: interface
                name: nic2
                use_dhcp: false
            addresses:
            - ip_netmask:
                get_param: TenantIpSubnet
        • VLAN

          - type: contrail_vrouter_dpdk
            name: vhost0
            use_dhcp: false
            driver: uio_pci_generic
            cpu_list: 0x01
            vlan_id:
              get_param: TenantNetworkVlanID
            members:
              -
                type: interface
                name: nic2
                use_dhcp: false
            addresses:
            - ip_netmask:
                get_param: TenantIpSubnet
        • Bond

          - type: contrail_vrouter_dpdk
            name: vhost0
            use_dhcp: false
            driver: uio_pci_generic
            cpu_list: 0x01
            bond_mode: 4
            bond_policy: layer2+3
            members:
              -
                type: interface
                name: nic2
                use_dhcp: false
              -
                type: interface
                name: nic3
                use_dhcp: false
            addresses:
            - ip_netmask:
                get_param: TenantIpSubnet
        • Bond + VLAN

          - type: contrail_vrouter_dpdk
            name: vhost0
            use_dhcp: false
            driver: uio_pci_generic
            cpu_list: 0x01
            vlan_id:
              get_param: TenantNetworkVlanID
            bond_mode: 4
            bond_policy: layer2+3
            members:
              -
                type: interface
                name: nic2
                use_dhcp: false
              -
                type: interface
                name: nic3
                use_dhcp: false
            addresses:
            - ip_netmask:
                get_param: TenantIpSubnet
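
        The cpu_list value in the snippets above appears to follow DPDK's hexadecimal coremask convention, where 0x01 selects CPU core 0. Assuming that convention, a mask for other cores can be computed with plain shell arithmetic; for example, for cores 2 and 3:

        printf '0x%x\n' $(( (1 << 2) | (1 << 3) ))    # prints 0xc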
      • Advanced vRouter SRIOV Mode Configurations

        vRouter SRIOV can be used in the following combinations:

        • SRIOV + Kernel mode

          • Standard

          • VLAN

          • Bond

          • Bond + VLAN

        • SRIOV + DPDK mode

          • Standard

          • VLAN

          • Bond

          • Bond + VLAN

        Network environment configuration

        vi ~/tripleo-heat-templates/environments/contrail/contrail-services.yaml

        Set the number of hugepages for the chosen mode:

        • SRIOV + Kernel mode

          parameter_defaults:
            ContrailSriovHugepages1GB: 10
        • SRIOV + DPDK mode

          parameter_defaults:
            ContrailSriovMode: dpdk
            ContrailDpdkHugepages1GB: 10
            ContrailSriovHugepages1GB: 10

        SRIOV PF/VF settings

        NovaPCIPassthrough:
        - devname: "ens2f1"
          physical_network: "sriov1"
        ContrailSriovNumVFs: ["ens2f1:7"]
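
        ContrailSriovNumVFs controls how many virtual functions (VFs) are created on the listed physical function. On a deployed node, the result can be inspected through the standard Linux SR-IOV sysfs interface; a sketch, reusing the ens2f1 device from the example above:

        cat /sys/class/net/ens2f1/device/sriov_totalvfs   # maximum VFs supported by the NIC
        cat /sys/class/net/ens2f1/device/sriov_numvfs     # VFs currently configured (7 in this example)
        ip link show ens2f1                               # lists the VFs attached to the PF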

        NIC template configurations:

        The SRIOV NICs are not configured in the NIC templates. However, the vRouter NICs must still be configured.

        The following NIC template configurations apply to vRouter kernel mode. The snippets below show only the relevant section of the NIC configuration for each mode.

        • VLAN

          - type: vlan
            vlan_id:
              get_param: TenantNetworkVlanID
            device: nic2
          - type: contrail_vrouter
            name: vhost0
            use_dhcp: false
            members:
              -
                type: interface
                name:
                  str_replace:
                    template: vlanVLANID
                    params:
                      VLANID: {get_param: TenantNetworkVlanID}
                use_dhcp: false
            addresses:
            - ip_netmask:
                get_param: TenantIpSubnet
        • Bond

          - type: linux_bond
            name: bond0
            bonding_options: "mode=4 xmit_hash_policy=layer2+3"
            use_dhcp: false
            members:
             -
               type: interface
               name: nic2
             -
               type: interface
               name: nic3
          - type: contrail_vrouter
            name: vhost0
            use_dhcp: false
            members:
              -
                type: interface
                name: bond0
                use_dhcp: false
            addresses:
            - ip_netmask:
                get_param: TenantIpSubnet
        • Bond + VLAN

          - type: linux_bond
            name: bond0
            bonding_options: "mode=4 xmit_hash_policy=layer2+3"
            use_dhcp: false
            members:
             -
               type: interface
               name: nic2
             -
               type: interface
               name: nic3
          - type: vlan
            vlan_id:
              get_param: TenantNetworkVlanID
            device: bond0
          - type: contrail_vrouter
            name: vhost0
            use_dhcp: false
            members:
              -
                type: interface
                name:
                  str_replace:
                    template: vlanVLANID
                    params:
                      VLANID: {get_param: TenantNetworkVlanID}
                use_dhcp: false
            addresses:
            - ip_netmask:
                get_param: TenantIpSubnet

        The following NIC template configurations apply to vRouter DPDK mode:

        • Standard

          - type: contrail_vrouter_dpdk
            name: vhost0
            use_dhcp: false
            driver: uio_pci_generic
            cpu_list: 0x01
            members:
              -
                type: interface
                name: nic2
                use_dhcp: false
            addresses:
            - ip_netmask:
                get_param: TenantIpSubnet
        • VLAN

          - type: contrail_vrouter_dpdk
            name: vhost0
            use_dhcp: false
            driver: uio_pci_generic
            cpu_list: 0x01
            vlan_id:
              get_param: TenantNetworkVlanID
            members:
              -
                type: interface
                name: nic2
                use_dhcp: false
            addresses:
            - ip_netmask:
                get_param: TenantIpSubnet
        • Bond

          - type: contrail_vrouter_dpdk
            name: vhost0
            use_dhcp: false
            driver: uio_pci_generic
            cpu_list: 0x01
            bond_mode: 4
            bond_policy: layer2+3
            members:
              -
                type: interface
                name: nic2
                use_dhcp: false
              -
                type: interface
                name: nic3
                use_dhcp: false
            addresses:
            - ip_netmask:
                get_param: TenantIpSubnet
        • Bond + VLAN

          - type: contrail_vrouter_dpdk
            name: vhost0
            use_dhcp: false
            driver: uio_pci_generic
            cpu_list: 0x01
            vlan_id:
              get_param: TenantNetworkVlanID
            bond_mode: 4
            bond_policy: layer2+3
            members:
              -
                type: interface
                name: nic2
                use_dhcp: false
              -
                type: interface
                name: nic3
                use_dhcp: false
            addresses:
            - ip_netmask:
                get_param: TenantIpSubnet
  • Advanced Scenarios

    Remote Compute

    Remote Compute extends the data plane to remote locations (POPs) while keeping the control plane central. Each POP has its own set of Contrail control services, which run in the central location. The difficulty is to ensure that the compute nodes of a given POP connect only to the Control nodes assigned to that POP. The Control nodes must have predictable IP addresses, and the compute nodes have to know these IP addresses. To achieve this, the following methods are used:

    • Custom Roles

    • Static IP assignment

    • Precise Node placement

    • Per Node hieradata

    Each overcloud node has a unique DMI UUID. This UUID is known on the undercloud node as well as on the overcloud node, so it can be used for mapping node-specific information. For each POP, a Control role and a Compute role have to be created.
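
    To read the DMI UUID of a node, a standard dmidecode query can be run on the node itself:

    sudo dmidecode -s system-uuid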

    Overview

    Mapping Table

    Table 6: Mapping Table

    • Nova name: overcloud-contrailcontrolonly-0
      Ironic name: control-only-1-5b3s30
      Ironic UUID: 7d758dce-2784-45fd-be09-5a41eb53e764
      DMI UUID: 73F8D030-E896-4A95-A9F5-E1A4FEBE322D
      KVM host: 5b3s30
      IP address: 10.0.0.11
      POP: POP1

    • Nova name: overcloud-contrailcontrolonly-1
      Ironic name: control-only-2-5b3s30
      Ironic UUID: d26abdeb-d514-4a37-a7fb-2cd2511c351f
      DMI UUID: 14639A66-D62C-4408-82EE-FDDC4E509687
      KVM host: 5b3s30
      IP address: 10.0.0.14
      POP: POP2

    • Nova name: overcloud-contrailcontrolonly-2
      Ironic name: control-only-1-5b3s31
      Ironic UUID: 91dd9fa9-e8eb-4b51-8b5e-bbaffb6640e4
      DMI UUID: 28AB0B57-D612-431E-B177-1C578AE0FEA4
      KVM host: 5b3s31
      IP address: 10.0.0.12
      POP: POP1

    • Nova name: overcloud-contrailcontrolonly-3
      Ironic name: control-only-2-5b3s31
      Ironic UUID: 09fa57b8-580f-42ec-bf10-a19573521ed4
      DMI UUID: 09BEC8CB-77E9-42A6-AFF4-6D4880FD87D0
      KVM host: 5b3s31
      IP address: 10.0.0.15
      POP: POP2

    • Nova name: overcloud-contrailcontrolonly-4
      Ironic name: control-only-1-5b3s32
      Ironic UUID: f4766799-24c8-4e3b-af54-353f2b796ca4
      DMI UUID: 3993957A-ECBF-4520-9F49-0AF6EE1667A7
      KVM host: 5b3s32
      IP address: 10.0.0.13
      POP: POP1

    • Nova name: overcloud-contrailcontrolonly-5
      Ironic name: control-only-2-5b3s32
      Ironic UUID: 58a803ae-a785-470e-9789-139abbfa74fb
      DMI UUID: AF92F485-C30C-4D0A-BDC4-C6AE97D06A66
      KVM host: 5b3s32
      IP address: 10.0.0.16
      POP: POP2

    ControlOnly preparation

    Add ControlOnly overcloud VMs to the overcloud KVM hosts

    Note:

    This step has to be performed on the overcloud KVM hosts.

    Two ControlOnly overcloud VM definitions will be created on each of the overcloud KVM hosts.

    ROLES=control-only:2
    num=4
    ipmi_user=<user>
    ipmi_password=<password>
    libvirt_path=/var/lib/libvirt/images
    port_group=overcloud
    prov_switch=br0
    
    # Remove any list left over from a previous run
    /bin/rm -f ironic_list
    IFS=',' read -ra role_list <<< "${ROLES}"
    for role in ${role_list[@]}; do
      role_name=`echo $role|cut -d ":" -f 1`
      role_count=`echo $role|cut -d ":" -f 2`
      for count in `seq 1 ${role_count}`; do
        echo $role_name $count
        # Create the VM disk and define the VM from the XML generated by virt-install
        qemu-img create -f qcow2 ${libvirt_path}/${role_name}_${count}.qcow2 99G
        virsh define /dev/stdin <<EOF
     $(virt-install --name ${role_name}_${count} \
    --disk ${libvirt_path}/${role_name}_${count}.qcow2 \
    --vcpus=4 \
    --ram=16348 \
    --network network=br0,model=virtio,portgroup=${port_group} \
    --network network=br1,model=virtio \
    --virt-type kvm \
    --cpu host \
    --import \
    --os-variant rhel7 \
    --serial pty \
    --console pty,target_type=virtio \
    --graphics vnc \
    --print-xml)
    EOF
        # Expose the VM through Virtual BMC on a unique IPMI port
        vbmc add ${role_name}_${count} --port 1623${num} --username ${ipmi_user} --password ${ipmi_password}
        vbmc start ${role_name}_${count}
        # Record provisioning MAC, VM name, KVM host IP, role, and IPMI port for the Ironic import
        prov_mac=`virsh domiflist ${role_name}_${count}|grep ${prov_switch}|awk '{print $5}'`
        vm_name=${role_name}-${count}-`hostname -s`
        kvm_ip=`ip route get 1 |grep src |awk '{print $7}'`
        echo ${prov_mac} ${vm_name} ${kvm_ip} ${role_name} 1623${num} >> ironic_list
        num=$(expr $num + 1)
      done
    done
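
    Before proceeding, it is worth confirming that the VM definitions and their Virtual BMC endpoints exist; for example, on each KVM host:

    virsh list --all    # the control-only VMs should be listed (shut off until deployed)
    vbmc list           # one vBMC instance per VM, listening on the assigned IPMI port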
    Note:

    The generated ironic_list will be needed on the undercloud to import the nodes to Ironic.

    Get the ironic_lists from the overcloud KVM hosts and combine them.
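
    One way to collect and combine them, assuming SSH access from the undercloud to the overcloud KVM hosts (host names as in the examples above):

    for host in 5b3s30 5b3s31 5b3s32; do
      ssh ${host} cat ironic_list
    done > ironic_list_control_only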

    cat ironic_list_control_only
    52:54:00:3a:2f:ca control-only-1-5b3s30 10.87.64.31 control-only 16234
    52:54:00:31:4f:63 control-only-2-5b3s30 10.87.64.31 control-only 16235
    52:54:00:0c:11:74 control-only-1-5b3s31 10.87.64.32 control-only 16234
    52:54:00:56:ab:55 control-only-2-5b3s31 10.87.64.32 control-only 16235
    52:54:00:c1:f0:9a control-only-1-5b3s32 10.87.64.33 control-only 16234
    52:54:00:f3:ce:13 control-only-2-5b3s32 10.87.64.33 control-only 16235

    Import the nodes into Ironic:

    ipmi_password=<password>
    ipmi_user=<user>
    
    DEPLOY_KERNEL=$(openstack image show bm-deploy-kernel -f value -c id)
    DEPLOY_RAMDISK=$(openstack image show bm-deploy-ramdisk -f value -c id)
    
    num=0
    while IFS= read -r line; do
      mac=`echo $line|awk '{print $1}'`
      name=`echo $line|awk '{print $2}'`
      kvm_ip=`echo $line|awk '{print $3}'`
      profile=`echo $line|awk '{print $4}'`
      ipmi_port=`echo $line|awk '{print $5}'`
      uuid=`openstack baremetal node create --driver ipmi \
                                            --property cpus=4 \
                                            --property memory_mb=16348 \
                                            --property local_gb=100 \
                                            --property cpu_arch=x86_64 \
                                            --driver-info ipmi_username=${ipmi_user}  \
                                            --driver-info ipmi_address=${kvm_ip} \
                                            --driver-info ipmi_password=${ipmi_password} \
                                            --driver-info ipmi_port=${ipmi_port} \
                                            --name=${name} \
                                            --property capabilities=boot_option:local \
                                            -c uuid -f value`
      openstack baremetal node set ${uuid} --driver-info deploy_kernel=$DEPLOY_KERNEL --driver-info deploy_ramdisk=$DEPLOY_RAMDISK
      openstack baremetal port create --node ${uuid} ${mac}
      openstack baremetal node manage ${uuid}
      num=$(expr $num + 1)
    done < <(cat ironic_list_control_only)

    ControlOnly node introspection

    openstack overcloud node introspect --all-manageable --provide

    Get the ironic UUIDs of the ControlOnly nodes:

    openstack baremetal node list |grep control-only
    | 7d758dce-2784-45fd-be09-5a41eb53e764 | control-only-1-5b3s30  | None | power off | available | False |
    | d26abdeb-d514-4a37-a7fb-2cd2511c351f | control-only-2-5b3s30  | None | power off | available | False |
    | 91dd9fa9-e8eb-4b51-8b5e-bbaffb6640e4 | control-only-1-5b3s31  | None | power off | available | False |
    | 09fa57b8-580f-42ec-bf10-a19573521ed4 | control-only-2-5b3s31  | None | power off | available | False |
    | f4766799-24c8-4e3b-af54-353f2b796ca4 | control-only-1-5b3s32  | None | power off | available | False |
    | 58a803ae-a785-470e-9789-139abbfa74fb | control-only-2-5b3s32  | None | power off | available | False |

    The first ControlOnly node on each of the overcloud KVM hosts will be used for POP1, the second for POP2, and so forth.

    Get the ironic UUID of the POP compute nodes:

    openstack baremetal node list |grep compute
    | 91d6026c-b9db-49cb-a685-99a63da5d81e | compute-3-5b3s30 | None | power off | available | False |
    | 8028eb8c-e1e6-4357-8fcf-0796778bd2f7 | compute-4-5b3s30 | None | power off | available | False |
    | b795b3b9-c4e3-4a76-90af-258d9336d9fb | compute-3-5b3s31 | None | power off | available | False |
    | 2d4be83e-6fcc-4761-86f2-c2615dd15074 | compute-4-5b3s31 | None | power off | available | False |

    The first two compute nodes belong to POP1; the second two compute nodes belong to POP2.

    Create an input YAML using the ironic UUIDs:

    vi ~/subcluster_input.yaml
    ---
    - subcluster: subcluster1
      asn: "65413"
      control_nodes:
        - uuid: 7d758dce-2784-45fd-be09-5a41eb53e764
          ipaddress: 10.0.0.11
        - uuid: 91dd9fa9-e8eb-4b51-8b5e-bbaffb6640e4
          ipaddress: 10.0.0.12
        - uuid: f4766799-24c8-4e3b-af54-353f2b796ca4
          ipaddress: 10.0.0.13
      compute_nodes:
        - uuid: 91d6026c-b9db-49cb-a685-99a63da5d81e
          vrouter_gateway: 10.0.0.1
        - uuid: 8028eb8c-e1e6-4357-8fcf-0796778bd2f7
          vrouter_gateway: 10.0.0.1
    - subcluster: subcluster2
      asn: "65414"
      control_nodes:
        - uuid: d26abdeb-d514-4a37-a7fb-2cd2511c351f
          ipaddress: 10.0.0.14
        - uuid: 09fa57b8-580f-42ec-bf10-a19573521ed4
          ipaddress: 10.0.0.15
        - uuid: 58a803ae-a785-470e-9789-139abbfa74fb
          ipaddress: 10.0.0.16
      compute_nodes:
        - uuid: b795b3b9-c4e3-4a76-90af-258d9336d9fb
          vrouter_gateway: 10.0.0.1
        - uuid: 2d4be83e-6fcc-4761-86f2-c2615dd15074
          vrouter_gateway: 10.0.0.1
    Note:

    Only control_nodes, compute_nodes, dpdk_nodes, and sriov_nodes are supported.
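
    For DPDK or SRIOV computes, the corresponding list entries would presumably mirror the compute_nodes structure; a hypothetical sketch (the key names come from the note above, the field layout is an assumption):

      dpdk_nodes:
        - uuid: <ironic UUID of the DPDK compute node>
          vrouter_gateway: 10.0.0.1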

    Generate subcluster environment:

    ~/tripleo-heat-templates/tools/contrail/create_subcluster_environment.py -i ~/subcluster_input.yaml \
                   -o ~/tripleo-heat-templates/environments/contrail/contrail-subcluster.yaml

    Check subcluster environment file:

    cat ~/tripleo-heat-templates/environments/contrail/contrail-subcluster.yaml
    parameter_defaults:
      NodeDataLookup:
        041D7B75-6581-41B3-886E-C06847B9C87E:
          contrail_settings:
            CONTROL_NODES: 10.0.0.14,10.0.0.15,10.0.0.16
            SUBCLUSTER: subcluster2
            VROUTER_GATEWAY: 10.0.0.1
        09BEC8CB-77E9-42A6-AFF4-6D4880FD87D0:
          contrail_settings:
            BGP_ASN: '65414'
            SUBCLUSTER: subcluster2
        14639A66-D62C-4408-82EE-FDDC4E509687:
          contrail_settings:
            BGP_ASN: '65414'
            SUBCLUSTER: subcluster2
        28AB0B57-D612-431E-B177-1C578AE0FEA4:
          contrail_settings:
            BGP_ASN: '65413'
            SUBCLUSTER: subcluster1
        3993957A-ECBF-4520-9F49-0AF6EE1667A7:
          contrail_settings:
            BGP_ASN: '65413'
            SUBCLUSTER: subcluster1
        73F8D030-E896-4A95-A9F5-E1A4FEBE322D:
          contrail_settings:
            BGP_ASN: '65413'
            SUBCLUSTER: subcluster1
        7933C2D8-E61E-4752-854E-B7B18A424971:
          contrail_settings:
            CONTROL_NODES: 10.0.0.14,10.0.0.15,10.0.0.16
            SUBCLUSTER: subcluster2
            VROUTER_GATEWAY: 10.0.0.1
        AF92F485-C30C-4D0A-BDC4-C6AE97D06A66:
          contrail_settings:
            BGP_ASN: '65414'
            SUBCLUSTER: subcluster2
        BB9E9D00-57D1-410B-8B19-17A0DA581044:
          contrail_settings:
            CONTROL_NODES: 10.0.0.11,10.0.0.12,10.0.0.13
            SUBCLUSTER: subcluster1
            VROUTER_GATEWAY: 10.0.0.1
        E1A809DE-FDB2-4EB2-A91F-1B3F75B99510:
          contrail_settings:
            CONTROL_NODES: 10.0.0.11,10.0.0.12,10.0.0.13
            SUBCLUSTER: subcluster1
            VROUTER_GATEWAY: 10.0.0.1
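
    As a sanity check, the DMI UUID reported by each overcloud node should appear as a key in this file. For example, read the UUID on the node, then search for it on the undercloud:

    sudo dmidecode -s system-uuid    # on the overcloud node
    grep -A4 -i <dmi-uuid> ~/tripleo-heat-templates/environments/contrail/contrail-subcluster.yaml    # on the undercloud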

    Deployment

    Add contrail-subcluster.yaml, contrail-ips-from-pool-all.yaml, and contrail-scheduler-hints.yaml to the OpenStack deploy command:

    openstack overcloud deploy --templates ~/tripleo-heat-templates \
     -e ~/overcloud_images.yaml \
     -e ~/tripleo-heat-templates/environments/network-isolation.yaml \
     -e ~/tripleo-heat-templates/environments/contrail/contrail-plugins.yaml \
     -e ~/tripleo-heat-templates/environments/contrail/contrail-services.yaml \
     -e ~/tripleo-heat-templates/environments/contrail/contrail-net.yaml \
     -e ~/tripleo-heat-templates/environments/contrail/contrail-subcluster.yaml \
     -e ~/tripleo-heat-templates/environments/contrail/contrail-ips-from-pool-all.yaml \
     -e ~/tripleo-heat-templates/environments/contrail/contrail-scheduler-hints.yaml \
     --roles-file ~/tripleo-heat-templates/roles_data_contrail_aio.yaml

overcloud Installation

Deployment:

openstack overcloud deploy --templates ~/tripleo-heat-templates \
-e ~/overcloud_images.yaml \
-e ~/tripleo-heat-templates/environments/network-isolation.yaml \
-e ~/tripleo-heat-templates/environments/contrail/contrail-plugins.yaml \
-e ~/tripleo-heat-templates/environments/contrail/contrail-services.yaml \
-e ~/tripleo-heat-templates/environments/contrail/contrail-net.yaml \
--roles-file ~/tripleo-heat-templates/roles_data_contrail_aio.yaml
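
Deployment progress can be followed from a second shell on the undercloud using the standard Heat stack commands (assuming the default stack name, overcloud):

openstack stack list
openstack stack event list overcloud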

Validation Test:

source overcloudrc
curl -O http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
openstack image create --container-format bare --disk-format qcow2 --file cirros-0.3.5-x86_64-disk.img cirros
openstack flavor create --public cirros --id auto --ram 64 --disk 0 --vcpus 1
openstack network create net1
openstack subnet create --subnet-range 1.0.0.0/24 --network net1 sn1
nova boot --image cirros --flavor cirros --nic net-id=`openstack network show net1 -c id -f value` --availability-zone nova:overcloud-novacompute-0.localdomain c1
nova list
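
If the boot succeeds, the instance should reach the ACTIVE state. A quick follow-up check with standard OpenStack client commands:

openstack server show c1 -f value -c status    # expect ACTIVE
openstack console log show c1 | tail -5        # a cirros login prompt indicates a healthy boot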