Deploying Contrail with Red Hat OpenStack Platform Director 10
This document explains how to integrate a Contrail 3.2 installation with Red Hat OpenStack Platform Director 10.
Overview
Red Hat OpenStack Platform provides an installer named Director (RHOSPD). Director is based on the upstream OpenStack project TripleO (OOO, OpenStack On OpenStack). TripleO is an open source project that uses features of OpenStack to deploy a fully functional, tenant-facing OpenStack environment.
You can use TripleO and Director to deploy a Red Hat cloud environment integrated with Contrail.
TripleO Features
TripleO uses the concepts of undercloud and overcloud. TripleO sets up an undercloud, an operator-facing deployment cloud that contains the OpenStack components needed to deploy and manage an overcloud, a tenant-facing cloud that hosts user workloads.
The overcloud is the deployed solution that can represent a cloud for any purpose, such as production, staging, test, and so on. The operator can choose to deploy any of the available overcloud roles to the environment, such as controller, compute, and the like.
TripleO leverages existing core components of OpenStack including Nova, Ironic, Neutron, Heat, Glance, and Ceilometer to deploy OpenStack on bare metal hardware.
- Nova and Ironic are used in the undercloud to manage the bare metal instances that comprise the infrastructure for the overcloud.
- Neutron is used to provide a networking environment in which to deploy the overcloud.
- Glance stores machine images.
- Ceilometer collects metrics about the overcloud.
For more information about TripleO architecture, see
https://docs.openstack.org/developer/tripleo-docs/introduction/architecture.html
Composable Roles
TripleO enables composable roles. Each role is a group of services that are defined in Heat templates. Composable roles give the operator the flexibility to add and modify roles as needed.
The following are the Contrail roles used for integrating Contrail into the overcloud environment:
- Contrail Controller
- Contrail Analytics
- Contrail Analytics Database
- Contrail-TSN
- Contrail-DPDK
Figure 1 shows the relationship and components of an undercloud and overcloud architecture for Contrail.
Figure 1: Undercloud and Overcloud with Roles

Deployment Tools
Deployment to physical servers or virtual machines occurs by means of collaboration between Heat, Nova, Neutron, Glance, and Ironic.
A single nested Heat stack is deployed from the undercloud. The stack contains a number of Nova instances, and those instances are the overcloud roles. The definitions for all roles are provided in the Heat templates.
To deploy the stack, Heat makes successive calls to Nova, OpenStack’s compute service controller. Nova depends on Ironic, which, by this stage in the process, has acquired an inventory of introspected hardware.
If configured, Nova flavors can act as a constraint, influencing the range of machines that can be selected for deployment by the Nova scheduler. For each request to deploy a new node with a specific role, Nova filters the list of available nodes, ensuring that the selected nodes meet the hardware requirements.
When the target node has been selected, Ironic performs the provisioning of the node: Ironic retrieves from Glance the OS image associated with the role, causes the node to boot a deployment ramdisk, and, in a typical case, exports the node’s local disk over iSCSI so that the disk can be partitioned and have the OS image written onto it by the Ironic conductor.
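After a deployment starts, you can watch this collaboration from the undercloud. The following commands are a quick reference sketch and assume that the undercloud credentials have been sourced (source ~/stackrc).

openstack stack list            # the nested Heat stack that represents the overcloud
openstack server list           # Nova instances, one per overcloud role instance
openstack baremetal node list   # Ironic nodes with their power and provisioning states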
Preparing the Environment for Deployment
- Preparing for the Contrail Roles
- Preparing for the Underlay Network
- Preparing for the Provisioning Network
- Network Isolation
The overcloud roles can be deployed to bare metal servers or to virtual machines (VMs). The compute nodes must be deployed to bare metal systems.
Ensure your environment is prepared for the Red Hat deployment; refer to the Red Hat documentation.
Preparing for the Contrail Roles
Ensure the following requirements are met for the Contrail nodes per role.
- Non-high availability: A minimum of four overcloud nodes are needed for control plane roles for a non-high availability deployment:
- 1x contrail-config (includes Contrail control)
- 1x contrail-analytics
- 1x contrail-analytics-database
- 1x OpenStack controller
- High availability: A minimum of 12 overcloud nodes are needed for control plane roles for a high availability deployment:
- 3x contrail-config (includes Contrail control)
- 3x contrail-analytics
- 3x contrail-analytics-database
- 3x OpenStack controller
- If the control plane roles will be deployed to VMs, use 3 separate physical servers and deploy one role of each kind to each physical server.
RHOSP Director expects the nodes to be provided by the administrator. For example, if you are deploying to VMs, you must create the VMs before starting the deployment; a sketch of one way to create such a VM is shown below.
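The following virt-install example is a sketch only. The VM name, disk path, sizes, and the bridge name (br-prov, assumed to be attached to the provisioning network) are illustrative values; size the VM to match the flavor you plan to use for the role, and leave the VM configured to boot from the network so that Ironic can PXE boot it later.

virt-install --name contrail-controller_1 \
  --ram 16384 --vcpus 4 \
  --disk path=/var/lib/libvirt/images/contrail-controller_1.qcow2,size=50 \
  --network bridge=br-prov \
  --os-variant rhel7 \
  --pxe --noautoconsole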
Preparing for the Underlay Network
Refer to Red Hat documentation for planning and implementing underlay networking, including the kinds of networks used and the purpose of each:
- https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html-single/director_installation_and_usage/#sect-Planning_Networks
- https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html-single/director_installation_and_usage/#sect-Networking_Requirements
At a high level, every overcloud node must support IPMI.
Refer to “Requirements for Deploying to VMs” in this document if you are deploying to VMs.
Preparing for the Provisioning Network
Ensure the following requirements are met for the provisioning network.
- One NIC from every machine must be in the same broadcast domain of the provisioning network, and it should be the same NIC on each of the overcloud machines. For example, if you use the second NIC on the first overcloud machine, use the second NIC on each additional overcloud machine.
During installation, these NICs will be referenced by a single name across all overcloud machines.
- The provisioning network NIC should not be the same NIC that you are using for remote connectivity to the undercloud machine. During the undercloud installation, an Open vSwitch bridge is created for Neutron and the provisioning NIC is bridged to the Open vSwitch bridge. Consequently, connectivity would be lost if the provisioning NIC were also used for remote connectivity to the undercloud machine.
- The provisioning NIC on the overcloud nodes must be untagged.
- You must have the MAC address of the NIC that will PXE boot on the provisioning network, and the IPMI information for the machine. The IPMI information includes such things as the IP address of the IPMI NIC and the IPMI username and password. A quick verification example is shown after this list.
- All of the networks must be available to all of the Contrail roles and compute nodes.
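The following is a quick verification sketch for the information gathered above; the IPMI address, credentials, and interface name are example values.

ipmitool -I lanplus -H 10.0.0.101 -U admin -P password chassis power status
ip link show eth1   # on the overcloud machine, record the MAC of the NIC that will PXE boot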
Network Isolation
TripleO enables configuration of isolated overcloud networks. Using this approach, specific types of network traffic, such as tenant, storage, and API traffic, can be carried on isolated networks. This enables you to assign network traffic to specific network interfaces or bonds.
When isolated networks are configured, the OpenStack services are configured to use the isolated networks. If no isolated networks are configured, all services run on the provisioning network.
The following networks are typically used when using network isolation topology:
- Provisioning---For the undercloud control plane
- Internal API---For OpenStack internal APIs
- Tenant
- Storage
- Storage Management
- External
- Floating IP---Can either be merged with external or can be a separate network.
- Management
Templates for Network Isolation
Use the following template files to enable network isolation:
- environments/network-isolation.yaml
Contains the paths of the templates that must be included to create the various Neutron networks and ports
- environments/contrail/contrail-net.yaml
Contains the subnet/mask, allocation pool, and default gateway IP information. Make changes to this file to configure the subnets for your setup.
- environments/contrail/contrail-nic-config.yaml
Defines the NICs that the overcloud nodes will use for each of the networks. Change the contents of this template as needed for your environment.
Features of the default configuration include:
- The first NIC is used for the control plane provisioning network.
- The second NIC is used for the internal API network.
- The third NIC uses multiple VLANs to provide for the rest of the networks:
- VLAN-10: External network
- VLAN-30: Storage network
- VLAN-40: Storage management network
- VLAN-50: Tenant network
- VLAN-60: Management network
- VLAN-XXX: Floating network (if separate from external network)
Figure 2 shows the network connectivity for the overcloud roles when you use the default Heat templates. In Figure 2, the vertical lines depict the underlay, which could be a switch. The underlay connectivity must be prepared before starting the deployment. The undercloud must have reachability to the provisioning and external networks.
Figure 2: Network Isolation Model

Deploying an OSPD-10 Overcloud
When the requirements for the environment are met, you are ready to start deploying.
Installing the Undercloud
Use Red Hat OpenStack Platform Director to install the undercloud after the environment has been prepared. You need Red Hat subscription credentials, such as the account, password, pool ID, and the like, to register the undercloud and overcloud nodes.
Follow the procedures in the Red Hat documentation to install an undercloud.
Configuring Undercloud and Overcloud
After the undercloud machine is prepared, use the following procedure to set parameters in the undercloud.conf file to match your local deployment, install the undercloud, and prepare the overcloud images. An illustrative undercloud.conf snippet is shown after this procedure.
- Configure the undercloud.
cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
vi ~/undercloud.conf
- Install the undercloud OpenStack components.
openstack undercloud install
- Source the undercloud credentials.
source ~/stackrc
- Get overcloud images.
sudo yum install rhosp-director-images rhosp-director-images-ipa
mkdir ~/images
cd ~/images
- Upload overcloud images.
for i in /usr/share/rhosp-director-images/overcloud-full-latest-10.0.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-10.0.tar; do tar -xvf $i; done
openstack overcloud image upload --image-path /home/stack/images/
cd ~
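The following is an illustrative undercloud.conf sketch for the configuration step at the beginning of this procedure. The addressing (192.0.2.0/24, a common documentation default) and the interface name are assumptions; substitute the values for your provisioning network and keep the option names that appear in the sample file.

[DEFAULT]
local_ip = 192.0.2.1/24
network_gateway = 192.0.2.1
undercloud_public_vip = 192.0.2.2
undercloud_admin_vip = 192.0.2.3
local_interface = eth1
network_cidr = 192.0.2.0/24
masquerade_network = 192.0.2.0/24
dhcp_start = 192.0.2.5
dhcp_end = 192.0.2.24
inspection_iprange = 192.0.2.100,192.0.2.120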
Defining Nodes with Ironic
The properties of the overcloud nodes and VMs are in the instackenv.json file, which is imported to Ironic.
This procedure shows how to define nodes with Ironic.
- Define nodes in instackenv.json.
vi ~/instackenv.json
- Password-less SSH must be enabled on all hosts on which overcloud VMs will be spawned.
- If you need deterministic node placement, assign the appropriate capabilities in the node definition in instackenv.json.
For more information about using instackenv.json, see Red Hat documentation:
- https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html/director_installation_and_usage/chap-configuring_basic_overcloud_requirements_with_the_cli_tools
- https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html-single/director_installation_and_usage/#sect-Registering_Nodes_for_the_Overcloud
A sample instackenv.json is included later in this topic.
- Import nodes to Ironic.
openstack baremetal import --json ~/instackenv.json
- Configure boot mode. This assigns the kernel and ramdisk image to each Ironic node.
openstack baremetal configure boot
- Activate node introspection.
Introspection boots each Ironic node over the PXE network and is used to collect hardware data for the nodes. The capabilities and profile of each node are determined in this step. Because this step includes pushing an image to each of the overcloud roles, successful completion of Ironic introspection also means that the underlay configuration is valid on the provisioning network.
Note: Make sure that the maximum transmission unit (MTU) is consistent across all of the networks to prevent any issues.
for node in $(openstack baremetal node list -c UUID -f value) ; do openstack baremetal node manage $node ; done
openstack overcloud node introspect --all-manageable --provide
- Perform node profiling.
If you provided capabilities for overcloud nodes, create the corresponding flavors at this time. Each overcloud role can be assigned a specific Nova flavor in the Heat templates. You can provide details such as memory, disk size, number of CPUs, and so on. When deploying a role, Director tries to find an Ironic node that has the capabilities listed in the flavor; this is how you control node placement. A worked example for one role is shown after this procedure.
openstack flavor create <flavor-name> --ram <RAM> --vcpus <CPUs> --disk <disk-size>
openstack flavor set --property "capabilities:boot_option"="local" --property "capabilities:profile"="<capability-name>" <flavor-name>
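The following is a worked sketch of this step for a single role. The flavor name, profile name, and sizing values are illustrative; the profile must match the capabilities set in instackenv.json, and the flavor name must match the flavor referenced for that role in contrail-services.yaml.

openstack baremetal node list   # confirm the nodes are in the available state after introspection
openstack flavor create contrail-controller --ram 16384 --vcpus 4 --disk 50
openstack flavor set --property "capabilities:boot_option"="local" \
  --property "capabilities:profile"="contrail-controller" contrail-controller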
Configuring the Overcloud
- Get Contrail Components
- Configure NICs for Overcloud Networking
- Assign Addresses and Credentials
- Deploying the Overcloud
- Sample instackenv.json
- Adding a New Physical Compute Node
Get Contrail Components
This procedure provides the components needed to integrate Contrail with Director, including adding a repo that hosts Contrail packages and providing Heat templates and corresponding Puppet modules.
- Create a Contrail repo.
A Contrail repo is needed to make sure that the overcloud Contrail roles can install the Contrail packages. The Contrail repo can be hosted on the undercloud or on any machine that is accessible from the overcloud nodes on the provisioning network.
sudo mkdir /var/www/html/contrail
sudo tar zxvf ~/contrail-install-packages_<package>.tgz -C /var/www/html/contrail/
- Upload Puppet modules to Swift.
Install the RPMs for the Puppet modules into the directory /home/stack/usr/share/openstack-puppet/modules/. This folder must contain the Puppet modules necessary to successfully install and start Contrail services on the overcloud roles.
Use the upload-swift-artifacts command to make sure that these modules are uploaded to the overcloud nodes during deployment. All of the commands are executed as the user stack.
cd /var/www/html/contrail
yum localinstall contrail-tripleo-puppet-<version>.el7.noarch.rpm puppet-contrail-<version>.el7.noarch.rpm
mkdir -p ~/usr/share/openstack-puppet/modules/contrail
cp -R /usr/share/openstack-puppet/modules/contrail/* ~/usr/share/openstack-puppet/modules/contrail/
mkdir -p ~/usr/share/openstack-puppet/modules/tripleo
cp -R /usr/share/contrail-tripleo-puppet/* ~/usr/share/openstack-puppet/modules/tripleo
cd ~
tar czvf puppet-modules.tgz ~/usr/
upload-swift-artifacts -f puppet-modules.tgz
- Get TripleO Heat templates.
cp -r /usr/share/openstack-tripleo-heat-templates/ ~/tripleo-heat-templates
cd /var/www/html/contrail
yum localinstall contrail-tripleo-heat-templates-<version>.el7.noarch.rpm
cp -r /usr/share/contrail-tripleo-heat-templates/environments/contrail ~/tripleo-heat-templates/environments
cp -r /usr/share/contrail-tripleo-heat-templates/puppet/services/network/* ~/tripleo-heat-templates/puppet/services/network
- Update the contrail-services.yaml.
The contrail-services.yaml file is the main administrator-facing Heat template. Provide the correct URL of the Contrail repo that you created, the flavor for each overcloud role, the count for each overcloud role, and other environment-specific parameters, such as the DNS server, NTP server, and the like.
vi ~/tripleo-heat-templates/environments/contrail/contrail-services.yaml
You must set the value for ContrailVersion to 3.
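The following snippet illustrates the kinds of values that are set in contrail-services.yaml. Apart from ContrailVersion, the parameter names and values shown here are assumptions for illustration; edit the parameters that already exist in your copy of the file rather than copying these lines verbatim.

parameter_defaults:
  ContrailRepo: http://192.0.2.1/contrail    # URL of the Contrail repo created earlier (example)
  ContrailVersion: 3
  ContrailControllerCount: 3                 # role counts for an HA control plane (assumed names)
  ContrailAnalyticsCount: 3
  ContrailAnalyticsDatabaseCount: 3
  OvercloudContrailControllerFlavor: contrail-controller   # flavor created during node profiling (assumed name)
  NtpServer: pool.ntp.org                    # example value
  DnsServers: ["8.8.8.8","8.8.4.4"]          # example value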
Configure NICs for Overcloud Networking
Use this information to configure the NICs for your system.
Overcloud Networking—Multiple NICs
vi ~/tripleo-heat-templates/environments/contrail/contrail-net.yaml
vi ~/tripleo-heat-templates/environments/contrail/contrail-nic-config-compute.yaml
vi ~/tripleo-heat-templates/environments/contrail/contrail-nic-config.yaml
Overcloud Networking—Multiple NICs with Bond and VLAN
vi ~/tripleo-heat-templates/environments/contrail/contrail-net-bond-vlan.yaml
vi ~/tripleo-heat-templates/environments/contrail/contrail-nic-config-compute-bond-vlan.yaml
vi ~/tripleo-heat-templates/environments/contrail/contrail-nic-config-vlan.yaml
Overcloud Networking—Single NIC
vi ~/tripleo-heat-templates/environments/contrail/contrail-net-single.yaml
vi ~/tripleo-heat-templates/environments/contrail/contrail-nic-config-compute-single.yaml
vi ~/tripleo-heat-templates/environments/contrail/contrail-nic-config-single.yaml
Assign Addresses and Credentials
- Assign static IP addresses.
Use the template ips-from-pool-all.yaml to provide static IP addresses for the overcloud nodes.
vi ~/tripleo-heat-templates/environments/contrail/ips-from-pool-all.yaml
- Provide subscription manager credentials.
Use the template environment-rhel-registration.yaml to provide subscription manager credentials, including rhel_reg_password, rhel_reg_pool_id, rhel_reg_repos, rhel_reg_user, and method.
vi ~/tripleo-heat-templates/extraconfig/pre_deploy/rhel-registration/environment-rhel-registration.yaml
The following is a sample environment-rhel-registration.yaml file for deployment.
Note: The repos listed are required to deploy Contrail 3.2 with Director 10 and OpenStack Newton.
[stack@instack ~]$ cat environment-rhel-registration.yaml
# Note this can be specified either in the call
# to heat stack-create via an additional -e option
# or via the global environment on the seed in
# /etc/heat/environment.d/default.yaml
parameter_defaults:
  rhel_reg_activation_key: ""
  rhel_reg_auto_attach: "true"
  rhel_reg_base_url: ""
  rhel_reg_environment: ""
  rhel_reg_force: ""
  rhel_reg_machine_name: ""
  rhel_reg_org: ""
  rhel_reg_password: ""
  rhel_reg_pool_id: ""
  rhel_reg_release: ""
  rhel_reg_repos: "rhel-7-server-rpms rhel-7-server-extras-rpms rhel-7-server-rh-common-rpms rhel-ha-for-rhel-7-server-rpms rhel-7-server-openstack-10-rpms rhel-7-server-openstack-10-devtools-rpms"
  rhel_reg_sat_url: ""
  rhel_reg_server_url: ""
  rhel_reg_service_level: ""
  rhel_reg_user: ""
  rhel_reg_type: ""
  rhel_reg_method: "portal"
  rhel_reg_sat_repo: "rhel-7-server-satellite-tools-6.1-rpms"
- Set the overcloud nameserver.
neutron subnet-show
neutron subnet-update <SUBNET-UUID> --dns-nameserver NAMESERVER_IP
Deploying the Overcloud
- Deploy Overcloud with a Single NIC
- Deploy Overcloud with Multiple NICs
- Deploy Overcloud with Multiple NICs with Bond and VLAN
When you perform the overcloud installation, the overcloud is generated with the definitions you provide in the Heat templates.
The openstack overcloud deploy command creates a nested stack with all the resources needed to deploy the overcloud roles, networks, services, and so on.
- The stack can be updated if you wish to make changes to the overcloud.
- To redeploy the overcloud with a fresh installation, you delete the existing stack, make appropriate changes to the Heat templates, and then redeploy the stack.
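For example, a sketch of both workflows from the undercloud (assuming the default stack name, overcloud):

openstack stack list                 # note the stack name and status
openstack overcloud deploy ...       # re-run the original deploy command with modified templates to update the stack
openstack stack delete overcloud     # fresh redeploy only; destroys the overcloud (confirm when prompted)
openstack stack list                 # wait until the stack is gone before redeploying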
Deploy Overcloud with a Single NIC
openstack overcloud deploy --templates tripleo-heat-templates/ \
  --roles-file tripleo-heat-templates/environments/contrail/roles_data.yaml \
  -e tripleo-heat-templates/extraconfig/pre_deploy/rhel-registration/environment-rhel-registration.yaml \
  -e tripleo-heat-templates/extraconfig/pre_deploy/rhel-registration/rhel-registration-resource-registry.yaml \
  -e tripleo-heat-templates/environments/contrail/contrail-services.yaml \
  -e tripleo-heat-templates/environments/contrail/contrail-net-single.yaml
Deploy Overcloud with Multiple NICs
openstack overcloud deploy --templates tripleo-heat-templates/ \
  --roles-file tripleo-heat-templates/environments/contrail/roles_data.yaml \
  -e tripleo-heat-templates/environments/puppet-pacemaker.yaml \
  -e tripleo-heat-templates/environments/contrail/contrail-services.yaml \
  -e tripleo-heat-templates/environments/contrail/network-isolation.yaml \
  -e tripleo-heat-templates/environments/contrail/contrail-net.yaml \
  -e tripleo-heat-templates/environments/contrail/ips-from-pool-all.yaml \
  -e tripleo-heat-templates/environments/network-management.yaml \
  -e tripleo-heat-templates/extraconfig/pre_deploy/rhel-registration/environment-rhel-registration.yaml \
  -e tripleo-heat-templates/extraconfig/pre_deploy/rhel-registration/rhel-registration-resource-registry.yaml
Deploy Overcloud with Multiple NICs with Bond and VLAN
openstack overcloud deploy --templates tripleo-heat-templates/ \
  --roles-file tripleo-heat-templates/environments/contrail/roles_data.yaml \
  -e tripleo-heat-templates/environments/puppet-pacemaker.yaml \
  -e tripleo-heat-templates/environments/contrail/contrail-services.yaml \
  -e tripleo-heat-templates/environments/contrail/network-isolation.yaml \
  -e tripleo-heat-templates/environments/contrail/contrail-net-bond-vlan.yaml \
  -e tripleo-heat-templates/environments/contrail/ips-from-pool-all.yaml \
  -e tripleo-heat-templates/environments/network-management.yaml \
  -e tripleo-heat-templates/extraconfig/pre_deploy/rhel-registration/environment-rhel-registration.yaml \
  -e tripleo-heat-templates/extraconfig/pre_deploy/rhel-registration/rhel-registration-resource-registry.yaml
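While the deployment runs, you can monitor progress from the undercloud; the following is a sketch. When the deployment completes successfully, an overcloudrc credentials file for the overcloud is written to the stack user's home directory.

openstack stack list     # overall stack status, for example CREATE_IN_PROGRESS or CREATE_COMPLETE
openstack server list    # overcloud nodes and their provisioning network IP addresses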
Sample instackenv.json
This section has a sample instackenv.json, with OpenStack and Contrail controller on separate physical machines. This sample imports VMs to Ironic.
The sample instackenv.json is from a working environment that includes:
- 3x KVM hosts: 10.xx.xx.22, 10.xx.xx.24, 10.xx.xx.25
- The following overcloud VMs on each KVM host:
- openstack-controller
- contrail-controller
- contrail-analytics
- contrail-analytics-database
- compute
- This sample imports VMs to Ironic.
The mandatory parameters for importing VMs to Ironic are shown in the following sample; note that pm_type is set to pxe_ssh when the target nodes are VMs.
Sample instackenv.json
{ "arch": "x86_64", "host-ip": "192.168.122.1", "power_manager": "nova.virt.baremetal.virtual_power_driver.VirtualPowerManager", "seed-ip": "", "ssh-key": "-----BEGIN RSA PRIVATE KEY----- $ABC123 -----END RSA PRIVATE KEY-----\n", "ssh-user": "root", "nodes": [ { "mac": [ "52:54:00:d7:e4:87" ], "name": "control_1_at_5b5s36", "capabilities" : "profile:control", "cpu": "4", "memory": "16384", "disk": "50", "arch": "x86_64", "pm_user": "root", "pm_addr": "10.xx.xx.24", "pm_password": "-----BEGIN RSA PRIVATE KEY----- $ABC123 -----END RSA PRIVATE KEY-----", "pm_type": "pxe_ssh" } , { "mac": [ "52:54:00:82:0d:9e" ], "name": "compute_1_at_5b5s36", "capabilities" : "profile:compute", "cpu": "4", "memory": "16384", "disk": "50", "arch": "x86_64", "pm_user": "root", "pm_addr": "10.xx.xx.24", "pm_password": "-----BEGIN RSA PRIVATE KEY----- $ABC123 -----END RSA PRIVATE KEY-----", "pm_type": "pxe_ssh" } , { "mac": [ "52:54:00:a2:ff:7a" ], "name": "contrail-controller_1_at_5b5s36", "capabilities" : "profile:contrail-controller", "cpu": "4", "memory": "16384", "disk": "50", "arch": "x86_64", "pm_user": "root", "pm_addr": "10.xx.xx.24", "pm_password": "-----BEGIN RSA PRIVATE KEY----- $ABC123 -----END RSA PRIVATE KEY-----", "pm_type": "pxe_ssh" } , { "mac": [ "52:54:00:51:35:bd" ], "name": "contrail-analytics_1_at_5b5s36", "capabilities" : "profile:contrail-analytics", "cpu": "4", "memory": "16384", "disk": "50", "arch": "x86_64", "pm_user": "root", "pm_addr": "10.xx.xx.24", "pm_password": "-----BEGIN RSA PRIVATE KEY----- $ABC123 -----END RSA PRIVATE KEY-----", "pm_type": "pxe_ssh" } , { "mac": [ "52:54:00:a1:ae:4d" ], "name": "contrail-analytics-database_1_at_5b5s36", "capabilities" : "profile:contrail-analytics-database", "cpu": "4", "memory": "16384", "disk": "50", "arch": "x86_64", "pm_user": "root", "pm_addr": "10.xx.xx.24", "pm_password": "-----BEGIN RSA PRIVATE KEY----- $ABC123 -----END RSA PRIVATE KEY-----", "pm_type": "pxe_ssh" } , { "mac": [ "52:54:00:8b:0e:b8" ], "name": "control_1_at_5b5s34", "capabilities" : "profile:control", "cpu": "4", "memory": "16384", "disk": "50", "arch": "x86_64", "pm_user": "root", "pm_addr": "10.xx.xx.22", "pm_password": "-----BEGIN RSA PRIVATE KEY----- $ABC123 -----END RSA PRIVATE KEY-----", "pm_type": "pxe_ssh" } , { "mac": [ "52:54:00:c5:ba:b0" ], "name": "compute_1_at_5b5s34", "capabilities" : "profile:compute", "cpu": "4", "memory": "16384", "disk": "50", "arch": "x86_64", "pm_user": "root", "pm_addr": "10.xx.xx.22", "pm_password": "-----BEGIN RSA PRIVATE KEY----- $ABC123 -----END RSA PRIVATE KEY-----", "pm_type": "pxe_ssh" } , { "mac": [ "52:54:00:b8:5b:aa" ], "name": "contrail-controller_1_at_5b5s34", "capabilities" : "profile:contrail-controller", "cpu": "4", "memory": "16384", "disk": "50", "arch": "x86_64", "pm_user": "root", "pm_addr": "10.xx.xx.22", "pm_password": "-----BEGIN RSA PRIVATE KEY----- $ABC123 -----END RSA PRIVATE KEY-----", "pm_type": "pxe_ssh" } , { "mac": [ "52:54:00:2a:38:f1" ], "name": "contrail-analytics_1_at_5b5s34", "capabilities" : "profile:contrail-analytics", "cpu": "4", "memory": "16384", "disk": "50", "arch": "x86_64", "pm_user": "root", "pm_addr": "10.xx.xx.22", "pm_password": "-----BEGIN RSA PRIVATE KEY----- $ABC123 -----END RSA PRIVATE KEY-----", "pm_type": "pxe_ssh" } , { "mac": [ "52:54:00:fc:b7:67" ], "name": "contrail-analytics-database_1_at_5b5s34", "capabilities" : "profile:contrail-analytics-database", "cpu": "4", "memory": "16384", "disk": "50", "arch": "x86_64", "pm_user": "root", "pm_addr": "10.xx.xx.22", "pm_password": 
"-----BEGIN RSA PRIVATE KEY----- $ABC123 -----END RSA PRIVATE KEY-----", "pm_type": "pxe_ssh" } , { "mac": [ "52:54:00:48:b0:9b" ], "name": "control_1_at_5b5s37", "capabilities" : "profile:control", "cpu": "4", "memory": "16384", "disk": "50", "arch": "x86_64", "pm_user": "root", "pm_addr": "10.xx.xx.25", "pm_password": "-----BEGIN RSA PRIVATE KEY----- $ABC123 -----END RSA PRIVATE KEY-----", "pm_type": "pxe_ssh" } , { "mac": [ "52:54:00:b3:01:b8" ], "name": "compute_1_at_5b5s37", "capabilities" : "profile:compute", "cpu": "4", "memory": "16384", "disk": "50", "arch": "x86_64", "pm_user": "root", "pm_addr": "10.xx.xx.25", "pm_password": "-----BEGIN RSA PRIVATE KEY----- $ABC123 -----END RSA PRIVATE KEY-----", "pm_type": "pxe_ssh" } , { "mac": [ "52:54:00:9a:8c:f8" ], "name": "contrail-controller_1_at_5b5s37", "capabilities" : "profile:contrail-controller", "cpu": "4", "memory": "16384", "disk": "50", "arch": "x86_64", "pm_user": "root", "pm_addr": "10.xx.xx.25", "pm_password": "-----BEGIN RSA PRIVATE KEY----- $ABC123 -----END RSA PRIVATE KEY-----", "pm_type": "pxe_ssh" } , { "mac": [ "52:54:00:8d:3d:d9" ], "name": "contrail-analytics_1_at_5b5s37", "capabilities" : "profile:contrail-analytics", "cpu": "4", "memory": "16384", "disk": "50", "arch": "x86_64", "pm_user": "root", "pm_addr": "10.xx.xx.25", "pm_password": "-----BEGIN RSA PRIVATE KEY----- $ABC123 -----END RSA PRIVATE KEY-----", "pm_type": "pxe_ssh" } , { "mac": [ "52:54:00:9d:9e:57" ], "name": "contrail-analytics-database_1_at_5b5s37", "capabilities" : "profile:contrail-analytics-database", "cpu": "4", "memory": "16384", "disk": "50", "arch": "x86_64", "pm_user": "root", "pm_addr": "10.xx.xx.25", "pm_password": "-----BEGIN RSA PRIVATE KEY----- $ABC123 -----END RSA PRIVATE KEY-----", "pm_type": "pxe_ssh" } ] }
Adding a New Physical Compute Node
The following is a sample instackenv.json for adding a new physical compute node, by importing the physical compute or bare metal server to Ironic.
{ "nodes": [ { "mac": [ "00:1b:21:99:ce:94" ], "name": "physical-compute_5b5s35", "capabilities" : "profile:compute", "pm_user": "ADMIN", "pm_addr": "10.xx.xxx.206", "pm_password": "ADMIN", "pm_type": "pxe_ipmitool" } ] }
The mandatory parameters to import a physical compute or bare metal server to Ironic are the fields shown in the sample above: mac, name, capabilities, pm_user, pm_addr, pm_password, and pm_type. Specify the pxe_ipmitool driver as the pm_type to provision physical servers.
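A typical flow for bringing the new node into the overcloud reuses the commands shown earlier in this topic; the file name below is an example.

openstack baremetal import --json ~/add-compute.json
openstack baremetal configure boot
openstack baremetal node manage <new-node-uuid-or-name>
openstack overcloud node introspect --all-manageable --provide
# Then increase the compute count in contrail-services.yaml and re-run the same
# openstack overcloud deploy command that was used for the original deployment.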
Requirements for Deploying to VMs
The following are required for deploying to VMs. A configuration sketch follows the list.
- Password-less SSH must be set up from the undercloud to all servers that will host overcloud VMs, for the user ‘root’.
- Libvirtd on KVM hosts must be configured to allow TCP sessions without requiring Transport Layer Security (TLS).
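The following is a configuration sketch for these two requirements. The KVM host name is an example, and the libvirt settings shown are one common way to allow TCP sessions without TLS; adjust them to your site's security policy.

# From the undercloud, as the stack user, distribute the SSH key:
ssh-copy-id root@kvm-host-1        # repeat for every KVM host

# On each KVM host, allow libvirt TCP connections without TLS, then restart libvirtd:
#   /etc/libvirt/libvirtd.conf:   listen_tls = 0
#                                 listen_tcp = 1
#                                 auth_tcp = "none"
#   /etc/sysconfig/libvirtd:      LIBVIRTD_ARGS="--listen"
systemctl restart libvirtd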
Sample Heat Templates for NICs
This section provides sample Heat templates for different configurations for NICs.
- Example 1: NIC-1 to control plane; NIC-2 bridged interface
- Example 2: NIC-1 to control plane; NIC-2 and NIC-3 bond interface; NIC-4 other networks
Example 1: NIC-1 to control plane; NIC-2 bridged interface
This sample has the following topology:
- NIC-1 is connected to the control plane provisioning network (connected to an access port on the underlay switch).
- NIC-2 is a bridged interface, and has a unique VLAN tag for each of the other overlay networks.
Underlying switch configuration:
- NIC-1 is connected to the control plane provisioning VLAN access-ports of a switch.
- NIC-2 is connected to trunk ports on the switch. The trunk ports will carry multiple VLAN tags, one each for the following networks:
VLAN-10: External VLAN
VLAN-20: Internal API VLAN
VLAN-30: Storage VLAN
VLAN-40: Storage-MGMT VLAN
VLAN-60: Management VLAN
Figure 3 shows the server NIC configuration for this example.
Figure 3: Server NIC Configuration

NIC Template
The following is the NIC template to configure the setup in this example.
Note: For this setup, the default route is reachable by means of the InternalAPI network.
heat_template_version: 2015-04-30

description: >
  Software Config to drive os-net-config to configure multiple interfaces
  for the compute role.

parameters:
  ControlPlaneIp:
    default: ''
    description: IP address/subnet on the ctlplane network
    type: string
  ExternalIpSubnet:
    default: ''
    description: IP address/subnet on the external network
    type: string
  InternalApiIpSubnet:
    default: ''
    description: IP address/subnet on the internal API network
    type: string
  InternalApiDefaultRoute: # Not used by default in this template
    default: '10.0.0.1'
    description: The default route of the internal api network.
    type: string
  StorageIpSubnet:
    default: ''
    description: IP address/subnet on the storage network
    type: string
  StorageMgmtIpSubnet:
    default: ''
    description: IP address/subnet on the storage mgmt network
    type: string
  TenantIpSubnet:
    default: ''
    description: IP address/subnet on the tenant network
    type: string
  ManagementIpSubnet: # Only populated when including environments/network-management.yaml
    default: ''
    description: IP address/subnet on the management network
    type: string
  ExternalNetworkVlanID:
    default: 10
    description: Vlan ID for the external network traffic.
    type: number
  InternalApiNetworkVlanID:
    default: 20
    description: Vlan ID for the internal_api network traffic.
    type: number
  StorageNetworkVlanID:
    default: 30
    description: Vlan ID for the storage network traffic.
    type: number
  StorageMgmtNetworkVlanID:
    default: 40
    description: Vlan ID for the storage mgmt network traffic.
    type: number
  TenantNetworkVlanID:
    default: 50
    description: Vlan ID for the tenant network traffic.
    type: number
  ManagementNetworkVlanID:
    default: 60
    description: Vlan ID for the management network traffic.
    type: number
  ControlPlaneSubnetCidr: # Override this via parameter_defaults
    default: '24'
    description: The subnet CIDR of the control plane network.
    type: string
  ControlPlaneDefaultRoute: # Override this via parameter_defaults
    description: The default route of the control plane network.
    type: string
  ExternalInterfaceDefaultRoute: # Not used by default in this template
    default: '10.0.0.1'
    description: The default route of the external network.
    type: string
  ManagementInterfaceDefaultRoute: # Commented out by default in this template
    default: unset
    description: The default route of the management network.
    type: string
  DnsServers: # Override this via parameter_defaults
    default: []
    description: A list of DNS servers (2 max for some implementations) that will be added to resolv.conf.
    type: comma_delimited_list
  EC2MetadataIp: # Override this via parameter_defaults
    description: The IP address of the EC2 metadata server.
    type: string

resources:
  OsNetConfigImpl:
    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        os_net_config:
          network_config:
            - type: interface
              name: nic1
              use_dhcp: false
              dns_servers: {get_param: DnsServers}
              addresses:
                - ip_netmask:
                    list_join:
                      - '/'
                      - - {get_param: ControlPlaneIp}
                        - {get_param: ControlPlaneSubnetCidr}
              routes:
                - ip_netmask: 1xx.254.1xx.254/32
                  next_hop: {get_param: EC2MetadataIp}
            - type: vlan
              use_dhcp: false
              vlan_id: {get_param: InternalApiNetworkVlanID}
              device: nic2
              addresses:
                - ip_netmask: {get_param: InternalApiIpSubnet}
              routes:
                - default: true
                  next_hop: {get_param: InternalApiDefaultRoute}
            - type: vlan
              vlan_id: {get_param: ManagementNetworkVlanID}
              device: nic2
              addresses:
                - ip_netmask: {get_param: ManagementIpSubnet}
            - type: vlan
              vlan_id: {get_param: ExternalNetworkVlanID}
              device: nic2
              addresses:
                - ip_netmask: {get_param: ExternalIpSubnet}
            - type: vlan
              vlan_id: {get_param: StorageNetworkVlanID}
              device: nic2
              addresses:
                - ip_netmask: {get_param: StorageIpSubnet}
            - type: vlan
              vlan_id: {get_param: StorageMgmtNetworkVlanID}
              device: nic2
              addresses:
                - ip_netmask: {get_param: StorageMgmtIpSubnet}

outputs:
  OS::stack_id:
    description: The OsNetConfigImpl resource.
    value: {get_resource: OsNetConfigImpl}
The NIC definitions in the corresponding compute template are the following.
network_config:
  - type: interface
    name: nic1
    use_dhcp: false
    dns_servers: {get_param: DnsServers}
    addresses:
      - ip_netmask:
          list_join:
            - '/'
            - - {get_param: ControlPlaneIp}
              - {get_param: ControlPlaneSubnetCidr}
    routes:
      - ip_netmask: 169.xxx.xxx.254/32
        next_hop: {get_param: EC2MetadataIp}
  - type: interface
    name: vhost0
    use_dhcp: false
    addresses:
      - ip_netmask: {get_param: InternalApiIpSubnet}
    routes:
      - default: true
        next_hop: {get_param: InternalApiDefaultRoute}
  - type: vlan
    vlan_id: {get_param: ExternalNetworkVlanID}
    device: nic2
    addresses:
      - ip_netmask: {get_param: ExternalIpSubnet}
  - type: vlan
    vlan_id: {get_param: StorageNetworkVlanID}
    device: nic2
    addresses:
      - ip_netmask: {get_param: StorageIpSubnet}
  - type: vlan
    vlan_id: {get_param: StorageMgmtNetworkVlanID}
    device: nic2
    addresses:
      - ip_netmask: {get_param: StorageMgmtIpSubnet}
Example 2: NIC-1 to control plane; NIC-2 and NIC-3 bond interface; NIC-4 other networks
This sample has the following topology:
- NIC-1 is connected to the control plane provisioning network (connected to an access port on the underlay switch).
- NIC-2 and NIC-3 are connected to the InternalAPI network. These two NICs are part of a bond interface.
- NIC-4 has a unique VLAN tag for each of the other overlay networks. It carries the rest of the networks.
Underlying switch configuration:
- NIC-1 is connected to the control plane provisioning VLAN access-ports of a switch.
- NIC-2 and NIC-3 are connected to access ports on the switch in the InternalAPI VLAN. These switch ports are bundled together as a LAG.
- NIC-4 is connected to trunk ports on the switch. The trunk ports will carry multiple VLAN tags, one each for the following networks:
VLAN-10: External VLAN
VLAN-30: Storage VLAN
VLAN-40: Storage-MGMT VLAN
VLAN-60: Management VLAN
Figure 4 shows the server NIC configuration for this example.
Figure 4: Server NIC Configuration

NIC Template
The following is a snippet of the corresponding NIC template to configure the setup in this example.
Note: For this setup, the default route is reachable by means of the InternalAPI network.
resources:
  OsNetConfigImpl:
    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        os_net_config:
          network_config:
            - type: interface
              name: nic1
              use_dhcp: false
              dns_servers: {get_param: DnsServers}
              addresses:
                - ip_netmask:
                    list_join:
                      - '/'
                      - - {get_param: ControlPlaneIp}
                        - {get_param: ControlPlaneSubnetCidr}
              routes:
                - ip_netmask: 169.xxx.xxx.254/32
                  next_hop: {get_param: EC2MetadataIp}
            - type: linux_bond
              name: bond0
              use_dhcp: false
              addresses:
                - ip_netmask: {get_param: InternalApiIpSubnet}
              routes:
                - default: true
                  next_hop: {get_param: InternalApiDefaultRoute}
              bonding_options: "mode=active-active"
              members:
                - type: interface
                  name: nic2
                - type: interface
                  name: nic3
            - type: vlan
              vlan_id: {get_param: ManagementNetworkVlanID}
              device: nic4
              addresses:
                - ip_netmask: {get_param: ManagementIpSubnet}
            - type: vlan
              vlan_id: {get_param: ExternalNetworkVlanID}
              device: nic4
              addresses:
                - ip_netmask: {get_param: ExternalIpSubnet}
            - type: vlan
              vlan_id: {get_param: StorageNetworkVlanID}
              device: nic4
              addresses:
                - ip_netmask: {get_param: StorageIpSubnet}
            - type: vlan
              vlan_id: {get_param: StorageMgmtNetworkVlanID}
              device: nic4
              addresses:
                - ip_netmask: {get_param: StorageMgmtIpSubnet}

outputs:
  OS::stack_id:
    description: The OsNetConfigImpl resource.
    value: {get_resource: OsNetConfigImpl}
More Template Examples
More template examples are available in the directory:
/home/stack/tripleo-heat-templates/environments/contrail
There are separate templates for control-plane and compute. You can modify the example templates to match your topology.
[stack@instack contrail]$ pwd
/home/stack/tripleo-heat-templates/environments/contrail
[stack@instack contrail]$ ls -lrt | grep nic | grep compute
-rw-rw-r--. 1 stack stack 6136 May 31 15:07 contrail-nic-config-compute-bond-vlan.yaml
-rw-rw-r--. 1 stack stack 5839 May 31 15:07 contrail-nic-config-compute-bond-vlan-dpdk.yaml
-rw-rw-r--. 1 stack stack 5669 May 31 15:07 contrail-nic-config-compute-storage-mgmt.yaml
-rw-rw-r--. 1 stack stack 3864 May 31 15:07 contrail-nic-config-compute-single.yaml
-rw-rw-r--. 1 stack stack 5422 May 31 15:07 contrail-nic-config-compute-dpdk.yaml
-rw-rw-r--. 1 stack stack 5643 Jun 1 11:56 contrail-nic-config-compute-dpdk-bond-vlan.yaml
-rw-rw-r--. 1 stack stack 5661 Jun 2 12:43 contrail-nic-config-compute.yaml
[stack@instack contrail]$ ls -lrt | grep nic | grep -v compute
-rw-rw-r--. 1 stack stack 5568 May 31 15:07 contrail-nic-config-storage-mgmt.yaml
-rw-rw-r--. 1 stack stack 3861 May 31 15:07 contrail-nic-config-single.yaml
-rw-rw-r--. 1 stack stack 6688 May 31 15:07 contrail-nic-config-ovs-bond.yaml
-rw-rw-r--. 1 stack stack 5793 Jun 1 11:46 contrail-nic-config-vlan.yaml
-rw-rw-r--. 1 stack stack 5793 Jun 2 11:54 contrail-nic-config.yaml
What are NIC Templates?
TripleO (OpenStack On OpenStack) provides the flexibility to have different NIC templates for different overcloud roles. For example, there might be differences between the NIC and networking layout for the overcloud-compute-nodes and the overcloud-contrail-controller-nodes.
How NIC Templates Work
The NIC templates provide data to the backend scripts that take care of provisioning the network on the overcloud nodes. The templates are standard Heat templates written in YAML.
The resources section within each template contains all of the networking information for the corresponding overcloud role, including:
- Number of NICs
- Network associated with each NIC
- Static routes associated with each NIC
- Any VLAN configuration which is tied to a particular NIC
- Network associated with each VLAN interface
- Static routes associated with each VLAN
For more information on what each of these sections looks like, see Red Hat documentation: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html/advanced_overcloud_customization/sect-isolating_networks
The Red Hat documentation has many examples of how to define a NIC within the template, and some of that information is used in the examples in this topic.
A limitation in Red Hat Director 10 is that all of the overcloud networks must be stretched at Layer 2 to all of the overcloud nodes. If the overcloud nodes are physical servers that are present in different racks or subnets of an IP fabric, then you’ll have to first stretch all the overcloud networks to the physical servers. One way to do this is to use EVPN. If you have a traditional datacenter topology (non-IP fabric), then you can extend VLANs across the physical computes to extend all the overcloud networks.
Deploying an overcloud using TripleO and Director across multiple subnets is an upstream feature and a work-in-progress at this time. Upstream developers (mostly from Red Hat) are driving this effort. To check the status of this feature, see: https://blueprints.launchpad.net/tripleo/+spec/tripleo-routed-networks-deployment
Common Topologies
One of the most common topologies for a TripleO deployment consists of 3 NICs:
- NIC-1: Carries these networks:
- Provisioning: Untagged
- Management: Tagged
- External: Tagged
- NIC-2: Carries internal-API network
- NIC-3: Carries tagged storage related networks (storage and storage management)
Conventions in this Document
Examples are provided in this document.
- The topology used in the examples has the following constraints:
- The first NIC must be connected to the ControlPlane network.
- The second NIC must have separate VLAN interfaces for every other network.
- With the above limitations, ‘eth1’ is specified as the VlanParentInterface.
- Note that ‘nic-2’ is specified as the interface with multiple VLAN sub-interfaces in the NIC definition template.
- In the current version of RHEL 7.3/7.4, the NICs manifest as eth0, eth1, and so on. Because of this, NIC-2 translates to eth1.
There are several NIC templates within Contrail that are available to users. These templates are named according to the topology that they’re trying to solve, and are available in the environments/contrail/ directory. Please modify these templates according to your topology before deploying Contrail with TripleO/Red Hat Director.
Contrail NIC Templates
As part of deployment, a network (net) template must be provided. The net template files are all available at the same location:
Sample Net Templates
[stack@undercloud contrail]$ ls -lrt | grep contrail-net
-rw-rw-r--. 1 stack stack 1866 Sep 19 17:10 contrail-net-storage-mgmt.yaml
-rw-rw-r--. 1 stack stack 894 Sep 19 17:10 contrail-net-single.yaml
-rw-rw-r--. 1 stack stack 1528 Sep 19 17:10 contrail-net-dpdk.yaml
-rw-rw-r--. 1 stack stack 1504 Sep 19 17:10 contrail-net-bond-vlan.yaml
-rw-rw-r--. 1 stack stack 1450 Sep 19 17:12 contrail-net.yaml
The template files are prepopulated examples that are included with a Contrail package. The file names match the use case that each is trying to solve. For example, use the contrail-net-dpdk.yaml file if your use case includes a DPDK compute. Similarly, use the contrail-net-bond-vlan.yaml file if your topology uses bond interfaces and VLAN subinterfaces that need to be created on top of the bond interfaces.
Please note that these are example files, and you’ll need to modify them to match your topology.
Resource Registry Example
The resource_registry section of the net template file specifies which NIC template must be used for each role:
Sample Resource Registry of Net Template
[stack@undercloud contrail]$ cat contrail-net.yaml
resource_registry:
  OS::TripleO::Compute::Net::SoftwareConfig: contrail-nic-config-compute.yaml
  OS::TripleO::ContrailDpdk::Net::SoftwareConfig: contrail-nic-config-compute-dpdk-bond-vlan.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: contrail-nic-config.yaml
  OS::TripleO::ContrailController::Net::SoftwareConfig: contrail-nic-config.yaml
  OS::TripleO::ContrailAnalytics::Net::SoftwareConfig: contrail-nic-config.yaml
  OS::TripleO::ContrailAnalyticsDatabase::Net::SoftwareConfig: contrail-nic-config.yaml
  OS::TripleO::ContrailTsn::Net::SoftwareConfig: contrail-nic-config-compute.yaml

parameter_defaults:
  ControlPlaneSubnetCidr: '24'
  InternalApiNetCidr: 10.0.0.0/24
  InternalApiAllocationPools: [{'start': '10.0.0.10', 'end': '10.0.0.200'}]
  InternalApiDefaultRoute: 10.0.0.1
  ManagementNetCidr: 10.1.0.0/24
  ManagementAllocationPools: [{'start': '10.1.0.10', 'end': '10.1.0.200'}]
  ManagementInterfaceDefaultRoute: 10.1.0.1
  ExternalNetCidr: 10.2.0.0/24
  ExternalAllocationPools: [{'start': '10.2.0.10', 'end': '10.2.0.200'}]
  EC2MetadataIp: 192.0.2.1 # Generally the IP of the Undercloud
  DnsServers: ["8.8.8.8","8.8.4.4"]
  VrouterPhysicalInterface: vlan20
  VrouterGateway: 10.0.0.1
  VrouterNetmask: 255.255.255.0
  ControlVirtualInterface: eth0
  PublicVirtualInterface: vlan10
  VlanParentInterface: eth1 # If VrouterPhysicalInterface is a vlan interface using vlanX notation
NIC Templates for Control Nodes
In this example, all of the OpenStack controller and the Contrail control plane roles use the NIC template named contrail-nic-config.yaml. Note that the compute roles and the DPDK roles use different NIC templates.
The NIC template files can be accessed at this location:
Sample NIC Templates
[stack@undercloud contrail]$ ls -lrt | grep contrail-nic-config-
-rw-rw-r--. 1 stack stack 5615 Sep 19 17:10 contrail-nic-config-vlan.yaml
-rw-rw-r--. 1 stack stack 5568 Sep 19 17:10 contrail-nic-config-storage-mgmt.yaml
-rw-rw-r--. 1 stack stack 3861 Sep 19 17:10 contrail-nic-config-single.yaml
-rw-rw-r--. 1 stack stack 5669 Sep 19 17:10 contrail-nic-config-compute-storage-mgmt.yaml
-rw-rw-r--. 1 stack stack 3864 Sep 19 17:10 contrail-nic-config-compute-single.yaml
-rw-rw-r--. 1 stack stack 5385 Sep 19 17:10 contrail-nic-config-compute-dpdk.yaml
-rw-rw-r--. 1 stack stack 5839 Sep 19 17:10 contrail-nic-config-compute-bond-vlan.yaml
-rw-rw-r--. 1 stack stack 5666 Sep 19 17:10 contrail-nic-config-compute-bond-vlan-dpdk.yaml
-rw-rw-r--. 1 stack stack 5538 Sep 19 17:10 contrail-nic-config-compute-bond-dpdk.yaml
-rw-rw-r--. 1 stack stack 5132 Sep 19 17:13 contrail-nic-config-compute.yaml
-rw-r--r--. 1 stack stack 5503 Sep 19 17:13 contrail-nic-config-compute-dpdk-bond-vlan.yaml
Just like the net template files, these NIC template files are examples included with the Contrail package, and their names match the use case that each addresses.
Note that these NIC template files are examples, and you may have to modify these according to your cluster’s topology.
Also, these examples call out NIC names in the format nic1, nic2, nic3, and so on (nic<number>). Think of these as variables; Director's backend scripts translate these NIC numbers into actual interface names based on the interface boot order. So if you specify nic1, nic2, and nic3 in the template and the boot order of interfaces is eth0, eth1, and eth2, then the mapping of these nic variables to actual interfaces would look like:
- nic1 mapped to eth0
- nic2 mapped to eth1
- nic3 mapped to eth2
TripleO also provides the flexibility to use actual NIC names (eth0, em1, ens2f, and so on) in the NIC templates instead of using nic1, nic2, nic3, and the like.
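For example, the following sketch shows an interface entry that uses a real device name instead of the nic<number> convention; the device name em1 is illustrative.

- type: interface
  name: em1            # actual device name used directly instead of nic1
  use_dhcp: false
  dns_servers: {get_param: DnsServers}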
Note: A common mistake when defining NIC templates is not setting the boot order of the NICs correctly. Because of this, your deployment might progress beyond the network configuration stage, but there might be connectivity issues because the IP, subnet, and route information might not be configured correctly on the NICs of the overcloud nodes.
This section takes a closer look at the network_config section of the NIC template used by the controllers, contrail-nic-config.yaml.
Sample Network Config for Control Nodes
network_config:
  - type: interface
    name: nic1
    use_dhcp: false
    dns_servers: {get_param: DnsServers}
    addresses:
      - ip_netmask:
          list_join:
            - '/'
            - - {get_param: ControlPlaneIp}
              - {get_param: ControlPlaneSubnetCidr}
    routes:
      - ip_netmask: 169.254.169.254/32
        next_hop: {get_param: EC2MetadataIp}
  - type: vlan
    use_dhcp: false
    vlan_id: {get_param: InternalApiNetworkVlanID}
    device: nic2
    addresses:
      - ip_netmask: {get_param: InternalApiIpSubnet}
    routes:
      - default: true
        next_hop: {get_param: InternalApiDefaultRoute}
  - type: vlan
    vlan_id: {get_param: ManagementNetworkVlanID}
    device: nic2
    addresses:
      - ip_netmask: {get_param: ManagementIpSubnet}
  - type: vlan
    vlan_id: {get_param: ExternalNetworkVlanID}
    device: nic2
    addresses:
      - ip_netmask: {get_param: ExternalIpSubnet}
  - type: vlan
    vlan_id: {get_param: StorageNetworkVlanID}
    device: nic2
    addresses:
      - ip_netmask: {get_param: StorageIpSubnet}
  - type: vlan
    vlan_id: {get_param: StorageMgmtNetworkVlanID}
    device: nic2
    addresses:
      - ip_netmask: {get_param: StorageMgmtIpSubnet}
NIC Control Node Template Subsection: Definition for NIC1
The subsection of the template for NIC1 includes the following.
- The definition for an interface called ‘nic1’
- The DNS server is defined. Make sure that this parameter has a valid value. Most commonly, this variable is assigned a value in the contrail-services.yaml file.
- An IP address and subnet are provided under the ‘addresses’ section. Note that these values are also variables, and the format is $(Network_Name)IP and $(Network_Name)SubnetCidr.
- This means that this particular NIC is on the ControlPlane network. In the background, this NIC might be connected to an access port on a switch for the ControlPlane VLAN.
- In the ‘routes’ section, there’s a /32 route out of this NIC. At the time of planning the networking for your cluster, you may need to provision static routes on the overcloud roles. Use the format mentioned under the ‘routes’ section to specify any such static routes.
Sample NIC1
- type: interface
  name: nic1
  use_dhcp: false
  dns_servers: {get_param: DnsServers}
  addresses:
    - ip_netmask:
        list_join:
          - '/'
          - - {get_param: ControlPlaneIp}
            - {get_param: ControlPlaneSubnetCidr}
  routes:
    - ip_netmask: 169.254.169.254/32
      next_hop: {get_param: EC2MetadataIp}
NIC Template Subsection: Definition for NIC2
The subsection of the template for NIC2 includes the following.
- NIC2 has multiple VLANs defined on it.
- In the background, NIC2 might be connected to a switch’s trunk port, and all of the corresponding VLANs must be allowed on the trunk.
- Because Director-based deployments require the administrator to use a number of networks, it is very common to use VLAN interfaces on the overcloud nodes. Consequently, administrators do not need to provide 6-7 physical NICs on each overcloud node.
- For each VLAN interface, the vlan_id is defined. Note that the vlan_id points to a variable. As with the example for NIC1, these variables can be assigned values in the contrail-net.yaml.
- Another important observation is the setting of the default route. In this example, the default route was provisioned on the VLAN interface in the InternalAPI network. Note that the next hop points to a variable. As with other variables, this variable can be set in the contrail-net.yaml file. The following snippet shows the default route information.
Sample Default Route Information
- type: vlan
  use_dhcp: false
  vlan_id: {get_param: InternalApiNetworkVlanID}
  device: nic2
  addresses:
    - ip_netmask: {get_param: InternalApiIpSubnet}
  routes:
    - default: true
      next_hop: {get_param: InternalApiDefaultRoute}
NIC Templates for Compute Nodes
The NIC definitions for compute roles are slightly different from the definitions for control nodes. This is because Contrail provisions a logical interface called ‘vhost0’ on all compute nodes, and this interface must be provided in the NIC definition file for a compute node. Vhost0 is the logical interface that gets attached to the control data network (the InternalAPI network in a TripleO-based installation).
In the contrail-net.yaml example provided earlier in this topic, the NIC template used for the compute nodes is contrail-nic-config-compute.yaml. The following is the ‘resources’ section of the contrail-nic-config-compute.yaml file:
Sample Resources for Compute Nodes
resources:
  OsNetConfigImpl:
    type: OS::Heat::StructuredConfig
    properties:
      group: os-apply-config
      config:
        os_net_config:
          network_config:
            - type: interface
              name: nic1
              use_dhcp: false
              dns_servers: {get_param: DnsServers}
              addresses:
                - ip_netmask:
                    list_join:
                      - '/'
                      - - {get_param: ControlPlaneIp}
                        - {get_param: ControlPlaneSubnetCidr}
              routes:
                - ip_netmask: 169.254.169.254/32
                  next_hop: {get_param: EC2MetadataIp}
            - type: vlan
              vlan_id: {get_param: InternalApiNetworkVlanID}
              device: nic2
            - type: interface
              name: vhost0
              use_dhcp: false
              addresses:
                - ip_netmask: {get_param: InternalApiIpSubnet}
              routes:
                - default: true
                  next_hop: {get_param: InternalApiDefaultRoute}
            - type: vlan
              vlan_id: {get_param: ManagementNetworkVlanID}
              device: nic2
              addresses:
                - ip_netmask: {get_param: ManagementIpSubnet}
            - type: vlan
              vlan_id: {get_param: ExternalNetworkVlanID}
              device: nic2
              addresses:
                - ip_netmask: {get_param: ExternalIpSubnet}
            - type: vlan
              vlan_id: {get_param: StorageNetworkVlanID}
              device: nic2
              addresses:
                - ip_netmask: {get_param: StorageIpSubnet}
            - type: vlan
              vlan_id: {get_param: StorageMgmtNetworkVlanID}
              device: nic2
              addresses:
                - ip_netmask: {get_param: StorageMgmtIpSubnet}
NIC Compute Node Template Subsection: Definition for NIC1
This section is very similar to the NIC1 definition template for the control nodes. In this example topology, the first NIC for all the compute nodes is connected to the ControlPlane network. Note that this is untagged, so this NIC might be connected to an access port on the underlay switch.
Sample NIC1 for Compute Node
- type: interface
  name: nic1
  use_dhcp: false
  dns_servers: {get_param: DnsServers}
  addresses:
    - ip_netmask:
        list_join:
          - '/'
          - - {get_param: ControlPlaneIp}
            - {get_param: ControlPlaneSubnetCidr}
  routes:
    - ip_netmask: 169.254.169.254/32
      next_hop: {get_param: EC2MetadataIp}
NIC Compute Node Template Subsection: Definition for NIC2
This section is very similar to the NIC2 definition template for the control nodes; however, there are two major differences:
- The VLAN subinterface for InternalApiNetwork does not have an IP address.
- The Vhost0 interface holds the IP address for InternalApiNetwork.
- If you’re using stock TripleO-based installation, then the IP address for the InternalApiNetwork will always be configured on the vhost0 interface.
Sample NIC2 for Compute Node
- type: interface
  name: vhost0
  use_dhcp: false
  addresses:
    - ip_netmask: {get_param: InternalApiIpSubnet}
  routes:
    - default: true
      next_hop: {get_param: InternalApiDefaultRoute}
- type: vlan
  vlan_id: {get_param: InternalApiNetworkVlanID}
  device: nic2
- type: vlan
  vlan_id: {get_param: ManagementNetworkVlanID}
  device: nic2
  addresses:
    - ip_netmask: {get_param: ManagementIpSubnet}
- type: vlan
  vlan_id: {get_param: ExternalNetworkVlanID}
  device: nic2
  addresses:
    - ip_netmask: {get_param: ExternalIpSubnet}
- type: vlan
  vlan_id: {get_param: StorageNetworkVlanID}
  device: nic2
  addresses:
    - ip_netmask: {get_param: StorageIpSubnet}
- type: vlan
  vlan_id: {get_param: StorageMgmtNetworkVlanID}
  device: nic2
  addresses:
    - ip_netmask: {get_param: StorageMgmtIpSubnet}
The following are additional parameters that are required to successfully provision compute nodes. The parameters are handled as variables and are normally specified in the contrail-net.yaml file.
- Network-related parameters:
- Subnet CIDR: You can set the subnet mask of each overcloud network in this file.
- Allocation Pool Range: If set, then the overcloud nodes are allocated IP addresses from the specified range
- Default Route: Set the next hop for the default route in the specified format. In this example, the default route is set for the InternalApi network and the next hop is set to 10.0.0.1.
- VrouterPhysicalInterface: This is the interface on which vhost0 interface gets attached. This may be a physical NIC (e.g. eth2 or enps0f0), or a VLAN interface (e.g. Vlan20)
- VrouterGateway: This is the IP address of the SDN gateway. In many deployments, this is the IP address of the MX Series router that acts as the SDN gateway. This IP address must be reachable via the InternalAPI network.
- VrouterNetmask: subnet mask for the vhost0 interface (this is provisioned in the compute nodes’ config files).
- VlanParentInterface: This is optional, and needed only if vhost0 needs to be attached to a VLAN interface.
Sample NIC2 Additional Parameters for Compute Node
parameter_defaults:
  ControlPlaneSubnetCidr: '24'
  InternalApiNetCidr: 10.0.0.0/24
  InternalApiAllocationPools: [{'start': '10.0.0.10', 'end': '10.0.0.200'}]
  InternalApiDefaultRoute: 10.0.0.1
  ManagementNetCidr: 10.1.0.0/24
  ManagementAllocationPools: [{'start': '10.1.0.10', 'end': '10.1.0.200'}]
  ManagementInterfaceDefaultRoute: 10.1.0.1
  ExternalNetCidr: 10.2.0.0/24
  ExternalAllocationPools: [{'start': '10.2.0.10', 'end': '10.2.0.200'}]
  EC2MetadataIp: 192.0.2.1 # Generally the IP of the Undercloud
  DnsServers: ["8.8.8.8","8.8.4.4"]
  VrouterPhysicalInterface: vlan20
  VrouterGateway: 10.0.0.1
  VrouterNetmask: 255.255.255.0
  ControlVirtualInterface: eth0
  PublicVirtualInterface: vlan10