Deploying ML2 Plug-in with Red Hat OpenStack

07-Jun-23

Starting in Contrail Networking Release 2011, the ML2 Neutron plug-in is used to integrate OpenStack with Contrail Networking Fabric. Follow these steps to deploy the ML2 plug-in with Red Hat OpenStack Platform 13 (RHOSP 13).

Deploy Contrail Command and CFM without Orchestrator

Follow these steps to deploy Contrail Command with intermediate OpenStack Keystone.

  1. Deploy Contrail Command.
    1. Prepare input data for Contrail Command deployer.
      cat >command_servers.yml <<EOF
      ---
      command_servers:
          server1:
              ip: 192.xx.xx.5
              connection: ssh
              ssh_user: cloud-user
       
              registry_insecure: false
              container_registry: svl-artifactory.juniper.net/contrail-nightly
              container_tag: master-latest
              config_dir: /etc/contrail
      
              contrail_config:
                  database:
                      type: postgres
                      dialect: postgres
                      password: contrail123
                  keystone:
                      assignment:
                          data:
                            users:
                              admin:
                                password: contrail123
                  insecure: true
                  client:
                    password: contrail123
      EOF
      
    2. Deploy Contrail Command deployer.
      sudo docker run -ti --rm --net host --privileged --name contrail_command_deployer \
      -v $(pwd)/command_servers.yml:/command_servers.yml \
      -v $(pwd)/instances.yml:/instances.yml \
      -v $(pwd)/.ssh/id_rsa:/root/.ssh/id_rsa \
      svl-artifactory.juniper.net/contrail-nightly/contrail-command-deployer:master-latest
      
  2. From the Contrail Command UI, deploy Contrail Control nodes and select None as the orchestrator.

Configure Fabric by using Contrail Command

Follow these steps to configure the fabric by using Contrail Command.

Ensure that the following requirements are met.

  • Switches are configured to provide connectivity for RHOSP networking.

  • Names of servers are used as host names in RHOSP deployments.

  • Virtual port groups that are created for deployment on ports used by the fabric conform to the ML2 naming convention.

    • For OVS ports, there is one virtual port group for every control node.

      ML2 naming convention: `vpg#{base64(nodename)}`.

      These virtual port groups are not used for SRIOV ports.

    • For SRIOV ports, there is one virtual port group for every pair of compute node and physical network.

      ML2 naming convention: `vpg#{base64(nodename)}#{base64(physnet)}`

      All SRIOV ports must be tagged with the name of the physical network they are associated with. For example, `label=tenant1`.

    • The UUID of the virtual port group is derived from its name by using the uuid.uuid3(uuid.NAMESPACE_DNS, str(name)) Python function, as shown in the sketch below.
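
      The following is a minimal sketch (not a product utility) showing how the naming convention and UUID derivation described above can be reproduced in Python:

      import base64
      import uuid

      def vpg_name(nodename, physnet=None):
          """Build a virtual port group name per the ML2 naming convention."""
          name = "vpg#" + base64.b64encode(nodename.encode()).decode()
          if physnet is not None:
              # SRIOV VPGs also encode the physical network name
              name += "#" + base64.b64encode(physnet.encode()).decode()
          return name

      # Example values taken from the samples in this document
      ovs_vpg = vpg_name("5c7s5-node1.localdomain")
      sriov_vpg = vpg_name("5c7s5-node1.localdomain", physnet="tenant1")
      print(ovs_vpg, uuid.uuid3(uuid.NAMESPACE_DNS, str(ovs_vpg)))
      print(sriov_vpg, uuid.uuid3(uuid.NAMESPACE_DNS, str(sriov_vpg)))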

    • Virtual port groups are created for all networks used in OOO provisioning.

      The following script for creating virtual port groups is provided in the config-api container. This script is used for creating the infrastructure ports that are needed for RHOSP deployment.

      python /opt/contrail/utils/provision_infra_nw.py --connections <connection.yaml> --fabric <fabricname>
      Sample connection.yaml:
      rhosp-provisioning1:
        cidr: 192.XX.XX.0/24
        gateway: 192.XX.XX.254
        vlan: 801
        servers:
          5c7s5-node1.localdomain:
            5c7-qfx6:
              - xe-0/0/54_0
          5c7s5-node2.localdomain:
            5c7-qfx6:
              - xe-0/0/54_1
      rhosp-int-api1:
        cidr: 10.XX.XX.0/24
        gateway: 10.XX.XX.254
        vlan: 811
        data: True
        servers:
          5c7s5-node1.localdomain:
            5c7-qfx5:
              - xe-0/0/50_0
          5c7s5-node2.localdomain:
            5c7-qfx5:
              - xe-0/0/50_1
      
    • Add servers to Contrail Command.

      The server name must match the hostname that the node will inherit after OOO provisioning is complete; the hostname_map.yaml file (see the RHOSP deployment steps below) defines this mapping.

      Follow these steps to import servers by using the Contrail Command UI.

      1. Navigate to Infrastructure>Servers and click Import.

        The Import Server pop-up is displayed.

      2. To import a server, click Browse and navigate to the local directory and select the .json file.

        Alternatively, you can drag and drop the .json file in the Drag a file here, or browse pane.

      3. Click Import to import the server.

    • Import Node (Server) Profiles.

      Follow these steps to import node profiles by using the Contrail Command UI.

      1. Navigate to Infrastructure>Servers, click the Server Profiles tab, and then click Import.

        The Import Server Profile pop-up is displayed.

      2. To import a server profile, click Browse and navigate to the local directory and select the .json file.

        Alternatively, you can drag and drop the .json file in the Drag a file here, or browse pane.

      3. Click Import to import the server profile.

    • Associate node profiles (server profiles) with servers and assign tags to SRIOV ports only.

      Follow these steps to associate node profiles to servers by using the Contrail Command UI.

      1. Navigate to Infrastructure>Servers.

        The Servers page is displayed.

      2. Select the server you want to assign a server profile to by selecting the check box next to the name of the server.

      3. Click Assign to server profile.

        The Assign Server Profile pop-up is displayed.

      4. Select the server profile from the Server Profile list and click Assign.

        The profile is now assigned.

        Sample Server Profile

        {
          "nodes": [
            {
              "name": "5c7s5-node2.localdomain",
              "type": "baremetal",
              "ports": [
                {
                  "name": "enp94s0f0",
                  "mac_address": "90:e2:ba:4c:65:c9",
                  "switch_name": "5c7-qfx6",
                  "port_name": "xe-0/0/54:1",
                  "switch_id": "10:0e:7e:bd:94:72"
                },
                {
                  "name": "enp94s0f1",
                  "mac_address": "90:e2:ba:4c:65:c9",
                  "switch_name": "5c7-qfx5",
                  "port_name": "xe-0/0/50:1",
                  "switch_id": "10:0e:7e:bd:94:72"
                },
                {
                  "name": "enp94s0f2",
                  "mac_address": "90:e2:ba:4c:65:c9",
                  "switch_name": "5c7-qfx5",
                  "port_name": "xe-0/0/2",
                  "switch_id": "10:0e:7e:bd:94:72"
                },
                {
                  "name": "enp94s0f3",
                  "mac_address": "90:e2:ba:4c:65:c9",
                  "switch_name": "5c7-qfx6",
                  "port_name": "xe-0/0/2",
                  "switch_id": "10:0e:7e:bd:94:72"
                }
              ]
            }
          ]
        }
        
        
        Sample Node Profile

        {
          "resources": [
            {
              "kind": "card",
              "data": {
                "name": "card1",
                "fq_name": ["card1"],
                "interface_map": {
                  "port_info": [
                    {
                      "name": "enp94s0f2",
                      "labels": ["physnet1"]
                    },
                    {
                      "name": "enp94s0f3",
                      "labels": ["physnet2"]
                    }
                  ]
                }
              }
            },
            {
              "kind": "hardware",
              "data": {
                "name": "sriov-server1",
                "fq_name": ["sriov-server1"],
                "card_refs": [
                  {
                    "to": ["card1"]
                  }
                ]
              }
            },
            {
              "kind": "tag",
              "data": {
                "tag_type_name": "label",
                "tag_value": "physnet1",
                "fq_name": ["label=physnet1"]
              }
            },
            {
              "kind": "tag",
              "data": {
                "tag_type_name": "label",
                "tag_value": "physnet2",
                "fq_name": ["label=physnet2"]
              }
            },
            {
              "kind": "node_profile",
              "data": {
                "hardware_refs": [
                  {
                    "to": ["sriov-server1"]
                  }
                ],
                "parent_type": "global-system-config",
                "name": "sriov_1",
                "fq_name": ["default-global-system-config", "sriov_1"],
                "node_profile_vendor": "Sriov-server",
                "node_profile_type": "end-system"
              }
            }
          ]
        }
        

Deploy RHOSP13 with ML2 Plug-in

Follow these steps to deploy RHOSP13 with ML2 plug-in.

For detailed instructions on deployment, see the RHOSP 13 Director Installation and Usage guide.

  1. Prepare the Heat templates working folder.
    # make a copy of the heat templates
    cp -r /usr/share/openstack-tripleo-heat-templates/ tripleo-heat-templates
    # get the latest TF heat templates
    git clone https://github.com/tungstenfabric/tf-tripleo-heat-templates -b stable/queens
    # copy the TF templates into the working folder
    cp -r tf-tripleo-heat-templates/* tripleo-heat-templates/
    
  2. If you use Nova Scheduler Hints for node placement:
    1. Set appropriate capabilities properties for baremetal nodes.

      Example

      openstack baremetal node set --property capabilities='node:overcloud-novacompute-0,boot_option:local' <id1>
      openstack baremetal node set --property capabilities='node:overcloud-controller-0,boot_option:local' <id2>

      For details, see https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/13/html/advanced_overcloud_customization/sect-controlling_node_placement.
      
      
    2. Prepare the scheduler hints environment file (scheduler_hints.yaml).

      parameter_defaults:
        ComputeSchedulerHints:
          'capabilities:node': 'overcloud-novacompute-%index%'
        ComputeSriovSchedulerHints:
          'capabilities:node': 'overcloud-computesriov-%index%'
        ControllerSchedulerHints:
          'capabilities:node': 'overcloud-controller-%index%'
      
  3. If you use custom hostnames during server onboarding in Contrail Command, prepare hostname_map.yaml.
    parameter_defaults:
      HostnameMap:
        overcloud-novacompute-0: b5s3
        overcloud-novacompute-1: b5s4
        overcloud-computesriov-0: b5s1 
    
  4. Modify the compute role file to include the OS::TripleO::Services::NeutronDhcpAgent service, as in the sketch below.
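    For illustration, a minimal sketch of the change, assuming the compute role is defined in tripleo-heat-templates/roles_data_contrail_ml2.yaml (verify which role file your deployment uses):

    - name: Compute
      # ... existing role attributes ...
      ServicesDefault:
        # ... existing services ...
        - OS::TripleO::Services::NeutronDhcpAgent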
  5. Prepare network parameters (params.yaml).
    parameter_defaults:
      # admin user password
      AdminPassword: qwe123QWE
      # Customize all these values to match the local environment
      TenantNetCidr: 10.x.x.0/24
      InternalApiNetCidr: 10.x.x.0/24
      ExternalNetCidr: 10.x.x.0/24
      StorageNetCidr: 10.x.x.0/24
      StorageMgmtNetCidr: 10.x.x.0/24
      # CIDR subnet mask length for provisioning network
      ControlPlaneSubnetCidr: '24'
      # Allocation pools
      TenantAllocationPools: [{'start': '10.x.x.10', 'end': '10.x.x.200'}]
      InternalApiAllocationPools: [{'start': '10.x.x.10', 'end': '10.x.x.200'}]
      ExternalAllocationPools: [{'start': '10.x.x.10', 'end': '10.x.x.200'}]
      StorageAllocationPools: [{'start': '10.x.x.10', 'end': '10.x.x.200'}]
      StorageMgmtAllocationPools: [{'start': '10.x.x.10', 'end': '10.x.x.200'}]
      # Routes
      ControlPlaneDefaultRoute: 192.x.x.1
      InternalApiDefaultRoute: 10.x.x.1
      ExternalInterfaceDefaultRoute: 10.x.x.1
      # Vlans
      InternalApiNetworkVlanID: 710
      ExternalNetworkVlanID: 720
      StorageNetworkVlanID: 730
      StorageMgmtNetworkVlanID: 740
      TenantNetworkVlanID: 3211
      # Services
      EC2MetadataIp: 192.x.x.1  # Generally the IP of the Undercloud
      DnsServers: ["8.x.x.8"]
      NtpServer: 3.europe.pool.ntp.org
    
  6. Adjust the options in the tripleo-heat-templates/environments/contrail/contrail-plugins-ml2.yaml file to match your setup.
      # Counts of nodes:
      ControllerCount: 1
      ComputeCount: 1
      ComputeSriovCount: 1
    
      # ml2/openvswitch_agent.ini: bridge_mappings
      NeutronBridgeMappings:
        - datacentre:br-ex
        - tenant:br-vlans
    
      # ml2/ml2_conf.ini: network_vlan_ranges
      NeutronNetworkVLANRanges:
        - tenant:1:1000
    
      # Sriov role specific options
      ComputeSriovParameters:
        KernelArgs: "iommu=pt intel_iommu=on"
        TunedProfileName: "virtual-host"
        # ml2/openvswitch_agent.ini: bridge_mappings
        NeutronBridgeMappings:
          - datacentre:br-ex
          - tenant:br-vlans
          - tenant1:br-link1
          - tenant2:br-link2
        # ml2/ml2_conf.ini: network_vlan_ranges
        NeutronNetworkVLANRanges:
          - tenant:1:1000
          - tenant1:1:1000
          - tenant2:1:1000
        # ml2/sriov_agent.ini: physical_device_mappings
        NeutronPhysicalDevMappings:
          - tenant1:eth4
          - tenant2:eth5
        NeutronSriovNumVFs:
          - eth4:8
          - eth5:8
        # nova.conf: passthrough_whitelist
        NovaPCIPassthrough:
          - devname: "eth4"
            physical_network: "tenant1"
          - devname: "eth5"
            physical_network: "tenant2"
    
      # Adjust the registry where the Contrail containers are located
      # (contrail-node-init and contrail-openstack-neutron-ml2-init)
      ContrailRegistry: '192.xxx.xx.10:8787'
      ContrailImageTag: 'latest'
      ContrailRegistryInsecure: true
      
      # Address of Contrail Config API in format: ip1,ip2
      # !!! Set to the correct IPs pointing to the existing Contrail cluster.
      # These IPs must be reachable from the overcloud nodes; they are the IPs the Config API listens on.
      ExternalContrailConfigIPs: <Config API IPs>
    
    
      # Tags for the ML2 plug-in to differentiate ports in RHOSP networks
      # (should be the same as those used during server discovery in Contrail Command)
      # !!! Adjust if other values were used in Contrail Command during server discovery
      ContrailManagementPortTags:
        - 'rhosp-provisioning'
        - 'rhosp-external'
        - 'rhosp-storage'
        - 'rhosp-internal'
        - 'rhosp-storage-mgmt'
      ContrailDataPortTags:
        - 'rhosp-data'
    

    This configuration ensures that the ML2 plug-in ignores all Tungsten Fabric ports that contain any one of the following tags.

    'rhosp-provisioning'
    'rhosp-external'
    'rhosp-storage'
    'rhosp-internal'
    'rhosp-storage-mgmt'
  7. Prepare NIC files corresponding to the setup network layout.

    Example for a Compute tenant network that uses VLANs without tunneling.

                  - type: ovs_bridge
                    name: br-vlans
                    members:
                    - type: interface
                      name: nic2
                      primary: true
    

    Example for SRIOV.

                  - type: ovs_bridge
                    name: br-vlans
                    members:
                    - type: interface
                      name: nic2
                      primary: true
                  - type: ovs_bridge
                    name: br-link0
                    members:
                    - type: interface
                      name: nic3
                      primary: true
                  - type: ovs_bridge
                    name: br-link1
                    members:
                    - type: interface
                      name: nic4
                      primary: true
    
  8. Upload the Contrail containers to the undercloud registry.
    docker pull hub.juniper.net/contrail-node-init:2011.xx
    docker pull hub.juniper.net/contrail-openstack-neutron-ml2-init:2011.xx

    # tag the images for the undercloud registry before pushing them
    docker tag hub.juniper.net/contrail-node-init:2011.xx 192.xxx.xx.1:8787/hub.juniper.net/contrail-node-init:2011.xx
    docker tag hub.juniper.net/contrail-openstack-neutron-ml2-init:2011.xx 192.xxx.xx.1:8787/hub.juniper.net/contrail-openstack-neutron-ml2-init:2011.xx

    docker push 192.xxx.xx.1:8787/hub.juniper.net/contrail-node-init:2011.xx
    docker push 192.xxx.xx.1:8787/hub.juniper.net/contrail-openstack-neutron-ml2-init:2011.xx
    
  9. Deploy OpenStack.
    source stackrc
    openstack overcloud deploy --templates tripleo-heat-templates \
      --roles-file tripleo-heat-templates/roles_data_contrail_ml2.yaml \
      -e ~/overcloud_images.yaml \
      -e ~/hostname_map.yaml \
      -e ~/scheduler_hints.yaml \
      -e tripleo-heat-templates/environments/network-isolation.yaml \
      -e tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
      -e tripleo-heat-templates/environments/contrail/contrail-plugins-ml2.yaml \
      -e params.yaml
    
  10. After OpenStack is deployed, save the internal API virtual IP (VIP).
    # Get the RHOSP overcloud internal VIP, for example:
    # ssh to one of the OpenStack overcloud controller nodes
    sudo hiera -c /etc/puppet/hiera.yaml internal_api_virtual_ip
    

Configure Connectivity between RHOSP Internal API Network and Contrail Command Virtual Machines

To configure connectivity between RHOSP internal API network and Contrail Command virtual machines, assign an IP from the network to an interface of the virtual machine.

[stack@command ~]$ cat /etc/sysconfig/network-scripts/ifcfg-eth2
# This file is autogenerated by os-net-config
DEVICE=eth2
ONBOOT=yes
HOTPLUG=no
NM_CONTROLLED=no
BOOTPROTO=none
MTU=1500

[stack@command ~]$ cat /etc/sysconfig/network-scripts/ifcfg-eth2.710
# This file is autogenerated by os-net-config
TYPE=vlan
VLAN=yes
DEVICE=eth2.710
ONBOOT=yes
HOTPLUG=no
NM_CONTROLLED=no
BOOTPROTO=none
MTU=1500
IPADDR=10.1.0.9
NETMASK=255.255.xxx.x
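
After you create the files, bring up the VLAN interface and verify its address. A minimal sketch, assuming the legacy network-scripts tooling shown above:

ifup eth2.710
ip addr show eth2.710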

Add Red Hat OpenStack Orchestrator

You can add Red Hat OpenStack Orchestrator by using the Contrail Command user interface.

Follow these steps to add Red Hat OpenStack Orchestrator.

  1. Navigate to Infrastructure>External Systems.

    The External Systems page is displayed.

  2. Click Add Orchestrator and select RedHat OpenStack from the list.

    The Add OpenStack page is displayed.

  3. Enter the OpenStack Keystone endpoint IP address in the IP address field.
  4. Enter the OpenStack Keystone auth user name in the Username field.
  5. Enter the OpenStack Keystone auth password in the Password field.
  6. Click Additional Configuration and enter the information as given in Table 1.
    Table 1: Additional Configuration

    • Domain Name: Specify the name of the OpenStack project domain. The default value is Default.

    • Protocol: Select the Keystone protocol you want to use for this configuration.

    • URL Version: The URL version for Keystone authentication is /v3 by default.

    • Tenant: Specify the tenant for Keystone authentication. The default value is admin.

    • Region Name: Enter the name of the OpenStack-managed region within the data center. The default value is RegionOne.

    • Public port: Enter the port number used to connect to the Keystone authentication server. The default value is 5000.
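
    Before you add the orchestrator, you can optionally verify the Keystone endpoint and credentials from the command line. A minimal sketch, assuming python-openstackclient is installed and using placeholder values:

    openstack --os-auth-url http://<keystone_endpoint_ip>:5000/v3 \
      --os-username admin --os-password <password> \
      --os-project-name admin \
      --os-user-domain-name Default --os-project-domain-name Default \
      token issue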

  7. Click Add to add the orchestrator.

Create Swift Containers in OpenStack

Create a Swift container named "contrail_container" with public read and list permissions. You can create a Swift container from the OpenStack UI.

Follow these steps to create a Swift container by using the OpenStack UI.

  1. Navigate to Project>Object Store>Containers.

    The Containers page is displayed.

  2. Click +Container to create a container.

    The Create Container pop-up is displayed.

  3. Enter a name for the container (contrail_container) in the Container Name field.
  4. Select Public from the Container Access options to enable anyone with the public URL to gain access to objects in the container.
  5. Click Submit to create the container.
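
Alternatively, you can create the container from the command line. The following is a minimal sketch using the python-swiftclient CLI, assuming the client is installed and the standard OS_* authentication environment variables are set:

swift post contrail_container --read-acl ".r:*,.rlistings"

The ".r:*,.rlistings" read ACL grants public read and list access, matching the Public option in the UI.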

(Optional) Deploy AppFormix and xFlows

Install AppFormix and xFlows by using appformix-ansible-deployer. Ensure that instances.yml has information on OOO and Keystone.

After you have installed Contrail and Red Hat OpenStack, follow these steps to install AppFormix HA.

Before you begin, ensure that Python 3 is installed on the xFlow nodes.

  1. In the contrail_command container, locate the instances.yml file.
    /var/tmp/contrail_cluster/<uuid1>/instances.yml
  2. Make a copy of the /var/tmp/contrail_cluster/<uuid1>/instances.yml file.
  3. Edit instances.yml to include the appformix_controller and appformix_bare_host roles on all nodes that are monitored. Include the appformix_openstack_controller role on the OpenStack node.
  4. Log in to the Contrail Command container and run the installation playbook.
    cd /usr/share/contrail/appformix-ansible-deployer
    . venv/bin/activate
    ansible-playbook -e config_file=<instance_file_path> --skip-tags=install_docker playbooks/install_appformix_ansible.yml
    

    The Ansible files are downloaded and the inventory file is generated in the /opt/software/appformix/inventory directory.

  5. Following the instructions at https://ssd-git.juniper.net/appformix/AppFormix/wikis/appformix-installation-for-openstack-in-ha, add the following to the /opt/software/appformix/inventory/hosts file.

    [appformix_controller]
    <appformix_controller_ip1> keepalived_vrrp_interface=<if>
    <appformix_controller_ip2> keepalived_vrrp_interface=<if>
    <appformix_controller_ip3> keepalived_vrrp_interface=<if>

    Then add the following to the /opt/software/appformix/inventory/group_vars/all file.

    appformix_vip: <appformix_vip_address>
    openstack_auth_url: http://<keystone_auth_host>:5000/v3
    openstack_project_domain_name: Default
    openstack_user_domain_name: Default
    openstack_username: admin
    openstack_password: password
    openstack_project_name: admin
    openstack_tenant_name: admin
    openstack_identity_api_version: 3
    openstack_interface: internal
    openstack_baremetal_api_version: 1.29

    Then run the AppFormix HA playbook from the Contrail Command container.

    # docker exec -it contrail_command bash
    # cd /usr/share/contrail/appformix-ansible-deployer/appformix
    # source venv/bin/activate
    (venv)# cd /opt/software/appformix/
    (venv)# ansible-playbook -i inventory --skip-tags=install_docker contrail-insights-ansible/appformix_openstack_ha.yml
    

Follow these steps to install xFlow HA.

  1. Identify the Contrail cluster ID from the /contrail-clusters API (for example, by using an API client or your browser's debugging tools).

  2. Add the appformix_flows role to the node in the instances.yml file where you want to install xFlows.

    # docker exec -it contrail_command bash
    # cd /usr/share/contrail/appformix-ansible-deployer/xflow
    # source venv/bin/activate
    # bash deploy_insights_flow.sh <instances.yml path> --cluster-id <contrail_cluster_id>
     
    

    Sample instances.yml file snippets.

    In-band installation of xFlows:
    instances:
      host1:
        ip: 10.XX.XX.137
        provider: bms
        roles:
          config:
          analytics:
          openstack:
          appformix_openstack_controller:
      host2:
        ip: 10.XX.XX.136
        provider: bms
        roles:
          appformix_bare_host:
      host3:
        ip: 10.XX.XX.135
        provider: bms
        roles:
          appformix_bare_host:
          appformix_flows:
    ...
    contrail_configuration:
      AUTH_MODE: keystone
      KEYSTONE_AUTH_HOST: 10.XX.XX.137
      KEYSTONE_AUTH_URL_VERSION: /v3
    ...
    xflow_configuration:
      telemetry_in_band_cidr: 1.XX.XX.1/24
      loadbalancer_management_vip: 10.XX.XX.166
      loadbalancer_collector_vip: 1.XX.XX.3
      telemetry_in_band_vlan_id: 51
    

    Sample xflow_configuration for out-of-band installation of xFlows:
    xflow_configuration:
      loadbalancer_collector_vip: 10.XX.XX.166
    
  3. After the AppFormix and xFlows installation is complete, add endpoints.

    Navigate to Infrastructure>Cluster>Advanced Options>Endpoints page in the Contrail Command UI and click Create to add endpoints.

Sample Network Files

  • tripleo-heat-templates/network/config/single-nic-vlans/role.role.j2.yaml

    heat_template_version: queens
    description: >
      Software Config to drive os-net-config to configure VLANs for the {{role.name}} role.
    parameters:
      ControlPlaneIp:
        default: ''
        description: IP address/subnet on the ctlplane network
        type: string
      {%- for network in networks %}
      {{network.name}}IpSubnet:
        default: ''
        description: IP address/subnet on the {{network.name_lower}} network
        type: string
      {%- endfor %}
      {%- for network in networks %}
      {{network.name}}NetworkVlanID:
        default: {{network.vlan}}
        description: Vlan ID for the {{network.name_lower}} network traffic.
        type: number
      {%- endfor %}
      ControlPlaneSubnetCidr: # Override this via parameter_defaults
        default: '24'
        description: The subnet CIDR of the control plane network.
        type: string
      ControlPlaneDefaultRoute: # Override this via parameter_defaults
        description: The default route of the control plane network.
        type: string
    {%- for network in networks %}
    {%- if network.ipv6|default(false) and network.gateway_ipv6|default(false) %}
      {{network.name}}InterfaceDefaultRoute:
        default: '{{network.gateway_ipv6}}'
        description: default route for the {{network.name_lower}} network
        type: string
    {%- elif network.gateway_ip|default(false) %}
      {{network.name}}InterfaceDefaultRoute:
        default: '{{network.gateway_ip}}'
        description: default route for the {{network.name_lower}} network
        type: string
    {%- endif %}
    {%- endfor %}
      DnsServers: # Override this via parameter_defaults
        default: []
        description: A list of DNS servers (2 max for some implementations) that will be added to resolv.conf.
        type: comma_delimited_list
      EC2MetadataIp: # Override this via parameter_defaults
        description: The IP address of the EC2 metadata server.
        type: string
      DnsSearchDomains: # Override this via parameter_defaults
        default: []
        description: A list of DNS search domains to be added (in order) to resolv.conf.
        type: comma_delimited_list
    resources:
      OsNetConfigImpl:
        type: OS::Heat::SoftwareConfig
        properties:
          group: script
          config:
            str_replace:
              template:
                get_file: ../../scripts/run-os-net-config.sh
              params:
                $network_config:
                  network_config:
                  - type: ovs_bridge
    {%- if role.name.startswith('CephStorage') or role.name.startswith('ObjectStorage') or role.name.startswith('BlockStorage') %}
                    name: br-storage
    {%- else %}
                    name: bridge_name
    {%- endif %}
                    use_dhcp: false
                    dns_servers:
                      get_param: DnsServers
                    domain:
                      get_param: DnsSearchDomains
                    addresses:
                    - ip_netmask:
                        list_join:
                        - /
                        - - get_param: ControlPlaneIp
                          - get_param: ControlPlaneSubnetCidr
                    routes:
                    - ip_netmask: 169.254.xxx.xxx/32
                      next_hop:
                        get_param: EC2MetadataIp
                    - default: true
                      next_hop:
                        get_param: ControlPlaneDefaultRoute
                    members:
                    - type: interface
                      name: nic1
                      # force the MAC address of the bridge to this interface
                      primary: true
    {%- for network in networks if network.enabled|default(true) and network.name in role.networks %}
    {%- if network.name not in ["Tenant"] %}
                    - type: vlan
                      vlan_id:
                        get_param: {{network.name}}NetworkVlanID
                      addresses:
                      - ip_netmask:
                          get_param: {{network.name}}IpSubnet
    {%- endif %}
    {%- endfor %}
                  - type: ovs_bridge
                    name: br-vlans
                    members:
                    - type: interface
                      name: nic2
                      primary: true
    outputs:
      OS::stack_id:
        description: The OsNetConfigImpl resource.
        value:
          get_resource: OsNetConfigImpl
    
  • tripleo-heat-templates/network/config/single-nic-vlans/compute-sriov.yaml

    heat_template_version: queens
    description: >
      Software Config to drive os-net-config to configure VLANs for the {{role.name}} role.
    parameters:
      ControlPlaneIp:
        default: ''
        description: IP address/subnet on the ctlplane network
        type: string
      {%- for network in networks %}
      {{network.name}}IpSubnet:
        default: ''
        description: IP address/subnet on the {{network.name_lower}} network
        type: string
      {%- endfor %}
      {%- for network in networks %}
      {{network.name}}NetworkVlanID:
        default: {{network.vlan}}
        description: Vlan ID for the {{network.name_lower}} network traffic.
        type: number
      {%- endfor %}
      ControlPlaneSubnetCidr: # Override this via parameter_defaults
        default: '24'
        description: The subnet CIDR of the control plane network.
        type: string
      ControlPlaneDefaultRoute: # Override this via parameter_defaults
        description: The default route of the control plane network.
        type: string
    {%- for network in networks %}
    {%- if network.ipv6|default(false) and network.gateway_ipv6|default(false) %}
      {{network.name}}InterfaceDefaultRoute:
        default: '{{network.gateway_ipv6}}'
        description: default route for the {{network.name_lower}} network
        type: string
    {%- elif network.gateway_ip|default(false) %}
      {{network.name}}InterfaceDefaultRoute:
        default: '{{network.gateway_ip}}'
        description: default route for the {{network.name_lower}} network
        type: string
    {%- endif %}
    {%- endfor %}
      DnsServers: # Override this via parameter_defaults
        default: []
        description: A list of DNS servers (2 max for some implementations) that will be added to resolv.conf.
        type: comma_delimited_list
      EC2MetadataIp: # Override this via parameter_defaults
        description: The IP address of the EC2 metadata server.
        type: string
      DnsSearchDomains: # Override this via parameter_defaults
        default: []
        description: A list of DNS search domains to be added (in order) to resolv.conf.
        type: comma_delimited_list
    resources:
      OsNetConfigImpl:
        type: OS::Heat::SoftwareConfig
        properties:
          group: script
          config:
            str_replace:
              template:
                get_file: ../../scripts/run-os-net-config.sh
              params:
                $network_config:
                  network_config:
                  - type: ovs_bridge
    {%- if role.name.startswith('CephStorage') or role.name.startswith('ObjectStorage') or role.name.startswith('BlockStorage') %}
                    name: br-storage
    {%- else %}
                    name: bridge_name
    {%- endif %}
                    use_dhcp: false
                    dns_servers:
                      get_param: DnsServers
                    domain:
                      get_param: DnsSearchDomains
                    addresses:
                    - ip_netmask:
                        list_join:
                        - /
                        - - get_param: ControlPlaneIp
                          - get_param: ControlPlaneSubnetCidr
                    routes:
                    - ip_netmask: 169.254.xxx.xxx/32
                      next_hop:
                        get_param: EC2MetadataIp
                    - default: true
                      next_hop:
                        get_param: ControlPlaneDefaultRoute
                    members:
                    - type: interface
                      name: nic1
                      # force the MAC address of the bridge to this interface
                      primary: true
    {%- for network in networks if network.enabled|default(true) and network.name in role.networks %}
    {%- if network.name not in ["Tenant"] %}
                    - type: vlan
                      vlan_id:
                        get_param: {{network.name}}NetworkVlanID
                      addresses:
                      - ip_netmask:
                          get_param: {{network.name}}IpSubnet
    {%- endif %}
    {%- endfor %}
                  - type: ovs_bridge
                    name: br-vlans
                    members:
                    - type: interface
                      name: nic2
                      primary: true
                  - type: ovs_bridge
                    name: br-link0
                    members:
                    - type: interface
                      name: nic3
                      primary: true
                  - type: ovs_bridge
                    name: br-link1
                    members:
                    - type: interface
                      name: nic4
                      primary: true
    
    outputs:
      OS::stack_id:
        description: The OsNetConfigImpl resource.
        value:
          get_resource: OsNetConfigImpl
    

Change History Table

Feature support is determined by the platform and release you are using. Use Feature Explorer to determine if a feature is supported on your platform.

  • Release 2011: Starting in Contrail Networking Release 2011, the ML2 Neutron plug-in is used to integrate OpenStack with Contrail Networking Fabric.