
Using ToR Switches and OVSDB to Extend the Contrail Cluster to Other Instances

Support for ToR Switch and OVSDB Overview

Contrail Release 2.1 and later support extending a cluster to include bare metal servers and other virtual instances connected to a top-of-rack (ToR) switch that supports the Open vSwitch Database Management (OVSDB) protocol. The bare metal servers and other virtual instances can belong to any of the virtual networks configured in the Contrail cluster, facilitating communication with the virtual instances running in the cluster. Contrail policy configurations can be used to control this communication.

The OVSDB protocol is used to configure the ToR switch and to import dynamically learned addresses. VXLAN encapsulation is used in the data plane communication with the ToR switch.

ToR Services Node (TSN)

A ToR services node (TSN) can be provisioned as a role in the Contrail system. The TSN acts as the multicast controller for the ToR switches. The TSN also provides DHCP and DNS services to the bare metal servers or virtual instances running behind ToR switch ports.

The TSN receives all the broadcast packets from the ToR switch, and replicates them to the required compute nodes in the cluster and to other EVPN nodes. Broadcast packets from the virtual machines in the cluster are sent directly from the respective compute nodes to the ToR switch.

The TSN can also act as the DHCP server for the bare metal servers or virtual instances, leasing IP addresses to them, along with other DHCP options configured in the system. The TSN also provides a DNS service for the bare metal servers. Multiple TSN nodes can be configured in the system based on the scaling needs of the cluster.

Contrail ToR Agent

A ToR agent provisioned in the Contrail cluster acts as the OVSDB client for the ToR switch, and all of the OVSDB interactions with the ToR switch are performed by using the ToR agent. The ToR agent programs the different OVSDB tables onto the ToR switch and receives the local unicast table entries from the ToR switch.

The ToR agent receives the configuration information for the ToR switch, translates the Contrail configuration to OVSDB, and populates the relevant OVSDB table entries in the ToR switch.

Contrail recognizes the ToR switch after you configure the tsn and toragent roles.

The typical practice is to run the ToR agent on the TSN node.

Configuration Model

Figure 1 depicts the configuration model used in the system.

Figure 1: Configuration Model


Table 1 maps the Contrail configuration objects to the OVSDB tables.

Table 1: Contrail Objects in the OVSDB

    Contrail Object                  OVSDB Table
    Physical device                  Physical switch
    Physical interface               Physical port
    Logical interface                <VLAN, physical port> binding to logical switch
    Virtual networks                 Logical switch
    Layer 2 unicast route table      Unicast remote and local table
    (none)                           Multicast remote table
    (none)                           Multicast local table
    (none)                           Physical locator table
    (none)                           Physical locator set table
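The OVSDB tables above follow the standard hardware_vtep schema, so they can be inspected directly with stock Open vSwitch tooling. The following is a minimal sketch, assuming the ToR's OVSDB server accepts plain TCP connections (with pssl, connect through the configured SSL channel instead) and that ovsdb-client is installed; the address and port are placeholders:

    # Dump the hardware_vtep database from the ToR switch (placeholder address and port)
    ovsdb-client dump tcp:<tor-ip>:<tor-ovs-port> hardware_vtep

    # The dump includes the tables referenced in Table 1, among them:
    #   Physical_Switch, Physical_Port, Logical_Switch,
    #   Ucast_Macs_Local, Ucast_Macs_Remote, Mcast_Macs_Local, Mcast_Macs_Remote,
    #   Physical_Locator, Physical_Locator_Set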

Control Plane

The ToR agent receives the EVPN route entries for the virtual networks in which the ToR switch ports are members, and adds the entries to the unicast remote table in the OVSDB.

MAC addresses learned in the ToR switch for different logical switches (entries from the local table in OVSDB) are propagated to the ToR agent. The ToR agent exports the addresses to the control node in the corresponding EVPN tables, which are further distributed to other controllers and subsequently to compute nodes and other EVPN nodes in the cluster.

The TSN node receives the replication tree for each virtual network from the control node. It adds the required ToR addresses to the received replication tree, forming its complete replication tree. The other compute nodes receive their replication trees from the control node; these trees include the TSN node.

Data Plane

The data plane encapsulation method is VXLAN. The virtual tunnel endpoint (VTEP) for the bare metal end is on the ToR switch.

Unicast traffic from bare metal servers is VXLAN-encapsulated and forwarded by the ToR switch if the destination MAC address is known within the virtual switch.

Unicast traffic from the virtual instances in the Contrail cluster is forwarded to the ToR switch, where VXLAN is terminated and the packet is forwarded to the bare metal server.

Broadcast traffic from bare metal servers is received by the TSN node. The TSN node uses the replication tree to flood the broadcast packets in the virtual network.

Broadcast traffic from the virtual instances in the Contrail cluster is sent to the TSN node, which replicates the packets to the ToR switches.

Using the Web Interface to Configure ToR Switch and Interfaces

The Contrail Web user interface can be used to configure a ToR switch and the interfaces on the switch. To add a switch, select Configure > Physical Devices > Physical Routers.

The Physical Routers list is displayed.

Click the + symbol to open the Add menu. From the Add menu you can select one of the following:

  • Add OVSDB Managed ToR
  • Add Netconf Managed Physical Router
  • CPE Router
  • Physical Router

To add a physical ToR, select Add OVSDB Managed ToR. The Create window is displayed, as shown in Figure 2. Enter the IP address and VTEP address of the ToR switch. Also configure the TSN and ToR agent names for the ToR.

Figure 2: Create OVSDB Managed ToR


To add the logical interfaces to be configured on the ToR switch, select Configure > Physical Devices > Interfaces.

The Interfaces list is displayed. Click the + symbol. The Add Interface window is displayed, as shown in Figure 3.

In the Add Interface window, enter the name of the logical interface. The name must match the interface name on the ToR switch, for example, ge-0/0/0.10. Also enter other logical interface configuration parameters, such as the VLAN ID, the MAC address and IP address of the bare metal server, and the virtual network to which it belongs.

Figure 3: Add Interface


Provisioning ToR and TSN Using Fab Commands

The ToR agent and TSN can be provisioned using fab commands, if the testbed.py is configured with the following:

  • The env.roledefs section is configured with the tsn and toragent roles. The hosts for these roles should also host a compute node.
  • The env.tor_agent section should be present and configured.

When these requirements are met, you can use the regular set up procedure with fab setup_all to set up the TSN and ToR agent, along with the other Contrail services. You can also use fab commands to add the TSN and ToR agent to existing compute nodes.
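For instance, once the testbed.py is updated, the provisioning flow is as follows; these are the fab tasks named in this document, run from the deployment host:

    # Fresh cluster: provisions the TSN and ToR agent along with the other Contrail services
    fab setup_all

    # Existing cluster: add the TSN and ToR agents defined in testbed.py to compute nodes
    fab add_tsn
    fab add_tor_agent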

Examples: Configuring ToR and TSN in testbed.py

To add the TSN and ToR agent to the testbed.py:

  1. In the env.roledefs section, add the roles tsn and toragent. Be sure to identify the host(s) for those roles, which should be compute node(s). The following is an example:
    env.roledefs = {
        'all': [host1, host2, host3, host4, host5, host6],
        'cfgm': [host1, host2, host3],
        'openstack': [host1, host2, host3],
        'webui': [host2],
        'control': [host1, host3],
        'compute': [host4, host5, host6],
        'tsn': [host4], 
        'toragent': [host4], 
        'collector': [host1, host3],
        'database': [host1, host2, host3],
        'build': [host_build],
    }
    
  2. In the env.tor_agent section, use the following example to configure the ToR agent.
    env.tor_agent = {host4:[{
                    'tor_ip':'1xx.18.90.1',
                    'tor_agent_id':'1',
                    'tor_type':'ovs',
                    'tor_ovs_port':'4321',
                    'tor_ovs_protocol':'pssl',
                    'tor_tsn_ip':'1xx.17.90.4',
                    'tor_tsn_name':'5b7s4',
                    'tor_name':'5b7-qfx2',
                    'tor_tunnel_ip':'34.34.34.34',
                    'tor_vendor_name':'Juniper',
                    'tor_product_name':'QFX5100',
                    'tor_agent_http_server_port': '1234',
                    'tor_agent_ovs_ka': '10000',
                       }
                ]
    }
    

    Two ToR agents provisioned on different hosts are considered redundant to each other if the tor_name and tor_ovs_port in their respective configurations are the same, which means the ToR agents are listening on the same port for SSL connections on both nodes.

  3. Use the task fab setup_all to set up the TSN and ToR agent, along with the other Contrail services.

    Keep in mind the following:

    • When run with the updated testbed.py, the command fab setup_all provisions the roles appropriately; nova-compute is not installed on the tsn nodes.
    • To be able to launch VMs, the cluster must have additional compute nodes that do not host ToR agents or TSNs.
    • The default http_server_port is 8085 for both the tsn and tor-agent roles. If you run a ToR agent on a TSN node, change tor_agent_http_server_port so the two introspect servers do not conflict.
  4. Make sure the cluster has reachability to tor_ip and tor_tunnel_ip, as in the check sketched below.
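    A minimal reachability check, using the hypothetical addresses from the env.tor_agent example above (substitute your own values):

    # From a compute or TSN node, verify both ToR addresses are reachable
    ping -c 3 1xx.18.90.1      # tor_ip (OVSDB management address)
    ping -c 3 34.34.34.34      # tor_tunnel_ip (VXLAN tunnel endpoint)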

Configuring an Existing Compute with TSN/ToR Using Fab

For existing compute nodes, you can use the tasks add_tsn and add_tor_agent to provision TSN and ToR agents that have been added to the testbed.py.

Table 2 lists fab tasks that can be used to configure TSN/ToR in an existing setup.

Table 2: Fab Tasks for Adding TSN/ToR to Existing Nodes

add_tsn
    Provision all of the TSNs configured in the testbed.py.

add_tor_agent
    Provision all of the ToR agents configured in the testbed.py.

add_tor_agent_node
    Add all ToR agents in the specified node. Example: fab add_tor_agent_node:root@<ip>

add_tor_agent_by_id
    Add the specified ToR agent, identified by tor_agent_id. Example: fab add_tor_agent_by_id:1,root@<ip>

add_tor_agent_by_index
    Add the specified ToR agent, identified by its index (position) in the testbed.py. Example: fab add_tor_agent_by_index:0,root@<ip>

add_tor_agent_by_index_range
    Add a group of ToR agents, identified by a range of indices in the testbed.py. Example: fab add_tor_agent_by_index_range:0-2,root@<ip>

delete_tor_agent
    Remove all ToR agents in all nodes.

delete_tor_agent_node
    Remove all ToR agents in the specified node. Example: fab delete_tor_agent_node:root@<ip>

delete_tor_agent_by_id
    Remove the specified ToR agent, identified by tor_agent_id. Example: fab delete_tor_agent_by_id:2,root@<ip>

delete_tor_agent_by_index
    Remove the specified ToR agent, identified by its index (position) in the testbed.py. Example: fab delete_tor_agent_by_index:0,root@<ip>

delete_tor_agent_by_index_range
    Remove a group of ToR agents, identified by a range of indices in the testbed.py. Example: fab delete_tor_agent_by_index_range:0-2,root@<ip>

setup_haproxy_config
    Provision HA Proxy.

  1. To configure an existing compute node as a TSN or a ToR agent, use the following fab tasks:

    fab add_tsn_node:True,user@<ip>

    fab add_tor_agent_node:True,user@<ip>

    Note: Before configuring an existing, already-provisioned node as a TSN or ToR agent, stop the nova-compute service on that node, as shown in the sketch below.
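    A sketch of that sequence; the nova-compute service name is assumed, so adjust it for your distribution:

    # Stop nova-compute on the node being converted (service name assumed)
    service nova-compute stop

    # Provision the TSN and ToR agent roles on that node
    fab add_tsn_node:True,user@<ip>
    fab add_tor_agent_node:True,user@<ip>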

  2. Configure the vRouter limits on the TSN node to match the scaling requirements in the setup.

    The following example shows an update made to the testbed.py before setup, so that the fab task configures the appropriate vRouter options.

    env.vrouter_module_params = {
         host4:{'mpls_labels':'196000', 'nexthops':'521000', 'vrfs':'65536', 'macs':'1000000'},
         host5:{'mpls_labels':'196000', 'nexthops':'521000', 'vrfs':'65536', 'macs':'1000000'}
    }
    

    In the example, the following sizing formulas apply (a worked example follows the list):

    • mpls_labels = (maximum number of VNs * 3) + 4000
    • nexthops = (maximum number of VNs * 4) + number of ToRs + number of compute nodes + 100
    • vrfs = maximum number of VNs
    • macs = maximum number of MACs in a VN
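    As a worked example of these formulas, assuming a hypothetical deployment with 1,000 VNs, 10 ToR switches, and 50 compute nodes:

    # Hypothetical sizing: 1,000 VNs, 10 ToRs, 50 compute nodes
    VNS=1000; TORS=10; COMPUTES=50
    echo "mpls_labels=$(( VNS * 3 + 4000 ))"                # 7000
    echo "nexthops=$(( VNS * 4 + TORS + COMPUTES + 100 ))"  # 4160
    echo "vrfs=$VNS"                                        # 1000
    # macs: maximum number of MACs in a VN (deployment-specific)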

On a TSN node or on a compute node, the currently configured limits can be displayed using vrouter --info.

If the configured limits need to be changed, edit the options in the /etc/modprobe.d/vrouter.conf file with the desired values and restart the node. The following is an example of an updated options statement:

options vrouter vr_mpls_labels=196000 vr_nexthops=521000 vr_vrfs=65536 vr_bridge_entries=1000000
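To confirm the settings, the following sketch uses only commands named in this document:

    # Show the persisted module options
    cat /etc/modprobe.d/vrouter.conf

    # After the restart, display the currently configured limits
    vrouter --info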

Prerequisite Configuration for QFX5100 Series Switch

When using the Juniper Networks QFX5100 Series switches, ensure the following configurations are made on the switch before extending the Contrail cluster.

  1. Enable OVSDB.
  2. Set the connection protocol.
  3. Identify the interfaces that are managed by means of OVSDB.
  4. Configure the controller (when pssl is used). If HA Proxy is used, use the address of the HA Proxy node; when VRRP is used between multiple nodes running HA Proxy, use the VIP. The following is an example (values in angle brackets are placeholders):
    set interfaces lo0 unit 0 family inet address <address>
    set switch-options ovsdb-managed
    set switch-options vtep-source-interface lo0.0
    set protocols ovsdb interfaces <interface-name>
    set protocols ovsdb passive-connection protocol tcp port <port>
    set protocols ovsdb controller <tor-agent-ip> inactivity-probe-duration 10000 protocol ssl port <tor-agent-port>
    
  5. When using SSL to connect, CA-signed certificates must be copied to the /var/db/certs directory on the QFX device. The following example shows one way to generate and copy the certificates; the commands can be run on any server.
    apt-get install openvswitch-common
    ovs-pki init
    ovs-pki req+sign vtep
    scp vtep-cert.pem root@<qfx>:/var/db/certs
    scp vtep-privkey.pem root@<qfx>:/var/db/certs

    When these commands are complete, the cacert.pem file is available in /var/lib/openvswitch/pki/switchca. This is the file to provide in the testbed.py (in env.ca_cert_file).
    

Debug QFX5100 Configuration

You can use the following commands on the QFX switch to show the OVSDB configuration.

show ovsdb logical-switch
show ovsdb interface
show ovsdb mac
show ovsdb controller
show vlans

You can use the agent introspect on the ToR agent and the TSN nodes to show the configuration and operational state of these modules.

  • The TSN module is like any other contrail-vrouter-agent on a compute node, with introspect access available on port 8085 by default. Use the introspect on port 8085 to view operational data such as interface, virtual network, and VRF information, along with their routes.
  • The port on which ToR agent introspect access is available is set in the configuration file provided to the contrail-tor-agent. The ToR agent introspect provides the OVSDB data available through the client interface, in addition to the other data available in a Contrail agent. See the sketch following this list.
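A minimal sketch of fetching the introspect index pages with curl, assuming the default TSN port (8085) and the ToR agent port used in the sample configuration later in this section (9010); the addresses are placeholders:

    # TSN node: default contrail-vrouter-agent introspect port
    curl http://<tsn-ip>:8085/

    # ToR agent node: http_server_port from the ToR agent configuration file
    curl http://<tor-agent-ip>:9010/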

Changes to Agent Configuration File

You can modify agent features by editing the agent configuration file.

In the /etc/contrail/contrail-vrouter-agent.conf file for the TSN, the agent_mode option is available in the DEBUG section to configure the agent to run in TSN mode:

agent_mode = tsn

The following are typical configuration items in a ToR agent configuration file.

[DEFAULT]
agent_name = noded2-1          # Name (formed from the hostname and the ToR ID below)
agent_mode = tor               # Agent mode
http_server_port = 9010        # Port on which introspect access is available

[DISCOVERY]
server = <ip>                  # IP address of the discovery server

[TOR]
tor_ip = <ip>                  # IP address of the ToR to manage
tor_id = 1                     # Identifier for the ToR agent
tor_type = ovs                 # ToR management scheme; only "ovs" is supported
tor_ovs_protocol = tcp         # Transport protocol used to connect to the ToR; tcp or pssl
tor_ovs_port = <port>          # OVSDB server port number on the ToR
tsn_ip = <ip>                  # IP address of the TSN
tor_keepalive_interval = 10000 # Keepalive timer, in milliseconds
ssl_cert = /etc/contrail/ssl/certs/tor.1.cert.pem          # Path to the SSL certificate on the ToR agent; needed for pssl
ssl_privkey = /etc/contrail/ssl/private/tor.1.privkey.pem  # Path to the SSL private key on the ToR agent; needed for pssl
ssl_cacert = /etc/contrail/ssl/certs/cacert.pem            # Path to the SSL CA certificate on the node; needed for pssl
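After editing a configuration file, restart the corresponding agent so the changes take effect. A sketch, assuming conventional service names (in particular, that the ToR agent service name carries its tor_id; verify the exact names on your nodes):

    # On the TSN node, after setting agent_mode = tsn
    service contrail-vrouter-agent restart

    # On the ToR agent node (service name assumed to include the ToR agent ID)
    service contrail-tor-agent-1 restart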

REST APIs

For information regarding REST APIs for physical routers and physical and logical interfaces, see REST APIs for Extending the Contrail Cluster to Physical Routers, and Physical and Logical Interfaces.

Modified: 2017-03-01