Using ToR Switches and OVSDB to Extend the Contrail Cluster to Other Instances

Support for ToR Switch and OVSDB Overview

Contrail Release 2.1 and later support extending a cluster to include bare metal servers and other virtual instances connected to a top-of-rack (ToR) switch that supports the Open vSwitch Database Management (OVSDB) protocol. The bare metal servers and other virtual instances can belong to any of the virtual networks configured in the Contrail cluster, facilitating communication with the virtual instances running in the cluster. Contrail policy configurations can be used to control this communication.

The OVSDB protocol is used to configure the ToR switch and to import dynamically-learned addresses. VXLAN encapsulation is used in the data plane communication with the ToR switch.

ToR Services Node (TSN)

A ToR services node (TSN) can be provisioned as a role in the Contrail system. The TSN acts as the multicast controller for the ToR switches. The TSN also provides DHCP and DNS services to the bare metal servers or virtual instances running behind ToR switch ports.

The TSN receives all the broadcast packets from the ToR switch, and replicates them to the required compute nodes in the cluster and to other EVPN nodes. Broadcast packets from the virtual machines in the cluster are sent directly from the respective compute nodes to the ToR switch.

The TSN can also act as the DHCP server for the bare metal servers or virtual instances, leasing IP addresses to them, along with other DHCP options configured in the system. The TSN also provides a DNS service for the bare metal servers. Multiple TSN nodes can be configured in the system based on the scaling needs of the cluster.

Contrail ToR Agent

A ToR agent provisioned in the Contrail cluster acts as the OVSDB client for the ToR switch, and all of the OVSDB interactions with the ToR switch are performed by using the ToR agent. The ToR agent programs the different OVSDB tables onto the ToR switch and receives the local unicast table entries from the ToR switch.

The ToR agent receives the configuration information for the ToR switch, translates the Contrail configuration to OVSDB, and populates the relevant OVSDB table entries in the ToR switch.

Contrail recognizes the ToR switch after you configure the tsn and toragent roles.

The typical practice is to run the ToR agent on the TSN node.

Configuration Model

Figure 1 depicts the configuration model used in the system.

Figure 1: Configuration Model

Table 1 maps the Contrail configuration objects to the OVSDB tables.

Table 1: Contrail Objects in the OVSDB

Contrail Object                     OVSDB Table
Physical device                     Physical switch
Physical interface                  Physical port
Logical interface                   <VLAN, physical port> binding to logical switch
Virtual networks                    Logical switch
Layer 2 unicast route table         Unicast remote and local table
-                                   Multicast remote table
-                                   Multicast local table
-                                   Physical locator table
-                                   Physical locator set table

Control Plane

The ToR agent receives the EVPN route entries for the virtual networks in which the ToR switch ports are members, and adds the entries to the unicast remote table in the OVSDB.

MAC addresses learned in the ToR switch for different logical switches (entries from the local table in OVSDB) are propagated to the ToR agent. The ToR agent exports the addresses to the control node in the corresponding EVPN tables, which are further distributed to other controllers and subsequently to compute nodes and other EVPN nodes in the cluster.

The TSN node receives the replication tree for each virtual network from the control node. It adds the required ToR addresses to the received replication tree, forming its complete replication tree. The other compute nodes receive their replication trees from the control node; these trees include the TSN node.

Data Plane

The data plane encapsulation method is VXLAN. The virtual tunnel endpoint (VTEP) for the bare metal end is on the ToR switch.

Unicast traffic from bare metal servers is VXLAN-encapsulated by the ToR switch and forwarded, if the destination MAC address is known within the virtual switch.

Unicast traffic from the virtual instances in the Contrail cluster is forwarded to the ToR switch, where VXLAN is terminated and the packet is forwarded to the bare metal server.

Broadcast traffic from bare metal servers is received by the TSN node. The TSN node uses the replication tree to flood the broadcast packets in the virtual network.

Broadcast traffic from the virtual instances in the Contrail cluster is sent to the TSN node, which replicates the packets to the ToR switches.

Using the Web Interface to Configure ToR Switch and Interfaces

The Contrail Web user interface can be used to configure a ToR switch and the interfaces on the switch. To add a switch, select Configure > Physical Devices > Physical Routers.

The Physical Routers list is displayed.

Click the + symbol to open the Add menu. From the Add menu you can select one of the following:

  • Add OVSDB Managed ToR

  • Add Netconf Managed Physical Router

  • CPE Router

  • Physical Router

To add a physical ToR, select Add OVSDB Managed ToR. The Create window is displayed, as shown in Figure 2. Enter the IP address and VTEP address of the ToR switch. Also configure the TSN and ToR agent names for the ToR.

Figure 2: Create OVSDB Managed ToR

To add the logical interfaces to be configured on the ToR switch, select Configure > Physical Devices > Interfaces.

The Interfaces list is displayed. Click the + symbol. The Add Interface window is displayed, as shown in Figure 3.

In the Add Interface window, enter the name of the logical interface. The name must match the name on the ToR switch, for example, ge-0/0/0.10. Also enter the other logical interface configuration parameters, such as the VLAN ID, the MAC address and IP address of the bare metal server, and the virtual network to which it belongs.

Figure 3: Add Interface

Configuration Parameters for Provisioning ToR and TSN

This section presents the configuration parameters for different methods of provisioning ToR and TSN.

The following information can be provided for each ToR agent.

  • IP address of the ToR

  • a unique numeric identifier for the ToR

  • an optional, unique name for the ToR agent

  • the OVS protocol (tcp or pssl)

  • the OVS port

    • when the OVS protocol is tcp, the port is the TCP port used to connect to the ToR

    • when the OVS protocol is pssl, the port is the SSL port on which the ToR agent listens for connections from the ToR

  • IP address of the TSN node for the ToR

  • name of the TSN node

  • IP address of the data tunnel endpoint

  • HTTP server port of the ToR agent, used to access introspect data

  • vendor name for ToR (optional)

  • product name of ToR switch (optional)

  • OVS keepalive timeout (optional)

Inventory Format ToR and TSN

Indicate the compute node to act as TSN.

JSON Format ToR and TSN

If you are provisioning using JSON, the following examples show the JSON format.

For ToR in server.json.

For TSN in server.json.

Testbed.py Format ToR and TSN

Starting with Contrail 4.0, if you are provisioning using SM-Lite, you can provision with JSON or testbed.py. The following is the testbed.py format.

The ToR agent and TSN can be provisioned using the testbed.py configured with the following:

  • The env.roledefs section is configured with the tsn and toragent roles. The hosts assigned to these roles should also host a compute node.

  • The env.tor_agent section should be present and configured.

For ToR:

For TSN:
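
As an illustration, the ToR and TSN entries in testbed.py might resemble the following sketch. The hostnames, addresses, and port values are placeholders, and the exact key names can vary by release, so verify them against the wiki page referenced below.

    # host4 is defined earlier in testbed.py, for example host4 = 'root@10.1.1.4'
    env.roledefs = {
        # ... other roles ...
        'tsn':      [host4],   # host4 must also carry the compute role
        'toragent': [host4],
    }

    env.tor_agent = {host4: [{
        'tor_ip'                    : '10.1.1.10',   # management IP address of the ToR switch
        'tor_agent_id'              : '1',           # unique numeric identifier for the ToR
        'tor_agent_name'            : 'node-h4-1',   # optional unique name for the ToR agent
        'tor_type'                  : 'ovs',
        'tor_ovs_protocol'          : 'tcp',         # tcp or pssl
        'tor_ovs_port'              : '6632',        # TCP port on the ToR, or SSL listening port for pssl
        'tor_tsn_ip'                : '10.1.1.4',    # IP address of the TSN node
        'tor_tsn_name'              : 'node-h4',     # name of the TSN node
        'tor_name'                  : 'qfx5100-1',   # name of the ToR switch
        'tor_tunnel_ip'             : '10.1.1.10',   # data tunnel endpoint (VTEP) address
        'tor_vendor_name'           : 'Juniper',     # optional
        'tor_product_name'          : 'QFX5100',     # optional
        'tor_agent_http_server_port': '9010',        # introspect HTTP server port
        'tor_agent_ovs_ka'          : '10000',       # OVS keepalive timeout in ms (optional)
    }]}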

For more information, see https://github.com/Juniper/contrail-controller/wiki/Baremetal-Support .

Prerequisite Configuration for QFX5100 Series Switch

When using the Juniper Networks QFX5100 Series switches, ensure the following configurations are made on the switch before extending the Contrail cluster.

  1. Enable OVSDB.

  2. Set the connection protocol.

  3. Identify the interfaces that are managed by means of OVSDB.

  4. Configure the controller (when pssl is used). If HAProxy is used, use the address of the HAProxy node; when VRRP is used between multiple nodes running HAProxy, use the VIP. An example follows this list.

  5. When using SSL to connect, CA-signed certificates must be copied to the /var/db/certs directory on the QFX device. The example after this list shows one way to generate the certificates; the commands can be run on any server.
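
Taken together, steps 1 through 4 might look similar to the following Junos snippet; the interface name, addresses, and ports are placeholders to adapt to your deployment.

    set switch-options ovsdb-managed
    set switch-options vtep-source-interface lo0.0
    # interfaces whose ports are managed through OVSDB
    set protocols ovsdb interfaces xe-0/0/0
    # connection protocol when the ToR agent connects over TCP
    set protocols ovsdb passive-connection protocol tcp port 6632
    # controller entry when pssl is used; point it at the ToR agent, the HAProxy node, or the VRRP VIP
    set protocols ovsdb controller 10.1.1.4 protocol ssl port 6632

For step 5, one way to generate the certificates is with the ovs-pki utility shipped with Open vSwitch; the QFX management address is a placeholder.

    ovs-pki init
    ovs-pki req+sign vtep
    scp vtep-cert.pem vtep-privkey.pem root@<qfx-mgmt-address>:/var/db/certs/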

Debug QFX5100 Configuration

You can use the following commands on the QFX switch to show the OVSDB configuration.
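
For example, commands along the following lines display the OVSDB state on an OVSDB-managed QFX switch; the exact set of commands and their output depend on the Junos release.

    show ovsdb controller
    show ovsdb logical-switch
    show ovsdb interface
    show ovsdb mac
    show ovsdb virtual-tunnel-end-point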

You can use the agent introspect on the ToR agent and the TSN nodes to show the configuration and operational state of these modules.

  • The TSN module is like any other contrail-vrouter-agent on a compute node, with introspect access available on port 8085 by default. Use the introspect on port 8085 to view operational data such as interface, virtual network, and VRF information, along with their routes.

  • The port on which the ToR agent introspect is available is set in the configuration file provided to contrail-tor-agent. The ToR agent introspect provides the OVSDB data available through the client interface, in addition to the other data available in a Contrail agent.
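
For example, assuming the default ports, the introspect pages can be fetched with a browser or with curl; the node addresses are placeholders.

    # TSN node: contrail-vrouter-agent introspect, port 8085 by default
    curl http://<tsn-node-ip>:8085/              # introspect index page
    curl http://<tsn-node-ip>:8085/Snh_ItfReq    # interface records
    # ToR agent: use the http_server_port value from its configuration file
    curl http://<toragent-node-ip>:9010/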

Changes to Agent Configuration File

You can change the behavior of the agents by modifying their configuration files.

In the /etc/contrail/contrail-vrouter-agent.conf file for the TSN, the agent_mode option is available in the DEBUG section to configure the agent to run in TSN mode:

agent_mode = tsn

The following are typical configuration items in a ToR agent configuration file.
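
A minimal sketch of such a file follows; the values shown are placeholders, and option names can vary slightly between releases, so compare against the file generated by your provisioning tool.

    [DEFAULT]
    agent_name = node-1-1            # name formed from the hostname and the ToR identifier
    agent_mode = tor                 # run this agent as a ToR agent
    http_server_port = 9010          # introspect access port
    log_file = /var/log/contrail/contrail-tor-agent-1.log

    [TOR]
    tor_ip = 10.1.1.10               # management IP address of the ToR switch
    tor_id = 1                       # unique identifier for this ToR agent
    tor_type = ovs                   # ToR management scheme; only ovs is supported
    tor_ovs_protocol = tcp           # tcp or pssl
    tor_ovs_port = 6632              # OVSDB port on the ToR (tcp) or local SSL listening port (pssl)
    tsn_ip = 10.1.1.4                # IP address of the TSN node
    tor_keepalive_interval = 10000   # OVS keepalive timeout, in milliseconds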

REST APIs

For information regarding REST APIs for physical routers and physical and logical interfaces, see REST APIs for Extending the Contrail Cluster to Physical Routers, and Physical and Logical Interfaces.