Configuring the ML2 Plug-in

01-Apr-21

Juniper ML2 drivers support the following types of virtual networks:

  • VLAN-based networks

  • VXLAN-based tunneled networks with EVPN, by using Hierarchical Port Binding

By using features such as LAG and MC-LAG, the ML2 drivers support the orchestration of the aggregated links that connect the OpenStack nodes to the ToR switches.

Configuring the ML2 VLAN Plug-in

The Juniper ML2 VLAN plug-in configures the VLAN of each tenant network on the corresponding switch port attached to the compute node. VM migration is supported from OpenStack Neutron version 2.7.1 onwards.

  • Supported Devices

    EX Series and QFX Series devices.

  • Plug-in Configuration

    To configure OpenStack Neutron to use the VLAN type driver:

    1. On the OpenStack Controller, open the file /etc/neutron/neutron.conf and set core_plugin to one of the following:

      core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
      or
      core_plugin = neutron.plugins.ml2.plugin_pt_ext.Ml2PluginPtExt
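
      For reference, core_plugin belongs in the [DEFAULT] section of neutron.conf; a minimal sketch using the standard ML2 plug-in:

      [DEFAULT]
      core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin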

    2. On the OpenStack Controller, update the ML2 configuration file /etc/neutron/plugins/ml2/ml2_conf.ini to set the Juniper ML2 plug-in as a mechanism driver:

      [ml2]
      type_drivers = vlan
      mechanism_drivers = openvswitch,juniper
      tenant_network_types = vlan
    3. Specify the VLAN range and the physical network alias to be used.

      [ml2_type_vlan]
      network_vlan_ranges = physnet1:1001:1200
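
      The value format is physical-network-alias:vlan-start:vlan-end, and multiple physical networks can be listed, separated by commas. An illustrative sketch (the second alias and range are hypothetical):

      [ml2_type_vlan]
      network_vlan_ranges = physnet1:1001:1200,physnet2:2001:2200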
    4. Restart the OpenStack neutron server for the changes to take effect.
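
      On systemd-based distributions, the restart typically looks like the following sketch (the service name can vary by distribution):

      admin@controller:~$ sudo systemctl restart neutron-server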

    5. Log in to the OpenStack GUI, create a VLAN-based network, and launch VMs. You can verify that the VLAN IDs of the OpenStack network are created on the switch and mapped to the interfaces that were configured through the jnpr_switchport_mapping command.
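
      As an additional check, you can list the VLANs from the Junos CLI on the switch (the prompt and hostname are illustrative):

      admin@ToR1> show vlans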

Configuring ML2 VXLAN Plug-in with EVPN

The ML2 EVPN driver is based on the Neutron hierarchical port binding design. It configures the ToR switches as VXLAN tunnel endpoints (VTEPs), which are used to extend VLAN-based L2 domains across routed networks.

Figure 1: IP Fabric

To provide L2 connectivity between the network ports and compute nodes, the L2 packets are tagged with VLANs and sent to the Top-of-Rack (ToR) switch. The VLANs used to tag the packets are only locally significant to the ToR switch (switch-local VLANs).

At the ToR switch, the switch-local VLAN is mapped to a global VXLAN ID. The L2 packets are encapsulated into VXLAN packets and sent to the VTEP on the destination node, where they are decapsulated and sent to the destination VM.

For L2 connectivity between the endpoints to work over VXLAN, each endpoint must learn about the destination VM and its VTEP. EVPN uses a BGP-based control plane to learn this information. The plug-in assumes that the ToR switches are set up with BGP peering.

For information about configuring BGP on the ToR switches, see BGP Configuration Overview in the Junos documentation.
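
As a minimal sketch (the group name and addresses are placeholders; see the linked overview for a complete procedure), EVPN signaling over an IBGP session on a ToR switch might look like this:

set protocols bgp group EVPN-PEERS type internal
set protocols bgp group EVPN-PEERS local-address 192.0.2.1
set protocols bgp group EVPN-PEERS family evpn signaling
set protocols bgp group EVPN-PEERS neighbor 192.0.2.2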

Supported Devices

QFX5100 only

Plug-in Configuration

To configure the Juniper Neutron plug-in on the Neutron server node, complete the following steps:

  1. Edit the ML2 configuration file /etc/neutron/plugins/ml2/ml2_conf.ini to add the following configuration for the EVPN driver:
    [ml2]
    type_drivers = vlan,vxlan,evpn
    tenant_network_types = vxlan
    mechanism_drivers = jnpr_evpn,openvswitch
    
    [ml2_type_vlan]
    network_vlan_ranges = <ToR_MGMT_IP_SWITCH1>:<vlan-start>:<vlan-end>,
        <ToR_MGMT_IP_SWITCH2>:<vlan-start>:<vlan-end>,
        <ToR_MGMT_IP_SWITCH3>:<vlan-start>:<vlan-end>
    
    [ml2_type_vxlan]
    vni_ranges = <vni-start>:<vni-end>
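
    For illustration, a filled-in version with hypothetical ToR management IPs and ranges might look like this:

    [ml2]
    type_drivers = vlan,vxlan,evpn
    tenant_network_types = vxlan
    mechanism_drivers = jnpr_evpn,openvswitch

    [ml2_type_vlan]
    network_vlan_ranges = 192.0.2.11:1001:1100,192.0.2.12:1001:1100

    [ml2_type_vxlan]
    vni_ranges = 5000:6000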
  2. Restart the OpenStack neutron server to load the EVPN ML2 driver.
  3. Update the plug-in topology database. To add a ToR switch and map the compute and network nodes to the switch ports they are connected to, run the following commands on the neutron server node:
    admin@controller:~$ jnpr_device add -d ToR1 -u root -p root-password -c switch
    admin@controller:~$ jnpr_switchport_mapping add -H Compute1 -n eth1 -s tor1_mgmt_ip-address -p ge-0/0/1
    admin@controller:~$ jnpr_switchport_mapping add -H Compute2 -n eth1 -s tor2_mgmt_ip-address -p ge-0/0/1
    admin@controller:~$ jnpr_switchport_mapping add -H Network1 -n eth1 -s tor3_mgmt_ip-address -p ge-0/0/1
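
    You can confirm the topology entries with the list subcommand, which is also used for verification in the LAG sections below:

    admin@controller:~$ jnpr_switchport_mapping list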
  4. Update the OVS L2 agent on all compute and network nodes.

    All the compute and network nodes need to be updated with the bridge mapping. The physical network name in the bridge mapping must be the management IP address of the ToR switch.

  5. Edit the file /etc/neutron/plugins/ml2/ml2_conf.ini (Ubuntu) or /etc/neutron/plugins/ml2/openvswitch_agent.ini (CentOS) to add the following:
    [ovs]
    bridge_mappings = <ToR_MGMT_IP>:br-eth1

    Here br-eth1 is an OVS bridge that enslaves the eth1 physical port on the OpenStack node connected to ToR1. It provides the physical network connectivity for tenant networks.
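
    If br-eth1 does not already exist on a node, it can be created and given the physical port with standard OVS commands; a sketch (the prompt is illustrative):

    admin@compute1:~$ sudo ovs-vsctl add-br br-eth1
    admin@compute1:~$ sudo ovs-vsctl add-port br-eth1 eth1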

  6. Restart the OVS agent on all the compute and network nodes.
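
    On systemd-based distributions, this typically looks like the following sketch (the service name can vary):

    admin@compute1:~$ sudo systemctl restart neutron-openvswitch-agent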
  7. Log in to the OpenStack GUI to create a virtual network and launch VMs. You can view the VXLAN IDs of the OpenStack network: the switch-local VLANs are created on the switch and mapped to a VXLAN ID on each ToR.

Configuring ML2 Driver with Link Aggregation

You can use a link aggregation group (LAG) between an OpenStack compute node and a Juniper switch to improve network resiliency.

Plug-in Configuration

Figure 2 shows the connectivity of an OpenStack compute node to the ToR switch.

Figure 2: LAG

To configure LAG on Juniper ToR switches, refer to the following link:

Configuring Aggregated Ethernet Links (CLI Procedure)
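
As a minimal sketch (interface and bundle names are illustrative and match the example mappings below; the linked procedure has the full details), a two-member LACP bundle on the switch might be configured as:

set chassis aggregated-devices ethernet device-count 1
set interfaces ge-0/0/2 ether-options 802.3ad ae0
set interfaces ge-0/0/3 ether-options 802.3ad ae0
set interfaces ae0 aggregated-ether-options lacp active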

To configure LAG on the OpenStack compute node:

  1. Connect the data ports of the OpenStack node to the LAG interface on the Juniper switch. For example, bond eth1 and eth2 as follows:
    admin@controller:~$ ovs-vsctl add-bond br-eth1 bond0 eth1 eth2 lacp=active
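
    You can then verify the bond state on the node, for example:

    admin@controller:~$ sudo ovs-appctl bond/show bond0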
  2. Create NIC mapping:
    admin@controller:~$ jnpr_nic_mapping add -H openstack-node-name-or-ip-address -b physnet1 -n nic-name
    admin@controller:~$ jnpr_nic_mapping add -H 10.207.67.144 -b physnet1 -n eth1
    admin@controller:~$ jnpr_nic_mapping add -H 10.207.67.144 -b physnet1 -n eth2
  3. Add switch port mapping with aggregate interface:
    admin@controller:~$ jnpr_switchport_mapping add -H openstack-node-name-or-ip-address -n eth1 -s switch-name-or-ip-address -p port -a lag-name
    admin@controller:~$ jnpr_switchport_mapping add -H openstack-node-name-or-ip-address -n eth2 -s switch-name-or-ip-address -p port -a lag-name
    admin@controller:~$ jnpr_switchport_mapping add -H 10.207.67.144 -n eth1 -s dc-nm-qfx3500-b -p ge-0/0/2 -a ae0
    admin@controller:~$ jnpr_switchport_mapping add -H 10.207.67.144 -n eth2 -s dc-nm-qfx3500-b -p ge-0/0/3 -a ae0
  4. Verify switch port mapping with aggregate details:
    admin@controller:~$ jnpr_switchport_mapping list
    +---------------+--------+-----------------+-----------+-----------+
    | Host          | Nic    | Switch          | Port      | Aggregate |
    +---------------+--------+-----------------+-----------+-----------+
    | 10.207.67.144 | eth1   | dc-nm-qfx3500-b | ge-0/0/2  | ae0       |
    | 10.207.67.144 | eth2   | dc-nm-qfx3500-b | ge-0/0/3  | ae0       |
    +---------------+--------+-----------------+-----------+-----------+
    

Configuring ML2 Driver with Multi-Chassis Link Aggregation

You can configure Multi-Chassis Link Aggregation (MC-LAG) and use it with Juniper Neutron plug-ins.

Figure 3: MC-LAG

Plug-in Configuration

To configure MC-LAG on Juniper switches, refer to the following link:

Configuring Multichassis Link Aggregation on EX Series Switches
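
As a minimal sketch of the switch side (values are placeholders; the full procedure, including ICCP and the interchassis link, is in the linked document), each MC-LAG peer carries mc-ae options on the aggregated interface:

set interfaces ae0 aggregated-ether-options mc-ae mc-ae-id 1
set interfaces ae0 aggregated-ether-options mc-ae chassis-id 0
set interfaces ae0 aggregated-ether-options mc-ae mode active-active
set interfaces ae0 aggregated-ether-options mc-ae status-control active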

To configure LAG on the OpenStack compute node:

  1. Connect the data ports of the OpenStack node to the LAG interfaces on two different Juniper switches. For example, bond eth1 and eth2 as follows:
    admin@controller:~$ ovs-vsctl add-bond br-eth1 bond0 eth1 eth2 lacp=active
  2. Create NIC mapping:
    admin@controller:~$ jnpr_nic_mapping add -H openstack-node-name-or-ip-address -b physnet1 -n nic-name
    admin@controller:~$ jnpr_nic_mapping add -H 10.207.67.144 -b physnet1 -n eth1
    admin@controller:~$ jnpr_nic_mapping add -H 10.207.67.144 -b physnet1 -n eth2
  3. Add switch port mapping with aggregate interface on the OpenStack controller:
    admin@controller:~$ jnpr_switchport_mapping add -H openstack-node-name-or-ip-address -n eth1 -s switch-name-or-ip-address -p port -a lag-name
    admin@controller:~$ jnpr_switchport_mapping add -H openstack-node-name-or-ip-address -n eth2 -s switch-name-or-ip-address -p port -a lag-name
    admin@controller:~$ jnpr_switchport_mapping add -H 10.207.67.144 -n eth1 -s dc-nm-qfx3500-a -p ge-0/0/2 -a ae0
    admin@controller:~$ jnpr_switchport_mapping add -H 10.207.67.144 -n eth2 -s dc-nm-qfx3500-b -p ge-0/0/3 -a ae0
  4. Verify switch port mapping with aggregate details:
    admin@controller:~$ jnpr_switchport_mapping list
    +---------------+--------+-----------------+-----------+-----------+
    | Host          | Nic    | Switch          | Port      | Aggregate |
    +---------------+--------+-----------------+-----------+-----------+
    | 10.207.67.144 | eth1   | dc-nm-qfx3500-a | ge-0/0/2  | ae0       |
    | 10.207.67.144 | eth2   | dc-nm-qfx3500-b | ge-0/0/3  | ae0       |
    +---------------+--------+-----------------+-----------+-----------+
    