Configuration Walkthrough

VMware NSX-T and Juniper Apstra Integration Setup - Prerequisite Steps

  • Ensure that NSX-T Manager is installed and that a management address is assigned so you can access it through SSH or the user interface (UI). For this solution, a single-node NSX-T Manager cluster is deployed. However, for production deployments, deploying additional nodes is recommended, as described in the NSX-T Data Center Installation Guide.
Figure 1: Transport Node Networking
  • Add all requisite ESXi hosts in VMware vSphere. Note that only four ESXi hosts are required for this use case, as shown in Figure 1.
Note:

Configuring VMware vSphere and adding hosts to the vSphere Client is not in the scope of this guide.

For detailed information on VMware NSX-T, refer to the VMware NSX-T 3.2 Guide.

VMware NSX-T Manager: Create Tunnel Endpoint Pools

Tunnel endpoint (TEP) addresses are carried in the outer (external) IP header and uniquely identify the hypervisor hosts that originate and terminate the NSX-T encapsulation of overlay frames.

To create a TEP pool, log into VMware NSX-T Manager and navigate to Networking > IP Management > IP Address Pools > IP Address Pools.

Figure 2: Create TEP Pool in NSX-T

VMware NSX-T Manager: Add vSphere to NSX-T Manager

Next, add the vSphere server as a compute manager in NSX-T Manager. NSX-T Manager then retrieves the inventory of ESXi hosts that will be used as compute and Edge transport nodes.

To add the vSphere Server, log into NSX-T Manager and navigate to System > Fabric > Compute Managers.

Figure 3: Add vSphere Details as Compute Manager
Note:

If adding vSphere to NSX-T Manager fails with a "Certificate of Compute Manager not valid" error, follow the VMware KB article to validate the vSphere certificate.

VMware vSphere: Configure VDS On ESXi Host

A vSphere Distributed Switch (VDS) provides centralized management and monitoring of the networking configuration of all hosts associated with the switch. For more information, refer to the VMware documentation.

In VMware vSphere, under Networking, right-click the data center and select Distributed Switch > New Distributed Switch.

Create three Distributed Port Groups (DPGs) on this vSphere Distributed Switch (VDS) and enable VLAN trunking on all of them. The configuration should be as follows:

On the ESXi1_4 host (which hosts the NSX-T Edge node and is connected to the border leaf switches), the following VMNICs are configured in the lab:

  1. Overlay VLAN: LAG1 (vmnic3+vmnic7). This is the aggregate Ethernet interface to border leaf switches.
  2. Left-uplink: vmnic2. This is the routed interface to border leaf1 et-0/0/0:0.
  3. Right-uplink: vmnic6. This is the routed interface to border leaf2 et-0/0/0:0.
Note:

The VMNICs above can differ depending on the setup. Ensure that you select the appropriate switch-interface-to-VMNIC mapping.

Figure 4: Physical Adapter Configured on ESXi Host
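
On the fabric side, LAG1 must terminate on an LACP-enabled aggregated Ethernet (AE) interface on the leaf switches; Apstra renders this configuration when the connectivity templates are assigned later in this guide. The following is a minimal, illustrative sketch only; the member interface and AE names are placeholders, and Apstra generates the actual ESI-LAG details:

  # Illustrative sketch -- interface names are placeholders; Apstra renders the real ESI-LAG.
  set interfaces ae1 aggregated-ether-options lacp active
  set interfaces xe-0/0/10 ether-options 802.3ad ae1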

1. Create a VDS

In VMware vSphere, navigate to Networking and create a VDS (named NSX-T-Edge-VDS in this case) on the Edge node host, then assign the default uplinks. There are four default uplinks.

Figure 5: Adding NSX-T-Edge VDS

2. Configure LACP lag1

In VMware vSphere, under Networking, navigate to NSX-T-Edge-VDS > Configure > LACP and configure LACP lag1.

Figure 6: Configuring LACP on NSX-T-Edge-VDS

3. Assign VMNICs to Uplinks and LAG Ports

In VMware vSphere, from the NSX-T-Edge-VDS Actions menu, use Add and Manage Hosts to assign the VMNICs to the uplinks and LAG ports.

Each host must be connected to the fabric as described in the Solution Architecture section.

Figure 7: Add Hosts to NSX-T-Edge-VDS Showing VMNICs Assigned

4. Create Three Distributed Port Groups

In VMware vSphere, perform the following:

  1. On the VDS, create the following distributed port groups:
    • Left-Uplink
    • Overlay
    • Right-Uplink
  2. Enable VLAN trunking on all the port groups.
Figure 9: Adding Distributed Port Groups

Below are the three Distributed Port Groups added to the NSX-T-Edge-VDS switch.

  • Left-Uplink connects to Border Leaf1.
Figure 10: Adding Left-Uplink Distributed Port Group
  • Right-Uplink connects to Border Leaf2.
  • The Overlay link is used for VLAN transport traffic.
Figure 14: Adding Overlay Distributed Port Group
Figure 15: Assigning Uplink Failover Order for Overlay

5. Review VDS

After you configure the NSX-T-Edge-VDS switch, it should show the port groups configured above. The Edge VM is assigned to the relevant port groups, and the physical adapters on the ESXi host are now allocated to the NSX-T-Edge-VDS.

Figure 16: NSX-T Edge VM Created in vSphere
Figure 17: Physical Adapters on ESXi Host Assigned to NSX-T-Edge-VDS

VMware NSX-T: Configure the Uplink Profiles for the Host and Edge Nodes

Now that the VDS is configured within vSphere, the uplink profiles must be created within NSX-T. The profiles created correspond to the uplinks Overlay, Edge-Right, and Edge-Left.

Note:

NSX-T Edge-left connects to Border Leaf-1 and NSX-T Edge-right connects to Border Leaf-2 for uplink/BGP redundancy.

  • Create Three Uplink Profiles
  1. In NSX-T Manager, navigate to System > Fabric > Profiles > Uplink Profiles, and then click ADD.
  2. Overlay-profile: VLAN 50 maps to lag1.
    Figure 19: Overlay Profile with VLAN 50
  3. Edge-left-uplink profile: VLAN 100 maps to uplink1.
  4. Edge-right-uplink profile: VLAN 200 maps to uplink2.

VMware vSphere: Add Transport Node Hosts to VDS

For the transport node hosts, the VDS should be configured so that the hosts can connect to the overlay transport network.

In VMware vSphere, under Networking, right-click the VDS created in the VMware vSphere: Configure VDS On ESXi Host section and add all the hosts that form part of the transport nodes to the VDS. Assign the respective VMNICs that will be used for the overlay uplink (LAG link).

Figure 24: Configure MTU on NSX-T-Edge-VDS

VMware NSX-T: Prepare the Compute Cluster

For an ESXi host to be part of the NSX-T overlay, it must first be added to the NSX-T fabric.

A fabric node is a node that is registered with the NSX-T management plane and has NSX-T modules installed.

  1. In NSX-T Manager, navigate to System > Fabric > Nodes > Host Transport Nodes. Select the appropriate vSphere instance from the Managed by list under Host Transport Nodes.
    Figure 25: NSX-Manager Transport Nodes
  2. Select the ESXi host and then click Configure NSX.
    Figure 26: Configure NSX VDS on Transport Nodes
    Figure 27: ESXi Configure NSX
  3. Click Finish.
  4. Repeat steps 1 through 3 for all the ESXi hosts that need to be configured as Transport Nodes of the NSX-T cluster.

VMware NSX-T: Transport Nodes-Tunnel IPs

After NSX-T is configured on the nodes, the hosts should report the NSX configuration as “Success” and the node status as “Up.” The NSX version will also be displayed.

Note the TEP IP addresses assigned to each node, as they are required in later steps. These addresses should come from the TEP address pool that was configured earlier.

Figure 28: TEP Pools Assigned to ESXi

VMware NSX-T: Deploy NSX Edge Node and Create Edge Cluster

Next, the NSX Edge VM must be created. This will be used for north-south communication and BGP peering with the fabric.

  • Create the Edge VM
  1. Log on to NSX-T Manager and navigate to System > Fabric > Edge Transport Nodes.
  2. Click +ADD EDGE NODE.
    Figure 29: NSX-T Edge Transport Node
  3. Name the Edge VM edge01.
    Figure 30: Adding Edge Node
  4. Enter the Credentials for the NSX Edge VM.

    Note down the credentials to use them in the later steps.

    Figure 31: Adding Credentials for NSX Edge VM
  5. In the next step, select the Compute Manager, Cluster, and Datastore on which to deploy the Edge VM.
  6. Next, configure the Edge VM Network Settings, such as the management IP, default gateway IP, DNS, and NTP server for the edge node.
    • Within vSphere, a VDS was created with three uplinks. In NSX-T, create one NSX-T virtual switch (N-VDS) for each of these uplinks: Overlay, Edge-Left, and Edge-Right.
      • Name the first NSX-T VDS:

        1. Name the first NSX-T VDS as nvds-overlay.
        2. Set transport zone to Overlay-TZ.
        3. Set the Uplink Profile to overlay-profile.
        4. Select TEP-Pool for the IP Pool.
        Figure 32: NVDS Overlay for Edge Node
      • Name the Second NSX-T VDS:
        1. Name the second NSX-T VDS as nvds-left.
        2. Set the Transport Zone to Uplink-TZ.
        3. Set the Uplink Profile to edge-left-uplink.
        Figure 33: NVDS Left for Edge Node
      • Name the Third NSX-T VDS:
        1. Name the third NSX-T VDS as nvds-right.
        2. Set the Transport Zone to Uplink-TZ.
        3. Set the Uplink Profile to edge-right-uplink.
        Figure 34: NVDS Right for Edge Node
  • Verify NSX-T Edge Creation

A success message is displayed once the NSX-T Edge is created. Verify that the TEP IP address is assigned from the previously configured TEP pool.

  • Add Edge Cluster
  1. In NSX-T Manager, navigate to System > Fabric > Nodes > Edge Clusters > ADD.
  2. Name the cluster as edge-cluster1.
  3. Select nsx-default-edge-high-availability-profile for the Edge Cluster Profile.
  4. Set the Member Type to Edge Node and add edge01 as a member of edge-cluster1.
Figure 35: Edge01 Node Added to Edge-cluster1
Figure 36: Edge01 Node Added to Edge-Cluster1
  • Verify Edge Cluster

To verify that SSH connectivity exists and that the credentials are set up correctly, SSH to the Edge node and log in.

Figure 37: SSH Connectivity to Edge Node

VMware NSX-T: Create a T-1 Gateway

A Tier-1 Gateway is a logical router that provides East-West communication between VMs in the NSX-T domain.

Create a Tier-1 Gateway

In NSX-T Manager, navigate to Networking > Connectivity > Tier-1 Gateways > ADD TIER-1 GATEWAY and then enter the gateway name as T1-1.

Figure 38: NSX-T Tier-1 Gateway

VMware NSX-T: Create Logical Segments

Segments are virtual L2 networks, and VMs are launched in segments. Segments are connected to the T1 Gateway to enable connectivity between the VMs.

  1. In NSX-T Manager, navigate to Networking > Connectivity > Segments > Segments > ADD SEGMENT.
  2. Name the first segment as vn11.
  3. Select T1-1 | Tier1 from the Connected Gateway list to designate the Tier-1 gateway.
  4. Select Overlay-TZ under Transport Zone to specify the transport zone for the overlay.
  5. Enter the subnet to be used: 10.9.11.1/24.
Figure 39: Create Logical Segments

Add Another Segment

  1. Name the second segment as vn22.
  2. Select T1-1 | Tier1 under Connected Gateway to designate the Tier-1 gateway.
  3. Select Overlay-TZ under Transport Zone to specify the transport zone for the overlay.
  4. Enter the subnet to be used: 10.9.22.1/24.
Figure 40: Create Logical Segments

VMware NSX-T: Create VLAN Backed Logical Segments

A VLAN-backed segment enables the Tier-0 gateway to establish BGP sessions with the fabric. The VLAN-backed segments serve as the north-south data path between the VMs in NSX and the rest of the data center fabric.

  • Create One VLAN-Backed Segment for Each Uplink:
  1. In NSX-T Manager, navigate to Networking > Connectivity > Segments > Segments > ADD SEGMENT.
  2. Name the first segment as uplink-seg-100.
  3. Do not select a gateway under Connected Gateway.
  4. Select Uplink-TZ under Transport Zone to specify the VLAN transport zone for the uplink.
  5. Do not enter a subnet to be used.
  6. Under VLAN, associate VLAN 100.

  • Add Another VLAN Segment
  1. Name the second segment as uplink-seg-200.
  2. Do not select a gateway under Connected Gateway.
  3. Select Uplink-TZ under Transport Zone to specify the VLAN transport zone for the uplink.
  4. Do not enter a subnet to be used.
  5. Under VLAN, associate VLAN 200.
Figure 41: Create Uplink Segments for Left and Right Links to Border Leaf Switches

VMware vSphere: Confirm the Creation of the Logical Segments

The logical segments created in the previous steps should be reflected in the vSphere Client. To verify that they are present, navigate to Distributed vSwitch > Configure > Topology in the vSphere Client.

VMware vSphere: Create VMs in the Segments

Create two test VMs on each of the Transport Nodes in the cluster:

  1. Connect the first VM on each Transport Node to the vn11 logical segment on NSX-T-Edge-VDS, which will allow testing of the vn11 overlay segment for that Transport Node.
  2. Connect the second VM on each Transport Node to the vn22 logical segment on NSX-T-Edge-VDS, which will allow testing of the vn22 overlay segment for that Transport Node.

For more information, refer to the VMware vSphere guide for creating a VM and setting up a network adapter.

Figure 43: Overlay Segments Created in NSX-T are Visible in vSphere
Figure 44: VMs Created with vn11 and vn22 Ports Connected

VMware NSX-T: Create Tier-0 Gateway T0-1

A Tier-0 gateway connects the NSX-T virtual fabric with the physical switch fabric. This is accomplished by using BGP to communicate with the top-of-rack (ToR) switches. In this document, each transport node is connected to a pair of QFX5120 leaf switches, while the host running the Edge VM (edge01) is connected to a pair of QFX5130 border leaf switches.

To add a Tier-0 Gateway:

  1. In NSX-T Manager, navigate to Networking > Connectivity > Tier-0 Gateways > ADD GATEWAY.
  2. Name the Tier-0 Gateway as T0-1.
  3. Set HA-Mode to Active-Active.
  4. Set the Edge-cluster on T0-1 to edge-cluster1.
  5. Save and proceed through the next steps to add interfaces, BGP, and route-redistribution.
Figure 45: Create T0-1 Gateway and Connect to Edge-Cluster

VMware NSX-T: Configure the Interfaces on the T0 Gateway

The Tier-0 gateway (T0-1) requires one interface for each uplink segment.

  1. In NSX-T Manager, navigate to Networking > Connectivity > Tier-0 Gateways > Edit T0-1.
  2. To add two external interfaces toward the fabric, within the Tier-0 Gateway screen for the T0-1 gateway created above:
    1. Click Set.
    2. Click ADD INTERFACE to add the first interface and configure the following:
      1. Name the interface as left-uplink.
      2. Set type External.
      3. Set the IP Address/Mask as 192.168.100.2/24.
      4. Connect to segment as uplink-seg-100.
      5. Set the Edge Node as edge01.
    3. Click ADD INTERFACE to add the second interface and configure the following:
      1. Name the interface as right-uplink.
      2. Set type External.
      3. Set the IP Address/Mask as 192.168.200.2/24.
      4. Connect to segment as uplink-seg-200.
      5. Set the Edge Node as edge01.

VMware NSX-T: Configure Loopback Interface on T0 Gateway

The loopback interface is used to establish BGP sessions with the fabric border leaf switches.

  1. In NSX-T Manager, navigate to Networking > Connectivity > Tier-0 Gateways > Edit T0-1.
  2. To add a loopback interface toward the fabric, within the Tier-0 Gateway page for the T0-1 gateway created above:
    1. Click Set.
    2. Click ADD INTERFACE to add loopback:
      1. Name the loopback interface as Loopback.
      2. Set type Loopback.
      3. Set the IP Address/Mask as 10.0.0.1/32.
      Figure 47: Configure Loopback Interface to Connect to the Border Leaf Switches

VMware NSX-T: Configure BGP on the T0 Gateway

In NSX-T Manager, navigate to Networking > Connectivity > Tier-0 Gateways > Edit T0-1.

Configure BGP

  1. Within the Tier-0 Gateway page for the T0-1 Tier-0 Gateway created above:
    1. Click BGP.
    2. Set Local AS to 65000.
    3. Enable BGP.
    4. Click Set next to BGP Neighbors.
    5. Click ADD BGP NEIGHBOUR to add the first BGP neighbor.

Configure the Loopback IP Address of Border Leaf1

  1. In the Juniper Apstra UI, navigate to Blueprints > <blueprint-name> > Staged > Physical > Nodes.
    1. Refer to the column named Loopback IPv4.
    2. In the following figures, the loopback IP Address from Juniper Apstra is 192.168.255.2, but this can vary.
    3. Set BFD to be Disabled.

Configure the Remote AS Number as the Border Leaf1 ASN

  1. In the Juniper Apstra UI, navigate to Blueprints > <blueprint-name> > Staged > Physical > Nodes.
    1. Refer to the column named ASN.
    2. In the following figures, the ASN from Juniper Apstra is 64514, but this can vary.
    3. Set Route Filter to 1, with IPv4 Route Filter enabled and the Out Filter set to prefixlist-out-default.

      Figure 55: Setting Route Prefix List

    4. Set Allowas-in as Disabled.
    5. Under Timers & Password, set Hold Down Time as 90 and Keep Alive Time as 30.
      Figure 48: Add Border Leaf1 Loopback as the Neighbor
  2. Click ADD BGP NEIGHBOUR to add the second BGP neighbor.

Configure the Loopback IP Address of Border Leaf2:

  1. In the Juniper Apstra UI, navigate to Blueprints > <blueprint-name> > Staged > Physical > Nodes.
  2. In the following figures, the loopback IP Address from Juniper Apstra is 192.168.255.3, but this can vary.
  3. Set BFD to be Disabled.

Configure the Remote AS Number as the Border Leaf2 ASN:

  1. In the Juniper Apstra UI, navigate to Blueprints > <blueprint-name> > Staged > Physical > Nodes.
  2. Refer to the column named ASN.
  3. In the following figures, the ASN from Juniper Apstra is 64554, but this can vary.
  4. Select 1 for Route Filter.
  5. Enable IPV4 Route Filter.
  6. Select prefixlist-out-default for Out Filter.
  7. Set Allowas-in as Disabled.
  8. Select Max Hop Limit as 10.
  9. Set Hold Down Time as 90 and Keep Alive Time as 30 under Timers & Password:
Note:

BGP status for the two neighbors will be down until Apstra is configured.

Figure 49: BGP Neighbors on T0-1 are the Border Leaf Loopbacks

VMware NSX-T: Configure In-Line Mode and Route-Redistribution on the T0 Gateway

  1. In NSX-T Manager, navigate to Networking > Connectivity > Tier-0 Gateways > Edit T0-1.
  2. Within the Tier-0 Gateway screen for the T0-1 gateway created above:
    1. Select EVPN Mode as inline.
    2. Click the three vertical dots and create a new VNI pool for EVPN/VXLAN.
      Figure 50: T0 Gateway EVPN VXLAN VNI Pool
  3. Click Set near EVPN Tunnel Endpoint and configure the following:
    1. Name EVPN local tunnel endpoint as edge-vtep.
    2. Edge-Node name: edge01 (created as per VMware NSX-T: Deploy NSX Edge Node and Create an Edge Cluster).
    3. Local Address: 10.0.0.1 (this is the loopback address of T0 Gateway as configured in VMware NSX-T: Configure Loopback interface on T0 Gateway).
    4. MTU: 9000.
    5. Save changes to the Tier-0 Gateway.
Figure 51: Configuring Local EVPN Tunnel Endpoint
Figure 52: T0-1 Tier-0 Gateway EVPN Inline Mode

Within the Tier-0 Gateway screen for the T0-1 Tier-0 Gateway created above:

  1. Expand and click Route Re-Distribution.
    Figure 53: Configure Route Re-distribution
  2. Under Advertised Tier-1 Subnets, check only the following sources:
    1. Connected Interfaces & Segments
    2. Service Interface Subnet
    3. Static Routes
    4. Connected Segment

    Ensure no other boxes are checked.

    Figure 54: Set Route-Redistribution

VMware NSX-T: Create a Static Route to Loopback on Border Leaf Switches

Static routes need to be created for reachability to the loopbacks on the border leaf switches. These loopbacks are used to establish the BGP sessions between NSX-T and the border leaf switches.

  1. In NSX-T Manager, navigate to Networking > Tier-0 Gateways.
  2. Select the T0-1 gateway and click Edit, then click Set next to Static Routes.
  3. Add Loopback 192.168.255.2/32 of Border Leaf1 and 192.168.255.3/32 of Border Leaf2 as static routes.
Note:

Check Apstra for the correct loopback addresses of the border leaf switches.

Static Route to Border Leaf1

Figure 55: Static Route to Border Leaf1

Click SET NEXT HOPS and add 192.168.100.1 (IP of the border leaf1 switch interface).

Static Route to Border Leaf2

For the static route to border leaf2, click SET NEXT HOPS and add 192.168.200.1 (IP of the border leaf2 switch interface).

Figure 57: Border Leaf2 Static Route
Figure 59: Static Routes Towards Border Leaf Switches

VMware NSX-T: Create IP Prefix lists on T0 Gateway

The IP prefix list must be set up so that the fabric IPs are not advertised.

Add the IP Prefixes as below:

  1. In NSX-T Manager, navigate to Networking > Tier-0 Gateways.
  2. Select T0-1 Gateway, expand Routing, and click on the number beside IP Prefix Lists to add or edit the prefix list.

    prefixlist-out-default is the prefix-list set on the T0 Gateway BGP, as mentioned in VMware NSX-T: Configure BGP on the T0 GW.

    Figure 60: Adding Prefixes to prefixlist-out-default
  3. Click on 1 (or any number) under Prefixes to add prefixes.
  4. Click Edit and add the following prefixes.
Figure 61: VM Prefixes Permitted to be Advertised, Rest Denied
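
As an illustration, with the two overlay segments created earlier, the prefix list typically permits only the VM subnets and denies everything else. The entries below are a sketch based on the subnets used in this guide; the ordering of entries is arbitrary:

  10.9.11.0/24    Permit
  10.9.22.0/24    Permit
  any             Deny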

VMware NSX-T: Connect the T1 and T0 Gateways

Connecting the T1 and T0 Gateways enables east-west connectivity and north-south communication to the VMs in the NSX-T Domain.

Connect the Gateways

  1. In NSX-T Manager, navigate to Networking > Tier-1 Gateways.
  2. Select T1-1 Gateway.
  3. Under Linked Tier-0 Gateway, select T0-1.
  4. Add edge-cluster1, which was set up in VMware NSX-T: Deploy NSX Edge Node and Create Edge Cluster.
  5. Select the following under the Route Advertisement:
    • All Static Routes
    • All LB VIP Routes
    • All LB SNAT IP Routes
Figure 62: Link T0-1 Gateway to T1-1

Juniper Apstra: Add the NSX-Manager

For Juniper Apstra to remediate inconsistencies between the virtual infrastructure and the physical IP fabric, the NSX-T manager should be added in Juniper Apstra.

In the Juniper Apstra UI, navigate to External Systems > Virtual Infra Managers > Create Virtual Infra Manager and add the NSX-T manager details and credentials.

Figure 63: Adding NSX-T and vSphere in Apstra

Juniper Apstra: Create a Routing Policy for NSX-T in the Blueprint

Add a Routing policy for the NSX-T routing zone created in the next step.

In the Juniper Apstra UI, navigate to Blueprints > <blueprint-name> > Staged > Policies > Routing Policies > Create Routing Policy.

Figure 64: NSX-T Routing Policy

Juniper Apstra: Create a Routing Zone in the Blueprint

Add a Routing Zone That Maps to a VRF in the Blueprint:

  1. In the Juniper Apstra UI, navigate to Blueprints > <blueprint-name> > Staged > Virtual > Routing Zones > Create Routing Zone and enter the following details:
    • VRF Name: NSX-T
    • VLAN ID: 9
    • VNI: 20000
    • Route Target: 20000:1
    • Routing Policies: NSX-T
Figure 65: Routing Zone to Communicate with NSX-T
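
For reference, this routing zone causes Apstra to render a VRF named NSX-T on the leaf switches with the route target entered above. The following is a minimal, illustrative sketch; the full rendered configuration (route distinguisher, IRB interfaces, and EVPN/VXLAN stanzas) depends on the Apstra and Junos OS versions in use:

  # Illustrative sketch -- Apstra renders the complete VRF configuration.
  set routing-instances NSX-T instance-type vrf
  set routing-instances NSX-T vrf-target target:20000:1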

Juniper Apstra: Assign the Loopback IPs to the Routing Zone

After creating the NSX-T routing zone, assign the loopback IPs for the routing zone. The loopback IP is allocated from an IP Pool in Resources. In the following figures, the pool MUST-EVPN-Loopbacks DC1 is already created under Resources. This is as per the section Apstra Resources: ASN, Fabric, and Loopback IP Address.

The loopback IP is assigned to the routing instance and used to extend EVPN and the NSX-T overlay VLAN between the leaf switches.

Figure 66: IP Pool from Resources
Figure 67: Assign IP Pool for the Leaf Switches in NSX-T Routing Zone

Juniper Apstra: Add the NSX-Manager into the Blueprint

Add the NSX-Manager into the Blueprint that is managing the fabric:

  1. In the Juniper Apstra UI, navigate to Blueprints > <blueprint-name> > Staged > Virtual > Virtual Infra > Add Virtual Infra.
  2. From Virtual Infra Manager, select the NSX-T manager added in Juniper Apstra: Add the NSX-Manager.
  3. Set the VLAN Remediation Policy VN Type as VXLAN.
  4. Set the Routing Zone as NSX-T.
Figure 68: Add NSX-T Manager into Blueprint

Juniper Apstra: Add the NSX-T-Overlay as a VN

For the GENEVE tunnels to come up between the transport nodes in NSX-T, connectivity needs to be established through the Juniper Apstra fabric. This is done by creating a VXLAN virtual network in Apstra and assigning the correct port mapping on the ToR leaf switches toward the transport nodes. Ensure that the VLAN ID of the overlay VXLAN VN defined in Apstra matches the one mapped in the NSX-T overlay profile for the transport nodes.

VLAN 50 is configured in NSX-T Manager for the overlay, and it maps to VNI 10050 in Apstra. Select the Tagged connectivity template type when creating the virtual network. The virtual network is assigned to all leaf switches. The IPv4 subnet (IRB) is disabled because NSX-T already assigns the TEP addresses to the hosts.

Figure 69: NSX-T Overlay Profile Transport VLAN Configured as 50
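
For reference, this virtual network causes Apstra to render a VLAN-to-VNI mapping on each leaf switch, with no IRB because the IPv4 subnet is disabled. The following is a minimal, illustrative sketch; the VLAN name is a placeholder, and the exact stanzas depend on the EVPN service model and Junos OS version in use:

  # Illustrative sketch -- Apstra renders the actual virtual network configuration.
  set vlans vn50 vlan-id 50
  set vlans vn50 vxlan vni 10050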

Juniper Apstra: Verify the Connectivity Templates

If you select Create Connectivity Templates while creating the virtual network, a connectivity template of type Virtual Network [Single] is created.

Verify the Creation of a Virtual Network:

  1. In the Juniper Apstra UI, navigate to Blueprints > <blueprint-name> > Staged > Connectivity Templates.
  2. Scroll to look for the connectivity template for the Virtual Network.
  3. Click Edit to view the connectivity template.
Figure 70: Apstra Connectivity Template Generated for Overlay Virtual Network

Juniper Apstra: Assign Interface to the Connectivity Templates

The connectivity template is assigned to the aggregate Ethernet (AE) interfaces facing the ESXi hosts.

Figure 71: AE Interfaces Assigned to Connectivity Template
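
Because the connectivity template is of type Tagged, the overlay VLAN is carried as a tagged member on the AE interfaces toward the ESXi hosts. A minimal, illustrative sketch is shown below, reusing the placeholder names from the earlier sketches:

  # Illustrative sketch -- ae1 and vn50 are placeholders; Apstra renders the actual trunk membership.
  set interfaces ae1 unit 0 family ethernet-switching interface-mode trunk
  set interfaces ae1 unit 0 family ethernet-switching vlan members vn50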

Juniper Apstra: Commit the Configuration

From the blueprint, navigate to Blueprints > <blueprint-name> > Uncommitted and commit the configuration. This pushes the VN and the NSX-T routing zone configuration to all the fabric devices.
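
Optionally, before checking tunnel status in NSX-T, you can confirm on any leaf switch that the overlay VLAN and VXLAN tunnels have been programmed. For example (exact output varies by platform and Junos OS version):

  show vlans
  show interfaces vtep
  show evpn database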

VMware NSX-T: GENEVE Tunnels

Once Juniper Apstra has pushed the configurations to the Junos OS devices, observe that the GENEVE tunnels between the Transport Nodes and the Edge Nodes are up:

  1. On the edge01 Edge VM, view the tunnel endpoints and verify that the status is Up.
  2. In NSX-T Manager, navigate to System > Fabric > Nodes > Edge Transport Nodes.
  3. Click edge01, then click Tunnels.
Figure 72: NSX-T Edge Node Tunnels are Up

Juniper Apstra: Add Connectivity Templates for Connectivity from Edge Node to the Fabric

The connectivity templates specify the IP links that connect the Edge node to the fabric and the BGP peering sessions with user-specified neighbor addresses.

  1. In the Juniper Apstra UI, navigate to Blueprints > <blueprint-name> > Staged > Connectivity Templates.
  2. Click Add Template.

Juniper Apstra: Add IP Link, BGP Peering and Static Route

Connectivity templates are used to create the uplinks toward the NSX-T Edge node edge01 for both the left and right uplinks.

Here, the default routing zone is selected to connect to the Edge nodes because the NSX-T traffic within the fabric is overlay traffic. Once the traffic reaches NSX-T, it becomes underlay traffic.

The peer ASN is the NSX-T T0-1 ASN 65000.

  • Create Two IP Link Connectivity Templates for Left Uplink and Right Uplink
Figure 73: IP Link Connectivity Template for Left and Right Uplinks
  • Create BGP Peering and Assign NSX-T Routing Policy
  1. In the Juniper Apstra UI, navigate to Blueprints > <blueprint-name> > Staged > Connectivity Templates.
  2. Click Add Template.
  3. Create Connectivity Templates for BGP peering and assign the NSX-T routing policy.
Figure 74: Connectivity Template for Uplinks Towards Edge Node
  • Create Two Custom Static Route Connectivity Templates

The static route is created in a separate connectivity template because the Custom Static Route primitive used here is generated at the system level (border leaf level). The static route for the left uplink points from Border Leaf1 to the Edge node, and the one for the right uplink points from Border Leaf2 to the Edge node.

  1. In the Juniper Apstra UI, navigate to Blueprints > <blueprint-name> > Staged > Connectivity Templates.
  2. Click Add Template. Repeat this process to create Connectivity Templates for the left and right uplinks.
  • Create Two Connectivity Templates for Static Route
    • Left static route from the Border Leaf1 to the Edge Node.
    • Right static route from the Border Leaf2 to the Edge Node.
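
For reference, once these connectivity templates are assigned and committed, the configuration rendered on Border Leaf1 is along the lines of the sketch below; Border Leaf2 is analogous, with 192.168.200.2 as the next hop. The group name l3rtr matches the verification commands later in this guide, while details such as the multihop TTL and timers depend on how the BGP peering primitive is configured:

  # Illustrative sketch -- Apstra renders the actual BGP and static route configuration.
  set protocols bgp group l3rtr neighbor 10.0.0.1 peer-as 65000
  set protocols bgp group l3rtr neighbor 10.0.0.1 local-address 192.168.255.2
  set protocols bgp group l3rtr neighbor 10.0.0.1 multihop ttl 10
  set routing-options static route 10.0.0.1/32 next-hop 192.168.100.2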

Juniper Apstra: Assign the Interfaces to the Connectivity Template

Assign the connectivity templates created in the Juniper Apstra: Add IP Link, BGP Peering and Static Route section.

  1. Assign Uplinks.
  2. For Uplinks from the border leaf switches, assign the appropriate ethernet interface for dc1_border_leaf1 and dc1_border_leaf2:
    • Left uplink from dc1_border_leaf1
    • Right uplink from dc1_border_leaf2
  3. Assign BGP.
  4. For BGP peering, select dc1_border_leaf1 and dc1_border_leaf2 loopback Interfaces lo0.0.
  5. Assign Static Routes
    • Static route for left uplink to Border Leaf1
    • Static route for right uplink to Border Leaf2

Juniper Apstra: Assign the IPs and VLAN IDs to the Interfaces

Now that a Connectivity Template has been created and physical interfaces are assigned, you must assign IP addresses and VLAN IDs to the interfaces.

Edit the Interface connecting the Border Leaf and the Edge Node:

  1. In the Juniper Apstra UI, navigate to Blueprints > <blueprint-name> > Staged > Virtual > Routing Zones > Default Routing Zone > Interfaces > Edit IP Addresses.
  2. Enter the IP addresses for the border leaf switch interfaces and the IP addresses of the Edge node interfaces on the host.
Figure 84: Assign IPs to the Interface Connected to Edge Host
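
For reference, with the lab values used in this guide and assuming the IP Link connectivity template is tagged with VLAN 100 to match uplink-seg-100, the rendered uplink interface on Border Leaf1 is along the lines of the following sketch (the logical unit number is a placeholder chosen by Apstra; Border Leaf2 uses VLAN 200 and 192.168.200.1/24 on its et-0/0/0:0 interface):

  # Illustrative sketch -- Apstra renders the actual uplink sub-interface configuration.
  set interfaces et-0/0/0:0 vlan-tagging
  set interfaces et-0/0/0:0 unit 100 vlan-id 100
  set interfaces et-0/0/0:0 unit 100 family inet address 192.168.100.1/24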

Juniper Apstra: Commit the Configuration

Navigate to Blueprints > <blueprint-name> > Uncommitted and commit the configuration. This pushes all the uplinks created using connectivity templates to the devices.

Juniper Junos OS: Verify Configs

Log in to one of the border leaf switches and verify that the configuration has been pushed.

Verify Physical Fabric Configuration

SSH into one of the Border Leaf switches and run the following commands:

  • show configuration interfaces et-0/0/0:0 | display set
  • show configuration protocols bgp group l3rtr | match 10.0.0.1 | display set
  • show configuration routing-options | display set

VMware NSX-T: Verify BGP Session on Edge

On the Edge node, verify that the BGP sessions are established and that the overlay routes are exchanged.

Verify NSX-T Configuration

SSH into the NSX-T edge01 Edge-VM and run the following commands:

  1. First, determine the VRF: run the get logical-router command and pick the entry named SR-T0-1 (the Tier-0 service router). The corresponding VRF number is in the VRF column.
  2. get bgp neighbor summary
  3. get route

VMware NSX-T: Verify BGP Session on ToR

Juniper Apstra should detect no BGP anomalies on the blueprint.

In the Juniper Apstra UI, navigate to Blueprints > <Blueprint-name> > Active.

Figure 85: Juniper Apstra Detects No Anomaly

Log in to one of the border leaf switches and verify that the BGP sessions are up and that the overlay routes are exchanged.

Verify Physical Fabric Configuration

SSH into one of the border leaf switches and run the following commands:

  • show bgp summary group l3rtr
  • show route receive-protocol bgp 10.0.0.1 table inet.0
  • show route advertising-protocol bgp 10.0.0.1 table inet.0
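
If a session does not reach the Established state, a useful first check is loopback-to-loopback reachability over the static routes configured on both sides. For example, from Border Leaf1 (loopback 192.168.255.2 in this lab):

  ping 10.0.0.1 source 192.168.255.2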

VMware NSX-T: Verify Overlay Connectivity (East-West)

To test east-west traffic, run ping tests between the VMs across segments and between the Linux VMs created in VMware vSphere: Create VMs in the Segments.

Following is the flow shown in Figure 86:

  1. The ping from VM11-1 (on the ESXi1_2 host) traverses the fabric from the ESI leaf pair to reach Border Leaf1.
  2. From the border leaf, it is sent toward the Edge node hosted on ESXi1_4.
  3. From the Edge node, the traffic is sent toward the T1 gateway, which in turn sends the ping traffic through the TEP port on ESXi1_4 to reach the TEP port of ESXi1_3, where it is delivered to VM22-1.
Figure 86: VM to VM Traffic Flow (East-West)
Figure 87: VM11 Connected to vn11 Able to Ping VM22 Connected to vn22 Logical Segment
Figure 88: VM22 Connected to vn22 Logical Segment Pinging VM11 Connected to vn11

Traceroute to the overlay VMs from the Border Leaf Switches shows the path taken.

Figure 89: Path Towards VM11 from Border Leaf Switches
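
The traceroute shown in Figure 89 can be reproduced from a border leaf CLI with a command along the following lines, where 10.9.11.10 is a placeholder for the actual address of VM11-1 in the vn11 subnet:

  traceroute 10.9.11.10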

(Optional) Juniper Apstra: Adding vSphere Server to Juniper Apstra

This step is optional, but the integration of vSphere provides an added layer of visibility into MACs, VMs, and ARP entries. It also enables you to view all the underlying VMs and Docker containers associated with each fabric leaf device connected through the ESXi server.

Add the vSphere server into the blueprint that is managing the fabric:

  1. In the Juniper Apstra UI, navigate to Blueprints > <blueprint-name> > Staged > Virtual > Virtual Infra > Add Virtual Infra.
  2. Set the VLAN Remediation Policy VN Type to VXLAN.
  3. Set the Routing Zone to NSX-T.
  4. Navigate to Blueprints > <blueprint-name> > Active > Query. All the VMs associated with the fabric can be viewed here. For more information, refer to the Juniper Apstra 4.2 User Guide.