
Collapsed Spine Fabric Design and Implementation

24-May-23

In a collapsed spine fabric, the core EVPN-VXLAN overlay functions are collapsed onto the spine layer only. There is no leaf layer; the spine devices can connect directly to existing top-of-rack (ToR) switches in the access layer that might not support EVPN-VXLAN.

ToR switches can be multihomed to more than one spine device for access layer resiliency. The spine devices manage the multihomed links using EVPN multihoming (also called ESI-LAG) in the same way that leaf devices do in other EVPN-VXLAN reference architectures. (See Multihoming an Ethernet-Connected End System Design and Implementation for details.)

The spine devices also assume any border device roles for connectivity outside the data center.

Some common elements in collapsed spine architecture use cases include:

  • Collapsed spine fabric with spine devices connected back-to-back:

    In this model, the spine devices are connected with point-to-point links. The spine devices establish BGP peering in the underlay and overlay over those links using their loopback addresses. See Figure 1. A minimal configuration sketch for this model appears after this list.

    Alternatively, the collapsed spine core devices can be integrated with a route reflector cluster in a super spine layer. That model is our reference architecture and is described later in this document.

  • Data center locations connected with Data Center Interconnect (DCI):

    The spine devices can perform border gateway functions to establish EVPN peering between data centers, including Layer 2 stretch and Layer 3 connectivity, as Figure 1 shows.

  • Standalone switches or Virtual Chassis in the access layer:

    The ToR layer can contain standalone switches or Virtual Chassis multihomed to the collapsed spine devices. With Virtual Chassis, you can establish redundant links in the ESI-LAGs between the spine devices and different Virtual Chassis member switches to increase resiliency. See Figure 2.
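
A minimal, hypothetical sketch of the back-to-back model on one of the two spine devices follows. It shows underlay EBGP peering over the point-to-point link and overlay IBGP (EVPN signaling) peering between the loopback addresses. The interface name (et-0/0/0), point-to-point subnet (172.16.10.0/31), loopback addresses (192.168.0.1 and 192.168.0.2), and AS numbers are placeholders rather than values from the reference design; see the Collapsed Spine with EVPN Multihoming example referenced below for a complete, validated configuration.

    set interfaces et-0/0/0 unit 0 family inet address 172.16.10.0/31
    set interfaces lo0 unit 0 family inet address 192.168.0.1/32
    set routing-options autonomous-system 4210000001
    set policy-options policy-statement underlay-clos-export term loopback from interface lo0.0
    set policy-options policy-statement underlay-clos-export term loopback then accept
    set protocols bgp group underlay-bgp type external
    set protocols bgp group underlay-bgp local-as 4200000001
    set protocols bgp group underlay-bgp export underlay-clos-export
    set protocols bgp group underlay-bgp neighbor 172.16.10.1 peer-as 4200000002
    set protocols bgp group overlay-bgp type internal
    set protocols bgp group overlay-bgp local-address 192.168.0.1
    set protocols bgp group overlay-bgp family evpn signaling
    set protocols bgp group overlay-bgp neighbor 192.168.0.2

The second spine device mirrors this configuration with its own link address, loopback address, and underlay AS number.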

Figure 1 shows a logical view of a collapsed spine data center with border connectivity, DCI between data centers, and Virtual Chassis in the ToR layer multihomed to the spine devices.

Figure 1: Collapsed Spine Data Center With Multihomed Virtual Chassis ToR Devices and Data Center Interconnect

Figure 2 illustrates Virtual Chassis in the ToR layer multihomed to a back-to-back collapsed spine layer, where the spine devices link to different Virtual Chassis member switches to improve ESI-LAG resiliency.

Figure 2: Collapsed Spine Design With Back-to-Back Spine Devices and Multihomed Virtual Chassis in ToR Layer

Refer to Collapsed Spine with EVPN Multihoming, a network configuration example that describes a common collapsed spine use case with back-to-back spine devices. In that example, the ToR devices are Virtual Chassis that are multihomed to the collapsed spine devices. The example also shows how to configure additional security services using an SRX chassis cluster to protect inter-tenant traffic, with inter-data center traffic also routed through the SRX cluster as a DCI solution.

Another collapsed spine fabric model interconnects the spine devices through an IP transit layer route reflector cluster that you integrate with the collapsed spine core underlay and overlay networks. Our reference architecture uses this model and is described in the following sections.

Overview of Collapsed Spine Reference Architecture

Our reference architecture presents a use case for a collapsed spine data center fabric comprising two point of delivery (POD) modules. The PODs and the collapsed spine devices in the PODs are interconnected by a super spine IP transit layer configured as a route reflector cluster. See Figure 3. This architecture is similar to a five-stage IP fabric design (see Five-Stage IP Fabric Design and Implementation), but with only the super spine, spine, and access layers. You configure the collapsed spine fabric to integrate the route reflector cluster devices into the IP fabric underlay and EVPN overlay in a similar way.

Figure 3: Collapsed Spine Fabric Integrated With a Route Reflector Cluster

Figure 3 shows an example of the collapsed spine reference design, which includes the following elements:

  • POD 1: ToR 3 multihomed to Spine 1 and Spine 2

  • POD 2: ToR 1 and ToR 2 multihomed to Spine 3 and Spine 4

  • Route reflector cluster: RR 1 and RR 2 interconnecting Spine devices 1 through 4

The four spine devices make up the collapsed spine EVPN fabric core, with Layer 2 stretch and Layer 3 routing between the spine devices in the two PODs. The spine devices in each POD use ESI-LAGs to the multihomed ToR switches in the same POD.

Configure the Collapsed Spine IP Fabric Underlay Integrated With the Route Reflector Layer

This section describes how to configure the interconnecting links and the IP fabric underlay on the spine and route reflector devices.

Figure 4 shows the collapsed spine and route reflector devices connected by aggregated Ethernet interface links.

Figure 4: Collapsed Spine Reference Architecture Underlay Integrated With Route Reflector Cluster

To configure the underlay:

  1. Before you configure the interfaces that connect the route reflector and spine devices in the fabric, set the number of aggregated Ethernet interfaces that you might need on each of those devices. The device assigns a unique MAC address to each aggregated Ethernet interface you configure.

    Configure the number of aggregated Ethernet interfaces on RR 1, RR 2, Spine 1, Spine 2, Spine 3, and Spine 4:

    set chassis aggregated-devices ethernet device-count 20
  2. Configure the aggregated Ethernet interfaces on the route reflector and spine devices that form the collapsed spine fabric as shown in Figure 4.

    For redundancy, this reference design uses two physical interfaces in each aggregated Ethernet link between the route reflector and spine devices. The route reflector devices link to the four spine devices using aggregated Ethernet interfaces ae1 through ae4. Each spine device uses aggregated Ethernet interfaces ae1 (to RR 1) and ae2 (to RR 2).

    Also, we configure a higher MTU (9192) on the aggregated Ethernet interfaces to account for VXLAN encapsulation overhead.

    RR 1:

    set interfaces et-0/0/46 ether-options 802.3ad ae1
    set interfaces et-0/0/62 ether-options 802.3ad ae1
    
    set interfaces et-0/0/9 ether-options 802.3ad ae2
    set interfaces et-0/0/10 ether-options 802.3ad ae2
    
    set interfaces et-0/0/49 ether-options 802.3ad ae3
    set interfaces et-0/0/58 ether-options 802.3ad ae3
    
    set interfaces xe-0/0/34:2 ether-options 802.3ad ae4
    set interfaces xe-0/0/34:3 ether-options 802.3ad ae4
    
    set interfaces ae1 mtu 9192
    set interfaces ae1 aggregated-ether-options minimum-links 1
    set interfaces ae1 aggregated-ether-options lacp active
    set interfaces ae1 aggregated-ether-options lacp periodic fast
    set interfaces ae1 unit 0 family inet address 172.16.1.0/31
    
    set interfaces ae2 mtu 9192
    set interfaces ae2 aggregated-ether-options minimum-links 1
    set interfaces ae2 aggregated-ether-options lacp active
    set interfaces ae2 aggregated-ether-options lacp periodic fast
    set interfaces ae2 unit 0 family inet address 172.16.2.0/31
    
    set interfaces ae3 mtu 9192
    set interfaces ae3 aggregated-ether-options minimum-links 1
    set interfaces ae3 aggregated-ether-options lacp active
    set interfaces ae3 aggregated-ether-options lacp periodic fast
    set interfaces ae3 unit 0 family inet address 172.16.3.0/31
    
    set interfaces ae4 mtu 9192
    set interfaces ae4 aggregated-ether-options minimum-links 1
    set interfaces ae4 aggregated-ether-options lacp active
    set interfaces ae4 aggregated-ether-options lacp periodic fast
    set interfaces ae4 unit 0 family inet address 172.16.4.0/31
    

    RR 2:

    set interfaces et-0/0/18 ether-options 802.3ad ae1
    set interfaces et-0/0/35 ether-options 802.3ad ae1
    
    set interfaces et-0/0/13 ether-options 802.3ad ae2
    set interfaces et-0/0/14 ether-options 802.3ad ae2
    
    set interfaces et-0/0/22 ether-options 802.3ad ae3
    set interfaces et-0/0/23 ether-options 802.3ad ae3
    
    set interfaces et-0/0/19 ether-options 802.3ad ae4
    set interfaces et-0/0/20 ether-options 802.3ad ae4
    
    set interfaces ae1 mtu 9192
    set interfaces ae1 aggregated-ether-options minimum-links 1
    set interfaces ae1 aggregated-ether-options lacp active
    set interfaces ae1 aggregated-ether-options lacp periodic fast
    set interfaces ae1 unit 0 family inet address 172.16.5.0/31
    
    set interfaces ae2 mtu 9192
    set interfaces ae2 aggregated-ether-options minimum-links 1
    set interfaces ae2 aggregated-ether-options lacp active
    set interfaces ae2 aggregated-ether-options lacp periodic fast
    set interfaces ae2 unit 0 family inet address 172.16.6.0/31
    
    set interfaces ae3 mtu 9192
    set interfaces ae3 aggregated-ether-options minimum-links 1
    set interfaces ae3 aggregated-ether-options lacp active
    set interfaces ae3 aggregated-ether-options lacp periodic fast
    set interfaces ae3 unit 0 family inet address 172.16.7.0/31
    
    set interfaces ae4 mtu 9192
    set interfaces ae4 aggregated-ether-options minimum-links 1
    set interfaces ae4 aggregated-ether-options lacp active
    set interfaces ae4 aggregated-ether-options lacp periodic fast
    set interfaces ae4 unit 0 family inet address 172.16.8.0/31
    

    Spine 1:

    set interfaces et-0/0/1 ether-options 802.3ad ae1
    set interfaces et-0/0/2 ether-options 802.3ad ae1
    set interfaces et-0/0/14 ether-options 802.3ad ae2
    set interfaces et-0/0/27 ether-options 802.3ad ae2
    
    set interfaces ae1 mtu 9192
    set interfaces ae1 aggregated-ether-options minimum-links 1
    set interfaces ae1 aggregated-ether-options lacp active
    set interfaces ae1 aggregated-ether-options lacp periodic fast
    set interfaces ae1 unit 0 family inet address 172.16.1.1/31
    
    set interfaces ae2 mtu 9192
    set interfaces ae2 aggregated-ether-options minimum-links 1
    set interfaces ae2 aggregated-ether-options lacp active
    set interfaces ae2 aggregated-ether-options lacp periodic fast
    set interfaces ae2 unit 0 family inet address 172.16.5.1/31
    

    Spine 2:

    set interfaces et-0/0/1 ether-options 802.3ad ae1
    set interfaces et-0/0/2 ether-options 802.3ad ae1
    
    set interfaces et-0/0/14 ether-options 802.3ad ae2
    set interfaces et-0/0/15 ether-options 802.3ad ae2
    
    set interfaces ae1 mtu 9192
    set interfaces ae1 aggregated-ether-options minimum-links 1
    set interfaces ae1 aggregated-ether-options lacp active
    set interfaces ae1 aggregated-ether-options lacp periodic fast
    set interfaces ae1 unit 0 family inet address 172.16.2.1/31
    
    set interfaces ae2 mtu 9192
    set interfaces ae2 aggregated-ether-options minimum-links 1
    set interfaces ae2 aggregated-ether-options lacp active
    set interfaces ae2 aggregated-ether-options lacp periodic fast
    set interfaces ae2 unit 0 family inet address 172.16.6.1/31
    

    Spine 3:

    set interfaces et-0/0/0 ether-options 802.3ad ae1
    set interfaces et-0/0/1 ether-options 802.3ad ae1
    
    set interfaces et-0/0/7 ether-options 802.3ad ae2
    set interfaces et-0/0/8 ether-options 802.3ad ae2
    
    set interfaces ae1 mtu 9192
    set interfaces ae1 aggregated-ether-options minimum-links 1
    set interfaces ae1 aggregated-ether-options lacp active
    set interfaces ae1 aggregated-ether-options lacp periodic fast
    set interfaces ae1 unit 0 family inet address 172.16.3.1/31
    
    set interfaces ae2 mtu 9192
    set interfaces ae2 aggregated-ether-options minimum-links 1
    set interfaces ae2 aggregated-ether-options lacp active
    set interfaces ae2 aggregated-ether-options lacp periodic fast
    set interfaces ae2 unit 0 family inet address 172.16.7.1/31
    

    Spine 4:

    set interfaces xe-0/0/3:2 ether-options 802.3ad ae1
    set interfaces xe-0/0/3:3 ether-options 802.3ad ae1
    
    set interfaces et-0/0/19 ether-options 802.3ad ae2
    set interfaces et-0/0/20 ether-options 802.3ad ae2
    
    set interfaces ae1 mtu 9192
    set interfaces ae1 aggregated-ether-options minimum-links 1
    set interfaces ae1 aggregated-ether-options lacp active
    set interfaces ae1 aggregated-ether-options lacp periodic fast
    set interfaces ae1 unit 0 family inet address 172.16.4.1/31
    
    set interfaces ae2 mtu 9192
    set interfaces ae2 aggregated-ether-options minimum-links 1
    set interfaces ae2 aggregated-ether-options lacp active
    set interfaces ae2 aggregated-ether-options lacp periodic fast
    set interfaces ae2 unit 0 family inet address 172.16.8.1/31
    
  3. Configure the loopback interface IP address and the router ID on each route reflector and spine device, as shown in Figure 4. An example for Spine 3 follows the generic commands.
    set interfaces lo0 unit 0 family inet address addr/32
    set routing-options router-id addr
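
    For example, on Spine 3, whose loopback address is 192.168.1.3 (see Figure 5), these commands would be:

    set interfaces lo0 unit 0 family inet address 192.168.1.3/32
    set routing-options router-id 192.168.1.3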
  4. On the route reflector and spine devices, configure the EBGP IP fabric underlay. The underlay configuration is similar to other spine and leaf reference architecture designs in IP Fabric Underlay Network Design and Implementation. However, in this reference design, the underlay integrates the collapsed spine fabric with the route reflector devices, which provide IP transit between the spine devices within and across the PODs.

    The underlay configuration includes the following:

    • Define an export routing policy (underlay-clos-export) that advertises the IP address of the device's loopback interface to its EBGP peers. This export routing policy makes the loopback address of each device reachable by all devices in the IP fabric (all route reflector and spine devices).

    • Define a local AS number on each device.

    • On the route reflector devices: Identify the four spine devices as the EBGP neighbors by their aggregated Ethernet link IP addresses and local AS numbers.

      On the spine devices: Identify the two route reflector devices as the EBGP neighbors by their aggregated Ethernet link IP addresses and local AS numbers.

    • Turn on BGP peer state transition logging.

    RR 1:

    set protocols bgp group underlay-bgp type external
    
    set policy-options policy-statement underlay-clos-export term loopback from interface lo0.0
    set policy-options policy-statement underlay-clos-export term loopback then accept
    set protocols bgp group underlay-bgp export underlay-clos-export
    
    set protocols bgp group underlay-bgp local-as 4200000021
    set protocols bgp group underlay-bgp multipath multiple-as
    
    set protocols bgp group underlay-bgp neighbor 172.16.1.1 peer-as 4200000011
    set protocols bgp group underlay-bgp neighbor 172.16.2.1 peer-as 4200000012
    set protocols bgp group underlay-bgp neighbor 172.16.3.1 peer-as 4200000013
    set protocols bgp group underlay-bgp neighbor 172.16.4.1 peer-as 4200000014
    set protocols bgp log-updown

    RR 2:

    set protocols bgp group underlay-bgp type external
    
    set policy-options policy-statement underlay-clos-export term loopback from interface lo0.0
    set policy-options policy-statement underlay-clos-export term loopback then accept
    set protocols bgp group underlay-bgp export underlay-clos-export
    
    set protocols bgp group underlay-bgp local-as 4200000022
    set protocols bgp group underlay-bgp multipath multiple-as
    
    set protocols bgp group underlay-bgp neighbor 172.16.5.1 peer-as 4200000011
    set protocols bgp group underlay-bgp neighbor 172.16.6.1 peer-as 4200000012
    set protocols bgp group underlay-bgp neighbor 172.16.7.1 peer-as 4200000013
    set protocols bgp group underlay-bgp neighbor 172.16.8.1 peer-as 4200000014
    set protocols bgp log-updown

    Spine 1:

    set protocols bgp group underlay-bgp type external
    
    set policy-options policy-statement underlay-clos-export term loopback from interface lo0.0
    set policy-options policy-statement underlay-clos-export term loopback then accept
    set protocols bgp group underlay-bgp export underlay-clos-export
    
    set protocols bgp group underlay-bgp local-as 4200000011
    set protocols bgp group underlay-bgp multipath multiple-as
    
    set protocols bgp group underlay-bgp neighbor 172.16.1.0 peer-as 4200000021
    set protocols bgp group underlay-bgp neighbor 172.16.5.0 peer-as 4200000022
    
    set protocols bgp log-updown

    Spine 2:

    set protocols bgp group underlay-bgp type external
    
    set policy-options policy-statement underlay-clos-export term loopback from interface lo0.0
    set policy-options policy-statement underlay-clos-export term loopback then accept
    set protocols bgp group underlay-bgp export underlay-clos-export
    
    set protocols bgp group underlay-bgp local-as 4200000012
    set protocols bgp group underlay-bgp multipath multiple-as
    
    set protocols bgp group underlay-bgp neighbor 172.16.2.0 peer-as 4200000021
    set protocols bgp group underlay-bgp neighbor 172.16.6.0 peer-as 4200000022
    
    set protocols bgp log-updown

    Spine 3:

    set protocols bgp group underlay-bgp type external
    
    set policy-options policy-statement underlay-clos-export term loopback from interface lo0.0
    set policy-options policy-statement underlay-clos-export term loopback then accept
    set protocols bgp group underlay-bgp export underlay-clos-export
    
    set protocols bgp group underlay-bgp local-as 4200000013
    set protocols bgp group underlay-bgp multipath multiple-as
    
    set protocols bgp group underlay-bgp neighbor 172.16.3.0 peer-as 4200000021
    set protocols bgp group underlay-bgp neighbor 172.16.7.0 peer-as 4200000022
    
    set protocols bgp log-updown

    Spine 4:

    set protocols bgp group underlay-bgp type external
    
    set policy-options policy-statement underlay-clos-export term loopback from interface lo0.0
    set policy-options policy-statement underlay-clos-export term loopback then accept
    set protocols bgp group underlay-bgp export underlay-clos-export
    
    set protocols bgp group underlay-bgp local-as 4200000014
    set protocols bgp group underlay-bgp multipath multiple-as
    
    set protocols bgp group underlay-bgp neighbor 172.16.4.0 peer-as 4200000021
    set protocols bgp group underlay-bgp neighbor 172.16.8.0 peer-as 4200000022
    
    set protocols bgp log-updown

Configure the Collapsed Spine EVPN-VXLAN Overlay Integrated With the Route Reflector Layer

In this design, the overlay is similar to other EVPN-VXLAN data center spine and leaf reference architectures, but doesn’t include a leaf layer. Only the spine devices (integrated with the route reflector cluster) handle intra-VLAN forwarding and inter-VLAN routing in the fabric. We configure Multiprotocol IBGP (MP-IBGP) with a single autonomous system (AS) number on the spine devices to establish an EVPN signaling path between them by way of the route reflector cluster devices, as follows:

  • The route reflector cluster devices peer with the spine devices in both PODs for IP transit.

  • The spine devices peer with the route reflector devices.

See Figure 5, which illustrates the spine and route reflector cluster devices and BGP neighbor IP addresses we configure in the EVPN overlay network.

Figure 5: Collapsed Spine Reference Architecture Overlay Integrated With Route Reflector Cluster

The overlay configuration is the same on both of the route reflector devices except for the device’s local address (the loopback address). The route reflector devices peer with all of the spine devices.

The overlay configuration is the same on each of the spine devices except for the device’s local address (the loopback address). All of the spine devices peer with the route reflector cluster devices.

We configure EVPN with VXLAN encapsulation and virtual tunnel endpoint (VTEP) interfaces only on the spine devices in the collapsed spine fabric.

To configure the overlay:

  1. Configure an AS number for the IBGP overlay on all spine and route reflector devices:
    set routing-options autonomous-system 4210000001
  2. Configure IBGP with EVPN signaling on the route reflector devices to peer with the collapsed spine devices, identified as IBGP neighbors by their device loopback addresses as illustrated in Figure 5.

    In this step, you also:

    • Define RR 1 and RR 2 as a route reflector cluster (with cluster ID 192.168.2.1).

    • Enable path maximum transmission unit (MTU) discovery to dynamically determine the MTU size on the network path between the source and the destination, which can help avoid IP fragmentation.

    • Set up Bidirectional Forwarding Detection (BFD) for detecting IBGP neighbor failures.

    • Set the vpn-apply-export option to ensure that both the VRF and BGP group or neighbor export policies in the BGP configuration are applied (in that order) before the device advertises routes in the VPN routing tables to the other route reflector or spine devices. (See Distributing VPN Routes for more information.)

    RR 1:

    set protocols bgp group overlay-with-rr type internal
    set protocols bgp group overlay-with-rr local-address 192.168.2.1
    
    set protocols bgp group overlay-with-rr family evpn signaling
    
    set protocols bgp group overlay-with-rr cluster 192.168.2.1
    set protocols bgp group overlay-with-rr multipath
    set protocols bgp group overlay-with-rr mtu-discovery
    
    set protocols bgp group overlay-with-rr neighbor 192.168.1.1
    set protocols bgp group overlay-with-rr neighbor 192.168.1.2
    set protocols bgp group overlay-with-rr neighbor 192.168.1.3
    set protocols bgp group overlay-with-rr neighbor 192.168.1.4
    
    set protocols bgp group overlay-with-rr bfd-liveness-detection minimum-interval 1000
    set protocols bgp group overlay-with-rr bfd-liveness-detection multiplier 3
    set protocols bgp group overlay-with-rr bfd-liveness-detection session-mode automatic
    
    set protocols bgp group overlay-with-rr vpn-apply-export
    

    RR 2:

    set protocols bgp group overlay-with-rr type internal
    set protocols bgp group overlay-with-rr local-address 192.168.2.2
    
    set protocols bgp group overlay-with-rr family evpn signaling
    
    set protocols bgp group overlay-with-rr cluster 192.168.2.1
    set protocols bgp group overlay-with-rr multipath
    set protocols bgp group overlay-with-rr mtu-discovery
    
    set protocols bgp group overlay-with-rr neighbor 192.168.1.1
    set protocols bgp group overlay-with-rr neighbor 192.168.1.2
    set protocols bgp group overlay-with-rr neighbor 192.168.1.3
    set protocols bgp group overlay-with-rr neighbor 192.168.1.4
    
    set protocols bgp group overlay-with-rr bfd-liveness-detection minimum-interval 1000
    set protocols bgp group overlay-with-rr bfd-liveness-detection multiplier 3
    set protocols bgp group overlay-with-rr bfd-liveness-detection session-mode automatic
    
    set protocols bgp group overlay-with-rr vpn-apply-export
    
  3. Configure IBGP with EVPN on the collapsed spine devices to peer with the route reflector devices, which are identified as IBGP neighbors by their device loopback addresses shown in Figure 5. The configuration is the same on all spine devices except you substitute the spine device’s loopback IP address for the local-address device-loopback-addr value.

    In this step you also:

    • Enable path maximum transmission unit (MTU) discovery to dynamically determine the MTU size on the network path between the source and the destination, which can help avoid IP fragmentation.

    • Set up BFD for detecting IBGP neighbor failures.

    • Set the vpn-apply-export option to ensure that both the VRF and BGP group or neighbor export policies in the BGP configuration are applied (in that order) before the device advertises routes in the VPN routing tables to the other route reflector or spine devices. (See Distributing VPN Routes for more information.)

    All spine devices:

    set protocols bgp group overlay-with-rr type internal
    set protocols bgp group overlay-with-rr local-address device-loopback-addr
    
    set protocols bgp group overlay-with-rr family evpn signaling
    
    set protocols bgp group overlay-with-rr multipath
    set protocols bgp group overlay-with-rr mtu-discovery
    
    set protocols bgp group overlay-with-rr neighbor 192.168.2.1
    set protocols bgp group overlay-with-rr neighbor 192.168.2.2
    
    set protocols bgp group overlay-with-rr bfd-liveness-detection minimum-interval 1000
    set protocols bgp group overlay-with-rr bfd-liveness-detection multiplier 3
    set protocols bgp group overlay-with-rr bfd-liveness-detection session-mode automatic
    
    set protocols bgp group overlay-with-rr vpn-apply-export
  4. Ensure LLDP is enabled on all interfaces except the management interface (em0) on the route reflector cluster and spine devices.

    All route reflector and spine devices:

    set protocols lldp interface all
    set protocols lldp interface em0 disable
    
  5. Configure EVPN with VXLAN encapsulation in the overlay on the spine devices. The configuration is the same on all spine devices in the collapsed spine fabric.

    In this step:

    • Specify and apply a policy for per-packet load balancing for ECMP in the forwarding table.

    • Configure these EVPN options at the [edit protocols evpn] hierarchy level along with setting VXLAN encapsulation:

      • default-gateway no-gateway-community: Advertise the virtual gateway and IRB MAC addresses to the EVPN peer devices so that Ethernet-only edge devices, such as the ToR switches in this design, can learn these MAC addresses.

      • extended-vni-list all option: Allow all configured VXLAN network identifiers (VNIs) to be part of this EVPN-VXLAN BGP domain. We configure VLANs and VLAN to VNI mappings in a later section.

      • remote-ip-host-routes: Enable virtual machine traffic optimization (VMTO). (See Ingress Virtual Machine Traffic Optimization for EVPN for more information.)

    All spine devices:

    set policy-options policy-statement per-packet-load-balance term 1 then load-balance per-packet
    set routing-options forwarding-table export per-packet-load-balance
    
    set protocols evpn encapsulation vxlan
    set protocols evpn default-gateway no-gateway-community
    set protocols evpn extended-vni-list all
    set protocols evpn remote-ip-host-routes
    
  6. Configure VTEP, route target, and virtual routing and forwarding (VRF) switch options on the spine devices.

    The configuration is the same on all spine devices except on each device you substitute the device’s loopback IP address for the route-distinguisher value. This value defines a unique route distinguisher for routes generated by each device.

    The VTEP source interface in the EVPN instance should also match the IBGP local peer address, which is likewise the device loopback IP address.

    Spine 1:

    set switch-options vtep-source-interface lo0.0
    set switch-options route-distinguisher 192.168.1.1:3333
    set switch-options vrf-target target:10458:0
    set switch-options vrf-target auto

    Spine 2:

    set switch-options vtep-source-interface lo0.0
    set switch-options route-distinguisher 192.168.1.2:3333
    set switch-options vrf-target target:10458:0
    set switch-options vrf-target auto

    Spine 3:

    set switch-options vtep-source-interface lo0.0
    set switch-options route-distinguisher 192.168.1.3:3333
    set switch-options vrf-target target:10458:0
    set switch-options vrf-target auto

    Spine 4:

    set switch-options vtep-source-interface lo0.0
    set switch-options route-distinguisher 192.168.1.4:3333
    set switch-options vrf-target target:10458:0
    set switch-options vrf-target auto
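
    After you commit the overlay configuration (and after you configure VLANs and VLAN-to-VNI mappings in the next section), you can optionally confirm that each spine device sources its VTEP from the loopback interface and has discovered the other spine devices as remote VTEPs. These operational mode commands are shown for illustration only (here on Spine 3); they are not part of the reference configuration, and the output varies with the state of your fabric:

    user@spine-3> show ethernet-switching vxlan-tunnel-end-point source
    user@spine-3> show ethernet-switching vxlan-tunnel-end-point remote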
  7. (Required on PTX10000 Series routers only) Enable tunnel termination globally (in other words, on all interfaces) on the device:
    set forwarding-options tunnel-termination
    

Configure EVPN Multihoming and Virtual Networks on the Spine Devices for the ToR Switches

This collapsed spine reference design implements EVPN multihoming as described in Multihoming an Ethernet-Connected End System Design and Implementation, except that because the leaf layer functions are collapsed into the spine layer, you configure the ESI-LAGs on the spine devices. You also configure VLANs and Layer 2 and Layer 3 routing functions on the spine devices in a similar way as you would on the leaf devices in an edge-routed bridging (ERB) overlay design. The core collapsed spine configuration implements a Layer 2 stretch by setting the same VLANs (and VLAN-to-VNI mappings) on all of the spine devices in both PODs. EVPN Type 2 routes enable communication between endpoints within and across the PODs.
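
For example, after you complete the configuration in this section, you can view the MAC and MAC-IP entries that the spine devices learn and advertise through EVPN Type 2 routes by using standard operational mode commands such as the following (shown here on Spine 3 for illustration; the output depends on the endpoints attached to the ToR switches):

    user@spine-3> show evpn database
    user@spine-3> show route table bgp.evpn.0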

Figure 6 shows the collapsed spine devices in each POD connected with aggregated Ethernet interface links to the multihomed ToR switches in the POD.

Figure 6: Collapsed Spine Fabric With Multihomed ToR Switches

For brevity, this section illustrates one aggregated Ethernet link between each spine and each ToR device, with one interface configured on each aggregated Ethernet link from the spine devices to the ToR devices in the POD.

This section covers configuration details only for the spine and ToR devices in POD 2. You can apply a similar configuration with applicable device parameters and interfaces to the spine and ToR devices in POD 1.

Each ToR device's aggregated Ethernet link includes two member interfaces, one to each spine device in the POD, which together form the ESI-LAG for multihoming.

The configuration includes steps to:

  • Configure the interfaces.

  • Set up the ESI-LAGs for EVPN multihoming.

  • Configure Layer 2 and Layer 3 gateway functions, including defining VLANs, the associated IRB interfaces for inter-VLAN routing, and corresponding VLAN-to-VNI mappings.

  1. Configure the interfaces and aggregated Ethernet links on the spines (Spine 3 and Spine 4) to the multihomed ToR switches (ToR 1 and ToR 2) in POD 2.

    Spine 3:

    set interfaces xe-0/0/22:0 hold-time up 450000
    set interfaces xe-0/0/22:0 hold-time down 450000
    set interfaces xe-0/0/22:0 ether-options 802.3ad ae3
    
    set interfaces xe-0/0/23:0 hold-time up 450000
    set interfaces xe-0/0/23:0 hold-time down 450000
    set interfaces xe-0/0/23:0 ether-options 802.3ad ae10

    Spine 4:

    set interfaces xe-0/0/4:2 hold-time up 450000
    set interfaces xe-0/0/4:2 hold-time down 450000
    set interfaces xe-0/0/4:2 ether-options 802.3ad ae3
    
    set interfaces xe-0/0/6:1 hold-time up 450000
    set interfaces xe-0/0/6:1 hold-time down 450000
    set interfaces xe-0/0/6:1 ether-options 802.3ad ae10
  2. Configure the ESI-LAGs for EVPN multihoming on the spine devices for the multihomed ToR switches in POD 2. Both spine devices use the same aggregated Ethernet interface names toward the ToR switches, so you apply the same configuration on both spine devices.

    In this reference design, ae3 connects to ToR 1 and ae10 connects to ToR 2.

    Spine 3 and Spine 4:

    set interfaces ae3 esi 00:00:00:ff:00:02:00:01:00:03
    set interfaces ae3 esi all-active
    set interfaces ae3 aggregated-ether-options lacp active
    set interfaces ae3 aggregated-ether-options lacp periodic fast
    set interfaces ae3 aggregated-ether-options lacp system-id 00:00:00:99:99:01
    set interfaces ae3 aggregated-ether-options lacp hold-time up 300
    
    set interfaces ae10 esi 00:00:00:ff:00:01:00:01:00:0a
    set interfaces ae10 esi all-active
    set interfaces ae10 aggregated-ether-options lacp active
    set interfaces ae10 aggregated-ether-options lacp periodic fast
    set interfaces ae10 aggregated-ether-options lacp system-id 00:00:00:99:99:01
    set interfaces ae10 aggregated-ether-options lacp hold-time up 300
  3. Configure VLANs on the spine devices in POD 2 with ae3 and ae10 as VLAN members.

    Spine 3 and Spine 4:

    set interfaces ae3 native-vlan-id 4094
    set interfaces ae3 unit 0 family ethernet-switching interface-mode trunk
    set interfaces ae3 unit 0 family ethernet-switching vlan members VLAN-1
    set interfaces ae3 unit 0 family ethernet-switching vlan members VLAN-2
    set interfaces ae3 unit 0 family ethernet-switching vlan members VLAN-3
    set interfaces ae3 unit 0 family ethernet-switching vlan members VLAN-4
    
    set interfaces ae10 native-vlan-id 4094
    set interfaces ae10 unit 0 family ethernet-switching interface-mode trunk
    set interfaces ae10 unit 0 family ethernet-switching vlan members VLAN-1
    set interfaces ae10 unit 0 family ethernet-switching vlan members VLAN-2
    set interfaces ae10 unit 0 family ethernet-switching vlan members VLAN-3
    set interfaces ae10 unit 0 family ethernet-switching vlan members VLAN-4
  4. Map the VLANs to VNIs for the VXLAN tunnels and associate an IRB interface with each one.

    Spine 3 and Spine 4:

    set vlans VLAN-1 vlan-id 1
    set vlans VLAN-1 l3-interface irb.1
    set vlans VLAN-1 vxlan vni 100001
    set vlans VLAN-2 vlan-id 2
    set vlans VLAN-2 l3-interface irb.2
    set vlans VLAN-2 vxlan vni 100002
    set vlans VLAN-3 vlan-id 3
    set vlans VLAN-3 l3-interface irb.3
    set vlans VLAN-3 vxlan vni 100003
    set vlans VLAN-4 vlan-id 4
    set vlans VLAN-4 l3-interface irb.4
    set vlans VLAN-4 vxlan vni 100004
  5. Configure the IRB interfaces for the VLANs (VNIs) on the spine devices in POD 2 with dual-stack IPv4 and IPv6 addresses for both the IRB interface and the virtual gateway.

    Spine 3:

    set interfaces irb unit 1 virtual-gateway-accept-data
    set interfaces irb unit 1 family inet address 10.0.1.243/24 preferred
    set interfaces irb unit 1 family inet address 10.0.1.243/24 virtual-gateway-address 10.0.1.254
    set interfaces irb unit 1 family inet6 nd6-stale-time 3600
    set interfaces irb unit 1 family inet6 address 2001:db8::10:0:1:243/112 preferred
    set interfaces irb unit 1 family inet6 address 2001:db8::10:0:1:243/112 virtual-gateway-address 2001:db8::10:0:1:254
    set interfaces irb unit 1 virtual-gateway-v4-mac 00:00:5e:00:00:04
    set interfaces irb unit 1 virtual-gateway-v6-mac 00:00:5e:00:00:04
    
    set interfaces irb unit 2 virtual-gateway-accept-data
    set interfaces irb unit 2 family inet address 10.0.2.243/24 preferred
    set interfaces irb unit 2 family inet address 10.0.2.243/24 virtual-gateway-address 10.0.2.254
    set interfaces irb unit 2 family inet6 nd6-stale-time 3600
    set interfaces irb unit 2 family inet6 address 2001:db8::10:0:2:243/112 preferred
    set interfaces irb unit 2 family inet6 address 2001:db8::10:0:2:243/112 virtual-gateway-address 2001:db8::10:0:2:254
    set interfaces irb unit 2 virtual-gateway-v4-mac 00:00:5e:00:00:04
    set interfaces irb unit 2 virtual-gateway-v6-mac 00:00:5e:00:00:04
    
    set interfaces irb unit 3 virtual-gateway-accept-data
    set interfaces irb unit 3 family inet address 10.0.3.243/24 preferred
    set interfaces irb unit 3 family inet address 10.0.3.243/24 virtual-gateway-address 10.0.3.254
    set interfaces irb unit 3 family inet6 nd6-stale-time 3600
    set interfaces irb unit 3 family inet6 address 2001:db8::10:0:3:243/112 preferred
    set interfaces irb unit 3 family inet6 address 2001:db8::10:0:3:243/112 virtual-gateway-address 2001:db8::10:0:3:254
    set interfaces irb unit 3 virtual-gateway-v4-mac 00:00:5e:00:00:04
    set interfaces irb unit 3 virtual-gateway-v6-mac 00:00:5e:00:00:04
    
    set interfaces irb unit 4 virtual-gateway-accept-data
    set interfaces irb unit 4 family inet address 10.0.4.243/24 preferred
    set interfaces irb unit 4 family inet address 10.0.4.243/24 virtual-gateway-address 10.0.4.254
    set interfaces irb unit 4 family inet6 nd6-stale-time 3600
    set interfaces irb unit 4 family inet6 address 2001:db8::10:0:4:243/112 preferred
    set interfaces irb unit 4 family inet6 address 2001:db8::10:0:4:243/112 virtual-gateway-address 2001:db8::10:0:4:254
    set interfaces irb unit 4 virtual-gateway-v4-mac 00:00:5e:00:00:04
    set interfaces irb unit 4 virtual-gateway-v6-mac 00:00:5e:00:00:04

    Spine 4:

    set interfaces irb unit 1 virtual-gateway-accept-data
    set interfaces irb unit 1 family inet address 10.0.1.244/24 preferred
    set interfaces irb unit 1 family inet address 10.0.1.244/24 virtual-gateway-address 10.0.1.254
    set interfaces irb unit 1 family inet6 nd6-stale-time 3600
    set interfaces irb unit 1 family inet6 address 2001:db8::10:0:1:244/112 preferred
    set interfaces irb unit 1 family inet6 address 2001:db8::10:0:1:244/112 virtual-gateway-address 2001:db8::10:0:1:254
    set interfaces irb unit 1 virtual-gateway-v4-mac 00:00:5e:00:00:04
    set interfaces irb unit 1 virtual-gateway-v6-mac 00:00:5e:00:00:04
    
    set interfaces irb unit 2 virtual-gateway-accept-data
    set interfaces irb unit 2 family inet address 10.0.2.244/24 preferred
    set interfaces irb unit 2 family inet address 10.0.2.244/24 virtual-gateway-address 10.0.2.254
    set interfaces irb unit 2 family inet6 nd6-stale-time 3600
    set interfaces irb unit 2 family inet6 address 2001:db8::10:0:2:244/112 preferred
    set interfaces irb unit 2 family inet6 address 2001:db8::10:0:2:244/112 virtual-gateway-address 2001:db8::10:0:2:254
    set interfaces irb unit 2 virtual-gateway-v4-mac 00:00:5e:00:00:04
    set interfaces irb unit 2 virtual-gateway-v6-mac 00:00:5e:00:00:04
    
    set interfaces irb unit 3 virtual-gateway-accept-data
    set interfaces irb unit 3 family inet address 10.0.3.244/24 preferred
    set interfaces irb unit 3 family inet address 10.0.3.244/24 virtual-gateway-address 10.0.3.254
    set interfaces irb unit 3 family inet6 nd6-stale-time 3600
    set interfaces irb unit 3 family inet6 address 2001:db8::10:0:3:244/112 preferred
    set interfaces irb unit 3 family inet6 address 2001:db8::10:0:3:244/112 virtual-gateway-address 2001:db8::10:0:3:254
    set interfaces irb unit 3 virtual-gateway-v4-mac 00:00:5e:00:00:04
    set interfaces irb unit 3 virtual-gateway-v6-mac 00:00:5e:00:00:04
    
    set interfaces irb unit 4 virtual-gateway-accept-data
    set interfaces irb unit 4 family inet address 10.0.4.244/24 preferred
    set interfaces irb unit 4 family inet address 10.0.4.244/24 virtual-gateway-address 10.0.4.254
    set interfaces irb unit 4 family inet6 nd6-stale-time 3600
    set interfaces irb unit 4 family inet6 address 2001:db8::10:0:4:244/112 preferred
    set interfaces irb unit 4 family inet6 address 2001:db8::10:0:4:244/112 virtual-gateway-address 2001:db8::10:0:4:254
    set interfaces irb unit 4 virtual-gateway-v4-mac 00:00:5e:00:00:04
    set interfaces irb unit 4 virtual-gateway-v6-mac 00:00:5e:00:00:04
  6. Define the VRF routing instance and corresponding IRB interfaces for EVPN Type 2 routes on each spine device in POD 2 for the configured VLANs (VNIs).

    Spine 3:

    set interfaces lo0 unit 1 family inet
    
    set routing-instances VRF-T2-1 instance-type vrf
    set routing-instances VRF-T2-1 interface lo0.1
    set routing-instances VRF-T2-1 interface irb.1
    set routing-instances VRF-T2-1 interface irb.2
    set routing-instances VRF-T2-1 interface irb.3
    set routing-instances VRF-T2-1 interface irb.4
    set routing-instances VRF-T2-1 route-distinguisher 192.168.1.3:1
    set routing-instances VRF-T2-1 vrf-target target:100:1
    

    Spine 4:

    set interfaces lo0 unit 1 family inet
    
    set routing-instances VRF-T2-1 instance-type vrf
    set routing-instances VRF-T2-1 interface lo0.1
    set routing-instances VRF-T2-1 interface irb.1
    set routing-instances VRF-T2-1 interface irb.2
    set routing-instances VRF-T2-1 interface irb.3
    set routing-instances VRF-T2-1 interface irb.4
    set routing-instances VRF-T2-1 route-distinguisher 192.168.1.4:1
    set routing-instances VRF-T2-1 vrf-target target:100:1
    
    
  7. Configure the interfaces and aggregated Ethernet links on the multihomed ToR switches (ToR 1 and ToR 2) to the spine devices (Spine 3 and Spine 4) in POD 2. In this step, you:
    • Set the number of aggregated Ethernet interfaces on the switch that you might need (we set 20 here as an example).

    • Configure aggregated Ethernet link ae1 on each ToR switch to the spine devices in POD 2.

    • Configure LLDP on the interfaces.

    ToR 1:

    set chassis aggregated-devices ethernet device-count 20
    
    set interfaces xe-0/0/26 ether-options 802.3ad ae1
    set interfaces xe-0/0/27 ether-options 802.3ad ae1
    
    set interfaces ae1 aggregated-ether-options minimum-links 1
    set interfaces ae1 aggregated-ether-options lacp active
    set interfaces ae1 aggregated-ether-options lacp periodic fast
    
    set protocols lldp interface all
    set protocols lldp interface em0 disable
    

    ToR 2:

    set chassis aggregated-devices ethernet device-count 20
    
    set interfaces xe-0/0/1 ether-options 802.3ad ae1
    set interfaces xe-0/0/27 ether-options 802.3ad ae1
    
    set interfaces ae1 aggregated-ether-options minimum-links 1
    set interfaces ae1 aggregated-ether-options lacp active
    set interfaces ae1 aggregated-ether-options lacp periodic fast
    
    set protocols lldp interface all
    set protocols lldp interface em0 disable
    
  8. Configure the VLANs on the ToR switches in POD 2. These match the VLANs you configured in Step 3 on the spine devices in POD 2.

    ToR 1 and ToR 2:

    set vlans VLAN-1 vlan-id 1
    set vlans VLAN-2 vlan-id 2
    set vlans VLAN-3 vlan-id 3
    set vlans VLAN-4 vlan-id 4
    
    set interfaces ae1 native-vlan-id 4094
    set interfaces ae1 unit 0 family ethernet-switching interface-mode trunk
    set interfaces ae1 unit 0 family ethernet-switching vlan members VLAN-1
    set interfaces ae1 unit 0 family ethernet-switching vlan members VLAN-2
    set interfaces ae1 unit 0 family ethernet-switching vlan members VLAN-3
    set interfaces ae1 unit 0 family ethernet-switching vlan members VLAN-4
    

Verify Collapsed Spine Fabric Connectivity With Route Reflector Cluster and ToR Devices

This section shows CLI commands you can use to verify connectivity between the collapsed spine devices and the route reflector cluster, and between the collapsed spine devices and the ToR devices.

For brevity, this section includes verifying connectivity on the spine devices using only Spine 3 and Spine 4 in POD 2. You can use the same commands on the spine devices (Spine 1 and Spine 2) in POD 1.

  1. Verify connectivity on the aggregated Ethernet links on the route reflector devices toward the four collapsed spine devices. On each route reflector device, aeX connects to Spine X.

    RR 1:

    user@rr-1> show lacp interfaces 
    Aggregated interface: ae1
        LACP state:       Role   Exp   Def  Dist  Col  Syn  Aggr  Timeout  Activity
          et-0/0/46      Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
          et-0/0/46    Partner    No    No   Yes  Yes  Yes   Yes     Fast    Active
          et-0/0/62      Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
          et-0/0/62    Partner    No    No   Yes  Yes  Yes   Yes     Fast    Active
        LACP protocol:        Receive State  Transmit State          Mux State 
          et-0/0/46                 Current   Fast periodic Collecting distributing
          et-0/0/62                 Current   Fast periodic Collecting distributing
    
    
    Aggregated interface: ae2
        LACP state:       Role   Exp   Def  Dist  Col  Syn  Aggr  Timeout  Activity
          et-0/0/9      Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
          et-0/0/9    Partner    No    No   Yes  Yes  Yes   Yes     Fast    Active
          et-0/0/10     Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
          et-0/0/10    Partner    No    No   Yes  Yes  Yes   Yes     Fast    Active
        LACP protocol:        Receive State  Transmit State          Mux State 
          et-0/0/9                  Current   Fast periodic Collecting distributing
          et-0/0/10                 Current   Fast periodic Collecting distributing
    
    
    Aggregated interface: ae3
        LACP state:       Role   Exp   Def  Dist  Col  Syn  Aggr  Timeout  Activity
          et-0/0/49      Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
          et-0/0/49    Partner    No    No   Yes  Yes  Yes   Yes     Fast    Active
          et-0/0/58      Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
          et-0/0/58    Partner    No    No   Yes  Yes  Yes   Yes     Fast    Active
        LACP protocol:        Receive State  Transmit State          Mux State 
          et-0/0/49                 Current   Fast periodic Collecting distributing
          et-0/0/58                 Current   Fast periodic Collecting distributing
    
    
    Aggregated interface: ae4
        LACP state:       Role   Exp   Def  Dist  Col  Syn  Aggr  Timeout  Activity
          xe-0/0/34:2    Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
          xe-0/0/34:2  Partner    No    No   Yes  Yes  Yes   Yes     Fast    Active
          xe-0/0/34:3    Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
          xe-0/0/34:3  Partner    No    No   Yes  Yes  Yes   Yes     Fast    Active
        LACP protocol:        Receive State  Transmit State          Mux State 
          xe-0/0/34:2               Current   Fast periodic Collecting distributing
          xe-0/0/34:3               Current   Fast periodic Collecting distributing

    RR 2:

    user@rr-2> show lacp interfaces 
    Aggregated interface: ae1
        LACP state:       Role   Exp   Def  Dist  Col  Syn  Aggr  Timeout  Activity
          et-0/0/18      Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
          et-0/0/18    Partner    No    No   Yes  Yes  Yes   Yes     Fast    Active
          et-0/0/35      Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
          et-0/0/35    Partner    No    No   Yes  Yes  Yes   Yes     Fast    Active
        LACP protocol:        Receive State  Transmit State          Mux State 
          et-0/0/18                 Current   Fast periodic Collecting distributing
          et-0/0/35                 Current   Fast periodic Collecting distributing
    
    
    Aggregated interface: ae2
        LACP state:       Role   Exp   Def  Dist  Col  Syn  Aggr  Timeout  Activity
          et-0/0/13      Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
          et-0/0/13    Partner    No    No   Yes  Yes  Yes   Yes     Fast    Active
          et-0/0/14      Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
          et-0/0/14    Partner    No    No   Yes  Yes  Yes   Yes     Fast    Active
        LACP protocol:        Receive State  Transmit State          Mux State 
          et-0/0/13                 Current   Fast periodic Collecting distributing
          et-0/0/14                 Current   Fast periodic Collecting distributing
    
    
    Aggregated interface: ae3
        LACP state:       Role   Exp   Def  Dist  Col  Syn  Aggr  Timeout  Activity
          et-0/0/22      Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
          et-0/0/22    Partner    No    No   Yes  Yes  Yes   Yes     Fast    Active
          et-0/0/23      Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
          et-0/0/23    Partner    No    No   Yes  Yes  Yes   Yes     Fast    Active
        LACP protocol:        Receive State  Transmit State          Mux State 
          et-0/0/22                 Current   Fast periodic Collecting distributing
          et-0/0/23                 Current   Fast periodic Collecting distributing
    
    
    Aggregated interface: ae4
        LACP state:       Role   Exp   Def  Dist  Col  Syn  Aggr  Timeout  Activity
          et-0/0/19      Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
          et-0/0/19    Partner    No    No   Yes  Yes  Yes   Yes     Fast    Active
          et-0/0/20      Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
          et-0/0/20    Partner    No    No   Yes  Yes  Yes   Yes     Fast    Active
        LACP protocol:        Receive State  Transmit State          Mux State 
          et-0/0/19                 Current   Fast periodic Collecting distributing
          et-0/0/20                 Current   Fast periodic Collecting distributing
  2. Verify connectivity on the aggregated Ethernet links on the spine devices in POD 2 (Spine 3 and Spine 4) toward the route reflector devices. Links ae1 and ae2 connect to route reflector devices RR 1 and RR 2, respectively, on both Spine 3 and Spine 4.

    Spine 3:

    user@spine-3> show lacp interfaces ae1
    Aggregated interface: ae1
        LACP state:       Role   Exp   Def  Dist  Col  Syn  Aggr  Timeout  Activity
          et-0/0/0       Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
          et-0/0/0     Partner    No    No   Yes  Yes  Yes   Yes     Fast    Active
          et-0/0/1       Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
          et-0/0/1     Partner    No    No   Yes  Yes  Yes   Yes     Fast    Active
        LACP protocol:        Receive State  Transmit State          Mux State 
          et-0/0/0                  Current   Fast periodic Collecting distributing
          et-0/0/1                  Current   Fast periodic Collecting distributing
    
    
    user@spine-3> show lacp interfaces ae2    
    Aggregated interface: ae2
        LACP state:       Role   Exp   Def  Dist  Col  Syn  Aggr  Timeout  Activity
          et-0/0/7       Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
          et-0/0/7     Partner    No    No   Yes  Yes  Yes   Yes     Fast    Active
          et-0/0/8       Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
          et-0/0/8     Partner    No    No   Yes  Yes  Yes   Yes     Fast    Active
        LACP protocol:        Receive State  Transmit State          Mux State 
          et-0/0/7                  Current   Fast periodic Collecting distributing
          et-0/0/8                  Current   Fast periodic Collecting distributing
    

    Spine 4:

    user@spine-4> show lacp interfaces ae1
    Aggregated interface: ae1
        LACP state:       Role   Exp   Def  Dist  Col  Syn  Aggr  Timeout  Activity
          xe-0/0/3:2     Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
          xe-0/0/3:2   Partner    No    No   Yes  Yes  Yes   Yes     Fast    Active
          xe-0/0/3:3     Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
          xe-0/0/3:3   Partner    No    No   Yes  Yes  Yes   Yes     Fast    Active
        LACP protocol:        Receive State  Transmit State          Mux State 
          xe-0/0/3:2                Current   Fast periodic Collecting distributing
          xe-0/0/3:3                Current   Fast periodic Collecting distributing
    
    
    user@spine-4> show lacp interfaces ae2    
    Aggregated interface: ae2
        LACP state:       Role   Exp   Def  Dist  Col  Syn  Aggr  Timeout  Activity
          et-0/0/19      Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
          et-0/0/19    Partner    No    No   Yes  Yes  Yes   Yes     Fast    Active
          et-0/0/20      Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
          et-0/0/20    Partner    No    No   Yes  Yes  Yes   Yes     Fast    Active
        LACP protocol:        Receive State  Transmit State          Mux State 
          et-0/0/19                 Current   Fast periodic Collecting distributing
          et-0/0/20                 Current   Fast periodic Collecting distributing
    
  3. Verify connectivity on the aggregated Ethernet links on the spine devices in POD 2 (Spine 3 and Spine 4) toward the multihomed ToR switches. Links ae3 and ae10 connect to ToR 1 and ToR 2, respectively, on both Spine 3 and Spine 4, so this command line filters the output to find link states starting with ae3. The output is truncated to show status only for the relevant links.

    Spine 3:

    user@spine-3> show lacp interfaces | find ae3
    Aggregated interface: ae3
        LACP state:       Role   Exp   Def  Dist  Col  Syn  Aggr  Timeout  Activity
          xe-0/0/22:0    Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
          xe-0/0/22:0  Partner    No    No   Yes  Yes  Yes   Yes     Fast    Active
        LACP protocol:        Receive State  Transmit State          Mux State 
          xe-0/0/22:0               Current   Fast periodic Collecting distributing
    
        LACP hold-timer:  Up, Enabled, Interval: 300 sec
                            Status         Re-Start Cnt   TTE(sec)    Hold Start
        xe-0/0/22:0         Not-Running    NA             NA          NA                
    
    ...
    
    Aggregated interface: ae10
        LACP state:       Role   Exp   Def  Dist  Col  Syn  Aggr  Timeout  Activity
          xe-0/0/23:0    Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
          xe-0/0/23:0  Partner    No    No   Yes  Yes  Yes   Yes     Fast    Active
        LACP protocol:        Receive State  Transmit State          Mux State 
          xe-0/0/23:0               Current   Fast periodic Collecting distributing
    
        LACP hold-timer:  Up, Enabled, Interval: 300 sec
                            Status         Re-Start Cnt   TTE(sec)    Hold Start
        xe-0/0/23:0         Not-Running    NA             NA          NA                
    ...
    

    Spine 4:

    user@spine-4> show lacp interfaces | find ae3
    Aggregated interface: ae3
        LACP state:       Role   Exp   Def  Dist  Col  Syn  Aggr  Timeout  Activity
          xe-0/0/4:2     Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
          xe-0/0/4:2   Partner    No    No   Yes  Yes  Yes   Yes     Fast    Active
        LACP protocol:        Receive State  Transmit State          Mux State 
          xe-0/0/4:2                Current   Fast periodic Collecting distributing
    
        LACP hold-timer:  Up, Enabled, Interval: 300 sec
                            Status         Re-Start Cnt   TTE(sec)    Hold Start
        xe-0/0/4:2          Not-Running    NA             NA          NA                
        
    
    ...
    
    Aggregated interface: ae10
        LACP state:       Role   Exp   Def  Dist  Col  Syn  Aggr  Timeout  Activity
          xe-0/0/6:1     Actor    No    No   Yes  Yes  Yes   Yes     Fast    Active
          xe-0/0/6:1   Partner    No    No   Yes  Yes  Yes   Yes     Fast    Active
        LACP protocol:        Receive State  Transmit State          Mux State 
          xe-0/0/6:1                Current   Fast periodic Collecting distributing
    
        LACP hold-timer:  Up, Enabled, Interval: 300 sec
                            Status         Re-Start Cnt   TTE(sec)    Hold Start
        xe-0/0/6:1          Not-Running    NA             NA          NA                
    
    ...
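
    The Active role and Fast timeout values in this output correspond to LACP settings on the spine aggregated Ethernet interfaces. The following is a minimal sketch only, using the Spine 3 member interfaces shown above; VLAN membership, ESI, and other interface settings are omitted, and your member ports and ae numbering might differ:

    # sketch only; adapt member interfaces and ae numbers to your fabric
    set interfaces xe-0/0/22:0 ether-options 802.3ad ae3
    set interfaces ae3 aggregated-ether-options lacp active
    set interfaces ae3 aggregated-ether-options lacp periodic fast
    set interfaces xe-0/0/23:0 ether-options 802.3ad ae10
    set interfaces ae10 aggregated-ether-options lacp active
    set interfaces ae10 aggregated-ether-options lacp periodic fast

    The ToR side must also run LACP on its members of the same bundles for the links to reach the Collecting/Distributing state shown above.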
    
    
  4. Verify that the spine devices in POD 2 (Spine 3 and Spine 4) detect the route reflector devices and the ToR switches in POD 2 as LLDP neighbors. For the spine-to-ToR links, this verifies that the ESI member links to the multihomed ToR switches have been established.

    This sample command output is filtered and truncated to show only the relevant aggregated Ethernet links. Comment lines show the columns for the values displayed in the resulting output. See Figure 4 again, which shows that both spine switches in POD 2 use ae1 and ae2 to link to the route reflector devices, ae3 to link to ToR 1, and ae10 to link to ToR 2. (A brief note on the LLDP configuration these neighbors rely on follows the output.)

    Spine 3:

    user@spine-3> show lldp neighbors | grep ae 
    #Local Interface   Parent Interface    Chassis Id          Port info          System Name
    et-0/0/0           ae1                 54:4b:8c:cd:e4:38   et-0/0/58          rr-1
    et-0/0/1           ae1                 54:4b:8c:cd:e4:38   et-0/0/49          rr-1
    et-0/0/7           ae2                 c0:bf:a7:ca:53:c0   et-0/0/22          rr-2
    et-0/0/8           ae2                 c0:bf:a7:ca:53:c0   et-0/0/23          rr-2
    et-0/0/22:0        ae3                 10:0e:7e:b0:a1:40   xe-0/0/26          tor-1
    ...
    xe-0/0/23:0        ae10                20:d8:0b:14:72:00   xe-0/0/1           tor-2
    ...

    Spine 4:

    user@spine-4> show lldp neighbors | grep ae 
    #Local Interface   Parent Interface    Chassis Id          Port info          System Name
    xe-0/0/3:2         ae1                 54:4b:8c:cd:e4:38   xe-0/0/34:2        rr-1 
    xe-0/0/3:3         ae1                 54:4b:8c:cd:e4:38   xe-0/0/34:3        rr-1 
    et-0/0/19          ae2                 c0:bf:a7:ca:53:c0   et-0/0/19          rr-2 
    et-0/0/20          ae2                 c0:bf:a7:ca:53:c0   et-0/0/20          rr-2 
    
    xe-0/0/4:2         ae3                 10:0e:7e:b0:a1:40   xe-0/0/27          tor-1 
    ...
    xe-0/0/6:1         ae10                20:d8:0b:14:72:00   xe-0/0/27          tor-2 
    
    ...
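
    For these neighbors to appear, LLDP must be enabled on the spine, route reflector, and ToR devices. A minimal, assumed configuration (not shown elsewhere in this example) is:

    set protocols lldp interface all

    You can also inspect a single member link, for example with show lldp neighbors interface xe-0/0/22:0 on Spine 3, to confirm which ToR port terminates that ESI member link.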

Verify Collapsed Spine Fabric BGP Underlay and EVPN-VXLAN Overlay Configuration

This section shows CLI commands you can use to verify that the underlay and overlay are working on the collapsed spine devices integrated with the route reflector cluster. Refer to Figure 4 and Figure 5 again for the configured underlay and overlay parameters.

For brevity, this section includes verifying connectivity on the spine devices using only Spine 3 and Spine 4 in POD 2. You can use the same commands on the spine devices (Spine 1 and Spine 2) in POD 1.

  1. Verify on the route reflector devices that the underlay EBGP and overlay IBGP peerings with the four spine devices are established and that the traffic paths are active. This sample command output is filtered to show only the relevant status lines for the established peerings. Comment lines show the columns for the values displayed in the resulting output. (A sketch of the route reflector BGP configuration that produces these peerings follows the output.)

    RR 1:

    user@rr-1> show bgp summary | match Estab 
    # Peer           AS              InPkt     OutPkt    OutQ   Flaps Last Up/Dwn State|#Active/Received/Damped...
    # underlay BGP peerings
    172.16.1.1       4200000011       3758       3767       0       0  1d 4:38:36 Establ
    172.16.2.1       4200000012        129        131       0       5       56:59 Establ
    172.16.3.1       4200000013       3802       3773       0       0  1d 4:41:03 Establ
    172.16.4.1       4200000014       3791       3762       0       0  1d 4:36:06 Establ
    ...
    # overlay BGP peerings
    192.168.1.1      4210000001     980683    4088207       0       0  1d 4:38:32 Establ
    192.168.1.2      4210000001      27145     154826       0       5       56:58 Establ
    192.168.1.3      4210000001    2696563    2953756       0       0  1d 4:41:02 Establ
    192.168.1.4      4210000001    2640667    3000173       0       0  1d 4:36:04 Establ
    ...
    

    RR 2:

    user@rr-2> show bgp summary | match Estab 
    # Peer           AS              InPkt     OutPkt    OutQ   Flaps Last Up/Dwn State|#Active/Received/Damped...
    # underlay BGP peerings
    172.16.5.1       4200000011       3748       3763       0       0  1d 4:37:57 Establ
    172.16.6.1       4200000012        131        131       0       5       56:16 Establ
    172.16.7.1       4200000013       3796       3765       0       0  1d 4:39:01 Establ
    172.16.8.1       4200000014       3788       3756       0       0  1d 4:35:27 Establ
    ...
    # overlay BGP peerings
    192.168.1.1      4210000001     980619    4085507       0       0  1d 4:37:55 Establ
    192.168.1.2      4210000001      27074     154082       0       5       56:14 Establ
    192.168.1.3      4210000001    2695621    2952494       0       0  1d 4:38:59 Establ
    192.168.1.4      4210000001    2640070    2998889       0       0  1d 4:35:25 Establ
    ...
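
    For reference, the following is a minimal sketch of the overlay IBGP route reflector configuration that would produce the peerings above on RR 1. The spine loopback addresses (192.168.1.1 through 192.168.1.4) and the overlay AS (4210000001) are taken from the command output in this section; RR 1's loopback address (assumed here to be 192.168.2.1, the first overlay peer address the spine devices show in the next step), the cluster ID value, and the omitted underlay EBGP group and export policies are assumptions for illustration only.

    # sketch only; assumed values noted in the lead-in
    set routing-options autonomous-system 4210000001
    set protocols bgp group overlay type internal
    set protocols bgp group overlay local-address 192.168.2.1
    set protocols bgp group overlay family evpn signaling
    set protocols bgp group overlay cluster 192.168.2.1
    set protocols bgp group overlay neighbor 192.168.1.1
    set protocols bgp group overlay neighbor 192.168.1.2
    set protocols bgp group overlay neighbor 192.168.1.3
    set protocols bgp group overlay neighbor 192.168.1.4

    With the cluster statement, the route reflectors re-advertise EVPN routes among the spine devices, so the spines do not need a full IBGP mesh with each other.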
  2. Verify on the spine devices in POD 2 that the underlay EBGP and overlay IBGP peerings are established. This sample command output is filtered to show only the relevant status lines showing the established peering. Comment lines show the columns for the values displayed in the resulting output.

    Spine 3:

    user@spine-3> show bgp summary | match Estab
    # Peer           AS              InPkt     OutPkt    OutQ   Flaps Last Up/Dwn  State|#Active/Received/Damped...
    172.16.3.0       4200000021       3761       3788       0       1  1d 4:35:08  Establ
    172.16.7.0       4200000022       3755       3783       0       1  1d 4:33:44  Establ
    ...
    192.168.2.1      4210000001    2942193    2685492       0       1  1d 4:35:06  Establ
    192.168.2.2      4210000001    2941362    2685039       0       1  1d 4:33:43  Establ

    Spine 4:

    user@spine-4> show bgp summary | match Estab
    # Peer           AS              InPkt     OutPkt    OutQ   Flaps Last Up/Dwn  State|#Active/Received/Damped...
    172.16.4.0       4200000021       3746       3773       0       0  1d 4:28:12 Establ
    172.16.8.0       4200000022       3742       3771       0       0  1d 4:28:12 Establ
    ...
    192.168.2.1      4210000001    2986192    2627487       0       0  1d 4:28:10 Establ
    192.168.2.2      4210000001    2985323    2627487       0       0  1d 4:28:10 Establ
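
    A minimal sketch of the Spine 3 BGP configuration consistent with this output follows: the underlay EBGP group peers with the route reflectors' underlay ASs (4200000021 and 4200000022) over the point-to-point addresses shown above, and the overlay IBGP group peers with the route reflector loopbacks. Spine 3's own underlay AS (4200000013) is inferred from the route reflector output in step 1; loopback addressing, export policies, and load-balancing settings are omitted.

    # sketch only; see the lead-in for assumed values
    set protocols bgp group underlay type external
    set protocols bgp group underlay local-as 4200000013
    set protocols bgp group underlay family inet unicast
    set protocols bgp group underlay multipath multiple-as
    set protocols bgp group underlay neighbor 172.16.3.0 peer-as 4200000021
    set protocols bgp group underlay neighbor 172.16.7.0 peer-as 4200000022
    set protocols bgp group overlay type internal
    set protocols bgp group overlay local-address 192.168.1.3
    set protocols bgp group overlay family evpn signaling
    set protocols bgp group overlay neighbor 192.168.2.1
    set protocols bgp group overlay neighbor 192.168.2.2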
  3. Verify the endpoint destination IP addresses for the remote VTEP interfaces, which are the loopback addresses of the other three spine devices in POD 1 and POD 2 of this collapsed spine topology. We include sample output for Spine 3 in POD 2 here; results are similar on the other spine devices.

    Spine 3:

    user@spine-3> show interfaces vtep | match Remote
        VXLAN Endpoint Type: Remote, VXLAN Endpoint Address: 192.168.1.4, L2 Routing Instance: default-switch, L3 Routing Instance: default
        VXLAN Endpoint Type: Remote, VXLAN Endpoint Address: 192.168.1.1, L2 Routing Instance: default-switch, L3 Routing Instance: default
        VXLAN Endpoint Type: Remote, VXLAN Endpoint Address: 192.168.1.2, L2 Routing Instance: default-switch, L3 Routing Instance: default
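
    The VTEP source address on each spine is typically its loopback (for example, set switch-options vtep-source-interface lo0.0), which is why the remote endpoints listed here are the loopback addresses of the other spine devices. To cross-check which VNIs are learned through each remote VTEP, you can also run the following command (output omitted here):

    user@spine-3> show ethernet-switching vxlan-tunnel-end-point remote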
  4. Verify the ESI-LAGs on the spine devices toward the ToR switches. We include sample output for Spine 3 in POD 2 here; results are similar on the other spine devices. (A sketch of the interface-level ESI configuration follows the output.)

    Spine 3:

    user@spine-3> show evpn instance extensive
    Instance: __default_evpn__
      Route Distinguisher: 192.168.1.3:0
      Number of bridge domains: 0
      Number of neighbors: 1
        Address               MAC    MAC+IP        AD        IM        ES Leaf-label Remote-DCI-Peer
        192.168.1.4             0         0         0         0         2
    
    Instance: default-switch
      Route Distinguisher: 192.168.1.3:3333
      Encapsulation type: VXLAN
      Duplicate MAC detection threshold: 5
      Duplicate MAC detection window: 180
      MAC database status                     Local  Remote
        MAC advertisements:                       5       9
        MAC+IP advertisements:                   21      21
        Default gateway MAC advertisements:       8       0
      Number of local interfaces: 3 (3 up)
        Interface name  ESI                            Mode             Status     AC-Role
        .local..5       00:00:00:00:00:00:00:00:00:00  single-homed     Up         Root 
        ae10.0          00:00:00:ff:00:01:00:01:00:0a  all-active       Up         Root 
        ae3.0           00:00:00:ff:00:02:00:01:00:03  all-active       Up         Root 
      Number of IRB interfaces: 4 (4 up)
        Interface name  VLAN   VNI    Status  L3 context
        irb.1                  100001  Up     VRF-T2-1                         
        irb.2                  100002  Up     VRF-T2-1                         
        irb.3                  100003  Up     VRF-T2-1                         
        irb.4                  100004  Up     VRF-T2-1                         
      Number of protect interfaces: 0
      Number of bridge domains: 4
        VLAN  Domain-ID Intfs/up   IRB-intf  Mode            MAC-sync IM-label  v4-SG-sync IM-core-NH v6-SG-sync IM-core-NH Trans-ID
        1     100001       2  2    irb.1     Extended        Enabled  100001    Disabled              Disabled              100001      
        2     100002       2  2    irb.2     Extended        Enabled  100002    Disabled              Disabled              100002      
        3     100003       2  2    irb.3     Extended        Enabled  100003    Disabled              Disabled              100003      
        4     100004       2  2    irb.4     Extended        Enabled  100004    Disabled              Disabled              100004      
      Number of neighbors: 1
        Address               MAC    MAC+IP        AD        IM        ES Leaf-label Remote-DCI-Peer
        192.168.1.4             9        21         8         4         0
      Number of ethernet segments: 6
        ESI: 00:00:00:ff:00:01:00:01:00:0a
          Status: Resolved by IFL ae10.0
          Local interface: ae10.0, Status: Up/Forwarding
          Number of remote PEs connected: 1
            Remote-PE        MAC-label  Aliasing-label  Mode
            192.168.1.4      0          0               all-active   
          DF Election Algorithm: MOD based
          Designated forwarder: 192.168.1.4
          Backup forwarder: 192.168.1.3
          Last designated forwarder update: Apr 09 13:13:20
        ESI: 00:00:00:ff:00:02:00:01:00:03
          Status: Resolved by IFL ae3.0
          Local interface: ae3.0, Status: Up/Forwarding
          Number of remote PEs connected: 1
            Remote-PE        MAC-label  Aliasing-label  Mode
            192.168.1.4      100001     0               all-active   
          DF Election Algorithm: MOD based
          Designated forwarder: 192.168.1.4
          Backup forwarder: 192.168.1.3
          Last designated forwarder update: Apr 09 13:13:20
        ESI: 05:fa:ef:80:81:00:01:86:a1:00
          Local interface: irb.1, Status: Up/Forwarding
          Number of remote PEs connected: 1
            Remote-PE        MAC-label  Aliasing-label  Mode
            192.168.1.4      100001     0               all-active   
        ESI: 05:fa:ef:80:81:00:01:86:a2:00
          Local interface: irb.2, Status: Up/Forwarding
          Number of remote PEs connected: 1
            Remote-PE        MAC-label  Aliasing-label  Mode
            192.168.1.4      100002     0               all-active   
        ESI: 05:fa:ef:80:81:00:01:86:a3:00
          Local interface: irb.3, Status: Up/Forwarding
          Number of remote PEs connected: 1
            Remote-PE        MAC-label  Aliasing-label  Mode
            192.168.1.4      100003     0               all-active   
        ESI: 05:fa:ef:80:81:00:01:86:a4:00
          Local interface: irb.4, Status: Up/Forwarding
          Number of remote PEs connected: 1
            Remote-PE        MAC-label  Aliasing-label  Mode
            192.168.1.4      100004     0               all-active   
      Router-ID: 192.168.1.3
      SMET Forwarding: Disabled
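
    The ae3.0 and ae10.0 Ethernet segments shown here come from the interface-level ESI configuration on the spine devices. The following is a minimal sketch using the ESI values from this output; both spine devices must configure the same ESI (and the same LACP system ID) on their members of a given ESI-LAG, and VLAN membership and other interface settings are omitted:

    # sketch only; ESI values taken from the output above
    set interfaces ae3 esi 00:00:00:ff:00:02:00:01:00:03
    set interfaces ae3 esi all-active
    set interfaces ae10 esi 00:00:00:ff:00:01:00:01:00:0a
    set interfaces ae10 esi all-active

    The 05:... Ethernet segments on the IRB interfaces are auto-generated type 5 ESIs derived from the overlay AS (4210000001 = 0xfaef8081) and the VNI (for example, 0x186a1 = 100001); they are not configured directly.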
    