
Data Center Fabric Reference Design Overview and Validated Topology

This section provides a high-level overview of the Data Center Fabric reference design topology and summarizes the topologies that were tested and validated by the Juniper Networks Test Team.

Note:

Contact your local account manager for details on the tested scale for all supported features.

Reference Design Overview

The Data Center Fabric reference design tested by Juniper Networks is based on an IP Fabric underlay in a Clos topology that uses the following devices:

  • Spine devices: up to four.

  • Leaf devices: the number of leaf devices supported by Juniper Networks varies depending on the Junos OS or Junos OS Evolved software release and the overlay type (centrally routed bridging overlay or edge-routed bridging overlay).

    The number of tested leaf nodes reflected throughout this guide is 96, which was the number tested in the initial reference design.

Each leaf device is interconnected to each spine device using either an aggregated Ethernet interface that includes two high-speed Ethernet interfaces (10-Gbps, 40-Gbps, or 100-Gbps) as LAG members or a single high-speed Ethernet interface.
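For illustration, here is a minimal Junos-style sketch of a 2-member aggregated Ethernet uplink from a leaf device to one spine device. The interface names (et-0/0/0, et-0/0/1, ae0) and the /31 point-to-point address are hypothetical placeholders, not values from the validated design:

  # Enable aggregated Ethernet interfaces on the device.
  set chassis aggregated-devices ethernet device-count 2

  # Bundle two high-speed interfaces into ae0 as LAG members.
  set interfaces et-0/0/0 ether-options 802.3ad ae0
  set interfaces et-0/0/1 ether-options 802.3ad ae0

  # Run LACP on the bundle and address it as a point-to-point fabric link.
  set interfaces ae0 aggregated-ether-options lacp active
  set interfaces ae0 unit 0 family inet address 172.16.0.1/31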

Figure 1 provides an illustration of the topology used in this reference design:

Figure 1: Data Center Fabric Reference Design - Topology

End systems such as servers connect to the data center network through leaf device interfaces. Each end system was multihomed to three leaf devices using a 3-member aggregated Ethernet interface as shown in Figure 2.

Figure 2: Data Center Fabric Reference Design - Multihoming

The objective of multihoming end systems to three different leaf devices is to verify that multihoming an end system to more than two leaf devices is fully supported.
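In EVPN fabrics, this kind of multihoming is typically implemented by configuring the same Ethernet segment identifier (ESI) and LACP system ID on the end-system-facing bundle of every attached leaf device. The following sketch shows what that might look like on each of the three leaf devices; the ESI value, LACP system ID, interface names, and VLAN name are hypothetical placeholders:

  # Attach the server-facing port to the multihomed bundle ae1.
  set interfaces xe-0/0/10 ether-options 802.3ad ae1

  # Use the same ESI and LACP system ID on all three leaf devices
  # so the end system sees one LAG (all-active multihoming).
  set interfaces ae1 esi 00:11:22:33:44:55:66:77:88:99
  set interfaces ae1 esi all-active
  set interfaces ae1 aggregated-ether-options lacp active
  set interfaces ae1 aggregated-ether-options lacp system-id 00:00:00:01:01:01
  set interfaces ae1 unit 0 family ethernet-switching vlan members v100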

Data Center EVPN-VXLAN Fabric Reference Designs—Supported Hardware Summary

Table 1 provides a summary of the hardware that you can use to create the reference designs described in this guide. The table is organized by use cases, roles that devices play in each use case, and the hardware supported for each role.

Note:
  • For each use case, we support the hardware listed in Table 1 only with the associated Junos OS or Junos OS Evolved software releases.

  • To learn about any existing issues and limitations for a hardware device in this reference design, see the release notes for the Junos OS or Junos OS Evolved software release with which the device was tested.

Table 1: Data Center Fabric Reference Design Supported Hardware Summary

For each device role below, we list the supported hardware⁴ followed, in parentheses, by the Junos OS or Junos OS Evolved (EVO) software release¹ that introduced support for that hardware in the role.

Centrally Routed Bridging Overlay

Spine

  • QFX10002-36Q/72Q, QFX10008, QFX10016, MX Series (17.3R3-S1)

  • QFX5120-32C, QFX10002-60C (19.1R2)

  • PTX10001, PTX10004, PTX10008, QFX5130-32CD, QFX5700 (23.2R2-EVO)

Server leaf

  • QFX5100, QFX5110, QFX5200, QFX10002-36Q/72Q (17.3R3-S1)

  • QFX5210-64C (18.1R3-S3)

  • QFX5120-48Y (18.4R2)

  • QFX5120-32C, QFX10002-60C (19.1R2)

  • QFX5120-48T (20.2R2)

  • QFX5120-48YM⁴ (20.4R3)

Border spine⁵ (data center gateway, and data center interconnect (DCI) using IPVPN)

  • QFX10002-36Q/72Q, QFX10008, QFX10016 (17.3R3-S1)

  • QFX5110 (18.1R3-S3)

  • QFX5120-48Y (18.4R2)

  • MX204; MX240, MX480, and MX960 with MPC7E; MX10003; MX10008 (18.4R2-S2)

  • QFX5120-32C, QFX10002-60C² (19.1R2)

  • QFX10002-60C³ (20.2R2)

  • QFX5120-48YM (20.4R3)

  • QFX5130-32CD, QFX5700 (23.2R2-EVO)

Border spine⁵ (data center gateway, and data center interconnect (DCI) using EVPN Type 5 routes)

  • QFX10002-36Q/72Q, QFX10008, QFX10016 (17.3R3-S1)

  • QFX5110 (18.1R3-S3)

  • QFX5120-48Y (18.4R2)

  • MX204; MX240, MX480, and MX960 with MPC7E; MX10003; MX10008 (18.4R2-S2)

  • QFX5120-32C, QFX10002-60C² (19.1R2)

  • QFX10002-60C³ (20.2R2)

  • QFX5120-48YM (20.4R3)

  • PTX10001, PTX10004, PTX10008, QFX5130-32CD, QFX5700 (23.2R2-EVO)

Border spine⁵ (service chaining)

  • QFX10002-36Q/72Q, QFX10008, QFX10016 (17.3R3-S1)

  • QFX5110 (18.1R3-S3)

  • MX204; MX240, MX480, and MX960 with MPC7E; MX10003; MX10008 (18.4R2-S2)

  • QFX10002-60C² (19.1R2)

  • QFX5120-32C, QFX5120-48T, QFX5120-48Y, QFX10002-60C³ (20.2R2)

  • QFX5120-48YM (20.4R3)

  • PTX10001, PTX10004, PTX10008, QFX5130-32CD, QFX5700 (23.2R2-EVO)

Border leaf (data center gateway, and DCI using IPVPN)

  • QFX10002-36Q/72Q, QFX10008, QFX10016 (17.3R3-S1)

  • QFX5110 (18.1R3-S3)

  • QFX5120-48Y (18.4R2)

  • MX204; MX240, MX480, and MX960 with MPC7E; MX10003; MX10008 (18.4R2-S2)

  • QFX5120-32C, QFX10002-60C² (19.1R2)

  • QFX10002-60C³ (20.2R2)

  • QFX5120-48YM (20.4R3)

Border leaf (data center gateway, and DCI using EVPN Type 5 routes)

  • QFX10002-36Q/72Q, QFX10008, QFX10016 (17.3R3-S1)

  • QFX5110 (18.1R3-S3)

  • QFX5120-48Y (18.4R2)

  • MX204; MX240, MX480, and MX960 with MPC7E; MX10003; MX10008 (18.4R2-S2)

  • QFX5120-32C, QFX10002-60C² (19.1R2)

  • QFX10002-60C³ (20.2R2)

  • QFX5120-48YM (20.4R3)

  • PTX10001, PTX10004, PTX10008 (23.2R2-EVO)

Border leaf (service chaining)

  • QFX10002-36Q/72Q, QFX10008, QFX10016 (17.3R3-S1)

  • QFX5110 (18.1R3-S3)

  • MX204; MX240, MX480, and MX960 with MPC7E; MX10003; MX10008 (18.4R2-S2)

  • QFX10002-60C² (19.1R2)

  • QFX5120-32C, QFX5120-48T, QFX5120-48Y, QFX10002-60C³ (20.2R2)

  • QFX5120-48YM (20.4R3)

  • PTX10001, PTX10004, PTX10008 (23.2R2-EVO)

Lean super spine

  • QFX5110, QFX5120-32C, QFX5120-48Y, QFX5200, QFX5210-64C, QFX5220-32CD, QFX5220-128C, QFX10002-36Q/72Q/60C, QFX10008, QFX10016 (20.2R2)

  • QFX5120-48YM (20.4R3)

  • ACX7100-32C, ACX7100-48L, PTX10001, PTX10004, PTX10008, QFX5130-32CD, QFX5700 (21.2R2-S1-EVO)

  • ACX7024 (22.4R2-EVO)

Edge-Routed Bridging Overlay

Lean spine

  • QFX5200, QFX10002-36Q/72Q, QFX10008, QFX10016 (17.3R3-S1)

  • QFX5110, QFX5210-64C (18.1R3-S3)

  • QFX5120-48Y (18.4R2)

  • QFX5120-32C, QFX10002-60C (19.1R2)

  • QFX5220-32CD, QFX5220-128C (20.2R2)

  • QFX5120-48YM (20.4R3)

  • ACX7100-32C, ACX7100-48L, PTX10001, PTX10004, PTX10008, QFX5130-32CD, QFX5700 (21.2R2-S1-EVO)

  • ACX7024 (22.4R2-EVO)

Lean spine with IPv6 underlay

  • QFX5120-32C, QFX5120-48Y, QFX10002-36Q/72Q, QFX10002-60C, QFX10008, QFX10016 (21.2R2-S1)

  • QFX5120-48YM (21.4R2)

Server leaf

  • QFX10002-36Q/72Q, QFX10008, QFX10016 (17.3R3-S1)

  • QFX5110 (18.1R3-S3)

  • QFX5120-48Y (18.4R2)

  • QFX5120-32C, QFX10002-60C (19.1R2)

  • QFX5120-48T (20.2R2)

  • QFX5120-48YM (20.4R3)

  • QFX5130-32CD, QFX5700 (21.2R2-S1-EVO)

  • ACX7100-32C, ACX7100-48L, PTX10001, PTX10004, PTX10008 (21.4R2-EVO)

  • ACX7024 (22.4R2-EVO)

Server leaf with IPv6 underlay

  • QFX5120-32C, QFX5120-48T, QFX5120-48Y, QFX10002-36Q/72Q, QFX10002-60C, QFX10008, QFX10016 (21.2R2-S1)

  • QFX5120-48YM (21.4R2)

Server leaf with optimized intersubnet multicast (OISM)

  • QFX5110, QFX5120-32C, QFX5120-48T, QFX5120-48Y, QFX10002-36Q/72Q, QFX10002-60C, QFX10008, QFX10016 (21.4R2)

  • QFX5130-32CD, QFX5700 (22.2R2-EVO)

  • PTX10001, PTX10004, PTX10008 (23.2R2-EVO)

Server leaf with Enhanced OISM

  • QFX5120-32C, QFX5120-48T, QFX5120-48Y (23.4R2)

  • QFX5130-32CD, QFX5700 (23.4R2-EVO)

Border spine⁵ (data center gateway and DCI using IPVPN)

  • QFX10002-36Q/72Q, QFX10008, QFX10016 (17.3R3-S1)

  • QFX5110 (18.1R3-S3)

  • MX204; MX240, MX480, and MX960 with MPC7E; MX10003; MX10008; QFX5120-48Y² (18.4R2)

  • QFX5120-32C, QFX10002-60C² (19.1R2)

  • QFX5120-48T², QFX10002-60C³ (20.2R2)

  • QFX5120-48YM (20.4R3)

  • QFX5130-32CD, QFX5700 (21.2R2-S1-EVO)

Border spine⁵ (data center gateway and DCI using EVPN Type 5 routes)

  • QFX10002-36Q/72Q, QFX10008, QFX10016 (17.3R3-S1)

  • QFX5110 (18.1R3-S3)

  • MX204; MX240, MX480, and MX960 with MPC7E; MX10003; MX10008; QFX5120-48Y² (18.4R2)

  • QFX5120-32C, QFX10002-60C² (19.1R2)

  • QFX5120-48T², QFX10002-60C³ (20.2R2)

  • QFX5120-48YM (20.4R3)

  • QFX5130-32CD, QFX5700 (21.2R2-S1-EVO)

  • ACX7100-32C, ACX7100-48L, PTX10001, PTX10004, PTX10008 (21.4R2-EVO)

  • ACX7024 (22.4R2-EVO)

Border spine⁵ (service chaining)

  • QFX10002-36Q/72Q, QFX10008, QFX10016 (17.3R3-S1)

  • QFX5110 (18.1R3-S3)

  • QFX5120-48Y² (18.4R2)

  • MX204; MX240, MX480, and MX960 with MPC7E; MX10003; MX10008 (18.4R2-S2)

  • QFX5120-32C, QFX10002-60C² (19.1R2)

  • QFX5120-48T², QFX10002-60C³ (20.2R2)

  • QFX5120-48YM (20.4R3)

  • QFX5130-32CD, QFX5700 (21.2R2-S1-EVO)

  • ACX7100-32C, ACX7100-48L, PTX10001, PTX10004, PTX10008 (21.4R2-EVO)

  • ACX7024 (22.4R2-EVO)

Border spine with IPv6 underlay⁵ (including data center gateway, DCI using EVPN Type 5 routes and IPVPN, and service chaining)

  • QFX5120-32C, QFX5120-48T, QFX5120-48Y, QFX10002-36Q/72Q, QFX10002-60C, QFX10008, QFX10016 (21.2R2-S1)

  • QFX5120-48YM (21.4R2)

Border spine with DCI gateway—EVPN-VXLAN to EVPN-MPLS stitching

  • ACX7024, ACX7100-32C, ACX7100-48L, PTX10001, PTX10008 (23.4R2-EVO)

Border leaf (data center gateway, and DCI using IPVPN)

  • QFX10002-36Q/72Q, QFX10008, QFX10016 (17.3R3-S1)

  • QFX5110 (18.1R3-S3)

  • QFX5120-48Y² (18.4R2)

  • MX204; MX240, MX480, and MX960 with MPC7E; MX10003; MX10008 (18.4R2-S2)

  • QFX5120-32C (19.1R2)

  • QFX5120-48T², QFX10002-60C³ (20.2R2)

  • QFX5120-48YM (20.4R3)

  • QFX5130-32CD, QFX5700 (21.2R2-S1-EVO)

Border leaf (data center gateway, and DCI using EVPN Type 5 routes)

  • QFX10002-36Q/72Q, QFX10008, QFX10016 (17.3R3-S1)

  • QFX5110 (18.1R3-S3)

  • QFX5120-48Y² (18.4R2)

  • MX204; MX240, MX480, and MX960 with MPC7E; MX10003; MX10008 (18.4R2-S2)

  • QFX5120-32C (19.1R2)

  • QFX5120-48T², QFX10002-60C³ (20.2R2)

  • QFX5120-48YM (20.4R3)

  • QFX5130-32CD, QFX5700 (21.2R2-S1-EVO)

  • ACX7100-32C, ACX7100-48L, PTX10001, PTX10004, PTX10008 (21.4R2-EVO)

  • ACX7024 (22.4R2-EVO)

Border leaf (service chaining)

  • QFX10002-36Q/72Q, QFX10008, QFX10016 (17.3R3-S1)

  • QFX5110 (18.1R3-S3)

  • MX204; MX240, MX480, and MX960 with MPC7E; MX10003; MX10008 (18.4R2-S2)

  • QFX5120-32C, QFX5120-48T, QFX5120-48Y, QFX10002-60C³ (20.2R2)

  • QFX5120-48YM (20.4R3)

  • QFX5130-32CD, QFX5700 (21.2R2-S1-EVO)

  • ACX7100-32C, ACX7100-48L, PTX10001, PTX10004, PTX10008 (21.4R2-EVO)

  • ACX7024 (22.4R2-EVO)

Border leaf with IPv6 underlay (including data center gateway, DCI using EVPN Type 5 routes and IPVPN, and service chaining)

  • QFX5120-32C, QFX5120-48T, QFX5120-48Y, QFX10002-36Q/72Q, QFX10002-60C, QFX10008, QFX10016 (21.2R2-S1)

  • QFX5120-48YM (21.4R2)

Border leaf with DCI gateway and asymmetric IRB—EVPN-VXLAN to EVPN-VXLAN stitching for Layer 2

  • QFX10002-36Q/72Q, QFX10002-60C, QFX10008, QFX10016 (20.4R3-S1)

  • QFX5130-32CD, QFX5700 (22.2R2-EVO)

  • QFX5120-32C, QFX5120-48T, QFX5120-48Y, QFX5120-48YM (23.2R2)

Border leaf with DCI gateway and symmetric IRB—EVPN-VXLAN to EVPN-VXLAN stitching for Layer 2

  • QFX5120-32C, QFX5120-48T, QFX5120-48Y, QFX5120-48YM, QFX10002-36Q/72Q, QFX10008, QFX10016 (23.2R2)

  • ACX7100-32C, ACX7100-48L, QFX5130-32CD, QFX5700 (23.2R2-EVO)

Border leaf with DCI gateway—EVPN-VXLAN to EVPN-VXLAN stitching for EVPN Type 5 routes

  • MX204; MX240, MX480, and MX960 with MPC7E; MX10003; MX10008; QFX5120-32C; QFX5120-48T; QFX5120-48Y; QFX5120-48YM; QFX10002-36Q/72Q; QFX10002-60C; QFX10008; QFX10016 (22.4R2)

  • ACX7024, ACX7100-32C, ACX7100-48L, QFX5130-32CD, QFX5700, PTX10001, PTX10004, PTX10008 (22.4R2-EVO)

Border leaf with DCI gateway—EVPN-VXLAN to EVPN-MPLS stitching

  • MX Series (21.4R2)

  • ACX7024, ACX7100-32C, ACX7100-48L, PTX10001, PTX10008 (23.4R2-EVO)

Border leaf with optimized intersubnet multicast (OISM)

  • QFX10002-36Q/72Q, QFX10002-60C, QFX10008, QFX10016 (21.4R2)

  • QFX5120-32C, QFX5120-48T, QFX5120-48Y (22.2R2)

  • QFX5130-32CD, QFX5700 (22.2R2-EVO)

  • PTX10001, PTX10004, PTX10008 (23.2R2-EVO)

Border leaf with Enhanced OISM

  • QFX5120-32C, QFX5120-48T, QFX5120-48Y (23.4R2)

  • QFX5130-32CD, QFX5700 (23.4R2-EVO)

Lean super spine

  • QFX5110, QFX5120-32C, QFX5120-48Y, QFX5200, QFX5210-64C, QFX5220-32CD, QFX5220-128C, QFX10002-36Q/72Q/60C, QFX10008, QFX10016 (20.2R2)

  • QFX5120-48YM (20.4R3)

  • ACX7100-32C, ACX7100-48L, PTX10001, PTX10004, PTX10008, QFX5130-32CD, QFX5700 (21.2R2-S1-EVO)

  • ACX7024 (22.4R2-EVO)

Lean super spine with IPv6 underlay

  • QFX5120-32C, QFX5120-48Y, QFX10002-36Q/72Q, QFX10002-60C, QFX10008, QFX10016 (21.2R2-S1)

  • QFX5120-48YM (21.4R2)

Collapsed Spine

Leaf with Virtual Chassis

  • QFX5100, QFX5110, QFX5120 (20.2R2)

Collapsed spine

  • MX204; MX240, MX480, and MX960 with MPC7E; MX10003; MX10008; QFX5110; QFX5120-32C; QFX5120-48T; QFX5120-48Y; QFX10002-36Q/72Q/60C; QFX10008; QFX10016 (20.2R2)

  • QFX5120-48YM (20.4R3)

  • QFX5130-32CD, QFX5700 (21.2R2-S1-EVO)

  • ACX7100-32C, ACX7100-48L, PTX10001, PTX10004, PTX10008 (21.4R2-EVO)

  • ACX7024 (22.4R2-EVO)

¹ This column lists the initial Junos OS or Junos OS Evolved release train with which we introduce support for the hardware in the reference design. For each initial release train, we also support the hardware with later releases in the same release train. Although we list the initial release train here, we recommend that you use the most recent hardened releases that are listed on the following page: Data Center Architectures Hardened Release Information for EVPN/VXLAN.

² While functioning in this role, this hardware does not support centrally routed multicast.

³ While functioning in this role, the QFX10002-60C switch supports multicast traffic, but at a lower scale than the QFX10002-36Q/72Q switches.

⁴ For multicast support, consult the Pathfinder Feature Explorer application for multicast features and the releases that support each feature on each platform.

⁵ For devices in the border spine role, set a larger value for Bidirectional Forwarding Detection (BFD) timers in the overlay: 4 seconds or higher, with a multiplier of 3. These settings help avoid potential flapping of the spine-to-leaf connection due to BFD control timer expiration.

This table does not include backbone devices that connect the data center to a WAN cloud. Backbone devices provide physical connectivity between data centers and are required for DCI. See Data Center Interconnect Design and Implementation Using Type 5 Routes.
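As an example of the BFD guidance in note 5 above, the recommended overlay timer settings correspond to configuration along these lines, assuming a hypothetical overlay BGP group named overlay (minimum-interval is in milliseconds):

  # Use 4-second BFD intervals with a multiplier of 3 on the overlay peering.
  set protocols bgp group overlay bfd-liveness-detection minimum-interval 4000
  set protocols bgp group overlay bfd-liveness-detection multiplier 3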

Interfaces Summary

This section summarizes the interface connections between spine and leaf devices that were validated in this reference design.

It contains the following sections:

  • Interfaces Overview

  • Spine Device Interface Summary

  • Leaf Device Interface Summary

Interfaces Overview

In the validated reference design, spine and leaf devices are interconnected using either an aggregated Ethernet interface that includes two high-speed Ethernet interfaces or a single high-speed Ethernet interface.

The reference design was validated with the following combinations of spine and leaf device interconnections:

  • See Table 1 for the platforms tested together in the spine and leaf device roles and the Junos OS and Junos OS Evolved hardening releases with which they were validated.

    We use 10-Gbps, 25-Gbps, 40-Gbps, or 100-Gbps interfaces on the supported platforms to interconnect spine and leaf devices. Starting in Junos OS and Junos OS Evolved Release 23.2R2, we also use 400-Gbps interfaces to interconnect leaf and spine devices that support this speed.

  • We validated aggregated Ethernet interfaces containing two 10-Gbps, 25-Gbps, 40-Gbps, 100-Gbps, or 400-Gbps member interfaces between the supported platforms.

  • We validated channelized 10-Gbps, 25-Gbps, 40-Gbps, or 100-Gbps interfaces used to interconnect spine and leaf devices as single links or as member links in a 2-member aggregated Ethernet bundle (see the sketch after this list).
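As an illustration of the channelized option in the last item above, a high-speed port can be split into lower-speed interfaces with the chassis channel-speed configuration. The FPC, PIC, and port numbers here are hypothetical, and the available channel speeds vary by platform:

  # Channelize port 0 into four 25-Gbps interfaces
  # (for example, et-0/0/0:0 through et-0/0/0:3).
  set chassis fpc 0 pic 0 port 0 channel-speed 25g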

Spine Device Interface Summary

As previously stated, the validated design includes up to four spine devices, with each leaf device connected to every spine device by either a single high-speed Ethernet interface or a 2-member aggregated Ethernet interface.

QFX10008 and QFX10016 switches were used as they can achieve the port density necessary for this reference design. See QFX10008 Hardware Overview or QFX10016 Hardware Overview for information on supported line cards and the number of high-speed Ethernet interfaces supported on these switches.

QFX10002-36Q/72Q, QFX10002-60C, and QFX5120-32C switches, however, do not have the port density to support this reference design at larger scales but can be deployed as spine devices in smaller-scale environments. See QFX10002 Hardware Overview and QFX5120 System Overview for information about the number of high-speed Ethernet interfaces supported on the QFX10002 and QFX5120-32C switches, respectively.

All channelized spine device interface options are tested and supported in the validated reference design.

Leaf Device Interface Summary

Each leaf device in the reference design connects to the four spine devices and has the port density to support this reference design.

The number and types of high-speed Ethernet interfaces used as uplink interfaces to spine devices vary by leaf device switch model.

To see which high-speed interfaces are available with each leaf device switch model, see the following documents: