For a control-plane-driven overlay, there must be a signaling path between the VXLAN virtual tunnel endpoint (VTEP) devices.
In this reference design with an IPv4 Fabric underlay, all overlay
types use IBGP with Multiprotocol BGP (MP-IBGP) to maintain the signaling
path between the VTEPs within an autonomous system. The spine devices
act as a route reflector cluster, and the leaf devices are route reflector
clients, as shown in Figure 1.
Figure 1: IBGP Route Reflector Cluster
To configure an EVPN-VXLAN data center fabric architecture with
an IPv6 Fabric, see IPv6 Fabric Underlay
and Overlay Network Design and Implementation with EBGP instead of this procedure. In an IPv6 Fabric configuration, we use
EBGP and IPv6 for underlay connectivity, as well as EBGP and IPv6
for peering and EVPN signaling in the overlay. With an IPv6 Fabric,
the VTEPs encapsulate the VXLAN packets with an IPv6 outer header
and tunnel the packets using IPv6. You can use either an IPv4 Fabric
or an IPv6 Fabric in your data center architecture. You can’t
mix IPv4 Fabric and IPv6 Fabric elements in the same architecture.
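The IBGP sessions in this procedure peer between device loopback addresses: 192.168.0.1 through 192.168.0.4 on the four spine devices and 192.168.1.1 through 192.168.1.96 on the leaf devices in this example. These addresses are assumed to already be configured on lo0 and reachable over the IPv4 underlay before you begin; a minimal sketch of that assumed loopback configuration on Spine 1:
set interfaces lo0 unit 0 family inet address 192.168.0.1/32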
To configure IBGP for the overlay peering in an IPv4
Fabric, perform the following:
Configure an AS number for overlay
IBGP. All leaf and spine devices participating in the overlay use
the same AS number. In this example, the AS number is private AS 4210000001.
Spine and Leaf Devices:
set routing-options autonomous-system 4210000001
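To confirm that every spine and leaf device uses the same overlay AS number, one quick check (illustrative, not part of the reference configuration) is to display the routing options on each device; the output should include autonomous-system 4210000001.
user@spine-1> show configuration routing-options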
Configure IBGP using EVPN signaling
on each spine device to peer with every leaf device (Leaf 1 through
Leaf 96). Also, form the route reflector cluster (cluster ID 192.168.0.10)
and configure equal cost multipath (ECMP) for BGP. The configuration
included here belongs to Spine 1, as shown in Figure 2.
Figure 2: IBGP – Spine Device
Tip:
By default, BGP selects only one best path when there are
multiple, equal-cost BGP paths to a destination. When you enable BGP
multipath by including the multipath statement at the [edit protocols bgp group group-name] hierarchy
level, the device installs all of the equal-cost BGP paths into the
forwarding table. This feature helps load balance the traffic across
multiple paths.
Spine 1:
set protocols bgp group OVERLAY type internal
set protocols bgp group OVERLAY local-address 192.168.0.1
set protocols bgp group OVERLAY family evpn signaling
set protocols bgp group OVERLAY cluster 192.168.0.10
set protocols bgp group OVERLAY multipath
set protocols bgp group OVERLAY neighbor 192.168.1.1
...
set protocols bgp group OVERLAY neighbor 192.168.1.96
Configure IBGP on the spine devices to peer with all the
other spine devices acting as route reflectors. This step completes
the full mesh peering topology required to form a route reflector
cluster.
Spine 1:
set protocols bgp group OVERLAY_RR_MESH type internal
set protocols bgp group OVERLAY_RR_MESH local-address 192.168.0.1
set protocols bgp group OVERLAY_RR_MESH family evpn signaling
set protocols bgp group OVERLAY_RR_MESH neighbor 192.168.0.2
set protocols bgp group OVERLAY_RR_MESH neighbor 192.168.0.3
set protocols bgp group OVERLAY_RR_MESH neighbor 192.168.0.4
Configure BFD on all BGP groups on the spine devices to
enable rapid detection of failures and reconvergence.
Spine 1:
set protocols bgp group OVERLAY bfd-liveness-detection minimum-interval 350
set protocols bgp group OVERLAY bfd-liveness-detection multiplier 3
set protocols bgp group OVERLAY bfd-liveness-detection session-mode automatic
set protocols bgp group OVERLAY_RR_MESH bfd-liveness-detection minimum-interval 350
set protocols bgp group OVERLAY_RR_MESH bfd-liveness-detection multiplier 3
set protocols bgp group OVERLAY_RR_MESH bfd-liveness-detection session-mode automatic
Configure IBGP with EVPN signaling from each leaf device
(route reflector client) to each spine device (route reflector cluster). The
configuration included here belongs to Leaf 1, as shown in Figure 3.
Figure 3: IBGP – Leaf Device
Leaf 1:
set protocols bgp group OVERLAY type internal
set protocols bgp group OVERLAY local-address 192.168.1.1
set protocols bgp group OVERLAY family evpn signaling
set protocols bgp group OVERLAY neighbor 192.168.0.1
set protocols bgp group OVERLAY neighbor 192.168.0.2
set protocols bgp group OVERLAY neighbor 192.168.0.3
set protocols bgp group OVERLAY neighbor 192.168.0.4
Configure BFD on the leaf devices to enable rapid detection
of failures and reconvergence.
Note:
QFX5100 switches only support BFD liveness detection minimum
intervals of 1 second or longer. The configuration here has a minimum
interval of 350 ms, which is supported on devices other than QFX5100
switches. An adjusted example for QFX5100 switches follows the Leaf 1 configuration below.
Leaf 1:
set protocols bgp group OVERLAY bfd-liveness-detection minimum-interval 350
set protocols bgp group OVERLAY bfd-liveness-detection multiplier 3
set protocols bgp group OVERLAY bfd-liveness-detection session-mode automatic
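If your leaf devices are QFX5100 switches, increase the minimum interval to 1 second or longer to stay within that platform's supported range, for example (an illustrative adjustment to the configuration above):
set protocols bgp group OVERLAY bfd-liveness-detection minimum-interval 1000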
Verify that IBGP is functional on the spine devices.
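For example, you can check the overlay BGP sessions with the show bgp summary command; each leaf neighbor and each of the other spine neighbors should show a state of Establ. (Illustrative check; the full output is not reproduced here.)
user@spine-1> show bgp summary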
Verify that BFD is operational on the spine devices.
user@spine-1> show bfd session
                                                  Detect   Transmit
Address                  State     Interface      Time     Interval  Multiplier
192.168.0.2              Up                       1.050     0.350        3
192.168.0.3              Up                       1.050     0.350        3
192.168.0.4              Up                       1.050     0.350        3
192.168.1.1              Up                       1.050     0.350        3
...
192.168.1.96             Up                       1.050     0.350        3
Verify that IBGP is operational
on the leaf devices.
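As on the spine devices, you can check the overlay sessions with the show bgp summary command; each of the four spine neighbors should show a state of Establ. (Illustrative check; output not shown.)
user@leaf-1> show bgp summary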
Verify that BFD is operational on the leaf devices.
user@leaf-10> show bfd session
                                                  Detect   Transmit
Address                  State     Interface      Time     Interval  Multiplier
192.168.0.1              Up                       1.050     0.350        3
192.168.0.2              Up                       1.050     0.350        3
192.168.0.3              Up                       1.050     0.350        3
192.168.0.4              Up                       1.050     0.350        3