Multicast Optimization Design and Implementation

Juniper Networks supports the multicast optimization features discussed in this section in both centrally routed bridging (CRB) and edge-routed bridging (ERB) overlays.

This design assumes that an EVPN-VXLAN ERB overlay is already running for IPv4 unicast traffic. (See Edge-Routed Bridging Overlay Design and Implementation for information about configuring edge-routed bridging.) However, the multicast optimization features described here use a centrally routed approach.

Note:

Starting in Junos OS and Junos OS Evolved Release 22.2R2, we recommend deploying the optimized intersubnet multicast (OISM) solution for ERB overlay unicast EVPN-VXLAN networks that include multicast traffic. OISM combines the best aspects of ERB and CRB overlay designs to provide the most efficient multicast traffic flow in ERB overlay fabrics.

We describe the OISM configuration that we validated in our ERB overlay reference architecture in a separate section of this guide.

This section shows how to add centrally routed multicast optimizations to the edge-routed bridging topology shown in Figure 1.

Figure 1: Topology for Multicast Optimizations in an Edge-Routed Bridging Overlay

Multicast is configured as follows:

  • Server leaf devices are set up in the AR leaf role and with IGMP snooping.

  • Spine devices are set up in the AR replicator role.

  • Border leaf devices are set up for multicast routing.

Note:

If your multicast environment requires assisted replication to handle large multicast flows and multicast routing, we recommend switches in the QFX10000 line for the border leaf and border spine roles. However, note that the QFX10002-60C switch supports multicast at a lower scale than the QFX10002-36Q/72Q switches. Also, we do not recommend any of the MX Series routers included in this reference design as a border leaf in a multicast environment with large multicast flows.

For an overview of multicast optimizations, see the Multicast Optimization section in Data Center Fabric Blueprint Architecture Components.

The following sections show how to configure and verify multicast assisted replication:

Configuring the Server Leaf

We configure AR and IGMP snooping on the server leaf devices. When IGMP snooping is enabled on a device, selective multicast Ethernet tag (SMET) forwarding is also enabled on the device by default.

  1. Enable IGMP snooping.
  2. Enable AR in the leaf role. This causes the server leaf to forward only one copy of multicast traffic to the spine, which then replicates the multicast traffic. (A configuration sketch follows this list.)

    The replicator-activation-delay is the time, in seconds, that the leaf waits after receiving the AR replicator route from a replicator before it starts sending traffic to that replicator.
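The following is a minimal sketch of these two steps, assuming the default switch instance; it is not the full validated configuration from this design. The 10-second replicator-activation-delay value is only an example.

    # Step 1: Enable IGMP snooping on all VLANs (SMET is then enabled by default)
    set protocols igmp-snooping vlan all
    # Step 2: Put this device in the AR leaf role; the 10-second delay is an example value
    set protocols evpn assisted-replication leaf replicator-activation-delay 10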

Configuring the Spine

We configure the spine as the AR replicator device. (A configuration sketch covering these steps follows the list.)

  1. Configure IP addressing for the loopback interfaces. One address is used for the AR replicator role (192.168.102.2). The other address (192.168.2.2) is used for the VTEP tunnel.
  2. Configure the spine to act as the AR replicator device.
  3. Configure the loopback interface that is used in the VRF routing instance.
  4. Configure a VRF routing instance.
  5. Configure the VLANs that extend to the border leaf devices.
  6. Configure the EVPN protocol with VXLAN encapsulation.
  7. Configure the switch options, and specify that the loopback interface is the VTEP source interface.
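The following sketch pairs example statements with steps 1 through 7 on Spine 2. Only the loopback addresses (192.168.2.2 and 192.168.102.2) come from this design; the VRF name, VLAN name, VNI, lo0.1 address, route distinguishers, and route targets are placeholders for illustration.

    # Step 1: Loopback addresses: one for the VTEP tunnel, one for the AR replicator role
    set interfaces lo0 unit 0 family inet address 192.168.2.2/32
    set interfaces lo0 unit 0 family inet address 192.168.102.2/32
    # Step 2: AR replicator role using the dedicated AR loopback address
    set protocols evpn assisted-replication replicator inet 192.168.102.2
    set protocols evpn assisted-replication replicator vxlan-encapsulation-source-ip ingress-replication-ip
    # Step 3: Loopback unit for the VRF routing instance (example address)
    set interfaces lo0 unit 1 family inet address 10.100.0.2/32
    # Step 4: VRF routing instance (example names and values)
    set routing-instances VRF-1 instance-type vrf
    set routing-instances VRF-1 interface lo0.1
    set routing-instances VRF-1 route-distinguisher 192.168.2.2:100
    set routing-instances VRF-1 vrf-target target:65000:100
    # Step 5: An example VLAN mapped to a VXLAN VNI
    set vlans VLAN-1 vlan-id 100
    set vlans VLAN-1 vxlan vni 10100
    # Step 6: EVPN protocol with VXLAN encapsulation
    set protocols evpn encapsulation vxlan
    set protocols evpn extended-vni-list all
    # Step 7: Switch options with the loopback as the VTEP source interface
    set switch-options vtep-source-interface lo0.0
    set switch-options route-distinguisher 192.168.2.2:1
    set switch-options vrf-target target:65000:1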

Configuring the Border Leaf

This section describes how to set up multicast routing on the border leaf devices.

Note:

We do not configure AR on the border leaf devices. In this network design, the two border leaf devices share a multihomed ESI, and one of them supports AR but the other does not. In this situation, we do not recommend configuring AR on the border leaf that supports the feature. However, if your network includes two border leaf devices that share a multihomed ESI and both devices support AR, we support configuring AR on both of them.

  1. Configure the VLANs.
  2. Configure the EVPN protocol with VXLAN encapsulation.
    Note:

    On QFX5130 and QFX5700 switches, also include the host-profile unified forwarding profile option to support an EVPN-VXLAN environment (see Layer 2 Forwarding Tables for details).

  3. Configure the switch options and specify that the loopback interface is the VTEP source interface.
  4. Configure the IRBs.
  5. Configure a VRF routing instance.
  6. Configure PIM for multicast routing at the border leaf devices. (A configuration sketch covering steps 1 through 6 follows this list.)
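As with the spine, the following sketch pairs example statements with steps 1 through 6. All names and values in this sketch (VLAN, VNI, route distinguishers, route targets, IRB subnet, and RP address) are placeholders for illustration, and the exact host-profile statement hierarchy should be confirmed in Layer 2 Forwarding Tables for your release.

    # Steps 1 and 2: VLAN mapped to a VXLAN VNI and routed through an IRB; EVPN with VXLAN encapsulation
    set vlans VLAN-1 vlan-id 100
    set vlans VLAN-1 vxlan vni 10100
    set vlans VLAN-1 l3-interface irb.100
    set protocols evpn encapsulation vxlan
    set protocols evpn extended-vni-list all
    # On QFX5130 and QFX5700 switches only (see the note in step 2)
    set forwarding-options evpn-vxlan host-profile
    # Step 3: Switch options with the loopback as the VTEP source interface
    set switch-options vtep-source-interface lo0.0
    set switch-options route-distinguisher 192.168.2.10:1
    set switch-options vrf-target target:65000:1
    # Step 4: IRB with an anycast virtual gateway address (example subnet)
    set interfaces irb unit 100 family inet address 10.1.100.2/24 virtual-gateway-address 10.1.100.1
    # Step 5: Tenant VRF that contains the IRB
    set routing-instances VRF-1 instance-type vrf
    set routing-instances VRF-1 interface irb.100
    set routing-instances VRF-1 route-distinguisher 192.168.2.10:100
    set routing-instances VRF-1 vrf-target target:65000:100
    # Step 6: PIM in the VRF with a static external RP (example address)
    set routing-instances VRF-1 protocols pim rp static address 10.255.255.1
    set routing-instances VRF-1 protocols pim interface irb.100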

Verifying Assisted Replication on the Server Leaf

The server leaf is in the role of AR leaf device. This means that it does not perform ingress replication. Instead, it forwards one copy of multicast traffic to the spine, which is configured as the AR replicator device.

  1. Verify that the spines are in the Assisted Replicator role and that the server leaf is receiving their EVPN Type 3 routes. Address 192.168.102.1 is Spine 1, and address 192.168.102.2 is Spine 2.
  2. Verify which spine is set up as the AR device for the VLAN. 192.168.102.2 is the address of the AR device. (Sample verification commands follow this list.)
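As a sketch of these checks, the standard operational commands below can be used. The pipe filters are illustrative, and the availability of the assisted-replication show command can vary by platform and release, so treat it as an assumption.

    # Step 1: Look for EVPN Type 3 (inclusive multicast) routes that carry the spines'
    # AR addresses, 192.168.102.1 and 192.168.102.2
    show route table bgp.evpn.0 | match 192.168.102
    # Step 2: Check which replicator this leaf selected for the VLAN
    show evpn multicast-snooping assisted-replication next-hops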

Verifying Assisted Replication on the Spine

Verify that server leaf devices 1 through 4 are AR leaf devices. (The loopback addresses of server leaf devices 1 through 4 are 192.168.0.1, 192.168.0.2, 192.168.0.3, and 192.168.0.4, respectively.) The border leaf devices are not set up for assisted replication.
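One way to sketch this check from the spine CLI is to list the EVPN Type 3 routes in the EVPN table and confirm that they come from the server leaf loopbacks but not from the border leaf devices. The pipe filters below are illustrative.

    # Type 3 (inclusive multicast) routes should appear from 192.168.0.1 through 192.168.0.4
    show route table bgp.evpn.0 | match "^3:"
    # Inspect the routes in detail to see the attributes each leaf advertises
    show route table bgp.evpn.0 detail | match "3:|192.168.0"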

Multicast Optimization with a Centrally Routed Multicast Design: Feature Summary

Table 1 provides a history of the features described in this section and their support within this reference design.

Table 1: Multicast Optimization Feature Summary (Centrally Routed Multicast Design)

| Hardware | IGMPv2 Snooping | EVPN Type 6 SMET Routes | Inter-VNI Multicast with PIM Gateway | Assisted Replication | PIM to External Rendezvous Point (From Border) |
|---|---|---|---|---|---|
| QFX5100¹ | Not supported | Not supported | Not supported | Not supported | Not supported |
| QFX5110-32Q, QFX5110-48S | 18.1R3-S3 | 18.4R2 | Not supported | Not supported | Not supported |
| QFX5120-48Y | 18.4R2 | 18.4R2 | Not supported | Not supported | Not supported |
| QFX5120-32C | 19.1R2 | 19.1R2 | Not supported | Not supported | Not supported |
| QFX5200-32C¹, QFX5200-48Y¹ | Not supported | Not supported | Not supported | Not supported | Not supported |
| QFX10002-36Q/72Q, QFX10008, QFX10016 | 18.1R3-S3 | 18.4R2 | 18.1R3-S3 | 18.4R2 | 17.3R3-S1 |
| QFX10002-60C² | 20.2R2 | 20.2R2 | 20.2R2 | 20.2R2 | 20.2R2 |
| MX204; MX240, MX480, MX960 with MPC7E; MX10003 | Not supported | Not supported | Not supported | Not supported | Not supported |

¹ Make sure that IGMP snooping is not enabled on these QFX switches. If IGMP snooping is inadvertently enabled, these switches might process EVPN Type 6 routes that are reflected to them.

² The QFX10002-60C switch supports multicast at a lower scale than the QFX10002-36Q/72Q switches.