Understanding Distributed IGMP
By default, Internet Group Management Protocol (IGMP) processing takes place on the Routing Engine of MX Series routers. This centralized architecture can reduce performance in scaled environments, or when the Routing Engine is busy processing CLI changes or route updates. You can improve system performance for IGMP processing by enabling distributed IGMP, which uses the Packet Forwarding Engine to sustain a higher system-wide processing rate for join and leave events.
Distributed IGMP Overview
Distributed IGMP works by moving IGMP processing from the Routing Engine to the Packet Forwarding Engine. When distributed IGMP is not enabled, IGMP processing is centralized on the routing protocol process (rpd) running on the Routing Engine. When you enable distributed IGMP, join and leave events are processed across Modular Port Concentrators (MPCs) on the Packet Forwarding Engine. Because join and leave processing is distributed across multiple MPCs instead of being processed through a centralized rpd on the Routing Engine, performance improves and join and leave latency decreases.
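The following minimal sketch shows how distributed IGMP is typically enabled on an IGMP interface; the interface name is a placeholder, and enhanced IP network services must also be enabled (see the guidelines later in this section):

    [edit]
    user@host# set protocols igmp interface ge-1/0/0.0 distributed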
When you enable distributed IGMP, each Packet Forwarding Engine:
- Processes reports and generates queries.
- Maintains a local group-membership-to-interface mapping table and updates the forwarding state based on this table.
- Runs distributed IGMP independently.
- Implements the group-policy and ssm-map-policy IGMP interface options (see the sketch after this list). Information from the group-policy and ssm-map-policy IGMP interface options passes from the Routing Engine to the Packet Forwarding Engine.
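As a rough illustration of the group-policy option, the following hedged sketch limits the groups that receivers on an interface can join; the policy name, interface, and group range are placeholder values:

    [edit]
    user@host# set policy-options policy-statement igmp-group-filter term allow-groups from route-filter 233.252.0.0/24 orlonger
    user@host# set policy-options policy-statement igmp-group-filter term allow-groups then accept
    user@host# set policy-options policy-statement igmp-group-filter then reject
    user@host# set protocols igmp interface ge-1/0/0.0 group-policy igmp-group-filter

With this policy applied, IGMP reports for groups outside 233.252.0.0/24 are rejected on that interface.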
When you enable distributed IGMP, the rpd on the Routing Engine:
- Synchronizes all IGMP configurations (including global and interface-level configurations) to each Packet Forwarding Engine.
- Runs passive IGMP on distributed interfaces.
- Notifies Protocol Independent Multicast (PIM) of all group memberships per distributed IGMP interface.
Guidelines for Configuring Distributed IGMP
Consider the following guidelines when you configure distributed IGMP on an MX Series router with MPCs:
Distributed IGMP increases network performance by reducing the maximum join and leave latency and by increasing the rate at which join and leave events can be processed.
Note: Join and leave latency may increase if multicast traffic is not preprovisioned and is not already arriving at the MX Series router when a join or leave event is received from a client interface.
Distributed IGMP is supported for Ethernet interfaces. It does not improve performance on PIM interfaces.
Starting in Junos OS release 18.2, distributed IGMP is supported on Ethernet interfaces with enhanced subscriber management. IGMP processing for subscriber flows is moved from the Routing Engine to the Packet Forwarding Engine of supported line cards.
Multicast groups cannot contain a mix of centralized IGMP and distributed IGMP receivers; all receivers in a given group must use one or the other.
You can reduce initial join delays by enabling Protocol Independent Multicast (PIM) static joins or IGMP static joins. You can reduce initial delays even further by preprovisioning multicast traffic, so that MPCs with distributed IGMP interfaces already receive the multicast streams before any join arrives.
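For example, a hedged sketch of IGMP static joins; the interface, group, and source addresses are placeholders, and note that the static option is among those not supported on the Packet Forwarding Engine (see the restrictions below):

    [edit]
    user@host# set protocols igmp interface ge-1/0/0.0 static group 233.252.0.1
    user@host# set protocols igmp interface ge-1/0/0.0 static group 232.1.1.1 source 192.0.2.10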
For distributed IGMP to function properly, you must enable enhanced IP network services on a single-chassis MX Series router. Virtual Chassis is not supported.
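A minimal sketch of enabling enhanced IP network services on an MX Series router; changing the network services mode typically requires a system reboot to take effect:

    [edit]
    user@host# set chassis network-services enhanced-ip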
When you enable distributed IGMP, the following interface options are not supported on the Packet Forwarding Engine: oif-map, group-limit, ssm-map, and static. The traceoptions and accounting statements can only be enabled for IGMP operations still performed on the Routing Engine; they are not supported on the Packet Forwarding Engine. The clear igmp membership command is not supported when distributed IGMP is enabled.