EVPN Multihoming Designated Forwarder Election
The designated forwarder (DF) manages broadcast, unknown unicast, and multicast (BUM) traffic to prevent loops and ensure efficient traffic distribution.
DF Election Overview
Depending on the multihoming mode of operation, traffic to a multihomed customer edge (CE) device uses one or all of the multihomed provider edge (PE) devices to reach the customer site. The designated forwarder (DF) election procedure ensures that only one endpoint, the DF, handles the broadcast, unknown unicast, and multicast (BUM) traffic for a given Ethernet segment, thereby preventing forwarding loops and optimizing network performance.
The DF election process dynamically responds to various network events such as configuration changes, BGP session transitions, or link state changes. This adaptability enables the network to maintain efficient traffic forwarding without manual intervention. When a triggering event occurs, the DF election mechanism re-evaluates and potentially reassigns the DF role, maintaining optimal traffic handling across the network.
The DF election hold timer prevents the election process from starting prematurely. This ensures the network has time to stabilize before the election procedure begins. The timer defaults to 3 seconds. However, you can modify it with the designated-forwarder-election-hold-time statement to suit your network's stability and performance needs. This timer value must be the same across all the PE routers connected to the same Ethernet segment.
The default DF election procedure (as specified in RFC 7432) uses IP addresses and service carving to elect a DF for each EVPN instance (EVI). This election procedure is revertive: when the elected DF fails and then recovers from that failure, it preempts the existing DF.
The preference-based DF election procedure uses manually configured preference values, the Don't Preempt (DP) bit, and the router ID or loopback address to elect the DF. Starting in Junos OS Release 24.2, the preference statement includes a non-revertive option that prevents the preemption of the existing DF after a failure. The non-revertive option avoids a service impact if the old DF recovers after failing. The default behavior is revertive.
Benefits of Designated Forwarder (DF) Election
Prevents forwarding loops—By electing a single PE device per multihomed Ethernet segment to handle BUM traffic, the DF election ensures that only one endpoint forwards that traffic, significantly reducing the risk of forwarding loops and enhancing network stability.
Adapts dynamically to network changes—The DF election process responds dynamically to network changes, such as new interface configurations or recovery from link failures, maintaining network stability and operational efficiency.
Ensures efficient failover—The presence of a backup DF (BDF), which remains in a blocking state until it is needed to take over, ensures smooth failover and continuous network operation with minimal traffic disruption, thereby improving overall network resilience.
Reduces route processing overhead—Utilizing the ES-Import extended community for route filtering ensures that only relevant routes are imported by PEs connected to the same Ethernet segment, which reduces unnecessary route processing and maintains efficient route management.
Maintains network consistency—The mass withdrawal mechanism, triggered by the withdrawal of Ethernet autodiscovery routes after a link failure, invalidates stale MAC addresses on remote PEs, ensuring that the network state remains consistent and preventing issues caused by outdated MAC address information.
Distributes traffic efficiently—The DF election process balances the load across multiple PEs, ensuring that no single segment is overwhelmed with traffic, which optimizes network performance and resource utilization.
DF Election Roles
The designated forwarder (DF) election process involves selecting a forwarding role as follows:
Designated forwarder (DF)—The PE router that announces the MAC advertisement route for the customer site's MAC address. This PE router is the primary PE router that forwards BUM traffic to the multihomed CE device and is called the designated forwarder (DF) PE router.
Backup designated forwarder (BDF)—Each router in the set of PE routers that advertises the Ethernet autodiscovery route for the same ESI and serves as a backup path in case the DF fails is called a backup designated forwarder (BDF). A BDF is also called a non-DF router.
The DF election process elects a local PE router as the BDF, which then puts the multihomed interface connecting to the customer site into a blocking state for the active-standby mode. The interface stays in the blocking state until the BDF is elected as the DF for the Ethernet segment.
Non-designated forwarder (non-DF)—Other PE routers not selected as the DF. The BDF is also considered to be a non-DF.
DF Election Trigger
In general, the following conditions will trigger the DF election process:
When you configure an interface with a nonzero ESI, or when the PE router transitions from an isolated-from-the-core (no BGP session) state to a connected-to-the-core (has established BGP session) state. These conditions also trigger the hold timer. By default, the PE puts the interface into a blocking state until the router is elected as the DF.
When, after completing a DF election process, a PE router receives a new Ethernet segment route or detects the withdrawal of an existing Ethernet segment route. Neither of these events triggers the hold timer.
When an interface of a non-DF PE router recovers from a link failure. In this case the PE router has no knowledge of the hold time imposed by other PE routers. As a result, the recovered PE router does not trigger a hold timer.
DF Election Procedure (RFC 7432)
Service carving refers to the default procedure for DF election at the granularity of the ESI and EVI. With service carving, it is possible to elect multiple DFs per Ethernet segment (one per EVI) to perform load-balancing of multi-destination traffic for a given Ethernet segment. The load-balancing procedures carve up the EVI space among the PE nodes evenly, in such a way that every PE is the DF for a disjoint set of EVIs.
The service carving procedure is as follows:
When a PE router discovers the ESI of the attached Ethernet segment, it advertises an autodiscovery route per Ethernet segment with the associated ES-import extended community attribute.
The PE router starts a hold timer (default value of 3 seconds) in order to receive the autodiscovery routes from other PE nodes connected to the same Ethernet segment. This timer value must be the same across all the PE routers connected to the same Ethernet segment.
You can overwrite the default hold timer using the designated-forwarder-election-hold-time configuration statement.
When the hold timer expires, each PE router builds an ordered list of the IP addresses of all the PE nodes connected to the Ethernet segment (including itself), in increasing numeric order. The system assigns every PE router an ordinal indicating its position in the ordered list, starting with 0 for the PE with the numerically lowest IP address. The DF for a given EVI is then the PE whose ordinal matches the VLAN ID modulo the number of PEs, providing a deterministic and predictable method for DF selection. For example, if the VLAN ID is 10 and there are three PEs, the DF is the PE with ordinal 1, because 10 mod 3 = 1.
The PE router elected as the DF for a given EVI unblocks traffic for the Ethernet tags associated with that EVI. The DF PE unblocks multi-destination traffic in the egress direction toward the Ethernet segment. All the non-DF PE routers continue to drop multi-destination traffic (for the associated EVIs) in the egress direction toward the Ethernet segment.
In Figure 1, Routers PE1, PE2, and PE3 perform a DF election for active-active multihoming. Each router can become the DF for a particular VLAN from a range of VLANs configured on ESI1 and a non-DF for other VLANs. Each DF forwards BUM traffic on the ESI and VLAN it serves. The non-DF PE routers block the BUM traffic on those particular Ethernet segments.
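The service-carving calculation above can be sketched in a few lines. This is a hypothetical illustration in Python, not Junos code; the addresses and function name are invented for the example.

```python
# Sketch of the RFC 7432 service-carving DF election described above:
# order the PE IP addresses numerically, then pick the DF for a VLAN
# as the PE at ordinal (vlan_id mod number_of_PEs).
import ipaddress

def elect_df(pe_addresses, vlan_id):
    """Return the IP address of the DF for the given VLAN ID."""
    # Build the candidate list in increasing numeric order; each PE's
    # position in this list is its ordinal (starting at 0).
    ordered = sorted(pe_addresses, key=lambda a: int(ipaddress.ip_address(a)))
    return ordered[vlan_id % len(ordered)]

# The example from the text: VLAN 10 with three PEs -> ordinal 10 mod 3 = 1,
# so the PE with the second-lowest IP address is the DF for VLAN 10.
pes = ["192.0.2.3", "192.0.2.1", "192.0.2.2"]
print(elect_df(pes, 10))  # 192.0.2.2 (ordinal 1)
```

Because every PE runs the same deterministic calculation over the same ordered list, all PEs agree on the DF for each VLAN without any extra signaling.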
Preference-Based DF Election
- Overview
- Benefits of Preference-Based Designated Forwarder (DF) Election
- Preference-Based DF Election Procedure
- DF Election Algorithm Mismatch
- DF Election Algorithm Migration
- Changing Preference for Maintenance
- Non-Revertive Mode
- Load Balancing with Preference-Based DF Election
Overview
The DF election based on RFC 7432 fails to meet the operational requirements of some service providers. To address this issue, Junos OS Release 17.3 introduced the preference-based DF election feature that enables control of the DF election based on administrative preference values set on interfaces.
The preference-based DF election feature offers network operators the flexibility to manage DF roles with preference values configured on interfaces. In scenarios where the primary link must handle most of the traffic, this strategy optimizes both throughput and resource allocation.
Starting in Junos OS Release 24.2, we provide more configuration options to customize the preference-based DF election process:
The preference non-revertive option improves network stability by ensuring that a previously designated DF does not preempt the current DF when it comes back online after a failure.
The preference least option at the ESI level, and the designated-forwarder-preference-highest and designated-forwarder-preference-least statements at the EVI level, enable you to select whether the election process uses the highest or lowest preference values.
Please refer to Feature Explorer for a complete list of the products that support these features.
Benefits of Preference-Based Designated Forwarder (DF) Election
Optimized traffic flow—Configuring the DF based on interface attributes like bandwidth ensures optimal link selection. This results in more efficient traffic distribution and better use of network resources.
Enhanced operational control—Manually configuring preference values gives you greater control over the DF election process and ensures that the most suitable link is used.
Enhanced network stability—The preference non-revertive option prevents a returning DF from preempting the current DF. This eliminates unnecessary disruptions and ensures continuous service stability.
Granular load balancing control—You can configure DF election preferences and select the DF based on the highest or lowest preference values at the ESI and EVI levels. This enables you to effectively distribute traffic across multiple links, leading to improved network performance.
Maintenance flexibility—You can adjust preference values to switch the DF role during maintenance activities, facilitating operational flexibility and reducing the impact of maintenance on service continuity.
Preference-Based DF Election Procedure
The preference-based DF election uses manually configured interface preference values when electing a DF. Manual configuration of preference values gives you enhanced control over the DF election process. You can set specific preferences on interfaces to influence which node acts as the DF.
The preference-based DF election proceeds as follows:
Configure the DF election type preference value under an ESI.
The multihoming PE devices advertise the configured preference value and DP bit using the DF election extended community in the EVPN Type 4 routes.
After receiving the EVPN Type 4 route, the PE devices build the list of candidate DF devices, in the order of the preference value, DP bit, and IP address.
When the DF timer expires, the PE devices elect the DF.
By default, the DF election is based on the highest preference value. However, you can configure the preference-based DF election process to choose the DF based on the lowest preference value using the preference least option at the ESI level or the designated-forwarder-preference-least statement at the EVI level.
Note: The EVPN configuration for designated-forwarder-preference-highest or designated-forwarder-preference-least should be the same on the competing multihoming EVIs; otherwise the election process might elect two DFs, which can cause traffic loss or traffic loops.
If multiple DF candidates have the same preference value, then the PE device selects the DF based on the DP bit. When those DF candidates have the same DP bit value, the process elects the DF based on the lowest IP address.
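The candidate ordering described above can be sketched as follows. This is a hypothetical Python illustration, not Junos code; it assumes that the highest preference wins by default, that a set DP bit wins a preference tie, and that the lowest IP address breaks a DP-bit tie, as described in the procedure above.

```python
# Sketch of the preference-based DF candidate ordering: each candidate
# carries a preference value, the Don't Preempt (DP) bit, and its IP address.
import ipaddress

def elect_df(candidates, use_least=False):
    """candidates: list of (preference, dp_bit, ip_address) tuples."""
    def sort_key(c):
        pref, dp, ip = c
        # With the `least` option configured, the lowest preference wins.
        pref_key = pref if use_least else -pref
        # Tie-breakers: DP bit set wins, then the lowest IP address.
        return (pref_key, -dp, int(ipaddress.ip_address(ip)))
    return min(candidates, key=sort_key)[2]

cands = [(200, 0, "192.0.2.1"), (200, 1, "192.0.2.2"), (100, 0, "192.0.2.3")]
print(elect_df(cands))                  # 192.0.2.2: tie at 200, DP bit wins
print(elect_df(cands, use_least=True))  # 192.0.2.3: lowest preference wins
```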
DF Election Algorithm Mismatch
When there is a mismatch between a locally configured DF election algorithm and a remote PE device’s DF election algorithm, then all the PE devices should fall back to the default DF election as specified in RFC 7432.
DF Election Algorithm Migration
Migration from the traditional modulo-based DF election to the new preference-based method requires careful planning. Typically, this involves a maintenance window where interfaces with the same ESI on non-DF PEs are brought down. You then configure the new DF election algorithm on the DF PE before applying it to the other multihoming PEs. This structured approach ensures a smooth transition with minimal service impact.
Perform the migration using the following steps:
Bring down all the interfaces with the same ESI on the non-DF PE devices.
Configure the current DF PE with the preference-based DF election options.
Configure the preference-based DF election options on the non-DF PE devices.
Bring up all the interfaces on the non-DF PE devices.
After reconfiguring and bringing the interfaces back online, verify that the DF election process is functioning as intended. Monitor the network to ensure that the designated forwarders are correctly elected based on the configured preferences. This step is crucial to confirm that the new settings are correctly applied and that the network operates smoothly with the enhanced DF election mechanism.
Changing Preference for Maintenance
The ability to change preference values during maintenance activities enhances operational flexibility. You can switch DF roles as needed by simply changing the configured preference value on a selected device.
Change the DF for a given ESI by performing one of the following steps:
Change the preference value to a higher value on a current non-DF device.
Change the preference value to a lower value on the current DF device.
Changing the preference value for an ESI can cause some traffic loss during the short time it takes for the updated BGP route carrying the new preference value to propagate.
Non-Revertive Mode
Beginning with Junos OS Release 24.2R1, you can enable the non-revertive option under the [edit interfaces name esi df-election-type preference] hierarchy, which helps to ensure that your network remains stable across link failures and recoveries. When you configure this option per ESI, it provides granular control and ensures each segment adheres to the desired operational behavior.
The non-revertive option ensures that once a DF is elected, it will not be preempted by the previously designated DF coming back online after a failure. This non-revertive mode is key in maintaining a stable network environment and avoiding unnecessary service disruptions.
Please refer to Feature Explorer for a complete list of the products that support this feature.
The non-revertive preference-based DF election option doesn't work if:
You configure the no-core-isolation option under the [edit protocols evpn] hierarchy, and any of the following events occur:
You reboot the device.
You run the restart routing command.
You run the clear bgp neighbor command.
You have the graceful restart (GR) feature enabled, and the device goes through a graceful restart.
Load Balancing with Preference-Based DF Election
The preference-based DF election enables load balancing by selecting DFs based on
the highest or lowest preference values. By default, the DF is selected based on
the highest preference value. You can configure DF election type preference least
on the interface (ESI level) to choose the DF based on
lowest preference value.
[edit interfaces interface-name]
esi {
    XX:XX:XX:XX:XX:XX:XX:XX:XX:XX;
    df-election-type {
        preference {
            value value;
            least;
        }
    }
}
You can also configure the designated-forwarder-preference-highest or designated-forwarder-preference-least statement at the EVI level.
[edit routing-instances]
instance-name {
    instance-type evpn;
    protocols {
        evpn {
            (designated-forwarder-preference-highest | designated-forwarder-preference-least);
        }
    }
}
The EVI level configurations override the ESI level configurations when both are used as shown in Table 1 below.
You can use these configurations to manage load balancing in different scenarios as in the following examples.
Single ESI Under Multiple EVIs
Configure load balancing for a single ESI under multiple EVIs using various combinations of the following statements:
The df-election-type preference least option at the ESI level.
The designated-forwarder-preference-least or designated-forwarder-preference-highest statement at the EVI level.
Case No. | preference least at the ESI level | designated-forwarder-preference-least on EVI-1 | designated-forwarder-preference-highest on EVI-2 | Result on EVI-1 | Result on EVI-2 |
---|---|---|---|---|---|
1 | No | No | No | Highest | Highest |
2 | No | Yes | Yes (optional command) | Lowest | Highest |
3 | Yes | No | No | Lowest | Lowest |
4 | Yes | No | Yes | Lowest | Highest |
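The override logic in the table above can be sketched as a small decision function. This is a hypothetical Python illustration, not Junos code; the function name and argument encoding are invented for the example.

```python
# Sketch of how an EVI-level statement overrides the ESI-level
# `preference least` setting, reproducing the cases in the table above.
# Returns "Lowest" or "Highest": which preference value wins DF election
# on that EVI.
def effective_selection(esi_least, evi_override=None):
    """evi_override: None, "least", or "highest" (the EVI-level statement)."""
    if evi_override == "least":
        return "Lowest"
    if evi_override == "highest":
        return "Highest"
    # No EVI-level statement: the ESI-level setting applies.
    return "Lowest" if esi_least else "Highest"

# Case 4 from the table: `preference least` at the ESI level, no override
# on EVI-1, designated-forwarder-preference-highest on EVI-2.
print(effective_selection(True))             # Lowest  (EVI-1)
print(effective_selection(True, "highest"))  # Highest (EVI-2)
```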
Multiple ESIs Under a Single EVI
When configuring load balancing for multiple ESIs under a single EVI, use the ESI default setting to select the highest preference, or configure preference least on the ESI to select the lowest preference.
Do not configure the designated-forwarder-preference-least or designated-forwarder-preference-highest statements at the EVI level because they will override the ESI-level configurations.
preference least on ESI-1 | preference least on ESI-2 | Result for ESI-1 on EVI-1 | Result for ESI-2 on EVI-1 |
---|---|---|---|
No | No | Highest | Highest |
Yes | No | Lowest | Highest |
DF Verification
Junos OS show commands offer detailed insights into DF election preferences and statuses, aiding in effective troubleshooting and monitoring. These commands display detailed information about the EVPN instance, including DF election preferences and the current DF status. The ESI information output provides insights into the current DF and backup forwarders along with their preference values and non-revertive status.
By mastering these commands, you can effectively implement and manage the DF Election feature, ensuring a robust and efficient network environment.
DF Election for Virtual Switch
The virtual switch permits multiple bridge domains in one EVPN instance (EVI). It also accommodates both trunk and access ports. You can configure flexible Ethernet services on the port, enabling different VLANs on a single port to become part of different EVIs.
The DF election for virtual switch depends on the following:
Port mode—Sub-interface, trunk interface, and access port
EVI mode—Virtual switch with EVPN and EVPN-EVI
In the virtual switch, multiple Ethernet tags can be associated with a single EVI, wherein the numerically lowest Ethernet tag value in the EVI is used for the DF election.
Handling Failover
A failover can occur when:
The DF PE router loses its DF role.
There is a link or port failure on the DF PE router.
On losing the DF role, the PE router puts the customer-facing interface on the DF into the blocking state.
A link or port failure triggers a DF election process, which results in the election of the BDF PE router as the DF. During that time, unicast and BUM traffic are affected as follows:
Unicast Traffic
CE to Core—The CE device continues to flood traffic on all the links. The previous BDF PE router changes the EVPN multihomed status of the interface from the blocking state to the forwarding state, and traffic is learned and forwarded through this PE router.
Core to CE—The failed DF PE router withdraws the Ethernet autodiscovery route per Ethernet segment and the locally-learned MAC routes, causing the remote PE routers to redirect traffic to the BDF.
The transition of the BDF PE router to the DF role can take some time, during which the EVPN multihomed status of the interface remains in the blocking state, resulting in traffic loss.
BUM Traffic
CE to Core—All the traffic is routed toward the BDF.
Core to CE—The remote PE routers flood the BUM traffic in the core.