Active-Active Bridging and VRRP over IRB Functionality Overview

23-Aug-18

Active-active bridging and VRRP over IRB support extends multichassis link aggregation group (MC-LAG) by adding the following functionality to MX Series routers and QFX Series switches:

  • Interchassis link (ICL) pseudowire interface or Ethernet interface (ICL-PL) for active-active bridging

  • Active-active bridging

  • VRRP over IRB for active-active bridging

  • A restriction that a single bridge domain cannot correspond to two redundancy group IDs

How Active-Active Bridging over IRB Functionality Works

Active-active bridging over IRB functionality relies on the Address Resolution Protocol (ARP) active-active MC-LAG support methodology described in the following section.

Address Resolution Protocol Active-Active MC-LAG Support Methodology

Suppose one of the PE routers issues an ARP request but, because of the aggregated Ethernet distribution logic, the other PE router receives the response, so ARP resolution on the requesting router does not succeed. To handle this, Junos OS snoops ARP response packets to provide active-active multichassis link aggregation group support, synchronizing ARP information between the peers without the need to maintain any additional state.

Benefits of Active-Active Bridging and VRRP over IRB Functionality

Benefits of active-active bridging and VRRP over IRB functionality include:

  • An MC-LAG reduces operational expenses by providing active-active links with a LAG, eliminates the need for Spanning Tree Protocol (STP), and provides faster Layer 2 convergence upon link and device failures.

  • An MC-LAG adds node-level redundancy to the normal link-level redundancy that a LAG provides. An MC-LAG improves network resiliency, which reduces network down time as well as expenses.

  • In data centers, servers typically require redundant connections to the network: active-active links from each server to at least two separate routers.

  • An MC-LAG allows you to bond two or more physical links into a logical link between two routers or between a server and a router, which improves network efficiency. An MC-LAG enables you to load-balance traffic on multiple physical links. If a link fails, the traffic can be forwarded through the other available link, and the logical aggregated link remains in the UP state.

Where Can I Use Active-Active Bridging and VRRP over IRB Functionality?

Active-active bridging and Virtual Router Redundancy Protocol (VRRP) over integrated routing and bridging (IRB) is supported on MX Series routers and QFX Series switches.

MC-LAG Functions in an Active-Active Bridging Domain

The following functions are supported for MC-LAG in an active-active bridging domain:

  • MC-LAG is supported only between two chassis, using an interchassis link (ICL) pseudowire interface or Ethernet interface (ICL-PL) and VRRP over IRB for active-active bridging (see the configuration sketch after this list).

  • For VPLS networks, you can configure the aggregated Ethernet (aeX) interfaces on MC-LAG devices with the encapsulation ethernet-vpls statement (to use Ethernet VPLS encapsulation on Ethernet interfaces that have VPLS enabled and that must accept packets carrying standard Tag Protocol ID (TPID) values) or with the encapsulation vlan-vpls statement (to use Ethernet VLAN encapsulation on VPLS circuits).

  • Layer 2 circuit functionalities are supported with ethernet-ccc as the encapsulation mode.

  • Network topologies in a triangular and square pattern are supported. In a triangular network design, with equal-cost paths to all redundant nodes, slower, timer-based convergence can be avoided: instead of relying on indirect neighbor or route loss detection using hellos and dead timers, you can detect the physical link loss, mark the path as unusable, and reroute all traffic to the alternate equal-cost path. In a square network design, depending on the location of the failure, the routing protocol might have to converge to identify a new path to the subnet or VLAN, making the convergence of the network slower.

  • Interoperation of Link Aggregation Control Protocol (LACP) for MC-LAG devices is supported. LACP is one method of bundling several physical interfaces to form one logical interface. When LACP is enabled, the local and remote sides of the aggregated Ethernet links exchange protocol data units (PDUs), which contain information about the state of the link. You can configure Ethernet links to actively transmit PDUs, or you can configure the links to passively transmit them, sending out LACP PDUs only when the links receive the PDUs from another link. One side of the link must be configured as active for the link to be up.

  • Active-standby mode is supported using LACP. When an MC-LAG operates in active-standby mode, the ports on one of the routers become active only when a failure is detected on the active links. In this mode, the provider edge (PE) routers perform an election to determine the active and standby routers.

  • Configuration of the pseudowire status type length variable (TLV) is supported. The pseudowire status TLV is used to communicate the status of a pseudowire back and forth between two PE routers. The pseudowire status negotiation process ensures that a PE router reverts back to the label withdraw method for pseudowire status if its remote PE router neighbor does not support the pseudowire status TLV.

  • The MC-LAG devices use Inter-Chassis Control Protocol (ICCP) to exchange the control information between two MC-LAG network devices.
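
The following configuration sketch shows how these pieces typically fit together on one of the two MC-LAG peers. It is a minimal illustration only; the interface name (ae0), the LACP system ID, and the numeric values are placeholders rather than values taken from this document. The second peer would use a matching configuration with chassis-id 1 and, typically, status-control standby.

    interfaces {
        ae0 {
            aggregated-ether-options {
                lacp {
                    active;                       # at least one side must be active for the link to come up
                    system-id 00:01:02:03:04:05;  # same LACP system ID on both MC-LAG peers (placeholder value)
                    admin-key 1;                  # same admin key on both peers
                }
                mc-ae {
                    mc-ae-id 1;                   # same mc-ae-id on both peers
                    redundancy-group 1;           # redundancy group shared with the ICCP peer
                    chassis-id 0;                 # 0 on this peer, 1 on the other peer
                    mode active-active;
                    status-control active;        # status-control standby on the other peer
                }
            }
        }
    }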

Points to Remember When Configuring MC-LAG Active-Active Bridge Domains

Keep the following points in mind when you configure MC-LAG in an active-active bridging domain:

  • A single bridge domain cannot be associated with two redundancy groups. You cannot configure a bridge domain to contain logical interfaces from two different multichassis aggregated Ethernet interfaces and associate them with different redundancy group IDs by using the redundancy-group group-id statement at the [edit interfaces aeX aggregated-ether-options] hierarchy level.

  • You must configure the logical interfaces in a bridge domain from a single multichassis aggregated Ethernet interface and associate that interface with one redundancy group. If you configure logical interfaces on multichassis aggregated Ethernet interfaces that are part of the bridge domain, you must also configure a service ID by including the service-id statement at the [edit bridge-domains bd-name] hierarchy level, as shown in the sketch after this list.
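
As a minimal sketch of these two points, the bridge domain below uses hypothetical names and values (bd100, VLAN 100, service ID 10): its logical interface comes from a single multichassis aggregated Ethernet interface (ae0 from the earlier sketch, associated with one redundancy group), and the bridge domain carries a service ID, which is typically configured identically on both MC-LAG peers.

    bridge-domains {
        bd100 {
            domain-type bridge;
            vlan-id 100;
            service-id 10;        # required because mc-ae logical interfaces are part of this bridge domain
            interface ae0.0;      # assumes ae0.0 is a vlan-bridge logical interface for VLAN 100
        }
    }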

More Data Traffic Forwarding Rules

In active-active bridging and VRRP over IRB topologies, network interfaces are categorized into three different interface types, as follows:

  • S-Links: Single-homed links terminating on an MC-LAG-N device, or an MC-LAG in active-standby mode. In Figure 4, interfaces ge-0/0/0.0 and ge-1/0/0.0 are S-Links.
  • MC-Links: MC-LAG links. In Figure 4, interface ae0.0 is the MC-Link.
  • ICL: Interchassis link.

Based on incoming and outgoing interface types, some constraints are added to the Layer 2 forwarding rules for MC-LAG configurations, as described in the data traffic forwarding rules. Note that if only one of the MC-LAG member links is in the UP state, it is considered an S-Link.

The following data traffic forwarding rules apply:

  1. When an MC-LAG network device receives a packet from a local MC-Link or S-Link, the packet is forwarded to other local interfaces, including S-Links and MC-Links, based on the normal Layer 2 forwarding rules and on the configuration of the mesh-group and no-local-switching statements. If the MC-Links and S-Links are in the same mesh group and the no-local-switching statement is enabled, the received packets are forwarded only upstream and are not sent to other MC-Links and S-Links.
    Note

    The functionality described in Rule 2 is not supported.

  2. The following conditions determine whether a packet received from a local MC-Link or S-Link is forwarded to the ICL:
    a. Whether the peer MC-LAG network device has S-Links or MC-LAGs that do not reside on the local MC-LAG network device

    b. Whether interfaces on the two peering MC-LAG network devices are allowed to talk to each other

    The packet should be forwarded to the ICL only if both a and b are true. Because this functionality is not supported (see the note in Rule 1), traffic is always forwarded to the ICL.

  3. When an MC-LAG network device receives a packet from the ICL, the packet is forwarded to all local S-Links and active MC-LAGs that do not exist on the MC-LAG network device that the packet came from.
  4. In certain cases, for example the topology shown in Figure 1, the ICL could cause a loop.
    Note

    The topology shown in Figure 1 is not supported.

    To break the loop, one of the following mechanisms could be used:
    a. Run certain protocols, such as STP. In this case, whether packets received on one ICL are forwarded to other ICLs is determined by using Rule 3.

    b. Configure the ICL to be fully meshed among the MC-LAG network devices. In this case, traffic received on the ICL is not forwarded to any other ICLs.

    In either case, duplicate packets could be forwarded to the MC-LAG clients. Consider the topology shown in Figure 1: if network routing instance N1 receives a packet from ge-0/0/0.0, the packet could be flooded to ICL1 and ICL3. When network routing instances N3 and N2 receive the packet from ICL1 and ICL3, they could flood the same packet to MCL2, as shown in Figure 1. To prevent this, an ICL designated forwarder should be elected between the MC-LAG peers, and traffic received on an ICL should be forwarded to the active-active MC-LAG client by the designated forwarder only.

  5. When traffic is received from an ICL, it should not be forwarded to the client link that connects to both provider edge (PE) devices (the MC-Link) if the MC-Link on the peer chassis (where the traffic came from) is UP.

How to Configure MC-LAG Active-Active Bridge Domains

For an MC-LAG configured in an active-active bridge domain and with VRRP configured over an IRB interface, you must include the accept-data statement at the [edit interfaces interface-name unit logical-unit-number family inet address address vrrp-group group-id] hierarchy level to enable the router that functions as the master router to accept all packets destined for the virtual IP address.

On an MC-LAG, if you modify the source MAC address to be the virtual MAC address, you must specify the virtual IP address as the source IP address instead of the physical IP address. In such a case, the accept-data option is required for VRRP to prevent ARP from performing an incorrect mapping between IP and MAC addresses for customer edge (CE) devices. The accept-data attribute is needed for VRRP over IRB interfaces in MC-LAG to enable OSPF or other Layer 3 protocols and applications to work properly over multichassis aggregated Ethernet (mc-aeX) interfaces.
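
A minimal sketch of this arrangement follows, with hypothetical unit numbers and addresses (irb.100, 192.0.2.0/24); the peer MC-LAG device would configure the same virtual address on its own IRB unit with its own physical address and a different VRRP priority.

    interfaces {
        irb {
            unit 100 {
                family inet {
                    address 192.0.2.2/24 {
                        vrrp-group 1 {
                            virtual-address 192.0.2.1;
                            priority 200;      # use a lower priority on the peer
                            accept-data;       # master accepts packets destined for the virtual IP address
                        }
                    }
                }
            }
        }
    }
    bridge-domains {
        bd100 {
            vlan-id 100;
            routing-interface irb.100;         # attach the IRB interface to the MC-LAG bridge domain
        }
    }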

Note

On an MC-LAG, the unit number associated with aggregated Ethernet interfaces on provider edge router PE1 must match the unit number associated with aggregated Ethernet interfaces on provider edge router PE2. If the unit numbers differ, MAC address synchronization does not happen. As a result, the status of the MAC address on the remote provider edge router remains in a pending state.

If you are using the VRRP over IRB or RVI method to enable Layer 3 functionality, you must configure static ARP entries for the IRB or RVI interface of the remote MC-LAG peer to allow routing protocols to run over the IRB or RVI interfaces.
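
As a sketch of that requirement, a static ARP entry for the IRB address of the remote MC-LAG peer might look like the following; the IP addresses and MAC address are placeholders, and the remote peer's actual IRB MAC address must be used.

    interfaces {
        irb {
            unit 100 {
                family inet {
                    address 192.0.2.2/24 {
                        arp 192.0.2.3 mac 00:10:94:00:00:02;   # physical IRB address and MAC of the remote MC-LAG peer
                    }
                }
            }
        }
    }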

MAC Address Management

If an MC-LAG is configured to be active-active, upstream and downstream traffic could go through different MC-LAG network devices. Because a media access control (MAC) address is learned on only one of the MC-LAG network devices, traffic in the reverse direction could go through the other MC-LAG network device and be flooded unnecessarily. Also, a single-homed client's MAC address is learned only on the MC-LAG network device it is attached to. If a client attached to the peer MC-LAG network device needs to communicate with that single-homed client, traffic would be flooded on the peer MC-LAG network device. To avoid unnecessary flooding, whenever a MAC address is learned on one of the MC-LAG network devices, it is replicated to the peer MC-LAG network device. The following conditions apply when MAC address replication is performed:

  • MAC addresses learned on an MC-LAG of one MC-LAG network device should be replicated as learned on the same MC-LAG of the peer MC-LAG network device.

  • MAC addresses learned on single-homed customer edge (CE) clients of one MC-LAG network device should be replicated as learned on the ICL-PL interface of the peer MC-LAG network device.

  • MAC addresses learned on MC-LAG VE clients of one MC-LAG network device should be replicated as learned on the corresponding VE interface of the peer MC-LAG network device.

  • MAC address learning on an ICL is disabled in the data path. Instead, the software installs MAC addresses replicated through Inter-Chassis Control Protocol (ICCP).

MAC Aging

MAC aging support in Junos OS extends the aggregated Ethernet aging logic to a specified MC-LAG. A MAC address is not deleted from software until all Packet Forwarding Engines have deleted the MAC address. In the case of an MC-LAG, the remote provider edge device is treated as a remote Packet Forwarding Engine and is tracked by a bit in the MAC data structure.

Layer 3 Routing

In general, when an MC-LAG is configured to provide Layer 3 routing functions to downstream clients, the MC-LAG network peers should be configured to provide the same gateway address to the downstream clients. To the upstream routers, the MC-LAG network peers could be viewed as either equal-cost multipath (ECMP) or two routes with different preference values.

Junos OS supports active-active MC-LAGs by using VRRP over IRB. Junos OS also supports active-active MC-LAGs by using IRB MAC address synchronization. You must configure IRB using the same IP address across MC-LAG peers. IRB MAC synchronization is supported on 32-bit interfaces and interoperates with earlier MPC and MIC releases.

To ensure that Layer 3 operates properly, the VRRP backup router attempts to perform routing functions for a Layer 3 packet received on an MC-LAG instead of dropping the packet. A VRRP backup router also sends and responds to Address Resolution Protocol (ARP) requests.

For ARP, the same issue exists as with Layer 2 MAC addresses. Once an ARP entry is learned, it must be replicated to the MC-LAG peer through ICCP. The peer must install an ARP route based on the ARP information received through ICCP.

For ARP aging, ARP entries on the MC-LAG peers can age out independently.

Topologies Supported for MC-LAG Active-Active Bridge Domains

The topologies shown in Figure 2 and Figure 3 are supported. These figures use the following abbreviations:

  • Aggregated Ethernet (AE)

  • Interchassis link (ICL)

  • Multichassis link (MCL)

Potential Problems When Configuring MC-LAG Active-Active Bridge Domains

When an MC-LAG is configured to be active-active, the client device load-balances traffic across the peering MC-LAG network devices. In a bridging environment, this could potentially cause the following problems:

  • Traffic received on the MC-LAG from one MC-LAG network device could be looped back to the same MC-LAG on the other MC-LAG network device.

  • Duplicated packets could be received by the MC-LAG client device.

  • Traffic could be unnecessarily forwarded on the interchassis link.

To better illustrate the problems listed, consider Figure 4, where an MC-LAG device MCL1 and single-homed clients ge-0/0/0.0 and ge-1/0/0.0 are allowed to talk to each other through an ICL. These problems could occur:

Figure 4: MC-LAG Device and Single-Homed Client
  • Traffic received on network routing instance N1 from MCL1 could be flooded to ICL to reach network routing instance N2. Once it reaches network routing instance N2, it could flood again to MCL1.

  • Traffic received on interface ge-0/0/0.0 could be flooded to MCL1 and ICL on network routing instance N1. Once network routing instance N2 receives such traffic from ICL, it could again be flooded to MCL1.

  • If interface ge-1/0/0.0 does not exist on network routing instance N2, traffic received from interface ge-0/0/0.0 or MCL1 on network routing instance N1 could be flooded to network routing instance N2 through ICL unnecessarily since interface ge-0/0/0.0 and MCL1 could reach each other through network routing instance N1.

Restrictions When Configuring MC-LAG Active-Active Bridge Domains

In an IPv6 network, you cannot configure an MC-LAG in an active-active bridge domain if you have specified the vlan-id none statement at the [edit bridge-domains bd-name] hierarchy level. The vlan-id none statement, which enables removal of the incoming VLAN tags that identify a Layer 2 logical interface when packets are sent over VPLS pseudowires, is not supported for IPv6 packets in an MC-LAG.

The following functionality is not supported for MC-LAG active-active bridge domains:

  • Virtual private LAN service (VPLS) within the core

  • Bridged core

  • Topology as described in Rule 4 of More Data Traffic Forwarding Rules

  • Routed multichassis aggregated Ethernet interface, where the VRRP backup router is used in the edge of the network

  • Track object, where in the case of an MC-LAG, the status of the uplinks from the provider edge can be monitored, and the MC-LAG can act on the status

  • Mixed mode (active-active MC-LAG is supported on MX Series routers with MPC or MIC interfaces only)

    All interfaces in the bridge domain that are multichassis aggregated Ethernet active-active must be on MPCs or MICs.

The topologies shown in Figure 5, Figure 6, and Figure 7 are not supported:

Figure 5: Interchassis Data Link Between Active-Active Nodes
Figure 6: Active-Active MC-LAG with Single MC-LAG
Figure 7: Active-Active MC-LAG with Multiple Nodes on a Single Multichassis Link
Note

A redundancy group cannot span more than two routers.

IGMP Snooping on Active-Active MC-LAG

For multicast to work in an active-active MC-LAG scenario, the typical topology is as shown in Figure 8 and Figure 9 with interested receivers over S-links and MC-Links. Starting in Junos OS Release 11.2, support is extended for sources connected over the Layer 2 interface.

If an MC-LAG is configured to be active-active, reports from MC-LAG clients could reach either of the MC-LAG network device peers. Therefore, the IGMP snooping module needs to replicate the states so that the Layer 2 multicast route state on both peers is the same. Additionally, for S-Link clients, snooping needs to replicate these joins to its snooping peer, which, in the case of a Layer 3 connected source, passes this information to PIM on the IRB interface so that the designated router can pull traffic for these groups.

The ICL should be configured as a router-facing interface. For the scenario where traffic arrives through a Layer 3 interface, PIM and IGMP must be enabled on the IRB interface configured on the MC-LAG network device peers.
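
The following sketch shows one way to express these requirements, assuming (hypothetically) that the ICL is ae1.0, the bridge domain is bd100, and the Layer 3 interface is irb.100:

    bridge-domains {
        bd100 {
            vlan-id 100;
            routing-interface irb.100;
            protocols {
                igmp-snooping {
                    interface ae1.0 {
                        multicast-router-interface;   # treat the ICL as a router-facing interface
                    }
                }
            }
        }
    }
    protocols {
        igmp {
            interface irb.100;
        }
        pim {
            interface irb.100 {
                mode sparse;
            }
        }
    }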

With reference to Figure 8, either Device N1 or N2 becomes a designated router (for this example, N1 is the designated router). Router N1 therefore pulls the multicast traffic from the core. Once multicast data hits the network Device N1, the data is forwarded based on the snooping learned route.

For MC-Link clients, data is forwarded through N1. In the case of failover of the MC-Links, the data reaches the client through N2. For S-Link clients on N1, data would be forwarded through normal snooping routes.

For S-Link clients on N2, data is forwarded through the ICL interface. Layer 2 multicast routes on N1 do not show these groups unless there is interest in the same group over MC-Links or over S-Links on N1. For the IRB scenario, however, the IGMP membership and the Layer 3 multicast route on N1 do show these groups learned over the IRB interface.

Therefore, for a case where a specific group interest is only on the S-Link on N2, data arriving on N1 reaches N2 through the default route, and the Layer 2 multicast route on N2 has the S-Link in the outgoing interface list.

In Figure 9, MCL1 and MCL2 are on different devices, and the multicast source or IGMP querier is connected through MCL2. The data forwarding behavior seen is similar to that explained for multicast topology with source connected through Layer 3.

Note

IGMP snooping should not be configured in proxy mode. There should be no IGMP hosts or IGMP or PIM routers sitting on the ICL interface.

Up and Down Event Handling

The following conditions apply to up and down event handling:

  • If the Inter-Chassis Control Protocol (ICCP) connection is UP but the ICL interface goes DOWN, the router configured as the backup brings down all the multichassis aggregated Ethernet interfaces that it shares with the peer connected over the ICL. This ensures that there are no loops in the network. Otherwise, both PEs become PIM designated routers and, hence, forward multiple copies of the same packet to the customer edge.

  • If the ICCP connection is UP and the ICL comes UP, the router configured as the backup brings up the multichassis aggregated Ethernet interfaces shared with the peer.

  • If both the ICCP connection and the ICL are DOWN, the router configured as the backup brings up the multichassis aggregated Ethernet interfaces shared with the peer.

  • The Layer 2 address learning process (l2ald) does not store the information about a MAC address learned from a peer in the kernel. If l2ald restarts, and if the MAC address was not learned from the local multichassis aggregated Ethernet interface, l2ald clears the MAC addresses, which causes the router to flood the packets destined to this MAC address. This behavior is similar to that in a Routing Engine switchover. (Note that currently l2ald runs on a Routing Engine only when it is a master). Also, during the time l2ald is DOWN, ARP packets received from an ICCP peer are dropped. ARP retry takes care of this situation. This is the case with Routing Engine switchover, too.

  • If ICCP restarts, l2ald does not identify that a MAC address was learned from a peer and, if the MAC address was learned only from the peer, that MAC address is deleted, and the packets destined to this MAC address are flooded.

Inter-Chassis Control Protocol

Inter-Chassis Control Protocol (ICCP) is used to synchronize configurations, states, and data.
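
A minimal ICCP sketch between two MC-LAG peers follows; the addresses are placeholders, and the ICL itself is a separate Layer 2 link (for example, an aggregated Ethernet interface) that carries the shared bridge domains.

    protocols {
        iccp {
            local-ip-addr 10.255.0.1;          # this peer's ICCP address (placeholder)
            peer 10.255.0.2 {                  # the other MC-LAG peer (placeholder)
                redundancy-group-id-list 1;    # matches the mc-ae redundancy group
                liveness-detection {
                    minimum-interval 1000;     # BFD liveness interval, in milliseconds
                }
            }
        }
    }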

ICCP supports the following types of state information:

  • MC-LAG members and their operational states

  • Single-homed members and their operational states

ICCP supports the following application database synchronization parameters:

  • MAC addresses learned and to be aged

  • ARP information learned over IRB

Inter-Chassis Control Protocol Message

ICCP messages and attribute-value pairs (AVPs) are used for synchronizing MAC address and ARP information.
