Load Balancing MPLS Traffic
Configuring Load Balancing Based on MPLS Labels
Load balancing occurs on a per-packet basis for MPLS flows on supported platforms. Entropy, or random distribution, is essential for the uniform distribution of packets to their next hops. By default, when load balancing is used to help distribute traffic, Junos OS employs a hash algorithm to select a next-hop address to install into the forwarding table. Whenever the set of next hops for a destination changes, the next-hop address is reselected by means of the hash algorithm. You can configure how the hash algorithm is used to load-balance traffic across a set of equal-cost label switched paths (LSPs).
To ensure entropy for VPLS and VPWS traffic, Junos OS can create a hash based on data from the IP header and as many as three MPLS labels (the so-called top labels).
In some cases, as the number of network features that use labels grows (for example, MPLS fast reroute, RSVP, VPNs, and labeled BGP as described in RFC 3107), the data in the top three labels can become static and is therefore not a sufficient source of entropy. As a result, load balancing can become skewed, or the incidence of out-of-order packet delivery can rise. In these cases, labels from the bottom of the label stack can be used instead (see Table 1 for qualifications). Top labels and bottom labels cannot be used at the same time.
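When the top labels are static, a minimal sketch of a bottom-label configuration (using the bottom-label-1 statement from Table 1; substitute bottom-label-2 or bottom-label-3 to hash on a different stack position) looks like this:

```
[edit forwarding-options hash-key]
family mpls {
    bottom-label-1;    # hash on the bottom-most label of the stack
}
```

Remember that the top-label and bottom-label options cannot be combined in the same configuration.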
MPC cards do not support the regular hash-key configuration. For a hash-key configuration to take effect on MPC cards, you must use the enhanced-hash-key configuration statement instead.
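For example, the zero-control-word option discussed later in this topic is configured under hash-key for DPC (I-chip) cards but under enhanced-hash-key for MPC cards:

```
[edit forwarding-options enhanced-hash-key family mpls ether-pseudowire]
user@host# set zero-control-word
```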
Load balancing is used to evenly distribute traffic when the following conditions apply:
There are multiple equal-cost next hops over different interfaces to the same destination.
There is a single next hop over an aggregated interface.
An LSP tends to load-balance its placement by randomly selecting one of the equal-cost next hops and using it exclusively. The random selection is made independently at each transit router, which compares Interior Gateway Protocol (IGP) metrics alone. No consideration is given to bandwidth or congestion levels.
This feature applies to aggregated Ethernet and aggregated SONET/SDH interfaces as well as multiple equal-cost MPLS next hops. In addition, on the T Series, MX Series, M120, and M320 routers only, you can configure load balancing for IPv4 traffic over Layer 2 Ethernet pseudowires. You can also configure load balancing for Ethernet pseudowires based on IP information. The option to include IP information in the hash key provides support for Ethernet circuit cross-connect (CCC) connections.
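Per-packet load balancing itself is enabled by exporting a routing policy to the forwarding table, as the ingress-router example later in this topic shows. A minimal sketch (the policy name lb-per-packet is arbitrary):

```
[edit]
policy-options {
    policy-statement lb-per-packet {
        then {
            load-balance per-packet;    # install all equal-cost next hops
        }
    }
}
routing-options {
    forwarding-table {
        export lb-per-packet;    # apply the policy to the forwarding table
    }
}
```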
To load-balance based on the MPLS label information, configure the family mpls statement:

```
[edit forwarding-options hash-key]
family mpls {
    all-labels;
    bottom-label-1;
    bottom-label-2;
    bottom-label-3;
    label-1;
    label-2;
    label-3;
    no-labels;
    no-label-1-exp;
    payload {
        ether-pseudowire;
        ip {
            disable;
            layer-3-only;
            port-data {
                destination-lsb;
                destination-msb;
                source-lsb;
                source-msb;
            }
        }
    }
}
```
You can include this statement at the following hierarchy level:
[edit forwarding-options hash-key]
Table 1 provides detailed information about all of the possible MPLS LSP load-balancing options.
Statement | Supported Platforms | MPLS LSP Load Balancing Options
---|---|---
all-labels | MX Series and PTX Series | Prior to Junos OS Release 19.1R1, up to eight MPLS labels were included in the hash key to identify the uniqueness of a flow in the Packet Forwarding Engine. On PTX Series routers, this value is set by default. Starting in Junos OS Release 19.1R1, for MX Series routers with MPC and MIC interfaces, up to sixteen incoming MPLS labels are included in the hash key.
bottom-label-1 | MX Series with DPC (I-chip). Not supported on M7i, M10i, and M120. | Uses the bottom-most label to calculate the hash key, for example, when the top labels do not provide sufficient variability for the required level of entropy.
bottom-label-2 | MX Series with DPC (I-chip). Not supported on M7i, M10i, and M120. | Uses the second label from the bottom to calculate the hash key, for example, when the top labels do not provide sufficient variability for the required level of entropy.
bottom-label-3 | MX Series with DPC (I-chip). Not supported on M7i, M10i, and M120. | Uses the third label from the bottom to calculate the hash key, for example, when the top labels do not provide sufficient variability for the required level of entropy.
label-1 | M Series, MX Series, T Series | Includes the first label in the hash key. Use this option for single-label packets.
label-2 | M Series, MX Series, T Series | Includes the second label in the hash key. You must also configure the label-1 statement.
label-3 | M Series, MX Series, T Series | Includes the third label in the hash key. You must also configure the label-1 and label-2 statements.
no-labels | All | Excludes MPLS labels from the hash key.
no-label-1-exp | M Series, MX Series, T Series | Excludes the EXP bit of the top label from the hash key. You must also configure the label-1 statement. For Layer 2 VPNs, the router could encounter a packet reordering problem: when a burst of traffic pushes the customer traffic bandwidth beyond its limits, the traffic might be affected in mid-flow and packets might be reordered as a result. By excluding the EXP bit from the hash key, you can avoid this reordering problem.
payload | All | Allows you to configure which parts of the IP packet payload to include in the hash key. For the PTX Series Packet Transport Router, this value is set by default.
ip disable | PTX Series | Excludes the IP payload from the hash key.
ether-pseudowire | M120, M320, MX Series, T Series | Load-balances IPv4 traffic over Layer 2 Ethernet pseudowires.
ip | All | Includes the IPv4 or IPv6 address in the hash key. You must also configure either the layer-3-only statement or the port-data statement.
layer-3-only | All | Includes only the Layer 3 IP information in the hash key. Excludes all of the port-data information.
port-data | M Series, MX Series, T Series | Includes the source and destination port field information. By default, the most significant byte and least significant byte of the source and destination port fields are used in the hash key. To select specific bytes to use in the hash key, include one or more of the destination-lsb, destination-msb, source-lsb, and source-msb options.
destination-lsb | M Series, MX Series, T Series | Includes the least significant byte of the destination port in the hash key. Can be combined with any of the other port-data options.
destination-msb | M Series, MX Series, T Series | Includes the most significant byte of the destination port in the hash key. Can be combined with any of the other port-data options.
source-lsb | M Series, MX Series, T Series | Includes the least significant byte of the source port in the hash key. Can be combined with any of the other port-data options.
source-msb | M Series, MX Series, T Series | Includes the most significant byte of the source port in the hash key. Can be combined with any of the other port-data options.
The following examples illustrate ways in which you can configure MPLS LSP load balancing:
To include the IP address as well as the first label in the hash key:
For M Series, MX Series, and T Series routers, configure the label-1 statement and the ip option for the payload statement at the [edit forwarding-options hash-key family mpls] hierarchy level:

```
[edit forwarding-options hash-key family mpls]
label-1;
payload {
    ip;
}
```
For PTX Series Packet Transport Routers, the all-labels and ip payload options are configured by default, so no configuration is necessary.
(M320 and T Series routers only) To include the IP address as well as both the first and second labels in the hash key, configure the label-1 and label-2 options and the ip option for the payload statement at the [edit forwarding-options hash-key family mpls] hierarchy level:

```
[edit forwarding-options hash-key family mpls]
label-1;
label-2;
payload {
    ip;
}
```
Note: You can include this combination of statements on M320 and T Series routers only. If you include them on an M Series Multiservice Edge Router, only the first MPLS label and the IP payload are used in the hash key.
For T Series routers, ensure proper load balancing by including the label-1, label-2, and label-3 options at the [edit forwarding-options hash-key family mpls] hierarchy level:

```
[edit forwarding-options hash-key family mpls]
label-1;
label-2;
label-3;
```
(M Series, MX Series, and T Series routers only) For Layer 2 VPNs, the router could encounter a packet reordering problem. When a burst of traffic pushes the customer traffic bandwidth beyond its limits, the traffic might be affected in mid-flow, and packets might be reordered as a result. By excluding the EXP bit from the hash key, you can avoid this reordering problem. To exclude the EXP bit of the first label from the hash calculations, include the no-label-1-exp statement at the [edit forwarding-options hash-key family mpls] hierarchy level:

```
[edit forwarding-options hash-key family mpls]
label-1;
no-label-1-exp;
payload {
    ip;
}
```
Example: Load-Balanced MPLS Network
When you configure several RSVP LSPs to the same egress router, the LSP with the lowest metric is selected and carries all traffic. If all of the LSPs have the same metric, one of the LSPs is selected at random and all traffic is forwarded over it. To distribute traffic equally across all LSPs, you can configure load balancing on the ingress or transit routers, depending on the type of load balancing configured.
Figure 1 illustrates an MPLS network with four LSPs configured to the same egress router (R0). Load balancing is configured on ingress router R1. The example network uses Open Shortest Path First (OSPF) as the interior gateway protocol (IGP) with OSPF area 0.0.0.0. An IGP is required for the Constrained Shortest Path First (CSPF) LSP, which is the default for the Junos OS. In addition, the example network uses a policy to create BGP traffic.
The network shown in Figure 1 consists of the following components:
A full-mesh interior BGP (IBGP) topology, using AS 65432
MPLS and RSVP enabled on all routers
A send-statics policy on routers R1 and R0 that allows a new route to be advertised into the network
Four unidirectional LSPs between R1 and R0, and one reverse direction LSP between R0 and R1, which allows for bidirectional traffic
Load balancing configured on ingress router R1
The network shown in Figure 1 is a BGP full-mesh network. Since route reflectors and confederations are not used to propagate BGP learned routes, each router must have a BGP session with every other router running BGP.
Router Configurations for the Load-Balanced MPLS Network
- Purpose
- Action
- Sample Output 1
- Sample Output 2
- Sample Output 3
- Sample Output 4
- Sample Output 5
- Sample Output 6
- Meaning
Purpose
The configurations in this topic are for the six load-balanced routers in the example network illustrated in Load-Balancing Network Topology.
Action
To display the configuration of a router, use the following Junos OS CLI operational mode command:
user@host> show configuration | no-more
Sample Output 1
The following configuration output is for edge router R6.
```
user@R6> show configuration | no-more
[...Output truncated...]
interfaces {
    fe-0/1/2 {
        unit 0 {
            family inet {
                address 10.0.16.14/30;
            }
            family mpls;    #MPLS enabled on relevant interfaces
        }
    }
    fe-1/3/0 {
        unit 0 {
            family inet {
                address 10.10.12.1/24;
            }
        }
    }
    fxp0 {
        unit 0 {
            family inet {
                address 192.168.70.148/21;
            }
        }
    }
    lo0 {
        unit 0 {
            family inet {
                address 192.168.6.1/32;
            }
        }
    }
}
routing-options {
    static {
        [...Output truncated...]
    }
    router-id 192.168.6.1;    #Manually configured RID
    autonomous-system 65432;    #Full mesh IBGP
}
protocols {
    rsvp {
        interface fe-0/1/2.0;
        interface fxp0.0 {
            disable;
        }
    }
    mpls {
        interface fe-0/1/2.0;
        interface fxp0.0 {
            disable;
        }
    }
    bgp {
        group internal {
            type internal;
            local-address 192.168.6.1;
            neighbor 192.168.1.1;
            neighbor 192.168.2.1;
            neighbor 192.168.4.1;
            neighbor 192.168.9.1;
            neighbor 192.168.0.1;
        }
    }
    ospf {    #IGP enabled
        traffic-engineering;
        area 0.0.0.0 {
            interface fe-0/1/2.0;
            interface fe-1/3/0.0;
            interface lo0.0 {
                passive;    #Ensures protocols do not run over this interface
            }
        }
    }
}
```
Sample Output 2
The following configuration output is for ingress router R1.
```
user@R1> show configuration | no-more
[...Output truncated...]
interfaces {
    fe-0/1/0 {
        unit 0 {
            family inet {
                address 10.0.12.13/30;
            }
            family mpls;    #MPLS enabled on relevant interfaces
        }
    }
    fe-0/1/2 {
        unit 0 {
            family inet {
                address 10.0.16.13/30;
            }
            family mpls;
        }
    }
    fxp0 {
        unit 0 {
            family inet {
                address 192.168.70.143/21;
            }
        }
    }
    lo0 {
        unit 0 {
            family inet {
                address 192.168.1.1/32;
            }
        }
    }
}
routing-options {
    static {
        [...Output truncated...]
        route 100.100.1.0/24 reject;    #Static route for send-statics policy
    }
    router-id 192.168.1.1;    #Manually configured RID
    autonomous-system 65432;    #Full mesh IBGP
    forwarding-table {
        export lbpp;    #Routes exported to forwarding table
    }
}
protocols {
    rsvp {
        interface fe-0/1/0.0;
        interface fe-0/1/2.0;
        interface fxp0.0 {
            disable;
        }
    }
    mpls {
        label-switched-path lsp1 {    #First LSP
            to 192.168.0.1;    #Destination of the LSP
            install 10.0.90.14/32 active;    #The prefix is installed in the inet.0 routing table
            primary via-r4;
        }
        label-switched-path lsp2 {
            to 192.168.0.1;
            install 10.0.90.14/32 active;
            primary via-r2;
        }
        label-switched-path lsp3 {
            to 192.168.0.1;
            install 10.0.90.14/32 active;
            primary via-r2;
        }
        label-switched-path lsp4 {
            to 192.168.0.1;
            install 10.0.90.14/32 active;
            primary via-r4;
        }
        path via-r2 {    #Primary path to spread traffic across interfaces
            10.0.29.2 loose;
        }
        path via-r4 {
            10.0.24.2 loose;
        }
        interface fe-0/1/0.0;
        interface fe-0/1/2.0;
        interface fxp0.0 {
            disable;
        }
    }
    bgp {
        export send-statics;    #Allows advertising of a new route
        group internal {
            type internal;
            local-address 192.168.1.1;
            neighbor 192.168.2.1;
            neighbor 192.168.4.1;
            neighbor 192.168.9.1;
            neighbor 192.168.6.1;
            neighbor 192.168.0.1;
        }
    }
    ospf {    #IGP enabled
        traffic-engineering;
        area 0.0.0.0 {
            interface fe-0/1/0.0;
            interface fe-0/1/2.0;
            interface lo0.0 {
                passive;    #Ensures protocols do not run over this interface
            }
        }
    }
}
policy-options {    #Load balancing policy
    policy-statement lbpp {
        then {
            load-balance per-packet;
        }
    }
    policy-statement send-statics {    #Static route policy
        term statics {
            from {
                route-filter 100.100.1.0/24 exact;
            }
            then accept;
        }
    }
}
```
Sample Output 3
The following configuration output is for transit router R2.
```
user@R2> show configuration | no-more
[...Output truncated...]
interfaces {
    so-0/0/1 {
        unit 0 {
            family inet {
                address 10.0.24.1/30;
            }
            family mpls;    #MPLS enabled on relevant interfaces
        }
    }
    so-0/0/2 {
        unit 0 {
            family inet {
                address 10.0.29.1/30;
            }
            family mpls;
        }
    }
    fe-0/1/0 {
        unit 0 {
            family inet {
                address 10.0.12.14/30;
            }
            family mpls;
        }
    }
    fxp0 {
        unit 0 {
            family inet {
                address 192.168.70.144/21;
            }
        }
    }
    lo0 {
        unit 0 {
            family inet {
                address 192.168.2.1/32;
            }
        }
    }
}
routing-options {
    static {
        [...Output truncated...]
    }
    router-id 192.168.2.1;    #Manually configured RID
    autonomous-system 65432;    #Full mesh IBGP
}
protocols {
    rsvp {
        interface so-0/0/1.0;
        interface fe-0/1/0.0;
        interface so-0/0/2.0;
        interface fxp0.0 {
            disable;
        }
    }
    mpls {
        interface fe-0/1/0.0;
        interface so-0/0/1.0;
        interface so-0/0/2.0;
        interface fxp0.0 {
            disable;
        }
    }
    bgp {
        group internal {
            type internal;
            local-address 192.168.2.1;
            neighbor 192.168.1.1;
            neighbor 192.168.4.1;
            neighbor 192.168.9.1;
            neighbor 192.168.6.1;
            neighbor 192.168.0.1;
        }
    }
    ospf {    #IGP enabled
        traffic-engineering;
        area 0.0.0.0 {
            interface fe-0/1/0.0;
            interface so-0/0/1.0;
            interface so-0/0/2.0;
            interface lo0.0 {
                passive;    #Ensures protocols do not run over this interface
            }
        }
    }
}
```
Sample Output 4
The following configuration output is for transit router R4.
```
user@R4> show configuration | no-more
[...Output truncated...]
interfaces {
    so-0/0/1 {
        unit 0 {
            family inet {
                address 10.0.24.2/30;
            }
            family mpls;    #MPLS enabled on relevant interfaces
        }
    }
    so-0/0/3 {
        unit 0 {
            family inet {
                address 10.0.49.1/30;
            }
            family mpls;
        }
    }
    fxp0 {
        unit 0 {
            family inet {
                address 192.168.70.146/21;
            }
        }
    }
    lo0 {
        unit 0 {
            family inet {
                address 192.168.4.1/32;
            }
        }
    }
}
routing-options {
    static {
        [...Output truncated...]
    }
    router-id 192.168.4.1;    #Manually configured RID
    autonomous-system 65432;    #Full mesh IBGP
}
protocols {
    rsvp {
        interface so-0/0/1.0;
        interface so-0/0/3.0;
        interface fxp0.0 {
            disable;
        }
    }
    mpls {
        interface so-0/0/1.0;
        interface so-0/0/3.0;
        interface fxp0.0 {
            disable;
        }
    }
    bgp {
        group internal {
            type internal;
            local-address 192.168.4.1;
            neighbor 192.168.1.1;
            neighbor 192.168.2.1;
            neighbor 192.168.9.1;
            neighbor 192.168.6.1;
            neighbor 192.168.0.1;
        }
    }
    ospf {    #IGP enabled
        traffic-engineering;
        area 0.0.0.0 {
            interface so-0/0/1.0;
            interface so-0/0/3.0;
            interface lo0.0 {
                passive;    #Ensures protocols do not run over this interface
            }
        }
    }
}
```
Sample Output 5
The following configuration output is for transit router R9.
```
user@R9> show configuration | no-more
[...Output truncated...]
interfaces {
    so-0/0/2 {
        unit 0 {
            family inet {
                address 10.0.29.2/30;
            }
            family mpls;    #MPLS enabled on relevant interfaces
        }
    }
    so-0/0/3 {
        unit 0 {
            family inet {
                address 10.0.49.2/30;
            }
            family mpls;
        }
    }
    fe-0/1/0 {
        unit 0 {
            family inet {
                address 10.0.90.13/30;
            }
            family mpls;
        }
    }
    fxp0 {
        unit 0 {
            family inet {
                address 192.168.69.206/21;
            }
        }
    }
    lo0 {
        unit 0 {
            family inet {
                address 192.168.9.1/32;
            }
        }
    }
}
routing-options {
    static {
        [...Output truncated...]
    }
    router-id 192.168.9.1;    #Manually configured RID
    autonomous-system 65432;    #Full mesh IBGP
}
protocols {
    rsvp {
        interface so-0/0/2.0;
        interface so-0/0/3.0;
        interface fe-0/1/0.0;
        interface fxp0.0 {
            disable;
        }
    }
    mpls {
        interface so-0/0/2.0;
        interface so-0/0/3.0;
        interface fe-0/1/0.0;
        interface fxp0.0 {
            disable;
        }
    }
    bgp {
        group internal {
            type internal;
            local-address 192.168.9.1;
            neighbor 192.168.1.1;
            neighbor 192.168.2.1;
            neighbor 192.168.4.1;
            neighbor 192.168.0.1;
            neighbor 192.168.6.1;
        }
    }
    ospf {    #IGP enabled
        traffic-engineering;
        area 0.0.0.0 {
            interface so-0/0/2.0;
            interface so-0/0/3.0;
            interface fe-0/1/0.0;
            interface lo0.0 {
                passive;    #Ensures protocols do not run over this interface
            }
        }
    }
}
```
Sample Output 6
The following configuration output is for egress router R0.
```
user@R0> show configuration | no-more
[...Output truncated...]
interfaces {
    fe-0/1/0 {
        unit 0 {
            family inet {
                address 10.0.90.14/30;
            }
            family mpls;    #MPLS enabled on relevant interfaces
        }
    }
    fe-1/3/0 {
        unit 0 {
            family inet {
                address 10.10.11.1/24;
            }
        }
    }
    fxp0 {
        unit 0 {
            family inet {
                address 192.168.69.207/21;
            }
        }
    }
    lo0 {
        unit 0 {
            family inet {
                address 192.168.0.1/32;
            }
        }
    }
}
routing-options {
    static {
        [...Output truncated...]
        route 100.100.10.0/24 reject;    #Static route for send-statics policy
    }
    router-id 192.168.0.1;    #Manually configured RID
    autonomous-system 65432;    #Full mesh IBGP
}
protocols {
    rsvp {
        interface fe-0/1/0.0;
        interface fe-1/3/0.0;
        interface fxp0.0 {
            disable;
        }
    }
    mpls {
        label-switched-path r0-r6 {
            to 192.168.6.1;
        }
        interface fe-0/1/0.0;
        interface fe-1/3/0.0;
        interface fxp0.0 {
            disable;
        }
    }
    bgp {
        group internal {
            type internal;
            local-address 192.168.0.1;
            export send-statics;    #Allows advertising of a new route
            neighbor 192.168.9.1;
            neighbor 192.168.6.1;
            neighbor 192.168.1.1;
            neighbor 192.168.2.1;
            neighbor 192.168.4.1;
        }
    }
    ospf {    #IGP enabled
        traffic-engineering;
        area 0.0.0.0 {
            interface fe-0/1/0.0;
            interface fe-1/3/0.0;
            interface lo0.0 {
                passive;    #Ensures protocols do not run over this interface
            }
        }
    }
}
policy-options {
    policy-statement send-statics {
        term statics {
            from {
                route-filter 100.100.10.0/24 exact;
            }
            then accept;
        }
    }
}
```
Meaning
Sample Outputs 1 through 6 show the base interfaces, routing options, protocols, and policy options configurations for all six routers in the example network illustrated in Example: Load-Balanced MPLS Network.
All routers in the network have MPLS, RSVP, and BGP enabled. OSPF is configured as the IGP, and relevant interfaces have basic IP information and MPLS support.
In addition, all routers have the router ID (RID) configured manually at the [edit routing-options] hierarchy level to avoid duplicate RID problems. The passive statement is included in the OSPF configuration to ensure that protocols do not run over the loopback (lo0) interface and that the loopback (lo0) interface is advertised correctly throughout the network.
Sample Outputs 1, 3, 4, and 5 for R6, R2, R4, and R9 show the base configuration for transit label-switched routers. The base configuration includes all interfaces enabled for MPLS, the RID manually configured, and the relevant protocols (RSVP, MPLS, BGP, and OSPF).
Sample Output 2 from ingress router R1 shows the base configuration plus four LSPs (lsp1 through lsp4) configured to R0. The four LSPs are configured with different primary paths that specify a loose hop through R4 for lsp1 and lsp4, and through R2 for lsp2 and lsp3.
To create traffic, R1 has a static route (100.100.1.0/24) configured at the [edit routing-options static] hierarchy level. The prefix is included in the send-statics policy at the [edit policy-options policy-statement send-statics] hierarchy level so that the routes can become BGP routes.
In addition, on the ingress router R1, load balancing is configured with the per-packet option, and the policy is exported at the [edit routing-options forwarding-table] hierarchy level.
Sample Output 6 from egress router R0 shows one LSP (r0-r6) to R6 used to create bidirectional traffic. OSPF requires bidirectional LSP reachability before it will advertise the LSP into the IGP. Although the LSP is advertised into the IGP, no hello messages or routing updates occur over the LSP—only user traffic is sent over the LSP. The router uses its local copy of the IGP database to verify bidirectional reachability.
In addition, R0 has a static route (100.100.10.0/24) configured at the [edit routing-options static] hierarchy level. The prefix is included in the send-statics policy at the [edit policy-options policy-statement send-statics] hierarchy level so that the routes can become BGP routes.
Configuring Load Balancing Based on MPLS Labels on ACX Series Routers
Table 2 provides detailed information about all of the possible MPLS LSP load-balancing options.
ACX Series routers can load-balance on a per-packet basis in MPLS. Load balancing can be performed on information in both the IP header and on up to three MPLS labels, providing a more uniform distribution of MPLS traffic to next hops. This feature is enabled on supported platforms by default and requires no configuration.
Load balancing is used to evenly distribute traffic when there is a single next hop over an aggregated interface or a LAG bundle. Load balancing using MPLS labels is supported only for LAG interfaces and not for equal-cost multipath (ECMP) links.
By default, when load balancing is used to help distribute traffic, Junos OS employs a hash algorithm to select a next-hop address to install into the forwarding table. Whenever the set of next hops for a destination changes in any way, the next-hop address is reselected by means of the hash algorithm. You can configure how the hash algorithm is used to load-balance traffic across interfaces in an aggregated Ethernet (ae) interface.
An LSP tends to load-balance its placement by randomly selecting one of the interfaces in an aggregated Ethernet (ae) interface bundle and using it exclusively. The random selection is made independently at each transit router, which compares Interior Gateway Protocol (IGP) metrics alone. No consideration is given to bandwidth or congestion levels.
On ACX Series routers, load balancing on label-switched paths (LSPs) is not supported for virtual private LAN service (VPLS), Layer 2 circuits, and Layer 2 virtual private networks (L2VPNs).
To load-balance based on the MPLS label information, configure the family mpls statement:

```
[edit forwarding-options hash-key]
family mpls {
    all-labels;
    label-1;
    label-2;
    label-3;
    no-labels;
    payload {
        ether-pseudowire;
        ip {
            layer-3-only;
            port-data {
                destination-lsb;
                destination-msb;
                source-lsb;
                source-msb;
            }
        }
    }
}
```
You can include this statement at the [edit forwarding-options hash-key] hierarchy level.
When you configure payload ip (user@host# set forwarding-options hash-key family mpls payload ip), configuring the layer-3-only and port-data options is mandatory.
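As a sketch, a minimal ACX hash-key configuration that satisfies this requirement by hashing on Layer 3 information only (substitute the port-data sub-options to include Layer 4 ports instead):

```
[edit forwarding-options hash-key]
family mpls {
    label-1;            # hash on the first label
    payload {
        ip {
            layer-3-only;    # hash on Layer 3 IP information only
        }
    }
}
```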
Without a proper hash-key configuration, load balancing can result in unpredictable behavior.
For Layer 2 VPN or pseudowire tunnel termination, up to two labels are used for hashing, and the payload MAC destination and source addresses can optionally be selected. These controls support the ether-pseudowire option under family mpls in the hash-key configuration shown above. However, because the ACX2000 and ACX4000 also support TDM pseudowires, use the ether-pseudowire option only when TDM pseudowires are not in use.
For Layer 3 VPN tunnel termination, up to two labels are used for hashing, and the payload IP source and destination addresses and the Layer 4 source and destination ports can optionally be selected. These controls support the ip and port-data options under family mpls in the hash-key configuration shown above. However, because the Layer 4 port MSB and LSB cannot be individually selected, one of the destination-lsb or destination-msb options, or one of the source-lsb or source-msb options, selects the Layer 4 destination or source port, respectively.
In the LSR case, up to three labels are used for hashing. If a bottom-of-stack (BOS) label is seen while parsing the first three labels, the BCM chipset examines the first nibble of the payload: if the nibble is 4, the payload is treated as IPv4, and if it is 6, the payload is treated as IPv6. In such cases, the payload source and destination IP addresses can be speculatively used for hashing. These controls support the ip and port-data options under family mpls in the hash-key configuration. However, Layer 4 ports cannot be used for hashing in the LSR case, and only the layer-3-only option is applicable. The BCM chipset does not support hashing on fields beyond the three MPLS labels. Load balancing for a single pseudowire session does not take place in the LSR case because all the traffic for that session carries the same set of MPLS labels.
Load balancing on LSR aggregated Ethernet (AE) interfaces can be achieved for a higher number of MPLS sessions, that is, a minimum of 10 sessions. This applies to CCC, VPLS, and Layer 3 VPN. In the case of a Layer 3 VPN, the traffic might not be distributed equally across the member links because the Layer 3 addresses are also taken into account (along with the labels) by the hash input function.
For LER scenarios on the ACX5048 and ACX5096, hashing based on Layer 3 and Layer 4 fields is possible by configuring the payload option under the family mpls hierarchy. Hashing on the LER is not based on labels. For a Layer 3 service, you must specify the payload as layer-3-only, and specify port-data for a Layer 4 service. You can also specify the label count while configuring hash keys on LER routers.
LER and LSR load balancing behavior is applicable for CCC/VPLS/Layer 3 VPN and other IP MPLS scenarios.
This feature applies to aggregated Ethernet and aggregated SONET/SDH interfaces. In addition, you can configure load balancing for IPv4 traffic over Layer 2 Ethernet pseudowires. You can also configure load balancing for Ethernet pseudowires based on IP information. The option to include IP information in the hash key provides support for Ethernet circuit cross-connect (CCC) connections.
Statement | MPLS LSP Load Balancing Options
---|---
label-1 | Includes the first label in the hash key. Use this option for single-label packets.
label-2 | Includes the second label in the hash key. You must also configure the label-1 statement.
label-3 | Includes the third label in the hash key. You must also configure the label-1 and label-2 statements.
no-labels | Excludes MPLS labels from the hash key.
payload | Allows you to configure which parts of the IP packet payload to include in the hash key. For the PTX Series Packet Transport Router, this value is set by default.
ip disable | Excludes the IP payload from the hash key.
ether-pseudowire | Load-balances IPv4 traffic over Layer 2 Ethernet pseudowires.
ip | Includes the IPv4 or IPv6 address in the hash key. You must also configure either the layer-3-only statement or the port-data statement.
layer-3-only | Includes only the Layer 3 IP information in the hash key. Excludes all of the port-data information.
port-data | Includes the source and destination port field information. By default, the most significant byte and least significant byte of the source and destination port fields are used in the hash key. To select specific bytes to use in the hash key, include one or more of the destination-lsb, destination-msb, source-lsb, and source-msb options.
destination-lsb | Includes the least significant byte of the destination port in the hash key. Can be combined with any of the other port-data options.
destination-msb | Includes the most significant byte of the destination port in the hash key. Can be combined with any of the other port-data options.
source-lsb | Includes the least significant byte of the source port in the hash key. Can be combined with any of the other port-data options.
source-msb | Includes the most significant byte of the source port in the hash key. Can be combined with any of the other port-data options.
To include the IP address as well as the first label in the hash key, configure the label-1 statement and the ip option for the payload statement at the [edit forwarding-options hash-key family mpls] hierarchy level:

```
[edit forwarding-options hash-key family mpls]
label-1;
payload {
    ip;
}
```
To include the IP address as well as both the first and second labels in the hash key, configure the label-1 and label-2 options and the ip option for the payload statement at the [edit forwarding-options hash-key family mpls] hierarchy level:

```
[edit forwarding-options hash-key family mpls]
label-1;
label-2;
payload {
    ip;
}
```
Ensure proper load balancing by including the label-1, label-2, and label-3 options at the [edit forwarding-options hash-key family mpls] hierarchy level:

```
[edit forwarding-options hash-key family mpls]
label-1;
label-2;
label-3;
```
MPLS Encapsulated Payload Load-balancing Overview
Routers can load-balance on a per-packet basis in MPLS. Load balancing can be performed on the information in both the IP header and on up to three MPLS labels, providing a more uniform distribution of MPLS traffic to next hops.
Load balancing is used to evenly distribute traffic when the following conditions apply:
There are multiple equal-cost next hops over different interfaces to the same destination.
There is a single next hop over an aggregated interface.
By default, when load balancing is used to help distribute traffic, a hash algorithm is used to select a next-hop address to install into the forwarding table. Whenever the set of next hops for a destination changes in any way, the next-hop address is reselected by means of the hash algorithm.
In the case of multiple transport-layer networks, such as Ethernet over MPLS or Ethernet pseudowire, the hash algorithm needs to look beyond the outer header of the payload and into the inner headers to generate an even distribution. To determine the inner encapsulation, the Packet Forwarding Engine relies on the presence of certain codes or numbers at fixed payload offsets, for example, the presence of payload type 0x0800 or of protocol number 4 for an IPv4 packet. In Junos OS, you can configure the zero-control-word option to indicate the start of an Ethernet frame in an MPLS ether-pseudowire payload. On seeing this control word, which is four bytes with a numerical value of all zeros, the hash generator assumes that an Ethernet frame starts at the end of the control word in an MPLS ether-pseudowire packet.
For DPC I-chip-based cards, configure the zero-control-word option at the [edit forwarding-options hash-key family mpls ether-pseudowire] hierarchy level; for MPC cards, configure the zero-control-word option at the [edit forwarding-options enhanced-hash-key family mpls ether-pseudowire] hierarchy level.
Configuring MPLS Encapsulated Payload for Load Balancing
By default, when load balancing is used to help distribute traffic, a hash algorithm is used to select a next-hop address to install into the forwarding table. Whenever the set of next hops for a destination changes in any way, the next-hop address is reselected by means of the hash algorithm. Configure the zero-control-word option to indicate the start of an Ethernet frame in an MPLS ether-pseudowire payload. On seeing this control word, four bytes with a numerical value of all zeros, the hash generator assumes that an Ethernet frame starts at the end of the control word in an MPLS ether-pseudowire packet.
Before you begin to configure MPLS encapsulated payload for load balancing, configure routing and signaling protocols.
To configure MPLS encapsulated payload for load balancing, configure the zero-control-word option to indicate the start of an Ethernet frame in an MPLS ether-pseudowire payload.

For DPC I-chip-based cards, configure the zero-control-word option at the [edit forwarding-options hash-key family mpls ether-pseudowire] hierarchy level:

[edit forwarding-options hash-key family mpls ether-pseudowire]
user@host# set zero-control-word

For MPC cards, configure the zero-control-word option at the [edit forwarding-options enhanced-hash-key family mpls ether-pseudowire] hierarchy level:

[edit forwarding-options enhanced-hash-key family mpls ether-pseudowire]
user@host# set zero-control-word
Policy-Based Multipath Routes Overview
Segment routing networks can have multiple transport protocols in the core. You can combine segment routing traffic-engineered (SR-TE) LDP or RSVP routes and SR-TE IP routes and install a multipath route in the routing information base (also known as the routing table). You can then steer selective service traffic through the multipath route by using a policy configuration.
- Understanding Policy-Based Multipath Routes
- Benefits of Policy-Based Multipath Routes
- Policy-Based Multipath Routes for Route Resolution
- Sample Route Resolution Using Policy-Based Multipath Routes
- Enhancement to Class-of-Service (CoS) Forwarding-Policy
- Enhancements to Policy Match Protocol
- Impact of Configuring Policy-Based Multipath Route on Network Performance
Understanding Policy-Based Multipath Routes
There are different transport protocols in a network, such as IGP, labeled IGP, RSVP, LDP, and segment routing traffic engineering (SR-TE), that are used to resolve service traffic. Previously, however, you could not use a combination of these transport protocols to resolve service traffic. With the introduction of the policy-based multipath feature, you can combine segment routing traffic-engineered (SR-TE) LDP or RSVP routes and SR-TE IP routes to create a multipath route that is installed in the routing information base. You can resolve BGP service routes over the multipath route through policy configuration and steer traffic differently for different prefixes.
A multipath route has the combined next hops of the route entries that are used for load balancing. All the supporting routes of the multipath route entry must be in the same routing information base. When the supporting routes are in different routing information bases, you can use the rib-group configuration statement to add route entries to a particular routing information base.
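As a minimal sketch (the group name multipath-ribs is a placeholder assumption), a rib-group that makes routes from one routing information base available in another might look like this:

[edit routing-options]
user@host# set rib-groups multipath-ribs import-rib [ inet.0 inet.3 ]

The first routing information base listed in import-rib is the primary table; routes are copied into the tables that follow. The rib-group is then applied under the relevant protocol, and the exact attachment point depends on the protocol and your configuration.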
You can configure a multipath route by using a policy to select the list of routes whose next hops are to be combined. When you include the policy-multipath statement along with the policy statement at the [edit routing-options rib routing-table-name] hierarchy level, a policy-based multipath route is created.
The policy-based multipath feature is supported for both IP and IPv6 protocols, and can be configured at the [edit routing-instances] hierarchy level.
For example:
[edit routing-options]
user@host# set rib inet.3 policy-multipath policy example-policy
[edit policy-options]
user@host# set policy-statement example-policy from example-conditions
user@host# set policy-statement example-policy then accept
The configured policy is applied to each route entry for a given prefix. The multipath route is created only when more than one route (including the active route) passes the policy. Any action commands configured in the policy are evaluated using the active route. For nonactive routes, the policy is applied only to check whether the routes can participate in the multipath route. Multipath routes inherit all attributes of the active route; these attributes can be modified using the multipath policy configuration.
Benefits of Policy-Based Multipath Routes
- Provides flexibility to combine core network protocols to steer selective traffic.
- Optimizes network performance with weighted equal-cost multipath using multipath routes.
Policy-Based Multipath Routes for Route Resolution
You can combine segment routing traffic-engineered (SR-TE) LDP or RSVP routes and SR-TE IP routes and install a multipath route in the routing information base. Policy-based multipath routes are not active entries in the routing information base. When a multipath route is generated by a policy configuration, it is used for resolving protocol next hops instead of the active route. A multipath route next hop is created by merging the gateways of the next hops of each constituent route.
Take the following into consideration when configuring policy-based multipath routes for route resolution:
- If a member route of a multipath route points to a next hop other than a router next hop, or other than an indirect next hop whose forwarding next hop is a router next hop, that next hop is ignored.
- If the constituent routes point to an indirect next hop, the gateways from the forwarding next hop are merged and the indirect next hop is ignored.
- If the total number of gateways exceeds the maximum-ecmp value supported on the device, only maximum-ecmp gateways are retained and all other gateways are ignored.
- Gateways with lower weights are given preference. When one of the member routes has a unilist of indirect next hops and each of those next hops points to a forwarding next hop, there can be weight values both at the indirect next hop and at the forwarding next hop. In such cases, the weight value of the gateways is updated to reflect the combined effect of the weights at both levels.
Sample Route Resolution Using Policy-Based Multipath Routes
As an example, assume there are segment routing traffic-engineered LSPs, labeled IS-IS routes, and LDP LSPs for the destination 10.1.1.1/32, as displayed in the following output:
10.1.1.1/32        *[SPRING-TE/8] 00:00:58, metric 1, metric2 30
                    >  to 10.13.1.2 via ge-0/0/1.1, Push 33333, Push 801005, Push 801006(top)
                    [L-ISIS/14] 1w0d 00:15:57, metric 10
                    >  to 10.12.1.1 via ge-0/0/0.1
                       to 10.22.1.1 via ge-0/0/0.2
                       to 10.23.1.1 via ge-0/0/0.3
                       to 10.24.1.1 via ge-0/0/0.4
                       to 10.25.1.1 via ge-0/0/0.5
                       to 10.13.1.2 via ge-0/0/1.1, Push 801001, Push 801005(top)
                    [LDP/19] 1w0d 00:09:27, metric 1
                    >  to 10.12.1.1 via ge-0/0/0.1
                       to 10.22.1.1 via ge-0/0/0.2
                       to 10.23.1.1 via ge-0/0/0.3
                       to 10.24.1.1 via ge-0/0/0.4
                       to 10.25.1.1 via ge-0/0/0.5
                       to 10.13.1.2 via ge-0/0/1.1, Push 801001, Push 801005(top)
Here, segment routing LSP is the active route entry to the 10.1.1.1 destination, and by default, only this route is used to resolve any services resolving over 10.1.1.1.
When you need to use more than one protocol to resolve service routes, you can do so by configuring policy-multipath to combine the protocols. For instance, if segment routing and LDP paths are required for service resolution, you must configure policy-multipath combining the segment routing and LDP routes for prefix 10.1.1.1.
For example:
[edit routing-options]
user@host# set rib inet.3 policy-multipath policy abc
[edit policy-options]
user@host# set policy-statement abc term 1 from protocol spring-te
user@host# set policy-statement abc term 1 from protocol ldp
user@host# set policy-statement abc term 1 from route-filter 10.1.1.1/32 exact
user@host# set policy-statement abc term 1 then accept
With this configuration, you create a policy-based multipath route for prefix 10.1.1.1/32 that uses constituent route entries of segment routing and LDP protocols.
You can view the multipath route in the show route command output, as follows:
10.1.1.1/32        *[SPRING-TE/8] 00:10:28, metric 1, metric2 30
                    >  to 10.13.1.2 via ge-0/0/1.1, Push 33333, Push 801005, Push 801006(top)
                    [L-ISIS/14] 1w0d 00:25:27, metric 10
                    >  to 10.12.1.1 via ge-0/0/0.1
                       to 10.22.1.1 via ge-0/0/0.2
                       to 10.23.1.1 via ge-0/0/0.3
                       to 10.24.1.1 via ge-0/0/0.4
                       to 10.25.1.1 via ge-0/0/0.5
                       to 10.13.1.2 via ge-0/0/1.1, Push 801001, Push 801005(top)
                    [LDP/19] 1w0d 00:18:57, metric 1
                    >  to 10.12.1.1 via ge-0/0/0.1
                       to 10.22.1.1 via ge-0/0/0.2
                       to 10.23.1.1 via ge-0/0/0.3
                       to 10.24.1.1 via ge-0/0/0.4
                       to 10.25.1.1 via ge-0/0/0.5
                       to 10.13.1.2 via ge-0/0/1.1, Push 801001, Push 801005(top)
                    [Multipath/8] 00:03:13, metric 1, metric2 30
                    >  to 10.12.1.1 via ge-0/0/0.1
                       to 10.22.1.1 via ge-0/0/0.2
                       to 10.23.1.1 via ge-0/0/0.3
                       to 10.24.1.1 via ge-0/0/0.4
                       to 10.25.1.1 via ge-0/0/0.5
                       to 10.13.1.2 via ge-0/0/1.1, Push 33333, Push 801005, Push 801006(top)
                       to 10.13.1.2 via ge-0/0/1.1, Push 801001, Push 801005(top)
You can see from the command output that the multipath route combines the next hops of the segment routing and LDP paths. The multipath route is not active, and by default, its route preference and metric are the same as those of the active route.
You can use the following combinations for the policy-based multipath route:
- Segment routing traffic-engineered LSPs and LDP LSPs.
- Segment routing traffic-engineered LSPs and labeled IS-IS paths.
- Segment routing traffic-engineered LSPs, LDP LSPs, and labeled IS-IS paths.
However, you cannot create a multipath route of LDP and labeled IS-IS, because the active route is not part of the multipath route.
With the same configuration, assuming that a static route 10.1.3.4/32 is configured with a protocol next hop of 10.1.1.1, this route is resolved using the multipath route over both segment routing traffic-engineered LSPs and LDP LSPs.
For example:
10.1.3.4/32        *[Static/5] 00:00:12, metric2 1
                       to 10.12.1.1 via ge-0/0/0.1
                    >  to 10.22.1.1 via ge-0/0/0.2
                       to 10.23.1.1 via ge-0/0/0.3
                       to 10.24.1.1 via ge-0/0/0.4
                       to 10.25.1.1 via ge-0/0/0.5
                       to 10.13.1.2 via ge-0/0/1.1, Push 33333, Push 801005, Push 801006(top)
                       to 10.13.1.2 via ge-0/0/1.1, Push 801001, Push 801005(top)
Enhancement to Class-of-Service (CoS) Forwarding-Policy
For class-of-service-based forwarding, you must use the forwarding-policy next-hop-map configuration statement.
Prior to Junos OS Release 19.1R1, the match conditions supported under class-of-service-based forwarding included:
- next-hop—Match a next hop based on the outgoing interface or next-hop address.
- lsp-next-hop—Match named LSPs using a regular expression of the LSP name.
- non-lsp-next-hop—Match all LSPs without an LSP name.
With the policy-based multipath route feature, you can also match all next hops without a label for certain prefixes. To do this, you must enable the non-labelled-next-hop option at the [edit class-of-service forwarding-policy next-hop-map map-name forwarding-class forwarding-class-name] hierarchy level.
For example:
[edit]
class-of-service {
    forwarding-policy {
        next-hop-map abc {
            forwarding-class best-effort {
                non-labelled-next-hop;
            }
        }
    }
}
Enhancements to Policy Match Protocol
Prior to Junos OS Release 19.1R1, when you used a policy to match a protocol using the from protocol statement at the [edit policy-options policy-statement statement-name] hierarchy level, all protocol routes (labeled and unlabeled) were matched. With the policy-based multipath route feature, you can match labeled protocol routes specifically.
The options for matching labeled protocols are:
- l-isis—Match labeled IS-IS routes. The isis option matches IS-IS routes, excluding labeled IS-IS routes.
- l-ospf—Match labeled OSPF routes. The ospf option matches all OSPF routes, including OSPFv2, OSPFv3, and labeled OSPF.
For example:
[edit]
policy-options {
    policy-statement abc {
        from protocol [ l-ospf l-isis ];
    }
}
Impact of Configuring Policy-Based Multipath Route on Network Performance
When you configure a policy-based multipath route, a change of a route in the routing information base results in evaluation of the policy to check whether a multipath route needs to be created. Because this feature requires that member routes be in the same routing information base, the rib-group statement is used to merge routes from different routing information bases. Configuring the rib-group statement at the application level increases the number of routes in the system.
When there are a large number of routes in the routing information base, constant change of routes leads to reevaluation of the multipath policy, which can impact network performance. We recommend that you configure the policy-based multipath route feature only when required.
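One way to limit policy reevaluation, sketched here with placeholder names and prefixes, is to scope the multipath policy to only the prefixes that need it by using a route-filter:

[edit policy-options]
user@host# set policy-statement scoped-multipath term 1 from route-filter 10.1.1.0/24 orlonger
user@host# set policy-statement scoped-multipath term 1 from protocol spring-te
user@host# set policy-statement scoped-multipath term 1 from protocol ldp
user@host# set policy-statement scoped-multipath term 1 then accept

Routes outside 10.1.1.0/24 fail the route-filter match, so no multipath route is created or reevaluated for them.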
Understanding IP-Based Filtering and Selective Port Mirroring of MPLS Traffic
In an MPLS packet, the IP header comes immediately after the MPLS header. The IP-based filtering feature provides a deep inspection mechanism, where the inner payload beneath a maximum of eight MPLS labels can be inspected to enable filtering of MPLS traffic based on IP parameters. The filtered MPLS traffic can also be port-mirrored to a monitoring device to offer network-based services in the core MPLS network.
IP-Based Filtering of MPLS Traffic
Prior to Junos OS Release 18.4R1, filtering based on IP parameters was not supported for MPLS family filter. With the introduction of the IP-based filtering feature, you can apply inbound and outbound filters for MPLS-tagged IPv4 and IPv6 packets based on IP parameters, such as source and destination addresses, Layer 4 protocol type, and source and destination ports.
The IP-based filtering feature enables you to filter MPLS packets at the ingress of an interface, where the filtering is done using match conditions on the inner payload of the MPLS packet. The selective MPLS traffic can then be port mirrored to a remote monitoring device using logical tunnels.
To support IP-based filtering, additional match conditions are added that allow MPLS packets to be deep inspected to parse the inner payload with Layer 3 and Layer 4 headers before the appropriate filters are applied.
The IP-based filtering feature is supported only for MPLS-tagged IPv4 and IPv6 packets. In other words, the MPLS filters match IP parameters only when the IP payload comes immediately after the MPLS labels.
In other scenarios, where the MPLS payload includes pseudowires, protocols other than inet and inet6, or other encapsulations like Layer 2 VPN or VPLS, the IP-based filtering feature is not supported.
The following match conditions are added for the IP-based filtering of MPLS traffic:
IPv4 source address
IPv4 destination address
IPv6 source address
IPv6 destination address
Protocol
Source port
Destination port
Source IPv4 prefix list
Destination IPv4 prefix list
Source IPv6 prefix list
Destination IPv6 prefix list
The following match combinations are supported for the IP-based filtering of MPLS traffic:
Source and destination address match conditions with IPv4 and IPv6 prefix lists.
Source and destination port and protocol type match conditions with IPv4 and IPv6 prefix lists.
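As a minimal sketch of such a combination (the filter, term, and prefix-list names are placeholders), a term can match a source prefix list together with a protocol and destination port:

[edit firewall family mpls filter sample-filter]
term t1 {
    from {
        ip-version {
            ipv4 {
                source-prefix-list sample-sources;
                protocol tcp {
                    destination-port 443;
                }
            }
        }
    }
    then accept;
}

The full sample configurations later in this topic show the complete set of match conditions and actions.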
Selective Port Mirroring of MPLS Traffic
Port mirroring is the capability of mirroring a packet to a configured destination, in addition to the normal processing and forwarding of the packets. Port mirroring is applied as an action for a firewall filter, which is applied at the ingress or egress of any interface. Similarly, the selective port mirroring feature provides the capability to mirror MPLS traffic, which is filtered based on IP parameters, to a mirrored destination using logical tunnels.
To enable selective port mirroring, additional actions are configured at the [edit firewall family mpls filter filter-name term term-name then] hierarchy level, in addition to the existing counter, accept, and discard actions:

port-mirror

port-mirror-instance
Port Mirroring
The port-mirror action enables port mirroring globally on the device, which applies to all Packet Forwarding Engines (PFEs) and associated interfaces. For the MPLS family filter, the port-mirror action is enabled for global port mirroring.
Port Mirroring Instance
The port-mirror-instance action enables you to customize each instance with different properties for input sampling and port mirroring output destinations, instead of having to use a single system-wide configuration for port mirroring.

You can configure only two port mirroring instances per Flexible PIC Concentrator (FPC) by including the instance port-mirror-instance-name statement at the [edit forwarding-options port-mirroring] hierarchy level. You can then associate individual port mirroring instances with an FPC, PIC, or Forwarding Engine Board (FEB), depending on the device hardware.

For the MPLS family filter, the port-mirror-instance action is enabled only for the port-mirroring instance.
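As a sketch (the slot number and instance name are placeholders), a port mirroring instance can be bound to an FPC at the chassis level:

[edit chassis fpc 0]
user@host# set port-mirror-instance port-mirror-instance1

Traffic mirrored by filters on interfaces hosted by that FPC then uses the input and output properties of the bound instance.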
For both the port-mirror and port-mirror-instance actions, the output interface must be enabled with a Layer 2 family and not family MPLS (Layer 3) for the selective port mirroring feature to work.
Sample Configurations
- IP-Based Filtering Configuration
- Selective Port Mirroring Configuration
- Mirrored Destination Configuration
IP-Based Filtering Configuration
[edit firewall family mpls filter mpls-filter]
term ipv4-term {
    from {
        ip-version {
            ipv4 {
                source-address {
                    10.10.10.10/24;
                }
                destination-address {
                    20.20.20.20/24;
                }
                protocol tcp {
                    source-port 100;
                    destination-port 200;
                }
                source-prefix-list ipv4-source-users;
                destination-prefix-list ipv4-destination-users;
            }
        }
        exp 1;
    }
    then port-mirror;
    then accept;
    then count;
}
term ipv6-term {
    from {
        ip-version {
            ipv6 {
                source-address {
                    2000::1/128;
                }
                destination-address {
                    3000::1/128;
                }
                protocol tcp {
                    source-port 100;
                    destination-port 200;
                }
                source-prefix-list ipv6-source-users;
                destination-prefix-list ipv6-destination-users;
            }
        }
        exp 1;
    }
    then port-mirror-instance port-mirror-instance1;
    then accept;
    then count;
}
[edit policy-options]
prefix-list ipv4-source-users {
    172.16.1.16/28;
    172.16.2.16/28;
}
prefix-list ipv6-source-users {
    2001::1/128;
    3001::1/128;
}
[edit interfaces]
xe-0/0/1 {
    unit 0 {
        family inet {
            address 100.100.100.1/30;
        }
        family mpls {
            filter {
                input mpls-filter;
            }
        }
    }
}
Selective Port Mirroring Configuration
[edit forwarding-options]
port-mirroring {
    input {
        rate 2;
        run-length 4;
        maximum-packet-length 500;
    }
    family any {
        output {
            interface xe-2/0/2.0;
        }
    }
}
[edit forwarding-options]
port-mirroring {
    instance {
        port-mirror-instance1 {
            input {
                rate 3;
                run-length 5;
                maximum-packet-length 500;
            }
            family any {
                output {
                    interface xe-2/0/2.0;
                }
            }
        }
    }
}
The output interface xe-2/0/2.0 is configured for a Layer 2 family and not family MPLS.
Mirrored Destination Configuration
[edit interfaces]
xe-2/0/2 {
    vlan-tagging;
    encapsulation extended-vlan-bridge;
    unit 0 {
        vlan-id 600;
    }
}
[edit bridge-domains]
bd {
    domain-type bridge;
    interface xe-2/0/2.0;
}