Understanding Aggregated Multiservices Interfaces for Next Gen Services
This topic provides an overview of using the Aggregated Multiservices Interfaces feature with the MX-SPC3 services card for Next Gen Services. It contains the following sections:
Aggregated Multiservices Interface
In Junos OS, you can combine multiple services interfaces to create a bundle of services interfaces that can function as a single interface. Such a bundle of interfaces is known as an aggregated multiservices interface (AMS), and is denoted as amsN in the configuration, where N is a unique number that identifies an AMS interface (for example, ams0). Starting in Junos OS Release 19.3R2, AMS interfaces are supported on the Next Gen Services MX-SPC3 services card.
AMS configuration provides higher scalability, improved performance, and better failover and load-balancing options.
An AMS configuration enables service sets to support multiple services PICs by associating an AMS bundle with a service set. For Next Gen Services, the MX-SPC3 services card supports up to two PICs and you can have a maximum of eight MX-SPC3 services cards in your chassis. This enables a Next Gen Services AMS bundle to have up to 16 services PICs as member interfaces and you can distribute services among the member interfaces.
Member interfaces are identified as mams in the configuration. The chassisd process in routers that support AMS configuration creates a mams entry for every multiservices interface on the router.
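For example, a minimal AMS bundle with two member interfaces might look like the following sketch (the slot and PIC numbers are illustrative; use the mams- interfaces that correspond to the MX-SPC3 PICs in your chassis):

```
interfaces {
    ams0 {
        load-balancing-options {
            member-interface mams-1/0/0;
            member-interface mams-1/1/0;
        }
        unit 1 {
            family inet;
        }
    }
}
```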
When you configure services options at the ams interface level, the options apply to all member interfaces (mams) for the ams interface.
The options also apply to service sets configured on services interfaces corresponding to the ams interface’s member interfaces. All settings are per PIC. For example, session-limit applies per member and not at an aggregate level.
You cannot configure services options at both the ams (aggregate) and member-interface level. If services options are configured on vms-x/y/z, they also apply to service sets on mams-x/y/z.
When you want services options settings to apply uniformly to all members, configure services options at the ams interface level. If you need different settings for individual members, configure services options at the member interface level.
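As a sketch, services options applied at the ams interface level, and therefore inherited by every member, might look like the following (the session-limit value is illustrative):

```
interfaces {
    ams0 {
        services-options {
            session-limit {
                maximum 4000000;
            }
        }
    }
}
```

Because settings are per PIC, each member in this sketch enforces its own 4,000,000-session limit rather than sharing an aggregate limit.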
Per-member drop of traffic and per-member next-hop configuration is required for NAT64. For NAPT44, this per-member specification allows arbitrary hash keys, providing better load-balancing options to allow dynamic NAT operations to be performed. For NAT64, NAPT44, and dynamic NAT44, it is not possible to determine which member allocates the dynamic NAT address. To ensure that reverse flow packets arrive at the same member as the forward flow packets, pool-address-based routes are used to steer reverse flow packets.
If you modify a NAT pool that is being used by a service set assigned to an AMS interface, you must deactivate and activate the service set before the NAT pool changes take effect.
Traffic distribution over the member interfaces of an AMS interface can be either round-robin or hash-based. You can configure the following hash key values to regulate the traffic distribution: source-ip, destination-ip, and protocol. For services that require traffic symmetry, you must configure symmetrical hashing, which ensures that both forward and reverse traffic are routed through the same member interface.
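A symmetrical hashing configuration along the following lines hashes forward traffic on the source IP address and reverse traffic on the destination IP address, so both directions of a flow map to the same member (a sketch; the interface name is illustrative):

```
interfaces {
    ams0 {
        load-balancing-options {
            hash-keys {
                ingress-key source-ip;
                egress-key destination-ip;
            }
        }
    }
}
```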
If the service set is applied on the Gigabit Ethernet or 10-Gigabit Ethernet interface (interface-style service set) that functions as the NAT inside interface, then the hash keys used for load balancing might be configured in such a way that the ingress key is set as destination IP address and the egress key is set as source IP address. Because the source IP address undergoes NAT processing, it is not available for hashing the traffic in the reverse direction. Therefore, load balancing does not happen on the same IP address and forward and reverse traffic does not map to the same PIC. With the hash keys reversed, load balancing occurs correctly.
With next-hop services, the ingress key on the inside interface load-balances forward traffic, and the ingress key on the outside interface load-balances reverse traffic (or per-member next hops steer reverse traffic). With interface-style services, the ingress key load-balances forward traffic, and the egress key load-balances reverse traffic (or per-member next hops steer reverse traffic). Forward traffic is traffic entering from the inner side of a service set, and reverse traffic is traffic entering from the outer side of a service set. The forward key is the hash key used for the forward direction of traffic, and the reverse key is the hash key used for the reverse direction; which configured key serves as the forward or reverse key depends on whether the service set is interface style or next-hop style.
With stateful firewalls, you can configure the following combinations of forward and reverse keys for load balancing. In the following combinations presented for hash keys, FOR-KEY refers to the forward key, REV-KEY denotes the reverse key, SIP signifies source IP address, DIP signifies destination IP address, and PROTO refers to protocol such as IP.
FOR-KEY: SIP; REV-KEY: DIP
FOR-KEY: SIP, PROTO; REV-KEY: DIP, PROTO
FOR-KEY: DIP; REV-KEY: SIP
FOR-KEY: DIP, PROTO; REV-KEY: SIP, PROTO
FOR-KEY: SIP, DIP; REV-KEY: SIP, DIP
FOR-KEY: SIP, DIP, PROTO; REV-KEY: SIP, DIP, PROTO
With static NAT configured as basic NAT44 or destination NAT44, and with stateful firewall configured or not, if the forward direction of traffic must undergo NAT processing, configure the hash keys as follows:
FOR-KEY: DIP; REV-KEY: SIP
FOR-KEY: DIP, PROTO; REV-KEY: SIP, PROTO
If the reverse direction of traffic must undergo NAT processing, configure the hash keys as follows:
FOR-KEY: SIP; REV-KEY: DIP
FOR-KEY: SIP, PROTO; REV-KEY: DIP, PROTO
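For instance, for the SIP, PROTO / DIP, PROTO combination above on an interface-style service set, the forward key corresponds to the ingress key and the reverse key to the egress key, giving a sketch like the following:

```
interfaces {
    ams0 {
        load-balancing-options {
            hash-keys {
                ingress-key [ source-ip protocol ];
                egress-key [ destination-ip protocol ];
            }
        }
    }
}
```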
With dynamic NAT configured, and with stateful firewall configured or not, only the forward direction traffic can undergo NAT. The forward hash key can be any combination of SIP, DIP, and protocol, and the reverse hash key is ignored.
The Junos OS AMS configuration supports IPv4 and IPv6 traffic.
IPv6 Traffic on AMS Interfaces Overview
You can use AMS interfaces for IPv6 traffic. To configure IPv6 support for an AMS interface, include the family inet6 statement at the [edit interfaces ams-interface-name unit 1] hierarchy level. When family inet and family inet6 are set for an AMS interface subunit, the hash keys are configured at the service-set level for interface-style service sets and at the IFL (logical interface) level for next-hop-style service sets.
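A dual-stack AMS subunit might therefore be configured as follows (a sketch; the interface name is illustrative):

```
interfaces {
    ams0 {
        unit 1 {
            family inet;
            family inet6;
        }
    }
}
```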
When a member interface of an AMS bundle fails, traffic destined to the failed member is redistributed among the remaining active members. The traffic (flows or sessions) traversing the existing active members is unaffected. If M members are currently active, the expected result is that only about a 1/M fraction of the traffic (flows/sessions) is impacted, because that amount of traffic is shifted from the failed member to the remaining active members. When the failed member interface comes back online, only a fraction of the traffic is redistributed to the new member. If N members are currently active, the expected result is that only about a 1/(N+1) fraction of the traffic (flows/sessions) is impacted, because that amount of traffic moves to the newly restored member. The 1/M and 1/(N+1) values assume that the flows are uniformly distributed among members, because a packet hash is used to load-balance and because traffic usually contains a typical random combination of IP addresses (or any other fields that are used as load-balancing keys).
Similar to IPv4 traffic, for IPv6 packets, an AMS bundle must contain members of only one services PIC type.
The number of flows redistributed can be as low as 1/N in a best-case scenario when the Nth member goes up or down. However, this assumes that the hash keys evenly load-balance the actual traffic. For example, consider a real-world deployment where member A is serving only one flow, whereas member B is serving 10 flows. If member B goes down, then the fraction of flows disrupted is 10/11. The NAT pool-split behavior is designed to take advantage of the rehash-minimization feature; the splitting of a NAT pool is performed for dynamic NAT scenarios (dynamic NAT, NAT64, and NAPT44).
If the original and redistributed flows are defined as follows:
Member-original-flows—The traffic mapped to a member when all members are up.
Member-redistributed-flows—The additional traffic mapped to a member when some other member fails. These traffic flows might need to be rebalanced when member interfaces come up and go down.
With the preceding definitions of the original and redistributed flows for member interfaces, the following observations apply:
The member-original-flows of a member stay intact as long as that member is up. Such flows are not impacted when other members move between the up and down states.
The member-redistributed-flows of a member can change when other members go up or down. This change of flows occurs because these additional flows need to be rebalanced among all active members. Therefore, the member-redistributed-flow can vary a lot based on other members going down or up. Although it might seem that when a member goes down, the flows on active-members are preserved, and that when a member goes up, flows on active-members are not preserved in an effective way, this behavior is only because of static or hash-based rebalancing of traffic among active members.
The rehash-minimization feature handles operational changes in member interface status only (such as a member going offline or a member Junos OS reset). It does not handle changes in configuration. For example, adding, deleting, activating, or deactivating member interfaces at the [edit interfaces amsN load-balancing-options member-interface mams-a/b/0] hierarchy level requires the member PICs to be bounced. Twice NAT and hairpinning are not supported, as with IPv4 support for AMS interfaces.
Member Failure Options and High Availability Settings
Because multiple service interfaces are configured as part of an AMS bundle, AMS configuration also provides for failover and high availability support. You can either configure one of the member interfaces as a backup interface that becomes active when any one of the other member interfaces goes down, or configure the AMS in such a way that when one of the member interfaces goes down, the traffic assigned to that interface is shared across the active interfaces.
The member-failure-options configuration statement enables you to configure how to handle traffic when a member interface fails. One option is to redistribute the traffic immediately among the other member interfaces. However, redistribution of traffic involves recalculating the hashes, and might cause some disruption in traffic on all the member interfaces.
The other option is to configure the AMS to drop all traffic that is assigned to the failed member interface. With this option you can also configure an interval, rejoin-timeout, for the AMS to wait for the failed interface to come back online; after this interval elapses, the AMS redistributes the traffic among the other member interfaces. If the failed member interface comes back online before the configured wait time, traffic continues unaffected on all member interfaces, including the interface that has come back online and resumed operations.
You can also control whether the failed interface rejoins when it comes back online. If you do not include the enable-rejoin statement in the member-failure-options configuration, the failed interface cannot rejoin the AMS when it comes back online. In such cases, you can manually rejoin it to the AMS by executing the request interfaces revert interface-name operational mode command.
The rejoin-timeout and enable-rejoin statements enable you to minimize traffic disruptions when member interfaces flap.
When member-failure-options are not configured, the default behavior is to drop member traffic with a rejoin timeout of 120 seconds.
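For example, the following sketch drops the failed member's traffic, waits up to 300 seconds for the member to return, and allows it to rejoin automatically (the timeout value is illustrative):

```
interfaces {
    ams0 {
        load-balancing-options {
            member-failure-options {
                drop-member-traffic {
                    rejoin-timeout 300;
                    enable-rejoin;
                }
            }
        }
    }
}
```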
The high-availability-options configuration enables you to designate one of the member interfaces as a backup interface. The backup interface does not participate in routing operations as long as it remains a backup interface. When a member interface fails, the backup interface handles the traffic assigned to the failed interface. When the failed interface comes back online, it becomes the new backup interface.
In a many-to-one configuration (N:1), a single backup interface supports all other member interfaces in the group. If any of the member interfaces fails, the backup interface takes over. In this stateless configuration, data is not synchronized between the backup interface and the other member interfaces.
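A many-to-one (N:1) backup might be sketched as follows, designating one member as the preferred backup for the rest of the bundle (the interface names are illustrative):

```
interfaces {
    ams0 {
        load-balancing-options {
            member-interface mams-1/0/0;
            member-interface mams-2/0/0;
            member-interface mams-3/0/0;
            high-availability-options {
                many-to-one {
                    preferred-backup mams-3/0/0;
                }
            }
        }
    }
}
```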
When both member-failure-options and high-availability-options are configured for an AMS, the high-availability-options configuration takes precedence over the member-failure-options configuration. If a second failure occurs before the failed interface comes back online to become the new backup, the member-failure-options configuration takes effect.
Warm Standby Redundancy
Starting in Junos OS Release 19.3R2, the N:1 warm standby option is supported on the MX-SPC3 if you are running Next Gen Services. Each warm standby AMS interface contains two members; one member is the service interface you want to protect, called the primary interface, and one member is the secondary (backup) interface. The primary interface is the active interface and the backup interface does not handle any traffic unless the primary interface fails.
To configure warm standby on an AMS interface, you use the redundancy-options statement. You cannot use the load-balancing-options statement on a warm standby AMS interface.
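A warm standby pair might be sketched as follows, naming the protected service interface as the primary and the backup as the secondary (a sketch; the vms- interface names are illustrative):

```
interfaces {
    ams0 {
        redundancy-options {
            primary-interface vms-2/0/0;
            secondary-interface vms-3/0/0;
        }
    }
}
```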
To switch from the primary interface to the secondary interface, issue the request interface switchover amsN command. To revert to the primary interface from the secondary interface, issue the request interface revert amsN command.