Traffic Load Balancer

Traffic Load Balancer Overview

Traffic Load Balancing Support Summary

Table 1 provides a summary of the traffic load balancing support on the MS-MPC and MS-MIC cards for Adaptive Services versus support on the MX-SPC3 security services card for Next Gen Services.

Table 1: Traffic Load Balancing Support Summary

|  | MS-MPC (Junos < 16.1R6 and 18.2R1) | MS-MPC (Junos ≥ 16.1R6 and 18.2R1) | MX-SPC3 (Junos 19.3R2) |
| --- | --- | --- | --- |
| Max # of instances per chassis | 32 | 2,000 (32 in L2 DSR mode) | 2,000 |
| Max # of virtual services per instance | 32 | 32 | 32 |
| Max # of virtual IP addresses per virtual service | 1 | 1 | 1 |
| Max # of groups per instance | 32 | 32 | 32 |
| Max # of real services (servers) per group | 255 | 255 | 255 |
| Max # of groups per virtual service | 1 | 1 | 1 |
| Max # of network monitor profiles per group | 2 | 2 | 2 |
| Max # of health checks per services PIC/NPU in a 5-second interval | 4,000 | 4,000 | 1,250 (19.3R2); 10,000 (20.1R1) |
| Supported health check protocols | ICMP, TCP, UDP, HTTP, SSL, Custom | ICMP, TCP, UDP, HTTP, SSL, Custom | ICMP, TCP, UDP, HTTP, SSL, TLS Hello, Custom |

Traffic Load Balancer Application Description

Traffic Load Balancer (TLB) is supported on MX Series routers with the Multiservices Modular Port Concentrator (MS-MPC), the Multiservices Modular Interface Card (MS-MIC), or the MX Security Services Processing Card (MX-SPC3), in conjunction with the Modular Port Concentrator (MPC) line cards supported on MX Series routers, as described in Table 2.

Note:

You cannot run Deterministic NAT and TLB simultaneously.

Table 2: TLB MX Series Router Platform Support Summary

| Services Card | MX Platform Coverage |
| --- | --- |
| Multiservices Modular Port Concentrator (MS-MPC) | MX240, MX480, MX960, MX2008, MX2010, MX2020 |
| MX Security Services Processing Card (MX-SPC3) | MX240, MX480, MX960 |

  • TLB enables you to distribute traffic among multiple servers.

  • TLB employs an MS-MPC-based control plane and a data plane using the MX Series router forwarding engine.

  • TLB uses an enhanced version of equal-cost multipath (ECMP). Enhanced ECMP facilitates the distribution of flows across groups of servers. Enhancements to native ECMP ensure that when servers fail, only flows associated with those servers are impacted, minimizing the overall network churn on services and sessions.

  • TLB provides application-based health monitoring for up to 255 servers per group, providing intelligent traffic steering based on health checking of server availability. You can configure an aggregated multiservices (AMS) interface to provide one-to-one redundancy for the MS-MPC or the Next Gen Services MX-SPC3 card used for server health monitoring.

  • TLB applies its flow distribution processing to ingress traffic.

  • TLB supports multiple virtual routing instances to provide improved support for large scale load balancing requirements.

  • TLB supports static virtual-IP-address-to-real-IP-address translation, and static destination port translation during load balancing.

Traffic Load Balancer Modes of Operation

Traffic Load Balancer provides three modes of operation for the distribution of outgoing traffic and for handling the processing of return traffic.

Table 3 summarizes which TLB modes are supported on each security services card.

Table 3: TLB Versus Security Service Cards Summary

| TLB Mode | MS-MPC | MX-SPC3 |
| --- | --- | --- |
| Translated | Yes | Yes |
| Transparent Layer 3 Direct Server Return | Yes | Yes |
| Transparent Layer 2 Direct Server Return | Yes | Not supported |

Transparent Mode Layer 2 Direct Server Return

When you use transparent mode Layer 2 direct server return (DSR):

  • The PFE processes data.

  • Load balancing works by changing the Layer 2 MAC address of packets.

  • An MS-MPC performs the network-monitoring probes.

  • Real servers must be directly (Layer 2) reachable from the MX Series router.

  • TLB installs a route and all the traffic over that route is load-balanced.

  • TLB never modifies Layer 3 and higher level headers.

Figure 1 shows the TLB topology for transparent mode Layer 2 DSR.

Figure 1: TLB Topology for Transparent Mode

Translated Mode

Translated mode provides greater flexibility than transparent mode Layer 2 DSR. When you choose translated mode:

  • An MS-MPC performs the network-monitoring probes.

  • The PFE performs stateless load balancing:

    • Data traffic directed to a virtual IP address undergoes translation of the virtual IP address to a real server IP address and of the virtual port to a server listening port. Return traffic undergoes the reverse translation.

    • Client to virtual IP traffic is translated; the traffic is routed to reach its destination.

    • Server-to-client traffic is captured using implicit filters and directed to an appropriate load-balancing next hop for reverse processing. After translation, traffic is routed back to the client.

    • Two load balancing methods are available: random and hash. The random method is only for UDP traffic and provides quasi-random distribution. While not literally random, this mode provides fair distribution of traffic to an available set of servers. The hash method provides a hash key based on any combination of the source IP address, destination IP address, and protocol.

      Note:

      Translated mode processing is only available for IPv4-to-IPv4 and IPv6-to-IPv6 traffic.

Figure 2 shows the TLB topology for translated mode.

Figure 2: TLB Topology for Translated Mode

Transparent Mode Layer 3 Direct Server Return

Transparent mode Layer 3 DSR load balancing distributes sessions to servers that can be a Layer 3 hop away. Traffic is returned directly to the client from the real-server.

Traffic Load Balancer Functions

TLB provides the following functions:

  • TLB always distributes the requests for any flow. When you specify DSR mode, the response returns directly to the source. When you specify translated mode, reverse traffic is steered through implicit filters on server-facing interfaces.

  • TLB supports hash-based load balancing or random load balancing.

  • TLB enables you to configure servers offline to prevent a performance impact that might be caused by a rehashing for all existing flows. You can add a server in the administrative down state and use it later for traffic distribution by disabling the administrative down state. Configuring servers offline helps prevent traffic impact to other servers.

  • When health checking determines a server to be down, only the affected flows are rehashed.

  • When a previously down server is returned to service, all flows belonging to that server based on hashing return to it, impacting performance for the returned flows. For this reason, you can disable the automatic rejoining of a server to an active group. You can return servers to service by issuing the request services traffic-load-balance real-service rejoin operational command.

    Note:

    NAT is not applied to the distributed flows.

  • The health check monitoring application runs on an MS-MPC network processor unit (NPU); this NPU is not used for handling data traffic.

  • TLB supports static virtual-IP-address-to-real-IP-address translation and static destination port translation during load balancing.

  • TLB provides multiple VRF support.

Traffic Load Balancer Application Components

Servers and Server Groups

TLB enables configuration of groups of up to 255 servers (referred to in configuration statements as real services) for use as alternate destinations for stateless session distribution. All servers used in server groups must be individually configured before assignment to groups. Load balancing uses hashing or randomization for session distribution. Users can add and delete servers to and from the TLB server distribution table and can also change the administrative status of a server.

Note:

TLB uses the session distribution next-hop API to update the server distribution table and retrieve statistics. Applications do not have direct control on the server distribution table management. They can only influence changes indirectly through the add and delete services of the TLB API.

Server Health Monitoring — Single Health Check and Dual Health Check

TLB supports TCP, HTTP, SSL Hello, TLS Hello, and custom health check probes to monitor the health of servers in a group. You can use a single probe type for a server group, or a dual health check configuration that includes two probe types. The configurable health monitoring function resides on either an MX-SPC3 or an MS-MPC. By default, probe requests are sent every 5 seconds. Also by default, a real server is declared down only after five consecutive probe failures and declared up only after five consecutive probe successes.

Use a custom health check probe to specify the following:

  • Expected string in the probe response

  • String that is sent with the probe

  • Server status to assign when the probe times out (up or down)

  • Server status to assign when the expected response to the probe is received (up or down)

  • Protocol — UDP or TCP

TLB provides application stickiness, meaning that server failures or changes do not affect traffic flows to other active servers. Changing a server’s administrative state from up to down does not impact any active flows to remaining servers in the server distribution table. Adding a server or deleting a server from a group has some traffic impact for a length of time that depends on your configuration of the interval and retry parameters in the monitoring profile.

TLB provides two levels of server health monitoring:

  • Single Health Check—One probe type is attached to a server group by means of the network-monitoring-profile configuration statement.

  • TLB Dual Health Check (TLB-DHC)—Two probe types are associated with a server group by means of the network-monitoring-profile configuration statement. A server’s status is declared based on the result of two health check probes. Users can configure up to two health check profiles per server group. If a server group is configured for dual health check, a real-service is declared to be UP only when both health-check probes are simultaneously UP; otherwise, a real-service is declared to be DOWN.
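For illustration, a dual health check configuration attaches two monitoring profiles to one server group. The instance, group, and profile names below are placeholders, and the bracketed two-profile form of the network-monitoring-profile statement is an assumption; verify the exact syntax for your release:

  [edit services traffic-load-balance instance tlb-inst1]
  user@host# set group web-servers network-monitoring-profile [ http-probe tcp-probe ]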

Note:

The following restrictions apply to AMS interfaces used for server health monitoring:

  • An AMS interface configured under a TLB instance uses its configured member interfaces exclusively for health checking of configured multiple real servers.

  • The member interfaces use unit 0 for single VRF cases, but can use units other than 0 for multiple VRF cases.

  • TLB uses the IP address that is configured for AMS member interfaces as the source IP address for health checks.

  • The member interfaces must be in the same routing instance as the interface used to reach real servers. This is mandatory for TLB server health-check procedures.

Starting in Junos OS Release 24.2R1, when TLS and SSL probes are configured in the same group, the OR mechanism is now used instead of AND to determine the status of the real server. That is, the real server is marked as UP if either probe is working. Previously, the real server was marked UP only if both probes succeeded.

When the SSL probe version is specified, TLB probes with that version. When the SSL version is not specified, the behavior falls back from SSLv3 to SSLv2: the probe starts with SSLv3 and, if the SSLv3 probe fails, the system probes with SSLv2. Previously, when the version attribute was not provided explicitly, probing used the default version, SSLv3.

Note:

This health check behavior enhancement is applicable only when the TLS and SSL probes are configured in the same health check group.

The output of the show services traffic-load-balance statistics instance <inst> extensive command has changed.

user@host> show services traffic-load-balance statistics instance <inst-name>

Note:

When the SSL version is not specified under the health check profile, the SSL-hello probe version is shown under the real server statistics rather than under the virtual service.

Routing Engine-Based Health Check for Traffic Load Balancer

Traffic Load Balancer (TLB) on next-generation MX Series routers can also run the health check process on the Routing Engine (RE). This feature applies to both MSP and USF.

To enable the health-check process (net-monitord) on the RE, use the set services traffic-load-balance routing-engine-mode statement. With this configuration, the process that manages and orchestrates traffic distribution and redirection connects to the local instance of the network monitoring process instead of the remote instance running on the services PIC.

The probe types supported for RE-based health checks are ICMP, TCP, UDP, HTTP, and SSL.

This feature requires a change to the TLB configuration: the loopback interface is used instead of the service interface.

Note:

TLB does not need the ms-x/y/0 (MSP) or vms-x/y/0 (USF) interfaces when net-monitord runs on the RE. Replace references to the ms-x/y/0 or vms-x/y/0 interfaces with the loopback interface lo.x.

Note:

To enable RE-based TLB, you must configure routing-engine-mode to enable net-monitord on the RE. Configuration validation ensures that routing-engine-mode cannot be configured together with interface ms-x/y/0.0 or interface vms-x/y/0.0 in the respective mode of operation (MSP or USF).
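For example, a minimal sketch of enabling RE-based health checking follows. The instance name and loopback unit are placeholders, and the interface statement under the instance (replacing the ms- or vms- service interface) is an assumption shown for illustration:

  user@host# set services traffic-load-balance routing-engine-mode
  user@host# set services traffic-load-balance instance tlb-inst1 interface lo0.1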

Virtual Services

The virtual service provides a virtual IP address (VIP) that is associated with the group of servers to which traffic is directed as determined by hash-based or random session distribution and server health monitoring. In the case of L2 DSR and L3 DSR, the special address 0.0.0.0 causes all traffic flowing to the forwarding instance to be load balanced.

The virtual service configuration includes:

  • Mode—indicating how traffic is handled (translated or transparent).

  • The group of servers to which sessions are distributed.

  • The load balancing method.

  • Routing instance and route metric.

Best Practice:

Although you can assign a virtual address of 0.0.0.0 in order to use default routing, we recommend using a virtual address that can be assigned to a routing instance set up specifically for TLB.

Traffic Load Balancer Configuration Limits

Traffic Load Balancer configuration limits are described in Table 4.

Table 4: TLB Configuration Limits

| Configuration Component | Configuration Limit |
| --- | --- |
| Maximum number of instances | Starting in Junos OS Release 16.1R6 and Junos OS Release 18.2R1, the TLB application supports 2000 TLB instances for virtual services that use the direct-server-return or the translated mode; in earlier releases, the maximum number of instances is 32. If multiple virtual services use the same server group, all of those virtual services must use the same load-balancing method to support 2000 TLB instances. For virtual services that use the layer2-direct-server-return mode, TLB supports only 32 TLB instances; to perform the same function as the layer2-direct-server-return mode with support for 2000 TLB instances, use the direct-server-return mode along with a service filter that has the skip action. |
| Maximum number of servers per group | 255 |
| Maximum number of virtual services per services PIC | 32 |
| Maximum number of health checks per services PIC in a 5-second interval | MS-MPC services cards: 2000. Next Gen Services mode and MX-SPC3 services cards: 1250 |
| Maximum number of groups per virtual service | 1 |
| Maximum number of virtual IP addresses per virtual service | 1 |
| Supported health checking protocols | ICMP, TCP, HTTP, SSL, TLS-Hello, Custom |

Note:

ICMP health checking is supported only on MS-MPC services cards.

Starting in Junos OS release 22.4R1, TLB is enhanced to support TLS-Hello health check type. For TLS-Hello over TCP, TLS v1.2 and v1.3 health checks are supported.

Configuring TLB

The following topics describe how to configure TLB. To create a complete application, you must also define interfaces and routing information. You can optionally define firewall filters and policy options in order to differentiate TLB traffic.

Loading the TLB Service Package

Load the TLB service package on each service PIC on which you want to run TLB.

Note:

For Next Gen Services and the MX-SPC3 services card, you do not need to load this package.

To load the TLB service package on a service PIC:

  • Load the jservices-traffic-dird package.

    For example:
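    The following hedged illustration assumes the package is loaded under the chassis extension-provider hierarchy of the service PIC; the FPC and PIC numbers are placeholders:

    [edit]
    user@host# set chassis fpc 3 pic 0 adaptive-services service-package extension-provider package jservices-traffic-dird
    user@host# commit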

Configuring a TLB Instance Name

Before configuring TLB, enable the sdk-service process by configuring system processes sdk-service enable at the [edit] hierarchy.

To configure a name for the TLB instance:

  • At the [edit services traffic-load-balance] hierarchy level, identify the TLB instance name.

    For example:
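    The following illustration uses a hypothetical instance name, tlb-inst1:

    [edit]
    user@host# set system processes sdk-service enable
    user@host# set services traffic-load-balance instance tlb-inst1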

Configuring Interface and Routing Information

To configure interface and routing information (a consolidated configuration sketch follows these steps):

  1. At the [edit services traffic-load-balance instance instance-name] hierarchy level, identify the service interface associated with this instance.

    For example, on an MS-MPC:

    For example, for Next Gen Services on an MX-SPC3:

  2. Enable the routing of health-check packet responses from real servers to the service interface that you identified in Step 1.

    For example, on an MS-MPC:

    For example, on an MX-SPC3:

  3. Specify the client interface for which an implicit filter is defined to direct traffic in the forward direction. This is required only for translated mode.

    For example:

  4. Specify the virtual routing instance used to route data traffic in the forward direction to servers. This is required for SLT and Layer 3 DSR; it is optional for Layer 2 DSR.

    For example:

  5. Specify the server interface for which implicit filters are defined to direct return traffic to the client.
    Note:

    Implicit filters for return traffic are not used for DSR.

    For example:

  6. (Optional) Specify the filter used to bypass health checking for return traffic.

    For example:

  7. Specify the virtual routing instance in which you want the data in the reverse direction to be routed to the clients.

    For example:

    Note:

    Virtual routing instances for routing data in the reverse direction are not used with DSR.
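The following consolidated sketch shows the general shape of the interface and routing configuration for a translated-mode instance on an MS-MPC; it omits the health-check response routing (step 2) and the optional bypass filter (step 6). All interface, VRF, and instance names are placeholders; the client-interface and client-vrf statements are referenced elsewhere in this topic, while the interface, server-interface, and server-vrf statement names are assumptions shown for illustration only:

  [edit services traffic-load-balance instance tlb-inst1]
  user@host# set interface ms-1/0/0
  user@host# set client-interface ge-0/0/1.0
  user@host# set client-vrf CLIENT-VRF
  user@host# set server-interface ge-0/0/2.0
  user@host# set server-vrf SERVER-VRF

For Next Gen Services on an MX-SPC3, a vms- interface (for example, vms-1/0/0) takes the place of the ms- interface.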

Configuring Servers

To configure servers for the TLB instance:

Configure a logical name and IP address for each server to be made available for next-hop distribution.

For example:
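For illustration, the instance and server names and addresses below are placeholders, and the address statement under real-service is an assumption; verify the exact statement name for your release:

  [edit services traffic-load-balance instance tlb-inst1]
  user@host# set real-service srv1 address 192.0.2.11
  user@host# set real-service srv2 address 192.0.2.12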

Configuring Network Monitoring Profiles

A network monitoring profile configures a health check probe, which you assign to a server group to which session traffic is distributed.

To configure a network monitoring profile (a combined example follows these steps):

  1. Configure the type of probe to use for health monitoring — icmp, tcp, http, ssl-hello, tls-hello, or custom.
    Note:

    icmp probes are supported only on MS-MPC cards.

    Next Gen Services and the MX-SPC3 do not support ICMP probes in this release.

    • For an ICMP probe:

    • For a TCP probe:

    • For an HTTP probe:

    • For an SSL probe:

    • For a TLS-Hello probe:

    • For a custom probe:

  2. Configure the interval for probe attempts, in seconds (1 through 180).

    For example:

  3. Configure the number of failure retries, after which the real server is tagged as down.

    For example:

  4. Configure the number of recovery retries, which is the number of successful probe attempts after which the server is declared up.

    For example:
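A sketch of an ICMP probe profile follows. The profile name and values are placeholders, and the hierarchy and statement names (network-monitoring profile, probe-interval, failure-retries, recovery-retries) are assumptions shown for illustration; verify them against your release. ICMP probes apply only to MS-MPC cards, so substitute another probe type for an MX-SPC3:

  [edit services]
  user@host# set network-monitoring profile icmp-probe icmp
  user@host# set network-monitoring profile icmp-probe probe-interval 5
  user@host# set network-monitoring profile icmp-probe failure-retries 5
  user@host# set network-monitoring profile icmp-probe recovery-retries 5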

Configuring Server Groups

Server groups consist of servers to which traffic is distributed by means of stateless, hash-based session distribution and server health monitoring.

To configure a server group:

  1. Specify the names of one or more configured real servers.

    For example:

  2. Configure the routing instance for the group when you do not want to use the default instance, inet.0.

    For example:

  3. (Optional) Disable the default option that allows a server to rejoin the group automatically when it comes up.
  4. (Optional) Configure the logical unit of the instance’s service interface to use for health checking.
    1. Specify the logical unit.

    2. Enable the routing of health-check packet responses from real servers to the interface.

    For example:

  5. Configure one or two network monitoring profiles to be used to monitor the health of servers in this group.

    For example:
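A sketch of a server group follows, reusing the hypothetical names from the earlier sketches. The real-services and routing-instance statement names under the group are assumptions shown for illustration; the network-monitoring-profile statement is referenced elsewhere in this topic:

  [edit services traffic-load-balance instance tlb-inst1]
  user@host# set group web-servers real-services [ srv1 srv2 ]
  user@host# set group web-servers routing-instance SERVER-VRF
  user@host# set group web-servers network-monitoring-profile icmp-probe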

Configuring Virtual Services

A virtual service provides an address that is associated with the group of servers to which traffic is directed as determined by hash-based or random session distribution and server health monitoring. You can optionally specify filters and routing instances to steer traffic for TLB.

To configure a virtual service (a combined example follows these steps):

  1. At the [edit services traffic-load-balance instance instance-name] hierarchy level, specify a non-zero address for the virtual service.

    For example:

  2. Specify the server group used for this virtual service.

    For example:

  3. (Optional) Specify a routing instance for the virtual service. If you do not specify a routing instance, the default routing instance is used.

    For example:

  4. Specify the processing mode for the virtual service.

    For example:

  5. (Optional) For a translated mode virtual service, enable the addition of the IP addresses for all the real servers in the group under the virtual service to the server-side filters. Doing this allows you to configure two virtual services with the same listening port and protocol on the same interface and VRF.
  6. (Optional) Specify a routing metric for the virtual service.

    For example:

  7. Specify the method used for load balancing. You can specify a hash method that provides a hash key based on any combination of the source IP address, destination IP address, and protocol, or you can specify random.

    For example:

    or

    Note:

    If you switch between the hash method and the random method for a virtual service, the statistics for the virtual service are lost.

  8. For a translated mode virtual service, specify a service for translation, including a virtual-port, server-listening-port, and protocol.

    For example:

  9. Commit the configuration.
    Note:

    In the absence of a client-interface configuration under the TLB instance, the implicit client filter (for the VIP) is attached to the client-vrf configured under the TLB instance. In this case, the routing-instance under a translated mode virtual service cannot be the same as the client-vrf configured under the TLB instance; if it is, the commit fails.
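The following sketch brings these steps together for a translated-mode virtual service, reusing the hypothetical names from the earlier sketches. The virtual-service, address, mode, load-balance-method, and service statement names are assumptions shown for illustration (the virtual-port, server-listening-port, and protocol parameters are named in step 8); verify the exact statements against your release:

  [edit services traffic-load-balance instance tlb-inst1 virtual-service vs1]
  user@host# set address 198.51.100.10
  user@host# set group web-servers
  user@host# set routing-instance VIP-VRF
  user@host# set mode translated
  user@host# set load-balance-method hash hash-key source-ip
  user@host# set service svc1 virtual-port 80 server-listening-port 8080 protocol tcp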

Configuring Tracing for the Health Check Monitoring Function

To configure tracing options for the health check monitoring function (an example follows these steps):

  1. Specify that you want to configure tracing options for the health check monitoring function.
  2. (Optional) Configure the name of the file used for the trace output.
  3. (Optional) Disable remote tracing capabilities.
  4. (Optional) Configure flags to filter the operations to be logged.

    Table 5 describes the flags that you can include.

    Table 5: Trace Flags

    | Flag | Support on MS-MPC and MX-SPC3 Cards | Description |
    | --- | --- | --- |
    | all | MS-MPC and MX-SPC3 | Trace all operations. |
    | all-real-services | MX-SPC3 | Trace all real services. |
    | config | MS-MPC and MX-SPC3 | Trace traffic load balancer configuration events. |
    | connect | MS-MPC and MX-SPC3 | Trace traffic load balancer IPC events. |
    | database | MS-MPC and MX-SPC3 | Trace database events. |
    | file-descriptor-queue | MS-MPC | Trace file descriptor queue events. |
    | inter-thread | MS-MPC | Trace inter-thread communication events. |
    | filter | MS-MPC and MX-SPC3 | Trace traffic load balancer filter programming events. |
    | health | MS-MPC and MX-SPC3 | Trace traffic load balancer health events. |
    | messages | MS-MPC and MX-SPC3 | Trace normal events. |
    | normal | MS-MPC and MX-SPC3 | Trace normal events. |
    | operational-commands | MS-MPC and MX-SPC3 | Trace traffic load balancer show events. |
    | parse | MS-MPC and MX-SPC3 | Trace traffic load balancer parse events. |
    | probe | MS-MPC and MX-SPC3 | Trace probe events. |
    | probe-infra | MS-MPC and MX-SPC3 | Trace probe infra events. |
    | route | MS-MPC and MX-SPC3 | Trace traffic load balancer route events. |
    | snmp | MS-MPC and MX-SPC3 | Trace traffic load balancer SNMP events. |
    | statistics | MS-MPC and MX-SPC3 | Trace traffic load balancer statistics events. |
    | system | MS-MPC and MX-SPC3 | Trace traffic load balancer system events. |

  5. (Optional) Configure the level of tracing.
  6. (Optional) Configure tracing for a particular real server within a particular server group.
  7. (Optional) Starting in Junos OS Release 16.1R6 and 18.2R1, configure tracing for a particular virtual service and instance.
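For example, to trace health and probe events to a local file, assuming that the traceoptions hierarchy sits under [edit services traffic-load-balance] (the placement and file name are assumptions; the flag names are listed in Table 5):

  [edit services traffic-load-balance]
  user@host# set traceoptions file tlb-trace.log
  user@host# set traceoptions no-remote-trace
  user@host# set traceoptions flag health
  user@host# set traceoptions flag probe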

Change History Table

Feature support is determined by the platform and release you are using. Use Feature Explorer to determine if a feature is supported on your platform.

| Release | Description |
| --- | --- |
| 16.1R6 | Starting in Junos OS Release 16.1R6 and Junos OS Release 18.2R1, the TLB application supports 2000 TLB instances for virtual services that use the direct-server-return or the translated mode. |
| 16.1R6 | Starting in Junos OS Release 16.1R6 and 18.2R1, configure tracing for a particular virtual service and instance. |