
Multinode High Availability Services

Multinode High Availability supports active/active mode for data plane services and active/backup mode for control plane services. The following sections describe control plane stateless and stateful services:

Control Plane Stateless Services

SRG0 manages services without control plane state, such as application security, IDP, Content Security, firewall, NAT, policies, ALG, and so on. Failover for these services is required at the data plane level only, and some of these services are pass-through, that is, they do not terminate on the device (exceptions include NAT and firewall authentication).

SRG0 remains active on both nodes and forwards traffic from both nodes. These features work independently on both SRX Series Firewalls in Multinode High Availability.

To configure the control plane stateless services:

  • Configure the features as you configure them on a standalone SRX Series Firewall.
  • Install the same Junos OS version on the participating security devices (Junos OS Release 22.3R1 or later).
  • Install identical licenses on both nodes.
  • Download and install the same versions of the application signature package or IPS signature package on both nodes (if you are using application security or IDP). See the verification sketch after this list.
  • Configure conditional route advertisement, routing policies, and static routes as per your requirements.
  • In Multinode High Availability, configuration synchronization does not happen by default. You need to configure applications as part of groups and then synchronize the configuration using the peers-synchronize option, or manage the configuration independently on each node.
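
As a quick check, you can compare the Junos OS version, licenses, and signature package versions from operational mode; the output must match on both nodes. A minimal verification sketch using standard Junos operational commands (run them separately on each node; host-mnha-01 is used here only as an example hostname):

    user@host-mnha-01> show version
    user@host-mnha-01> show system license
    user@host-mnha-01> show services application-identification version
    user@host-mnha-01> show security idp security-package-version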

Network Address Translation

Services such as firewall, ALG, and NAT do not have a control plane state. For such services, only the data plane state needs to be synchronized across the nodes.

In a Multinode High Availability setup, one device handles a NAT session at a time, and the other device takes over the active role when a failover happens. So, a session remains active on one device, and on the other device the session remains in the warm (standby) state until a failover happens.

NAT sessions and ALG state objects get synchronized between the nodes. If one node fails, the second node continues to process traffic for the sessions synchronized from the failed device, including NAT translations.

You must create NAT rules and pools with the same parameters on both SRX Series Firewalls. To steer the response path for the NAT traffic (destined to the NAT pool IP address) to the correct SRX Series Firewall (the active device), you must have the required routing configuration on both the active and backup devices. That is, the configuration must specify which routes are advertised through the routing protocols to the adjacent routing devices. Accordingly, you must also configure the corresponding policy options and routes.

When you run NAT-specific operational commands on both devices, you see the same output. However, there could be instances where the internal numerical IDs of NAT rules or pools differ between the nodes. Different numerical IDs do not impact session synchronization or NAT translations upon failover.
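
The following is a minimal, hedged sketch of a source NAT pool and rule set that you would configure with identical parameters on both nodes, along with a sample route and routing policy to advertise the NAT pool address to the adjacent routing devices. The pool address (203.0.113.10), zone names, policy names, and the BGP group name are placeholders for illustration only:

    set security nat source pool nat-pool-1 address 203.0.113.10/32
    set security nat source rule-set trust-to-untrust from zone trust
    set security nat source rule-set trust-to-untrust to zone untrust
    set security nat source rule-set trust-to-untrust rule r1 match source-address 10.1.1.0/24
    set security nat source rule-set trust-to-untrust rule r1 then source-nat pool nat-pool-1
    set routing-options static route 203.0.113.10/32 discard
    set policy-options policy-statement advertise-nat-pool term nat-pool from route-filter 203.0.113.10/32 exact
    set policy-options policy-statement advertise-nat-pool term nat-pool then accept
    set protocols bgp group to-peers export advertise-nat-pool

In your deployment, you would typically tie the route advertisement to the node that currently owns the active role (for example, by using a policy condition based on a signal route) so that the response traffic for the NAT pool always reaches the active device.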

Firewall User Authentication

With firewall authentication, you can restrict or permit users individually or in groups. Users can be authenticated by using a local password database or an external password database.

Multinode High Availability supports the following authentication methods:

  • Pass-through authentication
  • Pass-through with web-redirect authentication
  • Web authentication

Firewall user authentication is a service with an active control plane state, and it requires synchronization of both control plane and data plane states across the nodes. In a Multinode High Availability setup, the firewall user authentication feature works independently on both SRX Series Firewalls and synchronizes the authentication table between the nodes. When a user authenticates successfully, the authentication entry is synchronized to the other node and is visible on both nodes when you run the show command (for example, show security firewall-authentication users).
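
As an illustration, the following minimal sketch shows pass-through firewall authentication with a local access profile, followed by the show command used to verify the synchronized entries. The profile, user, zone, and policy names are placeholders; configure the same settings on both nodes (or place them under a synchronized group):

    set access profile fw-auth-profile client user1 firewall-user password "user1-password"
    set security policies from-zone trust to-zone untrust policy auth-policy match source-address any
    set security policies from-zone trust to-zone untrust policy auth-policy match destination-address any
    set security policies from-zone trust to-zone untrust policy auth-policy match application junos-http
    set security policies from-zone trust to-zone untrust policy auth-policy then permit firewall-authentication pass-through access-profile fw-auth-profile

    user@host-mnha-01> show security firewall-authentication users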

Note:

When synchronizing configuration between nodes, verify that authentication, policy, source zone, and destination zone details match on both nodes. Maintaining the same order in your configuration ensures successful synchronization of authentication entries across both nodes.

If you clear an authentication entry on one node by using the clear security user-identification local-authentication-table command, ensure that you clear the authentication entry on the other node as well.

Follow the same practice in case of asymmetric traffic configuration as well.

Multinode High Availability supports Juniper Identity Management Service (JIMS) to obtain user identity information. Each node fetches the authentication entries from the JIMS server and processes them independently. Because of this, you must run firewall user authentication commands separately on each node. For example, when you display the authentication entries using the show commands, each node displays only those authentication entries that it is currently handling (as if it were working independently in standalone mode).
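
For example, with JIMS as the authentication source, you would run the display command on each node separately; each node shows only the entries it fetched and is handling. A hedged sketch using the identity-management display command:

    user@host-mnha-01> show services user-identification authentication-table authentication-source identity-management
    user@host-mnha-02> show services user-identification authentication-table authentication-source identity-management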

Configuration Synchronization Between Multinode High Availability Nodes

In Multinode High Availability, two SRX Series Firewalls act as independent devices. These devices have unique hostnames and IP addresses on the fxp0 interface. You can configure control plane stateless services such as ALG, firewall, and NAT independently on these devices. Node-specific packets are always processed on the respective nodes.

The following packets and services are node-specific (local) in Multinode High Availability:

  • Routing protocol packets destined to the Routing Engine

  • Management services, such as SNMP, and operational commands (show, request)

  • Processes, such as the authentication service process (authd), integrated with RADIUS and LDAP servers

  • ICL encryption-specific tunnel control and data packets

The configuration synchronization in Multinode High Availability is not enabled by default. If you want certain configurations to synchronize to the other node, you need to:

  • Configure the feature/function as part of groups
  • Synchronize the configuration using the peers-synchronize statement at the [edit system commit] hierarchy level

When you enable configuration synchronization (by using the peers-synchronize option) on both devices in a Multinode High Availability setup, the configuration settings that you define on one peer under [groups] automatically synchronize to the other peer upon the commit action.

The local peer on which you enable the peers-synchronize statement copies and loads its configuration to the remote peer. Each peer then performs a syntax check on the configuration file being committed. If no errors are found, the configuration is activated and becomes the current operational configuration on both peers.
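
A minimal sketch of enabling synchronization on the local peer, assuming host-mnha-02 is the remote peer and treating the user name and password shown as placeholders (configure the mirror-image statements on the other node):

    set system commit peers-synchronize
    set system commit peers host-mnha-02 user admin authentication "peer-password"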

The following procedure shows how to synchronize a VPN configuration defined under the group avpn_config_group on host-mnha-01 to the other peer device, host-mnha-02. A hedged configuration sketch follows the steps.

  1. Configure the hostname and IP address of the participating peer device (host-mnha-02), authentication details, and include the peers-synchronization statement.
  2. Configure the group (avpn_config_group) and specify the apply conditions (when peers host-mnha-01 and host-mnha-02).

  3. Use the apply-groups statement at the root of the configuration.

    When you commit the configuration, Junos OS evaluates the when peers condition and merges the group that matches the node name.

  4. Verify the synchronization status using the show configuration system command from the operational mode.

    The command output displays the details of the peer SRX Series Firewall under the peers option.
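
Building on the peers statements shown earlier, the following hedged sketch illustrates steps 2 through 4 on host-mnha-01. The group contents shown here (a single IKE proposal statement) are only a placeholder for your actual VPN configuration under avpn_config_group:

    set groups avpn_config_group when peers [ host-mnha-01 host-mnha-02 ]
    set groups avpn_config_group security ike proposal ike-prop authentication-method pre-shared-keys
    set apply-groups avpn_config_group

    user@host-mnha-01> show configuration system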

Note:

The configuration synchronization happens dynamically. If any configuration change is made when only one node is available or when the connectivity between the nodes is broken, you must issue one more commit to synchronize the configuration to the other node. Otherwise, the applications end up with inconsistent configurations across the nodes.

Note:
  • The configuration synchronization is not mandatory for Multinode High Availability to work. However, for easy configuration synchronization, we recommend using the set system commit peers-synchronize statement with Junos groups configuration in one direction (for example, from node 0 to node 1).
  • We recommend using the out-of-band management (fxp0) connection for configuration synchronization between Multinode High Availability nodes to manage common configurations.
  • For the IPsec use case, if configuration synchronization is not enabled, you must commit the configuration first on the backup node and then on the active node.