Multinode High Availability Services
Multinode High Availability supports active/active mode for data plane services and active/backup mode for control plane services. Let's learn about control plane stateless and stateful services in the following sections:
Control Plane Stateless Services
SRG0 manages services without control plane state, such as application security, IDP, Content Security, firewall, NAT, policies, ALG, and so on. Failover for these services is required at the data plane level only, and some of these services are pass-through (not terminating on the device, except for NAT and firewall authentication).
SRG0 remains active on both nodes and forwards traffic from both nodes. These features work independently on both SRX Series Firewalls in Multinode High Availability.
To configure the control plane stateless services:
- Configure the features as you configure them on a standalone SRX Series Firewall.
- Install the same Junos OS version (Junos OS Release 22.3R1 or later) on the participating security devices.
- Install identical licenses on both nodes.
- Download and install the same versions of the application signature package or IPS package on both nodes (if you are using application security and IDP).
- Configure conditional route advertisement, routing policy, and static routes as per your requirements.
- In Multinode High Availability, configuration synchronization does not happen by default. You need to configure applications as part of groups and then synchronize the configuration using the peers-synchronize option, or manage the configuration independently on each node. See Configuration Synchronization Between Multinode High Availability Nodes below.
Network Address Translation
Services such as firewall, ALG, and NAT do not have control plane state. For such services, only the data plane state needs to be synchronized across the nodes.
In a Multinode High Availability setup, one device handles a NAT session at a time, and the other device takes over the active role when a failover happens. So, a session remains active on one device, while on the other device the session stays in a warm (standby) state until failover happens.
NAT sessions and ALG state objects get synchronized between the nodes. If one node fails, the second node continues to process traffic for the sessions synchronized from the failed device, including NAT translations.
You must create NAT rules and pools with the same parameters on both SRX Series Firewalls. To steer the response path for NAT traffic (destined to the NAT pool IP address) to the correct SRX Series Firewall (the active device), you must have the required routing configuration on both the active and backup devices. That is, the configuration must specify which routes are advertised through the routing protocols to the adjacent routing devices. Accordingly, you must also configure the corresponding policy options and routes.
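To make this concrete, the following is a minimal sketch (not from the original example) of a source NAT configuration that you would create identically on both nodes. The pool, rule-set, rule, and zone names, as well as all addresses, are hypothetical:

On both nodes:

set security nat source pool SRC-NAT-POOL address 192.0.2.10/32 to 192.0.2.20/32
set security nat source rule-set TRUST-TO-UNTRUST from zone trust
set security nat source rule-set TRUST-TO-UNTRUST to zone untrust
set security nat source rule-set TRUST-TO-UNTRUST rule R1 match source-address 10.1.1.0/24
set security nat source rule-set TRUST-TO-UNTRUST rule R1 then source-nat pool SRC-NAT-POOL

To steer return traffic for the pool toward the active node, you could anchor the pool prefix locally and export it through your routing protocol. The policy name and BGP group name below are also hypothetical:

set routing-options static route 192.0.2.0/27 discard
set policy-options policy-statement ADV-NAT-POOL term 1 from route-filter 192.0.2.0/27 exact
set policy-options policy-statement ADV-NAT-POOL term 1 then accept
set protocols bgp group EBGP-PEERS export ADV-NAT-POOL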
When you run NAT-specific operational commands on both devices, you see the same output. However, in some instances the internal numerical IDs of NAT rules and pools can differ between the nodes. Different numerical IDs do not impact session synchronization or NAT translations upon failover.
Firewall User Authentication
With firewall authentication, you can restrict or permit users individually or in groups. Users can be authenticated using a local password database or using an external password database.
Multinode High Availability supports the following authentication methods:
- Pass-through authentication
- Pass-through with web-redirect authentication
- Web authentication
Firewall user authentication is a service with an active control plane state and requires synchronization of both control and data plane states across the nodes. In a Multinode High Availability setup, the firewall user authentication feature works independently on both SRX Series Firewalls and synchronizes the authentication table between the nodes. When a user authenticates successfully, the authentication entry is synchronized to the other node and is visible on both nodes when you run the show command (for example, show security firewall-authentication users).
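As an illustration, the following minimal sketch shows pass-through firewall authentication with a local password database; the access profile, user, policy, and zone names are hypothetical, and the same configuration must exist on both nodes:

set access profile FW-AUTH-PROFILE client user1 firewall-user password "user1-password"
set access firewall-authentication pass-through default-profile FW-AUTH-PROFILE
set security policies from-zone trust to-zone untrust policy AUTH-POLICY match source-address any
set security policies from-zone trust to-zone untrust policy AUTH-POLICY match destination-address any
set security policies from-zone trust to-zone untrust policy AUTH-POLICY match application any
set security policies from-zone trust to-zone untrust policy AUTH-POLICY then permit firewall-authentication pass-through access-profile FW-AUTH-PROFILE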
When synchronizing configuration between nodes, verify that authentication, policy, source zone, and destination zone details match on both nodes. Maintaining the same order in your configuration ensures successful synchronization of authentication entries across both nodes.
If you clear an authentication entry on one node using the clear security user-identification local-authentication-table command, ensure that you clear the authentication entry on the other node as well.
Follow the same practice in case of asymmetric traffic configuration as well.
Multinode High Availability supports Juniper Identity Management Service (JIMS) to obtain user identity information. Each node fetches the authentication entries from the JIMS server and processes them independently. Because of this, you must run firewall user authentication commands separately on each node. For example, when you display the authentication entries using the show commands, each node displays only those authentication entries that it is currently handling (as if it were working independently in standalone mode).
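For example, to inspect the JIMS-sourced entries, you might run a query like the following on each node separately (the hostnames here match the example later in this section); each node's output reflects only its own entries:

user@host-mnha-01> show services user-identification authentication-table authentication-source identity-management
user@host-mnha-02> show services user-identification authentication-table authentication-source identity-management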
Configuration Synchronization Between Multinode High Availability Nodes
In Multinode High Availability, two SRX Series Firewalls act as independent devices. These devices have unique hostnames and IP addresses on the fxp0 interface. You can configure control plane stateless services such as ALG, firewall, and NAT independently on these devices. Node-specific packets are always processed on the respective nodes.
The following packets/services are node-specific (local) in Multinode High Availability:
- Routing protocol packets destined to the Routing Engine
- Management services, such as SNMP, and operational commands (show, request)
- Processes, such as the authentication service process (authd), integrated with RADIUS and LDAP servers
- ICL encryption-specific tunnel control and data packets
Configuration synchronization in Multinode High Availability is not enabled by default. If you want certain configurations to synchronize to the other node, you need to:
- Configure the feature/function as part of groups
- Synchronize the configuration using the peers-synchronize statement at the [edit system commit] hierarchy level
When you enable configuration synchronization (by using the peers-synchronize option) on both devices in a Multinode High Availability setup, the configuration settings you configure on one peer under [groups] automatically sync to the other peer upon the commit action.
The local peer on which you enable the peers-synchronize statement copies and loads its configuration to the remote peer. Each peer then performs a syntax check on the configuration file being committed. If no errors are found, the configuration is activated and becomes the current operational configuration on both peers.
The following configuration snippet shows the VPN configuration under avpn_config_group on host-mnha-01. We'll synchronize the configuration to the other peer device, host-mnha-02.
- Configure the hostname and IP address of the participating peer device (host-mnha-02), the authentication details, and include the peers-synchronize statement.

On host-mnha-01:

[edit]
set system commit peers-synchronize
set system commit peers host-mnha-02 user user-02
set system commit peers host-mnha-02 authentication "$ABC"
set system services netconf ssh
set system static-host-mapping host-mnha-02 inet 10.157.75.129
- Configure the group (avpn_config_group) and specify the apply conditions (when peers host-mnha-01 and host-mnha-02).

On host-mnha-01:

set groups avpn_config_group when peers host-mnha-01
set groups avpn_config_group when peers host-mnha-02
set groups avpn_config_group security ike proposal avpn_IKE_PROP authentication-method rsa-signatures
set groups avpn_config_group security ike proposal avpn_IKE_PROP dh-group group14
set groups avpn_config_group security ike proposal avpn_IKE_PROP authentication-algorithm sha1
set groups avpn_config_group security ike proposal avpn_IKE_PROP encryption-algorithm aes-128-cbc
set groups avpn_config_group security ike proposal avpn_IKE_PROP lifetime-seconds 3600
set groups avpn_config_group security ike policy avpn_IKE_POL proposals avpn_IKE_PROP
set groups avpn_config_group security ike policy avpn_IKE_POL certificate local-certificate crt2k
set groups avpn_config_group security ike gateway avpn_ike_gw ike-policy avpn_IKE_POL
set groups avpn_config_group security ike gateway avpn_ike_gw dynamic distinguished-name wildcard C=us,O=ixia
set groups avpn_config_group security ike gateway avpn_ike_gw dynamic ike-user-type group-ike-id
set groups avpn_config_group security ike gateway avpn_ike_gw dead-peer-detection probe-idle-tunnel
set groups avpn_config_group security ike gateway avpn_ike_gw dead-peer-detection interval 60
set groups avpn_config_group security ike gateway avpn_ike_gw dead-peer-detection threshold 5
set groups avpn_config_group security ike gateway avpn_ike_gw local-identity hostname srx.juniper.net
set groups avpn_config_group security ike gateway avpn_ike_gw external-interface lo0.0
set groups avpn_config_group security ike gateway avpn_ike_gw local-address 10.11.0.1
set groups avpn_config_group security ike gateway avpn_ike_gw version v2-only
set groups avpn_config_group security ipsec proposal avpn_IPSEC_PROP protocol esp
set groups avpn_config_group security ipsec proposal avpn_IPSEC_PROP authentication-algorithm hmac-sha1-96
set groups avpn_config_group security ipsec proposal avpn_IPSEC_PROP encryption-algorithm aes-128-cbc
set groups avpn_config_group security ipsec proposal avpn_IPSEC_PROP lifetime-seconds 1800
set groups avpn_config_group security ipsec policy avpn_IPSEC_POL perfect-forward-secrecy keys group14
set groups avpn_config_group security ipsec policy avpn_IPSEC_POL proposals avpn_IPSEC_PROP
set groups avpn_config_group security ipsec vpn avpn_ipsec_vpn bind-interface st0.15001
set groups avpn_config_group security ipsec vpn avpn_ipsec_vpn ike gateway avpn_ike_gw
set groups avpn_config_group security ipsec vpn avpn_ipsec_vpn ike ipsec-policy avpn_IPSEC_POL
set groups avpn_config_group security ipsec vpn avpn_ipsec_vpn traffic-selector ts local-ip 10.19.0.0/8
set groups avpn_config_group security ipsec vpn avpn_ipsec_vpn traffic-selector ts remote-ip 10.4.0.0/8
set groups avpn_config_group security zones security-zone vpn host-inbound-traffic system-services all
set groups avpn_config_group security zones security-zone vpn host-inbound-traffic protocols all
set groups avpn_config_group security zones security-zone vpn interfaces st0.15001
set groups avpn_config_group interfaces st0 description vpn
set groups avpn_config_group interfaces st0 unit 15001 family inet
- Use the apply-groups statement at the root of the configuration.

On host-mnha-01:

set apply-groups avpn_config_group
When you commit the configuration, Junos OS evaluates the when peers condition and merges the group that matches the node name.
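As an optional check (not part of the original procedure), you can confirm how the group contents merge into the active configuration by using the display inheritance filter, for example:

user@host-mnha-01> show configuration security ike | display inheritance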
- Verify the synchronization status using the show configuration system command from operational mode.

user@host-mnha-01> show configuration system
...........
commit {
    peers {
        host-mnha-02 {
            user user-02;
            authentication "$ABC123";
        }
    }
}
static-host-mapping {
    host-mnha-02 inet 10.157.75.129;
}
............
The command output displays the details of the peer SRX Series Firewall under the peers option.
Configuration synchronization happens dynamically. If you make a configuration change when only one node is available or when connectivity between the nodes is broken, you must issue one more commit to synchronize the configuration to the other node. Otherwise, the applications will have inconsistent configurations across the nodes.
- Configuration synchronization is not mandatory for Multinode High Availability to work. However, for easy configuration synchronization, we recommend using the set system commit peers-synchronize statement with Junos groups configuration in one direction (for example, from node 0 to node 1).
- We recommend using the out-of-band management (fxp0) connection for configuration synchronization between Multinode High Availability nodes to manage common configurations.
- For the IPsec use case, if configuration synchronization is not enabled, you must commit the configuration first on the backup node and then on the active node.