
Chassis Cluster on NFX250 Devices

Chassis clustering synchronizes the configuration files and the dynamic runtime session states between the two devices that form the cluster. On NFX250 devices, the vSRX Virtual Firewall instances on the two devices are grouped into a cluster to provide high availability (HA).

NFX250 Chassis Cluster Overview

You can configure NFX250 devices to operate in cluster mode by connecting and configuring the vSRX Virtual Firewall instances on each device to operate like a single node, providing redundancy at the device, interface, and service level.

When two devices are configured to operate as a chassis cluster, each device becomes a node of that cluster. The two nodes back up each other, with one node acting as the primary device and the other node acting as the secondary device, ensuring stateful failover of processes and services when the system or hardware fails. If the primary device fails, the secondary device takes over the processing of traffic.

The nodes of a cluster are connected through two links: the control link and the fabric link. The devices in a chassis cluster synchronize the configuration, kernel, and Packet Forwarding Engine (PFE) session states across the cluster to facilitate high availability, failover of stateful services, and load balancing.

  • Control link—Synchronizes the configuration between the nodes. When you submit configuration statements to the cluster, the configuration is automatically synchronized over the control interface.

    To create a vSRX Virtual Firewall cluster control link data path, connect the ge-0/0/0 interface on one node to the ge-0/0/0 interface on the second node.

    Note:

    You can use only the ge-0/0/0 interface to create a control link.

  • Fabric link (data link)—Forwards traffic between the nodes. Traffic arriving on a node that needs to be processed on the other node is forwarded over the fabric link. Similarly, traffic processed on a node that needs to exit through an interface on the other node is forwarded over the fabric link.

    You can use any interface except ge-0/0/0 to create a fabric link.

Chassis Cluster Modes

The chassis cluster can be configured in active/passive or active/active mode.

  • Active/passive mode—In active/passive mode, the transit traffic passes through the primary node while the backup node is used only in the event of a failure. When a failure occurs, the backup device becomes the primary device and takes over all forwarding tasks.

  • Active/active mode—In active/active mode, the transit traffic passes through both nodes all the time.

Chassis Cluster Interfaces

The chassis cluster interfaces include:

  • Redundant Ethernet (reth) interface—A pseudo-interface that includes a physical interface from each node of a cluster. The reth interface of the active node is responsible for passing the traffic in a chassis cluster setup.

    A reth interface must contain, at minimum, a pair of Fast Ethernet interfaces or a pair of Gigabit Ethernet interfaces that are referred to as child interfaces of the redundant Ethernet interface (the redundant parent). If two or more child interfaces from each node are assigned to the redundant Ethernet interface, a redundant Ethernet interface link aggregation group can be formed.

    Note:

    You can configure a maximum of 128 reth interfaces on NFX250 devices.

  • Control interface—An interface that provides the control link between the two nodes in the cluster. This interface is used for routing updates and for control plane signal traffic, such as heartbeat and threshold information that trigger node failover.

  • Fabric interface—An interface that provides the physical connection between two nodes of a cluster. A fabric interface is formed by connecting a pair of Ethernet interfaces back-to-back (one from each node). The Packet Forwarding Engines of the cluster use this interface to transmit transit traffic and to synchronize the runtime state of the data plane software. You must specify in the configuration the physical interfaces to be used for the fabric interface.

Note:

Chassis cluster is enabled over vSRX Virtual Firewall 2.0 VNF instances running on two separate NFX250 devices.

Chassis Cluster Limitation

A reth interface with more than one child interface per node is called a redundant LAG (RLAG). RLAG, in which multiple reth member interfaces belong to the same node, is not supported.

Example: Configuring a Chassis Cluster on NFX250 Devices

This example shows how to set up chassis clustering on NFX250 devices.

Requirements

Before you begin:

  • Physically connect the two devices and ensure that they are the same NFX250 model.

  • Ensure that both devices are running the same Junos OS version.

  • (Optional) Remove all interface mappings for the control port ge-0/0/0 on both nodes.

  • Connect the control port ge-0/0/0 on node 0 to the ge-0/0/0 port on node 1.

  • Connect the fabric port on node 0 to the fabric port on node 1.

Overview

This example shows how to set up basic active/passive chassis clustering. One device actively maintains control of the chassis cluster. The other device passively maintains its state for cluster failover capabilities in case the active device becomes inactive.

Note:

This example does not describe in detail miscellaneous configurations, such as security features, because they are essentially the same as in standalone configurations.

Configuration

Deploy vSRX Virtual Firewall 2.0 VNF on NFX250 Devices

Step-by-Step Procedure
  1. Copy the vSRX Virtual Firewall 2.0 VNF image to the /var/third-party/images/ folder.

  2. Define the host-OS VLANs:

  3. Deploy the vSRX Virtual Firewall VNF on NFX250:
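The exact CLI paths for these steps differ between NFX software releases, so the following is only a hedged sketch. It assumes an NFX250 running the Junos-based host OS; the VLAN names and IDs, the image file name, the VNF name vsrx, and the eth2/eth3 interface mappings are all illustrative assumptions, not values from this document.

```
[edit]
set vmhost vlans vlan100 vlan-id 100
set vmhost vlans vlan200 vlan-id 200
set virtual-network-functions vsrx image /var/third-party/images/vsrx.qcow2
set virtual-network-functions vsrx interfaces eth2 mapping vlan members vlan100
set virtual-network-functions vsrx interfaces eth3 mapping vlan members vlan200
```

Commit the configuration and verify that the VNF boots before proceeding to the chassis cluster steps.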

Configure the JCP Datapath

Step-by-Step Procedure
  1. Configure the front panel ports with the Ethernet switching family and map them to the VLANs:
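As a sketch of this step (the port ge-0/0/3 and the VLAN name vlan100 are example values, not taken from this document), a front panel port is placed in the Ethernet switching family and mapped to a VLAN on the JCP like this:

```
[edit]
set vlans vlan100 vlan-id 100
set interfaces ge-0/0/3 unit 0 family ethernet-switching interface-mode access
set interfaces ge-0/0/3 unit 0 family ethernet-switching vlan members vlan100
```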

Creating VNF HA Chassis Cluster

Step-by-Step Procedure
  1. Access the VNF:

  2. Change the mode to chassis cluster:

  3. Configure the VNFs that are spawned in a cluster mode:

Configuring a Chassis Cluster

Step-by-Step Procedure
  1. Configure the cluster ID on both the nodes and reboot the devices. A reboot is required to enter into cluster mode after the cluster ID and node ID are set.

    Note:

    You must issue these commands from operational mode on both devices.

    The cluster ID is the same on both devices, but the node ID must be different because one device is node 0 and the other device is node 1. The range for the cluster ID is 0 through 255; setting it to 0 is equivalent to disabling cluster mode.

  2. Verify that the chassis cluster is configured successfully:

    After the chassis cluster is set up, you can enter configuration mode and perform all configuration on the primary node, node 0.

  3. Configure the host names and the out-of-band management IP addresses for nodes 0 and 1:

    If you access the device from a subnet other than the one configured for out-of-band management, set up a static route:

  4. Configure a backup router to provide access from an external network for out-of-band management.
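The four steps above use the standard Junos chassis cluster and node-group statements. The following sketch shows the general shape; the cluster ID, host names, and all IP addresses are example values.

```
{operational mode, on node 0 and node 1 respectively}
set chassis cluster cluster-id 1 node 0 reboot
set chassis cluster cluster-id 1 node 1 reboot

{operational mode, after both nodes reboot}
show chassis cluster status

[edit]
set groups node0 system host-name nfx250-a
set groups node0 interfaces fxp0 unit 0 family inet address 192.0.2.1/24
set groups node1 system host-name nfx250-b
set groups node1 interfaces fxp0 unit 0 family inet address 192.0.2.2/24
set apply-groups "${node}"
set routing-options static route 198.51.100.0/24 next-hop 192.0.2.254
set groups node0 system backup-router 192.0.2.254 destination 198.51.100.0/24
set groups node1 system backup-router 192.0.2.254 destination 198.51.100.0/24
```

The node0 and node1 groups, combined with apply-groups "${node}", let each node apply only its own host name and fxp0 address from the shared configuration.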

Configure Fabric Interfaces

Step-by-Step Procedure

The ge-0/0/0 interface is a predefined control link. Therefore, select any other interface on the device to configure as a fabric interface. For example, in the following configuration, ge-0/0/1 is used as the fabric interface.

  1. Connect one end of the Ethernet cable to ge-0/0/1 on the NFX250NG-1 device and the other end of the cable to ge-0/0/1 on the NFX250NG-2 device.

  2. Map physical LAN to virtual WAN port:

  3. Configure front panel (L2) interfaces corresponding to fabric interface:

  4. Configure L3 interfaces as fabric members:

  5. Configure data path for fabric interfaces:

  6. Configure port peering for the fabric and reth members. Port peering ensures that when a LAN interface controlled by the Layer 2 data plane (FPC0) fails, the corresponding interface on the Layer 3 data plane (FPC1) is marked down, and vice versa. This helps the corresponding redundancy group fail over to the secondary node.
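The fabric member configuration on the vSRX instances can be sketched as follows. The fab0 pseudo-interface takes a node 0 member and fab1 takes a node 1 member; ge-7/0/1 assumes that node 1 interfaces are renumbered with an FPC offset of 7, which is an assumption here rather than a value from this document. The NFX-specific datapath and port-peering statements vary by release and are omitted.

```
[edit]
set interfaces fab0 fabric-options member-interfaces ge-0/0/1
set interfaces fab1 fabric-options member-interfaces ge-7/0/1
```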

Configure Redundant Groups and Redundant Interfaces

Step-by-Step Procedure
  1. Configure redundancy groups 1 and 2. Both redundancy-group 1 and redundancy-group 2 control the data plane and include the data plane ports. Each node has interfaces in a redundancy group. As part of the redundancy group configuration, you must also define the priority for the control plane and data plane: which device is preferred for the control plane, and which device is preferred for the data plane. In chassis clustering, the node with the higher priority number takes precedence.

    In this configuration, node 0 is the active node because it has the higher priority in redundancy-group 1. reth0 is a member of redundancy-group 1, and reth1 is a member of redundancy-group 2. You must configure all changes in the cluster through node 0. If node 0 fails, node 1 becomes the active node.

  2. Configure front panel (L2) interfaces corresponding to reth interface:

  3. Configure WAN (L3) interfaces as reth members:

  4. Configure reth interfaces:

    • Configure reth0:

    • Configure reth1:

  5. Configure interface monitoring for the reth interface members:

  6. Configure security policies to allow traffic from LAN to WAN, and from WAN to LAN:
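The steps above can be sketched as one composite configuration using the standard Junos chassis cluster statements. The priorities, child interface names, addresses, zone and policy names, and the ge-7/0/x renumbering of node 1 interfaces are all example assumptions, not values from this document.

```
[edit]
set chassis cluster reth-count 2
set chassis cluster redundancy-group 1 node 0 priority 200
set chassis cluster redundancy-group 1 node 1 priority 100
set chassis cluster redundancy-group 2 node 0 priority 200
set chassis cluster redundancy-group 2 node 1 priority 100
set interfaces ge-0/0/3 gigether-options redundant-parent reth0
set interfaces ge-7/0/3 gigether-options redundant-parent reth0
set interfaces ge-0/0/4 gigether-options redundant-parent reth1
set interfaces ge-7/0/4 gigether-options redundant-parent reth1
set interfaces reth0 redundant-ether-options redundancy-group 1
set interfaces reth0 unit 0 family inet address 192.0.2.1/24
set interfaces reth1 redundant-ether-options redundancy-group 2
set interfaces reth1 unit 0 family inet address 198.51.100.1/24
set chassis cluster redundancy-group 1 interface-monitor ge-0/0/3 weight 255
set chassis cluster redundancy-group 1 interface-monitor ge-7/0/3 weight 255
set security zones security-zone trust interfaces reth0.0
set security zones security-zone untrust interfaces reth1.0
set security policies from-zone trust to-zone untrust policy lan-to-wan match source-address any destination-address any application any
set security policies from-zone trust to-zone untrust policy lan-to-wan then permit
set security policies from-zone untrust to-zone trust policy wan-to-lan match source-address any destination-address any application any
set security policies from-zone untrust to-zone trust policy wan-to-lan then permit
```

With an interface-monitor weight of 255, a single monitored link failure reaches the redundancy group's failover threshold and triggers failover to the other node.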

Verification

Verifying Chassis Cluster Status

Purpose

Verify the status of the chassis cluster and its interfaces.

Action

From operational mode, issue the following commands:

  • Verify the status of the cluster:

  • Verify the status of the redundancy groups:

  • Verify the status of the interfaces:

  • Verify the status of the port-peering interfaces:
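The first three checks map to standard Junos operational commands, shown below; the port-peering verification command is platform specific and is not sketched here.

```
show chassis cluster status
show chassis cluster status redundancy-group 1
show chassis cluster interfaces
show chassis cluster statistics
```

In the output of show chassis cluster status, confirm that each redundancy group shows one node as primary and the other as secondary, and that the priorities match the configured values.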