Chassis Cluster Fabric Interfaces
SRX Series devices in a chassis cluster use the fabric (fab) interface for session synchronization and to forward traffic between the two chassis. The fabric link is a physical connection between two Ethernet interfaces on the same LAN. Both interfaces must be the same media type. For more information, see the following topics:
Understanding Chassis Cluster Fabric Interfaces
The fabric is a physical connection between two nodes of a cluster and is formed by connecting a pair of Ethernet interfaces back-to-back (one from each node).
Unlike for the control link, whose interfaces are determined by the system, you specify the physical interfaces to be used for the fabric data link in the configuration.
The fabric is the data link between the nodes and is used to forward traffic between the chassis. Traffic arriving on a node that needs to be processed on the other is forwarded over the fabric data link. Similarly, traffic processed on a node that needs to exit through an interface on the other node is forwarded over the fabric.
The data link is referred to as the fabric interface. It is used by the cluster's Packet Forwarding Engines to transmit transit traffic and to synchronize the data plane software’s dynamic runtime state. The fabric provides for synchronization of session state objects created by operations such as authentication, Network Address Translation (NAT), Application Layer Gateways (ALGs), and IP Security (IPsec) sessions.
When the system creates the fabric interface, the software assigns it an internally derived IP address to be used for packet transmission.
After fabric interfaces have been configured on a chassis cluster, removing the fabric configuration on either node will cause the redundancy group 0 (RG0) secondary node to move to a disabled state. (Resetting a device to the factory default configuration removes the fabric configuration and thereby causes the RG0 secondary node to move to a disabled state.) After the fabric configuration is committed, do not reset either device to the factory default configuration.
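To confirm whether the secondary node has moved to a disabled state after a fabric configuration change, you can check the cluster from operational mode. This is a minimal check using standard operational commands; output is not shown here because it varies by platform and cluster state:
{primary:node0}
user@host> show chassis cluster status
user@host> show chassis cluster interfaces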
- Supported Fabric Interface Types for SRX Series Firewalls (SRX300 Series, SRX1500, SRX1600, SRX2300, SRX4100/SRX4200, SRX4300, SRX4600, and SRX5000 line)
- Jumbo Frame Support
- Understanding Fabric Interfaces on SRX5000 Line Devices for IOC2 and IOC3
- Understanding Session RTOs
- Understanding Data Forwarding
- Understanding Fabric Data Link Failure and Recovery
Supported Fabric Interface Types for SRX Series Firewalls (SRX300 Series, SRX1500, SRX1600, SRX2300, SRX4100/SRX4200, SRX4300, SRX4600, and SRX5000 line)
For SRX Series chassis clusters, the fabric link can be any pair of Ethernet interfaces spanning the cluster. Examples:
- For SRX300, SRX320, SRX340, and SRX345 devices, the fabric link can be any pair of Gigabit Ethernet interfaces. For SRX380 devices, the fabric link can be any pair of Gigabit Ethernet interfaces or any pair of 10-Gigabit Ethernet interfaces.
- For SRX1500 and SRX1600 devices, the fabric link can be any pair of Gigabit Ethernet interfaces or any pair of 10-Gigabit Ethernet interfaces. The supported interface types for SRX1600 are:
  - 1-Gigabit Ethernet (ge)
  - 10-Gigabit Ethernet (xe) (10-Gigabit Ethernet Interface SFP+ slots)
  - 25-Gigabit Ethernet (et) (25-Gigabit Ethernet Interface SFP28)
- Supported fabric interface types for SRX4100 and SRX4200 devices are 10-Gigabit Ethernet (xe) (10-Gigabit Ethernet Interface SFP+ slots).
- For SRX2300 and SRX4300 devices, the fabric link can be any pair of Gigabit Ethernet interfaces. The supported fabric interface types are:
  - 10-Gigabit Ethernet (mge)
  - 10-Gigabit Ethernet (xe) (10-Gigabit Ethernet Interface SFP+ slots)
  - 25-Gigabit Ethernet (et) (25-Gigabit Ethernet Interface SFP28)
  - 100-Gigabit Ethernet (et) (100-Gigabit Ethernet Interface QSFP28 slots)
- Supported fabric interface types for SRX4600 devices are 40-Gigabit Ethernet (et) (40-Gigabit Ethernet Interface QSFP slots) and 10-Gigabit Ethernet (xe).
- Supported fabric interface types for SRX5000 line devices are:
  - Fast Ethernet
  - Gigabit Ethernet
  - 10-Gigabit Ethernet
  - 40-Gigabit Ethernet
  - 100-Gigabit Ethernet
Starting in Junos OS Release 12.1X46-D10 and Junos OS Release 17.3R1, the 100-Gigabit Ethernet interface is supported on SRX5000 line devices.
Starting in Junos OS Release 19.3R1, the SRX5K-IOC4-10G and SRX5K-IOC4-MRAT are supported along with the SRX5K-SPC3 on SRX5000 line devices. The SRX5K-IOC4-10G MPIC supports MACsec.
For details about port and interface usage for management, control, and fabric links, see Understanding SRX Series Chassis Cluster Slot Numbering and Physical Port and Logical Interface Naming.
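The fabric configuration syntax shown later in this topic applies to any supported interface type. For example, a fabric built from 10-Gigabit Ethernet ports might look like the following minimal sketch; the xe- interface names are illustrative and depend on your platform's port numbering:
{primary:node0}[edit]
user@host# set interfaces fab0 fabric-options member-interfaces xe-0/0/16
user@host# set interfaces fab1 fabric-options member-interfaces xe-7/0/16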
Jumbo Frame Support
The fabric data link does not support fragmentation. To accommodate this, jumbo frame support is enabled by default on the link with a maximum transmission unit (MTU) size of 9014 bytes (9000 bytes of payload + 14 bytes for the Ethernet header) on SRX Series Firewalls. To ensure that traffic transiting the data link does not exceed this size, we recommend that no other interface exceed the fabric data link's MTU size.
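You can confirm the MTU in effect on the fabric interface from operational mode. This is a minimal check, assuming the fabric interface is named fab0 as in the configuration example later in this topic:
{primary:node0}
user@host> show interfaces fab0 | match MTU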
Understanding Fabric Interfaces on SRX5000 Line Devices for IOC2 and IOC3
Starting with Junos OS Release 15.1X49-D10, the SRX5K-MPC3-100G10G (IOC3) and the SRX5K-MPC3-40G10G (IOC3) are introduced.
The SRX5K-MPC (IOC2) is a Modular Port Concentrator (MPC) that is supported on the SRX5400, SRX5600, and SRX5800. This interface card accepts Modular Interface Cards (MICs), which add Ethernet ports to your services gateway to provide the physical connections to various network media types. The MPCs and MICs support fabric links for chassis clusters. The SRX5K-MPC provides 10-Gigabit Ethernet (with 10x10GE MIC), 40-Gigabit Ethernet, 100-Gigabit Ethernet, and 20x1GE Ethernet ports as fabric ports. On SRX5400 devices, only SRX5K-MPCs (IOC2) are supported.
The SRX5K-MPC3-100G10G (IOC3) and the SRX5K-MPC3-40G10G (IOC3) are Modular Port Concentrators (MPCs) that are supported on the SRX5400, SRX5600, and SRX5800. These interface cards accept Modular Interface Cards (MICs), which add Ethernet ports to your services gateway to provide the physical connections to various network media types. The MPCs and MICs support fabric links for chassis clusters.
The two types of IOC3 Modular Port Concentrators (MPCs), which have different built-in MICs, are the 24x10GE + 6x40GE MPC and the 2x100GE + 4x10GE MPC.
Due to power and thermal constraints, not all four PICs on the 24x10GE + 6x40GE MPC can be powered on. A maximum of two PICs can be powered on at the same time.
Use the set chassis fpc <slot> pic <pic> power off command to choose the PICs you want to power on.
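For example, to keep one PIC powered off so that another PIC can remain powered on, you might use a configuration statement such as the following; the FPC slot and PIC numbers are purely illustrative:
{primary:node0}[edit]
user@host# set chassis fpc 3 pic 2 power off
user@host# commit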
On SRX5400, SRX5600, and SRX5800 devices in a chassis cluster, when the PICs containing fabric links on the SRX5K-MPC3-40G10G (IOC3) are powered off to turn on alternate PICs, always ensure that:
- The new fabric links are configured on the new PICs that are turned on (see the sketch after this list). At least one fabric link must be present and online to ensure minimal RTO loss.
- The chassis cluster is in active-passive mode to ensure minimal RTO loss once the alternate links are brought online.
- If no alternate fabric links are configured on the PICs that are turned on, RTO synchronous communication between the two nodes stops and the chassis cluster session state is not backed up, because the fabric link is missing. You can view the CLI output indicating this bad chassis cluster state by using the show chassis cluster interfaces command.
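The following minimal sketch shows one way to move the fabric child links to ports on a newly powered-on PIC; the xe- interface names and PIC numbers are hypothetical and depend on your slot and port assignments:
{primary:node0}[edit]
user@host# delete interfaces fab0 fabric-options member-interfaces xe-2/0/0
user@host# set interfaces fab0 fabric-options member-interfaces xe-2/2/0
user@host# delete interfaces fab1 fabric-options member-interfaces xe-14/0/0
user@host# set interfaces fab1 fabric-options member-interfaces xe-14/2/0
user@host# commit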
Understanding Session RTOs
The data plane software, which operates in active/active mode, manages flow processing and session state redundancy and processes transit traffic. All packets belonging to a particular session are processed on the same node to ensure that the same security treatment is applied to them. The system identifies the node on which a session is active and forwards its packets to that node for processing. (After a packet is processed, the Packet Forwarding Engine transmits the packet to the node on which its egress interface exists if that node is not the local one.)
To provide for session (or flow) redundancy, the data plane software synchronizes its state by sending special payload packets called runtime objects (RTOs) from one node to the other across the fabric data link. By transmitting information about a session between the nodes, RTOs ensure the consistency and stability of sessions if a failover were to occur, and thus they enable the system to continue to process traffic belonging to existing sessions. To ensure that session information is always synchronized between the two nodes, the data plane software gives RTOs transmission priority over transit traffic.
The data plane software creates RTOs for UDP and TCP sessions and tracks state changes. It also synchronizes state for IPv4 pass-through protocols such as Generic Routing Encapsulation (GRE) and IPsec.
RTOs for synchronizing a session include:
- Session creation RTOs on the first packet
- Session deletion and age-out RTOs
- Change-related RTOs, including:
  - TCP state changes
  - Timeout synchronization request and response messages
  - RTOs for creating and deleting temporary openings in the firewall (pinholes) and child session pinholes
Understanding Data Forwarding
For Junos OS, flow processing occurs on a single node on which the session for that flow was established and is active. This approach ensures that the same security measures are applied to all packets belonging to a session.
A chassis cluster can receive traffic on an interface on one node and send it out to an interface on the other node. (In active/active mode, the ingress interface for traffic might exist on one node and its egress interface on the other.)
This traversal is required in the following situations:
- When packets are processed on one node, but need to be forwarded out an egress interface on the other node
- When packets arrive on an interface on one node, but must be processed on the other node
If the ingress and egress interfaces for a packet are on one node, but the packet must be processed on the other node because its session was established there, it must traverse the data link twice. This can be the case for some complex media sessions, such as voice-over-IP (VoIP) sessions.
Understanding Fabric Data Link Failure and Recovery
Intrusion Detection and Prevention (IDP) services do not support failover. For this reason, IDP services are not applied for sessions that were present prior to the failover. IDP services are applied for new sessions created on the new primary node.
The fabric data link is vital to the chassis cluster. If the link is unavailable, traffic forwarding and RTO synchronization are affected, which can result in loss of traffic and unpredictable system behavior.
To eliminate this possibility, Junos OS uses fabric monitoring to check whether the fabric link, or both fabric links in the case of a dual fabric link configuration, are alive by periodically transmitting probes over the fabric links. Junos OS determines that a fabric fault has occurred if a fabric probe is not received but the fabric interface is active. If Junos OS detects a fabric fault, the RG1+ status of the secondary node changes to ineligible. To recover from this state, both fabric links must come back online and start exchanging probes. As soon as this happens, all the FPCs on the previously ineligible node are reset; they then come online and rejoin the cluster.
If you make any changes to the configuration while the secondary node is disabled, execute the commit command to synchronize the configuration after you reboot the node. If you did not make configuration changes, the configuration file remains synchronized with that of the primary node.
Starting with Junos OS Release 12.1X47-D10 and Junos OS Release 17.3R1, the fabric monitoring feature is enabled by default on SRX5800, SRX5600, and SRX5400 devices.
Starting with Junos OS Release 12.1X47-D10 and Junos OS Release 17.3R1, recovery of the fabric link and synchronization take place automatically.
When both the primary and secondary nodes are healthy (that is, there are no failures) and the fabric link goes down, RG1+ redundancy group(s) on the secondary node becomes ineligible. When one of the nodes is unhealthy (that is, there is a failure), RG1+ redundancy group(s) on this node (either the primary or secondary node) becomes ineligible. When both nodes are unhealthy and the fabric link goes down, RG1+ redundancy group(s) on the secondary node becomes ineligible. When the fabric link comes up, the node on which RG1+ became ineligible performs a cold synchronization on all Services Processing Units and transitions to active standby.
- If RG0 is primary on an unhealthy node, then RG0 will fail over from the unhealthy node to a healthy node. For example, if node 0 is primary for RG0+ and node 0 becomes unhealthy, then RG1+ on node 0 will transition to ineligible after 66 seconds of a fabric link failure and RG0+ fails over to node 1, which is the healthy node.
- Only RG1+ transitions to an ineligible state. RG0 continues to be in either a primary or secondary state.
Use the show chassis cluster interfaces CLI command to verify the status of the fabric link.
Example: Configuring the Chassis Cluster Fabric Interfaces
This example shows how to configure the chassis cluster fabric. The fabric is the back-to-back data connection between the nodes in a cluster. Traffic on one node that needs to be processed on the other node or to exit through an interface on the other node passes over the fabric. Session state information also passes over the fabric.
Requirements
Before you begin, set the chassis cluster ID and chassis cluster node ID. See Example: Setting the Node ID and Cluster ID for Security Devices in a Chassis Cluster.
Overview
In most SRX Series Firewalls in a chassis cluster, you can configure any pair of Gigabit Ethernet interfaces or any pair of 10-Gigabit Ethernet interfaces to serve as the fabric between nodes.
You cannot configure filters, policies, or services on the fabric interface. Fragmentation is not supported on the fabric link. The maximum MTU size for fabric interfaces is 9014 bytes and the maximum MTU size for other interfaces is 8900 bytes. Jumbo frame support on the member links is enabled by default.
This example illustrates how to configure the fabric link.
Only the same type of interfaces can be configured as fabric children, and you must configure an equal number of child links for fab0 and fab1.
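For example, a dual fabric link configuration adds a second child link to each fabric interface. The following is a minimal sketch; the interface names are illustrative and both children of a given fabric interface must be the same interface type:
{primary:node0}[edit]
user@host# set interfaces fab0 fabric-options member-interfaces ge-0/0/1
user@host# set interfaces fab0 fabric-options member-interfaces ge-0/0/2
user@host# set interfaces fab1 fabric-options member-interfaces ge-7/0/1
user@host# set interfaces fab1 fabric-options member-interfaces ge-7/0/2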
If you are connecting each of the fabric links through a switch, you must enable the jumbo frame feature on the corresponding switch ports. If both of the fabric links are connected through the same switch, the RTO-and-probes pair must be in one virtual LAN (VLAN) and the data pair must be in another VLAN. Here too, the jumbo frame feature must be enabled on the corresponding switch ports.
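If the fabric links pass through a Juniper switch, the switch-side configuration might resemble the following minimal sketch for an EX Series switch, where ge-0/0/10 and ge-0/0/11 carry the RTO-and-probes pair and ge-0/0/12 and ge-0/0/13 carry the data pair. The port names, VLAN names, VLAN IDs, and MTU value are illustrative assumptions and are not part of this example:
set vlans fab-rto-vlan vlan-id 100
set vlans fab-data-vlan vlan-id 101
set interfaces ge-0/0/10 mtu 9216
set interfaces ge-0/0/10 unit 0 family ethernet-switching vlan members fab-rto-vlan
set interfaces ge-0/0/11 mtu 9216
set interfaces ge-0/0/11 unit 0 family ethernet-switching vlan members fab-rto-vlan
set interfaces ge-0/0/12 mtu 9216
set interfaces ge-0/0/12 unit 0 family ethernet-switching vlan members fab-data-vlan
set interfaces ge-0/0/13 mtu 9216
set interfaces ge-0/0/13 unit 0 family ethernet-switching vlan members fab-data-vlan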
Configuration
Procedure
CLI Quick Configuration
To quickly configure this example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
{primary:node0}[edit]
set interfaces fab0 fabric-options member-interfaces ge-0/0/1
set interfaces fab1 fabric-options member-interfaces ge-7/0/1
Step-by-Step Procedure
To configure the chassis cluster fabric:
Specify the fabric interfaces.
{primary:node0}[edit]
user@host# set interfaces fab0 fabric-options member-interfaces ge-0/0/1

{primary:node0}[edit]
user@host# set interfaces fab1 fabric-options member-interfaces ge-7/0/1
Results
From configuration mode, confirm your configuration by entering the show interfaces command. If the output does not display the intended configuration, repeat the configuration instructions in this example to correct it.
For brevity, this show command output includes only the configuration that is relevant to this example. Any other configuration on the system has been replaced with ellipses (...).
{primary:node0}[edit]
user@host# show interfaces
...
fab0 {
    fabric-options {
        member-interfaces {
            ge-0/0/1;
        }
    }
}
fab1 {
    fabric-options {
        member-interfaces {
            ge-7/0/1;
        }
    }
}
If you are done configuring the device, enter commit from configuration mode.
Verification
Verifying the Chassis Cluster Fabric
Purpose
Verify the chassis cluster fabric.
Action
From operational mode, enter the show interfaces terse | match fab command.
{primary:node0}
user@host> show interfaces terse | match fab
ge-0/0/1.0          up    up   aenet    --> fab0.0
ge-7/0/1.0          up    up   aenet    --> fab1.0
fab0                up    up
fab0.0              up    up   inet     30.17.0.200/24
fab1                up    up
fab1.0              up    up   inet     30.18.0.200/24
Verifying Chassis Cluster Data Plane Interfaces
Purpose
Display chassis cluster data plane interface status.
Action
From the CLI, enter the show chassis cluster data-plane interfaces command:
{primary:node1}
user@host> show chassis cluster data-plane interfaces
fab0:
Name Status
ge-2/1/9 up
ge-2/2/5 up
fab1:
Name Status
ge-8/1/9 up
ge-8/2/5 up
Viewing Chassis Cluster Data Plane Statistics
Purpose
Display chassis cluster data plane statistics.
Action
From the CLI, enter the show chassis cluster data-plane statistics command:
{primary:node1}
user@host> show chassis cluster data-plane statistics
Services Synchronized:
Service name RTOs sent RTOs received
Translation context 0 0
Incoming NAT 0 0
Resource manager 0 0
Session create 0 0
Session close 0 0
Session change 0 0
Gate create 0 0
Session ageout refresh requests 0 0
Session ageout refresh replies 0 0
IPSec VPN 0 0
Firewall user authentication 0 0
MGCP ALG 0 0
H323 ALG 0 0
SIP ALG 0 0
SCCP ALG 0 0
PPTP ALG 0 0
RTSP ALG 0 0
Clearing Chassis Cluster Data Plane Statistics
To clear displayed chassis cluster data plane statistics, enter the clear chassis cluster data-plane statistics command from the CLI:
{primary:node1}
user@host> clear chassis cluster data-plane statistics
Cleared data-plane statistics