Chassis Cluster Dual Control Links
Dual control links provide a redundant link for controlling network traffic.
Use Feature Explorer to confirm platform and release support for specific features.
Review the Platform-Specific Dual Control Links Behavior section for notes related to your platform.
Chassis Cluster Dual Control Links Overview
A control link connects the two SRX Series Firewalls in a chassis cluster and carries chassis cluster control data, including heartbeats and configuration synchronization, between them. A single control link is a single point of failure: if the control link goes down, the secondary node is disabled and removed from the cluster.
Dual control links eliminate this single point of failure. Two control link interfaces connect each device in the cluster, providing a redundant link for control traffic. Unlike dual fabric links, only one control link is used at any one time.
Previously, to disable the control link or the fabric link, you had to unplug the cables manually. Now you can use CLI commands instead.
The CLI commands work as follows:
In configuration mode:
To disable the control link, run the set chassis cluster control-interface <node0/node1> disable command on node 0 or node 1. If you disable the link using this configuration command, the link remains disabled even after a system reboot.
To enable the control link, run the delete chassis cluster control-interface <node0/node1> disable command on both nodes.
In operational mode:
To disable the control link from the local node, run the request chassis cluster control-interface <node0/node1> disable command. If you disable the control link using this operational mode command, the link is re-enabled after a system reboot.
To enable the control link on the local node, run the request chassis cluster control-interface <node0/node1> enable command.
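For example, a minimal sketch of temporarily disabling and then re-enabling the node 0 control link from operational mode (the prompt and node choice are illustrative):
{primary:node0}
user@host> request chassis cluster control-interface node0 disable
user@host> request chassis cluster control-interface node0 enable
Because these are operational mode commands, a system reboot would also re-enable the link; use the configuration mode variant if the link must stay disabled across reboots.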
Benefit of Dual Control Links
Dual control links prevent the possibility of a single point of failure by providing a redundant link for control traffic.
Dual Control Link Functionality Requirements
For the SRX5600 and SRX5800 Services Gateways, dual control link functionality requires that a second Routing Engine and a second Switch Control Board (SCB) be installed on each device in the cluster. The purpose of the second Routing Engine is to initialize the switch on the primary SCB. The second SCB houses the second Routing Engine.
This second Routing Engine does not provide backup functionality. It does not need to be upgraded, even when you upgrade the software on the primary Routing Engine on the same node. Note the following conditions:
You can run CLI commands and enter configuration mode only on the primary Routing Engine.
You set the chassis ID and cluster ID only on the primary Routing Engine.
If you want to be able to check that the second Routing Engine boots up, or if you want to upgrade a software image, you need a console connection to the second Routing Engine.
As long as the first Routing Engine is installed (even if it reboots or fails), the second Routing Engine cannot take over the chassis primary role; that is, it cannot control any of the hardware on the chassis.
A redundancy group 0 failover implies a Routing Engine failover. In the case of a Routing Engine failover, all processes running on the primary node are killed and then spawned on the new primary Routing Engine. This failover could result in loss of state, such as routing state, and degrade performance by introducing system churn.
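As an illustration of a redundancy group 0 failover, you can trigger one manually from operational mode and then clear the manual failover flag (a sketch using the standard Junos OS failover commands; run it only in a maintenance window, since RG0 failover restarts processes on the new primary):
{primary:node0}
user@host> request chassis cluster failover redundancy-group 0 node 1
user@host> request chassis cluster failover reset redundancy-group 0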
Dual Control Link Connections for SRX Series Firewalls in a Chassis Cluster
You can connect two control links between SRX5600 devices and SRX5800 devices, effectively reducing the chance of control link failure.
Junos OS does not support dual control links on SRX5400 devices, due to the limited number of slots.
For SRX5600 devices and SRX5800 devices, connect two pairs of the same type of Ethernet ports. For each device, you can use ports on the same Services Processing Card (SPC), but we recommend that you connect the control ports to two different SPCs to provide high availability. Figure 1 shows a pair of SRX5800 devices with dual control links connected. In this example, control port 0 and control port 1 are connected on different SPCs.

For SRX5600 devices and SRX5800 devices, you must connect control port 0 on one node to control port 0 on the other node. You must also connect control port 1 on one node to control port 1 on the other node. If you connect control port 0 to control port 1, the nodes cannot receive heartbeat packets across the control links.
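After cabling the control ports, you can confirm that heartbeat packets are actually being exchanged across the control links by checking the control plane statistics (assumed standard operational command; exact counter names vary by release):
{primary:node0}
user@host> show chassis cluster control-plane statistics
Nonzero and increasing heartbeat sent/received counters indicate that the control links are passing traffic; if you have miswired port 0 to port 1, the received counters stay at zero.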
Upgrade the Second Routing Engine When Using Chassis Cluster Dual Control Links on SRX5600 and SRX5800 Devices
You cannot upgrade the software image on the second Routing Engine through the normal Junos OS upgrade procedure. Instead, use the primary Routing Engine to create a bootable USB storage device, which you can then use to install a software image on the second Routing Engine.
To upgrade the software image on the second Routing Engine:
Example: Configure Chassis Cluster Control Ports for Dual Control Links
This example shows how to configure chassis cluster control ports for use as dual control links on SRX5600 devices and SRX5800 devices. You need to configure the control ports that you will use on each device to set up the control links.
Junos OS does not support dual control links on SRX5400 devices, due to the limited number of slots.
Requirements
Before you begin:
Understand chassis cluster control links. See Understanding Chassis Cluster Control Plane and Control Links.
Physically connect the control ports on the devices. See Connecting SRX Series Devices to Create a Chassis Cluster.
Overview
By default, all control ports on SRX5600 devices and SRX5800 devices are disabled. The control links come up after you connect the control ports, configure them, and establish the chassis cluster.
This example configures control ports with the following FPCs and ports as the dual control links:
FPC 4, port 0
FPC 10, port 0
FPC 6, port 1
FPC 12, port 1
Configuration
Procedure
CLI Quick Configuration
To quickly configure this section of the example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
{primary:node0}[edit]
set chassis cluster control-ports fpc 4 port 0
set chassis cluster control-ports fpc 10 port 0
set chassis cluster control-ports fpc 6 port 1
set chassis cluster control-ports fpc 12 port 1
Step-by-Step Procedure
To configure control ports for use as dual control links for the chassis cluster:
Specify the control ports.
{primary:node0}[edit]
user@host# set chassis cluster control-ports fpc 4 port 0
{primary:node0}[edit]
user@host# set chassis cluster control-ports fpc 10 port 0
{primary:node0}[edit]
user@host# set chassis cluster control-ports fpc 6 port 1
{primary:node0}[edit]
user@host# set chassis cluster control-ports fpc 12 port 1
Results
In configuration mode, confirm your configuration by entering the show chassis cluster command. If the output does not display the intended configuration, repeat the configuration instructions in this example to correct it.
For brevity, this show command output includes only the configuration that is relevant to this example. Any other configuration on the system has been replaced with ellipses (...).
{primary:node0}[edit]
user@host# show chassis cluster
...
control-ports {
    fpc 4 port 0;
    fpc 6 port 1;
    fpc 10 port 0;
    fpc 12 port 1;
}
...
If you are finished configuring the device, enter commit from configuration mode.
Verification
Verification of the Chassis Cluster Status
Purpose
Verify the chassis cluster status.
Action
In operational mode, enter the show chassis cluster status command.
{primary:node0}
user@host> show chassis cluster status
Cluster ID: 1
Node        Priority   Status      Preempt   Manual failover

Redundancy group: 0 , Failover count: 1
    node0   100        primary     no        no
    node1   1          secondary   no        no

Redundancy group: 1 , Failover count: 1
    node0   0          primary     no        no
    node1   0          secondary   no        no
Meaning
Use the show chassis cluster status command to confirm that the devices in the chassis cluster are communicating with each other. The output shows that the chassis cluster is functioning properly, as one device is the primary node and the other is the secondary node.
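As a further check, you can inspect the state of the individual control links with the show chassis cluster interfaces operational command (a standard Junos OS command; the exact output fields vary by release):
{primary:node0}
user@host> show chassis cluster interfaces
The output lists each control link by index along with its status, so with dual control links you can confirm that both link 0 and link 1 are up.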
Platform-Specific Dual Control Links Behavior
Use Feature Explorer to confirm platform and release support for specific features.
Use the following table to review platform-specific behaviors for your platform.
| Platform | Difference |
| --- | --- |
| SRX Series | |