Chassis Cluster Dual Control Links
Dual control links provide a redundant link for controlling network traffic.
Chassis Cluster Dual Control Links Overview
A control link connects the two SRX Series Firewalls in a chassis cluster and carries chassis cluster control data, including heartbeats and configuration synchronization, between them. A single control link is a single point of failure: if the control link goes down, the secondary node is disabled from the cluster.
Dual control links prevent downtime due to a single point of failure. Two control link interfaces connect each device in a cluster. Dual control links provide a redundant link for controlling traffic. Unlike dual fabric links, only one control link is used at any one time.
The SRX1600, SRX2300, SRX4300, SRX4600, SRX5600, and SRX5800 Services Gateways support dual control links.
We do not support dual control link functionality on these Services Gateways: SRX4100, SRX4200, or SRX5400.
Starting with Junos OS Release 20.4R1, you can enable or disable the control links on SRX1500 Services Gateways using operational mode and configuration mode CLI commands, described below. This CLI feature enables you to control the status of cluster nodes during a cluster upgrade.
Previously, if you wanted to disable the control link and fabric link, you had to unplug the cables manually.
The CLI commands work as follows:

In configuration mode:

- To disable the control link, run the set chassis cluster control-interface <node0/node1> disable command on node 0 or node 1. If you disable the links using the configuration command, the links remain disabled even after a system reboot.
- To enable the control link, run the delete chassis cluster control-interface <node0/node1> disable command on both nodes.

In operational mode:

- To disable the control link from the local node, run the request chassis cluster control-interface <node0/node1> disable command. If you disable the control link using the operational mode command, the link is enabled again after a system reboot.
- To enable the control link on the local node, run the request chassis cluster control-interface <node0/node1> enable command.
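As a sketch of the configuration-mode workflow described above (the hostname and the choice of node 1 are illustrative), disabling and later re-enabling the control link might look like this:

```
{primary:node0}[edit]
user@host# set chassis cluster control-interface node1 disable
user@host# commit

(perform the maintenance or upgrade)

{primary:node0}[edit]
user@host# delete chassis cluster control-interface node1 disable
user@host# commit
```

Because the disable statement is part of the committed configuration, it survives reboots until you delete it and commit again.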
Benefit of Dual Control Links
Dual control links prevent the possibility of a single point of failure by providing a redundant link for control traffic.
Dual Control Link Functionality Requirements
For the SRX5600 and SRX5800 Services Gateways, dual control link functionality requires that a second Routing Engine and a second Switch Control Board (SCB) be installed on each device in the cluster. The purpose of the second Routing Engine is to initialize the switch on the primary SCB. The second SCB houses the second Routing Engine.
For the SRX5000 Services Gateways only, the second Routing Engine must be running Junos OS Release 10.0 or later.
This second Routing Engine does not provide backup functionality. It does not need to be upgraded, even when you upgrade the software on the primary Routing Engine on the same node. Note the following conditions:
- You can run CLI commands and enter configuration mode only on the primary Routing Engine.
- You set the chassis ID and cluster ID only on the primary Routing Engine.
- If you want to check that the second Routing Engine boots up, or if you want to upgrade its software image, you need a console connection to the second Routing Engine.
As long as the first Routing Engine is installed (even if it reboots or fails), the second Routing Engine cannot take over the chassis primary role; that is, it cannot control any of the hardware on the chassis.
A redundancy group 0 failover implies a Routing Engine failover. In the case of a Routing Engine failover, all processes running on the primary node are killed and then spawned on the new primary Routing Engine. This failover could result in loss of state, such as routing state, and degrade performance by introducing system churn.
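As an illustration (the node number is an example), a redundancy group 0 failover can also be triggered manually from operational mode, and the state loss and churn described above apply equally to a manual failover:

```
{primary:node0}
user@host> request chassis cluster failover redundancy-group 0 node 1

(after the new primary is confirmed healthy, clear the manual failover flag)

user@host> request chassis cluster failover reset redundancy-group 0
```

The first command makes node 1 the primary for redundancy group 0; the reset command clears the manual failover state so automatic failover can occur again.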
For SRX3000 Services Gateways, dual control link functionality requires that an SRX Clustering Module (SCM) be installed on each device in the cluster. Although the SCM fits in the Routing Engine slot, it is not a Routing Engine. The SRX3000 devices do not support a second Routing Engine. The purpose of the SCM is only to initialize the second control link.
Dual Control Link Connections for SRX Series Firewalls in a Chassis Cluster
You can connect two control links between SRX5600 devices and SRX5800 devices, effectively reducing the chance of control link failure.
Junos OS does not support dual control links on SRX5400 devices, due to the limited number of slots.
For SRX5600 devices and SRX5800 devices, connect two pairs of the same type of Ethernet ports. For each device, you can use ports on the same Services Processing Card (SPC), but we recommend that you connect the control ports to two different SPCs to provide high availability. Figure 1 shows a pair of SRX5800 devices with dual control links connected. In this example, control port 0 and control port 1 are connected on different SPCs.
For SRX5600 devices and SRX5800 devices, you must connect control port 0 on one node to control port 0 on the other node. You must also connect control port 1 on one node to control port 1 on the other node. If you connect control port 0 to control port 1, the nodes cannot receive heartbeat packets across the control links.
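After both pairs of control ports are cabled, one way to confirm that heartbeat packets are flowing on each control link is the following operational mode command (output abbreviated; the counter values are illustrative):

```
{primary:node0}
user@host> show chassis cluster control-plane statistics
Control link statistics:
    Control link 0:
        Heartbeat packets sent: 1114
        Heartbeat packets received: 1101
    Control link 1:
        Heartbeat packets sent: 1114
        Heartbeat packets received: 1100
```

If one link shows sent heartbeats but no received heartbeats, recheck that port 0 is cabled to port 0 and port 1 to port 1, as described above.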
Upgrade the Second Routing Engine When Using Chassis Cluster Dual Control Links on SRX5600 and SRX5800 Devices
You must use a second Routing Engine for each SRX5600 device and SRX5800 device in a cluster if you are using dual control links. The second Routing Engine does not provide backup functionality; its purpose is only to initialize the switch on the Switch Control Board (SCB). The second Routing Engine must be running Junos OS Release 12.1X47-D35, 12.3X48-D30, 15.1X49-D40, or later. For more information, see knowledge base article KB30371.
On SRX5600 devices and SRX5800 devices, you can use the show chassis hardware command to see the serial number and the hardware version details of the second Routing Engine. To use this functionality, ensure that the second Routing Engine is running either Junos OS Release 15.1X49-D70 or Junos OS Release 17.3R1.
Junos OS does not support dual control link functionality on the SRX5400 Services Gateways, due to limited slots.
Because the CLI is available only on the primary Routing Engine, use the primary Routing Engine to create a bootable USB storage device, which you can then use to install a software image on the second Routing Engine.
To upgrade the software image on the second Routing Engine:
Example: Configure Chassis Cluster Control Ports for Dual Control Links
This example shows how to configure chassis cluster control ports for use as dual control links on SRX5600 devices and SRX5800 devices. You need to configure the control ports that you will use on each device to set up the control links.
Junos OS does not support dual control links on SRX5400 devices, due to the limited number of slots.
Requirements
Before you begin:
Understand chassis cluster control links. See Understanding Chassis Cluster Control Plane and Control Links.
Physically connect the control ports on the devices. See Connecting SRX Series Devices to Create a Chassis Cluster.
Overview
By default, all control ports on SRX5600 devices and SRX5800 devices are disabled. After you connect the control ports, configure them, and establish the chassis cluster, the control links come up.
This example configures control ports with the following FPCs and ports as the dual control links:
FPC 4, port 0
FPC 10, port 0
FPC 6, port 1
FPC 12, port 1
Configuration
Procedure
CLI Quick Configuration
To quickly configure this section of the example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
{primary:node0}[edit]
set chassis cluster control-ports fpc 4 port 0
set chassis cluster control-ports fpc 10 port 0
set chassis cluster control-ports fpc 6 port 1
set chassis cluster control-ports fpc 12 port 1
Step-by-Step Procedure
To configure control ports for use as dual control links for the chassis cluster:
Specify the control ports.
{primary:node0}[edit]
user@host# set chassis cluster control-ports fpc 4 port 0

{primary:node0}[edit]
user@host# set chassis cluster control-ports fpc 10 port 0

{primary:node0}[edit]
user@host# set chassis cluster control-ports fpc 6 port 1

{primary:node0}[edit]
user@host# set chassis cluster control-ports fpc 12 port 1
Results
In configuration mode, confirm your configuration by entering the show chassis cluster command. If the output does not display the intended configuration, repeat the configuration instructions in this example to correct it.
For brevity, this show command output includes only the configuration that is relevant to this example. Any other configuration on the system has been replaced with ellipses (...).
{primary:node0}[edit]
user@host# show chassis cluster
...
control-ports {
    fpc 4 port 0;
    fpc 6 port 1;
    fpc 10 port 0;
    fpc 12 port 1;
}
...
If you are finished configuring the device, enter commit from configuration mode.
Verification
Verification of the Chassis Cluster Status
Purpose
Verify the chassis cluster status.
Action
In operational mode, enter the show chassis cluster status command.
{primary:node0}
user@host> show chassis cluster status
Cluster ID: 1
Node            Priority        Status    Preempt  Manual failover

Redundancy group: 0 , Failover count: 1
    node0       100             primary   no       no
    node1       1               secondary no       no

Redundancy group: 1 , Failover count: 1
    node0       0               primary   no       no
    node1       0               secondary no       no
Meaning
Use the show chassis cluster status command to confirm that the devices in the chassis cluster are communicating with each other. The output shows that the chassis cluster is functioning properly, as one device is the primary node and the other is the secondary node.
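To check the dual control links themselves rather than the redundancy groups, you can also run the show chassis cluster interfaces command in operational mode. The sketch below is abbreviated and illustrative; the control interface names vary by platform:

```
{primary:node0}
user@host> show chassis cluster interfaces
Control link status: Up

Control interfaces:
    Index   Interface   Status
    0       em0         Up
    1       em1         Up
...
```

Both control interfaces should report Up; if one is down, the cluster continues to operate over the remaining control link.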