Chassis Cluster Control Plane Interfaces
You can use control plane interfaces to synchronize the kernel state between Routing Engines on SRX Series Firewalls in a chassis cluster. Control plane interfaces provide the link between the two nodes in the cluster.
Control planes use this link to:

- Communicate node discovery.
- Maintain session state for a cluster.
- Access the configuration file.
- Detect liveliness signals across the nodes.
Chassis Cluster Control Plane and Control Links
The control plane software, which operates in active or backup mode, is an integral part of Junos OS that is active on the primary node of a cluster. It achieves redundancy by communicating state, configuration, and other information to the inactive Routing Engine on the secondary node. If the primary Routing Engine fails, the secondary Routing Engine is ready to assume control.
The control plane software:

- Runs on the Routing Engine.
- Oversees the entire chassis cluster system, including interfaces on both nodes.
- Manages system and data plane resources, including the Packet Forwarding Engine (PFE) on each node.
- Synchronizes the configuration over the control link.
- Establishes and maintains sessions, including authentication, authorization, and accounting (AAA) functions.
- Manages application-specific signaling protocols.
- Establishes and maintains management sessions, such as Telnet connections.
- Handles asymmetric routing.
- Manages routing state, Address Resolution Protocol (ARP) processing, and Dynamic Host Configuration Protocol (DHCP) processing.
Information from the control plane software follows two paths:

- On the primary node (where the Routing Engine is active), control information flows from the Routing Engine to the local Packet Forwarding Engine.
- Control information flows across the control link to the secondary node's Routing Engine and Packet Forwarding Engine.
The control plane software running on the primary Routing Engine maintains state for the entire cluster. Only those processes running on the same node as the control plane software can update state information. The primary Routing Engine synchronizes state for the secondary node and also processes all host traffic.
Chassis Cluster Control Links
The control interfaces provide the control link between the two nodes in the cluster and are used for routing updates and for control plane signal traffic, such as heartbeat and threshold information that triggers node failover. The control link also synchronizes the configuration between the nodes. When you submit configuration statements to the cluster, the control link synchronizes the configuration automatically.
The control link relies on a proprietary protocol to transmit session state, configuration, and liveliness state across the nodes.
Starting in Junos OS Release 19.3R1, the SRX5K-RE3-128G Routing Engine is supported along with the SRX5K-SPC3 card on SRX5000 line devices. The control interfaces ixlv0 and igb0 are used to configure the SRX5K-RE3-128G. The control link carries communication between the control planes of the two nodes, including heartbeat messages.
Single Control Link in a Chassis Cluster
For a single control link in a chassis cluster, you must use the same control port for the control link connection and for configuration on both nodes.
For example, if you configure port 0 as a control port on node 0, you must configure port 0 as a control port on node 1. You must connect the ports with a cable.
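As an illustrative sketch of this rule on an SRX5000 line device (the FPC number here is an assumption; use the values that match your hardware), the same control port is configured on both nodes and the two ports are cabled together:

```
{primary:node0}[edit]
user@host# set chassis cluster control-ports fpc 0 port 0    # illustrative FPC number

{primary:node1}[edit]
user@host# set chassis cluster control-ports fpc 0 port 0    # same port on the peer node
```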
Dual Control Link in a Chassis Cluster
You must connect dual control links in a chassis cluster directly. Cross connections—that is, connecting port 0 on one node to port 1 on the other node and vice versa—do not work.
For dual control links, you must make these connections:
-
Connect control port 0 on node 0 to control port 0 on node 1.
-
Connect control port 1 on node 0 to control port 1 on node 1.
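The cabling rules above can be sketched as configuration on an SRX5000 line device; the FPC numbers 4 and 10 are illustrative assumptions (they follow the example later in this topic), and each node configures both control ports:

```
{primary:node0}[edit]
user@host# set chassis cluster control-ports fpc 4 port 0    # link 0: node0 port 0 <-> node1 port 0
user@host# set chassis cluster control-ports fpc 10 port 1   # link 1: node0 port 1 <-> node1 port 1

{primary:node1}[edit]
user@host# set chassis cluster control-ports fpc 4 port 0
user@host# set chassis cluster control-ports fpc 10 port 1
```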
Encryption on Chassis Cluster Control Link
Chassis cluster control links support an optional encrypted security feature that you can configure and activate.
Note that Juniper Networks security documentation uses the term chassis cluster when referring to high availability (HA) control links. You will still see the abbreviation ha used in place of chassis cluster in commands.
Control link access is protected: Telnet access over the control link is disabled, which prevents attackers from logging in to the system through the control link without authentication. When you configure an internal IPsec key for communication between the devices, the configuration information that passes over the chassis cluster link from the primary node to the secondary node is encrypted. Without the IPsec key, an attacker cannot gain privileged access or observe traffic.
To enable this feature, run the set security ipsec internal security-association manual encryption ike-ha-link-encryption enable configuration command.
You must reboot both the nodes to activate this configuration.
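Putting the enable step and the required reboot together, a minimal sketch (the reboot assumes you can take both nodes down; run the reboot on each node):

```
{primary:node0}[edit]
user@host# set security ipsec internal security-association manual encryption ike-ha-link-encryption enable
user@host# commit

{primary:node0}
user@host> request system reboot    # repeat on node 1 to activate the configuration
```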
Encryption on chassis cluster control link using IPsec is supported on SRX4600 line devices, SRX5000 line devices, and vSRX Virtual Firewall platforms.
If the chassis cluster is already running with an IPsec key configured, you can change the key without rebooting the devices. In this case, you need to change the key on only one node.
When IPsec key encryption is configured, you must reboot both nodes after any configuration change under the internal security association (SA) hierarchy.
To verify the configured Internet Key Exchange (IKE) chassis cluster link encryption algorithm, view the output of the show security internal-security-association command.
| SRX Series Firewalls | Description |
|---|---|
| SRX5400, SRX5600, and SRX5800 | By default, all control ports are disabled. Each Services Processing Card (SPC) in a device has two control ports, and each device can have multiple SPCs plugged in to it. To set up the control link in a chassis cluster, you connect and configure the control ports that you use on each device. |
| SRX4600 | Dedicated chassis cluster control ports and fabric ports are available. No control link configuration is needed for SRX4600 devices; however, you must configure the fabric link explicitly for chassis cluster deployments. If you want to configure 1-Gigabit Ethernet interfaces for the control ports, you must explicitly set the speed in the CLI. |
| SRX4100 and SRX4200 | Dedicated chassis cluster control ports are available. Control link configuration is not required. For more information about all SRX4100 and SRX4200 ports, including dedicated control link ports and fabric link ports, see Understanding SRX Series Chassis Cluster Slot Numbering and Physical Port and Logical Interface Naming. When devices are not in cluster mode, dedicated chassis cluster ports cannot be used as revenue ports or traffic ports. |
| SRX2300 and SRX4300 | Devices use dual dedicated control ports with MACsec support. |
| SRX1600 | Devices use dual dedicated control ports with MACsec support. |
| SRX1500 | Devices use a dedicated control port. |
| SRX300, SRX320, SRX340, SRX345, and SRX380 | The control link uses the ge-0/0/1 interface. |
For details about port usage and interface usage for management links, control links, and fabric links, see Understanding SRX Series Chassis Cluster Slot Numbering and Physical Port and Logical Interface Naming.
Example: Configure Chassis Cluster Control Ports for Control Link
This example shows how to configure chassis cluster control ports on these devices: SRX5400, SRX5600, and SRX5800. You need to configure the control ports that you will use on each device to set up the control link.
Requirements
Before you begin:
Understand chassis cluster control links. See Understanding Chassis Cluster Control Plane and Control Links.
Physically connect the control ports on the devices. See Connecting SRX Series Devices to Create a Chassis Cluster.
Overview
Control link traffic passes through the switches in the Services Processing Cards (SPCs) and reaches the other node. On these SRX Series Firewalls, the chassis cluster control ports are located on the SPCs. By default, all control ports on SRX5400, SRX5600, and SRX5800 devices are disabled. To set up the control links, you connect the control ports, configure them, and set up the chassis cluster.
This example configures control ports with the following Flexible PIC Concentrators (FPCs) and ports as the control link:
- FPC 4, port 0
- FPC 10, port 0
Configuration
Procedure
CLI Quick Configuration
To quickly configure this section of the example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit in configuration mode.
{primary:node0}[edit]
set chassis cluster control-ports fpc 4 port 0
set chassis cluster control-ports fpc 10 port 0

{primary:node1}[edit]
set chassis cluster control-ports fpc 4 port 0
set chassis cluster control-ports fpc 10 port 0
Step-by-Step Procedure
To configure control ports as the control link for the chassis cluster:
Specify the control ports.
{primary:node0}[edit]
user@host# set chassis cluster control-ports fpc 4 port 0

{primary:node0}[edit]
user@host# set chassis cluster control-ports fpc 10 port 0

{primary:node1}[edit]
user@host# set chassis cluster control-ports fpc 4 port 0

{primary:node1}[edit]
user@host# set chassis cluster control-ports fpc 10 port 0
Results
In configuration mode, confirm your configuration by entering the show chassis cluster command. If the output does not display the intended configuration, repeat the configuration instructions in this example to correct it.
For brevity, this show command output includes only the configuration that is relevant to this example. Any other configuration on the system has been replaced with ellipses (...).
user@host# show chassis cluster
...
control-ports {
    fpc 4 port 0;
    fpc 10 port 0;
}
...
After you configure the device, enter commit in configuration mode.
Verify the Chassis Cluster Status
Purpose
Verify the chassis cluster status.
Action
In operational mode, enter the show chassis cluster status command.
{primary:node0}
user@host> show chassis cluster status
Cluster ID: 1
Node                  Priority     Status    Preempt  Manual failover

Redundancy group: 0 , Failover count: 1
    node0             100          primary   no       no
    node1             1            secondary no       no

Redundancy group: 1 , Failover count: 1
    node0             0            primary   no       no
    node1             0            secondary no       no
Meaning
Use the show chassis cluster status command to confirm that the devices in the chassis cluster are communicating with each other. The preceding output shows that chassis cluster is functioning properly, as one device is the primary node and the other is the secondary node.
Verify Chassis Cluster Control Plane Statistics
Purpose
Display chassis cluster control plane statistics.
Action
At the CLI, enter the show chassis cluster control-plane statistics command:
{primary:node1}
user@host> show chassis cluster control-plane statistics
Control link statistics:
Control link 0:
Heartbeat packets sent: 124
Heartbeat packets received: 125
Fabric link statistics:
Child link 0
Probes sent: 124
Probes received: 125
{primary:node1}
user@host> show chassis cluster control-plane statistics
Control link statistics:
Control link 0:
Heartbeat packets sent: 258698
Heartbeat packets received: 258693
Control link 1:
Heartbeat packets sent: 258698
Heartbeat packets received: 258693
Fabric link statistics:
Child link 0
Probes sent: 258690
Probes received: 258690
Child link 1
Probes sent: 258505
Probes received: 258505
Clear Chassis Cluster Control Plane Statistics
To clear displayed chassis cluster control plane statistics, enter the clear chassis cluster control-plane statistics command at the CLI:
{primary:node1}
user@host> clear chassis cluster control-plane statistics
Cleared control-plane statistics