Understanding What Happens When Chassis Cluster Is Enabled
After wiring the two devices together as described in Connecting SRX Series Hardware to Create a Chassis Cluster or Connecting J Series Hardware to Create a Chassis Cluster, you use CLI operational mode commands to enable chassis clustering by assigning a cluster ID and node ID on each chassis in the cluster. The cluster ID is the same on both nodes, while each node receives its own node ID (0 or 1).
To do this, connect to the console port on the device that will be the primary node, assign it a node ID, and identify the cluster it will belong to; then reboot the system. Next, connect to the console port on the other device, assign it a node ID, and give it the same cluster ID you gave the first node; then reboot that system. In both cases, you can have the system reboot automatically by including the reboot parameter in the CLI command. (For further explanation of primary and secondary nodes, see Understanding Chassis Cluster Redundancy Groups.)
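For example, a minimal sketch of these operational mode commands, assuming a cluster ID of 1 (the cluster ID and hostnames here are illustrative):

```
On the device that will be node 0:
user@device1> set chassis cluster cluster-id 1 node 0 reboot

On the device that will be node 1:
user@device2> set chassis cluster cluster-id 1 node 1 reboot
```

The trailing reboot parameter causes each device to reboot immediately so that it comes up in chassis cluster mode.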
Caution: The factory default configuration for SRX100, SRX210, and SRX240 devices automatically enables Layer 2 Ethernet switching. Because Layer 2 Ethernet switching is not supported in chassis cluster mode, if you use the factory default configuration on these devices, you must delete the Ethernet switching configuration before you enable chassis clustering. See Disabling Switching on SRX100, SRX210, and SRX240 Devices Before Enabling Chassis Clustering for more information.
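As an illustration only (the exact statements to remove depend on your platform's factory default; follow the linked topic for the authoritative procedure), the cleanup in configuration mode might look like this, assuming a factory default that places ge-0/0/1 into family ethernet-switching and defines a vlan interface:

```
user@host# delete interfaces ge-0/0/1 unit 0 family ethernet-switching
user@host# delete security zones security-zone trust interfaces vlan.0
user@host# delete interfaces vlan
user@host# delete vlans
user@host# commit
```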
Caution: After fabric interfaces have been configured on a chassis cluster, removing the fabric configuration on either node will cause the redundancy group 0 (RG0) secondary node to move to a disabled state. (Resetting a device to the factory default configuration removes the fabric configuration and thereby causes the RG0 secondary node to move to a disabled state.) After the fabric configuration is committed, do not reset either device to the factory default configuration.
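For reference, the fabric configuration this caution refers to binds a physical interface on each node to the fab0 and fab1 interfaces with fabric-options statements. A minimal sketch (the member interface names are illustrative and vary by platform):

```
{primary:node0}[edit]
user@host# set interfaces fab0 fabric-options member-interfaces ge-0/0/2
user@host# set interfaces fab1 fabric-options member-interfaces ge-5/0/2
```

Deleting either of these statements after commit, or resetting a node to its factory defaults, removes the fabric link and disables the RG0 secondary node as described above.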
When the cluster forms, slot numbering on node 1 continues from where node 0's numbering ends, so interfaces on node 1 are renumbered accordingly. FPC Slot Numbering in an SRX Series Chassis Cluster (SRX5800 Devices) shows how the FPC slots are numbered on the two nodes of an SRX5000 line chassis cluster; corresponding figures show the slot numbering for other SRX Series chassis clusters. PIM Slot Numbering in a J Series Chassis Cluster (J6350 Devices) shows how the PIM slots are numbered on the two nodes of a J Series chassis cluster.
Related Topics
- JUNOS Software Feature Support Reference for SRX Series and J Series Devices
- Node Interfaces on Active SRX Series Chassis Clusters
- Node Interfaces on Active J Series Chassis Clusters
- Management Interface on an Active Chassis Cluster
- Fabric Interface on an Active Chassis Cluster
- Control Interface on an Active Chassis Cluster
- Disabling Chassis Cluster