SRX5400 Firewall Hardware Guide

Maintaining the SRX5400 Host Subsystem

26-Jul-23

Maintaining the SRX5400 Firewall Host Subsystem

Purpose

For optimum firewall performance, verify the condition of the host subsystem. The host subsystem is composed of an SCB and a Routing Engine installed into the slot in the SCB.

Action

On a regular basis:

  • Check the LEDs on the craft interface to view information about the status of the Routing Engines.

  • Check the LEDs on the SCB faceplate.

  • Check the LEDs on the Routing Engine faceplate.

  • To check the status of the Routing Engine, issue the show chassis routing-engine command. The output is similar to the following:

    user@host> show chassis routing-engine
    
    Routing Engine status:
      Slot 0:
        Current state                  Master
        Election priority              Master (default)
        Temperature                 36 degrees C / 96 degrees F
        CPU temperature             33 degrees C / 91 degrees F
        DRAM                      2048 MB
        Memory utilization          12 percent
        CPU utilization:
          User                       1 percent
          Background                 0 percent
          Kernel                     4 percent
          Interrupt                  0 percent
          Idle                      94 percent
        Model                          RE-S-1300
        Serial ID                      1000697084
        Start time                     2008-07-11 08:31:44 PDT
        Uptime                         3 hours, 27 minutes, 27 seconds
        Load averages:                 1 minute   5 minute  15 minute
                                           0.44       0.16       0.06
    
  • To check the status of the SCB, issue the show chassis environment cb command. The output is similar to the following:

    user@host> show chassis environment cb
    
    CB 0 status:
      State                      Online Master
      Temperature                40 degrees C / 104 degrees F
      Power 1
        1.2 V                     1208 mV
        1.5 V                     1521 mV
        1.8 V                     1807 mV
        2.5 V                     2507 mV
        3.3 V                     3319 mV
        5.0 V                     5033 mV
        12.0 V                   12142 mV
        1.25 V                    1243 mV
        3.3 V SM3                 3312 mV
        5 V RE                    5059 mV
        12 V RE                  11968 mV
      Power 2
        11.3 V bias PEM          11253 mV
        4.6 V bias MidPlane       4814 mV
        11.3 V bias FPD          11234 mV
        11.3 V bias POE 0        11176 mV
        11.3 V bias POE 1        11292 mV
      Bus Revision               42
      FPGA Revision              1
    

To check the status of a specific SCB, issue the show chassis environment cb node slot command, for example, show chassis environment cb node 0.
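
For example, to check the SCB in slot 0:

user@host> show chassis environment cb node 0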

For more information about using the CLI, see the CLI Explorer.

Taking the SRX5400 Firewall Host Subsystem Offline

The host subsystem is composed of an SCB with a Routing Engine installed in it. You take the host subsystem offline and bring it online as a unit. Before you replace an SCB or a Routing Engine, you must take the host subsystem offline. Taking the host subsystem offline causes the device to shut down.

To take the host subsystem offline:

  1. On the console or other management device connected to the Routing Engine that is paired with the SCB you are removing, enter CLI operational mode and issue the following command. The command shuts down the Routing Engine cleanly, so its state information is preserved:
    user@host> request system halt
  2. Wait until a message appears on the console confirming that the operating system has halted.

    For more information about the command, see Junos OS System Basics and Services Command Reference at www.juniper.net/documentation/.

    Note:

    The SCB might continue forwarding traffic for approximately 5 minutes after the request system halt command has been issued.

Operating and Positioning the SRX5400 Firewall SCB Ejectors

  • When removing or inserting the SCB, ensure that the cards or blank panels in adjacent slots are fully inserted so that the ejector handles do not strike them, which could result in damage.

  • The ejector handles must be stored toward the center of the board. Ensure the long ends of the ejectors located at both the right and left ends of the board are horizontal and pressed as far as possible toward the center of the board.

  • To insert or remove the SCB, slide the ejector across the SCB horizontally, rotate it a quarter turn, and then slide it again; repeat as necessary. Use the indexing feature to maximize leverage and to avoid hitting any adjacent components.

  • Operate both ejector handles simultaneously. The insertion force on the SCB is too great for one ejector.

Replacing the SRX5400 Firewall SCB

Before replacing the SCB, read the guidelines in Operating and Positioning the SRX5400 Firewall SCB Ejectors. To replace the SCB, perform the following procedures:

Note:

The procedure to replace an SCB applies to the SRX5K-SCB, SRX5K-SCBE, and SRX5K-SCB3.

Removing the SRX5400 Firewall SCB

To remove the SCB (see Figure 1):

Note:

The SCB and Routing Engine are removed as a unit. You can also remove the Routing Engine separately.

CAUTION:

Before removing the SCB, ensure that you know how to operate the ejector handles properly to avoid damage to the equipment.

  1. If you are removing an SCB from a chassis cluster, deactivate the fabric interfaces from either node.
    Note:

    The fabric interfaces should be deactivated to avoid failures in the chassis cluster.

    user@host# deactivate interfaces fab0
    user@host# deactivate interfaces fab1
    user@host# commit
  2. Power off the firewall by using the request system power-off command.
    user@host> request system power-off
    Note:

    Wait until a message appears on the console confirming that the services stopped.

  3. Physically turn off the power and remove the power cables from the chassis.
  4. Place an electrostatic bag or antistatic mat on a flat, stable surface.
  5. Attach an ESD grounding strap to your bare wrist, and connect the other end of the strap to an ESD grounding point.
  6. Rotate the ejector handles simultaneously counterclockwise to unseat the SCB.
  7. Grasp the ejector handles and slide the SCB about halfway out of the chassis.
  8. Place one hand underneath the SCB to support it and slide it completely out of the chassis.
  9. Place the SCB on the antistatic mat.
  10. If you are not replacing the SCB now, install a blank panel over the empty slot.
Figure 1: Removing the SCB

Installing an SRX5400 Firewall SCB

To install the SCB (see Figure 2):

  1. Attach an ESD grounding strap to your bare wrist, and connect the other end of the strap to an ESD grounding point.
  2. Power off the firewall using the command request system power-off.
    user@host> request system power-off
    Note:

    Wait until a message appears on the console confirming that the services stopped.

  3. Physically turn off the power and remove the power cables from the chassis.
  4. Carefully align the sides of the SCB with the guides inside the chassis.
  5. Slide the SCB into the chassis until you feel resistance, carefully ensuring that it is correctly aligned.
    Figure 2: Installing the SCB
  6. Grasp both ejector handles and rotate them simultaneously clockwise until the SCB is fully seated.
  7. Place the ejector handles in the proper position, horizontally and toward the center of the board.
  8. Connect the power cables to the chassis and power on the firewall. The OK LED on the power supply faceplate should blink, then light steadily.
  9. To verify that the SCB is functioning normally, check the LEDs on its faceplate. The green OK/FAIL LED should light steadily a few minutes after the SCB is installed. If the OK/FAIL LED is red, remove and install the SCB again. If the OK/FAIL LED is still red, the SCB is not functioning properly. Contact your customer support representative.

    To check the status of the SCB:

    user@host> show chassis environment cb
  10. If you installed an SCB into a chassis cluster, use the console of the newly installed SCB to put the node back into the cluster and reboot it.
    user@host> set chassis cluster cluster-id X node Y reboot

    where X is the cluster ID and Y is the node ID.
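
    For example, with a hypothetical cluster ID of 1 and node ID of 0:

    user@host> set chassis cluster cluster-id 1 node 0 reboot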

  11. Activate the disabled fabric interfaces.
    user@host# activate interfaces fab0
    user@host# activate interfaces fab1
    user@host# commit

Replacing the SRX5400 Firewall Routing Engine

To replace the Routing Engine, perform the following procedures:

Note:

The procedure to replace a Routing Engine applies to the SRX5K-RE-13-20, SRX5K-RE-1800X4, and SRX5K-RE-128G.

Removing the SRX5400 Firewall Routing Engine

CAUTION:

Before you replace the Routing Engine, you must take the host subsystem offline.

To remove the Routing Engine (see Figure 3):

  1. Take the host subsystem offline as described in Taking the SRX5400 Firewall Host Subsystem Offline.
  2. Place an electrostatic bag or antistatic mat on a flat, stable surface.
  3. Attach an ESD grounding strap to your bare wrist, and connect the other end of the strap to an ESD grounding point.
  4. Flip the ejector handles outward to unseat the Routing Engine.
  5. Grasp the Routing Engine by the ejector handles and slide it about halfway out of the chassis.
  6. Place one hand underneath the Routing Engine to support it and slide it completely out of the chassis.
    Figure 3: Removing the Routing Engine
  7. Place the Routing Engine on the antistatic mat.

Installing the SRX5400 Firewall Routing Engine

To install the Routing Engine into the SCB (see Figure 4):

Note:

If you install only one Routing Engine in the firewall, you must install it in the SCB in slot 0 of the chassis.

  1. If you have not already done so, take the host subsystem offline. See Taking the SRX5400 Firewall Host Subsystem Offline.
  2. Attach an ESD grounding strap to your bare wrist, and connect the other end of the strap to an ESD grounding point.
  3. Ensure that the ejector handles are not in the locked position. If necessary, flip the ejector handles outward.
  4. Place one hand underneath the Routing Engine to support it.
  5. Carefully align the sides of the Routing Engine with the guides inside the opening on the SCB.
  6. Slide the Routing Engine into the SCB until you feel resistance, and then press the Routing Engine's faceplate until it engages the connectors.
    Figure 4: Installing the Routing Engine
  7. Press both of the ejector handles inward to seat the Routing Engine.
  8. Tighten the captive screws on the right and left ends of the Routing Engine faceplate.
  9. Power on the firewall. The OK LED on the power supply faceplate should blink, then light steadily.

    The Routing Engine might require several minutes to boot.

    After the Routing Engine boots, verify that it is installed correctly by checking the RE0 and RE1 LEDs on the craft interface. If the firewall is operational and the Routing Engine is functioning properly, the green ONLINE LED lights steadily. If the red FAIL LED lights steadily instead, remove and install the Routing Engine again. If the red FAIL LED still lights steadily, the Routing Engine is not functioning properly. Contact your customer support representative.

    To check the status of the Routing Engine, use the CLI command:

    user@host> show chassis routing-engine
    Routing Engine status:
      Slot 0:
        Current state                  Master ... 

    For more information about using the CLI, see the CLI Explorer.

  10. If the Routing Engine was replaced on one of the nodes in a chassis cluster, then you need to copy certificates and key pairs from the other node in the cluster:
    1. Start the shell interface as a root user on both nodes of the cluster.

    2. Verify files in the /var/db/certs/common/key-pair folder of the source node (other node in the cluster) and destination node (node on which the Routing Engine was replaced) by using the following command:

      ls -la /var/db/certs/common/key-pair/

    3. If the same files exist on both nodes, back up the files on the destination node to a different location. For example:

      root@SRX-B% pwd
      /var/db/certs/common/key-pair
      root@SRX-B% ls -la
      total 8
      drwx------  2 root  wheel  512 Jan 22 15:09 .
      drwx------  7 root  wheel  512 Mar 26  2009 ..
      -rw-r--r--  1 root  wheel    0 Jan 22 15:09 test
      root@SRX-B% mv test test.old
      root@SRX-B% ls -la
      total 8
      drwx------  2 root  wheel  512 Jan 22 15:10 .
      drwx------  7 root  wheel  512 Mar 26  2009 ..
      -rw-r--r--  1 root  wheel    0 Jan 22 15:09 test.old
      root@SRX-B%

    4. Copy the files from the /var/db/certs/common/key-pair folder of the source node to the same folder on the destination node (one way to do this is sketched after this list).

      Note:

      Ensure that you use the correct node number for the destination node.

    5. In the destination node, use the ls -la command to verify that all files from the /var/db/certs/common/key-pair folder of the source node are copied.

    6. Repeat Step b through Step e for the /var/db/certs/common/local and /var/db/certs/common/certification-authority folders.
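
    A sketch of the copy in Step d: assuming root SSH access is enabled between the nodes and that 10.0.0.1 stands in for the source node's management address (both are assumptions, not part of this procedure), you can copy the files from the shell of the destination node with scp:

      root@SRX-B% scp "root@10.0.0.1:/var/db/certs/common/key-pair/*" /var/db/certs/common/key-pair/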

Low Impact Hardware Upgrade for SCB3 and IOC3

Before you begin the LICU procedure, verify that both firewalls in the cluster are running the same Junos OS release.

Note:

You can perform the hardware upgrade using the LICU process only.

You must perform the hardware upgrade at the same time as the software upgrade from Junos OS Release 12.3X48-D10 to 15.1X49-D10.

If your device is part of a chassis cluster, you can upgrade SRX5K-SCBE (SCB2) to SRX5K-SCB3 (SCB3) and SRX5K-MPC (IOC2) to IOC3 (SRX5K-MPC3-100G10G or SRX5K-MPC3-40G10G) using the low-impact hardware upgrade (LICU) procedure, with minimum downtime. You can also follow this procedure to upgrade SCB1 to SCB2, and RE1 to RE2.

In the chassis cluster, the primary device is depicted as node 0 and the secondary device as node 1.

Follow these steps to perform the LICU.

  1. Ensure that the secondary node does not affect network traffic by isolating it from the network while the LICU is in progress. To do so, disable the physical interfaces (RETH child interfaces) on the secondary node.
    For SRX5400 Services Gateways
    admin@cluster#set interfaces xe-5/0/0 disable 
    admin@cluster#set interfaces xe-5/1/0 disable
    For SRX5600 Services Gateways
    admin@cluster#set interfaces xe-9/0/0 disable 
    admin@cluster#set interfaces xe-9/0/4 disable
    For SRX5800 Services Gateways
    admin@cluster#set interfaces xe-13/0/0 disable 
    admin@cluster#set interfaces xe-13/1/0 disable
    
  2. Disable TCP SYN bit and sequence number checking so that the secondary node can take over.
    admin@cluster#set security flow tcp-session no-syn-check
    admin@cluster#set security flow tcp-session no-sequence-check
    
  3. Commit the configuration.
    root@#commit
    
  4. Disconnect the control and fabric links between the devices in the chassis cluster so that the nodes running different Junos OS releases are disconnected. To do this, change the control ports and fabric ports to intentionally invalid values: set the control ports to an FPC slot that does not contain an SPC, and set the fabric member interfaces to a non-IOC port. Issue the following commands:
    admin@cluster#delete chassis cluster control-ports
    admin@cluster#set chassis cluster control-ports fpc 10 port 0 <<<<<<< non-SPC port 
    admin@cluster#set chassis cluster control-ports fpc 22 port 0 <<<<<<< non-SPC port 
    admin@cluster#delete interfaces fab0
    admin@cluster#delete interfaces fab1 
    admin@cluster#set interfaces fab0 fabric-options member-interfaces xe-4/0/5 <<<<<<< non-IOC port 
    admin@cluster#set interfaces fab1 fabric-options member-interfaces xe-10/0/5<<<<<<< non-IOC port 
    
  5. Commit the configuration.
    root@#commit
    
    Note:

    After you commit the configuration, the following error message appears: Connection to node1 has been broken error:remote unlock-configuration failed on node1 due to control plane communication break.

    Ignore the error message.

  6. Upgrade the Junos OS release on the secondary node from 12.3X48-D10 to 15.1X49-D10.
    admin@cluster>request system software add <location of package/junos filename> no-validate no-copy
    
  7. Power on the secondary node.
  8. Perform the hardware upgrade on the secondary node by replacing SCB2 with SCB3, IOC2 with IOC3, and the existing midplane with the enhanced midplane.

    Follow these steps while upgrading the SCB:

    To upgrade the Routing Engine on the secondary node:

    1. Before powering off the secondary node, copy the configuration information to a USB device.
    2. Replace RE1 with RE2 and upgrade the Junos OS on RE2.
    3. Upload the configuration to RE2 from the USB device.

      For more information about mounting the USB drive on the device, refer to KB articles KB12880 and KB12022 from the Knowledge Base.

    Perform this step when you upgrade the MPC.

    1. Configure the control port, fabric port, and RETH child ports on the secondary node.
      [edit]
      root@cluster# show | display set | grep delete
      delete groups global interfaces fab1
      delete groups global interfaces fab0
      delete interfaces reth0
      delete interfaces reth1
      delete interfaces xe-3/0/5 gigether-options redundant-parent reth0
      delete interfaces xe-9/0/5 gigether-options redundant-parent reth0
      delete interfaces xe-3/0/9 gigether-options redundant-parent reth0
      delete interfaces xe-9/0/9 gigether-options redundant-parent reth0

      [edit]
      root@cluster# show | display set | grep fab
      set groups global interfaces fab1 fabric-options member-interfaces xe-9/0/2
      set groups global interfaces fab0 fabric-options member-interfaces xe-3/0/2

      [edit]
      root@cluster# show | display set | grep reth0
      set chassis cluster redundancy-group 1 ip-monitoring family inet 44.44.44.2 interface reth0.0 secondary-ip-address 44.44.44.3
      set interfaces xe-3/0/0 gigether-options redundant-parent reth0
      set interfaces xe-9/0/0 gigether-options redundant-parent reth0
      set interfaces reth0 vlan-tagging
      set interfaces reth0 redundant-ether-options redundancy-group 1
      set interfaces reth0 unit 0 vlan-id 20
      set interfaces reth0 unit 0 family inet address 44.44.44.1/8

      [edit]
      root@cluster# show | display set | grep reth1
      set interfaces xe-3/0/4 gigether-options redundant-parent reth1
      set interfaces xe-9/0/4 gigether-options redundant-parent reth1
      set interfaces reth1 vlan-tagging
      set interfaces reth1 redundant-ether-options redundancy-group 1
      set interfaces reth1 unit 0 vlan-id 30
      set interfaces reth1 unit 0 family inet address 55.55.55.1/8
  9. Verify that the secondary node is running the upgraded Junos OS release.
    root@cluster> show version  node1
    
    Hostname: <displays the hostname>
    Model: <displays the model number>
    Junos: 15.1X49-D10
    JUNOS Software Release [15.1X49-D10]
    
    root@cluster> show chassis cluster status
    
    Monitor Failure codes:
        CS  Cold Sync monitoring        FL  Fabric Connection monitoring
        GR  GRES monitoring             HW  Hardware monitoring
        IF  Interface monitoring        IP  IP monitoring
        LB  Loopback monitoring         MB  Mbuf monitoring
        NH  Nexthop monitoring          NP  NPC monitoring              
        SP  SPU monitoring              SM  Schedule monitoring
        CF  Config Sync monitoring
     
    
    Cluster ID: 1
    Node   Priority Status         Preempt Manual   Monitor-failures
    
    Redundancy group: 0 , Failover count: 1
    node0  0        lost           n/a     n/a      n/a            
    node1  100      primary        no      no       None           
    
    Redundancy group: 1 , Failover count: 3
    node0  0        lost           n/a     n/a      n/a            
    node1  150      primary        no      no       None           
    
    root@cluster>show chassis fpc pic-status  node1
    Slot 1   Online       SRX5k IOC II
      PIC 0  Online       1x 100GE CFP
      PIC 2  Online       2x 40GE QSFP+
    Slot 2   Online       SRX5k SPC II
      PIC 0  Online       SPU Cp
      PIC 1  Online       SPU Flow
      PIC 2  Online       SPU Flow
      PIC 3  Online       SPU Flow
    Slot 3   Online       SRX5k IOC II
      PIC 0  Online       10x 10GE SFP+
      PIC 2  Online       2x 40GE QSFP+
    Slot 4   Online       SRX5k SPC II
      PIC 0  Online       SPU Flow
      PIC 1  Online       SPU Flow
      PIC 2  Online       SPU Flow
      PIC 3  Online       SPU Flow
    Slot 5   Online       SRX5k IOC II
      PIC 0  Online       10x 10GE SFP+
      PIC 2  Online       2x 40GE QSFP+
    
    
  10. Verify configuration changes by disabling interfaces on the primary node and enabling interfaces on the secondary.
    For SRX5400 Services Gateways
    admin@cluster#set interfaces xe-2/0/0 disable 
    admin@cluster#set interfaces xe-2/1/0 disable
    admin@cluster#delete interfaces xe-5/0/0 disable
    admin@cluster#delete interfaces xe-5/1/0 disable
    For SRX5600 Services Gateways
    admin@cluster#set interfaces xe-2/0/0 disable 
    admin@cluster#set interfaces xe-2/0/4 disable
    admin@cluster#delete interfaces xe-9/0/0 disable
    admin@cluster#delete interfaces xe-9/0/4 disable
    For SRX5800 Services Gateways
    admin@cluster#set interfaces xe-1/0/0 disable 
    admin@cluster#set interfaces xe-1/1/0 disable
    admin@cluster#delete interfaces xe-13/0/0 disable
    admin@cluster#delete interfaces xe-13/1/0 disable
    
  11. Check the configuration changes.
    root@#commit check
    
  12. After verifying, commit the configuration.
    root@#commit
    

    Network traffic fails over to the secondary node.

  13. Verify that the failover was successful by checking the session tables and network traffic on the secondary node.
    admin@cluster#show security flow session summary 
    admin@cluster#monitor interface traffic
    
  14. Upgrade the Junos OS release on the primary node from 12.3X48-D10 to 15.1X49-D10.
    admin@cluster>request system software add <location of package/junos filename> no-validate no-copy
    

    Ignore error messages pertaining to the disconnected cluster.

  15. Power on the primary node.
  16. Perform the hardware upgrade on the primary node by replacing SCB2 with SCB3, IOC2 with IOC3, and the existing midplane with the enhanced midplane.

    Perform the following steps while upgrading the SCB.

    To upgrade the Routing Engine on the primary node:

    1. Before powering off the primary node, copy the configuration information to a USB device.
    2. Replace RE1 with RE2 and upgrade the Junos OS on RE2.
    3. Upload the configuration to RE2 from the USB device.

      For more information about mounting the USB drive on the device, refer to KB articles KB12880 and KB12022 from the Knowledge Base.

    Perform this step when you upgrade the MPC.

    1. Configure the control port, fabric port, and RETH child ports on the primary node.
      [edit]
      root@cluster# show | display set | grep delete
      delete groups global interfaces fab1
      delete groups global interfaces fab0
      delete interfaces reth0
      delete interfaces reth1
      delete interfaces xe-3/0/5 gigether-options redundant-parent reth0
      delete interfaces xe-9/0/5 gigether-options redundant-parent reth0
      delete interfaces xe-3/0/9 gigether-options redundant-parent reth0
      delete interfaces xe-9/0/9 gigether-options redundant-parent reth0

      [edit]
      root@cluster# show | display set | grep fab
      set groups global interfaces fab1 fabric-options member-interfaces xe-9/0/2
      set groups global interfaces fab0 fabric-options member-interfaces xe-3/0/2

      [edit]
      root@cluster# show | display set | grep reth0
      set chassis cluster redundancy-group 1 ip-monitoring family inet 44.44.44.2 interface reth0.0 secondary-ip-address 44.44.44.3
      set interfaces xe-3/0/0 gigether-options redundant-parent reth0
      set interfaces xe-9/0/0 gigether-options redundant-parent reth0
      set interfaces reth0 vlan-tagging
      set interfaces reth0 redundant-ether-options redundancy-group 1
      set interfaces reth0 unit 0 vlan-id 20
      set interfaces reth0 unit 0 family inet address 44.44.44.1/8

      [edit]
      root@cluster# show | display set | grep reth1
      set interfaces xe-3/0/4 gigether-options redundant-parent reth1
      set interfaces xe-9/0/4 gigether-options redundant-parent reth1
      set interfaces reth1 vlan-tagging
      set interfaces reth1 redundant-ether-options redundancy-group 1
      set interfaces reth1 unit 0 vlan-id 30
      set interfaces reth1 unit 0 family inet address 55.55.55.1/8
  17. Verify that the primary node is running the upgraded Junos OS release, and that the primary node is available to take over network traffic.
    root@cluster> show version  node1
    
    Hostname: <displays the hostname>
    Model: <displays the model number>
    Junos: 15.1X49-D10
    JUNOS Software Release [15.1X49-D10]
    
    
    root@cluster> show chassis cluster status
    
    Monitor Failure codes:
        CS  Cold Sync monitoring        FL  Fabric Connection monitoring
        GR  GRES monitoring             HW  Hardware monitoring
        IF  Interface monitoring        IP  IP monitoring
        LB  Loopback monitoring         MB  Mbuf monitoring
        NH  Nexthop monitoring          NP  NPC monitoring              
        SP  SPU monitoring              SM  Schedule monitoring
        CF  Config Sync monitoring
     
    
    Cluster ID: 1
    Node   Priority Status         Preempt Manual   Monitor-failures
    
    Redundancy group: 0 , Failover count: 1
    node0  0        lost           n/a     n/a      n/a            
    node1  100      primary        no      no       None           
    
    Redundancy group: 1 , Failover count: 3
    node0  0        lost           n/a     n/a      n/a            
    node1  150      primary        no      no       None           
    
    root@cluster>show chassis fpc pic-status  node1
    Slot 1   Online       SRX5k IOC II
      PIC 0  Online       1x 100GE CFP
      PIC 2  Online       2x 40GE QSFP+
    Slot 2   Online       SRX5k SPC II
      PIC 0  Online       SPU Cp
      PIC 1  Online       SPU Flow
      PIC 2  Online       SPU Flow
      PIC 3  Online       SPU Flow
    Slot 3   Online       SRX5k IOC II
      PIC 0  Online       10x 10GE SFP+
      PIC 2  Online       2x 40GE QSFP+
    Slot 4   Online       SRX5k SPC II
      PIC 0  Online       SPU Flow
      PIC 1  Online       SPU Flow
      PIC 2  Online       SPU Flow
      PIC 3  Online       SPU Flow
    Slot 5   Online       SRX5k IOC II
      PIC 0  Online       10x 10GE SFP+
      PIC 2  Online       2x 40GE QSFP+
    
    
  18. Check the configuration changes.
    root@#commit check
    
  19. After verifying, commit the configuration.
    root@#commit
    
  20. Verify configuration changes by disabling interfaces on the secondary node and enabling interfaces on the primary.
    For SRX5400 Services Gateways
    admin@cluster#set interfaces xe-5/0/0 disable
    admin@cluster#set interfaces xe-5/1/0 disable
    admin@cluster#delete interfaces xe-2/0/0 disable 
    admin@cluster#delete interfaces xe-2/1/0 disable
    For SRX5600 Services Gateways
    admin@cluster#set interfaces xe-9/0/0 disable
    admin@cluster#set interfaces xe-9/0/4 disable
    admin@cluster#delete interfaces xe-2/0/0 disable 
    admin@cluster#delete interfaces xe-2/0/4 disable
    For SRX5800 Services Gateways
    admin@cluster#set interfaces xe-13/0/0 disable
    admin@cluster#set interfaces xe-13/1/0 disable
    admin@cluster#delete interfaces xe-1/0/0 disable 
    admin@cluster#delete interfaces xe-1/1/0 disable
    

    Network traffic fails over to the primary node.

  21. To synchronize the devices within the cluster, reconfigure the control ports and fabric ports with the correct port values on the secondary node.
    admin@cluster#delete chassis cluster control-ports 
    admin@cluster#set chassis cluster control-ports fpc 1 port 0
    admin@cluster#set chassis cluster control-ports fpc 13 port 0
    admin@cluster#delete interfaces fab0 
    admin@cluster#delete interfaces fab1 
    admin@cluster#set interfaces fab0 fabric-options member-interfaces xe-3/0/2 
    admin@cluster#set interfaces fab1 fabric-options member-interfaces xe-9/0/2
    
  22. Commit the configuration.
    root@#commit
    
  23. Power on the secondary node.
    1. When you power on the secondary node, enable the control ports and fabric ports on the primary node, and reconfigure them with the correct port values.
      admin@cluster#delete chassis cluster control-ports 
      admin@cluster#set chassis cluster control-ports fpc 1 port 0
      admin@cluster#set chassis cluster control-ports fpc 13 port 0
      admin@cluster#delete interfaces fab0 
      admin@cluster#delete interfaces fab1 
      admin@cluster#set interfaces fab0 fabric-options member-interfaces xe-3/0/2 
      admin@cluster#set interfaces fab1 fabric-options member-interfaces xe-9/0/2
      
  24. Commit the configuration.
    root@#commit
    
  25. After the secondary node is up, verify that it synchronizes with the primary node.
    admin@cluster#delete interfaces xe-4/0/5 disable
    admin@cluster#delete interfaces xe-10/0/5 disable
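
    To confirm that the nodes are synchronized, you can check the cluster state again with the command used earlier in this procedure:

    admin@cluster> show chassis cluster status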
    
  26. Enable SYN bit and TCP sequence number checking for the secondary node.
    admin@cluster#delete security flow tcp-session no-syn-check
    admin@cluster#delete security flow tcp-session no-sequence-check
    
  27. Commit the configuration.
    root@#commit
    
  28. Verify the Redundancy Group (RG) states and their priority.
    root@cluster>show version
    node0:
    --------------------------------------------------------------------------
    Hostname: <displays the hostname>
    Model: <displays the model number>
    Junos: 15.1X49-D10
    JUNOS Software Release [15.1X49-D10]
    
    node1:
    --------------------------------------------------------------------------
    Hostname: <displays the hostname>
    Model: <displays the model>
    Junos: 15.1X49-D10
    JUNOS Software Release [15.1X49-D10]

    After the secondary node is powered on, issue the following command:

    root@cluster>show chassis fpc pic-status
    node0:
    --------------------------------------------------------------------------
    Slot 1   Online       SRX5k IOC II
      PIC 0  Online       1x 100GE CFP
      PIC 2  Online       2x 40GE QSFP+
    Slot 2   Online       SRX5k SPC II
      PIC 0  Online       SPU Cp
      PIC 1  Online       SPU Flow
      PIC 2  Online       SPU Flow
      PIC 3  Online       SPU Flow
    Slot 3   Online       SRX5k IOC3 24XGE+6XLG
      PIC 0  Online       12x 10GE SFP+
      PIC 1  Online       12x 10GE SFP+
      PIC 2  Offline      3x 40GE QSFP+
      PIC 3  Offline      3x 40GE QSFP+
    Slot 4   Online       SRX5k SPC II
      PIC 0  Online       SPU Flow
      PIC 1  Online       SPU Flow
      PIC 2  Online       SPU Flow
      PIC 3  Online       SPU Flow
    Slot 5   Online       SRX5k IOC II
      PIC 0  Online       10x 10GE SFP+
      PIC 2  Online       10x 10GE SFP+
                                            
    node1:                                  
    --------------------------------------------------------------------------
    Slot 1   Online       SRX5k IOC II
      PIC 0  Online       1x 100GE CFP
      PIC 2  Online       2x 40GE QSFP+
    Slot 2   Online       SRX5k SPC II
      PIC 0  Online       SPU Cp
      PIC 1  Online       SPU Flow
      PIC 2  Online       SPU Flow
      PIC 3  Online       SPU Flow
    Slot 3   Online       SRX5k IOC3 24XGE+6XLG
      PIC 0  Online       12x 10GE SFP+
      PIC 1  Online       12x 10GE SFP+
      PIC 2  Offline      3x 40GE QSFP+
      PIC 3  Offline      3x 40GE QSFP+
    Slot 4   Online       SRX5k SPC II
      PIC 0  Online       SPU Flow
      PIC 1  Online       SPU Flow
      PIC 2  Online       SPU Flow
      PIC 3  Online       SPU Flow
    Slot 5   Online       SRX5k IOC II
      PIC 0  Online       10x 10GE SFP+
      PIC 2  Online       2x 40GE QSFP+
                                            
    
    root@cluster> show chassis cluster status 
        CS  Cold Sync monitoring        FL  Fabric Connection monitoring
        GR  GRES monitoring             HW  Hardware monitoring
        IF  Interface monitoring        IP  IP monitoring
        LB  Loopback monitoring         MB  Mbuf monitoring
        NH  Nexthop monitoring          NP  NPC monitoring              
        SP  SPU monitoring              SM  Schedule monitoring
        CF  Config Sync monitoring
     
    Cluster ID: 1
    Node   Priority Status         Preempt Manual   Monitor-failures
    
    Redundancy group: 0 , Failover count: 0
    node0  250      primary        no      no       None           
    node1  100      secondary      no      no       None           
    
    Redundancy group: 1 , Failover count: 0
    node0  254      primary        no      no       None           
    node1  150      secondary      no      no       None           
    
    
    root@cluster>show security monitoring 
    node0:
    --------------------------------------------------------------------------
                      Flow session   Flow session     CP session     CP session 
    FPC PIC CPU Mem        current        maximum        current        maximum
    ---------------------------------------------------------------------------
      2   0   0  11              0              0        1999999      104857600
      2   1   2   5         289065        4194304              0              0
      2   2   2   5         289062        4194304              0              0
      2   3   2   5         289060        4194304              0              0
      4   0   2   5         289061        4194304              0              0
      4   1   2   5         281249        4194304              0              0
      4   2   2   5         281251        4194304              0              0
      4   3   2   5         281251        4194304              0              0
    
    node1:
    --------------------------------------------------------------------------
                      Flow session   Flow session     CP session     CP session 
    FPC PIC CPU Mem        current        maximum        current        maximum
    ---------------------------------------------------------------------------
      2   0   0  11              0              0        1999999      104857600
      2   1   0   5         289065        4194304              0              0
      2   2   0   5         289062        4194304              0              0
      2   3   0   5         289060        4194304              0              0
      4   0   0   5         289061        4194304              0              0
      4   1   0   5         281249        4194304              0              0
      4   2   0   5         281251        4194304              0              0
      4   3   0   5         281251        4194304              0              0
    
    
    

    Enable the traffic interfaces on the secondary node.

    root@cluster> show interfaces terse | grep reth0 
    xe-3/0/0.0              up    up   aenet    --> reth0.0
    xe-3/0/0.32767          up    up   aenet    --> reth0.32767
    xe-9/0/0.0              up    up   aenet    --> reth0.0 
     xe-9/0/0.32767          up    up   aenet    --> reth0.32767
    reth0                   up    up
    reth0.0                 up    up   inet     44.44.44.1/8
     
    reth0.32767             up    up   multiservice
    root@cluster> show interfaces terse | grep reth1
    xe-3/0/4.0              up    up   aenet    --> reth1.0
    xe-3/0/4.32767          up    up   aenet    --> reth1.32767
    xe-9/0/4.0              up    up   aenet    --> reth1.0
     xe-9/0/4.32767          up    up   aenet    --> reth1.32767
    reth1                  up    up
    reth1.0                 up    up   inet     55.55.55.1/8
       
    reth1.32767             up    up   multiservice

For more information about LICU, refer to KB article KB17947 from the Knowledge Base.

In-Service Hardware Upgrade for SRX5K-RE-1800X4 and SRX5K-SCBE or SRX5K-RE-1800X4 and SRX5K-SCB3 in a Chassis Cluster

Ensure that the following prerequisites are completed before you begin the ISHU procedure:

  • Replace all interface cards such as IOCs and Flex IOCs as specified in Table 1.

    Table 1: List of Interface Cards for Upgrade

    Cards to Replace    Replacement Cards for Upgrade
    SRX5K-40GE-SFP      SRX5K-MPC and MICs
    SRX5K-4XGE-XFP      SRX5K-MPC and MICs
    SRX5K-FPC-IOC       SRX5K-MPC and MICs
    SRX5K-RE-13-20      SRX5K-RE-1800X4
    SRX5K-SCB           SRX5K-SCBE
    SRX5K-SCBE          SRX5K-SCB3

  • Verify that both firewalls in the cluster are running the same Junos OS version: Release 12.1X47-D15 or later for SRX5K-SCBE with SRX5K-RE-1800X4, or Release 15.1X49-D10 or later for SRX5K-SCB3 with SRX5K-RE-1800X4. For more information about the cards supported on the firewalls, see Cards Supported on SRX5400, SRX5600, and SRX5800 Firewalls.

    For more information about unified in-service software upgrade (unified ISSU), see Upgrading Both Devices in a Chassis Cluster Using an ISSU.

If your device is part of a chassis cluster, you can use the in-service hardware upgrade (ISHU) procedure to upgrade:

  • SRX5K-SCB with SRX5K-RE-13-20 to SRX5K-SCBE with SRX5K-RE-1800X4

    Note:

    Both firewalls must be running the same Junos OS version (Release 12.3X48).

  • SRX5K-SCBE with SRX5K-RE-1800X4 to SRX5K-SCB3 with SRX5K-RE-1800X4

    Note:

    You cannot upgrade SRX5K-SCB with SRX5K-RE-13-20 directly to SRX5K-SCB3 with SRX5K-RE-1800X4.

Note:

We strongly recommend that you perform the ISHU during a maintenance window or during a period of lowest possible traffic, because the secondary node is not available during the upgrade.

Be sure to upgrade the SCB and Routing Engine at the same time, because only the following combinations are supported (a quick way to check the installed models is shown after the list):

  • SRX5K-RE-13-20 and SRX5K-SCB

  • SRX5K-RE-1800X4 and SRX5K-SCBE

  • SRX5K-RE-1800X4 and SRX5K-SCB3
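
A quick way to check which SCB and Routing Engine models are currently installed is the show chassis hardware command, which lists the hardware inventory of the chassis:

user@host> show chassis hardware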

Note:

While performing the ISHU, in the SRX5800 firewall the second SCB can contain a Routing Engine, but the third SCB must not contain a Routing Engine. In the SRX5600 firewall, the second SCB can contain a Routing Engine.

To perform an ISHU:

  1. Export the configuration information from the secondary node to a USB drive or an external storage device.

    For more information about mounting the USB on the device, refer to KB articles KB12880 and KB12022 from the Knowledge Base.
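
    As a minimal sketch of this step, assuming the USB drive is already mounted at /var/tmp/usb as described in the KB articles (the mount point and file name are placeholders), you can save and copy the active configuration from the CLI:

    user@host> show configuration | save /var/tmp/node1-config.conf
    user@host> file copy /var/tmp/node1-config.conf /var/tmp/usb/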

  2. Power off the secondary node.
  3. Disconnect all the interface cards from the chassis backplane by pulling them out of the backplane by 6” to 8” (leaving cables in place).
  4. Replace the SRX5K-SCBs with SRX5K-SCBEs and the SRX5K-RE-13-20s with SRX5K-RE-1800X4s, or replace the SRX5K-SCBEs with SRX5K-SCB3s, depending on your chassis configuration.
  5. Power on the secondary node.
  6. After the secondary node reboots as a standalone node, configure the same cluster ID as in the primary node.
    root@>set chassis cluster cluster-id 1 node 1
    
  7. Install the same Junos OS software image on the secondary node as on the primary node and reboot.
    Note:

    Ensure that the installed Junos OS version is Release 12.1X47-D15 or later for SRX5K-RE-1800X4 and SRX5K-SCBE, or Release 15.1X49-D10 or later for SRX5K-RE-1800X4 and SRX5K-SCB3.

  8. After the secondary node reboots, import all the configuration settings from the USB to the node.

    For more information about mounting the USB on the device, refer to KB articles KB12880 and KB12022 from the Knowledge Base.

  9. Power off the secondary node.
  10. Re-insert all the interface cards into the chassis backplane.
    Note:

    Ensure the cards are inserted in the same order as in the primary node, and maintain connectivity between the control link and fabric link.

  11. Power on the node and issue this command to ensure all the cards are online:
    user@host> show chassis fpc pic-status

    After the node boots, it must join the cluster as a secondary node. To verify, issue the following command:

    admin@cluster> show chassis cluster status
    Note:

    The command output must indicate that the node priority is set to a non-zero value, and that the cluster contains a primary node and a secondary node.

  12. Manually initiate Redundancy Group (RG) failover to the upgraded node so that it becomes the primary node for all RGs.

    For RG0, issue the following command:

    admin@cluster> request chassis cluster failover redundancy-group 0 node 1

    For RG1, issue the following command:

    admin@cluster> request chassis cluster failover redundancy-group 1 node 1

    Verify that all RGs are failed over by issuing the following command:

    admin@cluster> show chassis cluster status
    
  13. Verify the operations of the upgraded secondary node by performing the following:
    • To ensure that all FPCs are online, issue the following command:

      admin@cluster> show chassis fpc pic-status
      
    • To ensure that all RGs are upgraded and the node priority is set to a non-zero value, issue the following command:

      admin@cluster> show chassis cluster status
      
    • To ensure that the upgraded node (now primary) receives and transmits data, issue the following command:

      admin@cluster> monitor interface traffic
      
    • To ensure sessions are created and deleted on the upgraded node, issue the following command:

      admin@cluster> show security monitoring
      
  14. Repeat Step 1 through Step 12 for the primary node.
  15. To ensure that the ISHU process is completed successfully, check the status of the cluster by issuing the following command:
    admin@cluster> show chassis cluster status

For detailed information about chassis cluster, see the Chassis Cluster User Guide for SRX Series Devices at www.juniper.net/documentation/.
