Troubleshooting Chassis Cluster Management Issues
Unable to Manage an SRX Series Chassis Cluster Using the Management Port or Revenue Ports
Problem
Description
Cannot manage the SRX Series chassis cluster using the management port or revenue ports.
Environment
SRX Series chassis cluster
Diagnosis
Which node in the chassis cluster are you using to manage the cluster?
- Primary node: Proceed to one of the following:
  - Manage the Chassis Cluster Using J-Web.
    Note: You can use J-Web to manage only the primary node.
  - Manage the Chassis Cluster Using the Revenue Port or fxp0 Management Port.
    Note: You can use the revenue port or the fxp0 management port to manage the primary node.
- Secondary node: Proceed to Manage the Chassis Cluster Using the fxp0 Management Port.
  Note: You can manage the secondary node only by using the fxp0 management port.
Resolution
- Manage the Chassis Cluster Using J-Web
- Manage the Chassis Cluster Using the Revenue Port or fxp0 Management Port
- Manage the Chassis Cluster Using the fxp0 Management Port
- What’s Next
Manage the Chassis Cluster Using J-Web

You can use J-Web to manage only the primary node.

1. Connect a console to the primary node.
2. Using the CLI, run the show system services web-management command.
3. Check whether the loopback interface (lo0) is configured under the Web management HTTP/HTTPS configuration. See web-management (System Services).
4. If the loopback interface (lo0) is configured under the Web management HTTP/HTTPS configuration, remove it by running the delete system services web-management http interface lo0.0 command.
5. Commit the change, and check whether you can now manage the chassis cluster.
6. If you still cannot manage the chassis cluster, proceed to Manage the Chassis Cluster Using the Revenue Port or fxp0 Management Port.
Manage the Chassis Cluster Using the Revenue Port or fxp0 Management Port

You can use either the revenue port or the fxp0 management port to manage the primary node.

1. Connect a console to the primary node through the revenue port that you want to use as a management interface.

2. Verify the configuration of the management interface:

   a. Verify that the required system services (SSH, Telnet, HTTP) are enabled at the host-inbound-traffic hierarchy level in the relevant zone:

      zones {
          security-zone trust {
              host-inbound-traffic {
                  system-services {
                      any-service;
                  }
                  protocols {
                      all;
                  }
              }
              interfaces {
                  reth0.0;
                  reth0.1;
              }
          }
      }

   b. Verify that the required system services (SSH, Telnet, HTTP) are enabled at the [edit system services] hierarchy level:

      {primary:node1}[edit]
      root# show system
      services {
          http;
          ssh;
          telnet;
      }

3. Does a ping to the management interface work?

   - Yes: See Unable to Manage an SRX Series Chassis Cluster Using fxp0 When the Destination in the Backup Router is 0/0. If that solution doesn't work, proceed to What’s Next to open a case with Juniper Networks technical support.
   - No: Proceed to step 4.

4. Using the CLI, run the show interfaces terse command. In the output, is the status of the fxp0 interface Up, and does it have an IP address?

   - Yes: Proceed to step 5.
   - No: Verify the following:

     a. Using the CLI, verify that the fxp0 interface is configured correctly: show groups.

        Sample output:

        root@srx# show groups
        node0 {
            system {
                host-name SRX3400-1;
                backup-router 192.168.1.254 destination 0.0.0.0/0;
            }
            interfaces {
                fxp0 {
                    unit 0 {
                        family inet {
                            address 192.168.1.1/24;
                        }
                    }
                }
            }
        }
        node1 {
            system {
                host-name SRX3400-2;
                backup-router 192.168.1.254 destination 0.0.0.0/0;
            }
            interfaces {
                fxp0 {
                    unit 0 {
                        family inet {
                            address 192.168.1.2/24;
                        }
                    }
                }
            }
        }
        apply-groups "${NODE}";
        system {
            services {
                ftp;
                ssh;
                telnet;
            }
        }

     b. Check the condition of the cable that is connected to the fxp0 interface. Is the cable in good condition?

        - Yes: Proceed to the next step.
        - No: Replace the cable and try to manage the chassis cluster. If you still cannot manage the chassis cluster, proceed to the next step.

     c. Using the CLI, check for incrementing error counters: show interfaces fxp0.0 extensive.

        - Yes: If you find errors in the output, proceed to What’s Next to open a case with Juniper Networks technical support.
        - No: If there are no errors in the output and you still cannot manage the chassis cluster, proceed to step 5.

5. Check whether the IP address of the fxp0 interface and the IP address of the management device are in the same subnet.

   - Yes: Proceed to step 6.
   - No: Using the CLI, check whether there is a route for the management device IP address: show route <management-device-ip>. If no route exists, add a route for the management subnet to the inet.0 table with the backup router IP address as the next hop.

6. Using the CLI, check whether there is an ARP entry for the management device on the services gateway: show arp no-resolve | match <ip>.

   - Yes: Check whether the chassis cluster has multiple routes to the management device: show route <device-ip>.

     - Yes: There could be routes to the management device through both the fxp0 interface and another interface, leading to asymmetric routing. Proceed to What’s Next to open a case with Juniper Networks technical support.
     - No: Proceed to Manage the Chassis Cluster Using the fxp0 Management Port.

   - No: Proceed to What’s Next to open a case with Juniper Networks technical support.
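The subnet comparison in step 5 can be sketched with Python's standard ipaddress module. This is a minimal illustration, not part of any Junos tooling; the addresses are hypothetical examples:

```python
import ipaddress

def same_subnet(fxp0_ip: str, mgmt_ip: str, prefix_len: int) -> bool:
    """Return True when the management device falls in the fxp0 subnet."""
    subnet = ipaddress.ip_network(f"{fxp0_ip}/{prefix_len}", strict=False)
    return ipaddress.ip_address(mgmt_ip) in subnet

# Management device in the fxp0 subnet: direct delivery works.
print(same_subnet("192.168.1.1", "192.168.1.100", 24))  # True

# Management device outside the subnet: a route in inet.0 (for example,
# via the backup router) is required.
print(same_subnet("192.168.1.1", "172.16.1.1", 24))     # False
```

When the check returns False, step 5 directs you to confirm that a route to the management subnet exists, with the backup router as the next hop.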
Manage the Chassis Cluster Using the fxp0 Management Port

You can use only the fxp0 management port to manage the secondary node.

1. Verify the configuration of the management interface on the secondary node:

   a. Verify that the required system services (SSH, Telnet, HTTP) are enabled at the host-inbound-traffic hierarchy level:

      zones {
          security-zone trust {
              host-inbound-traffic {
                  system-services {
                      any-service;
                  }
                  protocols {
                      all;
                  }
              }
              interfaces {
                  reth0.0;
                  reth0.1;
              }
          }
      }

   b. Verify that the required system services (SSH, Telnet, HTTP) are enabled at the [edit system services] hierarchy level:

      {primary:node1}[edit]
      root# show system
      services {
          http;
          ssh;
          telnet;
      }

   See Unable to Manage an SRX Series Chassis Cluster Using fxp0 When the Destination in the Backup Router is 0/0 and Configuring backup-router Command on Chassis Cluster for more information about the configuration guidelines.

   If the configuration is correct and you still cannot manage the chassis cluster, proceed to step 2.

2. Are the IP addresses of the fxp0 interfaces of the primary node and the secondary node in the same subnet?

   - Yes: Proceed to What’s Next.
   - No: Configure the fxp0 interfaces of the primary node and the secondary node in the same subnet. Then return to step 1 and verify the configuration.
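The step 2 requirement, that both fxp0 addresses sit in one subnet, amounts to comparing the networks derived from each address and prefix length. A minimal Python sketch, with hypothetical addresses:

```python
import ipaddress

def fxp0_subnets_match(primary_fxp0: str, secondary_fxp0: str) -> bool:
    """True when both fxp0 addresses (with prefix length) share one network."""
    a = ipaddress.ip_interface(primary_fxp0)
    b = ipaddress.ip_interface(secondary_fxp0)
    return a.network == b.network

print(fxp0_subnets_match("192.168.1.1/24", "192.168.1.2/24"))  # True
print(fxp0_subnets_match("192.168.1.1/24", "192.168.2.2/24"))  # False
```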
What’s Next

- If the issue persists, see KB Article KB20795.
- If you want to debug further, see KB Article KB21164 to check the debug logs.
- Before you open a JTAC case with the Juniper Networks support team, see Data Collection for Customer Support for the data you should collect to assist in troubleshooting.
Unable to Manage the Secondary Node of a Chassis Cluster Using J-Web
Problem
Description
Cannot manage the secondary node of a chassis cluster using the J-Web interface.
Environment
SRX Series chassis cluster
Symptoms
When the device is in Junos Services Redundancy Protocol (JSRP) chassis cluster mode, you cannot manage redundancy group 0 (RG0) on the secondary node by using the J-Web interface.
Cause
You can use the J-Web interface to manage redundancy group 0 only on the primary node.
The processes that J-Web references are not running on the secondary node.
Example
The following example shows the syslog and system process output on both node0 and node1 after RG0 failed over from node1 to node0:
On node1, the web-management process (httpd-gk) was terminated (exited).
On node0, the web-management process (httpd-gk) was started.
The two HTTP-related processes (httpd-gk and httpd) run only on node0, which is the new primary node for RG0.
{secondary:node1}
root@SRX210HE-B> show chassis cluster status
Cluster ID: 1
Node                  Priority     Status    Preempt  Manual failover

Redundancy group: 0 , Failover count: 1
    node0             255          primary   no       yes
    node1             1            secondary no       yes

Redundancy group: 1 , Failover count: 1
    node0             100          primary   yes      no
    node1             1            secondary yes      no

{secondary:node1}
root@SRX210HE-B> show log log-any | grep web-management
Jul  5 11:31:52 SRX210HE-B init: web-management (PID 9660) started
Jul  5 12:00:37 SRX210HE-B init: web-management (PID 9660) SIGTERM sent
Jul  5 12:00:37 SRX210HE-B init: web-management (PID 9660) exited with status=0 Normal Exit

{primary:node0}
root@SRX210HE-A> show log log-any | grep web-management
Jul  5 12:00:37 SRX210HE-A init: web-management (PID 9498) started

{primary:node0}
root@SRX210HE-A> show system processes extensive node 0 | grep http
 9498 root        1  76    0 12916K  4604K select   0   0:00  0.00% httpd-gk
 9535 nobody      1  90    0  8860K  3264K select   0   0:00  0.00% httpd

{primary:node0}
root@SRX210HE-A> show system processes extensive node 1 | grep http
=> No httpd-gk and httpd processes running on node 1 (secondary node)
This limits the remote procedure calls (RPCs) available to the J-Web logic and, consequently, the pages that can be served from the secondary node.
Solution
You can manage the secondary node of a chassis cluster by using the CLI (SSH, Telnet, or console). See Manage the Chassis Cluster Using the fxp0 Management Port.
Unable to Manage an SRX Series Chassis Cluster Using fxp0 When the Destination in the Backup Router is 0/0
This topic uses an example to explain how to manage, through the fxp0 interface, an SRX Series chassis cluster that uses the backup-router configuration.
Problem
Description
The management device cannot manage the chassis cluster through an fxp0 interface, but it can ping both fxp0 interfaces.
Sample Topology
The topology, IP addresses, and configuration are as follows:
Primary fxp0: 192.168.1.1/24
Secondary fxp0: 192.168.1.2/24
Gateway for fxp0: 192.168.1.254
Management device: 172.16.1.1/24
groups {
    node0 {
        system {
            host-name SRX5400-1;
            backup-router 192.168.1.254 destination 0.0.0.0/0;
        }
        interfaces {
            fxp0 {
                unit 0 {
                    family inet {
                        address 192.168.1.1/24;
                    }
                }
            }
        }
    }
    node1 {
        system {
            host-name SRX5400-2;
            backup-router 192.168.1.254 destination 0.0.0.0/0;
        }
        interfaces {
            fxp0 {
                unit 0 {
                    family inet {
                        address 192.168.1.2/24;
                    }
                }
            }
        }
    }
}
apply-groups "${NODE}";
system {
    services {
        ftp;
        ssh;
        telnet;
    }
}
Environment
SRX Series chassis cluster
Cause
There is a route for 172.16.1.1 through interfaces other than the fxp0 interface on the cluster devices. We do not recommend using 0.0.0.0/0 as the backup-router destination. Ping works because the echo reply for an incoming echo request on the fxp0 interface is sent out along the route to 172.16.1.1 through an interface other than fxp0; Telnet, however, fails.
Solution
Remove the route for 172.16.1.1 in the routing table, and set a more specific backup-router destination in group node0/node1.
For example:
groups {
    node0 {
        ...
        backup-router 192.168.1.254 destination 172.16.1.1/32;
        ...
    }
    node1 {
        ...
        backup-router 192.168.1.254 destination 172.16.1.1/32;
        ...
    }
}
No changes are displayed in the routing table after the configuration is applied, because the backup-router configuration is intended to facilitate management access on the backup node only; access to the primary node is enabled through routing on the primary node. Thus, when the backup-router configuration is complete, you can see that a route is injected into the forwarding table on the secondary node. You cannot see the routing table on the secondary node because the routing subsystem does not run there.
Sample Output When the Backup Router Is Configured with Destination 0/0
Routing table on primary node:
{primary:node0}[edit]
root@SRX5400-1# run show route

inet.0: 2 destinations, 2 routes (2 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

192.168.1.0/24     *[Direct/0] 00:00:54
                   >  via fxp0.0
192.168.1.1/32     *[Local/0] 00:00:54
                      Local via fxp0.0
Forwarding table on secondary node with destination 0/0:
root@SRX5400-2# run show route forwarding-table
Routing table: default.inet
Internet:
Destination        Type  RtRef  Next hop          Type  Index  NhRef  Netif
default            user  0      28:c0:da:a0:88:0  ucst  345    2      fxp0.0
default            perm  0                        rjct  36     1
0.0.0.0/32         perm  0                        dscd  34     1
192.168.1.0/24     intf  0                        rslv  344    1      fxp0.0
192.168.1.0/32     dest  0      192.168.1.0       recv  342    1      fxp0.0
192.168.1.2/32     intf  0      192.168.1.2       locl  343    2
192.168.1.2/32     dest  0      192.168.1.2       locl  343    2
192.168.1.254/32   dest  0      28:c0:da:a0:88:0  ucst  345    2      fxp0.0
192.168.1.255/32   dest  0      192.168.1.255     bcst  336    1      fxp0.0
224.0.0.0/4        perm  0                        mdsc  35     1
224.0.0.1/32       perm  0      224.0.0.1         mcst  31     1
255.255.255.255/32 perm  0                        bcst  32     1

Routing table: __master.anon__.inet
Internet:
Destination        Type  RtRef  Next hop   Type  Index  NhRef  Netif
default            perm  0                 rjct  526    1
0.0.0.0/32         perm  0                 dscd  524    1
224.0.0.0/4        perm  0                 mdsc  525    1
224.0.0.1/32       perm  0      224.0.0.1  mcst  521    1
255.255.255.255/32 perm  0                 bcst  522    1
Sample Output When the Backup Router Is Configured with Destination 172.16.1.1/32
Routing table on primary node:
{primary:node0}[edit]
root@SRX5400-1# run show route

inet.0: 2 destinations, 2 routes (2 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

192.168.1.0/24     *[Direct/0] 00:17:51
                   >  via fxp0.0
192.168.1.1/32     *[Local/0] 00:55:50
                      Local via fxp0.0
Forwarding table on primary node:
Note: On the primary node, route 172.16.1.1/32 of the backup router is not shown in the sample output.
{primary:node0}[edit]
root@SRX5400-1# run show route forwarding-table
Routing table: default.inet
Internet:
Destination        Type  RtRef  Next hop           Type  Index  NhRef  Netif
default            perm  0                         rjct  36     1
0.0.0.0/32         perm  0                         dscd  34     1
192.168.1.0/24     intf  0                         rslv  334    1      fxp0.0
192.168.1.0/32     dest  0      192.168.1.0        recv  331    1      fxp0.0
192.168.1.1/32     intf  0      192.168.1.1        locl  332    2
192.168.1.1/32     dest  0      192.168.1.1        locl  332    2
192.168.1.3/32     dest  0      5c:5e:ab:16:e3:81  ucst  339    1      fxp0.0
192.168.1.6/32     dest  0      0:26:88:4f:c8:8    ucst  340    1      fxp0.0
192.168.1.11/32    dest  0      0:30:48:bc:9f:45   ucst  342    1      fxp0.0
192.168.1.254/32   dest  0      28:c0:da:a0:88:0   ucst  343    1      fxp0.0
192.168.1.255/32   dest  0      192.168.1.255      bcst  329    1      fxp0.0
224.0.0.0/4        perm  0                         mdsc  35     1
224.0.0.1/32       perm  0      224.0.0.1          mcst  31     1
255.255.255.255/32 perm  0                         bcst  32     1

Routing table: __master.anon__.inet
Internet:
Destination        Type  RtRef  Next hop   Type  Index  NhRef  Netif
default            perm  0                 rjct  526    1
0.0.0.0/32         perm  0                 dscd  524    1
224.0.0.0/4        perm  0                 mdsc  525    1
224.0.0.1/32       perm  0      224.0.0.1  mcst  521    1
255.255.255.255/32 perm  0                 bcst  522    1
Forwarding table on the secondary node:
Note: On the secondary node, route 172.16.1.1/32 of the backup router is shown in the sample output. This facilitates access to the secondary node through the fxp0 interface.
{secondary:node1}[edit]
root@SRX5400-2# run show route forwarding-table
Routing table: default.inet
Internet:
Destination        Type  RtRef  Next hop          Type  Index  NhRef  Netif
default            perm  0                        rjct  36     1
0.0.0.0/32         perm  0                        dscd  34     1
172.16.1.1/32      user  0      192.168.1.254     ucst  345    2      fxp0.0
192.168.1.0/24     intf  0                        rslv  344    1      fxp0.0
192.168.1.0/32     dest  0      192.168.1.0       recv  342    1      fxp0.0
192.168.1.2/32     intf  0      192.168.1.2       locl  343    2
192.168.1.2/32     dest  0      192.168.1.2       locl  343    2
192.168.1.254/32   dest  0      28:c0:da:a0:88:0  ucst  345    2      fxp0.0
192.168.1.255/32   dest  0      192.168.1.255     bcst  336    1      fxp0.0
224.0.0.0/4        perm  0                        mdsc  35     1
224.0.0.1/32       perm  0      224.0.0.1         mcst  31     1
255.255.255.255/32 perm  0                        bcst  32     1

Routing table: __master.anon__.inet
Internet:
Destination        Type  RtRef  Next hop   Type  Index  NhRef  Netif
default            perm  0                 rjct  526    1
0.0.0.0/32         perm  0                 dscd  524    1
224.0.0.0/4        perm  0                 mdsc  525    1
224.0.0.1/32       perm  0      224.0.0.1  mcst  521    1
255.255.255.255/32 perm  0                 bcst  522    1
If a particular subnet has a route configured both through the backup router and as a static route under routing-options, there can be problems accessing the fxp0 interface. In the example above, the issue with accessing the fxp0 interface from the management device occurs when:
The same route exists both as a static route and through the backup router.
A static route exists that is more specific than the route through the backup router.
In these examples, when the routes from the primary node are synchronized to the secondary node's forwarding table, the route configured as a static route takes precedence over the route configured under backup-router. If 0/0 is configured under backup-router, the chance of a better-matching static route is higher. Hence, we recommend avoiding 0/0 under backup-router.
If you want to configure routes to the same destination by using both the backup router and a static route, split the routes when configuring backup-router. This makes the routes configured under backup-router the preferred routes and ensures that the fxp0 interface remains accessible.
[edit routing-options static]
route 0.0.0.0/0 next-hop 100.200.200.254;

[edit groups node0]
backup-router 192.168.1.254 destination [ 0.0.0.0/1 128.0.0.0/1 ];
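The split works because of longest-prefix matching: each /1 is more specific than the static 0/0, so on the secondary node the backup-router entries win while still covering every destination. A rough illustration in Python (not Junos code; route selection is simplified here to prefix length only):

```python
import ipaddress

def best_match(dest: str, routes):
    """Pick the most specific route containing dest (longest-prefix match)."""
    addr = ipaddress.ip_address(dest)
    matches = [r for r in routes if addr in r]
    return max(matches, key=lambda r: r.prefixlen)

static_default = ipaddress.ip_network("0.0.0.0/0")   # static route
halves = [ipaddress.ip_network("0.0.0.0/1"),         # backup-router
          ipaddress.ip_network("128.0.0.0/1")]       # destinations

routes = [static_default] + halves

# The two /1 prefixes together cover all of IPv4, but each one beats
# the /0 static route for any given address.
print(best_match("10.100.0.5", routes))   # 0.0.0.0/1
print(best_match("200.1.2.3", routes))    # 128.0.0.0/1
```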
Unable to Upgrade a Chassis Cluster Using In-Service Software Upgrade
Problem
Description
Unable to upgrade a chassis cluster by using the minimal downtime upgrade method.
Environment
SRX5400 chassis cluster.
Symptoms
- The cluster is stuck in node0 RG1 with the IF flag and cannot upgrade.
- A configuration commit error is shown on the CLI.
Cause
The configuration has the same prefix for a backup-router destination (on the backup Routing Engine/node) and a user interface address.
regress@R1_re# show interfaces ge-0/0/0
unit 0 {
    family inet {
        address 192.1.1.1/24;
    }
}

regress@R1_re# show groups re1 system backup-router
10.204.63.254 destination 192.1.1.1/18;

regress@R1_re# commit
re0:
configuration check succeeds
re1:
error: Cannot have same prefix for backup-router destination and interface address. ge-0/0/0.0 inet 192.1.1
error: configuration check-out failed
re0:
error: remote commit-configuration failed on re1
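The condition the commit check rejects, an interface address that falls inside the backup-router destination prefix, can be approximated with a short sketch. The helper below is hypothetical, written with Python's ipaddress module, and is not part of Junos:

```python
import ipaddress

def destination_overlaps_interface(backup_dest: str, iface_addr: str) -> bool:
    """True when the interface address lies inside the backup-router
    destination prefix, which is the overlap the commit check rejects."""
    dest = ipaddress.ip_network(backup_dest, strict=False)
    iface = ipaddress.ip_interface(iface_addr)
    return iface.ip in dest

# backup-router destination 192.1.1.1/18 vs. ge-0/0/0.0 address 192.1.1.1/24
print(destination_overlaps_interface("192.1.1.1/18", "192.1.1.1/24"))   # True

# A destination that does not contain the interface address commits cleanly.
print(destination_overlaps_interface("10.204.0.0/18", "192.1.1.1/24"))  # False
```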
Solution
In chassis cluster mode, the backup router's destination addresses for IPv4 and IPv6, set with the edit system backup-router address destination destination-address and edit system inet6-backup-router address destination destination-address commands, must not be the same as an interface address configured for IPv4 or IPv6 with the edit interfaces interface-name unit logical-unit-number family inet address ipv4-address and edit interfaces interface-name unit logical-unit-number family inet6 address ipv6-address commands.
Configuring backup-router Command on Chassis Cluster
How to configure a backup router in an SRX Series chassis cluster by using the backup-router configuration command.
Problem
Description
Intermittent connectivity issues to NSM and other management hosts from the secondary node.
Environment
SRX Series chassis cluster
Cause
Setting a destination of 0.0.0.0/0 on the backup router is not supported.
Example of an incorrect configuration:
set groups node0 system backup-router 10.10.10.1 destination 0.0.0.0/0
Solution
See Configuring a Backup Router for the recommended way to set up a backup router by using a non-zero prefix.
Example of a non-zero subnet backup-router configuration:
set groups node0 system backup-router 10.10.10.1 destination 10.100.0.0/16
As an alternative to the 0/0 backup-router destination, here is another example in which 0/0 is split into two prefixes:
set groups node0 system backup-router 10.10.10.1 destination 0.0.0.0/1
set groups node0 system backup-router 10.10.10.1 destination 128.0.0.0/1
If multiple networks need to be reachable through the backup router, you can add multiple destination entries to the configuration. The backup-router configuration is used only by the RG0 secondary node. The primary node continues to use the inet.0 route table.