ON THIS PAGE
Known Limitations
Learn about known limitations in Junos OS Evolved Release 22.2R1 for ACX Series routers.
For the most complete and latest information about known Junos OS Evolved defects, use the Juniper Networks online Junos Problem Report Search application.
General Routing
-
In some corner cases, traffic is not scheduled equally between strict-high priority queues. This can happen in the following scenario: a strict-high priority queue is configured and fully utilizes the bandwidth, so the remaining queues are starved and their traffic is completely dropped. If a second strict-high priority queue is then configured, traffic is not scheduled equally between the two strict-high priority queues. This is a hardware-specific issue that applies only to the ACX7509. The issue does not occur if a shaper is configured on the priority queue, or if traffic starts only after the configuration is in place. PR1577035
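Since the item above notes that a shaper on the priority queue avoids the issue, a minimal configuration sketch follows. The scheduler name, map name, forwarding class, rate, and interface are illustrative, not taken from the PR:

```
# Cap the strict-high queue with a shaper so it cannot monopolize
# the port bandwidth (names and rate below are illustrative)
set class-of-service schedulers sched-voice priority strict-high
set class-of-service schedulers sched-voice shaping-rate 2g
set class-of-service scheduler-maps smap-core forwarding-class voice scheduler sched-voice
set class-of-service interfaces et-0/0/1 scheduler-map smap-core
```

With the shaper in place, the strict-high queue is rate-limited, so a second strict-high queue added later still receives its share of the scheduler.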
-
PTP to PTP noise transfer fails for the frequencies 0.03125 Hz and 0.123125 Hz. PR1608786
-
On ACX7100-48L devices, the SyncE to PTP and SyncE to 1PPS noise transfer tests fail for the frequencies 0.00781 Hz, 0.01563 Hz, 0.03125 Hz, 0.06156 Hz, and 0.12313 Hz. PR1608866
-
The SyncE to PTP and SyncE to 1PPS transient response marginally fails. This happens when the servo receives the initial 100-nanosecond jump in one measurement window and the next 100-nanosecond jump in the following window, adjusting less initially. PR1608934
-
On ACX7100-48L devices, enabling or disabling PTP TC or BC causes all interfaces to flap at the same time. PR1609927
-
PTP to PTP noise transfer fails for the frequency 0.03125 Hz. PR1611838
-
On ACX7100-32C devices, the SyncE to PTP and SyncE to 1PPS noise transfer tests fail for the frequencies 0.00781 Hz, 0.01563 Hz, 0.03125 Hz, 0.06156 Hz, and 0.12313 Hz. PR1611911
-
The clear mpls lsp operation is destructive: it wipes out all existing routes and next hops in the system and performs a fresh reinstallation. The 10-second delay in traffic restoration for 16,000 L3VPN routes can be attributed to the hardware programming delay combined with the software model and the CPU capacity. PR1614413
-
The learning rate of the ACX7509 is the same as that of the ACX7100 when host /128 routes are downloaded to the LEM table on either platform. This PR reports an issue only when the scale exceeds the LEM table size; as long as the scale is within the LEM table size, the FIB download rate remains the same on the ACX7100 and ACX7509. PR1624365
-
The G.8275.1/G.8273.2 1PPS cTE performance test might be outside Class C when using channelized 10G ports for PTP BC on ACX7100-32C. On each reboot, the 1PPS cTE measurement might be within the Class C threshold, or might randomly exceed it by a few nanoseconds. PR1629819
-
The G.8275.1/G.8273.2 1PPS cTE performance test might be outside Class C when using channelized 25G ports together with 100G ports for PTP BC on ACX7100-32C. On each reboot, the 1PPS cTE measurement might be within the Class C threshold, or might randomly exceed it by a few nanoseconds. PR1637268
-
Ping and traceroute work with the reply mode ip-udp (this also applies to other Junos OS Evolved ACX Series platforms). The other reply mode, application-level-control-channel, will work once BFD over VCCV is supported. For both ping and traceroute, the default MSPW echo reply mode is application-level-control-channel; because BFD over VCCV is not yet supported, MSPW L2VPN ping and traceroute require reply-mode ip-udp to work. PR1642026
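The workaround above can be sketched as operational commands that select the ip-udp reply mode explicitly. The interface name is illustrative, and the exact command form depends on how the pseudowire is provisioned (l2vpn instance versus l2circuit neighbor), so treat this as an assumed shape rather than the only valid syntax:

```
# Request IP/UDP echo replies instead of the default
# application-level control channel (interface is illustrative)
ping mpls l2vpn interface ge-0/0/1.0 reply-mode ip-udp
traceroute mpls l2vpn interface ge-0/0/1.0 reply-mode ip-udp
```

Once BFD over VCCV is supported, the default application-level-control-channel reply mode should work without this option.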
-
Links might flap briefly (for a few milliseconds) if a switchover happens due to a primary FEB power fault. As a workaround, configure interface hold timers on the far-end routers. PR1652921
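A minimal sketch of the hold-timer workaround on a far-end router follows; the interface name and millisecond values are illustrative, not prescribed by the PR:

```
# Delay link up/down notification so millisecond-scale flaps during
# a FEB switchover are absorbed (values in ms; illustrative)
set interfaces et-0/0/1 hold-time up 2000 down 2000
```

The down timer suppresses the brief loss-of-light event from tearing down protocols on the far end; tune the values to be longer than the observed flap but short enough not to mask real failures.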
-
Load balancing does not work for 6VPE traffic when the source and destination IPv6 addresses are incremented symmetrically. For IPv4-based services (L3VPN, IPv4), the BRCM_HASH_FIELD_IPV4_ADDRESS field is set when the BCM hashing pipeline is configured, so a symmetrical increase in both source and destination IP addresses is load-balanced correctly; CCC and bridge families likewise load-balance when the source and destination MAC addresses are incremented. For IPv6 services, this field is not supported, as specified in the BCM documentation, so a symmetrical increase in source and destination IPv6 addresses is not load-balanced. Note that 6VPE traffic is otherwise load-balanced correctly (interface statistics show the output traffic split across member links et-0/0/1 and et-0/0/2); only the case where both addresses increment fails, for the reason above. The same behavior has been observed for 6PE. PR1658411
-
Junos OS Evolved follows a make-before-break (MBB) mechanism when programming next hops and routes to achieve faster convergence. The mechanism installs the new forwarding-table entry before deleting the old one, minimizing traffic loss during route convergence. However, it temporarily increases the number of forwarding paths programmed in the Packet Forwarding Engine, depending on how many times a next hop or route changes within a short period. MBB is applied during link flaps, LDP graceful restarts, LDP session flaps, and similar events. For deployments where the device is running near the upper end of the tunnel scale limits, a link flap can easily exceed the scale of the device. Once the Packet Forwarding Engine exceeds its forwarding-table capacity, any new next-hop add for a tunnel is ignored, resulting in a traffic black hole for those next hops. A link flap triggers MBB only for the tunnels associated with that particular link; in the worst case, where all links flap at once and all tunnels undergo MBB simultaneously, keep the tunnel scale at half the device limit to be certain of not exceeding it. PR1660472
Routing Protocols
-
When NSR is enabled on routers that have BGP peers in a non-forwarding routing instance, the BGP peers in that instance are not successfully replicated to the backup rpd. PR1648707