Download This Guide
Related Documentation
- Changes in Default Behavior and Syntax, and for Future Releases in Junos OS Release 12.3 for M Series, MX Series, and T Series Routers
- Errata and Changes in Documentation for Junos OS Release 12.3 for M Series, MX Series, and T Series Routers
- Outstanding Issues in Junos OS Release 12.3 for M Series, MX Series, and T Series Routers
- Resolved Issues in Junos OS Release 12.3 for M Series, MX Series, and T Series Routers
- Upgrade and Downgrade Instructions for Junos OS Release 12.3 for M Series, MX Series, and T Series Routers
New Features in Junos OS Release 12.3 for M Series, MX Series, and T Series Routers
The following features have been added to Junos OS Release 12.3. Each feature description is followed by the title of the manual or manuals to consult for further information.
- Hardware
- Class of Service
- Forwarding and Sampling
- High Availability (HA) and Resiliency
- Interfaces and Chassis
- Junos OS XML API and Scripting
- Layer 2 Features
- Layer 2 Tunneling Protocol
- MPLS
- Multicast
- Power Management
- Routing Policy and Firewall Filters
- Routing Protocols
- Security
- Subscriber Access Management
- System Logging
- User Interface and Configuration
- VPLS
- VPNs
Hardware
- SFP-GE80KCW1470-ET,
SFP-GE80KCW1490-ET, SFP-GE80KCW1510-ET, SFP-GE80KCW1530-ET, SFP-GE80KCW1550-ET,
SFP-GE80KCW1570-ET, SFP-GE80KCW1590-ET, and SFP-GE80KCW1610-ET (MX
Series)—These transceivers provide a duplex LC
connector and support operation and monitoring with links up to a
distance of 80 km. Each transceiver is tuned to a different transmit
wavelength for use in CWDM applications. These transceivers are supported
on the following interface module. For more information about interface
modules, see the Interface Module Reference for
your router.
- Gigabit Ethernet MIC with SFP (model number: MIC-3D-20GE-SFP) in all versions of MX-MPC1, MX-MPC2, and MX-MPC3—Supported in Junos OS Release 12.3R5, 13.2R3, 13.3R1, and later.
[See Gigabit Ethernet SFP CWDM Optical Interface Specification.]
- CFP-GEN2-CGE-ER4 (MX Series, T1600, and T4000)—The
CFP-GEN2-CGE-ER4 transceiver (part number: 740-049763) provides a
duplex LC connector and supports the 100GBASE-ER4 optical interface
specification and monitoring. The “GEN2” optics have been
redesigned with newer versions of internal components for reduced
power consumption. The following interface modules support the CFP-GEN2-CGE-ER4
transceiver. For more information about interface modules, see the Interface Module Reference for your router.
MX Series routers:
- 100-Gigabit Ethernet MIC with CFP (model number: MIC3-3D-1X100GE-CFP)—Supported in Junos OS Release 12.1R1 and later
- 2x100GE + 8x10GE MPC4E (model number: MPC4E-3D-2CGE-8XGE)—Supported in Junos OS Release 12.3R2 and later
T1600 and T4000 routers:
- 100-Gigabit Ethernet PIC with CFP (model numbers: PD-1CE-CFP-FPC4 and PD-1CGE-CFP)—Supported in Junos OS Releases 12.3R5, 13.2R3, 13.3R1, and later
[See 100-Gigabit Ethernet 100GBASE-R Optical Interface Specifications.]
- CFP-GEN2-100GBASE-LR4 (T1600 and T4000)—The CFP-GEN2-100GBASE-LR4
transceiver (part number: 740-047682) provides a duplex LC connector
and supports the 100GBASE-LR4 optical interface specification and
monitoring. The “GEN2” optics have been redesigned with
newer versions of internal components for reduced power consumption.
The following interface modules support the CFP-GEN2-100GBASE-LR4
transceiver. For more information about interface modules, see the Interface Module Reference for your router.
- 100-Gigabit Ethernet PIC with CFP (model numbers: PD-1CE-CFP-FPC4 and PD-1CGE-CFP)—Supported in Junos OS Releases 12.3R5, 13.2R3, 13.3R1, and later
[See 100-Gigabit Ethernet 100GBASE-R Optical Interface Specifications.]
- Support for 32-GB RE-S-1800 Routing Engine (MX Series)—Starting with Junos OS Release 12.3R4, the 32-GB RE-S-1800 Routing Engine (MX240, MX480, and MX960 model number RE-S-1800X4-32G-S; MX2010 and MX2020 model number REMX2K-1800-32G-S) is supported on MX Series routers. The 32-GB RE-S-1800 Routing Engine has a 1.8-GHz processor with 32 GB of memory, and supports both 32-bit and 64-bit Junos OS builds. On the MX2010 and MX2020 routers, the Routing Engine supports only the 64-bit Junos OS build. The new Routing Engine supports a 4-GB CompactFlash card. All CLI commands supported on the older Routing Engines are supported on the new Routing Engine.
Class of Service
- Support for class-of-service features to
ensure quality of service for real-time traffic that is sensitive
to latency on a network (MX240, MX480, MX960 Routers with Application
Services Modular Line Card)—The new Application
Services Modular Line Card (AS MLC) supports the following CoS features
on MX240, MX480, and MX960 routers:
- Code-point aliases—A code-point alias is a meaningful name that can be associated with CoS values such as Differentiated Services code points (DSCPs), DSCP IPv6, IP precedence, IEEE 802.1, and MPLS experimental (EXP) bits that can then be used while configuring CoS components.
- Classification—Packet classification associates
the packet with a particular CoS servicing level. In Junos OS, classifiers
associate incoming packets with a forwarding class and loss priority
and, based on the associated forwarding class, assign packets to output
queues.
- Behavior Aggregate—A method of classification that operates on a packet as it enters the router.
- Multifield Classification— A method of classification that can examine multiple fields in the packet.
- Fixed Classification—A method of classification that refers to the association of a forwarding class with a packet regardless of its packet contents.
[See Class of Service on Application Services Modular Line Card Overview.]
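The classification methods above can be combined with code-point aliases. The following sketch uses standard Junos CoS syntax; the alias name my-voice and classifier name ba-classifier are illustrative, not taken from this release note:

```
[edit class-of-service]
code-point-aliases {
    dscp {
        my-voice 101110;    # alias for the EF DSCP value
    }
}
classifiers {
    dscp ba-classifier {
        forwarding-class expedited-forwarding {
            loss-priority low code-points my-voice;
        }
    }
}
```

The behavior aggregate classifier assigns incoming packets carrying the aliased code point to the expedited-forwarding class with low loss priority.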
- Scheduling—Schedulers are used to define the properties
of output queues. On the AS modular carrier card (AS MCC), the following
scheduling features are supported (physical interfaces only):
- Buffer sizes
- Delay buffer size
- Drop profile map
- Excess priority
- Excess rate percentage
- Output-traffic-control profile
- Priority
- Scheduler-map
- Shaping rate
- Transmit rate
- WRED rules
[Junos OS Class-of-Service Configuration Guide]
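A minimal sketch of how several of these scheduling features fit together on a physical interface follows; the scheduler, map, and profile names and the rate values are illustrative assumptions:

```
[edit class-of-service]
schedulers {
    sched-gold {
        transmit-rate percent 40;    # guaranteed share of interface bandwidth
        buffer-size percent 40;      # delay buffer size
        priority high;
    }
}
scheduler-maps {
    smap-asmcc {
        forwarding-class assured-forwarding scheduler sched-gold;
    }
}
traffic-control-profiles {
    tcp-asmcc {
        scheduler-map smap-asmcc;
        shaping-rate 500m;           # shape aggregate output of the interface
    }
}
```

The traffic control profile is then applied to the physical interface with the output-traffic-control-profile statement.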
- Setting the 802.1p field for host-generated traffic—On MPCs and Enhanced Queuing DPCs, you can now configure the
IEEE 802.1p bits in the 802.1p field—also known as the Priority
Code Point (PCP) field—in the Ethernet frame header for host
outbound packets (control plane traffic). In earlier releases, this field was not configurable; instead, it was set automatically by CoS for host outbound traffic.
To configure a global default value for this field for all host outbound traffic, include the default value statement at the [edit class-of-service host-outbound-traffic ieee-802.1] hierarchy level. This configuration has no effect on data plane traffic; you configure rewrite rules for these packets as always.
You cannot configure a default value for the 802.1p bits for host outbound traffic on a per-interface level. However, you can specify that the CoS 802.1p rewrite rules already configured on egress logical interfaces are applied to all host outbound packets on that interface. To do so, include the rewrite-rules statement at the [edit class-of-service host-outbound-traffic ieee-802.1] hierarchy level. This capability enables you to set only the outer tags or both the outer and the inner tags on dual-tagged VLAN packets. (On Enhanced Queuing DPCs, both inner and outer tags must be set.)
This feature includes the following support:
- Address families—IPv4 and IPv6
- Interfaces—IP over VLAN demux, PPP over VLAN demux, and VLAN over Gigabit Ethernet
- Packet types—ARP, ANCP, DHCP, ICMP, IGMP, and PPP
- VLANs—Single and dual-tagged
[Class of Service]
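The two configuration options described above can be sketched as follows; the 802.1p value 6 is illustrative, and the two statements are alternatives, not used together:

```
[edit class-of-service host-outbound-traffic]
ieee-802.1 {
    default 6;          # global 802.1p value for all host outbound traffic
}
```

To instead reuse the egress rewrite rules already configured on each logical interface, replace the default statement with the rewrite-rules statement at the same hierarchy level.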
- Software feature support on the MX2020 routers—Starting with Junos OS Release 12.3, all MPCs and MICs supported on other MX Series routers in Junos OS Release 12.3 are also supported on the MX2020 routers. In addition, the MX2020 routers support all software features that are supported by other MX Series routers in Junos OS Release 12.1.
The following key Junos OS features are supported:
- Basic Layer 2 features including Layer 2 Ethernet OAM and virtual private LAN service (VPLS)
- Class-of-service (CoS)
- Firewall filters and policers
- Integrated Routing and Bridging (IRB)
- Interoperability with existing DPCs and MPCs
- Layer 2 protocols
- Layer 2 VPNs, Layer 2 circuits, and Layer 3 VPNs
- Layer 3 routing protocols and MPLS
- Multicast forwarding
- Port mirroring
- Spanning Tree Protocols (STP)
- Synchronous Ethernet and Precision Time Protocol (IEEE 1588)
- Tunnel support
[Class of Service, Ethernet Interfaces Configuration Guide, System Basics and Services Command Reference]
- Ingress CoS on MIC and MPC interfaces (MX
Series routers)—You can configure ingress CoS
parameters, including hierarchical schedulers, on MX Series routers
with MIC and MPC interfaces. In general, the supported configuration
statements apply to per-unit schedulers or to hierarchical schedulers.
To configure ingress CoS for per-unit schedulers, include the following statements at the [edit class-of-service interfaces interface-name] hierarchy level:
- input-scheduler-map
- input-shaping-rate
- input-traffic-control-profile
- input-traffic-control-profile-remaining

To configure ingress CoS for hierarchical schedulers, include the interface-set interface-set-name statement at the [edit class-of-service interfaces] hierarchy level.
Note: The interface-set statement supports only the following options:
- input-traffic-control-profile
- input-traffic-control-profile-remaining

To configure ingress CoS at the logical interface level, include the following statements at the [edit class-of-service interfaces interface-name unit logical-unit-number] hierarchy level:
- input-scheduler-map
- input-shaping-rate
- input-traffic-control-profile

[See Configuring Ingress Hierarchical CoS on MIC and MPC Interfaces.]
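A minimal per-unit-scheduler sketch follows; the interface name, profile name, and rate are illustrative assumptions, not values from this release note:

```
[edit class-of-service traffic-control-profiles tcp-ingress]
shaping-rate 2g;                         # shape ingress traffic to 2 Gbps

[edit class-of-service interfaces xe-1/0/0]
input-traffic-control-profile tcp-ingress;
```

For hierarchical schedulers, the input-traffic-control-profile statement would instead be applied under an interface-set at the [edit class-of-service interfaces] hierarchy level.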
- Extends explicit burst size configuration support on IQ2 and IQ2E interfaces—The burst size for shapers can be configured explicitly in a traffic control profile for IQ2 and IQ2E interfaces. This feature is supported on M7i, M10i, M40e, M120, M320, and all T Series routers.
To enable this feature, include the burst-size statement at the following hierarchy levels:
- [edit class-of-service traffic-control-profiles shaping-rate]
- [edit class-of-service traffic-control-profiles guaranteed-rate]

Note: The guaranteed-rate burst size value cannot be greater than the shaping-rate burst size.
[See Configuring Traffic Control Profiles for Shared Scheduling and Shaping.]
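The burst-size option can be sketched as follows; the profile name tcp-iq2 and the rate and burst values are illustrative assumptions:

```
[edit class-of-service traffic-control-profiles tcp-iq2]
shaping-rate 100m burst-size 16k;      # explicit shaper burst size
guaranteed-rate 50m burst-size 8k;     # must not exceed the shaping-rate burst size
```

Per the note above, the guaranteed-rate burst size is kept at or below the shaping-rate burst size.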
- Classification and DSCP marking of distributed protocol handler traffic—The scope of traffic affected by the host-outbound-traffic statement is expanded. When it was introduced in Junos OS Release 8.4,
the host-outbound-traffic statement at the [edit class-of-service] hierarchy level enabled you to specify the forwarding class assignment
and DiffServ code point (DSCP) value for egress traffic sent from
the Routing Engine. Affected traffic included control plane packets
(such as OSPF hello and ICMP echo reply [ping] packets) and TCP-related
packets (such as BGP and LDP control packets).
In Junos OS 12.2R2, the same configuration applies to distributed protocol handler traffic in addition to Routing Engine traffic. Distributed protocol handler traffic refers to traffic from the router’s periodic packet management process (ppm) sessions, and it includes both IP (Layer 3) traffic such as BFD keepalive (KA) messages and non-IP (Layer 2) traffic such as LACP control traffic on aggregated Ethernet. DSCP changes do not apply to MPLS EXP bits or IEEE 802.1p bits. The specified queue must be correctly configured. The affected traffic includes distributed protocol handler traffic as well as Routing Engine traffic for egress interfaces hosted on MX Series routers with Trio-based or I-chip based Packet Forwarding Engines, and on M120, M320, and T Series routers.
If you need the Routing Engine traffic and distributed protocol handler traffic to be classified in different forwarding classes or marked with different DSCP values, you must perform additional configuration: apply a standard firewall filter to the loopback interface and configure the filter actions to set the forwarding class and DSCP value, which override the host-outbound-traffic settings.
For interfaces on MX80 routers, LACP control traffic is sent through the Routing Engine rather than through the Packet Forwarding Engine.
Note: Any DSCP rewrite rules configured on a 10-Gigabit Ethernet LAN/WAN PIC with SFP+ overwrite the DSCP value rewritten as specified under the host-outbound-traffic statement.
The following partial configuration example classifies egress traffic from the Routing Engine as well as distributed protocol handler traffic:
[edit]
class-of-service {
    host-outbound-traffic {
        forwarding-class my_fc_control-traffic_dph;
        dscp-code-point 001010;
    }
    forwarding-classes {
        queue 5 my_fc_control-traffic_dph;
        queue 6 my_fc_control_traffic_re;
    }
}
interfaces {
    lo0 {
        unit 0 {
            family inet {
                filter {
                    output my_filter_reclassify_re;
                }
            }
        }
    }
}
firewall {
    filter my_filter_reclassify_re {
        term 1 {
            then {
                forwarding-class my_fc_control_traffic_re;
                dscp 001100;
                accept;
            }
        }
    }
}

The statements in the example configuration cause the router to classify egress traffic from the Routing Engine and distributed protocol handler traffic as follows:
- Distributed protocol handler traffic is classified to the my_fc_control-traffic_dph forwarding class, which is mapped to queue 5. Of those packets, Layer 3 packets are marked at egress with DSCP bits 001010 (10 decimal), which is compatible with ToS bits 00101000 (40 decimal).
- Routing Engine traffic is classified to the my_fc_control_traffic_re forwarding class, which is mapped to queue 6. Of those packets, Layer 3 packets are marked at egress with DSCP bits 001100 (12 decimal), which is compatible with ToS bits 00110000 (48 decimal).
If you do not apply the firewall filter to the loopback interface, Routing Engine-sourced traffic is classified and marked using the forwarding class and DSCP value specified in the host-outbound-traffic configuration statement.
If you omit both the firewall filter and the host-outbound-traffic configuration shown in the previous configuration, then all network control traffic—including Routing Engine-sourced and distributed protocol handler traffic—uses output queue 3 (the default output queue for control traffic), and DSCP bits for Layer 3 packets are set to the default value 0 (Best Effort service).
- Enhancements to scheduler configuration on
FRF.16 physical interfaces—Starting with Release
12.3R2, Junos OS extends the class-of-service scheduler support on
FRF.16 physical interfaces to the excess-rate, excess-priority, and drop-profile-map configurations. The excess-rate, excess-priority, and drop-profile-map statements
are configured at the [edit class-of-service schedulers scheduler-name] hierarchy level.
- Support for the drop-profile-map configuration enables you to configure random early detection (RED) on FRF.16 bundle physical interfaces.
- Support for the excess-rate configuration enables you to specify the percentage of the excess bandwidth traffic to share.
- Support for the excess-priority configuration enables you to specify the priority for excess bandwidth traffic on a scheduler.
This feature is supported only on multiservices PICs installed on MX Series routers.
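The three new scheduler statements can be sketched together as follows; the scheduler and drop-profile names and the percentages are illustrative assumptions:

```
[edit class-of-service schedulers sched-frf16]
transmit-rate percent 30;        # guaranteed bandwidth share
excess-rate percent 40;          # share of excess bandwidth
excess-priority low;             # priority for excess bandwidth traffic
drop-profile-map loss-priority high protocol any drop-profile dp-aggressive;
```

The drop-profile-map statement ties a RED drop profile (defined separately under [edit class-of-service drop-profiles]) to packets of the given loss priority.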
- Accurate reporting of output counters for MLFR UNI NNI bundles—Starting with Release 12.3R2, Junos OS reports the actual output counters in the multilink frame relay (MLFR) UNI NNI bundle statistics section of the show interfaces lsq-interface statistics command output. From this release on, Junos OS also provides per-DLCI counters for logical interfaces. In earlier releases, there was a discrepancy between the actual output counters and the reported value because of errors in calculating the output counters at the logical interface level. That is, at the logical interface level, the output counter was calculated as the sum of frames egressing on the member links instead of as the sum of per-DLCI output frames.
- Extended MPC support for per-unit schedulers—Enables you to configure per-unit schedulers on the non-queuing 16x10GE MPC and the MPC3E, meaning you can include the per-unit-scheduler statement at the [edit interfaces interface-name] hierarchy level. When per-unit schedulers are enabled, you can define dedicated schedulers for logical interfaces by including the scheduler-map statement at the [edit class-of-service interfaces interface-name unit logical-unit-number] hierarchy level. Alternatively, you can include the scheduler-map statement at the [edit class-of-service traffic-control-profiles profile-name] hierarchy level and then include the output-traffic-control-profile statement at the [edit class-of-service interfaces interface-name unit logical-unit-number] hierarchy level.
Enabling per-unit schedulers on the 16x10GE MPC and the MPC3E adds output to the show interfaces interface-name <detail | extensive> command. This additional output lists the maximum resources available and the number of configured resources for schedulers.
[Applying Scheduler Maps and Shaping Rate to DLCIs and VLANs]
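The two statements described above fit together as in the following sketch; the interface name, unit number, and map name are illustrative assumptions:

```
[edit interfaces xe-3/0/0]
per-unit-scheduler;                  # enable dedicated schedulers per logical interface

[edit class-of-service interfaces xe-3/0/0 unit 100]
scheduler-map smap-unit;             # bind a scheduler map to this logical interface
```

Alternatively, the scheduler map can be referenced from a traffic control profile and applied with output-traffic-control-profile at the same unit-level hierarchy.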
Forwarding and Sampling
- Increased forwarding capabilities for MPCs and Multiservices DPCs through FIB localization (MX Series routers)—Forwarding information base (FIB) localization characterizes the Packet Forwarding Engines in a router into two types: FIB-Remote and FIB-Local. FIB-Local Packet Forwarding Engines install all of the routes from the default route tables into Packet Forwarding Engine forwarding hardware. FIB-Remote Packet Forwarding Engines create a default (0/0) route that references a next hop or a unilist of next hops to indicate the FIB-Local Packet Forwarding Engines that can perform full IP table lookups for received packets. FIB-Remote Packet Forwarding Engines forward received packets to the set of FIB-Local Packet Forwarding Engines.
The capacity of MPCs is much higher than that of Multiservices DPCs, so an MPC is designated as the local Packet Forwarding Engine, and a Multiservices DPC is designated as the remote Packet Forwarding Engine. The remote Packet Forwarding Engine forwards all network-bound traffic to the local Packet Forwarding Engine. If multiple MPCs are designated as local Packet Forwarding Engines, then the Multiservices DPC will load-balance the traffic using the unilist of next hops as the default route.
High Availability (HA) and Resiliency
- Protocol Independent Multicast nonstop active
routing support for IGMP-only interfaces—Starting
with Release 12.3, Junos OS extends the Protocol Independent Multicast
(PIM) nonstop active routing support to IGMP-only interfaces.
In Junos OS releases earlier than 12.3, the PIM joins created on IGMP-only interfaces were not replicated on the backup Routing Engine, and so the corresponding multicast routes were marked as pruned (meaning discarded) on the backup Routing Engine. Because of this limitation, after a switchover, the new master Routing Engine had to wait for the IGMP module to come up and start receiving reports to create PIM joins and to install multicast routes. This caused traffic loss until the multicast joins and routes were reinstated.
However, in Junos OS Release 12.3 and later, the multicast joins on the IGMP-only interfaces are mapped to PIM states, and these states are replicated on the backup Routing Engine. If the corresponding PIM states are available on the backup, the multicast routes are marked as forwarding on the backup Routing Engine. This enables uninterrupted traffic flow after a switchover. This enhancement covers IGMPv2, IGMPv3, MLDv1, and MLDv2 reports and leaves.
[High Availability]
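Nonstop active routing itself is enabled with the standard Junos statements shown in this sketch; these statements are general NSR configuration rather than anything specific to the IGMP-only enhancement:

```
[edit chassis redundancy]
graceful-switchover;        # enable graceful Routing Engine switchover (required for NSR)

[edit routing-options]
nonstop-routing;            # replicate protocol state to the backup Routing Engine
```

With this configuration in place, the PIM states mapped from IGMP-only interfaces are replicated to the backup Routing Engine as described above.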
- Nonstop active routing support for RSVP—Nonstop active routing support for RSVP includes:
- Point-to-Multipoint LSPs
- RSVP Point-to-Multipoint ingress, transit, and egress LSPs using existing non-chained next hops.
- RSVP Point-to-Multipoint transit LSPs using composite next hops for Point-to-Multipoint label routes.
- Point-to-Point LSPs
- RSVP Point-to-Point ingress, transit, and egress LSPs using non-chained next hops.
- RSVP Point-to-Point transit LSPs using chained composite next hops.
- Configuration support to include GTP TEID field
in hash key for load-balancing GTP-U traffic (MX Series routers with
MPCs and MX80)—On an MX Series router with MPCs,
when there are multiple equal-cost paths to the same destination for
the active route, Junos OS uses a hash algorithm to choose one of
the next-hop addresses from the forwarding table when making a forwarding
decision. Whenever the set of next hops for a destination changes
in any way, the next-hop address is rechosen using the hash algorithm.
For GPRS tunneling protocol (GTP)-encapsulated traffic, the tunnel
endpoint identifier (TEID) field changes for traffic traversing through
peer routers. To implement load balancing for GTP-encapsulated traffic
on the user plane (GTP-U), the TEID should be included in the hash
key.
In Junos OS Release 12.3R2, you can configure GTP hashing on MX Series routers with MPCs and on MX80 routers, to include the TEID field in hash calculations for IPv4 and IPv6 packets. To configure GTP hashing and include the GTP TEID field in hash calculations, configure the gtp-tunnel-end-point-identifier statement at the [edit forwarding-options enhanced-hash-key family] hierarchy level. GTP hashing is supported for both IPv4 and IPv6 packets received for GTP-U traffic at the MPC. For bridging and MPLS packets, GTP hashing is supported for IPv4 and IPv6 packets that are carried as payload for GTP-encapsulated traffic.
Note: For IPv4 packets, GTP hashing is supported only for the nonfragmented packets.
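The statement named above can be configured for both address families as in this sketch:

```
[edit forwarding-options]
enhanced-hash-key {
    family inet {
        gtp-tunnel-end-point-identifier;    # include GTP TEID in the IPv4 hash key
    }
    family inet6 {
        gtp-tunnel-end-point-identifier;    # include GTP TEID in the IPv6 hash key
    }
}
```

This causes the hash algorithm to distribute GTP-U flows across equal-cost paths by TEID rather than only by the outer IP header fields.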
Interfaces and Chassis
- Support for fabric management features (MX240,
MX480, MX960 Routers with Application Services Modular Carrier Card)—The Application Services Modular Line Card (AS MLC) is supported
on MX240, MX480, and MX960 routers. The AS MLC consists of the following
components:
- Application Services Modular Carrier Card (AS MCC)
- Application Services Modular Processing Card (AS MXC)
- Application Services Modular Storage Card (AS MSC)
The AS MCC plugs into the chassis and provides the fabric interface. On the fabric management side, the AS MLC provides redirection functionality using a demultiplexer. The following CLI operational mode commands display fabric-related information for the AS MCC:
- show chassis fabric fpcs
- show chassis fabric map
- show chassis fabric plane
- show chassis fabric plane-location
- show chassis fabric reachability
- show chassis fabric summary
[Junos OS System Basics Configuration Guide, Junos OS System Basics and Services Command Reference]
[See Fabric Plane Management on AS MLC Modular Carrier Card Overview.]
- Support for chassis management (MX240, MX480,
MX960 Routers with Application Services Modular Line Card)—The Application Services Modular Line Card (AS MLC) is a Modular
Port Concentrator (MPC) that is designed to run services and applications
on MX240, MX480, and MX960 routers.
The following CLI operational mode commands support the chassis management operations of the modular carrier card on the AS MLC:
- show chassis environment fpc
- show chassis firmware
- show chassis fpc
- show chassis hardware
- show chassis pic
- show chassis temperature-thresholds
- request chassis fpc
- request chassis mic
- request chassis mic fpc-slot mic-slot
[Junos OS System Basics Configuration Guide, Junos OS System Basics and Services Command Reference]
- 16-Port Channelized E1/T1 Circuit Emulation MIC (MX Series routers)—Starting with Junos OS Release 12.3, the 16-Port Channelized E1/T1 Circuit Emulation MIC (MIC-3D-16CHE1-T1-CE) is supported on MX80, MX240, MX480, and MX960 routers. [See 16-Port Channelized E1/T1 Circuit Emulation MIC Overview.]
- Extends signaling support for SAToP/CESoPSN for
E1/T1 interfaces (MX Series routers)—Starting
with Junos OS Release 12.3, the E1/T1 interfaces support signaling
for Structure-Agnostic TDM over Packet (SAToP) and Circuit Emulation
Services over Packet-Switched Network (CESoPSN) through Layer 2 VPN
using BGP.
[See Configuring SAToP on Channelized E1/T1 Circuit Emulation MIC and Configuring CESoPSN on Channelized E1/T1 Circuit Emulation MIC.]
- Extends support for diagnostic, OAM, and timing
features to 16-port Channelized E1/T1 Circuit Emulation MIC (MX Series
routers)—Starting with Junos OS Release 12.3,
the 16-port Channelized E1/T1 Circuit Emulation MIC (MIC-3D-16CHE1-T1-CE)
supports the following features:
- Diagnostic features:
- Loopback: Support for E1/T1-level payload, local line, remote line, and NxDS0 payload loopbacks.
- Bit error rate test (BERT): Support for the following
BERT algorithms:
- pseudo-2e11-o152
- pseudo-2e15-o151
- pseudo-2e20-o151
- Operation, Administration, and Maintenance (OAM) features:
- Performance monitoring: Supports the following Layer 1
performance-monitoring statistics at the E1/T1 interface level for
all kinds of encapsulations:
- E1 interfaces
- BPV—Bipolar violation
- EXZ—Excessive zeros
- SEF—Severely errored framing
- BEE—Bit error event
- LCV—Line code violation
- PCV—Pulse code violation
- LES—Line error seconds
- ES—Errored seconds
- SES—Severely errored seconds
- SEFS—Severely errored framing seconds
- BES—Bit error seconds
- UAS—Unavailable seconds
- FEBE—Far-end block error
- CRC—Cyclic redundancy check errors
- LOFS—Loss of frame seconds
- LOSS—Loss of signal seconds
- T1 interfaces
- BPV—Bipolar violation
- EXZ—Excessive zeros
- SEF—Severely errored framing
- BEE—Bit error event
- LCV—Line code violation
- PCV—Pulse code violation
- LES—Line error seconds
- ES—Errored seconds
- SES—Severely errored seconds
- SEFS—Severely errored framing seconds
- BES—Bit error seconds
- UAS—Unavailable seconds
- LOFS—Loss of frame seconds
- LOSS—Loss of signal seconds
- CRC—Cyclic redundancy check errors
- CRC Major—Cyclic redundancy check major alarm threshold exceeded
- CRC Minor—Cyclic redundancy check minor alarm threshold exceeded
- Timing features: Support for the following transmit clocking
options on the E1/T1 interface:
- Looped timing
- System timing
Note: In Junos OS Release 12.3, IMA Link alarms are not supported on the 16-port Channelized E1/T1 MIC.
[See Configuring E1 Loopback Capability, Configuring E1 BERT Properties, and Interface Diagnostics.]
- Extends support for SAToP features to 16-port Channelized
E1/T1 Circuit Emulation MIC (MX Series routers)—Starting
with Junos OS Release 12.3, the 16-port Channelized E1/T1 Circuit
Emulation MIC (MIC-3D-16CHE1-T1-CE) supports E1/T1 SAToP features.
[See Configuring SAToP on Channelized E1/T1 Circuit Emulation MIC.]
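A minimal SAToP sketch on one of this MIC's interfaces follows; the interface name is illustrative, and this assumes the standard Junos satop encapsulation statement rather than anything specific to this release note:

```
[edit interfaces t1-1/0/0]
encapsulation satop;        # structure-agnostic TDM over packet on the T1 interface
```

The SAToP pseudowire itself is then configured separately, for example as a Layer 2 circuit toward the remote PE.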
- CESoPSN encapsulation support extended to 16-Port
Channelized E1/T1 Circuit Emulation MIC (MX Series routers)—Starting with Junos OS Release 12.3, support for CESoPSN encapsulation
is extended to the 16-port Channelized E1/T1 Circuit Emulation MIC
(MIC-3D-16CHE1-T1-CE).
[See Configuring CESoPSN on Channelized E1/T1 Circuit Emulation MIC.]
- SNMP and MIB support (MX2020 routers)—Starting with Junos OS Release 12.3, the enterprise-specific Chassis Definitions for Router Model MIB, jnx-chas-defines.mib, is updated to include information about the new MX2020 routers. The Chassis Definitions for Router Model MIB contains the object identifiers (OIDs) used by the Chassis MIB to identify platform and chassis components of each router.
[See jnxBoxAnatomy, Chassis Definitions for Router Model MIB, and MIB Objects for the MX2020 3D Universal Edge Router.]
- Junos OS support for FRU management of MX2020 routers—Starting with Release 12.3, Junos OS supports the new MX2020
routers. The MX2020 routers are the next generation of MX Series 3D
Universal Edge Routers. The Junos OS chassis management software for
the MX2020 routers provides enhanced environmental monitoring and
field-replaceable unit (FRU) control. FRUs supported on the MX2020
routers include:
- RE and CB—Routing Engine and Control Board including a Processor Mezzanine Board (PMB)
- PDM—Power distribution module
- PSM—Power supply module
- Fan trays
- SFB—Switch Fabric Board
- Front panel display
- Adapter cards
- Line cards
The MX2020 router supports up to two Control Boards (CBs) with the second CB being used as a redundant CB. The CB provides control and monitoring functions for the router. Adapter card and switch fabric board FRU management functionality is controlled by a dedicated processor housed on the Processor Mezzanine Board. The MX2020 router supports 20 adapter cards and 8 Switch Fabric Boards (SFBs).
The MX2020 chassis has two cooling zones. Fans operating in one zone have no impact on cooling in another zone, enabling the chassis to run fans at different speeds in different zones. The chassis can coordinate FRU temperatures in each zone and the fan speeds of the fan trays in these zones.
The power system on the MX2020 routers consists of three components: the power supply modules (PSMs), the power distribution module (PDM), and the power midplane. The MX2020 router chassis supplies N+N feed redundancy, N+1 power supply redundancy for line cards, and N+N power supply redundancy for the critical FRUs. The critical FRUs include two CBs, eight SFBs, and three fan trays (two fan trays in one zone and one fan tray in the other zone). If some PSMs are absent, fail, or are removed during operation, service interruption is minimized by keeping the affected FPCs online without supplying redundant power to them. You can use the following configuration statement to configure power management on the router chassis:
- fru-poweron-sequence—Include the fru-poweron-sequence statement at the [edit chassis] hierarchy level to configure the power-on sequence for the FPCs in the chassis.
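The statement named above might be configured as in this sketch; the slot numbers and the space-separated list format are assumptions for illustration, not taken from this release note:

```
[edit chassis]
fru-poweron-sequence "0 1 2 3";    # assumed format: FPC slots in desired power-on order
```

After commit, the FPCs are powered on in the configured order, which can be verified with the show chassis power sequence command listed below.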
Table 1: Maximum FRUs Supported on the MX2020 Router

FRU                       Maximum Number
Routing Engines and CBs   2
PDMs                      4
PSMs                      18
Fan trays                 4
SFBs                      8
Front panel display       1
Adapter cards             20
Line cards                20
The following CLI operational mode commands support the various FRU and power management operations on MX2020 routers:
Show commands:
- show chassis adc
- show chassis alarms
- show chassis environment
- show chassis environment adc adc-slot-number
- show chassis environment cb cb-slot-number
- show chassis environment fpc fpc-slot-number
- show chassis environment fpm fpm-slot-number
- show chassis environment monitored
- show chassis environment psm psm-slot-number
- show chassis environment routing-engine routing-engine-slot-number
- show chassis environment sfb sfb-slot-number
- show chassis craft-interface
- show chassis ethernet-switch < errors | statistics >
- show chassis fabric destinations
- show chassis fabric fpcs
- show chassis fabric plane
- show chassis fabric plane-location
- show chassis fabric summary
- show chassis fan
- show chassis firmware
- show chassis fpc < detail | pic-status | fpc-slot-number >
- show chassis hardware < clei-models | detail | extensive | models >
- show chassis in-service-upgrade
- show chassis mac-addresses
- show chassis network-services
- show chassis pic fpc-slot fpc-slot-number pic-slot pic-slot-number
- show chassis power
- show chassis power sequence
- show chassis routing-engine < routing-engine-slot-number | bios >
- show chassis sfb < slot sfb-slot-number >
- show chassis spmb
- show chassis temperature-thresholds
- show chassis zones < detail >
Request commands:
- request chassis cb ( offline | online ) slot slot-number
- request chassis fabric plane ( offline | online ) fabric-plane-number
- request chassis fpc ( offline | online | restart ) slot fpc-slot-number
- request chassis fpm resync
- request chassis mic ( offline | online ) fpc-slot fpc-slot-number mic-slot mic-slot-number
- request chassis routing-engine master ( acquire | release | switch ) < no-confirm >
- request chassis sfb ( offline | online ) slot sfb-slot-number
- request chassis spmb restart slot spmb-slot-number
Restart command:
- restart chassis-control < gracefully | immediately | soft >
For details of all system management operational mode commands and the command options supported on the MX2020 router, see the System Basics and Services Command Reference.
[See System Basics: Chassis-Level Features Configuration Guide.]
- SAToP support extended to Channelized OC3/STM1
(Multi-Rate) Circuit Emulation MIC with SFP (MX Series routers)—Starting with Junos OS Release 12.3R1, support for Structure-Agnostic
Time-Division Multiplexing over Packet (SAToP) is extended to MIC-3D-4COC3-1COC12-CE.
You can configure 336 T1 channels on each COC12 interface on this
MIC.
[See Configuring SAToP on Channelized OC3/STM1 (Multi-Rate) Circuit Emulation MIC with SFP and Configuring SAToP Encapsulation on T1/E1 Interfaces on Channelized OC3/STM1 (Multi-Rate) Circuit Emulation MIC with SFP.]
- CESoPSN support extended to Channelized OC3/STM1
(Multi-Rate) Circuit Emulation MIC with SFP (MX Series routers)—Starting with Junos OS Release 12.3, support for Circuit Emulation
Service over Packet-Switched Network (CESoPSN) is extended to MIC-3D-4COC3-1COC12-CE.
You can configure 336 CT1 channels on each COC12 interface on this
MIC.
[See Configuring CESoPSN on Channelized OC3/STM1 (Multi-Rate) Circuit Emulation MIC with SFP and Configuring CESoPSN Encapsulation on DS Interfaces on Channelized OC3/STM1 (Multi-Rate) Circuit Emulation MIC with SFP.]
- Support for ATM PWE3 on Channelized OC3/STM1 (Multi-Rate)
Circuit Emulation MIC with SFP (MX80 routers with a modular chassis,
and MX240, MX480, and MX960 routers)—Starting
with Junos OS Release 12.3, ATM Pseudowire Emulation Edge to Edge
(PWE3) is supported on channelized T1/E1 interfaces of the Channelized
OC3/STM1 (Multi-Rate) Circuit Emulation MIC with SFP (MIC-3D-4COC3-1COC12-CE).
The following PWE3 features are supported:
- ATM pseudowire encapsulation. The pseudowire encapsulation can be either cell-relay or AAL5 transport mode. Both modes enable the transport of ATM cells across a packet-switched network (PSN).
- Cell-relay VPI/VCI swapping. The Channelized OC3/STM1 (Multi-Rate) Circuit Emulation MIC with SFP can overwrite the virtual path identifier (VPI) and virtual channel identifier (VCI) header values on egress only or on both ingress and egress.
Note: Cell-relay VPI swapping on both ingress and egress is not compatible with the ATM policing feature.
To configure the Channelized OC3/STM1 (Multi-Rate) Circuit Emulation MIC with SFP to modify both the VPI and VCI header values on both ingress and egress, you must specify the psn-vci statement at the following hierarchy level:
[edit interfaces at-fpc/pic/port unit logical-unit-number]
To configure the Channelized OC3/STM1 (Multi-Rate) Circuit Emulation MIC with SFP to modify only the VPI header values on both ingress and egress, you must specify the psn-vpi statement at the following hierarchy level:
[edit interfaces at-fpc/pic/port unit logical-unit-number]
To configure the Channelized OC3/STM1 (Multi-Rate) Circuit Emulation MIC with SFP to pass the VPI and VCI header values transparently, you must specify the no-vpivci-swapping statement at the following hierarchy level:
[edit interfaces at-fpc/pic/port unit logical-unit-number]
If none of the aforementioned configuration statements are included, for virtual path pseudowires, VPI values are modified on egress, whereas for virtual channel pseudowires, both VPI and VCI header values are modified on egress.
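The three variants described above can be sketched as follows; the interface name and unit number are hypothetical placeholders, and only one of the three statements would be configured on a given logical interface:

```
user@host# set interfaces at-1/0/0 unit 0 psn-vci
user@host# set interfaces at-1/0/0 unit 0 psn-vpi
user@host# set interfaces at-1/0/0 unit 0 no-vpivci-swapping
```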
- Pseudowire ATM MIB support for Channelized OC3/STM1
(Multi-Rate) Circuit Emulation MIC with SFP (MX80 routers with a modular
chassis, and MX240, MX480, and MX960 routers)—Starting
with Release 12.3, Junos OS extends Pseudowire ATM MIB support to
the Channelized OC3/STM1 (Multi-Rate) Circuit Emulation MIC with SFP
(MIC-3D-4COC3-1COC12-CE).
[See Interpreting the Enterprise-Specific Pseudowire ATM MIB.]
- Multiple VRRP owners per physical port—Support for multiple owner addresses per physical interface, allowing users to reuse interface address identifiers (IFAs) as virtual IP addresses (VIPs).
- Chassis daemon enhancements for the MFC application
on the Routing Engine—The chassis daemon (chassisd) process
runs on the Routing Engine to communicate directly with its peer processes
running on the Packet Forwarding Engine. Starting with Junos OS Release
12.1, the chassisd process has been enhanced to enable the Media Flow
Controller (MFC) application to run on a Dense Port Concentrator (DPC)
with an x86 blade for high application throughput and a large amount
of solid state storage on MX Series routers. The chassisd process
detects the installation of the modular x86 blade for MFC services
and monitors the physical status of hardware components and the field-replaceable
units (FRUs) that enable MFC to be run on the x86 blade.
[System Basics]
- Support for aggregated SONET/SDH Interfaces (MX
Series Routers)—Starting with Junos OS Release
12.3, you can configure aggregated SONET bundles with the member links
of SONET/SDH OC3/STM1 (Multi-Rate) MICs with SFP, that is, MIC-3D-8OC3OC12-4OC48
and MIC-3D-4OC3OC12-1OC48.
Junos OS enables link aggregation of SONET/SDH interfaces; this is similar to Ethernet link aggregation, but is not defined in a public standard. Junos OS balances traffic across the member links within an aggregated SONET/SDH bundle based on the Layer 3 information carried in the packet. This implementation uses the same load-balancing algorithm used for per-packet load balancing.
The following features are supported on MIC-3D-8OC3OC12-4OC48 and MIC-3D-4OC3OC12-1OC48:
- Encapsulation—Point-to-Point Protocol (PPP) and Cisco High-Level Data Link Control (Cisco HDLC)
- Filters and policers—Single-rate policers, three-color marking policers, two-rate three-color marking policers, hierarchical policers, and percentage-based policers. By default, policer bandwidth and burst size applied on aggregated bundles are not matched to the user-configured bandwidth and burst size.
- Mixed mode links
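A hedged sketch of an aggregated SONET/SDH bundle, assuming the conventional asX aggregated SONET interface naming and hypothetical member interfaces and addressing:

```
[edit chassis]
user@host# set aggregated-devices sonet device-count 1
[edit interfaces]
user@host# set so-0/1/0 sonet-options aggregate as0
user@host# set so-0/1/1 sonet-options aggregate as0
user@host# set as0 unit 0 family inet address 10.0.0.1/30
```

Traffic is then balanced across so-0/1/0 and so-0/1/1 based on Layer 3 packet information, as described above.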
- Support for synchronizing an MX240, MX480, or MX960 router chassis with an Enhanced MX SCB to an external BITS timing source—This feature uses the Building Integrated Timing Supply (BITS) external clock interface (ECI) on the Enhanced MX SCB. The BITS ECI can also be configured to display the selected chassis clock source (SETS) or a recovered line clock source (Synchronous Ethernet or Precision Time Protocol). You can configure the BITS ECI by using the synchronization statement at the [edit chassis] hierarchy level. You can view the BITS ECI information with the show chassis synchronization extensive command.
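A minimal sketch of where the configuration lives and how to verify it; the specific options under the synchronization hierarchy vary by deployment, so consult the configuration guide for the exact statements:

```
user@host# edit chassis synchronization
[edit chassis synchronization]
user@host# run show chassis synchronization extensive
```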
- Aggregated interfaces support increased to 64 links (MX Series)—This feature adds support for specifying up to 64 links for aggregated devices. You set the number of links in the new maximum-links statement at the [chassis aggregated-devices] hierarchy level.
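For example, a sketch of raising the per-bundle link limit to the new maximum:

```
[edit chassis aggregated-devices]
user@host# set maximum-links 64
```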
- Junos OS support for new MX2010 routers—Starting with Release 12.3, Junos OS supports the new MX2010
routers. The MX2010 routers are an extension of the MX2020 routers
and support all features supported by the MX2020 routers. Also, the
MX2010 routers support all software features that are supported by
other MX Series routers in Junos OS Release 12.1.
The power system on the MX2010 routers consists of three components: the power supply modules (PSMs), the power distribution module (PDM), and the power midplane. The power feed (AC or DC) is connected to the PDM. The PDM delivers power from the feeds to the power midplane, which provides it to the PSMs. Output from the PSMs is sent back to the power midplane and then to the field-replaceable units (FRUs). The MX2010 router chassis supplies N+N feed redundancy and N+1 PSM redundancy for line cards. If some PSMs fail or are removed during operation, service interruption is minimized by keeping as many affected FPCs online as possible by supplying redundant power to these FPCs. Unlike the MX2020 router chassis, the MX2010 router chassis does not provide redundancy for the critical FRUs because there is only one power zone.
Include the following existing configuration statement at the [edit chassis] hierarchy level to configure the power-on sequence for the FPCs in the chassis:
[edit chassis]
fru-poweron-sequence fru-poweron-sequence
Junos OS also supports the following CLI operational mode commands for chassis management of MX2010 routers:
Show commands:
- show chassis adc
- show chassis alarms
- show chassis environment adc <adc-slot-number>
- show chassis environment cb <cb-slot-number>
- show chassis environment fpc <fpc-slot-number>
- show chassis environment fpm
- show chassis environment monitored
- show chassis environment psm <psm-slot-number>
- show chassis environment routing-engine <routing-engine-slot-number>
- show chassis environment sfb <sfb-slot-number>
- show chassis environment <adc | cb | fpc | fpm | monitored | psm | routing-engine | sfb>
- show chassis craft-interface
- show chassis ethernet-switch <errors | statistics>
- show chassis fabric destinations <fpc fpc-slot-number>
- show chassis fabric (fpcs | plane | plane-location | summary)
- show chassis fan
- show chassis firmware
- show chassis fpc <detail | pic-status | fpc-slot-number>
- show chassis hardware <clei-models | detail | extensive | models>
- show chassis in-service-upgrade
- show chassis mac-addresses
- show chassis network-services
- show chassis pic fpc-slot fpc-slot-number pic-slot pic-slot-number
- show chassis power <sequence>
- show chassis routing-engine <routing-engine-slot-number | bios>
- show chassis sfb <slot sfb-slot-number>
- show chassis spmb
- show chassis temperature-thresholds
- show chassis zones <detail>
Request commands:
- request chassis cb (offline | online) slot slot-number
- request chassis fabric plane (offline | online) fabric-plane-number
- request chassis fpc (offline | online | restart) slot fpc-slot-number
- request chassis fpm resync
- request chassis mic (offline | online) fpc-slot fpc-slot-number mic-slot mic-slot-number
- request chassis routing-engine master (acquire | release | switch) <no-confirm>
- request chassis sfb (offline | online) slot sfb-slot-number
- request chassis spmb restart slot spmb-slot-number
Restart command:
- restart chassis-control <gracefully | immediately | soft>
For details of all system management operational mode commands and the command options supported on the MX2010 router, see the System Basics and Services Command Reference.
[System Basics and Services Command Reference]
- SNMP and MIB support for MX2010 routers—Starting with Junos OS Release 12.3, the enterprise-specific Chassis Definitions for Router Model MIB, jnx-chas-defines.mib, is updated to include information about the new MX2010 routers. The Chassis Definitions for Router Model MIB contains the object identifiers (OIDs) used by the Chassis MIB to identify platform and chassis components of each router.
[See jnxBoxAnatomy, Chassis Definitions for Router Model MIB, and MIB Objects for the MX2010 3D Universal Edge Router.]
- Improvements to interface transmit statistics
reporting (MX Series devices)—On MX Series devices,
the logical interface-level statistics show only the offered load,
which is often different from the actual transmitted load. To address
this limitation, Junos OS introduces a new configuration option in
Releases 11.4R3 and 12.3R1 and later. The new configuration option, interface-transmit-statistics at the [edit
interfaces interface-name] hierarchy
level, enables you to configure Junos OS to accurately capture and
report the transmitted load on interfaces.
When the interface-transmit-statistics statement is included at the [edit interfaces interface-name] hierarchy level, the following operational mode commands report the actual transmitted load:
- show interfaces interface-name <detail | extensive>
- monitor interface interface-name
- show snmp mib get objectID.ifIndex
Note: This configuration is not supported on Enhanced IQ (IQE) and Enhanced IQ2 (IQ2E) PICs.
The show interfaces interface-name command also shows whether the interface-transmit-statistics configuration is enabled or disabled on the interface.
[See Improvements to Interface Transmit Statistics Reporting.]
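Enabling and checking the feature can be sketched as follows; the interface name ge-1/0/0 is a hypothetical example:

```
[edit]
user@host# set interfaces ge-1/0/0 interface-transmit-statistics
user@host# commit
user@host# run show interfaces ge-1/0/0 extensive
```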
- Extends support for encapsulating TDM signals as
pseudowires for E1/T1 Circuit Emulation MIC (MX Series routers)—Starting with Junos OS Release 12.3, the Channelized E1/T1
Circuit Emulation MIC (MIC-3D-16CHE1-T1-CE) supports encapsulating
structured (NxDS0) time division multiplexed
(TDM) signals as pseudowires over packet-switched networks (PSNs).
[See Configuring SAToP Emulation on T1/E1 Interfaces on Circuit Emulation PICs.]
- RE-JCS-1X2400-48G-S Routing Engine—The JCS-1200 Control System now supports the RE-JCS-1X2400-48G-S Routing Engine. The RE-JCS-1X2400-48G-S Routing Engine requires the enhanced management module (model number MM-E-JCS-S). The RE-JCS-1X2400-48G-S Routing Engine provides a 2.4-GHz dual-core Xeon processor, 48 GB of memory, and two 128-GB hot-pluggable solid-state drives. The RE-JCS-1X2400-48G-S Routing Engine supports the same functionality as the other Routing Engines supported on the JCS-1200.
- SFPP-10GE-ZR transceiver—The following PICs on the T640, T1600, and T4000 routers now
support the SFPP-10GE-ZR transceiver. The SFPP-10GE-ZR transceiver
supports the 10GBASE-Z optical interface standard. For more information,
see “Cables and connectors” in the PIC guide.
T640 Router:
- 10-Gigabit Ethernet LAN/WAN PIC with SFP+ (Model number: PD-5-10XGE-SFPP)
T1600 Router:
- 10-Gigabit Ethernet LAN/WAN PIC with Oversubscription and SFP+ (Model number: PD-5-10XGE-SFPP)
T4000 Router:
- 10-Gigabit Ethernet LAN/WAN PIC with SFP+ (Model number: PF-12XGE-SFPP)
- 10-Gigabit Ethernet LAN/WAN PIC with Oversubscription and SFP+ (Model numbers: PD-5-10XGE-SFPP for 10-Port Type 4 PIC and PF-24XGE-SFPP for 24-Port Type 5 PIC)
[See 10-Gigabit Ethernet 10GBASE Optical Interface Specifications, T640 Core Router PIC Guide , T1600 Core Router PIC Guide , and T4000 Core Router PIC Guide .]
- CFP-100GBASE-ER4
and CFP-100GBASE-SR10 Transceivers—The following
PICs on the T1600 and T4000 routers now support the CFP-100GBASE-ER4
and CFP-100GBASE-SR10 transceivers. The CFP-100GBASE-ER4 transceiver
supports the 100GBASE-ER4 optical interface standard. The CFP-100GBASE-SR10
transceiver supports the 100GBASE-SR10 optical interface standard.
For more information, see “Cables and connectors” in the
PIC guide.
- T1600 Router: 100-Gigabit Ethernet PIC with CFP (Model number: PD-1CE-CFP-FPC4)
- T4000 Router: 100-Gigabit Ethernet PIC with CFP (Model numbers: PF-1CGE-CFP for Type 5 and PD-1CE-CFP-FPC4 for Type 4)
[See 100-Gigabit Ethernet 100GBASE-R Optical Interface Specifications, T1600 Core Router PIC Guide , and T4000 Core Router PIC Guide .]
- Accounting of system statistics for IPv4
and IPv6 traffic—On MX Series routers, you can
enable accounting of system statistics for IPv4 and IPv6 traffic by
including the extended-statistics statement at the [edit chassis] hierarchy level. By default, accounting of system
statistics is disabled.
[See extended-statistics.]
- Fabric enhancements for MX2020 and MX2010 routers—MX2020 and MX2010 routers now support all existing fabric hardening enhancements.
- Support
for unified in-service software upgrade (TX Matrix Plus router)—Starting with Junos OS Release 12.3R2, unified in-service
software upgrade (unified ISSU) is supported on a routing matrix based
on a TX Matrix Plus router with the TXP-T1600 configuration.
Unified ISSU is a process to upgrade the system software with minimal disruption of transit traffic and no disruption of the control plane. In this process, the new system software version must be later than the previous system software version. When unified ISSU completes, the new system software state is identical to the state the software would have reached if the upgrade had been performed by powering the system off and then back on.
- Enhancement to ping ethernet command—Enables
you to specify a multicast MAC address. For example:
user@host> ping ethernet maintenance-domain md3 maintenance-association ma3 01:80:c2:00:00:33
- Symmetric load balancing on MX Series routers
with MPCs—Enables support for symmetrical load
balancing over 802.3ad link aggregation groups (LAGs) on MX Series
routers with MPCs.
[See Configuring Symmetrical Load Balancing on an 802.3ad Link Aggregation Group on MX Series Routers.]
- Computation of the Layer 2 overhead attribute in
interface statistics (MX Series routers)—On MX
Series routers, you can configure the physical interface and logical
interface statistics to include the Layer 2 overhead size (header
and trailer bytes) for both ingress and egress interfaces. Both the
transit and total statistical information are computed and displayed
for each logical interface. This functionality is supported on 1-Gigabit
and 10-Gigabit Ethernet interfaces on Dense Port Concentrators (DPCs)
and Modular Port Concentrators (MPCs).
You can enable the inclusion of Layer 2 overhead bytes in the logical interface statistics by configuring the account-layer2-overhead (value | <ingress bytes | egress bytes>) statement at the [edit interfaces interface-name unit logical-unit-number] hierarchy level. If you configure this capability, all the Layer 2 header details (L2 header and cyclic redundancy check [CRC]), based on the Layer 2 encapsulation configured for an interface, are calculated and displayed in the physical and logical interface statistics for ingress and egress interfaces in the output of the show interfaces interface-name commands:
- For physical and logical interfaces, the Input bytes and Output bytes fields under the Traffic statistics section in the output of the show interfaces interface-name <detail | extensive> command include the Layer 2 overhead of the packets.
- For physical and logical interfaces, the Input Rate and Output Rate fields under the Traffic statistics section in the output of the show interfaces interface-name <media | statistics> command include the Layer 2 overhead of the packets.
- For logical interfaces, the values for the newly added Egress accounting overhead and Ingress accounting overhead fields display the Layer 2 overhead size for transmitted and received packets, respectively.
The ifInOctets and the ifOutOctets MIB objects display statistics that include Layer 2 overhead bytes if you configured the setting to account for Layer 2 overhead at the logical interface level.
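Enabling the accounting described above can be sketched as follows; the interface name and the overhead value of 24 bytes are hypothetical examples:

```
[edit]
user@host# set interfaces ge-1/0/0 unit 0 account-layer2-overhead 24
user@host# run show interfaces ge-1/0/0 extensive
```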
- New label-switching router (LSR) FPC (model number
T4000-FPC5-LSR)—The new LSR FPC in a T4000 core
router provides LSR capability with the following scaling numbers:
- RIB capacity: 28 million
- FIB (IPv4 and IPv6 unicast): 64,000
- MPLS label push: 48,000
- MPLS label FIB/MPLS swap table: 256,000
- IP multicast route capacity: 256,000
- Multicast forwarding table (S,G): 128,000
- RSVP LSPs: 32,000 (ingress/egress), 64,000 (transit)
- Layer 2 VPN ingress/egress with family CCC/TCC: 8000
The LSR FPC operates in the following modes:
- Packet transport mode—When the LSR FPC operates as an LSR only, the LSR FPC scaling numbers are supported.
- Converged P/PE mode—In a mixed provider (P) and provider edge (PE) router deployment, the LSR FPC might receive routes and next hops that exceed the LSR scaling numbers that are supported. In that case, extended scaling numbers, such as for the T4000-FPC5-3D, are supported.
- PE router mode—In PE router mode, running a Layer 3 VPN or peering services from the LSR FPC is not supported.
[See the T4000 Core Router Hardware Guide .]
- Support for new fixed-configuration MPC on MX240,
MX480, MX960, and MX2020 routers—MX2020, MX960,
MX480, and MX240 routers support a new MPC, MPC4E. MPC4E provides
scalability in bandwidth and services capabilities of the routers.
MPC4E, like other MPCs, provides the connection between the customer's
Ethernet interfaces and the routing fabric of the MX Series chassis.
MPC4E is a fixed-configuration MPC and does not contain separate slots
for Modular Interface Cards (MICs). MPC4E is available in two models:
MPC4E-3D-32XGE-SFPP and MPC4E-3D-2CGE-8XGE.
MPC4E, like MPC3E, requires the Enhanced MX Switch Control Board (SCBE) for fabric redundancy. MPC4E does not support legacy SCBs. MPC4E interoperates with existing MX Series line cards, including Dense Port Concentrators (DPCs) and Modular Port Concentrators (MPCs).
MPC4E contains two Packet Forwarding Engines: PFE0 hosts PIC0 and PIC1, while PFE1 hosts PIC2 and PIC3.
MPC4E supports:
- Forwarding capability of up to 130 Gbps per Packet Forwarding Engine. On MX240, MX480, and MX960 routers with Enhanced MX Switch Control Boards (SCBEs), each Packet Forwarding Engine can forward up to 117 Gbps because of Packet Forwarding Engine and fabric limitations. On MX2020 routers, each Packet Forwarding Engine can forward up to 130 Gbps.
- Both 10-Gigabit Ethernet interfaces and 100-Gigabit Ethernet interfaces.
- Small form-factor pluggable (SFP) and C form-factor pluggable (CFP) transceivers for connectivity.
- Up to 240 Gbps of full-duplex traffic.
- Intelligent oversubscription services.
- WAN-PHY mode on 10-Gigabit Ethernet Interfaces on a per-port basis.
- Up to four full-duplex tunnel interfaces on each MPC4E.
- Effective line rate of 200 Gbps for packets larger than 300 bytes.
MPC4E supports feature parity for the following Junos OS Release 12.3 software features:
- Basic Layer 2 features and virtual private LAN service (VPLS) functionality, including Operation, Administration, and Maintenance (OAM)
- Class-of-service (CoS) support
- Firewall filters and policers
- Interoperability with existing DPCs and MPCs
- Internet Group Management Protocol (IGMP) snooping with bridging, integrated routing and bridging (IRB), and VPLS
- Layer 3 routing protocols
- J-Flow monitoring and services
- MPLS
- Multicast forwarding
- Precision Time Protocol (IEEE 1588)
- Tunnel Interfaces support
The following features are not supported on the MPC4E:
- Fine-grained queuing and input queuing
- Intelligent hierarchical policers
- Layer 2 trunk port
- MPLS fast reroute (FRR) VPLS instance prioritization
- Multilink services
- Virtual Chassis support
For more information about the supported and unsupported Junos OS software features for this MPC, see Protocols and Applications Supported by the MX240, MX480, MX960, and MX2000 MPC4E in the MX Series Line Card Guide.
- Support for Ethernet synthetic loss measurement—You can trigger on-demand and proactive Operations, Administration,
and Maintenance (OAM) for measurement of statistical counter values
corresponding to ingress and egress synthetic frames. Frame loss
is calculated using synthetic frames instead of data traffic. These
counters maintain a count of transmitted and received synthetic frames
and frame loss between a pair of maintenance association end points
(MEPs).
The Junos OS implementation of Ethernet synthetic loss measurement (ETH-SLM) is fully compliant with the ITU-T Recommendation Y.1731. Junos OS maintains various counters for ETH-SLM PDUs, which can be retrieved at any time for sessions that are initiated by a certain MEP. You can clear all the ETH-SLM statistics and PDU counters.
The ETH-SLM feature provides the option to perform ETH-SLM for a given 802.1p priority, to set the size of the ETH-SLM protocol data unit (PDU), and to generate XML output.
You can perform ETH-SLM in on-demand ETH-SLM mode (triggered through the CLI) or in proactive ETH-SLM mode (triggered by the iterator application). To trigger synthetic frame loss measurement (on-demand mode) and provide a run-time display of the measurement values, use the monitor ethernet synthetic-loss-measurement (remote-mac-address | mep mep-id) maintenance-domain md-name maintenance-association ma-name count frame-count wait time priority 802.1p value size xml operational mode command.
To display the archived on-demand synthetic frame loss measurement values, use the show oam ethernet connectivity-fault-management synthetic-loss-statistics maintenance-domain md-name maintenance-association ma-name local-mep local-mep-id remote-mep remote-mep-id count entry-count operational mode command. To display the cumulative on-demand synthetic frame loss measurement values, use the show oam ethernet connectivity-fault-management interfaces detail operational mode command.
To perform proactive ETH-SLM, you need to create an SLA iterator profile and associate the profile with a remote MEP. To create an SLA iterator profile for ETH-SLM, include the measurement-type slm statement at the [edit protocols oam ethernet connectivity-fault-management performance-monitoring sla-iterator-profiles profile-name] hierarchy level. To display proactive synthetic loss measurement values, use the show oam ethernet connectivity-fault-management sla-iterator-statistics maintenance-domain md-name maintenance-association ma-name local-mep local-mep-id remote-mep remote-mep-id sla-iterator identifier operational mode command.
You can reset the SLM statistics by clearing the currently measured ETH-SLM statistical counters. To clear the existing on-demand Ethernet loss statistics measured for a specific maintenance domain and maintenance association local and remote MEP and restart the counter, use the clear oam ethernet connectivity-fault-management synthetic-loss-measurement maintenance-domain md-name maintenance-association ma-name local-mep local-mep remote-mep remote-mep operational mode command. To clear the existing proactive ETH-SLM counters for a specific maintenance domain, maintenance association, local MEP, remote MEP, and an SLA iterator, use the clear oam ethernet connectivity-fault-management sla-iterator-statistics maintenance-domain md-name maintenance-association ma-name local-mep local-mep-id remote-mep remote-mep-id sla-iterator identifier operational mode command.
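The proactive ETH-SLM profile creation described above can be sketched as follows; the profile name slm-profile is a hypothetical example, and the statement that associates the profile with a remote MEP is not shown:

```
[edit protocols oam ethernet connectivity-fault-management]
user@host# set performance-monitoring sla-iterator-profiles slm-profile measurement-type slm
```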
The following list consists of the connectivity fault management (CFM)-related operational mode commands that display ETH-SLM statistics:
- The show oam ethernet connectivity-fault-management interfaces detail command is enhanced to display on-demand ETH-SLM statistics for MEPs in the specified CFM maintenance association within the specified CFM maintenance domain.
- The show oam ethernet connectivity-fault-management mep-statistics command is enhanced to display on-demand ETH-SLM statistics and frame counts for MEPs in the specified CFM maintenance association within the specified CFM maintenance domain.
- The show oam ethernet connectivity-fault-management mep-database command is enhanced to display on-demand ETH-SLM frame counters for MEPs in the specified CFM maintenance association within the specified CFM maintenance domain.
- The show oam ethernet connectivity-fault-management sla-iterator-statistics command is enhanced to display service-level agreement (SLA) iterator statistics for ETH-SLM.
[Release Notes]
- Support for OSS mapping to represent a T4000 chassis
as a T1600 or a T640 chassis (T4000 routers)—Starting
with Junos OS Release 12.3R3, you can map a T4000 chassis to a T1600
chassis or a T640 chassis, so that the T4000 chassis is represented
as a T1600 chassis or a T640 chassis, respectively, without changing
the operations support systems (OSS) qualification. Therefore, you
can avoid changes to the OSS when a T1600 chassis or a T640 chassis
is upgraded to a T4000 chassis.
You can configure the OSS mapping feature with the set oss-map model-name (t640 | t1600) configuration command at the [edit chassis] hierarchy level. This command changes the chassis model reported in the output of the show chassis hardware and the show chassis oss-map operational mode commands. You can verify the change with the show snmp mib walk system and show snmp mib walk jnxBoxAnatomy operational commands as well.
You can delete the OSS mapping feature by using the delete chassis oss-map model-name (t640 | t1600) configuration command.
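For example, mapping a T4000 chassis so that it is represented as a T1600 chassis, then verifying the result:

```
[edit chassis]
user@host# set oss-map model-name t1600
user@host# commit
user@host# run show chassis oss-map
```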
- Enhanced load balancing for MIC and MPC interfaces
(MX Series) — Starting with Junos OS Release 12.3R4,
the following load-balancing solutions are supported on an aggregated
Ethernet bundle to correct genuine traffic imbalance among the member
links:
- Adaptive — Uses a real-time feedback and control mechanism to monitor and manage traffic imbalances.
- Per-packet random spray — Randomly sprays packets to the aggregate next hops to ensure that the next hops are equally loaded; this can result in packet reordering.
The aggregated Ethernet load-balancing solutions are mutually exclusive. To configure these solutions, include the adaptive or per-packet statement at the [edit interfaces aex aggregated-ether-options load-balance] hierarchy level.
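Selecting one of the two mutually exclusive solutions can be sketched as follows; the bundle name ae0 is a hypothetical example:

```
[edit interfaces ae0 aggregated-ether-options load-balance]
user@host# set adaptive
```

To use random spray instead, you would configure the per-packet statement at the same hierarchy level in place of adaptive.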
- SFPP-10G-CT50-ZR (MX Series)—The SFPP-10G-CT50-ZR
tunable transceiver provides a duplex LC connector and supports the
10GBASE-Z optical interface specification and monitoring. The transceiver
is not specified as part of the 10-Gigabit Ethernet standard and is
instead built according to Juniper Networks specifications. Only WAN-PHY
and LAN-PHY modes are supported. To configure the wavelength on the
transceiver, use the wavelength statement at the [edit interfaces interface-name optics-options] hierarchy level. The following interface module supports the SPFF-10G-CT50-ZR
transceiver:
MX Series:
- 16-port 10-Gigabit Ethernet MPC (model number: MPC-3D-16XGE-SFPP)—Supported in Junos OS Release 12.3R6, 13.2R3, 13.3R2, 14.1, and later.
For more information about interface modules, see the “Cables and Connectors” section in the Interface Module Reference for your router.
[See 10-Gigabit Ethernet 10GBASE Optical Interface Specifications and wavelength.]
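Setting the wavelength on the tunable transceiver can be sketched as follows; the port name and the wavelength value (an ITU grid wavelength in nanometers) are hypothetical examples:

```
[edit interfaces xe-2/0/0 optics-options]
user@host# set wavelength 1550.12
```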
- SFPP-10G-ZR-OTN-XT
(MX Series, T1600, and T4000)—The SFPP-10G-ZR-OTN-XT
dual-rate extended temperature transceiver provides a duplex LC connector
and supports the 10GBASE-Z optical interface specification and monitoring.
The transceiver is not specified as part of the 10-Gigabit Ethernet
standard and is instead built according to ITU-T and Juniper Networks
specifications. In addition, the transceiver supports LAN-PHY and
WAN-PHY modes and OTN rates and provides a NEBS-compliant 10-Gigabit
Ethernet ZR transceiver for the MX Series interface modules listed
below. The following interface modules support the SFPP-10G-ZR-OTN-XT
transceiver:
MX Series:
- 10-Gigabit Ethernet MIC with SFP+ (model number: MIC3-3D-10XGE-SFPP)—Supported in Junos OS Release 12.3R5, 13.2R3, 13.3, and later
- 16-port 10-Gigabit Ethernet (model number: MPC-3D-16XGE-SFPP)—Supported in Junos OS Release 12.3R5, 13.2R3, 13.3, and later
- 32-port 10-Gigabit Ethernet MPC4E (model number: MPC4E-3D-32XGE-SFPP)—Supported in Junos OS Release 12.3R5, 13.2R3, 13.3, and later
- 2-port 100-Gigabit Ethernet + 8-port 10-Gigabit Ethernet MPC4E (model number: MPC4E-3D-2CGE-8XGE)—Supported in Junos OS Release 12.3R5, 13.2R3, 13.3, and later
T1600 and T4000 routers:
- 10-Gigabit Ethernet LAN/WAN PIC with Oversubscription and SFP+ (model numbers: PD-5-10XGE-SFPP and PF-24XGE-SFPP)—Supported in Junos OS Release 12.3R5, 13.2R3, 13.3, and later
- 10-Gigabit Ethernet LAN/WAN PIC with SFP+ (model number: PF-12XGE-SFPP)—Supported in Junos OS Release 12.3R5, 13.2R3, 13.3, and later
For more information about interface modules, see the “Cables and Connectors” section in the Interface Module Reference for your router.
[See 10-Gigabit Ethernet 10GBASE Optical Interface Specifications.]
Junos OS XML API and Scripting
- Support for service template automation—Starting with Junos OS Release 12.3, you can use service template
automation to provision services such as VPLS VLAN, Layer 2 and Layer
3 VPNs, and IPsec across similar platforms running Junos OS.
Service template automation uses the service-builder.slax op script to transform a user-defined service template definition into a uniform API, which you can then use to configure and provision services on similar platforms running Junos OS. This permits you to create a service template on one device, generalize the parameters, and then quickly and uniformly provision that service on other devices. This decreases the time required to configure the same service on multiple devices and reduces the configuration errors associated with manually configuring each device.
[See Service Template Automation.]
- Support for configuring limits on concurrently
running event policies and memory allocation for scripts—Junos OS Release 12.3 supports configuring limits on the maximum
number of concurrently running event policies and the maximum amount
of memory allocated for the data segment for scripts of a given type.
By default, the maximum number of event policies that can run concurrently
in the system is 15, and the maximum amount of memory allocated for
the data segment portion of an executed script is half of the total
available memory of the system, up to a maximum value of 128 MB.
To set the maximum number of event policies that can run concurrently on a device, configure the max-policies policies statement at the [edit event-options] hierarchy level. You can configure from 0 through 20 policies. To set the maximum memory allocated to the data segment for scripts of a given type, configure the max-datasize size statement under the hierarchy appropriate for that script type, where size is the memory in bytes. To specify the memory in kilobytes, megabytes, or gigabytes, append k, m, or g, respectively, to the size. You can configure the memory in the range from 23,068,672 bytes (22 MB) through 1,073,741,824 bytes (1 GB).
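A minimal sketch of both limits, using illustrative values and assuming op scripts as the script type (one of the per-script-type hierarchies referred to above):

```
event-options {
    max-policies 20;
}
system {
    scripts {
        op {
            max-datasize 256m;
        }
    }
}
```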
[See Configuring Limits on Executed Event Policies and Memory Allocation for Scripts.]
Layer 2 Features
- Support for Synchronous Ethernet and Precision Time Protocol on MX Series routers with 16-port Channelized E1/T1 Circuit Emulation MIC (MX Series Routers)—Starting with Junos OS Release 12.3, Synchronous Ethernet and Precision Time Protocol (PTP) are supported on MX Series routers with the 16-port Channelized E1/T1 Circuit Emulation MIC (MIC-3D-CH-16E1T1-CE). The clock derived by Synchronous Ethernet, PTP, or an internal oscillator is used to drive the T1/E1 interfaces on the 16-port Channelized E1/T1 Circuit Emulation MIC.
- E-TREE with remote VSI support on NSN Carrier
Ethernet Transport—The NSN Carrier Ethernet Transport
solution supports Metro Ethernet Forum (MEF) Ethernet Tree (E-TREE)
services using centralized virtual switch instances (VSIs). E-TREE
is a rooted multipoint service, where end points are classified as
Roots and Leaves. Root end points can communicate with both Root and
Leaf end points, but Leaf end points can only communicate with the
Root end points.
The NSN CET solution employs E-TREE services using a centralized VSI model. This means that VSIs are only provisioned on certain selected PEs. End points are connected to these central VSIs using spoke pseudowires. The centralized VSI model uses a lower number of pseudowires and less bandwidth than the distributed VSI model.
- Support for sending and receiving untagged
RSTP BPDUs on Ethernet interfaces (MX Series platforms)—VLAN Spanning Tree Protocol (VSTP) can now send and receive untagged
Rapid Spanning Tree Protocol (RSTP) bridge protocol data units (BPDUs)
on Gigabit Ethernet (ge), 10 Gigabit Ethernet (xe), and aggregated
Ethernet (ae) interfaces.
To configure this feature, include the access-trunk statement at the following hierarchy levels:
[edit protocols vstp vlan vlan-identifier interface interface-name]
[edit routing-instances routing-instance-name instance-type (layer2-control | virtual-switch)]
[edit logical-systems logical-system-name protocols vstp]
[edit logical-systems logical-system-name routing-instances routing-instance-name protocols vstp]
[See access-trunk.]
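For example, to allow untagged RSTP BPDUs for a hypothetical VLAN 100 on interface ge-1/0/0:

```
protocols {
    vstp {
        vlan 100 {
            interface ge-1/0/0 {
                access-trunk;
            }
        }
    }
}
```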
- Extends support for multilink-based protocols on
T4000 and TX Matrix Plus routers—Starting with
Junos OS Release 12.3R3, multilink-based protocols are supported on
the T4000 and TX Matrix Plus routers with Multiservices PICs.
- Multilink Point-to-Point Protocol (MLPPP)—Supports Priority-based Flow Control (PFC) for data packets and Link Control Protocol (LCP) for control packets. Compressed Real-Time Transport Protocol (CRTP) and Multiclass MLPPP are supported for both data and control packets.
- Multilink Frame Relay (MLFR) end-to-end (FRF.15)—Supports Ethernet Local Management Interface (LMI), Consortium LMI (C-LMI), and Link Integrity Protocol (LIP) for data and control packets.
- Multilink Frame Relay (MFR) UNI NNI (FRF.16)—Supports Ethernet Local Management Interface (LMI), Consortium LMI (C-LMI), and Link Integrity Protocol (LIP) for data and control packets.
- Link fragmentation and interleaving (LFI) of non-multilink MLPPP and MLFR packets.
- Communications Assistance for Law Enforcement Act (CALEA)—Defines electronic surveillance guidelines for telecommunications companies.
- Two-Way Active Measurement Protocol (TWAMP)—Adds two-way or round-trip measurement capabilities.
[Interfaces Command Reference]
- Extends support of IPv6 statistics for MLPPP bundles
on T4000 and TX Matrix Plus routers—Starting with
Junos OS Release 12.3R3, the show interfaces lsq-fpc/pic/port command
displays the packet and byte counters for IPv6 data for Multilink
Point-to-Point Protocol (MLPPP) bundles on link services intelligent
queuing (LSQ) interfaces.
[Interfaces Command Reference]
- Link Layer Discovery Protocol (LLDP) support (MX240,
MX480, and MX960)—You can configure the LLDP protocol
on MX Series routers with MPC3E and MPC4E. To configure and adjust
default parameters, include the lldp statement
at the [edit protocols] hierarchy level.
LLDP is disabled by default. At the [edit protocols lldp] hierarchy level, use the enable statement to enable LLDP, and the interfaces statement to enable LLDP on all or some interfaces. Use the following statements at the [edit protocols lldp] hierarchy level to configure or adjust the default LLDP parameters:
- advertisement-interval
- hold-multiplier
- lldp-configuration-notification-interval
- ptopo-configuration-maximum-hold-time
- ptopo-configuration-trap-interval
- transmit-delay
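A minimal sketch that enables LLDP on all interfaces and adjusts two of the default parameters (the values shown are illustrative):

```
protocols {
    lldp {
        interface all;
        advertisement-interval 30;
        hold-multiplier 4;
    }
}
```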
- Configuration support for manual and automatic
link switchover mechanism on multichassis link aggregation interface—You can configure a multichassis link aggregation (MC-LAG)
interface in active-standby mode to automatically revert to a preferred
node. In an MC-LAG topology with active-standby mode, a link switchover
happens only if the active node goes down. With this configuration,
you can trigger a link switchover to a preferred node even when the
active node is available.
To enable automatic link switchover for a multichassis link aggregation (mc-ae) interface, you must configure the switchover-mode revertive statement at the [edit interfaces aex aggregated-ether-options mc-ae] hierarchy level. You can also specify the revert time for the switchover by using the revert-time statement. To continue using the manual switchover mechanism, you must configure the switchover-mode non-revertive statement at the same hierarchy level. In nonrevertive mode, you can trigger a manual switchover to the preferred node by using the request interface mc-ae switchover immediate command with the mc-ae-id option.
With this feature, you can use the show interfaces mc-ae revertive-info command to view the switchover configuration information.
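For example, to make a hypothetical ae0 interface revert to the preferred node after it recovers (the revert-time value is illustrative):

```
interfaces {
    ae0 {
        aggregated-ether-options {
            mc-ae {
                switchover-mode revertive;
                revert-time 5;
            }
        }
    }
}
```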
- Uniform Enhanced Layer 2 Software CLI configuration
statements and operational commands—Enhanced Layer
2 Software (ELS) provides a uniform CLI for configuring and monitoring
Layer 2 features on MX Series routers in LAN mode (MX-ELM). With ELS,
for example, you can configure a VLAN and other Layer 2 features on
an MX-ELM router by using the same configuration commands.
[See the ELS CLI documentation for MX Series routers: Junos OS for EX9200 Switches, Release 12.3.]
- When changing modes, you must delete any unsupported configurations.
- The web-based ELS Translator tool is available for registered
customers to help them become familiar with the ELS CLI and to quickly
translate existing MX Series router CLI configurations into ELS CLI
configurations.
[See ELS Translator.]
Layer 2 Tunneling Protocol
- Support for filtering trace results by subscriber
or domain for AAA, L2TP, and PPP (MX Series routers)—You can now filter trace results for AAA (authd), L2TP (jl2tpd),
and PPP (jpppd) by subscribers or domains. Specify the filter user username option at the appropriate
hierarchy level:
- AAA—[edit system processes general-authentication-service traceoptions filter]
- L2TP—[edit services l2tp traceoptions filter]
- PPP—[edit protocols ppp-service traceoptions filter]
For subscriber usernames that have the expected form of user@domain, you can filter on either the user or the domain. The filter supports the use of a wildcard (*) at the beginning or end of the user, the domain, or both. For example, the following are all acceptable uses of the wildcard: tom@example.com, tom*, *tom, *ample.com, tom@ex*, tom*@*example.com.
You cannot filter results using a wildcard in the middle of the user or domain. For example, the following uses of the wildcard are not supported: tom*25@example.com, tom125@ex*.com.
When you enable filtering by username, traces that have insufficient information to determine the username are automatically excluded.
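For example, to trace only L2TP subscribers in a hypothetical example.com domain, using a wildcard for the user portion:

```
services {
    l2tp {
        traceoptions {
            filter {
                user "*@example.com";
            }
        }
    }
}
```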
MPLS
- Link protection for MLDP—MLDP
link protection enables fast reroute of traffic carried over LDP LSPs
in case of a link failure. LDP point-to-multipoint LSPs can be used
to send traffic from a single root or ingress node to a number of
leaf nodes or egress nodes traversing one or more transit nodes. When
one of the links of the point-to-multipoint tree fails, the subtrees
may get detached until the IGP reconverges and MLDP initiates label
mapping using the best path from the downstream to the new upstream
router. To protect the traffic in the event of a link failure, you
can configure an explicit tunnel so that traffic can be rerouted using
the tunnel. Junos OS supports make-before-break (MBB) capabilities
to ensure minimum packet loss when attempting to signal a new LSP
path before tearing down the old LSP path. This feature also adds
targeted LDP support for MLDP link protection.
To configure MLDP link protection, use the make-before-break and link-protection-timeout statements at the [edit protocols ldp] hierarchy level. To view MBB capabilities, use the show ldp session detail command. To verify that link protection is active, use the show ldp interface extensive command. To view the adjacency type, use the show ldp neighbor extensive command. To view the MBB interval, use the show ldp overview command.
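A minimal sketch, taking the two statements exactly as named above (the timeout value is illustrative, and any substatements of make-before-break are not covered here):

```
protocols {
    ldp {
        make-before-break;
        link-protection-timeout 120;
    }
}
```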
- Ultimate-hop popping feature now available for
LSPs configured on M Series, MX Series, and T Series platforms—An ultimate-hop popping LSP pops the MPLS label at the LSP
egress. The default behavior for an LSP on a Juniper Networks device
is to pop the MPLS label at the penultimate-hop router (the router
before the egress router). Ultimate-hop popping is available on RSVP-signaled
LSPs and static LSPs.
The following network applications could require that you configure UHP LSPs:
- MPLS-TP for performance monitoring and in-band OAM
- Edge protection virtual circuits
- UHP static LSPs
To enable ultimate-hop popping on an LSP, include the ultimate-hop-popping statement at the [edit protocols mpls label-switched-path lsp-name] hierarchy level to enable ultimate-hop popping on a specific LSP or at the [edit protocols mpls] hierarchy level to enable ultimate-hop popping on all of the ingress LSPs configured on the router. When you enable ultimate-hop popping, RSVP attempts to resignal existing LSPs as ultimate-hop popping LSPs in a make-before-break fashion. If an egress router does not support ultimate-hop popping, the existing LSP is torn down. If you disable ultimate-hop popping, RSVP resignals existing LSPs as penultimate-hop popping LSPs in a make-before-break fashion.
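For example, to enable ultimate-hop popping on a single hypothetical RSVP-signaled LSP:

```
protocols {
    mpls {
        label-switched-path to-pe2-uhp {
            to 192.0.2.2;
            ultimate-hop-popping;
        }
    }
}
```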
- Enable local receivers on the ingress of a point-to-multipoint
circuit cross-connect (CCC)—This feature enables
you to switch the traffic entering a P2MP LSP to local interfaces.
On the ingress PE router, CCC can be used to switch an incoming CCC
interface to one or more outgoing CCC interfaces. To configure the
output interface, include the output-interface statement
at the [edit protocols connections p2mp-transmit-switch <p2mp-lsp-name-on-which-to-transmit>] hierarchy
level. One or more output interfaces can be configured as local receivers
on the ingress PE router using this statement. Use the show connections p2mp-transmit-switch (extensive | history | status), show route ccc <interface-name> (detail | extensive), and show route forwarding-table ccc <interface-name> (detail | extensive) commands to view details of the local
receiving interfaces at ingress.
[MPLS]
- Support for Bidirectional Forwarding Detection
protocol, LSP traceroute, and LSP ping on Channelized OC3/STM1 (Multi-Rate)
Circuit Emulation MIC with SFP (MX Series Routers)—Starting
with Junos OS 12.3, support for Bidirectional Forwarding Detection
(BFD) protocol, LSP traceroute, and LSP ping is extended to Channelized
OC3/STM1 (Multi-Rate) Circuit Emulation MIC with SFP (MIC-3D-4COC3-1COC12-CE).
The BFD protocol is a simple hello mechanism that detects failures in a network. You can configure Bidirectional Forwarding Detection (BFD) for LDP LSPs. You can also use the LSP ping commands to detect LSP data plane faults. You can trace the route followed by an LDP-signaled LSP.
LDP LSP traceroute is based on RFC 4379, Detecting Multi-Protocol Label Switched (MPLS) Data Plane Failures. This feature allows you to periodically trace all paths in a Forwarding Equivalence Class (FEC). The FEC topology information is stored in a database accessible from the CLI.
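One plausible configuration for BFD over an LDP LSP, assuming the [edit protocols ldp oam] hierarchy; the FEC address and timer values are illustrative:

```
protocols {
    ldp {
        oam {
            fec 10.255.0.1 {
                bfd-liveness-detection {
                    minimum-interval 300;
                    multiplier 3;
                }
            }
        }
    }
}
```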
- Host fast reroute (HFRR)—Adds a precomputed protection path into the Packet Forwarding Engine, such that if a link between a provider edge device and a server farm becomes unusable for forwarding, the Packet Forwarding Engine can use another path without having to wait for the router or the protocols to provide updated forwarding information. HFRR is a technology that protects IP endpoints on multipoint interfaces, such as Ethernet. This technology is important in data centers where fast service restoration for server endpoints is critical. After an interface or a link goes down, HFRR enables the local repair time to be approximately 50 milliseconds. You can configure HFRR by adding the link-protection statement to the interface configuration in the routing instance. We recommend that you include this statement on all provider edge (PE) devices that are connected to server farms through multipoint interfaces.
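The exact hierarchy for the link-protection statement is not spelled out above; one plausible reading, with hypothetical instance and interface names, is:

```
routing-instances {
    vrf-servers {
        interface ge-1/0/0.0 {
            link-protection;
        }
    }
}
```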
- Support of Path Computation Element Protocol
for RSVP-TE—Starting with Junos OS Release 12.3,
the MPLS RSVP-TE functionality is extended to provide a partial client-side
implementation of the stateful Path Computation Element (PCE) architecture
(draft-ietf-pce-stateful-pce). The PCE computes paths for the traffic
engineered LSPs (TE LSPs) of ingress routers that have been configured
for external control. The ingress router that connects to a PCE is
called a Path Computation Client (PCC). The PCC runs the Path Computation Element Communication Protocol (PCEP) (defined in RFC 5440, but limited to the functionality supported on a stateful PCE) to facilitate external path computation by a PCE.
In this new functionality, the active stateful PCE sets parameters for the PCC's TE LSPs, such as bandwidth, path (ERO), and priority. The TE LSP parameters configured from the PCC's CLI are overridden by the PCE-provided parameters. The PCC re-signals the TE LSPs based on the path specified by the PCE. Since the PCE has a global view of the bandwidth demand in the network and performs external path computations after looking up the traffic engineering database, this feature provides a mechanism for offline control of TE LSPs in the MPLS RSVP TE enabled network.
To enable external path computing by a PCE, include the lsp-external-controller statement on the PCC at the [edit mpls] and [edit mpls lsp lsp-name] hierarchy levels. To enable PCE to PCC communication, configure pcep on the PCC at the [edit protocols] hierarchy level.
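A minimal sketch of the PCC side; the PCE name, its address, and the external-controller name pccd are assumptions for illustration:

```
protocols {
    pcep {
        pce pce1 {
            destination-ipv4-address 192.0.2.10;
        }
    }
    mpls {
        label-switched-path ext-lsp {
            to 192.0.2.2;
            lsp-external-controller pccd;
        }
    }
}
```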
[See PCEP Configuration Guide.]
Multicast
- Redundant virtual tunnel (VT) interfaces in Multiprotocol
BGP (MBGP) multicast VPNs (MVPNs)—VT interfaces
are needed for multicast traffic on routing devices that function
as combined provider edge (PE) and provider core (P) routers to optimize
bandwidth usage on core links. VT interfaces prevent traffic replication
when a P router also acts as a PE router (an exit point for multicast
traffic). You can configure up to eight VT interfaces in a routing
instance, thus providing Tunnel PIC redundancy inside the same multicast
VPN routing instance. When the active VT interface fails, the secondary
one takes over, and you can continue managing multicast traffic with
no duplication. To configure, include multiple VT interfaces in the
routing instance and, optionally, apply the primary statement
to one of the VT interfaces.
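For example, with two hypothetical VT interfaces in the routing instance, marking one as preferred (the placement of the primary statement is taken from the description above):

```
routing-instances {
    mvpn-1 {
        interface vt-0/1/0.0 {
            primary;
        }
        interface vt-1/1/0.0;
    }
}
```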
[See Example: Configuring Redundant Virtual Tunnel Interfaces in MBGP MVPNs.]
- Enhancements to RPD_MC_OIF_REJECT and RPD_MC_OIF_RE_ADMIT
system log messages—When multicast call admission
control (CAC) is enabled on an interface, the routing software cannot
add a multicast flow to that interface if doing so exceeds the maximum
configured bandwidth for that interface. Consequently, the interface
is rejected for that flow due to insufficient bandwidth, and the router
writes the RPD_MC_OIF_REJECT system log message to the log file at
the info severity level. When bandwidth again becomes available
on the interface, interfaces previously rejected for a flow are readmitted.
In that case, the router writes the RPD_MC_OIF_RE_ADMIT system log
message to the log file at the info severity level.
Both the RPD_MC_OIF_REJECT system log message and the RPD_MC_OIF_RE_ADMIT system log message include the interface-name. In RPD_MC_OIF_REJECT messages, interface-name identifies the interface that was rejected for a multicast flow due to insufficient bandwidth. In RPD_MC_OIF_RE_ADMIT messages, interface-name identifies the interface that was re-admitted for a multicast flow due to newly available bandwidth on the interface.
The RPD_MC_OIF_REJECT and RPD_MC_OIF_RE_ADMIT system log messages have been enhanced in this release to include the following information in addition to the interface-name:
- group-address—IP address of the multicast group
- source-address—Source IP address of the multicast flow
- flow-rate—Bandwidth of the multicast flow, in bits per second (bps)
- maximum-flow-rate—Maximum bandwidth that is the sum of all multicast flows on the interface, in bps
- admitted-flow-rate—Admitted bandwidth that is the sum of all multicast flows on the interface, in bps
When the maximum allowable bandwidth is exceeded on the logical interface (also known as the map-to interface) to which an outgoing interface (OIF) map directs (maps) multicast traffic, the RPD_MC_OIF_REJECT and RPD_MC_OIF_RE_ADMIT system log messages also display the oif-map-interface-name string. The oif-map-interface-name is an optional string that identifies one or more subscriber interfaces that requested the multicast traffic and are associated with the OIF map.
The oif-map-interface-name string appears in the RPD_MC_OIF_REJECT and RPD_MC_OIF_RE_ADMIT system log messages when all of the following conditions are met:
- The subscriber interface has CAC enabled, and is associated with an OIF map.
- The map-to interface (also known as the multicast VLAN, or M-VLAN) has CAC enabled.
- The subscriber interface that maps traffic to the M-VLAN receives an IGMP or MLD join message.
- The M-VLAN is not already sending the group and source of the multicast flow.
- Adding a multicast flow to the M-VLAN exceeds the maximum bandwidth configured on the M-VLAN interface.
- The group and source is source-specific multicast (SSM), or multicast data traffic is flowing.
Being able to view all of this information in a single system log message makes it easier for you to identify, troubleshoot, and resolve problems when using multicast protocols in your network. In earlier Junos OS releases, the RPD_MC_OIF_REJECT and RPD_MC_OIF_RE_ADMIT system log messages included only interface-name, but did not include group-address, source-address, or oif-map-interface-name.
The following example shows an RPD_MC_OIF_REJECT system log message for an oversubscribed interface. Because the interface is not configured with an OIF map, the oif-map-interface-name string does not appear.
Oct 26 08:09:51 wfpro-mx1-c r1:rpd[12955]: RPD_MC_OIF_REJECT: 225.1.0.2 193.0.1.2 (5000000 bps) rejected due to lack of bandwidth on ge-4/1/0.1 (maximum 12000000 bps, admitted 10000000 bps)
This example includes the following information:
- The group-address is 225.1.0.2
- The source-address is 193.0.1.2
- The flow-rate is 5000000 bps
- The interface-name is ge-4/1/0.1
- The maximum-flow-rate is 12000000 bps
- The admitted-flow-rate is 10000000 bps
The following example shows the same RPD_MC_OIF_REJECT system log message for an interface configured with an OIF map. All of the values are the same as in the preceding RPD_MC_OIF_REJECT example except for the addition of the oif-map-interface-name string, which is requested from ge-4/1/0.1 ge-4/1/0.2 ge-4/1/0.3.
Oct 26 08:17:05 wfpro-mx1-c r1:rpd[15133]: RPD_MC_OIF_REJECT: 225.1.0.2 193.0.1.2 (5000000 bps) rejected due to lack of bandwidth on ge-4/1/0.4 (maximum 12000000 bps, admitted 10000000 bps) requested from ge-4/1/0.1 ge-4/1/0.2 ge-4/1/0.3
The enhancements to the RPD_MC_OIF_REJECT and RPD_MC_OIF_RE_ADMIT system log messages make no changes to how bandwidth is managed on the router for multicast configurations.
[Multicast Protocols Configuration Guide]
- Static ARP with multicast MAC address for an IRB
interface—Enables you to configure a static ARP
entry with a multicast MAC address for an IRB interface which acts
as the gateway to the network load balancing (NLB) servers. Earlier,
the NLB servers dropped packets with a unicast IP address and a multicast
MAC address. Junos OS 12.3 supports the configuration of a static
ARP with a multicast MAC address.
To configure a static ARP entry with a multicast MAC address for an IRB interface, configure the ARP entry at the [edit interfaces irb unit logical-unit-number family inet address address] hierarchy level.
irb {
    unit logical-unit-number {
        family inet {
            address address {
                arp address multicast-mac mac-address;
            }
        }
    }
}
Power Management
- Power management support on T4000 routers with
six-input DC power supply —Starting with Junos
OS Release 12.3, the power management feature is enabled on a Juniper
Networks T4000 Core Router. This feature enables you to limit the
overall chassis output power consumption. That is, power management
enables you to limit the router from powering on a Flexible PIC Concentrator
(FPC) when sufficient output power is not available to power on the
FPC.
The power management feature is enabled only when six input feeds with 40 amperes (A) each or four input feeds with 60 A each are configured on the router. The power management feature is not enabled for any other input feed–current combination. When the power management feature is not enabled, Junos OS tries to power on all the FPCs connected to the router.
Caution: If you do not configure the power management feature and the maximum power draw is exceeded by the router, FPCs’ states might change from Online to Offline or Present, some traffic might drop, or the interfaces might flap.
After you connect the input feeds to the router, you must configure the number of input feeds connected to the router and the amount of current received at the input feeds. Use the feeds statement and the input current statement at the [edit chassis pem] hierarchy level to configure the number of input feeds and the amount of current received at the input feeds, respectively.
Note: You can connect three 80 A DC power cables to the six-input DC power supply by using terminal jumpers. When you do this, ensure that you set the value of the feeds statement to 6 and that of the input current statement to 40. If these values are not configured, the power management feature is not enabled and, therefore, Junos OS tries to power on all the FPCs connected to the router.
When the power management feature is enabled, FPCs connected to the router are powered on based on the power received by the router. If the router receives sufficient power to power on all the FPCs connected to the router, all the FPCs are powered on. If sufficient power is not available, Junos OS limits the number of FPCs brought online. That is, Junos OS uses the total available chassis output power as a factor to decide whether or not to power on an FPC connected to the router.
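A sketch for six 40 A input feeds; the exact spelling of the input current statement is taken from the description above:

```
chassis {
    pem {
        feeds 6;
        input current 40;
    }
}
```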
[See T4000 Power Management Overview and T4000 Core Router Hardware Guide.]
Routing Policy and Firewall Filters
- Source checking for forwarding filter tables—On MX Series 3D Universal Edge Routers, you can apply a forwarding table filter by using the source-checking statement at the [edit forwarding-options family inet6] hierarchy level. This discards IPv6 packets when the source address type is unspecified, loopback, multicast, or link-local. RFC 4291, IP Version 6 Addressing Architecture, refers to four address types that require special treatment when they are used as source addresses. The four address types are: Unspecified, Loopback, Multicast, and Link-Local Unicast. The loopback and multicast addresses must never be used as a source address in IPv6 packets. The unspecified and link-local addresses can be used as source addresses but routers must never forward packets that have these addresses as source addresses. Typically, packets that contain unspecified or link-local addresses as source addresses are delivered to the local host. If the destination is not the local host, then the packet must not be forwarded. Configuring this statement filters or discards IPv6 packets of these four address types.
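A minimal sketch of the statement at the hierarchy given above:

```
forwarding-options {
    family inet6 {
        source-checking;
    }
}
```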
- Unidirectional GRE tunnels across IPv4 without
tunnel interfaces—For Junos OS Release 12.3R2
and later, you can configure a tunnel that transports IPv4, IPv6,
protocol-independent, or MPLS traffic across an IPv4 network without
having to create tunnel interfaces on services PICs. This type of
GRE tunnel is unidirectional and transports unicast or multicast transit
traffic as clear text. Encapsulation, de-encapsulation, and forwarding
of payloads is executed by Packet Forwarding Engine processes for
logical Ethernet interfaces or aggregated Ethernet interfaces hosted
on MICs and MPCs in MX Series routers. Two MX Series routers installed
as PE routers provide network connectivity to two CE routers that
lack a native routing path between them. This feature is also supported
in logical systems.
Specify tunnel characteristics by configuring the tunnel-end-point statement on the ingress PE router:
firewall {
    tunnel-end-point tunnel-name {
        ipv4 {
            source-address source-host-address;
            destination-address destination-host-address;
        }
        gre [key number];
    }
}
To configure the ingress PE router to encapsulate passenger protocol packets, attach a passenger protocol family firewall filter at the input of a supported interface. The following terminating firewall filter action refers to the specified tunnel and initiates encapsulation of matched packets:
encapsulate tunnel-name
To configure the egress PE router to de-encapsulate GRE packets and forward the original passenger protocol packets, attach an IPv4 firewall filter at the input of all interfaces that are advertised addresses for the router. The following terminating firewall filter action initiates de-encapsulation of matched packets:
decapsulate [routing-instance instance-name]
By default, the Packet Forwarding Engine uses the default routing instance to forward payload packets to the destination network. If the payload is MPLS, the Packet Forwarding Engine performs route lookup on the MPLS path routing table using the route label in the MPLS header.
If you specify the decapsulate action with an optional routing instance name, the Packet Forwarding Engine performs route lookup on that routing instance, and the instance must be configured.
[Firewall Filters Configuration Guide]
Routing Protocols
- Expanded support for
advertising multiple paths to a destination in BGP—This
feature now supports graceful restart and additional address families.
Previously, graceful restart was not supported and only the IPv4 address
family was supported with the BGP add-path feature. Now
the following address families are supported:
- IPv4 unicast (inet unicast)
- IPv6 unicast (inet6 unicast)
- IPv4 labeled unicast (inet labeled-unicast)
- IPv6 labeled unicast (inet6 labeled-unicast)
To configure these address families, include the family <address-family> add-path statement at the [edit protocols bgp] hierarchy level.
To configure graceful restart, include the graceful-restart statement at the [edit routing-options] hierarchy level.
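A sketch combining both statements for IPv6 unicast; the group name and the receive/send substatements shown are illustrative assumptions:

```
protocols {
    bgp {
        group ibgp {
            family inet6 {
                unicast {
                    add-path {
                        receive;
                        send {
                            path-count 6;
                        }
                    }
                }
            }
        }
    }
}
routing-options {
    graceful-restart;
}
```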
[See Example: Advertising Multiple BGP Paths to a Destination.]
- Support for multihop BFD session—One desirable application of BFD is to detect connectivity to routing devices that span multiple network hops and follow unpredictable paths. This is known as a multihop session. Before Junos OS Release 12.3, multihop BFD was nondistributed and ran on the Routing Engine. Starting in Junos OS Release 12.3, multihop BFD is distributed, meaning that it runs on the Packet Forwarding Engine. This change provides multiple scalability improvements.
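For example, a BFD session to a hypothetical EBGP neighbor several hops away (names and timer values are illustrative):

```
protocols {
    bgp {
        group ebgp-multihop {
            multihop {
                ttl 5;
            }
            neighbor 203.0.113.1 {
                bfd-liveness-detection {
                    minimum-interval 300;
                    multiplier 3;
                }
            }
        }
    }
}
```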
Security
- DDoS protection flow detection (MX Series routers)—Flow detection is an enhancement to DDoS protection that supplements
the DDoS policer hierarchies. When you enable flow detection by including
the flow-detection statement at the [edit system ddos-protection global] hierarchy level, a limited amount of hardware resources
are used to monitor the arrival rate of host-bound flows of control
traffic. This behavior makes flow detection highly scalable compared
to filter policers, which track all flows and therefore consume a
considerable amount of resources.
Flows that violate a DDoS protection policer are tracked as suspicious flows; they become culprit flows when they violate the policer bandwidth for the duration of a configurable detection period. Culprit flows are dropped, kept, or policed to below the allowed bandwidth level. Suspicious flow tracking stops if the violation stops before the detection period expires.
Most flow detection attributes are configured at the packet level or flow aggregation level. Table 2 lists these statements, which you can include at the [edit system ddos-protection protocols protocol-group packet-type] hierarchy level. You can disable flow detection, configure the action taken for culprit flows, specify a bandwidth different than the policer bandwidth, configure flows to be monitored even when a policer is not in violation, disable automatic event reporting, or enable a timeout period that automatically removes flows as culprit flows after the timeout has expired.
Table 2: Flow Detection Packet-Level Statements
flow-detection-mode
flow-level-detection
no-flow-logging
flow-detect-time
flow-recover-time
physical-interface
flow-level-bandwidth
flow-timeout-time
subscriber
flow-level-control
logical-interface
timeout-active-flows
By default, flow detection automatically generates reports for events associated with the identification and tracking of culprit flows and bandwidth violations. You can include the flow-report-rate and violation-report-rate statements at the [edit system ddos-protection global] hierarchy level to configure the event reporting rate.
Use the show ddos-protection protocols flow-detection command to display flow detection information for all protocol groups or for a particular protocol group. Use the show ddos-protection protocols culprit-flows command to display information about culprit flows for all packet types, including the number of culprit flows discovered, the protocol group and packet type, the interface on which the flow arrived, and the source address for the flow. The show ddos-protection statistics command now provides a global count of discovered and currently tracked culprit flows. You can use the clear ddos-protection protocols culprit-flows command to clear all culprit flows, or just those for a protocol group or individual packet type.
[DDoS Configuration]
Subscriber Access Management
- Support for PPP subscriber services over ATM networks
(MX Series routers with MPCs and ATM MICs with SFP)—Enables
you to create PPP-over-ATM (PPPoA) configurations on an MX Series
router that has an ATM MIC with SFP (model number MIC-3D-80C3-20C12-ATM)
and a supported MPC installed. PPPoA configurations support statically
created PPP logical subscriber interfaces over static ATM underlying
interfaces. (Dynamic creation of the PPP interfaces is not supported.)
Most features supported for PPPoE configurations are also supported
for PPPoA configurations on an MX Series router. You can dynamically
apply subscriber services such as CoS and firewall filters to the
static PPP logical subscriber interface by configuring the services
in the dynamic profile that creates the PPP logical interface.
PPPoA configurations on an MX Series router support two types of encapsulation on the ATM underlying interface:
- To configure PPPoA encapsulation that uses LLC, you must configure the ATM underlying interface with PPP-over-AAL5 LLC encapsulation. To do so, include the encapsulation atm-ppp-llc statement at the [edit interfaces interface-name unit logical-unit-number] hierarchy level.
- To configure PPPoA encapsulation that uses VC multiplexing, you must configure the ATM underlying interface with PPP-over-ATM AAL5 multiplex encapsulation. To do so, include the encapsulation atm-ppp-vc-mux statement at the [edit interfaces interface-name unit logical-unit-number] hierarchy level.
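The LLC variant described above can be sketched as follows; the ATM interface name, logical unit, and VCI value are hypothetical:

```
[edit interfaces at-1/0/0]
user@router# set unit 0 encapsulation atm-ppp-llc
user@router# set unit 0 vci 0.100
```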
PPPoA configurations enable the delivery of subscriber-based services, such as CoS and firewall filters, for PPP subscribers accessing the router over an ATM network. You use the same basic statements, commands, and procedures to create, verify, and manage PPPoA configurations as you use for PPPoA configurations on M Series routers and T Series routers.
[Subscriber Access, Network Interfaces]
- Support for adjusting shaping rate and overhead
accounting attributes based on PPPoE access line parameters for agent
circuit identifier interface sets (MX Series routers with MPCs/MICs)—Extends the functionality available in earlier Junos OS releases
to enable you to configure the router to use the Actual-Data-Rate-Downstream
[26-130] and Access-Loop-Encapsulation [26-144] DSL Forum vendor-specific
attributes (VSAs) found in PPPoE Active Discovery Initiation (PADI)
and PPPoE Active Discovery Request (PADR) control packets to adjust
the shaping-rate and overhead-accounting class of service (CoS) attributes,
respectively, for dynamic agent circuit identifier (ACI) interface
sets. In earlier Junos OS releases, you used this feature to adjust
the shaping-rate and overhead-accounting attributes only for dynamic
subscriber interfaces not associated with ACI interface sets.
The shaping-rate attribute is based on the value of the Actual-Data-Rate-Downstream VSA. The overhead-accounting attribute is based on the value of the Access-Loop-Encapsulation VSA, and specifies whether the access loop uses Ethernet (frame mode) or ATM (cell mode) encapsulation. In subscriber access networks where the router passes downstream ATM traffic to Ethernet interfaces, the different Layer 2 encapsulations between the router and the PPPoE Intermediate Agent on the digital subscriber line access multiplexer (DSLAM) make managing the bandwidth of downstream ATM traffic difficult. Using the Access-Loop-Encapsulation VSA to shape traffic based on frames or cells enables the router to adjust the shaping-rate and overhead-accounting attributes in order to apply the correct downstream rate for the subscriber.
You can enable this feature in either the dynamic profile that defines the ACI interface set, or in the dynamic profile for the dynamic PPPoE (pp0) subscriber interface associated with the ACI interface set, as follows:
- To configure the router to use the Actual-Data-Rate-Downstream VSA to adjust the shaping-rate CoS attribute, include the vendor-specific-tags actual-data-rate-downstream statement at the [edit dynamic-profiles profile-name class-of-service dynamic-class-of-service-options] hierarchy level.
- To configure the router to use the Access-Loop-Encapsulation VSA to adjust the overhead-accounting CoS attribute, include the vendor-specific-tags access-loop-encapsulation statement at the [edit dynamic-profiles profile-name class-of-service dynamic-class-of-service-options] hierarchy level.
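A sketch of both statements in a single dynamic profile (the profile name aci-set-profile is a hypothetical example):

```
[edit dynamic-profiles aci-set-profile class-of-service]
user@router# set dynamic-class-of-service-options vendor-specific-tags actual-data-rate-downstream
user@router# set dynamic-class-of-service-options vendor-specific-tags access-loop-encapsulation
```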
When you enable this feature, the router adjusts the shaping-rate and overhead-accounting attributes when the dynamic ACI interface set is created and the router receives the PADI and PADR packets from the first subscriber interface member of the ACI interface set. The value of the Actual-Data-Rate-Downstream VSA in the PADI and PADR control packets overrides the shaping-rate value configured at the [edit dynamic-profiles profile-name class-of-service traffic-control-profiles] hierarchy level only if the Actual-Data-Rate-Downstream value is less than the shaping-rate value configured with the CLI. The value of the Access-Loop-Encapsulation VSA always overrides the overhead-accounting value configured at the [edit dynamic-profiles profile-name class-of-service traffic-control-profiles] hierarchy level.
As part of this feature, the output of the following operational commands has been enhanced to display the adjustment value (frame mode or cell mode) for the overhead-accounting attribute:
- show class-of-service interface
- show class-of-service interface-set
- show class-of-service traffic-control-profile
[Subscriber Access]
- DHCP relay agent selective traffic processing based
on DHCP options (MX Series routers)—Subscriber
management enables you to configure DHCP relay agent to provide subscriber
support based on information in DHCP options. For DHCPv4 relay agent,
you use DHCP option 60 and option 77 to identify the client traffic.
For DHCPv6 relay agent, you use DHCPv6 option 15 and option 16.
You can use the DHCP option information to specify the action DHCP relay agent takes on client traffic that meets the specified match criteria, such as forwarding traffic to a specific DHCP server, or dropping the traffic. You can also specify a default action, which DHCP relay agent uses when the option string in the client traffic does not satisfy any match criteria or when no other action is configured.
To configure DHCP relay agent selective processing, you use the relay-option statement at the [edit forwarding-options dhcp-relay] or [edit forwarding-options dhcp-relay dhcpv6] hierarchy level. To display statistics for the number of forwarded packets, use the show dhcp relay statistics and show dhcpv6 relay statistics commands.
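As a sketch, a DHCPv4 configuration that forwards traffic matching a vendor class string to a dedicated server group and drops everything else (the match string, the server group name, and the exact match-clause syntax are assumptions for illustration):

```
[edit forwarding-options dhcp-relay relay-option]
user@router# set option 60
user@router# set equals ascii IPTV-STB relay-server-group video-servers
user@router# set default-action drop
```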
[Subscriber Access]
- Ensuring that RADIUS clears existing session state
before performing authentication and accounting for new sessions (MX
Series routers)—At subscriber session startup,
the Junos OS authd process sends an Acct-On message to RADIUS servers.
In some service provider environments, upon receipt of the Acct-On
message, the RADIUS server cleans up the previous session state and
removes accounting statistics. However, authentication or accounting
for the new session can start before the RADIUS cleanup of the previous
session—this can result in RADIUS deleting the new session’s
authentication and accounting information (which might include billing
information).
To ensure that the new session’s authentication and accounting information is not deleted, you can optionally configure authd to wait for an Acct-On-Ack response message from RADIUS before sending the new authentication and accounting updates to the RADIUS server. When this feature is enabled, all authentication requests fail until the router receives the Acct-On-Ack response from at least one configured RADIUS server.
To enable this feature, you configure the wait-for-acct-on-ack statement at the [edit access profile profile-name accounting] hierarchy level. To display the response status of the Acct-On messages (for example, Ack, Pending, None), use the show network-access aaa accounting command.
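For example (the access profile name isp-access-profile is a hypothetical placeholder):

```
[edit access profile isp-access-profile accounting]
user@router# set wait-for-acct-on-ack
```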
[Subscriber Access]
- Enhanced local configuration of DNS name server
addresses (MX Series routers)—You can now configure
the DNS name server addresses locally per routing instance or per
access profile. The new configuration applies to both terminated and
tunneled PPP subscribers (IPv4 and IPv6), DHCP subscribers (DHCPv4
and DHCPv6), and IP-over-Ethernet (VLAN) subscribers. In earlier releases,
the local configuration for the DNS server address applied only to
DHCP subscribers (configured as a DHCP attribute), and only at the
more granular level of the address pool.
As with the address-pool configuration, the new statements enable you to configure multiple DNS name server addresses per routing instance and access profile by issuing the statement for each address.
Because you can both configure name server addresses at more than one level and configure more than one address within a level, a preference order for the configurations determines which address is returned to the client.
- Within a configuration level, the preference order for the address matches the order in which the address is configured. For example, the first address configured within an access profile is preferred to the second address configured in that profile.
Among configuration levels, the preference order depends on the client type:
- For DHCP subscribers, the preference in descending order
is:
RADIUS > DHCP address pool > access profile > global
- For non-DHCP subscribers, the preference in descending
order is:
RADIUS > access profile > global
- Accordingly, all subscriber types prefer a name server address configured in RADIUS to the address configured anywhere else. When a name server address is configured only in a DHCP address pool, then no address is available to non-DHCP subscribers. For all subscriber types, the global name server address is used only when no other name server addresses are configured.
To configure a name server address in a routing instance, include the domain-name-server-inet or domain-name-server statement for IPv4 addresses, or the domain-name-server-inet6 statement for IPv6 addresses, at the [edit access] hierarchy level.
To configure a name server address in an access profile, include any of the same statements at the [edit access profile] hierarchy level.
Best Practice: In practice, choose either the domain-name-server statement or the domain-name-server-inet statement for IPv4 addresses. They both have the same effect and there is no need to use both statements.
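A sketch combining a global and a per-profile configuration (the addresses and the profile name isp-access-profile are illustrative); per the preference order above, non-DHCP subscribers in this profile would receive 192.0.2.54 rather than the global 192.0.2.53:

```
[edit access]
user@router# set domain-name-server 192.0.2.53
user@router# set domain-name-server-inet6 2001:db8::53

[edit access profile isp-access-profile]
user@router# set domain-name-server 192.0.2.54
```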
[Subscriber Access]
- Gx-Plus support for service provisioning (MX Series
routers)—Gx-Plus now supports service (policy
rule) provisioning, service activation, threshold notifications, threshold
updates, service termination, and recovery. Previously, Gx-Plus supported
only notification, termination, and recovery. To request subscriber
service provisioning from the Policy Control and Charging Rules Function
(PCRF), include the provisioning-order gx-plus statement
in the subscriber access profile.
By default, Gx-Plus provisioning requests are made only for IPv4 subscribers. To enable requests to be made also for IPv6 subscribers, include the include-ipv6 statement at the [edit access gx-plus global] hierarchy level.
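For example, a sketch that requests PCRF provisioning for subscribers in a hypothetical access profile and extends the requests to IPv6 subscribers:

```
[edit access]
user@router# set profile isp-access-profile provisioning-order gx-plus
user@router# set gx-plus global include-ipv6
```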
The PCRF can request usage monitoring for the provisioned services for one or more of the following: number of bytes transmitted (CC-Output-Octets), number of bytes received (CC-Input-Octets), number of bytes transmitted and received (CC-Total-Octets), and elapsed time (CC-Time). If the specified threshold is reached, the router sends a usage report back to the PCRF. The PCRF can then return new threshold triggers and request that services be activated or deactivated.
When a subscriber has been provisioned with Gx-Plus, only the PCRF can activate or deactivate services for that subscriber. Accordingly, AAA rejects any RADIUS CoA or CLI service activation or deactivation requests for these subscribers. You can override PCRF control on an individual session, which is useful for session and service troubleshooting. To do so, issue the new request network-access aaa subscriber set session-id command. You can then activate and deactivate services with the existing request network-access aaa subscriber add session-id and request network-access aaa subscriber delete session-id commands, respectively.
[Subscriber Access]
- Support for maintenance of CoS shaping rates for
ANCP subscribers across ANCP restarts (MX Series routers)—When ANCP stops because of a process restart or graceful Routing
Engine switchover (GRES), CoS now enforces the ANCP downstream shaping
rates until the CoS keepalive timer expires. When the timer expires,
CoS reverts to its configured shaping rate for the interfaces.
You can configure the CoS keepalive timer by including the existing maximum-helper-restart-time seconds statement at the [edit protocols ancp] hierarchy level. This statement specifies how long other processes, such as CoS, wait for ANCP to restart, and it sets the CoS rate update keepalive timer.
ANCP does not maintain TCP sessions from neighbors across the restart or graceful Routing Engine switchover (GRES). When it restarts, it must re-establish sessions with neighbors and subscriber sessions before the timer expires. For all the re-established sessions, ANCP updates CoS with the updated downstream shaping rates and provides DSL line attributes to the session database for AAA.
If CoS stops or restarts while ANCP is up, ANCP retransmits all known subscriber downstream rates to CoS. Any existing adjusted shaping rates that have not been updated revert to the configured CoS shaping rates when the CoS restart timer expires.
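For example (the 120-second value is an illustrative assumption):

```
[edit protocols ancp]
user@router# set maximum-helper-restart-time 120
```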
[Subscriber Access]
- MAC address validation in enhanced network services
modes—MAC address validation is now optimized
for scaling when the router is configured for Enhanced IP Network
Services mode or Enhanced Ethernet Network Services mode. When MAC
address validation is enabled, the router compares the IP source and
MAC source addresses against trusted addresses, and forwards or drops
the packets according to the match and the validation mode. This feature
is not available for IPv6.
Note: When the router is configured for either of the enhanced network services modes, MAC address validation is supported only on MPCs. If the router has both DPCs and MPCs, or only DPCs, you cannot configure the chassis to be in enhanced mode.
In contrast, when the router is configured for a normal (non-enhanced) network services mode, MAC address validation is supported on both DPCs and MPCs. The router can be populated completely with one or the other type of line card, or have a mix of both types. Normal network services mode is the default.
To configure an enhanced network services mode, include the network-services service statement at the [edit chassis] hierarchy level, and then configure MAC address validation as usual.
Note: In normal network services mode, you can use the show interfaces statistics interface-name command to display a per-interface count of the packets that failed validation and were dropped. In enhanced network services modes, this command does not count the dropped packets; you must contact Juniper Networks Customer Support for assistance in collecting this data.
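A sketch of the two configuration steps (the interface name is hypothetical, and the mac-validate statement placement shown here is an assumption based on typical subscriber interface configuration):

```
[edit chassis]
user@router# set network-services enhanced-ip

[edit interfaces ge-1/0/0 unit 0 family inet]
user@router# set mac-validate strict
```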
[Subscriber Access]
- Fail filters for RPF checks in dynamic profiles—By default, unicast RPF checks prevent DHCP packets from being
accepted on interfaces protected by the RPF check. When you enable
an RPF check with a dynamic profile, you must configure a fail filter
that identifies and passes DHCP packets.
To configure a fail filter, include the fail-filter filter-name statement at the [edit dynamic-profiles profile-name interfaces interface-name unit logical-unit-number family family rpf-check] hierarchy level. To configure the terms of the fail filter, include the filter filter-name statement at the [edit firewall family family] hierarchy level. Include conditions in a filter term to identify DHCP packets, such as from destination-port dhcp and from destination-address 255.255.255.255/32. Define another filter term to drop all other packets that fail the RPF check. This feature is available for both IPv4 and IPv6 address families.
To confirm that the fail filter is active, you can issue the show subscribers extensive command, which displays the name of active filters.
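A sketch of a fail filter and its use in a dynamic profile; the filter name, profile name, and use of the $junos-interface-unit predefined variable are illustrative assumptions:

```
[edit firewall family inet]
user@router# set filter rpf-allow-dhcp term dhcp from destination-port dhcp
user@router# set filter rpf-allow-dhcp term dhcp from destination-address 255.255.255.255/32
user@router# set filter rpf-allow-dhcp term dhcp then accept
user@router# set filter rpf-allow-dhcp term rest then discard

[edit dynamic-profiles vlan-profile interfaces demux0 unit "$junos-interface-unit" family inet]
user@router# set rpf-check fail-filter rpf-allow-dhcp
```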
[Subscriber Access]
- Filtering traffic that is mirrored using
DTCP-initiated subscriber secure policy—You can
now filter mirrored traffic before it is sent to a mediation device.
This feature allows service providers to reduce the volume of traffic
sent to a mediation device. For some types of traffic, such as IPTV
or video on demand, it is not necessary to mirror the entire content
of the traffic because the content might already be known or controlled
by the service provider.
To configure, create a policy at the [edit services radius-flow-tap policy policy-name] hierarchy level. You can set up the policy to filter IPv4 or IPv6 traffic by source or destination address or port, protocol, or DSCP value. You then apply the policy by using the new DTCP attribute X-Drop-Policy. You can use the X-Drop-Policy attribute with the ADD DTCP command to begin filtering traffic when mirroring is triggered using the ADD DTCP command. To begin filtering traffic that is currently being mirrored, use the X-Drop-Policy attribute with the new ENABLE DTCP command. To stop filtering traffic that is currently being mirrored, use the X-Drop-Policy attribute with the new DISABLE DTCP command.
[Subscriber Access Configuration Guide]
- Enhancements to multicast subscriber flow distribution
in an aggregated Ethernet bundle (MX Series routers)—Enables you to both target and separate the distribution of
multicast subscriber traffic using enhanced IP chassis network services
mode in an aggregated Ethernet bundle that is configured without link
protection.
This feature enhances already released scheduling and scaling improvements made for subscribers in an aggregated Ethernet bundle and includes support for the following:
- IP demux subscriber interfaces on the EQ DPC and MPC/MIC
modules and VLAN demux subscriber interfaces on MPC/MIC modules.
Note: This feature is not supported for VLAN subscriber interfaces.
- Multicast using the enhanced-ip mode setting at the [edit chassis network-services] hierarchy level.
- Multicast traffic to egress in parallel with unicast traffic, sharing the CoS hierarchy and aggregated Ethernet flow distribution.
- Targeted multicast flow distribution over inter-chassis redundancy (ICR) configurations where multicast traffic flows toward the subscriber primary interface even if that interface resides on a remote chassis within the virtual system.
- The ability to separate unicast and multicast subscriber traffic on a per VLAN basis using OIF mapping.
Targeted distribution enables you to target egress traffic for subscribers on a link; the system distributes subscriber interfaces equally among the links. To enable multicast traffic to egress in parallel with unicast traffic, sharing the CoS hierarchy and aggregated Ethernet flow distribution:
- Configure subscriber distribution. See Distribution of Demux Subscribers in an Aggregated Ethernet Interface.
- Configure the network-services statement at the [edit chassis] hierarchy level to use enhanced-ip mode to take advantage of using the EQ DPC and MPC/MIC modules.
Separated target distribution enables you to target multicast traffic to use a specific VLAN over the aggregated Ethernet interface instead of flowing over the same interface in parallel. To configure separated targeted distribution for a multicast link:
- Configure an interior gateway protocol. See the Junos OS Routing Protocols Configuration Guide .
- Configure IGMP or MLD on the interfaces. See the Junos OS Multicast Protocols Configuration Guide for static configuration. See the Junos OS Subscriber Access Configuration Guide for dynamic configuration.
- Configure the network-services statement at the [edit chassis] hierarchy level to use enhanced-ip mode to take advantage of using the EQ DPC and MPC/MIC modules.
- Configure an OIF mapping for any subscriber VLAN interfaces. See Example: Configuring Multicast with Subscriber VLANs in the Junos OS Multicast Protocols Configuration Guide .
- Configure the distribution type for demux subscribers on an aggregated Ethernet interface by including the targeted-distribution statement at the [edit dynamic-profiles profile-name interfaces demux0 unit unit-name] or [edit interfaces demux0 unit unit-name] hierarchy level.
When links are removed, affected flows are redistributed among the remaining active backup links. When links are added to the system, no automatic redistribution occurs. New subscriber and multicast flows are assigned to the links with the least number of subscribers (typically, the new links). You can configure the system to periodically rebalance the distribution of subscribers on the links by including the rebalance-periodic time hours:minutes interval hours statement at the [edit interfaces ae0 aggregated-ether-options] hierarchy level. To manually rebalance the subscribers on the interface, issue the request interface rebalance interface interface-name command.
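A sketch of the targeted-distribution and periodic rebalancing statements; the profile name, the $junos-interface-unit variable, and the time values are illustrative assumptions:

```
[edit dynamic-profiles subscriber-profile interfaces demux0 unit "$junos-interface-unit"]
user@router# set targeted-distribution

[edit interfaces ae0 aggregated-ether-options]
user@router# set rebalance-periodic time 03:00 interval 24
```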
To display a summary of the targeted distribution on a logical interface, issue the show interfaces interface-name extensive command. To display the targeted distribution on a specific aggregated Ethernet bundle, issue the show interfaces targeting aex command.
[Subscriber Access, Network Interfaces]
- Layer 2 control packets—The
forwarding path supports the following types of Layer 2 control packets
(excluding Operation, Administration, and Maintenance (OAM) packets)
in both the receiving and forwarding directions:
- Ethernet control packets—ARP, IS-IS, 1588v2, Ethernet Synchronization Messaging Channel (ESMC).
- Host path—The host path
to and from the CPU is supported in the following ways:
- Host-bound traffic, prioritized into multiple queues, to support various levels of traffic.
- Hardware-based policing used to limit denial of service attacks.
- Protocol and flow-based policing.
- Code point-based classification and prioritization of packets from the host to the external world.
- Counters and statistics—Most
packet and byte-level statistics for various entities in the forwarding
path available in Junos OS are supported. The following counters and
statistics are supported:
- Ingress and egress packet and byte counters for logical interfaces, Ethernet pseudowires, and MPLS transit label-switched paths.
- Discard packets counter for system-wide global Packet Forwarding Engine statistics.
- Statistics collection and reporting for Gigabit Ethernet interfaces—For Gigabit Ethernet interfaces, Packet Forwarding Engine statistics are disabled by default. To enable them, include the new statistics statement at the [edit interfaces interface-name unit logical-unit-number] hierarchy level. To display the statistics, issue the show interfaces interface-name (brief | extensive) operational mode command.
- Address Resolution Protocol (ARP) parameters—The maximum number of ARP entries is 7,000.
- Support for configuring NAS-Port and NAS-Port-Type
RADIUS attributes per physical interface, VLAN, or S-VLAN (MX Series
routers with MPCs/MICs)—Enables you to configure
the NAS-Port-Type (61) RADIUS IETF attribute, and an extended format
for the NAS-Port (5) RADIUS IETF attribute, on a per-physical interface,
per-static VLAN, or per-static stacked VLAN (S-VLAN) basis. The router
passes the NAS-Port and NAS-Port-Type attributes to the RADIUS server
during the authentication, authorization, and accounting (AAA) process.
The NAS-Port-Type attribute specifies the type of physical port that the network access server (NAS) uses to authenticate the subscriber. The NAS-Port attribute specifies the physical port number of the NAS that is authenticating the user, and is formed by a combination of the physical port’s slot number, port number, adapter number, VLAN ID, and S-VLAN ID. The NAS-Port extended format configures the number of bits (bit width) for each field in the NAS-Port attribute: slot, adapter, port, VLAN, and S-VLAN.
Configuring the NAS-Port-Type and the extended format for NAS-Port on a per-VLAN, per–S-VLAN, or per-physical interface basis is useful in the following network configurations:
- 1:1 access model (per-VLAN basis)—In a 1:1 access model, dedicated customer VLANs (C-VLANs) provide a one-to-one correspondence between an individual subscriber and the VLAN encapsulation.
- N:1 access model (per–S-VLAN basis)—In an N:1 access model, service VLANs are dedicated to a particular service, such as video, voice, or data, instead of to a particular subscriber. Because a service VLAN is typically shared by many subscribers within the same household or in different households, the N:1 access model provides a many-to-one correspondence between individual subscribers and the VLAN encapsulation.
- 1:1 or N:1 access model (per-physical interface basis)—You can configure the NAS-Port-Type and NAS-Port format on a per-physical interface basis for both the 1:1 access model and the N:1 access model.
To configure the NAS-Port-Type and the format for NAS-Port on a per-VLAN, per–S-VLAN, or per-physical interface basis, you must create a NAS-Port options definition. The NAS-Port options definition includes the NAS-Port extended format, the NAS-Port-Type, and either the VLAN range of subscribers or the S-VLAN range of subscribers to which the definition applies.
The basic tasks for configuring a NAS-Port options definition are as follows:
- To create a named NAS-Port options definition, include the nas-port-options nas-port-options-name statement at the [edit interfaces interface-name radius-options] hierarchy level.
- To configure the extended format for the NAS-Port, include the nas-port-extended-format statement and appropriate options at the [edit interfaces interface-name radius-options nas-port-options nas-port-options-name] hierarchy level. To include S-VLAN IDs, in addition to VLAN IDs, in the extended format, include the stacked statement at the [edit interfaces interface-name radius-options nas-port-options nas-port-options-name nas-port-extended-format] hierarchy level.
- To configure the NAS-Port-Type, include the nas-port-type port-type statement at the [edit interfaces interface-name radius-options nas-port-options nas-port-options-name] hierarchy level.
- To configure the VLAN range of subscribers to which the NAS-Port options definition applies, include the vlan-ranges statement at the [edit interfaces interface-name radius-options nas-port-options nas-port-options-name] hierarchy level. To specify all VLANs in the VLAN range, include the any statement at the [edit interfaces interface-name radius-options nas-port-options nas-port-options-name vlan-ranges] hierarchy level.
- To configure the S-VLAN range of subscribers to which the NAS-Port options definition applies, include the stacked-vlan-ranges statement at the [edit interfaces interface-name radius-options nas-port-options nas-port-options-name] hierarchy level. To specify all VLAN IDs in the outer tag of the S-VLAN range, include the any statement at the [edit interfaces interface-name radius-options nas-port-options nas-port-options-name stacked-vlan-ranges] hierarchy level. You cannot configure the inner tag (S-VLAN ID) of the S-VLAN range; the inner tag is always specified as any to represent all S-VLAN IDs.
Note: You can create a maximum of 16 NAS-Port options definitions per physical interface. Each definition can include a maximum of 32 VLAN ranges or 32 S-VLAN ranges, but cannot include a combination of VLAN ranges and S-VLAN ranges.
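The steps above can be sketched as a single definition; the definition name, interface name, field widths, port type value, and VLAN range shown are assumptions for illustration:

```
[edit interfaces ge-1/0/0 radius-options]
user@router# set nas-port-options dsl-subscribers nas-port-type xdsl
user@router# set nas-port-options dsl-subscribers nas-port-extended-format slot-width 4 port-width 4 vlan-width 12
user@router# set nas-port-options dsl-subscribers vlan-ranges 100-199
```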
[Subscriber Access]
- Support for one dynamic profile for both
single-stack and dual-stack subscribers—On PPP
access networks, you can use one dynamic profile to support the following
address combinations: IPv4 only, IPv6 only, and IPv4 and IPv6 dual
stack.
[Designing an IPv6 Architecture and Implementing IPv4 and IPv6 Dual Stack for Broadband Edge]
- Support for DHCPv6 requests that include
a request for both DHCPv6 IA_NA and DHCPv6 prefix delegation—For DHCPv6 subscribers on DHCP access networks, a client can
solicit both an IA_NA address and a prefix for DHCP prefix delegation,
and the session comes up even if either the address or the prefix
is not allocated. In earlier releases, an error was returned if the
BNG did not return both an address for DHCPv6 IA_NA and a prefix for
DHCPv6 prefix delegation.
[Designing an IPv6 Architecture and Implementing IPv4 and IPv6 Dual Stack for Broadband Edge]
- Support for new Juniper Networks Diameter
AVP (MX Series routers)—Junos OS supports a new
Juniper Networks Diameter AVP, Juniper-State-ID (AVP code
2058). Juniper-State-ID specifies the value assigned to
each synchronization cycle for the purpose of identifying which messages
to discard. The Juniper-State-ID AVP can be included in
Diameter messages and used by supported Diameter applications such
as JSRC and PTSP.
[Subscriber Access Configuration Guide]
- Subscriber management and services feature and scaling parity (MX2010 and MX2020)—Starting in Junos OS Release 12.3R4, the MX2010 router and the MX2020 router support all subscriber management and services features that are supported by the MX240, MX480, and MX960 routers. In addition, the scaling and performance values for the MX2010 router and the MX2020 router match those of MX960 routers.
System Logging
New and deprecated system log tags—The following set of system log messages is new in this release:
- LLDP—This section describes messages with the LLDP prefix. They are generated by the link layer discovery protocol process (lldpd), which is used by EX Series switches to learn and distribute device information on network links. The information allows the switch to quickly identify a variety of devices, including IP telephones, resulting in a LAN that interoperates smoothly and efficiently.
The following system log messages are new in this release:
- ASP_NAT_PORT_BLOCK_ACTIVE
- ASP_PCP_NAT_MAP_CREATE
- ASP_PCP_NAT_MAP_DELETE
- ASP_PCP_TPC_ALLOC_ERR
- ASP_PCP_TPC_NOT_FOUND
- AUTHD_ACCT_ON_ACK_NOT_RECEIVED
- CHASSISD_FPC_OPTICS_HOT_NOTICE
- CHASSISD_MAC_ADDRESS_VIRB_ERROR
- CHASSISD_RE_CONSOLE_ME_STORM
- COSD_CLASS_NO_SUPPORT_IFD
- COSD_CLASS_NO_SUPPORT_L3_IFL
- COSD_MAX_FORWARDING_CLASSES_ABC
- DDOS_SCFD_FLOW_AGGREGATED
- DDOS_SCFD_FLOW_CLEARED
- DDOS_SCFD_FLOW_DEAGGREGATED
- DDOS_SCFD_FLOW_FOUND
- DDOS_SCFD_FLOW_RETURN_NORMAL
- DDOS_SCFD_FLOW_TIMEOUT
- ESWD_VMEMBER_MAC_LIMIT_DROP
- FC_PROXY_NP_PORT_RESTORE_FAILED
- LIBJNX_PRIV_RAISE_FAILED
- LLDP_NEIGHBOR_DOWN
- LLDP_NEIGHBOR_UP
- PPMD_MIRROR_ERROR
- RPD_PARSE_BAD_COMMAND
- RPD_PARSE_BAD_FILE
- RPD_PIM_JP_INFINITE_HOLDTIME
- UFDD_LINK_CHANGE
- WEB_CERT_FILE_NOT_FOUND_RETRY
- WEB_DUPLICATE_HTTPD
The following system log messages are no longer documented, either because they indicate internal software errors that are not caused by configuration problems or because they are no longer generated. If these messages appear in your log, contact your technical support representative for assistance:
- FABOAMD_TASK_SOCK_ERR
- JCS_EXT_LINK_STATE
- JCS_RSD_LINK_STATE
- JCS_SWITCH_COMMUNICATION_OK
- LIBJNX_AUDIT_ERROR
- LIBJNX_COMPRESS_EXEC_FAILED
- LIBJNX_INVALID_CHASSIS_ID
- LIBJNX_INVALID_RE_SLOT_ID
- LIBJNX_REPLICATE_RCP_EXEC_FAILED
User Interface and Configuration
- Support for HTTP reverse proxy and HTTP transparent proxy on Application Services Modular Line Card (MX240, MX480, MX960 routers)—The Application Services Modular Line Card with Media Flow Controller software installed enables you to configure HTTP reverse proxy and HTTP transparent proxy caching.
The Application Services Modular Line Card (AS MLC) has three components:
- Application Services Modular Carrier Card (AS MCC)
- Application Services Modular Processing Card with 64-GB memory (AS MXC)
- Application Services Modular Storage Card with 6.4-TB capacity (AS MSC)
The AS MLC for MX Series routers supports high throughput for applications developed with Juniper Networks Media Flow Controller software. A Media Flow Controller application functions as a web-caching proxy server that processes HTTP traffic. HTTP requests are routed to the Media Flow Controller either explicitly for a domain (reverse proxy) or by redirecting traffic based on a policy (transparent proxy).
Media Flow Controller software can operate in HTTP reverse proxy mode, HTTP transparent proxy mode, or mixed mode.
In HTTP reverse proxy configurations, the service provider provides services to a set of domains (content providers) that buy content caching capability from the service provider. Clients connect to content providers through virtual IP (VIP) addresses. Service providers in the reverse proxy scenario generally deploy the routers with AS MLC hardware to honor service requests (such as caching) from the domain users.
HTTP reverse proxy supports the following features:
- Retrieve and deliver content from content providers in response to client requests as if the content originated at the proxy
- Prevent attacks from the Web when a firewall is included in the reverse proxy configuration
- Load balance client requests among multiple servers
- Lessen load on origin servers by caching both static and dynamic content
In HTTP transparent proxy configurations, the service provider implements the AS MLC to improve its own caching capability and to reduce the load on its own network. Implementing caching on an MX Series router with an AS MLC improves the retrieval speeds for data and optimizes the back-end network utilization. Typically, HTTP transparent proxy retrieves content for clients from the Internet. The client identifies the target of the request, which is commonly a location on the Internet.
HTTP transparent proxy does not enforce local policies: it does not add, delete, or modify information contained in the messages it forwards. HTTP transparent proxy is a cache for data. HTTP transparent proxy satisfies client requests directly because it retains the data that was previously requested by the same or by a different client. HTTP transparent proxy improves the efficiency and performance of network bandwidth within the content provider’s data center.
In mixed mode, both reverse proxy and transparent proxy are configured on the same router.
[Junos OS Ethernet Interfaces Configuration Guide]
- Support for 10-port 10-Gigabit Ethernet MIC with
SFPP on MPC3E (MX240, MX480, and MX960 routers)—Starting
with Junos OS Release 12.3, the MPC3E supports the 10-port 10-Gigabit
Ethernet MIC with SFPP (MIC3-3D-10XGE-SFPP). The 10-port 10-Gigabit
Ethernet MIC with SFPP uses SFP+ optical transceiver modules for connectivity.
The MIC supports up to ten 10-Gigabit Ethernet interfaces and occupies
MIC slot 0 or 1 in the MPC3E.
The MIC supports both LAN-PHY and WAN-PHY interface framing modes. You can configure the framing mode on a per-port basis. Use the existing command to switch between LAN-PHY and WAN-PHY modes:
set interfaces interface-name framing (lan-phy | wan-phy)
The 10-Gigabit Ethernet MIC with SFPP supports the same features as the other MICs supported on the MPC3E.
[See MPC3E MIC Overview, MX Series 3D Universal Edge Router Line Card Guide, Ethernet Interfaces Configuration Guide, System Basics.]
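For example, WAN-PHY framing might be set on one port of the MIC as follows (the interface name is hypothetical):

```
set interfaces xe-0/0/0 framing wan-phy
```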
- Inline flow monitoring support (MX Series routers
with MPC3E)—Junos OS Release 12.3 supports inline
flow monitoring and sampling services on MX Series routers with MPC3E.
To configure inline flow monitoring, include the inline-jflow statement at the [edit forwarding-options sampling instance instance-name family inet output] hierarchy level.
Inline flow monitoring supports the IPFIX sampling output format and uses UDP
as the transport protocol. Inline flow monitoring supports both IPv4 and IPv6
formats.
[See Configuring Inline Sampling, and Protocols and Applications Supported by MX240, MX480, MX960 MPC3E.]
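A minimal configuration sketch of inline flow monitoring, assuming hypothetical instance, template, and address values (collector details depend on your deployment):

```
set services flow-monitoring version-ipfix template t1 ipv4-template
set forwarding-options sampling instance sample-1 input rate 100
set forwarding-options sampling instance sample-1 family inet output flow-server 192.0.2.1 port 2055
set forwarding-options sampling instance sample-1 family inet output flow-server 192.0.2.1 version-ipfix template t1
set forwarding-options sampling instance sample-1 family inet output inline-jflow source-address 198.51.100.1
```

Note that the sampling instance must also be associated with an FPC at the [edit chassis fpc slot-number sampling-instance] hierarchy level for inline processing to take effect.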
- Enhancements to IPv4 and IPv6 inline-jflow IPFIX record templates—Junos OS Release 12.3 introduces the VLAN ID field in the inline-jflow IPFIX record templates for IPv4 and IPv6 traffic. The VLAN ID field is not valid for egress traffic and returns a value of 0 in that case. Note that the VLAN ID field is set when a new flow record is created; any change in VLAN ID after that might not be reflected in the record.
[Services Interfaces]
- Support for IPv6 flow servers on interfaces hosted
on MICs or MPCs—Starting with Release 12.3, Junos
OS enables you to configure IPv6 flow servers for inline flow monitoring.
When you configure an IPv6 address for the flow-server statement
at the [edit forwarding-options sampling instance instance-name family (inet | inet6 | mpls) output] hierarchy level, you must
also configure an IPv6 address for the inline-jflow source-address statement at the [edit forwarding-options sampling instance instance-name family (inet | inet6 | mpls) output] hierarchy level. You can configure different families that use IPv4
and IPv6 flow servers under the same sampling instance. However, you
can configure only one flow server per family.
[Services Interfaces]
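For example (addresses hypothetical), an IPv6 flow server with a matching IPv6 inline-jflow source address:

```
set forwarding-options sampling instance sample-1 family inet6 output flow-server 2001:db8::10 port 2055
set forwarding-options sampling instance sample-1 family inet6 output inline-jflow source-address 2001:db8::1
```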
- Optical transceiver support for MIC3-3D-1X100GE-CFP
on MPC3E (MX240, MX480, and MX960 routers)—Starting
with Junos OS Release 12.3, the 100-Gigabit Ethernet MIC with CFP
(MIC3-3D-1X100GE-CFP) on MPC3E supports the CFP-100GBase-ER4 optical
transceiver.
If the ambient temperature exceeds 40° C and the other MIC slot is not empty, the CFP-100GBase-ER4 optical transceiver is put into low-power mode, which disables the transmitter and takes the optic modules on the MIC offline. This protects the optical transceiver and also prevents damage to adjacent components.
When the optical transceiver is taken offline, you might see the following system log (syslog) message:
PIC 1 optic modules in Port 0 8 have been disabled since ambient temperature is over threshold.
Note: The CFP-100GBase-ER4 optical transceiver is NEBS (Network Equipment Building System) compliant only when plugged into the 100-Gigabit Ethernet MIC with CFP and when the other MIC slot is empty.
To reactivate the optical transceiver, use the request chassis optics fpc-slot fpc-slot-number reactivate operational mode command.
[System Basics Configuration Guide]
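For example, to reactivate transceivers that were taken offline on the FPC in slot 1 (the slot number is hypothetical):

```
user@host> request chassis optics fpc-slot 1 reactivate
```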
- Optical transceiver support for MIC3-3D-10XGE-SFPP
on MPC3E (MX240, MX480, and MX960 routers)—Starting
with Junos OS Release 12.3, the 10-port 10-Gigabit Ethernet MIC with
SFPP (MIC3-3D-10XGE-SFPP) on MPC3E supports the SFPP-10GE-ZR optical
transceiver.
If the ambient temperature exceeds 40° C, the transmitter on the SFPP-10GE-ZR optical transceiver is disabled, which takes the optic modules on the MIC offline. This protects the optical transceiver and also prevents damage to adjacent components.
When the optical transceiver is taken offline, you might see the following system log (syslog) message:
PIC 1 optic modules in Port 0 8 have been disabled since ambient temperature is over threshold.
Note: The SFPP-10GE-ZR optical transceiver is not NEBS (Network Equipment Building System) compliant when plugged into the 10-port 10-Gigabit Ethernet MIC with SFPP. If other optical transceivers have been added, they can continue to operate.
To reactivate the optical transceiver, use the request chassis optics fpc-slot fpc-slot-number reactivate operational mode command.
[System Basics Configuration Guide]
VPLS
- PIM snooping for VPLS—PIM
snooping is introduced to restrict multicast traffic to interested
devices in a VPLS. A new statement, pim-snooping, is introduced
at the [edit routing-instances instance-name protocols] hierarchy level to configure PIM snooping on the
PE device. PIM snooping configures a device to examine and operate
only on PIM hello and join/prune packets.
A PIM snooping device snoops PIM hello and join/prune packets on each interface to find interested multicast receivers and populates the multicast forwarding tree with this information. PIM snooping can also be configured on PE routers connected by pseudowires, which ensures that no new PIM packets are generated in the VPLS, with the exception of PIM messages sent through LDP on the pseudowire.
PIM snooping conserves IP multicast bandwidth in the VPLS core: only devices that are members of a multicast group receive the multicast traffic meant for the group. This also improves network integrity and reliability and helps secure multicast data transmission.
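Following the hierarchy above, PIM snooping might be enabled in a VPLS routing instance like this (the instance name is hypothetical):

```
set routing-instances vpls-blue protocols pim-snooping
```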
- Improved VPLS MAC address learning on T4000 routers
with Type 5 FPCs—Junos OS Release 12.3 enables
improved virtual private LAN service (VPLS) MAC address learning on
T4000 routers with Type 5 FPCs by supporting up to 262,143 MAC addresses
per VPLS routing instance. In Junos OS releases before Release 12.3,
T4000 routers with Type 5 FPCs support only 65,535 MAC addresses per
VPLS routing instance.
To enable the improved VPLS MAC address learning on T4000 routers with Type 5 FPCs:
- Include the enhanced-mode statement at the [edit chassis network-services] hierarchy level and perform a system reboot. By default, the enhanced-mode statement is not configured.
- Include the mac-table-size statement at the [edit routing-instances instance-name protocols vpls] hierarchy level.
Note:
- You can configure the enhanced-mode statement only on T4000 routers with Type 5 FPCs.
- The enhanced-mode statement supports up to 262,143 MAC addresses per VPLS routing instance. However, the MAC address learning limit for each interface remains the same (that is, 65,535 MAC addresses).
- You must reboot the system after configuring the enhanced-mode statement. Otherwise, the improved VPLS MAC address learning does not take effect.
- When the T4000 router reboots after the enhanced-mode statement has been configured, all Type 4 FPCs go offline.
[See Configuring Improved VPLS MAC Address Learning on T4000 Routers with Type 5 FPCs.]
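A configuration sketch (the instance name is hypothetical); a system reboot is required after the commit, as noted above:

```
set chassis network-services enhanced-mode
set routing-instances vpls-blue protocols vpls mac-table-size 262143
```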
- VPLS multihoming support extended to FEC 129—Enables you to connect a customer site to two or more PE routers to provide redundant connectivity. A redundant PE router can provide network service to the customer site as soon as a failure is detected. VPLS multihoming helps to maintain VPLS service and traffic forwarding to and from the multihomed site in the event of network failures. BGP-based VPLS autodiscovery (FEC 129) enables each VPLS PE router to discover the other PE routers that are in the same VPLS domain. VPLS autodiscovery also automatically detects when PE routers are added or removed from the VPLS domain. You do not need to manually configure the VPLS and maintain the configuration when a PE router is added or deleted. VPLS autodiscovery uses BGP to discover the VPLS members and to set up and tear down pseudowires in the VPLS. To configure, include the multi-homing statement at the [edit routing-instances instance-name] hierarchy level.
- BGP path selection for Layer 2 VPNs and VPLS—By default, Juniper Networks routers use just the designated
forwarder path selection algorithm to select the best path to reach
each Layer 2 VPN or VPLS routing instance destination. However, you
can now configure the routers in your network to use both the BGP
path selection algorithm and the designated forwarder path selection
algorithm. The Provider routers within the network can use the standard
BGP path selection algorithm. Using the standard BGP path selection
for Layer 2 VPN and VPLS routes allows a service provider to leverage
the existing Layer 3 VPN network infrastructure to also support Layer
2 VPNs and VPLS. The BGP path selection algorithm also helps to ensure
that the service provider’s network behaves predictably with
regard to Layer 2 VPN and VPLS path selection. This is particularly
important in networks employing route reflectors and multihoming.
The PE routers continue to use the designated forwarder path selection algorithm to select the preferred path to reach each CE device. The VPLS designated forwarder algorithm uses the D-bit, preference, and PE router identifier to determine which path to use to reach each CE device in the Layer 2 VPN or VPLS routing instance.
To enable the BGP path selection algorithm for Layer 2 VPN and VPLS routing instances, do the following:
- Specify a unique route distinguisher on each PE router participating in a Layer 2 VPN or VPLS routing instance.
- Configure the l2vpn-use-bgp-rules statement on all of the PE and Provider routers participating in Layer 2 VPN or VPLS routing instances. You can configure this statement at the [edit protocols bgp path-selection] hierarchy level to apply this behavior to all of the routing instances on the router or at the [edit routing-instances routing-instance-name protocols bgp path-selection] hierarchy level to apply this behavior to a specific routing instance.
On all of the PE and Provider routers participating in Layer 2 VPN or VPLS routing instances, run Junos OS Release 12.3 or later. Attempting to enable this functionality on a network with a mix of routers that both do and do not support this feature can result in anomalous behavior.
[See Enabling BGP Path Selection for Layer 2 VPNs and VPLS.]
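For example, to apply the BGP path selection behavior to all routing instances on a router:

```
set protocols bgp path-selection l2vpn-use-bgp-rules
```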
VPNs
- Provider edge link protection in Layer 3
VPNs—A precomputed protection path can be configured
in a Layer 3 VPN such that if a link between a CE router and a PE
router goes down, the protection path (also known as the backup path)
between the CE router and an alternate PE router can be used. This
is useful in an MPLS service provider network, where a customer can
have dual-homed CE routers that are connected to the service provider
through different PE routers. In this case, the protection path avoids
disruption of service if a PE-CE link goes down.
The protection path can be configured on a PE router in a Layer 3 VPN by configuring the protection statement at the [edit routing-instances instance-name protocols bgp family inet unicast] or [edit routing-instances instance-name protocols bgp family inet6 unicast] hierarchy level.
The protection statement indicates that protection is required on prefixes received from a particular neighbor or family. After protection is enabled for a given family, group, or neighbor, protection entries are added for prefixes or next hops received from the respective peer.
A protection path can be selected only if the best path has already been installed by BGP in the forwarding table. This is because a protection path cannot be used as the best path. There are two conditions under which the protection path will not work:
- When configured for an internal BGP peer.
- When configured with external and internal BGP multipath.
[See Example: Configuring Provider Edge Link Protection in Layer 3 VPNs.]
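A minimal sketch (the instance name is hypothetical) that enables protection for IPv4 prefixes received in a Layer 3 VPN routing instance:

```
set routing-instances vpn-red protocols bgp family inet unicast protection
```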
- Edge node failure protection for LDP-signaled pseudowires—This feature provides a fast protection mechanism against
egress PE router failure when transport LSPs are RSVP-TE LSPs. This
is achieved by using multihomed CEs, upstream assigned labels, context-specific
label switching (egress-protection and context-identifier statements), and by extending RSVP facility backup fast reroute
(FRR) to enable node protection at the penultimate hop router of the
LSP. With node protection capability, the penultimate hop router can
perform local repair upon an egress PE failure and redirect pseudowire
traffic very quickly to a protector PE through a bypass LSP. You must
configure a Layer 2 circuit and transport LSP to enable this feature.
Use the show rsvp session and show mpls lsp commands
to view bypass LSP and backup LSP information on the penultimate hop
router and a protector PE router.
[VPNs]
- Support for configuring more than one million Layer
3 VPN labels—For Layer 3 VPNs configured on Juniper
Networks routers, Junos OS normally allocates one inner VPN label
for each customer edge (CE)-facing virtual routing and forwarding
(VRF) interface of a provider edge (PE) router. However, other vendors
allocate one VPN label for each route learned over the CE-facing interfaces
of a PE router. This practice dramatically increases the number of VPN labels,
which leads to slow system processing and slow convergence times.
For Juniper Networks routers participating in a mixed vendor network with more than one million Layer 3 VPN labels, include the extended-space statement at the [edit routing-options forwarding-table chained-composite-next-hop ingress l3vpn] hierarchy level. The extended-space statement is disabled by default.
We recommend that you configure the extended-space statement in mixed-vendor networks containing more than one million BGP routes to support Layer 3 VPNs. Because using this statement can also enhance Layer 3 VPN performance in networks where only Juniper Networks routers are deployed, we recommend configuring the statement in those networks as well.
[See Accepting BGP Updates with Unique Inner VPN Labels in Layer 3 VPNs.]
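Following the hierarchy above, the statement is enabled with a single line:

```
set routing-options forwarding-table chained-composite-next-hop ingress l3vpn extended-space
```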
- Layer 2 circuit switching protection—Provides
traffic protection for the Layer 2 circuit paths configured between
PE routers. In the event the path (working path) used by a Layer 2
circuit fails, traffic can be switched to an alternate path (protection
path). Switching protection is supported for locally switched Layer
2 circuits and provides 1 to 1 protection for each Layer 2 circuit
interface.
Each working path can be configured with either a protection path routed directly to the neighboring PE router or one routed indirectly through a pseudowire configured through an intermediate PE router. The protection path provides failure protection for the traffic flowing between the PE routers. Ethernet OAM monitors the status of these paths. When OAM detects a failure, it reroutes the traffic from the failed working path to the protection path. You can configure OAM to revert the traffic automatically to the working path when it is restored. You can also manually switch traffic between the working path and the protection path, and back.
Layer 2 circuit switching protection is supported on MX Series routers only. Nonstop routing (NSR) and graceful Routing Engine switchover (GRES) are not supported.
To enable Layer 2 circuit switching protection, include the connection-protection statement at the [edit protocols l2circuit local switching interface interface-name end-interface] hierarchy level. You also need to configure OAM for the working path and the protection path by configuring the maintenance-association statement and sub-statements at the [edit protocols oam ethernet connectivity-fault-management maintenance-domain maintenance-domain-name] hierarchy level.
[See Example: Configuring Layer 2 Circuit Switching Protection]
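A sketch following the hierarchy above (interface names hypothetical; the OAM maintenance-association configuration required to monitor the paths is omitted):

```
set protocols l2circuit local-switching interface ge-1/0/0.0 end-interface interface ge-1/1/0.0
set protocols l2circuit local-switching interface ge-1/0/0.0 end-interface connection-protection
```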
- History enhancements for Layer 2 circuit,
Layer 2 VPN, and FEC 129-based pseudowires—Adds
the instance-history option to the show vpls connections and show l2vpn connections commands. Also adds instance-level logs for the following events:
- Catastrophic events
- Route withdrawals
- Pseudowire switchovers
- Connect protect switchovers
- Protect interface swaps
- Interface flaps (interface down events)
- Label block changes
These logs are maintained until the instance is deleted from the configuration.
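For example, the new option can be invoked with either command:

```
user@host> show l2vpn connections instance-history
user@host> show vpls connections instance-history
```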
Modified: 2016-06-09