Junos Trio Chipset on MX Series FAQs
Junos Trio is a chipset with revolutionary 3D scaling technology that enables networks to dynamically scale for more bandwidth, subscribers, and services. This section answers questions about this chipset as used with Junos OS Release 10.1 and later.
Which product platforms use the Junos Trio chipset?
- MX Series MPC line cards
The MX Series MPC provides the connection between the customer’s Ethernet interfaces and the routing fabric of the MX Series chassis. The board features two Junos Trio chipsets.
- 16-port 10-Gigabit Ethernet MPC
This MPC provides the connection between 10-Gigabit Ethernet LAN interfaces and the routing fabric of the MX Series chassis. It has four identical packet processing paths combined with a control plane.
- 100-Gigabit Ethernet MPC
This MPC provides the connection between 100-Gigabit Ethernet LAN interfaces and the routing fabric of the MX Series chassis. It supports two separate slots for MICs.
- MX5, MX10, MX40, and MX80 routers, and EX Series switches.
What is the total number of Packet Forwarding Engines in each of the MPCs using the Junos Trio chipset?
- The 16-port 10-Gigabit Ethernet MPC on MX Series routers has a total of four Packet Forwarding Engines per MPC.
- The other MPCs have two Packet Forwarding Engines each (one per Junos Trio chipset).
What is the default power-on sequence when line cards with Junos Trio chipsets are in the same chassis as other types of line cards?
If the set chassis network-services attribute is not configured, the following is the line card power-up rule:
- If the first line card powered up is a DPC, then only DPCs within the chassis are allowed to power up.
- If the first line card powered up is an MPC, then only MPCs are allowed to power up.
In Junos OS Release 10.2 and later, the power-up rules are as follows (a configuration example appears after this list):
- If the set chassis network-services attribute is configured as ip at start time, any MX Series device-supported boards (such as DPC, FPC, and MPC) will boot.
- If the set chassis network-services attribute is configured as ethernet at start time, any MX Series device-supported boards (such as DPC, FPC, and MPC) will boot.
- If the set chassis network-services attribute is configured as enhanced-ip at start time, only MPCs and MS-DPCs are powered on in the chassis. Non-service DPCs do not work with enhanced network services mode options.
- If the set chassis network-services attribute is configured as enhanced-ethernet at start time, only MPCs and MS-DPCs are powered on in the chassis.
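For example, the following is a minimal sketch of setting the chassis to enhanced IP network services mode so that only MPCs and MS-DPCs power on. The hostname and command output are illustrative, and depending on the release, a chassis reboot may be required for the mode change to take effect:

    [edit]
    user@mx960# set chassis network-services enhanced-ip
    user@mx960# commit
    user@mx960# run show chassis network-services
    Network Services Mode: Enhanced-IP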
What are the QoS differences between the 16-port 10-Gigabit Ethernet MPC and the I-chip-based DPC?
- Dynamic buffer memory is not supported on the 16-port 10-Gigabit Ethernet MPC.
- A buffer configured on an MPC queue is treated as the maximum. However, it is treated as the minimum on the I-chip DPC.
- The MPC maintains packets in 128-byte chunks.
- Port shaping is supported on all MPCs.
- Queues can have unique shaping and guaranteed rate configurations, as shown in the sketch after this list.
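As a minimal sketch of the last point, a class-of-service scheduler can carry an independent guaranteed rate (transmit-rate) and shaping rate for the same queue; the scheduler name, rates, and priority below are placeholder values:

    [edit class-of-service]
    schedulers data-sched {
        /* Guaranteed (minimum) rate for the queue */
        transmit-rate 2m;
        /* Maximum rate the queue is shaped to */
        shaping-rate 8m;
        priority medium-high;
    }

The scheduler is then typically mapped to a forwarding class through a scheduler-map and applied to the interface or VLAN being shaped.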
What is the difference in buffer management on the MPCs compared with the DPCs?
On port-queuing DPCs, dynamic buffers in 64-byte units are available per queue. If a queue is using more than its allocated bandwidth share due to excess bandwidth left over from other queues, its buffers are dynamically increased. This is feasible because the I-chip on the DPCs primarily performs weighted random early detection (WRED) drops at the head of the queue, as opposed to “tail-assisted” drops, which are performed only when a temporal buffer is configured or when the queue becomes full. When a temporal buffer is not configured, the allocated buffer is treated as the minimum for that queue and can expand if other queues are not using their share.
With the Trio chipset on the MPCs, WRED drops are performed at the tail of the queue. The packet buffer is organized into 128-byte units. Before a packet is queued, buffer and WRED checks are performed, and the decision to drop is made at this time. Once a packet is queued, it is not dropped. As a result, dynamic buffer allocation is not supported on the Packet Forwarding Engines containing the Trio chipset. The buffer allocation per queue on the Packet Forwarding Engines containing the Trio chipset is considered the maximum for that queue. Once the allocated buffer becomes full, subsequent packets are dropped until space is available, even if other queues are idle. Buffering is only required during oversubscription.
To provide larger buffers on Packet Forwarding Engines with the Trio chipset, the delay buffer can be increased from the default of 100 ms to 200 ms of the port speed. The delay buffer can also be oversubscribed by using the delay-buffer-rate configuration per port, as shown in the sketch that follows.
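The following is a minimal sketch, assuming a traffic-control profile applied at the port level; the profile and interface names are placeholders, and a delay-buffer-rate larger than the port rate oversubscribes the delay buffer as described above:

    [edit class-of-service]
    traffic-control-profiles tcp-10ge-port {
        /* Reference rate used to size the delay buffer */
        /* A value above the 10-Gbps port rate oversubscribes the buffer */
        delay-buffer-rate 20g;
        shaping-rate 10g;
    }
    interfaces xe-2/0/0 {
        output-traffic-control-profile tcp-10ge-port;
    }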
What are the supported QoS features for the 16-port 10-Gigabit Ethernet MPC and the 100-Gigabit Ethernet MPC?
The 10-Gigabit and 100-Gigabit Ethernet MPCs support the following QoS functionality:
- Port-based queuing
- Per-port shaping
- Eight queues per port
- 100 ms of delay buffer by default per port
- 200 ms of delay buffer configurable per port
- Ability to oversubscribe the delay buffer beyond 200 ms per port
- Queue-level shaping and guaranteed rate
- Separate guaranteed and shaping rates
- Rate limit option to police a queue to act as a Low Latency Queue (LLQ)
- Four WRED profiles per queue
- Multiple queue priority levels
  - Strict High, High, Medium, and Low guaranteed priority levels
  - Strict High and High are at the same hardware priority level
  - Round robin at each guaranteed priority level
  - High and Low excess priority levels
  - Queues perform WRR at the excess priority levels
  - Strict priority scheduling at each excess priority level
- Classification per VLAN (a brief configuration sketch follows this list)
  - MPLS EXP
  - IPv6, IPv4 ToS
  - Inner and outer tag 802.1p (and DEI)
- MF classifiers per VLAN
- Policers per VLAN
  - Single rate, single-rate tricolor marking, two-rate tricolor marking, hierarchical policers
  - Class-aware intelligent hierarchical policers
  - Physical interface policers
- Rewrites per VLAN
  - MPLS EXP, IP DSCP/PREC
  - Inner and outer tag 802.1p (and DEI)
  - Ingress DSCP rewrite
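As a rough sketch of the per-VLAN items above, an 802.1p behavior-aggregate classifier and rewrite rule can be bound to a single VLAN (logical unit), and a two-rate tricolor marking policer can be defined for use on that VLAN; all names, units, and rates are placeholders:

    [edit]
    firewall {
        three-color-policer trtcm-vlan100 {
            two-rate {
                committed-information-rate 10m;
                committed-burst-size 64k;
                peak-information-rate 20m;
                peak-burst-size 128k;
            }
        }
    }
    class-of-service {
        interfaces xe-0/0/1 {
            unit 100 {
                /* Per-VLAN 802.1p classification and rewrite */
                classifiers ieee-802.1 ba-dot1p;
                rewrite-rules ieee-802.1 rw-dot1p;
            }
        }
    }

The classifier and rewrite rule referenced here would be defined under the class-of-service classifiers and rewrite-rules hierarchies, and the policer would typically be invoked from a firewall filter applied to the same logical unit.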