
Dense Port Concentrators FAQs

This section presents frequently asked questions and answers related to Dense Port Concentrators on Juniper Networks MX Series routers.

How many Layer 3 and Layer 2 policers are supported on Juniper Networks MX Series devices? What are the limiting factors for the policers?

I-chip-based DPCs support up to 39,000 Layer 3 interface policers, of which 16,000 have been verified in testing. In most implementations, per-port and per-Packet Forwarding Engine logical interface limits are reached before the policer limits are.

The MX Series DPCs support Layer 2 policers indirectly. The Layer 2 policers are supported in the IQ2 Ethernet Services Engine, and the I-chip supports both Layer 3 and Layer 2 policing. The type of policing applied by the I-chip (Layer 3 or Layer 2) depends on the type of packet. For example, if a Layer 2 packet is received on a device using an I-chip, Layer 2 policing is performed by the I-chip.

Are peak information rate (PIR) and committed information rate (CIR) supported at the queue level on MX Series devices?

Support for PIR and CIR on MPCs

On MX Series devices using MPCs, CIR and PIR are supported at the queue level with class-of-service (CoS) schedulers. Three rate statements, transmit-rate, shaping-rate, and excess-rate, can be configured simultaneously, and the excess-priority statement can be used with excess-rate to manage bandwidth in the excess region between the CIR and the PIR, as follows:

  • excess-rate—Use to configure the percentage of excess bandwidth traffic that should go into the queue in the excess region.
  • shaping-rate—Use to configure the maximum bandwidth usage, which sets the PIR.
  • transmit-rate—Use to configure the minimum bandwidth allocated to a queue, which sets the CIR.
  • excess-priority—Use to configure the priority of excess traffic in a scheduler as low, medium-low, medium-high, or high.
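A minimal scheduler sketch combining the four statements might look like this (the scheduler name and the rate values are illustrative assumptions, not taken from this document):

```
class-of-service {
    schedulers {
        sched-data {                     # hypothetical scheduler name
            transmit-rate 20m;           # CIR: guaranteed minimum bandwidth
            shaping-rate 80m;            # PIR: maximum bandwidth
            excess-rate percent 30;      # share of bandwidth in the excess region
            excess-priority low;         # priority of traffic between CIR and PIR
        }
    }
}
```

The scheduler is then bound to a forwarding class through a scheduler map in the usual way.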

Support for PIR and CIR on DPCs

On devices using enhanced queuing DPCs, CIR and PIR are not supported at the queue level. PIR on these devices is supported by using the rate-limit statement. You can achieve an effect similar to CIR and PIR at the queue level by combining tricolor policers with rate-limit and drop profiles. You use the tricolor policers to enforce the CIR and PIR, by configuring the drop profiles to drop the yellow packets (packet loss priority (PLP): medium-high) before the green packets (PLP: low).
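The combination described above can be sketched as follows; all names, rates, and fill levels are illustrative assumptions. The two-rate tricolor policer marks conforming traffic green (PLP low) and traffic between CIR and PIR yellow (PLP medium-high), and the drop-profile map drops yellow packets earlier than green ones:

```
firewall {
    three-color-policer trtcm-cir-pir {          # hypothetical policer name
        two-rate {
            color-blind;
            committed-information-rate 20m;      # acts as the CIR
            committed-burst-size 100k;
            peak-information-rate 80m;           # acts as the PIR
            peak-burst-size 200k;
        }
    }
}
class-of-service {
    drop-profiles {
        dp-green { fill-level 90 drop-probability 100; }   # drop green late
        dp-yellow { fill-level 50 drop-probability 100; }  # drop yellow early
    }
    schedulers {
        sched-data {                             # hypothetical scheduler name
            drop-profile-map loss-priority low protocol any drop-profile dp-green;
            drop-profile-map loss-priority medium-high protocol any drop-profile dp-yellow;
        }
    }
}
```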

Is QoS on GRE tunnels supported when using the MS-DPC (IP services line card) on an MX Series device?

No, QoS on GRE tunnels is not supported when using the MS-DPC line card.

Is it possible to use a common QoS scheduler on a traffic-class group comprised of an aggregate of multiple GRE tunnels?

Yes. A common QoS scheduler can be applied to an aggregate group of GRE tunnels by using a per-unit scheduler for GRE tunnels, which provides fine-grained queuing by using a single scheduler for a set of queues. In Junos OS Release 10.1 and later, the per-unit scheduler for GRE tunnels is supported on:

  • M Series Multiservice Edge Routers: M320 with SFPC, M120, M7i, and M10i with either the non-enhanced (ABC-based) Compact Forwarding Engine Board (CFEB) or the enhanced (I-chip-based) CFEB
  • T Series Core Routers, including TX Matrix devices

This feature adds all of the functionality of tunnel PICs to Gigabit Ethernet Intelligent Queuing 2 (IQ2) and Enhanced IQ2 (IQ2E) PICs. The QoS for the GRE tunnel traffic is applied as the traffic is looped through the IQ2 or IQ2E PIC. Shaping is performed on all packets that pass through the GRE tunnel.
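As a sketch, the per-unit scheduler is enabled at the GRE physical interface level (the interface name and tunnel addresses below are illustrative assumptions):

```
interfaces {
    gr-1/0/0 {
        per-unit-scheduler;          # one scheduler per logical unit (tunnel)
        unit 0 {
            tunnel {
                source 10.0.0.1;     # hypothetical tunnel endpoints
                destination 10.0.0.2;
            }
            family inet;
        }
    }
}
```

Each logical unit, and therefore each GRE tunnel, then gets its own set of queues governed by the scheduler map applied to it.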

Is it possible to create a strict-high priority queue running at the same time as several high priority queues?

Yes. This can be done on all DPCs, both with and without enhanced queuing. It cannot be done on devices using IQ2 PICs.
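For example, a scheduler set like the following (names and rates are illustrative assumptions) defines one strict-high queue running alongside two high-priority queues:

```
class-of-service {
    schedulers {
        sched-voice {
            priority strict-high;    # serviced before all other priorities
        }
        sched-video {
            priority high;
            transmit-rate 30m;
        }
        sched-control {
            priority high;
            transmit-rate 5m;
        }
    }
}
```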

Are the QoS features handled the same on the 10-Gigabit Ethernet DPCs as on the 1-Gigabit Ethernet DPCs?

Yes, the QoS features are the same for both types of DPCs.

Is hierarchical QoS per VLAN supported on aggregated Ethernet (AE) interfaces?

Hierarchical QoS allows you to control QoS at multiple levels: the physical level, the logical level, and fine-grained control at the queue level. It is useful for managing bandwidth congestion and link sharing in multiservice networks. For MX Series devices with AE interfaces, hierarchical QoS per VLAN is available over a link aggregation group (LAG) in Junos OS Release 9.4 and later. This support is limited to one-to-one active/backup links; however, the active and backup links can reside on different DPCs.
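A per-VLAN shaping sketch on an AE interface might look like this (the interface, VLAN ID, profile name, and rate are illustrative assumptions, not from this document):

```
interfaces {
    ae0 {
        hierarchical-scheduler;      # enable hierarchical scheduling on the AE bundle
        vlan-tagging;
        unit 100 {
            vlan-id 100;
        }
    }
}
class-of-service {
    traffic-control-profiles {
        tcp-vlan100 {
            shaping-rate 50m;        # per-VLAN shaping rate
        }
    }
    interfaces {
        ae0 {
            unit 100 {
                output-traffic-control-profile tcp-vlan100;
            }
        }
    }
}
```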

What are the differences between the CoS traffic-manager options on the MX Series?

Use the traffic-manager statement with one of the following mode options to configure the CoS traffic manager mode of operation:

  • egress-only—Enables CoS queuing and scheduling on the egress side for the PIC that houses the interface. This is the default mode for an Enhanced Queuing (EQ) DPC on MX Series routers.
  • ingress-and-egress—Enables CoS queuing and scheduling on both the egress and ingress sides for the PIC. This is the default mode for IQ2 and IQ2E PICs on M Series and T Series routers. For EQ DPCs, you must configure the traffic-manager statement with ingress-and-egress mode to enable ingress CoS on the EQ DPC.
    • When ingress-and-egress is turned on, the classification is done in the IQ2 or IQ2E PIC.
    • When ingress-and-egress is not turned on, the classification is done at the I-chip.

Is rate-limit at the physical interface level supported on Enhanced Queuing DPCs (DPCE-R-Qs)?

Yes. In Junos OS Release 10.0 and later, rate-limit is supported at the physical interface level on Enhanced Queuing DPCs.

Is ingress queuing supported with the Enhanced Queuing DPCs (DPCE-R-Qs)?

Yes. Ingress queuing is supported with Enhanced Queuing DPCs. By default, though, ingress CoS features are disabled on the Enhanced Queuing DPCs. To enable ingress CoS features on an Enhanced Queuing DPC, include the traffic-manager statement with mode ingress-and-egress.
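As a sketch, the traffic-manager statement is configured at the chassis level, per PIC (the FPC and PIC slot numbers below are illustrative assumptions):

```
chassis {
    fpc 1 {                          # hypothetical DPC slot
        pic 0 {
            traffic-manager {
                mode ingress-and-egress;   # enable ingress CoS on the EQ DPC
            }
        }
    }
}
```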

Is rate limiting at the queue level supported on Enhanced Queuing DPCs (DPCE-R-Qs)?

No. In contrast with non-queuing Packet Forwarding Engines, Enhanced Queuing DPCs do not support the CoS function of rate limiting on a per-queue basis. With an Enhanced Queuing DPC, you can rate limit traffic by using firewall filters to apply single-rate two-color policers to the input or output traffic at logical interfaces.
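A sketch of that workaround follows; the policer name, rates, filter name, and interface are illustrative assumptions. A single-rate two-color policer is referenced from a firewall filter, which is then applied to the logical interface:

```
firewall {
    policer limit-1m {               # hypothetical policer name
        if-exceeding {
            bandwidth-limit 1m;
            burst-size-limit 64k;
        }
        then discard;
    }
    family inet {
        filter rate-limit-in {       # hypothetical filter name
            term t1 {
                then {
                    policer limit-1m;
                    accept;
                }
            }
        }
    }
}
interfaces {
    ge-1/0/0 {                       # hypothetical interface
        unit 100 {
            family inet {
                filter {
                    input rate-limit-in;
                }
            }
        }
    }
}
```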

How is the shaping rate calculated on the Enhanced Queuing DPCs (DPCE-R-Qs)?

The shaping rate calculation on the Enhanced Queuing DPC is similar to the shaping rate and WRR calculations performed on the Gigabit Ethernet IQ2 PICs:

For ingress and egress: Layer 3 header + Layer 2 header + frame check sequence (FCS)

What is the queuing buffer size on MX Series DPCs?

  • On port queuing DPCs, the delay buffer is 100 ms per port on egress. This delay calculation is based on a 64-byte average packet size.
  • On Enhanced Queuing DPCs, the delay buffer is 500 ms per port on egress and ingress. This delay calculation is based on a 512-byte average packet size.
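As a rough illustration of what a delay-based buffer figure implies (the 1-Gbps port rate here is an assumed example, not from this document), the buffer depth in bytes is the port rate multiplied by the delay:

```
buffer (bytes) = port rate (bits/s) x delay (s) / 8
1 Gbps x 0.100 s / 8 = 12.5 MB of egress buffering per port
```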

Are fine-grained queuing capabilities supported on Enhanced Queuing DPCs (DPCE-R-Q)?

Yes. DPCE-Rs and DPCE-Qs support up to 64,000 individual queues across both 1-Gigabit Ethernet and 10-Gigabit Ethernet ports. Additional features supported on these DPC types include:

  • Four-level hierarchical WRR
  • Four levels of per-VLAN queue priority
  • Priority propagation
  • Drop statistics per VLAN, color, or queue
  • Changeable allocation of schedulers per port for up to 8,000 scheduler nodes with eight queues each or 16,000 nodes with four queues each

What are the QoS properties of the DPCE-R and DPCE-Q line cards?

The following are the major QoS properties and their features:

  • Queuing at the VLAN level, per Packet Forwarding Engine using I-chip
    • 4,000 schedulers with four queues
    • 2,000 schedulers with eight queues
  • Hierarchical QoS
    • Traffic shaping at the physical port and at the customer VLAN or set of customer VLANs that share the same service VLAN.
    • The traffic-control-profiles configuration statement is extended to support QoS at the interface-set level.
    • More customer VLAN schedulers than previous solutions: 2,000 eight-queue schedulers or 4,000 four-queue schedulers per 10-Gigabit Ethernet port, versus 1,000 any-queue schedulers per 10-Gigabit Ethernet port for IQ2.
    • Three levels of priority, versus only two levels with IQ and IQ2.
    • Priority propagation: priority from the queue level is preserved or demoted when passing through logical interface or interface-set stages.
    • Shaping and scheduling at inner and outer VLAN tag levels using a logical interface or an interface set.
  • Queues and forwarding classes:
    • Eight queues per port
    • 16 forwarding classes
    • Four scheduling priorities per queue
    • Four WRED profiles per queue with flexible RED profiles
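Shaping a set of customer VLANs that share a service VLAN uses an interface set, as in the following sketch (the set name, interface, units, and rate are illustrative assumptions):

```
interfaces {
    interface-set svlan-100 {        # hypothetical interface-set name
        interface ge-1/0/0 {
            unit 100;                # customer VLANs grouped under one service VLAN
            unit 101;
        }
    }
}
class-of-service {
    traffic-control-profiles {
        tcp-svlan-100 {
            shaping-rate 100m;       # aggregate shaping for the whole set
        }
    }
    interfaces {
        interface-set svlan-100 {
            output-traffic-control-profile tcp-svlan-100;
        }
    }
}
```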

What are the QoS properties of the DPCE-R line cards?

  • Full Junos OS Layer 3 routing feature set
    • Eight queues per port
    • Layer 2 Ethernet switching features
    • Per-VLAN policing
    • Per-VLAN rewrite
    • Per-VLAN tricolor marking
    • Per-VLAN classification
    • Per-VLAN accounting
    • Per-VLAN filtering
  • Classification per VLAN
    • 802.1p of inner or outer tag
    • MPLS EXP
    • IPv4 type of service (ToS) firewall filters
    • Both Layer 3 and Layer 2 fields for VPLS and bridge traffic
    • Hierarchical policers per VLAN: two-rate tricolor marking (TCM), single-rate TCM, and single-rate policers
    • Packet header rewrite
    • Per-VLAN queuing and schedulers per port
  • Queues and forwarding classes
    • Eight queues per port
    • 16 forwarding classes
    • Four scheduling priorities per queue
    • Four WRED profiles per queue with flexible RED profiles

What are the QoS properties of the Enhanced Queuing DPCs (DPCE-R-Q)?

  • Support ingress queuing, scheduling, and shaping
  • Classification using EXP for VPLS without tunnel
  • ACL-based classification for ingress QoS
  • Layer 2 policers: per-VLAN ingress policers and per-VLAN egress policers
  • Match on 802.1p bits and PLP in a firewall filter
  • Rewrite of the inner packet's 802.1p bits
  • Rate limit per queue
  • Inclusion of the drop eligible indicator (DEI) bit in 802.1p-based classification
  • Double the number of subscribers, schedulers, shapers, and queues per DPC
  • Multiple VLAN bundling (interface sets within interface sets)
  • Class-aware hierarchical policers

Is DiffServ code point (DSCP) classification of MPLS-tagged packets supported on the I-chip-based DPCs?

The DSCP classifier is supported on I-chip DPCs as shown in the following table.

Table 1: DSCP Classifier Configuration

  • Layer 3 VPNs and VPLS using an LSI routing instance
    • Supported platforms: M320, M120, and MX Series
    • DSCP classifier configuration: Configured under class-of-service routing-instances on the egress PE router.
  • Layer 3 VPNs using a virtual tunnel (VT) routing instance
    • Supported platforms: M320, M120, and MX Series
    • DSCP classifier configuration: Configured on the core-facing interface under class-of-service interfaces on the egress PE router.
  • MPLS forwarding
    • Supported platforms: M320, M120, and MX Series (not supported on IQE, or on MX Series when ingress queuing is used)
    • DSCP classifier configuration: Configured on the ingress core-facing interface under class-of-service interfaces on the P or egress PE router.
  • VPLS using a VT routing instance: Not supported
  • MPLS forwarding when the number of labels in the MPLS label stack is more than two: Not supported
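For the Layer 3 VPN case with an LSI routing instance, the DSCP classifier is bound under class-of-service routing-instances, roughly as follows (the classifier name and routing-instance name are illustrative assumptions):

```
class-of-service {
    classifiers {
        dscp dscp-vpn {                  # hypothetical DSCP classifier
            forwarding-class expedited-forwarding {
                loss-priority low code-points ef;
            }
        }
    }
    routing-instances {
        vpn-a {                          # hypothetical Layer 3 VPN instance
            classifiers {
                dscp dscp-vpn;           # applied on the egress PE router
            }
        }
    }
}
```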

Published: 2012-11-14
