RED Drop Profiles for Congestion Management
This topic describes the use and configuration of random early detection (RED) drop profiles for congestion management. A drop profile is a mechanism of RED that defines parameters that allow packets to be dropped from a queue based on how full the queue is. Drop profiles define the meanings of the packet loss priorities.
Manage Congestion with RED Drop Profiles and Packet Loss Priorities
You can configure two parameters to control congestion in each output queue. One parameter, delay-buffer bandwidth, enables queue growth to absorb burst traffic up to the specified product of delay-buffer time and output rate. Once the specified delay buffer becomes full, packets with 100 percent drop probability are dropped from the tail of the queue. For more information, see Managing Congestion on the Egress Interface by Configuring the Scheduler Buffer Size.
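As a brief illustration of the delay-buffer parameter (a minimal sketch; the scheduler name and the 100-millisecond value are assumptions, not taken from this topic), you size the buffer per scheduler with the buffer-size statement:

[edit class-of-service schedulers]
best-effort-sched {                          /* hypothetical scheduler name */
    buffer-size temporal 100000;             /* assumed value: 100,000 microseconds (100 ms) of delay buffer */
}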
The other parameter, which this topic covers, defines the drop probabilities across the range of delay-buffer occupancy, supporting the RED process. When the number of packets queued is greater than the ability of the router or switch to empty a queue, the queue requires a method for determining which packets to drop from the network. To address this, you can enable RED on individual queues.
Depending on the drop probabilities, RED might drop many packets long before the buffer becomes full, or it might drop only a few packets even if the buffer is almost full.
When you configure drop profiles, there are two important values:
- Queue fullness: Represents a percentage of the memory used to store packets in relation to the total amount that has been allocated for that specific queue.
- Drop probability: A percentage value that correlates to the likelihood that an individual packet is dropped from the network.
How these two variables function is illustrated in graph format. Figure 1 shows both a discrete and an interpolated graph. Although the formation of these graph lines is different, the application of the profile is the same. When a packet joins the tail of the queue, a random number from 0 to 100 is calculated by the router or switch. This random number is plotted against the drop profile using the current queue fullness of that particular queue. When the random number falls above the graph line, the packet is transmitted onto the physical media. When the number falls below the graph line, the packet is dropped from the network.

You create drop profiles by defining multiple fill levels and drop probabilities.
To create the discrete profile graph shown on the left in Figure 1, the software begins at the bottom-left corner of the graph, representing a 0-percent fill level (l) and a 0-percent drop probability (p). The line runs horizontally to the right along the fill-level axis until it reaches the first configured fill level, 50 percent, where it rises to the configured drop probability of 20 percent. The line then continues horizontally at that drop probability until the next configured fill level, 75 percent, where it rises to a drop probability of 40 percent. The same pattern repeats at the 85-percent fill level (drop probability 75 percent) and at the 90-percent fill level (drop probability 90 percent), creating the data point l90 p90. From l90 p90, the line continues horizontally to the 100-percent fill level, where it rises to the endpoint (100, 100): a 100-percent fill level with a 100-percent drop probability.
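For example, reading values off this discrete profile: at a queue fullness of 80 percent, the most recently crossed fill level is 75 percent, so the applicable drop probability is 40 percent. A packet that arrives at that moment is dropped if the random number (0 through 100) calculated by the router or switch falls below 40, and is transmitted if it falls at or above 40.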
If you specify an interpolated drop profile, the initial line segment spans from the origin (0, 0) to the first defined point. From that fill-level/drop-probability point, a second line segment runs to the next defined point, and so on, until a final line segment connects to (100, 100). The software automatically constructs a drop profile containing 64 fill levels with drop probabilities that approximate the calculated line segments.
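For example, using the interpolated profile configured later in this topic, the segment between the points (50, 20) and (75, 40) gives a drop probability of roughly 20 + (40 − 20) × (60 − 50) / (75 − 50) = 28 percent at a queue fullness of 60 percent; the software approximates this value at the closest of its 64 generated fill levels.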
For consistent behavior across router families, include the pair (100, 100) in the drop profile configuration.
You can create a smoother graph line by configuring the profile with the interpolate statement. This enables the software to automatically generate 64 data points on the graph, beginning at (0, 0) and ending at (100, 100). Along the way, the graph line intersects the specific data points that you have defined.
If you configure the interpolate statement, you can specify more than 64 pairs, but the system generates only 64 discrete entries.
Loss priorities allow you to set the priority of dropping a packet. Loss priority affects the scheduling of a packet without affecting the packet’s relative ordering. You can use the packet loss priority (PLP) bit as part of a congestion control strategy. You can use the loss priority setting to identify packets that have experienced congestion. Typically you mark packets exceeding some service level with a high loss priority. You set loss priority by configuring a classifier or a policer. The loss priority is used later in the workflow to select one of the drop profiles used by RED.
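As a sketch of how loss priority might be set (the policer name and rate limits here are hypothetical, not from this topic), a policer can mark traffic that exceeds a rate limit with a high loss priority so that RED treats it more aggressively:

firewall {
    policer mark-excess-plp {                /* hypothetical policer name */
        if-exceeding {
            bandwidth-limit 10m;             /* assumed rate limit */
            burst-size-limit 64k;            /* assumed burst size */
        }
        then {
            loss-priority high;              /* traffic exceeding the limit gets high PLP */
        }
    }
}

The policer still has to be applied, for example as a firewall filter action or directly on an interface, before it affects traffic.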
You specify drop probabilities in the drop-profiles section of the class-of-service (CoS) configuration hierarchy and map them to corresponding loss priorities in each scheduler configuration. For each scheduler, you can configure multiple separate drop profiles, one for each combination of loss priority and protocol.
You can configure a maximum of 32 different drop profiles.
Use Feature Explorer to confirm platform and release support for specific features.
Review the Platform-Specific RED Drop Profile Behavior section for notes related to your platform.
Configure RED Drop Profiles to Define Packet Drop or ECN Behaviors
You enable RED by applying a drop profile to a scheduler. When RED is operational on an interface, the queue no longer drops all excess packets at the tail of the queue. Rather, a controlled fraction of packets is dropped, or marked with explicit congestion notification (ECN) if ECN is enabled. Some output-buffered routers perform RED drops of the oldest packets at the head of the queue; other routers perform RED as packets enter a queue. When a queue becomes full, tail drops (100 percent drop probability) supersede random dropping.
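As a minimal sketch (the scheduler name is hypothetical, and ECN support varies by platform and queue type), ECN marking is enabled per scheduler rather than per drop profile:

[edit class-of-service schedulers]
best-effort-sched {                          /* hypothetical scheduler name */
    explicit-congestion-notification;        /* mark ECN-capable packets instead of dropping them */
}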
To configure RED drop profiles, include the following statements at the [edit class-of-service] hierarchy level:
[edit class-of-service]
drop-profiles {
    profile-name {
        fill-level percentage drop-probability percentage;
        interpolate {
            drop-probability [ values ];
            fill-level [ values ];
        }
    }
}
To configure a drop profile, include either the interpolate statement and its options, or the fill-level and drop-probability percentage values.
For example, the following shows a discrete configuration and an interpolated configuration that correspond to the graphs in Figure 1. The values defined in the configurations match the data points in the graph lines.
Create a Discrete Configuration
class-of-service {
    drop-profiles {
        discrete-style-profile {
            fill-level 0 drop-probability 0;
            fill-level 50 drop-probability 20;
            fill-level 75 drop-probability 40;
            fill-level 85 drop-probability 75;
            fill-level 90 drop-probability 90;
            fill-level 100 drop-probability 100;
        }
    }
}
Create an Interpolated Configuration
class-of-service {
    drop-profiles {
        interpolated-style-profile {
            interpolate {
                fill-level [ 0 50 75 85 90 100 ];
                drop-probability [ 0 20 40 75 90 100 ];
            }
        }
    }
}
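If you prefer configuration-mode set commands, the interpolated example can also be entered as follows (a sketch equivalent to the hierarchy shown above):

set class-of-service drop-profiles interpolated-style-profile interpolate fill-level [ 0 50 75 85 90 100 ]
set class-of-service drop-profiles interpolated-style-profile interpolate drop-probability [ 0 20 40 75 90 100 ]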
To configure a drop profile, define the profile name, fill levels, and drop probabilities at the [edit class-of-service drop-profiles] hierarchy level, as shown in the preceding examples.
After you configure a drop profile, you must assign the drop profile to a drop-profile map, and assign the drop-profile map to a scheduler, as discussed in Determining Packet Drop Behavior by Configuring Drop Profile Maps for Schedulers.
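As a minimal sketch of that next step (the scheduler name is hypothetical; the profile names come from the examples above), a drop-profile map in a scheduler ties each loss priority to a drop profile:

[edit class-of-service schedulers]
best-effort-sched {                          /* hypothetical scheduler name */
    drop-profile-map loss-priority low protocol any drop-profile interpolated-style-profile;
    drop-profile-map loss-priority high protocol any drop-profile discrete-style-profile;
}

The scheduler, in turn, must be referenced from a scheduler map that is applied to an interface before the drop profiles take effect.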
Platform-Specific RED Drop Profile Behavior
Use Feature Explorer to confirm platform and release support for RED drop profiles.
Use the following table to review behaviors specific to your platform:
| Platform | Difference |
|---|---|
| ACX5448 | |
| ACX7000 Series | |
| MX Series | |
| PTX Series | |