Understanding CoS WRED Drop Profiles
When packets arrive faster than the device can empty an output queue, the queue requires a method for determining which packets to drop to relieve the congestion. Weighted random early detection (WRED) drop profiles define the drop probability of packets of different packet loss priorities (PLPs) as the output queue fills. During periods of congestion, as the output queue fills, the device drops incoming packets as determined by a drop profile until the output queue becomes less congested.
Depending on the drop probabilities, a drop profile can drop many packets long before the buffer becomes full, or it can drop only a few packets even if the buffer is almost full.
You configure drop profiles in the drop-profiles section of the class-of-service (CoS) configuration hierarchy. You apply drop profiles using a drop profile map in the queue scheduler configuration. For each queue scheduler, you can configure a separate drop profile for each PLP using the loss-priority attribute (low, medium-high, and high). This enables you to treat traffic of different PLPs in different ways during periods of congestion.
Do not apply drop profiles to lossless traffic (traffic that belongs to a forwarding class that has the no-loss drop attribute). Lossless traffic uses priority-based flow control (PFC) to control congestion.
You cannot apply drop profiles to multidestination queues on devices that support them.
Drop Profile Parameters
Drop profiles specify two values, which work as pairs:
Fill level—The queue fullness value, which represents a percentage of the memory used to store packets in relation to the total amount of memory allocated to the queue.
Drop probability—The percentage value that corresponds to the likelihood that an individual packet is dropped.
Defining Drop Profiles on Switches Except QFX10000
You set two queue fill levels and two drop probabilities in each drop profile. The first fill level and the first drop probability create one value pair and the second fill level and the second drop probability create a second value pair.
The first fill level value specifies the percentage of queue fullness at which packets begin to drop, known as the drop start point. Until the queue reaches this level of fullness, no packets are dropped. The second fill level value specifies the percentage of queue fullness at which all packets are dropped, known as the drop end point.
The first drop probability value is always 0 (zero). It pairs with the drop start point and specifies that no packets drop until the queue fullness reaches the first fill level. When the queue fullness exceeds the drop start point, packets begin to drop, and they continue to drop until the queue exceeds the second fill level, at which point all packets drop. The second drop probability value, known as the maximum drop rate, specifies the likelihood of dropping packets when the queue fullness reaches the drop end point. As the queue fills from the drop start point to the drop end point, packets drop in a smooth, linear pattern (called an interpolated graph), as shown in Figure 1. After the drop end point, all packets drop.

The thick line in Figure 1 shows the packet drop characteristics for a sample WRED profile in which the drop start point is a queue fill level of 30 percent, the drop end point is a queue fill level of 50 percent, and the maximum drop rate is 80 percent.
No packets drop until the queue fill level reaches the drop start point of 30 percent. When the queue reaches the 30 percent fill level, packets begin to drop. As the queue fills, the percentage of packets dropped increases in a linear fashion. When the queue fills to the drop end point of 50 percent, the rate of packet drop has increased to the maximum drop rate of 80 percent. When the queue fill level exceeds the drop end point of 50 percent, all of the packets drop until the queue fill level drops below 50 percent.
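As a reference, the sample profile in Figure 1 could be configured as a drop profile with a drop start point of 30 percent, a drop end point of 50 percent, and a maximum drop rate of 80 percent. The following sketch shows one way to enter those values, using the bracket notation for multiple values; the profile name fig1-dp is a placeholder:
set class-of-service drop-profiles fig1-dp interpolate fill-level [ 30 50 ]
set class-of-service drop-profiles fig1-dp interpolate drop-probability [ 0 80 ]
The drop-profiles statement format is described in Configuring a WRED Drop Profile and Applying it to an Output Queue, later in this topic.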
Defining Drop Profiles on QFX10000 Switches
Each queue fill level pairs with a drop probability. As the queue fills to different levels, every time it reaches a fill level configured in a drop profile, the queue applies the drop probability paired with that fill level to the traffic in the queue that exceeds the fill level. You can configure up to 32 pairs of fill levels and drop probabilities to create a customized packet drop probability curve with up to 32 points of differentiation.
Packets are not dropped until they reach the first configured queue fill level. When the queue reaches the first fill level, packets begin to drop at the configured drop probability rate paired with the first fill level. When the queue reaches the second fill level, packets begin to drop at the configured drop probability rate paired with the second fill level. This process continues for the number of fill level/drop probability pairs that you configure in the drop profile.
Drop profiles are interpolated, not segmented. An interpolated drop profile gradually increases the drop probability along a curve between each configured fill level. When the queue reaches the next fill level, the drop probability reaches the drop probability paired with that fill level. A segmented drop profile “jumps” from one fill level and drop probability setting to another in a stepped fashion. The drop probability of traffic does not change as the queue fills until the next fill level is reached.
An example of interpolation is a drop profile with three fill level/drop probability pairs:
25 percent queue fill level paired with a 30 percent drop probability
50 percent queue fill level paired with a 60 percent drop probability
75 percent queue fill level paired with a 100 percent drop probability (all packets that exceed the 75 percent queue fill level are dropped)
The queue drops no packets until its fill level reaches 25 percent. During periods of congestion, when the queue fills above 25 percent full, the queue begins to drop packets at a rate of 30 percent of the packets above the fill level.
However, as the queue continues to fill, it does not continue to drop packets at the 30 percent drop probability. Instead, the drop probability gradually increases as the queue fills to the 50 percent fullness level. When the queue reaches the 50 percent fill level, the drop probability has increased to the configured drop probability pair for the fill level, which is 60 percent.
As the queue continues to fill, the drop probability does not remain at 60 percent, but continues to rise as the queue fills. When the queue reaches the final fill level at 75 percent full, the drop probability has risen to 100 percent and all packets that exceed the 75 percent fill level are dropped.
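As a sketch, the three fill level/drop probability pairs in this example could be entered with the bracket notation for multiple values, with the fill levels and drop probabilities listed in matching order; the profile name qfx10k-dp is a placeholder:
set class-of-service drop-profiles qfx10k-dp interpolate fill-level [ 25 50 75 ]
set class-of-service drop-profiles qfx10k-dp interpolate drop-probability [ 30 60 100 ]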
Default Drop Profile
If you do not configure drop profiles and apply them to queue schedulers, the device uses the default drop profile for lossy traffic classes. In the default drop profile, when the fill level is 0 percent, the drop probability is 0 percent. When the fill level is 100 percent, the drop probability is 100 percent. During periods of congestion, as soon as packets arrive on a queue, the default profile might begin to drop packets.
Packet Drop Method
When a packet reaches the head of a queue, the device calculates a random number between 0 and 100 and plots it against the drop profile using the current fill level of the queue. When the random number falls above the graph line, the queue transmits the packet out the egress interface. When the number falls below the graph line, the device drops the packet.
Packet Drop Example for Switches Except QFX10000
To create the linear drop pattern from the drop start point to the drop end point, the drop probabilities are derived using a linear approximation with eight sections, or steps, from the minimum queue fill level to the maximum queue fill level. The fill levels are divided into the eight sections equally, starting at the minimum fill level and ending at the maximum fill level. As the queue fills, the percentage of dropped packets increases. The percentage of packets dropped is based on the maximum drop rate.
For example, the default drop profile (which specifies a maximum drop rate of 100 percent) has the following drop probabilities at each section, or step, in the eight-section linear drop pattern:
First section—The minimum drop probability is 6.25 percent of the maximum drop rate. The maximum drop probability is 12.5 percent of the maximum drop rate.
Second section—The minimum drop probability is 18.75 percent of the maximum drop rate. The maximum drop probability is 25 percent of the maximum drop rate.
Third section—The minimum drop probability is 31.25 percent of the maximum drop rate. The maximum drop probability is 37.5 percent of the maximum drop rate.
Fourth section—The minimum drop probability is 43.75 percent of the maximum drop rate. The maximum drop probability is 50 percent of the maximum drop rate.
Fifth section—The minimum drop probability is 56.25 percent of the maximum drop rate. The maximum drop probability is 62.5 percent of the maximum drop rate.
Sixth section—The minimum drop probability is 68.75 percent of the maximum drop rate. The maximum drop probability is 75 percent of the maximum drop rate.
Seventh section—The minimum drop probability is 81.25 percent of the maximum drop rate. The maximum drop probability is 87.5 percent of the maximum drop rate.
Eighth section—The minimum drop probability is 93.75 percent of the maximum drop rate. The maximum drop probability is 100 percent of the maximum drop rate.
Packets drop even when there is no congestion, because packet drops begin at the drop start point regardless of whether congestion exists on the port. The default drop profile example represents the worst-case scenario, because the drop start point fill level is 0 percent, so packet drop begins when the queue starts to receive packets.
You can specify when packets begin to drop by configuring a drop start point at a fill level greater than 0 percent. For example, if you configure a drop profile that has a drop start point of 30 percent, packets do not drop until the queue is 30 percent full. We recommend that you configure drop profiles that are appropriate to your network traffic conditions.
The smaller the gap between the minimum drop rate (which is always 0) and the maximum drop rate, the smaller the gap between the minimum drop probability and the maximum drop probability at each section (step) of the linear drop pattern. The default drop profile, which has the maximum gap between the minimum drop rate (0 percent) and the maximum drop rate (100 percent), has the highest gap between the minimum drop probability and the maximum drop probability at each step. Configuring a lower maximum drop rate for a drop profile reduces the gap between the minimum drop probability and the maximum drop probability.
Drop Profile Maps
Drop profile maps are part of scheduler configuration. A drop profile map maps drop profiles to packet loss priorities. Specifying the drop profile map in a scheduler associates the drop profile with the forwarding classes (queues) that you map to the scheduler in a scheduler map.
You configure loss priority for a queue in the classifier section of the CoS configuration hierarchy, and the loss priority is applied to the traffic assigned to the forwarding class at the ingress interface.
Each scheduler can have multiple drop profile maps.
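For example, a scheduler can reference a different drop profile for each loss priority. The following sketch follows the drop-profile-map statement shown later in this topic; the scheduler name be-sched and the drop profile names be-low-dp and be-high-dp are placeholders:
set class-of-service schedulers be-sched drop-profile-map loss-priority low protocol any drop-profile be-low-dp
set class-of-service schedulers be-sched drop-profile-map loss-priority high protocol any drop-profile be-high-dp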
Congestion Prevention
Configuring drop profiles on output queues enables you to control how congestion affects other queues on a port. If you do not configure drop profiles and map them to output queues, the device uses the default drop profile on queues that forward lossy traffic.
For example, if an ingress port forwards traffic to more than one egress port, and at least one of the egress ports experiences congestion, that can cause ingress port congestion. Ingress port congestion (ingress buffer exceeds its resource allocation) can cause frames to drop at the ingress port instead of at the egress port. Ingress port frame drop affects all of the egress ports to which the congested ingress port forwards traffic, not just the congested egress port.
Do not configure drop profiles for the fcoe and no-loss forwarding classes. FCoE and other lossless traffic queues (traffic queues that are configured with the no-loss packet drop attribute) require lossless behavior. Use priority-based flow control (PFC) to prevent frame drop on lossless priorities.
Configuring a WRED Drop Profile and Applying it to an Output Queue
To configure a WRED packet drop profile and apply it to an output queue:
1. Configure a drop profile:
On switches except QFX10000, use the statement set class-of-service drop-profiles profile-name interpolate fill-level drop-start-point fill-level drop-end-point drop-probability 0 drop-probability percentage.
On QFX10000 switches, use the statement set class-of-service drop-profiles profile-name interpolate fill-level level1 level2 ... level32 drop-probability probability1 probability2 ... probability32. You can specify as few as two fill level/drop probability pairs or as many as 32 pairs.
2. Map the drop profile to a queue scheduler using the statement set class-of-service schedulers scheduler-name drop-profile-map loss-priority (low | medium-high | high) protocol any drop-profile profile-name. The drop profile name is the name of the WRED profile configured in Step 1.
3. Map the scheduler, which Step 2 associates with the drop profile, to the output queue using the statement set class-of-service scheduler-maps map-name forwarding-class forwarding-class-name scheduler scheduler-name. The forwarding class identifies the output queue. Forwarding classes are mapped to output queues by default and can be remapped to different queues by explicit user configuration. The scheduler name is the scheduler configured in Step 2.
4. On switches except QFX10000, associate the scheduler map with a traffic control profile using the statement set class-of-service traffic-control-profiles tcp-name scheduler-map map-name. The scheduler map name is the name configured in Step 3.
5. On switches except QFX10000, associate the traffic control profile with an interface using the statement set class-of-service interfaces interface-name forwarding-class-set forwarding-class-set-name output-traffic-control-profile tcp-name. The output traffic control profile name is the name of the traffic control profile configured in Step 4. The interface uses the scheduler map in the traffic control profile to apply the drop profile (and other attributes) to the output queue (forwarding class) on that interface. Because you can use different traffic control profiles to map different schedulers to different interfaces, the same queue number on different interfaces can handle traffic in different ways.
6. On QFX10000 switches, associate the scheduler map with an interface using the statement set class-of-service interfaces interface-name scheduler-map scheduler-map-name. The interface uses the scheduler map to apply the drop profile (and other attributes) to the output queue mapped to the forwarding class on that interface. Because you can use different scheduler maps on different interfaces, the same queue number on different interfaces can handle traffic in different ways.
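Pulling the steps together, the following is a minimal sketch for a switch other than QFX10000. All names are placeholders: the be-dp drop profile, be-sched scheduler, be-smap scheduler map, be-tcp traffic control profile, be-fc-set forwarding class set, the best-effort forwarding class, and interface xe-0/0/1. A complete configuration also defines the forwarding class set and its forwarding classes, which is covered elsewhere in this guide.
set class-of-service drop-profiles be-dp interpolate fill-level [ 30 50 ]
set class-of-service drop-profiles be-dp interpolate drop-probability [ 0 80 ]
set class-of-service schedulers be-sched drop-profile-map loss-priority low protocol any drop-profile be-dp
set class-of-service scheduler-maps be-smap forwarding-class best-effort scheduler be-sched
set class-of-service traffic-control-profiles be-tcp scheduler-map be-smap
set class-of-service interfaces xe-0/0/1 forwarding-class-set be-fc-set output-traffic-control-profile be-tcp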
Drop Profiles on Explicit Congestion Notification Enabled Queues
You must configure a WRED drop profile on queues that you enable for explicit congestion notification (ECN). On ECN-enabled queues, the drop profile sets the threshold at which the queue marks a packet as experiencing congestion (see Understanding CoS Explicit Congestion Notification). When a queue fills to a level at which the WRED drop profile has a packet drop probability greater than zero (0), the device might mark a packet as experiencing congestion. The probability that the device marks a packet as experiencing congestion equals the drop probability of the queue at that fill level.
On ECN-enabled queues, the device does not use the drop profile to drop packets that are not ECN capable (packets marked non-ECT, ECN code bits 00) during periods of congestion. Instead, the device uses the tail-drop algorithm to drop non-ECN-capable packets during periods of congestion. When a queue fills to its maximum level of fullness, tail-drop simply drops all subsequently arriving packets until there is space in the queue to buffer more packets. All non-ECN-capable packets are treated the same way.
To apply a WRED drop profile to non-ECT traffic, configure a multifield (MF) classifier to assign non-ECT traffic to a different output queue that is not ECN-enabled, and then apply the WRED drop profile to that queue.