Defining CoS Queue Scheduling Priority
You can configure the scheduling priority of individual queues by specifying the priority in a scheduler, and then associating the scheduler with a queue by using a scheduler map. On QFX5100, QFX5200, EX4600, QFX3500, and QFX3600 switches, and on QFabric systems, queues can have one of two bandwidth scheduling priorities: strict-high priority or low priority. On QFX10000 Series switches, queues can also be configured as high priority.
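For example, here is a minimal configuration sketch of those two steps. The scheduler names (sched-strict, sched-be), the scheduler-map name (sched-map-1), and the interface (xe-0/0/1) are placeholder values, and the default best-effort and network-control forwarding classes are assumed; on platforms that use hierarchical (ETS) scheduling, you attach the scheduler map through a traffic control profile instead of directly to the interface.

```
[edit class-of-service]
user@switch# set schedulers sched-strict priority strict-high
user@switch# set schedulers sched-be priority low
user@switch# set scheduler-maps sched-map-1 forwarding-class network-control scheduler sched-strict
user@switch# set scheduler-maps sched-map-1 forwarding-class best-effort scheduler sched-be
user@switch# set interfaces xe-0/0/1 scheduler-map sched-map-1
```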
By default, all queues are low-priority queues.
The switch services low-priority queues after servicing any queue that carries strict-high priority or high priority traffic. Strict-high priority queues receive preferential treatment over all other queues and receive all of their configured bandwidth before other queues are serviced. Low-priority queues do not transmit traffic until the strict-high priority queues are empty, and they receive the bandwidth that remains after the strict-high priority queues have been serviced. High priority queues receive preference over low-priority queues.
Different switches handle strict-high priority traffic in different ways:
QFX5100, QFX5200, QFX3500, QFX3600, and EX4600 switches, and QFabric systems—You can configure only one queue as a strict-high priority queue.
On these switches, we recommend that you always apply a shaping rate to strict-high priority queues to prevent them from starving other queues. If you do not apply a shaping rate to limit the amount of bandwidth a strict-high priority queue can use, then the strict-high priority queue can use all of the available port bandwidth and starve other queues on the port.
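As a sketch of that recommendation, the following assumes a strict-high priority scheduler with the placeholder name sched-sh and an illustrative 2-Gbps cap; choose a shaping rate that matches your own traffic profile.

```
[edit class-of-service]
user@switch# set schedulers sched-sh priority strict-high
user@switch# set schedulers sched-sh shaping-rate 2g
```

The shaping rate caps how much port bandwidth the strict-high priority queue can consume, so the low priority queues on the port still get serviced.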
QFX10000 switches—You can configure as many queues as you want as strict-high priority. However, keep in mind that too much strict-high priority traffic can starve low priority queues on the port.
Note: We strongly recommend that you configure a transmit rate on all strict-high priority queues to limit the amount of traffic the switch treats as strict-high priority and to prevent strict-high priority queues from starving other queues on the port. This is especially important if you configure more than one strict-high priority queue on a port. Without a transmit rate to limit the bandwidth that strict-high priority queues can use, those queues can consume all of the available port bandwidth and starve the other queues on the port.
The switch treats traffic in excess of the transmit rate as best-effort traffic that receives bandwidth from the leftover (excess) port bandwidth pool. On strict-high priority queues, all traffic that exceeds the transmit rate shares in the port excess bandwidth pool based on the strict-high priority excess bandwidth sharing weight of “1”, which is not configurable. The actual amount of extra bandwidth that traffic exceeding the transmit rate receives depends on how many other queues consume excess bandwidth and the excess rates of those queues.
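A comparable sketch for a QFX10000 strict-high priority scheduler, assuming a placeholder name sched-sh and an illustrative 3-Gbps transmit rate:

```
[edit class-of-service]
user@switch# set schedulers sched-sh priority strict-high
user@switch# set schedulers sched-sh transmit-rate 3g
```

Traffic above the configured transmit rate is then handled as described above: it competes for leftover port bandwidth with the fixed excess sharing weight of 1.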
To configure queue priority using the CLI:
```
[edit class-of-service]
user@switch# set schedulers scheduler-name priority level
```
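For example, to service a scheduler with the placeholder name sched-nc at strict-high priority:

```
[edit class-of-service]
user@switch# set schedulers sched-nc priority strict-high
```

After you reference the scheduler in a scheduler map and apply the map, you can check the priority assigned to each queue with the show class-of-service scheduler-map operational command.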