Understanding DCB Features and Requirements
Data center bridging (DCB) is a set of enhancements to the IEEE 802.1 bridge specifications. DCB modifies and extends Ethernet behavior to support I/O convergence in the data center. I/O convergence includes but is not limited to the transport of Ethernet LAN traffic and Fibre Channel (FC) storage area network (SAN) traffic on the same physical Ethernet network infrastructure.
Video 1: What is Data Center Bridging?
A converged architecture saves cost by reducing the number of networks and switches required to support both types of traffic, reducing the number of interfaces required, reducing cable complexity, and reducing administration activities.
The Juniper Networks QFX Series and EX4600 switches support the DCB features required to transport converged Ethernet and FC traffic while providing the class-of-service (CoS) and other characteristics FC requires for transmitting storage traffic. To accommodate FC traffic, DCB specifications provide:
A flow control mechanism called priority-based flow control (PFC, described in IEEE 802.1Qbb) to help provide lossless transport.
A discovery and exchange protocol for conveying configuration and capabilities among neighbors to ensure consistent configuration across the network, called Data Center Bridging Capability Exchange protocol (DCBX), which is an extension of the Link Layer Discovery Protocol (LLDP, described in IEEE 802.1AB).
A bandwidth management mechanism called enhanced transmission selection (ETS, described in IEEE 802.1Qaz).
A congestion management mechanism called quantized congestion notification (QCN, described in IEEE 802.1Qau).
The switch supports the PFC, DCBX, and ETS standards but does not support QCN. The switch also provides the high-bandwidth interfaces (10-Gbps minimum) required to support DCB and converged traffic.
This topic describes the DCB standards and requirements the switch supports:
Lossless Transport
FC traffic requires lossless transport (defined as no frames dropped because of congestion). Standard Ethernet does not support lossless transport, but the DCB extensions to Ethernet along with proper buffer management enable an Ethernet network to provide the level of class of service (CoS) necessary to transport FC frames encapsulated in Ethernet over an Ethernet network.
This section describes the factors involved in creating lossless transport over Ethernet:
PFC
PFC is a link-level flow control mechanism similar to Ethernet PAUSE (described in IEEE 802.3x). Ethernet PAUSE stops all traffic on a link for a period of time. PFC enables you to divide traffic on a link into eight priorities and stop the traffic of a selected priority without stopping the traffic assigned to other priorities on the link.
Pausing the traffic of a selected priority enables you to provide lossless transport for traffic assigned that priority and at the same time use standard lossy Ethernet transport for the rest of the link traffic.
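As a sketch of how per-priority pausing is configured, the following hedged Junos example defines a congestion notification profile that applies PFC only to IEEE 802.1p code point 011 (priority 3) and attaches it to one interface. The profile name fcoe-cnp and the interface xe-0/0/1 are placeholders, not values from this topic:

```
class-of-service {
    congestion-notification-profile {
        fcoe-cnp {                       /* hypothetical profile name */
            input {
                ieee-802.1 {
                    code-point 011 {     /* pause only traffic at priority 3 */
                        pfc;
                    }
                }
            }
        }
    }
    interfaces {
        xe-0/0/1 {                       /* placeholder interface */
            congestion-notification-profile fcoe-cnp;
        }
    }
}
```

With this configuration, traffic at priority 3 is paused when its buffers approach the PFC threshold, while the other seven priorities on the link continue to use standard lossy Ethernet transport.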
Buffer Management
Buffer management is critical to the proper functioning of PFC, because if buffers are allowed to overflow, frames are dropped and transport is not lossless.
For each lossless flow priority, the switch requires sufficient buffer space to:
Store frames sent during the time it takes to send the PFC pause frame across the cable between devices.
Store the frames that are already on the wire when the sender receives the PFC pause frame.
The propagation delay due to cable length and speed, as well as processing speed, determines the amount of buffer space needed to prevent frame loss due to congestion.
The switch automatically sets the threshold for sending PFC pause frames to accommodate delay from cables as long as 150 meters (492 feet) and to accommodate large frames that might be on the wire when the switch sends the pause frame. This ensures that the switch sends pause frames early enough to allow the sender to stop transmitting before the receive buffers on the switch overflow.
Physical Interfaces
QFX Series switches support 10-Gbps or faster, full-duplex interfaces. The switch enables DCB capability only on 10-Gbps or faster Ethernet interfaces.
ETS
PFC divides traffic into up to eight separate streams (priorities, configured on the switch as forwarding classes) on a physical link. ETS enables you to manage the link bandwidth by:
Grouping the priorities into priority groups (configured on the switch as forwarding class sets).
Specifying the bandwidth available to each of the priority groups as a percentage of the total available link bandwidth.
Allocating the bandwidth to the individual priorities in the priority group.
The available link bandwidth is the bandwidth remaining after servicing strict-high priority queues. On QFX5200, QFX5100, EX4600, QFX3500, and QFX3600 switches, and on QFabric systems, we recommend that you always configure a shaping rate to limit the amount of bandwidth a strict-high priority queue can consume by including the shaping-rate statement in the [edit class-of-service schedulers] hierarchy on the strict-high priority scheduler. This prevents a strict-high priority queue from starving other queues on the port. (On QFX10000 switches, configure a transmit rate on strict-high priority queues to set a maximum amount of bandwidth for strict-high priority traffic.)
Managing link bandwidth with ETS provides several advantages:
There is uniform management of all types of traffic on the link, both congestion-managed traffic and standard Ethernet traffic.
When a priority group does not use all of its allocated bandwidth, other priority groups on the link can use that bandwidth as needed.
When a priority in a priority group does not use all of its allocated bandwidth, other priorities in the group can use that bandwidth.
The result is better bandwidth utilization, because priorities that consist of bursty traffic can share bandwidth during periods of low traffic transmission instead of consuming their entire bandwidth allocation when traffic loads are light.
You can assign traffic types with different service needs to different priorities so that each traffic type receives appropriate treatment.
Strict priority traffic retains its allocated bandwidth.
DCBX
DCB devices use DCBX to exchange configuration information with directly connected peers (switches and endpoints such as servers). DCBX is an extension of LLDP. If you disable LLDP on an interface, that interface cannot run DCBX. If you attempt to enable DCBX on an interface on which LLDP is disabled, the configuration commit fails.
DCBX can:
Discover the DCB capabilities of peers.
Detect DCB feature misconfiguration or mismatches between peers.
Configure DCB features on peers.
You can configure DCBX operation for PFC, for ETS, and for Layer 2 and Layer 4 applications such as FCoE and iSCSI. DCBX is enabled or disabled on a per-interface basis.
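Because DCBX runs over LLDP, both protocols must be enabled on the same interface. A minimal hedged sketch, using the placeholder interface xe-0/0/1:

```
protocols {
    lldp {
        interface xe-0/0/1;              /* DCBX requires LLDP on the interface */
    }
    dcbx {
        interface xe-0/0/1;              /* enable DCBX on the same interface */
    }
}
```

If LLDP were deleted or disabled on xe-0/0/1 while DCBX remained configured there, the configuration commit would fail, as described above.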