Example: Performing Output Scheduling and Shaping in Hierarchical CoS Queues for Traffic Routed to GRE Tunnels
This example shows how to configure a generic routing encapsulation (GRE) tunnel device to perform CoS output scheduling and shaping of IPv4 traffic routed to GRE tunnels. This feature is supported on MX Series routers running Junos OS Release 12.3R4 or later, 13.2R2 or later, or 13.3R1 or later, with GRE tunnel interfaces configured on MPC1 Q, MPC2 Q, or MPC2 EQ modules.
Requirements
This example uses the following Juniper Networks hardware and Junos OS software:
Transport network—An IPv4 network running Junos OS Release 13.3.
GRE tunnel device—One MX80 router installed as an ingress provider edge (PE) router.
Input and output logical interfaces configured on two ports of the built-in Gigabit Ethernet Modular Interface Card (MIC):
Input logical interface ge-1/1/0.0 for receiving traffic that is to be transported across the network.
Output logical interface ge-1/1/1.0, configured with four addresses that you convert to GRE tunnel source interfaces gr-1/1/10.1, gr-1/1/10.2, gr-1/1/10.3, and gr-1/1/10.4.
Overview
In this example, you configure the router with input and output logical interfaces for IPv4 traffic, and then you convert the output logical interface to four GRE tunnel source interfaces. You also install static routes in the routing table so that input traffic is routed to the four GRE tunnels.
Before you apply a traffic control profile with a scheduler map and a shaping rate to a GRE tunnel logical interface, you must configure and commit hierarchical scheduling on the GRE tunnel physical interface, specifying a maximum of two hierarchical scheduling levels for node scaling, as shown in the sketch below.
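The quick configuration in this example applies the bare hierarchical-scheduler statement; if you want to state the two-level limit explicitly, the maximum-hierarchy-levels option expresses it. A minimal sketch, using the interface name from this example:

[edit]
user@host# set interfaces gr-1/1/10 hierarchical-scheduler maximum-hierarchy-levels 2
user@host# commit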
Configuration
To configure scheduling and shaping in hierarchical CoS queues for traffic routed to GRE tunnel interfaces configured on MPC1 Q, MPC2 Q, or MPC2 EQ modules on an MX Series router, perform these tasks:
- CLI Quick Configuration
- Configuring Interfaces, Hierarchical Scheduling on the GRE Tunnel Physical Interface, and Static Routes
- Measuring GRE Tunnel Transmission Rates Without Shaping Applied
- Configuring Output Scheduling and Shaping at GRE Tunnel Physical and Logical Interfaces
CLI Quick Configuration
To quickly configure this example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, and then copy and paste the commands into the CLI at the [edit] hierarchy level.
Configuring Interfaces, Hierarchical Scheduling on the GRE Tunnel Physical Interface, and Static Routes
set chassis fpc 1 pic 1 tunnel-services bandwidth 1g
set interfaces ge-1/1/0 unit 0 family inet address 10.6.6.1/24
set interfaces ge-1/1/1 unit 0 family inet address 10.70.1.1/24 arp 10.70.1.3 mac 00:00:03:00:04:00
set interfaces ge-1/1/1 unit 0 family inet address 10.80.1.1/24 arp 10.80.1.3 mac 00:00:03:00:04:01
set interfaces ge-1/1/1 unit 0 family inet address 10.90.1.1/24 arp 10.90.1.3 mac 00:00:03:00:04:02
set interfaces ge-1/1/1 unit 0 family inet address 10.100.1.1/24 arp 10.100.1.3 mac 00:00:03:00:04:04
set interfaces gr-1/1/10 unit 1 family inet address 10.100.1.1/24
set interfaces gr-1/1/10 unit 1 tunnel source 10.70.1.1 destination 10.70.1.3
set interfaces gr-1/1/10 unit 2 family inet address 10.200.1.1/24
set interfaces gr-1/1/10 unit 2 tunnel source 10.80.1.1 destination 10.80.1.3
set interfaces gr-1/1/10 unit 3 family inet address 10.201.1.1/24
set interfaces gr-1/1/10 unit 3 tunnel source 10.90.1.1 destination 10.90.1.3
set interfaces gr-1/1/10 unit 4 family inet address 10.202.1.1/24
set interfaces gr-1/1/10 unit 4 tunnel source 10.100.1.1 destination 10.100.1.3
set interfaces gr-1/1/10 hierarchical-scheduler
set routing-options static route 10.2.2.0/24 next-hop gr-1/1/10.1
set routing-options static route 10.3.3.0/24 next-hop gr-1/1/10.2
set routing-options static route 10.4.4.0/24 next-hop gr-1/1/10.3
set routing-options static route 10.5.5.0/24 next-hop gr-1/1/10.4
Configuring Output Scheduling and Shaping at GRE Tunnel Physical and Logical Interfaces
set class-of-service forwarding-classes queue 0 be
set class-of-service forwarding-classes queue 1 ef
set class-of-service forwarding-classes queue 2 af
set class-of-service forwarding-classes queue 3 nc
set class-of-service forwarding-classes queue 4 be1
set class-of-service forwarding-classes queue 5 ef1
set class-of-service forwarding-classes queue 6 af1
set class-of-service forwarding-classes queue 7 nc1
set class-of-service classifiers inet-precedence gr-inet forwarding-class be loss-priority low code-points 000
set class-of-service classifiers inet-precedence gr-inet forwarding-class ef loss-priority low code-points 001
set class-of-service classifiers inet-precedence gr-inet forwarding-class af loss-priority low code-points 010
set class-of-service classifiers inet-precedence gr-inet forwarding-class nc loss-priority low code-points 011
set class-of-service classifiers inet-precedence gr-inet forwarding-class be1 loss-priority low code-points 100
set class-of-service classifiers inet-precedence gr-inet forwarding-class ef1 loss-priority low code-points 101
set class-of-service classifiers inet-precedence gr-inet forwarding-class af1 loss-priority low code-points 110
set class-of-service classifiers inet-precedence gr-inet forwarding-class nc1 loss-priority low code-points 111
set class-of-service interfaces ge-1/1/0 unit 0 classifiers inet-precedence gr-inet
set class-of-service schedulers be_sch transmit-rate percent 30
set class-of-service schedulers ef_sch transmit-rate percent 40
set class-of-service schedulers af_sch transmit-rate percent 25
set class-of-service schedulers nc_sch transmit-rate percent 5
set class-of-service schedulers be1_sch transmit-rate percent 60
set class-of-service schedulers be1_sch priority low
set class-of-service schedulers ef1_sch transmit-rate percent 40
set class-of-service schedulers ef1_sch priority medium-low
set class-of-service schedulers af1_sch transmit-rate percent 10
set class-of-service schedulers af1_sch priority strict-high
set class-of-service schedulers nc1_sch shaping-rate percent 10
set class-of-service schedulers nc1_sch priority high
set class-of-service scheduler-maps sch_map_1 forwarding-class be scheduler be_sch
set class-of-service scheduler-maps sch_map_1 forwarding-class ef scheduler ef_sch
set class-of-service scheduler-maps sch_map_1 forwarding-class af scheduler af_sch
set class-of-service scheduler-maps sch_map_1 forwarding-class nc scheduler nc_sch
set class-of-service scheduler-maps sch_map_2 forwarding-class be scheduler be1_sch
set class-of-service scheduler-maps sch_map_2 forwarding-class ef scheduler ef1_sch
set class-of-service scheduler-maps sch_map_3 forwarding-class af scheduler af_sch
set class-of-service scheduler-maps sch_map_3 forwarding-class nc scheduler nc_sch
set class-of-service traffic-control-profiles gr-ifl-tcp3 guaranteed-rate 5m
set class-of-service traffic-control-profiles gr-ifd-tcp shaping-rate 10m
set class-of-service traffic-control-profiles gr-ifd-remain shaping-rate 7m
set class-of-service traffic-control-profiles gr-ifd-remain guaranteed-rate 4m
set class-of-service traffic-control-profiles gr-ifl-tcp1 scheduler-map sch_map_1
set class-of-service traffic-control-profiles gr-ifl-tcp1 shaping-rate 8m
set class-of-service traffic-control-profiles gr-ifl-tcp1 guaranteed-rate 3m
set class-of-service traffic-control-profiles gr-ifl-tcp2 scheduler-map sch_map_2
set class-of-service traffic-control-profiles gr-ifl-tcp2 guaranteed-rate 2m
set class-of-service traffic-control-profiles gr-ifl-tcp3 scheduler-map sch_map_3
set class-of-service interfaces gr-1/1/10 output-traffic-control-profile gr-ifd-tcp
set class-of-service interfaces gr-1/1/10 output-traffic-control-profile-remaining gr-ifd-remain
set class-of-service interfaces gr-1/1/10 unit 1 output-traffic-control-profile gr-ifl-tcp1
set class-of-service interfaces gr-1/1/10 unit 2 output-traffic-control-profile gr-ifl-tcp2
set class-of-service interfaces gr-1/1/10 unit 3 output-traffic-control-profile gr-ifl-tcp3
Configuring Interfaces, Hierarchical Scheduling on the GRE Tunnel Physical Interface, and Static Routes
Step-by-Step Procedure
To configure GRE tunnel interfaces (including enabling hierarchical scheduling) and static routes:
Configure the amount of bandwidth to reserve for tunnel services on the PIC. This creates the GRE tunnel physical interface gr-1/1/10 used in the following steps.
[edit]
user@host# set chassis fpc 1 pic 1 tunnel-services bandwidth 1g
Configure the GRE tunnel device input logical interface.
[edit]
user@host# set interfaces ge-1/1/0 unit 0 family inet address 10.6.6.1/24
Configure the GRE tunnel device output logical interface.
[edit]
user@host# set interfaces ge-1/1/1 unit 0 family inet address 10.70.1.1/24 arp 10.70.1.3 mac 00:00:03:00:04:00
user@host# set interfaces ge-1/1/1 unit 0 family inet address 10.80.1.1/24 arp 10.80.1.3 mac 00:00:03:00:04:01
user@host# set interfaces ge-1/1/1 unit 0 family inet address 10.90.1.1/24 arp 10.90.1.3 mac 00:00:03:00:04:02
user@host# set interfaces ge-1/1/1 unit 0 family inet address 10.100.1.1/24 arp 10.100.1.3 mac 00:00:03:00:04:04
Convert the output logical interface to four GRE tunnel interfaces.
[edit]
user@host# set interfaces gr-1/1/10 unit 1 family inet address 10.100.1.1/24
user@host# set interfaces gr-1/1/10 unit 1 tunnel source 10.70.1.1 destination 10.70.1.3
user@host# set interfaces gr-1/1/10 unit 2 family inet address 10.200.1.1/24
user@host# set interfaces gr-1/1/10 unit 2 tunnel source 10.80.1.1 destination 10.80.1.3
user@host# set interfaces gr-1/1/10 unit 3 family inet address 10.201.1.1/24
user@host# set interfaces gr-1/1/10 unit 3 tunnel source 10.90.1.1 destination 10.90.1.3
user@host# set interfaces gr-1/1/10 unit 4 family inet address 10.202.1.1/24
user@host# set interfaces gr-1/1/10 unit 4 tunnel source 10.100.1.1 destination 10.100.1.3
Enable hierarchical scheduling on the GRE tunnel physical interface.
[edit]
user@host# set interfaces gr-1/1/10 hierarchical-scheduler
Install static routes in the routing table so that the device routes IPv4 traffic to the GRE tunnel source interfaces.
Traffic destined for the subnets 10.2.2.0/24, 10.3.3.0/24, 10.4.4.0/24, and 10.5.5.0/24 is routed to the GRE tunnel interfaces whose tunnel source addresses are 10.70.1.1, 10.80.1.1, 10.90.1.1, and 10.100.1.1, respectively.
[edit]
user@host# set routing-options static route 10.2.2.0/24 next-hop gr-1/1/10.1
user@host# set routing-options static route 10.3.3.0/24 next-hop gr-1/1/10.2
user@host# set routing-options static route 10.4.4.0/24 next-hop gr-1/1/10.3
user@host# set routing-options static route 10.5.5.0/24 next-hop gr-1/1/10.4
If you are done configuring the device, commit the configuration.
[edit]
user@host# commit
Results
From configuration mode, confirm your configuration by entering the show chassis fpc 1 pic 1, show interfaces ge-1/1/0, show interfaces ge-1/1/1, show interfaces gr-1/1/10, and show routing-options commands. If the output does not display the intended configuration, repeat the instructions in this example to correct the configuration.
Confirm the configuration of interfaces, hierarchical scheduling on the GRE tunnel physical interface, and static routes.
user@host# show chassis fpc 1 pic 1
tunnel-services {
    bandwidth 1g;
}
user@host# show interfaces ge-1/1/0
unit 0 {
    family inet {
        address 10.6.6.1/24;
    }
}
user@host# show interfaces ge-1/1/1
unit 0 {
    family inet {
        address 10.70.1.1/24 {
            arp 10.70.1.3 mac 00:00:03:00:04:00;
        }
        address 10.80.1.1/24 {
            arp 10.80.1.3 mac 00:00:03:00:04:01;
        }
        address 10.90.1.1/24 {
            arp 10.90.1.3 mac 00:00:03:00:04:02;
        }
        address 10.100.1.1/24 {
            arp 10.100.1.3 mac 00:00:03:00:04:04;
        }
    }
}
user@host# show interfaces gr-1/1/10
hierarchical-scheduler;
unit 1 {
    tunnel {
        destination 10.70.1.3;
        source 10.70.1.1;
    }
    family inet {
        address 10.100.1.1/24;
    }
}
unit 2 {
    tunnel {
        destination 10.80.1.3;
        source 10.80.1.1;
    }
    family inet {
        address 10.200.1.1/24;
    }
}
unit 3 {
    tunnel {
        destination 10.90.1.3;
        source 10.90.1.1;
    }
    family inet {
        address 10.201.1.1/24;
    }
}
unit 4 {
    tunnel {
        destination 10.100.1.3;
        source 10.100.1.1;
    }
    family inet {
        address 10.202.1.1/24;
    }
}
user@host# show routing-options
static {
    route 10.2.2.0/24 next-hop gr-1/1/10.1;
    route 10.3.3.0/24 next-hop gr-1/1/10.2;
    route 10.4.4.0/24 next-hop gr-1/1/10.3;
    route 10.5.5.0/24 next-hop gr-1/1/10.4;
}
Measuring GRE Tunnel Transmission Rates Without Shaping Applied
Step-by-Step Procedure
To establish a baseline measurement, note the transmission rates at each GRE tunnel source.
Pass traffic through the GRE tunnels at logical interfaces gr-1/1/10.1, gr-1/1/10.2, and gr-1/1/10.3.
To display the traffic rates at each GRE tunnel source, use the show interfaces queue operational mode command.
The following example command output shows detailed CoS queue statistics for logical interface gr-1/1/10.1 (the GRE tunnel from source IP address 10.70.1.1 to destination IP address 10.70.1.3).
user@host> show interfaces queue gr-1/1/10.1
Logical interface gr-1/1/10.1 (Index 331) (SNMP ifIndex 4045)
Forwarding classes: 16 supported, 8 in use
Egress queues: 8 supported, 8 in use
Burst size: 0
Queue: 0, Forwarding classes: be
  Queued:
    Packets              :             31818312               102494 pps
    Bytes                :           6522753960            168091936 bps
  Transmitted:
    Packets              :              1515307                 4879 pps
    Bytes                :            310637935              8001632 bps
    Tail-dropped packets :             21013826                68228 pps
    RED-dropped packets  :              9289179                29387 pps
     Low                 :              9289179                29387 pps
     Medium-low          :                    0                    0 pps
     Medium-high         :                    0                    0 pps
     High                :                    0                    0 pps
    RED-dropped bytes    :           1904281695             48194816 bps
     Low                 :           1904281695             48194816 bps
     Medium-low          :                    0                    0 bps
     Medium-high         :                    0                    0 bps
     High                :                    0                    0 bps
...
Note: This step shows command output for queue 0 (forwarding class be) only.
The command output shows that the GRE tunnel device transmits traffic from queue 0 at a rate of 4879 pps. Allowing for 182 bytes per Layer 3 packet, preceded by 24 bytes of GRE overhead (a 20-byte delivery header consisting of the IPv4 packet header, followed by 4 bytes for the GRE flags and encapsulated protocol type), the traffic rate received at the tunnel destination device is 8,040,592 bps:

4879 packets/second x 206 bytes/packet x 8 bits/byte = 8,040,592 bits/second
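Stated generally (this restates the arithmetic above rather than adding new measured data), the rate received at the tunnel destination follows from the transmitted packet rate and the encapsulated packet size:

received rate (bps) = transmit rate (pps) x (Layer 3 packet bytes + 24 GRE overhead bytes) x 8 bits/byte

The same calculation is applied to the shaped transmission rates measured later in this example.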
Configuring Output Scheduling and Shaping at GRE Tunnel Physical and Logical Interfaces
Step-by-Step Procedure
To configure the GRE tunnel device with scheduling and shaping at GRE tunnel physical and logical interfaces:
Define eight transmission queues.
[edit]
user@host# set class-of-service forwarding-classes queue 0 be
user@host# set class-of-service forwarding-classes queue 1 ef
user@host# set class-of-service forwarding-classes queue 2 af
user@host# set class-of-service forwarding-classes queue 3 nc
user@host# set class-of-service forwarding-classes queue 4 be1
user@host# set class-of-service forwarding-classes queue 5 ef1
user@host# set class-of-service forwarding-classes queue 6 af1
user@host# set class-of-service forwarding-classes queue 7 nc1
Note: To configure up to eight forwarding classes with one-to-one mapping to output queues for interfaces on M120, M320, MX Series, and T Series routers and EX Series switches, use the queue statement at the [edit class-of-service forwarding-classes] hierarchy level. If you need to configure up to 16 forwarding classes with multiple forwarding classes mapped to single queues for those interface types, use the class statement instead; a hypothetical sketch follows this note.
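A minimal hypothetical sketch of the class form; the class names here are illustrative and are not part of this example's configuration:

[edit]
user@host# set class-of-service forwarding-classes class be-video queue-num 0
user@host# set class-of-service forwarding-classes class be-data queue-num 0

Both forwarding classes map to queue 0, which is how more than eight classes can share the eight output queues.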
Configure the BA classifier gr-inet, which, based on the IPv4 precedence bits set in an incoming packet, assigns the forwarding class and loss-priority value of the packet.
[edit]
user@host# set class-of-service classifiers inet-precedence gr-inet forwarding-class be loss-priority low code-points 000
user@host# set class-of-service classifiers inet-precedence gr-inet forwarding-class ef loss-priority low code-points 001
user@host# set class-of-service classifiers inet-precedence gr-inet forwarding-class af loss-priority low code-points 010
user@host# set class-of-service classifiers inet-precedence gr-inet forwarding-class nc loss-priority low code-points 011
user@host# set class-of-service classifiers inet-precedence gr-inet forwarding-class be1 loss-priority low code-points 100
user@host# set class-of-service classifiers inet-precedence gr-inet forwarding-class ef1 loss-priority low code-points 101
user@host# set class-of-service classifiers inet-precedence gr-inet forwarding-class af1 loss-priority low code-points 110
user@host# set class-of-service classifiers inet-precedence gr-inet forwarding-class nc1 loss-priority low code-points 111
Apply BA classifier gr-inet to the GRE tunnel device input at logical interface ge-1/1/0.0.
[edit]
user@host# set class-of-service interfaces ge-1/1/0 unit 0 classifiers inet-precedence gr-inet
Define a scheduler for each forwarding class.
[edit]
user@host# set class-of-service schedulers be_sch transmit-rate percent 30
user@host# set class-of-service schedulers ef_sch transmit-rate percent 40
user@host# set class-of-service schedulers af_sch transmit-rate percent 25
user@host# set class-of-service schedulers nc_sch transmit-rate percent 5
user@host# set class-of-service schedulers be1_sch transmit-rate percent 60
user@host# set class-of-service schedulers be1_sch priority low
user@host# set class-of-service schedulers ef1_sch transmit-rate percent 40
user@host# set class-of-service schedulers ef1_sch priority medium-low
user@host# set class-of-service schedulers af1_sch transmit-rate percent 10
user@host# set class-of-service schedulers af1_sch priority strict-high
user@host# set class-of-service schedulers nc1_sch shaping-rate percent 10
user@host# set class-of-service schedulers nc1_sch priority high
Define a scheduler map for each of three GRE tunnels.
[edit]
user@host# set class-of-service scheduler-maps sch_map_1 forwarding-class be scheduler be_sch
user@host# set class-of-service scheduler-maps sch_map_1 forwarding-class ef scheduler ef_sch
user@host# set class-of-service scheduler-maps sch_map_1 forwarding-class af scheduler af_sch
user@host# set class-of-service scheduler-maps sch_map_1 forwarding-class nc scheduler nc_sch
user@host# set class-of-service scheduler-maps sch_map_2 forwarding-class be scheduler be1_sch
user@host# set class-of-service scheduler-maps sch_map_2 forwarding-class ef scheduler ef1_sch
user@host# set class-of-service scheduler-maps sch_map_3 forwarding-class af scheduler af_sch
user@host# set class-of-service scheduler-maps sch_map_3 forwarding-class nc scheduler nc_sch
Define traffic control profiles for the three GRE tunnel logical interfaces, the GRE tunnel physical interface, and the remaining traffic.
[edit]
user@host# set class-of-service traffic-control-profiles gr-ifl-tcp1 scheduler-map sch_map_1
user@host# set class-of-service traffic-control-profiles gr-ifl-tcp1 shaping-rate 8m
user@host# set class-of-service traffic-control-profiles gr-ifl-tcp1 guaranteed-rate 3m
user@host# set class-of-service traffic-control-profiles gr-ifl-tcp2 scheduler-map sch_map_2
user@host# set class-of-service traffic-control-profiles gr-ifl-tcp2 guaranteed-rate 2m
user@host# set class-of-service traffic-control-profiles gr-ifl-tcp3 scheduler-map sch_map_3
user@host# set class-of-service traffic-control-profiles gr-ifl-tcp3 guaranteed-rate 5m
user@host# set class-of-service traffic-control-profiles gr-ifd-tcp shaping-rate 10m
user@host# set class-of-service traffic-control-profiles gr-ifd-remain shaping-rate 7m
user@host# set class-of-service traffic-control-profiles gr-ifd-remain guaranteed-rate 4m
Apply CoS scheduling and shaping to the output traffic at the physical interface and logical interfaces.
[edit]
user@host# set class-of-service interfaces gr-1/1/10 output-traffic-control-profile gr-ifd-tcp
user@host# set class-of-service interfaces gr-1/1/10 output-traffic-control-profile-remaining gr-ifd-remain
user@host# set class-of-service interfaces gr-1/1/10 unit 1 output-traffic-control-profile gr-ifl-tcp1
user@host# set class-of-service interfaces gr-1/1/10 unit 2 output-traffic-control-profile gr-ifl-tcp2
user@host# set class-of-service interfaces gr-1/1/10 unit 3 output-traffic-control-profile gr-ifl-tcp3
If you are done configuring the device, commit the configuration.
[edit]
user@host# commit
Results
From configuration mode, confirm your configuration by entering the show class-of-service forwarding-classes, show class-of-service classifiers, show class-of-service interfaces ge-1/1/0, show class-of-service schedulers, show class-of-service scheduler-maps, show class-of-service traffic-control-profiles, and show class-of-service interfaces gr-1/1/10 commands. If the output does not display the intended configuration, repeat the instructions in this example to correct the configuration.
Confirm the configuration of output scheduling and shaping at the GRE tunnel physical and logical interfaces.
user@host# show class-of-service forwarding-classes
queue 0 be;
queue 1 ef;
queue 2 af;
queue 3 nc;
queue 4 be1;
queue 5 ef1;
queue 6 af1;
queue 7 nc1;
user@host# show class-of-service classifiers
inet-precedence gr-inet {
    forwarding-class be {
        loss-priority low code-points 000;
    }
    forwarding-class ef {
        loss-priority low code-points 001;
    }
    forwarding-class af {
        loss-priority low code-points 010;
    }
    forwarding-class nc {
        loss-priority low code-points 011;
    }
    forwarding-class be1 {
        loss-priority low code-points 100;
    }
    forwarding-class ef1 {
        loss-priority low code-points 101;
    }
    forwarding-class af1 {
        loss-priority low code-points 110;
    }
    forwarding-class nc1 {
        loss-priority low code-points 111;
    }
}
user@host# show class-of-service interfaces ge-1/1/0
unit 0 {
    classifiers {
        inet-precedence gr-inet;
    }
}
user@host# show class-of-service schedulers
be_sch {
    transmit-rate percent 30;
}
ef_sch {
    transmit-rate percent 40;
}
af_sch {
    transmit-rate percent 25;
}
nc_sch {
    transmit-rate percent 5;
}
be1_sch {
    transmit-rate percent 60;
    priority low;
}
ef1_sch {
    transmit-rate percent 40;
    priority medium-low;
}
af1_sch {
    transmit-rate percent 10;
    priority strict-high;
}
nc1_sch {
    shaping-rate percent 10;
    priority high;
}
user@host# show class-of-service scheduler-maps
sch_map_1 {
    forwarding-class be scheduler be_sch;
    forwarding-class ef scheduler ef_sch;
    forwarding-class af scheduler af_sch;
    forwarding-class nc scheduler nc_sch;
}
sch_map_2 {
    forwarding-class be scheduler be1_sch;
    forwarding-class ef scheduler ef1_sch;
}
sch_map_3 {
    forwarding-class af scheduler af_sch;
    forwarding-class nc scheduler nc_sch;
}
user@host# show class-of-service traffic-control-profiles
gr-ifl-tcp1 {
    scheduler-map sch_map_1;
    shaping-rate 8m;
    guaranteed-rate 3m;
}
gr-ifl-tcp2 {
    scheduler-map sch_map_2;
    guaranteed-rate 2m;
}
gr-ifl-tcp3 {
    scheduler-map sch_map_3;
    guaranteed-rate 5m;
}
gr-ifd-remain {
    shaping-rate 7m;
    guaranteed-rate 4m;
}
gr-ifd-tcp {
    shaping-rate 10m;
}
user@host# show class-of-service interfaces gr-1/1/10
gr-1/1/10 {
    output-traffic-control-profile gr-ifd-tcp;
    output-traffic-control-profile-remaining gr-ifd-remain;
    unit 1 {
        output-traffic-control-profile gr-ifl-tcp1;
    }
    unit 2 {
        output-traffic-control-profile gr-ifl-tcp2;
    }
    unit 3 {
        output-traffic-control-profile gr-ifl-tcp3;
    }
}
Verification
Confirm that the configurations are working properly.
- Verifying That Scheduling and Shaping Are Attached to the GRE Tunnel Interfaces
- Verifying That Scheduling and Shaping Are Functioning at the GRE Tunnel Interfaces
Verifying That Scheduling and Shaping Are Attached to the GRE Tunnel Interfaces
Purpose
Verify the association of traffic control profiles with GRE tunnel interfaces.
Action
Verify the traffic control profiles attached to the GRE tunnel physical and logical interfaces by using the show class-of-service interface gr-1/1/10 detail operational mode command.
user@host> show class-of-service interface gr-1/1/10 detail
Physical interface: gr-1/1/10, Enabled, Physical link is Up
  Type: GRE, Link-level type: GRE, MTU: Unlimited, Speed: 1000mbps
  Device flags   : Present Running
  Interface flags: Point-To-Point SNMP-Traps
Physical interface: gr-1/1/10, Index: 220
Queues supported: 8, Queues in use: 8
  Output traffic control profile: gr-ifd-tcp, Index: 17721
  Output traffic control profile remaining: gr-ifd-remain, Index: 58414
  Congestion-notification: Disabled

  Logical interface gr-1/1/10.1
    Flags: Point-To-Point SNMP-Traps 0x4000 IP-Header 10.70.1.3:10.70.1.1:47:df:64:0000000000000000
    Encapsulation: GRE-NULL
    Gre keepalives configured: Off, Gre keepalives adjacency state: down
    inet 10.100.1.1/24
  Logical interface: gr-1/1/10.1, Index: 331
    Object                   Name                  Type        Index
    Traffic-control-profile  gr-ifl-tcp1           Output      17849
    Classifier               ipprec-compatibility  ip             13

  Logical interface gr-1/1/10.2
    Flags: Point-To-Point SNMP-Traps 0x4000 IP-Header 10.80.1.3:10.80.1.1:47:df:64:0000000000000000
    Encapsulation: GRE-NULL
    Gre keepalives configured: Off, Gre keepalives adjacency state: down
    inet 10.200.1.1/24
  Logical interface: gr-1/1/10.2, Index: 332
    Object                   Name                  Type        Index
    Traffic-control-profile  gr-ifl-tcp2           Output      17856
    Classifier               ipprec-compatibility  ip             13

  Logical interface gr-1/1/10.3
    Flags: Point-To-Point SNMP-Traps 0x4000 IP-Header 10.90.1.3:10.90.1.1:47:df:64:0000000000000000
    Encapsulation: GRE-NULL
    Gre keepalives configured: Off, Gre keepalives adjacency state: down
    inet 10.201.1.1/24
  Logical interface: gr-1/1/10.3, Index: 333
    Object                   Name                  Type        Index
    Traffic-control-profile  gr-ifl-tcp3           Output      17863
    Classifier               ipprec-compatibility  ip             13
Meaning
Ingress IPv4 traffic routed to GRE tunnels on the device is subject to CoS output scheduling and shaping.
Verifying That Scheduling and Shaping Are Functioning at the GRE Tunnel Interfaces
Purpose
Verify the traffic rate shaping at the GRE tunnel interfaces.
Action
Pass traffic through the GRE tunnels at logical interfaces gr-1/1/10.1, gr-1/1/10.2, and gr-1/1/10.3.
To verify the rate shaping at each GRE tunnel source, use the show interfaces queue operational mode command.
The following example command output shows detailed CoS queue statistics for logical interface gr-1/1/10.1 (the GRE tunnel from source IP address 10.70.1.1 to destination IP address 10.70.1.3):
user@host> show interfaces queue gr-1/1/10.1
Logical interface gr-1/1/10.1 (Index 331) (SNMP ifIndex 4045)
Forwarding classes: 16 supported, 8 in use
Egress queues: 8 supported, 8 in use
Burst size: 0
Queue: 0, Forwarding classes: be
  Queued:
    Packets              :             59613061                51294 pps
    Bytes                :          12220677505             84125792 bps
  Transmitted:
    Packets              :              2230632                 3039 pps
    Bytes                :            457279560              4985440 bps
    Tail-dropped packets :              4471146                 2202 pps
    RED-dropped packets  :             52911283                46053 pps
     Low                 :             49602496                46053 pps
     Medium-low          :                    0                    0 pps
     Medium-high         :                    0                    0 pps
     High                :              3308787                    0 pps
    RED-dropped bytes    :          10846813015             75528000 bps
     Low                 :          10168511680             75528000 bps
     Medium-low          :                    0                    0 bps
     Medium-high         :                    0                    0 bps
     High                :            678301335                    0 bps
Queue: 1, Forwarding classes: ef
  Queued:
    Packets              :             15344874                51295 pps
    Bytes                :           3145699170             84125760 bps
  Transmitted:
    Packets              :               366115                 1218 pps
    Bytes                :             75053575              1997792 bps
    Tail-dropped packets :               364489                 1132 pps
    RED-dropped packets  :             14614270                48945 pps
     Low                 :             14614270                48945 pps
     Medium-low          :                    0                    0 pps
     Medium-high         :                    0                    0 pps
     High                :                    0                    0 pps
    RED-dropped bytes    :           2995925350             80270528 bps
     Low                 :           2995925350             80270528 bps
     Medium-low          :                    0                    0 bps
     Medium-high         :                    0                    0 bps
     High                :                    0                    0 bps
...
Note: This step shows command output for queue 0 (forwarding class be) and queue 1 (forwarding class ef) only.
Meaning
Now that traffic shaping is attached to the GRE tunnel interfaces, the command output shows that the traffic shaping specified for the tunnel at logical interface gr-1/1/10.1 (shaping-rate 8m and guaranteed-rate 3m) is honored.
For queue 0, the GRE tunnel device transmits traffic at a rate of 3039 pps. The traffic rate received at the tunnel destination device is 5,008,272 bps:

3039 packets/second x 206 bytes/packet x 8 bits/byte = 5,008,272 bits/second
For queue 1, the GRE tunnel device transmits traffic at a rate of 1218 pps. The traffic rate received at the tunnel destination device is 2,007,264 bps:

1218 packets/second x 206 bytes/packet x 8 bits/byte = 2,007,264 bits/second
Compare these statistics to the baseline measurements taken without traffic shaping, as described in Measuring GRE Tunnel Transmission Rates Without Shaping Applied.
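For a side-by-side view of the queue 0 (forwarding class be) figures from the two measurements in this example:

Queue 0 (be), shaping not applied: 4879 pps, or 8,040,592 bps received
Queue 0 (be), shaping applied:     3039 pps, or 5,008,272 bps received

Together with queue 1 (1218 pps, or 2,007,264 bps), the shaped tunnel carries roughly 7 Mbps on these two queues, which stays within the shaping-rate 8m configured in traffic control profile gr-ifl-tcp1.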