Example: Performing Output Scheduling and Shaping in Hierarchical CoS Queues for Traffic Routed to GRE Tunnels

29-Nov-23

This example shows how to configure a generic routing encapsulation (GRE) tunnel device to perform CoS output scheduling and shaping of IPv4 traffic routed to GRE tunnels. This feature is supported on MX Series routers running Junos OS Release 12.3R4 or later, 13.2R2 or later, or 13.3R1 or later, with GRE tunnel interfaces configured on MPC1 Q, MPC2 Q, or MPC2 EQ modules.

Requirements

This example uses the following Juniper Networks hardware and Junos OS software:

  • Transport network—An IPv4 network running Junos OS Release 13.3.

  • GRE tunnel device—One MX80 router installed as an ingress provider edge (PE) router.

  • Input and output logical interfaces configurable on two ports of the built-in 10-Gigabit Ethernet Modular Interface Card (MIC).

Overview

In this example, you configure the router with input and output logical interfaces for IPv4 traffic, and then you convert the output logical interface to four GRE tunnel source interfaces. You also install static routes in the routing table so that input traffic is routed to the four GRE tunnels.

Note:

Before you apply a traffic control profile with a scheduler-map and shaping rate to a GRE tunnel interface, you must configure and commit a hierarchical scheduler on the GRE tunnel physical interface, specifying a maximum of two hierarchical scheduling levels for node scaling.
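For example, you can bound the hierarchy when you enable the scheduler by adding the maximum-hierarchy-levels statement to the hierarchical-scheduler configuration. A minimal sketch (the steps below use the plain hierarchical-scheduler form):

[edit]
user@host# set interfaces gr-1/1/10 hierarchical-scheduler maximum-hierarchy-levels 2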

Configuration

To configure scheduling and shaping in hierarchical CoS queues for traffic routed to GRE tunnel interfaces configured on MPC1 Q, MPC2 Q, or MPC2 EQ modules on an MX Series router, perform these tasks:

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, and then copy and paste the commands into the CLI at the [edit] hierarchy level.
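Alternatively, you can paste the commands into a load set terminal session from configuration mode and press Ctrl+d on a new line to finish; this is equivalent to entering the set commands individually at the [edit] hierarchy level:

[edit]
user@host# load set terminal
[Type ^D at a new line to end input]
set chassis fpc 1 pic 1 tunnel-services bandwidth 1g
... (remaining set commands) ...
^D
load complete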

Configuring Interfaces, Hierarchical Scheduling on the GRE Tunnel Physical Interface, and Static Routes

set chassis fpc 1 pic 1 tunnel-services bandwidth 1g
set interfaces ge-1/1/0 unit 0 family inet address 10.6.6.1/24
set interfaces ge-1/1/1 unit 0 family inet address 10.70.1.1/24 arp 10.70.1.3 mac 00:00:03:00:04:00
set interfaces ge-1/1/1 unit 0 family inet address 10.80.1.1/24 arp 10.80.1.3 mac 00:00:03:00:04:01
set interfaces ge-1/1/1 unit 0 family inet address 10.90.1.1/24 arp 10.90.1.3 mac 00:00:03:00:04:02
set interfaces ge-1/1/1 unit 0 family inet address 10.100.1.1/24 arp 10.100.1.3 mac 00:00:03:00:04:04
set interfaces gr-1/1/10 unit 1 family inet address 10.100.1.1/24
set interfaces gr-1/1/10 unit 1 tunnel source 10.70.1.1 destination 10.70.1.3
set interfaces gr-1/1/10 unit 2 family inet address 10.200.1.1/24
set interfaces gr-1/1/10 unit 2 tunnel source 10.80.1.1 destination 10.80.1.3
set interfaces gr-1/1/10 unit 3 family inet address 10.201.1.1/24
set interfaces gr-1/1/10 unit 3 tunnel source 10.90.1.1 destination 10.90.1.3
set interfaces gr-1/1/10 unit 4 family inet address 10.202.1.1/24
set interfaces gr-1/1/10 unit 4 tunnel source 10.100.1.1 destination 10.100.1.3
set interfaces gr-1/1/10 hierarchical-scheduler
set routing-options static route 10.2.2.0/24 next-hop gr-1/1/10.1
set routing-options static route 10.3.3.0/24 next-hop gr-1/1/10.2
set routing-options static route 10.4.4.0/24 next-hop gr-1/1/10.3
set routing-options static route 10.5.5.0/24 next-hop gr-1/1/10.4

Configuring Output Scheduling and Shaping at GRE Tunnel Physical and Logical Interfaces

set class-of-service forwarding-classes queue 0 be
set class-of-service forwarding-classes queue 1 ef
set class-of-service forwarding-classes queue 2 af
set class-of-service forwarding-classes queue 3 nc
set class-of-service forwarding-classes queue 4 be1
set class-of-service forwarding-classes queue 5 ef1
set class-of-service forwarding-classes queue 6 af1
set class-of-service forwarding-classes queue 7 nc1
set class-of-service classifiers inet-precedence gr-inet forwarding-class be loss-priority low code-points 000
set class-of-service classifiers inet-precedence gr-inet forwarding-class ef loss-priority low code-points 001
set class-of-service classifiers inet-precedence gr-inet forwarding-class af loss-priority low code-points 010
set class-of-service classifiers inet-precedence gr-inet forwarding-class nc loss-priority low code-points 011
set class-of-service classifiers inet-precedence gr-inet forwarding-class be1 loss-priority low code-points 100
set class-of-service classifiers inet-precedence gr-inet forwarding-class ef1 loss-priority low code-points 101
set class-of-service classifiers inet-precedence gr-inet forwarding-class af1 loss-priority low code-points 110
set class-of-service classifiers inet-precedence gr-inet forwarding-class nc1 loss-priority low code-points 111
set class-of-service interfaces ge-1/1/0 unit 0 classifiers inet-precedence gr-inet
set class-of-service schedulers be_sch transmit-rate percent 30
set class-of-service schedulers ef_sch transmit-rate percent 40
set class-of-service schedulers af_sch transmit-rate percent 25
set class-of-service schedulers nc_sch transmit-rate percent 5
set class-of-service schedulers be1_sch transmit-rate percent 60
set class-of-service schedulers be1_sch priority low
set class-of-service schedulers ef1_sch transmit-rate percent 40
set class-of-service schedulers ef1_sch priority medium-low
set class-of-service schedulers af1_sch transmit-rate percent 10
set class-of-service schedulers af1_sch priority strict-high
set class-of-service schedulers nc1_sch shaping-rate percent 10
set class-of-service schedulers nc1_sch priority high
set class-of-service scheduler-maps sch_map_1 forwarding-class be scheduler be_sch
set class-of-service scheduler-maps sch_map_1 forwarding-class ef scheduler ef_sch
set class-of-service scheduler-maps sch_map_1 forwarding-class af scheduler af_sch
set class-of-service scheduler-maps sch_map_1 forwarding-class nc scheduler nc_sch
set class-of-service scheduler-maps sch_map_2 forwarding-class be scheduler be1_sch
set class-of-service scheduler-maps sch_map_2 forwarding-class ef scheduler ef1_sch
set class-of-service scheduler-maps sch_map_3 forwarding-class af scheduler af_sch
set class-of-service scheduler-maps sch_map_3 forwarding-class nc scheduler nc_sch
set class-of-service traffic-control-profiles gr-ifl-tcp3 guaranteed-rate 5m
set class-of-service traffic-control-profiles gr-ifd-tcp shaping-rate 10m
set class-of-service traffic-control-profiles gr-ifd-remain shaping-rate 7m
set class-of-service traffic-control-profiles gr-ifd-remain guaranteed-rate 4m
set class-of-service traffic-control-profiles gr-ifl-tcp1 scheduler-map sch_map_1
set class-of-service traffic-control-profiles gr-ifl-tcp1 shaping-rate 8m
set class-of-service traffic-control-profiles gr-ifl-tcp1 guaranteed-rate 3m
set class-of-service traffic-control-profiles gr-ifl-tcp2 scheduler-map sch_map_2
set class-of-service traffic-control-profiles gr-ifl-tcp2 guaranteed-rate 2m
set class-of-service traffic-control-profiles gr-ifl-tcp3 scheduler-map sch_map_3
set class-of-service interfaces gr-1/1/10 output-traffic-control-profile gr-ifd-tcp
set class-of-service interfaces gr-1/1/10 output-traffic-control-profile-remaining gr-ifd-remain
set class-of-service interfaces gr-1/1/10 unit 1 output-traffic-control-profile gr-ifl-tcp1
set class-of-service interfaces gr-1/1/10 unit 2 output-traffic-control-profile gr-ifl-tcp2
set class-of-service interfaces gr-1/1/10 unit 3 output-traffic-control-profile gr-ifl-tcp3 

Configuring Interfaces, Hierarchical Scheduling on the GRE Tunnel Physical Interface, and Static Routes

Step-by-Step Procedure

To configure GRE tunnel interfaces (including enabling hierarchical scheduling) and static routes:

  1. Configure the amount of bandwidth for tunnel services on the physical interface.

    [edit]
    user@host# set chassis fpc 1 pic 1 tunnel-services bandwidth 1g
    
  2. Configure the GRE tunnel device input logical interface.

    [edit]
    user@host# set interfaces ge-1/1/0 unit 0 family inet address 10.6.6.1/24
    
  3. Configure the GRE tunnel device output logical interface.

    [edit]
    user@host# set interfaces ge-1/1/1 unit 0 family inet address 10.70.1.1/24 arp 10.70.1.3 mac 00:00:03:00:04:00
    user@host# set interfaces ge-1/1/1 unit 0 family inet address 10.80.1.1/24 arp 10.80.1.3 mac 00:00:03:00:04:01
    user@host# set interfaces ge-1/1/1 unit 0 family inet address 10.90.1.1/24 arp 10.90.1.3 mac 00:00:03:00:04:02
    user@host# set interfaces ge-1/1/1 unit 0 family inet address 10.100.1.1/24 arp 10.100.1.3 mac 00:00:03:00:04:04
    
  4. Convert the output logical interface to four GRE tunnel interfaces.

    [edit]
    user@host# set interfaces gr-1/1/10 unit 1 family inet address 10.100.1.1/24
    user@host# set interfaces gr-1/1/10 unit 1 tunnel source 10.70.1.1 destination 10.70.1.3
    user@host# set interfaces gr-1/1/10 unit 2 family inet address 10.200.1.1/24
    user@host# set interfaces gr-1/1/10 unit 2 tunnel source 10.80.1.1 destination 10.80.1.3
    user@host# set interfaces gr-1/1/10 unit 3 family inet address 10.201.1.1/24
    user@host# set interfaces gr-1/1/10 unit 3 tunnel source 10.90.1.1 destination 10.90.1.3
    user@host# set interfaces gr-1/1/10 unit 4 family inet address 10.202.1.1/24
    user@host# set interfaces gr-1/1/10 unit 4 tunnel source 10.100.1.1 destination 10.100.1.3
    
  5. Enable the GRE tunnel interfaces to use hierarchical scheduling.

    [edit]
    user@host# set interfaces gr-1/1/10 hierarchical-scheduler
    
  6. Install static routes in the routing table so that the device routes IPv4 traffic to the GRE tunnel source interfaces.

    Traffic destined to the subnets 10.2.2.0/24, 10.3.3.0/24, 10.4.4.0/24, and 10.5.5.0/24 is routed to the tunnel interfaces at IP addresses 10.70.1.1, 10.80.1.1, 10.90.1.1, and 10.100.1.1, respectively.

    [edit]
    user@host# set routing-options static route 10.2.2.0/24 next-hop gr-1/1/10.1
    user@host# set routing-options static route 10.3.3.0/24 next-hop gr-1/1/10.2
    user@host# set routing-options static route 10.4.4.0/24 next-hop gr-1/1/10.3
    user@host# set routing-options static route 10.5.5.0/24 next-hop gr-1/1/10.4
    
  7. If you are done configuring the device, commit the configuration.

    [edit]
    user@host# commit
    

Results

From configuration mode, confirm your configuration by entering the show chassis fpc 1 pic 1, show interfaces ge-1/1/0, show interfaces ge-1/1/1, show interfaces gr-1/1/10, and show routing-options commands. If the output does not display the intended configuration, repeat the instructions in this example to correct the configuration.

Confirm the configuration of interfaces, hierarchical scheduling on the GRE tunnel physical interface, and static routes.

user@host# show chassis fpc 1 pic 1
tunnel-services { 
    bandwidth 1g;
}
 
user@host# show interfaces ge-1/1/0
unit 0 {
    family inet {
        address 10.6.6.1/24;
    }
}
 
user@host# show interfaces ge-1/1/1
unit 0 {
    family inet {
        address 10.70.1.1/24 {
            arp 10.70.1.3 mac 00:00:03:00:04:00;
        }
        address 10.80.1.1/24 {
            arp 10.80.1.3 mac 00:00:03:00:04:01;
        }
        address 10.90.1.1/24 {
            arp 10.90.1.3 mac 00:00:03:00:04:02;
        }
        address 10.100.1.1/24 {
            arp 10.100.1.3 mac 00:00:03:00:04:04;
        }
    }
}
 
user@host# show interfaces gr-1/1/10
hierarchical-scheduler;
unit 1 {
    tunnel {
        destination 10.70.1.3;
        source 10.70.1.1;
    }
    family inet {
        address 10.100.1.1/24;
    }
}
unit 2 {
    tunnel {
        destination 10.80.1.3;
        source 10.80.1.1;
    }
    family inet {
        address 10.200.1.1/24;
    }
}
unit 3 {
    tunnel {
        destination 10.90.1.3;
        source 10.90.1.1;
    }
    family inet {
        address 10.201.1.1/24;
    }
}
unit 4 {
    tunnel {
        destination 10.100.1.3;
        source 10.100.1.1;
    }
    family inet {
        address 10.202.1.1/24;
    }
}
 
user@host# show routing-options
static {
    route 10.2.2.0/24 next-hop gr-1/1/10.1;
    route 10.3.3.0/24 next-hop gr-1/1/10.2;
    route 10.4.4.0/24 next-hop gr-1/1/10.3;
    route 10.5.5.0/24 next-hop gr-1/1/10.4;
}

Measuring GRE Tunnel Transmission Rates Without Shaping Applied

Step-by-Step Procedure

To establish a baseline measurement, note the transmission rates at each GRE tunnel source.

  1. Pass traffic through the GRE tunnel at logical interfaces gr-1/1/10.1, gr-1/1/10.2, and gr-1/1/10.3.

  2. To display the traffic rates at each GRE tunnel source, use the show interfaces queue operational mode command.

    The following example command output shows detailed CoS queue statistics for logical interface gr-1/1/10.1 (the GRE tunnel from source IP address 10.70.1.1 to destination IP address 10.70.1.3).

    user@host> show interfaces queue gr-1/1/10.1
    Logical interface gr-1/1/10.1 (Index 331) (SNMP ifIndex 4045)
    Forwarding classes: 16 supported, 8 in use
    Egress queues: 8 supported, 8 in use
    Burst size: 0
    Queue: 0, Forwarding classes: be
      Queued:
        Packets              :              31818312                102494 pps
        Bytes                :            6522753960             168091936 bps
      Transmitted:
        Packets              :               1515307                  4879 pps
        Bytes                :             310637935               8001632 bps
        Tail-dropped packets :              21013826                 68228 pps
        RED-dropped packets  :               9289179                 29387 pps
         Low                 :               9289179                 29387 pps
         Medium-low          :                     0                     0 pps
         Medium-high         :                     0                     0 pps
         High                :                     0                     0 pps
        RED-dropped bytes    :            1904281695              48194816 bps
         Low                 :            1904281695              48194816 bps
         Medium-low          :                     0                     0 bps
         Medium-high         :                     0                     0 bps
         High                :                     0                     0 bps
    ... 
    Note:

    This step shows command output for queue 0 (forwarding class be) only.

    The command output shows that the GRE tunnel device transmits traffic from queue 0 at a rate of 4879 pps. Allowing for 182 bytes per Layer 3 packet, preceded by 24 bytes of GRE overhead (a 20-byte IPv4 delivery header followed by a 4-byte GRE header carrying the GRE flags and the encapsulated protocol type), the traffic rate received at the tunnel destination device is 8,040,592 bps:


      4879 packets/second X 206 bytes/packet X 8 bits/byte = 8,040,592 bits/second
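    The 206-byte figure in this calculation is the Layer 3 packet plus the GRE encapsulation overhead described above:

      206 bytes/packet = 182 bytes (Layer 3 packet) + 20 bytes (IPv4 delivery header) + 4 bytes (GRE header)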

Configuring Output Scheduling and Shaping at GRE Tunnel Physical and Logical Interfaces

Step-by-Step Procedure

To configure the GRE tunnel device with scheduling and shaping at GRE tunnel physical and logical interfaces:

  1. Define eight transmission queues.

    [edit]
    user@host# set class-of-service forwarding-classes queue 0 be
    user@host# set class-of-service forwarding-classes queue 1 ef
    user@host# set class-of-service forwarding-classes queue 2 af
    user@host# set class-of-service forwarding-classes queue 3 nc
    user@host# set class-of-service forwarding-classes queue 4 be1
    user@host# set class-of-service forwarding-classes queue 5 ef1
    user@host# set class-of-service forwarding-classes queue 6 af1
    user@host# set class-of-service forwarding-classes queue 7 nc1
    
    Note:

    To configure up to eight forwarding classes with one-to-one mapping to output queues for interfaces on M120, M320, MX Series, and T Series routers and EX Series switches, use the queue statement at the [edit class-of-service forwarding-classes] hierarchy level.

    If you need up to 16 forwarding classes, with multiple forwarding classes mapped to a single queue on those routers and switches, use the class statement instead.
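    For example, the class statement maps a named forwarding class to an explicit queue number, which lets several forwarding classes share one queue. A minimal sketch, using hypothetical class names that are not part of this example:

    [edit]
    user@host# set class-of-service forwarding-classes class be-data queue-num 0
    user@host# set class-of-service forwarding-classes class be-bulk queue-num 0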

  2. Configure BA classifier gr-inet, which sets the forwarding class and loss-priority value of an incoming packet based on the IPv4 precedence bits set in the packet.

    [edit]
    user@host# set class-of-service classifiers inet-precedence gr-inet forwarding-class be loss-priority low code-points 000
    user@host# set class-of-service classifiers inet-precedence gr-inet forwarding-class ef loss-priority low code-points 001
    user@host# set class-of-service classifiers inet-precedence gr-inet forwarding-class af loss-priority low code-points 010
    user@host# set class-of-service classifiers inet-precedence gr-inet forwarding-class nc loss-priority low code-points 011
    user@host# set class-of-service classifiers inet-precedence gr-inet forwarding-class be1 loss-priority low code-points 100
    user@host# set class-of-service classifiers inet-precedence gr-inet forwarding-class ef1 loss-priority low code-points 101
    user@host# set class-of-service classifiers inet-precedence gr-inet forwarding-class af1 loss-priority low code-points 110
    user@host# set class-of-service classifiers inet-precedence gr-inet forwarding-class nc1 loss-priority low code-points 111
    
  3. Apply BA classifier gr-inet to the GRE tunnel device input at logical interface ge-1/1/0.0.

    [edit]
    user@host# set class-of-service interfaces ge-1/1/0 unit 0 classifiers inet-precedence gr-inet
    
  4. Define a scheduler for each forwarding class.

    [edit]
    user@host# set class-of-service schedulers be_sch transmit-rate percent 30
    user@host# set class-of-service schedulers ef_sch transmit-rate percent 40
    user@host# set class-of-service schedulers af_sch transmit-rate percent 25
    user@host# set class-of-service schedulers nc_sch transmit-rate percent 5
    user@host# set class-of-service schedulers be1_sch transmit-rate percent 60
    user@host# set class-of-service schedulers be1_sch priority low
    user@host# set class-of-service schedulers ef1_sch transmit-rate percent 40
    user@host# set class-of-service schedulers ef1_sch priority medium-low
    user@host# set class-of-service schedulers af1_sch transmit-rate percent 10
    user@host# set class-of-service schedulers af1_sch priority strict-high
    user@host# set class-of-service schedulers nc1_sch shaping-rate percent 10
    user@host# set class-of-service schedulers nc1_sch priority high
    
  5. Define a scheduler map for each of the three GRE tunnels.

    [edit]
    user@host# set class-of-service scheduler-maps sch_map_1 forwarding-class be scheduler be_sch
    user@host# set class-of-service scheduler-maps sch_map_1 forwarding-class ef scheduler ef_sch
    user@host# set class-of-service scheduler-maps sch_map_1 forwarding-class af scheduler af_sch
    user@host# set class-of-service scheduler-maps sch_map_1 forwarding-class nc scheduler nc_sch
    user@host# set class-of-service scheduler-maps sch_map_2 forwarding-class be scheduler be1_sch
    user@host# set class-of-service scheduler-maps sch_map_2 forwarding-class ef scheduler ef1_sch
    user@host# set class-of-service scheduler-maps sch_map_3 forwarding-class af scheduler af_sch
    user@host# set class-of-service scheduler-maps sch_map_3 forwarding-class nc scheduler nc_sch
    
  6. Define traffic control profiles for the three GRE tunnel logical interfaces, for the GRE tunnel physical interface, and for the remaining traffic on the physical interface.

    [edit]
    user@host# set class-of-service traffic-control-profiles gr-ifl-tcp1 scheduler-map sch_map_1
    user@host# set class-of-service traffic-control-profiles gr-ifl-tcp1 shaping-rate 8m
    user@host# set class-of-service traffic-control-profiles gr-ifl-tcp1 guaranteed-rate 3m
    user@host# set class-of-service traffic-control-profiles gr-ifl-tcp2 scheduler-map sch_map_2
    user@host# set class-of-service traffic-control-profiles gr-ifl-tcp2 guaranteed-rate 2m
    user@host# set class-of-service traffic-control-profiles gr-ifl-tcp3 scheduler-map sch_map_3
    user@host# set class-of-service traffic-control-profiles gr-ifl-tcp3 guaranteed-rate 5m
    user@host# set class-of-service traffic-control-profiles gr-ifd-tcp shaping-rate 10m
    user@host# set class-of-service traffic-control-profiles gr-ifd-remain shaping-rate 7m
    user@host# set class-of-service traffic-control-profiles gr-ifd-remain guaranteed-rate 4m
    
  7. Apply CoS scheduling and shaping to the output traffic at the physical interface and logical interfaces.

    [edit]
    user@host# set class-of-service interfaces gr-1/1/10 output-traffic-control-profile gr-ifd-tcp
    user@host# set class-of-service interfaces gr-1/1/10 output-traffic-control-profile-remaining gr-ifd-remain
    user@host# set class-of-service interfaces gr-1/1/10 unit 1 output-traffic-control-profile gr-ifl-tcp1
    user@host# set class-of-service interfaces gr-1/1/10 unit 2 output-traffic-control-profile gr-ifl-tcp2
    user@host# set class-of-service interfaces gr-1/1/10 unit 3 output-traffic-control-profile gr-ifl-tcp3
    
  8. If you are done configuring the device, commit the configuration.

    [edit]
    user@host# commit
    

Results

From configuration mode, confirm your configuration by entering the show class-of-service forwarding-classes, show class-of-service classifiers, show class-of-service interfaces ge-1/1/0, show class-of-service schedulers, show class-of-service scheduler-maps, show class-of-service traffic-control-profiles, and show class-of-service interfaces gr-1/1/10 commands. If the output does not display the intended configuration, repeat the instructions in this example to correct the configuration.

Confirm the configuration of output scheduling and shaping at the GRE tunnel physical and logical interfaces.

user@host# show class-of-service forwarding-classes
queue 0 be;
queue 1 ef; 
queue 2 af;
queue 3 nc;
queue 4 be1;
queue 5 ef1;
queue 6 af1;
queue 7 nc1;
 
user@host# show class-of-service classifiers
inet-precedence gr-inet {
    forwarding-class be {
        loss-priority low code-points 000;
    }
    forwarding-class ef {
        loss-priority low code-points 001;
    }
    forwarding-class af {
        loss-priority low code-points 010;
    }
    forwarding-class nc {
        loss-priority low code-points 011;
    }
    forwarding-class be1 {
        loss-priority low code-points 100;
    }
    forwarding-class ef1 {
        loss-priority low code-points 101;
    }
    forwarding-class af1 {
        loss-priority low code-points 110;
    }
    forwarding-class nc1 {
        loss-priority low code-points 111;
    }
}
 
user@host# show class-of-service interfaces ge-1/1/0
unit 0 {
    classifiers {
        inet-precedence gr-inet;
    }
}
 
user@host# show class-of-service schedulers
be_sch {
    transmit-rate percent 30;
}
ef_sch {
    transmit-rate percent 40;
}
af_sch {
    transmit-rate percent 25;
}
nc_sch {
    transmit-rate percent 5;
}
be1_sch {
    transmit-rate percent 60;
    priority low;
}
ef1_sch {
    transmit-rate percent 40;
    priority medium-low;
}
af1_sch {
    transmit-rate percent 10;
    priority strict-high;
}
nc1_sch {
    shaping-rate percent 10;
    priority high;
}
 
user@host# show class-of-service scheduler-maps
sch_map_1 {
    forwarding-class be scheduler be_sch;
    forwarding-class ef scheduler ef_sch;
    forwarding-class af scheduler af_sch;
    forwarding-class nc scheduler nc_sch;
}
sch_map_2 {
    forwarding-class be scheduler be1_sch;
    forwarding-class ef scheduler ef1_sch;
}
sch_map_3 {
    forwarding-class af scheduler af_sch;
    forwarding-class nc scheduler nc_sch;
}
 
user@host# show class-of-service traffic-control-profiles
gr-ifl-tcp1 {
    scheduler-map sch_map_1;
    shaping-rate 8m;
    guaranteed-rate 3m;
}
gr-ifl-tcp2 {
    scheduler-map sch_map_2;
    guaranteed-rate 2m;
}
gr-ifl-tcp3 {
    scheduler-map sch_map_3;
    guaranteed-rate 5m;
}
gr-ifd-remain {
    shaping-rate 7m;
    guaranteed-rate 4m;
}
gr-ifd-tcp {
    shaping-rate 10m;
}
 
user@host# show class-of-service interfaces gr-1/1/10
gr-1/1/10 {
    output-traffic-control-profile gr-ifd-tcp;
    output-traffic-control-profile-remaining gr-ifd-remain;
    unit 1 {
        output-traffic-control-profile gr-ifl-tcp1;
    }
    unit 2 {
        output-traffic-control-profile gr-ifl-tcp2;
    }
    unit 3 {
        output-traffic-control-profile gr-ifl-tcp3;
    }
}

Verification

Confirm that the configuration is working properly.

Verifying That Scheduling and Shaping Are Attached to the GRE Tunnel Interfaces

Purpose

Verify the association of traffic control profiles with GRE tunnel interfaces.

Action

Verify the traffic control profile attached to the GRE tunnel physical interface by using the show class-of-service interface gr-1/1/10 detail operational mode command.

    user@host> show class-of-service interface gr-1/1/10 detail
    Physical interface: gr-1/1/10, Enabled, Physical link is Up
      Type: GRE, Link-level type: GRE, MTU: Unlimited, Speed: 1000mbps
      Device flags   : Present Running
      Interface flags: Point-To-Point SNMP-Traps
    
    Physical interface: gr-1/1/10, Index: 220
    Queues supported: 8, Queues in use: 8
      Output traffic control profile: gr-ifd-tcp, Index: 17721
      Output traffic control profile remaining: gr-ifd-remain, Index: 58414
      Congestion-notification: Disabled
    
      Logical interface gr-1/1/10.1
        Flags: Point-To-Point SNMP-Traps 0x4000 IP-Header 10.70.1.3:10.70.1.1:47:df:64:0000000000000000 Encapsulation: GRE-NULL
        Gre keepalives configured: Off, Gre keepalives adjacency state: down
        inet  10.100.1.1/24
      Logical interface: gr-1/1/10.1, Index: 331
    Object                  Name                   Type                    Index
    Traffic-control-profile gr-ifl-tcp1            Output                  17849
    Classifier              ipprec-compatibility   ip                         13
    
      Logical interface gr-1/1/10.2
        Flags: Point-To-Point SNMP-Traps 0x4000 IP-Header 10.80.1.3:10.80.1.1:47:df:64:0000000000000000 Encapsulation: GRE-NULL
        Gre keepalives configured: Off, Gre keepalives adjacency state: down
        inet  10.200.1.1/24
      Logical interface: gr-1/1/10.2, Index: 332
    Object                  Name                   Type                    Index
    Traffic-control-profile gr-ifl-tcp2            Output                  17856
    Classifier              ipprec-compatibility   ip                         13
    
      Logical interface gr-1/1/10.3
        Flags: Point-To-Point SNMP-Traps 0x4000 IP-Header 10.90.1.3:10.90.1.1:47:df:64:0000000000000000 Encapsulation: GRE-NULL
        Gre keepalives configured: Off, Gre keepalives adjacency state: down
        inet  10.201.1.1/24
      Logical interface: gr-1/1/10.3, Index: 333
    Object                  Name                   Type                    Index
    Traffic-control-profile gr-ifl-tcp3            Output                  17863
    Classifier              ipprec-compatibility   ip                         13
    

Meaning

Ingress IPv4 traffic routed to GRE tunnels on the device is subject to CoS output scheduling and shaping.

Verifying That Scheduling and Shaping Are Functioning at the GRE Tunnel Interfaces

Purpose

Verify the traffic rate shaping at the GRE tunnel interfaces.

Action

  1. Pass traffic through the GRE tunnel at logical interfaces gr-1/1/10.1, gr-1/1/10.2, and gr-1/1/10.3.

  2. To verify the rate shaping at each GRE tunnel source, use the show interfaces queue operational mode command.

    The following example command output shows detailed CoS queue statistics for logical interface gr-1/1/10.1 (the GRE tunnel from source IP address 10.70.1.1 to destination IP address 10.70.1.3):

    user@host> show interfaces queue gr-1/1/10.1
    Logical interface gr-1/1/10.1 (Index 331) (SNMP ifIndex 4045)
    Forwarding classes: 16 supported, 8 in use
    Egress queues: 8 supported, 8 in use
    Burst size: 0
    Queue: 0, Forwarding classes: be
      Queued:
        Packets              :              59613061                 51294 pps
        Bytes                :           12220677505              84125792 bps
      Transmitted:
        Packets              :               2230632                  3039 pps
        Bytes                :             457279560               4985440 bps
        Tail-dropped packets :               4471146                  2202 pps
        RED-dropped packets  :              52911283                 46053 pps
         Low                 :              49602496                 46053 pps
         Medium-low          :                     0                     0 pps
         Medium-high         :                     0                     0 pps
         High                :               3308787                     0 pps
        RED-dropped bytes    :           10846813015              75528000 bps
         Low                 :           10168511680              75528000 bps
         Medium-low          :                     0                     0 bps
         Medium-high         :                     0                     0 bps
         High                :             678301335                     0 bps 
    Queue: 1, Forwarding classes: ef
      Queued:
        Packets              :              15344874                 51295 pps
        Bytes                :            3145699170              84125760 bps
      Transmitted:
        Packets              :                366115                  1218 pps
        Bytes                :              75053575               1997792 bps
        Tail-dropped packets :                364489                  1132 pps
        RED-dropped packets  :              14614270                 48945 pps
         Low                 :              14614270                 48945 pps
         Medium-low          :                     0                     0 pps
         Medium-high         :                     0                     0 pps
         High                :                     0                     0 pps
        RED-dropped bytes    :            2995925350              80270528 bps
         Low                 :            2995925350              80270528 bps
         Medium-low          :                     0                     0 bps
         Medium-high         :                     0                     0 bps
         High                :                     0                     0 bps
    ... 
    Note:

    This step shows command output for queue 0 (forwarding class be) and queue 1 (forwarding class ef) only.

Meaning

Now that traffic shaping is attached to the GRE tunnel interfaces, the command output shows that traffic shaping specified for the tunnel at logical interface gr-1/1/10.1 (shaping-rate 8m and guaranteed-rate 3m) is honored.

  • For queue 0, the GRE tunnel device transmits traffic at a rate of 3039 pps. The traffic rate received at the tunnel destination device is 5,008,272 bps:

      3039 packets/second X 206 bytes/packet X 8 bits/byte = 5,008,272 bits/second
  • For queue 1, the GRE tunnel device transmits traffic at a rate of 1218 pps. The traffic rate received at the tunnel destination device is 2,007,264 bps:

      1218 packets/second X 206 bytes/packet X 8 bits/byte = 2,007,264 bits/second

Compare these statistics to the baseline measurements taken without traffic shaping, as described in Measuring GRE Tunnel Transmission Rates Without Shaping Applied.
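For queue 0, for which both measurements are shown, the before-and-after figures work out as follows:

  Without shaping:  4879 packets/second X 206 bytes/packet X 8 bits/byte = 8,040,592 bits/second
  With shaping:     3039 packets/second X 206 bytes/packet X 8 bits/byte = 5,008,272 bits/second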
