Class of Service on Link Services Interfaces
Link Services Configuration for Junos Interfaces
This topic provides links to the following topics, which explain how to configure link services:
For information about configuring LSQ interface redundancy across multiple routers using SONET APS interfaces, see Configuring LSQ Interface Redundancy Across Multiple Routers Using SONET APS
For information about configuring LSQ interface redundancy in a single router using SONET APS interfaces, see Configuring LSQ Interface Redundancy in a Single Router Using SONET APS
For information about configuring LSQ interface redundancy in a single router using Virtual Interfaces, see Configuring LSQ Interface Redundancy in a Single Router Using Virtual Interfaces
For information about configuring CoS scheduling queues on Logical LSQ interfaces, see Configuring CoS Scheduling Queues on Logical LSQ Interfaces
For information about configuring CoS fragmentation by forwarding class on LSQ interfaces, see Configuring CoS Fragmentation by Forwarding Class on LSQ Interfaces
For information about reserving bundle bandwidth for Link-Layer overhead on LSQ interfaces, see Reserving Bundle Bandwidth for Link-Layer Overhead on LSQ Interfaces
For information about oversubscribing interface bandwidth on LSQ interfaces, see Oversubscribing Interface Bandwidth on LSQ Interfaces
For information about configuring guaranteed minimum rate on LSQ interfaces, see Configuring Guaranteed Minimum Rate on LSQ Interfaces
For information about configuring link services and CoS on services PICs, see Configuring Link Services and CoS on Services PICs
For information about configuring LSQ interfaces as NxT1 or NxE1 bundles using MLPPP, see Configuring LSQ Interfaces as NxT1 or NxE1 Bundles Using MLPPP
For information about configuring LSQ interfaces as NxT1 or NxE1 bundles using FRF.16, see Configuring LSQ Interfaces as NxT1 or NxE1 Bundles Using FRF.16
For information about configuring LSQ interfaces for single fractional T1 or E1 interfaces using MLPPP and LFI, see Configuring LSQ Interfaces for Single Fractional T1 or E1 Interfaces Using MLPPP and LFI
For information about configuring LSQ interfaces for single fractional T1 or E1 interfaces using FRF.12, see Configuring LSQ Interfaces for Single Fractional T1 or E1 Interfaces Using FRF.12
For information about configuring LSQ interfaces as NxT1 or NxE1 bundles using FRF.15, see Configuring LSQ Interfaces as NxT1 or NxE1 Bundles Using FRF.15
For information about configuring LSQ interfaces for T3 links configured for compressed RTP over MLPPP, see Configuring LSQ Interfaces for T3 Links Configured for Compressed RTP over MLPPP
For information about configuring LSQ interfaces as T3 or OC3 bundles using FRF.12, see Configuring LSQ Interfaces as T3 or OC3 Bundles Using FRF.12
For information about configuring LSQ interfaces for ATM2 IQ interfaces using MLPPP, see Configuring LSQ Interfaces for ATM2 IQ Interfaces Using MLPPP
Configuring CoS Scheduling Queues on Logical LSQ Interfaces
For link services IQ (lsq-) interfaces, you can specify a scheduler map for each logical unit. A logical unit represents either an MLPPP bundle or a DLCI configured on an FRF.16 bundle. The scheduler is applied to the traffic sent to an AS or Multiservices PIC running the Layer 2 link services package.
If you configure a scheduler map on a bundle, you must include the per-unit-scheduler statement at the [edit interfaces lsq-fpc/pic/port] hierarchy level. If you configure a scheduler map on an FRF.16 DLCI, you must include the per-unit-scheduler statement at the [edit interfaces lsq-fpc/pic/port:channel] hierarchy level. For more information, see the Class of Service User Guide (Routers and EX9200 Switches).
If you need latency guarantees for multiclass or LFI traffic, you must use channelized IQ PICs for the constituent links. With non-IQ PICs, because queueing is not done at the channelized interface level on the constituent links, latency-sensitive traffic might not receive the type of service that it should. Constituent links from the following PICs support latency guarantees:
Channelized E1 IQ PIC
Channelized OC3 IQ PIC
Channelized OC12 IQ PIC
Channelized STM1 IQ PIC
Channelized T3 IQ PIC
For scheduling queues on a logical interface, you can configure the following scheduler map properties at the [edit class-of-service schedulers] hierarchy level:
buffer-size—The queue size; for more information, see Configuring Scheduler Buffer Size.
priority—The transmit priority (low, high, strict-high); for more information, see Configuring Scheduler Priority.
shaping-rate—The subscribed transmit rate; for more information, see Configuring Scheduler Shaping Rate.
drop-profile-map—The random early detection (RED) drop profile; for more information, see Configuring Drop Profiles.
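The following sketch shows one way these properties might be combined in a single scheduler; the scheduler name be-sched and the drop profile drop-high are hypothetical, and the sketch assumes drop-high is defined under drop-profiles:

[edit class-of-service schedulers]
be-sched {
    buffer-size percent 40;      # Queue depth, as a share of the delay buffer
    priority low;                # Transmit priority for this queue
    shaping-rate percent 80;     # Upper bound on the queue transmit rate
    drop-profile-map loss-priority high protocol any drop-profile drop-high;
}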
When you configure MLPPP and FRF.12 on M Series and T Series routers, you should configure a single scheduler with non-zero percent transmission rates and buffer sizes for queues 0 through 3, and assign this scheduler to the link services IQ interface (lsq) and to each constituent link.
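A minimal sketch of this arrangement follows, assuming an MLPPP bundle on lsq-0/3/0 unit 0 with constituent links t1-1/0/0 and t1-1/0/1 and a scheduler map named mlppp-map; all of these names are hypothetical:

[edit class-of-service interfaces]
lsq-0/3/0 {
    unit 0 {
        scheduler-map mlppp-map;    # Same map on the bundle...
    }
}
t1-1/0/0 {
    scheduler-map mlppp-map;        # ...and on each constituent link
}
t1-1/0/1 {
    scheduler-map mlppp-map;
}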
When you configure FRF.16 on M Series and T Series routers, you can assign a single scheduler map to the link services IQ interface (lsq) and to each link services IQ DLCI, or you can assign different scheduler maps to the various DLCIs of the bundle, as shown in Example: Configuring an LSQ Interface as an NxT1 Bundle Using FRF.16. For the constituent links of an FRF.16 bundle, you do not need to configure a custom scheduler. Because LFI and multiclass are not supported for FRF.16, the traffic from each constituent link is transmitted from queue 0. This means you should allow most of the bandwidth to be used by queue 0. The default scheduler transmission rate and buffer size percentages for queues 0 through 3 are 95, 0, 0, and 5 percent, respectively. This default scheduler sends all user traffic to queue 0 and all network-control traffic to queue 3, and therefore it is well suited to the behavior of FRF.16. You can configure a custom scheduler that explicitly replicates the 95, 0, 0, and 5 percent queuing behaviors, and apply it to the constituent links.
On T Series and M320 routers, the default scheduler transmission rate and buffer size percentages for queues 0 through 7 are 95, 0, 0, 5, 0, 0, 0, and 0 percent.
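A custom scheduler configuration that replicates this default behavior on the constituent links might look like the following sketch; the scheduler and map names are hypothetical, and the default best-effort (queue 0) and network-control (queue 3) forwarding classes are assumed:

[edit class-of-service]
schedulers {
    frf16-data {                    # User traffic on queue 0
        transmit-rate percent 95;
        buffer-size percent 95;
    }
    frf16-control {                 # Network-control traffic on queue 3
        transmit-rate percent 5;
        buffer-size percent 5;
    }
}
scheduler-maps {
    frf16-constituent-map {
        forwarding-class best-effort scheduler frf16-data;
        forwarding-class network-control scheduler frf16-control;
    }
}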
For link services IQ interfaces (lsq), these scheduling properties work as they do in other PICs, except as noted in the following sections.
On T Series and M320 routers, lsq interfaces do not support DiffServ code point (DSCP) and DSCP-IPv6 rewrite markers.
- Configuring Scheduler Buffer Size
- Configuring Scheduler Priority
- Configuring Scheduler Shaping Rate
- Configuring Drop Profiles
Configuring Scheduler Buffer Size
You can configure the scheduler buffer size in three ways: as a temporal value, as a percentage, and as a remainder. On a single logical interface (MLPPP or a FRF.16 DLCI), each queue can have a different buffer size.
If you specify a temporal value, the queuing algorithm starts dropping packets when it queues more than a computed number of bytes. This number is computed by multiplying logical interface speed by the temporal value. For MLPPP bundles, logical interface speed is equal to the bundle bandwidth, which is the sum of constituent link speeds minus link-layer overhead. For MLFR FRF.16 DLCIs, logical interface speed is equal to bundle bandwidth multiplied by the DLCI shaping rate. In all cases, the maximum temporal value is limited to 200 milliseconds.
Buffer size percentages are implicitly converted into temporal values by multiplying the percentage by 200 milliseconds. For example, a buffer size specified as buffer-size percent 20 is the same as a 40-millisecond temporal delay. The link services IQ implementation guarantees 200 milliseconds of buffer delay for all interfaces with T1 and higher speeds. For slower interfaces, it guarantees one second of buffer delay.
The queuing algorithm evenly distributes leftover bandwidth among all queues that are configured with the buffer-size remainder statement. The queuing algorithm guarantees enough space in the transmit buffer for two MTU-sized packets.
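A sketch showing the three buffer-size forms follows; the scheduler names are hypothetical:

[edit class-of-service schedulers]
voice-sched {
    buffer-size temporal 50000;    # Temporal value in microseconds (50 ms)
}
data-sched {
    buffer-size percent 20;        # Implicitly 20 percent of 200 ms, or 40 ms
}
bulk-sched {
    buffer-size remainder;         # Shares whatever buffer space is left over
}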
Configuring Scheduler Priority
The transmit priority of each queue is determined by the scheduler and the forwarding class. Each queue receives a guaranteed amount of bandwidth specified with the scheduler transmit-rate statement.
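For example, a scheduler that guarantees 30 percent of the bandwidth to a high-priority queue might look like the following sketch; the scheduler name is hypothetical:

[edit class-of-service schedulers]
ef-sched {
    transmit-rate percent 30;    # Guaranteed share of bandwidth for this queue
    priority high;               # Serviced ahead of low-priority queues
}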
Configuring Scheduler Shaping Rate
You use the shaping rate to set the percentage of total bundle bandwidth that is dedicated to a DLCI. For link services IQ DLCIs, only percentages are accepted, which allows adjustments in response to dynamic changes in bundle bandwidth—for example, when a link goes up or down. This means that absolute shaping rates are not supported on FRF.16 bundles. Absolute shaping rates are allowed for MLPPP and MLFR bundles only.
For scheduling between DLCIs in an MLFR FRF.16 bundle, you can configure a shaping rate for each DLCI. A shaping rate is expressed as a percentage of the aggregate bundle bandwidth. Shaping rate percentages for all DLCIs within a bundle can add up to 100 percent or less. Leftover bandwidth is distributed equally to DLCIs that do not have the shaping-rate statement included at the [edit class-of-service interfaces lsq-fpc/pic/port:channel unit logical-unit-number] hierarchy level. If none of the DLCIs in an MLFR FRF.16 bundle specify a DLCI scheduler, the total bandwidth is evenly divided across all DLCIs.
For FRF.16 bundles on link services IQ interfaces, only shaping rates based on percentage are supported.
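A sketch of per-DLCI shaping on an FRF.16 bundle follows; the interface lsq-1/3/0:0, the scheduler map name, and the percentage are hypothetical:

[edit class-of-service interfaces]
lsq-1/3/0:0 {
    unit 0 {
        scheduler-map schedmap;
        shaping-rate percent 60;    # This DLCI gets at most 60 percent of the bundle bandwidth
    }
    unit 1 {
        scheduler-map schedmap;     # No shaping-rate: shares the leftover bandwidth
    }
}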
Configuring Drop Profiles
You can configure random early detection (RED) on LSQ interfaces as in other CoS scenarios. To configure RED, include one or more drop profiles and attach them to a scheduler for a particular forwarding class. For more information about RED profiles, see the Class of Service User Guide (Routers and EX9200 Switches).
The LSQ implementation performs tail RED. It supports a maximum of 256 drop profiles per PIC. Drop profiles are configurable on a per-queue, per-loss-priority, and per-TCP-bit basis.
You can attach scheduler maps with configured RED drop profiles to any LSQ logical interface: an MLPPP bundle, an FRF.15 bundle, or an FRF.16 DLCI. Different queues (forwarding classes) on the same logical interface can have different associated drop profiles.
The following example shows how to configure a RED profile on an LSQ interface:
[edit]
class-of-service {
    drop-profiles {
        drop-low {
            # Configure suitable drop profile for low loss priority
            ...
        }
        drop-high {
            # Configure suitable drop profile for high loss priority
            ...
        }
    }
    scheduler-maps {
        schedmap {
            # Best-effort queue will use be-scheduler
            # Other queues may use different schedulers
            forwarding-class be scheduler be-scheduler;
            ...
        }
    }
    schedulers {
        be-scheduler {
            # Configure two drop profiles for low and high loss priority
            drop-profile-map loss-priority low protocol any drop-profile drop-low;
            drop-profile-map loss-priority high protocol any drop-profile drop-high;
            # Other scheduler parameters (buffer-size, priority,
            # and transmit-rate) are already supported.
            ...
        }
    }
    interfaces {
        lsq-1/3/0.0 {
            # Attach a scheduler map (that includes RED drop profiles)
            # to an LSQ logical interface.
            scheduler-map schedmap;
        }
    }
}
The RED profiles should be applied only on the LSQ bundles and not on the egress links that constitute the bundle.
Configuring CoS Fragmentation by Forwarding Class on LSQ Interfaces
For link services IQ (lsq-) interfaces, you can specify fragmentation properties for specific forwarding classes. Traffic on each forwarding class can be either multilink encapsulated (fragmented and sequenced) or nonencapsulated (hashed with no fragmentation). By default, traffic in all forwarding classes is multilink encapsulated.
When you do not configure fragmentation properties for the queues on MLPPP interfaces, the fragmentation threshold you set at the [edit interfaces interface-name unit logical-unit-number fragment-threshold] hierarchy level is the fragmentation threshold for all forwarding classes within the MLPPP interface. For MLFR FRF.16 interfaces, the fragmentation threshold you set at the [edit interfaces interface-name mlfr-uni-nni-bundle-options fragment-threshold] hierarchy level is the fragmentation threshold for all forwarding classes within the MLFR FRF.16 interface.
If you do not set a maximum fragment size anywhere in the configuration, packets are still fragmented if they exceed the smallest maximum transmission unit (MTU) or maximum received reconstructed unit (MRRU) of all the links in the bundle. A nonencapsulated flow uses only one link. If the flow exceeds the bandwidth of a single link, the forwarding class must be multilink encapsulated, unless the packet size exceeds the MTU or MRRU.
Even if you do not set a maximum fragment size anywhere in the configuration, you can configure the MRRU by including the mrru statement at the [edit interfaces lsq-fpc/pic/port unit logical-unit-number] or [edit interfaces interface-name mlfr-uni-nni-bundle-options] hierarchy level. The MRRU is similar to the MTU, but is specific to link services interfaces. By default the MRRU size is 1500 bytes, and you can configure it to be from 1500 through 4500 bytes. For more information, see Configuring MRRU on Multilink and Link Services Logical Interfaces.
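For example, the MRRU for an MLPPP bundle might be raised as in the following sketch; the interface and value are hypothetical:

[edit interfaces lsq-0/3/0 unit 0]
mrru 2000;    # Accept reassembled packets of up to 2000 bytes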
To configure fragmentation properties on a queue, include the fragmentation-maps statement at the [edit class-of-service] hierarchy level:
[edit class-of-service]
fragmentation-maps {
    map-name {
        forwarding-class class-name {
            (fragment-threshold bytes | no-fragmentation);
            multilink-class number;
        }
    }
}
To set a per-forwarding class fragmentation threshold, include the fragment-threshold statement in the fragmentation map. This statement sets the maximum size of each multilink fragment.
To set traffic on a queue to be nonencapsulated rather than multilink encapsulated, include the no-fragmentation statement in the fragmentation map. This statement specifies that an extra fragmentation header is not prepended to the packets received on this queue and that static link load balancing is used to ensure in-order packet delivery.
For a given forwarding class, you can include either the fragment-threshold or the no-fragmentation statement; they are mutually exclusive.
You use the multilink-class statement to map a forwarding class to a multiclass MLPPP (MCML) class. For a given forwarding class, you can include either the multilink-class or the no-fragmentation statement; they are mutually exclusive.
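A sketch of a fragmentation map that fragments best-effort traffic but interleaves voice traffic without multilink encapsulation might look like this; the map name and threshold are hypothetical, and the default best-effort and expedited-forwarding forwarding classes are assumed:

[edit class-of-service]
fragmentation-maps {
    frag-map {
        forwarding-class best-effort {
            fragment-threshold 128;    # Break larger packets into 128-byte fragments
        }
        forwarding-class expedited-forwarding {
            no-fragmentation;          # Hash onto one link, with no multilink header
        }
    }
}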
To associate a fragmentation map with a multilink PPP interface or MLFR FRF.16 DLCI, include the fragmentation-map statement at the [edit class-of-service interfaces interface-name unit logical-unit-number] hierarchy level:
[edit class-of-service interfaces]
lsq-fpc/pic/port {
    unit logical-unit-number {          # Multilink PPP
        fragmentation-map map-name;
    }
}
lsq-fpc/pic/port:channel {              # MLFR FRF.16
    unit logical-unit-number {
        fragmentation-map map-name;
    }
}
For configuration examples, see the following topics:
Configuring LSQ Interfaces as NxT1 or NxE1 Bundles Using MLPPP
Configuring LSQ Interfaces as NxT1 or NxE1 Bundles Using FRF.16
Configuring LSQ Interfaces for Single Fractional T1 or E1 Interfaces Using MLPPP and LFI
Configuring LSQ Interfaces for Single Fractional T1 or E1 Interfaces Using FRF.12
Configuring LSQ Interfaces as NxT1 or NxE1 Bundles Using FRF.15
Configuring LSQ Interfaces for T3 Links Configured for Compressed RTP over MLPPP
Configuring LSQ Interfaces as T3 or OC3 Bundles Using FRF.12
Configuring LSQ Interfaces for ATM2 IQ Interfaces Using MLPPP
For Link Services PIC link services (ls-) interfaces, fragmentation maps are not supported. Instead, you enable LFI by including the interleave-fragments statement at the [edit interfaces interface-name unit logical-unit-number] hierarchy level. For more information, see Configuring Delay-Sensitive Packet Interleaving on Link Services Logical Interfaces.
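For example, LFI might be enabled on an ls- interface as in the following sketch; the interface is hypothetical:

[edit interfaces ls-1/1/0 unit 0]
interleave-fragments;    # Interleave delay-sensitive packets between fragments of larger packets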
Configuring Link Services and CoS on Services PICs
To configure link services and CoS on an AS or Multiservices PIC, you must perform the following steps:
Enable the Layer 2 service package. You enable service packages per PIC, not per port. When you enable the Layer 2 service package, the entire PIC uses the configured package. To enable the Layer 2 service package, include the service-package statement at the [edit chassis fpc slot-number pic pic-number adaptive-services] hierarchy level, and specify layer-2:

[edit chassis fpc slot-number pic pic-number adaptive-services]
service-package layer-2;
For more information about AS or Multiservices PIC service packages, see Enabling Service Packages and Layer 2 Service Package Capabilities and Interfaces.
Configure a multilink PPP or FRF.16 bundle by combining constituent links into a virtual link, or bundle.
Configuring an MLPPP Bundle
To configure an MLPPP bundle, configure constituent links and bundle properties by including the following statements in the configuration:
[edit interfaces interface-name unit logical-unit-number]
encapsulation ppp;
family mlppp {
    bundle lsq-fpc/pic/port.logical-unit-number;
}

[edit interfaces lsq-fpc/pic/port unit logical-unit-number]
drop-timeout milliseconds;
encapsulation multilink-ppp;
fragment-threshold bytes;
link-layer-overhead percent;
minimum-links number;
mrru bytes;
short-sequence;
family inet {
    address address;
}
For more information about these statements, see the Link and Multilink Services Interfaces User Guide for Routing Devices.
Configuring an MLFR FRF.16 Bundle
To configure an MLFR FRF.16 bundle, configure constituent links and bundle properties by including the following statements in the configuration:
[edit chassis fpc slot-number pic pic-number]
mlfr-uni-nni-bundles number;

[edit interfaces interface-name]
encapsulation multilink-frame-relay-uni-nni;
unit logical-unit-number {
    family mlfr-uni-nni {
        bundle lsq-fpc/pic/port:channel;
    }
}
For more information about the mlfr-uni-nni-bundles statement, see the Junos OS Administration Library for Routing Devices. MLFR FRF.16 uses channels as logical units.

For MLFR FRF.16, you must configure one end as data circuit-terminating equipment (DCE) by including the following statements at the [edit interfaces lsq-fpc/pic/port:channel] hierarchy level:

encapsulation multilink-frame-relay-uni-nni;
dce;
mlfr-uni-nni-bundle-options {
    acknowledge-retries number;
    acknowledge-timer milliseconds;
    action-red-differential-delay (disable-tx | remove-link);
    drop-timeout milliseconds;
    fragment-threshold bytes;
    hello-timer milliseconds;
    link-layer-overhead percent;
    lmi-type (ansi | itu);
    minimum-links number;
    mrru bytes;
    n391 number;
    n392 number;
    n393 number;
    red-differential-delay milliseconds;
    t391 number;
    t392 number;
    yellow-differential-delay milliseconds;
}
unit logical-unit-number {
    dlci dlci-identifier;
    family inet {
        address address;
    }
}
For more information about MLFR UNI NNI properties, see Link and Multilink Services Interfaces User Guide for Routing Devices.
To configure CoS components for each multilink bundle, enable per-unit scheduling on the interface, configure a scheduler map, apply the scheduler to each queue, configure a fragmentation map, and apply the fragmentation map to each bundle. Include the following statements:
[edit interfaces]
lsq-fpc/pic/port {
    per-unit-scheduler;    # Enables per-unit scheduling on the bundle
}

[edit class-of-service]
interfaces {
    lsq-fpc/pic/port {                      # Multilink PPP
        unit logical-unit-number {
            scheduler-map map-name;         # Applies scheduler map to each queue
        }
    }
    lsq-fpc/pic/port:channel {              # MLFR FRF.16
        unit logical-unit-number {
            # Scheduler map provides scheduling information for
            # the queues within a single DLCI.
            scheduler-map map-name;
            shaping-rate percent percent;
        }
    }
}
forwarding-classes {
    queue queue-number class-name priority (high | low);
}
scheduler-maps {
    map-name {
        forwarding-class class-name scheduler scheduler-name;
    }
}
schedulers {
    scheduler-name {
        buffer-size (percent percentage | remainder | temporal microseconds);
        priority priority-level;
        transmit-rate (percent percentage | rate | remainder) <exact>;
    }
}
fragmentation-maps {
    map-name {
        forwarding-class class-name {
            fragment-threshold bytes;
            no-fragmentation;
        }
    }
}
Associate a fragmentation map with a multilink PPP interface or MLFR FRF.16 DLCI by including the following statements at the [edit class-of-service] hierarchy level:

interfaces {
    lsq-fpc/pic/port {
        unit logical-unit-number {          # Multilink PPP
            fragmentation-map map-name;
        }
    }
    lsq-fpc/pic/port:channel {              # MLFR FRF.16
        unit logical-unit-number {
            fragmentation-map map-name;
        }
    }
}
Oversubscribing Interface Bandwidth on LSQ Interfaces
The term oversubscribing interface bandwidth means configuring shaping rates (peak information rates [PIRs]) so that their sum exceeds the interface bandwidth.
On Channelized IQ PICs, Gigabit Ethernet IQ PICs, and FRF.16 link services IQ (lsq-) interfaces on AS and Multiservices PICs, you can oversubscribe interface bandwidth. The logical interfaces (and DLCIs within an FRF.16 bundle) can be oversubscribed when there is leftover bandwidth. The oversubscription is limited to the configured PIR. Any unused bandwidth is distributed equally among oversubscribed logical interfaces or DLCIs.
For networks that are not likely to experience congestion, oversubscribing interface bandwidth improves network utilization, thereby allowing more customers to be provisioned on a single interface. If the actual data traffic does not exceed the interface bandwidth, oversubscription allows you to sell more bandwidth than the interface can support.
We recommend avoiding oversubscription in networks that are likely to experience congestion. Be careful not to oversubscribe a service by too much, because this can cause degradation in the performance of the router during congestion. When you configure oversubscription, some output queues can be starved if the actual data traffic exceeds the physical interface bandwidth. You can prevent degradation by using statistical multiplexing to ensure that the actual data traffic does not exceed the interface bandwidth.
You cannot oversubscribe interface bandwidth when you configure traffic shaping using the method described in Applying Scheduler Maps and Shaping Rate to DLCIs and VLANs.
When configuring oversubscription for FRF.16 bundle interfaces, you can assign traffic control profiles that apply on a physical interface basis. When you apply traffic control profiles to FRF.16 bundles at the logical interface level, member link interface bandwidth is underutilized when there is a small proportion of traffic or no traffic at all on an individual DLCI. Support for traffic control features on the FRF.16 bundle physical interface level addresses this limitation.
To configure oversubscription of an interface, perform the following steps:
Include the shaping-rate statement at the [edit class-of-service traffic-control-profiles profile-name] hierarchy level:

[edit class-of-service traffic-control-profiles profile-name]
shaping-rate (percent percentage | rate);

Note: When configuring oversubscription for FRF.16 bundle interfaces on a physical interface basis, you must specify shaping-rate as a percentage.

On LSQ interfaces, you can configure the shaping rate as a percentage.
On IQ and IQ2 interfaces, you can configure the shaping rate as an absolute rate from 1000 through 6,400,000,000,000 bits per second.
Alternatively, you can configure a shaping rate for a logical interface and oversubscribe the physical interface by including the shaping-rate statement at the [edit class-of-service interfaces interface-name unit logical-unit-number] hierarchy level. However, with this configuration approach, you cannot independently control the delay-buffer rate, as described in the next step.

Note: For channelized and Gigabit Ethernet IQ interfaces, the shaping-rate and guaranteed-rate statements are mutually exclusive. You cannot configure some logical interfaces to use a shaping rate and others to use a guaranteed rate. This means there are no service guarantees when you configure a PIR. For these interfaces, you can configure either a PIR or a committed information rate (CIR), but not both.

This restriction does not apply to Gigabit Ethernet IQ2 PICs or link services IQ (LSQ) interfaces on AS or Multiservices PICs. For LSQ and Gigabit Ethernet IQ2 interfaces, you can configure both a PIR and a CIR on an interface. For more information about CIRs, see Configuring Guaranteed Minimum Rate on LSQ Interfaces.
Optionally, you can base the delay buffer calculation on a delay-buffer rate. To do this, include the delay-buffer-rate statement at the [edit class-of-service traffic-control-profiles profile-name] hierarchy level:

[edit class-of-service traffic-control-profiles profile-name]
delay-buffer-rate (percent percentage | rate);

Note: When configuring oversubscription for FRF.16 bundle interfaces on a physical interface basis, you must specify delay-buffer-rate as a percentage.
The delay-buffer rate overrides the shaping rate as the basis for the delay-buffer calculation. In other words, the shaping rate or scaled shaping rate is used for delay-buffer calculations only when the delay-buffer rate is not configured.
For LSQ interfaces, if you do not configure a delay-buffer rate, the guaranteed rate (CIR) is used to assign buffers. If you do not configure a guaranteed rate, the shaping rate (PIR) is used in the undersubscribed case, and the scaled shaping rate is used in the oversubscribed case.
On LSQ interfaces, you can configure the delay-buffer rate as a percentage.
On IQ and IQ2 interfaces, you can configure the delay-buffer rate as an absolute rate from 1000 through 6,400,000,000,000 bits per second.
The actual delay buffer is based on the calculations described in the Class of Service User Guide (Routers and EX9200 Switches). For an example showing how the delay-buffer rates are applied, see Examples: Oversubscribing an LSQ Interface.
Configuring large buffers on relatively low-speed links can cause packet aging. To help prevent this problem, the software requires that the sum of the delay-buffer rates be less than or equal to the port speed.
This restriction does not eliminate the possibility of packet aging, so you should be cautious when using the delay-buffer-rate statement. Though some amount of extra buffering might be desirable for burst absorption, delay-buffer rates should not far exceed the service rate of the logical interface.

If you configure delay-buffer rates so that the sum exceeds the port speed, the configured delay-buffer rate is not implemented for the last logical interface that you configure. Instead, that logical interface receives a delay-buffer rate of zero, and a warning message is displayed in the CLI. If bandwidth becomes available (because another logical interface is deleted or deactivated, or the port speed is increased), the configured delay-buffer rate is reevaluated and implemented if possible.
If you do not configure a delay-buffer rate or a guaranteed rate, the logical interface receives a delay-buffer rate in proportion to the shaping rate and the remaining delay-buffer rate available. In other words, the delay-buffer rate for each logical interface with no configured delay-buffer rate is equal to:
(remaining delay-buffer rate * shaping rate) / (sum of shaping rates)
The remaining delay-buffer rate is equal to:
(interface speed) – (sum of configured delay-buffer rates)
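For example, assuming the sum of shaping rates is taken over the logical interfaces that share the remainder, a hypothetical bundle with 1 Mbps of remaining delay-buffer rate and two such logical interfaces with shaping rates of 600 Kbps and 400 Kbps would assign them delay-buffer rates of (1 Mbps x 600 Kbps) / (600 Kbps + 400 Kbps) = 600 Kbps and (1 Mbps x 400 Kbps) / (600 Kbps + 400 Kbps) = 400 Kbps, respectively.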
To assign a scheduler map to the logical interface, include the scheduler-map statement at the [edit class-of-service traffic-control-profiles profile-name] hierarchy level:

[edit class-of-service traffic-control-profiles profile-name]
scheduler-map map-name;
For information about configuring schedulers and scheduler maps, see the Class of Service User Guide (Routers and EX9200 Switches).
Optionally, you can enable large buffer sizes to be configured. To do this, include the q-pic-large-buffer statement at the [edit chassis fpc slot-number pic pic-number] hierarchy level:

[edit chassis fpc slot-number pic pic-number]
q-pic-large-buffer;
If you do not include this statement, the delay-buffer size is more restricted. We recommend restricted buffers for delay-sensitive traffic, such as voice traffic. For more information, see the Class of Service User Guide (Routers and EX9200 Switches).
To enable scheduling on logical interfaces, include the per-unit-scheduler statement at the [edit interfaces interface-name] hierarchy level:

[edit interfaces interface-name]
per-unit-scheduler;
When you include this statement, the maximum number of VLANs supported is 768 on a single-port Gigabit Ethernet IQ PIC. On a two-port Gigabit Ethernet IQ PIC, the maximum number is 384.
To enable scheduling on FRF.16 bundle physical interfaces, include the no-per-unit-scheduler statement at the [edit interfaces interface-name] hierarchy level:

[edit interfaces interface-name]
no-per-unit-scheduler;
To apply the traffic-scheduling profile to the logical interface, include the output-traffic-control-profile statement at the [edit class-of-service interfaces interface-name unit logical-unit-number] hierarchy level:

[edit class-of-service interfaces interface-name unit logical-unit-number]
output-traffic-control-profile profile-name;
You cannot include the output-traffic-control-profile statement in the configuration if any of the following statements are included in the logical interface configuration: scheduler-map, shaping-rate, adaptive-shaper, or virtual-channel-group.

For a table that shows how the bandwidth and delay buffer are allocated in various configurations, see the Class of Service User Guide (Routers and EX9200 Switches).
Examples: Oversubscribing an LSQ Interface
Oversubscribing an LSQ Interface with Scheduling Based on the Logical Interface
Apply a traffic-control profile to a logical interface representing a DLCI on an FRF.16 bundle.
interfaces {
    lsq-1/3/0:0 {
        per-unit-scheduler;
        unit 0 {
            dlci 100;
        }
        unit 1 {
            dlci 200;
        }
    }
}
class-of-service {
    traffic-control-profiles {
        tc_0 {
            shaping-rate percent 100;
            guaranteed-rate percent 60;
            delay-buffer-rate percent 80;
        }
        tc_1 {
            shaping-rate percent 80;
            guaranteed-rate percent 40;
        }
    }
    interfaces {
        lsq-1/3/0:0 {
            unit 0 {
                output-traffic-control-profile tc_0;
            }
            unit 1 {
                output-traffic-control-profile tc_1;
            }
        }
    }
}
Oversubscribing an LSQ Interface with Scheduling Based on the Physical Interface
Apply a traffic-control profile to the physical interface representing an FRF.16 bundle:
interfaces {
    lsq-0/2/0:0 {
        no-per-unit-scheduler;
        encapsulation multilink-frame-relay-uni-nni;
        unit 0 {
            dlci 100;
            family inet {
                address 18.18.18.2/24;
            }
        }
    }
}
class-of-service {
    traffic-control-profiles {
        rlsq_tc {
            scheduler-map rlsq;
            shaping-rate percent 60;
            delay-buffer-rate percent 10;
        }
    }
    interfaces {
        lsq-0/2/0:0 {
            output-traffic-control-profile rlsq_tc;
        }
    }
    scheduler-maps {
        rlsq {
            forwarding-class best-effort scheduler rlsq_scheduler;
            forwarding-class expedited-forwarding scheduler rlsq_scheduler1;
        }
    }
    schedulers {
        rlsq_scheduler {
            transmit-rate percent 20;
            priority low;
        }
        rlsq_scheduler1 {
            transmit-rate percent 40;
            priority high;
        }
    }
}
Configuring Guaranteed Minimum Rate on LSQ Interfaces
On Gigabit Ethernet IQ PICs, Channelized IQ PICs, and FRF.16 link services IQ (LSQ) interfaces on AS and Multiservices PICs, you can configure guaranteed bandwidth, also known as a committed information rate (CIR). This allows you to specify a guaranteed rate for each logical interface. The guaranteed rate is a minimum. If excess physical interface bandwidth is available for use, the logical interface receives more than the guaranteed rate provisioned for the interface.
You cannot provision the sum of the guaranteed rates to be more than the physical interface bandwidth, or the bundle bandwidth for LSQ interfaces. If the sum of the guaranteed rates exceeds the interface or bundle bandwidth, the commit operation does not fail, but the software automatically decreases the rates so that the sum of the guaranteed rates is equal to the available bundle bandwidth.
To configure a guaranteed minimum rate, perform the following steps:
Include the guaranteed-rate statement at the [edit class-of-service traffic-control-profiles profile-name] hierarchy level:

[edit class-of-service traffic-control-profiles profile-name]
guaranteed-rate (percent percentage | rate);

On LSQ interfaces, you can configure the guaranteed rate as a percentage.
On IQ and IQ2 interfaces, you can configure the guaranteed rate as an absolute rate from 1000 through 160,000,000,000 bits per second.
Note: For channelized and Gigabit Ethernet IQ interfaces, the shaping-rate and guaranteed-rate statements are mutually exclusive. You cannot configure some logical interfaces to use a shaping rate and others to use a guaranteed rate. This means there are no service guarantees when you configure a PIR. For these interfaces, you can configure either a PIR or a committed information rate (CIR), but not both.

This restriction does not apply to Gigabit Ethernet IQ2 PICs or link services IQ (LSQ) interfaces on AS or Multiservices PICs. For LSQ and Gigabit Ethernet IQ2 interfaces, you can configure both a PIR and a CIR on an interface. For more information about CIRs, see the Class of Service User Guide (Routers and EX9200 Switches).
Optionally, you can base the delay buffer calculation on a delay-buffer rate. To do this, include the delay-buffer-rate statement at the [edit class-of-service traffic-control-profiles profile-name] hierarchy level:

[edit class-of-service traffic-control-profiles profile-name]
delay-buffer-rate (percent percentage | rate);
On LSQ interfaces, you can configure the delay-buffer rate as a percentage.
On IQ and IQ2 interfaces, you can configure the delay-buffer rate as an absolute rate from 1000 through 160,000,000,000 bits per second.
The actual delay buffer is based on the calculations described in tables in the Class of Service User Guide (Routers and EX9200 Switches). For an example showing how the delay-buffer rates are applied, see Example: Configuring Guaranteed Minimum Rate.
If you do not include the delay-buffer-rate statement, the delay-buffer calculation is based on the guaranteed rate, the shaping rate if no guaranteed rate is configured, or the scaled shaping rate if the interface is oversubscribed.

If you do not specify a shaping rate or a guaranteed rate, the logical interface receives a minimal delay-buffer rate and minimal bandwidth equal to 4 MTU-sized packets.
You can configure a rate for the delay buffer that is higher than the guaranteed rate. This can be useful when the traffic flow might not require much bandwidth in general, but in some cases can be bursty and therefore needs a large buffer.
Configuring large buffers on relatively low-speed links can cause packet aging. To help prevent this problem, the software requires that the sum of the delay-buffer rates be less than or equal to the port speed. This restriction does not eliminate the possibility of packet aging, so you should be cautious when using the delay-buffer-rate statement. Though some amount of extra buffering might be desirable for burst absorption, delay-buffer rates should not far exceed the service rate of the logical interface.

If you configure delay-buffer rates so that the sum exceeds the port speed, the configured delay-buffer rate is not implemented for the last logical interface that you configure. Instead, that logical interface receives a delay-buffer rate of 0, and a warning message is displayed in the CLI. If bandwidth becomes available (because another logical interface is deleted or deactivated, or the port speed is increased), the configured delay-buffer rate is reevaluated and implemented if possible.
If the guaranteed rate of a logical interface cannot be implemented, that logical interface receives a delay-buffer rate of 0, even if the configured delay-buffer rate is within the interface speed. If at a later time the guaranteed rate of the logical interface can be met, the configured delay-buffer rate is reevaluated and if the delay-buffer rate is within the remaining bandwidth, it is implemented.
If any logical interface has a configured guaranteed rate, all other logical interfaces on that port that do not have a guaranteed rate configured receive a delay-buffer rate of 0. This is because the absence of a guaranteed rate configuration corresponds to a guaranteed rate of 0 and, consequently, a delay-buffer rate of 0.
To assign a scheduler map to the logical interface, include the scheduler-map statement at the [edit class-of-service traffic-control-profiles profile-name] hierarchy level:

[edit class-of-service traffic-control-profiles profile-name]
scheduler-map map-name;
For information about configuring schedulers and scheduler maps, see the Class of Service User Guide (Routers and EX9200 Switches).
To enable large buffer sizes to be configured, include the q-pic-large-buffer statement at the [edit chassis fpc slot-number pic pic-number] hierarchy level:

[edit chassis fpc slot-number pic pic-number]
q-pic-large-buffer;
If you do not include this statement, the delay-buffer size is more restricted. For more information, see the Class of Service User Guide (Routers and EX9200 Switches).
To enable scheduling on logical interfaces, include the per-unit-scheduler statement at the [edit interfaces interface-name] hierarchy level:

[edit interfaces interface-name]
per-unit-scheduler;
When you include this statement, the maximum number of VLANs supported is 767 on a single-port Gigabit Ethernet IQ PIC. On a two-port Gigabit Ethernet IQ PIC, the maximum number is 383.
To apply the traffic-scheduling profile to the logical interface, include the output-traffic-control-profile statement at the [edit class-of-service interfaces interface-name unit logical-unit-number] hierarchy level:

[edit class-of-service interfaces interface-name unit logical-unit-number]
output-traffic-control-profile profile-name;
Example: Configuring Guaranteed Minimum Rate
Two logical interface units, 0 and 1, are provisioned with guaranteed minimums of 750 Kbps and 500 Kbps, respectively. For logical unit 1, the delay buffer is based on the guaranteed-rate setting. For logical unit 0, a delay-buffer rate of 500 Kbps is specified. The actual delay buffer allocated to each logical interface is 2 seconds of 500 Kbps. The 2-second value is based on the following calculation:

delay-buffer-rate < (8 x 64 Kbps): 2 seconds of delay-buffer-rate
For more information about this calculation, see the Class of Service User Guide (Routers and EX9200 Switches).
chassis {
    fpc 3 {
        pic 0 {
            q-pic-large-buffer;
        }
    }
}
interfaces {
    t1-3/0/1 {
        per-unit-scheduler;
    }
}
class-of-service {
    traffic-control-profiles {
        tc-profile3 {
            guaranteed-rate 750k;
            scheduler-map sched-map3;
            delay-buffer-rate 500k;    # 500 Kbps is less than 8 x 64 Kbps
        }
        tc-profile4 {
            guaranteed-rate 500k;      # 500 Kbps is less than 8 x 64 Kbps
            scheduler-map sched-map4;
        }
    }
    interfaces {
        t1-3/0/1 {
            unit 0 {
                output-traffic-control-profile tc-profile3;
            }
            unit 1 {
                output-traffic-control-profile tc-profile4;
            }
        }
    }
}