ON THIS PAGE
Inline Multilink Services
Reserving Bundle Bandwidth for Link-Layer Overhead on LSQ Interfaces
Enabling Inline LSQ Services
Configuring LSQ Interfaces as NxT1 or NxE1 Bundles Using MLPPP
Configuring LSQ Interfaces as NxT1 or NxE1 Bundles Using FRF.16
Configuring LSQ Interfaces as NxT1 or NxE1 Bundles Using FRF.15
Configuring LSQ Interfaces for Single Fractional T1 or E1 Interfaces Using MLPPP and LFI
Configuring LSQ Interfaces for Single Fractional T1 or E1 Interfaces Using FRF.12
Configuring LSQ Interfaces for T3 Links Configured for Compressed RTP over MLPPP
Configuring LSQ Interfaces as T3 or OC3 Bundles Using FRF.12
Configuring LSQ Interfaces for ATM2 IQ Interfaces Using MLPPP
Inline Multilink Services
Inline MLPPP for WAN Interfaces Overview
Inline Multilink PPP (MLPPP), Multilink Frame Relay (FRF.16), and Multilink Frame Relay End-to-End (FRF.15) for time-division multiplexing (TDM) WAN interfaces provide bundling services through the Packet Forwarding Engine without requiring a PIC or Dense Port Concentrator (DPC).
Traditionally, bundling services combine multiple low-speed links into a single higher-bandwidth pipe. The combined bandwidth is available to traffic from all links, and the bundle supports link fragmentation and interleaving (LFI), which reduces transmission delay for high-priority packets.
This support includes multiple links in the same bundle as well as the multiclass extension for MLPPP. With inline services, you can enable bundling without dedicating a DPC slot to a Services DPC, freeing that slot for other MICs.
MLPPP is not supported on MX Series Virtual Chassis.
Starting in Junos OS Release 15.1, you can configure inline MLPPP interfaces on MX80, MX104, MX240, MX480, and MX960 routers with Channelized E1/T1 Circuit Emulation MICs. A maximum of eight inline MLPPP interface bundles is supported on Channelized E1/T1 Circuit Emulation MICs, the same as on other MICs with which inline MLPPP is compatible.
Configuring inline MLPPP for WAN interfaces benefits the following services:
CE-PE links for Layer 3 VPN and DIA service with public switched telephone network (PSTN)-based access networks.
PE-P link when PSTN is used for MPLS networks.
This feature is used by the following service providers:
Service providers that offer Layer 3 VPN and DIA service over PSTN-based access networks to medium and large business customers.
Service providers with SONET-based core networks.
For connecting many smaller sites in VPNs, bundling TDM circuits with MLPPP or MLFR is often the only practical way to offer higher bandwidth and link redundancy.
MLPPP enables you to bundle multiple PPP links into a single multilink bundle, and MLFR enables you to bundle multiple Frame Relay data-link connection identifiers (DLCIs) into a single multilink bundle. Multilink bundles provide additional bandwidth, load balancing, and redundancy by aggregating low-speed links, such as T1, E1, and serial links.
MLPPP is a protocol for aggregating multiple constituent links into one larger PPP bundle, and MLFR allows you to aggregate multiple Frame Relay links by inverse multiplexing. MLPPP and MLFR provide service options at rates between a single low-speed T1 or E1 link and higher-speed circuits. In addition to the extra bandwidth, bundling multiple links adds a level of fault tolerance to your dedicated access service: because bundling is implemented across multiple interfaces, users are protected against loss of access when a single interface fails.
To configure inline MLPPP for WAN interfaces, see Enabling Inline LSQ Services.
Reserving Bundle Bandwidth for Link-Layer Overhead on LSQ Interfaces
Link-layer overhead can cause packet drops on constituent links because of bit stuffing on serial links. Bit stuffing is used to prevent data from being interpreted as control information.
By default, 4 percent of the total bundle bandwidth is set aside for link-layer overhead. In most network environments, the average link-layer overhead is 1.6 percent. Therefore, we recommend 4 percent as a safeguard. For more information, see RFC 4814, Hash and Stuffing: Overlooked Factors in Network Device Benchmarking.
For link services IQ (lsq-) interfaces, you can configure the percentage of bundle bandwidth to be set aside for link-layer overhead. To do this, include the link-layer-overhead statement:
link-layer-overhead percent;
You can include this statement at the following hierarchy levels:
[edit interfaces interface-name mlfr-uni-nni-bundle-options]
[edit interfaces interface-name unit logical-unit-number]
[edit logical-systems logical-system-name interfaces interface-name unit logical-unit-number]
You can configure the value to be from 0 percent through 50 percent.
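For example, a minimal sketch (interface and unit numbers are hypothetical) that reserves 5 percent of the bundle bandwidth for link-layer overhead on an LSQ unit:

[edit interfaces]
lsq-1/3/0 {
    unit 0 {
        link-layer-overhead 5; # Reserve 5 percent of bundle bandwidth for link-layer overhead.
    }
}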
Enabling Inline LSQ Services
The inline LSQ logical interface (referred to as lsq-) is a virtual service logical interface that resides on the Packet Forwarding Engine to provide Layer 2 bundling services without a services PIC. The naming convention is lsq-slot/pic/0.
See the MIC/MPC compatibility matrix for the MICs currently supported by MPC1, MPC2, MPC3, MPC6, MPC8, and MPC9 on MX240, MX480, MX960, MX2008, MX2010, MX2020, and MX10003 routers.
A Type 1 MPC has only one logical unit (LU), so only one LSQ logical interface can be created; when configuring a Type 1 MPC, use PIC slot 0. A Type 2 MPC has two LUs, so two LSQ logical interfaces can be created; when configuring a Type 2 MPC, use PIC slots 0 and 2.
Configure each LSQ logical interface with one loopback stream. This stream can be shaped like a regular stream, and is shared with other inline interfaces, such as the inline services (SI) interface.
To support FRF.16 bundles, create logical interfaces with the naming convention lsq-slot/pic/0:bundle_id, where bundle_id ranges from 0 through 254. You can configure logical interfaces created on the main LSQ logical interface as MLPPP or FRF.16.
Because SI and LSQ logical interfaces might share the same stream, and there can be multiple LSQ logical interfaces on that stream, any logical interface-related shaping is configured at the Layer 2 node instead of the Layer 1 node. As a result, when SI is enabled, instead of limiting the stream bandwidth to 1 Gbps or 10 Gbps based on the configuration, only the Layer 2 queue allocated for the SI interface is shaped at 1 Gbps or 10 Gbps.
For MLPPP and FRF.15, each LSQ logical interface is shaped based on the total bundle bandwidth (the sum of the member link bandwidths plus control packet flow overhead) by configuring one unique Layer 3 node per bundle. Similarly, each FRF.16 logical interface is shaped based on total bundle bandwidth by configuring one unique Layer 2 node per bundle. FRF.16 logical interface data-link connection identifiers (DLCIs) are mapped to Layer 3 nodes.
To enable inline LSQ services and create the lsq- logical interface for the specified PIC, include the multi-link-layer-2-inline and mlfr-uni-nni-bundles-inline configuration statements:

[edit chassis fpc number pic number]
user@host# set multi-link-layer-2-inline
user@host# set mlfr-uni-nni-bundles-inline number
On MX80 and MX104 routers that have a single Packet Forwarding Engine, you can configure the LSQ logical interface only on FPC 0 and PIC 0. The channelized card must be in slot FPC 0/0 for the corresponding bundle to work.
For example, to enable inline service for PIC 0 on a Type 1 MPC in slot 1:
[edit chassis fpc 1 pic 0]
user@host# set multi-link-layer-2-inline
user@host# set mlfr-uni-nni-bundles-inline 1
As a result, logical interfaces lsq-1/0/0 and lsq-1/0/0:0 are created, and the number of inline multilink Frame Relay user-to-network interface (UNI) and network-to-network interface (NNI) bundles is set to 1.
For example, to enable inline service for both PIC 0 and PIC 2 on a Type 2 MPC installed in slot 5:
[edit chassis fpc 5 pic 0]
user@host# set multi-link-layer-2-inline
user@host# set mlfr-uni-nni-bundles-inline 1
[edit chassis fpc 5 pic 2]
user@host# set multi-link-layer-2-inline
user@host# set mlfr-uni-nni-bundles-inline 1
As a result, logical interfaces lsq-5/0/0, lsq-5/0/0:0, lsq-5/2/0, and lsq-5/2/0:0 are created, and the number of inline multilink Frame Relay UNI and NNI bundles is set to 1 on each PIC.
The PIC number here is used only as an anchor to choose the correct LU to bind the inline LSQ interface. The bundling services remain operational as long as the Packet Forwarding Engine to which the interface is bound is operational, even if the logical PIC is offline.
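As a quick sanity check, a sketch of how you might confirm the inline LSQ interfaces were created, using the standard terse interface display (interface names follow the first example above):

user@host> show interfaces terse | match lsq-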
Configuring LSQ Interfaces as NxT1 or NxE1 Bundles Using MLPPP
To configure an NxT1 bundle using MLPPP, you aggregate N different T1 links into a bundle. The NxT1 bundle is called a logical interface, because it can represent, for example, a routing adjacency. To aggregate T1 links into an MLPPP bundle, include the bundle statement at the [edit interfaces t1-fpc/pic/port unit logical-unit-number family mlppp] hierarchy level:
[edit interfaces t1-fpc/pic/port unit logical-unit-number family mlppp] bundle lsq-fpc/pic/port.logical-unit-number;
Link services IQ interfaces support both T1 and E1 physical interfaces. These instructions apply to T1 interfaces, but the configuration for E1 interfaces is similar.
To configure the link services IQ interface properties, include the following statements at the [edit interfaces lsq-fpc/pic/port unit logical-unit-number] hierarchy level:
[edit interfaces lsq-fpc/pic/port unit logical-unit-number]
drop-timeout milliseconds;
encapsulation multilink-ppp;
fragment-threshold bytes;
link-layer-overhead percent;
minimum-links number;
mrru bytes;
short-sequence;
family inet {
    address address;
}
ACX Series routers do not support the drop-timeout and link-layer-overhead properties.
The logical link services IQ interface represents the MLPPP bundle. For the MLPPP bundle, there are four associated queues on M Series routers and eight associated queues on M320 and T Series routers. A scheduler removes packets from the queues according to a scheduling policy. Typically, you designate one queue to have strict priority, and the remaining queues are serviced in proportion to weights you configure.
For MLPPP, assign a single scheduler map to the link services IQ interface (lsq) and to each constituent link. The default schedulers for M Series and T Series routers, which assign 95, 0, 0, and 5 percent bandwidth for the transmission rate and buffer size of queues 0, 1, 2, and 3, are not adequate when you configure LFI or multiclass traffic. Therefore, for MLPPP, you should configure a single scheduler with nonzero percent transmission rates and buffer sizes for queues 0 through 3, and assign this scheduler to the link services IQ interface (lsq) and to each constituent link, as shown in Example: Configuring an LSQ Interface as an NxT1 Bundle Using MLPPP.
For M320 and T Series routers, the default scheduler transmission rate and buffer size percentages for queues 0 through 7 are 95, 0, 0, 5, 0, 0, 0, and 0 percent.
If a member link belonging to one MLPPP, MLFR, or MFR bundle interface is moved to another bundle interface, or links are swapped between two bundle interfaces, a commit is required between the delete and add operations to ensure that the configuration is applied correctly.
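For example, a minimal sketch of moving a member link from one bundle to another (interface and bundle names are hypothetical); the intermediate commit between the delete and add operations is the essential step:

[edit]
user@host# delete interfaces t1-0/0/0 unit 0 family mlppp bundle
user@host# commit
user@host# set interfaces t1-0/0/0 unit 0 family mlppp bundle lsq-1/3/0.2
user@host# commit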
If the bundle has more than one link, you must include the per-unit-scheduler statement at the [edit interfaces lsq-fpc/pic/port] hierarchy level:

[edit interfaces lsq-fpc/pic/port]
per-unit-scheduler;
To configure and apply the scheduling policy, include the following statements at the [edit class-of-service] hierarchy level:

[edit class-of-service]
interfaces {
    t1-fpc/pic/port {
        unit logical-unit-number {
            scheduler-map map-name;
        }
    }
}
forwarding-classes {
    queue queue-number class-name;
}
scheduler-maps {
    map-name {
        forwarding-class class-name scheduler scheduler-name;
    }
}
schedulers {
    scheduler-name {
        buffer-size (percent percentage | remainder | temporal microseconds);
        priority priority-level;
        transmit-rate (rate | percent percentage | remainder) <exact>;
    }
}
For link services IQ interfaces, a strict-high-priority queue might starve the other three queues because traffic in a strict-high priority queue is transmitted before any other queue is serviced. This implementation is unlike the standard Junos CoS implementation in which a strict-high-priority queue does round-robin with high-priority queues, as described in the Class of Service User Guide (Routers and EX9200 Switches).
After the scheduler removes a packet from a queue, a certain action is taken. The action depends on whether the packet came from a multilink encapsulated queue (fragmented and sequenced) or a nonencapsulated queue (hashed with no fragmentation). Each queue can be designated as either multilink encapsulated or nonencapsulated, independently of the other. By default, traffic in all forwarding classes is multilink encapsulated. To configure packet fragmentation handling on a queue, include the fragmentation-maps statement at the [edit class-of-service] hierarchy level:

fragmentation-maps {
    map-name {
        forwarding-class class-name {
            fragment-threshold bytes;
            multilink-class number;
            no-fragmentation;
        }
    }
}
For NxT1 bundles using MLPPP, the byte-wise load balancing used in multilink-encapsulated queues is superior to the flow-wise load balancing used in nonencapsulated queues, all other considerations being equal. Therefore, we recommend that you configure all queues to be multilink encapsulated. You do this by including the fragment-threshold statement in the configuration. If you choose to set traffic on a queue to be nonencapsulated rather than multilink encapsulated, include the no-fragmentation statement in the fragmentation map. You use the multilink-class statement to map a forwarding class into a multiclass MLPPP (MCML) class, as sketched below. For more information about fragmentation maps, see Configuring CoS Fragmentation by Forwarding Class on LSQ Interfaces.
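For example, a minimal fragmentation-map sketch (map name and threshold values are hypothetical) that keeps bulk data multilink encapsulated and maps voice into its own MCML class:

[edit class-of-service]
fragmentation-maps {
    mcml-map { # hypothetical map name
        forwarding-class ef {
            multilink-class 0; # Carry voice in its own multilink class (MCML).
        }
        forwarding-class be {
            fragment-threshold 128; # Keep bulk data multilink encapsulated.
        }
    }
}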
When a packet is removed from a multilink-encapsulated queue, the software gives the packet an MLPPP header. The MLPPP header contains a sequence number field, which is filled with the next available sequence number from a counter. The software then places the packet on one of the N different T1 links. The link is chosen on a packet-by-packet basis to balance the load across the various T1 links.
If the packet exceeds the minimum link MTU, or if a queue has a fragment threshold configured at the [edit class-of-service fragmentation-maps map-name forwarding-class class-name] hierarchy level, the software splits the packet into two or more fragments, which are assigned consecutive multilink sequence numbers. The outgoing link for each fragment is selected independently of all other fragments.
If you do not include the fragment-threshold statement in the fragmentation map, the fragmentation threshold you set at the [edit interfaces interface-name unit logical-unit-number] hierarchy level is the default for all forwarding classes. If you do not set a maximum fragment size anywhere in the configuration, packets are fragmented if they exceed the smallest MTU of all the links in the bundle.
Even if you do not set a maximum fragment size anywhere in the configuration, you can configure the maximum received reconstructed unit (MRRU) by including the mrru statement at the [edit interfaces lsq-fpc/pic/port unit logical-unit-number] hierarchy level. The MRRU is similar to the MTU, but is specific to link services interfaces. By default, the MRRU size is 1500 bytes, and you can configure it to be from 1500 through 4500 bytes. For more information, see Configuring MRRU on Multilink and Link Services Logical Interfaces.
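For example, a minimal sketch (interface and unit numbers are hypothetical) that raises the MRRU to its maximum so the far end can reassemble packets up to 4500 bytes:

[edit interfaces]
lsq-1/3/0 {
    unit 1 {
        mrru 4500; # Allow reassembly of packets up to 4500 bytes.
    }
}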
When a packet is removed from a nonencapsulated queue, it is transmitted with a plain PPP header. Because there is no MLPPP header, there is no sequence number information. Therefore, the software must take special measures to avoid packet reordering. To avoid packet reordering, the software places the packet on one of the N different T1 links. The link is determined by hashing the values in the header. For IP, the software computes the hash based on source address, destination address, and IP protocol. For MPLS, the software computes the hash based on up to five MPLS labels, or four MPLS labels and the IP header.
For UDP and TCP the software computes the hash based on the source and destination ports, as well as source and destination IP addresses. This guarantees that all packets belonging to the same TCP/UDP flow always pass through the same T1 link, and therefore cannot be reordered. However, it does not guarantee that the load on the various T1 links is balanced. If there are many flows, the load is usually balanced.
The N different T1 interfaces link to another router, which can be from Juniper Networks or another vendor. The router at the far end gathers packets from all the T1 links. If a packet has an MLPPP header, the sequence number field is used to put the packet back into sequence number order. If the packet has a plain PPP header, the software accepts the packet in the order in which it arrives and makes no attempt to reassemble or reorder the packet.
Example: Configuring an LSQ Interface as an NxT1 Bundle Using MLPPP
[edit chassis]
fpc 1 {
    pic 3 {
        adaptive-services {
            service-package layer-2;
        }
    }
}

[edit interfaces]
t1-0/0/0 {
    encapsulation ppp;
    unit 0 {
        family mlppp {
            bundle lsq-1/3/0.1; # This adds t1-0/0/0 to the specified bundle.
        }
    }
}
t1-0/0/1 {
    encapsulation ppp;
    unit 0 {
        family mlppp {
            bundle lsq-1/3/0.1;
        }
    }
}
lsq-1/3/0 {
    per-unit-scheduler;
    unit 1 { # This is the virtual link that concatenates multiple T1s.
        encapsulation multilink-ppp;
        drop-timeout 1000;
        fragment-threshold 128;
        link-layer-overhead 0.5;
        minimum-links 2;
        mrru 4500;
        short-sequence;
        family inet {
            address 10.2.3.4/24;
        }
    }
}

[edit class-of-service]
interfaces {
    lsq-1/3/0 { # multilink PPP bundle
        unit 1 {
            scheduler-map sched-map1;
            fragmentation-map fragmap-1;
        }
    }
    t1-0/0/0 { # multilink PPP constituent link
        unit 0 {
            scheduler-map sched-map1;
        }
    }
    t1-0/0/1 { # multilink PPP constituent link
        unit 0 {
            scheduler-map sched-map1;
        }
    }
}
forwarding-classes {
    queue 0 be;
    queue 1 ef;
    queue 2 af;
    queue 3 nc;
}
scheduler-maps {
    sched-map1 {
        forwarding-class af scheduler af-scheduler;
        forwarding-class be scheduler be-scheduler;
        forwarding-class ef scheduler ef-scheduler;
        forwarding-class nc scheduler nc-scheduler;
    }
}
schedulers {
    af-scheduler {
        transmit-rate percent 30;
        buffer-size percent 30;
        priority low;
    }
    be-scheduler {
        transmit-rate percent 25;
        buffer-size percent 25;
        priority low;
    }
    ef-scheduler {
        transmit-rate percent 40;
        buffer-size percent 40;
        priority strict-high; # voice queue
    }
    nc-scheduler {
        transmit-rate percent 5;
        buffer-size percent 5;
        priority high;
    }
}
fragmentation-maps {
    fragmap-1 {
        forwarding-class be {
            fragment-threshold 180;
        }
        forwarding-class ef {
            fragment-threshold 100;
        }
    }
}
Configuring LSQ Interfaces as NxT1 or NxE1 Bundles Using FRF.16
To configure an NxT1 bundle using FRF.16, you aggregate N different T1 links into a bundle. The NxT1 bundle carries a potentially large number of Frame Relay PVCs, identified by their DLCIs. Each DLCI is called a logical interface, because it can represent, for example, a routing adjacency.
To aggregate T1 links into an FRF.16 bundle, include the mlfr-uni-nni-bundles statement at the [edit chassis fpc slot-number pic slot-number] hierarchy level, and include the bundle statement at the [edit interfaces t1-fpc/pic/port unit logical-unit-number family mlfr-uni-nni] hierarchy level:

[edit chassis fpc slot-number pic slot-number]
mlfr-uni-nni-bundles number;

[edit interfaces t1-fpc/pic/port unit logical-unit-number family mlfr-uni-nni]
bundle lsq-fpc/pic/port:channel;
Link services IQ interfaces support both T1 and E1 physical interfaces. These instructions apply to T1 interfaces, but the configuration for E1 interfaces is similar.
To configure the link services IQ interface properties, include the following statements at the [edit interfaces lsq-fpc/pic/port:channel] hierarchy level:

[edit interfaces lsq-fpc/pic/port:channel]
encapsulation multilink-frame-relay-uni-nni;
dce;
mlfr-uni-nni-bundle-options {
    acknowledge-retries number;
    acknowledge-timer milliseconds;
    action-red-differential-delay (disable-tx | remove-link);
    drop-timeout milliseconds;
    fragment-threshold bytes;
    hello-timer milliseconds;
    link-layer-overhead percent;
    lmi-type (ansi | itu);
    minimum-links number;
    mrru bytes;
    n391 number;
    n392 number;
    n393 number;
    red-differential-delay milliseconds;
    t391 number;
    t392 number;
    yellow-differential-delay milliseconds;
}
unit logical-unit-number {
    dlci dlci-identifier;
    family inet {
        address address;
    }
}
The link services IQ channel represents the FRF.16 bundle. Four queues are associated with each DLCI. A scheduler removes packets from the queues according to a scheduling policy. On the link services IQ interface, you typically designate one queue to have strict priority. The remaining queues are serviced in proportion to weights you configure.
For link services IQ interfaces, a strict-high-priority queue might starve the other three queues because traffic in a strict-high-priority queue is transmitted before any other queue is serviced. This implementation is unlike the standard Junos CoS implementation in which a strict-high-priority queue does round-robin with high-priority queues, as described in the Class of Service User Guide (Routers and EX9200 Switches).
If the bundle has more than one link, you must include the per-unit-scheduler statement at the [edit interfaces lsq-fpc/pic/port:channel] hierarchy level:

[edit interfaces lsq-fpc/pic/port:channel]
per-unit-scheduler;
For FRF.16, you can assign a single scheduler map to the link services IQ interface (lsq) and to each link services IQ DLCI, or you can assign different scheduler maps to the various DLCIs of the bundle, as shown in Example: Configuring an LSQ Interface as an NxT1 Bundle Using FRF.16.
For the constituent links of an FRF.16 bundle, you do not need to configure a custom scheduler. Because LFI and multiclass are not supported for FRF.16, the traffic from each constituent link is transmitted from queue 0. This means you should allow most of the bandwidth to be used by queue 0. For M Series and T Series routers, the default schedulers’ transmission rate and buffer size percentages for queues 0 through 3 are 95, 0, 0, and 5 percent. These default schedulers send all user traffic to queue 0 and all network-control traffic to queue 3, and therefore are well suited to the behavior of FRF.16. If desired, you can configure a custom scheduler that explicitly replicates the 95, 0, 0, and 5 percent queuing behavior, and apply it to the constituent links.
For M320 and T Series routers, the default scheduler transmission rate and buffer size percentages for queues 0 through 7 are 95, 0, 0, 5, 0, 0, 0, and 0 percent.
If a member link belonging to one MLPPP, MLFR, or MFR bundle interface is moved to another bundle interface, or links are swapped between two bundle interfaces, a commit is required between the delete and add operations to ensure that the configuration is applied correctly.
To configure and apply the scheduling policy, include the following statements at the [edit class-of-service] hierarchy level:

[edit class-of-service]
interfaces {
    lsq-fpc/pic/port:channel {
        unit logical-unit-number {
            scheduler-map map-name;
        }
    }
}
forwarding-classes {
    queue queue-number class-name;
}
scheduler-maps {
    map-name {
        forwarding-class class-name scheduler scheduler-name;
    }
}
schedulers {
    scheduler-name {
        buffer-size (percent percentage | remainder | temporal microseconds);
        priority priority-level;
        transmit-rate (rate | percent percentage | remainder) <exact>;
    }
}
To configure packet fragmentation handling on a queue, include the fragmentation-maps statement at the [edit class-of-service] hierarchy level:

[edit class-of-service]
fragmentation-maps {
    map-name {
        forwarding-class class-name {
            fragment-threshold bytes;
        }
    }
}
For FRF.16 traffic, only multilink encapsulated (fragmented and sequenced) queues are supported. This is the default queuing behavior for all forwarding classes. FRF.16 does not allow for nonencapsulated traffic because the protocol requires that all packets carry the fragmentation header. If a large packet is split into multiple fragments, the fragments must have consecutive sequence numbers. Therefore, you cannot include the no-fragmentation statement at the [edit class-of-service fragmentation-maps map-name forwarding-class class-name] hierarchy level for FRF.16 traffic.
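For example, a minimal FRF.16 fragmentation-map sketch (map name and threshold are hypothetical); only fragment-threshold is set, because no-fragmentation is not allowed for FRF.16 traffic:

[edit class-of-service]
fragmentation-maps {
    frf16-map { # hypothetical map name
        forwarding-class be {
            fragment-threshold 128; # Every FRF.16 packet carries the fragmentation header.
        }
    }
}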
For FRF.16, if you want to carry voice or any other latency-sensitive traffic, you should not use slow links. At T1 speeds and above, the serialization delay is small enough that you do not need to use explicit LFI.
When a packet is removed from a multilink-encapsulated queue, the software gives the packet an FRF.16 header. The FRF.16 header contains a sequence number field, which is filled with the next available sequence number from a counter. The software then places the packet on one of the N different T1 links. The link is chosen on a packet-by-packet basis to balance the load across the various T1 links.
If the packet exceeds the minimum link MTU, or if a queue has a fragment threshold configured at the [edit class-of-service fragmentation-maps map-name forwarding-class class-name] hierarchy level, the software splits the packet into two or more fragments, which are assigned consecutive multilink sequence numbers. The outgoing link for each fragment is selected independently of all other fragments.
If you do not include the fragment-threshold statement in the fragmentation map, the fragmentation threshold you set at the [edit interfaces interface-name unit logical-unit-number] or [edit interfaces interface-name mlfr-uni-nni-bundle-options] hierarchy level is the default for all forwarding classes. If you do not set a maximum fragment size anywhere in the configuration, packets are fragmented if they exceed the smallest MTU of all the links in the bundle.
Even if you do not set a maximum fragment size anywhere in the configuration, you can configure the maximum received reconstructed unit (MRRU) by including the mrru statement at the [edit interfaces lsq-fpc/pic/port unit logical-unit-number] or [edit interfaces interface-name mlfr-uni-nni-bundle-options] hierarchy level. The MRRU is similar to the MTU but is specific to link services interfaces. By default, the MRRU size is 1500 bytes, and you can configure it to be from 1500 through 4500 bytes. For more information, see Configuring MRRU on Multilink and Link Services Logical Interfaces.
The N different T1 interfaces link to another router, which can be from Juniper Networks or another vendor. The router at the far end gathers packets from all the T1 links. Because each packet has an FRF.16 header, the sequence number field is used to put the packet back into sequence number order.
Example: Configuring an LSQ Interface as an NxT1 Bundle Using FRF.16
Configure an NxT1 bundle using FRF.16 with multiple CoS scheduler maps:
[edit chassis fpc 1 pic 3]
adaptive-services {
    service-package layer-2;
}
mlfr-uni-nni-bundles 2; # Creates channelized LSQ interfaces/FRF.16 bundles.

[edit interfaces]
t1-0/0/0 {
    encapsulation multilink-frame-relay-uni-nni;
    unit 0 {
        family mlfr-uni-nni {
            bundle lsq-1/3/0:1;
        }
    }
}
t1-0/0/1 {
    encapsulation multilink-frame-relay-uni-nni;
    unit 0 {
        family mlfr-uni-nni {
            bundle lsq-1/3/0:1;
        }
    }
}
lsq-1/3/0:1 { # Bundle link consisting of t1-0/0/0 and t1-0/0/1.
    per-unit-scheduler;
    encapsulation multilink-frame-relay-uni-nni;
    dce; # One end needs to be configured as DCE.
    mlfr-uni-nni-bundle-options {
        drop-timeout 180;
        fragment-threshold 64;
        hello-timer 180;
        minimum-links 2;
        mrru 3000;
        link-layer-overhead 0.5;
    }
    unit 0 {
        dlci 26; # Each logical unit maps a single DLCI.
        family inet {
            address 10.2.3.4/24;
        }
    }
    unit 1 {
        dlci 42;
        family inet {
            address 10.20.30.40/24;
        }
    }
    unit 2 {
        dlci 69;
        family inet {
            address 10.20.31.40/24; # Each unit requires a distinct subnet.
        }
    }
}

[edit class-of-service]
scheduler-maps {
    sched-map-lsq0 {
        forwarding-class af scheduler af-scheduler-lsq0;
        forwarding-class be scheduler be-scheduler-lsq0;
        forwarding-class ef scheduler ef-scheduler-lsq0;
        forwarding-class nc scheduler nc-scheduler-lsq0;
    }
    sched-map-lsq1 {
        forwarding-class af scheduler af-scheduler-lsq1;
        forwarding-class be scheduler be-scheduler-lsq1;
        forwarding-class ef scheduler ef-scheduler-lsq1;
        forwarding-class nc scheduler nc-scheduler-lsq1;
    }
}
schedulers {
    af-scheduler-lsq0 {
        transmit-rate percent 60;
        buffer-size percent 60;
        priority low;
    }
    be-scheduler-lsq0 {
        transmit-rate percent 30;
        buffer-size percent 30;
        priority low;
    }
    ef-scheduler-lsq0 {
        transmit-rate percent 5;
        buffer-size percent 5;
        priority strict-high;
    }
    nc-scheduler-lsq0 {
        transmit-rate percent 5;
        buffer-size percent 5;
        priority high;
    }
    af-scheduler-lsq1 {
        transmit-rate percent 50;
        buffer-size percent 50;
        priority low;
    }
    be-scheduler-lsq1 {
        transmit-rate percent 30;
        buffer-size percent 30;
        priority low;
    }
    ef-scheduler-lsq1 {
        transmit-rate percent 15;
        buffer-size percent 15;
        priority strict-high;
    }
    nc-scheduler-lsq1 {
        transmit-rate percent 5;
        buffer-size percent 5;
        priority high;
    }
}
interfaces {
    lsq-1/3/0:1 { # MLFR FRF.16
        unit 0 {
            scheduler-map sched-map-lsq0;
        }
        unit 1 {
            scheduler-map sched-map-lsq1;
        }
    }
}
Configuring LSQ Interfaces as NxT1 or NxE1 Bundles Using FRF.15
This example configures an NxT1 bundle using FRF.15 on a link services IQ interface. FRF.15 is similar to FRF.12, as described in Configuring LSQ Interfaces for Single Fractional T1 or E1 Interfaces Using FRF.12. The difference is that FRF.15 supports multiple physical links in a bundle, whereas FRF.12 supports only one physical link per bundle. For the Junos OS implementation of FRF.15, you can configure one DLCI per physical link.
Link services IQ interfaces support both T1 and E1 physical interfaces. This example refers to T1 interfaces, but the configuration for E1 interfaces is similar.
[edit interfaces]
lsq-1/3/0 {
    per-unit-scheduler;
    unit 0 {
        dlci 69;
        encapsulation multilink-frame-relay-end-to-end;
    }
    unit 1 {
        dlci 13;
        encapsulation multilink-frame-relay-end-to-end;
    }
}
# First physical link
t1-1/1/0:1 {
    encapsulation frame-relay;
    unit 0 {
        family mlfr-end-to-end {
            bundle lsq-1/3/0.0;
        }
    }
}
# Second physical link
t1-1/1/0:2 {
    encapsulation frame-relay;
    unit 0 {
        family mlfr-end-to-end {
            bundle lsq-1/3/0.0;
        }
    }
}
Configuring LSQ Interfaces for Single Fractional T1 or E1 Interfaces Using MLPPP and LFI
When you configure a single fractional T1 interface, it is called a logical interface, because it can represent, for example, a routing adjacency.
The logical link services IQ interface represents the MLPPP bundle. Four queues are associated with the logical interface. A scheduler removes packets from the queues according to a scheduling policy. Typically, you designate one queue to have strict priority, and the remaining queues are serviced in proportion to weights you configure.
To configure a single fractional T1 interface using MLPPP and LFI, you associate one DS0 (fractional T1) interface with a link services IQ interface. To associate a fractional T1 interface with a link services IQ interface, include the bundle statement at the [edit interfaces ds-fpc/pic/port:channel unit logical-unit-number family mlppp] hierarchy level:

[edit interfaces ds-fpc/pic/port:channel unit logical-unit-number family mlppp]
bundle lsq-fpc/pic/port.logical-unit-number;
Link services IQ interfaces support both T1 and E1 physical interfaces. These instructions apply to T1 interfaces, but the configuration for E1 interfaces is similar.
To configure the link services IQ interface properties, include the following statements at the [edit interfaces lsq-fpc/pic/port unit logical-unit-number] hierarchy level:

[edit interfaces lsq-fpc/pic/port unit logical-unit-number]
drop-timeout milliseconds;
encapsulation multilink-ppp;
fragment-threshold bytes;
link-layer-overhead percent;
minimum-links number;
mrru bytes;
short-sequence;
family inet {
    address address;
}
For MLPPP, assign a single scheduler map to the link services IQ (lsq) interface and to each constituent link. The default schedulers for M Series and T Series routers, which assign 95, 0, 0, and 5 percent bandwidth for the transmission rate and buffer size of queues 0, 1, 2, and 3, are not adequate when you configure LFI or multiclass traffic. Therefore, for MLPPP, you should configure a single scheduler with nonzero percent transmission rates and buffer sizes for queues 0 through 3, and assign this scheduler to the link services IQ (lsq) interface and to each constituent link, as shown in Example: Configuring an LSQ Interface for a Fractional T1 Interface Using MLPPP and LFI.
For M320 and T Series routers, the default scheduler transmission rate and buffer size percentages for queues 0 through 7 are 95, 0, 0, 5, 0, 0, 0, and 0 percent.
To configure and apply the scheduling policy, include the following statements at the [edit class-of-service] hierarchy level:

[edit class-of-service]
interfaces {
    ds-fpc/pic/port:channel {
        scheduler-map map-name;
    }
}
forwarding-classes {
    queue queue-number class-name;
}
scheduler-maps {
    map-name {
        forwarding-class class-name scheduler scheduler-name;
    }
}
schedulers {
    scheduler-name {
        buffer-size (percent percentage | remainder | temporal microseconds);
        priority priority-level;
        transmit-rate (rate | percent percentage | remainder) <exact>;
    }
}
For link services IQ interfaces, a strict-high-priority queue might starve all the other queues because traffic in a strict-high-priority queue is transmitted before any other queue is serviced. This implementation is unlike the standard Junos CoS implementation in which a strict-high-priority queue receives infinite credits and does round-robin with high-priority queues, as described in the Class of Service User Guide (Routers and EX9200 Switches).
After the scheduler removes a packet from a queue, a certain action is taken. The action depends on whether the packet came from a multilink encapsulated queue (fragmented and sequenced) or a nonencapsulated queue (hashed with no fragmentation). Each queue can be designated as either multilink encapsulated or nonencapsulated, independently of the other. By default, traffic in all forwarding classes is multilink encapsulated. To configure packet fragmentation handling on a queue, include the fragmentation-maps statement at the [edit class-of-service] hierarchy level:

[edit class-of-service]
fragmentation-maps {
    map-name {
        forwarding-class class-name {
            fragment-threshold bytes;
            no-fragmentation;
        }
    }
}
If you require the queue to transmit small packets with low latency, configure the queue to be nonencapsulated by including the no-fragmentation statement. If you require the queue to transmit large packets with normal latency, configure the queue to be multilink encapsulated by including the fragment-threshold statement. If you require the queue to transmit large packets with low latency, we recommend using a faster link and configuring the queue to be nonencapsulated (see the sketch below). For more information about fragmentation maps, see Configuring CoS Fragmentation by Forwarding Class on LSQ Interfaces.
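For example, a minimal LFI fragmentation-map sketch (map name and threshold are hypothetical): voice leaves its queue unfragmented and is interleaved between data fragments:

[edit class-of-service]
fragmentation-maps {
    lfi-map { # hypothetical map name
        forwarding-class ef {
            no-fragmentation; # Voice: small packets sent between data fragments.
        }
        forwarding-class be {
            fragment-threshold 160; # Data: fragmented so voice can interleave.
        }
    }
}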
When a packet is removed from a multilink-encapsulated queue, it is fragmented. If the packet exceeds the minimum link MTU, or if a queue has a fragment threshold configured at the [edit class-of-service fragmentation-maps map-name forwarding-class class-name] hierarchy level, the software splits the packet into two or more fragments, which are assigned consecutive multilink sequence numbers.
If you do not include the fragment-threshold statement in the fragmentation map, the fragmentation threshold you set at the [edit interfaces interface-name unit logical-unit-number] hierarchy level is the default for all forwarding classes. If you do not set a maximum fragment size anywhere in the configuration, packets are fragmented if they exceed the smallest MTU of all the links in the bundle.
Even if you do not set a maximum fragment size anywhere in the configuration, you can configure the maximum received reconstructed unit (MRRU) by including the mrru statement at the [edit interfaces lsq-fpc/pic/port unit logical-unit-number] hierarchy level. The MRRU is similar to the MTU, but is specific to link services interfaces. By default, the MRRU size is 1500 bytes, and you can configure it to be from 1500 through 4500 bytes. For more information, see Configuring MRRU on Multilink and Link Services Logical Interfaces.
When a packet is removed from a multilink-encapsulated queue, the software gives the packet an MLPPP header. The MLPPP header contains a sequence number field, which is filled with the next available sequence number from a counter. The software then places the packet on the fractional T1 link. Traffic from another queue might be interleaved between two fragments of the packet.
When a packet is removed from a nonencapsulated queue, it is transmitted with a plain PPP header. The packet is then placed on the fractional T1 link as soon as possible. If necessary, the packet is placed between the fragments of a packet from another queue.
The fractional T1 interface links to another router, which can be from Juniper Networks or another vendor. The router at the far end gathers packets from the fractional T1 link. If a packet has an MLPPP header, the software assumes the packet is a fragment of a larger packet, and the fragment number field is used to reassemble the larger packet. If the packet has a plain PPP header, the software accepts the packet in the order in which it arrives, and the software makes no attempt to reassemble or reorder the packet.
Example: Configuring an LSQ Interface for a Fractional T1 Interface Using MLPPP and LFI
Configure a single fractional T1 logical interface:
[edit interfaces]
lsq-0/2/0 {
    per-unit-scheduler;
    unit 0 {
        encapsulation multilink-ppp;
        link-layer-overhead 0.5;
        family inet {
            address 10.40.1.1/30;
        }
    }
}
ct3-1/0/0 {
    partition 1 interface-type ct1;
}
ct1-1/0/0:1 {
    partition 1 timeslots 1-2 interface-type ds;
}
ds-1/0/0:1:1 {
    encapsulation ppp;
    unit 0 {
        family mlppp {
            bundle lsq-0/2/0.0;
        }
    }
}

[edit class-of-service]
interfaces {
    lsq-0/2/0 { # multilink PPP bundle
        unit 0 {
            scheduler-map sched-map1;
            fragmentation-map fragmap-1;
        }
    }
    ds-1/0/0:1:1 { # multilink PPP constituent link
        unit 0 {
            scheduler-map sched-map1;
        }
    }
}
forwarding-classes {
    queue 0 be;
    queue 1 ef;
    queue 2 af;
    queue 3 nc;
}
scheduler-maps {
    sched-map1 {
        forwarding-class af scheduler af-scheduler;
        forwarding-class be scheduler be-scheduler;
        forwarding-class ef scheduler ef-scheduler;
        forwarding-class nc scheduler nc-scheduler;
    }
}
schedulers {
    af-scheduler {
        transmit-rate percent 20;
        buffer-size percent 20;
        priority low;
    }
    be-scheduler {
        transmit-rate percent 20;
        buffer-size percent 20;
        priority low;
    }
    ef-scheduler {
        transmit-rate percent 50;
        buffer-size percent 50;
        priority strict-high; # voice queue
    }
    nc-scheduler {
        transmit-rate percent 10;
        buffer-size percent 10;
        priority high;
    }
}
fragmentation-maps {
    fragmap-1 {
        forwarding-class be {
            fragment-threshold 180;
        }
        forwarding-class ef {
            fragment-threshold 100;
        }
    }
}
Configuring LSQ Interfaces for Single Fractional T1 or E1 Interfaces Using FRF.12
To configure a single fractional T1 interface using FRF.12, you associate a DS0 interface with a link services IQ (lsq) interface. When you configure a single fractional T1, the fractional T1 carries a potentially large number of Frame Relay PVCs, identified by their DLCIs. Each DLCI is called a logical interface, because it can represent, for example, a routing adjacency. To associate the DS0 interface with a link services IQ interface, include the bundle statement at the [edit interfaces ds-fpc/pic/port:channel unit logical-unit-number family mlfr-end-to-end] hierarchy level:

[edit interfaces ds-fpc/pic/port:channel unit logical-unit-number family mlfr-end-to-end]
bundle lsq-fpc/pic/port.logical-unit-number;
Link services IQ interfaces support both T1 and E1 physical interfaces. These instructions apply to T1 interfaces, but the configuration for E1 interfaces is similar.
To configure the link services IQ interface properties, include the following statements at the [edit interfaces lsq-fpc/pic/port unit logical-unit-number] hierarchy level:

[edit interfaces lsq-fpc/pic/port unit logical-unit-number]
drop-timeout milliseconds;
encapsulation multilink-frame-relay-end-to-end;
fragment-threshold bytes;
link-layer-overhead percent;
minimum-links number;
mrru bytes;
short-sequence;
family inet {
    address address;
}
The logical link services IQ interface represents the FRF.12 bundle. Four queues are associated with each logical interface. A scheduler removes packets from the queues according to a scheduling policy. Typically, you designate one queue to have strict priority, and the remaining queues are serviced in proportion to weights you configure.
For FRF.12, assign a single scheduler map to the link services IQ interface (lsq) and to each constituent link. For M Series and T Series routers, the default schedulers, which assign 95, 0, 0, and 5 percent bandwidth for the transmission rate and buffer size of queues 0, 1, 2, and 3, are not adequate when you configure LFI or multiclass traffic. Therefore, for FRF.12, you should configure schedulers with nonzero percent transmission rates and buffer sizes for queues 0 through 3, and assign them to the link services IQ interface (lsq) and to each constituent link, as shown in Examples: Configuring an LSQ Interface for a Fractional T1 Interface Using FRF.12.
For M320 and T Series routers, the default scheduler transmission rate and buffer size percentages for queues 0 through 7 are 95, 0, 0, 5, 0, 0, 0, and 0 percent.
To configure and apply the scheduling policy, include the following statements at the [edit class-of-service] hierarchy level:

[edit class-of-service]
interfaces {
    ds-fpc/pic/port:channel {
        scheduler-map map-name;
    }
}
forwarding-classes {
    queue queue-number class-name;
}
scheduler-maps {
    map-name {
        forwarding-class class-name scheduler scheduler-name;
    }
}
schedulers {
    scheduler-name {
        buffer-size (percent percentage | remainder | temporal microseconds);
        priority priority-level;
        transmit-rate (rate | percent percentage | remainder) <exact>;
    }
}
For link services IQ interfaces, a strict-high-priority queue might starve the other three queues because traffic in a strict-high-priority queue is transmitted before any other queue is serviced. This implementation is unlike the standard Junos CoS implementation in which a strict-high-priority queue does round-robin with high-priority queues, as described in the Class of Service User Guide (Routers and EX9200 Switches).
After the scheduler removes a packet from a queue, a certain action is taken. The action depends on whether the packet came from a multilink encapsulated queue (fragmented and sequenced) or a nonencapsulated queue (hashed with no fragmentation). Each queue can be designated as either multilink encapsulated or nonencapsulated, independently of the other. By default, traffic in all forwarding classes is multilink encapsulated. To configure packet fragmentation handling on a queue, include the fragmentation-maps statement at the [edit class-of-service] hierarchy level:

[edit class-of-service]
fragmentation-maps {
    map-name {
        forwarding-class class-name {
            fragment-threshold bytes;
            no-fragmentation;
        }
    }
}
If you require the queue to transmit small packets with low latency, configure the queue to be nonencapsulated by including the no-fragmentation statement. If you require the queue to transmit large packets with normal latency, configure the queue to be multilink encapsulated by including the fragment-threshold statement. If you require the queue to transmit large packets with low latency, we recommend using a faster link and configuring the queue to be nonencapsulated. For more information about fragmentation maps, see Configuring CoS Fragmentation by Forwarding Class on LSQ Interfaces.
When a packet is removed from a multilink-encapsulated queue, it is fragmented. If the packet exceeds the minimum link MTU, or if a queue has a fragment threshold configured at the [edit class-of-service fragmentation-maps map-name forwarding-class class-name] hierarchy level, the software splits the packet into two or more fragments, which are assigned consecutive multilink sequence numbers.
If you do not include the fragment-threshold statement in the fragmentation map, the fragmentation threshold you set at the [edit interfaces interface-name unit logical-unit-number] hierarchy level is the default for all forwarding classes. If you do not set a maximum fragment size anywhere in the configuration, packets are fragmented if they exceed the smallest MTU of all the links in the bundle.
Even if you do not set a maximum fragment size anywhere in the configuration, you can configure the maximum received reconstructed unit (MRRU) by including the mrru statement at the [edit interfaces lsq-fpc/pic/port unit logical-unit-number] hierarchy level. The MRRU is similar to the MTU but is specific to link services interfaces. By default, the MRRU size is 1500 bytes, and you can configure it to be from 1500 through 4500 bytes. For more information, see Configuring MRRU on Multilink and Link Services Logical Interfaces.
When a packet is removed from a multilink-encapsulated queue, the software gives the packet an FRF.12 header. The FRF.12 header contains a sequence number field, which is filled with the next available sequence number from a counter. The software then places the packet on the fractional T1 link. Traffic from another queue might be interleaved between two fragments of the packet.
When a packet is removed from a nonencapsulated queue, it is transmitted with a plain Frame Relay header. The packet is then placed on the fractional T1 link as soon as possible. If necessary, the packet is placed between the fragments of a packet from another queue.
The fractional T1 interface links to another router, which can be from Juniper Networks or another vendor. The router at the far end gathers packets from the fractional T1 link. If a packet has an FRF.12 header, the software assumes the packet is a fragment of a larger packet, and the fragment number field is used to reassemble the larger packet. If the packet has a plain Frame Relay header, the software accepts the packet in the order in which it arrives, and the software makes no attempt to reassemble or reorder the packet.
A whole packet from a nonencapsulated queue can be placed between fragments of a multilink-encapsulated queue. However, fragments from one multilink-encapsulated queue cannot be interleaved with fragments from another multilink-encapsulated queue. This is the intent of the specification FRF.12, Frame Relay Fragmentation Implementation Agreement. If fragments from two different queues were interleaved, the header fields might not have enough information to separate the fragments.
Examples: Configuring an LSQ Interface for a Fractional T1 Interface Using FRF.12
FRF.12 with Fragmentation and Without LFI
This example shows a 128-Kbps DS0 interface. There is one traffic stream on ge-0/0/0, which is classified into queue 0 (be). Packets are fragmented in the link services IQ (lsq-) interface according to the threshold configured in the fragmentation map.
[edit chassis]
fpc 0 {
    pic 3 {
        adaptive-services {
            service-package layer-2;
        }
    }
}

[edit interfaces]
ge-0/0/0 {
    unit 0 {
        family inet {
            address 20.1.1.1/24 {
                arp 20.1.1.2 mac 00:00:5e:00:53:56;
            }
        }
    }
}
ce1-0/2/0 {
    partition 1 timeslots 1-2 interface-type ds;
}
ds-0/2/0:1 {
    no-keepalives;
    dce;
    encapsulation frame-relay;
    unit 0 {
        dlci 100;
        family mlfr-end-to-end {
            bundle lsq-0/3/0.0;
        }
    }
}
lsq-0/3/0 {
    per-unit-scheduler;
    unit 0 {
        encapsulation multilink-frame-relay-end-to-end;
        family inet {
            address 10.200.0.78/30;
        }
    }
}
fxp0 {
    unit 0 {
        family inet {
            address 172.16.1.162/24;
        }
    }
}
lo0 {
    unit 0 {
        family inet {
            address 10.0.0.1/32;
        }
    }
}

[edit class-of-service]
forwarding-classes {
    queue 0 be;
    queue 1 ef;
    queue 2 af;
    queue 3 nc;
}
interfaces {
    lsq-0/3/0 {
        unit 0 {
            fragmentation-map map1;
        }
    }
}
fragmentation-maps {
    map1 {
        forwarding-class be {
            fragment-threshold 160;
        }
    }
}
FRF.12 with Fragmentation and LFI
This example shows a 512-Kbps DS0 bundle and four traffic streams on ge-0/0/0 that are classified into four queues. The fragment size is 160 bytes for queue 0, queue 1, and queue 2. The voice stream on queue 3 has LFI configured.
[edit chassis]
fpc 0 {
    pic 3 {
        adaptive-services {
            service-package layer-2;
        }
    }
}

[edit interfaces]
ge-0/0/0 {
    unit 0 {
        family inet {
            address 20.1.1.1/24 {
                arp 20.1.1.2 mac 00:00:5e:00:53:56;
            }
        }
    }
}
ce1-0/2/0 {
    partition 1 timeslots 1-8 interface-type ds;
}
ds-0/2/0:1 {
    no-keepalives;
    dce;
    encapsulation frame-relay;
    unit 0 {
        dlci 100;
        family mlfr-end-to-end {
            bundle lsq-0/3/0.0;
        }
    }
}
lsq-0/3/0 {
    per-unit-scheduler;
    unit 0 {
        encapsulation multilink-frame-relay-end-to-end;
        family inet {
            address 10.200.0.78/30;
        }
    }
}

[edit class-of-service]
classifiers {
    inet-precedence ge-interface-classifier {
        forwarding-class be {
            loss-priority low code-points 000;
        }
        forwarding-class ef {
            loss-priority low code-points 010;
        }
        forwarding-class af {
            loss-priority low code-points 100;
        }
        forwarding-class nc {
            loss-priority low code-points 110;
        }
    }
}
forwarding-classes {
    queue 0 be;
    queue 1 ef;
    queue 2 af;
    queue 3 nc;
}
interfaces {
    lsq-0/3/0 {
        unit 0 {
            scheduler-map sched2;
            fragmentation-map map2;
        }
    }
    ds-0/2/0:1 {
        scheduler-map link-map2;
    }
    ge-0/0/0 {
        unit 0 {
            classifiers {
                inet-precedence ge-interface-classifier;
            }
        }
    }
}
scheduler-maps {
    sched2 {
        forwarding-class be scheduler economy;
        forwarding-class ef scheduler business;
        forwarding-class af scheduler stream;
        forwarding-class nc scheduler voice;
    }
    link-map2 {
        forwarding-class be scheduler link-economy;
        forwarding-class ef scheduler link-business;
        forwarding-class af scheduler link-stream;
        forwarding-class nc scheduler link-voice;
    }
}
fragmentation-maps {
    map2 {
        forwarding-class be {
            fragment-threshold 160;
        }
        forwarding-class ef {
            fragment-threshold 160;
        }
        forwarding-class af {
            fragment-threshold 160;
        }
        forwarding-class nc {
            no-fragmentation;
        }
    }
}
schedulers {
    economy {
        transmit-rate percent 26;
        buffer-size percent 26;
    }
    business {
        transmit-rate percent 26;
        buffer-size percent 26;
    }
    stream {
        transmit-rate percent 35;
        buffer-size percent 35;
    }
    voice {
        transmit-rate percent 13;
        buffer-size percent 13;
    }
    link-economy {
        transmit-rate percent 26;
        buffer-size percent 26;
    }
    link-business {
        transmit-rate percent 26;
        buffer-size percent 26;
    }
    link-stream {
        transmit-rate percent 35;
        buffer-size percent 35;
    }
    link-voice {
        transmit-rate percent 13;
        buffer-size percent 13;
    }
}
Configuring LSQ Interfaces for T3 Links Configured for Compressed RTP over MLPPP
This example bundles a single T3 interface on a link services IQ interface with MLPPP encapsulation. Binding a single T3 interface to a multilink bundle allows you to configure compressed RTP (CRTP) on the T3 interface.
This scenario applies to MLPPP bundles only. The Junos OS does not currently support CRTP over Frame Relay. For more information, see Configuring Services Interfaces for Voice Services.
There is no need to configure LFI at DS3 speeds, because the packet serialization delay is negligible.
[edit interfaces]
t3-0/0/0 {
    unit 0 {
        family mlppp {
            bundle lsq-1/3/0.1;
        }
    }
}
lsq-1/3/0 {
    unit 1 {
        encapsulation multilink-ppp;
        compression {
            rtp { # cRTP parameters go here.
                port minimum 2000 maximum 64009;
            }
        }
    }
}
This configuration uses a default fragmentation map, which results in all forwarding classes (queues) being sent out with a multilink header.
To eliminate multilink headers, you can configure a fragmentation map in which all queues have the no-fragmentation statement at the [edit class-of-service fragmentation-maps map-name forwarding-class class-name] hierarchy level, and attach the fragmentation map to the lsq-1/3/0.1 interface, as shown here:
[edit class-of-service]
fragmentation-maps {
    fragmap {
        forwarding-class be {
            no-fragmentation;
        }
        forwarding-class af {
            no-fragmentation;
        }
        forwarding-class ef {
            no-fragmentation;
        }
        forwarding-class nc {
            no-fragmentation;
        }
    }
}
interfaces {
    lsq-1/3/0 {
        unit 1 {
            fragmentation-map fragmap;
        }
    }
}
Configuring LSQ Interfaces as T3 or OC3 Bundles Using FRF.12
This example configures a clear-channel T3 or OC3 interface with multiple logical interfaces (DLCIs) on the link. In this scenario, each DLCI represents a customer. DLCIs are shaped at the egress PIC to a particular speed (NxDS0). This allows you to configure LFI using FRF.12 End-to-End Protocol on Frame Relay DLCIs.
To do this, first configure logical interfaces (DLCIs) on the physical interface. Then bundle the DLCIs, so that there is only one DLCI per bundle.
The physical interface must be capable of per-DLCI scheduling, which allows you to attach shaping rates to each DLCI. For more information, see the Junos OS Network Interfaces Library for Routing Devices.
To prevent fragment drops at the egress PIC, you must assign a shaping rate to the link services IQ logical interfaces and to the egress DLCIs. Shaping rates on DLCIs specify how much bandwidth is available for each DLCI. The shaping rate on link services IQ interfaces should match the shaping rate assigned to the DLCI that is associated with the bundle.
Egress interfaces also must have a scheduler map attached. The queue that carries voice should be strict-high-priority, while all other queues should be low-priority. This makes LFI possible.
This example shows voice traffic in the ef queue. The voice traffic is interleaved with bulk data. Alternatively, you can use multiclass MLPPP to carry multiple classes of traffic in different multilink classes.
[edit interfaces]
t3-0/0/0 {
    per-unit-scheduler;
    encapsulation frame-relay;
    unit 0 {
        dlci 69;
        family mlfr-end-to-end {
            bundle lsq-1/3/0.0;
        }
    }
    unit 1 {
        dlci 42;
        family mlfr-end-to-end {
            bundle lsq-1/3/0.1;
        }
    }
}
lsq-1/3/0 {
    unit 0 {
        encapsulation multilink-frame-relay-end-to-end;
        fragment-threshold 320; # Multilink packets must be fragmented.
    }
    unit 1 {
        encapsulation multilink-frame-relay-end-to-end;
        fragment-threshold 160;
    }
}

[edit class-of-service]
scheduler-maps {
    sched { # Scheduling parameters that apply to bundles on AS or Multiservices PICs.
        ...
    }
    pic-sched { # Scheduling parameters for egress DLCIs.
        # The voice queue should be strict-high priority.
        # All other queues should be low priority.
        ...
    }
}
fragmentation-maps {
    fragmap {
        forwarding-class ef { # Voice is carried in the ef queue.
            no-fragmentation; # It is interleaved with bulk data.
        }
    }
}
interfaces {
    t3-0/0/0 {
        unit 0 {
            shaping-rate 512k;
            scheduler-map pic-sched;
        }
        unit 1 {
            shaping-rate 128k;
            scheduler-map pic-sched;
        }
    }
    lsq-1/3/0 { # Assign fragmentation and scheduling to LSQ interfaces.
        unit 0 {
            shaping-rate 512k;
            scheduler-map sched;
            fragmentation-map fragmap;
        }
        unit 1 {
            shaping-rate 128k;
            scheduler-map sched;
            fragmentation-map fragmap;
        }
    }
}
For more information about how FRF.12 works with links services IQ interfaces, see Configuring LSQ Interfaces for Single Fractional T1 or E1 Interfaces Using FRF.12.
Configuring LSQ Interfaces for ATM2 IQ Interfaces Using MLPPP
This example configures an ATM2 IQ interface with MLPPP bundled with link services IQ interfaces. This allows you to configure LFI on ATM virtual circuits.
For this type of configuration, the ATM2 IQ interface must have LLC encapsulation.
The following ATM PICs are supported in this scenario:
2-port OC-3/STM1 ATM2 IQ
4-port DS3 ATM2 IQ
Virtual circuit multiplexed PPP over AAL5 is not supported. Frame Relay is not supported. Bundling of multiple ATM VCs into a single logical interface is not supported.
Unlike DS3 and OC3 interfaces, there is no need to create a separate scheduler map for the ATM PIC. For ATM, you define CoS components at the [edit interfaces at-fpc/pic/port atm-options] hierarchy level, as described in the Junos OS Network Interfaces Library for Routing Devices.
Do not configure RED profiles on ATM logical interfaces that are bundled. Drops do not occur at the ATM interface.
In this example, two ATM VCs are configured and bundled into two link services IQ bundles. A fragmentation map is used to interleave voice traffic with other multilink traffic. Because MLPPP is used, each link services IQ bundle can be configured for CRTP.
[edit interfaces]
at-1/2/0 {
    atm-options {
        vpi 0;
        pic-type atm2;
    }
    unit 0 {
        vci 0.69;
        encapsulation atm-mlppp-llc;
        family mlppp {
            bundle lsq-1/3/0.10;
        }
    }
    unit 1 {
        vci 0.42;
        encapsulation atm-mlppp-llc;
        family mlppp {
            bundle lsq-1/3/0.11;
        }
    }
}
lsq-1/3/0 {
    unit 10 {
        encapsulation multilink-ppp;
        # Large packets must be fragmented.
        # You can specify fragmentation for each forwarding class.
        fragment-threshold 320;
        compression {
            rtp {
                port minimum 2000 maximum 64009;
            }
        }
    }
    unit 11 {
        encapsulation multilink-ppp;
        fragment-threshold 160;
    }
}

[edit class-of-service]
scheduler-maps {
    sched { # Scheduling parameters that apply to LSQ bundles on AS or Multiservices PICs.
        ...
    }
}
fragmentation-maps {
    fragmap {
        forwarding-class ef {
            no-fragmentation;
        }
    }
}
interfaces {
    lsq-1/3/0 { # Assign fragmentation and scheduling parameters to LSQ interfaces.
        unit 10 {
            shaping-rate 512k;
            scheduler-map sched;
            fragmentation-map fragmap;
        }
        unit 11 {
            shaping-rate 128k;
            scheduler-map sched;
            fragmentation-map fragmap;
        }
    }
}
Feature support is determined by the platform and release you are using. Use Feature Explorer to determine if a feature is supported on your platform.