Configuring LSQ Interfaces as NxT1 or NxE1 Bundles Using MLPPP

To configure an NxT1 bundle using MLPPP, you aggregate N different T1 links into a bundle. The NxT1 bundle is called a logical interface because it can represent, for example, a routing adjacency. To aggregate T1 links into an MLPPP bundle, include the bundle statement at the [edit interfaces t1-fpc/pic/port unit logical-unit-number family mlppp] hierarchy level:

[edit interfaces t1-fpc/pic/port unit logical-unit-number family mlppp]
bundle lsq-fpc/pic/port.logical-unit-number;
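For example, the following sketch (the interface and bundle names are hypothetical) adds two T1 links to the bundle lsq-0/2/0.0:

[edit interfaces]
t1-1/0/0 {
    unit 0 {
        family mlppp {
            bundle lsq-0/2/0.0;
        }
    }
}
t1-1/0/1 {
    unit 0 {
        family mlppp {
            bundle lsq-0/2/0.0;
        }
    }
}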

Note: Link services IQ interfaces support both T1 and E1 physical interfaces. These instructions apply to T1 interfaces, but the configuration for E1 interfaces is similar.

To configure the link services IQ interface properties, include the following statements at the [edit interfaces lsq-fpc/pic/port unit logical-unit-number] hierarchy level:

[edit interfaces lsq-fpc/pic/port unit logical-unit-number]
drop-timeout milliseconds;
encapsulation multilink-ppp;
fragment-threshold bytes;
link-layer-overhead percent;
minimum-links number;
mrru bytes;
short-sequence;
family inet {
    address address;
}
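For example, a minimal bundle definition might look like the following sketch; the interface name and values are illustrative, and a complete configuration appears in the example at the end of this topic:

[edit interfaces lsq-0/2/0 unit 0]
encapsulation multilink-ppp;
fragment-threshold 128;
minimum-links 2;
mrru 2000;
short-sequence;
family inet {
    address 192.0.2.1/30;
}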

The logical link services IQ interface represents the MLPPP bundle. For the MLPPP bundle, there are four associated queues on M Series routers and eight associated queues on M320 and T Series routers. A scheduler removes packets from the queues according to a scheduling policy. Typically, you designate one queue to have strict priority, and the remaining queues are serviced in proportion to weights you configure.

For MLPPP, assign a single scheduler map to the link services IQ interface (lsq) and to each constituent link. The default schedulers for M Series and T Series routers, which assign 95, 0, 0, and 5 percent of the bandwidth for the transmission rate and buffer size of queues 0, 1, 2, and 3, respectively, are not adequate when you configure LFI or multiclass traffic. Therefore, for MLPPP, configure a single scheduler with nonzero percent transmission rates and buffer sizes for queues 0 through 3, and assign this scheduler to the link services IQ interface (lsq) and to each constituent link, as shown in Example: Configuring an LSQ Interface as an NxT1 Bundle Using MLPPP.

Note: For M320 and T Series routers, the default scheduler transmission rate and buffer size percentages for queues 0 through 7 are 95, 0, 0, 5, 0, 0, 0, and 0 percent.

If the bundle has more than one link, you must include the per-unit-scheduler statement at the [edit interfaces lsq-fpc/pic/port] hierarchy level:

[edit interfaces lsq-fpc/pic/port]
per-unit-scheduler;

To configure and apply the scheduling policy, include the following statements at the [edit class-of-service] hierarchy level:

[edit class-of-service]
interfaces {
    t1-fpc/pic/port {
        unit logical-unit-number {
            scheduler-map map-name;
        }
    }
}
forwarding-classes {
    queue queue-number class-name;
}
scheduler-maps {
    map-name {
        forwarding-class class-name scheduler scheduler-name;
    }
}
schedulers {
    scheduler-name {
        buffer-size (percent percentage | remainder | temporal microseconds);
        priority priority-level;
        transmit-rate (rate | percent percentage | remainder) <exact>;
    }
}

For link services IQ interfaces, a strict-high-priority queue might starve the other three queues because traffic in a strict-high priority queue is transmitted before any other queue is serviced. This implementation is unlike the standard Junos CoS implementation in which a strict-high-priority queue does round-robin with high-priority queues, as described in the Junos OS Class of Service Configuration Guide.

After the scheduler removes a packet from a queue, the action the software takes depends on whether the packet came from a multilink encapsulated queue (fragmented and sequenced) or a nonencapsulated queue (hashed with no fragmentation). Each queue can be designated as either multilink encapsulated or nonencapsulated, independently of the others. By default, traffic in all forwarding classes is multilink encapsulated. To configure packet fragmentation handling on a queue, include the fragmentation-maps statement at the [edit class-of-service] hierarchy level:

fragmentation-maps {
    map-name {
        forwarding-class class-name {
            fragment-threshold bytes;
            multilink-class number;
            no-fragmentation;
        }
    }
}

For NxT1 bundles using MLPPP, the byte-wise load balancing used for multilink-encapsulated queues is superior to the flow-wise load balancing used for nonencapsulated queues, all other considerations being equal. Therefore, we recommend that you configure all queues to be multilink encapsulated, which you do by including the fragment-threshold statement in the configuration. If you want traffic on a queue to be nonencapsulated rather than multilink encapsulated, include the no-fragmentation statement in the fragmentation map. Use the multilink-class statement to map a forwarding class to a multiclass MLPPP (MCML) class. For more information about MCML, see Configuring Multiclass MLPPP on LSQ Interfaces. For more information about fragmentation maps, see Configuring CoS Fragmentation by Forwarding Class on LSQ Interfaces.
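For example, the following fragmentation map sketch (the map name and threshold values are hypothetical) keeps expedited-forwarding traffic multilink encapsulated with a small fragment size and sends best-effort traffic nonencapsulated:

[edit class-of-service]
fragmentation-maps {
    fragmap-example {
        forwarding-class ef {
            fragment-threshold 128;
        }
        forwarding-class be {
            no-fragmentation;
        }
    }
}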

When a packet is removed from a multilink-encapsulated queue, the software gives the packet an MLPPP header. The MLPPP header contains a sequence number field, which is filled with the next available sequence number from a counter. The software then places the packet on one of the N different T1 links. The link is chosen on a packet-by-packet basis to balance the load across the various T1 links.

If the packet exceeds the minimum link MTU, or if a queue has a fragment threshold configured at the [edit class-of-service fragmentation-maps map-name forwarding-class class-name] hierarchy level, the software splits the packet into two or more fragments, which are assigned consecutive multilink sequence numbers. The outgoing link for each fragment is selected independently of all other fragments.

If you do not include the fragment-threshold statement in the fragmentation map, the fragmentation threshold you set at the [edit interfaces interface-name unit logical-unit-number] hierarchy level is the default for all forwarding classes. If you do not set a maximum fragment size anywhere in the configuration, packets are fragmented if they exceed the smallest MTU of all the links in the bundle.
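For example, in the following sketch (the interface name and value are hypothetical), the 128-byte threshold set on the bundle becomes the default fragment size for every forwarding class that has no fragment-threshold of its own in the fragmentation map:

[edit interfaces lsq-0/2/0 unit 0]
fragment-threshold 128;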

Even if you do not set a maximum fragment size anywhere in the configuration, you can configure the maximum received reconstructed unit (MRRU) by including the mrru statement at the [edit interfaces lsq-fpc/pic/port unit logical-unit-number] hierarchy level. The MRRU is similar to the MTU but is specific to link services interfaces. By default, the MRRU size is 1500 bytes, and you can configure it to be from 1500 through 4500 bytes. For more information, see Configuring MRRU on Multilink and Link Services Logical Interfaces.
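For example, to allow the bundle to reassemble packets of up to 2000 bytes (a hypothetical value within the 1500-to-4500-byte range), configure:

[edit interfaces lsq-0/2/0 unit 0]
mrru 2000;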

When a packet is removed from a nonencapsulated queue, it is transmitted with a plain PPP header. Because there is no MLPPP header, there is no sequence number information. Therefore, the software must take special measures to avoid packet reordering. To avoid packet reordering, the software places the packet on one of the N different T1 links. The link is determined by hashing the values in the header. For IP, the software computes the hash based on source address, destination address, and IP protocol. For MPLS, the software computes the hash based on up to five MPLS labels, or four MPLS labels and the IP header.

For UDP and TCP, the software computes the hash based on the source and destination ports as well as the source and destination IP addresses. This guarantees that all packets belonging to the same TCP or UDP flow always pass through the same T1 link and therefore cannot be reordered. However, it does not guarantee that the load on the various T1 links is balanced. If there are many flows, the load is usually balanced.

The N different T1 interfaces link to another router, which can be from Juniper Networks or another vendor. The router at the far end gathers packets from all the T1 links. If a packet has an MLPPP header, the sequence number field is used to put the packet back into sequence number order. If a packet has a plain PPP header, the software accepts it in the order in which it arrives and makes no attempt to reassemble or reorder it.

Example: Configuring an LSQ Interface as an NxT1 Bundle Using MLPPP

[edit chassis]
fpc 1 {
    pic 3 {
        adaptive-services {
            service-package layer-2;
        }
    }
}
[edit interfaces]
t1-0/0/0 {
    encapsulation ppp;
    unit 0 {
        family mlppp {
            bundle lsq-1/3/0.1; # This adds t1-0/0/0 to the specified bundle.
        }
    }
}
t1-0/0/1 {
    encapsulation ppp;
    unit 0 {
        family mlppp {
            bundle lsq-1/3/0.1;
        }
    }
}
lsq-1/3/0 {
    unit 1 { # This is the virtual link that concatenates multiple T1s.
        encapsulation multilink-ppp;
        drop-timeout 1000;
        fragment-threshold 128;
        link-layer-overhead 0.5;
        minimum-links 2;
        mrru 4500;
        short-sequence;
        family inet {
            address 10.2.3.4/24;
        }
    }
}
[edit interfaces]
lsq-1/3/0 {
    per-unit-scheduler;
}
[edit class-of-service]
interfaces {
    lsq-1/3/0 { # multilink PPP bundle
        unit 1 {
            scheduler-map sched-map1;
        }
    }
    t1-0/0/0 { # multilink PPP constituent link
        unit 0 {
            scheduler-map sched-map1;
        }
    }
    t1-0/0/1 { # multilink PPP constituent link
        unit 0 {
            scheduler-map sched-map1;
        }
    }
}
forwarding-classes {
    queue 0 be;
    queue 1 ef;
    queue 2 af;
    queue 3 nc;
}
scheduler-maps {
    sched-map1 {
        forwarding-class af scheduler af-scheduler;
        forwarding-class be scheduler be-scheduler;
        forwarding-class ef scheduler ef-scheduler;
        forwarding-class nc scheduler nc-scheduler;
    }
}
schedulers {
    af-scheduler {
        transmit-rate percent 30;
        buffer-size percent 30;
        priority low;
    }
    be-scheduler {
        transmit-rate percent 25;
        buffer-size percent 25;
        priority low;
    }
    ef-scheduler {
        transmit-rate percent 40;
        buffer-size percent 40;
        priority strict-high; # voice queue
    }
    nc-scheduler {
        transmit-rate percent 5;
        buffer-size percent 5;
        priority high;
    }
}
fragmentation-maps {
    fragmap-1 {
        forwarding-class be {
            fragment-threshold 180;
        }
        forwarding-class ef {
            fragment-threshold 100;
        }
    }
}
[edit class-of-service]
interfaces {
    lsq-1/3/0 {
        unit 1 {
            fragmentation-map fragmap-1;
        }
    }
}

Published: 2013-02-15
