Configuring MLPPP
Understanding MLPPP
Multilink Point-to-Point Protocol (MLPPP) enables you to bundle multiple PPP links into a single multilink bundle. Multilink bundles provide additional bandwidth, load balancing, and redundancy by aggregating low-speed links, such as T1 and E1 links.
You configure multilink bundles as logical units on the link services interface, for example, lsq-0/0/0.0 and lsq-0/0/0.1. After creating a multilink bundle, you add constituent links to it. The constituent links are the low-speed physical links to be aggregated; a minimal configuration sketch follows the rules below.
The following rules apply when you add constituent links to a multilink bundle:
On each multilink bundle, add only interfaces of the same type. For example, you can add either T1 or E1, but not both.
Only interfaces with a PPP encapsulation can be added to an MLPPP bundle.
If an interface is a member of an existing bundle and you add it to a new bundle, the interface is automatically deleted from the existing bundle and added to the new bundle.
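The following is a minimal sketch of this workflow, assuming an lsq-0/0/0 link services interface, a t1-1/0/0 member link, and a placeholder bundle address (all three are illustrative, not taken from a specific platform):

[edit interfaces]
lsq-0/0/0 {
    unit 0 {
        encapsulation multilink-ppp;
        family inet {
            address 10.0.0.1/24;        # placeholder bundle address
        }
    }
}
t1-1/0/0 {
    encapsulation ppp;                  # constituent links must use PPP encapsulation
    unit 0 {
        family mlppp {
            bundle lsq-0/0/0.0;         # adds this T1 link to the bundle
        }
    }
}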
With MLPPP bundles, you can use PPP Challenge Handshake Authentication Protocol (CHAP) and Password Authentication Protocol (PAP) for secure transmission over the PPP interfaces. For more information, see Configuring the PPP Challenge Handshake Authentication Protocol and Configuring the PPP Password Authentication Protocol.
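As a hedged sketch, CHAP on a constituent PPP link might be configured as follows; the access profile name, peer name, secret, and local identity are all placeholders, and the exact placement (member link versus bundle) is covered in the referenced topics:

[edit access]
profile ppp-clients {                       # hypothetical profile name
    client peer-A chap-secret "$9$secret";  # hypothetical peer and secret
}

[edit interfaces t1-1/0/0 unit 0]
ppp-options {
    chap {
        access-profile ppp-clients;
        local-name "local-A";               # hypothetical local identity
    }
}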
MLPPP Support on ACX Series Routers
ACX Series routers support MLPPP encapsulation. MLPPP is supported on ACX1000, ACX2000, and ACX2100 routers, and on ACX4000 routers with the Channelized OC3/STM1 (Multi-Rate) MIC with SFP and the 16-port Channelized E1/T1 Circuit Emulation MIC.
The following table shows the maximum number of multilink bundles you can create on ACX Series routers:
ACX Platform | Maximum Bundles | Maximum Links | Maximum Links Per Bundle
---|---|---|---
ACX2000, ACX2100 | 16 | 16 | 16
ACX4000 with ACX-MIC-16CHE1-T1-CE | 16 | 16 | 16
ACX4000 with ACX-MIC-4COC3-1COC12CE | 50 | 336 | 16
ACX1000 | 8 | 8 | 8
Guidelines for Configuring MLPPP With LSQ Interfaces on ACX Series Routers
You can configure MLPPP bundle interfaces with T1/E1 member links. Traffic transmitted over the MLPPP bundle interface is spread over the member links in a round-robin manner. If the packet size exceeds the fragmentation size configured on the MLPPP interface, the packet is fragmented, and the fragments are also sent over the member links in round-robin order. PPP control packets received on the interface are terminated on the router. The fragmentation size is configured at the MLPPP bundle level and is applied to all packets on the bundle, regardless of the multilink class.
Multiclass MLPPP segregates the multilink protocol packets into multiple classes. ACX routers support a maximum of four classes, with one queue associated with each class. A packet can be classified into one of the classes and takes the queue associated with that class. The packets inside a queue are served in first-in, first-out (FIFO) order.
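For example, to allow all four classes to be negotiated on a bundle (the interface name is a placeholder; the same statement appears in the examples later in this topic):

[edit interfaces lsq-1/1/0 unit 0]
encapsulation multilink-ppp;
multilink-max-classes 4;    # ACX routers support a maximum of four classes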
Multiclass MLPPP is required to provide preferential treatment to high-priority, delay-sensitive traffic. Smaller, delay-sensitive real-time frames are classified so that they end up in a higher-priority queue. If a higher-priority packet is enqueued while a lower-priority packet is being fragmented, the lower-priority fragmentation is suspended, the higher-priority packet is fragmented and enqueued for transmission, and then the lower-priority packet fragmentation is resumed.
Traditional LSQ interfaces (anchored on PICs) are supported for combining T1/E1 interfaces in an MLPPP bundle interface. Inline services (si-) interfaces and inline LSQ interfaces are not supported in MLPPP bundles. On ACX routers, MLPPP bundling is performed on the TDM MICs, and the traditional LSQ model is the most effective mechanism. You can configure channelized OC interfaces (t1-x/y/z:n:m, e1-x/y/z:n) as members of an MLPPP bundle interface, as shown in the sketch after this paragraph. A maximum of 16 member links per bundle is supported. The MPLS, ISO, and inet address families are supported; the ISO address family is supported only for IS-IS. You can configure MLPPP bundles in the network-to-network interface (NNI) direction of an Ethernet pseudowire. Interleaving using multiclass MLPPP is supported.
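A channelized T1 is added to a bundle in the same way as a standalone T1; in this sketch the channelized interface name and bundle are placeholders:

[edit interfaces]
t1-1/0/0:1:1 {                  # hypothetical T1 channel carved from a channelized OC3
    encapsulation ppp;
    unit 0 {
        family mlppp {
            bundle lsq-1/1/0.0;
        }
    }
}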
Keep the following points in mind when you configure MLPPP bundles on ACX routers:
The physical links must be of the same type and bandwidth.
Round-robin packet distribution is performed over the member links.
To add a T1 or E1 member link to the MLPPP bundle as link services LSQ interfaces, include the bundle statement at the [edit interfaces t1-fpc/pic/port unit logical-unit-number family mlppp] hierarchy level:

[edit interfaces t1-fpc/pic/port unit logical-unit-number family mlppp]
bundle lsq-fpc/pic/port.logical-unit-number;
To configure the link services LSQ interface properties, include the following statements at the [edit interfaces lsq-fpc/pic/port unit logical-unit-number] hierarchy level:

[edit interfaces lsq-fpc/pic/port unit logical-unit-number]
encapsulation multilink-ppp;
fragment-threshold bytes;
minimum-links number;
mrru bytes;
short-sequence;
family inet {
    address address;
}
You can configure the address family as MPLS for the LSQ interfaces in an MLPPP bundle.
PPP control protocol support depends on the processing of the PPP application. For MLPPP bundle interfaces, the IPv4 Internet Protocol Control Protocol (IPCP), PPP Challenge Handshake Authentication Protocol (CHAP), and Password Authentication Protocol (PAP) applications are supported for PPP.
Drop timeout configuration is not applicable to ACX routers.
The member links across MICs cannot be bundled. Only physical interfaces on the same MIC can be bundled.
Fractional T1 and E1 interfaces are not supported; selective time slots of a T1/E1 cannot be used, and full T1/E1 interfaces must be used. CoS is supported only for full T1 and E1 interfaces.
The detailed statistics displayed depend on the parameters supported by the hardware. The counters that are supported by the hardware are displayed with appropriate values in the output of the show interfaces lsq-fpc/pic/port detail command. In the following sample output, fields displayed with a value of 0 denote fields that are not supported for computation by ACX routers. In the lsq- interface statistics, non-fragment statistics of the bundle are not accounted; non-fragments are typically treated as single-fragment frames and counted in the fragment statistics.
user@host# show interfaces lsq-1/1/0 detail
Physical interface: lsq-1/1/0, Enabled, Physical link is Up
  Interface index: 162, SNMP ifIndex: 550, Generation: 165
  Description: LSQ-interface
  Link-level type: LinkService, MTU: 1504
  Device flags   : Present Running
  Interface flags: Point-To-Point SNMP-Traps Internal: 0x0
  Last flapped   : 2015-06-22 19:01:47 PDT (2d 04:56 ago)
  Statistics last cleared: 2015-06-23 05:01:49 PDT (1d 18:56 ago)
  Traffic statistics:
   Input  bytes  :   108824   208896 bps
   Output bytes  :    90185   174080 bps
   Input  packets:     1075      256 pps
   Output packets:     1061      256 pps
   IPv6 transit statistics:
    Input  bytes  :        0
    Output bytes  :        0
    Input  packets:        0
    Output packets:        0
  Frame exceptions:
    Oversized frames                     0
    Errored input frames                 0
    Input on disabled link/bundle        0
    Output for disabled link/bundle      0
    Queuing drops                        0
  Buffering exceptions:
    Packet data buffer overflow          0
    Fragment data buffer overflow        0
  Assembly exceptions:
    Fragment timeout                     0
    Missing sequence number              0
    Out-of-order sequence number         0
    Out-of-range sequence number         0
  Hardware errors (sticky):
    Data memory error                    0
    Control memory error                 0

  Logical interface lsq-1/1/0.0 (Index 326) (SNMP ifIndex 599) (Generation 177)
    Flags: Up Point-To-Point SNMP-Traps 0x0 Encapsulation: Multilink-PPP
    Last flapped: 2015-06-24 23:57:34 PDT (00:00:51 ago)
    Bandwidth: 6144kbps
    Bundle links information:
      Active bundle links              4
      Removed bundle links             0
      Disabled bundle links            0
    Bundle options:
      MRRU                             2000
      Remote MRRU                      2000
      Drop timer period                0
      Inner PPP Protocol field compression enabled
      Sequence number format           short (12 bits)
      Fragmentation threshold          450
      Links needed to sustain bundle   3
      Multilink classes                4
      Link layer overhead              4.0 %
    Bundle status:
      Received sequence number         0x0
      Transmit sequence number         0x0
      Packet drops                     0 (0 bytes)
      Fragment drops                   0 (0 bytes)
      MRRU exceeded                    0
      Fragment timeout                 0
      Missing sequence number          0
      Out-of-order sequence number     0
      Out-of-range sequence number     0
      Packet data buffer overflow      0
      Fragment data buffer overflow    0
    Statistics          Frames   fps    Bytes      bps
      Bundle:
        Multilink:
          Input :         1076   256   484200   921600
          Output:         1061   256   477450   921600
        Network:
          Input :         2182   256   201812   208896
          Output:         2168   256   192029   174080
        IPV6 Transit Statistics   Packets   Bytes
        Network:
          Input :            0     0
          Output:            0     0
      Multilink class 0:
        Multilink:
          Input :         1075   256   483750   921600
          Output:         1061   256   477450   921600
        Network:
          Input :         1061   256   477450   921600
          Output:         1075   256   483750   921600
      Multilink class 1:
        Multilink:
          Input :            0     0        0        0
          Output:            0     0        0        0
        Network:
          Input :            0     0        0        0
          Output:            0     0        0        0
      Multilink class 2:
        Multilink:
          Input :            0     0        0        0
          Output:            0     0        0        0
        Network:
          Input :            0     0        0        0
          Output:            0     0        0        0
      Multilink class 3:
        Multilink:
          Input :            0     0        0        0
          Output:            0     0        0        0
        Network:
          Input :            0     0        0        0
          Output:            0     0        0        0
      Link:
        t1-1/1/1.0
          Up time: 00:00:51
          Input :          280    64   126000   230400
          Output:          266    64   119700   230400
        t1-1/1/2.0
          Up time: 00:00:51
          Input :          266    64   119700   230400
          Output:          265    64   119250   230400
        t1-1/1/3.0
          Up time: 00:00:51
          Input :          265    64   119250   230400
          Output:          265    64   119250   230400
        t1-1/1/4.0
          Up time: 00:00:51
          Input :          265    64   119250   230400
          Output:          265    64   119250   230400
    Multilink detail statistics:
      Bundle:
        Fragments:
          Input :         1076   256   484200   921600
          Output:         1061   256   477450   921600
        Non-fragments:
          Input :            0     0        0        0
          Output:            0     0        0        0
        LFI:
          Input :            0     0        0        0
          Output:            0     0        0        0
      Multilink class 0:
        Fragments:
          Input :         1076   256   484200   921600
          Output:         1061   256   477450   921600
        Non-fragments:
          Input :            0     0        0        0
          Output:            0     0        0        0
      Multilink class 1:
        Fragments:
          Input :            0     0        0        0
          Output:            0     0        0        0
        Non-fragments:
          Input :            0     0        0        0
          Output:            0     0        0        0
      Multilink class 2:
        Fragments:
          Input :            0     0        0        0
          Output:            0     0        0        0
        Non-fragments:
          Input :            0     0        0        0
          Output:            0     0        0        0
      Multilink class 3:
        Fragments:
          Input :            0     0        0        0
          Output:            0     0        0        0
        Non-fragments:
          Input :            0     0        0        0
          Output:            0     0        0        0
    NCP state: inet: Opened, inet6: Not-configured, iso: Opened, mpls: Opened
    Protocol inet, MTU: 1500, Generation: 232, Route table: 0
      Flags: Sendbcast-pkt-to-re
      Addresses, Flags: Is-Preferred Is-Primary
        Destination: 9.1.9/24, Local: 9.1.9.18, Broadcast: Unspecified, Generation: 212
    Protocol iso, MTU: 1500, Generation: 233, Route table: 0
      Flags: Is-Primary
    Protocol mpls, MTU: 1488, Maximum labels: 3, Generation: 234, Route table: 0
      Flags: Is-Primary
To modify the frame check sequence (FCS) in the T1 options or E1 options of an MLPPP bundle member link, you must first remove the member link from the bundle, by deactivating the link or removing its bundle membership from the configuration, modify the FCS, and then add the link back to the bundle. If you are configuring the FCS for the first time on a member link, specify the value before the link is added to the bundle.
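A sketch of this sequence, assuming member link t1-1/1/1 and a 32-bit FCS (the interface name and FCS value are placeholders):

[edit]
user@host# deactivate interfaces t1-1/1/1 unit 0 family mlppp   # take the link out of the bundle
user@host# set interfaces t1-1/1/1 t1-options fcs 32            # modify the FCS
user@host# commit
user@host# activate interfaces t1-1/1/1 unit 0 family mlppp     # return the link to the bundle
user@host# commit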
The following MLPPP functionalities are not supported on ACX Series routers:
Member links across MICs.
Fragmentation per class (only configurable at bundle level).
IPv6 address family header compression (no address and control field compression [ACFC] or protocol field compression [PFC]).
Prefix elision as defined in RFC 2686, The Multi-Class Extension to Multi-Link PPP.
Link fragmentation and interleaving (LFI). A functionality that resembles LFI can be achieved using multiclass MLPPP (RFC 2686), which interleaves high-priority packets between lower-priority packets. This methodology ensures that delay-sensitive packets are sent as soon as they arrive. Whereas LFI-classified packets are sent to a specific member link as plain PPP packets, the ACX implementation of interleaving uses multilink PPP (also referred to as PPP Multilink, MLP, and MP) headers, and fragments are sent on all member links in a round-robin manner.
PPP over MLPPP bundle interfaces.
Example: Configuring an MLPPP Bundle on ACX Series
Requirements
This example requires an ACX Series router.
Overview
This example shows how to configure an MLPPP bundle on an ACX Series router.
Configuration
CLI Quick Configuration
[edit]
user@host# show interfaces
lsq-1/1/0 {
    description LSQ-interface;
    per-unit-scheduler;
    unit 0 {
        encapsulation multilink-ppp;
        mrru 2000;
        short-sequence;
        fragment-threshold 450;
        minimum-links 3;
        multilink-max-classes 4;
        family inet {
            address 9.1.9.18/24;
        }
        family iso;
        family mpls;
    }
}
ct1-1/1/1 {
    enable;
    no-partition interface-type t1;
}
t1-1/1/1 {
    encapsulation ppp;
    unit 0 {
        family mlppp {
            bundle lsq-1/1/0.0;
        }
    }
}
ct1-1/1/2 {
    enable;
    no-partition interface-type t1;
}
t1-1/1/2 {
    encapsulation ppp;
    unit 0 {
        family mlppp {
            bundle lsq-1/1/0.0;
        }
    }
}
ct1-1/1/3 {
    enable;
    no-partition interface-type t1;
}
t1-1/1/3 {
    encapsulation ppp;
    unit 0 {
        family mlppp {
            bundle lsq-1/1/0.0;
        }
    }
}
ct1-1/1/4 {
    enable;
    no-partition interface-type t1;
}
t1-1/1/4 {
    encapsulation ppp;
    unit 0 {
        family mlppp {
            bundle lsq-1/1/0.0;
        }
    }
}
Procedure
Step-by-Step Procedure
Configuring LSQ Interfaces as NxT1 or NxE1 Bundles Using MLPPP on ACX Series
LSQ interfaces support both T1 and E1 physical interfaces. These instructions apply to T1 interfaces, but the configuration for E1 interfaces is similar.
To configure an NxT1 bundle using MLPPP, you aggregate N different T1 links into a bundle. The NxT1 bundle is called a logical interface, because it can represent, for example, a routing adjacency. To aggregate T1 links into an MLPPP bundle, include the bundle statement at the [edit interfaces t1-fpc/pic/port unit logical-unit-number family mlppp] hierarchy level:
[edit interfaces t1-fpc/pic/port unit logical-unit-number family mlppp]
bundle lsq-fpc/pic/port.logical-unit-number;
To configure the LSQ interface properties, include the following statements at the [edit interfaces lsq-fpc/pic/port unit logical-unit-number] hierarchy level:

[edit interfaces lsq-fpc/pic/port unit logical-unit-number]
drop-timeout milliseconds;
encapsulation multilink-ppp;
fragment-threshold bytes;
link-layer-overhead percent;
minimum-links number;
mrru bytes;
short-sequence;
family inet {
    address address;
}
ACX Series routers do not support drop-timeout and link-layer-overhead properties.
The logical link services IQ interface represents the MLPPP bundle. For the MLPPP bundle, there are four associated queues on M Series routers and eight associated queues on M320 and T Series routers. A scheduler removes packets from the queues according to a scheduling policy. Typically, you designate one queue to have strict priority, and the remaining queues are serviced in proportion to weights you configure.
For MLPPP, assign a single scheduler map to the link services IQ interface (lsq) and to each constituent link. The default schedulers for M Series and T Series routers, which assign 95, 0, 0, and 5 percent bandwidth for the transmission rate and buffer size of queues 0, 1, 2, and 3, are not adequate when you configure LFI or multiclass traffic. Therefore, for MLPPP, you should configure a single scheduler with nonzero percent transmission rates and buffer sizes for queues 0 through 3, and assign this scheduler to the link services IQ interface (lsq) and to each constituent link.
For M320 and T Series routers, the default scheduler transmission rate and buffer size percentages for queues 0 through 7 are 95, 0, 0, 5, 0, 0, 0, and 0 percent.
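A minimal sketch of such a scheduler, assuming the default four forwarding classes and placeholder scheduler, map, and interface names (the even 25-percent split is illustrative only):

[edit class-of-service]
schedulers {
    s-mlppp {
        transmit-rate percent 25;       # nonzero rate for every queue
        buffer-size percent 25;         # nonzero buffer for every queue
    }
}
scheduler-maps {
    sm-mlppp {
        forwarding-class best-effort scheduler s-mlppp;
        forwarding-class expedited-forwarding scheduler s-mlppp;
        forwarding-class assured-forwarding scheduler s-mlppp;
        forwarding-class network-control scheduler s-mlppp;
    }
}
interfaces {
    lsq-1/1/0 {
        unit 0 {
            scheduler-map sm-mlppp;     # applied to the bundle
        }
    }
    t1-1/1/1 {
        unit 0 {
            scheduler-map sm-mlppp;     # and to each constituent link
        }
    }
}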
If the bundle has more than one link, you must include the per-unit-scheduler statement at the [edit interfaces lsq-fpc/pic/port] hierarchy level:

[edit interfaces lsq-fpc/pic/port]
per-unit-scheduler;
To configure and apply the scheduling policy, include the following statements at the [edit class-of-service] hierarchy level:

[edit class-of-service]
interfaces {
    t1-fpc/pic/port unit logical-unit-number {
        scheduler-map map-name;
    }
}
forwarding-classes {
    queue queue-number class-name;
}
scheduler-maps {
    map-name {
        forwarding-class class-name scheduler scheduler-name;
    }
}
schedulers {
    scheduler-name {
        buffer-size (percent percentage | remainder | temporal microseconds);
        priority priority-level;
        transmit-rate (rate | remainder) <exact>;
    }
}
For link services IQ interfaces, a strict-high-priority queue might starve the other three queues because traffic in a strict-high priority queue is transmitted before any other queue is serviced. This implementation is unlike the standard Junos CoS implementation in which a strict-high-priority queue does round-robin with high-priority queues, as described in the Junos OS Class of Service User Guide for Routing Devices.
After the scheduler removes a packet from a queue, a certain action is taken. The action depends on whether the packet came from a multilink encapsulated queue (fragmented and sequenced) or a nonencapsulated queue (hashed with no fragmentation). Each queue can be designated as either multilink encapsulated or nonencapsulated, independently of the other. By default, traffic in all forwarding classes is multilink encapsulated. To configure packet fragmentation handling on a queue, include the fragmentation-maps statement at the [edit class-of-service] hierarchy level:

fragmentation-maps {
    map-name {
        forwarding-class class-name {
            multilink-class number;
        }
    }
}
For NxT1 bundles using MLPPP, the byte-wise load balancing used in multilink-encapsulated queues is superior to the flow-wise load balancing used in nonencapsulated queues, all other considerations being equal. Therefore, we recommend that you configure all queues to be multilink encapsulated. You do this by including the fragment-threshold statement in the configuration. You use the multilink-class statement to map a forwarding class into a multiclass MLPPP class. For more information about fragmentation maps, see Configuring CoS Fragmentation by Forwarding Class on LSQ Interfaces.
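For instance, a sketch of a fragmentation map that keeps two queues multilink encapsulated by giving each forwarding class a fragment threshold (the map name and 128-byte threshold are placeholders; recall that on ACX routers the bundle-level fragmentation size is used instead):

[edit class-of-service]
fragmentation-maps {
    frag-all-ml {
        forwarding-class best-effort {
            fragment-threshold 128;     # multilink encapsulated
        }
        forwarding-class expedited-forwarding {
            fragment-threshold 128;
        }
    }
}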
When a packet is removed from a multilink-encapsulated queue, the software gives the packet an MLPPP header. The MLPPP header contains a sequence number field, which is filled with the next available sequence number from a counter. The software then places the packet on one of the N different T1 links. The link is chosen on a packet-by-packet basis to balance the load across the various T1 links.
If the packet exceeds the minimum link MTU, or if a queue has a fragment threshold configured at the [edit class-of-service fragmentation-maps map-name forwarding-class class-name] hierarchy level, the software splits the packet into two or more fragments, which are assigned consecutive multilink sequence numbers. The outgoing link for each fragment is selected independently of all other fragments.
If you do not include the fragment-threshold statement in the fragmentation map, the fragmentation threshold you set at the [edit interfaces interface-name unit logical-unit-number] hierarchy level is the default for all forwarding classes. If you do not set a maximum fragment size anywhere in the configuration, packets are fragmented if they exceed the smallest MTU of all the links in the bundle.
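For example, setting a bundle-wide default (the interface name and 128-byte value are placeholders):

[edit interfaces lsq-1/1/0 unit 0]
fragment-threshold 128;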
Even if you do not set a maximum fragment size anywhere in the configuration, you can configure the maximum received reconstructed unit (MRRU) by including the mrru statement at the [edit interfaces lsq-fpc/pic/port unit logical-unit-number] hierarchy level. The MRRU is similar to the MTU, but is specific to link services interfaces. By default, the MRRU size is 1500 bytes, and you can configure it to be from 1500 through 4500 bytes. For more information, see Configuring MRRU on Multilink and Link Services Logical Interfaces.
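For example (the 2000-byte value matches the sample configurations in this topic):

[edit interfaces lsq-1/1/0 unit 0]
mrru 2000;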
When a packet is removed from a nonencapsulated queue, it is transmitted with a plain PPP header. Because there is no MLPPP header, there is no sequence number information. Therefore, the software must take special measures to avoid packet reordering. To avoid packet reordering, the software places the packet on one of the N different T1 links. The link is determined by hashing the values in the header. For IP, the software computes the hash based on source address, destination address, and IP protocol. For MPLS, the software computes the hash based on up to five MPLS labels, or four MPLS labels and the IP header.
For UDP and TCP the software computes the hash based on the source and destination ports, as well as source and destination IP addresses. This guarantees that all packets belonging to the same TCP/UDP flow always pass through the same T1 link, and therefore cannot be reordered. However, it does not guarantee that the load on the various T1 links is balanced. If there are many flows, the load is usually balanced.
The N different T1 interfaces link to another router, which can be from Juniper Networks or another vendor. The router at the far end gathers packets from all the T1 links. If a packet has an MLPPP header, the sequence number field is used to put the packet back into sequence number order. If the packet has a plain PPP header, the software accepts the packet in the order in which it arrives and makes no attempt to reassemble or reorder the packet.
Example: Configuring an LSQ Interface as an NxT1 Bundle Using MLPPP
[edit]
interfaces {
    lsq-1/1/0 {
        per-unit-scheduler;
        unit 0 {
            encapsulation multilink-ppp;
            mrru 2000;
            multilink-max-classes 4;
            family inet {
                address 20.1.1.1/24;
            }
            family mpls;
        }
    }
    ct1-1/1/4 {
        enable;
        no-partition interface-type t1;
    }
    t1-1/1/4 {
        encapsulation ppp;
        unit 0 {
            family mlppp {
                bundle lsq-1/1/0.0;
            }
        }
    }
    ct1-1/1/5 {
        enable;
        no-partition interface-type t1;
    }
    t1-1/1/5 {
        encapsulation ppp;
        unit 0 {
            family mlppp {
                bundle lsq-1/1/0.0;
            }
        }
    }
}
class-of-service {
    classifiers {
        inet-precedence myIPv4 {
            forwarding-class best-effort {
                loss-priority low code-points 000;
            }
            forwarding-class expedited-forwarding {
                loss-priority low code-points 001;
            }
            forwarding-class assured-forwarding {
                loss-priority low code-points 011;
            }
            forwarding-class network-control {
                loss-priority low code-points 111;
            }
        }
    }
    drop-profiles {
        dp1 {
            fill-level 50 drop-probability 0;
            fill-level 100 drop-probability 100;
        }
        dp2 {
            fill-level 50 drop-probability 0;
            fill-level 100 drop-probability 100;
        }
    }
    interfaces {
        lsq-1/1/0 {
            unit 0 {
                scheduler-map sm;
                fragmentation-map frag;
                rewrite-rules {
                    inet-precedence myRRIPv4;
                }
            }
        }
    }
    rewrite-rules {
        inet-precedence myRRIPv4 {
            forwarding-class best-effort {
                loss-priority low code-point 111;
            }
            forwarding-class expedited-forwarding {
                loss-priority low code-point 011;
            }
            forwarding-class assured-forwarding {
                loss-priority low code-point 001;
            }
            forwarding-class network-control {
                loss-priority low code-point 000;
            }
        }
    }
    scheduler-maps {
        sm {
            forwarding-class best-effort scheduler new;
            forwarding-class network-control scheduler new_nc;
            forwarding-class assured-forwarding scheduler new_af;
            forwarding-class expedited-forwarding scheduler new_ef;
        }
    }
    fragmentation-maps {
        frag {
            forwarding-class {
                best-effort {
                    multilink-class 3;
                }
                network-control {
                    multilink-class 0;
                }
                assured-forwarding {
                    multilink-class 2;
                }
                expedited-forwarding {
                    multilink-class 1;
                }
            }
        }
    }
    schedulers {
        new {
            transmit-rate 32k;
            shaping-rate 3m;
            priority low;
            drop-profile-map loss-priority low protocol any drop-profile dp1;
            drop-profile-map loss-priority high protocol any drop-profile dp2;
        }
        new_nc {
            transmit-rate 32k;
            shaping-rate 3m;
            priority strict-high;
        }
        new_af {
            transmit-rate 32k;
            shaping-rate 3m;
            priority medium-low;
        }
        new_ef {
            transmit-rate 32k;
            shaping-rate 3m;
            priority medium-high;
        }
    }
}
Understanding Multiclass MLPPP
Multiclass MLPPP makes it possible to have multiple classes of latency-sensitive traffic that are carried over a single multilink bundle with bulk traffic. In effect, multiclass MLPPP allows different classes of traffic to have different latency guarantees. With multiclass MLPPP, you can map each forwarding class into a separate multilink class, thus preserving priority and latency guarantees. Multiclass MLPPP is defined in RFC 2686, The Multi-Class Extension to Multi-Link PPP. You can only configure multiclass MLPPP for link services intelligent queuing (LSQ) interfaces (lsq-) with MLPPP encapsulation.
Multiclass MLPPP greatly simplifies packet ordering issues that occur when multiple links are used. Without multiclass MLPPP, all voice traffic belonging to a single flow is hashed to a single link to avoid packet ordering issues. With multiclass MLPPP, you can assign voice traffic to a high-priority class, and you can use multiple links. For more information about voice services support on LSQ interfaces, see Configuring Services Interfaces for Voice Services.
If you do not configure multiclass MLPPP, fragments from different classes cannot be interleaved. All fragments for a single packet must be sent before the fragments from another packet are sent. Nonfragmented packets can be interleaved between fragments of another packet to reduce latency seen by nonfragmented packets. In effect, latency-sensitive traffic is encapsulated as regular PPP traffic, and bulk traffic is encapsulated as multilink traffic. This model works as long as there is a single class of latency-sensitive traffic, and there is no high-priority traffic that takes precedence over latency-sensitive traffic.
This approach to link fragmentation interleaving (LFI), used on the Link Services PIC, supports only two levels of traffic priority, which is not sufficient to carry the four to eight forwarding classes that are supported by M Series and T Series routers. For more information about the Link Services PIC support of LFI, see Configuring Delay-Sensitive Packet Interleaving on Link Services Logical Interfaces.
ACX Series routers do not support LFI.
Configuring both LFI and multiclass MLPPP on the same bundle is not necessary, nor is it supported, because multiclass MLPPP represents a superset of functionality. When you configure multiclass MLPPP, LFI is automatically enabled.
The Junos OS implementation of multiclass MLPPP does not support compression of common header bytes, which is referred to in RFC 2686 as “prefix elision.”
Configuring Multiclass MLPPP on LSQ Interfaces
To configure multiclass MLPPP on an LSQ interface, you must specify how many multilink classes should be negotiated when a link joins the bundle, and you must specify the mapping of a forwarding class into a multiclass MLPPP class.
Considerations for link services IQ (lsq) interfaces on ACX Series routers:
The maximum number of multilink classes to be negotiated when a link joins the bundle, which you specify by using the multilink-max-classes statement at the [edit interfaces interface-name unit logical-unit-number] hierarchy level, is limited to 4.
Fragmentation size is not specified under the fragmentation map; instead, the fragmentation size configured on the bundle is used.
Compressed Real-Time Transport Protocol (RTP) is not supported.
HDLC address and control field compression (ACFC) and PPP protocol field compression (PFC) are not supported.
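Putting these considerations together, a sketch of the forwarding-class mapping for multiclass MLPPP on an ACX router might look like the following; the map and interface names are placeholders, the mapping mirrors the earlier NxT1 example, and multilink-max-classes is set on the bundle as shown earlier in this topic:

[edit class-of-service]
fragmentation-maps {
    frag {
        forwarding-class network-control {
            multilink-class 0;
        }
        forwarding-class expedited-forwarding {
            multilink-class 1;
        }
    }
}
interfaces {
    lsq-1/1/0 {
        unit 0 {
            fragmentation-map frag;    # no per-class fragment size on ACX; the bundle value applies
        }
    }
}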