Inline Transmission Mode
Use this topic to understand what inline transmission is and how to enable it for maximum scaling for CFM, LFM, and performance monitoring functions.
Enabling Inline Transmission of Continuity Check Messages for Maximum Scaling
Scaling is the ability of a system to handle increasing amounts of work and to continue to function well. Scaling can refer to increasing capacity and the ability to handle an increasing workload, number of subscribers or sessions, hardware components, and so on. The continuity check protocol is used for fault detection within a maintenance association. The maintenance association end points (MEPs) send continuity check messages (CCMs) periodically. The time between transmissions of CCMs is known as the interval. The receiving MEP maintains a database of all MEPs in the maintenance association.
By default, CCMs are transmitted by the CPU of a line card, such as a Modular Port Concentrator (MPC). If the interval between CCM transmissions is short, or if the number of CCM sessions on a specific line card is high, we recommend that you delegate transmission of CCMs to the forwarding ASIC (that is, to the hardware) by enabling inline transmission of CCMs. Inline transmission of CCMs is also known as inline keepalives or Inline-KA. Inline transmission enables the system to handle more connectivity fault management (CFM) sessions per line card. By enabling inline transmission of CCMs, you can achieve maximum scaling of CCMs.
To enable inline transmission of CCMs, perform the following steps:
CCM statistics in inline transmission mode are not supported on PTX10001-36MR, PTX10004, PTX10008, and PTX10016 routers.
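A minimal configuration sketch follows. It uses the hardware-assisted-keepalives enable statement at the [edit protocols oam ethernet connectivity-fault-management performance-monitoring] hierarchy level described in this topic; verify the exact hierarchy and statement names against your Junos OS release before you use it.

[edit]
user@host# set protocols oam ethernet connectivity-fault-management performance-monitoring hardware-assisted-keepalives enable
user@host# commit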
When you enable inline transmission of CCMs by configuring the hardware-assisted-keepalives enable statement at the [edit protocols oam ethernet connectivity-fault-management performance-monitoring] hierarchy level, the statistics for existing CFM CCM sessions stop incrementing and are reset.
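After the commit, you can confirm that the counters for existing CFM CCM sessions have reset by displaying the CFM session details, for example with the following operational command (shown for illustration; the exact fields in the output vary by platform and release):

user@host> show oam ethernet connectivity-fault-management interfaces detail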
To disable inline transmission, use the hardware-assisted-keepalives disable statement. After disabling inline transmission, you must reboot the router for the change to take effect.
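For reference, a sketch of the disable sequence follows, under the same hierarchy assumption as the enable sketch above:

[edit]
user@host# set protocols oam ethernet connectivity-fault-management performance-monitoring hardware-assisted-keepalives disable
user@host# commit
user@host# exit
user@host> request system reboot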
Enabling Inline Transmission of Link Fault Management Keepalives for Maximum Scaling
By default, LFM keepalive packets are transmitted by the periodic packet management (ppm) process on the line card. You can delegate transmission of LFM keepalive packets to the forwarding ASIC (that is, to the hardware) by enabling inline transmission. Inline transmission of LFM keepalives is also known as inline keepalives or Inline-KA. By enabling inline transmission of LFM keepalive packets, you can achieve maximum scaling of keepalive packets, reduce the load on the ppm process, and support LFM in-service software upgrade (ISSU) for non-Juniper peers (for a keepalive interval of 1 second).
Do not enable or disable inline transmission of LFM when an LFM session is already established. To enable or disable inline transmission, you must first deactivate the established LFM session by using the deactivate command, and then reactivate the LFM session by using the activate command after enabling or disabling inline LFM.
Before you enable inline transmission of LFM keepalive packets, complete the following tasks:
Verify whether any existing or established LFM session is online and active. To verify, issue the following command:
user@host> show oam ethernet link-fault-management detail
Oct 18 02:04:17
  Interface: ge-0/0/0
    Status: Running, Discovery state: Active Send Local
    Transmit interval: 1000ms, PDU threshold: 3 frames, Hold time: 0ms
    Peer address: 00:00:00:00:00:00
    Flags:0x8
    OAM receive statistics:
      Information: 0, Event: 0, Variable request: 0, Variable response: 0
      Loopback control: 0, Organization specific: 0
    OAM flags receive statistics:
      Critical event: 0, Dying gasp: 0, Link fault: 0
    OAM transmit statistics:
      Information: 28, Event: 0, Variable request: 0, Variable response: 0
      Loopback control: 0, Organization specific: 0
    OAM received symbol error event information:
      Events: 0, Window: 0, Threshold: 0
      Errors in period: 0, Total errors: 0
    OAM received frame error event information:
      Events: 0, Window: 0, Threshold: 0
      Errors in period: 0, Total errors: 0
    OAM received frame period error event information:
      Events: 0, Window: 0, Threshold: 0
      Errors in period: 0, Total errors: 0
    OAM received frame seconds error event information:
      Events: 0, Window: 0, Threshold: 0
      Errors in period: 0, Total errors: 0
    OAM transmitted symbol error event information:
      Events: 0, Window: 0, Threshold: 1
      Errors in period: 0, Total errors: 0
    OAM current symbol error event information:
      Events: 0, Window: 0, Threshold: 1
      Errors in period: 0, Total errors: 0
    OAM transmitted frame error event information:
      Events: 0, Window: 0, Threshold: 1
      Errors in period: 0, Total errors: 0
    OAM current frame error event information:
      Events: 0, Window: 0, Threshold: 1
      Errors in period: 0, Total errors: 0
    Loopback tracking: Disabled, Loop status: Unknown
    Detect LOC: Disabled, LOC status: Unknown
The OAM transmit statistics (the Information counter, which continues to increment over time) show that the ppm process is handling the transmission of LFM keepalive packets.

Deactivate the LFM session so that you can enable inline LFM mode. To deactivate the LFM session, issue the following command:
[edit]
user@host# deactivate protocols oam ethernet link-fault-management interface interface-name
Commit the configuration. To commit the configuration, issue the following command:
[edit]
user@host# commit
To enable inline transmission of LFM keepalive packets, perform the following steps:
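As a guide, the following sketch shows one possible sequence after you have deactivated the LFM session as described above. It assumes that the hardware-assisted-keepalives statement for LFM is configured at the [edit protocols oam ethernet link-fault-management] hierarchy level; confirm the exact hierarchy for your Junos OS release. The interface-name value is a placeholder.

[edit]
user@host# set protocols oam ethernet link-fault-management hardware-assisted-keepalives
user@host# commit
user@host# activate protocols oam ethernet link-fault-management interface interface-name
user@host# commit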
To disable inline LFM, verify whether any existing established LFM session is online and active. Deactivate the LFM session and commit the configuration. Disable inline LFM by deleting the hardware-assisted-keepalives statement, and commit. Then, reactivate the LFM session and commit the configuration.
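A corresponding sketch for disabling inline LFM follows, under the same hierarchy assumption as the enable sketch above:

[edit]
user@host# deactivate protocols oam ethernet link-fault-management interface interface-name
user@host# commit
user@host# delete protocols oam ethernet link-fault-management hardware-assisted-keepalives
user@host# commit
user@host# activate protocols oam ethernet link-fault-management interface interface-name
user@host# commit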
Enabling Inline Mode of Performance Monitoring to Achieve Maximum Scaling
Performance monitoring is useful for studying the traffic pattern in a network over a period of time. It helps you identify network problems before network defects affect your services.
By default, performance monitoring packets are handled by the CPU of a line card, such as a Modular Port Concentrator (MPC). Enabling inline mode of performance monitoring delegates the processing of the protocol data units (PDUs) to the forwarding ASIC (that is, to the hardware). By enabling inline mode of performance monitoring, you reduce the load on the CPU of the line card, and you can configure an increased number of performance monitoring sessions and achieve maximum scaling for service OAM performance monitoring sessions. On MX Series routers, you can configure inline mode of performance monitoring only if the network services mode on the router is set to enhanced-ip and enhanced connectivity fault management (enhanced-cfm-mode) is configured.
By enabling inline mode of performance monitoring, you can achieve maximum scaling for performance monitoring sessions. To achieve this maximum scaling, you must also enable scaling of continuity check message (CCM) sessions by enabling inline transmission of CCMs. For more information on inline transmission of continuity check messages, see Enabling Inline Transmission of Continuity Check Messages for Maximum Scaling. To view the supported scaling values for CCM and PM, see Supported Inline CCM and Inline PM Scaling Values.
Inline mode of performance monitoring is supported only for proactive frame delay measurement (two-way delay measurement) and synthetic loss measurement (SLM) sessions. Performance monitoring functions configured by using the iterator profile (CFM) are referred to as proactive performance monitoring. Inline mode of performance monitoring is not supported for frame loss measurement using service frames (LM).
MPC3E (MX-MPC3E-3D) and MPC4E (MPC4E-3D-32XGE-SFPP and MPC4E-3D-2CGE-8XGE) line cards do not support inline mode of performance monitoring. User-defined Data TLVs are not supported if you have configured inline mode of performance monitoring. Also, only 12 history records per PM session are supported.
We recommend that you enable inline mode of performance monitoring before you configure performance monitoring sessions, because the change can interfere with existing performance monitoring sessions.
To enable inline mode of performance monitoring, perform the following steps:
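A minimal configuration sketch follows. It assumes that the enhanced-cfm-mode, hardware-assisted-pm, and hardware-assisted-keepalives statements are configured at the [edit protocols oam ethernet connectivity-fault-management performance-monitoring] hierarchy level; verify the statement hierarchies for your Junos OS release. Changing the network services mode on MX Series routers typically requires a system reboot.

[edit]
user@host# set chassis network-services enhanced-ip
user@host# set protocols oam ethernet connectivity-fault-management performance-monitoring enhanced-cfm-mode
user@host# set protocols oam ethernet connectivity-fault-management performance-monitoring hardware-assisted-keepalives enable
user@host# set protocols oam ethernet connectivity-fault-management performance-monitoring hardware-assisted-pm
user@host# commit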
Supported Inline CCM and Inline PM Scaling Values
This topic lists the scaling values for inline mode of performance monitoring and inline transmission of continuity check messages. The scaling values are based on the different cycle-time interval values. Each table lists the maximum number of connectivity fault management (CFM) sessions and performance monitoring (PM) sessions per line card and per chassis when you configure inline CCM, enhanced CFM, and enhanced PM by using the hardware-assisted-keepalives, enhanced-cfm-mode, and hardware-assisted-pm options.
The scaling values do not account for the load from other protocols in the system, so the actual scaling values that you can achieve per line card and per chassis vary depending on the other protocols configured and their scale. We recommend that you configure DDoS protection for CFM and limit the number of CFM packets sent to the CPU of the line card to 3000. Limiting the number of packets safeguards the CPU in scaled CFM configurations during bursts of CFM protocol events.
Table 1 lists the maximum number of connectivity fault management (CFM) sessions and performance monitoring (PM) sessions per line card and per chassis when you configure both the CCM interval and the PM interval as 1 second.
| CFM Line Card Scale | PM Line Card Scale | CFM Chassis Scale | PM Chassis Scale |
|---|---|---|---|
| 4000 | 4500 | 16000 | 16000 |
| 6000 | 3750 | 16000 | 16000 |
| 7000 | 3375 | 16000 | 16000 |
| 8000 | 3000 | 16000 | 16000 |
Table 2 lists the maximum number of connectivity fault management (CFM) sessions and performance monitoring (PM) sessions per line card and per chassis when you configure the CCM interval as 1 second and the PM interval as 100 milliseconds.
| CFM Line Card Scale | PM Line Card Scale | CFM Chassis Scale | PM Chassis Scale |
|---|---|---|---|
| 4000 | 450 | 12000 | 4000 |
| 6000 | 375 | 12000 | 4000 |
| 7000 | 337 | 12000 | 4000 |
| 8000 | 300 | 12000 | 4000 |
Table 3 lists the maximum number of connectivity fault management (CFM) sessions and performance monitoring (PM) sessions per line card and per chassis when you configure the CCM interval as 100 milliseconds and the PM interval as 1 second.
| CFM Line Card Scale | PM Line Card Scale | CFM Chassis Scale | PM Chassis Scale |
|---|---|---|---|
| 4000 | 3000 | 8000 | 6000 |
| 3000 | 3750 | 8000 | 6000 |
| 2000 | 4500 | 8000 | 6000 |
| 1000 | 4500 | 8000 | 6000 |
Table 4 lists the maximum number of connectivity fault management (CFM) sessions and performance monitoring (PM) sessions per line card and per chassis when you configure both the CCM interval and the PM interval as 100 milliseconds.
| CFM Line Card Scale | PM Line Card Scale | CFM Chassis Scale | PM Chassis Scale |
|---|---|---|---|
| 4000 | 300 | 8000 | 3000 |
| 3000 | 375 | 8000 | 3000 |
| 2000 | 450 | 8000 | 3000 |
| 1000 | 450 | 8000 | 3000 |