Analyzing Network Efficiency in IPv6 Networks on MX Series Routers Using RPM Probes
Real-time performance monitoring (RPM) enables you to monitor network performance in real time and to assess and analyze network efficiency. Network performance is typically assessed in terms of the jitter, delay, and packet loss experienced on the network. RPM is a Junos OS service that enables a router to measure metrics such as round-trip delay and unanswered echo requests. To compute these parameters, RPM exchanges a set of probes with other IP hosts in the network. The probes are sent from a source node to the destination devices that you want to track. Data such as transit delay and jitter collected from these probes approximates the delay and jitter experienced by live traffic in the network. The RPM test results yield live traffic metrics such as round-trip time (RTT), positive and negative egress jitter, positive and negative ingress jitter, and positive and negative round-trip jitter. For each of these measurements, RPM calculates the minimum, maximum, average, peak-to-peak, standard deviation, and sum. You can also use RPM probes to verify the path between BGP neighbors.
Starting with Junos OS Release 16.1, the RPM client router (the router or switch that originates the RPM probes) can send probe packets to an RPM probe server (the device that receives the RPM probes) that has an IPv6 address. To specify the destination IPv6 address used for the probes, include the target (url ipv6-url | address ipv6-address) statement at the [edit services rpm probe owner test test-name] hierarchy level. The protocol family for IPv6 is named inet6.
[edit services rpm]
probe owner {
    test test-name {
        target (url ipv6-url | address ipv6-address);
    }
}
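For illustration only, the following set-command form fills in the skeleton above. The owner name ipv6-probes, the test name rtt-test, and the target address are placeholders; in a real network, substitute a reachable global unicast IPv6 address, because the 2001:db8::/32 documentation prefix appears in the table of unsupported prefixes later in this topic.

[edit]
user@host# set services rpm probe ipv6-probes test rtt-test target address 2001:db8::10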
To specify the IPv6 protocol-related settings and the source IPv6 address of the client from which the RPM probes are sent, include the inet6-options source-address ipv6-address statement at the [edit services rpm probe owner test test-name] hierarchy level. Both probe requests and probe responses are standard packets with the corresponding TCP, UDP, or ICMP headers over the IPv6 header; in the Routing Engine-based RPM implementation, no RPM header is appended to these packets.
[edit services rpm]
probe owner {
    test test-name {
        inet6-options source-address ipv6-address;
    }
}
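Putting the two statements together, a complete IPv6 test definition might look like the following sketch. The owner name, test name, and addresses are placeholders, and the probe-type, probe-count, and probe-interval values are illustrative assumptions added for completeness rather than values taken from this topic.

[edit services rpm]
probe ipv6-probes {
    test rtt-test {
        probe-type icmp6-ping;
        target address 2001:db8::10;
        probe-count 5;
        probe-interval 1;
        inet6-options source-address 2001:db8::1;
    }
}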
The output of the show services rpm probe-results owner owner-name test test-name and show services rpm history-results owner owner-name test test-name commands, which display the results of the most recent RPM probes and the results of historical RPM probes, respectively, has been enhanced to display the target address as an IPv6 address, along with other IPv6 information, for probes sent to IPv6 servers or destinations. The existing SNMP Get requests and traps for IPv6 apply to IPv6 probes. The target type field in the SNMP set operation contains the IPv6 source and destination addresses.
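For example, using the placeholder owner and test names from the earlier sketch, you would display the latest and historical results as follows (output is omitted here because the exact fields vary by probe type and release):

user@host> show services rpm probe-results owner ipv6-probes test rtt-test
user@host> show services rpm history-results owner ipv6-probes test rtt-test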
Guidelines for Configuring RPM Probes for IPv6 Destinations
Keep the following points in mind when you configure IPv6 addresses for RPM destinations or servers:
- Only Routing Engine-based RPM is supported for IPv6 targets, including VRF support, specification of the size of the data portion of ICMP probes, data pattern, and traffic class.
- You can configure probes with a combination of IPv4 and IPv6 tests. However, any single test can be either IPv4-based or IPv6-based at a given point in time. The operating system affects the accuracy of the measurements because the variability introduced by the general-purpose OS that performs the system processing is significantly larger than the time the packet spends on the wire; this condition can cause round-trip time (RTT) spikes even with a single test.
- Routing Engine-based RPM does not support one-way hardware-based timestamping. One-way measurements are not supported because timestamping is done only on the RPM client side.
- The maximum number of concurrent probes allowed (configured by including the probe-limit statement at the [edit services rpm] hierarchy level) is 1000. We recommend that you set the limit on concurrent probes to 10, because a higher number of concurrent probes can result in larger spikes (see the sketch after this list).
- The maximum number of tests you can configure is 1000.
- RPM cannot be configured on logical systems.
- The SNMP set operation is permitted only on ICMP probes; it is not supported for other probe types.
- The hardware-timestamp and one-way-hardware-timestamp statements at the [edit services rpm probe owner test test-name] hierarchy level are not supported for IPv6.
- You cannot specify the icmp-ping option (which sends ICMP echo requests to a target address) or the icmp-ping-timestamp option (which sends ICMP timestamp requests to a target address) with the probe-type statement at the [edit services rpm probe owner test test-name] hierarchy level.
- Some RPM problems can be resolved by restarting the SNMP remote operations process (rmopd) on the Routing Engine with the restart remote-operations command. If you need to disable RPM, delete or deactivate the rpm statement at the [edit services] hierarchy level.
- PIC-based, Packet Forwarding Engine-based, and lookup chip (LU)-based RPM implementations are not supported for IPv6.
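The following configuration-mode commands are a minimal sketch of two of the points above: the first caps concurrent probes at the recommended value of 10, and the second disables RPM by deactivating the rpm statement.

[edit]
user@host# set services rpm probe-limit 10
user@host# deactivate services rpm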
The following table describes the IPv6 special address prefixes that are not supported.
IPv6 Address Type             IPv6 Address Prefix
Node-Scoped Unicast           ::1/128 (loopback address); ::/128 (unspecified address)
IPv4-Mapped Addresses         ::FFFF:0:0/96
IPv4-Compatible Addresses     ::<ipv4-address>/96
Link-Scoped Unicast           fe80::/10
Unique-Local                  fc00::/7
Documentation Prefix          2001:db8::/32
6to4                          2002::/16
6bone                         5f00::/8
ORCHID                        2001:10::/28
Teredo                        2001::/32
Default Route                 ::/0
Multicast                     ff00::/8
The current scaling limits for IPv4 probes are a maximum of 500 concurrent probes and a maximum of 1000 configurable tests. The same scaling limits apply to IPv6 probes, and they also apply when IPv4-based and IPv6-based tests run at the same time.
The minimum probe rate is 1 probe per second, and the maximum interval between tests is 86,400 seconds. These scaling and performance numbers vary depending on whether a Two-Way Active Measurement Protocol (TWAMP) server and client are configured on the same router, because TWAMP server and client packet processing runs in the rmopd process and competes with RPM functionality in that same process. The RTT reported by IPv6-based RPM and by the ping utility should be equivalent for the same data size. In the Routing Engine-based RPM implementation, RTT spikes can be seen because of the various queuing delays introduced in the system; this behavior can occur even with a single test.
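As a sketch of how those two boundary values map onto configuration statements, the probe-interval and test-interval statements control the spacing between probes and the gap between tests; the values shown are simply the limits quoted above, not recommendations, and the owner and test names remain placeholders:

[edit services rpm probe ipv6-probes test rtt-test]
user@host# set probe-interval 1
user@host# set test-interval 86400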
Some TCP and UDP ports might be opened for communication between the RPM server and the RPM client. Therefore, we recommend that you use firewall filters and distributed denial-of-service (DDoS) protection to ensure that third-party attackers cannot exploit these open ports.
The packet types that can be used for the probes include the following (a sketch mapping them to probe-type keywords follows the list):
- ICMPv6 echo
- UDP echo
- UDP timestamp
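This topic does not list the matching configuration keywords, but on the assumption that they are the standard probe-type values, these packet types would map to icmp6-ping, udp-ping, and udp-ping-timestamp respectively. For example, with placeholder owner and test names and one probe type per test:

[edit]
user@host# set services rpm probe ipv6-probes test rtt-test probe-type icmp6-ping
user@host# set services rpm probe ipv6-probes test udp-test probe-type udp-ping
user@host# set services rpm probe ipv6-probes test udp-ts-test probe-type udp-ping-timestamp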