Configuring RPM Probes on M, MX and T Series Routers and EX Series Switches
The probe owner and test name of an RPM probe together represent a single RPM configuration instance. When you specify the test name, you can also configure the test parameters.
To configure the probe owner, test name, and test parameters, include the probe statement at the [edit services rpm] hierarchy level:
[edit services rpm]
probe owner {
    delegate-probes;
    test test-name {
        data-fill data;
        data-size size;
        destination-interface interface-name;
        destination-port port;
        dscp-code-points dscp-bits;
        hardware-timestamp;
        history-size size;
        inet6-options;
        moving-average-size number;
        one-way-hardware-timestamp;
        probe-count count;
        probe-interval seconds;
        probe-type type;
        routing-instance instance-name;
        rpm-scale {
            destination {
                interface interface-name.logical-unit-number;
                subunit-cnt subunit-cnt;
            }
            source {
                address-base ipv4-address-base;
                count ipv4-count;
                step ipv4-step;
            }
            source-inet6 {
                address-base ipv6-address-base;
                count ipv6-count;
                step ipv6-step;
            }
            target {
                address-base ipv4-address-base;
                count ipv4-count;
                step ipv4-step;
            }
            target-inet6 {
                address-base ipv6-address-base;
                count ipv6-count;
                step ipv6-step;
            }
            tests-count tests-count;
        }
        source-address address;
        target (url url | address address);
        test-interval interval;
        thresholds thresholds;
        traps traps;
        ttl hop-count;
    }
}
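For orientation, the following is a minimal sketch of a working probe configuration; the owner name customerA, the test name icmp-test, and the target address 192.0.2.1 are illustrative assumptions, not values taken from this documentation:

[edit services rpm]
probe customerA {                      # hypothetical probe owner (up to 32 characters)
    test icmp-test {                   # hypothetical test name (up to 32 characters)
        probe-type icmp-ping;          # send ICMP echo requests
        target address 192.0.2.1;      # illustrative target host
        probe-count 15;                # 15 probes per test
        probe-interval 1;              # 1 second between probes
        test-interval 60;              # 60 seconds between tests
    }
}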
Keep the following points in mind when you configure RPM clients and RPM servers:
RPM is not supported on logical systems.
Starting in Junos OS Release 17.3R1, PIC-based and Routing Engine-based RPM are supported for IPsec tunnels and GRE tunnels if you are using MS-MPCs or MS-MICs. Packet Forwarding Engine-based RPM is not supported for IPsec tunnels. Support for RPM on IPsec tunnels enables service-level agreement (SLA) monitoring for traffic transported in IPsec tunnels.
Starting in Junos OS Release 17.3R1, you can configure the generation of IPv4 icmp-ping and icmp-ping-timestamp RPM probes on an MS-MPC or MS-MIC, which increases the number of probes generated to up to 1 million per second on every service-NPU, compared to the number of probes that are generated on the Packet Forwarding Engine. Starting in Junos OS Release 18.1R1, you can also configure the generation of icmp6-ping RPM probes on an MS-MPC or MS-MIC. To configure the generation of RPM probes on an MS-MPC or MS-MIC (a configuration sketch follows these points):
Include the destination-interface interface-name.logical-unit-number statement at the [edit services rpm probe owner test test-name] hierarchy level, and include the delegate-probes statement at the [edit services rpm probe owner] hierarchy level. The interface-name.logical-unit-number specifies a logical interface on an MS-MPC or MS-MIC slot, PIC, and port that has a valid IP address defined on it (for example, ms-1/2/1.1). The interface cannot be an aggregated multiservices (ams-) interface.
Include the rpm client-delegate-probes and the family (inet | inet6) address address statements at the [edit interfaces interface-name unit logical-unit-number] hierarchy level. The interface-name and logical-unit-number must match the interface-name.logical-unit-number that you used for the destination-interface statement.
For RPM probes configured on an MS-MPC or MS-MIC, you cannot configure the routing-instance statement at the [edit services rpm probe owner test test-name] hierarchy level, and you cannot configure both IPv4 and IPv6 probes within the same test.
Starting in Junos OS Release 18.1R1, you can use additional filters to limit the output of the show services rpm probe-results and show services rpm history-results commands for RPM probes generated on an MS-MPC or MS-MIC.
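The following is a minimal sketch of the delegate-probes configuration described above; the interface ms-1/2/1.1, the addresses, and the owner and test names are illustrative assumptions:

[edit interfaces ms-1/2/1]
unit 1 {
    rpm client-delegate-probes;        # mark this logical interface for delegated RPM probes
    family inet {
        address 10.10.10.1/32;         # illustrative address; the interface needs a valid IP address
    }
}
[edit services rpm]
probe customerA {
    delegate-probes;                   # generate the probes on the MS-MPC or MS-MIC
    test icmp-test {
        probe-type icmp-ping;
        target address 192.0.2.1;      # illustrative target host
        destination-interface ms-1/2/1.1;   # must match the interface configured above
    }
}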
Starting in Junos OS Release 17.4R1, you can optimize the CLI configuration for RPM tests for IPv4. Starting in Junos OS Release 18.2R1, you can also optimize the CLI configuration for RPM tests for IPv6. This optimization allows the use of minimal RPM configuration statements to generate multiple tests (up to 100K tests) with pre-defined, reserved RPM test names. This optimization can be configured for tests with probes that are generated by either the Packet Forwarding Engine or by an MS-MPC or MS-MIC. Tests are generated for multiple combinations of source and target addresses, which are incremented based on your configuration.
The maximum number of concurrent RPM probes supported varies by Junos OS release:
Junos OS releases earlier than 17.3R1—500
Junos OS Release 17.3R1 and later—2000 for ICMP and ICMP timestamp probe types; for probes of other types (UDP and TCP), the limit is 500
Junos OS Release 17.3R1 and later (with delegate-probes configured)—1 million per service-NPU
Note: One MS-MIC contains one service-NPU, and one MS-MPC contains four service-NPUs.
With delegate-probes configured, the RPM probes are compliant with RFC 792 and RFC 4443. They can therefore be used to monitor any IP device that complies with either RFC and can respond to icmp-timestamp or icmp6-ping packets.
Tests are first generated for all the source addresses with the initial target address, then tests are generated for all the source addresses with the next available target address, and so on. You can also configure a group that contains global values for a particular probe owner, and apply the group to the probe owner.
To generate multiple RPM tests, configure the following:
[edit services rpm probe owner]
apply-groups group-name;
test test-name {
    rpm-scale {
        destination {
            interface interface-name.logical-unit-number;
            subunit-cnt subunit-cnt;
        }
        source {
            address-base ipv4-address-base;
            count ipv4-count;
            step ipv4-step;
        }
        source-inet6 {
            address-base ipv6-address-base;
            count ipv6-count;
            step ipv6-step;
        }
        target {
            address-base ipv4-address-base;
            count ipv4-count;
            step ipv4-step;
        }
        target-inet6 {
            address-base ipv6-address-base;
            count ipv6-count;
            step ipv6-step;
        }
        tests-count tests-count;
    }
}
The options are:
ipv4-address-base—The IPv4 source or target address that is incremented to generate the addresses used in the RPM tests.
ipv6-address-base—The IPv6 source or target address that is incremented to generate the addresses used in the RPM tests.
ipv4-step—The amount to increment the IPv4 source or target address for each generated RPM test.
ipv6-step—The amount to increment the IPv6 source or target address for each generated RPM test.
ipv4-count—The maximum number of IPv4 source or target addresses to use for the generated RPM tests.
ipv6-count—The maximum number of IPv6 source or target addresses to use for the generated RPM tests.
interface-name.logical-unit-number—The services interface that is generating RPM probes and the logical unit number that is used for the first test that is generated.
subunit-cnt—The maximum number of logical units used by the services interface in the generated tests. The first generated test uses the logical unit specified in the interface-name.logical-unit-number option, and each successive test increments the logical unit number by one. Once the maximum number of logical units has been used, the next generated test cycles back to the logical unit that was used in the first test.
tests-count—The maximum number of RPM tests to generate. This number must be less than or equal to the number of generated source addresses multiplied by the number of generated target addresses.
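As an illustration, the following sketch (with assumed, illustrative values for the owner, test name, interface, and addresses) generates up to eight tests: the four source addresses 10.1.1.1 through 10.1.1.4 are paired first with target 192.0.2.1 and then with target 192.0.2.11:

[edit services rpm probe scale-owner]
test scale-test {
    rpm-scale {
        destination {
            interface ms-1/0/0.1;       # assumed services interface generating the probes
            subunit-cnt 4;              # cycle through logical units .1 to .4
        }
        source {
            address-base 10.1.1.1/32;   # first source address
            count 4;                    # four source addresses
            step 1;                     # increment by 1: 10.1.1.1 through 10.1.1.4
        }
        target {
            address-base 192.0.2.1/32;  # first target address
            count 2;                    # two target addresses
            step 10;                    # increment by 10: 192.0.2.1 and 192.0.2.11
        }
        tests-count 8;                  # 8 <= 4 sources x 2 targets
    }
}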
To configure a group with global values for a particular probe owner:
[edit groups group-name]
services {
    rpm {
        probe <*> {
            test {
                data-fill data;
                data-size size;
                dscp-code-points dscp-bits;
                history-size size;
                moving-average-size number;
                probe-count count;
                probe-type type;
                test-interval interval;
                thresholds thresholds;
            }
        }
    }
}
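For example, a group named rpm-defaults (an assumed name) could carry the shared test parameters and be applied to a probe owner with apply-groups, as in the skeleton shown earlier:

[edit groups rpm-defaults]
services {
    rpm {
        probe <*> {                    # wildcard: applies to any probe owner that inherits the group
            test {
                probe-type icmp-ping;
                probe-count 15;
                probe-interval 1;
                test-interval 30;
                history-size 100;
            }
        }
    }
}
[edit services rpm probe scale-owner]
apply-groups rpm-defaults;             # inherit the shared values defined in the group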
To specify a probe owner, include the probe statement at the [edit services rpm] hierarchy level. The probe owner identifier can be up to 32 characters in length.
To specify a test name, include the test statement at the [edit services rpm probe owner] hierarchy level. The test name identifier can be up to 32 characters in length. A test represents the range of probes over which the standard deviation, average, and jitter are calculated.
To specify the contents of the data portion of Internet Control Message Protocol (ICMP) probes, include the data-fill statement at the [edit services rpm probe owner] hierarchy level. The value can be a hexadecimal value. The data-fill statement is not valid with the http-get or http-metadata-get probe types.
To specify the size of the data portion of ICMP probes, include the data-size statement at the [edit services rpm probe owner] hierarchy level. The size can be from 0 through 65400 bytes, and the default size is 0. The data-size statement is not valid with the http-get or http-metadata-get probe types.
Note: If you configure the hardware timestamp feature (see Configuring RPM Timestamping on MX, M, T, and PTX Series Routers and EX Series Switches), the data-size default value is 32 bytes, and 32 is the minimum value for explicit configuration. The UDP timestamp probe type is an exception; it requires a minimum data size of 44 bytes. The data-size value must be at least 100 bytes smaller than the default MTU of the RPM client interface.
On M Series and T Series routers, you configure the destination-interface statement to enable hardware timestamping of RPM probe packets. You specify an sp- interface to have the AS or Multiservices PIC add the hardware timestamps; for more information, see Configuring RPM Timestamping on MX, M, T, and PTX Series Routers and EX Series Switches. You can also include the one-way-hardware-timestamp statement to enable one-way delay and jitter measurements.
To specify the User Datagram Protocol (UDP) port or Transmission Control Protocol (TCP) port to which the probe is sent, include the destination-port statement at the [edit services rpm probe owner test test-name] hierarchy level. The destination-port statement is used only for the UDP and TCP probe types. The value can be 7 or from 49160 through 65535.
When you configure either probe-type udp-ping or probe-type udp-ping-timestamp along with hardware timestamping, the value for the destination-port can be only 7. A constraint check prevents you from configuring any other value for the destination port in this case. This constraint does not apply when you are using one-way hardware timestamping.
To specify the value of the Differentiated Services (DiffServ) field within the IP header, include the dscp-code-points statement at the [edit services rpm probe owner test test-name] hierarchy level. The DiffServ code point (DSCP) bits value can be set to a valid 6-bit pattern; for example, 001111. It can also be set using an alias configured at the [edit class-of-service code-point-aliases dscp] hierarchy level. The default is 000000.
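For example, a sketch that sets the DSCP bits through an alias; the alias name af11-alias, the owner, and the test name here are assumptions for illustration:

[edit class-of-service code-point-aliases dscp]
af11-alias 001010;                      # define an alias for the 6-bit DSCP pattern 001010
[edit services rpm probe customerA test icmp-test]
dscp-code-points af11-alias;            # reference the alias instead of the raw bit pattern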
To specify the number of stored history entries, include the history-size statement at the [edit services rpm probe owner test test-name] hierarchy level. Specify a value from 0 through 512. The default is 50.
To specify a number of samples for making statistical calculations, include the moving-average-size statement at the [edit services rpm probe owner test test-name] hierarchy level. Specify a value from 0 through 255.
To specify the number of probes within a test, include the probe-count statement at the [edit services rpm probe owner test test-name] hierarchy level. Specify a value from 1 through 15.
To specify the time to wait between sending packets, include the probe-interval statement at the [edit services rpm probe owner test test-name] hierarchy level. Specify a value from 1 through 255 seconds.
To specify the packet and protocol contents of the probe, include the probe-type statement at the [edit services rpm probe owner test test-name] hierarchy level. The following probe types are supported:
http-get—Sends a Hypertext Transfer Protocol (HTTP) get request to a target URL.
http-metadata-get—Sends an HTTP get request for metadata to a target URL.
icmp-ping—Sends ICMP echo requests to a target address.
icmp-ping-timestamp—Sends ICMP timestamp requests to a target address.
tcp-ping—Sends TCP packets to a target.
udp-ping—Sends UDP packets to a target.
udp-ping-timestamp—Sends UDP timestamp requests to a target address.
The following probe types support hardware timestamping of probe packets: icmp-ping, icmp-ping-timestamp, udp-ping, and udp-ping-timestamp.
Starting in Junos OS Release 17.3R3, delegate probes are distributed evenly across a 3-second interval to avoid packet bursts in the network caused by real-time performance monitoring (RPM). The ramp-up time for RPM delegate tests is increased to 60 seconds so that RPM syslog messages can be processed; this reduces the chance of multiple tests starting and ending at the same time, which could otherwise restrict event-processing.
Note: Some probe types require additional parameters to be configured. For example, when you specify the tcp-ping or udp-ping option, you must configure the destination port using the destination-port statement. The udp-ping-timestamp option requires a minimum data size of 12; any smaller data size results in a commit error. The minimum data size for TCP probe packets is 1.
When you configure either probe-type udp-ping or probe-type udp-ping-timestamp along with the one-way-hardware-timestamp statement, the value for the destination-port can be only 7. A constraint check prevents you from configuring any other value for the destination port in this case.
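As a small illustration of a probe type that needs an extra parameter, the following sketch (with an assumed owner, test name, target address, and port) configures a TCP probe and its required destination port:

[edit services rpm probe customerA]
test tcp-test {
    probe-type tcp-ping;               # TCP probes require a destination port
    target address 192.0.2.1;          # illustrative target host
    destination-port 50000;            # must be 7 or 49160 through 65535
}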
To specify the routing instance used by ICMP probes, include the routing-instance statement at the [edit services rpm probe owner test test-name] hierarchy level. The default routing instance is the Internet routing table, inet.0.
To specify the source IP address used for ICMP probes, include the source-address statement at the [edit services rpm probe owner test test-name] hierarchy level. If the source IP address is not one of the router's assigned addresses, the packet uses the outgoing interface's address as its source.
Starting in Junos OS Release 16.1R1, to specify the source IPv6 address to be used for RPM probes that are sent from the RPM client (the device that originates the RPM packets) to the RPM server (the device that receives the RPM probes), include the inet6-options source-address ipv6-address statement at the [edit services rpm probe owner test test-name] hierarchy level. If the source IPv6 address is not one of the router's or switch's assigned addresses, the packet uses the outgoing interface's address as its source.
To specify the destination address used for the probes, include the target statement at the [edit services rpm probe owner test test-name] hierarchy level. For HTTP probe types, specify a fully formed URL that includes http:// in the URL address. For all other probe types, specify an IP version 4 (IPv4) or IP version 6 (IPv6) address for the target host (IPv6 support starts in Junos OS Release 16.1R1).
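Two short sketches of target usage follow, one HTTP and one IPv6; the URL, addresses, owner, and test names are assumptions for illustration, and the IPv6 test uses the icmp6-ping probes mentioned earlier:

[edit services rpm probe customerA]
test http-test {
    probe-type http-get;
    target url "http://www.example.com";   # HTTP probe types take a fully formed URL
}
test ipv6-test {
    probe-type icmp6-ping;
    inet6-options {
        source-address 2001:db8::1;        # illustrative source IPv6 address
    }
    target address 2001:db8::2;            # illustrative IPv6 target host
}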
To specify the time to wait between tests, include the test-interval statement at the [edit services rpm probe owner test test-name] hierarchy level. Specify a value from 0 through 86400 seconds. A value of 0 seconds causes the RPM test to stop after one iteration. The default value is 1.
To specify thresholds used for the probes, include the thresholds statement at the [edit services rpm probe owner test test-name] hierarchy level. A system log message is generated when a configured threshold is exceeded. Likewise, an SNMP trap (if configured) is generated when a threshold is exceeded. The following options are supported:
egress-time—Measures maximum source-to-destination time per probe.
ingress-time—Measures maximum destination-to-source time per probe.
jitter-egress—Measures maximum source-to-destination jitter per test.
jitter-ingress—Measures maximum destination-to-source jitter per test.
jitter-rtt—Measures maximum jitter per test, from 0 through 60000000 microseconds.
rtt—Measures maximum round-trip time per probe, in microseconds.
std-dev-egress—Measures maximum source-to-destination standard deviation per test.
std-dev-ingress—Measures maximum destination-to-source standard deviation per test.
std-dev-rtt—Measures maximum standard deviation per test, in microseconds.
successive-loss—Measures successive probe loss count, indicating probe failure.
total-loss—Measures total probe loss count indicating test failure, from 0 through 15. The default for this threshold is 1.
Traps are sent if a configured threshold is met or exceeded. To set the trap bits that generate traps, include the traps statement at the [edit services rpm probe owner test test-name] hierarchy level. The following options are supported:
egress-jitter-exceeded—Generates traps when the egress-time jitter threshold is met or exceeded.
egress-std-dev-exceeded—Generates traps when the egress-time standard deviation threshold is met or exceeded.
egress-time-exceeded—Generates traps when the maximum egress time threshold is met or exceeded.
ingress-jitter-exceeded—Generates traps when the ingress-time jitter threshold is met or exceeded.
ingress-std-dev-exceeded—Generates traps when the ingress-time standard deviation threshold is met or exceeded.
ingress-time-exceeded—Generates traps when the maximum ingress time threshold is met or exceeded.
jitter-exceeded—Generates traps when the round-trip-time jitter threshold is met or exceeded.
probe-failure—Generates traps when a successive probe loss threshold is crossed.
rtt-exceeded—Generates traps when the maximum round-trip time threshold is met or exceeded.
std-dev-exceeded—Generates traps when the round-trip-time standard deviation threshold is met or exceeded.
test-completion—Generates traps when a test is completed.
test-failure—Generates traps when the total probe loss threshold is met or exceeded.
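To tie these together, the following sketch sets a round-trip-time threshold and the corresponding traps; the owner, test name, and threshold values are illustrative assumptions:

[edit services rpm probe customerA test icmp-test]
thresholds {
    rtt 100000;                         # flag probes whose round-trip time exceeds 100,000 microseconds
    total-loss 3;                       # flag the test if 3 or more probes are lost
}
traps [ rtt-exceeded test-failure test-completion ];   # send SNMP traps for these events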
Change History Table
Feature support is determined by the platform and release you are using. Use Feature Explorer to determine if a feature is supported on your platform.
Release 18.1R1: You can configure the generation of icmp6-ping RPM probes on an MS-MPC or MS-MIC.
Release 17.3R3: Delegate probes are distributed evenly across a 3-second interval to avoid packet bursts in the network, and the ramp-up time for RPM delegate tests is increased to 60 seconds so that RPM syslog messages can be processed without restricting event-processing.
Release 17.3R1: You can configure the generation of IPv4 icmp-ping and icmp-ping-timestamp RPM probes on an MS-MPC or MS-MIC, which increases the number of probes generated to up to 1 million per second on every service-NPU compared to the number of probes that are generated on the Packet Forwarding Engine.
Release 16.1R1: You can specify the source IPv6 address for RPM probes by including the inet6-options source-address ipv6-address statement at the [edit services rpm probe owner test test-name] hierarchy level.