Configuring Junos Capture Vision
Configuring the Capture Group
A capture group defines a profile of Junos Capture Vision configuration information. The static configuration includes information about control sources, content destinations, and notification destinations. Dynamic configuration is added through interaction with control sources using a control protocol.
To configure a capture group, include the capture-group statement at the [edit services dynamic-flow-capture] hierarchy level:
capture-group client-name {
    content-destination identifier {
        address address;
        hard-limit bandwidth;
        hard-limit-target bandwidth;
        soft-limit bandwidth;
        soft-limit-clear bandwidth;
        ttl hops;
    }
    control-source identifier {
        allowed-destinations [ destinations ];
        minimum-priority value;
        no-syslog;
        notification-targets address port port-number;
        service-port port-number;
        shared-key value;
        source-addresses [ addresses ];
    }
    duplicates-dropped-periodicity seconds;
    input-packet-rate-threshold rate;
    interfaces interface-name;
    max-duplicates number;
    pic-memory-threshold percentage percentage;
}
To specify the capture-group, assign it a unique client-name that associates the information with the requesting control sources.
Configuring the Content Destination
You must specify a destination for the packets that match DFC PIC filter criteria. To configure the content destination, include the content-destination statement at the [edit services dynamic-flow-capture capture-group client-name] hierarchy level:
content-destination identifier {
    address address;
    hard-limit bandwidth;
    hard-limit-target bandwidth;
    soft-limit bandwidth;
    soft-limit-clear bandwidth;
    ttl hops;
}
Assign the content-destination a unique identifier. You must also specify its IP address, and you can optionally include additional settings:

- address—The DFC PIC interface appends an IP header with this destination address on the matched packet (with its own IP header and contents intact) and sends it out to the content destination.
- ttl—The time-to-live (TTL) value for the IP-IP header. By default, the TTL value is 255. Its range is 0 through 255.
- Congestion thresholds—You can specify per-content-destination bandwidth limits that control the amount of traffic produced by the DFC PIC during periods of congestion. The thresholds are arranged in two pairs: hard-limit and hard-limit-target, and soft-limit and soft-limit-clear. You can optionally include one or both of these paired settings. All four settings are 10-second average bandwidth values in bits per second. Typically, soft-limit-clear < soft-limit < hard-limit-target < hard-limit.

When the content bandwidth exceeds the soft-limit setting:

- A congestion notification message is sent to each control source of the criteria that point to this content destination.
- If the control source is configured for syslog, a system log message is generated.
- A latch is set, indicating that the control sources have been notified. No additional notification messages are sent until the latch is cleared, when the bandwidth falls below the soft-limit-clear value.

When the bandwidth exceeds the hard-limit value:

- Junos Capture Vision begins deleting criteria until the bandwidth falls below the hard-limit-target value.
- For each criterion deleted, a CongestionDelete notification is sent to the control source for that criterion.
- If the control source is configured for syslog, a log message is generated.

The application evaluates criteria for deletion using the following data:

- Priority—Lower-priority criteria are purged first, after adjusting for the control source minimum priority.
- Bandwidth—Higher-bandwidth criteria are purged first.
- Timestamp—More recent criteria are purged first.
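Putting the typical threshold ordering together, a minimal content-destination sketch might look like the following (the identifier cd1, the address, and all bandwidth values are hypothetical placeholders, expressed in bits per second):

```
content-destination cd1 {
    address 10.36.100.1;            # destination for matched packets
    ttl 128;                        # TTL for the appended IP-IP header
    soft-limit-clear 40000000;      # notification latch clears below 40 Mbps
    soft-limit 50000000;            # congestion notifications begin above 50 Mbps
    hard-limit-target 80000000;     # criteria deleted until bandwidth falls below 80 Mbps
    hard-limit 100000000;           # criteria deletion begins above 100 Mbps
}
```

The values follow the typical ordering soft-limit-clear < soft-limit < hard-limit-target < hard-limit.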
Configuring the Control Source
You configure information about the control source, including allowed source addresses and destinations and authentication key values. To configure the control source information, include the control-source statement at the [edit services dynamic-flow-capture capture-group client-name] hierarchy level:
control-source identifier {
    allowed-destinations [ destination-identifiers ];
    minimum-priority value;
    no-syslog;
    notification-targets address port port-number;
    service-port port-number;
    shared-key value;
    source-addresses [ addresses ];
}
Assign the control-source statement a unique identifier. You can also include values for the following statements:

- allowed-destinations—One or more content destination identifiers to which this control source can request that matched data be sent in its control protocol requests. If you do not specify any content destinations, all available destinations are allowed.
- minimum-priority—Value assigned to the control source that is added to the priority of the criteria in the DTCP ADD request to determine the total priority for the criteria. The lower the value, the higher the priority. By default, minimum-priority has a value of 0; the allowed range is 0 through 254.
- notification-targets—One or more destinations to which the DFC PIC interface can log information about control protocol-related events and other events such as PIC bootup messages. You configure each notification-target entry with an IP address value and a User Datagram Protocol (UDP) port number.
- service-port—UDP port number to which the control protocol requests are directed. Control protocol requests that are not directed to this port are discarded by DFC PIC interfaces.
- shared-key—20-byte authentication key value shared between the control source and the DFC PIC monitoring platform.
- source-addresses—One or more allowed IP addresses from which the control source can send control protocol requests to the DFC PIC monitoring platform. These are /32 addresses.
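For illustration, a control-source entry combining these statements might look like the following sketch (the identifiers cd1 and cs1, the addresses, the ports, and the key are hypothetical placeholders):

```
control-source cs1 {
    allowed-destinations [ cd1 ];          # may direct matched data only to cd1
    minimum-priority 10;                   # added to the priority in DTCP ADD requests
    notification-targets 192.0.2.10 port 2000;
    service-port 2400;                     # control protocol requests must use this UDP port
    shared-key "abcdefghij0123456789";     # 20-byte authentication key
    source-addresses [ 192.0.2.10 ];       # /32 host addresses allowed to send requests
}
```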
Configuring the DFC PIC Interface
You specify the interface that interacts with the control sources configured in the same capture group. A Monitoring Services III PIC can belong to only one capture group, and you can configure only one PIC for each group.
To configure a DFC PIC interface, include the interfaces statement at the [edit services dynamic-flow-capture capture-group client-name] hierarchy level:
interfaces interface-name;
You specify DFC interfaces using the dfc- identifier at the [edit interfaces] hierarchy level. You must specify three logical units on each DFC PIC interface, numbered 0, 1, and 2. You cannot configure any other logical interfaces.

- unit 0 processes control protocol requests and responses.
- unit 1 receives monitored data.
- unit 2 transmits the matched packets to the destination address.
The following example shows the configuration necessary to set up a DFC PIC interface and intercept both IPv4 and IPv6 traffic:
[edit interfaces dfc-0/0/0]
unit 0 {
    family inet {
        filter {
            output high;    # Firewall filter to route control packets
                            # through 'network-control' forwarding class.
                            # Control packets are loss sensitive.
        }
        address 10.1.0.0/32 {               # DFC PIC address
            destination 10.36.100.1;        # DFC PIC address used by
                                            # the control source to correspond
                                            # with the monitoring platform
        }
    }
}
unit 1 {            # receive data packets on this logical interface
    family inet;    # receive IPv4 traffic for interception
    family inet6;   # receive IPv6 traffic for interception
}
unit 2 {            # send out copies of matched packets on this logical interface
    family inet;
}
In addition, you must configure Junos Capture Vision to run on the DFC PIC in the correct chassis location. The following example shows this configuration at the [edit chassis] hierarchy level:
fpc 0 {
    pic 0 {
        monitoring-services application dynamic-flow-capture;
    }
}
Configuring the Firewall Filter
You can specify the firewall filter to route control packets through the network-control forwarding class. The control packets are loss sensitive. To configure the firewall filter, include the following statements at the [edit] hierarchy level:
firewall {
    family inet {
        filter high {
            term all {
                then forwarding-class network-control;
            }
        }
    }
}
Configuring System Logging
By default, control protocol activity is logged as a separate system log facility, dfc. To modify the file name or the level at which control protocol activity is recorded, include the following statements at the [edit system syslog] hierarchy level:
file dfc.log {
    dfc any;
}
To cancel logging, include the no-syslog statement at the [edit services dynamic-flow-capture capture-group client-name control-source identifier] hierarchy level.
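For example, a sketch of the no-syslog placement (the capture-group and control-source identifiers cg1 and cs1 are hypothetical placeholders):

```
[edit services dynamic-flow-capture]
capture-group cg1 {
    control-source cs1 {
        no-syslog;
    }
}
```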
A Junos Capture Vision (dfc-) interface supports up to 10,000 filter criteria. When more than 10,000 filters are added to the interface, the filters are accepted, but system log messages are generated indicating that the filter is full.
Configuring Tracing Options for Junos Capture Vision Events
You can enable tracing options for Junos Capture Vision events by including the traceoptions statement at the [edit services dynamic-flow-capture] hierarchy level.
When you include the traceoptions configuration, you can also specify the trace file name, the maximum number of trace files, the maximum size of each trace file, and whether the trace file can be read by all users.
To enable tracing options for Junos Capture Vision events, include the following configuration at the [edit services dynamic-flow-capture] hierarchy level:
traceoptions {
    file filename <files number> <size size> <world-readable | non-world-readable>;
}
To disable tracing for Junos Capture Vision events, delete the traceoptions configuration from the [edit services dynamic-flow-capture] hierarchy level.
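As an illustrative sketch, the following traceoptions configuration names the trace file and limits it to five files of 1 MB each (the file name and limits are hypothetical choices, not defaults from this document):

```
[edit services dynamic-flow-capture]
traceoptions {
    file dfc-trace files 5 size 1m world-readable;
}
```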
In Junos OS releases earlier than 9.2R1, tracing of Junos Capture Vision was enabled by default, and the logs were saved to the /var/log/dfcd directory.
Configuring Thresholds
You can optionally specify threshold values for the following situations, in which warning messages are recorded in the system log:
Input packet rate to the DFC PIC interfaces
Memory usage on the DFC PIC interfaces
To configure threshold values, include the input-packet-rate-threshold or pic-memory-threshold statement at the [edit services dynamic-flow-capture capture-group client-name] hierarchy level:
input-packet-rate-threshold rate;
pic-memory-threshold percentage percentage;
If these statements are not configured, no threshold messages are logged. The threshold settings are configured for the capture group as a whole.
The range of configurable values for the input-packet-rate-threshold statement is 0 through 1 Mpps. The PIC calibrates the value accordingly: the Monitoring Services III PIC caps the threshold value at 300 Kpps, and the Multiservices 400 PIC uses the full configured value. The range of values for the pic-memory-threshold statement is 0 through 100 percent.
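A sketch of both thresholds for a capture group (the group name cg1 and the threshold values are hypothetical placeholders):

```
[edit services dynamic-flow-capture]
capture-group cg1 {
    input-packet-rate-threshold 200000;    # warn above 200 Kpps input rate
    pic-memory-threshold percentage 85;    # warn above 85 percent PIC memory usage
}
```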
Limiting the Number of Duplicates of a Packet
You can optionally specify the maximum number of duplicate packets the DFC PIC is allowed to generate from a single input packet. This limitation is intended to reduce the load on the PIC when packets are sent to multiple destinations. When the maximum number is reached, the duplicates are sent to the destinations with the highest criteria class priority. Within classes of equal priority, criteria having earlier timestamps are selected first.
To configure this limitation, include the max-duplicates statement at the [edit services dynamic-flow-capture capture-group client-name] hierarchy level:
max-duplicates number;
You can also apply the limitation on a global basis for the DFC PIC by including the g-max-duplicates statement at the [edit services dynamic-flow-capture] hierarchy level:
g-max-duplicates number;
By default, the maximum number of duplicates is set to 3. The range of allowed values is 1 through 64. A setting for max-duplicates for an individual capture group overrides the global setting.
In addition, you can specify the frequency with which the application sends notifications to the affected control sources that duplicates are being dropped because the threshold has been reached. You configure this setting at the same levels as the maximum-duplicates settings, by including the duplicates-dropped-periodicity statement at the [edit services dynamic-flow-capture capture-group client-name] hierarchy level or the g-duplicates-dropped-periodicity statement at the [edit services dynamic-flow-capture] hierarchy level:
duplicates-dropped-periodicity seconds;
g-duplicates-dropped-periodicity seconds;
As with the g-max-duplicates statement, the g-duplicates-dropped-periodicity statement applies the setting globally for the application and is overridden by a setting applied at the capture-group level. By default, the frequency for sending notifications is 30 seconds.
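A combined sketch showing the global settings and a per-group override (the group name cg1 and all values are hypothetical placeholders within the documented ranges):

```
[edit services dynamic-flow-capture]
g-max-duplicates 8;                       # global cap on duplicates per input packet
g-duplicates-dropped-periodicity 60;      # global notification interval, in seconds
capture-group cg1 {
    max-duplicates 16;                    # overrides the global cap for this group
    duplicates-dropped-periodicity 30;    # overrides the global interval for this group
}
```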