Examples: Configuring Data MDTs
Understanding Data MDTs
In a draft-rosen Layer 3 multicast virtual private network (MVPN) configured with service provider tunnels, the VPN is multicast-enabled and configured to use the Protocol Independent Multicast (PIM) protocol within the VPN and within the service provider (SP) network. A multicast-enabled VPN routing and forwarding (VRF) instance corresponds to a multicast domain (MD), and a PE router attached to a particular VRF instance is said to belong to the corresponding MD. For each MD there is a default multicast distribution tree (MDT) through the SP backbone, which connects all of the PE routers belonging to that MD. Any PE router configured with a default MDT group address can be the multicast source of one default MDT.
To provide optimal multicast routing, you can configure the PE routers so that when the multicast source within a site exceeds a traffic rate threshold, the PE router to which the source site is attached creates a new data MDT and advertises the new MDT group address. An advertisement of a new MDT group address is sent in a User Datagram Protocol (UDP) type-length-value (TLV) packet called an MDT join TLV. The MDT join TLV identifies the source and group pair (S,G) in the VRF instance as well as the new data MDT group address used in the provider space. The PE router to which the source site is attached sends the MDT join TLV over the default MDT for that VRF instance every 60 seconds as long as the source is active.
All PE routers in the VRF instance receive the MDT join TLV because it is sent over the default MDT, but not all the PE routers join the new data MDT group:
PE routers connected to receivers in the VRF instance for the current multicast group cache the contents of the MDT join TLV, adding a 180-second timeout value to the cache entry, and also join the new data MDT group.
PE routers not connected to receivers in the VRF instance for the current multicast group also cache the contents of the MDT join TLV, adding a 180-second timeout value to the cache entry, but do not join the new data MDT group at this time.
After the source PE stops sending the multicast traffic stream over the default MDT and uses the new MDT instead, only the PE routers that join the new group receive the multicast traffic for that group.
When a remote PE router joins the new data MDT group, it sends a PIM (S,G) join message for the new group directly to the source PE router.
If a PE router that has not yet joined the new data MDT group receives a PIM join message for a new receiver for which (S,G) traffic is already flowing over the data MDT in the provider core, then that PE router can obtain the new group address from its cache and can join the data MDT immediately without waiting up to 59 seconds for the next data MDT advertisement.
When the PE router to which the source site is attached sends a subsequent MDT join TLV for the VRF instance over the default MDT, any existing cache entries for that VRF instance are simply refreshed with a timeout value of 180 seconds.
To display the information cached from MDT join TLV packets received by all PE routers in a PIM-enabled VRF instance, use the show pim mdt data-mdt-joins operational mode command.
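For example, to check the cached entries for a single PIM-enabled VRF instance, you can add the instance option to the command. The instance name ce1 is used here only for illustration:
user@host> show pim mdt data-mdt-joins instance ce1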
The source PE router starts encapsulating the multicast traffic for the VRF instance using the new data MDT group after 3 seconds, allowing time for the remote PE routers to join the new group. The source PE router then halts the flow of multicast packets over the default MDT, and the packet flow for the VRF instance source shifts to the newly created data MDT.
The PE router monitors the traffic rate during its periodic statistics-collection cycles. If the traffic rate drops below the threshold or the source stops sending multicast traffic, the PE router to which the source site is attached stops announcing the MDT join TLVs and switches back to sending on the default MDT for that VRF instance.
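For reference, the traffic rate threshold that drives this behavior is configured per multicast group and source in the VRF instance. The following minimal sketch shows the rosen 6 (ASM) form of the statement with illustrative instance and address values; the rosen 7 (SSM) form, configured under the provider-tunnel hierarchy, appears in the examples later on this page:
set routing-instances vpn-A protocols pim mdt threshold group 224.4.4.4/32 source 10.10.20.43/32 rate 10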
Data MDT Characteristics
A data multicast distribution tree (MDT) solves the problem of routers flooding unnecessary multicast information to PE routers that have no interested receivers for a particular VPN multicast group.
The default MDT uses multicast tunnel (mt-) logical interfaces. Data MDTs also use multicast tunnel logical interfaces. If you administratively disable the physical interface that the multicast tunnel logical interfaces are configured on, the multicast tunnel logical interfaces are moved to a different physical interface that is up. In this case the traffic is sent over the default MDT until new data MDTs are created.
The maximum number of data MDTs for a VRF instance is 1024, and the same 1024 maximum applies to all VPNs on a PE router. The tunnel limit configured for a VRF instance can further restrict how many data MDTs are created. After the limit is reached in the VRF instance, no new data MDTs are created, and traffic for additional sources that exceed the configured threshold is sent on the default MDT.
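For example, a minimal sketch that caps the number of data MDTs for an illustrative rosen 7 VRF instance named ce1 (the rosen 6 form places the same tunnel-limit statement under protocols pim mdt instead):
set routing-instances ce1 provider-tunnel family inet mdt tunnel-limit 10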
Tear-down of data MDTs depends on the monitoring of the multicast source data rate. This rate is checked once per minute, so if the source data rate falls below the configured value, data MDT deletion can be delayed for up to 1 minute until the next statistics-monitoring collection cycle.
Changes to the configured data MDT limit value do not affect existing tunnels that exceed the new limit. Data MDTs that are already active remain in place until the threshold conditions are no longer met.
In a draft-rosen MVPN in which PE routers are already configured to create data MDTs in response to exceeded multicast source traffic rate thresholds, you can change the group range used for creating data MDTs in a VRF instance. To remove any active data MDTs created using the previous group range, you must restart the PIM routing process. This restart clears all remnants of the former group addresses but disrupts routing and therefore requires a maintenance window for the change.
Never restart any of the software processes unless instructed to do so by a customer support engineer.
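Purely as an illustration, changing the data MDT group range for a rosen 7 VRF instance named ce1 to a hypothetical new range might look like the following sketch; the caution above still applies if you then clear data MDTs built from the old range by restarting the routing process:
[edit]
user@host# delete routing-instances ce1 provider-tunnel family inet mdt group-range
user@host# set routing-instances ce1 provider-tunnel family inet mdt group-range 239.20.20.0/24
user@host# commit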
Multicast tunnel (mt) interfaces created because of exceeded thresholds are not re-created if the routing process crashes. Therefore, graceful restart does not automatically reinstate the data MDT state. However, as soon as the periodic statistics collection reveals that the threshold condition is still exceeded, the tunnels are quickly re-created.
Data MDTs are supported for customer traffic with PIM sparse mode, dense mode, and sparse-dense mode. Note that the provider core does not support PIM dense mode.
Example: Configuring Data MDTs and Provider Tunnels Operating in Source-Specific Multicast Mode
This example shows how to configure data multicast distribution trees (MDTs) for a provider edge (PE) router attached to a VPN routing and forwarding (VRF) instance in a draft-rosen Layer 3 multicast VPN operating in source-specific multicast (SSM) mode. The example is based on the Junos OS implementation of RFC 4364, BGP/MPLS IP Virtual Private Networks (VPNs) and on section 7 of the IETF Internet draft draft-rosen-vpn-mcast-07.txt, Multicast in MPLS/BGP IP VPNs.
Requirements
Before you begin:
Make sure that the routing devices support multicast tunnel (mt) interfaces.
A tunnel-capable PIC supports a maximum of 512 multicast tunnel interfaces. Both default and data MDTs contribute to this total. The default MDT uses two multicast tunnel interfaces (one for encapsulation and one for de-encapsulation). To enable an M Series or T Series router to support more than 512 multicast tunnel interfaces, another tunnel-capable PIC is required. See “Tunnel Services PICs and Multicast” and “Load Balancing Multicast Tunnel Interfaces Among Available PICs” in the Multicast Protocols User Guide.
Make sure that the PE router has been configured for a draft-rosen Layer 3 multicast VPN operating in SSM mode in the provider core.
In this type of multicast VPN, PE routers discover one another by sending MDT subsequent address family identifier (MDT-SAFI) BGP network layer reachability information (NLRI) advertisements. Key configuration statements for the master instance are highlighted in Table 1. Key configuration statements for the VRF instance to which your PE router is attached are highlighted in Table 2. For complete configuration details, see "Example: Configuring Source-Specific Multicast for Draft-Rosen Multicast VPNs" in the Multicast Protocols User Guide.
Overview
By using data MDTs in a Layer 3 VPN, you can prevent multicast packets from being flooded unnecessarily to specified provider edge (PE) routers within a VPN group. This option is primarily useful for PE routers in your Layer 3 VPN multicast network that have no receivers for the multicast traffic from a particular source.
When a PE router that is directly connected to the multicast source (also called the source PE) receives Layer 3 VPN multicast traffic that exceeds a configured threshold, a new data MDT tunnel is established between the PE router connected to the source site and its remote PE router neighbors.
The source PE advertises the new data MDT group as long as the source is active. The periodic announcement is sent over the default MDT for the VRF. Because the data MDT announcement is sent over the default tunnel, all the PE routers receive the announcement.
Neighbors that do not have receivers for the multicast traffic cache the advertisement of the new data MDT group but ignore the new tunnel. Neighbors that do have receivers for the multicast traffic cache the advertisement of the new data MDT group and also send a PIM join message for the new group.
The source PE encapsulates the VRF multicast traffic using the new data MDT group and stops the packet flow over the default multicast tree. If the multicast traffic level drops back below the threshold, the data MDT is torn down automatically and traffic flows back across the default multicast tree.
If a PE router that has not yet joined the new data MDT group receives a PIM join message for a new receiver for which (S,G) traffic is already flowing over the data MDT in the provider core, then that PE router can obtain the new group address from its cache and can join the data MDT immediately without waiting up to 59 seconds for the next data MDT advertisement.
By default, automatic creation of data MDTs is disabled.
The following sections summarize the data MDT configuration statements used in this example and in the prerequisite configuration for this example:
In the master instance, the PE router’s prerequisite draft-rosen PIM-SSM multicast configuration includes statements that directly support the data MDT configuration you will enable in this example. Table 1 highlights some of these statements†.
Table 1: Data MDTs—Key Prerequisites in the Master Instance
Statement
Description
[edit protocols] pim { interface interface-name <options>; }
Enables the PIM protocol on PE router interfaces.
[edit protocols] bgp { group name { type internal; peer-as autonomous-system; neighbor address; family inet-mdt { signaling; } } }
[edit routing-options] autonomous-system autonomous-system;
In the internal BGP full mesh between PE routers in the VRF instance, enables the BGP protocol to carry MDT-SAFI NLRI signaling messages for IPv4 traffic in Layer 3 VPNs.
[edit routing-options] multicast { ssm-groups [ ip-addresses ]; }
(Optional) Configures one or more SSM groups to use inside the provider network in addition to the default SSM group address range of 232.0.0.0/8.
Note: For this example, it is assumed that you previously specified an additional SSM group address range of 239.0.0.0/8.
† This table contains only a partial list of the PE router configuration statements for a draft-rosen multicast VPN operating in SSM mode in the provider core. For complete configuration information about this prerequisite, see “Example: Configuring Source-Specific Multicast for Draft-Rosen Multicast VPNs” in the Multicast Protocols User Guide.
In the VRF instance to which the PE router is attached—at the [edit routing-instances name] hierarchy level—the PE router’s prerequisite draft-rosen PIM-SSM multicast configuration includes statements that directly support the data MDT configuration you will enable in this example. Table 2 highlights some of these statements‡.
Table 2: Data MDTs—Key Prerequisites in the VRF Instance
Statement
Description
[edit routing-instances name] instance-type vrf; vrf-target community;
The instance-type vrf statement creates a VRF table that contains the routes originating from and destined for the Layer 3 VPN.
The vrf-target statement creates a VRF export policy that automatically accepts routes from the instance-name.mdt.0 routing table, which ensures proper PE autodiscovery using the inet-mdt address family.
You must also configure the interface and route-distinguisher statements for this type of routing instance.
[edit routing-instances name] protocols { pim { mvpn { family { inet | inet6 { autodiscovery { inet-mdt; } } } } } }
Configures the PE router in a VPN to use an MDT-SAFI NLRI for autodiscovery of other PE routers.
[edit routing-instances name] provider-tunnel family inet | inet6 { pim-ssm { group-address address; } }
Configures the PIM-SSM provider tunnel default MDT group address.
Note: For this example, it is assumed that you previously configured the PIM-SSM provider tunnel default MDT for the VPN instance ce1 with the group address 239.1.1.1.
To verify the configuration of the default MDT tunnel for the VRF instance to which the PE router is attached, use the show pim mvpn operational mode command.
‡ This table contains only a partial list of the PE router configuration statements for a draft-rosen multicast VPN operating in SSM mode in the provider core. For complete configuration information about this prerequisite, see “Example: Configuring Source-Specific Multicast for Draft-Rosen Multicast VPNs” in the Multicast Protocols User Guide.
For a rosen 7 MVPN—a draft-rosen multicast VPN with provider tunnels operating in SSM mode—you configure data MDT creation for a tunnel multicast group by including statements under the PIM-SSM provider tunnel configuration for the VRF instance associated with the multicast group. Because data MDTs are specific to VPNs and VRF routing instances, you cannot configure MDT statements in the master routing instance. Table 3 summarizes the data MDT configuration statements for PIM-SSM provider tunnels.
Table 3: Data MDTs for PIM-SSM Provider Tunnels in a Draft-Rosen MVPN
Statement
Description
[edit routing-instances name] provider-tunnel family inet | inet6 { mdt { group-range multicast-prefix; } }
Configures the IP group range used when a new data MDT needs to be created in the VRF instance on the PE router. This address range cannot overlap the default MDT addresses of any other VPNs on the router. If you configure overlapping group ranges, the configuration commit fails.
This statement has no default value. If you do not set the multicast-prefix to a valid, nonreserved multicast address range, then no data MDTs are created for this VRF instance.
Note: For this example, it is assumed that you previously configured the PE router to automatically select an address from the 239.10.10.0/24 range when a new data MDT needs to be initiated.
[edit routing-instances name] provider-tunnel family inet | inet6 { mdt { tunnel-limit limit; } }
Configures the maximum number of data MDTs that can be created for the VRF instance.
The default value is 0. If you do not configure the limit to a non-zero value, then no data MDTs are created for this VRF instance.
The valid range is from 0 through 1024 for a VRF instance. There is a limit of 8000 tunnels for all data MDTs in all VRF instances on a PE router.
If the configured maximum number of data MDT tunnels is reached, then no new tunnels are created for the VRF instance, and traffic that exceeds the configured threshold is sent on the default MDT.
Note: For this example, you limit the number of data MDTs for the VRF instance to 10.
[edit routing-instances name] provider-tunnel family inet | inet6 { mdt { threshold { group group-address { source source-address { rate threshold-rate; } } } } }
Configures a data rate threshold for a multicast source whose traffic is carried on the default MDT. When the source traffic in the VRF instance exceeds the configured data rate, a new data MDT is created.
group group-address—Multicast group address in the VRF instance to which the threshold applies. The group address can be explicit (all 32 bits of the address specified) or a prefix (network address and prefix length specified). This is typically a well-known address for a certain type of multicast traffic.
source source-address—Unicast IP prefix of one or more multicast sources in the specified multicast group.
rate threshold-rate—Data rate for the multicast source to trigger the automatic creation of a data MDT. The data rate is specified in kilobits per second (Kbps).
The default threshold-rate is 10 kilobits per second (Kbps).
Note: For this example, you configure the following data MDT threshold:
Multicast group address or address range to which the threshold limits apply—224.0.9.0/32
Multicast source address or address range to which the threshold limits apply—10.1.1.2/32
Data rate—10 Kbps
When the traffic stops or the rate falls below the threshold value, the source PE router switches back to the default MDT.
Configuration
The following example requires you to navigate various levels in the configuration hierarchy. For information about navigating the CLI, see the Junos OS CLI User Guide.
- CLI Quick Configuration
- Enabling Data MDTs and PIM-SSM Provider Tunnels on the Local PE Router Attached to a VRF
- (Optional) Enabling Logging of Detailed Trace Information for Multicast Tunnel Interfaces on the Local PE Router
- Results
CLI Quick Configuration
To quickly configure this example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
set routing-instances ce1 provider-tunnel family inet mdt group-range 239.10.10.0/24
set routing-instances ce1 provider-tunnel family inet mdt tunnel-limit 10
set routing-instances ce1 provider-tunnel family inet mdt threshold group 224.0.9.0/32 source 10.1.1.2/32 rate 10
set protocols pim traceoptions file trace-pim-mdt
set protocols pim traceoptions file files 5
set protocols pim traceoptions file size 1m
set protocols pim traceoptions file world-readable
set protocols pim traceoptions flag mdt detail
Enabling Data MDTs and PIM-SSM Provider Tunnels on the Local PE Router Attached to a VRF
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS CLI User Guide.
To configure the local PE router attached to the VRF instance ce1 in a PIM-SSM multicast VPN to initiate new data MDTs and provider tunnels for that VRF:
Enable configuration of provider tunnels operating in SSM mode.
[edit] user@host# edit routing-instances ce1 provider-tunnel
Configure the range of multicast IP addresses for new data MDTs.
[edit routing-instances ce1 provider-tunnel] user@host# set mdt group-range 239.10.10.0/24
Configure the maximum number of data MDTs for this VRF instance.
[edit routing-instances ce1 provider-tunnel] user@host# set mdt tunnel-limit 10
Configure the data MDT-creation threshold for a multicast group and source.
[edit routing-instances ce1 provider-tunnel] user@host# set mdt threshold group 224.0.9.0/32 source 10.1.1.2/32 rate 10
If you are done configuring the device, commit the configuration.
[edit] user@host# commit
Results
Confirm the configuration of data MDTs for PIM-SSM provider tunnels by entering the show routing-instances command from configuration mode. If the output does not display the intended configuration, repeat the instructions in this procedure to correct the configuration.
[edit]
user@host# show routing-instances
ce1 {
    instance-type vrf;
    vrf-target target:100:1;
    ...
    provider-tunnel {
        pim-ssm {
            group-address 239.1.1.1;
        }
        mdt {
            threshold {
                group 224.0.9.0/32 {
                    source 10.1.1.2/32 {
                        rate 10;
                    }
                }
            }
            tunnel-limit 10;
            group-range 239.10.10.0/24;
        }
    }
    protocols {
        ...
        pim {
            mvpn {
                family {
                    inet {
                        autodiscovery {
                            inet-mdt;
                        }
                    }
                }
            }
        }
    }
}
The show routing-instances command output above does not show the complete configuration of a VRF instance in a draft-rosen MVPN operating in SSM mode in the provider core.
(Optional) Enabling Logging of Detailed Trace Information for Multicast Tunnel Interfaces on the Local PE Router
Step-by-Step Procedure
To enable logging of detailed trace information for all multicast tunnel interfaces on the local PE router:
Enable configuration of PIM tracing options.
[edit] user@host# edit protocols pim traceoptions
Configure the trace file name, maximum number of trace files, maximum size of each trace file, and file access type.
[edit protocols pim traceoptions]
user@host# set file trace-pim-mdt
user@host# set file files 5
user@host# set file size 1m
user@host# set file world-readable
Specify that messages related to multicast data tunnel operations are logged.
[edit protocols pim traceoptions]
user@host# set flag mdt detail
If you are done configuring the device, commit the configuration.
[edit] user@host# commit
Results
Confirm the configuration of multicast tunnel logging by entering the show protocols command from configuration mode. If the output does not display the intended configuration, repeat the instructions in this procedure to correct the configuration.
[edit]
user@host# show protocols
pim {
    traceoptions {
        file trace-pim-mdt size 1m files 5 world-readable;
        flag mdt detail;
    }
    interface lo0.0;
    ...
}
Verification
To verify that the local PE router is managing data MDTs and PIM-SSM provider tunnels properly, perform the following tasks:
- Monitor Data MDTs Initiated for the Multicast Group
- Monitor Data MDT Group Addresses Cached by All PE Routers in the Multicast Group
- (Optional) View the Trace Log for Multicast Tunnel Interfaces
Monitor Data MDTs Initiated for the Multicast Group
Purpose
For the VRF instance ce1, check the incoming and outgoing tunnels established by the local PE router for the default MDT and monitor the data MDTs initiated by the local PE router.
Action
Use the show pim mdt instance ce1 detail operational mode command.
For the default MDT, the command displays details about the incoming and outgoing tunnels established by the local PE router for specific multicast source addresses in the multicast group using the default MDT and identifies the tunnel mode as PIM-SSM.
For the data MDTs initiated by the local PE router, the command identifies the multicast source using the data MDT, the multicast tunnel logical interface set up for the data MDT tunnel, the configured threshold rate, and current statistics.
Monitor Data MDT Group Addresses Cached by All PE Routers in the Multicast Group
Purpose
For the VRF instance ce1, check the data MDT group addresses cached by all PE routers that participate in the VRF.
Action
Use the show pim mdt data-mdt-joins instance ce1 operational mode command. The command output displays the information cached from MDT join TLV packets received by all PE routers participating in the specified VRF instance, including the current timeout value of each entry.
(Optional) View the Trace Log for Multicast Tunnel Interfaces
Purpose
If you configured logging of trace information for multicast tunnel interfaces, you can trace the creation and tear-down of data MDTs on the local router through the mt interface-related activity in the log.
Action
To view the trace file, use the file show /var/log/trace-pim-mdt operational mode command.
Example: Configuring Data MDTs and Provider Tunnels Operating in Any-Source Multicast Mode
This example shows how to configure data multicast distribution trees (MDTs) in a draft-rosen Layer 3 VPN operating in any-source multicast (ASM) mode. This example is based on the Junos OS implementation of RFC 4364, BGP/MPLS IP Virtual Private Networks (VPNs) and on section 2 of the IETF Internet draft draft-rosen-vpn-mcast-06.txt, Multicast in MPLS/BGP VPNs (expired April 2004).
Requirements
Before you begin:
Configure the draft-rosen multicast over Layer 3 VPN scenario.
Make sure that the routing devices support multicast tunnel (mt) interfaces.
A tunnel-capable PIC supports a maximum of 512 multicast tunnel interfaces. Both default and data MDTs contribute to this total. The default MDT uses two multicast tunnel interfaces (one for encapsulation and one for de-encapsulation). To enable an M Series or T Series router to support more than 512 multicast tunnel interfaces, another tunnel-capable PIC is required. See “Tunnel Services PICs and Multicast” and “Load Balancing Multicast Tunnel Interfaces Among Available PICs” in the Multicast Protocols User Guide.
Overview
By using data multicast distribution trees (MDTs) in a Layer 3 VPN, you can prevent multicast packets from being flooded unnecessarily to specified provider edge (PE) routers within a VPN group. This option is primarily useful for PE routers in your Layer 3 VPN multicast network that have no receivers for the multicast traffic from a particular source.
When a PE router that is directly connected to the multicast source (also called the source PE) receives Layer 3 VPN multicast traffic that exceeds a configured threshold, a new data MDT tunnel is established between the PE router connected to the source site and its remote PE router neighbors.
The source PE advertises the new data MDT group as long as the source is active. The periodic announcement is sent over the default MDT for the VRF. Because the data MDT announcement is sent over the default tunnel, all the PE routers receive the announcement.
Neighbors that do not have receivers for the multicast traffic cache the advertisement of the new data MDT group but ignore the new tunnel. Neighbors that do have receivers for the multicast traffic cache the advertisement of the new data MDT group and also send a PIM join message for the new group.
The source PE encapsulates the VRF multicast traffic using the new data MDT group and stops the packet flow over the default multicast tree. If the multicast traffic level drops back below the threshold, the data MDT is torn down automatically and traffic flows back across the default multicast tree.
If a PE router that has not yet joined the new data MDT group receives a PIM join message for a new receiver for which (S,G) traffic is already flowing over the data MDT in the provider core, then that PE router can obtain the new group address from its cache and can join the data MDT immediately without waiting up to 59 seconds for the next data MDT advertisement.
By default, automatic creation of data MDTs is disabled.
For a rosen 6 MVPN—a draft-rosen multicast VPN with provider tunnels operating in ASM mode—you configure data MDT creation for a tunnel multicast group by including statements under the PIM protocol configuration for the VRF instance associated with the multicast group. Because data MDTs apply to VPNs and VRF routing instances, you cannot configure MDT statements in the master routing instance.
This example includes the following configuration options:
group—Specifies the multicast group address to which the threshold applies. This could be a well-known address for a certain type of multicast traffic.
The group address can be explicit (all 32 bits of the address specified) or a prefix (network address and prefix length specified). Explicit and prefix address forms can be combined if they do not overlap. Overlapping configurations, in which prefix and more explicit address forms are used for the same source or group address, are not supported. (See the sketch that follows this list of options for an illustration.)
group-range—Specifies the multicast group IP address range used when a new data MDT needs to be initiated on the PE router. For each new data MDT, one address is automatically selected from the configured group range.
The PE router implementing data MDTs for a local multicast source must be configured with a range of multicast group addresses. Group addresses that fall within the configured range are used in the join messages for the data MDTs created in this VRF instance. Any multicast address range can be used as the multicast prefix. However, the group address range cannot overlap the default MDT group address configured for any VPN on the router. If you configure overlapping group addresses, the configuration commit operation fails.
pim—Supports data MDTs for service provider tunnels operating in any-source multicast mode.
rate—Specifies the data rate that initiates the creation of data MDTs. When the source traffic in the VRF exceeds the configured data rate, a new tunnel is created. The range is from 10 kilobits per second (Kbps), the default, to 1 gigabit per second (Gbps, equivalent to 1,000,000 Kbps).
source—Specifies the unicast address of the source of the multicast traffic. It can be a source locally attached to or reached through the PE router. A group can have more than one source.
The source address can be explicit (all 32 bits of the address specified) or a prefix (network address and prefix length specified). Explicit and prefix address forms can be combined if they do not overlap. Overlapping configurations, in which prefix and more explicit address forms are used for the same source or group address, are not supported.
threshold—Associates a rate with a group and a source. The PE router implementing data MDTs for a local multicast source must establish a data MDT-creation threshold for a multicast group and source.
When the traffic stops or the rate falls below the threshold value, the source PE router switches back to the default MDT.
tunnel-limit—Specifies the maximum number of data MDTs that can be created for a single routing instance. The PE router implementing a data MDT for a local multicast source must establish a limit for the number of data MDTs created in this VRF instance. If the limit is 0 (the default), then no data MDTs are created for this VRF instance.
If the number of data MDT tunnels exceeds the maximum configured tunnel limit for the VRF, then no new tunnels are created. Traffic that exceeds the configured threshold is sent on the default MDT.
The valid range is from 0 through 1024 for a VRF instance. There is a limit of 8000 tunnels for all data MDTs in all VRF instances on a PE router.
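Pulled together, a hypothetical rosen 6 threshold configuration that exercises these options might look like the following sketch. The first threshold statement uses an explicit group and source; the second uses non-overlapping group and source prefixes. All addresses and rates are illustrative only:
set routing-instances vpn-A protocols pim mdt threshold group 224.4.4.4/32 source 10.10.20.43/32 rate 10
set routing-instances vpn-A protocols pim mdt threshold group 225.1.0.0/16 source 10.20.30.0/24 rate 50
set routing-instances vpn-A protocols pim mdt tunnel-limit 10
set routing-instances vpn-A protocols pim mdt group-range 227.0.0.0/8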
Configuration
Procedure
CLI Quick Configuration
To quickly configure this example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, and then copy and paste the commands into the CLI at the [edit] hierarchy level.
[edit]
set routing-instances vpn-A protocols pim mdt group-range 227.0.0.0/8
set routing-instances vpn-A protocols pim mdt threshold group 224.4.4.4/32 source 10.10.20.43/32 rate 10
set routing-instances vpn-A protocols pim mdt tunnel-limit 10
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS CLI User Guide.
To configure a PE router attached to the VRF instance vpn-A in a PIM-ASM multicast VPN to initiate new data MDTs and provider tunnels for that VRF:
Configure the group range.
[edit]
user@host# edit routing-instances vpn-A protocols pim mdt
[edit routing-instances vpn-A protocols pim mdt]
user@host# set group-range 227.0.0.0/8
Configure a data MDT-creation threshold for a multicast group and source.
[edit routing-instances vpn-A protocols pim mdt] user@host# set threshold group 224.4.4.4 source 10.10.20.43 rate 10
Configure a tunnel limit.
[edit routing-instances vpn-A protocols pim mdt] user@host# set tunnel-limit 10
If you are done configuring the device, commit the configuration.
[edit routing-instances vpn-A protocols pim mdt] user@host# commit
Verification
To display information about the default MDT and any data MDTs for the VRF instance vpn-A, use the show pim mdt instance vpn-A detail operational mode command. This command displays either the outgoing tunnels (the tunnels initiated by the local PE router), the incoming tunnels (the tunnels initiated by the remote PE routers), or both.
To display the data MDT group addresses cached by PE routers that participate in the VRF instance vpn-A, use the show pim mdt data-mdt-joins instance vpn-A operational mode command. The command displays the information cached from MDT join TLV packets received by all PE routers participating in the specified VRF instance.
You can trace the operation of data MDTs by including the mdt detail flag in the [edit protocols pim traceoptions] configuration. When this flag is set, all mt interface-related activity is logged in trace files.
Example: Enabling Dynamic Reuse of Data MDT Group Addresses
This example describes how to enable dynamic reuse of data multicast distribution tree (MDT) group addresses.
Requirements
Before you begin:
Configure the router interfaces. See the Junos OS Network Interfaces Library for Routing Devices.
Configure an interior gateway protocol or static routing. See the Junos OS Routing Protocols Library for Routing Devices.
Configure PIM Sparse Mode on the interfaces. See Enabling PIM Sparse Mode.
Overview
A limited number of multicast group addresses are available for use in data MDT tunnels. By default, when the available multicast group addresses are all used, no new data MDTs can be created.
You can enable dynamic reuse of data MDT group addresses. Dynamic reuse of data MDT group addresses allows multiple multicast streams to share a single MDT and multicast provider group address. For example, three streams can use the same provider group address and MDT tunnel.
The streams are assigned to a particular MDT in a round-robin fashion. Since a provider tunnel might be used by multiple customer streams, this can result in egress routers receiving customer traffic that is not destined for their attached customer sites. This example shows the plain PIM scenario, without the MVPN provider tunnel.
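A minimal sketch of the statements that produce this sharing, using the instance name and values from the example that follows (with a tunnel limit of 2, a third stream that crosses its threshold is assigned to one of the two existing data MDT group addresses in round-robin order):
set routing-instances VPN-A protocols pim mdt data-mdt-reuse
set routing-instances VPN-A protocols pim mdt tunnel-limit 2
set routing-instances VPN-A protocols pim mdt group-range 239.1.1.0/30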
Topology
Figure 5 shows the topology used in this example.
Configuration
CLI Quick Configuration
To quickly configure this example, copy the
following commands, paste them into a text file, remove any line breaks,
change any details necessary to match your network configuration,
and then copy and paste the commands into the CLI at the [edit]
hierarchy level.
set policy-options policy-statement bgp-to-ospf term 1 from protocol bgp
set policy-options policy-statement bgp-to-ospf term 1 then accept
set protocols mpls interface all
set protocols bgp local-as 65520
set protocols bgp group ibgp type internal
set protocols bgp group ibgp local-address 10.255.38.17
set protocols bgp group ibgp family inet-vpn unicast
set protocols bgp group ibgp neighbor 10.255.38.21
set protocols bgp group ibgp neighbor 10.255.38.15
set protocols ospf traffic-engineering
set protocols ospf area 0.0.0.0 interface all
set protocols ospf area 0.0.0.0 interface fxp0.0 disable
set protocols ldp interface all
set protocols pim rp static address 10.255.38.21
set protocols pim interface all mode sparse
set protocols pim interface all version 2
set protocols pim interface fxp0.0 disable
set routing-instances VPN-A instance-type vrf
set routing-instances VPN-A interface ge-1/1/2.0
set routing-instances VPN-A interface lo0.1
set routing-instances VPN-A route-distinguisher 10.0.0.10:04
set routing-instances VPN-A vrf-target target:100:10
set routing-instances VPN-A protocols ospf export bgp-to-ospf
set routing-instances VPN-A protocols ospf area 0.0.0.0 interface all
set routing-instances VPN-A protocols pim traceoptions file pim-VPN-A.log
set routing-instances VPN-A protocols pim traceoptions file size 5m
set routing-instances VPN-A protocols pim traceoptions flag mdt detail
set routing-instances VPN-A protocols pim dense-groups 224.0.1.39/32
set routing-instances VPN-A protocols pim dense-groups 224.0.1.40/32
set routing-instances VPN-A protocols pim dense-groups 229.0.0.0/8
set routing-instances VPN-A protocols pim vpn-group-address 239.1.0.0
set routing-instances VPN-A protocols pim rp static address 10.255.38.15
set routing-instances VPN-A protocols pim interface lo0.1 mode sparse-dense
set routing-instances VPN-A protocols pim interface ge-1/1/2.0 mode sparse-dense
set routing-instances VPN-A protocols pim mdt threshold group 224.1.1.1/32 source 192.168.255.245/32 rate 20
set routing-instances VPN-A protocols pim mdt threshold group 224.1.1.2/32 source 192.168.255.245/32 rate 20
set routing-instances VPN-A protocols pim mdt threshold group 224.1.1.3/32 source 192.168.255.245/32 rate 20
set routing-instances VPN-A protocols pim mdt data-mdt-reuse
set routing-instances VPN-A protocols pim mdt tunnel-limit 2
set routing-instances VPN-A protocols pim mdt group-range 239.1.1.0/30
Procedure
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS CLI User Guide.
To configure dynamic reuse of data MDT group addresses:
Configure the bgp-to-ospf export policy.
[edit policy-options policy-statement bgp-to-ospf] user@host# set term 1 from protocol bgp user@host# set term 1 then accept
Configure MPLS, LDP, BGP, OSPF, and PIM.
[edit]
user@host# edit protocols
[edit protocols]
user@host# set mpls interface all
[edit protocols]
user@host# set ldp interface all
[edit protocols]
user@host# set bgp local-as 65520
[edit protocols]
user@host# set bgp group ibgp type internal
[edit protocols]
user@host# set bgp group ibgp local-address 10.255.38.17
[edit protocols]
user@host# set bgp group ibgp family inet-vpn unicast
[edit protocols]
user@host# set bgp group ibgp neighbor 10.255.38.21
[edit protocols]
user@host# set bgp group ibgp neighbor 10.255.38.15
[edit protocols]
user@host# set ospf traffic-engineering
[edit protocols]
user@host# set ospf area 0.0.0.0 interface all
[edit protocols]
user@host# set ospf area 0.0.0.0 interface fxp0.0 disable
[edit protocols]
user@host# set pim rp static address 10.255.38.21
[edit protocols]
user@host# set pim interface all mode sparse
[edit protocols]
user@host# set pim interface all version 2
[edit protocols]
user@host# set pim interface fxp0.0 disable
[edit protocols]
user@host# exit
Configure the routing instance, and apply the bgp-to-ospf export policy.
[edit]
user@host# edit routing-instances VPN-A
[edit routing-instances VPN-A]
user@host# set instance-type vrf
[edit routing-instances VPN-A]
user@host# set interface ge-1/1/2.0
[edit routing-instances VPN-A]
user@host# set interface lo0.1
[edit routing-instances VPN-A]
user@host# set route-distinguisher 10.0.0.10:04
[edit routing-instances VPN-A]
user@host# set vrf-target target:100:10
[edit routing-instances VPN-A]
user@host# set protocols ospf export bgp-to-ospf
[edit routing-instances VPN-A]
user@host# set protocols ospf area 0.0.0.0 interface all
Configure PIM trace operations for troubleshooting.
[edit routing-instances VPN-A]
user@host# set protocols pim traceoptions file pim-VPN-A.log
[edit routing-instances VPN-A]
user@host# set protocols pim traceoptions file size 5m
[edit routing-instances VPN-A]
user@host# set protocols pim traceoptions flag mdt detail
Configure the groups that operate in dense mode and the group address on which to encapsulate multicast traffic from the routing instance.
[edit routing-instances VPN-A]
user@host# set protocols pim dense-groups 224.0.1.39/32
[edit routing-instances VPN-A]
user@host# set protocols pim dense-groups 224.0.1.40/32
[edit routing-instances VPN-A]
user@host# set protocols pim dense-groups 229.0.0.0/8
[edit routing-instances VPN-A]
user@host# set protocols pim vpn-group-address 239.1.0.0
Configure the address of the RP and the interfaces operating in sparse-dense mode.
[edit routing-instances VPN-A]
user@host# set protocols pim rp static address 10.255.38.15
[edit routing-instances VPN-A]
user@host# set protocols pim interface lo0.1 mode sparse-dense
[edit routing-instances VPN-A]
user@host# set protocols pim interface ge-1/1/2.0 mode sparse-dense
Configure the data MDT, including the data-mdt-reuse statement.
[edit routing-instances VPN-A]
user@host# set protocols pim mdt threshold group 224.1.1.1/32 source 192.168.255.245/32 rate 20
[edit routing-instances VPN-A]
user@host# set protocols pim mdt threshold group 224.1.1.2/32 source 192.168.255.245/32 rate 20
[edit routing-instances VPN-A]
user@host# set protocols pim mdt threshold group 224.1.1.3/32 source 192.168.255.245/32 rate 20
[edit routing-instances VPN-A]
user@host# set protocols pim mdt data-mdt-reuse
[edit routing-instances VPN-A]
user@host# set protocols pim mdt tunnel-limit 2
[edit routing-instances VPN-A]
user@host# set protocols pim mdt group-range 239.1.1.0/30
If you are done configuring the device, commit the configuration.
[edit routing-instances VPN-A] user@host# commit
Results
From configuration mode, confirm your configuration by entering the show policy-options, show protocols, and show routing-instances commands. If the output does not display the intended configuration, repeat the instructions in this example to correct the configuration.
user@host# show policy-options
policy-statement bgp-to-ospf {
    term 1 {
        from protocol bgp;
        then accept;
    }
}
user@host# show protocols
mpls {
    interface all;
}
bgp {
    local-as 65520;
    group ibgp {
        type internal;
        local-address 10.255.38.17;
        family inet-vpn {
            unicast;
        }
        neighbor 10.255.38.21;
        neighbor 10.255.38.15;
    }
}
ospf {
    traffic-engineering;
    area 0.0.0.0 {
        interface all;
        interface fxp0.0 {
            disable;
        }
    }
}
ldp {
    interface all;
}
pim {
    rp {
        static {
            address 10.255.38.21;
        }
    }
    interface all {
        mode sparse;
        version 2;
    }
    interface fxp0.0 {
        disable;
    }
}
user@host# show routing-instances
VPN-A {
    instance-type vrf;
    interface ge-1/1/2.0;
    interface lo0.1;
    route-distinguisher 10.0.0.10:04;
    vrf-target target:100:10;
    protocols {
        ospf {
            export bgp-to-ospf;
            area 0.0.0.0 {
                interface all;
            }
        }
        pim {
            traceoptions {
                file pim-VPN-A.log size 5m;
                flag mdt detail;
            }
            dense-groups {
                224.0.1.39/32;
                224.0.1.40/32;
                229.0.0.0/8;
            }
            vpn-group-address 239.1.0.0;
            rp {
                static {
                    address 10.255.38.15;
                }
            }
            interface lo0.1 {
                mode sparse-dense;
            }
            interface ge-1/1/2.0 {
                mode sparse-dense;
            }
            mdt {
                threshold {
                    group 224.1.1.1/32 {
                        source 192.168.255.245/32 {
                            rate 20;
                        }
                    }
                    group 224.1.1.2/32 {
                        source 192.168.255.245/32 {
                            rate 20;
                        }
                    }
                    group 224.1.1.3/32 {
                        source 192.168.255.245/32 {
                            rate 20;
                        }
                    }
                }
                data-mdt-reuse;
                tunnel-limit 2;
                group-range 239.1.1.0/30;
            }
        }
    }
}
Verification
To verify the configuration, run the following commands:
show pim join instance VPN-A extensive
show multicast route instance VPN-A extensive
show pim mdt instance VPN-A
show pim mdt data-mdt-joins instance VPN-A