Signaling Provider Tunnels and Data Plane Setup
In a next-generation multicast virtual private network (MVPN), provider tunnel information is communicated to the receiver PE routers out of band: it is advertised via BGP and is independent of the actual tunnel signaling process. Once the tunnel is signaled, the sender PE router binds the VPN routing and forwarding (VRF) table to the locally configured tunnel. The receiver PE routers bind the signaled tunnel to the VRF table in which the Type 1 autodiscovery route with the matching provider multicast service interface (PMSI) attribute is installed. The same binding process is used for both Protocol Independent Multicast (PIM) and RSVP-Traffic Engineering (RSVP-TE) signaled provider tunnels.
Provider Tunnels Signaled by PIM (Inclusive)
A sender provider edge (PE) router configured to use an inclusive PIM sparse mode (PIM-SM) any-source multicast (ASM) provider tunnel for a VPN creates a multicast tree in the service provider network, using the configured P-group address. This tree is rooted at the sender PE router and has the receiver PE routers as its leaves. VPN multicast packets received from the local VPN source are encapsulated by the sender PE router with a multicast generic routing encapsulation (GRE) header containing the P-group address configured for the VPN. These packets are then forwarded on the service provider network as normal IP multicast packets, per normal P-PIM procedures. At the leaf nodes, the GRE header is stripped and the packets are passed on to the local VRF C-PIM protocol for further processing.
In Junos OS, a logical interface called multicast tunnel (MT) is used for GRE encapsulation and de-encapsulation of VPN multicast packets. The multicast tunnel interface is created automatically if a Tunnel PIC is present.
- Encapsulation subinterfaces are created from the mt-x/y/z.[32768-49151] range.
- De-encapsulation subinterfaces are created from the mt-x/y/z.[49152-65535] range.
The multicast tunnel subinterfaces act as pseudo upstream or downstream interfaces between C-PIM and P-PIM.
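As a concrete sketch, an inclusive PIM-SM (ASM) provider tunnel of the kind described above is configured by assigning a P-group address to the routing instance. The instance name and group address below are placeholders, not taken from the example network:

```
# Hypothetical sketch: inclusive PIM-SM (ASM) provider tunnel for a VRF,
# with 239.1.1.1 as the P-group address used to build the provider tree.
[edit routing-instances vpna]
instance-type vrf;
provider-tunnel {
    pim-asm {
        group-address 239.1.1.1;
    }
}
```

With a configuration along these lines, the multicast tunnel (mt) subinterfaces are created automatically when a Tunnel PIC is present.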
In the following two examples, assume that the network uses PIM-SM (ASM) signaled GRE tunnels as the tunneling technology. Routers referenced in this topic are shown in Understanding Next-Generation MVPN Network Topology.
Use the show interfaces mt-0/1/0 terse command to verify that Router PE1 has created the following multicast tunnel subinterface. The logical interface number is 32768, indicating that this sub-unit is used for GRE encapsulation.
user@PE1> show interfaces mt-0/1/0 terse
Interface               Admin Link Proto Local   Remote
mt-0/1/0                up    up
mt-0/1/0.32768          up    up   inet
                                   inet6
Use the show interfaces mt-0/1/0 terse command to verify that Router PE2 has created the following multicast tunnel subinterface. The logical interface number is 49152, indicating that this sub-unit is used for GRE de-encapsulation.
user@PE2> show interfaces mt-0/1/0 terse
Interface               Admin Link Proto Local   Remote
mt-0/1/0                up    up
mt-0/1/0.49152          up    up   inet
                                   inet6
P-PIM and C-PIM on the Sender PE Router
The sender PE router installs a local join entry in its P-PIM database for each VRF table configured to use PIM as the provider tunnel. The outgoing interface list (OIL) of this entry points to the core-facing interface. Because the P-PIM entry is installed as Local, the sender PE router sets the source address to its primary loopback IP address.
Use the show pim join extensive command to verify that Router PE1 has installed the following state in its P-PIM database.
user@PE1> show pim join extensive
Instance: PIM.master Family: INET
R = Rendezvous Point Tree, S = Sparse, W = Wildcard

Group: 239.1.1.1
    Source: 10.1.1.1
    Flags: sparse,spt
    Upstream interface: Local
    Upstream neighbor: Local
    Upstream state: Local Source
    Keepalive timeout: 339
    Downstream neighbors:
        Interface: fe-0/2/3.0
            10.12.100.6 State: Join Flags: S Timeout: 195

Instance: PIM.master Family: INET6
R = Rendezvous Point Tree, S = Sparse, W = Wildcard
On the VRF side of the sender PE router, C-PIM installs a Local Source entry in its C-PIM database for the active local VPN source. The OIL of this entry points to Pseudo-MVPN, indicating that the downstream interface points to the receivers in the next-generation MVPN network. Routers referenced in this topic are shown in Understanding Next-Generation MVPN Network Topology.
Use the show pim join extensive instance vpna 224.1.1.1 command to verify that Router PE1 has installed the following entry in its C-PIM database.
user@PE1> show pim join extensive instance vpna 224.1.1.1
Instance: PIM.vpna Family: INET
R = Rendezvous Point Tree, S = Sparse, W = Wildcard

Group: 224.1.1.1
    Source: 192.168.1.2
    Flags: sparse,spt
    Upstream interface: fe-0/2/0.0
    Upstream neighbor: 10.12.97.2
    Upstream state: Local RP, Join to Source
    Keepalive timeout: 0
    Downstream neighbors:
        Interface: Pseudo-MVPN
The forwarding entry corresponding to the C-PIM Local Source (or Local RP) entry on the sender PE router points to the multicast tunnel encapsulation subinterface as the downstream interface. This indicates that the local multicast data packets are encapsulated as they are passed on to the P-PIM protocol.
Use the show multicast route extensive instance vpna group 224.1.1.1 command to verify that Router PE1 has the following multicast forwarding entry for group 224.1.1.1. The upstream interface is the PE-CE interface, and the downstream interface is the multicast tunnel encapsulation subinterface:
user@PE1> show multicast route extensive instance vpna group 224.1.1.1
Family: INET

Group: 224.1.1.1
    Source: 192.168.1.2/32
    Upstream interface: fe-0/2/0.0
    Downstream interface list:
        mt-0/1/0.32768
    Session description: ST Multicast Groups
    Statistics: 7 kBps, 79 pps, 719738 packets
    Next-hop ID: 262144
    Upstream protocol: MVPN
    Route state: Active
    Forwarding state: Forwarding
    Cache lifetime/timeout: forever
    Wrong incoming interface notifications: 0
P-PIM and C-PIM on the Receiver PE Router
On the receiver PE router, multicast data packets received from the network are de-encapsulated as they are passed through the multicast tunnel de-encapsulation interface.
The P-PIM database on the receiver PE router contains two P-joins: one for the P-RP and one for the sender PE router. For both entries, the OIL contains the multicast tunnel de-encapsulation interface, where the GRE header is stripped. The upstream interface for both P-joins is the core-facing interface toward the sender PE router.
Use the show pim join extensive command to verify that Router PE3 has the following state in its P-PIM database. The downstream neighbor interface points to the GRE de-encapsulation subinterface:
user@PE3> show pim join extensive
Instance: PIM.master Family: INET
R = Rendezvous Point Tree, S = Sparse, W = Wildcard

Group: 239.1.1.1
    Source: *
    RP: 10.1.1.10
    Flags: sparse,rptree,wildcard
    Upstream interface: so-0/0/3.0
    Upstream neighbor: 10.12.100.21
    Upstream state: Join to RP
    Downstream neighbors:
        Interface: mt-1/2/0.49152
            10.12.53.13 State: Join Flags: SRW Timeout: Infinity

Group: 239.1.1.1
    Source: 10.1.1.1
    Flags: sparse,spt
    Upstream interface: so-0/0/3.0
    Upstream neighbor: 10.12.100.21
    Upstream state: Join to Source
    Keepalive timeout: 351
    Downstream neighbors:
        Interface: mt-1/2/0.49152
            10.12.53.13 State: Join Flags: S Timeout: Infinity

Instance: PIM.master Family: INET6
R = Rendezvous Point Tree, S = Sparse, W = Wildcard
On the VRF side of the receiver PE router, C-PIM installs a join entry in its C-PIM database. The OIL of this entry points to the local VPN interface, indicating active local receivers. The upstream protocol, interface, and neighbor of this entry point to the next-generation-MVPN network. Routers referenced in this topic are shown in Understanding Next-Generation MVPN Network Topology.
Use the show pim join extensive instance vpna 224.1.1.1 command to verify that Router PE3 has the following state in its C-PIM database:
user@PE3> show pim join extensive instance vpna 224.1.1.1
Instance: PIM.vpna Family: INET
R = Rendezvous Point Tree, S = Sparse, W = Wildcard

Group: 224.1.1.1
    Source: *
    RP: 10.12.53.1
    Flags: sparse,rptree,wildcard
    Upstream protocol: BGP
    Upstream interface: Through BGP
    Upstream neighbor: Through MVPN
    Upstream state: Join to RP
    Downstream neighbors:
        Interface: so-0/2/0.0
            10.12.87.1 State: Join Flags: SRW Timeout: Infinity

Group: 224.1.1.1
    Source: 192.168.1.2
    Flags: sparse
    Upstream protocol: BGP
    Upstream interface: Through BGP
    Upstream neighbor: Through MVPN
    Upstream state: Join to Source
    Keepalive timeout:
    Downstream neighbors:
        Interface: so-0/2/0.0
            10.12.87.1 State: Join Flags: S Timeout: 195

Instance: PIM.vpna Family: INET6
R = Rendezvous Point Tree, S = Sparse, W = Wildcard
The forwarding entry corresponding to the C-PIM entry on the receiver PE router uses the multicast tunnel de-encapsulation subinterface as the upstream interface.
Use the show multicast route extensive instance vpna group 224.1.1.1 command to verify that Router PE3 has installed the following multicast forwarding entry for the local receiver:
user@PE3> show multicast route extensive instance vpna group 224.1.1.1
Family: INET

Group: 224.1.1.1
    Source: 192.168.1.2/32
    Upstream interface: mt-1/2/0.49152
    Downstream interface list:
        so-0/2/0.0
    Session description: ST Multicast Groups
    Statistics: 1 kBps, 10 pps, 149 packets
    Next-hop ID: 262144
    Upstream protocol: MVPN
    Route state: Active
    Forwarding state: Forwarding
    Cache lifetime/timeout: forever
    Wrong incoming interface notifications: 0
Provider Tunnels Signaled by RSVP-TE (Inclusive and Selective)
Junos OS supports signaling both inclusive and selective provider tunnels by RSVP-TE point-to-multipoint label-switched paths (LSPs). You can configure a combination of inclusive and selective provider tunnels per VPN.
If you configure a VPN to use an inclusive provider tunnel, the sender PE router signals one point-to-multipoint LSP for the VPN.
If you configure a VPN to use selective provider tunnels, the sender PE router signals a point-to-multipoint LSP for each selective tunnel configured.
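As a sketch (the instance name is a placeholder), an inclusive RSVP-TE provider tunnel is configured under the provider-tunnel hierarchy, typically referencing a point-to-multipoint LSP template:

```
# Hypothetical sketch: inclusive RSVP-TE point-to-multipoint provider tunnel.
# default-template lets the system create the P2MP LSP and its sub-LSPs
# automatically, using the naming conventions described later in this topic.
[edit routing-instances vpna provider-tunnel]
rsvp-te {
    label-switched-path-template {
        default-template;
    }
}
```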
Sender (ingress) PE routers and receiver (egress) PE routers play different roles in the point-to-multipoint LSP setup. Sender PE routers are mainly responsible for initiating the parent point-to-multipoint LSP and the sub-LSPs associated with it. Receiver PE routers are responsible for setting up state such that they can forward packets received over a sub-LSP to the correct VRF table (binding a provider tunnel to the VRF).
- Inclusive Tunnels: Ingress PE Router Point-to-Multipoint LSP Setup
- Inclusive Tunnels: Egress PE Router Point-to-Multipoint LSP Setup
- Inclusive Tunnels: Egress PE Router Data Plane Setup
- Inclusive Tunnels: Ingress and Branch PE Router Data Plane Setup
- Selective Tunnels: Type 3 S-PMSI Autodiscovery and Type 4 Leaf Autodiscovery Routes
Inclusive Tunnels: Ingress PE Router Point-to-Multipoint LSP Setup
The point-to-multipoint LSP and associated sub-LSPs are signaled by the ingress PE router. The information about the point-to-multipoint LSP is advertised to egress PE routers in the PMSI attribute via BGP.
The ingress PE router signals point-to-multipoint sub-LSPs by originating point-to-multipoint RSVP path messages toward egress PE routers. The ingress PE router learns the identity of the egress PE routers from Type 1 routes installed in its <routing-instance-name>.mvpn.0 table. Each RSVP path message carries an S2L_Sub_LSP object along with the point-to-multipoint session object. The S2L_Sub_LSP object carries a 4-byte sub-LSP destination (egress) IP address.
In Junos OS, sub-LSPs associated with a point-to-multipoint LSP can be signaled automatically by the system or via a static sub-LSP configuration. When they are automatically signaled, the system chooses a name for the point-to-multipoint LSP and each sub-LSP associated with it using the following naming convention.
Point-to-multipoint LSPs naming convention:
<ingress PE rid>:<a per VRF unique number>:mvpn:<routing-instance-name>
Sub-LSPs naming convention:
<egress PE rid>:<ingress PE rid>:<a per VRF unique number>:mvpn:<routing-instance-name>
Use the show mpls lsp p2mp command to verify that the following LSPs have been created by Router PE1:
Parent P2MP LSP: 10.1.1.1:65535:mvpn:vpna
Sub-LSPs: 10.1.1.2:10.1.1.1:65535:mvpn:vpna (Router PE1 to Router PE2) and 10.1.1.3:10.1.1.1:65535:mvpn:vpna (Router PE1 to Router PE3)
user@PE1> show mpls lsp p2mp
Ingress LSP: 1 sessions
P2MP name: 10.1.1.1:65535:mvpn:vpna, P2MP branch count: 2
To              From            State Rt P     ActivePath       LSPname
10.1.1.2        10.1.1.1        Up     0 *                      10.1.1.2:10.1.1.1:65535:mvpn:vpna
10.1.1.3        10.1.1.1        Up     0 *                      10.1.1.3:10.1.1.1:65535:mvpn:vpna
Total 2 displayed, Up 2, Down 0

Egress LSP: 0 sessions
Total 0 displayed, Up 0, Down 0

Transit LSP: 0 sessions
Total 0 displayed, Up 0, Down 0
The values in this example are as follows:
I-PMSI P2MP LSP name: 10.1.1.1:65535:mvpn:vpna
I-PMSI P2MP sub-LSP name (to PE2): 10.1.1.2:10.1.1.1:65535:mvpn:vpna
I-PMSI P2MP sub-LSP name (to PE3): 10.1.1.3:10.1.1.1:65535:mvpn:vpna
Inclusive Tunnels: Egress PE Router Point-to-Multipoint LSP Setup
An egress PE router responds to an RSVP path message by originating an RSVP reservation (RESV) message per normal RSVP procedures. The RESV message contains the MPLS label allocated by the egress PE router for this sub-LSP and is forwarded hop by hop toward the ingress PE router, thus setting up state on the network. Routers referenced in this topic are shown in Understanding Next-Generation MVPN Network Topology.
Use the show rsvp session command to verify that Router PE2 has assigned label 299840 for the sub-LSP 10.1.1.2:10.1.1.1:65535:mvpn:vpna:
user@PE2> show rsvp session
Total 0 displayed, Up 0, Down 0

Egress RSVP: 1 sessions
To              From            State Rt Style Labelin Labelout LSPname
10.1.1.2        10.1.1.1        Up     0  1 SE  299840        - 10.1.1.2:10.1.1.1:65535:mvpn:vpna
Total 1 displayed, Up 1, Down 0

Transit RSVP: 0 sessions
Total 0 displayed, Up 0, Down 0
Use the show mpls lsp p2mp command to verify that Router PE3 has assigned label 16 for the sub-LSP 10.1.1.3:10.1.1.1:65535:mvpn:vpna:
user@PE3> show mpls lsp p2mp
Ingress LSP: 0 sessions
Total 0 displayed, Up 0, Down 0

Egress LSP: 1 sessions
P2MP name: 10.1.1.1:65535:mvpn:vpna, P2MP branch count: 1
To              From            State Rt Style Labelin Labelout LSPname
10.1.1.3        10.1.1.1        Up     0  1 SE      16        - 10.1.1.3:10.1.1.1:65535:mvpn:vpna
Total 1 displayed, Up 1, Down 0

Transit LSP: 0 sessions
Total 0 displayed, Up 0, Down 0
Inclusive Tunnels: Egress PE Router Data Plane Setup
The egress PE router installs a forwarding entry in its mpls table for the label it allocated for the sub-LSP. The MPLS label is installed with a pop operation (a pop operation removes the top MPLS label), and the packet is passed on to the VRF table for a second route lookup. The second lookup on the egress PE router is necessary for the VPN multicast data packets to be processed inside the VRF table using normal C-PIM procedures.
Use the show route table mpls label 16 command to verify that Router PE3 has installed the following label entry in its MPLS forwarding table:
user@PE3> show route table mpls label 16
+ = Active Route, - = Last Active, * = Both

16                 *[VPN/0] 03:03:17
                      to table vpna.inet.0, Pop
In Junos OS, VPN multicast routing entries are stored in the <routing-instance-name>.inet.1 table, which is where the second route lookup occurs. In the example above, even though vpna.inet.0 is listed as the routing table where the second lookup happens after the pop operation, internally the lookup is pointed to the vpna.inet.1 table. Routers referenced in this topic are shown in Understanding Next-Generation MVPN Network Topology.
Use the show route table vpna.inet.1 command to verify that Router PE3 contains the following entry in its VPN multicast routing table:
user@PE3> show route table vpna.inet.1

vpna.inet.1: 1 destinations, 1 routes (1 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

224.1.1.1,192.168.1.2/32*[MVPN/70] 00:04:10
                      Multicast (IPv4)
Use the show multicast route extensive instance vpna command to verify that Router PE3 contains the following VPN multicast forwarding entry corresponding to the multicast routing entry for the Local join. The upstream interface points to lsi.0, and the downstream interface (OIL) points to the so-0/2/0.0 interface (toward local receivers). The Upstream protocol value is MVPN because the VPN multicast source is reachable via the next-generation MVPN network. The lsi.0 interface is similar to the multicast tunnel interface used with PIM-based provider tunnels; it is used for removing the top MPLS header.
user@PE3> show multicast route extensive instance vpna
Family: INET

Group: 224.1.1.1
    Source: 192.168.1.2/32
    Upstream interface: lsi.0
    Downstream interface list:
        so-0/2/0.0
    Session description: ST Multicast Groups
    Statistics: 1 kBps, 10 pps, 3472 packets
    Next-hop ID: 262144
    Upstream protocol: MVPN
    Route state: Active
    Forwarding state: Forwarding
    Cache lifetime/timeout: forever
    Wrong incoming interface notifications: 0

Family: INET6
Performing a double route lookup on the VPN packet header requires two additional configuration considerations on the egress PE routers when provider tunnels are signaled by RSVP-TE.
First, since the top MPLS label used for the point-to-multipoint sub-LSP is actually tied to the VRF table on the egress PE routers, the penultimate-hop popping (PHP) operation is not used for next-generation MVPNs. Only ultimate-hop popping is used. PHP allows the penultimate router (router before the egress PE router) to remove the top MPLS label. PHP works well for VPN unicast data packets because they typically carry two MPLS labels: one for the VPN and one for the transport LSP.
After the LSP label is removed, unicast VPN packets still have a VPN label that can be used for determining the VPN to which the packets belong. VPN multicast data packets, on the other hand, carry only one MPLS label that is directly tied to the VPN. Therefore, the MPLS label carried by VPN multicast packets must be preserved until the packets reach the egress PE router. Normally, PHP must be disabled through manual configuration.
To simplify the configuration, PHP is disabled by default on Juniper Networks PE routers when you include the mvpn statement at the [edit routing-instances routing-instance-name protocols] hierarchy level. PHP is also disabled by default when you include the vrf-table-label statement at the [edit routing-instances routing-instance-name] hierarchy level.
Second, in Junos OS, VPN labels associated with a VRF table can be allocated in two ways.
- Allocate a unique label for each VPN next hop (PE-CE interface). This is the default behavior.
- Allocate one label for the entire VRF table, which requires additional configuration. Only allocating a label for the entire VRF table allows a second lookup on the VPN packet's header. Therefore, PE routers supporting next-generation MVPN services must be configured to allocate labels for the VRF table. There are two ways to do this, as shown in Figure 1:
  - One is by including a virtual tunnel interface named vt at the [edit routing-instances routing-instance-name interfaces] hierarchy level, which requires a Tunnel PIC.
  - The second is by including the vrf-table-label statement at the [edit routing-instances routing-instance-name] hierarchy level, which does not require a Tunnel PIC.
Both of these options enable an egress PE router to perform two route lookups. However, there are some differences in the way the second lookup is done.
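The two options can be sketched as follows (the interface and instance names are placeholders; the vt option requires a Tunnel PIC):

```
# Option 1 (hypothetical sketch): virtual tunnel (vt) interface in the VRF.
[edit routing-instances vpna]
interface vt-0/1/0.0;

# Option 2: allocate one label for the entire VRF table; no Tunnel PIC required.
[edit routing-instances vpna]
vrf-table-label;
```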
If the vt interface is used, the allocated label is installed in the mpls table with a pop operation and a forwarding next hop pointing to the vt interface.
Use the show route table mpls label 299840 command to verify that Router PE2 has installed the following entry, which uses a vt interface, in its mpls table. The label associated with the point-to-multipoint sub-LSP (299840) is installed with a pop and forward operation, with the vt-0/1/0.0 interface as the next hop. VPN multicast packets received from the core exit the vt-0/1/0.0 interface without their MPLS header, and the egress Router PE2 does a second lookup on the packet header in the vpna.inet.1 table.
user@PE2> show route table mpls label 299840

mpls.0: 9 destinations, 9 routes (9 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

299840             *[VPN/0] 00:00:22
                    > via vt-0/1/0.0, Pop
If vrf-table-label is configured, the allocated label is installed in the mpls table with a pop operation, and the forwarding entry points to the <routing-instance-name>.inet.0 table (which internally triggers the second lookup to be done in the <routing-instance-name>.inet.1 table).
Use the show route table mpls label 16 command to verify that Router PE3, which uses the vrf-table-label statement, has installed the following entry in its mpls table:
user@PE3> show route table mpls label 16

mpls.0: 8 destinations, 8 routes (8 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

16                 *[VPN/0] 03:03:17
                      to table vpna.inet.0, Pop
Configuring label allocation for each VRF table affects both unicast VPN and MVPN routes. However, if per-VRF allocation is configured via the vt interface, you can restrict per-VRF label allocation to MVPN routes only. This is configured with the multicast and unicast keywords at the [edit routing-instances routing-instance-name interface vt-x/y/z.0] hierarchy level.
Note that including the vrf-table-label statement enables per-VRF label allocation for both unicast and MVPN routes and cannot be turned off for either type of route (it is either on or off for both).
If a PE router is a bud router, meaning it has local receivers and also forwards MPLS packets received over a point-to-multipoint LSP downstream to other P and PE routers, then there is a difference in how the vrf-table-label and vt statements work.
When the vrf-table-label statement is included, the bud PE router receives two copies of the packet from the penultimate router: one to be forwarded to local receivers and the other to be forwarded to downstream P and PE routers. When the vt statement is included, the PE router receives a single copy of the packet.
Inclusive Tunnels: Ingress and Branch PE Router Data Plane Setup
On the ingress PE router, local VPN data packets are encapsulated with the MPLS label received from the network for sub-LSPs.
Use the show rsvp session command to verify that on the ingress Router PE1, VPN multicast data packets are encapsulated with MPLS label 300016 (advertised by Router P1 per normal RSVP RESV procedures) and forwarded toward Router P1 down the sub-LSPs 10.1.1.3:10.1.1.1:65535:mvpn:vpna and 10.1.1.2:10.1.1.1:65535:mvpn:vpna.
user@PE1> show rsvp session
Ingress RSVP: 2 sessions
To              From            State Rt Style Labelin Labelout LSPname
10.1.1.3        10.1.1.1        Up     0  1 SE       -   300016 10.1.1.3:10.1.1.1:65535:mvpn:vpna
10.1.1.2        10.1.1.1        Up     0  1 SE       -   300016 10.1.1.2:10.1.1.1:65535:mvpn:vpna
Total 2 displayed, Up 2, Down 0

Egress RSVP: 0 sessions
Total 0 displayed, Up 0, Down 0

Transit RSVP: 0 sessions
Total 0 displayed, Up 0, Down 0
RFC 4875 describes a branch node as “an LSR that replicates the incoming data on to one or more outgoing interfaces.” On a branch router, the incoming data carrying an MPLS label is replicated onto one or more outgoing interfaces that can use different MPLS labels. Branch nodes keep track of incoming and outgoing labels associated with point-to-multipoint LSPs. Routers referenced in this topic are shown in Understanding Next-Generation MVPN Network Topology.
Use the show rsvp session command to verify that branch node P1 has the incoming label 300016 and the outgoing labels 16 for sub-LSP 10.1.1.3:10.1.1.1:65535:mvpn:vpna (to Router PE3) and 299840 for sub-LSP 10.1.1.2:10.1.1.1:65535:mvpn:vpna (to Router PE2).
user@P1> show rsvp session
Ingress RSVP: 0 sessions
Total 0 displayed, Up 0, Down 0

Egress RSVP: 0 sessions
Total 0 displayed, Up 0, Down 0

Transit RSVP: 2 sessions
To              From            State Rt Style Labelin Labelout LSPname
10.1.1.3        10.1.1.1        Up     0  1 SE  300016       16 10.1.1.3:10.1.1.1:65535:mvpn:vpna
10.1.1.2        10.1.1.1        Up     0  1 SE  300016   299840 10.1.1.2:10.1.1.1:65535:mvpn:vpna
Total 2 displayed, Up 2, Down 0
Use the show route table mpls label 300016 command to verify that the corresponding forwarding entry on Router P1 shows that packets coming in with one MPLS label (300016) are swapped with labels 16 and 299840 and forwarded out through their respective interfaces (so-0/0/3.0 toward Router PE3 and so-0/0/1.0 toward Router PE2).
user@P1> show route table mpls label 300016

mpls.0: 10 destinations, 10 routes (10 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

300016             *[RSVP/7] 01:58:15, metric 1
                    > via so-0/0/3.0, Swap 16
                      via so-0/0/1.0, Swap 299840
Selective Tunnels: Type 3 S-PMSI Autodiscovery and Type 4 Leaf Autodiscovery Routes
Selective provider tunnels are configured by including the selective statement at the [edit routing-instances routing-instance-name provider-tunnel] hierarchy level. You can configure a threshold to trigger the signaling of a selective provider tunnel. Including the selective statement triggers the following events.
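As a sketch (the group, source, and threshold values are hypothetical), a selective RSVP-TE provider tunnel with a data-rate threshold might be configured as:

```
# Hypothetical sketch: selective P2MP LSP for (C-S, C-G) = (192.168.1.2, 224.1.1.1),
# signaled once the flow exceeds the threshold rate (in kbps).
[edit routing-instances vpna provider-tunnel selective]
group 224.1.1.1/32 {
    source 192.168.1.2/32 {
        rsvp-te {
            label-switched-path-template {
                default-template;
            }
        }
        threshold-rate 10;
    }
}
```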
First, the ingress PE router originates a Type 3 S-PMSI autodiscovery route. The S-PMSI autodiscovery route contains the route distinguisher of the VPN where the tunnel is configured and the (C-S, C-G) pair that uses the selective provider tunnel.
In this section, assume that Router PE1 is signaling a selective tunnel for (192.168.1.2, 224.1.1.1) and Router PE3 has an active receiver.
Use the show route table vpna.mvpn.0 | find 3: command to verify that Router PE1 has installed the following Type 3 route after the selective provider tunnel is configured:
user@PE1> show route table vpna.mvpn.0 | find 3:
3:10.1.1.1:1:32:192.168.1.2:32:224.1.1.1:10.1.1.1/240
                   *[MVPN/70] 00:05:07, metric2 1
                      Indirect
Second, the ingress PE router attaches a PMSI attribute to the Type 3 route. This PMSI attribute is similar to the PMSI attribute advertised for inclusive provider tunnels, with one difference: the PMSI attribute carried with Type 3 routes has its Flags bit set to Leaf Information Required. This means that the sender PE router is requesting receiver PE routers to send a Type 4 route if they have active receivers for the (C-S, C-G) carried in the Type 3 route. Also, remember that for each selective provider tunnel, a new point-to-multipoint LSP and associated sub-LSPs are signaled. The PMSI attribute of a Type 3 route carries information about the new point-to-multipoint LSP.
Use the show route advertising-protocol bgp 10.1.1.3 detail table vpna.mvpn | find 3: command to verify that Router PE1 advertises the following Type 3 route and the PMSI attribute. The point-to-multipoint session object included in the PMSI attribute has a different port number (29499) than the one used for the inclusive tunnel (6574), indicating that this is a new point-to-multipoint tunnel.
user@PE1> show route advertising-protocol bgp 10.1.1.3 detail table vpna.mvpn | find 3:
* 3:10.1.1.1:1:32:192.168.1.2:32:224.1.1.1:10.1.1.1/240 (1 entry, 1 announced)
 BGP group int type Internal
     Route Distinguisher: 10.1.1.1:1
     Nexthop: Self
     Flags: Nexthop Change
     Localpref: 100
     AS path: [65000] I
     Communities: target:10:1
     PMSI: Flags 1:RSVP-TE:label[0:0:0]:Session_13[10.1.1.1:0:29499:10.1.1.1]
Egress PE routers with active receivers should respond to a Type 3 route by originating a Type 4 leaf autodiscovery route. A leaf autodiscovery route contains a route key and the originating router's IP address fields. The Route Key field of the leaf autodiscovery route contains the original Type 3 route that is received. The originating router's IP address field is set to the router ID of the PE router originating the leaf autodiscovery route.
The ingress PE router adds each egress PE router that originated the leaf autodiscovery route as a leaf (destination of the sub-LSP for the selective point-to-multipoint LSP). Similarly, the egress PE router that originated the leaf autodiscovery route sets up forwarding state to start receiving data through the selective provider tunnel.
Egress PE routers advertise Type 4 routes with a route target that is specific to the PE router signaling the selective provider tunnel. This route target is in the form of target:<rid of the sender PE>:0. The sender PE router (the PE router signaling the selective provider tunnel) applies a special internal import policy to Type 4 routes that looks for a route target with its own router ID. Routers referenced in this topic are shown in Understanding Next-Generation MVPN Network Topology.
Use the show route table vpna.mvpn | find 4:3: command to verify that Router PE3 originates the following Type 4 route. The local Type 4 route is installed by the MVPN module.
user@PE3> show route table vpna.mvpn | find 4:3:
4:3:10.1.1.1:1:32:192.168.1.2:32:224.1.1.1:10.1.1.1:10.1.1.3/240
                   *[MVPN/70] 00:15:29, metric2 1
                      Indirect
Use the show route advertising-protocol bgp 10.1.1.1 table vpna.mvpn detail | find 4:3: command to verify that Router PE3 has advertised the local Type 4 route with the following route target community. This route target carries the IP address of the sender PE router (10.1.1.1) followed by a 0.
user@PE3> show route advertising-protocol bgp 10.1.1.1 table vpna.mvpn detail | find 4:3:
* 4:3:10.1.1.1:1:32:192.168.1.2:32:224.1.1.1:10.1.1.1:10.1.1.3/240 (1 entry, 1 announced)
 BGP group int type Internal
     Nexthop: Self
     Flags: Nexthop Change
     Localpref: 100
     AS path: [65000] I
     Communities: target:10.1.1.1:0
Use the show policy __vrf-mvpn-import-cmcast-leafAD-global-internal__ command to verify that Router PE1 (the PE router signaling the selective provider tunnel) has applied the following import policy to Type 4 routes. The routes are accepted if their route target matches target:10.1.1.1:0.
user@PE1> show policy __vrf-mvpn-import-cmcast-leafAD-global-internal__
Policy __vrf-mvpn-import-cmcast-leafAD-global-internal__:
    Term unnamed:
        from community __vrf-mvpn-community-rt_import-target-global-internal__ [target:10.1.1.1:0 ]
        then accept
    Term unnamed:
        then reject
For each selective provider tunnel configured, a Type 3 route is advertised and a new point-to-multipoint LSP is signaled. Point-to-multipoint LSPs created by Junos OS for selective provider tunnels are named using the following naming conventions:
Selective point-to-multipoint LSPs naming convention:
<ingress PE rid>:<a per VRF unique number>:mv<a unique number>:<routing-instance-name>
Selective point-to-multipoint sub-LSP naming convention:
<egress PE rid>:<ingress PE rid>:<a per VRF unique number>:mv<a unique number>:<routing-instance-name>
Use the show mpls lsp p2mp command to verify that Router PE1 signals point-to-multipoint LSP 10.1.1.1:65535:mv5:vpna with one sub-LSP, 10.1.1.3:10.1.1.1:65535:mv5:vpna. The first point-to-multipoint LSP, 10.1.1.1:65535:mvpn:vpna, is the LSP created for the inclusive tunnel.
user@PE1> show mpls lsp p2mp
Ingress LSP: 2 sessions
P2MP name: 10.1.1.1:65535:mvpn:vpna, P2MP branch count: 2
To              From            State Rt P     ActivePath       LSPname
10.1.1.3        10.1.1.1        Up     0 *                      10.1.1.3:10.1.1.1:65535:mvpn:vpna
10.1.1.2        10.1.1.1        Up     0 *                      10.1.1.2:10.1.1.1:65535:mvpn:vpna
P2MP name: 10.1.1.1:65535:mv5:vpna, P2MP branch count: 1
To              From            State Rt P     ActivePath       LSPname
10.1.1.3        10.1.1.1        Up     0 *                      10.1.1.3:10.1.1.1:65535:mv5:vpna
Total 3 displayed, Up 3, Down 0

Egress LSP: 0 sessions
Total 0 displayed, Up 0, Down 0

Transit LSP: 0 sessions
Total 0 displayed, Up 0, Down 0
The values in this example are as follows.
I-PMSI P2MP LSP name: 10.1.1.1:65535:mvpn:vpna
I-PMSI P2MP sub-LSP name (to PE2): 10.1.1.2:10.1.1.1:65535:mvpn:vpna
I-PMSI P2MP sub-LSP name (to PE3): 10.1.1.3:10.1.1.1:65535:mvpn:vpna
S-PMSI P2MP LSP name: 10.1.1.1:65535:mv5:vpna
S-PMSI P2MP sub-LSP name (to PE3): 10.1.1.3:10.1.1.1:65535:mv5:vpna