Tunnel Services Overview
By encapsulating arbitrary packets inside a transport protocol, tunneling provides a private, secure path through an otherwise public network. Tunnels connect discontinuous subnetworks and enable encryption interfaces, virtual private networks (VPNs), and MPLS. If you have a Tunnel Physical Interface Card (PIC) installed in your M Series or T Series router, you can configure unicast, multicast, and logical tunnels.
You can configure two types of tunnels for VPNs: one to facilitate routing table lookups and another to facilitate VPN routing and forwarding instance (VRF) table lookups.
For information about encryption interfaces, see Configuring Encryption Interfaces. For information about VPNs, see the Junos OS VPNs Library for Routing Devices. For information about MPLS, see the MPLS Applications User Guide.
On SRX Series Firewalls, Generic Routing Encapsulation (GRE) and IP-IP tunnels use internal interfaces, gr-0/0/0 and ip-0/0/0, respectively. The Junos OS creates these interfaces at system bootup; they are not associated with physical interfaces.
The Juniper Networks Junos OS supports the tunnel types shown in the following table.
| Interface | Description |
|---|---|
| gr- | Configurable generic routing encapsulation (GRE) interface. GRE allows the encapsulation of one routing protocol over another routing protocol. Within a router, packets are routed to this internal interface, where they are first encapsulated with a GRE packet and then re-encapsulated with another protocol packet to complete the GRE encapsulation. The GRE interface is an internal interface only and is not associated with a physical interface. You must configure the interface for it to perform GRE. |
| gre | Internally generated GRE interface. This interface is generated by the Junos OS to handle GRE. You cannot configure this interface. |
| ip- | Configurable IP-over-IP encapsulation (also called IP tunneling) interface. IP tunneling allows the encapsulation of one IP packet over another IP packet. Packets are routed to an internal interface where they are encapsulated with an IP packet and then forwarded to the encapsulating packet's destination address. The IP-IP interface is an internal interface only and is not associated with a physical interface. You must configure the interface for it to perform IP tunneling. |
| ipip | Internally generated IP-over-IP interface. This interface is generated by the Junos OS to handle IP-over-IP encapsulation. It is not a configurable interface. |
| lt- | Configurable logical tunnel interface, used to interconnect logical systems and routing instances within the same device. On SRX Series Firewalls, the lt- interface interconnects logical systems. |
| mt- | Internally generated multicast tunnel interface. Multicast tunnels filter all unicast packets; if an incoming packet is not destined for a multicast address, the packet is dropped. Within a router, packets are routed to this internal interface for multicast filtering. The multicast tunnel interface is an internal interface only and is not associated with a physical interface. If your router has a Tunnel Services PIC, the Junos OS automatically configures one multicast tunnel interface (mt-) for each VPN you configure. |
| mtun | Internally generated multicast tunnel interface. This interface is generated by the Junos OS to handle multicast tunnel services. It is not a configurable interface. |
| pd- | Configurable Protocol Independent Multicast (PIM) de-encapsulation interface. In PIM sparse mode, the first-hop router encapsulates packets destined for the rendezvous point router. The packets are encapsulated with a unicast header and are forwarded through a unicast tunnel to the rendezvous point. The rendezvous point then de-encapsulates the packets and transmits them through its multicast tree. Within a router, packets are routed to this internal interface for de-encapsulation. The PIM de-encapsulation interface is an internal interface only and is not associated with a physical interface. You must configure the interface for it to perform PIM de-encapsulation. Note: On SRX Series Firewalls, this interface type is … |
| pe- | Configurable PIM encapsulation interface. In PIM sparse mode, the first-hop router encapsulates packets destined for the rendezvous point router. The packets are encapsulated with a unicast header and are forwarded through a unicast tunnel to the rendezvous point. The rendezvous point then de-encapsulates the packets and transmits them through its multicast tree. Within a router, packets are routed to this internal interface for encapsulation. The PIM encapsulation interface is an internal interface only and is not associated with a physical interface. You must configure the interface for it to perform PIM encapsulation. Note: On SRX Series Firewalls, this interface type is … |
| pimd | Internally generated PIM de-encapsulation interface. This interface is generated by the Junos OS to handle PIM de-encapsulation. It is not a configurable interface. |
| pime | Internally generated PIM encapsulation interface. This interface is generated by the Junos OS to handle PIM encapsulation. It is not a configurable interface. |
| vt- | Configurable virtual loopback tunnel interface. Facilitates VRF table lookup based on MPLS labels. This interface type is supported on M Series and T Series routers, but not on SRX Series Firewalls. To configure a virtual loopback tunnel to facilitate VRF table lookup based on MPLS labels, you specify a virtual loopback tunnel interface name and associate it with a routing instance that belongs to a particular routing table. The packet loops back through the virtual loopback tunnel for route lookup. |
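For example, a configurable tunnel interface such as gr- is typically brought up by specifying the tunnel endpoints and a protocol family on a logical unit. The following is a minimal sketch only; the interface name, unit number, and addresses are placeholders rather than values taken from this overview:

[edit interfaces]
gr-0/0/0 {
    unit 0 {
        tunnel {
            source 192.0.2.1;
            destination 192.0.2.2;
        }
        family inet {
            address 10.0.0.1/30;
        }
    }
}

Here, source and destination identify the local and remote tunnel endpoints, and the family inet address is assigned to the tunnel itself. The configurable ip- (IP-IP) interface uses the same tunnel source and destination statements.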
Starting in Junos OS Release 15.1, you can configure Layer 2 Ethernet services over GRE interfaces (gr-fpc/pic/port interfaces that use GRE encapsulation). To enable Layer 2 Ethernet packets to be terminated on GRE tunnels, you must configure the bridge domain protocol family on the gr- interfaces and associate the gr- interfaces with the bridge domain. You must configure the GRE interfaces as core-facing interfaces, and they must be access or trunk interfaces.

- To configure the bridge domain family on gr- interfaces, include the family bridge statement at the [edit interfaces gr-fpc/pic/port unit logical-unit-number] hierarchy level.
- To associate a gr- interface with a bridge domain, include the interface gr-fpc/pic/port statement at the [edit routing-instances routing-instance-name bridge-domains bridge-domain-name] hierarchy level.
- To associate GRE interfaces in a bridge domain with the corresponding VLAN ID or list of VLAN IDs, include the vlan-id (all | none | number) statement or the vlan-id-list [ vlan-id-numbers ] statement at the [edit bridge-domains bridge-domain-name] hierarchy level. The VLAN IDs configured for the bridge domain must match the VLAN IDs that you configure for the GRE interfaces by using the vlan-id (all | none | number) statement or the vlan-id-list [ vlan-id-numbers ] statement at the [edit interfaces gr-fpc/pic/port unit logical-unit-number] hierarchy level.

You can also configure GRE interfaces within a bridge domain that is associated with a virtual switch instance. Layer 2 Ethernet packets over GRE tunnels are also supported with the GRE key option. The gre-key match condition allows a user to match against the GRE key field, which is an optional field in GRE-encapsulated packets. The key can be matched as a single key value, a range of key values, or both.
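The following sketch shows one possible arrangement of the statements described above; the interface name gr-1/0/10, the bridge domain bd100, the addresses, and VLAN ID 100 are illustrative placeholders only:

[edit interfaces gr-1/0/10]
unit 0 {
    tunnel {
        source 192.0.2.1;
        destination 192.0.2.2;
    }
    family bridge {
        interface-mode trunk;
        vlan-id-list [ 100 ];
    }
}

[edit bridge-domains bd100]
vlan-id 100;
interface gr-1/0/10.0;

As noted above, the VLAN membership configured on the gr- unit must match the VLAN ID of the bridge domain.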
Starting in Junos OS Release 16.1, Layer 2 port mirroring to a remote collector over a GRE interface is supported.
Tunnel Interfaces on MX Series Routers with Line Cards (MPC7E through MPC11E)
MPC7E-10G, MPC7E-MRATE, MX2K-MPC8E, and MX2K-MPC9E support a total of four inline tunnel interfaces per MPC, one per PIC. You can create a set of tunnel interfaces per PIC slot up to a maximum of four slots (from 0 through 3) on MX Series routers with these MPCs.
MPC10E-15C supports three inline tunnel interfaces per MPC, one per PIC, whereas MPC10E-10C supports two inline tunnel interfaces per MPC, one per PIC. On MX Series routers with MPC10E-15C, you can create a set of tunnel interfaces per PIC slot up to a maximum of three slots (from 0 through 2), and on MX Series routers with MPC10E-10C, up to a maximum of two slots (0 and 1). MX2K-MPC11E supports eight inline tunnel interfaces per MPC, one per PIC. On MX Series routers with MX2K-MPC11E, you can create a set of tunnel interfaces per PIC slot up to a maximum of eight slots (from 0 through 7). These PICs are referred to as pseudo tunnel PICs. You create tunnel interfaces on MX Series routers with MPC7E-10G, MPC7E-MRATE, MX2K-MPC8E, MX2K-MPC9E, MPC10E-15C, MPC10E-10C, and MX2K-MPC11E by including the following statements at the [edit chassis] hierarchy level:
[edit chassis]
fpc slot-number {
    pic number {
        tunnel-services {
            bandwidth;
        }
    }
}
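For example, to allocate 100Gbps of tunnel bandwidth to the pseudo tunnel PIC in slot 0 of an MPC in FPC slot 2 (the slot numbers and bandwidth value here are illustrative, not requirements), you would configure:

[edit chassis]
fpc 2 {
    pic 0 {
        tunnel-services {
            bandwidth 100g;
        }
    }
}

After you commit the configuration, the router creates the corresponding tunnel interfaces (for example, gr-2/0/0 and lt-2/0/0) on that pseudo tunnel PIC.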
- Packet Forwarding Engine Mapping and Tunnel Bandwidth for MPC7E-MRATE
- Packet Forwarding Engine Mapping and Tunnel Bandwidth for MPC7E-10G
- Packet Forwarding Engine Mapping and Tunnel Bandwidth for MX2K-MPC8E
- Packet Forwarding Engine Mapping and Tunnel Bandwidth for MX2K-MPC9E
- Packet Forwarding Engine Mapping and Tunnel Bandwidth for MPC10E-10C
- Packet Forwarding Engine Mapping and Tunnel Bandwidth for MPC10E-15C
- Packet Forwarding Engine Mapping and Tunnel Bandwidth for MX2K-MPC11E
- Packet Forwarding Engine Mapping and Tunnel Bandwidth for MX10K-LC9600
Packet Forwarding Engine Mapping and Tunnel Bandwidth for MPC7E-MRATE
The tunnel bandwidth for MPC7E-MRATE is 1–120Gbps with an increment of 1Gbps. However, if you do not specify the bandwidth in the configuration, it is set to 120Gbps.
Table 2 shows the mapping between the tunnel bandwidth and the Packet Forwarding Engines for MPC7E-MRATE.
| Pseudo Tunnel PIC | Maximum Bandwidth per Tunnel PIC | PFE Mapping | Maximum Tunnel Bandwidth per PFE | Maximum PFE Bandwidth |
|---|---|---|---|---|
| PIC0 | 120Gbps | PFE0 | 120Gbps | 240Gbps |
| PIC1 | 120Gbps | PFE0 | 120Gbps | 240Gbps |
| PIC2 | 120Gbps | PFE1 | 120Gbps | 240Gbps |
| PIC3 | 120Gbps | PFE1 | 120Gbps | 240Gbps |
Packet Forwarding Engine Mapping and Tunnel Bandwidth for MPC7E-10G
The tunnel bandwidth for MPC7E-10G is 1–120Gbps with an increment of 1Gbps. However, if you do not specify the bandwidth in the configuration, it is set to 120Gbps.
Table 3 shows the mapping between the tunnel bandwidth and the Packet Forwarding Engines for MPC7E-10G.
| Pseudo Tunnel PIC | Maximum Bandwidth per Tunnel PIC | PFE Mapping | Maximum Tunnel Bandwidth per PFE | Maximum PFE Bandwidth |
|---|---|---|---|---|
| PIC0 | 120Gbps | PFE0 | 120Gbps | 200Gbps |
| PIC1 | 120Gbps | PFE0 | 120Gbps | 200Gbps |
| PIC2 | 120Gbps | PFE1 | 120Gbps | 200Gbps |
| PIC3 | 120Gbps | PFE1 | 120Gbps | 200Gbps |
Packet Forwarding Engine Mapping and Tunnel Bandwidth for MX2K-MPC8E
The tunnel bandwidth for MX2K-MPC8E is 1–120Gbps with an increment of 1Gbps. However, if you do not specify the bandwidth in the configuration, it is set to 120Gbps.
Table 4 shows the mapping between the tunnel bandwidth and the Packet Forwarding Engines for MX2K-MPC8E.
| Pseudo Tunnel PIC | Maximum Bandwidth per Tunnel PIC | Packet Forwarding Engine Mapping | Maximum Tunnel Bandwidth per PFE | Maximum PFE Bandwidth |
|---|---|---|---|---|
| PIC0 | 120Gbps | PFE0 | 120Gbps | 240Gbps |
| PIC1 | 120Gbps | PFE1 | 120Gbps | 240Gbps |
| PIC2 | 120Gbps | PFE2 | 120Gbps | 240Gbps |
| PIC3 | 120Gbps | PFE3 | 120Gbps | 240Gbps |
Packet Forwarding Engine Mapping and Tunnel Bandwidth for MX2K-MPC9E
The tunnel bandwidth for MX2K-MPC9E is 1–200Gbps with an increment of 1Gbps. However, if you do not specify the bandwidth in the configuration, it is set to 200Gbps.
Table 5 shows the mapping between the tunnel bandwidth and the Packet Forwarding Engines for MX2K-MPC9E.
| Pseudo Tunnel PIC | Maximum Bandwidth per Tunnel PIC | PFE Mapping | Maximum Tunnel Bandwidth per PFE | Maximum PFE Bandwidth |
|---|---|---|---|---|
| PIC0 | 200Gbps | PFE0 | 200Gbps | 400Gbps |
| PIC1 | 200Gbps | PFE1 | 200Gbps | 400Gbps |
| PIC2 | 200Gbps | PFE2 | 200Gbps | 400Gbps |
| PIC3 | 200Gbps | PFE3 | 200Gbps | 400Gbps |
Packet Forwarding Engine Mapping and Tunnel Bandwidth for MPC10E-10C
The tunnel bandwidth for MPC10E-10C is 1–400Gbps with an increment of 1Gbps. However, if you do not specify the bandwidth in the configuration, it is set to 400Gbps.
Table 6 shows the mapping between the tunnel bandwidth and the Packet Forwarding Engines for MPC10E-10C.
| Pseudo Tunnel PIC | Maximum Bandwidth per Tunnel PIC | Packet Forwarding Engine Mapping | Maximum Tunnel Bandwidth per PFE | Maximum PFE Bandwidth |
|---|---|---|---|---|
| PIC0 | 250Gbps | PFE0 | 250Gbps | 500Gbps |
| PIC1 | 250Gbps | PFE1 | 250Gbps | 500Gbps |
Packet Forwarding Engine Mapping and Tunnel Bandwidth for MPC10E-15C
The tunnel bandwidth for MPC10E-15C is 1–400Gbps with an increment of 1Gbps. However, if you do not specify the bandwidth in the configuration, it is set to 400Gbps.
Table 7 shows the mapping between the tunnel bandwidth and the Packet Forwarding Engines for MPC10E-15C.
| Pseudo Tunnel PIC | Maximum Bandwidth per Tunnel PIC | Packet Forwarding Engine Mapping | Maximum Tunnel Bandwidth per PFE | Maximum PFE Bandwidth |
|---|---|---|---|---|
| PIC0 | 250Gbps | PFE0 | 250Gbps | 500Gbps |
| PIC1 | 250Gbps | PFE1 | 250Gbps | 500Gbps |
| PIC2 | 250Gbps | PFE2 | 250Gbps | 500Gbps |
Packet Forwarding Engine Mapping and Tunnel Bandwidth for MX2K-MPC11E
The tunnel bandwidth for MX2K-MPC11E is 1–400Gbps with an increment of 1Gbps. However, if you do not specify the bandwidth in the configuration, it is set to 400Gbps.
Table 8 shows the mapping between the tunnel bandwidth and the Packet Forwarding Engines for MX2K-MPC11E.
| Pseudo Tunnel PIC | Maximum Bandwidth per Tunnel PIC | PFE Mapping | Maximum Tunnel Bandwidth per PFE | Maximum PFE Bandwidth |
|---|---|---|---|---|
| PIC0 | 250Gbps | PFE0 | 250Gbps | 500Gbps |
| PIC1 | 250Gbps | PFE1 | 250Gbps | 500Gbps |
| PIC2 | 250Gbps | PFE2 | 250Gbps | 500Gbps |
| PIC3 | 250Gbps | PFE3 | 250Gbps | 500Gbps |
| PIC4 | 250Gbps | PFE4 | 250Gbps | 500Gbps |
| PIC5 | 250Gbps | PFE5 | 250Gbps | 500Gbps |
| PIC6 | 250Gbps | PFE6 | 250Gbps | 500Gbps |
| PIC7 | 250Gbps | PFE7 | 250Gbps | 500Gbps |
If you do not specify a tunnel services bandwidth value in the configuration for MPC10E-10C, MPC10E-15C, or MX2K-MPC11E, the tunnel bandwidth can exceed the maximum tunnel bandwidth per PFE under certain traffic conditions.
Packet Forwarding Engine Mapping and Tunnel Bandwidth for MX10K-LC9600
The tunnel bandwidth for MX10K-LC9600 is 1–400Gbps with an increment of 1Gbps. However, if you do not specify the bandwidth in the configuration, it is set to 400Gbps.
Table 9 shows the mapping between the tunnel bandwidth and the Packet Forwarding Engines for MX10K-LC9600.
| Pseudo Tunnel PIC | Tunnel Port | Maximum Bandwidth per Tunnel PIC | PFE Mapping | Maximum Tunnel Bandwidth per PFE | Maximum PFE Bandwidth |
|---|---|---|---|---|---|
| PIC0 | 0 | 200Gbps | PFE0 | 200Gbps | 800Gbps |
| PIC0 | 1 | 200Gbps | PFE0 | 200Gbps | 800Gbps |
| PIC0 | 2 | 200Gbps | PFE1 | 200Gbps | 800Gbps |
| PIC0 | 3 | 200Gbps | PFE1 | 200Gbps | 800Gbps |
| PIC1 | 0 | 200Gbps | PFE2 | 200Gbps | 800Gbps |
| PIC1 | 1 | 200Gbps | PFE2 | 200Gbps | 800Gbps |
| PIC1 | 2 | 200Gbps | PFE3 | 200Gbps | 800Gbps |
| PIC1 | 3 | 200Gbps | PFE3 | 200Gbps | 800Gbps |
| PIC2 | 0 | 200Gbps | PFE4 | 200Gbps | 800Gbps |
| PIC2 | 1 | 200Gbps | PFE4 | 200Gbps | 800Gbps |
| PIC2 | 2 | 200Gbps | PFE5 | 200Gbps | 800Gbps |
| PIC2 | 3 | 200Gbps | PFE5 | 200Gbps | 800Gbps |
| PIC3 | 0 | 200Gbps | PFE6 | 200Gbps | 800Gbps |
| PIC3 | 1 | 200Gbps | PFE6 | 200Gbps | 800Gbps |
| PIC3 | 2 | 200Gbps | PFE7 | 200Gbps | 800Gbps |
| PIC3 | 3 | 200Gbps | PFE7 | 200Gbps | 800Gbps |
| PIC4 | 0 | 200Gbps | PFE8 | 200Gbps | 800Gbps |
| PIC4 | 1 | 200Gbps | PFE8 | 200Gbps | 800Gbps |
| PIC4 | 2 | 200Gbps | PFE9 | 200Gbps | 800Gbps |
| PIC4 | 3 | 200Gbps | PFE9 | 200Gbps | 800Gbps |
| PIC5 | 0 | 200Gbps | PFE10 | 200Gbps | 800Gbps |
| PIC5 | 1 | 200Gbps | PFE10 | 200Gbps | 800Gbps |
| PIC5 | 2 | 200Gbps | PFE11 | 200Gbps | 800Gbps |
| PIC5 | 3 | 200Gbps | PFE11 | 200Gbps | 800Gbps |
Dynamic Tunnels Overview
A VPN that travels through a non-MPLS network requires a GRE tunnel. This tunnel can be either a static tunnel or a dynamic tunnel. A static tunnel is configured manually between two PE routers. A dynamic tunnel is configured using BGP route resolution.
When a router receives a VPN route that resolves over a BGP next hop that does not have an MPLS path, a GRE tunnel can be created dynamically, allowing the VPN traffic to be forwarded to that route. Only GRE IPv4 tunnels are supported.
To configure a dynamic tunnel between two PE routers, include the dynamic-tunnels statement:

dynamic-tunnels tunnel-name {
    destination-networks prefix;
    source-address address;
}
You can configure this statement at the following hierarchy levels:
- [edit routing-options]
- [edit routing-instances routing-instance-name routing-options]
- [edit logical-systems logical-system-name routing-options]
- [edit logical-systems logical-system-name routing-instances routing-instance-name routing-options]
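For example, a dynamic GRE tunnel configured at the [edit routing-options] hierarchy level might look like the following; the tunnel name, source address, and destination prefix shown here are placeholders:

[edit routing-options]
dynamic-tunnels {
    pe1-to-pe2 {
        source-address 192.0.2.1;
        destination-networks 198.51.100.0/24;
    }
}

The source-address is the local address used as the tunnel source, and destination-networks identifies the prefix containing the remote BGP next hops for which GRE tunnels can be created dynamically.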
Change History Table
Feature support is determined by the platform and release you are using. Use Feature Explorer to determine if a feature is supported on your platform.