ON THIS PAGE
Layer 2 Services over GRE Tunnel Interfaces on MX Series with MPCs
Format of GRE Frames and Processing of GRE Interfaces for Layer 2 Ethernet Packets
Guidelines for Configuring Layer 2 Ethernet Traffic Over GRE Tunnels
Sample Scenarios of Configuring Layer 2 Ethernet Traffic Over GRE Tunnels
Configuring Layer 2 Services over GRE Logical Interfaces in Bridge Domains
Example: Configuring Layer 2 Services Over GRE Logical Interfaces in Bridge Domains
Configuring Layer 2 Ethernet Services over GRE Tunnel Interfaces
Layer 2 Services over GRE Tunnel Interfaces on MX Series with MPCs
Starting in Junos OS Release 15.1, you can configure Layer 2 Ethernet services over GRE interfaces (gr-fpc/pic/port), which use GRE encapsulation.
Starting in Junos OS Release 19.1R1, Layer 2 Ethernet services over GRE interfaces are also supported with IPv6 traffic.
The outputs of the show bridge mac-table and show vpls mac-table commands have been enhanced to display the MAC addresses learned on a GRE logical interface and the status of MAC address learning properties in the MAC address and MAC flags fields. Also, the L2 Routing Instance and L3 Routing Instance fields have been added to the output of the show interfaces gr- command to display the names of the routing instances associated with the GRE interfaces.
To enable Layer 2 Ethernet packets to be terminated on GRE tunnels, you must configure the bridge domain protocol family on the gr- interfaces and associate the gr- interfaces with the bridge domain. You must configure the GRE interfaces as core-facing interfaces, and they must be access or trunk interfaces. To configure the bridge domain family on gr- interfaces, include the family bridge statement at the [edit interfaces gr-fpc/pic/port unit logical-unit-number] hierarchy level. To associate the gr- interface with a bridge domain, include the interface gr-fpc/pic/port statement at the [edit routing-instances routing-instance-name bridge-domains bridge-domain-name] hierarchy level.
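As a minimal sketch of these two statements (the interface, instance, and bridge domain names gr-0/1/10, vs1, and bd0 are hypothetical placeholders, following the style of the example later in this topic), the resulting configuration might look like this:

```
interfaces {
    gr-0/1/10 {
        unit 0 {
            /* bridge domain protocol family on the gr- interface */
            family bridge {
                interface-mode trunk;
                vlan-id-list 1-100;
            }
        }
    }
}
routing-instances {
    vs1 {
        instance-type virtual-switch;
        /* associates the gr- logical interface with the instance */
        interface gr-0/1/10.0;
        bridge-domains {
            bd0 {
                vlan-id 10;
            }
        }
    }
}
```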
You can associate GRE interfaces in a bridge domain with the corresponding VLAN ID or list of VLAN IDs by including the vlan-id (all | none | number) statement or the vlan-id-list [ vlan-id-numbers ] statement at the [edit bridge-domains bridge-domain-name] hierarchy level. The VLAN IDs configured for the bridge domain must match the VLAN IDs that you configure for GRE interfaces by using the vlan-id (all | none | number) statement or the vlan-id-list [ vlan-id-numbers ] statement at the [edit interfaces gr-fpc/pic/port unit logical-unit-number] hierarchy level. You can also configure GRE interfaces within a bridge domain associated with a virtual switch instance. Layer 2 Ethernet packets over GRE tunnels are also supported with the GRE key option. The gre-key match condition allows a user to match against the GRE key field, which is an optional field in GRE-encapsulated packets. The key can be matched as a single key value, a range of key values, or both.
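For illustration, a firewall filter term could use the gre-key match condition as follows; the filter, term, and counter names are hypothetical, and the exact match syntax should be confirmed for your Junos OS release:

```
firewall {
    family inet {
        filter match-gre-key {
            term single-key {
                from {
                    /* match a single GRE key value */
                    gre-key 100;
                }
                then accept;
            }
            term key-range {
                from {
                    /* match a range of GRE key values */
                    gre-key 200-300;
                }
                then {
                    count gre-key-range-hits;
                    accept;
                }
            }
        }
    }
}
```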
Format of GRE Frames and Processing of GRE Interfaces for Layer 2 Ethernet Packets
The GRE frame contains the outer MAC header, outer IP header, GRE header, original layer 2 frame, and frame checksum (FCS).
In the outer MAC header, the following fields are present:
The outer destination MAC address is set as the next-hop MAC address
The outer source MAC address is set as the source address of the MX Series router that functions as the gateway
The outer VLAN tag information
The outer IP header contains the following fields:
The outer source address is set as the source address of the MX Series router gateway
The outer destination address is set as the remote GRE tunnel address
The outer protocol type is set as 47 (encapsulation type is GRE)
The VLAN ID configuration within the bridge domain updates the VLAN ID of the original Layer 2 header
The gr- interface supports GRE encapsulation over IPv4 and IPv6, as is supported for Layer 3 over GRE. Support for bridging over GRE enables you to configure bridge domain families on gr- interfaces and also to enable integrated routing and bridging (IRB) on gr- interfaces. The device control daemon (dcd), which controls the physical and logical interface processes, enables the processing of bridge domain families under the GRE interfaces. The kernel supports IRB to send and receive packets on IRB interfaces.
The Packet Forwarding Engine supports Layer 2 encapsulation and decapsulation over GRE interfaces. The chassis daemon is responsible for creating the GRE physical interface when an FPC comes online and for triggering the deletion of the GRE interfaces when the FPC goes offline. The kernel receives the GRE logical interface that is added over the underlying physical interface and propagates the GRE logical interface to other clients, including the Packet Forwarding Engine, to create the Layer 2 over GRE data path in the hardware. In addition, it adds the GRE logical interface into a bridge domain. The Packet Forwarding Engine receives an interprocess communication (IPC) message from the kernel and adds the interface into the forwarding plane. The existing MTU size for the GRE interface is increased by 22 bytes for the Layer 2 header addition (6-byte destination MAC + 6-byte source MAC + 4-byte customer VLAN tag + 4-byte service VLAN tag + 2-byte EtherType).
Guidelines for Configuring Layer 2 Ethernet Traffic Over GRE Tunnels
Observe the following guidelines while configuring Layer 2 packets to be transmitted over GRE tunnel interfaces on MX Series routers with MPCs:
For integrated routing and bridging (IRB) to work, at least one Layer 2 interface must be up and active, and it must be associated with the bridge domain as an IRB interface along with a GRE Layer 2 logical interface. This configuration is required to leverage the existing broadcast infrastructure of Layer 2 with IRB.
Graceful Routing Engine switchover (GRES) is supported; unified ISSU is not currently supported.
MAC addresses learned from the GRE networks are learned on the bridge domain interfaces associated with the gr-fpc/pic/port.unit logical interface. The MAC addresses are learned on GRE logical interfaces and the Layer 2 token used for forwarding is the token associated with the GRE interface. Destination MAC lookup yields an L2 token, which causes the next-hop lookup. This next-hop is used to forward the packet.
The GRE tunnel encapsulation and decapsulation next-hops are enhanced to support this functionality. The GRE tunnel encapsulation next-hop is used to encapsulate the outer IP and GRE headers with the incoming L2 packet. The GRE tunnel decapsulation next-hop is used to decapsulate the outer IP and GRE headers, parse the inner Layer 2 packet, and set the protocol as bridge for further bridge domain properties processing in the Packet Forwarding Engine.
The following packet flows are supported:
As part of Layer 2 packet flows, L2 unicast from L2 to GRE, L2 unicast from GRE to L2, Layer 2 broadcast, unknown unicast, and multicast (L2 BUM) from L2 to GRE, and L2 BUM from GRE to L2 are supported.
As part of Layer 3 packet flows, L3 unicast from L2 to GRE, L3 unicast from GRE to L2, L3 multicast from L2 to GRE, L3 multicast from GRE to L2, and L3 multicast from the Internet to GRE and L2 are supported.
Support for L2 control protocols is not available.
At the GRE decapsulation side, packets destined to the tunnel IP are processed and decapsulated by the forwarding plane, and inner L2 packets are processed. MAC learned packets are generated for control plane processing for newly learned MAC entries. However, these entries are throttled for MAC learning.
802.1X authentication can be used to validate the individual endpoints and protect them from unauthorized access.
With the capability to configure bridge domain families on GRE tunnel interfaces, the maximum number of GRE interfaces supported depends on the maximum number of tunnel devices allocated, where each tunnel device can host up to 4000 logical interfaces. The maximum number of logical tunnel interfaces supported is not changed with the support for Layer 2 GRE tunnels. For example, in a 4x10 MIC on MX960 routers, 8000 tunnel logical interfaces can be created.
The tunnels are pinned to a specific Packet Forwarding Engine instance.
Statistical information for GRE Layer 2 tunnels is displayed in the output of the show interfaces gr-fpc/pic/port command.
Only trunk and access mode configuration is supported for the bridge family of GRE interfaces; subinterface-style configuration is not supported.
You can enable a connection to a traditional Layer 2 network. Connection to a VPLS network is not currently supported. IRB in bridge domains with GRE interfaces is supported.
Virtual switch instances are supported.
Configuration of the GRE Key and using it to perform the hash load-balancing at the GRE tunnel-initiated and transit routers is supported.
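For example, a GRE key can be configured on the tunnel so that tunnel-initiated and transit routers can include it in hash-based load balancing; the interface name, addresses, and key value below are hypothetical placeholders:

```
interfaces {
    gr-0/1/10 {
        unit 0 {
            tunnel {
                source 192.0.2.2;
                destination 192.0.2.1;
                /* optional GRE key carried in the GRE header */
                key 100;
            }
        }
    }
}
```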
Sample Scenarios of Configuring Layer 2 Ethernet Traffic Over GRE Tunnels
You can configure Layer 2 Ethernet services over GRE interfaces (gr-fpc/pic/port), which use GRE encapsulation. This topic contains the following sections that illustrate sample network deployments that support Layer 2 packets over GRE tunnel interfaces:
GRE Tunnels with an MX Series Router as the Gateway in Layer 3
You can configure an MX Series router as the gateway that contains GRE tunnels configured to connect to legacy switches on one end and to a Layer 3 network on the other end. The Layer 3 network in turn can be linked with multiple servers on a LAN where the GRE tunnel is terminated from the WAN.
GRE Tunnels With an MX Series Router as the Gateway and Aggregator
You can configure an MX Series router as the gateway with GRE tunnels configured and also with aggregation specified. The gateway can be connected to legacy switches on one end of the network, and the aggregator can be connected to a top-of-rack (ToR) switch, such as a QFX Series device, which handles GRE tunneled packets with load balancing. The ToR switch can be connected, in turn, over a Layer 3 GRE tunnel to several servers in data centers.
GRE Tunnels with MX Series Gateways for Enterprise and Data Center Servers
You can configure an MX Series router as the gateway with GRE tunnels configured. Over the Internet, GRE tunnels connect multiple gateways, which are MX routers, to servers in enterprises where the GRE tunnel is terminated from the WAN on one end, and to servers in data centers on the other end.
The following configuration scenarios are supported for Layer 2 Ethernet over GRE tunnels:
In a Layer 2 Ethernet over GRE with VPLS environment, an MX Series router supports Layer 2 over GRE tunnels (without the MPLS layer) and terminates these tunnels into a VPLS instance or a routed VLAN interface (RVI) into an L3VPN. The tunnels serve to cross the cable modem termination system (CMTS) and cable modem (CM) infrastructure transparently, up to the MX Series router that serves as the gateway. Every GRE tunnel terminates over a VLAN interface, a VPLS instance, or an IRB interface.
In a Layer 2 Ethernet over GRE without VPLS environment, Layer 2 over GRE provides Layer 2 connectivity in deployments that do not involve VPLS or MPLS protocols. Certain data center users terminate the other end of GRE tunnels directly on the servers on the LAN, while an MX Series router functions as the gateway router between the WAN and LAN. This type of termination of tunnels enables users to build overlay networks within the data center without having to configure end-user VLANs, IP addresses, and other network parameters on the underlying switches. Such a setup simplifies data center network design and provisioning.
Layer 2 over GRE is not supported on ACX2200 routers.
Configuring Layer 2 Services over GRE Logical Interfaces in Bridge Domains
You can configure Layer 2 Ethernet services over GRE interfaces (gr-fpc/pic/port), which use GRE encapsulation.
The following example shows how to configure a GRE tunnel interface, associate it with a bridge domain within a virtual switch instance, and specify the amount of bandwidth reserved for tunnel services traffic.
Example: Configuring Layer 2 Services Over GRE Logical Interfaces in Bridge Domains
This example illustrates how you can configure GRE logical interfaces in a bridge domain. You can also configure a virtual switch instance associated with a bridge domain and include GRE interfaces in the bridge domain. This type of configuration enables Layer 2 Ethernet packets to be terminated on GRE tunnels. In a Layer 2 Ethernet over GRE with VPLS environment, an MX Series router supports Layer 2 over GRE tunnels (without the MPLS layer) and terminates these tunnels into a VPLS instance or a routed VLAN interface (RVI) into an L3VPN. The tunnels serve to cross the cable modem termination system (CMTS) and cable modem (CM) infrastructure transparently, up to the MX Series router that serves as the gateway. Every GRE tunnel terminates over a VLAN interface, a VPLS instance, or an IRB interface.
Requirements
This example uses the following hardware and software components:
An MX Series router
Junos OS Release 15.1R1 or later running on an MX Series router with MPCs.
Overview
GRE encapsulates packets into IP packets and redirects them to an intermediate host, where they are decapsulated and routed to their final destination. Because the route to the intermediate host appears to the inner datagrams as one hop, the devices at each end of the tunnel can operate as if they have a virtual point-to-point connection with each other. GRE tunnels allow routing protocols such as RIP and OSPF to forward data packets from one device to another across the Internet. In addition, GRE tunnels can encapsulate multicast data streams for transmission over the Internet.
Ethernet frames have all the essentials for networking, such as globally unique source and destination addresses, error control, and so on. Ethernet frames can carry any kind of packet. Networking at Layer 2 is protocol independent (independent of the Layer 3 protocol). If more of the end-to-end transfer of information from a source to a destination can be done in the form of Ethernet frames, more of the benefits of Ethernet can be realized on the network. Networking at Layer 2 can be a powerful adjunct to IP networking, but it is not usually a substitute for IP networking.
Consider a sample network topology in which a GRE tunnel interface is configured with the bandwidth set as 1 gigabit per second for tunnel traffic on each Packet Forwarding Engine. The GRE interface, gr-0/1/10.0, is specified with the source address of 192.0.2.2 and the destination address of 192.0.2.1. Two Gigabit Ethernet interfaces, ge-0/1/2.0 and ge-0/1/6.0, are also configured. A virtual switch instance, VS1, is defined and a bridge domain, bd0, is associated with VS1. The bridge domain contains the VLAN ID of 10. The GRE interface is configured as a trunk interface and associated with the bridge domain, bd0. With such a setup, Layer 2 Ethernet services can be terminated over GRE tunnel interfaces in virtual switch instances that contain bridge domains.
Configuration
To configure a GRE tunnel interface, associate it in a bridge domain within a virtual-switch instance, and specify the amount of bandwidth reserved for tunnel services traffic.
Procedure
CLI Quick Configuration
To quickly configure this example, copy the following commands, paste them in a text file, remove any line breaks, change any details necessary to match your network configuration, and then copy and paste the commands into the CLI at the [edit] hierarchy level:
set chassis fpc 0 pic 1 tunnel-services bandwidth 1g
set chassis network-services enhanced-ip
set interfaces ge-0/1/2 unit 0 family inet address 192.0.2.2/30
set interfaces ge-0/1/6 unit 0 family bridge interface-mode trunk
set interfaces ge-0/1/6 unit 0 family bridge vlan-id-list 1-100
set interfaces gr-0/1/10 unit 0 tunnel source 192.0.2.2
set interfaces gr-0/1/10 unit 0 tunnel destination 192.0.2.1
set interfaces gr-0/1/10 unit 0 family bridge interface-mode trunk
set interfaces gr-0/1/10 unit 0 family bridge vlan-id-list 1-100
set routing-instances VS1 instance-type virtual-switch
set routing-instances VS1 bridge-domains bd0 vlan-id 10
set routing-instances VS1 interface ge-0/1/6.0
set routing-instances VS1 interface gr-0/1/10.0
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the CLI User Guide.
To configure GRE logical tunnel interfaces for Layer 2 services in bridge domains:
Configure the GRE tunnel interface and specify the amount of bandwidth to reserve for tunnel traffic on each Packet Forwarding Engine.
[edit]
user@host# set chassis fpc 0 pic 1 tunnel-services bandwidth 1g
user@host# set chassis network-services enhanced-ip
Configure the interfaces and their VLAN IDs.
[edit]
user@host# set interfaces ge-0/1/2 unit 0 family inet address 192.0.2.2/30
user@host# set interfaces ge-0/1/6 unit 0 family bridge interface-mode trunk
user@host# set interfaces ge-0/1/6 unit 0 family bridge vlan-id-list 1-100
user@host# set interfaces gr-0/1/10 unit 0 tunnel source 192.0.2.2
user@host# set interfaces gr-0/1/10 unit 0 tunnel destination 192.0.2.1
user@host# set interfaces gr-0/1/10 unit 0 family bridge interface-mode trunk
user@host# set interfaces gr-0/1/10 unit 0 family bridge vlan-id-list 1-100
Configure the bridge domain in a virtual switch instance and associate the GRE interface with it.
[edit]
user@host# set routing-instances VS1 instance-type virtual-switch
user@host# set routing-instances VS1 bridge-domains bd0 vlan-id 10
user@host# set routing-instances VS1 interface ge-0/1/6.0
user@host# set routing-instances VS1 interface gr-0/1/10.0
Results
Display the results of the configuration:
user@host> show configuration
chassis {
fpc 0 {
pic 1 {
tunnel-services {
bandwidth 1g;
}
}
}
network-services enhanced-ip;
}
interfaces {
ge-0/1/2 {
unit 0 {
family inet {
address 192.0.2.2/30;
}
}
}
ge-0/1/6 {
unit 0 {
family bridge {
interface-mode trunk;
vlan-id-list 1-100;
}
}
}
gr-0/1/10 {
unit 0 {
tunnel {
source 192.0.2.2;
destination 192.0.2.1;
}
family bridge {
interface-mode trunk;
vlan-id-list 1-100;
}
}
}
}
routing-instances {
    VS1 {
        instance-type virtual-switch;
        interface ge-0/1/6.0;
        interface gr-0/1/10.0;
        bridge-domains {
            bd0 {
                vlan-id 10;
            }
        }
    }
}
Verification
To confirm that the configuration is working properly, perform these tasks:
Verifying the MAC Addresses Learned on GRE Interfaces
Purpose
Display the MAC addresses learned on a GRE logical interface.
Action
From operational mode, use the show bridge mac-table command.
MAC flags (S -static MAC, D -dynamic MAC, L -locally learned
           SE -Statistics enabled, NM -Non configured MAC, R -Remote PE MAC)

Routing instance : default-switch
 Bridging domain : vlan-1, VLAN : 1
   MAC                 MAC      Logical
   address             flags    interface
   00:00:5e:00:53:f7   D,SE     gr-1/2/10.0
   00:00:5e:00:53:32   D,SE     gr-1/2/10.0
   00:00:5e:00:53:21   DL       ge-1/0/0.0
   00:00:5e:00:53:11   DL       ge-1/1/0.0

Routing instance : default-switch
 Bridging domain : vlan-2, VLAN : 2
   MAC                 MAC      Logical
   address             flags    interface
   00:00:5e:00:53:33   D,SE     gr-1/2/10.1
   00:00:5e:00:53:10   DL       ge-1/0/0.1
   00:00:5e:00:53:23   DL       ge-1/1/0.1
Meaning
The output displays MAC addresses learned on GRE logical tunnels.
Verifying the MAC Address Learning Status
Purpose
Display the status of MAC address learning properties in the MAC address and MAC flags fields.
Action
From operational mode, enter the show vpls mac-table command.
MAC flags (S -static MAC, D -dynamic MAC, L -locally learned
           SE -Statistics enabled, NM -Non configured MAC, R -Remote PE MAC)

Routing instance : vpls_4site:1000
 Bridging domain : __vpls_4site:1000__,
   MAC                 MAC      Logical
   address             flags    interface
   00:00:5e:00:53:f4   D,SE     ge-4/2/0.1000
   00:00:5e:00:53:02   D,SE     lsi.1052004
   00:00:5e:00:53:03   D,SE     lsi.1048840
   00:00:5e:00:53:04   D,SE     lsi.1052005
   00:00:5e:00:53:33   D,SE     gr-1/2/10.10

user@host> show interfaces gr-2/2/10
Physical interface: gr-2/2/10, Enabled, Physical link is Up
  Interface index: 214, SNMP ifIndex: 690
  Type: GRE, Link-level type: GRE, MTU: Unlimited, Speed: 1000mbps
  Device flags   : Present Running
  Interface flags: Point-To-Point SNMP-Traps
  Input rate     : 0 bps (0 pps)
  Output rate    : 0 bps (0 pps)

  Logical interface gr-2/2/10.0 (Index 342) (SNMP ifIndex 10834)
    Flags: Up Point-To-Point SNMP-Traps 0x4000
    IP-Header 198.51.100.1:198.51.100.254:47:df:64:0000000000000000
    Encapsulation: GRE-NULL
    L2 Routing Instance: vs1, L3 Routing Instance: default
    Copy-tos-to-outer-ip-header: Off
    Gre keepalives configured: Off, Gre keepalives adjacency state: down
    Input packets : 2
    Output packets: 0
    Protocol bridge, MTU: 1476
      Flags: Sendbcast-pkt-to-re
      Addresses, Flags: Is-Preferred Is-Primary
        Destination: 6/8, Local: 6.0.0.1, Broadcast: 6.255.255.255

user@host> show interfaces gr-2/2/10.0
  Logical interface gr-2/2/10.0 (Index 342) (SNMP ifIndex 10834)
    Flags: Up Point-To-Point SNMP-Traps 0x4000
    IP-Header 198.51.100.1:198.51.100.254:47:df:64:0000000000000000
    Encapsulation: GRE-NULL
    L2 Routing Instance: vs1, L3 Routing Instance: default
    Copy-tos-to-outer-ip-header: Off
    Gre keepalives configured: Off, Gre keepalives adjacency state: down
    Input packets : 2
    Output packets: 0
    Protocol bridge, MTU: 1476
      Flags: Sendbcast-pkt-to-re
      Addresses, Flags: Is-Preferred Is-Primary
        Destination: 6/8, Local: 6.0.0.1, Broadcast: 6.255.255.255
Meaning
The output displays the status of MAC address learning properties in the MAC address and MAC flags fields. The output also displays the names of the routing instances associated with the GRE interfaces.
Example: Configuring Layer 2 Services Over GRE Logical Interfaces in Bridge Domains with IPv6 Transport
This example illustrates how you can configure GRE logical interfaces in a bridge domain. You can also configure a virtual switch instance associated with a bridge domain and include GRE interfaces in the bridge domain. This type of configuration enables Layer 2 Ethernet packets to be terminated on GRE tunnels. In a Layer 2 Ethernet over GRE with VPLS environment, an MX Series router supports Layer 2 over GRE tunnels (without the MPLS layer) and terminates these tunnels into a VPLS instance or a routed VLAN interface (RVI) into an L3VPN. The tunnels serve to cross the cable modem termination system (CMTS) and cable modem (CM) infrastructure transparently, up to the MX Series router that serves as the gateway. Every GRE tunnel terminates over a VLAN interface, a VPLS instance, or an IRB interface.
Requirements
This example uses the following hardware and software components:
Two MX Series routers
Junos OS Release 19.1R1 or later running on MX Series routers with MPCs.
Overview
GRE-encapsulated IPv6 packets are redirected to an intermediate host, where the GRE header is decapsulated and the packets are routed to the IPv6 destination.
Consider a sample network topology with two devices. On Device 1, the GRE tunnel interface is configured with the bandwidth set as 1 gigabit per second for tunnel traffic on each Packet Forwarding Engine. The GRE interface, gr-0/0/10.0, is specified with the source address of 2001:DB8::2:1 and the destination address of 2001:DB8::3:1. Two interfaces, ae0 and xe-0/0/19, are also configured. A virtual switch instance, VS1, is defined and a bridge domain, bd1, is associated with VS1. The bridge domain contains the VLAN ID of 20. The GRE interface is configured as a trunk interface and associated with the bridge domain, bd1. With such a setup, Layer 2 Ethernet services can be terminated over GRE tunnel interfaces in virtual switch instances that contain bridge domains.
On Device 2, the GRE tunnel interface is configured with the bandwidth set as 1 gigabit per second for tunnel traffic on each Packet Forwarding Engine. The GRE interface, gr-0/0/10.0, is specified with the source address of 2001:DB8::21:1 and the destination address of 2001:DB8::31:1. Two interfaces, ae0 and xe-0/0/1, are also configured. A virtual switch instance, VS1, is defined and a bridge domain, bd1, is associated with VS1. The bridge domain contains the VLAN ID of 20. The GRE interface is configured as an access interface and associated with the bridge domain, bd1.
Configuration
To configure a GRE tunnel interface, associate it in a bridge domain within a virtual-switch instance, and specify the amount of bandwidth reserved for tunnel services traffic.
Procedure
CLI Quick Configuration
To quickly configure this example, copy the following commands, paste them in a text file, remove any line breaks, change any details necessary to match your network configuration, and then copy and paste the commands into the CLI at the [edit] hierarchy level:
For Device 1:
set chassis aggregated-devices ethernet device-count 2
set chassis fpc 0 pic 0 tunnel-services bandwidth 1g
set chassis network-services enhanced-ip
set interfaces ae0 unit 0 family inet6 address 2001:DB8::1:1/32
set interfaces xe-0/0/19 unit 0 family bridge interface-mode trunk
set interfaces xe-0/0/19 unit 0 family bridge vlan-id-list 20-21
set interfaces xe-1/0/2 gigether-options 802.3ad ae0
set interfaces xe-1/0/3 gigether-options 802.3ad ae0
set interfaces gr-0/0/10 unit 0 tunnel source 2001:DB8::2:1
set interfaces gr-0/0/10 unit 0 tunnel destination 2001:DB8::3:1
set interfaces gr-0/0/10 unit 0 family bridge interface-mode trunk
set interfaces gr-0/0/10 unit 0 family bridge vlan-id-list 20-30
set routing-instances VS1 instance-type virtual-switch
set routing-instances VS1 bridge-domains bd1 vlan-id 20
set routing-instances VS1 interface xe-0/0/19.0
set routing-instances VS1 interface gr-0/0/10.0
For Device 2:
set chassis aggregated-devices ethernet device-count 2
set chassis fpc 0 pic 0 tunnel-services bandwidth 1g
set chassis network-services enhanced-ip
set interfaces ae0 unit 0 family inet6 address 2001:DB8::11:1/32
set interfaces xe-0/0/1 unit 0 family bridge interface-mode trunk
set interfaces xe-0/0/1 unit 0 family bridge vlan-id-list 20-21
set interfaces xe-1/0/2 gigether-options 802.3ad ae0
set interfaces xe-1/0/3 gigether-options 802.3ad ae0
set interfaces gr-0/0/10 unit 0 tunnel source 2001:DB8::21:1
set interfaces gr-0/0/10 unit 0 tunnel destination 2001:DB8::31:1
set interfaces gr-0/0/10 unit 0 family bridge interface-mode access
set interfaces gr-0/0/10 unit 0 family bridge vlan-id-list 20-30
set routing-instances VS1 instance-type virtual-switch
set routing-instances VS1 bridge-domains bd1 vlan-id 20
set routing-instances VS1 interface xe-0/0/1.0
set routing-instances VS1 interface gr-0/0/10.0
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the CLI User Guide.
To configure GRE logical tunnel interfaces over IPv6 for Layer 2 services in bridge domains for Device1 and Device2:
Configure the GRE tunnel interface and specify the amount of bandwidth to reserve for tunnel traffic on each Packet Forwarding Engine of Device 1.
[edit]
user@host# set chassis aggregated-devices ethernet device-count 2
user@host# set chassis fpc 0 pic 0 tunnel-services bandwidth 1g
user@host# set chassis network-services enhanced-ip
Configure the interfaces and their VLAN IDs.
[edit]
user@host# set interfaces ae0 unit 0 family inet6 address 2001:DB8::1:1/32
user@host# set interfaces xe-0/0/19 unit 0 family bridge interface-mode trunk
user@host# set interfaces xe-0/0/19 unit 0 family bridge vlan-id-list 20-21
user@host# set interfaces xe-1/0/2 gigether-options 802.3ad ae0
user@host# set interfaces xe-1/0/3 gigether-options 802.3ad ae0
user@host# set interfaces gr-0/0/10 unit 0 tunnel source 2001:DB8::2:1
user@host# set interfaces gr-0/0/10 unit 0 tunnel destination 2001:DB8::3:1
user@host# set interfaces gr-0/0/10 unit 0 family bridge interface-mode trunk
user@host# set interfaces gr-0/0/10 unit 0 family bridge vlan-id-list 20-30
Configure the bridge domain in a virtual switch instance and associate the GRE interface with it.
[edit]
user@host# set routing-instances VS1 instance-type virtual-switch
user@host# set routing-instances VS1 bridge-domains bd1 vlan-id 20
user@host# set routing-instances VS1 interface xe-0/0/19.0
user@host# set routing-instances VS1 interface gr-0/0/10.0
Configure the GRE tunnel interface and specify the amount of bandwidth to reserve for tunnel traffic on each Packet Forwarding Engine of Device 2.
[edit]
user@host# set chassis aggregated-devices ethernet device-count 2
user@host# set chassis fpc 0 pic 0 tunnel-services bandwidth 1g
user@host# set chassis network-services enhanced-ip
Configure the interfaces and their VLAN IDs.
[edit]
user@host# set interfaces ae0 unit 0 family inet6 address 2001:DB8::11:1/32
user@host# set interfaces xe-0/0/1 unit 0 family bridge interface-mode trunk
user@host# set interfaces xe-0/0/1 unit 0 family bridge vlan-id-list 20-21
user@host# set interfaces xe-1/0/2 gigether-options 802.3ad ae0
user@host# set interfaces xe-1/0/3 gigether-options 802.3ad ae0
user@host# set interfaces gr-0/0/10 unit 0 tunnel source 2001:DB8::21:1
user@host# set interfaces gr-0/0/10 unit 0 tunnel destination 2001:DB8::31:1
user@host# set interfaces gr-0/0/10 unit 0 family bridge interface-mode access
user@host# set interfaces gr-0/0/10 unit 0 family bridge vlan-id-list 20-30
Configure the bridge domain in a virtual switch instance and associate the GRE interface with it.
[edit]
user@host# set routing-instances VS1 instance-type virtual-switch
user@host# set routing-instances VS1 bridge-domains bd1 vlan-id 20
user@host# set routing-instances VS1 bridge-domains bd1 routing-interface irb.0
user@host# set routing-instances VS1 interface xe-0/0/1.0
user@host# set routing-instances VS1 interface gr-0/0/10.0
Results
Display the results of the configuration on Device 1:
user@host> show configuration
chassis {
fpc 0 {
pic 0 {
tunnel-services {
bandwidth 1g;
}
}
}
network-services enhanced-ip;
}
interfaces {
ae0 {
unit 0 {
family inet6 {
address 2001:DB8::1:1/32;
}
}
}
xe-0/0/19 {
unit 0 {
family bridge {
interface-mode trunk;
vlan-id-list 20-21;
}
}
}
gr-0/0/10 {
unit 0 {
tunnel {
source 2001:DB8::2:1;
destination 2001:DB8::3:1;
}
family bridge {
interface-mode trunk;
vlan-id-list 20-30;
}
}
}
}
routing-instances {
    VS1 {
        instance-type virtual-switch;
        interface xe-0/0/19.0;
        interface gr-0/0/10.0;
        bridge-domains {
            bd1 {
                vlan-id 20;
            }
        }
    }
}
Display the results of the configuration on Device 2:
user@host> show configuration
chassis {
fpc 0 {
pic 0 {
tunnel-services {
bandwidth 1g;
}
}
}
network-services enhanced-ip;
}
interfaces {
ae0 {
unit 0 {
family inet6 {
address 2001:DB8::11:1/32;
}
}
}
xe-0/0/1 {
unit 0 {
family bridge {
interface-mode trunk;
vlan-id-list 20-21;
}
}
}
gr-0/0/10 {
unit 0 {
tunnel {
source 2001:DB8::21:1;
destination 2001:DB8::31:1;
}
family bridge {
interface-mode access;
vlan-id-list 20-30;
}
}
}
}
routing-instances {
    VS1 {
        instance-type virtual-switch;
        interface xe-0/0/1.0;
        interface gr-0/0/10.0;
        bridge-domains {
            bd1 {
                vlan-id 20;
            }
        }
    }
}
Verification
To confirm that the configuration is working properly, perform these tasks:
Verifying the MAC Addresses Learned on GRE Interfaces
Purpose
Display the MAC addresses learned on a GRE logical interface.
Action
From operational mode, use the show bridge mac-table command.
MAC flags (S -static MAC, D -dynamic MAC, L -locally learned, C -Control MAC
           SE -Statistics enabled, NM -Non configured MAC, R -Remote PE MAC)

Routing instance : VS1
 Bridging domain : bd1, VLAN : 20
   MAC                 MAC      Logical          NH     RTR
   address             flags    interface        Index  ID
   00:00:00:11:11:11   D        gr-0/0/10.0
   00:00:00:11:11:12   D        gr-0/0/10.0
   00:00:00:11:11:13   D        gr-0/0/10.0
   00:00:00:11:11:14   D        gr-0/0/10.0
   00:00:00:11:11:15   D        gr-0/0/10.0
   00:00:00:11:11:16   D        gr-0/0/10.0
   00:00:00:11:11:17   D        gr-0/0/10.0
   00:00:00:11:11:18   D        gr-0/0/10.0
   00:00:00:11:11:19   D        gr-0/0/10.0
   00:00:00:11:11:1a   D        gr-0/0/10.0
   00:00:00:22:22:22   D        xe-0/0/19.0
   00:00:00:22:22:23   D        xe-0/0/19.0
   00:00:00:22:22:24   D        xe-0/0/19.0
   00:00:00:22:22:25   D        xe-0/0/19.0
   00:00:00:22:22:26   D        xe-0/0/19.0
   00:00:00:22:22:27   D        xe-0/0/19.0
   00:00:00:22:22:28   D        xe-0/0/19.0
   00:00:00:22:22:29   D        xe-0/0/19.0
   00:00:00:22:22:2a   D        xe-0/0/19.0
   00:00:00:22:22:2b   D        xe-0/0/19.0
Meaning
The output displays MAC addresses learned on GRE logical tunnels.