Data Center Interconnect Design and Implementation Using IPVPN
This section describes how to configure DCI using IPVPN. In this reference architecture, IPVPN routes are exchanged between spine devices in different data centers so that traffic can flow between the data centers.
Physical connectivity between the data centers is required before IPVPN routes can be exchanged. The backbone devices in a WAN cloud provide this physical connectivity. A backbone device connects to each spine device in a single data center and participates in the overlay IBGP and underlay EBGP sessions. A separate EBGP group connects the backbone devices to each other; EVPN signaling and IPVPN (inet-vpn) are enabled in this BGP group.
Figure 1 shows two data centers using IPVPN for DCI.
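The exact configuration of the backbone-to-backbone BGP group depends on the WAN design, but a minimal sketch might look like the following. The group name (BACKBONE-DCI), autonomous system number, and addresses are placeholders for illustration, not values from the reference design:

```
# EBGP group connecting the backbone devices across the WAN cloud
set protocols bgp group BACKBONE-DCI type external
set protocols bgp group BACKBONE-DCI multihop no-nexthop-change
set protocols bgp group BACKBONE-DCI local-address 192.168.2.1
# EVPN signaling and IPVPN (inet-vpn) are both enabled in this group
set protocols bgp group BACKBONE-DCI family evpn signaling
set protocols bgp group BACKBONE-DCI family inet-vpn unicast
set protocols bgp group BACKBONE-DCI peer-as 4200000002
set protocols bgp group BACKBONE-DCI neighbor 192.168.2.2
```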
Configuring Data Center Interconnect Using IPVPN
Configuring DCI using IPVPN is similar to configuring DCI using EVPN Type 5 routes, with the exceptions shown in this section.
This example shows the IPVPN configuration on Spine 1.
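The key difference from the EVPN Type 5 configuration is that the tenant routes are exported through the inet-vpn address family. A minimal sketch on Spine 1 might look like the following; the VRF name (VRF-1), route distinguisher, route target, IRB unit, and BGP group name are assumed placeholders:

```
# Tenant VRF whose routes are advertised across the DCI as IPVPN routes
set routing-instances VRF-1 instance-type vrf
set routing-instances VRF-1 interface irb.100
set routing-instances VRF-1 route-distinguisher 192.168.1.1:100
set routing-instances VRF-1 vrf-target target:65000:100

# Add the inet-vpn family to the BGP group that peers with the backbone device
set protocols bgp group OVERLAY-IBGP family inet-vpn unicast
```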
Verifying Data Center Interconnect Using IPVPN
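One way to verify the interconnect is to confirm that the BGP sessions are established and that IPVPN routes from the remote data center appear in the bgp.l3vpn.0 table and in the tenant VRF. The neighbor address and VRF name below are the placeholders used in the earlier sketches:

```
# Confirm that the BGP sessions carrying the inet-vpn family are established
show bgp summary

# View the IPVPN routes received from the remote data center
show route receive-protocol bgp 192.168.2.2 table bgp.l3vpn.0

# Confirm that remote prefixes are installed in the tenant VRF
show route table VRF-1.inet.0
```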
Data Center Interconnect—Release History
Table 1 provides a history of all of the features in this section and their support within this reference design.
Release | Description
---|---
19.1R2 | QFX10002-60C switches running Junos OS Release 19.1R2 and later releases in the same release train support DCI using IPVPN.
18.4R2-S2 | MX routers running Junos OS Release 18.4R2-S2 and later releases in the same release train also support DCI using IPVPN.
18.1R3-S5 | All devices in the reference design that support Junos OS Release 18.1R3-S5 and later releases in the same release train also support DCI using IPVPN.