
Configure Data Center Interconnect (DCI)

07-Jun-23

Data Center Interconnect Overview

You can use CEM to interconnect multiple data centers over a WAN such as the Internet or an enterprise network. CEM supports DCI based on EVPN/VXLAN; it does not support DCI based on Layer 3 VPN or EVPN/MPLS.

Multiple tenants connected to a logical router (VRF routing instance) in one data center can exchange routes with tenants connected to a logical router in another data center.

The implementation described in this section uses EBGP peering between the data centers.

Data Center Interconnect Configuration Overview

In this example (Figure 1), we are configuring DCI between Data Center 1 (DC1) and Data Center 2 (DC2). Physical connectivity between the data centers is provided by backbone devices in a WAN cloud. In DC1, we are connecting to the WAN cloud from the border leafs. In DC2, we are connecting to the WAN cloud from the border spines. We are using BGP as the routing protocol between the border devices and the devices in the WAN cloud.

Figure 1: Data Center Interconnect Between DC1 and DC2

DCI Configuration Overview

With CEM, you can automate the DCI configuration between two data centers. You can use the same CEM cluster to configure multiple data centers in distinct fabrics.

To configure DCI between Data Center 1 and Data Center 2:

  1. Assign device roles to the spines and border leafs used for DCI.

  2. Configure EBGP peering on the underlay.

  3. Create virtual networks.

  4. Create virtual port groups.

  5. Create logical routers.

  6. Create the data center interconnect.

  7. Configure BGP peers on the WAN cloud device.

Assign Device Roles for Border Spine and Border Leaf Devices

In this procedure, we assign roles to the fabric devices in both data centers, including the border leaf and border spine devices used for DCI.

To assign roles:

  1. On the Fabric Devices summary screen, select Action > Reconfigure Roles.
  2. Next to the spine devices, select Assign Roles.
  3. Be sure that the following roles are assigned.

    In DC1, set the roles as follows:

    • Border leaf—CRB Access, CRB Gateway, DCI Gateway

    • Spine—CRB Gateway, Route Reflector

    • Server leaf—CRB Access

    In DC2, set the roles as follows:

    • Border spine—CRB Gateway, DCI Gateway, Route Reflector

    • Leaf—CRB Access

    For a description of roles, see Device Roles.

Manually Configure BGP Peering

When you assign the CRB Gateway or DCI Gateway role to a device, CEM autoconfigures IBGP overlay peering between the fabrics. In our implementation, it creates BGP peering between the spine and border leaf devices in DC1 and the border spine devices in DC2.

CEM cannot always configure the underlay automatically when the data centers are not directly connected to each other. In this case, CEM requires loopback-to-loopback reachability between the two data centers on devices with the DCI Gateway role.
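
Once the underlay peering described in the following steps is in place, you can confirm loopback-to-loopback reachability with a ping sourced from the local loopback address. The command below is an illustrative check only; the host name and addresses shown are placeholders for the loopback addresses of a DC1 border leaf and a DC2 border spine in your deployment.

    user@DC1-Border-Leaf1> ping 10.0.2.21 source 10.0.1.11 count 5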

We are using an MX Series router as the cloud device. On the cloud device, configure the border leaf devices and border spine devices as BGP peers.

  1. Configure the following on the cloud device.
    policy-options  {
        policy-statement dci {
            term 1 {
                from protocol direct;
                then accept;
            }
        }
    }
    
    protocols bgp {
        group dci {
            export dci;
            multipath multiple-as;
            neighbor 10.200.1.1 {
                peer-as 65201;
            }
            neighbor 10.200.2.1 {
                peer-as 65202;
            }
            neighbor 10.100.1.1 {
                peer-as 65105;
            }
            neighbor 10.100.2.1 {
                peer-as 65106;
            }
        }
    }
    
  2. On DC1 border leaf 1, configure the MX device as a BGP peer.
    protocols bgp {
        group dci {
            local-as 65105;
            neighbor 10.100.1.2 {
                peer-as 65221;
            }
        }
    }
    
  3. On DC1 border leaf 2, configure the MX device as a BGP peer.
    protocols bgp {
        group dci {
            local-as 65106;
            neighbor 10.100.2.2 {
                peer-as 65221;
            }
        }
    }
    
  4. On DC2 border spine 1, configure the MX device as a BGP peer.
    policy-options {
        policy-statement dci {
            term 1 {
                from protocol direct;
                then accept;
            }
        }
    }
    
    protocols bgp {
        group dci {
            export dci;
            local-as 65201;
            neighbor 10.200.1.2 {
                peer-as 65221;
            }
        }
    }
    
  5. On DC2 border spine 2, configure the MX device as a BGP peer.
    policy-options  {
        policy-statement dci {
            term 1 {
                from protocol direct;
                then accept;
            }
        }
    }
    
    protocols bgp {
        group dci {
            export dci;
            local-as 65202;
            neighbor 10.200.2.2 {
                peer-as 65221;
            }
        }
    }
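
After you commit these configurations, you can confirm that the EBGP sessions toward the MX Series cloud device are established and that routes are being exchanged. The following operational commands are suggested checks to run on each border leaf and border spine device; the device name is illustrative, and the neighbor address shown is the one used on DC1 border leaf 1 in this example.

    user@DC1-Border-Leaf1> show bgp group dci
    user@DC1-Border-Leaf1> show bgp summary
    user@DC1-Border-Leaf1> show route receive-protocol bgp 10.100.1.2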
    

Configure Virtual Networks

We are creating a virtual network in each data center. A virtual network lets hosts in the same network communicate with each other, similar to placing the hosts in the same VLAN.

To create a virtual network:

  1. Navigate to Overlay > Virtual Networks and click Create.

    The Virtual Networks screen appears.

  2. Create two virtual networks as follows:

    Field             VN3-A Configuration                          VN3-B Configuration
    Name              VN3-A                                        VN3-B
    Allocation Mode   User defined subnet only                     User defined subnet only
    Subnets
      Network IPAM    default-domain:default-project:default...    default-domain:default-project:default...
      CIDR            10.10.1.0/24                                 10.10.2.0/24
      Gateway         10.10.1.1                                    10.10.2.1
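
On the devices with the CRB Gateway role, CEM provisions IRB gateway interfaces for these virtual networks using the gateway addresses above. The following is a simplified sketch of the kind of configuration you can expect to see; the IRB unit numbers are illustrative, and the configuration that CEM actually generates (for example, anycast gateway settings) can differ.

    interfaces {
        irb {
            /* VN3-A gateway */
            unit 111 {
                family inet {
                    address 10.10.1.1/24;
                }
            }
            /* VN3-B gateway */
            unit 112 {
                family inet {
                    address 10.10.2.1/24;
                }
            }
        }
    }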

Create Virtual Port Groups

You configure virtual port groups (VPGs) to add interfaces to your virtual networks. To create a VPG:

  1. Navigate to Overlay > Virtual Port Group and click Create.

    The Create Virtual Port Group screen appears.

  2. Create two VPGs with the values shown in the following table.

    To assign a physical interface, find the interface under Available Physical Interface. There can be multiple pages of interfaces. To move an interface to the Assigned Physical Interface, click the > next to the interface.

    Name                          BMS5                        BMS6
    Assigned Physical Interface   xe-0/0/3:0                  xe-0/0/3:0
                                  (on DC1-Server-Leaf1)       (on DC1-Server-Leaf2)
    Network (Virtual Network)     VN3-A                       VN3-B
    VLAN ID                       111                         112
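
After you create the VPGs, you can spot-check the access configuration that CEM pushes to the server leafs. The following operational commands are illustrative; the interface name comes from the table above, and the VLAN names that CEM generates may differ.

    user@DC1-Server-Leaf1> show interfaces xe-0/0/3:0 terse
    user@DC1-Server-Leaf1> show vlans
    user@DC1-Server-Leaf1> show ethernet-switching table interface xe-0/0/3:0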

Create Logical Routers

For each logical router (LR) that you create, CEM creates a virtual routing and forwarding (VRF) routing instance, with IRB interfaces, on the border spine or border leaf devices.

To create a logical router:

  1. Navigate to Overlay > Logical Routers and click Create.

    The Logical Router screen appears.

  2. On the Logical Router screen, create a logical router:

    Field                       DC1-LR1 Configuration
    Name                        DC1-LR1
    Extend to Physical Router   DC1-Border-Leaf1, DC1-Border-Leaf2
    Logical Router Type         VXLAN Routing
    Connected Networks          VN3-A, VN3-B
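
Because DC1-LR1 is extended to both border leafs with the VXLAN Routing type, CEM creates a corresponding VRF routing instance on those devices. The sketch below shows the general shape of such an instance; the instance name, IRB units, route distinguisher, and route target are placeholders, and the configuration that CEM actually generates can differ.

    routing-instances {
        DC1-LR1 {
            instance-type vrf;
            interface irb.111;
            interface irb.112;
            route-distinguisher 10.0.1.11:5001;
            vrf-target target:64512:5001;
        }
    }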

Create Data Center Interconnect

The DCI configuration sets up the connection between the two data centers. Once you add the DCI, CEM adds the EVPN address family to the BGP peering between the border leaf devices in DC1 and the border spine devices in DC2.

  1. Click Overlay > DCI Interconnect.

    The Edit DCI screen appears.

  2. Fill in the fields on the Edit DCI screen to interconnect the logical routers in DC1 and DC2.
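
After you create the DCI, you can verify on the border devices that CEM has added the EVPN address family to the interconnect BGP peering. The snippet below is an illustrative sketch only; the group name is assigned by CEM and will differ in your configuration.

    protocols {
        bgp {
            /* group name is assigned by CEM */
            group overlay-dci {
                family evpn {
                    signaling;
                }
            }
        }
    }

You can also confirm that EVPN routes are being exchanged between the data centers with the show route table bgp.evpn.0 command.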

Verify Data Center Interconnect

To verify that DCI is working, we will ping from a server on a virtual network in one data center to a server on a virtual network in the other data center.

  1. Run ping from BMS6 (DC1 Server Leaf 2) to BMS3 (DC2 Leaf 3).
    ping 10.2.3.101 -c 5
    PING 10.2.3.101 (10.2.3.101) 56(84) bytes of data.
    64 bytes from 10.2.3.101: icmp_seq=1 ttl=62 time=0.512 ms
    64 bytes from 10.2.3.101: icmp_seq=2 ttl=62 time=0.506 ms
    64 bytes from 10.2.3.101: icmp_seq=3 ttl=62 time=0.481 ms
    64 bytes from 10.2.3.101: icmp_seq=4 ttl=62 time=0.478 ms
    64 bytes from 10.2.3.101: icmp_seq=5 ttl=62 time=0.409 ms
    
    --- 10.2.3.101 ping statistics ---
    5 packets transmitted, 5 received, 0% packet loss, time 3999ms
    rtt min/avg/max/mdev = 0.409/0.477/0.512/0.039 ms
  2. Run ping from BMS6 (DC1 Server Leaf 2) to BMS1 (DC2 Leaf 1).
    ping 10.2.1.101 -c 5
    PING 10.2.1.101 (10.2.1.101) 56(84) bytes of data.
    64 bytes from 10.2.1.101: icmp_seq=1 ttl=62 time=0.462 ms
    64 bytes from 10.2.1.101: icmp_seq=2 ttl=62 time=0.535 ms
    64 bytes from 10.2.1.101: icmp_seq=3 ttl=62 time=0.535 ms
    64 bytes from 10.2.1.101: icmp_seq=4 ttl=62 time=0.571 ms
    64 bytes from 10.2.1.101: icmp_seq=5 ttl=62 time=0.467 ms
    
    --- 10.2.1.101 ping statistics ---
    5 packets transmitted, 5 received, 0% packet loss, time 4000ms
    rtt min/avg/max/mdev = 0.462/0.514/0.571/0.042 ms
    