Benefits of Campus Fabric IP Clos

  • With the increasing number of devices connecting to the network, you need to scale your campus network rapidly without adding complexity. Many IoT devices have limited networking capabilities and require L2 adjacency across buildings and campuses. Traditionally, this problem was solved by extending virtual LANs (VLANs) between endpoints using the data plane-based flood-and-learn mechanisms inherent in Ethernet switching. The traditional Ethernet switching approach is inefficient because it relies on broadcast and multicast to announce Media Access Control (MAC) addresses. It is also difficult to manage because you must configure and manually manage VLANs to extend them to new network ports. This problem is compounded by the explosive growth of mobile and IoT devices.
  • Campus fabrics have an underlay topology with a routing protocol that ensures loopback interface reachability between nodes. Devices participating in EVPN-VXLAN function as VXLAN tunnel endpoints (VTEPs) that encapsulate and decapsulate VXLAN traffic. A VTEP is the construct within the switching platform that originates and terminates VXLAN tunnels. In addition, these devices route and bridge packets in and out of VXLAN tunnels as required. (See the configuration sketch after this list.)
  • The Campus Fabric IP Clos extends the EVPN fabric to connect VLANs across multiple buildings or floors of a single building. It does this by stretching the L2 VXLAN network, with routing occurring at the access layer rather than at the core (Centrally-Routed Bridging, or CRB) or distribution (Edge-Routed Bridging, or ERB) layer.
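The underlay and VTEP concepts above translate into a small amount of device configuration. The following minimal sketch, in Junos set-command format, shows what the underlay eBGP peering and VTEP definition might look like on an access switch. All interface names, IP addresses, autonomous system numbers, and VNI values here are illustrative assumptions, not values from this guide.

    # Underlay: point-to-point link toward a distribution device, plus a loopback address
    set interfaces et-0/0/48 unit 0 family inet address 172.16.1.2/31
    set interfaces lo0 unit 0 family inet address 10.0.0.11/32
    set routing-options autonomous-system 65011

    # eBGP underlay session that advertises loopback reachability between fabric nodes
    set protocols bgp group underlay type external
    set protocols bgp group underlay family inet unicast
    set protocols bgp group underlay multipath multiple-as
    set protocols bgp group underlay export underlay-out
    set protocols bgp group underlay neighbor 172.16.1.3 peer-as 65001
    set policy-options policy-statement underlay-out term lo0 from interface lo0.0
    set policy-options policy-statement underlay-out term lo0 then accept

    # VTEP: the loopback sources the VXLAN tunnels; each VLAN maps to a VNI
    set switch-options vtep-source-interface lo0.0
    set vlans v100 vlan-id 100
    set vlans v100 vxlan vni 10100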

Figure 1: Campus Fabric IP Clos

An IP Clos network encompasses the distribution, core, and access layers of your topology.

An EVPN-VXLAN fabric solves the problems of previous architectures and provides the following benefits:

  • Reduced flooding and learning—Control plane-based L2 and L3 learning reduces the flood-and-learn issues associated with data plane learning. Learning MAC addresses in the forwarding plane degrades network performance as the number of endpoints grows, because the flooded traffic consumes bandwidth that would otherwise be available for production traffic. The EVPN control plane handles the exchange and learning of MAC addresses through eBGP routing rather than through the L2 forwarding plane (see the overlay configuration sketch after this list).
  • Scalability—Control plane-based L2 and L3 learning is more efficient. For example, in a Campus Fabric IP Clos, core switches learn only the addresses of the access layer switches rather than the addresses of the individual endpoints behind them.
  • Consistency—A universal EVPN-VXLAN-based architecture across disparate campus and data center deployments enables a seamless end-to-end network for endpoints and applications.
  • Group-based policies—With group-based policy (GBP), you can enable microsegmentation with EVPN-VXLAN to provide traffic isolation within and between broadcast domains as well as simplify security policies across a campus fabric.
  • Location-agnostic connectivity—The EVPN-VXLAN campus architecture provides a consistent endpoint experience no matter where the endpoint is located. Some endpoints require L2 reachability, such as legacy building security systems or IoT devices. The VXLAN overlay provides L2 extension across campuses without any changes to the underlay network. Juniper uses optimized BGP timers between the adjacent layers of the campus fabric, along with Bidirectional Forwarding Detection (BFD) and equal-cost multipath (ECMP), to support fast convergence in the event of a node or link failure (see the sketches after this list). For more information, see Configuring Per-Packet Load Balancing.
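To make the control-plane learning and fast-convergence points concrete, the following is a minimal sketch of an overlay eBGP session that carries EVPN signaling between loopbacks, with BFD enabled for fast failure detection. The neighbor addresses, autonomous system numbers, timer values, route distinguisher, and route target are assumptions chosen for illustration, not recommended values.

    # Overlay: multihop eBGP between loopback addresses carrying EVPN routes (MAC/IP advertisements)
    set protocols bgp group overlay type external
    set protocols bgp group overlay multihop ttl 2
    set protocols bgp group overlay local-address 10.0.0.11
    set protocols bgp group overlay family evpn signaling
    set protocols bgp group overlay neighbor 10.0.0.1 peer-as 65100
    # BFD detects a failed peer quickly so that traffic reroutes to the remaining paths
    set protocols bgp group overlay bfd-liveness-detection minimum-interval 350
    set protocols bgp group overlay bfd-liveness-detection multiplier 3

    # EVPN-VXLAN instance: VLAN-to-VNI mappings are advertised through the EVPN control plane
    set protocols evpn encapsulation vxlan
    set protocols evpn extended-vni-list all
    set switch-options route-distinguisher 10.0.0.11:1
    set switch-options vrf-target target:65011:1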
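The ECMP behavior referenced above depends on exporting a load-balancing policy to the forwarding table. A minimal sketch with an assumed policy name follows; note that despite the per-packet keyword, Junos hashes traffic per flow so that packets within a flow stay in order.

    # Allow the forwarding table to install all equal-cost BGP next hops
    set policy-options policy-statement pfe-ecmp then load-balance per-packet
    set routing-options forwarding-table export pfe-ecmp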