Reference Architecture Variations

This section provides a walkthrough of variations to the Contrail Cloud reference architecture.

Contrail Cloud Reference Architecture Variations

Contrail Cloud can be deployed with simpler server and network configurations in environments where performance and resilience requirements are more relaxed.

This section provides information about these architectural variations.

Supported Reference Architecture Variations Summary

Table 1 lists supported variations for this reference architecture:

Table 1: Supported Reference Architecture Variations
Architectural Variation Comment

Controller hosts in same rack

The controller networks don’t need to be stretched between racks, but there is an increased risk of outage. No change to the configuration files is necessary for this variation.

Separate OpenStack and Contrail controller hosts

Use this variation in environments where you want to reduce the impact of a node failure.

Separate Controller and Analytics, AppFormix hosts

To increase performance in this variation:

  • Place the OpenStack and Contrail controllers on three hosts

  • Place the Contrail Analytics, Analytics DB, and AppFormix controllers on three additional hosts

Use of NICs on NUMA 0 and NUMA 1

Intel architectures have become more flexible about cross-NUMA traffic between DPDK cores and NICs, but such configurations deliver less throughput than the recommended placement of both NICs and DPDK cores on NUMA 0.

Single bond interface on servers

Use in cases where there are no separate storage nodes, or where network traffic is light and there is a low risk of contention causing packet drops. Note that DPDK for the Tenant network cannot share an interface with other traffic, so DPDK mode cannot be used in this configuration.

Single subnet across racks for Tenant, Storage, Storage Mgt, and Internal API traffic

Use in smaller environments where per-rack addressing is not a requirement.

Use the same network for External, Management, and Intranet traffic

Network sharing can be used in non-production networks like labs and POCs, but this variation is not recommended in production environments.

Single Bond Interface in Variation Architectures

When servers with a single bond interface are to be used, each of the networks in the overcloud-nics.yml file is specified to be present on the same bond. The configuration is performed in the controller_network_config hierarchy and in each of the compute[leaf][h/w profile] and storage[leaf][h/w profile] hierarchies.

Leaf switch ports must be configured as follows for connections to the bond interfaces of each node type:

Table 2: Leaf Switch Port VLAN Summary

  • Controller: Tenant, Storage, Internal API, External

  • Compute: Tenant, Storage, Internal API

  • Storage: Storage, Storage Mgmt
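As a rough illustration, the Compute row above might translate into Junos-style leaf port configuration such as the following. The interface names, VLAN names, and VLAN IDs are all assumptions for illustration; substitute the values from your own fabric design.

```
# Hypothetical example: trunk the compute-node VLANs to the
# bond member ports (all names and IDs are assumptions).
set interfaces xe-0/0/10 ether-options 802.3ad ae10
set interfaces xe-0/0/11 ether-options 802.3ad ae10
set interfaces ae10 aggregated-ether-options lacp active
set interfaces ae10 unit 0 family ethernet-switching interface-mode trunk
set interfaces ae10 unit 0 family ethernet-switching vlan members [ tenant storage internal-api ]
set vlans tenant vlan-id 720
set vlans storage vlan-id 730
set vlans internal-api vlan-id 710
```

Controller-facing ports would additionally carry the External VLAN, and storage-facing ports would carry Storage and Storage Mgmt instead.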

The following diagrams illustrate connectivity for this architecture.

Figure 1: Control Host Networking—Single Bond Interface
Figure 2: Compute and Storage Node Networking—Single Bond Interface

The following is a full configuration network snippet for a compute node in the overcloud-nics.yml file.
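As a minimal sketch, assuming TripleO-style os-net-config conventions, a single-bond compute entry might look like the following; every key name and parameter here is an assumption to be verified against the overcloud-nics.yml shipped with your Contrail Cloud release.

```yaml
# Hypothetical sketch only -- key names follow TripleO os-net-config
# conventions and may differ in your Contrail Cloud release.
compute:
  leaf1:                            # leaf group
    hw-profile-1:                   # hardware profile (illustrative name)
      - type: linux_bond
        name: bond0
        bonding_options: "mode=802.3ad"
        members:
          - type: interface
            name: nic1
          - type: interface
            name: nic2
      - type: vlan                  # each network rides the same bond
        device: bond0
        vlan_id: {get_param: TenantNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: TenantIpSubnet}
      - type: vlan
        device: bond0
        vlan_id: {get_param: StorageNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: StorageIpSubnet}
      - type: vlan
        device: bond0
        vlan_id: {get_param: InternalApiNetworkVlanID}
        addresses:
          - ip_netmask: {get_param: InternalApiIpSubnet}
```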

Layer 2 Networks Between Racks

The same subnet address can be used across racks in small Contrail Cloud deployments.

Figure 3 illustrates networking for control hosts using layer 2 to stretch across racks in a Contrail Cloud deployment. Figure 4 illustrates networking for compute and storage nodes using layer 2 to stretch across racks.

Figure 3: Control Hosts—Networking with Stretched Layer 2 Networks
Figure 4: Compute and Storage Nodes—Networking with Stretched Layer 2 Networks

This Layer 2 addressing scheme is not recommended for environments with a large number of devices. Layer 2 stretch can be achieved using trunking between switches, or VXLAN if additional scalability is needed. Use of a separate management switch is optional.

For this type of deployment, a single network of each type is defined and no supernet is specified.

The following configuration snippet from the site.yml file illustrates this deployment.
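As a hedged sketch of that shape, a single network of each type would be defined with no supernet key. All key names, VLAN IDs, and addresses below are illustrative assumptions; compare them with the site.yml template in your deployment.

```yaml
# Hypothetical sketch -- one stretched network per type, no supernet.
# Key names and values are assumptions, not the literal template.
network:
  internal_api:
    cidr: "172.16.1.0/24"
    vlan: 710
  tenant:
    cidr: "172.16.2.0/24"
    vlan: 720
  storage:
    cidr: "172.16.3.0/24"
    vlan: 730
  storage_mgmt:
    cidr: "172.16.4.0/24"
    vlan: 740
```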

Leaf switch ports are configured with VLANs in the same way as described in the previous section.


Proof of Concept Environments with High Availability

For proof-of-concept trial environments, the following is the minimum Contrail Cloud environment that can be configured with High Availability support:

  • Jumphost

  • 3 control hosts

  • 2 compute nodes, which can be used to validate routing and tunnels

  • 3 storage nodes (optional)

Simplified networking can be implemented with the following components:

  • IPMI connectivity from the jumphost

  • Single network connection from each server to a switch

  • Provision network configured as untagged on the interface

  • Other networks configured with VLANs on the interface

  • VLANs configured in switch to span between servers

This setup supports testing of most Contrail Networking features.

Single Controller Node

Small-scale Contrail Cloud environments—including experimental or controlled lab deployments—can be established with a jumphost, a single control host, and one or more compute nodes.

To configure this type of small-scale environment, include a single entry in the control_host_nodes: hierarchy in the control-host-nodes.yml file.
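As a hedged sketch, a single-controller entry might look like the following; the field names and addresses are illustrative assumptions, so match them to the template in your control-host-nodes.yml.

```yaml
# Hypothetical sketch -- exactly one entry under control_host_nodes:.
# Field names and addresses are assumptions for illustration.
control_host_nodes:
  - name: controlhost1
    control_ip: "192.168.213.11"   # control-plane address (illustrative)
    ipmi_address: "10.10.10.11"    # BMC address used for power control
```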

Underlay Routing Between Leaf Switches

You can configure routing between leaf switches in the underlay fabric network to simplify leaf switch configuration.

Figure 5 illustrates leaf switch routing in the underlay network.

Figure 5: Underlay Routing Between Leaf Switches in the Fabric

The IRB interfaces for the leaf device subnets are configured but are not placed in VRF instances. Traffic, therefore, is routed using routes in the inet.0 global routing table on each switch. A route to each IRB interface is advertised between the leaf switches using iBGP.
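A hedged Junos-style sketch of one leaf, assuming illustrative interface units, addresses, and group names, might look like this; traffic stays in inet.0 because the IRB units are not placed in any routing instance.

```
# Hypothetical example: the IRB subnet stays in inet.0 and is exported
# to the other leaf over iBGP (all names and addresses are assumptions).
set interfaces irb unit 710 family inet address 172.16.1.1/24
set interfaces lo0 unit 0 family inet address 10.0.0.1/32
set protocols bgp group underlay type internal
set protocols bgp group underlay local-address 10.0.0.1
set protocols bgp group underlay neighbor 10.0.0.2
set policy-options policy-statement export-direct term irb from protocol direct
set policy-options policy-statement export-direct term irb then accept
set protocols bgp group underlay export export-direct
```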

Supported Variations Requiring Additional Approval

The following variations can be supported in production environments, but they must be explicitly approved by Juniper Networks to receive full customer support. Email sre@juniper.net or contact your Juniper representative before deploying these variations to ensure your Contrail Cloud environment remains in compliance with your support agreement.

Engagement with the Juniper Networks professional services team is typically required to deploy these variations.

Variations that Require Approval Overview

The following variations can be supported in production environments. Email sre@juniper.net or contact your Juniper Networks representative before deploying these variations to ensure your Contrail Cloud environment remains in compliance with your support agreement.

Table 3 lists these variations.

Table 3: Architecture Variations That Require Approval
Variation Explanation

Use of VLANs instead of EVPN VXLAN, including the use of MC-LAG for server connectivity

Use in labs, POCs, and smaller production environments where VLAN configuration on switches is manageable and the limitations of STP are not impactful.

Collapsed spine/gateway

Configuring the SDN gateway function in spine switches is possible, provided the spine supports the required functionality and scale (number of externally connected VRFs).

Single leaf switch per rack

For truly cloud-native applications that are resilient to infrastructure failures.

Non-IP CLOS connectivity

No management switches; for lab environments only.

Single controller node

Use for labs, training, and feature testing. Not supported in production environments.

Note:

Contrail Cloud 13 releases do not support all-in-one deployments where a single node supports both controller and compute functions. Storage nodes must also be separate devices.

The following sections provide information on how the configuration of Contrail Cloud can be modified to support these architectural variations.