Miscellaneous
This section of the guide covers additional configuration options and validation tools available in Contrail Cloud.
Capsule Server Configuration
Capsule servers mirror content from a Satellite Server to establish content sources in various geographical locations. This enables host systems to pull content and configuration from the capsule servers in their location and not from the central Satellite Server. For additional information on capsule servers, see What Satellite Server and Capsule Server do from the Red Hat Satellite Installation Guide.
Figure 1 illustrates a capsule server topology.
The Capsule VMs in this topology must have access to the Internet. Internet access is required to connect to the Juniper Satellite servers for Contrail Cloud and to the Red Hat Subscription Management servers. For a detailed capsule server installation procedure, see Installing Capsule Server from the Red Hat Satellite Installation Guide.
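Before you install, you can confirm that the capsule VM reaches both endpoints. The following is a minimal sketch; the Red Hat Subscription Management URL and the curl options are illustrative assumptions, not part of this guide:

# Reachability checks from the capsule VM (illustrative; any HTTP status
# code confirms basic connectivity, a timeout indicates a problem)
curl -sk -o /dev/null -w "%{http_code}\n" https://contrail-cloud-satellite.juniper.net
curl -sk -o /dev/null -w "%{http_code}\n" https://subscription.rhsm.redhat.com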
To install a capsule server in Contrail Cloud:
Create a capsule VM using Red Hat Enterprise Linux 7.6.
Install the capsule keys for Red Hat Enterprise Linux from the Contrail Cloud Satellite device. The capsule keys are used to encrypt communication between the Contrail Cloud satellite and the capsule. The keys also allow Red Hat Enterprise Linux to register the operating system and are required to enable Red Hat Subscription Manager.
yum install ntp
ntpdate ntp_server_address
systemctl start ntpd
systemctl enable ntpd
rpm -Uvh https://contrail-cloud-satellite.juniper.net/pub/katello-ca-consumer-latest.noarch.rpm
service goferd start
systemctl start goferd
Register the capsule VM with the Contrail Cloud Satellite using Red Hat Subscription Manager:
subscription-manager register --activationkey=[satellite_key] --org=[contrail] --force
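To confirm that the registration succeeded, you can query the system identity. This optional check uses a standard subscription-manager subcommand:

# Display the registered identity; the organization should match the
# organization associated with the activation key
subscription-manager identity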
From Red Hat Subscription Manager, get the ID of the subscription pool that includes Red Hat Satellite and Capsule Server, and attach to it:
subscription-manager repos --disable "*"
subscription-manager release --unset
yum clean all
POOL=$(subscription-manager list --available --matches 'Red Hat Satellite' --pool-only | tail -1)
subscription-manager attach --pool=$POOL
In Red Hat Subscription Manager, enable the required repositories:
subscription-manager repos --enable=rhel-7-server-rpms \
--enable=rhel-server-rhscl-7-rpms \
--enable=rhel-7-server-satellite-6.2-rpms \
--enable=rhel-7-server-satellite-6.2-puppet4-rpms
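Optionally, confirm that only the expected repositories are enabled. This check uses a standard subscription-manager flag:

# List the currently enabled repositories; only the four repositories
# enabled above should appear
subscription-manager repos --list-enabled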
Refresh the subscription manager and update the packages on the system:
subscription-manager refresh
yum -y update
Install the capsule server package:
yum -y install satellite-capsule
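To verify that the package installed correctly, an optional query:

# Confirm the satellite-capsule package is installed
rpm -q satellite-capsule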
Register your Capsule Server with the Contrail Cloud Satellite Server:
Create the certificates archive on the Contrail Cloud Satellite Server. This task requires access to the Contrail Cloud server and often has to be performed by a site’s Contrail Cloud administrator.
The certificates are delivered as a tar file.
The following configuration snippet shows the command that is executed by the Capsule Server administrator:
capsule-certs-generate \
--capsule-fqdn "mycapsule.example.com" \
--certs-tar "~/mycapsule.example.com-certs.tar"
Note: The provided FQDN must be resolvable from the Satellite Server to the IP address of the capsule server VM.
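One way to verify the resolution is to run a lookup on the Satellite Server; a minimal sketch using the example FQDN from above:

# On the Satellite Server: the FQDN should resolve to the capsule VM IP
getent hosts mycapsule.example.com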
Run the capsule installation script with the provided certificates:
satellite-installer --scenario capsule \
--foreman-proxy-content-parent-fqdn "contrail-cloud-satellite.juniper.net" \
--foreman-proxy-register-in-foreman "true" \
--foreman-proxy-foreman-base-url "https://contrail-cloud-satellite.juniper.net" \
--foreman-proxy-trusted-hosts "contrail-cloud-satellite.juniper.net" \
--foreman-proxy-oauth-consumer-key "[foreman-proxy-oauth-consumer-key]" \
--foreman-proxy-oauth-consumer-secret "[foreman-proxy-oauth-consumer-secret]" \
--foreman-proxy-content-certs-tar "[local path to the tar file with certificates]" \
--puppet-server-foreman-url "https://contrail-cloud-satellite.juniper.net"
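If the hammer CLI is configured on the Satellite Server, you can confirm that the capsule registered. This is an optional check, not part of the installation procedure:

# On the Satellite Server: the new capsule FQDN should appear in the list
hammer capsule list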
Deployment Validation Tools
The Contrail Cloud 13.1 software bundle includes scripts that assist with detecting connectivity and configuration issues.
This section provides information on these scripts.
- Check Server Hardware Specifications Script
- Check Disk Allocation, CPU Architecture, and Memory
- Check NUMA and Number of NICs
- Pre-Deployment Check
- Post-Deployment Check
- Fabric Design
Check Server Hardware Specifications Script
The inventory-assign.sh script—which is stored in the /var/lib/contrail_cloud/introspection/ directory on the jumphost—can be used to generate a JSON file with full hardware specifications for each server in your Contrail Cloud environment. The script uses the Red Hat Ironic service to help generate the JSON file.
The node-configuration.py python script—which is stored in the /var/lib/contrail_cloud/scripts/ directory on the jumphost—can be used to simplify the process of gathering server hardware specification information from the generated JSON files.
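For example, to see which per-node introspection files are available as input for node-configuration.py (file names follow your node names):

# List the generated introspection files on the jumphost
ls /var/lib/contrail_cloud/introspection/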
Check Disk Allocation, CPU Architecture, and Memory
In the following example, the node-configuration.py script is run after the inventory-assign.sh script to check disk allocation, CPU architecture, and memory information:
/var/lib/contrail_cloud/scripts/node-configuration.py show -i /var/lib/contrail_cloud/introspection/5a6s1-node5.introspection
INFO: SYS 5a6s1-node5
  manufacturer:Supermicro
  cpu:Intel(R) Xeon(R) CPU E5-2630L v4 @ 1.80GHz core:20 core_ht:40
  memory:94GB boot_mode:bios
  Memory per numa:
    NUMA0: 47GB
    NUMA1: 48GB
  CPU per numa:
    NUMA0: CPU: 0 TH: [0, 20], CPU: 1 TH: [1, 21], CPU: 2 TH: [2, 22], CPU: 3 TH: [3, 23], CPU: 4 TH: [4, 24], CPU: 8 TH: [5, 25], CPU: 9 TH: [6, 26], CPU: 10 TH: [7, 27], CPU: 11 TH: [8, 28], CPU: 12 TH: [9, 29]
    NUMA1: CPU: 0 TH: [10, 30], CPU: 1 TH: [11, 31], CPU: 2 TH: [12, 32], CPU: 3 TH: [13, 33], CPU: 4 TH: [14, 34], CPU: 8 TH: [15, 35], CPU: 9 TH: [16, 36], CPU: 10 TH: [17, 37], CPU: 11 TH: [18, 38], CPU: 12 TH: [19, 39]
[...] # NICs informations
INFO: Disk /dev/sda root_disk:True, size:931GB
  serial:S4704LHK, model:ST1000NX0313
  by_path:/dev/disk/by-path/pci-0000:00:1f.2-ata-1.0, hctl:4:0:0:0
INFO: Disk /dev/sdb root_disk:False, size:931GB
  serial:S470BVD4, model:ST1000NX0313
  by_path:/dev/disk/by-path/pci-0000:00:1f.2-ata-2.0, hctl:5:0:0:0
INFO: Disk /dev/sdc root_disk:False, size:931GB
  serial:S470BTJZ, model:ST1000NX0313
  by_path:/dev/disk/by-path/pci-0000:00:1f.2-ata-3.0, hctl:6:0:0:0
INFO: Disk /dev/sdd root_disk:False, size:931GB
  serial:S470BVGP, model:ST1000NX0313
  by_path:/dev/disk/by-path/pci-0000:00:1f.2-ata-4.0, hctl:7:0:0:0
INFO: Disk /dev/sde root_disk:False, size:931GB
  serial:S470AQ3P, model:ST1000NX0313
  by_path:/dev/disk/by-path/pci-0000:00:1f.2-ata-5.0, hctl:8:0:0:0
INFO: Disk /dev/sdf root_disk:False, size:223GB
  serial:PHWA605401H8240AGN, model:INTEL SSDSC2BB24
  by_path:/dev/disk/by-path/pci-0000:00:1f.2-ata-6.0, hctl:9:0:0:0
Check NUMA and Number of NICs
In this example, you check the NUMA and NIC numbers after the inventory-assign.sh script has been run:
openstack baremetal introspection data save 5a6s1-node5 | jq . | grep -i nic -A 100 | grep -e name -e numa_node
    "name": "ens7f1",
    "numa_node": 0
    "name": "ens7f0",
    "numa_node": 0
    "name": "eno2",
    "numa_node": 0
    "name": "eno1",
    "numa_node": 0
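As an alternative to the grep pipeline above, a single jq filter can print the same NIC-to-NUMA pairing. This is a hypothetical variation that assumes the standard ironic-inspector numa_topology layout shown above:

# Print each NIC with its NUMA node from the introspection data
openstack baremetal introspection data save 5a6s1-node5 \
| jq -r '.numa_topology.nics[] | "\(.name) numa_node:\(.numa_node)"'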
Additional information related to NICs and NUMAs can be gathered by running the node-configuration.py script from the jumphost:
/var/lib/contrail_cloud/scripts/node-configuration.py show -i /var/lib/contrail_cloud/introspection/5a6s1-node5.introspection
[...] #CPU information
INFO: NIC eno1 (nic1) NUMA:0
  MAC:0c:c4:7a:81:a5:92, has_carrier:True PXE:True
  tagged_vlans:[300, 301, 302, 303, 304], native_vlan:300
  link_aggregation:False, link_aggregation_id:0
  port_desription:ge-1/0/0.0, switch_name:5a6-ex1
INFO: NIC eno2 (nic2) NUMA:0
  MAC:0c:c4:7a:81:a5:93, has_carrier:True PXE:False
  tagged_vlans:[301, 302, 303, 304, 1008], native_vlan:1008
  link_aggregation:False, link_aggregation_id:0
  port_desription:ge-0/0/0.0, switch_name:5a6-ex1
INFO: NIC ens7f0 (nic3) NUMA:0
  MAC:0c:c4:7a:b7:26:7c, has_carrier:True PXE:False
  tagged_vlans:[305, 306, 307, 308, 309], native_vlan:309
  link_aggregation:True, link_aggregation_id:736
  port_desription:xe-0/0/0, switch_name:5a6-qfx-1
INFO: NIC ens7f1 (nic4) NUMA:0
  MAC:0c:c4:7a:b7:26:7d, has_carrier:True PXE:False
  tagged_vlans:[305, 306, 307, 308, 309], native_vlan:309
  link_aggregation:True, link_aggregation_id:736
  port_desription:xe-1/0/0, switch_name:5a6-qfx-1
INFO: NIC ens7f2 (nic5) NUMA:0
  MAC:0c:c4:7a:b7:26:8a, has_carrier:True PXE:False
  tagged_vlans:[305, 306, 307, 308, 309], native_vlan:309
  link_aggregation:True, link_aggregation_id:737
  port_desription:xe-0/0/1, switch_name:5a6-qfx-1
INFO: NIC ens7f3 (nic6) NUMA:0
  MAC:0c:c4:7a:b7:26:8b, has_carrier:True PXE:False
  tagged_vlans:[305, 306, 307, 308, 309], native_vlan:309
  link_aggregation:True, link_aggregation_id:737
  port_desription:xe-1/0/1, switch_name:5a6-qfx-1
[...] #disk information
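The disk inventory can be pulled from the raw introspection data in the same way. A minimal sketch, assuming the standard ironic-inspector inventory layout (sizes are reported in bytes):

# Print name, size, and model for every disk in the introspection data
openstack baremetal introspection data save 5a6s1-node5 \
| jq -r '.inventory.disks[] | "\(.name) \(.size) \(.model)"'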
Pre-Deployment Check
The Contrail Cloud software bundle includes a script to check a server deployment before installing the Red Hat OpenStack platform. We suggest running the script before deploying the OpenStack cluster. The process is outlined in the Contrail Cloud Deployment Guide.
The validate-node.sh script—which is stored in the /var/lib/contrail_cloud/scripts directory on the jumphost—should be run before the openstack-deploy.sh script. The validate-node.sh script validates the configuration YAML files for the following components:
- Network for OpenStack Controllers, Contrail Controller, Contrail Analytics, and Analytics DB.
- Networking for controller hosts and compute hosts.
- Disk validation (comparing actual available disks with configured disks).
The following snippet shows the output when the validate-node.sh script is run:
./validate-node.sh
[...]
Friday 29 March 2019 10:20:10 +0100 (0:00:06.661) 0:03:02.041 **********
===============================================================================
Reprocess introspection data ------------------------------------------- 48.56s
validate network for contrail-analytics --------------------------------- 6.85s
validate network for control -------------------------------------------- 6.80s
validate network for contrail-analytics-database ------------------------ 6.66s
validate network for contrail-controller -------------------------------- 6.59s
validate network for baremetal ------------------------------------------ 6.16s
Gathering Facts --------------------------------------------------------- 4.89s
validate network for computes ------------------------------------------- 4.67s
Get all nodes for introspection reprocess ------------------------------- 2.75s
ironic-node : Save data to file ----------------------------------------- 2.57s
ironic-node : Save data to file ----------------------------------------- 2.55s
ironic-node : Save data to file ----------------------------------------- 2.50s
ironic-node : Save data to file ----------------------------------------- 2.47s
ironic-node : Save data to file ----------------------------------------- 2.45s
ironic-node : Save data to file ----------------------------------------- 2.45s
ironic-node : Save data to file ----------------------------------------- 2.44s
ironic-node : Save data to file ----------------------------------------- 2.44s
ironic-node : Save data to file ----------------------------------------- 2.41s
ironic-node : Save data to file ----------------------------------------- 2.40s
ironic-node : Save data to file ----------------------------------------- 2.36s
Playbook run took 0 days, 0 hours, 3 minutes, 2 seconds
The script verifies configuration parameters but cannot guarantee the success of the deployment.
Post-Deployment Check
The Contrail Cloud software bundle includes a set of Tempest test packages to check the health of the Contrail Cloud environment.
The Tempest test packages are launched using the overcloud-validation.sh script; a sample invocation is shown after the package list below. The script runs the following Tempest test packages:
- openstack-tempest
- python2-cinder-tests-tempest
- python2-horizon-tests-tempest
- python2-keystone-tests-tempest
- python2-neutron-tests-tempest
- Contrail Tempest checks, which include the following scenarios: contrail, lbaasv2, pagination, sorting, security-group, ipam, port-security, binding, provider, agent, quotas, route-table, standard-attr-description, external-net, policy, router, allowed-address-pairs, extra_dhcp_opt, project-id, extra_lbaas_opts
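A minimal invocation sketch follows; the script name comes from this guide, while the working directory is an assumption based on where the other scripts in this section are stored:

# Run the post-deployment Tempest validation from the jumphost
# (assumed location; adjust if the script is stored elsewhere)
cd /var/lib/contrail_cloud/scripts
./overcloud-validation.sh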
Fabric Design
This reference architecture does not cover fabric design.
To configure an EVPN-VXLAN fabric that can be used by the nodes in a Contrail Cloud environment to transport Layer 2 and Layer 3 traffic, see the Data Center Fabric Architecture Guide.