Deploy Contrail Cloud
Prerequisites for Contrail Cloud Deployment
Before you deploy Contrail Cloud, ensure that your system meets the following prerequisites:
Infrastructure networking
Every system must have access to the Contrail Cloud repository satellite. The satellite is used to distribute packages and control software versions.
The Contrail Cloud jump host must have access to the Intelligent Platform Management Interface (IPMI) of every managed server.
The Contrail Cloud jump host must be in the same broadcast domain as each managed server’s management interface to allow Preboot Execution Environment (PXE) booting. When the managed servers are connected through different switching devices per rack, this is accomplished by stretching a VLAN across those devices. BOOTP forwarding in the network fabric is not supported. The undercloud is the only DHCP server in this network.
Additional networks are created for the control plane, tenant traffic, storage access, and storage backend, as described in Red Hat OpenStack Platform director (OSPd) installation and usage.
Contrail Cloud jump host setup
The OpenStack undercloud is deployed as a virtual machine on the Contrail Cloud jump host, which runs the Linux Kernel-based Virtual Machine (KVM) hypervisor. You must ensure that the KVM host OS:
Runs Red Hat Enterprise Linux 7.8 or earlier with only base packages installed. Contrail Cloud will install RHEL 7.9 and all necessary packages as part of the Contrail Cloud installation process.
Does not run other virtual machines.
Has a network connection that can reach the Contrail Cloud Repository Satellite and has IPMI access to physical hardware.
Has a network connection that can be used for provisioning other infrastructure resources.
Has at least 500 GB space in the /var directory to host virtual machines, packages, and images.
Has at least 40 GB RAM and 24 vCPUs.
Supports users, such as the root user, with password-less sudo permissions.
Provides password-less SSH access over the loopback interface for users with sudo permissions.
Resolves Internet and satellite sites with DNS.
Has time synchronized with an NTP source.
Deployment Sequence for Contrail Cloud Deployment
The following sections describe how to install, configure, and deploy Contrail Cloud.
- Install Contrail Cloud Installer on the Contrail Cloud Host
- Prepare the Deployment Templates
- Add Nodes to the Inventory
- Deploy Control Hosts
- Create VMs for all Control Roles
- Assign Compute Hosts
- Assign Storage Hosts
- Deploy the OpenStack Cluster
- Deploy the AppFormix Cluster
- Deploy Contrail Command Web UI
- Validate the OpenStack Environment
- Install VNF Images and Templates
- Add New Compute and Storage Nodes
- Gather Logs
Install Contrail Cloud Installer on the Contrail Cloud Host
Send a request to contrail_cloud_subscriptions@juniper.net regarding the purchase or upgrade of Contrail Cloud. You will receive an e-mail containing:
A tar file containing the Contrail Cloud installer in .sh format. Untar the file to extract the contrail_cloud_installer.sh script.
A unique satellite activation key.
The satellite DNS name.
The satellite organization.
You configure the site.yml file settings to use the Contrail Cloud satellite as the repository to access all Contrail Cloud packages, including:
Red Hat OpenStack and Red Hat Enterprise Linux
Red Hat Ceph Storage
Contrail Networking
AppFormix
Ensure that the root user has SSH keys before performing the installation. To create new SSH keys without a passphrase:
yes '' | sudo ssh-keygen -t rsa -N ''
To create a passphrase-protected key:
sudo ssh-keygen -t rsa
If a passphrase is set on the SSH key, you can use ssh-agent to cache the passphrase. For example, as the contrail user on the jump host:
ssh-agent bash
ssh-add <key_path>
The default key path is /home/contrail/.ssh/id_rsa.
Ensure that the root user can connect via SSH to the localhost without a password. To authorize access (a password might be required the first time):
sudo ssh-copy-id localhost
Install Contrail Cloud:
Untar the installer file on the jump host to extract the contrail_cloud_installer.sh script. The jump host is the Contrail Cloud host, and it is the starting point for deploying Contrail Cloud.
Specify the Contrail Cloud activation key by setting the environment variables.
For example:
SATELLITE="contrail-cloud-satellite.juniper.net"
SATELLITE_KEY="ak-my-account-key"
SATELLITE_ORG="ContrailCloud"
Ensure that the Contrail Cloud installer script has execute permissions, and then run it to install the Contrail Cloud packages:
./contrail_cloud_installer.sh \
  --satellite_host ${SATELLITE} \
  --satellite_key ${SATELLITE_KEY} \
  --satellite_org ${SATELLITE_ORG}
The Contrail Cloud packages are installed in the /var/lib/contrail_cloud directory.
Define site-specific information in the Ansible variables:
Change the directory to /var/lib/contrail_cloud/config.
Copy the sample /var/lib/contrail_cloud/samples/*.yml configuration files to the /var/lib/contrail_cloud/config directory. You can skip this step if you have existing configuration files in the config directory.
Customize the /var/lib/contrail_cloud/config/site.yml file with site-specific settings to reflect your environment. Ensure that the following fields are changed for each site:
global:
  # List of DNS nameservers
  dns:
    - "8.8.8.8"
  # List of NTP time servers
  ntp:
    - "66.129.255.62"
  # Timezone for all servers
  timezone: 'America/Los_Angeles'
  rhel:
    # Contrail Cloud Activation Key
    satellite:
      #SATELLITE_KEY should be defined in vault-data.yml file
      #SATELLITE_ORG
      organization: "ContrailCloud"
      #SATELLITE_FQDN
      fqdn: contrail-cloud-satellite.juniper.net
  # DNS domain information.
  # Must be unique for every deployment to avoid name conflicts
  domain: "my.unique.domain"
jumphost:
  network:
    provision:
      # jumphost nic to be used for provisioning (PXE booting) servers
      nic: eno1
Set the DPDK driver to vfio-pci in the site.yml file when deploying DPDK on an Intel X710 NIC. For a complete matrix of supported NIC and driver mappings, see Contrail Networking NIC Support Matrix.
Note: For Netronome, see Appendix C: Deploying Netronome SmartNIC.
overcloud:
  contrail:
    vrouter:
      dpdk:
        driver: "vfio-pci"
Prepare Ansible Vault:
Store all sensitive data, such as credentials, in Ansible Vault. Customize the /var/lib/contrail_cloud/config/vault-data.yml file:
ansible-vault edit /var/lib/contrail_cloud/config/vault-data.yml
Change the password for the vault-encrypted file after customization. The default password is c0ntrail123.
ansible-vault rekey /var/lib/contrail_cloud/config/vault-data.yml
To avoid being prompted for the vault password during deployment, you can create the /var/lib/contrail_cloud/config/.vault_password file containing the password in plain text. The file should be readable only by the contrail user, and it is best practice to delete the file after the deployment is finished.
sudo chown contrail /var/lib/contrail_cloud/config/.vault_password
sudo chmod 0400 /var/lib/contrail_cloud/config/.vault_password
Run the Contrail Cloud Ansible provisioning:
Verify that you can establish an SSH connection without specifying a password:
sudo ssh localhost true
Install the Contrail Cloud automation scripts:
sudo /var/lib/contrail_cloud/scripts/install_contrail_cloud_manager.sh
Note: Take the optional step to configure Netronome at this time. For more information, see Appendix C: Deploying Netronome SmartNIC.
A new user with the username contrail is created on the Contrail Cloud host (jump host). The default password is c0ntrail123; you should change the default password in your vault-data.yml file. SSH keys are authorized from the root user, and a new set of SSH keys is generated for the contrail user. This gives the contrail user access to the undercloud VM (by executing ssh undercloud) and to the control hosts. The overcloud nodes, including the AppFormix nodes, are accessible by the heat-admin user and, by default, use a separate pair of keys stored on the undercloud VM.
Contrail Cloud adds entries to the /home/contrail/.ssh/config file to provide the username used for each of the overcloud nodes (and the undercloud). Thus, you can run ssh undercloud or ssh <address> without specifying a user.
The contrail user keys can be authorized for heat-admin when the following is defined in the site.yml configuration file:
global:
  service_user:
    use_ssh_key_in_overcloud: true
Use the contrail user to run all subsequent Contrail Cloud operations from the /var/lib/contrail_cloud/scripts directory:
su - contrail
cd /var/lib/contrail_cloud/scripts
Prepare the Deployment Templates
You can validate your configuration files at any time by running the /var/lib/contrail_cloud/scripts/node-configuration.py script. This script loads all the configuration files, checks the syntax, and verifies that the structures and values conform to the schema. Different arguments can be used with the Python script depending on the results you are looking for.
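For example, a basic syntax and schema check across all configuration files, run as the contrail user (the additional arguments mentioned above are release specific and are not shown here):
/var/lib/contrail_cloud/scripts/node-configuration.py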
Site settings
The /var/lib/contrail_cloud/config/site.yml file defines the properties for your deployment environment. The properties in this file are unique for every deployment and need to be customized.
For more information, see sample site.yml in Appendix A: Sample Configuration Files.
Inventory settings
The inventory defines all the servers that are used by Contrail Cloud. The /var/lib/contrail_cloud/config/inventory.yml file contains the descriptions of the inventory.yml file configurations. You can copy a sample inventory file from /var/lib/contrail_cloud/samples/.
For more information, see sample inventory.yml in Appendix A: Sample Configuration Files.
Control hosts settings
The control hosts run virtual machines for all Contrail Cloud control functions. The following are the various Contrail Cloud control VMs that are created on the control hosts:
OpenStack and Ceph Controller
Contrail Controller
Contrail Analytics
Contrail Analytics Database
AppFormix Controller
The /var/lib/contrail_cloud/config/control-host-nodes.yml file defines the server and network properties for each control host. To ensure high availability of the control functions, three control hosts must be defined. Hosts must also be defined in the inventory.yml file. You can copy a sample control-hosts.yml file from the /var/lib/contrail_cloud/samples/ directory.
Note: The control host systems must have sufficient resources to host the control VMs. The control host systems must have the following minimum specifications:
156 GB RAM
Minimum 100 GB first disk for the operating system
Minimum 1 TB hard disk for VM storage (multiple SSDs with RAID is recommended)
Minimum 200 GB SSD drive for VM journals
Hardware RAID controller set to the appropriate RAID level for your operating environment. The operating environment includes: operating system disk, VM storage, and VM journals.
For more information, see sample control-host-nodes.yml in Appendix A: Sample Configuration Files.
Overcloud network settings
The overcloud roles are deployed to the control VMs, compute, and storage resources that you have identified. The /var/lib/contrail_cloud/config/overcloud-nics.yml file defines the network layout for each role.
For more information, see sample overcloud-nics.yml in Appendix A: Sample Configuration Files.
Compute node settings
The compute nodes are used for Nova compute resources. The /var/lib/contrail_cloud/config/compute-nodes.yml file defines the compute resources and host aggregates. You also manage host aggregates and match them with availability zones in this file. Nodes must also be defined in the inventory.yml file.
For more information, see sample compute-nodes.yml in Appendix A: Sample Configuration Files.
Storage node settings
The /var/lib/contrail_cloud/config/storage-nodes.yml file defines the storage nodes that run Ceph storage services. You must define a minimum of three storage hosts to ensure high availability of the storage functions. Nodes must also be defined in the inventory.yml file. You can copy a sample storage-nodes.yml file from /var/lib/contrail_cloud/samples/.
For more information, see sample storage-nodes.yml in Appendix A: Sample Configuration Files.
Add Nodes to the Inventory
The /var/lib/contrail_cloud/scripts/inventory-assign.sh script adds all nodes that are defined in the /var/lib/contrail_cloud/config/inventory.yml file to the ironic inventory. The nodes added to the ironic inventory are managed by Contrail Cloud.
To add nodes to the ironic inventory:
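A basic invocation, assuming you are logged in as the contrail user in the /var/lib/contrail_cloud/scripts directory and that no additional arguments are needed for your environment:
./inventory-assign.sh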
Deploy Control Hosts
The control-hosts-deploy.sh script assigns all nodes that are defined in the /var/lib/contrail_cloud/config/control-host-nodes.yml file as control hosts. The hosts are imaged, booted, configured, and prepared to host the overcloud control plane VMs.
To deploy control host roles to the inventory:
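As with the inventory step, a plain run of the script from the scripts directory as the contrail user is assumed here:
./control-hosts-deploy.sh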
Create VMs for all Control Roles
The control-vms-deploy.sh script imports VM details into the ironic inventory.
To create VMs for control roles:
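A minimal sketch, again assuming the contrail user in the scripts directory:
./control-vms-deploy.sh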
Assign Compute Hosts
The compute-nodes-assign.sh script assigns the Nova compute role for all nodes that are defined in the /var/lib/contrail_cloud/config/compute-nodes.yml file.
To assign compute hosts:
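For example, under the same assumptions as the previous steps:
./compute-nodes-assign.sh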
Assign Storage Hosts
The storage-nodes-assign.sh script assigns the Ceph storage role for all nodes that are defined in the /var/lib/contrail_cloud/config/storage-nodes.yml file.
To assign storage hosts:
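For example, assuming the script is invoked like the other assignment scripts from the scripts directory:
./storage-nodes-assign.sh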
Deploy the OpenStack Cluster
The openstack-deploy.sh script deploys the OpenStack overcloud with all control functions and all compute and storage resources that have been defined in the previous playbooks.
To deploy the OpenStack cluster:
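A basic run from the scripts directory as the contrail user:
./openstack-deploy.sh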
Deploy the AppFormix Cluster
The appformix-deploy.sh script deploys the AppFormix controllers.
When you deploy AppFormix as a highly available cluster, a floating virtual IP (VIP) is used to service requests from the active elected leader; in this case you must define the appformix_vip variable in the site.yml file with a valid IP address. If you do not use a VIP, comment out the appformix_vip variable. Copy the AppFormix license file to /var/lib/contrail_cloud/appformix/appformix.sig.
To deploy the AppFormix cluster:
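Assuming the license file and any appformix_vip setting described above are already in place, a plain run from the scripts directory looks like this:
./appformix-deploy.sh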
Deploy Contrail Command Web UI
The install_contrail_command.sh script deploys the Contrail Command web UI on a virtual machine on the jump host. The Contrail Command web UI can be reached at https://<jumphost ip>:9091.
Review the /var/lib/contrail_cloud/config/vault-data.yml file for the Contrail Command authentication details.
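To deploy it, a basic invocation as the contrail user in the /var/lib/contrail_cloud/scripts directory is:
./install_contrail_command.sh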
Validate the OpenStack Environment
By default, tests that require floating IPs (FIPs) are skipped. The provision-sdn-gateway.sh script can be executed before validation to provision SDN gateways and an external network that can be used by Tempest. Examples of object definitions can be found in /var/lib/contrail_cloud/samples/features/provision-sdn-gateway/site.yml.
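If you need the FIP tests, a plain run of the provisioning script from the scripts directory (as the contrail user, after defining the gateway objects in your configuration) might look like this:
./provision-sdn-gateway.sh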
The overcloud-validation.sh script can be used to run Tempest test collections in newly deployed environments. The script downloads a CirrOS VM image, uploads it to the overcloud, and creates new flavors. After the script runs, the test results can be found in the undercloud home directory, where two files are created:
tempest-subunit-smoke.xml
tempest-subunit-full.xml
The first line of each file shows the number of failures and the total count of conducted tests.
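For example, a basic validation run followed by a quick check of the smoke-test summary (this assumes the contrail user on the jump host, passwordless SSH to the undercloud as described earlier, and that the result files are written to the undercloud home directory):
./overcloud-validation.sh
ssh undercloud head -n 1 tempest-subunit-smoke.xml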
Install VNF Images and Templates
You can use Horizon or OpenStack command-line clients to install Glance images and Heat templates for the VNF services.
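For example, using the standard OpenStack CLI after loading your overcloud credentials; the image, template, and stack names below are placeholders:
openstack image create --disk-format qcow2 --container-format bare --file my-vnf.qcow2 my-vnf-image
openstack stack create -t my-vnf-template.yaml my-vnf-stack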
Add New Compute and Storage Nodes
To add new compute and storage nodes to an existing environment:
Update the inventory.yml configuration file.
Run the inventory-assign.sh script.
Update the compute-nodes.yml configuration file with the new nodes, and run the compute-nodes-assign.sh script.
Update the storage-nodes.yml configuration file with the new nodes, and run the storage-nodes-assign.sh script.
Rerun the openstack-deploy.sh script.
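Taken together, and assuming the contrail user in the /var/lib/contrail_cloud/scripts directory, the steps above map to the following commands (run only the assignment step that applies to your new nodes):
./inventory-assign.sh
./compute-nodes-assign.sh
./storage-nodes-assign.sh
./openstack-deploy.sh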
Gather Logs
This section describes a script that gathers important log, configuration, and status data from your deployed nodes into one place. The script is typically used when you need to collect specific information for troubleshooting or for support calls.
We recommend that you run the script after a successful deployment to provide a baseline that can be compared against future upgrades or failures. To archive the configuration, status, and logs from the deployment:
/var/lib/contrail_cloud/scripts/collect_data.sh -r all
The usage description of the collect_data.sh script is as follows:
Usage: ./collect_data.sh [-r ROLE] [-d] [-h|?]
  -r ROLE   collect data from a specific role or environment. Possible values for ROLE:
            jumphost, undercloud, control_hosts, openstack_controllers, contrail_controllers,
            contrail_analytics, contrail_analytics_db, appformix_controllers, contrail_tsn,
            compute_nodes, storage_nodes, all - collect data from all nodes.
            You can specify multiple roles, eg. -r "undercloud jumphost appformix_controllers"
  -d        enable debugging messages
  -e        external config
  -h        print usage information