Deploy Contrail Cloud

Prerequisites for Contrail Cloud Deployment

Before you deploy Contrail Cloud, ensure that your system meets the following prerequisites:

  • Infrastructure networking

    • Every system must have access to the Contrail Cloud repository satellite. The satellite is used to distribute packages and control software versions.

    • The Contrail Cloud jump host must have access to the Intelligent Platform Management Interface (IPMI) of every managed server.

    • The Contrail Cloud jump host must be in the same broadcast domain as each managed server’s management interface to allow Preboot Execution Environment (PXE) booting. When managed servers sit behind different switching devices per rack, stretch a VLAN across those interfaces to form a single broadcast domain. BOOTP forwarding in the network fabric is not supported, and the undercloud is the only DHCP server in this network.

      Additional networks are created for the control plane, tenant traffic, storage access, and storage backend, as described in Red Hat OpenStack Platform director (OSPd) installation and usage.

  • Contrail Cloud jump host setup

    The OpenStack undercloud is deployed as a virtual machine on the Contrail Cloud jump host, which uses the Linux Kernel-based Virtual Machine (KVM) hypervisor. You must ensure that the KVM host OS:

    • Runs Red Hat Enterprise Linux 7.8 or earlier with only base packages installed. Contrail Cloud will install RHEL 7.9 and all necessary packages as part of the Contrail Cloud installation process.

    • Does not run other virtual machines.

    • Has a network connection that can reach the Contrail Cloud Repository Satellite and has IPMI access to physical hardware.

    • Has a network connection that can be used for provisioning other infrastructure resources.

    • Has at least 500 GB space in the /var directory to host virtual machines, packages, and images.

    • Has at least 40 GB RAM and 24 vCPUs.

    • Has a user, such as root, with password-less sudo permissions.

    • Provides password-less SSH access over the loopback interface for users with sudo permissions.

    • Resolves Internet and satellite sites with DNS.

    • Has time synchronized with an NTP source.

Deployment Sequence for Contrail Cloud Deployment

The following sections describe how to install, configure, and deploy Contrail Cloud.

Install Contrail Cloud Installer on the Contrail Cloud Host

Send a request to contrail_cloud_subscriptions@juniper.net regarding the purchase or upgrade of Contrail Cloud. You will receive an e-mail containing:

  • A tar file containing the Contrail Cloud installer in .sh format. Untar the file to extract the contrail_cloud_installer.sh script.

  • A unique satellite activation key.

  • The satellite DNS name.

  • The satellite organization.

You configure site.yml file settings to use the Contrail Cloud Satellite as the repository to access all Contrail Cloud packages including:

  • Red Hat OpenStack and Red Hat Enterprise Linux

  • Red Hat Ceph Storage

  • Contrail Networking

  • AppFormix

Ensure that the root user has SSH keys before performing the installation. Create new SSH keys:
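The original command listing was omitted here; a typical invocation (the key type, size, and path are assumptions, not mandated by Contrail Cloud) is:

```shell
# Generate an SSH key pair for the root user with no passphrase.
# Key type, size, and path are illustrative assumptions.
sudo ssh-keygen -t rsa -b 2048 -N "" -f /root/.ssh/id_rsa
```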

To create a passphrase-protected key:
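As a hedged sketch, omitting the -N option makes ssh-keygen prompt interactively for a passphrase:

```shell
# Same as the unprotected variant, but ssh-keygen prompts you
# to choose a passphrase for the private key.
sudo ssh-keygen -t rsa -b 2048 -f /root/.ssh/id_rsa
```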

If a passphrase is set on the SSH key, you can use the ssh-agent to cache the passphrase. For example, as the contrail user on the jump host:
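A typical agent session (standard OpenSSH commands; the key path is an assumption) looks like:

```shell
# Start an agent for this session and cache the key's passphrase
# so later SSH connections do not prompt for it.
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
```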

Ensure that the root user can connect via SSH to the localhost without a password. To authorize access (a password might be required the first time):
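One way to authorize loopback access (a sketch using standard OpenSSH file locations, which are assumptions) is:

```shell
# Append root's public key to its own authorized_keys, then verify
# that loopback SSH works without a password prompt.
sudo sh -c 'cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys'
sudo ssh localhost true
```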

Install Contrail Cloud:

  1. Untar the installer package on the jump host to extract the contrail_cloud_installer.sh script. The jump host is the Contrail Cloud host, and it is the starting point for deploying Contrail Cloud.

  2. Specify the Contrail Cloud activation key by setting the environment variables.

    For example:
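The example was omitted here; as an illustration (the variable names are placeholders, so use the exact names and values from your Contrail Cloud welcome e-mail):

```shell
# Variable names below are hypothetical placeholders - substitute
# the names and values given in your welcome e-mail.
export SATELLITE_KEY="<your-satellite-activation-key>"
export SATELLITE_ORG="<your-satellite-organization>"
```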

  3. Ensure that the Contrail Cloud Installer script has the required permissions to install the Contrail Cloud packages:
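A typical sequence (a sketch; the working directory is wherever you extracted the script) is:

```shell
# Make the extracted installer executable, then run it with root
# privileges so it can install the Contrail Cloud packages.
chmod +x contrail_cloud_installer.sh
sudo ./contrail_cloud_installer.sh
```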

    The Contrail Cloud packages are installed in the /var/lib/contrail_cloud directory.

  4. Define site-specific information in the Ansible variables:

    1. Change the directory to /var/lib/contrail_cloud/config.

    2. Copy the sample /var/lib/contrail_cloud/samples/*.yml configuration files to the /var/lib/contrail_cloud/config directory. You can skip this step if you have existing configuration files in the config directory.

    3. Customize the /var/lib/contrail_cloud/config/site.yml file with site-specific settings to reflect your environment. Ensure that the following fields are changed for each site:
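The list of fields was omitted here. As a hedged illustration only (the key names below are hypothetical; use the structure of the shipped sample site.yml), the values that typically change per site include the satellite details from your welcome e-mail:

```yaml
# Hypothetical structure for illustration only - consult
# /var/lib/contrail_cloud/samples/site.yml for the real key names.
satellite:
  fqdn: "satellite.example.net"        # the satellite DNS name from your e-mail
  organization: "<your-organization>"
  activation_key: "<your-activation-key>"
```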

  5. Set the DPDK driver to vfio-pci in the site.yml file when deploying DPDK on an Intel X710 NIC.

    For a complete matrix of supported NIC and driver mapping, see Contrail Networking NIC Support Matrix.

    Note:

    For Netronome, see: Appendix C: Deploying Netronome SmartNIC.

  6. Prepare Ansible Vault:

    1. Store all sensitive data in Ansible Vault. Credentials are commonly stored in Ansible Vault. Customize the /var/lib/contrail_cloud/config/vault-data.yml:
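The customization example was omitted here. As an illustration only (the key names are hypothetical; start from the sample vault-data.yml in /var/lib/contrail_cloud/samples/), the file holds credentials such as:

```yaml
# Illustrative only - key names are hypothetical; use the sample
# vault-data.yml shipped with Contrail Cloud as your starting point.
vault:
  contrail:
    password: "<new-contrail-user-password>"
```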

    2. Change the password for the vault-encrypted file after customization.

      The default password is c0ntrail123.

    You can create the /var/lib/contrail_cloud/config/.vault_password file containing the plain-text vault password to avoid being prompted for it during deployment. The file should be readable only by the contrail user, and it is best practice to delete it after deployment finishes.
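A minimal sketch of creating and protecting that file:

```shell
# Store the vault password where the deployment scripts can read it;
# restrict it to the owning user and delete it after deployment.
echo 'c0ntrail123' > /var/lib/contrail_cloud/config/.vault_password
chmod 400 /var/lib/contrail_cloud/config/.vault_password
```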

  7. Run the Contrail Cloud Ansible provisioning:

    1. Verify that you can establish an SSH connection without specifying a password:

      sudo ssh localhost true

    2. Install the Contrail Cloud automation scripts:

      sudo /var/lib/contrail_cloud/scripts/install_contrail_cloud_manager.sh

      Note:

      Take the optional step to configure Netronome at this time. For more information, see: Appendix C: Deploying Netronome SmartNIC.

A new user with the username contrail is created on the Contrail Cloud host (jump host). The default password is c0ntrail123; you should change it in your vault-data.yml file. SSH keys have been authorized from the root user, and a new set of SSH keys is generated for the contrail user. These keys give the contrail user access to the undercloud VM (by executing ssh undercloud) and to the control hosts. The overcloud nodes, including AppFormix nodes, are accessible as the heat-admin user and, by default, use a separate key pair stored on the undercloud VM.

Contrail Cloud adds entries to the /home/contrail/.ssh/config file that record the username used for each of the overcloud nodes (and the undercloud). Thus, you can run ssh undercloud or ssh <address> without specifying a user.

The contrail user keys can be authorized for heat-admin when it is defined in the site.yml configuration file:

Use the contrail user to run all subsequent operations in Contrail Cloud from the /var/lib/contrail_cloud/scripts directory:
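The command example was omitted here; interactively, this amounts to something like:

```shell
# Switch to the contrail user, then (in the new shell) change to the
# scripts directory from which all subsequent operations are run.
su - contrail
cd /var/lib/contrail_cloud/scripts
```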

Note:

The SSH keys are authorized by the root user.

Prepare the Deployment Templates

You can validate your configuration files at any time by running the /var/lib/contrail_cloud/scripts/node-configuration.py script. This script loads all the configuration files, checks the syntax, and verifies that the structures and values conform to the schema. Different arguments can be passed to the script depending on the results you want.

  • Site settings

    The /var/lib/contrail_cloud/config/site.yml file defines the properties for your deployment environment. The properties in this file are unique for every deployment and need to be customized.

    For more information, see sample site.yml in Appendix A: Sample Configuration Files.

  • Inventory settings

    The inventory defines all the servers that are used by Contrail Cloud. The /var/lib/contrail_cloud/config/inventory.yml file describes the inventory configuration. You can copy a sample inventory file from /var/lib/contrail_cloud/samples/.

    For more information, see sample inventory.yml in Appendix A: Sample Configuration Files.

  • Control hosts settings

    The control hosts run virtual machines for all Contrail Cloud control functions. The following are the various Contrail Cloud control VMs that are created on the control hosts:

    • OpenStack and Ceph Controller

    • Contrail Controller

    • Contrail Analytics

    • Contrail Analytics Database

    • AppFormix Controller

    The /var/lib/contrail_cloud/config/control-host-nodes.yml file defines the server and network properties for each control host. To ensure high availability of the control functions, three control hosts must be defined. Hosts must also be defined in the inventory.yml file. You can copy a sample control-hosts.yml file from the /var/lib/contrail_cloud/samples/ directory.

    Note:

    The control host systems must have sufficient resources to host the control VMs. The control host systems must have the following minimum specifications:

    • 156 GB RAM

    • Minimum 100 GB first disk for the operating system

    • Minimum 1 TB hard disk for VM storage (multiple SSDs with RAID is recommended)

    • Minimum 200 GB SSD drive for VM journals

    • Hardware RAID controller set to the appropriate RAID level for your operating environment. The operating environment includes: operating system disk, VM storage, and VM journals.

    For more information, see sample control-host-nodes.yml in Appendix A: Sample Configuration Files.

  • Overcloud network settings

    The overcloud roles are deployed to the control VMs, compute, and storage resources that you have identified. The /var/lib/contrail_cloud/config/overcloud-nics.yml file defines the network layout for each role.

    For more information, see sample overcloud-nics.yml in Appendix A: Sample Configuration Files.

  • Compute node settings

    The compute nodes are used for Nova compute resources. The /var/lib/contrail_cloud/config/compute-nodes.yml file defines the compute resources and host aggregates. You also manage host aggregates and match them with availability zones in this file. Nodes must also be defined in the inventory.yml file.

    For more information, see sample compute-nodes.yml in Appendix A: Sample Configuration Files.

  • Storage node settings

    The /var/lib/contrail_cloud/config/storage-nodes.yml file defines the storage nodes that run Ceph storage services. You must define a minimum of three storage hosts to ensure high availability of the storage functions. Nodes must also be defined in the inventory.yml file. You can copy a sample storage-nodes.yml file from /var/lib/contrail_cloud/samples/.

    For more information, see sample storage-nodes.yml in Appendix A: Sample Configuration Files.

Add Nodes to the Inventory

The /var/lib/contrail_cloud/scripts/inventory-assign.sh script adds all nodes that are defined in the /var/lib/contrail_cloud/config/inventory.yml file to the ironic inventory. The nodes added to the ironic inventory are managed by Contrail Cloud.

To add nodes to the ironic inventory:

  1. Log in to the Contrail Cloud jump host with the username contrail and password c0ntrail123.
  2. Run the inventory-assign.sh script.

    /var/lib/contrail_cloud/scripts/inventory-assign.sh

  3. Generate a report of the available resource properties.

    These details are helpful when configuring roles, disk devices, and network interfaces. Nodes must be loaded into the ironic inventory before running the node-configuration.py script. The node-configuration.py script also validates configurations against the schema and can be used after editing any of the configuration files. Generate the report by executing:

    /var/lib/contrail_cloud/scripts/node-configuration.py group

    More detailed reports for a specific resource can be generated (where <resource> is the inventory resource name):

Deploy Control Hosts

The control-hosts-deploy.sh script assigns all nodes that are defined in the /var/lib/contrail_cloud/config/control-host-nodes.yml file as control hosts. The hosts are imaged, booted, configured, and prepared to host the overcloud control plane VMs.

To deploy control host roles to the inventory:

  1. Log in to the Contrail Cloud jump host with the username contrail and password c0ntrail123.
  2. Run the control-hosts-deploy.sh script.

    /var/lib/contrail_cloud/scripts/control-hosts-deploy.sh

Create VMs for all Control Roles

The control-vms-deploy.sh script imports VM details into the ironic inventory.

To create VMs for control roles:

  1. Log in to the Contrail Cloud jump host with the username contrail and password c0ntrail123.
  2. Run the control-vms-deploy.sh script.

    /var/lib/contrail_cloud/scripts/control-vms-deploy.sh

Assign Compute Hosts

The compute-nodes-assign.sh script assigns the Nova compute role for all nodes that are defined in the /var/lib/contrail_cloud/config/compute-nodes.yml file.

To assign compute hosts:

  1. Log in to the Contrail Cloud jump host with the username contrail and password c0ntrail123.
  2. Run the compute-nodes-assign.sh script.

    /var/lib/contrail_cloud/scripts/compute-nodes-assign.sh

Assign Storage Hosts

The storage-nodes-assign.sh script assigns the Ceph storage role for all nodes that are defined in the /var/lib/contrail_cloud/config/storage-nodes.yml file.

To assign storage hosts:

  1. Log in to the Contrail Cloud jump host with the username contrail and password c0ntrail123.
  2. Run the storage-nodes-assign.sh script.

    /var/lib/contrail_cloud/scripts/storage-nodes-assign.sh

Deploy the OpenStack Cluster

The openstack-deploy.sh script deploys the OpenStack overcloud with all control functions and all compute and storage resources that were defined in the previous steps.

To deploy the OpenStack cluster:

  1. Log in to the Contrail Cloud jump host with the username contrail and password c0ntrail123.
  2. Run the validate-node.sh script to verify that the environment is set correctly.

    Run the validate-node.sh script on the jump host in /var/lib/contrail_cloud/scripts to validate the YAML configuration files for:

    • Network for OpenStack Controllers, Contrail Controller, Contrail Analytics, and Analytics DB.

    • Networking for controller hosts and compute hosts.

    • Disk resource and configuration validation.

  3. Run the openstack-deploy.sh script.

    /var/lib/contrail_cloud/scripts/openstack-deploy.sh

Deploy the AppFormix Cluster

The appformix-deploy.sh script deploys the AppFormix controllers.

When you deploy AppFormix in a highly available cluster, a floating virtual IP (VIP) services requests from the active elected leader; you must define the appformix_vip variable in the site.yml file with a valid IP address. If you do not use a VIP, comment out the appformix_vip variable. Copy the AppFormix license file to /var/lib/contrail_cloud/appformix/appformix.sig.

To deploy the AppFormix cluster:

  1. Log in to the Contrail Cloud jump host with the username contrail and password c0ntrail123.
  2. Run the appformix-deploy.sh script.

    /var/lib/contrail_cloud/scripts/appformix-deploy.sh

Deploy Contrail Command Web UI

The install_contrail_command.sh script deploys the Contrail Command web UI on a virtual machine on the jump host. The Contrail Command web UI can be reached at https://<jumphost ip>:9091.

  • Review the /var/lib/contrail_cloud/config/vault-data.yml for Contrail Command authentication details.

Validate the OpenStack Environment

By default, tests that require floating IPs (FIPs) are skipped. You can run the provision-sdn-gateway.sh script before validation to provision SDN gateways and an external network for Tempest to use. Example object definitions can be found at /var/lib/contrail_cloud/samples/features/provision-sdn-gateway/site.yml.

The overcloud-validation.sh script runs Tempest test collections in newly deployed environments. The script downloads a CirrOS VM image, uploads it to the overcloud, and creates new flavors. After the script finishes, the test results are written to two files in the undercloud home directory:

  • tempest-subunit-smoke.xml

  • tempest-subunit-full.xml

The first line of each file shows the number of failures and the total number of tests run.

Install VNF Images and Templates

You can use Horizon or OpenStack command-line clients to install Glance images and Heat templates for the VNF services.

Add New Compute and Storage Nodes

To add new compute and storage nodes to an existing environment:

  1. Update the inventory.yml configuration file.

  2. Run the inventory-assign.sh script.

  3. Update the compute-nodes.yml configuration file with the new nodes, and run the compute-nodes-assign.sh script.

  4. Update the storage-nodes.yml configuration file with the new nodes, and run the storage-nodes-assign.sh script.

  5. Rerun the openstack-deploy.sh script.

Gather Logs

This section provides a script that gathers important log, configuration, and status data from your deployed nodes into one place. The script is typically used to collect the information needed for troubleshooting or support calls.

We recommend running the script after a successful deployment to establish a baseline for comparison against future upgrades or failures. To archive the configuration, status, and logs from the deployment:
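Assuming collect_data.sh lives alongside the other Contrail Cloud scripts (the path is an assumption based on the pattern of the scripts above), a typical invocation as the contrail user is:

```shell
# Gather logs, configuration, and status into a single archive.
# The path is an assumption; adjust it to your installation.
/var/lib/contrail_cloud/scripts/collect_data.sh
```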

The usage description of the collect_data.sh script is as follows: