
Installing Contrail with OpenStack and Kolla Ansible

This topic provides the steps needed to install Contrail Release 5.0.x with OpenStack, using the Kolla Ansible playbooks (contrail-kolla-ansible). Kolla is an OpenStack project that provides production-ready Docker containers and Ansible-based deployment tools for operating OpenStack clouds.

The contrail-kolla-ansible playbook works in conjunction with contrail-ansible-deployer to install the OpenStack and Contrail Release 5.0.x containers.

To deploy a Contrail Cluster using Contrail Command, see Installing Contrail Cluster using Contrail Command and instances.yml.

Deployment of Kolla containers using contrail-kolla-ansible and of Contrail containers using contrail-ansible-deployer is described in this topic.

Set Up the Base Host

This procedure assumes you are installing on CentOS 7.5 with kernel 3.10.0-862.11.6.el7.x86_64. The vRouter has a dependency on the host kernel; install this kernel version on the target nodes before provisioning.

To set up the base host:

  1. Download the appropriate installer package from the Contrail Download page.

  2. Install Ansible.

    yum -y install epel-release

    yum -y install git ansible-2.4.2.0

  3. Untar the .tgz file.

    tar xvf contrail-ansible-deployer-5.0.1-0.214.tgz

    The instances.yaml file is located in the contrail-ansible-deployer/config/ directory.

  4. Configure Contrail and Kolla parameters in the file instances.yaml, using the following guidelines:

    • The provider configuration (provider_config) section refers to the cloud provider where the Contrail cluster will be hosted, and contains all parameters relevant to the provider. For bare metal servers, the provider is bms.

    • The kolla_globals section refers to OpenStack services. For more information about all possible kolla_globals, see https://github.com/Juniper/contrail-kolla-ansible/.../globals.yml.

    • Additional Kolla configurations (contrail-kolla-ansible) are possible as contrail_additions. For more information about all possible contrail_additions to Kolla, see https://github.com/Juniper/contrail-kolla-ansible/.../all.yml.

    • The contrail_configuration section contains parameters for Contrail services.

      • CONTAINER_REGISTRY specifies the registry from which to pull Contrail containers. It can be set to your local Docker registry if you are building your own containers. If a registry is not specified, the containers are pulled from Docker Hub.

        If a custom registry is specified, also specify the same registry under kolla_globals as contrail_docker_registry.

      • CONTRAIL_VERSION, if not specified, will default to the "latest" tag. It is possible to specify a tag from nightly builds.

      • For more information about all possible parameters for contrail_configuration, see https://github.com/Juniper/contrail-container-builder/.../common.sh.

      • If “roles” is not specified, a default set of roles is assumed.

      • If there are host-specific values per host, for example, if the names of the interfaces used for "network_interface" are different on the servers in your cluster, use the example configuration at Configuration Sample for Multi Node OpenStack HA and Contrail (multi interface).

      • Many of the parameters are automatically derived and set to sane defaults; this is how the bare minimum example configuration works. You can explicitly specify variables to override the derived values if required. Review the code to see the derivation logic.

      • CONTROL_DATA_NET_LIST can be a comma-separated list of CIDR subnets designated for control and data plane traffic. The Kolla parameter network_interface is derived from this list as the interface whose IP address falls within one of these subnets. CONTROL_DATA_NET_LIST can also be used in a single-interface setup by specifying the management subnet as its value, so that interface names need not be specified.

    Example: instances.yaml

    This example is a bare minimum configuration for a single node, single interface, all-in-one cluster.
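
    As an illustrative sketch only (the addresses, passwords, and NTP server are placeholders, and the key names and nesting should be verified against the globals.yml, all.yml, and common.sh files referenced above), a bare minimum file might look like this:

      provider_config:
        bms:
          ssh_pwd: <password>
          ssh_user: root
          ntpserver: <ntp-server-ip>
          domainsuffix: local
      instances:
        bms1:
          provider: bms
          ip: <server-ip>
      contrail_configuration:
        CONTAINER_REGISTRY: opencontrailnightly   # example registry; omit to pull from Docker Hub
        CONTRAIL_VERSION: latest
      kolla_config:
        kolla_passwords:
          keystone_admin_password: <password>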

    Example: instances.yaml

    This example is a more elaborate configuration for a single node, single interface, all-in-one cluster.
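
    As a sketch of a more elaborate file (again with placeholder values; the role names and key nesting should be checked against the deployer repository for your release), the roles and a few Kolla overrides can be listed explicitly:

      provider_config:
        bms:
          ssh_pwd: <password>
          ssh_user: root
          ntpserver: <ntp-server-ip>
          domainsuffix: local
      instances:
        bms1:
          provider: bms
          ip: <server-ip>
          roles:
            config_database:
            config:
            control:
            analytics_database:
            analytics:
            webui:
            vrouter:
            openstack:
            openstack_compute:
      contrail_configuration:
        CONTAINER_REGISTRY: opencontrailnightly   # example registry; use your own if you build containers
        CONTRAIL_VERSION: latest                  # or a specific tag from the nightly builds
      kolla_config:
        kolla_globals:
          enable_haproxy: "no"                    # see the HAProxy discussion later in this topic
        kolla_passwords:
          keystone_admin_password: <password>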

  5. Run the following commands:

    • ansible-playbook -e orchestrator=openstack -i inventory/ playbooks/configure_instances.yml

    • ansible-playbook -i inventory/ playbooks/install_openstack.yml

    • ansible-playbook -e orchestrator=openstack -i inventory/ playbooks/install_contrail.yml

  6. Open a web browser and go to https://<contrail-server-ip>:8143 to access the Contrail Web UI.

    The default login user name is admin. Use the same password that was entered in step 4.

Run OpenStack Commands

At this time, it is necessary to manually install the OpenStack client (python-openstackclient) using pip. You cannot install using Yum repos because some dependent Python libraries conflict with the installation of the python-openstackclient. You also cannot install using pip repos because Ansible libraries can be overwritten.

  1. Manually install the python-openstackclient.

    yum install -y gcc python-devel

    pip install python-openstackclient

    pip install python-ironicclient

  2. Test the setup with VM-to-VM ping.
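
    A minimal smoke test, assuming admin credentials have been sourced and using placeholder names, image file, and address range, might look like this:

      # Create a test network, image, and flavor.
      openstack network create testnet
      openstack subnet create --network testnet --subnet-range 192.168.100.0/24 testsubnet
      openstack image create cirros --disk-format qcow2 --container-format bare \
        --file cirros-0.4.0-x86_64-disk.img
      openstack flavor create --ram 512 --vcpus 1 --disk 1 m1.tiny

      # Boot two instances on the test network, then ping vm2 from the console of vm1.
      openstack server create --image cirros --flavor m1.tiny --network testnet vm1
      openstack server create --image cirros --flavor m1.tiny --network testnet vm2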

Multiple Interface Configuration Sample for Multinode OpenStack HA and Contrail

This is a configuration sample for a multiple interface, multiple node deployment of high availability OpenStack and Contrail Release 5.0.x. Use this sample to configure parameters specific to your system.

For more information or for recent updates, refer to the github topic Configuration Sample for Multi Node OpenStack HA and Contrail (multi interface).

Configuration Sample—Multiple Interface

Note:

This example shows host-specific parameters, where interface names differ between hosts and are specified under each role. The most specific setting takes precedence. For example, if there were no network_interface setting under the openstack role for bms1, it would take the value eth2 from the global variable. However, because there is a setting under the bms1 openstack section, the network_interface name is eno1.
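
The fragment below is an illustrative sketch of this override, not the full sample; the host name and address are placeholders (only eth2 and eno1 come from the note above), and the exact key nesting should be checked against the linked github topic.

    kolla_config:
      kolla_globals:
        network_interface: eth2            # global default interface name
    instances:
      bms1:
        provider: bms
        ip: <bms1-ip>
        roles:
          openstack:
            network_interface: eno1        # host-specific override; takes precedence over eth2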

Single Interface Configuration Sample for Multinode OpenStack HA and Contrail

This is a configuration sample for a single interface, multiple node deployment of high availability OpenStack and Contrail Release 5.0.x. Use this sample to configure parameters specific to your system.

For more information or for recent updates, refer to the github topic Configuration Sample for Multi Node OpenStack HA and Contrail (single interface).

Configuration Sample—Single Interface

Note:

Replace <contrail_version> with the correct contrail_container_tag value for your Contrail release. The respective contrail_container_tag values are listed in README Access to Contrail Registry.
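
As an illustrative fragment only (the full sample is in the linked github topic), the tag is typically supplied through the CONTRAIL_VERSION parameter described earlier:

    contrail_configuration:
      CONTRAIL_VERSION: <contrail_version>   # replace with the contrail_container_tag value for your release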

Frequently Asked Questions

This section presents some common error situations and gives guidance on how to resolve the error condition.

Using Host-Specific Parameters

You might have a situation where you need to specify host-specific parameters, for example, the interface names are different for the different servers in the cluster. In this case, you could specify the individual names under each role, and the more specific setting takes precedence.

For example, if there is no "network_interface" setting under the "openstack" role for a given host such as "bms1", the setting is taken from the global variable.

An extended example is available at: Configuration Sample for Multi Node OpenStack HA and Contrail.

Containers from Private Registry Not Accessible

  1. You might have a situation in which containers pulled from the private registry specified in CONTAINER_REGISTRY are not accessible.

  2. To resolve, check to ensure that REGISTRY_PRIVATE_INSECURE is set to True.
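
    A sketch of the relevant instances.yaml fragment, using a hypothetical registry address, might look like this:

      contrail_configuration:
        CONTAINER_REGISTRY: registry.example.local:5000   # hypothetical private registry
        REGISTRY_PRIVATE_INSECURE: True
      kolla_config:
        kolla_globals:
          contrail_docker_registry: registry.example.local:5000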

Error: Failed to insert vrouter kernel module

  1. You might have a situation in which the vrouter module does not get installed on the compute nodes, the vrouter container is in an error state, and errors are shown in the Docker logs.

  2. In this release, the vrouter module requires host kernel version 3.10.0-862.11.6.el7.x86_64. Install this kernel version on the target nodes before running the provisioning playbooks.
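
    A sketch of installing the required kernel, assuming the package is available in your configured Yum repositories, might look like this:

      yum -y install kernel-3.10.0-862.11.6.el7.x86_64
      reboot
      # After the reboot, verify the running kernel before provisioning.
      uname -r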

Fatal Error When Vrouter Doesn’t Specify OpenStack

  1. You might encounter a fatal error when vrouter needs to be provisioned without nova-compute.

  2. There is a use case in which vrouter needs to be provisioned without being accompanied by nova-compute. Consequently, the "openstack_compute" role is not automatically inferred when the "vrouter" role is specified. To resolve this issue, state the "openstack_compute" role explicitly along with "vrouter", as shown in the sketch below.

    For more information about this use case, refer to bug #1756133.
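
    A sketch of the relevant roles fragment for such a node (the host name and address are placeholders) might look like this:

      instances:
        bms2:
          provider: bms
          ip: <compute-node-ip>
          roles:
            vrouter:
            openstack_compute:   # must be stated explicitly; it is not inferred from vrouter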

Need for HAProxy and Virtual IP on a Single OpenStack Cluster

By default, all OpenStack services listen on the IP interface provided by the kolla_internal_vip_address/network_interface variables under the kolla_globals section in config/instances.yaml. In most cases this corresponds to the ctrl-data network, which means that even Horizon will now run only on the ctrl-data network. The only way Kolla provides access to Horizon on the management network is by using HAProxy and keepalived. Enabling keepalived requires a virtual IP address for VRRP, and it cannot be the interface IP address. There is no way to enable HAProxy without enabling keepalived when using Kolla configuration parameters. For this reason, you need to provide two virtual IP addresses: one on the management network (kolla_external_vip_address) and one on the ctrl-data network (kolla_internal_vip_address). With this configuration, Horizon is accessible on the management network by means of the kolla_external_vip_address.
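
A sketch of the corresponding kolla_globals settings, using placeholder addresses and interface names, might look like this:

    kolla_config:
      kolla_globals:
        enable_haproxy: "yes"
        network_interface: <ctrl-data-interface>
        kolla_internal_vip_address: <unused-ip-on-ctrl-data-network>
        kolla_external_vip_interface: <management-interface>
        kolla_external_vip_address: <unused-ip-on-management-network>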

Using the kolla_toolbox Container to Run OpenStack Commands

The directory /etc/kolla/kolla-toolbox on the base host on which OpenStack containers are running is mounted and accessible as /var/lib/kolla/config_files from inside the kolla_toolbox container. If you need other files when executing OpenStack commands, for example the command openstack image create needs an image file, you can copy the relevant files into the /etc/kolla/kolla-toolbox directory of the base host and use them inside the container.

The following example shows how to run OpenStack commands in this way:
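
The sketch below assumes a credentials file (admin-openrc.sh) and an image file have been copied to /etc/kolla/kolla-toolbox on the base host; both file names are placeholders.

    # On the base host: stage the files the OpenStack command will need.
    cp admin-openrc.sh cirros-0.4.0-x86_64-disk.img /etc/kolla/kolla-toolbox/

    # Enter the kolla_toolbox container; the staged files appear under /var/lib/kolla/config_files.
    docker exec -it kolla_toolbox bash
    cd /var/lib/kolla/config_files
    source admin-openrc.sh
    openstack image create cirros --disk-format qcow2 --container-format bare \
      --file cirros-0.4.0-x86_64-disk.img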