Installing a Contrail Cluster using Contrail Command and instances.yml
Contrail Networking supports deploying a Contrail cluster using Contrail Command and the instances.yml file. The YAML file provides a concise format for specifying the instance settings.
We recommend installing Contrail Command and deploying your Contrail cluster from Contrail Command in most Contrail Networking deployments. See How to Install Contrail Command and Provision Your Contrail Cluster. Use the procedure in this document only if you have a strong reason not to use the recommended procedure.
System Requirements
A VM or physical server with:
4 vCPUs
32 GB RAM
100 GB disk
Internet access to and from the physical server, hereafter referred to as the Contrail Command server
(Recommended) x86 server with CentOS 7.6 as the base OS to install Contrail Command
For a list of supported platforms for all Contrail Networking releases, see Contrail Networking Supported Platforms List.
Contrail Release 5.1 does not support Contrail Insights deployment from the command line using the Contrail cluster instances.yml file.
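Before you install Contrail Command, you can verify that the server meets the CPU, memory, disk, and OS requirements listed above. The following commands are an optional, minimal check that assumes standard CentOS utilities are available on the server:

# Count available vCPUs (at least 4 required)
nproc
# Show total memory in GB (at least 32 GB required)
free -g
# Show available disk space on the root filesystem (at least 100 GB required)
df -h /
# Confirm the base OS release (CentOS 7.6 is recommended)
cat /etc/centos-release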
Before you begin
The docker-py Python module is superseded by the docker Python module. You must remove the docker-py and docker Python packages from all the nodes where you want to install the Contrail Command UI.
pip uninstall docker-py docker
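To confirm that both Python packages have been removed before you proceed, you can list the installed packages and filter for docker. This optional check uses standard pip functionality; no matching output means neither package is installed.

# List any remaining Python packages whose names contain "docker"
pip freeze | grep -i docker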
Configuration
Perform the following steps to deploy a Contrail Cluster using Contrail Command and the instances.yml file.
Enable the subscription on all the Red Hat nodes.
sudo subscription-manager register --username <USERNAME> --password <PASSWORD>
sudo subscription-manager attach --pool <pool_id>
sudo subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-rh-common-rpms --enable=rhel-ha-for-rhel-7-server-rpms --enable=rhel-7-server-extras-rpms
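After registering the nodes, you can optionally verify that the required repositories are enabled. This quick check uses the standard subscription-manager and yum tools:

# List the repositories enabled through the subscription
sudo subscription-manager repos --list-enabled
# Confirm that yum can see the enabled repositories
sudo yum repolist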
Sample instances.yml File
global_configuration:
  CONTAINER_REGISTRY: hub.juniper.net/contrail
  CONTAINER_REGISTRY_USERNAME: <container_registry_username>
  CONTAINER_REGISTRY_PASSWORD: <container_registry_password>
provider_config:
  bms:
    ssh_pwd: <Pwd>
    ssh_user: root
    ntpserver: <NTP Server>
    domainsuffix: local
instances:
  bms1:
    provider: bms
    ip: <BMS IP>
    roles:
      config_database:
      config:
      control:
      analytics_database:
      analytics:
      webui:
      vrouter:
      openstack:
      openstack_compute:
  bms2:
    provider: bms
    ip: <BMS2 IP>
    roles:
      openstack:
  bms3:
    provider: bms
    ip: <BMS3 IP>
    roles:
      openstack:
  bms4:
    provider: bms
    ip: <BMS4 IP>
    roles:
      config_database:
      config:
      control:
      analytics_database:
      analytics:
      webui:
  bms5:
    provider: bms
    ip: <BMS5 IP>
    roles:
      config_database:
      config:
      control:
      analytics_database:
      analytics:
      webui:
  bms6:
    provider: bms
    ip: <BMS6 IP>
    roles:
      config_database:
      config:
      control:
      analytics_database:
      analytics:
      webui:
  bms7:
    provider: bms
    ip: <BMS7 IP>
    roles:
      vrouter:
        PHYSICAL_INTERFACE: <Interface name>
        VROUTER_GATEWAY: <Gateway IP>
      openstack_compute:
  bms8:
    provider: bms
    ip: <BMS8 IP>
    roles:
      vrouter:
        # Add the following line for a TSN compute node
        TSN_EVPN_MODE: True
      openstack_compute:
contrail_configuration:
  CLOUD_ORCHESTRATOR: openstack
  CONTRAIL_VERSION: latest or <contrail_container_tag>
  RABBITMQ_NODE_PORT: 5673
  KEYSTONE_AUTH_PUBLIC_PORT: 5005
  VROUTER_GATEWAY: <Gateway IP>
  ENCAP_PRIORITY: VXLAN,MPLSoUDP,MPLSoGRE
  AUTH_MODE: keystone
  KEYSTONE_AUTH_HOST: <Internal VIP>
  KEYSTONE_AUTH_URL_VERSION: /v3
  CONTROLLER_NODES: <list of mgmt. ip of control nodes>
  CONTROL_NODES: <list of control-data ip of control nodes>
  OPENSTACK_VERSION: queens
kolla_config:
  kolla_globals:
    openstack_release: queens
    kolla_internal_vip_address: <Internal VIP>
    kolla_external_vip_address: <External VIP>
    enable_haproxy: "no"    # "no" by default, set "yes" to enable
    enable_ironic: "no"     # "no" by default, set "yes" to enable
    enable_swift: "no"      # "no" by default, set "yes" to enable
    keystone_public_port: 5005
    swift_disk_partition_size: 10GB
    keepalived_virtual_router_id: <Value between 0-255>
  kolla_passwords:
    keystone_admin_password: <Keystone Admin Password>
This representative instances.yml file configures a non-default Keystone public port by setting keystone_public_port in the kolla_globals section and KEYSTONE_AUTH_PUBLIC_PORT in the contrail_configuration section to the same value (5005 in this example).
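Because YAML is indentation-sensitive, a formatting mistake in instances.yml can cause the deployment to fail. As an optional sanity check before you deploy, you can parse the file with the Python yaml module, assuming Python and PyYAML are available on the Contrail Command server:

# Parse instances.yml; no output means the file is syntactically valid YAML
python -c "import yaml; yaml.safe_load(open('instances.yml'))"

If the file contains an indentation or syntax error, the parser reports the offending line so you can correct it before deploying the cluster.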