Importing Contrail Cluster Data using Contrail Command

Release: Contrail Networking 5.1

21-May-19

Contrail Networking supports importing Contrail Cluster data into Contrail Command for clusters provisioned using one of the following orchestrators: OpenStack, Kubernetes, OpenShift, VMware vCenter, Mesos, or TripleO.

System Requirements

  • A VM or physical server with:

    • 4 vCPUs

    • 32 GB RAM

    • 100 GB storage

  • Internet access to and from the physical server, which is the Contrail Command server.

  • (Recommended) x86 server with CentOS 7.6 as the base OS to install Contrail Command.

For a list of supported platforms, see Supported Platforms Contrail 5.1.

Before you begin

The docker-py Python module has been superseded by the docker Python module. You must remove both the docker-py and docker Python packages from all nodes on which you plan to install the Contrail Command UI.

pip uninstall docker-py docker
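
A quick way to confirm that neither package remains, assuming pip points at the same Python environment, is to list installed packages; no output means both are gone:

pip freeze | grep -E 'docker(-py)?'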

Configuration

Perform the following steps to import Contrail Cluster data.

  1. Install Docker, which is required to pull the contrail-command-deployer container and automate the deployment of the Contrail Command software.

    yum install -y yum-utils device-mapper-persistent-data lvm2

    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

    yum install -y docker-ce-18.03.1.ce

    systemctl start docker
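
    Optionally, you can also enable the Docker service to start on boot and verify the installed version before proceeding:

    systemctl enable docker
    docker --version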

  2. Download the contrail-command-deployer container image, which deploys Contrail Command (the contrail_command and contrail_psql containers), from hub.juniper.net. Log in so that Docker can connect to the private secure registry.

    docker login hub.juniper.net --username <container_registry_username> --password <container_registry_password>

    Pull the contrail-command-deployer container from the private secure registry.

    docker pull hub.juniper.net/contrail/contrail-command-deployer:<container_tag>

    For example, for container_tag 5.1.0-0.38, use the following command:

    docker pull hub.juniper.net/contrail/contrail-command-deployer:5.1.0-0.38
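
    You can confirm that the image is now available locally by listing it:

    docker images hub.juniper.net/contrail/contrail-command-deployer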

  3. Get the command_servers.yml file that was used to bring up the Contrail Command server, and the configuration file that was used to provision the Contrail Cluster.
    Note

    For the OpenShift orchestrator, use the ose-install file instead of the instances.yml file.

  4. Start the contrail-command-deployer container to deploy the Contrail Command (UI) server and import the Contrail Cluster data into it, using the cluster configuration file you gathered in the previous step.
    • Import a Contrail Cluster provisioned using a supported orchestrator (OpenStack, Kubernetes, OpenShift, vCenter, or Mesos).

      docker run -td --net host -e orchestrator=<YOUR_ORCHESTRATOR> -e action=import_cluster -v <ABSOLUTE_PATH_TO_COMMAND_SERVERS_FILE>:/command_servers.yml -v <ABSOLUTE_PATH_TO_CLUSTER_CONFIG_FILE>:/instances.yml --privileged --name contrail_command_deployer hub.juniper.net/contrail/contrail-command-deployer:<container_tag>

      Replace <YOUR_ORCHESTRATOR> in the command with one of the following values; a fully substituted example command follows this list.

      • For OpenStack, use openstack.

      • For Kubernetes, use kubernetes.

      • For Red Hat OpenShift, use openshift.

        Note

        You must use the ose-install file instead of the instances.yml file.

      • For VMware vCenter, use vcenter.

      • For Mesos, use mesos.
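
      For example, to import a cluster provisioned with OpenStack, assuming both files are in /root (hypothetical paths) and container tag 5.1.0-0.38:

      docker run -td --net host -e orchestrator=openstack -e action=import_cluster -v /root/command_servers.yml:/command_servers.yml -v /root/instances.yml:/instances.yml --privileged --name contrail_command_deployer hub.juniper.net/contrail/contrail-command-deployer:5.1.0-0.38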

    • Import a Contrail Cluster provisioned using the OSP Director/TripleO life-cycle manager for Red Hat OpenStack orchestration. A fully substituted example command follows these steps.

      Prerequisites:

      • IP_ADDRESS_OF_UNDERCLOUD_NODE is the IP address of an Undercloud node that must be reachable from the contrail-command-deployer node. You must be able to SSH to the Undercloud node from the contrail-command-deployer node.

      • The external VIP is an Overcloud VIP on which the OpenStack and Contrail public endpoints are available. The external VIP must be reachable from the Contrail Command node.

      • The DNS host name for the Overcloud external VIP must be resolvable on the Contrail Command node. Add the entry to the /etc/hosts file.

      docker run -td --net host -e orchestrator=tripleo -e action=import_cluster -e undercloud=<IP_ADDRESS_OF_UNDERCLOUD_NODE> -e undercloud_password=<STACK_USER_PASSWORD_FOR_SSH_TO_UNDERCLOUD> -v <ABSOLUTE_PATH_TO_COMMAND_SERVERS_FILE>:/command_servers.yml --privileged --name contrail_command_deployer hub.juniper.net/contrail/contrail-command-deployer:<container_tag>

      • The Contrail Command server must have access to the external VIP network to communicate with the configured endpoints.

        Run the following commands (a substituted example follows):

        ovs-vsctl add-port br0 vlan<externalNetworkVlanID> tag=<externalNetworkVlanID> -- set interface vlan<externalNetworkVlanID> type=internal
        ip link set dev vlan<externalNetworkVlanID> up
        ip addr add <externalNetworkGatewayIP>/<subnetMask> dev vlan<externalNetworkVlanID>
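
        For example, with a hypothetical external VLAN ID of 10 and a gateway address of 10.1.1.1/24:

        ovs-vsctl add-port br0 vlan10 tag=10 -- set interface vlan10 type=internal
        ip link set dev vlan10 up
        ip addr add 10.1.1.1/24 dev vlan10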
      • If you used a domain name for the external VIP, add the entry to the /etc/hosts file inside the contrail_command container.

        Run the following commands:

        docker exec -it contrail_command bash
        vi /etc/hosts
        <externalVIP> <externalVIPDomainName>
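
      A fully substituted TripleO example, assuming a hypothetical Undercloud node at 192.0.2.10, the command_servers.yml file in /root, and container tag 5.1.0-0.38:

      docker run -td --net host -e orchestrator=tripleo -e action=import_cluster -e undercloud=192.0.2.10 -e undercloud_password=<STACK_USER_PASSWORD_FOR_SSH_TO_UNDERCLOUD> -v /root/command_servers.yml:/command_servers.yml --privileged --name contrail_command_deployer hub.juniper.net/contrail/contrail-command-deployer:5.1.0-0.38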

Sample instances.yml file

global_configuration:
  CONTAINER_REGISTRY: hub.juniper.net/contrail
  CONTAINER_REGISTRY_USERNAME: <container_registry_username>
  CONTAINER_REGISTRY_PASSWORD: <container_registry_password>
provider_config:
  bms:
    ssh_pwd: <Pwd>
    ssh_user: root
    ntpserver: <NTP Server>
    domainsuffix: local
instances:
  bms1:
    provider: bms
    ip: <BMS1 IP>
    roles:
      openstack:
  bms2:
    provider: bms
    ip: <BMS2 IP>
    roles:
      openstack:
  bms3:
    provider: bms
    ip: <BMS3 IP>
    roles:
      openstack:
  bms4:
    provider: bms
    ip: <BMS4 IP>
    roles:
      config_database:
      config:
      control:
      analytics_database:
      analytics:
      webui:
  bms5:
    provider: bms
    ip: <BMS5 IP>
    roles:
      config_database:
      config:
      control:
      analytics_database:
      analytics:
      webui:
  bms6:
    provider: bms
    ip: <BMS6 IP>
    roles:
      config_database:
      config:
      control:
      analytics_database:
      analytics:
      webui:
  bms7:
    provider: bms
    ip: <BMS7 IP>
    roles:
      vrouter:
        PHYSICAL_INTERFACE: <Interface name>
        VROUTER_GATEWAY: <Gateway IP>
      openstack_compute:
  bms8:
    provider: bms
    ip: <BMS8 IP>
    roles:
      vrouter:
        # Add following line for TSN Compute Node
        TSN_EVPN_MODE: True
      openstack_compute:
contrail_configuration:
  CLOUD_ORCHESTRATOR: openstack
  CONTRAIL_VERSION: <contrail_container_tag>   # or: latest
  CONTRAIL_CONTAINER_TAG: <contrail_container_tag>-queens
  RABBITMQ_NODE_PORT: 5673
  VROUTER_GATEWAY: <Gateway IP>
  ENCAP_PRIORITY: VXLAN,MPLSoUDP,MPLSoGRE
  AUTH_MODE: keystone
  KEYSTONE_AUTH_HOST: <Internal VIP>
  KEYSTONE_AUTH_URL_VERSION: /v3
  CONTROLLER_NODES: <list of mgmt IPs of control nodes>
  CONTROL_NODES: <list of control-data ip of control nodes>
  OPENSTACK_VERSION: queens
kolla_config:
  kolla_globals:
    openstack_release: queens
    kolla_internal_vip_address: <Internal VIP>
    kolla_external_vip_address: <External VIP>
    enable_haproxy: "no"      # "no" by default; set to "yes" to enable
    enable_ironic: "no"       # "no" by default; set to "yes" to enable
    enable_swift: "no"        # "no" by default; set to "yes" to enable
    keepalived_virtual_router_id: <Value between 0-255>
  kolla_passwords:
    keystone_admin_password: <Keystone Admin Password>
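
Errors in this file are a common cause of import failures. Before you start the deployer, you can check that your edited instances.yml parses as valid YAML; one way, assuming Python with the PyYAML module is available on the node:

python -c "import yaml; yaml.safe_load(open('instances.yml'))" && echo "instances.yml parses cleanly"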