Contrail Getting Started Guide
Importing Contrail Cluster Data using Contrail Command

16-Oct-23

Contrail Release 5.0.1 supports importing Contrail cluster data by using Contrail Command.

Before you begin

The docker-py package is obsolete in Contrail Release 5.0.2. You must remove the docker-py and docker Python packages from all nodes on which you want to install the Contrail Command UI.

pip uninstall docker-py docker

System Requirements

  • A VM or physical server with:

    • 8 vCPUs

    • 64 GB RAM

    • 300 GB of disk space, of which 256 GB is allocated to the /root directory

  • Internet access to and from the physical server, hereafter referred to as the Contrail Command server

  • (Recommended) x86 server with CentOS 7.5 as the base OS to install Contrail Command

Configuration

Perform the following steps to import Contrail cluster data.

  1. Install Docker on the Contrail Command server. The following packages are required to automate the deployment of the Contrail Command software.

    yum install -y yum-utils device-mapper-persistent-data lvm2

    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

    yum install -y docker-ce

    systemctl start docker

  2. Download the contrail-command-deployer container image from hub.juniper.net; it deploys Contrail Command (the contrail_command and contrail_mysql containers). First, log in so that Docker can connect to the private secure registry.

    docker login hub.juniper.net --username <container_registry_username> --password <container_registry_password>

    Then pull the contrail-command-deployer container from the private secure registry.

    docker pull hub.juniper.net/contrail/contrail-command-deployer:<container_tag>

    For example, for container_tag 5.0.1-0.214, use the following command:

    docker pull hub.juniper.net/contrail/contrail-command-deployer:5.0.1-0.214
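
    The image reference in the pull command is simply the registry path joined to the container tag. As a minimal sketch (the registry path matches the CONTAINER_REGISTRY value in the sample instances.yml later in this topic; the tag is the example above):

    ```shell
    # Sketch: compose the deployer image reference from the registry path and a
    # container tag. The tag value 5.0.1-0.214 is the example from this step;
    # substitute the tag for your release.
    REGISTRY="hub.juniper.net/contrail"
    TAG="5.0.1-0.214"
    IMAGE="${REGISTRY}/contrail-command-deployer:${TAG}"
    echo "docker pull ${IMAGE}"
    ```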

  3. Get the instances.yml file that was used to provision the Contrail cluster.
  4. Start the contrail-command-deployer container to deploy the Contrail Command (UI) server and import the Contrail cluster data into it, using the instances.yml file from step 3.
    • To import a Contrail cluster using OpenStack:

      docker run -t --net host -e orchestrator=openstack -e action=import_cluster -v <ABSOLUTE_PATH_TO_COMMAND_SERVERS_FILE>:/command_servers.yml -v <ABSOLUTE_PATH_TO_INSTANCES_FILE>:/instances.yml -d --privileged --name contrail_command_deployer hub.juniper.net/contrail/contrail-command-deployer:<container_tag>

    • To import a Contrail cluster using Kubernetes:

      docker run -t --net host -e orchestrator=kubernetes -e action=import_cluster -v <ABSOLUTE_PATH_TO_COMMAND_SERVERS_FILE>:/command_servers.yml -v <ABSOLUTE_PATH_TO_INSTANCES_FILE>:/instances.yml -d --privileged --name contrail_command_deployer hub.juniper.net/contrail/contrail-command-deployer:<container_tag>

    Note:

    These steps explain how to import Contrail cluster data provisioned using OpenStack and Kubernetes. If your orchestrator is different, use -e orchestrator=<YOUR_ORCHESTRATOR> in the above command. The following orchestrators are supported:

    • OpenStack—Use orchestrator=openstack

    • Kubernetes—Use orchestrator=kubernetes

    • Red Hat OpenShift—Use orchestrator=openshift

    • VMware vCenter—Use orchestrator=vcenter
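
    Because the docker run commands in this step differ only in the orchestrator value, the invocation can be sketched as a small helper that prints the command as a dry run. The helper name, the /root file paths, and the tag below are illustrative assumptions; substitute your own values before running the printed command.

    ```shell
    # Hypothetical helper: print (dry run) the deployer command for a given
    # orchestrator and container tag. The command_servers.yml and instances.yml
    # paths are assumptions; adjust them for your environment.
    build_deploy_cmd() {
      local orch="$1" tag="$2"
      echo "docker run -t --net host" \
           "-e orchestrator=${orch} -e action=import_cluster" \
           "-v /root/command_servers.yml:/command_servers.yml" \
           "-v /root/instances.yml:/instances.yml" \
           "-d --privileged --name contrail_command_deployer" \
           "hub.juniper.net/contrail/contrail-command-deployer:${tag}"
    }
    build_deploy_cmd openstack 5.0.1-0.214
    ```

    After you run the printed command, docker logs -f contrail_command_deployer shows the deployment progress.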

Sample instances.yml File

global_configuration:
  CONTAINER_REGISTRY: hub.juniper.net/contrail
  CONTAINER_REGISTRY_USERNAME: <container_registry_username>
  CONTAINER_REGISTRY_PASSWORD: <container_registry_password>
provider_config:
  bms:
    ssh_pwd: <Pwd>
    ssh_user: root
    ntpserver: <NTP Server>
    domainsuffix: local
instances:
  bms1:
    provider: bms
    ip: <BMS1 IP>
    roles:
      openstack:
  bms2:
    provider: bms
    ip: <BMS2 IP>
    roles:
      openstack:
  bms3:
    provider: bms
    ip: <BMS3 IP>
    roles:
      openstack:
  bms4:
    provider: bms
    ip: <BMS4 IP>
    roles:
      config_database:
      config:
      control:
      analytics_database:
      analytics:
      webui:
  bms5:
    provider: bms
    ip: <BMS5 IP>
    roles:
      config_database:
      config:
      control:
      analytics_database:
      analytics:
      webui:
  bms6:
    provider: bms
    ip: <BMS6 IP>
    roles:
      config_database:
      config:
      control:
      analytics_database:
      analytics:
      webui:
  bms7:
    provider: bms
    ip: <BMS7 IP>
    roles:
      vrouter:
        PHYSICAL_INTERFACE: <Interface name>
        VROUTER_GATEWAY: <Gateway IP>
      openstack_compute:
  bms8:
    provider: bms
    ip: <BMS8 IP>
    roles:
      vrouter:
        # Add following line for TSN Compute Node
        TSN_EVPN_MODE: True
      openstack_compute:
contrail_configuration:
  CLOUD_ORCHESTRATOR: openstack
  CONTRAIL_VERSION: latest or <contrail_container_tag>
  CONTRAIL_CONTAINER_TAG: <contrail_container_tag>-queens
  RABBITMQ_NODE_PORT: 5673
  VROUTER_GATEWAY: <Gateway IP>
  ENCAP_PRIORITY: VXLAN,MPLSoUDP,MPLSoGRE
  AUTH_MODE: keystone
  KEYSTONE_AUTH_HOST: <Internal VIP>
  KEYSTONE_AUTH_URL_VERSION: /v3
  CONTROLLER_NODES: <list of mgmt ip of control nodes>
  CONTROL_NODES: <list of control-data ip of control nodes>
  OPENSTACK_VERSION: queens
kolla_config:
  kolla_globals:
    openstack_release: queens
    kolla_internal_vip_address: <Internal VIP>
    kolla_external_vip_address: <External VIP>
    enable_haproxy: "no"    # "no" by default; set "yes" to enable
    enable_ironic: "no"     # "no" by default; set "yes" to enable
    enable_swift: "no"      # "no" by default; set "yes" to enable
    keepalived_virtual_router_id: <Value between 0-255>
  kolla_passwords:
    keystone_admin_password: <Keystone Admin Password>
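
A quick sanity check before starting the deployer is to confirm that the top-level sections shown in the sample above are all present in your instances.yml. This is a hedged sketch, not part of the product tooling: the file path and the skeleton content written below are assumptions for illustration; point F at your real file instead.

```shell
# Sketch: verify that the required top-level sections from the sample
# instances.yml exist. The skeleton file written here stands in for a real
# instances.yml so the check is self-contained.
F=/tmp/instances_skeleton.yml
cat > "$F" <<'EOF'
global_configuration:
provider_config:
instances:
contrail_configuration:
kolla_config:
EOF
missing=0
for key in global_configuration provider_config instances contrail_configuration kolla_config; do
  grep -q "^${key}:" "$F" || { echo "missing section: ${key}"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "instances.yml sections look complete"
```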