
Upgrading Contrail In-Service Software from Releases 3.2 and 4.1 to 5.0.x using Ansible Deployer

Release: Contrail Networking 5.1
24-Jun-19

Contrail In-Service Software Upgrade (ISSU) Overview

If your installed version is Contrail Release 3.2 or later, you can perform an in-service software upgrade (ISSU) to Contrail Release 5.0.x using the Ansible deployer. During the ISSU, the Contrail controller cluster is upgraded side by side with a parallel setup, and the compute nodes are upgraded in place.

Note

We recommend that you take snapshots of your current system before you proceed with the upgrade process.

The procedure for performing the ISSU using the Contrail ansible deployer is similar to previous Contrail ISSU procedures.

Note

This Contrail ansible deployer ISSU procedure does not include steps for upgrading OpenStack. If an OpenStack version upgrade is required, it should be performed using applicable OpenStack procedures.

In summary, the ISSU process consists of the following parts, in sequence:

  1. Deploy the new cluster.
  2. Synchronize the new and old clusters.
  3. Upgrade the compute nodes.
  4. Finalize the synchronization and complete the upgrades.

Prerequisites

The following prerequisites are required to use the Contrail ansible deployer ISSU procedure:

  • A previous version of Contrail installed, no earlier than Release 3.2.

  • The cluster includes OpenStack controller and compute nodes, and Contrail nodes.

  • OpenStack must have been installed from packages.

  • Contrail and OpenStack should be installed on different nodes.

Note

Upgrading compute nodes running Ubuntu 14.04 is not supported. Upgrade the compute nodes to Ubuntu 16.04 first.
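For example, you can confirm the release on each compute node before you begin (a quick check using standard Ubuntu tooling):

# should print 16.04 on a supported compute node
lsb_release -rs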

Preparing the Contrail System for the Ansible Deployer ISSU Procedure

In summary, these are the general steps for the system preparation phase of the Contrail ansible deployer ISSU procedure:

  1. Deploy the 5.0.x version of Contrail using the Contrail ansible deployer, but make sure to include only the following Contrail controller services:
    • Config

    • Control

    • Analytics

    • Databases

    • Any additional support services like rmq, kafka, and zookeeper. (The vrouter service will be deployed later on the old compute nodes.)

    Note

    You must provide keystone authorization information for setup.

  2. After deployment is finished, you can log into the Contrail web interface to verify that it works.
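    For a quick check without a browser, you can probe the web UI from the provisioning host (a minimal sketch; it assumes the default Contrail web UI HTTPS port 8143 and ignores the self-signed certificate):

    # expect an HTTP status code (for example 200 or a redirect) from the new controller's web UI
    curl -k -s -o /dev/null -w "%{http_code}\n" https://<new controller IP>:8143/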

The detailed steps for deploying the new controller using the ansible deployer are as follows:

  1. To deploy the new controller, download contrail-ansible-deployer-release-tag.tgz onto your provisioning host from Juniper Networks.
  2. Create the new controller's config/instances.yaml file. It appears as follows; substitute actual values for the placeholders shown in the example:
    provider_config:
     bms:
       domainsuffix: local
       ssh_user: user
       ssh_pwd: password
    instances:
     server1:
      ip: controller 1 ip
      provider: bms
      roles:
       analytics: null
       analytics_database: null
       config: null
       config_database: null
       control: null
       webui: null
    contrail_configuration:
     CONTROLLER_NODES: controller ip-s from api/mgmt network
     CONTROL_NODES: controller ip-s from ctrl/data network
     AUTH_MODE: keystone
     KEYSTONE_AUTH_ADMIN_TENANT: old controller's admin's tenant
     KEYSTONE_AUTH_ADMIN_USER: old controller's admin's user name
     KEYSTONE_AUTH_ADMIN_PASSWORD: password for admin user
     KEYSTONE_AUTH_HOST: keystone host/ip of old controller
     KEYSTONE_AUTH_URL_VERSION: "/v3"
     KEYSTONE_AUTH_USER_DOMAIN_NAME: user's domain in case of keystone v3
     KEYSTONE_AUTH_PROJECT_DOMAIN_NAME: project's domain in case of keystone v3
     RABBITMQ_NODE_PORT: 5673
     IPFABRIC_SERVICE_HOST: metadata service host/ip of old controller
     AAA_MODE: cloud-admin
     METADATA_PROXY_SECRET: secret phrase that is used in old controller
    kolla_config:
     kolla_globals:
       kolla_internal_vip_address: keystone host/ip of old controller
       kolla_external_vip_address: keystone host/ip of old controller
    
  3. Finally, run the ansible playbooks to deploy the new controller.
    ansible-playbook -v -e orchestrator=none -i inventory/ playbooks/configure_instances.yml
    ansible-playbook -v -e orchestrator=openstack -i inventory/ playbooks/install_contrail.yml
    

After successful completion of these commands, the new controller should be up and running.
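For example, you can verify the state of each new controller node (a basic check; contrail-status is installed on nodes deployed by the ansible deployer):

# list the running Contrail containers on the new controller node
docker ps --format "table {{.Names}}\t{{.Status}}"
# summary of all Contrail services; components should report active
contrail-status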

Provisioning Control Nodes and Performing Synchronization Steps

In summary, these are the general steps for the node provisioning and synchronization phase of the Contrail ansible deployer ISSU procedure:

  1. Provision new control nodes in the old cluster and old control nodes in the new cluster.
  2. Stop the following containers in the new cluster on all nodes:
    • contrail-device-manager

    • contrail-schema-transformer

    • contrail-svcmonitor

  3. Switch the new controller into maintenance mode to prevent provisioning computes in the new cluster.
  4. Prepare the config file for the ISSU.
  5. Run the pre-sync script from the ISSU package.
  6. Run the run-sync script from the ISSU package in background mode.

The detailed steps to provision the control nodes and perform the synchronization are as follows:

  1. Pair the old control nodes in the new cluster. We recommend running these commands from any config-api container. First, capture the config-api container ID:
    config_api_image=`docker ps | awk '/config-api/{print $1}' | head -1`
  2. Run the following command for each old control node, substituting actual values where indicated:
    docker exec -it $config_api_image /bin/bash -c "LOG_LEVEL=SYS_NOTICE source /common.sh ; \
    python /opt/contrail/utils/provision_control.py --host_name <hostname of old control node> \
    --host_ip <IP of old control node> --api_server_ip $(hostname -i) \
    --api_server_port 8082 --oper add --router_asn 64512 --ibgp_auto_mesh \$AUTH_PARAMS"
  3. Pair the new control nodes in the old cluster with similar commands (the specific syntax depends on the deployment method of the old cluster), again substituting actual values where indicated.
    python /opt/contrail/utils/provision_control.py --host_name <new controller hostname> \
     --host_ip <new controller IP> --api_server_ip <old api-server IP/VIP> \
     --api_server_port 8082 --oper add --admin_user admin --admin_password <password> \
     --admin_tenant_name admin --router_asn 64512 --ibgp_auto_mesh
  4. Stop all the containers for contrail-device-manager, contrail-schema-transformer, and contrail-svcmonitor in the new cluster on all controller nodes.
    docker stop config_devicemgr_1
    docker stop config_schema_1
    docker stop config_svcmonitor_1
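    To confirm the three containers are stopped, list the running containers; the command should return no output (a quick check based on the container names above):

    docker ps | grep -E 'devicemgr|schema|svcmonitor'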
    

Perform the next steps from any new controller node. Next, prepare the configuration for the ISSU run. (For now, only manual preparation is available.)

Note

In various deployments, the old Cassandra may use port 9160 or 9161. You can find the configuration details for the old services on any old controller node, in the file /etc/contrail/contrail-api.conf.
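For example, you can collect these values from the old config file on any old controller node (a simple lookup; adjust the path if your deployment stores the file elsewhere):

# show the old RabbitMQ, Cassandra, and ZooKeeper settings
grep -iE 'rabbit|cassandra|zookeeper|zk_server' /etc/contrail/contrail-api.conf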

The ISSU configuration (stored locally as contrail-issu.conf in the commands that follow) appears as follows:

[DEFAULTS]
# details about old rabbit
old_rabbit_user = contrail
old_rabbit_password = ab86245f4f3640a29b700def9e194f72
old_rabbit_q_name = vnc-config.issu-queue
old_rabbit_vhost = contrail
old_rabbit_port = 5672
old_rabbit_address_list = ip-addresses
# details about new rabbit
# new_rabbit_user = rabbitmq
# new_rabbit_password = password
# new_rabbit_ha_mode =
new_rabbit_q_name = vnc-config.issu-queue
new_rabbit_vhost = /
new_rabbit_port = 5673
new_rabbit_address_list = ip-addresses
# details about other old/new services
old_cassandra_user = controller
old_cassandra_password = 04dc0540b796492fad6f7cbdcfb18762
old_cassandra_address_list = ip-address:9161
old_zookeeper_address_list = ip-address:2181
new_cassandra_address_list = ip-address:9161 ip-address:9161 ip-address:9161
new_zookeeper_address_list = ip-address:2181
# details about new controller nodes
new_api_info = {"ip-address": [("root"), ("password")], "ip-address": [("root"), ("password")], "ip-address": [("root"), ("password")]}
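Before running the synchronization, it can help to confirm that the new controller can reach the endpoints listed in this file (a hedged sketch; it assumes netcat is installed and uses the ports from the example above):

# check reachability of the old and new RabbitMQ, Cassandra, and ZooKeeper endpoints
for endpoint in <old-rabbit-ip>:5672 <new-rabbit-ip>:5673 \
                <old-cassandra-ip>:9161 <new-cassandra-ip>:9161 \
                <old-zookeeper-ip>:2181 <new-zookeeper-ip>:2181; do
  nc -zv "${endpoint%:*}" "${endpoint#*:}"
done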

  1. Detect the config-api image ID.
    image_id=`docker images | awk '/config-api/{print $3}' | head -1`
  2. Run the pre-synchronization.
    docker run --rm -it --network host -v $(pwd)/contrail-issu.conf:/etc/contrail/contrail-issu.conf \
     --entrypoint /bin/bash -v /root/.ssh:/root/.ssh $image_id -c "/usr/bin/contrail-issu-pre-sync -c /etc/contrail/contrail-issu.conf"
  3. Run the run-sync process in background mode.
    docker run --rm --detach -it --network host -v $(pwd)/contrail-issu.conf:/etc/contrail/contrail-issu.conf \
     --entrypoint /bin/bash -v /root/.ssh:/root/.ssh --name issu-run-sync $image_id \
     -c "/usr/bin/contrail-issu-run-sync -c /etc/contrail/contrail-issu.conf"
  4. Check the logs of the run-sync process. To do this, open the run-sync container.
    docker exec -it issu-run-sync /bin/bash
    cat /var/log/contrail/issu_contrail_run_sync.log
  5. Stop and remove the run-sync process after all compute nodes are upgraded.
    docker rm -f issu-run-sync

Transferring the Compute Nodes into the New Cluster

In summary, these are the general steps for the node transfer phase of the Contrail ansible deployer ISSU procedure:

  1. Select the compute node(s) for transferring into the new cluster.
  2. Move all workloads from the node(s) to other compute nodes (see the migration sketch after this list). You can also terminate workloads, as appropriate.
  3. For Contrail Release 3.x, remove Contrail from the node(s) as follows:
    • Stop the vrouter-agent service.

    • Remove the vhost0 interface.

    • Switch the physical interface down, then up.

    • Remove the vrouter.ko module from the kernel.

  4. For Contrail Release 4.x, remove Contrail from the node(s) as follows:
    • Stop the agent container.

    • Restore the physical interface.

  5. Add the required node(s) to instances.yaml with the roles vrouter and openstack_compute_legacy.
  6. Run the Contrail ansible deployer to deploy the new vrouter and to configure the old compute service.
  7. After the run, all transferred compute nodes will have:
    • The collector setting pointed to the new Contrail cluster

    • The Control/DNS nodes pointed to the new Contrail cluster

    • The config-api setting in vnc_api_lib.ini pointed to the new Contrail cluster

  8. (Optional) Run a test workload on transferred nodes to ensure the new vrouter-agent works correctly.
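The following is a hedged sketch of moving workloads off a compute node before transferring it (step 2 in the list above); it assumes the nova client is available and that live migration is configured in your OpenStack environment:

# list the instances running on the compute node to be transferred
nova list --host <compute hostname> --all-tenants
# live-migrate a single instance to another compute node
nova live-migration <instance ID> <target compute hostname>
# or live-migrate every instance off the host in one step
nova host-evacuate-live <compute hostname>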

Follow these steps to roll back a compute node, if needed:

  1. Move the workload from the compute node.
  2. Stop the Contrail Release 5.0.x containers (see the sketch after this list).
  3. Ensure the network configuration has been successfully reverted.
  4. Deploy the previous version of Contrail using the deployment method for that version.
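For rollback step 2, a minimal sketch for stopping the Contrail Release 5.0.x containers on the compute node (it assumes the vrouter containers created by the ansible deployer are named with a vrouter prefix, as in a default deployment):

# stop and remove the 5.0.x vrouter containers
docker ps --format '{{.Names}}' | grep '^vrouter' | xargs -r docker rm -f
# remove the vrouter kernel module
rmmod vrouter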

The detailed steps for transferring compute nodes into the new cluster are as follows:

Note

After moving the workload from the chosen compute nodes, remove the previous version of the contrail-agent. For example, for Ubuntu 16.04 with the vrouter-agent installed directly on the host, these are the steps to remove the previous contrail-agent:

# stop services
systemctl stop contrail-vrouter-nodemgr
systemctl stop contrail-vrouter-agent
# remove packages
apt-get purge -y contrail*
# restore original interfaces definition
cd /etc/network/interfaces.d/
cp 50-cloud-init.cfg.save 50-cloud-init.cfg
rm vrouter.cfg
# restart networking
systemctl restart networking.service
# remove old kernel module
rmmod vrouter
# restore the default route if needed (adjust the gateway and interface for your network)
ip route add 0.0.0.0/0 via 10.0.10.1 dev ens3
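For a Contrail Release 4.x compute node, where the agent runs in a container, the removal might look like the following sketch (the container name contrail-agent is an assumption; check docker ps for the actual name in your deployment):

# stop and remove the 4.x agent container (name may differ)
docker rm -f contrail-agent
# restore the original interface definition and restart networking,
# as in the Ubuntu 16.04 example above
cd /etc/network/interfaces.d/
cp 50-cloud-init.cfg.save 50-cloud-init.cfg
rm vrouter.cfg
systemctl restart networking.service
# remove the old kernel module
rmmod vrouter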

  1. The new instance should be added to instances.yaml with two roles: vrouter and openstack_compute_legacy. To avoid reprovisioning the compute node, set the maintenance mode to TRUE. For example:
    instances:
     server10:
      ip: compute 10 ip
      provider: bms
      roles:
       vrouter:
        MAINTENANCE_MODE: TRUE
        VROUTER_ENCRYPTION: FALSE
       openstack_compute_legacy: null
    
  2. Run the ansible playbooks.
    ansible-playbook -v -e orchestrator=none -e config_file=/root/contrail-ansible-deployer/instances.yaml playbooks/configure_instances.yml
    ansible-playbook -v -e orchestrator=openstack -e config_file=/root/contrail-ansible-deployer/instances.yaml playbooks/install_contrail.yml
    
    
  3. The contrail-status output for the compute node appears as follows:
    vrouter kernel module is PRESENT
    == Contrail vrouter ==
    nodemgr: active
    agent: initializing (No Configuration for self)
    
  4. Restart contrail-control on all new controller nodes after the upgrade is complete:
    docker restart control_control_1
  5. Check the status of the new compute nodes by running contrail-status on them. All components should now be active. You can also verify the transferred nodes by creating availability zones/aggregates with the new compute nodes and running test workloads to confirm they operate correctly, as in the example below.
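    A hedged example of creating an aggregate with an availability zone for the transferred nodes and launching a test workload (the aggregate, zone, image, flavor, and network names are placeholders):

    # group the transferred compute nodes into a test availability zone
    openstack aggregate create --zone issu-test issu-test-aggregate
    openstack aggregate add host issu-test-aggregate <compute hostname>
    # boot a test instance on the transferred node
    openstack server create --image <image> --flavor <flavor> --network <network> \
     --availability-zone issu-test test-vm-1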

Finalizing the Contrail Ansible Deployer ISSU Process

Finalize the Contrail ansible deployer ISSU as follows:

  1. Stop the issu-run-sync container.
    docker rm -f issu-run-sync
  2. Run the post-synchronization commands.
    docker run --rm -it --network host -v $(pwd)/contrail-issu.conf:/etc/contrail/contrail-issu.conf --entrypoint /bin/bash -v /root/.ssh:/root/.ssh --name issu-run-sync $image_id -c "/usr/bin/contrail-issu-post-sync -c /etc/contrail/contrail-issu.conf"
    docker run --rm -it --network host -v $(pwd)/contrail-issu.conf:/etc/contrail/contrail-issu.conf --entrypoint /bin/bash -v /root/.ssh:/root/.ssh --name issu-run-sync $image_id -c "/usr/bin/contrail-issu-zk-sync -c /etc/contrail/contrail-issu.conf"
    
  3. Disengage maintenance mode and start all previously stopped containers. To do this, set the entry MAINTENANCE_MODE in instances.yaml to FALSE, then run the following command from the deployment node:
    ansible-playbook -v -e orchestrator=openstack -i inventory/ playbooks/install_contrail.yml
  4. Clean up and remove the old Contrail controllers. Use the provision_issu.py script, called from the config-api container with the issu.conf configuration file shown below. Replace the credential variables and API server IP with appropriate values as indicated.
    [DEFAULTS]
    db_host_info={"ip-address": "node-ip-address", "ip-address": "node-ip-address", "ip-address": "node-ip-address"}
    config_host_info={"ip-address": "node-ip-address", "ip-address": "node-ip-address", "ip-address": "node-ip-address"}
    analytics_host_info={"ip-address": "node-ip-address", "ip-address": "node-ip-address", "ip-address": "node-ip-address"}
    control_host_info={"ip-address": "node-ip-address", "ip-address": "node-ip-address", "ip-address": "node-ip-address"}
    admin_password = <admin password>
    admin_tenant_name = <admin tenant>
    admin_user = <admin username>
    api_server_ip= <any IP of new config-api controller>
    api_server_port=8082
    
  5. Run the following commands from any controller node.
    Note

    All *host_info parameters should contain the list of new hosts.

    docker cp issu.conf config_api_1:issu.conf
    docker exec -it config_api_1 python /opt/contrail/utils/provision_issu.py -c issu.conf
    
  6. The old controller servers can be cleaned up if no other services are present on them.
  7. Edit all neutron-api configurations so that the api_server_ip parameter points to the list of new config-api IP addresses. Locate ContrailPlugin.ini (or whichever file contains this parameter) and update the IP addresses accordingly.
  8. The heat configuration needs the same changes. Locate the parameter [clients_contrail]/api_server and change it to point to the list of the new config-api IP addresses.
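For example, the relevant settings might look like the following after the change (the section names and list delimiters shown are assumptions; check the exact format in your existing files, and note that the IP addresses are placeholders):

# ContrailPlugin.ini (neutron-api)
[APISERVER]
api_server_ip = <new config-api IP 1>,<new config-api IP 2>,<new config-api IP 3>

# heat configuration
[clients_contrail]
api_server = <new config-api IP 1> <new config-api IP 2> <new config-api IP 3>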