Upgrading Contrail In-Service Software from Releases 3.2 and 4.1 to 5.0.x using Helm Deployer
Contrail In-Service Software Upgrade (ISSU) Overview
If your installed version is Contrail Release 3.2 or higher, you can perform an in-service software upgrade (ISSU) to upgrade to Contrail Release 5.0.x using the Helm deployer. In performing the ISSU, the Contrail controller cluster is upgraded side-by-side with a parallel setup, and the compute nodes are upgraded in place.
We recommend that you take snapshots of your current system before you proceed with the upgrade process.
The procedure for performing the ISSU using the Contrail Helm deployer is similar to the ISSU procedures used in previous releases.
This Contrail Helm deployer ISSU procedure does not include steps for upgrading OpenStack. If an OpenStack version upgrade is required, it should be performed using applicable OpenStack procedures.
In summary, the ISSU process consists of the following parts, in sequence:
- Deploy the new cluster.
- Synchronize the new and old clusters.
- Upgrade the compute nodes.
- Finalize the synchronization and complete the upgrades.
Prerequisites
The following prerequisites are required to use the Contrail Helm deployer ISSU procedure:
- A previous version of Contrail is installed, no earlier than Release 3.2.
- The cluster includes OpenStack controller and compute nodes, as well as Contrail nodes.
- OpenStack must have been installed from packages.
- Contrail and OpenStack should be installed on different nodes.
- Upgrading compute nodes that run Ubuntu 14.04 is not supported. Upgrade those compute nodes to Ubuntu 16.04 first (a quick version check is sketched below).
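As a quick pre-check, you can confirm the release on each compute node before starting. This is a minimal sketch, assuming SSH access and that lsb_release is installed; the compute-node hostnames are placeholders:
# Hypothetical compute-node hostnames; substitute your own.
for node in compute-1 compute-2; do
    echo -n "$node: "; ssh $node 'lsb_release -rs'   # expect 16.04 before proceeding
done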
Preparing the Contrail System for the Helm Deployer ISSU Procedure
In summary, these are the general steps for the system preparation phase of the Contrail Helm deployer ISSU procedure:
- Deploy the 5.0.x version of Contrail using the Contrail Helm deployer, but make sure to include only the following Contrail controller services:
Config
Control
Analytics
Databases
Any additional support services like rmq, kafka, and zookeeper. (The vrouter service will be deployed later on the old compute nodes.)
Note: You must provide keystone authorization information for setup.
- After deployment is finished, you can log in to the Contrail web interface to verify that it works (a minimal verification sketch follows below).
Detailed instructions for deploying the new cloud using Helm are provided in Installing Contrail Networking for Kubernetes using Helm.
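As a quick post-deployment check, you can confirm that the new controller pods are running and that the web interface answers. This is a minimal sketch, assuming the contrail namespace and the default web UI port 8143; adjust both for your deployment:
kubectl get pods -n contrail                # controller pods should reach the Running state
curl -k https://<new-controller-ip>:8143/   # the web interface should respond; log in from a browser to confirm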
Provisioning Control Nodes and Performing Synchronization Steps
In summary, these are the general steps for the node provisioning and synchronization phase of the Contrail Helm deployer ISSU procedure:
- Provision new control nodes in the old cluster and old control nodes in the new cluster.
- Stop the following containers in the new cluster on all nodes:
contrail-device-manager
contrail-schema-transformer
contrail-svcmonitor
- Switch the new cloud into maintenance mode to prevent the provisioning of compute nodes in the new cluster.
- Prepare the config file for the ISSU.
- Run the pre-sync script from the ISSU package.
- Run the run-sync script from the ISSU package in background mode.
The detailed steps to provision the control nodes and perform the synchronization are as follows:
- Pair the old control nodes in the new cluster. It is recommended to run this from any config-api container:
config_api_cid=`docker ps | awk '/config-api/{print $1}' | head -1`
- Run this command for each old control node, substituting actual values where indicated:
docker exec -it $config_api_cid /bin/bash -c "LOG_LEVEL=SYS_NOTICE source /common.sh ; python /opt/contrail/utils/provision_control.py --host_name <hostname of old control node> --host_ip <IP of old control node> --api_server_ip $(hostname -i) --api_server_port 8082 --oper add --router_asn 64512 --ibgp_auto_mesh \$AUTH_PARAMS"
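If you have several old control nodes, you can wrap the command above in a loop. This is a sketch only; the hostname:IP pairs are placeholders for your environment:
# Hypothetical hostname:IP pairs for the old control nodes; substitute real values.
for entry in old-ctrl-1:10.0.0.11 old-ctrl-2:10.0.0.12 old-ctrl-3:10.0.0.13; do
    name=${entry%%:*}; ip=${entry##*:}
    docker exec -it $config_api_cid /bin/bash -c "LOG_LEVEL=SYS_NOTICE source /common.sh ; python /opt/contrail/utils/provision_control.py --host_name $name --host_ip $ip --api_server_ip $(hostname -i) --api_server_port 8082 --oper add --router_asn 64512 --ibgp_auto_mesh \$AUTH_PARAMS"
done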
- Pair the new control nodes in the old cluster with similar commands (the specific syntax depends on the deployment method of the old cluster), again substituting actual values where indicated:
python /opt/contrail/utils/provision_control.py --host_name <new controller hostname> --host_ip <new controller IP> --api_server_ip <old api-server IP/VIP> --api_server_port 8082 --oper add --admin_user admin --admin_password <password> --admin_tenant_name admin --router_asn 64512 --ibgp_auto_mesh
- Stop (pause) all the containers for contrail-device-manager, contrail-schema-transformer, and contrail-svcmonitor in the new cluster on all controller nodes:
docker ps | grep config-devicemgr | awk '{print $1}' | xargs docker pause
docker ps | grep config-schema | awk '{print $1}' | xargs docker pause
docker ps | grep config-svcmonitor | awk '{print $1}' | xargs docker pause
Perform the next steps from any new Contrail controller node. Then prepare the configuration for the ISSU run. (For now, only manual preparation is available.)
In various deployments, the old Cassandra may use port 9160 or 9161. You can learn the configuration details for the old services on any old controller node, in the file /etc/contrail/contrail-api.conf (a grep sketch for pulling these values out follows the sample configuration below).
The prepared configuration appears as follows and can be stored locally, for example as contrail-issu.conf, which the commands below mount into the container:
[DEFAULTS]
# details about old rabbit
old_rabbit_user = contrail
old_rabbit_password = ab86245f4f3640a29b700def9e194f72
old_rabbit_q_name = vnc-config.issu-queue
old_rabbit_vhost = contrail
old_rabbit_port = 5672
old_rabbit_address_list = ip-address
# details about new rabbit
# new_rabbit_user = rabbitmq
# new_rabbit_password = password
# new_rabbit_ha_mode =
new_rabbit_q_name = vnc-config.issu-queue
new_rabbit_vhost = /
new_rabbit_port = 5673
new_rabbit_address_list = rabbitmq.contrail
# details about other old/new services
old_cassandra_user = controller
old_cassandra_password = 04dc0540b796492fad6f7cbdcfb18762
old_cassandra_address_list = ip-address:9161
old_zookeeper_address_list = ip-address:2181
new_cassandra_address_list = ip-address:9161 ip-address:9161 ip-address:9161
new_zookeeper_address_list = ip-address:2181
# details about new controller nodes
new_api_info = {"ip-address": [("root"), ("password")], "ip-address": [("root"), ("password")], "ip-address": [("root"), ("password")]}
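One way to collect the old-cluster values referenced above is to grep the old API server configuration on any old controller node. This is a sketch that assumes the standard file layout; key names can vary between deployments:
# Show the RabbitMQ, Cassandra, and ZooKeeper settings of the old cluster
grep -iE 'rabbit|cassandra|zk|zookeeper' /etc/contrail/contrail-api.conf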
- Detect the config-api image ID:
image_id=`docker images | awk '/config-api/{print $3}' | head -1`
- Run the pre-synchronization.
docker run --rm -it --network host -v $(pwd)/contrail-issu.conf:/etc/contrail/contrail-issu.conf --entrypoint /bin/bash -v /root/.ssh:/root/.ssh $image_id -c "/usr/bin/contrail-issu-pre-sync -c /etc/contrail/contrail-issu.conf"
- Run the run-synchronization.
docker run --rm --detach -it --network host -v $(pwd)/contrail-issu.conf:/etc/contrail/contrail-issu.conf --entrypoint /bin/bash -v /root/.ssh:/root/.ssh --name issu-run-sync $image_id -c "/usr/bin/contrail-issu-run-sync -c /etc/contrail/contrail-issu.conf"
- Check the logs of the run-sync process. To do this, open the run-sync container:
docker exec -it issu-run-sync /bin/bash
cat /var/log/contrail/issu_contrail_run_sync.log
- Stop and remove the run-sync process after all compute nodes are upgraded.
docker rm -f issu-run-sync
Transferring the Compute Nodes into the New Cluster
In summary, these are the general steps for the node transfer phase of the Contrail Helm deployer ISSU procedure:
- Select the compute node(s) for transferring into the new cluster.
- Move all workloads from the node(s) to other compute nodes. You also have the option to terminate workloads as appropriate.
- For Contrail Release 3.x, remove Contrail from the node(s) as follows:
Stop the vrouter-agent service.
Remove the vhost0 interface.
Switch the physical interface down, then up.
Remove the vrouter.ko module from the kernel.
- For Contrail Release 4.x, remove Contrail from the node(s) as follows:
Stop the agent container.
Restore the physical interface.
- Add the required node(s) to instances.yml with the roles vrouter and openstack_legacy_compute.
- Run the Contrail Helm deployer to deploy the new vrouter and to configure the old compute service.
- All new compute nodes will have:
The collector setting pointed to the new Contrail cluster
The Control/DNS nodes pointed to the new Contrail cluster
The config-api setting in vnc_api_lib.ini pointed to the new Contrail cluster (a sample sketch follows this list)
- (Optional) Run a test workload on transferred nodes to ensure the new vrouter-agent works correctly.
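For reference, the config-api setting mentioned in the list above typically lives in /etc/contrail/vnc_api_lib.ini on the compute node. The following is only a sketch with a placeholder address; the exact keys may differ by release:
[global]
WEB_SERVER = <new-config-api-ip>   # point to the new Contrail config-api
WEB_PORT = 8082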
Follow these steps to roll back a compute node, if needed:
- Move the workload from the compute node.
- Stop the Contrail Release 5.0.x containers.
- Ensure the network configuration has been successfully reverted.
- Deploy the previous version of Contrail using the deployment method for that version.
The detailed steps for transferring compute nodes into the new cluster are as follows:
After moving workload from the chosen compute nodes, you should remove the previous version of contrail-agent. For example, for Ubuntu 16.04 and vrouter-agent installed directly on the host, these would be the steps to remove the previous contrail-agent:
# stop services
systemctl stop contrail-vrouter-nodemgr
systemctl stop contrail-vrouter-agent
# remove packages
apt-get purge -y contrail*
# restore original interfaces definition
cd /etc/network/interfaces.d/
cp 50-cloud-init.cfg.save 50-cloud-init.cfg
rm vrouter.cfg
# restart networking
systemctl restart networking.service
# remove old kernel module
rmmod vrouter
# maybe you need to restore default route
ip route add 0.0.0.0/0 via 10.0.10.1 dev ens3
The new instance requires two Helm repositories which can be downloaded from Juniper Networks.
- Download the file contrail-helm-deployer-release-tag.tgz onto your provisioning host.
- Copy contrail-helm-deployer-release-tag.tgz to all nodes in the cluster using scp (a loop sketch follows this list).
- Untar contrail-helm-deployer-release-tag.tgz on all nodes:
tar -zxf contrail-helm-deployer-release-tag.tgz -C /opt/
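A minimal sketch of the copy step, assuming root SSH access; the node names are placeholders and release-tag stands for the actual release tag in the file name:
# Hypothetical node names; replace with the hosts in your cluster.
for node in controller-1 compute-1 compute-2; do
    scp contrail-helm-deployer-release-tag.tgz $node:~/
done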
The next set of steps sets up the new compute nodes for Contrail deployment.
You should run the steps in the following procedure from the same node where Contrail was deployed.
- Add the new instance to /opt/openstack-helm-infra/tools/gate/devel/multinode-inventory.yaml, in the nodes section (a sample inventory sketch follows the preparation commands below).
- Prepare the new compute nodes for Contrail deployment:
export BASE_DIR=/opt
export OSH_INFRA_PATH=${BASE_DIR}/openstack-helm-infra
export CHD_PATH=${BASE_DIR}/contrail-helm-deployer
cd ${OSH_INFRA_PATH}
make dev-deploy setup-host multinode
make dev-deploy k8s multinode
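The inventory entry added in the first step above might look like the following. This is only a sketch; the group and SSH keys shown are assumptions and depend on your openstack-helm-infra version and environment:
all:
  children:
    nodes:
      hosts:
        new-compute-1:               # hypothetical node name
          ansible_host: 10.0.10.21   # placeholder address
          ansible_user: root         # assumption: root SSH access
          ansible_port: 22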
- Verify the new node names by using the command kubectl get nodes.
- Label the new nodes as follows:
kubectl label node <node-name> --overwrite openstack-control-plane=disable
kubectl label node <node-name> opencontrail.org/vrouter-kernel=enabled
- To avoid reprovisioning compute nodes when adding them, set the maintenance mode to TRUE in values.yaml. For example:
global:
  contrail_env_vrouter_kernel:
    MAINTENANCE_MODE: TRUE
- If adding vrouter with the DPDK or SRIOV role, switch the kernel to dpdk or sriov mode as appropriate (an illustrative labeling sketch follows this note).
Note: You need to deploy the vrouter Helm chart only once, for the first compute node or nodes. Upon subsequent deployments, k8s automatically deploys vrouter on the new nodes.
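Purely as an illustration of that mode switch, the labeling might follow the same pattern as the kernel label shown earlier. The dpdk label key below is an assumption; verify the exact keys against your charts and values.yaml before use:
# Hypothetical label keys; confirm against your deployment.
kubectl label node <node-name> --overwrite opencontrail.org/vrouter-kernel=disable
kubectl label node <node-name> opencontrail.org/vrouter-dpdk=enabled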
- Add vrouter as follows:
helm install --name contrail-vrouter ${CHD_PATH}/contrail-vrouter --namespace=contrail --values=/tmp/values.yaml
- After labeling and installing the new nodes, get the pods to verify they are operational:
kubectl get pods -n contrail
Note: If the new nodes are not deployed correctly, check for the presence of a default route. If a default route is not present, restore it (see the sketch below).
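A quick check and restore, based on the route command shown in the agent-removal example earlier; the gateway and interface are placeholders for your environment:
ip route | grep '^default'                      # verify that a default route exists
ip route add 0.0.0.0/0 via 10.0.10.1 dev ens3   # restore it if missing (placeholder gateway and interface)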
- At this point, contrail-status for compute nodes should have output as follows:
vrouter kernel module is PRESENT
== Contrail vrouter ==
nodemgr: active
agent: initializing (No Configuration for self)
- Restart contrail-control on all the new controller nodes after upgrading the compute nodes:
docker ps | grep control-control | awk '{print $1}' | xargs docker restart
- Transfer the new code into the compute node as follows:
pythonpath=`python -c "import sys; paths = [path for path in sys.path if 'packages' in path] ; print(paths[-1])"`
init_image_id=`docker images | awk '/contrail-vrouter-agent/{print $1":"$2}' | head -1 | sed 's/contrail-vrouter-agent/contrail-openstack-compute-init/'`
docker run --rm -it --network host -v /usr/bin:/opt/plugin/bin -v $pythonpath:/opt/plugin/site-packages $init_image_id
- Check the status of the new compute nodes by running contrail-status on them. All components should be active now. You can also check the status of the new instance by creating AZs/aggregates with the new compute nodes and running some test workloads to ensure everything operates correctly (see the sketch below).
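A minimal sketch of such a test, assuming the standard OpenStack CLI; the aggregate, zone, flavor, image, network, and host names are placeholders:
openstack aggregate create --zone new-az new-aggr           # aggregate with its own availability zone
openstack aggregate add host new-aggr <new-compute-host>    # add an upgraded compute node
openstack server create --availability-zone new-az --flavor m1.tiny --image cirros --network test-net test-vm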
Finalizing the Contrail Helm Deployer ISSU Process
Finalize the Contrail Helm deployer ISSU as follows:
- Stop the issu-run-sync container.
docker rm -f issu-run-sync
- Run the post-synchronization commands:
docker run --rm -it --network host -v $(pwd)/contrail-issu.conf:/etc/contrail/contrail-issu.conf --entrypoint /bin/bash -v /root/.ssh:/root/.ssh --name issu-run-sync $image_id -c "/usr/bin/contrail-issu-post-sync -c /etc/contrail/contrail-issu.conf"
docker run --rm -it --network host -v $(pwd)/contrail-issu.conf:/etc/contrail/contrail-issu.conf --entrypoint /bin/bash -v /root/.ssh:/root/.ssh --name issu-run-sync $image_id -c "/usr/bin/contrail-issu-zk-sync -c /etc/contrail/contrail-issu.conf"
- Start all previously stopped containers:
docker ps | grep config-devicemgr | awk '{print $1}' | xargs docker unpause | xargs docker restart
docker ps | grep config-schema | awk '{print $1}' | xargs docker unpause | xargs docker restart
docker ps | grep config-svcmonitor | awk '{print $1}' | xargs docker unpause | xargs docker restart
- Disengage maintenance mode. To do this, set the entry MAINTENANCE_MODE in values.yaml to FALSE, then run the following command from the deployment node:
helm upgrade -f /tmp/values.yaml contrail-vrouter /opt/contrail-helm-deployer/contrail-vrouter
- Clean up and remove the old Contrail controllers. Use the provision_issu.py script, called from the config-api container with the config issu.conf. Replace the credential variables and the API server IP with appropriate values as indicated:
[DEFAULTS]
db_host_info = {"ip-address": "node-ip-address", "ip-address": "node-ip-address", "ip-address": "node-ip-address"}
config_host_info = {"ip-address": "node-ip-address", "ip-address": "node-ip-address", "ip-address": "node-ip-address"}
analytics_host_info = {"ip-address": "node-ip-address", "ip-address": "node-ip-address", "ip-address": "node-ip-address"}
control_host_info = {"ip-address": "node-ip-address", "ip-address": "node-ip-address", "ip-address": "node-ip-address"}
admin_password = <admin password>
admin_tenant_name = <admin tenant>
admin_user = <admin username>
api_server_ip = <any IP of new config-api controller>
api_server_port = 8082
- Run the following commands from any controller node. Note: All *host_info parameters should contain the list of new hosts.
config_api_cid=`docker ps | awk '/config-api/{print $1}' | head -1`
docker cp issu.conf $config_api_cid:issu.conf
docker exec -it $config_api_cid python /opt/contrail/utils/provision_issu.py -c issu.conf
- Servers can be cleaned up if there are no other services present.
- All configurations for the neutron-api must be edited so that the parameter api_server_ip points to the list of new config-api IP addresses. Locate ContrailPlugin.ini (or another file that contains this parameter) and change the IP addresses to the list of new config-api IP addresses (see the sketch below).
- The heat configuration needs the same change. Locate the parameter [clients_contrail]/api_server and change it to point to the list of the new config-api IP addresses.
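For illustration only, the edits might look like the following sketches. The ContrailPlugin.ini section name and list format are assumptions that vary by deployment, so verify them against your installed neutron plugin and heat configuration; the IP addresses are placeholders:
# ContrailPlugin.ini (neutron plugin); assumed section name
[APISERVER]
api_server_ip = <new-config-api-ip-1> <new-config-api-ip-2> <new-config-api-ip-3>
api_server_port = 8082
# heat.conf; section named in the step above
[clients_contrail]
api_server = <new-config-api-ip-1> <new-config-api-ip-2> <new-config-api-ip-3>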