Upgrading Contrail In-Service Software from Releases 3.2 and 4.1 to 5.0.x using Ansible Deployer
Contrail In-Service Software Upgrade (ISSU) Overview
If your installed version is Contrail Release 3.2 or higher, you can perform an in-service software upgrade (ISSU) to upgrade to Contrail Release 5.0.x using the Ansible deployer. In performing the ISSU, the Contrail controller cluster is upgraded side-by-side with a parallel setup, and the compute nodes are upgraded in place.
We recommend that you take snapshots of your current system before you proceed with the upgrade process.
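For example, if your controller nodes happen to run as KVM guests managed with libvirt (an assumption; use whatever snapshot mechanism your environment provides), a snapshot can be taken with virsh:

  # hypothetical example: snapshot a controller VM named contrail-ctrl-1 before the upgrade
  virsh snapshot-create-as contrail-ctrl-1 pre-issu-snapshot --description "Before Contrail 5.0.x ISSU"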
The procedure for performing the ISSU using the Contrail Ansible deployer is similar to previous ISSU upgrade procedures.
This Contrail ansible deployer ISSU procedure does not include steps for upgrading OpenStack. If an OpenStack version upgrade is required, it should be performed using applicable OpenStack procedures.
In summary, the ISSU process consists of the following parts, in sequence:
- Deploy the new cluster.
- Synchronize the new and old clusters.
- Upgrade the compute nodes.
- Finalize the synchronization and complete the upgrades.
Prerequisites
The following prerequisites are required to use the Contrail ansible deployer ISSU procedure:
- A previous version of Contrail installed, no earlier than Release 3.2.
- Separate OpenStack controller and compute nodes, plus Contrail nodes.
- OpenStack installed from packages.
- Contrail and OpenStack installed on different nodes.
- Upgrading compute nodes that run Ubuntu 14.04 is not supported; upgrade those compute nodes to Ubuntu 16.04 first.
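For example, you can confirm the release on each compute node before starting; this is a minimal check, assuming the standard lsb_release utility is installed:

  # run on each compute node; the output must be 16.04 (not 14.04) before the ISSU
  lsb_release -rs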
Preparing the Contrail System for the Ansible Deployer ISSU Procedure
In summary, these are the general steps for the system preparation phase of the Contrail ansible deployer ISSU procedure:
- Deploy the 5.0.x version of Contrail using the Contrail ansible deployer, but make sure to include only the following Contrail controller services:
  - Config
  - Control
  - Analytics
  - Databases
  - Any additional support services like rmq, kafka, and zookeeper. (The vrouter service will be deployed later on the old compute nodes.)
  Note: You must provide keystone authorization information for setup.
- After deployment is finished, you can log into the Contrail web interface to verify that it works.
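For instance, a quick reachability check of the new Web UI can be run from the provisioning host; this sketch assumes the default HTTPS port 8143 on the new controller:

  # hypothetical check: the Web UI should answer on the new controller's management IP
  curl -k -s -o /dev/null -w "%{http_code}\n" https://<new-controller-ip>:8143/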
The detailed steps for deploying the new controller using the ansible deployer are as follows:
- To deploy the new controller, download contrail-ansible-deployer-release-tag.tgz onto your provisioning host from Juniper Networks.
- The new controller file config/instances.yaml appears as follows, with actual values in place of the variables (shown here in angle brackets):

  provider_config:
    bms:
      domainsuffix: local
      ssh_user: <user>
      ssh_pwd: <password>
  instances:
    server1:
      ip: <controller 1 ip>
      provider: bms
      roles:
        analytics: null
        analytics_database: null
        config: null
        config_database: null
        control: null
        webui: null
  contrail_configuration:
    CONTROLLER_NODES: <controller ip-s from api/mgmt network>
    CONTROL_NODES: <controller ip-s from ctrl/data network>
    AUTH_MODE: keystone
    KEYSTONE_AUTH_ADMIN_TENANT: <old controller's admin tenant>
    KEYSTONE_AUTH_ADMIN_USER: <old controller's admin user name>
    KEYSTONE_AUTH_ADMIN_PASSWORD: <password for admin user>
    KEYSTONE_AUTH_HOST: <keystone host/ip of old controller>
    KEYSTONE_AUTH_URL_VERSION: "/v3"
    KEYSTONE_AUTH_USER_DOMAIN_NAME: <user's domain in case of keystone v3>
    KEYSTONE_AUTH_PROJECT_DOMAIN_NAME: <project's domain in case of keystone v3>
    RABBITMQ_NODE_PORT: 5673
    IPFABRIC_SERVICE_HOST: <metadata service host/ip of old controller>
    AAA_MODE: cloud-admin
    METADATA_PROXY_SECRET: <secret phrase that is used in old controller>
  kolla_config:
    kolla_globals:
      kolla_internal_vip_address: <keystone host/ip of old controller>
      kolla_external_vip_address: <keystone host/ip of old controller>
- Finally, run the ansible playbooks to deploy the new controller:

  ansible-playbook -v -e orchestrator=none -i inventory/ playbooks/configure_instances.yml
  ansible-playbook -v -e orchestrator=openstack -i inventory/ playbooks/install_contrail.yml
After these commands complete successfully, the new controller should be up and running.
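As a quick sanity check (a suggestion rather than part of the documented procedure), you can run contrail-status on the new controller node and confirm that the deployed services report active:

  # run on the new controller node; config, control, analytics, and database
  # services should all show "active"
  contrail-status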
Provisioning Control Nodes and Performing Synchronization Steps
In summary, these are the general steps for the node provisioning and synchronization phase of the Contrail ansible deployer ISSU procedure:
- Provision new control nodes in the old cluster and old control nodes in the new cluster.
- Stop the following containers in the new cluster on all nodes:
  - contrail-device-manager
  - contrail-schema-transformer
  - contrail-svcmonitor
- Switch the new controller into maintenance mode to prevent provisioning computes in the new cluster.
- Prepare the config file for the ISSU.
- Run the pre-sync script from the ISSU package.
- Run the run-sync script from the ISSU package in background mode.
The detailed steps to provision the control nodes and perform the synchronization are as follows:
- Pair the old control nodes in the new cluster. It is recommended to run the provisioning command from within any config-api container; first, capture the container ID:

  config_api_image=`docker ps | awk '/config-api/{print $1}' | head -1`
- Run the following command for each old control node, substituting actual values where indicated:

  docker exec -it $config_api_image /bin/bash -c "LOG_LEVEL=SYS_NOTICE source /common.sh ; python /opt/contrail/utils/provision_control.py --host_name <hostname of old control node> --host_ip <IP of old control node> --api_server_ip $(hostname -i) --api_server_port 8082 --oper add --router_asn 64512 --ibgp_auto_mesh \$AUTH_PARAMS"
- Pair the new control nodes in the old cluster with similar commands (the specific syntax depends on the deployment method of the old cluster), again substituting actual values where indicated:

  python /opt/contrail/utils/provision_control.py --host_name <new controller hostname> --host_ip <new controller IP> --api_server_ip <old api-server IP/VIP> --api_server_port 8082 --oper add --admin_user admin --admin_password <password> --admin_tenant_name admin --router_asn 64512 --ibgp_auto_mesh
- Stop all the containers for contrail-device-manager, contrail-schema-transformer, and contrail-svcmonitor in the new cluster on all controller nodes:

  docker stop config_devicemgr_1
  docker stop config_schema_1
  docker stop config_svcmonitor_1
Perform the next steps from any new controller node. Then prepare the configuration file used by the ISSU scripts. (For now, only manual preparation is available.)
Note that, in various deployments, the old Cassandra may use port 9160 or 9161.
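If you are not sure which port is in use, one way to check (a suggestion, not part of the documented procedure) is to list the listening TCP ports on an old controller or database node:

  # look for a listener on 9160 or 9161
  ss -lnt | grep -E '9160|9161'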
You can learn the configuration details for the old services on any old controller node, in the file /etc/contrail/contrail-api.conf. The configuration appears as follows and can be stored locally:
  [DEFAULTS]
  # details about old rabbit
  old_rabbit_user = contrail
  old_rabbit_password = ab86245f4f3640a29b700def9e194f72
  old_rabbit_q_name = vnc-config.issu-queue
  old_rabbit_vhost = contrail
  old_rabbit_port = 5672
  old_rabbit_address_list = <ip-addresses>
  # details about new rabbit
  # new_rabbit_user = rabbitmq
  # new_rabbit_password = password
  # new_rabbit_ha_mode =
  new_rabbit_q_name = vnc-config.issu-queue
  new_rabbit_vhost = /
  new_rabbit_port = 5673
  new_rabbit_address_list = <ip-addresses>
  # details about other old/new services
  old_cassandra_user = controller
  old_cassandra_password = 04dc0540b796492fad6f7cbdcfb18762
  old_cassandra_address_list = <ip-address>:9161
  old_zookeeper_address_list = <ip-address>:2181
  new_cassandra_address_list = <ip-address>:9161 <ip-address>:9161 <ip-address>:9161
  new_zookeeper_address_list = <ip-address>:2181
  # details about new controller nodes
  new_api_info = {"<ip-address>": [("root"), ("password")], "<ip-address>": [("root"), ("password")], "<ip-address>": [("root"), ("password")]}
- Detect the config-api image ID:

  image_id=`docker images | awk '/config-api/{print $3}' | head -1`
- Run the pre-synchronization:

  docker run --rm -it --network host -v $(pwd)/contrail-issu.conf:/etc/contrail/contrail-issu.conf --entrypoint /bin/bash -v /root/.ssh:/root/.ssh $image_id -c "/usr/bin/contrail-issu-pre-sync -c /etc/contrail/contrail-issu.conf"
- Run the run-synchronization:

  docker run --rm --detach -it --network host -v $(pwd)/contrail-issu.conf:/etc/contrail/contrail-issu.conf --entrypoint /bin/bash -v /root/.ssh:/root/.ssh --name issu-run-sync $image_id -c "/usr/bin/contrail-issu-run-sync -c /etc/contrail/contrail-issu.conf"
- Check the logs of the run-sync process. To do this, open a shell in the run-sync container and view the log file:

  docker exec -it issu-run-sync /bin/bash
  cat /var/log/contrail/issu_contrail_run_sync.log
- Stop and remove the run-sync process after all compute nodes are upgraded:

  docker rm -f issu-run-sync
Transferring the Compute Nodes into the New Cluster
In summary, these are the general steps for the node transfer phase of the Contrail ansible deployer ISSU procedure:
- Select the compute node(s) for transferring into the new cluster.
- Move all workloads from the node(s) to other compute nodes (see the migration sketch after this list). You also have the option to terminate workloads as appropriate.
- For Contrail Release 3.x, remove Contrail from the node(s) as follows:
  - Stop the vrouter-agent service.
  - Remove the vhost0 interface.
  - Switch the physical interface down, then up.
  - Remove the vrouter.ko module from the kernel.
- For Contrail Release 4.x, remove Contrail from the node(s) as follows:
  - Stop the agent container.
  - Restore the physical interface.
- Add the required node(s) to instances.yml with the roles vrouter and openstack_legacy_compute.
- Run the Contrail ansible deployer to deploy the new vrouter and to configure the old compute service.
- All new compute nodes will have:
  - The collector setting pointed to the new Contrail cluster
  - The Control/DNS nodes pointed to the new Contrail cluster
  - The config-api setting in vnc_api_lib.ini pointed to the new Contrail cluster
- (Optional) Run a test workload on transferred nodes to ensure the new vrouter-agent works correctly.
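The migration sketch referenced in the summary above is shown here; it assumes the OpenStack and Nova command-line clients are available and that live migration is configured in your environment (host and server names are placeholders):

  # list the instances currently running on the compute node being transferred
  openstack server list --all-projects --host <old-compute-host>
  # live-migrate each instance to another compute node
  nova live-migration <server-id> <target-compute-host>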
Follow these steps to roll back a compute node, if needed:
- Move the workload from the compute node.
- Stop the Contrail Release 5.0.x containers.
- Ensure the network configuration has been successfully reverted.
- Deploy the previous version of Contrail using the deployment method for that version.
The detailed steps for transferring compute nodes into the new cluster are as follows:
After moving the workloads off the chosen compute nodes, remove the previous version of contrail-agent. For example, for Ubuntu 16.04 with vrouter-agent installed directly on the host, the steps to remove the previous contrail-agent are:
  # stop services
  systemctl stop contrail-vrouter-nodemgr
  systemctl stop contrail-vrouter-agent
  # remove packages
  apt-get purge -y contrail*
  # restore original interfaces definition
  cd /etc/network/interfaces.d/
  cp 50-cloud-init.cfg.save 50-cloud-init.cfg
  rm vrouter.cfg
  # restart networking
  systemctl restart networking.service
  # remove old kernel module
  rmmod vrouter
  # maybe you need to restore default route
  ip route add 0.0.0.0/0 via 10.0.10.1 dev ens3
- The new instance should be added to instances.yaml with two roles: vrouter and openstack_compute_legacy. To avoid reprovisioning the compute node, set the maintenance mode to TRUE. For example:

  instances:
    server10:
      ip: <compute 10 ip>
      provider: bms
      roles:
        vrouter:
          MAINTENANCE_MODE: TRUE
          VROUTER_ENCRYPTION: FALSE
        openstack_compute_legacy: null
- Run the ansible playbooks:

  ansible-playbook -v -e orchestrator=none -e config_file=/root/contrail-ansible-deployer/instances.yaml playbooks/configure_instances.yml
  ansible-playbook -v -e orchestrator=openstack -e config_file=/root/contrail-ansible-deployer/instances.yaml playbooks/install_contrail.yml
- The contrail-status output for the compute node appears as follows:

  vrouter kernel module is PRESENT
  == Contrail vrouter ==
  nodemgr: active
  agent: initializing (No Configuration for self)
- Restart contrail-control on all new controller nodes after the upgrade is complete:

  docker restart control_control_1
- Check the status of the new compute nodes by running contrail-status on them. All components should now be active. You can also verify the new instances by creating an availability zone/aggregate containing the new compute nodes and running some test workloads to ensure everything operates correctly, as sketched below.
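A minimal sketch of that verification, assuming the OpenStack CLI is available; the aggregate, image, flavor, and network names below are illustrative, not part of the documented procedure:

  # group the upgraded compute nodes into a dedicated availability zone
  openstack aggregate create --zone issu-test-az issu-test-aggregate
  openstack aggregate add host issu-test-aggregate <new-compute-host>
  # boot a throwaway instance pinned to the upgraded nodes
  openstack server create --availability-zone issu-test-az --image <test-image> --flavor <small-flavor> --network <test-network> issu-smoke-test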
Finalizing the Contrail Ansible Deployer ISSU Process
Finalize the Contrail ansible deployer ISSU as follows:
- Stop the issu-run-sync container:

  docker rm -f issu-run-sync
- Run the post-synchronization commands:

  docker run --rm -it --network host -v $(pwd)/contrail-issu.conf:/etc/contrail/contrail-issu.conf --entrypoint /bin/bash -v /root/.ssh:/root/.ssh --name issu-run-sync $image_id -c "/usr/bin/contrail-issu-post-sync -c /etc/contrail/contrail-issu.conf"
  docker run --rm -it --network host -v $(pwd)/contrail-issu.conf:/etc/contrail/contrail-issu.conf --entrypoint /bin/bash -v /root/.ssh:/root/.ssh --name issu-run-sync $image_id -c "/usr/bin/contrail-issu-zk-sync -c /etc/contrail/contrail-issu.conf"
- Disengage maintenance mode and start all previously stopped containers. To do this, set the entry MAINTENANCE_MODE in instances.yaml to FALSE, then run the following command from the deployment node:

  ansible-playbook -v -e orchestrator=openstack -i inventory/ playbooks/install_contrail.yml
- Clean up and remove the old Contrail controllers. Use the provision_issu.py script, called from the config-api container with the config file issu.conf. Replace the credential variables and API server IP with appropriate values as indicated:

  [DEFAULTS]
  db_host_info={"<ip-address>": "<node-ip-address>", "<ip-address>": "<node-ip-address>", "<ip-address>": "<node-ip-address>"}
  config_host_info={"<ip-address>": "<node-ip-address>", "<ip-address>": "<node-ip-address>", "<ip-address>": "<node-ip-address>"}
  analytics_host_info={"<ip-address>": "<node-ip-address>", "<ip-address>": "<node-ip-address>", "<ip-address>": "<node-ip-address>"}
  control_host_info={"<ip-address>": "<node-ip-address>", "<ip-address>": "<node-ip-address>", "<ip-address>": "<node-ip-address>"}
  admin_password = <admin password>
  admin_tenant_name = <admin tenant>
  admin_user = <admin username>
  api_server_ip = <any IP of new config-api controller>
  api_server_port = 8082
- Run the following commands from any controller node.
  Note: All *host_info parameters should contain the list of new hosts.

  docker cp issu.conf config_api_1:issu.conf
  docker exec -it config_api_1 python /opt/contrail/utils/provision_issu.py -c issu.conf
- Servers can be cleaned up if there are no other services present.
- All configurations for the neutron-api must be edited so that the parameter api_server_ip points to the list of new config-api IP addresses. Locate ContrailPlugin.ini (or whichever file contains this parameter) and change the IP addresses to the list of new config-api IP addresses (see the sketch after this list).
(or other file that contains this parameter) and change the IP addresses to the list of new config-api IP addresses. - The heat configuration needs the same changes. Locate
the parameter
[clients_contrail]/api_server
and change it to point to the list of the new config-api IP addresses.
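A minimal sketch of these two edits; the [APISERVER] section name and the file paths shown are typical defaults and may differ in your installation:

  # neutron Contrail plugin (for example /etc/neutron/plugins/opencontrail/ContrailPlugin.ini)
  [APISERVER]
  api_server_ip = <list of new config-api IP addresses>

  # heat configuration (for example /etc/heat/heat.conf)
  [clients_contrail]
  api_server = <list of new config-api IP addresses>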