Migrate Data from NorthStar to Paragon Automation

Release: Paragon Automation 24.1
Date: 09-Oct-23

You can migrate DeviceProfile, Cassandra DB, and Analytics (ES DB) data from an existing NorthStar Release 6.x setup to a Paragon Automation setup.

SUMMARY Use the steps described in this topic to migrate data from NorthStar to Paragon Automation.

Prerequisites

  • Ensure that both the NorthStar and Paragon Automation setups are up and running.
  • Cassandra must be accessible from Paragon Automation. Set the rpc_address parameter in the /opt/northstar/data/apache-cassandra/conf/cassandra.yaml file to an address to which the Paragon Automation setup can connect. After setting the address, restart Cassandra for the configuration changes to take effect:
    [root@ns1 ~]# supervisorctl restart infra:cassandra
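    For example, you can confirm the setting before restarting (the address shown here is a placeholder; use an address that the Paragon Automation nodes can reach):

    [root@ns1 ~]# grep '^rpc_address' /opt/northstar/data/apache-cassandra/conf/cassandra.yaml
    rpc_address: 10.xx.xx.200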
  • Ensure that both NorthStar and Paragon Automation have sufficient disk space to migrate the Cassandra DB. The Cassandra migration exports all data to CSV files, so sufficient space must be available for the migration operation. To ensure that sufficient space is available:

    1. Log in to NorthStar and check the current disk usage by Cassandra. For a multisite setup, issue the following command on all nodes in the setup and add them to calculate total disk usage:

      [root@ns1-site1 ~]# du -sh /opt/northstar/data/apache-cassandra/
      404M	/opt/northstar/data/apache-cassandra/       <--- Disk space used by Cassandra
      
    2. Ensure that the available disk space on both NorthStar and Paragon Automation exceeds the total Cassandra disk usage by at least a factor of 2. For Paragon Automation, this amount of space must be available on the device used for the /var/local directory on every node that has scheduling enabled. For NorthStar, only the node from which data is exported needs this much available disk space.

      For example, on a Paragon Automation node that has a large root partition '/' without an optional partition for '/var/local':

      root@pa-master:~# df -h
      Filesystem      Size  Used Avail Use% Mounted on
      udev            11G     0   11G   0% /dev
      tmpfs           2.2G   32M  2.1G   2% /run
      /dev/sda3       150G   33G  110G  24% /             <--- Available space for /var/local
      tmpfs           11G     0   11G   0% /dev/shm
      ...
      
      

      See Disk Requirements for more information on partition options.

      On NorthStar:

      [root@ns1-site1 ~]# df -h
      Filesystem      Size  Used Avail Use% Mounted on
      /dev/sda1       108G  9.6G   99G   9% /             <--- Available space
      devtmpfs        7.6G     0  7.6G   0% /dev
      tmpfs           7.6G   12K  7.6G   1% /dev/shm
      tmpfs           7.6G   25M  7.6G   1% /run
      
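      As a sanity check, you can script the comparison. The following is a minimal sketch (assuming GNU coreutils) that doubles the Cassandra usage on the NorthStar node and compares it against the free space on the relevant filesystem; adjust the mount point to wherever the export directory (NorthStar) or /var/local (Paragon Automation) resides:

      used_kb=$(du -sk /opt/northstar/data/apache-cassandra/ | awk '{print $1}')   # total Cassandra usage in KB
      avail_kb=$(df -k --output=avail / | tail -1)                                 # free space on / in KB
      [ "$avail_kb" -ge $((used_kb * 2)) ] && echo "Sufficient space" || echo "Insufficient space"
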

Follow this procedure to migrate data from NorthStar to Paragon Automation.

Create the nsmigration Task Pod

  1. Log in to the Paragon Automation primary node.
  2. Create the nsmigration task pod.
    root@pa-primary:~# kubectl apply -f /etc/kubernetes/po/nsmigration/kube-cfg.yml
    job.batch/nsmigration created
    
  3. Log in to the nsmigration task pod.
    root@pa-primary:~# kubectl exec -it $(kubectl get po -n northstar -l app=nsmigration -o jsonpath={..metadata.name}) -c nsmigration -n northstar -- bash
    root@nsmigration-fcvl6:/# cd /opt/northstar/util/db/nsmigration
    
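    Optionally, you can verify that the pod is running (a standard kubectl status check; the pod name suffix varies per deployment):

    root@pa-primary:~# kubectl get po -n northstar -l app=nsmigration
    NAME                READY   STATUS    RESTARTS   AGE
    nsmigration-fcvl6   1/1     Running   0          1m
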

Export Cassandra DB Data to CSV Files

For the migration procedure, you must export the contents of the Cassandra database in NorthStar to CSV files and copy those files to Paragon Automation.
  1. Copy the /opt/northstar/thirdparty/dsbulk-1.8.0.tar.gz and /opt/northstar/util/db/export_csv/cass_dsbulk_export_csv.py files from the nsmigration container in Paragon Automation to the target NorthStar installation:

    Copy the files locally to the current node:

    root@pa-master:~# mkdir migration_files && cd migration_files
    root@pa-master:~/migration_files# kubectl cp northstar/$(kubectl get po -n northstar -l app=nsmigration -o jsonpath={..metadata.name}):/opt/northstar/thirdparty/dsbulk-1.8.0.tar.gz ./dsbulk-1.8.0.tar.gz
    root@pa-master:~/migration_files# kubectl cp northstar/$(kubectl get po -n northstar -l app=nsmigration -o jsonpath={..metadata.name}):/opt/northstar/util/db/export_csv/cass_dsbulk_export_csv.py ./cass_dsbulk_export_csv.py
    

    Copy the files to the target NorthStar installation.

    root@pa-master:~# scp -r migration_files root@${northstar_host}:/root/
    
  2. Log in to the NorthStar instance and install the migration utilities by extracting the dsbulk-1.8.0.tar.gz file:
    [root@ns1-site1 migration_files]# tar -xf dsbulk-1.8.0.tar.gz
  3. Export the contents of the Cassandra database to CSV files by running the cass_dsbulk_export_csv.py script. You can pass the --skip-historical-data option to this script to skip the export of historical event data. For more information, see Table 1.

    Source the NorthStar environment file.

    [root@ns1-site1 migration_files]# source /opt/northstar/northstar.env
    

    Run the export script.

    [root@ns1-site1 migration_files]# python3 cass_dsbulk_export_csv.py --dsbulk=$PWD/dsbulk-1.8.0/bin/dsbulk
    
    Table 1: Historical Event Data Tables

    Keyspace         Tables
    taskscheduler    taskstatus
    pcs              topology, lsp_topo, lsp_link, ntad, messages, pcs_lsp_event, link_event, node_event
    pcs_provision    provision

    Running the script exports the contents of the Cassandra database (according to db_schema.json) to the export_csv folder in the current working directory. The script pipes the progress output from the dsbulk invocations to stdout. Each table has its own sub-directory with one or more CSV files. The procedure may take a long time for larger databases.

    [root@ns1-site1 migration_files]# python3 cass_dsbulk_export_csv.py --dsbulk=$PWD/dsbulk-1.8.0/bin/dsbulk
    2021-11-22 23:12:36,908: INFO: ns_dsbulk_export: Exporting NorthStarMLO:Default (page size 500)
    2021-11-22 23:12:39,232: INFO: ns_dsbulk_export: Operation directory: /root/dsbulk/logs/UNLOAD_20211122-231239-029958
    2021-11-22 23:12:43,580: INFO: ns_dsbulk_export: total | failed | rows/s | p50ms | p99ms | p999ms
    2021-11-22 23:12:43,580: INFO: ns_dsbulk_export:     1 |      0 |      2 |  8.18 |  8.19 |   8.19
    2021-11-22 23:12:43,581: INFO: ns_dsbulk_export: Operation UNLOAD_20211122-231239-029958 completed successfully in less than one second.
    ...
    2021-11-22 23:14:22,886: INFO: ns_dsbulk_export: Exporting pcs:links (page size 500)
    2021-11-22 23:14:24,891: INFO: ns_dsbulk_export: Operation directory: /root/dsbulk/logs/UNLOAD_20211122-231424-683902
    2021-11-22 23:14:28,863: INFO: ns_dsbulk_export: total | failed | rows/s | p50ms | p99ms | p999ms
    2021-11-22 23:14:28,863: INFO: ns_dsbulk_export:    16 |      0 |     29 |  6.08 |  6.09 |   6.09
    2021-11-22 23:14:28,863: INFO: ns_dsbulk_export: Operation UNLOAD_20211122-231424-683902 completed successfully in less than one second
    ...
    [root@ns1-site1 migration_files]# ls -l export_csv/
    total 0
    drwxr-xr-x. 2 root root  6 Nov 22 23:20 anycastgroup-anycastgroupIndex
    drwxr-xr-x. 2 root root  6 Nov 22 23:20 cmgd-configuration
    drwxr-xr-x. 2 root root  6 Nov 22 23:19 device_config-configlets
    drwxr-xr-x. 2 root root  6 Nov 22 23:19 device_config-configlets_workorder
    ...
    [root@ns1-site1 migration_files]# ls -l export_csv/pcs-links
    total 16
    -rw-r--r--. 1 root root 14685 Nov 22 23:14 pcs-links-000001.csv
    
    Note:

    The exported CSV files also serve as a backup of the Cassandra DB data. We recommend archiving the files in case data needs to be restored in the future.
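
    For example, you can archive the exported files with a standard tar invocation (the archive name and destination are your choice):

    [root@ns1-site1 migration_files]# tar -czf export_csv_backup_$(date +%Y%m%d).tar.gz export_csv/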

  4. Copy the export_csv folder to the Paragon Automation node where the nsmigration pod is running.
    root@pa-master:~# kubectl get po -n northstar -l app=nsmigration -o jsonpath={..spec.nodeName}
    10.52.44.210                                        <--- In this example, this is the worker3 node

    Copy the exported files to the correct directory on the worker3 node.

    root@pa-worker3:~# cd /var/local/ns_db_migration/ && scp -r root@${northstar_ip}:/root/migration_files/export_csv .
    

Migrate DeviceProfile and Cassandra DB

  1. Run the ns_data_migration.py script from the nsmigration task pod. The complete command syntax is:

     ./ns_data_migration.py -a ns-app-server-ip -su root -sp ns-app-user-ssh-password -dh cassandra-db-host -du cassandra -dp cassandra-password -dcsv /opt/northstar/ns_db_migration/export_csv -pu postgres-user -pp postgres-password -ph postgres-host -po postgres-port -pah vip-of-ingress-controller-or-hostname-of-main-web-application -pau paragon-web-ui-login -pap paragon-web-ui-password -dr 1
    For example:
    root@nsmigration-7xbbz:/opt/northstar/util/db/nsmigration# ./ns_data_migration.py -a 10.xx.xx.200 -su root -sp password -dh 10.xx.xx.200 -dp password -dcsv /opt/northstar/ns_db_migration/export_csv -pu northstar -pp BB91qaDCfjpGWPbjEZBV -ph atom-db.common -po 5432 -pah 10.xx.xx.11 -pau admin -pap password1 -dr 1                         
    Logs stored at /opt/northstar/util/db/nsmigration/logs/nsdatamigration.log
    Cassandra connection established...connection attempt: 1
    Testing cassandra connectivity
    Connected to cluster Test Cluster
    Testing EMS connectivity
    scope_id: d3ae39f7-35c6-49dd-a1bd-c509a38bd4ea, auth_token length: 1160
    scoped token length: 1303
    jwt_token length: 40974
    All connection ok starting mirgation
    Starting device profile migration...
    Found 2 devices in Northstar device profile
    
    ...
    2022-04-26 20:57:01,976:INFO:Loading health_monitor-health_history-000001.csv (~ 5 rows)
    2022-04-26 20:57:01,996:INFO:Loaded 5/~5 rows
    2022-04-26 20:57:01,996:INFO:Copying csv data for table health_monitor:thresholds
    2022-04-26 20:57:01,997:INFO:Using batch size 500
    2022-04-26 20:57:02,001:INFO:Loading health_monitor-thresholds-000001.csv (~ 1 rows)
    2022-04-26 20:57:02,003:INFO:Loaded 1/~1 rows
    2022-04-26 20:57:02,004:INFO:Copying csv data for table planner:networkdata
    2022-04-26 20:57:02,005:INFO:Using batch size 20
    2022-04-26 20:57:02,008:INFO:Loading planner-networkdata-000001.csv (~ 1 rows)
    2022-04-26 20:57:02,071:INFO:Loaded 1/~1 rows
    ...
    The NS data migration completed
    

    You can specify the following parameters when running the ns_data_migration.py script.

    • -a APP, --app APP—IP address or hostname of the application server
    • -su SSHUSER, --sshuser SSHUSER—SSH username (default is root)
    • -sp SSHPASS, --sshpass SSHPASS—SSH password
    • -so SSHPORT, --sshport SSHPORT—SSH port (default is 22)
    • -du DBUSER, --dbuser DBUSER—Cassandra DB username (default is cassandra)
    • -dp DBPASS, --dbpass DBPASS—Cassandra DB password
    • -do DBPORT, --dbport DBPORT—Cassandra DB port (default is 9042)
    • -dh DBHOST, --dbhost DBHOST—Comma-separated host IP addresses of Cassandra DB
    • -pu PGUSER, --pguser PGUSER—Postgres DB username (default is northstar)
    • -dcsv DBCSV, --dbCsvPath DBCSV—The path with CSV data exported from Cassandra
    • -pp PGPASS, --pgpass PGPASS—Postgres DB password
    • -ph PGHOST, --pghost PGHOST—Postgres DB host (default is atom-db.common)
    • -po PGPORT, --pgport PGPORT—Postgres DB port (default is 5432)
    • -pah PARAGONHOST, --paragonHost PARAGONHOST—Virtual IP (VIP) address of Paragon Automation Web UI
    • -pau PARAGONUSER, --paragonUser PARAGONUSER—Paragon Automation Web UI username
    • -pap PARAGONPASSWORD, --paragonPassword PARAGONPASSWORD—Paragon Automation Web UI user password
    • -dr DISCOVERYRETRIES, --discoveryRetries DISCOVERYRETRIES—Device discovery retries (default is 2).

      You use the -dr DISCOVERYRETRIES option for DeviceProfile migration when Paragon Automation fails to discover devices on the first attempt. Discovery can fail for multiple reasons, such as devices not being reachable or device credentials being incorrect. Even when discovery fails for devices with incorrect information, Paragon Automation still discovers the devices with correct information, so partial failure for a subset of devices is possible when discovering multiple devices at a time. To determine the exact reason for a failure, see the Monitoring > Jobs page in the Paragon Automation Web UI.

      If the -dr option is set to more than 1, the ns_data_migration.py script retries discovery for all the devices whenever a discovery failure occurs. The retries do not impact devices that are already discovered. However, the chances of successfully discovering a previously failed device in subsequent attempts are minimal. We recommend a maximum value of 2 for the -dr option, which is the default. If there are many devices in the network, use a value of 1 to avoid unnecessary retries.

    Note:

    When you migrate Cassandra DB data from NorthStar to Paragon Automation, large tables with millions of rows might cause the migration to proceed very slowly and take a long time. Often these large tables contain historical event data that you can discard during migration. To skip migrating this data, set the --dbSkipHistoricalData flag when calling the ns_data_migration.py script, as shown in the example after this note. If you skip this data, the historical event tables listed in Table 1 are not available in Paragon Automation, and once the NorthStar instance is removed, the data is permanently lost unless you back it up.
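    For example, the flag is appended to the migration command shown earlier (all other arguments unchanged; the remaining options are elided here):

    ./ns_data_migration.py -a 10.xx.xx.200 -su root -sp password ... --dbSkipHistoricalData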

  2. Verify the DeviceProfile data.

    Log in to Paragon Automation Web UI and navigate to Configuration > Device. Verify that all the devices are discovered and present. Also, verify that the configuration information is the same as that in the NorthStar device profile.

    To view the device discovery result, go to the Monitoring > Jobs page in the Paragon Automation Web UI.

  3. Verify Cassandra DB data.
    The log output of the ns_data_migration.py script indicates whether there were any problems migrating data from Cassandra. You can also run a script to verify the data in Paragon Automation against the exported CSV files; note that this can take a long time for larger databases. From the nsmigration container, run:
    root@nsmigration-h7b9m:~# python3 /opt/northstar/util/db/dbinit.py --schema=/opt/northstar/util/db/db_schema.json --host=$PG_HOST --port=$PG_PORT --user=$PG_USER --password=$PG_PASS --dbtype=postgres --check-schema-version --from-cassandra-csv=/opt/northstar/ns_db_migration/export_csv --verify-data --log-level=DEBUG 2>&1 | tee debug_migration.log
    ...
    2022-04-26 21:11:12,466:INFO:Loading health_monitor-health_history-000001.csv (~ 5 rows)
    2022-04-26 21:11:12,484:INFO:Loaded 5/~5 rows
    2022-04-26 21:11:12,484:INFO:Verify stats health_monitor:health_history: Verified 5/5
    2022-04-26 21:11:12,484:INFO:Copying csv data for table health_monitor:thresholds
    2022-04-26 21:11:12,484:INFO:Using batch size 500
    2022-04-26 21:11:12,489:INFO:Loading health_monitor-thresholds-000001.csv (~ 1 rows)
    2022-04-26 21:11:12,491:INFO:Loaded 1/~1 rows
    2022-04-26 21:11:12,491:INFO:Verify stats health_monitor:thresholds: Verified 1/1
    2022-04-26 21:11:12,491:INFO:Copying csv data for table planner:networkdata
    2022-04-26 21:11:12,491:INFO:Using batch size 20
    2022-04-26 21:11:12,496:INFO:Loading planner-networkdata-000001.csv (~ 1 rows)
    2022-04-26 21:11:12,532:INFO:Loaded 1/~1 rows
    2022-04-26 21:11:12,533:INFO:Verify stats planner:networkdata: Verified 1/1
    

    The script outputs (rows verified)/(rows checked) for each table (see the lines beginning with "Verify") to stdout and to debug_migration.log. Some rows may have been updated after the data was imported but before it was verified, so 'rows verified' may not always equal 'rows checked'. Once the migration is complete, you can remove the exported CSV files by deleting the /var/local/ns_db_migration/export_csv directory on the relevant node, as shown below.
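    For example, on the node where the exported files were copied (worker3 in this example):

    root@pa-worker3:~# rm -rf /var/local/ns_db_migration/export_csv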

(Optional) Migrate Analytics Data

If you have installed Analytics, perform the following steps to migrate analytics data from NorthStar ES DB to Paragon Automation Influx DB:
  1. Log in to the nsmigration task pod, and run the import_es_data.py script with the -a option (the NorthStar application server address).
    root@nsmigration-p7tcd:/# cd /opt/northstar/util/db/nsmigration
    root@nsmigration-p7tcd:/opt/northstar/util/db/nsmigration# ./import_es_data.py -a 10.xx.xx.95
    Logs stored at /opt/northstar/util/db/nsmigration/logs/es_data_migration.log
    Certs are missing, fetching them from Northstar app server
    Please enter SSH password:
    Testing Elasticsearch connectivity
    Elasticsearch DB connection ok
    Testing Influx DB connectivity
    Influx DB connection ok
    Starting data extraction for type= interface
    
    <OUTPUT SNIPPED>
    
      "migration_rate_sec": 1471.1758360302051,
      "timetaken_min": 0.7725,
      "total_points": 68189
    }
    ETLWorker-2 completed, total points=68189 in 0.7725 minutes with migration_rate=1471.1758360302051

    Note the following about the import_es_data.py script options:
    • Statistics type—By default, supports interface, label-switched path (LSP), and link-latency statistics data. You can select a specific type by using the --type option.

    • Rollups type—By default, supports daily and hourly time periods. You can select a specific type by using the --rollup_type option.

    • Migration schema—The mapping of ES DB to Influx DB schema is defined in the /opt/northstar/util/db/nsmigration/es_influx_mapping.json file.

    • Rollup ages—By default, fetches hourly and daily data for the last 180 days and 1000 days, respectively. You can change the ages by using the --hourly_age and --daily_age options.

    • ETL parallel worker process—By default, uses four ETL parallel worker processes. You can change the worker process by using the --wp option.

    • Execution time—The script execution time varies based on data volume, the number of days, and the number of ETL parallel worker processes. For example, if four ETL workers each migrate at a rate of about 1500 points per second (see the arithmetic sketch after this list), then:

      • 25,000 LSP statistics of 180 days hourly can take 5 hours

      • 50,000 LSP statistics of 180 days hourly can take 10 hours

    For more information about the script arguments, run import_es_data.py -h.
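    As a sanity check of these estimates (assuming hourly rollups, that is, 24 data points per LSP per day, and a per-worker rate of 1500 points per second):

    # 25,000 LSPs x 180 days x 24 points/day = 108,000,000 points
    # 4 workers x 1,500 points/s = 6,000 points/s
    echo "$(( 25000 * 180 * 24 / (4 * 1500) / 3600 )) hours"    # prints: 5 hours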

  2. Verify Influx DB data using the following commands.
    • To query all tunnel traffic data for the last 30 days in Influx DB, run the /opt/pcs/bin/getTrafficFiles.py script inside the dcscheduler pod:
      root@pa-primary:~# kubectl exec -it $(kubectl get po -n northstar -l app=dcscheduler -o jsonpath={..metadata.name}) -c dcscheduler  -n northstar -- /opt/pcs/bin/getTrafficFiles.py -t tunnel_traffic -i 1d -b 30
      #Starting Time : 08/17/21 12:00:00 AM
      #Interval : 24 hour
      # UNIT = 1
      
      # Aggregation:
      #   - Series: time series
      #   - Statistic: 95th percentile
      #   - Interval: 1 day
      # Report Date= 2021-09-16 (Thu) 08:56
      
      vmx101:Silver-101-102 A2Z 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1 -1 -1 0 -1 -1
      vmx101:Silver-101-104 A2Z 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1 -1
      vmx102:Silver-102-101 A2Z 0 1071913 1072869 1073082 1073378 1073436 1073378 1073620 1073378 1073388 1073484 1073896 1074086 1073974 1073795 1073378 1073590 1073790 1074498 1074595 1074498 1074092 1076565 1076565 1076919 1075502 1075857 1075325 1075148 -1 -1
      vmx102:Silver-102-103 A2Z 0 2118101 2120705 2121258 2120438 2120773 2119652 2121258 2120296 2120190 2120962 2121364 2121867 2121817 2122209 2120167 2120323 2121665 2122733 2122685 2122321 2121511 2121855 2119546 2119700 2109572 2102489 2101604 2121258 2109749 2110280
      vmx102:Silver-102-104 A2Z 0 3442749 3449550 3450757 3448983 3448603 3446081 3453525 3451513 3448142 3449008 3450874 3452721 3451650 3450733 3447297 3447147 3449132 3451747 3450887 3450727 3448429 3452310 3448132 3447328 3200657 3200480 3197646 3445363 3215530 3215884
      vmx103:Silver-103-101 A2Z 0 2149705 2151625 2158319 2170251 2170980 2171171 2169252 2167757 2168518 2172730 2168582 2166350 2161904 2161460 2167162 2158050 2160413 2166131 2167033 2166226 2165632 2171717 2178973 2178102 2158015 2158015 2157661 2157306 -1 -1
      vmx103:Silver-103-102 A2Z 0 2122922 2125508 2131074 2141411 2142899 2141840 2139937 2138338 2139743 2144156 2139602 2138745 2134561 2132725 2137973 2129397 2132755 2138203 2138653 2136713 2135444 2144637 2150006 2147677 2108332 2107801 2107270 2124800 2112228 2113113
      vmx103:Silver-103-104 A2Z 0 3426540 3437589 3447876 3461550 3464308 3461249 3460710 3453848 3458821 3463446 3456119 3456969 3450036 3446943 3451602 3439059 3445325 3455444 3455491 3454308 3449833 3468558 3472376 3470223 3185429 3187731 3183304 3430135 3198001 3202781
      vmx104:Silver-104-102 A2Z 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
      vmx104:Silver-104-103 A2Z 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
      vmx105:rsvp-105-106 A2Z 0 114 114 121 116 122 125 125 114 214 224 215 223 213 223 222 226 222 217 213 214 216 219 218 219 202 202 202 211 204 202
      
    • To query all egress interface traffic data for the last 30 days in Influx DB, run the /opt/pcs/bin/getTrafficFiles.py script inside the dcscheduler pod:

      root@pa-primary:~# kubectl exec -it $(kubectl get po -n northstar -l app=dcscheduler -o jsonpath={..metadata.name}) -c dcscheduler  -n northstar -- /opt/pcs/bin/getTrafficFiles.py -t interface_out -i 1d -b 30
      #Starting Time : 08/17/21 12:00:00 AM
      #Interval : 24 hour
      # UNIT = 1
      
      # Aggregation:
      #   - Series: time series
      #   - Statistic: 95th percentile
      #   - Interval: 1 day
      # Report Date= 2021-09-16 (Thu) 08:49
      
      
      vmx101 ge-0/0/8.0 A2Z 0 2620 2620 2621 2621 2621 2621 2622 2622 2623 2624 2626 2627 2627 2627 2627 2627 2627 2627 2627 2628 2631 2631 2632 2632 0 0 0 2632 -1 -1
      vmx101 ge-0/0/5 A2Z 0 843 846 848 860 843 858 863 866 1001 1012 1012 1018 1011 1048 1018 1048 1027 1013 1025 1017 1010 1046 1046 1048 1053 1055 1073 1045 -1 -1
      ...
      ...
      ...
      vmx107 ge-0/0/8.0 A2Z 0 2620 2621 2622 2622 2623 2624 2626 2626 2630 2631 2632 2632 2632 2632 2632 2632 2633 2633 2635 2635 2635 2635 2636 2636 0 0 0 2636 0 0
      vmx107 ge-0/1/9.0 A2Z 0 6888955 6907022 6907653 6902645 6899706 6892876 6905804 6902894 6899395 6897851 6897322 6896863 6900351 6898745 6890080 6889337 6896781 6902034 6899116 6898749 6898630 6903136 6889662 6890800 6401393 6410976 6400867 6885900 6431500 6436156
      vmx107 ge-0/0/5 A2Z 0 4290428 4296767 4297691 4295393 4292480 4290593 4293842 4293149 4295279 4294504 4294045 4294905 4294996 4294921 4292093 4292703 4295408 4297494 4296424 4295983 4295972 4296808 4294929 4299425 4285605 4286205 4285146 4288390 2126258 2127510
      vmx107 ge-0/0/6 A2Z 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 122 122 122 122 122 122 122 122 122
      vmx107 ge-0/0/7 A2Z 0 878 874 915 898 886 879 897 889 1028 1021 1022 1023 1055 1079 1077 1097 1094 1092 1044 1007 1028 1062 1065 1057 1094 1075 1071 1102 1082 1054
      vmx107 ge-0/0/8 A2Z 0 2921 2925 2925 2924 2925 2926 2928 2928 2930 2934 2934 2936 2934 2935 2935 2934 2935 2936 2939 2938 2938 19892 20581 20965 20582 21076 20376 21578 23252 21312
      vmx107 ge-0/1/8 A2Z 0 2127443 2130145 2130846 2128792 2128138 2127177 2128628 2128331 2128820 2128716 2128916 2129022 2129380 2128995 2127648 2127240 2128885 2130132 2130474 2130345 2129410 2129376 2125952 2125957 2117061 2114139 2119148 2126518 2122808 2121792
      vmx107 ge-0/1/9 A2Z 0 6889737 6907821 6908350 6903585 6900486 6893779 6906516 6903747 6900412 6898908 6898427 6897892 6901325 6899809 6891078 6890377 6897822 6903099 6900152 6899763 6899726 6904248 6890745 6891782 6402507 6412083 6401924 6884168 6432556 6437280
      

(Optional) Migrate NorthStar Planner Data

If you want to use the NorthStar Planner models saved on the NorthStar application server file system in Paragon Automation, copy the models using the following steps:
  1. Log in to the NorthStar server.
  2. Use scp to copy the directory where your Planner models are saved (/opt/northstar/data/specs) to the Paragon Automation primary node (/root/ns_specs). For example:
    [root@ns1-site1 specs]# ls -l /opt/northstar/data/specs
    total 8
    drwx------ 2 root root 4096 Sep 16 08:18 network1
    drwx------ 2 root root 4096 Sep 16 08:18 sample_fish
    
    
    [root@ns1-site1 ~]# scp -r /opt/northstar/data/specs root@10.xx.xx.153:/root/ns_specs
    The authenticity of host '10.xx.xx.153 (10.xx.xx.153)' can't be established.
    ECDSA key fingerprint is SHA256:haylHqFfEuIEm8xThKbHJhG2uuTpT2xBpC2GZdzfZss.
    ECDSA key fingerprint is MD5:15:71:76:c7:d2:2b:0d:fe:ff:0d:5f:62:7f:52:80:fe.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '10.xx.xx.153' (ECDSA) to the list of known hosts.
    bblink.x                                                                                                     100% 3893     2.2MB/s   00:00
    bgplink.x                                                                                                    100%  140     9.6KB/s   00:00
    bgpnode.x                                                                                                    100%  120    56.5KB/s   00:00
    bgpobj.x                                                                                                     100% 4888     1.8MB/s   00:00
    cosalias.x                                                                                                   100%  385   180.4KB/s   00:00
    custrate.x                                                                                                   100% 1062   184.0KB/s   00:00
    demand.x                                                                                                     100%  104KB   2.1MB/s   00:00
    dparam.x                                                                                                     100%   11KB   2.5MB/s   00:00
    ...
    
  3. Log in to the Paragon Automation primary node.
  4. Copy the /root/ns_specs folder to the NorthStar Planner pod at /opt/northstar/data/specs using the kubectl command. For example:
    root@pa-primary:~# ls -l /root/ns_specs
    total 8
    drwx------ 4 root root 4096 Sep 16 01:41 network1
    drwx------ 4 root root 4096 Sep 16 01:41 sample_fish
    
    
    root@pa-primary:~# kubectl cp /root/ns_specs  northstar/$(kubectl get po -n northstar -l app=ns-web-planner -o jsonpath={..metadata.name}):/opt/northstar/data/specs -c ns-web-planner
    
    
  5. Verify that the NorthStar Planner models are copied inside the NorthStar Planner pod at /opt/northstar/data/specs/ns_specs.
    root@pa-primary:~/ns_specs# kubectl exec -it $(kubectl get po -n northstar -l app=ns-web-planner -o jsonpath={..metadata.name}) -c ns-web-planner -n northstar -- ls -l /opt/northstar/data/specs/ns_specs
    total 8
    drwx------ 2 root root 4096 Sep 16 08:18 network1
    drwx------ 2 root root 4096 Sep 16 08:18 sample_fish
    
    