Upgrade to Paragon Automation Release 24.1

Release: Paragon Automation 24.1
18-Jul-24

You can directly upgrade only from Paragon Automation Release 23.2 to Release 24.1. If your release is earlier than Release 23.2, you must install Release 24.1 afresh.

Use the instructions provided in this topic to upgrade from Release 23.2 to Release 24.1.

Before you upgrade, we recommend that you use the health-check utility to ensure that your system is stable, and then back up your current configuration using the backup utility. Copy the backed-up data to an external location outside the cluster before you begin the upgrade, as a fail-safe.

If you are installing Release 24.1 afresh, we recommend that you use the backup and restore utility for your existing release.

Note:

You cannot custom select the applications to be backed up and restored. You can back up and restore only a preconfigured, fixed set of applications and administration settings for each component. See Backup and Restore for a complete list of applications that can be backed up.

Before You Upgrade:

Upgrade your current release of Paragon Automation to Release 23.2. For information on how to upgrade your current release to Release 23.2, see Upgrade to Paragon Automation Release 23.2.

Upgrade from Release 23.2 to Release 24.1

  1. Log in to the primary node of your Release 23.2 cluster.
  2. Check for any errors in the pods using the health-check.sh script.
    root@primary23.1:~# health-check.sh               
    =======================================================
    Get node count of Kubernetes cluster.
    =======================================================
     There are 4 nodes in the cluster.                                            
    =======================================================
    Get node status of Kubernetes cluster.
    =======================================================
    4 nodes are in the Ready state.
    NAME           STATUS   ROLES                  AGE   VERSION
    10.16.18.20   Ready    control-plane,master   26h   v1.21.14
    10.16.18.21   Ready    <none>                 26h   v1.21.14
    10.16.18.22   Ready    <none>                 26h   v1.21.14
    10.16.18.23   Ready    <none>                 26h   v1.21.14
    =======================================================
    Get node readiness and taint status of Kubernetes cluster.
    =======================================================
    All 4 nodes are in a Ready state.
    All 4 nodes have no taints.
    =======================================================
    Check DiskPressure status for each node
    ======================================================       
    DiskPressure status for each node:
    Node    DiskPressure
    10.16.18.20    False
    10.16.18.21    False
    10.16.18.22    False
    10.16.18.23    False
    ======================================================
    Check Network and Calico status for each node
    ======================================================                 
    NetworkUnavailable and Calico status for each node:
    Node    NetworkUnavailable      Ready   Calico
    10.16.18.20    False   True
    10.16.18.21    False   True
    10.16.18.22    False   True
    10.16.18.23    False   True
                                                           
    ======================================================
    Checking Memory Pressure status on nodes
    ======================================================
                                                           
    Node 10.16.18.20 is not reporting any memory pressure issues.
    Node 10.16.18.21 is not reporting any memory pressure issues.
    Node 10.16.18.22 is not reporting any memory pressure issues.
    Node 10.16.18.23 is not reporting any memory pressure issues.
                                                           
    ======================================================
    Checking PIDPressure on nodes
    ======================================================
                                                           
    Node 10.16.18.20 is not reporting any PID pressure issues.
    Node 10.16.18.21 is not reporting any PID pressure issues.
    Node 10.16.18.22 is not reporting any PID pressure issues.
    Node 10.16.18.23 is not reporting any PID pressure issues.
                                                           
    
    ======================================================
    Checking Kubernetes PODS status
    ======================================================                                                     
    No errors found in pods.   
    ======================================================
    Checking Kubernetes services status
    ======================================================         
    No Kubernetes services found in Pending state.
    ======================================================
    Checking Postgres Status
    ======================================================
    Result includes the NorthStar database schema.
     version |                 description                 
    ---------+---------------------------------------------
     0.28    | Version 28 of the NorthStar database schema
    (1 row)
    

Your pods are healthy if the output contains the "No errors found in pods." message.

  3. Execute the data.sh --backup script.
    root@primary23.1:~# data.sh --backup
    ===============================Backup Report================================                                                                          
                                                                                 
    Name:           db-backup-paa-2023-10-18
    Namespace:      common
    Selector:       controller-uid=446d45fd-0a7e-4b21-94b1-02f079b11879
    Labels:         apps=db-backup
                    common=db-backup
                    id=paa-2023-10-18
    Annotations:    <none>
    Parallelism:    1
    Completions:    1
    Start Time:     Wed, 18 Oct 2023 08:39:04 -0700
    Completed At:   Wed, 18 Oct 2023 08:39:23 -0700
    Duration:       19s
    Pods Statuses:  0 Running / 1 Succeeded / 0 Failed
    Pod Template:
      Labels:           app=db-backup
                        common=db-backup
                        controller-uid=446d45fd-0a7e-4b21-94b1-02f079b11879
                        id=paa-2023-10-18
                        job-name=db-backup-paa-2023-10-18
      Service Account:  db-backup
      Containers:
       db-backup:
        Image:      localhost:5000/eng-registry.juniper.net/northstar-scm/northstar-containers/ns_dbinit:release-23-1-ge572e4b914
        Port:       <none>
        Host Port:  <none>
        Command:
          /bin/sh
        Args:
          -c
          exec /entrypoint.sh --backup /paa-2023-10-18
        Environment:
          PG_HOST:        atom-db.common
          PG_PORT:        5432
          PG_ADMIN_USER:  <set to the key 'username' in secret 'atom.atom-db.credentials'>  Optional: false
          PG_ADMIN_PASS:  <set to the key 'password' in secret 'atom.atom-db.credentials'>  Optional: false
        Mounts:
          /opt/northstar/data/backup from postgres-backup (rw)
      Volumes:
       postgres-backup:
        Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:  db-backup-pvc
        ReadOnly:   false
    Events:
      Type    Reason            Age   From            Message
      ----    ------            ----  ----            -------
      Normal  SuccessfulCreate  47m   job-controller  Created pod: db-backup-paa-2023-10-18-95b8j
      Normal  Completed         47m   job-controller  Job completed
                                                                                 
                                                                                 
    =============================================================================
    Running EMS Backup.
    ===============================Get Backup file location======================
                                                                                 
                                                                                 
    Name:              local-pv-81fa4ecb
    Labels:            <none>
    Annotations:       pv.kubernetes.io/bound-by-controller: yes
                       pv.kubernetes.io/provisioned-by: local-volume-provisioner-10.16.18.20-b73872bc-257c-4e82-b744-c6981bc3e131
    Finalizers:        [kubernetes.io/pv-protection]
    StorageClass:      local-storage
    Status:            Bound
    Claim:             common/db-backup-pvc
    Reclaim Policy:    Delete
    Access Modes:      RWO
    VolumeMode:        Filesystem
    Capacity:          149Gi
    Node Affinity:     
      Required Terms:  
        Term 0:        kubernetes.io/hostname in [10.16.18.20]
    Message:           
    Source:
        Type:  LocalVolume (a persistent volume backed by local storage on a node)
        Path:  /export/local-volumes/pv1
    Events:    <none>
                                                                                                                                                         
    =============================================================================
    Running Pathfinder Kubernetes Config Backup.
    =============================================================================
    
                                                                                 
    =============================Backup Completed================================
    
  4. Search for the backup source in the Backup Report, navigate to that directory, and verify that the files are present.
    root@primary23.1:~# cd /export/local-volumes/pv1
    root@primary:/export/local-volumes/pv1# ls
    paa-2023-10-18
    root@primary23.1:/export/local-volumes/pv1# cd paa-2023-10-18/
    root@primary23.1:/export/local-volumes/pv1/paa-2023-10-18# ls
    auditlog.pgdump       dmc-scope-bkup.yml  jobmanager.pgdump   ns_cmgd.pgdump           ns_deviceprofiles.pgdump  ns_NorthStarMLO.pgdump  ns_pcs_provision.pgdump  ns_rest.pgdump
    devicemanager.pgdump  dpm.pgdump          job-scope-bkup.yml  ns_db_meta.pgdump        ns_health_monitor.pgdump  ns_pcsadmin.pgdump      ns_pcs_restconf.pgdump   ns_taskscheduler.pgdump
    devicemodel.pgdump    iam.pgdump          jobstore.pgdump     ns_device_config.pgdump  ns_ipe.pgdump             ns_pcs.pgdump           ns_planner.pgdump        paragon_insights.tar.gz
    
    Note:

    Ensure that you maintain a copy of the backed-up data in a location outside the cluster prior to upgrading.
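
    For example, assuming a reachable external host (the backup-host name and /backups path in this sketch are hypothetical placeholders; substitute your own), you could copy the backup directory off the cluster with scp:

    root@primary23.1:/export/local-volumes/pv1# scp -pr paa-2023-10-18 user@backup-host:/backups/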

  5. Install a Paragon Automation Release 24.1 cluster.
    Note:

    If you are using the config.yml file of your older release of Paragon Automation to install Release 24.1, ensure that you comment out kubernetes_master_address in the file.
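
    A minimal sketch of the commented-out line in config.yml (the address shown is illustrative; keep your file's own value):

    #kubernetes_master_address: 10.16.18.100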

  6. Log in to one of the primary nodes.
  7. Check for any errors in the pods using the health-check.sh script.
    root@primary:~# health-check.sh
  8. Execute the backup script to create a dummy backup of your 24.1 configuration. This creates the backup directory structure on the new cluster, which you replace with your Release 23.2 data in the following steps.
    root@primary:~# data.sh --backup
  9. Search for the backup data directory in the backup report, navigate to the data directory, and rename the Release 24.1 backup directory.
    root@primary:~# cd /export/local-volumes/pv1
    root@primary:/export/local-volumes/pv1# mv paa-2023-10-18/ paa-2023-10-18-dummy
    
  10. Copy the Release 23.2 backup data to the Release 24.1 backup data directory.
    root@primary:/export/local-volumes/pv1# scp -prv paa-2023-10-18 10.52.43.112:/export/local-volumes/pv2/
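    The -p, -r, and -v flags preserve file times and permissions, copy the directory recursively, and print verbose progress output, respectively. The destination host and directory shown are examples; use the backup data directory you identified on your Release 24.1 cluster.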
  11. Get your MGD container name:
    root@primary:# kubectl get po -n healthbot | grep mgd
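    The command prints the name of the MGD pod, which you use in the next step. Illustrative output (the hash suffix, status, and age will differ in your cluster):

    mgd-858f4b8c9-sttnh   1/1   Running   0   26h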
  12. Execute the restore script on the Release 24.1 primary node to restore the Release 23.2 backup data.
    root@primary:# kubectl exec -ti -n healthbot mgd-858f4b8c9-sttnh -- cli request system restore path /paa-2023-10-18
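    The /paa-2023-10-18 path in this example refers to the Release 23.2 backup directory name under the backup volume; replace it with the name of your own backup directory.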
  13. Find the restore pod in the common namespace.
    root@primary:# kubectl get po -n common | grep restore
    db-restore-paa-2023-10-18-6znb8
  14. Check the logs from the restore pod.
    root@primary:# kubectl logs -n common db-restore-paa-2023-10-18-6znb8
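    If you prefer to stream the log output instead of re-running the command, you can use kubectl's standard -f (follow) flag:

    root@primary:# kubectl logs -f -n common db-restore-paa-2023-10-18-6znb8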
  15. Follow the logs and refresh the output, looking for the Restore complete message towards the end of the logs.
    2023-10-18 16:01:11,127:DEBUG:pg_restore: creating ACL "metric_helpers.TABLE pg_stat_statements"
    2023-10-18 16:01:11,129:DEBUG:pg_restore: creating ACL "metric_helpers.TABLE table_bloat"
    2023-10-18 16:01:11,131:DEBUG:pg_restore: creating ACL "pg_catalog.TABLE pg_stat_activity"
    2023-10-18 16:01:11,137:INFO:Restore complete
    2023-10-18 16:01:11,388:INFO:Deleted secret ems/jobmanager-identitysrvcreds
    2023-10-18 16:01:11,396:INFO:Deleted secret ems/devicemodel-connector-default-scope-id
    2023-10-18 16:01:11,396:WARNING:Could not restore common/iam-smtp-config, iam-smtp-bkup.yml not found
    2023-10-18 16:01:21,405:DEBUG:Waiting for secrets to be deleted (10/60) sec
    2023-10-18 16:01:21,433:INFO:Created secret ems/jobmanager-identitysrvcreds
    2023-10-18 16:01:21,443:INFO:Created secret ems/devicemodel-connector-default-scope-id
    2023-10-18 16:01:21,444:INFO:Starting northstar applications
    2023-10-18 16:01:22,810:INFO:Starting ems applications
    2023-10-18 16:01:23,164:INFO:Starting auditlog applications
    2023-10-18 16:01:23,247:INFO:Starting iam applications
    
  16. Log in to the Paragon Automation Release 24.1 UI and verify the restored data.