Upgrade to Paragon Automation Release 23.2
You cannot custom-select the applications to be backed up and restored. You can back up and restore only a preconfigured, fixed set of applications and administration settings for each component. See Backup and Restore for a complete list of applications that can be backed up.
Before You Upgrade:
Upgrade your current release of Paragon Automation to Release 23.1. For information on how to upgrade your current release to Release 23.1, see Upgrade to Paragon Automation Release 23.1.
Upgrade from Release 23.1 to Release 23.2
- Log in to the primary node of your Release 23.1 cluster.
- Check for any errors in the pods using the health-check.sh script.

  root@primary23.1:~# health-check.sh
  =======================================================
  Get node count of Kubernetes cluster.
  =======================================================
  There are 4 nodes in the cluster.
  =======================================================
  Get node status of Kubernetes cluster.
  =======================================================
  4 nodes are in the Ready state.
  NAME          STATUS   ROLES                  AGE   VERSION
  10.16.18.20   Ready    control-plane,master   26h   v1.21.14
  10.16.18.21   Ready    <none>                 26h   v1.21.14
  10.16.18.22   Ready    <none>                 26h   v1.21.14
  10.16.18.23   Ready    <none>                 26h   v1.21.14
  =======================================================
  Get node readiness and taint status of Kubernetes cluster.
  =======================================================
  All 4 nodes are in a Ready state.
  All 4 nodes have no taints.
  ======================================================
  Check DiskPressure status for each node
  ======================================================
  DiskPressure status for each node:
  Node          DiskPressure
  10.16.18.20   False
  10.16.18.21   False
  10.16.18.22   False
  10.16.18.23   False
  ======================================================
  Check Network and Calico status for each node
  ======================================================
  NetworkUnavailable and Calico status for each node:
  Node          NetworkUnavailable   Ready Calico
  10.16.18.20   False                True
  10.16.18.21   False                True
  10.16.18.22   False                True
  10.16.18.23   False                True
  ======================================================
  Checking Memory Pressure status on nodes
  ======================================================
  Node 10.16.18.20 is not reporting any memory pressure issues.
  Node 10.16.18.21 is not reporting any memory pressure issues.
  Node 10.16.18.22 is not reporting any memory pressure issues.
  Node 10.16.18.23 is not reporting any memory pressure issues.
  ======================================================
  Checking PIDPressure on nodes
  ======================================================
  Node 10.16.18.20 is not reporting any PID pressure issues.
  Node 10.16.18.21 is not reporting any PID pressure issues.
  Node 10.16.18.22 is not reporting any PID pressure issues.
  Node 10.16.18.23 is not reporting any PID pressure issues.
  ======================================================
  Checking Kubernetes PODS status
  ======================================================
  No errors found in pods.
  ======================================================
  Checking Kubernetes services status
  ======================================================
  No Kubernetes services found in Pending state.
  ======================================================
  Checking Postgres Status
  ======================================================
  Result includes the NorthStar database schema.
   version |                 description
  ---------+---------------------------------------------
   0.28    | Version 28 of the NorthStar database schema
  (1 row)

  Your pods are healthy if the output contains the "No errors found in pods" message.
- Execute the data.sh --backup script.

  root@primary23.1:~# data.sh --backup
  ===============================Backup Report================================
  Name:             db-backup-paa-2023-10-18
  Namespace:        common
  Selector:         controller-uid=446d45fd-0a7e-4b21-94b1-02f079b11879
  Labels:           apps=db-backup
                    common=db-backup
                    id=paa-2023-10-18
  Annotations:      <none>
  Parallelism:      1
  Completions:      1
  Start Time:       Wed, 18 Oct 2023 08:39:04 -0700
  Completed At:     Wed, 18 Oct 2023 08:39:23 -0700
  Duration:         19s
  Pods Statuses:    0 Running / 1 Succeeded / 0 Failed
  Pod Template:
    Labels:           app=db-backup
                      common=db-backup
                      controller-uid=446d45fd-0a7e-4b21-94b1-02f079b11879
                      id=paa-2023-10-18
                      job-name=db-backup-paa-2023-10-18
    Service Account:  db-backup
    Containers:
     db-backup:
      Image:      localhost:5000/eng-registry.juniper.net/northstar-scm/northstar-containers/ns_dbinit:release-23-1-ge572e4b914
      Port:       <none>
      Host Port:  <none>
      Command:
        /bin/sh
      Args:
        -c
        exec /entrypoint.sh --backup /paa-2023-10-18
      Environment:
        PG_HOST:        atom-db.common
        PG_PORT:        5432
        PG_ADMIN_USER:  <set to the key 'username' in secret 'atom.atom-db.credentials'>  Optional: false
        PG_ADMIN_PASS:  <set to the key 'password' in secret 'atom.atom-db.credentials'>  Optional: false
      Mounts:
        /opt/northstar/data/backup from postgres-backup (rw)
    Volumes:
     postgres-backup:
      Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
      ClaimName:  db-backup-pvc
      ReadOnly:   false
  Events:
    Type    Reason            Age   From            Message
    ----    ------            ----  ----            -------
    Normal  SuccessfulCreate  47m   job-controller  Created pod: db-backup-paa-2023-10-18-95b8j
    Normal  Completed         47m   job-controller  Job completed
  =============================================================================
  Running EMS Backup.
  ===============================Get Backup file location======================
  Name:              local-pv-81fa4ecb
  Labels:            <none>
  Annotations:       pv.kubernetes.io/bound-by-controller: yes
                     pv.kubernetes.io/provisioned-by: local-volume-provisioner-10.16.18.20-b73872bc-257c-4e82-b744-c6981bc3e131
  Finalizers:        [kubernetes.io/pv-protection]
  StorageClass:      local-storage
  Status:            Bound
  Claim:             common/db-backup-pvc
  Reclaim Policy:    Delete
  Access Modes:      RWO
  VolumeMode:        Filesystem
  Capacity:          149Gi
  Node Affinity:
    Required Terms:
      Term 0:        kubernetes.io/hostname in [10.16.18.20]
  Message:
  Source:
      Type:  LocalVolume (a persistent volume backed by local storage on a node)
      Path:  /export/local-volumes/pv1
  Events:    <none>
  =============================================================================
  Running Pathfinder Kubernetes Config Backup.
  =============================================================================
  Saving ns-anuta-rest secret
  Saving ns-anuta-rest configmaps
  =============================Backup Completed================================
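The backup file location reported above comes from the persistent volume bound to the db-backup-pvc claim. As a convenience, the local path can be pulled out of `kubectl describe pv` output instead of reading the report by eye; a sketch, assuming the `Path:` field shown in the report:

```shell
# Sketch (assumption): extract the "Path:" value from
# `kubectl describe pv` output on stdin, e.g. /export/local-volumes/pv1.
backup_path() {
  awk '$1 == "Path:" { print $2; exit }'
}

# Example usage against a live cluster (PV name taken from the report):
#   kubectl describe pv local-pv-81fa4ecb | backup_path
```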
- Search for the backup source in the Backup Report, navigate to that directory, and verify that the files are present.

  root@primary23.1:~# cd /export/local-volumes/pv1
  root@primary:/export/local-volumes/pv1# ls
  paa-2023-10-18
  root@primary23.1:/export/local-volumes/pv1# cd paa-2023-10-18/
  root@primary23.1:/export/local-volumes/pv1/paa-2023-10-18# ls
  auditlog.pgdump       dmc-scope-bkup.yml  jobmanager.pgdump   ns_cmgd.pgdump           ns_deviceprofiles.pgdump  ns_NorthStarMLO.pgdump  ns_pcs_provision.pgdump  ns_rest.pgdump
  devicemanager.pgdump  dpm.pgdump          job-scope-bkup.yml  ns_db_meta.pgdump        ns_health_monitor.pgdump  ns_pcsadmin.pgdump      ns_pcs_restconf.pgdump   ns_taskscheduler.pgdump
  devicemodel.pgdump    iam.pgdump          jobstore.pgdump     ns_device_config.pgdump  ns_ipe.pgdump             ns_pcs.pgdump           ns_planner.pgdump        paragon_insights.tar.gz
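If you prefer a quick scripted sanity check over eyeballing the listing, you can assert that the directory contains the PostgreSQL dumps and the Insights archive. A sketch only; the exact file set depends on your deployment, so treat the checks here as illustrative rather than exhaustive:

```shell
# Sketch (assumption): verify that a backup directory ($1) contains
# at least one .pgdump file and the paragon_insights.tar.gz archive.
verify_backup_dir() {
  dir="$1"
  set -- "$dir"/*.pgdump
  [ -e "$1" ] || { echo "missing pgdump files"; return 1; }
  [ -f "$dir/paragon_insights.tar.gz" ] || { echo "missing paragon_insights.tar.gz"; return 1; }
  echo "backup files present"
}

# Example: verify_backup_dir /export/local-volumes/pv1/paa-2023-10-18
```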
- Install a Paragon Automation Release 23.2 cluster.

  Note: If you are using the config.yml file of your older release of Paragon Automation to install Release 23.2, ensure that you comment out kubernetes_master_address in the file.

- Log in to one of the primary nodes of the Release 23.2 cluster.
- Check for any errors in the pods using the health-check.sh script.

  root@primary:~# health-check.sh
- Execute the backup script to create a dummy backup of your Release 23.2 configuration.

  root@primary:~# data.sh --backup
- Search for the backup data directory in the backup report, navigate to that directory, and rename the Release 23.2 backup directory.

  root@primary:~# cd /export/local-volumes/pv1
  root@primary:/export/local-volumes/pv1# mv paa-2023-10-18/ paa-2023-10-18-dummy
- Copy the Release 23.1 backup data to the Release 23.2 backup data directory.

  root@primary:/export/local-volumes/pv1# scp -prv paa-2023-10-18 10.52.43.112:/export/local-volumes/pv2/
- Get your MGD container name.

  root@primary:# kubectl get po -n healthbot | grep mgd
- Execute the restore script on a Release 23.2 primary node.

  root@primary:# kubectl exec -ti -n healthbot mgd-858f4b8c9-sttnh -- cli request system restore path /paa-2023-10-18
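Because the MGD pod name carries a generated hash suffix (mgd-858f4b8c9-sttnh in the example above), you may prefer to look it up rather than retype it. A sketch, assuming a single mgd pod exists in the healthbot namespace; the name extraction is shown as a standalone function so the kubectl calls stay separate:

```shell
# Sketch (assumption): print the first pod name matching a prefix ($1)
# from `kubectl get po -o name` output on stdin
# (lines of the form "pod/mgd-858f4b8c9-sttnh").
pod_by_prefix() {
  sed -n "s|^pod/\($1[^ ]*\)\$|\1|p" | head -n 1
}

# Example usage against a live cluster:
#   MGD=$(kubectl get po -n healthbot -o name | pod_by_prefix mgd)
#   kubectl exec -ti -n healthbot "$MGD" -- cli request system restore path /paa-2023-10-18
```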
- Find the restore pod in the common namespace.

  root@primary:# kubectl get po -n common | grep restore
  db-restore-paa-2023-10-18-6znb8
- Check the logs from the restore pod.

  root@primary:# kubectl logs -n common db-restore-paa-2023-10-18-6znb8
- Follow the logs and refresh the output, looking for the "Restore complete" message towards the end of the logs.

  2023-10-18 16:01:11,127:DEBUG:pg_restore: creating ACL "metric_helpers.TABLE pg_stat_statements"
  2023-10-18 16:01:11,129:DEBUG:pg_restore: creating ACL "metric_helpers.TABLE table_bloat"
  2023-10-18 16:01:11,131:DEBUG:pg_restore: creating ACL "pg_catalog.TABLE pg_stat_activity"
  2023-10-18 16:01:11,137:INFO:Restore complete
  2023-10-18 16:01:11,388:INFO:Deleted secret ems/jobmanager-identitysrvcreds
  2023-10-18 16:01:11,396:INFO:Deleted secret ems/devicemodel-connector-default-scope-id
  2023-10-18 16:01:11,396:WARNING:Could not restore common/iam-smtp-config, iam-smtp-bkup.yml not found
  2023-10-18 16:01:21,405:DEBUG:Waiting for secrets to be deleted (10/60) sec
  2023-10-18 16:01:21,433:INFO:Created secret ems/jobmanager-identitysrvcreds
  2023-10-18 16:01:21,443:INFO:Created secret ems/devicemodel-connector-default-scope-id
  2023-10-18 16:01:21,444:INFO:Starting northstar applications
  2023-10-18 16:01:22,810:INFO:Starting ems applications
  2023-10-18 16:01:23,164:INFO:Starting auditlog applications
  2023-10-18 16:01:23,247:INFO:Starting iam applications
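Rather than refreshing the log output by hand, you can poll the restore pod logs until the completion message appears. The matcher is shown as a pure function over log text; the string assumed here is the "INFO:Restore complete" line shown in the sample logs above:

```shell
# Sketch (assumption): succeed once the log text on stdin contains
# the "Restore complete" INFO message emitted by the restore pod.
restore_done() {
  grep -q 'INFO:Restore complete'
}

# Example usage against a live cluster (pod name from the earlier step):
#   until kubectl logs -n common db-restore-paa-2023-10-18-6znb8 | restore_done; do
#     sleep 10
#   done
```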
- Log in to the Paragon Automation Release 23.2 UI and verify the restored data.