Backup and Restore

This topic describes the backup and restore capabilities available in Paragon Automation. Although Paragon Automation is a GUI-based application, the backup and restore operations are managed from the Paragon Insights cMGD CLI. Postgres is the primary persistent storage database for microservices. Backup files are saved in a local persistent volume on the cluster nodes. The backup procedure can be performed while microservices are running and does not affect the operation of the cluster. However, for restore procedures, microservices are stopped and the cluster is not functional until the databases are restored.

Currently, you cannot select specific applications to back up and restore. You can back up and restore only a preconfigured and fixed set of applications and administration settings for each component, as listed in Table 1.

Table 1: Fixed Set of Backup Configuration Settings

  • Devices
  • Alerts/Alarm Settings
  • Admin Groups
  • Topics
  • Plot Settings
  • User Defined Actions and Functions
  • Playbooks
  • Summarization Profiles
  • Auditlogs
  • Device Groups
  • Ingest Settings
  • Topology Filter Configuration
  • Network Groups
  • SNMP Proxy Configuration
  • Pathfinder Settings
  • Notification Settings
  • IAM Settings
  • LSP Policies and Profiles
  • Retention Policies
  • Workflows
  • Report Generation Settings (Destination, Report and Scheduler Settings)

The backup procedure has the following limitations:

  • Telemetry data—Data captured from devices is not backed up by default. You must back up telemetry data manually.

    For more information, see Backup and Restore the TSDB.

  • Transient and logging data—Data that is still being processed and events that have expired are not backed up. For example:

    • Alerts and alarms generated

    • Configuration changes which are not committed

    • Most application logs

  • Non-Paragon Automation configuration—Configuration performed on third-party services supported by Paragon Automation is not backed up. For example:

    • LDAP user details

  • Topology ingest configuration—The cRPD configuration to peer with BGP-LS routers for topology information is not backed up. You must manually reconfigure it as required. For more information, see Modify cRPD Configuration.

The backup and restore procedures are implemented through containerized scripts that are invoked as Kubernetes jobs.

You can manually back up your cluster using the instructions described in Back Up the Configuration, or use a backup script as described in Backup and Restore Scripts.

Similarly, you can manually restore a backed-up configuration using the instructions described in Restore the Configuration, or use a restore script as described in Backup and Restore Scripts.

Figure 1: Backup and Restore Process

For Paragon Automation Release 23.2, you can restore a backed-up configuration from an earlier release of Paragon Automation only after you perform a dummy backup of a fresh Release 23.2 installation. To use the restore operation on a Release 23.2 cluster, we recommend that you:

  1. Upgrade your current Paragon Automation cluster to Release 23.1.

  2. Back up the Release 23.1 configuration.

  3. Install a Release 23.2 cluster.

  4. Back up the Release 23.2 cluster.

  5. Copy the Release 23.1 configuration to the Release 23.2 backup location.

  6. Restore the copied configuration.

Back Up the Configuration

Data across most Paragon Automation applications is primarily stored in Postgres. When you back up a configuration, a system-determined, predefined set of data is backed up. A backup does not affect the operational system or the microservices; you can continue to use Paragon Automation while a backup is running. You use the management daemon (MGD) CLI, managed by Paragon Insights (formerly Healthbot), to perform the backup.

To back up the current Paragon Automation configuration:

  1. Determine the name of the MGD Kubernetes pod, and connect to the cMGD CLI using this name.

    For example, you can use kubectl commands similar to those in the sketch that follows this procedure.

    Note:

    The main CLI tool for Kubernetes is kubectl, which is installed on a primary node. You can use a node other than the primary node, but you must copy the admin.conf file to that node and set the KUBECONFIG environment variable, for example, by using the export KUBECONFIG=config-dir/admin.conf command.

    You can also access the Kubernetes API from any node that has access to the cluster, including the control host.

  2. Enter the request system backup path path-to-backup-folder command to start a backup job that backs up all databases up until the moment you run the command.

    For example, see the command sketch that follows this procedure.

    The command creates a corresponding Kubernetes job named db-backup-path-to-backup-folder; for example, a backup folder named hello-world results in a db-backup-hello-world job. The Kubernetes job creates a backup of the predefined data. The files are stored in a local persistent volume.

  3. After the backup is complete, you must explicitly and manually back up the base platform resources using kubectl (see the sketch that follows this procedure).
    1. Back up the jobmanager-identitysrvcreds and devicemodel-connector-default-scope-id secrets.
    2. (Optional) If SMTP is configured on the Paragon Automation cluster, then back up the available iam-smtp-config secret.

      If this command fails, then SMTP is not configured in the cluster and you can ignore the error.
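
The following is a minimal sketch of this procedure, not a definitive command reference. The healthbot namespace, the cli command inside the MGD container, and the namespace that holds the base-platform secrets are assumptions; substitute the values for your cluster. Names in angle brackets are placeholders, and hello-world is the example backup folder name used above.

List the Paragon Insights pods and note the name of the MGD pod (the healthbot namespace is an assumption):

    # kubectl get pods -n healthbot | grep mgd

Connect to the cMGD CLI in that pod (the cli command inside the container is an assumption):

    # kubectl exec -it <mgd-pod-name> -n healthbot -- cli

From the cMGD CLI (shown here with a > prompt), start the backup job:

    > request system backup path hello-world

Back in the node shell, save the base-platform secrets as JSON files. Use the first command to find the namespace that holds these secrets in your cluster, then substitute it for <namespace>:

    # kubectl get secret -A | grep -e jobmanager-identitysrvcreds -e devicemodel-connector-default-scope-id -e iam-smtp-config
    # kubectl get secret jobmanager-identitysrvcreds -n <namespace> -o json > jobmanager-identitysrvcreds.json
    # kubectl get secret devicemodel-connector-default-scope-id -n <namespace> -o json > devicemodel-connector-default-scope-id.json
    # kubectl get secret iam-smtp-config -n <namespace> -o json > iam-smtp-config.json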

Frequently Used kubectl Commands to View Backup Details

To view the status of your backup, the location of your backup files, or more information about the backup files, use the following commands (illustrated in the sketch after this list).

  • Backup jobs exist in the common namespace and use the common=db-backup label. To view all backup jobs:

  • To view more details of a specific Kubernetes job:

  • To view the logs of a specific Kubernetes job:

  • To determine the location of the backup files:

    The output points you to the local persistent volume. Use that persistent volume to determine the node on which the backup files are stored.

    To view all the backup files, log in to the node and navigate to the location of the backup folder.
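
A minimal sketch of these commands follows, using the example db-backup-hello-world job name from the procedure above. The persistent-volume name is a placeholder, and inspecting the persistent volume with kubectl describe pv is one way, under the assumption of a local persistent volume, to see the node and directory that hold the backup files.

    # kubectl get jobs -n common -l common=db-backup
    # kubectl describe job db-backup-hello-world -n common
    # kubectl logs -n common job/db-backup-hello-world
    # kubectl get pv
    # kubectl describe pv <pv-name>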

To view commonly seen backup and restore failure scenarios, see Common Backup and Restore Issues.

Restore the Configuration

You can restore a Paragon Automation configuration from a previously backed-up configuration folder. A restore operation rewrites the databases with all the backed-up configuration information. You cannot selectively restore databases. When you perform a restore operation, a Kubernetes job is spawned, which stops the affected microservices. The job restores the backed-up configuration and restarts the microservices. Paragon Automation remains nonfunctional until the restoration procedure is complete.

You cannot run multiple restore jobs at the same time because the Kubernetes job stops the microservices during the restoration process. Also, you cannot run both backup and restore processes concurrently.

Note:

We strongly recommend that you restore a configuration during a maintenance window; otherwise, the system can go into an inconsistent state.

To restore the Paragon Automation configuration to a previously backed-up configuration:

  1. Determine the name of the MGD Kubernetes pod, and connect to the cMGD CLI using this name.

    For example, you can use kubectl commands similar to those in the sketch that follows this procedure.

  2. Enter the request system restore path path-to-backup-folder command to restore the configuration with the files in the specified backup folder on the persistent volume.

    For example, see the command sketch that follows this procedure.

    A corresponding Kubernetes db-restore-hello-world job is created. The restore process takes longer than a backup process because the Kubernetes job stops and restarts the microservices. When the restoration is complete, the Paragon Automation system is not immediately operational. You must wait around ten minutes for the system to stabilize and become fully functional.

    Note:

    If you are logged in during the restore process, you must log out and log back in after the restore process is complete.

  3. After the restore process is complete, you must explicitly restore the base platform resources using the base-platform files that you backed up manually earlier (see the sketch that follows this procedure).
    1. Delete the jobmanager-identitysrvcreds and devicemodel-connector-default-scope-id base-platform secrets resources.
    2. Restore the previously backed-up base-platform resources.
    3. Restart the jobmanager and devicemodel-connector base-platform services.
    4. (Optional) If SMTP is configured on the Paragon Automation cluster, delete the current SMTP secrets file and restore from the previously backed-up file.
    5. (Optional) Delete the manually backed-up files if you have a nightly backup schedule or if you have already restored from a particular file and no longer need them.
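
The following is a minimal sketch of restoring the base-platform resources, assuming you saved the secrets as JSON files as shown in the backup sketch. The <namespace> placeholder and the assumption that jobmanager and devicemodel-connector run as Deployments are not confirmed by this topic; substitute the actual namespace and workload types for your cluster.

Delete the existing secrets and restore them from the previously saved files:

    # kubectl delete secret jobmanager-identitysrvcreds devicemodel-connector-default-scope-id -n <namespace>
    # kubectl apply -f jobmanager-identitysrvcreds.json -f devicemodel-connector-default-scope-id.json

Restart the jobmanager and devicemodel-connector services (shown here as Deployment rollout restarts):

    # kubectl rollout restart deployment jobmanager -n <namespace>
    # kubectl rollout restart deployment devicemodel-connector -n <namespace>

(Optional) If SMTP is configured, replace the SMTP secret as well:

    # kubectl delete secret iam-smtp-config -n <namespace>
    # kubectl apply -f iam-smtp-config.json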

Frequently Used kubectl Commands to View Restore Details

To view the status of your restore process and more information about it, use the following commands (illustrated in the sketch after this list):

  • Restore jobs exist in the common namespace and use the common=db-restore label. To view all restore jobs:

  • To view more details of a specific Kubernetes job:

  • To view the logs of a particular Kubernetes job:
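
A minimal sketch of these commands follows, using the example db-restore-hello-world job name from the restore procedure:

    # kubectl get jobs -n common -l common=db-restore
    # kubectl describe job db-restore-hello-world -n common
    # kubectl logs -n common job/db-restore-hello-world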

To view commonly seen backup and restore failure scenarios, see Common Backup and Restore Issues.

Backup and Restore Scripts

You can also use the Paragon Automation backup and restore scripts to simplify the backup and restore operations. This topic describes how the backup and restore scripts operate and the caveats for using them.

Backup Script Operation

The backup script automatically backs up your current configuration. The primary benefit of the backup script is that you can run it as a cron job at the required frequency to schedule regular backups (see the sketch that follows). Additionally, the backup script creates date-stamped backup folders, so folders created on different days are not overwritten.
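
For example, a crontab entry similar to the following sketch runs the backup script nightly at 2:00 AM. The script path /root/paragon-backup.sh and the log file location are hypothetical placeholders; use the actual name and location of the backup script in your installation.

    # crontab -e

Then add a line such as:

    0 2 * * * /root/paragon-backup.sh >> /var/log/paragon-backup.log 2>&1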

To back up your configuration using the backup script:

  1. Log in to any one of the primary nodes.

  2. Execute the backup script.

The script runs a backup job to back up your current configuration. A backup folder is created and saved in a local persistent volume on one of the cluster nodes. The folder name is in the <name>-year_month_day format. The folder on your cluster node contains all your backed-up configuration metadata.

The script also creates a folder of the same name in the current path on your primary node. The backup folder on your primary node contains the base-platform JSON files that are required when you restore the backed-up configuration.

As the script runs, a backup summary is generated and displayed onscreen. The summary lists the node and location of the backup files. For example, the summary might show that the backup folder containing all the backup metadata is stored on the cluster node with IP address 10.16.18.20, in the /export/local-volumes/pv1 folder.
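
Continuing this example, you can log in to that node and list the backup folders. The root login shown below is an assumption; use the appropriate credentials for your node.

    # ssh root@10.16.18.20
    # ls /export/local-volumes/pv1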

Restore Script Operation

The restore script automatically restores your backed-up configuration.

To restore your configuration using the restore script (see the sketch after this procedure):

  1. Log in to any one of the primary nodes.

  2. Get your MGD container name:

  3. Execute the restore command.

  4. Find the restore pod in the common namespace.

  5. Check the logs of the restore pod.

  6. Follow the logs and refresh until you see Restore Complete toward the end of the logs.

  7. Log in to the Release 23.2 UI and verify the restored data.
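
A minimal sketch of these steps follows, assuming that the restore command referred to in step 3 is the request system restore path command run from the cMGD CLI. The healthbot namespace and the cli command inside the MGD container are assumptions, names in angle brackets are placeholders, and hello-world matches the earlier restore example.

Get the MGD pod name and open the cMGD CLI:

    # kubectl get pods -n healthbot | grep mgd
    # kubectl exec -it <mgd-pod-name> -n healthbot -- cli

Run the restore command from the cMGD CLI (shown here with a > prompt):

    > request system restore path hello-world

Find the restore pod in the common namespace and follow its logs until you see Restore Complete:

    # kubectl get pods -n common | grep db-restore
    # kubectl logs -f -n common <db-restore-pod-name>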

Caveats of Backup and Restore Scripts

The caveats of the backup and restore scripts are as follows:

  • You can run the scripts weekly or once daily, but not multiple times in a 24-hour period. Running them more than once in a 24-hour period returns an error because a backup folder named <name>-year_month_day already exists for that day. If you need to take another manual backup in the same 24-hour period, you must first remove the existing job using the kubectl delete -n common jobs command. For example:

    # kubectl delete -n common jobs db-backup-paa-2023_20_04

  • Depending on how frequently you run the scripts and the size of the backup files, the backup files can fill up disk space. Consider removing outdated backup metadata and files to free up disk space. You can remove the Kubernetes metadata using the kubectl delete -n common jobs command. For example:

    # kubectl delete -n common jobs db-backup-paa-2023_20_04

    You can remove the backup files by deleting the <name>-year_month_day folders that are created in the /root/ folder in the local volume path displayed in the summary when you run the backup script (see the sketch after this list).
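
For example, to remove an old backup folder, log in to the node shown in the backup summary and delete the folder. The sketch below reuses the node IP address and local volume path from the earlier backup summary example, and the folder name is the document's placeholder format; verify the actual node, path, and folder name from your own backup summary before deleting anything.

    # ssh root@10.16.18.20
    # rm -rf /export/local-volumes/pv1/<name>-year_month_day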