Juniper Apstra 5.1.0 User Guide

Restore Apstra Database

Release: Juniper Apstra 5.1
21-Mar-25
CAUTION:

Always restore a database from a new backup, never from older backups or from the backup included in a show_tech.

When you restore a database, the worker VMs will go into a failed state. This problem also occurs when you restore a backup to another worker VM with the same IP address. To fix this issue, add the worker VMs again.

If you make changes after you back up the database, those changes aren't included in the restore. This could create differences between device configs and the Apstra environment. If this happens, you must perform a full config push, which is service-impacting.

Don't restore a database using the backup included in a show_tech. Juniper Support and Engineering use it for analysis. It doesn't include credentials, so it's not suitable for restoring your production environment.

Note:

If you're restoring a backup to a new Apstra server that uses a different network interface for access (eth1 vs eth0 for example), you must update the metadb variable in the [controller] section of the /etc/aos/aos.conf configuration file, then restart the Apstra server.
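For illustration only, here's a hedged sketch of that change. The exact value of the metadb entry depends on your installation, so copy the format of the existing line and change only the address; the eth1 interface name and the use of service aos restart to restart the server are assumptions for this example.

    admin@aos-server:~$ ip -4 addr show dev eth1     # confirm the IP address bound to the new access interface (eth1 here)
    admin@aos-server:~$ sudo vi /etc/aos/aos.conf
    # In the [controller] section, point the metadb variable at the new interface's
    # address, keeping the rest of the existing value unchanged (placeholder shown):
    #   [controller]
    #   metadb = <existing value, with the eth0 address replaced by the eth1 address>
    admin@aos-server:~$ sudo service aos restart     # restart the Apstra server so the change takes effect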

  1. Backups are saved in dated snapshot directories. Verify that you have a fresh backup in the /var/lib/aos/snapshot/ directory.
    admin@aos-server:~$ sudo ls -lah /var/lib/aos/snapshot/
    total 12K
    drwx------  3 root root 4.0K Dec 19 21:24 .
    drwxr-xr-x 13 root root 4.0K Dec 19 21:24 ..
    drwx------  3 root root 4.0K Dec 19 21:24 2023-12-19_21-24-10
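    If no recent snapshot exists, create one first with the aos_backup utility, as described in the companion backup procedure (shown here only as a reminder; run it before continuing):
    admin@aos-server:~$ sudo aos_backup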
  2. The file name must be aos.data.tar.gz. Verify the file name and correct it if needed.
    admin@aos-server:~$ sudo ls -lah /var/lib/aos/snapshot/2023-12-19_21-24-10
    total 125M
    drwx------ 3 root root 4.0K Dec 19 21:24 .
    drwx------ 3 root root 4.0K Dec 19 21:24 ..
    -rw------- 1 root root 125M Dec 19 21:24 aos.data.tar.gz
    -rwxr-xr-x 1 root root 2.6K Dec 19 21:24 aos_restore
    -rw------- 1 root root    1 Dec 19 21:24 comment.txt
    drwx------ 2 root root 4.0K Dec 19 21:24 metadata
    admin@aos-server:~$ 
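    If the archive was saved under a different name, rename it in place so that it is exactly aos.data.tar.gz (the aos.data.old.tar.gz name below is purely hypothetical):
    admin@aos-server:~$ sudo mv /var/lib/aos/snapshot/2023-12-19_21-24-10/aos.data.old.tar.gz \
          /var/lib/aos/snapshot/2023-12-19_21-24-10/aos.data.tar.gz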
  3. Run the aos_restore command as illustrated below. The restore process first backs up the current database.
    admin@aos-server:~$ sudo bash /var/lib/aos/snapshot/2023-12-19_21-24-10/aos_restore
    Including secret keys from the backup
    Include all sysdb files
    New AOS snapshot: 2023-12-19_21-49-08
    [+] Running 5/5
     ⠿ Container aos_controller_1  Stopped                                                                11.1s
     ⠿ Container aos_auth_1        Stopped                                                                11.0s
     ⠿ Container aos_metadb_1      Stopped                                                                11.0s
     ⠿ Container aos_sysdb_1       Stopped                                                                11.0s
     ⠿ Container aos_nginx_1       Stopped                                                                 0.7s
    (Reading database ... 83704 files and directories currently installed.)
    Removing aos-compose (4.2.0-236) ...
    tar: Removing leading `/' from member names
    /var/lib/aos/db/
    /var/lib/aos/db/_Main-00000000656e68c2-000bc9e4-log
    /var/lib/aos/db/_Auth-00000000656e68be-000eacab-log-valid
    /var/lib/aos/db/_Auth-000000006553e3a7-0000be2f-log-valid
    /var/lib/aos/db/_Central-00000000656e68b4-0002ce01-checkpoint
    /var/lib/aos/db/_AosController-00000000656e68b9-000bbbf0-log
    /var/lib/aos/db/_Central-000000006553e3a5-00064668-log-valid
    /var/lib/aos/db/_Main-000000006553e3aa-00052829-log
    /var/lib/aos/db/_Auth-000000006553e3a7-0000be2f-checkpoint
    /var/lib/aos/db/_Main-00000000656e68c2-000bc9e4-checkpoint
    /var/lib/aos/db/_Central-00000000656e68b4-0002ce01-log
    /var/lib/aos/db/_AosSysdb-00000000656e68aa-0000ee5d-log
    /var/lib/aos/db/_Auth-00000000656e68be-000eacab-log
    /var/lib/aos/db/_Main-000000006553e3aa-00052829-checkpoint-valid
    /var/lib/aos/db/_AosController-00000000656e68b9-000bbbf0-checkpoint
    /var/lib/aos/db/.devpi/
    /var/lib/aos/db/.devpi/server/
    /var/lib/aos/db/.devpi/server/.event_serial
    /var/lib/aos/db/.devpi/server/.serverversion
    /var/lib/aos/db/.devpi/server/.sqlite
    /var/lib/aos/db/.devpi/server/.nodeinfo
    /var/lib/aos/db/_AosSysdb-00000000656e68aa-0000ee5d-log-valid
    /var/lib/aos/db/_Central-00000000656e68b4-0002ce01-log-valid
    /var/lib/aos/db/_Central-000000006553e3a5-00064668-checkpoint-valid
    /var/lib/aos/db/_Main-000000006553e3aa-00052829-checkpoint
    /var/lib/aos/db/_AosSysdb-00000000656e68aa-0000ee5d-checkpoint
    /var/lib/aos/db/_Metadb-00000000656e68a9-000c719b-log
    /var/lib/aos/db/_Metadb-00000000656e68a9-000c719b-log-valid
    /var/lib/aos/db/_AosAuth-00000000656e68a9-0007cb45-log-valid
    /var/lib/aos/db/_Auth-000000006553e3a7-0000be2f-log
    /var/lib/aos/db/_Main-000000006553e3aa-00052829-log-valid
    /var/lib/aos/db/_AosAuth-00000000656e68a9-0007cb45-checkpoint
    /var/lib/aos/db/_Central-00000000656e68b4-0002ce01-checkpoint-valid
    /var/lib/aos/db/_Central-000000006553e3a5-00064668-log
    /var/lib/aos/db/_Auth-000000006553e3a7-0000be2f-checkpoint-valid
    /var/lib/aos/db/_Metadb-00000000656e68a9-000c719b-checkpoint
    /var/lib/aos/db/_Main-00000000656e68c2-000bc9e4-checkpoint-valid
    /var/lib/aos/db/_AosAuth-00000000656e68a9-0007cb45-log
    /var/lib/aos/db/_AosController-00000000656e68b9-000bbbf0-checkpoint-valid
    /var/lib/aos/db/_Auth-00000000656e68be-000eacab-checkpoint
    /var/lib/aos/db/_Metadb-00000000656e68a9-000c719b-checkpoint-valid
    /var/lib/aos/db/_Auth-00000000656e68be-000eacab-checkpoint-valid
    /var/lib/aos/db/_AosController-00000000656e68b9-000bbbf0-log-valid
    /var/lib/aos/db/_AosSysdb-00000000656e68aa-0000ee5d-checkpoint-valid
    /var/lib/aos/db/_AosAuth-00000000656e68a9-0007cb45-checkpoint-valid
    /var/lib/aos/db/_Central-000000006553e3a5-00064668-checkpoint
    /var/lib/aos/db/_Main-00000000656e68c2-000bc9e4-log-valid
    /var/lib/aos/anomaly/
    /var/lib/aos/anomaly/_Anomaly-00000000650916f3-000e3d9b-checkpoint
    /var/lib/aos/anomaly/_Anomaly-00000000656e68bf-0006052e-checkpoint
    /var/lib/aos/anomaly/_Anomaly-00000000650916f3-000e3d9b-checkpoint-valid
    /var/lib/aos/anomaly/_Anomaly-000000006553e3a7-0004794b-checkpoint
    /var/lib/aos/anomaly/_Anomaly-000000006553e3a7-0004794b-log-valid
    /var/lib/aos/anomaly/_Anomaly-00000000656e68bf-0006052e-checkpoint-valid
    /var/lib/aos/anomaly/_Anomaly-00000000650916f3-000e3d9b-log
    /var/lib/aos/anomaly/_Anomaly-00000000656e68bf-0006052e-log-valid
    /var/lib/aos/anomaly/_Anomaly-000000006553e3a7-0004794b-checkpoint-valid
    /var/lib/aos/anomaly/_Anomaly-00000000656e68bf-0006052e-log
    /var/lib/aos/anomaly/_Anomaly-000000006553e3a7-0004794b-log
    /var/lib/aos/anomaly/_Anomaly-00000000650916f3-000e3d9b-log-valid
    /etc/aos/aos.conf
    /etc/aos-img-chksum/
    /etc/aos-img-chksum/checksums
    /etc/aos-img-chksum/key.pub
    /etc/aos-img-chksum/checksums.signed
    /opt/aos/aos-compose.deb
    /opt/aos/frontend_images/
    /opt/aos/frontend_images/jinja_docs.zip
    /opt/aos/frontend_images/aos-web-ui.zip
    /opt/aos/frontend_images/sdt_docs.zip
    /etc/aos/version
    /etc/aos-auth/secret_key
    /etc/aos-credential/secret_key
    Selecting previously unselected package aos-compose.
    (Reading database ... 83670 files and directories currently installed.)
    Preparing to unpack /opt/aos/aos-compose.deb ...
    Unpacking aos-compose (4.2.0-236) ...
    Setting up aos-compose (4.2.0-236) ...
    Verifying checksums for docker images...
    Signature Verified Successfully
    Verified.
    [+] Running 5/5
     ⠿ Container aos_auth_1        Started                                                                 0.5s
     ⠿ Container aos_metadb_1      Started                                                                 0.7s
     ⠿ Container aos_sysdb_1       Started                                                                 0.4s
     ⠿ Container aos_controller_1  Started                                                                 0.5s
     ⠿ Container aos_nginx_1       Started                                                                 0.4s
    admin@aos-server:~$ 
  4. When the database has been restored and migrated to a new server, the entire system state is copied from the backed-up installation to the new target. Run the service aos status command to validate the restoration.
    admin@aos-server:~$ sudo service aos status
    ● aos.service - LSB: Start AOS management system
         Loaded: loaded (/etc/init.d/aos; generated)
         Active: active (exited) since Tue 2023-12-05 00:02:46 UTC; 2 weeks 0 days ago
           Docs: man:systemd-sysv-generator(8)
            CPU: 541ms
    
    Dec 05 00:02:45 aos-server aos[1112]: Container aos_nginx_1  Starting
    Dec 05 00:02:45 aos-server aos[1112]: Container aos_metadb_1  Starting
    Dec 05 00:02:45 aos-server aos[1112]: Container aos_auth_1  Starting
    Dec 05 00:02:45 aos-server aos[1112]: Container aos_sysdb_1  Starting
    Dec 05 00:02:46 aos-server aos[1112]: Container aos_auth_1  Started
    Dec 05 00:02:46 aos-server aos[1112]: Container aos_sysdb_1  Started
    Dec 05 00:02:46 aos-server aos[1112]: Container aos_metadb_1  Started
    Dec 05 00:02:46 aos-server aos[1112]: Container aos_controller_1  Started
    Dec 05 00:02:46 aos-server aos[1112]: Container aos_nginx_1  Started
    Dec 05 00:02:46 aos-server systemd[1]: Started LSB: Start AOS management system.
    admin@aos-server:~$ 
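    As an optional extra check (not part of the documented procedure), you can confirm that all five aos_* containers listed in the restore output are running again:
    admin@aos-server:~$ sudo docker ps --format '{{.Names}}\t{{.Status}}' | grep aos_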
  5. The database is stored on the Apstra server itself. If the server must be rebuilt or its disk image becomes corrupted, any backups stored there are lost along with the Apstra server. We recommend that you periodically move backups off the Apstra server to a secure location. Also, if you've scheduled cron jobs to back up the database periodically, rotate those files off the Apstra server so that the Apstra server VM disk doesn't fill up. Copy the contents of the snapshot directory to your backup infrastructure, as shown in the example after this procedure.
    admin@aos-server:~$ sudo ls -lah /var/lib/aos/snapshot/
    total 32K
    drwx------  8 root root 4.0K Jun 29 19:31 .
    drwxr-xr-x 13 root root 4.0K Jun 29 19:32 ..
    drwx------  3 root root 4.0K Jun 29 15:44 2023-12-19_21-24-10
    drwx------  3 root root 4.0K Jun 29 15:45 2023-12-19_15-45-37
    drwx------  3 root root 4.0K Jun 29 16:21 2023-12-19_16-21-36
    drwx------  3 root root 4.0K Jun 29 18:11 2023-12-19_18-11-34
    drwx------  3 root root 4.0K Jun 29 18:40 2023-12-19_18-40-03
    drwx------  3 root root 4.0K Jun 29 19:31 2023-12-19_19-31-43
    admin@aos-server:~$ 
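A minimal sketch of moving a snapshot off the server, assuming a reachable backup host; the backup-host.example.com name, the backup account, and the /backups/apstra destination are placeholders for your own backup infrastructure.

    admin@aos-server:~$ sudo tar czf /tmp/apstra-snapshot-2023-12-19_21-24-10.tar.gz \
          -C /var/lib/aos/snapshot 2023-12-19_21-24-10
    admin@aos-server:~$ scp /tmp/apstra-snapshot-2023-12-19_21-24-10.tar.gz \
          backup@backup-host.example.com:/backups/apstra/
    admin@aos-server:~$ sudo rm /tmp/apstra-snapshot-2023-12-19_21-24-10.tar.gz
    # Once the copy is confirmed, you can remove older snapshot directories on the
    # Apstra server to free disk space, keeping at least the most recent one.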