
Increase TimescaleDB PVC Size

29-Jan-25

Use the steps detailed in this topic to manually increase the size of the TimescaleDB persistent volume claim (PVC).

The minimum recommended system requirements to deploy a Paragon Automation cluster are described in Paragon Automation System Requirements. If you want to scale up your cluster, you can increase the hardware resources on each node VM when you create the VMs for the first time, as per your requirement. If you have already deployed your cluster, use the steps in this topic to increase the storage size, specifically the Ceph storage and, consequently, the TimescaleDB PVC.

Ceph storage is used for both object storage and the PVCs for the pods. Approximately 50% of the Ceph storage is allocated to the TimescaleDB PVC.
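
Before you resize anything, you can check the current total Ceph capacity and the current TimescaleDB PVC size. The following is a minimal sketch; the PVC name and namespace vary by deployment, so filter by name on your cluster. For example:

# Total Ceph capacity, as reported from the Rook tools pod
root@vm1:~# kubectl exec -ti -n rook-ceph $(kubectl get po -n rook-ceph -l app=rook-ceph-tools -o jsonpath={..metadata.name}) -- ceph status
# Current TimescaleDB PVC capacity (the PVC name and namespace are deployment-specific)
root@vm1:~# kubectl get pvc -A | grep -i timescale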

To increase the size of the PVC, perform the following steps:

  1. Increase Ceph storage.

  2. Update the allocation quota for the TimescaleDB PVC in Ceph storage.

  3. Increase the TimescaleDB PVC size.

Increase Ceph Storage

Increase the total Ceph storage size.

  1. Increase the virtual disk size from your ESXi server.

    Ceph uses the second disk attached to each node virtual machine (VM). Ensure that you resize the correct virtual disk. You do not need to reboot the VM.

  2. Verify the new virtual disk size.
    1. Log in to the Linux root shell of the node VM.

    2. Use the lsblk command to verify that the OS detects the new disk size. For example:

      root@vm4:~# lsblk
      NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
      loop0    7:0    0   64M  1 loop /snap/core20/2379
      loop1    7:1    0 63.7M  1 loop /snap/core20/2434
      loop2    7:2    0   87M  1 loop /snap/lxd/29351
      loop3    7:3    0 89.4M  1 loop /snap/lxd/31333
      loop4    7:4    0 38.8M  1 loop /snap/snapd/21759
      loop5    7:5    0 44.3M  1 loop /snap/snapd/23258
      sr0     11:0    1    4M  0 rom
      nbd0    43:0    0    0B  0 disk
      nbd1    43:32   0    0B  0 disk
      nbd2    43:64   0    0B  0 disk
      nbd3    43:96   0    0B  0 disk
      nbd4    43:128  0    0B  0 disk
      nbd5    43:160  0    0B  0 disk
      nbd6    43:192  0    0B  0 disk
      nbd7    43:224  0    0B  0 disk
      vda    252:0    0  700G  0 disk
      ├─vda1 252:1    0    1M  0 part
      └─vda2 252:2    0  700G  0 part /var/lib/kubelet/pods/fde8c46d-f069-4203-bd4e-3897d5915559/volume-subpaths/config/network/0
      ....
                                      /export/local-volumes/pv1
                                      /
      vdb    252:16   0   55G  0 disk

      In this example, the OS detects that vdb was increased from 50 GB to 55 GB.

  3. Restart the Ceph OSD so that Ceph detects the size change. For example:
    1. Verify the existing OSD size.

      1. Launch the Rook tools pod.

        root@vm1:~# kubectl exec -ti -n rook-ceph $(kubectl get po -n rook-ceph -l app=rook-ceph-tools -o jsonpath={..metadata.name}) -- bash
      2. Retrieve the current OSD size.
        bash-4.4$ ceph osd status
        ID  HOST USED  AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
         0  vm2  2846M  47.2G      0     2633       0        0   exists,up
         1  vm3  3132M  46.9G      1     28.6k      0     1135k  exists,up
         2  vm4  3065M  47.0G      4     23.9k      3      201k  exists,up
         3  vm1  2897M  47.1G      1     2698       1      979k  exists,up

        In this example, you are modifying the OSD on vm4. Here, the OSD still shows the original size, which is approximately 50 GB (USED + AVAIL).

    2. Go back to the Linux root shell and determine the OSD pod that runs on the node on which you increased the disk size (vm4).

      root@vm1:~# kubectl get pod -A -o wide | grep osd | grep vm4
      ...
      rook-ceph         rook-ceph-osd-2-787df64c87-bkjt8                              2/2     Running            2 (4d3h ago)     13d     10.1.2.8    vm4   <none>           <none>
      ...
    3. Restart the OSD pod by restarting its deployment. The deployment name is the OSD pod name without the replica-set and pod hash suffixes (rook-ceph-osd-2 in this example).

      root@vm1:~# kubectl rollout restart deploy -n rook-ceph rook-ceph-osd-2
    4. Verify the new size.

      1. Launch the Rook tools pod and execute ceph osd status again.

        bash-4.4$ ceph osd status
        ID  HOST USED  AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
         0  vm2  2924M  47.1G      0     4403       0        0   exists,up
         1  vm3  3202M  46.8G      0      426k      0     1140k  exists,up
         2  vm4  1474M  53.5G      0      477       1      186k  exists,up
         3  vm1  2977M  47.0G      3     21.4k      3     1008k  exists,up

        Here, the OSD on vm4 has increased to approximately 55 GB.

      2. Verify that the total size has also increased.

        bash-4.4$ ceph status
        ...
          data:
        ...
            usage:   11 GiB used, 194 GiB / 205 GiB avail

        Here, the total size has increased from 200 GB to 205 GB.

    Note:

    In this example, you increased the size of the Ceph storage on one node VM. We recommend that you increase the size on all the node VMs to keep the storage size consistent across the nodes. Repeat these steps for the remaining three VMs.
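
After you resize the virtual disks on the remaining VMs, you can restart all the Ceph OSD deployments in one pass instead of one at a time. The following is a minimal sketch that assumes the standard Rook label app=rook-ceph-osd on the OSD deployments:

# Restart every OSD deployment so that each OSD re-reads its resized disk.
# Assumes the standard Rook label app=rook-ceph-osd on the OSD deployments.
root@vm1:~# kubectl get deploy -n rook-ceph -l app=rook-ceph-osd -o name | xargs -n1 kubectl -n rook-ceph rollout restart

You can then rerun ceph osd status and ceph status from the Rook tools pod to confirm that each OSD and the total capacity reflect the new sizes.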

Update TimescaleDB PVC Quota

If you increase the total size of the Ceph storage, you must update the allocation quota between object storage and the PVC. Use Paragon Shell to increase the allocation quota. For example:

root@vm1> request paragon deploy cluster input "-t rook-quota"
Process running with PID: 1830232
To track progress, run 'monitor start /epic/config/log'
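
To confirm that the new quota is applied, you can inspect the pool quotas from the Rook tools pod. The following is only a sketch; the name of the pool that backs the TimescaleDB PVC is deployment-specific, so list the pools first and then query the relevant one (<pool-name> is a placeholder):

root@vm1:~# kubectl exec -ti -n rook-ceph $(kubectl get po -n rook-ceph -l app=rook-ceph-tools -o jsonpath={..metadata.name}) -- bash
# List the pools, then query the quota of the pool that backs the TimescaleDB PVC.
bash-4.4$ ceph osd pool ls
bash-4.4$ ceph osd pool get-quota <pool-name>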

Increase TimescaleDB PVC Size

You can increase the PVC size for TimescaleDB by using Paragon Shell. If you are installing Paragon Automation afresh, you can override the default PVC size by adding the set paragon cluster custom-config keyValue paa-timescaledb-storage-size string size configuration before you deploy the Paragon Automation cluster.

  1. Log in to the installer node VM.

  2. Configure the new PVC size in configure mode. For example:

    root@vm1> configure
    Entering configuration mode
    
    [edit]
    root@vm1# set paragon cluster custom-config keyValue paa-timescaledb-storage-size string 40Gi
    
    [edit]
    root@vm1# commit
    warning: *** operating on 10.1.2.8 ***
    warning: *** operation on 10.1.2.8 succeeds ***
    warning: *** operating on 10.1.2.7 ***
    warning: *** operation on 10.1.2.7 succeeds ***
    warning: *** operating on 10.1.2.6 ***
    warning: *** operation on 10.1.2.6 succeeds ***
    commit complete
    
    [edit]
    root@vm1# exit
    Exiting configuration mode
    
    root@vm1> request paragon config
    Paragon inventory file saved at /epic/config/inventory
    Paragon config file saved at /epic/config/config.yml
  3. Deploy the Paragon Automation cluster as usual.
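
After the deployment completes, you can confirm that the PVC reports the new capacity. The following is a minimal sketch; the PVC name and namespace vary by deployment, so filter by name:

# The CAPACITY column should show the value that you configured (40Gi in this example).
root@vm1:~# kubectl get pvc -A | grep -i timescale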
