Manage Time Series Database Settings

Release: Paragon Automation 23.2
21-Mar-24

You can use the Paragon Automation GUI to configure the time series database (TSDB) settings.

To configure TSDB settings:

Warning:

Selecting, deleting, or dedicating TSDB nodes must be done during a maintenance window because some services will be restarted and the Paragon Automation GUI will likely be unresponsive.

  1. Select Configuration > Insights Settings.

    The Insights Settings page appears.

  2. Click Time Series Database.

    The TSDB Settings tabbed page appears.

  3. From the TSDB Settings tabbed page, you can:
    1. Select one or more nodes (from the TSDB Nodes list) to be used as TSDB nodes.

      (The TSDB Nodes list displays the available nodes in the Paragon Automation installation that you can select as TSDB nodes. By default, Paragon Automation automatically selects one node as a TSDB node.)

    2. Set the replication factor by typing a value (or by using the arrows to specify a value) in the Replication Factor text box.

      (The replication factor determines how many copies of the database are needed. The replication factor is set to 1 by default.)

    3. Dedicate nodes as TSDB nodes by clicking the Dedicate toggle to turn it on.

A TSDB node might have more than one microservice running. However, when you dedicate a node as a TSDB node, it runs only the TSDB microservice and stops running all other microservices.

      Note:
      • If the node is associated to a persistent volume (storage in a cluster), then you cannot use that node as a dedicated TSDB node.

• A fail-safe mechanism ensures that you cannot dedicate all Paragon Automation nodes as TSDB nodes.

    4. Ignore system errors (when you remove or replace a failed TSDB node from Paragon Automation) by clicking the Force toggle to turn it on.

      For example, when a TSDB node fails and the replication factor for that node is set to one, the TSDB data for that node is lost. In this scenario, the failed TSDB node must be removed from Paragon Automation. However, when you try to replace the failed node with a new node, the backup of the node fails with a system error because the replication factor was set to one. If you want to proceed with replacing the node, you must turn the Force toggle on.

    5. Delete a node that was previously assigned as a TSDB node by clicking X next to the name of the TSDB node.

      The node is removed as a TSDB node when you deploy the new configuration changes.

  4. Do one of the following:
    • Click Save to only save the configuration changes to the database without applying the changes to the TSDB nodes.

You must commit (or roll back) the configuration changes later. For more information, see Commit or Roll Back Configuration Changes in Paragon Insights.

    • Click Save & Deploy to save configuration changes to the database and to apply the changes to the TSDB nodes.

  5. In the pop-up that appears, click OK to confirm.

    You are returned to the TSDB Settings tabbed page.

Adjust Memory Allocation for TSDB Nodes

By default, all InfluxDB pods are capped at 12 GB of memory. You can adjust the memory allocation for InfluxDB during the installation of the Paragon Automation cluster, in the healthbot namespace after installation, or while adding TSDB nodes.

Note:

Choose one of the following options to increase the memory limit.

  • During Installation

During installation of your Paragon Automation cluster, manually edit the config.yml file to increase the memory limit before you run the deploy command to deploy the cluster. Edit the memory_default_max parameter to set the new limit. For example, to cap the memory at 16 GB, edit config.yml to include memory_default_max: 16Gi. Note that editing config.yml affects all pods, not just InfluxDB.
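The edit can be sketched as follows. The surrounding structure of config.yml varies by release, so treat this as an illustrative fragment rather than the full file:

```yaml
# Fragment of config.yml (illustrative; the surrounding keys depend on
# your Paragon Automation release). The value uses Kubernetes quantity
# notation, where 16Gi means 16 gibibytes.
memory_default_max: 16Gi
```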

  • In the healthbot namespace

    Edit the default limit on the healthbot namespace.

1. root@ns1:~# kubectl edit limitranges -n healthbot memory-limit
    2. Restart the InfluxDB pod for the limit to take effect.

Note that the new memory limit takes effect on a pod (under the healthbot namespace) only when that pod is restarted.
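For reference, a Kubernetes LimitRange of this kind generally resembles the following sketch. The field values shown here are illustrative assumptions, not the exact contents of your cluster's memory-limit object; check the output of the kubectl edit command for the actual values:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: memory-limit       # name used in the kubectl edit command above
  namespace: healthbot
spec:
  limits:
  - type: Container
    default:
      memory: 16Gi         # default memory limit for containers; raise this value
    defaultRequest:
      memory: 50Mi         # default memory request (illustrative)
```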

  • While adding a TSDB node

    Edit the InfluxDB deployment specifications and explicitly add the resource limit, before or after adding a TSDB node.

    1. Determine the InfluxDB pod and deployment name.

      root@ns1:~# kubectl get deploy -n healthbot | grep influx
      influxdb-ns4              1/1     1            1           3d23h
      
      root@ns1:~# kubectl get pod -A | grep influx
      healthbot         influxdb-ns4-678c9b9b47-zpcwz                                    1/1     Running     0                61s
      

There might be more than one InfluxDB deployment if multiple TSDB nodes are present. If there are multiple InfluxDB deployments, perform these steps on each deployment.

    2. Check the current memory limit.

      root@ns1:~# kubectl describe  pod -n healthbot influxdb-ns4-678c9b9b47-zpcwz
      ...
      ...
      Containers:
        influxdb:
          Container ID:   containerd://bf47b5e7cf1cf70c1dba76aa1ccd66c689fab616b130b330feb9f8e20bf4dd51
          Image:          paragon-registry.local/abc.example.net/healthbot-registry/ci/healthbot_influxdb:23.2.0-dv
          Image ID:       paragon-registry.local/abc.example.net/healthbot-registry/ci/healthbot_influxdb@sha256:614750fc042d16ef2fccedc83062248e66770754f397e5284a45d1af09fd81d4
          Ports:          8086/TCP, 8088/TCP
          Host Ports:     0/TCP, 0/TCP
          State:          Running
            Started:      Tue, 24 Oct 2023 16:49:51 +0000
          Ready:          True
          Restart Count:  0
          Limits:
            cpu:     6
            memory:  12Gi
          Requests:
            cpu:     20m
            memory:  50Mi
      
3. Edit the limit on the deployment. For example, change the limit to 16 GB.

      root@ns1:~# kubectl edit deploy -n healthbot influxdb-ns4
      
              image: paragon-registry.local/abc.example.net/healthbot-registry/ci/healthbot_influxdb:23.2.0-dv
              imagePullPolicy: IfNotPresent
              name: influxdb
              resources:
                limits:
                  cpu: "6"
                  memory: 16Gi
                requests:
                  cpu: 20m
                  memory: 50Mi
      

Before you change the limit, the resources field in the deployment specification is empty (resources: {}). This means the limit defaults to what is defined at the healthbot namespace level.

      ...
              image: paragon-registry.local/abc.example.net/healthbot-registry/ci/healthbot_influxdb:23.2.0-dv
              imagePullPolicy: IfNotPresent
              name: influxdb
              ports:
              - containerPort: 8086
                name: http
                protocol: TCP
              - containerPort: 8088
                name: rpc
                protocol: TCP
              resources: {}       
      
      ...
4. Save the changes and exit the editor. The InfluxDB pod is automatically restarted.

    5. Determine the new InfluxDB pod and deployment name.

root@ns1:~# kubectl get deploy -n healthbot | grep influx
root@ns1:~# kubectl get pod -A | grep influx
    6. Verify that the edited limit is reflected in the pod.

root@ns1:~# kubectl describe pod -n healthbot influxdb-pod-name
      ...
      ...
      Containers:
        influxdb:
          Container ID:   containerd://3b364b69021324fe423322a6e999940925e88abe5c1c2230d8ae6a4236352303
          Image:          paragon-registry.local/abc.example.net/healthbot-registry/ci/healthbot_influxdb:23.2.0-dv
          Image ID:       paragon-registry.local/abc.example.net/healthbot-registry/ci/healthbot_influxdb@sha256:614750fc042d16ef2fccedc83062248e66770754f397e5284a45d1af09fd81d4
          Ports:          8086/TCP, 8088/TCP
          Host Ports:     0/TCP, 0/TCP
          State:          Running
            Started:      Tue, 24 Oct 2023 16:52:10 +0000
          Ready:          True
          Restart Count:  0
          Limits:
            cpu:     6
            memory:  16Gi
          Requests:
            cpu:     20m
            memory:  50Mi