Paragon Automation System Requirements
Before you install the Paragon Automation software, ensure that your system meets the requirements that we describe in these sections.
To determine the resources required to implement Paragon Automation, you must understand the fundamentals of its underlying infrastructure.
Paragon Automation is a collection of microservices that interact with one another through APIs and run within containers in a Kubernetes cluster. A Kubernetes cluster is a set of nodes or machines running containerized applications. Each node is a single machine, either physical (bare-metal server) or virtual (virtual machine).
The nodes within a cluster implement different roles or functions depending on which Kubernetes components are installed. During installation, you specify the role of each node, and the installation playbooks install the corresponding components on each node.
- Control plane (primary) node—Monitors the state of the cluster, manages the worker nodes, schedules application workloads, and manages the life cycle of the workloads.
- Compute (worker) node—Performs tasks that the control plane node assigns, and hosts the pods and containers that execute the application workloads. Each worker node hosts one or more pods, which are collections of containers.
- Storage node—Provides storage for objects, blocks, and files within the cluster. In Paragon Automation, Ceph provides storage services in the cluster. A storage node must also be a worker node, although not every worker node needs to provide storage.
For detailed information on minimum configuration for primary, worker, and storage nodes, see Paragon Automation Implementation and Hardware Requirements.
A Kubernetes cluster comprises one or more primary nodes and one or more worker nodes. A single node can function as both primary and worker if the components required for both roles are installed on the same node.
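For orientation, after the cluster is installed you can list the nodes and the roles that Kubernetes assigned to them. The following is a generic kubectl sketch run from a primary node; the node names and versions in the sample output are illustrative only and do not come from a specific Paragon Automation deployment.

    # List the cluster nodes and their roles (run on a primary node).
    kubectl get nodes -o wide

    # Illustrative output (names, ages, and versions are placeholders):
    # NAME        STATUS   ROLES                  AGE   VERSION
    # primary-1   Ready    control-plane,master   10d   v1.2x.x
    # worker-1    Ready    <none>                 10d   v1.2x.x
    # worker-2    Ready    <none>                 10d   v1.2x.x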
To determine the following cluster parameters, you need to consider the intended capacity of the system (number of devices, LSPs, and so on), the required level of availability, and the expected system performance:
- Total number of nodes (virtual or physical) in the cluster
- Amount of resources on each node (CPU, memory, and disk space)
- Number of nodes acting as primary, worker, and storage nodes
Paragon Automation Implementation
Paragon Automation is implemented on top of a Kubernetes cluster, which consists of one or more primary nodes and one or more worker nodes. At minimum, one primary node and one worker node are required for a functional cluster. Paragon Automation can be implemented in two different ways:
- Single-node implementation—A single-node implementation comprises one node, either a virtual machine (VM) or a bare-metal server (BMS), acting as primary, worker, and storage node. When you install Paragon Automation with a single node, you must configure the node as "master" in the inventory.yml file and also select Master Scheduling during installation. For more information, see Install Single-Node Cluster on Ubuntu, Install Single-Node Cluster on CentOS, or Install Single-Node Cluster on Red Hat Enterprise Linux.

  Note: Implement Paragon Automation as a single-node setup only for limited lab (learning or demo) purposes, not for customer POC or production deployments.

  A single-node implementation is not recommended because of its limited performance and the potential for application or service failures as the number of managed devices, the number of Paragon Insights playbooks and rules, or the amount of telemetry data to be processed increases.
- Multinode implementation—A multinode implementation comprises multiple nodes, either VMs or BMSs, where at least one node acts as primary and at least three nodes act as workers and provide storage. This implementation not only improves performance but also allows for high availability within the cluster:

  - Control plane high availability—For control plane redundancy, you must have a minimum of three primary nodes. You can add more primary nodes as long as the total number of primary nodes is an odd number.

    When you install Paragon Automation with multiple primary nodes, you must configure a Kubernetes Master Virtual IP address and select the Install Loadbalancer for Master Virtual IP address option during installation. For more information, see Install Multinode Cluster on Ubuntu, Install Multinode Cluster on CentOS, or Install Multinode Cluster on Red Hat Enterprise Linux.

  - Workload high availability—For workload high availability and workload performance, you must have more than one worker node. You can add more worker nodes to the cluster as needed.
  - Storage high availability—For storage high availability, you must have at least three nodes for Ceph storage. You must enable Master Scheduling during installation if you want any of the primary nodes to provide Ceph storage. Enabling master scheduling allows a primary node to act as a worker node as well.
You could implement a setup that provides redundancy in different ways, as shown in the examples in Figure 3.
Figure 3: Multinode Redundant Setups

Note: For Paragon Automation production deployments, we recommend a fully redundant setup with a minimum of three primary nodes (multi-primary node setup) and a minimum of three worker nodes providing Ceph storage. You must enable Master Scheduling during the installation process.
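For illustration only, the following sketch shows one way the node inventory for a fully redundant setup could be laid out: three primary (master) nodes and three worker nodes that also provide Ceph storage. The exact schema of inventory.yml (group names, keys, and additional variables) is defined by the template shipped with the Paragon Automation installer and can vary by release, so treat the group names, addresses, and structure below as assumptions rather than a verbatim template.

    # Hypothetical inventory layout for a redundant multinode cluster.
    # Group names and IP addresses are placeholders.
    cat > inventory.yml <<'EOF'
    all:
      children:
        master:                # control plane (primary) nodes
          hosts:
            10.0.0.11:
            10.0.0.12:
            10.0.0.13:
        node:                  # worker nodes (also provide Ceph storage)
          hosts:
            10.0.0.21:
            10.0.0.22:
            10.0.0.23:
    EOF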
Hardware Requirements
This section lists the minimum hardware resources required for the Ansible control host node and the primary and worker nodes of a Paragon Automation cluster.
The compute, memory, and disk requirements of the Ansible control host node are not dependent on the implementation type (single or multinode) of the cluster or the intended capacity of the system. The following table shows the requirements for the Ansible control host node:
| Node | Minimum Hardware Requirement | Storage Requirement | Role |
|---|---|---|---|
| Ansible control host | 2–4-core CPU, 12-GB RAM, 100-GB HDD | No disk partitions or extra disk space required | Carries out Ansible operations to install the cluster |
In contrast, the compute, memory, and disk requirements of the cluster nodes vary widely based on the implementation type (single-node or multinode) and the intended capacity of the system. The intended capacity includes the number of devices to be monitored, type of sensors, frequency of telemetry messages, and number of playbooks and rules. If you increase the number of device groups, devices, or playbooks, you'll need higher CPU and memory capacities.
The following table summarizes the minimum hardware resources required per node for a successful installation of a multinode cluster.
| Node | Minimum Hardware Requirement | Storage Requirement | Role |
|---|---|---|---|
| Primary or worker node | 8-core CPU, 32-GB RAM, 200-GB SSD storage (including Ceph storage); minimum 1000 IOPS for the disks | The cluster must include a minimum of three storage nodes. Each node must have an unformatted disk partition or a separate unformatted disk, with at least 30-GB space, for Ceph storage. See Disk Requirements. | Kubernetes primary or worker node |
The following table summarizes the minimum hardware resources required for successful installation of a single-node cluster.
| Node | Minimum Hardware Requirement | Storage Requirement | Role |
|---|---|---|---|
| Primary or worker node | 8-core CPU, 32-GB RAM, 200-GB SSD storage (including Ceph storage); minimum 1000 IOPS for the disks | The node must have an unformatted disk partition or a separate unformatted disk, with at least 30-GB space, for Ceph storage. See Disk Requirements. | Kubernetes primary or worker node |
SSDs are mandatory on bare-metal servers.
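As a quick sanity check before you install, you can confirm that a node meets the CPU, memory, and disk figures above with standard Linux commands. This is a generic sketch; it does not measure IOPS, for which you would need a benchmarking tool such as fio.

    # Check CPU core count, memory, and disks on a candidate node.
    nproc                        # number of CPU cores (expect 8 or more)
    free -g                      # total memory in GB (expect 32 or more)
    lsblk -d -o NAME,SIZE,ROTA   # disk sizes; ROTA=0 indicates an SSD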
Paragon Automation, by default, generates a Docker registry and stores it internally in Ceph storage. In the current release, you can optionally configure Paragon Automation to generate multiple Docker registries and store them on multiple external nodes. The following table summarizes the minimum hardware resources required for each external Docker registry node.
| Node | Minimum Hardware Requirement | Storage Requirement | Role |
|---|---|---|---|
| External registry node | 2–4-core CPU, 12-GB RAM, 100-GB HDD | No disk partitions or extra disk space required | Stores the Docker registry |
Here, we've listed only the minimum requirements for small deployments supporting up to two device groups. In such deployments, each device group may comprise two devices and two to three playbooks across all Paragon Automation components. See the Paragon Automation User Guide for information about devices and device groups.
To get a scale and size estimate of a production deployment and to discuss detailed dimensioning requirements, contact your Juniper Partner or Juniper Sales Representative.
Software Requirements
- You must install a base OS of Ubuntu version 18.04.04 or later, CentOS version 7.6 or later, or RHEL version 8.3 or later on all nodes. All nodes must run the same Linux OS (Ubuntu, CentOS, or RHEL) and version.
Paragon Automation Release 23.1 is qualified to work with the following OS versions:
  - Ubuntu versions 18.04.05 LTS (Bionic Beaver) and 20.04.4 LTS (Focal Fossa)
  - RHEL version 8.4 and RHEL version 8.10
Note: If you are using RHEL version 8.10, you must remove the following RPM bundle:

    rpm -e buildah cockpit-podman podman-catatonit podman
Release 23.1 also has experimental support for Ubuntu 22.04.2 LTS (Jammy Jellyfish).
- You must install Docker on the Ansible control host. The control host is where the installation packages are downloaded and the Ansible installation playbooks are executed. For more information, see Installation Prerequisites on Ubuntu, Installation Prerequisites on CentOS, or Installation Prerequisites on Red Hat Enterprise Linux.
If you are using Docker CE, we recommend version 18.09 or later.
If you are using Docker EE, we recommend version 18.03.1-ee-1 or later. Also, to use Docker EE, you must install Docker EE on all the cluster nodes acting as primary and worker nodes in addition to the control host.
Docker enables you to run the Paragon Automation installer file, which is packaged with Ansible (version 2.9.5) as well as the roles and playbooks that are required to install the cluster.
Installation will fail if you don't have the correct versions. We've described the commands to verify these versions in subsequent sections in this guide.
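As a minimal sketch of such a check, the following generic commands display the base OS release and the installed Docker version on a node; the exact verification steps for your OS are described in the prerequisite topics referenced above.

    # Confirm the base OS release and the installed Docker version.
    cat /etc/os-release    # base OS name and version on this node
    docker --version       # Docker version on the Ansible control host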
Disk Requirements
The following disk requirements apply to the primary and worker nodes, in both single-node and multinode deployments:
- Disks must be SSDs.
- Required partitions:
  - Root partition:
You must mount the root partition at /.
You can create a single root partition with at least 170-GB space.
Alternatively, you can create a root partition with at least 30-GB space and a data partition with at least 140-GB space. You must mount the data partition at /export. You must also bind-mount the system directories /var/local and /var/lib/docker to directories within this data partition. For example:
    # mkdir -p /export/docker /var/lib/docker /export/local /var/local
    # vi /etc/fstab
    [...]
    /export/docker  /var/lib/docker  none  bind  0 0
    /export/local   /var/local       none  bind  0 0
    [...]
    # mount -a
You use the data partition mounted at /export for Postgres, ZooKeeper, Kafka, and Elasticsearch. You use the data partition mounted at /var/local for Paragon Insights Influxdb.
  - Ceph partition:
The unformatted partition for Ceph storage must have at least 30-GB space.
Note: Instead of using this partition, you can use a separate unformatted disk with at least 30-GB space for Ceph storage.
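To confirm that the disk or partition reserved for Ceph is unformatted, you can check that it carries no filesystem signature. This is a generic sketch; /dev/sdb is a placeholder device name, so substitute your own Ceph disk or partition.

    # Verify that the device reserved for Ceph has no filesystem on it.
    lsblk -f /dev/sdb    # the FSTYPE column should be empty
    wipefs /dev/sdb      # prints nothing if no filesystem signatures exist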
Network Requirements
- All nodes must run NTP or another time-synchronization service at all times.
- An SSH server must be running on all nodes. You need a common SSH username and password for all nodes.
- You must configure DNS on all nodes, and make sure all the nodes (including the Ansible control host node) are synchronized.
- All nodes need an Internet connection. If the cluster nodes do not have an Internet connection, you can use the air-gap method of installation. The air-gap method is supported only on nodes with RHEL as the base OS.
- You must allow intercluster communication between the nodes. In particular, you must keep the ports listed in Table 5 open for communication. Ensure that you check for any iptables entry on the servers that might be blocking any of these ports (a quick check is sketched after Table 5).
Table 5: Ports That Firewalls Must Not Block

Enable these ports on all cluster nodes for administrative user access.

| Port Numbers | Purpose |
|---|---|
| 80 | HTTP (TCP) |
| 443 | HTTPS (TCP) |
| 7000 | Paragon Planner communications (TCP) |

Enable these ports on all cluster nodes for communication with network elements.

| Port Numbers | Purpose |
|---|---|
| 67 | ztpservicedhcp (UDP) |
| 161 | SNMP, for telemetry collection (UDP) |
| 162 | ingest-snmp-proxy-udp (UDP) |
| 11111 | hb-proxy-syslog-udp (UDP) |
| 4000 | ingest-jti-native-proxy-udp (UDP) |
| 830 | NETCONF communication (TCP) |
| 7804 | NETCONF callback (TCP) |
| 4189 | PCEP Server (TCP) |
| 30000-32767 | Kubernetes port assignment range (TCP) |

Enable communication between cluster nodes on all ports. At the least, open the following ports.

| Port Numbers | Purpose |
|---|---|
| 6443 | Communicate with worker nodes in the cluster (TCP) |
| 3300 | ceph (TCP) |
| 6789 | ceph (TCP) |
| 6800-7300 | ceph (TCP) |
| 6666 | calico etcd (TCP) |
| 2379 | etcd client requests (TCP) |
| 2380 | etcd peer communication (TCP) |
| 9080 | cephcsi (TCP) |
| 9081 | cephcsi (TCP) |
| 7472 | metallb (TCP) |
| 7964 | metallb (TCP) |
| 179 | calico (TCP) |
| 10250-10256 | Kubernetes API communication (TCP) |

Enable this port between the control host and the cluster nodes.

| Port Numbers | Purpose |
|---|---|
| 22 | SSH (TCP) |
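As a quick check for firewall rules that might block these ports, you can inspect the iptables rules on each node and test reachability of a port from another node. This is only a generic sketch; the port and address shown are placeholders, and it does not replace a full review of your firewall configuration.

    # Look for iptables rules that drop or reject traffic on a cluster node.
    iptables -L -n | grep -E 'DROP|REJECT'

    # Test whether a specific port (6443 here, as an example) on another
    # cluster node is reachable; replace <primary-node-ip> with a real address.
    nc -zv <primary-node-ip> 6443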
Web Browser Requirements
Table 6 lists the 64-bit Web browsers that Paragon Automation supports.
| Browser | Supported Versions | Supported OS Versions |
|---|---|---|
| Chrome | 85 and later | Windows 10 |
| Firefox | 79 and later | Windows 10 |
| Safari | 14.0.3 | MacOS 10.15 and later |
Installation on VMs
Paragon Automation can be installed on virtual machines (VMs). The VMs can be created on any hypervisor, but they must fulfill all the sizing, software, and networking requirements described in this topic.
The VMs must have the recommended base OS installed. The installation process for VMs and bare-metal servers is the same.