Air-Gap Install Paragon Automation on RHEL
You can install and deploy a Paragon Automation cluster by using the air-gap method of installation. With the air-gap method, the cluster nodes do not need Internet access. Instead, you use a control host to download the distribution software, and then create and configure the installation files and run the installation from the control host. You must be able to use SSH to connect to all the nodes.
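For example, you can confirm SSH reachability of every node from the control host before you begin (the node addresses below are placeholders for your cluster nodes):
# for node in 10.1.2.11 10.1.2.12 10.1.2.13; do ssh root@$node hostname; done
Each node should print its hostname, either after a password prompt or directly if key-based authentication is set up.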
Prerequisites
Before you download and install the distribution software, you must preconfigure the control host and the cluster nodes as described in the following sections.
- Prepare the control host for the installation process as described in Prepare the Control Host.
- Prepare the cluster nodes for the installation process as described in Prepare Cluster Nodes. If you want to use Chrony, you must preinstall it; the installer does not install Chrony during air-gap installations.
- Ensure that you have the required virtual IP addresses as described in Virtual IP Address Considerations.
Download and Install Paragon Automation
- Log in to the control host.
- Download the Paragon Automation Setup installation folder to a download directory and extract the folder. You can use the wget "http://cdn.juniper.net/software/file-download-url" command to download the folder, and any extraction utility to extract the files. You need a Juniper account to download the Paragon Automation software.
  Note: During the installation process, you must download the rhel-84-airgap.tar.gz file to use the air-gap method.
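For example (file-download-url is a placeholder; substitute the download URL for your release from the Juniper software download site, and note that the archive name shown here is illustrative):
# mkdir -p /var/tmp/paragon && cd /var/tmp/paragon
# wget "http://cdn.juniper.net/software/file-download-url"
# tar -zxvf Paragon_Automation_Setup_<release>.tar.gz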
- Copy the rhel-84-airgap.tar.gz file to all your cluster nodes and deploy the RPM packages on each node, as follows (a sample command sequence is shown after this list):
  - Log in to a cluster node.
  - Copy the rhel-84-airgap.tar.gz file to the /root directory.
  - Change directory to /root.
  - Extract the archive using the tar -zxvf rhel-84-airgap.tar.gz command.
  - Run the yum -y install *.rpm command to deploy the RPM packages.
  Repeat these substeps on all your cluster nodes.
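For example, the copy-and-install sequence for a single node, run from the control host, might look like this (node1.example.net is a placeholder for a cluster node hostname):
# scp rhel-84-airgap.tar.gz root@node1.example.net:/root/
# ssh root@node1.example.net
# cd /root
# tar -zxvf rhel-84-airgap.tar.gz
# yum -y install *.rpm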
- Log back in to your control host.
- Follow steps 1 through 7 of the installation process as described in Install Paragon Automation on a Multinode Cluster.
- Manually edit the config.yml file using a text editor and set the following values:
  docker_version: 20.10.13-3
  containerd_version_redhat: 1.5.10-3
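You can verify the edit from the control host; a quick check, assuming your configuration directory is named config-dir as in the deploy command later in this procedure:
# grep -E '^(docker_version|containerd_version_redhat):' config-dir/config.yml
docker_version: 20.10.13-3
containerd_version_redhat: 1.5.10-3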
- Log in to the cluster nodes through SSH using the install-user account, and perform the following substeps on all the cluster nodes:
  - Using a text editor, set all the repositories in /etc/yum.repos.d/ to enabled = 0.
  - Apply the following firewall rules:
    iptables -A OUTPUT --dst=10.0.0.0/8 -j ACCEPT
    iptables -A OUTPUT --dst=172.16.0.0/12 -j ACCEPT
    iptables -A OUTPUT --dst=192.168.0.0/16 -j ACCEPT
    iptables -A OUTPUT --dst=127.0.0.1 -j ACCEPT
  Repeat these substeps on all cluster nodes.
- Log back in to the control host, and install the Paragon Automation cluster based on the information that you configured in the config.yml and inventory files.
# ./run -c config-dir deploy -e offline_install=true
The time that the installation takes depends on the complexity of the cluster. A basic setup takes at least 45 minutes to complete.
NTP synchronization is checked at the start of deployment. If the clocks on the nodes are out of sync, the deployment fails.
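Before you deploy, you can verify that the node clocks are in sync. If you use Chrony (see Prerequisites), run the following command on each node:
# chronyc tracking
The System time line of the output reports the offset from NTP time; a large offset on any node means the deployment check will fail.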
- When the deployment is complete, log in to the worker nodes. Use a text editor to configure the soft and hard limits that InfluxDB requires for Paragon Insights in the limits.conf and sysctl.conf files.
  - # vi /etc/security/limits.conf
    # End of file
    * hard nofile 1048576
    * soft nofile 1048576
    root hard nofile 1048576
    root soft nofile 1048576
    influxdb hard nofile 1048576
    influxdb soft nofile 1048576
  - # vi /etc/sysctl.conf
    fs.file-max = 2097152
    vm.max_map_count=262144
    fs.inotify.max_user_watches=524288
    fs.inotify.max_user_instances=512
  Repeat this step for all worker nodes.
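You can apply the new settings without a reboot (standard Linux commands, not specific to Paragon Automation):
# sysctl -p
# ulimit -n
The sysctl -p command reloads the /etc/sysctl.conf values immediately. The limits.conf values apply to new login sessions, where ulimit -n should report 1048576.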
- Follow the steps described in Log in to the Paragon Automation UI - Multinode installation to access the GUI.