Installation Prerequisites on Ubuntu

To successfully install and deploy a Paragon Automation cluster, you must have a control host that installs the distribution software on multiple cluster nodes. You can download the distribution software on the control host and then create and configure the installation files to run the installation from the control host. You must have Internet access to download the packages on the control host. You must also have Internet access on the cluster nodes to download any additional software such as Docker and OS patches. The order of installation tasks is shown at a high level in Figure 1.

Figure 1: High-Level Process Flow for Installing Paragon Automation

Before you download and install the distribution software, you must configure the control host and the cluster nodes as described in this topic.

Prepare the Control Host

The control host is a dedicated machine that orchestrates the installation and upgrade of a Paragon Automation cluster. It carries out the Ansible operations that run the software installer and install the software on the cluster nodes, as illustrated in Figure 2 (Control Host Functions).

You must download the installer packages on the Ansible control host. As part of the Paragon Automation installation process, the control host installs any additional packages required on the cluster nodes. The packages include optional OS packages, Docker, and Elasticsearch. All microservices, including third-party microservices, are downloaded onto the cluster nodes. The microservices do not access any public registries during installation.

The control host can be on a different broadcast domain from the cluster nodes, but you must ensure that the control host can use SSH to connect to all the nodes.

Figure 2: Control Host Functions

After installation is complete, the control host plays no role in the functioning of the cluster. However, you'll need the control host to update the software or any component, make changes to the cluster, or reinstall the cluster if a node fails. You can also use the control host to archive configuration files. We recommend that you keep the control host available after installation and do not repurpose it for other tasks.

Prepare the control host for the installation process as follows:

  1. Install the base OS—Install Ubuntu version 20.04.4 LTS (Focal Fossa) or Ubuntu 22.04.2 LTS (Jammy Jellyfish). Release 23.2 is qualified to work with Ubuntu 22.04.2 LTS (Jammy Jellyfish).
  2. Install Docker—Install and configure Docker on the control host to implement the Linux container environment. Paragon Automation supports Docker CE and Docker EE. The Docker version you choose to install in the control host is independent of the Docker version you plan to use in the cluster nodes.

    If you want to install Docker EE, ensure that you have a trial or subscription before installation. For more information about Docker EE, supported systems, and installation instructions, see https://www.docker.com/blog/docker-enterprise-edition/.

    To download and install Docker CE, you can typically use Docker's apt repository, as shown in the sketch at the end of this step. To verify that Docker is installed and running, use the # docker run hello-world command.

    To verify the Docker version installed, use the # docker version or # docker --version commands.

    For full instructions and more information, see https://docs.docker.com/engine/install/ubuntu/.
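
    The following is a minimal sketch of installing Docker CE from Docker's apt repository on Ubuntu; treat it as an example and follow the Docker documentation above for the authoritative steps.

    # apt update
    # apt install ca-certificates curl
    # install -m 0755 -d /etc/apt/keyrings
    # curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
    # chmod a+r /etc/apt/keyrings/docker.asc
    # echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list
    # apt update
    # apt install docker-ce docker-ce-cli containerd.io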
  3. Configure SSH client authentication—The installer running on the control host connects to the cluster nodes using SSH. For SSH authentication, you must use a root or non-root user account with superuser (sudo) privileges. We refer to this account as the install-user account in subsequent steps. You must ensure that the install-user account is configured on all the nodes in the cluster. The installer uses the inventory file to determine which username to use, and whether authentication uses SSH keys or a password. See Customize the Inventory File - Multinode Implementation.

    If you choose SSH-key authentication (recommended), generate the SSH key on the control host.
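
    For example, you can generate an RSA key pair on the control host with ssh-keygen:

    # ssh-keygen -t rsa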

    If you want to protect the SSH key with a passphrase, you can use ssh-agent key manager. See https://www.ssh.com/academy/ssh/agent.

    Note:

    You'll need to copy this key to the nodes as part of the cluster nodes preparation tasks, as described in the next section.

  4. (Optional) Install wget—Install the wget utility to download the Paragon Automation distribution software.

    # apt install wget

    Alternatively, you can use rsync or any other file download software to copy the distribution software.

Prepare Cluster Nodes

The primary and worker nodes are collectively called cluster nodes. Each cluster node must have at least one unique static IP address, as illustrated in Figure 3. When configuring the hostnames, use only lowercase letters, and do not include any special characters other than hyphen (-) or the period (.). If the implementation has a separate IP network to provide communication between the Paragon Automation components, as described in Paragon Automation Portfolio Installation Overview, you must assign a second set of IP addresses to the worker nodes. These IP addresses enable devices outside the cluster to reach the worker nodes and also enable communication between:
  • Paragon Automation and the managed devices
  • Paragon Automation and the network administrator

We recommend that you place all the nodes in the same broadcast domain. For cluster nodes in different broadcast domains, see Configure Load Balancing for additional load balancing configuration.

Figure 3: Cluster Node Functions

As described in Paragon Automation System Requirements, you can install Paragon Automation using a multinode deployment.

You need to prepare the cluster nodes for the Paragon Automation installation process as follows:

  1. Configure raw disk storage—The cluster nodes must have raw storage block devices with unpartitioned disks or unformatted disk partitions attached. You can also partition the nodes such that the root partition and other file systems can use a portion of the disk space available. You must leave the remaining space unformatted, with no file systems, and reserve it for Ceph to use. For more information, see Disk Requirements.
    Note:

    You don't need to install or configure anything to allow Ceph to use the unpartitioned disks or unformatted disk partitions. The Paragon Automation installation process automatically assigns the space for Ceph storage.

    For multinode clusters, you must have a minimum of three cluster nodes with storage space attached. That is, a minimum of three worker nodes with an unpartitioned disk or unformatted disk partition for storage.

    Installation fails if unformatted disks are not available.
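
    To confirm that a node has an unpartitioned disk or unformatted partition available, you can list the block devices and any file systems on them; lsblk is one common way to do this.

    # lsblk -f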

    Ceph requires newer kernel versions. If your Linux kernel is very old, consider upgrading to a newer kernel. For a list of minimum Linux kernel versions supported by Ceph for your OS, see https://docs.ceph.com/en/latest/start/os-recommendations. To upgrade your Linux kernel version, see Upgrade your Ubuntu Linux Kernel Version.

    Note:

    Ceph does not work on Linux kernel version 4.15.0-55.60.

  2. Install the base OS—Install Ubuntu version 20.04.4 LTS (Focal Fossa) or Ubuntu 22.04.2 LTS (Jammy Jellyfish). Release 23.2 is qualified to work with Ubuntu 22.04.2 LTS (Jammy Jellyfish).
  3. Create install-user account—The install user is the user that the Ansible playbooks use to log in to the primary and worker nodes and perform all the installation tasks. Ensure that you configure either a root password or an account with superuser (sudo) privileges. You will add this information to the inventory file during the installation process.
    Set the root user password.
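
    For example, you can set the root password, or create a non-root install user with sudo privileges (the username pa-install is a hypothetical placeholder):

    # passwd root

    or

    # adduser pa-install
    # usermod -aG sudo pa-install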
  4. Install SSH authentication—The installer running on the control host connects to the cluster nodes through SSH using the install-user account.
    1. Log in to the cluster nodes and install the OpenSSH server on all nodes.
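
      On Ubuntu, the OpenSSH server can typically be installed with apt:

      # apt install openssh-server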
    2. Edit the sshd_config file.

      # vi /etc/ssh/sshd_config

    3. If you are using "root" as the install-user account, then permit root login.

      PermitRootLogin yes

      If you chose to use a plain-text password for authentication, you must enable password authentication.

      PasswordAuthentication yes

      We do not recommend the use of password authentication.

    4. Ensure that the AllowTcpForwarding parameter is set to yes.

      AllowTcpForwarding yes
      Note:

      Paragon Automation installation fails when the AllowTcpForwarding parameter is set to no.

    5. If you changed /etc/ssh/sshd_config, restart the SSH daemon.

      # systemctl restart sshd

    6. Log in to the control host:
      1. To allow authentication using the SSH key, copy id_rsa.pub to the cluster nodes.
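
        For example, you can copy the key with ssh-copy-id, where install-user is your install-user account:

        # ssh-copy-id -i ~/.ssh/id_rsa.pub install-user@cluster-node-IP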

        Repeat this step for all the nodes in the cluster (primary and workers). cluster-node-IP is the unique address of the node as shown in Figure 3. If a hostname is used instead, the Ansible control host should be able to resolve the name to its IP address.

      2. Use SSH to log in to the cluster node using the install-user account. You should not need a password to log in.
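
        For example:

        # ssh install-user@cluster-node-IP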

        You should be able to use SSH to connect to all nodes in the cluster (primary and workers) from the control host using the install-user account. If you are not able to log in, review the previous steps and make sure that you didn't miss anything.

  5. Install Docker—Select one of the following Docker versions to install.
    • Docker CE—If you want to use Docker CE, you do not need to install it on the cluster nodes. The deploy script installs Docker CE on the nodes during Paragon Automation installation.

    • Docker EE—If you want to use Docker EE, you must install Docker EE on all the cluster nodes. If you install Docker EE on the nodes, the deploy script uses the installed version and does not attempt to install Docker CE in its place. For more information about Docker EE and supported systems, and for instructions to download and install Docker EE, see https://www.docker.com/blog/docker-enterprise-edition/.

    The Docker version you choose to install in the cluster nodes is not dependent on the Docker version installed in the control host.

  6. Install Python—Install Python 3, if it is not preinstalled with your OS, on the cluster nodes:

    # apt install python3

    To verify the Python version installed, use the # python3 -V or # python3 --version command.

  7. Use the # apt list --installed command and ensure that the following packages are installed:

    apt-transport-https, bash-completion, gdisk, iptables, lvm2, openssl

    If you want to use the air-gap method to install Paragon Automation on a cluster running Ubuntu base OS, ensure that the following packages are pre-installed:
    ca-certificates, curl, docker.io, jq, keepalived

    Additionally, we recommend that you install the following optional packages to aid in troubleshooting:

    net-tools, tcpdump, traceroute
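
    If any of the listed packages are missing, you can typically install them with apt. For example, to install the required set:

    # apt install apt-transport-https bash-completion gdisk iptables lvm2 openssl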

  8. If your base OS is Ubuntu version 20.04.4 LTS, set the iptables FORWARD chain policy to ACCEPT on all the cluster nodes.
    1. Log in to a cluster node.

    2. Set the iptables FORWARD chain policy to ACCEPT.
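
      For example, using the standard iptables utility:

      # iptables -P FORWARD ACCEPT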

    3. Install the iptables-persistent package to make the change persistent across reboots.
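
      For example:

      # apt install iptables-persistent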

      You can choose to answer no if prompted to save rules.

    4. Add the following rule.

    5. Delete the /etc/iptables/rules.v6 file.
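
      For example:

      # rm /etc/iptables/rules.v6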

    Repeat these steps on all cluster nodes.
  9. Install and enable NTP—All nodes must run Network Time Protocol (NTP) or any other time-synchronization protocol at all times. By default, Paragon Automation installs the Chrony NTP client. If you don't want to use Chrony, you can manually install NTP on all nodes and ensure that the timedatectl command reports that the clocks are synchronized. However, if you want to use the air-gap method to install Paragon Automation, and you want to use Chrony, you must pre-install Chrony. The installer does not install Chrony during air-gap installations.
    1. Install ntpdate to synchronize date and time by querying an NTP server.

      # apt install ntpdate -y

    2. Run the following command twice to reduce the offset with the NTP server.

      # ntpdate ntp-server

    3. Install the NTP protocol.

      # apt install ntp -y

    4. Configure the NTP server pools.

      # vi /etc/ntp.conf

    5. Replace the default Ubuntu pools with the NTP server closest to your location in the ntp.conf file.

      server ntp-server prefer iburst

      Save and exit the file.

    6. Restart the NTP service.

      # systemctl restart ntp

    7. Confirm that the system is in sync with the NTP server.

      # timedatectl

  10. (Optional) Upgrade your Ubuntu Linux kernel version—To upgrade the kernel version of your Ubuntu server to the latest LTS version to meet the requirements for Paragon Automation installation:
    1. Log in as the root user.

    2. Check the existing kernel version.

      root@server# uname -msr

      If the Linux kernel version is earlier than 4.15, upgrade the kernel.

    3. Update apt repositories:

      root@server# apt update

    4. Upgrade existing software packages, including kernel upgrades:

      root@server# apt upgrade -y

      root@server# apt install --install-recommends linux-generic-hwe-xx.xx

      Here, xx.xx is your Ubuntu OS version.

    5. Reboot the server to load the new kernel:

      root@server# reboot
    6. Verify the new kernel version:

      root@server# uname -msr

Virtual IP Address Considerations

The Kubernetes worker nodes host the pods that handle the workload of the applications.

A pod is the smallest deployable unit of computing created and managed in Kubernetes. A pod contains one or more containers, with shared storage and network resources, and with specific instructions on how to run the applications. Containers are the lowest level of processing, and you execute applications or microservices in containers.

The primary node in the cluster determines which worker node hosts a particular pod and its containers.

You implement all features of Paragon Automation using a combination of microservices. You need to make some of these microservices accessible from outside the cluster as they provide services to end users (managed devices) and administrators. For example, you must make the pceserver service accessible to establish Path Computation Element Protocol (PCEP) sessions between provider edge (PE) routers and Paragon Automation.

You need to expose these services outside of the Kubernetes cluster with specific addresses that are reachable from the external devices. Because a service can be running on any of the worker nodes at a given time, you must use virtual IP addresses (VIPs) as the external addresses. You must not use the address of any given worker node as an external address.

In this example:

  • Consider that Worker 1 is 10.1.x.3 and Worker 2 is 10.1.x.4.

  • SERVICE IP = PCEP VIP is 10.1.x.200

  • PCC_IP is 10.1.x.100

Paragon Automation uses one of two methods to expose services outside the cluster:

  • Load balancer—Each load balancer is associated with a specific IP address and routes external traffic to a specific service in the cluster. This is the default method for many Kubernetes installations in the cloud. The load balancer method supports multiple protocols and multiple ports per service. Each service has its own load balancer and IP address.

    Paragon Automation uses the MetalLB load balancer. MetalLB simulates an external load balancer either by managing virtual IP addresses in Layer 2 mode or by interacting with external routers in Layer 3 mode. MetalLB provides the load-balancing infrastructure for the Kubernetes cluster.

    Services of type "LoadBalancer" will interact with the Kubernetes load-balancing infrastructure to assign an externally reachable IP address. Some services can share an external IP address.
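
    After installation, you can typically see which external IP addresses have been assigned to LoadBalancer services by querying Kubernetes from a primary node. For example:

    # kubectl get svc -A | grep LoadBalancer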

  • Ingress—The ingress method acts as a proxy to bring traffic into the cluster, and then uses internal service routing to route the traffic to its destination. Under the hood, this method also uses a load balancer service to expose itself to the world so it can act as that proxy.

    Paragon Automation uses the following ingress proxies:

    • Ambassador
    • Nginx

Devices from outside the cluster need to access the following services and thus these services require a VIP address.

Table 1: Services That Need VIPs

  • Ingress controller (load balancer/proxy: Ambassador, MetalLB)

    Used for accessing the Paragon Automation GUI over the Web. Paragon Automation provides a common Web server that provides access to the components and applications. Access to the server is managed through the Kubernetes Ingress Controller.

  • Paragon Insights services (load balancer/proxy: MetalLB)

    Used for Insights services such as syslog, DHCP relay, and JTI.

  • Paragon Pathfinder PCE server (load balancer/proxy: MetalLB)

    Used to establish PCEP sessions with devices in the network.

  • SNMP trap receiver proxy (Optional) (load balancer/proxy: MetalLB)

    Used for the SNMP trap receiver proxy only if this functionality is required.

  • Infrastructure Nginx Ingress Controller (load balancer/proxy: Nginx, MetalLB)

    Used as a proxy for the Paragon Pathfinder netflowd server and, optionally, the Paragon Pathfinder PCE server.

    The Nginx Ingress Controller needs a VIP within the MetalLB load balancer pool. During installation, you must include this address in the LoadBalancer IP address ranges that you specify when you create the configuration file.

  • Pathfinder Netflowd (load balancer/proxy: MetalLB)

    Used for the Paragon Pathfinder netflowd server. Netflowd can use Nginx as a proxy, in which case it does not require its own VIP address.

  • PCEP server (Optional) (load balancer/proxy: none)

    Used for the PCE server for MD5 authentication.

  • cRPD (Optional) (load balancer/proxy: none)

    Used to connect to the BGP Monitoring Protocol (BMP) pod for MD5 authentication.

Ports used by Ambassador:

  • HTTP 80 (TCP) redirect to HTTPS

  • HTTPS 443 (TCP)

  • Paragon Planner 7000 (TCP)

  • DCS/NETCONF initiated 7804 (TCP)

Figure 4: Ambassador

Ports used by Insights Services, Path Computation Element (PCE) server, and SNMP:

  • Insights Services

    JTI — 4000 (UDP)

    DHCP (ZTP) — 67 (UDP)

    SYSLOG — 514 (UDP)

    SNMP proxy — 162 (UDP)

  • PCE Server

    PCEP — 4189 (TCP)

  • SNMP

    SNMP Trap Receiver — 162 (UDP)

Figure 5: Ports Used by Services

Ports used by Nginx Controller:

  • NetFlow 9000 (UDP)

  • PCEP 4189 (TCP)

Using Nginx for PCEP

During the installation process, you will be asked whether you want to enable ingress proxy for PCEP. You can select either None or Nginx-Ingress as the proxy for the Path Computation Element (PCE) server.

If you select Nginx-Ingress as the proxy, you do not need to configure the VIP for the PCE server described in Table 1. In this case, the VIP address for Infrastructure Nginx Ingress Controller is used for the PCE server also. If you choose to not use a netflowd proxy, the VIP for the Infrastructure Nginx Ingress Controller is used for netflowd, as well.

Note:

The benefit of using Nginx is that you can use a single IP address for multiple services.

Figure 6: Nginx Controller

VIP Addresses for MD5 Authentication

You can configure MD5 authentication to secure PCEP sessions between the router and Paragon Pathfinder as well as ensure that the BMP service is peering with the correct BGP-LS router. Paragon Automation uses Multus to provide the secondary interface on the PCE server and BMP pod for direct access to the router. You need the following VIP addresses in the same subnet as your cluster nodes:

  • VIP address for the PCE server in the CIDR format

  • VIP address for cRPD in the CIDR format

The VIP address pool of the MetalLB load balancer must not contain these VIP addresses.

If you choose to configure MD5 authentication, you must additionally configure the authentication key and virtual IP addresses on the routers. You must also configure the authentication key in the Paragon Automation UI.

  • MD5 on PCEP sessions—Configure the MD5 authentication key on the router and in the Paragon Automation UI, and configure the VIP address on the router.

    • Configure the following in the Junos CLI:

      user@pcc# set protocols pcep pce pce-id authentication-key pce-md5-key

      user@pcc# set protocols pcep pce pce-id destination-ipv4-address vip-for-pce

    • Enter the pce-md5-key authentication key in the MD5 String field in the Protocols:PCEP section on the Configuration > Devices > Edit Device Name page.

    The MD5 authentication key must be less than or equal to 79 characters.

  • MD5 on cRPD—Determine the cRPD MD5 authentication key and configure the key and the VIP address of cRPD on the router.

    1. Determine or set the MD5 authentication key in one of the following ways.

      1. Run the conf command script and enable MD5 authentication on cRPD. Search for the crpd_auth_key parameter in the config.yml file. If a key is present, cRPD is already configured for MD5. For example: crpd_auth_key : northstar. You can use the key present in the config.yml file (or edit the key) and enter it on the router.

      2. If no MD5 authentication key is present in the config.yml file, you must log in to cRPD and set the authentication key using one of the following commands:

        set groups extra protocols bgp group name authentication-key crpd-md5-key

        or

        set protocols bgp group name authentication-key crpd-md5-key

        The MD5 authentication key must be less than or equal to 79 characters.

    2. Configure the router to enable MD5 for cRPD.

      user@pcc# set protocols bgp group name neighbor vip-for-crpd authentication-key md5-key
Note:

You must identify all the required VIP addresses before you start the Paragon Automation installation process. You will be asked to enter these addresses as part of the installation process.

Configure Load Balancing

VIPs are managed in Layer 2 by default. When all cluster nodes are in the same broadcast domain, each VIP address is assigned to one cluster node at a time. Layer 2 mode provides fail-over of the VIP and does not provide actual load balancing. For true load balancing between the cluster nodes or if the nodes are in different broadcast domains, you must configure load balancing in Layer 3.

You must configure a BGP router to advertise the VIP addresses to the network. Make sure that the BGP router uses ECMP to balance TCP/IP sessions between different hosts. Connect the BGP router directly to the cluster nodes.

To configure load balancing on the cluster nodes, edit the config.yml file. For example:
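
The following is a hedged sketch of what such a configuration might look like. It follows MetalLB's BGP configuration format; the key names shown are assumptions, and the ASNs and addresses are examples, so check the config.yml template in your release for the exact syntax.

metallb_config:
  peers:
  - peer-address: 192.x.x.1
    peer-asn: 64501
    my-asn: 64500
  address-pools:
  - name: default
    protocol: bgp
    addresses:
    - 10.x.x.0/24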

In this example, the BGP router at 192.x.x.1 advertises reachability of the VIP addresses in the 10.x.x.0/24 prefix to the rest of the network. The cluster allocates VIP addresses from this range and advertises each address from the cluster nodes that can handle it.

Configure DNS Server (Optional)

You can access the main Web gateway either through the ingress controller's VIP address or through a hostname that is configured in the Domain Name System (DNS) server that resolves to the ingress controller's VIP address. You need to configure the DNS server only if you want to use a hostname to access the Web gateway.

Add the hostname to the DNS as an A, AAAA, or CNAME record. For lab and Proof of Concept (POC) setups, you can add the hostname to the /etc/hosts file on the cluster nodes.
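
For example, a lab /etc/hosts entry might look like the following, where both the VIP address and the hostname are hypothetical placeholders for your ingress controller's VIP address and your Web gateway hostname:

10.1.x.250  paragon.example.net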