Installing the NorthStar Controller

Use the procedures described in the following sections whether you are performing a fresh install of NorthStar Controller or upgrading from an earlier release, unless you are using NorthStar analytics and are upgrading from a release earlier than NorthStar 4.3. Steps that are not required for an upgrade are noted. Before performing a fresh install of NorthStar, you must first run the ./uninstall_all.sh script to uninstall any older version of NorthStar on the device. See Uninstalling the NorthStar Controller Application.

If you are upgrading from a release earlier than NorthStar 4.3 and you are using NorthStar analytics, you must upgrade NorthStar manually using the procedure described in Upgrading from Pre-4.3 NorthStar with Analytics.

If you are upgrading NorthStar from a release earlier than NorthStar 6.0.0, you must redeploy the analytics settings after you upgrade the NorthStar application nodes. This is done from the Analytics Data Collector Configuration Settings menu described in Installing Data Collectors for Analytics. Redeploying ensures that netflowd can communicate with cMGD, which is necessary for the NorthStar CLI available starting in NorthStar 6.1.0.

We also recommend that you uninstall any pre-existing older versions of Docker before you install NorthStar. Installing NorthStar will install a current version of Docker.

The NorthStar software and data are installed in the /opt directory. Be sure to allocate sufficient disk space. See NorthStar Controller System Requirements for disk space and memory recommendations.

Note:

When upgrading NorthStar Controller, ensure that the /tmp directory has enough free space to save the contents of the /opt/pcs/data directory because the /opt/pcs/data directory contents are backed up to /tmp during the upgrade process.
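Before starting an upgrade, you can verify that /tmp has enough room for the backup with a quick comparison like the following (a hedged sketch; the paths come from the note above, and a missing data directory is treated as empty):

```shell
#!/bin/sh
# Compare the size of /opt/pcs/data against the free space in /tmp,
# since /opt/pcs/data is backed up to /tmp during the upgrade.
data_kb=$(du -sk /opt/pcs/data 2>/dev/null | awk '{print $1}')
data_kb=${data_kb:-0}                         # treat a missing directory as 0 KB
free_kb=$(df -Pk /tmp | awk 'NR==2 {print $4}')
if [ "$free_kb" -ge "$data_kb" ]; then
    echo "OK: /tmp has ${free_kb} KB free; backup needs about ${data_kb} KB"
else
    echo "WARNING: /tmp has only ${free_kb} KB free; grow it before upgrading"
fi
```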

If you are installing NorthStar for a high availability (HA) cluster, ensure that:

  • You configure each server individually using these instructions before proceeding to HA setup.

  • The database and rabbitmq passwords are the same for all servers that will be in the cluster.

  • All server time is synchronized by NTP using the following procedure:

    1. Install NTP.

    2. Specify the preferred NTP server in ntp.conf.

    3. Verify the configuration.

    Note:

    All cluster nodes must have the same time zone and system time settings. This is important to prevent inconsistencies in the database storage of SNMP and LDP task collection delta values.
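The three NTP steps above might look like the following on a CentOS/RHEL 7 host (a hedged sketch; the package, service, and server names are assumptions for your distribution and environment):

```shell
# 1. Install NTP.
yum install -y ntp
# 2. Specify the preferred NTP server in ntp.conf (hypothetical hostname).
echo "server ntp.example.com prefer iburst" >> /etc/ntp.conf
systemctl enable --now ntpd
# 3. Verify the configuration: peers should show nonzero reach values.
ntpq -p
timedatectl          # also confirm the time zone matches the other nodes
```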

Note:

To upgrade NorthStar Controller in an HA cluster environment, see Upgrade the NorthStar Controller Software in an HA Environment.

For HA setup after all the servers that will be in the cluster have been configured, see Configuring a NorthStar Cluster for High Availability.

To set up a remote server for NorthStar Planner, see Using a Remote Server for NorthStar Planner.

The high-level order of tasks is shown in Figure 1. Installing and configuring NorthStar comes first. If you want a NorthStar HA cluster, you would set that up next. Finally, if you want to use a remote server for NorthStar Planner, you would install and configure that. The text in italics indicates the topics in the NorthStar Getting Started Guide that cover the steps.

Figure 1: High Level Process Flow for Installing NorthStar

The following sections describe the download, installation, and initial configuration of NorthStar.

Note:

The NorthStar software includes a number of third-party packages. To avoid possible conflict, we recommend that you only install these packages as part of the NorthStar Controller RPM bundle installation rather than installing them manually.

Activate Your NorthStar Software

To obtain your serial number certificate and license key, see Obtain Your License Keys and Software for the NorthStar Controller.

Download the Software

The NorthStar Controller software download page is available at https://www.juniper.net/support/downloads/?p=northstar#sw.

  1. From the Version drop-down list, select the version number.
  2. Click the NorthStar Application (which includes the RPM bundle and the Ansible playbook) and the NorthStar JunosVM to download them.

If Upgrading, Back Up Your JunosVM Configuration and iptables

If you are upgrading from a previous NorthStar release, and you previously installed NorthStar and the JunosVM together, back up your JunosVM configuration before installing the new software. The JunosVM configuration is restored automatically after the upgrade completes, as long as you used the net_setup.py utility to save your backup.

  1. Launch the net_setup.py script:
  2. Type D and press Enter to select Maintenance and Troubleshooting.
  3. Type 1 and press Enter to select Backup JunosVM Configuration.
  4. Confirm the backup JunosVM configuration is stored at '/opt/northstar/data/junosvm/junosvm.conf'.
  5. Save the iptables.
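The backup steps above can be sketched as follows (hedged; the net_setup.py path and the iptables backup destination are assumptions for a typical installation):

```shell
# Steps 1-3: launch net_setup.py, then choose D, then 1, at the menus.
/opt/northstar/utils/net_setup.py
# Step 4: confirm the backup exists.
ls -l /opt/northstar/data/junosvm/junosvm.conf
# Step 5: save the iptables rules to a file you can restore later.
iptables-save > /root/iptables.backup
```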

If Upgrading from an Earlier Service Pack Installation

You cannot upgrade directly to NorthStar Release 6.2.3 from an earlier NorthStar release with a service pack installed; for example, you cannot upgrade to NorthStar Release 6.2.3 directly from a NorthStar 6.2.0 SP1 or 6.1.0 SP5 installation. To upgrade to NorthStar Release 6.2.3 from such an installation, you must either roll back the service packs or run the upgrade_NS_with_patches.sh script, which allows a newer NorthStar version to be installed over the service packs.

To upgrade to NorthStar Release 6.2.3, before proceeding with the installation:

  1. Navigate to the service pack deployment directory. For example:
  2. Do one of the following:
    • Roll back the service packs by running the batch-uninstall.sh script.

    • Upgrade the installation by executing upgrade_NS_with_patches.sh.

      The upgrade_NS_with_patches.sh script removes the entries from the package database so that the NorthStar Release 6.2.3 packages can be installed without any dependency conflict.
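The two options above can be sketched as follows (hedged; the service pack deployment directory is hypothetical, so substitute the directory you navigated to in step 1):

```shell
cd /opt/northstar/service_packs/current   # hypothetical deployment directory
./batch-uninstall.sh                      # option 1: roll back the service packs
# -- or --
./upgrade_NS_with_patches.sh              # option 2: clear the package-database
                                          # entries so 6.2.3 installs cleanly
```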

Install NorthStar Controller

You can either install the RPM bundle on a physical server or use a two-VM installation method in an OpenStack environment, in which the JunosVM is not bundled with the NorthStar Controller software.

The following optional parameters are available for use with the install.sh command:

--vm

Equivalent to running ./install-vm.sh; creates a two-VM installation.

--crpd

Creates a cRPD installation.

--skip-bridge

For a physical server installation, skips checking whether the external0 and mgmt0 bridges exist.

The default bridges are external0 and mgmt0. If the physical setup has two interfaces such as eth0 and eth1, you must attach the bridges to those interfaces. You can also define any bridge names relevant to your deployment.

Note:

We recommend that you configure the bridges before running install.sh.

Note:

Bridges are not used with cRPD installations.
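For reference, a bridge on a CentOS/RHEL host can be defined with ifcfg files along these lines (hedged example; the physical interface name and addresses are hypothetical):

```
# /etc/sysconfig/network-scripts/ifcfg-external0
DEVICE=external0
TYPE=Bridge
BOOTPROTO=static
IPADDR=192.0.2.10
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0 -- attach eth0 to the bridge
DEVICE=eth0
TYPE=Ethernet
BRIDGE=external0
ONBOOT=yes
```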

  • For a physical server installation, execute the following commands to install NorthStar Controller:

    Note:

    yum install works for both upgrade and fresh installation.

  • For a two-VM installation, execute the following commands to install NorthStar Controller:

    Note:

    yum install works for both upgrade and fresh installation.

    The script offers the opportunity to change the JunosVM IP address from the system default of 172.16.16.2.

  • For a cRPD installation, you must have:

    • CentOS or Red Hat Enterprise Linux 7.x. Earlier versions are not supported.

    • A Junos cRPD license.

      The license is installed during NorthStar installation. Verify that the cRPD license is installed by running the show system license command in the cRPD container.

    Note:

    If you require multiple BGP-LS peering sessions on different subnets for different AS domains at the same time, you should choose the default JunosVM approach. This configuration is not supported with cRPD.

    For a cRPD installation, execute the following commands to install NorthStar Controller:

    Note:

    yum install works for both upgrade and fresh installation.
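The install commands elided above follow the same pattern for each variant; a hedged sketch (the bundle filename and directory are assumptions, so substitute the files you downloaded):

```shell
tar -xzf NorthStar_Bundle_6.2.3.tar.gz   # unpack the downloaded bundle
cd NorthStar_Bundle_6.2.3
./install.sh                # physical server installation
# ./install.sh --vm         # two-VM installation
# ./install.sh --crpd       # cRPD installation
```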

Configure Support for Different JunosVM Versions

Note:

This procedure is not applicable to cRPD installations.

If you are using a two-VM installation, in which the JunosVM is not bundled with the NorthStar Controller, you might need to edit the northstar.cfg file to make the NorthStar Controller compatible with the external VM by changing the version of NTAD used. For a NorthStar cluster configuration, you must change the NTAD version in the northstar.cfg file on every node in the cluster. NTAD is a 32-bit process, which requires that the JunosVM device running NTAD be configured accordingly. You can copy the default JunosVM configuration from the one provided with the NorthStar release (for use in a nested installation). At a minimum, ensure that the force-32-bit flag is set.
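For example, the flag can be set on the JunosVM with a configuration statement along these lines (hedged; verify the exact hierarchy for your Junos OS release):

```
set system processes routing force-32-bit
```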

To change the NTAD version in the northstar.cfg file:

  1. SSH to the NorthStar application server.
  2. Using a text editor such as vi, edit the ntad_version statement in the /opt/northstar/data/northstar.cfg file to the appropriate NTAD version according to Table 1:
    Table 1: NTAD Versions by Junos OS Release

    NTAD Version   Junos OS Release            Change
    1              Earlier than Release 17.2   Initial version
    2              17.2                        Segment routing
    3              18.2                        NTAD version 2 + local address
    4              18.3R2, 18.4R2              NTAD version 3 + BGP peer SID
    5              19.1 and later              NTAD version 4 + OSPF SR

    “Local address” refers to multiple secondary IP addresses on interfaces. This is especially relevant in certain use cases such as loopback interface for VPN-LSP binding.

  3. Manually restart the toposerver process:
  4. Log in to the JunosVM and restart NTAD:
  5. Set up the SSH key for the external VM by selecting option H from the Setup Main Menu when you run the net_setup.py script, and entering the requested information.
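Steps 2 and 3 above can be sketched as follows (hedged; the supervisord process name is an assumption, and step 4 is performed from the JunosVM itself):

```shell
# Step 2: set the NTAD version in northstar.cfg (3 is only an example;
# pick the value from Table 1 that matches your Junos OS release).
sed -i 's/^ntad_version=.*/ntad_version=3/' /opt/northstar/data/northstar.cfg
# Step 3: manually restart the toposerver process.
supervisorctl restart northstar:toposerver
# Step 4: log in to the JunosVM and restart the NTAD daemon from there.
```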

Create Passwords

Note:

This step is not required if you are doing an upgrade rather than a fresh installation.

When prompted, enter new database/rabbitmq, web UI Admin, and cMGD root passwords.

  1. Create an initial database/rabbitmq password by typing the password at the following prompts:
  2. Create an initial Admin password for the web UI by typing the password at the following prompts:
  3. Create a cMGD root password (for access to the NorthStar CLI) by typing the password at the following prompts:

Enable the NorthStar License

Note:

This step is not required if you are doing an upgrade rather than a fresh installation.

You must enable the NorthStar license as follows, unless you are performing an upgrade and you have an activated license.

  1. Copy or move the license file.
  2. Set the license file owner to the PCS user.
  3. Wait a few minutes and then check the status of the NorthStar Controller processes until they are all up and running.
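The three license steps above can be sketched as follows (hedged; the license filename, destination path, and pcs owner are assumptions based on a typical installation):

```shell
# Step 1: copy the license file into place.
cp npatpw /opt/pcs/db/sys/npatpw
# Step 2: set the license file owner to the PCS user.
chown pcs:pcs /opt/pcs/db/sys/npatpw
# Step 3: repeat until every NorthStar process reports RUNNING.
supervisorctl status
```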

Adjust Firewall Policies

The iptables default rules could interfere with NorthStar-related traffic. If necessary, adjust the firewall policies.

Refer to NorthStar Controller System Requirements for a list of ports that must be allowed by iptables and firewalls.
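On a host running firewalld, allowing one of those ports might look like this (hedged example; port 8443 for the web UI is an assumption, so repeat for every port in the system requirements list):

```shell
firewall-cmd --permanent --add-port=8443/tcp
firewall-cmd --reload
firewall-cmd --list-ports      # confirm the port is now allowed
```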

Launch the Net Setup Utility

Note:

This step is not required if you are doing an upgrade rather than a fresh installation.

Note:

For installations that include a remote Planner server, the Net Setup utility is used only on the Controller server and not on the remote Planner server. Instead, the install-remote_planner.sh installation script launches a different setup utility, called setup_remote_planner.py. See Using a Remote Server for NorthStar Planner.

Launch the Net Setup utility to perform host server configuration.

The main menu that appears is slightly different depending on whether your installation uses Junos VM or is a cRPD installation.

For Junos VM installations (installation on a physical server or a two-server installation), the main menu looks like this:

For cRPD installations, the main menu looks like this:

Notice that option B is specific to cRPD and option H is not available as it is not relevant to cRPD.

Configure the Host Server

Note:

This step is not required if you are doing an upgrade rather than a fresh installation.

  1. From the NorthStar Controller setup Main Menu, type A and press Enter to display the Host Configuration menu:

    To interact with this menu, type the number or letter corresponding to the item you want to add or change, and press Enter.

  2. Type 1 and press Enter to configure the hostname. The existing hostname is displayed. Type the new hostname and press Enter.
  3. Type 2 and press Enter to configure the host default gateway. The existing host default gateway IP address (if any) is displayed. Type the new gateway IP address and press Enter.
  4. Type 3A and press Enter to configure the host interface #1 (external_interface). The first item of existing host interface #1 information is displayed. Type each item of new information (interface name, IPv4 address, netmask, type), and press Enter to proceed to the next.
    Note:

    The designation of network or management for the type of interface is a label only, for your convenience. NorthStar Controller does not use this information.

  5. Type A and press Enter to add a host candidate static route. The existing route, if any, is displayed. Type the new route and press Enter.
  6. If you have more than one static route, type A and press Enter again to add each additional route.
  7. Type Z and press Enter to save your changes to the host configuration.
    Note:

    If the host has been configured using the CLI, the Z option is not required.

    The following example shows saving the host configuration.

  8. Press Enter to return to the Main Menu.

Configure the JunosVM and its Interfaces

This section applies to physical server or two-VM installations that use Junos VM. If you are installing NorthStar using cRPD, skip this section and proceed to Configure Junos cRPD Settings.

Note:

This step is not required if you are doing an upgrade rather than a fresh installation.

From the Setup Main Menu, configure the JunosVM and its interfaces. Ping the JunosVM to ensure that it is up before attempting to configure it. The net_setup script uses IP 172.16.16.2 to access the JunosVM using the login name northstar.
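The reachability check can be run from the NorthStar host, for example:

```shell
ping -c 3 172.16.16.2    # confirm the JunosVM answers before configuring it
```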

  1. From the Main Menu, type B and press Enter to display the JunosVM Configuration menu:

    To interact with this menu, type the number or letter corresponding to the item you want to add or change, and press Enter.

  2. Type 1 and press Enter to configure the JunosVM hostname. The existing JunosVM hostname is displayed. Type the new hostname and press Enter.
  3. Type 2 and press Enter to configure the JunosVM default gateway. The existing JunosVM default gateway IP address is displayed. Type the new IP address and press Enter.
  4. Type 3 and press Enter to configure the JunosVM BGP AS number. The existing JunosVM BGP AS number is displayed. Type the new BGP AS number and press Enter.
  5. Type 4A and press Enter to configure the JunosVM interface #1 (external_interface). The first item of existing JunosVM interface #1 information is displayed. Type each item of new information (interface name, IPv4 address, netmask, type), and press Enter to proceed to the next.
    Note:

    The designation of network or management for the type of interface is a label only, for your convenience. NorthStar Controller does not use this information.

  6. Type B and press Enter to add a JunosVM candidate static route. The existing JunosVM candidate static route (if any) is displayed. Type the new candidate static route and press Enter.
  7. If you have more than one static route, type B and press Enter again to add each additional route.
    Note:

    If you are adding a route and not making any other additional configuration changes, you can use option Y on the menu to apply the JunosVM static route only, without restarting the NorthStar services.

  8. Type Z and press Enter to save your changes to the JunosVM configuration.

    The following example shows saving the JunosVM configuration.

  9. Press Enter to return to the Main Menu.

Configure Junos cRPD Settings

From the Setup Main Menu, configure the Junos cRPD settings. This section applies only to cRPD installations (not to installations that use Junos VM).

  1. From the Main Menu, type B and press Enter to display the Junos cRPD Configuration menu:

    To interact with this menu, type the number or letter corresponding to the item you want to add or change, and press Enter. Notice that option Y in the lower section is omitted from this menu as it is not relevant to cRPD.

  2. Type 1 and press Enter to configure the BGP AS number. The existing AS number is displayed. Type the new number and press Enter.
  3. Type 2 and press Enter if you need to change the default BGP Monitor IPv4 Address. By default, the BMP monitor runs on the same host as cRPD, and the address is configured based on the local address of the host. We therefore recommend not changing this address.
  4. Type 3 and press Enter if you need to change the default BGP Monitor Port. We recommend not changing this port from the default of 10001. The BMP monitor listens on port 10001 for incoming BMP connections from the network. The connection is opened from cRPD, which runs on the same host as the BMP monitor.
  5. Type Z and press Enter to save your configuration changes. The following example shows saving the Junos cRPD configuration.

Set Up the SSH Key for External JunosVM

This section only applies to two-VM installations. Skip this section if you are installing NorthStar using cRPD.

Note:

This step is not required if you are doing an upgrade rather than a fresh installation.

For a two-VM installation, you must set up the SSH key for the external JunosVM.

From the Main Menu, type H and press Enter.

Follow the prompts to provide your JunosVM username and router login class (super-user, for example). The script verifies your login credentials, downloads the JunosVM SSH key file, and returns you to the main menu.

For example:

Upgrade the NorthStar Controller Software in an HA Environment

There are some special considerations for upgrading NorthStar Controller when you have an HA cluster configured. Use the following procedure:

  1. Before installing the new release of the NorthStar software, ensure that all individual cluster members are working. On each node, run the supervisorctl status command:

    For an active node, all processes should be listed as RUNNING as shown in this example:

    This is just an example. The actual list of processes varies according to the version of NorthStar on the node, your deployment setup, and the optional features installed.

    For a standby node, processes beginning with “northstar” and “northstar_pcs” should be listed as STOPPED. Also, if you have analytics installed, some of the processes beginning with “collector” are STOPPED. Other processes, including those needed to preserve connectivity, remain RUNNING. An example is shown here.

    Note:

    This is just an example; the actual list of processes varies according to the version of NorthStar on the node, your deployment setup, and the optional features installed.

  2. Ensure that the SSH keys for HA are set up. To test this, try to SSH from each node to every other node in the cluster using user “root”. If the SSH keys for HA are set up, you will not be prompted for a password. If you are prompted for a password, see Configuring a NorthStar Cluster for High Availability for the procedure to set up the SSH keys.
  3. On one of the standby nodes, install the new release of the NorthStar software according to the instructions at the beginning of this topic. Check the processes on this node by running the supervisorctl status command before proceeding to the other standby node(s).

    Since the node comes up as a standby node, some processes will be STOPPED, but the “infra” group of processes, the “listener1” process, the “collector:worker” group of processes (if you have them), and the “junos:junosvm” process (if you have it) should be RUNNING. Wait until those processes are running before proceeding to the next node.

  4. Repeat this process on each of the remaining standby nodes, one by one, until all standby nodes have been upgraded.
  5. On the active node, restart the ha-agent process to trigger a switchover to a standby node.

    One of the standby nodes becomes active and the previously active node switches to standby mode.

  6. On the previously active node, install the new release of the NorthStar software according to the instructions at the beginning of this section. Check the processes on this node using supervisorctl status; their status (RUNNING or STOPPED) should be consistent with the node’s new standby role.
Note:

The newly upgraded software automatically inherits the net_setup settings, HA configurations, and all credentials from the previous installation. Therefore, it is not necessary to re-run net_setup unless you want to change settings, HA configurations, or password credentials.
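The SSH-key check in step 2 of the procedure above can be scripted, for example (hedged; the node addresses are hypothetical, so substitute your cluster members):

```shell
# BatchMode makes ssh fail instead of prompting, so a password prompt
# shows up as "password still required". Run this from every node.
for node in 10.0.0.1 10.0.0.2 10.0.0.3; do
    ssh -o BatchMode=yes -o ConnectTimeout=5 root@$node true \
        && echo "$node: key OK" \
        || echo "$node: password still required"
done
```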