Configuring a NorthStar Cluster for High Availability

19-Jan-24

Before You Begin

Configuring a NorthStar application cluster for high availability (HA) is an optional process. This topic describes the steps for configuring, testing, deploying, and maintaining an HA cluster. If you are not planning to use the NorthStar application HA feature, you can skip this topic.

Note:

See High Availability Overview in the NorthStar Controller User Guide for overview information about HA. For information about analytics HA, see Installing Data Collectors for Analytics.

Note:

Whenever you change northstar.cfg, you must replicate the change to all cluster nodes so that the configuration remains uniform across the cluster. NorthStar CLI configuration changes, in contrast, are replicated across the cluster nodes automatically.
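
For example, after editing northstar.cfg on one node, you might copy it manually to the other cluster nodes; this is only a sketch, assuming the default file location /opt/northstar/data/northstar.cfg and the node-2-ip/node-3-ip placeholders used later in this topic:

scp /opt/northstar/data/northstar.cfg root@node-2-ip:/opt/northstar/data/
scp /opt/northstar/data/northstar.cfg root@node-3-ip:/opt/northstar/data/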

  • Download the NorthStar Controller and install it on each server that will be part of the cluster. Each server must be fully functional as a single-node implementation before it can become part of a cluster.

    This includes:

    • Creating passwords

    • Verifying licenses

    • Connecting to the network to establish protocol sessions such as PCEP or BGP-LS

    Note:

    All of the servers must be configured with the same database and RabbitMQ passwords.

  • All server time must be synchronized by NTP using the following procedure:

    1. Install NTP.

      yum -y install ntp
    2. Specify the preferred NTP server in ntp.conf.
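
      For example, a server entry in /etc/ntp.conf might look like the following; the pool hostname is only an illustration, so substitute your preferred NTP server (and restart ntpd afterward so the change takes effect):

      server 0.pool.ntp.org iburst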

    3. Verify the configuration.

      ntpq -p
    Note:

    All cluster nodes must have the same time zone and system time settings. This is important to prevent inconsistencies in the database storage of SNMP and LDP task collection delta values.
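
    A quick way to compare these settings across nodes (a simple sanity check, not part of the documented procedure) is to run date on each node or, on systemd-based systems, timedatectl:

      date
      timedatectl | grep "Time zone"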

  • Run the net_setup.py utility to complete the required elements of the host and JunosVM configurations. Keep that configuration information available.

    Note:

    If you are using an OpenStack environment, you will have one JunosVM that corresponds to each NorthStar Controller VM.

  • Know the virtual IPv4 address you want to use for Java Planner client and web UI access to NorthStar Controller (required). This VIP address is configured for the router-facing network for single interface configurations, and for the user-facing network for dual interface configurations. This address is always associated with the active node, even if failover causes the active node to change.

  • A virtual IP (VIP) is required when setting up a NorthStar cluster. Ensure that all servers that will be in the cluster are part of the same subnet as the VIP.

  • Decide on the priority that each node will have for active node candidacy upon failover. The default value for all nodes is 0, the highest priority. If you want all nodes to have equal priority for becoming the active node, you can just accept the default value for all nodes. If you want to rank the nodes in terms of their active node candidacy, you can change the priority values accordingly—the lower the number, the higher the priority.
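
    For example, if node 1 is left at priority 0 and nodes 2 and 3 are set to 10 and 20, node 1 is the preferred active node; if node 1 fails, node 2 is preferred over node 3.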

Set Up SSH Keys

Set up SSH keys between the selected node and each of the other nodes in the cluster, and each JunosVM.

  1. Obtain the public SSH key from one of the nodes. You will need the ssh-rsa string from the output:
    [root@rw01-ns ~]# cat /root/.ssh/id_rsa.pub
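
    If the node does not yet have an RSA key pair to display, you could generate one first; this is a sketch using standard OpenSSH options:

    [root@rw01-ns ~]# ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa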
  2. Copy the public SSH key from each node to each of the other nodes in the cluster.

    From node 1:

    [root@rw01-ns northstar_bundle_x.x.x]# ssh-copy-id root@node-2-ip
    
    [root@rw01-ns northstar_bundle_x.x.x]# ssh-copy-id root@node-3-ip
    

    From node 2:

    [root@rw02-ns northstar_bundle_x.x.x]# ssh-copy-id root@node-1-ip
    
    [root@rw02-ns northstar_bundle_x.x.x]# ssh-copy-id root@node-3-ip
    

    From node 3:

    [root@rw03-ns northstar_bundle_x.x.x]# ssh-copy-id root@node-1-ip
    
    [root@rw03-ns northstar_bundle_x.x.x]# ssh-copy-id root@node-2-ip
    
  3. Copy the public SSH key from the selected node to each remote JunosVM (the JunosVM hosted on each of the other nodes). To do this, log in to each of the other nodes and connect to its JunosVM.
    [root@rw02-ns ~]# ssh northstar@JunosVM-ip
    northstar@junosvm> configure
    northstar@junosvm# set system login user northstar authentication ssh-rsa replacement-string
    northstar@junosvm# commit

    [root@rw03-ns ~]# ssh northstar@JunosVM-ip
    northstar@junosvm> configure
    northstar@junosvm# set system login user northstar authentication ssh-rsa replacement-string
    northstar@junosvm# commit
    
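    Optionally, confirm that key-based login works before continuing; for example, from node 1 (a quick sanity check, not part of the documented procedure):

    [root@rw01-ns ~]# ssh root@node-2-ip hostname

    If the keys were copied correctly, the remote hostname is printed without a password prompt.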

Access the HA Setup Main Menu

The /opt/northstar/utils/net_setup.py utility (the same utility you use to configure NorthStar Controller) includes an option for configuring high availability (HA) for a node cluster. Run the utility on one of the servers in the cluster to set up the entire cluster.

  1. Select one of the nodes in the cluster on which to run the setup utility to configure all the nodes in the cluster.
  2. On the selected node, launch the NorthStar setup utility to display the NorthStar Controller Setup Main Menu.
    [root@northstar]# /opt/northstar/utils/net_setup.py
    Main Menu:
        .............................................
        A.) Host Setting
        .............................................
        B.) JunosVM Setting
        .............................................
        C.) Check Network Setting
        .............................................
        D.) Maintenance & Troubleshooting
        .............................................
        E.) HA Setting
        .............................................
        F.) Collect Trace/Log
        .............................................
        G.) Analytics Data Collector Setting
          (External standalone/cluster analytics server)
        .............................................
        H.) Setup SSH Key for external JunosVM  setup
        .............................................
        I.) Internal Analytics Setting (HA)
        .............................................
        X.) Exit
        .............................................
    Please select a letter to execute.
  3. Type E and press Enter to display the HA Setup main menu.

    Figure 1 shows the top portion of the HA Setup main menu in which the current configuration is listed. It includes the five supported interfaces for each node, the VIP addresses, and the ping interval and timeout values. In this figure, only the first of the nodes is included, but you would see the corresponding information for all three of the nodes in the cluster configuration template. HA functionality requires an odd number of nodes in a cluster, and a minimum of three.

    Note:

    If you have a cRPD installation, the JunosVM information is not displayed as it is not applicable.

    Figure 1: HA Setup Main Menu, Top Portion
    Note:

    If you are configuring a cluster for the first time, the IP addresses are blank and other fields contain default values. If you are modifying an existing configuration, the current cluster configuration is displayed, and you have the opportunity to change the values.

    Note:

    If the servers are located in geodiverse locations, you can use Site Name to indicate which servers are in the same or different geographical locations.

    Figure 2 shows the lower portion of the HA Setup main menu. To complete the configuration, you type the number or letter of an option and provide the requested information. After each option is complete, you are returned to the HA Setup main menu so you can select another option.

    Figure 2: HA Setup Main Menu, Lower Portion
    Note:

    If you have a cRPD installation, options 3, 4, and 8 are not displayed as they are not applicable. The remaining options are not renumbered.

Configure the Three Default Nodes and Their Interfaces

The HA Setup main menu initially offers three nodes for configuration because a cluster must have a minimum of three nodes. You can add more nodes as needed.

For each node, the menu offers five interfaces. Configure as many of those as you need.

  1. Type 5 and press Enter to modify the first node.
  2. When prompted, enter the number of the node to be modified, the hostname, the site name, and the priority, pressing Enter between entries.
    Note:

    The NorthStar Controller uses root as a username to access other nodes.

    The default priority is 0. You can just press Enter to accept the default or you can type a new value.

    For each interface, enter the interface name, IPv4 address, and switchover (yes/no), pressing Enter between entries.

    Note:

    For each node, interface #1 is reserved for the cluster communication interface which is used to facilitate communication between nodes. For this interface, it is required that switchover be set to Yes, and you cannot change that parameter.

    When finished, you are returned to the HA Setup main menu.

    The following example configures Node #1 and two of its available five interfaces.

    Please select a number to modify.
    [<CR>=return to main menu]
    5
    Node ID : 1
    
        HA Setup:
           ..........................................................
           Node #1
           Hostname                        : 
           Site Name                       : site1
           Priority                        : 0
           Cluster Communication Interface : external0
           Cluster Communication IP        : 
              Interfaces
                Interface #1
                   Name                    : external0
                   IPv4                    : 
                   Switchover              : yes
                Interface #2
                   Name                    : mgmt0
                   IPv4                    : 
                   Switchover              : yes
                Interface #3
                   Name                    : 
                   IPv4                    : 
                   Switchover              : yes
                Interface #4
                   Name                    : 
                   IPv4                    : 
                   Switchover              : yes
                Interface #5
                   Name                    : 
                   IPv4                    : 
                   Switchover              : yes
    
    current node 1 Node hostname (without domain name) : 
    new node 1 Node hostname (without domain name) : node-1
    
    current node 1 Site Name : site1
    new node 1 Site Name : site1
    
    current node 1 Node priority : 0
    new node 1 Node priority : 10
    
    current node 1 Node cluster communication interface : external0
    new node 1 Node cluster communication interface : external0
    
    current node 1 Node cluster communication IPv4 address : 
    new node 1 Node cluster communication IPv4 address : 10.25.153.6
    
    
    current node 1 Node interface #2 name : mgmt0
    new node 1 Node interface #2 name : external1
    
    current node 1 Node interface #2 IPv4 address : 
    new node 1 Node interface #2 IPv4 address : 10.100.1.1
    
    current node 1 Node interface #2 switchover (yes/no) : yes
    new node 1 Node interface #2 switchover (yes/no) : 
    
    current node 1 Node interface #3 name : 
    new node 1 Node interface #3 name : 
    
    current node 1 Node interface #3 IPv4 address : 
    new node 1 Node interface #3 IPv4 address : 
    
    current node 1 Node interface #3 switchover (yes/no) : yes
    new node 1 Node interface #3 switchover (yes/no) : 
    
    current node 1 Node interface #4 name : 
    new node 1 Node interface #4 name : 
    
    current node 1 Node interface #4 IPv4 address : 
    new node 1 Node interface #4 IPv4 address : 
    
    current node 1 Node interface #4 switchover (yes/no) : yes
    new node 1 Node interface #4 switchover (yes/no) : 
    
    current node 1 Node interface #5 name : 
    new node 1 Node interface #5 name : 
    
    current node 1 Node interface #5 IPv4 address : 
    new node 1 Node interface #5 IPv4 address : 
    
    current node 1 Node interface #5 switchover (yes/no) : yes
    new node 1 Node interface #5 switchover (yes/no) : 
  3. Type 5 and press Enter again to repeat the data entry for each of the other two nodes.

Configure the JunosVM for Each Node

To complete the node-specific setup, configure the JunosVM for each node in the cluster.

  1. From the HA Setup main menu, type 8 and press Enter to modify the JunosVM for a node.
  2. When prompted, enter the node number, the JunosVM hostname, and the JunosVM IPv4 address, pressing Enter between entries.

    Figure 3 shows these JunosVM setup fields.

    Figure 3: Node 1 JunosVM Setup Fields

    When finished, you are returned to the HA Setup main menu.

  3. Type 8 and press Enter again to repeat the JunosVM data entry for each of the other two nodes.

(Optional) Add More Nodes to the Cluster

To add nodes beyond the default three, type 1 and press Enter. Then configure the node and the node’s JunosVM using the same procedures previously described. Repeat for each additional node.

Note:

HA functionality requires an odd number of nodes and a minimum of three nodes per cluster.

The following example shows adding an additional node, node #4, with two interfaces.

Please select a number to modify.
[<CR>=return to main menu]:
1
New Node ID : 4

current node 4 Node hostname (without domain name) : 
new node 4 Node hostname (without domain name) : node-4

current node 4 Site Name : site1
new node 4 Site Name : site1

current node 4 Node priority : 0
new node 4 Node priority : 40

current node 4 Node cluster communication interface : external0
new node 4 Node cluster communication interface : external0

current node 4 Node cluster communication IPv4 address : 
new node 4 Node cluster communication IPv4 address : 10.25.153.12


current node 4 Node interface #2 name : mgmt0
new node 4 Node interface #2 name : external1

current node 4 Node interface #2 IPv4 address : 
new node 4 Node interface #2 IPv4 address : 10.100.1.7

current node 4 Node interface #2 switchover (yes/no) : yes
new node 4 Node interface #2 switchover (yes/no) : 

current node 4 Node interface #3 name : 
new node 4 Node interface #3 name : 

current node 4 Node interface #3 IPv4 address : 
new node 4 Node interface #3 IPv4 address : 

current node 4 Node interface #3 switchover (yes/no) : yes
new node 4 Node interface #3 switchover (yes/no) : 

current node 4 Node interface #4 name : 
new node 4 Node interface #4 name : 

current node 4 Node interface #4 IPv4 address : 
new node 4 Node interface #4 IPv4 address : 

current node 4 Node interface #4 switchover (yes/no) : yes
new node 4 Node interface #4 switchover (yes/no) : 

current node 4 Node interface #5 name : 
new node 4 Node interface #5 name : 

current node 4 Node interface #5 IPv4 address : 
new node 4 Node interface #5 IPv4 address : 

current node 4 Node interface #5 switchover (yes/no) : yes
new node 4 Node interface #5 switchover (yes/no) :

The following example shows configuring the JunosVM that corresponds to node #4.

Please select a number to modify.
[<CR>=return to main menu]
3
New JunosVM ID : 4
current junosvm 4 JunOSVM hostname : 
new junosvm 4 JunOSVM hostname : junosvm-4

current junosvm 4 JunOSVM IPv4 address : 
new junosvm 4 JunOSVM IPv4 address : 10.25.153.13

Configure Cluster Settings

The remaining settings apply to the cluster as a whole.

  1. From the HA Setup main menu, type 9 and press Enter to configure the VIP address for the external (router-facing) network. This is the virtual IP address that is always associated with the active node, even if failover causes the active node to change. The VIP is required, even if you are configuring a separate user-facing network interface. If you have upgraded from an earlier NorthStar release in which you did not have VIP for external0, you must now configure it.
    Note:

    Make a note of this IP address. If failover occurs while you are working in the NorthStar Planner UI, the client is disconnected and you must re-launch it using this VIP address. For the NorthStar Controller web UI, you would be disconnected and would need to log back in.

    The following example shows configuring the VIP address for the external network.

    Please select a number to modify.
    [<CR>=return to main menu]
    9
    current VIP interface #1 IPv4 address : 
    new VIP interface #1 IPv4 address : 10.25.153.100
    
    current VIP interface #2 IPv4 address : 
    new VIP interface #2 IPv4 address : 10.100.1.1
    
    current VIP interface #3 IPv4 address : 
    new VIP interface #3 IPv4 address : 
    
    current VIP interface #4 IPv4 address : 
    new VIP interface #4 IPv4 address : 
    
    current VIP interface #5 IPv4 address : 
    new VIP interface #5 IPv4 address : 
  2. Type 9 and press Enter to configure the VIP address for the user-facing network for dual interface configurations. If you do not configure this IP address, the router-facing VIP address also functions as the user-facing VIP address.
  3. Type D and press Enter to configure the setup mode as cluster (local cluster).
  4. Type E and press Enter to configure the PCEP session. The default is physical_ip. If you are using the cluster VIP for your PCEP session, configure the PCEP session as vip.
    Note:

    All of your PCC sessions must use either physical IP or VIP (no mixing and matching), and that must also be reflected in the PCEP configuration on the router.
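
    For example, if you set the PCEP session to vip, the PCEP configuration on each PCC would point at the cluster VIP from the earlier example rather than at any node's physical address. The following is only a sketch; the PCE name northstar-ha is a placeholder:

    set protocols pcep pce northstar-ha destination-ipv4-address 10.25.153.100
    set protocols pcep pce northstar-ha destination-port 4189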

Test and Deploy the HA Configuration

You can test and deploy the HA configuration from within the HA Setup main menu.

  1. Type G and press Enter to test HA connectivity for all the interfaces. Verify that all interfaces are up before you deploy the HA cluster.
  2. Type H and press Enter to launch a script that connects to and deploys all the servers and all the JunosVMs in the cluster. The process takes approximately 15 minutes, after which the display is returned to the HA Setup menu. You can view the log of the progress at /opt/northstar/logs/net_setup.log.
    Note:

    If the execution has not completed within 30 minutes, a process might be stuck. You can sometimes see this by examining the log at /opt/northstar/logs/net_setup.log. You can press Ctrl-C to cancel the script, and then restart it.
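
    While the script runs, you can follow its progress from another terminal by tailing the log; this is only a convenience, not a required step:

    [root@northstar]# tail -f /opt/northstar/logs/net_setup.log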

  3. To check whether the election process has completed, examine the processes running on each node by logging in to each node and running the supervisorctl status command.
    [root@node-1]# supervisorctl status
    

    For the active node, you should see all processes listed as RUNNING as shown here.

    Note:

    The actual list of processes depends on the version of NorthStar and your deployment setup.

    [root@node-1 ~]# supervisorctl status
    bmp:bmpMonitor                   RUNNING   pid 2957, uptime 0:58:02
    collector:worker1                RUNNING   pid 19921, uptime 0:01:42
    collector:worker2                RUNNING   pid 19923, uptime 0:01:42
    collector:worker3                RUNNING   pid 19922, uptime 0:01:42
    collector:worker4                RUNNING   pid 19924, uptime 0:01:42
    collector_main:beat_scheduler    RUNNING   pid 19770, uptime 0:01:53
    collector_main:es_publisher      RUNNING   pid 19771, uptime 0:01:53
    collector_main:task_scheduler    RUNNING   pid 19772, uptime 0:01:53
    config:cmgd                      RUNNING   pid 22087, uptime 0:01:53 
    config:cmgd-rest                 RUNNING   pid 22088, uptime 0:01:53
    docker:dockerd                   RUNNING   pid 4368, uptime 0:57:34
    epe:epeplanner                   RUNNING   pid 9047, uptime 0:50:34
    infra:cassandra                  RUNNING   pid 2971, uptime 0:58:02
    infra:ha_agent                   RUNNING   pid 9009, uptime 0:50:45
    infra:healthmonitor              RUNNING   pid 9172, uptime 0:49:40
    infra:license_monitor            RUNNING   pid 2968, uptime 0:58:02
    infra:prunedb                    RUNNING   pid 19770, uptime 0:01:53
    infra:rabbitmq                   RUNNING   pid 7712, uptime 0:52:03
    infra:redis_server               RUNNING   pid 2970, uptime 0:58:02
    infra:zookeeper                  RUNNING   pid 2965, uptime 0:58:02
    ipe:ipe_app                      RUNNING   pid 2956, uptime 0:58:02
    listener1:listener1_00           RUNNING   pid 9212, uptime 0:49:29
    netconf:netconfd_00              RUNNING   pid 19768, uptime 0:01:53
    northstar:anycastGrouper         RUNNING   pid 19762, uptime 0:01:53
    northstar:configServer           RUNNING   pid 19767, uptime 0:01:53
    northstar:mladapter              RUNNING   pid 19765, uptime 0:01:53
    northstar:npat                   RUNNING   pid 19766, uptime 0:01:53
    northstar:pceserver              RUNNING   pid 19441, uptime 0:02:59
    northstar:privatet1vproxy        RUNNING   pid 19432, uptime 0:02:59
    northstar:prpdclient             RUNNING   pid 19763, uptime 0:01:53
    northstar:scheduler              RUNNING   pid 19764, uptime 0:01:53
    northstar:topologyfilter         RUNNING   pid 19760, uptime 0:01:53
    northstar:toposerver             RUNNING   pid 19762, uptime 0:01:53
    northstar_pcs:PCServer           RUNNING   pid 19487, uptime 0:02:49
    northstar_pcs:PCViewer           RUNNING   pid 19486, uptime 0:02:49
    web:app                          RUNNING   pid 19273, uptime 0:03:18
    web:gui                          RUNNING   pid 19280, uptime 0:03:18
    web:notification                 RUNNING   pid 19272, uptime 0:03:18
    web:proxy                        RUNNING   pid 19275, uptime 0:03:18
    web:restconf                     RUNNING   pid 19271, uptime 0:03:18
    web:resthandler                  RUNNING   pid 19275, uptime 0:03:18
    

    For a standby node, processes beginning with “northstar” and “northstar_pcs” should be listed as STOPPED. Also, if you have analytics installed, some of the processes beginning with “collector” are STOPPED. Other processes, including those needed to preserve connectivity, remain RUNNING. An example is shown here.

    Note:

    This is just an example; the actual list of processes depends on the version of NorthStar, your deployment setup, and the optional features you have installed.

    [root@node-1 ~]# supervisorctl status
    bmp:bmpMonitor                   RUNNING   pid 2957, uptime 0:58:02
    collector:worker1                RUNNING   pid 19921, uptime 0:01:42
    collector:worker2                RUNNING   pid 19923, uptime 0:01:42
    collector:worker3                RUNNING   pid 19922, uptime 0:01:42
    collector:worker4                RUNNING   pid 19924, uptime 0:01:42
    collector_main:beat_scheduler    STOPPED   Dec 24, 05:12 AM
    collector_main:es_publisher      STOPPED   Dec 24, 05:12 AM
    collector_main:task_scheduler    STOPPED   Dec 24, 05:12 AM
    config:cmgd                      STOPPED   Dec 24, 05:12 AM
    config:cmgd-rest                 STOPPED   Dec 24, 05:12 AM
    docker:dockerd                   RUNNING   pid 4368, uptime 0:57:34
    epe:epeplanner                   RUNNING   pid 9047, uptime 0:50:34
    infra:cassandra                  RUNNING   pid 2971, uptime 0:58:02
    infra:ha_agent                   RUNNING   pid 9009, uptime 0:50:45
    infra:healthmonitor              RUNNING   pid 9172, uptime 0:49:40
    infra:license_monitor            RUNNING   pid 2968, uptime 0:58:02
    infra:prunedb                    STOPPED   Dec 24, 05:12 AM
    infra:rabbitmq                   RUNNING   pid 7712, uptime 0:52:03
    infra:redis_server               RUNNING   pid 2970, uptime 0:58:02
    infra:zookeeper                  RUNNING   pid 2965, uptime 0:58:02
    ipe:ipe_app                      STOPPED   Dec 24, 05:12 AM
    listener1:listener1_00           RUNNING   pid 9212, uptime 0:49:29
    netconf:netconfd_00              RUNNING   pid 19768, uptime 0:01:53
    northstar:anycastGrouper         STOPPED   Dec 24, 05:12 AM
    northstar:configServer           STOPPED   Dec 24, 05:12 AM
    northstar:mladapter              STOPPED   Dec 24, 05:12 AM
    northstar:npat                   STOPPED   Dec 24, 05:12 AM
    northstar:pceserver              STOPPED   Dec 24, 05:12 AM
    northstar:privatet1vproxy        STOPPED   Dec 24, 05:12 AM
    northstar:prpdclient             STOPPED   Dec 24, 05:12 AM
    northstar:scheduler              STOPPED   Dec 24, 05:12 AM
    northstar:topologyfilter         STOPPED   Dec 24, 05:12 AM
    northstar:toposerver             STOPPED   Dec 24, 05:12 AM
    northstar_pcs:PCServer           STOPPED   Dec 24, 05:12 AM
    northstar_pcs:PCViewer           STOPPED   Dec 24, 05:12 AM
    northstar_pcs:SRPCServer         STOPPED   Dec 24, 05:12 AM
    web:app                          STOPPED   Dec 24, 05:12 AM
    web:gui                          STOPPED   Dec 24, 05:12 AM
    web:notification                 STOPPED   Dec 24, 05:12 AM
    web:proxy                        STOPPED   Dec 24, 05:12 AM
    web:restconf                     STOPPED   Dec 24, 05:12 AM
    web:resthandler                  STOPPED   Dec 24, 05:12 AM
    
  4. Set the web UI admin password using either the web UI or net_setup.
    • For the web UI method, use the external IP address that was provided to you when you installed the NorthStar application. Type that address into the address bar of your browser (for example, https://10.0.1.29:8443). A window is displayed requesting the confirmation code in your license file (the characters after S-NS-SDN=), and the password you wish to use. See Figure 4.

      Figure 4: Web UI Method for Setting the Web UI Password
    • For the net_setup method, select D from the net_setup Main Menu (Maintenance & Troubleshooting), and then 3 from the Maintenance & Troubleshooting menu (Change UI Admin Password).

      Main Menu:
          .............................................
          A.) Host Setting
          .............................................
          B.) JunosVM Setting
          .............................................
          C.) Check Network Setting
          .............................................
          D.) Maintenance & Troubleshooting
          .............................................
          E.) HA Setting
          .............................................
          F.) Collect Trace/Log
          .............................................
          G.) Analytics Data Collector Setting
             (External standalone/cluster analytics server)
          .............................................
          H.) Setup SSH Key for external JunosVM  setup
          .............................................
          I.) Internal Analytics Setting (HA)
          .............................................
          X.) Exit
          .............................................
      Please select a letter to execute.
      D
      
      
      Maintenance & Troubleshooting:
         ..................................................
         1.) Backup JunosVM Configuration
         2.) Restore JunosVM Configuration
         3.) Change UI Admin Password
         4.) Change Database Password
         5.) Change MQ Password
         6.) Change Host Root Password
         7.) Change JunosVM root and northstar User Password
         8.) Initialize all credentials ( 3,4,5,6,7 included)
         ..................................................
      
      Please select a number to modify.
      
      [<CR>=return to main menu]:
      3

      Type Y to confirm you wish to change the UI Admin password, and enter the new password when prompted.

      Change UI Admin Password
      Are you sure you want to change the UI Admin password? (Y/N) y
      
      Please enter new UI Admin password : 
      Please confirm new UI Admin password : 
      Changing UI Admin password ...
      UI Admin password has been changed successfully
  5. Once the web UI admin password has been set, return to the HA Setup menu (select E from the Main Menu). View cluster information and check the cluster status by typing K and pressing Enter. In addition to providing general cluster information, this option launches the ns_check_cluster.sh script. You can also run this script outside of the setup utility by executing the following commands:
    [root@northstar]# cd /opt/northstar/utils/ 
    [root@northstar utils]# ./ns_check_cluster.sh

Replace a Failed Node if Necessary

On the HA Setup menu, options I and J can be used when physically replacing a failed node. They allow you to replace a node without having to redeploy the entire cluster, which would wipe out all the data in the database.

CAUTION:

While a node is being replaced in a three-node cluster, HA is not guaranteed.

  1. Replace the physical node in the network and install NorthStar Controller on the replacement node.
  2. Run the NorthStar setup utility to configure the replacement node with the necessary IP addresses. Be sure you duplicate the previous node setup, including:
    • IP address and hostname

    • Initialization of credentials

    • Licensing

    • Network connectivity

  3. Go to one of the existing cluster member nodes (preferably the same node that was used to configure the HA cluster initially). Going forward, we will refer to this node as the anchor node.
  4. Set up the SSH key from the anchor node to the replacement node and JunosVM.

    Copy the public SSH key from the anchor node to the replacement node, from the replacement node to the other cluster nodes, and from the other cluster nodes to the replacement node.

    Note:

    Remember that in your initial HA setup, you copied the public SSH key from each node to each of the other nodes.
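
    For example, from the anchor node you might push its key to the replacement node with the same ssh-copy-id approach used during the initial setup (replacement-node-ip is a placeholder):

    [root@node-1 ~]# ssh-copy-id root@replacement-node-ip

    Repeat in the other directions (from the replacement node to the other cluster nodes, and from the other cluster nodes to the replacement node) as described above.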

    Copy the public SSH key from the anchor node to the replacement node’s JunosVM. To do this, log in to the replacement node and connect to its JunosVM.

    [root@node-1 ~]# ssh northstar@JunosVM-ip
    northstar@junosvm> configure
    northstar@junosvm# set system login user northstar authentication ssh-rsa replacement-string
    northstar@junosvm# commit
  5. From the anchor node, remove the failed node from the Cassandra database. Run the command nodetool removenode host-id. To check the status, run the command nodetool status.

    The following example shows removing the failed node with IP address 10.25.153.10.

    [root@node-1 ~]# . /opt/northstar/northstar.env 
    [root@node-1 ~]# nodetool status
    Datacenter: datacenter1
    =======================
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address        Load       Tokens       Owns    Host ID                               Rack
    UN  10.25.153.6  5.06 MB    256          ?       507e572c-0320-4556-85ec-443eb160e9ba  rack1
    UN  10.25.153.8  651.94 KB  256          ?       cd384965-cba3-438c-bf79-3eae86b96e62  rack1
    DN  10.25.153.10  4.5 MB     256          ?       b985bc84-e55d-401f-83e8-5befde50fe96  rack1
    
    [root@node-1 ~]# nodetool removenode b985bc84-e55d-401f-83e8-5befde50fe96
    [root@node-1 ~]# nodetool status
    Datacenter: datacenter1
    =======================
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address        Load       Tokens       Owns    Host ID                               Rack
    UN  10.25.153.6  5.06 MB    256          ?       507e572c-0320-4556-85ec-443eb160e9ba  rack1
    UN  10.25.153.8  639.61 KB  256          ?       cd384965-cba3-438c-bf79-3eae86b96e62  rack1
  6. From the HA Setup menu on the anchor node, select option I to copy the HA configuration to the replacement node.
  7. From the HA Setup menu on the anchor node, select option J to deploy the HA configuration, only on the replacement node.

Configure Fast Failure Detection Between JunosVM and PCC

You can use Bidirectional Forwarding Detection (BFD) when deploying the NorthStar application to provide faster failure detection than BGP or IGP keepalive and hold timers. The BFD feature is supported on the PCC and the JunosVM.

To use this feature, configure bfd-liveness-detection minimum-interval milliseconds on the PCC, and mirror this configuration on the JunosVM. We recommend a value of 1000 ms or higher for each cluster node. The appropriate BFD value ultimately depends on your requirements and environment.
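
For example, on the PCC a BGP group toward the NorthStar JunosVM might carry a statement like the following, mirrored on the JunosVM side; the group name northstar is a placeholder, and 1000 ms reflects the recommendation above:

set protocols bgp group northstar bfd-liveness-detection minimum-interval 1000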
