Configuring the Junos Space Cluster for High Availability Overview

This topic provides an overview of the key steps required to configure a Junos Space cluster as a carrier-grade system with all high-availability capabilities enabled.

Requirements

You can use either Junos Space hardware appliances or Junos Space Virtual Appliances to set up a Junos Space cluster.

For a cluster of Virtual Appliances, the following recommendations apply for the underlying virtualization infrastructure on which the appliances are deployed:

  • Use VMware ESX Server 4.0 or later; VMware ESXi Server 4.0, 5.0, 5.1, 5.5, or 6.0; or a kernel-based virtual machine (KVM) server running qemu-kvm Release 0.12.1.2-2/448.el6 or later (as shipped with CentOS Release 6.5) that can support a virtual machine.

  • Deploy the two Junos Space Virtual Appliances (JSVA) on two separate servers.

  • Each server must be able to dedicate four vCPUs (2.66 GHz or faster), 32 GB of RAM, and sufficient hard disk space to the Junos Space Virtual Appliance that it hosts (for a KVM host, see the capacity check after the note below).

  • The servers should have fault tolerance features similar to those of the Junos Space hardware appliance: dual redundant power supplies connected to two separate power circuits, a RAID array of hard disks for storage, and hot-swappable fans.

Note:

For more information on the requirements for the virtual appliance, refer to the Deploying a Junos Space Virtual Appliance on a VMware ESXi Server and Deploying a Junos Space Virtual Appliance on a KVM Server topics in the Junos Space Virtual Appliance documentation.
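
If you are deploying on a KVM server, you can quickly confirm that the host can support the virtual appliance before you deploy it. The commands below are a sketch, assuming shell access to the KVM host and the standard libvirt tools; adjust the thresholds to your own sizing:

  # Confirm that hardware virtualization extensions (Intel VT-x or AMD-V) are enabled
  grep -c -E 'vmx|svm' /proc/cpuinfo   # a nonzero count means the extensions are present

  # Check the CPUs and memory that the host can dedicate to the appliance
  virsh nodeinfo                       # look for CPU(s) of 4 or more and about 32 GB of memory plus host overhead
  free -g                              # available memory, in gigabytes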

If you choose Junos Space hardware appliances, you need two instances of the corresponding SKU for the appliance model that you are using. In addition, order a second power supply module for each appliance to provide a redundant power supply.

Preparation

We recommend that you follow these guidelines as you prepare a Junos Space cluster for high availability:

  • The Junos Space cluster architecture allows you to dedicate two Junos Space nodes solely to MySQL database functions. Dedicated database nodes can free up system resources such as CPU time and memory on the Junos Space VIP node, thereby improving its performance. If you decide to add dedicated database nodes to the Junos Space cluster, you must initially add the two nodes together, as the primary and secondary database nodes, which enables database high availability by default (a verification sketch follows this list).

  • Junos Space Platform enables you to run the Cassandra service either on dedicated nodes that run only the Cassandra service or on nodes that also run the JBoss server. When the Cassandra service is started on any of the nodes, device images and files from Junos Space applications are moved from the MySQL database to the Cassandra database, thereby improving the performance of the MySQL database. To ensure redundancy for files stored in the Cassandra database, the Cassandra service must be running on two or more nodes, which together form the Cassandra cluster (see the checks after this list).

  • A Junos Space Virtual Appliance uses two Ethernet interfaces: eth0 and eth3. The eth0 interface is used for all inter-node communication within the cluster and also for communication between GUI and NBI clients and the cluster. The eth3 interface can be configured as the device management interface, in which case all communication between the cluster and the managed devices occurs over this interface. If the eth3 interface is not configured, all device communication also takes place over the eth0 interface. You must therefore decide up front whether to use eth3 as the device management interface; if you choose to use eth3, use it on all appliances in the same cluster.

  • You must also decide on the following networking parameters to be configured on the Junos Space appliances:

    • IP address and subnet mask for the interface “eth0”, the default gateway address, and the address of one or more name servers in the network.

    • IP address and subnet mask for the interface “eth3” if you choose to use a separate device management interface.

    • The virtual IP address to use for the cluster, which should be an address in the same subnet as the IP address assigned to the “eth0” interface.

      If you decide to add dedicated database nodes, you must also choose a separate virtual IP (VIP) address for the database cluster. This database VIP address must be in the same subnet as the IP address assigned to the eth0 interface and must be different from the VIP address of the Junos Space cluster.

    • NTP server settings from which to synchronize the appliance’s time.

  • The IP address that you assign to each Junos Space node in the cluster and the virtual IP address for the cluster must be in the same subnet. This is required for the IP address takeover mechanism to function correctly (a quick subnet check follows this list).

    Note:

    Strictly speaking, you can choose to deploy the non-HA nodes in a different subnet. However, doing so will cause a problem if one of the HA nodes goes down and you want to promote one of the other nodes as an HA node. So, we recommend that you configure eth0 on all nodes in the same subnet.

  • Because the JBoss servers on all the nodes communicate using UDP multicast to form and manage the JBoss cluster, you must ensure that UDP multicast is enabled in the network where you deploy the cluster nodes. You must also disable IGMP snooping on the switches that interconnect the cluster, or explicitly configure them to allow UDP multicast between the nodes (a multicast connectivity test follows this list).
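
After you deploy the cluster, you can verify both database redundancy mechanisms from the appliance shell. The following commands are a hedged sketch: they assume shell access to the nodes, that the MySQL client and the Cassandra nodetool utility are on the path, and that you substitute your own credentials:

  # On a dedicated database node: both replication threads should report Yes
  mysql -u <user> -p -e 'SHOW SLAVE STATUS\G' | grep -E 'Slave_IO_Running|Slave_SQL_Running'

  # On a node running the Cassandra service: two or more nodes should show status UN (Up/Normal)
  nodetool status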
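Before you configure the appliances, you can confirm that the node addresses and both VIP addresses fall in the same subnet. A minimal sketch using the ipcalc utility shipped with CentOS, with placeholder addresses from the 192.0.2.0/24 documentation range:

  # Each command prints NETWORK=<prefix>; all of the results must match
  ipcalc -n 192.0.2.11 255.255.255.0    # eth0 address of node 1
  ipcalc -n 192.0.2.12 255.255.255.0    # eth0 address of node 2
  ipcalc -n 192.0.2.100 255.255.255.0   # cluster VIP address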
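You can also verify that UDP multicast actually passes between the nodes. One common approach, assuming the omping utility is installed on both nodes (it is not part of the appliance image), is to run the same command on each node at roughly the same time; each side should report multicast responses from the other:

  omping 192.0.2.11 192.0.2.12    # placeholder eth0 addresses of the two nodes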

Configuring the First Node in the Cluster

After you power on the appliance and connect to its console, Junos Space displays a menu-driven command-line interface (CLI) that you use to specify the initial configuration of the appliance. To complete this initial configuration, you specify the following parameters:

  • IP address and subnet mask for the interface “eth0”

  • IP address of the default gateway

  • IP address of the name server

  • IP address and subnet mask for the interface “eth3”, if you choose to use a separate device management interface, as described in the topic Understanding the Logical Clusters Within a Junos Space Cluster.

  • Whether this appliance is being added to an existing cluster. Choose “n” to indicate that this is the first node in the cluster.

  • The virtual IP address that the cluster will use.

  • NTP server settings from which to synchronize the appliance’s time.

  • Maintenance mode user ID and password.

    Note:

    Make note of the user ID and password that you specify for maintenance mode, as you will need this ID and password to perform Network Management Platform software upgrades and database restoration.

For detailed step-by-step instructions on configuring the appliance for initial deployment, refer to the Junos Space appliance documentation. After you complete the initial configuration, all Junos Space services are started on the appliance and you can log in to the Network Management Platform user interface at the virtual IP address assigned to the cluster. At this stage, you have a single-node cluster with no HA, which you can see by navigating to the Network Management Platform > Administration > Fabric workspace.
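
You can also confirm from the appliance shell that the node has taken over the virtual IP address. A minimal sketch, assuming shell access and that the VIP is bound to eth0 as a secondary address (substitute your cluster VIP for the placeholder):

  # The cluster VIP should appear as an additional inet address on eth0
  ip addr show eth0 | grep inet

  # The web UI should answer on the VIP over HTTPS
  curl -k -I https://192.0.2.100/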

Adding a Second Node to the Cluster

To add a second node to the cluster, you must first configure the second appliance using its console. The process is identical to that for the first appliance, except that you choose “y” when you are prompted to specify whether this appliance will be added to an existing cluster. Make sure that the IP address you assign to this node is in the same subnet as the first node. You must also be consistent in your use of a separate device management interface (eth3): if you chose to use eth3 for the first node, choose the same for all additional nodes in the cluster.

After you configure the second appliance, you can log in to the Network Management Platform user interface of the first node at its virtual IP address to add the node to the cluster from the Network Management Platform > Administration > Fabric > Add Fabric Node workspace. To add the node to the cluster, specify the IP address assigned to the eth0 interface of the new node, assign a name for the new node, and (optionally) schedule the date and time to add the node. The Distributed Resource Manager (DRM) service running on the first node contacts the Node Management Agent (NMA) on the new node to make the necessary configuration changes and adds it to the cluster. The DRM service also ensures that the required services are started on this node. After the new node joins the cluster, you can monitor its status from the Network Management Platform > Administration > Fabric workspace.
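
Because cluster membership depends on consistent time across nodes, it is worth confirming NTP synchronization and basic reachability on the new node before you add it. A minimal sketch, assuming shell access and the standard ntpq utility:

  # An asterisk in the first column marks the NTP server currently selected for synchronization
  ntpq -p

  # Confirm that the new node can reach the first node over eth0 (placeholder address)
  ping -c 3 192.0.2.11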

For more information about adding nodes to an existing cluster from the Junos Space Platform UI, see Fabric Management Overview (in the Junos Space Network Management Platform Workspaces User Guide).

Adding Additional Nodes to a Cluster

The process for adding additional nodes is identical to the process for adding the second node. However, these additional nodes do not participate in any of the HA clusters in the fabric unless they are explicitly promoted to that role when an HA node is removed, or unless they are added as dedicated database nodes to form the MySQL cluster.


Removing Nodes from a Cluster

If a node has failed and needs to be replaced, you can easily remove the node from the cluster. Navigate to the Network Management Platform > Administration > Fabric workspace, select the node you want to remove, and choose the Delete Node action. If the node being deleted is an HA node, the system checks whether other nodes in the cluster can be elected as the replacement for the HA node being deleted, displays the list of capable nodes, and allows you to choose from the available nodes. The process is described in Understanding High Availability Nodes in a Cluster.

If the node being deleted is a database node, the system checks whether other nodes in the cluster can replace the database node being deleted. If there are nodes present that are capable of replacing the deleted node, the system displays the list of capable nodes and allows you to choose from the available nodes.

For more information about deleting nodes from the cluster, see Deleting a Node from the Junos Space Fabric (in the Junos Space Network Management Platform Workspaces User Guide).