JNU Topology for CSDS

date_range 12-Mar-25

Learn about Junos Node Unifier (JNU) topology, and its nodes in the Connected Security Distributed Services (CSDS) architecture.

Junos Node Unifier (JNU) can manage the network devices in Connected Security Distributed Services (CSDS) architecture using a single touchpoint management solution. It allows you to centrally configure and manage network devices running Junos OS from a single point.

JNU Nodes

Figure 1 shows the JNU topology and its components.

Figure 1: JNU Topology
  • JNU controller—The JNU controller is a centralized entity that presents a unified CLI view of the network devices added as JNU satellites. This node runs the jnud process in controller mode. The JNU topology also supports active-active high availability with a dual-controller setup.

  • JNU satellite—JNU satellites are the physical or virtual network devices that operate under the control of the JNU controller. These nodes run the jnud process in satellite mode.

The connectivity between the JNU nodes uses the CSDS management network, eliminating the need for a separate network. The controller and satellites communicate using the jnuadmin user credentials that are created during JNU configuration. The communication channel is a secure NETCONF-over-SSH connection. The controller learns each satellite's device management schema, a data model unique to each network device that describes its complete configuration and operational capabilities. After JNU is configured, you can access all the satellites' schemas from the controller, allowing you to centrally manage the nodes.
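The transport described above is standard NETCONF over SSH. As a point of reference only (the JNU configuration workflow may enable this for you, so treat this as an illustrative sketch rather than a required manual step), NETCONF over SSH is enabled on a Junos OS device as follows:

```
[edit]
user@device# set system services netconf ssh
user@device# commit
```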

JNU Topology Considerations

In a multi-node setup such as the CSDS architecture,

  • An MX Series router running the jnud process acts as the JNU controller. The single-touchpoint management solution supports either a single controller or dual controllers.

  • The SRX Series Firewalls, vSRX Virtual Firewalls, and Junos Device Manager (JDM) running the jnud process act as JNU satellites.

Note:
  • The external Ubuntu server that hosts the JDM and vSRX Virtual Firewall instances is not part of the JNU topology.

  • You must run the same Junos OS release on the controller and the satellite nodes.

Use Feature Explorer to confirm platform and release support for specific features.

JNU Deployment Process

The following procedure describes the JNU deployment process:

  1. Configure the MX Series router as the controller. You must configure the controller role on both Routing Engines (REs).

  2. Configure the SRX Series Firewalls, vSRX Virtual Firewalls, and the JDM container as satellites.

  3. When satellites join the controller,

    • Satellites push their schemas to the controller during the initial synchronization. The controller also learns each satellite's version and model as part of this synchronization. A satellite has 30 minutes to synchronize with the controller, making up to 60 attempts at 30-second intervals. If the synchronization fails, you can run the command request jnu satellite sync on the satellite to manually perform the initial synchronization.

    • The operational command show chassis jnu satellites lists all the satellites managed by the controller, including the JDM. Although the JDM is added as a satellite, unlike the other satellites it does not send its configuration to the controller during the initial synchronization. You can, however, run JDM-specific operational commands from the controller.

    • Because the controller has dual REs, it synchronizes the other RE with the schema details.

    • In a dual-controller setup, the satellites perform the initial synchronization with both controllers. Each satellite fetches the second controller's IP address from the [edit chassis jnu-management other-controller controller-ip-address] hierarchy level and sends its schema to both controllers. If the other controller is unreachable, the commit fails.

    • The controller merges the satellite's command hierarchy with its own but excludes the satellite's configuration schema. The controller and the satellites maintain their configuration schemas separately.

    • The controller dynamically learns different versions of the schema that are running on satellites.

    Note:
    • The satellites must join the controller without leaving any uncommitted changes on the controller. Avoid running configuration commands while a satellite is joining or upgrading. Use the operational command show chassis jnu satellites to confirm the status of the satellites before you run configuration commands.

    • Do not run commands directly on the satellites after they join the JNU topology, because the configuration may be overwritten by commits from the controller.

    • You cannot perform XML subtree filtering of configuration for satellites from the controller.
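    The commands referenced in this step can be summarized in the following hedged workflow sketch. The device names and the address 192.0.2.2 are placeholders, and the exact syntax may vary by release:

    ```
    user@satellite> request jnu satellite sync

    [edit]
    user@satellite# set chassis jnu-management other-controller controller-ip-address 192.0.2.2
    user@satellite# commit

    user@controller> show chassis jnu satellites
    ```

    The first command manually triggers the initial synchronization from a satellite if the automatic attempts time out; the set and commit statements point a satellite at the second controller in a dual-controller setup; and the show command, run on the controller, verifies which satellites (including the JDM) are under management.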

  4. The controller performs all subsequent management of the satellites. From the controller, you can run the Junos OS commands specific to the network devices added as satellites.
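For example (illustrative only; the commands actually available depend on the schema merged from each satellite), once an SRX Series satellite has joined, a security-specific operational command such as the following could be issued directly from the controller's CLI:

```
user@controller> show security flow session
```

Because the controller has merged the satellite's command hierarchy into its own unified CLI view, satellite-specific commands like this run from the single touchpoint rather than from the individual device.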