Junos Node Slicing User Guide

Setting Up Junos Node Slicing

06-Sep-24

If you are using the external server model, you must complete the procedures described in the chapter Preparing for Junos Node Slicing Setup before proceeding with the Junos node slicing setup tasks.

Configuring an MX Series Router to Operate in BSYS Mode (External Server Model)

Note:

Ensure that the MX Series router is connected to the x86 servers as described in Connecting the Servers and the Router.

Junos node slicing requires the MX Series router to function as the base system (BSYS).

Use the following steps to configure an MX Series router to operate in BSYS mode:

  1. Install the Junos OS package for MX Series routers on both the Routing Engines of the router.

    You can download the Junos OS package from the Downloads page. From the Downloads page, click View all products and then select the MX Series device model to download the supported Junos OS package.

  2. On the MX Series router, run the show chassis hardware command and verify that the transceivers on both the Control Boards (CBs) are detected. The following text represents a sample output:
    root@router> show chassis hardware
    
    …
    CB 0             REV 23   750-040257   CABL4989          Control Board
      Xcvr 0         REV 01   740-031980   ANT00F9           SFP+-10G-SR
      Xcvr 1         REV 01   740-031980   APG0SC3           SFP+-10G-SR
    CB 1             REV 24   750-040257   CABX8889          Control Board
      Xcvr 0         REV 01   740-031980   AP41BKS           SFP+-10G-SR
      Xcvr 1         REV 01   740-031980   ALN0PCM           SFP+-10G-SR
    
  3. On the MX Series router, apply the following configuration statements:
    root@router# set chassis network-slices guest-network-functions
    root@router# set chassis redundancy graceful-switchover
    root@router# set chassis network-services enhanced-ip
    root@router# set routing-options nonstop-routing
    root@router# set system commit synchronize
    root@router# commit
    Note:

    On MX960 routers, you must configure the network-services mode as enhanced-ip or enhanced-ethernet. On MX2020 routers, the enhanced-ip configuration statement is enabled by default.

    The router now operates in BSYS mode.

Note:

A router in BSYS mode is expected to run only the features required for the basic management functionality in Junos node slicing. For example, the BSYS is not expected to have interface configurations associated with the line cards installed in the system. Instead, guest network functions (GNFs) have the full-fledged router configurations.

Installing JDM RPM Package on x86 Servers Running RHEL (External Server Model)

Before installing the JDM RPM package for x86 servers, ensure that you have installed the additional packages, as described in Installing Additional Packages for JDM.

Download and install the JDM RPM package for x86 servers running RHEL as follows:

To install the package on x86 servers running RHEL, perform the following steps on each of the servers:

  1. Download the JDM RPM package from the Downloads page.
    From the Downloads page, select All Products > Junos Node Slicing - Junos Device Manager to download the package, which is named JDM for Redhat.
  2. Disable SELinux and reboot the server.

    Starting with RHEL 9, you can disable SELinux by using the grubby utility to configure the boot loader to add selinux=0 to the kernel command line.

    root@Linux Server0# grubby --update-kernel ALL --args selinux=0

    On RHEL releases earlier than RHEL 9, you can disable SELinux by setting SELINUX=disabled in the /etc/selinux/config file.
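
    For example, after the edit, the relevant line in /etc/selinux/config reads as follows (a minimal illustration; the rest of the file stays unchanged):

    SELINUX=disabled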

    After SELinux is disabled, reboot the server.

    root@Linux Server0# reboot

  3. Install the JDM RPM package (indicated by the .rpm extension) by using the following command. An example of the JDM RPM package used is shown below:

    root@Linux Server0# rpm -ivh jns-jdm-1.0-0-17.4R1.13.x86_64.rpm

    Preparing...                          ################################# [100%]
    Detailed log of jdm setup saved in /var/log/jns-jdm-setup.log
    Updating / installing...
       1:jns-jdm-1.0-0           ################################# [100%]
    Setup host for jdm...
    Launch libvirtd in listening mode
    Done Setup host for jdm
    Installing /juniper/.tmp-jdm-install/juniper_ubuntu_rootfs.tgz...
    Configure /juniper/lxc/jdm/jdm1/rootfs...
    Configure /juniper/lxc/jdm/jdm1/rootfs DONE
    Created symlink from /etc/systemd/system/multi-user.target.wants/jdm.service to /usr/lib/systemd/system/jdm.service.
    Done Setup jdm
    Redirecting to /bin/systemctl restart  rsyslog.service
    

Repeat the steps for the second server.
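
To confirm that the package installed cleanly on each server, you can optionally query the RPM database (a minimal check; the exact version string depends on the package you installed):

root@Linux Server0# rpm -qa | grep jns-jdm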

Installing JDM Ubuntu Package on x86 Servers Running Ubuntu 20.04 (External Server Model)

Before installing the JDM Ubuntu package for x86 servers, ensure that you have installed the additional packages. For more details, see Installing Additional Packages for JDM.

Download and install the JDM Ubuntu package for x86 servers running Ubuntu 20.04 as follows:

To install the JDM package on the x86 servers running Ubuntu 20.04, perform the following steps on each of the servers:

  1. Download the JDM Ubuntu package from the Downloads page.
    From the Downloads page, select All Products > Junos Node Slicing - Junos Device Manager to download the package, which is named JDM for Ubuntu.
  2. Disable AppArmor and reboot the server.

    root@Linux Server0# systemctl stop apparmor

    root@Linux Server0# systemctl disable apparmor

    root@Linux Server0# reboot

  3. Install the JDM Ubuntu package (indicated by the .deb extension) by using the following command. An example of the JDM Ubuntu package used is shown below:
    root@Linux Server0# dpkg -i jns-jdm-22.3-I.20220605.0.0258.x86_64.deb 
    Selecting previously unselected package jns-jdm.
    (Reading database ... 216562 files and directories currently installed.)
    Preparing to unpack .../jns-jdm-22.3-I.20220605.0.0258.x86_64.deb ...
    Detailed log of jdm setup saved in /var/log/jns-jdm-setup.log
    Doing version check for 20.04
    Warning: vm-primary not mounted on SSD
    Unpacking jns-jdm (22.3-I.20220605.0.0258) ...
    Setting up jns-jdm (22.3-I.20220605.0.0258) ...
    Setup host for jdm...
    Launch libvirtd in listening mode
    Done Setup host for jdm
    Installing /juniper/.tmp-jdm-install/juniper_ubuntu_rootfs.tgz...
    Configure /juniper/lxc/jdm/jdm1/rootfs...
    Configure /juniper/lxc/jdm/jdm1/rootfs DONE
    Setup Junos cgroups...Done
    Created symlink /etc/systemd/system/multi-user.target.wants/jdm.service → /lib/systemd/system/jdm.service.
    Done Setup jdm
    Processing triggers for libc-bin (2.31-0ubuntu9.7) ...
    

Repeat the steps for the second server.
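
To confirm that the package is installed on each server, you can optionally query dpkg (a minimal check; the version shown depends on the package you installed):

root@Linux Server0# dpkg -l | grep jns-jdm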

Configuring JDM on the x86 Servers (External Server Model)

Use the following steps to configure JDM on each of the x86 servers.

  1. On each server, start the JDM and assign the identities server0 and server1 to the two servers, respectively, as follows:

    On one server, run the following command:

    root@Linux server0# jdm start server=0

    Starting JDM
    

    On the other server, run the following command:

    root@Linux server1# jdm start server=1

    Starting JDM
    Note:

    The identities, once assigned, cannot be modified without uninstalling the JDM and then reinstalling it.

  2. Enter the JDM console on each server by running the following command:

    root@Linux Server0# jdm console

    Connected to domain jdm
    Escape character is ^]
     * Starting Signal sysvinit that the rootfs is mounted [ OK ]
     * Starting Populate /dev filesystem                   [ OK ]
     * Starting Populate /var filesystem                   [ OK ]
     * Stopping Send an event to indicate plymouth is up   [ OK ]
     * Stopping Populate /var filesystem                   [ OK ]
     * Starting Clean /tmp directory                       [ OK ]
    …
     jdm login:
    
    Note:

    Starting in Junos OS Release 23.2R1, the message 'Connected to domain jdm' is not displayed if the JDM uses the Pod Manager tool (podman). Note that only servers running RHEL 9 support podman-based JDMs.

  3. Log in as the root user.
  4. Enter the JDM CLI by running the following command:

    root@jdm% cli

    Note:

    The JDM CLI is similar to the Junos OS CLI.

  5. Set the root password for the JDM.

    root@jdm# set system root-authentication plain-text-password

    New Password: 
    Note:
    • The JDM root password must be the same on both the servers.

    • Starting in Junos OS Release 18.3R1, you can create non-root users in JDM. For more information, see Configuring Non-Root Users in JDM (Junos Node Slicing).

    • JDM installation blocks libvirt port access from outside the host.

  6. Commit the changes:

    root@jdm# commit

  7. Enter Ctrl-PQ to exit from the JDM console.
  8. From the Linux host, run the ssh jdm command to log in to the JDM shell.
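
    For example (a minimal illustration; the JDM shell prompt shown here is an assumption):

    root@Linux server0# ssh jdm
    root@jdm:~#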

Configuring Non-Root Users in JDM (Junos Node Slicing)

In the external server model, you can create non-root users on Juniper Device Manager (JDM) for Junos node slicing, starting in Junos OS Release 18.3R1. You need a root account to create a non-root user. The non-root users can log in to JDM by using the JDM console or through SSH. Each non-root user is provided a username and assigned a predefined login class.

The non-root users can perform the following functions:

  • Interact with JDM.

  • Orchestrate and manage Guest Network Functions (GNFs).

  • Monitor the state of the JDM, the host server and the GNFs by using JDM CLI commands.

Note:

The non-root user accounts function only inside JDM, not on the host server.

To create non-root users in JDM:

  1. Log in to JDM as a root user.
  2. Define a user name and assign the user with a predefined login class.

    root@jdm# set system login user username class predefined-login-class

  3. Set the password for the user.

    root@jdm# set system login user username authentication plain-text-password

    New Password: 
  4. Commit the changes.

    root@jdm# commit
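
For example, to create a user with the operator login class (the username jdm-oper is a hypothetical value):

root@jdm# set system login user jdm-oper class operator
root@jdm# set system login user jdm-oper authentication plain-text-password
New Password:
root@jdm# commit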

Table 1 contains the predefined login classes that JDM supports for non-root users:

Table 1: Predefined Login Classes

Login Class: super-user

Permissions:

  • Create, delete, start, and stop GNFs.

  • Start and stop daemons inside the JDM.

  • Execute all CLI commands.

  • Access the shell.

Login Class: operator

Permissions:

  • Start and stop GNFs.

  • Restart daemons inside the JDM.

  • Execute all basic CLI operational commands (except the ones that modify the GNF or JDM configuration).

Login Class: read-only

Permissions: Similar to the operator class, except that the users cannot restart daemons inside the JDM.

Login Class: unauthorized

Permissions: Ping and traceroute operations only.

Configuring JDM Interfaces (External Server Model)

Use the following procedure to configure the server interfaces in the JDM.

In the JDM, you must configure:

  • The two 10-Gbps server ports that are connected to the MX Series router.

  • The server port to be used as the JDM management port.

  • The server port to be used as the GNF management port.

Therefore, you need to identify the following on each server before starting the configuration of the ports:

  • The server interfaces (for example, p3p1 and p3p2) that are connected to CB0 and CB1 on the MX Series router.

  • The server interfaces (for example, em2 and em3) to be used for JDM management and GNF management.

For more information, see the figure Connecting the Servers and the Router.

Note:
  • You need this information for both server0 and server1.

  • These interfaces are visible only on the Linux host.

To configure the x86 server interfaces in JDM, perform the following steps on both the servers:

  1. On server0, apply the following configuration statements:
    root@jdm# set groups server0 server interfaces cb0 p3p1
    root@jdm# set groups server0 server interfaces cb1 p3p2
    root@jdm# set groups server1 server interfaces cb0 p3p1 
    root@jdm# set groups server1 server interfaces cb1 p3p2 
    root@jdm# set apply-groups [ server0 server1 ] 
    root@jdm# commit 
    root@jdm# set groups server0 server interfaces jdm-management em2
    root@jdm# set groups server0 server interfaces vnf-management em3
    root@jdm# set groups server1 server interfaces jdm-management em2
    root@jdm# set groups server1 server interfaces vnf-management em3
    root@jdm# commit
  2. Repeat step 1 on server1.
    Note:

    Ensure that you apply the same configuration on both server0 and server1.

  3. Share the ssh identities between the two x86 servers.

    At both server0 and server1, run the following JDM CLI command:

    root@jdm> request server authenticate-peer-server

    Note:

    The request server authenticate-peer-server command displays a CLI message requesting you to log in to the peer server using ssh to verify the operation. To log in to the peer server, prefix the command ssh root@jdm-server1 with ip netns exec jdm_nv_ns.

    For example, to log in to the peer server from server0, exit the JDM CLI, and use the following command from JDM shell:

    root@jdm:~# ip netns exec jdm_nv_ns ssh root@jdm-server1

    Similarly, to log in to the peer server from server1, use the following command:

    root@jdm:~# ip netns exec jdm_nv_ns ssh root@jdm-server0
  4. Apply the configuration statements in the JDM CLI configuration mode to set the JDM management IP address, default route, and the JDM hostname for each JDM instance as shown in the following example.
    Note:
    • The management IP address and default route must be specific to your network.

    root@jdm# set groups server0 interfaces jmgmt0 unit 0 family inet address 10.216.105.112/21
    root@jdm# set groups server1 interfaces jmgmt0 unit 0 family inet address 10.216.105.113/21
    root@jdm# set groups server0 routing-options static route 0.0.0.0/0 next-hop 10.216.111.254
    root@jdm# set groups server1 routing-options static route 0.0.0.0/0 next-hop 10.216.111.254
    root@jdm# set groups server0 system host-name test-jdm-server0
    root@jdm# set groups server1 system host-name test-jdm-server1
    root@jdm# commit synchronize

    Remember to configure commit synchronization as shown in the above step to ensure that the random MAC prefixes generated by the JDM instances are in sync. The random MAC prefix forms part of a MAC address associated with an unlicensed GNF. JDM generates this pseudo-random MAC prefix when it is booted for the first time and doesn’t generate it again. To check if the random MAC prefixes are in sync, use the CLI command show server connections or show system random-mac-prefix at JDM. See also: Assigning MAC Addresses to GNF.

    Note:
    • jmgmt0 stands for the JDM management port. This is different from the Linux host management port. Both JDM and the Linux host management ports are independently accessible from the management network.

    • You must have done the SSH key exchange as described in Step 3 before attempting Step 4. If you attempt Step 4 without completing Step 3, the system displays an error message as shown in the following example:

      Failed to fetch JDM software version from server1. If authentication of peer server is not done yet, try running request server authenticate-peer-server.

  5. Run the following JDM CLI command on each server and ensure that all the interfaces are up.
    root@jdm> show server connections
    Component               Interface                Status  Comments
    Host to JDM port        virbr0                   up
    Physical CB0 port       p3p1                     up
    Physical CB1 port       p3p2                     up
    Physical JDM mgmt port  em2                      up
    Physical VNF mgmt port  em3                      up
    JDM-GNF bridge          bridge_jdm_vm            up
    CB0                     cb0                      up
    CB1                     cb1                      up
    JDM mgmt port           jmgmt0                   up
    JDM to HOST port        bme1                     up
    JDM to GNF port         bme2                     up
    JDM to JDM link0*       cb0.4002                 up
    JDM to JDM link1        cb1.4002                 up
    GNF Mac-Pool Prefix     Primary CB               OK      Prefix: JDM0[0xfe] / JDM1[0xfe]
    
Note:

For sample JDM configurations, see Sample Configuration for Junos Node Slicing.

If you want to modify the server interfaces configured in the JDM, you need to delete the GNFs (if they were configured), configure the interfaces as described above, reboot the JDM from the shell, reconfigure and activate the GNFs, and commit the changes.

Starting in Junos OS Release 19.2R1, Junos node slicing supports the assignment of a globally unique MAC address range (supplied by Juniper Networks) for GNFs. For details, see Assigning MAC Addresses to GNF.

Configuring MX Series Router to Operate in In-Chassis Mode

Note:
  • To configure in-chassis Junos node slicing, the MX Series router must have one of the following types of Routing Engines installed:

    • RE-S-X6-128G (used in MX480 and MX960 routers)

    • REMX2K-X8-128G (used in MX2010 and MX2020 routers)

    • REMX2008-X8-128G (used in MX2008 routers)

In the in-chassis model, the base system (BSYS), the Juniper Device Manager (JDM), and all guest network functions (GNFs) run within the Routing Engine of the MX Series router. The BSYS and the GNFs run on the host as virtual machines (VMs). You must first reduce the resource footprint of the standalone MX Series router as follows:

  1. Ensure that both the Routing Engines (re0 and re1) in the MX Series router have the required VM host package (for example, junos-vmhost-install-mx-x86-64-19.2R1.tgz) installed. The VM host package must be Junos OS Release 19.1R1 or later.
  2. Apply the following configuration and then reboot the VM host on both the Routing Engines (re0 and re1).
    user@router# set vmhost resize vjunos compact
    user@router# set system commit synchronize
    user@router> request vmhost reboot (re0|re1)

    When this configuration is applied and the reboot completes, the resource footprint of the Junos VM on the MX Series Routing Engine shrinks to accommodate GNF VMs. The resized Junos VM, now operating as the base system (BSYS) on the MX Series Routing Engine, has the following resources:

    • CPU Cores—1 (Physical)

    • DRAM—16GB

    • Storage—14GB (/var)

Note:

All files in the /var/ location, including the log files (/var/log) and core files (/var/crash), are deleted when you reboot the VM host after configuring the set vmhost resize vjunos compact statement. If you want to keep any files currently in /var/log or /var/crash for reference, save them elsewhere before proceeding with the VM host resize configuration.
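
After the reboot, you can optionally confirm that the resize took effect by checking the vJunos resource status, which should read Compact (the command and sample output appear in the JDM installation procedure that follows):

user@router> show vmhost status re0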

Installing and Configuring JDM for In-Chassis Model

Steps listed in this topic apply only to in-chassis Junos node slicing configuration.

Installing JDM RPM Package on MX Series Router (In-Chassis Model)

Before installing the Juniper Device Manager (JDM) RPM package on an MX Series router, you must configure the MX Series router to operate in the in-chassis BSYS mode. For more information, see Configuring MX Series Router to Operate in In-Chassis Mode.

Note:

The RPM package jns-jdm-vmhost is meant for the in-chassis Junos node slicing deployment, while the RPM package jns-jdm is used for the external server-based Junos node slicing deployment.

  1. Download the JDM RPM package (JDM for VMHOST) from the Downloads page.
    From the Downloads page, select All Products > Junos Node Slicing - Junos Device Manager to download the package, which is named JDM for VMHOST.
  2. Install the JDM RPM package on both Routing Engines (re0 and re1), by using the command shown in the following example:
    root@router> request vmhost jdm add jns-jdm-vmhost-18.3-20180930.0.x86_64.rpm
    
    Starting to validate the Package
    Finished validating the Package
    Starting to validate the Environment
    Finished validating the Environment
    Starting to copy the RPM package from Admin Junos to vmhost
    Finished Copying the RPM package from Admin Junos to vmhost
    Starting to install the JDM RPM package
    Preparing...                ##################################################
    Detailed log of jdm setup saved in /var/log/jns-jdm-setup.log
    jns-jdm-vmhost              ##################################################
    Setup host for jdm...
    Done Setup host for jdm
    Installing /vm/vm/iapps/jdm/install/juniper/.tmp-jdm-install/juniper_ubuntu_rootfs.tgz...
    Configure /vm/vm/iapps/jdm/install/juniper/lxc/jdm/jdm1/rootfs...
    Configure /vm/vm/iapps/jdm/install/juniper/lxc/jdm/jdm1/rootfs DONE
    Setup Junos cgroups...Done
    Done Setup jdm
    stopping rsyslogd ... done
    starting rsyslogd ... done
    Finished installing the JDM RPM package
    Installation Successful !
    Starting to generate the host public keys at Admin Junos
    Finished generating the host public keys at Admin Junos
    Starting to copy the host public keys from Admin Junos to vmhost
    Finished copying the host public keys from Admin Junos to vmhost
    Starting to copy the public keys of Admin junos from vmhost to JDM
    Finished copying the public keys of Admin junos from vmhost to JDM
    Starting to cleanup the temporary file from Vmhost containing host keys of Admin Junos
    Finished cleaning the temporary file from Vmhost containing host keys of Admin Junos
    
  3. Run the show vmhost status command to see the vJunos Resource Status on both the Routing Engines.
    user@router> show vmhost status re0
    
    bsys-re0:
    --------------------------------------------------------------------------
    
    Compute cluster: rainier-re-cc
      Compute Node: rainier-re-cn, Online
    
    vJunos Resource Status: Compact
    user@router> show vmhost status re1
    
    bsys-re1:
    --------------------------------------------------------------------------
    
    Compute cluster: rainier-re-cc
      Compute Node: rainier-re-cn, Online
    
    vJunos Resource Status: Compact

Configuring JDM (In-Chassis Model)

Use the following steps to configure JDM on both the Routing Engines of an MX Series router:

  1. Apply the following command on both the Routing Engines to start JDM:
    user@router> request vmhost jdm start
    
    Starting JDM
    Starting jdm: Domain jdm defined from /vm/vm/iapps/jdm//install/juniper/lxc/jdm/current/config/jdm.xml
    
    Domain jdm started
    

    Starting in Junos OS Release 19.3R1, the JDM console does not display the message 'Domain jdm started'. However, this message is added to the system logs when the JDM is started.

    Note:

    If hyperthreading is disabled, a warning is displayed when you enter the command request vmhost jdm start, as shown in the following example:

    Warning: Hyperthreading is disabled! Cores: (6) Processors: (6) Expected: (12)
  2. Use the command show vmhost jdm status to check if the JDM is running.
    user@router> show vmhost jdm status
    
    JDM Information
    ---------------------------
    Package    :  jns-jdm-vmhost-19.1-B2.x86_64
    Status     :  Running
    PID        :  3088
    Free Space :  62967 (MiB)
    
  3. After a few seconds, log in to JDM.
    root@router> request vmhost jdm login
    
    ****************************************************************************
    * The Juniper Device Manager (JDM) must only be used for orchestrating the *
    * Virtual Machines for Junos Node Slicing                                  *
    *                                                                          *
    * Host Linux Distro: Wind River Linux                                      *
    * JDM Version: jns-jdm-vmhost-19.1-20181003.dev.common.0.x86_64            *
    * Free Disk Space on JDM's root-fs ("/"): 125081(MiB)                      *
    ****************************************************************************
    Last login: Thu Oct  4 15:26:30 2018 from 192.168.1.1
    
    Note:
    • You need to have root user privilege on the BSYS to log in to JDM.

    • The in-chassis JDM root account password can be different from the Junos root account password.

    • It takes approximately 10 seconds for JDM to start. If you enter the request vmhost jdm login command before JDM starts, you might get the following message:

      ssh_exchange_identification: read: Connection reset by peer
  4. Enter the JDM CLI by running the following command:
    root@jdm% cli
  5. In configuration mode, apply the configurations shown in the following example:
    Note:

    The IP addresses shown in the following example are samples. Replace them with the actual IP addresses in your configuration.

    root@jdm# set groups server0 system host-name host-name
    root@jdm# set groups server0 interfaces jmgmt0 unit 0 family inet address 192.0.2.1/24
    root@jdm# set groups server0 routing-options static route 0.0.0.0/0 next-hop 192.0.2.2
    root@jdm# set groups server1 system host-name host-name
    root@jdm# set groups server1 interfaces jmgmt0 unit 0 family inet address 198.51.100.1/24
    root@jdm# set groups server1 routing-options static route 0.0.0.0/0 next-hop 198.51.100.2
  6. In configuration mode, set the root password for the JDM on both the Routing Engines, and commit.
    root@jdm# set apply-groups [server0 server1]
    root@jdm# set system root-authentication plain-text-password
    New password:
    root@jdm# commit
    Note:
    • The JDM supports the root user administration account only.

  7. In operational mode, enter the following command on both the Routing Engines to copy the ssh public key to the peer JDM.
    root@jdm> request server authenticate-peer-server
    
    /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
    /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    root@jdm-server1's password:
     
    Number of key(s) added: 1
     
    Now try logging into the machine, with:   "ssh 'root@jdm-server1'"
    and check to make sure that only the key(s) you wanted were added.
    
    Note:

    You need to enter the root password of the peer JDM when prompted.

  8. In configuration mode, apply the following commands:
    root@jdm# set system commit synchronize
    root@jdm# commit synchronize
Note:
  • In in-chassis Junos node slicing, you cannot ping or send traffic between the management interfaces of the same Routing Engine (for example, from the Routing Engine 0 of GNF1 to the Routing Engine 0 of GNF2 or from the Routing Engine 0 of GNF1 to JDM).

  • In in-chassis mode, you cannot perform an scp operation between the BSYS and the JDM management interfaces.

  • You must have done the SSH key exchange as described in Step 7 before attempting Step 8. If you attempt Step 8 without completing Step 7, the system displays an error message as shown in the following example:

    Failed to fetch JDM software version from server1. If authentication of peer server is not done yet, try running request server authenticate-peer-server.


Assigning MAC Addresses to GNF

Starting in Junos OS Release 19.2R1, Junos node slicing supports the assignment of a globally unique MAC address range (supplied by Juniper Networks) for GNFs.

To receive the globally unique MAC address range for the GNFs, contact your Juniper Networks representative and provide your GNF license SSRN (Software Support Reference Number), which will have been shipped to you electronically upon your purchase of the GNF license. To locate the SSRN in your GNF license, refer to the Juniper Networks Knowledge Base article KB11364.

For each GNF license, you will then be provided an ‘augmented SSRN’, which includes the globally unique MAC address range assigned by Juniper Networks for that GNF license. You must then configure this augmented SSRN at the JDM CLI as follows:


root@jdm# set system vnf-license-supplement vnf-id gnf-id license-supplement-string augmented-ssrn-string
root@jdm# commit
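
For example (a minimal illustration; the GNF ID and the augmented SSRN string here are sample values, and the same sample string appears in the VNF configuration example later in this guide):

root@jdm# set system vnf-license-supplement vnf-id 1 license-supplement-string RTU00023003204-01-AABBCCDDEE00-1100-01-411C
root@jdm# commit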
Note:
  • An augmented SSRN must be used for only one GNF ID. In the JDM, the GNF VMs are referred to as virtual network functions (VNFs), and the GNF ID is one of their attributes. The attributes of a VNF are fully described in the follow-on section Configuring Guest Network Functions.

  • By default, the augmented SSRN is validated. If you ever need to skip this validation, use the no-validate option in the CLI, for example: set system vnf-license-supplement vnf-id gnf-id license-supplement-string augmented-ssrn-string no-validate.

Note:
  • You can configure the augmented SSRN for a GNF ID only when the GNF is not operational and has not yet been provisioned. You must configure the augmented SSRN for a GNF ID before configuring the GNF. If the GNF ID is already provisioned, you must first delete the GNF for that GNF ID on both the servers (in the case of the external server model) or on both the Routing Engines (in the case of the in-chassis Junos node slicing model) before configuring the augmented SSRN.

  • Similarly, you must first delete the GNF for a given GNF ID on both the servers (in the case of the external server model) or on both the Routing Engines (in the case of the in-chassis Junos node slicing model) before deleting the augmented SSRN for that GNF ID.

  • You cannot apply an augmented SSRN to a GNF that is based on Junos OS 19.1R1 or older.

  • To confirm that the assigned MAC address range has been applied to a GNF, use the Junos CLI command show chassis mac-addresses when the GNF becomes operational; the output matches a substring of the augmented SSRN.

Configuring Guest Network Functions

Configuring a guest network function (GNF) comprises two tasks, one to be performed at the BSYS and the other at the JDM.

Note:
  • Before attempting to create a GNF, you must ensure that you have configured commit synchronization as part of JDM configuration so that the random MAC prefixes generated by the JDM instances are in sync. To check if the random MAC prefixes are in sync, use the CLI command show server connections or show system random-mac-prefix at JDM. If the random MAC prefixes are not in sync, the software raises the following major alarm: Mismatched MAC address pool between GNF RE0 and GNF RE1. To view the alarm, use the show system alarms command.
  • Before attempting to create a GNF, you must ensure that the servers (or Routing Engines in the case of in-chassis model) have sufficient resources (CPU, memory, storage) for that GNF.

  • You need to assign an ID to each GNF. This ID must be the same at the BSYS and the JDM.

At the BSYS, specify a GNF by assigning it an ID and a set of line cards by applying the configuration as shown in the following example:

user@router# set chassis network-slices guest-network-functions gnf 1 fpcs 4

user@router# commit

In the JDM, the GNF VMs are referred to as virtual network functions (VNFs). A VNF has the following attributes:

  • A VNF name.

  • A GNF ID. This ID must be the same as the GNF ID used at the BSYS.

  • The MX Series platform type.

  • A Junos OS image to be used for the GNF, which can be downloaded from the Juniper Downloads page.

    From the Downloads page, select All Products > Junos Node Slicing - Guest Network Function to download a Junos image for the GNF.

  • The VNF server resource template.

At the JDM, to configure a VNF, perform the following steps:

  1. Use the JDM shell command scp to retrieve the Junos OS Node Slicing image for the GNF and place it in the JDM local directory /var/jdm-usr/gnf-images (repeat this step to retrieve the GNF configuration file, placing it in /var/jdm-usr/gnf-config).

    root@jdm:~# scp source-location-of-the-gnf-image /var/jdm-usr/gnf-images
    root@jdm:~# scp source-location-of-the-gnf-configuration-file /var/jdm-usr/gnf-config
  2. Assign this image to a GNF by using the JDM CLI command as shown in the following example:

    root@test-jdm-server0> request virtual-network-functions test-gnf add-image /var/jdm-usr/gnf-images/junos-install-ns-mx-x86-64-17.4R1.10.tgz all-servers
    
    Server0:
    Added image: /vm-primary/test-gnf/test-gnf.img
    
    
    Server1:
    Added image: /vm-primary/test-gnf/test-gnf.img
    
  3. Configure the VNF by applying the configuration statements as shown in the following example:

    root@test-jdm-server0# set virtual-network-functions test-gnf id 1

    root@test-jdm-server0# set virtual-network-functions test-gnf chassis-type mx2020

    root@test-jdm-server0# set virtual-network-functions test-gnf resource-template 2core-16g

    root@test-jdm-server0# set system vnf-license-supplement vnf-id 1 license-supplement-string RTU00023003204-01-AABBCCDDEE00-1100-01-411C

    For the in-chassis model, do not configure the platform type (set virtual-network-functions test-gnf chassis-type mx2020). It is detected automatically.

    Starting in Junos OS Release 19.2R1, Junos node slicing supports the assignment of a globally unique MAC address range (supplied by Juniper Networks) for GNFs.

    To also specify a baseline or initial Junos OS configuration for a GNF, prepare the GNF configuration file (example: /var/jdm-usr/gnf-config/test-gnf.conf) on both the servers (server0 and server1) for the external server model, or on both the Routing Engines (re0 and re1) for the in-chassis model, and specify the filename as the parameter in the base-config statement as shown below:

    root@test-jdm-server0# set virtual-network-functions test-gnf base-config /var/jdm-usr/gnf-config/test-gnf.conf

    root@test-jdm-server0# commit synchronize

    Note:

    Ensure that:

    • You use the same GNF ID as the one specified earlier in BSYS.

    • The baseline configuration filename (with the path) is the same on both the servers / Routing Engines.

    • The syntax of the baseline file contents is in the Junos OS configuration format.

    • The GNF name used here is the same as the one assigned to the Junos OS image for the GNF in step 2.

  4. To verify that the VNF is created, run the following JDM CLI command:

    root@test-jdm-server0> show virtual-network-functions test-gnf

  5. Log in to the console of the VNF by issuing the following JDM CLI command:

    root@test-jdm-server0> request virtual-network-functions test-gnf console

    Note:

    Remember to log out of the VNF console after you have completed your configuration tasks. We recommend that you set an idle timeout by using the command set system login idle-timeout minutes. Otherwise, if a user forgets to log out of the VNF console session, another user can log in without providing the access credentials. For more information, see system login (Junos Node Slicing).
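
    For example, to log out an idle VNF console session automatically after 10 minutes (a sketch; the prompt and the 10-minute value are illustrative assumptions):

    [edit]
    user@gnf# set system login idle-timeout 10
    user@gnf# commit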

  6. Configure the VNF the same way as you configure an MX Series Routing Engine.

Note:
  • The CLI prompt for the in-chassis model is root@jdm#.

  • For sample configurations, see Sample Configuration for Junos Node Slicing.

  • In the case of the external server model, if you had previously brought down any physical x86 CB interfaces or the GNF management interface from Linux shell (by using the command ifconfig interface-name down), these will automatically be brought up when the GNF is started.

Configuring Abstracted Fabric Interfaces Between a Pair of GNFs

Creating an abstracted fabric (af) interface between two guest network functions (GNFs) involves configurations both at the base system (BSYS) and at the GNF. Abstracted fabric interfaces are created on GNFs based on the BSYS configuration, which is then sent to those GNFs.

Note:
  • Only one af interface can be configured between a pair of GNFs.

  • In a Junos node slicing setup where each GNF is assigned a single FPC, if the Packet Forwarding Engines of the FPC assigned to the remote GNF become unreachable over the fabric, the associated abstracted fabric interface goes down. Examples of errors that can cause this behavior include pfe fabric reachability errors and cmerror events that trigger the pfe disable action (use the show chassis fpc errors command for details). If a GNF has multiple FPCs assigned to it, the local FPCs that report all peer Packet Forwarding Engines to be down are excluded from determining the abstracted fabric interface state.

To configure af interfaces between a pair of GNFs:

  1. At the BSYS, apply the configuration as shown in the following example:

    user@router# set chassis network-slices guest-network-functions gnf 2 af4 peer-gnf id 4
    user@router# set chassis network-slices guest-network-functions gnf 2 af4 peer-gnf af2
    user@router# set chassis network-slices guest-network-functions gnf 4 af2 peer-gnf id 2
    user@router# set chassis network-slices guest-network-functions gnf 4 af2 peer-gnf af4

    In this example, af2 is the abstracted fabric interface instance 2 and af4 is the abstracted fabric interface instance 4.

    Note:

    The allowed af interface values range from af0 through af9.

    The GNF af interface will be visible and up. You can configure an af interface the way you configure any other interface.

  2. At the GNF, apply the configuration as shown in the following example:

    user@router-gnf-b# set interfaces af4 unit 0 family inet address 10.10.10.1/24
    user@router-gnf-d# set interfaces af2 unit 0 family inet address 10.10.10.2/24
Note:
  • If you want to apply MPLS family configurations on the af interfaces, you can apply the command set interfaces af-name unit logical-unit-number family mpls on both the GNFs between which the af interface is configured (see the example after this note).

  • For sample af configurations, see Sample Configuration for Junos Node Slicing.
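
For example, to enable family mpls on the af interfaces configured in the preceding procedure (a minimal sketch; unit 0 is assumed):

user@router-gnf-b# set interfaces af4 unit 0 family mpls
user@router-gnf-d# set interfaces af2 unit 0 family mpls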

Class of Service on Abstracted Fabric Interfaces

Class of service (CoS) packet classification assigns an incoming packet to an output queue based on the packet’s forwarding class. See CoS Configuration Guide for more details.

The following sections explain the forwarding class-to-queue mapping and the behavior aggregate (BA) classifiers and rewrites supported on abstracted fabric (af) interfaces.

Forwarding Class-to-Queue Mapping

An af interface is a simulated WAN interface with most of the capabilities of any other interface, except that traffic destined for a remote Packet Forwarding Engine still has to traverse the two fabric queues (low priority and high priority).

Note:

Presently, an af interface operates in 2-queue mode only. Hence, queue-based features such as scheduling, policing, and shaping are not available on an af interface.

Packets on the af interface inherit the fabric queue that is determined by the fabric priority configured for the forwarding class to which the packet belongs. For example, see the following forwarding class-to-queue map configuration:

[edit]

user@router# show class-of-service forwarding-classes

class Economy queue-num 0 priority low; /* Low fabric priority */
class Stream queue-num 1;
class Business queue-num 2;
class Voice queue-num 3;
class NetControl queue-num 3;
class Business2 queue-num 4;
class Business3 queue-num 5;
class VoiceSig queue-num 6 priority high; /* High fabric priority */
class VoiceRTP queue-num 7;

As shown in the preceding example, when a packet gets classified to the forwarding class VoiceSig, the code in the forwarding path examines the fabric priority of that forwarding class and decides which fabric queue to choose for this packet. In this case, high-priority fabric queue is chosen.
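
The fabric priority is set as part of each forwarding-class definition. As a minimal illustration using the class names from the sample above, the low and high fabric priorities could be configured as follows:

user@router# set class-of-service forwarding-classes class Economy queue-num 0
user@router# set class-of-service forwarding-classes class Economy priority low
user@router# set class-of-service forwarding-classes class VoiceSig queue-num 6
user@router# set class-of-service forwarding-classes class VoiceSig priority high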

BA Classifiers and Rewrites

The behavior aggregate (BA) classifier maps a class-of-service (CoS) value to a forwarding class and loss priority. The forwarding class and loss-priority combination determines the CoS treatment given to the packet in the router. The following BA classifiers and rewrites are supported:

  • Inet-Precedence classifier and rewrite

  • DSCP classifier and rewrite

  • MPLS EXP classifier and rewrite

    You can also apply rewrites for IP packets entering the MPLS tunnel and rewrite both the EXP and the IPv4 type-of-service (ToS) bits. This works the same way as it does on other regular interfaces.

  • DSCP IPv6 classifier and rewrite for IPv6 traffic

Note:

The following are not supported:

  • IEEE 802.1 classification and rewrite

  • IEEE 802.1AD (QinQ) classification and rewrite

See CoS Configuration Guide for details on CoS BA classifiers.

Optimizing Fabric Path for Abstracted Fabric Interface

You can optimize the traffic flowing over the abstracted fabric (af) interfaces between two guest network functions (GNFs) by configuring a fabric path optimization mode. This feature reduces fabric bandwidth consumption by preventing any additional fabric hop (switching of traffic flows from one Packet Forwarding Engine to another) before the packets eventually reach the destination Packet Forwarding Engine. Fabric path optimization, supported on MX2008, MX2010, and MX2020 routers with MPC9E and MX2K-MPC11E, prevents only the single additional fabric hop that results from abstracted fabric interface load balancing.

You can configure one of the following fabric path optimization modes:

  • monitor—If you configure this mode, the peer GNF monitors the traffic flow and sends information to the source GNF about the Packet Forwarding Engine to which the traffic is being forwarded currently and the desired Packet Forwarding Engine that could provide an optimized traffic path. In this mode, the source GNF does not forward the traffic towards the desired Packet Forwarding Engine.

  • optimize—If you configure this mode, the peer GNF monitors the traffic flow and sends information to the source GNF about the Packet Forwarding Engine to which the traffic is being forwarded currently and the desired Packet Forwarding Engine that could provide an optimized traffic path. The source GNF then forwards the traffic towards the desired Packet Forwarding Engine.

To configure a fabric path optimization mode, use the following CLI commands at the BSYS:

user@router# set chassis network-slices guest-network-functions gnf id af-name collapsed-forward (monitor | optimize)
user@router# commit

After configuring fabric path optimization, you can use the command show interfaces af-interface-name on the GNF to view the number of packets currently flowing on the optimal and non-optimal paths.
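
For example, to enable the optimize mode on the af4 interface of GNF 2 (a sketch that reuses the GNF and af values from the earlier abstracted fabric example):

user@router# set chassis network-slices guest-network-functions gnf 2 af4 collapsed-forward optimize
user@router# commit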

SNMP Trap Support: Configuring NMS Server (External Server Model)

The Juniper Device Manager (JDM) supports the following SNMP traps:

  • LinkUp and linkDown traps for JDM interfaces.

    Standard linkUp/linkDown SNMP traps are generated. A default community string jdm is used.

  • LinkUp/linkDown traps for host interfaces.

    Standard linkUp/linkDown SNMP traps are generated. A default community string host is used.

  • JDM to JDM connectivity loss/regain traps.

    JDM to JDM connectivity loss/regain traps are sent using generic syslog traps (jnxSyslogTrap) through the host management interface.

    The JDM connectivity down trap JDM_JDM_LINK_DOWN is sent when the JDM is not able to communicate with the peer JDM on another server over cb0 or cb1 links. See the following example:

    { SNMPv2c C=host { V2Trap(296) R=1299287309  
    .1.3.6.1.2.1.1.3.0=42761992 
    .1.3.6.1.6.3.1.1.4.1.0=.1.3.6.1.4.1.2636.4.12.0.1 .1.3.6.1.4.1.2636.3.35.1.1.1.2.1="JDM_JDM_LINK_DOWN" 
    .1.3.6.1.4.1.2636.3.35.1.1.1.3.1="" 
    .1.3.6.1.4.1.2636.3.35.1.1.1.4.1=5 
    .1.3.6.1.4.1.2636.3.35.1.1.1.5.1=24 
    .1.3.6.1.4.1.2636.3.35.1.1.1.6.1=0 
    .1.3.6.1.4.1.2636.3.35.1.1.1.7.1="jdmmon" 
    .1.3.6.1.4.1.2636.3.35.1.1.1.8.1="JDM-HOST" 
    .1.3.6.1.4.1.2636.3.35.1.1.1.9.1="JDM to JDM Connection Lost" 
    .1.3.6.1.6.3.1.1.4.3.0.0="" } }
    

    The JDM to JDM Connectivity up trap JDM_JDM_LINK_UP is sent when either the cb0 or cb1 link comes up, and JDMs on both the servers are able to communicate again. See the following example:

    { SNMPv2c C=host { V2Trap(292) R=998879760  
    .1.3.6.1.2.1.1.3.0=42762230 
    .1.3.6.1.6.3.1.1.4.1.0=.1.3.6.1.4.1.2636.4.12.0.1 
    .1.3.6.1.4.1.2636.3.35.1.1.1.2.1="JDM_JDM_LINK_UP" 
    .1.3.6.1.4.1.2636.3.35.1.1.1.3.1="" 
    .1.3.6.1.4.1.2636.3.35.1.1.1.4.1=5 
    .1.3.6.1.4.1.2636.3.35.1.1.1.5.1=24 
    .1.3.6.1.4.1.2636.3.35.1.1.1.6.1=0 
    .1.3.6.1.4.1.2636.3.35.1.1.1.7.1="jdmmon" 
    .1.3.6.1.4.1.2636.3.35.1.1.1.8.1="JDM-HOST" 
    .1.3.6.1.4.1.2636.3.35.1.1.1.9.1="JDM to JDM Connection Up" 
    .1.3.6.1.6.3.1.1.4.3.0.0="" } }
    
  • VM(GNF) up/down—libvirtGuestNotif notifications.

    For GNF start/shutdown events, the standard libvirtGuestNotif notifications are generated. For libvirtMIB notification details, see the libvirt MIB documentation. Also, see the following example:

    HOST [UDP: [127.0.0.1]:53568->[127.0.0.1]]: Trap , DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (636682) 1:46:06.82,
    SNMPv2-MIB::snmpTrapOID.0 = OID: LIBVIRT-MIB::libvirtGuestNotif,
    LIBVIRT-MIB::libvirtGuestName.0 = STRING: "gnf1",
    LIBVIRT-MIB::libvirtGuestUUID.1 = STRING: 7ad4bc2a-16db-d8c0-1f5a-6cb777e17cd8,
    LIBVIRT-MIB::libvirtGuestState.2 = INTEGER: running(1),
    LIBVIRT-MIB::libvirtGuestRowStatus.3 = INTEGER: active(1)
    

SNMP traps are sent to the target NMS server. To configure the target NMS server details in the JDM, see the following example:

[edit]

root@jdm# show snmp | display set
root@jdm# set snmp name name
root@jdm# set snmp description description
root@jdm# set snmp location location
root@jdm# set snmp contact user's email
root@jdm# set snmp trap-group tg-1 targets target ip address1
root@jdm# set snmp trap-group tg-1 targets target ip address2

JDM does not write any configuration to the host SNMP configuration file (/etc/snmp/snmpd.conf). Hence, JDM installation and subsequent configuration do not have any impact on the host SNMP. The SNMP configuration CLI command in JDM is used only to configure the JDM's snmpd.conf file, which is present within the container. To generate linkUp/linkDown traps, you must manually include the configuration shown in the following example in the host server's snmpd.conf file (/etc/snmp/snmpd.conf):

createUser trapUser
iquerySecName trapUser
rouser trapUser
defaultMonitors yes
notificationEvent  linkUpTrap    linkUp   ifIndex ifAdminStatus ifOperStatus ifDescr
notificationEvent  linkDownTrap  linkDown ifIndex ifAdminStatus ifOperStatus ifDescr
monitor -r 10  -e linkUpTrap   "Generate linkUp" ifOperStatus != 2
monitor -r 10  -e linkDownTrap "Generate linkDown" ifOperStatus == 2
trap2sink <NMS-IP> host

In the above example, replace <NMS-IP> with the IP address of the Network Management Station (NMS).

Chassis Configuration Hierarchy at BSYS and GNF

In Junos node slicing, the BSYS owns all the physical components of the router, including the line cards and fabric, while the GNFs maintain the forwarding state on their respective line cards. In keeping with this split responsibility, Junos CLI configuration under the chassis hierarchy (if any) should be applied at the BSYS or at the GNF as follows:

  • Physical-level parameters under the chassis configuration hierarchy should be applied at the BSYS. For example, the configuration for handling physical errors at an FPC is a physical-level parameter, and should therefore be applied at the BSYS.

    At BSYS Junos CLI:
    [edit]
    user@router# set chassis fpc fpc-slot error major threshold threshold-value action alarm

  • Logical or feature-level parameters under the chassis configuration hierarchy should be applied at the GNF associated with the FPC. For example, the configuration for max-queues per line card is a logical-level parameter, and should therefore be applied at the GNF.

    At GNF Junos CLI:
    [edit]
    user@router# set chassis fpc fpc-slot max-queues value
  • As exceptions, the following two parameters under the chassis configuration hierarchy should be applied at both BSYS and GNF:

    At both BSYS and GNF CLI:
    [edit]
    user@router# set chassis network-services network-services-mode
    user@router# set chassis fpc fpc-slot flexible-queueing-mode

Configuring Sub Line Cards and Assigning Them to GNFs

For an overview of sub line cards, see Sub Line Card Overview.

Note:
  • This feature is applicable to the MPC11E line card (model number: MX2K-MPC11E) on the MX2010 and MX2020 routers used in the external server-based Junos node slicing setup.

  • Ensure that each Routing Engine of all GNFs and the BSYS run Junos OS Release 21.2R1 or later versions.

To slice an MPC11E further into sub line cards (SLCs), you must use the fpc-slice CLI option under the set chassis network-slices guest-network-functions gnf hierarchy at the BSYS.

Before committing the configuration, you must configure all the SLCs supported by the line card and assign all the required resources, such as CPU cores, DRAM, and Packet Forwarding Engines, to the SLCs. An MPC11E line card supports two SLCs.

GNFs support the following combinations of full line cards and SLCs:

  • GNF with MPC11E SLCs

  • GNF with MPC11E SLCs and MPC9

  • GNF with MPC11E SLCs and MPC11E

  • GNF with MPC11E SLCs, MPC9, MPC11E

To configure SLCs and assign them to GNFs, use the following steps:

Note:
  • You must configure all the following CLI statements at once for all the SLCs (as shown in the steps below). Any modification to this configuration later causes the entire line card to reboot.

  • If you configure any incorrect values (for example, unsupported Packet Forwarding Engine ranges, CPU cores, or DRAM values), the configuration commit fails with an appropriate message to indicate the error.
  1. Configure SLCs.
    root@bsys# set chassis network-slices guest-network-functions gnf 1 fpc-slice fpc 2 slc 1
    root@bsys# set chassis network-slices guest-network-functions gnf 2 fpc-slice fpc 2 slc 2
    Note:

    Do not assign:

    • two or more SLCs of the same line card to the same GNF.

    • the same SLC of a line card to more than one GNF.

  2. Assign Packet Forwarding Engines to the SLCs. You must allocate all the Packet Forwarding Engines on the line card to the SLCs as shown in the following example:
    root@bsys# set chassis network-slices guest-network-functions gnf 1 fpc-slice fpc 2 slc 1 pfe-id-list [0-3]
    root@bsys# set chassis network-slices guest-network-functions gnf 2 fpc-slice fpc 2 slc 2 pfe-id-list [4-7]
    Note:

    The configuration supports only the following Packet Forwarding Engine ranges:

    • 0-3 for one SLC, and 4-7 for the other SLC (symmetric profile)

    • 0-1 for one SLC, and 2-7 for the other SLC (asymmetric profile)

    • 0-5 for one SLC and 6-7 for the other SLC (asymmetric profile)

  3. Assign CPU cores to the SLCs as shown in the following example:
    root@bsys# set chassis network-slices guest-network-functions gnf 1 fpc-slice fpc 2 slc 1 cores 4
    root@bsys# set chassis network-slices guest-network-functions gnf 2 fpc-slice fpc 2 slc 2 cores 4
    Note:

    4 is the only supported number of CPU cores. You must configure the value 4 for each of the two SLCs.

  4. Assign DRAMs to the SLCs as shown in the following example:
    root@bsys# set chassis network-slices guest-network-functions gnf 1 fpc-slice fpc 2 slc 1 dram 13
    root@bsys# set chassis network-slices guest-network-functions gnf 2 fpc-slice fpc 2 slc 2 dram 13

    You must allocate a total DRAM of 26 GB for both the SLCs together. Only the following combinations of DRAM allocation are supported:

     SLC1 DRAM (GB)    SLC2 DRAM (GB)    Sub Total (GB)    BLC/Linux Host DRAM (GB)    Total (GB)
     13                13                26                6                           32
     9/17              17/9              26                6                           32

    Note:

    You cannot allocate resources to the BLC; they are automatically assigned by Junos OS.

  5. Commit the changes.
    root@bsys# commit

Sample Configuration for Junos Node Slicing

This section provides sample configurations for Junos node slicing.

Sample JDM Configuration (External Server Model)

root@test-jdm-server0> show configuration
    groups {
    server0 {
        system {
            host-name test-jdm-server0;
        }
        server {
            interfaces {
                cb0 p3p1;
                cb1 p3p2;
                jdm-management em2; 
                vnf-management em3; 
            }
        }
        interfaces {
            jmgmt0 {
                unit 0 {
                    family inet {
                        address 10.216.105.112/21;
                    }
                }
            }
        }
        routing-options {
            static {
                route {
                    0.0.0.0/0 next-hop 10.216.111.254;
                }
            }
        }
    }
    server1 {
        system {
            host-name test-jdm-server1;
        }
        server {
            interfaces {
                cb0 p3p1;
                cb1 p3p2;
                jdm-management em2;
                vnf-management em3;
            }
        }
        interfaces {
            jmgmt0 {
                unit 0 {
                    family inet {
                        address 10.216.105.113/21;
                    }
                }
            }
        }
        routing-options {
            static {
                route {
                    0.0.0.0/0 next-hop 10.216.111.254;
                }
            }
        }
    }
}
apply-groups [ server0 server1 ]; 
system {
    root-authentication {
        encrypted-password "..."; ## SECRET-DATA
    }
    services {
        ssh;
        netconf {
            ssh;
            rfc-compliant;
        }
    }
}
virtual-network-functions {
    test-gnf {
        id 1;
        chassis-type mx2020;
        resource-template 2core-16g;
        base-config /var/jdm-usr/gnf-config/test-gnf.conf;
    }
}

Sample JDM Configuration (In-Chassis Model)

root@test-jdm-server0> show configuration
    groups {
    server0 {
        system {
            host-name test-jdm-server0;
        }
        interfaces {
            jmgmt0 {
                unit 0 {
                    family inet {
                        address 10.216.105.112/21;
                    }
                }
            }
        }
        routing-options {
            static {
                route {
                    0.0.0.0/0 next-hop 10.216.111.254;
                }
            }
        }
    }
    server1 {
        system {
            host-name test-jdm-server1;
        }
        interfaces {
            jmgmt0 {
                unit 0 {
                    family inet {
                        address 10.216.105.113/21;
                    }
                }
            }
        }
        routing-options {
            static {
                route {
                    0.0.0.0/0 next-hop 10.216.111.254;
                }
            }
        }
    }
}
apply-groups [ server0 server1 ]; 
system {
    root-authentication {
        encrypted-password "..."; ## SECRET-DATA
    }
    services {
        ssh;
        netconf {
            ssh;
            rfc-compliant;
        }
    }
}
virtual-network-functions {
    test-gnf {
        id 1;
        resource-template 2core-16g;
        base-config /var/jdm-usr/gnf-config/test-gnf.conf;
    }
}
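
For reference, the server-specific groups in this in-chassis sample correspond to set commands of the following form. This is a minimal sketch using the sample host names and addresses shown above; substitute your own values:

root@test-jdm-server0# set groups server0 system host-name test-jdm-server0
root@test-jdm-server0# set groups server0 interfaces jmgmt0 unit 0 family inet address 10.216.105.112/21
root@test-jdm-server0# set groups server0 routing-options static route 0.0.0.0/0 next-hop 10.216.111.254
root@test-jdm-server0# set groups server1 system host-name test-jdm-server1
root@test-jdm-server0# set groups server1 interfaces jmgmt0 unit 0 family inet address 10.216.105.113/21
root@test-jdm-server0# set groups server1 routing-options static route 0.0.0.0/0 next-hop 10.216.111.254
root@test-jdm-server0# set apply-groups [ server0 server1 ]
root@test-jdm-server0# commit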

Sample BSYS Configuration with Abstracted Fabric Interface

user@router> show configuration chassis
network-slices {
    guest-network-functions {
        gnf 1 {
            af2 {
                peer-gnf id 2 af1;
            }
            af4 {
                peer-gnf id 4 af1;
            }
            description gnf-a;
            fpcs [ 0 19 ];
        }
        gnf 2 {
            af1 {
                peer-gnf id 1 af2;
            }
            af4 {
                peer-gnf id 4 af2;
            }
            description gnf-b;
            fpcs [ 1 6 ];
        }
        gnf 4 {
            af1 {
                peer-gnf id 1 af4;
            }
            af2 {
                peer-gnf id 2 af4;
            }
            description gnf-d;
            fpcs [ 3 4 ];
        }
    }
}
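
The abstracted fabric peering shown above can also be expressed in set-command form. The following is a minimal sketch covering GNF 1 and GNF 2 only; the GNF 4 peerings follow the same pattern:

set chassis network-slices guest-network-functions gnf 1 af2 peer-gnf id 2 af1
set chassis network-slices guest-network-functions gnf 1 description gnf-a
set chassis network-slices guest-network-functions gnf 1 fpcs 0
set chassis network-slices guest-network-functions gnf 1 fpcs 19
set chassis network-slices guest-network-functions gnf 2 af1 peer-gnf id 1 af2
set chassis network-slices guest-network-functions gnf 2 description gnf-b
set chassis network-slices guest-network-functions gnf 2 fpcs 1
set chassis network-slices guest-network-functions gnf 2 fpcs 6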

Sample Abstracted Fabric Configuration at GNF with Class of Service

Assume that an abstracted fabric (af) interface connects GNF1 and GNF2. The following sample configuration shows how to apply rewrite rules on the af interface at GNF1 and classifiers on the af interface at GNF2, for traffic flowing from GNF1 to GNF2:

GNF1 Configuration

interfaces {
    xe-4/0/0 {
        unit 0 {
            family inet {
                address 22.1.2.2/24;
            }
        }
    }
    af2 {
        unit 0 {
            family inet {
                address 32.1.2.1/24;
            }
        }
    }
}
class-of-service {
    classifiers {
        dscp testdscp {
            forwarding-class assured-forwarding {
                loss-priority low code-points [ 001001 000000 ];
            }
        }
    }
    interfaces {
        xe-4/0/0 {
            unit 0 {
                classifiers {
                    dscp testdscp;
                }
            }
        }
        af2 {
            unit 0 {
                rewrite-rules {
                    dscp testdscp;  /*Rewrite rule applied on egress AF interface on GNF1.*/
                }
            }
        }
    }
    rewrite-rules {
        dscp testdscp {
            forwarding-class assured-forwarding {
                loss-priority low code-point 001001;
            }
        }
    }
}

GNF2 Configuration

interfaces {
    xe-3/0/0:0 {
        unit 0 {
            family inet {
                address 42.1.2.1/24;
            }
        }
    }
    af1 {
        unit 0 {
            family inet {
                address 32.1.2.2/24;
            }
        }
    }
}
class-of-service {
    classifiers {
        dscp testdscp {
            forwarding-class network-control {
                loss-priority low code-points 001001;
            }
        }
    }
    interfaces {
        af1 {
            unit 0 {
                classifiers {
                    dscp testdscp;  /*Classifier applied on AF at ingress of GNF2*/
                }
            }
        }
    }
}
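
In set-command form, the class-of-service statements above reduce to the following. This is a minimal sketch based on the hierarchies shown: the rewrite rule is applied on the egress af interface at GNF1, and the classifier on the ingress af interface at GNF2:

GNF1 (rewrite on traffic leaving toward GNF2):
set class-of-service rewrite-rules dscp testdscp forwarding-class assured-forwarding loss-priority low code-point 001001
set class-of-service interfaces af2 unit 0 rewrite-rules dscp testdscp

GNF2 (classification of traffic arriving from GNF1):
set class-of-service classifiers dscp testdscp forwarding-class network-control loss-priority low code-points 001001
set class-of-service interfaces af1 unit 0 classifiers dscp testdscp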

Sample Output for Abstracted Fabric Interface State at a GNF

user@router-gnf-b> show interfaces af9
Physical interface: af9, Enabled, Physical link is Up
  Interface index: 209, SNMP ifIndex: 527
  Type: Ethernet, Link-level type: Ethernet, MTU: 1514, Speed: 370000mbps
  Device flags   : Present Running
  Interface flags: Internal: 0x4000
  Link type      : Full-Duplex
  Link flags     : None
  Current address: 00:90:69:2b:00:4c, Hardware address: 00:90:69:2b:00:4c
  Last flapped   : 2018-09-12 01:44:01 PDT (00:01:02 ago)
  Input rate     : 0 bps (0 pps)
  Output rate    : 0 bps (0 pps)
  Bandwidth      : 370 Gbps
  Peer GNF id    : 9
  Peer GNF Forwarding element(FE) view :
  FPC slot:FE num  FE Bandwidth(Gbps) Status      Transmit Packets         Transmit Bytes
       6:0                   130         Up                      0                      0
      12:0                   120         Up                      0                      0
      12:1                   120         Up                      0                      0

  Residual Transmit Statistics :
  Packets :                    0 Bytes :                    0

  Fabric Queue Statistics :
  FPC slot:FE num    High priority(pkts)        Low priority(pkts)
       6:0                            0                         0
      12:0                            0                         0
      12:1                            0                         0
  FPC slot:FE num    High priority(bytes)      Low priority(bytes)
       6:0                              0                        0
      12:0                              0                        0
      12:1                              0                        0
  Residual Queue Statistics :
      High priority(pkts)       Low priority(pkts)
                       0                        0
      High priority(bytes)      Low priority(bytes)
                        0                        0

  Logical interface af9.0 (Index 332) (SNMP ifIndex 528)
    Flags: Up SNMP-Traps 0x4004000 Encapsulation: ENET2
    Input packets : 0
    Output packets: 13
    Protocol inet, MTU: 1500

Sample Configuration for Sub Line Cards

This section provides sample configurations for sub line cards (SLCs).

Sample Configuration for Symmetric Sub Line Card Profile

In the symmetric profile, only one combination of resources is possible.

The following is a sample configuration to slice FPC 1 (an MPC11E) using the symmetric sub line card profile:

set chassis network-slices guest-network-functions gnf 1 fpc-slice fpc 1 slc 1 pfe-id-list 0-3
set chassis network-slices guest-network-functions gnf 1 fpc-slice fpc 1 slc 1 cores 4
set chassis network-slices guest-network-functions gnf 1 fpc-slice fpc 1 slc 1 dram 13
set chassis network-slices guest-network-functions gnf 2 fpc-slice fpc 1 slc 2 pfe-id-list 4-7
set chassis network-slices guest-network-functions gnf 2 fpc-slice fpc 1 slc 2 cores 4
set chassis network-slices guest-network-functions gnf 2 fpc-slice fpc 1 slc 2 dram 13

This configuration would appear as shown below:

root@bsys> show chassis network-slices guest-network-functions
gnf 1 {
    fpc-slice {
        fpc 1 {
            slc 1 {
                pfe-id-list 0-3;
                cores 4;
                dram 13;
            }
        }
    }
}
gnf 2 {
    fpc-slice {
        fpc 1 {
            slc 2 {
                pfe-id-list 4-7;
                cores 4;
                dram 13;
            }
        }
    }
}

Sample Configuration for Asymmetric Sub Line Card Profile

In the asymmetric profile, two configurations are possible, depending on how the Packet Forwarding Engines (PFEs) [0-7] are split between the two SLCs. In one configuration, the first two PFEs [0-1] are assigned to one SLC and the remaining PFEs [2-7] to the other SLC. In the other configuration, the last two PFEs [6-7] are assigned to one SLC and the remaining PFEs [0-5] to the other SLC.

The sample configuration below shows the [0-1, 2-7] split; a sketch of the mirrored [6-7, 0-5] split follows the committed configuration at the end of this section.

In the example below, the CPU core and DRAM assignments for the SLCs match one of the ‘Asymmetric Profile’ resource combinations listed in the table SLC Profiles Supported by MPC11E on the Sub Line Card Overview page.

set chassis network-slices guest-network-functions gnf 1 fpc-slice fpc 1 slc 1 pfe-id-list 0-1
set chassis network-slices guest-network-functions gnf 1 fpc-slice fpc 1 slc 1 cores 4
set chassis network-slices guest-network-functions gnf 1 fpc-slice fpc 1 slc 1 dram 17
set chassis network-slices guest-network-functions gnf 2 fpc-slice fpc 1 slc 2 pfe-id-list 2-7
set chassis network-slices guest-network-functions gnf 2 fpc-slice fpc 1 slc 2 cores 4
set chassis network-slices guest-network-functions gnf 2 fpc-slice fpc 1 slc 2 dram 9

This configuration would appear as shown below:

root@bsys> show chassis network-slices guest-network-functions
gnf 1 {
    fpc-slice {
        fpc 1 {
            slc 1 {
                pfe-id-list 0-1;
                cores 4;
                dram 17;
            }
        }
    }
}
gnf 2 {
    fpc-slice {
        fpc 1 {
            slc 2 {
                pfe-id-list 2-7;
                cores 4;
                dram 9;
            }
        }
    }
}
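
The mirrored [6-7, 0-5] split described earlier follows the same pattern. The sketch below simply swaps the PFE ranges while keeping the same core counts and the same pairing of 17 GB of DRAM with the two-PFE slice and 9 GB with the six-PFE slice as in the sample above; confirm the exact supported combination against the table SLC Profiles Supported by MPC11E before committing:

set chassis network-slices guest-network-functions gnf 1 fpc-slice fpc 1 slc 1 pfe-id-list 0-5
set chassis network-slices guest-network-functions gnf 1 fpc-slice fpc 1 slc 1 cores 4
set chassis network-slices guest-network-functions gnf 1 fpc-slice fpc 1 slc 1 dram 9
set chassis network-slices guest-network-functions gnf 2 fpc-slice fpc 1 slc 2 pfe-id-list 6-7
set chassis network-slices guest-network-functions gnf 2 fpc-slice fpc 1 slc 2 cores 4
set chassis network-slices guest-network-functions gnf 2 fpc-slice fpc 1 slc 2 dram 17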