System Requirements for Wind River Deployment

12-Jul-24
Read this section to understand the system, resource, port, and licensing requirements for installing Juniper Cloud-Native Router on a Wind River deployment. We provide requirements for both pre-bound and non-pre-bound SR-IOV interfaces.

Minimum Host System Requirements on a Wind River Deployment

Table 1 lists the host system requirements for installing JCNR on a Wind River deployment. A sketch for verifying these versions on the host follows the table.

Table 1: Cloud-Native Router Minimum Host System Requirements on a Wind River Deployment
Component Value/Version Notes
CPU Intel x86 The tested CPU is Intel(R) Xeon(R) Silver 4314 CPU @ 2.40GHz
Host OS Debian GNU/Linux (depends on Wind River Cloud Platform version)  
Kernel Version 5.10 5.10.0-6-amd64
NIC
  • Intel E810 with Firmware 4.00 0x80014411 1.3236.0

  • Intel E810-CQDA2 with Firmware 4.00 0x80014411 1.3236.0

  • Intel XL710 with Firmware 9.00 0x8000cead 1.3179.0

  • Mellanox ConnectX-6

  • Mellanox ConnectX-7

Support for Mellanox NICs is considered a Juniper Technology Preview (Tech Preview) feature.

When using Mellanox NICs, ensure your interface names do not exceed 11 characters in length.

Wind River Cloud Platform 22.12  
IAVF driver Version 4.5.3.1  
ICE_COMMS Version 1.3.35.0  
ICE Version 1.9.11.9 ICE driver is used only with the Intel E810 NIC
i40e Version 2.18.9 i40e driver is used only with the Intel XL710 NIC
Kubernetes (K8s) Version 1.24 The tested K8s version is 1.24.4
Calico Version 3.24.x  
Multus Version 3.8  
Helm 3.9.x  
Container runtime containerd 1.4.x Other container runtimes may work but have not been tested with JCNR.
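
You can spot-check most of the versions in Table 1 from the host before installing. A minimal sketch; the interface name enp175s0f0 is only an example taken from the SR-IOV section later in this topic:

uname -r                 # kernel, expect 5.10.x
ethtool -i enp175s0f0    # NIC driver (ice/i40e/iavf) and firmware version
kubectl version          # K8s client/server versions
helm version --short
containerd --version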

Resource Requirements on a Wind River Deployment

Table 2 lists the resource requirements for installing JCNR on a Wind River deployment.

Table 2: Resource Requirements on a Wind River Deployment
Resource Value Usage Notes
Data plane forwarding cores 2 cores (2P + 2S)  
Service/Control Cores 0  
Hugepages (1G) 6 Gi

Lock the controller and list its memory by processor using the following commands:

source /etc/platform/openrc
system host-lock controller-0 
system host-memory-list controller-0
To set the hugepages, run the following command for each processor (NUMA node) on the controller:
system host-memory-modify controller-0 0 -1G 64 
system host-memory-modify controller-0 1 -1G 64

View the huge pages with the following command:

system host-memory-list controller-0

Unlock the controller:

system host-unlock controller-0
Note:

This 6 x 1GB hugepage requirement is the minimum for a basic L2 mode setup. Increase this number for more elaborate installations. For example, in an L3 mode setup with 2 NUMA nodes and 256k descriptors, set the number of 1GB hugepages to 10 for best performance. A Linux-side check of the allocated hugepages follows this table.

JCNR Controller cores 0.5
JCNR vRouter Agent cores 0.5
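
As referenced in the hugepages note above, you can cross-check the allocation from the Linux side once the host unlocks:

grep -i hugepages /proc/meminfo
cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages   # count of 1G hugepages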

Miscellaneous Requirements on a Wind River Deployment

Table 3 lists the additional requirements for installing JCNR on a Wind River deployment.

Table 3: Miscellaneous Requirements on a Wind River Deployment
Requirement

Example

Enable SR-IOV and VT-d in the host system's BIOS.

Depends on BIOS.
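
The exact BIOS menus vary by vendor, but after boot you can confirm from Linux that VT-d (IOMMU) is active. For example:

dmesg | grep -i -e DMAR -e IOMMU
grep intel_iommu /proc/cmdline   # intel_iommu=on is typically present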

Isolate CPUs from the kernel scheduler.

source /etc/platform/openrc
system host-lock controller-0
system host-cpu-list controller-0
system host-cpu-modify -f application-isolated -c 4-59 controller-0
system host-unlock controller-0

Additional kernel modules need to be loaded on the host before deploying JCNR in L3 mode. These modules are usually available in the linux-modules-extra or kernel-modules-extra packages.

Note:

Applicable for L3 deployments only.

Create a conf file that lists the kernel modules to load at boot. For example:

cat /etc/modules-load.d/crpd.conf
tun
fou
fou6
ipip
ip_tunnel
ip6_tunnel
mpls_gso
mpls_router
mpls_iptunnel
vrf
vxlan
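
Files in /etc/modules-load.d/ are processed at the next boot. To load the modules immediately and verify, a quick sketch:

for m in tun fou fou6 ipip ip_tunnel ip6_tunnel mpls_gso mpls_router mpls_iptunnel vrf vxlan; do
    sudo modprobe $m
done
lsmod | grep -E 'mpls|fou|vrf|vxlan'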

Enable kernel-based forwarding on the Linux host.

ip fou add port 6635 ipproto 137
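
This fou rule does not persist across reboots, so reapply it at boot if needed. To verify it took effect:

ip fou show   # expect: port 6635 ipproto 137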

Verify the core_pattern value is set on the host before deploying JCNR.

sysctl kernel.core_pattern
kernel.core_pattern = |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e

You can update the core_pattern in /etc/sysctl.conf. For example:

kernel.core_pattern=/var/crash/core_%e_%p_%i_%s_%h_%t.gz
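
After editing /etc/sysctl.conf, reload the settings and confirm the new value:

sudo sysctl -p
sysctl kernel.core_pattern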

Requirements for Pre-Bound SR-IOV Interfaces on a Wind River Deployment

In a Wind River deployment, you typically bind all your JCNR interfaces to the vfio DPDK driver before you deploy JCNR. Table 4 shows an example of how you can do this on an SR-IOV-enabled interface on a host.

Note:

We support pre-binding interfaces for JCNR L3 mode deployments only.

Table 4: Requirements for Pre-Bound SR-IOV Interfaces on a Wind River Deployment

Requirement

Example

Pre-bind the JCNR interfaces to the vfio DPDK driver.

source /etc/platform/openrc

system host-lock controller-0
system host-label-assign controller-0 sriovdp=enabled   # <-- Label node to accept SR-IOV-enabled
                                                        #     deployments.

system host-label-assign controller-0 kube-cpu-mgr-policy=static
system host-label-assign controller-0 kube-topology-mgr-policy=restricted   # <-- see note below

system datanetwork-add datanet0 flat   # <-- Create datanet0 network. You'll define this in a NAD 
                                       #     later.

DTNIF=enp175s0f0
system host-if-modify -m 1500 -n $DTNIF -c pci-sriov -N 8 controller-0 $DTNIF --vf-driver=netdevice
                                                      # ^ Enable 8 (for example) VFs on enp175s0f0.

system host-if-add -c pci-sriov controller-0 srif0 vf $DTNIF -N 1 --vf-driver=vfio  
                                             # ^ Create srif0 interface that uses one of the VFs 
                                             #   and bind to vfio driver.

IFUUID=$(system host-if-list 1 | awk '{if ($4 == "srif0") {print $2}}')
system interface-datanetwork-assign 1 $IFUUID datanet0  #  <-- Attach srif0 interface to datanet0 network.

system host-unlock 1


Note:

On hosts with a single NUMA node or where all NICs are attached to the same NUMA node, set kube-topology-mgr-policy=restricted.

On hosts with multiple NUMA nodes where the NICs are spread across NUMA nodes, set kube-topology-mgr-policy=best-effort.
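
After the host unlocks, you can confirm the binding took effect. A minimal check; the PCI address shown in the comment is hypothetical:

system host-if-list controller-0      # srif0 should be listed with class pci-sriov
ls /sys/bus/pci/drivers/vfio-pci      # VFs bound to vfio appear here by PCI address, e.g. 0000:af:02.0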

Create and apply the Network Attachment Definition that attaches the datanet0 network defined above.

Create a yaml file for the Network Attachment Definition. For example:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: srif0net0
  annotations:
    k8s.v1.cni.cncf.io/resourceName: intel.com/pci_sriov_net_datanet0
spec:
  config: '{
      "cniVersion": "0.3.0",
      "type": "sriov",
      "spoofchk": "off",
      "trust": "on"
  }'

Apply the yaml to attach the datanet0 network:

kubectl apply -f srif0net0.yaml
where srif0net0.yaml is the file that contains the Network Attachment Definition above.
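
To confirm that Multus registered the definition:

kubectl get network-attachment-definitions
kubectl describe net-attach-def srif0net0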

Update the Helm chart values.yaml to use the defined networks.

Here's an example of using two networks, datanet0/srif0net0 and datanet1/srif1net1.

jcnr-vrouter:
  guaranteedVrouterCpus: 4
  interfaceBoundType: 1
  
  networkDetails:
  - ddp: "off"
    name: srif0net0
    namespace: default
  - ddp: "off"
    name: srif1net1
    namespace: default

  networkResources:
    limits:
      intel.com/pci_sriov_net_datanet0: "1"
      intel.com/pci_sriov_net_datanet1: "1"
    requests:
      intel.com/pci_sriov_net_datanet0: "1"
      intel.com/pci_sriov_net_datanet1: "1"

Here's an example of using a bond interface attached to two networks (datanet0/srif0net0 and datanet1/srif1net1) and a regular interface attached to a third network (datanet2/srif2net2).

jcnr-vrouter:
  guaranteedVrouterCpus: 4
  interfaceBoundType: 1

  bondInterfaceConfigs:
  - mode: 1
    name: bond0
    slaveNetworkDetails:
    - name: srif0net0
      namespace: default
    - name: srif1net1
      namespace: default
  
  networkDetails:
  - ddp: "off"
    name: bond0
  - ddp: "off"
    name: srif2net2
    namespace: default

  networkResources:
    limits:
      intel.com/pci_sriov_net_datanet0: "1"
      intel.com/pci_sriov_net_datanet1: "1"
      intel.com/pci_sriov_net_datanet2: "1"
    requests:
      intel.com/pci_sriov_net_datanet0: "1"
      intel.com/pci_sriov_net_datanet1: "1"
      intel.com/pci_sriov_net_datanet2: "1"
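
With values.yaml updated, deployment follows the usual Helm workflow. A sketch, assuming the JCNR chart has been extracted locally into ./jcnr and that you install into a jcnr namespace (both names are illustrative):

helm install jcnr ./jcnr -n jcnr --create-namespace   # values.yaml edited in place within the chart
kubectl get pods -n jcnr                              # wait for the vRouter and cRPD pods to reach Running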

Requirements for Non-Pre-Bound SR-IOV Interfaces on a Wind River Deployment

In some situations, you might want to run with non-pre-bound interfaces. Table 5 shows the requirements for non-pre-bound interfaces.

Table 5: Requirements for Non-Pre-Bound SR-IOV Interfaces on a Wind River Deployment

Requirement

Example

Configure IPv4 and IPv6 addresses for the non-pre-bound interfaces allocated to JCNR.

source /etc/platform/openrc
system host-lock controller-0
system host-if-modify -n ens1f0 -c platform --ipv4-mode static controller-0 ens1f0
system host-addr-add 1 ens1f0 11.11.11.29 24
system host-if-modify -n ens1f0 -c platform --ipv6-mode static controller-0 ens1f0
system host-addr-add 1 ens1f0 abcd::11.11.11.29 112
system host-if-list controller-0
system host-addr-list controller-0
system host-unlock controller-0
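
After the unlock, verify the addresses from the Linux side:

ip addr show ens1f0   # expect the IPv4 /24 and IPv6 /112 addresses configured above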

Port Requirements

Juniper Cloud-Native Router listens on certain TCP and UDP ports. This section lists the port requirements for the cloud-native router.

Table 6: Cloud-Native Router Listening Ports
Protocol Port Description
TCP 8085 vRouter introspect: used to gain internal statistical information about vRouter
TCP 8070 Telemetry information: used to see telemetry data from the JCNR vRouter
TCP 8072 Telemetry information: used to see telemetry data from the JCNR control plane
TCP 8075, 8076 Telemetry information: used for gNMI requests
TCP 9091 vRouter health check: the cloud-native router checks that the vRouter agent is running
TCP 9092 vRouter health check: the cloud-native router checks that vRouter DPDK is running
TCP 50052 gRPC port: JCNR listens on both IPv4 and IPv6
TCP 8081 JCNR deployer port
TCP 24 cRPD SSH
TCP 830 cRPD NETCONF
TCP 666 rpd
TCP 1883 Mosquitto MQTT: publish/subscribe messaging utility
TCP 9500 agentd on cRPD
TCP 21883 na-mqttd
TCP 50053 Default gNMI port that listens for client subscription requests
TCP 51051 jsd on cRPD
UDP 50055 syslog-ng
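
Before deploying, you can check that none of these ports are already in use by another service on the host. Empty output means the ports are free:

ss -tulnp | grep -E ':(8085|8070|8072|8075|8076|8081|9091|9092|50052|50053|50055)'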

Download Options

See JCNR Software Download Packages.

JCNR Licensing

See Manage JCNR Licenses.
