System Requirements for GCP Deployment

Read this section to understand the system, resource, port, and licensing requirements for installing Juniper Cloud-Native Router on Google Cloud Platform (GCP).

Minimum Host System Requirements for GCP Deployment

Table 1 lists the host system requirements for installing JCNR on GCP.

Note:

The settings below are pre-configured when you deploy JCNR via the Google Cloud Marketplace.

Table 1: Minimum Host System Requirements for GCP Deployment
Component | Value/Version | Notes
GCP Deployment | VM-based |
Instance Type | n2-standard-16 |
CPU | Intel x86 | The tested CPU is Intel Cascade Lake.
Host OS | Rocky Linux 8.8 (Green Obsidian) |
Kernel Version | Rocky Linux: 4.18.X | The tested kernel version is 4.18.0-477.15.1.el8_8.cloud.x86_64.
NIC | VirtIO NIC |
Kubernetes (K8s) Version | 1.25.x | The tested K8s version is 1.25.5. The K8s version for the Google Cloud Marketplace JCNR subscription is v1.27.5.
Calico Version | 3.25.1 |
Multus Version | 4.0 |
Helm | 3.9.x |
Container Runtime | containerd 1.7.x | Other container runtimes may work but have not been tested with JCNR.

Resource Requirements for GCP Deployment

Table 2 lists the resource requirements for installing JCNR on GCP.

Table 2: Resource Requirements for GCP Deployment
Resource | Value | Usage Notes
Data plane forwarding cores | 2 cores |
Service/Control cores | 0 |
UIO driver | vfio-pci | To enable, create /etc/modules-load.d/vfio.conf so that the vfio and vfio-pci modules load at boot. The file contents should be:

cat /etc/modules-load.d/vfio.conf
vfio
vfio-pci

Then enable unsafe no-IOMMU mode:

echo Y > /sys/module/vfio_iommu_type1/parameters/allow_unsafe_interrupts
echo Y > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
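The module file takes effect at the next boot; to load the modules immediately and confirm the settings, one possible check (a sketch, assuming a typical Linux host with the vfio modules available):

```shell
# Load the modules now (they will also load at boot via
# /etc/modules-load.d/vfio.conf).
sudo modprobe vfio
sudo modprobe vfio-pci

# Confirm both modules are loaded.
lsmod | grep -i vfio

# Confirm unsafe no-IOMMU mode took effect (expect "Y").
cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
```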
Hugepages (1G) | 6 Gi | Add the values to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub. For example:

GRUB_CMDLINE_LINUX_DEFAULT="console=tty1 console=ttyS0 default_hugepagesz=1G hugepagesz=1G hugepages=64 intel_iommu=on iommu=pt"

Update GRUB and reboot the host. For example:

grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

Verify that the hugepages are set by executing the following commands:

cat /proc/cmdline
grep -i hugepages /proc/meminfo
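The two verification commands above can be narrowed to just the relevant fields; a sketch, assuming 1 GiB hugepages were requested on the kernel command line:

```shell
# Confirm the kernel command line carries the hugepage parameters.
grep -o 'hugepages=[0-9]*' /proc/cmdline

# Confirm the kernel actually allocated them: HugePages_Total should
# match the requested count, and Hugepagesize should be 1048576 kB.
awk '/HugePages_Total|Hugepagesize/' /proc/meminfo
```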
Note:

This 6 x 1GB hugepage requirement is the minimum for a basic L2 mode setup. Increase this number for more elaborate installations. For example, in an L3 mode setup with 2 NUMA nodes and 256k descriptors, set the number of 1GB hugepages to 10 for best performance.

JCNR Controller cores | 0.5 |
JCNR vRouter Agent cores | 0.5 |

Miscellaneous Requirements for GCP Deployment

Table 3 lists additional requirements for deploying JCNR on GCP.

Table 3: Miscellaneous Requirements for GCP Deployment

Requirement

Example

Set IOMMU and IOMMU-PT in GRUB.

Add the following line to /etc/default/grub.
GRUB_CMDLINE_LINUX_DEFAULT="console=tty1 console=ttyS0 default_hugepagesz=1G hugepagesz=1G hugepages=64 intel_iommu=on iommu=pt"

Update grub and reboot.

grub2-mkconfig -o /boot/grub2/grub.cfg 
reboot

Additional kernel modules need to be loaded on the host before deploying JCNR in L3 mode. These modules are usually available in linux-modules-extra or kernel-modules-extra packages.

Note:

Applicable for L3 deployments only.

Create a /etc/modules-load.d/crpd.conf file and add the following kernel modules to it:

tun
fou
fou6
ipip
ip_tunnel
ip6_tunnel
mpls_gso
mpls_router
mpls_iptunnel
vrf
vxlan
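Creating the file makes the modules load on the next boot. To load them immediately and verify, one possible sequence (a sketch, assuming the kernel-modules-extra package is installed):

```shell
# Load each module listed in /etc/modules-load.d/crpd.conf now.
for mod in tun fou fou6 ipip ip_tunnel ip6_tunnel \
           mpls_gso mpls_router mpls_iptunnel vrf vxlan; do
    sudo modprobe "$mod"
done

# Verify the tunnel and MPLS modules are present.
lsmod | grep -E 'mpls|vrf|vxlan|fou'
```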

Enable kernel-based forwarding on the Linux host.

ip fou add port 6635 ipproto 137
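You can confirm that the foo-over-UDP receive port was added with iproute2; for example:

```shell
# Add the FoU receive port (IP protocol 137 is MPLS-in-IP).
sudo ip fou add port 6635 ipproto 137

# Verify the entry exists.
ip fou show
```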

Enable IP Forwarding for VMs in GCP.

Use one of these two methods to enable IP forwarding:
  1. Specify it as an option while creating the VM. For example:

    gcloud compute instances create instance-name --can-ip-forward
  2. For an existing VM, enable IP forwarding by updating the compute instance via a file. For example:

    gcloud compute instances export transit-jcnr01 --project jcnr-ci-admin --zone us-west1-a --destination=instance_file_1

    Edit the instance file and set canIpForward to true.

    Update the compute instance from the file:
    gcloud compute instances update-from-file transit-jcnr01 --project jcnr-ci-admin --zone us-west1-a --source=instance_file_1 --most-disruptive-allowed-action ALLOWED_ACTION
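Either way, you can confirm the flag on the instance afterwards; a sketch using the same example instance, project, and zone names:

```shell
# Check whether IP forwarding is enabled on the instance.
gcloud compute instances describe transit-jcnr01 \
    --project jcnr-ci-admin --zone us-west1-a \
    --format='value(canIpForward)'
# The output should be True.
```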

Enable Multi-IP subnet on Guest OS.

gcloud compute images create debian-9-multi-ip-subnet \
     --source-disk debian-9-disk \
     --source-disk-zone us-west1-a \
     --guest-os-features MULTI_IP_SUBNET 
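The image and disk names above are examples. You can confirm that the guest OS feature is attached to the new image; for example:

```shell
# List the guest OS features on the image; the output should
# include MULTI_IP_SUBNET.
gcloud compute images describe debian-9-multi-ip-subnet \
    --format='flattened(guestOsFeatures)'
```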

Add firewall rules for loopback address for VPC.

Configure the VPC firewall rule to allow ingress traffic with source filters set to the subnet range to which JCNR is attached, along with the IP ranges or addresses for the loopback addresses.

For example:

Navigate to Firewall policies on the GCP console and create a firewall rule with the following attributes:

  1. Name: Name of the firewall rule

  2. Network: Choose the VPC network

  3. Priority: 1000

  4. Direction: Ingress

  5. Action on Match: Allow

  6. Source filters: 10.2.0.0/24, 10.51.2.0/24, 10.51.1.0/24, 10.12.2.2/32, 10.13.3.3/32

  7. Protocols: all

  8. Enforcement: Enabled

where 10.2.0.0/24 is the subnet to which JCNR is attached and 10.51.2.0/24, 10.51.1.0/24, 10.12.2.2/32, and 10.13.3.3/32 are loopback IP ranges.
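The same rule can be created from the CLI instead of the console; a sketch using the example ranges above (the rule and network names are placeholders):

```shell
gcloud compute firewall-rules create allow-jcnr-loopbacks \
    --network my-vpc-network \
    --priority 1000 \
    --direction INGRESS \
    --action ALLOW \
    --rules all \
    --source-ranges 10.2.0.0/24,10.51.2.0/24,10.51.1.0/24,10.12.2.2/32,10.13.3.3/32
```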

Exclude JCNR interfaces from NetworkManager control.

NetworkManager is a tool available in some operating systems that simplifies the management of network interfaces. While NetworkManager can make configuring the default interfaces easier, it can interfere with Kubernetes management and create problems.

To prevent NetworkManager from interfering with JCNR interface configuration, exclude the JCNR interfaces from NetworkManager control. Here's an example of how to do this in some Linux distributions:

  1. Create the /etc/NetworkManager/conf.d/crpd.conf file and list the interfaces that you don't want NetworkManager to manage.

    For example:

    [keyfile]
    unmanaged-devices+=interface-name:enp*;interface-name:ens*

    where enp* and ens* refer to your JCNR interfaces.
    Note: enp* indicates all interfaces whose names start with enp. To match specific interfaces, provide a semicolon-separated list of interface names.
  2. Restart the NetworkManager service:
    sudo systemctl restart NetworkManager
  3. Edit the /etc/sysctl.conf file on the host and paste the following content in it:
    net.ipv6.conf.default.addr_gen_mode=0
    net.ipv6.conf.all.addr_gen_mode=0
    net.ipv6.conf.default.autoconf=0
    net.ipv6.conf.all.autoconf=0
  4. Run the command sysctl -p /etc/sysctl.conf to load the new sysctl.conf values on the host.
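After completing the steps above, you can confirm that the JCNR interfaces are outside NetworkManager control and that the sysctl values took effect; for example:

```shell
# Interfaces matching the unmanaged-devices patterns should show
# "unmanaged" in the STATE column.
nmcli device status

# Confirm the IPv6 sysctl values are applied.
sysctl net.ipv6.conf.all.addr_gen_mode net.ipv6.conf.all.autoconf
```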

Verify the core_pattern value is set on the host before deploying JCNR.

sysctl kernel.core_pattern
kernel.core_pattern = |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e

You can update the core_pattern in /etc/sysctl.conf. For example:

kernel.core_pattern=/var/crash/core_%e_%p_%i_%s_%h_%t.gz
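After editing /etc/sysctl.conf, apply and re-check the value without rebooting; for example:

```shell
# Load the updated sysctl values and verify the core pattern.
sudo sysctl -p /etc/sysctl.conf
sysctl kernel.core_pattern
```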
Note:

Here are additional restrictions:

  • JCNR supports only IPv4 for GCP.

  • JCNR deployment on GCP supports only N8-standard for VM deployments. The N16-standard is not supported.

Port Requirements

Juniper Cloud-Native Router listens on certain TCP and UDP ports. This section lists the port requirements for the cloud-native router.

Table 4: Cloud-Native Router Listening Ports
Protocol | Port | Description
TCP | 8085 | vRouter introspect. Used to gain internal statistical information about vRouter.
TCP | 8070 | Telemetry information. Used to see telemetry data from the JCNR vRouter.
TCP | 8072 | Telemetry information. Used to see telemetry data from the JCNR control plane.
TCP | 8075, 8076 | Telemetry information. Used for gNMI requests.
TCP | 9091 | vRouter health check. The cloud-native router checks to ensure that the vRouter agent is running.
TCP | 9092 | vRouter health check. The cloud-native router checks to ensure that vRouter DPDK is running.
TCP | 50052 | gRPC port. JCNR listens on both IPv4 and IPv6.
TCP | 8081 | JCNR deployer port.
TCP | 24 | cRPD SSH.
TCP | 830 | cRPD NETCONF.
TCP | 666 | rpd.
TCP | 1883 | Mosquitto MQTT. Publish/subscribe messaging utility.
TCP | 9500 | agentd on cRPD.
TCP | 21883 | na-mqttd.
TCP | 50053 | Default gNMI port that listens to the client subscription request.
TCP | 51051 | jsd on cRPD.
UDP | 50055 | Syslog-NG.
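After JCNR is deployed, you can spot-check that the expected ports are listening on the host; a sketch using iproute2's ss (the port list here is a subset of the table above):

```shell
# List listening TCP/UDP sockets and filter for JCNR ports.
ss -tulnp | grep -E ':(8085|8070|8072|9091|9092|50052|50053|51051)\b'
```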

Download Options

To deploy JCNR on GCP, you can either download the Helm charts from the Juniper Networks software download site (see JCNR Software Download Packages) or subscribe via the Google Cloud Marketplace.

Note: Before deploying JCNR on GCP via Helm charts downloaded from the Juniper Networks software download site, you must whitelist the https://enterprise.hub.juniper.net URL as the JCNR image repository.

JCNR Licensing

See Manage JCNR Licenses.