System Requirements for EKS Deployment
Minimum Host System Requirements for EKS
Table 1 lists the host system requirements for installing JCNR on EKS.
Component | Value/Version |
---|---|
EKS Deployment | Self-managed nodes or managed node group |
Host OS | Amazon Linux 2 |
EKS version / Kubernetes | 1.26.3, 1.28.x |
Instance Type | Any instance type with ENA adapters |
Kernel Version | The tested kernel version is 5.10.210-201.852.amzn2.x86_64 |
NIC | Elastic Network Adapter (ENA) |
AWS CLI version | 2.11.9 |
VPC CNI | v1.14.0-eksbuild.3 |
EBS CSI Driver | v1.28.0-eksbuild.1 |
Node Role | AmazonEBSCSIDriverPolicy, AmazonEKS_CNI_Policy |
Multus | 3.7.2 |
Helm | 3.11 |
Container Runtime | containerd 1.7.x |
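
Before installing, you can spot-check several of these values from a workstation with cluster access. The commands below are a minimal sketch; the cluster name and instance ID are hypothetical placeholders:

```bash
# Report the Kubernetes version of the EKS control plane (hypothetical cluster name).
aws eks describe-cluster --name jcnr-cluster --query 'cluster.version' --output text

# Show the kernel version, OS image, and container runtime for each worker node.
kubectl get nodes -o wide

# Confirm the worker instance type exposes ENA support (hypothetical instance ID).
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[].Instances[].EnaSupport'
```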
Resource Requirements for EKS
Table 2 lists the resource requirements for installing JCNR on EKS.
Resource | Value | Usage Notes |
---|---|---|
Data plane forwarding cores | 2 cores (2P + 2S) | |
Service/Control Cores | 0 | |
UIO Driver | VFIO-PCI | To enable, create /etc/modules-load.d/vfio.conf listing the vfio and vfio-pci modules, then enable unsafe no-IOMMU mode: echo Y > /sys/module/vfio_iommu_type1/parameters/allow_unsafe_interrupts and echo Y > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode. See the consolidated sketch after this table. |
Hugepages (1G) | 6 Gi | Add the values to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub on the host. For example: GRUB_CMDLINE_LINUX_DEFAULT="console=tty1 console=ttyS0 default_hugepagesz=1G hugepagesz=1G hugepages=8 intel_iommu=on iommu=pt". Update GRUB and reboot the host (grub2-mkconfig -o /boot/grub2/grub.cfg; reboot), then verify the hugepages are set with cat /proc/cmdline and grep -i hugepages /proc/meminfo. Note: 6 x 1 GB hugepages is the minimum for a basic L2 mode setup. Increase this number for more elaborate installations; for example, in an L3 mode setup with 2 NUMA nodes and 256k descriptors, set the number of 1 GB hugepages to 10 for best performance. See the consolidated sketch after this table. |
JCNR Controller cores | 0.5 | |
JCNR vRouter Agent cores | 0.5 | |
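
The VFIO and hugepage settings above are typically applied once per worker node, for example from a bootstrap script. The following is a minimal sketch assuming root on an Amazon Linux 2 host with GRUB 2; the hugepage count of 8 mirrors the example above and should be sized for your deployment:

```bash
# Persist and load the VFIO modules, then enable unsafe no-IOMMU mode.
cat <<'EOF' > /etc/modules-load.d/vfio.conf
vfio
vfio-pci
EOF
modprobe -a vfio vfio-pci
echo Y > /sys/module/vfio_iommu_type1/parameters/allow_unsafe_interrupts
echo Y > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode

# Append 1 GB hugepage and IOMMU options to the kernel command line, then reboot.
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&default_hugepagesz=1G hugepagesz=1G hugepages=8 intel_iommu=on iommu=pt /' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

# After the reboot, confirm the settings took effect.
cat /proc/cmdline
grep -i hugepages /proc/meminfo
```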
Miscellaneous Requirements for EKS
Table 3 lists additional requirements for installing JCNR on EKS.
Requirement | Example |
---|---|
Disable source/destination checks. | Disable source/destination checks on the AWS Elastic Network Interfaces (ENIs) attached to JCNR. JCNR, being a transit router, is neither the source nor the destination of any traffic that it receives. See the sketch after this table. |
Attach IAM policy. | Attach the AmazonEBSCSIDriverPolicy IAM policy to the role assigned to the EKS cluster. |
Set IOMMU and IOMMU-PT in GRUB. | Add the following line to /etc/default/grub: GRUB_CMDLINE_LINUX_DEFAULT="console=tty1 console=ttyS0 default_hugepagesz=1G hugepagesz=1G hugepages=64 intel_iommu=on iommu=pt". Update GRUB and reboot: grub2-mkconfig -o /boot/grub2/grub.cfg; reboot |
Load additional kernel modules on the host before deploying JCNR in L3 mode. These modules are usually available in ... Note: Applicable for L3 deployments only. | Create a /etc/modules-load.d/crpd.conf file and add the following kernel modules to it: tun, fou, fou6, ipip, ip_tunnel, ip6_tunnel, mpls_gso, mpls_router, mpls_iptunnel, vrf, vxlan. See the sketch after this table. |
Verify the core_pattern value is set on the host before deploying JCNR. | sysctl kernel.core_pattern returns, for example, kernel.core_pattern = \|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e. You can update the core_pattern value, for example: kernel.core_pattern=/var/crash/core_%e_%p_%i_%s_%h_%t.gz |
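
The items above can be scripted per node. The following is a minimal sketch, assuming root access on the host, the AWS CLI configured with suitable permissions, and a hypothetical ENI ID; adjust the identifiers for your environment:

```bash
# Disable source/destination checks on an ENI attached to JCNR (hypothetical ENI ID).
aws ec2 modify-network-interface-attribute \
  --network-interface-id eni-0123456789abcdef0 --no-source-dest-check

# L3 mode only: persist the required kernel modules and load them now.
cat <<'EOF' > /etc/modules-load.d/crpd.conf
tun
fou
fou6
ipip
ip_tunnel
ip6_tunnel
mpls_gso
mpls_router
mpls_iptunnel
vrf
vxlan
EOF
while read -r mod; do modprobe "$mod"; done < /etc/modules-load.d/crpd.conf

# Confirm a core_pattern is set before deploying JCNR.
sysctl kernel.core_pattern
```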
JCNR ConfigMap for VRRP
You can enable Virtual Router Redundancy Protocol (VRRP) for your JCNR cluster.
When running VRRP, the AWS IAM role for the node hosting the JCNR instance needs permission to modify the VPC route table. To provide that permission, add the NetworkAdministrator policy to that IAM role.
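
For example, assuming the AWS-managed job-function policy is intended and the node role is named jcnr-node-role (a placeholder), the policy can be attached with the AWS CLI:

```bash
# Attach the NetworkAdministrator managed policy to the node's IAM role (hypothetical role name).
aws iam attach-role-policy \
  --role-name jcnr-node-role \
  --policy-arn arn:aws:iam::aws:policy/job-function/NetworkAdministrator
```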
You must create a JCNR ConfigMap to define the behavior of VRRP for your JCNR cluster in an EKS deployment. Because an AWS VPC route table supports exactly one next hop per prefix, the ConfigMap defines how VRRP mastership determines when prefixes are copied from routing tables in JCNR to specific route tables in AWS.
We provide an example jcnr-aws-config.yaml
manifest below:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: jcnr-aws-config
      namespace: jcnr
    data:
      aws-rttable-map.json: |
        [
          {
            "jcnr-table-name": "default-rt.inet.0",
            "jcnr-policy-name": "default-rt-to-aws-export",
            "jcnr-nexthop-interface-name": "eth4",
            "vpc-table-tag": "jcnr-aws-vpc-internal-table"
          },
          {
            "jcnr-table-name": "default-rt.inet6.0",
            "jcnr-policy-name": "default-rt-to-aws-export",
            "jcnr-nexthop-interface-name": "eth4",
            "vpc-table-tag": "jcnr-aws-vpc-internal-table"
          }
        ]
Table 4 describes the ConfigMap elements:
Element | Description |
---|---|
jcnr-table-name | The routing table in JCNR from which prefixes should be copied. |
jcnr-policy-name | A routing policy in JCNR that imports the prefixes in the named routing table to copy to the AWS routing table. |
jcnr-nexthop-interface-name | The JCNR interface that the AWS VPC route table should use as the next hop when this JCNR instance is the VRRP master. |
vpc-table-tag | A freeform tag applied to the VPC route table in AWS to which the prefixes should be copied. |
Apply jcnr-aws-config.yaml to the cluster before installing JCNR. The JCNR CNI deployer renders the cRPD configuration based on the ConfigMap. When not using VRRP, provide an empty list as the data for aws-rttable-map.json.
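
As a sketch of that workflow, the manifest can be applied and verified with kubectl before the Helm install; the heredoc shows the empty-list variant for a non-VRRP deployment (the jcnr namespace is assumed to exist):

```bash
# Apply the VRRP route-table mapping before installing JCNR.
kubectl apply -f jcnr-aws-config.yaml

# Verify the stored ConfigMap data.
kubectl -n jcnr get configmap jcnr-aws-config -o yaml

# When not using VRRP, supply an empty list instead.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: jcnr-aws-config
  namespace: jcnr
data:
  aws-rttable-map.json: |
    []
EOF
```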
Port Requirements
Juniper Cloud-Native Router listens on certain TCP and UDP ports. This section lists the port requirements for the cloud-native router.
Protocol | Port | Description |
---|---|---|
TCP | 8085 | vRouter introspect: used to gain internal statistical information about vRouter |
TCP | 8070 | Telemetry information: used to see telemetry data from the JCNR vRouter |
TCP | 8072 | Telemetry information: used to see telemetry data from the JCNR control plane |
TCP | 8075, 8076 | Telemetry information: used for gNMI requests |
TCP | 9091 | vRouter health check: the cloud-native router checks to ensure the vRouter agent is running |
TCP | 9092 | vRouter health check: the cloud-native router checks to ensure the vRouter DPDK process is running |
TCP | 50052 | gRPC port: JCNR listens on both IPv4 and IPv6 |
TCP | 8081 | JCNR deployer port |
TCP | 24 | cRPD SSH |
TCP | 830 | cRPD NETCONF |
TCP | 666 | rpd |
TCP | 1883 | Mosquitto MQTT: publish/subscribe messaging utility |
TCP | 9500 | agentd on cRPD |
TCP | 21883 | na-mqttd |
TCP | 50053 | Default gNMI port that listens for client subscription requests |
TCP | 51051 | jsd on cRPD |
UDP | 50055 | Syslog-NG |
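
These ports must be permitted by the relevant security groups and free on the host. As a quick check, a sketch assuming shell access to the worker node:

```bash
# List listening sockets and filter for a few of the JCNR ports above.
ss -tulnp | grep -E ':(8085|8070|8072|9091|9092|50052|50053)\b'
```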
Download Options
To deploy JCNR on an EKS cluster, you can either download the Helm charts from the Juniper Networks software download site (see JCNR Software Download Packages) or subscribe via the AWS Marketplace.
JCNR Licensing
You can purchase BYOL licenses for the Juniper Cloud-Native Router software through your Juniper Account Team.
For information on BYOL licenses, see Manage JCNR Licenses.