Install and Verify Juniper Cloud-Native Router
SUMMARY The Juniper Cloud-Native Router (cloud-native router) uses the JCNR-Controller (cRPD) to provide control plane capabilities and the JCNR-CNI to provide a container network interface. Juniper Cloud-Native Router uses the DPDK-enabled vRouter to provide high-performance data plane capabilities and Syslog-NG to provide notification functions. This section explains how to install these components of the cloud-native router.
Install Juniper Cloud-Native Router Using Helm Chart
Read this section to learn the steps required to load the cloud-native router image components into Docker and to install the cloud-native router components using Helm charts.
- Review the Before You Install section to ensure the cluster has all the required configuration.
- Download the tarball, Juniper_Cloud_Native_Router_<release-number>.tgz, to the directory of your choice. You must perform the file transfer in binary mode when transferring the file to your server, so that the compressed tar file expands properly.
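For example, copying the package with scp preserves binary mode automatically. The user name, host, and destination directory below are placeholders:
scp Juniper_Cloud_Native_Router_<release-number>.tgz user@host:/var/tmp/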
- Expand the file Juniper_Cloud_Native_Router_<release-number>.tgz.
tar xzvf Juniper_Cloud_Native_Router_<release-number>.tgz
- Change directory to Juniper_Cloud_Native_Router_<release-number>.
cd Juniper_Cloud_Native_Router_<release-number>
Note: All remaining steps in the installation assume that your current working directory is now Juniper_Cloud_Native_Router_<release-number>.
- View the contents of the current directory.
ls
contrail-tools  helmchart  images  README.md  secrets
- The JCNR container images are required for deployment. Choose one of the following options:
  - Configure your cluster to deploy images from the Juniper Networks enterprise-hub.juniper.net repository. See Configure Repository Credentials for instructions on how to configure repository credentials in the deployment Helm chart.
  - Configure your cluster to deploy images from the images tarball included in the downloaded JCNR software package. See Deploy Prepackaged Images for instructions on how to import images to the local container runtime (see the example below).
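If you choose the prepackaged-images option and Docker is your container runtime, the import typically uses docker load. The archive name below is a placeholder; the exact file names are listed in Deploy Prepackaged Images:
docker load -i images/<image-archive>.tar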
- Enter the root password for your host server and your Juniper Cloud-Native Router license file into the secrets/jcnr-secrets.yaml file. You must enter the password and license in base64-encoded format.
You can view the sample contents of the jcnr-secrets.yaml file below:
---
apiVersion: v1
kind: Namespace
metadata:
  name: jcnr
---
apiVersion: v1
kind: Secret
metadata:
  name: jcnr-secrets
  namespace: jcnr
data:
  root-password: <add your password in base64 format>
  crpd-license: |
    <add your license in base64 format>
To encode the password, create a file with the plain-text password on a single line, then issue the command:
base64 -w 0 rootPasswordFile
To encode the license file, copy the license file onto your host server and issue the command:
base64 -w 0 licenseFile
You must copy the base64 outputs and paste them into the secrets/jcnr-secrets.yaml file in the appropriate locations.
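As a sanity check before you paste the values, you can round-trip the encoding. The password below is a placeholder; printf is used instead of echo so that no trailing newline ends up inside the encoded value:
printf '%s' 'MyRootPassword' > rootPasswordFile
base64 -w 0 rootPasswordFile                # encoded value to paste into jcnr-secrets.yaml
base64 -w 0 rootPasswordFile | base64 -d    # should print the original password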
Note: You must obtain your license file from your account team and install it in the jcnr-secrets.yaml file as instructed above. Without the proper base64-encoded license file and root password in the jcnr-secrets.yaml file, the cRPD pod does not enter the Running state, but remains in the CrashLoopBackOff state.
Apply the secrets/jcnr-secrets.yaml file to the Kubernetes system.
kubectl apply -f secrets/jcnr-secrets.yaml
namespace/jcnr created
secret/jcnr-secrets created
Note: Starting with JCNR Release 23.2, the JCNR license format has changed. Request a new license key from the JAL portal before deploying or upgrading to Release 23.2 or later.
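To confirm that the namespace and secret were created as expected, you can inspect them with standard kubectl commands (a suggested check, not part of the documented procedure):
kubectl get secret jcnr-secrets -n jcnr
kubectl get secret jcnr-secrets -n jcnr -o jsonpath='{.data.root-password}' | base64 -d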
- Customize the Helm chart for your deployment using the helmchart/values.yaml file. See Customize JCNR Helm Chart for descriptions of the Helm chart configurations.
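Optionally, before deploying, you can validate your edits and preview the manifests that Helm will render. These are standard Helm commands, offered here as a suggestion rather than part of the documented procedure, and assume you run them from the package root:
helm lint helmchart/
helm template jcnr helmchart/ | less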
- If you are installing JCNR on Amazon EKS, update the dpdkCommandAdditionalArgs key in the helmchart/charts/jcnr-vrouter/values.yaml file and set the tx and rx descriptors to 256; otherwise, skip this step. For example:
dpdkCommandAdditionalArgs: "--yield_option 0 --dpdk_txd_sz 256 --dpdk_rxd_sz 256"
- Optionally, create cRPD pods with custom configuration. See Customize JCNR Configuration using Node Annotations for creating and applying the cRPD customizations.
- Deploy the Juniper Cloud-Native Router using the Helm chart. Navigate to the helmchart directory and run the following command:
helm install jcnr .
NAME: jcnr
LAST DEPLOYED: Fri Jun 23 06:04:33 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
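If you need to see this deployment summary again later, the standard helm status command reprints it for the release name used above:
helm status jcnr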
- Confirm the Juniper Cloud-Native Router deployment.
helm ls
Sample output:
NAME   NAMESPACE   REVISION   UPDATED                                   STATUS     CHART         APP VERSION
jcnr   default     1          2023-06-23 06:04:33.144611017 -0400 EDT   deployed   jcnr-23.2.0   23.2.0
Verify Installation
- Verify the state of the JCNR pods by issuing the kubectl get pods -A command.
The output of the kubectl command shows all of the pods in the Kubernetes cluster in all namespaces. Successful deployment means that all pods are in the Running state. In this example, the Juniper Cloud-Native Router pods are those in the contrail-deploy, contrail, and jcnr namespaces. For example:
kubectl get pods -A
NAMESPACE         NAME                                       READY   STATUS    RESTARTS        AGE
contrail-deploy   contrail-k8s-deployer-579cd5bc74-g27gs     1/1     Running   0               103s
contrail          contrail-vrouter-masters-lqjqk             3/3     Running   0               87s
jcnr              kube-crpd-worker-sts-0                     1/1     Running   0               103s
jcnr              syslog-ng-ds5qd                            1/1     Running   0               103s
kube-system       calico-kube-controllers-5f4fd8666-m78hk    1/1     Running   1 (3h13m ago)   4h2m
kube-system       calico-node-28w98                          1/1     Running   3 (4d1h ago)    86d
kube-system       coredns-54bf8d85c7-vkpgs                   1/1     Running   0               3h8m
kube-system       dns-autoscaler-7944dc7978-ws9fn            1/1     Running   3 (4d1h ago)    86d
kube-system       kube-apiserver-ix-esx-06                   1/1     Running   4 (4d1h ago)    86d
kube-system       kube-controller-manager-ix-esx-06          1/1     Running   8 (4d1h ago)    86d
kube-system       kube-multus-ds-amd64-jl69w                 1/1     Running   3 (4d1h ago)    86d
kube-system       kube-proxy-qm5bl                           1/1     Running   3 (4d1h ago)    86d
kube-system       kube-scheduler-ix-esx-06                   1/1     Running   9 (4d1h ago)    86d
kube-system       nodelocaldns-bntfp                         1/1     Running   4 (4d1h ago)    86d
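If you automate this check, the standard kubectl wait command blocks until the pods in a namespace report Ready (the 300-second timeout is an arbitrary example value):
kubectl wait --for=condition=Ready pods --all -n jcnr --timeout=300s
kubectl wait --for=condition=Ready pods --all -n contrail --timeout=300s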
- Verify the JCNR daemonsets by issuing the kubectl get ds -A command.
Use the kubectl get ds -A command to get a list of daemonsets. In this example, the JCNR daemonsets are those in the contrail and jcnr namespaces.
kubectl get ds -A
NAMESPACE     NAME                       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR              AGE
contrail      contrail-vrouter-masters   1         1         1       1            1           <none>                     90m
contrail      contrail-vrouter-nodes     0         0         0       0            0           <none>                     90m
jcnr          syslog-ng                  1         1         1       1            1           <none>                     90m
kube-system   calico-node                1         1         1       1            1           kubernetes.io/os=linux     86d
kube-system   kube-multus-ds-amd64       1         1         1       1            1           kubernetes.io/arch=amd64   86d
kube-system   kube-proxy                 1         1         1       1            1           kubernetes.io/os=linux     86d
kube-system   nodelocaldns               1         1         1       1            1           kubernetes.io/os=linux     86d
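You can also confirm that an individual JCNR daemonset has finished rolling out with the standard kubectl rollout command; for example, using the syslog-ng daemonset from the output above:
kubectl rollout status ds/syslog-ng -n jcnr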
- Verify the JCNR statefulsets by issuing the kubectl get statefulsets -A command.
The command output provides the statefulsets.
kubectl get statefulsets -A
NAMESPACE   NAME                   READY   AGE
jcnr        kube-crpd-worker-sts   1/1     27m
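If the statefulset does not reach the Ready state, the cRPD pod logs are a good first place to look. The pod name below comes from the kubectl get pods output shown earlier:
kubectl logs kube-crpd-worker-sts-0 -n jcnr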
- Verify that the cRPD is licensed and has the appropriate configurations.
  - See the Accessing the JCNR Controller (cRPD) CLI section to access the cRPD CLI.
  - Once you have accessed the cRPD CLI, issue the show system license command in CLI mode to view the system licenses. For example:
root@jcnr-01:/# cli
root@jcnr-01> show system license
License usage:
                                 Licenses   Licenses   Licenses   Expiry
  Feature name                       used   installed    needed
  containerized-rpd-standard            1           1        0    2024-09-20 16:59:00 PDT

Licenses installed:
  License identifier: 85e5229f-0c64-0000-c10e4-a98c09ab34a1
  License SKU: S-CRPD-10-A1-PF-5
  License version: 1
  Order Type: commercial
  Software Serial Number: 1000098711000-iHpgf
  Customer ID: Juniper Networks Inc.
  License count: 15000
  Features:
    containerized-rpd-standard - Containerized routing protocol daemon with standard features
      date-based, 2022-08-21 17:00:00 PDT - 2027-09-20 16:59:00 PDT
  - Issue the show configuration | display set command in CLI mode to view the cRPD default and custom configuration. The output will be based on the custom configuration and the JCNR deployment mode.
root@jcnr-01# cli
root@jcnr-01> show configuration | display set
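To narrow the output to one area of the configuration, you can append the standard Junos match filter; the search term bgp below is only an illustration:
root@jcnr-01> show configuration | display set | match bgp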
  - Type the exit command to exit from the pod shell.
- Verify the vRouter interfaces configuration.
  - See the Accessing the vRouter CLI section to access the vRouter CLI.
  - Once you have accessed the vRouter CLI, issue the vif --list command to view the vRouter interfaces. The output will depend upon the JCNR deployment mode and configuration. An example for an L3 mode deployment, with one fabric interface configured, is provided below:
$ vif --list
Vrouter Interface Table

Flags: P=Policy, X=Cross Connect, S=Service Chain, Mr=Receive Mirror
       Mt=Transmit Mirror, Tc=Transmit Checksum Offload, L3=Layer 3, L2=Layer 2
       D=DHCP, Vp=Vhost Physical, Pr=Promiscuous, Vnt=Native Vlan Tagged
       Mnp=No MAC Proxy, Dpdk=DPDK PMD Interface, Rfl=Receive Filtering Offload,
       Mon=Interface is Monitored
       Uuf=Unknown Unicast Flood, Vof=VLAN insert/strip offload, Df=Drop New Flows,
       L=MAC Learning Enabled
       Proxy=MAC Requests Proxied Always, Er=Etree Root, Mn=Mirror without Vlan Tag,
       HbsL=HBS Left Intf
       HbsR=HBS Right Intf, Ig=Igmp Trap Enabled, Ml=MAC-IP Learning Enabled,
       Me=Multicast Enabled

vif0/0      Socket: unix MTU: 1514
            Type:Agent HWaddr:00:00:5e:00:01:00
            Vrf:65535 Flags:L2 QOS:-1 Ref:3
            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0
            RX packets:0  bytes:0 errors:0
            TX packets:0  bytes:0 errors:0
            Drops:0

vif0/1      PCI: 0000:5a:02.1 (Speed 10000, Duplex 1) NH: 6 MTU: 9000
            Type:Physical HWaddr:ba:9c:0f:ab:e2:c9 IPaddr:0.0.0.0
            DDP: OFF SwLB: ON
            Vrf:0 Mcast Vrf:0 Flags:L3L2Vof QOS:0 Ref:12
            RX port packets:66 errors:0
            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0
            Fabric Interface: 0000:5a:02.1  Status: UP  Driver: net_iavf
            RX packets:66  bytes:5116 errors:0
            TX packets:0  bytes:0 errors:0
            Drops:0

vif0/2      PMD: eno3v1 NH: 9 MTU: 9000
            Type:Host HWaddr:ba:9c:0f:ab:e2:c9 IPaddr:0.0.0.0
            DDP: OFF SwLB: ON
            Vrf:0 Mcast Vrf:65535 Flags:L3L2DProxyEr QOS:-1 Ref:13 TxXVif:1
            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0
            RX packets:0  bytes:0 errors:0
            TX packets:66  bytes:5116 errors:0
            Drops:0
            TX queue packets:66 errors:0
            TX device packets:66  bytes:5116 errors:0
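To inspect a single interface or watch the counters update live, the vif utility in contrail-vrouter also supports per-interface and rate views. These options belong to the standard vif tool rather than this procedure; the index 1 below refers to the fabric interface vif0/1 in the output above:
$ vif --get 1     # show only vif0/1
$ vif --rate      # continuously display packet-rate statistics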
  - Type the exit command to exit the pod shell.