Install and Verify Juniper Cloud-Native Router for OpenShift Deployment
SUMMARY The Juniper Cloud-Native Router (cloud-native router) uses the JCNR-Controller (cRPD) to provide control plane capabilities and the JCNR-CNI to provide a container network interface. Juniper Cloud-Native Router uses the DPDK-enabled vRouter to provide high-performance data plane capabilities and Syslog-NG to provide notification functions. This section explains how to install these components of the Cloud-Native Router on the Red Hat OpenShift Container Platform (OCP).
Install Juniper Cloud-Native Router Using Helm Chart
Read this section to learn the steps required to install the cloud-native router components using Helm charts.
- Review the System Requirements for OpenShift Deployment to ensure the cluster has all the required configuration.
- Download the tarball, Juniper_Cloud_Native_Router_release-number.tgz, to the directory of your choice. You must perform the file transfer in binary mode when transferring the file to your server, so that the compressed tar file expands properly.
- Expand the file Juniper_Cloud_Native_Router_release-number.tgz:

  tar xzvf Juniper_Cloud_Native_Router_release-number.tgz
- Change directory to Juniper_Cloud_Native_Router_release-number:

  cd Juniper_Cloud_Native_Router_release-number

  Note: All remaining steps in the installation assume that your current working directory is now Juniper_Cloud_Native_Router_release-number.
- View the contents in the current directory:

  ls
  contrail-tools  helmchart  images  README.md  secrets
- The JCNR container images are required for deployment. Choose one of the following options:

  - Configure your cluster to deploy images from the Juniper Networks enterprise-hub.juniper.net repository. See Configure Repository Credentials for instructions on how to configure repository credentials in the deployment Helm chart.

  - Configure your cluster to deploy images from the images tarball included in the downloaded JCNR software package. See Deploy Prepackaged Images for instructions on how to import images to the local container runtime.
- Enter the root password for your host server and your Juniper Cloud-Native Router license file into the secrets/jcnr-secrets.yaml file. You must enter the password and license in base64-encoded format.

  You can view the sample contents of the jcnr-secrets.yaml file below:

  ---
  apiVersion: v1
  kind: Namespace
  metadata:
    name: jcnr
  ---
  apiVersion: v1
  kind: Secret
  metadata:
    name: jcnr-secrets
    namespace: jcnr
  data:
    root-password: <add your password in base64 format>
    crpd-license: |
      <add your license in base64 format>

  To encode the password, create a file with the plain-text password on a single line, then issue the command:

  base64 -w 0 rootPasswordFile

  To encode the license, copy the license key into a file on your host server and issue the command:

  base64 -w 0 licenseFile

  Copy the base64 outputs and paste them into the secrets/jcnr-secrets.yaml file in the appropriate locations.

  Note: You must obtain your license file from your account team and install it in the jcnr-secrets.yaml file as instructed above. Without the proper base64-encoded license key and root password in the jcnr-secrets.yaml file, the cRPD pod does not enter the Running state, but remains in the CrashLoopBackOff state.
  Apply the secrets/jcnr-secrets.yaml file to the Kubernetes system:

  kubectl apply -f secrets/jcnr-secrets.yaml
  namespace/jcnr created
  secret/jcnr-secrets created
  Note: Starting with JCNR Release 23.2, the JCNR license format has changed. Request a new license key from the JAL portal before deploying or upgrading to 23.2 or newer releases.
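The encoding workflow above can be sketched end to end. This is a minimal illustration: the password value is a placeholder, and rootPasswordFile is the scratch file named in the text.

```shell
# Encode the root password for jcnr-secrets.yaml.
# 'MyRootPassw0rd' is a placeholder; substitute your actual root password.
printf 'MyRootPassw0rd' > rootPasswordFile      # plain-text password on a single line
base64 -w 0 rootPasswordFile                    # paste this output at root-password:

# Sanity check: the encoding must round-trip to the original value.
decoded=$(base64 -w 0 rootPasswordFile | base64 -d)
[ "$decoded" = "MyRootPassw0rd" ] && echo "round-trip OK"
```

Using printf rather than echo avoids encoding a trailing newline into the secret; the same round-trip check applies to the license file.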
- Customize the Helm chart for your deployment using the helmchart/values.yaml file.

  See Customize JCNR Helm Chart for OpenShift Deployment for descriptions of the Helm chart configurations.
- Optionally, customize the JCNR configuration.

  See Customize JCNR Configuration for creating and applying the cRPD customizations.
- Deploy the Juniper Cloud-Native Router using the Helm chart.

  Navigate to the helmchart directory and run the following command:

  helm install jcnr .

  NAME: jcnr
  LAST DEPLOYED: Fri Sep 22 06:04:33 2023
  NAMESPACE: default
  STATUS: deployed
  REVISION: 1
  TEST SUITE: None
- Confirm the Juniper Cloud-Native Router deployment:

  helm ls

  Sample output:

  NAME   NAMESPACE   REVISION   UPDATED                                   STATUS     CHART         APP VERSION
  jcnr   default     1          2023-09-22 06:04:33.144611017 -0400 EDT   deployed   jcnr-23.3.0   23.3.0
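The deployment check can also be scripted. This sketch extracts the STATUS column for the jcnr release from the sample helm ls output above; the here-doc stands in for live `helm ls` output on a real cluster.

```shell
# Extract the STATUS column for the jcnr release from `helm ls` output.
# The here-doc inlines the sample output; on a live cluster, pipe `helm ls` instead.
status=$(awk '$1 == "jcnr" { print $8 }' <<'EOF'
NAME   NAMESPACE   REVISION   UPDATED                                   STATUS     CHART         APP VERSION
jcnr   default     1          2023-09-22 06:04:33.144611017 -0400 EDT   deployed   jcnr-23.3.0   23.3.0
EOF
)
[ "$status" = "deployed" ] && echo "jcnr release deployed"
```

The column index ($8) assumes the default helm ls table layout shown above, in which the timestamp occupies fields 4 through 7.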
Verify Installation
- Verify the state of the JCNR pods by issuing the kubectl get pods -A command.

  The output of the kubectl command shows all of the pods in the Kubernetes cluster in all namespaces. Successful deployment means that all pods are in the Running state. In this example, the Juniper Cloud-Native Router pods are those in the contrail and jcnr namespaces. For example:

  kubectl get pods -A

  NAMESPACE                                NAME                                                     READY   STATUS      RESTARTS   AGE
  contrail                                 contrail-vrouter-nodes-dt8sx                             3/3     Running     0          16d
  jcnr                                     kube-crpd-worker-sts-0                                   1/1     Running     0          16d
  jcnr                                     syslog-ng-vh89p                                          1/1     Running     0          16d
  openshift-cluster-node-tuning-operator   tuned-zccwc                                              1/1     Running     8          69d
  openshift-dns                            dns-default-wmchn                                        2/2     Running     14         69d
  openshift-dns                            node-resolver-dm9b7                                      1/1     Running     8          69d
  openshift-image-registry                 image-pruner-28212480-bpn9w                              0/1     Completed   0          2d11h
  openshift-image-registry                 image-pruner-28213920-9jk74                              0/1     Completed   0          35h
  openshift-image-registry                 node-ca-jbwlx                                            1/1     Running     8          69d
  openshift-ingress-canary                 ingress-canary-k6jqs                                     1/1     Running     8          69d
  openshift-ingress                        router-default-55dff9cbc5-kz8bg                          1/1     Running     1          62d
  openshift-kni-infra                      coredns-node-warthog-41                                  2/2     Running     16         69d
  openshift-kni-infra                      keepalived-node-warthog-41                               2/2     Running     14         69d
  openshift-machine-config-operator        machine-config-daemon-w8fbh                              2/2     Running     16         69d
  openshift-monitoring                     alertmanager-main-1                                      6/6     Running     7          62d
  openshift-monitoring                     node-exporter-rbht9                                      2/2     Running     15         69d
  openshift-monitoring                     prometheus-adapter-7d77cfb894-nx29s                      1/1     Running     0          6d18h
  openshift-monitoring                     prometheus-k8s-1                                         6/6     Running     6          62d
  openshift-monitoring                     prometheus-operator-admission-webhook-7d4759d465-mv98x   1/1     Running     1          62d
  openshift-monitoring                     thanos-querier-6d77dcb87-c4pr6                           6/6     Running     6          62d
  openshift-multus                         multus-additional-cni-plugins-jbrv2                      1/1     Running     8          69d
  openshift-multus                         multus-x2ddp                                             1/1     Running     8          69d
  openshift-multus                         network-metrics-daemon-tg528                             2/2     Running     16         69d
  openshift-network-diagnostics            network-check-target-mqr4t                               1/1     Running     8          69d
  openshift-operator-lifecycle-manager     collect-profiles-28216020-66xqc                          0/1     Completed   0          6m8s
  openshift-ovn-kubernetes                 ovnkube-node-d4g2s                                       5/5     Running     37         69d
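The pod check above can be automated with a small filter over the contrail and jcnr namespaces. The here-doc inlines the JCNR rows from the sample output; on a live cluster you would pipe `kubectl get pods -A --no-headers` into the same awk filter.

```shell
# Flag any JCNR pod (contrail or jcnr namespace) whose STATUS is not Running.
# Sample rows from the output above stand in for live `kubectl get pods -A --no-headers`.
not_running=$(awk '($1 == "contrail" || $1 == "jcnr") && $4 != "Running"' <<'EOF'
contrail   contrail-vrouter-nodes-dt8sx   3/3   Running   0   16d
jcnr       kube-crpd-worker-sts-0         1/1   Running   0   16d
jcnr       syslog-ng-vh89p                1/1   Running   0   16d
EOF
)
[ -z "$not_running" ] && echo "all JCNR pods Running"
```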
- Verify the JCNR daemonsets by issuing the kubectl get ds -A command.

  The command output lists the daemonsets in all namespaces. In this example, the JCNR daemonsets are those in the contrail and jcnr namespaces.

  kubectl get ds -A

  NAMESPACE                                NAME                            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                                                 AGE
  contrail                                 contrail-vrouter-masters        0         0         0       0            0           <none>                                                        16d
  contrail                                 contrail-vrouter-nodes          2         2         2       2            2           <none>                                                        16d
  jcnr                                     syslog-ng                       2         2         2       2            2           <none>                                                        16d
  openshift-cluster-node-tuning-operator   tuned                           5         5         5       5            5           kubernetes.io/os=linux                                        69d
  openshift-dns                            dns-default                     5         5         5       5            5           kubernetes.io/os=linux                                        69d
  openshift-dns                            node-resolver                   5         5         5       5            5           kubernetes.io/os=linux                                        69d
  openshift-image-registry                 node-ca                         5         5         5       5            5           kubernetes.io/os=linux                                        69d
  openshift-ingress-canary                 ingress-canary                  2         2         2       2            2           kubernetes.io/os=linux                                        69d
  openshift-machine-api                    ironic-proxy                    3         3         3       3            3           node-role.kubernetes.io/master=                               69d
  openshift-machine-config-operator        machine-config-daemon           5         5         5       5            5           kubernetes.io/os=linux                                        69d
  openshift-machine-config-operator        machine-config-server           3         3         3       3            3           node-role.kubernetes.io/master=                               69d
  openshift-monitoring                     node-exporter                   5         5         5       5            5           kubernetes.io/os=linux                                        69d
  openshift-multus                         multus                          5         5         5       5            5           kubernetes.io/os=linux                                        69d
  openshift-multus                         multus-additional-cni-plugins   5         5         5       5            5           kubernetes.io/os=linux                                        69d
  openshift-multus                         network-metrics-daemon          5         5         5       5            5           kubernetes.io/os=linux                                        69d
  openshift-network-diagnostics            network-check-target            5         5         5       5            5           beta.kubernetes.io/os=linux                                   69d
  openshift-ovn-kubernetes                 ovnkube-master                  3         3         3       3            3           beta.kubernetes.io/os=linux,node-role.kubernetes.io/master=   69d
  openshift-ovn-kubernetes                 ovnkube-node                    5         5         5       5            5           beta.kubernetes.io/os=linux                                   69d
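The daemonset check can be scripted the same way, comparing the DESIRED and READY columns for the JCNR rows. The here-doc inlines the sample rows; on a live cluster, pipe `kubectl get ds -A --no-headers` instead.

```shell
# Flag any JCNR daemonset whose READY count ($5) differs from DESIRED ($3).
# Sample rows from the output above stand in for live `kubectl get ds -A --no-headers`.
mismatched=$(awk '($1 == "contrail" || $1 == "jcnr") && $3 != $5' <<'EOF'
contrail   contrail-vrouter-masters   0   0   0   0   0   <none>   16d
contrail   contrail-vrouter-nodes     2   2   2   2   2   <none>   16d
jcnr       syslog-ng                  2   2   2   2   2   <none>   16d
EOF
)
[ -z "$mismatched" ] && echo "all JCNR daemonsets ready"
```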
- Verify the JCNR statefulsets by issuing the kubectl get statefulsets -A command.

  The command output provides the statefulsets:

  kubectl get statefulsets -A

  NAMESPACE              NAME                   READY   AGE
  jcnr                   kube-crpd-worker-sts   2/2     16d
  openshift-monitoring   alertmanager-main      2/2     69d
  openshift-monitoring   prometheus-k8s         2/2     69d
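A scripted variant of this check splits the READY column (n/n) for the cRPD statefulset and confirms both sides match. The here-doc inlines the sample rows; on a live cluster, pipe `kubectl get statefulsets -A --no-headers` instead.

```shell
# Confirm the kube-crpd-worker-sts statefulset is fully ready (READY is n/n).
# Sample rows from the output above stand in for live kubectl output.
ready=$(awk '$2 == "kube-crpd-worker-sts" { split($3, a, "/"); if (a[1] == a[2]) print "ready" }' <<'EOF'
jcnr                   kube-crpd-worker-sts   2/2   16d
openshift-monitoring   alertmanager-main      2/2   69d
EOF
)
[ "$ready" = "ready" ] && echo "cRPD statefulset ready"
```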
- Verify that cRPD is licensed and has the appropriate configurations.

  - See the Access the cRPD CLI section for instructions on accessing the cRPD CLI.

  - Once you have access to the cRPD CLI, issue the show system license command in CLI mode to view the system licenses. For example:

    root@jcnr-01:/# cli
    root@jcnr-01> show system license
    License usage:
                                     Licenses   Licenses   Licenses   Expiry
      Feature name                       used  installed     needed
      containerized-rpd-standard            1          1          0   2024-09-20 16:59:00 PDT

    Licenses installed:
      License identifier: 85e5229f-0c64-0000-c10e4-a98c09ab34a1
      License SKU: S-CRPD-10-A1-PF-5
      License version: 1
      Order Type: commercial
      Software Serial Number: 1000098711000-iHpgf
      Customer ID: Juniper Networks Inc.
      License count: 15000
      Features:
        containerized-rpd-standard - Containerized routing protocol daemon with standard features
          date-based, 2022-08-21 17:00:00 PDT - 2027-09-20 16:59:00 PDT
  - Issue the show configuration | display set command in CLI mode to view the cRPD default and custom configuration. The output depends on the custom configuration and the JCNR deployment mode.

    root@jcnr-01# cli
    root@jcnr-01> show configuration | display set
  - Type the exit command to exit from the pod shell.
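The license verification above can also be scripted by checking that the "Licenses needed" column is 0 for the containerized-rpd-standard feature. The here-doc inlines the sample row from the show system license output; in practice you would feed the captured command output through the same filter.

```shell
# Check the cRPD license: the "Licenses needed" column ($4) must be 0.
# The sample row from the `show system license` output above stands in for live output.
needed=$(awk '$1 == "containerized-rpd-standard" { print $4 }' <<'EOF'
containerized-rpd-standard            1          1          0   2024-09-20 16:59:00 PDT
EOF
)
[ "$needed" = "0" ] && echo "license satisfied"
```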
- Verify the vRouter interfaces configuration.

  - See the Access the vRouter CLI section for instructions on accessing the vRouter CLI.

  - Once you have accessed the vRouter CLI, issue the vif --list command to view the vRouter interfaces. The output depends on the JCNR deployment mode and configuration. An example for an L3 mode deployment, with two fabric interfaces configured, is provided below:

    $ vif --list
    Vrouter Interface Table

    Flags: P=Policy, X=Cross Connect, S=Service Chain, Mr=Receive Mirror
           Mt=Transmit Mirror, Tc=Transmit Checksum Offload, L3=Layer 3, L2=Layer 2
           D=DHCP, Vp=Vhost Physical, Pr=Promiscuous, Vnt=Native Vlan Tagged
           Mnp=No MAC Proxy, Dpdk=DPDK PMD Interface, Rfl=Receive Filtering Offload, Mon=Interface is Monitored
           Uuf=Unknown Unicast Flood, Vof=VLAN insert/strip offload, Df=Drop New Flows, L=MAC Learning Enabled
           Proxy=MAC Requests Proxied Always, Er=Etree Root, Mn=Mirror without Vlan Tag, HbsL=HBS Left Intf
           HbsR=HBS Right Intf, Ig=Igmp Trap Enabled, Ml=MAC-IP Learning Enabled, Me=Multicast Enabled

    vif0/0      Socket: unix
                MTU: 1514
                Type:Agent HWaddr:00:00:5e:00:01:00
                Vrf:65535 Flags:L2 QOS:-1 Ref:3
                RX port packets:864 errors:0
                RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0
                RX packets:864 bytes:75536 errors:0
                TX packets:13609 bytes:1419892 errors:0
                Drops:0

    vif0/1      PCI: 0000:17:00.0 (Speed 25000, Duplex 1) NH: 6
                MTU: 9000
                Type:Physical HWaddr:40:a6:b7:a0:f0:6c IPaddr:0.0.0.0
                DDP: OFF SwLB: ON
                Vrf:0 Mcast Vrf:0 Flags:TcL3L2Vof QOS:0 Ref:9
                RX port packets:243886 errors:0
                RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0
                Fabric Interface: 0000:17:00.0  Status: UP  Driver: net_ice
                RX packets:243886 bytes:20529529 errors:0
                TX packets:243244 bytes:20010274 errors:0
                Drops:2675
                TX port packets:243244 errors:0

    vif0/2      PCI: 0000:17:00.1 (Speed 25000, Duplex 1) NH: 7
                MTU: 9000
                Type:Physical HWaddr:40:a6:b7:a0:f0:6d IPaddr:0.0.0.0
                DDP: OFF SwLB: ON
                Vrf:0 Mcast Vrf:0 Flags:TcL3L2Vof QOS:0 Ref:8
                RX port packets:129173 errors:0
                RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0
                Fabric Interface: 0000:17:00.1  Status: UP  Driver: net_ice
                RX packets:129173 bytes:11623158 errors:0
                TX packets:129204 bytes:11624377 errors:0
                Drops:0
                TX port packets:129204 errors:0

    vif0/3      PMD: ens1f0 NH: 10
                MTU: 9000
                Type:Host HWaddr:40:a6:b7:a0:f0:6c IPaddr:0.0.0.0
                DDP: OFF SwLB: ON
                Vrf:0 Mcast Vrf:65535 Flags:L3L2DProxyEr QOS:-1 Ref:11 TxXVif:1
                RX device packets:242329 bytes:19965464 errors:0
                RX queue packets:242329 errors:0
                RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0
                RX packets:242329 bytes:19965464 errors:0
                TX packets:241163 bytes:20324343 errors:0
                Drops:0
                TX queue packets:241163 errors:0
                TX device packets:241163 bytes:20324343 errors:0

    vif0/4      PMD: ens1f1 NH: 15
                MTU: 9000
                Type:Host HWaddr:40:a6:b7:a0:f0:6d IPaddr:0.0.0.0
                DDP: OFF SwLB: ON
                Vrf:0 Mcast Vrf:65535 Flags:L3L2DProxyEr QOS:-1 Ref:11 TxXVif:2
                RX device packets:129204 bytes:11624377 errors:0
                RX queue packets:129204 errors:0
                RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0
                RX packets:129204 bytes:11624377 errors:0
                TX packets:129173 bytes:11623158 errors:0
                Drops:0
                TX queue packets:129173 errors:0
                TX device packets:129173 bytes:11623158 errors:0
  - Type the exit command to exit the pod shell.
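The fabric-interface check above can be scripted by counting the Fabric Interface lines that report Status: UP. The here-doc inlines the two Fabric Interface lines from the sample vif --list output; in practice, feed the captured command output through the same grep.

```shell
# Count Fabric Interface lines reporting Status: UP in `vif --list` output.
# For this L3 example with two fabric interfaces, the expected count is 2.
up_count=$(grep -c 'Status: UP' <<'EOF'
Fabric Interface: 0000:17:00.0  Status: UP  Driver: net_ice
Fabric Interface: 0000:17:00.1  Status: UP  Driver: net_ice
EOF
)
[ "$up_count" -eq 2 ] && echo "both fabric interfaces UP"
```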