Install and Verify Juniper Cloud-Native Router for GCP Deployment

01-Nov-24

SUMMARY The Juniper Cloud-Native Router (cloud-native router) uses the JCNR-Controller (cRPD) to provide control plane capabilities and JCNR-CNI to provide a container network interface. Juniper Cloud-Native Router uses the DPDK-enabled vRouter to provide high-performance data plane capabilities and Syslog-NG to provide notification functions. This section explains how you can install these components of the Cloud-Native Router.

Install Juniper Cloud-Native Router Using Helm Chart

Read this section to learn the steps required to install the cloud-native router components using Helm charts.

  1. Review the System Requirements for GCP Deployment section to ensure that your setup has all the required configuration.
  2. Download the JCNR Helm charts, Juniper_Cloud_Native_Router_release-number.tgz, to a directory of your choice. Transfer the file to your server in binary mode so that the compressed tar file expands properly.
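    For example, a minimal sketch of a binary-mode transfer, assuming you copy the package from a local workstation with scp (scp always transfers files in binary mode); the user name, host name, and destination path are placeholders:
    scp Juniper_Cloud_Native_Router_release-number.tgz user@host-server:/path/of/your/choice/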
  3. Expand the file Juniper_Cloud_Native_Router_release-number.tgz.
    tar xzvf Juniper_Cloud_Native_Router_release-number.tgz
  4. Change directory to Juniper_Cloud_Native_Router_release-number.
    cd Juniper_Cloud_Native_Router_release-number
    Note:

    All remaining steps in the installation assume that your current working directory is now Juniper_Cloud_Native_Router_release-number.

  5. View the contents of the current directory.
    ls
    contrail-tools  helmchart  images  README.md  secrets
  6. The JCNR container images are required for deployment. Choose one of the following options:
    • Configure your cluster to deploy images from the Juniper Networks enterprise-hub.juniper.net repository. See Configure Repository Credentials for instructions on how to configure repository credentials in the deployment Helm chart.

    • Configure your cluster to deploy images from the images tarball included in the downloaded JCNR software package. See Deploy Prepackaged Images for instructions on how to import images to the local container runtime.
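    For example, a minimal sketch of the prepackaged-image option, assuming Docker is your container runtime and that the image archives sit under the images/ directory of the expanded package (the exact archive names vary by release):
    for tarball in images/*.tar*; do docker load -i "$tarball"; done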

  7. Enter the root password for your host server and your Juniper Cloud-Native Router license file into the secrets/jcnr-secrets.yaml file. You must enter the password and license in base64-encoded format.

    You can view the sample contents of the jcnr-secrets.yaml file below:

    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: jcnr
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: jcnr-secrets
      namespace: jcnr
    data:
      root-password: <add your password in base64 format>
      crpd-license: |
        <add your license in base64 format>
    To encode the password, create a file with the plain text password on a single line. Then issue the command:
    base64 -w 0 rootPasswordFile
    To encode the license, copy the license key into a file on your host server and issue the command:
    base64 -w 0 licenseFile
    You must copy the base64 outputs and paste them into the secrets/jcnr-secrets.yaml file in the appropriate locations.
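    As an optional sanity check, the following sketch decodes the encoded strings so you can confirm that they round-trip to the original values (assuming the rootPasswordFile and licenseFile names used above):
    base64 -w 0 rootPasswordFile | base64 -d
    base64 -w 0 licenseFile | base64 -d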
    Note:

    You must obtain your license file from your account team and install it in the jcnr-secrets.yaml file as instructed above. Without the proper base64-encoded license key and root password in the jcnr-secrets.yaml file, the cRPD Pod does not enter Running state, but remains in CrashLoopBackOff state.

    Apply the secrets/jcnr-secrets.yaml file to the Kubernetes cluster:

    kubectl apply -f secrets/jcnr-secrets.yaml
    namespace/jcnr created
    secret/jcnr-secrets created
    Note:

    Starting with JCNR Release 23.2, the JCNR license format has changed. Request a new license key from the JAL portal before deploying or upgrading to 23.2 or newer releases.

  8. Customize the helm chart for your deployment using the helmchart/values.yaml file.

    See Customize JCNR Helm Chart for GCP Deployment for descriptions of the Helm chart configuration options and a sample Helm chart for GCP deployment.
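    For example, an illustrative values.yaml fragment that restricts JCNR to nodes carrying the key1=jcnr label applied in a later step (a sketch only; the field layout and label key must match your own helmchart/values.yaml):
    nodeAffinity:
    - key: key1
      operator: In
      values:
      - jcnr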

  9. Optionally, customize JCNR configuration.
    See Customize JCNR Configuration for information on creating and applying cRPD customizations.
  10. Label the nodes on which JCNR must be installed, based on the nodeAffinity defined in values.yaml. For example:
    kubectl label nodes ip-10.0.100.17.lab.net key1=jcnr --overwrite
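    To confirm that the label was applied, you can list the node labels (assuming the example key1=jcnr label above):
    kubectl get nodes --show-labels | grep key1=jcnr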
  11. Deploy the Juniper Cloud-Native Router using the helm chart.
    Navigate to the helmchart directory and run the following command:
    helm install jcnr .
    NAME: jcnr
    LAST DEPLOYED: Fri Sep 22 06:04:33 2023
    NAMESPACE: default
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
  12. Confirm Juniper Cloud-Native Router deployment.
    helm ls

    Sample output:

    NAME		NAMESPACE	REVISION		UPDATED                                	STATUS  		CHART                  	APP VERSION
    jcnr		default  	1       		2023-09-22 06:04:33.144611017 -0400 EDT	deployed		jcnr-23.3.0				23.3.0
    

Verify Installation

SUMMARY This section enables you to confirm a successful JCNR deployment.
  1. Verify the state of the JCNR pods by issuing the kubectl get pods -A command.
    The output of the kubectl command shows all of the pods in the Kubernetes cluster across all namespaces. A successful deployment means that all pods are in the Running state. In the following example, the Juniper Cloud-Native Router pods are those in the contrail-deploy, contrail, and jcnr namespaces:
    kubectl get pods -A
    NAMESPACE         NAME                                      READY   STATUS    RESTARTS         AGE
    contrail-deploy   contrail-k8s-deployer-579cd5bc74-g27gs    1/1     Running   0                103s
    contrail          contrail-vrouter-masters-lqjqk            3/3     Running   0                87s
    jcnr              kube-crpd-worker-sts-0                    1/1     Running   0                103s
    jcnr              syslog-ng-ds5qd                           1/1     Running   0                103s
    kube-system       calico-kube-controllers-5f4fd8666-m78hk   1/1     Running   0                4h2m
    kube-system       calico-node-28w98                         1/1     Running   0                86d
    kube-system       coredns-54bf8d85c7-vkpgs                  1/1     Running   0                3h8m
    kube-system       dns-autoscaler-7944dc7978-ws9fn           1/1     Running   0                86d
    kube-system       kube-apiserver-ix-esx-06                  1/1     Running   0                86d
    kube-system       kube-controller-manager-ix-esx-06         1/1     Running   0                86d
    kube-system       kube-multus-ds-amd64-jl69w                1/1     Running   0                86d
    kube-system       kube-proxy-qm5bl                          1/1     Running   0                86d
    kube-system       kube-scheduler-ix-esx-06                  1/1     Running   0                86d
    kube-system       nodelocaldns-bntfp                        1/1     Running   0                86d
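    To narrow the output to the cloud-native router pods only, you can filter on the namespaces shown above, for example:
    kubectl get pods -A | grep -E 'contrail|jcnr'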
  2. Verify the JCNR daemonsets by issuing the kubectl get ds -A command.

    Use the kubectl get ds -A command to get a list of daemonsets in all namespaces. The JCNR daemonsets are those in the contrail and jcnr namespaces.

    kubectl get ds -A
    NAMESPACE     NAME                       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR              AGE
    contrail      contrail-vrouter-masters   1         1         1       1            1           <none>                     90m
    contrail      contrail-vrouter-nodes     0         0         0       0            0           <none>                     90m
    jcnr          syslog-ng                  1         1         1       1            1           <none>                     90m
    kube-system   calico-node                1         1         1       1            1           kubernetes.io/os=linux     86d
    kube-system   kube-multus-ds-amd64       1         1         1       1            1           kubernetes.io/arch=amd64   86d
    kube-system   kube-proxy                 1         1         1       1            1           kubernetes.io/os=linux     86d
    kube-system   nodelocaldns               1         1         1       1            1           kubernetes.io/os=linux     86d
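    You can also wait for an individual JCNR daemonset to finish rolling out, for example (assuming the daemonset names and namespaces shown above):
    kubectl rollout status ds/syslog-ng -n jcnr
    kubectl rollout status ds/contrail-vrouter-masters -n contrail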
    
  3. Verify the JCNR statefulsets by issuing the kubectl get statefulsets -A command.

    The command output lists the statefulsets in all namespaces. The JCNR statefulset is kube-crpd-worker-sts in the jcnr namespace.

    kubectl get statefulsets -A
    NAMESPACE   NAME                   READY   AGE
    jcnr        kube-crpd-worker-sts   1/1     27m
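    You can also wait for the cRPD statefulset to report readiness, for example (assuming the statefulset name and namespace shown above):
    kubectl rollout status statefulset/kube-crpd-worker-sts -n jcnr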
  4. Verify that the cRPD is licensed and has the appropriate configuration.
    1. See the Access cRPD CLI section for instructions on accessing the cRPD CLI.
    2. Once you have accessed the cRPD CLI, issue the show system license command in CLI mode to view the system licenses. For example:
      root@jcnr-01:/# cli
      root@jcnr-01> show system license
      License usage:
                                       Licenses     Licenses    Licenses    Expiry
        Feature name                       used    installed      needed
        containerized-rpd-standard            1        1           0    2024-09-20 16:59:00 PDT
       
      Licenses installed:
        License identifier: 85e5229f-0c64-0000-c10e4-a98c09ab34a1
        License SKU: S-CRPD-10-A1-PF-5
        License version: 1
        Order Type: commercial
        Software Serial Number: 1000098711000-iHpgf
        Customer ID: Juniper Networks Inc.
        License count: 15000
        Features:
          containerized-rpd-standard - Containerized routing protocol daemon with standard features
            date-based, 2022-08-21 17:00:00 PDT - 2027-09-20 16:59:00 PDT
    3. Issue the show configuration | display set command in CLI mode to view the cRPD default and custom configuration. The output depends on your custom configuration and the JCNR deployment mode. A non-interactive alternative using kubectl exec is sketched after these sub-steps.
      root@jcnr-01# cli          
      root@jcnr-01> show configuration | display set 
    4. Type the exit command to exit from the pod shell.
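    As an alternative to the interactive shell, you can run the same show commands directly with kubectl exec, for example (a sketch, assuming the kube-crpd-worker-sts-0 pod name shown earlier and the jcnr namespace; adjust to match your deployment):
    kubectl exec -n jcnr kube-crpd-worker-sts-0 -- cli -c "show system license"
    kubectl exec -n jcnr kube-crpd-worker-sts-0 -- cli -c "show configuration | display set"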
  5. Verify the vRouter interfaces configuration
    1. See the Access vRouter CLI section for instructions on accessing the vRouter CLI.
    2. Once you have accessed the vRouter CLI, issue the vif --list command to view the vRouter interfaces. The output depends on the JCNR deployment mode and configuration. An example for an L3 mode deployment, with one fabric interface configured, is provided below. A single-interface variant is sketched after these sub-steps.
      $ vif --list 
      
      Vrouter Interface Table
      
      Flags: P=Policy, X=Cross Connect, S=Service Chain, Mr=Receive Mirror
             Mt=Transmit Mirror, Tc=Transmit Checksum Offload, L3=Layer 3, L2=Layer 2
             D=DHCP, Vp=Vhost Physical, Pr=Promiscuous, Vnt=Native Vlan Tagged
             Mnp=No MAC Proxy, Dpdk=DPDK PMD Interface, Rfl=Receive Filtering Offload, Mon=Interface is Monitored
             Uuf=Unknown Unicast Flood, Vof=VLAN insert/strip offload, Df=Drop New Flows, L=MAC Learning Enabled
             Proxy=MAC Requests Proxied Always, Er=Etree Root, Mn=Mirror without Vlan Tag, HbsL=HBS Left Intf
             HbsR=HBS Right Intf, Ig=Igmp Trap Enabled, Ml=MAC-IP Learning Enabled, Me=Multicast Enabled
      
      vif0/0      Socket: unix MTU: 1514
                  Type:Agent HWaddr:00:00:5e:00:01:00
                  Vrf:65535 Flags:L2 QOS:-1 Ref:3
                  RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0
                  RX packets:0  bytes:0 errors:0
                  TX packets:0  bytes:0 errors:0
                  Drops:0
      
      vif0/1      PCI: 0000:5a:02.1 (Speed 10000, Duplex 1) NH: 6 MTU: 9000
                  Type:Physical HWaddr:ba:9c:0f:ab:e2:c9 IPaddr:0.0.0.0
                  DDP: OFF SwLB: ON
                  Vrf:0 Mcast Vrf:0 Flags:L3L2Vof QOS:0 Ref:12
                  RX port   packets:66 errors:0
                  RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0
                  Fabric Interface: 0000:5a:02.1  Status: UP  Driver: net_iavf
                  RX packets:66  bytes:5116 errors:0
                  TX packets:0  bytes:0 errors:0
                  Drops:0
      
      vif0/2      PMD: eno3v1 NH: 9 MTU: 9000
                  Type:Host HWaddr:ba:9c:0f:ab:e2:c9 IPaddr:0.0.0.0
                  DDP: OFF SwLB: ON
                  Vrf:0 Mcast Vrf:65535 Flags:L3L2DProxyEr QOS:-1 Ref:13 TxXVif:1 
                  RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0
                  RX packets:0  bytes:0 errors:0
                  TX packets:66  bytes:5116 errors:0
                  Drops:0
                  TX queue  packets:66 errors:0
                  TX device packets:66  bytes:5116 errors:0
    3. Type the exit command to exit the pod shell.
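    As an additional check, you can inspect a single vRouter interface rather than the full table by passing its index to vif, for example (a sketch, assuming interface vif0/1 from the sample output above):
    vif --get 1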