Install and Verify Juniper Cloud-Native Router for GCP Deployment

SUMMARY The Juniper Cloud-Native Router (cloud-native router) uses the JCNR-Controller (cRPD) to provide control plane capabilities and the JCNR-CNI to provide a container network interface. The cloud-native router uses the DPDK-enabled vRouter to provide high-performance data plane capabilities and Syslog-NG to provide notification functions. This section explains how to install these components of the cloud-native router.

Install Juniper Cloud-Native Router Using Juniper Support Site Package

Read this section to learn the steps required to load the cloud-native router image components using Helm charts.

  1. Review the System Requirements for GCP Deployment section to ensure the setup has all the required configuration.
  2. Download the desired JCNR software package to the directory of your choice.
    You can download a package that installs JCNR only, or a package that installs JCNR together with Juniper cSRX. See JCNR Software Download Packages for a description of the available packages. If you don't want to install Juniper cSRX now, you can always install it on your working JCNR installation later.
  3. Expand the file Juniper_Cloud_Native_Router_release-number.tgz.
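    The package is a gzipped tarball, so the standard tar options apply (substitute your actual release number for release-number):

```shell
tar xzvf Juniper_Cloud_Native_Router_release-number.tgz
```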
  4. Change directory to the main installation directory.
    • If you're installing JCNR only, then:

      This directory contains the Helm chart for JCNR only.
    • If you're installing JCNR and cSRX at the same time, then:

      This directory contains the combination Helm chart for JCNR and cSRX.
    Note:

    All remaining steps in the installation assume that your current working directory is now either Juniper_Cloud_Native_Router_<release> or Juniper_Cloud_Native_Router_CSRX_<release>.
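    The directory names follow the package names, so the change-directory step can be sketched as follows (substitute your release number):

```shell
# JCNR only:
cd Juniper_Cloud_Native_Router_<release>

# JCNR and cSRX combined:
cd Juniper_Cloud_Native_Router_CSRX_<release>
```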

  5. View the contents in the current directory.
  6. Change to the helmchart directory and expand the Helm chart.
    • For JCNR only:

      The Helm chart is located in the jcnr directory.
    • For the combined JCNR and cSRX:

      The Helm chart is located in the jcnr_csrx directory.
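    Assuming the chart is delivered as a tarball inside the helmchart directory (the exact archive names may differ by release), the expansion step can be sketched as:

```shell
cd helmchart
# JCNR only; produces the jcnr directory:
tar xzvf jcnr-<release>.tgz
# Combined JCNR and cSRX; produces the jcnr_csrx directory:
tar xzvf jcnr_csrx-<release>.tgz
```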
  7. The JCNR container images are required for deployment. Choose one of the following options:
    • Configure your cluster to deploy images from the Juniper Networks enterprise-hub.juniper.net repository. See Configure Repository Credentials for instructions on how to configure repository credentials in the deployment Helm chart.

    • Configure your cluster to deploy images from the images tarball included in the downloaded JCNR software package. See Deploy Prepackaged Images for instructions on how to import images to the local container runtime.

  8. Follow the steps in Installing Your License to install your JCNR license.
  9. Enter the root password for your host server into the secrets/jcnr-secrets.yaml file at the following line:
    You must enter the password in base64-encoded format. To encode the password, create a file containing the plain-text password on a single line, then run the base64 command on that file and copy the output into secrets/jcnr-secrets.yaml.
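    A minimal sketch of the encoding step, using a placeholder password (substitute your host server's root password):

```shell
# put the plain-text root password in a file, on a single line
echo 'MyRootPassword' > rootPasswordFile
# encode without line wrapping and print the result;
# copy this output into secrets/jcnr-secrets.yaml
base64 -w 0 rootPasswordFile
```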
  10. Apply secrets/jcnr-secrets.yaml to the cluster.
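    The secrets file is applied with a standard kubectl apply:

```shell
kubectl apply -f secrets/jcnr-secrets.yaml
```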
  11. If desired, configure how cores are assigned to the vRouter DPDK containers. See Allocate CPUs to the JCNR Forwarding Plane.
  12. Customize the Helm chart for your deployment using the helmchart/jcnr/values.yaml or helmchart/jcnr_csrx/values.yaml file.

    See Customize JCNR Helm Chart for GCP Deployment for descriptions of the Helm chart configurations.

  13. Optionally, customize JCNR configuration.
    See Customize JCNR Configuration for information on creating and applying cRPD customizations.
  14. If you're installing Juniper cSRX now, then follow the procedure in Apply the cSRX License and Configure cSRX.
  15. Label the nodes where you want JCNR to be installed based on the nodeaffinity configuration (if defined in the values.yaml). For example:
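    For example, if your values.yaml defines a nodeaffinity rule keyed on a label such as key1=jcnr (the label key and value here are assumptions; use whatever your values.yaml specifies):

```shell
kubectl label nodes <node-name> key1=jcnr --overwrite
```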
  16. Deploy the Juniper Cloud-Native Router using the Helm chart.
    Navigate to the helmchart/jcnr or the helmchart/jcnr_csrx directory and run the helm install command:
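    A sketch of the deployment command; the release names jcnr and jcnr-csrx below are assumptions (the Helm release name is your choice):

```shell
# from the helmchart/jcnr directory (JCNR only):
helm install jcnr .

# or from the helmchart/jcnr_csrx directory (JCNR and cSRX):
helm install jcnr-csrx .
```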
  17. Confirm Juniper Cloud-Native Router deployment.

    Sample output:
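    One way to confirm the deployment is helm ls, which should report the release status as deployed:

```shell
helm ls
```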

Install Juniper Cloud-Native Router Via Google Cloud Marketplace

Read this section to learn the steps required to deploy the cloud-native router.

  1. Launch the Juniper Cloud-Native Router (PAYG) deployment wizard from the Google Cloud Marketplace.
  2. The table below lists the settings to be configured:

    • Deployment name: Name of your deployment.

    • Zone: GCP zone.

    • Series: N2

    • Machine Type: n2-standard-32 (32 vCPU, 16 core, 128 GB)

    • SSH-Keys: SSH key pair for Compute Engine virtual machine (VM) instances.

    • JCNR License: Base64-encoded license key. To encode the license, copy the license key into a file on your host server and issue the command:
      base64 -w 0 licenseFile
      Copy and paste the base64-encoded output into the JCNR License field.

    • cRPD Config Template: Create a config template to customize the JCNR configuration. See Customize JCNR Configuration (Google Cloud Marketplace) for a sample cRPD template. The config template must be saved in a GCP bucket as an object. Provide the gsutil URI for the object in the cRPD Config Template field.

    • cRPD Config Map: Create a config map to customize the JCNR configuration. See Customize JCNR Configuration (Google Cloud Marketplace) for a sample cRPD config map. The config map must be saved in a GCP bucket as an object. Provide the gsutil URI for the object in the cRPD Config Map field.

    • Boot disk type: Standard Persistent Disk

    • Boot disk size in GB: 50

    • Network Interfaces: Define additional network interfaces. An interface in the VPC network is available by default.
  3. Review the System Requirements for GCP Deployment section for any additional minimum system requirements. Note that these settings are preconfigured for the JCNR deployment via Google Cloud Marketplace.
  4. Click Deploy to complete the JCNR deployment.
  5. Once deployed, you can customize the JCNR Helm chart. Review the Customize JCNR Helm Chart for GCP Deployment topic for more information. After configuring the customizations, issue the helm upgrade command to deploy them.
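    A sketch of applying Helm chart customizations after the Marketplace deployment, assuming the release is named jcnr and that you run the command from the chart directory (both are assumptions; check the actual release name with helm ls):

```shell
helm upgrade jcnr .
```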

Verify Installation

SUMMARY This section describes how to confirm a successful JCNR deployment.
Note:

The output shown in this example procedure is affected by the number of nodes in the cluster. The output you see in your setup may differ in that regard.

  1. Verify the state of the JCNR pods by issuing the kubectl get pods -A command.
    The output of the kubectl command shows all of the pods in the Kubernetes cluster in all namespaces. Successful deployment means that all pods are in the running state. In this example we have marked the Juniper Cloud-Native Router Pods in bold. For example:
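    If the cluster runs many other workloads, you can narrow the listing to pods that are not yet healthy (this filters on status only, not on pod names):

```shell
kubectl get pods -A | grep -v Running
```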
  2. Verify the JCNR daemonsets by issuing the kubectl get ds -A command.

    Use the kubectl get ds -A command to get a list of daemonsets. The JCNR daemonsets are highlighted in bold text.

  3. Verify the JCNR statefulsets by issuing the kubectl get statefulsets -A command.

    The command output provides the statefulsets.

  4. Verify that cRPD is licensed and has the appropriate configurations.
    1. View the Access cRPD CLI section to access the cRPD CLI.
    2. Once you have accessed the cRPD CLI, issue the show system license command in the cli mode to view the system licenses. For example:
    3. Issue the show configuration | display set command in the cli mode to view the cRPD default and custom configuration. The output will be based on the custom configuration and the JCNR deployment mode.
    4. Type the exit command to exit from the pod shell.
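    The sub-steps above can be sketched as follows; the namespace and pod name are assumptions (see the Access cRPD CLI section for the exact procedure):

```shell
# open a shell in the cRPD pod and enter the CLI
kubectl exec -n jcnr -it <crpd-pod-name> -- cli
# then, at the cRPD CLI prompt:
#   show system license
#   show configuration | display set
#   exit
```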
  5. Verify the vRouter interfaces configuration
    1. View the Access vRouter CLI section to access the vRouter CLI.
    2. Once you have accessed the vRouter CLI, issue the vif --list command to view the vRouter interfaces. The output depends on the JCNR deployment mode and configuration. An example for an L3 mode deployment, with one fabric interface configured, is provided below:
    3. Type the exit command to exit the pod shell.
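    A sketch of the vRouter checks; the namespace and pod name are assumptions (see the Access vRouter CLI section for the exact procedure):

```shell
# open a shell in the vRouter agent pod
kubectl exec -n contrail -it <vrouter-agent-pod-name> -- bash
# then, inside the pod:
#   vif --list
#   exit
```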