Sample Configuration Files

20-Apr-23

Read this section to find sample YAML configuration files for use when you deploy Juniper Cloud-Native Router. These YAML files determine the features and functions available in the cloud-native router by controlling the deployment. Sample YAML files for workload configuration, which control the workload functions, are also included.

Use these sample files to understand the configuration options available when you deploy Juniper Cloud-Native Router. The workload configuration files show how to configure trunk and access interfaces and how to assign VLANs to each interface type. Each file contains explanatory comments that start with a hash mark (#).

We've included the following sample configuration files:

  • values.yaml

    This is the main values.yaml file. The TAR file supplies three other values.yaml files, one for each of the installation components: jcnr-cni, jcnr-vrouter, and syslog-ng.

    If there are conflicting settings between the individual values.yaml files and the main values.yaml file, the settings in the main values.yaml file take precedence.

    ####################################################################
    #                 Common Configuration (global vars)               #
    ####################################################################
    global:
      registry: enterprise-hub.juniper.net/
      # uncomment below if all images are available in the same path; it will 
      # take precedence over "repository" paths under "common" section below
      repository: jcnr-container-prod/
    
      # uncomment below if you are using a private registry that needs authentication
      # registryCredentials - Base64 representation of your Docker registry credentials
      # secretName - Name of the Secret object that will be created
      #imagePullSecret:
        #registryCredentials:
        #secretName: regcred
    
      common:
        vrouter: 
          #repository: atom-docker/cn2/bazel-build/dev/
          tag: R23.1-282
        crpd:
          #repository: junos-docker-local/warthog/
          tag: 23.1R1.8
        jcnrcni:
          #repository: junos-docker-local/warthog/
          tag: 23.1-20230320-56f952d
    
      # defines the log severity. Possible options: DEBUG, INFO, WARN, ERR
      log_level: "INFO"
    
      # "log_path": this directory will contain various jcnr related descriptive logs 
      # such as contrail-vrouter-agent.log, contrail-vrouter-dpdk.log etc.               
      log_path: "/var/log/jcnr/"
      # "syslog_notifications": absolute path to the file that will contain syslog-ng 
      # generated notifications in json format
      syslog_notifications: "/var/log/jcnr/jcnr_notifications.json"
    
      # mode in which jcnr will operate; possible options include "l2" or "l3"
      mode: "l2"
    
      # override default cni path with a path of your choice e.g. /var/opt/cni/bin
      # the default path is /opt/cni/bin if cni_bin_dir is not specified
      #cni_bin_dir: /var/opt/cni/bin
    
      ####################################################################
      #                            L2 PARAMS                             #
      ####################################################################
    
      # no-local-switching would prevent the local CE ports from switching packets to each other 
      # and any unknown unicast packets received from a CE will be flooded only to core/PE facing interfaces 
      # noLocalSwitching: [700, 800]  - You may provide a single vlan id, multiple vlan ids or a range 
      # no-local-switching: true      - Used to override default no-local-switching behavior (for trunk interfaces only)
      #noLocalSwitching: [700]
    
      # fabricInterface: NGDU or ToR-side interface; all traffic types are 
      # expected. interface_mode is always trunk in this mode
      fabricInterface:
      - bond0:
          interface_mode: trunk
          vlan-id-list: [100, 200, 300, 700-705]
          storm-control-profile: rate_limit_pf1
          #native-vlan-id: 100
          #no-local-switching: true
    
      # fabricWorkloadInterface: RU-side interfaces; only management/control 
      # traffic is expected. interface mode can be trunk or access
      # NOTE: only one vlan can be specified in case of access interfaces 
      # (as opposed to multiple vlans in trunk mode)
      fabricWorkloadInterface:
      - enp59s0f1v0:
          interface_mode: access
          vlan-id-list: [700]
      #- enp59s0f1v1:
      #    interface_mode: trunk
      #    vlan-id-list: [800, 900]
    
    jcnr-vrouter:
      # restoreInterfaces: setting this to true will restore the interfaces 
      # back to their original state in case vrouter pod crashes or restarts
      restoreInterfaces: false
    
      # bond interface configurations
      bondInterfaceConfigs:
        - name: "bond0"
          mode: 1             # ACTIVE_BACKUP MODE
          slaveInterfaces:
          - "enp59s0f0v0"
          - "enp59s0f0v1"     
    
      # MTU for all physical interfaces (all VFs and PFs)
      mtu: "9000"
    
      # vrouter fwd core mask
      # if qos is enabled, you will need to allocate 4 CPU cores (primary and siblings)
      cpu_core_mask: "2,3,22,23"
    
      # rate limit profiles for bum traffic on fabric interfaces in bytes per second
      stormControlProfiles:
        rate_limit_pf1:
          bandwidth:
            level: 0
        #rate_limit_pf2:
        #  bandwidth:
        #    level: 0
    
      # Set ddp to true to enable Dynamic Device Personalization (DDP) 
      # It provides datapath optimization at NIC for traffic like GTPU, SCTP etc.
      ddp: true
    
      # Set to true or false to enable or disable QoS. Note: QoS is not supported on the X710 NIC.
      qosEnable: false
    
      # core pattern to denote how the core file will be generated
      # if left empty, JCNR pods will not overwrite the default pattern
      corePattern: ""
    
      # path for the core file; vrouter considers /var/crashes as default value if not specified 
      coreFilePath: /var/crash
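    The precedence rule described above behaves like a recursive map merge in which the main values.yaml wins on conflicting keys. The following Python sketch is illustrative only (it is not Helm's actual implementation, and the keys shown are hypothetical):

```python
# Illustrative sketch of the documented precedence rule: settings in the
# main values.yaml override the same keys in a component's own values.yaml,
# while keys set in only one of the files are kept.

def merge_values(component: dict, main: dict) -> dict:
    """Recursively merge two value trees; `main` wins on conflicting keys."""
    merged = dict(component)
    for key, value in main.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_values(merged[key], value)
        else:
            merged[key] = value
    return merged

# Hypothetical values, for illustration only.
component_values = {"log_level": "DEBUG", "mtu": "9000"}
main_values = {"log_level": "INFO"}

effective = merge_values(component_values, main_values)
print(effective)  # {'log_level': 'INFO', 'mtu': '9000'}
```

    Here log_level comes from the main file while mtu, set only in the component file, survives unchanged.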
  • values_L3.yaml

    The values_L3.yaml file controls installation and operation parameters of the cloud-native router when you deploy in L3 mode.

    Note that common configuration parameters exist in both values_L3.yaml and the main values.yaml file. Any values not set in values_L3.yaml are taken from the common configuration parameters section of the main values.yaml file.

    # This is a sample values.yaml file to install JCNR in L3 mode
    # Install by overriding values.yaml with this file e.g. 
    # helm install jcnr -f values_L3.yaml 
    # Please note the overriding file does not replace values.yaml, i.e. any values 
    # that are not present in this file will be taken from the original values.yaml 
    # e.g. if global.repository is commented in values_L3.yaml and uncommented in 
    # values.yaml, then the value in values.yaml is still considered 
    #
    ####################################################################
    #                 Common Configuration (global vars)               #
    ####################################################################
    global:
      registry: enterprise-hub.juniper.net/
      # uncomment below if all images are available in the same path; it will 
      # take precedence over "repository" paths under "common" section below
      repository: jcnr-container-prod/
    
      # uncomment below if you are using a private registry that needs authentication
      # registryCredentials - Base64 representation of your Docker registry credentials
      # secretName - Name of the Secret object that will be created
      #imagePullSecret:
        #registryCredentials:
        #secretName: regcred
    
      common:
        vrouter: 
          #repository: atom-docker/cn2/bazel-build/dev/
          tag: R23.1-282
        crpd:
          #repository: junos-docker-local/warthog/
          tag: 23.1R1.8
        jcnrcni:
          #repository: junos-docker-local/warthog/
          tag: 23.1-20230320-56f952d
    
      # defines the log severity. Possible options: DEBUG, INFO, WARN, ERR
      log_level: "INFO"
    
      # "log_path": this directory will contain various jcnr related descriptive logs 
      # such as contrail-vrouter-agent.log, contrail-vrouter-dpdk.log etc.               
      log_path: "/var/log/jcnr/"
      # "syslog_notifications": absolute path to the file that will contain syslog-ng 
      # generated notifications in json format
      syslog_notifications: "/var/log/jcnr/jcnr_notifications.json"
    
      # mode in which jcnr will operate; possible options include "l2" or "l3"
      mode: "l3"
    
      # nodeAffinity: Can be used to inject nodeAffinity for vRouter, cRPD and syslog-ng pods
      # You may label the nodes where you wish to deploy JCNR and inject affinity accordingly
      #nodeAffinity:
      #- key: node-role.kubernetes.io/worker
      #  operator: Exists
      #- key: node-role.kubernetes.io/master
      #  operator: DoesNotExist  
    
      # override default cni path with a path of your choice e.g. /var/opt/cni/bin
      # the default path is /opt/cni/bin if cni_bin_dir is not specified
      #cni_bin_dir: /var/opt/cni/bin
    
    jcnr-vrouter:
      # vrouter fwd core mask
      cpu_core_mask: "2,3"
    
      # set multinode to true if you have more than one node in your Kubernetes cluster 
      # (master + worker) and you want to run vrouter in both master and worker nodes
      #multinode: false
    
      # nodeSelector: key-value pairs that restrict vrouter installation to specific nodes; you can specify multiple pairs.
      # Example: nodeSelector: {key1: value1}
      #nodeSelector:
      #  key1: value1
      #  key2: value2
    
      #nodeSelector: {}
    
      # contrail vrouter vhost0 binding interface on the host
      vrouter_dpdk_physical_interface: "eth2"
    
      # uio driver will be vfio-pci or uio_pci_generic
      vrouter_dpdk_uio_driver: "vfio-pci"
    
      vhost_interface_ipv4: ""
    
      vhost_interface_ipv6: ""
    
      # vrouter gateway IP for IPv4
      vhost_gateway_ipv4: ""    # if not provided, vrouter picks up the gateway IP from the kernel routing table
    
      # vrouter gateway IP for IPv6
      vhost_gateway_ipv6: ""   # if not provided, vrouter picks up the gateway IP from the kernel routing table
    
      # core pattern to denote how the core file will be generated
      # if left empty, JCNR pods will not overwrite the default pattern
      corePattern: ""
    
      # path for the core file; vrouter considers /var/crashes as default value if not specified 
      coreFilePath: /var/crash
    
    jcnr-cni:
      # data plane: defaults to dpdk for the vrouter case, linux for the kernel module
      dataplane: dpdk
    
      # set to true only in a development environment where master and worker run on a single node
      standalone: false
    
      # enable this field if cRPD needs to run on the master node as a Route Reflector (RR)
      cRPD_RR:
        enabled: false
    
      networkAttachmentDefinitionName: jcnr  # name used for the default NAD and VRF; if you change it, the NAD and VRF are created with the new name
      # reference this NAD name and VRF name in your pod YAML
    
      vrfTarget: 10:10  # vrfTarget used for the default NAD
    
      # In JCNR, Calico runs with the default BGP port 179, so cRPD's BGP port must differ; change it to 178
      BGPListenPort: 178
    
      # leave this at the default 179 when cRPD connects to an MX or another router; if the MX connects to JCNR, configure the MX-to-cRPD BGP port as 178
      BGPConnectPort: 179
    
      # If the master node is used as an RR, this address must match the master node's IPv4 loopback address.
      BGPIPv4Neighbor: 100.1.1.2
    
      # If the master node is used as an RR, this address must match the master node's IPv6 loopback address.
      BGPIPv6Neighbor: abcd::2
    
      SRGBStartLabel: "400000"
    
      SRGBIndexRange: "4000"
    
      # You can add configuration for multiple master nodes by copying the node configuration below once per node.
      # Use a unique name per node host name, in the format node-<actual-node-name>.json, with a unique IP address.
      masterNodeConfig:
        node-masternode1.json: |
          {
            "ipv4LoopbackAddr":"100.1.1.2",
            "ipv6LoopbackAddr":"abcd::2",
            "isoLoopbackAddr":"49.0004.1000.0000.0000.00",
            "srIPv4NodeIndex":"2002",
            "srIPv6NodeIndex":"3002"
          }
    
      # You can add configuration for multiple worker nodes by copying the node configuration below once per node.
      # Use a unique name per node host name, in the format node-<actual-node-name>.json, with a unique IP address.
      workerNodeConfig:
        node-workernode1.json: |
          {
            "ipv4LoopbackAddr":"100.1.1.3",
            "ipv6LoopbackAddr":"abcd::3",
            "isoLoopbackAddr":"49.0004.1000.0000.0001.00",
            "srIPv4NodeIndex":"2003",
            "srIPv6NodeIndex":"3003"
          }
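    The masterNodeConfig and workerNodeConfig entries embed per-node JSON as strings keyed by file name. A quick check like the following (a hypothetical helper, not part of the JCNR tooling) can catch a malformed entry before you run helm install:

```python
import json

# Hypothetical validator: checks that each node entry follows the
# node-<node-name>.json naming convention, parses as JSON, and carries
# the loopback/SR fields shown in the sample above.
REQUIRED = {"ipv4LoopbackAddr", "ipv6LoopbackAddr", "isoLoopbackAddr",
            "srIPv4NodeIndex", "srIPv6NodeIndex"}

def check_node_configs(entries: dict) -> list:
    problems = []
    for filename, raw in entries.items():
        if not (filename.startswith("node-") and filename.endswith(".json")):
            problems.append(f"{filename}: name must match node-<node-name>.json")
        try:
            data = json.loads(raw)
        except json.JSONDecodeError as exc:
            problems.append(f"{filename}: invalid JSON ({exc})")
            continue
        missing = REQUIRED - data.keys()
        if missing:
            problems.append(f"{filename}: missing keys {sorted(missing)}")
    return problems

# The workerNodeConfig entry from the sample file above.
worker = {
    "node-workernode1.json": '{"ipv4LoopbackAddr":"100.1.1.3",'
    '"ipv6LoopbackAddr":"abcd::3",'
    '"isoLoopbackAddr":"49.0004.1000.0000.0001.00",'
    '"srIPv4NodeIndex":"2003","srIPv6NodeIndex":"3003"}'
}
print(check_node_configs(worker))  # [] when every entry is well formed
```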
  • jcnr-vrouter specific values.yaml

    This values.yaml file is specific to the jcnr-vrouter pod. It is located under the Juniper_Cloud_Native_Router_<release-number>/helmchart/charts/jcnr-vrouter directory. If you enter any values in this file that conflict with values in the main values.yaml file, the values in the main values.yaml file take precedence.

    # # This is a YAML-formatted file.
    # # Declare variables to be passed into your templates.
    
    common:
      registry: svl-artifactory.juniper.net/
      repository: atom-docker/cn2/bazel-build/dev/
    
    # anchor tag for vrouter container images 
    vrouter-tag: &vrouter_tag JCNR-23.1-282
    
    contrail_init:
      image: contrail-init
      tag: *vrouter_tag
      pullPolicy: IfNotPresent
    
    contrail_vrouter_kernel_init_dpdk:
      image: contrail-vrouter-kernel-init-dpdk
      tag: *vrouter_tag
      pullPolicy: IfNotPresent
    
    contrail_vrouter_agent:
      image: contrail-vrouter-agent
      tag: *vrouter_tag
      pullPolicy: IfNotPresent
    
    contrail_vrouter_agent_dpdk:
      image: contrail-vrouter-dpdk
      tag: *vrouter_tag
      pullPolicy: IfNotPresent
      resources:
        limits:
          memory: 4Gi
          hugepages-1Gi: 4Gi          # Hugepages must be enabled with default size as 1G; minimum 4Gi to be used 
        requests:
          memory: 4Gi
          hugepages-1Gi: 4Gi
    
    contrail_vrouter_telemetry_exporter:
      image: contrail-telemetry-exporter
      tag: *vrouter_tag
      pullPolicy: IfNotPresent
    
    contrail_k8s_deployer:
      image: contrail-k8s-deployer
      tag: *vrouter_tag
      pullPolicy: IfNotPresent
    
    contrail_k8s_crdloader:
      image: contrail-k8s-crdloader
      tag: *vrouter_tag
      pullPolicy: IfNotPresent
    
    contrail_k8s_applier:
      image: contrail-k8s-applier
      tag: *vrouter_tag
      pullPolicy: IfNotPresent
    
    busyBox:
      image: busybox
      tag:  "latest"
      pullPolicy: IfNotPresent
    
    vrouter_name: master
    
    # uio driver will be vfio-pci or uio_pci_generic
    vrouter_dpdk_uio_driver: "vfio-pci"      
    
    # MTU for all physical interfaces (all VFs and PFs)
    mtu: "9000"
    
    vrouter_log_path: "/var/log/jcnr/"
    
    # Defines the log severity. Possible options: DEBUG, INFO, WARN, ERR
    log_level: "INFO"
    
    dpdkCommandAdditionalArgs: "--yield_option 0"
    
    # Set ddp to true to enable Dynamic Device Personalization (DDP) 
    # It provides datapath optimization at NIC for traffic like GTPU, SCTP etc.
    ddp: true
    
    # vrouter fwd core mask
    cpu_core_mask: "2,3"
    
    # vrouter service thread mask
    service_core_mask: ""
    
    # vrouter control thread mask
    dpdk_ctrl_thread_mask: ""
    
    # memory allocated to DPDK per NUMA socket
    dpdk_mem_per_socket: "1024"
    
    # L3 disabled for switching mode
    jcnr_mode: "l2_only"
    
    # global MAC table size - we recommend leaving this at the default value
    mac_table_size: "10240" 

    # timeout in seconds for aging out MAC table entries
    mac_table_ageout: 60
    
    # parameters for vRouter livenessProbe
    livenessProbe:
      initialDelaySeconds: 10
      periodSeconds: 20
      timeoutSeconds: 5
      failureThreshold: 3
      successThreshold: 1
    
    # parameters for vRouter startupProbe
    startupProbe:
      initialDelaySeconds: 10
      periodSeconds: 20
      timeoutSeconds: 5
      failureThreshold: 3
      successThreshold: 1
    
    # setting this to true will restore the interfaces back to 
    # their original state in case vrouter pod crashes or restarts
    restoreInterfaces: false
    
    # ToR-side interface; all traffic types are expected
    fabricInterface:
    - enp4s0f0vf0
    - bond0
    
    # RU-side interfaces; only management/control traffic is expected
    fabricWorkloadInterface:
    - enp4s0f1vf0
    
    # bond interface configurations
    bondInterfaceConfigs:
      - name: "bond0"
        mode: 1                             # ACTIVE_BACKUP MODE
        slaveInterfaces:
        - "enp1s0f1"
        - "enp2s0f1"
    
    # rate limit for broadcast/multicast traffic on fabric interfaces in bytes per second
    fabricBMCastRateLimit: 0
    
  • jcnr-cni specific values.yaml

    This values.yaml file is specific to the jcnr-cni pod. It is located under the Juniper_Cloud_Native_Router_<release-number>/helmchart/charts/jcnr-cni directory. If you enter any values in this file that conflict with values in the main values.yaml file, the values in the main values.yaml file take precedence.

    # Default values for jcnr.
    # This is a YAML-formatted file.
    # Declare variables to be passed into your templates.
    
    common:
      registry: svl-artifactory.juniper.net/
      repository: junos-docker-local/warthog/
    
    crpdImage:
      image: crpd
      tag: "23.1R1.8"
      pullPolicy: IfNotPresent
    
    jcnrCNIImage:
      image: jcnr-cni
      tag: "23.1-20230320-56f952d"
      pullPolicy: IfNotPresent
    
    crpdConfigGeneratorImage:
      image: crpdconfig-generator
      tag: "v3"
      pullPolicy: IfNotPresent
    
    busyBox:
      image: busybox
      tag:  "latest"
      pullPolicy: IfNotPresent
    
    
    # data plane: defaults to dpdk for the vrouter case, linux for the kernel module
    dataplane: dpdk
    
    networkAttachmentDefinitionName: vswitch
    
    crpd_log_path: "/var/log/jcnr/"
    
    # Defines the log severity. Possible options: panic, fatal, error, 
    # warn or warning, info, debug, trace
    
    log_level: "info"
    
    # parameters for cRPD livenessProbe
    livenessProbe:
      initialDelaySeconds: 5
      periodSeconds: 10
      timeoutSeconds: 5
      failureThreshold: 3
      successThreshold: 1
    
    # parameters for cRPD startupProbe
    startupProbe:
      initialDelaySeconds: 5
      periodSeconds: 10
      timeoutSeconds: 5
      failureThreshold: 3
      successThreshold: 1
    
    crpdConfigs:
      interface_groups:
        fabricInterface: # TOR side interface, expected all types of traffic
          - bond0:
              interface_mode: trunk # interface mode is always trunk for fabricInterface
              vlan-id-list: [100, 200, 700] # vlan-id-lists
          - enp4s0f0vf0:
              interface_mode: trunk # interface mode is always trunk for fabricInterface
              vlan-id-list: [300, 500, 3001, 3002] # vlan-id-lists
          - enp4s0f0vf1:
              interface_mode: trunk # interface mode is always trunk for fabricInterface
              vlan-id-list: [3003, 3004, 3201-3250, 900] # vlan-id-lists
          - enp4s0f0vf2:
              interface_mode: trunk # interface mode is always trunk for fabricInterface
              vlan-id-list: [3251-3255] # vlan-id-lists
        fabricWorkloadInterface:  # RU side interfaces, expected traffic is only management/control traffic
          - enp4s0f1vf0:
              interface_mode: access # interface mode is always access for fabricWorkloadInterface
              vlan-id-list: [700] # vlan-id-list must always be a single value for fabricWorkloadInterface
          - enp4s1f1vf0:
              interface_mode: access # interface mode is always access for fabricWorkloadInterface
              vlan-id-list: [900] # vlan-id-list must always be a single value for fabricWorkloadInterface
    
      routing_instances:
        - vswitch:
            instance-type: virtual-switch
    
  • nad-dpdk_trunk_vlan_3002.yaml

    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: nad-vswitch-bd3002
    spec:
      config: '{
        "cniVersion":"0.4.0",
        "name": "nad-vswitch-bd3002",
        "capabilities":{"ips":true},
        "plugins": [
          {
            "type": "jcnr",
            "args": {
              "instanceName": "vswitch",
              "instanceType": "virtual-switch",
              "bridgeDomain": "bd3002",
              "bridgeVlanId": "3002",
              "dataplane":"dpdk",
              "mtu": "9000"
            },
            "ipam": {
              "type": "static",
              "capabilities":{"ips":true},
              "addresses":[
                {
                  "address":"2001:db8:3002::10.2.0.1/64",
                  "gateway":"2001:db83002::10.2.0.254"
                },
                {
                  "address":"10.2.0.1/24",
                  "gateway":"10.2.0.254"
                }
              ]
            },
            "kubeConfig":"/etc/kubernetes/kubelet.conf"
          }
        ]
      }'
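    Because the NetworkAttachmentDefinition's spec.config field is a JSON document embedded in a YAML string, a typo there typically surfaces only when a pod is created. An illustrative check like the following (not part of any JCNR tool; the trimmed config below is hypothetical) parses the string up front and confirms the jcnr plugin entry is present:

```python
import json

# A trimmed-down version of the spec.config string from the NAD above,
# shown here only to illustrate the validation step.
nad_config = '''{
  "cniVersion": "0.4.0",
  "name": "nad-vswitch-bd3002",
  "capabilities": {"ips": true},
  "plugins": [
    {"type": "jcnr",
     "args": {"instanceName": "vswitch", "bridgeDomain": "bd3002"}}
  ]
}'''

parsed = json.loads(nad_config)  # raises a JSONDecodeError if malformed
plugin_types = [p["type"] for p in parsed["plugins"]]
print(plugin_types)  # ['jcnr']
```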
    
  • nad-kernel_access_vlan_3001.yaml

    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: pod1-vswitch-bd3001-1
    spec:
      config: '{
        "cniVersion":"0.4.0",
        "name": "pod1-vswitch-bd3001-1",
        "capabilities":{"ips":true},
        "plugins": [
          {
            "type": "jcnr",
            "args": {
              "instanceName": "vswitch",
              "instanceType": "virtual-switch",
              "bridgeDomain": "bd3001",
              "bridgeVlanId": "3001",
              "dataplane":"dpdk",
              "mtu": "9000",
              "interfaceType":"veth"
            },
            "ipam": {
              "type": "static",
              "capabilities":{"ips":true},
              "addresses":[
                {
                  "address":"2001:db8:3001::10.1.0.1/64",
                  "gateway":"2001:db8:3001::10.1.0.254"
                },
                {
                  "address":"10.1.0.1/24",
                  "gateway":"10.1.0.254"
                }
              ]
            },
            "kubeConfig":"/etc/kubernetes/kubelet.conf"
          }
        ]
      }'
    
  • nad-odu-bd3003-sub.yaml

    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: vswitch-bd3003-sub
    spec:
      config: '{
        "cniVersion":"0.4.0",
        "name": "vswitch-bd3003-sub",
        "capabilities":{"ips":true},
        "plugins": [
          {
            "type": "jcnr",
            "args": {
              "instanceName": "vswitch",
              "instanceType": "virtual-switch",
              "bridgeDomain": "bd3003",
              "bridgeVlanId": "3003",
              "parentInterface":"net1",
              "interface":"net1.3003",
              "dataplane":"dpdk"
            },
            "ipam": {
              "type": "static",
              "capabilities":{"ips":true},
              "addresses":[
                {
                  "address":"10.3.0.1/24",
                  "gateway":"10.3.0.254"
                },
                {
                  "address":"2001:db8:3003::10.3.0.1/120",
                  "gateway":"2001:db8:3003::10.3.0.1"
                }
              ]
            },
            "kubeConfig":"/etc/kubernetes/kubelet.conf"
          }
        ]
      }'
    
  • nad-odu-bd3004-sub.yaml

    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: vswitch-bd3004-sub
    spec:
      config: '{
        "cniVersion":"0.4.0",
        "name": "vswitch-bd3004-sub",
        "capabilities":{"ips":true},
        "plugins": [
          {
            "type": "jcnr",
            "args": {
              "instanceName": "vswitch",
              "instanceType": "virtual-switch",
              "bridgeDomain": "bd3004",
              "bridgeVlanId": "3004",
              "parentInterface":"net1",
              "interface":"net1.3004",
              "dataplane":"dpdk"
    
            },
            "ipam": {
              "type": "static",
              "capabilities":{"ips":true},
              "addresses":[
                {
                  "address":"30.4.0.1/24",
                  "gateway":"30.4.0.254"
                },
                {
                  "address":"2001:db8:3004::10.4.0.1/120",
                  "gateway":"2001:db8:3004::10.4.0.1"
                }
              ]
            },
            "kubeConfig":"/etc/kubernetes/kubelet.conf"
          }
        ]
      }'
    
  • odu-virtio-subinterface.yaml

    apiVersion: v1
    kind: Pod
    metadata:
      name: odu-subinterface-1
      annotations:
        k8s.v1.cni.cncf.io/networks: |
          [
            {
              "name": "vswitch-bd3003-sub"
            },
            {
              "name": "vswitch-bd3004-sub"
            }
          ]
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - 5d7s39.englab.juniper.net
      containers:
        - name: odu-subinterface
          image: svl-artifactory.juniper.net/junos-docker-local/warthog/pktgen19116:subint
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: false
          resources:
            requests:
              memory: 2Gi
            limits:
              hugepages-1Gi: 2Gi
          env:
            - name: KUBERNETES_POD_UID
              valueFrom:
                fieldRef:
                   fieldPath: metadata.uid
          volumeMounts:
            - name: dpdk
              mountPath: /dpdk
              subPathExpr: $(KUBERNETES_POD_UID)
            - mountPath: /dev/hugepages
              name: hugepage
      volumes:
        - name: dpdk
          hostPath:
            path: /var/run/jcnr/containers
        - name: hugepage
          emptyDir:
            medium: HugePages
    
  • pod-dpdk-trunk-vlan3002.yaml

    apiVersion: v1
    kind: Pod
    metadata:
      name: odu-trunk-1
      annotations:
        k8s.v1.cni.cncf.io/networks: nad-vswitch-bd3002
    spec:
      containers:
        - name: odu-trunk
          image: svl-artifactory.juniper.net/junos-docker-local/warthog/pktgen19116:trunk
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: true
          resources:
            requests:
              memory: 2Gi
            limits:
              hugepages-1Gi: 2Gi
          env:
            - name: KUBERNETES_POD_UID
              valueFrom:
                fieldRef:
                   fieldPath: metadata.uid
          volumeMounts:
            - name: dpdk
              mountPath: /dpdk
              subPathExpr: $(KUBERNETES_POD_UID)
            - mountPath: /dev/hugepages
              name: hugepage
      volumes:
        - name: dpdk
          hostPath:
            path: /var/run/jcnr/containers
        - name: hugepage
          emptyDir:
            medium: HugePages
    
  • pod-kernel-access-vlan-3001.yaml

    apiVersion: v1
    kind: Pod
    metadata:
      name: odu-kernel-pod-bd3001-1
      annotations:
        k8s.v1.cni.cncf.io/networks: pod1-vswitch-bd3001-1
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - 5d8s7.englab.juniper.net
      containers:
        - name: odu-kernel-pod-bd3001-1
          image: vinod-iperf3:latest
          imagePullPolicy: IfNotPresent
          command: ["/bin/bash","-c","sleep infinity"]
          securityContext:
            privileged: false
          env:
            - name: KUBERNETES_POD_UID
              valueFrom:
                fieldRef:
                   fieldPath: metadata.uid
          volumeMounts:
            - name: dpdk
              mountPath: /dpdk
              subPathExpr: $(KUBERNETES_POD_UID)
      volumes:
        - name: dpdk
          hostPath:
            path: /var/run/jcnr/containers
  • L3_nad-net1.yaml

    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: net1
    spec:
      config: '{
        "cniVersion":"0.4.0",
        "name": "net1",
        "type": "jcnr",
        "args": {
          "vrfName": "net1",
          "vrfTarget": "1:11"
        },
        "kubeConfig":"/etc/kubernetes/kubelet.conf"
      }'
  • l3_odu1.yaml

    apiVersion: v1
    kind: Pod
    metadata:
      name: L3-pktgen-odu1
      annotations:
        k8s.v1.cni.cncf.io/networks: |
          [
            {
              "name": "net1",
              "interface":"net1",
              "cni-args": {
                "mac":"aa:bb:cc:dd:ee:51",
                "dataplane":"vrouter",
                "ipConfig":{
                  "ipv4":{
                    "address":"10.1.51.2/30",
                    "gateway":"10.1.51.1",
                    "routes":[
                      "10.1.51.0/30"
                    ]
                  },
                  "ipv6":{
                    "address":"2001:db8::10:1:51:2/126",
                    "gateway":"2001:db8::10:1:51:1",
                    "routes":[
                      "2001:db8::1:1:51:0/126"
                    ]
                  }
                }
              }
            },
            {
              "name": "net2",
              "interface":"net2",
              "cni-args": {
                "mac":"aa:bb:cc:dd:ee:52",
                "dataplane":"vrouter",
                "ipConfig":{
                  "ipv4":{
                    "address":"10.1.52.2/30",
                    "gateway":"10.1.52.1",
                    "routes":[
                      "10.1.52.0/30"
                    ]
                  },
                  "ipv6":{
                    "address":"2001:db8::10:1:52:2/126",
                    "gateway":"2001:db8::10:1:52:1",
                    "routes":[
                      "2001:db8::10:1:52:0/126"
                    ]
                  }
                }
              }
            }
          ]
    spec:
      containers:
        - name: L3-pktgen-odu1
          image: svl-artifactory.juniper.net/blr-data-plane/dpdk-app/dpdk:21.11
          imagePullPolicy: IfNotPresent
          command: ["/bin/bash","-c","sleep infinity"]
          securityContext:
            privileged: false
          env:
            - name: KUBERNETES_POD_UID
              valueFrom:
                fieldRef:
                   fieldPath: metadata.uid
          resources:
            requests:
              memory: 4Gi
            limits:
              hugepages-1Gi: 4Gi
          volumeMounts:
            - name: dpdk
              mountPath: /dpdk
              subPathExpr: $(KUBERNETES_POD_UID)
            - name: hugepages
              mountPath: /hugepages
      volumes:
        - name: dpdk
          hostPath:
            path: /var/run/jcnr/containers
        - name: hugepages
          emptyDir:
            medium: HugePages
    
    