Juniper Cloud-Native Router User Guide
L3 VPN Interface Configuration Example

Release: JCNR 23.3
15-Oct-23

Read this topic to learn how to add a user pod with virtio and kernel interfaces attached to an L3 VPN instance on the cloud-native router.

Overview

You can attach a user pod with virtio and kernel interfaces to an L3 VPN instance on the cloud-native router. The Juniper Cloud-Native Router must have an L3 interface configured at the time of deployment. Your high-level tasks are:

  • Define and apply a network attachment definition (NAD)—The NAD file defines the required configuration for Multus to invoke the JCNR-CNI and create a network to attach the pod interface to.

  • Define and apply a pod YAML file to your cloud-native router cluster—The pod YAML contains the pod specifications and an annotation to the network created by the JCNR-CNI.

    Note:

    Please review the Cloud-Native Router Use-Cases and Configuration Overview topic for more information on NAD and pod YAML files.

Configuration Example

  1. Here is an example NAD to create a virtio interface attached to an L3 VPN instance:
    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: vrf100
    spec:
      config: '{
        "cniVersion":"0.4.0",
        "name": "vrf100",
        "plugins": [
          {
            "type": "jcnr",
            "args": {
              "instanceName": "vrf100",
              "instanceType": "vrf",
              "vrfTarget":"100:1"
            },
            "ipam": {
              "type": "static",
              "addresses":[
                {
                  "address":"99.61.0.2/16",
                  "gateway":"99.61.0.1"
                },
                {
                  "address":"1234::99.61.0.2/120",
                  "gateway":"1234::99.61.0.1"
                }
              ]
            },
            "kubeConfig":"/etc/kubernetes/kubelet.conf"
          }
        ]
      }'
    The NAD defines a virtual routing and forwarding (VRF) instance vrf100 to which the pod's virtio interface will be attached. You must use the vrf instance type for Layer 3 VPN implementations. The NAD also defines a static IP address to be assigned to the pod interface.
  2. Apply the NAD manifest to create the network.
    kubectl apply -f nad_virtio_L3vpn.yaml 
    networkattachmentdefinition.k8s.cni.cncf.io/vrf100 created
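Because the NAD embeds its entire CNI configuration as a quoted JSON string, a stray quote or missing comma in spec.config surfaces only later, when Multus tries to attach the pod interface. As an optional sanity check (a local sketch, not part of the documented JCNR workflow), you can parse the embedded config before applying the manifest. The config below is the vrf100 example from step 1, abbreviated to a single IPAM address:

```shell
# Optional pre-apply check (hypothetical helper, not a JCNR tool):
# the NAD's spec.config field must be valid JSON, or pod attachment fails.
# Write the embedded config string to a file and parse it locally.
cat <<'EOF' > /tmp/vrf100-config.json
{
  "cniVersion": "0.4.0",
  "name": "vrf100",
  "plugins": [
    {
      "type": "jcnr",
      "args": {
        "instanceName": "vrf100",
        "instanceType": "vrf",
        "vrfTarget": "100:1"
      },
      "ipam": {
        "type": "static",
        "addresses": [
          { "address": "99.61.0.2/16", "gateway": "99.61.0.1" }
        ]
      },
      "kubeConfig": "/etc/kubernetes/kubelet.conf"
    }
  ]
}
EOF
# json.tool exits nonzero on malformed JSON, so this prints only on success.
python3 -m json.tool /tmp/vrf100-config.json > /dev/null && echo "config OK"
```

The same check applies to the vrf200 NAD in the next step.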
  3. Here is an example NAD to create a kernel interface attached to an L3 VPN instance:
    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: vrf200
    spec:
      config: '{
        "cniVersion":"0.4.0",
        "name": "vrf200",
        "plugins": [
          {
            "type": "jcnr",
            "args": {
              "instanceName": "vrf200",
              "instanceType": "vrf",
    	   "interfaceType": "veth",
              "vrfTarget":"200:1"
            },
            "ipam": {
              "type": "static",
              "addresses":[
                {
                  "address":"99.62.0.2/16",
                  "gateway":"99.62.0.1"
                },
                {
                  "address":"1234::99.62.0.2/120",
                  "gateway":"1234::99.62.0.1"
                }
              ]
            },
            "kubeConfig":"/etc/kubernetes/kubelet.conf"
          }
        ]
      }'

    The NAD defines a virtual routing and forwarding (VRF) instance vrf200 with a veth interface type to which the pod's kernel interface will be attached.

    It also defines a static IP address to be assigned to the pod interface.
  4. Apply the NAD manifest to create the network.
    kubectl apply -f nad_kernel_L3vpn.yaml 
    networkattachmentdefinition.k8s.cni.cncf.io/vrf200 created
  5. Verify the NADs are created.
    [root@jcnr-01]# kubectl get net-attach-def
    NAME                 AGE
    vrf100               8m40s
    vrf200               55s
  6. Here is an example YAML file to create a pod attached to the vrf100 and vrf200 networks:
    apiVersion: v1
    kind: Pod
    metadata:
      name:   pod1
      annotations:
        k8s.v1.cni.cncf.io/networks: vrf100, vrf200
    spec:
      containers:
        - name: pod1
          image: ubuntu:latest
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: false
          env:
            - name: KUBERNETES_POD_UID
              valueFrom:
                fieldRef:
                   fieldPath: metadata.uid
          volumeMounts:
            - name: dpdk
              mountPath: /dpdk
              subPathExpr: $(KUBERNETES_POD_UID)
      volumes:
        - name: dpdk
          hostPath:
            path: /var/run/jcnr/containers

    The pod attaches to the router instance using the k8s.v1.cni.cncf.io/networks annotation.
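With the annotation as written, Multus names the secondary interfaces in attachment order (net1 for vrf100, net2 for vrf200). Multus also supports an extended annotation form that pins each attachment to an explicit interface name inside the pod; this is standard Multus syntax rather than anything JCNR-specific, so verify it against your Multus version:

```yaml
  annotations:
    # <network>@<interface-name> pins each attachment to a fixed name
    k8s.v1.cni.cncf.io/networks: vrf100@net1, vrf200@net2
```

Explicit names can make scripts that inspect pod interfaces independent of attachment order.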

  7. Apply the pod manifest.

    [root@jcnr-01]# kubectl apply -f pod_access_mode.yaml 
    pod/pod1 created
  8. Verify the pod is running.

    [root@jcnr-01 ~]# kubectl get pods 
    NAME   READY   STATUS    RESTARTS   AGE
    pod1   1/1     Running   0          2m38s
  9. Describe the pod to verify that two secondary interfaces are created and attached to the vrf100 and vrf200 networks. (The output is trimmed for brevity.)
    [root@jcnr-01 ~]# kubectl describe pod pod1
    Name:         pod1
    Namespace:    default
    Priority:     0
    Node:         jcnr-01/10.100.20.25
    Start Time:   Mon, 26 Jun 2023 09:53:31 -0400
    Labels:       <none>
    Annotations:  cni.projectcalico.org/containerID: 6705c204abca5aeaa0241c1791ea911d57bd972336d969ac5d6a482c96348d95
                  cni.projectcalico.org/podIP: 10.233.91.100/32
                  cni.projectcalico.org/podIPs: 10.233.91.100/32
                  jcnr.juniper.net/dpdk-interfaces:
                    [
                        {
                            "name": "net1",
                            "vhost-adaptor-path": "/dpdk/vhost-net1.sock",
                            "vhost-adaptor-mode": "client",
                            "ipv4-address": "99.61.0.2/16",
                            "ipv6-address": "1234::633d:2/120",
                            "mac-address": "02:00:00:A9:B3:23"
                        }
                    ]
                  k8s.v1.cni.cncf.io/network-status:
                    [{
                        "name": "k8s-pod-network",
                        "ips": [
                            "10.233.91.100"
                        ],
                        "default": true,
                        "dns": {}
                    },{
                        "name": "default/vrf100",
                        "interface": "net1",
                        "ips": [
                            "99.61.0.2",
                            "1234::633d:2"
                        ],
                        "mac": "02:00:00:A9:B3:23",
                        "dns": {}
                    },{
                        "name": "default/vrf200",
                        "interface": "net2",
                        "ips": [
                            "99.62.0.2",
                            "1234::633e:2"
                        ],
                        "mac": "02:00:00:E0:AC:59",
                        "dns": {}
                    }]          
    ...
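The k8s.v1.cni.cncf.io/network-status annotation shown in the describe output is plain JSON, so the interface-to-address mapping can also be extracted programmatically. Here is a minimal sketch; the annotation content is copied from the sample output above, and on a live cluster you would instead fetch it with a kubectl query (for example, kubectl get pod pod1 -o jsonpath with the annotation key escaped):

```shell
# Summarize secondary interfaces from the network-status annotation.
# The JSON literal below is the sample annotation from the describe output.
python3 - <<'EOF'
import json

status = json.loads('''
[{"name": "k8s-pod-network", "ips": ["10.233.91.100"], "default": true, "dns": {}},
 {"name": "default/vrf100", "interface": "net1",
  "ips": ["99.61.0.2", "1234::633d:2"], "mac": "02:00:00:A9:B3:23", "dns": {}},
 {"name": "default/vrf200", "interface": "net2",
  "ips": ["99.62.0.2", "1234::633e:2"], "mac": "02:00:00:E0:AC:59", "dns": {}}]
''')

for net in status:
    if not net.get("default"):          # skip the primary (Calico) network
        print(net["interface"], net["name"], ",".join(net["ips"]))
EOF
```

This prints one line per secondary network, for example net1 default/vrf100 followed by its IPv4 and IPv6 addresses.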
  10. Verify that the vRouter has the corresponding interfaces created. Access the vRouter CLI and issue the vif --list command.
    vif0/5      PMD: vhostnet1-2464783d-1ddd-4bf5-b7 NH: 16 MTU: 9160
                Type:Virtual HWaddr:00:00:5e:00:01:00 IPaddr:99.61.0.2
                IP6addr:1234::633d:2
                DDP: OFF SwLB: ON
                Vrf:1 Mcast Vrf:1 Flags:PL3DProxyEr QOS:-1 Ref:14
                RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0
                RX packets:0  bytes:0 errors:0
                TX packets:0  bytes:0 errors:0
                Drops:0
    
    vif0/6      Ethernet: jvknet2-2464783 NH: 19 MTU: 9160
                Type:Virtual HWaddr:00:00:5e:00:01:00 IPaddr:99.62.0.2
                IP6addr:1234::633e:2
                DDP: OFF SwLB: ON
                Vrf:2 Mcast Vrf:2 Flags:PL3DVofProxyEr QOS:-1 Ref:11
                RX port   packets:28 errors:0
                RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0
                RX packets:28  bytes:13612 errors:0
                TX packets:0  bytes:0 errors:0
                Drops:28
    Note that both interfaces have Type:Virtual and carry the L3 flag. The output also shows the IP address assigned to each interface and its VRF number (Vrf:1 for vrf100, Vrf:2 for vrf200).
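If you need to script this verification, the vif --list output can be reduced to an interface/VRF summary with standard text tools. The sketch below embeds a trimmed copy of the sample output for illustration; on the vRouter you would pipe the live command output instead, and the field positions are assumptions based on this sample, so verify them against your vRouter version:

```shell
# Summarize vif --list output as: interface, IPv4 address, VRF number.
# Sample output embedded for illustration only.
cat <<'EOF' > /tmp/vif.txt
vif0/5      PMD: vhostnet1-2464783d-1ddd-4bf5-b7 NH: 16 MTU: 9160
            Type:Virtual HWaddr:00:00:5e:00:01:00 IPaddr:99.61.0.2
            IP6addr:1234::633d:2
            Vrf:1 Mcast Vrf:1 Flags:PL3DProxyEr QOS:-1 Ref:14
vif0/6      Ethernet: jvknet2-2464783 NH: 19 MTU: 9160
            Type:Virtual HWaddr:00:00:5e:00:01:00 IPaddr:99.62.0.2
            IP6addr:1234::633e:2
            Vrf:2 Mcast Vrf:2 Flags:PL3DVofProxyEr QOS:-1 Ref:11
EOF
awk '
  /^vif0\//  { vif = $1 }                                  # remember interface
  /IPaddr:/  { split($NF, a, ":"); ip = a[2] }             # last field is IPaddr:x.x.x.x
  /^ *Vrf:/  { split($1, v, ":"); print vif, "IPv4:" ip, "VRF:" v[2] }
' /tmp/vif.txt
# -> vif0/5 IPv4:99.61.0.2 VRF:1
# -> vif0/6 IPv4:99.62.0.2 VRF:2
```

Each printed VRF number should match the instance the interface was attached to (1 for vrf100, 2 for vrf200 in this example).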