
Manage Cloud-Native Router Controller and Cloud-Native Router vRouter

20-Apr-23

SUMMARY This topic contains instructions for how to access the cloud-native router CLIs, how to run operational commands in cRPD and vRouter containers, and how to remove cloud-native router.

Access the Cloud-Native Router CLIs

You can access the cloud-native router's CLI to monitor the router's status and to make configuration changes. This section provides the commands you use to access the cRPD and vRouter CLIs, along with some example show commands.

Because the cloud-native router controller element runs as a Pod in a Kubernetes (K8s) cluster, you must use K8s commands to access its CLI. We provide an example below. We do not include specific directory paths in our examples, so you can copy and paste the commands directly onto your server.

Access the Cloud-Native Router Controller (cRPD) CLI

In this example we list all of the K8s Pods running on the K8s host server. We use that output to identify the cRPD Pod that hosts the cloud-native router controller container. We then connect to the CLI of the cloud-native router controller and run some show commands.

List the K8s Pods Running in the Cluster

kubectl get pods -A
NAMESPACE         NAME                                       READY   STATUS      RESTARTS      AGE
contrail-deploy   contrail-k8s-deployer-7b5dd699b9-nd7xf     1/1     Running     0             41m
contrail          contrail-vrouter-masters-dfxgm             3/3     Running     0             41m
default           delete-crpd-dirs--1-6jmxz                  0/1     Completed   0             43m
default           delete-vrouter-dirs--1-645dt               0/1     Completed   0             43m
jcnr              kube-crpd-worker-ds-8tnf7                  1/1     Running     0             41m
jcnr              syslog-ng-54749b7b77-v24hq                 1/1     Running     0             41m
kube-system       calico-kube-controllers-57b9767bdb-5wbj6   1/1     Running     2 (92d ago)   129d
kube-system       calico-node-j4m5b                          1/1     Running     2 (92d ago)   129d
kube-system       coredns-8474476ff8-fpw78                   1/1     Running     2 (92d ago)   129d
kube-system       dns-autoscaler-7f76f4dd6-q5vdp             1/1     Running     2 (92d ago)   129d
kube-system       kube-apiserver-5a5s5-node2                 1/1     Running     3 (92d ago)   129d
kube-system       kube-controller-manager-5a5s5-node2        1/1     Running     4 (92d ago)   129d
kube-system       kube-multus-ds-amd64-4zm5k                 1/1     Running     2 (92d ago)   129d
kube-system       kube-proxy-l6xm8                           1/1     Running     2 (92d ago)   129d
kube-system       kube-scheduler-5a5s5-node2                 1/1     Running     4 (92d ago)   129d
kube-system       nodelocaldns-6kwg5                         1/1     Running     2 (92d ago)   129d

The only Pod that has cRPD in its name is kube-crpd-worker-ds-8tnf7. Thus, this is the name of the Pod we use to access the cRPD CLI.
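If your cluster runs many Pods, you can filter the listing by name instead of scanning the full output. A minimal sketch; note that the random suffix in the Pod name (for example, -8tnf7) is generated by Kubernetes and differs on every installation:

```shell
# List only Pods whose name contains "crpd", across all namespaces.
kubectl get pods -A | grep crpd

# Alternatively, list only the Pods in the jcnr namespace.
kubectl get pods -n jcnr
```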

Connect to the cRPD CLI

The kubectl command that allows access to the controller's CLI has the following form:

kubectl exec -n <namespace> -it <cRPD worker Pod name> -- bash

In practice, you substitute values from your system for the values contained between angle brackets (<>). For example:

kubectl exec -n jcnr -it kube-crpd-worker-ds-8tnf7 -- bash

The result of the above command should appear similar to:

===>
           Containerized Routing Protocols Daemon (CRPD)
 Copyright (C) 2020-2021, Juniper Networks, Inc. All rights reserved.
                                                                      <===
root@ix-jcnr-01:/#

At this point, you have connected to the shell of the cloud-native router. As on a physical Junos OS device, you enter the cli command to access operational mode.

root@jcnr-01:/# cli
root@jcnr-01>

Example Show Commands

In the following examples, we omit the prompt, root@jcnr-01>, so you can copy and paste the commands into your system without editing them.

show interfaces terse
Interface@link   Oper State     Addresses
__crpd-brd1      UNKNOWN        fe80::acbf:beff:fe8a:e046/64
cali1b684d67bd4@if3 UP             fe80::ecee:eeff:feee:eeee/64
cali34cf41e29bb@if3 UP             fe80::ecee:eeff:feee:eeee/64
docker0          DOWN           172.17.0.1/16
eno1             UP             10.102.70.146/24 fe80::a94:efff:fe79:dcae/64
eno2             UP
eno3             UP             10.1.1.1/24 fe80::a94:efff:fe79:dcac/64
eno3v1           UP
eno4             DOWN
enp0s20f0u1u6    UNKNOWN
ens2f0           DOWN
ens2f1           DOWN
erspan0@NONE     DOWN
eth0             UNKNOWN        169.254.143.126/32 fe80::b4db:eeff:fe78:9f43/64
gre0@NONE        UNKNOWN
gretap0@NONE     DOWN
ip6tnl0@NONE     UNKNOWN        fe80::74b6:2cff:fea7:d850/64
irb              DOWN
kube-ipvs0       DOWN           10.233.0.1/32 10.233.0.3/32 10.233.35.229/32
lo               UNKNOWN        127.0.0.1/8 ::1/128
lsi              UNKNOWN        fe80::cc59:6dff:fe9c:4db3/64
nodelocaldns     DOWN           169.254.25.10/32
sit0@NONE        UNKNOWN        ::169.254.143.126/96 ::10.233.91.64/96 ::172.17.0.1/96 ::10.102.70.146/96 ::10.1.1.1/96 ::127.0.0.1/96
tunl0@NONE       UNKNOWN
vxlan.calico     UNKNOWN        10.233.91.64/32 fe80::64c6:34ff:fecd:3522/64
show configuration routing-instances
vswitch {
    instance-type virtual-switch;
    bridge-domains {
        bd100 {
            vlan-id 100;
        }
        bd200 {
            vlan-id 200;
        }
        bd300 {
            vlan-id 300;
        }
        bd700 {
            vlan-id 700;
            interface enp59s0f1v0;
        }
        bd701 {
            vlan-id 701;
        }
        bd702 {
            vlan-id 702;
        }
        bd703 {
            vlan-id 703;
        }
        bd704 {
            vlan-id 704;
        }
        bd705 {
            vlan-id 705;
        }
    }
    interface bond0;
}
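Because the controller is Junos-based, you can also enter configuration mode from the same CLI session to make changes. A hedged sketch, following the routing-instance structure shown above; the interface name (enp59s0f1v1) is illustrative only and will differ on your system:

```shell
# From operational mode, enter configuration mode.
configure

# Example change: add an interface to an existing bridge domain.
# The interface name enp59s0f1v1 is illustrative, not from this system.
set routing-instances vswitch bridge-domains bd701 interface enp59s0f1v1

# Review the pending change, then activate it.
show | compare
commit
exit
```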

Access the Cloud-Native Router vRouter CLI

In this example we list the K8s Pods running in the contrail namespace. We use that output to identify the vRouter Pod that hosts the cloud-native router vRouter agent container. We then connect to the CLI of the vRouter agent and run a show command to list the available interfaces.

List the K8s Pods Running in the Cluster

kubectl get pods -n contrail
NAME                             READY   STATUS    RESTARTS   AGE
contrail-vrouter-masters-dfxgm   3/3     Running   0          79m

Connect to the Cloud-Native Router vRouter CLI

The kubectl command that allows access to the vRouter's CLI has the following form:

kubectl exec -n contrail -it <contrail-vrouter-masters-pod> -- bash

In practice, you substitute values from your system for the values contained between angle brackets (<>). For example:

kubectl exec -n contrail -it contrail-vrouter-masters-dfxgm -- bash

At this point, you have connected to the vRouter's CLI. You can run commands in the CLI to learn about the state of the vRouter. For example, the command shown below allows you to see which interfaces are present on the vRouter.

vif --list
Vrouter Operation Mode: PureL2
Vrouter Interface Table

Flags: P=Policy, X=Cross Connect, S=Service Chain, Mr=Receive Mirror
       Mt=Transmit Mirror, Tc=Transmit Checksum Offload, L3=Layer 3, L2=Layer 2
       D=DHCP, Vp=Vhost Physical, Pr=Promiscuous, Vnt=Native Vlan Tagged
       Mnp=No MAC Proxy, Dpdk=DPDK PMD Interface, Rfl=Receive Filtering Offload, Mon=Interface is Monitored
       Uuf=Unknown Unicast Flood, Vof=VLAN insert/strip offload, Df=Drop New Flows, L=MAC Learning Enabled
       Proxy=MAC Requests Proxied Always, Er=Etree Root, Mn=Mirror without Vlan Tag, HbsL=HBS Left Intf
       HbsR=HBS Right Intf, Ig=Igmp Trap Enabled, Ml=MAC-IP Learning Enabled, Me=Multicast Enabled

vif0/0      Socket: unix
            Type:Agent HWaddr:00:00:5e:00:01:00
            Vrf:65535 Flags:L2 QOS:-1 Ref:3
            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0
            RX packets:0  bytes:0 errors:0
            TX packets:11  bytes:4169 errors:0
            Drops:0

vif0/1      PCI: 0000:00:00.0 (Speed 25000, Duplex 1)
            Type:Physical HWaddr:46:37:1f:de:df:bc
            Vrf:65535 Flags:L2Vof QOS:-1 Ref:8
            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0
            Fabric Interface: eth_bond_bond0  Status: UP  Driver: net_bonding
            Slave Interface(0): 0000:3b:02.0  Status: UP  Driver: net_iavf
            Slave Interface(1): 0000:3b:02.1  Status: UP  Driver: net_iavf
            Vlan Mode: Trunk  Vlan: 100 200 300 700-705
            RX packets:0  bytes:0 errors:0
            TX packets:378  bytes:81438 errors:0
            Drops:0

vif0/2      PCI: 0000:3b:0a.0 (Speed 25000, Duplex 1)
            Type:Workload HWaddr:ba:69:c0:b7:1f:ba
            Vrf:0 Flags:L2Vof QOS:-1 Ref:7
            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0
            Fabric Interface: 0000:3b:0a.0  Status: UP  Driver: net_iavf
            Vlan Mode: Access  Vlan Id: 700  OVlan Id: 700
            RX packets:378  bytes:81438 errors:2
            TX packets:0  bytes:0 errors:0
            Drops:391
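Beyond vif --list, the vif utility offers other inspection options. A sketch, assuming the interface indices shown in the output above; option availability may vary with your vRouter release, so verify with vif --help on your system:

```shell
# Show details for a single interface by its index (vif0/2 -> 2).
vif --get 2

# Watch per-interface packet rates interactively.
vif --rate

# Clear the interface statistics counters.
vif --clear
```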

Remove the Juniper Cloud-Native Router

We do not include specific directory paths in the command below, so you can copy and paste it from this document directly onto your server.

Run the following command to uninstall the Juniper Cloud-Native Router.

helm uninstall jcnr
Note: The jcnr namespace is not deleted as part of the helm uninstallation and must be deleted manually.
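For example, after helm uninstall completes, you can delete the leftover namespace with kubectl:

```shell
# Remove the jcnr namespace that helm uninstall leaves behind.
kubectl delete namespace jcnr
```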
