Cloud-Native Router L2 Features
SUMMARY Read this chapter to learn about the features of the Juniper Cloud-Native Router running in L2 mode. We discuss L2 metrics and telemetry, L2 ACLs (firewall filters), MAC learning and aging, and L2 BUM traffic rate limiting.
Juniper Cloud-Native Router Deployment Modes
Starting in Juniper Cloud-Native Router Release 22.4, you can deploy and operate Juniper Cloud-Native Router in either L2 or L3 mode. You control the deployment mode by editing the appropriate values.yaml file prior to deployment.
To deploy the cloud-native router in L2 mode, retain or modify the values in the file Juniper_Cloud_Native_Router_version-number/helmchart/values.yaml.
Throughout the rest of this chapter we identify those features that are only available in L2 mode by beginning the feature name with L2.
In L2 mode, the cloud-native router behaves like a switch and so performs no routing functions and runs no routing protocols. The pod network uses VLANs to direct traffic to various destinations.
To deploy the cloud-native router in L3 mode, retain or modify the values in the file Juniper_Cloud_Native_Router_version-number/helmchart/values_L3.yaml.
In L3 mode, the cloud-native router behaves like a router and so performs routing functions and runs routing protocols such as IS-IS, BGP, OSPF, and segment routing-MPLS. In L3 mode, the pod network is divided into an IPv6 underlay network and an IPv4 or IPv6 overlay network. The IPv6 underlay network is used for control plane traffic.
Juniper Cloud-Native Router L2 Interface Types
Juniper Cloud-Native Router supports the following types of interfaces:
-
Agent interface
The vRouter has only one agent interface. The agent interface enables communication between the vRouter-agent and the vRouter. On the vRouter CLI, when you issue the vif --list command, the agent interface looks like this:

vif0/0    Socket: unix
          Type:Agent HWaddr:00:00:5e:00:01:00 Vrf:65535
          Flags:L2 QOS:-1 Ref:3
          RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0
          RX packets:0  bytes:0 errors:0
          TX packets:650  bytes:99307 errors:0
          Drops:0
-
Data Plane Development Kit (DPDK) Virtual Function (VF) workload interfaces
These interfaces connect to the radio units (RUs) or millimeter-wave distributed units (mmWave-DUs). On the vRouter CLI, when you issue the vif --list command, the DPDK VF workload interface looks like this:

vif0/5    PCI: 0000:ca:19.1 (Speed 10000, Duplex 1)
          Type:Workload HWaddr:9e:52:29:9e:97:9b Vrf:0
          Flags:L2Vof QOS:-1 Ref:9
          RX queue  packets:29087 errors:0
          RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0
          Fabric Interface: 0000:ca:19.1  Status: UP  Driver: net_iavf
          Vlan Mode: Access  Vlan Id: 1250  OVlan Id: 1250
          RX packets:29082  bytes:6766212 errors:5
          TX packets:0  bytes:0 errors:0
          Drops:29896
-
DPDK VF fabric interfaces
DPDK VF fabric interfaces, which are associated with the physical network interface card (NIC) on the host server, accept traffic from multiple VLANs. On the vRouter CLI, when you issue the vif --list command, the DPDK VF fabric interface looks like this:

vif0/1    PCI: 0000:31:01.0 (Speed 10000, Duplex 1)
          Type:Physical HWaddr:d6:22:c5:42:de:c3 Vrf:65535
          Flags:L2Vof QOS:-1 Ref:12
          RX queue  packets:11813 errors:1
          RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 1 0
          Fabric Interface: 0000:31:01.0  Status: UP  Driver: net_iavf
          Vlan Mode: Trunk  Vlan: 1001-1100
          RX packets:0  bytes:0 errors:49962
          TX packets:18188356  bytes:2037400554 errors:0
          Drops:49963
-
Active or standby bond interfaces
Bond interfaces accept traffic from multiple VLANs. A bond interface runs in active/standby (active-backup) mode. A sketch of the corresponding values.yaml bond configuration appears after this list.
On the vRouter CLI, when you issue the vif --list command, the bond interface looks like this:

vif0/2    PCI: 0000:00:00.0 (Speed 10000, Duplex 1)
          Type:Physical HWaddr:32:f8:ad:8c:d3:bc Vrf:65535
          Flags:L2Vof QOS:-1 Ref:8
          RX queue  packets:1882 errors:0
          RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0
          Fabric Interface: eth_bond_bond0  Status: UP  Driver: net_bonding
          Slave Interface(0): 0000:81:01.0  Status: UP  Driver: net_iavf
          Slave Interface(1): 0000:81:03.0  Status: UP  Driver: net_iavf
          Vlan Mode: Trunk  Vlan: 751-755
          RX packets:8108366000  bytes:486501960000 errors:4234
          TX packets:65083776  bytes:4949969408 errors:0
          Drops:8108370394
-
Pod interfaces using virtio and the DPDK data plane
Virtio interfaces accept traffic from multiple VLANs and are associated with pod interfaces that use virtio on the DPDK data plane.
On the vRouter CLI, when you issue the vif --list command, the virtio interface with the DPDK data plane looks like this:

vif0/3    PMD: vhost242ip-93883f16-9ebb-4acf-b
          Type:Virtual HWaddr:00:16:3e:7e:84:a3 Vrf:65535
          Flags:L2 QOS:-1 Ref:13
          RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0
          Vlan Mode: Trunk  Vlan: 1001-1003
          RX packets:0  bytes:0 errors:0
          TX packets:10604432  bytes:1314930908 errors:0
          Drops:0
          TX port   packets:0 errors:10604432
-
Pod interfaces using virtual Ethernet (veth) pairs and the DPDK data plane
Pod interfaces that use veth pairs and the DPDK data plane are access interfaces rather than trunk interfaces. This type of pod interface allows traffic from only one VLAN to pass.
On the vRouter CLI, when you issue the vif --list command, the veth pair interface with the DPDK data plane looks like this:

vif0/4    Ethernet: jvknet1-88c44c3
          Type:Virtual HWaddr:02:00:00:3a:8f:73 Vrf:0
          Flags:L2Vof QOS:-1 Ref:10
          RX queue  packets:524 errors:0
          RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0
          Vlan Mode: Access  Vlan Id: 3001  OVlan Id: 3001
          RX packets:9  bytes:802 errors:515
          TX packets:0  bytes:0 errors:0
          Drops: 525
-
VLAN sub-interfaces
Starting in Juniper Cloud-Native Router Release 22.4, the cloud-native router supports the use of VLAN sub-interfaces. VLAN sub-interfaces are like logical interfaces on a physical switch or router. When you run the cloud-native router in L2 mode, you must associate each sub-interface with a specific VLAN. On the JCNR-vRouter, a VLAN sub-interface looks like this:

vif0/5    Virtual: vhostnet1-71cd7db1-1a5e-49.3003 Vlan(o/i)(,S): 3003/3003 Parent:vif0/4
          Type:Virtual(Vlan) HWaddr:00:99:99:99:33:09 Vrf:0
          Flags:L2 QOS:-1 Ref:3
          RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0
          RX packets:0  bytes:0 errors:0
          TX packets:0  bytes:0 errors:0
          Drops:0
-
Physical Function (PF) workload interfaces
-
PF fabric interfaces
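You define bond interfaces in values.yaml before deployment. The following minimal sketch shows the general shape of an active/standby bond definition; the key names, mode value, and member interface names are illustrative assumptions, so check the values.yaml shipped with your release for the exact schema.

# Hypothetical bond definition in values.yaml; key names and interface names are assumptions.
bondInterfaceConfigs:
  - name: "bond0"
    mode: 1                  # active-backup bonding (assumed value)
    slaveInterfaces:
      - "enp59s0f0v0"
      - "enp59s0f1v0"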
The vRouter does not support the vhost0 interface when run in L2 mode. The vRouter-agent detects L2 mode in values.yaml during deployment, so it does not wait for the vhost0 interface to come up before completing installation. The vRouter-agent does not send a vhost interface add message, so the vRouter doesn't create the vhost0 interface. In L3 mode, the vhost0 interface is present and functional.
Pods are the Kubernetes elements that contain the interfaces used in the cloud-native router. You control interface creation by manipulating the value portion of the key:value pairs in YAML configuration files. The cloud-native router uses a pod-specific file and a network attachment definition (NAD)-specific file for pod and interface creation. During pod creation, Kubernetes consults the pod and NAD configuration files and creates the needed interfaces from the values contained within the NAD configuration file.
You can see example NAD and pod YAML files in the L2 - Add User Pod with Kernel Access to a Cloud-Native Router Instance and L2 - Add User Pod with virtio Trunk Ports to a Cloud-Native Router Instance examples.
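To give a sense of the shape of these files, the following is a minimal NAD sketch for an L2 pod interface. The apiVersion and kind are the standard Kubernetes NetworkAttachmentDefinition fields; the metadata name, the type value, and the args keys (instanceName, instanceType, vlanId) are illustrative assumptions, so rely on the referenced examples for the exact fields that your release expects.

# Hypothetical NAD sketch; the fields inside spec.config other than cniVersion and name are assumptions.
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: vswitch-pod1-bd100
spec:
  config: '{
    "cniVersion": "0.4.0",
    "name": "vswitch-pod1-bd100",
    "type": "jcnr",
    "args": {
      "instanceName": "vswitch",
      "instanceType": "virtual-switch",
      "vlanId": "100"
    }
  }'

In the pod YAML, you then reference the NAD by name through the standard k8s.v1.cni.cncf.io/networks annotation so that Kubernetes attaches the additional interface during pod creation.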
L2 Metrics and Telemetry
Read this topic to learn how to view Layer 2 (L2) metrics from an instance of Juniper Cloud-Native Router.
Viewing L2 Metrics
Juniper Cloud-Native Router comes with telemetry capabilities that enable you to see performance metrics and telemetry data. The container contrail-vrouter-telemetry-exporter provides this visibility. This container runs alongside the other vRouter containers in the contrail-vrouter-masters pod.
The telemetry exporter periodically queries the Introspect agent on the vRouter-agent for statistics and reports metrics in response to Prometheus scrape requests. You can view the telemetry data directly by using the following URL: http://<host-server-IP-address>:8070. The following table shows a sample output.
We've grouped the output shown in the following table. The cloud-native router does not group or sort the output on live systems.
Group | Sample Output |
---|---|
Memory usage per vRouter |
# TYPE virtual_router_system_memory_cached_bytes gauge # HELP virtual_router_system_memory_cached_bytes Virtual router system memory cached virtual_router_system_memory_cached_bytes{vrouter_name="jcnr.example.com"} 2635970448 # TYPE virtual_router_system_memory_buffers gauge # HELP virtual_router_system_memory_buffers Virtual router system memory buffer virtual_router_system_memory_buffers{vrouter_name="jcnr.example.com"} 32689 # TYPE virtual_router_system_memory_bytes gauge # HELP virtual_router_system_memory_bytes Virtual router total system memory virtual_router_system_memory_bytes{vrouter_name="jcnr.example.com"} 2635970448 # TYPE virtual_router_system_memory_free_bytes gauge # HELP virtual_router_system_memory_free_bytes Virtual router system memory free virtual_router_system_memory_free_bytes{vrouter_name="jcnr.example.com"} 2635969296 # TYPE virtual_router_system_memory_used_bytes gauge # HELP virtual_router_system_memory_used_bytes Virtual router system memory used virtual_router_system_memory_used_bytes{vrouter_name="jcnr.example.com"} 32689 # TYPE virtual_router_virtual_memory_kilobytes gauge # HELP virtual_router_virtual_memory_kilobytes Virtual router virtual memory virtual_router_virtual_memory_kilobytes{vrouter_name="jcnr.example.com"} 0 # TYPE virtual_router_resident_memory_kilobytes gauge # HELP virtual_router_resident_memory_kilobytes Virtual router resident memory virtual_router_resident_memory_kilobytes{vrouter_name="jcnr.example.com"} 32689 # TYPE virtual_router_peak_virtual_memory_bytes gauge # HELP virtual_router_peak_virtual_memory_bytes Virtual router peak virtual memory virtual_router_peak_virtual_memory_bytes{vrouter_name="jcnr.example.com"} 2894328001 |
Packet count per interface |
# TYPE virtual_router_phys_if_input_packets_total counter # HELP virtual_router_phys_if_input_packets_total Total packets received by physical interface virtual_router_phys_if_input_packets_total{vrouter_name="jcnr.example.com",interface_name="bond0"} 1483 # TYPE virtual_router_phys_if_output_packets_total counter # HELP virtual_router_phys_if_output_packets_total Total packets sent by physical interface virtual_router_phys_if_output_packets_total{vrouter_name="jcnr.example.com",interface_name="bond0"} 32969 # TYPE virtual_router_phys_if_input_bytes_total counter # HELP virtual_router_phys_if_input_bytes_total Total bytes received by physical interface virtual_router_phys_if_input_bytes_total{interface_name="bond0",vrouter_name="jcnr.example.com"} 125558 # TYPE virtual_router_phys_if_output_bytes_total counter # HELP virtual_router_phys_if_output_bytes_total Total bytes sent by physical interface virtual_router_phys_if_output_bytes_total{vrouter_name="jcnr.example.com",interface_name="bond0"} 4597076 virtual_router_phys_if_input_bytes_total{vrouter_name="jcnr.example.com",interface_name="bond0"} 228300499320 virtual_router_phys_if_output_bytes_total{interface_name="bond0",vrouter_name="jcnr.example.com"} 228297889634 virtual_router_phys_if_input_packets_total{interface_name="bond0",vrouter_name="jcnr.example.com"} 1585421179 virtual_router_phys_if_output_packets_total{vrouter_name="jcnr.example.com",interface_name="bond0"} 1585402623 virtual_router_phys_if_output_packets_total{interface_name="bond0",vrouter_name="jcnr.example.com"} 1585403344 |
CPU usage per vRouter |
# TYPE virtual_router_cpu_1min_load_avg gauge # HELP virtual_router_cpu_1min_load_avg Virtual router CPU 1 minute load average virtual_router_cpu_1min_load_avg{vrouter_name="jcnr.example.com"} 0.11625 # TYPE virtual_router_cpu_5min_load_avg gauge # HELP virtual_router_cpu_5min_load_avg Virtual router CPU 5 minute load average virtual_router_cpu_5min_load_avg{vrouter_name="jcnr.example.com"} 0.109687 # TYPE virtual_router_cpu_15min_load_avg gauge # HELP virtual_router_cpu_15min_load_avg Virtual router CPU 15 minute load average virtual_router_cpu_15min_load_avg{vrouter_name="jcnr.example.com"} 0.110156 |
Drop packet count per vRouter |
# TYPE virtual_router_dropped_packets_total counter # HELP virtual_router_dropped_packets_total Total packets dropped virtual_router_dropped_packets_total{vrouter_name="jcnr.example.com"} 35850 |
Packet count per interface per VLAN |
# TYPE virtual_router_interface_vlan_multicast_input_packets_total counter # HELP virtual_router_interface_vlan_multicast_input_packets_total Total number of multicast packets received on interface VLAN virtual_router_interface_vlan_multicast_input_packets_total{interface_id="1",vlan_id="100"} 0 # TYPE virtual_router_interface_vlan_broadcast_output_packets_total counter # HELP virtual_router_interface_vlan_broadcast_output_packets_total Total number of broadcast packets sent on interface VLAN virtual_router_interface_vlan_broadcast_output_packets_total{interface_id="1",vlan_id="100"} 0 # TYPE virtual_router_interface_vlan_broadcast_input_packets_total counter # HELP virtual_router_interface_vlan_broadcast_input_packets_total Total number of broadcast packets received on interface VLAN virtual_router_interface_vlan_broadcast_input_packets_total{interface_id="1",vlan_id="100"} 0 # TYPE virtual_router_interface_vlan_multicast_output_packets_total counter # HELP virtual_router_interface_vlan_multicast_output_packets_total Total number of multicast packets sent on interface VLAN virtual_router_interface_vlan_multicast_output_packets_total{interface_id="1",vlan_id="100"} 0 # TYPE virtual_router_interface_vlan_unicast_input_packets_total counter # HELP virtual_router_interface_vlan_unicast_input_packets_total Total number of unicast packets received on interface VLAN virtual_router_interface_vlan_unicast_input_packets_total{interface_id="1",vlan_id="100"} 0 # TYPE virtual_router_interface_vlan_flooded_output_bytes_total counter # HELP virtual_router_interface_vlan_flooded_output_bytes_total Total number of output bytes flooded to interface VLAN virtual_router_interface_vlan_flooded_output_bytes_total{interface_id="1",vlan_id="100"} 0 # TYPE virtual_router_interface_vlan_multicast_output_bytes_total counter # HELP virtual_router_interface_vlan_multicast_output_bytes_total Total number of multicast bytes sent on interface VLAN virtual_router_interface_vlan_multicast_output_bytes_total{interface_id="1",vlan_id="100"} 0 # TYPE virtual_router_interface_vlan_unicast_output_packets_total counter # HELP virtual_router_interface_vlan_unicast_output_packets_total Total number of unicast packets sent on interface VLAN virtual_router_interface_vlan_unicast_output_packets_total{interface_id="1",vlan_id="100"} 0 # TYPE virtual_router_interface_vlan_broadcast_input_bytes_total counter # HELP virtual_router_interface_vlan_broadcast_input_bytes_total Total number of broadcast bytes received on interface VLAN virtual_router_interface_vlan_broadcast_input_bytes_total{interface_id="1",vlan_id="100"} 0 # TYPE virtual_router_interface_vlan_multicast_input_bytes_total counter # HELP virtual_router_interface_vlan_multicast_input_bytes_total Total number of multicast bytes received on interface VLAN virtual_router_interface_vlan_multicast_input_bytes_total{vlan_id="100",interface_id="1"} 0 # TYPE virtual_router_interface_vlan_unicast_input_bytes_total counter # HELP virtual_router_interface_vlan_unicast_input_bytes_total Total number of unicast bytes received on interface VLAN virtual_router_interface_vlan_unicast_input_bytes_total{interface_id="1",vlan_id="100"} 0 # TYPE virtual_router_interface_vlan_flooded_output_packets_total counter # HELP virtual_router_interface_vlan_flooded_output_packets_total Total number of output packets flooded to interface VLAN virtual_router_interface_vlan_flooded_output_packets_total{interface_id="1",vlan_id="100"} 0 # TYPE virtual_router_interface_vlan_broadcast_output_bytes_total 
counter # HELP virtual_router_interface_vlan_broadcast_output_bytes_total Total number of broadcast bytes sent on interface VLAN virtual_router_interface_vlan_broadcast_output_bytes_total{interface_id="1",vlan_id="100"} 0 # TYPE virtual_router_interface_vlan_unicast_output_bytes_total counter # HELP virtual_router_interface_vlan_unicast_output_bytes_total Total number of unicast bytes sent on interface VLAN virtual_router_interface_vlan_unicast_output_bytes_total{interface_id="1",vlan_id="100"} 0 ... |
Prometheus is an open-source systems monitoring and alerting toolkit. You can use Prometheus to retrieve telemetry data from the cloud-native router host servers over HTTP. A sample Prometheus scrape configuration looks like this:
- job_name: "prometheus-JCNR-1a2b3c"
  # metrics_path defaults to '/metrics'
  # scheme defaults to 'http'.
  static_configs:
    - targets: ["<host-server-IP>:8070"]
L2 ACLs (Firewall Filters)
Read this topic to learn about Layer 2 access control lists (L2 ACLs) in the cloud-native router.
L2 Firewall Filters
Starting with Juniper Cloud-Native Router Release 22.2, we've included a limited firewall filter capability. You can configure the filters by using the Junos OS CLI within the cloud-native router controller, by using NETCONF, or by using the cloud-native router APIs.
During deployment, the system defines and applies firewall filters to block traffic from passing directly between the router interfaces. You can dynamically define and apply more filters. Use the firewall filters to:
-
Define firewall filters for bridge family traffic.
-
Define filters based on one or more of the following fields: source MAC address, destination MAC address, or EtherType.
-
Define multiple terms within each filter.
-
Discard the traffic that matches the filter.
-
Apply filters to bridge domains.
Firewall Filter Example
Below you can see an example of a firewall filter configuration from a cloud-native router deployment.
root@jcnr01> show configuration firewall
family {
    bridge {
        filter example {
            term t1 {
                from {
                    destination-mac-address 10:10:10:10:10:11;
                    source-mac-address 10:10:10:10:10:10;
                    ether-type arp;
                }
                then {
                    discard;
                }
            }
        }
    }
}
You can configure up to 16 terms in a single firewall filter.
The only then action you can configure in a firewall filter is the discard action.
After configuration, you must apply your firewall filters to a bridge domain using a cRPD configuration command similar to set routing-instances vswitch bridge-domains bd3001 forwarding-options filter input filter1. Then you must commit the configuration for the firewall filter to take effect.
To see how many packets matched the filter (per VLAN), you can issue the following command on the cRPD CLI:
show firewall filter filter1
The command output looks like this:
Filter : filter1
vlan-id : 3001
Term                                Packet
t1                                  0
In the preceding example, we applied the filter to the bridge domain bd3001. The filter has not yet matched any packets.
L2 Firewall Filter (ACL) Troubleshooting
The following table lists some of the potential problems that you might face when you implement firewall rules or ACLs in the cloud-native router. You run most of these commands on the host server; the Command column notes when a command must run somewhere else.
Problem | Possible Causes and Resolution | Command |
---|---|---|
Firewall filters or ACLs not working |
gRPC connection (port 50052) to the vRouter is down. Check the gRPC connection. |
netstat -antp|grep 50052 |
The ui-pubd process is not running. Check whether ui-pubd is running. |
ps aux|grep ui-pubd |
|
Firewall filter or ACL show commands not working |
The gRPC connection (port 50052) to the vRouter is down. Check the gRPC connection. |
netstat -antp|grep 50052 |
The firewall service is not running. |
ps aux|grep firewall |
|
show log filter.log (You must run this command in the JCNR-controller (cRPD) CLI.) |
MAC Learning and Aging
Juniper Cloud-Native Router provides automated learning and aging of MAC addresses. Read this topic for an overview of the MAC learning and aging functionality in the cloud-native router.
MAC Learning
MAC learning enables the cloud-native router to efficiently send received packets to their respective destinations. The cloud-native router maintains a table of MAC addresses grouped by interface. The table includes MAC addresses, VLANs, and the interface on which the vRouter learns each MAC address and VLAN. The MAC table informs the vRouter about the MAC addresses that each interface can reach.
Queries sent to the MAC table return the interface associated with a key built from a packet's MAC address and VLAN. To populate the MAC table, the cloud-native router performs these steps:
-
Records the incoming interface into the MAC table by caching the source MAC address for a new packet flow.
-
Learns the MAC addresses for each VLAN or bridge domain.
-
Creates a key in the MAC table from the MAC address and VLAN of the packet.
If the destination MAC address and VLAN are missing (lookup failure), the cloud-native router floods the packet out all the interfaces (except the incoming interface) in the bridge domain.
By default:
-
MAC table entries time out after 60 seconds.
-
The MAC table size is limited to 10,240 entries.
You can configure the aging timeout and MAC table size during deployment by editing the values.yaml file under the jcnr-vrouter directory on the host server. We recommend that you do not change the default values.
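For illustration only, the relevant values.yaml fragment might look like the following sketch; the key names shown here are hypothetical, so use the keys documented in the values.yaml that ships with your release.

# Hypothetical key names; the default values shown match the defaults described above.
macAgingTimeSeconds: 60      # aging timeout, 60 to 10,240 seconds
macTableSize: 10240          # maximum number of MAC table entries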
You can see the MAC table entries by using:
-
Introspect agent at http://<host-server-IP>:8085/mac_learning.xml#Snh_FetchL2MacEntry.
-
The command show bridge mac-table on the cRPD CLI.
-
The command purel2cli --mac show on the CLI of the contrail-tools pod.
If you exceed the MAC address limit, the counter pkt_drop_due_to_mactable_limit increments. You can see this counter by using the introspect agent at http://<host-server-IP>:8085/Snh_AgentStatsReq.
If you delete or disable an interface, the cloud-native router deletes all the MAC entries associated with that interface from the MAC table.
MAC Entry Aging
The aging timeout for cached MAC entries is 60 seconds. You can configure the aging timeout at deployment time by editing the values.yaml file. The minimum timeout is 60 seconds and the maximum timeout is 10,240 seconds. You can see the time that is left for each MAC entry through introspect at http://<host-server-IP>:8085/mac_learning.xml#Snh_FetchL2MacEntry. We show an example of the output below:
l2_mac_entry_list
vrf_id  vlan_id  mac                index  packets    time_since_add   last_stats_change
0       1001     00:10:94:00:00:01  5644   615123154  12:55:14.248785  00:00:00.155450
0       1001     00:10:94:00:00:65  6480   615108294  12:55:14.247765  00:00:00.155461
0       1002     01:10:94:00:00:02  5628   615123173  12:55:14.248295  00:00:00.155470
BUM Rate Limiting
The rate limiting feature controls the rate of egress broadcast, unknown unicast, and multicast (BUM) traffic on fabric interfaces. You specify the rate limit in bytes per second by adjusting stormControlProfiles in the values.yaml file before deployment. The system applies the configured profiles to all specified fabric interfaces in the cloud-native router. The maximum per-interface rate limit value you can set is 1,000,000 bytes per second.
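For example, a profile definition and its attachment to a fabric interface generally look like the following sketch. The storm-control-profile reference under fabricInterface matches the examples elsewhere in this chapter; the exact nesting of the profile definition (the bandwidth and bps keys) is an assumption, so confirm it against the values.yaml for your release.

# Profile definition (nesting of bandwidth/bps is an assumption).
stormControlProfiles:
  rate_limit_pf1:
    bandwidth:
      bps: 64000             # egress BUM rate limit in bytes per second (maximum 1,000,000)

# The profile is then referenced on a fabric interface.
fabricInterface:
  - bond0:
      interface_mode: trunk
      vlan-id-list: [100, 200, 300]
      storm-control-profile: rate_limit_pf1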
If the unknown unicast, broadcast, or multicast traffic rate exceeds the set limit on a specified fabric interface, the vRouter drops the traffic. You can see the drop counter values by running the dropstats command in the vRouter CLI.
You can see the per-interface rate limit drop counters by running the vRouter CLI command vif --get fabric_vif_id --get-drop-stats. For example:

dropstats
L2 untag pkt drop            8832
L2 Src Mac lookup fail       880
Rate limit exceeded          29312474
When you configure a rate limit profile on a fabric interface, you can see the configured limit in bytes per second when you run either vif --list or vif --get fabric_vif_id.
L2 API to Force Bond Link Switchover
When you run the cloud-native router in L2 mode with cascaded nodes, you can configure those nodes to use bond interfaces. If you also configure the bond interfaces as BONDING_MODE_ACTIVE_BACKUP, the vRouter-agent exposes the REST API call curl -X POST http://127.0.0.1:9091/bond-switch/bond0 on localhost port 9091. You can use this REST API call to force traffic to switch from the active interface to the standby interface.
The vRouter contains two CLI commands that allow you to see the active interface in a bonded pair and to see the traffic statistics associated with your bond interfaces. These commands are dpdkinfo -b and dpdkinfo -n, respectively.
L2 Quality of Service (QoS)
Starting in Juniper Cloud-Native Router Release 22.4, you can configure quality of service (QoS) parameters including classification, marking, and queuing. The cloud-native router performs classification and marking operations in the vRouter and queuing (scheduling) operations in the physical network interface card (NIC). Scheduling is only supported on the E810 NIC.
QoS Overview
You enable QoS before deployment by editing the values.yaml file in the Juniper-Cloud-Native-Router-version-number/helmchart directory and changing the qosEnable value to true. The default value for the QoS feature is false (disabled).
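For example, the relevant fragment of values.yaml is a single top-level key; the rest of the file is omitted here.

# Enable QoS before deployment; the default is false.
qosEnable: true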
You can only enable the QoS feature if the host server on which you install your cloud-native router contains an Intel E810 NIC that is running LLDP.
You enable LLDP on the NIC by using lldptool, which runs on the host server as a CLI application. For example, you could use the following command to enable LLDP on the E810 NIC:
lldptool -T -i INTERFACE -V ETS-CFG willing=no \
  tsa=0:strict,1:strict,2:strict,3:strict,4:strict,5:strict,6:strict,7:strict \
  up2tc=0:0,1:1,2:2,3:3,4:0,5:1,6:2,7:3
The details of the above command are:
-
ETS–Enhanced Transmission Selection
-
willing–The willing attribute determines whether the system uses the locally configured packet forwarding classification (PFC) or not. If you set willing to no (the default setting), the cloud-native router applies the local PFC configuration. If you set willing to yes and the cloud-native router receives a TLV from the peer, the cloud-native router applies the received values.
-
tsa–The transmission selection algorithm (tsa) value is a comma-separated list of traffic class to selection algorithm mappings. You can choose ets, strict, or vendor as selection algorithms.
-
up2tc–Comma-separated list that maps user priorities to traffic classes.
The list below provides an overview of the classification, marking, and queueing operations performed by the cloud-native router.
-
Classification:
-
vRouter classifies packets by examining the priority bits in the packet
-
vRouter derives traffic class and loss priority
-
vRouter can apply traffic classifiers to fabric, traffic, and workload interface types
-
vRouter maintains 16 entries in its classifier map
-
-
Marking (Re-write):
-
vRouter performs marking (rewrite) operations
-
vRouter performs rewriting of p-bits in the egress path
-
vRouter derives new traffic priority based on traffic class and drop priority at egress
-
vRouter can apply marking to packets only on fabric interfaces
-
vRouter maintains 8 entries in its marking map
-
-
Queueing (Scheduling):
-
Cloud-native router performs strict priority scheduling in hardware (E810 NIC)
-
Cloud-native router maps each traffic class to one queue
-
Cloud-native router limits the maximum number of traffic queues to 4
-
Cloud-native router maps 8 possible priorities to 4 traffic classes; it also maps each traffic class to 1 hardware queue
-
Cloud-native router can apply scheduling to fabric interface only
-
Virtual functions (VFs) leverage the queues that you configure in the physical functions (interfaces)
-
vRouter maintains 8 entries in its scheduler map
-
QoS Example Configuration
You configure QoS classifiers, rewrite rules, and schedulers in the cRPD using Junos set commands or remotely using NETCONF. We display a Junos-based example configuration below.
set class-of-service classifiers ieee-802.1 class1 forwarding-class assured-forwarding loss-priority high code-points 011
set class-of-service rewrite-rules ieee-802.1 Rule_1 forwarding-class assured-forwarding loss-priority high code-point 110
set class-of-service schedulers sch1 priority high
set class-of-service scheduler-maps sch1 forwarding-class assured-forwarding scheduler sch1
set class-of-service interfaces enp175s1 scheduler-map sch1
set class-of-service interfaces enp175s1 unit 0 rewrite-rules ieee-802.1 Rule_1
set class-of-service interfaces vhostnet123-3546aefd-7af8-4fe5 unit 0 classifiers ieee-802.1 class1
Viewing the QoS Configuration
You view the QoS configuration in the cRPD CLI by using show commands in Junos operational mode. The show commands reveal the configuration of classifiers, rewrite rules, or scheduler maps individually. We display three examples below, one for each operation.
-
Show Classifier
user@jcnr1> show class-of-service classifier
Classifier: class1, Code point type: ieee802.1p
  Code point     Forwarding class         Loss priority
  011            assured-forwarding       high
-
Show Rewrite-Rule
user@jcnr1> show class-of-service rewrite-rule
Rewrite rule: Rule_1, Code point type: ieee802.1p
  Forwarding class         Loss priority    Code point
  assured-forwarding       high             110
-
Show Scheduler-Map
show class-of-service scheduler-map sch1
Scheduler map: sch1
  Scheduler: sch1, Forwarding class: assured-forwarding
    Transmit rate: unspecified, Rate Limit: none, Priority: high
Native VLAN
Starting in Juniper Cloud-Native Router Release 23.1, JCNR supports receiving and forwarding untagged packets on a trunk interface. Typically, trunk ports accept only tagged packets and drop untagged packets. You can enable a JCNR fabric trunk port to accept untagged packets by configuring a native VLAN identifier (ID) on the interface on which you want to receive the untagged packets. When a JCNR fabric trunk port is enabled to accept untagged packets, such packets are forwarded in the native VLAN domain.
native-vlan-id
Enable the native-vlan-id key in the Helm chart before deployment to configure the VLAN identifier to associate with untagged data packets received on the fabric trunk interface. Edit the values.yaml file in the Juniper_Cloud_Native_Router_<release-number>/helmchart directory and add the native-vlan-id key along with a value for it. For example:
fabricInterface:
  - bond0:
      interface_mode: trunk
      vlan-id-list: [100, 200, 300, 700-705]
      storm-control-profile: rate_limit_pf1
      native-vlan-id: 100
After editing the values.yaml file, you have to install or upgrade JCNR using the edited values.yaml to ensure that the native-vlan-id key is enabled.
To verify that native VLAN is enabled for an interface, connect to the vRouter agent by executing the kubectl exec -it -n contrail contrail-vrouter-<agent container> -- bash command, and then run the vif --get <interface index id> command. A sample output is shown below.
vif0/1    PCI: 0000:00:00.0 (Speed 10000, Duplex 1)
          Type:Physical HWaddr:6a:45:b2:a8:ce:5c Vrf:0
          Flags:L2Vof QOS:-1 Ref:11
          RX port   packets:36550 errors:0
          RX queue  packets:36550 errors:0
          RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0
          Fabric Interface: eth_bond_bond0  Status: UP  Driver: net_bonding
          Slave Interface(0): 0000:3b:02.0  Status: UP  Driver: net_iavf
          Vlan Mode: Trunk  Vlan: 100 200 300  Native vlan id: 100
          RX packets:36550  bytes:5875795 errors:0
          TX packets:0  bytes:0 errors:0
          Drops:613
Preventing Local Switching
Starting in Juniper Cloud-Native Router Release 23.1, JCNR can prevent interfaces in a bridge domain that are part of the same VLAN group from transmitting Ethernet frame copies between those interfaces. The noLocalSwitching key provides the option to enable this functionality on selected VLAN IDs.
The noLocalSwitching functionality is a Technology Preview feature in the Juniper Cloud-Native Router Release 23.1.
To prevent interfaces in a bridge domain from transmitting and receiving Ethernet frame copies, enable the noLocalSwitching key and assign a VLAN ID to it so that the interfaces belonging to that VLAN ID do not transmit frames to one another. Note that the noLocalSwitching functionality is enabled only on access interfaces. To enable noLocalSwitching on a trunk interface that is part of the same VLAN ID, you must separately enable the trunk interface by setting the no-local-switching key in the trunk interface configuration to true. Use the noLocalSwitching functionality when you want to block interfaces that are part of a VLAN group from transmitting traffic directly to one another.
For all the trunk interfaces and access interfaces, cRPD isolates traffic for the bridge domains configured with no-local-switching.
To prevent local switching, perform the steps below before deployment:
-
Edit the values.yaml file in Juniper_Cloud_Native_Router_<release-number>/helmchart directory.
-
Enable the noLocalSwitching key and provide the VLAN IDs.
Note:-
The value for the noLocalSwitching key can be an individual VLAN ID, multiple comma-separated VLAN ID values, a VLAN ID range, or a combination of comma-separated VLAN ID values and a VLAN ID range. For example, noLocalSwitching: [700, 701, 705-710].
-
With this step the feature is enabled for all access interfaces having the specified VLAN ID. You can skip the next step if you do not want to enable the feature on the trunk interface.
-
-
To enable the feature on a trunk interface, add the key no-local-switching and set it to true under the trunk interface configuration.
-
Install or upgrade JCNR using the values.yaml.
Example
####################################################################
#                            L2 PARAMS                              #
####################################################################
noLocalSwitching: [700]

# fabricInterface: NGDU or tor side interface, expected all types
# of traffic; interface_mode is always trunk for this mode
fabricInterface:
  - bond0:
      interface_mode: trunk
      vlan-id-list: [100, 200, 300, 700-705]
      storm-control-profile: rate_limit_pf1
      #native-vlan-id: 100
      no-local-switching: true

# fabricWorkloadInterface: RU side interfaces, expected traffic is only
# management/control traffic; interface mode can be trunk or access
# NOTE: only one vlan can be specified in case of access interfaces
# (as opposed to multiple vlans in trunk mode)
fabricWorkloadInterface:
  - enp59s0f1v0:
      interface_mode: access
      vlan-id-list: [700]
To see all the interfaces on which the noLocalSwitching functionality is enabled, across all VLANs, connect to the vRouter agent by executing the kubectl exec -it -n contrail contrail-vrouter-<agent container> -- bash command, and then run the purel2cli --nolocal show command. A sample output is shown below.
[root@nodep25 /]# purel2cli --nolocal show
============================
vlan    no_local_switch_list
============================
100     1, 2, 4,
200
300
700
701
702
703
To check whether the noLocalSwitching functionality is enabled on a specific VLAN ID, connect to the vRouter agent by executing the kubectl exec -it -n contrail contrail-vrouter-<agent container> -- bash command, and then run the purel2cli --nolocal get <VLAN ID> command. A sample output is shown below.
[root@nodep25 /]# purel2cli --nolocal get 100
============================
vlan    no_local_switch_list
============================
100     1, 2, 4,