Configure Settings on Host OS

This chapter describes how to tune settings on the host OS to enable advanced features or to increase the scale of cRPD functionality.

Configure ARP Scaling

The maximum number of ARP entries is controlled by the Linux kernel on the host. You can adjust the ARP (IPv4) or NDP (IPv6) neighbor-table limits using the sysctl command on the Linux host.

For example, to adjust the maximum number of IPv4 ARP entries:

root@host:~# sysctl -w net.ipv4.neigh.default.gc_thresh1=4096

root@host:~# sysctl -w net.ipv4.neigh.default.gc_thresh2=8192

root@host:~# sysctl -w net.ipv4.neigh.default.gc_thresh3=8192

For example, to adjust the maximum number of IPv6 NDP entries:

root@host:~# sysctl -w net.ipv6.neigh.default.gc_thresh1=4096

root@host:~# sysctl -w net.ipv6.neigh.default.gc_thresh2=8192

root@host:~# sysctl -w net.ipv6.neigh.default.gc_thresh3=8192
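Settings applied with sysctl -w are lost on reboot. One way to persist the limits on a systemd-based host is a sysctl.d fragment; a sketch (the file name 90-crpd-neigh.conf is an arbitrary choice):

```shell
# Generate a sysctl.d fragment with the neighbor-table limits shown above;
# copy it to /etc/sysctl.d/ as root and apply it with: sysctl -p <file>
cat <<'EOF' > 90-crpd-neigh.conf
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv6.neigh.default.gc_thresh1 = 4096
net.ipv6.neigh.default.gc_thresh2 = 8192
net.ipv6.neigh.default.gc_thresh3 = 8192
EOF
cat 90-crpd-neigh.conf
```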

IGMP Membership Under Linux

To allow a greater number of OSPFv2/v3 adjacencies with cRPD, increase the IGMP membership limit:

root@host:~# sysctl -w net.ipv4.igmp_max_memberships=1000

Kernel Modules

Load the following kernel modules on the host before you deploy cRPD in Layer 3 mode. These modules are usually available in the linux-modules-extra or kernel-modules-extra package. Run the following commands to load them.

  • modprobe tun

  • modprobe fou

  • modprobe fou6

  • modprobe ipip

  • modprobe ip_tunnel

  • modprobe ip6_tunnel

  • modprobe mpls_gso

  • modprobe mpls_router

  • modprobe mpls_iptunnel

  • modprobe vrf

  • modprobe vxlan
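The modprobe commands above do not survive a reboot. One way to load the modules at boot on a systemd-based host is a modules-load.d fragment; a sketch (the file name crpd.conf is an assumption):

```shell
# Generate a modules-load.d fragment listing the modules cRPD needs;
# copy it to /etc/modules-load.d/ as root so they load at boot.
cat <<'EOF' > crpd.conf
tun
fou
fou6
ipip
ip_tunnel
ip6_tunnel
mpls_gso
mpls_router
mpls_iptunnel
vrf
vxlan
EOF
cat crpd.conf
```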

Configure MPLS

To configure MPLS in the Linux kernel:

  1. Load the MPLS modules in the container using modprobe or insmod:

    root@crpd-ubuntu3:~# modprobe mpls_iptunnel

    root@crpd-ubuntu3:~# modprobe mpls_router

    root@crpd-ubuntu3:~# modprobe ip_tunnel

  2. Verify that the MPLS modules are loaded in the host OS.
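One way to verify the modules, plus the follow-on kernel settings that MPLS forwarding typically needs (the label-table size 1048575 and the interface name eth0 are illustrative values; run as root):

```shell
# Confirm the MPLS modules are present in the running kernel.
lsmod | grep mpls

# Loading the modules alone does not enable forwarding: the kernel also
# needs a nonzero label-table size and MPLS input enabled per interface.
sysctl -w net.mpls.platform_labels=1048575
sysctl -w net.mpls.conf.eth0.input=1
```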

Hash Field Selection for ECMP Load Balancing on Linux

You can select the ECMP hash policy (fib_multipath_hash_policy) for both forwarded and locally generated traffic (IPv4/IPv6).

IPv4 Traffic

  1. By default, the Linux kernel uses the L3 hash policy to load-balance IPv4 traffic. L3 hashing uses the following information:
    • Source IP address
    • Destination IP address

    root@host:~# sysctl -n net.ipv4.fib_multipath_hash_policy
    0

  2. Run the following command to load-balance IPv4 traffic using the Layer 4 hash policy. Layer 4 hashing balances traffic based on the following information:
    • Source IP address
    • Destination IP address
    • Source port number
    • Destination port number
    • Protocol

    root@host:~# sysctl -w net.ipv4.fib_multipath_hash_policy=1

    root@host:~# sysctl -n net.ipv4.fib_multipath_hash_policy
    1

  3. Run the following command to use L3 hashing on the inner packet header (IPv4/IPv6 over IPv4 GRE):

    root@host:~# sysctl -w net.ipv4.fib_multipath_hash_policy=2

    root@host:~# sysctl -n net.ipv4.fib_multipath_hash_policy
    2

    With the default value (0), the kernel hashes on the outer packet headers, as described in step 1.

IPv6 Traffic

  1. By default, the Linux kernel uses the L3 hash policy to load-balance IPv6 traffic. The L3 hash policy balances traffic based on the following information:
    • Source IP address
    • Destination IP address
    • Flow label
    • Next header (Protocol)

    root@host:~# sysctl -n net.ipv6.fib_multipath_hash_policy
    0

  2. Run the following command to load-balance IPv6 traffic using the Layer 4 hash policy. The Layer 4 hash policy balances traffic based on the following information:
    • Source IP address
    • Destination IP address
    • Source port number
    • Destination port number
    • Next header (Protocol)

    root@host:~# sysctl -w net.ipv6.fib_multipath_hash_policy=1

    root@host:~# sysctl -n net.ipv6.fib_multipath_hash_policy
    1

  3. Run the following command to use L3 hashing on the inner packet header (IPv4/IPv6 over IPv4 GRE):

    root@host:~# sysctl -w net.ipv6.fib_multipath_hash_policy=2

    root@host:~# sysctl -n net.ipv6.fib_multipath_hash_policy
    2

MPLS

The Linux kernel selects the next hop of a multipath MPLS route using the following parameters:
    • Label stack, up to the limit of MAX_MP_SELECT_LABELS (4)
    • Source IP address
    • Destination IP address
    • Protocol of the inner IPv4/IPv6 header

Neighbor Detection

Run the following command to have multipath route selection take the liveness (failed/incomplete/unresolved) of neighbor entries into account:

    root@host:~# sysctl -w net.ipv4.fib_multipath_use_neigh=1

By default, this setting is disabled (0) and packets are forwarded to next hops regardless of neighbor state:

    root@host:~# sysctl -n net.ipv4.fib_multipath_use_neigh
    0

wECMP Using BGP on Linux

Unequal-cost load balancing is a way to distribute traffic unequally among the different paths that make up a multipath next hop when those paths have different bandwidth capabilities. BGP tags each route/path with the bandwidth of its link using the link bandwidth extended community. RPD uses the bandwidth information of each path to program the multipath next hops with appropriate weights. A weighted next hop allows the Linux kernel to load-balance traffic asymmetrically.

BGP forms a multipath next hop and uses the bandwidth values of the individual paths to determine the proportion of traffic that each next hop forming the ECMP next hop should receive. The bandwidth values specified in the link bandwidth community need not be the absolute bandwidth of the interface; they only need to reflect the relative bandwidth of one path compared to another. For details, see Understanding How to Define BGP Communities and Extended Communities and How BGP Communities and Extended Communities Are Evaluated in Routing Policy Match Conditions.

Consider a network in which R1 receives equal-cost paths from R2 and R3 to a destination R4. To send 90% of the load-balanced traffic over the path R1-R2 and the remaining 10% over the path R1-R3 using wECMP, tag the routes received from the two BGP peers with the link bandwidth community by configuring policy-options.

  1. Configure the policy statement:

    root@host> show configuration policy-options

  2. Verify that RPD uses the bandwidth values to unequally balance traffic across the multipath next hops:

    root@host> show route 100.100.100.100 detail

  3. The Linux kernel supports unequal load balancing by assigning a weight to each next hop:

    root@host:/# ip route show 100.100.100.100

    The weights are programmed into Linux as fractions of the integer 255 (the maximum value of an unsigned char). Each next hop in the ECMP next hop is given a weight proportional to its share of the total bandwidth.
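As a quick check of how those weights are derived, the following sketch computes each path's share of 255 from the relative bandwidths in the 90/10 example above (the variable names are illustrative):

```shell
# Each next hop receives floor(255 * its_bandwidth / total_bandwidth).
bw_r2=90    # relative bandwidth of path R1-R2
bw_r3=10    # relative bandwidth of path R1-R3
total=$((bw_r2 + bw_r3))
weight_r2=$((255 * bw_r2 / total))
weight_r3=$((255 * bw_r3 / total))
echo "R1-R2 weight: $weight_r2"
echo "R1-R3 weight: $weight_r3"
```

With the 90/10 split this yields weights of 229 and 25; they need not sum to exactly 255 because of integer truncation.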

Enable SRv6 on cRPD

You can enable IPv6 segment routing (SRv6) capability on cRPD using the following sysctl commands:

  1. Enable SRv6 and IPv6 forwarding.

    root@host:~# sysctl net.ipv6.conf.all.seg6_enabled=1

    root@host:~# sysctl net.ipv6.conf.all.forwarding=1

  2. Run the following command to enable SRv6 on the eth0 interface.

    root@host:~# sysctl net.ipv6.conf.eth0.seg6_enabled=1

  3. Run the following command to enable VRF strict mode, which is required for End.DT4 SIDs.

    root@host:~# sysctl -wq net.vrf.strict_mode=1
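With strict mode enabled, a local End.DT4 SID that decapsulates traffic into a VRF can be installed with iproute2 for illustration. A hypothetical sketch (the SID prefix fc00:0:0:100::/64, the VRF device vrf-red, and the table ID 10 are assumptions; run as root on a kernel built with SRv6 support):

```shell
# Install a local End.DT4 SID: packets whose destination matches the SID
# are decapsulated, and the inner IPv4 packet is looked up in the VRF's
# routing table (vrftable requires net.vrf.strict_mode=1).
ip -6 route add fc00:0:0:100::/64 encap seg6local action End.DT4 vrftable 10 dev vrf-red
```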