Configure Settings on Host OS

10-Dec-24

This chapter describes how to tune settings on the host OS to enable advanced features or to increase the scale of cRPD functionality.

Configure ARP Scaling

The maximum number of ARP entries is controlled by the Linux kernel on the host. You can adjust the ARP (IPv4) or NDP (IPv6) entry limits using the sysctl command on the Linux host.

For example, to adjust the maximum number of IPv4 ARP entries:

root@host:~# sysctl -w net.ipv4.neigh.default.gc_thresh1=4096

root@host:~# sysctl -w net.ipv4.neigh.default.gc_thresh2=8192

root@host:~# sysctl -w net.ipv4.neigh.default.gc_thresh3=8192

For example, to adjust the maximum number of IPv6 NDP entries:

root@host:~# sysctl -w net.ipv6.neigh.default.gc_thresh1=4096

root@host:~# sysctl -w net.ipv6.neigh.default.gc_thresh2=8192

root@host:~# sysctl -w net.ipv6.neigh.default.gc_thresh3=8192
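Settings changed with sysctl -w do not survive a reboot. To make the neighbor-table limits persistent, you can place them in a sysctl configuration file (a sketch; the file name 99-crpd-neigh.conf is a hypothetical choice). gc_thresh1 is the minimum number of entries kept, gc_thresh2 the soft maximum, and gc_thresh3 the hard maximum:

```
# /etc/sysctl.d/99-crpd-neigh.conf (hypothetical file name)
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv6.neigh.default.gc_thresh1 = 4096
net.ipv6.neigh.default.gc_thresh2 = 8192
net.ipv6.neigh.default.gc_thresh3 = 8192
```

You can apply the file without rebooting by running sysctl --system.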

IGMP Membership Under Linux

To support a greater number of OSPFv2/OSPFv3 adjacencies with cRPD, increase the IGMP membership limit:

root@host:~# sysctl -w net.ipv4.igmp_max_memberships=1000

Kernel Modules

You must load the following kernel modules on the host before you deploy cRPD in Layer 3 mode. These modules are usually available in the linux-modules-extra or kernel-modules-extra package. Run the following commands to load the kernel modules:

  • modprobe tun

  • modprobe fou

  • modprobe fou6

  • modprobe ipip

  • modprobe ip_tunnel

  • modprobe ip6_tunnel

  • modprobe mpls_gso

  • modprobe mpls_router

  • modprobe mpls_iptunnel

  • modprobe vrf

  • modprobe vxlan
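Modules loaded with modprobe are lost on reboot. To load them automatically at boot on systemd-based hosts, you can list them in a modules-load.d configuration file (a sketch; the file name crpd.conf is a hypothetical choice):

```
# /etc/modules-load.d/crpd.conf (hypothetical file name)
tun
fou
fou6
ipip
ip_tunnel
ip6_tunnel
mpls_gso
mpls_router
mpls_iptunnel
vrf
vxlan
```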

Configure MPLS

To configure MPLS in Linux kernel:

  1. Load the MPLS modules in the container using modprobe or insmod:

    root@crpd-ubuntu3:~# modprobe mpls_iptunnel

    root@crpd-ubuntu3:~# modprobe mpls_router

    root@crpd-ubuntu3:~# modprobe ip_tunnel

  2. Verify that the MPLS modules are loaded in the host OS.

    root@host:~# lsmod | grep mpls

    mpls_iptunnel          16384  0
    mpls_router            28672  1 mpls_iptunnel
    ip_tunnel              24576  4 ipip,ip_gre,sit,mpls_router
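Loading the modules alone does not turn on MPLS forwarding in the kernel. You typically also need to size the MPLS label table and enable MPLS input on the relevant interfaces with sysctl (a hedged sketch; the label-table size 1048575 and the interface name eth1 are example values):

```
root@host:~# sysctl -w net.mpls.platform_labels=1048575
root@host:~# sysctl -w net.mpls.conf.eth1.input=1
```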

Hash Field Selection for ECMP Load Balancing on Linux

You can select the ECMP hash policy (fib_multipath_hash_policy) for both forwarded and locally generated traffic (IPv4/IPv6).

IPv4 Traffic

  1. By default, the Linux kernel uses the Layer 3 hash policy to load-balance IPv4 traffic. Layer 3 hashing uses the following information:
    • Source IP address
    • Destination IP address

    root@host:~# sysctl -n net.ipv4.fib_multipath_hash_policy
    0

  2. Run the following command to load-balance IPv4 traffic using the Layer 4 hash policy. Layer 4 hashing load-balances traffic based on the following information:
    • Source IP address
    • Destination IP address
    • Source port number
    • Destination port number
    • Protocol

    root@host:~# sysctl -w net.ipv4.fib_multipath_hash_policy=1

    root@host:~# sysctl -n net.ipv4.fib_multipath_hash_policy
    1

  3. Run the following command to use Layer 3 hashing on the inner packet header (IPv4/IPv6 over IPv4 GRE):

    root@host:~# sysctl -w net.ipv4.fib_multipath_hash_policy=2

    root@host:~# sysctl -n net.ipv4.fib_multipath_hash_policy
    2

    For packets that carry no inner header, this policy falls back to the Layer 3 hashing of the outer header described in step 1.

IPv6 Traffic

  4. By default, the Linux kernel uses the Layer 3 hash policy to load-balance IPv6 traffic. The Layer 3 hash policy load-balances traffic based on the following information:
    • Source IP address
    • Destination IP address
    • Flow label
    • Next header (protocol)

    root@host:~# sysctl -n net.ipv6.fib_multipath_hash_policy
    0

  5. Run the following command to load-balance IPv6 traffic using the Layer 4 hash policy. The Layer 4 hash policy load-balances traffic based on the following information:
    • Source IP address
    • Destination IP address
    • Source port number
    • Destination port number
    • Next header (protocol)

    root@host:~# sysctl -w net.ipv6.fib_multipath_hash_policy=1

    root@host:~# sysctl -n net.ipv6.fib_multipath_hash_policy
    1

  6. Run the following command to use Layer 3 hashing on the inner packet header (IPv4/IPv6 over IPv6 GRE):

    root@host:~# sysctl -w net.ipv6.fib_multipath_hash_policy=2

    root@host:~# sysctl -n net.ipv6.fib_multipath_hash_policy
    2

MPLS

  7. The Linux kernel can select the next hop of a multipath MPLS route using the following parameters:
    • Label stack, up to the limit of MAX_MP_SELECT_LABELS (4)
    • Source IP address
    • Destination IP address
    • Protocol of the inner IPv4/IPv6 header

Neighbor Detection

  8. Run the following command to have the kernel take the liveness (failed/incomplete/unresolved state) of neighbor entries into account when selecting a next hop:

    root@host:~# sysctl -w net.ipv4.fib_multipath_use_neigh=1

    By default, this setting is disabled (0), and packets are forwarded to next hops regardless of neighbor state:

    root@host:~# sysctl -n net.ipv4.fib_multipath_use_neigh
    0
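With the Layer 4 hash policy enabled, you can check which next hop the kernel selects for a particular flow by passing an explicit 5-tuple to ip route get (a hedged example; the destination address and port numbers are hypothetical):

```
root@host:~# ip route get 100.100.100.100 ipproto tcp sport 1024 dport 179
```

Varying the source or destination port shows how different flows map onto the different next hops of the multipath route.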

wECMP Using BGP on Linux

Unequal-cost load balancing is a way to distribute traffic unequally among the different paths that comprise a multipath next hop when those paths have different bandwidth capabilities. BGP tags each route/path with the bandwidth of its link using the link bandwidth extended community; the bandwidth of the corresponding link is encoded as part of this community. RPD uses the bandwidth information of each path to program the multipath next hops with appropriate weights. A weighted next hop allows the Linux kernel to load-balance traffic asymmetrically.

BGP forms a multipath next hop and uses the bandwidth values of the individual paths to determine the proportion of traffic that each next hop forming the ECMP next hop should receive. The bandwidth values specified in the link bandwidth community need not be the absolute bandwidth of the interface; they need only reflect the bandwidth of one path relative to another. For details, see Understanding How to Define BGP Communities and Extended Communities and How BGP Communities and Extended Communities Are Evaluated in Routing Policy Match Conditions.

Consider a network in which R1 receives equal-cost paths from R2 and R3 to a destination R4. Suppose you want to send 90% of the load-balanced traffic over the path R1-R2 and the remaining 10% over the path R1-R3 using wECMP. You need to tag the routes received from the two BGP peers with the link bandwidth community by configuring policy-options.

  1. Configure the policy statements.

    root@host> show configuration policy-options

    policy-statement add-high-bw {
        then {
            community set high-bw;
            accept;
        }
    }
    policy-statement add-low-bw {
        then {
            community set low-bw;
            accept;
        }
    }
    community high-bw members [ bandwidth:2:90 ];
    community low-bw members [ bandwidth:2:10 ];
    
  2. RPD uses the bandwidth values to unequally balance traffic across the multiple next hops.

    root@host> show route 100.100.100.100 detail

    inet.0: 13 destinations, 16 routes (13 active, 0 holddown, 0 hidden)
    100.100.100.100/32 (2 entries, 1 announced)
            *BGP    Preference: 170/-101
                    Next hop type: Router, Next hop index: 0
                    Address: 0x565535f37a3c
                    Next-hop reference count: 10
                    Source: 10.1.1.5
                    Next hop: 20.1.1.5 via eth2 balance 10%, selected
                    Session Id: 0x0
                    Next hop: 10.1.1.5 via eth1 balance 90%
    
  3. The Linux kernel supports unequal load balancing by assigning a weight to each next hop.

    root@host:/# ip route show 100.100.100.100

    100.100.100.100 proto 22
    	nexthop via 20.1.1.5 dev eth2 weight 26
    	nexthop via 10.1.1.5 dev eth1 weight 229
    

    The weights are programmed into Linux as divisions of the integer 255 (the maximum value of an unsigned char). Each next hop in the ECMP next hop is given a weight proportional to its share of the total bandwidth.
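The 26/229 split shown above follows from dividing 255 in proportion to the advertised bandwidths (90 and 10). A minimal sketch of the arithmetic, assuming the remainder after integer division is assigned to the other next hop so the weights always sum to 255:

```shell
# Divide 255 in proportion to the link bandwidth community values 90 and 10.
bw_high=90
bw_low=10
total=$((bw_high + bw_low))
# Integer division gives the high-bandwidth next hop's weight ...
w_high=$((bw_high * 255 / total))
# ... and the other next hop receives the remainder, so the sum is 255.
w_low=$((255 - w_high))
echo "eth1 weight ${w_high}"
echo "eth2 weight ${w_low}"
```

Running this prints eth1 weight 229 and eth2 weight 26, matching the kernel route shown above.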

Enable SRv6 on cRPD

You can enable IPv6 segment routing (SRv6) capability on cRPD using the following sysctl commands:

  1. Enable segment routing and IPv6 forwarding.

    root@host:~# sysctl net.ipv6.conf.all.seg6_enabled=1

    root@host:~# sysctl net.ipv6.conf.all.forwarding=1

  2. Run the following command to enable SRv6 on the eth0 interface.

    root@host:~# sysctl net.ipv6.conf.eth0.seg6_enabled=1

  3. Run the following command to enable VRF strict mode, which is required for End.DT4 SIDs.

    root@host:~# sysctl -wq net.vrf.strict_mode=1
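With VRF strict mode enabled, an End.DT4 SID can be expressed through iproute2 as follows, shown here only to illustrate what the kernel programs (a hedged sketch; the SID fc00:1::100, VRF table 100, and interface eth0 are hypothetical values, and cRPD installs such routes itself):

```
root@host:~# ip -6 route add fc00:1::100/128 encap seg6local action End.DT4 vrftable 100 dev eth0
```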
