
Performance system option destroys defined sysctl custom params
Open, Requires assessmentPublicBUG

Description

While testing with a Xena traffic generator, a bug was found with custom sysctl params defined via the VyOS CLI. For example, the net.ipv4.neigh.default.gc_thresh* values are reset to their defaults when the performance throughput option is applied.
Steps to reproduce:

set system sysctl custom net.ipv4.neigh.default.gc_thresh1 value '131072'
set system sysctl custom net.ipv4.neigh.default.gc_thresh2 value '262000'
set system sysctl custom net.ipv4.neigh.default.gc_thresh3 value '524000'
commit

Check the params:

vyos@vyos# sudo sysctl net.ipv4.neigh.default.gc_thresh1
net.ipv4.neigh.default.gc_thresh1 = 131072

Set the performance option:

set system option performance throughput
commit

Check the values again:

vyos@vyos# sudo sysctl net.ipv4.neigh.default.gc_thresh1
net.ipv4.neigh.default.gc_thresh1 = 1024

Details

Difficulty level
Unknown (require assessment)
Version
1.3-rolling-202101060217
Why did the issue appear?
Will be filled on close
Is it a breaking change?
Unspecified (possibly destroys the router)

Event Timeline

tuned is known for altering sysctl values; this is documented on the project's web page. I simply did not think about that issue when installing tuned.

We might be better off using irqbalance instead, which can be installed with apt-get install -y irqbalance
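If we do go that route, the swap would look roughly like the commands below. This is a sketch, not a tested VyOS recipe: it assumes a Debian-based image with apt access and systemd, and the service name irqbalance as shipped by Debian.

```shell
# Sketch: replace tuned with irqbalance (assumes Debian packaging and systemd).
# Stop tuned first so it no longer rewrites sysctl values on profile changes.
systemctl disable --now tuned

# Install irqbalance and start it; it only spreads IRQs across CPUs
# and does not touch sysctl, so custom params survive.
apt-get install -y irqbalance
systemctl enable --now irqbalance
```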

This can be fixed by copying the two profile directories in use (/usr/lib/tuned/network-throughput and /usr/lib/tuned/network-latency) to /etc/tuned and changing the [sysctl] section to contain only enabled=false. Here it is working:

root@cr01a-vyos:~# diff /usr/lib/tuned/network-latency/tuned.conf /etc/tuned/network-latency/tuned.conf
13,16c13
< net.core.busy_read=50
< net.core.busy_poll=50
< net.ipv4.tcp_fastopen=3
< kernel.numa_balancing=0
---
> enabled=false
root@cr01a-vyos:~# diff /usr/lib/tuned/network-throughput/tuned.conf /etc/tuned/network-throughput/tuned.conf 
10,16c10
< # Increase kernel buffer size maximums.  Currently this seems only necessary at 40Gb speeds.
< #
< # The buffer tuning values below do not account for any potential hugepage allocation.
< # Ensure that you do not oversubscribe system memory.
< net.ipv4.tcp_rmem="4096 87380 16777216"
< net.ipv4.tcp_wmem="4096 16384 16777216"
< net.ipv4.udp_mem="3145728 4194304 16777216"
---
> enabled=false
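The workaround can be scripted. The helper below is hypothetical (not part of VyOS or tuned); it assumes the stock layout where local copies under /etc/tuned override same-named profiles in /usr/lib/tuned, and that [sysctl] is the last section of each tuned.conf, which the diffs above suggest for these two profiles.

```shell
#!/bin/sh
# disable_sysctl: copy a tuned profile and neuter its [sysctl] section.
# Hypothetical helper; assumes [sysctl] is the final section in the file.
disable_sysctl() {
    src_dir="$1"; dst_dir="$2"; profile="$3"
    cp -r "${src_dir}/${profile}" "${dst_dir}/"
    conf="${dst_dir}/${profile}/tuned.conf"
    # Drop the [sysctl] section (everything from its header to EOF) ...
    sed -i '/^\[sysctl\]/,$d' "$conf"
    # ... and re-add it with the plugin disabled.
    printf '[sysctl]\nenabled=false\n' >> "$conf"
}

# On a live system this would be roughly:
# disable_sysctl /usr/lib/tuned /etc/tuned network-throughput
# disable_sysctl /usr/lib/tuned /etc/tuned network-latency
```

After this, tuned still applies the profile's scheduler and CPU tunings but leaves sysctl alone, so CLI-defined custom params persist.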


trae@cr01a-vyos# set system sysctl custom net.ipv4.neigh.default.gc_thresh1 value '131072'
[edit]
trae@cr01a-vyos# set system sysctl custom net.ipv4.neigh.default.gc_thresh2 value '262000'
[edit]
trae@cr01a-vyos# set system sysctl custom net.ipv4.neigh.default.gc_thresh3 value '524000'
[edit]
trae@cr01a-vyos# commit
[edit]
trae@cr01a-vyos# sudo sysctl net.ipv4.neigh.default.gc_thresh{1,2,3}
net.ipv4.neigh.default.gc_thresh1 = 131072
net.ipv4.neigh.default.gc_thresh2 = 262000
net.ipv4.neigh.default.gc_thresh3 = 524000
[edit]
trae@cr01a-vyos# set system option performance latency 
[edit]
trae@cr01a-vyos# commit
[edit]
trae@cr01a-vyos# sudo sysctl net.ipv4.neigh.default.gc_thresh{1,2,3}
net.ipv4.neigh.default.gc_thresh1 = 131072
net.ipv4.neigh.default.gc_thresh2 = 262000
net.ipv4.neigh.default.gc_thresh3 = 524000
trae@cr01a-vyos# tuned-adm profile | grep active
Current active profile: network-latency

Docs: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/performance_tuning_guide/chap-red_hat_enterprise_linux-performance_tuning_guide-tuned#custom-profiles