This is indeed fixed!
Jun 9 2021
Jun 7 2021
May 23 2021
May 20 2021
Mar 27 2021
Mar 3 2021
I've had this bite me a few times now as well, but I wasn't able to pin it down as a bug before.
Mar 2 2021
Feb 15 2021
Jan 22 2021
Jan 18 2021
Interesting. I agree, I think that defeats the purpose of VRRP if both have to be running. I guess we can go ahead and close this as not feasible at this time.
Jan 15 2021
This can be fixed by copying the two profile directories you're using (/usr/lib/tuned/network-throughput and /usr/lib/tuned/network-latency) to /etc/tuned and changing the sysctl section to contain only enabled=false. Here it is working:
root@cr01a-vyos:~# diff /usr/lib/tuned/network-latency/tuned.conf /etc/tuned/network-latency/tuned.conf
13,16c13
< net.core.busy_read=50
< net.core.busy_poll=50
< net.ipv4.tcp_fastopen=3
< kernel.numa_balancing=0
---
> enabled=false
root@cr01a-vyos:~# diff /usr/lib/tuned/network-throughput/tuned.conf /etc/tuned/network-throughput/tuned.conf
10,16c10
< # Increase kernel buffer size maximums. Currently this seems only necessary at 40Gb speeds.
< #
< # The buffer tuning values below do not account for any potential hugepage allocation.
< # Ensure that you do not oversubscribe system memory.
< net.ipv4.tcp_rmem="4096 87380 16777216"
< net.ipv4.tcp_wmem="4096 16384 16777216"
< net.ipv4.udp_mem="3145728 4194304 16777216"
---
> enabled=false
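If you want to script the same override, here's a minimal Python sketch of the copy-and-disable step. The helper name and the configparser approach are my own (tuned doesn't ship anything like this, and configparser will drop comments from the copied file), but it reproduces the same end state as the manual edit above:

```python
import configparser
from pathlib import Path

def override_sysctl(name, src_root="/usr/lib/tuned", dst_root="/etc/tuned"):
    """Copy a stock tuned profile into the override directory and
    replace its [sysctl] section with a single enabled=false."""
    src = Path(src_root) / name / "tuned.conf"
    dst = Path(dst_root) / name / "tuned.conf"
    dst.parent.mkdir(parents=True, exist_ok=True)

    cfg = configparser.ConfigParser(interpolation=None)
    cfg.read(src)

    # Throw away the stock sysctl tunables and disable the section.
    if cfg.has_section("sysctl"):
        cfg.remove_section("sysctl")
    cfg.add_section("sysctl")
    cfg["sysctl"]["enabled"] = "false"

    with open(dst, "w") as f:
        cfg.write(f)
```

Run it once per profile (network-latency and network-throughput) and tuned will pick up the /etc/tuned copies in preference to the stock ones.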
Jan 14 2021
Yep, I use RADIUS.
Jan 9 2021
This is fixed:
trae@cr01b-vyos:~$ restart flow-accounting
trae@cr01b-vyos:~$ show ver
Jan 7 2021
Jan 6 2021
Nov 3 2020
Yes! Turns out the following is what fixed it:
https://phabricator.vyos.net/T2980
Oct 26 2020
All of ours do as far as I know, which includes 10G, 40G, and 100G. I'm pretty sure 2.5G and 5G will as well.
Oct 18 2020
Works for me!
trae@cr01b-vyos:~$ show protocols bfd peers
Session count: 11
SessionId   LocalAddress                       PeerAddress                         Status
=========   ============                       ===========                         ======
3776760774  192.168.253.3                      192.168.253.7                       up
1851352402  fd52:d62e:8011:fffe:192:168:253:3  fd52:d62e:8011:fffe:192:168:253:6   up
3344115206  192.168.253.3                      192.168.253.2                       down
1252680903  fd52:d62e:8011:fffe:192:168:253:3  fd52:d62e:8011:fffe:192:168:253:2   down
3664188082  192.168.253.3                      192.168.253.6                       up
2809207409  fd52:d62e:8011:fffe:192:168:253:3  fd52:d62e:8011:fffe:192:168:253:1   up
2086113021  192.168.253.3                      192.168.253.12                      up
1362288442  unknown                            fd52:d62e:8011:fffe:192:168:253:12  down
3846665654  fd52:d62e:8011:fffe:192:168:253:3  fd52:d62e:8011:fffe:192:168:253:7   up
276439511   fd52:d62e:8011:fffe:192:168:253:3  fd52:d62e:8011:fffe:192:168:253:12  down
1342044518  192.168.253.3                      192.168.253.1                       up
Oct 16 2020
That sounds great to me! I actually like that more.
Oct 15 2020
awesome, thanks!
Oct 13 2020
Oct 12 2020
Oct 7 2020
Oct 6 2020
Sep 12 2020
Try a show ipv6 route ospf and look at the routes; they're probably being rejected:
trae@cr01b-vyos# run show ipv ro ospf
Codes: K - kernel route, C - connected, S - static, R - RIPng,
       O - OSPFv3, I - IS-IS, B - BGP, N - NHRP, T - Table,
       v - VNC, V - VNC-Direct, A - Babel, D - SHARP, F - PBR,
       f - OpenFabric, > - selected route, * - FIB route,
       q - queued route, r - rejected route
Sep 7 2020
Jul 12 2020
Jul 8 2020
Can regression testing of some sort be added for this? I've seen this issue crop up before, so I'd guess it's a good candidate for that if possible.
Closed - this is available as set system ip layer4-hashing
Oh, neat. Thanks, I'll close this then!
Jul 7 2020
Jul 4 2020
Jun 25 2020
This appears to be caused by setting service ssh listen-address; the script generating the config seems to omit the actual address. Removing the specific listen-address is a temporary workaround.
May 16 2020
May 13 2020
Actually I think this is a result of T2434 since the source interface is bond1, so I'm closing this again. My apologies.
@jjakob This doesn't appear to have been fixed in VyOS 1.3-rolling-202005130117
May 10 2020
May 8 2020
That build was given to me to test in #lobby by Thomas Mangin, so he may be able to tell you more about it if needed.
May 7 2020
Apr 22 2020
Yup that did it. Thanks!
Apr 21 2020
That fixed it for me. Thanks!
Just tested this using 1.3-rolling-202004201924 and it still happens, so that doesn't appear to have worked.
Apr 20 2020
show dhcp server leases now works, but I've found show dhcp server statistics is broken as well:
vyos@cr01b-vyos:~$ show dhcp server statistics
Traceback (most recent call last):
  File "/usr/libexec/vyos/op_mode/show_dhcp.py", line 243, in <module>
    leases = len(get_leases(lease_file, state='active', pool=p))
TypeError: get_leases() missing 1 required positional argument: 'leases'
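For what it's worth, that TypeError is the classic symptom of a call site not being updated after a function gains a positional parameter. A minimal reproduction of the failure mode (the signature below is illustrative, not the actual show_dhcp.py code):

```python
# Hypothetical signature: a second positional parameter `leases` was
# added, but the caller still passes only the first positional argument.
def get_leases(config, leases, state=None, pool=None):
    return [l for l in leases if state is None or l.get("state") == state]

try:
    # Old-style call site, never updated for the new parameter:
    get_leases("lease_file", state="active", pool=None)
except TypeError as e:
    msg = str(e)  # "get_leases() missing 1 required positional argument: 'leases'"
```

So the fix is likely just updating the statistics code path to pass the lease data the same way the (now working) leases code path does.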
Apr 17 2020
I'd also recommend not using a variable named stdout later on, since it's easily confused with sys.stdout (which took me a minute to figure out).
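A contrived illustration of the confusion (the variable names here are my own, not from the script in question):

```python
import subprocess

# Shadowing the name `stdout` makes every later read ambiguous:
# is this the captured process output, or the sys.stdout stream?
stdout = subprocess.run(["echo", "hi"], capture_output=True, text=True).stdout

# A descriptive name leaves no doubt about what the variable holds.
process_output = stdout.strip()
```

A name like process_output (or cmd_out) costs nothing and spares the next reader the double-take.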
I've found that on the most recent releases of VyOS, NetFlow flow-accounting is also broken. I've managed to fix the first two errors I encountered and verify that uacctd is indeed running; however, if IPv6 is used, another error is encountered which I did not fix. These changes also probably do not fix sFlow entirely.
Initial error:
vyos@cr01a-vyos# commit
[ system flow-accounting buffer-size 2048 ]
Apr 15 2020
Any reason in particular you're not using crypt.crypt() here?
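For reference, crypt.crypt() with a generated salt yields a shadow-style SHA-512 hash in two lines; a minimal sketch (note the crypt module was deprecated in Python 3.11 and removed in 3.13, so newer interpreters need an alternative such as passlib):

```python
try:
    import crypt  # stdlib on POSIX; deprecated in 3.11, removed in 3.13

    # mksalt() generates random salt bytes for the chosen method;
    # crypt() then returns a $6$... string usable in /etc/shadow-style fields.
    salt = crypt.mksalt(crypt.METHOD_SHA512)
    hashed = crypt.crypt("example-password", salt)
    is_sha512_hash = hashed.startswith("$6$")
except ImportError:
    is_sha512_hash = True  # interpreter without the crypt module
```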
Apr 13 2020
Shouldn't it be fixed at some point, though? Is there a reason this should remain something that has to be worked around?
Feb 2 2020
Confirmed here as well; I had a working config back on 1.2.3 and it broke when I upgraded to 1.3. This is what happens when I try to commit:
Jan 16 2020
@kroy I tried just now with vyos-1.3-rolling-202001160217 in UEFI mode (even forced UEFI boot only in the BIOS to make sure) and am still having the same problem.
Dec 11 2019
In T1869#49212, @hagbard wrote: Looks like an issue with the raid metadata and grub, problem confirmed with virtual box. Tested, latest rolling, 1.2.3 and 1.2.4-epa1.
@dmitry yes, I tried 1.2 rolling as well. I have not been able to try 1.2.3 stable due to a lack of access.