It may be a good idea, but it sure needs a serious and broad discussion. I'm moving it to 1.4 for now, though we may move it back if we have time before the freeze.
- Queries
- All Stories
- Search
- Advanced Search
- Transactions
- Transaction Logs
All Stories
Jun 25 2020
Since we are heading towards a freeze, I believe it's better to leave big changes for later, even though I don't categorically disagree with the idea.
Still reproducible in 1.3-rolling-202006241940
Now that we have an HTTP API, I believe it's time to deprecate vymgmt altogether.
I have checked and fixed bandwidth default units for Limiter, Network Emulator, Rate Control,
Shaper and Shaper HFSC.
Jun 24 2020
vif interfaces are removed normally.
In T2630#68490, @Asteroza wrote: There is a weird area here, as 1G interfaces are generally capped at 9K more or less (whether limits include those overheads or not is always murky, e.g. switches that say they are 9K but also 9120). For VM NICs, you're never completely sure what the host or the switches directly connected to the hosts will allow either.
Maybe warn on over 9000 but don't block it? Also, what are NVMeoF/RoCE NICs reporting these days? Still, since path MTU discovery isn't reliable, directly testing the interface seems like a good fallback. Running it from udev might be okay, but for VM NICs the host changing the underlying hardware could cause changes at runtime, so rescan on every commit?
Update: After hooking up an actual EZIO device to my VM and working the code back and forth, I seem to have settled on this design:
It looks like this bug is in the kernel.
1.2.5 - 4.19.106-amd64
Works as expected.
The problem probably also exists with vif-s; it needs to be investigated.
This was done as part of T2633.
No problem occurred after updating another machine from VyOS 1.2-rolling-201910102056 to 1.3-rolling-202006230700. Login succeeded immediately after reboot.
There is a weird area here, as 1G interfaces are generally capped at 9K more or less (whether limits include those overheads or not is always murky, e.g. switches that say they are 9K but also 9120). For VM NICs, you're never completely sure what the host or the switches directly connected to the hosts will allow either.
Jun 23 2020
In T2630#68467, @thomas-mangin wrote: could have the range 68-65536 but it may be a bit on the extreme side.
could have the range 68-65536 but it may be a bit on the extreme side.
https://github.com/vyos/vyos-1x/pull/473 was merged, so we now need to agree on sane limits for the XML.
I have a PR for this (not changing the XML limiting range) up for review at the moment.
A new Jenkins job has been established at https://ci.vyos.net/job/vyos-build-netfilter/ using the pipeline from https://github.com/vyos/vyos-build/blob/current/packages/netfilter/Jenkinsfile
Just reproduced the same issue on a second system. Source: a VMware vSphere host.
Same problem here: after upgrading from 1.3-rolling-202005030117 to 1.3-rolling-202006230700, no login was possible. After resetting the admin user's password through password recovery, login works. The rest of the configuration was carried over as it should be.
Related to T2630.
vyos@vyos# set interfaces tunnel tun0 description '*** SITE1 ***'
[edit]
vyos@vyos# set interfaces tunnel tun0 encapsulation 'gre-bridge'
[edit]
vyos@vyos# set interfaces tunnel tun0 local-ip '10.0.3.239'
[edit]
vyos@vyos# set interfaces tunnel tun0 remote-ip '10.0.32.240'
[edit]
vyos@vyos# set interfaces tunnel tun0 ip enable-arp-accept
[edit]
vyos@vyos# set interfaces tunnel tun0 ip enable-arp-announce
[edit]
It would be possible to make the scripts check whether IPv6 is enabled on the interface (or system?) and set the minimum MTU to 1280 in that case. If IPv6 on the interface is disabled or not supported, let it go as low as the driver allows.
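A minimal sketch of that check (not the actual VyOS implementation; the function name and constants are assumptions): read the per-interface `disable_ipv6` sysctl to decide which floor to enforce. 1280 is the IPv6 minimum link MTU from RFC 8200; 68 is the IPv4 minimum from RFC 791.

```python
IPV6_MIN_MTU = 1280  # RFC 8200 minimum link MTU for IPv6
IPV4_MIN_MTU = 68    # RFC 791 minimum MTU for IPv4

def min_mtu(ifname: str) -> int:
    """Return the minimum MTU to allow on ifname.

    If IPv6 is disabled on the interface (or the sysctl tree does not
    exist, i.e. no IPv6 support), allow the lower IPv4 minimum;
    otherwise enforce the IPv6 floor of 1280.
    """
    flag = f'/proc/sys/net/ipv6/conf/{ifname}/disable_ipv6'
    try:
        with open(flag) as f:
            ipv6_disabled = f.read().strip() == '1'
    except FileNotFoundError:
        # No IPv6 sysctl tree for this interface: treat as IPv6-disabled
        ipv6_disabled = True
    return IPV4_MIN_MTU if ipv6_disabled else IPV6_MIN_MTU
```

The verify step of the config script could then reject any configured MTU below `min_mtu(ifname)` before anything is applied.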
This was already discussed in T2404. The problem is that NICs that expose their min/max MTU are rare. None of the NICs I have expose it, neither through sysfs nor through 'ip -d link show'. To recap the discussion from T2404, there are two main ways to solve this:
a) not have any limitations regarding MTU at all and then detect an error when trying to apply the new MTU. This means there is no way to verify whether the new MTU is correct beforehand, so it doesn't comply with the verify/apply separation prescribed in the developer docs. I described a possible workaround using revert code in T2404.
b) have an MTU detection script that would be run by udev on every new NIC detection (to support hotplugging NICs), determine the min/max MTU with a brute-force binary search algorithm (try to set an MTU and see if it errors), then record the results in a temporary file that gets read by the config script. The idea was proposed by @thomas-mangin.
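Option (b) could be sketched roughly as follows. This is an illustration, not the proposed script: the function names are assumptions, and the probe simply shells out to ip(8) and treats a non-zero exit code as rejection. The probe callable is injectable so the search logic can be exercised without root or a real NIC.

```python
import subprocess

def mtu_accepted(ifname: str, mtu: int) -> bool:
    """Try to set the MTU via ip(8); True if the kernel/driver accepts it."""
    r = subprocess.run(['ip', 'link', 'set', ifname, 'mtu', str(mtu)],
                       capture_output=True)
    return r.returncode == 0

def find_max_mtu(ifname: str, lo: int = 68, hi: int = 65536,
                 probe=mtu_accepted) -> int:
    """Binary-search the largest MTU in [lo, hi] that the driver accepts.

    Assumes acceptance is monotonic: if an MTU is accepted, every
    smaller value is too, so the boundary can be found in O(log n) tries.
    """
    best = lo
    while lo <= hi:
        mid = (lo + hi) // 2
        if probe(ifname, mid):
            best = mid      # mid works; look for something larger
            lo = mid + 1
        else:
            hi = mid - 1    # mid rejected; look lower
    return best
```

A udev rule would run this once per detected NIC and write the result to a temporary file for the config script to read; the search itself should restore the interface's original MTU afterwards, which this sketch omits.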
@systo Marking this as resolved. Reopen it if necessary.