All Stories
Jan 4 2021
@c-po has also tested an ISO from the above branches, notably testing a vyatta-wanloadbalance configuration. It had been presumed that the listed dependency of vyatta-wanloadbalance on vyatta-config-migrate was a legacy error, as there were no explicit dependencies, but it required a sanity check. I will merge via PRs in order to better track the changes across the three required packages.
Jan 3 2021
Submitted a PR for the segment routing addition:
Thanks for that. However, encapsulation ip6gre is nowhere to be found in a documentation search on readthedocs.
As to why it got tagged VyOS Manager: the tag was automatically inserted when I created the issue, and no alternatives were offered by the UI at my user level.
Apologies for my ignorance, I am relatively new to VyOS though quite experienced with IPv6 and with routing in general.
Respectfully, I suggest reopening this ticket and recategorizing it appropriately as documentation, so the docs can be brought up to speed with the implementation.
@maznu I've also been looking at switching to Mellanox cards after my experience with Intel. It's not as if this is a mass consumer product with end-users that don't really know what they're doing. It's a product that's most likely supported by IT/network staff that get to influence purchasing decisions for equipment like this.
@drac, while yes, that is an option, I am unsure which one VyOS, as a software package, should use.
Or you could implement import and export policies for BGP, which is what I did.
We already had export policies but it is good practice to have both.
So, this option can be disabled here per the FRR manual:
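A minimal sketch of what that looks like in FRR's BGP configuration (the RFC 8212 behaviour is controlled by `bgp ebgp-requires-policy`, which is on by default since FRR 7.4; the AS number below is illustrative):

```
router bgp 64500
 no bgp ebgp-requires-policy
```

Defining real import and export policies, as suggested above, is the cleaner long-term fix than disabling the check.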
Just tried this with 7.4; yeah, the issue persists. Using 7.3, which has also been in the LTS version for so long...
This is due to RFC 8212, which was added in FRR 7.4. Please see here...
Reverting back to 7.4 series until those are resolved.
This in fact hit me too, which is super annoying and could take down your entire AS.
Here are my conclusions about the last week's shenanigans.
IPv6 GRE is already supported as tunnel encapsulation ip6gre. It is the IPv6 equivalent of GRE, which allows us to encapsulate any Layer 3 protocol over IPv6.
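For anyone landing here from a search, a minimal sketch in VyOS CLI terms (addresses are illustrative; option names differ slightly between 1.2 and 1.3 releases):

```
set interfaces tunnel tun0 encapsulation ip6gre
set interfaces tunnel tun0 source-address 2001:db8::1
set interfaces tunnel tun0 remote 2001:db8::2
set interfaces tunnel tun0 address 192.0.2.1/30
```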
Why should this task be created in VyOS Manager?
And a slightly longer-term traffic graph, showing CPU usage vs traffic levels across VyOS 1.2.5 to 1.3-rolling on the same XL710 box:
@drac we're a typical ISP/NSP, with a fair amount of eyeball traffic behind us, so expecting to see a fairly high amount of UDP for QUIC (but it's not the bulk of our traffic on our VyOS boxes which are BGP peering/transit edge). Each of our six VyOS boxes is pushing around 300-500Mbit/sec, of which two have XL710 NICs (the rest are a mix of ixgbe and qlcnic).
I've switched the image overnight.
Everything seems to work OK, apart from the following items being logged (104 entries so far, and it's only been up 20 minutes):
Jan 3 01:40:26 vyos kernel: [ 968.052496] l2tp_core: tunl 18294: recv short packet (len=12)
Jan 3 01:40:34 vyos kernel: [ 975.367315] l2tp_core: tunl 19358: recv short packet (len=12)
Jan 3 01:40:34 vyos kernel: [ 975.431273] l2tp_core: tunl 56700: recv short packet (len=12)
I've just realised, if you look in /var/log/messages you should see the LCP negotiation of MRU (i.e. the agreed MTU)
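For example, something like this illustrative one-liner (exact log wording varies by pppd/accel-ppp version):

```
grep -iE 'lcp.*(mru|mtu)' /var/log/messages
```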
Jan 2 2021
Just to add to the knowledge surrounding this issue...
Allowing utf-8 in general may be problematic:
(1) utf-8 within, say, a description name (see parent T2941) should be manageable; any restrictions in the legacy backend (I have not confirmed) could be addressed relatively easily
(2) allowing alternative quote characters (this task) is a can of worms for the lexer, and likely a very bad idea
If you look at vif 17, the description contains matched single quotes: notably, unicode character U+2018 'Left Single Quotation Mark'. Converting to utf-8 following the prescription in 'man utf-8' gives:
0xE2 0x80 0x98
hence the 0xE2 error message. One can reference this at
https://www.compart.com/en/unicode/U+2018, but this value was checked for completeness.
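This is easy to verify programmatically; a quick standalone Python check, nothing VyOS-specific:

```python
# U+2018 LEFT SINGLE QUOTATION MARK encodes to three bytes in UTF-8;
# the first byte is 0xE2, matching the error message quoted above.
ch = "\u2018"
encoded = ch.encode("utf-8")
print([hex(b) for b in encoded])  # ['0xe2', '0x80', '0x98']
```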
@drac enabling such debug features is not easily possible as we can not install two kernels in parallel.
The odd thing about this is that I don't seem to have this issue consistently across systems.
I have two identical systems (hardware) one of them acting as a PPPoE concentrator with OSPF, the other is an L2TP session concentrator with OSPF and BGP.
I only see this issue on the L2TP system. It's currently only doing around 50Mbps of UDP on average.
The PPPoE system does at least twice that on average.
It feels like a bug that we picked up when upgrading to the FRR 7.5 series.
I took the opportunity to update the supported protocols list of the dynamic DNS client. Thanks for the hint!
@drac are you seeing Slab in /proc/meminfo gradually increasing before the panic? If so, the sourceforge post at the top recommends disabling TUPLE "acceleration". It seems that the more traffic you have, the quicker the crash. We were getting them every ~6 hours.
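A simple way to keep an eye on that (illustrative one-liner; Slab is a standard /proc/meminfo field):

```
watch -n 60 'grep Slab /proc/meminfo'
```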
Loopback IP addresses are now automatically assigned to every VRF interface
47: bar: <NOARP,MASTER,UP,LOWER_UP> mtu 65536 qdisc noqueue state UP group default qlen 1000
    link/ether 76:7d:c0:53:6d:89 brd ff:ff:ff:ff:ff:ff
    inet 127.0.0.1/8 scope host bar
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
The system tries to bind itself to the localhost address, which is not in the VRF; this is definitely a fault. Why did I not see that?
Amending /etc/snmp/snmpd.conf as follows got it working for me (albeit temporarily). Our snmp listen-address is 10.13.0.56 in this instance.
Similar issue for snmpd:
The frequency of this issue seems to have increased; we now seem to be getting panics daily (it was every 4 days previously).
Also, your client should still not end up with 1454 set.
On our system, we have mtu set to 1500, and various clients appear to negotiate both 1500 and 1492 settings successfully via LCP stage of ppp.
The default in code is 1436 - so I really don't understand how the value of 1450 got there, unless there is a problem generating the file at /var/run/accel-pppd/l2tp.conf and it isn't being re-written.
The config you posted has the following, which is not correct; it should read 1454.
ppp-max-mtu=1450
Jan 1 2021
I think this may be related to the MAC bound to the device. You can modify the VyOS configuration to adjust the order.
Need the 'nopmtudisc' option for tunnel interfaces. This is required for MPLS-over-GRE or Ethernet-over-GRE applications. This option is described in the iproute2 manuals (ip-tunnel).
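For illustration, this is the raw iproute2 form of the requested option (endpoint addresses are made up; ip-tunnel(8) documents `nopmtudisc` and notes it is incompatible with a fixed ttl):

```
ip tunnel add tun0 mode gre local 192.0.2.1 remote 198.51.100.1 nopmtudisc
```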
Alternatively, we've got an i40e VyOS box in production which is stable with:
i40e is a tyre fire.
Seems i40e is a lot of fun. Given those nasty errors and Intel's development cycle, I have a recent 1.3 ISO with kernel 5.10.4 and built-in i40e drivers (mainline).
Frustratingly, 2.13.10 seems to have some other — very nasty — bugs in it. We've had three kernel crashes on the latest VyOS 1.3 releases (from around Christmas) as a result, and I currently believe they are the same as those problems described here:
Dec 31 2020
So we have configured the option max-mtu, which means:
ppp-max-mtu=n Set the maximum MTU value that can be negotiated for PPP over L2TP sessions.
But I think we also need to provide the possibility to set min-mtu:
[ppp] min-mtu=n
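As a sketch, the resulting accel-ppp configuration might look like this (section placement is per the accel-ppp docs as I understand them; the values are illustrative, not recommendations):

```
[ppp]
min-mtu=1280
mtu=1436

[l2tp]
ppp-max-mtu=1454
```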