Tue, Jan 12
p.s. lines 478 and 479 referred to qmimux0.pcap
Lines 478 and 479 seem to be the important ones here.
The MRU of 1450 appears to be correctly negotiated, i.e. VyOS is setting the remote side to a maximum of 1450 bytes, because that is what the remote side requested!
Sun, Jan 3
@maznu I've also been looking at switching to Mellanox cards after my experience with Intel. It's not as if this is a mass consumer product with end-users that don't really know what they're doing. It's a product that's most likely supported by IT/network staff who get to influence purchasing decisions for equipment like this.
Or you could implement import and export policies for BGP, which is what I did.
We already had export policies but it is good practice to have both.
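For anyone following along, import/export policies are attached per-neighbour in VyOS. A sketch using VyOS 1.2-era syntax (the route-map names, ASN 64512 and neighbour 192.0.2.1 are placeholders, not from this deployment):

```shell
set policy route-map BGP-IN rule 10 action 'permit'
set policy route-map BGP-OUT rule 10 action 'permit'
set protocols bgp 64512 neighbor 192.0.2.1 address-family ipv4-unicast route-map import 'BGP-IN'
set protocols bgp 64512 neighbor 192.0.2.1 address-family ipv4-unicast route-map export 'BGP-OUT'
```

Real policies would of course match prefixes/communities in the route-map rules rather than permitting everything.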
I've switched the image overnight.
Everything seems to work OK, apart from the following items being logged: 104 entries so far, and it's only been up 20 minutes.
Jan 3 01:40:26 vyos kernel: [ 968.052496] l2tp_core: tunl 18294: recv short packet (len=12)
Jan 3 01:40:34 vyos kernel: [ 975.367315] l2tp_core: tunl 19358: recv short packet (len=12)
Jan 3 01:40:34 vyos kernel: [ 975.431273] l2tp_core: tunl 56700: recv short packet (len=12)
I've just realised: if you look in /var/log/messages you should see the LCP negotiation of the MRU (i.e. the agreed MTU).
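A quick way to pull those lines out is a grep like the one below (a hypothetical helper, not a VyOS built-in; the exact log message format depends on the accel-ppp version, so the pattern may need adjusting):

```shell
# Filter LCP lines mentioning MRU/MTU from a log file
# (defaults to /var/log/messages if no argument is given).
lcp_mru_lines() {
    grep -iE 'lcp.*(mru|mtu)' "${1:-/var/log/messages}"
}
```

e.g. `lcp_mru_lines /var/log/messages` on the concentrator after a client connects.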
Sat, Jan 2
The odd thing about this is that I don't seem to have this issue consistently across systems.
I have two identical systems (hardware) one of them acting as a PPPoE concentrator with OSPF, the other is an L2TP session concentrator with OSPF and BGP.
I only see this issue on the L2TP system. It's currently only doing around 50Mbps of UDP on average.
The PPPoE system does at least twice that on average.
The frequency of this issue seems to have increased; we are now getting panics daily (it was every 4 days previously).
Also, your client should still not end up with 1454 set.
On our system, we have mtu set to 1500, and various clients appear to negotiate both 1500 and 1492 settings successfully via LCP stage of ppp.
The default in the code is 1436, so I really don't understand how the value of 1450 got there, unless there is a problem generating the file at /var/run/accel-pppd/l2tp.conf and it isn't being re-written.
The config you posted has ppp-max-mtu set to an incorrect value; it should read 1454.
Thu, Dec 31
Looks like it's not an issue anymore in latest iso.
show mpls table was outputting data.
I've never configured MPLS on anything.
I've loaded the latest release from yesterday, and I'm no longer seeing the issue?
On server, what is in /var/run/accel-pppd/l2tp.conf ?
The setting should read ppp-max-mtu=1454 under the [l2tp] section.
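For reference, the relevant fragment of the generated file should look something like this (a sketch showing only the option under discussion; the real file carries many other options in that section):

```ini
[l2tp]
ppp-max-mtu=1454
```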
Also I'd expect something is wrong on the client side, can you see the PPP config options the Teltonika is using?
The MTU setting there is clearly labelled "max-mtu".
Can you capture the LCP stage of the PPP negotiation from either the client or the server? It sounds like it's negotiating a smaller value for some reason.
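One way to grab that on the server side is to capture the L2TP traffic itself, since the PPP/LCP frames are carried inside it (a sketch; the interface name is a placeholder, and this assumes plain L2TP over UDP 1701, not encrypted inside IPsec):

```shell
# Capture L2TP (UDP/1701) so the LCP ConfReq/ConfAck exchange,
# including the MRU option, can be inspected in Wireshark.
tcpdump -i eth0 -w /tmp/l2tp-lcp.pcap 'udp port 1701'
```

Open the pcap in Wireshark and filter on `lcp` to see the MRU values each side requested and acknowledged.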
Wed, Dec 30
Nov 23 2020
- Totally agree with this. We had this same issue when we used to run Vyatta. Took me ages to figure out too.
However, I'm not sure what the best way to implement this would be. I read a good explanation here about when to increase and change interrupt settings.
Do you think a config option is best e.g.
set interfaces ethernet eth0 advanced ring rx nnnn
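For comparison, such a config node would presumably wrap the existing ethtool ring controls (eth0 and 4096 are placeholders; the supported maximum is NIC/driver-dependent and these commands need root):

```shell
ethtool -g eth0          # show current and maximum RX/TX ring sizes
ethtool -G eth0 rx 4096  # set the RX ring size
```

The config option would also need to re-apply the value on boot and on interface re-creation, which is what makes it nicer than a one-shot ethtool call.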
Oct 23 2020
The latest driver also includes a fix for link flapping, which is the issue we are experiencing.
Oct 22 2020
Oct 20 2020
Sep 11 2019
I have been trying this new feature out.
- I had configured an MTU value and had some sessions connected. I realised I had set it incorrectly, so I modified it to the correct value. On commit I received an error (sorry, I don't have it at present), to the effect that accel-pppd was not running on localhost:2004.
I had to reboot the router to get it working again.