I only have a workaround right now; it appears that grub.cfg can't be found. To at least boot the installed system, do the following within the grub shell:
configfile /EFI/VyOS/grub.cfg
Move to 'Needs testing'; repo linked below. Add example of third-party package, and test, before PR.
Seems to be a simple fix, so I've submitted a PR: https://github.com/vyos/vyos-1x/pull/199
@hagbard I tried testing by installing the package.
The service is running but not working correctly.
The following is shown:
Jan 07 10:25:54 server snmpd[9979]: /etc/snmp/snmpd.conf: line 10: Warning: Unknown token: smuxpeer.
Jan 07 10:25:54 server snmpd[9979]: /etc/snmp/snmpd.conf: line 11: Warning: Unknown token: smuxpeer.
Jan 07 10:25:54 server snmpd[9979]: /etc/snmp/snmpd.conf: line 12: Warning: Unknown token: smuxsocket.
Jan 07 10:25:54 server snmpd[9979]: notificationEvent OID: linkUp
Jan 07 10:25:54 server snmpd[9979]: /etc/snmp/snmpd.conf: line 21: Error: unknown notification OID
Jan 07 10:25:54 server snmpd[9979]: notificationEvent OID: linkDown
Jan 07 10:25:54 server snmpd[9979]: /etc/snmp/snmpd.conf: line 22: Error: unknown notification OID
Jan 07 10:25:54 server snmpd[9979]: /etc/snmp/snmpd.conf: line 23: Warning: Unknown token: monitor.
Jan 07 10:25:54 server snmpd[9979]: /etc/snmp/snmpd.conf: line 24: Warning: Unknown token: monitor.
Jan 07 10:25:54 server snmpd[9979]: net-snmp: 2 error(s) in config file(s)
@bbabich If router A and router B are connected via iBGP, you need to use nexthop-self.
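For illustration, on the VyOS 1.2 CLI this is a per-neighbor option; a sketch, where the AS number and neighbor address are placeholders and the exact path may differ on 1.3:

```
set protocols bgp 65000 neighbor 192.0.2.2 nexthop-self
commit
```

With nexthop-self, the iBGP speaker rewrites the next hop of routes it passes on to its own address, so the peer does not need a route to the external next hop.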
Hi @bbabich
How can we reproduce this bug?
I tested with 55 BGP sessions, each with its own unique filter. All filters were applied as needed.
Without filters I announced 111 routes.
With per-session filters I export one route to each peer.
Sure thing, let me know the result.
@Merijn https://github.com/vyos/vyos-1x/commit/78df0c46865b3af89d6bc327b4c1d08cc4450aff or tomorrow's rolling. Since you seem to compile it yourself, it should now work out of the box once you install the new vyos-1x package.
(http://dev.packages.vyos.net/repositories/current/pool/main/v/vyos-1x/vyos-1x_1.3.0-16_all.deb)
The default Debian SNMP user is called Debian-snmp, while the script tries to get the UID of the user snmp. Looks like that is the entire issue.
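A hedged sketch of how the lookup could fall back between the two user names (the function name and candidate order are made up for illustration, not taken from the actual script):

```shell
#!/bin/sh
# Return the UID of the first user name that actually exists on this
# system; return non-zero if none of the candidates exist.
first_existing_uid() {
    for user in "$@"; do
        if uid=$(id -u "$user" 2>/dev/null); then
            printf '%s\n' "$uid"
            return 0
        fi
    done
    return 1  # none of the candidate users exist
}

# Prefer Debian's "Debian-snmp", fall back to the legacy "snmp" user.
first_existing_uid Debian-snmp snmp || echo "no snmpd user found"
```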
PR for this task https://github.com/vyos/vyos-1x/pull/198
In 1.3, default-route 'force' works as expected; for 1.2.x (crux) we need to merge the patch proposed by @hagbard.
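For reference, a sketch of the interface-level option under discussion; the interface name is a placeholder, and I'm assuming the usual dhcp-options path on the 1.x CLI:

```
set interfaces ethernet eth0 dhcp-options default-route force
commit
```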
This was tested on a different VyOS install, so I will test again and confirm.
I just did a build of Crux 1.2.4. The issue that appeared in 1.2.3 did not occur in 1.2.4.
Hi @rherold, I can't reproduce this behaviour in the lab.
I have a clean vyos-1.2.3 install and performed the following steps:
#!/bin/sh
# This script is executed at boot time after VyOS configuration is fully applied.
# Any modifications required to work around unfixed bugs
# or use services not available through the VyOS CLI system can be placed here.
/config/scripts/rcs/rcs-mgnt-vlan.sh 2>/dev/null 1>/dev/null
#!/bin/bash
Tested successfully on VyOS 1.3-rolling-202001060217
Hi @max1e6 , can you please share the relevant openvpn config, so I can try to reproduce the issue? Thx.
@kroy thx for testing, glad it is working for you, even though I'm not really satisfied with it.
@hagbard Confirmed your hack takes care of the issue.
Yup, https://phabricator.vyos.net/T1831 is pending. FRR RAs will fix the issue entirely; alternatively, going with set service ipv6-ra interface ... would fix it too, regardless of which daemon is used on the backend, and would also make the backend interchangeable.
I was hoping you have some input here.
As far as I can remember, there is no objection to FRR for RAs; it's only the CLI structure that is still open.
I actually suggested using reload/SIGHUP. The problem is the very rapid reloads the vyos script sends to systemd. start-stop-daemon is handled by systemd in Debian Buster; in Jessie it was still handled by sysvinit, so it didn't have any limits. I suppose systemd applies some default restart limits/timeouts that can otherwise be adjusted in unit/service files. It could be converted to a native systemd service so the limits can be set, if there is a corresponding setting that would fix the issue. Otherwise it would be better not to use systemd to send SIGHUP at all and instead send the signal directly to the daemon, with the PID read from the pid file. Or switch to using FRR for RAs; what's the progress on that?
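For reference, if we stay on systemd those limits can be raised with a drop-in; a sketch, where the unit name, path, and values are all assumptions:

```ini
# /etc/systemd/system/radvd.service.d/limits.conf (hypothetical path)
[Unit]
# Buster's defaults are 5 starts within 10 s; allow many more so the
# per-interface reloads at boot are not refused.
StartLimitIntervalSec=10s
StartLimitBurst=50
```

After editing, `systemctl daemon-reload` is needed for the change to take effect.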
That's what it does, but via the init script.
Why not check if radvd is running and send a HUP signal instead?
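That could look roughly like the following minimal sketch (the function name and pid-file path are illustrative, not taken from the actual scripts):

```shell
#!/bin/sh
# Signal a daemon directly via its pid file instead of going through
# systemd. Returns non-zero when the daemon is not running, so the
# caller can choose to start it instead.
hup_daemon() {
    pidfile="$1"
    if [ -r "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null; then
        kill -HUP "$(cat "$pidfile")"
    else
        return 1
    fi
}

# Pid-file location is an assumption for illustration.
hup_daemon /run/radvd/radvd.pid || echo "radvd not running, would start it instead"
```

Sending the signal directly sidesteps systemd's start rate limiting entirely, since no unit state change is involved.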
My system takes 6 minutes to boot, and it only has two DHCP servers and about 12 interfaces.
Let's see if it does the job; it's more a hack than a fix. The real problem is that you get a reload for each interface that has router-advert configured during boot.
Systemd doesn't like the quick restarts during boot and rate-limits them.
I'll wait to test until the reload fix is merged.
Definitely confirmed as of yesterday's rolling: it still occurs on three different VMs.
Confirmed fixed in a newer rolling.
By the way, may I point out that there are several bugs in the stop function of vyos-router?
Why not use WantedBy instead of RequiredBy in vyos-hostsd.service, like:
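Presumably something along these lines in the [Install] section (the consuming unit name here is an assumption based on the discussion):

```ini
[Install]
# WantedBy creates a soft (Wants=) dependency: the consuming unit
# still starts and keeps running even if vyos-hostsd fails, unlike
# RequiredBy/Requires.
WantedBy=vyos-router.service
```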
This is already fixed in the latest rolling by T1921; please recheck the latest rolling.
PR for this task: https://github.com/vyos/vyatta-cfg-system/pull/116, which renames masked VF interfaces to vf_ethX.
I'm going to change the node.def to reload if radvd is already running; not sure if that fixes it, since the issue appears to be systemd-related.
Also, if you just disable a vif, radvd won't be restarted and is left with an invalid interface it tries to announce on; that's why I want to move it out of the interfaces and set it up as its own service.
Hmm, I can't reproduce the issue; it looks good when I test it with your config and reboot a few times.
@c-po, it is not possible to reproduce in 1.2.4 or the latest 1.3 rolling.
I also tried deleting system console on a deployed 1.2.3, and it also works without issues; syslog is clean.
As asked on Slack, and to be clear: this issue was never present in any of the 1.2 LTS releases (crux branch), not in 1.2.0, 1.2.1, 1.2.2, 1.2.3, or 1.2.4.