- User Since: Sep 15 2018, 11:31 PM (80 w, 5 d)
Tue, Mar 31
Mon, Mar 30
If this is a duplicate of something, go ahead and close it
PR289 should fix this.
Mon, Mar 23
PR260 should fix this
PR259 fixes this
Sun, Mar 22
Can confirm that this can be closed now.
Sat, Mar 21
Personally this is behavior I agree with.
Thu, Mar 19
Tue, Mar 17
Mon, Mar 16
@hagbard This problem is back on a rolling build from the 14th. The fix again was to run
Wed, Mar 11
PR88 should add this.
Jan 18 2020
Jan 17 2020
I worked out how to reproduce it.
Jan 13 2020
I pushed a fix earlier that might resolve this in UEFI mode. Can you check tomorrow's rolling (or build it yourself today)? If you're interested, I also have a custom-built ISO with the fix in it.
I also discovered during testing that 4K-sector drives fail to boot with EFI. That is also fixed in the above PR.
T1940 should fix this. It would be fairly trivial to add the ability to choose between EFI and BIOS when EFI is present, though this fix should make that unnecessary.
The PR corrects this. Buster forces Secure Boot by default, which we don't support.
@c-po Thanks for the fix.
Jan 12 2020
Jan 8 2020
Confirmed fix with that commit.
Jan 7 2020
It definitely remains in my config:
Jan 4 2020
@hagbard Confirmed your hack takes care of the issue.
Jan 3 2020
I was hoping you have some input here.
My system takes 6 minutes to boot, and it only has two DHCP servers and about 12 interfaces.
I'll wait to test until the reload fix is merged
Definitely confirmed as of yesterday's rolling. Still occurs on three different VMs.
Confirmed fixed in a newer rolling.
Jan 2 2020
Dec 30 2019
@Dmitry Can confirm I was able to upgrade without any errors now. This problem appears to be fixed.
The rolling build from 201912301711 seems to have a different issue. Same config as above.
Will confirm later this afternoon when I am on site and let you know.
Dec 28 2019
I agree this should be killed.
Dec 23 2019
A reboot seems to have fixed it.
Can confirm. Problem seems to be fixed now.
Dec 21 2019
So I guess the key to reproducing this is to:
Maybe just running `sudo systemctl restart radvd.service` is enough to fix it? That seems to be the case.
It seems like something doesn't exist, or permissions aren't set correctly, on a freshly upgraded system until you manually do something to create it?
So the problem went away until I upgraded to the latest rolling. VyOS 1.3-rolling-201912211124
Dec 20 2019
My output there basically matches yours.
And I don't know if it's relevant, but the syslog output is definitely different depending on whether I restart it manually or it gets restarted at boot.
Upgraded to the latest official rolling:
Maybe it's because I have multiple instances? Or because they're on VLANs (not that that should matter)?
Dec 19 2019
After some more testing: after a reboot, a `tcpdump -n -i interface icmp6` on a client machine shows nothing until the radvd service is restarted on the routers.
Dec 18 2019
Dec 11 2019
T1846 fixes this
Dec 10 2019
@hagbard Confirmed fix. Migration worked perfectly.
Dec 9 2019
Related to T1844, which should correct the original problem in this ticket
Dec 6 2019
Trying to apply the fix manually:
Built a fresh rolling. It failed with:
Okay, so this problem just got a LOT more bizarre.
Dec 5 2019
When the config was gone, the processes still appeared to be running.
Dec 4 2019
It actually does work, if only by accident
This should be all of the relevant configs from the ASA side
Nov 21 2019
I guess I'd ask whether we have any complaints about performance issues that perf could pinpoint. I don't know that I've ever seen that kind of complaint in the circles I hang out in.
Nov 18 2019
PR166 should fix this.
Oct 31 2019