- User Since
- Sep 15 2018, 11:31 PM (57 w, 1 d)
Tue, Oct 15
Okay, after working with this for a while, I believe the whole `vyatta-webproxy` package should be a candidate for deletion in equuleus (see T1732).
Mon, Oct 14
To be fair, that’s what prompted this. The logs go to a different file already.
This PR should address those concerns
Tue, Oct 1
This is going to become more and more of a problem as WireGuard adoption continues. Most major WireGuard VPN services provide an FQDN as their endpoint, not an IP:
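As an illustration, a typical wg-quick-style peer stanza with an FQDN endpoint might look like the sketch below; the hostname, port, and AllowedIPs are placeholders, not values from this task:

```
[Peer]
PublicKey = <peer-public-key>
# FQDN endpoint: resolved once when the interface comes up,
# so DNS has to be working at that point
Endpoint = vpn.example.com:51820
AllowedIPs = 10.0.0.0/24
```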
This should be reverted, as the change is breaking. I found problems due to things like static routes now being applied before WireGuard. The WireGuard tunnel comes up, but in some cases routing that should be going over the tunnel does not work.
Mon, Sep 30
Yep. Changing the priority fixes the issue completely
@runar This isn't a routing issue though.
Changing when the tunnel comes up isn’t an option? For whatever reason the tunnel comes up before DNS resolution works. Using a hostname when the system is running works perfectly
Sun, Sep 29
A guess? WireGuard coming up before vyos-hostsd?
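If that ordering is the cause, one generic way to express the dependency would be a systemd drop-in, sketched here under the assumption that the tunnel is started by a wg-quick-style unit; the path and unit names are illustrative, not VyOS's actual ones:

```
# /etc/systemd/system/wg-quick@.service.d/10-after-hostsd.conf (hypothetical)
[Unit]
# Wait for the VyOS host daemon (and working DNS) before bringing the tunnel up
After=vyos-hostsd.service network-online.target
Wants=network-online.target
```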
Tue, Sep 24
Can confirm. All my routing tables now have 0.0.0.0/0, no matter what the device is:
Mon, Sep 23
At this point I've moved all my ASAs to VyOS, and all my tunnels to Wireguard. Unfortunately I cannot test this setup anymore.
Sep 20 2019
PR132 fixes this problem
Sep 19 2019
Sep 18 2019
Sep 16 2019
There are a number of strange things going on here, and I suspect there are multiple bugs:
Sep 9 2019
Sep 6 2019
Sep 3 2019
I took a look, but was unable to figure out how to finagle VyOS to fix it.
Jul 25 2019
Attached are the pcap and debug logs from a simple two-host setup as outlined above, with the "master" distributing the route.
Mar 17 2019
Yep. Can confirm issue is fixed with the latest hot fix.
Feb 13 2019
Strange. I’ve seen that error a lot. Every time it’s been when I’ve forgotten to checkout current after cloning the repo.
@Merijn make sure you `git checkout current` on everything.
Feb 6 2019
Can you describe the system and disks involved?
Yeah, it wasn't really a workable solution for me either and I too had to roll back. But it would be good to confirm that is the problem.
@lbv2rus There might actually be a few problems here. We may have worked out that it's the interface-route with the custom routing table that's causing the problem.
I’ll add that I think this is happening because of the .252 and .254 addresses. The .254 address connects, but the .252 address is stuck in a constant state of connecting.
Feb 1 2019
There might actually be a deeper problem here, somewhat conditional on static interface routing. On a broken system, the log does say something about staticd starting.
Jan 31 2019
And more info:
I tracked down what is causing this.
Jan 30 2019
To add, the routes are present in FRR.
Jan 29 2019
Jan 27 2019
Jan 26 2019
This shows up in the logs:
Unfortunately that seems to have made the problem worse. Before, at least each host was seeing one other host. Now most of them see no hosts.
Sure. I'll set a reminder to check it out tomorrow when I have a free minute. Thanks
Jan 25 2019
Sorry. Spent the week restoring almost half a petabyte of data from backups due to a ZFS crash.
Jan 22 2019
Yeah. I remove the initial vyos user and add an admin and an ansible user. The admin is just for consistency across different devices.
@hagbard I remove/change the vyos user too. So it's definitely a breaking change.
Jan 21 2019
The latest rolling did seem to correct the base problem: cron scripts running and breaking the ability to edit the config afterwards.
@hagbard Note that a reboot does fix the ability to edit configuration again until the next time the cron script runs.
Jan 20 2019
Okay, spent the whole day messing with this and I've tracked it down so it's reproducible.
Jan 14 2019
Superseded by T1178
Seems to be a problem with just that build. I'll install a newer rolling when I get a chance and see if that corrects it.
Edit... actually I can't update anything:
Jan 13 2019
I was mistaken. Seems to have lost the route again.
As of (at least) VyOS 1.2.0-rolling+201901090739 I believe this problem to be corrected.
Jan 9 2019
@Line2 Thanks for the update.
Jan 7 2019
With VyOS as the edge:
Jan 6 2019
@syncer Just to confirm, the above pull request integrates this and can be merged. I'm not sure if there's a status change I should make to this or just leave it be. Thanks
Jan 4 2019
This is probably a duplicate of T1120, which should be fixed now.
Jan 2 2019
Unfortunately still a problem in the EPA2 release. The 30-minute mark hits, and I stop getting a default route on my entire OSPF infrastructure.
Dec 28 2018
I can confirm that this is broken everywhere "get_response" is used where the default should be "no". GPG signatures are ignored, disks are deleted, etc. I'll work on a sane replacement.
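For illustration only, a minimal sketch of what a safer prompt helper could look like, defaulting destructive questions to "no"; the name `ask_yes_no` and its signature are hypothetical, not the actual VyOS `get_response` API:

```python
def ask_yes_no(question: str, default: bool = False, reader=input) -> bool:
    """Ask a yes/no question; empty or unrecognized input falls back to the default.

    `default=False` means a bare Enter answers "no", which is the safe choice
    for destructive actions like wiping a disk or skipping signature checks.
    """
    suffix = " [Y/n] " if default else " [y/N] "
    answer = reader(question + suffix).strip().lower()
    if answer in ("y", "yes"):
        return True
    if answer in ("n", "no"):
        return False
    return default
```

The `reader` parameter exists purely so the helper can be exercised without a TTY.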
Dec 22 2018
Completed. PR: https://github.com/vyos/vyatta-cfg-system/pull/95
Dec 21 2018
Dec 20 2018
I've been using my branch for a few daily upgrades now, and it has worked flawlessly, minus one thing.
@danhusan I'd be curious if an alternative of adding rootdelay=10 instead of the acpi=off works. That may or may not do anything depending on how the kernel is built though.
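For reference, `rootdelay=10` goes on the kernel command line; on a stock Debian-style GRUB setup that would look roughly like the following (the file location and existing flags are assumptions, and VyOS's image boot config may differ):

```
# /etc/default/grub -- then regenerate grub.cfg, e.g. with update-grub
# rootdelay=10 gives slow disks 10 extra seconds to appear before
# the kernel tries to mount the root filesystem
GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS0 rootdelay=10"
```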
Yeah, I'm not familiar enough with things to understand why it would be trying to mount the bare RAID partitions, not to mention the actual bare drives.
While I'm not sure if it's related, it looks like your system has a buggy ACPI implementation. Sometimes that can cause some weird behaviour.
I’ve been trying to research this a little, and I can’t duplicate it. But I suspect that’s because I have fast disks. Your first output says it’s going to take over two hours for a resync.
I pulled down vyos-frr submodule and the above-mentioned commit is present.
Dec 18 2018
Maybe this isn't the same issue? Still a problem in RC11 unfortunately.
Here's a branch that makes this work on an upgrade. For now, it wouldn't cover the initial install, but only subsequent upgrades. This covers a MAJOR pain point for me where I lose my bash history on an upgrade.
Dec 14 2018
I’ve been using ospfv3 on all the RCs and definitely haven’t seen this behavior. OSPFv3 seems immune to the problem I’m seeing in T1020
Just to confirm that RC10 made this work for me too
Dec 12 2018
Would it make sense to create this as a separate partition during installation, instead of trying to preserve it? Given my recent work on the EFI stuff, I've got an idea where this might happen.
Dec 11 2018
Dec 7 2018
@c-po Thanks. Between the legit problem this task revealed early-on, some bad timing with the vyatta-config-migrate from earlier today, and a PEBKAC error, it looks like this might be resolved now.
This seems to be working now. It's working both on my normal build tree and a brand-new one, so it looks like I was just attempting the build before some cache somewhere had updated.
I was building yesterday fine except for the above-mentioned vyatta-cfg-firewall problem.
Now a simple build is failing with:
Dec 6 2018
I just tested it on my 8.11 VM and it fails to build there as well for the reason you mentioned (the lack of initrd)
@max1e6 Is your build directory on NFS? I had that problem when my build directory was being stored on NFS