You can download it to your router and then just do a dpkg -i wireguard....deb.
Dec 3 2018
Nov 6 2018
Oct 27 2018
Thanks hagbard!
I was literally pulling my hair out over the error, because I have seen WireGuard work in all its glory.
If I want to apply the patch on my own, are there any resources I should use, or is it simply a dpkg install?
Oct 26 2018
I'll remove the ip-host validator from the wireguard tree; it causes a few issues if the network address is picked as the host address, e.g. 10.1.1.2/31
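To illustrate why a naive ip-host validator trips over /31s: in a /31 both addresses are usable hosts, yet the first one equals the network address. A small sketch (the function name is illustrative, not from the VyOS code):

```python
import ipaddress

def looks_like_network_name(addr_with_prefix: str) -> bool:
    """True if the address equals its network address -- the case a
    naive ip-host validator would reject, even though it is a valid
    host in a /31 point-to-point subnet."""
    iface = ipaddress.ip_interface(addr_with_prefix)
    return iface.ip == iface.network.network_address

print(looks_like_network_name('10.1.1.2/31'))  # True: valid host, rejected anyway
print(looks_like_network_name('10.1.1.3/31'))  # False
```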
This is still not added in rc3, and rc4 gives the same error.
Oct 16 2018
should be in rc3
Was this by any chance merged into RC3, or will it first arrive in RC4?
Oct 15 2018
Oct 13 2018
Oct 12 2018
I had already created a task for a new syntax and linked it as a related task, would you like me to create a new separate one?
@hagbard From the description I'm not sure we'll ever need it, so I propose doing it globally;
we can enable it later if we need to.
Should it be disabled globally, or just not loaded via config?
do you mean like:
set protocols static table 10 route 192.168.0.0/24 next-hop 192.168.0.1 interface ethx
If so could you please create a new task for creating new syntax?
The root cause was in the script trying to call "brctl setpathcost $bridge $port 0", but newer bridge-utils version removed that (there's no cost of zero in STP really, so it would be a rather bad way to set the default anyway).
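A hedged sketch of a fix along those lines: since newer bridge-utils rejects a cost of zero, only call setpathcost for a non-zero cost and otherwise leave the kernel default. The function name is illustrative, and the command is echoed rather than executed so the guard logic can be shown standalone:

```shell
#!/bin/sh
# Sketch: skip the brctl call entirely when the requested cost is 0,
# since STP has no real zero cost and 0 just means "keep the default".
set_path_cost() {
    bridge="$1"; port="$2"; cost="$3"
    if [ "$cost" -gt 0 ]; then
        # a real script would execute this instead of echoing it
        echo "brctl setpathcost $bridge $port $cost"
    fi
}

set_path_cost br0 eth0 0    # no output: default cost kept
set_path_cost br0 eth0 100  # prints: brctl setpathcost br0 eth0 100
```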
Oct 11 2018
Okay, the problem is not DMVPN-related. I have a weird OSPF routing problem. When disabling OSPF between both routers, everything is OK.
You're probably right. I thought I had a potential conflict if it started first, but now I can't think of what it was. Maybe there wasn't a conflict to begin with.
Being able to apply both an interface and next hop to the same route would still be extremely useful though, but could be slated for later I guess.
There is still the question of why FRR can't figure out what to do with separate interface routes in the same table.
Don't think we need to modify anything
vrrp must start before frr starts
@UnicronNL
Yes. It's a double issue actually. Hopefully this explains it better:
- The next-hop based routes in the alternate routing tables seem to be unaware of interface routes in the same table.
- VyOS command syntax cannot currently specify both a next-hop and interface for the same static route, despite FRR being able to do so.
- FRR will attempt to add an interface to a next-hop route (based on which interface has a subnet that includes the next hop) automatically, but this information is not preserved in the VyOS config file.
- Since FRR starts prior to VRRP (keepalived), interfaces with ONLY 'virtual' addresses will not receive FRR's automatic interface detection, because they do not have a subnet when the route is created. This renders the routes unreachable, and FRR does not refresh their status.
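On the FRR side, staticd does accept a gateway and an interface on the same static route; a sketch with illustrative addresses:

```
! FRR (vtysh) configuration sketch -- addresses are illustrative
ip route 192.168.0.0/24 192.168.0.1 eth0
```

This is the combination the second bullet says VyOS command syntax cannot currently express.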
@Watcher7 I do not really understand what you mean, can you share your configs or a way to reproduce and elaborate a bit more?
Further examination of this issue indicates that some of the behavior may be caused by FRR starting prior to VRRP on boot, and backup VRRP routers with VIP only interfaces.
Oct 10 2018
vyos-documentation is still a playground, but feel free to add to it. It always makes sense to play around with new concepts.
Here is the pastebin, since I'm still not allowed to post on the wiki:
https://pastebin.com/sZcJLyeB
@runar Yeah, but it doesn't know about the reboot scheduled via at when you call commit-confirm. I'll have a look today; it's the last missing piece. I already patched the vyatta portion yesterday. I will leave a comment in the code with the task number; once these vyatta scripts are supposed to go away, it's easy to remove the code.
I was about to say we should not cross-call scripts; each script should be able to run on its own with no dependency on other op-mode/conf-mode scripts.
Turns out the script gets called with the following argv:
['/usr/libexec/vyos/op_mode/show_dhcp.py', '--statistics', '--pool', 'statistics']
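That argv can be reproduced with a minimal sketch (assuming show_dhcp.py parses its options with argparse; the exact option definitions are an assumption): when `--pool` takes a value, the trailing `statistics` gets consumed as the pool name rather than treated as a subcommand.

```python
import argparse

# Assumed minimal parser shaped like show_dhcp.py's options
parser = argparse.ArgumentParser()
parser.add_argument('--statistics', action='store_true')
parser.add_argument('--pool')  # takes a value

# same argv as observed above (minus the script path)
args = parser.parse_args(['--statistics', '--pool', 'statistics'])
print(args.statistics, args.pool)  # True statistics
```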
Working with only static and OSPF + static route.
Looks like this is working now on my two routers using this feature - good work!
@hagbard, the powerctrl.py script already has everything needed: --check to check for a scheduled reboot. :)
Oct 9 2018
So far only the show command is not working, the at job removal is working correctly.
@runar Not your fault, buddy, it was never supposed to work like that, at least from what I see in the scripts. The actual broken thing is that after a reboot the at job wasn't removed; everything else works as expected. The show reboot would be a feature request. I may implement it as well, but I'm not sure if we should leave it with atd; I would rather see it in powerctrl.py.
Ok, I can confirm that the above config from @runar works for this setup using all VyOS 1.2.0-rc images.
That function is more broken than I thought. I have the fix for acting correctly when it reboots; however, I found that it is supposed to accept a time after commit-confirm as well, not only 'y'. Also, if commit is called after commit-confirm, it does not remove the at reboot job. Looking into that next.
Hmm.. I think some things are missing here... the "reboot" and "poweroff" commands are using the new /usr/libexec/vyos/op_mode/powerctrl.py script to schedule reboots, but "show reboot" and "show poweroff"
Okay, the above error
@aopdal Please check out the rc2 (https://downloads.vyos.io/?dir=testing/1.2.0-rc2), should be fixed now.
@dmbaturin You want to implement it or shall I?
It creates a /var/run/confirm.job file which contains the atd ID. So I was more thinking I'd extend the at command they have in their script. It creates a rollback of the config, applies it, then I would just execute the job deletion and reboot. I didn't find any other atd reference, but I'm not sure if there is one hidden somewhere.
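A sketch of the cleanup step described here, assuming /var/run/confirm.job holds just the numeric atd job ID (function name is illustrative, and atrm is echoed rather than executed):

```shell
#!/bin/sh
# Sketch: on a plain `commit` after commit-confirm, remove the
# scheduled at job and the marker file that records its ID.
remove_confirm_job() {
    job_file="${1:-/var/run/confirm.job}"
    if [ -f "$job_file" ]; then
        job_id=$(cat "$job_file")
        echo "atrm $job_id"        # a real script would run: atrm "$job_id"
        rm -f "$job_file"
    fi
}

# demonstrate against a temp file instead of the real /var/run path
tmp=$(mktemp); echo 17 > "$tmp"
remove_confirm_job "$tmp"          # prints: atrm 17
```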
According to https://www.icann.org/dns-resolvers-checking-current-trust-anchors , we should be fine indeed:
I suppose we can simply delete all those jobs at boot time. Since nothing else uses atd, and old jobs likely should not survive reboots, I think it's the simplest solution.
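The boot-time cleanup proposed above could look roughly like this; the parsing is demonstrated against a captured sample of `atq`-style output (the sample lines and function name are illustrative) so the logic runs without a live atd:

```shell
#!/bin/sh
# Sketch: drop every pending atd job at boot. In atq output the
# first column is the job ID.
purge_at_jobs() {
    # $1: atq-style listing; emit one atrm command per job
    printf '%s\n' "$1" | awk 'NF { print "atrm " $1 }'
    # a real boot script would run atrm directly instead of printing
}

sample='1 Wed Oct 10 04:00:00 2018 a root
5 Wed Oct 10 06:30:00 2018 a root'
purge_at_jobs "$sample"
```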
Killing OpenNHRP as suggested by @runar and relaunching it with: