cpo@BR1# set interfaces ethernet eth1 pppoe 0 ipv6 address autoconf
[edit]
cpo@BR1# commit
[ interfaces ethernet eth1 pppoe 0 ipv6 address autoconf ]
cp: cannot create regular file ‘/etc/ppp/ipv6-up.d/50-vyos-pppoe0-autoconf’: No such file or directory
sed: can't read /etc/ppp/ipv6-up.d/50-vyos-pppoe0-autoconf: No such file or directory
chmod: cannot access ‘/etc/ppp/ipv6-up.d/50-vyos-pppoe0-autoconf’: No such file or directory
Warning: IPv6 forwarding is currently enabled. IPv6 address auto-configuration
will not be performed unless IPv6 forwarding is disabled.
All Stories
Feb 10 2019
@fromport how to reproduce? Is this possible with VMware ESXi? If not, which virtualisation tool should be used for testing?
Your given example can thus be enabled via set service dns forwarding domain microsoft.com server 127.0.0.1
Okay, a wildcard as in * does not work and is not supported by our underlying pdns-recursor.
configure -h is a bit "cryptic" (imo)
It doesn't show the defaults, so I ran a regular configure and then edited the build/build-config.json
I have removed the resolving of IP addresses into DNS names. You can also try compiling with the --enable-ipv6 flag, which uses different functions and may avoid this issue.
Can you please provide a little more information about your system? It is very strange that the gethostbyaddr() function crashes. Have you tried updating your system to the latest version?
@oliveriandrea can you please retest with latest rolling release if it already works?
Implemented in latest rolling and backported to Crux branch for 1.2.1
An easier solution is to wrap the text in single quotes, like use-web='this is your IP'
After some digging this is what I found out with VyOS 1.2.0-epa3:
You rock Geoff!
Hey! Sorry I was out for a bit. But I'm back now. Time to catch up.
Feb 9 2019
looks to me like a classic buffer overflow in the Zabbix agent.
no ideas?
- vyatta-webgui removed from vyos-world (https://github.com/vyos/vyos-world/commit/dc9588ad4b49cc8f122075a2b6fe748e2f31af9c)
- vyatta-webgui removed from vyos-build submodules (https://github.com/vyos/vyos-build/commit/730f30c45fb0c1e5f5cb7576c54798941980a9d1)
Hi adestis, what you describe is possible today with the help of a shell script and crontab. If you are interested, I could help you create a script that does this for you. One drawback is that the failover time is in the ballpark of minutes, and the routes are not present in the configuration. Also, cron fills the log with messages every time it executes.
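The shell-script-plus-cron approach described above could be sketched roughly as follows. This is a minimal illustration only: the gateway and probe addresses are made-up placeholders (RFC 5737 documentation ranges), not values from this thread, and the actual route change is left commented out.

```shell
#!/bin/sh
# Hypothetical failover sketch: probe a primary-path address from cron;
# if it is unreachable, fall back to a backup gateway.

PRIMARY_GW="192.0.2.1"    # assumed primary next-hop (placeholder)
BACKUP_GW="198.51.100.1"  # assumed backup next-hop (placeholder)
PROBE="192.0.2.100"       # assumed address reachable only via the primary path

# Pick a gateway based on the probe state ("up" or "down").
choose_gateway() {
    if [ "$1" = "up" ]; then
        echo "$PRIMARY_GW"
    else
        echo "$BACKUP_GW"
    fi
}

if ping -c 1 -W 1 "$PROBE" >/dev/null 2>&1; then
    state=up
else
    state=down
fi

gw=$(choose_gateway "$state")
# Swapping the kernel route directly is why the route is "not present in the
# configuration", as noted above:
# ip route replace default via "$gw"
echo "selected gateway: $gw"
```

Run from crontab every minute or so, which is also why the failover time ends up in the ballpark of minutes.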
Feb 8 2019
All right, let me know if you need help.
Handled in/with T484, hopefully
currently the command run under the hood is
cat $(printf "%s\n" /var/log/messages* | sort -nr) | grep -e pluto -e xl2tpd -e pptpd -e ppp
Please retest with a new rolling release tomorrow
This bug can't be reproduced in 1.2.0-LTS or 1.2.0-rolling+201902080337, so it seems it was fixed in one of the previous releases. Closing ticket.
Feel free to reopen it if the same behavior is spotted in one of the current releases.
@zsdc i meant test with 1.2.0 :-)
@danhusan, you can send the configuration to [email protected] with the subject "Phabricator T1148". Also, please check whether the remote side of the BGP peering runs in active or passive mode.
We are seeing this issue mostly on BGP routers with Internet Exchange connections because at a reboot we are hitting max-prefix limits with a lot of peers.
At this moment it is not possible to upgrade to the latest 1.2.0; we are still running 1.1.8.
Patch does not apply cleanly; it needs to be backported, but I will do that.
Will try to reinstall the bare-metal router since it is the most inconsistent of the two routers. The virtual one works with other peers.
Small config, just 4 interfaces with IPv4 and IPv6 + some BGP config. I am running the VyOS instance in ESXi with some fairly modern hardware.
Unfortunately I cannot just reboot this device at will. If you provide your email I can send over the config.
Yes
retest
@c-po you can proceed with the removal
Please retest with the latest rolling; if the problem still persists, reopen.
can't reproduce
please try the latest rolling and, if the problem still persists, create a new ticket with the full config and environment spec
no retest was done, rejecting