- User Since: May 30 2016, 11:24 PM
Feb 4 2019
My fault for not having had the time to test this as one of the users who needs RFC-compliant VRRP. The use of "+" for interface matching is less than ideal, but if we go that route we should take care to recommend that 802.1Q VLAN sub-interfaces not be mixed with use of the parent (untagged) interface, or traffic matching will not be obvious.
Dec 5 2018
I agree this is becoming increasingly necessary as vendors turn to AWS for hosting and IP addressing for services becomes less static.
Juniper and Cisco use a global configuration of "vxlan port" or "vxlan udp port". A per-interface configuration is more flexible and likely makes sense.
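A per-interface variant might look something like this (hypothetical VyOS-style syntax sketched for discussion; the actual node names would need to be decided):

```shell
# Hypothetical per-interface VXLAN destination-port syntax (illustrative only):
set interfaces vxlan vxlan0 vni 10
set interfaces vxlan vxlan0 remote 203.0.113.2
set interfaces vxlan vxlan0 port 4789    # override the UDP destination port here
```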
Nov 30 2018
Oct 23 2018
The functionality is fixed in 1.2-rcX for ARP, I haven't verified other services such as DNS.
Oct 5 2018
May be duplicate of T483
Sep 26 2018
1.2 rolling has ISC dhcrelay 4.3.1 from the Debian isc-dhcp-relay 4.3.1-6+deb8u3 package.
Sep 25 2018
Will this make it into 1.2 before the freeze?
Quick note that the work-around above breaks the local DNS resolver if pointing to a virtual IP. Still keeping an eye out for other issues.
I do agree that there should be a pre-canned way to do this.
Sep 24 2018
So I'm not sure this is a bug so much as a feature request. You CAN in fact accomplish what you're trying to do in VyOS 1.2, albeit manually, using a transition script.
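For anyone following along, the manual approach boils down to attaching a notify (transition) script to the VRRP group. A minimal sketch of such a script, with placeholder actions rather than the actual workaround:

```shell
#!/bin/sh
# Illustrative VRRP transition script for keepalived (a sketch, not the exact
# script referenced above). keepalived invokes it as: <script> TYPE NAME STATE
TYPE=$1     # "GROUP" or "INSTANCE"
NAME=$2     # VRRP group/instance name
STATE=$3    # "MASTER", "BACKUP", or "FAULT"

case "$STATE" in
    MASTER)
        # actions to run when this router takes over the virtual IP
        ;;
    BACKUP|FAULT)
        # actions to run when this router gives the virtual IP up
        ;;
esac
```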
It looks like the commit has fixed this issue.
Expanding on this more, I've updated the fix above to suggest a workaround of bridge mode for macvlan interfaces.
Sep 23 2018
In the vyos-build repository /data/live-build-config/hooks/08-sysconf.chroot needs to be updated to remove:
This will need to be implemented using transition scripts in keepalived to enable and disable radvd for the prefix.
Sep 22 2018
So I took a step back and started wondering why we have /proc/sys/net/ipv4/conf/default/arp_filter set to 1 to begin with.
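For anyone wanting to inspect or experiment with this themselves, the relevant knobs can be read and changed from a root shell (interface name here is just an example):

```shell
# Check the current defaults:
cat /proc/sys/net/ipv4/conf/default/arp_filter
cat /proc/sys/net/ipv4/conf/all/arp_filter

# Relax it on one interface for RFC-compliant VRRP testing:
sysctl -w net.ipv4.conf.eth0.arp_filter=0
```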
Sep 19 2018
Can you provide the output of tcpdump -eni <sub-interface> 'arp' (e.g. eth1.2001) from a root shell on VyOS when this is happening? I would like to see the capture with MAC addresses included for the specific sub-interface involved (text rather than screenshot please).
Just to add a quick note:
I haven't tested this to verify but some initial thoughts:
Jul 24 2018
Just adding a note to confirm that RFC-compliant VRRP does work correctly provided the arp_filter settings are fixed. I am able to forward traffic through the router using its virtual MAC as the next-hop, and firewall policy remains correctly applied.
Jul 22 2018
I have never attempted the non-RFC-compliant implementation but will test it Monday.
Jul 21 2018
This is for rfc3768-compatibility enabled
Jul 20 2018
In case you don't see the updates:
On a side note, the reason I'm digging into this in 1.2 is that 1.1.8 currently has a bug where unicast ARP directed to a VRRP virtual IP gets no response (while ARP sent to broadcast works fine).
Quick update that using the transition-script method does not work (I think because the script isn't run as root).
Moving this to critical because a router that won't respond to ARP is not very useful :-)
Mar 13 2018
Opened a feature request with ISC:
Mar 12 2018
A bit more information on this.
Thank you very very much. I will pull down the next nightly and test.
Pretty much agree, but I don't think we need to worry about supporting a use case that is likely to create other problems until someone actually requests it as a feature.
Mar 9 2018
P.S. This is really starting to get more into the territory of support than bug reporting, have you considered purchasing support?
At first glance it looks like the name servers you are using are not reliable, and the lack of response is because the forwarder is also not getting a response.
Mar 8 2018
It sounds like we can give the upgrade to keepalived 1.3 a try, provided that as step 1 we go back to IPv4-only virtual-address support (as in 1.1) and remove the native_ipv6 statement from the configuration script.
We'll need some more information.
Mar 7 2018
By default the DNS forwarder will cache recent responses. Have you disabled DNS caching on the forwarding service with the following configuration?
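For reference, disabling the forwarder cache looks roughly like this (node names from memory, so verify against your version before relying on it):

```shell
# Disable caching on the DNS forwarder by setting the cache size to zero
# (illustrative VyOS configuration-mode commands):
configure
set service dns forwarding cache-size 0
commit
```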
Just checking on this. The nightly build for 1.2 has keepalived 1.2.19 and transition support for virtual IPv6 addresses using VRRPv2 appears to be functional at first glance.
It was likely the first scenario I mentioned, where traffic was already established before the NAT rule was created. Also note that reset conntrack is essentially a flush of the conntrack table and can be disruptive to established connections. A safer approach in the future would be to clear conntrack entries for the specific host address only.
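The targeted variant can be done with the conntrack tool from a root shell (example address; substitute the host in question):

```shell
# Delete conntrack entries for a single source address instead of
# flushing the whole table:
conntrack -D -s 192.0.2.10
# ...or matching on destination:
conntrack -D -d 192.0.2.10
```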
I may have confused you. My reference to only needing a firewall rule in one direction was with respect to making a specific rule to permit return traffic, as opposed to an overly broad one in the case where a stateful firewall didn't exist.
Mar 6 2018
I have verified that this is working on 1.1.8 so there might be a configuration or operation issue that is making you see this behavior (I actually have this working in production at scale using over 14,500 rules across 28 chains).
Mar 5 2018
Mar 3 2018
Mar 2 2018
Just a quick bump as I'm reviewing 1.2 nightly builds for IPv6 support.
For reference this is the standard ruleset I use on our servers for IPv6 (limited to only what is necessary for DHCPv6 and ICMPv6):
P.S. @dmbaturin If you can direct me to some instructions on how you would prefer suggested patches be submitted I can re-work to make it easier on you.
Jan 2 2018
Jan 8 2017
With respect to the concerns I mentioned above, I've voted no.
I keep coming back to a sense that dramatic syntax changes are very damaging and disruptive to users. My fear is that we'll be spending years explaining to people that they're looking at old documentation or examples and that they don't have their curly braces in the right place. Or that we'll alienate a segment of our user base that is averse to change.
Jan 5 2017
I haven't voted yet because I haven't decided ... It's a big change.
From a parsing perspective, the only challenge tag nodes present is that you can't easily distinguish between "key value" and "key tag" without context. "key" and "key tag value" are fine. Using a ":" you get "key: value" vs "key tag", which removes the ambiguity.
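To illustrate the ambiguity with made-up node names:

```
# Ambiguous without context -- is "eth0" a leaf value or a tag?
interface eth0

# With the XORP-style delimiter the distinction is purely syntactic:
description: "uplink"      # key: value  (leaf node)
interface eth0             # key tag     (tag node)
```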
The XORP configuration syntax (which Vyatta initially built upon) solves the parsing issue with the simple introduction of a ":" as a delimiter between keys and values.
Sep 23 2016
It looks interesting and I think QoS is a good application of nDPI. I'm a little nervous about what the performance and stability implications are. Not having looked into it much is it implemented as a module that could be disabled if needed?
Sep 19 2016
@hmkias Patch Squid for what?
I'll make a move here and suggest that, until the FOSS projects implementing DPDK support see more maturity, VyOS not go down that rabbit hole for now; instead a side project, maybe "HP-VyOS" (for High-Performance VyOS), could take on trying to build a version of VyOS that can leverage experimental code like DPDK or VPP.
In theory, you could have the web filter be a pair of servers using VRRP.
Sep 16 2016
@mickvav I think you're misunderstanding the benefit of DPDK. It's essentially fastpath for Intel-based platforms and, if implemented correctly, can be the difference between 10 Gbps and 100 Gbps on the same hardware. Obviously being able to scale VyOS to that level would be game-changing. It's important, just likely not in scope for VyOS at this time ...
@EwaldvanGeffen have you given the method I described a try on VyOS? I know it works on EdgeOS and pre-6.4 releases of Vyatta, and honestly I haven't tested it on VyOS because it's not something I have a need for... so it could very well work differently or be broken on VyOS, but that would be surprising.
I've added a quick note in the SNAT section of the Wiki to explain this. Feel free to edit if it seems unclear or could be worded better.
@mickvav I think when people ask "does it support DPDK" it's because they've read that using DPDK will allow forwarding and possibly filtering and NATing of traffic at 10 Gbps+ rates. VyOS offering some DPDK stuff and saying "mission accomplished" would leave a bad taste in people's mouths, the same way CloudRouter is claiming DPDK support when it's only for bridged traffic.
Sep 15 2016
"DPDK support" involves a lot of low-level contributions across many different projects. Essentially you need to re-implement major parts of Linux on a case-by-case basis, which is outside the scope of VyOS right now.
You can use policy routing to match HTTP and HTTPS traffic and point it at a next-hop that is an external transparent proxy.
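A sketch of what that policy routing looks like in VyOS-style syntax (rule numbers, table ID, and the proxy address are placeholders; verify node names on your version):

```shell
# Match HTTP/HTTPS and steer it to a routing table whose default route
# points at the external transparent proxy (illustrative only):
set policy route PROXY-REDIRECT rule 10 protocol tcp
set policy route PROXY-REDIRECT rule 10 destination port 80,443
set policy route PROXY-REDIRECT rule 10 set table 100
set protocols static table 100 route 0.0.0.0/0 next-hop 198.51.100.10
set interfaces ethernet eth1 policy route PROXY-REDIRECT
```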
Can we move this to "wontfix"? This is the normal behavior of Linux, and a global default drop of invalid-state traffic is not a realistic change.
After VRRPv3 (with some intelligent way to handle radvd) this is the major blocker for using VyOS as a production IPv6 firewall in my environment.