Tue, Jul 24
Just adding a note to confirm that RFC-compliant VRRP does work correctly provided the arp_filter settings are fixed. I am able to forward traffic through the router using its virtual MAC as the next-hop, and firewall policy remains correctly applied.
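For reference, the knobs in question are the per-interface arp_filter sysctls. The thread doesn't say which direction the fix goes, so the values below are only an assumption for illustration, and the interface name eth0 is hypothetical:

```
# Hypothetical sketch: relax arp_filter so the kernel will answer ARP
# for the VRRP virtual address on this interface (eth0 is assumed).
sysctl -w net.ipv4.conf.all.arp_filter=0
sysctl -w net.ipv4.conf.eth0.arp_filter=0
# Inspect the effective value afterwards:
sysctl net.ipv4.conf.eth0.arp_filter
```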
Sun, Jul 22
I have never attempted the non-RFC-compliant implementation but will test it Monday.
Sat, Jul 21
This is with rfc3768-compatibility enabled.
Fri, Jul 20
In case you don't see the updates:
On a side note, the reason I'm digging into this in 1.2 is that 1.1.8 currently has a bug where unicast ARP directed to a VRRP virtual IP is not answered (while ARP sent to broadcast works fine).
Quick update that using the transition-script method does not work (I think because the script isn't run as root).
Moving this to critical because a router that won't respond to ARP is not very useful :-)
Mar 12 2018
A bit more information on this.
Thank you very very much. I will pull down the next nightly and test.
Pretty much agree but don't think we need to worry about supporting a use case that is likely to create other problems until someone actually requests it as a feature.
Mar 9 2018
P.S. This is really starting to get more into the territory of support than bug reporting, have you considered purchasing support?
At first glance it looks like the name servers you are using are not reliable, and the lack of response is because the forwarder is also not getting a response.
Mar 8 2018
It sounds like we can give the upgrade to keepalived 1.3 a try, provided that as step 1 we go back to IPv4-only virtual-address support (as in 1.1) and remove the native_ipv6 statement from the configuration script.
We'll need some more information.
Mar 7 2018
By default the DNS forwarder will cache recent responses. Have you disabled DNS caching on the forwarding service with the following configuration?
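The configuration in question isn't quoted above. Assuming the dnsmasq-based forwarder of VyOS 1.1, disabling the cache would plausibly look like this (a sketch, not confirmed from the thread):

```
configure
set service dns forwarding cache-size 0
commit
```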
Just checking on this. The nightly build for 1.2 has keepalived 1.2.19 and transition support for virtual IPv6 addresses using VRRPv2 appears to be functional at first glance.
It was likely the first scenario that I mentioned, where there was traffic already established before the NAT rule was created. Also note that a reset conntrack is essentially a flush of the conntrack table and can be disruptive for established connections. Alternatively, you could have cleared the conntrack entries for the specific host address only, which is a safer way of doing it in the future.
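At the Linux level, the difference between the two approaches can be sketched with the conntrack(8) tool (the address is an example):

```
# Flush the entire connection-tracking table -- disruptive,
# since state for all established flows is lost:
conntrack -F

# Safer: delete only the entries whose source address is one host:
conntrack -D -s 192.0.2.10
```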
I may have confused you. My reference to only needing a firewall rule in one direction was with respect to making a specific rule to permit return traffic, as opposed to an overly broad one in the case where a stateful firewall didn't exist.
Mar 6 2018
I have verified that this is working on 1.1.8 so there might be a configuration or operation issue that is making you see this behavior (I actually have this working in production at scale using over 14,500 rules across 28 chains).
Mar 5 2018
Mar 3 2018
Mar 2 2018
Just a quick bump as I'm reviewing 1.2 nightly builds for IPv6 support.
For reference this is the standard ruleset I use on our servers for IPv6 (limited to only what is necessary for DHCPv6 and ICMPv6):
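The ruleset itself isn't quoted above. As a hedged sketch, a minimal VyOS 1.1-style IPv6 ruleset limited to ICMPv6 and DHCPv6 client traffic might look like the following (the chain name WAN-IN-6 is hypothetical):

```
set firewall ipv6-name WAN-IN-6 default-action drop
set firewall ipv6-name WAN-IN-6 rule 10 action accept
set firewall ipv6-name WAN-IN-6 rule 10 protocol icmpv6
set firewall ipv6-name WAN-IN-6 rule 20 action accept
set firewall ipv6-name WAN-IN-6 rule 20 protocol udp
set firewall ipv6-name WAN-IN-6 rule 20 source port 547
set firewall ipv6-name WAN-IN-6 rule 20 destination port 546
```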
P.S. @dmbaturin If you can direct me to some instructions on how you would prefer suggested patches be submitted I can re-work to make it easier on you.
Jan 2 2018
Jan 8 2017
With respect to the concerns I mentioned above, I've voted no.
I keep coming back to a sense that dramatic syntax changes are very damaging and disruptive to users. My fear is that we'll be spending years explaining to people that they're looking at old documentation or examples and that they don't have their curly braces in the right place. Or that we'll alienate a segment of our user base that is averse to change.
Jan 5 2017
I haven't voted yet because I haven't decided ... It's a big change.
From a parsing perspective, the only challenge tag nodes present is that you can't easily distinguish "key value" from "key tag" without context. "key" and "key tag value" are fine. With a ":" you get "key: value" vs. "key tag", which removes the ambiguity.
The XORP configuration syntax (which Vyatta initially built upon) solves the parsing issue with the simple introduction of a ":" as a delimiter between keys and values.
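The ambiguity can be sketched with a toy classifier: without a delimiter, a two-token line can only be resolved if the parser already knows which keys are tag nodes, while the colon syntax needs no such context. The function names and examples below are purely illustrative:

```python
# Toy illustration of the "key value" vs "key tag" ambiguity.

def classify_plain(tokens, tag_nodes):
    """Classify a tokenized line in the delimiter-less syntax.
    Two tokens are ambiguous without external context (tag_nodes)."""
    if len(tokens) == 3:
        return "key tag value"
    if len(tokens) == 2:
        # Ambiguous: is tokens[1] a value or a tag name?
        return "key tag" if tokens[0] in tag_nodes else "key value"
    return "key"

def classify_colon(line):
    """Classify a line in a XORP-style syntax where ':' separates
    keys from values, so no external context is required."""
    if ":" in line:
        return "key: value"
    return "key tag" if len(line.split()) == 2 else "key"

# The plain syntax must be told that "interface" is a tag node:
print(classify_plain(["interface", "eth0"], tag_nodes={"interface"}))  # key tag
print(classify_plain(["mtu", "1500"], tag_nodes={"interface"}))        # key value
# The colon syntax resolves the same lines with no context:
print(classify_colon("mtu: 1500"))       # key: value
print(classify_colon("interface eth0"))  # key tag
```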
Sep 23 2016
It looks interesting, and I think QoS is a good application of nDPI. I'm a little nervous about the performance and stability implications. Not having looked into it much: is it implemented as a module that could be disabled if needed?
Sep 19 2016
@hmkias Patch Squid for what?
I'll make a move here and suggest that, until FOSS projects implementing DPDK support see more maturity, VyOS doesn't go down that rabbit hole for now. A side project, maybe "HP-VyOS" (for High-Performance VyOS), could take on trying to build a version of VyOS that can leverage experimental code like DPDK or VPP.
In theory, you could have the web filter be a pair of servers using VRRP.
Sep 16 2016
@mickvav I think you're misunderstanding the benefit of DPDK. It's essentially fastpath for Intel-based platforms and, if implemented correctly, can be the difference between 10 Gbps and 100 Gbps on the same hardware. Obviously being able to scale VyOS to that level would be game-changing. It's important, just likely not in scope for VyOS at this time ...
@EwaldvanGeffen have you given the method I described a try on VyOS? I know it works on EdgeOS and pre-6.4 releases of Vyatta, and honestly I haven't tested it on VyOS because it's not something I have a need for... so it very well could work differently or be broken on VyOS, but that would be surprising.
I've added a quick note in the SNAT section of the Wiki to explain this. Feel free to edit if it seems unclear or could be worded better.
@mickvav I think when people ask "does it support DPDK" it's because they've read that using DPDK will allow forwarding, and possibly filtering and NATing, of traffic at 10 Gbps+ rates. VyOS offering some DPDK stuff and saying "mission accomplished" would leave a bad taste in people's mouths, the same way CloudRouter is claiming DPDK support when it's only for bridged traffic.
Sep 15 2016
"DPDK support" involves a lot of low-level contributions to a lot of different projects. Essentially you need to re-implement major parts of Linux on a case-by-case basis, which is outside the scope of VyOS right now.
You can use policy routing to match HTTP and HTTPS traffic and point it at a next-hop that is an external transparent proxy.
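On VyOS, that approach would plausibly look like a policy route matching TCP ports 80 and 443 and steering matches to a table whose default route points at the proxy. All names, table numbers, and addresses below are hypothetical:

```
set policy route PROXY-REDIRECT rule 10 protocol tcp
set policy route PROXY-REDIRECT rule 10 destination port 80,443
set policy route PROXY-REDIRECT rule 10 set table 100
set protocols static table 100 route 0.0.0.0/0 next-hop 192.0.2.50
set interfaces ethernet eth1 policy route PROXY-REDIRECT
```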
Can we move this to "wontfix"? This is the normal behavior of Linux, and doing any sort of global drop of invalid-state traffic by default is not a realistic change.
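For users who do want to drop invalid-state traffic, it can already be opted into per ruleset rather than changed globally; a sketch in VyOS 1.1-style syntax (the ruleset name WAN-IN is hypothetical):

```
set firewall name WAN-IN rule 5 action drop
set firewall name WAN-IN rule 5 state invalid enable
```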
After VRRPv3 (with some intelligent way to handle radvd) this is the major blocker for using VyOS as a production IPv6 firewall in my environment.