Allow bonding non-ethernet interfaces
Open · Wishlist · Public · FEATURE REQUEST


Basically, the idea is to allow doing this:

set interfaces openvpn vtun0 bond-group bond0

Use case: bonding multiple OpenVPN TAP tunnels over different WANs to aggregate the bandwidth and get faster internet access when MLPPP is not possible.

This is already possible in ZeroShell ("VPN bonding") and most likely in pfSense too (haven't tested it but I can bond ovpncX interfaces in a LAGG).

Under the hood, something like this would have to be done:
ip link set dev vtun0 master bond0
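Manually, the equivalent of bonding two TAP tunnels can be sketched with iproute2 (a hedged sketch, not the proposed VyOS implementation; it assumes vtun0 and vtun1 already exist as TAP interfaces, the bonding module is available, and you have root):

```shell
# Create a bond in round-robin mode
ip link add bond0 type bond mode balance-rr

# Interfaces must be down before they can be enslaved
ip link set dev vtun0 down
ip link set dev vtun1 down
ip link set dev vtun0 master bond0
ip link set dev vtun1 master bond0

# Bring the bond up and give it an address (example address only)
ip link set dev bond0 up
ip addr add 10.8.0.1/24 dev bond0
```

Note that balance-rr is the only bonding mode that stripes individual packets of one flow across both links, which is what the bandwidth-aggregation use case needs.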


Difficulty level: Unknown (requires assessment)
Why the issue appeared: Will be filled on close
This request is: Service Request
sebastianm updated the task description.

If you have a lab to try, maybe you can just copy




and check whether the corresponding config mode commands just work. If they do, it will be much easier to write a patch.

There may also be a dependency issue: such a bonding interface can only be brought up AFTER the underlying vtuns, I think, and that ordering should somehow be explicitly specified.
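One way to express that ordering outside the config system (a hypothetical sketch, not how VyOS handles it; the script path is made up) is an OpenVPN --up hook that enslaves the tunnel device only once OpenVPN has created it:

```shell
#!/bin/sh
# /config/scripts/bond-enslave.sh (hypothetical path)
# OpenVPN passes the tunnel device name as the first argument
# (also available in the $dev environment variable).
# Requires "script-security 2" and
# "up /config/scripts/bond-enslave.sh" in the OpenVPN config.
DEV="$1"
ip link set dev "$DEV" down
ip link set dev "$DEV" master bond0
ip link set dev "$DEV" up
```

This sidesteps the boot-order problem because the enslaving only happens after the vtun interface actually exists.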

sebastianm added a comment. Edited Oct 12 2017, 9:39 AM

I've copied it and I can set bond-group on the OpenVPN interface. I'll check if it actually works in a minute. (You need to replace "tunnel" in the second string with "openvpn".)

sebastianm added a comment. Edited Oct 12 2017, 10:01 AM

Looks like it doesn't work. I can't see any traffic on bond0 although it's configured in round-robin mode.

When I commit the openvpn interface, I get the following:
Warning: priority inversion [interfaces openvpn vtun1 bond-group](319) <= [interfaces openvpn vtun1](460)

changing [interfaces openvpn vtun1 bond-group] to (461)

EwaldvanGeffen added a subscriber: EwaldvanGeffen. Edited Oct 12 2017, 9:48 PM

I've tried to attain this holy grail of combining VPNs to gain a faster, more reliable link, although my environment had multiple consumer WAN links with different specs. Yours seem to be more uniform, so you might get away with it more easily.

Bonding tunnels is a bad idea when latency differences arise on your paths. The problem can be exacerbated by running TCP over TCP, which brings its own set of problems (running out of your TCP window). I was able to overcome latency differences by using a lot of sticky-interface glue to ensure a TCP stream would use the same link regardless of which side initiated it, and beyond that you want to make source-destination tuples sticky to keep the internet workable.
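For the source-destination stickiness described above, one alternative to bonding on Linux (a sketch under assumed addresses; gateways and interface names are examples, and the sysctl needs a reasonably recent kernel) is ECMP routing with an L4 flow hash, so each flow sticks to one uplink instead of being striped packet-by-packet:

```shell
# Hash on the 5-tuple (src/dst address and port, protocol)
# so every packet of a given flow takes the same path
sysctl -w net.ipv4.fib_multipath_hash_policy=1

# Two equal-weight default routes, one per tunnel
# (gateway addresses are examples)
ip route replace default \
    nexthop via 10.8.0.1 dev vtun0 weight 1 \
    nexthop via 10.9.0.1 dev vtun1 weight 1
```

This avoids the reordering and TCP-window issues of round-robin bonding, at the cost of capping any single flow at one link's bandwidth.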

I think the ultimate solution is VPN over multi-path TCP.

The only remotely sensible use case I can see is active/standby bonding of L2 VPNs to provide redundant paths. But then again, the real answer to this is distributed switches such as openvswitch.

Well, I'd like to use bonding with round-robin load balancing over two VDSL2 uplinks to the same provider with the same latency (my ISP wants a business account for MLPPP).

Also, it doesn't seem to work because vtun0 is not coming up -- but that seems to be related to my specific config.

I'll check it again and try to get it working.

I tried to get this working on a good known OpenVPN TAP configuration. I can confirm that it's flaky and will require additional debugging.

syncer triaged this task as Wishlist priority. Thu, Dec 21, 9:40 PM